Conference Proceedings of the Society for Experimental Mechanics Series
For other titles published in this series, go to www.springer.com/series/8922
Tom Proulx Editor
Optical Measurements, Modeling, and Metrology, Volume 5
Proceedings of the 2011 Annual Conference on Experimental and Applied Mechanics
Editor Tom Proulx Society for Experimental Mechanics, Inc. 7 School Street Bethel, CT 06801-1405 USA
[email protected]
ISSN 2191-5644
e-ISSN 2191-5652
ISBN 978-1-4614-0227-5
e-ISBN 978-1-4614-0228-2
DOI 10.1007/978-1-4614-0228-2
Springer New York Dordrecht Heidelberg London
Library of Congress Control Number: 2011929640

© The Society for Experimental Mechanics, Inc. 2011

All rights reserved. This work may not be translated or copied in whole or in part without the written permission of the publisher (Springer Science+Business Media, LLC, 233 Spring Street, New York, NY 10013, USA), except for brief excerpts in connection with reviews or scholarly analysis. Use in connection with any form of information storage and retrieval, electronic adaptation, computer software, or by similar or dissimilar methodology now known or hereafter developed is forbidden. The use in this publication of trade names, trademarks, service marks, and similar terms, even if they are not identified as such, is not to be taken as an expression of opinion as to whether or not they are subject to proprietary rights.

Printed on acid-free paper

Springer is part of Springer Science+Business Media (www.springer.com)
Preface
Optical Measurements, Modeling, and Metrology represents one of eight volumes of technical papers presented at the Society for Experimental Mechanics Annual Conference & Exposition on Experimental and Applied Mechanics, held in Uncasville, Connecticut, June 13-16, 2011. The full set of proceedings also includes volumes on Dynamic Behavior of Materials; Mechanics of Biological Systems and Materials; Mechanics of Time-Dependent Materials and Processes in Conventional and Multifunctional Materials; MEMS and Nanotechnology; Experimental and Applied Mechanics; Thermomechanics and Infra-Red Imaging; and Engineering Applications of Residual Stress. Each collection presents early findings from experimental and computational investigations on an important area within Experimental Mechanics.

The papers comprising Optical Measurements, Modeling, and Metrology were taken from the general call for papers as well as from sessions organized by: E. Maire, MATEIS-INSA; S. Yoshida, Southeastern Louisiana University; C.A. Sciammarella, Illinois Institute of Technology/Northern Illinois University; and R. Rodriguez-Vera, Centro de Investigaciones en Optica A.C.

Among the topics included in this volume are:

3D Imaging Applied to Experimental Mechanics
Modeling and Numerical Analysis in Optical Methods
Identification from Full-field Measurements
Recent Advances in Displacement-Metrology Methods
Phase Unwrapping, Phase Stepping, and High Speed Camera Calibration
Dynamic and Quasi Dynamic Measurements
Digital Image Correlation

The Society thanks the authors, presenters, organizers, and session chairs for their participation and contribution to this volume. The opinions expressed herein are those of the individual authors and not necessarily those of the Society for Experimental Mechanics, Inc.

Bethel, Connecticut
Dr. Thomas Proulx
Society for Experimental Mechanics, Inc.
Contents

1 3D Structures of Alloys and Nanoparticles Observed by Electron Tomography K. Sato, K. Aoyagi, T.J. Konno, Tohoku University ... 1
2 Damage Characterization in Dual-phase Steels Using X-ray Tomography C. Landron, E. Maire, J. Adrien, INSA-Lyon, MATEIS; O. Bouaziz, ArcelorMittal Research ... 11
3 In-situ Synchrotron-radiation Computed Laminography Observation of Ductile Fracture T.F. Morgeneyer, Mines ParisTech; L. Helfen, ANKA/Institute for Synchrotron Radiation/European Synchrotron Radiation Facility; I. Sinclair, University of Southampton; F. Hild, LMT-Cachan; H. Proudhon, Mines ParisTech; F. Xu, T. Baumbach, ANKA/Institute for Synchrotron Radiation; J. Besson, Mines ParisTech ... 19
4 Understanding the Mechanical Behaviour of a High Manganese TWIP Steel by the Means of in Situ 3D X ray Tomography D. Fabrègue, C. Landron, Université de Lyon, CNRS/INSA-Lyon; C. Béal, Université de Lyon, CNRS/INSA-Lyon/ArcelorMittal Research; X. Kleber, E. Maire, Université de Lyon, CNRS/INSA-Lyon; M. Bouzekri, ArcelorMittal Research ... 27
5 Mechanical Properties of Monofilament Entangled Materials L. Courtois, E. Maire, M. Perez, MATEIS UMR 5510 - INSA Lyon; Y. Brechet, D. Rodney, Domaine Universitaire ... 33
6 Characterisation of Mechanical Properties of Cellular Ceramic Materials Using X-ray Computed Tomography O. Caty, F. Gaubert, Laboratoire des Composites Thermostructuraux/LCTS; G. Hauss, Institut de Chimie et de la Matière Condensée de Bordeaux; G. Chollon, Laboratoire des Composites Thermostructuraux/LCTS ... 39
7 Multiaxial Stress State Assessed by 3D X-ray Tomography on Semi-crystalline Polymers L. Laiarinandrasana, T.F. Morgeneyer, H. Proudhon, Mines ParisTech ... 47
8 Effect of Porosity on the Fatigue Life of a Cast Al Alloy N. Vanderesse, J.-Y. Buffiere, E. Maire, Université de Lyon – INSA; A. Chabod, Centre Technique des Industries de la Fonderie ... 55
9 Fatigue Mechanisms of Brazed Al-Mn Alloys Used in Heat Exchangers A. Buteri, Université de Lyon/Alcan CRV; J. Réthoré, J-Y. Buffière, D. Fabrègue, Université de Lyon; E. Perrin, S. Henry, Alcan CRV ... 63
10 Three Dimensional Confocal Microscopy Study of Boundaries between Colloidal Crystals E. Maire, INSA-Lyon; M. Persson Gulda, N. Nakamura, K. Jensen, E. Margolis, C. Friedsam, F. Spaepen, Harvard University ... 69
11 Scale Independent Fracture Mechanics S. Yoshida, D. Bhattarai, T. Okiyama, K. Ichinose, Southeastern Louisiana University ... 75
12 Consistent Embedding: A Theoretical Framework for Multiscale Modelling K. Runge, University of Florida ... 83
13 Analysis of Crystal Rotation by Taylor Theory M. Morita, O. Umezawa, Yokohama National University ... 91
14 Numerical Solution of the Walgraef-Aifantis Model for Simulation of Dislocation Dynamics in Materials Subjected to Cyclic Loading J. Pontes, Federal University of Rio de Janeiro; D. Walgraef, Université Libre de Bruxelles; C.I. Christov, University of Louisiana at Lafayette ... 97
15 Photoelastic Determination of Boundary Condition for Finite Element Analysis S. Yoneyama, S. Arikawa, Y. Kobayashi, Aoyama Gakuin University ... 109
16 Discussion on Hybrid Approach to Determination of Cell Elastic Properties M.C. Frassanito, L. Lamberti, A. Boccaccio, C. Pappalettere, Politecnico di Bari ... 119
17 Mesh Refinement for Inverse Problems with Finite Element Models A.H. Huhtala, S. Bossuyt, Aalto University ... 125
18 Assessment of Inverse Procedures for the Identification of Hyperelastic Material Parameters M. Sasso, G. Chiappini, Università Politecnica delle Marche; M. Rossi, Arts et Métiers ParisTech; G. Palmieri, Università degli Studi e-Campus ... 131
19 Digital Image Correlation Through a Rigid Borescope P.L. Reu, Sandia National Laboratories ... 141
20 Scale Independent Approach to Strength Physics and Optical Interferometry S. Yoshida, Southeastern Louisiana University ... 147
21 Optical Techniques That Measure Displacements: A Review of the Basic Principles C.A. Sciammarella, Northern Illinois University ... 155
22 Studying Phase Transformations in a Shape Memory Alloy with Full-field Measurement Techniques D. Delpueyo, M. Grédiac, X. Balandraud, C. Badulescu, Clermont Université ... 181
23 Correlation Between Mechanical Strength and Surface Conditions of Laser Assisted Machined Silicon Nitride F.M. Sciammarella, M.J. Matusky, Northern Illinois University ... 187
24 Analysis of Speckle Photographs by Subtracting Phase Functions of Digital Fourier Transforms K.A. Stetson, Karl Stetson Associates, LLC ... 199
25 Measurement of Residual Stresses in Diamond Coated Substrates Utilizing Coherent Light Projection Moiré Interferometry C.A. Sciammarella, Northern Illinois University; A. Boccaccio, M.C. Frassanito, L. Lamberti, C. Pappalettere, Politecnico di Bari ... 209
26 Automatic Acquisition and Processing of Large Sets of Holographic Measurements in Medical Research E. Harrington, Worcester Polytechnic Institute; C. Furlong, Worcester Polytechnic Institute/Massachusetts Eye and Ear Infirmary/Harvard Medical School; J.J. Rosowski, Massachusetts Eye and Ear Infirmary/Harvard Medical School/MIT-Harvard Division of Health Sciences and Technology; J.T. Cheng, Massachusetts Eye and Ear Infirmary/Harvard Medical School ... 219
27 Adaptative Reconstruction Distance in a Lensless Digital Holographic Otoscope J.M. Flores-Moreno, Worcester Polytechnic Institute/Centro de Investigaciones en Optica A. C.; C. Furlong, Worcester Polytechnic Institute/Massachusetts Eye and Ear Infirmary/MIT-Harvard Division of Health Sciences and Technology; J.J. Rosowski, Massachusetts Eye and Ear Infirmary ... 229
28 3D Shape Measurements With High-speed Fringe Projection and Temporal Phase Unwrapping M. Zervas, C. Furlong, E. Harrington, I. Dobrev, Worcester Polytechnic Institute ... 235
29 Measuring Local Mechanical Properties of Membranes Applying Coherent Light Projection Moiré Interferometry F.M. Sciammarella, C.A. Sciammarella, Northern Illinois University; L. Lamberti, Politecnico di Bari ... 243
30 Experimental Analysis of Foam Sandwich Panels With Projection Moiré A. Boccaccio, C. Casavola, L. Lamberti, C. Pappalettere, Politecnico di Bari ... 249
31 Panoramic Stereo DIC-based Strain Measurement on Submerged Objects K. Genovese, L. Casaletto, Università degli Studi della Basilicata; Y.-U. Lee, J.D. Humphrey, Yale University ... 257
32 Advances in the Measurement of Surfaces Properties Utilizing Illumination at Angles Beyond Total Reflection C.A. Sciammarella, F.M. Sciammarella, Northern Illinois University; L. Lamberti, Politecnico di Bari ... 265
33 Filters with Noise/Phase Jump Detection Scheme for Image Reconstruction J.-F. Weng, Y.-L. Lo, National Cheng Kung University ... 273
34 An Instantaneous Phase Shifting ESPI System for Dynamic Deformation Measurement T.Y. Chen, C.H. Chen, National Cheng Kung University ... 279
35 Development of Linear LED Device for Shape Measurement by Light Source Stepping Method Y. Oura, M. Fujigaki, A. Masaya, Wakayama University; Y. Morimoto, Moire Institute Inc. ... 285
36 Calibration Method for Strain Measurement Using Multiple Cameras in Digital Holography M. Fujigaki, R. Nishitani, Wakayama University ... 293
37 Performance Assessment of Strain Measurement With an Ultra High Speed Camera M. Rossi, Arts et Métiers ParisTech; R. Cheriguene, Université Paul Verlaine de Metz; F. Pierron, Arts et Métiers ParisTech; P. Forquin, Université Paul Verlaine de Metz ... 299
38 Rigid Body Correction Using 3D Digital Photogrammetry for Rotating Structures T. Lundstrom, C. Niezrecki, P. Avitabile, University of Massachusetts Lowell ... 307
39 Development of Sampling Moire Camera for Landslide Prediction by Small Displacement Measurement M. Nakabo, M. Fujigaki, Wakayama University; Y. Morimoto, Moire Institute Inc.; Y. Sasatani, H. Kondo, T. Hara, Wakayama University ... 323
40 Energy Dissipation in Impact Absorber S. Ekwaro-Osire, I. Durukan, F.M. Alemayehu, Texas Tech University; J.F. Cardenas-García, United States Patent and Trademark Office ... 331
41 Mechanics Behind 4D Interferometric Measurement of Biofilm Mediated Tooth Decay M.S. Waters, National Institute of Standards and Technology; B. Yang, American Dental Association Foundation; N.J. Lin, S. Lin-Gibson, National Institute of Standards and Technology ... 337
42 Validating Road Profile Reconstruction Methodology Using ANN Simulation on Experimental Data H.M. Ngwangwa, University of South Africa; P.S. Heyns, H.G.A. Breytenbach, P.S. Els, University of Pretoria ... 345
43 Electro-optical Property of Sol-gel-derived PLZT7/30/70 Thin Films J.-F. Lin, J.-S. Jeng, W.-R. Chen, Far East University ... 359
44 Decoupling Six Effective Parameters of Anisotropic Optical Materials Using Stokes Polarimetry T.-T.-H. Pham, Y.-L. Lo, National Cheng Kung University ... 365
45 Measurement of Creep Deformation in Stainless Steel Welded Joints Y. Sakanashi, S. Gungor, P.J. Bouchard, The Open University ... 371
46 Thermal Deformation Measurement in Thermoelectric Coolers by ESPI and DIC Method W.-C. Wang, T.-Y. Wu, National Tsing Hua University ... 379
47 Structural Health Monitoring Using Digital Speckle Photography F.-P. Chiang, J.-D. Yu, Stony Brook University ... 393
48 Determining the Strain Distribution in Bonded and Bolted/Bonded Composite Butt Joints Using the Digital Image Correlation Technique and Finite Element Methods D. Backman, G. Li, T. Sears, National Research Council Canada ... 401
49 Improved Spectral Approach for Continuous Displacement Measurements From Digital Images F. Mortazavi, M. Lévesque, École Polytechnique de Montréal; I. Villemure, École Polytechnique de Montréal/Sainte-Justine University Hospital Center ... 407
50 Experimental Testing (2D DIC) and FE Modelling of T-stub Model D. Carazo Alvarez, University of Jaén; M. Haq, Michigan State University; J.D. Carazo Alvarez, University of Jaén; E. Patterson, Michigan State University ... 415
3D structures of alloys and nanoparticles observed by electron tomography

Kazuhisa Sato*, Kenta Aoyagi, and Toyohiko J. Konno
Institute for Materials Research, Tohoku University
2-1-1 Katahira, Aoba, Sendai, Miyagi 980-8577, Japan
*E-mail address:
[email protected]
ABSTRACT

3D structures of bulk alloys and nanoparticles have been studied by means of electron tomography using scanning transmission electron microscopy (STEM). In the case of Fe-Pd alloy nanoparticles, the particle size, shape, and locations were reconstructed by weighted back-projection (WBP) as well as by the simultaneous iterative reconstruction technique (SIRT). We have also estimated the particle size by simple extrapolation of the original tilt-series data sets, which proved to be quite powerful. We demonstrate that WBP yields a better estimation of the particle size in the z direction than SIRT does, while the latter algorithm is superior to the former in terms of surface roughness and dot-like artifacts. In contrast, SIRT gives a better result than WBP for the reconstruction of plate-like precipitates in Mg-Dy-Nd alloys with respect to the plate thickness perpendicular to the z direction. We also show our recent results on 3D tomographic observations of microstructures in Ti-V-Al, Ti-Nb, Cu-Ag, and Co-Ni-Cr-Mo alloys obtained by STEM tomography.

1. Introduction
Our understanding of the microstructure of metals and alloys has advanced with the progress of transmission electron microscopy (TEM) and electron diffraction. In the fifties, for example, dislocation theories were directly confirmed by electron imaging based on diffraction contrast [1]. This technique immediately found enormous areas of application in materials science and was utilized, for instance, to clarify the phase transformation behavior in a number of alloy systems. Other applications of TEM include, of course, high-resolution transmission electron microscopy (HRTEM) and scanning transmission electron microscopy (STEM) [2]. The images obtained by these techniques are projections of three-dimensional (3D) objects, and in order to better understand, for example, the nature of phase transformation behavior, a direct 3D observation is much needed. In this respect, recent advances in 3D tomography (x-ray, electron, and atom probe tomography) have opened new prospects. Electron tomography, especially in its applications to materials science, is a novel technique that can retrieve 3D structural information usually missing in TEM and STEM. A 3D structure can be reconstructed by processing a tilt series of electron micrographs with mass-thickness contrast, formed by several different imaging techniques: bright-field (BF) TEM [3], dark-field (DF) TEM [4, 5], atomic number (Z) contrast of STEM [6], energy-filtered TEM [6], and electron holography [7]. The recent progress in this field has been summarized in review articles [8, 9]. In all these techniques, acquisition of clear-contrast images and accurate alignment of the tilt axis are essential for the subsequent 3D reconstruction. Model simulations of the accuracy of reconstruction have been presented in detail [6]. The development of methods to quantify 3D reconstructed structures is also one of the fundamental interests in electron tomography [10-13].
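Accurate tilt-axis alignment, noted above as a prerequisite, is typically achieved by cross-correlating successive images of the tilt series. As an illustration of the underlying idea only (not the implementation used in any commercial alignment package), the translational offset between two images can be recovered by phase correlation; the function name and setup below are our own:

```python
import numpy as np

def xcorr_shift(ref, img):
    """Return (dy, dx) such that np.roll(img, (dy, dx), axis=(0, 1))
    aligns img with ref, estimated to integer-pixel precision."""
    # cross-power spectrum, normalized to unit magnitude (phase correlation)
    cross = np.fft.fft2(ref) * np.conj(np.fft.fft2(img))
    corr = np.fft.ifft2(cross / (np.abs(cross) + 1e-12)).real
    # the correlation peak marks the shift; unwrap indices past n/2 to negative
    peak = np.unravel_index(np.argmax(corr), corr.shape)
    return tuple(int(p) if p <= s // 2 else int(p) - s
                 for p, s in zip(peak, corr.shape))
```

In a real tilt series the shifts would be estimated pairwise between neighboring tilt images and accumulated, with sub-pixel refinement and iteration on top of this basic step.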
Here, we report some of our recent studies on magnetic nanoparticles and bulk alloys, where electron tomography has played an important role in identifying the 3D structures and spatial distribution of nanocrystals, dislocations, and precipitates.

T. Proulx (ed.), Optical Measurements, Modeling, and Metrology, Volume 5, Conference Proceedings of the Society for Experimental Mechanics Series, DOI 10.1007/978-1-4614-0228-2_1, © The Society for Experimental Mechanics, Inc. 2011
2. Experimental Procedures
We employed BF and high-angle annular dark-field (HAADF) imaging modes of STEM for the tilt-series acquisition, using an FEI Titan 80-300 (S)TEM operating at 300 kV with a field emission gun. We set the beam convergence to 10-14 mrad in half-angle, taking into account the spherical aberration coefficient (1.2 mm) of the pre-field of the objective lens. The Xplore3D software (FEI Co., Ltd.) was used for data-set acquisition, taking dynamic focus into consideration. A single-tilt holder (Fischione model 2020) and a triple-axes holder (Mel-Build model HATA-8075) were used for the tilt-series acquisition, with a maximum tilt angle of 70°. Alignment of the tilt axis of the obtained data sets by an iterative cross-correlation technique, and the subsequent 3D reconstruction, were performed using the Inspect3D software package (FEI Co., Ltd.). As algorithms for the 3D reconstruction, we employed weighted back-projection (WBP) [14] as well as the simultaneous iterative reconstruction technique (SIRT) [15]. The reconstructed 3D density data were then visualized using the AMIRA 4.1 software (Visage Imaging).
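The two reconstruction algorithms can be illustrated on a single 2D slice of the single-axis tilt geometry. The sketch below assumes ideal parallel-beam projections and uses scipy.ndimage.rotate for forward and back projection; it is a schematic of WBP and SIRT only, not the Inspect3D implementation:

```python
import numpy as np
from scipy.ndimage import rotate

def project(img, angles_deg):
    # forward projection: rotate the slice, then integrate along columns
    return np.array([rotate(img, -a, reshape=False, order=1).sum(axis=0)
                     for a in angles_deg])

def backproject(sino, angles_deg, n):
    # smear each 1D projection across the slice and rotate it back
    out = np.zeros((n, n))
    for a, p in zip(angles_deg, sino):
        out += rotate(np.tile(p, (n, 1)), a, reshape=False, order=1)
    return out

def wbp(sino, angles_deg):
    # weighted back-projection: ramp-filter each projection, then back-project
    ramp = np.abs(np.fft.fftfreq(sino.shape[1]))
    filtered = np.fft.ifft(np.fft.fft(sino, axis=1) * ramp, axis=1).real
    return backproject(filtered, angles_deg, sino.shape[1])

def sirt(sino, angles_deg, n_iter=20, relax=1.0):
    # SIRT: repeatedly back-project the residual between measured and
    # computed projections, with a normalized relaxation step
    n = sino.shape[1]
    x = np.zeros((n, n))
    for _ in range(n_iter):
        resid = sino - project(x, angles_deg)
        x += relax * backproject(resid, angles_deg, n) / (len(angles_deg) * n)
    return x
```

Restricting the angles to roughly ±70°, as imposed by the specimen holders used here, leaves a missing wedge in the sampled projections, which is the origin of the z-elongation artifacts discussed in the following section.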
3. Results and Discussion
3.1 Shapes and distribution of FePd nanoparticles

Figure 1a shows a series of STEM-HAADF images taken at different tilt angles with a detector inner half-angle of 60 mrad. The tilt series was acquired sequentially from 0 to -70° and then from 0 to +70°. The tilt-angle increments were set to 2° for the range 0 to |50|° and to 1° for |50| to |70|°. Out of this data set, after careful inspection of the contrast, we employed the images taken at tilt angles between -66 and +64° for the subsequent 3D reconstruction. As seen, the apparent particle length in the y direction becomes shorter as the tilt angle increases. The nanoparticle enclosed by the circle in the figure is one example demonstrating this reduction of the particle image in the y direction. To examine the accuracy of a reconstructed particle height in the z direction, we therefore measured the projected particle length in the y direction as a function of tilt angle and deduced the particle height by extrapolating the projected length to the value expected at a tilt angle of 90°. The results are plotted in Fig. 1b. The projected length clearly decreases with tilting, which indicates that the particle height is actually shorter than the in-plane diameter. Here, the extrapolation was performed by fitting the data points at angles higher than 40° using the cosine of the tilt angle (α), because the projected y length is proportional to cos α at high angles when the particle height is shorter than the diameter. Using the aforementioned procedure, here termed the "tilt-series extrapolation (TSE) method", we obtained the relation between particle diameter and thickness estimated by several different techniques (Fig. 2a). Solid triangles and solid squares indicate the results obtained from the reconstructed images based on SIRT and WBP, respectively.
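One simple way to implement this extrapolation is a linear least-squares fit of the high-angle projected lengths to L(α) = c·cos α + h, whose value at α = 90° (where cos α = 0) is the intercept h, i.e. the particle height. The numbers below are synthetic, chosen only to demonstrate the fit; they are not the measured data of Fig. 1b:

```python
import numpy as np

# synthetic projected y-lengths (nm) generated from L(a) = c*cos(a) + h
# with c = 8.0 and h = 5.5; only data above 40 deg are used, as in the text
angles = np.array([42.0, 46.0, 50.0, 54.0, 58.0, 62.0, 66.0])  # tilt angles, deg
lengths = 8.0 * np.cos(np.radians(angles)) + 5.5

# least-squares fit; the intercept h is the projected length extrapolated
# to a = 90 deg, i.e. the deduced particle height
A = np.column_stack([np.cos(np.radians(angles)), np.ones_like(angles)])
(c, h), *_ = np.linalg.lstsq(A, lengths, rcond=None)
print(round(h, 2))  # prints 5.5
```

With real, noisy measurements the same fit applies unchanged; the residuals then give an error estimate for the deduced height.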
In the present study, 20 iterations were carried out in SIRT to minimize the differences between the original projected series and the calculated ones. The large error bar for WBP indicates a possible elongation of dz = 4.1 nm [13]. Therefore, we divided the apparent particle thickness (tz), deduced from the 3D volumes based on WBP, by the elongation factor (eyz = 1.42) [12] for the present experimental condition. The results, tz / eyz, are indicated by open squares. Solid circles denote the particle thickness deduced from the TSE method. The solid curve indicates a previous result based on electron holography [16]. Note that the thicknesses obtained by the TSE method agree well with those obtained by WBP (tz / eyz) as well as with those by electron holography. On the other hand, the thicknesses suggested by SIRT are much larger than the values deduced by the TSE method or by electron holography. The apparent thickness predicted by WBP (tz) is close to the deduced values, with an error of about 1-4 nm in thickness, even without taking the elongation factor into consideration. Therefore, within the framework of single-axis tilt geometry, it is demonstrated in a semi-quantitative manner that WBP gives a better result in terms of the accuracy of the particle length in the z direction than SIRT does, despite the fact that the latter algorithm is superior to the former from the viewpoint of artifacts.
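The elongation factor can also be estimated analytically from the maximum tilt angle. The standard missing-wedge estimate for single-axis tilt (the form commonly attributed to Radermacher [14] and used in the missing-wedge literature, e.g. [17]) reproduces e_yz ≈ 1.42 for a tilt range of about ±65°, consistent with the value used above. This is our own cross-check, not a calculation taken from the paper:

```python
import numpy as np

def elongation_factor(max_tilt_deg):
    """Missing-wedge elongation e_yz for single-axis tilt up to max_tilt_deg:
    e = sqrt((a + sin(a)cos(a)) / (a - sin(a)cos(a))), a in radians."""
    a = np.radians(max_tilt_deg)
    sc = np.sin(a) * np.cos(a)
    return float(np.sqrt((a + sc) / (a - sc)))

print(round(elongation_factor(65.0), 2))  # prints 1.42
```

Raising the maximum tilt to ±70° lowers the factor to about 1.31, which quantifies why minimizing the missing wedge is effective.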
Fig. 1 (a) A series of Z-contrast images taken at different tilt angles. (b) The analyzed particle length in the y direction as a function of tilt angle. The particle length decreases as the tilt angle increases towards 90°, indicating that the particle height is shorter than the diameter. Extrapolation of the particle length in the y direction to the value expected at a tilt angle of α = 90° yields the true particle height. Here, the extrapolation was performed by fitting the data points at angles higher than 40° using the cosine of the tilt angle [13].
Fig. 2 (a) The relation between particle diameter and thickness (height) for the FePd nanoparticles estimated by several different techniques. (b) Oblique view of the reconstructed volume processed by SIRT (upper) and (c) by WBP (lower). A large discrepancy in particle thickness (height) is apparent. The reconstructed volume is 75 × 75 × 36 nm3 [13].
The difference in particle height between the reconstructed results is pronounced when viewed from an oblique direction, as shown in Fig. 2b. Indeed, SIRT gave particle heights that are almost comparable to or even longer than the particle diameters (Fig. 2b), while rather flat 3D shapes can be seen in the WBP result (Fig. 2c). Nanoparticles in the upper image (SIRT) show prolate 3D shapes, i.e., elongated in the z direction. The reason for this artifact is not clear at this moment. To reduce such artifacts, minimization of the missing wedge will be most effective, which can be attained by increasing the maximum tilt angle and acquiring as many 2D slice images as possible. Using the same experimental setup, we recently succeeded in reconstructing a double layer of 2 nm-sized CoPt nanoparticles separated by a thin amorphous carbon film [18].

3.2 Phase separation in Ti-V-Al alloy

We have examined the 3D morphology of the α (hcp) and β (bcc) dual-phase structure of a Ti-12mass%V-2mass%Al alloy after aging for 24 h at 500°C by means of STEM-HAADF tomography. In the present study, we set the inner half-angle of the HAADF detector to 30 mrad to ensure clear contrast during the tilt-series acquisition. This rather low-angle setting may break the simple Z² dependence of the HAADF-STEM images to some extent, owing to possible diffraction contrast during tilting. The tilt series was obtained sequentially from 0 to -70° and then from 0 to +70°, with tilt-angle increments of 2°. Out of this data set, we employed the images taken at tilt angles between -62 and +62° for the subsequent 3D reconstruction. Figure 3 shows one of the original images (Fig. 3a) and the corresponding reconstructed images processed by WBP (Fig. 3b). Here, the x axis is the tilt axis, about which the specimen film is sequentially tilted towards the y direction, while the primary beam incidence direction is parallel to the z axis.
The bright-contrast regions correspond to the precipitated V-rich β phase (<40mass%V). A bird's-eye view of the reconstructed volume is shown in Fig. 3c. The plate-like 3D shapes of the β phase, precipitated by the decomposition of hexagonal α' martensite, have been successfully reconstructed. As seen in these images, the general features projected onto the x-y plane, such as the plate-like shape, size, and location of the precipitates, are clearly reconstructed. However, floating dot-like artifacts are seen in the reconstructed volume, which can be attributed to the low signal-to-noise ratio and to diffraction effects in the original tilt-series STEM images.
Fig. 3 3D distribution of the β phase in Ti-12mass%V-2mass%Al after annealing for 24 h at 500°C [19]. (a) An original STEM-HAADF image. Bright regions correspond to the V-rich β phase. (b) Reconstructed images processed by WBP. (c) A bird's-eye view of the reconstructed volume (690 × 720 × 375 nm3). The plate-like structures correspond to the β precipitates.
3.3 Dislocations in a Ti-Nb alloy

Figure 4 shows STEM-BF images of an as-quenched Ti-35mass%Nb alloy acquired during a tilt-series observation. The alloy is composed of acicular orthorhombic α” martensite and the bcc β phase [20]. In this observation, the hh0 systematic reflections of the β phase (bcc) were set parallel to the tilt axis of the triple-axes holder. As seen in the corresponding diffraction patterns, the 1 1 0 reflection of the β phase is always excited during the tilt-series observation. The tilt series was obtained sequentially from 0 to -70° and then from 0 to +70°, with tilt-angle increments of 2°. The reconstruction was carried out by WBP using 71 images. Figure 5 shows a snapshot of the reconstructed image of dislocations observed in the β phase region. It is presumed that the origin of these dislocations in the β phase can be attributed to the β−α” martensitic transformation during quenching. Detailed characterization of these dislocation structures is now in progress.
Fig.4 STEM-BF images of an as-quenched Ti-35mass%Nb alloy acquired during a tilt-series observation (after tilt-axis correction). Corresponding diffraction patterns are shown in the inset, showing excitation of hh0 systematic row.
Fig.5 Reconstructed 3D image of dislocations in an as-quenched Ti-35mass%Nb alloy processed by WBP from the tilt series of STEM-BF images in Fig.4.
3.4 Precipitates in Cu-Ag, Mg-Dy-Nd, and Co-Ni-Cr-Mo alloys

Figures 6a and 6b show STEM-BF and HAADF images, respectively, of a Cu-4at%Ag alloy aged at 450°C for 20 min [21]. Discontinuous precipitation of Ag at the grain boundary regions is clearly seen. Figures 6c and 6d are snapshots of the reconstructed images of the Ag precipitates processed by SIRT. The reconstruction was carried out using 70 individual images. As can be seen in these snapshots, the precipitates possess a "flat" rod shape, i.e., the cross-sections of the rods are not circular but elliptical. We found that the aspect ratio, or ellipticity, is more than two; its origin remains an open question.
Fig. 6 STEM-BF (a) and HAADF (b) images of a Cu-4at%Ag alloy aged at 450°C for 20 min, showing discontinuous precipitation of Ag at grain boundary regions. (c, d) Reconstructed volumes of the Ag precipitates processed by SIRT.
Fig. 7 Reconstructed images of β' precipitates in a Mg-7mass%Dy-3mass%Nd alloy aged at 200°C for 30 h.
Figure 7 shows reconstructed images of β' precipitates in a Mg-7mass%Dy-3mass%Nd alloy aged at 200°C for 30 h, processed by SIRT [22]. The tilt series was acquired sequentially from 0 to -70° and then from 0 to +70°. The tilt-angle increments were set to 2° for the range 0 to |40|° and to 1° for |40| to |70|°. A distribution of plate-like precipitates, separated by 10-50 nm, can be observed. Note that for the reconstruction of plate-like precipitates, SIRT was found to give a better result than WBP with respect to the plate thickness perpendicular to the z direction, as well as a smoother surface with few apparent dot-like artifacts. Using the same experimental setup, we also observed the 3D structures of lamellar precipitates in Co-Ni based superalloys, as shown in Fig. 8 [23].
Fig. 8 STEM-HAADF image (a) and a snapshot of the reconstructed 3D image (b) of a Co-Ni-Cr-Mo superalloy.

4. Conclusion
We have studied the 3D structures of nanoparticles and bulk alloys by means of electron tomography using STEM. In the case of FePd nanoparticles, we demonstrated that WBP yields a better estimation of the particle size in the z direction than SIRT does, most likely due to the presence of a missing wedge in the original data set, while the latter algorithm is superior to the former in terms of surface roughness and dot-like artifacts. The dislocation network in the β phase of a Ti-Nb alloy was visualized by STEM-BF tomography, exciting the hh0 systematic reflections using a triple-axes holder. We also observed the 3D structures and spatial distribution of precipitates in Ti-V-Al, Cu-Ag, Mg-Dy-Nd, and Co-Ni-Cr-Mo alloys by means of single-axis STEM-HAADF tomography. For the reconstruction of plate-like precipitates in bulk Mg-Dy-Nd alloys, SIRT was found to give a better result than WBP with respect to the plate thickness perpendicular to the z direction.

Acknowledgments

The authors would like to express their sincere thanks to Prof. A. Chiba, Dr. H. Matsumoto, and Dr. S. Semboshi, Tohoku University, for kindly supplying the samples of the Ti-V-Al and Ti-Nb alloys used in the present study, and also to Dr. K. Inoke, FEI Co. Japan Ltd., and Mr. E. Aoyagi and Mr. Y. Hayasaka, Tohoku University, for their help with the TEM work. This work was partially supported by a Grant-in-Aid for Young Scientists (B) (Grant No. 19760459) from the Ministry of Education, Culture, Sports, Science, and Technology, Japan, by a grant from the New Energy and Industrial Technology Development Organization (NEDO, 08E51003d), and by the Nano-Materials Functionality Creation Research Project in IMR (2008-2009). TJK appreciates support from the JFE 21st Century Foundation.
References

[1] Hirsch P, Howie A, Nicholson R, Pashley DW, Whelan MJ, Electron Microscopy of Thin Crystals (Krieger, Florida, 1977).
[2] Buseck P, Cowley JM, Eyring L (ed.), High-Resolution Transmission Electron Microscopy and Associated Techniques (Oxford, New York, 1992).
[3] Shirai M, Horiuchi T, Horiguchi A, Matsumura S, Yasuda K, Watanabe M, Masumoto T, Morphological change in FePt nanogranular thin films induced by irradiation with 2.4 MeV Cu2+ ions: electron tomography observation, Mater. Trans. 47(1), 52-58 (2006).
[4] Kimura K, Hata S, Matsumura S, Horiuchi T, Dark-field transmission electron microscopy for a tilt series of ordering alloys: toward electron tomography, J. Electron Microscopy 54(4), 373-377 (2005).
[5] Barnard JS, Sharp J, Tong JR, Midgley PA, High-resolution three-dimensional imaging of dislocations, Science 313, 319 (2006).
[6] Midgley PA, Weyland M, 3D electron microscopy in the physical sciences: the development of Z-contrast and EFTEM tomography, Ultramicroscopy 96, 413-431 (2003).
[7] Twitchett AC, Yates TJV, Dunin-Borkowski RE, Newcomb SB, Midgley PA, Three-dimensional electrostatic potential of a Si p-n junction revealed using tomographic electron holography, J. Phys. Conf. Ser. 26, 29-32 (2006).
[8] Kübel C, Voigt A, Schoenmakers R, Otten M, Su D, Lee, Carlsson A, Bradley J, Recent advances in electron tomography: TEM and HAADF-STEM tomography for materials science and semiconductor applications, Microsc. Microanal. 11, 378-400 (2005).
[9] Midgley PA, Dunin-Borkowski RE, Electron tomography and holography in materials science, Nature Mater. 8, 271-280 (2009).
[10] Fujita T, Qian LH, Inoke K, Erlebacher J, Chen MW, Three-dimensional morphology of nanoporous gold, Appl. Phys. Lett. 92(25), 251902-1−251902-3 (2008).
[11] Benlekbir S, Epicier T, Bausach M, Aouine M, Berhault G, STEM-HAADF electron tomography of palladium nanoparticles with complex shapes, Philos. Mag. Lett. 89(2), 145-153 (2009).
[12] Alloyeau D, Ricolleau C, Oikawa T, Langlois C, Le Bouar Y, Loiseau A, Comparing electron tomography and HRTEM slicing methods as tools to measure the thickness of nanoaprticles, Ultramicrosc. 109, 788-796 (2009). [13] Sato K, Aoyagi K, Konno TJ, Three-dimensional shapes and distribution of FePd nanoparticles observed by electron tomography using HAADF-STEM, J. Appl. Phys. 107(2), 024304-1−024304-7 (2010). [14] Radermacher M, Weighted Back-Projection Method, in: Frank J. (ed.), Electron Tomography: Three-dimensional Imaging with the Transmission Electron Microscope (Plenum Press, New York, London, 1992). [15] Gilbert P, Iterative methods for the three-dimensional reconstruction of an object from projections, J. Theor. Biol. 36, 105-117 (1972). [16] Sato K, Hirotsu Y, Mori H, Wang Z, Hirayama T, Long-range order parameter of single L10-FePd nanoparticle determined by nanobeam electron diffraction, J. Appl. Phys. 98(2), 024308-1−024308-8 (2005). [17] Arslan I, Tong J, Midgley PA, Reducing the missing wedge: High-resolution dual axis tomography of inorganic materials, Ultramicrosc. 106, 994-1000 (2006). [18] Epicier T, Benlekbir S, Sato K, Tournus F, Konno TJ, "STEM-HAADF tomography and generalized stereoscopy 3D studies of nano-particles in Transmission Electron Microscopy", invited talk at MicroScience 2010 (session M2.1),
9
London, UK, June 29 – July 1, 2010. [19] Sato K, Matsumoto H, Kodaira K, Konno TJ, Chiba A, Phase transformation and age-hardening of hexagonal α’ martensite in Ti-12%V-2%Al alloys studied by TEM, J. Alloys Compd. 506, 607-614 (2010). [20] Semboshi S, Shirai T, Konno TJ, Hanada S, In-situ transmission electron microscopy observation on the phase transformation of Ti-Nb-Sn shape memory alloys, Metall. Mater. Trans. 39A, 2820-2829 (2008). [21] Shizuya E, Konno TJ, A study on age hardening in Cu-Ag alloys by transmission electron microscopy in Frontiers in Materials Science, edited by Fujikawa Y, Nakajima K, Sakurai T (Springer-Verlag, New York, 2008). [22] Konno TJ, Aoyagi K, Shizuya E, Lee JB, Sato K, Kiguchi T, Hiraga K, Phase transformation behavior in alloys viewed by 3D-tomography, Proc. 9th Asia-Pacific Microscopy Conference (APMC9), 227-228 (2008). [23] Konno TJ, Tadano T, Matsumoto H, Chiba A, Microstructure of Co-Ni based superalloys, Proc. 14th European Microscopy Congress (EMC2008), 2, 447-448 (2008).
Damage characterization in Dual-Phase steels using X-ray tomography

C. Landron a, E. Maire a, J. Adrien a, O. Bouaziz b

a INSA-Lyon, MATEIS UMR5510, 25 av. Capelle, 69621 Villeurbanne, France
b ArcelorMittal Research, Voie Romaine, 57283 Maizieres-les-Metz Cedex, France
ABSTRACT In-situ tensile tests have been carried out during X-ray microtomography imaging of dual-phase steels. Void nucleation has been quantified as a function of strain and triaxiality using the obtained 3D images. Argon's decohesion criterion has then been used in a model for nucleation in which martensite plays the role of the inclusions. This criterion has been modified to include the local stress field and the effect of the kinematic hardening present in such a heterogeneous material.

1 Introduction

Ductile damage is characterized by a three-step process: cavities first nucleate, then grow, until coalescence leads to ductile fracture. The first step, nucleation, has been extensively studied and modeled. Void nucleation is usually associated with the presence of a second phase, such as particles or inclusions. In the latter case, the cavities appear close to the inclusions, either inside the particle or at the interface [1-3]. Dual-Phase steels (DP steels), containing hard martensite islands embedded in a ductile ferritic matrix, are exactly this kind of material, promoting heterogeneous nucleation. In DP steels, the main nucleation mechanism is interface decohesion, as experimentally observed by [4, 5]. To model this interface debonding, an energy criterion [2,6,7] is necessary for the creation of new surfaces, and a stress criterion [1,8] or a strain criterion [9,10] is required for breaking the bonds. To combine the two criteria, numerical models using cohesive zones have also been developed [11-13]. In order to be validated, these models have to be compared with key experiments. X-ray absorption microtomography is currently one of the most reliable ways to obtain quantitative three-dimensional (3D) information on damage [14,15]. In the present paper, damage in a DP steel is studied by in-situ tensile tests during X-ray microtomography imaging.
Quantitative data are then used to validate an analytical model of void nucleation based on Argon's criterion [1].

2 Experimental procedure

X-ray microtomography has been used in the present study to quantify damage during in-situ tensile tests. The method can be used for the imaging and the quantification of the microstructure of materials. Applications to the study of damage in ductile materials can be found in Refs. [14-16]. The tomography setup used is the one located at the ID15 beamline of the European Synchrotron Radiation Facility (ESRF) in Grenoble, France (more information is given in [17]). Tomography acquisition is carried out with a voxel size of (1.6 µm)³. With such a resolution, the smallest observed voids have a diameter of almost 2 µm. Smaller voids, not accounted for in the quantification, do exist in the sample but may not play a major role in damage. The DP steel used for this study was cut from a 3 mm thick sheet obtained by hot rolling and thermal treatment. Its mechanical properties are given in Table 1. It has been checked by image analysis of optical micrographs of polished surfaces that the steel contains about 11% martensite. Axisymmetric specimens were machined from the original sheet. Two specimen shapes inspired by [18] were cut: two smooth samples and two samples with a 1 mm notch radius. The specimen geometry is given in Fig.1. Each shape induces a different initial triaxiality, which allows us to study the effect of this important parameter on damage. Only the central part, 1.6 mm in height, is imaged in the present study.

Table 1 Mechanical properties in tension of the studied DP steel

Re (MPa)  Rm (MPa)  Ag (%)  A (%)
366       603       17.7    26.6

The fractured samples were polished after the in situ tensile test down to their central plane and etched with a 2 pct nital solution. The samples were dipped in ethanol and placed in an ultrasonic cleaner for 30 minutes
T. Proulx (ed.), Optical Measurements, Modeling, and Metrology, Volume 5, Conference Proceedings of the Society for Experimental Mechanics Series 9999999, DOI 10.1007/978-1-4614-0228-2_2, © The Society for Experimental Mechanics, Inc. 2011
after polishing, to eliminate possible fragments blunting the cavities. Light optical micrographs were then acquired in order to observe the nucleation sites.
Fig.1 Tensile samples used: smooth specimen (a), 1 mm radius notched specimen (b), 3D view (c)

3 Results and discussion

3-1 Damage characterization

X-ray microtomography imaging has already been used to visualize and quantify damage in DP steel in [17]. The same procedure was used in this study: raw image processing was performed with the ImageJ freeware [19]. The initial volumes were median filtered and simply thresholded to differentiate material from voids. Damage can be qualitatively observed in 2D using sections through the volume, such as those presented in Fig.2, or in 3D using a global view of the sample, such as those shown in Fig.3. 3D visualization software allows one to render some of the voxels transparent (for instance those located in the solid phase), making it possible to see the cavities inside the sample. The tomography volumes can also be employed to quantify the damage appearing during the tensile test. As in [17], only the central part of the tensile specimen is used for this damage quantification. This sub-region was chosen to be a cubic volume of (300 µm)³. Fig.4 shows this sub-region in a notched specimen of DP steel at several steps of deformation. This qualitative figure clearly shows that the number of cavities increases (nucleation) and that the size of the nucleated cavities also increases (growth) with increasing strain. It is noticeable in this image that nucleation is a quantitatively important part of the damage progression in these materials, as already evidenced in [17]. Each pore of the volume is subsequently labeled using a dedicated image-processing plugin implemented in ImageJ [19]. The labeling plugin uses a binary image as input. It simply detects the 3D clusters of connected voxels and gives a label to each of them. The void density is calculated as the number of cavities per cubic millimeter in the sub-volume.
The volume of each cavity is also measured, as well as its dimensions, making it possible to quantitatively characterize the growth and the shape change of the voids during the tensile deformation.
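The filter-threshold-label pipeline described above can be sketched with standard tools. The following is a minimal Python illustration using scipy.ndimage; the authors used a dedicated ImageJ plugin, so the function names, the filter size, the grey-level convention (solid bright, voids dark) and the synthetic test volume are assumptions made here for illustration only:

```python
import numpy as np
from scipy import ndimage

def quantify_voids(volume, threshold, voxel_size_um=1.6):
    """Label voids in a reconstructed tomography sub-volume.

    Returns the void density (cavities per mm^3) and the individual
    cavity volumes (in um^3), mirroring the quantification in the text.
    """
    # median filter to reduce noise, then simple grey-value thresholding
    filtered = ndimage.median_filter(volume, size=3)
    voids = filtered < threshold

    # 3D connected-component labeling (26-connectivity)
    structure = np.ones((3, 3, 3), dtype=bool)
    labels, n_cavities = ndimage.label(voids, structure=structure)

    # void density: number of cavities per cubic millimetre analysed
    volume_mm3 = volume.size * (voxel_size_um * 1e-3) ** 3
    density = n_cavities / volume_mm3

    # per-cavity volumes from voxel counts (label 0 is the background)
    voxel_counts = np.bincount(labels.ravel())[1:]
    cavity_volumes = voxel_counts * voxel_size_um ** 3
    return density, cavity_volumes

# synthetic sub-volume with two dark voids in a bright matrix
side = 64
vol = np.full((side, side, side), 200.0)
vol[10:14, 10:14, 10:14] = 10.0
vol[40:45, 40:45, 40:45] = 10.0
density, sizes = quantify_voids(vol, threshold=100)
```

On real data the threshold would be chosen from the grey-level histogram, and voids touching the sub-volume border would typically be excluded.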
Fig.2 Sections at the center of a notched strained specimen at various steps of deformation: εloc=0 (a), εloc=0.35 (b) and εloc=0.83 (c)
Fig.3 3-D views of a notched strained specimen at various steps of deformation: εloc=0 (a), εloc=0.35 (b) and εloc=0.83 (c). The outline of the specimen appears in gray and the cavities in red
Fig.4 3-D views of damage at the center of a notched strained specimen at various steps of deformation: εloc=0 (a), εloc=0.35 (b) and εloc=0.83 (c)

Some mechanical parameters were calculated using the outside shape of the specimen. The minimal section area S was measured in order to calculate the local strain εloc at each step using equation (1):

εloc = ln(S0/S)   (1)

S0 being the initial section of the sample. This equation implies that the effect of porosity on the volume change of the sample is neglected in our analysis. The curvature radius Rnotch is also measured in order to determine the stress triaxiality T using equation (2), derived from the Bridgman analysis of notched bars [20]:

T = 1/3 + ln(1 + a/(2Rnotch))   (2)
a being the radius of the minimal section, easily obtained from the value of S. Fig.5 shows the evolution of N, the number of voids per unit volume (expressed per cubic mm), in several DP steel samples with smooth and notched geometries. A very small amount of porosity (0.03%) can be detected before the tensile test, possibly due to the fabrication process. The experimental results show that the triaxiality has a straightforward impact on the nucleation kinetics: void nucleation occurs earlier in notched samples, which induce a higher triaxiality, than in smooth samples. Optical micrographs performed on the fractured specimens and given in Fig.6 show that most cavities are localized between the ferritic matrix and the martensite islands and thus nucleate by decohesion of the ferrite/martensite interface, as previously observed in [4, 5].
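Equations (1) and (2) can be applied directly to the measured section area and notch radius. The sketch below encodes them as small helper functions; the numerical inputs are hypothetical values chosen only to illustrate the calculation, not measured data from the paper:

```python
import math

def local_strain(S0, S):
    """Local strain from the minimal cross-section area, Eq. (1):
    eps_loc = ln(S0 / S)."""
    return math.log(S0 / S)

def bridgman_triaxiality(a, R_notch):
    """Stress triaxiality from Bridgman's analysis of notched bars, Eq. (2):
    T = 1/3 + ln(1 + a / (2 R_notch)),
    with a the radius of the minimal section and R_notch the notch
    curvature radius (same length unit for both)."""
    return 1.0 / 3.0 + math.log(1.0 + a / (2.0 * R_notch))

# illustrative values: 1 mm notch radius, 0.9 mm minimal-section radius,
# and a section reduced to 70% of its initial area
T = bridgman_triaxiality(a=0.9, R_notch=1.0)
eps = local_strain(S0=1.0, S=0.7)
```

As a sanity check, a very large R_notch recovers the smooth-bar limit T = 1/3, consistent with the notched samples inducing the higher triaxiality.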
Fig.5 Evolution of N, the number of cavities per cubic mm in the four studied samples measured during the in-situ tensile tests [21]
Fig.6 Micrograph of a fractured specimen. Voids appear in black, ferrite in light gray and martensite in dark gray [21]

3-2 Void nucleation modeling

As demonstrated by [6], the energy criterion necessary for the creation of new surfaces at the inclusion/matrix interface is satisfied at the onset of plastic deformation in materials containing inclusions bigger than about 25 nm in diameter. Only a stress criterion will therefore be used to model the interface decohesion in DP steels, as the observed inclusions are about 100 times larger than this. Argon's criterion [1] is a critical stress criterion stating that void nucleation occurs when a critical stress state, necessary for the interface decohesion, is reached in the material. This stress state involves a contribution of the hydrostatic stress σm and of the equivalent stress σeq:

σeq + σm = σC   (3)

where σC is the interface strength, i.e. the maximum stress that the interface can support without breaking. The interest in using Argon's criterion lies in the fact that it accounts for the triaxiality T, T being the ratio between σm and σeq:

T = σm/σeq   (4)

Combining Eq. (3) and Eq. (4), the criterion can be expressed as:

σeq (1 + T) = σC   (5)

In the original Argon's criterion, the triaxiality used is the macroscopic triaxiality. However, decohesion is a local
"��� !�� ��+����� ������������������#���� #�#�����������!����������������������.660����������! ������������������� ��������&����� � ����������������������#������������ #�����,���!���������������2������������ �������##����������!���������������� ��� #�� ��� "����������(1���������������##����������+��������!���������������� �� #�����#�����������!�������������%�����������)����# ������ + ��������������� ���������� ��������&����� �������������#�������$�����$������������!���������������# �� +�����&"����� ��� !���� #� !�.6601 �0 273 � ��$ � � 1 �0 � 2
� � �
� ��
)���! ��#�����&"����� �� #�������� �*��������� ��� �������� ������� ���� ���������� !��1 �0 283 1 �0 / 3� 41 � 1 �0 � 2 )�����#������� #������%���� �� #������� ��������� �� #�������� �*��������� ����������#������!��� χ�� � � ����������� � ��� � ����� � # � χ� �� � ��� � �������� � � ������ �ε�� � �� � ������� � ����� � # � σ� �� ��� � �� � ����!���� � �� � ��� � ���� � #� #������G!���������������#�����ε�����������!�����#� !�����%����������������� ��������#� !���������� �!������!����� #������! ��� ��!"��������������������,����������" ����+��������� ��������� ��������� �����������$������ #�;�/9�����;�;6��������"������� �# ���� # �������! �����!"������������� ��������!"���� � σ�0 �� �����,���#� !������&"���!����������������������������������� ��� ��� �����������+�������������!���# �!����.6;0��)��������� #�����,���!�����"���� #���������������2��������!�����#� !�����# �!���� ������� �.640�����! ��#����� �����&"������������#����� �� #��������"���������������� #�#�������5�6�����!����������5�(293 2 � ' �/ � 6 ( � 6 ( �5� ( � 5� 6 � +�����6(��������� ��!��#����� �� #�!���������������!��������������������� ��"����������������������(1����������� ������������� !�����������"�������������� ��� ������ #�����!�������������;�8'�+��E��>�������������+ �,�� ���� �A������ ������"������� �� #� !�������������������.640������������ #�5�( ������� ���9';��1���)������������ #�����#��������"����������,������/5;��1���)��� �� ���� �� #�χ���������! 
��������������� �������"���!���+����������# �!��� ��������������@���8��)��������� #����������#���*� � ���������# ���������# �������"���������������� #��������+������������ ���������� ��������������%����� ��� ���//;;��1���� � � !"����� �� #��������# ��������������� #����������#�������+���� �����"�������������#�������# ����� ���##���������� ���������������� )�����6��)��� ����� #�!������������������!��������� �������������������� ����""� �������������� #�#������G!��������������� � ����������������� +��������� #���������������I �������� �������!��� �����������# ��������� ����������� #������������� ��2���� � +����������!���� #�� ������������ �������������������������� ! ���"� ��!����3��)��� ������������������ ��������+���� ���� � !��� �������"� ���� �+���� �������#����� �������������� !"���� ������ ��������!� #�����"���������������
$��%*���������� �� #�����#������G!���������������#����������χ )�����6�$����� #�#�������"�������������#������������ 1������� �����#�������������2�1�3 >�#� J6�' /;;;�/7;; .650 ��? //;;�/4;; .670 /6;;�6;;; .68�690 @�'�
As pointed out in [5], void nucleation in DP steels occurs during the entire deformation process, i.e. each single interface probably exhibits a different value of σC and each interface is possibly subjected to a scattered value of χ. Interface decohesion is thus a progressive phenomenon, starting at a strain of 0.18 (in smooth specimens) but continuing after this value of strain, and the evolution of the cavity density has to be modeled as a function of the local strain. Fig.5 shows that the studied material exhibits two different nucleation regimes at low and at high strain. Firstly, the number of cavities increases slowly and linearly. In a second regime, many voids appear exponentially. This experimental observation leads us to propose the following empirical equation, based on the local criterion of decohesion and involving the parameters χ and σC:

dN/dεloc = A (χ/σC) (1 + N/N0)   (9)

A and N0 being two constants (expressed in the same unit as N, for instance in mm-3). The two extreme regimes are well described by this empirical expression. When N ≪ N0, the following approximation can be made:

dN/dεloc ≈ A χ/σC   (10)

The interface decohesion is then linearly controlled by the local stress χ, which increases with the applied strain. In the second regime, when N ≫ N0, the approximation becomes:

dN/dεloc ≈ A (χ/σC) (N/N0)   (11)

The evolution rate of N with strain thus depends on N itself, transcribing a self-catalytic effect and hence the exponential acceleration of the number of cavities. We therefore have a means to integrate the value of N, accounting for the local triaxiality at the interface. The assessment of the model is firstly done using the experimental data from the smooth specimens. The values of the two constants A and N0 giving the best fit between modeling and experimental data for the smooth sample are A = 4500 mm-3 and N0 = 1250 mm-3. These values, when used in the framework of the notched samples, also show a satisfactory agreement, as shown in Fig.8. This validates that using a local value of the triaxiality as a driving force in an interface fracture criterion is a reasonable procedure.
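A minimal numerical sketch of the empirical law, Eq. (9), is given below. The constants A and N0 are those quoted in the text and σC is the estimated interface strength, but the evolution of χ with strain is a deliberately simple, hypothetical linear law introduced only to make the two regimes visible; it is not the authors' calculated interface stress:

```python
import math

def integrate_nucleation(A, N0, sigma_C, chi_of_eps, eps_max, n_steps=1000):
    """Forward-Euler integration of the empirical nucleation law, Eq. (9):
        dN/deps = A * (chi / sigma_C) * (1 + N / N0)
    `chi_of_eps` returns the local interface stress (MPa) at a given
    local strain; N starts from zero cavities per mm^3.
    """
    deps = eps_max / n_steps
    N, eps = 0.0, 0.0
    for _ in range(n_steps):
        dN = A * (chi_of_eps(eps) / sigma_C) * (1.0 + N / N0)
        N += dN * deps
        eps += deps
    return N

# constants fitted on the smooth specimen (values from the text)
A, N0, sigma_C = 4500.0, 1250.0, 1100.0
# hypothetical linear growth of the local stress with strain (illustrative)
chi = lambda eps: 2000.0 * eps

N_low = integrate_nucleation(A, N0, sigma_C, chi, eps_max=0.2)   # near-linear regime
N_high = integrate_nucleation(A, N0, sigma_C, chi, eps_max=0.8)  # self-catalytic regime
```

With N ≪ N0 at low strain the growth is quasi-linear in χ, while at large strain the (1 + N/N0) factor produces the exponential-like acceleration described in the text.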
Fig.8 Comparison of the prediction of the nucleation model and experimental data [21]. The constants of the model are fitted to reproduce the experimental evolution in the case of the smooth sample and are then used “as-is” to calculate the evolution of the notched sample.

4 Conclusions and perspective

Using in-situ tensile tests during X-ray tomography, the present study has shown that it is possible to obtain quantitative information about damage. Concerning the sites of nucleation, optical micrographs of fractured samples have shown that most cavities appear by decohesion of the ferrite/martensite interface. A value of the critical interface strength (1100 MPa)
has been estimated for the onset of nucleation. The evolution of the void density has then been modeled according to an analytical approach based on a local version of the Argon decohesion criterion and accounting for the triaxiality. The model has been fitted to the experimental data on the smooth samples. The identified parameters were then used for the notched samples and also led to a satisfactory agreement for the predicted evolution of the number of nucleated cavities. Some improvements could be foreseen in the present approach, particularly concerning the value of the interface strength in DP steels. This strength probably depends on the carbon content of the martensite and on any tempering. These effects have to be investigated in more detail before being modeled.

Acknowledgments The authors would like to thank the ESRF for the provision of synchrotron radiation at the ID15 beamline through the ma560 long-term project.

References
[1] Argon AS, Im J, Safoglu R, Cavity formation from inclusions in ductile fracture, Metallurgical Transactions A, Volume 6, Issue 4, pp 825-837, 1975.
[2] Goods SH, Brown LM, Nucleation of cavities by plastic deformation – Overview, Acta Metallurgica, Volume 27, Issue 1, pp 1-15, 1979.
[3] Beremin FM, Cavity formation from inclusions in ductile fracture of A508 steel, Metallurgical and Materials Transactions A, Volume 12, Issue 5, pp 723-731, 1981.
[4] Steinbrunner DL, Matlock DK, Krauss G, Void formation during tensile testing of dual phase steels, Metallurgical Transactions A, Volume 19, Issue 3, pp 579-589, 1988.
[5] Avramovic-Cingara G, Saleh CAR, Jain M, Wilkinson DS, Void Nucleation and Growth in Dual-Phase Steel 600 during Uniaxial Tensile Testing, Metallurgical and Materials Transactions A, Volume 40, pp 3117-3127, 2009.
[6] Tanaka K, Mori T, Nakamura T, Cavity formation at the interface of a spherical inclusion in a plastically deformed matrix, Philosophical Magazine, Volume 21, Issue 170, pp 267-279, 1970.
[7] Thomason PF, Ductile Fracture of Metals, Pergamon Press, Oxford, 1990.
[8] Kwon D, Asaro RJ, A study of void nucleation, growth, and coalescence in spheroidized-1518 steel, Metallurgical Transactions, Volume 21, Issue 1, pp 91-101, 1990.
[9] Walsh JA, Jata KV, Starke EA, The influence of Mn dispersoid content and stress state on ductile fracture of 2134 type Al-alloys, Acta Metallurgica, Volume 37, Issue 11, pp 2861-2871, 1989.
[10] Bugat S, Besson J, Pineau A, Micromechanical modeling of the behavior of duplex stainless steels, Computational Materials Science, Volume 16, Issue 1-4, pp 158-166, 1999.
[11] Needleman A, A continuum model for void nucleation by inclusion debonding, Journal of Applied Mechanics, Volume 54, pp 525-531, 1987.
[12] Needleman A, Tvergaard V, An analysis of ductile rupture in notched bars, Journal of the Mechanics and Physics of Solids, Volume 32, Issue 6, pp 461-490, 1984.
[13] Nutt SR, Needleman A, Void nucleation at fiber ends in Al-SiC composites, Scripta Materialia, Volume 21, Issue 5, pp 705-710, 1987.
[14] Buffiere JY, Maire E, Cloetens P, Lormand G, Fougères R, Characterization of internal damage in a MMCp using x-ray synchrotron phase contrast microtomography, Acta Materialia, Volume 47, Issue 5, pp 1613-1625, 1999.
[15] Martin CF, Josserond C, Salvo L, Blandin JJ, Cloetens P, Boller E, Characterisation by X-ray micro-tomography of cavity coalescence during superplastic deformation, Scripta Materialia, Volume 42, Issue 4, pp 375-381, 2004.
[16] Babout L, Maire E, Fougeres R, Damage initiation in model metallic materials: X-ray tomography and modelling, Acta Materialia, Volume 52, Issue 8, pp 2475-2487, 2004.
[17] Maire E, Bouaziz O, Di Michiel M, Verdu C, Initiation and growth of damage in a dual-phase steel observed by X-ray microtomography, Acta Materialia, Volume 56, Issue 18, pp 4954-4964, 2008.
[18] Bron F, Besson J, Pineau A, Ductile rupture in thin sheets of two grades of 2024 aluminum alloy, Materials Science and Engineering A, Volume 380, Issue 1-2, pp 356-364, 2004.
[19] Abramoff MD, Magelhaes PJ, Ram SJ, Image Processing with ImageJ, Biophotonics International, Volume 11, Issue 7, pp 36-42, 2004.
[20] Bridgman PW, Effects of High Hydrostatic Pressure on the Plastic Properties of Metals, Reviews of Modern Physics, Volume 17, Issue 1, pp 3-14, 1945.
[21] Landron C, Bouaziz O, Maire E, Characterization and modeling of void nucleation by interface decohesion in dual phase steel, Scripta Materialia, Volume 63, Issue 10, pp 973-976, 2010.
[22] Helbert AL, Feaugas X, Clavel M, Effects of microstructural parameters and back stress on damage mechanisms in
alpha/beta titanium alloys, Acta Materialia, Volume 46, Issue 3, pp 939-951, 1998.
[23] Allain S, Bouaziz O, Microstructure based modeling for the mechanical behavior of ferrite-pearlite steels suitable to capture isotropic and kinematic hardening, Materials Science and Engineering A, Volume 496, Issue 1-2, pp 329-336, 2008.
[24] Grange RA, Hribal CR, Porter LF, Hardness of tempered martensite in carbon and low-alloy steels, Metallurgical Transactions A, Volume 8, Issue 11, pp 1775-1787, 1977.
[25] Kosco JB, Koss DA, Ductile fracture of mechanically alloyed iron-yttria alloys, Metallurgical Transactions A, Volume 24, Issue 3, pp 681-687, 1993.
[26] Qiu H, Mori H, Enoki M, Kishi T, Development of a three-dimensional model for void coalescence in materials containing two types of microvoids, ISIJ International, Volume 39, Issue 4, pp 358-364, 1999.
[27] LeRoy G, Embury JD, Edwards G, Ashby MF, A model of ductile fracture based on the nucleation and growth of voids, Acta Metallurgica, Volume 29, Issue 8, pp 1509-1522, 1981.
[28] Kwon D, Interfacial decohesion around spheroidal carbide particles, Scripta Metallurgica, Volume 22, Issue 7, pp 1161-1164, 1988.
In-situ synchrotron-radiation computed laminography observation of ductile fracture
T.F. Morgeneyer a, L. Helfen b,d, I. Sinclair c, F. Hild e, H. Proudhon a, F. Xu b, T. Baumbach b, J. Besson a
a Mines ParisTech, Centre des Matériaux, CNRS UMR 7633, BP 87, 91003 Evry Cedex, France
b ANKA/Institute for Synchrotron Radiation, Karlsruhe Institute of Technology, Germany
c Materials Research Group, School of Engineering Sciences, University of Southampton, Southampton SO17 1BJ, UK
d European Synchrotron Radiation Facility/Experimental Division (ESRF), BP 220, 6 rue J. Horowitz, F-38043 Grenoble Cedex, France
e LMT-Cachan, ENS Cachan/CNRS/UPMC/PRES UniverSud Paris, 61 avenue du Président Wilson, 94235 Cachan Cedex, France

Synchrotron-radiation computed laminography (SRCL) allows three-dimensional imaging at high resolution (~1 µm) of objects that are thin (~1 mm) but extended laterally in two dimensions. This represents a major advantage over computed tomography, which, because of the loading conditions, can typically only investigate samples elongated in one direction. Here SRCL is used to observe ductile crack initiation and propagation in high-strength aluminium alloy sheet for aerospace applications. Several load steps are applied and permit us to follow the evolution of damage and of the crack path. An attempt is made to measure strains via a digital volume correlation technique.

Introduction

Fracture resistance and toughness are critical design criteria for thin sheet materials in aerospace applications and require an in-depth understanding of the underlying physical mechanisms to enhance material performance [1]. In the past, 2D observation techniques and SEM post-mortem fractography have mainly been used to assess fracture mechanisms [2]. Synchrotron tomography is increasingly used to assess fracture mechanisms by observing arrested cracks at the initiation [3] and propagation [4] stages. Unprecedented insights into void growth and also into intergranular ductile fracture could be gained [5].
In-situ ductile crack growth has been observed [6] via in situ loading of a 2024 aluminium alloy sample with a 1.3 mm × 1.0 mm cross-section, which made it possible to confirm fracture mechanics models. The sample dimensions were, however, far from engineering conditions. With the progress made in synchrotron laminography there is a clear opportunity to observe the damage mechanisms during crack propagation in advanced engineering materials [7], which allows for unprecedented insights into damage mechanisms for sample geometries that can reproduce stress states similar to those experienced in service. In situ loading of notched carbon fibre-epoxy laminate samples has, for instance, provided insights into delamination processes. This technique is applied in the present study to assess ductile fracture initiation and propagation in a ductile 2XXX alloy for aerospace applications.
With the recent progress in digital volume correlation (DVC), it has become possible to measure strains in three dimensions thanks to the natural markers/speckle caused by attenuation differences between the phases present in the material [8]. DVC has successfully been performed on tomography data of fatigue crack propagation in nodular cast iron. An attempt is made in the present work to apply this technique to in situ synchrotron laminography data to assess the feasibility of the measurement, to obtain insights into strain/damage relationships, and to validate models [9].

Experimental

For the experiment, a commercial ductile 2139 Al-Cu alloy in T3 condition for aerospace applications has been machined symmetrically from 3.2 mm down to 1 mm thickness. Testing has been performed in the T (long transverse)-L (rolling direction) configuration. The material exhibits an initial void volume fraction of ~0.3%. For further details on the material microstructure and mechanical properties the reader is referred to [3,4]. The sample geometry shown in figure 1a has been used. The notch has been machined via electro-discharge machining (EDM), resulting in a notch diameter close to the EDM wire diameter of 0.3 mm. The loading has been achieved via a displacement-controlled wedging device that prescribes the specimen crack mouth opening displacement (CMOD), similar to the one used in [7] (see figure 1).
Figure 1: In situ loading device and 1 mm thick notched sample

An anti-buckling device was used to prevent significant buckling and out-of-plane motion of the sample. The entire rig was mounted on a dedicated plate that was removed from the SRCL disc for every loading step. The loading was applied via stepwise opening of the wedge. Before every loading step a scan was performed. The scanned regions of interest (ROI) were moved with the propagating crack so as to image its tip and the damaged material ahead of the notch/crack tip. Twenty scans were carried out, but only results from six scans will be shown hereafter. Laminography is an alternative approach to standard tomography imaging that overcomes the aforementioned sample size/shape problems. The technique is fundamentally similar to tomography, except that the sample is imaged at an inclined angle relative to the beam [7]. Previous laminography work at the ESRF has yielded very satisfactory results [10]. The further development of the method exploiting propagation-based phase-contrast imaging [11] opens the way for new applications related to weakly absorbing/weakly contrasted structures [12,13].
Imaging was performed on beamline ID19 at the ESRF using a monochromatic X-ray beam at an energy of 25 keV. Volumes were reconstructed using in-house software from 1500 angularly equidistant radiographs; the exposure time of each projection was 250 ms. Due to the experimental set-up, the sample-to-detector distance was 70 mm, leading to strong phase contrast. The scanned volume was ~1 mm³ with a voxel size of 0.7 µm. For the 3D void representation, a simple grey-value threshold has been used to segment the voids; a VTK software rendering is used for the 3D images.

Results and discussion

Figure 2 shows 2D sections of the laminography data at different openings (CMOD). The section is taken at mid-specimen thickness for every scan. In figure 2a the machined notch can be observed. The initial porosity can clearly be seen in black, the intermetallics in white and the aluminium matrix in grey, in analogy with observations of this or similar materials via tomography [3-5]. The white edges around the porosities are attributed to strong phase contrast. Some ring artefacts can be seen. Figures 2b-f clearly show the evolution of damage. The feasibility of the in-situ experiment is proven, namely, SRCL allows for unprecedented observation of ductile crack initiation and propagation in industrial-grade materials at length scales realistic for engineering applications. Substantial void growth can be seen ~50 µm ahead of the notch (figure 2b). The coalescence of the two voids can be seen in figure 2c. The narrow coalescence band is oriented at ~45° with respect to the loading direction. This result shows that crack initiation does not start immediately from the notch but somewhat ahead of it (50 µm in this case). At a CMOD of 1.875 mm, coalescence of the crack with the notch and further crack propagation occur.
It can be seen that both large voids have changed shape in this coalescence step with the notch. The crack plane orientation is not normal to the loading direction but inclined. The coalescence mechanisms appear to occur in shear bands, which is consistent with findings made on the sample surface and with theoretical considerations [14]. Figure 2f shows the crack at CMOD = 2.375 mm; it has propagated and reached a length of ~1 mm. The void growth ahead of the propagating crack appears substantially smaller than at crack initiation. The two-dimensional representation is only adequate as long as it shows the same voids and particles in the mid-thickness plane. This plane can be assumed to show the evolution of the same microstructural features at least during the first loading steps. Once the fracture process becomes asymmetric, e.g. due to slant fracture, one plane will no longer show the same material features for all loading steps. As laminography provides 3D data, it offers the opportunity to show the evolution of the voids in 3D. This is shown in figure 3, where voids from a 105 µm thick slice symmetric to the mid-plane are represented. Only voids are made visible here. (The thickness is not corrected for transverse contraction, as the latter is inhomogeneous in the plane.) The same loading steps as in figure 2 are shown. Comparing the initial microstructure (figure 3a) with the deformed voids, it can be seen that void growth takes place around the entire notch. Many more voids can be observed than in the 2D representation. The void growth and orientation appear highly directional. The voids change orientation around the notch during loading and orient themselves tangentially to the notch circumference. The 3D representation also confirms the crack initiation from the interior of the material. However, in figure 3d it can be seen that the two large voids are already connected with the notch, which was not discernible on the 2D sections at the mid-plane.
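The directionality of void growth noted above can be quantified from the segmented voids. The sketch below estimates the principal dimensions and axes of a single void from the second moments of its voxel coordinates, a standard technique and not the authors' actual analysis code; the synthetic elongated void and the use of the 0.7 µm voxel size are illustrative assumptions:

```python
import numpy as np

def void_principal_axes(mask, voxel_size_um=0.7):
    """Principal dimensions and directions of one segmented void.

    Eigen-decomposition of the covariance (second-moment) matrix of the
    void's voxel coordinates: the eigenvectors give the void's principal
    directions, and the square roots of the eigenvalues its relative
    extent along them (approximate ellipsoid semi-axes).
    """
    coords = np.argwhere(mask) * voxel_size_um
    centered = coords - coords.mean(axis=0)
    cov = centered.T @ centered / len(coords)
    eigvals, eigvecs = np.linalg.eigh(cov)       # ascending eigenvalues
    half_lengths = 2.0 * np.sqrt(eigvals)
    return half_lengths, eigvecs

# synthetic elongated void: a 20 x 4 x 4 voxel box in a small volume
mask = np.zeros((32, 32, 32), dtype=bool)
mask[6:26, 14:18, 14:18] = True
lengths, axes = void_principal_axes(mask)
# aspect ratio of the longest to the shortest principal dimension
elongation = lengths[-1] / lengths[0]
```

Applied per labeled void, the dominant eigenvector would give the local growth direction, e.g. to check the tangential alignment around the notch described above.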
Figure 2: 2D sections at sheet mid-thickness from in-situ SRCL data for a) CMOD = 0.5 mm, b) CMOD = 1.5 mm, c) CMOD = 1.625 mm, d) CMOD = 1.75 mm, e) CMOD = 1.875 mm, f) CMOD = 2.375 mm
Figure 3: 3D representation of voids and the crack in a numerically cut 105 µm thick slice at the sample centre for a) CMOD = 0.5 mm, b) CMOD = 1.5 mm, c) CMOD = 1.625 mm, d) CMOD = 1.75 mm, e) CMOD = 1.875 mm, f) CMOD = 2.375 mm

Figure 4 shows the displacement field measured by DVC, with the same code as that used in [8], on the present laminography data by registering two volumes located in the square box indicated in figure 4a. The reconstructed volumes corresponding to CMOD = 0.5 mm and CMOD = 0.625 mm are compared. The volume of the compared cubes is taken symmetrically with respect to the sheet mid-thickness. The measured displacement field is qualitatively correct, namely, an expansion in the T direction is detected that is consistent with the loading along this direction. This preliminary result prompts us to continue the analysis for further loading steps and to derive the strain fields. These measurements may then open the possibility of gaining insight into the deformation field and the strain localization mechanisms leading to crack
bifurcation. A relationship between void growth measurements and the local levels of stress triaxiality and strain may then become accessible.
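The registration underlying such a DVC measurement can be illustrated, at its simplest, by estimating the integer-voxel shift between two subvolumes from the cross-correlation peak. This is only a sketch; the code of [8] is far more elaborate, with subvoxel accuracy and full displacement fields.

```python
import numpy as np

def subvolume_shift(ref, moved):
    """Estimate the integer-voxel rigid shift of `moved` relative to `ref`
    from the peak of their circular cross-correlation, the basic building
    block of digital volume correlation."""
    F = np.fft.fftn(ref)
    G = np.fft.fftn(moved)
    cc = np.fft.ifftn(np.conj(F) * G).real          # cross-correlation map
    peak = np.unravel_index(np.argmax(cc), cc.shape)
    # wrap the peak position into the range [-N/2, N/2)
    return tuple(int(p) if p < s // 2 else int(p) - s
                 for p, s in zip(peak, cc.shape))

rng = np.random.default_rng(1)
ref = rng.random((32, 32, 32))
moved = np.roll(ref, shift=(2, -3, 1), axis=(0, 1, 2))
print(subvolume_shift(ref, moved))  # (2, -3, 1)
```

Repeating this over a grid of subvolumes yields a discrete displacement field, from which strain fields can be derived by differentiation.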
Figure 4: a) 2D section of the scan at CMOD = 0.5 mm indicating the region of interest for DVC; b) measured displacement field between CMOD = 0.5 mm and 0.625 mm, with the colorbar indicating displacements in the T direction expressed in voxels

Conclusion

Ductile fracture of a thin (but laterally extended) aluminium alloy sheet has been successfully observed for the first time thanks to the use of synchrotron radiation computed laminography. The artefacts introduced by this technique are small enough to clearly observe ductile fracture mechanisms involving void nucleation, growth and coalescence under conditions that are realistic for engineering applications. Damage can easily be extracted via simple grey-value thresholding. Fracture was found to initiate ahead of the notch and to join the notch only after substantial void growth. The crack path is not normal to the loading direction but inclined. The obtained data can be used for in-situ void growth measurements. An initial attempt to perform digital volume correlation on the laminography data led to qualitatively successful measurements of the displacement field, which encourages further strain analyses.

References
[1] Bron F, Besson J, Pineau A. Ductile rupture in thin sheets of two grades of 2024 aluminum alloy, Mater Sci Eng A 2004;380:356-64.
[2] Lautridou JC, Pineau A. Crack initiation and stable crack growth resistance in A508 steels in relation to inclusion distribution, Eng Fract Mech 1981;15:55-71.
[3] Morgeneyer TF, Besson J, Proudhon H, Starink MJ, Sinclair I. Experimental and numerical analysis of toughness anisotropy in AA2139 Al-alloy sheet, Acta Mater 2009;57:3902-15.
[4] Morgeneyer TF, Starink MJ, Sinclair I. Evolution of voids during ductile crack propagation in an aluminium alloy sheet toughness test studied by synchrotron radiation computed tomography, Acta Mater 2008;56:1671-9.
[5] Morgeneyer TF, Starink MJ, Wang SC, Sinclair I. Quench sensitivity of toughness in an Al alloy: Direct observation and analysis of failure initiation at the precipitate-free zone, Acta Mater 2008;56:2872-84.
[6] Toda H, Maire E, Yamauchi S, Tsuruta H, Hiramatsu T, Kobayashi M. In situ observation of ductile fracture using X-ray tomography technique, Acta Mater 2011;59:1995-2008.
[7] Moffat AJ, Wright P, Helfen L, Baumbach T, Johnson G, Spearing SM, Sinclair I. In situ synchrotron computed laminography of damage in carbon fibre-epoxy [90/0]s laminates, Scripta Mater 2010;62:97-100.
[8] Limodin N, Rethore J, Buffiere JY, Hild F, Roux S, Ludwig W, Rannou J, Gravouil A. Influence of closure on the 3D propagation of fatigue cracks in a nodular cast iron investigated by X-ray tomography and 3D volume correlation, Acta Mater 2010;58:2957-67.
[9] McMeeking RM. Finite deformation analysis of crack-tip opening in elastic-plastic materials and implications for fracture, J Mech Phys Solids 1977;25:357-81.
[10] Helfen L, Myagotin A, Rack A, Pernot P, Mikulik P, Di Michiel M, Baumbach T. Synchrotron-radiation computed laminography for high-resolution three-dimensional imaging of flat devices, Phys Status Solidi A 2007;204:2760.
[11] Cloetens P, Pateyron-Salome M, Buffiere JY, Peix G, Baruchel J, Peyrin F, Schlenker M. Observation of microstructure and damage in materials by phase sensitive radiography and tomography, J Appl Phys 1997;81:5878-85.
[12] Krug K, Porra L, Coan P, Tauber G, Wallert A, Dik J, Coerdt A, Bravin A, Elyyan M, Helfen L, Baumbach T. Relics in medieval altarpieces. Combining X-ray tomographic, laminographic and phase-contrast imaging to visualize thin organic objects in paintings, J Synchrotron Rad 2008;15:55-61.
[13] Helfen L, Baumbach T, Cloetens P, Baruchel J. Phase-contrast and holographic computed laminography, Appl Phys Lett 2009;94:104103.
[14] Russo VJ, Chakrabarti AK, Spretnak JW. The role of pure shear strain on the site of crack initiation in notches, Metall Trans A 1977;8A:729-40.
Understanding the mechanical behaviour of a high manganese TWIP steel by means of in situ 3D X-ray tomography
D. Fabrègue(1,2), C. Landron(1,2), C. Béal(1,2,3), X. Kleber(1,2), E. Maire(1,2), M. Bouzekri(3) (1) Université de Lyon, CNRS (2) INSA-Lyon, MATEIS UMR5510, F-69621 Villeurbanne, France (3) ArcelorMittal Research, Voie Romaine-BP30320, F-57283 Maizières-lès-Metz, France

ABSTRACT High manganese TWIP (twinning induced plasticity) steels exhibit very high mechanical properties compared to other grades. Indeed, their mechanical strength can reach 1.5 GPa and their fracture strain can go up to 60%. However, the damage mechanisms governing their ductility are not well understood. To gain a better understanding of these mechanisms, in situ tensile tests have been carried out at the European Synchrotron Radiation Facility. During the tensile test no necking could be observed, as already reported for this type of steel. Moreover, the number of cavities in a given volume does not evolve markedly during deformation, meaning that void nucleation is weak in the TWIP steel considered. This leads to a number of voids at fracture that is very low compared to other steels (interstitial free steel, dual phase, ...). Moreover, the growth of cavities with local strain seems to be equivalent to that of other austenitic or ferritic steels. Finally, shear bands can be observed in the sample, and some of the cavities seem to be correlated with them.

1. Introduction High manganese austenitic steels are promising candidates for the automotive industry due to their excellent mechanical properties. Indeed, they exhibit at the same time very high strength (higher than 1000 MPa in tension) and high ductility (about 50% at room temperature). Thus they can absorb a lot of plastic energy before failure thanks to their unusual work-hardening capacity. This comes from the fact that their stacking fault energy is low enough to permit deformation simultaneously by twinning and by dislocation glide, without any martensitic transformation that could be detrimental to the ductility.
Concerning their fracture behaviour, these alloys do not usually exhibit necking at room temperature but slant fracture, because they seem to be sensitive to shearing. Even though Fe-Mn-C alloys have been known for a long time (Hadfield already presented them in the 1880s), their deformation and fracture behaviour are still not well understood, and there is a real need to investigate them with novel techniques such as 3D X-ray tomography, which permits observation of the damage mechanisms in three dimensions. Recently, the same type of steel was investigated by post mortem 3D tomography, but in a shearing deformation mode [1]. The present study is dedicated to tensile loading, and the evolution of damage is quantified for different strains. 2. Experimental Procedure The TWIP steel was cut from a 1 mm thick sheet obtained by hot rolling and an annealing thermal treatment. The designation of the steel is Fe-22Mn-0.6C (composition in mass percent, balance iron). This steel is fully austenitic at room temperature with an average grain diameter of about 2-3 µm. Micro tensile specimens were machined by spark erosion according to Figure 1.
T. Proulx (ed.), Optical Measurements, Modeling, and Metrology, Volume 5, Conference Proceedings of the Society for Experimental Mechanics Series, DOI 10.1007/978-1-4614-0228-2_4, © The Society for Experimental Mechanics, Inc. 2011
Fig. 1 Specimen dimensions for in situ X-ray tomography

X-ray microtomography has been used to quantify damage during in situ tensile tests. The tomography set-up is the one located at the ID15 beamline of the European Synchrotron Radiation Facility (ESRF) in Grenoble, France. Tomography acquisition was realised with a voxel size of (1.6 µm)³. Initial reconstructed volumes were median filtered and simply thresholded to differentiate, by their absorption difference, the material from the voids. In order to correlate the distribution of voids with the strain, the local value of the strain is obtained by considering the minimal section area S and using the relation:
εloc = ln(S0 / S)
where S0 is the initial section of the sample. The local strain is then calculated at each step. Using this equation implies that the volume fraction of voids is small enough to keep the total volume unchanged. The tensile test starts with a stress triaxiality of 0.33. However, after some deformation, the triaxiality can evolve. This is taken into account by using the Bridgman formula [2], which considers the curvature radius RS of the surface of the sample:
T = 1/3 + ln(1 + a / (2 RS))
where a is the radius of the minimal section.

3. Results and discussion The tensile curve of the sample is given in Figure 2. As can be seen, the mechanical properties of the steel are outstanding, with a yield strength of about 460 MPa, an ultimate tensile strength of about 920 MPa and a fracture strain of about 0.55. One can also notice the high hardening due to the twinning phenomenon. These values are in accordance with the supplier specifications and explain the promising use of these steels in safety parts in the automotive industry.
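The two relations above can be evaluated directly; the numbers used below are illustrative, not measured values from this test.

```python
import math

def local_strain(S0, S):
    """eps_loc = ln(S0 / S): local strain from the reduction of the minimal
    cross-section, valid while the void volume fraction stays negligible."""
    return math.log(S0 / S)

def bridgman_triaxiality(a, RS):
    """Bridgman estimate of the stress triaxiality in a necked section:
    T = 1/3 + ln(1 + a / (2 RS))."""
    return 1.0 / 3.0 + math.log(1.0 + a / (2.0 * RS))

# illustrative numbers: a 40% section reduction and a gentle neck curvature
print(local_strain(1.0, 0.6))          # ~0.51
print(bridgman_triaxiality(0.4, 5.0))  # ~0.37, i.e. slightly above 1/3
```

As the neck sharpens (RS decreases), the Bridgman correction raises T above the uniaxial value of 1/3.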
Fig. 2 Stress/strain curve obtained during the in situ tensile test

Figure 3 shows the 3D tomography of the TWIP steel investigated just before final fracture. For the sake of comparison, the same state for a DP steel from [3] is also shown, with the voids highlighted in red. The difference in fracture behavior between a steel with a ferritic matrix and the austenitic TWIP steel is then obvious. While the former exhibits pronounced necking and a very high density of voids before fracture, the TWIP steel shows no localization of the deformation and only a small density of voids. This qualitative observation is checked by plotting the number of voids as a function of the local strain in figure 4. This number is compared with the same experiment on an interstitial free (IF) steel, chosen for comparison in order to have two single-phase steels. The difference in damage behavior is striking. The IF steel exhibits three damage stages: a high density of cavities at small strains, then a steep increase at larger strains, and finally a saturation of the number of cavities before fracture. This last stage is due to the coalescence phenomenon leading to the fracture of the sample. The austenitic TWIP steel exhibits a very different behavior: it contains only a small number of voids and shows only a small increase of this number up to the fracture of the sample. It is worth noticing that the number of voids at fracture is quite different between the two steels: around 10000 per mm³ for the IF steel and 1000 per mm³ for the TWIP steel. This density of voids is very low compared with all other types of ductile material at fracture [4, 5, 6]. The fracture surface, however, partly explains this point: the fracture surface of this austenitic steel is characterized by dimples with a very small diameter (figure 5). Thus the smallest dimples are not seen at the spatial resolution used here, of about (1.6 µm)³, and the actual number of cavities must be larger than the one measured. Experiments with better spatial resolution are thus needed to fully characterize the damage behavior of such a steel.
Fig. 3 3D views of the damage at the center of the specimen just before fracture for a) the TWIP steel strained at εloc = 0.5 and b) the DP steel.
Fig. 4 Density of voids (N/mm³) as a function of the local strain for the TWIP steel and a ferritic (IF) one.
Fig. 5 SEM fractography of the in situ tensile test

As could be deduced from the absence of necking even at strains close to fracture, the triaxiality in the TWIP steel remains almost constant, as can be seen in Figure 6. Considering the Rice and Tracey law, cavity growth should therefore be limited in the TWIP steel, and nucleation should be the major phenomenon governing the damage behavior. To check this point, the evolution of the cavity size was calculated from the 3D images and plotted as a function of the local strain. It can be seen that if the entire population of cavities is considered, the average cavity diameter remains constant. This could be because, in that case, newly nucleated cavities are also counted. It is thus worth considering only the 20 largest voids, which leads to the conclusion that growth does occur during deformation. This growth is almost equivalent to that experienced by other steel grades. Thus either the mechanism governing cavity growth is different from that of other steels, making growth possible at low triaxiality, or the triaxiality could locally be higher than the macroscopic one. One more interesting feature of the damage behavior of this TWIP austenitic steel is shown in figure 7, a 3D picture of the TWIP steel just before fracture. Shear bands are clearly visible in the material, in accordance with a previous study on the same alloy in a different stress state [1]. Preliminary observations seem to indicate that voids are preferentially situated on these shear bands. Thus one could think that, due to these shear bands, the triaxiality is locally higher, allowing some growth of the cavities.
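The quantity tracked here, the average equivalent diameter of the 20 largest voids, can be sketched as follows. The function name and demo geometry are hypothetical; the actual analysis was performed on the reconstructed tomography volumes.

```python
import numpy as np
from scipy import ndimage

def mean_equivalent_diameter(binary_voids, voxel_size, n_largest=20):
    """Average equivalent (equal-volume sphere) diameter of the n largest
    voids in a binary 3D image, in the same units as voxel_size."""
    labels, n = ndimage.label(binary_voids)
    vox_counts = ndimage.sum(binary_voids, labels, index=np.arange(1, n + 1))
    largest = np.sort(vox_counts)[::-1][:n_largest]
    vols = largest * voxel_size**3                  # voxel counts -> um^3
    diameters = (6.0 * vols / np.pi) ** (1.0 / 3.0) # d of equal-volume sphere
    return diameters.mean()

# one spherical void of radius ~5 voxels at a 1.6 um voxel size
z, y, x = np.ogrid[:32, :32, :32]
voids = (z - 16)**2 + (y - 16)**2 + (x - 16)**2 < 25
print(mean_equivalent_diameter(voids, voxel_size=1.6))  # close to 2*5*1.6 = 16 um
```

Restricting the average to the largest voids avoids the bias introduced by freshly nucleated, hence small, cavities.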
Fig. 6 a) Evolution of the triaxiality during the tensile test and b) evolution of the average equivalent diameter with the local strain, for the total population of voids and for the 20 largest voids
Fig. 7 3D picture of the TWIP steel just before fracture showing shear bands
4. Conclusions and perspectives 3D X-ray tomography has been used to gain better insight into the damage behavior of an austenitic TWIP steel. It shows that although the triaxiality remains constant at 0.33, some cavity growth occurs. This could be due to the presence of shear bands causing an increase of the local triaxiality and permitting cavity growth. Moreover, the evolution of the number of cavities is different from that of other steels. However, due to the small size of the cavities, tomography at higher resolution is needed to obtain a better estimate of the real number of voids and to propose a clear scenario for the final fracture.
References
[1] Lorthios J, Nguyen F, Gourgues AF, Morgeneyer TF, Cugy P, Damage observation in a high manganese austenitic steel by synchrotron radiation computed tomography, Scripta Materialia, Volume 63, Issue 12, pp 1220-1223, 2010.
[2] Bridgman PW, Effects of high hydrostatic pressure on the plastic properties of metals, Reviews of Modern Physics, Volume 17, Issue 1, pp 3-14, 1945.
[3] Landron C, Bouaziz O, Maire E, Characterization and modeling of void nucleation by interface decohesion in dual phase steel, Scripta Materialia, Volume 63, Issue 10, pp 973-976, 2010.
[4] Martin CF, Josserond C, Salvo L, Blandin JJ, Cloetens P, Boller E, Characterisation by X-ray microtomography of cavity coalescence during superplastic deformation, Scripta Materialia, Volume 42, Issue 4, pp 375-381, 2004.
[5] Babout L, Maire E, Fougeres R, Damage initiation in model metallic materials: X-ray tomography and modelling, Acta Materialia, Volume 52, Issue 8, pp 2475-2487, 2004.
[6] Bron F, Besson J, Pineau A, Ductile rupture in thin sheets of two grades of 2024 aluminum alloy, Materials Science and Engineering A, Volume 380, Issue 1-2, pp 356-364, 2004.

Acknowledgements The authors would like to thank O. Bouaziz and M. Goune from ArcelorMittal for fruitful discussions.
Mechanical properties of monofilament entangled materials
Loïc Courtois(a), Eric Maire(a), Michel Perez(a), Yves Brechet(b), David Rodney(b)
(a) Université de Lyon - INSA de Lyon - MATEIS, UMR 5510, Villeurbanne, France
(b) SIMAP-GPM2, Domaine Universitaire BP 46, 38402 Saint Martin d'Heres, France
ABSTRACT A new type of architectured material, namely « monofilament entangled materials », was studied in order to gain a better understanding of its behavior under compressive loading and its damping properties. The materials studied in this paper were made of an entanglement of a single steel wire. Their complex internal architecture was investigated using X-ray computed tomography. The evolution of the number of contacts per unit volume, as well as of the density profile, was followed during the compression test in order to compare it with the mechanical results. Dynamic Mechanical Analysis (DMA) was performed to characterize the evolution of the loss factor of this material with frequency and volume fraction. It was shown that this material presents an interesting strength/loss factor ratio. A discrete element model was proposed to model the mechanical properties of this material. 1. Introduction Playing with the architecture of a material is a clever way of tailoring its properties for multifunctional applications. A lot of research has been done, in the past few years, on what is now referred to as « architectured materials » (metal foams [1], entangled materials, steel wool [2], etc.), mostly for their capacity to be engineered so as to present specific properties inherent to their architecture. In this context, some studies have been carried out concerning entangled materials [3], but only a few on monofilament entangled materials [4-6]. Such a material, with no filament ends, could exhibit interesting properties for shock absorption, vibration damping and ductility. Because of the complex architecture of these materials, X-ray computed tomography is used in this paper as the main characterization method. This technique enables a 3D non-destructive microstructural characterization of the material [2], which can be coupled with in-situ mechanical characterization.
Different parameters can be measured from the acquired 3D data: density profiles, number of contacts per unit volume, volume fraction. From the 3D images, a discrete element model [7] is finally defined in order to model the structural and mechanical behavior of this material. 2. Samples and procedures In this study, entanglements were manually produced, using different wire diameters and yield strengths (Table 1), and placed into a cylindrical die with a 15 mm diameter. The samples were initially 35 mm high with a 5% volume fraction. They were then submitted to a constrained (inside a PVC die) in-situ compressive test within the laboratory tomograph presented in more detail in [8] and shown in Figure 1-A.

Table 1 Properties of the wires used

Material             | Diameter (µm)    | Yield strength (MPa)
Stainless steel 304L | 127, 200 and 280 | 200
Pearlitic steel      | 120              | 4000
The samples were compressed step by step, with a compression rate of 0.5 mm/s, and after each step the samples were unloaded. From the displacement of the grips and the initial length and diameter of the wire, it was possible to calculate the "theoretical" volume fraction of the sample during the compression tests (starting at 5%). A 3D volume was acquired and reconstructed (Figure 1-B) for both the loaded and unloaded states. Each image had a 24 µm resolution. During the whole
compression test, a stress-strain curve was acquired, and the strength of the entangled medium was characterized by discharge modulus measurements (slope of the curve at the first moments of the discharge). Dynamic Mechanical Analysis was also performed on a set of samples with various volume fractions, for frequencies ranging from 1 to 100 Hz. We then obtained the loss factor, which could be used to calculate the specific strength/loss factor ratio.
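The discharge modulus measurement can be sketched as a linear fit over the first points of the unloading branch. The function name and the synthetic data below are illustrative, not taken from the experiments.

```python
import numpy as np

def discharge_modulus(strain, stress, n_points=5):
    """Discharge (unloading) modulus: slope of a linear fit to the first
    points of an unloading branch. Assumes the arrays start at the onset
    of the discharge."""
    s = np.asarray(strain[:n_points])
    t = np.asarray(stress[:n_points])
    return np.polyfit(s, t, 1)[0]  # slope of the least-squares line

# synthetic unloading branch with a 2 GPa slope (illustrative values)
eps = np.linspace(0.30, 0.29, 5)
sig = 10e6 + 2e9 * (eps - 0.30)
print(discharge_modulus(eps, sig) / 1e9)  # ~2.0 GPa
```

Using only the first moments of the discharge avoids the nonlinearity introduced later in the unloading by frictional sliding at the wire contacts.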
Figure 1: A) In-situ experimental compressive device, B) example of a reconstructed 3D volume

In order to link the internal architecture to the global mechanical strength of the samples, a microstructural analysis was performed based on the 3D images. The homogeneity of the sample was first studied by monitoring the evolution of the radial density profile. The 3D data were first processed to obtain a binary image where the wire appears as white voxels and the air as black. By measuring a local density, as shown in Figure 2-A, for each consecutive tube of thickness dr (Figure 2-B), we obtain an indicator of the local volume fraction at a distance r from the axis of the die.
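The radial density profile measurement can be sketched as follows. This is a minimal version assuming the die axis coincides with the z axis through the image centre; dr and the demo volume are arbitrary.

```python
import numpy as np

def radial_density_profile(binary, dr=5):
    """Local volume fraction (white voxels / total voxels) in consecutive
    tubes of thickness dr around the die axis, taken here as the z axis
    through the image centre."""
    nz, ny, nx = binary.shape
    y, x = np.mgrid[:ny, :nx]
    r = np.hypot(y - (ny - 1) / 2.0, x - (nx - 1) / 2.0)
    edges = np.arange(0, r.max() + dr, dr)
    profile = []
    for r0, r1 in zip(edges[:-1], edges[1:]):
        mask = (r >= r0) & (r < r1)       # annulus, identical in every slice
        profile.append(binary[:, mask].mean())
    return edges[:-1], np.array(profile)

# uniform random 5% "entanglement": the profile should be flat at ~0.05
rng = np.random.default_rng(2)
binary = rng.random((20, 64, 64)) < 0.05
radii, profile = radial_density_profile(binary)
print(profile.round(3))
```

For a real sample, a profile rising towards the die wall would directly reveal the heterogeneous wire distribution discussed below.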
Figure 2: A) Local density equation, where Np is the total number of voxels, B) Principle of the recursive density measurement

From the binary 3D images, it was also possible to measure the number of contacts per unit volume. The data were first reduced to their center-line (skeletonization of the wire architecture). The whole structure then consisted of a list of
segments and nodes, where one contact corresponds to an H-like structure, an example of which is shown in Figure 3-A. By counting the number of segments shorter than the diameter of the wire (the definition of a contact point), we can estimate the number of contacts per unit volume. This count can be refined by considering that, if the distance between two short segments is smaller than the diameter of the wire, those two segments belong to the same contact point (Figure 3-B). After refinement, this calculation was applied to both the loaded and unloaded states, for each volume fraction.
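The refinement step, merging short segments closer than one wire diameter into a single contact point, can be sketched as a greedy clustering. The function and the point coordinates are illustrative.

```python
import numpy as np

def merge_contacts(points, wire_diameter):
    """Greedy refinement: short-segment midpoints closer than one wire
    diameter are merged into a single contact point (their centroid)."""
    points = np.asarray(points, dtype=float)
    remaining = list(range(len(points)))
    contacts = []
    while remaining:
        i = remaining.pop(0)
        cluster = [i]
        for j in remaining[:]:
            if np.linalg.norm(points[i] - points[j]) < wire_diameter:
                cluster.append(j)
                remaining.remove(j)
        contacts.append(points[cluster].mean(axis=0))
    return contacts

pts = [(0.0, 0.0, 0.0), (0.05, 0.0, 0.0), (1.0, 1.0, 1.0)]  # first two: one contact
print(len(merge_contacts(pts, wire_diameter=0.2)))  # -> 2
```

The refined count, divided by the analysed volume, gives the number of contacts per unit volume tracked during the compression steps.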
Figure 3: A) H-like structure corresponding to a simple contact, B) Refinement of multiple contacts

From those measurements, it was possible to link the evolution of the internal microstructure to the mechanical behavior of this material and thus to gain a better understanding of the behavior of the entangled medium under compressive loading. In parallel to the experimental testing and microstructural analysis, a model was defined in order to reproduce the experimental process using a discrete element method. In this model, the wire is represented by a succession of spherical elements (bead-like model) with the same diameter as the wire (Figure 4).
Figure 4: Bead-like representation of the wire

Numerical samples could be generated (i) from a random walk algorithm or (ii) from experimental 3D images. In the latter case, the 3D data could either be discretized numerically, using their skeleton, or manually, following the wire by hand. This way, it was possible to obtain a description of the initial structure corresponding exactly to the experimental samples. Two consecutive elements along the wire are bonded by a FENE [9] (finite extensible nonlinear elastic) potential, preventing two parts of the wire from crossing each other. Friction plays an important role in the comprehension of the
behavior of entangled media. It was taken into account via a Coulomb/Hertz interaction [10-12] (Figure 5), allowing both the compressive and the hysteretic behavior to be modeled accurately. The first parenthesized term is the contact force between two particles and the second parenthesized term is the tangential force. At each molecular dynamics timestep, the tangential force, which corresponds to a "history" effect, is updated to account for the tangential displacement between the particles over the duration of their contact.
Figure 5: Expression of the granular force between two particles in contact, with:
delta = d - r: overlap distance of the 2 particles
Kn: elastic constant for normal contact
Kt: elastic constant for tangential contact
gamma_n: viscoelastic damping constant for normal contact
gamma_t: viscoelastic damping constant for tangential contact
m_eff = Mi Mj / (Mi + Mj): effective mass of the 2 particles of mass Mi and Mj
Delta St: tangential displacement vector between the 2 spherical particles, truncated to satisfy a frictional yield criterion
n_ij: unit vector along the line connecting the centers of the 2 particles
Vn: normal component of the relative velocity of the 2 particles
Vt: tangential component of the relative velocity of the 2 particles

In our model configuration, the sample is contained within a cylindrical die, as in the experiment. Regarding friction, the die interacts in the same way as the constitutive elements of the wire. Two pistons were then created in order to apply the displacement along the axis of the cylinder. It was finally possible to fit the model parameters using the static experimental results as well as the dynamic ones (application of a sinusoidal displacement). From this model, stress-strain curves could be plotted, as well as density profiles, in order to compare them to the experimental ones. 3. Results and discussion A qualitative study of the 3D data was performed in order to have a first look at the compressive behavior of monofilament entangled materials. From the tomography data, it was possible to extract a median section of the sample along the axis of compression in order to follow qualitatively the evolution of the microstructure through the mechanical test.
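A simplified sketch of the normal part of this interaction (a Hertz-scaled spring plus a viscoelastic dashpot, following the form of Figure 5; the tangential history term is omitted here):

```python
import numpy as np

def normal_contact_force(xi, xj, ri, rj, vi, vj, kn, gamma_n, mi, mj):
    """Normal component of the granular contact force between two spherical
    elements: Hertz-scaled spring kn*delta plus a viscoelastic dashpot.
    The tangential (history) term of Figure 5 is omitted in this sketch."""
    d = np.linalg.norm(xj - xi)
    delta = (ri + rj) - d                 # overlap distance
    if delta <= 0.0:
        return np.zeros(3)                # no contact, no force
    n_ij = (xi - xj) / d                  # unit vector from j towards i
    m_eff = mi * mj / (mi + mj)           # effective mass
    vn = np.dot(vi - vj, n_ij) * n_ij     # normal relative velocity
    return np.sqrt(delta) * (kn * delta * n_ij - m_eff * gamma_n * vn)

# two overlapping, initially static beads: purely elastic repulsion
f = normal_contact_force(np.zeros(3), np.array([1.8, 0.0, 0.0]),
                         1.0, 1.0, np.zeros(3), np.zeros(3),
                         kn=1000.0, gamma_n=0.0, mi=1.0, mj=1.0)
print(f)  # force on bead i points along -x (away from bead j)
```

Summing such pairwise forces (plus the FENE bonds along the wire and the truncated tangential term) at every timestep is what the discrete element simulation iterates.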
Figure 6: Evolution of the cross section of a monofilament entangled sample (stainless steel, 200 µm diameter) as a function of the volume fraction
Figure 7: Evolution of the cross section of a monofilament entangled sample (stainless steel, 127 µm diameter) as a function of the volume fraction

In the case of the stainless steel wire, the deformation was followed for two diameters (Figures 6 and 7). First, we can notice, for both diameters, that the distribution of the wire through the volume of the sample is heterogeneous. The local density seems higher at the contact with the mold and the pistons, and smaller in the center. Increasing the volume fraction does not appear to change this heterogeneous distribution radically. Nevertheless, the sample with the smaller diameter (127 µm) seems slightly more homogeneous than the one with a 200 µm diameter. This was expected, since a smaller wire diameter means a smaller curvature radius and thus an easier arrangement of the wire for the same mold radius.
Figure 8: Evolution of the cross section of a monofilament entangled sample (pearlitic steel, 120 µm diameter) as a function of the volume fraction. The volume fractions differ from those for the stainless steel because of technical limitations.

In the case of the pearlitic steel (Figure 8), we can notice that the profile is even more heterogeneous. Due to the very high yield strength of the wire, the curvature radius is very large and the wire ends up in the outer volume of the mold. Qualitatively, we can already notice the heterogeneous nature of monofilament entangled materials submitted to a constrained compression test, as well as the influence of the diameter and yield strength of the constitutive wire.
4. Conclusion In this study, a set of methods was defined in order to characterize the behavior of entangled media and, more particularly, of monofilament entangled materials. The mechanical behavior of such a material can now be linked to its microstructure, which was shown to be more or less heterogeneous depending on the wire's characteristics. Both the mechanical and the microstructural behavior can also be modeled using a discrete element method taking friction into account.

Acknowledgements National Research Agency (MANSART, ANR-REG-071220-01-01).

References
[1] Cathy O., « Fatigue des empilements de sphères creuses métalliques » (Fatigue of stacks of hollow metallic spheres), PhD thesis, INSA Lyon, 2008
[2] Masse J.P., « Conception optimale de solutions multimatériaux multifonctionnelles : l'exemple des structures sandwiches à peaux en acier - choix des matériaux et développement de nouveaux matériaux de cœur » (Optimal design of multifunctional multi-material solutions: the example of sandwich structures with steel skins - material selection and development of new core materials), PhD thesis, INPG, 2009
[3] Rodney D., Fivel M., Dendievel R., « Discrete Modeling of the Mechanics of Entangled Materials », Physical Review Letters, Volume 95, pp 108004, 2005
[4] Liu P., He G., Wu L., « Fabrication of sintered steel wire mesh and its compressive properties », Materials Science and Engineering A, Volume 489, pp 21-28, 2008
[5] Tan Q., Liu P., Du C., Wu L., He G., « Mechanical behaviors of quasi-ordered entangled aluminium alloy wire material », Materials Science and Engineering A, Volume 527, pp 38-44, 2009
[6] Liu P., He G., Wu L., « Uniaxial tensile stress-strain behaviour of entangled steel wire material », Materials Science and Engineering A, Volume 509, pp 69-75, 2009
[7] Barbier C., « Modélisation numérique du comportement mécanique de systèmes enchevêtrés » (Numerical modeling of the mechanical behavior of entangled systems), PhD thesis, INPG, 2008
[8] Buffiere J.Y., Maire E., Adrien J., Masse J.P., Boller E., « In Situ Experiments with X ray Tomography: an Attractive Tool for Experimental Mechanics », Experimental Mechanics, Volume 50, Issue 3, pp 289-305, 2010
[9] Kremer K., Grest G.S., « Dynamics of entangled linear polymer melts: A
molecular-dynamics simulation », J Chem Phys, Volume 92, pp 5057, 1990
[10] Zhang H.P., Makse H.A., « Jamming transition in emulsions and granular materials », Physical Review E, Volume 72, pp 011301, 2005
[11] Silbert L.E., Ertas D., Grest G.S., Halsey T.C., Levine D., Plimpton S.J., « Granular flow down an inclined plane: Bagnold scaling and rheology », Phys Rev E, Volume 64, pp 051302, 2001
[12] Brilliantov N.V., Spahn F., Hertzsch J.M., Poschel T., « Model for collisions in granular gases », Phys Rev E, 53, pp 5382-5392, 1996
Characterisation of mechanical properties of cellular ceramic materials using X-ray computed tomography

O. Caty(1), F. Gaubert(1), G. Hauss(2), G. Chollon(1)
[email protected]
(1) LCTS: Laboratory of Thermostructural Composites - 3 Allée de la Boetie
(2) ICMCB - 87, Avenue du Docteur Schweitzer
F-33600 Pessac, France
Abstract Carbon foams are refractory cellular materials. The material exhibits an open porosity with a three-dimensional architecture offering multifunctional properties. These properties can even be tailored through post-processing in order to use the material in many fields such as energy (fuel cells) or transport (shock absorbers). For all applications, knowledge of the mechanical properties is important. These properties depend on the 3D architecture and the damage kinetics. In this study, X-ray computed microtomography (µCT) has been used firstly to analyze the damage kinetics during in-situ compression tests and secondly to simulate the behavior. The µCT compression tests reveal the local damage and global deformations. The analysis of these images will be presented to illustrate the potential of tomographic investigations for brittle cellular materials. To assess both the stress and strain fields, a model based on the real material was developed. This model consists in meshing the three-dimensional images and modeling the behavior of the constitutive material. The fracture of cells is treated using an adapted law and a brittle criterion. The models are also compared with the measured macroscopic mechanical behavior, or adapted to simulate numerically-generated materials.
Introduction Ceramic foams are cellular materials used in industry in particular for their attractive mechanical properties coupled with other functionalities. Thermal insulation, acoustic absorption, low density with large specific surface and electrical conductivity are some examples of these properties [1-2-3]. The main industrial sectors are packaging (as shock absorbers), filtering (through the open cells) or energy (fuel cells). Although the main reason to select this material is not always its mechanical properties, they need to be known and must be understood with respect to the parameters of the 3D structure. The ceramic foams studied were manufactured at the LCTS using a vitreous carbon preform provided by the CEA - Le Ripault [1-2]. This material is composed of a solid 3D arrangement of struts and edges delimiting open cells. The solid phase is glassy carbon, and the resulting material combines its properties (thermal stability, high strength…) with those of the cellular structure (low density, high specific surface, open porosity). The material is macroscopically ductile while the vitreous carbon itself is brittle. This surprising behavior is explained by the 3D structure and its damage kinetics. Macroscopic compression tests were carried out on foams having different pore sizes and relative densities by S. Delettrez [1-2]. The results of these tests are not in accordance with the theory of Gibson and Ashby [1-3]. One hypothesis to explain this difference is that the 3D structure of the material is far from the one considered by Gibson and Ashby [3]. In particular, one cannot define a simple unit cell, the material being rather random. This random 3D structure must be characterized to better understand the mechanical properties. Probably the best way to do so is X-ray computed microtomography (µCT). Firstly, the mechanical properties and damage kinetics are analyzed using a compression test coupled with tomography.
These observations are necessary to understand the damage mechanisms responsible for the macroscopic behavior. In addition, it would be useful to measure further information, such as the strain and stress fields in the material. The choice of calculating these data directly from the µCT images will be presented in this paper. This method, based on finite element modeling, is not a direct experimental measurement but is derived from the images of the real microstructure.
Material and Methods Materials The material was produced using a highly porous vitreous carbon foam (relative density ρr = ρ*/ρS, corresponding to a porosity of about 0.98, with ρ* the apparent density of the foam and ρS the density of the constitutive material). This preform was produced by the pyrolysis of a polymer foam. Fig. 1-a and Fig. 1-b present two secondary electron microscope (SEM) pictures, at low and high magnification respectively, of the porous vitreous carbon foam. Unfortunately, this foam does not have sufficient mechanical and corrosion resistance for shock absorber or fuel cell applications. The foam was thus densified by vapour deposition with pyrocarbon (PyC) or silicon carbide (SiC), using respectively propane and methyltrichlorosilane/hydrogen (MTS/H2) precursors [1-2-4-5] (Fig. 1-c). The deposition time was adjusted to obtain the desired thickness and thus the desired relative density. The final material is an open foam characterised by its relative density ρr and the number of pores per inch (ppi). In this paper, only a PyC densified foam was analysed, with a relative density of 0.14 and a cell size of 60 ppi.
Fig. 1 : SEM pictures of the foam. (a) Magnification x50 to illustrate the mesoscopic morphology of the non-densified foam. (b) and (c) Magnification x500 to illustrate the microscopic morphology of a ligament, before and after densification respectively. Adapted from [1].
Fig. 2-a presents the stress/strain compression behavior of a 60 ppi foam densified with PyC up to a relative density of 0.14. The sample was placed between two plates. The upper plate is mobile and displacement-controlled. The device records the load F and the displacement l − l0; the stress σ = F/S0 and the true strain ε = ln(1 + (l − l0)/l0) are calculated using S0, the initial cross-section (S0 = 78.5 mm²), and l0, the initial length of the sample (l0 = 10 mm). This curve is in good agreement with the standard compression behavior of foams [3], as presented on Fig. 2-b. The different domains are highlighted with numbers. Firstly (1), the material exhibits a pseudo-elastic behavior; in this part of the curve, or during unloading, the Young's modulus is measured. Its value is about 320 MPa. Then a crack appears and the load decreases (2) to reach a stable value often named the plateau (3). In this part the load is almost constant, with some fluctuations around an average value. These perturbations are generally attributed to successive brittle failures corresponding to the crushing of individual pores. The average stress value of the plateau (σpl, estimated by averaging the values in this domain) is about 2 MPa, the plateau appearing for a strain higher than about 0.02 in the present case. In this domain the material
seems to be ductile although the constitutive material is brittle. (4) When all the open porosity is filled, a densification stage is observed. Here densification appears for a strain of 0.7.
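As a minimal sketch (not the authors' code), the load-to-stress and displacement-to-true-strain conversion described above can be written as follows; the 157 N load used in the example is an illustrative value chosen to land on the plateau stress:

```python
import math

def true_stress_strain(F, displacement, S0=78.5, l0=10.0):
    """Convert load F (N) and crosshead displacement l - l0 (mm) into
    stress sigma = F/S0 (MPa) and true strain eps = ln(1 + (l - l0)/l0),
    with S0 and l0 the sample's initial cross-section and length."""
    sigma = F / S0
    eps = math.log(1 + displacement / l0)
    return sigma, eps

# e.g. a plateau-level load of 157 N gives sigma = 2.0 MPa
sigma, eps = true_stress_strain(F=157.0, displacement=0.2)
```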
Fig. 2 : Stress/strain compression behavior of the foam. (a) Compression test done on a foam with a relative density of 14 % and a cell size of 60 ppi, densified with PyC. (b) Scheme reporting the different domains of the curve. Adapted from [1].
To model ceramic foams, Gibson and Ashby [3] have proposed models based on elementary cells. They have derived analytical expressions for the Young's modulus and the plateau stress. None of these models gives the values measured in [1]. These differences between the experimental and calculated values of the foam's mechanical properties probably arise because the model is too simple. In the Gibson and Ashby model [3], the elementary cell is simply repeated, which precludes localization effects. The fact that the damage kinetics and the local stress and strain fields in the material are not taken into account could also explain the differences. A local image analysis during an in-situ compression test has been used to obtain more information on these points.
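To illustrate the discrepancy, the classical open-cell Gibson-Ashby scaling laws can be evaluated with this foam's parameters. This is a sketch using the commonly quoted proportionality constants (C1 ≈ 1 for stiffness, C2 ≈ 0.2 for brittle crushing), which are textbook assumptions, not values from this paper:

```python
# Gibson-Ashby open-cell scaling (sketch): E*/Es ~ C1 * rho_r**2 and, for a
# brittle foam, sigma_pl/sigma_fs ~ C2 * rho_r**1.5. C1 ~ 1 and C2 ~ 0.2 are
# the commonly quoted fits, assumed here.
Es = 35e3          # MPa, PyC Young's modulus (value used in the FE model below)
sigma_fs = 300.0   # MPa, PyC strength (value used in the FE model below)
rho_r = 0.14       # relative density of the densified foam

E_star = 1.0 * Es * rho_r**2            # predicted foam modulus
sigma_pl = 0.2 * sigma_fs * rho_r**1.5  # predicted crushing plateau stress

print(E_star, sigma_pl)  # ~686 MPa and ~3.1 MPa vs. measured 320 MPa and ~2 MPa
```

Both predictions overshoot the measured values, consistent with the authors' point that a periodic elementary cell misses the localization and defect effects of the real structure.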
Methods X-ray tomography [6] is well suited to studying cellular materials and especially the development of damage [7-8-9-10]. To scan the material, we used a standard laboratory microtomograph at TOMOMAT in Pessac (Nanotom – Phoenix X-ray – see Fig. 3-b) with a voxel size of 6 microns. The tomograph was operated with a molybdenum target at 50 keV and 500 µA, with a 0.2 mm aluminium filter. Compression tests at imposed strain were conducted on a machine specially designed for in-situ compression measurements. The in-situ compression device is presented on Fig. 3. Its main features are: 1- a quartz pipe, which allows the X-rays to pass through and a visual check of the installation; 2- a manual mechanical actuator using a micrometric screw system for fine displacements; 3- a bearing ball, placed between the screw and the upper plate, avoiding moment transmission to the sample. Foam cylinders (10 mm in diameter and 10 mm in length) were analysed at a voxel size of 6 µm, resulting in images of about 1600x1600x1600 voxels. The strain is controlled knowing the screw thread (0.5 mm/turn). Eight µCT scans were performed at increasingly strained states, the first one corresponding to the initial undeformed state. The true strain ε = ln(1 + (l − l0)/l0) was calculated from the screw thread and confirmed by µCT image analysis.
Fig. 3 : Pictures of the in-situ compression device used to apply the load to the foam samples installed in the microtomograph. (a) Close-up of the compression device; the sample is inside the transparent pipe. (b) Compression load cell placed in the laboratory microtomograph; on the right, the Nanotom Phoenix X-ray tube.
The solution adopted to determine the stress and strain fields in the foam is a meshing of the µCT images with tetrahedral elements followed by a finite element calculation ([12] and [13]). The 3D image of the foam was meshed with the Avizo software [11]. The meshing is done in three steps: the surface is first reconstructed with triangles by a marching cubes algorithm [14], [15]; the meshed surface is then simplified to reduce the number of triangles, through an edge-collapsing algorithm [16]; finally, the tetrahedral mesh is built by an advancing-front algorithm [17]. The µCT images of the foam samples being too large to be meshed directly, the image was cropped to a 300x300x300 voxel image (1.8x1.8x1.8 mm3) to get a sub-sample of the foam. The choice of the boundary conditions is very important. We decided to leave the lateral faces unconstrained, to completely constrain the lower face in displacement, and to impose a displacement along the compression direction (z direction) on the upper face, the displacements in the perpendicular plane (x and y directions) being blocked on this face (a scheme of the boundary conditions is presented on Fig. 4-b). A parallel modeling study of the foam was also carried out to determine the ideal parameters of the mesh (convergence study – about 300 000 elements), the constitutive law (elastic brittle law – E=35 GPa – σy = 300 MPa) for the PyC, and the minimum sample size to be representative. A mesh of the foam is presented on Fig. 4-a.
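The cropping step can be sketched as follows. This is a hypothetical helper, not the authors' code; it assumes the scan has already been segmented so that solid voxels are 1, and a random volume stands in for real data:

```python
import numpy as np

def crop_subvolume(volume, center, size=300):
    """Extract a cubic sub-volume of `size` voxels per side around `center`
    from a binarized uCT image, and return it together with its relative
    density (the solid-voxel fraction)."""
    cz, cy, cx = center
    h = size // 2
    sub = volume[cz - h:cz + h, cy - h:cy + h, cx - h:cx + h]
    return sub, float(sub.mean())

# synthetic stand-in for a segmented 6 um/voxel scan: ~14 % solid fraction
rng = np.random.default_rng(0)
vol = (rng.random((400, 400, 400)) < 0.14).astype(np.uint8)
sub, rho_r = crop_subvolume(vol, (200, 200, 200))
print(sub.shape, rho_r)  # (300, 300, 300) and a value close to 0.14
```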
Fig. 4 : (a) Mesh of a 3D µCT image of the foam. (b) Scheme of the boundary conditions applied to the mesh.
Results Damage in the foam was observed at the nine states of compression acquired by µCT. First, the analysis of the global deformations/damage was carried out by observing the outer surface of the foam. These observations were done by choosing a suitable viewing angle to see the failure and using a voltex rendering to visualize the foam in 3D (voltex rendering is a function of the Avizo software - see [11] for details). This viewing angle is crucial to analyze the failure properly and would be hard to find without the use of in-situ µCT. On Fig. 5, three states of deformation are presented: the initial state (Fig. 5-a), 6% strain (Fig. 5-b) and 15% strain (Fig. 5-c). As indicated by the arrows, a macroscopic failure is visible in the middle of the foam. This failure actually consists of a series of individual failures of the cell ligaments, these individual failures being connected in a plane at an angle of about 30° to 45° to the loading direction (the loading direction is vertical on these pictures).
Fig. 5 : µCT in-situ compression test on a ceramic foam with a relative density of 0.14. Observation of the external surface using a voltex rendering [11]. Three states of deformation: (a) initial state; (b) 6% strain, arrows indicate the initiation of failure; (c) 15% strain, arrows indicate the evolution of damage.
In a second study we exploited one of the main assets of µCT: the possibility to visualize the local damage inside the material. These observations were made at different places in the material. Recognizing the different zones on the 3D images of the foam is difficult and was made easier by following particular cells or defects. A major failure mode was evidenced in this study, an example being presented on Fig. 6. This figure compares the initial state and a damaged state of the same cell ligament. As highlighted by the arrows, the failure appears at both ends of the ligament. It is a brittle failure with no plastic deformation.
Fig. 6 : Local analysis of a µCT in-situ compression test on a ceramic foam with a relative density of 0.14. Visualisation of a ligament inside the foam using a voltex rendering [11]. Arrows indicate the fracture; numbers are visual marks to highlight the ends.
The local behavior observed above by µCT analysis is also observable in the calculated damage field of an isolated ligament. Fig. 7 presents a ligament chosen randomly in the foam, with no particular orientation or position. The damage measurements clearly show that damage concentrates at both ends of the ligament, where the thickness of the ligament changes. This stress concentration zone is also a weak point, and the weakness of these ligaments initiates the damage in the foam. Thus, if one wanted to simulate the damage behavior of the foam without taking the real microstructure into account, these singular points in the structure would be missed, probably leading to an erroneous behavior.
Fig. 7 : Calculation of the damage rate field inside a ligament. A damage rate of 0 corresponds to undamaged material and 1 to failed material.
When the first failures appear at both ends of the ligament, the load is transmitted to the neighbouring ligaments. This creates a particular development of the damage. This organisation is well illustrated on Fig. 8, where the damage rate in a rather large part of the sample is shown. On Fig. 8-b, a damage band oriented at about 30° to the loading direction appears. The damage appears in the ligaments and propagates throughout the structure. When the damage in this band is saturated, other local failures appear (Fig. 8-c). These subsequent failures are often organized in smaller bands.
Fig. 8 : Damage rate field inside the cellular structure. FE calculations of different states of compression corresponding to states ref. 1, 2 and 3 in the in-situ µCT observations. (a) Initial state. (b) 3% strain. (c) 6% strain.
Discussion As explained in the results section, a failure oriented at 30° to 45° with respect to the loading direction was observed on Fig. 5. We can make an analogy with ductile metallic materials, where failure is often observed at 45°. This angle corresponds to the maximally sheared plane in the sample. Thus, the failure has a quasi-ductile character whereas the constitutive material is intrinsically brittle. This is in good agreement with the macroscopic behavior, as shown on Fig. 2-a. In the damage field computed by FE on the structure presented on Fig. 8, the failure seems to propagate in a plane oriented at 30°. This plane is not exactly the same as the one described on Fig. 5. Another difference is the presence of local damage in the calculations of Fig. 8. These isolated damage sites are not observable in the experimental investigations of Fig. 5. These differences are probably explained by the difference in the sizes considered: here appears the problem of the representative size. The volume of foam used for the calculation is probably too small to be representative of the experimental test. Another question arises from the boundary conditions, which were chosen uniform. Such conditions obviously do not hold for a sub-volume extracted from the middle of the tested sample. We chose them for simplicity and because we estimated that they are rather close to the real ones. Further investigations are needed to validate this choice. A solution would be to apply the displacements directly measured on the µCT images. The comparison of Fig. 6 and Fig. 7 reveals that the stress/damage concentration zones correspond to the failure zones. The most loaded ligaments are generally broken at their ends, as illustrated on Fig. 6. These zones are indeed the most stressed/damaged regions in the FE calculation (Fig. 7). The final goal of these investigations was to better understand the macroscopic mechanical behavior.
The band-like fractures observed explain the perturbations in the plateau domain described in the material section. These perturbations are also well simulated by the model presented. The model is therefore a useful tool to optimize the material: it is rather easy to adjust parameters such as the dimensions, the volume fraction or the constitutive material properties to find the material adapted to a specific application.
Conclusion Using the µCT technique, it is possible to visualize the inside of materials at high resolution. These investigations can also be carried out in-situ during mechanical loading. Using the 3D images, it is also possible to calculate several fields such as stress, strain or damage. All these microscopic data are very important to understand the macroscopic mechanical behavior. In the case of cellular materials (like the PyC densified porous vitreous carbon foams presented here), these investigations are rather easy. The information obtained is crucial to optimize or understand the mechanical behaviour. Another important benefit of these calculations using meshed µCT images is the relevance of the simulated behaviour compared with classical models like those presented in [3]. This benefit is explained by local damage linked to local defects. These defects are singular points that must be considered to simulate the effective mechanical behaviour of such brittle materials. Unfortunately, these calculations are computationally heavy. In this study, we chose to use only one part of the sample. This choice may be a problem, as explained in the discussion section. A solution would be to use the entire sample for calculations, using more powerful computers or improving the calculation efficiency. Another improvement concerns the experimental set-up used for in-situ compression tests. A new loading device is under development at the ICMCB in Pessac. This new in-situ device is adapted for accurate and small displacements. The main difficulty in the development of such a device is the lack of room available in the laboratory tomograph and the necessity to be X-ray transparent throughout a 360° rotation. Finally, to improve the test and in particular the resolution of the 3D images, a solution would be to use a synchrotron such as the ESRF in Grenoble [18].
Acknowledgments The CEA is thanked for providing the starting vitreous carbon foam. The in-situ compression tests and µCT acquisitions were carried out at the ICMCB laboratory (TOMOMAT group); we want to thank Ali Chirazi and Dominique Bernard in particular for their collaboration. The "Groupement d'Intérêt Scientifique" (GIS) Advanced Materials in Aquitaine (AMA) is acknowledged for funding the development of the in-situ device. Florian Canderaz is acknowledged for the model developments.
References
[1] S. Delettrez. Elaboration par voie gazeuse et caractérisation de céramiques alvéolaires base pyrocarbone ou carbure de silicium. PhD thesis. University of Bordeaux 1. 2008.
[2] G.L. Vignoles, C. Gaboriau, S. Delettrez, G. Chollon, F. Langlais. Reinforced carbon foams prepared by chemical vapor infiltration: a process modeling approach. Surface and Coatings Technology. 203: 510-515. 2008.
[3] L.J. Gibson, M.F. Ashby. Cellular Solids: Structure and Properties - second edition. Pergamon Press, 2001.
[4] R. Naslain, F. Langlais. Mater. Sci. Res. 20-145. 1992.
[5] R. Naslain, F. Langlais, G.L. Vignoles, R. Pailler. Ceram. Eng. Sci. Proc. 27 (2) 373. 2006.
[6] E. Maire, J.-Y. Buffière, L. Salvo, J.J. Blandin, W. Ludwig, J.M. Létang. Advanced Engineering Materials, 3 No 8, 539-546. 2001.
[7] L. Babout, E. Maire, R. Fougères. Acta Materialia 52, 2475-2487. 2004.
[8] E. Maire, A. Elmoutaouakkil, A. Fazekas, L. Salvo. MRS Bulletin, 28, 284. 2003.
[9] O. Caty, E. Maire, R. Bouchet. Fatigue of metal hollow spheres structures. Advanced Engineering Materials. 10. 179-184. 2008.
[10] O. Caty, E. Maire, S. Youssef, R. Bouchet. Modelling the properties of closed cell cellular materials from tomography images using finite shell elements. Acta Materialia. 56, 5524-5534. 2008.
[11] http://www.vsg3d.com/avizo
[12] S. Youssef, E. Maire, R. Gaertner. Finite element modelling of the actual structure of cellular materials determined by X-ray tomography. Acta Materialia. 53. 719-730. 2005.
[13] K. Madi, S. Forest, M. Boussuge, S. Gailliere, E. Lataste, J.-Y. Buffière, D. Bernard, D. Jeulin. Finite element simulation of the deformation of fused-cast refractories based on X-ray computed tomography. Computational Materials Science. 39. 224-229. 2007.
[14] W.E. Lorensen, H.E. Cline. Marching cubes: a high resolution 3D surface construction algorithm. Proceedings of the 14th annual conference on computer graphics and interactive techniques, Anaheim, July 27-31. 163-169. 1987.
[15] D.A. Rajon, W.E. Bolch. Marching cube algorithm: review and trilinear interpolation adaptation for image-based dosimetric models. Computerized Medical Imaging and Graphics 47. 411-435. 2003.
[16] P. Cignoni, C. Montani, R. Scopigno. A comparison of mesh simplification algorithms. Computers & Graphics 22. 1. 37-54. 1998.
[17] P.J. Frey, H. Borouchaki, P.L. George. 3D Delaunay mesh generation coupled with an advancing-front approach. Computer Methods in Applied Mechanics and Engineering 157. 115-131. 1998.
[18] www.esrf.eu/
Multiaxial stress state assessed by 3D X-Ray tomography on semi-crystalline polymers
L. Laiarinandrasana, T.F. Morgeneyer, H. Proudhon MINES ParisTech, MAT-Centre des Matériaux, CNRS UMR 7633, BP87 91003 Evry Cedex, France ABSTRACT This work aims at linking the microstructural evolution of semi-crystalline polymers to the macroscopic material behaviour under a multiaxial stress state. Tensile tests on notched round bars, interrupted after different stages of deformation before failure, were assumed to have produced various states of stress in the vicinity of the net cross-section. The specimens were examined using synchrotron radiation tomography. A sample obtained from a test stopped at the end of the stress softening stage showed elongated axisymmetric columns of voids separated by thin ligaments of material. This special morphology allowed investigating the distributions of voids in terms of both volume fraction and orientation. This distribution was compared with the theoretical multiaxial stress/strain field. The combination of the tomographic image analyses and the continuum mechanics approach results in the determination of the principal mechanical parameter driving void evolution. 1. Introduction In recent years, more and more researchers have used the tomography technique to better understand the deformation mechanisms of various materials. Most studies dealt with light metals such as aluminium alloys. For a semi-crystalline polymeric material, Laiarinandrasana et al. [1] reported a specific morphology of void evolution thanks to 3D X-ray tomography. In an initially necked region, the voids show elongated shapes separated by thin walls. Taking advantage of the excellent resolution of the images obtained during this campaign, an attempt was made to observe the void microstructure within initially notched round bars used to enhance void growth. Tomographic observations were carried out on several polymers. The present paper focuses on the investigation of PolyAmide 6.
Before describing the experimental setup (PA6 material, sample preparation, tomography technique), a theoretical background on the mechanical stress/strain state within a notched round bar is given. Then, the radial distribution of voids is discussed in order to highlight the role of the stress triaxiality ratio on void growth. Further analyses of the tomographic images allowed determining the void volume fraction gradient along the longitudinal direction (axis z). Critical analyses of this gradient were performed with the help of the theoretical results about the stress/strain state within the notch region. The last part of the paper describes the void orientation distribution. It is demonstrated that the trajectories of the largest principal stress coincide with the void orientation. 2. Theoretical background on circumferentially notched specimens Figure 1a describes the circumferentially notched specimen. Conventional cylindrical coordinates (r, θ, z) are used. Rotational symmetry is considered around the longitudinal axis z. R and a are respectively the radius of curvature of the notch root and the outer radius of the minimal cross section. Authors such as Bridgman [2-3], Kachanov [4], Davidenkov and Spiridonova [5] evaluated the distribution of true stress/strain in necked tensile specimens made of metallic materials. Beremin [6] followed this approach using initially notched specimens with a machined radius of curvature. The general assumptions required were: i) perfectly plastic material; ii) isochoric transformation with a homogeneous axial strain in the minimal cross section. It was then demonstrated that the stress and strain tensors in the vicinity of the minimal cross section could be expressed as follows [6-7]:
$$
\tilde{\sigma} \cong \sigma_{eq}
\begin{pmatrix}
2\eta\left(1-\dfrac{2z^2}{a^2}-\dfrac{r^2}{a^2}\right) & 0 & \dfrac{4\eta rz}{a^2} \\
0 & 2\eta\left(1-\dfrac{2z^2}{a^2}-\dfrac{r^2}{a^2}\right) & 0 \\
\dfrac{4\eta rz}{a^2} & 0 & 1+2\eta\left(1-\dfrac{2z^2}{a^2}-\dfrac{r^2}{a^2}\right)
\end{pmatrix}
\quad (1)
$$

$$
\tilde{\varepsilon} \cong \varepsilon_{eq}\, e^{-12\eta z^2/a^2}
\begin{pmatrix}
-\dfrac{1}{2} & 0 & \dfrac{6\eta rz}{a^2} \\
0 & -\dfrac{1}{2} & 0 \\
\dfrac{6\eta rz}{a^2} & 0 & 1
\end{pmatrix}
\quad (2)
$$

where η = 0.5 ln[1 + a/(2R)], ln is the natural logarithm, and (r, z) are the current coordinates of the considered point in this plane. Note that in equations (1) and (2) the distribution of stress in the minimal cross section is assumed to be parabolic.
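As an illustrative sketch (not code from the paper), equations (1)-(2) can be evaluated numerically; at z = 0 the stress tensor becomes diagonal and parabolic in r, and the strain tensor is traceless at every point, consistent with the isochoric assumption:

```python
import math

def eta(a, R):
    # eta = 0.5 * ln(1 + a/(2R))
    return 0.5 * math.log(1 + a / (2 * R))

def stress_tensor(r, z, a, R, sig_eq=1.0):
    """Approximate stress tensor of eq. (1), in (r, theta, z) axes."""
    h = eta(a, R)
    diag = 2 * h * (1 - 2 * z**2 / a**2 - r**2 / a**2)
    shear = 4 * h * r * z / a**2
    return [[sig_eq * diag, 0.0, sig_eq * shear],
            [0.0, sig_eq * diag, 0.0],
            [sig_eq * shear, 0.0, sig_eq * (1 + diag)]]

def strain_tensor(r, z, a, R, eps_eq=1.0):
    """Approximate strain tensor of eq. (2), with the common prefactor
    eps_eq * exp(-12 eta z^2 / a^2)."""
    h = eta(a, R)
    f = eps_eq * math.exp(-12 * h * z**2 / a**2)
    shear = 6 * h * r * z / a**2
    return [[-0.5 * f, 0.0, f * shear],
            [0.0, -0.5 * f, 0.0],
            [f * shear, 0.0, f]]

# at z = 0 the shear term vanishes and sigma_zz - sigma_rr = sigma_eq
s = stress_tensor(r=1.0, z=0.0, a=2.0, R=3.5)
```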
[Figure 1 sketch: (a) notched specimen geometry with cylindrical coordinates (r, θ, z), notch root radius R and minimal-section radius a; (b) circular trajectory of the largest principal stress σI with its centre ω, the line of iso-principal stress perpendicular to it, and the orientation angle α.]
Figure 1: Sketches of a circumferentially notched specimen with the characteristic parameters
A lot of papers focused on the minimal cross section (z = 0) because it is the critical zone where damage and fracture occur. The following equations, tractable "by hand", were used in many studies. The radial, hoop and axial stresses are expressed as:

$$
\sigma_{rr}(r, z{=}0) = \sigma_{\theta\theta}(r, z{=}0) = \sigma_{eq}\, 2\eta\left(1-\frac{r^2}{a^2}\right), \qquad
\sigma_{zz}(r, z{=}0) = \sigma_{eq}\left[1 + 2\eta\left(1-\frac{r^2}{a^2}\right)\right]
\quad (3)
$$

It has to be noticed that every stress component in equation (3) consists of a structural term (a function of η) and a constitutive term (the equivalent stress). The multiaxiality of the stress state in the minimal cross section is measured by the stress triaxiality ratio (τσ), defined as the mean stress divided by the von Mises equivalent stress.
$$
\tau_\sigma(r, z{=}0) = \frac{\sigma_m}{\sigma_{eq}} = \frac{1}{3} + 2\eta\left(1-\frac{r^2}{a^2}\right)
\quad (4)
$$

where σm = trace(σ̃)/3. In addition, strains are assumed to be homogeneous within the cross section. They can be approximated by:

$$
\varepsilon_{zz} = \varepsilon_{eq}, \qquad \varepsilon_{rr} = \varepsilon_{\theta\theta} = -\tfrac{1}{2}\,\varepsilon_{zz}
\quad (5)
$$
Similarly, the stress/strain gradients with respect to the z axis (r = 0) can also be expressed as:

$$
\sigma_{rr}(0, z) = \sigma_{\theta\theta}(0, z) = \sigma_{eq}\, 2\eta\left(1-\frac{2z^2}{a^2}\right), \qquad
\sigma_{zz}(0, z) = \sigma_{eq}\left[1 + 2\eta\left(1-\frac{2z^2}{a^2}\right)\right]
\quad (6)
$$

$$
\varepsilon_{zz}(0, z) = \varepsilon_{eq}\, e^{-12\eta z^2/a^2}, \qquad
\varepsilon_{rr}(0, z) = \varepsilon_{\theta\theta}(0, z) = -\tfrac{1}{2}\,\varepsilon_{zz}(0, z)
\quad (7)
$$
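A small numerical sketch of these axial gradients, using the specimen dimensions R = 3.5 mm and a = 2 mm given later in the paper (the evaluation points are illustrative):

```python
import math

R, a = 3.5, 2.0                         # notch root radius, minimal-section radius (mm)
eta = 0.5 * math.log(1 + a / (2 * R))   # ~0.126

def eps_zz(z, eps_eq=1.0):
    """Axial strain on the axis r = 0, eq. (7)."""
    return eps_eq * math.exp(-12 * eta * z**2 / a**2)

def sig_zz(z, sig_eq=1.0):
    """Axial stress on the axis r = 0, eq. (6)."""
    return sig_eq * (1 + 2 * eta * (1 - 2 * z**2 / a**2))

# the notch effect fades quickly away from the minimal cross section:
# eps_zz falls from eps_eq at z = 0 to about 22 % of eps_eq at z = a
print(eps_zz(0.0), eps_zz(a), sig_zz(0.0))
```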
Equations (6-7) were checked by Beremin [6] by comparing these solutions with finite element results.

[Figure 2 plot: net stress (MPa, 0 to 100) versus applied displacement (mm, 0 to 0.5), with the stress softening stage, the point where the test was stopped for observations, and the unloading stage marked.]
Figure 2: Experimental preparation of the PA6 notched specimen
Furthermore, the concept of the trajectories of largest principal stress/strain, discussed in [2-4], has never been exploited since, and no experimental verification could be done. Additionally, a true stress cannot be measured, and the strain field can only be estimated at the surface of the specimen. These trajectories of principal stress were assumed to be circular in a necked sample. Figure 1b illustrates this feature and highlights that at any point the line of iso-principal stress should be
perpendicular to the corresponding trajectory of principal stress. Actually, the eigenvector of the largest principal stress is oriented at an angle α that depends on the coordinates (r, z) of the point of interest. Let ρ be the curvature radius of the trajectory of the largest principal stress at point M(r, z). Two different expressions of ρ are given by the authors: according to Kachanov, Davidenkov and Spiridonova [4-5], ρ = aR/r, whereas ρ = (a² + 2aR − r²)/(2r) for Bridgman [2-3]. One can notice that ρ tends to infinity when r = 0, that is on the axis z, whereas ρ = R for r = a, at the notch root. In fact, by plotting both expressions, one can see that the difference is small. From figure 1b, it can be noticed that:
$$
r = \rho \sin\alpha, \qquad z = R + a - \rho\cos\alpha
\quad (8)
$$
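The two curvature-radius expressions, and the orientation angle α obtained by inverting the first relation of equation (8), can be compared numerically. This is a sketch using the specimen dimensions R = 3.5 mm and a = 2 mm, not code from the paper:

```python
import math

a, R = 2.0, 3.5  # minimal-section radius and notch root radius (mm)

def rho_kachanov(r):
    return a * R / r                            # Kachanov, Davidenkov-Spiridonova

def rho_bridgman(r):
    return (a**2 + 2 * a * R - r**2) / (2 * r)  # Bridgman

def alpha_deg(r, rho):
    # orientation of the largest principal stress, from r = rho*sin(alpha)
    return math.degrees(math.asin(r / rho(r)))

# both expressions diverge at r -> 0 (straight trajectory on the axis) and
# both give rho = R at the notch root r = a; the predicted angles stay close
for r in (0.5, 1.0, 1.5, 2.0):
    print(r, alpha_deg(r, rho_kachanov), alpha_deg(r, rho_bridgman))
```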
Equation (8) clearly shows that the orientation of the eigenvector corresponding to the largest principal stress depends on the considered point. Up to now, there was no clear verification of this theory. The present paper aims at showing features, observed on tomographic images of a polymer, that match the trends implied by the previous equations. 2. Experiments The material under study is a PolyAmide 6 polymer, selected for the quality of the images obtained by Synchrotron Radiation Tomography (SRT) carried out at the European Synchrotron Radiation Facility (ESRF). The physico-chemical properties, as well as the tomography technique, were detailed in Laiarinandrasana et al. [1]. Following this last reference, a series of interrupted tests was carried out on circumferentially notched specimens. In this paper, focus is set on a specimen with an initial notch root radius R = 3.5 mm and a minimum section radius a = 2 mm. This specimen was tested using a traction machine with a load cell and crosshead displacement measurement. The test was stopped at the end of the stress softening (fig. 2) to focus on the void morphology and distribution assumed to be "frozen" at that time. After unloading and specimen removal, the deformed samples were first photographed (fig. 2) in order to locate the volume of interest (VOI, within the box in fig. 2), then scanned at the ESRF in Grenoble (France). The locations of the volume scans are depicted in fig. 1a, symbolized by small rectangles and circles. SRT was carried out using the ID19 tomograph. The local tomography setup [8] was used to avoid cutting the sample. A tomographic scan corresponded to 1500 radiographs, recorded over a 180° rotation of the sample. A radiograph consisted of 1024 x 512 pixels with an isotropic pixel size of 0.7 µm. Hence, in the following tomographic images, the width corresponding to the diameter of the maximum reconstructed 3D volume is 716 µm, whereas the height is 358 µm. 3. Results and discussions 3.1. Radial distribution of voids
Figure 3: Radial distribution and morphology of voids within the neck
Figure 3 describes the morphology and distribution of voids within the minimal cross section through longitudinal cuts. Voids are observed in black. They exhibit an elongated aspect, separated by thin walls [1]. This particular morphology allows, at least qualitatively, observation of the variation of the height, the radial expansion and the relative orientation of the voids. Indeed, spherical voids would not provide all of this information. Figure 3a, observed in a VOI located in the centre of the minimal cross section, indicates large void expansion in height (distance between two walls and the full height of elongated voids, as described in [1]) but also in the radial direction. Note that some patterns indicate coalescence in both directions (radial = flat ellipse; in column = elongated ellipse). Figure 3b shows the image of a VOI located close to the sample boundary. No void could be observed in the vicinity of the notch root. Moreover, there are fewer voids, with mean diameters/"heights" smaller than at the centre (figure 3a). It can be concluded that in any cross section within the notched region, the void volume fraction (porosity) is maximum at the centre and gradually decreases towards the surface (notch root).
[Figure 4 plot: σzz/σeq, σrr/σeq and the stress triaxiality ratio τσ as functions of r/a, all maximal at the centre (r = 0) and decreasing towards the notch root (r = a).]
Figure 4: Normalized stress and stress triaxiality ratio versus normalized radial abscissa for z = 0. R = 3.5 mm, a = 2 mm.
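Equations (3) and (4) are not reproduced in this excerpt. The profiles plotted in figure 4 have the form of Bridgman's classical notched-bar solution [2]; the following sketch (an assumption, the authors' exact expressions may differ) reproduces the qualitative trend:

```python
import math

def bridgman_profiles(r, a=2.0, R=3.5):
    """Bridgman stress field on the minimal cross section (z = 0) of a
    circumferentially notched round bar: minimum section radius a,
    notch root radius R, radial abscissa r (same length units)."""
    g = math.log(1.0 + (a**2 - r**2) / (2.0 * a * R))
    szz = 1.0 + g            # sigma_zz / sigma_eq
    srr = g                  # sigma_rr / sigma_eq
    triax = 1.0 / 3.0 + g    # stress triaxiality ratio
    return szz, srr, triax

for x in (0.0, 0.25, 0.5, 0.75, 1.0):   # r/a
    szz, srr, t = bridgman_profiles(x * 2.0)
    print(f"r/a = {x:4.2f}  szz/seq = {szz:5.3f}  srr/seq = {srr:5.3f}  tau = {t:5.3f}")
```

With a = 2 mm and R = 3.5 mm this gives a triaxiality of 1/3 + ln(1 + a/2R) ≈ 0.59 at the centre, decreasing to 1/3 at the notch root, consistent with the decreasing trend of the curves in figure 4.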
This result, reported for many materials in the literature, argues that void growth is driven by the stress triaxiality ratio. Indeed, by recalling in figure 4 the plots of the normalized axial/radial stresses (equation 3) as well as the stress triaxiality ratio (equation 4) with respect to the normalized radial abscissa, the distribution of the porosity is consistent with the trend of these plots. Conversely, the strain is homogeneous over the whole cross section. Moreover, from equation (2) it can be demonstrated that, for any z, the strain does not vary with r: the contour map of any strain component is "flat" (no gradient). Therefore, the strain cannot be considered a relevant parameter to associate with void growth.

3.2. Axial distribution of voids

Considering the axial distribution of voids near the axis, from the minimal cross section to the base of the notch shoulder, figure 5 depicts the tomographic images highlighting the void volume fraction gradient. The same analyses as in the previous subsection are carried out here concerning the height, width and amount of voids. All of these characteristics decrease from the centre (minimal cross section) to the notch shoulder.
Figure 5: Distribution of voids along the z axis
Following Beremin [6], figure 6 shows plots of the normalized stress/strain with respect to z/a according to equations (6-7). The first conclusion is that figure 5 constitutes an experimental verification of the theory. This important result (which can be considered a novelty from the experimental viewpoint) is essentially due to the tomographic images. It can be expected to further assess, for a given material, how far from the base of the notch/neck shoulder the effect of the notch extends. Nevertheless, it should be mentioned that, unlike the radial distribution of voids (figure 4), the axial distribution (figure 6) cannot indicate whether the stress or the strain is the leading parameter for void growth.
Figure 6: Normalized stress/strain as a function of z/a for r = 0. R = 3.5 mm, a = 2 mm.
3.3. Orientation distribution of voids

For the sake of clarity, a zoom of figure 3b is discussed in this section (figure 7). The knowledge of the void morphology [1] enables arrows to be drawn indicating the orientation of these voids. By superimposing on figure 7 the geometrical construction of figure 1b, three points were considered, with respective radii ρ1, ρ2, ρ3. The local void orientations are symbolized by the corresponding angles β1, β2, β3, respectively measured between the arrows and the vertical z axis. Recall that in figure 1b, the orientation of the largest principal stress is symbolized by the angle α described in equation (8). In figure 7, it turns out that the evolution of the angle α is in excellent agreement with the aforementioned β: the void angle is zero close to the z axis and gradually increases to coincide with the local notch root curvature near the surface. As a matter of fact, these observations were made where voids were located outside the minimum cross section (z ≠ 0). To the authors' knowledge, such investigations combining the mechanical parameter state with tomographic observations constitute a novel experimental approach. At this stage, the main conclusions for the PA6 under study are:
- void orientation is parallel to the largest principal stress, which seems to indicate the stress component involved in void stretching;
- the largest principal strain shows no relevance to void orientation (flat contour map);
- quantification of void characteristics would allow local stress measurement, provided a relevant stress scaling methodology (e.g. finite element analysis) is available.

4. Conclusion

PolyAmide 6 semi-crystalline polymer deformation mechanisms were studied thanks to Synchrotron Radiation Tomography (SRT) carried out at the European Synchrotron Radiation Facility (ESRF). An initially notched round bar was tested in tension, up to the end of the stress softening. The sample was then released from the tensile machine to be observed by SRT. The specific morphology of the voids allowed identification of the void distribution in the axial/radial directions. Furthermore, void orientation was observed to depend on location within the notched region. By comparing these features with the theoretical stress/strain fields, it can be concluded that the largest principal stress is the key mechanical parameter controlling void growth. Data collected from image analysis of SRT would be of great value as input to mechanical analyses (FEA). In particular, material model parameters governing damage evolution should be adjusted to match the experimentally measured void volume fraction distribution.
Figure 7: Distribution of voids orientation along r in a longitudinal cut.
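The angles β were read off manually from the arrows drawn on figure 7. As an illustration only (this is not the authors' procedure), the orientation of an elongated object in a binary tomographic slice can be estimated automatically from its second-order image moments:

```python
import numpy as np

def orientation_deg(mask):
    """Angle (degrees, measured from the vertical z axis) of the principal
    elongation axis of a binary object, from second-order image moments."""
    z, r = np.nonzero(mask)                 # rows ~ z, columns ~ r
    z = z - z.mean()
    r = r - r.mean()
    cov = np.cov(np.vstack([r, z]))         # 2x2 covariance of pixel positions
    evals, evecs = np.linalg.eigh(cov)
    major = evecs[:, np.argmax(evals)]      # eigenvector of largest eigenvalue
    # angle between the major axis and the vertical (z) direction
    return float(np.degrees(np.arctan2(abs(major[0]), abs(major[1]))))

# synthetic elongated "void": a vertical bar gives an angle near 0 degrees
bar = np.zeros((40, 40), dtype=bool)
bar[5:35, 18:22] = True
print(orientation_deg(bar))
```

A void aligned with the z axis returns an angle near zero, and a void tilted towards the notch root returns a larger angle, mirroring the β measurements discussed above.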
References
[1] Laiarinandrasana, L., Morgeneyer, T.F., Proudhon, H., Regrain, C., Damage of semi-crystalline polyamide 6 assessed by 3D X-ray tomography: from microstructural evolution to constitutive modelling. Journal of Polymer Science: Part B: Polymer Physics, 48, 1516-1525, 2010.
[2] Bridgman, P.W., The stress distribution at the neck of a tension specimen, Transactions ASM, 32, 553-574, 1944.
[3] Bridgman, P.W., The effect of nonuniformities of stress at the neck of a tension specimen, in Large plastic flow and fracture, Metallurgy and metallurgical engineering series, First edition, New York, McGraw-Hill Book Company, Inc., 937, 1952.
[4] Kachanov, L.M., The state of stress in the neck of a tension specimen, in Fundamentals of the theory of plasticity, MIR Publishers, Moscow, 292-294, 1974.
[5] Davidenkov, N.N., Spiridonova, N.I., The analysis of the state of stress in the neck of a tension specimen. Proc. ASTM, 46, 1-12, 1946.
[6] Beremin, F.M., Elastoplastic calculation of circumferentially notched specimens using the finite element method, Journal de Mécanique appliquée, 4(3), 307-325, 1980.
[7] François, D., Pineau, A., Zaoui, A., Comportement mécanique des matériaux, 2nd volume, HERMES edition, Paris, 1993. ISBN 2-86601-348-4.
[8] Youssef, S., Maire, E., Gaertner, R., Finite element modelling of the actual structure of cellular materials determined by X-ray tomography, Acta Materialia, 53(3), 719-730, 2005.
Effect of Porosity on the Fatigue Life of a Cast Al Alloy
Nicolas Vanderesse (a), Jean-Yves Buffiere (a), Eric Maire (a), Amaury Chabod (b)

(a) Université de Lyon – INSA de Lyon – MATEIS, UMR5510, Bâtiment Saint Exupéry, 20 Av. A. Einstein, 69621 Villeurbanne Cedex, France
(b) Centre Technique des Industries de la Fonderie, Sèvres, France
ABSTRACT A methodology has been developed to investigate the causes of fatigue fracture in the pressure-cast aluminium alloy Al Si9 Cu3 (Fe). Several samples have been tested; in each case porosity was the primary cause of failure. 3D tomographic images of the samples' porosity in the initial state have been used as a basis for finite element analysis of the stress concentration around each pore above a given minimal size. Morphological assessment of the pores and the finite element calculations allow investigation of the correlations between stress concentration and crack initiation. Results indicate that the pores causing failure have sizes located in the tail of the size distribution.

1. Introduction
Cast Al-Si alloys are widely used in the automotive industry to produce engine blocks, because of their high strength-to-weight ratio, their low processing cost and their ability to be cast in intricate shapes. However, this type of material contains microstructural defects resulting from the casting process, such as porosity, oxides and metallic inclusions, which strongly reduce the fatigue life and increase its variability. Close to the fatigue limit, previous studies have shown that fatigue crack nucleation occurs predominantly at pores (see for example [1]), which also have a detrimental effect on static mechanical properties. The volume fraction of porosity in a cast Al alloy typically ranges between 0.1% and 25% depending on the casting process used and the shape of the resulting product. A value as low as 1% (volume fraction) of porosity can lead to a reduction of 50% of the fatigue life and 20% of the fatigue limit compared with the same alloy with a similar microstructure but no pores [1]. To correlate the fatigue properties of cast Al alloys with the presence of porosity, the fracture surfaces of broken fatigue samples have been extensively studied by Scanning Electron Microscopy (SEM), in order to identify which defect(s) is (are) responsible for crack initiation. One major drawback of this type of characterization, however, is that (in the best cases) it only reveals the weak link within the pore distribution. When the tail of the distribution matters, as is the case in fatigue, comparisons with other samples therefore remain qualitative [2, 3] and statistical models are required to infer the actual pore population and to investigate its relationship with the fatigue life [4]. With the rapid development of X-ray microtomography in the last decade [5], this technique has been increasingly used for the characterization of cast materials because of its ability to provide fast and exhaustive information on porosity [6-10].
In this study, an Al-Si-Cu (A380) cast alloy has been tested in fatigue. SEM characterization of broken samples has been carried out to determine the crack initiation sites. The 3D distribution of pores in the fatigue samples was determined by X-ray tomography. Finite element computations, based on the 3D tomographic images, have been used to estimate the stress level around pores, taking into account their position in the sample, their size and their shape. Those results were used to investigate the ability of the pores to nucleate fatigue cracks.

2. Materials and methods
The studied material was an A380 alloy whose composition is given in Table 1. The samples were produced by permanent mould casting in an industrial mold using a 630 ton Buhler machine. The metal was injected at 700 °C with a pressure of 800 bars. With an injection speed between 40 and 50 m/s, the mold filling was completed in approximately 15 ms. The specimens were then machined in order to produce samples with a cylindrical gage length (roughness < 0.8 µm, diameter 3.89 mm and height ranging between 18 and 24 mm). The 0.2% yield strength of the cast Al material was found to be 90 MPa and its tensile strength 150 MPa. A value of 70 MPa has been measured for the endurance limit (staircase method, R = -1), with some variation depending on the volume fraction of porosity in the material.
T. Proulx (ed.), Optical Measurements, Modeling, and Metrology, Volume 5, Conference Proceedings of the Society for Experimental Mechanics Series 9999999, DOI 10.1007/978-1-4614-0228-2_8, © The Society for Experimental Mechanics, Inc. 2011
The fatigue tests presented here have been performed at room temperature at constant stress amplitude until failure. A stress ratio of 0.1 has been used in order to provide clearer evidence of crack initiation on the fracture surfaces. Three samples were mistakenly cycled below the endurance limit for several million cycles before being tested at a correct level, i.e. slightly above the fatigue limit (σmax ranging from 140 to 145 MPa). For this reason, the total fatigue life of the five samples has not been considered, and the analysis has instead focused on the ability of a pore to initiate a fatigue crack.

Table 1: Chemical composition of the studied material (weight %).

Element      Al       Si      Cu      Zn      Mg + Mn   Others
Composition  85.8 %   8.9 %   3.1 %   0.7 %   0.2 %     1.3 %
A laboratory tomograph was used for the 3D characterization of the fatigue samples [11]. For each tomographic scan, 720 images were acquired (exposure time 0.5 s) with a voxel size of 5 µm (a voxel is the 3D equivalent of the pixel in a 2D image). After reconstruction, a 16-bit image was obtained, which was first down-sampled to 8 bits and then cropped to dimensions of 850*850*1300 voxels (i.e. 6.5 mm in height). Depending on the sample, three or four scans were necessary to image the whole sample gage length. Five samples have been exhaustively characterized in several steps (see Table 2 for a list). First, a 3D image of the fatigue sample was obtained by tomography, from which a statistical analysis of the pores was performed. The samples were subsequently cycled until fracture. Their fracture surfaces were characterized using SEM, and a second tomographic image of the broken samples was recorded. From the 3D images, a Finite Element (FE) model of a region surrounding the crack was generated, allowing study of the stress level in this part of the sample. Image analysis of the stress concentration levels computed by the FE analysis was finally performed. This last step consisted in trying to establish a correlation between the morphological and spatial parameters of the pores, the stress concentration around them and the occurrence of crack initiation. For this purpose, the severity of the pores has been estimated by calculating the volume of the matrix stressed above the yield stress. As the FE analysis has been performed in the elastic framework, this quantity does not strictly correspond to a "micro-plastic zone" but rather refers to the influence zone of each pore. This pore influence zone has been analyzed following two approaches:
- The first one aimed at examining the links between the morphology of each pore and the size of its own influence zone, and is referred to hereafter as the pore-by-pore analysis.
- The second one investigated the links between the local pore fraction (in slices perpendicular to the tensile axis) and the influence zone density in the same slices. It is referred to hereafter as the slice-by-slice analysis.
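The 16-bit to 8-bit down-sampling and cropping step can be sketched as follows; the linear rescaling window is an assumption, since the paper does not state the gray-level mapping used:

```python
import numpy as np

def to_8bit(vol16, lo=None, hi=None):
    """Linearly rescale a 16-bit reconstructed volume to 8 bits.
    lo/hi default to the volume min/max; the actual gray-level window
    used by the authors is not stated, so this mapping is an assumption."""
    v = vol16.astype(np.float64)
    lo = v.min() if lo is None else lo
    hi = v.max() if hi is None else hi
    v = np.clip((v - lo) / max(hi - lo, 1e-12), 0.0, 1.0)
    return (v * 255.0 + 0.5).astype(np.uint8)   # round to nearest gray level

def crop_center(vol, shape=(1300, 850, 850)):
    """Crop a centred block of the requested (z, y, x) size from a volume."""
    starts = [(s - c) // 2 for s, c in zip(vol.shape, shape)]
    sl = tuple(slice(st, st + c) for st, c in zip(starts, shape))
    return vol[sl]
```

Down-sampling to 8 bits divides the memory footprint by two before the 850*850*1300 voxel crop, which is the practical motivation for this step.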
Table 2: Summary of the studied samples.

Sample   Pore volume fraction   Equivalent σmax for R = -1 (MPa)   Fatigue life (cycles)
36       0.60 %                 40 and 90                          4·10^6 and 113 190
164      0.62 %                 85                                 132 037
C48      0.46 %                 80                                 194 583
E29      0.57 %                 40 and 85                          6·10^6 and 183 093
F40      0.20 %                 80 and 85                          4·10^6 and 437 257
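Table 2 quotes an equivalent σmax for R = -1, but the mean-stress correction rule used is not stated in this excerpt. As an illustration only, the Smith-Watson-Topper (SWT) equivalence yields values of the right order for the tests run at R = 0.1:

```python
import math

def swt_equivalent_amplitude(sigma_max, R):
    """Smith-Watson-Topper equivalent fully reversed (R = -1) stress
    amplitude: sigma_ar = sqrt(sigma_max * sigma_a).
    Illustrative only; the correction rule behind Table 2 is not stated."""
    sigma_a = sigma_max * (1.0 - R) / 2.0   # stress amplitude of the cycle
    return math.sqrt(sigma_max * sigma_a)

# tests at R = 0.1 with sigma_max = 140 MPa give an equivalent
# fully reversed amplitude of roughly 94 MPa with this rule
print(round(swt_equivalent_amplitude(140.0, 0.1), 1))
```

For a fully reversed cycle (R = -1) the rule returns the amplitude itself, as expected.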
The Avizo software [12] was used to mesh the solid/air interface of the samples. This included the external shape of the specimen as well as the inner surface of the porosity. As each specimen contains a few thousand pores with a minimal volume of 403 µm3, it was not possible to mesh a complete sample in a single pass. Due to computation limitations, the FE analysis was restricted to a sub-block of 2 mm in height. This sub-volume was cropped from the initial block around the location of the crack. The generation of a 3D mesh in Avizo was performed by an advancing-front strategy. The interfaces between the different phases of the model were first triangulated and used as seeds for the iterative generation of the tetrahedral elements
that eventually meshed the material completely. The number of initial 2D elements had a strong effect on the final number of tetrahedral elements and on the complexity of the subsequent simulation. The pore geometry being much more complex than that of the sample's cylindrical surface, both types of interfaces had to be triangulated separately, and the corresponding meshes were exported as two separate STL files (text-format files), which were then concatenated into a single, global file. This latter was used to generate the 3D mesh. The average number of triangles required to mesh the sample's surface was about 15 000, whereas it varied between 100 000 and 800 000 for the pore surfaces. The number of 3D elements was further reduced by assigning them a target average size close to the dimensions of the 2D elements at the sample's surface. The 3D mesh was thus reasonably refined around the pores, and its number of elements ranged between 600 000 and 1 500 000 (Figure 1).
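The concatenation of the two exported meshes into a single global file is straightforward if the STL files are ASCII, since an ASCII STL file may contain several `solid ... endsolid` blocks; a minimal sketch (file names hypothetical, the actual exports came from Avizo):

```python
def concatenate_stl(paths, out_path):
    """Concatenate ASCII STL files into one multi-solid file.
    Each input keeps its own 'solid ... endsolid' block, which is how the
    sample surface and the pore surfaces can be merged into a global mesh."""
    with open(out_path, "w") as out:
        for p in paths:
            with open(p) as f:
                out.write(f.read().rstrip() + "\n")

# hypothetical file names:
# concatenate_stl(["sample_surface.stl", "pore_surfaces.stl"], "global.stl")
```

Binary STL files cannot be merged this way; they would first need to be converted to ASCII or parsed facet by facet.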
Fig. 1: 3D rendering of the mesh used for the mask surrounding the region of interest (left image: 15 000 elements) and for the pores contained in this region (right image: 181 000 elements).

The FE calculation was performed using ABAQUS Standard. While an accurate simulation of the mechanical behavior of the material should have taken cyclic plasticity into account, as a first approach an elastic calculation was carried out (E = 70 GPa and ν = 0.3) in order to test whether such a simple approach could give a valid estimator of the severity of the pores. The computation was static. A monotonic, uniaxial displacement was imposed on the sample's upper extremity, while the lower one was kept at a constant height (the nodes were nevertheless free to move laterally in the plane perpendicular to the loading direction). The value of the imposed displacement was fixed in order to match the maximal stress imposed during the fatigue tests. The results of the FE calculation were converted into an 8-bit volumetric image [13] where each of the 256 gray levels corresponded to a 3 to 5 MPa range, depending on the maximal value of the von Mises stress calculated by the simulation. This image was then downsized to match the size of the initial tomography block containing the pores. The stress values were thus averaged over voxels of 5*5*5 µm3. The statistical distribution of the stress within the meshed part of the material could be conveniently evaluated through the gray-level histogram of the 3D image. This histogram always showed some abnormally high values, which were the result of numerical singularities in the vicinity of very tortuous pores. In order to avoid this problem, the 3D images were binarized with a threshold value corresponding to the material's 0.2% yield stress (185 MPa). The resulting block showed white regions surrounding the largest pores (Figure 2). These regions correspond to the pore influence zones.
Each pore could thus be characterized by the volume of its own influence zone, which is related to the micro-plastic region likely to develop in the immediate vicinity of the pore. Since the computation was elastic, the actual size of the highly loaded regions around the pores was probably slightly different from that obtained with the simulation. Still, this approximation had no consequence on the validity of the analysis, which relied solely on comparisons between parameters. To complement the above-described analysis, the influence zones of the individual pores were also averaged over each slice in order to obtain the distribution of influence zone density (surface fraction) along the sample axis.
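The thresholding and influence-zone measurement described above can be sketched with standard image-analysis tools (a scipy-based illustration, not the authors' exact pipeline):

```python
import numpy as np
from scipy import ndimage

def influence_zones(stress, yield_stress, voxel_volume=5**3):
    """Binarize a 3D von Mises stress image above the yield stress,
    label the connected regions (the pore influence zones) and return
    their volumes plus the slice-by-slice area fraction along z."""
    above = stress > yield_stress
    labels, n = ndimage.label(above)                       # 6-connected 3D labelling
    volumes = ndimage.sum(above, labels, index=range(1, n + 1)) * voxel_volume
    slice_fraction = above.mean(axis=(1, 2))               # density of stressed matrix per slice
    return volumes, slice_fraction
```

With 5 µm voxels, `voxel_volume = 125 µm³`, so `volumes` gives each influence zone in µm³ and `slice_fraction` provides directly the quantity used in the slice-by-slice analysis.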
Fig. 2: Left: 3D rendering of the FE calculation (von Mises stress). Right: same image after thresholding above the 0.2% yield stress.

3. Results and analysis

The analysis of the reconstructed 3D images provided an exhaustive description of the pores present in the samples in terms of volume, surface, sphericity (ratio of the two former parameters), projected surface, distance to the surface, size of the individual influence zone and surface fraction of influence zone in each slice. All these parameters could be used to try to establish correlations with the ability of a pore to initiate a crack, as shown elsewhere [13]. Here we mainly focus on the last two parameters, which reflect the propensity of a pore to generate local plasticity either individually or more globally in a slice of the sample perpendicular to the loading axis. It is worth recalling that the local stress level (which determines the influence zone sizes) implicitly takes into account all the above-mentioned parameters. Figure 3 shows the volumes of the pore influence zones plotted against the volumes of the pores for two typical samples. As expected, large pores tend to induce large stress concentrations. For four of the investigated samples, the crack leading to failure was found to initiate at the largest pores (as in the case of sample C48). In the case of sample 36, however, the crack nucleated at a small pore intersecting the surface. The slice-by-slice analysis leads to a similar conclusion, as can be seen in Figure 4. In this figure, the surface fraction of matrix (on a surface perpendicular to the loading axis) with a stress higher than the yield stress has been plotted as a function of the surface fraction of pores for the same samples as those shown in Figure 3. Each point represents a slice of 5 µm in thickness.
As the location of the crack initiation could not be exactly determined by fractography or post-failure tomography within the sample gage length, the pore leading to failure is shown by several marks. Figure 4 shows that there is a correlation between the local density of the pores and the density of highly stressed zones. This is the result of the sample being more highly loaded in sections where the porosity was higher. Crack initiation seemed to occur in sections where the porosity was high, i.e. where the stress was high, with the exception of samples 164 and, to a lesser extent, F40 [13]. The relationship between the pore influence volume / surface fraction and the event of crack initiation varied according to the sample, as qualitatively summarized in Table 3. For each sample analysed, the "quality of correlation" relates to the position of the initiating pore within the relevant distribution.
Fig. 3: Plots of the pore influence zone size as a function of pore volume for samples C48 (top) and 36 (bottom). Pore-by-pore analysis. Stars indicate the initiating pore.

In samples C48, E29 and F40, the crack initiated at a pore which had a wide influence zone and a large volume, i.e. which was located in the tail of both the influence zone and the volume distributions. In these cases, the elastic calculation seems accurate enough to predict the location of crack initiation. The same can be said for the slice-by-slice analysis of samples 36, C48, E29 and, to a lesser extent, F40: the crack nucleated at a position rich in pores and showing a high density of influence zones. In other words, in these samples crack initiation occurred at a local maximum of the pore density. A poor correlation was found for sample 164, which did not comply with either analysis: this sample failed at a small pore with a modest influence zone, albeit located near the surface. These results underline the importance of the pore size distribution and spatial distribution for crack initiation. Indeed, the samples for which both analyses yield satisfying results are those in which the spatial distribution is markedly heterogeneous. Conversely, samples 164 and F40 were characterized by a rather flat distribution along the principal axis [13]. Interestingly, in sample 36 the crack initiated at a small pore which intersected the sample surface; its influence zone volume was moderate, yet considerably higher than that of internal pores of comparable size (Figure 3). In this case, the pore-by-pore analysis was not fully satisfying. As a whole, the proposed methodology allowed identification of the most probable zones for fatigue failure when the pore distribution was spatially heterogeneous. These critical zones were those of high porosity, and were directly related to a high density of stressed matrix.
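One simple way to quantify "the position of the initiating pore within the relevant distribution" is its percentile rank in the pore population; this metric is an illustration, not one used explicitly in the paper:

```python
import numpy as np

def percentile_rank(values, x):
    """Fraction of the population lying strictly below x: a pore located
    in the tail of the distribution has a rank close to 1."""
    values = np.asarray(values, dtype=float)
    return float((values < x).mean())

# synthetic pore volumes: the 500 um^3 pore sits in the tail (rank 0.8)
pores = np.array([10.0, 20.0, 30.0, 40.0, 500.0])
print(percentile_rank(pores, 500.0))
```

A high rank for both the volume and the influence-zone distributions would correspond to the "excellent" correlations of Table 3, while initiating pores with low ranks (such as the surface pore of sample 36) flag cases the elastic analysis misses.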
Fig. 4: Results of the slice-by-slice analysis. Plots of the density of influence zones against the surface fraction of pores along the axis of samples C48 (top) and 36 (bottom). The approximate location of the crack starting point is marked by stars.

Table 3: Correlation between the occurrence of crack initiation and the influence zone of the pores, considered either individually or slice by slice. Symbols used for the correlation quality: -: bad, *: poor, **: good, ***: excellent.

Sample                   36                              164            C48            E29            F40
Pore influence volume    -                               *              ***            ***            ***
Pore influence density   **                              -              ***            ***            **
Crack initiation site    Pore intersecting the surface   Internal pore  Internal pore  Internal pore  Internal pore
4. Conclusion

A partly automated treatment has been developed to analyse the causes of fatigue failure in cast aluminum samples containing porosity. It relies on the 3D characterization of the pores by micro-tomography, used as input for finite element analysis. A careful methodology for generating a refined mesh has been developed. The proposed methodology is based on the conversion of 3D images of the microstructure into FE data, and of the FE computation results back into a 3D image. This last step proves to be a fast, easy and reliable way of post-processing the results with powerful techniques from the image analysis field. The pore influence zone is proposed as a variable that represents the volume of matrix stressed above the yield stress. It has been treated on an individual, pore-by-pore basis, and on a local, slice-by-slice basis. On average, both give satisfying results for samples with a heterogeneous porosity and predict that cracks nucleate in regions with a high local porosity.
References
[1] Ødegard, J.A., Pedersen, K., Report No. 940811, Society of Automotive Engineers, Warrendale, PA, 1984.
[2] Zhu, X., Yi, J.Z., Jones, J.W., Allison, J.E., Metallurgical and Materials Transactions A, Vol. 38A, 1111, 2007.
[3] Yi, J.Z., Gao, Y.X., Lee, P.D., Flower, H.M., Lindley, T.C., Metallurgical and Materials Transactions A, Vol. 34A, 1879, 2003.
[4] Wang, Q., Jones, P., Metallurgical and Materials Transactions A, Vol. 38A, 615, 2007.
[5] Stock, S., Int. Mater. Rev., 53(3), 129, 2008.
[6] Buffiere, J.Y., Savelli, S., Jouneau, P.H., Maire, E., Fougeres, R., Mater. Sci. Eng. A, Vol. 316, 115, 2001.
[7] Lashkari, O., Yao, L., Cockcroft, S., Maijer, D., Metallurgical and Materials Transactions A, Vol. 40, 991, 2009.
[8] Hardin, R.A., Beckermann, C., Metallurgical and Materials Transactions A, Vol. 40, 581, 2009.
[9] Zhang, H., Toda, H., Hara, H., Kobayashi, M., Kobayashi, T., Sugiyama, D., Kuroda, N., Uesugi, K., Metallurgical and Materials Transactions A, Vol. 38, 1774, 2007.
[10] Kobayashi, M., Toda, H., Minami, K., Mori, T., Uesugi, K., Takeuchi, A., Suzuki, Y., J. Jpn. Inst. Light Met., Vol. 59, 5, 2009.
[11] Buffiere, J.-Y., Maire, E., Adrien, J., Masse, J.P., Boller, E., Exp. Mech., Vol. 50, 289, 2010.
[12] www.vsg3d.com. Accessed 05/03/11.
[13] Vanderesse, N., Maire, E., Buffiere, J.Y., Chabod, A., submitted to Int. J. of Fatigue, under revision Feb. 2011.
Fatigue mechanisms of brazed Al-Mn alloys used in heat exchangers

Aurélien Buteri (a,b), Julien Réthoré (c), Jean-Yves Buffière (a), Damien Fabrègue (a), Elodie Perrin (b), Sylvain Henry (b)

(a) Université de Lyon – INSA de Lyon – MATEIS, UMR5510, Villeurbanne, France
(b) Alcan CRV (Research Center), Voreppe, France
(c) Université de Lyon – INSA de Lyon – LaMCoS, UMR5259, Villeurbanne, France
ABSTRACT The proportion of aluminium alloys used in the automotive industry tends to increase as a consequence of the enforcement of tougher environmental regulations (minimization of vehicle weight). For example, thanks to their good thermal, corrosion and mechanical properties, aluminium alloys have steadily replaced copper alloys and brass for manufacturing heat exchangers in cars and trucks. Such components have been constantly optimized in terms of exchange surface area and, nowadays, this has led to heat exchanger Al components with a typical thickness of the order of 0.2 to 1.5 mm. With such small thicknesses, the load levels experienced by heat exchanger components have drastically increased, prompting an important research effort to improve the resistance to damage development during service life. This paper focuses on the resistance to fatigue damage of thin sheets of brazed co-rolled aluminium alloys used for manufacturing heat exchangers, and particularly on the mechanisms of fatigue crack initiation. Digital Image Correlation (DIC) has been used to monitor damage development during constant amplitude fatigue tests of thin (0.27 mm) samples. Fatigue cracks have been found to initiate from deformation bands whose presence can be correlated with solidification drops at the sample's surface resulting from the brazing process. X-ray tomography has been used to obtain the spatial distribution of the drops as well as their characteristics (height, surface...) over the sample gauge length. Those 3D data have been used to produce finite element meshes of the samples in order to assess the influence of the drops on fatigue crack initiation.

1. Introduction
The small thickness of thermal heat exchanger components improves the thermal performance through the increase of exchange surface area, but it leads to an increase of the in-service loads, which can be detrimental to the service life via, for example, fatigue damage development. Fatigue damage of brazed thin-sheet aluminium alloys for thermal heat exchangers has rarely been considered in the literature [1-4], and none of these works have dealt with thicknesses below 1.5 mm. The main technical issue for the investigation of damage development in fatigue samples with a sub-millimeter thickness is that their surfaces cannot be polished. Thus, classical optical/electron microscopy observations of fatigue damage initiation and development cannot be carried out. It has been suggested to use Transmission Electron Microscopy (TEM) to correlate dislocation structures resulting from cyclic mechanical loading of a 3000 series alloy on cylindrical gauge section samples with a diameter of 10 mm [4]. TEM preparation is, however, a time-consuming and destructive technique. In this study we present a different approach based on DIC [5-6] and 3D tomographic observations. Those techniques are used to identify fatigue crack initiation sites and to perform Finite Element (FE) calculations [7-9], which help to elucidate the fatigue mechanisms of the studied material.

2. Experimental procedure
An industrial material made of 3 co-rolled aluminium alloys (total thickness 0.27 mm) has been studied. In spite of its small thickness, the material exhibits a composite structure comprising a core material (3xxx alloy) and 2 clads (4xxx and 7xxx alloys). The lower-melting-point 4xxx alloy is used for producing the heat exchanger assembly during a brazing process, while the 7xxx alloy improves internal corrosion resistance. Industrial brazing conditions have been used to produce flat dog-bone samples exhibiting representative microstructures through the thickness and also at their surface.
Figure 1: Material configuration and compositions of the different aluminium alloys used

2.1 Fatigue tests and digital image correlation

The fatigue test samples investigated had a minimal rectangular section of 4.05 mm2 (15*0.27 mm2), according to the layout of figure 2(a). Fatigue tests were carried out on a hydraulic tension-tension fatigue machine under constant stress-amplitude conditions (σmax = 100 MPa, stress ratio = 0.1 and F = 10 Hz) at room temperature. Damage development at the sample surfaces has been monitored during cycling by DIC. A large-angle telecentric lens with a focal distance of 200 mm, associated with a CCD camera with a 1200*1600 pixel resolution (12*16 mm2), has been used. Pictures of the sample surface, prepared beforehand by applying a speckle pattern, are recorded every 150 cycles under constant lighting conditions (exposure time = 15 ms). A Matlab® post-treatment [5] of the pictures allows the measurement of the displacement field on the sample surface between two cycling steps and the determination of the equivalent strain and stress fields.
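The Matlab® post-treatment [5] is not reproduced here, but the core operation of subset-based DIC, recovering the displacement of a small subset between the reference and deformed images, can be sketched with an FFT cross-correlation (integer-pixel only; the sub-pixel interpolation needed for strain fields is omitted):

```python
import numpy as np

def subset_displacement(ref, defo):
    """Integer-pixel displacement of a subset between a reference and a
    deformed image, from the peak of their circular FFT cross-correlation."""
    a = ref - ref.mean()
    b = defo - defo.mean()
    corr = np.fft.ifft2(np.fft.fft2(a).conj() * np.fft.fft2(b)).real
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    # map wrap-around indices to signed shifts
    if dy > ref.shape[0] // 2:
        dy -= ref.shape[0]
    if dx > ref.shape[1] // 2:
        dx -= ref.shape[1]
    return int(dy), int(dx)
```

Applied to every speckle subset of the image pair, this yields the displacement field from which the strain field is then derived; practical DIC codes add sub-pixel refinement and shape functions on top of this.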
Figure 2: (a): sample geometry. (b): 3D rendering of the sample surface showing clad solidification drops

2.2 Tomography

X-ray tomography is a 3D imaging technique based on the difference in absorption of the various constituents of the material, which allows the inner structure of an object to be visualized [9]. This technique has been used here to characterize the surface roughness of the fatigue samples (Figure 2(b)) and to investigate the probable influence of the local microstructure on crack initiation. The principle of the technique and the experimental setups used here are described elsewhere [7]. Two different voxel size/imaging modes have been used: 13 µm/voxel (laboratory X-ray source: absorption mode) and 0.7 µm/voxel (synchrotron X-ray source: absorption + phase contrast).

2.3 Finite element meshing and tensile test simulation parameters

A surface and volume mesh with quadratic tetrahedra is created from the reconstructed 3D images of the fatigue samples with the Amira® software (Figure 3). The mesh corresponding to the sample studied here contains 200 000 quadratic tetrahedra. A Java plugin [7-9] allows the mesh to be imported into Abaqus®. The boundary conditions have been chosen to prevent displacements of the sample edges except in the load direction. The simulation consists in a maximal stress approach under elastic conditions (E = 70 000 MPa, ν = 0.33), applying a vertical displacement of 0.1 mm, which corresponds to a strain of 0.2% as a first approach. The multi-material aspect of the sample has not been taken into account in the FE calculations.
Figure 3: 3D view of the volume meshing with quadratic tetrahedrons of the fatigue sample
3. Results and analysis
The fatigue lives (Wöhler curves) and the damage mechanisms (crack initiation sites and propagation) have been characterized.
3.1 Fatigue mechanisms of brazed material
The sample analysed by X-ray tomography and shown in section 2.1 (Figure 2(b)) was submitted to 933 744 fatigue cycles. The test was stopped before fracture, when the sample contained a 1 mm long fatigue crack. Figures 4(a) and 4(b) present the zone of interest (ZOI) used for DIC at the reference (750 cycles) and final states. The strain (Exx) fields at specific steps of the fatigue lifetime (47%, 75%, 97% and 100%) are presented in figures 5(a-c). Large localized plastic deformation leading to large displacements (figure 5(c)) appears during the last part of the cycling, which probably corresponds to stable crack propagation. Until the last few cycles (97% of the fatigue life, i.e. the last 28 000 cycles), no strain heterogeneity can be visualized by DIC. The presence of a crack can be evidenced by using a discrepancy map (Figure 6(c)), as described in [10].
Figure 4: Zone Of Interest (ZOI) for the digital image correlation (DIC) at: (a) the initial state (750 cycles) and (b) the final state (933 744 cycles). The black paint dots (speckle) are used to calculate the displacement field by DIC. The fatigue crack can easily be seen in the final state image (b).
The shadow clearly visible near the crack in figure 4(b) highlights the presence of a Clad Solidification Drop (CSD). Systematic fractographic analysis of the fatigue samples confirms the presence of a CSD close to the crack initiation zone in most cases. The stress concentration induced by the CSD roughness of the sample surface is likely to promote local plasticity and eventually induce crack initiation. The influence of CSDs on the local (elastic) stress distribution during cycling can be studied in detail from the 3D FE meshes generated from tomography; this is described in the next section.
Figure 5: Strain field Exx (a– c) obtained by DIC at 3 different steps of the fatigue test, respectively 75, 98 and 100% of the lifetime. (Scales - Exx: *100%)
Figure 6: Exx (a), Ux (b) and discrepancy map (c) obtained by DIC at 100% of the lifetime (Scales - Ux: ×10 µm and Exx: ×100%)
3.2 Study of the geometric influence of clad solidification drops by a maximum-stress approach
Figure 7 shows the Von Mises stress distribution in the fatigue sample described in the two previous sections. A stress concentration zone can clearly be seen near the different CSDs, among which is the one responsible for crack initiation. Note that for the model considered here (homogeneous material) the stress concentration zones appear to be localized on the surface. Moreover, the influence of the CSD geometry has been quantified through the important stress concentration it induces. Figure 7(b) presents the distribution of the maximum values of the stress along the loading direction (σ22). It can be seen that the CSD responsible for crack initiation is the one which induces the maximum value of σ22. This situation has been consistently found for 4 of the 5 samples studied so far.
Figure 7: FE calculation of local stress levels in the fatigue samples based on a microstructurally realistic 3D FE mesh generated from tomographic data. (a): Zoom showing the distribution of σ22 stresses in a zone close to the initiation site of the crack, which led to failure after 934 000 cycles at a maximum fatigue stress of 100 MPa. (b): Distribution of the σ22 values induced by the different CSDs present in the sample gauge length; the value corresponding to the initiating CSD is highlighted.
In the cases where crack initiation could not be correlated with a local maximum value of the σ22 stress, inclusions or porosity resulting from the brazing process have been detected at the initiation site by SEM inspection and/or tomographic inspection of the fracture surface (high-resolution X-ray tomography - ESRF), as illustrated in figure 8.
Figure 8: Microstructure visualisation around one fatigue crack by high-resolution X-ray tomography - A CSD (Clad Solidification Drop) is observed around the fatigue crack, as well as some porosity (white arrow).
4. Conclusions
From these results it can be inferred that in this brazed material, fatigue crack initiation is the result of an interaction between high values of the local tensile stresses (resulting from the surface roughness), which induce intense plastic activity as evidenced by DIC measurements, and a local microstructure which further enhances the geometrical stress concentration effect. The tomographic data make it possible to analyze the scatter in fatigue lives at a given stress level on the basis of the CSD distribution in different samples. Preliminary results confirm that longer (resp. shorter) fatigue lives can be correlated with smaller (resp. larger) CSD/stress levels. The results obtained give a clear indication that new alloys and/or brazing fluxes that reduce the presence of CSDs are expected to greatly enhance the fatigue resistance of brazed assemblies in heat exchangers. Preliminary results obtained with different brazing conditions confirm this trend.
References
[1] X.X. Yao, R. Sandström and T. Stenqvist: Mater. Sci. Eng. A267 (1999) 1-6
[2] J-K. Kim and D-S. Shim: Int. J. Fatigue 22 (2000) 611-618
[3] U. Zerbst, M. Heinimann, C. Dalle Donne and D. Steglich: Eng. Fract. Mech. 76 (2009) 5-43
[4] H. Yaguchi, H. Mitani, K. Nagano, T. Fujii and M. Kato: Mater. Sci. Eng. A315 (2001) 189-194
[5] T. Elguedj, J. Rethore and A. Buteri: Isogeometric analysis for strain field measurements. Comput. Methods Appl. Mech. Eng. 200 (2011) 40-56
[6] J. Rethoré, F. Hild and S. Roux: Comput. Meth. Appl. Mech. Eng. 196 (2007) 5016-5030
[7] J-Y. Buffière, P. Cloetens, W. Ludwig, E. Maire and L. Salvo: M.R.S. Bulletin 33 (2008) 611-619
[8] O. Caty, E. Maire, S. Youssef and R. Bouchet: Acta Mat. 56 (2008) 5524-5534
[9] O. Caty, « Fatigue des empilements de sphères creuses métalliques », PhD thesis, INSA Lyon, 2008
[10] A. Buteri, J.-Y. Buffiere, D. Fabregue, E. Perrin, J. Rethoré and P. Havet: Fatigue mechanisms of brazed Al-Mn alloys used in heat exchangers. Proceedings of the 12th International Conference on Aluminium Alloys (2010)
Three Dimensional Confocal Microscopy Study of Boundaries between Colloidal Crystals
E. Maire1, M. Persson Gulda, N. Nakamura2, K. Jensen, E. Margolis, C. Friedsam, F. Spaepen
Harvard University, School of Engineering and Applied Science, Cambridge, MA 02138
ABSTRACT
Colloidal crystals were grown on flat or patterned glass slides. The structure of the grains and their defects was first visualized by 3D confocal microscopy and then characterized using simple geometric measurements. Crystals grown on a flat surface maintained a layered structure induced by the close-packed planes. In the case of the [100] Σ5 grain boundary, the presence of particles in interlayer positions was established.
1 Introduction
The grain-level microstructure of a material influences a wide range of material properties, including strength, toughness and corrosion resistance. For that reason, understanding and controlling the structure and evolution of grain boundaries is one of the central tasks of materials science. Studying grains at the atomic level, moreover, is not an easy task. To aid in this, we used colloidal suspensions as model systems that form crystals. In recent research, colloids have been used to model atomic or molecular systems since they form many of the same phases. They can be used as model glasses as well as crystals [1, 2]. A colloidal system has two distinct phases: a dispersed phase and a continuous one. The dispersed phase consists of small solid particles, on a nano to macro scale, which are dispersed evenly through the continuous fluid phase. The particles used in this study interact as hard spheres. When these particles sediment onto a flat surface, they can form crystals; when they sediment onto an irregular, rough surface, they can form amorphous structures. For random close-packing, the structural paradigm for the amorphous phase, the volume fraction of solid, f, is about 0.63, and for the close-packed crystals, f is about 0.74.
The crystals often contain defects, and the focus of the present paper is on grain boundaries. The main purpose of the paper is to show how the crystals can be grown and imaged in three dimensions (3D) using confocal microscopy.
2 Experimental procedure
2.1 Colloidal suspension
Silica particles (diameter 1.55 µm, density 2.0 g/cm³, mass 3.9 × 10⁻¹⁵ kg) were suspended in a water - 62.8 vol.% dimethylsulfoxide (DMSO) solution that matched the index of refraction of the silica and had a density of 1.10 g/cm³. The average velocity of Brownian motion is given by:
vB = (3kBT/m)^(1/2)   (1)
where kB is Boltzmann's constant, T the temperature and m the mass of the particle. For our case, this gives vB ≈ 2 × 10⁻³ m/s. The gravitational settling velocity of the particle is given by

vS = VP Δρ g / (6π r η)   (2)
where VP and r are, respectively, the volume and radius of the particle, Δρ is the density difference between particle and fluid, and η is the viscosity of the fluid, which is about 10⁻² Pa·s. This gives in our case vS ≈ 10⁻⁶ m/s, which satisfies the condition for a colloidal system (vB ≫ vS). The index match makes the system optically transparent, which allows investigation by optical confocal microscopy at large distances into the sample. Contrast between particles and solution was achieved by adding fluorescein dye to the solution. The index match also minimizes the van der Waals forces between the particles, which therefore interact like hard spheres.
1 Present address: INSA-Lyon, MATEIS UMR5510, 25 av. Capelle, 69621 Villeurbanne, France
2 Present address: Graduate School of Engineering Science, Osaka University, 1-3 Machikaneyama, Toyonaka, Osaka 5608531, Japan
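The two velocity estimates can be reproduced numerically from the quantities given above (room temperature is assumed; the settling estimate is sensitive to the viscosity value used):

```python
import math

# Reproduce eqs. (1) and (2) with the parameter values quoted in the text.
kB = 1.380649e-23          # Boltzmann constant, J/K
T = 298.0                  # room temperature, K (assumed)
m = 3.9e-15                # particle mass, kg
r = 1.55e-6 / 2            # particle radius, m
drho = (2.0 - 1.10) * 1e3  # particle-fluid density difference, kg/m^3
eta = 1e-2                 # fluid viscosity, Pa.s (value quoted in the text)
g = 9.81                   # gravitational acceleration, m/s^2

vB = math.sqrt(3 * kB * T / m)                # eq. (1): rms Brownian velocity
VP = (4.0 / 3.0) * math.pi * r**3             # particle volume
vS = VP * drho * g / (6 * math.pi * r * eta)  # eq. (2): Stokes settling velocity
print(f"vB = {vB:.1e} m/s, vS = {vS:.1e} m/s")  # vB ~ 2e-3 m/s, and vB >> vS
```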
T. Proulx (ed.), Optical Measurements, Modeling, and Metrology, Volume 5, Conference Proceedings of the Society for Experimental Mechanics Series, DOI 10.1007/978-1-4614-0228-2_10, © The Society for Experimental Mechanics, Inc. 2011
2.2 Template
The crystal growth can be controlled by slowing down the sedimentation of the colloids (described above) and by the use of a template. The controlled growth of the layers was achieved by diluting the initial suspension by a factor of two. A template is a patterned substrate that directs the settling of the particles in such a way that the structure, orientation, and size of the crystal are pre-determined. This process is called "colloidal epitaxy." [3] The templates were fabricated in the following way. A positive mask of the targeted pattern (a set of holes) was first printed on a chrome-coated substrate (the chrome was removed using a Heidelberg™ mask maker). A thin layer of photoresist was then spin-coated on a primer-coated glass slide (the substrate) and polymerized (3 minutes at 110°C). This assembly was then exposed using a mask aligner (Suss™) for 2-2.5 sec. The time varied depending on the size of the holes in the chrome mask, the age of the photoresist, the calibration of the mask aligner, and the premixed developer. The slides were then developed for 60 to 90 sec in a photoresist developer (a mixture of 1 part MF351 and 5 parts water). The silica was then etched in a reactive ion etcher (RIE). The plasma needed to be stable before the sample was inserted into the clean chamber. The reactive ion etching ran for about 10 minutes. Finally, to remove the remaining photoresist layer, the sample was exposed in the same RIE to an oxygen cleaning plasma, which cleans off the photoresist without etching the silica. An example of the resulting template, imaged by confocal microscopy, can be seen in Figure 1. In this example template, we have attempted to etch holes with a gradient of sizes.
Figure 1 : Confocal image of a pattern etched in a silica glass microscope slide (the white phase is the glass). The pattern has holes of different diameters.
2.3 Confocal microscope
In a laser scanning confocal microscope [4], light is focused through a microscope objective where it excites fluorescence in the sample. The emitted light is retraced through the microscope and passed through a pinhole in the conjugate focal plane of the lit spot in the sample. This allows only light from that spot to pass; light from all other directions, for example from multiple scattering or fluorescence, is blocked. The light intensity is recorded by a detector and stored as the spot is scanned through the volume of the sample. The stored information can then be displayed directly as a three-dimensional image or be processed into a reconstructed image, in which computer graphics is used to redraw the spheres. When the refractive index of the fluid is matched to that of the spheres, light can penetrate quite deeply into the sample with little scattering, so that tens of planes of a crystal can be imaged. The lateral resolution (perpendicular to the optical axis) is about 200 nm, typical for optical microscopy. Because of limitations of the optics, the vertical resolution is only 500 nm. Application of image analysis techniques [5, 6] improves the resolution for the location of the centers of the particles by about an order of magnitude. A typical time to scan a stack of planes through a sample is a few seconds.
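The order-of-magnitude gain in locating particle centers can be illustrated with an intensity-weighted centroid, in the spirit of the tracking methods of [5, 6]. This is a minimal sketch on a synthetic blurred particle; real trackers also band-pass the image and reject spurious maxima:

```python
import numpy as np

# Locate a particle center to sub-pixel accuracy: integer-pixel estimate
# from the brightest pixel, then refinement by intensity-weighted centroid.
y0, x0 = 7.3, 6.8                       # "true" center, pixels (synthetic)
yy, xx = np.mgrid[0:15, 0:15]
img = np.exp(-((yy - y0)**2 + (xx - x0)**2) / (2 * 2.0**2))  # blurred particle

peak = np.unravel_index(np.argmax(img), img.shape)  # integer-pixel estimate
w = img / img.sum()                                 # normalized weights
yc, xc = (w * yy).sum(), (w * xx).sum()             # sub-pixel centroid
print(int(peak[0]), int(peak[1]), round(yc, 1), round(xc, 1))  # -> 7 7 7.3 6.8
```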
Figure 2(a) and (b) show a crystal imaged using this technique and presented in the form of gray-level confocal images. Fig. 2(a) shows an x-z plane, orthogonal to the x-y plane shown in Fig. 2(b). These images are numerically extracted from the 3D data set recorded in the confocal microscope. Note that this data set is collected by acquiring 2D images (x-y images in the reference frame of the microscope) such as Figure 2(b). These are roughly parallel to the plane of the glass substrate. The acquisition is repeated by scanning along the third perpendicular direction, z, aligned with gravity. The spacing along the z direction is chosen to be equal to the pixel size in the x-y plane, so that the stacking of these images is an isotropic reconstruction of the 3D structure of the crystal. The imaged crystal was grown along the z direction onto an etched glass pattern of a [100] Σ 5 grain boundary. The location of the grain boundary is indicated by a dark line in figure 2(a). In Figure 2(b), extracted right at the middle plane of the first layer of deposited particles, the grain boundary is clearly visible. A white line has been drawn to indicate the location where Figure 2(a) was extracted. Figure 3 is a 3D rendering of the same data set after binarization, obtained by thresholding the particles and applying 100% transparency to the voxels located in the liquid phase.
Figure 2 : Two gray-level confocal images (slices) of a [100] Σ 5 bi-crystal grain boundary grown on an etched silica template. (a) x-z slice; (b) x-y slice. The location of the grain boundary is indicated by a dark vertical line in figure (a). It is clearly observable in figure (b), which is taken at the level of the first layer of particles, just above the glass pattern. The light gray line in figure (b) shows the location of (a).
Figure 3 : Grazing incidence view of a 3D rendering of the bi-crystal shown in Figure 2. The diameter of the silica particles is 1.55 µm.
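The binarization step behind such a rendering can be sketched as follows (toy gray-level volume; in practice the threshold would be chosen from the image histogram):

```python
import numpy as np

# Binarize a gray-level volume by thresholding, then build an opacity map
# that makes one phase fully transparent, as done for the rendering above.
rng = np.random.default_rng(3)
volume = rng.normal(60, 5, (40, 64, 64))                        # dim phase
volume[10:30, 16:48, 16:48] = rng.normal(140, 5, (20, 32, 32))  # bright phase

particles = volume > 100               # binary mask after thresholding
alpha = np.where(particles, 1.0, 0.0)  # opacity: 0 = fully transparent voxel
print(int(particles.sum()))            # -> 20480 (the 20 x 32 x 32 block)
```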
3 Results and discussion
3.1 Structure of columnar polycrystals on a flat glass slide
When settling slowly on a featureless flat glass slide, the colloidal particles spontaneously form crystals by successive deposition of hexagonal close-packed planes. Crystals nucleate in several places with random orientations and grow into grains, separated by grain boundaries. Two different polycrystalline samples were grown using this procedure. The first was grown using a small amount of solution in the container, while the other one was grown using three times this amount. This resulted in two samples of different thickness. Figure 4 shows an x-y plane of the thin sample. It reveals the polycrystalline nature of the deposited crystals. The thin sample had only 9 deposited layers and the structure is rather well preserved through the whole thickness. Note that the image also reveals vacancies and defects in the form of agglomerated particles. Figure 5 shows an x-z slice of the thick sample. In this direction again the different grains can be observed, with the structure becoming more disordered as the distance from the substrate increases. Figures 6(a) and (b) again show the typical structure of these two crystals observed in the x-z direction, but to account for the total volume analyzed, the gray level was averaged along the
entire third perpendicular direction (y) and projected onto the 2D images. This averaging clearly highlights the order of the structure along the z direction, in the form of the successive close-packed planes. It also shows that, since the sample is imaged from the bottom, the quality of the images decreases as the distance to the substrate increases, because of increasing scattering of the light, perhaps due to imperfect index matching.
Figure 4: Confocal imaging of a layer of a thin polycrystalline sample close to the deposition surface.
Figure 5 : Confocal view of an x-z cut of a thick polycrystalline sample showing the different grains. The structure becomes more disordered at a larger distance from the substrate.
Figure 6 a). x-z view of the thin sample. The intensity is averaged over the entire third perpendicular direction (y), to form a projection.
Figure 6 b). x-z view of the thick sample. The intensity is averaged over the entire third perpendicular direction (y), to form a projection.
3.2 [100] Σ 5 grain boundary
The structure of the Σ5 boundary shown in Figs. 2-3 is analyzed in more detail here. The bi-crystal consists of only four deposited layers (see Fig. 2(a)). Figure 7 shows the first layer of this model grain boundary. The two superimposed sets of lines show what the grain boundary structure should be if each hole in the pattern were filled with a particle, as described in [7]. The atom on the left marked with an arrow is shared between two possible sites, which can be considered a defect. The set of white arrows points to atoms which are present where they should not be according to the model created for this grain
boundary. Figure 8 shows a 3D rendering of the positions of these atoms, replotted after the determination of the position of their centers of gravity. In this figure, the atoms are colored according to the value of their distance z to the substrate. All the small blue points are located in layers. The larger colored points (red, yellow and green) have been determined, thanks to their value of z, as being in interlayer positions. The figure shows that, apart from some defects, many atoms in these interlayer positions are located close to the grain boundary, which is in the plane located at x = 12.5 microns and is indicated by a dashed line in the figure.
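The layer/interlayer classification just described can be sketched from the z coordinates alone. The numbers below are hypothetical (the paper determines the layer positions from the data itself):

```python
import numpy as np

# Classify particles as "in-layer" or "interlayer" from their z coordinate,
# assuming close-packed layers at a known spacing (assumed value).
rng = np.random.default_rng(1)
spacing = 1.27                          # interlayer spacing, microns (assumed)
z = np.concatenate(
    [rng.normal(k * spacing, 0.05, 200) for k in range(4)]  # 4 tight layers
    + [np.array([0.6, 1.9, 3.2])]       # three particles between layers
)

frac = (z / spacing) % 1.0              # fractional position within a period
dist = np.minimum(frac, 1.0 - frac)     # distance to nearest layer plane
interlayer = dist > 0.25                # more than a quarter-period away
print(int(interlayer.sum()))            # -> 3
```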
Figure 7 : Structure of the [100] Σ 5 grain boundary. The black arrow on the left indicates a location where a single atom is present where there should be two sites. The white arrows show sites where atoms are present at places they should not be in the ideal Σ 5 hard sphere structure. [7]
Figure 8 : 3D view of the positions of atoms as seen from the side of the grain boundary. The large atoms are in interlayer positions. Apart from a few defects, most of the interlayer atoms are located at the grain boundary, which is in the plane defined at x = 12.5 microns, indicated by the dashed black line.
4. Conclusions and perspective
This paper shows some 3D images of grain boundaries in colloidal crystals grown on glass microscope slides. The particles are silica spheres, monodisperse with a diameter of 1.55 µm. When deposited on a flat surface, grain boundaries form due to
the aggregation of the particles in the form of close-packed planes. The successive deposition of close-packed planes is preserved over a long distance from the substrate, but a side view of a tall sample shows that the grain structure becomes more and more disordered. The paper also shows that colloidal epitaxy of bi-crystals is feasible, as exemplified in the case of a [100] Σ 5 grain boundary between two face-centered cubic crystals. The boundary contains atoms in interlayer positions. Future work will focus on the study of the mobility of the atoms at these grain boundaries.
Acknowledgments
We thank David Weitz and the Weitz group for much help and many discussions. This work was supported by the National Science Foundation through the MRSEC and REU programs.
References
[1] P. Schall, I. Cohen, D.A. Weitz and F. Spaepen, Nature 440 (2006) 319.
[2] P. Schall, D.A. Weitz and F. Spaepen, Science 318 (2007) 1895.
[3] A. van Blaaderen, R. Ruel and P. Wiltzius, Template-directed colloidal crystallization, Nature 385 (1997) 321-324.
[4] V. Prasad, D. Semwogerere and E.R. Weeks, J. Phys.: Condens. Matter 19 (2007) 113102.
[5] J.C. Crocker and D.G. Grier, J. Coll. Int. Sci. 179 (1996) 298.
[6] E.R. Weeks, J.C. Crocker, A.C. Levitt, A. Schofield and D.A. Weitz, Science 287 (2000) 627.
[7] Publication #25 at http://seas.harvard.edu/matsci/people/fspaepen/Complete.html
Scale Independent Fracture Mechanics
Sanichiro Yoshida, Diwas Bhattarai, Tatsuo Okiyama and Kensuke Ichinose
Southeastern Louisiana University, Department of Chemistry and Physics, SLU 10878, Hammond, LA 70402, USA, [email protected]
Tokyo Denki University, Department of Mechanical Engineering, 2-2 Kanda-Nishiki-cho, Chiyoda, Tokyo 101-8475, Japan
ABSTRACT
Fracture mechanics is considered from the viewpoint of a field theoretical approach based on the physical principle known as gauge invariance. The advantage of this approach is that it is scale independent and universal. All stages of deformation, from the elastic stage to the fracturing stage, can be treated on the same theoretical foundation. A quantity identified as the deformation charge is found to play a significant role in the transition from plastic deformation to fracture. Theoretical details, along with supporting experimental results, are discussed.
1. Introduction
Fracture is initiated at the atomistic scale and develops to the final, macroscopic failure of the object. By nature, it is an interscale phenomenon. However, most theories available to date are scale dependent; quantum mechanics, dislocation theories, continuum mechanics and fracture mechanics all work well at their individual scale levels, but they do not describe the development across scale levels. For a full understanding of fracture, it is important to treat the dynamics independently of the scale level. In this respect, a field theoretical approach proposed by Panin et al. [1] as part of the general theory of deformation and fracture called physical mesomechanics [2] has a great advantage. Based on a fundamental physical principle known as local (gauge) symmetry [3], this approach is capable of describing deformation dynamics (i.e., the form of the force) without relying on empirical concepts or phenomenology such as experimentally determined constitutive relations; thus, by nature, the formalism is scale independent and universal. According to this approach, on entering the plastic regime a material loses longitudinal elasticity but gains transverse elasticity. At the same time, the longitudinal effect becomes energy dissipative [4, 5]. Consequently, the displacement field in the plastic regime is characterized as a decaying transverse wave phenomenon.
Previously, we applied this formalism to various aspects of deformation and fracture of solid-state materials, and verified the theory with experiments based on optical interferometry. The decaying transverse wave of the displacement field has been experimentally observed, and it has been confirmed that fracture indeed occurs when the transverse wave decays completely [6]. In these dynamics, a quantity defined as the deformation charge [7], which is closely related to the charge of symmetry associated with the gauge symmetry [3], plays an important role. In particular, it appears that materials fracture when the deformation charge stops flowing. In this paper, we elaborate on the dynamics associated with the deformation charge. In addition, an attempt is made to explain a recent experiment [8] on a notched specimen based on this formalism. Quite interestingly, the experiment clearly shows the deformation charge, and its behavior toward fracture is in complete consistency with the theory.
T. Proulx (ed.), Optical Measurements, Modeling, and Metrology, Volume 5, Conference Proceedings of the Society for Experimental Mechanics Series, DOI 10.1007/978-1-4614-0228-2_11, © The Society for Experimental Mechanics, Inc. 2011
2. Theory
2.1 Field equations
The basic postulate of the mesomechanical approach is that when a material enters the plastic regime under the influence of an external load, the deformation is still locally linear-elastic. The formalism resulting from this postulate is described in detail elsewhere¹. In short, by applying the physical principle known as gauge invariance [3, 5] to the displacement field of materials under plastic deformation, the following field equations are obtained [4]:

∇⋅v = j₀   (1)
∇×ω = -(1/c²) ∂v/∂t - j   (2)
∇×v = ∂ω/∂t   (3)

Here v and ω are the translational and rotational components of the displacement field, related to each other as indicated by eq. (3). j₀ and j, appearing on the right-hand side of eqs. (1) and (2), are the temporal and spatial components of the so-called charge of symmetry. c is the phase velocity of the spatiotemporal variation of the field¹.
Eqs. (1) and (2) yield a wave equation of the following form:

∂²v/∂t² - c²∇²v = -c²(∇j₀ + ∂j/∂t)   (4)
In the plastic regime, eq. (4) represents a decaying transverse wave, and the phase velocity can be expressed in terms of the density ρ and the shear modulus G as follows:

c = (G/ρ)^(1/2)   (5)
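As a numerical illustration of this phase-velocity expression (handbook values for a carbon steel; the paper gives no numbers here), c = (G/ρ)^(1/2) evaluates to the familiar shear-wave speed:

```python
import math

# Shear-wave phase velocity c = sqrt(G/rho) for typical carbon steel values.
G = 80e9       # shear modulus, Pa (assumed handbook value)
rho = 7850.0   # density, kg/m^3 (assumed handbook value)
c = math.sqrt(G / rho)
print(round(c))  # -> 3192 (m/s)
```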
With eq. (5) substituted and the terms rearranged, eq. (2) can be put in the following form [4]:

ρ ∂v/∂t = -G∇×ω - Gj   (6)

The left-hand side of eq. (6) is the product of the mass and the acceleration of a unit volume. The right-hand side represents the external force acting on the unit volume, where the first term is the transverse force and the second term is the longitudinal force. In the plastic regime, the transverse force is a restoring force associated with the rotational displacement ω of the local region and the shear modulus G, and the second term is the longitudinal, energy-dissipating force [4]. Here the first term is responsible for the transverse wave characteristics of displacement in the plastic regime [6]. It has been shown that eq. (6) is valid for the linear elastic regime [9]; in that case, the first term on the right-hand side becomes null and the second term represents the longitudinal elastic force. In addition, the phase velocity (5) becomes the well-known expression of the square root of the ratio of the Young's modulus to the density.
2.2 Significance of charge in fracture
The symmetry charge plays a significant role in fracture mechanics. From the law of mass conservation applied to a unit volume, the temporal change in the density is equal to the negative divergence of the velocity field:

(1/ρ) ∂ρ/∂t = -∇⋅v   (7)
¹ A detailed discussion will be found in a paper scheduled to be published in the Journal of Strain Analysis (the volume number has not been assigned).
With the use of eq. (5), eq. (2) can be written as follows:

∇×ω = -(ρ/G) ∂v/∂t - j   (8)
Application of the divergence to eq. (2), with the use of eq. (5) and the mathematical identity ∇⋅(∇×ω) = 0, leads to the so-called equation of continuity:

∂j₀/∂t + c²∇⋅j = 0   (9)

Eq. (9) indicates that c²j is the flow of j₀, allowing us to put it in the following form:

c²j = j₀v   (10)

With eq. (7), eq. (10) can further be rewritten as

j = -(1/(ρc²)) (∂ρ/∂t) v   (11)
Here the temporal change in density can be interpreted as the generation of dislocations [10]. Being associated with the net flow into the unit volume, this change in density is in the same direction as the local velocity v. Thus, the corresponding longitudinal force can be put in the following form:

Gj = -(∂ρ/∂t) v   (12)

Being proportional to the velocity, the longitudinal force Gj is by nature energy dissipating.
2.3 Transition to fracture
The above argument indicates the following scenario of transition from elastic deformation to fracture of initially linearly elastic materials [7, 11]. When a material enters the plastic regime, it loses the longitudinal elastic force and, instead, gains the transverse restoring force represented by the first term on the right-hand side of eq. (6). At the same time, the longitudinal force becomes energy dissipative, as represented by eq. (12). Based on this observation, the transition from the elastic regime to the plastic regime can be characterized by a transverse restoring force and a longitudinal dissipative force proportional to the local velocity. Note that this is a local effect; even if the stress-strain curve is before the yield point (i.e., in the linear regime) and therefore the specimen is considered to be globally elastic, it is possible that the deformation is locally plastic. There are a number of experimental observations that support this interpretation [12]. While the mechanism of energy dissipation via Gj is effective, the work done by the external force causing the deformation, such as the work done by a tensile machine, is partially dissipated in this fashion and partially stored as the rotational elastic energy associated with the restoring force. As the deformation develops, the material tends to lose these mechanisms. Some theoretical considerations [11] and experimental observations [6] indicate that it is likely that the material loses the restoring mechanism first, causing the transverse wave to decay, and then enters the final phase where the
dissipative force becomes null as well. Thus the transition from late plastic deformation to fracture can be characterized as the initial phase,

G∇×ω = 0 and Gj ≠ 0   (13)

and the final phase,

G∇×ω = 0 and Gj = 0   (14)
3. Experimental
A number of experiments [4-9, 12] have been conducted to test the theory discussed above. In particular, observation of in-plane displacement with the use of an optical interferometric technique known as Electronic Speckle-Pattern Interferometry (ESPI) provides significant information. In this section, recent results of ESPI applied to the tensile analysis of a notched specimen are presented. Figs. 1 and 2 illustrate the ESPI setup and the dimensions of the carbon steel specimen used in the experiment [8]. The ESPI was a dual-beam setup using a laser source of 532 nm wavelength and was sensitive to vertical in-plane displacement of the specimen. The interferometric image was recorded with a CCD (Charge-Coupled Device) camera continuously during the tensile experiment at a rate of 100 frames/s. The specimen was loaded vertically at the through holes shown in Fig. 2 until it fractured. An initial crack of about 1.3 mm was introduced at the tip of the notch prior to the tensile loading.
Fig. 1 ESPI optical setup [8]
Fig. 2 Carbon steel specimen with a notch [8] (dimension labels in mm: 62.5, 13, 3, 70, 18.75, 18.75, 12.5, 12.5)
After the completion of image recording, interferometric fringe patterns were formed off-line by subtracting the frame taken at a certain time step from the frame taken at another time step. The frame difference used for the subtraction, which corresponds to the total deformation that the resultant fringe pattern represents, was adjusted so that the total number of fringes was appropriate for the analysis. Typically, a frame difference of 30 - 40, corresponding to 0.3 - 0.4 s, was used.
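The off-line subtraction can be illustrated with synthetic speckle frames. This is a toy model (random speckle phase plus a deterministic deformation phase; real frames come from the CCD stream):

```python
import numpy as np

# Sketch of ESPI fringe formation by frame subtraction: where the
# deformation-induced phase change is a multiple of 2*pi, the two speckle
# frames are identical and a dark correlation fringe appears.
rng = np.random.default_rng(2)
ny, nx = 128, 128
phi = rng.uniform(0, 2 * np.pi, (ny, nx))       # random speckle phase
dphi = np.linspace(0, 6 * np.pi, nx)[None, :]   # phase change from deformation
frame_a = 1 + np.cos(phi)                        # intensity at the first step
frame_b = 1 + np.cos(phi + dphi)                 # intensity at the later step
fringes = np.abs(frame_b - frame_a)              # subtraction fringes
profile = fringes.mean(axis=0)                   # row-averaged fringe profile
print(profile[0] == 0.0, profile.max() > 1.0)    # dark fringe where dphi = 0
```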
Fig. 3 ESPI fringes representing vertical displacement and corresponding finite element analysis
Fig. 3 is a fringe pattern observed at an early stage of deformation (frame number 8064 minus 8000). At this stage the fringes are all continuous, although their density is higher near the tip of the notch, indicating that the deformation is somewhat concentrated there. The figure shown next to this fringe pattern is the contour of vertical displacement computed with a simple finite element model. The features that the fringes are almost vertical, springing out from the notch tip, and that one fringe originating from the notch tip circles around the tip are clearly seen in the computed contours as well. Fig. 4 shows a series of fringes formed 771 frames (or 7.71 s, as the frame rate is 100 frame/s) after Fig. 3. The four fringe patterns are created by subtracting the common image (frame number 8835) from images taken after this with an increment of 25 frames (i.e., frame numbers 8860, 8885, 8910 and 8935, going from left to right). The frame difference of 25 corresponds to 250 ms of crosshead motion of the tensile machine. While the number of fringes increases during this time, the basic pattern remains the same. Note that at this stage, the fringe pattern is vertically symmetric, unlike Fig. 3. Since the difference in time between Figs. 3 and 4 is 7.71 s and the corresponding crosshead displacement is small compared with the specimen's size of 70 mm, the difference in strain is small. At this stage, the fringes are still continuous.
Fig. 4 Fringes obtained by subtracting a common frame from various frames to observe the increase in displacement

When the deformation develops further, the fringes start to show discontinuity. Fig. 5 shows fringe patterns formed by subtracting images 40 frames (0.4 s) apart. The leftmost pattern is formed by subtracting frame 9600 from frame 9640, i.e., 765 frames or 7.65 s after the leftmost fringe pattern of Fig. 4. The total strain at this point is approximately . Notice that the discontinuous fringes are divided by a circular bright pattern, and that the size of the circular pattern increases with time.
Fig. 5 Discontinuous fringe patterns observed at several time steps

This bright circle divides regions of different fringe patterns. From this viewpoint, the bright circular pattern is considered to be equivalent to the bright band pattern (linear bright pattern) shown in Fig. 6, which was observed in our previous tensile experiments [13, 14] on un-notched specimens. Notice that the regions separated by the bright band patterns show completely different fringe patterns. Our investigation strongly indicates that this linear bright pattern corresponds to the Lüders front, or dislocation line [13]. Thus, it naturally follows that it represents the flow of deformation charge [eq. (12)].
[Fig. 6 image frame numbers: 114, 124, 126, 128, 137, 138, 149, 161, 173, 181, 184, 198]
Fig. 6 Bright linear band pattern observed in tensile experiments on a non-notched specimen

The series of images on the left in Fig. 6 shows the appearance of the bright linear pattern as the deformation develops. The numbers under the images are the frame numbers. The rightmost image is from another experiment in which the specimen has a through hole; a symmetric, X-shaped bright pattern appears around the hole. Previous studies on the linear bright patterns show that when the patterns stop drifting, the specimen fractures. This is perfectly consistent with the above argument on the final phase of the transition from deformation to fracture [eq. (14)]. Another interesting aspect of the linear bright pattern in connection with the final fracture is that when the linear pattern takes a symmetric X-shape, the specimen fractures horizontally. However, when the linear bright pattern appears in only one diagonal direction (i.e., "/" or "\" shaped as opposed to "X" shaped), the fracture is along the bright line. In both cases, fracture occurs after the band becomes stationary.

The question is now whether a similar argument can be made for the circular bright pattern observed in Fig. 5. Fig. 7 plots the change in the size of the circular bright pattern as a function of time, where time 0 corresponds to the moment when the bright circular pattern first appears. The numbers (1) – (4) inserted in this figure indicate when images (1) – (4) in Fig. 5 are observed. It is interesting to note that the increase in the size of the circular pattern saturates around 12 s. Viewing this size increase as the circular pattern drifting in the radial direction, the saturation of its size can be interpreted as the circular pattern ceasing to drift, corresponding to the linear bright pattern becoming stationary. In fact, the above FEM modeling indicates that the displacement of the material in the region of the circular pattern is approximately radial.
Thus the saturation point in Fig. 7 (around 12 s) can be viewed as the beginning of the final phase of the transition from plastic deformation to fracture. In addition, the fracture occurs horizontally, penetrating the circular bright pattern symmetrically, as in the case of the X-shaped linear bright bands.
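The saturation criterion can be stated numerically. Below is a sketch using assumed synthetic data (an exponentially saturating radius history, not the measured Fig. 7 values); the drift is declared stopped when the growth rate falls below a small fraction of its peak.

```python
import numpy as np

def saturation_time(t, radius, rel_tol=0.05):
    """First time at which the pattern's growth rate drops below rel_tol
    times its peak rate, i.e. the drifting has effectively ceased.
    rel_tol is a hypothetical threshold, not a value from the paper."""
    rate = np.gradient(radius, t)
    below = rate < rel_tol * rate.max()
    return t[np.argmax(below)]          # first True entry

# Synthetic radius history saturating at 10 (arbitrary units).
t = np.linspace(0.0, 20.0, 401)
radius = 10.0 * (1.0 - np.exp(-t / 3.0))
t_sat = saturation_time(t, radius)
```

For this synthetic history the growth rate decays as exp(-t/3), so the 5% threshold is crossed near t = 3 ln 20 ≈ 9 s; on real data this time would mark the onset of the final, stationary phase.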
Fig. 7 Variation of the size of the circular bright pattern

4. Summary

Fracture has been examined based on the field theory of deformation and fracture. It has been found that the deformation charge plays a significant role in the transition from the final stage of plastic deformation to fracture. Experimentally, a bright pattern observed in fringe patterns formed by electronic speckle pattern interferometry has been found to visualize the deformation charge, and is thereby useful for studying the fracture process.
Acknowledgement

The present study was in part supported by a Southeastern Alumni Association grant.

References

1. Panin, V. E., Grinaev, Yu. V., Egorushkin, V. E., Buchbinder, I. L., and Kul'kov, S. N., Spectrum of excited states and the rotational mechanical field, Sov. Phys. J. 30, 24-38 (1987).
2. Panin, V. E. (Ed.), Physical Mesomechanics of Heterogeneous Media and Computer-Aided Design of Materials, vol. 1, Cambridge International Science, Cambridge (1998).
3. Aitchison, I. J. R. and Hey, A. J. G., Gauge Theories in Particle Physics, IOP Publishing, Bristol and Philadelphia (1989).
4. Yoshida, S., Dynamics of plastic deformation based on restoring and energy dissipative mechanisms in plasticity, Physical Mesomechanics 11 (3-1), 140-146 (2008).
5. Yoshida, S., Field theoretical approach to dynamics of plastic deformation and fracture, AIP Conference Proceedings, vol. 1186, pp. 108-119 (2009).
6. Yoshida, S., Siahaan, B., Pardede, M. H., Sijabat, N., Simangunsong, H., Simbolon, T., and Kusnowo, A., Observation of plastic deformation wave in a tensile-loaded aluminum-alloy, Phys. Lett. A 251, 54-60 (1999).
7. Yoshida, S., Physical meaning of physical-mesomechanical formulation of deformation and fracture, AIP Conference Proceedings, vol. 1301, pp. 146-155 (2010).
8. Okiyama, T., Ichinose, K. and Yoshida, S., Research on evaluation of dynamic fracture characteristics by ESPI, to be presented at the 17th Japan Soc. Mech. Eng. Kanto-branch meeting, March 18-19 (2011).
9. Yoshida, S., Physical meaning of physical-mesomechanical formulation of deformation and fracture, AIP Conference Proceedings, vol. 1301, pp. 146-155 (2010).
10. Suzuki, T., Takeuchi, S. and Yoshinaga, H., Dislocation Dynamics and Plasticity, Springer-Verlag, Tokyo (1989).
11. Yoshida, S., Consideration on fracture of solid-state materials, Phys. Lett. A 270, 320-325 (2000).
12. Yoshida, S., Muchiar, Muhamad, I., Widiastuti, R., and Kusnowo, A., Optical interferometric technique for deformation analysis, Optics Express 2 (focused issue on "Material testing using optical techniques"), 516-530 (1998).
13. Yoshida, S., Ishii, H., Ichinose, K., Gomi, K. and Taniuchi, K., An optical interferometric band as an indicator of plastic deformation front, J. Appl. Mech. 72, 792-794 (2005).
14. Hu, B. and Yoshida, S., Stress and strain analysis of metal plates with holes, 2010 SEM Annual Conference, June 7-10, 2010, Indianapolis, IN, USA (2010).
Consistent Embedding: A Theoretical Framework for Multiscale Modeling
Keith Runge Quantum Theory Project University of Florida PO Box 118435 Gainesville, FL 32611-8435
Abstract

A fundamental framework for the undertaking of computational science provides clear distinctions between theory, model, and simulation. Consistent embedding provides a set of principles which, when appropriately applied, can create multi-scale models that capture the physical behavior of more computationally challenging methods within methods that are more easily computed. The consistent embedding methodology is illustrated in the context of brittle fracture for two serial and one concurrent multi-scale modeling examples. The examples demonstrate how predictive modeling hierarchies can be established.

Introduction

Co-workers and I have previously argued that the fundamental framework for computational science is intrinsically independent of the discipline in which it is applied. [1] In Ref. [1], we argued that the process of computational science allows for a clear distinction among theories, models, and simulations. In this framework for understanding the computational scientist's task, a theory is taken to comprise the axioms and interpretive procedures that construct a mathematical description of the physical world. A model, in our way of thinking, is a chosen physical description of a system or class of systems formulated using the concepts of the theory. One commonly used model is Newtonian dynamics, where the system is modeled as a set of particles that move under the influence of interaction potentials by obeying Newton's second law. The model can then be computationally realized in a simulation using a molecular dynamics computer code, where the consequences of the choice of initial conditions and interaction potentials are determined using algorithms and rules. Here an algorithm might solve a differential equation or an eigenvalue problem under the constraining rules applying to boundary conditions, number of particles, or a prescribed temperature. In this description of computational
T. Proulx (ed.), Optical Measurements, Modeling, and Metrology, Volume 5, Conference Proceedings of the Society for Experimental Mechanics Series, DOI 10.1007/978-1-4614-0228-2_12, © The Society for Experimental Mechanics, Inc. 2011
science, we see that improvements can be made in theory, model, and/or simulation in an attempt to improve the fidelity of the computed result in comparison with experiment. With the foregoing framework for understanding the task of the computational scientist, it may be well to confront a particular physical problem as an exemplar. To elucidate the interplay among theory, model, and simulation, let us consider the generic problem of mechanical failure. The phenomenon of mechanical failure presents challenges for the computational scientist at a number of temporal and spatial scales: it involves the breaking of chemical bonds at the atomistic scale, perhaps the interaction of grains at somewhat larger length and time scales, and the radical reshaping of the overall structure at macroscopic length scales over times that might range from fractions of a second to days, weeks, or years. If we restrict ourselves to considering only brittle fracture, which is rapid and abrupt, then it seems clear that the fundamental event is the rupturing of chemical bonds. From a theory perspective, one must choose a level of quantum mechanical theory for the description of the evolution of electrons and nuclei in the stress fields that lead to fracture. While the most fundamental approach might be to attempt a solution of the time-dependent Schrödinger equation for all particles, it would quickly become evident that such an approach is too demanding for current computational resources. One might instead simplify the theory to the more common quantum chemical techniques, which rely on the Born-Oppenheimer approximation and treat the (slower) nuclei as moving on the potential energy surface of the (faster) electrons; this seems a reasonable starting point for our choice of theory. We could now try to proceed by applying quantum chemical theory to the electrons and Newtonian dynamics to the nuclei to examine the problem of brittle fracture.
Another computational bottleneck would then, no doubt, assail us. The implementation of quantum chemical theory for larger and larger numbers of electrons would quickly become prohibitively expensive. Fortunately, from a theory point of view, only those electrons, and their associated nuclei, that are near the crack tip (within a couple of chemical bond lengths, or a few Angstroms) are fully involved in the crack propagation. Atoms, that is, nuclei and their associated electrons, which are more remote are not nearly as perturbed by the crack tip. Hence, one sees the possibility of building a multi-scale model by choosing a set of theories to be applied at various length and time scales as measured from the crack tip. For instance, we might choose Born-Oppenheimer quantum chemistry and Newtonian dynamics in a small region around the crack tip, Newtonian dynamics using atomistic potentials in a larger region, and a continuum model for the remainder of the structure. Implementing these choices of theory would require a multi-scale model. Multi-scale modeling is frequently divided into concurrent and serial multi-scale modeling. In serial modeling, inputs into models at one scale are generated by computational simulations at another, typically smaller, scale. Here I will present an example of training atomistic potentials for Newtonian dynamics from quantum chemical calculations. Concurrent multi-scale modeling
implements different theories at a number of length scales and then joins them in a single simulation hierarchy. The seamless joining of multi-scale models is generally challenging, as the interface between models must pass all appropriate information while avoiding the generation of artifacts. Concurrent multi-scale models must also take care to obey conservation laws, such as mass or energy conservation, for the full system.

Consistent Embedding

The principle that we call consistent embedding dictates that the information that comprises a model at a larger spatial or longer time scale be compatible with the model used at a smaller or shorter scale. This principle is essential to the development of predictive theory and modeling, as the materials that exist on either side of a multi-scale interface must be physically consistent. For example, if we wish to look at the phenomenon of fracture, then the stress-strain relationship for the model at the shorter length scale should, at the very least, display the same small-strain behavior, the Young's modulus, as the model used at the longer length scale. Enforcing this kind of constraint, that the Young's moduli of the quantum chemical and atomistic models be equivalent up to some controllable error, is an example of consistent embedding. We can develop other criteria, based on the physical properties being modeled, which improve the likelihood that the emergent behavior of larger systems is grounded in the theoretical description of the smaller system. We have now established the computational science framework in which to develop a multi-scale theory, model, and simulation for brittle fracture. In this context, we illustrate the implementation of a multi-scale model and simulation based on the principle of consistent embedding.
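As a concrete (and deliberately simplified) instance of such a constraint, the small-strain slopes of two models' stress-strain data can be compared against a controllable tolerance. The 5% tolerance below is an assumed choice for illustration, not a value from the text.

```python
import numpy as np

def youngs_modulus(strain, stress):
    """Least-squares slope through the origin of small-strain stress-strain data."""
    strain = np.asarray(strain, float)
    stress = np.asarray(stress, float)
    return strain @ stress / (strain @ strain)

def embedding_consistent(strain, stress_fine, stress_coarse, tol=0.05):
    """Consistent-embedding check: the coarse model must reproduce the fine
    model's Young's modulus to within relative error tol (assumed tolerance)."""
    e_fine = youngs_modulus(strain, stress_fine)
    e_coarse = youngs_modulus(strain, stress_coarse)
    return abs(e_coarse - e_fine) / abs(e_fine) <= tol

strain = np.array([0.001, 0.002, 0.003, 0.004])
```

A coarse model whose modulus differs by 3% would pass this check; one that differs by 20% would fail and be sent back for re-parameterization.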
In particular, the choice of quantum chemical theory will be made subject to consistent embedding constraints, where higher-level quantum chemical theory will be used to train less computationally demanding semi-empirical quantum chemical forms. Here it is important to note that the form of the quantum chemical Hamiltonian used is known as 'semi-empirical', but our serial multi-scale training will be based solely on computed results from correlated calculations. A second illustration of consistent embedding principles in a serial multi-scale model will be provided by the training of atomistic potentials solely from quantum chemical calculations. Finally, concurrent multi-scale modeling within consistent embedding principles will be demonstrated using a pseudo-atom termination scheme to facilitate the transfer of information across a quantum chemical/classical mechanical interface.

Transfer Hamiltonian

The first example of serial multi-scale modeling that we consider is the training of a less computationally intensive quantum chemical method from a more computationally intensive method. The name that has been given to this type of quantum chemical training is the Transfer
Hamiltonian. [2] Silica, in particular amorphous silica, is the system whose brittle fracture will serve as the exemplar for this serial multi-scale model. For our higher level of quantum chemical theory we choose a method that includes the effects of electron correlation, known as coupled-cluster theory including single and double excitations (CCSD). This highly accurate level of quantum chemical theory is also, of necessity, very computationally demanding. Hence it is necessary to choose a training molecule which exhibits the chemical bonding of interest in the mechanical failure of amorphous silica, but is limited to a relatively small number of atoms. We choose pyrosilicic acid (H6Si2O7) to create a CCSD training set for the 'semi-empirical' Hamiltonian. As seen in Fig. 1, the Si-O bond length is varied through compressions and stretches to generate a training set for the Transfer Hamiltonian. In this case, we have chosen to use a neglect of diatomic differential overlap (NDDO) Hamiltonian as our less computationally demanding quantum chemical model. NDDO is one of a set of approximations collectively referred to as zero differential overlap methods. When computers were much less powerful than they are today, these methods were developed to be computationally tractable and used mathematical forms derived from theory. These forms were parameterized to reproduce certain empirical data, e.g. heats of formation, for simple molecules, and the resulting parameterized Hamiltonians were applied to more complex problems with some success. The empirical parameterization of theoretically derived forms came to be known as 'semi-empirical' theory. By choosing to parameterize the NDDO Hamiltonian based on high-accuracy ab initio quantum chemistry, CCSD, we remove the empirical information from the procedure and replace it with a more detailed theoretical model.
This substitution of theoretical for empirical information characterizes one possible implementation of the consistent embedding framework for the development of predictive modeling. The training of the Transfer Hamiltonian is accomplished using genetic algorithms, which are tuned to reproduce CCSD forces for the training set of molecular geometries. The choice of training on forces is motivated by our interest in stress-strain relations. In the next section, the implications of this Transfer Hamiltonian will be examined in a somewhat larger system.
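The force-matching genetic algorithm can be illustrated in miniature. In the sketch below a Morse-type force expression stands in for both the "exact" (CCSD-like) reference and the trainable form; the functional form, parameter bounds, and GA settings are all illustrative placeholders, not the actual Transfer Hamiltonian machinery.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical 1-D Si-O bond-length scan (angstroms) standing in for
# the compressed/stretched training geometries.
bond = np.linspace(1.4, 2.2, 9)

def model_force(params, r):
    """Morse-type force: stand-in for the parameterized semi-empirical form."""
    d, a, r0 = params
    return 2.0 * d * a * (np.exp(-2.0 * a * (r - r0)) - np.exp(-a * (r - r0)))

# 'Reference' forces play the role of the CCSD training data.
reference = model_force((4.0, 1.8, 1.62), bond)

def fitness(params):
    """Negative squared force mismatch over the training geometries."""
    return -np.sum((model_force(params, bond) - reference) ** 2)

def train(pop_size=60, generations=200):
    """Elitist GA: keep the best half, breed the rest by averaging + mutation."""
    pop = rng.uniform([1.0, 1.0, 1.3], [8.0, 3.0, 2.0], (pop_size, 3))
    for _ in range(generations):
        scores = np.array([fitness(p) for p in pop])
        parents = pop[np.argsort(scores)[::-1][: pop_size // 2]]
        pairs = parents[rng.integers(0, len(parents), (pop_size // 2, 2))]
        kids = pairs.mean(axis=1) + rng.normal(0.0, 0.02, (pop_size // 2, 3))
        pop = np.vstack([parents, kids])
    scores = np.array([fitness(p) for p in pop])
    return pop[np.argmax(scores)]

best = train()
```

Because the top half of each generation survives unmodified, the best force error never worsens, and the mutated offspring refine the parameters toward the reference forces.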
Figure 1 Pyrosilicic acid is used to create a training set of forces from CCSD for the Transfer Hamiltonian

Small Strain Potential

Pyrosilicic acid was sufficient for training the Transfer Hamiltonian; however, to see the effects of using it, we need a somewhat larger model system. Figure 2 shows a silica nanorod which we use to illustrate the next step in our serial multi-scale model. The nanorod is comprised of 108 atoms with two oxygen atoms for each silicon atom, the same ratio as found in silica. Various deformations are used to build a database of forces for training a small strain potential based on Transfer Hamiltonian calculations. Two ionic silica potentials, referred to by their authors' initials, have found wide use in recent years: BKS [3] and TTAM [4, 5]. These potentials have the same general form, and we have chosen this form for the parameterization of a small strain potential from Transfer Hamiltonian force data. Again, the parameterization is accomplished using a genetic algorithm. As shown in Table 1, the small strain potential reproduces the Young's modulus obtained by the quantum chemical Hamiltonian to within a few percent for uniaxial strain along the long axis of the nanorod, while equilibrium
bond lengths and bond angles are reproduced to within 2.5%. Details of the construction of the small strain potential can be found elsewhere. [6]
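The functional form shared by BKS and TTAM combines a Coulomb term with a Buckingham (Born-Mayer repulsion plus dispersion) term. A minimal sketch of one pair interaction follows; the parameter values in the test are placeholders, not the published BKS or TTAM constants.

```python
import math

def pair_energy(r, qi, qj, a, b, c):
    """Energy of one ion pair in the BKS/TTAM functional form:
    Coulomb attraction/repulsion + Born-Mayer repulsion - dispersion."""
    return qi * qj / r + a * math.exp(-b * r) - c / r ** 6
```

Training the small strain potential then amounts to choosing the charges and the (a, b, c) constants for each pair type so that the resulting forces match the Transfer Hamiltonian force database.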
Figure 2 A silica nanorod comprised of 108 atoms; oxygen is green and silicon is gray

Table 1: Young's modulus comparison among the potentials and the Transfer Hamiltonian

Method                  Young's Modulus (arbitrary units)
Transfer Hamiltonian    1026
New potential           1022
BKS                     1516
TTAM                    1214
Concurrent Multi-scale Modeling

As a last example, we turn to the topic of concurrent multi-scale modeling. Dealing with the details of the interface is essential in this style of multi-scale modeling, and as our target properties relate to stress-strain relations, we must concern ourselves with forces on either side of the interface. However, brittle fracture occurs by the rupture of chemical bonds, so it is also essential that the character of the chemical bonding of the system be preserved as well. The small strain potential presented in the previous section assures that the forces across a quantum chemical/classical mechanical interface are in good agreement for small strains. This has been confirmed by the agreement of the Young's modulus and equilibrium configurations. In this section, we consider the interaction between the classical mechanical part of the system and the part described by quantum chemistry. We choose to represent the effect of the remainder of the nanorod in the quantum chemical regime by a truncation scheme which we call pseudoatoms. These pseudoatoms replace oxygen atoms in the full system that serve as the interface between the two styles of treatment. For a typical fracture problem, the quantum chemical treatment would be focused around the crack tip, where the most strained chemical bonds are found.
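The termination step can be sketched as a simple data transformation: oxygen atoms inside the quantum region that still bond to classical atoms are swapped for capping pseudoatoms. The element labels, the "Fmod" name, and the bond representation below are illustrative conventions, not an actual interface code.

```python
def terminate_qm_region(atoms, qm_indices, bonds):
    """Return the QM subsystem's element list, replacing oxygen atoms that
    bond the QM region to the classical region with pseudoatoms ('Fmod').
    atoms: element symbols; bonds: set of (i, j) index pairs (illustrative)."""
    qm = set(qm_indices)
    capped = []
    for i in qm_indices:
        element = atoms[i]
        crosses_interface = element == "O" and any(
            (i, j) in bonds or (j, i) in bonds
            for j in range(len(atoms)) if j not in qm
        )
        capped.append("Fmod" if crosses_interface else element)
    return capped

# Si-O-Si chain with only the first two atoms treated quantum chemically:
# the bridging oxygen becomes a pseudoatom.
capped = terminate_qm_region(["Si", "O", "Si"], [0, 1], {(0, 1), (1, 2)})
```

Interior atoms are untouched, so the chemistry deep inside the quantum region is unaffected by the truncation.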
Pseudoatoms are trained using the pyrosilicic acid molecule shown in Fig. 1. Fig. 3 illustrates the full system as seen from the quantum chemical viewpoint. In the case of the Transfer Hamiltonian, a fluorine atom has been reparameterized to preserve the equilibrium bond lengths and electron distribution in the remainder of the molecule. More detailed studies of the effect of pseudoatoms in the context of both the Transfer Hamiltonian and Density Functional Theory have been presented elsewhere. [7]
Figure 3 Pseudoatoms (labeled Modified F) are trained to reproduce local effects in the electron density

Conclusions

The distinction among the roles of theory, model, and simulation provides us with insight into ways one might improve our descriptions of the physical world. Using the conceptual framework of consistent embedding, we are able to pose sharp questions with quantifiable answers that allow us to assess the quality of serial and concurrent multi-scale models and their simulations. The illustrations of serial multi-scale modeling shown here indicate that less computationally intensive quantum chemical methods can be developed that reflect the quality of more computationally intensive quantum chemical methods to within a few percent for chosen properties; for brittle fracture we have concerned ourselves with forces. Further, small strain potentials can be trained, using forms available in the literature, to capture the behavior of quantum chemical methods. Finally, a strategy for concurrent multi-scale modeling has been provided that allows a system to be separated into classical and quantum domains while preserving the fidelity of each to the full system. These ingredients are essential to a predictive modeling capability.

References

[1] Trickey, S. B., Yip, S., Cheng, H.-P., Runge, K., and Deymier, P. A., A perspective on multiscale simulation: Toward understanding water-silica, J. Computer-Aided Mat. Design 13, 75 (2006).
[2] Taylor, C. E., Cory, M. G., Bartlett, R. J. and Thiel, W., The transfer Hamiltonian: a tool for large scale simulations with quantum mechanical forces, Comp. Mater. Sci. 27, 204 (2003).
[3] van Beest, B. W. H., Kramer, G. J. and van Santen, R. A., Force fields for silicas and aluminophosphates based on ab initio calculations, Phys. Rev. Lett. 64, 1955 (1990).
[4] Tsuneyuki, S., Tsukada, M., Aoki, H. and Matsui, Y., First-principles interatomic potential of silica applied to molecular dynamics, Phys. Rev. Lett. 61, 869 (1988).
[5] Tsuneyuki, S., Tsukada, M., Aoki, H. and Matsui, Y., Molecular-dynamics study of the α to β structural phase transition of quartz, Phys. Rev. Lett. 64, 776 (1990).
[6] Mallik, A., Runge, K., Cheng, H.-P. and Dufty, J. W., Constructing a small strain potential for multi-scale modeling, Molecular Simulation 31, 695 (2005).
[7] Mallik, A., Taylor, D. E., Runge, K., Dufty, J. W. and Cheng, H.-P., Procedure for building a consistent embedding at the QM-CM interface, J. Computer-Aided Mat. Design 13, 45 (2006).
Analysis of Crystal Rotation by Taylor Theory

Motoaki Morita
Graduate Student, Graduate School of Engineering, Yokohama National University, 79-5 Tokiwadai, Hodogaya, Yokohama, 240-8501, Japan

Osamu Umezawa
Professor, Faculty of Engineering, Yokohama National University, 79-5 Tokiwadai, Hodogaya, Yokohama, 240-8501, Japan

ABSTRACT

Simple shear along specific slip planes in polycrystals and the resulting rotation of grains are discussed. The Taylor theory was applied to bridge macroscopic deformation behavior and crystal plasticity and to evaluate the orientation distribution. Its theoretical solution can hardly satisfy all of the boundary conditions and plastic dynamics, so the dynamic conditions were simplified and relaxed in the analysis. The path of crystal rotation due to slip deformation was quantitatively predicted by Taylor theory, which gives an advantage in understanding deformation texture. The analysis method can be applied to polycrystalline materials. Although good agreement was obtained for fcc and bcc materials, where the orientation distribution fitted well, no good fit to the experimental results was obtained for hcp materials.

1. Introduction

1.1 Texture

The arrangement of the lattice is almost the same within a grain, and the grains in a polycrystal are usually distributed with random orientations (Fig. 1(a)). On the other hand, the grains after working and/or heat treatment reveal an arrangement with almost the same orientation (Fig. 1(b)). This "deformation texture" develops as slip deformation progresses, because the grains are constrained by their surroundings in the polycrystal. The deformation texture, as well as the "recrystallized texture" produced by heat treatment, is accompanied by an anisotropy in the microstructure and affects the properties of materials. The deformation texture mainly results from "crystal rotation" by slip deformation or twinning. Slip deformation occurs on specific crystal planes and in specific crystal directions.
Each grain orientation therefore changes toward a specific one, called the "preferred orientation". Since its formation mechanism plays an important role in controlling the microstructure, quantitative prediction of the deformation texture is needed. To understand the formation mechanism of the deformation texture, the active slip systems and the crystal rotation in each grain should be considered under large deformation. In order to bridge macroscopic deformation behavior and crystal plasticity, the Taylor theory, which is based on the minimum internal work principle, has been applied to analyze slip deformation behavior under large deformation and to evaluate the orientation distribution.[1-3] The theory is also applicable to the finite element method.[4]
Fig. 1 Illustration of a polycrystal with random orientation (a) and texture (b).
T. Proulx (ed.), Optical Measurements, Modeling, and Metrology, Volume 5, Conference Proceedings of the Society for Experimental Mechanics Series, DOI 10.1007/978-1-4614-0228-2_13, © The Society for Experimental Mechanics, Inc. 2011
1.2 Crystal Rotation Due to Slip Deformation

The plastic deformation of a material comprises strain and rotation, both described by the velocity gradient tensor $L_{ij}$, written as

L_{ij} = \frac{\partial \dot{u}_i}{\partial x_j}    (1)

where $\dot{u}_i$ is the velocity of a material point in the current configuration $x_i$ ($i = 1, 2, 3$). The strain rate tensor $D_{ij}$ and the total spin $W_{ij}$ are derived as

D_{ij} = \frac{1}{2} \left( \frac{\partial \dot{u}_i}{\partial x_j} + \frac{\partial \dot{u}_j}{\partial x_i} \right)    (2)

W_{ij} = \frac{1}{2} \left( \frac{\partial \dot{u}_i}{\partial x_j} - \frac{\partial \dot{u}_j}{\partial x_i} \right)    (3)

where $W_{ij}$ represents the rigid-body rotation and is determined not only by the stress but also by the constrained geometrical condition. Figure 2 shows how a single slip operation causes crystal rotation under tension. When a tensile stress is applied to a single-crystal body, deformation along a slip direction on a slip plane changes the shape of the body (Fig. 2(a)). In the crystal coordinate system, the slip deformation induces the crystal rotation $W^P_{ij}$ in the body. However, the body cannot rotate freely in the specimen coordinate system, because the slip plane is invariant (Fig. 2(b)). The material axis remains parallel to the tensile axis, so the body rotates with respect to the specimen coordinate system as

\omega_{ij} = W_{ij} - W^P_{ij}    (4)

where $W^P_{ij}$ is the plastic spin, and the lattice spin $\omega_{ij}$ represents the crystal rotation. The lattice spin is dominant under the constrained condition. When the elastic component is ignored during deformation, the operating slip systems give the strain rate $D_{ij}$ and the rotation rate $W^P_{ij}$ at each point in the body. When the n-th slip system, with slip-plane normal $n^n_i$ and slip direction $b^n_i$, operates, $D_{ij}$ and $W^P_{ij}$ are described as

D_{ij} = \frac{1}{2} \sum_{n=1}^{N} (m^n_{ij} + m^n_{ji}) \dot{\gamma}^n    (5)

W^P_{ij} = \frac{1}{2} \sum_{n=1}^{N} (m^n_{ij} - m^n_{ji}) \dot{\gamma}^n    (6)

m^n_{ij} = b^n_i n^n_j = \begin{pmatrix} b^n_1 n^n_1 & b^n_1 n^n_2 & b^n_1 n^n_3 \\ b^n_2 n^n_1 & b^n_2 n^n_2 & b^n_2 n^n_3 \\ b^n_3 n^n_1 & b^n_3 n^n_2 & b^n_3 n^n_3 \end{pmatrix}    (7)

where $\dot{\gamma}^n$ is the slip rate of the n-th slip system. The equations above are combined as

L_{ij} = D_{ij} + W_{ij} = \sum_{n=1}^{N} m^n_{ij} \dot{\gamma}^n + \omega_{ij}    (8)
\omega_{ij} = W_{ij} - W^P_{ij} = W_{ij} - \frac{1}{2} \sum_{n=1}^{N} (m^n_{ij} - m^n_{ji}) \dot{\gamma}^n    (9)

The lattice spin $\omega_{ij}$ gives the infinitesimal rotation over the time increment $\Delta t$. Through repeated calculation of the lattice spin, the path of crystal rotation can be analyzed.
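Equations (5)-(7) translate directly into code. The following is a minimal sketch assuming unit slip-direction and plane-normal vectors; the single-system example at the end is purely illustrative.

```python
import numpy as np

def schmid_tensor(b, n):
    """m^n_ij = b_i n_j of eq. (7); b is the slip direction, n the plane normal."""
    return np.outer(b, n)

def slip_kinematics(systems, gamma_dot):
    """Strain rate D_ij (eq. (5)) and plastic spin W^P_ij (eq. (6)) as the
    symmetric and antisymmetric parts of the summed slip contributions."""
    L_p = sum(g * schmid_tensor(b, n) for (b, n), g in zip(systems, gamma_dot))
    D = 0.5 * (L_p + L_p.T)
    W_p = 0.5 * (L_p - L_p.T)
    return D, W_p

# One slip system: slip along x on the plane whose normal is y.
b = np.array([1.0, 0.0, 0.0])
n = np.array([0.0, 1.0, 0.0])
D, W_p = slip_kinematics([(b, n)], [1.0])
```

Since b is orthogonal to n, the Schmid tensor is traceless, so the strain rate D automatically conserves volume, consistent with slip being an isochoric deformation mode.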
Fig. 2 Illustration of crystal rotation due to slip deformation under tensile deformation in a single crystal.[5]

1.3 Taylor's Full Constraints Model

A polycrystalline body deforms without defects at the grain boundaries, and all grains are mutually compatible in strain. Taylor assumed that all grains undergo the same strain; the Taylor model is therefore called the full constraints model. In any deformation mode, compatibility in the polycrystal can be achieved by operating five independent slip systems.[5] When a uniaxial strain is applied parallel to the z-axis of the specimen coordinate system XYZ, grain deformation takes place under axial symmetry at fixed volume ($\dot{\varepsilon}_{xx} + \dot{\varepsilon}_{yy} + \dot{\varepsilon}_{zz} = 0$):

\dot{\varepsilon}_{xx} = \dot{\varepsilon}_{yy} = -\frac{1}{2} \dot{\varepsilon}_{zz}    (10)

\dot{\varepsilon}_{xy} = \dot{\varepsilon}_{yz} = \dot{\varepsilon}_{zx} = 0    (11)

where $\dot{\varepsilon}_{xx}$, $\dot{\varepsilon}_{yy}$ and $\dot{\varepsilon}_{zz}$ are the plastic strain rates and $\dot{\varepsilon}_{xy}$, $\dot{\varepsilon}_{yz}$ and $\dot{\varepsilon}_{zx}$ are the plastic shear strain rates in a grain. The internal plastic work rate $\dot{W}$ is the increment of work per unit volume, summed over the five independent slip systems in a grain:

\dot{W} = \tau \sum_{n} |\dot{\gamma}^n| = \min    (12)

where $\tau$ is the CRSS (critical resolved shear stress) and $\dot{\gamma}^n$ is the slip rate of the n-th slip system. There are many combinations of operating slip systems that satisfy the imposed strain, but only one or a few combinations should be chosen. The combination that gives the minimum internal plastic work rate $\dot{W}$ is selected, and the slip rates of the operating slip systems can then be determined. $\dot{W}$ depends on the relationship between the tensile or compressive axis (z-axis) and the grain orientation.
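Minimizing the internal work of eq. (12) subject to the prescribed strain rate of eqs. (10)-(11) is a linear program once the slip rates are split into positive and negative parts. The sketch below assumes equal CRSS and the 12 fcc {111}<110> systems, and uses SciPy's `linprog` as the solver; for a cube-oriented grain the minimum recovers the textbook Taylor factor of √6 ≈ 2.449 for <100> tension.

```python
import numpy as np
from scipy.optimize import linprog

def fcc_slip_systems():
    """The 12 {111}<110> systems as (unit slip direction, unit plane normal)."""
    planes = [(1, 1, 1), (-1, 1, 1), (1, -1, 1), (1, 1, -1)]
    dirs = [(1, -1, 0), (1, 0, -1), (0, 1, -1), (1, 1, 0), (1, 0, 1), (0, 1, 1)]
    systems = []
    for p in planes:
        n = np.array(p, float)
        for d in dirs:
            b = np.array(d, float)
            if abs(b @ n) < 1e-12:          # slip direction must lie in the plane
                systems.append((b / np.sqrt(2.0), n / np.sqrt(3.0)))
    return systems

def taylor_work(D):
    """Minimize sum |gamma^n| (equal CRSS, eq. (12)) subject to reproducing the
    prescribed strain rate through the symmetric Schmid tensors (eq. (5)).
    gamma = g+ - g- turns the L1 objective into a linear program."""
    rows = []
    for b, n in fcc_slip_systems():
        m = 0.5 * (np.outer(b, n) + np.outer(n, b))
        # 5 independent components suffice: the tensors are traceless.
        rows.append([m[0, 0], m[1, 1], m[0, 1], m[1, 2], m[2, 0]])
    A = np.array(rows).T
    target = np.array([D[0, 0], D[1, 1], D[0, 1], D[1, 2], D[2, 0]])
    res = linprog(np.ones(24), A_eq=np.hstack([A, -A]), b_eq=target,
                  bounds=(0, None))
    return res.fun

# Axisymmetric tension along z (eqs. (10)-(11)) for a cube-oriented grain.
D = np.diag([-0.5, -0.5, 1.0])
M = taylor_work(D)   # Taylor factor (work rate per unit strain rate, tau = 1)
```

The selected basic solution is generally degenerate (several slip-system combinations give the same minimum work), which is exactly the ambiguity noted in the text.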
2. Texture Formation in Magnesium-based Solid Solution Taylor theory can hardly satisfy all of boundary condition and plastic dynamics so that the condition of dynamics was simplified and relaxed in the analysis. We have mentioned the problems in the description of slip deformation and the prediction of crystal rotation by Taylor theory. Simple shear along specific slip plane in polycrystalline and rotation of grains in magnesium alloy have been discussed. 2.1 Application of Taylor’s Full Constraints Model Recently activities on research and development for magnesium alloys were very high. However their applications have been mostly for cast parts because of poor workability at low temperature. The poor workability in magnesium alloys intrinsically reflects on the behavior of plastic deformation in the hcp. To understand the deformation in magnesium alloys, deformation texture in magnesium-based solid solution was evaluated by Taylor’s full constraints model. In the texture simulation, the crystal rotations in 300 grains were done. The grains were initially given as random orientation (Fig. 3(a)). In the case of fcc, the n was taken into account, because only one slip system {111} 110 operates. In the case of hcp, three principal slip systems, {0001} 1120 , {10 1 0} 1120 and {1122} 1123 , were taken into account as deformation mode. However, the CRSSs were installed to the analysis, because they were different from each other (Eq. (13)) [6]:
W = Σ_n τ^(n) |γ̇^(n)| → min      (13)
Table 1 lists the CRSS conditions used in the evaluation. The deformation becomes more homogeneous at higher temperature, where all primary slip systems can operate sufficiently, as in Type 1 [7]. In Type 2, the CRSS ratios at room temperature were used. The primary slip system was {0001}⟨112̄0⟩, because the CRSSs of {101̄0}⟨112̄0⟩ and {112̄2}⟨112̄3⟩ were higher than that of {0001}⟨112̄0⟩ [8-10]. In addition, the CRSS of {101̄0}⟨112̄0⟩ in magnesium alloys is lower than that of {112̄2}⟨112̄3⟩ at room temperature [8]. In the present study, no deformation twinning was considered. The solution may be given at ratios between Types 1 and 2.
Table 1 Ratios of critical resolved shear stress of the principal slip systems in magnesium alloy [7-9].
Slip system          Type 1    Type 2 (300 K)
{0001}⟨112̄0⟩            1             1
{101̄0}⟨112̄0⟩            1            40
{112̄2}⟨112̄3⟩            1            80
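As an illustration of how the ratios in Table 1 decide which slip family operates first, the sketch below applies the Schmid law τ = mσ to a single grain under uniaxial tension. The Schmid factors are hypothetical values chosen for illustration, not results from the paper; a hyphen stands in for the overbar in the Miller-Bravais indices.

```python
# Illustrative Schmid-law comparison of the three slip families of Table 1.
# CRSS ratios: Type 1 (high temperature) and Type 2 (300 K).
crss = {
    "basal {0001}<11-20>":      {"Type 1": 1.0, "Type 2": 1.0},
    "prismatic {10-10}<11-20>": {"Type 1": 1.0, "Type 2": 40.0},
    "pyramidal {11-22}<11-23>": {"Type 1": 1.0, "Type 2": 80.0},
}
schmid = {  # hypothetical Schmid factors m = cos(phi)*cos(lambda) for one grain
    "basal {0001}<11-20>": 0.35,
    "prismatic {10-10}<11-20>": 0.45,
    "pyramidal {11-22}<11-23>": 0.42,
}

def first_active(ratio_type):
    # Slip starts on the family that reaches its CRSS at the lowest tensile
    # stress sigma = tau_c / m, i.e. the family with the largest m / tau_c.
    return max(crss, key=lambda s: schmid[s] / crss[s][ratio_type])

print(first_active("Type 1"))  # Type 1: the largest Schmid factor decides
print(first_active("Type 2"))  # Type 2: basal wins, its CRSS is 40-80x lower
```

With equal CRSSs (Type 1) the orientation alone selects the active family, while the Type 2 ratios force basal slip for almost any orientation, which is the behavior the text describes at room temperature.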
2.2 Evaluation The calculated deformation texture in magnesium alloy was insensitive to the CRSS condition, so a reasonable solution may be given. Figure 3 shows the rotation of the grains in the case of Type 2. The grains rotated toward two orientations from ε = 0.0 to 0.5 [-], although some grains still remained around (α, β) = (0, 0) (Fig. 3(a) and (b)). At ε = 1.0 [-], two preferred orientations at (α, β) = (27, 30) and (90, 30) were clear, as shown in Fig. 3(c). The grains around (α, β) = (0, 0) can hardly deform, but their number was lower than that in the preferred orientations. Therefore, (α, β) = (0, 0) gives a quasi-stable orientation. The orientation distribution at ε = 1.0 [-] and 573 K for AZ61 alloy is shown in Fig. 4 [11]. In that experiment the most preferred orientation was detected at (α, β) = (33, 0), and it was reported that this preferred orientation resulted from slip deformation. In the present simulation, however, the preferred orientations due to slip deformation were not at (α, β) = (33, 0) but at (α, β) = (27, 30) and (90, 30). The simulation therefore suggests that the deformation texture at (α, β) = (33, 0) did not result from slip deformation. The path of crystal rotation is also shown in Fig. 3(d). According to the simulation, the rotation paths converged into the region (α, β) = (25~35, 0~30), called transition bands [12]. It has been pointed out that recrystallization easily occurs in grains on transition bands [11]. Recrystallization may therefore cause the stronger texture at grains with (α, β) = (33, 0). The reason why the preferred orientation at (α, β) = (90, 30) was not observed in the experiment [11] may be twinning.
Fig. 3 Prediction of inverse pole figure for 300 grains in Type 2: (a) ε = 0 [-], (b) ε = 0.5 [-], (c) ε = 1.0 [-] and (d) illustration of the rotation path. α is defined as the rotation angle from the c-axis [0001] to the a-axis [112̄0], and β as the rotation angle about the c-axis.
Fig. 4 Contour map of orientation distribution at 573 K for AZ61 alloy (ε = 1.0 [-], ε̇ = 1.0 × 10⁻⁴ [/s]) [7]. 3. Summary Taylor theory was applied to bridge macroscopic deformation behavior and crystal plasticity, and to evaluate the orientation distribution. Simple shear along specific slip planes in polycrystals and the rotation of grains were discussed. The path of crystal rotation due to slip deformation was quantitatively predicted by Taylor theory and gives an advantage in
understanding of deformation texture. The analysis method can be applied to polycrystalline materials. Although good evaluation was available for fcc and bcc, where the orientation distribution fitted well, no good fit to the experimental results was obtained for hcp materials. It is still difficult to understand the mechanism of texture formation in hcp metals. 4. Acknowledgement The authors thank Prof. H. Fukutomi and Prof. K. Sekine of Yokohama National University for valuable discussions. 5. References
[1] Taylor, G.I., Plastic strain in metals, J. Inst. Metals, 62, 307-324, 1938.
[2] Bishop, J.F.W. and Hill, R., A theory of the plastic distortion of a polycrystalline aggregate under combined stresses, Phil. Mag., 42, 414, 1951.
[3] Bishop, J.F.W. and Hill, R., A theoretical derivation of the plastic properties of a polycrystalline face-centred metal, Phil. Mag., 42, 1298, 1951.
[4] Van Houtte, P. et al., Deformation texture prediction: from the Taylor model to the advanced Lamel model, Int. J. Plasticity, 21, 589, 2005.
[5] Hosford, W.F., The Mechanics of Crystals and Textured Polycrystals, Oxford Univ. Press, 56, 1993.
[6] Morita, M. and Umezawa, O., Slip deformation analysis based on full constraints model for α-type titanium alloy at low temperature, Journal of Japan Institute of Light Metals, 60 (2), 61-67, 2010.
[7] Ion, S.E., Humphreys, F.J. and White, S.H., Dynamic recrystallization and the development of microstructure during high temperature deformation of magnesium, Acta Metall., 30, 1909, 1982.
[8] Hutchinson, W.B. and Barnett, M.R., Effective values of critical resolved shear stress for slip in polycrystalline magnesium and other hcp metals, Scripta Mater., 63, 737, 2010.
[9] Akhtar, A. and Teghtsoonian, E., Solid solution strengthening of magnesium single crystals - I. Alloying behavior in basal slip, Acta Metall., 17, 1339, 1969.
[10] Akhtar, A. and Teghtsoonian, E., Solid solution strengthening of magnesium single crystals - II. The effect of solute on the ease of prismatic slip, Acta Metall., 17, 1351, 1969.
[11] Helis, L., Behavior of deformation and texture formation of AZ31 and AZ61 magnesium alloys at high temperatures, Ph.D. Thesis, Yokohama National University, 2006.
[12] Dillamore, I.L. and Katoh, H., The mechanisms of recrystallization in cubic metals with particular reference to their orientation-dependence, Metal Sci., 8, 73, 1974.
Numerical Solution of the Walgraef-Aifantis Model for Simulation of Dislocation Dynamics in Materials Subjected to Cyclic Loading José Pontes∗, Daniel Walgraef† and Christo I. Christov∗∗
∗ Metallurgy and Materials Engineering Department, Federal University of Rio de Janeiro, P.O. Box 68505, 21941-972, Rio de Janeiro, RJ, Brazil † Center for Nonlinear Phenomena and Complex Systems, CP-231, Université Libre de Bruxelles, B-1050, Brussels, Belgium ∗∗ Department of Mathematics, University of Louisiana at Lafayette, Lafayette, LA, 70504-1010, USA Abstract. Strain localization and dislocation pattern formation are typical features of plastic deformation in metals and alloys. Glide and climb dislocation motion, along with accompanying production/annihilation processes of dislocations, lead to the occurrence of instabilities of initially uniform dislocation distributions. These instabilities result in the development of various types of dislocation micro-structures, such as dislocation cells, slip and kink bands, persistent slip bands, labyrinth structures, etc., depending on the externally applied loading and the intrinsic lattice constraints. The Walgraef-Aifantis (WA) model (Walgraef and Aifantis, J. Appl. Phys., 58, 668, 1985) is an example of a reaction-diffusion model of coupled nonlinear equations which describes the formation of forest (immobile) and gliding (mobile) dislocation densities in the presence of cyclic loading. This paper discusses two versions of the WA model, the first one comprising linear diffusion of the density of mobile dislocations and the second one, nonlinear diffusion of that variable. Subsequently, the paper focuses on a finite difference, second order in time, Crank-Nicholson semi-implicit scheme, with internal iterations at each time step and a spatial splitting using the Stabilizing Correction scheme (Christov and Pontes, Mathematical and Computer Modelling, 35, 87, 2002), for solving the model evolution equations in two dimensions. The discussion of the WA model and of the numerical scheme was already presented in a conference paper by the authors (Pontes et al., AIP Conference Proceedings, Vol. 1301, pp. 511-519, 2010).
First results of four simulations, one with linear diffusion of the mobile dislocations and three with nonlinear diffusion, are presented. Several phenomena were observed in the numerical simulations, such as the increase of the fundamental wavelength of the structure, the increase of the wall height and the decrease of the wall thickness. Keywords: Finite differences, pattern formation, dislocation patterns, fatigue PACS: 05.45.-a, 02.70.Bf, 62.20.me, 46.70.-p
THE WALGRAEF-AIFANTIS (WA) MODEL In the spirit of earlier dislocation models derived, for example, by Ghoniem et al. (1990) [1] for creep, or by Walgraef and Aifantis (1985 [2], 1986 [3], 1997 [4]), by Schiller and Walgraef (1988 [5]), and by Kratochvil (1979) [6] for dislocation microstructure formation in fatigue, the dislocation population is divided into static dislocations, which may result from work hardening and consist of the nearly immobile dislocations of the “forest”, of sub-grain walls or boundaries, etc., and mobile dislocations, which glide between these obstacles. The essential features of the dislocation dynamics in the plastic regime are, on the one
T. Proulx (ed.), Optical Measurements, Modeling, and Metrology, Volume 5, Conference Proceedings of the Society for Experimental Mechanics Series, DOI 10.1007/978-1-4614-0228-2_14, © The Society for Experimental Mechanics, Inc. 2011
side, their mobility, dominated by plastic flow, but which also includes thermal diffusion and climb, and, on the other side, their mutual interaction processes, the most important being (Mughrabi et al., 1979 [7]):
• Multiplication of static dislocations within the forest;
• Static recovery in the forest via static-static annihilation processes;
• Freeing of static dislocations: when the effective stress increases and exceeds some threshold, it disturbs the local structure of the forest and, in particular, destabilizes dislocation clusters, which decompose into mobile dislocations. The freeing of forest dislocations occurs with a rate β, which depends on the applied stresses and material parameters;
• Pinning of mobile dislocations by the forest: mobile dislocations may be immobilized by the various dislocation clusters forming the forest. The dynamical contribution of such processes is of the form G(ρs)ρm, where G(ρs) = gn ρs^n is the pinning rate of a mobile dislocation by a cluster of n static ones. The Walgraef-Aifantis (WA) model considers n = 2.
The resulting dynamical system may then be written as:
∂ρs/∂t = Ds ∇²ρs + σ − vs dc ρs² − β ρs + γ ρs² ρm,      (1)
∂ρm/∂t = Dm ∇x² ρm + β ρs − γ ρs² ρm,                    (2)
where time is measured in number of loading cycles, Ds represents the effective diffusion within the forest resulting from thermal mobility and climb, and Dm represents the effective diffusion resulting from the glide of mobile dislocations between obstacles (Dm ≫ Ds). The coefficient dc is the characteristic length of spontaneous dipole collapse. β is the rate of dislocation freeing from the forest and is associated with the de-stabilization of dislocation dipoles or clusters under stress. Numerical dislocation dynamics simulations show that in BCC crystals there is a critical value of the externally applied stress above which dislocation dipoles become unstable. This value is a decreasing function of the distance between dipole slip lines. If the forest may be considered as an ensemble of dipoles with a mean characteristic width, the threshold stress for de-stabilization, or freeing, σf, could be extracted from such simulations. More extended numerical analysis could include higher order dislocation clusters and provide the dependence of the threshold stress on the forest dislocation density. The freeing rate should thus be zero below the freeing threshold, and an increasing function of the applied stress above it. Hence, β ≈ β0 (σa − σf)^n for σa > σf, with n a phenomenological parameter.
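The dynamics of Eqs. (1)-(2) can be sketched with a simple explicit finite-difference step in one dimension. The parameter values below follow those quoted in the Results section where the paper states them; Dm, the grid, the run length and the periodic boundaries are assumptions made here for brevity (the paper only requires Dm ≫ Ds and uses Neumann conditions).

```python
import numpy as np

# 1D explicit Euler sketch of Eqs. (1)-(2); not the paper's solver.
Ds, Dm = 3e-3, 0.5                    # Dm is an assumed value, Dm >> Ds
sigma, vs, dc, beta, gamma = 250.0, 1.0, 2.5e-2, 30.0, 2e-2
nx, dx, dt = 200, 0.1, 1e-3

def lap(u):                           # centered Laplacian, periodic ends
    return (np.roll(u, 1) - 2.0 * u + np.roll(u, -1)) / dx**2

rho_s0 = np.sqrt(sigma / (vs * dc))   # uniform steady state of Eq. (1)
rho_m0 = beta / (gamma * rho_s0)      # uniform steady state of Eq. (2)
rng = np.random.default_rng(1)
rho_s = rho_s0 * (1.0 + 1e-2 * rng.standard_normal(nx))  # perturbed forest
rho_m = np.full(nx, rho_m0)

for _ in range(300):
    rs, rm = rho_s, rho_m             # freeze current step values
    rho_s = rs + dt * (Ds * lap(rs) + sigma - vs * dc * rs**2
                       - beta * rs + gamma * rs**2 * rm)
    rho_m = rm + dt * (Dm * lap(rm) + beta * rs - gamma * rs**2 * rm)
```

The reaction terms are stiff (γρs² ≈ 200 per cycle with these values), which is why the paper uses a semi-implicit scheme rather than this explicit one.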
THE MODIFIED WA MODEL: EFFECT OF GRADIENT TERMS The approximation of mobile dislocation diffusion is controversial and may be addressed as follows. The mobile dislocation density ρm is divided into two families, representing the dislocations gliding in the direction of the Burgers vector (ρm+) and in the opposite one (ρm−), with ρm = ρm+ + ρm−.
For crystals with well-developed forest density, and oriented for single slip, we now write (with vg oriented along the x direction):
∂ρs/∂t = Ds ∇²ρs + σ − vs dc ρs² − β ρs + γ ρs² ρm,          (3)
∂ρm+/∂t = −∇x (vg ρm+) + (β/2) ρs − γ ρs² ρm+,               (4)
∂ρm−/∂t = ∇x (vg ρm−) + (β/2) ρs − γ ρs² ρm−,                (5)
or:
∂ρs/∂t = Ds ∇²ρs + σ − vs dc ρs² − β ρs + γ ρs² ρm,          (6)
∂ρm/∂t = −∇x (vg σm) + β ρs − γ ρs² ρm,                      (7)
∂σm/∂t = −∇x (vg ρm) − γ ρs² σm,                             (8)
where σm = ρm+ − ρm− is the density of geometrically necessary dislocations. This variable evolves faster than the other two and may be adiabatically eliminated, leading to the following system, which includes a nonlinear diffusion term in the equation for ρm:
∂ρs/∂t = Ds ∇²ρs + σ − vs dc ρs² − β ρs + γ ρs² ρm,          (9)
∂ρm/∂t = ∇x [ (vg /(γ ρs²)) ∇x (vg ρm) ] + β ρs − γ ρs² ρm.   (10)
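The adiabatic elimination leading to Eq. (10) can be checked numerically: setting the right-hand side of Eq. (8) to zero gives σm = −∇x(vg ρm)/(γρs²), and inserting this into the advection term of Eq. (7) reproduces the nonlinear diffusion term of Eq. (10). A minimal sketch on a periodic 1D grid; the profiles and values below are arbitrary stand-ins.

```python
import numpy as np

# Verify that the quasi-static sigma_m of Eq. (8), substituted into Eq. (7),
# reproduces the nonlinear diffusion term of Eq. (10).
nx, L, vg, gamma = 128, 10.0, 1.0, 2e-2
x = np.linspace(0.0, L, nx, endpoint=False)
dx = L / nx
rho_s = 100.0 + 5.0 * np.sin(2 * np.pi * x / L)   # smooth stand-in profiles
rho_m = 15.0 + np.cos(4 * np.pi * x / L)

def ddx(u):                                        # centered derivative, periodic
    return (np.roll(u, -1) - np.roll(u, 1)) / (2 * dx)

sigma_m = -ddx(vg * rho_m) / (gamma * rho_s**2)    # quasi-static Eq. (8)
term_eq7 = -ddx(vg * sigma_m)                      # advection term of Eq. (7)
term_eq10 = ddx(vg / (gamma * rho_s**2) * ddx(vg * rho_m))  # diffusion, Eq. (10)
assert np.allclose(term_eq7, term_eq10)
```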
THE NUMERICAL SCHEME FOR SOLVING THE WA MODEL In order to solve the modified WA model, we use a numerical scheme based on one proposed by Christov and Pontes (2002). Equations (9) and (10) are solved numerically in two-dimensional rectangular domains, through the finite difference method, using a grid of uniformly spaced points and a second order in time, Crank-Nicholson semi-implicit method with internal iterations at each time step, due to the nonlinear nature of the implicit terms. The proposed scheme is split into two equations using the Stabilizing Correction scheme (Christov and Pontes, 2002 [8], Yanenko, 1971 [9]). The first half-step comprises implicit derivatives with respect to x and explicit derivatives with respect to y. In the second half-step, the derivatives with respect to y are kept implicit and those with respect to x are explicit. The splitting scheme is shown to be equivalent to the original one.
The target scheme
The target second order in time, Crank-Nicholson semi-implicit scheme is:

(ρs^{n+1} − ρs^n)/Δt = Λx^{n+1/2} (ρs^{n+1} + ρs^n)/2 + Λy^{n+1/2} (ρs^{n+1} + ρs^n)/2 + f1^{n+1/2},     (11)
(ρm^{n+1} − ρm^n)/Δt = Λ2^{n+1/2} (ρm^{n+1} + ρm^n)/2 + f2^{n+1/2},                                     (12)

where n is the number of the time step. Upon including the 1/2 factor in the operators Λx^{n+1/2}, Λy^{n+1/2} and Λ2^{n+1/2}, we obtain:

(ρs^{n+1} − ρs^n)/Δt = Λx^{n+1/2} (ρs^{n+1} + ρs^n) + Λy^{n+1/2} (ρs^{n+1} + ρs^n) + f1^{n+1/2},        (13)
(ρm^{n+1} − ρm^n)/Δt = Λ2^{n+1/2} (ρm^{n+1} + ρm^n) + f2^{n+1/2}.                                       (14)
The operators Λx^{n+1/2}, Λy^{n+1/2} and Λ2^{n+1/2} and the functions f1^{n+1/2} and f2^{n+1/2} are defined as:

Λx^{n+1/2} = (Ds/2) ∂²/∂x² − (vs dc/4) (ρs^{n+1} + ρs^n)/2 − β/4,                          (15)
Λy^{n+1/2} = (Ds/2) ∂²/∂y² − (vs dc/4) (ρs^{n+1} + ρs^n)/2 − β/4,                          (16)
f1^{n+1/2} = σ + (γ/2) [(ρs^{n+1} + ρs^n)/2]² (ρm^{n+1} + ρm^n),                           (17)
Λ2^{n+1/2} = (1/2) ∂/∂x { vg/[γ ((ρs^{n+1} + ρs^n)/2)²] ∂/∂x vg } − (γ/2) [(ρs^{n+1} + ρs^n)/2]²,   (18)
f2^{n+1/2} = β (ρs^{n+1} + ρs^n)/2.                                                        (19)
Internal iterations
Since the operators Λx^{n+1/2}, Λy^{n+1/2} and Λ2^{n+1/2}, as well as the functions f1^{n+1/2} and f2^{n+1/2}, contain terms in the new stage, we perform internal iterations at each time step, according to:

(ρs^{n,k+1} − ρs^n)/Δt = Λx^{n+1/2} (ρs^{n,k+1} + ρs^n) + Λy^{n+1/2} (ρs^{n,k+1} + ρs^n) + f1^{n+1/2},   (20)
(ρm^{n,k+1} − ρm^n)/Δt = Λ2^{n+1/2} (ρm^{n,k+1} + ρm^n) + f2^{n+1/2},                                   (21)
where the superscript (n, k+1) identifies the “new” iteration, and (n, k) and n stand for the values obtained in the previous iteration and in the previous time step, respectively. The operators Λx^{n+1/2}, Λy^{n+1/2}, Λ2^{n+1/2} and the functions f1^{n+1/2} and f2^{n+1/2} are redefined as:

Λx^{n+1/2} = (Ds/2) ∂²/∂x² − (vs dc/4) S^{n+1/2} − β/4,                    (22)
Λy^{n+1/2} = (Ds/2) ∂²/∂y² − (vs dc/4) S^{n+1/2} − β/4,                    (23)
f1^{n+1/2} = σ + (γ/2) (S^{n+1/2})² (ρm^{n,k} + ρm^n),                     (24)
Λ2^{n+1/2} = ∂/∂x { vg/[2γ (S^{n+1/2})²] ∂/∂x vg } − (γ/2) (S^{n+1/2})²,   (25)
f2^{n+1/2} = β S^{n+1/2},

where:

S^{n+1/2} = (ρs^{n,k} + ρs^n)/2.                                           (26)
The iterations proceed until the following criteria are satisfied at all grid points, for a certain K:

max‖ρs^{n,K+1} − ρs^{n,K}‖ / max‖ρs^{n,K}‖ < δ   and   max‖ρm^{n,K+1} − ρm^{n,K}‖ / max‖ρm^{n,K}‖ < δ.

Then the last iteration gives the value of the sought functions in the “new” time step, ρs^{n+1} := ρs^{n,K+1} and ρm^{n+1} := ρm^{n,K+1}.
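The internal-iteration idea of Eqs. (20)-(21) and the stopping criterion can be illustrated on a scalar analogue. This is not the paper's solver, just a minimal sketch: for du/dt = −u², the nonlinear coefficient is frozen at S = (u^{n,k} + u^n)/2, as in Eq. (26), the remaining linear problem is solved for u^{n,k+1}, and the iterations stop when the relative change drops below δ.

```python
# Crank-Nicholson with internal iterations for du/dt = -u*u,
# exact solution u(t) = u0 / (1 + u0*t).
dt, delta, t_end = 0.01, 1e-10, 1.0
u, t = 1.0, 0.0
while t < t_end - 1e-12:
    u_n, u_k = u, u
    while True:
        S = 0.5 * (u_k + u_n)                      # frozen coefficient
        # linearized CN step: (u_new - u_n)/dt = -S * (u_new + u_n)/2
        u_new = (u_n - 0.5 * dt * S * u_n) / (1.0 + 0.5 * dt * S)
        if abs(u_new - u_k) <= delta * abs(u_k):   # relative-change criterion
            break
        u_k = u_new
    u = u_new
    t += dt

exact = 1.0 / (1.0 + t_end)
assert abs(u - exact) < 1e-3                       # second order accuracy
```

Iterating the frozen coefficient to convergence is what restores second order in time despite the implicit terms being nonlinear.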
The splitting of the ρs equation
The splitting of Eq. (20) is made according to:

(ρ̃s − ρs^n)/Δt = Λx^{n+1/2} ρ̃s + Λy^{n+1/2} ρs^n + f1^{n+1/2} + (Λx^{n+1/2} + Λy^{n+1/2}) ρs^n,    (27)
(ρs^{n,k+1} − ρ̃s)/Δt = Λy^{n+1/2} (ρs^{n,k+1} − ρs^n).                                              (28)

In order to show that the splitting represents the original scheme, we rewrite Eqs. (27) and (28) in the form:

(E − Δt Λx^{n+1/2}) ρ̃s = (E + Δt Λy^{n+1/2}) ρs^n + Δt f1^{n+1/2} + Δt (Λx^{n+1/2} + Λy^{n+1/2}) ρs^n,   (29)
(E − Δt Λy^{n+1/2}) ρs^{n,k+1} = ρ̃s − Δt Λy^{n+1/2} ρs^n,                                               (30)

where E is the unity operator. The intermediate variable ρ̃s is eliminated by applying the operator (E − Δt Λx^{n+1/2}) to the second equation and summing the result with the first one:

(E − Δt Λx^{n+1/2})(E − Δt Λy^{n+1/2}) ρs^{n,k+1} = (E + Δt Λy^{n+1/2}) ρs^n − (E − Δt Λx^{n+1/2}) Δt Λy^{n+1/2} ρs^n + Δt f1^{n+1/2} + Δt (Λx^{n+1/2} + Λy^{n+1/2}) ρs^n.   (31, 32)

This result may be rewritten as:

(E + Δt² Λx^{n+1/2} Λy^{n+1/2}) (ρs^{n,k+1} − ρs^n)/Δt = (Λx^{n+1/2} + Λy^{n+1/2}) (ρs^{n,k+1} + ρs^n) + f1^{n+1/2}.   (33)

A comparison with Eq. (20) shows that Eq. (33) is equivalent to it, except for the positive definite operator

B ≡ E + Δt² Λx^{n+1/2} Λy^{n+1/2} = E + O(Δt²),

having a norm greater than one, which acts on the term (ρs^{n,k+1} − ρs^n)/Δt. This operator does not change the steady-state solution. Furthermore, since ‖B‖ > 1, the split scheme is more stable than the original one.
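The operator identity behind Eq. (33) can be verified directly, with small random matrices standing in for the discrete operators Λx and Λy:

```python
import numpy as np

# Check that the two half-steps, Eqs. (27)-(28), are algebraically equivalent
# to the target scheme premultiplied by B = E + dt^2 * Lx @ Ly, Eq. (33).
rng = np.random.default_rng(0)
n, dt = 6, 0.1
E = np.eye(n)
Lx = 0.3 * rng.standard_normal((n, n))   # stand-ins for Lambda_x, Lambda_y
Ly = 0.3 * rng.standard_normal((n, n))
f = rng.standard_normal(n)               # stand-in for f1
u = rng.standard_normal(n)               # rho_s at step n

# First half-step, Eq. (27): implicit in x, explicit in y.
u_tilde = np.linalg.solve(E - dt * Lx,
                          (E + dt * Lx + 2 * dt * Ly) @ u + dt * f)
# Second half-step, Eq. (28): implicit in y.
u_new = np.linalg.solve(E - dt * Ly, u_tilde - dt * Ly @ u)

# Eq. (33): B acting on the time increment equals the unsplit right-hand side.
lhs = (E + dt**2 * Lx @ Ly) @ (u_new - u) / dt
rhs = (Lx + Ly) @ (u_new + u) + f
assert np.allclose(lhs, rhs)
```

The identity holds for arbitrary Λx and Λy, which is why the splitting introduces only the O(Δt²) operator B and no additional splitting error in the steady state.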
Spatial discretization
The grid is “staggered” and the discretization of the diffusive term of Eq. (10) is made according to the following formula, which preserves the conservation law implicit in the divergence:

Δt ∂/∂x [ (vg/(γ ρs²)) ∂/∂x (vg ρm) ] ≈ Δt ∂/∂x { vg/[γ (S^{n+1/2})²] ∂/∂x [ vg (ρm^{n,k+1} + ρm^n)/2 ] }.   (34)

Upon defining:

Q_{i,j} = Δt vg / [4γ (S_{i,j}^{n+1/2})²],   (35)

we replace the diffusive term of ρm by:

Δt ∂/∂x [ (vg/(γ ρs²)) ∂/∂x (vg ρm) ] ≈ (vg/Δx²) { (Q_{i,j} + Q_{i,j+1}) [(ρm)_{i,j+1} − (ρm)_{i,j}] − (Q_{i,j−1} + Q_{i,j}) [(ρm)_{i,j} − (ρm)_{i,j−1}] }   (36)
= (vg/Δx²) { (Q_{i,j−1} + Q_{i,j}) (ρm)_{i,j−1} − (Q_{i,j−1} + 2Q_{i,j} + Q_{i,j+1}) (ρm)_{i,j} + (Q_{i,j} + Q_{i,j+1}) (ρm)_{i,j+1} }.   (37)
The diffusive terms of Eq. (9) are written in discrete form using the usual second order, three-point centered formula. Neumann boundary conditions are used in the integration of the WA model, with derivatives in the direction perpendicular to the walls equal to zero. The algebraic linear systems were solved using a routine with Gaussian elimination and pivoting, written by one of us (CIC).
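The conservation property of the Q-weighted stencil, Eqs. (36)-(37), can be checked numerically: the three coefficients sum to zero at every interior node, so a uniform field produces no diffusive flux, and interior contributions telescope to boundary fluxes. The random Q values below are stand-ins for Δt vg/[4γ(S^{n+1/2})²].

```python
import numpy as np

rng = np.random.default_rng(2)
nx, dx, vg = 50, 0.1, 1.0
Q = rng.uniform(0.1, 1.0, nx)
rho = rng.uniform(0.0, 1.0, nx)

def nl_diff(rho):
    # Interior-node evaluation of the stencil of Eq. (37).
    d = np.zeros(nx)
    for j in range(1, nx - 1):
        d[j] = vg / dx**2 * ((Q[j-1] + Q[j]) * rho[j-1]
                             - (Q[j-1] + 2*Q[j] + Q[j+1]) * rho[j]
                             + (Q[j] + Q[j+1]) * rho[j+1])
    return d

assert np.allclose(nl_diff(np.ones(nx)), 0.0)   # uniform field: zero flux

# Interior contributions telescope to the two boundary fluxes:
flux = lambda j: (Q[j] + Q[j+1]) * (rho[j+1] - rho[j])   # flux between j, j+1
interior = nl_diff(rho)[1:-1].sum() * dx**2 / vg
assert np.isclose(interior, flux(nx - 2) - flux(0))
```

This is the discrete analogue of the conservation law implicit in the divergence form of Eq. (10).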
RESULTS We present the results of four simulations in a box with dimensions 25 × 5 µm. The system parameters are: vs = 1 µm cy⁻¹, dc = 2.5 × 10⁻² µm, Ds = 3 × 10⁻³ µm² cy⁻¹ (linear case), vg = 10² µm cy⁻¹, γ = 2 × 10⁻², σ = 250 µm⁻² cy⁻¹. The simulations were run with a time step of 2.5 × 10⁻³ cy. Case #1 refers to a system with linear diffusion of ρm and an initial condition consisting of a central stripe with random values of ρs and zero everywhere else. Cases #2 to #4 refer to systems with nonlinear diffusion of ρm, an initial condition consisting of the uniform base state ρ̄s = [σ/(vs dc)]^{1/2} and ρ̄m = β/(γ ρ̄s), and bifurcation parameter β = 15, 30 and 60, respectively (see Tab. 1). Figs. 1 and 2 present the time evolution of ρs for the four cases considered. Fig. 3 shows the time evolution of the maximum of ρs and the computational effort, given by the number of internal iterations per time step.

TABLE 1. Main data of the four simulations presented

Case   Diffusion of ρm   Initial Cond.   β    Grid points    Walls
1      linear            vertical band   30   3000 × 750     12
2      nonlinear         random          15   3000 × 750     15
3      nonlinear         random          30   3000 × 750     12
4      nonlinear         random          60   4000 × 1000    10
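The uniform base state used as the initial condition for Cases #2 to #4 can be checked against Eqs. (9)-(10): with zero space derivatives, both right-hand sides must vanish. A quick check with the quoted parameter values and β = 30 (Case #3):

```python
# Base state of Eqs. (9)-(10) with the parameters of the Results section.
sigma, vs, dc, beta, gamma = 250.0, 1.0, 2.5e-2, 30.0, 2e-2

rho_s = (sigma / (vs * dc)) ** 0.5   # uniform forest density, = 100 um^-2
rho_m = beta / (gamma * rho_s)       # uniform mobile density, = 15 um^-2

# Reaction parts of Eqs. (9) and (10) must vanish at the base state.
rhs_s = sigma - vs * dc * rho_s**2 - beta * rho_s + gamma * rho_s**2 * rho_m
rhs_m = beta * rho_s - gamma * rho_s**2 * rho_m
assert abs(rhs_s) < 1e-8 and abs(rhs_m) < 1e-8
```

Eq. (10) fixes ρ̄m = β/(γρ̄s); substituting into Eq. (9) leaves σ = vs dc ρ̄s², hence ρ̄s = [σ/(vs dc)]^{1/2}.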
FIGURE 1. Time evolution of Case #1 (β = 30) of dislocation pattern formation in a rectangular stripe with 20 × 5 µm, starting from a vertical stripe with random distribution of ρs and ρm and linear diffusion of ρm. Snapshots at t = 0.00, 0.25, 1.00, 1.50, 2.00, 5.00, 350.0 and 5286.0 cycles.
FIGURE 2. Time evolution of Cases #2 to #4 of dislocation pattern formation in a rectangular stripe with 20 × 5 µm, starting from a random distribution of ρs and ρm and nonlinear diffusion of ρm. Snapshots at t = 0.00, 0.50, 0.75, 7.00, 300.0 and 1748.0 cycles (Case #2, β = 15); t = 0.00, 0.25, 3.0, 30.0, 3900.0 and 5648.0 cycles (Case #3, β = 30); and t = 0.00, 0.50, 3.0, 30.0, 100.0 and 248.0 cycles (Case #4, β = 60).
FIGURE 3. Time evolution curves of max(ρs) × t and of the computational effort (number of internal iterations per step) for the four cases considered: Case #1 (β = 30, linear diffusion of ρm), Case #2 (β = 15), Case #3 (β = 30) and Case #4 (β = 60), the latter three with nonlinear diffusion of ρm.
DISCUSSION A number of phenomena emerge from the numerical simulations presented, the main ones being:
1. The increase of the bifurcation parameter β results in structures with larger wavelength. The wall height increases and the wall thickness decreases with β. Thinner walls required finer numerical meshes, resulting in greater computational effort (Case #4, with β = 60);
2. The height of the walls decreases as pattern defects are eliminated. The pattern evolution accelerates at the moments when defects are eliminated, and higher local crests appear at these moments. The computational effort, measured by the number of internal iterations at each time step, increases (see Fig. 3);
3. Increasing the bifurcation parameter β from 15 to 30 accelerates the pattern formation. Further increasing β to 60 results in longer transients, possibly due to the disordering effect of the stronger forcing;
4. The movement of small pieces of dislocation walls along the x direction is enhanced by the nonlinear diffusion of ρm.
CONCLUSIONS We presented a finite difference, second order in time scheme for solving reaction-diffusion equations in two dimensions. Second order accuracy was achieved by performing internal iterations at each time step. The scheme was implemented for a mesoscopic two-equation model proposed by Walgraef and Aifantis (1985) [2] to describe dislocation dynamics in materials subjected to cyclic loading. Dislocations are grouped into two variables, the first consisting of a density of immobile or static dislocations, ρs (the forest of dislocations). The second group consists of mobile dislocations that move along the forest of static ones and are grouped in a density ρm. The density of static dislocations diffuses along both directions, whereas the mobile dislocations present a nonlinear diffusion along one of the directions only. The Stabilizing Correction scheme was used for the splitting of the evolution equation of ρs (Yanenko, 1971 [9], Christov and Pontes, 2002 [8], Pontes et al., 2010 [10]). Several phenomena were observed in the numerical simulations, such as the increase of the fundamental wavelength of the structure, the increase of the wall height and the decrease of the wall thickness. More complete results and discussion will be given in a forthcoming paper.
ACKNOWLEDGMENTS JP acknowledges the Center of Parallel Computing of the Federal University of Rio de Janeiro (NACAD/COPPE/UFRJ) and Prof. Alvaro Coutinho for the use of the cluster of computers where the simulations presented here were run. He also acknowledges financial support from the Brazilian agency CNPq.
REFERENCES
1. N. M. Ghoniem, J. R. Matthews, and R. J. Amoedo, Res Mechanica 29, 197 (1990).
2. D. Walgraef, and E. C. Aifantis, J. Appl. Phys. 58, 668 (1985).
3. D. Walgraef, and E. C. Aifantis, Int. J. Eng. Sci. 23, 1351, 1359 and 1364 (1986).
4. D. Walgraef, Spatio-Temporal Pattern Formation, Springer, New York, 1997.
5. C. Schiller, and D. Walgraef, Acta Metall. 36, 563–574 (1988).
6. J. Kratochvil, Rev. Phys. Appliquée 23, 419 (1988).
7. H. Mughrabi, F. Ackermann, and K. Herz, in Fatigue Mechanisms, ASTM-NBS-NSF Symposium, edited by E. T. Fong, ASTM, Kansas City, 1979, paper No. STP-675.
8. C. I. Christov, and J. Pontes, Mathematical and Computer Modelling 35, 87–99 (2002).
9. N. N. Yanenko, The Method of Fractional Steps, Springer, New York, 1971.
10. J. Pontes, D. Walgraef, and C. I. Christov, “A Splitting Scheme for Solving the Reaction-Diffusion Equations Modelling Dislocation Dynamics in Materials Subjected to Cyclic Loading,” in Applications of Mathematics in Technical and Natural Sciences, edited by M. Todorov, and C. I. Christov, American Institute of Physics, New York, 2010, vol. 1301, pp. 511–519.
Photoelastic Determination of Boundary Condition for Finite Element Analysis
S. Yoneyama, S. Arikawa and Y. Kobayashi Department of Mechanical Engineering, Aoyama Gakuin University, 5-10-1 Fuchinobe, Sagamihara, Kanagawa 252-5258, Japan ABSTRACT An experimental-numerical hybrid method for determining stress components in photoelasticity is proposed in this study. Boundary conditions for a local finite element model, that is, tractions along boundaries are inversely determined from photoelastic fringes. The tractions can be obtained by the method of linear least-squares from both principal stress difference and principal direction. On the other hand, the tractions can also be determined only from the principal stress difference if nonlinear least-squares is used. After determining the boundary conditions for the local finite element model, the stresses can be obtained by finite element direct analysis. The effectiveness of the proposed method is validated by analyzing the stresses in a perforated plate under tension. Results show that the boundary conditions of the local finite element model can be determined from the photoelastic fringes and then the individual stresses can be obtained by the proposed method. INTRODUCTION Optical methods in experimental stress analysis such as photoelasticity, thermoelasticity and a wide variety of interferometric methods are useful and valuable techniques because they provide whole-field information on a specimen surface or the area of interest. However, it is sometimes difficult to extract desired quantities from the quantities obtained by these methods. For example, moiré interferometry provides surface displacements and then strains are obtained by differentiating the displacements spatially. However, the differentiation of measured displacements has the difficulties that the errors in the measured values give rise to even greater errors in their derivatives. Thus, various studies have been performed to obtain strains from measured displacements [1-4]. 
In the case of photoelasticity, it is well known that the fringe patterns represent the principal stress difference and the principal direction, and thus the stress components themselves cannot be obtained directly. Conventionally, a method based on the equilibrium equation or compatibility equation, such as a shear difference method, has been used for the stress separation in photoelasticity [5,6]. Several stress separation techniques based on the conventional methods have been developed [7-9]. The major drawback of the conventional methods is that the stresses obtained by these methods usually suffer from error accumulation arising from the finite difference approximation. On the other hand, various techniques for determining stress components have also been reported. Patterson and coworkers [10,11], and Sakagami et al. [12] developed a hybrid method of photoelasticity and thermoelasticity. In this method, the difference and the sum of principal stresses are measured separately, and then they are combined for obtaining stress components. The difference and the sum of principal stresses can also be obtained by combining photoelasticity and interferometry [13-16]. The disadvantage of the hybrid methods of photoelasticity and another experimental method is that the measurement can be complicated. On the other hand, several hybrid methods with theoretical analysis or numerical analysis, and inverse analysis methods, have also been proposed for the stress separation. Chang et al. [17] determined the coefficients of the Airy stress function from photoelastic fringes for determining stresses. Berghaus [18] proposed a hybrid method with a finite element method. In this method, the displacement boundary conditions along the axis of symmetry and the free boundary for a finite element method are determined by photoelasticity. Hayabusa et al. [19] and Chen et al. [20] proposed a hybrid method with a numerical method such as a boundary element method.
They determined boundary conditions by inverse analysis from photoelastic fringes and then the stresses are determined by direct analysis. The stress separation can be performed by the methods mentioned above. In particular, the use of numerical methods such as a finite element method or a boundary element method for stress separation is useful because the data processing is easy and full-field stresses and strains can be obtained easily. However, inverse boundary value problems are
often ill-posed. Therefore, various additional techniques should be introduced into the inverse analysis for obtaining stable and accurate results. In the present study, an alternative and simple hybrid method for stress separation in photoelasticity is proposed. Boundary conditions for a local finite element model, that is, tractions along the boundaries, are determined by inverse analysis from photoelastic fringes. Two algorithms are presented. One is a linear algorithm in which the tractions are determined from the principal stress difference and the principal direction using the method of linear least-squares. In the other algorithm, the tractions are determined only from the principal stress difference using nonlinear least-squares. After determining the tractions, the stress components are obtained by finite element direct analysis. The effectiveness of the proposed method is validated by analyzing the stresses around a hole in a plate under tension. Results show that the boundary conditions of the local finite element model can be determined from the photoelastic fringes and then the stresses can be obtained by the proposed method. INVERSION OF BOUNDARY CONDITIONS Basic Principle Figure 1 shows a typical optical setup for photoelasticity, that is, a circular polariscope. A birefringent specimen is placed in the polariscope, and photoelastic fringes appear when the specimen is loaded. The angle of the principal axis of the specimen is interpreted as the principal direction, i.e., the isoclinic parameter. Similarly, the retardation δ of the specimen, that is, the isochromatic parameter, is related to the principal stress difference as [21]

σ1 − σ2 = (δ/2π)(fs/h),   (1)

where fs is the material fringe value, h is the thickness of the specimen, and σ1 and σ2 denote the principal stresses. Various techniques such as a phase-stepping method can be used for obtaining the isochromatic and isoclinic parameters [22-25].
Therefore, the principal stress difference and the principal direction are obtained in the region of interest or the whole field of the specimen by introducing one of the data acquisition and processing techniques.
In a finite element method, on the other hand, it can be considered that reasonably accurate stress distributions are obtained when appropriate boundary conditions are given, provided that an appropriate finite element model is used and the material properties are known. In the proposed method, therefore, the boundary conditions of the region for analysis, that is, the tractions along the boundaries, are inversely determined from photoelastic fringes. Then, the stresses are determined by finite element direct analysis applying the computed boundary conditions. Figure 2 schematically shows a two-dimensional finite element model of the region for analysis. The displacements of some nodes are fixed so that rigid body motion is not allowed. Then, a unit force along one of the directions of the coordinate system is applied to a node at the boundary of the model. That is, the finite element analysis is performed under the boundary condition of a unit force on the boundary. The analysis is repeated, changing the direction of the unit force and the node at which the unit force is applied. The stress components at a point (xi, yi) for the applied unit force Pj = 1 (j = 1~N) are represented as (σx′)ij, (σy′)ij, and (τxy′)ij. Here, i (= 1~M) is the data index, j is the index of the applied force, M is the number of data points, and N is the number of forces to be determined at the nodes along the boundary of the model. The stress components (σx)i, (σy)i, and (τxy)i at the point (xi, yi) under the actual applied forces Fj (j = 1~N) can be expressed using the principle of superposition as

(σx)i = (σx′)ij Fj,
(σy)i = (σy′)ij Fj,   (i = 1~M, j = 1~N),   (2)
(τxy)i = (τxy′)ij Fj,

where the summation convention over j is used. That is, for example,
Fig. 1 Typical optical setup for photoelasticity
Fig. 2 Finite element model with boundary condition of unit force
\[ (\sigma_x)_i = (\sigma_x')_{ij} F_j = \sum_{j=1}^{N} (\sigma_x')_{ij} F_j = (\sigma_x')_{i1} F_1 + (\sigma_x')_{i2} F_2 + \cdots + (\sigma_x')_{iN} F_N \quad (i = 1\sim M). \]

In equation (2), Fj are the nodal forces along the boundary. Therefore, the tractions along the boundaries are determined, and the subsequent stress analysis can be performed, once the values of Fj are found.

Linear Algorithm

From the principal stress difference σ1 − σ2 and the principal direction θ obtained by photoelasticity, the normal stress difference σx − σy and the shear stress τxy are obtained as

\[ \sigma_x - \sigma_y = (\sigma_1 - \sigma_2)\cos 2\theta, \qquad \tau_{xy} = \tfrac{1}{2}(\sigma_1 - \sigma_2)\sin 2\theta. \tag{3} \]

Therefore, the relationships between the values obtained by photoelasticity and the nodal forces Fj along the boundary can be expressed as

\[ (\sigma_x - \sigma_y)_i = \left[ (\sigma_x')_{ij} - (\sigma_y')_{ij} \right] F_j, \qquad (\tau_{xy})_i = (\tau_{xy}')_{ij} F_j \quad (i = 1\sim M,\ j = 1\sim N), \tag{4} \]

where (σx − σy)i and (τxy)i express the normal stress difference and the shear stress at the point (xi, yi) obtained by photoelasticity. Equation (4) expresses linear equations in the unknown coefficients Fj. For numerous data points, an over-determined set of simultaneous equations is obtained. In this case, the nodal forces Fj along the boundary can be estimated using linear least-squares as

\[ \mathbf{F} = \left( \mathbf{A}^{\mathrm{T}} \mathbf{A} \right)^{-1} \mathbf{A}^{\mathrm{T}} \mathbf{S}, \tag{5} \]

where F, A and S are the nodal forces, the stresses under the boundary conditions of the unit forces, and the values obtained by photoelasticity, respectively. They are expressed as

\[ \mathbf{F} = \begin{bmatrix} F_1 \\ \vdots \\ F_N \end{bmatrix}, \quad \mathbf{A} = \begin{bmatrix} (\sigma_x')_{11} - (\sigma_y')_{11} & \cdots & (\sigma_x')_{1N} - (\sigma_y')_{1N} \\ \vdots & \ddots & \vdots \\ (\sigma_x')_{M1} - (\sigma_y')_{M1} & \cdots & (\sigma_x')_{MN} - (\sigma_y')_{MN} \\ (\tau_{xy}')_{11} & \cdots & (\tau_{xy}')_{1N} \\ \vdots & \ddots & \vdots \\ (\tau_{xy}')_{M1} & \cdots & (\tau_{xy}')_{MN} \end{bmatrix}, \quad \mathbf{S} = \begin{bmatrix} (\sigma_x - \sigma_y)_1 \\ \vdots \\ (\sigma_x - \sigma_y)_M \\ (\tau_{xy})_1 \\ \vdots \\ (\tau_{xy})_M \end{bmatrix}. \]
After determining the nodal forces F along the boundary using equation (5), the stress components can be obtained by finite element direct analysis using the nodal forces F as the boundary conditions.
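The linear algorithm amounts to an ordinary least-squares solve. A minimal sketch with synthetic stand-ins for the unit-load stress fields (random matrices replace the N finite element runs; all names are illustrative):

```python
import numpy as np

# Synthetic stand-ins for the unit-load stress fields (sigma_x')_ij,
# (sigma_y')_ij, (tau_xy')_ij at M data points for N unit forces.
rng = np.random.default_rng(0)
M, N = 200, 20
sx_u, sy_u, txy_u = rng.normal(size=(3, M, N))
F_true = rng.normal(size=N)          # "actual" nodal forces

# Photoelastic observables via superposition, cf. eqs. (2) and (4):
s_diff = (sx_u - sy_u) @ F_true      # (sigma_x - sigma_y)_i
t_xy = txy_u @ F_true                # (tau_xy)_i

# Assemble A (2M x N) and S (2M,), then solve eq. (5) in the
# least-squares sense; lstsq is numerically preferable to forming
# (A^T A)^(-1) explicitly.
A = np.vstack([sx_u - sy_u, txy_u])
S = np.concatenate([s_diff, t_xy])
F_est, *_ = np.linalg.lstsq(A, S, rcond=None)
```

With noise-free synthetic data, F_est matches F_true to machine precision; with measured fringe data, the least-squares residual reflects the experimental error.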
Nonlinear Algorithm

As is well known, it is difficult to obtain accurate values of the principal direction by photoelasticity. The measured principal direction is sometimes affected by the isochromatics and by the accuracy of the quarter-wave plates. Various techniques have been proposed to obtain accurate values of the principal direction [26,27]. Because accurate values of the principal direction cannot always be obtained, a method for determining the boundary conditions without the principal direction is also described below. The principal stress difference is expressed using the stress components as
\[ \sigma_1 - \sigma_2 = \sqrt{(\sigma_x - \sigma_y)^2 + 4\tau_{xy}^2}. \tag{6} \]
Therefore, the relationship between the experimentally obtained values of the principal stress difference and the nodal forces Fj along the boundary can be expressed as
\[ (\sigma_1 - \sigma_2)_i = \sqrt{\left[ (\sigma_x')_{ij} F_j - (\sigma_y')_{ij} F_j \right]^2 + 4 \left[ (\tau_{xy}')_{ij} F_j \right]^2} \quad (i = 1\sim M,\ j = 1\sim N), \tag{7} \]

where (σ1 − σ2)i is the principal stress difference obtained by photoelasticity at the point (xi, yi). Equation (7) is nonlinear in the unknown parameters Fj. To solve for these parameters, an iterative procedure based on the Newton-Raphson method is employed. Equation (7) is rewritten as

\[ h_i = \sqrt{\left[ (\sigma_x')_{ij} F_j - (\sigma_y')_{ij} F_j \right]^2 + 4 \left[ (\tau_{xy}')_{ij} F_j \right]^2} - (\sigma_1 - \sigma_2)_i \quad (i = 1\sim M,\ j = 1\sim N). \tag{8} \]

A series of iterative equations based on Taylor's series expansions of equation (8) yields

\[ (h_i)_{k+1} = (h_i)_k + \left( \frac{\partial h_i}{\partial F_1} \right)_k \Delta F_1 + \cdots + \left( \frac{\partial h_i}{\partial F_N} \right)_k \Delta F_N \quad (i = 1\sim M), \tag{9} \]

where the subscript k denotes the kth iteration step, and ΔF1, …, ΔFN are the corrections to the previous estimates of F1, …, FN. The desired result (hi)k+1 = 0 yields the following simultaneous equations with respect to the corrections:

\[ -h_i = \frac{\partial h_i}{\partial F_1} \Delta F_1 + \cdots + \frac{\partial h_i}{\partial F_N} \Delta F_N \quad (i = 1\sim M). \tag{10} \]

The solution of equation (10) in the least-squares sense is

\[ \Delta\mathbf{D} = \left( \mathbf{B}^{\mathrm{T}} \mathbf{B} \right)^{-1} \mathbf{B}^{\mathrm{T}} \mathbf{H}. \tag{11} \]

In that equation,

\[ \Delta\mathbf{D} = \begin{bmatrix} \Delta F_1 \\ \vdots \\ \Delta F_N \end{bmatrix}, \quad \mathbf{B} = \begin{bmatrix} \dfrac{\partial h_1}{\partial F_1} & \cdots & \dfrac{\partial h_1}{\partial F_N} \\ \vdots & \ddots & \vdots \\ \dfrac{\partial h_M}{\partial F_1} & \cdots & \dfrac{\partial h_M}{\partial F_N} \end{bmatrix}, \quad \mathbf{H} = \begin{bmatrix} -h_1 \\ \vdots \\ -h_M \end{bmatrix}. \]
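The iteration of equations (7)-(11) can be sketched as a Gauss-Newton loop. In the sketch below, synthetic random matrices stand in for the unit-load stress fields from the finite element runs, and all names are illustrative:

```python
import numpy as np

# Synthetic stand-ins for the unit-load stress fields at M data points.
rng = np.random.default_rng(1)
M, N = 300, 10
dxy = rng.normal(size=(M, N))        # (sigma_x')_ij - (sigma_y')_ij
txy = rng.normal(size=(M, N))        # (tau_xy')_ij
F_true = rng.uniform(0.5, 1.5, N)    # one sign only, cf. the sign ambiguity
obs = np.hypot(dxy @ F_true, 2.0 * (txy @ F_true))   # (sigma1 - sigma2)_i

def residual_and_jacobian(F):
    a, b = dxy @ F, txy @ F
    r = np.hypot(a, 2.0 * b)         # sqrt(a^2 + 4 b^2)
    h = r - obs                      # residual, cf. eq. (8)
    B = (a[:, None] * dxy + 4.0 * b[:, None] * txy) / r[:, None]
    return h, B

F = np.ones(N)                       # initial guess with the correct sign
for _ in range(50):                  # Newton-Raphson / Gauss-Newton loop
    h, B = residual_and_jacobian(F)
    dF, *_ = np.linalg.lstsq(B, -h, rcond=None)   # cf. eqs. (10)-(11)
    F += dF
    if np.max(np.abs(dF)) < 1e-10:
        break
```

Note that −F would fit the principal stress difference equally well, which is exactly the sign ambiguity discussed in the text; the positive initial guess selects the correct branch here.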
Fig. 3 Perforated plate specimen used for verifying the proposed method

The solution of the matrix equation gives the correction terms for the prior estimates of the coefficients, and the estimates of the unknowns are revised accordingly. An iterative procedure is therefore used to obtain the best-fit set of coefficients; this procedure is repeated until the corrections become acceptably small. After determining the nodal forces along the boundary, the stress components can be obtained by finite element direct analysis using the nodal forces as the boundary conditions. That is, the stress separation can be performed. It is noted that, because the method described here uses only the principal stress difference, the sign of the applied forces along the boundary cannot be determined. In other words, the proposed nonlinear algorithm cannot judge whether a traction is tensile or compressive. Therefore, appropriate initial values should be provided for the nonlinear least-squares calculation.

EXPERIMENTAL VERIFICATION OF THE PROPOSED METHOD
A simple static problem is analyzed to verify the proposed method. A perforated plate made of epoxy resin, 228 mm in height, 50 mm in width and 3 mm in thickness, having a hole of 10 mm diameter, is subjected to a tensile load of P = 398 N as shown in Fig. 3. The material fringe value fs of the material is determined as 11.48 kN/m. The specimen is placed in a circular polariscope with the quarter-wave plates matched for the wavelength of 560 nm. Three monochromatic lights of wavelengths 500 nm, 550 nm, and 600 nm, emitted from a halogen lamp with interference filters, are used as the light source in order to apply the absolute phase analysis method with tricolor images [28]. The phase-stepped photoelastic fringes are collected by a monochromatic CCD camera with a resolution of 640 × 480 pixels and 256 gray levels. Then, the fringe pattern is analyzed, the ambiguity of the isochromatic phase is corrected, and the phase unwrapping is performed by the method proposed previously [28]. Figure 4(a) shows an example of the photoelastic fringe pattern around the hole. Applying the phase-stepping method with 7 images [28,29], the wrapped phases of the retardation and the principal direction are obtained as shown in Figs. 4(b) and (c). The phase map of the retardation in Fig. 4(b) contains the region of ambiguity where the mathematical sign of the retardation is wrong. In addition, the values of the retardation in Fig. 4(b) lie in the range from −π to π rad. On the other hand, the principal direction in Fig. 4(c) lies in the range from −π/4 to π/4 rad, whereas the actual value should be in the range from −π/2 to π/2 rad. The ambiguity of the retardation is corrected using the phase maps obtained for the three monochromatic wavelengths, as shown in Fig. 4(d). Then, the unwrapped phases of the retardation and the principal direction are obtained, as shown
Fig. 4 (a) Photoelastic fringe pattern; (b) wrapped retardation with ambiguity of sign; (c) wrapped principal direction; (d) wrapped retardation with corrected sign; (e) unwrapped retardation; (f) unwrapped principal direction
Fig. 5 Finite element model of the analysis region

in Figs. 4(e) and (f). It is recognized that the absolute phase values are obtained at almost all points, as shown in these figures. The principal stress difference and the principal direction at M = 2394 data points on the specimen surface are extracted and used as the input data for the proposed method. The stress separation is performed in the 20 mm × 20 mm region around the hole, indicated by ABCD in Fig. 3. Figure 5 shows the finite element model of the 20 mm × 20 mm region used for the proposed method. In this model, 8-noded isoparametric elements are used. The numbers of elements and nodes are 200 and 680, respectively. In order to obtain the stresses under a unit force at a point on the boundary, the displacements at some nodes must be fixed to prevent rigid body motion. In this study, the x and y components of the displacement at point A and the y-directional displacement at point B are fixed, although these points actually displace. This assumption is valid because the rigid body
Fig. 6 Example of the variation of nodal force during iteration process in nonlinear algorithm
Fig. 7 Tractions along the boundary CD

translation and the rotation of the analysis region do not affect the stress distribution. The nodal forces at the other nodes on the boundary are obtained by the proposed method. The number of nodes along the boundary is 80, and thus the number of nodal force components along the boundary is 160. The number of nodal forces to be determined is N = 157 because three displacement components at points A and B are fixed. The nodal forces are determined using both the linear and nonlinear algorithms. Figure 6 shows an example of the variation of the nodal force at a point during the iteration process of the nonlinear algorithm. In this example, an initial value of −1 N is given, as shown in the figure. The value of the nodal force is corrected by the Newton-Raphson method and converges to a constant value as shown. Because the nodal forces at 157 points are determined simultaneously in the iteration process, the convergence is not fast, and about 40 iterations are required in this example.
Fig. 8 Stresses obtained by the proposed linear algorithm: (a) σx; (b) σy; (c) τxy
Fig. 9 Stresses obtained by the proposed nonlinear algorithm: (a) σx; (b) σy; (c) τxy
Fig. 10 Stresses obtained by finite element direct analysis: (a) σx; (b) σy; (c) τxy

The tractions along the boundary CD determined from the nodal forces obtained by the linear and nonlinear algorithms are shown in Fig. 7. In this figure, solid curves represent the values obtained by finite element direct analysis. As shown, the tractions on the boundary of the analysis area obtained by the proposed method show good agreement with the values obtained by the direct analysis. In addition, it is observed that the accuracies of the tractions obtained by the linear and nonlinear algorithms are almost the same. Using the nodal forces obtained by the proposed method as the input data to finite element analysis, the stresses are computed. Figures 8 and 9 show the stresses around the hole obtained by the linear and nonlinear algorithms, respectively. The stresses obtained by finite element direct analysis are also shown in Fig. 10 for comparison. As shown in these figures, the stress components are obtained from the photoelastic fringes by the proposed linear and nonlinear algorithms. The average difference between the y-directional normal stresses σy obtained by the linear algorithm and the direct values is 0.11 MPa, the maximum difference is 0.66 MPa, and the standard deviation is 0.10 MPa. On the other hand, the average difference, the maximum difference and the standard deviation between the values obtained by the nonlinear algorithm and those by the direct analysis are 0.28 MPa, 0.68 MPa, and 0.12 MPa, respectively. It seems that the results obtained by the linear algorithm are better
than those obtained by the nonlinear algorithm. However, in the linear algorithm, the principal direction as well as the principal stress difference is used for obtaining the shear stress and the normal stress difference. Therefore, the results of the stress separation are affected by the accuracy of the principal direction. The principal stress difference can be evaluated accurately in photoelasticity. As mentioned, however, accurate evaluation of the principal direction is known to be difficult even if a phase-stepping method is introduced. In such a case, therefore, the nonlinear algorithm should be used to obtain better results. The drawback of the nonlinear algorithm is that the sign of the nodal force cannot be determined. Therefore, when accurate principal directions cannot be obtained, appropriate initial values of the nodal forces are first determined by the linear algorithm, and the nonlinear algorithm is then used to determine the tractions. The stresses with the appropriate signs can then be obtained.

CONCLUSIONS
In this study, an experimental-numerical hybrid method for determining stress components in photoelasticity is proposed. Boundary conditions for a local finite element model are inversely determined from the principal stress difference and the principal direction in the linear algorithm. Alternatively, the boundary conditions can be determined from the principal stress difference alone if the nonlinear algorithm is used. Then, the stresses are obtained by finite element direct analysis using the computed boundary conditions. The effectiveness of the proposed method is validated by analyzing the stresses around a hole in a perforated plate under tension. Results show that the boundary conditions of the local finite element model can be determined from the photoelastic fringes and that the stresses can then be obtained by the proposed method.

ACKNOWLEDGMENT
The authors appreciate the financial support by the Grant-in-Aid for Encouragement of Young Scientists from the Japan Society for the Promotion of Science. REFERENCES
[1] Bossaert W, Dechaene R, Vinckier A (1968) Computation of finite strains from moiré displacement patterns. Strain 3(1): 65–75. [2] Segalman DJ, Woyak DB, Rowlands RE (1979) Smooth spline-like finite-element differentiation of full-field experimental data over arbitrary geometry. Exp Mech 19(12): 429–427. [3] Sutton MA, Turner JL, Bruck HA, Chae TA (1991) Full-field representation of discretely sampled surface deformation for displacement and strain analysis. Exp Mech 31(2): 168–177. [4] Geers MGD, De Borst R, Brekelmans WAM (1996) Computing strain fields from discrete displacement fields in 2D-solids. Int J Solids Struct 33(29): 4293–4307. [5] Haake SJ, Patterson EA (1992) The determination of principal stresses from photoelastic data. Strain 28(4): 153–158. [6] Fernández MSB, Calderon JMA, Diez PMB, Segura IIC (2010) Stress-separation techniques in photoelasticity: a review. J Strain Anal Eng Des 45(1): 1–17. [7] Ramji M, Ramesh K (2008) Whole field evaluation of stress components in digital photoelasticity – issues, implementation and application. Opt Lasers Eng 46(3): 257–271. [8] Ashokan K, Ramesh K (2009) An adaptive scanning scheme for effective whole field stress separation in digital photoelasticity. Opt Laser Technol 41(1): 25–31. [9] Petrucci G, Restivo G (2007) Automated stress separation along stress trajectories. Exp Mech 47(6): 733–743. [10] Barone S, Patterson EA (1996) Full-field separation of principal stresses by combined thermo- and photoelasticity. Exp Mech 36(4): 318–324. [11] Greene RJ and Patterson EA (2006) An integrated approach to the separation of principal surface stresses using combined thermo-photo-elasticity. Exp Mech 46(1): 19–29. [12] Sakagami T, Kubo S and Fujinami Y (2004) Full-field stress separation using thermoelasticity and photoelasticity and its application to fracture mechanics. JSME Int J Ser A 47(3): 298–304. [13] Nishida M, Saito H (1964) A new interferometric method of two-dimensional stress analysis. 
Exp Mech 4(2): 366–376. [14] Brown GM, Sullivan JL (1990) The computer-aided holophotoelastic method. Exp Mech 30(2) 135–144. [15] Yoneyama S, Morimoto Y, Kawamura M (2005) Two-dimensional stress separation using phase-stepping interferometric photoelasticity. Meas Sci Technol 16(6): 1329–1334.
[16] Lim J and Ravi-Chandar K (2009) Dynamic measurement of two dimensional stress components in birefringent materials. Exp Mech 49(3): 403–416. [17] Chang CW, Chen PH, Lien HS (2009) Separation of photoelastic principal stresses by analytical evaluation and digital image processing. J Mech 25(1): 19–25. [18] Berghaus DG (1991) Combining photoelasticity and finite-element methods for stress analysis using least squares. Exp Mech 31(1): 36–41. [19] Hayabusa K, Inoue H, Kishimoto K, Shibuya T (1999) Inverse analysis related to stress separation in photoelasticity. In: Zhu D, Kikuchi M, Shen Y, Geni M (eds) Progress in experimental and computational mechanics in engineering and materials behavior, Northwestern Polytechnical University Press, Xi’an, pp 319–324. [20] Chen D, Becker AA, Jones IA, Hyde TH, Wang P (2001) Development of new inverse boundary element techniques in photoelasticity. J Strain Anal Eng Des 36(3): 253–264. [21] Dally JW, Riley WF (1991) Experimental stress analysis 3rd ed. McGraw-Hill, New York. [22] Ramesh K, Mangal SK (1998) Data acquisition techniques in digital photoelasticity. Opt Lasers Eng 30(1): 53–75. [23] Ajovalasit A, Barone S, Petrucci G (1998) A review of automated methods for the collection and analysis of photoelastic data. J Strain Anal Eng Des 33(2): 75–91. [24] Patterson EA (2002) Digital photoelasticity: principles, practice and potential. Strain 38(1): 27–39. [25] Fernández MSB Data acquisition techniques in photoelasticity. Exp Tech forthcoming. [26] Barone S, Burriesci G, Petrucci G (2002) Computer aided photoelasticity by an optimum phase stepping method. Exp Mech 42(2): 132–139. [27] Pinit P, Umezaki E (2007) Digitally whole-field analysis of isoclinic parameter in photoelasticity by four-step color phase-shifting technique. Opt Lasers Eng 45(7): 795–807. [28] Yoneyama S, Nakamura K, Kikuta H (2009) Absolute phase analysis of isochromatics and isoclinics using arbitrary retarded retarders with tricolor images. 
Opt Eng 48(12): 123603. [29] Yoneyama S, Kikuta H (2006) Phase-stepping photoelasticity by use of retarders with arbitrary retardation. Exp Mech 46(3): 289–296.
Discussion on hybrid approach to determination of cell elastic properties

M.C. Frassanito, L. Lamberti, A. Boccaccio, C. Pappalettere
Politecnico di Bari, Dipartimento di Ingegneria Meccanica e Gestionale
Viale Japigia 182, Bari, 70126, ITALY
E-mail: [email protected]; [email protected]; [email protected]; [email protected]

ABSTRACT

This study discusses the application of a hybrid experimental-numerical approach to analyze nano-indentation curves of a biological membrane acquired with an Atomic Force Microscope. The proposed procedure combines experimental measurements, FEM analysis and numerical optimization and is completely general. Variations of the estimated Young modulus of the membrane are determined when attributing different constitutive laws to the sample and in the case of progressive blunting of the AFM tip during the measurement. Since traditional analysis of Atomic Force Microscope indentation curves relies on an inappropriate application of the classical Hertz theory, a comparison between the hybrid approach and the Hertzian model in the determination of the elastic properties of the sample is presented. In particular, it is found that large errors occur in the derivation of the Young modulus when the Hertzian model is used for the analysis of experimental data.
1. INTRODUCTION

Many physiological and patho-physiological processes alter the mechanical properties of the biological tissues they affect. It is well known that aging causes deterioration in the mechanical strength of human tissues and that muscles get harder with weight training. The correlation between tissue structural behavior and pathologies manifests itself already at the cellular level, as observed in the case of inflammations, some forms of cancer and cardiac diseases. Mechanical characterization of cells and biological membranes "in vitro", i.e. in their physiological environment, can be performed thanks to recent developments in nanotechnology, which have produced instruments with nanoscale resolution: these make it possible to apply and detect forces and displacements with piconewton (pN) and nanometer sensitivity, respectively. The Atomic Force Microscope (AFM) has recently emerged as a powerful tool to investigate the elastic properties of biological specimens. The AFM was designed to provide high resolution images of the surfaces of nonconductive samples and consists of a very sharp tip mounted at the end of a cantilever that scans the surface of the sample. The 3D topography of the selected area is reconstructed by recording the minute deflections of the cantilever during the scanning procedure. Soon after its invention, the AFM was also used as a nanoindenter to measure the mechanical properties of a sample with nanometric resolution. In this operating mode, the deflection of the cantilever is monitored as a function of the indentation depth of the tip into the sample, and the instrument registers a force-indentation curve. Traditional analysis of AFM indentation curves relies on an inappropriate application of the classical Hertz theory [1,2], with its hypotheses of linear elastic material properties, infinitesimal strains and infinite sample thickness and dimensions. None of these assumptions is likely to be valid when a biological membrane is indented with an AFM.
Most biological materials exhibit nonlinear constitutive behavior. Furthermore, the AFM probe induces large deformations during the indentation process, and the half-space assumption cannot be adapted to thin biomembranes. Previous studies used Finite Element Modeling to simulate AFM indentation curves and evaluated the effect of indentation depth, tip geometry and material nonlinearity on the finite indentation response [3,4]. The current trend in the interpretation of AFM data is to describe the mechanical behavior of cell membranes by means of hyperelastic constitutive relationships and to extract values of the elastic properties of the specimen with the aid of FEM analysis [5,6]. This work proposes the application of a hybrid procedure that combines experimental measurements, FEM analysis and optimization algorithms to analyze AFM indentation curves. The proposed methodology is applied to the determination of the mechanical properties of a biological membrane. In particular, the membrane analyzed in the paper is the Zona Pellucida (ZP), the extracellular coat that surrounds the mammalian oocyte. The limits of applying the Hertzian model to extract the elastic properties of a sample are put in evidence, as well as the errors incurred in the derivation of the Young modulus. The hybrid procedure makes it possible to take into consideration all the parameters involved in the experiment, such as the radius of curvature of the tip, the thickness of the membrane, and the constitutive law of the sample. The variations of the estimated Young modulus of the ZP membrane are determined in the case of attribution of different constitutive laws to the sample and in the case of blunting of the AFM tip during the measurement.

T. Proulx (ed.), Optical Measurements, Modeling, and Metrology, Volume 5, Conference Proceedings of the Society for Experimental Mechanics Series 9999999, DOI 10.1007/978-1-4614-0228-2_16, © The Society for Experimental Mechanics, Inc. 2011
2. FINITE ELEMENT ANALYSIS
The AFM nanoindentation experiments conducted on the membrane were simulated with the ABAQUS® Version 6.7 commercial finite element software [7]. For that purpose, an axisymmetric FE model was developed: the model includes a rigid blunt-conical indenter (tip radius of 10 nm and half-open angle of 20°) pressing against a soft layer adhering to a rigid substrate. The Young modulus of the silicon-nitride AFM tip is 300 GPa. The biomembrane was modeled as an incompressible hyperelastic slab with a diameter of 60 μm and a thickness of 10 μm.
Figure 1. Finite element model simulating the nanoindentation process. The deformation field corresponding to 100 nm indentation is shown in the figure.

Figure 1 shows the finite element model with the rigid indenter and the membrane: the deformation field corresponding to 100 nm indentation is presented. The mesh of the membrane included 69716 four-node bilinear, hybrid CAX4H elements with constant pressure and 70562 nodes. The hybrid pressure-displacement formulation implemented in the chosen element type allowed incompressible behavior to be modeled. A convergence analysis was carried out in order to obtain mesh-independent solutions. The mesh was properly refined in the contact region between the AFM tip and the membrane: the element size there is 0.06 nm, a value that allowed a good compromise between convergence of the nonlinear analysis and computation time. The penetration of the indenter was simulated by progressively increasing the value of the force applied to the AFM tip in the vertical direction: the load transferred by the rigid blunt-conical tip to the membrane generates a state of compression in the soft material of the slab. The bottom and side of the slab were fixed in space, and both the rigid blunt-cone and the axis of symmetry of the slab are permitted to move only in the vertical direction. The finite element analysis accounted for geometric non-linearity (i.e. large deformations), and the automatic time stepping option was selected to facilitate the convergence of the nonlinear analysis. The contact between the indenter and the membrane was assumed to be frictionless; the "hard contact" (i.e. no force is exchanged before the surfaces come into contact) option available in ABAQUS was chosen.

3. HYPERELASTIC MODELS

Three different hyperelastic constitutive models were considered in this study in order to describe the structural behavior of the ZP membrane: (i) the two-parameter Mooney-Rivlin (MR) model; (ii) the Neo-Hookean (NH) model; (iii) the Arruda-Boyce eight-chain (AB) model.
The two-parameter MR constitutive law [8-10] is a classical phenomenological model described by the following strain energy density function:

\[ W = C_{10}\left( I_1 - 3 \right) + C_{01}\left( I_2 - 3 \right), \tag{1} \]

where C10 and C01 are the MR constants given as input to ABAQUS as material properties. The strain invariants are defined, respectively, as I1 = tr[C] and I2 = ½{tr²[C] − tr[C²]}, where [C] is the Cauchy-Green strain tensor. The corresponding uniaxial stress (σ) − stretch (λ) relation can be derived as:

\[ \sigma = 2C_{10}\left( \lambda - \frac{1}{\lambda^2} \right) + 2C_{01}\left( 1 - \frac{1}{\lambda^3} \right). \tag{2} \]
The shear modulus μMR is defined as μMR = 2(C10 + C01), while the Young modulus is equal to:

\[ E_{MR} = 4(1+\nu)(C_{10}+C_{01}). \tag{3} \]

The NH model [9-11] was selected in this study because it is based on the statistical thermodynamics of cross-linked polymer chains. Although this model is not phenomenological, it can nevertheless be derived from the two-parameter MR model by setting C01 = 0. Consequently, only one material parameter must be given as input to ABAQUS. The shear modulus μNH is defined as μNH = 2C10, while the Young modulus is equal to:

\[ E_{NH} = 4(1+\nu)C_{10}. \tag{4} \]
The AB model [12] was previously used in the literature to describe the mechanical behavior of biospecimens including filamentous collagen networks [13,14] and monolayers of endothelial cells [6]. This model relies on the statistical mechanics of a material with a cubic representative volume element containing eight chains along the diagonal directions. The strain hardening behavior of an incompressible material is predicted using two constants: the shear modulus μ8chain and the distensibility λL, where the latter corresponds to the limiting network stretch. The strain energy function can be expressed as:

\[ W = \mu_{8chain}\left[ \frac{1}{2}\left( I_1 - 3 \right) + \frac{1}{20\lambda_L^2}\left( I_1^2 - 9 \right) + \frac{11}{1050\lambda_L^4}\left( I_1^3 - 27 \right) + \frac{19}{7000\lambda_L^6}\left( I_1^4 - 81 \right) + \frac{519}{673{,}750\lambda_L^8}\left( I_1^5 - 243 \right) \right]. \tag{5} \]

The corresponding uniaxial stress (σ) − stretch (λ) relation is:

\[ \sigma = 2\mu_{8chain}\left( \lambda^2 - \frac{1}{\lambda} \right) \left( \frac{1}{2} + \frac{2I_1}{20\lambda_L^2} + \frac{33I_1^2}{1050\lambda_L^4} + \frac{76I_1^3}{7000\lambda_L^6} + \frac{519I_1^4}{673{,}750\lambda_L^8} \right). \tag{6} \]

The Arruda-Boyce model is activated in ABAQUS by giving as input the values of μ8chain and λL as material parameters. The Young modulus is defined as:

\[ E_{8chain} = 2(1+\nu)\mu_{8chain}. \tag{7} \]
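The uniaxial relations (2) and (6) are straightforward to evaluate directly. A minimal sketch, using as defaults the fitted parameter values reported later in Table 1 (in kPa, so stresses come out in kPa); the function names are illustrative, and the Neo-Hookean curve is the Mooney-Rivlin one with C01 = 0:

```python
# Uniaxial stress-stretch relations for the hyperelastic laws above.
# Default parameters are the fitted values reported in Table 1 (kPa).

def sigma_mr(lam, c10=1.575, c01=0.101):
    """Two-parameter Mooney-Rivlin, eq. (2); set c01 = 0 for Neo-Hookean."""
    return 2.0 * c10 * (lam - 1.0 / lam**2) + 2.0 * c01 * (1.0 - 1.0 / lam**3)

def sigma_ab(lam, mu=2.399, lam_l=1.9):
    """Arruda-Boyce eight-chain model, eq. (6); uniaxial I1 = lam^2 + 2/lam."""
    i1 = lam**2 + 2.0 / lam
    series = (0.5 + 2.0 * i1 / (20.0 * lam_l**2)
              + 33.0 * i1**2 / (1050.0 * lam_l**4)
              + 76.0 * i1**3 / (7000.0 * lam_l**6)
              + 519.0 * i1**4 / (673750.0 * lam_l**8))
    return 2.0 * mu * (lam**2 - 1.0 / lam) * series

# Both stresses vanish in the undeformed state (lam = 1), and the
# small-strain slopes reflect the moduli of eqs. (3), (4) and (7).
```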
4. FORMULATION OF THE INVERSE PROBLEM

In order to extract the hyperelastic properties of the ZP membrane more realistically from the FE model described above, which accounts for the non-linearity of the finite indentation process as well as material non-linearity, a hybrid procedure combining experimental measurements, FE analysis and nonlinear optimization was utilized. Displacement values measured experimentally are compared with the corresponding results of the FE analysis. This leads to an optimization problem that includes the unknown material properties as design variables. The optimization problem describing the inverse problem of material characterization can be stated as follows:

\[
\begin{cases}
\min\ \Omega(X_1, X_2, \ldots, X_{NMP}) = \dfrac{1}{N_{CNT}} \displaystyle\sum_{j=1}^{N_{CNT}} \left( \dfrac{\delta_j^{FEM} - \delta_j}{\delta_j} \right)^2 \\
X_1^L \le X_1 \le X_1^U \\
X_2^L \le X_2 \le X_2^U \\
\qquad \vdots \\
X_{NMP-1}^L \le X_{NMP-1} \le X_{NMP-1}^U \\
X_{NMP}^L \le X_{NMP} \le X_{NMP}^U
\end{cases} \tag{8}
\]

where Ω is the error functional to be minimized. The design vector X(X1, X2, …, XNMP) includes the NMP unknown material properties to be determined, each of which can vary between lower and upper bounds. In Eq. (8), δjFEM and δj, respectively, are the displacement values for the j-th load step computed with the FE analysis and those measured experimentally with the AFM. The number of control locations NCNT is equal to the number of load steps to
complete the nonlinear FE analysis. Nanoindentation values measured experimentally can be taken as target values in the identification problem because the execution of AFM measurements does not require any a priori knowledge of material properties. Conversely, the "correct material properties", i.e. the actual material properties, must be given as input to the FE model to obtain the force-indentation curve that matches the F-δ curve determined experimentally. Theoretically, the error functional Ω computed at the optimum design (i.e. the target material properties) will be equal to 0. However, since target values are measured experimentally, there will be a residual deviation between the force-indentation curve reconstructed numerically and the actual F-δ curve measured with AFM. The suitability of the optimization-based approach for mechanical characterization problems of nonlinear materials is well documented in the literature [15,16]. The inverse problem (8) was solved with the Sequential Quadratic Programming (SQP) method, a gradient-based optimization algorithm that has the property of global convergence: SQP satisfies the necessary Kuhn-Tucker optimality conditions regardless of the initial point from which the optimization process is started [17]. SQP is widely considered the most efficient gradient-based optimization method. The SQP optimization routine implemented in the commercial general mathematics software MATLAB® Version 7.0 was utilized [18]. The finite element solver of ABAQUS was interfaced with the SQP optimization routine of MATLAB, which processed the results of the FE analysis, compared the numerical F-δ curve with the experimental data, computed the error functional Ω, and perturbed the material parameters for the subsequent design cycles.

5. RESULTS AND DISCUSSION

The elastic properties of the ZP membranes extracted from mature oocytes were evaluated applying both the Hertz model and the hybrid procedure described previously.
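As a minimal numerical illustration of the inverse problem of Eq. (8): scipy's SLSQP routine stands in below for the MATLAB SQP solver, and a closed-form Mooney-Rivlin uniaxial curve (Eq. (2)) stands in for the ABAQUS force-indentation analysis; the "experimental" target is synthesized from assumed parameters, so this is a sketch of the loop, not the authors' implementation.

```python
import numpy as np
from scipy.optimize import minimize

lam = np.linspace(1.02, 1.30, 15)    # control points ("load steps")

def model(x):
    """Mooney-Rivlin uniaxial curve, eq. (2); stands in for an FE run."""
    c10, c01 = x
    return 2.0 * c10 * (lam - 1.0 / lam**2) + 2.0 * c01 * (1.0 - 1.0 / lam**3)

# Synthetic "measured" data from assumed parameters (kPa):
target = model([1.575, 0.101])

def omega(x):
    """Error functional of eq. (8): mean squared relative deviation."""
    return np.mean(((model(x) - target) / target) ** 2)

# Side constraints X_L <= X <= X_U enter as bounds, as in eq. (8).
res = minimize(omega, x0=[1.0, 0.5], method="SLSQP",
               bounds=[(0.01, 10.0), (0.0, 10.0)])
```

In the actual procedure, each evaluation of `omega` would trigger a full nonlinear ABAQUS run rather than a closed-form curve, which is what makes the gradient-based SQP solver's efficiency important.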
An indentation range of 100 nm was considered, as in this interval the hypothesis of infinitesimal strain could still be considered valid. First, the indentation curves were examined with the modified Hertzian model for the conical indenter and, from the analysis of the experimental data recorded at 50 different points of each sample, the following Young modulus was derived: EHertz = 18.5 ± 1.58 kPa.
Figure 2. Force-indentation curves acquired experimentally on ZP of mature oocytes (symbols) and the corresponding numerical curves obtained with the hybrid procedure (solid lines). Three different constitutive models for the membrane were considered: a) Neo-Hookean model; b) two-parameter Mooney-Rivlin model; c) Arruda-Boyce eight-chain model.
The elastic parameters of the ZP membrane were then extracted by applying the hybrid procedure. This approach is completely general, and the derived elastic properties can therefore be considered more reliable. Figure 2 compares the force-indentation curves acquired experimentally on the sample with the corresponding curves obtained with the optimization algorithm. Three different constitutive models for the membrane were considered: the NH, the MR and the AB. The corresponding Young moduli of the sample, calculated from Eqs. (3), (4) and (7), respectively for the Neo-Hookean, Mooney-Rivlin and Arruda-Boyce models, are ENH=9.73 kPa, EMR=8.88 kPa and EAB=6.4 kPa. These values are considerably smaller (by a factor of two or three) than the Young modulus extracted with the Hertzian model. Hence, large errors are incurred in the estimation of the elastic modulus when a linear elastic model is used to fit the data, even for small indentation ranges. This result is consistent with the findings of other authors who conducted FE studies on the indentation of materials with a hyperelastic mechanical behavior [5,6]. The fit between the experimental data and the model was evaluated by means of the coefficient of correlation R2. Table 1 summarizes the R2 values calculated for the three hyperelastic laws and for the Hertzian model. The best fit of the experimental curve is obtained with the Arruda-Boyce model. Therefore, it can be concluded that this constitutive law is the most appropriate to describe the mechanical behavior of the ZP membrane.

Table 1. Fitting parameters and Young modulus of ZP isolated from mature oocytes, attributing linear elastic and hyperelastic constitutive behavior to the membrane. R2: correlation coefficient.

Model          Fitting parameters                   Young modulus E (kPa)   R2
Hertzian       E = 18.5 kPa                         18.5                    0.993
Neo-Hookean    C10 = 1.836 kPa                      9.73                    0.996
Mooney-Rivlin  C10 = 1.575 kPa; C01 = 0.101 kPa     8.88                    0.997
Arruda-Boyce   μ8chain = 2.399 kPa; λL = 1.9        6.4                     0.998
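The R2 values in Table 1 quantify how closely each fitted model reproduces the measured force-indentation curve. A minimal sketch of how such a coefficient of determination can be computed is given below; the indentation data here are synthetic and purely illustrative, not the experimental curves of this study.

```python
import numpy as np

def r_squared(f_exp, f_model):
    """Coefficient of determination: R^2 = 1 - SS_res / SS_tot."""
    ss_res = np.sum((f_exp - f_model) ** 2)          # residual sum of squares
    ss_tot = np.sum((f_exp - np.mean(f_exp)) ** 2)   # total sum of squares
    return 1.0 - ss_res / ss_tot

# Hypothetical force-indentation data and a model curve close to it
delta = np.linspace(0.0, 200.0, 50)                  # indentation depth (nm)
f_model = 1e-3 * delta ** 1.5                        # fitted model prediction
f_exp = f_model + np.random.default_rng(0).normal(0.0, 0.05, 50)

print(round(r_squared(f_exp, f_model), 3))
```

A perfect fit gives exactly R2 = 1; values such as 0.993-0.998 in Table 1 indicate that all four models track the data closely, so the ranking between them rests on small residual differences.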
After having evaluated how the Young modulus varies when different constitutive laws are attributed to the sample, we considered the influence of the tip radius on the determination of the Young modulus. The tip may be blunted, for example, during a scan of the sample. Figure 3 shows the variation of the derived shear modulus with the radius of curvature of the tip, in the case of the Arruda-Boyce hyperelastic constitutive behavior. When the radius of the tip increases from 10 nm to 50 nm, the corresponding shear modulus is halved. Therefore, in case of blunting of the tip during an AFM measurement, a considerable error can be induced in the estimation of the Young modulus.
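The sensitivity to the assumed tip radius can be illustrated even with the simple Hertzian sphere model [1], F = (4/3) E* sqrt(R) δ^(3/2): refitting the same force-indentation data with a larger assumed radius scales the derived modulus by sqrt(R_true/R_assumed). The sketch below uses hypothetical values (10 kPa sample, 10 nm tip) and is only an illustration of the trend, not a reproduction of the Arruda-Boyce result in Figure 3.

```python
import numpy as np

def hertz_force(delta, E_star, R):
    """Hertzian contact force of a sphere: F = (4/3) E* sqrt(R) delta^(3/2)."""
    return (4.0 / 3.0) * E_star * np.sqrt(R) * delta ** 1.5

def fit_modulus(delta, force, R_assumed):
    """Least-squares fit of E* for an assumed tip radius (linear in E*)."""
    basis = (4.0 / 3.0) * np.sqrt(R_assumed) * delta ** 1.5
    return np.sum(basis * force) / np.sum(basis ** 2)

# Synthetic data generated with a "true" tip radius of 10 nm
delta = np.linspace(1e-9, 50e-9, 100)   # indentation depth (m)
R_true = 10e-9                          # m
E_true = 10e3                           # Pa, soft biological sample (assumed)
force = hertz_force(delta, E_true, R_true)

# Refit the same data while assuming a blunted 50 nm tip
E_blunt = fit_modulus(delta, force, 50e-9)
print(E_blunt / E_true)   # sqrt(10/50) ~ 0.45: modulus roughly halved
```

Within this simplified model, a 10 nm to 50 nm radius error reduces the derived modulus by a factor of about 1/sqrt(5), consistent in magnitude with the halving of the shear modulus observed in Figure 3.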
Figure 3. Variation of the shear modulus with the radius of curvature of the tip when attributing the Arruda-Boyce hyperelastic constitutive law to the membrane.

6. CONCLUSIONS
This study proposed the application of a hybrid procedure which combines experimental measurements, FEM analysis and nonlinear optimization to analyze the nanoindentation curves acquired with an AFM on a biological membrane. A comparison with the Hertzian model in the determination of the elastic properties of the sample is presented. In particular, large errors are found to occur in the derivation of the Young modulus when the Hertzian model is applied to analyze the experimental data. The hybrid procedure is completely general, as it takes into consideration all the parameters involved in the experiment, such as the radius of curvature of the tip, the thickness of the membrane and the
constitutive law of the sample. The variations of the estimated Young modulus of the membrane are determined when attributing different constitutive laws to the sample and in the case of blunting of the tip during the measurement.

References
[1] Hertz H. On the contact of elastic solids. Journal für die Reine und Angewandte Mathematik, 92, 156-171, 1881.
[2] Sneddon I.N. The relation between load and penetration in the axisymmetric Boussinesq problem for a punch of arbitrary profile. International Journal of Engineering Science, 3, 47-57, 1965.
[3] Costa K.D., Yin F.C.P. Analysis of indentation: implications for measuring mechanical properties with atomic force microscopy. Journal of Biomechanical Engineering, 121, 462-471, 1999.
[4] Costa K.D., Sim A.J., Yin F.C.P. Non-Hertzian approach to analyzing mechanical properties of endothelial cells probed by atomic force microscopy. Journal of Biomechanical Engineering, 128, 176-184, 2006.
[5] Lin D.C., Shreiber D.I., Dimitriadis E.K., Horkay F. Spherical indentation of soft matter beyond the Hertzian regime: numerical and experimental validation of hyperelastic models. Biomechanics and Modeling in Mechanobiology, 8, 345-358, 2009.
[6] Kang I., Panneerselvam D., Panoskaltsis V.P., Eppel S.J., Marchant R.E., Doerschuk C.M. Changes in the hyperelastic properties of endothelial cells induced by tumor necrosis factor-α. Biophysical Journal, 94, 3273-3285, 2008.
[7] Dassault Systèmes, 2007. ABAQUS Version 6.7. Theory and User's Manual. www.simulia.com
[8] Mooney M. A theory of large elastic deformation. Journal of Applied Physics, 11, 582-592, 1940.
[9] Rivlin R.S. Large elastic deformations of isotropic materials I. Fundamental concepts. Philosophical Transactions of the Royal Society of London, A240, 459-490, 1948.
[10] Rivlin R.S. Large elastic deformations of isotropic materials IV. Further developments of the general theory. Philosophical Transactions of the Royal Society of London, A241, 379-397, 1948.
[11] Treloar L.R.G. The Physics of Rubber Elasticity, 3rd Edn. Oxford University Press, Oxford (UK), 1975.
[12] Arruda E.M., Boyce M.C. A three dimensional constitutive model for the large stretch behavior of rubber elastic materials. Journal of the Mechanics and Physics of Solids, 41, 389-412, 1993.
[13] Bischoff J.E., Arruda E.M., Grosh K. Finite element modeling of human skin using an isotropic, nonlinear elastic constitutive model. Journal of Biomechanics, 33, 645-652, 2000.
[14] Palmer J.S., Boyce M.C. Constitutive modeling of the stress-strain behavior of F-actin filament networks. Acta Biomaterialia, 4, 597-612, 2008.
[15] Cosola E., Genovese K., Lamberti L., Pappalettere C. Mechanical characterization of biological membranes with moiré techniques and multi-point simulated annealing. Experimental Mechanics, 48, 465-478, 2008.
[16] Cosola E., Genovese K., Lamberti L., Pappalettere C. Mechanical characterization of biological membranes with moiré techniques and multi-point simulated annealing. International Journal of Solids and Structures, 45, 6074-6099, 2008.
[17] Rao S.S. Engineering Optimization. John Wiley and Sons, New York (USA), 1996.
[18] The MathWorks, MATLAB® Version 7.0. Austin (TX), 2006. http://www.mathworks.com
Mesh Refinement for Inverse Problems with Finite Element Models Antti H. Huhtala∗ and Sven Bossuyt† ∗ Aalto University, Department of Mathematics and Systems Analysis, P.O. Box 11100, FI-00076 Helsinki, Finland † Aalto University, Department of Engineering Design and Production, P.O. Box 14200, FI-00076 Helsinki, Finland
Abstract Many inverse problems arising in experimental mechanics involve solutions to partial differential equations in the forward problem, typically using finite element methods for those solutions. Given that iterative solutions to the inverse problem then involve repeated evaluations of the finite element model, it is useful to carefully consider the mesh to be used and its effect on the trade-off between accuracy and computational cost of the solution. We show that approximation theory can be applied directly to the inverse problem, not merely to the finite element model contained in the forward problem, to give bounds for the error made by using a given mesh to approximate the solution to the partial differential equation. Adaptive mesh refinement makes it possible to focus computational effort on goals set by the quantities of interest in the inverse problem, rather than on the overall accuracy of the solution to the forward problem.
1 Introduction
Inverse problems —in the sense of problems that, by Hadamard’s criteria[1], are ill-posed whereas the corresponding forward problem is well-posed— occur in many different guises in experimental mechanics, although they are not always recognized as such. For example, the whole family of full-field measurements, including methods based on photoelasticity, moire fringes, interferometry, digital image correlation, et cetera, represent ill-posed inverse problems of determining a continuum field variable from a large but finite set of measurements. Conversely, the name inverse problems is widely used in experimental mechanics to refer to parameter identification problems which are not necessarily ill-posed. Determining linear elastic material parameters from mechanical tests with full-field measurements is well-posed if the experiment is not too poorly designed, but using those same measurements to characterize nonlinear constitutive behaviour such as elastoplastic deformation[2] can be ill-posed. The inherent ill-posedness of full-field measurements is typically resolved without much fuss by implicitly or explicitly taking the spatial resolution of the measurement method into account in the conceptual definition of the field that is measured. Often, the inverse problem is not even considered at all, and the device is simply assumed to provide a means for sampling the continuum field. Nevertheless, there is a growing awareness that conversions between different representations of a continuum field can be problematic[3], and that there are significant advantages to considering the entire measurement process in the inverse methods[4, 5]. Often, the forward problems involve the solution of elliptic partial differential equations, such as the equations of continuum mechanics or the Fourier heat flow equation. 
These give rise to a particular form of ill-posedness related to Saint-Venant's principle: with increasing distance between load and response, the effects of local details in the loading become increasingly difficult to discern. It is therefore pointless to try to extract too much detail with an inverse method, and it makes sense to search instead for a regularized inverse of the forward problem. For numerical calculations, the forward problem is typically discretized using a finite element approximation, which raises the question: if it is not useful to extract fine details from the calculation, is it necessary to calculate those details at all? In other words, does refining the finite element mesh have the same effect on the accuracy of the solution to the inverse problem as on the accuracy of the solution to the forward problem?
2 Forward problem
Using the terminology of heat diffusion, the problem we consider is that of reconstructing the heat sources based on a number of temperature measurements. This problem might arise when using infrared thermography for non-destructive testing, or in postprocessing thermoelastic stress analysis data[6]. More importantly, the diffusion equation with different terminology describes many different physical processes. The heat equation serves here as a prototype elliptic partial differential equation; the methods used and the resulting conclusions will be more generally applicable.
T. Proulx (ed.), Optical Measurements, Modeling, and Metrology, Volume 5, Conference Proceedings of the Society for Experimental Mechanics Series, DOI 10.1007/978-1-4614-0228-2_17, © The Society for Experimental Mechanics, Inc. 2011
Our model for the transfer of heat is Poisson's equation
\[
-\nabla \cdot (\sigma \nabla u) = f \quad \text{in } \Omega, \qquad u = 0 \quad \text{on } \partial\Omega, \tag{1}
\]
where $f$ is the heat source field, $\sigma$ is the thermal conductivity and $u$ is the temperature field. With suitable additional assumptions [7, 8], there exists a unique solution to this problem for each $f \in L^2(\Omega)$. Let $K^{-1}$ be the solution operator to the problem, such that $u = K^{-1} f$. Let $K_h^{-1}$ be the finite element solution operator with mesh density parameter $h$, and let the corresponding FE solution be denoted $u_h = K_h^{-1} f$. Assuming full regularity and a first order FE approximation, the standard finite element $H^1$ error estimate then gives
\[
\|(K^{-1} - K_h^{-1}) f\|_{H^1(\Omega)} = \|u - u_h\|_{H^1(\Omega)} \le C_1 h \|f\|_{L^2(\Omega)}, \tag{2}
\]
and the Aubin-Nitsche lemma gives
\[
\|(K^{-1} - K_h^{-1}) f\|_{L^2(\Omega)} = \|u - u_h\|_{L^2(\Omega)} \le C_2 h \|u - u_h\|_{H^1(\Omega)} \le C_3 h^2 \|f\|_{L^2(\Omega)}. \tag{3}
\]
The temperature measurements are modeled as linear functionals on the temperature field. That is, the measurement is a vector $m$ such that
\[
m = \begin{pmatrix} h_1(u) \\ \vdots \\ h_N(u) \end{pmatrix} = H u = H K^{-1} f. \tag{4}
\]
For technical reasons, the measurement functionals are assumed to be square integrable, i.e. $h_1, \dots, h_N \in L^2(\Omega)$.
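The convergence orders in estimates (2) and (3), O(h) in H1 and O(h2) in L2, can be observed on a minimal one-dimensional analogue of the forward problem. The sketch below solves -u'' = f on (0,1) with homogeneous Dirichlet data using linear elements with a lumped load vector (which coincides with the standard 3-point finite difference scheme) and checks the second-order behavior of the nodal error; the manufactured solution u = sin(pi x) is an assumption for illustration only.

```python
import numpy as np

def solve_poisson_1d(n):
    """Linear-element solution of -u'' = f on (0,1), u(0) = u(1) = 0.

    With a lumped load vector b_i = h f(x_i), the linear-element
    stiffness system coincides with the 3-point finite difference scheme.
    """
    h = 1.0 / n
    x = np.linspace(0.0, 1.0, n + 1)
    f = np.pi ** 2 * np.sin(np.pi * x)      # source chosen so u = sin(pi x)
    # Tridiagonal stiffness matrix for the interior nodes
    K = (np.diag(2.0 * np.ones(n - 1))
         - np.diag(np.ones(n - 2), 1)
         - np.diag(np.ones(n - 2), -1)) / h
    u = np.zeros(n + 1)
    u[1:-1] = np.linalg.solve(K, h * f[1:-1])
    return x, u

def max_error(n):
    x, u = solve_poisson_1d(n)
    return np.max(np.abs(u - np.sin(np.pi * x)))

# Halving h should divide the nodal error by ~4 (second order)
e16, e32 = max_error(16), max_error(32)
print(e16 / e32)   # ~ 4
```

The observed error ratio of about 4 when h is halved mirrors the h^2 factor in estimate (3).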
3 Inverse problem
In the inverse problem, a reconstruction $f^r$ of the heat source field $f$ is determined from a measurement $m$. The relation in equation (4) is not uniquely invertible, but a reconstruction is possible with additional assumptions. Often in practice there is some a priori knowledge of the expectation value of $f$, as well as of its covariance operator. The expectation value of $f$ is denoted by $\bar f$, and the covariance operator by $b(\cdot,\cdot)$. In addition, since we are interested in enforcing the smoothness of the reconstruction, it will be sought from the Sobolev space $H^1(\Omega)$. With these assumptions, for instance by using the method of statistical inversion or generalized Tikhonov regularization, a reconstruction can be obtained as the solution of the following minimization problem [9]:
\[
f^r = \operatorname*{argmin}_{f \in H^1(\Omega)} \left\{ \|H K^{-1} f - m\|^2 + b(f - \bar f, f - \bar f) \right\}. \tag{5}
\]
The term $b(f - \bar f, f - \bar f)$ in the minimization hence enforces that $f$ is close to $\bar f$ and that the difference $f - \bar f$ is smooth. This minimization problem can equivalently be written in variational form as: find $f^r \in H^1(\Omega)$ such that
\[
a(f^r, g) = l(g) \quad \forall g \in H^1(\Omega), \tag{6}
\]
where
\[
a(f, g) = (H K^{-1} f)^T (H K^{-1} g) + b(f, g) \tag{7}
\]
and
\[
l(g) = m^T (H K^{-1} g) + b(\bar f, g). \tag{8}
\]
Given that the covariance operator $b(\cdot,\cdot)$ is (i) continuous and (ii) coercive in $H^1(\Omega)$:
\[
\text{(i)} \quad |b(f, g)| \le \gamma \|f\|_{H^1(\Omega)} \|g\|_{H^1(\Omega)} \quad \forall f, g \in H^1(\Omega), \tag{9}
\]
\[
\text{(ii)} \quad b(f, f) \ge \alpha \|f\|_{H^1(\Omega)}^2 \quad \forall f \in H^1(\Omega), \tag{10}
\]
it follows that the bilinear form $a(\cdot,\cdot)$ is continuous and coercive. Due to the Lax-Milgram lemma, the problem in equation (6) therefore has a unique solution [7, 8]. However, because both $a(\cdot,\cdot)$ and $l(\cdot)$ contain the solution operator $K^{-1}$ of the forward problem, they cannot be computed with finite resources. To obtain a computable problem close to the original one, we modify $a(\cdot,\cdot)$ and $l(\cdot)$ to use the FE solution operator $K_h^{-1}$ instead. The modified problem is then: find $\hat f^r \in H^1(\Omega)$ such that
\[
\hat a(\hat f^r, g) = \hat l(g) \quad \forall g \in H^1(\Omega), \tag{11}
\]
where
\[
\hat a(f, g) = (H K_h^{-1} f)^T (H K_h^{-1} g) + b(f, g) \tag{12}
\]
and
\[
\hat l(g) = m^T (H K_h^{-1} g) + b(\bar f, g). \tag{13}
\]
Like $a(\cdot,\cdot)$, the bilinear form $\hat a(\cdot,\cdot)$ is also continuous and coercive in the space $H^1(\Omega)$, and again by the Lax-Milgram lemma the variational problem in equation (11) has a unique solution. This problem can then be discretized using Galerkin's method [7], by choosing a suitable finite subspace $F_h \subset H^1(\Omega)$ and solving the variational equation in that space: find $\hat f^r_h \in F_h$ such that
\[
\hat a(\hat f^r_h, g) = \hat l(g) \quad \forall g \in F_h. \tag{14}
\]
For practical reasons, we choose to build our space $F_h$ on the same mesh, using the same polynomial order in the shape functions, as in the finite element solution of the forward problem.
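Once a basis of F_h is fixed, the Galerkin problem (14) becomes a finite linear system: with A a matrix representing the discrete measurement-forward operator (the role played by H K_h^{-1}) and B the Gram matrix of b(.,.), the reconstruction solves the regularized normal equations. The sketch below uses random stand-in matrices rather than an actual FE assembly, so A and B here are assumptions for illustration.

```python
import numpy as np

def reconstruct(A, B, m, f_bar):
    """Discrete analogue of problem (14):
    solve (A^T A + B) f = A^T m + B f_bar,
    i.e. argmin ||A f - m||^2 + (f - f_bar)^T B (f - f_bar)."""
    return np.linalg.solve(A.T @ A + B, A.T @ m + B @ f_bar)

rng = np.random.default_rng(1)
n_dof, n_meas = 40, 8                    # more unknowns than measurements
A = rng.normal(size=(n_meas, n_dof))     # stand-in for the discrete H K_h^{-1}
B = 1e-3 * np.eye(n_dof)                 # stand-in SPD regularization Gram matrix
f_true = rng.normal(size=n_dof)
m = A @ f_true                           # simulated measurement vector
f_rec = reconstruct(A, B, m, np.zeros(n_dof))

# The regularized reconstruction reproduces the measurements closely
print(np.linalg.norm(A @ f_rec - m) / np.linalg.norm(m))
```

Because n_meas < n_dof, the data alone do not determine f; the term B selects, among all fields consistent with m, the one favored by the prior, which is exactly the role of b(.,.) in (5).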
4 Error Analysis
The error of the approximate solution $\hat f^r_h$ can be broken down into two components:
\[
\|f^r - \hat f^r_h\| = \|f^r - \hat f^r + \hat f^r - \hat f^r_h\| \le \|f^r - \hat f^r\| + \|\hat f^r - \hat f^r_h\|. \tag{15}
\]
The first part, the term $\|f^r - \hat f^r\|$, is due to using the FE approximation of the forward problem instead of the exact one. We refer to this error source as the consistency error. The second part, $\|\hat f^r - \hat f^r_h\|$, is the discretization error of seeking the solution only from a finite subspace. To estimate the consistency error, we define two operators
\[
E_a(f, g) = a(f, g) - \hat a(f, g) \quad \forall f, g \in H^1(\Omega) \tag{16}
\]
and
\[
E_l(g) = l(g) - \hat l(g) \quad \forall g \in H^1(\Omega). \tag{17}
\]
The operator $E_l(\cdot)$ can be estimated as
\[
|E_l(g)| = |l(g) - \hat l(g)| = |m^T (H (K^{-1} - K_h^{-1}) g)| \le \|m\|_{\mathbb{R}^N} \|H (K^{-1} - K_h^{-1}) g\| \le \|m\|_{\mathbb{R}^N} \|H\|_{L^2(\Omega) \to \mathbb{R}^N} \|K^{-1} g - K_h^{-1} g\|_{L^2(\Omega)} \le C_4 h^2 \|m\|_{\mathbb{R}^N} \|g\|_{L^2(\Omega)}, \tag{18}
\]
where the last inequality is due to the Aubin-Nitsche $L^2$ error estimate of the finite element solution (3). The operator $E_a(\cdot,\cdot)$ can be estimated similarly to give
\[
|E_a(f, g)| \le C_5 h^2 \|f\|_{L^2(\Omega)} \|g\|_{L^2(\Omega)}. \tag{19}
\]
Using the definitions of $f^r$ and $\hat f^r$, we see that for any $g \in H^1(\Omega)$ it holds that
\[
\hat a(f^r - \hat f^r, g) = \hat a(f^r, g) - \hat a(\hat f^r, g) = a(f^r, g) - E_a(f^r, g) - \hat a(\hat f^r, g) = l(g) - E_a(f^r, g) - \hat l(g) = E_l(g) - E_a(f^r, g). \tag{20}
\]
Then, due to the coerciveness of $\hat a(\cdot,\cdot)$ and the estimates for $E_l(\cdot)$ and $E_a(\cdot,\cdot)$, we can estimate the consistency error as
\[
\alpha \|f^r - \hat f^r\|_{H^1(\Omega)}^2 \le |\hat a(f^r - \hat f^r, f^r - \hat f^r)| = |E_l(f^r - \hat f^r) - E_a(f^r, f^r - \hat f^r)| \le |E_l(f^r - \hat f^r)| + |E_a(f^r, f^r - \hat f^r)| \le C_4 h^2 \|m\|_{\mathbb{R}^N} \|f^r - \hat f^r\|_{L^2(\Omega)} + C_5 h^2 \|f^r\|_{L^2(\Omega)} \|f^r - \hat f^r\|_{L^2(\Omega)}. \tag{21}
\]
Combining the constants and estimating $\|f^r - \hat f^r\|_{L^2(\Omega)}$ with a stronger norm gives
\[
\alpha \|f^r - \hat f^r\|_{H^1(\Omega)}^2 \le C_6 h^2 \left( \|m\|_{\mathbb{R}^N} + \|f^r\|_{L^2(\Omega)} \right) \|f^r - \hat f^r\|_{H^1(\Omega)}, \tag{22}
\]
which finally gives
\[
\|f^r - \hat f^r\|_{H^1(\Omega)} \le C_7 h^2 \left( \|m\|_{\mathbb{R}^N} + \|f^r\|_{L^2(\Omega)} \right). \tag{23}
\]
To estimate the discretization error, we note that since $\hat a(\cdot,\cdot)$ is continuous and coercive in the space $H^1(\Omega)$, Céa's lemma [7, 8] holds true. It states that, up to a constant factor $C_8$, the discrete solution $\hat f^r_h$ is the best approximation of $\hat f^r$, i.e. for any $g \in F_h$
\[
\|\hat f^r - \hat f^r_h\|_{H^1(\Omega)} \le C_8 \|\hat f^r - g\|_{H^1(\Omega)}. \tag{24}
\]
Since we want to compare the error to the norm of $f^r$ rather than $\hat f^r$, we write
\[
\|\hat f^r - \hat f^r_h\|_{H^1(\Omega)} \le C_8 \|\hat f^r - g\|_{H^1(\Omega)} = C_8 \|\hat f^r - f^r + f^r - g\|_{H^1(\Omega)} \le C_8 \left( \|f^r - \hat f^r\|_{H^1(\Omega)} + \|f^r - g\|_{H^1(\Omega)} \right). \tag{25}
\]
Now, since we assume that $f^r$ has full regularity and because $F_h$ is a first order finite element space, there exists an interpolating function $g \in F_h$ such that
\[
\|f^r - g\|_{H^1(\Omega)} \le C_9 h \|f^r\|_{H^2(\Omega)}. \tag{26}
\]
In addition, we can estimate the term $\|f^r - \hat f^r\|_{H^1(\Omega)}$ with the consistency error estimate in equation (23), to obtain
\[
\|\hat f^r - \hat f^r_h\|_{H^1(\Omega)} \le C_8 C_7 h^2 \left( \|f^r\|_{L^2(\Omega)} + \|m\|_{\mathbb{R}^N} \right) + C_8 C_9 h \|f^r\|_{H^2(\Omega)}. \tag{27}
\]
Combining the constants and assuming $h < 1$ gives
\[
\|\hat f^r - \hat f^r_h\|_{H^1(\Omega)} \le C_{10} h \|f^r\|_{H^2(\Omega)}. \tag{28}
\]
An $L^2$ error estimate follows from the Aubin-Nitsche lemma [7] applied to the variational problem in equation (11). It then holds that
\[
\|\hat f^r - \hat f^r_h\|_{L^2(\Omega)} \le C_{11} h^2 \|f^r\|_{H^2(\Omega)}. \tag{29}
\]
Combining the consistency error and the discretization error, we get the following estimates:
\[
\|f^r - \hat f^r_h\|_{H^1(\Omega)} \le \|f^r - \hat f^r\|_{H^1(\Omega)} + \|\hat f^r - \hat f^r_h\|_{H^1(\Omega)} \le C_7 h^2 \left( \|m\|_{\mathbb{R}^N} + \|f^r\|_{L^2(\Omega)} \right) + C_{10} h \|f^r\|_{H^2(\Omega)} \le C_{12} h \|f^r\|_{H^2(\Omega)} \tag{30}
\]
and
\[
\|f^r - \hat f^r_h\|_{L^2(\Omega)} \le \|f^r - \hat f^r\|_{L^2(\Omega)} + \|\hat f^r - \hat f^r_h\|_{L^2(\Omega)} \le \|f^r - \hat f^r\|_{H^1(\Omega)} + \|\hat f^r - \hat f^r_h\|_{L^2(\Omega)} \le C_7 h^2 \left( \|m\|_{\mathbb{R}^N} + \|f^r\|_{L^2(\Omega)} \right) + C_{11} h^2 \|f^r\|_{H^2(\Omega)} \le C_{13} h^2 \left( \|m\|_{\mathbb{R}^N} + \|f^r\|_{H^2(\Omega)} \right). \tag{31}
\]

5 Numerical experiments
Numerical tests were run in the rectangular domain $\Omega = (0,4) \times (0,1)$. The measurements were average temperatures over discs of radius 0.1 (the averaging ensures that the measurement functionals are $L^2$-continuous), scattered on a uniform 16×4 grid. The measurement data vector $m$ was simulated on a highly refined mesh using a smooth temperature source field. The bilinear form $b(\cdot,\cdot)$ was chosen as
\[
b(f, g) = 10^{-3} \int_\Omega f(x)\, g(x)\, dx. \tag{32}
\]
Figure 1 shows three numerical solutions of this problem with different mesh resolutions. The larger features are already present on the coarsest mesh, whereas small details first appear on the second mesh and are further refined on the densest mesh. The behavior of the $H^1$ and $L^2$ errors against the mesh density parameter $h$ is shown in figure 2. Two straight lines are fitted to the individual computations to derive the convergence rates. For the $L^2$ error, we see convergence of order $h^{1.87}$, which is reasonably close to the $h^2$ that the analysis predicted. The $H^1$ error also shows a convergence rate quite close to the predicted rate. These errors were computed against a highly refined numerical solution, which acted as the exact solution.
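The reported rates (1.87 for L2, 0.97 for H1) come from straight-line fits in log-log coordinates: if err ~ C h^p, then log(err) = log(C) + p log(h), and the slope of the fit estimates p. A minimal sketch, with a hypothetical error sequence standing in for the computed errors:

```python
import numpy as np

def convergence_rate(h, err):
    """Slope of a straight-line fit of log(err) versus log(h)."""
    slope, _ = np.polyfit(np.log(h), np.log(err), 1)
    return slope

# Hypothetical error sequence behaving exactly like C * h^2
h = np.array([0.2, 0.1, 0.05, 0.025])
err = 3.0 * h ** 2
print(round(convergence_rate(h, err), 2))   # → 2.0
```

On real data the fitted slope deviates from the asymptotic rate, as in the observed 1.87 against the predicted 2, because the coarsest meshes are still in the pre-asymptotic regime.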
Figure 1: Approximate solutions of the inverse problem with an increasing mesh resolution
[Figure 2 plot: $L^2$ error, slope = 1.87; $H^1$ error, slope = 0.97; error in norm versus mesh size $h$ on log-log axes]
Figure 2: L2 and H 1 errors of the approximate solution with different values of h. The convergence rates correspond well with the rates estimated a priori.
6 Conclusions
By formulating the inverse problem as a minimization problem similar to the weak form of the finite element problem, the a priori error analysis of the inverse method can be carried out similarly to the error analysis for finite element methods. Errors arise both due to discretizing the quantity of interest and due to using a FE approximation of the forward problem. Using the same approximation order, the consistency error —i.e., the error due to replacing the exact forward problem with a finite element approximation— is at most the same order of magnitude as the inverse problem discretization error. In fact, in the “natural” H 1 norm, it is a full order of magnitude smaller. This suggests that the accuracy of the forward problem is not as critical as usually thought. One should choose the discretization with just enough resolution so that expected features of the reconstruction can be represented.
7 Acknowledgements
This research is funded by the ISMO project of the Multidisciplinary Institute for Digitalisation and Energy (MIDE) of Aalto University and by the Academy of Finland. The authors are grateful to Antti Hannukainen for contributing to the error analysis.
References
[1] Hadamard, J. Sur les problèmes aux dérivées partielles et leur signification physique. Princeton University Bulletin 13, 49–52 (1902).
[2] Cooreman, S., Lecompte, D., Sol, H., Vantomme, J. & Debruyne, D. Identification of mechanical material behavior through inverse modeling and DIC. Experimental Mechanics 48, 421–433 (2008).
[3] Grédiac, M., Pierron, F., Avril, S. & Toussaint, E. The virtual fields method for extracting constitutive parameters from full-field measurements: A review. Strain 42, 233–253 (2006).
[4] Roux, S. & Hild, F. Digital image mechanical identification (DIMI). Experimental Mechanics 48, 495–508 (2008).
[5] Belkassem, B., Bossuyt, S. & Sol, H. Enhanced handshaking between DIC and FE computed deformation fields in an inverse method. In SEM 2009 Annual Conference & Exposition on Experimental & Applied Mechanics (2009).
[6] Van Hemelrijck, D., Schillemans, L., Cardon, A. H. & Wong, A. The effects of motion on thermoelastic stress analysis. Composite Structures 18, 221–238 (1991).
[7] Braess, D. Finite elements: Theory, fast solvers, and applications in elasticity theory (Cambridge University Press, Cambridge, 2007).
[8] Johnson, C. Numerical solution of partial differential equations by the finite element method (Dover Publications Inc., Mineola, NY, 2009). Reprint of the 1987 edition.
[9] Kaipio, J. & Somersalo, E. Statistical and computational inverse problems, vol. 160 of Applied Mathematical Sciences (Springer-Verlag, New York, 2005).
Assessment of inverse procedures for the identification of hyperelastic material parameters
Marco Sasso, Università Politecnica delle Marche, via Brecce Bianche, 60131 Ancona, Italy, [email protected]
Gianluca Chiappini, Università Politecnica delle Marche, via Brecce Bianche, 60131 Ancona, Italy, [email protected]
Marco Rossi, Arts et Métiers ParisTech, rue St. Dominique, 51000 Châlons-en-Champagne, France, [email protected]
Giacomo Palmieri, Università degli Studi e-Campus, via Isimbardi 10, 22060 Novedrate (CO), Italy, [email protected]

ABSTRACT
This work aimed to implement and compare competing procedures for the identification of hyperelastic material parameters. First, experimental tests were conducted on fluorosilicone rubber specimens in equal-biaxial tension; the cruciform-shaped specimens underwent heterogeneous large-strain distributions, which were captured by the Digital Image Correlation technique, while load cells recorded the force signals. The experimental data were then used in three different inverse techniques for material parameter estimation: the first method was based on "classic" FE model updating, which uses only the global quantities measured during the experiments (i.e. forces and boundary displacements) to define the error function to be minimized; the second method was still based on FE model updating, but the experimentally determined strain fields were compared with the numerical ones in order to define a more adequate cost function; the third technique was based on the Virtual Fields Method, which naturally takes into account the real strain distributions and makes it possible to overcome the experimental difficulties represented by non-symmetry of the test/specimen, non-uniform boundary conditions and friction. The results of the three procedures are shown and compared in terms of accuracy, transferability, computational efficiency and practicability.

INTRODUCTION
Modeling the mechanical response of elastomeric materials is commonly carried out within the framework of hyperelasticity [1].
Identifying the constitutive parameters that govern such a law is classically carried out with homogeneous tests, namely uniaxial tensile extension, pure shear and equibiaxial extension [2-3]. The aim of the present work is to develop an experimental and numerical procedure to characterize hyperelastic materials by means of a single heterogeneous test in which the three different types of strain state coexist. In that case, the parameters obtained are directly a weighted average of those that would be obtained from the three different tests described above. Several reliable techniques for full-field strain measurement are nowadays available (moiré [4-5], Electronic Speckle Pattern Interferometry [6], grid methods [3], Digital Image Correlation [8-9]), and an important research topic is to find well-suited methods to use the great amount of available data in a sharp way. The classical method for inverse problems is the so-called "model updating" [10-12], mainly based on FEM numerical models. Results of experiments, e.g. in terms of load-displacement data, are compared with results obtained with the FEM model of the experiment, using an appropriate constitutive model of the material. If
T. Proulx (ed.), Optical Measurements, Modeling, and Metrology, Volume 5, Conference Proceedings of the Society for Experimental Mechanics Series, DOI 10.1007/978-1-4614-0228-2_18, © The Society for Experimental Mechanics, Inc. 2011
numerical and experimental data do not match properly, the material parameters of the constitutive law are iteratively varied until the best set of parameters is reached. Even though this approach to inverse problems is widely known and adopted, it shows some important drawbacks. Firstly, it usually needs a great quantity of computation, and hence time; inverse problems on non-linear materials using a standard PC and FEM model updating may require weeks of computation. The other disadvantage concerns the assumptions made in the FEM model about load distribution, constraints, boundary conditions, geometry, etc. Usually these assumptions are rough because of the lack of information, and they often make the model not faithful to reality. Furthermore, only a small quantity of experimental data is exploited, usually global quantities such as load and displacement. With the rise of new full-field measurement techniques, much more information is available, especially about local strains at each acquired point of the specimen surface, and proper methods should be used to take advantage of this. In this context, a novel technique known as the Virtual Fields Method (VFM) [13-14], developed in recent years, seems to be very effective and appropriate; its applications involve the characterization of a large variety of materials, from homogeneous to orthotropic and anisotropic (e.g. composites), from linear to non-linear (e.g. elasto-plasticity, visco-plasticity, hyperelasticity) [15-16]. In this paper, a numerical procedure is proposed to find the best material parameters of the Ogden and 2nd-order Mooney-Rivlin hyperelastic models. The tested material is a common fluorosilicone rubber in its virgin state: this means that the stabilization procedure of cyclic loadings required to accommodate the Mullins softening effect has not been applied before testing. Hence, the data refer to what is called the 'primary path' in pseudo-elastic models.
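The model-updating idea can be sketched on a toy problem. For a homogeneous uniaxial test on an incompressible two-parameter Mooney-Rivlin solid, the nominal stress is P = 2 (λ - λ^-2)(C10 + C01/λ), which is linear in the parameters, so the "update" collapses to a single least-squares solve. With the heterogeneous cruciform test used in this paper a full FE model is needed and the update becomes an iterative loop; the data and parameter values below are hypothetical and serve only to illustrate the fitting step.

```python
import numpy as np

def mr_nominal_stress(lam, c10, c01):
    """Uniaxial nominal stress of an incompressible two-parameter
    Mooney-Rivlin solid: P = 2 (lam - lam^-2) (C10 + C01 / lam)."""
    return 2.0 * (lam - lam ** -2) * (c10 + c01 / lam)

# Hypothetical "experimental" curve (parameters chosen for illustration)
lam = np.linspace(1.05, 2.0, 30)             # stretch ratio
P_exp = mr_nominal_stress(lam, 0.30, 0.05)   # nominal stress, MPa

# P is linear in (C10, C01): build the design matrix and solve once
basis = 2.0 * (lam - lam ** -2)
Amat = np.column_stack([basis, basis / lam])
c10, c01 = np.linalg.lstsq(Amat, P_exp, rcond=None)[0]
print(round(c10, 3), round(c01, 3))   # → 0.3 0.05
```

For models that are nonlinear in their parameters, such as Ogden with its exponents, this single solve is replaced by the iterative minimization of a cost function, which is exactly where the computational burden of FE model updating comes from.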
Nevertheless, the same procedures can be applied to the stabilized material without significant differences. Global load data and deformation data of the surface of the specimen are used to calculate the terms and coefficients of the VFM equations. Two independent virtual fields are involved in the procedure to take advantage of as much data as possible; thus, two equations can be written for different instants of the test in order to take into account the nonlinearity of the material and obtain an overdetermined system of equations whose unknowns are the coefficients of the material in the constitutive models mentioned above. The parameters obtained are then compared with those obtained from the classic inverse method based on FEM, not only in terms of the global load-displacement curve, but also in terms of strain and stress distributions.

EXPERIMENTAL TESTS
The experimental setup consists of an electromechanical tensile machine (Zwick® Z050 model), a CMOS camera with resolution 1280x1024 (Pixelink® BU371F model) and a biaxial testing device for cruciform-shaped samples, as illustrated in Fig. 1a.
[Fig. 2 legend: experimental points; equibiaxial, pure shearing and uniaxial reference curves]
Fig. 1 Cross-shaped specimen
Fig. 2 Dispersion of experimental points on the invariant plane
A cruciform sample is glued onto 'L'-shaped brackets, which are placed in proper clamps in the biaxial testing device; two load cells allow the load values in the vertical and horizontal directions to be read. Though this system is designed with different possibilities of adjustment, in this work only a 45° diagonal configuration is used, because the shape of the sample allows a sufficiently heterogeneous strain field to be generated. Fig. 2 shows the dispersion of the experimental points obtained through the test; the x-axis represents the first strain invariant, while the y-axis represents the second strain invariant. Since the Mooney-Rivlin hyperelastic model expresses the strain energy function as a polynomial combination of these two invariants, it is useful to cover as much area as possible on the invariant plane; it can be seen that just one test made it possible to cover most of the space between the two curves corresponding to the ideal uniaxial and equibiaxial tests. Figure 3 shows a set of images of the specimen at the last step of the test, superimposed with color maps of the distributions of the strains εx, εy, εxy, ε1, ε2, ε3. Figure 4 shows the load-displacement curves read by the two load cells; the x-axis indicates the vertical displacement (almost coincident with the horizontal one). It can be seen that
the horizontal force becomes slightly smaller than the vertical force from a certain point onwards; this is probably due to failure in the gluing of the horizontal grips and/or to an asymmetry in the deformation of the specimen.
Fig. 3 Strain maps of the specimen at the last step of the test (112%)

[Fig. 4 plot: mean load [N] versus displacement [mm]; curves forceV and forceH]
Fig. 4 Experimental load-displacement curves

APPLICATION OF VFM
In a generic inverse problem, where the geometry, the strain field and the applied load are given, the parameters governing the constitutive equations have to be identified. However, these unknown parameters, which depend on the material and on the form of the adopted constitutive equations, are not directly related to the measured strain field; this means that a closed-form solution is not available. Besides the constitutive laws, the other equations of continuum mechanics of solids must also be verified at each point of the specimen: the equilibrium and compatibility equations. Among recent methods that exploit full-field deformation data to solve inverse problems, one of the most prominent techniques is the VFM [13-14]. It is based on the principle of virtual work; for a given solid of volume V, this principle can be written as follows:
\[
-\int_V \sigma_{ij}\, \varepsilon^*_{ij}\, dV + \int_{\partial V} T_i\, u_i^*\, dS + \int_V f_i\, u_i^*\, dV = \int_V \rho\, a_i\, u_i^*\, dV \tag{1}
\]
where σ is the first Piola-Kirchhoff stress tensor, ε* is the virtual displacement gradient tensor, T is the distribution vector of loadings acting on the boundary, ∂V is the part of solid boundary where the loading forces are applied, u*
is the virtual displacement vector, f is the distribution of volume forces acting on V, ρ is the density and a is the acceleration. An important feature is that the above equation is verified for any kinematically admissible virtual field (u*, ε*). In a general case the constitutive equation can be written as σ = g(ε), where g is a function of the actual strain components and involves the material-dependent parameters. Considering a static case, with no acceleration and no volume forces, equation (1) reduces to
\[
-\int_V g_{ij}(\varepsilon)\, \varepsilon^*_{ij}\, dV + \int_{\partial V} T_i\, u_i^*\, dS = 0 \tag{2}
\]
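For a constitutive law that is linear in its parameters, each choice of virtual field (and each load step) turns equation (2) into one linear equation in those parameters; stacking several such equations gives a system solved by least squares. The sketch below illustrates this assembly on a toy plane-stress law σ = θ1 ε + θ2 tr(ε) I with synthetic strain data and random virtual fields; all fields, the law and the parameter values are assumptions for illustration, not the Ogden identification performed in this paper.

```python
import numpy as np

rng = np.random.default_rng(2)
npts, t, dS = 200, 1.0, 1e-4      # measurement points, thickness, area per point

# Synthetic measured strain components (exx, eyy, exy) at each point
eps = rng.normal(scale=0.05, size=(npts, 3))

def stress(eps, th1, th2):
    """Toy plane-stress law, linear in (theta1, theta2):
    sigma = theta1 * eps + theta2 * trace(eps) * I."""
    tr = eps[:, 0] + eps[:, 1]
    return np.column_stack([th1 * eps[:, 0] + th2 * tr,
                            th1 * eps[:, 1] + th2 * tr,
                            th1 * eps[:, 2]])

theta_true = np.array([80.0, 40.0])

# Two virtual strain fields (hypothetical), as in the two-field setup used here
vf = [rng.normal(size=(npts, 3)) for _ in range(2)]

def internal_work(sig, ev):
    # Discrete sigma : eps* integral over the surface (double weight on shear)
    return np.sum((sig[:, 0] * ev[:, 0] + sig[:, 1] * ev[:, 1]
                   + 2.0 * sig[:, 2] * ev[:, 2]) * t * dS)

# Each virtual field yields one linear equation  A @ theta = w_ext
A = np.array([[internal_work(stress(eps, 1, 0), ev),
               internal_work(stress(eps, 0, 1), ev)] for ev in vf])
w_ext = A @ theta_true                  # simulated external virtual work
theta = np.linalg.lstsq(A, w_ext, rcond=None)[0]
print(theta)                            # recovers theta_true
```

With more virtual fields or load steps than unknowns, as in the 50-equation, six-unknown system described below, the same least-squares solve averages out measurement noise.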
It is important to note that, if the actual strain field is heterogeneous, any new virtual field in equation (2) leads to a new equation involving the constitutive parameters. Hence, the VFM is based on equation (2) written with a given set of virtual fields, and the resulting system of equations is used to extract the unknown constitutive parameters. The Ogden model adopted in this paper requires the knowledge of six parameters; we chose 2 different virtual fields used for 25 test steps, leading to an over-determined system of 50 equations in six unknowns. Figure 5 shows the two displacement maps corresponding to the two virtual fields used; the blue color indicates no displacement, while red indicates the maximum displacement.
b
w
a St
S y
x
Sl
Sc y
x
Sr
Sb
Fig. 5 displacements associated with two virtual fields Considering a point on the surface of the specimen of x and y coordinates in a reference system with origin at the midpoint of the specimen, the first virtual field applied is expressed by the following equation:
u* = (w/π) sin(πx/w) cos(πy/2w)
v* = (w/π) sin(πy/w) cos(πx/2w)    (3)
and consists of an expansion/contraction of the specimen with the edges fixed; this virtual field is equivalent to imposing the global balance of the work of the internal actions, including shear, in the absence of external work. The corresponding virtual strain field is:
ε*_xx = cos(πx/w) cos(πy/2w)
ε*_yy = cos(πy/w) cos(πx/2w)
ε*_xy = −(1/4) [sin(πx/w) sin(πy/2w) + sin(πy/w) sin(πx/2w)]    (4)
and the equation (2) becomes:
∫_S (σ_xx ε*_xx t) ds + ∫_S (σ_yy ε*_yy t) ds + 2 ∫_S (σ_xy ε*_xy t) ds = 0    (5)
For the second virtual field, represented in the right map of Fig. 5, the area occupied by the specimen was divided into five regions, and a different displacement law was assigned to each region, respecting compatibility between neighbouring regions; in this case the displacements of the edges are not null, and they are such that the external forces produce virtual work both vertically and horizontally. It is important to note, however, that the upper and lower edges of the virtual specimen have only vertical displacements, and the left and right edges have only horizontal displacements; in this way, the VFM is able to filter out any unknown or unmeasured effects in the clamped areas. For brevity we give here only the equations that describe the virtual displacement field of the left region:
u* = −a
v* = ((x + w)/b) y    (6)
which, by differentiation, leads to a virtual strain field described by:
ε*_xx = 0
ε*_yy = (x + w)/b
ε*_xy = y/2b    (7)
In this case the equation (2) becomes:
∫_Sl (σ_ij ε*_ij t) ds + ∫_St (σ_ij ε*_ij t) ds + ∫_Sr (σ_ij ε*_ij t) ds + ∫_Sb (σ_ij ε*_ij t) ds + ∫_Sc (σ_ij ε*_ij t) ds = 2a (P_V + P_H)    (8)
Writing equations (5) and (8) for N different test steps leads to an overdetermined system of 2×N equations in which the unknowns are the material constants appearing in the analytical expression of the stress terms σ_ij. These stress components are expressed as a function of the chosen constitutive model; for hyperelastic models, the principal Kirchhoff stresses are given by:
σ_1 = λ_1 ∂W/∂λ_1,    σ_2 = λ_2 ∂W/∂λ_2    (9)

where W is the strain energy function expressed by:
W = C_10 (I_1 − 3) + C_01 (I_2 − 3) + C_20 (I_1 − 3)² + C_11 (I_1 − 3)(I_2 − 3) + C_02 (I_2 − 3)²    (10)
or by

W = Σ_{p=1..3} μ_p (λ_1^{α_p} + λ_2^{α_p} + λ_3^{α_p} − 3) / α_p    (11)

respectively for the 2nd order Mooney-Rivlin model and the 3rd order Ogden model. In (10), the terms I_1, I_2 and I_3 are the invariants of deformation expressed by:
I_1 = λ_1² + λ_2² + λ_3²
I_2 = (λ_1 λ_2)² + (λ_2 λ_3)² + (λ_1 λ_3)²
I_3 = (λ_1 λ_2 λ_3)²    (12)
where λ_i, i = 1, 2, 3, are the principal stretches determined experimentally. The results of the solution of the system of equations are shown in the next section, together with the results of model updating based on finite element simulations (MU) of a quarter of the specimen (under the hypothesis of double symmetry of both load and geometry).

VFM RESULTS AND COMPARISON WITH FEM

The coefficients of the Ogden and Mooney-Rivlin models identified with the two inverse methods are shown in Tables 1 and 2. It is actually quite difficult, especially for the Mooney-Rivlin model, to compare the parameters obtained, while a certain regularity, at least in the algebraic signs, is observed for the Ogden model. This is probably related to the polynomial nature of both constitutive laws, and could indicate a further difficulty due to the presence of multiple solutions, i.e. sets of parameters that differ from each other but provide a similar match to the experimental results. Interesting considerations can be drawn from the load-displacement curves and from the comparison of the displacement and strain maps.

Table 1: Ogden coefficients
  coeff.      MU       VFM
  μ1 [MPa]    1.934    1.687
  μ2 [MPa]   -2.755   -0.511
  μ3 [MPa]    2.278    0.146
  α1          1.029    0.647
  α2          2.987   -0.545
  α3          3.241    4.176
  err [%]     2.07     2.80
Table 2: Mooney-Rivlin coefficients
  coeff.      MU        VFM
  C10 [MPa]   0.3953    1.0114
  C01 [MPa]   0.3354   -0.1940
  C20 [MPa]   0.1278   -0.2447
  C11 [MPa]  -0.4098    0.2370
  C02 [MPa]   0.3097   -0.0552
  err [%]     1.98      2.51
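Because the Mooney-Rivlin strain energy is linear in the coefficients C_ij, each virtual-work equation of type (2) contributes one linear equation in the unknowns, and the stacked system over the test steps can be solved directly by least squares. The sketch below illustrates this idea using the closed-form uniaxial Cauchy stress of an incompressible two-parameter Mooney-Rivlin solid as a stand-in for the full-field integrals; the data are synthetic, not the paper's measurements, and the function names are ours.

```python
import numpy as np

# For an incompressible two-parameter Mooney-Rivlin solid in uniaxial
# tension at stretch lam, the Cauchy stress is (standard result):
#   sigma(lam) = 2*(lam**2 - 1/lam) * (C10 + C01/lam)
# which is linear in (C10, C01), so identification reduces to A c = b.

def design_row(lam):
    """Coefficients multiplying (C10, C01) in the uniaxial stress."""
    base = 2.0 * (lam**2 - 1.0 / lam)
    return np.array([base, base / lam])

def identify_mooney_rivlin(stretches, stresses):
    """Least-squares fit of (C10, C01) from stress 'measurements'."""
    A = np.vstack([design_row(l) for l in stretches])
    c, *_ = np.linalg.lstsq(A, np.asarray(stresses), rcond=None)
    return c

# Synthetic check: generate data with known coefficients, then recover them.
C_true = np.array([0.3953, 0.3354])      # MPa, the MU values of Table 2
lams = np.linspace(1.1, 3.0, 25)         # 25 test steps, as in the paper
sigmas = np.array([design_row(l) @ C_true for l in lams])
C_fit = identify_mooney_rivlin(lams, sigmas)
print(C_fit)                             # recovers C_true on noise-free data
```

In the real VFM problem the rows of A come from numerically integrating σ_ij ε*_ij over the measured strain field for each virtual field and test step; the least-squares structure is the same.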
Fig. 6a global load-displacement curves exp-FEM (mean load [N] vs. disp [mm])
Fig. 6b global load-displacement curves exp-VFM (mean load [N] vs. disp [mm])
Figures 6a and 6b show that both the VFM and MU coefficients can correctly reproduce the experimental global variables, with the standard errors reported in Tables 1 and 2. Nevertheless, for a proper comparison of the two methods, Figure 7 shows the results of a FEM simulation, carried out using the two different sets of coefficients (VFM and MU) for the Ogden model, in terms of strain maps.
Fig. 7 strain maps (εX, εY, εZ, ε1, ε2) computed by FE simulation using the coefficients of Table 1: MU at left, VFM at right
It can be seen in the images on the left of Figure 7 that the deformation and the shape assumed by the specimen simulated with the coefficients of the MU method diverge from the experimental observations; in particular, the initially straight edge of the specimen undergoes a lateral contraction much greater than the experimental evidence (Fig. 3). The VFM technique obviously does not have this kind of problem, because the experimentally measured shape and strain distributions are input values; this positive aspect is also present in the simulation performed with the coefficients obtained by the VFM (right images in Fig. 7), where the deformed shape is similar to the experimental one.
Fig. 8 comparison of load-displacement curves

Returning to the issue of multiple solutions, Figure 8 shows the load-displacement curves obtained with a FEM model in which the material coefficients identified by the VFM were used. The difference between these curves and those of Figure 6, although it does not totally exclude the possibility of achieving similar results with different sets of coefficients, shows that the coefficients previously identified by the two methods actually describe different mechanical behaviors. The difference between the mechanical behavior estimated with the VFM and with the FEM can be attributed to phenomena that the finite element model does not consider, mainly imperfect boundary conditions and asymmetry in the test. These defects in the FE modeling lead to erroneous estimates of the stiffness of the material at the various test steps, also because only the global load was used and the FE model was not constrained to deform locally as the real specimen does.

CONCLUSIONS

With a simple purpose-built device it was possible to perform biaxial tensile tests on cross-shaped rubber specimens; the deformation of the specimen, observed experimentally by digital image correlation, was highly heterogeneous, covering almost the entire region of the invariant plane between the uniaxial and equi-biaxial limits. The experimentally obtained strain distribution made it possible to apply the VFM to identify the material parameters of two of the most common hyperelastic constitutive models. In particular, it was shown that these parameters differ significantly from those determined by a standard FEM-based inverse procedure. The sets of parameters obtained by the two methods, although both providing a good representation of the load-displacement curves, describe the mechanical behavior of the material in very different ways. This shows that the traditional technique of model updating based on FEM introduces simplifications that can lead to erroneous assessments of the real behavior of the material.

REFERENCES
[1] Holzapfel, G.A., Nonlinear Solid Mechanics: A Continuum Approach for Engineering. Wiley, New York (2000).
[2] Ward, I.M., Hadley, D.W., An Introduction to the Mechanical Properties of Solid Polymers. Wiley, New York (1993).
[3] Sasso, M., Palmieri, G., Chiappini, G., Amodio, D. (2008) Characterization of hyperelastic rubber-like materials by biaxial and uniaxial stretching tests based on optical methods. Polymer Testing 27(8), 995-1004.
[4] Sciammarella, C.A. (1982) The moiré method - A review. Experimental Mechanics 22(11), 418-433.
[5] Genovese, K., Lamberti, L., Pappalettere, C. (2006) Mechanical characterization of hyperelastic materials with fringe projection and optimization techniques. Optics and Lasers in Engineering 44, 423-442.
[6] Erf, R.K. (1978) Speckle Metrology. Academic Press, New York.
[7] Sasso, M., Palmieri, G., Chiappini, G., Amodio, D. (2008) Characterization of hyperelastic rubber-like materials by biaxial and uniaxial stretching tests based on optical methods. Polymer Testing 27, 995-1004.
[8] Sutton, M.A., Wolters, W.J., Peters, W.H., Ranson, W.F., McNeil, S.R. (1983) Determination of displacements using an improved digital correlation method. Image and Vision Computing 1(3), 133-139.
[9] Hild, F., Roux, S. (2006) Digital image correlation: from displacement measurement to identification of elastic properties - a review. Strain 42(2), 69-80.
[10] Ghouati, O., Gelin, J.-C. (2001) A finite element-based identification method for complex metallic material behaviors. Comput. Mater. Sci. 21, 57-68.
[11] Mahnken, R., Stein, E. (1994) The identification of parameters for visco-plastic models via finite-element methods and gradient methods. Model. Simul. Mater. Sci. Eng. 2, 616-697.
[12] Sasso, M., Newaz, G., Amodio, D. (2008) Material characterization at high strain rate by Hopkinson bar tests and finite element optimization. Materials Science and Engineering A 487, 289-300.
[13] Grédiac, M. (1989) Principe des travaux virtuels et identification. Comptes Rendus de l'Académie des Sciences, 1-5.
[14] Grédiac, M., Pierron, F., Avril, S., Toussaint, E. (2006) The virtual fields method for extracting constitutive parameters from full-field measurements: a review. Strain 42, 233-253.
[15] Grédiac, M., Pierron, F. (2006) Applying the Virtual Fields Method to the identification of elasto-plastic constitutive parameters. Int. J. of Plasticity 22, 602-627.
[16] Avril, S., Pierron, F., Yan, J., Sutton, M. (2008) Identification of viscoplastic parameters and characterization of Lüders behaviour using Digital Image Correlation and the Virtual Fields Method. Mech. of Materials 40, 729-742.
[17] Palmieri, G., Sasso, M., Chiappini, G., Amodio, D. (2010) Virtual fields method on planar tension tests for hyperelastic materials characterization. Strain, doi: 10.1111/j.1475-1305.2010.00759.x
Digital image correlation through a rigid borescope

Phillip L. Reu*
Sandia National Laboratories, PO Box 5800, Albuquerque, NM 87185
ABSTRACT

Situations occasionally arise in field measurements where direct optical access to the area of interest is not possible. In these cases the borescope is the standard method of imaging. Furthermore, if shape, displacement, or strain are desired in these hidden locations, it would be advantageous to be able to do digital image correlation (DIC) through the borescope. This paper presents the added complexities and errors associated with imaging through a borescope for DIC. Non-radial distortions and their effects on the measurements, along with a possible correction scheme, are discussed.

Keywords: digital image correlation, distortion correction

1. INTRODUCTION

Digital image correlation (DIC) has become a standard tool to measure 2D and 3D shape, motion and deformation. To conduct DIC in a hidden area, a borescope may be a convenient method of imaging. There are two types of borescopes: fiber-optic scopes, which use a lens and an intervening fiber bundle to relay the scene to the camera, and rigid scopes, which use a series of relay lenses to transfer the scene to a camera. Both types could be used for DIC; however, the fiber-optic solution has severe image resolution limitations. Because of this, it was decided to use a rigid borescope system to attempt DIC in a hidden cavity. At the moment this limits the application to 2D DIC; however, arrangements could be imagined where 3D DIC could be conducted. Because of the complex optical path, distortions will be larger than with a typical camera lens and likely not radial in nature. This paper discusses a proof-of-concept experiment conducted to determine the optical distortions and the resulting errors in a DIC analysis. Furthermore, non-radial distortion correction methods, originally developed for use in stereo-microscopes [1], will be used to minimize the distortions.

2. BORESCOPE DISTORTION TESTING

2.1. Experimental setup to determine lens distortions of a rigid borescope

A rigid borescope, model D8094K-101, was obtained from Lennox Instruments, a commercial manufacturer of both rigid and flexible borescopes. The borescope can be extended to various lengths using extension tubes. The optional prism objective was used on the end to create a viewing direction perpendicular to the borescope shaft. Dirt on the internal borescope lenses turned out to be problematic, as dirt spots look like a stationary speckle to the DIC and corrupt the results. After carefully cleaning the optics, the borescope was set up on an optical table looking at a small speckle pattern affixed to an xy-stage. The imaging section of the borescope is shown in Figure 1. The shorter borescope, with only the eyepiece and imaging section, was approximately 1 meter in length; the full borescope with two extensions was 2.8 meters in length. The camera attachment for the rigid borescope was not appropriate for the Point Grey 5-megapixel camera and caused severe vignetting on the 2/3-inch detector. The standard camera sold with the system has a much smaller, lower resolution detector and would not yield the desired spatial resolution for our application. To overcome the vignetting, a 75-mm lens was used to relay the image from the eyepiece to the camera (see Figure 1). With careful alignment, the vignetting was minimized. A preliminary study showed that the effect of this additional lens on the distortions was minimal. The same 75-mm lens was used, along with the same speckle pattern area, to test the distortion removal algorithms and for comparison with the borescope results.
* [email protected]
T. Proulx (ed.), Optical Measurements, Modeling, and Metrology, Volume 5, Conference Proceedings of the Society for Experimental Mechanics Series 9999999, DOI 10.1007/978-1-4614-0228-2_19, © The Society for Experimental Mechanics, Inc. 2011
Figure 1. Borescope experimental setup. (Upper left) Camera setup with relay lens. (Upper right) Imaging end of borescope with lighting, speckle pattern and stages. (Bottom) Overall experimental setup.

2.2. Experimental procedure

After optimizing the focus and lighting, the speckle pattern was translated in an L-shaped pattern, with 6 images taken equally spaced in each direction. The step size was 0.05 mm and was controlled with a manual micrometer stage. The total translation in the x- and y-directions was 0.3 mm (this corresponds to approximately 15.4 pixels in the sensor plane). Because this is a pure translation perpendicular to the lens axis, the results should be uniform noise overlaid on the translation; any deviation from this is an indication of lens distortion. The translation results for the 75-mm lens and the rigid borescope before any correction are shown in Figure 2.
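Since a rigid in-plane translation should produce a spatially uniform displacement field, the distortion can be estimated as the per-point deviation from the field mean. The following sketch uses synthetic data (the array names and the distortion shape are illustrative, not from any DIC package) and also computes the image scale implied by the quoted numbers, 0.3 mm ≈ 15.4 pixels:

```python
import numpy as np

def distortion_residual(u, v):
    """Deviation of a displacement field (pixels) from a rigid translation.

    For a pure translation the DIC result should be uniform, so the
    residual about the field mean maps the lens distortion.
    """
    du = u - u.mean()
    dv = v - v.mean()
    return du, dv, np.hypot(du, dv)   # root-sum-square magnitude map

# Implied image scale: 0.3 mm of stage travel ~ 15.4 px on the sensor.
scale_px_per_mm = 15.4 / 0.3          # about 51.3 px/mm

# Synthetic example: one 0.05 mm step plus a small non-radial distortion.
y, x = np.mgrid[-1:1:64j, -1:1:64j]   # normalized field-of-view coordinates
u = 0.05 * scale_px_per_mm + 0.1 * x * (x**2 + y**2)
v = 0.1 * y * (x**2 + y**2)
du, dv, mag = distortion_residual(u, v)
print(mag.max())                      # worst "false" displacement, pixels
```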
Figure 2. Optical distortions causing a "false" displacement result (left: 75-mm lens; right: borescope).
Figure 2 clearly shows that there is a large non-radial distortion caused by the borescope optics. In comparison, the distortions are negligible with the traditional 75-mm optic. The question to ask now is whether the borescope distortions can be removed.

3. DISTORTION CORRECTION RESULTS

3.1. Method of correction

To correct the distortions, the same translated speckle pattern shown above was used. It is important that the pattern remain in plane and not be deformed during the translations. The built-in distortion correction functionality of Vic-2D was used for all results in this paper. The steps to correct the results are: 1. running the correlation on the translated speckle pattern; 2. fitting a spline function to the displacement data to map the optical distortion; 3. correcting the data using the distortion map in subsequent correlations. The distortion correction also scales the data in millimeters if the correct translation amounts are entered into the software during the distortion correction. To obtain the results in pixels, the user may re-run the distortion correction algorithm entering the average displacement in pixels rather than in millimeters. All of the results in this paper are presented in pixel coordinates.

3.2. Correction results

After correcting the results as outlined above, the average displacement was subtracted from both the x- and y-translation results to show only the errors. The final distortion magnitude was calculated as the root sum of the squares of the x and y errors. Figure 3 shows the magnitude of the distortion at each location in the field of view. Distortions as high as 0.3 pixels remain even after correction. Typically, for DIC to be useful, correlations on the order of 1/100th of a pixel are desired; these distortions are 1 to 2 orders of magnitude higher than our desired displacement accuracy. Even more troubling are the errors in the strain results shown in Figure 4. Because the sample is unstrained during the translation, there should be no strain in the results.
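The three correction steps can be sketched as follows. To keep the sketch dependency-free, a low-order polynomial surface stands in for the spline fit that the commercial software performs internally; the function names and the synthetic distortion are ours, not the Vic-2D API.

```python
import numpy as np

def fit_distortion_map(x, y, resid, order=3):
    """Step 2: fit a smooth surface to the residual displacement field.

    A bivariate polynomial (stand-in for a spline) is fit by least squares;
    the returned callable evaluates the distortion model anywhere.
    """
    idx = [(i, j) for i in range(order + 1) for j in range(order + 1 - i)]
    A = np.column_stack([(x**i * y**j).ravel() for i, j in idx])
    coef, *_ = np.linalg.lstsq(A, resid.ravel(), rcond=None)
    return lambda xq, yq: sum(c * xq**i * yq**j
                              for c, (i, j) in zip(coef, idx))

# Steps 1 and 3 around it: "correlate" a pure translation, model the
# deviation from the mean, then subtract that model from later data.
y, x = np.mgrid[-1:1:32j, -1:1:32j]
true_distortion = 0.15 * (x**2 - y**2)      # synthetic, smooth, non-radial
u_measured = 12.0 + true_distortion         # 12 px rigid shift + distortion
model = fit_distortion_map(x, y, u_measured - u_measured.mean())
u_corrected = u_measured - model(x, y)      # distortion largely removed
print(np.ptp(u_corrected), np.ptp(u_measured))
```

In practice the residual fit would be done once per setup from the translation images, and the resulting map applied to every subsequent correlation, as in step 3 above.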
Figure 3. Corrected borescope distortions in pixels (panels: full borescope, short borescope).
4. CONCLUSIONS

4.1. Summary of borescope distortion correction

While it is possible to greatly improve the distortion of the borescope imaging optics with the current methodology, unacceptably large distortions still remain. This is especially true in the strain results, which give a measure of the slope of the distortion. The relationship between the distortions and the strain error is evident when the distortion results of Figure 3 are compared to the strain error results in Figure 4. The area of the largest errors corresponds to an area in the image that had particularly large distortions and was difficult to focus during the setup of the borescope. This problem was traced to the final imaging portion of the borescope and could not be removed from the system; I suspect that the imaging lens is either malformed or misaligned. Because of this flaw in the borescope it is not possible to conclusively determine whether DIC through a borescope is feasible. It should be noted that when the high-distortion area is removed from the analysis, the strain results are a more acceptable 130 µε. This leaves some hope that with a better (or maybe just different) borescope, much lower distortions could be obtained and DIC could be conducted through a borescope.
Figure 4. Corrected borescope strain errors (full borescope: σ = 650 µε; short borescope: σ = 535 µε; 75-mm lens: σ = 42 µε).

ACKNOWLEDGEMENTS

I would like to thank Scott Walkington and Amarante Martinez for assistance in setting up the borescope and obtaining the experimental data. Sandia is a multiprogram laboratory operated by Sandia Corporation, a Lockheed Martin Company, for the United States Department of Energy under contract DE-AC04-94AL85000.

5. REFERENCES
1. Schreier, H.W., Garcia, D., Sutton, M.A. (2004) Advances in light microscope stereo vision. Experimental Mechanics 44(3), 278-288.
Scale Independent Approach to Strength Physics and Optical Interferometry
Sanichiro Yoshida Southeastern Louisiana University Department of Chemistry and Physics SLU 10878, Hammond, LA 70402, USA, [email protected]
ABSTRACT

Dynamics of deformation and fracture is considered based on a field theoretical approach. The basic postulate of this approach is that deformation is always locally linear elastic, and that the nonlinear behavior in the plastic regime can be formulated by considering the interaction of these local dynamics. By requiring that the transformation matrix representing the local elastic deformation be coordinate dependent, so that it describes nonlinearity, and that the law of elasticity be invariant under the coordinate-dependent transformation, this formalism introduces a compensating field known as the gauge field. By applying the Lagrangian formalism to the gauge field, the theory derives field equations that describe the interaction between the translational and rotational modes of displacement. Plasticity is viewed as the energy-dissipating dynamics of the deformation charge, a quantity derived from the symmetry charge associated with the invariance of the elastic law. Recent analysis indicates that the transition from plastic deformation to fracture is governed by the behavior of the deformation charge, and that an optical interferometric technique known as electronic speckle-pattern interferometry can be used to visualize the deformation charge.

1. Introduction

Deformation and fracture behavior observed in nano- and microscopic systems is substantially different from that observed in macroscopic systems. At the nanoscale, basic properties of material strength such as the yield stress and flow stress of metals show strong size dependence. Fracture is initiated at the atomic scale and develops to the final, macroscopic failure of the object. Understanding of deformation and fracture at the nanoscale is essential for full understanding of fracture at any scale level.
Conventional theories of plastic deformation rely on experimentally determined constitutive relations and mathematical techniques for calculating non-uniform distributions of stress and strain [1, 2]; they are by nature phenomenological and dependent on the scale level at which the experiment is conducted. It is therefore extremely important to develop a scale independent theory. Our approach is to describe the process of deformation and fracture based on a fundamental physical principle known as local symmetry [3]. This approach is not only scale independent but also capable of describing all stages of deformation, from elastic deformation through fracture, on a common theoretical basis. The original idea to use local symmetry in deformation dynamics was proposed by Panin et al. [4] in the framework of the general theory of deformation and fracture known as physical mesomechanics [5]. Essentially, the physical-mesomechanical formalism describes deformation by a linear transformation similar to that used in conventional continuum mechanics [6]. The nonlinearity in the plastic regime is described by allowing the transformation matrix to be coordinate-dependent and requiring that the law of elasticity be invariant under the transformation. The aim of this paper is to focus on the physics aspect of the physical-mesomechanical formalism with some connection to dislocation dynamics. Supporting experimental data obtained with optical interferometry are also presented and discussed.
2. Formalism

2.1 Postulate

The basic postulate of the mesomechanical approach is that when a material enters the plastic regime under the influence of an external load, the deformation is still locally linear-elastic. This is reasonable because microscopically the internal force is proportional to displacement. In fact, most theories use inter-atomic potentials in the form of a quadratic function of the interatomic distance; basically they assume that the force is proportional to the displacement from equilibrium. It is therefore natural to assume that the conventional transformation matrix of elastic theory is locally applicable. The nonlinearity in the plastic regime is described through consideration of the interaction among these local dynamics. As parts of the same continuum material, all these local regions "know" how the other regions are being deformed elastically at each moment. This interaction is conveniently integrated into the theory as a vector potential associated with a field [7, 8]. Mathematically, this field connects local regions and is called the connection field. In physics, it is called the gauge field, because it makes the local (gauge) dynamics invariant under the coordinate-dependent transformation.

2.2 Field equation

Once the gauge field is identified, the so-called field equations can be derived by applying the least action principle. Detailed derivation is described in [7]. The field equations of the present dynamics can be given as follows.
Here the meaning of quantities appearing in eqs. (1) and (2) are as follows. and are the field variables of the gauge field, and are related to the temporal and spatial derivatives of the vector potential as shown below1.
and appearing on the right-hand side of eqs. (1) and (2) are the temporal and spatial components of the so-called charge of symmetry. is the phase velocity of the spatiotemporal variation of the field [7]. From eqs. (3) and (4), it is found that and are related as follows.
Eq. (5) leads to the interpretation that represents the average velocity of the unit volume at the coordinate point and represents the rotation of the unit volume. Substitution of eq. (5) into (2) leads to a wave equation of the following form.
The general solution to eq. (6) represents decaying transverse waves. The phase velocity of a mechanical transverse wave is in general expressed as the square root of the ratio of shear modulus to the density. From this, and can be interpreted as the density and shear modulus of the material, and the phase velocity can be expressed as follows2. 1
Some explanation about eqs. (3) and (4) can be found in ref. 7. More detailed description will be found in a paper scheduled to be published in the Journal of Strain Analysis (the volume number has not been assigned).
149
2.3 Deformation charge and plasticity The charge of symmetry appearing on the right-hand side of eq. (1) has a significant role in the deformation dynamics. From the law of mass conservation applied to a unit volume, the divergence of velocity field is equal to the temporal change in the density.
With the use of eq. (7), eq. (2) can be written in the following form.
Application of divergence to eq. (2) with the use of eq. (7) and the mathematical identity following relation between charge and .
leads to the
where is the flow of ; this allows us to call it the current of the deformation charge. Eq. (10) can be viewed as an equation of continuity for the deformation charge, putting it in the following form.
With eq. (8),
can further be written as
Here is the change in density over time. The current per unit volume.
has a dimension of momentum change over time, or force,
Eq. (12) tells us that when the velocity field is diverging, the density of a unit volume affixed on the coordinate axis changes over time, and that the temporal change can be viewed as a drift of . However, it does not tell whether the deformation is elastic or plastic; whether a given deformation is elastic or plastic depends on how the density changes over time. Fig. 1 illustrates it schematically where the left picture represents the elastic case and the right the plastic case [8].
Fig. 1 Schematic illustration of the one-dimensional case; (a) elastic case and (b) plastic case

2 More detailed description about the phase velocity will be found in the paper to be published in the Journal of Strain Analysis mentioned in footnote 1.
When the deformation is elastic, the displacement field has longitudinal wave character; when a trough of the longitudinal wave (a peak of rarefaction) passes through a unit volume, the density of the volume increases at the next moment. From this viewpoint, it is natural to interpret that in the elastic case, the drift velocity represents the phase velocity of the longitudinal wave
( : Young’s modulus). Substitution of
into eq. (11) leads to
hence,
and
Here
is the displacement,
Multiplication of
to the left-hand side and
to the right-hand side of eq. (13) leads to
With being the net force acting on the unit volume, the appearing on the rightmost-hand side of eq. (15) can be interpreted as the field force at the boundary of the unit volume. Note that it is proportional to the Young’s modulus. When the displacement is characterized as a longitudinal wave of linear elasticity, the left-hand side of eq. (2) is zero [7, 9]. Under this condition, with eqs. (13) and (14), eq. (2) becomes
Eq. (16) is the well-known wave equation representing longitudinal elastic waves. When the deformation is plastic, the temporal change in density associated with the divergence of the velocity field is due to dislocations, as Fig 1 illustrates; hence represents a flow of dislocation. Since the change in density is due to the net flow into the unit volume, is in the same direction as the total velocity of the point i.e., , or . So,
Being proportional to velocity, the longitudinal force be written in the following form.
is by nature energy dissipating. With the use of eq. (17), eq. (6) can
If the dislocation density rate is uniform and therefore the right-hand side of eq. (18) is small, the general solution to eq. (18) represents a decaying transverse wave. In a previous experimental study [10], clear decaying transverse wave characteristics have been observed in the displacement field of an aluminum alloy specimen under plastic deformation.
2.4 Ductility and fracture

With eq. (7) and rearrangement of the terms, eq. (2) can be rewritten as follows, which can be viewed as an equation of motion [11].

The left-hand side represents the change in momentum of the unit volume over time, equaling the net external force on the volume. The first term on the right-hand side represents the transverse restoring force due to the rotational deformation, in proportion to the shear modulus . The second term on the right-hand side represents the longitudinal force acting on the unit volume. In the elastic regime, it represents the differential elastic force, and in the plastic regime the energy-dissipating force discussed above. Being the recovery force proportional to the rotational displacement, the first term is the source of the transverse wave characteristics in the plastic regime. As deformation develops, this term becomes less effective and instead the second term becomes more effective, causing decay of the wave. This idea can be naturally extended to fracture as the final stage, where even the force due to the second term becomes zero. According to eqs. (11) and (12), there are two ways to make ; the first is or , and the second is . The former represents the situation where the material can no longer create dislocations via divergence in the velocity field. The latter is the situation where the location of the divergence in the velocity field cannot move, so that the material keeps increasing the strain concentration at the same spot, or in other words, the same spot keeps losing density. Naturally, this leads to discontinuity of the material, i.e., fracture.

3. Experimental

A number of experiments [9-12] have been conducted with the use of an optical interferometric technique known as ESPI (Electronic Speckle-Pattern Interferometry). Fig. 2 illustrates a typical ESPI setup. The laser beam (helium-neon laser oscillating at 632.8 nm) is split into two horizontal interferometric paths that are recombined on the specimen's surface after being expanded by beam expanders.
The left-hand side represents the change in momentum of the unit volume over time, equaling the net external force on the volume. The first term on the right-hand side represents the transverse restoring force due to the rotational deformation in proportion to the shear modulus . The second term on the right-hand side represents the longitudinal force acting on the unit volume. In the elastic regime, it represents the differential elastic force and in the plastic regime energy dissipating force as discussed above. Being the recovery force proportional to the rotational displacement, the first terms is the source of transverse wave characteristics in the plastic regime. As deformation develops, this term becomes less effective and instead the second term becomes more effective, causing decay of the wave. This idea can be naturally extended to fracture as the final stage where even the force due to the second term becomes zero. According to eqs. (11) and (12), there are two ways to make ; the first is or , and the second is . The former represents the situation where the material cannot create dislocation anymore via divergence in the velocity field. The second is the situation where the location where divergence in velocity cannot move, and consequently the material keeps increasing strain concentration at the same spot, or in other words, the same spot keeps losing the density. Naturally, this leads to discontinuity of the material or fracture. 3. Experimental A number of experiments [9-12] have been conducted with the use of an optical interferometric technique known as the ESPI (Electronic Speckle-pattern Intrerferometry). Fig. 2 illustrates a typical ESPI setup. The laser beam (helium-neon laser oscillating at 632.8 nm) is split into two horizontal interferometric paths that are recombined on the specimen’s surface after being expanded by beam expanders. 
The interferometric image of the specimen is captured by the CCD (charge-coupled device) camera at a constant frame rate (typically 30 frames/s), and the digital images are transferred to computer memory. Fringe patterns containing the in-plane displacement are generated via subtraction of a frame taken at a certain time step from a frame taken at another time step.
Fig. 2 Typical ESPI optical setup for deformation study
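The subtraction principle described above can be illustrated numerically. The speckle intensity at each pixel can be modeled as I = I0 + Ia cos(φ + δ), where φ is the random speckle phase and δ the load-induced phase change; subtracting two frames then leaves a modulation proportional to |sin(δ/2)|. The following sketch is a simplified simulation, not the actual acquisition code — unit amplitudes and a synthetic random phase field are assumptions:

```python
import math, random

def espi_frame(phis, delta):
    """Speckle intensity at each pixel: I = 1 + cos(phi + delta),
    with unit background and modulation amplitudes (an assumption)."""
    return [1.0 + math.cos(p + delta) for p in phis]

def subtraction_fringe_level(delta, n_pixels=10000, seed=1):
    """Mean absolute difference between a reference frame and a frame taken
    after a load-induced phase change delta; proportional to |sin(delta/2)|."""
    rng = random.Random(seed)
    phis = [rng.uniform(0.0, 2.0 * math.pi) for _ in range(n_pixels)]
    before = espi_frame(phis, 0.0)
    after = espi_frame(phis, delta)
    return sum(abs(b - a) for b, a in zip(before, after)) / n_pixels

# Dark fringes form where delta is a multiple of 2*pi; bright ones in between.
dark = subtraction_fringe_level(2.0 * math.pi)
bright = subtraction_fringe_level(math.pi)
```

The dark bands of a subtraction fringe pattern are thus the loci where the in-plane displacement component changed the optical path by an integer number of wavelengths between the two frames.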
Fig. 3 shows typical subtraction fringe patterns obtained in a tensile experiment on a 0.5 mm-thick aluminum alloy specimen (gauge section 25 mm x 4 mm) under a constant pulling speed of 20 m/s. The left three patterns [(a-1)-(a-3)] are formed in an early stage of the plastic regime and the right three [(b-1)-(b-3)] in the final stage of deformation. Bright band patterns are seen at various locations. Detailed studies [8, 11, 13] on this pattern indicate that it represents concentrated normal strain and is considered to represent the deformation charge defined in conjunction with its current given by eq. (11). Notice that Fig. 3 indicates that in the early stage the deformation charge is dynamic (it changes the location at which it appears), but in the late stage it tends to be stationary at the spot where the specimen eventually fractures. This observation clearly supports the above argument on the final stage of deformation, where the drift velocity becomes null.
Fig. 3 Fringe patterns showing bright bands presumably representing deformation charges: (a-1)-(a-3) early stage, (b-1)-(b-3) final stage

Also interesting to note in Fig. 3 is that the bright band appearing near the upper end of the specimen in Fig. 3 (a-3) has a curvature whose radial direction seems to be parallel to the change in the width of the specimen toward the upper end. This indicates that the bright band representing the deformation charge is drifting in the direction of the local displacement, whose pattern of stream lines diverges upward as the specimen becomes wider. A similar observation, that the pattern considered to represent the deformation charge drifts in the direction of displacement, was made in a tensile experiment on a notched specimen (see paper #224 of this proceedings book).

4. Summary

The dynamics of deformation has been discussed from the viewpoint of a field theoretical approach based on gauge invariance. The deformation charge has been identified as a quantity that causes energy dissipation and hence irreversible deformation in the plastic regime. It also plays a significant role in the transition from late plastic deformation to fracture. The ESPI technique has been described as a powerful experimental method to investigate deformation and fracture based on the theoretical approach discussed in this paper.

Acknowledgement

The present study was in part supported by a Southeastern Alumni Association grant. We are grateful to T. Sasaki of Niigata University, Japan, for his support in the experiments and discussions.

References
1. Hill, R., The Mathematical Theory of Plasticity, Oxford Classic Texts in the Physical Sciences, Clarendon Press, Oxford (1998).
2. Doltsinis, I., Elements of Plasticity, WIT Press, Southampton (2000).
3. Aitchison, I. J. R. and Hey, A. J. G., Gauge Theories in Particle Physics, IOP Publishing, Ltd., Bristol and Philadelphia (1989).
4. Panin, V. E., Grinyaev, Yu. V., Egorushkin, V. E., Buchbinder, I. L., and Kul'kov, S. N., Spectrum of excited states and the rotational mechanical field, Sov. Phys. J., 30, 24-38 (1987).
5. Panin, V. E. (Ed.), Physical Mesomechanics of Heterogeneous Media and Computer-Aided Design of Materials, vol. 1, Cambridge International Science, Cambridge (1998).
6. Dill, E. H., Continuum Mechanics, Taylor & Francis, Inc., New York (2006).
7. Yoshida, S., Field theoretical approach to dynamics of plastic deformation and fracture, AIP Conference Proceedings, vol. 1186, pp. 108-119 (2009).
8. Yoshida, S., Physical meaning of physical-mesomechanical formulation of deformation and fracture, AIP Conference Proceedings, vol. 1301, pp. 146-155 (2010).
9. Yoshida, S., Gaffney, J. A., and Yoshida, K., Revealing load hysteresis based on physical-mesomechanical deformation and fracture criteria, Physical Mesomechanics, 13, 337-343 (2010).
10. Yoshida, S., Siahaan, B., Pardede, M. H., Sijabat, N., Simangunsong, H., Simbolon, T., and Kusnowo, A., Observation of plastic deformation wave in a tensile-loaded aluminum-alloy, Phys. Lett. A, 251, 54-60 (1999).
11. Yoshida, S., Dynamics of plastic deformation based on restoring and energy dissipative mechanisms in plasticity, Physical Mesomechanics, 11, 140-146 (2008).
12. Yoshida, S. and Toyooka, S., Field theoretical interpretation on dynamics of plastic deformation—Portevin–Le Chatelier effect and propagation of shear band, J. Phys.: Condens. Matter, 13, 6741-6757 (2001).
13. Toyooka, S., Widiastuti, R., Qingchuan, Z., and Kato, H., Dynamic observation of localized strain pulsation generated in the plastic deformation process by electronic speckle pattern interferometry, Jpn. J. Appl. Phys., 40, 310-313 (2001).
Optical Techniques That Measure Displacements: A Review of the Basic Principles

Cesar Sciammarella
Department of Mechanical Engineering, College of Engineering & Engineering Technology, Northern Illinois University, DeKalb, IL USA

Keywords: moiré, holography, speckle methods, fringe analysis, digital image correlation

Abstract

There are a number of optical techniques that can be used to measure displacements on surfaces utilizing electromagnetic radiation. Moiré, holography, speckle interferometry, and speckle photography developed separately in the range of visible light. These techniques make it feasible to find displacement fields and, through the displacements, to obtain the information needed to compute strain tensors. All of the above-mentioned optical techniques are also used to perform metrological measurements on surfaces, both under light-reflecting and light-diffusing conditions, and to measure slopes on reflecting surfaces. In this paper, taking a very general approach, these techniques are presented as a body of scientific and technological knowledge with basic components that are common to all of them.

1.0 Introduction

There are a number of methods to find displacements and topography of surfaces. These methods, in chronological order of development, are: the moiré method, holography, and speckle. In this paper these techniques will be called optical techniques that measure displacements, or OTD. It is interesting to make a group evaluation and to put them into perspective. What is common to them and what is different is of interest for a proper evaluation. All of them utilize light as the basic tool of measurement.

2.0 The imaging process

A fundamental step in all the OTD is the utilization of an optical system that produces some kind of image of an object surface that is illuminated by a light source.
If the instrument that gathers the information is a camera, all of the OTD methods are subject to a common set of relationships that are a consequence of imaging objects of the 3-D space into a 2-D space. These relationships are generally referred to an ideal camera model, the pinhole camera [1]. All the laws of projective geometry apply to the pinhole camera. These laws describe the transformation of a 3-D space to a 2-D space through a point, the projection center. The projection center plays a fundamental role in all these techniques, although in many derivations its presence is not specifically mentioned. Figure 1 shows a complex object, a tree, reduced to a two-dimensional distribution of intensities and colors; from this projection one must make measurements related to the object, the tree.
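The pinhole transformation from 3-D object space to the 2-D image plane can be sketched in a few lines. This is a minimal model; the focal distance and coordinate convention are illustrative assumptions, not taken from the paper:

```python
def pinhole_project(point, focal):
    """Central projection of a 3-D point (x, y, z) through the projection
    center at the origin onto the image plane z = focal (ideal pinhole model)."""
    x, y, z = point
    if z == 0:
        raise ValueError("point lies in the plane of the projection center")
    return (focal * x / z, focal * y / z)

# Two points on the same projecting ray map to the same image point,
# which is why a single view cannot recover depth.
p1 = pinhole_project((1.0, 2.0, 4.0), focal=2.0)
p2 = pinhole_project((2.0, 4.0, 8.0), focal=2.0)
```

The collapse of every projecting ray to a single image point is exactly the loss of information that the techniques reviewed below must compensate for.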
Figure 1. Pinhole camera

3.0 Coherent and non coherent sources

All the different techniques, including holography, can be implemented with coherent or non coherent illumination [2], although in the case of holography non-coherent recording is not the ideal way of generating holograms. The type of illumination creates a separation between what are called interferometric techniques and non interferometric techniques. A very important point to be remarked on this aspect of image formation is that the basic laws that connect light intensity modulation with displacements and geometrical variables are the same; only the range of application is modified. Coherent illumination allows the measurements to be carried out beyond the limits of the classical optical resolution. Incoherent illumination is also a powerful tool in microscopy applications and in applications where color plays a role in the performed measurements. Each one of the techniques we are looking at has particular aspects that need to be taken into consideration when dealing with incoherent and coherent illumination. The optical transfer function is a function of the wavelength of light and changes with the coherence of the light. Hence the images that contain the information we are seeking are influenced by the coherence or non coherence of the light.

4.0 Fundamental parameters common to all the techniques

Spatial frequency is a fundamental concept that applies to all the OTD methods, Figure 2. Utilizing the nomenclature of the Fourier transform (FT), there are two spaces, the physical space and the frequency space, that are related by a reciprocal relationship. Through Fourier analysis we know that to any configuration in the physical space corresponds a configuration in the frequency space through the FT. To extend this concept to image formation through a projection center it is necessary to introduce an angular variable [3], Figure 3. The angle θ = X/R is the variable that relates the fundamental frequency X to the process of projection from a center, and it is possible to define the angular frequency ξang = R/X. The laws of projective geometry provide the means of extracting information from the 2-D image that pertains to the 3-D object. Added to this aspect of the process of gathering information through a projection center is the fact that there is a lens system that has its own laws of transferring information. In mathematical terms, we have a set of points in the S3 space that is converted into a set in the S2 space through the recording of a 2-D matrix of rows and columns that contain positive integers coded into a set of gray levels. Because of the presence of pupil apertures in the lens system, the geometrically ideal point is transformed, due to the diffraction effect of the pupil, into a distribution of intensities that depends on the particular pupil that controls the image; in the simple case of a circular pupil it is an Airy intensity distribution.

T. Proulx (ed.), Optical Measurements, Modeling, and Metrology, Volume 5, Conference Proceedings of the Society for Experimental Mechanics Series, DOI 10.1007/978-1-4614-0228-2_21, © The Society for Experimental Mechanics, Inc. 2011
Figure 2. Spatial and inverse space frequencies
5.0 The sensitivity vector

Figure 4 brings us to another important concept that is common to all the techniques, the sensitivity vector [4]. Although it was originally introduced in holography, it applies to all the OTD methods, since it defines the displacement components that the optical set-up is sensitive to. In the original developments of moiré there was a separation between two forms of moiré: moiré that is sensitive to in-plane displacements, called intrinsic moiré, and moiré sensitive to out-of-plane displacements, called shadow moiré.
Figure 3. Angular space frequency ξang = R/X.
Figure 4. Definition of the sensitivity vector

In Figure 3 we have an observation point So that connects the spatial frequencies with the angular frequencies resulting from projecting an object point P from the observation point So. These two points define the observation vector Ko. Figure 4 completes the picture because it brings in the other factor that is pertinent to recording spatial information in coherent illumination, the illumination source Se. The illumination vector Ke is defined by the position of the illumination source Se that illuminates a given object point P.
Figure 5. Definition of the sensitivity vector

Subtracting the observation vector from the illumination vector one obtains the sensitivity vector,
$$ \vec{S} = \vec{K}_e - \vec{K}_0 \qquad (1) $$
That leads to the fundamental equation,
$$ \delta_s = \vec{S} \cdot \vec{L} \qquad (2) $$
The difference of optical path produced by the displacement of a point under observation is equal to the dot product of the displacement vector $\vec{L}$ and the sensitivity vector. Although the derivation has been made for coherent light, the sensitivity concept can be extended to incoherent illumination.
Figure 6. General case of detection of the displacement vector d with a sensitivity vector S

Figure 6 illustrates the general case of a finite point source and a finite observation point. The sensitivity vector detects a projection of the displacement that involves all the Cartesian components [5]. Determination of the Cartesian components requires at least three equations with three unknowns. To achieve this objective it is necessary to get a well-conditioned linear system. To improve accuracy, multiple sensitivity vectors can be created and a process of optimization must be selected, for example least-squares error minimization [5]. Up to this point we have not referred to any particular method, since all have the same capability of recording information. Besides the point-by-point solution through different sensitivity vectors, the Cartesian displacement components can be obtained by manipulating the optical set-up to make it sensitive to a selected component, or by manipulating the processing system to achieve the same objective.
$$ \begin{bmatrix} S_{1x} & S_{1y} & S_{1z} \\ S_{2x} & S_{2y} & S_{2z} \\ S_{3x} & S_{3y} & S_{3z} \end{bmatrix} \begin{bmatrix} d_x \\ d_y \\ d_z \end{bmatrix} = 2\pi \begin{bmatrix} N_1 \\ N_2 \\ N_3 \end{bmatrix} \qquad (3) $$
The matrix equation (3) connects the Cartesian components of the sensitivity vectors S, the components of the displacement vector d, and the fringe orders observed in at least three different images of the same object [5]. The matrix equation must be solved for each point of the surface.

6.0 Recording displacements in a plane. Coherent sources

At this point we can separate measurements performed on a plane surface from measurements that can retrieve displacement information from a 3-D surface. Moiré, speckle and holography can be utilized to measure displacements of initially plane surfaces, and the accuracies that can be obtained are very close. Dealing with 3-D surfaces, classical moiré is not a practical choice; although its use on 3-D surfaces has been attempted, the need to print a grating on a 3-D surface presents a technologically difficult task. As a consequence, techniques based on speckles are the simplest choice for the task of measuring displacements on 3-D surfaces. Speckle techniques can be utilized for this purpose either with a reference beam, that is, holography, or without a separate reference beam. One can then utilize speckle techniques in all their different versions, including the so-called white light speckle method [6]. At this point we should mention an important fact: white light speckle is many times associated with a particular form of data processing, digital image correlation (DIC), which in reality is a very general tool that can be applied to any of the OTD methods. Let us look at the expression of the sensitivity vector in the general case, Figure 7.
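The point-wise solution of eq. (3) is a small 3×3 linear solve repeated at every surface point. The sketch below uses hypothetical sensitivity values and plain Gaussian elimination for self-containment; in practice a library solver and a least-squares formulation over more than three views would be used:

```python
import math

def solve_displacement(S, N):
    """Solve the 3x3 system S . d = 2*pi*N of eq. (3) for the Cartesian
    displacement components d = (dx, dy, dz), given three sensitivity
    vectors (the rows of S, in rad per unit length) and the three measured
    fringe orders N. Gaussian elimination with partial pivoting."""
    a = [row[:] + [2.0 * math.pi * n] for row, n in zip(S, N)]
    for col in range(3):
        piv = max(range(col, 3), key=lambda r: abs(a[r][col]))
        a[col], a[piv] = a[piv], a[col]
        for r in range(col + 1, 3):
            f = a[r][col] / a[col][col]
            for c in range(col, 4):
                a[r][c] -= f * a[col][c]
    d = [0.0, 0.0, 0.0]
    for r in (2, 1, 0):
        d[r] = (a[r][3] - sum(a[r][c] * d[c] for c in range(r + 1, 3))) / a[r][r]
    return d

# Hypothetical diagonal (decoupled) sensitivity matrix: each of the three
# images senses a single displacement component.
S = [[2.0, 0.0, 0.0], [0.0, 2.0, 0.0], [0.0, 0.0, 4.0]]
d = solve_displacement(S, [1.0, 2.0, 1.0])
```

A well-conditioned system, as the text requires, means the three sensitivity vectors must not be close to coplanar; otherwise the solve amplifies fringe-order noise.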
The x-component of the illumination vector is given by,

$$ k_{ex} = \frac{x_C - x_P}{\sqrt{(x_C - x_P)^2 + (y_C - y_P)^2 + (z_C - z_P)^2}} \qquad (4) $$

The x-component of the observation vector is given by,

$$ k_{0x} = \frac{x_P - x_S}{\sqrt{(x_P - x_S)^2 + (y_P - y_S)^2 + (z_P - z_S)^2}} \qquad (5) $$

Figure 7. Computation of the observation and illumination vectors

The x-component of the sensitivity vector is then,

$$ S_{1x} = \frac{2\pi}{\lambda} \left( k_{ex} - k_{0x} \right) \qquad (6) $$
Now the preceding equations can be expressed as functions of the direction cosines of the vectors, and we end up with,
$$ S_{1x} = \frac{2\pi}{\lambda} \left( \cos\alpha_{ex} - \cos\alpha_{0x} \right) \qquad (7) $$

$$ S_{1y} = \frac{2\pi}{\lambda} \left( \cos\beta_{ey} - \cos\beta_{0y} \right) \qquad (8) $$

$$ S_{1z} = \frac{2\pi}{\lambda} \left( \cos\gamma_{ez} - \cos\gamma_{0z} \right) \qquad (9) $$
where α, β, γ, with the corresponding subscripts, represent the direction cosines of the illumination and observation vectors with respect to the axes x, y and z. The above equations apply to all the coherent illumination sources. They are very general, and one can manipulate the sensitivity vectors in different ways to separate components of displacement in convenient forms that avoid the solution of systems of equations in which more than one component is involved. Figure 8 shows an optical arrangement to separate the u and v components. In Figure 8 the cosines of the illumination vectors and of the observation vector are selected to be zero with respect to the y-axis; in this way the sensitivity vector component S1y = 0. Both the fact that the illumination vectors are perpendicular to the y-axis and the fact that the viewing direction is orthogonal to the y-axis make the sensitivity component in the y-direction equal to zero.
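Eqs. (7)-(9) can be evaluated directly from the positions of the source, the observation point and the object point. The sketch below uses illustrative coordinates and one consistent sign convention (illumination from the source toward the object point, observation from the object point toward the camera); signs should be checked against the figure when applying it to a real set-up:

```python
import math

def direction_cosines(a, b):
    """Direction cosines of the vector from point a to point b."""
    d = [q - p for p, q in zip(a, b)]
    n = math.sqrt(sum(c * c for c in d))
    return [c / n for c in d]

def sensitivity_vector(source, observer, point, wavelength):
    """Eqs. (7)-(9): (2*pi/lambda) times the difference between the
    direction cosines of the illumination and observation vectors."""
    ke = direction_cosines(source, point)     # illumination: source -> P
    ko = direction_cosines(point, observer)   # observation: P -> observer
    return [2.0 * math.pi / wavelength * (e - o) for e, o in zip(ke, ko)]

# Geometry confined to the x-z plane, as in Figure 8: the y-component
# of the sensitivity vector vanishes. Coordinates are illustrative (mm).
S = sensitivity_vector(source=(-100.0, 0.0, -100.0),
                       observer=(0.0, 0.0, -200.0),
                       point=(0.0, 0.0, 0.0),
                       wavelength=0.6328e-3)
```

Evaluating this at every object point shows how the sensitivity vector varies across the field when the source and camera are at finite distances, which is what the collimation and telecentric-viewing arguments below are designed to suppress.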
Figure 8. Determination of the in-plane components of the displacement vector
Figure 8 shows an optical arrangement to separate the u and v components [7]. A double system of illumination is introduced to make the sensitivity component in the z-direction equal to zero. The symmetry of the two illumination vectors with respect to the z-axis removes the sensitivity with respect to z. The observation vector, however, poses a problem. One must assume that the image is viewed with a telecentric lens that allows only the passage of paraxial beams making small angles with the z-axis, yielding S1z = 0. This set-up is valid for all the OTD methods. Figure 9 shows a typical arrangement with dual beams symmetrically oriented with respect to the normal to the plane under analysis [8], [9], [10].
Figure 9. Typical interferometric set-up for double illumination producing an in-plane sensitivity vector

This configuration yields the value for Sx [8],

$$ S_x = \frac{\lambda}{2\sin\theta} \qquad (10) $$

The same value is obtained for Sy if one rotates the illumination 90°. Using data processing algorithms it is possible to measure the out-of-plane displacement w with the same set-up [8],

$$ w = \frac{n\lambda}{2(1 + \cos\theta)} \qquad (11) $$

In the case that one utilizes moiré both equations are also valid; the only difference is that the angle θ is selected to match the pitch p of the grating. The double illumination arrangement can be replaced by a double viewing arrangement. One form of double viewing is shown in Figure 10 [11], [12].
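Eq. (10) translates directly into a displacement-per-fringe figure. As a numerical illustration with assumed values (He-Ne wavelength, beams at 30° from the surface normal):

```python
import math

def in_plane_sensitivity(wavelength, theta):
    """Displacement per fringe order for symmetric dual-beam illumination,
    eq. (10): Sx = lambda / (2 sin(theta))."""
    return wavelength / (2.0 * math.sin(theta))

# He-Ne laser (632.8 nm), illumination beams at 30 degrees from the normal:
# sin(30 deg) = 0.5, so each fringe corresponds to one wavelength of
# in-plane displacement.
sx = in_plane_sensitivity(632.8e-9, math.radians(30.0))
u = 5 * sx   # in-plane displacement at a point crossed by 5 fringes (m)
```

Increasing θ toward grazing incidence raises the sensitivity toward its limit of λ/2 per fringe, at the cost of a more awkward illumination geometry.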
Figure 10. Duffy's double viewing optical arrangement

A double perforated aperture is attached to a lens. The two apertures produce two sets of coherent speckles that are added in amplitude, generating interference fringes. The pitch of the interference fringes is,

$$ p = \frac{1}{f_p} = \frac{\lambda S_i}{A_1 A_2} \qquad (12) $$

An interferometer with sensitivity only in the z-direction is shown in Figure 11.
Figure 11. Interferometer with sensitivity in the direction normal to the surface

The interferometer arrangement is equivalent to a Michelson interferometer for rough surfaces in the case of speckle interferometry [9], [13]. It can be utilized to measure the deflection of plates. Again the illumination must be collimated; in such a case the sensitivity vector will be the same for all the points of the surface and equal to,

$$ S_z = n \frac{\lambda}{2} \qquad (13) $$
The interferometer is very sensitive, as it can measure displacements on the order of half a wavelength. It can also be utilized in moiré interferometry and, by adequate optical operations, can yield in-plane and out-of-plane displacements.
Figure 12. Moiré patterns formed by moiré interferometry and by speckle interferometry

Figure 12 shows two moiré patterns. One was obtained with moiré interferometry, that is, the coherent interference of two smooth waves [14]. The other pattern was produced by speckle interferometry, the interference of two rough wavefronts. It is possible to see that the moiré fringes, the isothetic lines, are almost the same. The main difference is that the isothetic lines in the speckle pattern are formed by the modulation of the speckle field by the displacement field of the disk under diametrical compression. Both patterns were produced by superposition of the undeformed carrier and the deformed carrier. In the case of moiré interferometry the superposition was made by computer software operating on the deformed and undeformed carriers and by digital filtering. In the case of the speckle pattern the superposition was made by adding the speckle patterns of the undeformed and deformed disk, also applying digital filtering. There are similarities in the two procedures; both are based on the presence of a carrier: in the case of moiré it is a deterministic carrier, in the case of speckle it is a random carrier. There is an important difference between the two ways of producing isothetic lines: in the case of moiré the isothetic lines can be produced in a number of different ways, depending on whether the deformed carrier is recorded or the object carrier is viewed through a master or reference carrier. To understand the meaning of Figure 12 the process of fringe formation must be analyzed. In the case of moiré interferometry two smooth wavefronts interfere and produce an interference pattern,
$$ I(x, y) = I_0(x, y) + I_a(x, y) \cos\delta \qquad (14) $$
In (14), I0 is the background illumination, Ia is the amplitude of the fundamental harmonic of the grating, and δ represents the optical path difference that corresponds to the projected displacement of a particular point of coordinates (x, y). Equation (14) is valid for all the points of the medium. It is assumed that the displacement field that produces the optical path difference is continuous and has continuous derivatives up to the third order. Utilizing the definition of visibility,
$$ I(x, y) = I_0(x, y)\left(1 + V_s(x, y) \cos\delta\right) \qquad (15) $$
The value of Vs(x, y) will depend on the coherence of the two interfering beams. In the case of interference by diffusing surfaces there is an important additional phenomenon to be considered, the decorrelation of the wavefronts, which is a direct consequence of the second-order statistics of speckle patterns [15]. Decorrelation is the most important limiting factor in the use of speckle patterns to measure displacements [14]. Two diffuse light wavefronts can only interfere if the light is coming from two zones of the object that are within the correlation region. For example, in the case of a circular pupil, the Airy diffraction pattern of the pupil controls the formation of the image. If the wavefronts are not coming from within the correlation region, the randomness of the wavefronts will produce a random pattern of intensities without any distinctive feature. If they are within the correlation region, the ability to interfere will depend on a mutual complex correlation factor. As the complex correlation factor [15] μA(Δx, Δy) → 0, the ability to gather displacement information is lost; this is called the decorrelation process. The decorrelation problem is very complex because of the many different mechanisms that can operate in the formation of speckle patterns [14]. There is no general theory of decorrelation, but there are models developed for some usual configurations utilized to obtain isothetic lines from speckle patterns, and theories for some specific variables: for example, for rigid-body displacements of an object under observation there is pupil-plane decorrelation when the magnitude of the surface roughness is comparable to the wavelength of light [15]. In practice these problems are most often solved experimentally. When there is interference of diffusing wavefronts [14], [15], equation (15) is still valid pointwise.
For a rough surface the following interference equation can be written [14], [15],

$$ \langle I(P) \rangle = \langle I_0(P) \rangle \left(1 + V_{cs}(x, y) \cos\delta\right) \qquad (16) $$

where ⟨I⟩ is the resulting average intensity, the symbol ⟨ ⟩ indicates the mean value over the ensemble, δ is the difference of phase due to the change of the optical path caused by the applied load, and Vcs is the visibility of the carrier signal (speckle). Considering that the ensemble averages are taken over a distance large enough compared to the speckle size, the ordinary definition of fringe visibility can be utilized [16],

$$ V_{cs} = \frac{\langle I \rangle_{max} - \langle I \rangle_{min}}{\langle I \rangle_{max} + \langle I \rangle_{min}} = \frac{2\sqrt{\langle I_1 \rangle \langle I_2 \rangle}}{\langle I_1 \rangle + \langle I_2 \rangle}\,\Gamma_v \qquad (17) $$
The additional term Γv [16] depends on the degree of correlation between the two local wavefronts that interfere; this factor has values between zero and one. Figure 12 shows the difference between the isothetic fringes produced by the two methods, moiré interferometry and speckle interferometry. As far as interpretation is concerned, the basic laws of formation are the same. The difference arises from the type of reference signal that is utilized to produce the fringes. The figure shows that the visibility of the fringes is reduced by the speckle and that the interference is produced by rough wavefronts. The information is collected pointwise, actually in a very small region.
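The visibility reduction described above can be illustrated numerically. The sketch below evaluates the two-beam visibility of eq. (17); the beam intensities and the value of the correlation factor are arbitrary assumptions for the illustration:

```python
import math

def visibility(i_max, i_min):
    """Ordinary fringe visibility, the first equality of eq. (17)."""
    return (i_max - i_min) / (i_max + i_min)

def two_beam_visibility(i1, i2, gamma=1.0):
    """Two-beam visibility reduced by the correlation factor gamma in [0, 1]:
    fully correlated equal beams give visibility 1; decorrelation lowers it."""
    return 2.0 * math.sqrt(i1 * i2) / (i1 + i2) * gamma

v_full = two_beam_visibility(1.0, 1.0)         # ideal smooth wavefronts
v_decor = two_beam_visibility(1.0, 1.0, 0.4)   # reduced by decorrelation
v_meas = visibility(1.4, 0.6)                  # from measured extrema
```

As Γv falls toward zero in the decorrelated regions of Figure 12, the fringe contrast collapses even though the beam intensities are unchanged.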
Figure 13. Formation of speckle fringes.
In Figure 13, the envelope represents the ideal irradiance produced by a smooth wavefront. The red line represents the ensemble average produced by a rough wavefront. This fact is emphasized in Figure 13 by showing a large drop in fringe visibility. It is possible to see in Figure 12 that in the upper part of the disk the decorrelation has reduced the contrast of the fringes almost to zero. In a word, speckle interferometry yields a signal with a smaller signal-to-noise ratio and suffers a severe limitation on the displacements that can be observed in one loading step. The two interferometric methods utilize, as carriers of information, signals that are an integral part of the observed surface. Moiré utilizes deterministic carriers, sinusoidal signals that produce smooth wavefronts that ideally can follow deformations without limitations. Speckle interferometry utilizes random signals as carriers of information. This property of the carrier limits the deformation that can be observed in one single picture and reduces the quality of the information carried. The double illumination can be replaced by double viewing in both techniques. Let us now turn our attention to the case of non coherent illumination.

7.0 Recording displacements in a plane. Incoherent sources
In the case of incoherent illumination the following derivations apply to all the methods that operate with incoherent illumination: moiré and speckle photography. Speckle photography is in the same relationship to incoherent-light moiré as speckle interferometry is to moiré interferometry. Classical speckle photography utilizes a carrier signal produced by the random interference fringes generated by the surface roughness; alternatively, one can create a random pattern applied to the surface by any of the engraving mechanisms provided by current technology, which is referred to in the literature as white light speckle [6].

Figure 14. Sensitivity of the incoherent moiré to the third dimension

White light speckle is a methodology identical in many aspects to speckle photography; it has, however, one advantage with respect to classical speckle photography. In speckle photography it is necessary to assume that the formed speckles remain practically unchanged when the body is deformed, that is, the surface carries the speckles as it is deformed; this assumption has limitations that do not affect white light speckle. What is the difference between moiré with incoherent illumination and the so-called white light speckle? Incoherent-light moiré, as in the case of moiré interferometry, utilizes a deterministic signal; white light speckle utilizes a random signal. In the vast majority of white light speckle applications a fringe processing methodology has evolved that is different from the one utilized in traditional speckle photography. As said before, this methodology can be applied to any type of signal, whether deterministic or random. This point will be further discussed later in this paper. Returning to the concept of the sensitivity vector for non coherent illumination, let us first consider the case of traditional moiré, which facilitates the application of this concept. Figure 14 shows the recording of a moiré pattern by a camera [17]. In moiré, assuming a grating with two orthogonal rulings of the same pitch p, the in-plane sensitivities are given by,

$$ S_{1x} = S_{1y} = p \qquad (18) $$

Although Figure 14 corresponds to the moiré case, the conclusion extracted from this figure is valid for speckle photography, as we are going to see in what follows. Figure 14 provides the necessary elements to establish the sensitivity in the z-direction. The sensitivity in the z-direction for a given point P depends on the projecting ray that forms the image of the point in the camera. Although the reference image is shown as a reference master, the analysis also applies when an initial image or reference image is utilized to form the moiré pattern. As shown in the picture, the sensitivity for a point P depends on the displacement w that the point, initially in a plane surface, has experienced as the plane surface is deformed. If D is the distance between the optical projection center L of the lens and the plane of the model, and r is the distance of the point P to the optical axis, the slope of the ray that projects P from L is,

$$ \tan\beta = \frac{D}{r} \qquad (19) $$
In the derivation of (19) it is assumed that the displacement w << D, Figure 14, so that w can be neglected in the computation of the tangent of β. The fictitious displacement P′P″ is then equal to,

$$ P'P'' = \frac{w}{\tan\beta} = \frac{wr}{D} \qquad (20) $$

The components of the fictitious displacement in the direction of the x and y axes are:

$$ u' = \frac{wr}{D}\cos\alpha = \frac{w}{D}\,x \qquad (21) $$

$$ v' = \frac{wr}{D}\sin\alpha = \frac{w}{D}\,y \qquad (22) $$
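The size of this fictitious contribution is easy to estimate from eqs. (21)-(22). A sketch with assumed, illustrative dimensions in millimetres:

```python
def fictitious_displacement(x, y, w, D):
    """Apparent in-plane displacements (u', v') of eqs. (21)-(22) caused by
    an out-of-plane displacement w at point (x, y), for a projection center
    at distance D from the object plane. All lengths in the same units."""
    return (w * x / D, w * y / D)

# A 0.1 mm out-of-plane motion viewed from D = 1000 mm at x = 50 mm, y = 0
# contributes only 5 micrometers of apparent in-plane displacement,
# which is why making D >> w suppresses the effect.
u_f, v_f = fictitious_displacement(50.0, 0.0, 0.1, 1000.0)
```

The same estimate shows why a telecentric lens helps: it keeps the projecting rays nearly parallel to the axis, which is equivalent to making D effectively very large.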
The above equations show that in incoherent moiré the corresponding fringes contain not only information concerning the in-plane displacements u and v but also information on the out-of-plane displacement w. In the older moiré literature this effect was considered a source of error in the moiré method. In reality it provides complete information concerning the displacements of the points of a plane. Classically this problem was handled in moiré in two ways. One simple way is to make D >> w so that the components u' and v' are negligible with respect to the values of the displacements u and v. A more accurate way is to utilize a telecentric lens system that minimizes β. In places where w becomes larger than in the rest of the surface, for example in areas where concentrated loads are applied, w will produce components that may not be negligible. A third alternative, if one is interested in all three displacements, is to take at least two different views of the pattern and write three equations for the three unknowns u, v and w. Hence, as was the case with coherent moiré, moiré with incoherent light is sensitive to the out-of-plane displacements, and oblique views can be utilized to obtain complete displacement information of a surface. To extend this concept to speckle photography, we can look at a classical data processing technique.

8.0 Speckle Photography

Utilizing the analogy with Young's fringe formation, the displacement vector can be measured pointwise in speckle photography by illuminating an exposed film with a thin beam of light [18]. From the analysis of the fringe formation in the point-by-point method of speckle photography it is possible to derive a very important conclusion: in speckle photography, in order to get fringes it is necessary to have displacements larger than the speckle size. This result separates the two methods of speckle interferometry and speckle photography. Where one of the two techniques is no longer valid (i.e. speckle interferometry), the other technique (i.e. speckle photography) starts to be valid. The determination of the displacement field can be extended to the full field. The procedure can be carried out utilizing optical filtering. The first assumption is that the initial and final patterns are recorded using the same recording medium. The optical system of Figure 16 can be utilized to retrieve full-field fringes [18].
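The multi-view idea in the paragraph above reduces to a small linear system: each view contributes one projection equation mixing u, v and w through geometric coefficients. The sketch below uses made-up sensitivity coefficients (not values from the text) and solves the redundant system by least squares:

```python
import numpy as np

# Hypothetical example: two views of the same pattern. Each measured
# projected displacement d_k mixes the true components (u, v, w) through
# view-dependent coefficients (assumed known from the setup geometry):
#   d_k = a_k*u + b_k*v + c_k*w
A = np.array([
    [1.0, 0.0,  0.10],   # view 1, x-sensitive, slight w coupling
    [0.0, 1.0,  0.10],   # view 1, y-sensitive
    [1.0, 0.0, -0.25],   # view 2 (oblique), x-sensitive
    [0.0, 1.0, -0.25],   # view 2, y-sensitive
])
u_true, v_true, w_true = 3.0, -1.5, 8.0
d = A @ np.array([u_true, v_true, w_true])   # simulated measurements

# Least-squares solve for (u, v, w); redundant views improve conditioning.
uvw, *_ = np.linalg.lstsq(A, d, rcond=None)
print(uvw)   # ≈ [3.0, -1.5, 8.0]
```

With more than three equations the least-squares solution also averages out measurement noise, which is why a redundant system is recommended in the text.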
Figure 16. Coherent light filtering to record speckle photography fringes in a full field

The transparency containing the superimposed speckle patterns is set in the input plane and illuminated with collimated coherent laser light. In the FT plane there are four orifices that allow performing optical filtering [8]. The image is filtered utilizing pairs of orifices, and the image plane provides the correlation fringes that correspond not just to a point but to the projected displacements in two orthogonal directions for the entire specimen. Performing the analysis of the formation of the fringes by the imaging lens one arrives at the conclusion that the observed pattern is equivalent to a moiré pattern produced by a grating of pitch p,

\[ p = \frac{1}{f_p} = \frac{2\lambda f}{m L_{ob}} \tag{23} \]
where λ is the wavelength of the laser light utilized to illuminate the transparency, f is the focal distance of the lens, m is the magnification utilized to take the specimen picture and L_ob is the size of the illuminated area in the specimen; if for simplicity we assume a square area, the same pitch applies to the orthogonal direction. The pattern in the FT plane of the optical system is also a speckle pattern. In the process of filtering this speckle pattern, an aperture of diameter d_f was introduced on the screen. The size ρ of the speckle pattern observed in the image plane of the optical system is of the order of magnitude

\[ \rho = \frac{\lambda f}{d_f} \tag{24} \]
Relating the pitch of the equivalent grating to the speckle size ρ in the image plane one arrives at

\[ \frac{\rho}{p_v} = \frac{L_{0x}}{2\,d_f} \]

Since L_{0x}/2 >> d_f, the formed fringes are internal to the speckle.
Figure 16. Aperture to filter the displacement information in the frequency plane

The value of d_f has to be adjusted experimentally to get a satisfactory image. To get an idea of the order of magnitude of the pitches that can be practically observed, let us assume a system with f = 60 inches (f = 1524 mm); the size of the region under observation is L_ob = 60 mm (paraxial region of the lens), the magnification is m = 1, and the wavelength of the light is λ = 0.6328 μm. The following numbers are obtained:

\[ p_v = \frac{1}{f_p} = \frac{0.6328 \times 1524 \times 10^{-6}}{0.5 \times 60} = 16.1\ \mu\text{m} \]

or, in lines per mm, 62 lines/mm. The angular aperture is

\[ \alpha = \frac{\lambda}{p_v} = \frac{0.6328}{16.1} = 0.0393\ \text{rad} = 2.25\ \text{degrees} \]
The above optical processing system can be directly transformed into a digital operation by capturing two images of the object, before and after deformation, with the sensor of a CCD camera and implementing all the operations digitally, in a way similar to digital holography. Figure 17 shows the arrangement for double viewing in speckle photography.
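As a minimal sketch of that digital transformation (an assumed implementation, not the specific arrangement of Figure 17), the rigid displacement between two digitally recorded speckle images can be obtained from the peak of their FFT-based cross-correlation:

```python
import numpy as np

def speckle_shift(img0, img1):
    """Estimate the rigid shift between two speckle images via the
    correlation theorem: multiply spectra in the Fourier domain,
    inverse-transform, and locate the correlation peak (integer-pixel
    estimate)."""
    F0 = np.fft.fft2(img0)
    F1 = np.fft.fft2(img1)
    corr = np.fft.ifft2(F1 * np.conj(F0)).real
    peak = np.unravel_index(np.argmax(corr), corr.shape)
    # Indices above N/2 correspond to negative shifts.
    return tuple(p if p <= n // 2 else p - n for p, n in zip(peak, corr.shape))

# Synthetic test: a random "speckle" image and a copy displaced by (3, -5).
rng = np.random.default_rng(0)
img0 = rng.random((64, 64))
img1 = np.roll(img0, shift=(3, -5), axis=(0, 1))
print(speckle_shift(img0, img1))   # → (3, -5)
```

Subpixel accuracy is obtained in practice by interpolating around the correlation peak; the sketch stops at the integer-pixel estimate.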
9.0 3-D surfaces

All the preceding developments were devoted to the measurement of displacements on plane surfaces. We need to analyze a more general problem: displacements and deformations of 3-D surfaces. The first developments in this area came from holographic interferometry.
Figure 17. Double viewing arrangement for speckle photography. (Cameras 1 and 2 observe along directions K_o1 and K_o2, separated by the angle θ, with the illumination direction and the x, z axes indicated.)

In the solution of this problem the sensitivity vector plays a fundamental role, as has been pointed out in sections 5 and 6. There are a number of problems associated with the determination of displacements on a 3-D surface. The first is to introduce a carrier on the surface. As pointed out before, it is a difficult technological problem to introduce a deterministic carrier. Thus one is left with the utilization of a random carrier: either the speckle pattern produced by the coherent illumination, or a random pattern applied to the surface, the case of white-light speckle. The same decorrelation phenomenon that was mentioned for the speckle patterns in 2-D images is present in 3-D holography, limiting the observable displacements to displacements smaller than the speckle size. An additional problem that arises is the localization of the interference fringes. When one focuses on a plane and the plane does not experience large displacements in the direction perpendicular to the plane, the fringes are formed on the plane surface. This is no longer the case for 3-D surfaces: the fringe visibility deteriorates and a number of different approaches need to be applied [18]. In the case of a 3-D surface it is necessary to make the same distinction between interferometry and photography.

9.1 3-D Coherent Illumination
One has the classical problem of holography analyzed in sections 5 and 6. In view of the changes of the sensitivity vector one needs at least 3 different holograms that must provide well-conditioned equations [5]. In general, to get accurate results a redundant system is necessary [5]. While the method was successfully applied, it is not a very practical tool.
Figure 18. In-plane displacement with dual illumination. (a), (b) Single-beam patterns; (c) superposition of (a) and (b); (d) same as (c) but with an initial pattern; (e), (f) diffraction pattern of (d) and filtered image

A method to simplify the process of 3-D analysis in holographic interferometry is the holographic moiré method [7], [10]. Figure 18 shows the effect when double illumination is applied and thus the sensitivity vector is an in-plane vector [10]. Two patterns are formed; if the density of fringes is not large and carrier fringes are added, good quality isothetic lines can be obtained, as shown in Figure 18. There are a number of ways that the carrier fringes can be generated [16].
Figure 19. Rotation of the reference beam to introduce auxiliary fringes

One of the problems encountered in holographic moiré was the method to generate carrier fringes. For this purpose the holographic plate was rotated. By utilizing the scheme shown in Figure 19, carrier fringes can be generated by rotating the reference beam [16]. In this case, to find the components of displacement, only holograms utilizing a single plane of projection can be utilized.
Figure 20. Displacement components and coordinate system.

Utilizing the method outlined in Figure 19, the holographic moiré method is extended to three dimensions. Utilizing the Monge type of representation, the three components of the displacement vector, u, v and w, can be separated. It is then possible to represent three different systems of isothetic lines: the u-lines, the v-lines and the w-lines. Utilizing sensitivity vectors in two orthogonal Cartesian directions one can obtain u(x, y) = f_u(x, y), v(x, y) = f_v(x, y) and w(x, y) = f_w(x, y) [19]-[21]. This method has been utilized in the case of a cylinder under internal pressure. Figure 21 shows the displacement components of the pipe under internal pressure obtained by double illumination in two orthogonal directions. The displacement w was obtained utilizing the method mentioned in section 6 and applying (11). With two orthogonal-illumination holograms the displacements of the points of a surface have been obtained. The following procedure projects the displacements of the points of a surface onto a Cartesian system of coordinates that has the x-y plane as its projection plane.
If the strains are the final objective of the study it is necessary to perform additional transformations that require geometrical information of the body.

9.2 3-D Determination of the strains
Strains can be obtained directly from recorded holographic moiré by differentiation of the patterns in the frequency space [22]. This operation is straightforward when dealing with plane surfaces. When dealing with 3-D surfaces there is an important set of concepts that are a consequence of the description of Continuum Mechanics variables and the changes of these variables with the coordinate systems utilized to describe them [21], [23], [24]. In the case of displacements, which are vectorial quantities, the rules of transformation are simple and are contained in vector algebra. The transformations of tensors are not so simple, because when analyzing tensors on a 3-D surface the tensor must be contained in the tangent plane to the surface. This is well known in applications of 3-D elasticity solutions (e.g., the theory of shells).
Figure 21. Fringe patterns of the displacements of a cylinder under internal pressure.

In Experimental Mechanics a similar procedure must be followed, and local strains on 3-D surfaces must be represented in 2-D coordinate systems contained in the tangent plane to the surface. This operation of transformation of the strain tensor is very important in 3-D holographic interferometry of surfaces, since displacements are obtained in a selected global coordinate system that is utilized for displacement computation. Hence the derivatives of the displacements are obtained in the global system. The passage from the global coordinate system to the local one involves the transformation of the strain tensor from the global coordinate system to the local coordinate systems, at each point of the surface. Figure 22 shows the global coordinate system 0-x1, x2, x3 and the local coordinate system contained in the tangent plane to the surface, 0-x'1, x'2, x'3. The passage involves a rotation of the coordinate system in 3-D space, expressed by the matrix transformation

\[ [\varepsilon'] = \begin{bmatrix} \alpha_{11} & \alpha_{12} & \alpha_{13} \\ \alpha_{21} & \alpha_{22} & \alpha_{23} \\ \alpha_{31} & \alpha_{32} & \alpha_{33} \end{bmatrix} [\varepsilon] \begin{bmatrix} \alpha_{11} & \alpha_{21} & \alpha_{31} \\ \alpha_{12} & \alpha_{22} & \alpha_{32} \\ \alpha_{13} & \alpha_{23} & \alpha_{33} \end{bmatrix} \tag{25} \]
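The transformation (25) is a congruence of the strain tensor with the direction-cosine matrix, ε' = A ε Aᵀ. A minimal numerical sketch, with purely illustrative values (a 30° rotation about the z-axis standing in for the direction cosines obtained from the surface contour):

```python
import numpy as np

def rotate_strain(eps, A):
    """Transform a 3x3 strain tensor into the rotated coordinate system,
    Eq. (25): eps' = A @ eps @ A.T, A being the direction-cosine matrix."""
    return A @ eps @ A.T

theta = np.deg2rad(30.0)            # assumed rotation, illustrative only
c, s = np.cos(theta), np.sin(theta)
A = np.array([[ c,  s, 0.0],
              [-s,  c, 0.0],
              [0.0, 0.0, 1.0]])

eps = np.array([[1e-3, 2e-4, 0.0],  # assumed global strain tensor
                [2e-4, -5e-4, 0.0],
                [0.0,  0.0,  0.0]])
eps_local = rotate_strain(eps, A)
# Tensor invariants (e.g. the trace) are preserved by the rotation.
print(np.trace(eps_local))
```

Checking that the trace and symmetry are unchanged after the transformation is a convenient sanity test for the direction cosines computed from the contouring data.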
Figure 22. Global coordinate system and local system.

Hence it is necessary to know all the direction cosines of the transformation, and this can be done only if one has the information concerning the contour of the surface. The setup shown in Figure 23 was used to get the strains and stresses in the resonant modes of the SRB-SPU turbine that provides power to the space shuttle during landing [23], [24].
Figure 23. Setup to obtain strains and stresses in resonant modes of a turbine blade.

The turbine works under extreme conditions of speed (80,000 rpm) and temperature (>1000 °C), and has a diameter of 6 inches while producing 150 HP. An initial recording of the blade is made in a holographic recording system. The CCD camera captures, in real time, the interference fringes between the reference image and the image of the vibrating blade produced by the stroboscopic illumination. The blade is excited with the shaker and the corresponding resonant modes are observed in real time. The process previously outlined to obtain strains from holographic moiré patterns was utilized. Since stroboscopic illumination was utilized, sinusoidal fringes were recorded. Displacements and strains were obtained by applying double illumination in two orthogonal directions. Since the determination of the local strains in the blades requires the changes of coordinates, it is necessary to get the profile of the blades. A contouring method based on the rotation of the illumination source [7], [25] was used to get the contour of the blade. From the geometrical information all the direction cosines needed to perform the change of coordinates from the global to the local system were computed. Figure 24 shows the holographic moiré patterns of carrier fringes for both displacements and contours. Figure 25 provides the principal stresses and isostatics caused by blade vibration.
Figure 24. (a) Vibration-pattern carrier fringes recorded with stroboscopic illumination; (b) carrier contour fringes used to obtain the geometry of the turbine blade. The upper figure shows the shape of the blade and the global coordinate axes utilized.

Consequently, moiré holography can provide complete information on a general 3-D surface subjected to loading.
Figure 25. Principal stresses and isostatics of the SRB-SPU turbine blade.
9.3 3-D Incoherent illumination
The double-viewing system shown in Figure 17 can be applied to get 3-D information in a way similar to what was described in the preceding sections. The recording of the double illumination in the unloaded and loaded conditions yields the contour of the deformed surface. Likewise, in incoherent illumination the superposition of the patterns recorded in the unloaded and loaded conditions will provide the projected displacements. However, the sensitivity vector for the general case will change from point to point, and it will be necessary to utilize the pointwise solution of three different projections to get the displacement vector.
Figure 26. Graphical representation of the process of modulation of a carrier
Viewing with telecentric lenses produces an effect similar to collimated illumination, reducing the changes of the sensitivity vector to the z-direction when one wants to get the in-plane displacements.

10.0 Process to recover the information from recorded data
Figure 26 shows a cross-section of the signal along the x-axis to simplify the visualization of the relationships between the different variables that must be considered [26]-[30]. In the theory of communications, a carrier wave, or carrier, is a waveform (usually sinusoidal) that is modulated (modified) with an input signal for the purpose of conveying information. The carrier wave is usually of much higher frequency than the input signal. The purpose of the carrier is to encode information in order to transmit it. Phase modulation and amplitude modulation (AM) are common methods to modulate the carrier; in what follows phase modulation will be described. The carriers in the optical techniques applied to Experimental Mechanics can be gratings printed on the surface under analysis. In the case of surface contouring the carriers are projected lines. In methods like speckle or holography the carriers are extracted from features existing on the surfaces of the analyzed bodies; these features can also be artificially created. In what follows the analysis of a sinusoidal carrier will be introduced as a mathematical model for all types of carriers. One must remember that, by utilizing Fourier transform methodology, all integrable functions can be represented by their expansion in Fourier integrals. The carrier can be thought of as a sinusoidal function generated by a rotating vector E, and the phase of the carrier at a point of coordinate x is defined as the total angle rotated by the vector up to that point. δ(x) is the modulation function, a function that encodes the optical path difference as an angular variable. The total phase of the modulated carrier is the addition of the phase generated by the constant rotation plus the modulation function contribution. The general equation for a modulated carrier is

\[ I(x) = I_{0c} + I_{1c}\cos\!\left(2\pi f_c x + \delta(x)\right) \tag{26} \]
where f_c is the frequency of the carrier, a known quantity because the carrier was introduced with a certain spatial frequency that can be defined as f_c = 1/p_c. The presence of a carrier is required in certain optical methods. In other methods the introduction of a carrier may be useful because it can greatly simplify the data processing. Figure 26 shows that the carrier phase change is a linear function of the coordinate x and that the modulation function is added to this function, resulting in a total phase that corresponds to the modulated carrier. Hence from (26),

\[ \varphi(x, y) = 2\pi f_c x + \delta(x) = \arccos\frac{I(x, y) - I_0(x, y)}{I_1(x, y)} \tag{27} \]
Knowing φ(x, y) one can get the modulation function,

\[ \delta(x) = \varphi(x) - 2\pi f_c x \tag{28} \]
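A numerical sketch of (26)-(28): a synthetic modulated carrier is generated and its modulation function recovered by subtracting the linear carrier phase. Instead of the arccos of (27), which folds the phase into [0, π], this sketch extracts the total phase through an FFT-based analytic signal, an equivalent route; all parameter values are assumed:

```python
import numpy as np

N = 1024
x = np.arange(N)
fc = 50 / N                               # carrier: 50 full cycles over the record
delta = 0.8 * np.sin(2 * np.pi * x / N)   # assumed smooth modulation function
I = 2.0 + 1.0 * np.cos(2 * np.pi * fc * x + delta)   # modulated carrier, Eq. (26)

# Analytic signal of the AC part: zero the negative-frequency half of the
# spectrum and double the positive half.
F = np.fft.fft(I - I.mean())
F[N // 2 + 1:] = 0.0
F[1:N // 2] *= 2.0
analytic = np.fft.ifft(F)

phi = np.unwrap(np.angle(analytic))       # total phase of the modulated carrier
delta_rec = phi - 2 * np.pi * fc * x      # Eq. (28): subtract the carrier phase
delta_rec -= delta_rec.mean()             # remove the arbitrary constant offset
print(np.max(np.abs(delta_rec - delta)))  # small residual error
```

The unwrapping step is trivial here because the synthetic phase is smooth; as the text notes, in real data unwrapping can be the difficult part of the process.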
From (26) to (28), and looking at Figure 26, it is possible to see that, starting from a given sign convention, the presence of a carrier defines the corresponding sign of the modulation function δ(x). This means that the carrier provides a reference frequency that removes the need of knowing where the zero reference order is. This is a problem that arises in the interpretation of fringe systems, as in Photoelasticity. There are several ways that one can determine the phase, but all of them are based on the utilization of a trigonometric function that limits the phase retrieval to the range 0 to 2π. This leads to the process of unwrapping which, although it works well for smooth functions, can present difficult practical problems of implementation in actual applications.

11.0 Digital image correlation

In DIC the optical process to obtain correlation between signals is replaced by digital procedures [31]-[34]. In DIC displacements are directly obtained from point trajectories and the process of fringe unwrapping is bypassed. The understanding and interpretation of the basic aspects that relate phase and displacements [26]-[30] is a straightforward process. The theory behind DIC to relate displacements and light irradiances is more complex. DIC is a general technique to extract displacement information from the recorded irradiance of deformed bodies. DIC is particularly useful when random carriers are utilized; hence this particular application will be emphasized in this paper.
In the application of two random carriers with DIC, two random signal images are recorded and saved in the memory of a computer. From these two images a small subset is extracted, Figure 27. The subset contains a distribution of gray levels. The cross-correlation between the two subsets is computed. A correlation peak is produced; the position of the peak in the subset gives the local displacement of the subset. The height of the peak gives the degree of correlation, or similarity, of the gray levels of the initial and final configurations of the subset. If the cross-correlation has been normalized to the value 1, values of the peak near one indicate a good correlation; as the peak takes lower values the correlation degrades. Unlike moiré or speckle interferometry, which are based on resolved patterns of irradiance levels, DIC is based on a subset of pixels. As a result, information on displacements inside the subset cannot be obtained. This aspect of DIC poses a problem of spatial resolution that must be considered in actual applications. Therefore the ratio of the subset size in pixels to the overall size of the region under observation is a very important quantity that determines the spatial resolution of the obtained results. Summarizing, the measured displacements are the displacements of a subset.
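The subset-matching step just described can be sketched as follows: the zero-normalized cross-correlation (ZNCC) between the reference subset and candidate subsets of the deformed image is evaluated at integer offsets, and the peak gives the subset displacement. Images and parameter values are synthetic and illustrative:

```python
import numpy as np

def zncc(a, b):
    """Zero-normalized cross-correlation of two equal-size subsets;
    1.0 means perfect similarity of the gray-level distributions."""
    a = a - a.mean()
    b = b - b.mean()
    return float((a * b).sum() / np.sqrt((a**2).sum() * (b**2).sum()))

rng = np.random.default_rng(1)
image0 = rng.random((80, 80))                     # random-carrier image
image1 = np.roll(image0, (4, 2), axis=(0, 1))     # simulated displacement (4, 2)

y0, x0, Ns = 30, 30, 16                           # subset location and size
ref = image0[y0:y0 + Ns, x0:x0 + Ns]

# Exhaustive search over integer offsets in the deformed image.
best = (-2.0, (0, 0))
for dy in range(-8, 9):
    for dx in range(-8, 9):
        cur = image1[y0 + dy:y0 + dy + Ns, x0 + dx:x0 + dx + Ns]
        best = max(best, (zncc(ref, cur), (dy, dx)))
peak, (dy, dx) = best
print(peak, (dy, dx))   # peak ≈ 1.0 at the true displacement (4, 2)
```

A peak value well below 1 would signal decorrelation between the two recordings, exactly the degradation mechanism discussed in the text.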
Figure 27. Illustration of the cross-correlation of images.

Figure 28 shows a speckle displacement field after all the different subsets of the field have been correlated and merged into trajectories. In a single sentence: DIC provides the lines that are tangent to the trajectories of the points of a surface. From the trajectories one can extract the displacement field information. If one performs the above-described process of correlation without additional corrections, the displacement vectors will have random variations in both direction and magnitude from subset to subset. In the DIC literature there is a large variety of approaches to the solution of this problem. DIC heavily depends on knowledge-based information to introduce corrections to the recovered displacements. These different optimization procedures can be subdivided into two basic groups: methods that operate in the actual space and methods that utilize the FT space [26]-[30], [35]-[38].
Figure 28. Displacement field obtained from a speckle pattern.

A certain region of a deformed surface is analyzed; this region has experienced rigid body translations and rotations due to the deformation of the rest of the body that the observed patch belongs to, plus a local deformation; the object of DIC is to obtain the local deformation. One has a given surface that, for the sake of simplicity, is assumed to be a plane and is viewed in the direction normal to the surface. Furthermore, it is assumed that a telecentric system is used to get the image of the surface. In this way it is possible to separate the problems concerning image formation, analyzed in some detail in preceding sections, from the problem of image correlation. On this surface one has a certain distribution of intensities that, it will be assumed, corresponds to the random signal incorporated into the surface and is represented by a function I_i(x, y). A
displacement field is applied to the surface and a final distribution of intensities I_f(x, y) is obtained. It is assumed that the light intensity changes are only a function of the displacement field and, as is the case in all experimental methods, noise is present. Noise here denotes all the changes of intensity that are not caused by the displacement field. The displacement field is defined by the function [39], [41],

\[ \vec{D}(x, y) = u(x, y)\,\hat{i} + v(x, y)\,\hat{j} \tag{29} \]

From the preceding assumption,

\[ I_f(x_i + u, y_i + v) = I_i(x_i, y_i) + \Delta I(x_i + \Delta u, y_i + \Delta v) + I_n \tag{30} \]

In (30) ΔI is the change of intensity caused by the rigid body motion plus the local deformation of the analyzed surface; the assumption that the light intensity is modified only by the displacements is implicit. The term I_n refers to all other causes of change of intensity. The validity of (30) boils down to the signal-to-noise ratio. To develop the model one has to postulate that the signal content of I_n is small and hence can be neglected. The problem to be solved is to find u(x,y) and v(x,y) knowing I_i(x,y) and I_f(x+u, y+v). The solution of the above problem requires the regularity of the functions u(x,y) and v(x,y) implicit in the theory of the continuum. One can formulate the problem as an optimization problem: find the best values of these two functions that minimize or maximize a real function, the objective function of the optimization process. There are many criteria that can be utilized for this purpose. One criterion is least squares: the difference of the intensities of the two images must be minimized as a function of the experienced displacements. Calling Φ(u,v) the optimization function,
\[ \Phi(u, v) = \iint \left[ I_f(x_i + u,\, y_i + v) - I_i(x_i, y_i) \right]^2 dx\,dy \tag{31} \]
For small u(x,y) and v(x,y) the above expression can be expanded in a Taylor series; limiting the expansion to the first order and using vectorial notation,
\[ \Phi(\vec{D}) = \iint \left[ I_f(\vec{r}) - I_i(\vec{r}) + \vec{D}(\vec{r}) \cdot \nabla I_f(\vec{r}) \right]^2 dx\,dy \tag{32} \]
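For a constant displacement D over a region, minimizing (32) leads to a 2×2 linear system in (u, v): the normal equations (Σ ∇I_f ∇I_fᵀ) D = −Σ (I_f − I_i) ∇I_f. A one-step sketch on a smooth synthetic intensity field (the field and the shift values are assumptions for illustration):

```python
import numpy as np

def intensity(X, Y):
    # Smooth synthetic intensity field, illustrative only.
    return np.sin(0.2 * X) + np.cos(0.15 * Y) + 0.5 * np.sin(0.1 * X + 0.07 * Y)

y, x = np.mgrid[0:64, 0:64].astype(float)
u_true, v_true = 0.3, -0.2                 # small subpixel displacement
I_i = intensity(x, y)
I_f = intensity(x - u_true, y - v_true)    # features displaced by (u_true, v_true)

gy, gx = np.gradient(I_f)                  # image gradients (rows = y, cols = x)
e = I_f - I_i                              # intensity difference

# Normal equations of the linearized least-squares problem of Eq. (32).
A = np.array([[(gx * gx).sum(), (gx * gy).sum()],
              [(gx * gy).sum(), (gy * gy).sum()]])
b = -np.array([(e * gx).sum(), (e * gy).sum()])
u_est, v_est = np.linalg.solve(A, b)
print(u_est, v_est)   # ≈ (0.3, -0.2)
```

The first-order linearization is only valid for small displacements, which is why practical DIC codes iterate this step inside a nonlinear optimization loop.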
In the above equation r is the spatial coordinate, D(r) is the displacement vector and ∇ is the gradient operator. Equation (32) tells us that the displacement information is associated with the gradient of the intensity distribution. I_f is a scalar function (light intensity), its gradient is a vector and, going back to Figure 28, the displacement vectors are plotted following the vectors joining the centers of the correlation peaks of the sub-images. Hence the displacement information can be retrieved following the gradient of the light intensity. The minimization of the objective function is then a central problem of the digital image correlation technique. In the technical literature there is a large variety of approaches to this problem. One can utilize criteria other than least squares [42]-[44].

Figure 29. Field for the correlation process. (a) Dotted rectangle, Ns×Ns sub-element; δ, mesh of the region of interest. (b) Displacement experienced by the sub-image, with components u and v.
Let us now look at the overall procedures that are necessary to obtain the displacement field. The region of interest is symbolically represented in Figure 29 by a square region. Figure 29(a) shows a scheme of computation. There is a region of interest, the big square; in one corner there is a sub-element that has a chosen size of Ns×Ns pixels, and the raster of dots indicates the positions of the centroids, which form a regular mesh of δ×δ pixels. Figure 29(b) shows how the sub-image is displaced and distorted after the deformation of the sample has taken place. By utilizing the model adopted in (32) and operating in the coordinate space it is possible to define the displacement vectors in the region of interest, as shown in Figures 28 and 29. Two images are being compared: the reference image, represented in Figure 29(a) by a square image of Ns×Ns pixels, and a second one, called the deformed image, represented by the distorted square. The operator chooses the size of the zones of interest, the sub-samples, by setting the size Ns so that Ns×Ns pixels are considered. To map the whole region of interest, the second parameter to choose is the separation δ between two consecutive sub-samples. The parameter δ defines the mesh formed by the centers of the sub-samples used to analyze the displacement field (Figure 29). Different strategies can be applied to retrieve the full field. Let us concentrate on the fundamental operation, the extraction of the information from a sub-sample. This aspect of the problem will be covered by utilizing an approach that is followed by a large number of contributors to this method.
\[ C_N = \frac{\displaystyle\sum_{i=1}^{m}\sum_{j=1}^{m} \left[ I_i(x_i, y_j) - I_f(x'_i, y'_j) \right]^2}{\displaystyle\sum_{i=1}^{m}\sum_{j=1}^{m} I_f^2(x'_i, y'_j)} \tag{33} \]
The process begins with a discrete and normalized version of (32). The deformed coordinates are obtained from the initial coordinates by a Taylor series expansion,

\[ x' = x + u + \frac{\partial u}{\partial x}\,dx + \frac{\partial u}{\partial y}\,dy \tag{34} \]

\[ y' = y + v + \frac{\partial v}{\partial x}\,dx + \frac{\partial v}{\partial y}\,dy \tag{35} \]
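Equations (34)-(35) define a first-order (affine) mapping of the subset coordinates: a rigid translation (u, v) plus displacement-gradient terms acting on the offsets (dx, dy) from the subset center. A direct sketch, with illustrative parameter values:

```python
import numpy as np

def warp_first_order(x, y, u, v, dudx, dudy, dvdx, dvdy, xc, yc):
    """Map subset points (x, y) into the deformed image per Eqs. (34)-(35);
    (xc, yc) is the subset center, so dx = x - xc and dy = y - yc."""
    dx, dy = x - xc, y - yc
    xp = x + u + dudx * dx + dudy * dy
    yp = y + v + dvdx * dx + dvdy * dy
    return xp, yp

# Example: 1% stretch in x plus a rigid shift of (2, -1) pixels,
# applied to a 16x16 subset centered at (17.5, 17.5).
xs, ys = np.meshgrid(np.arange(10, 26, dtype=float),
                     np.arange(10, 26, dtype=float))
xp, yp = warp_first_order(xs, ys, u=2.0, v=-1.0,
                          dudx=0.01, dudy=0.0, dvdx=0.0, dvdy=0.0,
                          xc=17.5, yc=17.5)
print(xp[0, 0], yp[0, 0])   # → 11.925 9.0
```

The warped coordinates (x', y') generally fall between pixels, which is where the gray-level interpolation discussed below enters.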
Here the Taylor series is terminated at the first order. Although higher orders can be introduced, it is easier and more convenient to explain the basic ideas of this particular approach to DIC by utilizing the first order. The meaning of the above equations can be better grasped by looking at Figure 29, where u and v contain the components of the rigid body displacement of the sub-sample, and the derivatives express the effect of the local deformations on the displacement field. To make sure that the distribution of intensities in one subset is continuous and has continuous derivatives, the light distribution I(x,y) is interpolated utilizing an expansion of the light intensity (e.g., a bicubic spline), as chosen by many authors that have contributed to DIC. The relationship between the displacement field and the gradient of the intensity field comes from (32): this equation indicates that the displacement field is associated with the gradient of the intensity field. To get displacement information from the image intensity distribution one replaces the bicubic spline expression in the normalized expression (33). After this substitution, the optimization of (33) requires the solution of a nonlinear system of equations. This brings additional complications, but there are many methods that were developed and can be applied in this case. There is a large variety of software packages for DIC. These packages depend fundamentally on the specific choice of the correlation coefficient C, defined in this paper by (33); on the function selected to describe the displacement field in a subset, called the shape function ϕ; and on the optimization algorithms and interpolation functions that are needed to compute sub-pixel displacements from images that were obtained with specific pixel resolutions.
One very important aspect that is quite often not referred to in the literature is that, no matter how complex the algorithm is, no gain of information can be achieved if this information does not already exist in the primary data, the gray levels. These levels depend on satisfying the Nyquist condition in connection with both the frequencies recovered and the sampling of the gray levels by the camera sensor. Summarizing the first basic concept, clearly shown in Figure 27: the comparison of the distributions of gray levels coming from two images (initial and final) provides a measure of the mechanical displacements experienced by a surface. The analysis of the intensity distribution is done on sub-set images and, following the structure of the electronic image sensor, these sub-images are squares of Ns×Ns pixels. This is the basic foundation of DIC that separates it from the other methods that measure displacements.
The second basic development is connected with the description of the displacement field in the sub-set. This second basic aspect of DIC heavily depends on knowledge-based information: a function ϕ is introduced that describes the displacement field of the sub-set domain; following the nomenclature of finite elements, ϕ is called the shape function. There are several shape functions ϕ utilized in DIC: ϕ constant, which corresponds to a rigid body motion of the sub-image; ϕ linear, an affine transformation; ϕ quadratic; and it is possible to include higher orders. The next fundamental development is embodied in (33)-(35), which relate the optical flow to the kinematic variables that depend on the choice of ϕ and can be represented by a vector P. Having posed the problem in terms of optical and mechanical variables, the next step is to relate both sets of variables. This is an inverse problem: knowing I_i(x,y) and I_f(x,y), find ϕ, that is, determine the vector P that best accounts for the observed optical flow. This connection between the two sets of variables is represented by (32) and is embodied in (33). This equation implies two choices. The first choice is the utilization of a Taylor expansion of the displacement field truncated at the first order term or at higher order terms. The second choice is the selection of the least squares criterion for the optimization procedure implicit in (31). This is the approach followed by the majority of the authors in the field and by most of the commercial packages that are available. However, as pointed out before, there are other optimization mechanisms that can be utilized. The theoretical framework described above to connect displacements to light intensity is unique to DIC and separates it from all the other techniques that were previously described. The inverse problem is formulated, the main variables are set up, and the next step is to solve the optimization problem.
The problem is formulated in terms of least squares, hence it is a nonlinear problem. The symbol Φ(P) represents the solution of the problem. The solution is the sum of a leading term plus additional terms: P(l) indicates a linear approximation to the displacement vector, P(q) indicates a quadratic solution, and one can utilize successively higher order terms.
\[ \Phi(P) = P_0(C) + P(l) + P(q) + \cdots \tag{36} \]
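A minimal 1-D illustration of the iterative solution of such a nonlinear least-squares problem: a Gauss-Newton refinement (a Newton-Raphson type scheme) of a subpixel shift on a synthetic signal. All signal and parameter values are assumed for illustration:

```python
import numpy as np

x = np.linspace(0.0, 2 * np.pi, 200)

def I_i(xq):
    # Smooth synthetic "reference" intensity profile.
    return np.sin(xq) + 0.4 * np.sin(3 * xq)

u_true = 0.123
I_f = I_i(x - u_true)          # "deformed" signal, shifted by u_true

u = 0.0                        # initial guess
for _ in range(10):            # Gauss-Newton iterations
    model = I_i(x - u)
    g = -np.gradient(model, x)  # d(model)/du, via a numerical x-derivative
    r = I_f - model             # current residual
    u += (g * r).sum() / (g * g).sum()   # normal-equation update
print(u)   # ≈ 0.123
```

Each iteration linearizes the residual about the current estimate and solves the resulting least-squares step, which is the same structure the 2-D subset problem takes with more parameters in P.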
In (36), P_0(C) indicates a constant term, P(l) a linear term and P(q) a quadratic term. The higher order terms of the power series become smaller as the order of the terms increases. The optimization is achieved utilizing nonlinear iterative optimization algorithms, such as first-gradient descent, Newton-Raphson, or Levenberg-Marquardt. To summarize DIC in a few sentences: although the actual approach to the solution of obtaining displacements from light intensity is complex and requires a number of choices, those choices are made by the developer of the software. Once a software package is put together, its operation is largely automatic. This has made DIC a very popular choice for experimental mechanics. Users should be cautious, however, that the Nyquist condition must always be satisfied, otherwise the results obtained will have no value.

11.1 Displacement and spatial resolution of DIC

There are two important aspects of DIC as applied to what is basically a speckle photography method. These two aspects are not specific to DIC as a method to retrieve displacements but are relevant to the currently prevailing methodology applied to the so-called white-light speckles: the resolution in the measurement of displacements, and the spatial resolution. The resolution in the measurement of displacements of the other techniques considered in this paper depends on the pitch, or equivalent pitch, of the basic carrier that records the displacements. The application of the Nyquist condition tells us that the maximum spatial frequency that can be retrieved is half the frequency of the sampling carrier.
In moiré the sampling depends on the pitch p of the carrier; in speckle interferometry the sampling frequency is given by the sensitivity equation (10); and in photographic speckle by equation (15), which defines the equivalent pitch. It is also required that these frequencies be recorded by the sensor of the camera, which must have a spatial sampling frequency twice the frequency of the carrier. This subject is not addressed in many papers of the DIC literature applied to white light speckles, but it is an important parameter in the displacement resolution of DIC, as in all other methods utilized in pattern analysis. Figure 30, [40], illustrates the definition of the equivalent of the speckle radius as the distance from the center of the correlation peak to the point of one half of the intensity, called r. These definitions are statistical and give a statistical estimate of the minimal distance between spots that can be considered measurable in the selected sub-domain. The values of r are utilized to define a fine pattern, with r slightly larger than one pixel, a medium pattern with r = 2 pixels, and a coarse pattern with r = 4 pixels. If one assumes that the minimum distance that can be measured corresponds to the distance between two points that can be separated, the spatial resolution will be 2 pixels for a fine pattern, 4 pixels for a medium pattern and 8 pixels for a coarse pattern. These quantities then provide the
maximum displacement resolution that can statistically be achieved for the corresponding patterns. This is a point that should be clearly understood in the application of DIC to white light speckles. Signal processing laws apply to all the types of signals that are utilized, independently of the algorithms that one may introduce. Concerning the spatial resolution, the studies described in [40] show that in all the different programs utilized in the DIC studies the displacement field inside the sub-set is not defined. Hence the number of sub-set pixels relative to the total number of pixels of the observed region provides a measure of the effect of subset size on the spatial resolution.
Figure 30. Fine, medium and coarse speckle sizes, defined by the radius of the autocorrelation peak at 50% of the intensity [40].
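The statistical radius r defined above can be estimated numerically from the autocorrelation of a recorded pattern. The following sketch builds a synthetic speckle-like image (the blur width is an arbitrary assumption, not a value from [40]) and finds the lag at which the autocorrelation peak falls to one half:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

# Synthetic white-light speckle pattern; the blur width (an arbitrary
# choice here) controls the speckle size.
rng = np.random.default_rng(1)
img = gaussian_filter(rng.random((256, 256)), 2.0)
img = img - img.mean()

# Autocorrelation via the Wiener-Khinchin theorem.
F = np.fft.fft2(img)
acf = np.fft.fftshift(np.fft.ifft2(np.abs(F) ** 2).real)
acf = acf / acf.max()

# Radius r: first lag at which a profile through the central peak
# falls below one half of the maximum.
c = acf.shape[0] // 2
profile = acf[c, c:]
r = int(np.argmax(profile < 0.5))
print("speckle radius r =", r, "pixels")
```

Following the classification above, a pattern with r near one pixel would count as fine and r around 4 pixels as coarse, with the corresponding spatial resolution of roughly 2r.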
12. Discussion and conclusions
All the OTD methods have potentially the same capability to perform the different operations required to measure displacements, either in 2-D or in 3-D, and to retrieve shape information. Basically, the OTD methods can be separated into two categories: techniques that utilize deterministic signals and methods that utilize random signals. Within the techniques that utilize random signals there are two basic types: a) techniques that use random signals produced by the pattern of interference generated by random surface roughness, and b) techniques that use artificially generated random patterns, or random patterns already existing on the surface from sources other than random interference. The basic difference between utilizing deterministic and random signals lies in the final signal-to-noise ratios and in the decorrelation phenomenon caused by the statistical structure of the wavefronts of random signals. At the same time, the signals produced by all the techniques depend on whether the light is coherent or incoherent. The relationship between signals and displacements, or metrology, is independent of the light coherence; only the range of application is affected by the degree of coherence of the light. Both speckle interferometry and moiré interferometry can reach very high accuracy because changes of phase of 2π correspond to the wavelength of light λ. Hence one can reach the 10 nm range in displacement measurement [12].

Table 1. Strains εx measured along the diameter of a disk under diametrical compression with different carrier pitches p, compared with the theoretical value. Δ is the absolute deviation from the theoretical value.

   #    p (μm)    εx (×10⁻⁶)    Δ (×10⁻⁶)
   1    Theory    166.497
   2    0.250
   3    0.365     165.000       1.49
   4    0.413     163.162       3.35
   5    0.492     162.681       3.81
   6    0.635     164.021       2.47
   7    0.925     166.335       0.162
   8    1.22      166.396       0.10
Despite the possible sources of error that speckle interferometry may have, it has been verified that in a disk under diametrical compression the strains computed from speckle interferometry, Table 1, are within an error of less than 1% of the theoretically computed values. In a disk under diametrical compression it is known that the strains theoretically computed for an ideal concentrated load and for the actual case, a disk with a narrow region of contact stresses, are approximately equal at points located around ¼ of the diameter. Similar agreement in strain values was observed for holographic moiré [20], with errors on the order of 1%.
Incoherent techniques apply to larger deformations; both moiré and speckle photography can be applied to a very large spectrum of deformations and specimen sizes. In all cases the most important factor is the quality of the signals that encode the displacement information. Utilizing the speckle pattern method [45], the displacements and strains along the diameter of a disk under diametrical compression were determined. Data were obtained for the same load with six different carrier pitches, from 1.22 to 0.365 microns (Table 1). All these carrier frequencies satisfy the Nyquist condition, both for the required sampling frequency of the carrier and for the required sampling of the utilized sensor. The final result indicated that the accuracy achieved in displacements and strains is the same when the Nyquist condition is satisfied, regardless of the carrier frequency utilized. These studies resulted in the formulation of the following principle, similar to the Heisenberg indetermination principle of signal analysis [46]:

ΔIs Δfs = C (37)
In the above equation ΔIs is the minimum detectable gray level, Is being the maximum amplitude of the available gray levels. The gray levels in a CCD camera or similar device are quantized, and the maximum theoretical dynamic range (amplitude of the vector) is one half of the total number of gray levels 2^n (for n = 8, Is = 128). The actual dynamic range is smaller than this quantity. Δfs is the maximum detectable sampling frequency.

Figure 31. Plot of the minimum detectable gray level ΔIs versus Δfs: experimental data together with the Heisenberg-principle curve, providing a numerical expression for equation (37).

The quantity Δfs is defined as

Δfs = p / Δum (38)
The practical question to be answered is: what is the minimum displacement information that can be recovered within a fringe spacing δ, where δ is a fringe wavelength? It is evident that there is a finite limit to the subdivision of the fringe spacing. The constant C reflects the whole process of obtaining displacement information: it is a function of the optical system, the device used to detect the fringes (CCD camera) and the algorithms used to get the displacement information. There are many important practical consequences of the principle formulated in (37). This equation is a valuable tool for planning experiments involving fringe analysis. Once the Whittaker-Shannon theorem is applied and the required minimum frequency of the carrier is computed, the next step is to select the carrier that is going to be used. In order to obtain frequency and displacement information it is necessary to maximize the number of energy levels used to encode this information. This implies that the largest portion of the dynamic range of the encoding system should be used to store useful information. By doing this the amount of noise in the signal is minimized. An immediate consequence is the need to increase the visibility of
the fringes within the range of options available. Consequently, when selecting a carrier, the Optical Transfer Function (OTF) and the MTF, the modulus of the OTF, of the whole system used to encode the information need to be taken into consideration. Figure 31 clearly shows the effect of encoding information in gray levels. If it is possible to detect close to 5 of the available 128 gray levels, it is possible to get displacement information that is 1/200 of the spatial frequency of the signal, or fringe pitch p. At the other extreme, if the minimum detectable gray level is 45 out of the 128, only 1/20 of the pitch p can be recovered. This is a basic law in the process of encoding displacement information, and it is independent of the particular method utilized for fringe analysis. Consequently, whether the illumination is coherent or incoherent, the recovery of information is governed by (37). Incoherent light moiré, speckle photography and white light speckle are particularly suited for the measurement of large displacements. What is called large is relative to the actual field of view. There is a relationship between the actual physical size of the analyzed area and the sampling frequency required to observe the displacement field in that area: the smaller the area, the higher the required sampling frequency, and vice versa. To extract displacement information from gray levels there are presently two approaches. One has its foundations in the classical analysis of signals as developed in the Theory of Communications; to avoid problems arising from intensity-based analysis, and following a common trend in optics, the notion of phase is utilized. The other option is digital image correlation, which utilizes the irradiance, as expressed in gray levels, in a different form: it relates the displacement vector to the changes in irradiance.
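The two operating points quoted above can be checked numerically. The sketch below (the calibrated constant C = 1000 is an illustrative assumption, not a value from [46]) evaluates the product ΔIs·Δfs and the corresponding minimum recoverable displacement from equation (38):

```python
# Two operating points quoted in the text: detecting 5 of 128 gray levels
# allows recovering p/200, detecting 45 allows only p/20. Their products
# dI_s * df_s are roughly the same constant C, as equation (37) states.
points = [(5, 200), (45, 20)]
for dI, dfs in points:
    print(f"dI_s = {dI:2d} gray levels, df_s = {dfs:3d}  ->  C ~ {dI * dfs}")

# With a calibrated C, equation (38) (df_s = p / du_m) gives the smallest
# recoverable displacement du_m for a carrier of pitch p. C = 1000 below
# is an illustrative placeholder, not a calibrated system constant.
def min_displacement(p_um, dI, C=1000.0):
    dfs = C / dI
    return p_um / dfs

print(min_displacement(25.4, 5))  # hypothetical 25.4 um pitch -> 0.127 um
```

The products 1000 and 900 agree to within the scatter visible in Figure 31, which is what equation (37) asserts for a given optical system and processing chain.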
To put this in perspective, the objective of both methods is to obtain from gray levels a vector field that depends on a tensor field (either the strain tensor or the stress tensor). The classical fringe analysis processing technique operates through the projection of the displacement vector onto two Cartesian components, say u and v, which are determined separately although they are components of one entity, the displacement vector. The basic selected variable is the concept of phase, and hence trigonometric variables are utilized. This particular selection of variables leads to a problem: fringe unwrapping. Fringe unwrapping is based on a simple concept, but the difficulty is in the implementation of this concept, due to the presence of noise in the signals that are processed. This is particularly true if one is dealing with random signals, as is the case with speckle patterns. Utilizing random signals as a carrier of information, another important problem must be faced: the decorrelation problem. DIC bypasses these two obstacles. As shown in Figure 29, DIC searches directly for the displacement vectors in the field and operates directly with intensities, as shown in (31), by relating the displacement vector to the changes of the intensity field. In the actual implementation of this approach a large number of functions need to be introduced and optimized, and choices must be made in the selection of these functions and in the optimization processes. As said before, the choices of functions and optimization processes are made by the software developer; once a software package is assembled, its operation does not require intense involvement of the user.
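The fringe-unwrapping step mentioned above can be illustrated with a one-dimensional sketch; the phase ramp is synthetic and the noise level is an arbitrary assumption:

```python
import numpy as np

# A smooth phase ramp spanning three fringes, wrapped into (-pi, pi].
x = np.linspace(0.0, 6 * np.pi, 400)
wrapped = np.angle(np.exp(1j * x))

# 1-D unwrapping: add +/-2*pi wherever a sample-to-sample jump exceeds pi.
unwrapped = np.unwrap(wrapped)
print(np.allclose(unwrapped, x))  # True for this noise-free signal

# With strong noise, spurious jumps larger than pi fool the jump detector
# and the error propagates to all following samples -- the practical
# difficulty noted above, aggravated for random (speckle) carriers.
noisy = wrapped + 0.8 * np.random.default_rng(2).standard_normal(400)
unwrapped_noisy = np.unwrap(noisy)
```

The noise-free case is trivial; robust two-dimensional unwrapping of noisy speckle phase maps is the hard problem that motivates much of the literature cited here.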
In a few words, and utilizing the general conclusions presented in [41], DIC is particularly suitable for the observation of displacement fields where the selected pixel size of the subsets Ns is small compared to the total number of pixels of the observed region, strains are large, and low order shape functions can be utilized. It is possible to say that DIC has made a variation of speckle photography, white light speckles, a practical tool in many technical problems of mechanics of materials.
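The subset-matching idea underlying DIC can be sketched as follows. This is a minimal translation-only (zero-order shape function) example on a synthetic pattern, not the full machinery with higher-order shape functions and developer-chosen correlation criteria discussed above:

```python
import numpy as np
from scipy.ndimage import gaussian_filter, map_coordinates
from scipy.optimize import least_squares

# Synthetic smooth speckle-like reference pattern.
rng = np.random.default_rng(0)
ref = gaussian_filter(rng.random((64, 64)), 2.0)

# Deformed image: the reference translated by a known sub-pixel shift.
true_shift = (1.3, -0.7)  # (dy, dx) in pixels
yy, xx = np.mgrid[0:64, 0:64].astype(float)
defo = map_coordinates(ref, [yy + true_shift[0], xx + true_shift[1]],
                       order=3, mode='nearest')

# One interior subset; zero-order (pure translation) shape function.
sub = (slice(16, 48), slice(16, 48))

def residual(p):
    dy, dx = p
    # Gray-level conservation: defo(x - d) should match ref(x) on the subset.
    warped = map_coordinates(defo, [yy[sub] - dy, xx[sub] - dx],
                             order=3, mode='nearest')
    return (warped - ref[sub]).ravel()

sol = least_squares(residual, x0=[0.0, 0.0])
print(sol.x)  # close to (1.3, -0.7)
```

The least-squares driver here plays the role of the Newton-Raphson or Levenberg-Marquardt iterations mentioned earlier; real DIC codes add linear and quadratic terms of the expansion (36) to each subset.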
References
[1] Faugeras, O., Three-Dimensional Computer Vision, MIT Press, Cambridge, MA, 1993.
[2] Stroke, G.W. and Restrick III, R.C., Volume 7, Number 9, 1 November 1965.
[3] Goodman, J.W., Introduction to Fourier Optics, Roberts and Co. Publishers, USA, 2005.
[4] Sollid, J.E., Translational displacements versus deformation displacements in double exposure holographic interferometry, Optics Communications, Vol. 2, pp. 282-288, 1970.
[5] Sciammarella, C.A. and Gilbert, J.A., Strain Analysis of a Disk Subjected to Diametral Compression, Applied Optics, Vol. 12, No. 8, July 1973.
[6] Forno, C., White-light speckle photography for measuring deformation, strain, and shape, Optics & Laser Technology, Vol. 7, No. 5, pp. 217-221, October 1975.
[7] Sciammarella, C.A., Holographic Moiré, an Optical Tool for the Determination of Displacements, Strains, Contours, and Slopes of Surfaces, Optical Engineering, Vol. 21, No. 3, pp. 447-457, 1982.
[8] Sciammarella, C.A. and Lurowist, N., Multiplication and interpolation of moiré fringe orders by purely optical techniques, J. of Applied Mechanics, pp. 425-430, 1967.
[9] Leendertz, J.A., Interferometric displacement measurement on scattering surfaces utilizing speckle effect, J. Phys. E: Sci. Instruments, Vol. 3, pp. 214-218, 1970.
[10] Sciammarella, C.A. and Gilbert, J.A., A Holographic-Moiré to Obtain Separate Patterns for Components of Displacement, Experimental Mechanics, Vol. 16, No. 6, June 1976.
[11] Duffy, D.E., Moiré Gauging of In-Plane Displacement Using Double Aperture Imaging, Applied Optics, Vol. 11, No. 8, pp. 1778-1781, 1972.
[12] Duffy, D.E., Measurement of surface displacement normal to the line of sight, Experimental Mechanics, Vol. 14, pp. 378-384, 1974.
[13] Leendertz, J.A. and Butters, J.N., A double exposure technique for speckle pattern interferometry, J. Phys. E: Sci. Instrum., Vol. 4, pp. 277-279, 1971.
[14] Jacquot, P., Speckle Interferometry: A Review of the Principal Methods in Use for Experimental Mechanics Applications, Strain, Vol. 44, pp. 57-69, 2009.
[15] Goodman, J.W., Statistical Properties of Laser Speckle Patterns, in Laser Speckle and Related Phenomena, Topics in Applied Physics, Vol. 9, Dainty, J.C., Editor, Springer-Verlag, 1975.
[16] Sciammarella, C.A., Jacquot, P. and Rastogi, P.K., Holographic Moiré Real Time Observation, presented at SESA's IV International Congress on Experimental Mechanics, Boston, May 1980; Experimental Mechanics, Vol. 22, No. 2, 1982.
[17] Sciammarella, C.A. and Chiang, F.P., Gap effect on moiré patterns, Zeitschrift für Angewandte Mathematik und Physik (ZAMP), Vol. 19, No. 2, pp. 326-333, DOI: 10.1007/BF01601476, 1968.
[18] Ennos, A.E., Speckle Interferometry, in Laser Speckle and Related Phenomena, Topics in Applied Physics, Vol. 9, Dainty, J.C., Editor, Springer-Verlag, 1975.
[19] Gilbert, J.A., Sciammarella, C.A. and Chawla, S.K., Extension to Three Dimensions of the Holographic-Moiré Technique to Separate Patterns Corresponding to Components of Displacement, Experimental Mechanics, Vol. 18, No. 9, September 1978.
[20] Sciammarella, C.A. and Chawla, S.K., A Lens Holographic-Moiré Technique to Obtain Components of Displacements and Derivatives, Experimental Mechanics, Vol. 18, No. 10, p. 373, October 1978.
[21] Sciammarella, C.A. and Ahmadshahi, M.A., A Computer Based Holographic Interferometry to Analyze 3-D Surfaces, Proceedings of IMEKO XI World Congress of the International Measurement Confederation, pp. 167-175, Houston, October 1988.
[22] Sciammarella, C.A. and Narayanan, R., The Determination of the Components of the Strain Tensor in Holographic Interferometry, Experimental Mechanics, Vol. 24, No. 4, December 1984.
[23] Sciammarella, C.A. and Ahmadshahi, M., Computer Aided Holographic Moiré Technique to Determine the Strains of Arbitrary Surfaces Vibrating in Resonant Modes, Proceedings of the 1989 Spring Conf. on Exp. Mech., Boston, Massachusetts, May-June 1989.
[24] Sciammarella, C.A. and Ahmadshahi, M., Non-Destructive Evaluation of Turbine Blades Vibrating in Resonant Modes, in Moiré Techniques, Holographic Interferometry, Optical NDT and Applications to Fluid Mechanics, Chiang, F.P., Editor, Proceedings of SPIE, Part Two, Vol. 1554B, 1991.
[25] Sciammarella, C.A., Computer-aided holographic moiré contouring, Optical Engineering, Vol. 39, pp. 99-105, 2000.
[26] Sciammarella, C.A., Basic Optical Law in the Interpretation of Moiré Patterns Applied to the Analysis of Strains, Experimental Mechanics, Vol. 5, pp. 154-160, 1965.
[27] Sciammarella, C.A., A Numerical Technique of Data Retrieval from Moiré or Photoelasticity Patterns, Pattern Recognition Studies, Proc. SPIE, Vol. 18, pp. 92-101, 1969.
[28] Sciammarella, C.A., Moiré Analysis of Displacements and Strain Fields, in Applications of Holography in Mechanics, Gottenberg, W.G., Editor, The American Society of Mechanical Engineers, 1971.
[29] Sciammarella, C.A. and Ahmadshahi, M.A., Determination of fringe pattern information using a computer based method, Proc. 8th International Conference on Experimental Stress Analysis, Amsterdam, The Netherlands, Wieringa, H., Editor, Martinus Nijhoff Publishers, pp. 359-368, 1986.
[30] Sciammarella, C.A., Fast Fourier Transform Methods to Process Fringe Data, in Basic Metrology and Applications, Barbato, G., Editor, Levrotto & Bella Publishers, Torino, 1994.
[31] Peters, W.H. and Ranson, W.F., Digital imaging techniques in experimental stress analysis, Optical Engineering, Vol. 21, pp. 427-431, 1982.
[32] Sutton, M.A., Wolters, W.J., Peters, W.H., Ranson, W.F. and McNeill, S.R., Determination of displacements using an improved digital correlation method, Image and Vision Computing, Vol. 1, No. 3, pp. 133-139, 1983.
[33] Sutton, M.A., McNeill, S.R., Jang, J. and Babai, M., The effects of subpixel image restoration on digital correlation error estimates, Optical Engineering, Vol. 27, No. 3, pp. 173-175, 1988.
[34] Sutton, M.A., Cheng, M., McNeill, S.R., Chao, Y.J. and Peters, W.H., Application of an optimized digital correlation method to planar deformation analysis, Image and Vision Computing, Vol. 4, No. 3, pp. 143-150, 1988.
[35] Chen, D.J., Chiang, F.P., Tan, Y.S. and Don, H.S., Digital Speckle-Displacement Measurement Using a Complex Spectrum Method, Applied Optics, Vol. 32, pp. 1839-1849, 1993.
[36] Chiang, F.P., Wang, Q. and Lehman, F., New Developments in Full-Field Strain Measurements Using Speckles, in Non-Traditional Methods of Sensing Stress, Strain and Damage in Materials and Structures, ASTM STP 1318, Philadelphia, pp. 156-169, 1997.
[37] Chiang, F.P., Wang, Q. and Lehman, F., New Developments in Full-Field Strain Measurements Using Speckles, in Non-Traditional Methods of Sensing Stress, Strain and Damage in Materials and Structures, ASTM STP 1318, Philadelphia, pp. 156-169, 1997.
[38] Sjödahl, M., Digital Speckle Photography, in Trends in Optical Non-Destructive Testing, Rastogi, P.K. and Inaudi, D., Editors, Elsevier, 2000.
[39] Hild, F. and Roux, S., Digital Image Correlation: From Displacement Measurement to Identification of Elastic Properties, Strain, Vol. 42, No. 2, pp. 69-88, May 2006.
[40] Roux, S., Réthoré, J. and Hild, F., Recent Progress in Digital Image Correlation: From Measurement to Mechanical Identification, 6th International Conference on Inverse Problems in Engineering: Theory and Practice, Journal of Physics: Conference Series, Vol. 135, IOP Publishing, 2008.
[41] Bornert, M. et al., Assessment of Digital Image Correlation Measurement Errors: Methodology and Results, Workgroup "Metrology" of the French CNRS research network 2519, "Mesures de Champs et Identification en Mécanique des Solides", November 24, 2008.
[42] Huber, P.J., Robust Statistics, Wiley, New York, 1981.
[43] Black, M., Robust Incremental Optical Flow, PhD dissertation, Yale University, 1992.
[44] Odobez, J.-M. and Bouthemy, P., Robust multiresolution estimation of parametric motion models, J. Visual Comm. Image Repres., Vol. 6, pp. 348-365, 1995.
[45] Sciammarella, C.A., Bhat, G.K. and Albertazzi, A., Analysis of the Sensitivity and Accuracy in the Measurement of Displacements by Means of Interferometric Fringes, in Hologram Interferometry and Speckle Metrology, Proceedings of SEM, 1990.
[46] Sciammarella, C.A. and Sciammarella, F.M., Heisenberg principle applied to the analysis of speckle interferometry fringes, Optics and Lasers in Engineering, Vol. 40, pp. 573-588, 2003.
Studying phase transformation in a shape memory alloy with full-field measurement techniques

D. Delpueyo, M. Grédiac, X. Balandraud, C. Badulescu
Clermont Université, Université Blaise Pascal & IFMA, EA 3867, Laboratoire de Mécanique et Ingénieries, BP 10448, F-63000 Clermont-Ferrand, France
ABSTRACT

This paper deals with the phase transformations that occur in a Cu-Al-Be single-crystal shape memory alloy specimen subjected to a tensile test. Two different techniques are used to experimentally evidence these phase transformations: the grid method, to obtain strain maps, and infrared thermography, to deduce heat source distributions from temperature fields. Some typical strain and heat source maps obtained during the loading and unloading phases are discussed and interpreted.

INTRODUCTION

Many studies of the martensitic microstructures that appear in shape memory alloys (SMA) are available in the literature. Classic means such as microscopes are generally employed to observe them, but the recent development of full-field measurement techniques has made it possible to observe phase appearance and transformation during mechanical tests. The spatial distribution of the phases on the surface of the specimens can be observed by analyzing the contrast in the strain or in the heat source maps. This is due to the fact that the strain level generally varies from one phase to another and that first-order phase transformation is a phenomenon accompanied by latent heat. In Refs. [1-2], for instance, digital image correlation has been used to observe phase transformation in SMA specimens. Phase transformation is accompanied by latent heat that can be deduced from temperature variation fields measured with infrared cameras. This property has been used in Ref. [3], for instance, to study phase transformations by combining infrared thermography with digital image correlation. The aim of the current work is to analyse the response of an SMA specimen subjected to a tensile test using two different full-field measurement techniques: the grid method and infrared thermography. These techniques are complementary since they provide strain and temperature maps, respectively.
These maps can be determined at different steps of the loading, thus enabling us to analyse the evolution of these quantities as the applied stress increases. These two techniques are described in the first part of the paper. Some typical strain and heat source maps obtained during a tensile test performed on a Cu-Al-Be single-crystal specimen are then shown and discussed.

FULL-FIELD MEASUREMENT TECHNIQUES USED

Grid method

The grid method consists first of depositing a crossed grid on the surface under investigation, in order to track the evolution of the grid as loading increases and to deduce the 2D strain fields. The grid is deposited using the procedure described in [4]. The pitch of the grid is equal to 0.2 mm along both directions. Processing images of grids classically provides phase evolution maps of this periodic marking. This phase evolution is then
T. Proulx (ed.), Optical Measurements, Modeling, and Metrology, Volume 5, Conference Proceedings of the Society for Experimental Mechanics Series, DOI 10.1007/978-1-4614-0228-2_22, © The Society for Experimental Mechanics, Inc. 2011
unwrapped and becomes directly proportional to the displacement [5]. Recently, it has been shown that the metrological performance of this technique could be significantly improved by getting rid of the grid marking defects which unavoidably occur when grids are printed on their support [6-7]. In particular, a very good compromise is obtained between resolution in strain and spatial resolution: typically, the resolution in strain is nearly 10⁻⁴ for a spatial resolution equal to 30 pixels. In addition, calculations are performed pixelwise, thus making it possible to detect very localized phenomena. In the current case, a 12-bit 1040x1376 pixel SENSICAM camera connected to its companion software CamWare is employed. The small strain maps are obtained directly from the images of the grids taken by the camera. Full details on small strain calculation can be found in [5] and [6] for unidirectional and crossed grids, respectively. Since large strains must be measured in the current work, the in-plane Green-Lagrange strain tensor E is calculated. In practice, small strain increments are measured using the procedure described in [7]. Assuming that local rotations are small, the Hencky strain tensor H is deduced by adding these small strain increments. The Green-Lagrange tensor E is finally deduced using the following relationship between E and H
H = (1/2) ln(I + 2E) (1)
where I is the second-order unit tensor.
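Inverting equation (1) gives E = (exp(2H) − I)/2, so the accumulated Hencky tensor can be converted to the Green-Lagrange tensor numerically. A minimal sketch with a hypothetical strain state (the tensor values below are illustrative, not measured data):

```python
import numpy as np
from scipy.linalg import expm, logm

# Hypothetical accumulated Hencky strain tensor (2x2, symmetric): roughly
# 9% tension along the loading axis with some transverse contraction.
H = np.array([[-0.040, 0.005],
              [ 0.005, 0.090]])

# Inverting equation (1), H = 0.5*ln(I + 2E)  =>  E = 0.5*(exp(2H) - I).
E = 0.5 * (expm(2.0 * H) - np.eye(2))

# Round trip back through equation (1) as a consistency check.
H_back = 0.5 * logm(np.eye(2) + 2.0 * E)
print(np.allclose(H_back, H))  # True
```

The matrix exponential and logarithm are needed because H and E are tensors; for the small off-diagonal terms typical here the two measures differ mainly in the large normal strain components.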
Temperature and heat source field determination with an infrared camera

Very small temperature changes on the surface of the specimen under load (painted in black to increase thermal emissivity) are detected, since the Noise Equivalent Temperature Difference (NETD) of the camera used in this study (a Cedip Jade III-MWIR featuring a 240x320 IR sensor matrix) is nearly 20 mK. For the thermomechanical analysis of materials, the temperature change is however not really the relevant information, since it is the consequence of various phenomena, among which is phase transformation. Hence heat sources must be determined from these temperature fields using a suitable strategy whose main steps are as follows. The temperature evolution is governed by the two-dimensional version of the heat diffusion equation, which is suitable for thin flat specimens [8]
ρC (dθ/dt + θ/τ) − kΔθ = s (2)
where θ(x, y, t) is the temperature variation with respect to a reference temperature field, measured in practice just before the beginning of the test; s(x, y, t) is the heat source field produced by the material; ρ is the mass per unit volume, C the specific heat and k the thermal conductivity of the material, which is assumed to be isotropic; and τ is a time constant characterizing the heat exchange with the ambient air by convection. This latter quantity is determined experimentally during a simple return to room temperature. In the current study, where phase transformations occur, the source s is mainly due to phase transformation. The objective here is to retrieve the heat source distribution in order to characterize the phase transformation throughout the specimen. Thermal images are processed using the same procedure as that described in [9]. By integrating in time and dividing by ρC, a field of "heat" expressed in °C is obtained.

SPECIMEN PREPARATION AND TESTING CONDITIONS
The specimen under test was made of a Cu-Al-Be single-crystal SMA (dimensions: 0.94 x 17.78 x 72 mm³). The test consisted of a loading-unloading uniaxial test at room temperature. During loading, the strain rate was equal to 0.064%/s and the maximum strain reached was equal to 9%. The duration of the loading phase was 141.2 s and the maximum stress reached was 73 MPa. The specimen was then unloaded back to zero stress, with a stress rate equal to -0.50 MPa/s down to 20 N. The experiment was repeated to check that very similar responses were obtained. A typical stress-strain curve obtained during one of these tests is shown in Figure 1.

TYPICAL RESULTS AND DISCUSSION
Stress-strain curve
A typical stress-strain curve is shown in Figure 1. A classic plateau during which austenite transforms into martensite is clearly visible. A hysteresis loop also occurs. The idea is to observe the strain maps at some points chosen during the loading (points A, B and C in Figure 1) and unloading phases (points D, E and F) of the curve, in order to analyse the microstructure evolution and its link with the global mechanical response of the specimen. Thermal measurements are also analysed to establish the link that exists between strain and heat source maps.
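The heat sources referred to here are recovered by inverting equation (2) pixelwise on the stack of thermal images. A minimal sketch of that computation, with material constants, frame timing and field sizes as illustrative placeholders rather than values from this paper:

```python
import numpy as np

# Pixelwise inversion of equation (2):
#   s = rho*C*(dtheta/dt + theta/tau) - k*Laplacian(theta)
# theta: stack of temperature-variation maps, shape (n_frames, ny, nx).
# All constants below are illustrative placeholders.
rho, C, k, tau = 8000.0, 400.0, 100.0, 60.0  # SI units
dt = 0.1    # frame interval (s)
dx = 1e-4   # pixel size on the specimen (m)

def heat_sources(theta):
    dtheta_dt = np.gradient(theta, dt, axis=0)
    lap = (np.gradient(np.gradient(theta, dx, axis=1), dx, axis=1) +
           np.gradient(np.gradient(theta, dx, axis=2), dx, axis=2))
    return rho * C * (dtheta_dt + theta / tau) - k * lap

# A band of pixels warming steadily, mimicking a transformation front.
theta = np.zeros((5, 32, 32))
theta[:, 12:20, :] = np.linspace(0.0, 0.1, 5)[:, None, None]
s = heat_sources(theta)
print(s.shape)  # (5, 32, 32)
```

In practice the temperature maps are noisy, so spatial and temporal smoothing precedes the differentiation; dividing the time integral of s by ρC then yields the "heat" field in °C described above.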
Figure 1. Typical stress-strain curve obtained.

Loading phase
Three typical longitudinal strain maps collected during the loading phase are shown in Figure 2. At the beginning of the test, the specimen is completely composed of austenite, which deforms only slightly and elastically under the applied loading. Some martensite needles first appear at Point A, which is located at the beginning of the plateau of the curve. Several parallel needles can easily be seen at the bottom part of the specimen as well as along its right-hand side. The strain amplitude given in these figures for these very thin needles is however probably not the actual one. It is in fact certainly underestimated because of the nature of the image processing used here to retrieve the strain maps. This procedure is based on a windowed Fourier analysis, which cannot correctly identify very small events [6-7]. These significant local strain level increases are due to the austenite/martensite phase transformation, which is accompanied by a sudden strain change in the maps, thus revealing the event that has occurred. A needle is very clearly visible at the top of the specimen (see Needle 11 in the Figure). The strain level in this needle is much higher than the strain level observed in the other needles. Its thickness is also greater. Point B is located at the middle of the plateau. About one half of the austenite has been transformed at this stage. Martensite clearly appears in the red bands in Figure 2-b. Comparing Needle 11 between Figures 2-a and 2-b clearly shows that the displacement of the austenite/martensite transformation front is greater at the bottom of the band than at its top. The strain level in the bands is not completely homogeneous. For instance, the strain level in Band 17 is greater along its right-hand side, thus illustrating the fact that martensite develops from the right to the left. The lower strain level at the left-hand side of this band means that there is a mixture of martensite and
austenite in this region. Some intermediate strain levels can be observed at the very top of the strain map at Point B, probably revealing that very thin martensite needles are developing in this zone at this stage of the test. The so-called habit planes correspond to the boundary between austenite and martensite. Most of the habit planes are parallel. The habit plane located at the bottom of Band 17 exhibits a different orientation. This is probably due to the more complicated state of stress in this region, because of the bottom grip which is very close. Point C is located at the very end of the loading stage. Almost all the austenite has been transformed into martensite at this stage. However, a residual region of pure untransformed austenite still remains (Band 28). The amplitude of the longitudinal strain is globally the same above and below this austenite band. However, comparing the transverse strain fields in these two regions reveals a significant difference between them (the corresponding figures are not shown in this paper): -4% at the top and -6% at the bottom. This is due to the fact that the martensite variants are certainly not the same from one region to another. The color shade in the martensitic zone is probably due to the fact that different martensite variants exist and that the twinning plane between these variants is nearly parallel to the habit plane. The austenite/martensite transformation is accompanied by a heat source localized in the region where this transformation occurs. This heat source can be detected by a suitable processing of the temperature variation maps captured by the infrared camera. Comparing strain maps and heat source maps shows that the phenomena are logically linked in the specimen.
a- Point A
b- Point B
c- Point C
Figure 2. Typical longitudinal strain maps measured during the loading phase.

Unloading phase
Some other strain maps, taken during the unloading phase of the test, are shown in Figure 3. Point D is located just after the sudden change of slope observed in the unloading phase of Figure 3-a. Interestingly, the strain pattern is different from that observed during the loading phase. In particular, some X-shaped microstructures are clearly visible (see for instance Region 55). This particular microstructure corresponds to austenite appearing in the martensite block. The strain level is lowest at the crossing between the two branches. It is approximately the same as the color of pure austenite in Figure 3-a above, thus showing that pure austenite has appeared here. The strain level in the other parts of these branches lies between the strain levels in austenite and in martensite. This is certainly due to the fact that both phases are mixed in this zone. The strain level in Region 53 is lower than in the surrounding martensite and much greater than in the branches of the X-shaped zone 55. This is due to the
185 fact that the percentage of austenite in this zone is certainly much lower than the percentage of martensite. These results are confirmed by thermal measurements. Figure 4 presents a comparison between the in-plane strain maps and the heat source map near point D. This figure enables us to identify the zones which were subjected to the reverse transformation (from martensite to austenite). Based on both measurements, an interpretation in terms of microstructures is proposed on the right-hand part of Figure 4. The strain map at Point E illustrates the fact that the greatest part of martensite is transformed into austenite at this stage. The size of Region 52 has increased. It is also interesting to note that the strain level is slightly greater near the right-hand side compared to the left-hand side, as in Figure 4-a. However, this region stretches significantly along the horizontal direction in Figure 4-a compared to Figure 4-b. This means that the length of the martensite needles that are mixed with austenite in this zone become shorter at Point E, thus showing that they are withdrawing. Interestingly, some traces of the X-shaped region are also still visible in zones 56 and 57. Point F is located close to the end of the hytheresis loop. A wide austenitic region now clearly appears since it corresponds to the blue zone in the map. It is bordered by martensite bands at the bottom. Some traces of the Xshaped region are still visible at the top right and left corners of the specimen. As in the preceding case, the strain level just above Region 59 and along the right-hand side border is slightly higher than in the other parts of the austenitic zone. Again, it is proposed to interpret this result by the fact that martensite needles progressively withdraw. Finally, the strain pattern is more complex at the bottom of the specimen. This is certainly due to the fact that it is located close to the grips.
a- Point D
b- Point E
c- Point F
Figure 3. Typical longitudinal strain maps measured during the unloading phase
Figure 4. In-plane strain maps and corresponding heat source map measured during the unloading phase, and possible interpretation in terms of martensitic microstructures
CONCLUSION
The grid method and infrared thermography have been combined to investigate the mechanical response of a Cu-Al-Be single crystal specimen subjected to a tensile test. Various martensitic microstructures are clearly revealed by these techniques, especially the grid method, with which very small details can be distinguished. The difference between the loading and unloading phases of the hysteresis loop of the stress-strain curve is illustrated by the different microstructures that occur during these phases. Identifying the martensite variants that appear during the test will be the next step of this study.

REFERENCES
[1] Efstathiou C., Sehitoglu H., Carroll J., Lambros J. and Maier H.J., Full-field strain evolution during intermartensitic transformations in single-crystal Ni-Fe-Ga, Acta Materialia, 56:3791-3799, 2008
[2] Daly S., Rittel D., Bhattacharya K. and Ravichandran G., Large deformation of nitinol under shear dominant loading, Experimental Mechanics, 49:225-233, 2009
[3] Favier D., Louche H., Schlosser P., Orgeas L., Vacher P. and Debove L., Homogeneous and heterogeneous deformation mechanisms in an austenitic polycrystalline Ti-50.8 at.% Ni thin tube under tension, Acta Materialia, 55:5310-5322, 2007
[4] Piro J.L. and Grédiac M., Producing and transferring low-spatial-frequency grids for measuring displacement fields with moiré and grid methods, Experimental Techniques, 28(4), 23-26, 2004
[5] Surrel Y., Fringe Analysis, in Photomechanics, Topics Appl. Phys. 77, editor: P.K. Rastogi, 55-102, 2000
[6] Badulescu C., Grédiac M., Mathias J.-D. and Roux D., A procedure for accurate one-dimensional strain measurement using the grid method, Experimental Mechanics, 49(6), 841-854, 2009
[7] Badulescu C., Grédiac M. and Mathias J.-D., Investigation of the grid method for accurate in-plane strain measurement, Measurement Science and Technology, 20(9), 2009
[8] Chrysochoos A. and Louche H., An infrared image processing to analyse the calorific effects accompanying strain localisation, International Journal of Engineering Science, 38:1759-1788, 2000
[9] Badulescu C., Grédiac M., Haddadi H., Mathias J.-D., Balandraud X. and Tran H.S., Applying the grid method and infrared thermography to investigate plastic deformation in aluminium multicrystal, Mechanics of Materials, 43(11):36-53, 2011
Correlation between Mechanical Strength and Surface Conditions of Laser Assisted Machined Silicon Nitride
F.M. Sciammarella, M.J. Matusky College of Engineering & Engineering Technology Northern Illinois University, DeKalb, IL USA Keywords: Silicon Nitride, Lasers, Machining, Flexure Strength, Surface Roughness
ABSTRACT

High power fiber-coupled diode lasers for Laser-Assisted Machining (LAM) of ceramics provide an efficient, cost effective solution for the surface finishing of ceramic products. This paper presents experimental evidence of the advantages of LAM over traditional diamond wheel grinding, the standard technique currently utilized in the finishing of ceramic surfaces. LAM utilizing fiber-coupled diode lasers also provides advantages over other types of lasers, such as CO2 and Nd:YAG lasers. The emphasis of this work is the evaluation of the effect of LAM on the strength of finished products from two different sources of silicon nitride. An optical technique based on evanescent illumination was utilized to measure the Ra of surfaces finished by LAM, laser glazing, and diamond grinding, as well as of as-received surfaces. Four-point bending tests of specimens in each surface condition were used to measure the fracture strength. A correlation was found between the measured Ra and the predicted strengths resulting from Weibull analysis: strength decreases as Ra increases. The fracture surfaces were observed both optically and with a SEM, and the flaw sizes were measured. The analysis of the fractographs indicated that the flaw sizes are consistent with Fracture Mechanics predictions. Explanation of the correlation between Ra, strength, and flaw sizes requires further testing.
INTRODUCTION

Over the last three decades, ceramics have moved from low strength applications to high temperature and high strength applications, based on remarkable improvements in strength, fracture toughness, and impact resistance [1, 2]. Demand for advanced ceramics is expected to increase as they infiltrate several applications: cutting tools, joint implants, capacitors, military armor, aerospace, and automotive components. Advanced ceramics (silicon nitride, silicon carbide, zirconia, etc.) offer higher temperature capability, lower density, higher stiffness, and better wear resistance when compared to metals [3, 4]. Traditional processing of these ceramic materials includes forming, green machining, sintering, and final machining stages. These components typically require very tight dimensional tolerances during shaping and surface machining, generally achieved by diamond wheel surface grinding (with coarse, intermediate, and fine grinding stages) because of the high hardness of these ceramics [5]. Also, conventional single point machining (turning, milling, and drilling) produces brittle failure, excessive surface damage, and excessive tool wear at acceptable machining rates [6, 7]. Therefore, the cost of machining ceramics represents 70% to 90% of the cost of finished parts [8]. Considering silicon nitride (Si3N4) as a baseline material for this study, diamond grinding still results in low material removal rates and is limited to simple contours. Prior academic research on Laser Assisted Machining (LAM) of high strength ceramics (silicon nitride, toughened zirconia, silicon carbide, etc.) has included empirical studies [10-17] and numerical modeling [18-21]. This body of work quantifies the benefits of LAM over grinding, including rapid material removal rates and extended tool life. In LAM, the machining process is simplified and accelerated, leading to reductions in equipment cost, labor, and machining time.
Applying fiber-coupled diode lasers creates a more robust and industrially rugged system for LAM of ceramics.
T. Proulx (ed.), Optical Measurements, Modeling, and Metrology, Volume 5, Conference Proceedings of the Society for Experimental Mechanics Series 9999999, DOI 10.1007/978-1-4614-0228-2_23, © The Society for Experimental Mechanics, Inc. 2011
How LAM Works

In laser assisted machining (LAM), a high-energy laser beam is used to locally heat a small zone on the workpiece ahead of a single-point tool (diamond or cubic boron nitride, CBN); the heated zone is then machined by turning or milling. By preheating a ceramic workpiece in the machining zone ahead of the tool, the local spot temperature rises to a point where quasi-ductile behavior, rather than brittle fracture, occurs. Quasi-ductile deformation in the ceramic enables reduced cutting forces, high material removal rates, minimal surface damage, and increased tool life [9-11]. A schematic of the LAM process (with the ceramic rod, cutting tool, laser spot, and material removal plane) is illustrated in Figure 1. A good review of past LAM research on a variety of materials using Nd:YAG and CO2 lasers, and of its benefits, is reported in [9].
Figure 1 Laser assisted machining model illustrating the machining removal plane [11].

In the case of silicon nitride, the intense heating of the surface locally raises the temperature of a glassy grain boundary phase, which is the residue of an oxide sintering aid used during liquid phase sintering at temperatures above 1300°C [4]. The composition of the sintering aid determines the temperature (~600-1200°C) at which the grain boundary phase softens. With sufficient heat, the grain boundary phase will soften and produce the desired ductile deformation. However, local overheating can introduce undesirable thermal damage such as devitrification, melting, sublimation of the grain boundary phase, or oxidation of the silicon nitride. Therefore, it is important to monitor the surface temperature profile in the cutting zone with thermal (IR) imaging and/or a two-color pyrometer so that the material removal zone remains within the critical minimum and maximum preheat temperatures.
EXPERIMENTAL SETUP AND DISCUSSION

Material Selection

Commercial high strength silicon nitride from two independent sources was chosen as the baseline material for the LAM study, classified as silicon nitride A and silicon nitride B. Both silicon nitride sources were LAM turned in the form of 25 mm diameter rods, 150 mm long. Some tests were also completed on 13 mm by 100 mm rods of silicon nitride B. The material properties of a material similar to silicon nitride A (a 2002 vintage) were characterized in a 2005 Army Research Laboratory study [22]. The microstructure of silicon nitride A is illustrated in Figure 2, from [22]. Note the bimodal grain size, with the large acicular silicon nitride grains, the light gray grain boundary phase, and the very small white inclusions.
Figure 2 Microstructure of silicon nitride A [22]

Laboratory & Prototype Production LAM Systems

A laboratory scale LAM system, Figure 3, was constructed at Northern Illinois University (NIU). The system setup, containing a 250 W fiber-coupled diode laser, has been reported in detail in [23-25]. The system utilizes a FLIR® A325 thermal imaging camera for experimentally measuring thermal fields in the ceramic material, allowing for baseline studies of laser parameters during LAM. This academically focused bench-top system is designed to assist a commercial partner in integrating LAM technology into their manufacturing operations. Based on the laboratory LAM system, an industrial scale LAM system was designed and built at the industrial partner's location using a 25 HP commercial CNC 5-axis turning center. The system utilizes a custom-built multi-beam fiber-coupled diode laser system reported in [23-25]. The primary laser processing head is equipped with a built-in, coaxial, two-color pyrometer system (temperature range 543-1500°C) for process monitoring. Thermal imaging is also utilized for thermal measurement during part production. Dedicated hardware and software are used for data acquisition and process control.
Figure 3 NIU LAM system overview

Laser Surface Analysis

In order to validate the LAM process as a viable industrial solution, Experimental Mechanics represents a necessary tool for understanding the properties and behavior of the ceramic parts after machining. In particular, it is important to measure the surface finish produced during LAM. A recently developed optical method for characterization of the surface, presented last year at SEM, is known as Advanced Digital Moiré contouring [31]. This technique utilizes evanescent illumination to interact with the surface under inspection. This is achieved through the phenomenon of light generation produced by electromagnetic resonance, where the self-generation of light is achieved through the use of total internal reflection (TIR). When a plane wave front impinges on the surface separating two media such that the index of refraction of medium 1, glass, is higher than the index of refraction of medium 2, air (i.e. n1 > n2), at the critical angle total reflection takes place. Under these circumstances a very interesting phenomenon occurs at the interface (glass-air) and evanescent waves are
produced. At the same time, scattered waves emanate from medium 1 (glass). More detailed theory on this optical measurement methodology is described in [26, 30, 31]. To achieve this experimentally, a laser-microscope system, seen in Figure 4, has been constructed with the capacity to resolve vertical differences down to 120 nm across a 480 × 480 micron surface area. Essentially, the surface of the ceramic is in contact with a grating that has 400 lines per mm. The surface is illuminated by a HeNe laser at an oblique angle that provides total internal reflection. This generates a 3-D interference image captured by a CCD camera. Accuracy was calibrated against a NIST traceable surface roughness calibration block (Ra range of 3.018 – 3.079 µm).
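As a numerical aside (not from the paper), the incidence angle above which total internal reflection occurs follows from Snell's law, sin θc = n2/n1; a typical crown glass index is assumed below:

```python
import math

# Assumed indices: typical crown glass against air.
n1, n2 = 1.52, 1.00
theta_c = math.degrees(math.asin(n2 / n1))
# theta_c is about 41 degrees; beyond this incidence angle TIR occurs and
# only the evanescent field penetrates the rarer medium.
```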
Figure 4 Laser Surface Analysis system setup during calibration

The interference patterns are analyzed with a fringe analysis software package, Holo Moiré Strain Analyzer™ (HMSA) Version 2.0, developed by Sciammarella et al. and supplied by General Stress Optics Inc. (Chicago, IL, USA). The fringe analysis uses powerful Fast Fourier Transforms for filtering, carrier modulation, fringe extension, edge detection and masking operations, removal of discontinuities, etc. The software produces a full statistical analysis of the interference pattern in both spreadsheet and graphic form (Figure 5). Different roughness metrics like Ra, Rq, and Rz can be quickly determined, and Weibull analysis is used to obtain characteristic values.
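For reference, these amplitude parameters have simple definitions over a measured height profile. The sketch below is a generic illustration (not part of the HMSA package), with Rz simplified to the total peak-to-valley height of the evaluation length rather than the sampling-length average of the standard:

```python
import math

def roughness(z):
    """Ra, Rq, Rz from a list of profile heights (same length units)."""
    n = len(z)
    mean = sum(z) / n                              # mean line
    dev = [zi - mean for zi in z]
    ra = sum(abs(d) for d in dev) / n              # arithmetic average roughness
    rq = math.sqrt(sum(d * d for d in dev) / n)    # root-mean-square roughness
    rz = max(dev) - min(dev)                       # simplified peak-to-valley
    return ra, rq, rz

# Toy profile in micrometres (illustrative values only):
ra, rq, rz = roughness([1.0, -1.0, 1.0, -1.0])
# ra = 1.0, rq = 1.0, rz = 2.0
```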
Figure 5 Laser surface analysis field of view for a) LAM turned Si3N4 sample b) As-received Si3N4 sample
Flexure Strength Testing

The 13 mm diameter rods of silicon nitride B were split along their axis into half rounds (5 mm thick, 12.8 mm wide, and 50 mm long), two test specimens per rod, while the 25 mm diameter, 50 mm long rods of both silicon nitrides, A and B, were sliced into three arc segments (test specimens) of larger size (5 mm thick, 19 mm wide, and 50 mm long – Figure 6a). These specimens were tested in flexure with the surface condition (curved face) in tension (down) at loads less than 4000 N (880 lb). This arc segment specimen geometry has been used for other ceramics [22], and the method (specimens, fixturing, calculations) is described in detail by Quinn [27]. The test specimens were tested in four-point, quarter-point bending at room temperature in an Instron testing machine, using an articulated fixture (40 mm / 20 mm spans) with tool steel roller bearings (Figure 6b). The crosshead rate was 0.125 mm/min.
Figure 6 (a) Schematic of arc samples used for 4 point bend testing (b) View of experimental setup
EXPERIMENTAL RESULTS AND DISCUSSION

Design of Experiments

The room temperature flexural strength of LAM machined Si3N4 specimens was investigated, for both silicon nitrides, against the surface conditions listed in Table 1. All specimens were prepared on the prototype production LAM system. While the process parameters remain proprietary, note that baseline multi-beam LAM parameters were used and are not optimized for material removal rates, tool wear, or surface conditions. Ongoing and future work aims at process optimization. A two-parameter Weibull analysis was used to determine the variability of strength and surface roughness for each sample. In ceramics, the Weibull distribution is used to characterize strength behavior on the basis that the weakest link in the body will control the strength, as described by Quinn [28].

Table 1 Specimens considered for flexure testing and surface characterization

Silicon Nitride Material A — 25 mm dia., 50 mm long arc segments
  Surface Condition            # of Samples Tested
  As-Received                  9
  Diamond Ground (100 Grit)    9
  LAM Turned                   9

Silicon Nitride Material B — 25 mm dia., 50 mm long arc segments
  As-Received                  9
  Diamond Ground (800 Grit)    9
  LAM Turned                   9
  Laser Glazed                 9

Silicon Nitride Material B — 13 mm dia., 50 mm long half rounds
  As-Received                  6
  Diamond Ground (800 Grit)    6
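A two-parameter Weibull fit of this kind can be sketched with median-rank probability estimates and a least-squares line in Weibull coordinates. This is a generic illustration, not the authors' analysis code, and the strengths in the example are illustrative values only:

```python
import math

def weibull_fit(strengths):
    """Two-parameter Weibull fit: returns (modulus m, characteristic strength).

    Uses median-rank estimates F_i = (i - 0.5)/n and a least-squares line
    of ln(ln(1/(1 - F))) versus ln(sigma).
    """
    s = sorted(strengths)
    n = len(s)
    x = [math.log(v) for v in s]
    y = [math.log(math.log(1.0 / (1.0 - (i + 0.5) / n))) for i in range(n)]
    xm, ym = sum(x) / n, sum(y) / n
    m = (sum((xi - xm) * (yi - ym) for xi, yi in zip(x, y))
         / sum((xi - xm) ** 2 for xi in x))        # Weibull modulus (slope)
    b = ym - m * xm
    sigma_theta = math.exp(-b / m)                 # strength at F = 63.2%
    return m, sigma_theta

# Illustrative strengths in MPa (NOT the paper's raw data):
m, sig = weibull_fit([388, 430, 458, 469, 499, 522, 545, 588, 624])
```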
Statistical Static Properties

Weibull analysis of flexure strength and surface roughness for the 27 tested specimens of silicon nitride A was previously reported in [25]. Results indicated that LAM did show an increase in Weibull strength and lower variance over the as-received condition, with the exception of one LAM test rod (three arc segment tests). Optical and SEM fractography was done on the high and low strength rods from Source A, and flaw sizes were measured. Further investigation into the effect of laser surface heating (laser glazing) is highlighted in the silicon nitride B testing reported in [32]. These results further confirm an increase in Weibull strength, with lower variance, for the LAM and laser glazed conditions over the as-received condition.

Surface Roughness

Surface measurements, Ra, of the 36 silicon nitride B test specimens (from twelve 25 mm dia. rods) and the 27 silicon nitride A test specimens consisted of taking three different area measurements per specimen, 27 areas per rod, for a total of 81 area measurements per surface condition. From the Weibull distribution, the characteristic roughness value Raθ (95% confidence), along with the upper and lower bound (UB and LB) values of Raθ, is given in Table 2 for the two silicon nitrides.

Table 2 Weibull analysis of surface roughness measurements, Raθ (µm)

Silicon Nitride B
                As-Received   Diamond Ground (800 Grit)   LAM Turned   Laser Glazed
  Raθ           1.2378        0.9339                      0.887        0.9485
  UB of Raθ     1.4509        1.034                       1.0407       1.0052
  LB of Raθ     1.0561        0.8434                      0.7588       0.8949

Silicon Nitride A
                As-Received   Diamond Ground (100 Grit)   LAM Turned
  Raθ           1.260         1.336                       0.956
  UB of Raθ     1.619         1.544                       1.067
  LB of Raθ     0.981         1.156                       0.856
Weibull analysis of LAM surface roughness, for both silicon nitrides, showed improvement over the corresponding as-received conditions, while the coarser 100 grit diamond grinding produced the highest roughness measurements of all samples considered. The LAM and laser glazed conditions are found to have surface finishes comparable to that of the 800 grit diamond ground condition.

Flexure Strength

The 27 wide (19 mm) silicon nitride A specimens fractured across a range of applied loads, 1905 to 3950 N (427 to 885 lb). The 36 wide (19 mm) silicon nitride B specimens fractured between 2433 and 3960 N (545 to 887 lb), and the 12 narrow (12 mm) specimens between 1679 and 3504 N (376 to 785 lb). All of the specimens fractured within the inner span, generally after about 100-140 seconds. Tables 3 & 4 show the characteristic strength, σθ, along with its UB and LB, and the mean flexure strength with its coefficient of variation (CoV) and extreme values (high and low) for the surface conditions of both silicon nitrides.

Table 3 Comparison of Weibull strength & statistical strength values in silicon nitride B testing

25 mm dia. rods
                              As-Received   Diamond Ground (800 Grit)   LAM Turned   Laser Glazed
  σθ (MPa)                    452.1         526.5                       582.0        547.9
  UB, LB of σθ (MPa)          476, 429      575, 483                    611, 554     575, 522
  Mean strength (MPa)         436.3         497.9                       560.5        528.2
  CoV of mean strength        8.10%         13.20%                      9.00%        8.50%
  High, low strength (MPa)    499, 388      624, 430                    623, 469     588, 458
  Number of samples tested    9             9                           9            9

13 mm dia. rods
                              As-Received   Diamond Ground (800 Grit)
  σθ (MPa)                    N/A           N/A
  Mean strength (MPa)         318.1         516.4
  CoV of mean strength        5.00%         18.10%
  High, low strength (MPa)    338, 296      620, 388
  Number of samples tested    6             6
Table 4 Comparison of Weibull strength & statistical strength values in silicon nitride A testing (25 mm dia. rods)

                              As-Received   Diamond Ground (100 Grit)   LAM Turned Rods #5 & 6   LAM Turned Rod #4
  σθ (MPa)                    549.0         488.0                       609.8                    416.35
  UB, LB of σθ (MPa)          584, 517      514, 463                    623, 597                 481, 361
  Mean strength (MPa)         524.0         469.0                       599.2                    390
  CoV of strength             11.60%        9.30%                       4.9%                     16.5%
  High, low strength (MPa)    604, 415      531, 408                    623, 541                 447, 300
  Number of samples tested    9             9                           6                        3

Further analysis of the flexure strength data has shown, experimentally, that an increase in surface roughness is accompanied by a decrease in flexure strength in both sources of Si3N4 material, as seen in Figure 7. Along with this observed trend, the laser glazed condition also demonstrates a decrease in surface roughness and an increase in flexure strength over the as-received condition. It would appear, as with the LAM turned specimens, that the laser heating may have a healing effect on surface flaws. More investigation into these trends is required; they are further discussed in the following sections.
Figure 7 Weibull strength, σθ, vs. Weibull surface roughness, Raθ, for all surface conditions tested

Fracture Mechanics

The NIST recommended practice guide by Quinn [28] for the fractography of ceramics is followed. Images of the fracture surfaces were captured both optically and by SEM (gold coated) for sample specimens selected from the silicon nitride A testing. The selected samples highlight the low strength LAM (three test specimens), high strength LAM (three test specimens), and as-received conditions. Selected samples from the silicon nitride B testing highlight the various surface conditions, and all values reported are averaged per condition (at least three test specimens). Fractographs are also post-processed using the HMSA software package, taking advantage of light contrast over the fractured surface, to aid in visualizing the fracture mirror (Figure 8).
Fracture Toughness Estimation

Approximations of fracture toughness (KIc), given in Tables 5 & 6, are estimated based on a technique, and on equation (1), described by Quinn [28]. The flaw size (a and c are shown in Figure 8) and shape are estimated by inspection of the optical and SEM fractographs. Additionally, the maximum stress intensity factor coefficient, Y, is taken as the Newman-Raju Y factor at the surface and at the deepest part of the crack [28]. The Newman-Raju Y factors are also included in ASTM standard C 1421 for fracture toughness of ceramics. The Y factor at the surface is the one considered in the comparisons discussed here. Values for the strength, σf, were measured experimentally during the silicon nitride A & B testing. These calculations and interpretations are not conclusive; a more detailed investigation of these findings is underway.

KIc = Y × σf × √a      (1)
Table 5 Estimation of KIc for highlighted surface conditions of Si3N4 A, ordered by strength value

  Surface Condition     Strength (MPa)   Crack Depth (µm)   Ysurface   KIc,surface (MPa·m^1/2)   Ydepth   KIc,depth (MPa·m^1/2)
  LAM (Low Strength)    390              140.3              1.62       7.08                      1.25     5.51
  As-Received (#7B)     604              63.0               1.55       7.43                      1.24     5.92
  LAM (High Strength)   619              183.6              1.53       12.82                     1.21     10.10

Table 6 Estimation of KIc for highlighted surface conditions of Si3N4 B, ordered by strength value

  Surface Condition     Strength (MPa)   Crack Depth (µm)   Ysurface   KIc,surface (MPa·m^1/2)   Ydepth   KIc,depth (MPa·m^1/2)
  As-Received (Ave.)    431              245.7              1.55       10.11                     1.17     7.73
  800 Grit DG (Ave.)    464              181.6              1.58       9.87                      1.17     7.33
  Laser Glazed (Ave.)   538              148.7              1.61       10.40                     1.16     7.59
  LAM Turned (Ave.)     595              213.5              1.59       13.80                     1.16     10.01

The manufacturer specified fracture toughness for silicon nitride A and B is reported as 6.3 MPa·m^1/2 and 6.1 MPa·m^1/2, respectively, which is close to that estimated for the as-received conditions. While low values of KIc are estimated for one LAM case in silicon nitride A (Figure 8), inherent material flaws, machining damage, mishandling, or specimen preparation (cutting of arc segments) may have resulted in the undesired strengths. On the other hand, the laser-affected samples of both materials (higher estimated KIc for LAM) appear to further support the surface healing effect of laser heating. Laser glazing of alumina has been reported to similarly improve Weibull strength characteristics over as-received conditions [29]. These results indicate that fully understanding the mechanism warrants additional in-depth study.

Fracture Origin
Figure 8 Fracture origin in low strength LAM #4A shown during post-processing, highlighting the fracture mirror.
Figure 9 Fracture origin in low strength LAM #4A: a) optical photo of the fracture mirror shows a crack about 136 µm deep, which, combined with a 447.2 MPa fracture strength and a surface Y factor of ~1.27, gives KIc ≈ 6.65 MPa·m^1/2; b) SEM image of the fracture mirror showing the LAM turned surface.
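Equation (1) can be spot-checked against the values quoted in Figure 9; the snippet below is only a unit-bookkeeping sketch (crack depth converted from µm to m):

```python
import math

def k_ic(y, sigma_f_mpa, depth_um):
    """K_Ic = Y * sigma_f * sqrt(a), with a in metres; result in MPa*m^0.5."""
    return y * sigma_f_mpa * math.sqrt(depth_um * 1e-6)

# Figure 9 values: crack ~136 um deep, 447.2 MPa strength, Y ~ 1.27
k = k_ic(1.27, 447.2, 136.0)
# k is about 6.6 MPa*m^0.5, close to the ~6.65 quoted in the caption.
```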
CONCLUSIONS

Various test specimens prepared by laser-assisted machining, laser glazing, conventional diamond grinding, and in as-received surface conditions were evaluated on silicon nitride rods from two different sources. LAM turned silicon nitride rods showed a measured 25%-30% increase in flexure strength when compared to their as-received counterparts. For test specimens prepared by laser glazing, an increase in strength and a decrease in surface roughness over the as-received condition were also observed. Experimental results from both material sources have demonstrated an apparent trend correlating surface roughness and flexure strength for all surface conditions tested. Initial fractography of the laser heated specimens, along with the as-received condition, was completed. In one case, LAM indicated reduced strength and fracture toughness compared to the as-received condition. While this result was uncharacteristic of all other LAM samples tested, care must be taken while machining and handling the ceramic so as to avoid strength degradation from flaws other than those induced at the machined surface. Results also indicate that laser heating of the ceramic surface may in fact increase the strength of the ceramic material. While further investigation is required to confirm this, the increase in strength may result from a beneficial modification of the flaw sizes and flaw populations found in the as-received counterparts. LAM provides a very promising, cost effective improvement over conventional diamond grinding of advanced ceramics.
ACKNOWLEDGMENTS

The authors thank Tom Wagner and the U.S. Army TARDEC (Contract Number W56HZV-04-C-0783) for the support that made this work possible. The authors would like to express their gratitude to Dr. Richard Johnson, Director of ROCK, Alan Swiglo, Assistant Director of ROCK, and Stefan Kyselica, Senior Systems Engineer of ROCK, for their continued technical and management support. Thanks are also due to Richard Roberts, Jeff Staes, and Rick Deleon for their partnership in this project. Special thanks to the graduate students of the NIU Mechanical Engineering Department, SriHarsha Panuganti and Vishal Burra, for all their efforts on this project. Mechanical flexure testing was done at Illinois Institute of Technology with the assistance of Mr. Russ Janota and Professor Philip Nash. Some fractography was provided by Bilijana Mikijelj of Ceradyne, Inc., Costa Mesa.
REFERENCES
[1] US Advanced Ceramics Market, Fredonia Industrial Market Study, Cincinnati, OH (December 2008).
[2] T. Abraham, U.S. Advanced Ceramics Industry—Status and Market Projections, Ind. Ceram., 19 [2] 94–6 (1999).
[3] Y. Liang and S. P. Dutta, Application Trend in Advanced Ceramic Technologies, Technovation, 21, 61–5 (2001).
[4] F. L. Riley, Silicon Nitride and Related Materials, J. Am. Ceram. Soc., Vol. 83, 245-265 (2000).
[5] I. P. Tuersley, A. Jawaid, and I. R. Pashby, Review: Various Methods of Machining Advanced Ceramic Materials, J. Mater. Processing Technol., 42, 377–90 (1994).
[6] V. Sinhoff, S. Schmidt, and S. Bausch, Machining Components Made of Advanced Ceramics: Prospects and Trends, Ceram. Forum Int./Berichte Der Dkg, 78 [6], E12–8 (2001).
[7] W. Konig and A. Wagemann, Machining of Ceramic Components: Process-Technological Potentials, Machining Adv. Mater., NIST Spec. Publ., 847, 3–16 (1993).
[8] I. D. Marinescu, Handbook of Advanced Ceramic Machining, CRC Press (2007).
[9] N.B. Dahotre and S.P. Harimkar, Laser Fabrication and Machining of Materials, Springer Science + Business Media (2008).
[10] W. Konig and A. K. Zaboklicki, Laser-Assisted Hot Machining of Ceramics and Composite Materials, Int. Conf. Machining Adv. Mater., NIST Spec. Publ., 847, 455–63 (1993).
[11] Y. C. Shin, S. Lei, F. E. Pfefferkorn, P. Rebro, and J. C. Rozzi, Laser-Assisted Machining: Its Potential and Future, Machining Technol., 11 [3] 1–6 (2000).
[12] F. Klocke and T. Bergs, Laser-Assisted Turning of Advanced Ceramics, Rapid Prototyping Flexible Manuf., Proc. SPIE, 3102, 120–30 (1997).
[13] J.C. Rozzi, F.E. Pfefferkorn, Y.C. Shin, and F.P. Incropera, Experimental Evaluation of the Laser Assisted Machining of Silicon Nitride Ceramics, J. of Manufacturing Science and Engineering, Vol. 122, 666-670 (Nov. 2000).
[14] S. Lei, Y.C. Shin, and F.P. Incropera, Experimental Investigation of Thermo-Mechanical Characteristics in Laser-Assisted Machining of Silicon Nitride Ceramics, J. of Manufacturing Science and Engineering, Vol. 123, 639-646 (Nov. 2001).
[15] S. Lei, Y. Shin, and F. Incropera, Experimental Investigation of Thermo-Mechanical Characteristics in Laser-Assisted Machining of Silicon Nitride Ceramics, ASME J. Manuf. Sci. Eng., 123, 639–46 (2001).
[16] F. E. Pfefferkorn, Y. C. Shin, Y. Tian, and F. P. Incropera, Laser-Assisted Machining of Magnesia-Partially Stabilized Zirconia, ASME J. Manuf. Sci. Eng., 126, 42–51 (2004).
[17] Y. Tian and Y.C. Shin, Laser-Assisted Machining of Damage-Free Silicon Nitride Parts with Complex Geometric Features via In-Process Control of Laser Power, J. Am. Ceram. Soc., 89 [11], 3397–3405 (2006).
[18] J. C. Rozzi, M. J. M. Krane, F. P. Incropera, and Y. C. Shin, Numerical Prediction of Three-Dimensional Unsteady Temperatures in a Rotating Cylindrical Workpiece Subjected to Localized Heating by a Translating Laser Source, 1995 ASME Int. Mech. Eng. Conf. Exposition, San Francisco, California, HTD, 317 [2] 399–411 (1995).
[19] J. C. Rozzi, F. E. Pfefferkorn, F. P. Incropera, and Y. C. Shin, Transient, Three-Dimensional Heat Transfer Model for the Laser Assisted Machining of Silicon Nitride: I. Comparison of Predictions with Measured Surface Temperature Histories, Int. J. Heat Mass Transfer, 43, 1409–24 (2000).
[20] J. C. Rozzi, F.P. Incropera, and Y.C. Shin, Transient, Three-Dimensional Heat Transfer Model for the Laser Assisted Machining of Silicon Nitride: II. Assessment of Parametric Effects, Int. J. of Heat and Mass Transfer, 43, 1425-1437 (2000).
[21] F. E. Pfefferkorn, F. P. Incropera, and Y. C. Shin, Heat Transfer Model of Semi-Transparent Ceramics Undergoing Laser-Assisted Machining, Int. J. Heat Mass Transfer, 48 [10] 1999–2012 (2005).
[22] J. J. Swab, A.A. Wereszczak, J. Tice, R. Caspe, R. H. Kraft, and J. W. Adams, Mechanical and Thermal Properties of Advanced Ceramics for Gun Barrel Applications, Army Research Laboratory Report ARL-TR-3417, February (2005).
[23] S. Panuganti, Understanding Fiber-Coupled Diode Laser Superheating in Laser Assisted Machining of Silicon Nitride (Si3N4), Department of Mechanical Engineering, Northern Illinois University (2009).
[24] F.M. Sciammarella and M.J. Matusky, Fiber Laser Assisted Machining of Silicon Nitride, Conference Proceedings ICALEO (2009) (to be published).
[25] F.M. Sciammarella, J. Santner, J. Staes, R. Roberts, F. Pfefferkorn, S.T. Gonczy, S. Kyselica, and R. Deleon, Production Environment Laser Assisted Machining of Silicon Nitride, Conference Proceedings ICACC (2010).
[26] C.A. Sciammarella, L. Lamberti, F.M. Sciammarella, G. Demelio, A. Dicuonzo, and A. Boccaccio, Application of Plasmons to the Determination of Surface Profile and Contact Stress Distribution, Strain (2009).
197
[27] G. D. Quinn, The Segmented Cylinder Flexure Strength Test, Ceramic Eng. and Sci. Proc., 27[3], 295 – 305 (2006). [28] G.D. Quinn, Fractography of Ceramics and Glasses, NIST, Spec. Publ. 960-17, (April 2007). [29] J. Meeker, A.E. Segall, and V.V.Semak, Surface effects of alumina ceramics machined with femtosecond lasers, Journal of Laser Applications, Vol. 22 7-12, (Feb. 2010). [30] C.A. Sciammarella, F.M. Sciammarella, and L. Lamberti, Experimental Mechanics in Nano-engineering, to be published by Springer Verlag, edited by EE Godutous, 2011 [31] F.M. Sciammarella, C.A. Sciammarella, L. Lamberti, V. Burra, Industrial Finishes of Ceramic Surfaces at the Micro-Level and Its Influence on Strength, Conference Proceedings, SEM, (2010). [32] F.M. Sciammarella, J.S. Santner, M.J. Matusky, S.T. Gonczy, Investigating Mechanical Strength and Surface Conditions of Fiber-Coupled Diode Laser-Assisted Machining of Silicon Nitride, Conference Proceedings, MS&T, (2010).
Analysis of speckle photographs by subtracting phase functions of digital Fourier transforms

Karl A. Stetson
Karl Stetson Associates, LLC, 2060 South Street, Coventry, CT 06238

Abstract

This paper presents a method for measuring displacements and strains in digital speckle photography that is an alternative to currently used correlation techniques. The method is analogous to heterodyne speckle photogrammetry, wherein optical Fourier transforms are taken of individually recorded specklegrams and combined in a heterodyne interferometer where an electronic phase meter measures the phase differences between the two transforms. Here, digital photographs are recorded and Fourier transformed so that their phase functions can be subtracted and fitted to a linear function of the transform coordinates. The effect of different recording and processing parameters is investigated. It is found that incoherent speckles give better results than those formed by coherent laser light. In addition, image correlation is used to process an identical data set so that the two methods can be compared.

1. Introduction

Laser speckle photography [1] arose as an alternative to holographic interferometry. When an object is illuminated with laser light, the speckles that form in its image move as if attached to the object itself. If an object moves between two exposures of a photograph, the resultant doubling of the speckle pattern can be observed via an optical Fourier transform created by illuminating a small region of the photograph with a narrow, converging laser beam. The doubling of speckles in the photograph gives rise to linear fringes in the transform plane where the beam comes to focus, and, because of the similarity to Young's experiment, they are often called Young's fringes. The fringes are normal to the direction of the speckle displacement, and their spacing is inversely proportional to its magnitude.
Whereas speckle photographs, or specklegrams, can measure object displacements, they have limitations for measuring strain, which is defined as the change in displacement between two object points divided by their separation. Resistive strain gages can measure strains down to 10^-5 over gage lengths as short as one to two mm, and to duplicate this with speckle photography is very difficult. This is especially so in regions of the object where the displacement is so small that no fringes are observable in the transform plane and in regions where the displacement is so large that the fringes are too narrow to observe. To overcome these problems, a technique was developed [2-6] for heterodyne readout of halo fringes. In heterodyne speckle photogrammetry, two photographic glass plates are used to record separate specklegrams before and after a stress is applied to the object. These two plates are placed side by side in an interferometer on a common translation stage and aligned so that the same region on each can be illuminated with each of two small, mutually coherent, converging laser beams. The transmitted beams and scattered halos are combined after equal propagation by a set of mirrors and a beamsplitter. Adjustments are provided so that the halo fringes can be minimized and the two plates aligned to eliminate relative rotation. The interferometer is provided with the capability of shifting the optical frequency of one beam relative to the other to cause the halo fringes to move and generate sinusoidal irradiance fluctuations. These are detected by an array of photodiodes located in the transform plane, processed by a phase meter, and the phases recorded. As the pair of plates is moved, any translation of one speckle pattern relative to the other causes a change in the relative phases in the fluctuating halo fringe pattern.
These changes are recorded and used to calculate the relative displacement of the speckle pattern, which, divided by the amount by which the pair of plates is translated, gives the object strain. Strain measurement by this technology requires that the object be photographed along the surface normal by means of a telecentric lens system, i.e., one whose magnification is independent of object distance. The high accuracy available from electronic phase meters, 0.1 degree out of 360, makes it possible to measure strain down to a theoretical level of 10 microstrain (i.e., 10^-5) from recordings made with lens systems whose f/numbers are as high as f/10. Furthermore, multiple recordings allow cumulative strain to be measured up to several percent. Although this technique was demonstrated, it has drawbacks. The photographic plates require chemical development and drying before they can be analyzed. The setup and alignment of the interferometer is quite complex and requires considerable space in a darkened laboratory, and the alignment of the photographs is quite critical.

T. Proulx (ed.), Optical Measurements, Modeling, and Metrology, Volume 5, Conference Proceedings of the Society for Experimental Mechanics Series, DOI 10.1007/978-1-4614-0228-2_24, © The Society for Experimental Mechanics, Inc. 2011
They must be placed so that the same regions are at least approximately illuminated in order to obtain halo fringes at all. Finally, the procedure of recording the phases, moving the pair of plates, and computing the strains is time consuming. For these reasons, it is desirable to reconsider this process from the point of view of modern digital photography and digital image analysis to see which aspects can be improved and which cannot. Digital photography has been used for specklegram analysis for quite some time [8,9]; however, speckle displacements have been analyzed by image correlation. (See www.lavision.de, and Aramis at www.trilion.com, for example.) The purpose of this paper is to present and investigate an alternative method of specklegram analysis based upon the subtraction of the phases of digital Fourier transforms, in a manner analogous to heterodyne speckle photogrammetry. It begins with a description of the process, followed by the mathematical analysis and experimental study. Several parameters were investigated: recording lens f/number, 8-bit versus 12-bit digitization, and incoherent versus coherent speckle patterns. Finally, a comparison is made to an image correlation measurement of displacement for the best translation data set.

2. Procedure for digital Fourier processing

The procedure begins by photographing the object with a digital camera. Unless a telecentric lens system is used, any translation of the object toward or away from the camera will result in a magnification change, and this will give an apparent strain that must be considered in any subsequent analysis. The optical axis of the lens should be aligned along the surface normal, and the camera should have a monochrome sensor with square pixels. Twelve-bit image digitization should offer increased accuracy; however, shot noise may be expected to restrict the useful digitization to eight bits.
Photographs are captured before and after object perturbation and designated as A and B. For simple displacement analysis, an entire camera image may be used; however, for strain analysis it will be necessary to divide the camera images into segments for separate processing. The next step is to compute the digital Fourier transforms FA and FB for each photograph or segment thereof. A digital Fourier transform has the advantage over an optical Fourier transform that it is possible to calculate the phase of the pixels in the transform. The phase values will range randomly from –π to π; however, a speckle displacement will generate a linear phase change across the transform plane whose slope is proportional to the displacement and whose gradient is in the direction of displacement, as described by the shift theorem of Fourier transforms. The phase function can be obtained by subtracting the phases of the two transforms; however, simple subtraction will exhibit what may be called random wrapping. This occurs when adding the phase change to a pixel phase carries the sum beyond –π or π. For example, if the phase of a pixel is π − ∆ and the phase change is +2∆, the resulting wrapped phase will be −π + ∆, and subtracting the two values will give 2∆ − 2π. Wrapping the phase difference from the range of –2π to 2π into the range of –π to π will remove these random wrapping effects.
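This re-wrapping step can be sketched in a few lines of NumPy (synthetic data, not the author's DADiSP code): the random per-pixel phases cancel inside the complex exponential, so wrapping the raw difference back into –π to π recovers the linear phase change exactly.

```python
import numpy as np

rng = np.random.default_rng(0)
M = 256
phase_a = rng.uniform(-np.pi, np.pi, M)            # random transform phases, photo A
ramp = np.linspace(-0.9 * np.pi, 0.9 * np.pi, M)   # linear phase change due to a shift
phase_b = np.angle(np.exp(1j * (phase_a + ramp)))  # photo B phases, wrapped to (-pi, pi]

naive = phase_b - phase_a                # exhibits random +/- 2*pi jumps
rewrapped = np.angle(np.exp(1j * naive)) # wrap the difference back into (-pi, pi]

assert np.max(np.abs(naive - ramp)) > np.pi     # random wrapping is present
assert np.allclose(rewrapped, ramp, atol=1e-9)  # the linear ramp is recovered
```

Taking the angle of exp(i·∆φ) is equivalent to the wrapping described above, since any stray multiple of 2π vanishes in the exponential.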
Figure 1. 1a: Subtraction of two analytically created random 8-bit phase patterns with a linear phase difference between them. 1b: The same data as 1a after wrapping into 8-bit pixels; note that the wrapping removes the effect of the random wrapping shown in 1a.

Figure 1a illustrates the random wrapping that occurs when a phase function is added to a random phase distribution and one attempts to recover it by simple subtraction. Figure 1b shows the result of wrapping the data of Fig. 1a into the range of –π to π, i.e., the phase difference is recovered without the effects of random wrapping. Of course, the phase difference itself
is wrapped every time it exceeds either limit, –π or π, and it must be unwrapped for further data processing. After it is unwrapped, the next step is to fit the unwrapped phase difference to a linear function of the spatial frequencies, ωx and ωy; the resulting slopes correspond to the x and y translations of the speckles. If strain analysis is being performed, the image will be divided into segments, and the slope values from neighboring segments can be used to calculate the average strains experienced between the segments as described below.

3. Mathematical Analysis

The digital Fourier transform used for this analysis is described by the following equation:

F(j,k) = Σ(m=0 to M-1) Σ(n=0 to N-1) f(m,n) exp[i2π(jm/M + kn/N)],    (1)
where f(m,n) is the discrete function whose transform is being calculated; m and n are the pixel indices in the x and y directions; M and N are the numbers of pixels in the x and y directions of the image or segment thereof; j and k are the horizontal and vertical indices of the transform coordinates; and F(j,k) is the Fourier transform of f(m,n) with respect to the variables j and k. Consistent with the usage in television, we take the y axis, represented by n, to be positive downward. We may calculate the effect of a displacement of one pixel spacing in the x or y direction, dpx or dpy, for the function f(m,n) by substituting m-1 for m or n-1 for n in Eq. (1). This will generate the respective phase functions Φωx or Φωy in the transform plane, where

Φωx = exp(i2πj/M),    (2a)
Φωy = exp(i2πk/N).    (2b)
The slopes of these phase functions per incremental values of j or k, sj1 and sk1, are

sj1 = 2π/M,    (3a)
sk1 = 2π/N.    (3b)
If dx and dy are the actual displacements of the image pattern and sj and sk are the corresponding measured phase changes per pixel of their transforms, then the corresponding fractions of pixel displacements, dx/dpx and dy/dpy, will equal the corresponding fractional changes in slope of the transform function, sj/sj1 and sk/sk1:

dx/dpx = sj/sj1, and dy/dpy = sk/sk1.    (4)
Substituting from Eqs. (3) gives

dx = dpx sj M/2π,    (5a)
dy = dpy sk N/2π,    (5b)
where the units of sj and sk are radians per pixel in the Fourier transform plane. Equations (5) allow calculation of the object displacement in terms of the pixel spacing, the transform plane slope, and the total number of pixels in the direction considered. The products dpxM and dpyN are the physical sizes in x and y of the camera array or the segments of the array. If the object is magnified relative to its image on the camera, then the results of Eqs. (5) must be multiplied by that magnification to obtain the object displacement. Strain analysis requires measuring the change in displacement between two points on a surface and dividing that by the distance between the two points. Because, in general, surface strain is expressed as a 2x2 matrix, we need to measure the fractional displacements of four points on the surface. Let these four points be the centers of four neighboring segments, which we identify by the subscripts shown below.
11  12
21  22
The average relative displacements of these segments may be defined as:

∆xx = (dx12 – dx11 + dx22 – dx21)/2, the average x expansion in the x direction;    (6a)
∆yy = (dy21 – dy11 + dy22 – dy12)/2, the average y expansion in the y direction;    (6b)
∆xy = (dx21 – dx11 + dx22 – dx12)/2, the average x expansion in the y direction;    (6c)
∆yx = (dy12 – dy11 + dy22 – dy21)/2, the average y expansion in the x direction.    (6d)
Given that the M by N pixel segments are separated in x by M pixels and in y by N pixels, the x strain, y strain, and shear are defined as:

εxx = ∆xx/(dpxM),    (7a)
εyy = ∆yy/(dpyN),    (7b)
εxy = (∆yx/(dpxM) + ∆xy/(dpyN))/2.    (7c)
When Eqs. (3) and (4) are substituted into Eqs. (6a)-(6d), and the results substituted into Eqs. (7a)-(7c), it is seen that the factors dpxM and dpyN cancel from the strain calculations. Equations (7a)-(7c) may be rewritten in terms of the measured Fourier transform plane slopes as

εxx = (sj12 – sj11 + sj22 – sj21)/4π,    (8a)
εyy = (sk21 – sk11 + sk22 – sk12)/4π,    (8b)
εxy = (sj21 – sj11 + sj22 – sj12 + sk12 – sk11 + sk22 – sk21)/8π.    (8c)
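As a worked example of this strain arithmetic, the snippet below evaluates Eqs. (8a)-(8c) for hypothetical fitted slope values (the numbers are invented for illustration; the first subscript is the segment row, the second the column):

```python
import math

# Hypothetical fitted slopes (radians per transform pixel) for four
# neighboring segments, indexed row-column.
sj = {"11": 1.00e-3, "12": 1.50e-3, "21": 1.02e-3, "22": 1.52e-3}  # x slopes
sk = {"11": 2.00e-3, "12": 2.01e-3, "21": 2.40e-3, "22": 2.41e-3}  # y slopes

exx = (sj["12"] - sj["11"] + sj["22"] - sj["21"]) / (4 * math.pi)    # Eq. (8a)
eyy = (sk["21"] - sk["11"] + sk["22"] - sk["12"]) / (4 * math.pi)    # Eq. (8b)
exy = (sj["21"] - sj["11"] + sj["22"] - sj["12"]
       + sk["12"] - sk["11"] + sk["22"] - sk["21"]) / (8 * math.pi)  # Eq. (8c)
# exx is about 80 microstrain, eyy about 64, and exy about 2.4
```

Note that the pixel spacing and segment size never appear: as stated above, they cancel, so the strains depend only on the transform-plane slopes.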
For a segment of 256 by 256 pixels, for example, the strain corresponding to a lateral shift of one pixel is 1/256, which equals 3906 microstrain. Measurement of strain to a level of 10^-5 would require measurement of subpixel displacement to a level of 1/391 of the pixel spacing.

4. Experimental Study

An experimental study was carried out to determine the effectiveness of the process described above and investigate the effect of some experimental parameters. Images were obtained via a Prosilica EC650 monochrome TV camera with a 1/3 inch format sensor (4.7 mm horizontal by 3.5 mm vertical) with a pixel array of 659x493 elements on 7.4 µm centers. Image capture was done via two separate programs: the HoloFringe300K program, which provided 8-bit images, 640x480 pixels, in a raw data format, and the Prosilica Viewer program, which provided 12-bit images, 659x493, in tiff format. The object, mounted on a translation table, was a flat aluminum bar whose visible surface was whitened to provide a flat, diffuse reflection. The object was translated laterally by amounts that were read from the dial of the micrometer on the translation stage, single divisions of which corresponded to 10 µm each. A 25 mm lens was used on the camera, which was set with its entrance pupil 355 mm from the object; this resulted in a demagnification of 12.9. The lens aperture was set to f/10, for which the characteristic speckle size should be 7.6 µm, approximately the size of the pixel spacing. Recordings were made of the object at displacements of 0, -10, -20, -40, -80, -160, and -320 micrometers; each image was transformed, and the phase functions of all the transforms were calculated. The phase of the transform at 0 micrometers was subtracted from the phase of each of the other transforms, and the results were wrapped into eight bits to obtain the wrapped phase differences. These operations were done using a program called DADiSP, which provides a number of useful functions in addition to the 2DFFT computation.
(See www.dadisp.com.) These images were unwrapped using the method of calculated wrap regions from the HoloFringe300K electronic holography program [7]. Figure 2 shows, for the specklegrams at 0 and 320 micrometers, the wrapped and unwrapped phase difference. Note that the quality of the data is poorer at the left and right edges, which leads to errors in the unwrapping for those regions. To eliminate those errors, the unwrapped phase data was windowed to remove the left, right, top, and bottom 25% of the pixels. The corners of the window were located at xmin = 160, xmax = 480, ymin = 120, and ymax = 360.
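The complete chain — transform, phase subtraction, re-wrapping, and a least-squares plane fit — can be sketched in NumPy with a synthetic pattern. Here the shift is applied by the Fourier shift theorem rather than by a translation stage, and, for subpixel shifts, the wrapped difference never exceeds ±π, so no unwrapping step is needed. (NumPy's fft2 uses the exp(−i...) sign convention, opposite to Eq. (1), hence the sign flip at the end.)

```python
import numpy as np

rng = np.random.default_rng(1)
M = N = 256
a = rng.random((N, M))           # synthetic speckle frame A

# Frame B: A shifted by a known subpixel amount via the Fourier shift theorem
dx_true, dy_true = 0.30, -0.20   # pixels
kx = np.fft.fftfreq(M)[None, :]  # spatial frequencies, cycles/pixel
ky = np.fft.fftfreq(N)[:, None]
Fa = np.fft.fft2(a)
Fb = Fa * np.exp(-2j * np.pi * (kx * dx_true + ky * dy_true))

# Wrapped phase difference of the two transforms
dphi = np.angle(Fb * np.conj(Fa))

# Least-squares fit of the plane dphi = cx*(2*pi*kx) + cy*(2*pi*ky)
ones = np.ones((N, M))
A = np.column_stack([(2 * np.pi * kx * ones).ravel(),
                     (2 * np.pi * ky * ones).ravel()])
(cx, cy), *_ = np.linalg.lstsq(A, dphi.ravel(), rcond=None)

dx_meas, dy_meas = -cx, -cy      # sign follows numpy's transform convention
assert abs(dx_meas - dx_true) < 1e-6 and abs(dy_meas - dy_true) < 1e-6
```

With real camera data, the windowing step described above would simply restrict the fit to the central portion of dphi.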
Figure 2. 2a shows the wrapped phase difference between the transforms of the specklegram recorded at 0 micrometers and the one recorded at 320 micrometers; 2b shows the data of 2a unwrapped.

A linear function in x and y was then fitted to each data image for least square error by means of a program written in Liberty Basic. The resulting slope values in ωx and ωy, that is, sj and sk, were substituted into Eqs. (5a) and (5b) to obtain the x and y image displacements at the camera detector, with M taken as 640 and N as 480. The image displacements were then multiplied by the magnification, 12.9, to get displacements of the object. Table 1 presents the measured displacements beside the displacements read from the micrometer dial.

x trans. (µm)   x meas. (µm)   y meas. (µm)
-10.0           -10.0           2.57
-20.0           -23.7           7.68
-40.0           -49.3           8.62
-80.0           -90.6           8.06
-160            -166            7.03
-320            -324            6.85
Table 1. Tabulated values of measured displacement versus micrometer dial readings.

The measured displacements in the y direction should be zero, and it is not clear why they have a nearly constant value. It is possible that the translation stage had some vertical displacement associated with the first 20 µm of horizontal travel. The RMS error is 5.75 µm for the x displacements and 7.09 µm for the y displacements. The role of lens aperture was investigated by recording another set of translations, 0 to +640 µm, with the lens set to f/2.8. This resulted in a noticeably different image, with a pixel histogram that looked much more like a Gaussian distribution. The results are tabulated in Table 2. The RMS error in the x measurement is 16.8 µm and for the y measurement is 2.33 µm. Clearly, the increase in lens aperture has not helped the measurement accuracy.

x trans. (µm)   x meas. (µm)   y meas. (µm)
10.0             5.04           4.51
20.0            30.0           -1.86
40.0            57.5           -2.64
80.0            96.6           -3.02
160             181            -2.69
320             343            -2.37
640             667            -4.68
Table 2. Translations measured with the lens set to f/2.8.
It was also of interest to learn if increasing the digitization to 12 bits would improve the measurement accuracy. Data was acquired via the Prosilica Viewer program for the same translations, with the lens set to f/11, and processed in the same way. The results are presented in Table 3, and they show poorer results than with the 8-bit digitization; the RMS error for the x measurement is 11.9 µm and for the y measurement is 1.14 µm. Because the detectors are small, it is expected that their performance is limited by shot noise, so the difference between 8-bit and 12-bit digitization should not be significant. Why the RMS error for the x measurement is approximately twice that found with 8-bit digitization is not clear. It may be an artifact of the image capture programs, which are different for the two cases.

x trans. (µm)   x meas. (µm)   y meas. (µm)
10.0            16.6            1.42
20.0            316             0.494
40.0            56.4           -0.0219
80.0            95.8           -0.0142
160             172            -1.84
320             324            -1.45
640             527            -12.8
Table 3. Translations measured with the camera lens set to f/11 and the data digitized to 12 bits.

Live observation of the speckle patterns showed that, for continuous translation, the shifted patterns exhibit a periodic change of the speckles themselves. CCD detector arrays with interline transfer typically have active sensors that occupy only about 35% of the area of the array itself. Thus, the speckles integrated by the detectors vary periodically with the translation of the object. Also, the object is illuminated spherically, and this causes a shift of the field that actually passes through the entrance aperture of the lens, so that the speckles decorrelate slightly with translation. To get a comparison, an incoherent speckle pattern was generated with the DADiSP program as an array of random numbers displayed as pixels. This was printed on a sheet of paper and cemented to the object surface. The recorded speckle pattern and its histogram of pixel values looked very much like those for laser speckles. Images were recorded of this incoherent speckle pattern at the positions of the translation stage that were previously used. The camera lens was set to f/5.8, and only 8-bit images were captured, based upon the evidence that 12-bit digitization did not improve the results. The results are shown in Table 4.

x trans. (µm)   x meas. (µm)   y meas. (µm)
10.0            12.54           7.940
20.0            21.79          13.51
40.0            40.90          14.51
80.0            80.17          16.31
160             161.4          20.89
320             319.7          15.82
640             639.6          15.31
Table 4. Translations measured using an incoherent speckle pattern.

The results obtained using an incoherent speckle pattern are considerably more accurate than any of those obtained using laser speckles. The RMS error for the x measurements is 1.34 µm and for the y measurements is 15.3 µm. Observation of the images obtained by this method showed that the speckles translated with much less change in their pattern than was observed with the laser speckles, which is consistent with the improved measurement results. The large amount of measured displacement in the y direction, however, indicates that the stage has a significant amount of vertical displacement associated with its horizontal travel. For comparison, another stage was used, a New Focus model 8095, and the results are presented in Table 5. These results show RMS errors for the x and y displacements of 1.97 µm and 3.22 µm, respectively. Clearly, this stage moves with less vertical displacement than the previously used one. It is interesting to note that the measured y displacement increases almost linearly from row to row of the table, even though the x displacement increases geometrically.
x trans. (µm)   x meas. (µm)   y meas. (µm)
8.00             9.339          -1.043
16.0            16.27           -0.1983
32.0            34.14            1.619
64.0            66.12            3.152
128            131.6             3.351
256            257.2             4.044
472            473.3             4.941
Table 5. Displacement for incoherent speckles with the New Focus 8095 stage.

Strain was simulated by reorienting the translation stage so that the surface moved toward the camera. An apparent strain of 2817E-6 was generated between two recordings by moving the object toward the camera by 1 mm at a distance of 355 mm. The camera lens was set to f/5.8, and 8-bit digitization was used. The two recordings were divided into 12 segments of 160x160 pixels each, four horizontal by three vertical. Each segment in one recording was processed with its corresponding segment in the other recording to obtain the slope of the unwrapped phase difference for that segment pair. Sets of slope values for four adjacent segments were used in Eqs. (8a) and (8b) to obtain six values of strain, three horizontal by two vertical. Table 6 presents the results.

Simulated strain: 2.817E-03

x strain:   2.829E-03   2.776E-03   2.845E-03
            2.792E-03   2.749E-03   2.802E-03

y strain:   2.765E-03   2.815E-03   2.837E-03
            2.765E-03   2.754E-03   2.789E-03
Table 6. Strain measurements using digital FFT specklegram analysis.

All six results for x strain and all six for y strain should be equal to 2.817E-03. The average of the six x values is 2.799E-03, and that of the y values is 2.787E-03; these differ from the correct value by 18E-6 and 30E-6, respectively. The standard deviations are 32E-6 and 30E-6, respectively.

5. Comparison to Image Correlation

The data set that generated the data presented in Table 5 was alternatively analyzed by image correlation via the equation

Cab(m,n) = F^-1{Fa Fb*},    (9)
where Cab(m,n) is the correlation of the two specklegram images, a and b, F^-1 is the inverse Fourier transform operator, and Fa and Fb are the Fourier transforms of the two specklegrams, the asterisk indicating the complex conjugate. This image correlation results in a peak whose displacement from zero equals the displacement of one image relative to the other. Several issues arise with this method of measurement. First, zero displacement for these operations normally lies in the upper left corner (for the DADiSP program), so the quadrants of the correlation array must be rearranged to center the zero-displacement point, and care must be taken in doing this to keep track of which point actually corresponds to zero displacement. Next, in order to measure the image displacement to subpixel accuracy, the discrete values calculated by Eq. (9) must be modeled by a continuous function and its maximum located between the discrete values. As pointed out by Sjödahl and Benckert [10], the function used to do this modeling will influence the results obtained. The model chosen here is a Gaussian function. An array of 10 by 10 values surrounding the peak was selected and the natural logarithms of those values calculated. Ideally, these logarithms should be fitted to a biquadratic function, which corresponds to fitting the data to a Gaussian; a cubic spline function was used instead, because that was available in the DADiSP program, and interpolation was performed to 1/100th of the spacing between the correlation values. For this data, the magnification was 12.56, so that the pixel spacing on the object was 92.94 µm, and interpolation to 1/100th of that spacing resulted in a resolution of 0.93 µm.

x trans. (µm)   Phase diff.     Phase diff.     Correlation     Correlation
                x meas. (µm)    y meas. (µm)    x meas. (µm)    y meas. (µm)
8.00             9.339          -1.043           6.5            -1.8
16.0            16.27           -0.1983         13.0            -0.93
32.0            34.14            1.619          30.7             0.93
64.0            66.12            3.152          66.0             1.8
128            131.6             3.351         128.3             1.8
256            257.2             4.044         258.4             2.8
472            473.3             4.941         474.0             2.8
Table 7. Comparison of displacement measurements obtained via the FFT shift theorem and via image correlation, using the same data as for Table 5.

6. Discussion and Conclusions

Based on these results, we may state that the process presented here, using incoherent speckles and 8-bit digitization, can measure displacements to within a few micrometers and strains to approximately 30 microstrain. In this setup, the gage length for the strain measurement is approximately 15 mm owing to the magnification of the lens system. With one-to-one imaging, the gage length could be reduced to about 1.2 mm, and the accuracy should remain the same. The process is considerably less accurate using laser speckles, probably due to speckle decorrelation. The laser speckle decorrelation most likely results from the relatively small aperture of the camera lens and the small fill factor of the detector array. The distinction between laser and incoherent speckles observed here should be observable with image correlation as well and should be investigated. The displacement results obtained by image correlation are shown here to be of similar accuracy to those of the method of transform phase subtraction. Each method has advantages and problems, however, due to the digital nature of the data. With the phase subtraction method, there is a clear connection between the phase difference of Fourier transforms and displacement via the shift theorem, and it is natural to fit the digital values obtained to a linear function of the transform coordinates. This method requires applying phase unwrapping to the wrapped phase difference of two transforms; however, the unwrapping method used here, via calculated phase unwrap regions, is extremely fast and robust. Although this whole process has not been integrated into a single program, there is no reason to believe that such an integrated program would not be as rapid and easy to implement as existing correlation programs.
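The correlation of Eq. (9), including the quadrant rearrangement needed to center the zero-displacement point, can be sketched in NumPy for an integer shift of synthetic data. Note that with the Fa·Fb* ordering of Eq. (9), the peak locates the shift of a relative to b, i.e., the negative of the shift applied to b:

```python
import numpy as np

rng = np.random.default_rng(2)
a = rng.random((128, 128))
b = np.roll(a, shift=(3, -5), axis=(0, 1))  # b is a shifted +3 rows, -5 columns

# Eq. (9): correlation via the inverse FFT of Fa * conj(Fb)
C = np.fft.ifft2(np.fft.fft2(a) * np.conj(np.fft.fft2(b))).real

# Zero displacement lies at the upper-left corner; fftshift centers it
C = np.fft.fftshift(C)
py, px = np.unravel_index(np.argmax(C), C.shape)
dy, dx = py - 64, px - 64  # peak displacement from the centered zero point

assert (dy, dx) == (-3, 5)  # shift of a relative to b, i.e., minus b's shift
```

Subpixel refinement would then interpolate a continuous model (e.g., the Gaussian fit discussed above) through the values surrounding this peak.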
The correlation method is straightforward in that it measures the translation needed to get the best correlation between the two images. Subpixel resolution can only be obtained, however, by interpolating between the discrete values obtained, and this is dependent upon the nature of the model of the continuous function used to interpolate between pixels. A Fourier series expansion was shown to work well in Ref. 9, and a cubic spline fit to the natural logarithm of the correlation values was shown to work well here; however, there really is no compelling argument for any model over another beyond its demonstrated accuracy. In any case, the two measurement methods are distinctly different and equivalent only to the extent that they give similar results when analyzing the same data. It is not the intent of this communication to present the FFT phase subtraction method as better than the correlation method but simply to present it as an alternative that, with future investigation, may possibly be shown to have advantages.

References

1. A. E. Ennos, "Speckle Interferometry," in Progress in Optics Vol. XVI, Emil Wolf, Ed. (North-Holland, Amsterdam, 1978), Chap. 4, pp. 235-290.
2. K. A. Stetson and G. B. Smith, "Heterodyne readout of specklegram halo fringes," Appl. Opt. 19, 3031-3033 (1980).
3. K. A. Stetson, "Speckle and Its Application to Strain Sensing," Proc. SPIE 353, 12-18 (1983).
4. K. A. Stetson, "The Use of Heterodyne Speckle Photogrammetry to Measure High Temperature Strain Distributions," Proc. SPIE 370, 46-55 (1983).
5. K. A. Stetson, "The Effect of Scintillation Noise in Heterodyne Photogrammetry," Appl. Opt. 23, 920-923 (1984).
6. K. A. Stetson, "Strain Measurement Using Heterodyne Photogrammetry of White-Light Specklegrams," Exp. Mech. 25, 312-315 (1985).
7. K. A. Stetson, J. Wahid, and P. Gauthier, "Noise-immune phase unwrapping by use of calculated wrap regions," Appl. Opt. 36, 4830-4838 (1997).
8. M. Sjödahl, "Digital Speckle Photography," in Digital Speckle Pattern Interferometry and Related Techniques, P. K. Rastogi, Ed., J. Wiley & Sons, New York, 2001, Chap. 5.
9. J. M. Huntley, "Automated Analysis of Speckle Interferograms," in Digital Speckle Pattern Interferometry and Related Techniques, P. K. Rastogi, Ed., J. Wiley & Sons, New York, 2001, Chap. 2, pp. 89-95.
10. M. Sjödahl and L. R. Benckert, "Electronic speckle photography: analysis of an algorithm giving the displacement with subpixel accuracy," Appl. Opt. 32, 2278-2284 (1993).
MEASUREMENT OF RESIDUAL STRESSES IN DIAMOND COATED SUBSTRATES UTILIZING COHERENT LIGHT PROJECTION MOIRÉ INTERFEROMETRY

C.A. Sciammarella*, A. Boccaccio**, M.C. Frassanito**, L. Lamberti** and C. Pappalettere**

* Northern Illinois University, Department of Mechanical Engineering, 590 Garden Road, DeKalb, IL 60115, USA. E-mail: [email protected]
** Politecnico di Bari, Dipartimento di Ingegneria Meccanica e Gestionale, Viale Japigia 182, Bari, 70126, ITALY. E-mail: [email protected], [email protected], [email protected], [email protected]
Abstract

Thin film technology is an area of great importance in current applications of opto-electronics, electronics, MEMS and computer technology. A critical issue in thin film technology is represented by the residual stresses that arise when thin films are applied to a substratum. Residual stresses can be very large in magnitude and may have detrimental effects on the role the thin film must play. For this reason it is very important to perform "online" measurements in order to control the variables influencing residual stress. The research work presented in this paper represents a first step towards the practical solution of such a challenging problem. A methodology to measure residual stresses is developed, utilizing reflection/projection moiré interferometry to measure deflections of thin coated specimens. Results are in good agreement with experimental values provided by well-established measurement techniques. A special optical circuit for the in situ measurement of residual stresses is designed so as to satisfy the constraints deriving from the tight geometry of the vacuum system utilized to carry out the deposition.
1. INTRODUCTION Thin film technology has great importance in opto-electronics, electronics, MEMS and computer technology applications. A critical issue in thin film technology is represented by the presence of residual stresses that develop when thin films are applied to a substratum. Residual stresses arise because of the existence of discontinuous interfaces, inhomogeneous thermal history during deposition or subsequent fabrication processes, and various imperfections induced by ion bombardment [1]. Residual stresses may significantly affect the mechanical properties and reliability of the thin film as well as the performance of thin-film based devices: in particular, high residual stresses will result in detrimental effects on the role that the thin film is designed to play. Experimental techniques for measuring residual stresses in thin films are critically reviewed in Ref. [2]. In general, there are two possible approaches to this problem: (i) lattice strain based methods, including X-ray diffraction and neutron diffraction; (ii) physical surface curvature-based methods. However, the measured values of residual stress may be quite different because the former methods provide local information while the latter provide average values. Lattice-based methods rely on the principle that residual strains, and hence residual stresses, are caused by the relative displacement between atomic planes: therefore, the variations of lattice spacing are measured. However, these methods are rather expensive in terms of experimental equipment and can be used only if the films are crystalline. Determination of residual stress through curvature measurements is relatively easy to perform from the experimental point of view. The average level of residual stress can be found by using the classical Stoney's equation [3,4], which is valid when the coating is much thinner than the substrate.
Nanoindentation is the most recent approach to the problem of measuring residual stresses in thin films: this is done by observing how the force-indentation curve changes with respect to a stress-free surface (see, for example, the discussion presented in Ref. [1] and the references cited in that paper). Among curvature measurement methods, non-contact optical techniques are preferable in view of their high sensitivity and because they do not alter the specimen surface. Classical interferometry (i.e., Newton's rings) [5] or moiré techniques [6] can be used for measuring curvatures. Reflection moiré measures the slope of a reflective surface and, through differentiation of the fringe pattern in the frequency space, the curvature of the surface. Projection moiré measures the height of a surface with respect to a reference plane. A system of lines is projected onto the specimen surface and is modulated by the curved specimen. The topography of the surface can be obtained from the phase difference generated by the modulation of the grating lines due to the surface curvature. The reconstructed surface can be fitted by a mathematical function and then differentiated in order to compute curvature values. Variables influencing residual stress could be controlled by performing in situ measurements of quantities such as deflections and curvatures that are directly related to stress. This work represents a first step towards the practical solution of such a challenging problem. For that purpose, a reflection moiré interferometry setup is developed. The
T. Proulx (ed.), Optical Measurements, Modeling, and Metrology, Volume 5, Conference Proceedings of the Society for Experimental Mechanics Series 9999999, DOI 10.1007/978-1-4614-0228-2_25, © The Society for Experimental Mechanics, Inc. 2011
novelty is in the fact that the optical setup is now utilized in the projection moiré mode: therefore, the experimental setup measures displacements rather than slopes. The validity of the optical setup developed in this paper is tested by measuring the residual stress developed in a diamond-like-carbon (DLC) thin film deposited on a quartz substratum via Plasma Enhanced Chemical Vapor Deposition (PECVD). The values of the residual stresses developed in the film and in the substrate are finally derived from the average curvature of the DLC specimen determined via moiré measurements. The optical circuit is then adapted to the vacuum system utilized to perform the deposition, taking care to satisfy the tight geometry constraints posed by the layout of the deposition reactor.
2. STRESS ANALYSIS OF THE THIN FILM In the Plasma Enhanced Chemical Vapor Deposition (PECVD) process for thin films, an electric discharge generates film precursors, such as neutral radicals and ions, by electron-impact decomposition. Significant residual stresses can develop in the coated specimen because the atoms passing from the gas phase to the substrate during the adsorption process do not reach their correct positions in the reticular structure finally formed. The energetic ions cause atoms to be incorporated into spaces in the growing film which are smaller than the usual atomic volume. This leads to expansion of the film outwards from the substrate. In the plane of the film, however, the film is not free to expand and the entrapped atoms cause macroscopic compressive stresses. Figure 1 shows the stress distributions developed in the coating and in the substrate by the deposition process. The coating/substrate system can be considered as a plate subject to bending. Since the substrate is much thicker than the coating, it presents the classical bi-triangular stress distribution, while the compressive stress field in the coating can be considered uniform.
Figure 1. Schematic representation of stresses developed in the coating and in the substrate
Following Kirchhoff's plate theory, the stress components can be expressed as:
\[
\begin{cases}
\sigma_{xx} = -\dfrac{E_s\,z}{1-\nu_s^2}\left(\dfrac{\partial^2 w}{\partial x^2} + \nu_s\,\dfrac{\partial^2 w}{\partial y^2}\right)\\[6pt]
\sigma_{yy} = -\dfrac{E_s\,z}{1-\nu_s^2}\left(\dfrac{\partial^2 w}{\partial y^2} + \nu_s\,\dfrac{\partial^2 w}{\partial x^2}\right)\\[6pt]
\tau_{xy} = -\dfrac{E_s\,z}{1+\nu_s}\,\dfrac{\partial^2 w}{\partial x\,\partial y}
\end{cases}
\qquad (1)
\]
where: w(x,y) is the out-of-plane displacement experienced by the middle plane of the substrate; z is the distance from the middle plane measured in the direction of the curvature radius; Es and νs are respectively the Young modulus and the Poisson ratio of the substrate material. If the thickness tf of the deposited film is much smaller than the thickness ts of the substrate (i.e. tf << ts), the average residual stress in the film can be computed with Stoney's equation:
\[
\sigma_{film} = \frac{E_s}{6(1-\nu_s)}\cdot\frac{t_s^2}{t_f}\cdot\left(\frac{1}{R}-\frac{1}{R_o}\right) \qquad (2)
\]
where R is the radius of curvature taken by the specimen after the deposition process while Ro is the radius of curvature prior to deposition. If one assumes that the initial curvature is not significant, Eq. (2) simplifies to:
\[
\sigma_{film} = \frac{E_s}{6(1-\nu_s)}\cdot\frac{t_s^2}{t_f}\cdot\frac{1}{R} \qquad (3mod)
\]
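The simplified Stoney relation above reduces to a one-line computation. The following is a minimal sketch, not from the paper; the numerical values are those reported later in the text for the quartz/DLC specimen (E = 94 GPa, ν = 0.17, ts = 2 mm, tf = 500 nm, R = 1.598 m):

```python
def stoney_stress(E_s, nu_s, t_s, t_f, R):
    """Average film stress from the simplified Stoney formula
    (negligible initial curvature, t_f << t_s). All inputs in SI units."""
    return E_s * t_s**2 / (6.0 * (1.0 - nu_s) * t_f * R)

# Values reported later in the paper for the quartz substrate and DLC film
sigma = stoney_stress(94e9, 0.17, 2e-3, 500e-9, 1.598)
print(f"{sigma / 1e9:.1f} GPa")  # ~94 GPa, consistent with the 93.9 GPa reported
```

The computed value agrees with the 93.9 GPa quoted in Section 4 to within rounding of the input data.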
3. EXPERIMENTAL SETUP In this study, a reflection moiré setup was developed in order to precisely measure the deflection produced by the deposition process. However, the novelty is that the reflection moiré setup is used in the projection moiré mode thus providing displacement values rather than surface slopes. Figure 2 shows the schematic of the optical set-up adopted for determining the out-of-plane displacement field w.
Figure 2. Schematic of the optical set-up used in the residual stress measurements
The set-up comprises two principal systems: the projection system (PS) and the acquisition system (AS). The optical axis of the projection system is inclined with respect to the optical axis of the acquisition system by the angle α=20°. The projection system includes the laser source L, the polarizer P, the microscope pin-hole system PH, a first lens L1, a grating G, a second lens L2, the iris IR and a third lens L3. The acquisition system includes a microscope M fixed on a CCD camera. The acquired images were processed by means of the Holo Moiré Strain Analyzer (HMSA) software developed by Sciammarella and his collaborators [7]. The coherent, polarized light beam generated by the 35 mW He-Ne laser passes through the polarizer P, which decreases its intensity. A 40× microscope objective expands the laser beam and a pin-hole system removes the noise produced by the diffraction phenomena that may occur during beam expansion. The pin-hole diameter is 10 μm. The exit pupil of the pin-hole system is located in the focal plane of the first lens L1, so the wave front that emerges from lens L1 is plane. The collimated beam passes through a Ronchi ruling grating G with 500 lines/in (corresponding to a nominal pitch of 50.8 μm). The beam then passes through a second lens L2, which produces in its focal plane, at the distance f2, the Fourier transform of the light wave diffracted by the grating G. In that focal plane an iris IR is placed, opened so as to pass the order 0 together with either the order +1 or −1. Higher diffraction orders were filtered out in order to reduce noise. The lens L3, which has the same focal length as lens L2 and is positioned so that the iris lies in its focal plane, collimates the light beam again. The collimated wave front carrying the filtered spectrum of the grating is projected onto the reference plane, which is basically a mirror onto whose surface the specimen SP to be analyzed is attached.
The light wave front that hits the mirror or the sample is reflected back to the sensor of the CCD camera. However, this does not mean that the utilized set-up works in the reflection moiré mode. In fact, the CCD camera is focused right on
the plane of the sample and not on the reference plane. Consequently, the distribution of light intensity I(x,y) recorded by the camera at each pixel (x,y) of the image can be expressed as:
\[
I(x,y) = I_0 + I_1\cos\!\left(\frac{2\pi}{p_j}\,w(x,y)\,\sin\alpha + \phi_c(x,y)\right) \qquad (3)
\]
where: I0 is the background intensity; I1 is the amplitude; pj is the pitch of the grating projected onto the specimen; α is the angle between the optical axis of the projection system and the optical axis of the acquisition system; φc(x,y) is the phase modulation term. Therefore, the intensity distribution I(x,y) encodes the out-of-plane displacement of the plate with respect to the reference plane, not the surface slope as in the classical case of reflection moiré. Preliminary analyses indicated that, if reflection moiré were adopted, fringe orders related to surface slope could not be distinguished from fringe orders related to out-of-plane displacement, leading to incorrect phase evaluation. The reason is that, in general, when the surface curvature is large, fringe orders can be distorted to such an extent that it is practically impossible to reconstruct the phase signal correctly. It should be noted that the iris aperture was regulated in such a way that the orders 0 and ±1 could pass. The choice of allowing the passage of either the order +1 or the order −1 (in addition to the background order 0) was made deliberately to account for the Talbot effect, i.e. the physical phenomenon that governs the projection of a grating onto the surface under investigation. The Talbot effect produces a periodic repetition in space of the projected grating; the periodicity depends on the grating pitch and on the wavelength of the light. A closely related effect is that at the focal distance f2 the diffraction orders ±1 recombine, generating a single grating; at any other distance, the diffraction orders ±1 separate, generating two different fringe systems that interfere and produce moiré fringes.
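Once the cosine argument above has been demodulated and unwrapped (done here by the HMSA software), the displacement w follows by a simple scaling. A hedged sketch of that inversion, not part of the HMSA code; the function name and the sample call values are illustrative:

```python
import numpy as np

def phase_to_deflection(phi_unwrapped, p_j, alpha_deg):
    """Invert the projection moire intensity relation: the modulating phase
    term (2*pi/p_j) * w * sin(alpha) gives w = phi * p_j / (2*pi*sin(alpha)).
    Works element-wise on scalar or array phase maps."""
    return phi_unwrapped * p_j / (2.0 * np.pi * np.sin(np.radians(alpha_deg)))

# Illustrative call: displacement corresponding to one full fringe order
# (phi = 2*pi), with the setup's quoted alpha = 20 deg and a projected
# pitch of 52.14 um as given in the text.
w_per_fringe = phase_to_deflection(2.0 * np.pi, 52.14e-6, 20.0)
```

In practice phi_unwrapped would be the unwrapped 2-D phase-difference map between specimen and reference plane.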
Since the plane containing the specimen cannot be perpendicular to the laser beam but must be rotated in order to allow light to be reflected back to the sensor, it follows that, because of the Talbot effect, if both orders ±1 were allowed to pass, different fringe orders would be created and would interfere, altering the signal of the function w(x,y). Conversely, since only the order +1 or the order −1 was allowed to pass together with the background order, this problem was eliminated, making it easier to reconstruct the phase function. The sensitivity Δs of the moiré setup is:
\[
\Delta s = \frac{p_j}{\tan\theta_1 + \tan\theta_2} \qquad (4)
\]
where: pj is the pitch of the projected grating; θ1 is the angle formed by the optical axis of the projection system with the direction normal to the reference plane; θ2 is the angle formed by the optical axis of the recording system with the direction normal to the reference plane. From the schematic of the moiré setup shown in Figure 2 it follows that α=θ1+θ2; since θ1=θ2, α=2θ1. As mentioned above, the classical reflection moiré technique directly outputs the slope of the surface, i.e. the first derivative of the displacement function w(x,y). This quantity is obtained optically following a principle similar to finite differences. Consequently, the coordinate at which the derivative takes a given value remains undetermined within the range over which the finite difference is evaluated. Since the specimen is very small, the error that could be introduced by this approach would be very significant. For this reason, it was chosen to use the reflection moiré setup as a method to determine displacements rather than derivatives. A microscope (denoted as M in the schematic of Figure 2) was coupled with a CCD camera able to focus on planes located at distances greater than 50 cm. The field of view of the sensor was as large as the DLC specimen under investigation (i.e. 1 cm²). The microscope includes an auxiliary lens and a 2× lens to maximize resolution. Figure 3a shows the assembly view of the optical setup while Figure 3b shows a detail of the reference plane with the specimen attached. A high-precision frame onto which the reference plane is fixed allowed the reflected beam to be aligned with the axis of the CCD camera.
4. MOIRÉ LABORATORY MEASUREMENTS The moiré setup described in Section 3 was tested in the residual stress measurement of a diamond-like carbon thin film deposited on a quartz substrate. The specimen was circular in shape with a diameter of 1 cm. The nominal thickness of the substrate was 2 mm (i.e. 2000 μm) while the nominal thickness of the coating was 500 nm. Average values of the elastic properties of the quartz substrate were selected from material databases as follows: E=94 GPa and ν=0.17.
The grating utilized in the experiments has 500 lines/in; the corresponding nominal pitch is hence 50.8 μm. The angle of illumination θ1≈13° was measured by analyzing the change in spatial frequency of the projected grating when no specimen is mounted on the reference mirror. Care was taken to realize the nominal reflection condition θ1=θ2 as closely as feasible. The corresponding projected pitch pj = p/cosθ1 is 52.14 μm. From Eq. (4), the sensitivity can be computed as pj/(2 tanθ1), that is 112.9 μm.
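The projected pitch and sensitivity quoted above follow directly from the geometry; a quick numerical check using the values from the text:

```python
import math

p = 50.8e-6                  # nominal Ronchi grating pitch (500 lines/in)
theta1 = math.radians(13.0)  # measured illumination angle, with theta1 = theta2

p_j = p / math.cos(theta1)                 # pitch projected onto the specimen
delta_s = p_j / (2.0 * math.tan(theta1))   # sensitivity for theta1 = theta2

print(f"p_j = {p_j * 1e6:.2f} um, sensitivity = {delta_s * 1e6:.1f} um")
# p_j = 52.14 um, sensitivity = 112.9 um
```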
Figure 3. Moiré setup utilized in the residual stress measurement: a) 3D assembly view; b) detail of the reference plane and the DLC specimen.
Prior to executing the moiré test, the curvature of the DLC specimen was measured by means of the Newton's rings interferometric technique. A total of 33 rings were seen to form on the specimen surface. The specimen was positioned so as to lie in the center of the image field. The total deflection δDLC was measured as the product of the number of interference fringes formed (i.e. 33) and the measurement sensitivity, which is half the wavelength of the He-Ne laser light (λ=632.8 nm) used in the experiments: δDLC = 33 × 0.3164 = 10.44 μm. Since the maximum deflection to be measured was about 1/10 of the sensitivity of the moiré setup, the nominal grating pitch was rated adequate for carrying out the moiré experiments. Figures 4a and 4b show respectively the phase maps obtained for the specimen surface and the reference plane. The corresponding phase difference is shown in Figure 4c. As expected, the phase difference is less than one order. However, it is very difficult to achieve parallelism between the specimen surface and the reference plane. This effect was corrected with a MATLAB routine and the displacement map finally obtained is shown in Figure 5. The resulting out-of-plane displacement measured by the moiré setup is 11 μm, a value very close to the corresponding displacement measured by means of the Newton's rings interferometric technique. It should be noted that some distortion was observed in the shape of the rings, but no correction routine was implemented for that pattern since the goal of the interferometric measurement was just to provide information on the order of magnitude of the expected deflection. This may explain the difference observed between the two measurements.
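The Newton's rings arithmetic above can be verified directly (values from the text):

```python
lam = 632.8e-9   # He-Ne laser wavelength in meters
n_rings = 33     # interference rings counted on the specimen surface

# Each ring corresponds to lambda/2 of out-of-plane deflection
delta = n_rings * lam / 2.0
print(f"{delta * 1e6:.2f} um")  # 10.44 um
```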
The MATLAB datafile was further processed in order to extract profiles and derive the value of the radius of curvature. The radius of curvature of the DLC specimen extrapolated from MATLAB is 1.598 m. By substituting this value in Stoney's formula (3mod) and assuming a film thickness of 500 nm, which is a rather usual value for DLC coatings, it follows that the compressive residual stress σfilm developed in the film is 93.9 GPa. This value seems very large compared with data usually reported in the literature, which indicate film stress values in the range of a few GPa. The out-of-plane displacement measured with moiré (11 μm) is very close to the deflection computed with the Newton's rings (10.44 μm); the difference is about 5%.
Figure 4. a) Phase of the specimen surface; b) Phase of the reference plane; c) Phase difference
Figure 5. Distribution of out-of-plane displacement determined for the DLC specimen
Since the two measurements are completely independent, one can consider the value obtained from the moiré as correct. The discrepancy in the value of the residual stresses obtained with the utilized sample is currently under investigation.
5. IN SITU IMPLEMENTATION OF THE MOIRÉ SETUP A special optical circuit was designed for the in situ monitoring of residual stress developed during the thin film deposition process. Figure 6a shows the device for film deposition equipped with the moiré system developed in this research. The moiré device is now physically located in the Thin Films Laboratory of the Italian National Agency for New Technologies, Energy and Sustainable Economic Development (ENEA), Mesagne (Italy). The tight constraints on the geometric layout of the deposition device required some modifications of the optical setup with respect to the laboratory setup described in the preceding paragraphs. The reactor includes two large windows that were used as an access window for carrying the illuminating wave front and as an outlet window for recording fringe patterns. The projection system was mounted onto a prismatic arm, rigidly supported so as to realize an angle of illumination of 10° with respect to the horizontal plane. The grating is then projected onto the specimen to be analyzed by the composite prism system schematized in Figure 6b. Preliminary analyses indicated that the optimal grating spatial frequency is 300 lines/in, which corresponds to a nominal pitch of 84.7 μm. Although this decreases the sensitivity to about 200 μm per fringe, it is still possible to sense displacements of the order of a few microns [8].
Figure 6. a) Device for thin film deposition with 3D view of the projection system; b) Schematic of the prism that carries the projected grating onto the specimen surface.
The deposition reactor has three plates that can rotate about their vertical axes. These plates are in turn placed on a rotating platform. The composite prism system was fixed above the plate closest to the exit window. In order to simplify measurements and avoid damage to the prism surface, film deposition was allowed only on the plate located at the longest distance from the exit window. Furthermore, revolution of the plates about their axes was disabled. One concern was that the luminescence induced by the deposition process might affect the intensity distribution of the modulated grating recorded by the CCD sensor. However, this was found not to be a problem, both because deposition was done far away from the prism and because the light emitted in the deposition process was light violet, thus not overlapping with the spectral band recorded by the CCD camera.
6. SUMMARY AND CONCLUSIONS The research work presented in this paper is the first step towards the application of moiré to the in situ measurement of residual stresses generated in thin films during the deposition process. A reflection moiré method working in the projection mode was developed and tested on a diamond-like carbon film. The largest deflection measured for that specimen was found to be in good agreement with interferometric measurements independently carried out. The
corresponding stress value in the film is however large compared to values usually reported in the literature. This does not invalidate the present measurements, which were totally consistent with other experimental data in terms of deflection and radius of curvature. The setup was then implemented in a real device for thin film deposition. The original setup was adapted to the rigid geometric layout of the reactor. Measurements carried out on four ZrN specimens with different film thicknesses (from 200 to 400 nm) and different deposition temperatures (from room temperature to 600°C) indicated the presence of compressive residual stress values of about 30 GPa. The corresponding measurements with Newton's rings provided instead values of residual stress ranging between 22 and 50 GPa. These values are of the same order of magnitude as data recently reported in the literature [9]. Furthermore, significant statistical dispersion is reported for different conditions of deposition.
References
[1] Lee Y.H., Takashima K., Kwon D. Micromechanical analysis on residual stress-induced nanoindentation depth shifts in DLC films. Scripta Materialia, 50, 1193-1198, 2004.
[2] Tomasella E., Thomas L., Meunier C., Nadal M., Mikhailov S. Coupled effects of bombarding ions energy on the microstructure and stress level of RF PECVD a-C:H films: correlation with Raman spectroscopy. Surface and Coatings Technology, 174-175, 360-364, 2003.
[3] Stoney G.G. The tension of metallic films deposited by electrolysis. Proceedings of the Royal Society of London Series A, 82, 172-175, 1909.
[4] Von Preissig F.J. Applicability of the classical curvature-stress relation for thin films on plate substrates. Journal of Applied Physics, 66, 4262-4268, 1989.
[5] Born M., Wolf E. Principles of Optics, Seventh Edition. Cambridge (UK): Cambridge University Press, 2002.
[6] Sciammarella C.A. 2001 William M. Murray Lecture. Overview of optical techniques that measure displacements. Experimental Mechanics, 43, 1-19, 2003.
[7] General Stress Optics Inc. Holo-Moiré Strain Analyzer software HoloStrain™, Version 2.0. Chicago, IL (USA), 2008. http://www.stressoptics.com
[8] Sciammarella C.A., Lamberti L., Boccaccio A., Sciammarella F.M. High precision contouring with moiré and related methods: a review. Strain, 2011 (in press).
[9] Meng Q.N., Wen M., Qu C.Q., Hu C.Q., Zheng W.T. Preferred orientation, phase transition and hardness for sputtered zirconium nitride films grown at different substrate biases. Surface and Coatings Technology, 205, 2865-2870, 2011.
Automatic Acquisition and Processing of Large Sets of Holographic Measurements in Medical Research
Ellery Harrington1, Cosme Furlong1,2,3, John J. Rosowski2,3,4, and Jeffrey T. Cheng2,3
1 Center for Holographic Studies and Laser micro-mechaTronics, Department of Mechanical Engineering, Worcester Polytechnic Institute, 100 Institute Rd, Worcester, MA 01609, USA
2 Eaton-Peabody Laboratory, Massachusetts Eye and Ear Infirmary
3 Department of Otology and Laryngology, Harvard Medical School, Boston, MA 02114, USA
4 Speech and Hearing Bioscience and Technology Program, MIT-Harvard Division of Health Sciences and Technology, Cambridge, MA 02139, USA
ABSTRACT We are developing an advanced computer-controlled digital opto-electronic holographic system (DOEHS) with the ability to operate in several modalities, including optoelectronic holographic microscopy, lensless digital holography, speckle pattern interferometry, and fringe projection. The DOEHS is being designed for medical applications requiring full-field-of-view information of shape and deformations to aid in the diagnosis and investigation of specific disorders. Particular features of the DOEHS include the capabilities to rapidly acquire and quantitatively process a relatively large amount of interferometric data. Therefore, automatic procedures for acquisition of images or video, computation and application of quality metrics, and phase unwrapping are applied with minimal user interaction. The results are catalogued with relevant metadata in a database allowing for massive analysis and comparison between subjects or patients. Currently, the DOEHS is being developed to measure acoustically induced deformations of the eardrum of several species, including humans. Measurements are used in diagnosing middle-ear conductive disorders and investigating the causes of failure of middle-ear surgical procedures. We present representative measurement procedures and results. Keywords: Medical Data, Automation, Digital Holography, Otolaryngology, Holographic Processing, Phase Unwrapping
T. Proulx (ed.), Optical Measurements, Modeling, and Metrology, Volume 5, Conference Proceedings of the Society for Experimental Mechanics Series 9999999, DOI 10.1007/978-1-4614-0228-2_26, © The Society for Experimental Mechanics, Inc. 2011
Introduction In the medical research field, among others, recording and storing large amounts of data is often required for a variety of purposes. Achieving this effectively has been an important topic of discussion for decades, in both the medical and computing fields [1,2]. Processing these data can be a complex task, requiring a series of events to properly manage them. When a research process requires the acquisition of image or video data, to be effective these data should be stored in a way that allows them to be easily queried based on their properties and the circumstances under which they were acquired.
1. Area of Research The tympanic membrane (TM), also known as the eardrum, is the part of the middle ear that transmits sound from airborne compressional waves to the cochlea in the inner ear [3,4]. Most existing studies of the TM are limited to measurements at a single point or a series of points [5]. While this is a useful approach for research and diagnosis, a large field of view can provide a much higher level of detail about the complex modes of vibration of the TM and may be more useful for diagnosing TM and middle-ear disorders, as well as for determining the effectiveness of surgical procedures. By generating quantitative full-field-of-view displacement maps, holography can provide doctors with important information about the TM, both experimental and clinical, not obtainable through other methods [6]. To perform these measurements a device, the optoelectronic holographic otoscope, is in development to view the shape and motions of the TM using laser holography. The first generation of this device has been utilized for research purposes, and the second generation, with hardware for recording patients' TMs, has recently been deployed for additional research, with plans to record patients in the coming months.
2. Overview of Hardware The optoelectronic holographic otoscope (OEHO) system comprises four major subsystems: (1) laser delivery system, (2) otoscope head, (3) positioning system, and (4) the image processing computer. Together, these subsystems give the ability to measure the vibrations of a human tympanic membrane, located in the middle ear. Fig. 1 shows the first three subsystems, with the otoscope head mounted on the positioning arm, which is used to hold it steady while increasing the range of motion of the device during positioning.
Fig. 1. Opto-electro-mechanical components of OEHO deployed at the Massachusetts Eye and Ear Infirmary (MEEI): (1) is the laser delivery system; (2) is the otoscope head, and (3) is the positioning system.
The fourth major subsystem, the image processing computer, connects with several of the components in the laser delivery system and the otoscope head, and interfaces directly with the camera (CCD) in the otoscope head. In addition, the computer communicates with a function generator, which provides a digital pulse signal to the acousto-optic modulator (AOM) synchronized with a sinusoidal stimulus signal to the sound presentation system (SPS), which contains a speaker and sound delivery mechanisms. The computer is also connected via a data acquisition card (DAQ) to the camera and a piezo-electric transducer (PZT) to allow for phase-shifting synchronized with each camera frame [6,7,9]. The otoscope head does not contain imaging lenses, so, using a series of algorithms based on the Fresnel-Kirchhoff integral, the raw phase-shifted images are reconstructed into a valid digital hologram, ultimately similar to those generated by a lens-based system [7]. Without the lenses and the alignment mechanisms associated with them, the lensless system allows for much better packaging densities. Due to the lack of lenses, the system does not require calibration for distortion reduction of active optical elements. In addition, recorded videos and images can be reconstructed at distances different from the one used when recording, allowing for the determination of shape or deformations on multiple planes, focusing each one separately from a single recording [8,9]. This OEHO system was designed specifically for measuring the human TM, but has other possible uses: its ability to be positioned and held steady in various orientations, its small imaging tip, and its focusing by software make the system an ideal mechanism for applications requiring measurements in confined volumes.
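The paper does not give the reconstruction code; the following is a generic single-FFT Fresnel reconstruction sketch of the kind commonly used in lensless digital holography, illustrating how a single recording can be refocused at different distances simply by changing d. All names and sampling values are illustrative, not the OEHO implementation:

```python
import numpy as np

def fresnel_reconstruct(hologram, wavelength, d, dx):
    """Reconstruct a hologram at propagation distance d (single-FFT Fresnel
    method). hologram: 2-D real or complex array sampled at pixel pitch dx.
    Returns the complex field in the reconstruction plane."""
    N, M = hologram.shape
    y, x = np.indices((N, M))
    x = (x - M / 2) * dx
    y = (y - N / 2) * dx
    # Quadratic phase factor (chirp) applied in the hologram plane,
    # followed by a single 2-D FFT
    chirp = np.exp(1j * np.pi / (wavelength * d) * (x**2 + y**2))
    return np.fft.fftshift(np.fft.fft2(hologram * chirp))

# Refocusing at a different plane only means re-running with a new d,
# which is why shape and deformation can be recovered plane by plane
# from a single recording.
field = fresnel_reconstruct(np.ones((256, 256)), 632.8e-9, 0.1, 6.7e-6)
```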
3. Control and Acquisition Software To facilitate synchronized control of the hardware of the OEHO, the control software, called LaserView, is used [9]. It provides a user-friendly interface to receive a live digital video stream from the camera at rates in the range of 10 to 210 frames per second, depending on the speed, exposure time, and resolution required for the experiment being performed. LaserView is programmed in Visual C++ and has the ability to connect to one of five separate camera interfaces, a DAQ card, a function generator, laser controllers, and other hardware for other applications. Fig. 2 shows a diagram of the basics of LaserView's operation, from the digital camera to the image window, using multiple threads. ImageEffectProcessor is a thread that takes the raw images from the camera and processes them to show interferometric data.
Fig. 2. Sequence of events in LaserView, from camera to display of quantitative holographic data [9].
Fig. 3. Screenshot of LaserView’s main window, with descriptions of its primary features.
Fig. 3 shows a screenshot of the main LaserView window. The buttons to the left each have context menus, allowing for settings for each section, as well as the ability to open tool windows for specific tasks. When running, LaserView can display live images, unprocessed from the camera, or processed images in 3 modes of operation:
(a) Time-averaged, for viewing the full range of vibration of the object. This mode produces fringes showing the shape of vibrations on the object of interest due to a stimulus. It is used for rapid identification of modes of vibration.
(b) Double-exposure, for quantitative measurements of shape and deformation between two loading states. This mode takes a set of phase-shifted images at the reference state of the object and a set of phase-shifted images at the deformed state of the object to compute interferometric data.
(c) Stroboscopic mode, for viewing the position of the object at a precise segment of the stimulus. This mode, which utilizes the double-exposure technique and the AOM synchronized to a stimulus on the object of interest, provides quantitative maps indicating the relative change in the object's shape and position from a reference state. Using a reference where the object is at rest can produce the position of the entire TM at multiple points of its vibration.
With LaserView, the video from the camera is processed in real time to show either the modulation level or the optical phase map of the object, based on four sequential phase-stepped images from the camera [9]. LaserView can save a single set of these images, images at specific points of interest (e.g., points of the vibration in stroboscopic mode), or a continuous video stream, which can later be reconstructed and processed at any point in the video. Images and video are saved in proprietary formats (RTI for images and LVVID for video) to allow for up to 16-bit grayscale camera raw images, as well as 32-bit floating-point images for saving phase information in radians or actual units (e.g., µm). Every image, whether individual or in a video file, contains application-specific metadata recording all relevant information about the camera's settings, reconstruction distance, active stimulus, and many other parameters.
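The phase and modulation computation from four phase-stepped frames can be sketched as follows, assuming 90° phase steps between frames (the exact stepping and sign conventions of LaserView's implementation are not stated here):

```python
import numpy as np

def phase_and_modulation(i1, i2, i3, i4):
    """Four-step phase shifting: i1..i4 are camera frames recorded at
    reference-phase steps of 0, 90, 180, and 270 degrees.  Returns the
    wrapped optical phase in (-pi, pi] and the modulation level."""
    phase = np.arctan2(i4 - i2, i1 - i3)
    modulation = 0.5 * np.sqrt((i4 - i2) ** 2 + (i1 - i3) ** 2)
    return phase, modulation
```

With frames of the form I_k = A + B cos(phi + k*pi/2), the differences i4 - i2 and i1 - i3 isolate 2B sin(phi) and 2B cos(phi), so the background intensity A cancels and the arctangent recovers phi directly.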
4. Post-Processing Software

While LaserView provides an immediate view of the processed images, a post-processing application is being developed: HoloStudio. HoloStudio is a separate program that loads and processes holographic data by applying digital holographic algorithms and filters.

4.1. Filters

HoloStudio implements a collection of filters that allow the user to convert the phase data into high-quality shape maps. These are applied after the digital holographic reconstruction algorithms. Notable filters include:
• Unwrapping, to remove discontinuities in phase data
• Masking, based on simple shapes, external image files, or automatic detection based on modulation levels
• Median, mean, Lee, and FFT filters, to reduce noise prior to unwrapping
• Scaling, to give real units (e.g., µm) to the x, y, and z axes
• A color map filter, which applies a rainbow or other color scale to the image
• A 3D plot filter, which uses OpenGL to plot a wireframe or solid model of the recorded surface
Each filter has a set of options that are configured and applied to the image, and the filter is added to a list. This list of filters remains active and can be reordered as needed. The list of filters and their options can also be exported and imported later, to apply a common set of filters to separate groups of data. The image is processed through the filters in order, and the resultant image is displayed. The filter list gives advantages over a standard image editing program, in that it allows testing various filter combinations and processing multiple images with the same filter combination.

4.2. Phase unwrapping algorithm

The phase map is computed with the arctangent function, resulting in values in the range [-π, π]. This produces phase discontinuities that need to be removed by phase unwrapping [10,11]. HoloStudio implements an algorithm based on a flood fill procedure with
a tolerance to find like-areas, which are subsequently ordered by finding the difference between pixels on the border of each area [9]. The algorithm is implemented in four steps:
(1) Start at a seeding point and work outwards. If any pixel p is found that is not yet in a group, a recursive flood fill is performed starting at p, adding neighbors p′ to a common group if |p − p′| < t, where t is a given tolerance in radians. When no additional pixels can be added to the group, and if the number of pixels in the group is less than or equal to the minimum group size g, that group is discarded and those pixels are marked as null. Repeat step (1), continuing to scan for pixels that are not assigned a group, until all pixels are in a group or discarded.
(2) For each group, all neighbor groups are identified, and a record is kept of the difference between the neighboring pixels of that group and each of its neighbors.
(3) The first group (the group of the seed point) is assigned level L = 0 (its values will not be modified). For each group (in the order they were created), the neighbors of the group are examined to find the longest border with a neighbor. That neighbor is then assigned an L relative to the group whose level is known. The difference d between a group and its neighbor along the border is compared: if d < −π, the neighbor is assigned a level one less than the group's. If d > π, it is assigned one level higher. Otherwise, the neighbor is assigned the same L value as the original group.
(4) New pixel values are assigned by adding 2πL to the original values.
This algorithm normally operates about 10 to 50 times faster than the previously used unwrapping algorithm, depending on the settings applied and the complexity of the image. This speed is essential when processing large numbers of images, as it can reduce the processing time by several orders of magnitude.

4.3. Exporting and Batch Processing

Images can be saved after unwrapping and filtering into various formats, including RTI, JPEG, or LVVID. When saving in RTI or LVVID formats, the metadata from the recording is preserved and augmented by the filters applied. A batch processing feature in HoloStudio is under development, which will automatically process an entire series of acquired images, or detect optimal frames from a recorded video to be processed, and then save the results to a video file or individual files. Fig. 4 shows HoloStudio's ability to take a wrapped optical phase image (a), unwrap it, and apply filters to obtain a 3D plot of the object (b).
Fig. 4. Stroboscopic measurements of acoustically excited deformations on a thin latex membrane: (a) wrapped phase of images obtained from LaserView; (b) 3D plot generated by HoloStudio.
Fig. 5. HoloStudio interface with a color map applied to unwrapped phase, with descriptions of its primary features.
Fig. 5 gives an overview of the major areas of the HoloStudio interface. Modifying any of the settings automatically updates the large image. Other dialogs and controls, such as cross-section and batch processing, are available through the menus.
5. Data Analysis and Storage Holographic data capture is handled by LaserView, while the processing for analysis is handled by HoloStudio. After processing is complete, analysis and organization of the raw and processed data are required for further analysis.
The RTI and LVVID formats contain metadata that gives enough information for applications to determine which types of processing, display, etc. are relevant for the data. To perform experimental analysis on this data, a Matlab toolbox [12] for these proprietary formats is also used. This toolbox allows full access to all metadata fields, as well as the bitmaps stored in the files. Currently, algorithms are being developed and optimized in Matlab, and will later be implemented as part of HoloStudio. The streamlined process of capturing, processing, and analysis, all with minimal user interaction, allows faster and more efficient data analysis and handling, as required by future users trained in the medical field.
6. Representative Results
Fig. 6. Selected time-averaged holograms from four specimens using OEH [13].
Fig. 7. Cadaveric chinchilla TM in stroboscopic mode at a frequency of 1.085 kHz: (a) unwrapped phase map; (b) 3D rendering. Peak-to-peak displacement is 1160 nm.
7. Conclusions

LaserView provides a means of acquiring and processing images using stroboscopy, controlling the hardware in the OEHO and performing digital holography algorithms on the camera's imagery to obtain a wrapped phase map of the TM's deformation under an applied stimulus. HoloStudio processes the acquired images using an unwrapping algorithm and filters, resulting in an unwrapped phase map and, ultimately, the actual shape or deformation of the object in full field-of-view. Once these images are obtained, there is tremendous potential to construct a database of patients' TM vibration patterns, allowing surgeons to be better informed about the operations to be performed. This database and the accompanying software may also provide previously unknown information about the behavior of the human TM.
8. Future Work

Storage of these data is currently handled through a standard file system, requiring operators of the system and researchers to keep track of the location of all experimental data for comparison and analysis. To improve the process of research by comparison of gathered data, another tool, integrated into HoloStudio, is under development. Each experiment that is performed results in two primary data types: raw imagery from the camera and holographically processed, unwrapped imagery. Both of these are stored on the system, with references to them in a database. The database keeps a record of every image, and every frame of video that exists from that experiment, along with metadata about the stimulus, conditions, etc. Meanwhile, a comprehensive database of the experiments and examinations is constructed, containing relevant patient information such as age and hearing capabilities. The ultimate goal of this database is to be able to query for sets of conditions (e.g., male patients, age 50 to 60, with either normal or diminished hearing) and find the differences between the sets' results. The database will also be able to compare the results obtained through a query with the results of a patient being diagnosed. Results from pre- and post-surgery examinations can be compared to find differences in the TM's behavior. Using these database queries, results of stroboscopic measurements can be compared quantitatively, comparing the shape of the TM's deformation at points of its vibration pattern, as well as the maximum peak-to-peak deformation, across patients. In addition to the database, additional work has been started on further optimizing acquisition time and processing time, and on conditional or synchronized acquisition, depending on metrics of the acquired images or external indicators such as heartbeat. Automated processing and analysis of acquired frames immediately after acquisition is also in progress, to allow near-instantaneous results.
Acknowledgements This work has been funded by the National Institute on Deafness and Other Communication Disorders (NIDCD), the Massachusetts Eye and Ear Infirmary (MEEI), and the Mittal Fund. The authors also gratefully acknowledge the support by all members of the CHSLT and researchers at the MEEI.
References [1] Shortliffe, E.H. and Barnett, G. O. “Medical data: their acquisition, storage, and use,” Medical informatics: computer applications in health care, Edward H. Shortliffe, Leslie E. Perreault, Gio Wiederhold, and Lawrence M. Fagan (Eds.). Addison-Wesley Longman, Boston, pp 37-69, 1990. [2] Brooks, R., Grotz, C.. “Implementation Of Electronic Medical Records: How Healthcare Providers Are Managing The Challenges Of Going Digital.” JBER, 8, Dec. 2010. [3] Rosowski, J. J., “Models of external- and middle- ear function,” Auditory computation, Hawkins, H. L., McMullen, T. A., Popper A. N., and Fay, R. R., eds, Springer-Verlag, New York, pp 15-60, 1996. [4] Rosowski, J. J., Cheng, J. T., Ravicz, M. E., Hulli, N., Hernandez-Montes, M., Harrington, E., Furlong, C., “Computer-assisted time-averaged holograms of the motion of the surface of the mammalian tympanic membrane with sound stimuli of 0.4–25 kHz”, Hear. Res.253: 83–96, 2009. [5] Whittemore, K. R., Merchant, S. N., Poon B. B., and Rosowski, J. J., “A normative study of tympanic membrane motion in humans using a laser Doppler vibrometer (LDV)”, Hear. Res. 187(1-2):85-104, 2004. [6] Hulli, N., Development of an optoelectronic holographic otoscope system for characterization of sound-induced displacements in tympanic membranes, Department of Mechanical Engineering, Worcester Polytechnic Institute, Worcester, MA, 2008. [7] Hernández-Montes, M., Furlong, C., Rosowski, J. J., Hulli, N., Harrington, E., Cheng, J. T., Ravicz, M. E., and Santoyo, M., “Optoelectronic holographic otoscope for measurement of nanodisplacements in tympanic membranes,” J Biomed Opt.; 14(3), 1-1–1-9 2009. [8] Dobrev, I., Balboa, M., Fossett, R., Furlong, C., Harrington, E., “MEMS for real-time infrared imaging,” Proc. SEM, In Press, 2011. [9] Harrington, E., Dobrev, I., Bapat, N., Flores, J. M., Furlong, C., Rosowski, J. J., Cheng, J. 
T., Scarpino, C., Ravicz, M., “Development of an optoelectronic holographic platform for otolaryngology applications,” Proc. SPIE, 7791, 2010. [10] Bushman, T., Gennert, M.A., Pryputniewicz, R.J., “Phase unwrapping by least squares error minimization of phase curvature,” Computer Science, Worcester Polytechnic Institute, Worcester, MA, 1993. [11] Ghiglia, D.C., Pritt, M.D., Two-Dimensional Phase Unwrapping. Wiley, New York, 1998. [12] Matlab, The MathWorks, Inc., Natick, MA. [13] Furlong, C., Hernández-Montes, M. S., Hulli, N., Cheng, J. T., Ravicz, M. E., and Rosowski, J. J., “Development of an optoelectronic holographic otoscope for characterization of sound-induced displacements in tympanic membranes,” Proc. 31st Midwinter Meeting of the ARO, Phoenix, AZ, 2008.
Adaptative reconstruction distance in a lensless Digital Holographic Otoscope
J. M. Flores-Moreno*1,4, Cosme Furlong1,2,3, and John J. Rosowski2,3

1 Center for Holographic Studies and Laser micro-mechaTronics, Worcester Polytechnic Institute, 100 Institute Rd, Worcester, MA 01609, USA;
2 Eaton-Peabody Laboratory, Massachusetts Eye and Ear Infirmary, 234 Charles Street, Boston, MA 02114, USA;
3 Speech and Hearing Bioscience and Technology Program, MIT-Harvard Division of Health Sciences and Technology, 77 Massachusetts Av., Cambridge, MA 02139, USA;
4 Centro de Investigaciones en Optica A. C., Loma del Bosque 115, Leon, Gto 37150, Mexico.
ABSTRACT

We are developing a Digital Optoelectronic Holographic System (DOEHS) for measurements of the shape and acoustically induced deformations of the human tympanic membrane (TM) in the clinic. Such measurements will be used to perform quantitative diagnosis of the middle ear. The DOEHS platform consists of laser-delivery illumination (IS), optical head (OH), image-processing computer (IP), and positioning arm (PS) subsystems. In particular, the OH of the DOEHS has been configured as an in-line, lensless, holographic arrangement to quantify deformations of the TM when it is subjected to controlled sound excitation. Holographic information is recorded by a digital camera and reconstructed numerically by the Fresnel integral approximation. Accurate measurements are achieved when TM samples are imaged within the depth of focus of the OH, which requires the selection of the optimal numerical reconstruction distance. In this paper, we discuss an experimental approach to identify the best reconstruction distance, based on the position of the reference beam (RB) within the OH and its associated phase factor within the Fresnel integral approximation. Results of these investigations are being used to further optimize the OH design of the DOEHS system and to improve the numerical reconstruction algorithms used. Representative experimental measurements are shown.
Keywords: holography, digital holography, numerical reconstruction, Fresnel integral.
T. Proulx (ed.), Optical Measurements, Modeling, and Metrology, Volume 5, Conference Proceedings of the Society for Experimental Mechanics Series 9999999, DOI 10.1007/978-1-4614-0228-2_27, © The Society for Experimental Mechanics, Inc. 2011
1. Introduction
As in conventional holography, digital holography uses two steps to recover intensity and phase information: recording and reconstruction. Recording is done with a camera, and reconstruction is performed numerically, which can be done at the speed of the camera while allowing focusing by software. A limitation of digital holography is its spatial resolution, which is constrained by the pixel size and dimensions of the camera's detector. We are developing a digital optoelectronic holographic system (DOEHS) for quantitative measurements of shape and displacements in full field-of-view and non-contact mode. The DOEHS is designed to measure nanoscale displacements and vibrations of small surfaces subjected to controlled excitations.
The core of the DOEHS is the optical head (OH) subsystem, arranged in a compact in-line holographic configuration that uses phase shifting and no imaging lenses. With the in-line configuration, the entire array of the detector is used efficiently for image reconstruction, making the holographic information free of zero-order and conjugate images [1-3]. In the overall reconstruction process, the reference beam (RB) has special significance because it sets parameters such as the reconstruction distance as well as the reference phase necessary for numerical reconstruction. In this paper, we discuss an experimental approach to identify an adaptive reconstruction distance based on the position of the RB within the OH and its associated phase factor within the Fresnel integral approximation. The adaptive reconstruction distance allows us to read the position of the object even though the RB is not coplanar with the object. Results of these investigations will be used to further optimize the OH design of the DOEHS system and to improve the numerical reconstruction algorithm used. We show the measurement capabilities of the DOEHS system by providing quantitative information on confined objects subjected to acoustic excitation.
2. Methods
2.1 The digital optoelectronic holographic system
An overview of the DOEHS is shown in the block diagram of Fig. 1 including the electronic control signals.
Fig. 1 Block diagram of the main elements in each subsystem and control signals used in the DOEHS.
The system consists of four main modules: the laser-illumination delivery (IS), optical head (OH) including the sound presentation (SPS), and image-processing computer (IP) subsystems, allowing quantification of the mechanical mobility of the object under study. The SPS delivers controlled sound excitation used to measure vibration modes, e.g., in visco-elastic membranes. The OH is a custom-packaged case mounted on a mechatronic positioning subsystem that places and maintains its relative orientation during an object's examination. The DOEHS uses time-averaged (TA), double-exposure (DE), and stroboscopic (ST) modes to process the dynamic events on the surface of interest when it is subjected to an external loading force (mechanical, electrical, magnetic, or thermal).
2.2 Optical head
The OH consists of a beam splitter (BS), a digital camera, and a fiber optic otoscope, as shown in Fig. 2 (left). In this setup, the RB is not coplanar with the object. Under this configuration, no mechanical adjustment for focusing is necessary, since no lenses or apertures are required to increase the depth of field. The numerical reconstruction allows adjustments in focusing, as is conceptually shown in the next section (Fig. 3). The OH is packaged as a custom-designed, easy-to-maneuver tool for use in the clinic [4], using a series of concurrent threads to acquire images, control the hardware, and display quantitative holographic data at video rates [5]. The custom design is shown in Fig. 2 (right), where the SPS is included.
Fig. 2 In-line optical head configuration used in the DOEHS (left) and the custom package design (right). From left to right: charge coupled device (CCD), beam splitter (BS), reference beam (RB), object beam (OB), sound presentation subsystem (SPS) and fiber cables (FC).
The design of the OH was planned taking into account the field-of-view (FOV), depth-of-field (DOF), and fringe resolution required to measure dynamic deformations in the human TM. The main constraint is the minimum distance at which the object should be placed, according to its size and the dimensions of the CCD chip used in the recording process. Since the diffraction of a plane wave at the hologram is explained by the Huygens-Fresnel principle, the distance between the recording medium, in this case the CCD, and the object should be greater than the maximum dimensions of the CCD chip. The Fresnel approximation is then satisfied [6], and the Fresnel transform integral can be used, as explained in Section 3.
2.3 Measurement capabilities of the DOEHS
The optical parameters of the DOEHS are described next. The OH has a FOV of 7 mm, a pixel size corresponding to and a pixel sensor area; the region-of-interest (ROI) yields . This CCD area fits the FOV. The DOF measured experimentally was . In the numerical reconstruction plane, the maximum image field is 8 mm and the reconstructed pixel size is close to 10 µm, which restricts the lateral resolution of the system. Since the RB impinges normal to the CCD plane, a pixel fill factor of 100% is achieved [2]. The resolution of the OH was measured experimentally using a standard USAF target, resulting in a value close to , with a fringe visibility of 0.5. Since we use a coherent light source, speckle effects reduce the resolution of the reconstructed image. Other aspects
affecting the resolution of the reconstructed image have been reported in Refs. [2, 3, 7]. The DOEHS can perform measurements in TA, DE, and ST modes. In Fig. 3, we show quantitative measurements of the displacements of a copper foil and a cadaveric chinchilla TM subjected to sound excitation, using ST mode.
Fig. 3. (a) Wrapped phase map and (b) unwrapped phase map of a copper foil membrane at an excitation frequency of 10.1276 kHz, showing a peak-to-peak out-of-plane displacement of dz = 1110 nm; (c) wrapped phase map and (d) unwrapped phase map of a cadaveric chinchilla TM at an excitation frequency of 3.489 kHz, showing a peak-to-peak out-of-plane displacement of dz = 800 nm.
In the results shown in Fig. 3, the measured optical phases were processed by numerical reconstruction of the corresponding four stepped intensity images recorded as a hologram in the camera plane. The numerical reconstruction distance obtained does not match the location of the object plane, since the reference beam is not placed at the same position. From the experimental point of view, we emphasize the need to know the location of the object from the focused reconstruction distance of the image, since it provides valuable information on the position of the object at different reconstruction planes. This is important in medical research, e.g., to define the actual position of the umbo of the TM with respect to the tip of the otoscope in the DOEHS.
3. RB and adaptive reconstruction distance
The OH subsystem shown in Fig. 2 is an in-line phase-shifting digital holography arrangement for numerical reconstruction, where the RB impinges normally on the camera by means of the BS cube. The OH configuration can be described as lensless Fourier holography, since no Fourier-transforming lens is used to deliver the RB or the object beam to the hologram plane; the RB simply diverges from its position in the OH to the recording plane. Since the RB is not coplanar with the object plane in the OH arrangement, the reconstruction distance does not specify the position of the object. We need to know, at least experimentally, how to calculate the position of the object from the reconstruction distance at which the reconstructed image of the object is numerically in focus. The goal is to make this value adaptive with respect to the position of the object.
One way to find a solution is to analyze the Fresnel approximation used to numerically reconstruct the object. The reconstruction algorithm is based on wave field reconstruction by the Fresnel transform integral [2-3], with the advantages given by phase-shifting digital holography [1]. The Fresnel integral is the numerical representation of the physical diffraction process, providing a fast and suitable algorithm for numerical reconstruction of small objects. The schematic geometry of the coordinates used is defined in Fig. 4.
Fig. 4 Recording and reconstruction coordinates showing the free focusing plane of the image.
According to the geometry shown in Fig. 4, the reconstruction algorithm in the plane of the real image can be written as [2]

\Gamma(\xi,\eta) = \frac{i}{\lambda d'}\exp\!\left(-i\frac{2\pi}{\lambda}d'\right)\exp\!\left[-i\frac{\pi}{\lambda d'}\left(\xi^2+\eta^2\right)\right]\int\!\!\int h(x,y)\,\exp\!\left[-i\frac{\pi}{\lambda d'}\left(x^2+y^2\right)\right]\exp\!\left[i\frac{2\pi}{\lambda d'}\left(x\xi+y\eta\right)\right]dx\,dy,   (1)

where (x, y) are the coordinates in the hologram plane, (\xi, \eta) are the coordinates in the plane of the reconstructed wave field \Gamma(\xi,\eta), and h(x, y) is the amplitude transmittance of the digitally recorded hologram. In Fig. 4, d is the recording distance and d' the reconstruction distance. In the configuration shown in Fig. 2, the reference beam is not coplanar with the object plane. The reconstructed wavefront \Gamma(\xi,\eta) is digitized when the hologram transmission h(x, y) is acquired with a photoelectronic transducer such as a CCD array. The accuracy of Eq. (1) is based on the initial assumption that the distances d and d' are larger than the maximum dimensions of the CCD chip [3, 6]. The first exponential term to the right of the equals sign of Eq. (1) represents a constant phase delay, and the second exponential term before the double integral becomes a multiplicative constant when the infinite continuous integral is transformed into a finite discrete sum, expressed as

\Gamma(m,n) = \frac{i}{\lambda d'}\exp\!\left(-i\frac{2\pi}{\lambda}d'\right)\exp\!\left[-i\pi\lambda d'\left(\frac{m^2}{N^2\Delta x^2}+\frac{n^2}{M^2\Delta y^2}\right)\right]\sum_{k=0}^{N-1}\sum_{l=0}^{M-1} h(k,l)\,\exp\!\left[-i\frac{\pi}{\lambda d'}\left(k^2\Delta x^2+l^2\Delta y^2\right)\right]\exp\!\left[i\,2\pi\left(\frac{km}{N}+\frac{ln}{M}\right)\right].   (2)

In Eq. (2), the main parameters are the numbers of pixels N and M and the pixel sizes \Delta x and \Delta y, given by the physical dimensions of the CCD array, where the relations \Delta\xi = \lambda d'/(N\Delta x) and \Delta\eta = \lambda d'/(M\Delta y), defining the steps along the reconstruction-plane coordinate system, were used. Equations (1) and (2) represent a complex function of the reconstructed wave field.
Since the RB is not at the same distance from the recording plane as the object (Fig. 2), the reconstruction distance in the reconstruction algorithm does not define the position of the object in the reconstruction plane. To resolve this issue, we need to reconstruct the image in focus and measure how much this value can be adjusted while keeping it focused. Then, following a similar frequency analysis based on square apertures in the recording medium [2, 6-7], we can calculate how small changes in the position of the reference wave increase the number of complex multiplications in the Fresnel approximation, and the possible rounding errors in high-spatial-frequency regions of the digital holograms.
An experimental way to obtain a measurement of the position of the object from the CCD, based on the numerical value of the reconstruction distance of the focused image, is to change the RB position with respect to the CCD plane at several working distances. Considering the effects that the beam splitter cube [2, 8] adds to the reconstructed image, we can compensate for a specific position of the object plane and then obtain a characteristic curve of the object position as a function of the reconstruction distance.
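The proposed compensation can be illustrated as follows: place the object at several known positions, record the in-focus reconstruction distance for each, and fit a characteristic curve. All numbers below are illustrative placeholders, not measured DOEHS data:

```python
import numpy as np

# Hypothetical calibration: known object positions (mm) and the numerical
# reconstruction distances (mm) at which each image came into focus.
z_obj = np.array([40.0, 50.0, 60.0, 70.0, 80.0])
d_rec = np.array([52.1, 63.7, 75.3, 86.9, 98.5])

# Characteristic curve z_obj = f(d_rec); a low-order polynomial is a
# reasonable first model for the compensation.
curve = np.poly1d(np.polyfit(d_rec, z_obj, 2))

# Given a new in-focus reconstruction distance, estimate the object position:
z_estimate = curve(70.0)
```

Once such a curve is stored, the adaptive reconstruction distance becomes a simple lookup: focus numerically, then map the resulting distance through the fitted curve to the physical object position.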
4. Conclusions and future work
We have described the digital optoelectronic holographic system (DOEHS), capable of providing qualitative and quantitative information on displacements of surfaces subjected to controlled excitation signals, in full field-of-view and non-contact mode, at the nanometer scale and at video rates. The OH is arranged in an in-line lensless configuration to record the hologram, and with the use of numerical reconstruction algorithms based on the Fresnel approximation, the focusing can be freely adjusted both during and after the recording, avoiding the use of magnifying optics to focus the image. Since the RB is not located at the same distance as the object, the reconstruction distance does not represent the position plane of the object. We propose an experimental solution to compensate the value of the reconstruction distance. This requires prior knowledge of the position of the object; the position of the RB is then varied at several working distances to obtain a characteristic curve of the object position as a function of the reconstruction distance. At the same time, we are working on analyzing this issue from the Fresnel diffraction integral algorithm implemented for the numerical reconstruction, following a frequency analysis based on digital holography. To illustrate the measurement capabilities of the DOEHS system, we presented quantitative measurements of a copper foil membrane and a post-mortem chinchilla TM excited with sound frequencies, which are in agreement with recent results reported in Refs. [4-7]. The system is being used in the clinic to continue research on and diagnosis of the TM.
5. Acknowledgements
This research was supported by the U.S. National Institute on Deafness and Other Communication Disorders (NIDCD). Partial support was provided by Worcester Polytechnic Institute (WPI), Centro de Investigaciones en Óptica, A. C. (CIO), the Massachusetts Eye and Ear Infirmary (MEEI), and the Mittal Fund.
References
[1] Yamaguchi, I. and Zhang, T., “Phase-shifting digital holography,” Opt. Lett. 22 (16):1268-1270, (1997). [2] Kreis, Th., “Handbook of Holographic Interferometry,” Wiley-VCH, Ch. 1-3, (2005). [3] Schnars, U. and Juptner, W., “Digital Holography,” Springer-Verlag, Ch. 3, (2005). [4] Dobrev, I., Flores-Moreno, J. M., Furlong, C., Harrington, E. J., Rosowski, J. J. and Scarpino, C., “Design of a positioning system for a holographic otoscope,” Proc. SPIE 7791:D1-D12, (2010). [5] Harrington, E., Dobrev, I., Bapat, N., Flores, J. M., Furlong, C., Rosowski, J. J., Cheng, J. T., Scarpino, C. and Ravicz, M., “Development of an optoelectronic holographic platform for otolaryngology applications,” Proc. SPIE 7791:J1-J14, (2010). [6] Goodman, J. W., “Introduction to Fourier optics,” Roberts & Company, Ch. 4, 3rd Ed., (2005). [7] Takeda, M., Taniguchi, K., Hirayama, T. and Kohgo, H., “Single-Transform Fourier/Hartley fringe analysis for Holographic Interferometry,” Simulation and Experiment in Laser Metrology, Akademie Verlag, Berlin, (1996). [8] De Nicola, S., Ferraro, P., Finizio, A., Grilli, S. and Pierattini, G., “Experimental demonstration of the longitudinal image shift in digital holography,” Opt. Eng. 42 (6):1625-1630.
3D Shape Measurements with High-Speed Fringe Projection and Temporal Phase Unwrapping
Michael Zervas, Cosme Furlong, Ellery Harrington, Ivo Dobrev
Center for Holographic Studies and Laser micro-mechaTronics (CHSLT), NanoEngineering, Science, and Technology (NEST), Mechanical Engineering Department, Worcester Polytechnic Institute, 100 Institute Rd, Worcester, MA 01609, USA

Abstract

The abilities to increase the precision of surgical procedures by tracking real-time motions, to accurately measure the mechanical properties of complex 3D geometries, and to track deformations of components over time, among many other applications, depend on the availability of robust, high-speed, full-field-of-view 3D shape measurement systems. In this paper, we present advances in our development of a high-speed 3D shape measurement system based on fringe projection. The system consists of a high-speed projector, with speeds up to 20,000 frames per second, integrated with a CCD camera to provide full-field-of-view information. By using high-speed projection of sinusoidal fringe patterns with varying spatial densities, together with temporal phase unwrapping algorithms that we are developing, we are able to compute and display unwrapped phase maps at video rates, which enables absolute shape measurements of components. We present representative results obtained with our system as applied to art conservation and biomedical imaging. Results validate the system's capabilities as a high-speed method of dynamically gathering, analyzing, and displaying shape information.

Keywords: fringe projection, high-speed shape measurements, structured light, temporal phase unwrapping.

1. INTRODUCTION

This paper focuses on recent advancements in our development of a fringe-projection-based system for 3D shape measurements. Noninvasive techniques for surface measurements have become paramount for quality analysis in industrial applications, art conservation and restoration, as well as for precision aid in medical procedures.
Continued development of optical measurement systems enhances the versatility, applicability, and repeatability required by industry. Additionally, integration of 3D measurement techniques with computer aided design (CAD) software and computer aided machining (CAM) equipment provides opportunities for reverse engineering [1]. The critical advantage of the fringe projection optical technique is the ability to provide full field-of-view (FOV) information. Although this technique is promising, limitations in speed and difficulties achieving sinusoidal projection patterns have restricted many systems and their potential applications. For fringe projection, sinusoidal patterns are critical because they minimize discontinuities and errors in the reconstruction algorithms. This project explores the mathematical importance of sinusoidal projections while analyzing their quality via quantification of processed images, which will help in the continued development of our system as a combined high-speed, high resolution measurement device.
T. Proulx (ed.), Optical Measurements, Modeling, and Metrology, Volume 5, Conference Proceedings of the Society for Experimental Mechanics Series, DOI 10.1007/978-1-4614-0228-2_28, © The Society for Experimental Mechanics, Inc. 2011
3D image reconstruction is achievable through several image unwrapping techniques. Our system uses an optimized Temporal Phase Unwrapping (TPU) algorithm that exploits varying fringe frequencies to recover shape information in the time domain. This algorithm was chosen based on its robustness and on an error analysis showing the optimal projection pattern for TPU. In contrast to other systems, the 3D shape measurement system developed in the CHSLT laboratories has unprecedented versatility to accommodate a variety of applications with their resolution and speed requirements. The hardware is integrated into user-friendly software developed in our laboratory [2].
2. SYSTEM SETUP The fringe projection system consists of two major components, a spatial light modulator (SLM) and a digital charge-coupled device (CCD) camera, shown in Fig. 1. The SLM, packaged by Vialux, contains a digital light processing (DLP®) unit from Texas Instruments called DLP Discovery™ [3]. The system uses a Digital Micro-mirror Device (DMD) with a 1080 x 1920 chip resolution [4]. Each of the independent 10.8 x 10.8 μm2 micro-mirrors is controlled by a duty cycle representing the percentage of time the mirror is in the on-state; thus, the SLM has intensity modulation control, producing the projected sinusoidal fringe pattern. The second component of the system is a Pike F-032 CCD camera with 640 x 480 chip resolution, 7.4 x 7.4 μm2 pixel size, a maximum frame rate of 208 frames per second at this resolution, and a bit-depth of 14 bits. Depending on the application and required field-of-view (FOV), the CCD camera can be interchanged.
Fig. 1. Fringe projection setup that we are developing: (a) a schematic of the system shows the spatial light modulator and the CCD camera separated by the triangulation angle, θ, both interfaced into a laptop computer; and (b) realization of our system with an art sculpture under examination.

The SLM projects a pattern onto an object that is recorded by a camera separated by an angle, following the method of triangulation. System sensitivity increases with larger triangulation angles, but the system becomes more susceptible to unresolvable areas caused by shadowing [5, 6].

3. PRINCIPLE OF FRINGE PROJECTION Structured light projection is the basis for 3D reconstruction, as patterns deform to the shape of an object. Inducing a precise phase shift of the projected pattern makes it possible to solve for the interference phase containing object depth information. The camera-recorded intensity distribution is represented by:
I(x, y) = a(x, y) + b(x, y) cos[Ω(x, y) + ΔΦ(x, y) + αi] ,

(1)
where the recorded intensity distribution, I, is a function of the brightness, a, the amplitude, or contrast, b, the random phase, ΔΦ, the known induced phase shift, αi, and the fringe-locus function, Ω , containing shape information for each pixel (x, y).
A least-squares method can be used to solve for Ω by minimizing the summation of quadratic errors [7]. In general, using a greater number of phase-shifted images to recover the phase yields better resolution by minimizing random electronic noise and inaccuracies in the phase shifting, ΔΦ. Using more images for reconstruction increases the measurement and processing time, but can be advantageous in applications that are not time critical. The four phase-shift algorithm was chosen for the 3D shape measurements because it is nearly insensitive to phase-shift calibration errors, thereby minimizing electronic noise [8]. The resulting interference phase is an arctangent function described as:
Ω(x, y) = arctan{[I4(x, y) − I2(x, y)] / [I1(x, y) − I3(x, y)]} .

(2)
Each of the four phase-shifted images is represented by "I" and a subscript that corresponds to an increasing shift ranging from 0 to 3π/2, in increments of π/2. The resolution of the interference phase is directly related to how closely the projected fringes follow a sinusoidal pattern. Projection of gray-code, or dark and light, fringes produces discontinuities that are typically corrected by digital smoothing, resulting in increased processing time. Sinusoidal projection can be decomposed into a Fourier series approximation:

f(x) = I0 + Σn=1..∞ [an cos(nx) + bn sin(nx)] .

(3)
Since the Fourier approximation, f(x), contains only the summation of continuous cosine and sine terms, with coefficients an and bn, discontinuities appear as high-frequency components. A theoretical sinusoidal fringe pattern projection, Fig. 2a, shows a corresponding cross section and power spectrum in the frequency domain, with the central DC component and a frequency component determined by the number of fringes. Figure 2b shows the resulting power spectrum of a square wave projection, with many higher-frequency components that contain no shape information and can be regarded as noise from the discontinuous square function. The energy densities for a sinusoidal projection and a square wave with a frequency of one fringe are 694 and 1351, respectively, confirming that sinusoidal fringe projection results in better image resolution.
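The four-step recovery of Eq. (2) and the sinusoid-versus-square spectral comparison can be sketched with synthetic fringes (a minimal illustration; the image size, fringe count, and intensity scale are arbitrary assumptions, not the system's parameters):

```python
import numpy as np

# Synthetic 512-sample fringe cross section with 8 fringes across the field.
x = np.arange(512)
phase = 2 * np.pi * 8 * x / 512

# Four-step phase shifting: record I1..I4 at shifts 0, pi/2, pi, 3*pi/2,
# then recover the wrapped phase via Eq. (2).
shifts = [0, np.pi / 2, np.pi, 3 * np.pi / 2]
I1, I2, I3, I4 = [128 + 100 * np.cos(phase + a) for a in shifts]
wrapped = np.arctan2(I4 - I2, I1 - I3)  # arctangent with correct quadrant

# Spectral comparison: a square grating spreads energy into odd harmonics,
# while a sinusoid concentrates it at a single frequency bin.
spec_sine = np.abs(np.fft.rfft(np.cos(phase)))
spec_square = np.abs(np.fft.rfft(np.sign(np.cos(phase))))
noise = lambda s: np.sum(s) - s[0] - s[8]  # energy outside DC and bin 8
print(noise(spec_sine) < noise(spec_square))  # True
```

In the real system the four images come from the camera, and the wrapped map feeds the temporal unwrapping stage described in Section 4.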
Fig. 2. Projected fringes: (a) 512 x 512 sinusoidal fringe projection pattern with a sample cross sectional area and power spectrum; and (b) 512 x 512 square projection pattern, cross section and power spectrum. Gray scale sinusoidal projection is achieved in our system by controlling each of the mirrors, or pixels, in the DMD, Fig. 3, by setting the duty cycle for each mirror appropriately for the desired gray scale. The camera’s exposure time is set to a level
corresponding to the maximum time a mirror can be in the on-state to represent a completely light fringe. Over this exposure, the camera integrates, or averages, the light intensity of the other pixels and produces the equivalent of a gray scale level [9, 10].
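As a sketch of this duty-cycle gray-scale scheme, the following generates a sinusoidal fringe image quantized to a chosen bit depth (the function name and dimensions are illustrative assumptions, not the system's actual software):

```python
import numpy as np

def fringe_pattern(width, height, pixels_per_fringe, bit_depth):
    """Horizontal sinusoidal fringes quantized to 2**bit_depth gray levels,
    mimicking the duty-cycle modulation of the DMD mirrors."""
    levels = 2 ** bit_depth - 1                     # top gray level
    x = np.arange(width)
    profile = 0.5 * (1.0 + np.cos(2 * np.pi * x / pixels_per_fringe))
    row = np.round(profile * levels).astype(np.uint16)
    return np.tile(row, (height, 1))                # replicate down the rows
```

At 5 bits the sinusoid is approximated with 32 levels and at 14 bits with 16384, reproducing the speed-versus-fidelity trade-off of the projector.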
Fig. 3. Device developed by Texas Instruments and used in our fringe projection system: (a) DMD chip; and (b) enlarged view of the micro-mirrors enabling sinusoidal projection [11].

Current developments enable the projector to change bit-depth rapidly from 5 to 14 bits, an equivalent range of 32 to 16384 gray levels. An approximation method determines the duty cycle that produces the most appropriate gray scale depending on fringe density. Higher bit-depths result in more accurate sinusoidal representations, but slow acquisition to a few frames per second (fps). Lower bit-depth projections can maintain speed, as well as process and display information, on the order of 200 fps.

4. ADVANCEMENTS IN TEMPORAL PHASE UNWRAPPING (TPU) Processing and viewing the information with high speed and accuracy requires an unwrapping algorithm. By varying fringe densities on an object, the TPU algorithm can be used to determine the fringe order number and, based on this, resolve 2π phase discontinuities. Unlike spatial unwrapping techniques, TPU is performed in the time domain for each pixel by using the wrapped phase images calculated for each of the varied fringe densities, shown in Fig. 4. Consequently, pixels are not affected by poor signal-to-noise ratios in neighboring pixels, as often occurs in spatial unwrapping techniques [12].
Fig. 4. Temporal phase unwrapping is executed along the time axis, with increasing fringe frequency.

This hierarchical approach to unwrapping follows an increasing fringe frequency, starting from a phase map with no discontinuities [13]. Mathematically, the unwrapped phase, uΦ, of a particular image can be calculated as follows:

uΦi+1(x, y) = Φi+1(x, y) + 2πNi+1 ,

(4)
where Φ represents the wrapped phase and N is the fringe order number that characterizes phase jumps through the addition or subtraction of integer multiples of 2π; thus, corrected images have no phase-jump discontinuities. An error term, ε, is added due to electronic noise and the effects of uncertainties in the algorithm. A particular resolution can be reached by determining how the error term propagates through varying numbers of images used to recover the unwrapped phase. Figure 5 shows how the error is minimized as more images are used in the TPU.
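A minimal sketch of this hierarchical scheme (the function name, the doubling fringe sequence, and the synthetic test below are illustrative assumptions, not the authors' implementation):

```python
import numpy as np

def temporal_unwrap(wrapped, freqs):
    """Hierarchical temporal phase unwrapping (Eq. (4)), pixel by pixel.

    wrapped: list of wrapped phase maps in (-pi, pi], one per projected
             fringe density, coarsest first.
    freqs:   the corresponding fringe counts across the field of view;
             the first entry should be 1 so that map is free of 2*pi jumps.
    """
    # A single fringe spans the field of view, so its phase is continuous
    # once mapped to [0, 2*pi).
    u = np.mod(wrapped[0], 2 * np.pi)
    for phi, f_prev, f in zip(wrapped[1:], freqs[:-1], freqs[1:]):
        pred = u * (f / f_prev)                   # scale coarse phase upward
        N = np.round((pred - phi) / (2 * np.pi))  # fringe order number
        u = phi + 2 * np.pi * N                   # Eq. (4)
    return u
```

Because N in Eq. (4) is chosen per pixel from that pixel's own time history, a noisy pixel cannot corrupt its neighbors, unlike in spatial unwrapping.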
Fig. 5. Error propagation as a function of the number of images used in the temporal phase unwrapping algorithm.

Although the error approaches zero, the maximum fringe frequency is limited by the DMD pixel size. Thus, higher resolutions can be achieved by using more images in the unwrapping algorithm in situations where time is not a critical factor. The TPU unwrapping sequence can be chosen based on the particular application and its requirements.

5. REPRESENTATIVE RESULTS Advancements in the system, particularly gray scale projection, were analyzed by taking cross sections of images captured by the camera from the patterns projected by the SLM. For evaluation purposes, a telecentric lens with a limited FOV was used to reduce distortions. The projected pattern, at 128 pixels per fringe, and the corresponding cross section in Fig. 6 show the ability to successfully project sinusoidal images from the SLM.
Fig. 6. Fringe projection analysis: (a) fringe pattern captured by the CCD camera; and (b) plotted cross section of the image, validating the capability of our system to project sinusoidal patterns.
The effects of varying SLM bit-depth were explored using a consistent projection density and triangulation angle, the 14-bit CCD camera, and a highly reflective, diffusive surface. Results indicate that the calculated standard deviation in each individual image varied by less than 5% between the 5-bit and 14-bit projection patterns. In practical applications, this gives an advantage for operating at higher speeds without losing resolution in the sinusoidal projection.
Using the sinusoidal fringe projection, we were able to successfully implement the TPU technique to acquire 3D images, shown in Fig. 7.
Fig. 7. Unwrapped sculpture under examination: (a) 2D unwrapped image; and (b) actual 3D images reconstructed via our advanced algorithms, with two alternate views.

6. CONCLUSIONS AND FUTURE WORK Representative results show the potential of the system to combine high speed with high-resolution imaging by implementing sinusoidal projections. Similarly, temporal unwrapping proves to be a valuable technique for 3D imaging. Preliminary results have validated that the resolution of unwrapped images increases as the number of phase images used increases. Future work will focus on several aspects of advancing the system. The first is the system calibration process, which will be critical in the evaluation of system accuracy and measurement repeatability. A packaging system with mechanical components for easy FOV and triangulation angle adjustments will also be developed. The objective is to increase the portability and versatility of the system to adapt to a variety of applications.

7. ACKNOWLEDGEMENTS The authors gratefully acknowledge the support provided by Texas Instruments (DLP), as well as the support of the NanoEngineering, Science, and Technology (NEST) program at Worcester Polytechnic Institute. We would also like to thank our colleagues at the CHSLT labs and the Massachusetts Eye & Ear Infirmary, Boston.

8. REFERENCES
[1] Sansoni, G., Patrioli, A., and Docchio, F., "OPL-3D: A Novel, Portable Optical Digitizer for Fast Acquisition of Free-Form Surfaces," Rev. Sci. Instrum., 74(4):2593-2603, 2003.
[2] Harrington, E., Dobrev, I., Bapat, N., Flores, J. M., Furlong, C., Rosowski, J., Cheng, T., Scarpino, C., and Ravicz, M., "Development of an optoelectronic holographic platform for otolaryngology applications," Proc. SPIE, 7791D, 2010.
[3] Texas Instruments, DLP System Optics, 2010.
[4] Hofling, R., "High Speed 3D Imaging by DMD Technology," Proc. SPIE, 5303:184-194, 2004.
[5] Bothe, T., Osten, W., Gesierich, A., and Juptner, W., "Compact 3D-Camera," Proc. SPIE, 4778:48-59, 2002.
[6] Xiaobo, C., Xi, J., Tao, J., and Ye, J., "Research and Development of an Accurate 3D Shape Measurement System Based on Fringe Projection: Model Analysis and Performance Evaluation," Prec. Eng., 32:215-221, 2008.
[7] Kreis, T., Handbook of Holographic Interferometry, Wiley-VCH Verlag GmbH & Co. KGaA, Weinheim, 2005.
[8] Hariharan, P., Oreb, B. F., and Eiju, T., "Digital phase-shifting interferometry: a simple error-compensating phase calculation algorithm," Appl. Opt., 26:2504-2506, 1987.
[9] Hornbeck, L. J., "Digital Light Processing for High-Brightness, High-Resolution Applications," Proc. SPIE, 3013:27-41, 1997.
[10] Hofling, R. and Aswendt, P., "Real Time 3D Shape Recording by DLP®-Based All-Digital Surface Encoding," Proc. SPIE, 7210:72100E-1-72100E-8, 2009.
[11] Texas Instruments, "DLP Technology," accessed 17 February 2010.
[12] Kinell, L., Shape Measurements Using Temporal Phase Unwrapping, Licentiate Thesis, Lulea University of Technology, 2000.
[13] Burke, J., Bothe, T., Osten, W., and Hess, C., "Reverse Engineering by Fringe Projection," Proc. SPIE, 4778:312-324, 2002.
_______________
Measuring Local Mechanical Properties of Membranes Applying Coherent Light Projection Moiré Interferometry
Sciammarella F.M.a, Sciammarella C.A.a, Lamberti L.b
a. Department of Mechanical Engineering, College of Engineering & Engineering Technology, Northern Illinois University, DeKalb, IL, USA
b. Dipartimento di Ingegneria Meccanica e Gestionale, Politecnico di Bari, Bari, ITALY
Keywords: nanomechanics, thin films, bulge test, nano moiré
Abstract The bulge test is a versatile and reliable way to determine mechanical properties of thin films. It can be applied to obtain constitutive equations of the material at different ranges of deformation, including time effects. It provides a biaxial state of stresses, which is the prevailing condition under which thin films operate. In this paper the bulge test information is retrieved with interferometric nano-moiré. A special setup was designed and built to pressurize a membrane on the stage of a conventional metrological microscope. The utilized field of view is 326 × 326 microns and the spatial resolution is 318 nanometers; the depth information is within 10 nanometers. An aluminum foil was cemented to a plate that has a circular aperture of 10 mm. The foil was inflated to 3.5 psi in 0.5 psi increments, and images were recorded at each pressure level. Local properties of the deformed surface were compared with the results of membrane theory by determining the surface trend from the experimentally measured values. The comparison shows that the membrane follows the parabolic trend with a maximum observed deviation of 43 nm. At the testing pressure of 3.5 psi the radius of curvature calculated from the surface trend is 96.8 mm, while the theoretical radius of curvature according to the geometry and material properties of the aluminum foil is 96.3 mm. 1.0 Introduction The bulge test can be considered one of the most versatile and reliable tests to determine mechanical properties of thin films. It can be applied to obtain the elastic modulus of a thin film, the yield stress, and the fracture toughness. It can also be utilized to determine residual stresses, cyclic stress-strain behavior and time-dependent behavior. The preparation of the sample is relatively simple when compared to micro-bending and micro-tension specimens. Furthermore, it provides a biaxial state of stresses and can be utilized to obtain properties of anisotropic materials.
Combining the bulge test with interferometric nano-moiré makes it possible to measure the constitutive equations of thin films at the sub-micron and micron range. Interferometric nano-moiré utilizes a diffraction grating illuminated at angles of inclination beyond total internal reflection to generate fringes that encode the displacement information of the thin membrane. There are two aspects of this characterization that are very important. The first deals with how to successfully apply the bulge test so that the desired information can be obtained. The bulge test assumes that a thin film can be loaded in a pure membrane state. This model has limitations that arise from the actual geometry of the surface and the presence of residual stresses in the film. It is necessary to develop adequate continuum mechanics models to get the correct information. The phenomena related to the geometry and the residual stresses are local, and in order to observe them it is necessary to have a high spatial resolution. In order to carry out these developments successfully it is necessary to begin with a well established material with known properties. Aluminum foil was chosen for the preliminary test to observe the local phenomena. A special setup was designed and built to pressurize a membrane on the stage of a conventional metrological microscope.
2.0 Goal of the research effort
T. Proulx (ed.), Optical Measurements, Modeling, and Metrology, Volume 5, Conference Proceedings of the Society for Experimental Mechanics Series, DOI 10.1007/978-1-4614-0228-2_29, © The Society for Experimental Mechanics, Inc. 2011
The main objective of the research presented in this paper is to measure the mechanical properties of sculpted thin films (STFs) of Parylene-C created by Penn State University. These nanoengineered thin films are assemblies of parallel columns created in a variety of shapes. STFs control the polarization state and frequency band of light impinging on them. Chiral polarizers offer promise for use in voltage monitoring devices, temperature and pressure sensors, and gyroscopes for in-flight guidance. Chiral polarizers may also prove useful in the design of low-cost, high-efficiency lasers, light filters, light modulators, and optical sensors. Microscale STFs can also be useful in acoustics. At this point it is not well understood at what stage the stresses influence the degradation of the optical and acoustical properties. There are no simple methods available to obtain their mechanical properties, and without that knowledge their deployment for several applications (e.g., highly accurate sensors in MEMS) remains unexplored. When the STFs are subjected to nanoindentation, the material gets damaged (Figure 1) because of the high concentration of forces experienced at the tip. As a result this method is not a viable way to measure constitutive properties.
Figure 1. Damage to a chiral STF by the application of 30 micro Pascal pressure.

Utilization of interferometric nano-moiré enables the measurement of the constitutive equations of the STFs at the sub-micron and micron range because it is non-contact. This method utilizes a diffraction grating illuminated at angles of inclination beyond total internal reflection [1]. The advantage of this method is that near-field phenomena are observed using a conventional far-field microscope. The method was previously used to observe nano-crystals with an accuracy of ±5 nanometers [2].
3. Tests of mechanical properties of thin films There is a large number of references on testing thin films and determining their mechanical properties. Basic experimental setups for uniaxial testing are not too different from macro-scale arrangements. However, critical issues arise when the dimensional scales are reduced, as in this case. Typical problems are sample handling, specimen alignment, and accurate strain and force measurements. In 1994 a review paper by Brotzen [3] covered the early history of uniaxial testing and the state of the art to that date. Since this review paper, advances in all the above critical areas have been achieved. Read et al. [4] are credited with the fabrication of free-standing specimens. The current state of the art for micro-size specimens is found in [5]. Another method that has been extensively used, and is still used in many thin film studies, is the bulge test, which can be traced back to [6]. Two geometries have been used, the circular plate and the rectangular plate. A review of the methodology of the bulge test can be found in [7] and [8]. The bulge test will be utilized for a portion of our experimental testing. There is one type of test that is perhaps the most utilized today in thin films, the indentation test. Commercial equipment is available to perform the indentation test. The indentation test is not a direct measurement of mechanical properties; it does not provide the same information as one can obtain by applying conventional materials testing technology. The obtained results are subject to interpretation, and advanced mechanics analysis is required to derive material properties on the basis of the indentation response. Numerous models were developed to extract material properties from the recorded indentation load-depth curve, including the elastic modulus, yield stress, strain hardening coefficient, residual stress, fracture toughness, etc. A review of this methodology can be found in [9].
Of the described tests, the bulge test is the one most apt to determine the mechanical properties of the STFs that will be studied in this project. As shown in Figure 1, the indentation test cannot be applied to this type of thin film. The bulge test, unlike the indentation test, provides a direct measure of the mechanical properties; furthermore,
combined with numerical techniques, it can provide not only the elastic properties of these materials but also the behavior beyond the elastic limit. Besides the testing methodology briefly reviewed in the preceding paragraph, there is another very important area: the interpretation of the performed tests. The main topics are scale effects in elasticity and plasticity. A discussion of these topics in the context of testing micro specimens can be found in [10], where there is an extensive summary of the results obtained to date. These results provide important insight into size-scale effects on the elastic, brittle, and ductile behavior of micron-sized structures. Finally, when a material possesses a high degree of inhomogeneity and anisotropy, or undergoes a general state of stress, the formulation of constitutive equations is a challenging problem. Continuum mechanics theory states that, regardless of load non-uniformity, material inhomogeneity and anisotropy, and type of boundary conditions, the structural response is uniquely defined by the values of the u, v and w-displacements. Therefore, in order to capture local variations in structural response, full-field displacement maps must be available. In fact, strains are just complicated combinations of displacement gradients. Furthermore, stress values are accurate only if the constitutive model is reliable. In view of this, a procedure directly based on displacements seems to be the most straightforward and robust approach to the identification of arbitrarily complex material behavior. Full-field displacement maps can be obtained with a great deal of accuracy by utilizing the powerful optical methodology described in this paper. The displacements measured with the optical techniques will be compared with numerical predictions provided by finite element models in order to identify the constitutive behavior of the STFs.
Figure 2 shows a schematic of the bulge test and some of the important parameters that will be utilized for data analysis. The reverse engineering problem will be formulated as an optimization problem in which the unknown material properties are included as design variables. The cost function to be minimized is the error functional Φ, built by summing the differences between displacements measured experimentally and those predicted numerically.

4. Measuring the elastic properties (i.e., E and ν) of the STFs Utilizing simple considerations of equilibrium and elementary geometric derivations, one arrives at
p = 4σ0th/a² + 8tEh³/[3a⁴(1 − ν)] ,

(1)
In this equation p is the pressure applied to the membrane, σ0 is an initial state of stresses that can be applied to the membrane, h is the deflection of the membrane, t is the thickness of the membrane, E the modulus of elasticity of the membrane, and ν is the Poisson’s ratio. Also based on geometrical arguments of the deformation of the membrane one can get,
ε = [(a² + h²)/(2ah)] arcsin[2ah/(a² + h²)] − (1 + ε0) .

(2)

To get E and ν one can use (1) along with optimization techniques.
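As an illustration of this identification idea, Eq. (1) is linear in σ0 and E once t, a and ν are fixed, so both can be recovered from pressure-deflection data by linear least squares (a hedged sketch with invented numbers, not the authors' optimization procedure):

```python
import numpy as np

def fit_bulge(p, h, t, a, nu):
    """Estimate (sigma0, E) from arrays of pressures p and apex deflections
    h by least squares on Eq. (1), with thickness t, radius a, Poisson nu."""
    A = np.column_stack([
        4.0 * t * h / a**2,                          # multiplies sigma0
        8.0 * t * h**3 / (3.0 * a**4 * (1.0 - nu)),  # multiplies E
    ])
    (sigma0, E), *_ = np.linalg.lstsq(A, p, rcond=None)
    return sigma0, E
```

Recovering ν as well makes the problem nonlinear, which is where the optimization approach described above comes in.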
5. Set up: pressure test cell The pressure cell system was designed with a dual purpose: 1) to make global measurements of the membrane deflections; and 2) to allow high spatial resolution measurements of the membrane deflections. The latter requirement means that the setup should be thin enough to be utilized in conjunction with microscopic observation. The dimension of the radius a shown in Figure 3 is dictated by the size of the specimen that can be fabricated, in this case 25 mm. A circular membrane specimen with a diameter of 10 mm was selected. The membrane is glued to a thin metallic plate. This plate is set in a slot of another metallic plate, Figure 3.
Figure 3. Side view of the pressure cell.

Another thin plate is placed on top of the membrane to produce the clamping effect; o-rings are added to ensure that the pressure cell is sealed. The pressurizing system was designed to produce a minimum measurable pressure p = 0.05 psi, and has the capability of increasing the pressure within a series of ranges. 6. Preliminary testing In order to verify the designed testing setup it was necessary to begin with a well established material with known properties. Aluminum foil was chosen for the preliminary test to observe the local phenomena. Figure 4 shows the setup utilized to measure the deflection of the aluminum foil. The first tests were carried out to observe membrane deflections with high spatial resolution, the most challenging part of the project. In order to observe the local phenomena a large magnification is required; therefore the setup was designed to accommodate a conventional metrological microscope. For this setup the field of view is 326 × 326 microns and the spatial resolution is 318 nanometers. The depth resolution is 1.25 microns; utilizing interpolation it is possible to resolve up to 30 nanometers.
Figure 4. View of experimental set-up to perform measurements on the thin films
The foil was cemented to a plate with a circular aperture of 10 mm. The foil, with thickness t = 25 μm, was inflated to 3.5 psi in 0.5 psi increments, and images were recorded at each pressure level. These images were then processed utilizing the Holostrain™ software.
7. Obtained results Experimental observations showed that the foil developed wrinkles at low levels of pressure. This phenomenon occurs when very thin membranes are stretched: local buckling of the membrane is observed. As pressure increased, the foil began to bulge and the membrane took a parabolic shape. To appreciate the observed deflections, one should compare the total diameter of the membrane, 10,000 μm, with the region observed through the microscope, only 326 μm. It should be mentioned that during inflation the observed portion of the membrane experiences rigid body displacements and rotations. As a result, it was necessary to develop a software routine to match the experimental profile with the theoretical profile expected from the membrane equation. The trend of the optical profile in the range of observation was determined by assuming a parabolic curve; this trend is shown in Figure 5, where it is compared with the obtained optical profile. Local properties of the deformed surface were compared by extracting the values of curvature. At the pressure of 3.5 psi the calculated radius of curvature is 96.8 mm, while the theoretical radius of curvature according to the geometry and material properties of the aluminum foil is 96.3 mm; the error in curvature is only 0.5%. In principle, this preliminary study demonstrates the feasibility of utilizing this setup to determine thin film constitutive behavior while simultaneously observing the actual local effects in the membrane.
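The trend extraction can be sketched as a parabolic fit whose quadratic coefficient gives the apex radius of curvature (a hedged illustration; the synthetic profile stands in for the measured one):

```python
import numpy as np

def radius_from_profile(x, z):
    """Fit z(x) ~ c2*x^2 + c1*x + c0 and return the apex radius of
    curvature of the fitted parabola, R = 1 / (2*|c2|), in units of x."""
    c2, _c1, _c0 = np.polyfit(x, z, 2)
    return 1.0 / (2.0 * abs(c2))
```

Subtracting the fitted parabola from the measured profile then isolates the local deviations from the trend.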
Figure 5. Matching of the optical surface profile to a parabolic trend. Actual local deviations can be observed, but they are in the submicron range. The upper curve shows the matching of profiles after the trend is rigidly roto-translated to match the theoretical membrane results.

Figure 5 shows the obtained membrane profile compared to the theoretical membrane deflections; one should understand that this comparison is between the trend of the membrane and the theoretical values corresponding to the membrane equation, not the local shape of the membrane. As shown in Figure 5, the local profiles show deviations with respect to the surface trend, the best parabolic fit of the profile. The maximum deviation, 43 nm, is at the right end of the plot.
8. Conclusions and discussion The inflation test has been successfully applied to a thin film of aluminum (t = 25 μm). The deflections of the membrane agree well with the deflections predicted by the membrane equation using the properties of the aluminum foil, E = 68.971 GPa and ν = 0.33, and the stress level σ = 34.485 MPa, which is practically the yield
stress of the aluminum foil. The evanescent illumination has made it possible to observe the membrane deflections under the very tight conditions of microscopic observation and the rigid body motions experienced.
Figure 6. Comparison of the theoretical and experimental trends of the membrane profile.

In Figure 6 it is possible to see that the trend of the membrane agrees well with the theoretical predictions; however, at the nanometric level the shape of the membrane shows local deviations with respect to the trend, as should be expected; the maximum observed deviation is 43 nm. It should be understood that this value represents a deviation from the trend. At pressures below the applied pressure, the local deviations of the membrane are considerably larger, which shows that at yielding the plastic deformations tend to smooth the membrane shape. At yielding the membrane adopts the parabolic shape predicted by the membrane equation of continuum mechanics.
References
[1] Sciammarella C.A., Lamberti L., Sciammarella F.M., Demelio G.P., Dicuonzo A., Boccaccio A. Applications of plasmons to the determination of surface profile and contact strain distribution. Strain, 46, 307-323, 2010.
[2] Sciammarella C.A., Lamberti L., Sciammarella F.M. The equivalent of Fourier holography at the nanoscale. Experimental Mechanics, 49, 747-773, 2009.
[3] Brotzen F.R. Mechanical testing of thin films. International Materials Reviews, 39, 24, 1994.
[4] Read D.T., Cheng Y.W., Keller R.R., McColskey J.D. Tensile properties of free-standing aluminum thin films. Scripta Materialia, Elsevier, 2001.
[5] Sharpe W.N. A review of tension test methods for thin films. Materials Research Society Symposium Proceedings, 2008.
[6] Beams J.W. In: Structure and Properties of Thin Films (C.A. Neugebauer, J.B. Newkirk, D.A. Vermilyea, eds.), John Wiley and Sons, New York, 1959.
[7] Small M.K., Nix W.D. Analysis of the accuracy of the bulge test in determining the mechanical properties of thin films. Journal of Materials Research, 7(6), 1553-1563, 1992.
[8] Paul O., Gaspar J. Thin-film characterization using the bulge test. In: Reliability of MEMS (O. Tabata, T. Tsuchiya, eds.), Wiley, 2008.
[9] Fischer-Cripps A.C. Critical review of analysis and interpretation of nanoindentation test data. Surface & Coatings Technology, Elsevier, 2006.
[10] Hemker K.J., Sharpe W.N. Jr. Microscale characterization of mechanical properties. Annual Review of Materials Research, 37, 93-126, 2007.
[11] Cosola E., Genovese K., Lamberti L., Pappalettere C. A general framework for identification of hyperelastic membranes with moiré techniques and multi-point simulated annealing. International Journal of Solids and Structures, 45, 6074-6099, 2008.
EXPERIMENTAL ANALYSIS OF FOAM SANDWICH PANELS WITH PROJECTION MOIRÉ
A. Boccaccio, C. Casavola, L. Lamberti, C. Pappalettere
Politecnico di Bari, Dipartimento di Ingegneria Meccanica e Gestionale, Viale Japigia 182, Bari, 70126, ITALY
E-mail: [email protected]; [email protected]; [email protected]; [email protected]
Abstract
The use of polymeric and metallic foam sandwich panels in naval, aerospace, railway and automotive constructions has grown rapidly in recent years because of technological improvements in manufacturing processes. However, it is still difficult to establish a direct relationship between the mechanical properties of a panel and the specific manufacturing process. The mechanisms behind panel deformation, crack growth, and fracture initiation and propagation are still not completely understood and are therefore intensively studied. In particular, structural behavior under compression is a critical issue, also in view of the lack of official standards on foam core sandwich panels. This work studies the mechanical properties of high density polyethylene foam core sandwich panels produced by rotational molding. These panels can be built without adhesives, because the polyethylene foam grows inside the mold and adheres to the facesheets while the material is still at high temperature. In the present study, polyethylene foam panels of different thickness are tested under edgewise compression loading. The resulting out-of-plane deformation is monitored in detail with a projection moiré setup including two projectors and one camera.
1. INTRODUCTION
Foam core sandwich panels, made up of two polyethylene skins separated by a lightweight polyethylene foam, may represent an excellent solution to many design problems in the automotive field [1]. The mechanical properties of these panels can be improved by properly selecting the manufacturing process, although it is very difficult to establish a direct relationship between mechanical properties and manufacturing parameters. Amongst manufacturing techniques, rotational molding is an innovative process which makes it possible to build the entire sandwich component in a single step, thus obtaining better adhesion and improving the continuity between skins and core. This technology is expected to reduce manufacturing time as well as to improve the mechanical characteristics of the panel. Compressive strength, flexural properties and impact behavior are important issues in the mechanical characterization of foam core sandwich structures [2-4]. However, the mechanisms behind panel deformation, crack growth, and fracture initiation and propagation are still not completely understood and are therefore intensively studied. In particular, structural behavior under compression is probably the most critical issue, also in view of the lack of official standards on foam core sandwich panels. For example, Casavola et al. [5] recently carried out a detailed investigation on the mechanical behavior of aluminum foam sandwich panels subject to flatwise compression (i.e. in the direction orthogonal to the facesheets) or edgewise compression (in the direction parallel to the facesheets). They found that the foam contributes significantly to the ultimate compressive strength of the panel as long as the aluminum skins remain bonded to the core, that flatwise compression strength increases with specimen size, and that the structural response is sensitive to the presence of voids or imperfections in the foam.
Since sandwich foam panels are thin-walled structures, their structural response under compression may be driven by buckling. The onset of buckling becomes more probable as the foam core structure is less regular, that is, when there are voids or irregularities in the foam. Buckling modes are characterized by large out-of-plane displacements, and the main concern is therefore to measure deflections with a great deal of accuracy. Fringe projection techniques [6] are naturally suited to monitoring large out-of-plane displacements of thin-walled structures. The projected lines are modulated by the specimen surface: if lines are projected onto a curved surface, the curvature and frequency of these lines change; if, instead, lines are projected onto a plane, they remain straight and parallel but their spacing may change. In shadow moiré, fringes form because the projected grating modulated by the object surface is superimposed on the grating itself, located in the vicinity of the specimen. Therefore, the sensor "sees" a pattern of fringes that represents loci of equal parallax. The parallax is in turn related to the surface height Z(x,y), expressed as the distance from the reference plane. In projection moiré, the image of the modulated pattern of lines is recorded and digitally compared with the image of the same grating projected onto a flat surface called the reference plane. By subtracting the two images recorded for the curved surface and for the reference plane, it is possible to obtain the level lines (in reality, the loci of equal parallax) of the surface: that is, to know the distribution of height Z(x,y) for each point P(x,y) of the deformed object. Whilst moiré fringes are directly sensed in the case of shadow moiré, projection moiré fringes are generated via digital subtraction of images. In the case of large structures, the collimated illumination can be replaced by a central projection, and some correction model must be introduced to account for the divergence of the illumination beam. Fleming et al. [7-9] demonstrated that projection moiré can accurately measure displacements of models submitted to aerodynamic loads; projection moiré and photogrammetry were found to be substantially competitive. Featherston and Lester [10] monitored the postbuckling behavior of thin panels with projection moiré. Other examples of experimental buckling analysis of thin-walled stiffened panels are documented in Ref. [11]. The most recent development in projection moiré is the four-projector setup proposed by Sciammarella and his coworkers [12,13]. A simplified version of that optical setup, employing only two projectors and one camera, is utilized in this research to characterize the compressive behavior of polyethylene foam core sandwich panels subject to edgewise compression tests.
T. Proulx (ed.), Optical Measurements, Modeling, and Metrology, Volume 5, Conference Proceedings of the Society for Experimental Mechanics Series, DOI 10.1007/978-1-4614-0228-2_30, © The Society for Experimental Mechanics, Inc. 2011
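The digital image subtraction that generates projection moiré fringes can be sketched numerically as follows. The grating pitch, surface shape, and image size are illustrative assumptions, not the actual experimental values.

```python
import numpy as np

# Minimal sketch of projection moire fringe formation by digital subtraction
# (grating pitch, surface shape and geometry are illustrative assumptions).
H, W = 256, 256
y, x = np.mgrid[0:H, 0:W]
pitch = 16.0                                   # projected grating pitch, pixels

# Grating projected onto the flat reference plane: straight, parallel lines
reference = 0.5 + 0.5 * np.cos(2 * np.pi * y / pitch)

# The surface height Z(x, y) modulates the phase of the projected lines
Z = 3.0 * np.exp(-((x - W / 2) ** 2 + (y - H / 2) ** 2) / (2 * 40.0**2))
modulated = 0.5 + 0.5 * np.cos(2 * np.pi * (y + Z) / pitch)

# Subtracting the two recorded images yields the moire fringes:
# loci of equal parallax, i.e. level lines of the surface height
fringes = modulated - reference
print(fringes.shape, float(abs(fringes).max()))
```

Where the surface coincides with the reference plane the subtraction is zero; where the surface departs from it, fringes appear whose spatial frequency tracks the local slope.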
2. EDGEWISE COMPRESSION TESTS
In edgewise compression tests the load acting on the specimen is directed parallel to the facesheets. The purpose of the test is twofold: (i) to analyze the mechanical behavior of the skins, which are much thinner than the specimen core; (ii) to analyze the core contribution to the compression resistance. In this study, experimental tests were carried out partially following ASTM standards [14], which supply indications for generic sandwich structures. Specimens were cut from sandwich foam panels: each specimen is 150 mm long and 70 mm wide, while the thickness is either 30 mm or 40 mm. A total of 5 specimens (two of thickness 30 mm and three of thickness 40 mm; the skin thickness is about 3 mm) were tested. The nominal dimensions of each specimen were carefully checked before executing the compression tests, in order to compute stress precisely. The average density of the 30 mm thick specimens is 222.84 kg/m3, while the average density of the 40 mm thick specimens is 149.34 kg/m3. Compression tests were conducted on an electromechanical testing machine (MTS Alliance RT/30). Special fixtures were designed to clamp the specimen in the testing machine. Figure 1 shows the loading frame. Load and displacement signals, as well as strain gage signals, were acquired by a System 5000 (Micro-Measurements, USA); two strain gage rosettes with 3 mm gage length and 120 Ω electrical resistance were bonded on the skins, with the central grid b aligned with the longitudinal axis of the specimen.
Figure 1. Loading frame utilized for the edgewise compression tests
The edgewise compression loading produced a large deformation in the sample (see Figure 2). The facesheets underwent buckling because of their detachment from the core. Significant out-of-plane displacements are associated with this phenomenon, which initiated where the local thickness is very small or where the foam is irregularly distributed in the core: this caused facesheet delamination from the core (see the detailed view in Figure 2). However, the samples almost completely recovered their original geometry upon removal of the applied load, in spite of the fact that end-shortening values of up to 10 mm were observed (see Figure 3).
Figure 2. Deformation pattern typically exhibited by the foam core sandwich panel in the edgewise compression test
Stress-strain curves recorded in the experiments are shown in Figures 4 and 5 for the 40 mm and 30 mm thick specimens, respectively. The stress value was computed as the ratio between the force sensed by the load cell and the area of the transverse section of each facesheet. The strain value was computed as ΔL/L0, where L0 is the initial length of the sample and ΔL is the nominal end shortening sensed by the loading frame.
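The stress and strain definitions above can be sketched in a short routine. The 50 N load value and the assumption that both facesheet cross-sections carry the load are illustrative choices, not values from the paper; the nominal 70 mm width and ~3 mm skins follow the specimen description in this section.

```python
# Minimal sketch of the stress and strain evaluation used for Figures 4 and 5.
# Assumptions (not from the paper): a 50 N load, and that the load is shared
# by both facesheet cross-sections.

def edgewise_stress(force_N, width_mm, skin_thickness_mm, n_facesheets=2):
    """Nominal compressive stress (MPa) carried by the facesheet cross-sections."""
    area_mm2 = n_facesheets * width_mm * skin_thickness_mm
    return force_N / area_mm2  # N/mm^2 = MPa

def nominal_strain(end_shortening_mm, initial_length_mm):
    """Nominal strain dL/L0 from the cross-bar end shortening."""
    return end_shortening_mm / initial_length_mm

sigma = edgewise_stress(force_N=50.0, width_mm=70.0, skin_thickness_mm=3.0)
eps = nominal_strain(end_shortening_mm=0.5, initial_length_mm=150.0)
print(f"stress = {sigma:.3f} MPa, strain = {eps:.4f}")
```

With these numbers the stress falls near the 0.1 MPa level of the recorded curves, which is why a load of a few tens of newtons is a plausible illustration.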
Figure 3. Image of a 40 mm thick specimen before (a) and after (b) the edgewise compression test; c) load-displacement curve registered for a 30 mm thick specimen
Figure 4. Stress-strain curves (stress [MPa] vs. strain; curves 1-3) recorded for the 40 mm thick specimens
Figure 5. Stress-strain curves (stress [MPa] vs. strain; curves 4-5) recorded for the 30 mm thick specimens
The stress-strain curves recorded experimentally always include a pseudo-linear range corresponding to Hookean behavior. The average values of the Young modulus derived from linear fitting of the σ-ε curves are listed in Table 1 for all specimens tested in this study. As expected, stiffness increased for the thinner panels. Besides the obvious consideration that thick cores must be less stiff than thin cores, since the volume fraction of the facesheets decreases and this makes the specimen easier to deform under the edgewise compression load, it should be noted that the stress-strain curves are very different even for specimens with the same nominal thickness and a small deviation in average thickness.

Table 1. Mechanical properties of the samples submitted to edgewise compression tests

Sample   Average thickness [mm]   Young modulus [MPa]   Specific stiffness (E/ρ)
1        42.11                    4.16                  0.030
2        42.15                    4.07                  0.030
3        41.96                    4.29                  0.030
4        33.97                    7.15                  0.032
5        34.27                    6.36                  0.028
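As a quick check on Table 1, the specific stiffness can be recomputed from the tabulated Young moduli and the average densities quoted in Section 2. Per-specimen densities were not reported, so these recomputed values only approximate the tabulated E/ρ.

```python
# Specific stiffness E/rho from Table 1 moduli and the average densities quoted
# in Section 2 (an approximation: individual specimen densities may differ).
E = {1: 4.16, 2: 4.07, 3: 4.29, 4: 7.15, 5: 6.36}      # Young modulus, MPa
rho = {1: 149.34, 2: 149.34, 3: 149.34,                # 40 mm specimens, kg/m^3
       4: 222.84, 5: 222.84}                           # 30 mm specimens, kg/m^3

specific = {k: E[k] / rho[k] for k in E}
for k, s in specific.items():
    print(f"sample {k}: E/rho = {s:.3f}")
```

The recomputed values cluster around 0.03 for all five samples, consistent with the paper's observation that the specific stiffness is practically the same across specimens.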
This behavior can be explained by variations in the core structure deriving from the fabrication process, which result in a different topology of the cells and a different level of compactness of the core. However, the average value of the specific stiffness was found to be practically the same for all specimens.
3. PROJECTION MOIRÉ MEASUREMENTS
The out-of-plane displacement field experienced by the foam sandwich panels subject to edgewise compression was investigated in detail with the multi-projector moiré method developed by Sciammarella et al. [12,13]. Since the surface is a tensor entity, the optical setup should in general include four projectors. However, if the surface to be measured is curved prevalently about the horizontal axis, as in the present case, only two projectors need be utilized (Figure 6). The two projectors must be placed symmetrically with respect to the optical axis of the camera and project structured non-collimated light onto the surface of the sample. This allows the ideal condition of projection from infinity to be achieved. By changing the angle θ formed by the optical axis of the projector with the optical axis of the sensor, and by utilizing gratings of different pitch, one can obtain different values of sensitivity and thus contour samples with dimensions ranging from a few microns to meters.
Figure 6. Schematic of the optical set-up utilized to measure out-of-plane displacements of foam sandwich panels
Two different sets of images are necessary in order to conduct the measurements. The first set consists of the images of the grating projected onto the surface of a reference plane, obtained by illuminating the surface with the top-side or the bottom-side projector. The second set consists of the images of the grating projected onto the surface of the sample, obtained in the same way as described above. The first set of images can be recorded and stored in the computer memory when the system is calibrated. If φ_RP,T(x,y) is the phase of the reference plane obtained by illuminating the plane with the upper projector and φ_RP,B(x,y) is the phase obtained by illuminating it with the lower projector, the phase of the reference plane is Δφ_RP(x,y) = φ_RP,T(x,y) − φ_RP,B(x,y). Similarly, if φ_S,T(x,y) and φ_S,B(x,y) are the phases of the sample obtained by illuminating it with the top and bottom projectors, respectively, the total phase of the sample is Δφ_S(x,y) = φ_S,T(x,y) − φ_S,B(x,y). If Δφ_TOT is the total phase determined as Δφ_TOT(x,y) = Δφ_S(x,y) − Δφ_RP(x,y), the height Z(x,y) of each point of the sample with respect to the reference plane is:

    Z(x,y) = [m·p_o / (2·tan θ)] · [Δφ_TOT(x,y) / (2π)]        (1)

where m is the magnification and p_o is the nominal pitch of the grating. The accuracy of the two-projector moiré setup is of the order of some hundredths of the sensitivity m·p_o/(2 tan θ).
Figure 7. Schematic of the metallic frame realized in order to carry out measurements of out-of-plane displacement
An ad hoc metallic frame was designed and built to measure out-of-plane displacements with the optical set-up described above. The frame is shown in Figure 7. Three aluminum bars were assembled to guide the movement of the projectors and the CCD camera. A tooth gear engages two racks onto which the projectors are fixed; by means of the tooth gear, the projectors can be placed symmetrically with respect to the axis of the CCD camera. The projectors are fixed to the frame through holders that can rotate about the axis of a hinge. A graduated scale on the holder allows the projector to be given the exact rotation needed to align its optical axis with the center of the sample. The tooth gear is fixed onto a support including a plate onto which the CCD camera is attached. The two racks, as well as the tooth gear, can translate in the vertical direction so that panels of different dimensions can be measured on different testing machines. In the present experiments, two LTPR3W/R OEPL 50° diode projectors and an XCL-5000 5-Mpixel CCD camera were utilized. A 3D view of the optical setup is shown in Figure 8. Ad hoc fixtures were built to fix the specimen in the loading frame so that no shadows were projected onto the surface of the sample. A very thin layer of powder was applied to the specimen to improve the contrast of the projected fringes. The plane containing the sensor of the CCD camera was set parallel to the surface of the sample, with its center aligned with the center of the sample under investigation. A preliminary calibration procedure was conducted to determine the pixel size of the acquired images. The pitch of the projected gratings p_o was 127 µm. Due to the large aperture of the beam emerging from the projectors (the aperture angle of the beam is 50°), a magnification m = 53.33 was obtained.
By inclining the projectors at 52.5° with respect to the optical axis of the CCD camera, a sensitivity Δs equal to 2598 µm (see Eq. (1)) was obtained. The compression test was conducted in subsequent steps: the vertical displacement of the cross bar was increased by 0.5 mm per step to give the corresponding end shortening to the specimen. Images were taken at intervals of 60 seconds. The surface of the unloaded specimen was assumed as the reference plane. Figure 9 shows the surface of the unloaded sandwich panel (Figures 9a and 9b) and of the loaded panel (Figures 9c and 9d); the "loaded" configuration corresponds to an axial displacement of 8 mm. It is interesting to observe how the projected fringes are approximately parallel before the beginning of the compression test and how they are modulated by the surface of the sample as the axial displacement increases. Figure 10 shows the moiré fringes formed at each axial displacement investigated. The fringe topology is consistent with the deformations observed in the experimental tests described in Section 2. By analyzing the spatial frequency distribution of the moiré fringes, it can be observed that the relationship between out-of-plane displacement and end shortening is highly nonlinear.
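Equation (1) and the quoted setup values can be checked numerically. The snippet below is a direct transcription of the formula, with the parameter values (p_o = 127 µm, m = 53.33, θ = 52.5°) taken from the text.

```python
import math

# Out-of-plane height from the total moire phase, Eq. (1):
#   Z(x, y) = [m * p_o / (2 tan(theta))] * [dphi_tot(x, y) / (2 pi)]
m = 53.33                 # magnification (from the text)
p_o = 127.0               # grating pitch, um (from the text)
theta = math.radians(52.5)  # projector inclination (from the text)

sensitivity = m * p_o / (2.0 * math.tan(theta))  # um of height per 2*pi of phase
print(f"sensitivity = {sensitivity:.1f} um")     # close to the 2598 um reported

def height(dphi_tot):
    """Height Z (um) for a total phase difference dphi_tot (rad), per Eq. (1)."""
    return sensitivity * dphi_tot / (2.0 * math.pi)

print(f"Z for one full fringe (2*pi): {height(2.0 * math.pi):.1f} um")
```

One full moiré fringe (a 2π phase difference) thus corresponds to one sensitivity step of height, which is how the fringe maps of Figure 10 translate into deflection maps.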
Figure 8. Optical setup for measuring out-of-plane displacements
Figure 9. Fringes projected onto the surface of the unloaded sample (a,b) and loaded sample (c,d) by the top projector (a,c) and the bottom projector (b,d)
6. SUMMARY AND CONCLUSIONS
This paper presented an experimental study of the structural response of polyethylene foam sandwich panels subject to edgewise compression. Two different experimental campaigns were conducted: the first served to analyze the stress-strain curves, while the second analyzed in detail the out-of-plane displacement field by means of a two-projector moiré setup. The experimental analyses carried out in this study are relevant because it is still difficult to establish a direct relationship between the mechanical properties of foam sandwich panels and the specific manufacturing process.
Figure 10. Evolution of moiré pattern at increasing end shortening
References
[1] Gibson L.J., Ashby M.F. Cellular Solids: Structure and Properties, 2nd edition. Cambridge University Press, UK, 1997.
[2] Rizov V.I. Low velocity localized impact study of cellular foams. Materials and Design, 28, 2632-2640, 2007.
[3] Hazizan M.A., Cantwell W.J. The low velocity impact response of foam-based sandwich structures. Composites Part B, 33, 193-204, 2002.
[4] Fan X., Xiao-Quing W. Study on impact properties of through-thickness stitched foam sandwich composites. Composite Structures, 92, 412-421, 2010.
[5] Casavola C., Dell'Orco V., Giannoccaro R., Pappalettere C. Structural response of aluminum foam sandwich under compressive loading. In: Proceedings of the 2009 SEM Annual Conference and Exposition on Experimental and Applied Mechanics, Albuquerque (NM), June 2009.
[6] Sciammarella C.A. 2001 William M. Murray Lecture: Overview of optical techniques that measure displacements. Experimental Mechanics, 43, 1-19, 2003.
[7] Fleming G.A., Soto H.L., South B.W., Bartram S.M. Advances in projection moiré interferometry development for large wind tunnel applications. In: Proceedings of the SAE 1999 World Aviation Congress and Exposition, San Francisco, CA (USA), 1999. AIAA paper 1999-01-5598.
[8] Burner A.W., Fleming G.A., Hoppe J.C. Comparison of three optical methods for measuring model deformation. In: Proceedings of the 38th Aerospace Sciences Meeting and Exhibit, Reno, NV (USA), 2000. AIAA paper 2000-0835.
[9] Fleming G.A., Gorton S.A. Measurement of rotorcraft blade deformation using projection moiré interferometry. Shock and Vibration, 7, 149-165, 2000.
[10] Featherston C.A., Lester D.A. The use of automated projection interferometry for monitoring aerofoil buckling. Experimental Mechanics, 42, 253-260, 2002.
[11] Falzon B.G., Aliabadi M.H. (eds.). Computational and Experimental Methods in Structures, Vol. 1: Buckling and Postbuckling Structures. Experimental, Analytical and Numerical Studies. Imperial College Press, London (UK), 2008. ISBN: 978-1-86094-794-0.
[12] Sciammarella C.A., Lamberti L., Boccaccio A. A general model for moiré contouring. Part 1: Theory. Optical Engineering, 47, Paper No. 033605, pp. 1-15, 2008.
[13] Sciammarella C.A., Lamberti L., Boccaccio A., Sciammarella F.M. High precision contouring with moiré and related methods: a review. Strain, 2011 (in press).
[14] ASTM C364, Standard Test Method for Compressive Properties of Sandwich Constructions, 2000.
Panoramic stereo DIC-based strain measurement on submerged objects
K. Genovese(1), L. Casaletto(1), Y-U. Lee(2), J.D. Humphrey(2)
(1) Dipartimento di Ingegneria e Fisica dell'Ambiente, Università degli Studi della Basilicata, Italy
(2) Department of Biomedical Engineering, Yale University, New Haven, CT, USA
ABSTRACT In this paper, we describe a theoretical foundation for and experimental implementation of a novel 360-degree stereo digital image correlation (DIC) method for quantifying shape and deformation of quasi-cylindrical specimens along their full length and around their circumference. The proposed approach has been implemented for in-vitro experiments on arteries immersed within a physiologic solution (PS) that maintains unaltered native biomechanical properties. To this end, we also address the ubiquitous issue of refraction in non-contacting optical measurements on biological samples immersed in PS by developing and contrasting three different approaches for the correction of the refraction error.
1. Introduction
Quantifying the regionally varying properties of arteries is of primary importance for investigating their mechanical response, in order to better understand disease progression and the response to clinical intervention [1]. Toward this end, in vitro experiments represent an irreplaceable tool to collect the needed data over a full range of multiaxial loads, but they are typically performed on arterial specimens that are nearly straight and cylindrical [2]. Stereo-systems tracking a given set of markers affixed to the surface of complex-shaped specimens [3, 4] have been shown to provide information on potential regional differences in tissue mechanical properties when used within sub-domain inverse finite element frameworks [5, 6]. However, the recent move toward mouse models to study changes in arterial properties due to genetic mutations or disease progression [7-9] makes it cumbersome to apply a large number of markers to the arterial surface so as to sample the displacement field with sufficient resolution and without affecting the estimated wall properties. To address these issues, a panoramic Digital Image Correlation (DIC)-based system for quantifying full surface strain fields along the full length and around the entire circumference of small arteries with complex geometries has been developed and tested [10, 11]. The strength of this method lies in its capability to map the whole 3D deformation of arterial specimens in their native (e.g. curved) or altered (e.g. aneurysmal) geometries with the high spatial resolution typical of the DIC measurement. In this paper we present an improved version of this method that includes a more efficient calibration procedure. Moreover, since living arteries need to be tested while immersed in physiological solution, three methods for correcting the measurement error due to refraction at the air/water interface of the specimen bath have been developed and compared.
T. Proulx (ed.), Optical Measurements, Modeling, and Metrology, Volume 5, Conference Proceedings of the Society for Experimental Mechanics Series, DOI 10.1007/978-1-4614-0228-2_31, © The Society for Experimental Mechanics, Inc. 2011
2. Materials and Methods
A detailed description of the theoretical rationale behind the panoramic DIC method, together with the results of the experimental campaign carried out to assess the accuracy and resolution of the measurement, can be found in [10, 11]. Briefly, measurement of full surface strain fields of quasi-cylindrical objects can be achieved if the specimen is placed nearly coaxially within a 45° concave conical mirror. When viewed from above, the mirror reflects the surface of the specimen over the full 360 degrees; moreover, when at least two different points of view are used, 3D information can be obtained using the principles of stereophotogrammetry [12]. In particular, using a DIC-based procedure [13] to match paired stereo-images allows full reconstruction of the surface in the reference and deformed configurations, which in turn permits full analysis of the surface strain field with high spatial resolution. In the original set-up presented in [10, 11], a four-view stereo-system was realized by placing a 45° gimbal-mounted flat mirror between the camera and the conical mirror and sequentially tilting the mirror to enable a single camera to view four nearly polar-symmetrical images: right (R), left (L), up (U), and down (D). This peculiar image-formation scheme, in fact, rules out the use of a standard lateral-view stereo-system with two fixed cameras, since the stereo-angle required to obtain satisfactory images needs to be less than 1°. An immediate consequence of adopting a very small stereo-angle is a large reconstruction error for the object points close to the plane containing the axes of the two cameras (called hereafter the 'stereo-plane'). An evenly accurate reconstruction of the sample can be obtained only by adopting a four-view set-up composed of two stereo-systems (namely RL and UD) with their stereo-planes roughly perpendicular to each other.
Merging the two sets of data points sufficiently far from the stereo-planes in a complementary way, in fact, allows the full 360-degree shape of quasi-cylindrical specimens to be reconstructed with an accuracy of 10^-2 mm [10, 11]. In this work we address two issues critical for the implementation of the methodology presented in [10, 11]: calibration and correction of the refraction error. In particular, we test an alternative procedure to calibrate the cameras more accurately, and we implement and contrast three different methods to correct the error due to refraction at the air/water interface of the bath containing the specimen.

2.1. Experimental system set-up and calibration
Figure 1a shows a scheme of the experimental set-up adopted in this work. We chose a two-view stereo-system with two fixed cameras so as not to include in the analysis any error due to repositioning of the cameras of the original four-view arrangement. A beam splitter was initially positioned at 45° with respect to the axis of the cone and to the axes of the two fixed cameras, as depicted in Fig. 1a. Then, the cameras were slightly tilted in order to realize the desired stereo-angle. A ½” steel post with a printed random pattern on it was placed coaxially with the conical mirror. On the uppermost portion of the mirror, another speckle pattern was glued to serve as a calibration target. A preliminary calibration procedure was needed prior to performing the fine alignment of the system. Unlike the procedure adopted in [10, 11], in fact, the calibration of the cameras and the determination of the position of the conical mirror surface in the global reference system were achieved by using a speckle pattern instead of a dot calibration pattern. This allowed us to align and calibrate the system more efficiently, as will be discussed later. The developed calibration/alignment procedure can be summarized as follows:
- The two cameras were aligned to form a stereo-angle of about 16°. A steel bracket with a regular dot pattern on it was placed inside the cone and served as a target to retrieve the position of the cameras in a global reference system conveniently attached to the bracket (named GRS_1), via calibration with the Direct Linear Transformation (DLT) method [14].
- The bracket was removed and the position of 6000 points (SET_1) of the speckle pattern glued on the uppermost portion of the mirror was retrieved with great accuracy (thanks to the large stereo-angle adopted) in GRS_1.
- An optimization procedure was used to find the components of the rotation matrix and the translation vector needed to transform the coordinates of the points belonging to the cone from GRS_1 to another system (GRS_2) in which the cone equation can be written as z tan γ = √(x² + y²). The aperture angle γ of the cone was included among the design variables of the optimization procedure.
- The new coordinates of the 6000 points (SET_2) belonging to the calibration pattern on the cone were then used to find the intrinsic and extrinsic parameters of the two cameras in the new global reference system GRS_2.
- The two cameras were progressively moved by means of multiaxial translational and rotational stages in order to finely align the system to the final configuration. In particular, the stereo-angle was reduced to the 1° required by the panoramic measurement and the two cameras were aligned so as to be symmetrical with respect to the axis of
the cone. The exact position of the cameras was retrieved at each step of the alignment procedure by calibrating via the SET_2 points, calculating from the DLT parameters the positions of the sensor centers and pin-holes, and visualizing them in a CAD environment. The stages were thus progressively tuned to move the camera axes to the desired final position. This procedure has several advantages over the calibration reported in [10, 11]. In the previous work, a dot pattern was glued on the uppermost portion of the cone and the 'actual' position of the dots was deduced from geometrical considerations starting from the positions of the dots in the unrolled printed pattern. However accurate the positioning of the pattern on the cone may be, the exact theoretical position of the dots cannot be obtained in practice. This leads to an inaccurate calibration of the stereo-system and to an erroneous evaluation of the position of the mirror surface in the global reference system, which mainly affects the reconstruction of points close to the stereo-planes, for which the 'virtual' stereo-angle coincides with the actual stereo-angle (see [10, 11] for details). With the present calibration procedure, instead of a centroid-seeking algorithm, the more efficient DIC algorithm is used to match the two views of the stereo-system, retrieving six times more points than the previous calibration pattern provided. The actual positions of these points (SET_1) in the global reference system are evaluated using a stereo-system with a large stereo-angle and a more reliable 'bracket-like' calibration target. The expected gain in efficiency of the calibration procedure was confirmed by a strongly reduced error on the stereo-plane in the reconstruction of the ½” steel post (less than 5% on the radius).
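The DLT calibration step referenced above can be sketched as follows. This is a generic 11-parameter DLT solved by least squares on synthetic, noise-free data; the camera model, point counts, and numeric values are illustrative assumptions, not the authors' implementation or data.

```python
import numpy as np

# Minimal Direct Linear Transformation (DLT) sketch: recover the 11 DLT
# parameters from known 3D points and their image coordinates.
def dlt_calibrate(XYZ, uv):
    """XYZ: (N,3) object points; uv: (N,2) image points; returns 11 DLT params."""
    A, b = [], []
    for (X, Y, Z), (u, v) in zip(XYZ, uv):
        A.append([X, Y, Z, 1, 0, 0, 0, 0, -u * X, -u * Y, -u * Z])
        b.append(u)
        A.append([0, 0, 0, 0, X, Y, Z, 1, -v * X, -v * Y, -v * Z])
        b.append(v)
    L, *_ = np.linalg.lstsq(np.asarray(A, float), np.asarray(b, float), rcond=None)
    return L

def dlt_project(L, XYZ):
    """Project 3D points to image coordinates with the 11 DLT parameters."""
    X, Y, Z = np.asarray(XYZ, float).T
    den = L[8] * X + L[9] * Y + L[10] * Z + 1.0
    u = (L[0] * X + L[1] * Y + L[2] * Z + L[3]) / den
    v = (L[4] * X + L[5] * Y + L[6] * Z + L[7]) / den
    return np.stack([u, v], axis=1)

# Synthetic check: project points with a known pin-hole camera (focal length,
# principal point and offsets are illustrative), recover the DLT parameters,
# and verify the reprojection error is essentially zero.
rng = np.random.default_rng(0)
XYZ = rng.uniform(-1, 1, size=(50, 3)) + np.array([0, 0, 5.0])
f = 800.0
den = XYZ[:, 2] + 2.0  # camera displaced along the optical axis
uv = np.stack([f * XYZ[:, 0] / den + 320, f * XYZ[:, 1] / den + 240], axis=1)

L = dlt_calibrate(XYZ, uv)
err = float(np.abs(dlt_project(L, XYZ) - uv).max())
print(f"max reprojection error: {err:.2e} px")
```

In the paper the same linear system is massively overdetermined (6000 points for 11 unknowns), which is what makes the least-squares DLT solution robust.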
Fig. 1 a) Scheme of the optical set-up adopted in this work; b) picture of the experimental set-up; c) detailed view of the conical mirror with the steel post used in this work as the calibration sample.
2.2 Correction of the refraction error
Once the system had been calibrated, a series of experiments was run to evaluate the relative merits of three different methodologies for correcting the refraction error at the air/water interface of the specimen bath. This methodology, in fact, has been conceived for measuring the full surface strain field on living arteries, which must necessarily be tested while immersed in physiological solution. To evaluate the effect of refraction on the panoramic measurement, two pairs of images were captured with the cone and sample immersed in water (IW1 and IW2 from cameras 1 and 2, respectively) and in air (IA1 and IA2). The three approaches are briefly described and discussed here.
- DLT-based approach. The two cameras are calibrated via standard DLT by using the images captured in water, IW1 and IW2, and the object is reconstructed by using the set of 11 DLT parameters so obtained. The straightforward use of this approach entails a theoretical inconsistency, since the DLT method relies on the pin-hole projection scheme and thus on the collinearity condition, which considers the image point, the pin-hole and the object point to lie on the same line. This condition is not
satisfied in the presence of refraction. Indeed, it is well known that when light crosses an air/water interface, its direction of travel deviates according to Snell's law η_a sin θ_a = η_w sin θ_w, where θ_a is the incident angle, θ_w is the refracted angle, and η_a and η_w are the indices of refraction of air and water, respectively (η_a = 1.0 and η_w ≅ 1.33). Hence, when using the DLT approach to calibrate the camera positions with the images captured in water, the set of DLT parameters obtained represents a solution that is not physically feasible. In other words, if these DLT parameters are used to calculate the sensor center or the pin-hole position, they locate the camera at an unfeasible position in the reference system. However, since the DLT parameters represent the least-squares solution of a largely redundant system (6000 points for 11 unknowns), it is possible that, under certain conditions, they can be effectively used to reconstruct the points falling within the calibration area. The very good results obtained here by using this method (see Fig. 2b and 3b) can be explained by considering that: i) the calibration points lie on the same surface that contains the object points; ii) the calibration pattern surrounds the area of measurement and occupies a small central region of the camera sensor; iii) the angle of stereo vision is very small, i.e. the rays are almost perpendicular to the air/water interface, with a consequently small difference between incident and refracted angles.
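The magnitude of the refraction effect invoked in point iii) can be checked numerically from Snell's law. The following minimal sketch (function name ours) shows that for near-normal rays the difference between incident and refracted angles is indeed small:

```python
import math

def snell_refracted_angle(theta_a_deg, n_a=1.0, n_w=1.33):
    """Refracted angle (degrees) for a ray crossing an air/water interface,
    from Snell's law: n_a * sin(theta_a) = n_w * sin(theta_w)."""
    s = n_a * math.sin(math.radians(theta_a_deg)) / n_w
    return math.degrees(math.asin(s))
```

For a 1° incidence angle the refracted ray deviates by only about a quarter of a degree, while at 30° the deviation grows to several degrees, which is why a small stereo-angle keeps the refraction error manageable.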
Fig. 2 Sample reconstruction. a) The sample reconstructed in air (black dots) vs the sample under water reconstructed without correcting refraction (red dots); b) correction with the DLT approach (purple dots); c) correction with the RP approach (green dots); d) correction with the ES approach (blue dots).
- Refraction plane (RP) approach. This approach aims to find the actual position of the air/water interface in the global reference system in order to allow reconstruction of the object via a ray-tracing procedure. A flat plastic sheet with a random pattern is floated on the experimental bath and reconstructed in the global reference system (Fig. 4). The coefficients of this plane are used as starting values in an optimization routine aimed at finding the new set of coefficients of the actual air/water interface. The objective function to be minimized is the sum of the distances between the points of the calibration target as reconstructed in air and their counterparts obtained from the images of the pattern under water. In particular, the travel of the light from each image
point is traced from the sensor, through the pin-hole, and then deviated at the current air/water interface according to Snell's law. The refraction index of the water has been included among the optimization variables (here, a value of η_w ≅ 1.327 has been obtained). Once the optimal air/water interface has been found, reconstruction of the sample is performed via ray-tracing by using the DLT parameters from the calibration in air (Fig. 2c and Fig. 3c). This approach yields the largest reconstruction error among the three methods considered, in spite of being the most 'physically' meaningful. This may be due to the inherent sensitivity of any optimization procedure to the starting point and to the adopted optimization algorithm.
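As an illustrative sketch of the first step of the RP approach, the starting values for the interface coefficients can be obtained from the floating-sheet points by a standard least-squares plane fit. The helper name `fit_plane` is ours; the subsequent ray-tracing refinement of the coefficients and of η_w is not reproduced here.

```python
import numpy as np

def fit_plane(points):
    """Least-squares plane through 3-D points: returns (unit normal n, offset d)
    such that n . x = d for points x on the plane.  Here it only provides
    starting values; the four plane coefficients and the refraction index
    would then be refined together by minimising the air/water mismatches."""
    P = np.asarray(points, dtype=float)
    centroid = P.mean(axis=0)
    # The normal is the singular vector of the smallest singular value
    # of the centred point cloud.
    _, _, Vt = np.linalg.svd(P - centroid)
    n = Vt[-1]
    return n, float(n @ centroid)
```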
Fig. 3 Plots of the reconstruction error (calculated as the distance between reconstructed and target points). a) reconstruction without correction of the refraction error; b) correction with the DLT approach; c) correction with the RP approach; d) correction with the ES approach.
- Error surfaces (ES) approach. Generally speaking, it is not possible to express the refraction error as a function of the image coordinates alone, since two separate object points P1 and P2 that map to the same image point I_W have two different refraction errors ΔI_1 and ΔI_2 with respect to their image points in air I_A1 and I_A2, which depend on their 3D position in space. However, in our case, all object rays hitting the camera sensors come from a surface (the conical mirror surface) whose position is fixed and known with respect to the stereo system. This allows us to implement a more straightforward approach for reducing the error due to refraction, as described in detail in [10, 11]. Briefly, the two pairs of images IW1-IA1 and IW2-IA2 are correlated via DIC. This allows us to quantify the effect of the refraction by mapping the displacements u_ζ and u_η [pixel] along the x and y directions for a set of points belonging to the calibration pattern applied on the uppermost part of the cone. If a distortion function is determined for all points belonging to the conical surface and for each camera sensor, it can be used to convert images of the specimen immersed in water into the corresponding distortion-free images as they would have been captured in air.
To this aim, the values of u_ζ and u_η for points on the inner portion of the mirror are extrapolated from those on the upper annular portion by fitting the displacement maps with a Non-Uniform Rational B-Spline (NURBS) in a CAD environment. The 'distortion functions' obtained in this way are used to correct the position of the image points of the sample, which is finally reconstructed by using the DLT set of parameters of the calibration in air (Fig. 2d and Fig. 3d). Several trial tests showed the results of this procedure to be strongly dependent on the setting parameters of the NURBS (stiffness and u-v spans), since displacement values of the order of 10^-1 pixel were found to yield errors of several percent on the radius of the sample. This limit could be overcome by applying a speckle pattern on the whole surface of the cone and thus evaluating the distortion functions directly from the images, avoiding extrapolation from the outer annular pattern [10, 11]. However, this would imply a longer and more tedious procedure not free from additional sources of error.
Fig. 4 a) The plastic sheet floating on the specimen bath used for calculating the initial values of the design variables in the optimization procedure of the RP method; b) the air/water interface as reconstructed in the GRS_2.
3. Conclusion
In this work, we presented an improved panoramic DIC system for quantifying the shape and deformation of quasi-cylindrical specimens along their full length and around their circumference. In particular, we introduced a more efficient calibration procedure that was shown to improve reconstruction accuracy, and we investigated the relative merits of three approaches for correcting the refraction error. Future work will focus on investigating the possibilities of the localized DLT method for correcting the refraction error and on improving the efficiency of the optimization procedure at the basis of the RP method.
References
[1] Humphrey JD. Cardiovascular Solid Mechanics: Cells, Tissues, and Organs, Springer, 2002. [2] Gleason RL, Gray SP, Wilson E, Humphrey JD. 2004. A multiaxial computer-controlled organ culture and biomechanical device for mouse carotid arteries. ASME J Biomech Engr 126: 787-795. [3] Hsu FPK, Downs J, Liu AMC, Rigamonti D, and Humphrey JD. 1995. A triplane video-based experimental system for studying axisymmetrically inflated biomembranes. IEEE Trans Biomed Engr 42:442-449. [4] Everett WN, Shih P, Humphrey JD. 2005. A bi-plane video-based system for studying the mechanics of arterial bifurcations. Exp Mech 45(4): 377–82.
[5] Seshaiyer P, Hsu FPK, Shah AD, Kyriacou SK, and Humphrey JD. 2001. Multiaxial mechanical behavior of human saccular aneurysms. Comp Meth Biomech Biomed Engr 4: 281-290. [6] Seshaiyer P, Humphrey JD. 2003. A sub-domain inverse finite element characterization of hyperelastic membranes, including soft tissues. ASME J Biomech Engr 125: 363-371. [7] Gleason RL, Dye WW, Wilson E, Humphrey JD. 2008. Quantification of the mechanical behavior of carotid arteries from wild-type, dystrophin-deficient, and sarcoglycan-delta knockout mice. J Biomech 41: 3213-3218. [8] Eberth JF, Taucer AI, Wilson E, Humphrey JD. 2009. Mechanics of carotid arteries from a mouse model of Marfan Syndrome. Annls Biomed Engr 37: 1093-1104. [9] Wan W, Yanagisawa H, Gleason RL. 2010. Biomechanical and microstructural properties of common carotid arteries from fibulin-5 null mice. Annl Biomed Engr 38 (12): 3605-3617. [10] Genovese K, Lee YU, Humphrey JD, Novel optical system for in vitro quantification of full surface strain fields in small arteries: I. Theory and design. Computer Methods in Biomechanics and Biomedical Engineering, 2011. DOI: 10.1080/10255842.2010.545823. [11] Genovese K, Lee YU, Humphrey JD, Novel optical system for in vitro quantification of full surface strain fields in small arteries: II. Correction for refraction and illustrative results. Computer Methods in Biomechanics and Biomedical Engineering, 2011. DOI: 10.1080/10255842.2010.545824.
[12] Faugeras OD. 1993. Three-Dimensional Computer Vision: A geometric Viewpoint, MIT Press. [13] Sutton MA, Orteu J-J, Schreier H. 2009. Image Correlation for Shape, Motion and Deformation Measurements. Springer. [14] Hatze H. 1988. High precision three-dimensional photogrammetric calibration and object space reconstruction using a modified DLT-approach. J Biomech 21(7): 533-538.
Advances in the Measurement of Surface Properties Utilizing Illumination at Angles Beyond Total Reflection
Sciammarella C.A.a, Sciammarella F.M.a, Lamberti L.b
a. Department of Mechanical Engineering, College of Engineering & Engineering Technology, Northern Illinois University, DeKalb, IL, USA
b. Dipart. Ingegneria Meccanica e Gestionale, Politecnico di Bari, Bari, Italy
Keywords: evanescent waves, surface roughness, high accuracy topography
Abstract
In preceding papers the authors employed evanescent illumination to measure depth information on rough surfaces. In the developments presented in this paper, new experimental evidence has been gathered. This information provides additional elements that help formulate a more complete model of the phenomena taking place. The measurements carried out, when compared with independently gathered information, support the formulated model.
1.0 Introduction
The authors have successfully applied illumination via evanescent fields to measure the topography of metallic surfaces in the micron and sub-micron range. The basis of the developed methodology was explained through the interaction of evanescent fields and metallic strata. In more recent developments the technique was successfully extended to non-metallic surfaces (ceramics). Measurements in the micron and sub-micron ranges have been independently verified by alternative techniques. In this paper the process of recording surface topography is re-analyzed in light of the experimental evidence gathered. The method is based on a metallic grating deposited on a thin glass plate. The plate is in contact with the surface to be studied. A coherent wavefront impinges on the interface between the metallic grating and the surface under study at an angle larger than the limit angle of total reflection. The interactions of the wavefronts with the subjacent surface are re-examined.
The grating provides the carrier that, upon reflection on the studied surface, is modulated; this modulation encodes the depth information. Observation of the grating surface through a microscope produces an image. The modulated carrier contained in this image can be separated from the un-modulated carrier by digital filtering in the frequency space. The process of information recovery is evaluated from the observation of known standard surfaces.
2.0 Fundamental equations governing the formation of fringe patterns
When a plane wavefront impinges on the surface separating two media such that the index of refraction of medium 1 (glass) is higher than that of medium 2 (air), total reflection takes place beyond the limit angle (see Figure 1). Under these circumstances a very interesting phenomenon occurs. At the glass-air interface evanescent waves are produced. At the same time, scattered waves emanate from medium 1 (glass). If a third medium, a conducting material such as a metal, is close enough to interact with this field, the energy of the field interacts with the free electrons of the metal surface to generate plasmons (dark orange area in Figure 1) that, by decaying, cause the metal surface to emit light. Between the copper and the glass surface there is an optical cavity, or optical resonator, that produces standing waves. The electromagnetic field confined in the cavity is reflected multiple times inside the cavity, producing standing waves for certain resonance frequencies that depend on the geometry of the cavity. The standing wave patterns thus generated are called modes: each mode is characterized by a frequency f_n, where the subscript n is an integer. In the classical Fabry-Perot cavity analysis the two interacting surfaces are mirror-like. In the present case one of the surfaces (the glass) is mirror-like while the other (the metal) is an optically rough surface.
Therefore, many different spatial frequencies can be observed experimentally. At this point it is important to describe the phenomenon leading to the generation of the emitted light with different spatial frequencies on rough metallic T. Proulx (ed.), Optical Measurements, Modeling, and Metrology, Volume 5, Conference Proceedings of the Society for Experimental Mechanics Series 9999999, DOI 10.1007/978-1-4614-0228-2_32, © The Society for Experimental Mechanics, Inc. 2011
surfaces. A rough surface can be thought of as the superposition of many gratings of different periodicities. Kretschmann analyzed this problem in the following fashion [1]. A rough surface can be defined through the following statistical correlation function:

G(x, y) = (1/A) ∫_A z(x', y') z(x' − x, y' − y) dx' dy'   (1)

where z(x, y) is the Monge representation of the surface height and A is the area of integration. Under the assumption that the height distribution is a random function (as is usually done in the analysis of random surfaces), a Gaussian distribution can be utilized. The correlation function becomes:

G(x, y) = R_q² exp(−r²/σ_i²)   (2)

where R_q is the root-mean-square value of the surface heights, assumed to be random variables with correlation length σ_i, and r is the distance from the generic point P(x, y) of the object surface. From the Fourier transform (FT) of (1) the spectrum s of spatial frequencies present in the surface can be obtained. From the point of view of plasmon excitation, one can prove that in order to excite a plasmon resonance it is necessary that the exciting frequency coincides with a frequency in the Fermi electromagnetic state. Hence, the larger the spectrum of frequencies, the greater the amount of energy available for coupling plasmons within the metallic surface. The spatial frequency spectrum is described by the following equation:

s(k_surf) = (σ_i²/4π) R_q² exp(−σ_i² k_surf²/4)   (3)

The above equation shows that the spectrum of light emitted by the surface consists of multiple wave vectors that are directly related to the surface topography properties. Each wave vector corresponds to an equivalent pitch p_gr defined as:

k_surf = 2π/p_gr   (4)
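Equation (3) is straightforward to evaluate numerically. The sketch below (function name ours) illustrates how the spectrum decays with k_surf and how the correlation length σ_i controls that decay:

```python
import numpy as np

def roughness_spectrum(k_surf, Rq, sigma_i):
    """Spatial-frequency spectrum of Eq. (3):
    s(k_surf) = (sigma_i**2 / (4*pi)) * Rq**2 * exp(-sigma_i**2 * k_surf**2 / 4).
    A larger correlation length sigma_i concentrates the energy at lower
    spatial frequencies (faster decay with k_surf)."""
    k = np.asarray(k_surf, dtype=float)
    return sigma_i**2 / (4.0 * np.pi) * Rq**2 * np.exp(-sigma_i**2 * k**2 / 4.0)
```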
Figure 1. Generation of the evanescent field and surface plasmon resonance in the cavity between a glass plate and a rough copper surface.
If the surface has only one Fourier component of roughness (i.e. the surface profile is sinusoidal), then the s function is discrete and exists only at k_surf = 2π/p_gr, where p_gr is the pitch of the equivalent sinusoidal grating. However, most surfaces of practical interest have a definite structure and cannot be considered random surfaces. Surfaces of technical interest manufactured industrially present a periodic structure; for this reason, finished surfaces are more similar to a deterministic diffraction grating than to a random grating. Any plasmon propagating on a rough surface with the appropriate k_surf can generate the emission of a photon [2]. Since k_surf can be a random quantity, even if the light has a defined direction it is possible to generate plasmons in all directions. This phenomenon was verified experimentally by Teng and Stern [3].
3.0 Observed light intensity distribution in an experimental set-up
In order to determine the validity of the above model of the formation of an electromagnetic field in the cavity between a glass surface and a metallic surface, the following experiment was performed. Roughness measurements were carried out on an HQC226 roughness precision reference standard certified by NIST according to ANSI B46.1. Figure 2 shows a schematic of the experimental setup. A grating is added to the surface of the glass plate forming the cavity. In this way it is possible to generate more than one fundamental frequency by utilizing the different frequencies produced by the grating. The whole set-up is mounted on a microscope that views the surface along the normal to the plane of the specimen.
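The decomposition of a surface into equivalent gratings can be illustrated with a discrete Fourier transform. In the synthetic example below (values are arbitrary, not taken from the HQC226 standard), a profile built from two 'gratings' of 100 μm and 20 μm pitch is recovered from the peaks of its FFT spectrum:

```python
import numpy as np

# A surface profile as a superposition of two "gratings" (sinusoids);
# the FFT recovers their pitches, illustrating the k_surf decomposition.
L, N = 1000.0, 4096                 # profile length [um] and number of samples
x = np.linspace(0.0, L, N, endpoint=False)
z = 1.0 * np.sin(2*np.pi*x/100.0) + 0.3 * np.sin(2*np.pi*x/20.0)  # pitches 100 and 20 um
spec = np.abs(np.fft.rfft(z))
freqs = np.fft.rfftfreq(N, d=L/N)   # spatial frequency in cycles per um
peaks = freqs[np.argsort(spec)[-2:]]  # frequencies of the two strongest peaks
pitches = sorted(1.0 / peaks)         # back to pitches [um]
```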
Figure 2. Experimental setup for surface topography analysis including a grating.
The surface consists of a saw tooth profile of nominal pitch L_t = 100 μm and depth h_t = 6 μm. The resultant Ra (average roughness) is 3 μm. This standard is used to calibrate devices based on stylus probes. Figure 2 provides a model for the process of contouring that can also be applied to other surfaces that are not deterministic. The figure illustrates the case of double illumination, but only one illumination beam is analyzed here for the sake of simplicity. The inclination of the beam is larger than the critical angle and therefore the light is totally reflected at the glass-air interface. However, as indicated previously, the electromagnetic field penetrates the cavity between the glass and the metal surface. Schematically, the figure shows the trajectory of the photons that enter the cavity and, according to the conservation of momentum, continue their trajectory. Because of the ray inclination with respect to the direction of observation, the wavefronts reaching the objective of the microscope are due to the light-scattering effect of the glass, as shown in Figure 3. In the direction of observation one sees the amplitude of the polarization vector according to the law of light scattering sketched in Figure 3. Simultaneously, evanescent wavefronts are created at the glass-metal interface. The evanescent field generates photons that penetrate the glass-metal cavity. These photons, in contact with the metallic surface [4], produce light emission at the metal surface. The generated wavefronts are diffracted by the metallic grating printed on the glass surface. Figure 4 shows the image of the standard with the superimposed 5 μm pitch grating. Figure 4a is the image obtained with the incoherent yellow light of the microscope illumination system; Figure 4b corresponds to coherent illumination with 635 nm laser light.
Figure 3. Illustrating the observation, performed in the plane of the page, of a beam perpendicular to the plane of the page.
The incoherent light illumination, normal to the plane of the standard, shows the tops and valleys of the saw tooth. The evanescent illumination exhibits an interesting light irradiance in the region of contact with the grating. Figure 5 shows the plot of the light intensity distribution of the coherent image, measured in gray levels at two different scales. It is interesting to evaluate this light distribution.
(a)
(b)
Figure 4a) White light image of the HQC226 standard (5 μm grating superimposed on the specimen); b) Coherent illumination image (5 μm grating superimposed on the standard). The profile of the standard is sketched in red in Figure 4a. The insert in Figure 5 shows the light intensity distribution in the whole field of view. It can be seen that the maximum irradiance corresponds to a very small region of the saw tooth surface, the region where the direct influence of the evanescent field manifests itself. In this region, there is saturation of the camera sensor. The corresponding depth of penetration of the evanescent field, computed utilizing the equations of evanescent fields, is 89.6 nm. Utilizing the geometry of the tooth, it is possible to compute the distance from the point of contact between the glass and the saw tooth to the point of the surface where the depth reaches 89.6 nm; the corresponding value is x = 750 nm. At about twice this distance the intensity begins to decay, reaching a minimum of about 20% of the maximum intensity in the central valley of the saw tooth. The observed irradiance can be interpreted as follows: the amount of energy available in the cavity between the glass surface and the metallic surface is maximum in the region under the influence of the evanescent field and decays very rapidly outside this region.
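The penetration depth quoted above follows from the standard expression for the 1/e decay of an evanescent field beyond a totally reflecting interface. The sketch below evaluates it under assumed values (glass index 1.5, incidence 50°; the actual set-up parameters are not restated in this section), which gives a depth of the order of 90 nm for λ = 635 nm:

```python
import math

def penetration_depth(wavelength_nm, n1, n2, theta_deg):
    """1/e penetration depth of the evanescent field beyond a totally
    reflecting interface:  d = lambda / (4*pi*sqrt(n1^2 sin^2(theta) - n2^2)).
    Valid only above the critical angle (argument of the sqrt positive)."""
    s = (n1 * math.sin(math.radians(theta_deg)))**2 - n2**2
    if s <= 0:
        raise ValueError("angle below the critical angle: no evanescent field")
    return wavelength_nm / (4.0 * math.pi * math.sqrt(s))
```

The depth diverges as the incidence angle approaches the critical angle (about 41.8° for glass/air) and shrinks as the angle grows.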
Figure 5. Intensity distribution received by the CCD camera sensor observing in the direction normal to the plane of the grating.
4.0 High accuracy measurements of surface topography
The next step is to analyze the actual process of encoding the depth information in the electromagnetic field; for this purpose a simplified model based on ray theory is utilized. Figure 6 is a schematic representation used to explain the contouring model.
Fig. 6. Detail of the optical path of the beams diffracted by the grating (a) and equivalent shadow/projection moiré scheme (b).
The figure shows the beam arriving at the inclination θ_i with respect to the standard surface, which is now described as a blazed diffraction grating. The observed wavefronts come from the scattered light and are therefore equivalent to a quasi-normal illumination; that is, the equivalent angle of illumination is 90° − θ_i. According to the equation of diffraction for a blazed grating, the angle of emergence β with respect to the normal to the middle surface of the grating is:
β = arcsin(λ/L_t)   (5)
In this particular case, the wavelength of light λ is 635 nm and the spacing of the grating is L_t = 101 μm (i.e. the saw tooth pitch). By replacing these values in (5), the angle β of the emerging wavefront is 0.360°. Hence, the beam emerges practically orthogonal to the grating surface. As shown in Figure 6, the classical shadow-projection moiré equation for one-beam illumination can be applied, recalling that

w = np / sin θ = np / sin(90° − θ_i)   (6)
where p is the pitch of the grating (in this case p = 5 μm), the direction of observation is normal to the surface of the specimen, and the direction of illumination forms the angle θ = 90° − θ_i. This is in agreement with the formulation developed in Ref. [4].
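Equation (6) can be coded directly; the short sketch below (function name ours) evaluates the depth w per fringe order n for the 5 μm grating:

```python
import math

def moire_depth(n, p_um, theta_i_deg):
    """Depth from fringe order by the one-beam shadow/projection moire relation
    w = n*p / sin(theta), with theta = 90 deg - theta_i, as in Eq. (6)."""
    theta = math.radians(90.0 - theta_i_deg)
    return n * p_um / math.sin(theta)
```

For normal equivalent illumination (θ_i = 0) each fringe order corresponds to one grating pitch of depth; the sensitivity decreases as θ_i grows.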
Figure 7. FT pattern of the 5 μm pitch grating imaged by the CCD. The first harmonic (corresponding to the pitch of 5 μm) and the second harmonic (corresponding to the pitch of 2.5 μm) are visible in the FT spectrum
Figure 8. a) Main geometric dimensions of the saw tooth; b) Relationship between total phase, carrier phase and modulation function.
The height h_t of the saw tooth (Figure 8) is determined by considering that the equivalent grating formed in the process of illumination and projected onto the standard is modulated by the slope of the surface. The intensity distribution of the modulated carrier is (see [5,6]):

I_c(x, y) = I_0c + I_1c cos[2πx/p + Ψ(x, y)]   (7)
The phase of the moiré fringes thus formed can be determined as

Ψ(x, y) = φ_mc(x, y) − φ_c(x, y)   (8)
The phases of the modulated carrier and of the carrier are linear, as indicated in Figure 8b. The frequencies can be extracted from the FT of the image, schematically represented in Figure 9, and in this way it is possible to obtain the modulation function that provides the profile of the saw tooth.
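The FT-based extraction of the modulation function can be illustrated in one dimension. The sketch below (synthetic data, not the standard's images) isolates the first harmonic of a modulated carrier, takes its phase, and subtracts the carrier phase as in Eq. (8):

```python
import numpy as np

# 1-D sketch of recovering the modulation Psi(x) from a modulated carrier
# I(x) = I0 + I1*cos(2*pi*x/p + Psi(x)) by isolating the first harmonic
# in the Fourier spectrum (Takeda-style fringe analysis).
N, p = 2048, 16.0                       # samples and carrier pitch [pixels]
x = np.arange(N)
psi = 0.5 * np.sin(2*np.pi*x/N)         # slowly varying test modulation
I = 2.0 + np.cos(2*np.pi*x/p + psi)     # modulated carrier
F = np.fft.fft(I)
fc = int(round(N / p))                  # carrier frequency bin
mask = np.zeros(N)
mask[fc-40:fc+40] = 1.0                 # band-pass around the first harmonic
analytic = np.fft.ifft(F * mask)        # complex signal ~ 0.5*exp(i*total phase)
total_phase = np.unwrap(np.angle(analytic))
psi_rec = total_phase - 2*np.pi*x/p     # subtract the carrier phase (Eq. (8))
psi_rec -= psi_rec.mean() - psi.mean()  # remove any constant offset
```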
Figure 9. Schematic representation of the different harmonics extracted from the FT spectrum. The carrier frequency fc is a known quantity. The frequency of the modulated carrier fmc is close to the carrier frequency.
On the basis of the performed measurements, the average tooth profile was determined (Figure 10).
Figure 10. Average profile of the roughness standard and reconstruction of the profile.
The value of Ra evaluated, according to ANSI B46.1, from the experimental data gathered with the present advanced digital moiré contouring technique falls in the 3.0175-3.0784 μm range certified by NIST. The average measured pitch is 101.24 μm with a standard deviation of ±0.322 μm. The average measured depth is 6.078 μm: the average value of Ra is hence 3.039 μm, well within the range of NIST's measurements. The difference between the value of roughness measured optically and the average value of roughness indicated by NIST is only 0.231% (i.e. 3.039 μm vs. 3.032 μm).
5.0 Discussion and conclusions
The results presented in this paper show the applicability of evanescent illumination to obtain the topography of rough surfaces. A mathematical model of the process of encoding the depth information in the optical path of the observed wavefronts has been formulated. Although additional theoretical derivations are needed to completely substantiate the foundations of the method, the experimental results support its applicability to surface metrology with a high degree of accuracy. This model, together with the algorithms contained in the software utilized to process the experimental data, provides a powerful tool for the solution of a very important technological problem: the evaluation of industrial surfaces.
6.0 References
[1] Kretschmann E (1974) Die Bestimmung der Oberflächenrauhigkeit dünner Schichten durch Messung der Winkelabhängigkeit der Streustrahlung von Oberflächenplasmaschwingungen. Optics Communications 10: 353-356. [2] Heitmann D (1977) Radiative decay of surface plasmons excited by fast electrons on periodically modulated silver surfaces. Journal of Physics C: Solid State Physics 10: 397-405. [3] Teng YY, Stern EA (1967) Plasmon radiation from metal gratings. Physical Review Letters 19: 511-514.
[4] Sciammarella CA, Lamberti L, Sciammarella FM, Demelio GP, Dicuonzo A, Boccaccio A (2010). Application of plasmons to the determination of surface profile and contact strain distribution. Strain, [5] Sciammarella CA (2003) Overview of optical techniques that measure displacements: Murray Lecture. Experimental Mechanics 43: 1-19. [6] Sciammarella CA, Lamberti L, Sciammarella FM (2005) High accuracy contouring with projection moiré. Optical Engineering 44: Paper No. 093606 (pp. 1−12).
Filters with Noise/Phase Jump Detection Scheme for Image Reconstruction Jing-Feng Weng
Yu-Lung Lo*
Department of Mechanical Engineering, National Cheng Kung University, Tainan, Taiwan
* Corresponding Author: [email protected]
ABSTRACT: Residual noise, speckle noise, and noise at the lateral surface of height discontinuities influence image reconstruction from the wrapped phase map of a 3D object containing height discontinuities. This paper develops two robust filters, namely Filters A and B, in order to resolve these noise problems. Both filters are based on a previously proposed noise/phase jump detection scheme. Filter A is composed of the detection scheme and an adaptive median filter, whereas Filter B replaces detected noise with the median phase value of an N × N mask centered on the noise. The three types of noise are mostly removed by Filter A, and the remaining noise, especially the noise at the lateral surface, is then removed by Filter B. The integration of the two filtering algorithms with phase unwrapping algorithms is proposed for 3D image reconstruction. Two different types of phase unwrapping algorithms are used, namely the path-dependent MACY algorithm and the path-independent cellular automata (CA) algorithm. Note that because Filters A and B remove noise precisely and cleanly, they enable the phase unwrapping algorithms (e.g., MACY or CA) to successfully cross the unwrapping path of height discontinuities and easily obtain successful 3D image reconstructions. ©2011 Optical Society of America
OCIS codes: (100.5088) Phase unwrapping; (100.2000) Digital image processing; (100.6890) Three-dimensional image processing.
Keywords: Speckle noise, Noise in phase map, Height discontinuity, Phase unwrapping
1. Introduction
In the wrapped phase map, three types of noise are common problems: residual noise, speckle noise, and noise at the lateral surface of height discontinuities. Residual noise is induced by environmental effects or by contamination of the optical system and sample [1]. Speckle noise is usually removed or reduced during the experiment [2] or by numerical-filtering algorithms [3]. For microscope interferometers, increasing the depth of field is usually not suitable for 3D nanometer-scale image reconstruction of height discontinuities. Due to the depth-of-field limit and the diffraction limit, height discontinuities usually produce noise in the wrapped phase map [4]. Spatial phase unwrapping algorithms can traditionally be classified as path-dependent (e.g., the MACY algorithm) [5] or path-independent (e.g., the cellular automata (CA) algorithm) [6-8]. In path-dependent algorithms the unwrapping path starts from a fixed pixel and unwraps along a fixed line, so the unwrapping time is short. Unfortunately, if there is noise on the fixed unwrapping paths, the unwrapping error accumulates line by line (path by path). A path-independent phase unwrapping algorithm seeks all possible unwrapping paths between two pixels, so the unwrapping speed is slow. The noise at the lateral surface of the object causes the MACY algorithm to accumulate unwrapping error path by path, while the CA algorithm fails to converge, since the noise at the lateral surface of height discontinuities makes CA miss useful phase unwrapping paths. In the present study, two filtering mechanisms, namely Filter A and Filter B, are developed, based on a detection scheme [9]. Filters A and B can remove the detected noise without smearing the detected 2π edges of phase jumps. Additionally, it is known that the CA algorithm can be implemented on an array processor to decrease runtime [7]. The wrapped phase sub-maps are obtained from an array processor.
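The path-dependent principle can be illustrated in one dimension with Itoh's algorithm (a generic sketch, not the MACY or CA implementation):

```python
import numpy as np

def unwrap_1d(wrapped):
    """Itoh's path-dependent 1-D phase unwrapping: accumulate the wrapped
    phase differences along the path.  A single noisy pixel corrupts every
    pixel after it on the path, which is why noise is filtered first."""
    w = np.asarray(wrapped, dtype=float)
    d = np.diff(w)
    d = (d + np.pi) % (2*np.pi) - np.pi   # re-wrap the differences into [-pi, pi)
    return np.concatenate(([w[0]], w[0] + np.cumsum(d)))
```

This succeeds as long as the true phase changes by less than π between neighboring pixels, which is exactly the condition that noise at height discontinuities violates.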
An additional area in each sub-map is used to reduce the stitching error caused by the array processor. The integration of filtering and phase unwrapping algorithms proposed for image reconstruction is introduced later.
2. Theories of the detection scheme and filtering operations
2.1 Noise/phase jump detection scheme
From [9], the noise/phase jump detection scheme has four comparative phase parameters, S1-S4, which can be expressed as
T. Proulx (ed.), Optical Measurements, Modeling, and Metrology, Volume 5, Conference Proceedings of the Society for Experimental Mechanics Series, DOI 10.1007/978-1-4614-0228-2_33, © The Society for Experimental Mechanics, Inc. 2011
$$
\begin{aligned}
S_1(i,j) &= \left[\tfrac{\phi(i+1,j)-\phi(i,j)-\sigma_{A,B}}{2\pi}\right] + \left[\tfrac{\phi(i+1,j+1)-\phi(i+1,j)+\sigma_{A,B}}{2\pi}\right] + \left[\tfrac{\phi(i,j+1)-\phi(i+1,j+1)-\sigma_{A,B}}{2\pi}\right] + \left[\tfrac{\phi(i,j)-\phi(i,j+1)+\sigma_{A,B}}{2\pi}\right]\\
S_2(i,j) &= \left[\tfrac{\phi(i+1,j)-\phi(i,j)+\sigma_{A,B}}{2\pi}\right] + \left[\tfrac{\phi(i+1,j+1)-\phi(i+1,j)-\sigma_{A,B}}{2\pi}\right] + \left[\tfrac{\phi(i,j+1)-\phi(i+1,j+1)+\sigma_{A,B}}{2\pi}\right] + \left[\tfrac{\phi(i,j)-\phi(i,j+1)-\sigma_{A,B}}{2\pi}\right]\\
S_3(i,j) &= \left[\tfrac{\phi(i+1,j)-\phi(i,j)+\sigma_{A,B}}{2\pi}\right] + \left[\tfrac{\phi(i+1,j+1)-\phi(i+1,j)+\sigma_{A,B}}{2\pi}\right] + \left[\tfrac{\phi(i,j+1)-\phi(i+1,j+1)-\sigma_{A,B}}{2\pi}\right] + \left[\tfrac{\phi(i,j)-\phi(i,j+1)-\sigma_{A,B}}{2\pi}\right]\\
S_4(i,j) &= \left[\tfrac{\phi(i+1,j)-\phi(i,j)-\sigma_{A,B}}{2\pi}\right] + \left[\tfrac{\phi(i+1,j+1)-\phi(i+1,j)-\sigma_{A,B}}{2\pi}\right] + \left[\tfrac{\phi(i,j+1)-\phi(i+1,j+1)+\sigma_{A,B}}{2\pi}\right] + \left[\tfrac{\phi(i,j)-\phi(i,j+1)+\sigma_{A,B}}{2\pi}\right]
\end{aligned}
\qquad (1)
$$
where (i, j) is the pixel position, φ is the phase value, and [ ] indicates a rounding operation. The input parameter $\sigma_{A,B}$ is the threshold of the detection scheme, where $0 < \sigma_{A,B} < \pi$. Note that Filter A uses the parameter $\sigma_A$ and Filter B uses the parameter $\sigma_B$, where $\sigma_{A,B} = \sigma_A = \sigma_B$ in this study. In [9], the conclusion of Eq. (1) is as follows:
$$
\begin{cases}
\text{If } PD < \pi - \sigma_{A,B}, & \text{the pixel } (i,j) \text{ is a good pixel,}\\
\text{If } PD \ge \pi - \sigma_{A,B}, & \text{the pixel } (i,j) \text{ is a bad pixel (i.e., noise),}
\end{cases}
\qquad (2)
$$
where $\pi - \sigma_{A,B}$ is the absolute-maximum phase difference in the whole phase map and PD is the absolute phase difference of any two neighboring pixels. In Eq. (1), by adding or subtracting $\sigma_{A,B}$, the detection scheme guarantees that a phase jump makes more than one of S1-S4 nonzero while the sum of S1-S4 remains zero, which defines a phase jump pixel. The detected noise is marked on the noise map and the phase jump pixels are marked on the phase jump map.
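The practical effect of the criterion in Eq. (2) can be sketched in a few lines. The sketch below is a simplified reading that uses only the wrapped neighboring-pixel difference PD, not the full S1-S4 logic of [9]: a genuine 2π phase jump re-wraps to a small PD and is kept as a good pixel pair, while noise exceeds the threshold.

```python
import numpy as np

def wrapped_diff(a, b):
    """Wrap the raw phase difference a - b back into (-pi, pi]."""
    d = a - b
    return d - 2 * np.pi * np.round(d / (2 * np.pi))

sigma = 2.4                    # detection threshold, 0 < sigma < pi
limit = np.pi - sigma          # absolute-maximum phase difference of Eq. (2)

# A genuine 2*pi phase jump: wrapped values 3.1 and -3.1 differ by ~2*pi raw,
# but the wrapped difference is tiny, so the pixel pair is classified "good".
pd_jump = abs(wrapped_diff(3.1, -3.1))
# Noise: a phase value far from its neighbor inside the same fringe.
pd_noise = abs(wrapped_diff(2.9, 0.1))
print(pd_jump < limit, pd_noise >= limit)   # True True
```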
2.2 Filter A - detection scheme combined with adaptive median filter
Filter A comprises the detection scheme given in Eq. (1) and the adaptive median filter proposed in [10]. Filter A removes the noise in the noise map and preserves the phase jump pixels in the phase jump map. The main concept of the adaptive median filter is that a 5 × 5 mask has five different positions relative to a phase jump, namely far from a phase jump, close to a phase jump toward a higher phase region, close to a phase jump toward a lower phase region, straddling a phase jump toward a higher phase region, and straddling a phase jump toward a lower phase region. Then, according to the identified position and a calculation with the corresponding weighted parameters, the median phase value for the mask-center position is obtained. Filter A then replaces the phase values of detected bad pixels with the median phase value of the mask-center position.
2.3 Filter B - detection scheme combined with noisy pixel replacement mechanism
Filter B is used to remove the noise missed by Filter A during the phase unwrapping process. The phase map is processed pixel by pixel using a sliding N × N mask with its center located at the pixel of interest, i.e., pixel (ic, jc). If pixel (ic, jc) is detected as noise, the phase value of this center pixel is replaced by the median of the detected-good phase values within the mask. 2.4 Array processor with additional area in the CA algorithm The wrapped phase map is implemented on an array processor in order to improve the runtime of the CA unwrapping algorithm [7]. In the array of the wrapped phase map, an additional border area is added to each sub-map in order to improve the quality of the unwrapped results; the original sub-maps are extended by an additional 3 pixels in the row and column directions, respectively. After the individual sub-maps with the additional area are unwrapped by the CA algorithm, the additional areas of the unwrapped phase sub-maps are cropped and the complete unwrapped phase map is reconstructed by stitching together the individual sub-maps.
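The replacement rule of Filter B (Sec. 2.3) can be sketched as follows. The detection step here is again simplified to the PD criterion of Eq. (2), and both pixels of an offending pair are marked, which differs slightly from the full scheme of [9]; the mask size N = 5 is an example value.

```python
import numpy as np

def filter_b(phi, sigma=2.4, N=5):
    """Replace detected-noise pixels with the median of the detected-good
    phase values inside an N x N mask centered on the pixel (Sec. 2.3).
    Detection is a simplified PD criterion after Eq. (2)."""
    def wrap(d):
        return d - 2 * np.pi * np.round(d / (2 * np.pi))

    rows, cols = phi.shape
    limit = np.pi - sigma
    # mark a pixel as noise if the wrapped difference to any 4-neighbor
    # reaches the absolute-maximum phase difference pi - sigma
    bad = np.zeros(phi.shape, dtype=bool)
    for i in range(rows):
        for j in range(cols):
            for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                ni, nj = i + di, j + dj
                if 0 <= ni < rows and 0 <= nj < cols and \
                        abs(wrap(phi[i, j] - phi[ni, nj])) >= limit:
                    bad[i, j] = True
    out = phi.copy()
    h = N // 2
    for i, j in zip(*np.nonzero(bad)):
        i0, i1 = max(0, i - h), min(rows, i + h + 1)
        j0, j1 = max(0, j - h), min(cols, j + h + 1)
        good = phi[i0:i1, j0:j1][~bad[i0:i1, j0:j1]]
        if good.size:                     # median of good pixels only
            out[i, j] = np.median(good)
    return out
```

Because the wrapped difference across a genuine 2π phase jump is small, the filter replaces noisy pixels while leaving the jump edges untouched.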
3. Effective Integration of Filtering and Phase Unwrapping Algorithms for Image Reconstruction
Fig. 1 shows a system that integrates the theories in Section 2 to remove noise for image reconstruction. The flowchart shows three paths. In Path I, Filter A filters the noisy wrapped phase map. The dotted rectangle in the figure indicates re-use of Filter A; the number of iterations of Filter A can be determined by inspecting the noise map from the detection scheme. The noise-reduced phase map is obtained and then unwrapped by the MACY algorithm. Filter B1 is used to remove any noise missed by Filter A together with row-unwrapping error, and Filter B2 is used to remove any noise missed by Filter A together with column-unwrapping error. Finally, the unwrapped phase map (or 3D image reconstruction) is obtained. In Path II, the noise-reduced phase map is unwrapped by the CA algorithm. The uncut wrapped phase map is unwrapped using the cycle of local and global iterations; within the cycle, Filter B1 filters the noise between the global and local iterations. When the cycle of local and global iterations finishes, Filter B2 filters the noise and the unwrapped phase map (or image reconstruction) is obtained. Path III cuts the wrapped phase map into several sub-maps, each of which is unwrapped using global and local iterations. The unwrapped phase map is obtained when all unwrapped sub-maps are stitched together and then filtered by Filter B2.
Fig. 1. Flowchart for image reconstruction.
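To make the path dependence of Path I concrete, here is a minimal row-by-row unwrapper in the spirit of the MACY algorithm (Itoh's step integration along fixed lines; the actual MACY algorithm differs in detail, so this is only an illustrative stand-in):

```python
import numpy as np

def unwrap_1d(wrapped):
    """Itoh's method: re-wrap each step, then integrate along the line."""
    d = np.diff(wrapped)
    d -= 2 * np.pi * np.round(d / (2 * np.pi))
    return np.concatenate(([wrapped[0]], wrapped[0] + np.cumsum(d)))

def unwrap_2d(wrapped):
    """Unwrap the first column, then every row: a fixed path, which is why a
    noisy pixel on the path propagates its error line by line (Sec. 1)."""
    out = np.empty_like(wrapped)
    out[:, 0] = unwrap_1d(wrapped[:, 0])
    for i in range(wrapped.shape[0]):
        row = unwrap_1d(wrapped[i])
        out[i] = row - row[0] + out[i, 0]  # re-anchor to the first column
    return out
```

On a clean wrapped map this recovers the continuous phase exactly; a single bad pixel on a row corrupts every later pixel of that row, which is exactly the error-accumulation behavior Filters B1 and B2 are meant to suppress.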
4. Simulation results The simulations were performed in MATLAB on a personal computer equipped with an AMD Athlon™ 64 X2 4400+ 2.31 GHz dual-core processor and 2 GB of RAM. Fig. 2(a) shows the noisy wrapped phase map, which was converted from five simulated interferograms of height discontinuities. The following three types of noise were generated. Speckle noise, namely Noise A, was generated using the "imnoise" function in MATLAB with an intensity parameter value of 0.08, written as "imnoise(each of the five interferograms, 'speckle', 0.08)". Residual noise, namely Noise B, was produced by the "imnoise" function using salt-and-pepper noise with an intensity parameter value of 0.35, written as "imnoise(each of the five interferograms, 'salt & pepper', 0.35)". Finally, noise at the lateral surface of height discontinuities, namely Noise C, was produced by a written program together with the same function as for Noise B, with an intensity parameter value of 0.01. Fig. 2(b) shows the cross-section at column pixel 125. In Figs. 2(a) and 2(b), the ellipse indicates one phase jump position.
Fig. 2. (a) Noisy wrapped phase map. (b) Cross-section at column pixel 125 in the noisy wrapped phase map.
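For readers reproducing the simulation without MATLAB, the two imnoise variants can be approximated in NumPy. This follows MATLAB's documented definitions (speckle adds zero-mean uniform multiplicative noise of the given variance; salt & pepper corrupts the given fraction of pixels), but the random generator and details are of course not bit-identical to MATLAB's.

```python
import numpy as np

rng = np.random.default_rng(0)

def speckle(img, var=0.08):
    """NumPy stand-in for imnoise(img, 'speckle', var):
    J = I + n*I, with n zero-mean uniform noise of variance var."""
    half = np.sqrt(3.0 * var)              # uniform on [-half, half]
    n = rng.uniform(-half, half, img.shape)
    return img + img * n

def salt_pepper(img, density=0.35):
    """NumPy stand-in for imnoise(img, 'salt & pepper', density):
    a fraction `density` of pixels is forced to 0 or 1."""
    out = img.copy()
    mask = rng.random(img.shape) < density
    out[mask] = rng.integers(0, 2, img.shape)[mask].astype(float)
    return out
```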
4.1 Application of Filter A to noisy wrapped phase map In [9], a suitable parameter value of $\sigma_A$ is 2.4 for the noise map and phase jump map. Fig. 2(a) is a noisy wrapped phase map and is the input to the flowchart in Fig. 1. After the noisy wrapped phase map is filtered by Filter A twice, the noise-reduced wrapped phase map of Fig. 3(a) is obtained; Fig. 3(b) shows the cross-section at column pixel 125. The detection scheme is used again to detect the noise and phase jump positions of the noise-reduced wrapped phase map, and the two maps are illustrated in Figs. 4(a) and 4(b), respectively. Clearly, the majority of the noise (Noise A and Noise B) in the noise map of Fig. 4(a) has been removed. The noise missed by Filter A is located at the lateral surface of the height discontinuities (Noise C). The next subsection demonstrates that Filter B solves the problem of Noise C.
Fig. 3. (a) Noise-reduced wrapped phase map after filtering by Filter A with a noise detection threshold setting of $\sigma_A$ = 2.4. (b) Cross-section at column pixel 125 in the noise-reduced wrapped phase map in Fig. 3(a).
Fig. 4. Filtering results of the detection scheme after applying Filter A twice. (a) Noise map and (b) phase jump map.
4.2 Application of Filter B to noise-reduced wrapped phase map 4.2.1 Path I of the flowchart Fig. 5(a) shows the successful unwrapped phase map obtained by performing Path I, which contains Filters A and B. Fig. 5(b) shows the unsuccessful unwrapped phase map obtained by the MACY algorithm with Filter A but without Filter B. An inspection of Figs. 5(a) and 5(b) shows that Filters A and B enable successful image reconstruction of a 3D object containing height discontinuities.
Fig. 5. (a) Unwrapping result of Path I (with Filter B). (b) Unwrapping result of Path I without Filter B.
4.2.2 Path II of the flowchart Fig. 6(a) presents the unwrapping result of Path II, which uses Filters A and B. The successful result shows that Filter B removes the noise at the lateral surface of the height discontinuities and therefore enables the CA algorithm to converge. When Path II is used without Filter B to reconstruct the noisy wrapped phase map, Noise C prevents the CA algorithm from converging, as illustrated in Fig. 6(b).
Fig. 6. (a) Unwrapping result of Path II (with Filter B). (b) Unwrapping result of Path II without Filter B.
4.2.3 Path III of the flowchart Fig. 7(a) shows the successful unwrapping result for Path III with the additional area. Fig. 7(b) shows the cross-sections at column pixel 125 and row pixel 203.
Fig. 7. (a) Unwrapping result of Path III. (b) Cross-sections at column pixel 125 and row pixel 203.
5. Conclusions
Two filtering mechanisms, Filter A and Filter B, were proposed. Filter A comprises the noise/phase jump detection scheme [9] and the adaptive median filter proposed by Capanni et al. [10], and Filter B replaces the detected noise with the median phase value of the pixels within an N × N mask centered on the noise. Since the detection scheme preserves the phase jumps marked on the phase jump map and detects the noise marked on the noise map, Filter A can be reused to remove the majority of the three types of noise without smoothing the phase jumps. Filter B is designed to eliminate any noise missed by Filter A during the unwrapping procedure. Because Filters A and B are robust and effective, especially at removing noise at lateral surfaces, the two filtering algorithms enable the unwrapping path to cross the height discontinuities and obtain successful image reconstructions. The simulation results show that Path I, Path II, and Path III each reconstruct the height discontinuities successfully. Therefore, the proposed Paths I-III are robust and precise for the noisy wrapped phase map of a 3D object containing height discontinuities. Acknowledgements This study was financially supported by the National Science Council of Taiwan under grant NSC 98-2221-E-006-053-MY3. References
1. Yamaki, R. and Hirose, A., "Singularity-Spreading Phase Unwrapping," IEEE Transactions on Geoscience and Remote Sensing 45(10), 3240-3251 (2007).
2. Pouet, B.F. and Krishnaswamy, S., "Technique for the removal of speckle phase in electronic speckle interferometry," Opt. Lett. 20(3), 318-320 (1995).
3. Aebischer, H.A. and Waldner, S., "A simple and effective method for filtering speckle-interferometric phase fringe patterns," Optics Communications 162(4-6), 205-210 (1999).
4. Saldner, H.O. and Huntley, J.M., "Temporal phase unwrapping: Application to surface profiling of discontinuous objects," Appl. Opt. 36(13), 2770-2775 (1997).
5. Macy, W.W., Jr., "Two-dimensional fringe-pattern analysis," Applied Optics 22(23), 3898-3901 (1983).
6. Ghiglia, D.C., Mastin, G.A., and Romero, L.A., "Cellular-automata method for phase unwrapping," J. Opt. Soc. Am. A 4, 267-280 (1987).
7. Spik, A. and Robinson, D.W., "Investigation of the cellular automata method for phase unwrapping and its implementation on an array processor," Optics and Lasers in Engineering 14, 25-37 (1991).
8. Chang, H.Y., Chen, C.W., Lee, C.K., and Hu, C.P., "The Tapestry Cellular Automata phase unwrapping algorithm for interferogram analysis," Optics and Lasers in Engineering 30, 487-502 (1998).
9. Weng, J.F. and Lo, Y.L., "Robust detection scheme on noise and phase jump for phase maps of objects with height discontinuities - theory and experiment," Optics Express 19(4), 3086-3105 (2011).
10. Capanni, A., Pezzati, L., Bertani, D., Cetica, M., and Francini, F., "Phase-shifting speckle interferometry: a noise reduction filter for phase unwrapping," Opt. Eng. 36(9), 2466-2472 (1997).
An Instantaneous Phase Shifting ESPI System for Dynamic Deformation Measurement T. Y. Chen1,2 and C. H. Chen1 1 Department of Mechanical Engineering 2 Center for Micro/Nano Science and Technology National Cheng-Kung University Tainan, Taiwan, 70101, ROC
ABSTRACT In this study, an ESPI measurement system capable of grabbing four phase-shifted interferometric images instantaneously (or simultaneously) is developed for out-of-plane dynamic deformation measurement. A new polarization phase-shifting interferometric system is designed. The constructed system allows the four phase-shifted images to be grabbed by four CCD cameras simultaneously, and the digital image correlation method is applied to correct the pixel position mismatch among the four images. Thereafter, the phase at each pixel can be calculated to obtain the out-of-plane displacement. A test of the system on an edge-clamped circular plate heated from behind is demonstrated. The results reveal that the proposed system is applicable to dynamic deformation measurement. 1. INTRODUCTION Electronic speckle pattern interferometry (ESPI) integrates optical, image processing, and computer systems, avoids time-consuming and inconvenient wet processing, and has the advantages of being a non-contact, full-field, and real-time measurement technique. It is a very useful tool for R&D and testing to verify analytical and numerical results. Combined with the phase-shifting technique, ESPI measurement can achieve more accurate results. In the past, stepping motors or piezoelectric transducers were normally used to move the reference surface or test specimen, or to rotate the polarizer, to obtain interference fringe images of different phases. However, this kind of practice takes a long time and is susceptible to environmental effects such as ambient vibration or air turbulence, which result in measurement errors. To facilitate the applicability of ESPI to dynamic deformation measurement, a system that is capable of acquiring several phase-shifted images instantaneously or simultaneously is required. In order to reduce the influence of the environment and increase accuracy and stability, the instantaneous phase-shifting interferometry (IPSI) technique is introduced.
Using IPSI, the phase-shifted interference patterns can be captured simultaneously to determine the phase or surface profile of a specimen in a short period of time by applying phase unwrapping. IPSI can be achieved by several kinds of techniques, such as multiple CCD cameras [1, 2], micro-retarder arrays [3], and phase masks [4]. However, the above methods are applicable only to objects having mirror-like surfaces. For coarse object surfaces, electronic speckle pattern interferometry has been successfully applied to transient vibration measurement of coarse-surface objects and the deformation of metals [5-7]. In this study, an ESPI measurement system capable of grabbing four phase-shifted interferometric images instantaneously (or simultaneously) was developed for out-of-plane dynamic deformation measurement. A new polarization phase-shifting interferometric system was designed and constructed, and the digital image correlation method was adopted to correct the pixel position mismatch among the four images. Thereafter, the phase at each pixel can be calculated. A test of the system on an edge-clamped circular plate heated from behind is demonstrated. 2. INSTANTANEOUS PHASE-SHIFTING INTERFEROMETER The instantaneous phase-shifting interferometer setup is shown in Fig. 1, where QWP45° is a 45-degree quarter-wave
T. Proulx (ed.), Optical Measurements, Modeling, and Metrology, Volume 5, Conference Proceedings of the Society for Experimental Mechanics Series, DOI 10.1007/978-1-4614-0228-2_34, © The Society for Experimental Mechanics, Inc. 2011
Fig. 1 The setup of instantaneous polarized phase-shifting interferometer
plate; PBS is a polarized beam-splitter; BS is a beam-splitter; and P0°, P45°, P-45°, and P90° are polarizers with 0°, 45°, -45°, and 90° transmission axes, respectively. The light emerging from a linearly polarized He-Ne laser passes through the spatial filter and lens L4 and becomes a collimated beam. The PBS transmits the P-wave (transverse magnetic wave) and reflects the S-wave (transverse electric wave). Both the S-wave and the P-wave pass through a quarter-wave plate, are reflected by the reference or object surface, pass through the same quarter-wave plate again, and then return through the polarized beam-splitter. Thereafter they pass through another quarter-wave plate at the same time. Finally, the two waves pass through three beam-splitters and are divided into four images fed into the four CCD cameras. Applying Jones vectors, the light intensities captured by the four CCD cameras can be represented as
$$I_1 = \tfrac{1}{2}\left[a^2 + b^2 + 2ab\cos\psi\right] \qquad (1)$$
$$I_2 = \tfrac{1}{2}\left[a^2 + b^2 + 2ab\cos\!\left(\psi - \tfrac{\pi}{2}\right)\right] \qquad (2)$$
$$I_3 = \tfrac{1}{2}\left[a^2 + b^2 + 2ab\cos\left(\psi - \pi\right)\right] \qquad (3)$$
$$I_4 = \tfrac{1}{2}\left[a^2 + b^2 + 2ab\cos\!\left(\psi - \tfrac{3\pi}{2}\right)\right] \qquad (4)$$
where a and b are the amplitudes of the light reflected by the object surface and the reference surface, respectively, and ψ is an unknown phase related to the height difference between the object and reference surfaces. Similarly, the intensities captured by the four CCD cameras after the specimen is deformed are given in Eqs. (5)-(8).
$$I_1' = \tfrac{1}{2}\left[a^2 + b^2 + 2ab\cos(\psi + \Delta\psi)\right] \qquad (5)$$
$$I_2' = \tfrac{1}{2}\left[a^2 + b^2 + 2ab\cos\!\left(\psi + \Delta\psi - \tfrac{\pi}{2}\right)\right] \qquad (6)$$
$$I_3' = \tfrac{1}{2}\left[a^2 + b^2 + 2ab\cos\left(\psi + \Delta\psi - \pi\right)\right] \qquad (7)$$
$$I_4' = \tfrac{1}{2}\left[a^2 + b^2 + 2ab\cos\!\left(\psi + \Delta\psi - \tfrac{3\pi}{2}\right)\right] \qquad (8)$$
where Δψ is the phase difference caused by the deformation of the specimen. From Eqs. (1)-(8), the phases of the specimen before and after deformation can be obtained from the following equations, respectively.
$$\psi = \tan^{-1}\left[\frac{I_2 - I_4}{I_1 - I_3}\right] \qquad (9)$$

$$\psi + \Delta\psi = \tan^{-1}\left[\frac{I_2' - I_4'}{I_1' - I_3'}\right] \qquad (10)$$

After subtraction, the phase difference Δψ can be obtained from Eq. (11):

$$\Delta\psi = \tan^{-1}\left[\frac{I_2' - I_4'}{I_1' - I_3'}\right] - \tan^{-1}\left[\frac{I_2 - I_4}{I_1 - I_3}\right] \qquad (11)$$
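Equations (1)-(11) can be checked numerically. The sketch below uses arbitrary test values for a, b, ψ, and Δψ, and arctan2 in place of tan⁻¹ so that the quadrant is preserved:

```python
import numpy as np

def four_step(a, b, psi):
    """Intensities of Eqs. (1)-(4): quarter-wave steps of -pi/2 in phase."""
    return [0.5 * (a**2 + b**2 + 2 * a * b * np.cos(psi - k * np.pi / 2))
            for k in range(4)]

def phase_of(I):
    """Eq. (9): psi = atan[(I2 - I4)/(I1 - I3)], with arctan2 for the quadrant."""
    I1, I2, I3, I4 = I
    return np.arctan2(I2 - I4, I1 - I3)

a, b = 1.0, 0.8                       # test amplitudes (arbitrary)
psi, dpsi = 0.7, 0.4                  # test phases (arbitrary)
before = four_step(a, b, psi)         # Eqs. (1)-(4)
after = four_step(a, b, psi + dpsi)   # Eqs. (5)-(8)
recovered = phase_of(after) - phase_of(before)   # Eq. (11)
print(round(float(recovered), 6))     # 0.4
```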
3. DIGITAL IMAGE CORRELATION In order to align the interference images pixel to pixel, the digital image correlation method based on the following equation is adopted:

$$C(\tilde{p}) = \frac{\sum_{s \in S}\left[G_0(s) - G_d(s')\right]^2}{\sum_{s \in S} G_0(s)^2} \qquad (12)$$

where $G_0(s)$ is the gray level of the sub-image before deformation and $G_d(s')$ is the gray level of the same sub-image after deformation. A minimum value of $C(\tilde{p})$ can be obtained by the least-squares method when $G_0(s) = G_d(s')$. For better correlation of the phase-shifted images, both translation and rotation of the test specimen should be considered.
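A minimal integer-pixel version of this search can be written directly from Eq. (12). Translation only; the rotation term and sub-pixel least-squares refinement discussed above are omitted, and the search window size is an assumption:

```python
import numpy as np

def dic_shift(g0, gd, max_shift=5):
    """Integer-pixel DIC: minimize the coefficient C of Eq. (12) over
    candidate shifts (u, v). gd must extend g0 by max_shift on each side."""
    best, best_c = (0, 0), np.inf
    norm = np.sum(g0.astype(float) ** 2)
    h, w = g0.shape
    for u in range(-max_shift, max_shift + 1):
        for v in range(-max_shift, max_shift + 1):
            sub = gd[max_shift + u:max_shift + u + h,
                     max_shift + v:max_shift + v + w]
            c = np.sum((g0.astype(float) - sub) ** 2) / norm   # Eq. (12)
            if c < best_c:
                best, best_c = (u, v), c
    return best
```

At the true displacement the residual vanishes, so the minimum of C identifies the position mismatch between two camera images.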
4. EXPERIMENTAL RESULTS AND DISCUSSION An experimental system, shown in Fig. 2, was constructed according to Fig. 1 for deformation measurement. The light source is a He-Ne laser of wavelength 633 nm. The specimen is a circular PMMA plate with a diameter of 48 mm and a thickness of 3 mm, edge-clamped and heated from behind to test the system. Since the light is collimated, the resolution is 6.46 μm per pixel for the CCD camera chips (640 × 480 pixels) used. The experimental flowchart is shown in Fig. 3. First, four marked images were captured by the four CCD cameras. The digital image correlation method was then applied to calculate the position mismatch among the four CCD cameras. Using the position mismatch values, the speckle images can be corrected to align pixel to pixel. Thereafter, the phase values of all pixels in the images were calculated by Eqs. (9)-(11) for further unwrapping. Fig. 4 shows the speckle images of the specimen before heating and after being heated 0.2 °C higher at a heating rate of 0.03 °C/sec. The determined position mismatch values are given in Table 1, in which u, v, and Ψ represent the mismatch values in the x-direction, the y-direction, and the in-plane rotation, respectively. Subtracting image 4(a) from image 4(e), an interference fringe due to the temperature change can be observed, as shown in Fig. 5(a). The calculated wrapped phase map is shown in Fig. 5(b). It can be seen that the wrapped phase map agrees well with the interference fringe pattern. Therefore it is feasible to measure dynamic deformation with the proposed system. However, the quality of the phase map is not good enough for direct phase unwrapping due to speckle noise on the map. 5. CONCLUSION In this study, an instantaneous phase-shifting electronic speckle pattern interferometry system was developed based on polarization phase shifting.
The main advantage is that it can capture multiple phase-shifted speckle images at the same time using four CCD cameras; it therefore reduces the measurement time and the influence of the external environment, and is thus applicable to dynamic measurement. Testing the system on a circular plate demonstrates the feasibility of the developed system. Further study on unwrapping the phase is required to make the system more useful.
Fig. 2 Instantaneous ESPI experiment setup
Fig. 3 Experiment flowchart: capture symbolized (marked) images; capture undeformed speckle images; apply DIC to obtain the position mismatch; capture deformed speckle images; use the position mismatch values to correct the positions of the speckle images; calculate the phase values and unwrap the phase to obtain the deformation of the specimen.
Fig. 4 Four phase-shifted speckle images, (a)-(d) before heating, (e)-(h) after being heated 0.2 °C higher.
Fig. 5 (a) ESPI fringe pattern, and (b) phase map of (a) as calculated from Fig. 4.

Table 1 The image position mismatch determined by DIC for the four images.

Image No.    u (pixel)    v (pixel)    Ψ (degree)
0             0            0           0
1            -1.24        -0.28        0.3
2            -0.17        -1.70        0.8
3            -0.29         0.55        0.2
ACKNOWLEDGEMENT This work is supported by the National Science Council, Republic of China, under contract no. NSC-99-2221-E-006-025.
REFERENCES
[1] Koliopoulos, C.L., "Simultaneous phase shift interferometer," Proc. SPIE 1531, 119-127 (1991).
[2] Chen, T.Y. and Du, Y.L., "One-shot surface profile measurement using polarized phase-shifting," International Conference on Optical Instruments and Technology, China (2009).
[3] Millerd, J.E., Brock, N.J., Hayes, J.B., North-Morris, M.B., Novak, M., and Wyant, J.C., "Pixelated phase-mask dynamic interferometer," Proc. SPIE 5531, 304-314 (2004).
[4] Garcia, B.B., Moore, A.J., Perez-Lopez, C., Wang, L., and Tschudi, T., "Spatial phase-stepped interferometry using a holographic optical element," Opt. Eng. 38, 2069-2074 (1999).
[5] Madjarova, V., Toyooka, S., Widiastuti, R., et al., "Dynamic ESPI with subtraction-addition method for obtaining the phase," Optics Communications 212, 35-43 (2002).
[6] Sun, P., "Spatial phase-shift technique in large image-shearing electronic speckle pattern interferometry," Optical Engineering 46(2), 025602 (2007).
[7] Tay, C.J., Quan, C., and Chen, W., "Dynamic measurement by digital holographic interferometry based on complex phasor method," Optics & Laser Technology 41, 172-180 (2009).
Development of Linear LED Device for Shape Measurement by Light Source Stepping Method
Yohei OURA1, Motoharu FUJIGAKI2, Akihiro MASAYA2, and Yoshiharu MORIMOTO3 1 Graduate School of Systems Engineering, Wakayama University, 930 Sakaedani, Wakayama 640-8510, Japan 2 Department of Opto-Mechatronics, Faculty of Systems Engineering, Wakayama University, 930 Sakaedani, Wakayama 640-8510, Japan 3 Moire Institute Inc., 2-1-4-840 Hagurazaki, Izumisano, Osaka 598-0046, Japan
ABSTRACT Compact, low-cost equipment for 3D shape measurement is required in a wide range of industrial fields. We previously proposed shape measurement by a light source stepping method to meet this requirement. The method is one of the grating projection methods. In this method, multiple point light sources and a Ronchi grating are used as a grating projector. Phase shifting is performed by switching the light source position. With this method, shape measurement equipment can be produced compactly and at low cost because no phase-shifting mechanism is needed. The measurement speed is also very fast because the LED light sources can be switched in a very short time. The grating can be projected without defocusing at any position because there is no focusing lens in the projector. In this method, the emitting width of the point light source must be narrower than the pitch of the Ronchi grating. Sufficient brightness is also necessary to measure the shape accurately. Therefore, in this study, we develop a linear LED device for shape measurement by the light source stepping method. The effectiveness of the device is evaluated with an experimental shape measurement test. INTRODUCTION Real-time, accurate 3D shape measurement is requested in industry, and it is still difficult to satisfy both real-time and accurate measurement. In most conventional shape measurement methods, the optical system is modeled and the parameters of the model are obtained through a calibration process that calculates the geometrical parameters of optical devices, such as the positions of the lens centers of a camera and a projector. However, the model cannot contain all of the information of the optical system, such as lens distortion, intensity error of the projected grating, and brightness linearity of the projector and camera; the missing information causes measurement errors. Furthermore, it is time-consuming to calculate the spatial coordinates using the parameters.
In order to obtain a 3D shape with the grating projection method, the authors previously proposed the whole-space tabulation method (WSTM) [1-4]. The relationship between the coordinates and the phase of the grating recorded at each pixel of a camera is obtained beforehand, by experiment, as calibration tables in a three-dimensional space. The analysis is therefore very fast, because the coordinates are looked up in the calibration tables from the phase information at each pixel without any complex calculation. It provides fine resolution even when the phase distribution of the grating is not linear. In this paper, the tabulation method is extended to the case of inaccurate phase shifting of a grating. A grating projector with five light-emitting diode (LED) line light sources is newly developed. Phase shifting usually uses a grating projector with a high-resolution stage or a liquid crystal display (LCD) projector, which is expensive and limits the speed of phase shifting. The light-power efficiency of an LED is very high, the size of the light source is very small, and it is easy and very fast to control the power and the on/off timing. The five LED line light sources are set in front of the grating, and each LED line is switched on during its corresponding one-fifth cycle of phase shifting, in synchronization with the phase shifting and the sequential recording with the camera. Using the LED light sources for a grating projector, it is possible to perform phase shifting without any moving devices, so the projector is low cost and high speed. We call the method the 'light source stepping method.' Even when the positions of the LED light sources are not very accurate, the error is almost cancelled by using the calibration tables obtained by the WSTM with the same experimental setup.
T. Proulx (ed.), Optical Measurements, Modeling, and Metrology, Volume 5, Conference Proceedings of the Society for Experimental Mechanics Series, DOI 10.1007/978-1-4614-0228-2_35, © The Society for Experimental Mechanics, Inc. 2011
In this paper, the theory of the phase distribution of the LED light source stepping method is presented, together with a shape measurement system using the WSTM and the light source stepping method and some experimental results of shape measurement using the system. PRINCIPLE Grating projection method In shape measurement using the grating projection method, phase analysis of the projected grating provides accurate results. Figure 1 shows a schematic system for the grating projection method. A grating with a cosinusoidal brightness distribution is projected by a projector. The grating projected on an object is deformed according to the shape of the object. The deformed grating is recorded by a camera and analyzed to obtain the shape. As the analysis method for the deformed grating, the phase-shifting method, which is the most popular accurate method, is used as described in the next section. Phase-shifting method When a grating with a cosinusoidal brightness distribution is projected or displayed on a reference plane or an object, and the phase of the grating is shifted N times per cycle, the k-th phase-shifted grating image can be expressed as

$$I_k(x, y, z) = I_a(x, y, z)\cos\left[\theta(x, y, z) + \frac{2\pi k}{N}\right] + I_b(x, y, z) \qquad (1)$$

where $I_b(x, y, z)$ represents the background brightness in the image, which is insensitive to the change in phase, $I_a(x, y, z)$ represents the amplitude of the grating brightness, and θ(x, y, z) is the initial phase value. There are many methods to analyze the phase [5-7]. The phase distribution of the grating pattern can be obtained as follows [8, 9]:

$$\tan\theta(x, y, z) = -\frac{\sum_{k=0}^{N-1} I_k(x, y, z)\sin\left(\frac{2\pi k}{N}\right)}{\sum_{k=0}^{N-1} I_k(x, y, z)\cos\left(\frac{2\pi k}{N}\right)} \qquad (2)$$

Usually N is selected as 3 or 4. If the number N of phase-shifted patterns is larger, the phase analysis becomes more accurate. In this study, N = 5 is selected.
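Eq. (2) with the N = 5 steps used in this study can be verified with synthetic intensities (the amplitude and background values below are illustrative; arctan2 resolves the quadrant):

```python
import numpy as np

N = 5
theta_true = 1.2                          # phase to be recovered (test value)
k = np.arange(N)
# Eq. (1) at one pixel: five images with phase steps of 2*pi*k/N
I = 0.9 * np.cos(theta_true + 2 * np.pi * k / N) + 1.5   # Ia = 0.9, Ib = 1.5

num = np.sum(I * np.sin(2 * np.pi * k / N))
den = np.sum(I * np.cos(2 * np.pi * k / N))
theta = np.arctan2(-num, den)             # Eq. (2), quadrant-safe
print(round(float(theta), 6))             # 1.2
```

Because the five steps are equally spaced over a full cycle, the background term cancels exactly in both sums, which is why the recovery is exact for noise-free data.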
Figure 1 Schematic view of grating projection system
(a) Table of phase θ and x (b) Table of phase θ and y (c) Table of phase θ and z Figure 2 Schematic view of calibration tables to obtain the x, y, and z coordinates from the phase θ Whole-space Tabulation Method (WSTM) Figure 1 also shows a schematic view of a shape measurement system by grating projection and explains the principle of the calibration method using multiple reference planes. An LCD panel serving as a reference plane is set on a linear stage. The LCD panel is covered with a scattering film. The scattering film functions as a screen when a grating pattern is projected from the projector in order to build the calibration tables relating the phase θ to the z coordinate; it also functions as a backward screen when a grating pattern is displayed on the LCD panel to determine the x and y coordinates. The LCD reference plane, installed perpendicular to the z-direction, is translated in the z-direction step by step. A camera and a projector are arranged and fixed in front of the reference plane. The grating is projected on the reference plane at first, and later it is also projected on the object to be measured. By translating the reference plane along the z-axis, a pixel of the camera records the intensities at the points P0, P1, P2, ..., PN on the reference planes R0, R1, R2, ..., RN, respectively. Also, by recording the phase-shifted grating images, the phase distribution along the view line L of a pixel of the camera is obtained. In order to obtain the x and y coordinates on the reference plane, the phase-shifted gratings displayed on the LCD panel are captured by the camera. From these phase-shifted images, the calibration tables are formed to obtain the x, y, and z coordinates from the phase θ at each pixel, as shown in Fig. 2. In the measurement procedure, an object is placed between the reference planes R0 and RN.
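The per-pixel tabulation and lookup can be sketched in a few lines. The numbers below are hypothetical; in the real system the table is built from phases measured on the translated reference planes, and the phase-z relation may be nonlinear but monotonic:

```python
import numpy as np

# Calibration step: phase theta recorded at one camera pixel for reference
# planes at known z positions (monotonic, possibly nonlinear).
z_ref = np.linspace(0.0, 50.0, 11)              # mm, stage positions (made up)
theta_ref = 0.25 * z_ref + 0.002 * z_ref**2     # "measured" phase (made up)

def z_from_phase(theta):
    """Measurement step of the WSTM: look up z from the phase by
    interpolating the per-pixel calibration table."""
    return np.interp(theta, theta_ref, z_ref)

# A phase measured on the object maps straight to a z coordinate:
z = z_from_phase(0.25 * 20.0 + 0.002 * 20.0**2)
print(round(float(z), 3))                       # 20.0
```

Because the lookup replaces any geometric model of the projector and camera, lens distortion and nonlinearity are absorbed into the table itself, which is the key idea of the WSTM.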
Phase-shifted gratings are projected onto the object and the phase distributions of the grating are analyzed from the phase-shifted grating images. The 3-D coordinates at each pixel are obtained from the phase value in high-speed by referring the calibration tables for each pixel. This method is called the WSTM as mentioned already. It excludes the effect of lens distortion and intensity error of the projected grating in measurement results theoretically. Tabulation makes short-time measurement possible because the 3D coordinates are obtained by looking at the calibration tables from the phase at each pixel point and it does not require any time-consuming complex calculation. Light source stepping method The light source stepping method is one of the grating projection methods. Unlike the other grating projection methods, the light source stepping method is capable to get a phase by switching the light sources position. The light source stepping method uses point light sources. Because of this, the light-stepping method does not need the lens for focus. A cosign waved grating is projected in arbitrary point P (xP, zP). Following this, also the phase of point G (xG, 0) which is on a grating grass is cosigning wave. Now on, the phase θ (xP, zP) in arbitrary point P (xP, zP) will derived. The following equations are basic equations of light source stepping method. Every equations work out in every y direction. In Fig. 1, the grating pitch is p and the coordinate of light source is L (xL, zL). The phase θG (x) in the point G (xG, 0) is expressed in Eq. (3). 2π θ G ( x) = x (3) p Expression of straight Line PL shown in Fig. 3 is expressed in Eq. (4). x − xL x= P (z − z P ) + x p (4) zP − zL From the Eq. (4), the x coordinate of point G (xG, 0) is expressed in Eq. (5).
xG = −zP (xP − xL)/(zP − zL) + xP (5)

The phase θG (x) at the point G (xG, 0) is obtained by substituting Eq. (5) into Eq. (3) as follows.

θG(xG) = (2π/p) (−zP (xP − xL)/(zP − zL) + xP) (6)

The phase θG (x) at the point G (xG, 0) is equal to the phase θ (xP, zP) at the arbitrary point P (xP, zP). Therefore, the phase θ (xP, zP) at the arbitrary point P (xP, zP) is obtained as Eq. (7).

θ(xP, zP) = (2π/p) (−zP (xP − xL)/(zP − zL) + xP) (7)
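Equations (5)-(7) are straightforward to evaluate numerically. The following sketch implements them for a single point; the coordinates and pitch values used in the example are made up for illustration (units assumed to be mm):

```python
import math

def phase_at_point(x_p, z_p, x_l, z_l, pitch):
    """Phase of the projected grating at a point P(x_p, z_p) for a point
    light source at L(x_l, z_l) and a grating of pitch p lying at z = 0.
    Implements Eqs. (5) and (7)."""
    # Eq. (5): the line PL intersects the grating plane at G(x_g, 0)
    x_g = -z_p * (x_p - x_l) / (z_p - z_l) + x_p
    # Eq. (3) evaluated at x_g, i.e. Eq. (7)
    return 2 * math.pi / pitch * x_g

# Moving the light source (as the linear LED device does) changes the phase:
phi_a = phase_at_point(10.0, 50.0, 0.00, 100.0, 0.5)  # LED line 1
phi_b = phase_at_point(10.0, 50.0, 2.63, 100.0, 0.5)  # LED line 2, 2.63 mm away
```

Note that the phase step phi_b − phi_a depends on zP, which is exactly the z-dependence of the shift amount discussed later for the LED line projector.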
Figure 3 Arbitrary point P (x, z) in the phase of the light source EQUIPMENT Linear LED device and optical system Previously, we proposed a phase-shifting method using three fixed LED point light sources [10]. However, the positions of the three LED point light sources were set manually and their output power was weak, so the positions were not very accurate, although the WSTM cancelled the resulting error. In this paper, a new grating projector using a new linear LED device with five LED lines is proposed. LED chips can now be arrayed on a board to form an LED line, and in this study a special linear LED device was developed in which each line can be switched on and off separately and quickly. Figure 4 shows the linear LED device; in this figure, the rightmost LED line is switched on and the other lines are switched off. There are five parallel lines with a 2.63 mm pitch. Each line has 10 LED chips. Each chip is a 350 μm by 350 μm square, and the height of the LED line is 4.5 mm. Figure 5 shows a photograph of the optical system consisting of the linear LED device, a grating and a camera.
Figure 4 Linear LED device. The rightmost LED is switched on
Figure 5 Optical system using linear LED line device, grating and camera
Shape measurement system using the linear LED device The light is emitted through a grating on a glass plate and the enlarged grating is projected onto a reference plane or onto an object, as shown in Fig. 6. By changing which LED line is switched on, the phase of the projected grating is shifted at every point of the reference plane or the object. If a conventional grating projector such as a liquid crystal projector is used, the phase-shift amount is the same over the whole field. With this LED line projector, however, the phase-shift amount depends on the position z, although it is constant at a given z when the LED device, the grating and the reference plane are parallel to one another. The total phase-shift amount for five phase shifts over one cycle does not cover exactly 2π, so the phase analyzed with the light source stepping method alone is not very accurate. However, the phase calibration tables along a view line L from a pixel of the camera, as shown in Fig. 1, are used in the WSTM. An example of the relationship between the phase and the z coordinate at a pixel is shown in Fig. 6; it is fairly nonlinear but monotonically increasing. The phase error is almost cancelled because the same system is used for the phase analysis of both the reference plane and the object.
Figure 6 Example of the relationship between analyzed phase and height Next, in order to build a low-cost and fast measurement system, a shape measurement system using five LED line light sources was developed, as shown in Figs. 7 and 8. This equipment is composed of a computer (PC, not shown in Figs. 7 and 8), a camera, five LED line light sources, a grating, a reference plane consisting of an LCD panel covered with a scattering film, and a linear stage. The LED device is controlled by the PC and a microcomputer and projects a grating pattern onto the reference plane and also onto an object. The camera and the grating projector are operated synchronously by the PC and the microcomputer. The five phase-shifted images of the grating are recorded by changing the switch-on timing of each LED line light source, with a phase difference of about 2π/5 between steps, synchronized with the camera exposure timing. Phase-shifted images are thus obtained very simply, without any expensive stage or slow equipment such as an LCD projector. The phase is analyzed from the five phase-shifted images using Eq. (2). The phases at an arbitrary point for each light source are determined from Eq. (7) and substituted into Eq. (1), and the light intensities are substituted into Eq. (2). The three-dimensional coordinates of a point on the object can then be analyzed quickly by the WSTM using the phase-coordinate calibration tables.
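Equations (1) and (2) are not reproduced in this excerpt. As an illustration of the five-step phase analysis, a standard N-step phase-shifting formula is sketched below; this is an assumption about the form of Eq. (2) (sign conventions vary between authors), and as the text notes, the actual steps here are only approximately 2π/5, which is why the WSTM tables are relied on to absorb the residual error:

```python
import numpy as np

def phase_from_steps(images):
    """Wrapped phase per pixel from N intensity frames with nominal phase
    shifts of 2*pi*k/N (standard N-step least-squares algorithm)."""
    imgs = np.asarray(images, dtype=float)
    n = len(imgs)
    # broadcast the step index k over any trailing image dimensions
    k = np.arange(n).reshape(-1, *([1] * (imgs.ndim - 1)))
    s = np.sum(imgs * np.sin(2 * np.pi * k / n), axis=0)
    c = np.sum(imgs * np.cos(2 * np.pi * k / n), axis=0)
    return np.arctan2(-s, c)
```

For intensities I_k = a + b·cos(φ + 2πk/N), the sums reduce to (N/2)·b·(−sin φ) and (N/2)·b·cos φ, so the arctangent recovers φ regardless of the background a and modulation b.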
Figure 7 Shape measurement system for light-stepping method
Figure 8 Photo of shape measurement system for light-stepping method
A Ronchi grating of chromium evaporated on a glass plate, with a 0.508 mm pitch, was used for grating projection. The grating was placed 20 mm from the LED device. In Fig. 6, the distance between the camera and the rightmost LED line is 70 mm. The recorded images of the gratings projected on the specimen had an almost cosinusoidal distribution because the 0.35 mm width of the LED line source blurred the shadow of the grating. As the reference plane, an LCD panel on a stage with 3.0 μm accuracy was used. By translating the reference plane from 0 to 30000 μm in steps of 300 μm, the relationship between the phases of the displayed or projected gratings and the z coordinates was recorded. The image size was 640 × 480 pixels. From these data, calibration tables were made at every 2π/1000 in phase. Using these calibration tables, the height distribution of the object was analyzed. In the experiment, the reference plane is first translated along the z direction to obtain the calibration tables relating the z coordinates to the phase. In the measurement procedure, the object is placed between the reference planes R0 and RN as shown in Fig. 1. By changing the switch-on timing of the LED line light sources, phase-shifted gratings were projected onto the object. Five phase-shifted grating patterns are recorded by the camera and the phase distributions of the grating are analyzed at each pixel using Eq. (2). The height value at each pixel is then obtained at high speed by referring to the calibration tables.
Figure 9 Relationship between phase and height at the camera pixel (320, 240)
Figure 10 Object
Figure 11 Projected grating image on the object
Figure 12 Phase distribution of the object
(a) Height distribution of the object (b) Height distribution along a horizontal line
Figure 13 Height distribution measurements by the light source stepping method EXPERIMENT Figure 9 shows the relationship between the height and the phase in the measurement area; the measurement range is 27.1 mm. The measured object is shown in Fig. 10 and the grating projection image in Fig. 11; one projection image is captured while each light source is lit. The phase distribution is obtained from the five grating projection images, and Fig. 12 shows the analyzed phase distribution. The vertical fringe seen in Fig. 13(a) lies beyond the measurement area. The white area behind the object is the shadow cast when the grating was projected. Figure 13(b) shows the chart of the height distribution. CONCLUSIONS A new shape measurement method called the 'light source stepping method', combined with the whole-space tabulation method (WSTM), and a system using a special linear LED device were proposed. The theory of the phase distribution for the LED light source stepping method was presented. The linear LED device can shift the phase of the projected grating quickly. The principle and several implementations of shape measurement using the light source stepping method were shown. The approach makes it possible to build a low-cost, fast, compact and accurate shape measurement system, and it theoretically excludes lens distortion and intensity error of the projected grating from the measurement results. REFERENCES [1] Fujigaki, M. and Morimoto, Y., "Shape Measurement with Grating Projection Using Whole-Space Tabulation Method," Journal of JSEM (in Japanese), 8-4, 92-98 (2008). [2] Fujigaki, M., Takagishi, A., Matui, T. and Morimoto, Y., "Development of Real-Time Shape Measurement System Using Whole-Space Tabulation Method," SPIE International Symposium, Proc. SPIE 7066, 706606 (2008). [3] Fujigaki, M., Masaya, A., Murakami, R. and Morimoto, Y., "Accuracy Improvement of Shape Measurement Using Whole-Space Tabulation Method," Proc.
of ICEM2009 held in Singapore, (2009).
[4] Morimoto, Y., Fujigaki, M. and Masaya, A., "Shape Measurement by Grating Projection and Whole-space Tabulation Method," Proc. of ISOT2009 held in Istanbul, (2009).
[5] Takeda, M. and Mutoh, K., "Fourier Transform Profilometry for the Automatic Measurement of 3-D Object Shapes," Applied Optics, 22-24, 3977-3982 (1983).
[6] Asundi, K. and Zhou, W., "Mapping Algorithm for 360-deg Profilometry with Time Delayed Integration Imaging," Optical Engineering, 38, 339-344 (1999).
[7] Sitnik, P. and Kujawinska, M., "Digital Fringe Projection System for Large-volume 360-deg Shape Measurement," Optical Engineering, 41-2, 443-449 (2002).
[8] Srinivasan, V., Liu, H. C. and Halioua, M., "Automated Phase-measuring Profilometry of 3-D Diffuse Objects," Applied Optics, 23, 3105-3108 (1984).
[9] Srinivasan, V., Liu, H. C. and Halioua, M., "Automated Phase-measuring Profilometry: a Phase Mapping Approach," Applied Optics, 24, 185-188 (1985).
[10] Morimoto, Y., Fujigaki, M., Masaya, A. and Amino, A., "Shape Measurement by Whole-space Tabulation Method Using Phase-shifting LED Projector," Proc. of International Conference on Advanced Phase Measurement Methods in Optics and Imaging, Monte Verita, Locarno, Switzerland, 17 (2010).
Calibration Method for Strain Measurement Using Multiple Cameras in Digital Holography
Motoharu FUJIGAKI1 and Riku NISHITANI2
1 Department of Opto-Mechatronics, Faculty of Systems Engineering, Wakayama University, 930 Sakaedani, Wakayama 640-8510, Japan
2 Graduate School of Systems Engineering, Wakayama University, 930 Sakaedani, Wakayama 640-8510, Japan
ABSTRACT Phase-shifting digital holography is a convenient method to measure displacement and strain distributions on the surface of an object. Compact and practical strain distribution measurement equipment is required for inspection, health monitoring and life extension of infrastructures such as steel bridges. Device miniaturization improves portability, reduces cost and increases resistance to vibration, and reducing the number of object beams simplifies the optical setup. The authors therefore proposed a deformation and strain measurement method that uses multiple imaging sensors with a single object wave. It is necessary to find corresponding points between the reconstructed images obtained from the multiple digital holograms taken by the multiple imaging sensors. In this paper, a calibration method for strain measurement using multiple imaging sensors is proposed. The principle and experimental results for the strain measurement of a deformed cantilever are shown. INTRODUCTION Inspection is very important for health monitoring and life extension of infrastructures such as steel bridges. An efficient measurement method for strain distribution is required to find cracks, and compact, practical strain distribution measurement equipment is needed for such inspection. Phase-shifting digital holography [1] is a convenient method to measure displacement and strain distributions on the surface of an object. Many researchers are studying this method and several compact instruments have been developed [2]. We have also developed compact equipment for strain measurement [3]. In that equipment, several object waves are necessary to measure the in-plane displacement and strain distribution; simplifying the optical setup is required to produce more compact equipment.
We proposed strain measurement with multiple image sensors using an off-axis reconstruction method [4, 5]. In this method, the reconstructed images are shifted using a Fourier transform so that the reconstructed area can be adjusted easily. The image sensors need to be placed facing in parallel directions. In this paper, a calibration method using a reference grating plate for strain measurement with multiple imaging sensors is proposed. The principle and experimental results for the strain measurement of a deformed cantilever are shown. PRINCIPLE Relationship between phase difference and displacement Figure 1 shows the relationship between a displacement vector and a sensitivity vector. The angle between the incident object wave and the wave scattered toward an image sensor is θ in this figure. The direction of the sensitivity vector e bisects the angle θ. When a point P on the object is displaced to the point P', the vector d is the displacement vector at the point P as shown in Fig. 1. The phase difference Δφ obtained with digital holographic interferometry is expressed as follows.
Δφ = e ⋅ d (1)
The displacement vector and the sensitivity vector each have components in the x, y, and z directions. Thus, Eq. (1) is expressed as follows:
Δφ = [ex ey ez] [dx dy dz]T = ex dx + ey dy + ez dz, (2)

where e = [ex, ey, ez] and d = [dx, dy, dz].
When three image sensors 1, 2 and 3 are placed at independent positions, three sensitivity vectors are obtained. In this case, Eq. (2) can be rewritten as Eq. (3):

[Δφ1 Δφ2 Δφ3]T = [e1x e1y e1z; e2x e2y e2z; e3x e3y e3z] [dx dy dz]T, (3)

where the subscripts 1, 2 and 3 denote each image sensor. When each component of the matrix on the right side of Eq. (3) is specified, the displacement components dx, dy and dz can be obtained from the phase differences Δφ1, Δφ2 and Δφ3 obtained by each image sensor.
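Recovering the displacement from Eq. (3) is a 3×3 linear solve per point. The sketch below illustrates this with made-up sensitivity vectors and displacement values (the real ones come from the optical geometry and calibration):

```python
import numpy as np

# Rows are the three sensitivity vectors e1, e2, e3 (illustrative values;
# they must be linearly independent for the system to be solvable).
E = np.array([[1.0, 0.0, 1.0],
              [0.0, 1.0, 1.0],
              [0.5, 0.5, 0.8]])

d_true = np.array([2.0, -1.0, 0.5])   # hypothetical displacement components
dphi = E @ d_true                     # phase differences each sensor would measure

# Invert Eq. (3): displacement from the three measured phase differences
d = np.linalg.solve(E, dphi)
```

In practice each sensor provides a full phase-difference map, so this solve is repeated (vectorized) at every corresponding pixel, which is why finding corresponding pixels between sensors matters.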
Figure 1 Relationship between a displacement vector and a sensitivity vector
Calibration method for finding corresponding pixels among image sensors A calibration method for finding corresponding pixels among image sensors using a reference grating plate is proposed. Figure 2 shows the optical setup used in the calibration to find the corresponding pixels for displacement distribution measurement using multiple image sensors. The reference plate has a two-dimensional grating with a regular, known pitch on its surface; the grating serves as a scale for the x and y directions. The reconstructed image obtained with an image sensor is the grating image of the reference grating plate. The x and y coordinates on the reference grating plate corresponding to each pixel of the reconstructed image can be obtained from the phases of the two-dimensional grating. The phases can easily be analyzed by the phase-shifting method, which can be applied using linear stages for the x and y directions as shown in Fig. 2. EXPERIMENT FOR DISPLACEMENT AND STRAIN DISTRIBUTION MEASUREMENT Figure 3 shows the experimental setup for displacement and strain distribution measurement. One image sensor and a mirror are used instead of two image sensors, as shown in Fig. 3. The left and right halves of the reconstructed image are the real image and the mirror image of the object, respectively. The image sensor is a CCD with a pixel size of 3.65 µm × 3.65 µm and 960 × 960 pixels. A reference wave reaches the image sensor through the path shown in Fig. 3. The phase can be shifted by a mirror placed on a PZT stage. The object beam irradiates the object at an incident angle of 0 degrees, and the light scattered by the object is recorded as digital holograms on the image sensor. The reconstructed length is 340 mm.
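A minimal sketch of the correspondence idea, assuming the grating phase at each pixel has already been unwrapped: because the plate grating has a known pitch, the plate coordinate seen by a pixel is simply proportional to the phase. The 8 mm pitch below matches the later experiment; the function name and the sample phases are illustrative:

```python
import numpy as np

PITCH = 8.0  # mm, pitch of the reference grating (as in the experiment)

def coordinate_from_phase(unwrapped_phase):
    """Convert the unwrapped grating phase at a pixel into a coordinate on
    the reference plate: one 2*pi of phase corresponds to one pitch."""
    return unwrapped_phase / (2 * np.pi) * PITCH

# Half a fringe period of phase corresponds to half a grating pitch, etc.
x = coordinate_from_phase(np.array([0.0, np.pi, 2 * np.pi]))
```

Once both the real image and the mirrored image are mapped to plate coordinates this way, pixels that report the same (x, y) coordinate are corresponding points.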
Figure 2 Optical setup for the calibration to find corresponding pixels for displacement distribution measurement using multiple image sensors
Figure 4 shows the reference plate on a linear stage. The pitch of the reference grating is 8 mm. In this experiment, the y coordinates of the real image and the mirrored image coincide at the corresponding points, so the x coordinates of the real image and the mirrored image are matched using the phase of the reference grating plate. A cantilever as shown in Fig. 5 is used as the object. The material is anodized aluminium. The part near the free end (x = -20 mm) is displaced by 12 µm in the y-direction by a PZT actuator. One specimen has no crack and the other has a crack, as shown in Fig. 5. Figure 6 shows the reconstructed image of the reference plate. The real image and the mirrored image of the grating pattern on the reference plate occupy the left and right halves of the reconstructed image, respectively. The phase distributions of the reference plate for the x-direction, shown in Fig. 7, are obtained by the phase-shifting method. Figure 8 shows the coordinate distributions of the reference plate for the x direction.
Figure 3 Optical setup: (a) diagram, (b) photograph
Figure 4 Reference grating plate on a linear stage
Figure 5 Specimens
Figure 6 Reconstructed image of the reference plate
Figure 7 Phase distributions of the reference plate for the x-direction: (a) real image, (b) mirrored image
Figure 8 Coordinate distributions of the reference plate for the x-direction: (a) real image, (b) mirrored image
Figure 9 shows the reconstructed image of the specimen with a crack. The real image and the mirrored image of the specimen occupy the left and right halves of the reconstructed image, respectively. Figure 10 shows the phase distributions of the specimen with a crack for the x direction. The coordinate distributions of the reference plate shown in Fig. 8 give the corresponding points between the phases of the real image and the mirrored image. Figure 11 shows the results for the specimen without a crack. Figures 11(a) and (b) show the displacement distributions for the x and y directions, respectively, and Fig. 11(c) shows the strain distribution for the x direction obtained by spatial differentiation of the displacement distribution shown in Fig. 11(a). Figure 12 shows the results for the specimen with a crack. Figures 12(a) and (b) show the displacement distributions for the x and y directions, respectively, and Fig. 12(c) shows the strain distribution for the x direction obtained by spatial differentiation of the displacement distribution shown in Fig. 12(a). The result shows that the strain in the area near the tip of the crack is higher than in the surrounding area.
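The strain evaluation described above is a spatial derivative of the displacement field. A minimal sketch, where the grid spacing and the synthetic uniform-strain displacement ramp are assumptions for illustration:

```python
import numpy as np

dx = 0.1                    # mm per measurement point (assumed spacing)
x = np.arange(0.0, 10.0, dx)
ux = 1e-3 * x               # displacement ramp: uniform 1000 microstrain

# Strain for the x direction as the spatial derivative d(ux)/dx;
# np.gradient uses central differences in the interior.
eps_x = np.gradient(ux, dx)
```

In practice some smoothing of the displacement field usually precedes the differentiation, since differentiation amplifies measurement noise.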
Figure 9 Reconstructed image of the specimen with crack
Figure 10 Phase distributions of the specimen with crack for the x direction
Figure 11 Results of the displacement distribution and the strain distribution of the specimen without crack: (a) displacement distribution for the x direction, (b) displacement distribution for the y direction, (c) strain distribution for the x direction
Figure 12 Results of the displacement distribution and the strain distribution of the specimen with crack: (a) displacement distribution for the x direction, (b) displacement distribution for the y direction, (c) strain distribution for the x direction
CONCLUSIONS We proposed a calibration method using a reference grating plate for strain measurement with multiple cameras in digital holography. It was confirmed that the strain distribution could be measured with the proposed method using one image sensor and a mirror instead of two image sensors.
REFERENCES [1] Yamaguchi, I. and Zhang, T., "Phase-shifting Digital Holography," Optics Letters, 22-16, 1268-1270 (1997). [2] Kujawinska, M. and Michalkiewicz, A., "New Approaches and Concepts for Engineering Objects Monitoring and Measurements Based on Digital Holography and Interferometric Principles," Proceedings of the International Symposium to Commemorate the 60th Anniversary of the Invention of Holography, 81-88 (2008). [3] Fujigaki, M., Kido, R., Shiotani, K. and Morimoto, Y., "High-speed and Compact Strain Measurement System by Phase-shifting Digital Holography," Proceedings of the International Symposium to Commemorate the 60th Anniversary of the Invention of Holography, 316-323 (2008). [4] Fujigaki, M., Shiotani, K., Nishitani, R., Masaya, A. and Morimoto, Y., "Off-axis Reconstruction Method for Displacement and Strain Distribution Measurement with Phase-Shifting Digital Holography," Fringe09, 6th International Workshop on Advanced Optical Metrology, Osten, W. and Kujawinska, M. ed., Springer, 764-769 (2009). [5] Fujigaki, M., Nishitani, R. and Morimoto, Y., "Strain Measurement Using Phase-shifting Digital Holography with Two Cameras," Proceedings of ICEM 14, EPJ Web of Conferences 6, 30001 (CD-ROM) (2010).
Performance Assessment of Strain Measurement with an Ultra High Speed Camera
Marco Rossi1, Rachid Cheriguene2, Fabrice Pierron1, Pasqual Forquin2
1 Arts et Métiers ParisTech, rue St. Dominique, 51000 Châlons-en-Champagne, France, email: [email protected], [email protected]
2 LEM3, Université Paul Verlaine de Metz, Ile du Saulcy, 57012 Metz, France, email: [email protected], [email protected]
ABSTRACT Ultra high speed cameras can acquire images at up to about 1 million frames per second at a full spatial resolution of the order of 1 Mpixel. A classical architecture for such cameras is the rotating mirror system; in recent years, however, an all-solid-state architecture has been developed in which the image storage is incorporated into the sensor chip. The main problem of this system is the low fill factor (less than 15%), since most of the space on the sensor is occupied by memory. This peculiarity introduces a series of systematic errors when the camera is used to perform full-field measurements on a specimen surface. The aim of this paper is to characterize these errors thoroughly and to find a post-processing procedure to improve the measurement. A series of tests was performed on a Shimadzu HPV-1 high speed camera, first using uniform light fields and then grids, in order to apply the grid method as the full-field optical measurement technique. In this way it was possible to characterize the camera behaviour and use this information to improve actual measurements. Introduction In recent years, the technology of ultra high speed cameras has progressed constantly, leading to enhanced performance and higher frame rates. Thanks to these developments, full-field strain measurement with ultra high speed cameras is becoming a very interesting and promising tool for dynamic testing, in order to characterize material behaviour at high strain rates. The displacement and strain fields can be obtained using the same optical techniques adopted in quasi-static tests, e.g. Digital Image Correlation (DIC) [1,2] or the grid method [3,4]. For instance, the strain field measured during an impact test can be used directly to evaluate the material properties using the Virtual Fields Method [5,6].
The quality of the measurement is directly related to the quality of the acquired images and, unfortunately, images acquired with ultra high speed cameras are still not as good as those acquired with the standard digital cameras used in quasi-static tests. For this reason it is necessary to look carefully at the performance of ultra high speed cameras before using them for full-field strain measurements.
Fig. 1 Schematic view of the ISIS architecture used by the Shimadzu HPV-1 [7]
In digital cameras, the main limitation to reaching high speed acquisition is the storage time; therefore different strategies have been developed over the years to achieve high rates, e.g. rotating mirrors [8] or beam splitters [9]. In this paper the behaviour of the Shimadzu HPV-1 high speed camera [10] is analysed. This high speed camera uses the so-called "In-situ Storage Image Sensor" (ISIS) architecture [7,11], see Figure 1. The idea is to embed the storage memory directly into the CCD sensor; in this way each photodiode is followed by a string of storage sites, minimizing the storage time. Lazovsky et al. [12] have shown that it is possible to reach 10^8 frames per second with this architecture. The Shimadzu HPV-1 is able to register 102 images with a resolution of 312×260 pixels, a grey level dynamic range of 8 bits and a maximum acquisition rate of 1000 kfps. The main advantage of the ISIS architecture is the possibility of having an all-solid-state structure, avoiding mechanical devices such as rotating mirrors or light intensifiers. The main drawback is the low pixel fill factor: only around 13% of the sensor is covered by the photodiodes, the rest being used for the storage devices [7]. Moreover, this acquisition system can also lead to some unexpected errors since, at the same pixel, different storage units are used to register the light intensity at each frame. In this paper, as a first step, the performance of the camera was evaluated using a uniform light field; a similar procedure was used by Tiwari et al. [13] to assess the errors in a UHS-ICCD high speed camera. Then a series of tests was performed on a grid in order to evaluate how the camera errors influence the displacement and strain measurements obtained with the grid method.
Fig. 2 Histograms of the grey level distribution obtained using a blank target and a uniform light field. Three aspects have been considered: the effect of different light intensities (a), the effect of the gain (b), the effect of a high acquisition rate (c)
Experiments on uniform light fields A first set of tests was conducted using a blank sheet as target in order to evaluate the camera behaviour when recording a uniform light field with intensity values ranging from low to saturation. The blank sheet was illuminated with a high power lamp in order to obtain an almost uniform light field. The objective aperture was varied to change the light intensity arriving at the CCD sensor. The gain and the acquisition frame rate were also varied to evaluate their influence on the acquisition process. The grey level histograms for the inspected configurations are illustrated in Figure 2; a small picture of the acquired image is also provided at the right of each graph. In the first set of acquisitions, Figure 2a, the gain is set to 1 (meaning no signal amplification) and a low acquisition rate of 63 fps is used. Four diaphragm apertures were adopted, i.e. f/16, f/11, f/8 and f/5.6. The analysis shows that in this case saturation occurs at a grey level value ranging from 110 to 130, far lower than the expected limit of 256 for an 8-bit camera. The images obtained using f/8 and f/5.6 are essentially the same since in both cases the light exceeds the saturation limit. The acquisition was then repeated using a gain equal to 2 and apertures f/16 and f/8. In Figure 2b the grey level distribution obtained using gain 2 is compared directly with the distribution obtained using gain 1 for the same objective apertures. The comparison shows that the gain basically increases the average grey level and the data dispersion; moreover, when gain 2 and f/8 are used, some pixels reach the 8-bit saturation level of 256. The last series of tests, illustrated in Figure 2c, was obtained using a high acquisition rate of 250 kfps and a gain of 1.
In this case, because of the high rate and the consequently low light due to the small number of photons hitting the CCD sensor, the diaphragm apertures were set to f/4, f/2.8, f/2 and f/1.2 and the lamp was concentrated toward the centre of the target sheet to increase the incident light power. The obtained light field is not as uniform as previously, see Figure 2c; however, the same saturation effect observed in Figure 2a is also present when the grey level approaches 110 to 130. The effect of saturation is better explained in Figure 3, where images with and without saturation are compared. The contour scale has been varied to magnify the grey level gradient. When there is no saturation, the grey level field is smooth and the observed gradient is due only to the non-uniform illumination provided by the lamp. When the saturation level is reached, a chaotic pattern is observed, which indicates that, spatially, saturation occurs at different grey levels at different pixels of the CCD sensor.
Fig. 3 Comparison of images with and without saturation at the CCD sensor level. If no amplification is used (gain = 1), saturation occurs when the grey level goes beyond 110, far below the maximum value of 256 for an 8-bit camera.
The conclusion from this first analysis is that, at the sensor level, saturation occurs when the pixel grey level is around 110 to 130; within this range it is not possible to obtain clear images. In order to extend the dynamic range of the image up to the 8-bit limit of 256, the gain has to be increased. Nevertheless, since the gain is only an amplification of the signal coming from the CCD sensor, if the image is already saturated at the sensor level the same saturation will be present in the amplified signal. Indeed, looking at Figure 3, if the same aperture of f/8 and frame rate of 63 fps are adopted and only the gain is changed from 1 to 2, the resultant saturated images are qualitatively very similar, with the difference that with gain 1 the grey level ranges from 110 to 130 while with gain 2 it ranges from 220 to 256. The same saturation problem, with a similar pattern, is also observed at the high acquisition rate (250 kfps) in the zone of the sheet where the grey level is over 110.
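The histogram analysis behind Fig. 2 can be sketched as follows; the synthetic frame stack below (normal noise clipped near the observed saturation level) stands in for the recorded images, and the saturation value is an assumption for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

# 102 frames at the HPV-1 resolution of 260x312, with an artificial
# saturation ceiling near 120 grey levels to mimic the observed behaviour
frames = rng.normal(90, 8, size=(102, 260, 312))
frames = np.clip(frames, 0, 120).astype(np.uint8)

# Pool all pixels of all frames into a single 8-bit grey level histogram;
# early saturation shows up as a pile-up well below the 8-bit ceiling
counts, edges = np.histogram(frames, bins=256, range=(0, 256))
```

Plotting `counts` against the bin centres reproduces the kind of distribution shown in Fig. 2, with an empty upper part of the 8-bit range when gain 1 is used.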
At this point, the stability of the signal with time was analysed by looking at the standard deviation of the grey level value recorded at each pixel over the 102 subsequent images. In Figure 4 the standard deviation at each pixel is plotted as a function of the average pixel grey level, for the same configurations as analysed in Figure 2. Where there is no saturation, the standard deviation is usually lower than 2 grey levels, with a slight increase as the average grey level grows. Even with saturation the standard deviation remains low, less than 3 grey levels. A higher standard deviation is observed at some pixels, but these defective pixels represent a very small fraction of the total (e.g. in the test with gain 1, f/16 and 63 fps, only 25 pixels have a standard deviation over 2). With an almost uniform light field, the camera shows good signal stability over all 102 acquired images. A final remark can be made about the gain effect, looking at the second plot of Figure 4: for the same aperture and frame rate, changing the gain from 1 to 2 leads to a higher standard deviation. In other words, with gain 2 it is possible to explore the whole 8-bit dynamic range, but the noise level in the recorded images will be higher.
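The temporal-stability analysis above amounts to a per-pixel standard deviation over the frame axis. A sketch, with a synthetic stack standing in for the 102 recorded frames (noise level chosen arbitrarily):

```python
import numpy as np

rng = np.random.default_rng(1)

# 102 frames of a nearly constant signal, 260x312 pixels, noise sigma = 1.5
stack = rng.normal(80, 1.5, size=(102, 260, 312))

mean_per_pixel = stack.mean(axis=0)   # average grey level at each pixel
std_per_pixel = stack.std(axis=0)     # temporal noise at each pixel

# Count "defective" pixels exceeding a threshold, as in the text
defective = int(np.count_nonzero(std_per_pixel > 2.0))
```

Plotting `std_per_pixel` against `mean_per_pixel` as a scatter reproduces the kind of diagram shown in Fig. 4.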
Fig. 4 Standard deviation of the grey level obtained in the 102 images recorded by the Shimadzu camera

Experiments on a fixed grid

A new series of tests was conducted, this time using a 3 mm pitch grid as target image. The camera magnification was set in order to have one grid period sampled by about 5 pixels, which is a standard setting in the measurement of displacement fields with the grid method. The measurement area where the grid is framed is 185×245 pixels, so that 37×55 independent measurement points can be achieved using the grid method. The following camera and objective settings have been used: gain equal to 1, acquisition rate equal to 250 fps and aperture set to f/4, f/2.8, f/2 and f/1.2, respectively. Such parameters represent a situation in between the two previously analysed. The first image is taken as reference and the displacement is then computed with the grid method [3] using the other 101 images as deformed images. In this case the pixel subset needed to obtain a single measurement point is a window of 5×5 pixels. The displacement is expressed as a percentage of the pitch and should theoretically be zero since the image is not moving. In Figure 5, for each measurement point, the standard deviation of the displacement measured over the 101 images
is plotted as a function of the average grey level achieved in the corresponding pixel subset used to compute the displacement. Ux and Uy represent the horizontal and vertical displacement, respectively.
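As a rough illustration of how the grid method turns grey levels into displacements, the following 1-D sketch detects the phase of a 5 pixel pitch grid with a one-period windowed DFT and converts the phase change between two images into a displacement. It is a much-simplified stand-in for the processing of Surrel [3], not the actual implementation, and all names and the synthetic signals are illustrative:

```python
import numpy as np

def grid_phase(signal, pitch=5):
    """Phase of a 1-D periodic grid signal, evaluated once per period
    with a rectangular one-period window (a simplified sketch of the
    windowed-DFT phase detection behind the grid method)."""
    n = len(signal)
    k = np.arange(pitch)
    kernel = np.exp(-2j * np.pi * k / pitch)
    return np.array([np.angle(np.sum(signal[i:i + pitch] * kernel))
                     for i in range(0, n - pitch + 1, pitch)])

pitch = 5
x = np.arange(100)
reference = 128 + 100 * np.cos(2 * np.pi * x / pitch)
shift = 0.5                                 # imposed displacement in pixels
deformed = 128 + 100 * np.cos(2 * np.pi * (x - shift) / pitch)
# Wrapped phase change between deformed and reference images,
# converted to a displacement in pixels.
dphi = np.angle(np.exp(1j * (grid_phase(deformed) - grid_phase(reference))))
u = -pitch * float(dphi.mean()) / (2 * np.pi)
```

With noiseless synthetic images the imposed 0.5 pixel shift is recovered exactly; with real images, the noise in the grey levels propagates into a scatter of `u`, which is what the standard deviations of Figure 5 quantify.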
Fig. 5 Standard deviation of the displacement obtained applying the grid method to a fixed grid along the horizontal (a) and vertical (b) directions. In (c) the standard deviation of the pixels in terms of grey level is shown.

For the apertures f/4 and f/2.8, the standard deviation of the displacement is low, under 0.5% (i.e. 15 µm with a 3 mm pitch), and the average grey level intensity in the subset is lower than 50. Using the apertures f/2 and f/1.2, a higher dispersion in the standard deviation is obtained, and the standard deviation observed for Uy is much larger than for Ux. Figure 5c shows the standard deviation of the grey level at each pixel in the same way as in Figure 4; a high scatter is obtained for f/2 and f/1.2, with the standard deviation rising up to 40 grey levels. This result is quite surprising and not consistent with what was observed using uniform light fields. In order to understand what happens, the temporal grey level history of a single pixel with such high values of standard deviation is plotted in Figure 6 for different configurations. It can be noted that a high oscillation is observed only in the case of the grid with the objective aperture of f/2. This oscillation has a period of 12 images, and the grey intensity at the pixel suddenly changes from a value of around 30 to a value of around 115. This large variation in the grey level disappears when less light is used by closing the aperture to f/2.8, or when a uniform light field is used. A magnification of the acquired grid image is also provided in Figure 6, where the inspected pixel is highlighted by a red circle to show the grey level variation, which occurs only when the f/2 aperture is adopted. It is difficult to explain the cause of such an effect; it seems that when a high light gradient is present inside the pixel area, this can provoke a sudden drop of the stored value from the high level to the low level.
This error does not seem to be present when the level of light is far from the sensor saturation level.
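The 12-image periodicity reported above is easy to confirm numerically once a pixel's grey-level history has been extracted. The sketch below is illustrative only, with synthetic data standing in for the recorded history; it estimates the dominant period from the amplitude spectrum:

```python
import numpy as np

def dominant_period(history):
    """Estimate the dominant period (in frames) of a grey-level history
    from the largest non-DC peak of its amplitude spectrum."""
    h = np.asarray(history, dtype=float)
    h = h - h.mean()
    spectrum = np.abs(np.fft.rfft(h))
    k = int(np.argmax(spectrum[1:])) + 1   # skip the zero-frequency bin
    return len(h) / k

# Synthetic history mimicking the reported behaviour: the pixel sits
# near 115 and drops to about 30 for part of each 12-frame cycle
# (96 frames used here so a whole number of cycles is analysed).
frames = np.arange(96)
history = np.where(frames % 12 < 4, 30.0, 115.0)
period = dominant_period(history)
```

With a record length that is a whole number of cycles, the fundamental falls on an exact frequency bin and the 12-frame period is recovered directly.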
[Figure 6: grey level versus frame number for a single pixel, for the grid at gain 1, 250 fps (apertures f/2 and f/2.8) and for uniform light fields at various apertures and frame rates; insets show the grid image at frames 16, 19 and 21 for apertures f/2 and f/2.8.]
Fig. 6 History plot of the grey level measured by a single pixel during different tests. The pixel is highlighted with a red circle in the magnifications below the graph, which show the drop in the grey level at different frames.

Experiments on a grid with imposed displacement

The last set of experiments consisted of the analysis of a grid with a fixed imposed displacement. The aim is to study the influence of the low fill factor of the Shimadzu camera on the displacement measurement performed with the grid method. The camera parameters used are gain equal to 1, frame rate 250 fps and aperture f/4; in this way the saturation and oscillation problems detailed in the previous sections are avoided.
Fig. 7 Standard grid and sinusoidal grid (5 pixel pitch), with the corresponding grid patterns
A first acquisition was performed and the first recorded image used as reference in the displacement computation; then the target grid was manually displaced along the horizontal axis and a second set of 102 images was recorded and used as deformed images. The displacement was computed using the grid method, and the strain field was computed using a polynomial fitting with a 4 pixel radius [14]. The imposed displacement is around 1/3 of the grid pitch; however, since the displacement is constant, the resulting strain field should be zero. Two types of grid were used; they are shown in Figure 7. The first grid (on the left) is a standard cross hatch grid made of black lines over a white sheet. The second grid, called sinusoidal grid (on the right), is obtained using a sinusoidal signal that
leads to a smoother black-to-white transition. In both cases the grid pitch is 5 pixels. The use of a sinusoidal grid should improve the phase detection because the high spatial frequency content of the grid image is reduced.
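To make the difference between the two targets concrete, the following sketch (an illustration, not the actual printing process; grey levels and names are assumed) builds 1-D profiles of a standard and a sinusoidal grid with a 5 pixel pitch and compares their spectral content above the grid frequency:

```python
import numpy as np

def grid_profile(width, pitch=5.0, sinusoidal=False, black=20.0, white=110.0):
    """1-D grey-level profile of a grid, one sample per pixel. The
    standard grid switches sharply between black and white lines; the
    sinusoidal grid varies smoothly between the two levels."""
    x = np.arange(width)
    carrier = np.cos(2 * np.pi * x / pitch)
    mid, amp = (black + white) / 2.0, (white - black) / 2.0
    if sinusoidal:
        return mid + amp * carrier
    return np.where(carrier >= 0, white, black)

def energy_above_grid_frequency(profile, pitch=5.0):
    """Spectral energy above the grid fundamental: a rough proxy for the
    harmonics that disturb the phase detection."""
    spectrum = np.abs(np.fft.rfft(profile - profile.mean()))
    fundamental = int(len(profile) / pitch)
    return float((spectrum[fundamental + 2:] ** 2).sum())

std_grid = grid_profile(50)
sin_grid = grid_profile(50, sinusoidal=True)
```

The sinusoidal profile carries essentially no energy above the grid frequency, while the sharp-edged grid carries strong harmonics, which is the content the sinusoidal grid is designed to suppress.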
Fig. 8 Maps of the displacement field in the x-direction and of the strain component εxx obtained employing the grid method on the standard grid and on the sinusoidal grid

In Figure 8 the Ux displacement field and the εxx strain field obtained applying the grid method to the two types of grid are compared. Using the standard grid, a series of fringes appears in the displacement and, consequently, in the strain field. This effect is due to the low fill factor of the Shimadzu camera, which disturbs the phase detection. The fringes almost disappear if the sinusoidal grid is used.
Fig. 9 History plot of the standard deviation of the strain field at each frame, for the different strain components, using the standard and the sinusoidal grid

Figure 9 shows the standard deviation of the computed strain fields for each image. The sinusoidal grid reduces the standard deviation by a factor of about three for εxx. No variation is observed in the other strain components; indeed, the fringes are caused by the displacement, which has been imposed only in the x-direction.
Conclusions

The aim of this paper is the characterization of the performance of the ultra high speed camera Shimadzu HPV-1. Three types of experiments have been performed: first a uniform field was framed, then a stationary grid, and finally the grid was translated by a fixed amount in the longitudinal direction. The main outcomes of the analysis are:

• Saturation is encountered at the sensor level well before the 8-bit limit of 256, when the grey level is around 110-130. The saturation threshold changes at each pixel, leading to the characteristic saturation pattern illustrated in Figure 3. In order to use the whole 8-bit range, a signal amplification has to be applied through the gain function.
• When a uniform light field is framed, the grey level variation measured at each pixel over the 102 subsequent images acquired by the camera is rather low, with a standard deviation of less than 2 grey levels.
• In the presence of high light gradients in the image, as occurs in the grid where black and white lines alternate every 5 pixels, the stored grey level at some pixels can suddenly jump from low to high values. This oscillation has a period of 12 images and is related to the amount of light; indeed, the effect is not observed when the objective aperture is reduced.
• When a constant longitudinal displacement is measured using the grid method, a series of spatial fringes is observed. This phenomenon is due to the low fill factor of the camera in the x direction and can be reduced using a sinusoidal grid with a smooth transition between the black and white zones of the grid.
The main conclusion from this study is that, in order to obtain good deformation measurements using this camera, it is essential to avoid the presence of saturation. Playing with the gain can be a good strategy to increase the available dynamic range of the camera, although more noise is expected. Finally, the use of sinusoidal grids can be beneficial to reduce the error introduced by the low fill factor. In the case of random patterns to be used with correlation algorithms, smooth grey level transitions will probably also be beneficial, though a study similar to this one would have to be performed.

References
[1] Sutton, M. A., Orteu, J.-J. and Schreier, H. W.: Image correlation for shape, motion and deformation measurements. Springer, New York, 2009.
[2] Amodio, D., Broggiato, G. B., Campana, F. and Newaz, G. M.: Digital speckle correlation for strain measurement by image analysis. Exp. Mech., 43: 396-402, 2003.
[3] Surrel, Y.: Fringe analysis. In: Photomechanics, edited by P. K. Rastogi, Topics in Applied Physics, Vol. 77, p. 55-102, Springer, 2000.
[4] Moulart, R., Rotinat, R., Pierron, F. and Lerondel, G.: On the realization of microscopic grids for local strain measurement by direct interferometric photolithography. Opt. & Lasers in Eng., 45: 1131-1147, 2007.
[5] Pierron, F., Sutton, M. A. and Tiwari, V.: Ultra high speed DIC and Virtual Fields Method analysis of a three point bending impact test on an aluminium bar. Exp. Mech., DOI 10.1007/s11340-010-9402-y, 2011.
[6] Moulart, F., Pierron, F., Hallet, S. R. and Wisnom, M. R.: Full-field strain measurement and identification of composites moduli at high strain rate with the Virtual Fields Method. Exp. Mech., DOI 10.1007/s11340-010-9433-4, 2010.
[7] Etoh, T. G., Poggemann, D., Kreider, G., Mutoh, H., Theuwissen, A. J. P., Ruckelshausen, A., Kondo, Y., Maruno, H., Takubo, K., Soya, H., Takehara, K., Okinaka, T. and Takano, Y.: An image sensor which captures 100 consecutive frames at 1 000 000 frames/s.
IEEE Transactions on Electron Devices, 50(1): 144-151, 2003.
[8] www.cordin.com: Cordin 550 camera.
[9] www.itronx.com/drs.htm: DRS IMACON200 camera.
[10] www.shimadzu.com: Shimadzu HPV-2 camera (replacement of the HPV-1, same performance).
[11] Frank, A. M. and Bartolick, J. M.: Solid state replacement of rotating mirror cameras. 27th International Congress on High Speed Photography & Photonics, Xi'An, China, 2006.
[12] Lazovsky, L., Cismas, D., Allan, G. and Given, D.: CCD sensor and camera for 100 Mfps burst frame rate image capture. Proc. SPIE 5787, 184, DOI 10.1117/12.604523, 2005.
[13] Tiwari, V., Sutton, M. A. and McNeill, S. R.: Assessment of high speed imaging systems for 2D and 3D deformation measurements: methodology development and validation. Exp. Mech., 47: 561-579, 2007.
[14] Avril, S., Feissel, P., Pierron, F. and Villon, P.: Estimation of the strain field from full-field displacement noisy data. Revue Européenne de Mécanique Numérique, 17: 857-868, 2008.
Rigid Body Correction Using 3D Digital Photogrammetry for Rotating Structures Troy Lundstrom, Christopher Niezrecki, Peter Avitabile SDASL, Department of Mechanical Engineering, University of Massachusetts Lowell One University Ave, Lowell, MA 01854 1.0 Abstract In recent years, stereophotogrammetry techniques have been used to measure the response of high-speed dynamic systems. More recently, this technique has been used to measure the motion of rotating systems, such as wind turbines. In evaluating the vibration of rotating helicopter rotors or wind turbines, the rotor rigid body motion induced from the hub or the flexure in the tower is not of interest to understand the structural dynamic motion of the blades and needs to be removed from the overall motion. Dynamic measurements have previously been taken on a rotating turbine and the rigid body correction (RBC) algorithms have produced unexpected dynamic behavior. To characterize and understand this behavior, a series of experiments were developed using a rotating wedge along with a known flexing element to study the rigid body correction process and induce behavior similar to the unexpected dynamic behavior observed in the turbine. A sensitivity study was also conducted using the rotating wedge system to show how displacement measurements are affected when different combinations of flexure measurement points on the cantilever beam are used in the rigid body correction process. The information gleaned from this experiment was then utilized to mathematically develop a RBC algorithm to understand how the data is and should be processed. A systematic approach to process the dynamic information from rotating systems using RBC algorithms is also presented within this work. 2.0 Introduction For several years, stereophotogrammetry techniques have been used to measure the dynamics of structures. 
The utility of this measurement technique for the measurement of dynamic systems was illustrated through measurements on a dryer base plate and base-upright (BU) structure in recent studies [1]. The optical measurement approach also has a number of advantages over other non-contacting measurement techniques such as pulsed electronic speckle pattern interferometry (ESPI) and laser Doppler vibrometry (LDV) in the measurement of the dynamics of rotating systems. In ESPI, the fringe patterns change in response to both rotation and deformation making it difficult to use for the dynamic measurement of rotating structures. LDV can be used to measure the dynamics of rotating structures when it is used in concert with a mirror system that guides the laser in a circular path at the same rotational velocity as the structure being measured [2]. Other researchers have recently investigated the use of continuously scanning laser Doppler vibrometry (CSLDV) for rotating structures [3, 4, 5]. To date, measurements using CSLDV can only be made on relatively flat, rotating structures using one axis at multiple points. Three dimensional measurements are not currently possible using CSLDV. Stereophotogrammetry has significant benefits over either of these methods because of its relative simplicity, insensitivity to vibration and rigid body motion of the hardware, and ability to more easily measure rotational motion using a set of cameras. Digital photogrammetry and point tracking software have also been used to measure the oscillatory behavior of a 500 kW wind turbine under normal operating conditions (rotating under wind loads) and the associated dynamics induced onto the blades of the turbine when the system was slowed from 27 rpm to 0 rpm, simulating an emergency stop [6]. This experiment illustrates the effective use of dynamic photogrammetry to measure turbine dynamics under normal operating loads in the field. 
In addition to taking dynamic measurements on a wind turbine, rigid body correction algorithms can be used to effectively de-rotate a wind turbine and see the blade flexural dynamics as if the turbine were not moving [7]. Similar to rigid body correction, an algorithm to correct for body deformation of a human torso while scanning the body using Single Photon Emission Computed Tomography (SPECT) is presented by Gu [8]. Also, an algorithm to assemble 3-D surfaces from a limited number of data points using principal component analysis (PCA) and rotations is presented by Blanz [9]. This paper focuses on understanding RBC and determining an appropriate way to perform it for data obtained on rotating structures. Three-dimensional (3D) point tracking (or dynamic photogrammetry) utilizes an ellipse-tracking algorithm to track discrete, circular points on a dynamic structure in three dimensions using a series of digital pictures. A more complete explanation of the system process was given by Warren [10]. The coordinate information across the frames of interest can be processed using rigid body correction (RBC) algorithms to isolate the dynamics of a point or feature of interest by removing any rigid body rotation and/or translation that has dynamic motion significantly larger than the vibration of interest. The stereophotogrammetry point-tracking system (PONTOSTM) was previously used to take dynamic measurements on a 1.17 meter Southwest Windpower Air BreezeTM wind turbine [11]. Details of this study can be found in two papers by Warren [12]. A RBC was performed using
all rotor measurement points yielding unforeseen dynamic behavior in the form of a periodic wobble. This wobble was observed in an animation developed using the measurement point location versus time data. A second RBC was performed using just the hub measurement points and the periodic wobble was effectively eliminated allowing one to see the blade structural dynamics. To show how processing the data using the two RBC techniques generates very different results, the computed tip displacement measurements representing the motion of the same point, but processed differently, were extracted. An image showing the turbine point labels is shown in Figure 1a and a plot of the tip displacement measurements is shown in Figure 1b.
Fig. 1 a) Labeled turbine measurement points; b) Measured rotor tip displacement processed with RBC using hub points and all points From Figure 1b, the significant difference between the two plots is easily seen. In Figure 1b, both plots appear to show the superposition of a high-frequency signal and a low-frequency signal. The plot of the tip displacement using RBC on just the hub points appears to show significant, periodic oscillatory behavior at the beam tip. In the plot of the tip displacement using RBC on all points, the large amplitude appears to have been replaced by a lower-amplitude displacement. The experimental setup and procedure used to explain the behavioral differences shown in Figure 1 is described in the following section. 3.0 Experimental Setup and Procedure
To study the rigid body correction process and experimentally determine how it works, a rotating wedge system with a flexing element was created. The wedge system with stereophotogrammetry system is shown in Figure 2.
Fig. 2 Experimental setup with stereophotogrammetric measurement system and wedge assembly (view from above)

The wedge system consists of a rigid aluminum wedge with a centrally-located dowel pin that mates with a rigid base plate having a centrally-located hole. The wedge was accurately machined, has an angle of 10 degrees, and can rotate about a vertical axis that is defined by the center of the hole in the horizontal base plate. The wedge also has a flexing element that allows for
measurements to be made at a position extending away from the inclined plane of the wedge. For this experiment, the wedge was incrementally spun through an angular displacement as the tip of the flexing element was incrementally flexed. Prior to performing the experiment, the out-of-plane noise floor was measured by taking three images of the wedge and base plate and averaging the maximum measured displacement for the images. The vertical noise floor was determined to be approximately 0.02 mm. The rotation and flexing procedure for stages 1 through 8 is illustrated in Figure 3. The complete procedure is not shown because the beam flexure cycle occurs twice and the flexural deflection is repeated for different angles. For this experiment, the wedge was rotated from 0 degrees through 180 degrees in 7.5 degree increments and the tip of the beam was incrementally displaced by 0.5 mm for each of these increments, up to a maximum of 3 mm at 45 degrees and then back down to 0 mm at a 90 degree rotation angle. The flexure displacement was again increased up to 3 mm at a 135 degree rotation angle and then back down to 0 mm at 180 degrees. Images were taken with the stereophotogrammetry system for each 7.5 degree increment to yield a total of 25 images. The maximum displacement measurements were taken at the very tip of the beam, and the displacement measurements taken with PONTOSTM will be slightly less because the measurement points were inset from the beam tip. The maximum displacement measured with PONTOSTM was approximately 2.5 mm versus the digital vernier caliper measurement of 3 mm.
Fig. 3 Rotation and flexing positions for the first eight stages of the overall rotation

The static and flexure measurement points and labels are shown in Figure 4. These points will be referenced throughout the paper.
Fig. 4 Wedge isometric view and measurement point labels From Figure 4, one can clearly see the static and flexure points. One might also notice that the flexure beam is fully integrated into the wedge (it is a one-piece system) so that all of the measurement points are essentially in the same plane.
4.0 Experimental Results An image of the wedge and global coordinate system on the base plate is shown in Figure 5a and a plot of the flexural displacement along the length of the beam as a function of rotation angle is shown in Figure 5b.
Fig. 5 a) Wedge at the initial position with the coordinate system shown at the bottom left corner of the base plate; b) Average beam row displacement as a function of rotation angle

From Figure 5, one can see how the global coordinate system was assigned to the static base plate, with the origin at the lower left measurement point and the Z axis normal to the base plate surface. The row labels are also included in the plan view of the wedge. The flexural displacements of the three points in each row are averaged together to produce the displacement graph shown in Figure 5b.
The global coordinate system was reassigned to the wedge surface as shown in Figure 6a and a rigid body correction was performed using the points that are static with respect to one another as shown in Figure 4. Displacement measurements in the Z direction were extracted from the measurement point (labeled in Figure 6a) and this flexural displacement as a function of rotation angle is shown in Figure 6b.
Fig. 6 a) Wedge at initial frame with the global coordinate system affixed to the wedge surface; b) Measured out-of-plane displacement of the measurement point with RBC performed on the static points

From Figure 6b, one can see that the measured displacements of the measurement point are within the noise floor of the measurement, indicating that RBC effectively eliminated the wedge rotation and any minute translations that occurred during the experiment when only the points that are static with respect to one another are used in the correction.
To better understand the RBC algorithm, different combinations of points on the flexure beam were utilized in addition to all of the points that are static with respect to one another on the wedge. The global coordinate system remained the same as that shown in Figure 6a. To show the effect of using different combinations of flexure points in the RBC algorithm on the computed out-of-plane displacement of the same measurement point shown in Figure 6a, the displacement of the point is compared for the various angular positions of the wedge in Figure 7. The RBC was performed using all of the optical targets on the rigid surface plus either 1, 2, or 3 additional points located on the flexural beam. Within the displacement graph, all twelve combinations should generate an identical displacement curve; however, it is evident that the number of optical targets located on the flexural beam that are used in the RBC strongly affects the computed result.
Fig. 7 Measured out-of-plane displacement of a single measurement point as a result of different flexure point combinations used in the RBC The point labels shown in Figure 7 can be seen on the wedge surface in Figure 6a. Normally, one would expect that the measurement point shown in Figure 6a would not show any out-of-plane displacement (Figure 6b), but the plot in Figure 7 shows otherwise. In Figure 7, one might also notice that the phantom displacement of the measurement point shows similar displacement behavior to that of the beam measurement points in Figure 5 and this indicates that the use of flexure points in the RBC has a direct effect on all measurement points on the wedge surface. From Figure 7, it is also clear that using point 10 in the row farthest from the origin has a far greater effect on the RBC process than using point 1 in the row closest to the origin indicating the point amplitude is extremely important. Also as one uses a greater number of points along a given row in the RBC there is a slight increase in the out-of-plane displacement of the measurement point indicating that point quantity and the amplitude of the motion are both important. From a practical perspective, using flexure points 1, 2, and 3 along the first row has little effect on the RBC process because the maximum measured amplitude at the measurement point was only slightly greater than twice the noise floor. In wind turbine or rotor measurement applications which are of interest in this work, it may be necessary to place targets on the root of the turbine blades where there is minimal flexing, because of the very limited area on the hub. Using the same coordinate system shown in Figure 6a, a rigid body correction was performed using the software PONTOSTM using all points on the wedge (including flexure points) and the coordinate locations of the measurement points were plotted with respect to the tip displacement of the beam. This plot is shown in Figure 8.
Fig. 8 Wedge measurement point locations and best-fit plane orientation/position with respect to beam tip deflection

In Figure 8, the beam tip is displaced up through 3.0 mm in 0.5 mm increments, and this plot shows how RBC yields a static, best-fit plane and coordinate system through all frames. To compensate for the static, best-fit coordinate system and the moving points of the flexure beam, the software moves all measurement points. 5.0 Rigid Body Correction Algorithm Development Using the information from Figures 5-8, a mathematical process to reproduce the RBC results shown in Figure 8 from the experimental data was developed. This code was developed without knowledge of the RBC algorithm used in the PONTOSTM software, as it is proprietary. The algorithm is now described. First, the centroids of the point clouds of each stage were calculated. The coordinates and local coordinate systems of two centroids are shown in Figure 9. The subscript '1' indicates the reference stage and the subscript 'n' indicates an arbitrary subsequent stage.
Fig. 9 The coordinates and local coordinate systems of two centroids (c1 and cn)
The coordinates for the centroid $c_1$ are denoted by $\bar{x}_1$, $\bar{y}_1$ and $\bar{z}_1$, and the coordinates for the centroid $c_n$ are denoted by $\bar{x}_n$, $\bar{y}_n$ and $\bar{z}_n$. The reference local z plane and the nth local z plane are labeled as $p_1$ and $p_n$, respectively. The nth centroid of each point cloud can be calculated according to

$$\bar{x}_n = \frac{1}{m}\sum_{i=1}^{m} x_i, \qquad \bar{y}_n = \frac{1}{m}\sum_{i=1}^{m} y_i, \qquad \bar{z}_n = \frac{1}{m}\sum_{i=1}^{m} z_i, \qquad (1a, b, c)$$
where m is the number of points (21) in each point cloud. Using the calculated centroids for each point cloud, the centroids were subtracted from the x, y and z coordinates of the points in each point cloud to translate the center of the point clouds of each stage so that their corresponding centroids reside at the origin of the global coordinate system. Coordinate systems of best fit (also shown in Figure 9 with c1 and cn as centroids) were then calculated using principal component analysis via singular value decomposition (SVD) for each stage. The nth SVD can be written as
$$A_{m\times 3,\,n} = U_{m\times m}\, S_{m\times 3}\, V^{T}_{3\times 3,\,n}, \qquad (2a)$$

and

$$A_{m\times 3,\,n} = \begin{bmatrix} x_1-\bar{x}_n & y_1-\bar{y}_n & z_1-\bar{z}_n \\ x_2-\bar{x}_n & y_2-\bar{y}_n & z_2-\bar{z}_n \\ \vdots & \vdots & \vdots \\ x_m-\bar{x}_n & y_m-\bar{y}_n & z_m-\bar{z}_n \end{bmatrix} = \begin{bmatrix} x_{01} & y_{01} & z_{01} \\ x_{02} & y_{02} & z_{02} \\ \vdots & \vdots & \vdots \\ x_{0m} & y_{0m} & z_{0m} \end{bmatrix}_n, \qquad (2b)$$
where the columns of U are the eigenvectors of AAT, the columns of V are the eigenvectors of ATA and the diagonal values in S are the square roots of the eigenvalues of AAT and ATA. Overall, the columns of U are orthogonal with respect to one another. The first column of V corresponds to the unit vector denoting the slope of the best-fit line for each point cloud, the third column of V corresponds to the unit normal vector for the best-fit plane of the point cloud and the second column of V is the cross product of the first two unit vectors. In this way, a local coordinate system of best fit can be established for each frame of data. A paper comparing four different 3-D rigid body transformation algorithms including SVD was written by Eggert [13]. A check was also written into the code to correct for the possibility of a 180 degree rotation error in the X-Y plane if the SVD fails. The SVD can also fail if the point cloud exhibits sufficient circular symmetry. After translating each point cloud and calculating the coordinate systems of best fit, the relative angles between subsequent local coordinate systems and the reference local coordinate system were determined. An image of two local coordinate systems and reference angles is shown in Figure 10. The angles were calculated using the atan2 function in MATLAB.
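A minimal NumPy sketch of this best-fit coordinate system construction (an illustration under the stated interpretation of Eq. (2), with all variable names assumed, not the authors' MATLAB code) might look like:

```python
import numpy as np

def best_fit_frame(points):
    """Centroid and best-fit local axes of a 3-D point cloud via SVD
    (principal component analysis), mirroring Eq. (2): the first right
    singular vector spans the best-fit line, the third is the normal of
    the best-fit plane, and the second completes the triad."""
    points = np.asarray(points, dtype=float)
    centroid = points.mean(axis=0)
    _, _, vt = np.linalg.svd(points - centroid)
    x_axis, normal = vt[0], vt[2]
    y_axis = np.cross(normal, x_axis)      # completes an orthonormal triad
    return centroid, np.column_stack([x_axis, y_axis, normal])

# Points scattered on a plane tilted by 10 degrees: the recovered
# normal should match the plane's true normal up to sign.
rng = np.random.default_rng(1)
uv = rng.uniform(-1.0, 1.0, size=(21, 2))
tilt = np.radians(10.0)
true_normal = np.array([0.0, np.sin(tilt), np.cos(tilt)])
e1 = np.array([1.0, 0.0, 0.0])
e2 = np.cross(true_normal, e1)
pts = np.array([1.0, 2.0, 3.0]) + uv[:, :1] * e1 + uv[:, 1:] * e2
centroid, axes = best_fit_frame(pts)
```

Note that, as in any PCA fit, the singular vectors are defined only up to sign, which is why a check for a spurious 180 degree rotation (mentioned above) is needed in practice.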
Fig. 10 Two coordinate systems and reference angles

Using Figure 10, a procedure can be followed to rotate coordinate system n to coordinate system 1, where coordinate system 1 is the reference (first) coordinate system. First, coordinate system 1 and each coordinate system n were rotated by α1 and αn, respectively, so that the local x axes nx1 and nxn lie in the global X-Z plane, so that an isolated rotation can be performed about the Y axis. The first rotation matrix can be formulated as
$$R_{Z1,n} = \begin{bmatrix} \cos(-\alpha_n) & \sin(-\alpha_n) & 0 \\ -\sin(-\alpha_n) & \cos(-\alpha_n) & 0 \\ 0 & 0 & 1 \end{bmatrix}, \qquad (3)$$
where αn is the angle between the projection of each local x axis on the global X-Y plane and the global X axis. A second rotation is then performed about the Y axis to set the angle between each local z axis and the global Z axis equal to that of the reference (first) coordinate system. The second rotation matrix can be formulated according to
$$R_{Y1,n} = \begin{bmatrix} \cos[-(\gamma_1-\gamma_n)] & 0 & -\sin[-(\gamma_1-\gamma_n)] \\ 0 & 1 & 0 \\ \sin[-(\gamma_1-\gamma_n)] & 0 & \cos[-(\gamma_1-\gamma_n)] \end{bmatrix}, \qquad (4)$$
where $-(\gamma_1-\gamma_n)$ is the required difference angle of rotation about the Y axis. A third rotation is performed about the X axis to set βn equal to β1; this rotation matrix is formulated as
$$R_{X1,n} = \begin{bmatrix} 1 & 0 & 0 \\ 0 & \cos(\beta_1-\beta_n) & -\sin(\beta_1-\beta_n) \\ 0 & \sin(\beta_1-\beta_n) & \cos(\beta_1-\beta_n) \end{bmatrix}, \qquad (5)$$
where β1-βn is the required difference angle of rotation about the X axis. A final rotation is then performed about the Z axis to rotate all stages back to the original angle between the projection of the local reference x axis and the global X axis, and this final rotation is assembled as
R_{Z2} = \begin{bmatrix} \cos(\alpha_1) & \sin(\alpha_1) & 0 \\ -\sin(\alpha_1) & \cos(\alpha_1) & 0 \\ 0 & 0 & 1 \end{bmatrix} .   (6)
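The four rotations of Eqs. (3)-(6) can be composed in code. A minimal NumPy sketch follows (the paper's implementation used MATLAB; function and variable names here are illustrative):

```python
import numpy as np

def rz(a):
    """Rotation about Z, in the convention of Eqs. (3) and (6)."""
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, s, 0.0], [-s, c, 0.0], [0.0, 0.0, 1.0]])

def ry(a):
    """Rotation about Y, in the convention of Eq. (4)."""
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, 0.0, -s], [0.0, 1.0, 0.0], [s, 0.0, c]])

def rx(a):
    """Rotation about X, in the convention of Eq. (5)."""
    c, s = np.cos(a), np.sin(a)
    return np.array([[1.0, 0.0, 0.0], [0.0, c, -s], [0.0, s, c]])

def rbc_rotation(alpha1, alphan, gamma1, gamman, beta1, betan):
    """Net rotation taking local frame n back to reference frame 1:
    Eqs. (3), (4), (5), (6) applied in order to a column vector."""
    return rz(alpha1) @ rx(beta1 - betan) @ ry(-(gamma1 - gamman)) @ rz(-alphan)

# If frame n already has the reference angles, no net rotation is needed:
print(np.allclose(rbc_rotation(0.3, 0.3, 0.1, 0.1, 0.2, 0.2), np.eye(3)))  # prints True
```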
After performing the required rotations on each point cloud about the origin of the global coordinate system, the original centroid of the reference frame is added to all stages to complete the RBC. All of the rotation matrices are formulated using the rotation matrix summary written by Tsai [14]. The following section will describe the dynamic error that can be induced into system measurements when all measurement points on a dynamic system are used for rigid body correction; this will be illustrated through the RBC of a simulated flexing beam and the rotating/flexing wedge system.

6.0 Dynamic Error Description
The RBC algorithm described in the previous section will now be used in an example to show how this algorithm has a significant effect on the perceived displacement measurements of a flexing system when all measurement points are used in the RBC. For simplicity, the problem will first be illustrated using a simulation of a flexing beam with points that are rigid and others that are deformed, analogous to the flexing wedge described earlier. The beam is 101.40 mm (4.00 in.) long and is composed of 9 nodes and 8 elements in which the first 5 nodes do not move and the remaining 4 nodes translate due to a tip displacement at node 9 that will increase in 0.75 mm increments over a length of 3.00 mm. The results of the simulation are shown in Figure 11.
Fig. 11 a) Unflexed beam with labeled nodes; b) Incrementally-flexed beam with corresponding lines of best fit (BFL); c) Translated beams so that the centroid of each line is at the global origin; d) Incrementally-flexed beam after RBC is performed
In Figure 11a, the beam can be seen in its initial state, where the black circles represent rigid nodes that do not deform and the red squares represent flexure nodes. Figure 11b shows the incremental flex of the beam and the corresponding lines of best fit for each state. The displacement of the beam was then translated so that the centroid of each flexure increment is located at the global origin, as shown in Figure 11c. Finally, the data was rotated so that each subsequent best-fit line had the same slope as the reference best-fit line (angle of zero degrees) and the data was translated so that the centroids of all stages were located at the original location of the reference stage, as shown in Figure 11d. From Figure 11d, one can see the phantom displacements induced into the data at the static nodes because of the inclusion of the flexure points in the RBC. The maximum phantom displacement for the static points can be seen at the first node, and this maximum phantom displacement is approximately 0.60 mm. Overall, to correctly perform an RBC and prevent the addition of dynamic error into the data, only points that are static with respect to one another across all frames should be used in the RBC. If an RBC is performed using only the stationary nodes, the results are equivalent to what is shown in Figure 11b minus the best-fit lines. In effect, the use of flexure points in an RBC rotates and translates all data points to compensate for the changing best-fit coordinate systems, keeping the best-fit coordinate system stationary across all frames. The MATLAB RBC algorithm was also performed on the rotating/flexing wedge using data exported from PONTOSTM, in which the RBC was performed on the base plate so that the wedge data contained rotation and translation.
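The effect shown in Figure 11 can be reproduced with a short 2-D simulation. This is a sketch: the nine nodes and the 3.0 mm tip displacement follow the text, but the quadratic flexure shape and the helper function are illustrative, not the paper's actual code:

```python
import numpy as np

def rbc_2d(nodes, fit_idx, ref_centroid):
    """2-D RBC: rotate/translate so the best-fit line through
    nodes[fit_idx] is level and its centroid returns to ref_centroid."""
    c = nodes[fit_idx].mean(axis=0)
    centered = nodes - c
    fx, fy = (nodes[fit_idx] - c).T
    theta = np.arctan2((fx * fy).sum(), (fx * fx).sum())  # best-fit slope angle
    ct, st = np.cos(theta), np.sin(theta)
    R = np.array([[ct, st], [-st, ct]])      # rotation by -theta levels the line
    return centered @ R.T + ref_centroid

x = np.linspace(0.0, 101.4, 9)               # 9 nodes, 8 elements (mm)
static = np.arange(5)                        # nodes 1-5 are rigid
ref = np.column_stack([x, np.zeros(9)])      # unflexed reference

flex = np.zeros(9)                           # illustrative quadratic flexure with
s = (x[5:] - x[4]) / (x[-1] - x[4])          # a 3.0 mm displacement at node 9
flex[5:] = 3.0 * s**2
flexed = np.column_stack([x, flex])

corr_all = rbc_2d(flexed, np.arange(9), ref.mean(axis=0))
corr_static = rbc_2d(flexed, static, ref[static].mean(axis=0))

phantom_all = np.abs(corr_all[0] - ref[0]).max()        # node 1, all points in fit
phantom_static = np.abs(corr_static[0] - ref[0]).max()  # node 1, static points only
print(phantom_all, phantom_static)   # nonzero phantom motion vs. essentially zero
```

Fitting through all nine nodes tilts the best-fit line, so de-rotating drags the rigid nodes; fitting through only the five static nodes leaves them untouched.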
The RBC algorithm was used to de-rotate the wedge using coordinate systems of best fit and replicated the phantom displacement induced into the data when all wedge measurement points are used in the RBC process. To simplify the process, a 3-2-1 transformation was performed in PONTOSTM using the three outermost points of the wedge to establish the Z plane, the central point and the central cantilever beam tip point to establish the Y plane and the central point to establish the X plane [15]. The wedge and new coordinate system are shown in Figure 12. The Z axis is in the positive normal direction to the wedge surface.
Fig. 12 Wedge with new coordinate system to verify the RBC algorithm

An RBC was then performed using just the measurement points of the base plate, and the x, y, and z coordinates for all measurement points and stages were exported as an ASCII file; this serves as the data set on which the previously-described RBC algorithm is performed. A second RBC was also performed using all of the wedge measurement points, and the x, y, and z coordinates were exported for all stages and measurement points. This serves as the reference data set to validate the RBC algorithm explained previously. All exports were performed with PONTOSTM. A comparison between the MATLAB RBC data set and the PONTOSTM RBC data set is shown in Figure 13.
Fig. 13 Comparison between the MATLAB RBC algorithm using all wedge points and the commercial software (PONTOSTM) RBC using all wedge points

From Figure 13, one can see that the phantom displacement induced by the commercially available software using all of the wedge measurement points was essentially identical to the RBC algorithm previously described that was processed in MATLAB. One might also notice that the MATLAB RBC algorithm successfully subtracted the rotation from the data sets. Overall, the plot in Figure 13 supports the conclusions made using the analysis shown in Figure 11. If RBC is performed using points that move with respect to one another, the data will be tainted by phantom displacements and rotations caused by the translation and rotation of these data points in order to maintain a static best-fit coordinate system across all stages.
To reduce any potential static error at the extents of the rotating structure, it is extremely important, when possible, to establish the global coordinate system over as large an area, and using as many measurement points, as possible. Unfortunately, the use of only three points to establish the Z plane in the 3-2-1 transformation may not be the best representation of the total least-squares surface of the rotating system, and this may result in a significant out-of-plane height difference that is inappropriate for the test system. The choice of points in establishing the global coordinate system is important because it affects the static, absolute error and also the measured out-of-plane displacement.

7.0 Static Error Description

Using the same beam geometry from Figure 11, a study was conducted to illustrate the static error problem described above when the coordinate system is established over different size regions. Random initial z values were assigned to the x locations from Figure 11, and two lines representing the X axis were fit through two different point selections. The random z values represent surface imperfections that will show up in the measurement point locations. The plot of this data is shown in Figure 14.
Fig. 14 a) Unflexed beam with labeled nodes; b) Initial point locations for the unflexed beam with lines of best fit for different point selections

From Figure 14, one can see that using different combinations of points to establish the X axis yields very different coordinate values at node 9. When these slope differences are extrapolated out to the extents of rotating structures, they can become quite significant. The first two lines in the legend represent the 3-2-1 transformation utilized by PONTOSTM to establish the global coordinate system; the software requires only two points to establish an axis, and this can yield very different initial coordinates for the same measurement points. This is analogous to using just three hub measurement points of a turbine to establish the Z plane; there may be a significant difference between the initial, out-of-plane heights of the blades due to possible plane tilt caused by slight differences in target thicknesses and irregularities in the surface of the rotor hub. The third line represents the X axis calculated via least-squares minimization and may yield a more appropriate X axis for the system, reducing initial static error. Similarly, a static coordinate system analysis was also performed on the wedge and coordinate system layout, and the results are shown in Figure 15.
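The two-point versus least-squares axis comparison of Figure 14 can be illustrated numerically. This sketch uses made-up surface imperfections, not the actual data from the study:

```python
import numpy as np

# Node x-locations as in the beam example, with made-up z imperfections (mm)
x = np.linspace(0.0, 101.4, 9)
z = np.array([0.00, 0.06, -0.03, 0.02, -0.04, 0.05, -0.02, 0.03, 0.01])

# Axis defined by only the first two points (two-point definition)
m2 = (z[1] - z[0]) / (x[1] - x[0])
z_2pt = z[0] + m2 * (x - x[0])

# Least-squares axis through all nine points
m_ls, b_ls = np.polyfit(x, z, 1)
z_ls = m_ls * x + b_ls

# z error at node 9, the extent of the structure
err_2pt = abs(z[-1] - z_2pt[-1])
err_ls = abs(z[-1] - z_ls[-1])
print(err_2pt, err_ls)   # the two-point axis extrapolates a much larger error
```

A millimeter-scale error appears at the far node from a few hundredths of a millimeter of surface noise when the axis is anchored to only two nearby points, while the least-squares axis stays close to the data.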
Fig. 15 a) Coordinate system orientation on the wedge; b) Initial, static error of each row along the beam length

From Figure 15, one can see how choosing a coordinate system using only three static points on the wedge surface can yield static error along the beam length. The maximum static error occurs at row 4, where the initial position is approximately 6 times the noise floor at 0.12 mm. The wedge surface was machined extremely flat; therefore, this coordinate system is not an acceptable representation of the wedge surface, because it results in a false initial position, as shown in Figure 15b, that is essentially non-existent when all measurement points on the wedge surface are used. If the global coordinate system utilized a plane of best fit through all of the measurement points as the Z plane, this error would have been significantly reduced. This error can also affect the measured out-of-plane displacement of a turbine system. A cross-section of a wind turbine blade with two different coordinate systems resulting from this plane-tilt error is shown in Figure 16.
Fig. 16 The effect of different coordinate systems on the measured out-of-plane displacement of a wind turbine blade

In Figure 16, the X-Z coordinate system represents the ideal coordinate system for measuring blade displacements, and the X'-Z' coordinate system represents the coordinate system with plane-tilt error. Although the error shown in Figure 16 is highly exaggerated, it does have a slight effect on the measured out-of-plane blade displacement. An improved RBC method is explained in the following section.

8.0 Improved RBC Method
It was shown that an effective RBC could be performed using a limited number of points that are effectively static with respect to one another and this would prevent the addition of any phantom rigid body movement into the data. It was also shown that establishing the Z plane of a global coordinate system using only 3 points on the rotating system surface may be a poor representation of the median system surface and may result in significant z coordinate differences at the extents of the structure.
This error can be reduced by setting the global Z plane equal to a best-fit plane through all measurement points in the reference frame (stage 1). An improved RBC method is summarized in the following steps.

1. Establish a best-fit plane through as many points as possible in the reference stage (when the system is stationary).
2. Set the origin as close to the axis of rotation as possible and establish a line of best fit through this point and some convenient set of additional points out to the extents of the measurement point cloud.
3. Establish the global coordinate system using the best-fit plane as the Z plane, the line of best fit as the X plane, and the origin as the Y plane.
4. Select points (at least 3) on the rotating system that are static with respect to one another or have minimal relative displacement with respect to one another. It is advisable to use more than 3 points to minimize RBC error due to measurement noise. Depending on the structure, one may need to perform an RBC sensitivity analysis to determine which measurement points can be used in the RBC without seriously compromising the data if there are not a sufficient number of points that are static with respect to one another.

9.0 Conclusion

Dynamic measurements have previously been taken on a rotating, 1.17 meter Southwest Windpower Air BreezeTM wind turbine, and these measurements were processed in two ways by performing RBC using two different sets of measurement points. RBC was first performed using all turbine measurement points, yielding a phantom wobble superposed with a high-frequency oscillation. The data was processed again using a second RBC on just the hub measurement points, and this yielded very different dynamic behavior. An experiment utilizing a rotating wedge and a flexing element was developed to characterize and better understand the RBC process, and it was found that RBC is sensitive to both flexure point quantity and flexure point displacement amplitude.
This analysis also revealed that points with minimal displacement at the beam base could be used in RBC, yielding phantom displacement values only slightly higher than the noise floor. This information can be applied to the processing of out-of-plane displacement data on a real wind turbine, where targets must be applied to the roots of the blades because of the shape and size of the rotor hub/cone. It was also found that the RBC described in this paper generated results similar to the RBC algorithm computed by the PONTOSTM RBC code. The RBC algorithm was found to successfully replicate the phantom wobble induced into the data set when all (static and flexure) wedge measurement points were used. Dynamic and static errors were described using both a simple beam simulation and experimental data from the wedge system. To perform RBC on a rotating structure, the Z plane of the global coordinate system should be established using a best-fit plane through all structure measurement points in the reference stage, and the origin should be placed as close as possible to the axis of rotation. The RBC should be performed using the largest possible number of points that are static with respect to each other and, if necessary, points with minimal displacement. This minimal displacement must be determined experimentally by comparing the out-of-plane displacement of a best-fit plane at the extents of the structure to the noise floor.

10.0 References
[1] Helfrick, M., Niezrecki, C., Avitabile, P., Schmidt, T., "3D Digital Image Correlation Methods for Full-Field Vibration Measurement," Mechanical Systems and Signal Processing, Vol. 25, pp 917-927, 2011.
[2] Helfrick, M., Niezrecki, C., and Avitabile, P., "Optical Non-contacting Vibration Measurement of Rotating Turbine Blades," Proceedings of IMAC-XXVII, Orlando, FL, 2009.
[3] Sever, I. A., Stanbridge, A. B., Ewins, D. J., "Turbomachinery Blade Vibration Measurements with Tracking LDV Under Rotation," Proceedings of SPIE - The International Society for Optical Engineering, Seventh International Conference on Vibration Measurements by Laser Techniques: Advances and Applications, Vol. 6345, pp 63450L, 2006.
[4] Stanbridge, A. B., Martarelli, M., Ewins, D. J., "Rotating Disc Vibration Analysis with a Circular-Scanning LDV," Proceedings of the International Modal Analysis Conference - IMAC, Vol. 1, pp 464-469, 2001.
[5] Halkon, B. and Rothberg, S., "A Comprehensive Velocity Sensitivity Model for Scanning and Tracking Laser Doppler Vibrometry on Rotating Structures," Proceedings of the SPIE - The International Society for Optical Engineering, Vol. 4827, pp 9-21, 2002.
[6] Paulsen, U. S., Erne, O., Moeller, T., Sanow, G., Schmidt, T., "Wind Turbine Operational and Emergency Stop Measurements Using Point Tracking Videogrammetry," Proceedings of the 2009 SEM Annual Conference and Exposition, Albuquerque, NM, 2009.
[7] Ozbek, M., Rixen, D. J., Erne, O., Sanow, G., "Feasibility of Monitoring Large Wind Turbines Using Photogrammetry," The 3rd International Conference on Sustainable Energy and Environmental Protection, SEEP 2009, Vol. 35, Issue 12, pp 4802-4811, 2010.
[8] Gu, S., McNamara, J. E., Mitra, J., Gifford, H. C., Johnson, K., Gennert, M. A., King, M. A., "Body Deformation Correction for SPECT Imaging," IEEE Trans Nucl Sci., Vol. 4, pp 2708-2714, 2007.
[9] Blanz, V., Mehl, A., Vetter, T., Seidel, H., "A Statistical Method for Robust 3D Surface Reconstruction from Sparse Data," Int. Symp. on 3D Data Processing, Visualization and Transmission, pp 293-300, 2004.
[10] Warren, C., "Modal Analysis and Vibrations Applications of Stereophotogrammetry Techniques," Master's Thesis, University of Massachusetts Lowell, 2010.
[11] "Southwest Windpower | Air X and Air Breeze small wind generators for remote homes, sailboats, offshore platforms and more," Southwest Windpower, http://www.windenergy.com/products/air.htm, Accessed February 5, 2011.
[12] Warren, C., Niezrecki, C., and Avitabile, P., "Optical Non-contacting Vibration Measurement of Rotating Turbine Blades II," Proceedings of IMAC-XXVII, Orlando, FL, 2010.
[13] Eggert, D. W., Lorusso, A., Fisher, R. B., "3-D Rigid Body Transformations: A Comparison of Four Major Algorithms," Machine Vision and Applications, Vol. 9, pp 272-290, 1997.
[14] Tsai, L. W., Robot Analysis: The Mechanics of Serial and Parallel Manipulators, John Wiley and Sons, Inc., pp 37-38, 1999.
[15] PONTOS v6.2 User Manual, Mittelweg 7-8, D-38106 Braunschweig, Germany, 2009.
Development of Sampling Moire Camera for Landslide Prediction by Small Displacement Measurement
1Makiko NAKABO, 2Motoharu FUJIGAKI, 3Yoshiharu MORIMOTO, 2Yuji SASATANI, 1Hiroyuki KONDO, 2Takuya HARA
1 Graduate School of Systems Engineering, Wakayama University, 930 Sakaedani, Wakayama 640-8510, Japan
2 Department of Opto-Mechatronics, Faculty of Systems Engineering, Wakayama University, 930 Sakaedani, Wakayama 640-8510, Japan
3 Moire Institute Inc., 2-1-4-840 Hagurazaki, Izumisano, Osaka 598-0046, Japan
ABSTRACT

The ability to measure the small displacement caused by a landslide is important. If it were possible to measure the displacement before the landslide occurs, people would be able to evacuate and reach a safe area. Therefore, we developed a detection system based on the small displacement measurement of a landslide. This system uses a displacement measurement technique known as the sampling moire method, which is capable of analyzing the phase values from a single image of a grating pattern. Because of this, the detection system is very simple and compact. In addition, it analyzes displacement very accurately. The phase is calculated from several phase-shifted fringe patterns that are obtained by changing the sampling phase, and the displacement is analyzed by determining the difference in the phase before and after displacement. However, the displacement results contain errors such as stir and air turbulence, which complicate landslide prediction. The level of error can be decreased by averaging the displacement results that contain errors. Therefore, we developed a sampling moire camera that is capable of rapidly acquiring images and performing a high-speed analysis. The sampling moire camera uses the sampling moire method to analyze the phase. In this paper, we describe the sampling moire camera's ability to predict landslides by measuring small displacement. We also conduct experiments that test the system.

INTRODUCTION

It is very useful to measure the small displacement that is required to predict a landslide [1]. If this displacement were known, warnings could be provided that would allow people to escape from the disaster. Therefore, in our laboratory, we developed a detection system that measures the small displacement that precedes a landslide, using the sampling moiré method as the measurement technique [2]-[4].
Unlike the phase-shifting method, the sampling moire method is capable of analyzing phase values from one image of a grating pattern [5]. Because of this, the detection system is very simple and compact. In addition, it is faster than other methods. When attempting to predict a landslide based on displacement, the speed of measurement is very important: if the measurement is slow, a landslide will have already occurred by the time the system predicts it. Our system calculates the phase from images using the sampling moiré method and analyzes the displacement from the phase difference. However, the advantages of the sampling moire method are negated by the amount of phase calculation time required by the computer, which depends on the computer's performance. In addition to the slow calculation time, the outputs contained errors including stir and air turbulence. We believe that if the camera itself were able to calculate the phase, the calculation time would be reduced; in this case, the computer is only required to calculate the displacement from the phases. In this paper, we describe the development of a sampling moire camera. This camera is capable of analyzing the phase of a 2-D grating in real-time and outputting the phase distribution. We performed a displacement measurement experiment that demonstrated the accuracy of the sampling moire camera.
T. Proulx (ed.), Optical Measurements, Modeling, and Metrology, Volume 5, Conference Proceedings of the Society for Experimental Mechanics Series, DOI 10.1007/978-1-4614-0228-2_39, © The Society for Experimental Mechanics, Inc. 2011
PRINCIPLE

Principle of the Sampling Moire Method

Figure 1 shows the moire fringe patterns that are determined by the sampling moire method. This figure shows only three horizontal lines; in reality, the number of sampling lines corresponds to the camera's resolution. Figure 1(a) shows a deformed grating pattern attached to the specimen. The pitch of the grating is 1.125 times larger than that of the sampling points. Figure 1(b) shows a recorded image, which does not show a moire fringe pattern. Figure 1(c) shows the moire fringe patterns obtained when the sampling start points of the camera are changed; that is, every N-th pixel (N = 4 in the figure) starting from the first, second, third, and fourth pixels is chosen from Fig. 1(b). This process corresponds to phase-shifting of the moire fringe pattern. The sampled images shown in Fig. 1(c) are interpolated using the neighboring data; Figure 1(d) shows the linearly interpolated images of Fig. 1(c). In this way, multiple phase-shifted moire images can be obtained from a single image. The k-th phase-shifted image is expressed approximately as follows:
I_k(i, j) = I_a(i, j)\cos\left[\theta(i, j) + \frac{2\pi k}{N}\right] + I_b(i, j), \quad (k = 0, 1, \ldots, N-1),   (1)
where Ib(i, j) represents the background intensity in the image, which is insensitive to a change in phase; Ia(i, j) represents the amplitude of the grating intensity; and θ(i, j) is the initial phase value. The phase distribution of the moire pattern can be obtained by a discrete Fourier transform (DFT) algorithm using Eq. (2) or by a phase-shifting method using a Fourier transform (PSM/FT) [5].
\tan\theta(i, j) = -\frac{\sum_{k=0}^{N-1} I_k(i, j)\sin(2\pi k/N)}{\sum_{k=0}^{N-1} I_k(i, j)\cos(2\pi k/N)}   (2)
The phase θm of the moire pattern is the difference between the grating phase θg and the phase θr of the reference grating (the sampling phase) as follows:

θm = θg - θr .   (3)

The phase θr of the reference grating is a constant; therefore, the phase θg of the grating pattern can be calculated from the phase θm of the moire pattern. As in most fringe analysis methods, the distribution of phase differences of the grating patterns before and after deformation gives the displacement distribution. The phases θm0 and θm1 of the moire pattern before and after deformation are expressed in the following equations, respectively:

θm0 = θg0 - θr ,   (4)
θm1 = θg1 - θr ,   (5)

where θg0 and θg1 are the phases of the grating pattern before and after deformation, respectively. The phase difference Δθg of the grating patterns before and after deformation can be obtained from the difference between θm0 and θm1, as shown in Eq. (6):

Δθg = θg1 - θg0 = θm1 - θm0 .   (6)

Phase Analysis for a 2-D Grating

Figure 2 shows some examples of two-dimensional phase analysis using the sampling moire method. Figure 2(a) shows a 2-D grating image that was captured by the camera. Figure 2(b) shows a grating image obtained after a smoothing process in the y-direction. Using this process, the x-directional phase distribution can be analyzed because the 2-D grating is reduced to a one-dimensional grating pattern image. Figure 2(c) shows the phase-shifted sampling moire images from Fig. 2(b). Figure 2(d) shows the phase distribution for the x-direction that was derived from Figure 2(c) using the phase-shifting method. Figure 2(e) shows a grating image obtained after the x-directional smoothing process. Figures 2(f) and (g) show the phase-shifted sampling moire images and the y-directional phase distribution, which were produced in the same way.
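The sampling, phase-shifting, and DFT phase calculation above can be sketched for a 1-D profile. This is an illustrative NumPy sketch, not the camera's FPGA implementation; the synthetic grating pitch and shift are made up:

```python
import numpy as np

def sampling_moire_phase(intensity, N):
    """Moire phase at every pixel of a 1-D grating profile (Eqs. (1)-(2)).

    Every N-th pixel is taken at N start offsets and linearly interpolated
    back to full resolution, giving N phase-shifted moire images; the
    phase then follows from the DFT formula of Eq. (2).
    """
    x = np.arange(len(intensity))
    I = np.array([np.interp(x, x[k::N], intensity[k::N]) for k in range(N)])
    k = np.arange(N)[:, None]
    num = (I * np.sin(2 * np.pi * k / N)).sum(axis=0)
    den = (I * np.cos(2 * np.pi * k / N)).sum(axis=0)
    return np.arctan2(-num, den)

# Synthetic grating with pitch 4.5 px sampled at N = 4 (pitch ratio 1.125,
# as in Fig. 1); a 0.9 px shift changes the phase by 2*pi*0.9/4.5 rad.
x = np.arange(400)
pitch = 4.5
phase0 = sampling_moire_phase(1 + 0.5 * np.cos(2 * np.pi * x / pitch), 4)
phase1 = sampling_moire_phase(1 + 0.5 * np.cos(2 * np.pi * (x - 0.9) / pitch), 4)
dphi = np.angle(np.exp(1j * (phase1 - phase0)))          # wrapped difference, Eq. (6)
dx_meas = np.median(dphi[50:350]) * pitch / (2 * np.pi)  # displacement, Eq. (7)
print(abs(dx_meas))   # close to the applied 0.9 px shift (sign depends on convention)
```

The median over the interior pixels suppresses the small ripple introduced by the linear interpolation near the image edges.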
(a) Deformed grating pattern on the specimen, (b) Image recorded by a digital camera, (c) Moire fringe patterns obtained when every N-th (N = 4) pixel from the first, second, third and fourth sampling points is taken from (b), (d) Linearly interpolated images (phase-shifted sampling moire images) from (c), (e) Phase θm distribution analyzed from (d)
Figure 1: Sampling moire method
(a) 2-D grating image, (b) Grating image obtained after a smoothing process for the y-direction, (c) Phase-shifted sampling moire images taken from (b), (d) x-directional moire phase distribution, (e) Grating image obtained after a smoothing process for the x-direction, (f) Phase-shifted sampling moire images produced from (e), (g) y-directional moire phase distribution Figure 2: Phase analysis for two-dimensional grating using the sampling moire method Process of Displacement Measurement from the Phase of the Moire A camera was used to record the grating and the recorded image was analyzed using the sampling moire method. The phase difference between the default position image of the grating and the moved position image of the grating dφ is related to the displacement dx and the grating pitch p; this relationship is expressed in Eq. (7).
dx = \frac{d\phi}{2\pi} p   (7)

The displacement dx is obtained from the phase difference dφ of the moire and the grating pitch p, as shown in Eq. (7). If the grating is 2-D, the x-directional and y-directional displacements can be measured with the x-directional and y-directional grating pitches, respectively.
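Eq. (7) amounts to a one-line conversion; a hedged sketch follows (the function name is illustrative, not an interface of the actual system):

```python
import numpy as np

def displacement_from_phase(phase0, phase1, pitch):
    """Displacement via Eq. (7), dx = dphi/(2*pi) * p, with the phase
    difference wrapped to (-pi, pi] so that displacements up to half a
    grating pitch are unambiguous."""
    dphi = np.angle(np.exp(1j * (np.asarray(phase1) - np.asarray(phase0))))
    return dphi / (2 * np.pi) * pitch

# With the 4.0 mm pitch of Experiment 1, a phase change of pi/10 rad
# corresponds to a 0.2 mm displacement:
print(displacement_from_phase(0.0, np.pi / 10, 4.0))   # ~0.2 mm
```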
The Sampling Moire Camera

Figure 3(a) shows a block diagram, and Fig. 3(b) shows a photograph of the sampling moire camera. This camera is composed of a CMOS sensor, an FPGA, memory modules, a heat sink, a USB interface, and a power unit. The algorithm mentioned above is written into the FPGA, which analyzes a 2-D grating image taken by the CMOS sensor in real-time. The size of the camera is 131 mm (width) x 84 mm (height) x 179 mm (depth), and it weighs 1.8 kg. The effective pixel size is 1024 x 1024 pixels. The frame rate is 3.5 fps in high-resolution mode (1024 x 1024 pixels), 11 fps in normal mode (512 x 512 pixels), and 67 fps in high-speed mode (128 x 128 pixels).
(a) Block diagram
(b) Photograph Figure 3: Sampling Moire camera
EXPERIMENTS

Experiment 1: Close-Range Displacement Measurement

This experiment shows the accuracy of displacement measurement at close range using the sampling moire camera and a 2-D grating. The grating pitches in both directions were 4.0 mm. (The precision of the experiment was maintained by placing the camera on a vibration isolation table.) A grating panel was placed on an electronic motion stage 1 m from the sampling moire camera. Figure 4(a) shows a diagrammatic illustration of the experiment, and Figure 4(b) shows a photograph. In the experiment, the grating was given a displacement in the x-direction. The displacement was incremented by 0.2 mm from 0 mm to 1.0 mm by the electronic motion stage. The displacements were analyzed by averaging all of the pixel data in the area shown in Fig. 4(c); the analysis area was 100 x 100 pixels. Table 1 and Fig. 5(a) show the results of the averaged displacement of the analysis area. Figure 5(b) shows the error of the averaged displacement measurement. The average error was 0.003 mm; therefore, the analyzed displacement was very accurate.
(a) Diagrammatic illustration
(b) Photograph
(c) Grating panel and motion stage Figure 4: Close range experimental setup
(a) Displacement measurement Figure 5: Result of the displacement average
(b) Error
Table 1: Result of the displacement average

Given displacement [mm]   Measured displacement [mm]   Error [mm]   Standard deviation [mm]
0.000                     0.003                        0.003        0.001
0.200                     0.204                        0.004        0.001
0.400                     0.403                        0.003        0.001
0.600                     0.604                        0.004        0.001
0.800                     0.803                        0.003        0.001
1.000                     1.002                        0.002        0.001
Experiment 2: Long-Range Displacement Measurement

This experiment shows the accuracy of long-range displacement measurement using the sampling moire camera. The grating panels were placed 130 m from the sampling moire camera. In this experiment, one of the 2-D grating panels was set on a motion stage and the other was fixed. The grating pitches in both the x- and y-directions were 15.0 mm. Figure 6(a) shows a diagrammatic illustration of the experiment, and Figure 6(b) shows a photograph. In the experiment, the grating was given a displacement in both the x-direction and the y-direction. The displacement was applied in 1.0 mm increments from 0.0 mm to 3.0 mm by the motion stage. The results of the displacement measurement were obtained by averaging the pixel data in the analysis area shown in Fig. 6(c); in this case, the analysis area was 70 x 70 pixels.
(a) Diagrammatic illustration
(c) Averaging area Figure 6: Long-range experimental setup
(b) Photograph
(a) x-directional displacement (b) y-directional displacement Figure 7: Results of the displacement average including the air turbulence
(a) x-directional displacement; (b) y-directional displacement
Figure 8: Results of the displacement measurement

Table 2: Results of the displacement measurement

Given displacement [mm]   Measured x-direction [mm]   Measured y-direction [mm]
0.000                     -0.063                      -0.119
1.000                      0.948                       0.905
2.000                      2.010                       1.940
3.000                      3.053                       3.046

Figure 7 shows the displacement measurement average in the analysis area. The red line in Fig. 7 shows the moving averages of 50 results. Figure 7(a) shows the x-directional displacement, and Figure 7(b) shows the y-directional displacement. In Fig. 7, air turbulence accounts for the error. In Fig. 7(a), the x-directional displacement was measured from -1.0 mm to 2.0 mm. The y-directional displacement was measured from 0.25 mm to 3.25 mm, as shown in Fig. 7(b). The total measured displacements were 3.0 mm in both cases; therefore, the displacements were measured properly. The air turbulence was canceled out by taking the difference between the motion-stage panel displacements and the fixed panel displacements, as shown in Fig. 8. Table 2 shows the results of the displacement measurement without air turbulence. The average error for the x-direction was 0.045 mm, and the average error for the y-direction was 0.080 mm. The camera is capable of measuring displacements at a sub-millimeter level.

CONCLUSIONS

This paper described the development of a sampling moire camera. It is capable of analyzing the phase of a 2-D grating in real-time and providing the phase distribution as an output. We also described the structure and performance of the sampling moire camera. An experiment on short-range displacement measurement showed that the camera was very accurate. Additionally, we performed an experiment on long-range displacement measurement. These results showed that the camera measured displacement very accurately and that it was also able to measure the effect of air turbulence.
REFERENCES
[1] Fujita, T., Landslide-geology of disaster in mountain district (in Japanese), Kyouritsushuppan, Chapter 6, 107, 1990.
[2] Arai, Y., Yokozeki, S., Shiraki, K. and Yamada, T., "High Precision Two-Dimensional Spatial Fringe Analysis Method," Journal of Modern Optics, Vol. 44, No. 4, 739-751, 1997.
[3] Ri, S., Fujigaki, M. and Morimoto, Y., "Sampling Moire Method for Accurate Small Deformation Distribution Measurement," Experimental Mechanics, Vol. 50, No. 4, 501-508, 2010.
[4] Shimo, K., Fujigaki, M., Masaya, A., Morimoto, Y., "Development of Dynamic Shape and Strain Measurement System by Sampling Moire Method," ICEM2009, Proc. of SPIE, Vol. 7522, 2009.
[5] Morimoto, Y. and Fujisawa, M., "Fringe Pattern Analysis by a Phase-shifting Method Using Fourier Transform," Optical Engineering, Vol. 33, No. 11, 3709-3714, 1994.
Energy Dissipation in Impact Absorber1 S. Ekwaro-Osire2, I. Durukan, F.M. Alemayehu Mechanical Engineering Department Texas Tech University Lubbock, TX 79409-1021 J.F. Cardenas-Garcia, United States Patent and Trademark Office ABSTRACT Impact absorbers are effective in passively absorbing and dissipating excessive energy in a primary system. They perform through momentum transfer by collision and dissipation of kinetic energy as sound and heat. The performance and dynamics of the impact absorber are highly dependent on the surface nature of the impact wall and ball; the impact surface material affects the effective coefficient of restitution. The objective of this research was to study and compare the total energy dissipation for different combinations of impact wall material and impact ball mass. An experimental setup of a single-unit impact absorber with different materials for the impact wall was designed and constructed. The exact locations of the impact ball(s) and the response of the primary structure were simultaneously tracked using a novel image processing technique. Based on the tracked motion of the ball and the primary structure, a two-way ANOVA was implemented to study the effect of variation in wall material and ball mass. It was shown that impact wall material selection is critical to obtaining the best configuration for optimal total energy dissipation. INTRODUCTION Impact vibration absorbers have been used extensively to control vibrations of mechanical systems. These systems generally consist of a solid mass or masses placed in the primary system in such a way that the impact ball(s) are free to move between the two end surfaces of the impact wall, rigidly attached to the primary system. They are effective in passively absorbing and dissipating excessive energy in a primary system, so that the vibration amplitude of the primary system is limited to an acceptable level.
IVAs work in such a way that the kinetic energy of the primary system is transferred into the kinetic energy of the ball by collision [1]. Initially, the system has a high kinetic energy while the ball is almost at rest, hence a kinetic energy close to zero. Then, when the ball is impacted by the wall for the first time, the ball gains its maximum kinetic energy right after the impact, taking some energy from the system; the ball then slows down, and that is when the second wall comes and hits the ball into motion. This repeats as long as the system continues to be externally excited. Elastic collisions have better separation capabilities than plastic collisions. Hence, a coefficient of restitution between the ball and the wall that is close to one will facilitate a larger number of collisions, by which more energy will be absorbed by the ball. That is, the bigger the coefficient of restitution, the closer the restitution impulse will be to the deformation impulse. This way, a consistent transfer of momentum by impact will take place. This paper presents the energy dissipation capabilities of a single-unit IVA, considering varying ball mass and impact wall material. Three steel balls of 28.1 g, 35.7 g and 44.0 g, combined with steel, aluminum and bronze walls (9 different combinations) for a fixed clearance, have been studied. The objective of this research was to study the energy absorption performance to identify an optimal configuration. In this experimental study, an image processing technique was used to track the positions of the primary system and the ball, using a high speed camera
1 The views expressed in this paper are those of the author and do not necessarily reflect the official policy or position of the United States Patent and Trademark Office or the U.S. Government. 2 Corresponding author: [email protected] T. Proulx (ed.), Optical Measurements, Modeling, and Metrology, Volume 5, Conference Proceedings of the Society for Experimental Mechanics Series 9999999, DOI 10.1007/978-1-4614-0228-2_40, © The Society for Experimental Mechanics, Inc. 2011
(Basler A504k) capturing 250 frames per second. The time series data were used to obtain motion plots. Based on the tracked motion, a two-way ANOVA was implemented to study the effect of wall material and ball mass on performance [2]. It is shown that the combination of impact wall material and ball mass is critical to obtaining the best configuration for optimal total energy dissipation. The aim of this study was to examine the energy dissipation of the single IVA system from a vibrating primary system. For this purpose, different impacting wall materials and different impacting masses were examined in a systematic way. MATERIALS AND METHODS Experimental Setup and Design The main purpose of the IVA is to reduce the vibrations of the primary system by means of a solid mass impacting the primary system [3, 4]. The effectiveness of the IVA can be examined with a simple experimental setup. This paper studies the effect of different impacting wall materials and different masses of the solid impacting ball on the performance of the IVA. To reduce the number of experiments, a design of experiments (DOE) approach was employed. Two independent parameters were selected: impacting wall material and impacting ball mass. For each of these independent parameters, three levels were examined: aluminum, bronze and steel for the wall material, and 28.1 g, 35.7 g and 44.0 g for the ball mass. Combining these levels according to the DOE gives 9 different experiment sets. For each experiment set, 5 different measurements were recorded, for a total of 45 experiments. The dependent output of the experiments is the effectiveness of the vibration reduction, measured as the reduction of the vibration amplitude. Excitation amplitudes and frequencies were kept constant for all the experiments.
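The full factorial design described above can be enumerated mechanically; a minimal sketch (our own illustration, not the authors' code) confirms the experiment counts:

```python
from itertools import product

# Two factors at three levels each: 3 x 3 = 9 treatment combinations,
# with 5 replicate measurements per combination -> 45 experiments.
wall_materials = ["aluminum", "bronze", "steel"]
ball_masses_g = [28.1, 35.7, 44.0]
replicates = 5

combinations = list(product(wall_materials, ball_masses_g))
runs = [(mat, mass, rep)
        for (mat, mass) in combinations
        for rep in range(1, replicates + 1)]

print(len(combinations))  # 9
print(len(runs))          # 45
```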
For the impact vibration absorber experiments, harmonic forced vibrations were applied to the primary system at its natural frequency. For this purpose, a signal generator was used to create a sinusoidal signal at the natural frequency of the system. This signal was then amplified to a certain amplitude and sent to the shaker to create harmonic motion on the primary system. To eliminate friction and ensure motion in a single direction, an air bearing was placed under the primary system. Then, a steel ball inside an acrylic tube was attached to the primary system. Both ends of the tube were closed with cylindrical caps rigidly connected to the primary system. Caps of identical dimensions were produced from different materials, and three steel balls of different sizes were prepared for the experiments. A high speed camera together with an image processing code was used to track the primary system displacement and the impacting ball position.
Figure 1 Experimental Setup (a) data acquisition and analysis unit, (b) high speed camera, (c) single unit impact vibration absorber, (d) primary system, (e) air bearing, (f) shaker, (g) amplifier, (h) signal generating unit.
RESULTS AND DISCUSSIONS Experiments were conducted for different levels of the parameters, and the percentage reductions of the vibration amplitudes are tabulated in Table 1. It can be seen from the table that, for a given material, the percentage reductions increase with increasing impact ball mass. On the other hand, for a given impact ball mass, aluminum gives better results than bronze and, in some cases, better than steel. However, it is not possible to see from the table whether interaction effects are present between ball mass and impact wall. In addition, some numbers are close to each other, so it is not easy to decide whether there is a significant difference between materials and ball masses in some cases. For this purpose, the statistical software MINITAB was used to analyze the experimental results. Table 1 Percentage reductions of vibrations with respect to material and mass
Impact Wall Material   Ball Mass   Measured reductions [%] (5 replicates)
Aluminum               28.1 g      38.01, 38.22, 37.81, 38.22, 39.05
Aluminum               35.7 g      40.72, 40.41, 40.21, 40.71, 41.75
Aluminum               44.0 g      44.58, 44.99, 44.79, 44.26, 44.17
Bronze                 28.1 g      21.74, 20.68, 21.01, 21.01, 20.76
Bronze                 35.7 g      27.50, 27.71, 27.60, 27.50, 27.60
Bronze                 44.0 g      31.15, 31.15, 31.04, 30.94, 30.94
Steel                  28.1 g      30.94, 32.08, 32.50, 32.40, 31.88
Steel                  35.7 g      37.60, 37.40, 37.50, 37.81, 37.81
Steel                  44.0 g      44.17, 45.73, 47.08, 46.04, 46.56
The null hypothesis used in this analysis was that there is no difference in the results (percentage reductions of vibrations) and that the variations result from random experimental errors. The alternative hypothesis was that at least one set of experimental results differs by more than the random errors associated with the experiments. After setting up the hypothesis tests, the confidence level was selected as 95%, which corresponds to an α-value of 0.05. The α-value is the reference number against which the P-values of the ANOVA table are compared: looking at the P-value of the experiments at a given α-value, the null hypothesis can be accepted or rejected. The values used for the ANOVA in MINITAB are summarized in Table 2. Table 2 MINITAB input of the parameters
Factor      Type    Levels   Values
Ball Mass   fixed   3        28.1, 35.7, 44.0
Material    fixed   3        aluminum, bronze, steel
The output of the ANOVA is tabulated in Table 3. The first column is the source column: there are two main sources (ball mass and wall material) and one interaction term (ball mass × material), followed by the error and the total. The second column is the degrees of freedom of the source and error terms. The third column is the sum of squares of the sources and the error. The fourth column is the adjusted sum of squares; in this case it is the same as the third column because there is no adjustment. The fifth column is the adjusted mean squares, and the sixth column is the F-values. From the F-values, the P-values in the seventh column can be computed. If P-values are very small, they are displayed as zero. In these experiments, the P-values are all less than α, so in all cases the null hypothesis can be rejected. In other words, ball mass and material have a significant effect on the results, and in addition there is an interaction between ball mass and material.
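The two-way ANOVA that MINITAB performs can be sketched from first principles. The script below (stdlib only; a sketch of the standard fixed-effects computation, not the authors' MINITAB session) partitions the sums of squares for the Table 1 data into ball-mass, material, interaction, and error terms and forms the F-ratios:

```python
# Two-way fixed-effects ANOVA with replication, using the Table 1 data
# (percentage vibration reductions; 3 materials x 3 ball masses x 5 reps).
data = {
    ("aluminum", 28.1): [38.01, 38.22, 37.81, 38.22, 39.05],
    ("aluminum", 35.7): [40.72, 40.41, 40.21, 40.71, 41.75],
    ("aluminum", 44.0): [44.58, 44.99, 44.79, 44.26, 44.17],
    ("bronze", 28.1): [21.74, 20.68, 21.01, 21.01, 20.76],
    ("bronze", 35.7): [27.50, 27.71, 27.60, 27.50, 27.60],
    ("bronze", 44.0): [31.15, 31.15, 31.04, 30.94, 30.94],
    ("steel", 28.1): [30.94, 32.08, 32.50, 32.40, 31.88],
    ("steel", 35.7): [37.60, 37.40, 37.50, 37.81, 37.81],
    ("steel", 44.0): [44.17, 45.73, 47.08, 46.04, 46.56],
}
materials = ["aluminum", "bronze", "steel"]
masses = [28.1, 35.7, 44.0]
r = 5  # replicates per cell

def mean(xs):
    xs = list(xs)
    return sum(xs) / len(xs)

all_obs = [y for cell in data.values() for y in cell]
grand = mean(all_obs)
mat_mean = {m: mean(y for (mm, _), c in data.items() if mm == m for y in c)
            for m in materials}
mass_mean = {b: mean(y for (_, bb), c in data.items() if bb == b for y in c)
             for b in masses}
cell_mean = {k: mean(v) for k, v in data.items()}

# Partition the total sum of squares into main effects, interaction, error.
ss_mat = r * len(masses) * sum((mat_mean[m] - grand) ** 2 for m in materials)
ss_mass = r * len(materials) * sum((mass_mean[b] - grand) ** 2 for b in masses)
ss_int = r * sum((cell_mean[(m, b)] - mat_mean[m] - mass_mean[b] + grand) ** 2
                 for m in materials for b in masses)
ss_err = sum((y - cell_mean[k]) ** 2 for k, c in data.items() for y in c)
ss_tot = sum((y - grand) ** 2 for y in all_obs)

df_mat, df_mass, df_int, df_err = 2, 2, 4, 36
ms_err = ss_err / df_err
F_mat = (ss_mat / df_mat) / ms_err
F_mass = (ss_mass / df_mass) / ms_err
F_int = (ss_int / df_int) / ms_err
```

With these data all three F-ratios come out far above the 5% critical values (roughly 3.3 for the main effects with (2, 36) degrees of freedom and 2.6 for the interaction with (4, 36)), consistent with the paper's conclusion that both factors and their interaction are significant.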
Table 3 ANOVA output. Source: Ball Mass; Material; Ball Mass × Material; Error; Total. ANOVA is based on the assumption that the data follow the normal distribution rather than some other distribution, for example the Weibull distribution; to check this assumption, the residuals are plotted. In the normal probability plot (Figure 2 (a)), the residuals (dots) have to be located near a straight line, as they are in the current case. The residuals versus fitted values plot (Figure 2 (b)) checks the behavior of the residuals against the fitted values; in this case no pattern is visible, which supports the normality assumption. The residual histogram is plotted in Figure 2 (c); its shape likewise confirms the normality assumption. Finally, the residuals versus observation order plot (Figure 2 (d)) checks whether there is an experiment run-order effect on the residuals; in this case there is no significant effect of run order.
Figure 2 Residual plots: (a) normal probability plot, (b) residuals versus fitted values, (c) residual histogram, (d) residuals versus observation order.
The statistical results showed that the mass of the ball and the material of the impacting wall affect the IVA performance. This is because, during the impact, some energy is transferred from the wall to the ball and some is dissipated as deformation and heat. Figure 3 shows the performance comparison for each combination of ball mass and wall material.
Figure 3: IVA Performance Plot (absorption [%] versus steel ball mass [g] for aluminum, bronze and steel wall materials)
As presented in this plot, the lower coefficient of restitution between the steel ball and the bronze plate caused a lower performance compared to both the steel-steel and steel-aluminum configurations. The higher coefficient of restitution of the steel ball-aluminum wall configuration resulted in the best energy absorption capability. Nevertheless, with increasing ball mass, the two curves cross each other. Hence, with a bigger ball mass and an aluminum impact wall, the energy absorption performance of the IVA is improved. The exchange of effectiveness between the aluminum and steel walls, seen at higher ball mass, also reveals that the coefficient of restitution is not a material property but depends upon the severity of the impact [5]; i.e., the coefficient of restitution is also a function of the impact velocity as well as the masses of the impacting wall and ball. This experiment clearly shows that variation in the material type of the wall and ball brings about a difference in the effectiveness of energy absorption by the IVA from the primary system. Furthermore, the surface of the impact ball showed significant deformation as the ball mass increased, which deteriorated the effectiveness of the aluminum surface with the increase of the impact ball momentum (mb·vb). CONCLUSIONS The statistical results showed that the mass of the ball and the material of the impacting wall affect the IVA performance. In addition, the experimental results show that the coefficient of restitution is not a material property but rather depends on the severity of the impact. The performance of the IVA depends on a certain wall material and ball mass configuration for a specific clearance. REFERENCES 1. Ekwaro-Osire S, Desen IC (2001) Experimental study on an impact vibration absorber. Journal of Vibration and Control, 7:475-493. 2.
Milton JS, Arnold JC (2003) Introduction to Probability and Statistics: Principles and Applications for Engineering and the Computing Sciences. McGraw-Hill. 3. Ekwaro-Osire S, Nieto E, Gungor F, Gumus E, Ertas A (2009) Performance of a bi-unit impact damper using digital image processing. In: Ibrahim RA, Babitsky VI, Okuma M (eds) Vibro-Impact Dynamics of Ocean Systems and Related Problems. Springer-Verlag, Berlin, Germany. 4. Ekwaro-Osire S, Ozerdim C, Khandaker MPH (2006) Effect of attachment configuration on impact vibration absorbers. Experimental Mechanics, 46:669-681. 5. Johnson KL (2004) Contact Mechanics. Cambridge University Press, Cambridge, UK.
Mechanics behind 4D interferometric measurement of biofilm mediated tooth decay. Michael S. Waters1, Bin Yang2, Nancy J. Lin1 and Sheng Lin-Gibson1* 1
National Institute of Standards and Technology, Materials Science Engineering Laboratory, Polymers Division, Biomaterials Group, 100 Bureau Dr., Mail Stop 8543, Gaithersburg, MD 20899-8543 2 American Dental Association Foundation, 100 Bureau Dr., Mail Stop XXXX, Gaithersburg, MD 20899-XXXX *
To whom correspondence should be addressed.
ABSTRACT Evaluating the efficacy of dental materials to protect human teeth requires the capacity to measure tooth decay. Current practices for determining tooth decay are destructive, qualitative to only weakly quantitative, and/or measure bulk changes with little to no spatial resolution. The combination of the highly variable nature of tooth enamel and the inability to perform serial analyses on the same spatial location limits the capacity to obtain reproducible information from any experimental set. To help fill the void left by other techniques, this study explores the potential of interferometric optical profilometry to make rapid precision measurements of human tooth enamel decay in three dimensions over time. Using unique techniques in raw interferometric data evaluation in combination with specially designed biocompatible 3-D alignment translation stages, human tooth decay was measured with respect to pathogenic dental bacterial biofilms. These investigations revealed the capacity to quantitatively determine the rate of tooth decay at previously unseen spatial and temporal scales (4D). These new, rapid, low-cost techniques minimize the effort for sample preparation and use very few consumables, opening the feasibility of high-throughput investigations of clinical dental materials to a wider international community. INTRODUCTION Tooth decay is one of the most rampant, chronic, communicable diseases (1, 6). 94% of adults in the United States have manifested coronal caries (a form of cavity) at some time in their life. Tooth decay is primarily caused by a mixed consortium of oral bacteria, which aggregate to colonize the cracks and crevices of the teeth and the region from the gingival line to the periodontal interface, forming a biofilm that is highly resistant to harsh environments, including abrasion and antibiotics.
Streptococcus mutans, the bacterium considered to be the primary causative agent of dental caries, secretes lactic acid in a surface-attached bacterial biofilm, inhibiting the growth of other bacteria and giving itself an opportunity to dominate and dissolve the calcium hydroxyapatite material of the tooth surface (3-5). One of the major challenges to overcome in the battle against tooth decay is the capacity to measure the effectiveness of any therapeutic agent on dental health. This is due in large part to the hypervariable nature of teeth, both between different people and even within a single tooth (7). Teeth are made of calcium hydroxyapatite formed into 2 different major structures: enamel and dentin. Dentin is a tubular material that makes up the majority of the tooth, which connects to the mandibular bones. T. Proulx (ed.), Optical Measurements, Modeling, and Metrology, Volume 5, Conference Proceedings of the Society for Experimental Mechanics Series 9999999, DOI 10.1007/978-1-4614-0228-2_41, © The Society for Experimental Mechanics, Inc. 2011. A thick, hard, impact- and wear-resistant enamel cap covers the top of
the dentin in the tooth, which is the exposed area inside of the mouth (Figure 1A). Enamel is composed primarily of highly crystalline hydroxyapatite, which is packed together in vertically positioned rods that radiate from the dentin-enamel interface (Figure 1B). Both the dentin tubules and the calcium-saturated saliva provide a constant source of calcium that helps remineralize decayed enamel, and this remineralization can span the entire depth (Figures 1 A, C). The consequence, however, is a highly variable enamel composition, as demonstrated by the variable rod presence in Figures 1 B and C, which yields great measurement inconsistency using currently available technologies.
Figure 1. Tooth characteristics critical to high-resolution dissolution analyses. A. The anterior lateral (left) and top (right) views of a carious upper 3rd molar demonstrating the dentin base and enamel cap of a tooth. B. Longitudinal thin section of non-carious and C. carious tooth enamel. Note that the presence, absence and quantity of visible vertical rod structures are different between the two teeth, and inconsistent within each tooth individually. Current dental measurement technologies are limited in their capacity to probe teeth because they cannot distinguish different regions of teeth well enough to characterize and classify how they individually respond to dissolution conditions (2). In order to optimize dental therapeutic technologies, these measurement capacities must be improved. In response to this need, we identified 2 critical areas in which we could improve current tooth dissolution measurement science. To this end, we endeavored to provide spatially quantifiable differences in enamel structural anatomy using a combination of sample preparation and alignment tools and Interferometric Optical Profilometry (IOP), based on our previous work demonstrating the measurement of the vertical height of a bacterial/hydroxyapatite interface (8-10).
IOP is a fast, high-resolution (x and y < 200 nm; z < 1 nm), non-invasive, optical surface-mapping technique that complements other imaging tools. IOP employs reflected-light interferometric measurement in combination with a high-resolution piezo-electric motor to precisely calculate the height of virtually any surface without the use of lasers. In order to maximize our capacity to analyze tooth specimens, 2 novel devices had to be designed and manufactured: 1) a special tooth mount designed specifically for optimizing the retention of enamel sections from a single tooth with vertically-oriented rods, and 2) a biocompatible translation stage that permits sample removal and replacement so that the sample can be realigned in 3 dimensions. With the aid of these simple tools, we will be able to use IOP to make non-destructive measurements of microbially influenced dissolution of the tooth surface. These novel measurements will provide insight into dental biofilm-tooth enamel interactions, and aid in the development of dental therapeutic technologies, setting new standards of non-destructive diagnostic dental examination of tooth pathogenesis. MULTI-ANGLED TOOTH MOUNT DESIGN, SAMPLE PREPARATION AND VALIDATION Tooth specimens are a controlled, limited commodity, and sample preparation can be delicate and complicated. In an intact tooth, enamel rods are oriented vertically with respect to the bacterially exposed area. To ensure samples are comparable to each other, enamel sections should be cut so the final product yields flat chips with vertically-oriented rods. Based on these parameters, a mount was designed for cutting enamel samples.
Figure 2. Multi-angled tooth mount. A. The multi-angled tooth mount, bolted to the arm of a saw. The tooth is mounted by the base and the diamond saw is cutting through the crown of the tooth. B. A resulting tooth chip after polishing. C. A zoomed-in reflected light confocal view of the surface confirms that the enamel grains are vertically oriented. Note that this image is not an indication of the surface roughness. D. IOP measurement of the surface profile indicating that the surface is sufficiently smooth to proceed with dissolution experiments.
The multi-angled tooth mount is an aluminum block with a series of 45° angle cuts (Figure 2A). Each cut has a threaded hole in the center for mounting to the saw arm, with enough surrounding surface area to stabilize the block during a cut, except for the tooth-mounting surface. The tooth-mounting surface is a larger grooved surface with enough area and roughness to hold the hard mounting wax and the base of the tooth. The dimensions of the mount are determined by the saw blade clearance when the tooth is mounted by its base and the block is secured to the arm through any hole. When the tooth is mounted properly, the flat faces of the tooth line up with the flat faces of the mount. Using this method, the tooth only has to be mounted once, and all of the enamel can be cut from the tooth. Flat faces should be cut first, then apexes. Once the sections were removed, the dentin was ground away from each chip using sandpaper; the ground surface was then flipped over and mounted onto a hand microtome. The other side was then slowly ground down with sandpaper until the chip was parallel and polished on both sides to at least a 4000-grit polish. These chips can then be fractured to size. This method produces up to ~64 polished tooth enamel chip samples about 1-2 mm², all with the enamel grains oriented vertically (Figure 2B-D). To ensure that this sample preparation procedure is sufficient to evaluate biofilm-mediated tooth dissolution by IOP, tooth enamel chips were partially masked with transparent tape, sterilized, and incubated for 24 h at 37 °C in a 5% CO2 incubator in Todd Hewitt Broth supplemented with 1% sucrose and inoculated 1:100 with an overnight S. mutans UA159 bacterial culture. After 24 h, the biofilm-covered enamel chips were stained with a fluorescent DNA stain and removed from the culture. The mask was then removed, the bacterial biofilm was gently wiped away, and the chips were confirmed to be biofilm-free by fluorescence microscopy.
The enamel chips were then evaluated for dissolution by IOP. Each IOP measurement was made within 1-3 s. These tests revealed that not only could biofilm-mediated tooth dissolution be measured, but differences in the rates of dissolution of secondary structure features could also be measured (Figure 3 A-C).
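The masked-region approach lends itself to a simple step-height analysis: because the taped region does not dissolve, the mean height difference between masked and exposed pixels gives the dissolution depth, and dividing by the incubation time gives a rate. A sketch with made-up height values (the function name and the numbers are illustrative, not from the study):

```python
from statistics import mean

def dissolution_rate(heights, mask, hours):
    """Mean step height between masked (reference) and exposed pixels,
    divided by incubation time. heights and mask are parallel lists;
    mask[i] is True where the surface was protected by tape."""
    ref = mean(h for h, m in zip(heights, mask) if m)
    exposed = mean(h for h, m in zip(heights, mask) if not m)
    depth = ref - exposed          # positive: exposed surface is lower
    return depth / hours           # e.g. micrometers per hour

# Illustrative profile: masked pixels near 0, exposed pixels ~1.2 um lower.
heights = [0.00, 0.02, -0.01, -1.18, -1.25, -1.21]  # um
mask = [True, True, True, False, False, False]
rate = dissolution_rate(heights, mask, hours=24.0)
```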
Figure 3. IOP measurement of variability in biofilm-mediated human tooth enamel dissolution. A. Measurement of enamel displaying large degrees of localized variability in dissolution. Note the masked region shows no sign of dissolution. B. Measurement of tooth enamel dissolution displaying regions where rod ends have become visible and other regions that appear smooth. Note that all regions within the profile measure below the masked surface, indicating the entire exposed region experienced dissolution. C. Measurement of a dissolution-exposed region showing consistently that the centers of the rods erode at a different rate from the edges of the rods.
DESIGN AND VALIDATION OF A BIOCOMPATIBLE 3D REALIGNMENT STAGE The results shown in Figure 3 indicate that, with the capacity to make serial measurements of tooth enamel dissolution, secondary structure dissolution rates could be measured. To make serial dissolution measurements, the sample must be repeatedly measured, incubated in dissolution conditions, and measured again. Due to the hypervariability of tooth enamel, each sample must be precisely realigned in 3 dimensions. With a platform that permits microscale realignment, specific regions/structures can be distinguished from each other. To this end, we designed a 3D realignment stage that could be sterilized and incubated. Made from a hard, non-leaching plastic (e.g. plexiglass or polycarbonate), the realignment stage requires 2 components: an incubation stage and a receiving platform (Figure 4 A-C). The receiving platform is a flat plastic surface immovably fixed to the stage, with a vertically positioned stainless steel bolt set to receive the sample. The sample holder has 6 leveling threads, which help stabilize the approach of the sample holder to the receiving platform (Figure 4B). At the base of the sample holder is a flat vertical face (upper rotation stop) that meets a vertically elevated face on the receiving platform (lower rotation stop) just as the sample holder finishes tightening down to the receiving platform. At the top of the sample holder are grooves to accommodate removal and replacement with sterile forceps (Figure 4A). The sample is mounted in the center of the sample holder by either double-stick tape or an adhesive (Figure 4C).
Figure 4. Biocompatible 3D realignment stage. A. A top view of the sample holder showing the forcep grips and the sample loading area. B. A lateral view of the sample holder fixed to the receiving platform. Note the interface of the raised lower rotation stop from the receiving platform and the flat vertical face of the upper rotation stop from the sample holder. C. 2 realignment stages fixed to a receiving platform, with an enamel tooth chip that is being measured by IOP under red light. Realignment validation experiments were performed by removing and replacing the sample holder with various tooth chips and making measurements in multiple locations under the highest available magnification. The XY realignment error was 2.59 µm ± 1.85 µm, with no discernible change in rotation or tilt. At this resolution, coordinate and feature registration can be used to achieve the resolution limits of the objective lens (Figure 5).
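Realignment repeatability of the kind quoted above (mean ± standard deviation of the feature-position offset across remount cycles) can be computed as follows; the positions below are hypothetical, not measured data:

```python
from statistics import mean, stdev

# Hypothetical XY positions (um) of the same surface feature after each
# remove-and-replace cycle; the first entry is the reference measurement.
positions = [(0.0, 0.0), (1.8, -1.1), (-2.4, 0.9), (0.7, 2.6), (-1.2, -1.9)]

ref_x, ref_y = positions[0]
# Euclidean offset of each remount from the reference measurement.
offsets = [((x - ref_x) ** 2 + (y - ref_y) ** 2) ** 0.5
           for x, y in positions[1:]]

realignment_error = mean(offsets)    # reported as the mean offset
realignment_spread = stdev(offsets)  # reported as the +/- term
```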
Figure 5. Multiple 3D realignment measurements of an enamel chip. Each image represents a 109 µm² measurement of a polished enamel chip. DISCUSSION In this body of work, we designed, manufactured and demonstrated the functional combination of novel sample preparation tools, a biocompatible 3D realignment stage and IOP to produce novel high-resolution measurements of biofilm-mediated human tooth enamel dissolution in seconds. These results demonstrate the first IOP measurements of variability in the secondary structure of tooth enamel. The combination of controlled sample preparation strategies, realignment technology and the rapid, high-resolution, non-destructive optical measurement by IOP presented here offers the opportunity for significant advances in the capacity to measure bacterially mediated enamel pathogenesis. The capacity demonstrated here to optically and spatially resolve changes in secondary structure dissolution rates of human tooth enamel offers the opportunity to measure the rates of dissolution of these structures, thereby providing additional data to help characterize and differentiate the material properties of human teeth. This knowledge can provide valuable information in the evaluation of novel dental therapeutic agents. It is important to note that this research was focused on developing low-cost, rapid, high-resolution techniques for measuring biofilm-mediated tooth pathogenesis with minimal consumables. The research efforts of many countries are incapacitated due in large part to the
ongoing costs of consumables and equipment maintenance. If treated properly, an optical profilometer has great longevity, requires very little maintenance, and uses no consumables. The tools made here are relatively simple to manufacture, or can be cast from a mold. The possibilities offered here to rapidly make clinically relevant diagnostic evaluations at low cost can permit the participation of laboratories with limited funding in cutting-edge research. REFERENCES
1. Bryers, J. D. 2008. Medical biofilms. Biotechnol. Bioeng. 100:1-31.
2. Field, J., P. Waterhouse, and M. German. 2010. Quantifying and qualifying surface changes on dental hard tissues in vitro. J Dent 38:182-90.
3. Hamada, S., and H. D. Slade. 1980. Biology, immunology, and cariogenicity of Streptococcus mutans. Microbiol. Rev. 44:331-384.
4. Loesche, W. J. 1986. Role of Streptococcus mutans in human dental decay. Microbiol. Rev. 50:353-380.
5. McDermaid, A. S., A. S. McKee, D. C. Ellwood, and P. D. Marsh. 1986. The effect of lowering the pH on the composition and metabolism of a community of nine oral bacteria grown in a chemostat. J. Gen. Microbiol. 132:1205-1214.
6. Surgeon General, U.S. 2000, posting date. Oral health in America: a report of the Surgeon General. [Online.]
7. Wang, L. J., R. Tang, T. Bonstein, P. Bush, and G. H. Nancollas. 2006. Enamel demineralization in primary and permanent teeth. J Dent Res 85:359-63.
8. Waters, M. S. 2009. Microbial detection of material defects and weakness. Dissertation. University of Southern California, Los Angeles.
9. Waters, M. S., M. Y. El-Naggar, L. Hsu, C. A. Sturm, A. Luttge, F. E. Udwadia, D. G. Cvitkovitch, S. D. Goodman, and K. H. Nealson. 2009. Simultaneous interferometric measurement of corrosive or demineralizing bacteria and their mineral interfaces. Appl. Environ. Microbiol. 75:1445-1449.
10. Waters, M. S., C. A. Sturm, M. Y. El-Naggar, A. Luttge, F. E. Udwadia, D. G. Cvitkovitch, S. D. Goodman, and K. H. Nealson. 2008. In search of the microbe/mineral interface: quantitative analysis of bacteria on metal surfaces using vertical scanning interferometry. Geobiology 6:254-262.
Validating Road Profile Reconstruction Methodology Using ANN Simulation on Experimental Data
H.M. Ngwangwa * 1. Department of Mechanical and Industrial Engineering, College of Science, Engineering and Technology, University of South Africa, P.O. Box 392, 0003 Pretoria, South Africa. Tel: +27 11 471 2079; Fax: +2786 568 3758 E-mail: [email protected] P.S. Heyns (Prof), H.G.A. Breytenbach and P.S. Els (Prof) 2. Dynamic Systems Group, Department of Mechanical and Aeronautical Engineering, University of Pretoria, 0002 Pretoria, South Africa.
ABSTRACT This paper reports the performance of an ANN-based road profile reconstruction methodology on measured data. The methodology was previously verified on numerical data, where it was shown that road profiles and their associated defects could be reconstructed to within a 20% error at a minimum correlation value of 94%. The data used in the present paper were measured on a Land Rover Defender 110 using an eDAQ-lite measurement system. The measurements were carried out under different test conditions, namely different road surface profiles, different vehicle suspension settings, and different vehicle speeds. The neural network was trained with data extracted from a 20 m length of a typical test run for each road profile. The results confirm the findings of the numerical study, with the methodology achieving a maximum error of about 25% and correlations above 90%. The methodology performs considerably better in reconstructing bumps than the Belgian pave. KEYWORDS: Road profile reconstruction, Bayesian regularized NARX neural network, road-vehicle interaction, road damage identification.
1. INTRODUCTION Wannenburg et al. [1] note that the inputs required for fatigue life prediction of any structure are the structural geometry, material properties and input loading. They argue that the structural geometry and material properties are generally well known, so the greatest concern is determining the input loading. Many studies simply use synthetically generated road profiles in an effort to optimize the vehicle structural design. In this work, the road profiles are reconstructed from accelerations measured on a Land Rover body through artificial neural network (ANN) simulation.

T. Proulx (ed.), Optical Measurements, Modeling, and Metrology, Volume 5, Conference Proceedings of the Society for Experimental Mechanics Series, DOI 10.1007/978-1-4614-0228-2_42, © The Society for Experimental Mechanics, Inc. 2011

This approach is
becoming particularly attractive owing to the increasing availability of vehicle information systems, in which sensors are mounted on vehicles to assess vehicle performance and the structural integrity of suspensions. The proposed approach could cut the costs often incurred with conventional profiling methods for acquiring or hiring equipment, as well as the sheer effort required for conventional profiling measurement investigations.
This work follows up on an earlier article by Ngwangwa et al. [2] which demonstrates the methodology by utilizing data generated from a numerical model. The road profiles were generated synthetically from benchmarked ISO displacement power spectral densities (PSDs) of different road grades. The neural networks were trained for each ISO PSD class using data generated from lower and upper bound PSDs, to realize a network structure corresponding to each of the eight ISO PSD classes of road roughness [3]. Thus during simulation of the network, the algorithm searched for a network structure that returned the minimum mean square error value to be used for reconstructing the road profile. The vehicle was represented by a linear pitch plane model. The methodology demonstrated that it could reconstruct the road profiles and their associated defects to within a 20% error at a minimum correlation value of 94%.
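The synthetic profile generation described above can be sketched as a superposition of random-phase cosines whose amplitudes follow an ISO 8608-style displacement PSD. This is a minimal illustration under assumed values (PSD exponent −2, reference spatial frequency n0 = 0.1 cycles/m, roughness coefficient Gd(n0) = 16×10⁻⁶ m³), not the authors' code:

```python
import numpy as np

def iso_road_profile(length_m=100.0, dx=0.05, Gd_n0=16e-6, n0=0.1, seed=0):
    """Synthesize a road height profile z(x) whose one-sided displacement PSD
    follows an ISO 8608-style law Gd(n) = Gd(n0) * (n / n0)**-2  [m^3/cycle].
    The roughness coefficient Gd_n0 is an assumed generic-class value."""
    rng = np.random.default_rng(seed)
    x = np.arange(0.0, length_m, dx)
    N = len(x)
    dn = 1.0 / length_m                     # spatial-frequency resolution (cycles/m)
    n = np.arange(1, N // 2) * dn           # skip n = 0 (no DC roughness)
    Gd = Gd_n0 * (n / n0) ** -2             # displacement PSD at each bin
    amp = np.sqrt(2.0 * Gd * dn)            # cosine amplitude per frequency bin
    phase = rng.uniform(0.0, 2.0 * np.pi, size=n.size)
    # superpose cosines: z(x) = sum_k amp_k * cos(2*pi*n_k*x + phi_k)
    z = (amp[None, :] * np.cos(2.0 * np.pi * x[:, None] * n[None, :]
                               + phase[None, :])).sum(axis=1)
    return x, z

x, z = iso_road_profile()
```

Lower- and upper-bound PSDs of each ISO class can be generated the same way by changing `Gd_n0`, which is how training data spanning the eight roughness classes could be produced.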
The purpose of this paper, therefore, is to verify the methodology using real data measured on a real vehicle and road. The data were measured on a Land Rover Defender 110 driven over well-constructed and documented tracks at Gerotek in Pretoria, South Africa. This Land Rover has been extensively used for vehicle research by the Dynamic Systems Group at the University of Pretoria and has been modified to allow different suspension settings. Hence, most of its physical properties are well documented and have been updated over the years, and a great deal of instrumentation is permanently fitted to it. Gerotek is a multi-disciplinary road-vehicle testing facility established to satisfy the need for an all-encompassing test facility at which vehicle design and development could be monitored in a typical South African environment [4].
The Land Rover accelerations were measured by Breytenbach [5]. His work was aimed at determining suspension characteristics for optimal vehicle structural life. In order to achieve that goal, accelerations were measured over different road surfaces for validation of mathematical models upon which sensitivity analyses were performed. The measurements were conducted under three test conditions: different vehicle speeds (14 and 54 km/h), different vehicle suspension settings (ride and handling modes), and different road profile layouts (bumps and Belgian pave). The road surfaces were profiled by Bekker [6]. Three different methods were used in profiling the terrain: a mechanical profilometer, photogrammetry and a three-dimensional laser scanner. The quality of the measured profiles was verified by comparisons of responses with simulated data evaluated on an off-road vehicle model in MATLAB [5].
In this study, trapezoidal bumps and Belgian pave are used as test tracks. The bumps represent discrete obstacles such as potholes, stones or speed humps, while the Belgian pave is widely used as a platform for performing repeatable and comparative suspension tests under simulated conditions [4]. The right front accelerometer was located on the vehicle strut mount, while accelerations at the rear were measured on the vehicle body some distance away from the line passing through the axle centers. In this work, the right front accelerations were used for network training, while the rear accelerations were used at random during network simulation.
The measured accelerations are applied to a supervised ANN for approximation of the road profile inputs. The resulting road profiles are then compared with the real measured bumps and Belgian pave. The results show very good correlations between the simulated and the actual profiles. It is further observed that the accuracy in approximating the Belgian pave is lower than that over the discrete obstacles. This is caused, however, by an inherent property of a regularized artificial neural network, which seeks a balance between fitting the approximated functions and generalizing them over their space [7].
The article is structured as follows. Section 2 briefly describes the methodology. Theoretical concepts governing the formulation of the ANN used in this work, specifically the Nonlinear AutoRegressive with Exogenous Inputs (NARX)
network, are presented in Section 3. Section 4 describes in detail the road profiles under study, while Section 5 presents the test procedure. Section 6 presents and discusses the findings of the study. The article is concluded in Section 7.
2. DESCRIPTION OF THE METHODOLOGY The profile reconstruction methodology was presented in an earlier paper by Ngwangwa et al. [2]. It is loosely based on a technique developed for mine haul road maintenance by Hugo, Heyns, Thompson and Visser [8]. Hugo et al. [8] employed an inverse model calculated from the inversion of a multiple degree of freedom model of the mine haul truck. As pointed out by Ngwangwa et al. [2], that methodology's reliance on extensive system characterization limits its practical feasibility. The methodology reported in this work, however, does not require extensive system characterization. It employs a neural network which reproduces road profiles upon learning the system input-output behavior from given road-vehicle data.
In effect, the network is similar to the inverse model but with the unique advantage that it can handle the non-linearity problems that may be difficult for most inverse models calculated from direct linear models. In this paper measured vehicle accelerations are used as network inputs, while measured and digitized road profiles are the network targets.
The neural network is created using MATLAB’s newnarxsp.m in the Neural Networks Toolbox (Mathworks Inc. [9]). This has a series-parallel architecture which feeds back the true measured output rather than the estimated output. This is reported to have two advantages: firstly, the input to the feed-forward network is more accurate; secondly, the resulting network has purely feed-forward architecture and static backpropagation can be used for training the network (Mathworks Inc., [9]).
3. THE NARX NEURAL NETWORK
Artificial neural networks are composed of simple elements (neurons) operating in parallel. The networks typically operate by adjusting the values of the connections (weights) between the neurons so that particular inputs lead to specific target outputs. ANN-type models have recently become popular for modelling non-linear dynamics, owing to their high adaptability to various non-linear systems, the ready availability of tools and the proliferation of computer algorithms [10]. One of the popular networks for non-linear function approximation is the Nonlinear AutoRegressive with Exogenous Inputs (NARX) network. Wong and Worden [10] report that the NARX network's underlying universal approximation theory guarantees that a basic 3-layer multilayer perceptron (MLP) can perform input-output mapping for any continuous function. It computes the current output using an MLP that takes as input a series of past system inputs
u(t−i) and past output values z_r(t−i). The NARX network's operation can be given by [9]

z̃_r(t) = g( u(t−n_u), …, u(t−1), u(t), z_r(t−n_z), …, z_r(t−1) )    (1)

where g(·) is a non-linear mapping function of the MLP, u(t) and z_r(t) are the input sequences and target observations at time t, and n_u and n_z are the maximum input and output lags, respectively. The architecture of the NARX network is shown in Fig. 1. The series-parallel architecture was adopted to reduce the computational cost, since it has no feedback loop inside the model itself and employs static backpropagation for the adjustment of parameters (weights). The vehicle-driver system is assumed to be stable within the limits prescribed by the space of the adjusted weights and biases (parameters), implying that the systems have bounded inputs and bounded outputs.
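The one-step NARX computation of Eq. (1), with the true past outputs fed back as in the series-parallel architecture, can be sketched with a single tanh hidden layer. This is an illustrative stand-in with random placeholder weights (the network used in the paper is a trained MATLAB NARX network), not the authors' implementation:

```python
import numpy as np

rng = np.random.default_rng(1)

def narx_predict(u, z, nu=2, nz=2, hidden=20, params=None):
    """One-step NARX prediction in series-parallel form (Eq. 1):
    z_hat(t) = g(u(t-nu..t), z(t-nz..t-1)), with g a 1-hidden-layer MLP.
    Weights here are random placeholders; a real network would be trained."""
    d = (nu + 1) + nz                        # size of the tapped-delay input vector
    if params is None:
        W1 = rng.normal(scale=0.5, size=(hidden, d)); b1 = np.zeros(hidden)
        W2 = rng.normal(scale=0.5, size=(1, hidden)); b2 = np.zeros(1)
        params = (W1, b1, W2, b2)
    W1, b1, W2, b2 = params
    z_hat = np.full_like(z, np.nan)          # first max(nu, nz) steps undefined
    for t in range(max(nu, nz), len(u)):
        # tapped delay lines: past inputs plus TRUE past outputs (series-parallel)
        x = np.concatenate([u[t - nu:t + 1], z[t - nz:t]])
        h = np.tanh(W1 @ x + b1)             # hidden layer, g1 = tanh
        z_hat[t] = (W2 @ h + b2)[0]          # linear output layer, g2
    return z_hat

u = rng.normal(size=50)    # e.g. measured accelerations (exogenous input)
z = rng.normal(size=50)    # e.g. measured road profile (targets, fed back)
z_hat = narx_predict(u, z)
```

Because the true outputs z are fed back rather than the estimates, the loop is purely feed-forward and the weights can be trained with static backpropagation, exactly the advantage of the series-parallel form noted above.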
[Fig. 1 block diagram: road inputs (Belgian pave and discrete obstacles) excite the Land Rover Defender 110; the measured accelerations ü(t−i) enter the network through a tapped delay line and input weights, the measured road profile z_r(t−i) is fed back through layer weights, and the network output z̃_r(t) is compared with the target z_r(t) to form the error e(t). b: biases; IW: input weights; LW: layer weights; TDL: time delay lines.]
Fig. 1. A series-parallel architecture of the NARX model used for road profile reconstruction.
The network was trained by an expanded Levenberg-Marquardt algorithm (with Bayesian regularization) which computes the new weights w_new via the relationship [11]

w_new = w_old + (JᵀJ + λI)⁻¹ Jᵀ ε(w_old)    (2)

where I is the identity matrix, ε(w_old) is an error vector at the previous point, J is a Jacobian matrix (typically consisting of first partial derivatives of the error with respect to the parameters) and λ is a parameter governing the step size. However, this training function expands the cost function by searching not only for the minimal error, but also for the minimal error using the minimal weights. Therefore the cost function becomes [7, 9]

ε = β ε_D + α ε_W    (3)

where ε_D is the sum of squared errors, ε_W is the sum of squares of the network weights, and α and β are parameters of the objective function, compromising between fitting the data and producing a smooth network response. This enables the neural network to perform as well on novel inputs as on the training data.
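A numerical sketch of the update in Eq. (2) on a toy linear least-squares problem, together with the regularized cost of Eq. (3), is given below. Here J is taken as the Jacobian of the model output and ε as the residual (target minus model), so the '+' sign of Eq. (2) drives the error down. This is an assumed minimal illustration, not the paper's training code:

```python
import numpy as np

def lm_step(w, residual_fn, jacobian_fn, lam=1e-2):
    """One Levenberg-Marquardt update per Eq. (2):
    w_new = w_old + (J^T J + lam*I)^(-1) J^T eps(w_old)."""
    eps = residual_fn(w)                   # error vector eps(w_old)
    J = jacobian_fn(w)                     # Jacobian of the model output w.r.t. w
    A = J.T @ J + lam * np.eye(len(w))     # damped Gauss-Newton matrix
    return w + np.linalg.solve(A, J.T @ eps)

# toy linear model y = X @ w, fitted to exact targets
rng = np.random.default_rng(0)
X = rng.normal(size=(30, 3))
w_true = np.array([1.0, -2.0, 0.5])
y_obs = X @ w_true

residual = lambda w: y_obs - X @ w         # eps(w) = target - model
jacobian = lambda w: X

w = np.zeros(3)
for _ in range(20):
    w = lm_step(w, residual, jacobian)

# regularized cost of Eq. (3): small alpha penalizes large weights
beta, alpha = 1.0, 1e-3
cost = beta * np.sum(residual(w) ** 2) + alpha * np.sum(w ** 2)
```

In the Bayesian-regularized form, α and β are re-estimated during training rather than fixed as here; the fixed values above are placeholders for illustration only.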
4. TEST ROAD PROFILES
4.1 Discrete Obstacles
The discrete obstacles are representative of defects such as potholes, bumps and stones. The trapezoidal bump is considered due to its popularity in vehicle model validation tests (Letherwood and Gunter, [12]). Two different trapezoidal bumps are placed along the tracks in two different configurations to construct two different test road surfaces having discrete obstacles. Fig. 2 shows a picture of the trapezoidal bump in frame (a) [5] with diagrams of the two different bumps in frames (b) and (c) at the bottom.
Fig. 2 Trapezoidal bumps as discrete obstacles (a) in picture (b) larger 150 mm hump (c) smaller 100 mm hump.
The two trapezoidal bumps are arranged on the road in two different layouts as shown in Fig. 3. The layout in (a) comprises the larger and smaller bumps symmetrically placed 10.4 m from each other on either side of the track. Thus both tires on the front axle are expected to climb over a bump at the same time, assuming that neither wheel is steered off ahead of the other, as drivers normally do to minimize the effects of severe pitching. In so doing, roll effects are kept to a minimum while pitching effects on the vehicle body are maximized due to superposition of the resultant vibrations on the vehicle body. The second layout in (c) has the two bumps asymmetrically placed 10.4 m from each other, thereby allowing for roll effects. These layouts are considered to represent typical road conditions as imposed by discrete obstacles.
Fig. 3 Layout of roads with discrete bumps (a) symmetric large-small bumps (b) symmetric small bumps only and (c) asymmetric large-small bumps.
4.2 Belgian pave In this study, the Belgian pave represents general road roughness, despite the fact that Breytenbach [5] reports that the Belgian pave at Gerotek is of much higher roughness than that often reported in the vehicle dynamics literature. Fig. 4 shows the Belgian pave in pictures [5] (frames (a) and (c)) and as a displacement PSD of the measured profile in frame (b). Bekker [6] notes that the peak at a spatial frequency of 6 cycles/meter in frame (b) corresponds to a wavelength of 167 mm, which is the average length of the cobbles (frame (a)) in the direction of vehicle travel.
Fig. 4 Belgian pave (a) in detail (b) in PSD of the profiled road and (c) with a travelling vehicle.
5. DESCRIPTION OF THE TEST PROCEDURE
As already mentioned in Section 1, the data used in this paper were collected with the aim of validating mathematical models of a Land Rover Defender 110 [5] that were input into a sensitivity analysis procedure for the optimization of vehicle structural fatigue life.
All tests were conducted on the suspension track for wheeled vehicles at Gerotek. An eDAQ-lite data acquisition system was used as a data logger. Crossbow 4g/V triaxial accelerometers were used because of their good response definition at low frequencies, given that important vehicle ride dynamics fall below a frequency of 25 Hz [13]. A sampling frequency of 1 kHz was used with a linear roll-off anti-aliasing filter set at 333 Hz. Several quantities were measured in the tests, but for the purpose of this study the vertical accelerations, vehicle speeds and time span are of interest. Table 1 shows the parameters of interest. The vehicle speeds were measured by three different methods to ensure repeatability in the measurements. The right front accelerometer was located on the vehicle strut mount 0.12 m away from the right front axle center line, while the rear accelerometers were mounted 0.62 m away from the rear axle center line. The vehicle design made it impractical to mount the accelerometers on the axles directly above the tire centers.
Table 1: Measured parameters of interest

Parameter                               | Transducer
----------------------------------------|-------------------------------------------------------------------------
Time                                    | eDAQ-lite built-in
Vehicle speed                           | VBOX GPS, eDAQ-lite GPS, and proximity probe measuring drive shaft speed
Left rear (LR) vertical acceleration    | Crossbow tri-axial accelerometer
Right rear (RR) vertical acceleration   | Crossbow tri-axial accelerometer
Right front (RF) vertical acceleration  | Crossbow tri-axial accelerometer
Table 2 shows a summary of all the test runs that were conducted. The tests were carried out in random order to reduce the effects of systematic errors in the measurements. The Land Rover was carefully maintained at a constant speed throughout any particular test run; the speed was kept as constant as possible by driving the diesel-powered Land Rover against the engine governor in gear [5]. Table 2 shows that there were eight groups of test runs, depending on the combination of course, suspension setting and vehicle speed. The vehicle suspension was set either to ride comfort (soft) or handling (hard) mode, while the vehicle speed was either 14.5 or 54 km/h. The measured data showed excellent repeatability in all these test runs [5], which allows any one of the data sets to be used for validation and/or training purposes. This idea is further supported by the findings in this paper.
Table 2: Summary of training and test data (the training runs, Tests 01 and 16, were marked in bold and underlined type in the original)

Test No.     | Test Description        | Suspension Setting | Speed (km/h)
-------------|-------------------------|--------------------|----------------------------
01, 05 & 10  | Belgian pave            | Ride comfort       | 14.5 (low range, 1st gear)
02, 06 & 11  | Belgian pave            | Ride comfort       | 54 (low range, 4th gear)
03, 08 & 12  | Belgian pave            | Handling           | 14.5 (low range, 1st gear)
04, 09 & 13  | Belgian pave            | Handling           | 54 (low range, 4th gear)
16, 19 & 22  | Bump course, layout (a) | Ride comfort       | 14.5 (low range, 1st gear)
29, 31 & 34  | Bump course, layout (c) | Handling           | 14.5 (low range, 1st gear)
6. RESULTS AND DISCUSSIONS
The performance of the methodology in the three test conditions is discussed in this section. For all tests, the test road lengths were 100 m for the Belgian pave and 20 m for the discrete obstacles; for consistency, the Belgian pave was therefore also divided into 20 m road segments. Data from Test 01 (first 20 m segment) and Test 16 were used for training the neural network. A 1-20-1 NARX network was employed in this study, meaning that a separate network structure was realized for each input-output pattern of the test runs given above.
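The "error in fit" and correlation figures reported below can be computed as follows. The paper does not state the exact error definition, so an RMS error normalized by the measured profile's RMS value, together with a Pearson correlation, is assumed here on toy profiles; both the metric choice and the test signals are illustrative:

```python
import numpy as np

def fit_metrics(z_true, z_rec):
    """Compare a reconstructed road profile against the measured one.
    Returns (error in fit [%], correlation [%]). The error is the RMS of the
    residual normalized by the RMS of the measured profile (an assumption)."""
    err_pct = 100.0 * (np.sqrt(np.mean((z_rec - z_true) ** 2))
                       / np.sqrt(np.mean(z_true ** 2)))
    corr_pct = 100.0 * np.corrcoef(z_true, z_rec)[0, 1]
    return err_pct, corr_pct

x = np.linspace(0.0, 20.0, 400)                        # a 20 m road segment
z_true = 0.05 * np.sin(2 * np.pi * x / 5.0)            # toy 'measured' profile
z_rec = z_true + 0.005 * np.sin(2 * np.pi * x / 1.0)   # toy 'reconstructed' profile
err, corr = fit_metrics(z_true, z_rec)
```

With the toy signals above the residual is a tenth of the profile amplitude, so the error comes out near 10% with a correlation just under 100%, the same order as the figures reported in this section.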
6.1 Different speeds
In this test case, the data are drawn from the following test runs: Test 01 and Test 05 at a vehicle speed of 14 km/h, and Test 02 at a vehicle speed of 54 km/h. The training data are obtained from Test 01 as above, while the other test runs provide the data for simulation. Fig. 5(a) shows that the network is able to reconstruct the road profiles very accurately, within an error margin of 14% at a correlation level of above 99%. At the vehicle speed of 54 km/h, however, the network performs less accurately, with an error margin of about 24% and a correlation of slightly above 90%. This may be attributed to a smoothing-out effect caused by the vehicle tires skipping over some of the important waves on the road at high speed. Despite this effect, the reconstructed profile in Fig. 5(b) still picks up the prominent peaks on the road well enough to be considered a good representation of the road surface.
Fig. 5 Surface profiles reconstructed when the vehicle is moving at different speeds.
The vehicle suspension was set to ride comfort at both speeds. As shown in Fig. 5, the data were obtained from different road segments in different test runs. In order to ensure repeatability of results, track lines can be carefully marked so that wheel tracks are maintained during driving. In this test, however, the narrow width of the Belgian pave at Gerotek was enough to warrant such consistency. Besides, the vehicle speeds afforded enough control to maintain the wheel tracks on the pavement.
6.2 Different profiles
In this test case the methodology is tested over the Belgian pave and bumps. The data are extracted from the following test runs: Test 01, Test 05, Test 16, Test 22 and Test 32, all at the vehicle speed of 14 km/h and with the suspension set to ride comfort. Test 01 and Test 16 provide the training data, while the others are used for simulation. The results in Fig. 6 show that the profiles were accurately reconstructed, with small lapses at sharp corners such as those of the trapezoidal bumps. The accuracy in reconstructing the Belgian pave is slightly below 14% error in fit and above 99% correlation, while that for the bumps is much better at less than 1% error in fit and above 99% correlation. Three observations can be made about the reconstructed bumps. First, the reconstructed profile tends to curve into the sharp turns at the beginning and end of the trapezoidal bumps. This effect may be caused by the transient vibrations that arise in the neighbourhood of a sharp turn and the neural network's attempt to approximate the most probable path. Second, the reconstructed profile does not peak as high as the actual profile, while at the same time smoothing out the corners at the peaks of the bumps. This effect may be attributed to the filtering effects of the tires and of the neural network itself. The tire can only pick up a crest or peak whose contact length is at least as long as the length of the tire patch in contact with the road at any given point; the length of this contact patch may depend on the size and make of the tire and on the tire pressure. Third, the neural network reconstructs the symmetric bump layout (a) more accurately than the asymmetric bump layout (c). This may warrant further investigation.
Fig. 6 Profile reconstruction given different road profiles (Belgian pave and bumps of different configurations).
6.3 Different Land Rover suspension settings
The following test runs were used: Test 01 and Test 10 in ride comfort mode, and Test 03 in handling mode. All of these tests were conducted at a vehicle speed of 14.5 km/h over the Belgian pave. The results in Fig. 7 show very good accuracy in fit, with an error of less than 5% and a correlation level of above 99% for both the ride comfort and handling modes. This is remarkable considering that the simulation data were collected from road segments other than the one used for training the network.
Fig. 7 Performance of the methodology under different vehicle suspension settings.
7. CONCLUSIONS
In this paper a road profile reconstruction methodology based on measured acceleration data and ANN simulation has been tested on real measured data. The experiment was conducted on a well-instrumented Land Rover Defender 110 driven on suspension tracks at a road-vehicle testing facility whose geometrical profile is also well known. The results show that the proposed methodology is applicable to experimental data, especially at low vehicle speeds. At the lower vehicle speed the road profiles are reconstructed to within 14% error in fit with a correlation level of above 99%. This result was consistent over a number of tests, although for the purposes of this paper only two such tests were presented. At the higher vehicle speed, however, the reconstruction was less accurate, reaching up to 25% error and dropping to 90% correlation. There are no significant differences in the performance of the methodology for different vehicle suspension settings. Some differences in performance are observed for different road profiles: the neural network performs much better over discrete obstacles than on the Belgian pave. This may be expected given the simpler shape of the discrete obstacles.
It is important to note that the methodology performs accurately despite the filtering effects of the tires and of high vehicle speed. These findings provide further motivation for a detailed investigation into its application to arbitrary road-vehicle interaction scenarios where less stringent measures are applied to the test, taking into account the limitation that, due to regularization, the network cannot fit any particular road profile perfectly. Other issues that might be interesting for further study include the extent to which disparities between exact and measured profiles, as well as the effects of bump flexibility, affect the overall performance of this methodology.
REFERENCES
[1] Wannenburg J, Heyns PS, Raath AD. Application of a fatigue equivalent static load methodology for the numerical durability assessment of heavy vehicle structures. International Journal of Fatigue 2009; 31:1541-49.
[2] Ngwangwa HM, Heyns PS, Labuschagne FJJ, Kululanga GK. Reconstruction of road defects and road roughness classification using vehicle responses with artificial neural networks simulation. Journal of Terramechanics 2010; 47:97-111.
[3] ISO. Mechanical vibration - Road surface profiles - Reporting of measured data. ISO 8608:1995(E), International Organization for Standardization, 1995.
[4] Gerotek. http://www.armscorbusiness.com/SubSites/Gerotek/GEROTEK02_04_01.asp, accessed 18/08/2010.
[5] Breytenbach HGA. Optimal vehicle suspension characteristics for increased structural fatigue life. MEng Thesis, Department of Mechanical and Aeronautical Engineering, University of Pretoria, South Africa, 2009.
[6] Bekker CM. Profiling rough terrain. MEng Thesis, Department of Mechanical and Aeronautical Engineering, University of Pretoria, South Africa, 2008.
[7] Bishop CM. Neural networks for pattern recognition. Oxford University Press, Oxford, 1995.
[8] Hugo D, Heyns PS, Thompson RJ, Visser AT. Haul road defect identification and condition assessment using measured truck response. Journal of Terramechanics 2008; 45:79-88.
[9] Mathworks Inc. MATLAB Help Tutorial, 2007.
[10] Wong CX, Worden K. Generalised NARX shunting neural network modelling of friction. Mechanical Systems and Signal Processing 2007; 21:553-572.
[11] Hagan MT, Menhaj MB. Training feedforward networks with the Marquardt algorithm. IEEE Transactions on Neural Networks 1994; 5(6):989-993.
[12] Letherwood MD, Gunter DD. Ground vehicle modelling and simulation of military vehicles using high performance computing. Parallel Computing 2001; 27:109-140.
[13] Gillespie TD. Fundamentals of vehicle dynamics. Society of Automotive Engineers, Inc., Warrendale, 1992.
Electro-optical Property of Sol-gel-derived PLZT7/30/70 Thin Films
Jing-Fung Lin1* Jiann-Shing Jeng2 Wen-Ruey Chen3 1 Department of Computer Application Engineering, Far East University, No.49, Zhonghua Rd., Xinshi Dist., Tainan City 74448, Taiwan ROC 2 Department of Material Science and Engineering, Far East University, No.49, Zhonghua Rd., Xinshi Dist., Tainan City 74448, Taiwan ROC 3 Department of Energy Application Engineering, Far East University, No.49, Zhonghua Rd., Xinshi Dist., Tainan City 74448, Taiwan ROC *Corresponding author: [email protected]
ABSTRACT Pb0.93La0.07(Zr0.3Ti0.7)0.93O3 (PLZT7/30/70) thin films with and without a seeding layer of PbTiO3 (PT) were successfully deposited on indium-doped tin oxide coated glass substrates via a sol-gel process, and a top conducting SnO2 thin film was also sol-gel-derived. The thicknesses of the PLZT and PT layers are 0.5 μm and 24 nm, respectively. The retardance was enhanced by the application of a PT seeding layer and was measured by a newly developed heterodyne interferometer. The Pockels linear electro-optical coefficient of the PLZT film with a PT layer was determined to be 3.17×10⁻⁹ m/V when the refractive index is taken as 2.505, which is one order of magnitude larger than the 1.4×10⁻¹⁰ m/V reported for PLZT12/40/60 in the literature. In comparison, the average transmittance of the PLZT film with a PT seeding layer was 70.3%, higher than the 62.9% of the PLZT film without one. The root-mean-square roughness of the PLZT thin film with a PT layer was just 6.87 nm. Overall, the experimental results imply that the addition of a PT seeding layer to the PLZT film plays a significant role in increasing the retardance, and the PLZT film accordingly exhibits a higher Pockels coefficient.
1. Introduction Recently, increasing interest has been shown in the fabrication of thin films for electronic and optical device applications, such as total internal reflection (TIR) integrated switches, spatial light modulators (SLMs), and electro-optic modulators (EOMs) [1-3]. In particular, a thin-film-type device made of PLZT (lanthanum-modified lead zirconate titanate) materials is required for large-area devices and also for integration with other processes. PLZT ceramics are used in electro-optic applications because of their excellent electro-optic performance, rapid response, and high optical transmittance in the visible wavelength region [4]. High phase retardation is mandatory for all electro-optic applications, especially for certain devices such as optic shutters [5], for which it constitutes the most critical property. To achieve such high phase retardation, either the film needs to be quite thick or the electro-optic coefficient needs to be fairly high. An effective electrode configuration is another prerequisite for obtaining a high phase retardation value [4]. PLZT thin films with various compositions have been extensively investigated during the last decades and have been prepared by various methods, including radio-frequency sputtering, pulsed laser deposition (PLD), electron beam deposition (EBD), chemical vapor deposition (CVD) and sol-gel processing. Among these methods, sol-gel processing is well accepted as a promising method for the preparation of ferroelectric thin films because it offers several advantages over the others: (a) higher deposition rates, (b) good stoichiometry control, (c) larger-area, pinhole-free film deposition and (d) lower initial facility costs and lower processing temperatures [6].
In this paper, the seeding layer of PbTiO3 (PT) was first prepared on an indium-doped tin oxide (ITO) coated glass substrate and then PLZT7/30/70, Pb0.93La0.07(Zr0.3Ti0.7)0.93O3, was deposited on the PT layer. In addition, a conductive SnO2 layer was also fabricated using the sol-gel process and positioned on top of the PLZT film. The principal
axis of the PLZT sample was characterized by a circular heterodyne interferometer in [7], and the retardance was further characterized by the interferometer proposed in this study.
2. Methodology The stock solutions for the PLZT thin film were prepared corresponding to the general formula Pb0.93La0.07(Zr0.3Ti0.7)0.93O3 by means of a methanol-based sol-gel method. Reagent-grade lead acetate trihydrate (Pb(CH3COO)2·3H2O), lanthanum acetate hydrate (La(CH3COO)3·6H2O), titanium isopropoxide (Ti((CH3)2CHO)4), and zirconium n-propoxide (Zr(OC3H7)4) were used as the starting materials. The PLZT precursors were then coated onto the PT seeding layer using a spin coater (Chemat Technology Inc., KW-4A) by (1) 300 rpm for 5 s followed by 2000 rpm for 35 s, or (2) 400 rpm for 5 s followed by 1000 rpm for 35 s, with a total of five depositions being made: three by procedure (1) and two by procedure (2). The films were baked on a hot plate at 100 °C for 5 min between each layer deposition, first to evaporate the solution and then to decompose the compounds, which resulted in an amorphous inorganic film. After deposition, the PLZT films were placed directly into a hot furnace at a temperature of 500 °C. The entire process was repeated numerous times to prepare a thin film; repeating procedures (1) and (2) five times yielded a 0.5 μm film. To prepare the PLZT films with a PbTiO3 (PT) layer, the PT precursor was first coated onto an ITO glass substrate as a seeding layer by spin coating at 500 rpm for 5 s and then 1500 rpm for 35 s, with a single deposition being made. The PT film, of thickness 24 nm, was baked on a hot plate at 100 °C for 5 min and then placed directly into a hot furnace at a temperature of 500 °C; the heating rate was 500 °C per 30 min. Afterwards, the PLZT precursors were coated onto the PT layer by the same method as above to obtain samples of PLZT films with a PT seed layer. The optical configuration used for measuring the retardance of PLZT7/30/70 in this study is shown in Fig. 1. The configuration in Fig.
1 is based on a common-path linear heterodyne interferometer. In Fig. 1, a He–Ne laser light beam of wavelength 632.8 nm passes first through a polarizer, then through an electro-optic (EO) modulator which is regulated by a saw-tooth waveform signal supplied by a function generator. Subsequently, the light beam passes through the sample, and finally through an analyzer. The output light intensity is then detected by a single photodetector. The final signal is processed by a lock-in amplifier in phase-lock mode to extract the magnitude of the retardance of the sample. It is noted that the fast-axis angle is first determined by a developed circular heterodyne interferometer using a phase-lock technique in [8], and the axis of the sample is rotated to 22.5°. According to the Jones matrix formalism, the vector of the electric field emerging from the analyzer in the optical configuration shown in Fig. 1 has the form

E = A(0°)·S(22.5°, β)·EO(90°, ωt)·P(45°)·E_in

  = [1 0; 0 0] · [cos(β/2)+(i/√2)sin(β/2)  (i/√2)sin(β/2); (i/√2)sin(β/2)  cos(β/2)−(i/√2)sin(β/2)] · [e^(−iωt/2) 0; 0 e^(iωt/2)] · (1/2)[1 1; 1 1] · [0; E0]·e^(iω0t),

(1)
where E0 is the amplitude of the incident field, A(0o ) represents the Jones matrix of the analyzer parallel to the x-axis, S (22.5°, β ) represents the Jones matrix of the sample, and β is the retardance of the sample. Furthermore, EO(90o , ω t ) represents the Jones matrix of the EO modulator driven by a saw-tooth voltage waveform at a frequency ω and with its fast axis parallel to the y-axis, and P(45o ) represents the Jones matrix of the polarizer set at 45° to the x-axis. As a result, the intensity of the transmitted light, I , can be expressed as
I = E0²·[2 + (1 − cos β)·cos(ωt) − √2·sin β·sin(ωt)]/8 = Idc + R·cos(ωt − θ),
(2)
where Idc = E0²/4 is the dc component of the output light intensity, and E0² is the intensity of the input light.
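As a numerical cross-check (a sketch added for illustration, not part of the original paper), the Jones-matrix chain of Eq. (1), with the component angles fixed as in Fig. 1, should reproduce the closed-form intensity of Eq. (2) for every retardance β and modulation phase ωt. The global phase e^(iω0t) is omitted since it does not affect the detected intensity:

```python
import numpy as np

def jones_intensity(beta, wt, E0=1.0):
    """Detected intensity from the Jones-matrix chain of Eq. (1)."""
    A = np.array([[1, 0], [0, 0]])                      # analyzer at 0 deg
    c, s = np.cos(beta/2), np.sin(beta/2)/np.sqrt(2)
    S = np.array([[c + 1j*s, 1j*s], [1j*s, c - 1j*s]])  # sample at 22.5 deg
    EO = np.diag([np.exp(-1j*wt/2), np.exp(1j*wt/2)])   # EO modulator at 90 deg
    P = 0.5*np.array([[1, 1], [1, 1]])                  # polarizer at 45 deg
    E = A @ S @ EO @ P @ np.array([0, E0])
    return float(np.sum(np.abs(E)**2))

def closed_form(beta, wt, E0=1.0):
    """Eq. (2): I = E0^2*(2 + (1 - cos b)*cos(wt) - sqrt(2)*sin b*sin(wt))/8."""
    return E0**2*(2 + (1 - np.cos(beta))*np.cos(wt)
                  - np.sqrt(2)*np.sin(beta)*np.sin(wt))/8

# the two expressions agree to machine precision over a grid of (beta, wt)
err = max(abs(jones_intensity(b, t) - closed_form(b, t))
          for b in np.linspace(0.1, 3.0, 7) for t in np.linspace(0.0, 6.0, 7))
```

This check also confirms the amplitude and phase of the ω-component that feed the definitions of R and θ below.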
The notation R represents (Idc/2)·√[(1 − cos β)² + (√2·sin β)²], and θ represents tan⁻¹[−√2·sin β/(1 − cos β)]. Therefore, the retardance of the linearly birefringent medium can be determined by the phase-lock technique as Eq. (3).
β = 2·tan⁻¹(−√2/tan θ). (3)
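A quick round-trip check of Eq. (3) (again an illustrative sketch, not from the paper): generate the lock-in phase θ implied by Eq. (2) for a known retardance and invert it. The inversion is single-valued for retardances between 0° and 180°:

```python
import numpy as np

def lockin_phase(beta):
    # phase of Eq. (2): tan(theta) = -sqrt(2)*sin(beta)/(1 - cos(beta))
    return np.arctan2(-np.sqrt(2)*np.sin(beta), 1 - np.cos(beta))

def retardance(theta):
    # Eq. (3): beta = 2*atan(-sqrt(2)/tan(theta))
    return 2*np.arctan(-np.sqrt(2)/np.tan(theta))

beta_in = np.deg2rad(178.15)   # e.g. the half-wave plate value of Section 3
beta_out = retardance(lockin_phase(beta_in))
```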
Fig. 1 (a) Schematic illustration and (b) a photograph of the heterodyne interferometer for retardance measurement: He-Ne laser → polarizer (45°) → EO modulator (90°) → linearly birefringent sample (22.5°, β) → analyzer (0°) → photodetector → lock-in amplifier → personal computer.
3. Experimental Results After conducting the calibration and fine-tuning procedures, we performed a series of experiments. The first experiment measured the retardance of a half-wave plate (Casix Inc., Model WPF1225-633λ/2) according to Fig. 1. The half-wave plate is used as the linearly birefringent medium in the retardance measurement system illustrated in Fig. 1. As shown in Fig. 2, the average retardance of the half-wave plate over 10 measurements is determined to be 178.15°. The average relative error of the retardance is just 1.03%, which validates the retardance measurement method. Secondly, the retardance of the PLZT samples with and without the PT layer is measured with the principal axis angle set to 22.5°, using the optical setup in Fig. 1. The voltage applied to the PLZT film ranges from 0.1 V to 0.5 V. In Fig. 3, for PLZT without the PT layer, the retardance measurements show that a sinusoidal profile does exist; the difference between the maximum and minimum retardance is 1.19° and the average retardance is 14.94°. Similarly, for PLZT with the PT layer, the difference between the maximum and minimum retardance is 1.42° and the average retardance is
determined as 20.82°. It is noted that when the applied voltage is zero, the PLZT samples both with and without the PT layer exhibit residual retardance.
Fig. 2 Measured retardance of the linearly birefringent medium (a half-wave plate) over 10 measurements.
Fig. 3 Absolute retardance of PLZT with a PT layer (PT+PLZT) and without (PLZT), as a function of applied voltage (0 to 0.5 V).

Subsequently, the birefringence Δn is determined from the wavelength λ, the retardance β, and the thickness d according to Eq. (4). The Pockels longitudinal effective linear electro-optic coefficient, r, is then determined by Eq. (5), as detailed in Ref. [8], on the condition that the thickness is known and taking the refractive index n as 2.505 according to Ref. [8]:

Δn = βλ/(2πd), (4)

r = 2Δn/(n³E) = 2Δn·d/(n³V). (5)
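Eqs. (4) and (5) are simple to evaluate numerically. The sketch below uses illustrative inputs only (the 1.42° retardance change across the 0.5 μm film at 632.8 nm, n = 2.505, and an assumed drive voltage of 0.4 V; the exact inputs behind Table 1 are not restated in the text):

```python
import numpy as np

def birefringence(beta_deg, lam, d):
    """Eq. (4): delta_n = beta*lambda/(2*pi*d); beta converted to radians."""
    return np.deg2rad(beta_deg)*lam/(2*np.pi*d)

def eo_coefficient(delta_n, d, n, V):
    """Eq. (5): r = 2*delta_n/(n^3*E) = 2*delta_n*d/(n^3*V) for E = V/d."""
    return 2*delta_n*d/(n**3*V)

# illustrative numbers only (not a reproduction of Table 1)
dn = birefringence(1.42, 632.8e-9, 500e-9)   # ~5e-3
r = eo_coefficient(dn, 500e-9, 2.505, 0.4)
```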
According to the measured retardance and Eq. (4), the birefringence of PLZT films with and without a PT layer is determined to be 4.8×10⁻³ and 1.9×10⁻³, respectively. The results are close to the
birefringence of about 1×10⁻³ ~ 4×10⁻³ reported for PLZT epitaxial films, which was investigated as a function of lanthanum content [9]. At the same time, the birefringence results are smaller than that of LiNbO3 single crystal (8.4×10⁻²) [9]. As shown in Table 1, the Pockels linear electro-optic coefficients, r, estimated from Eq. (5), of PLZT films with and without a PT layer are determined to be 3.17×10⁻⁹ m/V and 1.25×10⁻⁹ m/V, respectively, when the refractive index is taken as 2.505. The electro-optic coefficients of PLZT films with and without a PT layer are larger than those of bulk PLZTs (523~612 pm/V) [10]; moreover, they are almost one order of magnitude larger than that of polycrystalline dysprosium (Dy) doped lanthanum zirconate-titanate ceramics PLDZT12/40/60 (1.4×10⁻¹⁰ m/V, refractive index 2.518) [8].

Table 1. Birefringence Δn and linear electro-optic coefficient r of PLZT with and without a PT layer.

Sample                 | Birefringence Δn | r (m/V)
PT(24nm)+PLZT(500nm)   | 0.0048           | 3.17×10⁻⁹
PLZT(500nm)            | 0.0019           | 1.25×10⁻⁹
We investigate the effect of the PT layer on the surface morphology of the PLZT film using AFM (VeecoDI, America, AutoProbe CPR-II AP-0100), as shown in Fig. 4. The AFM is used to investigate the surface roughness of PLZT without and with the PT layer over an area of 2 μm × 2 μm. The surface morphology of the PLZT film without a PT layer exhibits a flat surface (Fig. 4(a)). On the other hand, the PLZT film with a seeding layer of PT exhibits a rugged morphology, as shown in Fig. 4(b). The roughness of the film surface is expressed by the root-mean-square (rms) value: the roughness of the PLZT thin film with a PT layer (6.87 nm) is larger than that of the PLZT film without a PT layer (0.80 nm).
Fig. 4 AFM surface images of (a) PLZT and (b) PLZT with a PT layer.

Next, the optical transmittance spectra of the PLZT films as a function of wavelength are shown in Fig. 5, where the interference oscillations are caused by the film structure. In addition to the reference spectrum of the ITO glass, Fig. 5 presents the transmittance spectra of the PLZT films with and without a seeding layer of PT. From the comparison, the average transmittance from 400 to 700 nm of the PLZT film with a PT seeding layer is 70.3%, higher than the 62.9% of the PLZT film without a PT seeding layer. Hence the PT seeding layer plays a key role in increasing the retardance value without introducing additional optical loss into the PLZT film.
Fig. 5 Optical transmittance spectra of the PLZT films (ITO: the ITO conductive film, PLZT: the PLZT film, PT24: the PT seeding layer of 24 nm).
4. Conclusions PLZT7/30/70 ferroelectric films with and without a seeding layer of PT were grown on an ITO glass substrate using a sol-gel method, and a top electrode of SnO2 was formed on the thin film, also by a sol-gel method. The retardance was successfully measured by a new measurement system based on an electro-optic modulated linear heterodyne interferometer. Experimental results show that the Pockels linear electro-optic coefficient of PLZT with a PT layer is 3.17×10⁻⁹ m/V for a refractive index of 2.505, one order of magnitude larger than that of PLZT12/40/60 doped with Dy. The PLZT film with a PT layer also has a good transmittance of 70.3%. Overall, the experimental results imply that the addition of a PT seeding layer to the PLZT film plays a significant role in increasing the retardance, so that the PLZT film exhibits a higher Pockels coefficient. Acknowledgements The authors gratefully acknowledge the financial support provided to this study by the National Science Council of Taiwan under Grant No. NSC 99-2221-E-269-004. References [1]
Preston K. D. and Haertling G. H., “Comparison of electro-optic lead-lanthanum zirconate titanate films on crystalline and glass substrates,” Appl. Phys. Lett. Vol. 60, pp. 2831-2833, 1992. [2] Baude P. F., Ye C., Tamagawa T., and Polla D. L., “Fabrication of sol-gel derived ferroelectric Pb0.865La0.09Zr0.65Ti0.35O3 optical waveguides,” J. Appl. Phys. Vol. 73, pp. 7960-7962, 1993. [3] Title M. A. and Lee S. H., “Modeling and characterization of embedded electrode performance in transverse electro-optic modulators,” Appl. Opt. Vol. 29, pp. 85-98, 1990. [4] Choi J. J., Kim D. Y., Park G. T., and Kim H. E., “Effect of electrode configuration on phase retardation of PLZT films grown on glass substrate,” J. Am. Ceram. Soc. Vol. 87, pp. 950-952, 2004. [5] Taniguchi Y., Murakami K., Kobayashi H., and Tanaka S., “A (Pb,La)(Zr,Ti)O3 (PLZT) polarization-plane rotator with a buried electrode structure for a midinfrared electro-optical shutter,” Jpn. J. Appl. Phys. Vol. 36 (1), pp. 2709-2714, 1997. [6] Kong L. B. and Ma J., “Preparation and characterization of antiferroelectric PLZT2/95/5 thin films via a sol–gel process,” J. Mater. Lett. Vol. 203, pp. 638-642, 2002. [7] Lin J. F. and Lo Y. L., “The new circular heterodyne interferometer with electro-optic modulation for measurement of the optical linear birefringence,” Opt. Commun. Vol. 260, pp. 486-492, 2006. [8] Xiyun H., Yong Z., Xinsen Z., and Pinsun Q., “Structure and electro-optical property of the Dy3+ doped lanthanum zirconate-titanate ceramics,” Acta Optica Sinica Vol. 29, pp. 1601-1604, 2009. [9] Kamehara N., Ishii M., Sato K., Kurihara K., and Kondo M., “Optical properties of epitaxial PLZT thin films,” J. Electroceram. Vol. 21, pp. 99-102, 2008. [10] Haertling G. H. and Land C. E., “Hot-pressed (Pb,La)(Zr,Ti)O3 ferroelectric ceramics for electrooptic applications,” J. Am. Ceram. Soc. Vol. 54, pp. 1-11, 1971.
DECOUPLING SIX EFFECTIVE PARAMETERS OF ANISOTROPIC OPTICAL MATERIALS USING STOKES POLARIMETRY Thi-Thu-Hien Pham and *Yu-Lung Lo Department of Mechanical Engineering, National Cheng Kung University, Tainan, 701, Taiwan * [email protected]
ABSTRACT: An analytical technique based on the Mueller matrix method and the Stokes parameters is proposed for extracting six effective parameters of anisotropic optical materials: the principal axis angle, phase retardance, diattenuation axis angle, diattenuation, optical rotation angle, and circular diattenuation value. The proposed methodology does not require the principal birefringence axes and diattenuation axes to be aligned. In addition, the linear birefringence (LB), circular birefringence (CB), linear diattenuation (LD), and circular diattenuation (CD) properties are all uniquely decoupled within the analytical model. The feasibility of the proposed methodology is demonstrated by measuring the local effective LB, CB, LD, and CD properties of test samples such as a quarter-wave plate, a half-wave plate, a polarizer, and a polarization controller. To the authors’ knowledge, this methodology could be the most comprehensive algorithm for extracting all six parameters of anisotropic optical materials. Thus, this new algorithm has a strong impact on the accurate characterization of hybrid properties in anisotropic optical materials without any purification process of the sample. 1. INTRODUCTION Methods to accurately determine the optical properties of optoelectronic materials or biological samples enable promising future applications in inspection and in therapeutic or diagnostic detection. Some important applications are, for example, linear birefringence (LB) measurements for LCD compensator films, photoelasticity, and tissues; circular birefringence (CB) measurements for diabetics; linear diattenuation (LD) measurements for tumors; and circular diattenuation (CD) measurements for protein structures [1-11]. There are potentially important applications in biomedicine, biochemistry, semiconductors, LCDs, and other related industries. Kaminsky et al.
[2-5] constructed a single microscope for measuring and separating the contributions of LB, LD, CB, and CD through modifications of the optical path and mechanically modulated linearly and circularly polarized light input. They used the Jones calculus to extract the characteristics of optical samples. However, they used different tools for each optical property: the Metripol microscope for LB and LD; the HAUP (high accuracy universal polarimetry) or S-HAUP (scanning high accuracy universal polarimetry) technique for CB; and the CDIM (circular dichroism imaging microscope) for CD. Obviously, the optical parameters are not decoupled in their formulas, which easily causes errors to accumulate. Therefore, the measured results from the sample may easily be contaminated by other, unrelated properties if a purification process is not conducted. Chenault and Chipman [6, 7] proposed a technique to measure the LD and LB spectra of infrared materials in transmission. The intensity modulation that resulted from the rotation of the sample was Fourier analyzed, and the linear diattenuation and linear retardance of the sample were calculated from the Fourier series coefficients at each wavelength. However, in extracting the sample parameters, an assumption was made that the principal birefringence and diattenuation axes were aligned. Chenault et al. [8] proposed a method using an infrared Mueller matrix spectropolarimeter to measure a retardance spectrum for the electro-optic coefficient of cadmium telluride. Retardance spectra were calculated from Mueller matrix spectra, and the electro-optic coefficient was then calculated at each wavelength by a least-squares fit to the resulting retardance as a function of voltage. A Mueller matrix spectropolarimeter was also used to measure an achromatic retarder in transmission, a reflective beamsplitter, and the electro-optic dispersion of a spatial light modulator [9]. Recently, Chen et al.
[10] proposed a technique for measuring the effective LB and LD of an optical sample using a polarimeter based on the Mueller matrix formulation and the Stokes parameters. Lo et al. [11] proposed a similar technique for measuring the effective LB, LD, and CB of an optical fiber. However, the problem of the multiple solutions of the equations in [10, 11] is not solved, which causes mistakes in some special cases. Moreover, the method in [10, 11] can only decouple the birefringence and diattenuation properties; it cannot decouple the LB and CB parameters within the analytical model. In this study, the six parameters of the effective LB, CB, LD, and CD properties are all decoupled and uniquely solved in a
T. Proulx (ed.), Optical Measurements, Modeling, and Metrology, Volume 5, Conference Proceedings of the Society for Experimental Mechanics Series 9999999, DOI 10.1007/978-1-4614-0228-2_44, © The Society for Experimental Mechanics, Inc. 2011
single solution within the analytical model. Thus, the feasibility of accurately measuring the hybrid properties of a sample through six effective parameters is demonstrated. The new algorithm introduced here has several advantages: it solves the multiple-solution problem and maintains accuracy by decoupling all six parameters in the analytical model. Therefore, no purification process of the sample is needed. 2. METHOD OF MEASURING SIX EFFECTIVE OPTICAL PARAMETERS OF AN ANISOTROPIC MATERIAL This section reviews the analytical method presented in [10, 11] for determining the linear birefringence (LB), linear diattenuation (LD) and circular birefringence (CB) properties of an optically anisotropic material utilizing the Stokes parameters and the Mueller matrix formulation. In addition, the Mueller matrix of a CD material [12] with circular amplitude anisotropy value R can be expressed in Eq. (1) as
M_cd = [ 1+R²   0      0      2R
         0      1−R²   0      0
         0      0      1−R²   0
         2R     0      0      1+R² ]

(1)
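A short numerical check of Eq. (1) (an illustrative sketch, not from the paper): applying M_cd to right- and left-handed circular inputs gives detected intensities (1 + R)² and (1 − R)², which is what makes the circular probe states sensitive to R:

```python
import numpy as np

def mueller_cd(R):
    """Eq. (1): Mueller matrix of circular diattenuation with anisotropy R."""
    return np.array([[1 + R**2, 0, 0, 2*R],
                     [0, 1 - R**2, 0, 0],
                     [0, 0, 1 - R**2, 0],
                     [2*R, 0, 0, 1 + R**2]])

R = 0.1
I_rhc = (mueller_cd(R) @ np.array([1, 0, 0, 1]))[0]    # = (1 + R)^2
I_lhc = (mueller_cd(R) @ np.array([1, 0, 0, -1]))[0]   # = (1 - R)^2
```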
Thus, for an anisotropic material with hybrid properties, a total of six effective optical parameters, excluding the scattering factor, need to be extracted, namely the principal axis angle (α), the retardance (β), the optical rotation angle (γ), the diattenuation axis angle (θd), the diattenuation (D), and the circular diattenuation (R). Figure 1 presents a schematic illustration of the model proposed in this study for characterizing the LB, LD, CB, and CD properties of an optically anisotropic material. As shown in Fig. 1, P is a polarizer, Q is a quarter-wave plate, and Ŝc and Sc are the input and output Stokes vectors, respectively. The CB/LB component of the sample is in front of the LD/CD component.
Fig. 1. Schematic diagram of model used to characterize anisotropic material.
The output Stokes vector Sc in Fig.1 can be calculated as
Sc = [S0, S1, S2, S3]ᵀ = [M_lb][M_cb][M_ld][M_cd]·Ŝc = [ m11 m12 m13 m14
                                                        m21 m22 m23 m24
                                                        m31 m32 m33 m34
                                                        m41 m42 m43 m44 ]·[Ŝ0, Ŝ1, Ŝ2, Ŝ3]ᵀ

(2)
Given a knowledge of the input polarization state and the measured values of the output Stokes parameters, the elements m11 ~ m44 provide the means to solve the six effective optical parameters of the anisotropic material. It should be noted that only m11, m12, m13, and m14 are used to calculate the LD/CD values in this study. In the setup shown in Fig. 1, the sample is illuminated using six different input polarization lights, namely four linear polarization lights (i.e. Ŝ0° = [1, 1, 0, 0]ᵀ, Ŝ45° = [1, 0, 1, 0]ᵀ, Ŝ90° = [1, −1, 0, 0]ᵀ, and Ŝ135° = [1, 0, −1, 0]ᵀ) and two circular polarization lights (i.e. right-handed ŜRHC = [1, 0, 0, 1]ᵀ and left-handed ŜLHC = [1, 0, 0, −1]ᵀ). When all of the elements in the Mueller matrix of diattenuation [MD] have been found via experimentation, those in the effective Mueller matrix [MRD] can be obtained from the six output Stokes vectors. Once all of the elements in [MD] and [MRD] are known, those in the Mueller matrix of retardance [MR] can be inversely derived. Therefore, α, β, γ, θd, D, and R of the sample can be obtained, respectively, as
α = (1/2)·tan⁻¹(−A24/A34), (3)

β = tan⁻¹[A34/(cos(2α)·A44)], (4)

γ = (1/2)·tan⁻¹[(−C2·A22 + C1·A23)/(C1·A22 + C2·A23)], (5)

2θd = tan⁻¹{[S45°(S0) − S135°(S0)]/[S0°(S0) − S90°(S0)]}, (6)

D = [S0°(S0) − S90°(S0)] / {cos(2θd)·[(S0°(S0) + S90°(S0)) − (SRHC(S0) + SLHC(S0))/2]}, (7)

R = {[S0°(S0) + S90°(S0)] − [S0°(S0) + S90°(S0) − (SRHC(S0) + SLHC(S0))]/2} / {[SRHC(S0) + SLHC(S0)]/2}, (8)

where Sx(S0) denotes the first element (the detected intensity) of the output Stokes vector measured for input polarization state x.
In summary, α, β, γ, θd, D, and R can be extracted using Eqs. (3) ~ (8), respectively. It is noted that the proposed methodology does not require the principal birefringence axes and diattenuation axes to be aligned. Moreover, the effective parameters are completely and uniquely decoupled within the analytical model. 3. ANALYTICAL SIMULATIONS In this section, the ability of the proposed analytical model to extract the six effective optical parameters of interest over the measurement ranges defined in the previous section is verified using a simulation technique. In performing the simulations, the theoretical values of the output Stokes parameters for the six input lights were obtained for a hypothetical anisotropic sample using the Jones matrix formulation, based on known values of the sample parameters and a knowledge of the input Stokes vectors. The theoretical Stokes values were inserted into the analytical model derived in Section 2 and the effective optical parameters were then inversely derived. Finally, the extracted values of the effective optical parameters were compared with the input values. The ability of the proposed method is evaluated by extracting α, β, θd, D, γ, and R of an anisotropic sample in turn. For each extracted parameter, its input value is varied over the full range (0 ~ 180° for α, θd, and γ; 0 ~ 1 for D and R; 0 ~ 360° for β), while the other input parameters are held fixed at α = 50°, β = 60°, θd = 35°, D = 0.03, γ = 15°, and R = 0.1. For example, to extract the principal axis angle of an anisotropic sample, the input parameters were specified as follows: phase retardance β = 60°, diattenuation axis angle θd = 35°, diattenuation D = 0.4, optical rotation angle γ = 15°, and circular diattenuation value R = 0.1. Figures 2(a) ~ (f) plot the extracted values of α, β, θd, D, γ, and R obtained using Eqs. (3) ~ (8) against the corresponding input values over their full ranges. Good agreement is observed between the input and extracted values of α, β, θd, D, γ, and R, and thus the ability of the proposed method to obtain full-range measurements of these parameters is confirmed.
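The simulation loop described above can be illustrated with a minimal forward-model round trip for the diattenuation parameters (a sketch under stated assumptions, not the paper's code): build the Mueller matrix of a linear diattenuator (a standard textbook form, assumed here), compute the detected S0 intensities for the four linear probe states, and recover θd via Eq. (6); the diattenuation is recovered from the same first-row elements, an equivalent route to Eq. (7):

```python
import numpy as np

def mueller_linear_diattenuator(theta_d, D, Tu=1.0):
    """Standard Mueller matrix of a linear diattenuator with diattenuation D,
    axis angle theta_d (rad), and unpolarized transmittance Tu (assumed form)."""
    q, r = Tu*(1 + D), Tu*(1 - D)        # principal intensity transmittances
    A, B, C = (q + r)/2, (q - r)/2, np.sqrt(q*r)
    c, s = np.cos(2*theta_d), np.sin(2*theta_d)
    return np.array([[A,   B*c,           B*s,           0],
                     [B*c, A*c*c + C*s*s, (A - C)*c*s,   0],
                     [B*s, (A - C)*c*s,   A*s*s + C*c*c, 0],
                     [0,   0,             0,             C]])

M = mueller_linear_diattenuator(np.deg2rad(35.0), 0.4)
probes = {"0": [1, 1, 0, 0], "45": [1, 0, 1, 0],
          "90": [1, -1, 0, 0], "135": [1, 0, -1, 0]}
S0 = {k: (M @ np.array(v))[0] for k, v in probes.items()}  # detected intensities

# Eq. (6): 2*theta_d = atan[(S45 - S135)/(S0 - S90)]
theta_d = 0.5*np.arctan2(S0["45"] - S0["135"], S0["0"] - S0["90"])
# diattenuation from the same first-row elements (equivalent route to Eq. (7))
D = np.hypot(S0["45"] - S0["135"], S0["0"] - S0["90"]) / (S0["0"] + S0["90"])
```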
Fig. 2. Correlation between input value and extracted value of (a) principal axis angle α & α’, (b) phase retardance β & β’, (c) diattenuation axis angle θd & θd’, (d) diattenuation D & D’, (e) optical rotation angle γ & γ’, and (f) value of circular diattenuation R & R’, respectively.
Overall, the results presented in Figs. 2(a) ~ (f) demonstrate that the proposed analytical method yields full-range measurements of all the optical parameters of interest other than the phase retardance. In other words, the method proposed in this study enables both the LB/CB and the LD/CD properties to be obtained. 4. EXPERIMENTAL SETUP AND RESULTS FOR MEASURING EFFECTIVE PARAMETERS OF OPTICAL FIBER Figure 3 presents a schematic illustration of the experimental setup proposed in this study for characterizing the LB, LD, CB and CD properties of an optically anisotropic material. In the experiments, the transmitted light is provided by a frequency-stable He-Ne laser (SL 02/2, SIOS Co.) with a central wavelength of 632.8 nm. Note that in Fig. 3, P is a polarizer (GTH5M, Thorlabs Co.) and Q is a quarter-wave plate (QWP0-633-04-4-R10, CVI Co.), which are used to produce the 0°, 45°, 90°, and 135° linear, right-handed circular, and left-handed circular polarization lights. The output Stokes parameters were determined in accordance with the intensity measurements obtained using a commercial Stokes polarimeter (PAX5710, Thorlabs Co.). Note that the neutral density filter (NDC-100-2, ONSET Co.) and power meter detector (8842A, OPHIT Co.) shown in Fig. 3 are used to ensure that each of the input polarization lights has an identical intensity.
Fig. 3. Schematic illustration of measurement system used to characterize an optical fiber.
The validity of the proposed measurement method was evaluated using two different optical samples, namely a quarter-wave plate (QWP0-633-04-4-R10, CVI Co.) and a polarizer (GTH5M, Thorlabs Co.). The quarter-wave plate and polarizer were chosen specifically to evaluate the performance of the proposed method in measuring the parameters of samples with birefringence and diattenuation properties, respectively. 4.1 Quarter-wave plate as a sample (LB property) Figure 4 illustrates the experimental results obtained for the six effective properties of the quarter-wave plate. A good agreement is observed between the measured values of the principal axis angle and the known values, and the average retardance values
are 90.13°. In Fig. 4, the average standard deviations of the principal axis angle and phase retardance are found to be 0.04° and 0.013°, respectively. It is observed that the extracted values of the diattenuation are smaller than 0.01 for all slow-axis angles of the quarter-wave plate; consequently, the diattenuation axis angle is random from 0° to 180°. Moreover, the values of CB and CD remain very small as the slow-axis angle of the quarter-wave plate is changed.
Fig. 4. Experimental results for the six effective properties of the quarter-wave plate.
4.2 Polarizer as a sample (LD property) Figure 5 presents the experimental results obtained for the six effective parameters of the polarizer. As expected, Fig. 5 shows that the diattenuation of the polarizer has values approximately equal to 1, and a good agreement is observed between the measured diattenuation axis angle and the known values. Moreover, the average standard deviations of θd and D are found to be around 0.007° and 1.42×10⁻⁴, respectively. It is noted that the extraction of the principal axis angle of LB is unreliable when the phase retardance, β, is smaller than 3°. Moreover, the values of CB and CD remain very small as the slow-axis angle of the polarizer is changed.
Fig. 5. Experimental results obtained for the six effective properties of the polarizer.
5. CONCLUSIONS This study has proposed an analytical method based on the Mueller matrix method and the Stokes parameters for extracting six effective parameters describing the linear birefringence, linear diattenuation, circular birefringence, and circular diattenuation properties of an anisotropic optical material. The methodology proposed in this study does not require the birefringence and diattenuation axes of the sample to coincide. In addition, all the effective LB, CB, LD, and CD parameters are uniquely decoupled in the analytical model, and the multiple-solution problem of the previous model [10, 11] is solved. To the authors' knowledge, this methodology could be the most comprehensive Mueller-matrix-based algorithm for extracting all effective parameters of anisotropic optical materials, excluding the scattering factor. Thus, no purification process of the sample is needed, which has great potential for current therapeutic or diagnostic applications such as the analysis of protein structures using CD and of glucose concentrations using CB. ACKNOWLEDGEMENTS The authors gratefully acknowledge the financial support provided to this study by the National Science Council of Taiwan under grant no. NSC96-2628-E-006-005-MY3. REFERENCES [1] Jr. I. Tinoco, C. Bustamante, M. F. Maestre, “The optical activity of nucleic acids and their aggregates,” Ann. Rev. Biophys. Bioeng., 9, 107-141, (1980). [2] W. Kaminsky, K. Claborn, and B. Kahr, “Polarimetric imaging of crystals,” Chem. Soc. Rev., 33, 514-525, (2004). [3] W. Kaminsky, M. A. Geday, J. H. Cedres, and B. Kahr, “Optical rotatory and circular dichroism scattering,” J. Phys. Chem., 107, 2800-2807, (2003). [4] K. Claborn, E. P. Faucher, M. Kurimoto, W. Kaminsky, and B. Kahr, “Circular dichroism imaging microscopy: application to enantiomorphous twinning in biaxial crystals of 1,8-dihydroxyanthraquinone,” J. Am. Chem. Soc., 125, 14825-14831, (2003). [5] A. Yogev, L. Margulies, and Y.
Mazur, “Studies in linear dichroism. III. Application to molecular associations,” J. Am. Chem. Soc., 92 (20), pp 6059-6061, 1970. [6] D. B. Chenault and R. A. Chipman, “Measurements of linear diattenuation and linear retardation spectra with a rotating sample spectropolarimeter,” Appl. Opt. 32, 3513-3519 (1993). [7] D. B. Chenault and R. A. Chipman, “Infrared birefringence spectra for cadmium-sulfide and cadmium selenide,” Opt. Lett. 17, 4223-4227 (1992). [8] D. B. Chenault, R. A. Chipman, and S. Y. Lu, “Electro-optic coefficient spectrum of cadmium telluride,” Appl. Opt., 33, 7382-7389, (1994). [9] E. A. Sornsin, and R. A. Chipman, “Visible Mueller matrix spectropolarimetry,” SPIE, 3121, 156-160, (1997). [10] P. C. Chen, Y. L. Lo, T. C. Yu, J. F. Lin, and T. T. Yang, “Measurement of linear birefringence and diattenuation properties of optical samples using polarimeter and Stokes parameters,” Opt. Exp., 17, 15860-15884, (2009). [11] Y. L. Lo, T. T. H. Pham, and P. C. Chen, “Characterization on five effective parameters of anisotropic optical material using Stokes parameters-Demonstration by a fiber-type polarimeter,” Opt. Exp., 18, 9133-9150, (2010). [12] S. N. Savenko and I. S. Marfin, “Invariance of anisotropy properties presentation in scope of polarization equivalence theorems,” Proc. of SPIE, 6536, 65360G, (2007).
Measurement of Creep Deformation in Stainless Steel Welded Joints
Y. Sakanashi, S. Gungor* and P. J. Bouchard Materials Engineering The Open University, Walton Hall Milton Keynes MK7 6AA, UK *Corresponding author: [email protected]
ABSTRACT This article reports early findings of an experimental programme aimed at determining local creep properties of welded joints made from AISI Type 316H austenitic stainless steel. For this purpose, 3 mm thick, flat cross-weld specimens were cut from a pipe and subjected to creep testing at 550˚C. In order to determine local creep properties around the weld within the gauge section of the specimens, a full field measurement system based on digital image correlation (DIC) technique has been developed. A purpose built furnace with an optical window was used to allow the gauge section of the specimens to be photographed during testing. The influence of the window opening on the temperature distribution inside the furnace was tested using five thermocouples embedded into a dummy specimen. A digital SLR camera with a 200 mm macro lens and an optical fibre illumination was used to acquire the photos. The gauge section of the specimens was sprayed with a high temperature resistant paint to obtain a speckle pattern, which is required by the DIC. The problems associated with the use of DIC at high temperatures, e.g. image distortion due to convective currents, surface oxidation, etc., and the techniques to overcome these are also discussed in the article. Full field displacement measurements allowed the local creep strain in the weld metal, HAZ and the parent material to be determined. INTRODUCTION Creep is one of the main degradation mechanisms affecting the structural integrity of welded steam pipes in power generation plants operating at high temperature. The safety assessment of these components requires the knowledge of the non-uniform creep deformation in and around the welded joints. 
The conventional method to measure the variation of creep properties in a weldment is to cut samples from the weld, the heat affected zone (HAZ) and the unaffected region (parent material) and carry out individual creep tests in which the strain response of the test sample with time is measured using extensometers or high temperature strain gauges. It may not, however, be feasible to extract reasonably sized test samples from the required regions if the weldment is not sufficiently large. Moreover, it may be argued that due to the change in constraint conditions, the behaviour of samples extracted from the weldment will be different from the behaviour of the same material in situ. Therefore, studying the creep behaviour of large samples that include part of the weld, HAZ and parent material, extracted from the weldment, would give more insight into the creep behaviour of the structure. This requires the measurement of strain variation within the gauge length of the test sample during creep testing. The digital image correlation (DIC) technique, which provides full field measurement of surface deformations, has been successfully employed by many researchers to map the strain variation spanning cross-weld specimens during room temperature tensile tests [1-5]. The working principle of DIC is based on sophisticated computational algorithms that track the grey value patterns in digital images of the test surfaces, taken before and after a loading event that produces surface deformations [6]. Lyons et al [7] were the first to demonstrate the capability of DIC to measure strains at high temperatures. They measured free thermal expansion and strains due to tensile loads on Inconel 718 superalloy specimens at temperatures up to 650°C. They overcame the surface oxidation problem by applying a ceramic coating on the surface of the test samples. Liu et al [8, 9] made full field creep measurements on single edge notched fracture mechanics test specimens of
T. Proulx (ed.), Optical Measurements, Modeling, and Metrology, Volume 5, Conference Proceedings of the Society for Experimental Mechanics Series 9999999, DOI 10.1007/978-1-4614-0228-2_45, © The Society for Experimental Mechanics, Inc. 2011
alloy 718 and 800 to measure crack tip deformation. Their experiments were carried out at temperatures of 650°C and 704°C over 146 hours and the results correlated well with finite element calculations. The aim of the present study is to assess the feasibility of the DIC technique for the measurement of full field strains on cross-weld specimens during long term creep tests. A series of creep tests using plain and cross-weld stainless steel (Type 316H) specimens at 550°C and 650°C has been carried out. Three potential issues for obtaining high contrast and stable images have been identified: (i) surface oxidation, (ii) illumination of the sample surface, and (iii) heat haze, which can cause image distortion. This paper reports the initial findings of the tests. EXPERIMENTAL Material The material used in this study was extracted from an AISI Type 316H stainless steel thick section cylindrical butt weld of outer diameter 430 mm and 65 mm wall thickness (Figure 1). The asymmetric weld preparation was designed to produce a fusion boundary aligned with a radial-hoop plane to facilitate extraction of compact tension specimens for creep crack growth tests. The weld was made using a manual metal arc process with 129 passes deposited in 26 layers starting at the root (bottom surface of Figure 1). The hardness map given in Figure 2(a) clearly shows the location of the weld metal, which has a higher yield strength. Figure 2(b) shows the variation in hardness along the mid-thickness line of the weldment at 33 mm from the bottom face. The parent material hardness is approximately constant (170 HV) at distances greater than about 12 mm from the fusion boundary. In the heat affected zone region (< 12 mm from the fusion boundary) the hardness increases to match the higher hardness (approx. 220 HV) in the weld metal.

Fig. 1 Photograph of the AISI Type 316H stainless steel thick section weldment indicating the location and geometry of the cross-weld creep test specimen
The hardness in the weld metal appears to be correlated with the individual weld passes.
Fig. 2 (a) Vickers hardness map of the weldment, and (b) the variation of hardness across the mid-thickness line shown in (a) corresponding to the gauge length of the test specimen.
Specimen design The design of the creep test specimens is given in Figure 3. The specimens were cut from cylindrical blanks extracted from the weldment using electro-discharge machining (EDM). Two types of specimens were extracted: a plain specimen cut from a region away from the weld, and a cross-weld specimen whose gauge length covers approximately one half of the weld and HAZ, as shown in Figure 1. The ends of the specimens had M12 threads for fixing into the testing machine. The gauge sections were machined flat, to 6 mm width and 3 mm thickness, for DIC measurements. Test system Figure 4 shows a schematic of the DIC high temperature creep deformation measurement system. The three-zone furnace was specially manufactured with a porthole through its wall for imaging the sample surface during testing. The camera's field of view of the specimen is restricted by the dimensions of the opening (window), which is 20 mm by 40 mm. The specimen is placed in the middle of the furnace and vertically aligned to give the camera the required view of the gauge section. A 12.3-megapixel digital SLR camera with a 200 mm focal length macro lens was used to photograph the gauge section of the specimen. Images of the specimen surface were acquired at regular intervals during testing using time-lapse photography software (Nikon Camera Control Pro).
Fig. 3 Creep test specimen design
Fig. 4 Schematic drawing showing the DIC creep deformation measurement system
Fig. 5 The deterioration of an EDM surface at 650°C: (a) after 1 day; (b) after 7 days (scale marker: 1 mm)
Surface preparation For room temperature applications, it has been found that the surface roughness produced by EDM creates an adequate speckle pattern under white light illumination [5]. The suitability of an EDM surface for testing Type 316 stainless steel at high temperature was investigated by placing a sample inside the furnace at a temperature of 650°C. It was found that the image contrast deteriorated quickly due to surface oxidation, as can be seen in Figure 5. For this reason, the suitability of a silicon ceramic-based paint was tested. This paint (VHT FlameProof™) is used by the automotive industry on exhaust systems and by the aerospace industry for jet engines, re-entry vehicles and other high temperature applications, and is claimed to withstand temperatures up to 1093°C. In order to obtain an adequate speckle pattern, several combinations of black and white paints were used. The specimens were cleaned in 60% nitric acid for 10 minutes and then the black and white ceramic coating paint was applied. After spraying, the coatings were cured for 30 minutes at temperatures of 121°C, 204°C and 315°C, giving the surface enough endurance for long-term creep testing. Figure 6 shows the image of the speckle pattern on the specimen surface after 6 weeks at 600°C. Compared to the image without surface coating in Figure 5, the speckle pattern is still clear and has a high contrast.

Fig. 6 Appearance of the surface coating after 6 weeks at 600°C

Thermal currents Lyons et al [7] reported that thermal currents between the furnace window and the camera lens caused image distortion created by the variation in refractive index of the local air. To test the effect of the thermal currents on image distortion, a sample with no load was heated to 650°C and held until the temperature stabilised. Then, 28 images of the test surface were acquired at 2-second intervals.
The images were analysed using DIC imaging software [10] to compute any apparent displacement over time. Figure 7 shows the variation of the maximum displacement value in each image from the average value. The variation is of the order of ±0.1 pixel.
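A minimal sketch of this heat-haze check (correlating repeated images of a stationary, unloaded specimen and examining the scatter of the apparent displacement) is given below. The function name and the synthetic displacement maps are illustrative assumptions, not the authors' code.

```python
import numpy as np

def apparent_displacement_scatter(displacement_maps):
    """Quantify heat-haze noise from DIC results of an unloaded, isothermal
    specimen: for each image, take the maximum apparent displacement (pixels)
    and report its deviation from the mean over all images."""
    # displacement_maps: list of 2-D arrays of apparent displacement per image
    max_disp = np.array([np.max(np.abs(d)) for d in displacement_maps])
    return max_disp - max_disp.mean()

# Synthetic check with 28 maps of ~0.1 px noise, mirroring the 28 images taken
rng = np.random.default_rng(0)
maps = [0.1 * rng.random((50, 50)) for _ in range(28)]
dev = apparent_displacement_scatter(maps)
print(dev.round(4))
```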
Fig. 7 The effect of thermal currents on image distortion at 650°C

Test procedure The specimen surface is illuminated using a fibre optic light bundle, coupled to a strobe flash unit triggered by the digital camera. It was found that this set-up provided adequate illumination and the images obtained were of good contrast (Figure 8). The specimen was fixed into the loading frame inside the furnace using a screw joint, and two thermocouples were attached to the specimen, at the top and bottom beyond the gauge length, using small drilled holes. DIC analysis procedure An important parameter in the DIC analysis is the speckle size. If the speckle size is too large, the subset size can be increased to achieve accurate correlation; however, increasing the subset size reduces the spatial resolution [11]. The imaging system was set up to give a pixel size of 10.3 μm and the corresponding speckle size was approximately 50 μm, which was found to give good correlation in the DIC analysis. Images were acquired at 1-hour intervals after the specimen was heated to the test temperature and the load was applied. The images were recorded in uncompressed raw format in the camera and then converted to 8-bit grayscale using commercial software. The converted images were imported into a commercial DIC software package [10] for processing. The procedure for analysing the images is described in [5]. Briefly, the imported images were first corrected for rigid body rotations and the gauge area (approximately 35 mm) was extracted from the camera image to calculate the full field displacement map and strain distribution. The extracted work space was divided into small subsets called interrogation windows and a cross-correlation algorithm was applied to each window. Once the algorithm detected the next window position (deformed subset position), the displacement vector was determined from the distance between the subset centre points. The parameter settings selected for this creep test were: multi-pass, with decreasing window size, starting at 128 × 128 pixels with 50% overlap and 2 iterations, then 64 × 64 pixels with 6 iterations. The displacement results calculated by the DIC software were exported into the general purpose analysis software Matlab. A script developed at the OU was used to determine strains by differentiating seven-point displacement data, as this approach has been found to give results that correlate most closely with finite element results [5].

Fig. 8 An example image of the plane specimen obtained by the DIC system at 650°C
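The strain-determination step, differentiating seven-point displacement data, can be sketched as follows. A common reading is a least-squares straight-line fit over a sliding seven-point window whose slope gives the local strain; this interpretation, the function name and the sample values are assumptions (the original is a Matlab script developed at the OU).

```python
import numpy as np

def strain_from_displacement(u, dx, window=7):
    """Estimate axial strain by fitting a straight line to displacement data
    over a sliding window of `window` points (seven here) and taking its
    slope; the least-squares fit smooths DIC measurement noise."""
    half = window // 2
    x = np.arange(window) * dx
    strain = np.full(len(u), np.nan)
    for i in range(half, len(u) - half):
        # slope of the local linear fit = local strain du/dx
        slope, _ = np.polyfit(x, u[i - half:i + half + 1], 1)
        strain[i] = slope
    return strain

# Uniform 1% strain: u = 0.01 * x, so the recovered strain should be ~0.01
x = np.arange(100) * 0.0103  # assuming the 10.3 um pixel pitch, in mm
u = 0.01 * x
eps = strain_from_displacement(u, dx=0.0103)
print(np.nanmean(eps))  # ~0.01
```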
RESULTS Plain specimen creep test A preliminary creep test at 650°C was carried out on a Type 316H stainless steel specimen machined from the parent material of the weldment. The specimen was loaded to give a nominal stress of 160 MPa in the gauge length and the creep test was carried out over a period of 540 hours until the specimen ruptured. The camera was set to take images of the gauge length at one-hour intervals. The displacement of the gauge section was also measured using a high temperature extensometer. After specimen rupture, the captured images were analysed to map the strain in the gauge section. The strain map thus obtained was used to calculate the total deformation of the gauge section corresponding to the length measured by the extensometer. Figure 9 compares the strain obtained by the two methods; as can be seen, there is very good agreement. However, the DIC results revealed that the strain in the gauge section was not uniform. The specimen rupture occurred near the top of the gauge length, close to the shoulder of the specimen, as can be seen in Figure 10. The variation of strain along the gauge length with creep life was obtained from the DIC strain map and is plotted in Figure 11. The strain was much higher near the top of the gauge, where the specimen ruptured. The temperature variation between the top and bottom parts of the specimen, which was measured to be around 3°C, was believed to be the cause of the non-uniform deformation in the specimen. That is, the strain was concentrated near the top of the gauge length, where the temperature was higher than in the other parts of the specimen. The specimen failed at the point where the strain was the highest.
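Averaging the full-field DIC strain map over the gauge section yields a quantity directly comparable with the extensometer reading; the sketch below, with hypothetical numbers, illustrates how a strongly non-uniform strain field can have a much lower average than its local peak.

```python
import numpy as np

def gauge_average_strain(strain_map):
    """Average a full-field DIC strain map over the gauge section to obtain a
    value comparable with an extensometer, which measures only the overall
    elongation of the gauge length."""
    return float(np.nanmean(strain_map))

# Hypothetical linear strain gradient: local peak of 20% but an average
# near 10%, qualitatively like the behaviour reported for this test
strain = np.linspace(0.20, 0.0, 50).reshape(-1, 1) * np.ones((1, 10))
print(gauge_average_strain(strain))  # ~0.1
```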
Fig. 9 Comparison of creep curves based on the average strain along the gauge length measured by DIC and by the extensometer in the plain parent material specimen tested at 650°C under an applied stress of 160 MPa

The results from this preliminary test are highly significant because they show that the average creep strain used to derive creep deformation curves in conventional creep tests can underestimate the local creep strain associated with rupture. In the present case the local creep strain at rupture (about 20%) was twice the average strain (10%). The results also show the critical influence of temperature on the creep deformation along the gauge length. It is noteworthy that creep test specimens invariably rupture towards the top of the specimen, where the temperature tends to be higher than the mean, and therefore this phenomenon is likely to be important for all creep tests.
Fig. 10 Photograph showing the fracture position in the plain creep specimen design tested at 650ºC under an applied stress of 160 MPa.
[Figure 11 plots strain (%) against distance from the top of the gauge length (mm) at t = 25%, 50%, 75% and 99% of the rupture time.]
Fig. 11 Strain distribution measured by DIC along the centre-line of the plain parent specimen tested at 650ºC under an applied stress of 160 MPa.
Cross-weld specimen creep test The cross-weld specimen was tested at 550°C. Once a uniform temperature along the gauge length (to within ±3°C) was achieved, the load was applied to the specimen in 50 N steps until 540 N was reached, which introduced an applied stress of 300 MPa in the gauge section. At the time of writing, the test had undergone 525 hours without rupture. The acquired
[Figure 12 plots strain (%) and hardness (HV5) against distance from the weld boundary (mm), showing strain profiles after 15, 30, 180 and 525 hours together with the hardness profile.]
Fig. 12 Measured variation in creep strain across the Type 316H stainless steel weldment as a function of exposure time, under an applied stress of 300 MPa at 550ºC. The graph also shows the variation in room temperature (pre-test) hardness along the gauge length.
images were analysed over the 525-hour test duration. Figure 12 shows the strain evolution along the gauge length of the specimen at different times. It is evident that the strain accumulation in the weld section (the left part of the plot) is considerably lower than in the HAZ/parent section. The creep deformation behaviour in the HAZ up to 5 mm from the fusion boundary is similar to that of the weld metal, and then rapidly increases in the parent material to levels four to five times greater. The change in creep deformation behaviour correlates closely with the room temperature (pre-test) hardness profile, which shows a plateau up to 5 mm from the fusion boundary before falling to parent material properties beyond about 12 mm from the fusion boundary. These early results clearly demonstrate the effectiveness of full field strain maps in understanding and quantifying spatially resolved creep deformation properties in weldments at elevated temperatures. Conclusions A high temperature deformation measurement system with digital image correlation has been developed and used to measure surface deformation during creep tests at 550°C and 650°C. A high temperature ceramic-based paint was found to provide an excellent speckle pattern and to retain its appearance at 650°C. Comparison of strain measurements using an extensometer and the DIC technique in a preliminary creep test at 650°C demonstrated that the DIC technique can be used reliably at high temperatures. The technique developed has been used to measure the strain variation in a cross-weld specimen in a creep test at 550°C. Acknowledgements The authors would like to thank British Energy Generation Limited for providing the welded sample and funding the research. Professor Bouchard gratefully acknowledges Royal Society Industry Fellowship support.
References [1] Lockwood WD, Tomaz B, and Reynolds AP "Mechanical response of friction stir welded AA2024: experiment and modelling" Materials Science and Engineering, A 323, 348-353 (2002) [2] Sutton MA, Yang B, Reynolds AP and Yan J "Banded microstructure in 2024-T351 and 2524-T351 aluminum friction stir welds: Part II. Mechanical characterization" Materials Science and Engineering, A 364, 66-74 (2004) [3] Genevois C, Deschamps A and Vacher P "Comparative study on local and global mechanical properties of 2024 T351, 2024 T6 and 5251 O friction stir welds" Materials Science and Engineering, A 415, 162-170 (2006) [4] Kartal M, Molak R, Turski M, Gungor S, Fitzpatrick ME and Edwards L "Determination of Weld Metal Mechanical Properties Utilising Novel Tensile Testing Methods" Applied Mechanics and Materials, 7-8, 127-132 (2007) [5] Acar M, Gungor S, Ganguly S, Bouchard PJ, Fitzpatrick ME "Variation of Mechanical Properties in a Multi-pass Weld Measured Using Digital Image Correlation" Proceedings of the SEM Annual Conference, Albuquerque, New Mexico, USA, Society for Experimental Mechanics Inc. (2009) [6] Sutton MA, McNeill SR, Helm JD and Chao YJ "Advances in two-dimensional and three-dimensional computer vision" in Topics in Applied Physics, 1st ed., P. K. Rastogi, ed. (Berlin: Springer), pp. 323-372 (2000) [7] Lyons JS, Liu J, Sutton MA "High-temperature deformation measurements using digital-image correlation" Experimental Mechanics, 36(1), 64-70 (1996) [8] Liu J, Lyons JS, Sutton M, Reynolds A "Experimental Characterization of Crack Tip Deformation Fields in Alloy 718 at High Temperatures" Journal of Engineering Materials and Technology, 120(1), 71 (1998) [9] Liu J, Sutton M, Lyons JS, Deng X "Experimental investigation of near crack tip creep deformation in alloy 800 at 650°C" International Journal of Fracture, 4363:233-268 (1998)
[10] Strain Master, LaVision GmbH, Anna-Vandenhoek-Ring 19, Gottingen, Germany [11] Acar, M, Gungor, S, Bouchard, P and Fitzpatrick, M. E. (2010). Effect of prior cold work on the mechanical properties of weldments. In: Proceedings of the 2010 SEM Annual Conference and Exposition on Experimental and Applied Mechanics, 7-10 Jun 2010, Indianapolis, Indiana, USA.
Thermal Deformation Measurement in Thermoelectric Coolers by ESPI and DIC Method
Wei-Chung Wang* and Ting-Ying Wu** Department of Power Mechanical Engineering National Tsing Hua University Hsinchu, Taiwan 30013 Republic of China * Professor, [email protected] ** Graduate Assistant, [email protected]
ABSTRACT Based on the Peltier effect, the thermoelectric cooler (TEC) is a device which transfers thermal energy by means of an electric current, so that one side of the module is cooled while the opposite side is heated. Bismuth telluride (Bi2Te3) is the material commonly used for producing the Peltier effect. An array of small bismuth telluride couples is placed between two ceramic plates and bonded to them. Two electric wires are soldered to the electrodes of the TEC and connected to a direct current (DC) power supply. When a DC is applied, heat is moved from one side to the other. Occasionally, the structure of the TEC cracks or even breaks down due to the significant thermal expansion mismatch between the bismuth telluride cubes and the ceramic plates. To ensure the safety of the TEC, it is therefore important to investigate the induced thermal deformation. In this paper, the electronic speckle pattern interferometry (ESPI) and digital image correlation (DIC) methods were used to measure the small and large thermal deformations, respectively, induced in a centrally supported TEC. The results show that the warpage, i.e. the out-of-plane displacement, varies linearly with the temperature difference between the hot and cold plates, irrespective of the state of deformation (expansion or contraction) of the hot and cold plates.
T. Proulx (ed.), Optical Measurements, Modeling, and Metrology, Volume 5, Conference Proceedings of the Society for Experimental Mechanics Series 9999999, DOI 10.1007/978-1-4614-0228-2_46, © The Society for Experimental Mechanics, Inc. 2011

1. INTRODUCTION By adjusting the input current, the thermoelectric cooler (TEC) can conveniently heat or cool structural components.
In addition, TECs have other advantages, including light weight, compact size and low noise. They have therefore been widely used in many industrial applications, including heat dissipation for central processing units, water cooling machines, the aerospace industry and even consumer electronics [1, 2].
As depicted in Fig. 1, a TEC is a sandwich structure consisting of two ceramic substrates, a number of metalized pads, an array of p-type and n-type semiconductors (bismuth telluride, Bi2Te3) and two electric wires. The primary operating principle behind the TEC is the Peltier effect (Fig. 2). When a direct current (DC) is applied to a TEC, a hot side and a cold side result: one plate absorbs heat and the other releases heat. Because of the resulting temperature gradient, thermal stress and thermal deformation are produced inside the TEC due to the significant thermal expansion mismatch between the bismuth telluride cubes and the ceramic plates.
The sandwich structure may fail. In the past, the reliability of the TEC module has been investigated: after thermal stress tests and power cycling tests were performed, failure was found mainly at the solder/substrate interfaces [3]. However, few existing studies of TECs determine the thermal deformation. With the development of high performance thermoelectric materials, higher temperature gradients will be produced. In order to improve the design of the TEC and prolong its lifetime, understanding its thermal deformation is very important. The advantages of optical methods are that they are full-field, non-contact, nondestructive and real-time. Electronic speckle pattern interferometry (ESPI) is a highly accurate measurement method frequently used for small deformation measurement [4, 5]. In recent years, because of fast-improving computers and image-capture devices, the digital image correlation (DIC) method has become a very practical and reliable technique for measurements from the nano-scale to metres [6-8]. The out-of-plane displacements of a 199-pair TEC under smaller currents (0.02 A~0.18 A) were successfully measured by ESPI [9, 10].
The displacements produced by larger currents exceed the measurement range of ESPI. In this paper, the thermal deformation of a 127-pair TEC was investigated. The smaller and larger out-of-plane displacements were measured by the ESPI and DIC methods, respectively. The mutually complementary data obtained from the ESPI and DIC methods help to better understand the thermal deformation of the TEC. The results show that the out-of-plane displacement varies linearly with the temperature difference between the hot and cold plates, irrespective of the state of deformation of the hot and cold plates.
(a) Photograph of the TEC
(b) Structure of the TEC
Fig. 1 The sandwich structure of the TEC [3]
Fig. 2 Schematic diagram of the Peltier effect [1]
2. THEORETICAL BACKGROUND 2.1 THE PRINCIPLE OF OUT-OF-PLANE ESPI In ESPI, a beam splitter is used to separate a laser beam into a reference beam and an object beam, and both beams are projected onto the surface of the specimen. The schematic diagram of the out-of-plane ESPI setup is shown in Fig. 3. When the interference or correlation conditions [11] are fulfilled, the light intensity captured by the CCD camera before deformation, $I_1$, is given by

$I_1 = I_r + I_o + 2\sqrt{I_r I_o}\cos\phi$   (1)

where $I_r$ is the light intensity of the reference beam, $I_o$ is the light intensity of the object beam, and $\phi$ is the phase difference between the reference and object beams. The light intensity after deformation, $I_2$, is given by

$I_2 = I_r + I_o + 2\sqrt{I_r I_o}\cos\left(\phi + \frac{2\pi\delta_o}{\lambda}\right)$   (2)

where $\delta_o$ is the additional phase difference caused by the deformation and $\lambda$ is the wavelength of the incident light. The geometrical relation of the out-of-plane ESPI displacement is depicted in Fig. 3(b). After deformation, point $O$ moves to $O'$. Suppose the displacement component in the $z$ direction is $d_z$ and the in-plane displacement component in the horizontal direction is $d_x$; the phase difference of the object beam $\delta_o$ is then given by

$\delta_o = AO' + O'D = d_z(\cos\alpha + \cos\beta) + d_x(\sin\alpha - \sin\beta)$   (3)

where $\alpha$ and $\beta$ are the angles of incidence and reflection of the object beam, respectively. In this paper the CCD camera is placed perpendicular to the surface of the specimen, as shown in Fig. 3, i.e. $\beta = 0$. In addition, the in-plane displacement is assumed to be smaller than the out-of-plane displacement. Equation (3) therefore reduces to $\delta_o = d_z(\cos\alpha + 1)$. By subtracting images taken before and after deformation, dark interferometric fringes are produced and the corresponding displacements are given by

$d_z = \frac{N\lambda}{\cos\alpha + 1}, \quad N = 0, 1, 2, 3, \ldots$   (4)

where $N$ is the fringe order.
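Equation (4) converts a counted fringe order directly into an out-of-plane displacement. A minimal numeric sketch follows; the He-Ne wavelength and illumination angle are illustrative values, not taken from the paper.

```python
import numpy as np

def espi_out_of_plane_displacement(N, wavelength_nm, alpha_deg):
    """Out-of-plane displacement for fringe order N from Eq. (4):
    dz = N * lambda / (cos(alpha) + 1)."""
    alpha = np.deg2rad(alpha_deg)
    return N * wavelength_nm / (np.cos(alpha) + 1.0)

# Illustrative values: He-Ne laser at 632.8 nm, object beam incidence 30 deg
for N in range(4):
    dz = espi_out_of_plane_displacement(N, 632.8, 30.0)
    print(f"N = {N}: dz = {dz:.1f} nm")
```

At normal incidence (alpha = 0) this reduces to half a wavelength of displacement per fringe.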
(a) Schematic diagram
(b) Geometric diagram
Fig. 3 The setup and geometrical diagram of out-of-plane ESPI

2.2 THE PRINCIPLE OF THE DIC METHOD [12] When the test specimen is deformed by external loads, the characteristic spots on the specimen's surface change. The deformation of the characteristic spots is assumed to be consistent with that of the test specimen, and the gray levels of the characteristic spots are assumed to remain unchanged after deformation. In this paper, the correspondence between the gray-level images before and after deformation was determined using the normalized cross-correlation function. Let the coordinates of the centre of a selected subset be $(x_0, y_0)$ and the coordinates of an arbitrary point be $(x, y)$ before deformation. The relative position before and after deformation is expressed as

$x' = x + u(x, y)$
$y' = y + v(x, y)$   (5)

where $u(x, y)$ and $v(x, y)$ are the horizontal and vertical displacements, respectively. If the subset area is very small and $u(x, y)$ and $v(x, y)$ are expanded around $(x_0, y_0)$ by Taylor's series, then

$x' = x + u_0 + \frac{\partial u}{\partial x}dx + \frac{\partial u}{\partial y}dy$
$y' = y + v_0 + \frac{\partial v}{\partial x}dx + \frac{\partial v}{\partial y}dy$   (6)

where $u_0$ and $v_0$ are the displacements of the subset centre and $\partial u/\partial x$, $\partial u/\partial y$, $\partial v/\partial x$ and $\partial v/\partial y$ are the displacement gradients. Once these six unknown parameters are determined, the displacement information can be obtained. In this paper, the Newton-Raphson method was used to obtain the optimal solutions for the six parameters. Furthermore, to completely describe the deformation, the bicubic interpolation method was used to reconstruct the image. After a standard calibration process and analysis of the two-dimensional images and three-dimensional geometry, the surface morphology and three-dimensional deformation information were obtained.
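The coarse, integer-pixel stage of the subset matching described above, locating the deformed subset by maximising the normalized cross-correlation, can be sketched as below. This is a didactic reconstruction: the paper's implementation additionally uses the Newton-Raphson method and bicubic interpolation for subpixel accuracy, which are omitted here.

```python
import numpy as np

def ncc(a, b):
    """Zero-mean normalized cross-correlation of two equally sized subsets."""
    a = a - a.mean()
    b = b - b.mean()
    return float((a * b).sum() / np.sqrt((a * a).sum() * (b * b).sum()))

def match_subset(ref, deformed, center, half, search=5):
    """Integer-pixel displacement of the subset centred at `center`, found by
    exhaustive NCC search; subpixel refinement would follow in practice."""
    r, c = center
    template = ref[r - half:r + half + 1, c - half:c + half + 1]
    best, best_uv = -2.0, (0, 0)
    for dr in range(-search, search + 1):
        for dc in range(-search, search + 1):
            cand = deformed[r + dr - half:r + dr + half + 1,
                            c + dc - half:c + dc + half + 1]
            score = ncc(template, cand)
            if score > best:
                best, best_uv = score, (dr, dc)
    return best_uv

# Synthetic check: shift a random speckle image by (2, 3) pixels
rng = np.random.default_rng(1)
img = rng.random((64, 64))
shifted = np.roll(np.roll(img, 2, axis=0), 3, axis=1)
print(match_subset(img, shifted, center=(32, 32), half=7))  # (2, 3)
```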
3. TEST SPECIMEN AND EXPERIMENTAL PROCEDURES
3.1 TEST SPECIMEN There are 127 pairs of p-type and n-type semiconductors in the TEC specimen used in this paper. The dimensions of the TEC are 40 mm × 40 mm × 4 mm. To implement the DIC method, black speckles were produced by spraying heat-resistant black paint on the hot plate of the TEC specimen. The highest temperature the paint can resist is 650°C. To acquire the real-time variation of the temperatures, two thermocouples were adhered to the hot and cold plates, respectively (Fig. 4(a)). 3.2 SUPPORT CONDITION In this paper, the support condition of the TEC specimen follows that reported by Chang and Wang [9], but with some modifications. Since the deformation is symmetric about the centre of the TEC, only the geometrical centre of the cold side was adhered rigidly to a pillar (Fig. 4(b)) using cyanoacrylate.
The pillar is made of VICTREX® PEEK™ polymer, a material of high strength (tensile yield strength at 23°C: 99.97 MPa; compressive yield strength at 23°C: 117.97 MPa), low coefficient of linear thermal expansion (2.6 × 10-5 °C-1) and low thermal conductivity (0.25 W m-1 K-1) [13]. The purpose of this arrangement is to ensure that the out-of-plane deformation is produced by thermal loading only. The pillar was then fixed on an optical table to prevent rigid body motion (Figs. 4(b) and 4(c)). 3.3 EXPERIMENTAL CONDITIONS To investigate the thermal deformation when the TEC is used as a cooler and as a heater, two kinds of input currents were used. When the TEC is used as a cooler, the temperatures of the hot and cold plates are maintained higher and lower than the initial room temperature, respectively. To make sure the hot plate is in the expansion state and the cold plate is in the contraction state, the duration of the input current was also controlled. During the ESPI experiments, four levels of input current, i.e. 0.05 A, 0.10 A, 0.15 A and 0.20 A, were applied to the TEC. The duration of each level of input current was 100 sec. During the 3D-DIC experiments, the input currents and durations were 0.7 A for 80 sec, 0.8 A for 70 sec, 0.9 A for 60 sec and 1.0 A for 55 sec.
The aforementioned durations and levels of input current are called State A. The variations of the State A temperatures of the hot plate and cold plate and of the temperature difference with time, for ESPI and the 3D-DIC method, are shown in Figs. 5 and 6, respectively. When the TEC is used as a heater, the temperatures of both the hot and cold plates are higher than the initial room temperature. To make sure both the hot and cold plates are in the expansion state, the durations of the different input currents were also controlled. During the 3D-DIC experiments, a duration of 600 seconds was adopted for the five levels of input current. These durations and levels of input current are called State B. The variation of the State B temperatures of the hot plate and cold plate and of the temperature difference with time for the 3D-DIC method is shown in Fig. 7. All of the experimental conditions are listed in Table 1.
(a) Front view
(b) Side view
(c) Top view
Fig. 4 The setup of the TEC specimen [14]

3.4 EXPERIMENTAL SETUP AND PROCEDURES The schematic diagram of the out-of-plane ESPI is shown in Fig. 3. The speckle images of the hot plate before and after deformation were captured by the CCD camera. The phase shifting technique was used to obtain the full field phase data. With the help of the commercial software IntelliWave [15], the full field deformation information was obtained. The 3D-DIC system (Model VIC-3D) developed by Correlated Solutions [16] was used in this paper. As shown in Fig. 8, the system includes two high resolution (1628 pixel × 1236 pixel) CCD cameras, several camera lenses, a laptop, a tripod with accessories and the software package. To be capable of measuring small deformations and to reduce environmental disturbance, the two CCD cameras were mounted on a guided rail, placed on an angle bracket and clamped to the optical table. In addition, a light emitting diode (LED) [17] was adopted as the light source to improve accuracy. Images of the hot plate before and after applying the DC to the TEC specimen were captured by the two CCD cameras. The full field deformation was obtained by employing the software package to analyse all the images.
Fig. 5 The variation of the State A temperature of the hot plate, cold plate and temperature difference with time (ESPI) [14]
(a) 0.7 A
(b) 0.8 A
(c) 0.9 A
(d) 1.0 A
Fig. 6 The variation of the State A temperature of the hot plate, cold plate and temperature difference with time (3D-DIC method) [14]
Fig. 7 The variation of the State B temperature of the hot plate, cold plate and temperature difference with time (3D-DIC method) [14]
Fig. 8 The setup of the 3D-DIC system [14]

Table 1 Experimental conditions

State                               | State A  | State A                        | State B
Measurement Method                  | ESPI     | 3D-DIC                         | 3D-DIC
Current Range (A)                   | 0~0.2    | 0~0.7/0.8/0.9/1.0              | 0~1.0
Input Time (sec) / Step Current (A) | 100/0.05 | 80/0.7, 70/0.8, 60/0.9, 55/1.0 | 600/0.1
Measurement Area                    | Hot plate (all cases)
4. RESULTS AND DISCUSSIONS
4.1 STATE A Using ESPI, the fringe pattern, wrapped phase and three-dimensional displacement map for 0.05 A and 100 sec are shown in Fig. 9. The ESPI fringe pattern is almost perfectly concentric about the centre of the TEC specimen (Fig. 9(a)). It should be pointed out that the measured displacement is relative rather than absolute, since the central support is not rigidly fixed.
(a) Fringe pattern
(b) Wrapped phase
(c) The three-dimensional displacement map
Fig. 9 Results of the out-of-plane displacement for 0.05 A and 100sec (ESPI) [14]
The whole field displacement results of the 3D-DIC method for 1.0 A and 55 sec are shown in Fig. 10. The geometric diagram of the TEC specimen is depicted in Fig. 11. Since the TEC specimen is symmetrical with respect to the y-axis (x-axis), the in-plane displacement component in the x-direction, u (y-direction, v), is symmetrical with respect to the y-axis (x-axis), as shown in Fig. 10(a) (Fig. 10(b)). The bottom of Fig. 10(b) is where the DC is input, as indicated by the dotted areas in Fig. 1(b) and Fig. 11. In those dotted areas there are no constraints from the metalized pads or the N-type and P-type semiconductors. Therefore, the deformation of the bottom of the hot plate is not affected by the contraction of the cold plate. As for the top part of the hot plate, the constraints of the metalized pads and the N-type and P-type semiconductors cause its deformation to be affected by the contraction of the cold plate. When the cold plate contracts, the magnitude of v in the +y-direction is smaller than in the -y-direction. In other words, the v value at the bottom is larger than that at the top. The expansion of the hot plate and the contraction of the cold plate produce the distribution of the out-of-plane deformation w. The central deformation is the smallest because of the central support. The deformation near the edges is larger, especially at the four corners.
Fig. 10 Results of the full-field displacement for 1.0 A and 55 sec (3D-DIC method) [14]: (a) u; (b) v; (c) w (color-scale extrema shown in the original figure: 0.0044, 0.0046, -0.00455, -0.0054, 0.0007, -0.0193 mm)
Fig. 11 Geometric diagram of the TEC specimen [14]: (a) bottom view; (b) top view
The variation of warpage with the temperature difference between the hot and cold plates, obtained from both ESPI and the 3D-DIC method, is shown in Fig. 12. The least-squares fit equation in Fig. 12 is expressed as
Y31CSA = 0.1828 X31CSA + 0.0087
(7)
where X31CSA is the temperature difference (°C) between the hot and cold plates and Y31CSA is the warpage (μm). The magnitude of the warpage is the difference between the maximum and minimum displacements.
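As a hedged illustration of how the warpage and the linear fit of Eq. (7) can be computed, the sketch below (Python with NumPy; the data points are invented for illustration, not the paper's measurements) takes warpage as the max-minus-min displacement difference and fits warpage against temperature difference by linear least squares:

```python
import numpy as np

def warpage(w_field):
    """Warpage = difference between the maximum and minimum
    out-of-plane displacements over the measured field."""
    w = np.asarray(w_field, dtype=float)
    return w.max() - w.min()

# Hypothetical (temperature difference [degC], warpage [um]) pairs,
# NOT the paper's measured data; chosen to lie on Eq. (7).
dT = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
Y = 0.1828 * dT + 0.0087

# Linear least-squares fit recovers the slope and intercept
slope, intercept = np.polyfit(dT, Y, 1)
print(round(slope, 4), round(intercept, 4))  # → 0.1828 0.0087
```

The same procedure, applied to the combined State A and State B data, yields Eq. (8).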
Fig. 12 The variation of warpage with temperature difference (State A) [14]
4.2 STATE B
To avoid larger errors in the out-of-plane displacement measurement of the 3D-DIC method, the warpage produced by 0.8 A and 1.0 A was investigated.
The DIC images of the full-field displacement distribution for 1.0 A and 600 sec are shown in Fig. 13. The distribution fashion of the three displacement components is similar to that of State A; however, the magnitude of the displacement components of State B is larger than that of State A. In contrast to State A, the magnitude of v at the top is larger than that at the bottom. As shown in Fig. 6, the temperatures of the hot and cold plates are greater than the initial temperature for every level of input current, so both the hot and cold plates are in expansion. As described in Section 4.1, the metalized pads and the N-type and P-type semiconductors provide constraints on the top rather than the bottom corner; therefore, the magnitude of v in the +y-direction is larger than that in the -y-direction. In other words, the magnitude of v on the bottom is smaller than that on the top. The variation of State B warpage with temperature difference is shown in Fig. 14. It can be seen from Fig. 14 that a smaller input current causes lower warpage. The warpage produced by 0.2 A and 0.4 A contains larger error because the lower measurement limit of the 3D-DIC system is reached. Therefore, only the warpage caused by 0.6 A to 1.0 A is considered in the further analysis.
Fig. 13 The DIC images of the full-field displacement distribution for 1.0 A and 600 sec [14]: (a) u; (b) v; (c) w (color-scale extrema shown in the original figure: 0.0109, -0.0093, 0.0118, 0.0131, -0.0093, -0.0138 mm)
Fig. 14 The variation of warpage with temperature difference (State B) [14]
4.3 STATE A AND STATE B
A plot of the least-squares fit of the variation of warpage with temperature difference from both State A and State B is depicted in Fig. 15. The equation of the fit is given by
Y127CS = 0.5948 X127CS + 0.0515
(8)
where X127CS is the temperature difference (°C) between the hot and cold plates and Y127CS is the warpage (μm). It is clear that the tendencies of the warpage induced in State A and State B are essentially the same. In other words, the final warpage produced is independent of the state of deformation (expansion or contraction) of the hot and cold plates. It is interesting to note that there is a gap in the warpage data between temperature differences of 6 °C and 20 °C. The gap arises because the upper measurement limit of ESPI and the lower measurement limit of the 3D-DIC method were reached. Nevertheless, the data obtained from State A and State B are satisfactorily correlated by Eqn. (8).
Fig. 15 The variation of warpage with temperature difference of TEC (State A and State B) [14]
5. CONCLUSIONS
The thermal deformation of the centrally supported TEC of 127 pairs was successfully measured by two optical methods, ESPI and the 3D-DIC method. With the careful design of the pillar mechanism, the thermal deformation measured is contributed by the thermoelectric effects only. Among the three displacement components, the magnitude of the out-of-plane displacement component w is the largest, while the magnitudes of the two in-plane displacement components u and v are rather close. The distributions of u and v are symmetrical with respect to the y-axis and x-axis, respectively. The distribution of w is concentric with respect to the center of the TEC specimen; its magnitude is smallest near the center, grows with increasing distance from the center, and becomes largest at the four corners of the specimen. Based on the findings of this paper, the variation of the warpage with the temperature difference between the hot and cold plates is linear for both State A and State B, and the least-squares fit equation between warpage and temperature difference was obtained.
ACKNOWLEDGMENTS
This paper was supported in part by the National Science Council (grant no. NSC 95-2221-E007-011-MY3), Taiwan, Republic of China.
REFERENCES
[1] Wise Life Technologies, Corp., Taiwan, Republic of China.
[2] S. B. Riffat and X. L. Ma, "Thermoelectrics: A Review of Present and Potential Applications", Applied Thermal Engineering, Vol. 23, pp. 913-935, 2003.
[3] Y. M. Tan, W. Fan, K. M. Chua, Z. F. Shi and C. K. Wang, "Fabrication of Thermoelectric Cooler for Device Integration", Proceedings of Electronics Packaging Technology Conference, pp. 802-805, Grand Copthorne Waterfront, Singapore, 7-9 December, 2005.
[4] B. W. Lee, W. Jang, D. W. Kim, J. H. Jeong, J. W. Nah, K. W. Paik and D. Kwon, "Application of Electronic Speckle Pattern Interferometry to Measure In-plane Thermal Displacement in Flip-chip Packages", Materials Science and Engineering A, Vol. 380, pp. 231-236, 2004.
[5] K. M. Abedin, S. A. Jesmin and A. F. M. Y. Haider, "Construction and Operation of A Simple Electronic Speckle Pattern Interferometer and Its Use in Measuring Microscopic Deformations", Optics & Laser Technology, Vol. 32, pp. 323-328, 2000.
[6] Y. Sun and J. H. L. Pang, "Digital Image Correlation for Solder Joint Fatigue Reliability in Microelectronics Packages", Microelectronics Reliability, Vol. 48, pp. 310-318, 2008.
[7] N. Li, M. A. Sutton, X. Li and H. W. Schreier, "Full-field Thermal Deformation Measurements in A Scanning Electron Microscope by 2D Digital Image Correlation", Experimental Mechanics, Vol. 48, pp. 635-646, 2008.
[8] M. A. Sutton, N. Li, D. C. Joy, A. P. Reynolds and X. Li, "Scanning Electron Microscopy for Quantitative Small and Large Deformation Measurements. Part II: Experimental Validation for Magnifications from 200 to 10,000", Experimental Mechanics, Vol. 47, pp. 775-787, 2007.
[9] Y. L. Chang and W. C. Wang, "Thermal Deformation Measurement in Thermoelectric Coolers by ESPI", Proceedings of SEM XI Congress on Experimental and Applied Mechanics, pp. 162-171, Orlando, Florida, U.S.A., June 5-8, 2008.
[10] W. C. Wang and Y. L. Chang, "Experimental Investigation of Thermal Deformation in Thermoelectric Coolers", accepted for publication in Strain, 2010.
[11] E. Hecht, Optics, Addison Wesley, New York, pp. 386-389, 2002.
[12] T. Y. Chang, "Investigation of Deformation and Mechanical Properties of Artificial Mesh and Animal Fascia by Digital Image Correlation Method", M.S. Thesis (in Chinese), Department of Power Mechanical Engineering, National Tsing Hua University, Taiwan, Republic of China, 2008.
[13] Website: http://www.victrex.com/tc/
[14] T. Y. Wu, "Experimental and Numerical Investigation of Thermal Deformation of Thermoelectric Coolers", M.S. Thesis (in Chinese), Department of Power Mechanical Engineering, National Tsing Hua University, Taiwan, Republic of China, 2010.
[15] IntelliWave (2006) Version 5.008, Engineering Synthesis Design, Inc., Tucson, AZ, U.S.A.
[16] Website: http://www.correlatedsolutions.com/
[17] Website: http://www.moritex.co.jp/home/english/
Structural Health Monitoring Using Digital Speckle Photography
Fu-pen Chiang1, Jian-dong Yu2
1. SUNY Distinguished Professor and Chair, Mechanical Engineering Department, Stony Brook University, NY, 11794; 2. Graduate Student, Mechanical Engineering Department, Stony Brook University, NY, 11794
ABSTRACT
Infrastructure aging is of current national concern and the need to monitor its state of health is paramount. The prevailing monitoring techniques are mostly point-wise and qualitative. In this paper we present some results on using the technique of DSP (Digital Speckle Photography) to monitor the deflection and strain distribution of a model bridge. A truss wood bridge with a span of 10 ft was constructed with pine wood beams of various sizes. A load of 400 lbs was distributed across the span using sand bags to simulate a uniform load. One surface of the wood bridge was painted with a thin coat of retro-reflective paint. A 300 W flood light placed beside a 3008 x 2000 pixel CCD camera was used to illuminate the bridge. Specklegrams before and after the application of load were recorded digitally and analyzed using a special program called CASI to yield the deflection of the entire truss bridge as well as regional local strain distributions. The deflection result was compared with the finite element prediction and the strain results were compared with strain gage readings. Reasonable agreements were obtained.
1. Introduction
Most bridges in this country are aging and require frequent and thorough inspection. According to the United States Department of Transportation (USDOT), the prevailing approaches to health monitoring are visual inspection and instrumented inspection. In addition to being tedious, time consuming and costly, visual inspection by humans tends to be uneven, depending on the inspector's training and experience.
Most instrumented inspection techniques, such as thermal imaging or acoustic techniques, can only be applied to a small portion of the bridge at a time. Mapping the entire bridge using a regional technique is also time consuming and expensive. And yet failure of a bridge tends to start locally (such as the failure of the Minneapolis bridge in 2007) and propagate to other regions, resulting in catastrophe. Thus, it is paramount that techniques be developed such that the entire bridge can be inspected at a "glance", or single observation, and this full-field inspection should be maintained as a function of time. It is the goal of this project to develop such a technique by employing digital speckle photography [1-3].
2. Full Field Mapping Analysis of Deformation of a Wooden Bridge Model
2.1 Equipment and Procedures
We built a wooden bridge made of standard pine wood strips purchased locally. The dimensions of the bridge are shown in Fig. 1. The bridge is 10 ft long, 2 ft wide and 1.25 ft high. The cross sections of the main beam and secondary beam are depicted in Fig. 1 as well. The actual model is shown in Fig. 2.
Figure 1 Sketch of the model wood bridge
T. Proulx (ed.), Optical Measurements, Modeling, and Metrology, Volume 5, Conference Proceedings of the Society for Experimental Mechanics Series, DOI 10.1007/978-1-4614-0228-2_47, © The Society for Experimental Mechanics, Inc. 2011
Figure 2 Picture of the actual model wood bridge
The front surface of the bridge facing the camera is coated with a special paint called retro-reflective paint, which contains embedded minute glass beads about 20-40 µm in size. When illuminated, the surface reflects the light beam back to its source due to the index of refraction of the glass beads, as schematically shown in Fig. 3.
Figure 3 A glass bead redirects light back towards the source
When the light illuminates the bridge, a speckle pattern is generated at the painted surface when viewed along a direction very close to the light source. As a result, one can use a relatively small light source to illuminate a large structure such as a bridge and obtain good speckle patterns for subsequent processing. The experimental setup is shown in Fig. 4a.
Figure 4 Experimental setup for full field bridge deformation mapping: (a) sketch of the experimental setup; (b) picture of the experimental setup; (c) distribution of load from sand bags; (d) sketch of load direction
In the experiment, a CCD camera (Nikon Model D70) with 3008 × 2000 pixel resolution was used to record the full-field image of the speckled wooden bridge. The camera was situated about 36 ft away from the bridge with its optical axis essentially perpendicular to the beam (Fig. 4a). A PC was connected to the camera for the evaluation process, which was carried out by the method of Computer Aided Speckle Interferometry (CASI) [2, 3]. A flood light (300 W type T3 floodlight) was selected as the light source to illuminate the entire bridge (Fig. 4b). The camera and the floodlight were set very close to each other so that the camera could capture the retro-reflected light. A static load was applied by piling 50 lb sand bags on top of the bridge deck; the total distributed load can be up to about 600 lbs (Fig. 4c and 4d). The surface of the bridge treated with retro-reflective paint was photographed by the digital camera, and each digital image of the bridge was divided into subimages of 64 × 64 pixel arrays and then "compared" using the CASI software.
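CASI itself recovers subimage displacements with a complex-spectrum method [2, 3]; the sketch below (Python with NumPy, using a synthetic speckle pattern rather than real specklegrams) illustrates the closely related idea of estimating each 64 × 64 subimage's displacement from the peak of an FFT-based cross-correlation:

```python
import numpy as np

def subimage_shift(ref, cur):
    """Integer-pixel displacement of `cur` relative to `ref`,
    estimated from the peak of their FFT-based cross-correlation.
    (Illustrative only; CASI proper uses a complex-spectrum method.)"""
    R = np.fft.fft2(ref - ref.mean())
    C = np.fft.fft2(cur - cur.mean())
    xcorr = np.fft.ifft2(np.conj(R) * C).real
    dy, dx = np.unravel_index(np.argmax(xcorr), xcorr.shape)
    # shifts beyond half the subimage wrap around to negative values
    if dy > ref.shape[0] // 2:
        dy -= ref.shape[0]
    if dx > ref.shape[1] // 2:
        dx -= ref.shape[1]
    return int(dy), int(dx)

# Synthetic 64 x 64 speckle subimage displaced by (3, -2) pixels
rng = np.random.default_rng(0)
before = rng.random((64, 64))
after = np.roll(before, (3, -2), axis=(0, 1))
print(subimage_shift(before, after))  # → (3, -2)
```

In practice the correlation peak is interpolated to obtain subpixel displacements, which is how values such as the 4.8-pixel deflection quoted below can be reported.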
2.2 Results and Discussions
2.2.1 Full-Field Mapping of Global Deformation of Bridge
Two pictures covering the entire image of the bridge were taken by the CCD camera: one before the load was applied and the other after. In order to correlate the two images in CASI, the two pictures must have the same image size and grayscale depth. The two pictures were then "compared", or "subtracted", in terms of digitized speckle patterns using the CASI program. The displacement vectors thus obtained are shown in Fig. 5 for a total applied load of 400 lbs.
Figure 5 Displacement vectors depicting the deflection of the beam under static weight for the entire bridge
The result in Fig. 5 depicts the map of displacement vectors of the entire bridge, demonstrating not only the technique's high spatial resolution but also the value of the deflection. The vectors represent quantitatively the magnitude and direction of the displacement at each and every point. The magnitude of the deflection is pseudo-colored as depicted in the scale next to the beam. It is seen that the deflection varies from zero at the two supported ends to 4.8 pixels (here, 4.8 pixels correspond to 3.48 mm) at the center of the bridge.
2.2.2 Finite Element Analysis of Global Deformation
Finite element analysis of the bridge was carried out using the ABAQUS 6.8-1 software. The modeled three-dimensional geometry of the bridge is shown in Figure 6.
Figure 6 Geometry model in ABAQUS
The material properties of pine wood (as provided by the manufacturer) are the following: the Young's modulus is 11.6 GPa, Poisson's ratio is 0.3, and the density is 562 kg/m3. The deflection analysis is a linear static step. The distributed load is 400 lb. The boundary conditions are defined as U1, U2 and U3 being zero at the two ends of the bridge, meaning that all translational degrees of freedom are fixed, representing a pin support. The element is a 2-node linear beam in space; in order to capture the shear-flexible effect, the Timoshenko beam element of type B31 is selected. The Abaqus/Standard solver is used, and the resulting bridge deflection is compared with CASI's in Fig. 7, where it is noted that the two results are reasonably compatible.
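As a rough sanity check on the FE result, one can compare against the Euler-Bernoulli closed form for a simply supported beam under a uniformly distributed load, delta_max = 5WL^3/(384EI). The sketch below is illustrative only: the structure is a truss, not a single beam, and the effective second moment of area I_eff used here is an invented value, not a measured section property:

```python
# Mid-span deflection of a simply supported, uniformly loaded beam
# (Euler-Bernoulli; the FE model's Timoshenko B31 elements add shear
# flexibility, which is small for slender members).
L = 10 * 0.3048   # span: 10 ft in metres
W = 400 * 4.448   # total load: 400 lb in newtons
E = 11.6e9        # Young's modulus of pine (Pa), from the text
I_eff = 1.0e-5    # hypothetical effective second moment of area (m^4)

delta = 5 * W * L**3 / (384 * E * I_eff)   # metres
print(round(delta * 1000, 2))              # deflection in mm, ~5.7 with these values
```

With these assumed numbers the formula gives a deflection of the same order as the 3.48 mm measured by CASI; matching it exactly would imply an effective I of roughly 1.6e-5 m^4, which is only the arithmetic consequence of the formula, not a measured property.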
Figure 7 (a) Deflection vectors of the bridge by FEA; (b) bridge main beam deflection from CASI and FEA
2.2.3 Local Mapping of Strain Distribution in an Area of the Bridge
The strain distribution analysis was also carried out and demonstrated. The purpose of the strain calculation is to identify and monitor the critical regions due to stress concentrations. From theoretical calculation or field observation using the systems described in the previous section, one can determine the regions of stress concentration or the likelihood of stress concentration. Once a region (or regions) is identified, whether a small local area or the entire global region, the speckle technique can be deployed to monitor and display the evolution of the strain field with high resolution.
Figure 8 Total displacement vector distribution of the main beam under 400 lbs total load
The example shown in Figure 8 is that of the central area, where the maximum deflection is experienced by the bridge under the static loading. As shown in Fig. 9, four strain gages were mounted onto the surface of the main beam to measure the strain directly.
Figure 9 Strain gage locations on the main beam
Figure 10 Strain distribution of the main beam as obtained by DSP techniques
Figure 10 shows the strain distribution of a part of the main beam. It is noted that while the entire beam is mostly under tension, there are zones of compression on the top of the beam. The strain gage results are: -2 mε (top left), 16 mε (bottom left), -10 mε (top right) and 14 mε (bottom right). Reasonable agreement is obtained.
3. Conclusion
The DSP technique using CASI has been successfully applied to monitoring the deflection and strain distribution of a 10 ft wooden bridge. We find that the deflection and strain values obtained by DSP agree reasonably well with those obtained by FEM analysis and strain gauges, respectively. Some discrepancies between the FEM and CASI results appear near the center of the main beam, where the CASI result shows some oscillation. This is, in part, because the data calculated using CASI carry numerical uncertainties. Another reason for the discrepancy is that the mechanical properties of pine wood used in the FEM simulation were not tested in-house with full accuracy. Since wood is essentially an orthotropic material, it has unique and different properties in all three directions; in order to measure accurate elastic properties, a series of experiments to test the wood properties would need to be conducted. Nevertheless, it is believed the DSP technique can be applied to real bridges and other infrastructure with proper modification of the proposed system.
References
[1] Chiang, F.P. and Asundi, A., "White light speckle method of experimental strain analysis," Applied Optics, 1979, 18(4):409-411.
[2] Chen, D.J. and Chiang, F.P., "Range of measurement of computer aided speckle interferometry (CASI)," Proc. 2nd Int. Conf. on Photomechanics and Speckle Metrology, San Diego, CA, 1554A, July 22-26, 1991, 922-931.
[3] Chen, D.J., Chiang, F.P., Tan, Y.S., and Don, H.S., "Digital speckle displacement measurement using complex spectrum method," Applied Optics, 1993, 32:1839-1849.
Determining the Strain Distribution in Bonded and Bolted/Bonded Composite Butt Joints Using the Digital Image Correlation Technique and Finite Element Methods DAVID BACKMAN, GANG LI and THOMAS SEARS Structures and Materials Performance Laboratory Institute for Aerospace Research National Research Council Canada, Ottawa [email protected]
ABSTRACT Composite primary and secondary aircraft structures have the potential to achieve the same strength and stiffness as conventional metallic structures, but at substantially lower weight. To accomplish this in an efficient and effective manner the joints used to attach these composite structures will be critical. This work investigated the strain distribution in adhesively bonded and hybrid bolted/bonded composite single-strap butt joints using a 2D digital image correlation (DIC) technique and geometrically nonlinear finite element modelling. A high magnification optical setup was used to measure the strains in the through thickness direction, allowing strain measurements in the region of the adhesive and along the bondline in the joint overlap section. This provided a detailed measure of strain flow through the adhesive and into the doubler and adherend under a maximum tensile load of 5338 N (1200 lbf). In all cases a crack developed at the inner overlap section in the hybrid joints and, using DIC techniques, the size of the crack as well as crack tip strains could be determined from the deformed images. The information obtained from the DIC measurements proved valuable in helping to improve the fidelity of the finite element model.
INTRODUCTION Adhesively bonded joints offer several advantages over mechanically fastened joints since they eliminate stress concentrations caused by fastener holes and their stress distribution is relatively uniform. The joint static strength using high strength film adhesives can be significantly greater than in bolted joints [1, 2]. However, the adhesive bondline is still a potential source of weakness. From a modeling perspective, accurately simulating the adhesive stress and strain in this region is crucial for adequately predicting the bonded joint performance under a tensile load. The main objective of this study was to determine the adhesive strain condition using experimental and numerical methods.
EXPERIMENTAL DETAILS
Joint coupons were produced from CYCOM 5276-1 T40-800 3.175 mm wide carbon fibre slit tape made by Cytec Engineered Materials. The material properties for the slit tape were as follows: E11 = 145 GPa, E22 = 8.9 GPa, v12 = 0.31, and G12 = 4.5 GPa. The tensile strength at room temperature was approximately 3000 MPa in the fibre direction (0°) and 90 MPa in the 90° direction (perpendicular to the fibres). The matrix was CYCOM 5276-1 toughened resin. A 6-axis automated fibre placement (AFP) machine was used to lay up the 16- and 24-ply ([45/-45/0/90]ns, n = 2 and 3) laminates. After layup, these panels were cured in an autoclave at a peak temperature of 179 °C and a constant pressure of 586 kPa. The nominal thickness of the cured 16-ply laminates was 2.24 mm. The film adhesive FM300K was used for joint bonding and its thickness was controlled to be 0.17 mm. Two joint cases were made. The case 1 joint was fabricated using identical 16-ply laminates for both the adherends and the doubler, while the doubler of the case 2 joint was 50% thicker. The overall joint configuration and dimensions are shown in Fig. 1. The joint width was 25.4 mm and the overall length was 300 mm. The adhesive fillet length on the adherend was approximately 3.2 mm. The outer unbonded adherend length on each side of the joint was approximately 54.2 mm. Tapered end tabs made from 3.15 mm thick FR4 fibreglass sheet were used for gripping the coupons in the hydraulic grips of the MTS test frame.
T. Proulx (ed.), Optical Measurements, Modeling, and Metrology, Volume 5, Conference Proceedings of the Society for Experimental Mechanics Series 9999999, DOI 10.1007/978-1-4614-0228-2_48, © The Society for Experimental Mechanics, Inc. 2011
The adhesively bonded composite butt joints were loaded in tension using an MTS load frame equipped with a 22.5 kN load cell (model number 5631 and serial number 611.21A-01). Tensile load was applied at a rate of 2 kN/min up to a maximum load of 5338 N (1200 lbf), and then the joint was unloaded. The joints were loaded and unloaded in this manner a total of seven times in order to capture the strains over the entire region of the joint.
Figure 1: Overall dimensions for composite specimens (labels in the figure: doubler, adherend, end tab; width 25.4 mm; joint length 300 mm; 8.5 mm, 27 mm, 45 mm, 54.2 mm, 3.2 mm, 2c = 102 mm, 54.2 mm)
DIGITAL IMAGE CORRELATION A 2D digital image correlation (DIC) system was used to measure strains in the adhesive under various degrees of tensile load. To obtain an adequate spatial resolution, a telecentric lens (TC 23-07 Opto-Engineering Inc) combined with a high resolution (12-bit A/D), cooled CCD camera (PCO Sensicam QE 1376 x 1040 pixels) was used for image capture. To create enough contrast for the image correlation algorithms to determine the displacement and strain fields, the composite coupons were speckled lightly with white paint. This combined with the black of the composite provided adequate contrast for the image correlation algorithms while also providing a clear view of the laminate plies and of the adhesive layer. The optical setup employed provided a spatial resolution of 0.02869 mm/pixel which was too high to capture the entire composite specimen in one image. The full image was obtained by mounting the optical setup on a precision adjustable stand and capturing seven images that spanned the joint length. These images were processed and then concatenated together manually using photo editing software (Paint Shop Pro, Corel Inc).
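As a quick check of the stitching arithmetic (a hypothetical sketch; the exact sensor orientation and view overlap are not stated in the text), the quoted 0.02869 mm/pixel over the camera's 1376-pixel axis gives a field of view of roughly 39.5 mm per image, which is why several overlapping views were needed to span the joint:

```python
import math

mm_per_pixel = 0.02869   # quoted spatial resolution
sensor_px = 1376         # pixels along the stitching axis (assumed)

fov_mm = sensor_px * mm_per_pixel
print(round(fov_mm, 1))  # → 39.5 (mm per image)

def views_needed(length_mm, fov_mm, overlap=0.0):
    """Minimum number of views to span `length_mm` when adjacent
    views overlap by the fraction `overlap` of the field of view."""
    step = fov_mm * (1.0 - overlap)
    return 1 + math.ceil(max(0.0, length_mm - fov_mm) / step)
```

The actual number of views depends on the region of interest imaged and the overlap chosen between adjacent views.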
Figure 2: View of optical setup with coupon mounted in MTS test frame
FINITE ELEMENT MODELING
Three-dimensional finite element (FE) models of the experimental joints were generated using MSC.Patran (pre- and postprocessor) version 2010 and MSC.Marc (solver) version 2008. A total of 24,014 nodes and 20,288 8-node brick elements (16 wedge elements included) were used for the case 1 joint, and a total of 28,665 nodes and 24,432 8-node brick elements (16 wedge elements included) were used for the case 2 joint. The joint was modelled assuming an adhesive thickness of 0.17 mm and included a 0.5 mm inner gap between the two adherends to better reflect the actual joint configuration. One element per lamina was meshed in the thickness direction and two elements were meshed along the adhesive thickness in the overlap section. A relatively fine mesh was created in the joint transition areas in the outer and inner overlap edge regions. Adhesive fillets were simply assumed to be triangular prisms at the outer overlap edges. Due to the limited variation in the stress and strain along the joint width direction, a relatively coarse mesh of eight elements was used in the joint width direction. The boundary conditions used for the joint tensile loading stage were: (i) zero displacements in three directions at the joint left remote adherend edge; (ii) zero displacement in the joint thickness and width directions applied to the joint right remote edge; (iii) a multi-point constraint (MPC) condition applied to the nodes at the right remote edge, ensuring the same longitudinal displacement; (iv) uniform tensile stress applied to the joint right adherend edge; and (v) geometrically nonlinear FE analysis. The variation in the peel strain along the adhesive mid-line on the side surface was plotted and compared with the corresponding experimental DIC measurements.
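The mid-line comparison described above can be sketched as follows (Python with NumPy; the array shape, row indices, and strain values are invented placeholders, not the paper's data): given a full-field peel-strain array, the profile along the bondline is obtained by averaging over the rows spanning the adhesive thickness:

```python
import numpy as np

# Hypothetical full-field peel-strain array from DIC or FEA
# (rows = through-thickness position, columns = position along the
# joint); NOT the paper's data.
strain = np.zeros((100, 400))
strain[48:52, :] = 1500.0   # pretend the adhesive occupies rows 48-51

def midline_profile(field, row_top, row_bottom):
    """Average the strain over the adhesive thickness to get the
    variation along the bondline (one value per column)."""
    return field[row_top:row_bottom, :].mean(axis=0)

profile = midline_profile(strain, 48, 52)
print(profile.shape, float(profile[0]))  # → (400,) 1500.0
```

Plotting the DIC-derived and FE-derived profiles on shared position axes is what produces comparisons like Figures 6 and 7.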
RESULTS AND DISCUSSION
Both the case 1 and case 2 composite joints were subjected to tensile loads of 1779 N (400 lbf) and 5338 N (1200 lbf). Since multiple views were required, each view consisted of an initial reference image at zero load and then a deformed image at the required load level. A final concatenated image of the overall maximum principal strains is shown in Figure 3 for the case 1 joint at both load levels and in Figure 4 for the case 2 joint at both load levels. Validation of the FEA model was performed by comparing the full-field images of the principal strains under both a 1779 N tensile load (Figure 3a and Figure 3b) and a 5338 N tensile load (Figure 3c and Figure 3d). The results show that overall the FEA model very closely matches the strains measured in the composite joint. The strains in the adherend at the center of the overlap region show a small divergence from the measured results, likely resulting from the fact that the FEA model did not model cracking of the adhesive bond in this region.
Figure 3: Maximum principal strain (microstrain) for case 1 bonded joints under two applied tensile loads from both DIC and FEA results: (a) 1779 N - DIC; (b) 1779 N - FEA; (c) 5338 N - DIC; and (d) 5338 N - FEA
The same type of validation was performed for the case 2 joint with the thicker doubler, and a comparison of the results under a 1779 N static load (Figure 4a and 4b) as well as under a 5338 N static load (Figure 4c and 4d) also shows excellent agreement with the experimental DIC measurements. For the case 2 joint, the strains predicted by the FEA model in the adherend near the center of the overlap region are more closely matched than those of the case 1 joint. This is likely because the case 2 joint had less extensive cracking in the adhesive, which in turn was likely due to the thicker doubler used in this joint design. This meant that the joint behaviour in this case was closer to the idealized model used for the FEA simulation.
Figure 4: Maximum principal strain (microstrain) for case 2 bonded joints under two applied tensile loads from both DIC and FEA results: (a) 1779 N - DIC; (b) 1779 N - FEA; (c) 5338 N - DIC; and (d) 5338 N - FEA
Additional corroboration of the FEA results was obtained by extracting line-profile information from two distinct regions of the composite joint: the first in the top portion of the joint through the bottom of the adhesive bond (Figure 5a), and the second in the middle of the joint (Figure 5b) through the center of the adhesive bond line. For corroboration purposes, the peel strain (εxx in the co-ordinate system of Figure 5) was extracted and compared to the data from the DIC measurements. The results for the case 1 joint are shown in Figure 6 and show that the strain fields measured by DIC and predicted by FEA are fairly similar. Due to the cracking in the adhesive bond, the strains in this area could not be measured, resulting in a small gap in the center of the line profile (Figure 6a). It is in this overlap region, close to the adhesive bond, that the largest difference between the two results is seen.
(a)
(b)
Figure 5: Co-ordinate reference frame for (a) top and (b) middle line profile extractions
Figure 6: Peel strain comparison between DIC and FEA results for a case 1 joint in the (a) middle region (microstrain vs. position from center, -10.0 to 10.0 mm) and (b) top region (microstrain vs. position from top, 0.0 to 20.0 mm), subjected to a 5338 N tensile load
A similar comparison was performed for the case 2 joint (Figure 7a and Figure 7b) with fairly similar results. Due to the cracking in the adhesive bond, the strains in this area could not be measured, resulting in a small gap in the center of the line profile (Figure 7a). It is in this overlap region, close to the adhesive bond, that the largest difference between the two results is seen.
Figure 7: Peel strain comparison between DIC and FEA results for a case 2 joint in the (a) middle region (microstrain vs. position from center, -10.0 to 10.0 mm) and (b) top region (microstrain vs. position from top, 0.0 to 20.0 mm), subjected to a 5338 N tensile load
A preliminary experiment was performed using the same adhesively bonded joints, with the inclusion of four fasteners (Composi-LOK, Monogram Aerospace Inc.). The initial strain measurements made using DIC on these bonded/bolted hybrid joints under tensile loading (Figure 8) show significant differences in the strain field when compared to the adhesively bonded joints (Figure 3 and Figure 4). The inclusion of the bolted connections appears to have increased the amount of secondary bending and resulted in a horizontal crack forming in the central overlap region. The adhesive between the two adherends failed completely in both the case 1 joint (Figure 8a) and the case 2 joint (Figure 8b), leaving a 0.5 mm gap between the adherends. As development of the FEA model continues, it appears that taking the failure of the adhesive into account will be key to accurately simulating joint behaviour under a tensile load.
Figure 8: Maximum principal strain (microstrain) from DIC based strain measurement for hybrid bonded/bolted joint tested at 5338 N in the (a) case 1 and (b) case 2 joint configurations
CONCLUSION Two adhesively bonded joints were tested under uniaxial tension with strain measurement in the lamina plies and adhesive performed using digital image correlation. The results from the DIC measurements were then compared to an FEA model of the adhesive joint to determine whether the performance of the FEA model matched that of the actual joint. Overall corroboration between the measurements and simulation was good, with only small differences noted in the central overlap region near the adhesive bond, likely due to cracking in the adhesive that was not taken into account with the FEA model, as well as the measurement resolution within the adhesive thickness at the overlap end regions.
REFERENCES
[1] Kweon, J.H., Jung, J.W., Kim, T.H., Choi, J.H., Kim, D.H., "Failure of carbon composite-to-aluminum joints with combined mechanical fastening and adhesive bonding", Composite Structures, Vol. 75, 2006, pp. 192-198.
[2] Li, G., Chen, J., Alloggia, D., Yanishevsky, M., Benak, T., Moyes, B., and Kay, T., "Study of static strength and failure behaviour of composite single-strap butt joints with different methods of attachment," Institute for Aerospace Research, LTR-SMPL-2010-0136, National Research Council Canada, 2010.
Improved spectral approach for continuous displacement measurements from digital images

Farhad Mortazvi1,2, Martin Lévesque∗1,2 and Isabelle Villemure2,3

1 CREPEC, École Polytechnique de Montréal, Montréal, Canada
2 Department of Mechanical Engineering, École Polytechnique de Montréal, Montréal, Canada
3 Sainte-Justine University Hospital Center, Montréal, Canada
ABSTRACT

Digital Image Correlation (DIC) algorithms capable of determining continuous displacement fields are receiving growing attention in some areas of research over subset-based DIC methods. In particular, in mechanical identification applications where high measurement accuracies are sought, the advantage of continuous displacements is appreciated. Within the framework of inverse problems, the unknown continuous displacements may be expressed as linear combinations of basis functions, e.g. B-Splines or finite element shape functions. In this paper, complementary work has been done to make a spectral decomposition of displacement fields functional, which leads to a fast and memory-efficient approach based on the Fast Fourier Transform (FFT). The main challenge has been to make the method operational for images and displacements with non-periodic boundaries. The approach has been evaluated on artificial data based on computer-generated images and prescribed displacement fields. Comparisons made between the spectral approach and one based on B-Splines and nonlinear optimization prove the superiority of the method in terms of reliability and the required computer resources.

keywords: DIC, Spectral approach, complex displacements, Fast Fourier Transform, B-Splines
1 Introduction
Techniques providing full-field measurements are of particular interest in the field of mechanical identification since they provide multiple strain readings for multiple stress states in a single experiment. DIC is one of the optical measurement techniques enabling full-field measurements. DIC is based on establishing spatial correspondences between two images acquired from the same test specimen in the unloaded state and at the same location in the deformed state. Thus, an abundance of information can be retrieved from the zone of interest (ZOI), as opposed to the traditional strain gauge method, in which information can be obtained only at a limited number of positions. The earliest approach to DIC in solid mechanics [1] relies on a point-wise subset-based correlation scheme. Consequently, the corresponding algorithm results in a discrete displacement field. Despite further improvements in terms of displacement models [2] and data smoothing [3], subset-based displacement models are not capable of capturing strain heterogeneities at small scales. In the past decade, efforts have been made [4, 5] to identify continuous displacements through non-linear minimization of a global dissimilarity criterion. Moreover, the speed and memory consumption can be significantly improved by taking advantage of the sparseness of the Jacobian matrix resulting from basis functions with compact support [5]. Alternatively, other researchers [6, 7] avoided the non-linearity of the problem by using the optical flow approach along with multi-scale iterations. In this regard, an interesting technique in two dimensions [8] considers a spectral decomposition for the displacement field within the framework of a multi-scale iteration. This type of approach leads to a fast and memory-efficient algorithm using the FFT while estimating continuous displacements in a more natural way, i.e. reconstructing the displacement Fourier expansion. Thus, more complex displacement fields, e.g.
those of composite materials at matrix-particle scales, can be determined without significantly increasing the complexity of the correlation procedure. The approach works well with periodic displacement fields [8], a requirement that is rarely met in a real experiment. In this paper, an improved spectral approach is presented with the goal of decreasing the measurement uncertainties as well as making the method operational for non-periodic displacements. The first part of the paper deals with the mathematical

∗ Corresponding author (Dept. Mech. Eng., École Polytechnique de Montréal, C.P. 6079, Succ. Centre-ville, Montréal, Québec, Canada H3C 3A7. Tel.: +1 514 340 4711 Ext. 4857, Fax: +1 514 340 4176, E-mail: [email protected])
T. Proulx (ed.), Optical Measurements, Modeling, and Metrology, Volume 5, Conference Proceedings of the Society for Experimental Mechanics Series 9999999, DOI 10.1007/978-1-4614-0228-2_49, © The Society for Experimental Mechanics, Inc. 2011
formulation of the improved approach within the framework of a minimization problem. The second part is devoted to the evaluation of the algorithm using artificially generated experiments.
2 Theory
One assumes that f(x) and g(x) represent the intensity functions corresponding to the undeformed and deformed images, respectively. In an ideally perfect condition, these two configurations are related through the following relation:

f(x − u(x)) = g(x)    (1)

where u(x) is the vector of the displacement field resulting from the applied loads. In continuum-based image correlation, the displacement field is decomposed over a set of basis functions ψ_k(x) in the Hilbert space:

u(x) = ∑_k u_k ψ_k(x)    (2)

Thus, the unknowns are the series coefficients. Similar to many image correlation techniques, the solution for the unknown coefficient vectors u_k is obtained through minimizing the following norm of residuals:

min_{u_k ∈ C²} ‖r(u_k)‖₂² = ∑_{i∈Ω} r_i² := ∑_{i∈Ω} [f(x_i − u(x_i; u_k)) − g(x_i)]²    (3)
where i represents the indices of image pixels sweeping the entire domain Ω, and C² indicates the two-dimensional complex space. Linearizing the residual vector using the first-order Taylor expansion, the normal equations for the linearized objective function can be written as [9]:

Jᵀ J p_MS = −Jᵀ r    (4)

where p_MS is the vector of unknowns containing the coefficients and J is the Jacobian matrix containing the first partial derivatives of the residual vector r. However, the induced displacements need to be sufficiently small so that the above linearization leads to a meaningful solution. This requirement is met by taking the normal equations back to a sufficiently small scale and gradually enriching the solution within a multi-scale iteration [6]. The same framework was established in the spectral approach [8], which relies on the spectral decomposition of the displacement field. In this paper, a smoothly truncated Fourier decomposition is considered for the displacement, which is written as:

u(x) = ∑_k u_k H_κ(k) exp(i (π/L) k · x)    (5)
where L is the half-size of the image interval and

H_κ(k) = exp(−‖k‖² / (2κ²))    (6)

implies that high-frequency Fourier modes are gradually attenuated; their contribution to the Fourier series is controlled by the cutoff frequency κ. Obviously, the larger κ is, the more details can be reconstructed by the displacement model. Constructing p_MS as the vector containing the Fourier coefficients rearranged as:

p_MS = [p_j]_{j=1,2,3,···} = [u₁ᵀ, u₂ᵀ, u₃ᵀ, ···]ᵀ    (7)
one can form the multi-scale iteration, equation (4), for this problem. The elements of the Jacobian matrix, J_ij, are the first partial derivatives of the residual vector elements, r_i, whose L₂-norm is defined in equation (3). Hence, one writes:

J_ij = ∂r_i/∂p_j = −∇fᵀ(x_i − u(x_i)) · ∂u(x_i)/∂p_j    (8)
Substituting the relations for r, J and u into (4) and using the Fourier transform definition, one obtains the following relation in the Fourier domain:

∑_{k′} H_κ(k) (∇f ⊗ ∇f)˜(k − k′) H_κ(k′) u_{k′} = H_κ(k) (r ∇f)˜(k)    (9)
where ⊗ indicates the dyadic product and (˜) indicates the discrete Fourier transform. Similar to 1D image velocimetry [10], one can slightly modify H_κ(k) inside the series into H_κ(k − k′) provided κ ≪ L. Thus, the series on the left-hand side of the equation becomes a convolution product between the dyadic product tensor and the sequence of u_k vectors, both of which are filtered in the frequency domain using H_κ. The main interest of the spectral approach lies in the fact that instead of directly solving the above equation in the frequency domain, one may bring the calculations back to the spatial domain. Thus, the convolution product turns into a simple product in the real space. In doing so, equation (9) is transformed into the following practical form:

(∇f ⊗ ∇f)ˆ · u(x) = ((f − g)∇f)ˆ(x)    (10)

where (ˆ) denotes a low-pass filtered function using H_κ. The above linear system holds at each point in space, and the two-by-two linear algebraic system is solved analytically, giving rise to an explicit equation for the displacement field. Thus, the approach leads to a fast and memory-efficient algorithm. It should be noted that the assumption of κ ≪ L (small cutoff frequency) is an important requirement to be met at each of the multi-scale iterations. The reason is that the dyadic product tensor field is a rank-deficient matrix unless it is low-pass filtered with a sufficiently small cutoff frequency. Therefore, a successful implementation of the approach starts with a small κ at the lowest scale and gradually increases it so that κ reaches its presumed value at the final iterations.
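The pointwise solve of equation (10) can be sketched in a few lines of numpy. The function names `gaussian_lowpass` and `spectral_update`, the FFT frequency convention, and the small-determinant guard are illustrative assumptions, not the authors' MATLAB implementation:

```python
import numpy as np

def gaussian_lowpass(field, kappa):
    """Apply H_kappa(k) = exp(-|k|^2 / (2 kappa^2)) to a 2-D field via the FFT."""
    ky = np.fft.fftfreq(field.shape[0]) * 2 * np.pi  # angular frequency, rad/sample
    kx = np.fft.fftfreq(field.shape[1]) * 2 * np.pi
    KX, KY = np.meshgrid(kx, ky)
    H = np.exp(-(KX**2 + KY**2) / (2 * kappa**2))
    return np.real(np.fft.ifft2(np.fft.fft2(field) * H))

def spectral_update(f, g, kappa):
    """One linearized correlation step: solve the filtered 2x2 system
    (grad f ⊗ grad f)ˆ · u = ((f - g) grad f)ˆ  at every pixel, as in eq. (10)."""
    fy, fx = np.gradient(f)
    # Low-pass filtered dyadic products (left-hand side) and right-hand side
    a11 = gaussian_lowpass(fx * fx, kappa)
    a12 = gaussian_lowpass(fx * fy, kappa)
    a22 = gaussian_lowpass(fy * fy, kappa)
    b1 = gaussian_lowpass((f - g) * fx, kappa)
    b2 = gaussian_lowpass((f - g) * fy, kappa)
    # Analytic solution of the two-by-two system at each point
    det = a11 * a22 - a12**2
    det = np.where(np.abs(det) < 1e-12, 1e-12, det)  # guard near-rank-deficient points
    u1 = (a22 * b1 - a12 * b2) / det
    u2 = (a11 * b2 - a12 * b1) / det
    return u1, u2
```

Only FFTs of image-sized arrays and elementwise arithmetic are involved, which is what makes the approach fast and memory-efficient.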
2.1 Non-periodic displacement fields
The main drawback of a spectral approach is that it relies on a periodic expression for the displacement field in order to take advantage of Fourier domain calculations. Consequently, for a non-periodic displacement, significant errors are produced at image boundaries. This may sound like a restriction since, in real experiments, rarely does one have a periodic displacement field. However, the difficulty can be overcome to a considerable extent using a prior estimation of the non-periodic part by another correlation scheme. For example, Roux et al. [10] estimated a linear transform in 1D using a multi-scale algorithm to account for the non-periodic part. However, for displacements deviating from linear behavior, this will not efficiently remove the boundary errors. An alternative would be using a coarse-mesh subset-based DIC to determine discrete displacements, which could be fit to an analytic surface, e.g. B-Splines. Although the latter seems practical, the fact that the grid points in a subset-based DIC cannot approach the boundaries necessitates an extrapolation of the fitting surface so as to generate displacement data at the edges. Therefore, a more robust approach has been adopted herein that determines a smooth B-Splines estimation of the non-periodic part using a multi-scale non-linear optimization scheme. More precisely, the method starts from a coarse scale and tries to estimate the displacement vector using the following B-Splines model, written in tensorial form:

u_np(x) = B₃ ⊗ B₃(x) : P    (11)

where u_np(x) is the B-Splines model of the non-periodic part, B₃ is the vector of 1D cubic B-Spline basis functions, and the 3rd-order tensor P corresponds to the control variables of the displacement vector. These variables are determined by minimizing an objective function similar to equation (3) using the Levenberg-Marquardt algorithm [9].
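For orientation, one component of the tensor-product model in equation (11) can be evaluated with cardinal cubic B-splines on a uniform knot grid. The helper names (`b3`, `eval_displacement`) and the control-grid layout below are assumptions made for this sketch, not the authors' code:

```python
import numpy as np

def b3(t):
    """Cardinal cubic B-spline basis on uniform knots (support |t| < 2)."""
    t = np.abs(np.asarray(t, dtype=float))
    out = np.zeros_like(t)
    m1 = t < 1
    m2 = (t >= 1) & (t < 2)
    out[m1] = (4 - 6 * t[m1]**2 + 3 * t[m1]**3) / 6
    out[m2] = (2 - t[m2])**3 / 6
    return out

def eval_displacement(x, y, P, spacing):
    """Evaluate u_np(x, y) = sum_ij P[i, j] B3(x/h - i) B3(y/h - j)
    for a grid of control variables P and uniform knot spacing h."""
    n_i, n_j = P.shape
    u = np.zeros(np.broadcast(x, y).shape)
    for i in range(n_i):
        for j in range(n_j):
            u += P[i, j] * b3(np.asarray(x) / spacing - i) * b3(np.asarray(y) / spacing - j)
    return u
```

Because the basis has compact support, the Jacobian of the fit with respect to P is sparse, which is the property the nonlinear optimization exploits.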
The variables thus found are then used as initial values for the optimization at a higher scale, and the procedure continues until it reaches the original scale. The only difference herein from the B-Spline approach developed by Cheng et al. [5] is that no multi-scale scheme was adopted in that study. Consequently, their optimization requires that the initial guess of the variables be as close as ±3 pixels to the optimal displacement. This restriction is removed by initiating the optimization from a sufficiently small scale where the real displacements are within the mentioned range. On the other hand, the present scheme also differs from the multi-scale approach in [7], based on Non-Uniform Rational B-Splines (NURBS), in which the objective function was linearized following the optical flow approach. As a result, only a few multi-scale iterations are required herein to obtain a meaningful displacement at the final iteration.
2.2 Implementation
The improved spectral approach was implemented in MATLAB. Scaling in the Fourier domain is used for the multi-scale iterations so as to be consistent with the whole spectral approach. To evaluate the undeformed image f at subpixel locations x̌_i, a four-point cubic interpolation [11] in two dimensions is used. It should be noted that at each iteration, the algorithm verifies the positive definiteness of the dyadic product tensor field. If the tensor fails to meet this requirement at any point, the cutoff frequency is reduced by one step to guarantee that the tensor will be positive definite. Finally, the algorithm stops when the relative change in the objective function value does not exceed a certain limit or when the iterations reach the original scale.
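The iteration logic described above might be organized as in the following skeleton. The function name, the cutoff schedule, and the relative-change stop test are sketched assumptions rather than a transcription of the MATLAB code; the positive-definiteness check and cutoff back-off are delegated to the supplied update step:

```python
import numpy as np

def multiscale_correlate(f, g, update_step, kappa_schedule, tol=1e-6, max_iter=50):
    """Multi-scale driver: start at a small cutoff frequency and enrich the
    displacement field as kappa grows (illustrative skeleton only).

    update_step(f, g, u1, u2, kappa) must return the refined (u1, u2) and a
    residual norm for the dissimilarity ||f(x - u) - g||.
    """
    u1 = np.zeros_like(f, dtype=float)
    u2 = np.zeros_like(f, dtype=float)
    for kappa in kappa_schedule:  # e.g. a geometric ramp up to the preset cutoff
        prev = None
        for _ in range(max_iter):
            u1, u2, res = update_step(f, g, u1, u2, kappa)
            # Stop when the relative change in the objective is below tol
            if prev is not None and abs(prev - res) <= tol * max(prev, 1.0):
                break
            prev = res
    return u1, u2
```

Starting with a small κ keeps the linearization of the residual valid; the final schedule entry corresponds to the presumed cutoff frequency.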
Fig. 1: Artificially generated image for evaluation purposes: 512×512 pixel 16-bit gray-scale image with 3500 speckles and an average speckle size of 3 pixels
3 Simulated experiments
In order to evaluate the functionality of the algorithm, computer-generated experiments are organized. First, artificial images with random speckle-like intensity patterns are generated using the following analytic function:

f(x) = ∑_{p=1}^{N} I_p exp[−(x − x_p)ᵀ W (x − x_p)]    (12a)

W = [ 1/(2A_p²)   0
      0           1/(2B_p²) ]    (12b)
where I_p, x_p, A_p, B_p are random sequences, N is the total number of speckles, and A and B are the representative speckle sizes in the two directions. Thus, the image characteristics can be controlled by the above parameters. Furthermore, the analytic definition of the intensity pattern allows one to ideally transform the image by a preset displacement field to generate a deformed image, hence avoiding interpolation bias [12]. Fig. 1 shows the artificial image generated for the evaluations presented in this paper. The intensities have been quantized to 16 bits to minimize uncertainties caused by image quantization. Finally, the duly provided undeformed and deformed images are used to measure the displacement field, which is subsequently compared with the accurate preset displacement values to evaluate the measurement uncertainties. On this basis, two types of preset displacements were considered in this study, giving rise to two test cases, namely Case I and Case II, as follows:

Case I: In the first case, the computer-generated image is artificially deformed in the horizontal direction by a B-Spline function with random variables, shown in Fig. 2(a).

Case II: This is a more complex displacement similar to those seen in real experiments. Precisely, the second case is constructed by superposing the first case and a polynomial as follows:

u₁ᴵᴵ(x₁, x₂) = u₁ᴵ(x₁, x₂) + (1/4) a x₁^{3/2}    (13)
The coefficient a is used to scale the second part such that it starts from 0 and ends with 20 pixels at the right boundary (see Fig. 2(b)). In order to make sure that the displacements are sufficiently far from any boundary errors, the measurement uncertainty was calculated excluding an 80-pixel band near the boundaries.
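A speckle generator following equation (12) can be written directly as a sum of anisotropic Gaussian spots. The function name and the parameter ranges for intensities and size scatter below are illustrative guesses, not the values used to produce Fig. 1:

```python
import numpy as np

rng = np.random.default_rng(0)

def speckle_image(size=512, n_speckles=3500, mean_radius=1.5, levels=2**16):
    """Synthetic speckle pattern per eq. (12): a sum of Gaussian spots with
    random intensity I_p, centre x_p, and sizes A_p, B_p, quantized to 16 bits."""
    y, x = np.mgrid[0:size, 0:size].astype(float)
    img = np.zeros((size, size))
    for _ in range(n_speckles):
        I = rng.uniform(0.3, 1.0)                  # random intensity I_p
        xp, yp = rng.uniform(0, size, 2)           # random centre x_p
        A, B = rng.uniform(0.5 * mean_radius, 1.5 * mean_radius, 2)  # sizes A_p, B_p
        img += I * np.exp(-((x - xp)**2 / (2 * A**2) + (y - yp)**2 / (2 * B**2)))
    img = np.clip(img / img.max(), 0.0, 1.0)
    return np.round(img * (levels - 1)).astype(np.uint16)  # 16-bit quantization
```

Because the intensity pattern is analytic, a deformed image can be produced exactly by evaluating the same sum at displaced centres, which avoids interpolation bias.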
Fig. 2: Preset displacements for (a) Case I, random B-Splines, and (b) Case II, superposition of Case I and the polynomial a x₁^{3/2}. The vertical components of the displacements are preset to zero
4 Results and discussion
Since the accuracy of a DIC measurement strongly depends on the image properties, i.e. intensity variations, smoothness, quantization, etc., uncertainty evaluations are often limited to specific case studies and therefore may not be generalized to other possible applications. Hence, one useful strategy is to evaluate the measurement accuracy in comparison to a benchmark approach whose functionality has already been evaluated. For the case of the improved spectral approach, the benchmark algorithm must be capable of reconstructing complex displacements in addition to rigid body movements or affine transforms. Continuum-based approaches such as those based on FE shape functions [4, 6] or B-Splines [5, 7] are possible choices for this purpose. In this study, we have chosen the latter, adopted from the work of Cheng et al. [5]. The B-Splines have been shown to have superior convergence properties compared to FE shape functions [7]. Furthermore, the non-linear sparse least-square algorithm implemented in this approach provides a robust and memory-efficient benchmark for our purpose. For the sake of simplicity, the improved spectral approach and the B-Splines algorithm are hereafter referred to as ISA and BSA, respectively. It should be noted that the same interpolation scheme was used in both ISA and BSA to register images at sub-pixel positions. In order to fairly compare the results of the two algorithms, a priori analyses were performed for both cases with the goal of finding their optimum parameters, namely the number of Fourier modes (for ISA) and the B-Splines uniform knot spacing (for BSA). Fig. 3 shows the resulting uncertainties as functions of the mentioned parameters for each algorithm. The criterion used to evaluate the uncertainty is the standard deviation (σ) of the error in the ZOI. Intuitively, one might expect that the uncertainty approaches a certain asymptote as the contribution of Fourier terms increases.
However, this does not occur in practice, because the dyadic product tensor might no longer be sufficiently positive definite as κ increases. Hence, a slight increase in the uncertainty is observed for case II in Fig. 3(a) after it reaches its optimum. The optimum parameters were used to evaluate the accuracy of the displacements and strains for both cases, the results of which are shown in Tables 1 and 2. The uncertainty is calculated for the displacements and the normal strain in the horizontal direction. The relative strain uncertainty is also calculated, as the ratio of the strain uncertainty to the standard deviation of the exact strain in the ZOI. As shown in the tables, the resulting uncertainties of the ISA are less than those of the BSA in both displacements and strain. As far as speed and memory consumption are concerned, the spectral approach plays a superior role in comparison to the B-Splines approach thanks to the use of the FFT. Roughly speaking, in the spectral approach the largest matrix treated in the algorithm is on the order of the image size, whereas in the B-Splines approach the Jacobian matrix becomes a limiting factor since its size is much larger than that of the image. It is worth noting that this difficulty is overcome to a great extent thanks to the extreme sparseness of the Jacobian matrix. However, in cases where the displacement complexity necessitates a higher number of control variables, even the sparse Jacobian matrix becomes so large that it can hardly be treated on a normal PC. This was observed in the above examples when one tried to decrease the knot spacing to less than 10 pixels.
Fig. 3: A priori study of displacement uncertainty in terms of (a) cutoff frequency κ in the spectral approach, and (b) knot spacing in the B-Splines approach

Table 1: Uncertainty of displacements and strain for case I

Algorithm | σ_u1 (pixel) | σ_u2 (pixel) | σ_∂u1/∂x1 | σ_rel ∂u1/∂x1 (%)
ISA | 6.23 × 10⁻⁴ | 2.76 × 10⁻⁴ | 6.55 × 10⁻⁵ | 0.11
BSA | 1.27 × 10⁻³ | 5.52 × 10⁻⁴ | 1.96 × 10⁻³ | 3.3

Table 2: Uncertainty of displacements and strain for case II

Algorithm | σ_u1 (pixel) | σ_u2 (pixel) | σ_∂u1/∂x1 | σ_rel ∂u1/∂x1 (%)
ISA | 3.99 × 10⁻⁴ | 1.76 × 10⁻⁴ | 3.82 × 10⁻⁵ | 0.23
BSA | 9.5 × 10⁻⁴ | 2.85 × 10⁻⁴ | 1.5 × 10⁻⁴ | 0.9

5 Conclusion
In this study, an improved spectral approach was presented to identify complex displacement fields based on their Fourier decomposition. Namely, the approach assumes a smoothly truncated Fourier expansion for the displacements to be identified. The unknowns, i.e. the coefficients of the sinusoidal basis functions, are determined through multi-scale minimization of dissimilarities between the undeformed and deformed images. Thanks to the multi-scale approach, the nonlinear dissimilarity criterion can be approximated by its first-order Taylor expansion, leading to a linear system that includes a convolution product in the Fourier domain. Consequently, the backward Fourier transform of the linear system leads to a simple and memory-efficient algorithm in the real space. In order to further improve the method, a prior estimation of the non-periodic displacement was added to the approach using a multi-scale nonlinear DIC based on B-Spline basis functions. Simulated experiments were generated to evaluate the performance of the method in dealing with complex displacements. Two randomly generated continuous displacements were used to artificially deform the reference image. Comparisons made between the improved spectral approach and a continuum method based on B-Splines [5] reveal the superiority of the method in dealing with the complex displacements considered in this study. Furthermore, the spectral approach demands less
memory in comparison to the B-Splines method. In particular, in the case of complex displacements, the Jacobian matrix in the B-Splines approach becomes so large that it cannot easily be handled on ordinary PCs. In contrast, no increase in memory is required when one involves more Fourier terms in the spectral approach by increasing the cutoff frequency. This remarkable advantage would be especially useful when the extension of the method to three dimensions is considered.
References
[1] M.A. Sutton, W.J. Wolters, W.H. Peters, W.F. Ranson, and S.R. McNeill. Determination of displacements using an improved digital correlation method. Image and Vision Computing, 1(3):133–139, 1983. ISSN 0262-8856.
[2] H. Lu and P.D. Cary. Deformation measurements by digital image correlation: Implementation of a second-order displacement gradient. Experimental Mechanics, 40(4):393–400, 2000. ISSN 0014-4851.
[3] B. Wattrisse, A. Chrysochoos, J.-M. Muracciole, and M. Némoz-Gaillard. Analysis of strain localization during tensile tests by digital image correlation. Experimental Mechanics, 41(1):29–39, 2001.
[4] Yaofeng Sun, John H. L. Pang, Chee Khuen Wong, and Fei Su. Finite element formulation for a digital image correlation method. Applied Optics, 44(34):7357–7363, 2005. ISSN 0003-6935.
[5] P. Cheng, M.A. Sutton, H.W. Schreier, and S.R. McNeill. Full-field speckle pattern image correlation with B-Spline deformation function. Experimental Mechanics, 42(3):344–352, September 2002. ISSN 0014-4851.
[6] G. Besnard, F. Hild, and S. Roux. "Finite-element" displacement fields analysis from digital images: Application to Portevin–Le Châtelier bands. Experimental Mechanics, 46(6):789–803, 2006. ISSN 0014-4851.
[7] J. Réthoré, T. Elguedj, P. Simon, and M. Coret. On the use of NURBS functions for displacement derivatives measurement by digital image correlation. Experimental Mechanics, 2009. URL http://dx.doi.org/10.1007/s11340-009-9304-z.
[8] B. Wagne, S. Roux, and F. Hild. Spectral approach to displacement evaluation from image analysis. Eur. Phys. J. AP, 17(3):247–252, March 2002. doi: 10.1051/epjap:2002019. URL http://dx.doi.org/10.1051/epjap:2002019.
[9] Jorge Nocedal and Stephen J. Wright. Numerical Optimization. Springer, August 2000. ISBN 0387987932.
[10] Stéphane Roux, François Hild, and Yves Berthaud. Correlation image velocimetry: A spectral approach. Applied Optics, 41(1):108–115, 2002.
[11] Thomas M. Lehmann, Claudia Gönner, and Klaus Spitzer. Survey: Interpolation methods in medical image processing. IEEE Transactions on Medical Imaging, 18:1049–1075, 1999.
[12] H. W. Schreier, J. R. Braasch, and M. A. Sutton. Systematic errors in digital image correlation caused by intensity interpolation. Optical Engineering, 39(11):2915–2921, 2000. doi: 10.1117/1.1314593. URL http://link.aip.org/link/?JOE/39/2915/1.
Experimental testing (2D DIC) and FE modeling of T-Stub Model

Daniel Carazo Alvarez1,a, Mahmoodul Haq2,b, Juan de Dios Carazo Alvarez1,c, and Eann A. Patterson2,d

1 Campus “Las Lagunillas”, Building A3, University of Jaén, 23071 Jaén, Spain
2 Composite Vehicle Research Center, Michigan State University, MI, USA

a [email protected], b [email protected], c [email protected], d [email protected]
Keywords: T-Stub, DIC, joint.

Abstract. Two-dimensional Digital Image Correlation (2D DIC) has been used to obtain displacement fields in bolted T-Stub joint models (as defined by Eurocode 3) subjected to elastic and strength testing, which were employed to validate a Finite Element (FE) model. It was concluded from the results of the experiments and modeling that the behavior of the T-Stub is more complex than claimed by the Eurocode, due to contact forces, bolt interaction and plastic behavior.

1. Introduction. The use of bolted steel structures is becoming very popular in construction, due to the capability for rapid assembly of the structure. In these structures, joints are critical parts, and therefore an understanding of their behavior is especially important. In Europe, the design of joints for steel structures is regulated by Eurocode 3 [1], which provides specifications for both welded and bolted joints. In particular, Eurocode 3 suggests the use of an equivalent T-Stub in tension (see Fig. 1) for modeling the design resistance in bending of components such as column flanges, end-plates, flange cleats and base plates (under tension). The basic dimensions of a T-Stub model are defined in Fig. 2.
Fig. 1 Equivalent T-Stubs in a joint
Fig. 2 T-Stub dimensions
The strength and failure modes of the T-Stub model have been addressed over the past forty years [2,3,4,5]. Fig. 3 shows the collapse mechanism model of the T-Stub, where plastic hinges form at the application points of the forces, i.e. the tension load, Ft,Rd; the bolt reaction, Fb,t,Rd; and the prying force, Q. This implies three possible plastic collapse mechanisms for a T-Stub: yielding of the flange (Mode 1), mixed mode (Mode 2) and bolt failure (Mode 3), as shown in Fig. 4. It is possible to define a set of analytical expressions relating the forces, specimen geometry and plastic moments, though experimental results found in the literature [2,3,5] tend to report higher values for joint strength. It is commonly accepted that the influence of plastic behavior, bolt interaction and prying forces leads to great difficulties in analysis, so that creating an accurate model is a complex task. Consequently, a model that accurately represents the T-Stub behavior would be a very valuable tool for optimizing the many design parameters. The aim of this work was to develop such a model based on previous work reported in the literature and, for the first time, to validate it using non-contact optical techniques for strain and stress measurement.
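Such analytical expressions, in the classical Eurocode 3 form for the three failure modes, can be evaluated in a few lines. The coefficients below are quoted from the standard's widely cited summary and should be checked against the code text; the function name and units are illustrative assumptions:

```python
def t_stub_design_resistance(m_pl_1, m_pl_2, sum_ft, m, n):
    """Design tension resistance of an equivalent T-stub, sketching the
    classical Eurocode 3 mode expressions (treat as illustrative).

    m_pl_1, m_pl_2 : plastic moment resistances of the flange [N*mm]
    sum_ft         : total bolt tension resistance, sum of F_t,Rd [N]
    m, n           : lever arms from the T-stub geometry [mm]
    """
    f_mode1 = 4 * m_pl_1 / m                        # Mode 1: complete flange yielding
    f_mode2 = (2 * m_pl_2 + n * sum_ft) / (m + n)   # Mode 2: bolt failure with flange yielding
    f_mode3 = sum_ft                                # Mode 3: bolt failure
    resistances = {"mode 1": f_mode1, "mode 2": f_mode2, "mode 3": f_mode3}
    governing = min(resistances, key=resistances.get)
    return governing, resistances
```

The governing (minimum) mode is what the experiments in the cited literature tend to exceed, which motivates the detailed FE model developed here.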
Fig. 3 Collapse mechanism model for a T-Stub (based on Eurocode 3)

The optical techniques available for measuring deformation and evaluating strain and stress provide surface data; therefore, the situation was simplified by considering joints that were short in the direction perpendicular to the plane shown in Fig. 4, and the front face shown in the same figure was analyzed.
Fig. 4 Failure modes of a T-Stub (based on Eurocode 3)

However, in order to monitor the behavior along the direction transverse to the studied plane, strain gages were placed just above the flange-and-web joint, where plasticity arising from Mode 1 and Mode 2 displacements was expected. Mode 3 was of less interest, because the failure of the joint is due exclusively to bolt collapse.

2. Principles of Two-Dimensional Digital Image Correlation (2D DIC). Digital Image Correlation is a non-contact full-field deformation and strain measurement technique that is widely used in experimental mechanics [6,7,8]. The basic principle of 2D DIC is the tracking (or matching) of the same points (or pixels) between two images recorded before and after deformation, as schematically illustrated in Fig. 5. The specimen surface must have a random grey-level intensity distribution, which deforms together with the specimen surface. In order to compute the displacement of point P, a square reference subset of (2M+1) × (2M+1) pixels centered at point P(x0, y0) in the reference image is chosen and used to track its corresponding location P´(x´0, y´0) in the deformed image. To evaluate the degree of similarity between the reference subset and the deformed subset, a cross-correlation criterion or a sum-squared-difference correlation criterion must be predefined. The matching procedure is completed by searching for the peak position of the distribution of the correlation coefficient. Once the correlation coefficient extremum is detected, the position of the deformed subset is determined. The differences in the positions of the reference subset center and the target subset center yield the in-plane displacement vector at point P. It is reasonable to assume that the shape of the reference square subset is changed in the deformed image.
However, based on the assumption of continuity of displacements in a deformed solid object, a set of neighboring points in a reference subset remains as neighboring points in the target subset. Thus, as shown in Fig. 5, the coordinates of point Q around the subset center P in the reference subset can be mapped to point Q´ in the target subset according to the shape function or displacement mapping function:
x´i = xi + ξ(xi, yi),  y´i = yi + η(xi, yi),  i, j = −M : M    (1)
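At integer-pixel resolution, the subset-matching principle above reduces to a short search loop. The zero-normalized cross-correlation (ZNCC) criterion and the helper names used here are one common choice, written as an illustrative sketch; sub-pixel interpolation and the shape function of eq. (1) are omitted:

```python
import numpy as np

def zncc(subset_ref, subset_def):
    """Zero-normalized cross-correlation between two equal-size subsets."""
    a = subset_ref - subset_ref.mean()
    b = subset_def - subset_def.mean()
    denom = np.sqrt((a**2).sum() * (b**2).sum())
    return (a * b).sum() / denom if denom > 0 else 0.0

def track_point(ref, deformed, x0, y0, M=10, search=15):
    """Integer-pixel subset tracking: slide the (2M+1)x(2M+1) reference subset
    over a search window in the deformed image and keep the ZNCC peak."""
    subset_ref = ref[y0 - M:y0 + M + 1, x0 - M:x0 + M + 1]
    best, best_xy = -np.inf, (x0, y0)
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            ys, xs = y0 + dy, x0 + dx
            cand = deformed[ys - M:ys + M + 1, xs - M:xs + M + 1]
            if cand.shape != subset_ref.shape:
                continue  # subset fell outside the image
            c = zncc(subset_ref, cand)
            if c > best:
                best, best_xy = c, (xs, ys)
    return best_xy  # centre of the matched subset in the deformed image
```

The difference between the matched centre and (x0, y0) is the in-plane displacement vector at P; commercial systems such as the one used below add sub-pixel refinement on top of this search.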
Fig. 5 Schematic diagram of a subset before and after deformation

Fig. 6 2D DIC experimental setup
In a typical 2D DIC experimental setup (Fig. 6), a CCD camera is placed with its optical axis normal to the specimen surface and connected to a computer with DIC analysis software. Light sources are used to improve the grey-level contrast.

3. Experimental methodology. Experiments were performed on two different T-Stub models. Models of different thickness were sliced from a W14 beam of A992 steel to provide pieces with the dimensions shown in Table 1. To create a test specimen, two T-Stubs were assembled flange to flange, as shown in Fig. 7, using M6 grade 8.8 bolts, and painted with a random speckle pattern to allow DIC analysis. DIC data was acquired using a Dantec Dynamics system (Dantec Dynamics GmbH, Ulm, Germany) and processed using the Istra 4D software. Black cloth sheets were used as a background in order to minimize noise.
Fig. 7 T-Stub models
Fig. 8 T-Stub 2D DIC testing
Table 1 Geometric definition of T-Stub models

Model no. | tf [mm] | b [mm] | r [mm] | tw [mm] | lef [mm] | h [mm] | d0 [mm] | dw [mm] | m [mm] | n [mm]
1 | 10,7 | 127,6 | 15,9 | 6,5 | 15 | 176,8 | 7 | 10 | 27,3 | 30,1
2 | 10,7 | 127,9 | 15,9 | 6,5 | 25,2 | 176,8 | 7 | 10 | 25,5 | 31,8
Two different tests were performed: loading within the elastic regime and loading to failure. The frontal and lateral surfaces of model-1 were studied with the DIC system during loading at 10 kN in the elastic regime. Later, strength tests were performed on both models by applying a monotonic tensile load, at a rate of 1000 N/min in force control for model-1 (lef = 15 mm) and in displacement control at 0.5 mm/min for model-2 (lef = 25.2 mm). The load and deformation data were recorded from the MTS machine, and the DIC analysis was performed on the models at a state just a few seconds before failure. Also, strain gages were placed on the lateral surfaces of the specimens at the positions detailed in Table 2, using a coordinate
origin defined by the center of the bolt hole, with the X-axis parallel to the b dimension and the Y-axis parallel to lef. Experimental results and comparison with FE predictions are provided in Section 6.
Table 2 Position of the strain gages on the lateral surface (with respect to the bolt center)
Model no. | X [mm] | Y [mm]
1         | 13.9   | 0
2         | 10.2   | 1.6
4. Mechanical properties of materials.
Two series of experiments were performed to characterize the mechanical properties of the steels used for the beams and bolts. In the case of the beams, the ISO 6892 [9] and ISO 377 [10] standards were applied. The standard test for rolled steel products differentiates steel obtained from the web and from the flanges. Therefore, three specimens were tested from the web and three from the flanges, and the average values obtained are shown in Table 3. In addition, five bolts were tested; the average values obtained for the elastic limit and the ultimate tensile strength were 824.5 N/mm2 and 936.5 N/mm2, respectively.
Table 3 Average material properties of the beam
               | Young’s Modulus [N/mm2] | Elastic Limit [N/mm2] | Poisson’s ratio
Flange’s steel | 204105                  | 373.97                | 0.2882
Web’s steel    | 212866                  | 378.43                | 0.2945
5. Finite Element Modeling of the T-Stub.
The finite element (FE) analyses were performed using ABAQUS® [11] and were based on those reported by Swanson et al. [12], Girão et al. [13] and Bursi et al. [14], who reported that 3D FE models incorporating contact with friction and full non-linear properties are essential to understand the complex phenomena in T-stubs. The T-stub assembly was modeled exploiting the double symmetry, with the flange-to-flange contact replaced by a flange-to-rigid-foundation contact as reported by Swanson et al. [12]. The bolt, stub and base were created as three separate parts, and surface-to-surface contact interactions with friction were defined to simulate the behavior of the interfaces. A coefficient of friction of 0.33 was used for the bolt-head/washer and flange interaction, as reported in [12]. No friction was assumed between the flange and the rigid base, as described in [13]. Non-linear material properties obtained from the experimental tests were used for the T-stubs. The material tests on the bolts are currently in progress, so the non-linear bolt material properties were taken from the literature [12]. The base was assumed to be a linear material with a Young’s modulus of 10^15 MPa (an essentially rigid base) and a Poisson’s ratio of 0.45, as reported in [12]. Symmetric geometric boundary conditions were applied along the web. The base of the foundation was fixed in all directions. The bolt-to-flange and flange-to-rigid-foundation surfaces were brought into contact prior to the application of load. The bottom surface of the bolt was fixed in the vertical direction such that the bolt is in tension as the web is pulled. The simulation was performed under displacement control, and a uniform displacement was applied to the top of the web until failure. All parts in the T-stub models were discretized using eight-noded solid elements with reduced integration (C3D8R). The degree of discretization was finer than that reported in the literature [12-14]. Fig. 9 shows the FE model used for T-stub model-2 in this work.
Fig. 9 FE model
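The non-linear material input to such a model can be illustrated, under the simplifying assumption of one-dimensional linear isotropic hardening, with a standard return-mapping stress update. This is an illustrative sketch (our own function, not the ABAQUS implementation); the flange values from Table 3 are used only as example inputs, and the hardening modulus is arbitrary.

```python
import numpy as np

def stress_update(eps, E, sigma_y, H):
    """One-dimensional return mapping with linear hardening.
    eps: total-strain history; E: Young's modulus; sigma_y: initial
    yield stress; H: hardening modulus. Returns the stress history."""
    eps_p, alpha = 0.0, 0.0     # plastic strain and hardening variable
    out = []
    for e in eps:
        sig_trial = E * (e - eps_p)                 # elastic predictor
        f = abs(sig_trial) - (sigma_y + H * alpha)  # yield function
        if f <= 0.0:                                # elastic step
            sig = sig_trial
        else:                                       # plastic correction
            dgamma = f / (E + H)
            sig = sig_trial - E * dgamma * np.sign(sig_trial)
            eps_p += dgamma * np.sign(sig_trial)
            alpha += dgamma
        out.append(sig)
    return np.array(out)
```

With E = 204105 N/mm2 and sigma_y = 374 N/mm2 (Table 3, flange), a strain of 0.001 stays elastic while 0.003 returns to a stress just above the yield surface, which is the qualitative behavior the FE model needs to capture the gradual plastification of the flange.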
6. Validation of the model.
The FE models for the two T-stub models in this study were validated against experimental data by comparing the overall behavior (load-deformation curves) and the y-displacement maps of the frontal surface. Additionally, the total displacement on the lateral surface and the strain in the transverse direction were also compared. Fig. 10 shows the y-displacement map for the frontal surface as obtained from the experimental DIC analysis and the FE model, respectively, for loading in the elastic regime for model-1.
Fig. 10 Y-displacement on the frontal surface for model-1 from experimental DIC analysis (left) and FE predictions (right) corresponding to loading in the elastic regime
Fig. 11 Total displacement on lateral surface of model-1 from experimental DIC analysis (left) and FE predictions (right) corresponding to loading in the elastic regime
The FE predictions of displacement show excellent agreement with the experimental results from DIC. Also, the comparison of the lateral-surface displacement images (Fig. 11) from both FE predictions and DIC experiments reveals that the displacement is almost constant along the lef direction, suggesting that three-dimensional effects are minimal in these models. Similarly, Fig. 12 and Fig. 13 show maps of y-displacement from the experimental DIC analysis and FE modeling for the case of loading to failure for model-1 and model-2, respectively. As in the elastic case, the comparison of FE predictions with experiments revealed good agreement in both the distribution and the range of values. Another common way of validating the FE modeling is the comparison of the overall behavior (load-deformation curve) of the model with the experimental response.
Fig. 12 Y-Displacement on frontal surface for model-1 from experimental DIC analysis (left) and FE predictions (right) during strength testing (load =19.0 kN).
Fig. 13 Y-Displacement on frontal surface of model-2 from experimental DIC analysis (left) and FE predictions (right) during strength testing (load = 19.3 kN).
Fig. 14 provides the comparison of the force-displacement responses from the FE simulations and the experimental testing for models 1 and 2, respectively. It can be observed that the FE models yield stiffer results than the experiments; similar results have been reported in the literature [12-14]. The major discrepancy was observed at the onset of yielding, where the actual plastification was found to be more gradual than that predicted by the simulations. Bursi et al. [14] report similar results and attribute this phenomenon to residual stress effects. Overall, good agreement of the FE models with the experiments was observed. The strength of the T-stubs as provided by the Eurocode [1] was compared with the experimental results and is summarized in Table 4. The strength predictions from the FE modeling are currently not reported due to the lack of detailed experimental characterization of the bolt; such work is in progress and will be included in future communications. In the case of model-1 (lef = 15 mm), the difference between the Eurocode prediction [1] and the experimental value was found to be 2.1%, while for model-2 (lef = 25.2 mm) this difference was substantially larger (16%).
Table 4 Strength values obtained for both T-Stub models
Model | Theoretical strength value [N] | Experimental strength value [N]
1     | 25,369                         | 24,822
2     | 30,363                         | 25,479
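The Eurocode strength used for this comparison follows the equivalent T-stub of EN 1993-1-8, where the design resistance is the minimum of three failure modes (complete flange yielding, flange yielding with bolt failure, and bolt failure alone). The following sketch is our own function: the total bolt tension resistance sum_Ft and the partial factor are example inputs, so the numbers it produces are illustrative and do not reproduce Table 4.

```python
def t_stub_resistance(l_eff, t_f, f_y, m, n, sum_Ft, gamma_M0=1.0):
    """EN 1993-1-8 equivalent T-stub: design resistance [N] as the minimum
    of the three failure modes. Lengths in mm, stresses in N/mm2."""
    n = min(n, 1.25 * m)                               # EC3 cap on edge distance n
    M_pl = 0.25 * l_eff * t_f ** 2 * f_y / gamma_M0    # plastic moment of the flange
    F1 = 4.0 * M_pl / m                                # mode 1: flange yielding
    F2 = (2.0 * M_pl + n * sum_Ft) / (m + n)           # mode 2: mixed mechanism
    F3 = sum_Ft                                        # mode 3: bolt failure
    return min(F1, F2, F3)
```

Using the model-1 geometry from Table 1 (l_eff = 15 mm, t_f = 10.7 mm, m = 27.3 mm, n = 30.1 mm), the flange yield stress from Table 3, and an assumed total bolt resistance, the function returns the governing mode; increasing the flange thickness raises the resistance, as expected.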
Fig. 14 Load–displacement behavior: experimental (blue) and FE predictions (red) for model-1 (left) and model-2 (right).
Additional validation of the FE model was performed by comparing the strains in the transverse direction at the locations of the experimental strain gages, which are provided in Table 2. Fig. 15 compares the strain responses from the FE models with the experiments in the elastic regime for both T-stub models. A slight discrepancy, in the form of a non-linear behavior in the experimental response, was observed in model-1. It should be noted that the strain gages are placed in the vicinity of the bolt and the fillet curvature, a region of complex phenomena governed by many factors. Additional experimental tests and accurate FE modeling are required to quantify such phenomena. In general, good agreement of the strain predictions of the simulations with the experiments was observed.
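One simple way to put a number on the agreement between FE and experimental strain responses (a sketch with our own metric, not the procedure used in this work) is to interpolate the FE curve onto the experimental load points and normalize the RMS deviation by the experimental strain range:

```python
import numpy as np

def relative_rms_error(load_exp, strain_exp, load_fe, strain_fe):
    """Interpolate the FE strain response onto the experimental load
    points and return the RMS deviation normalized by the experimental
    strain range (dimensionless). Load arrays must be increasing."""
    strain_fe_i = np.interp(load_exp, load_fe, strain_fe)
    rms = np.sqrt(np.mean((strain_fe_i - strain_exp) ** 2))
    return rms / (strain_exp.max() - strain_exp.min())
```

For two linear responses whose slopes differ by 4%, the metric returns a few percent; identical curves return exactly zero. A metric of this kind is one route toward the quantitative uncertainty bounds discussed below.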
Fig. 15 Comparison of strain at the lateral surface from experiments (blue) and FE model (red) for model-1 (left) and model-2 (right) in the elastic regime.
Overall, the validation of the FE modeling with experimental tests and DIC analysis revealed good agreement of the FE models with the experimental data. Nevertheless, a statistically significant number of experimental tests and accurate modeling of the phenomena with detailed material properties are essential to further increase confidence in the FE models and to establish quantitative uncertainties and confidence limits for the numerical models.
7. Conclusions.
2D DIC has been used to study the complex strain field in a T-Stub joint. Good agreement between the displacements obtained from the T-Stub surfaces using DIC and those obtained from an FE analysis reveals the utility of the latter in the study of real engineering problems. Further confirmation of the reliability of the FE model was achieved by using strain gages on the flange surfaces. The use of accurate material properties and the detailed modeling of the complex interactions between the bolt-flange and flange-flange connections can further improve the FE model performance. The use of optical methods for stress and strain analysis, such as DIC, is of great utility for validating finite element models. FE models developed with the confidence of experimental validation provide a valuable tool for optimizing dimensions or improving safety in complex problems, such as T-Stub joints.
References
[1] European Committee for Standardization (CEN): prEN 1993-1-8:2005 E, Eurocode 3: Design of steel structures, Part 1-8: Design of joints (2005).
[2] Zoetemeijer P: A design method for the tension side of statically loaded bolted beam-to-column connections. Heron 20(1): 1-59 (1974).
[3] Girão AM: Characterization of the ductility of bolted end plate beam-to-column steel connections. PhD Thesis, Universidade de Coimbra (2004).
[4] Piluso V, Faella C and Rizzano G: Ultimate behavior of bolted T-stubs. I: Theoretical model. Journal of Structural Engineering 127(6): 686-693 (2001).
[5] Piluso V, Faella C and Rizzano G: Ultimate behavior of bolted T-stubs. II: Model validation. Journal of Structural Engineering 127(6): 694-704 (2001).
[6] Sutton MA, Orteu JJ and Schreier HW: Image Correlation for Shape, Motion and Deformation Measurements. Springer Science+Business Media (2009).
[7] Sutton MA: Digital image correlation for shape and deformation measurements. Chapter 20 in Springer Handbook of Experimental Solid Mechanics, Sharpe WN (ed.) (2008).
[8] Pan B, Qian K, Xie H and Asundi A: Two-dimensional digital image correlation for in-plane displacement and strain measurement: a review. Measurement Science and Technology 20: 062001 (2009).
[9] European Committee for Standardization (CEN): ISO 6892-1:2009 – Metallic materials. Tensile testing. Part 1: Method of test at room temperature.
[10] European Committee for Standardization (CEN): ISO 377:1997 – Steel and steel products. Location and preparation of samples and test pieces for mechanical testing.
[11] ABAQUS Documentation (Version 6.9.2). Dassault Systèmes Simulia Corp., Providence, RI, USA (2009).
[12] Swanson JA, Kokan DS and Leon RT: Advanced finite element modeling of bolted T-stub connection components. Journal of Constructional Steel Research 58: 1015-1031 (2002).
[13] Coelho AMG, da Silva LS and Bijlaard FSK: Finite element modeling of the nonlinear behavior of bolted T-stub connections. Journal of Structural Engineering 132: 918-928 (2006).
[14] Bursi OS and Jaspart JP: Basic issues in finite element simulation of extended end plate connections. Computers and Structures 69: 361-382 (1998).