Springer Proceedings in Physics 124
S.-D. Yoo (Ed.)

EKC2008: Proceedings of the EU-Korea Conference on Science and Technology

Editor
Seung-Deog Yoo, VeKNI, Berliner Allee 29, 22850 Norderstedt, Germany

Editorial Board
Ryu-Ryun Kim, University of Hamburg, Institute for Food Chemistry, Grindelallee 117, 20146 Hamburg, Germany
Han Kyu Lee, University of Hamburg, Center for Molecular Neurobiology Hamburg (ZMNH), Falkenried 94, 20251 Hamburg, Germany
Hannah K. Lee, University of Hamburg, Security in Distributed Systems (SVS), Vogt-Koelln-Str. 30, 22527 Hamburg, Germany
Hyun Joon Lee, University of Hamburg, Center for Molecular Neurobiology Hamburg (ZMNH), Institute for the Biosynthesis of Neural Structures, Falkenried 94, 20251 Hamburg, Germany
Jeong-Wook Seo, University of Hamburg, Division Wood Biology, Department of Wood Science, Leuschnerstrasse 91, 21031 Hamburg, Germany
ISSN 0930-8989
ISBN 978-3-540-85189-9 Springer Berlin Heidelberg New York

Library of Congress Control Number: 2008932954

This work is subject to copyright. All rights are reserved, whether the whole or part of the material is concerned, specifically the rights of translation, reprinting, reuse of illustrations, recitation, broadcasting, reproduction on microfilm or in any other way, and storage in data banks. Duplication of this publication or parts thereof is permitted only under the provisions of the German Copyright Law of September 9, 1965, in its current version, and permission for use must always be obtained from Springer-Verlag. Violations are liable to prosecution under the German Copyright Law.

Springer is a part of Springer Science+Business Media. springer.com

© Springer-Verlag Berlin Heidelberg 2008

The use of general descriptive names, registered names, trademarks, etc. in this publication does not imply, even in the absence of a specific statement, that such names are exempt from the relevant protective laws and regulations and therefore free for general use.

Typesetting: Data prepared by SPS using a Springer LaTeX macro package
Cover: eStudio Calamar, Girona, Spain
Printed on acid-free paper
Preface
The EU-Korea Conference on Science and Technology, EKC2008, was held in Heidelberg, Germany from 28 to 31 August 2008. Current research fields in science and technology were presented and discussed at the EKC2008, giving an insight into the interests and directions of researchers in EU countries and Korea. The Korean Scientists and Engineers Association in the FRG (VeKNI) organized the EKC2008 jointly with the Korean Scientists and Engineers Association in the UK (KSEAUK), the Korean Scientists and Engineers Association in France (ASCoF), and the Korean Scientists and Engineers Association in Austria (KOSEA). This conference is dedicated to the 35th anniversary of the foundation of VeKNI.

The EU has been steadily increasing both in the number of its members and in its economic volume. The different economies of the EU countries are becoming more unified and are achieving close cooperation in science and technology. For instance, the EU Research Framework Programme, the world's largest funding programme for research projects, promotes research projects throughout the entire EU community. In the future, the EU will play an increasingly leading role in the world.

In the last decades Korea has experienced a rapid development of its economy and of its science and technology. Korea's economic volume is currently positioned about 12th in the world, and it is a leading country in communication technology, home entertainment and shipbuilding. Despite these achievements, many EU citizens still think of Korea as a minor industrial country. It will be beneficial for both Korea and the EU to get to know each other better, especially in the fields of science and technology. The EKC2008 emerged from this idea, and the success of the conference has clearly shown the interest of both sides in strengthening the relationship between EU and Korean scientists and engineers.

I would like to express my sincere thanks to the members of the international organizing committee, Jinil Kim, Yujin Choi and Man-Wook Han, for their excellent cooperation in arranging this conference. I would also like to thank the members of the editorial board for their contribution to the preparation of these proceedings. I am also grateful to all the sponsors, especially our main sponsors Hyundai, KOFST, LG and Samsung. Many thanks also to Springer-Verlag for publishing the proceedings of the EKC. Finally, I thank all of the participants of the conference.

Seung-Deog Yoo
VeKNI, EKC 2008 Co-chair
Contents
Computational Fluid Dynamics (CFD)

A Numerical Study on Rotating Stall Inception in an Axial Compressor
Je Hyun Baek, Minsuk Choi . . . 3

A Numerical Study on Flow and Heat Transfer Analysis of Various Heat Exchangers
Myungsung Lee, Chan-Shik Won, Nahmkeon Hur . . . 19

Application of a Level-Set Method in Gas-Liquid Interfacial Flows
Sang Hyuk Lee, Gihun Son, Nahmkeon Hur . . . 33

Modelling the Aerodynamics of Coaxial Helicopters – from an Isolated Rotor to a Complete Aircraft
Hyo Won Kim, Richard E. Brown . . . 45

State-of-the-Art CFD Simulation for Ship Design
Bettar el Moctar . . . 61

Investigation of the Effect of Surface Roughness on the Pulsating Flow in Combustion Chambers with LES
Balazs Pritz, Franco Magagnato, Martin Gabi . . . 69

Numerical Study on Blood Flow Characteristics of the Stenosed Blood Vessel with Periodic Acceleration and Rotating Effect
Kyoung Chul Ro, Seong Hyuk Lee, Seong Wook Cho, Hong Sun Ryou . . . 77

Analysis and Test on the Flow Characteristics and Noise of Axial Flow Fans
Young-Woo Son, Ji-Hun Choi, Jangho Lee, Seong-Ryong Park, Minsung Kim, Jae Won Kim . . . 85

Implicit Algorithm for the Method of Fluid Particle Dynamics in Fluid-Solid Interaction
Yong Kweon Suh, Jong Hyun Jeong, Sangmo Kang . . . 95
Mechatronics and Mechanical Engineering

EKC 2008: Summary of German Intelligent Robots Research Landscape 2007
Doo-Bong Chang . . . 107

Characteristics of the Arrangement of the Cooling Water Piping System for ITER and Fusion Reactor Power Station
K.P. Chang, Ingo Kuehn, W. Curd, G. Dell'Orco, D. Gupta, L. Fan, Yong-Hwan Kim . . . 113

Development of Integrated Operation, Low-End Energy Building Engineering Technology in Korea
Soo Cho, Jin Sung Lee, Cheol Yong Jang, Sung Uk Joo, Jang Yeul Sohn . . . 123

RF MEMS Switch Using Silicon Cantilevers
Joo-Young Choi . . . 135

Robust Algebraic Approach for Radar Signal Processing: Noise Filtering, Time-Derivative Estimation and Perturbation Estimation
Sungwoo Choi, Brigitte d'Andréa-Novel, Jorge Villagra . . . 143

A New 3-Axis Force/Moment Sensor for an Ankle of a Humanoid Robot
In-Young Cho, Man-Wook Han . . . 153

DGPS for the Localisation of the Autonomous Mobile Robots
Man-Wook Han . . . 163

Verification of Shell Elements Performance by Inserting 3-D Model: In Finite Elements Analysis with ANSYS Program
Chang Jun, Jean-Marc Martinez, Barbara Calcagno . . . 171

Analysis of Textile Reinforced Concrete at the Micro-level
Bong-Gu Kang . . . 179

Product Service Systems as Advanced System Solutions for Sustainability
Myung-Joo Kang, Robert Wimmer . . . 191

Life Prediction of Automotive Vehicle's Door W/H System Using Finite Element Analysis
Byeong-Sam Kim, KwangSoo Lee, Kyoungwoo Park . . . 201

Peak Oil and Fusion Energy Development
Chang Shuk Kim . . . 209

Modelling of a Bubble Absorber in an Ammonia-Salt Absorption Refrigerator
Dong-Seon Kim . . . 215

Literature Review of Technologies and Energy Feedback Measures Impacting on the Reduction of Building Energy Consumption
Eun-Ju Lee, Min-Ho Pae, Dong-Ho Kim, Jae-Min Kim, Jong-Yeob Kim . . . 223

Mechanical Characteristics of the Hard-Polydimethylsiloxane for Smart Lithography
Ki-hwan Kim, Na-young Song, Byung-kwon Choo, Didier Pribat, Jin Jang, Kyu-chang Park . . . 229

ITER Plant Support Systems
Yong Hwan Kim, G. Vine . . . 239

Growth Mechanism of Nitrogen Incorporated Carbon Nanotubes with RAP Process
Chang-Seok Lee, Je-Hwang Ryu, Han-Eol Im, Sellaperumal Manivannan, Didier Pribat, Jin Jang, Kyu-Chang Park . . . 249

Micro Burr Formation in Aluminum by Electrical Discharge Machining
Dal Ho Lee . . . 259

Multi-objective Environmental/Economic Dispatch Using the Bees Algorithm with Weighted Sum
Ji Young Lee, Ahmed Haj Darwish . . . 267

Experimental Investigation on the Behaviour of CFRP Laminated Composites under Impact and Compression After Impact (CAI)
J. Lee, C. Soutis . . . 275

Development of and Research on Energy-saving Buildings in Korea
Hyo-Soon Park, Jae-Min Kim, Ji-Yeon Kim . . . 287

Non-Iterative MUSIC-Type Imaging Algorithm for Reconstructing Penetrable Thin Dielectric Inclusions
Won-Kwang Park, Habib Ammari, Dominique Lesselier . . . 297
Network Based Service Robot for Education
Kyung Chul Shin, Naveen Kuppuswamy, Hyun Chul Jung . . . 307

Design of Hot Stamping Tools and Blanking Strategies of Ultra High Strength Steels
Hyunwoo So, Hartmut Hoffmann . . . 315

Information and Communications Technology

Efficient and Secure Asset Tracking Across Multiple Domains
Jin Wook Byun, Jihoon Cho . . . 329

Wireless Broadcast with Network Coding: DRAGONCAST
Song Yean Cho, Cedric Adjih . . . 333

A New Framework for Characterizing and Categorizing Usability Problems
Dong-Han Ham . . . 345

State of the Art in Designers' Cognitive Activities and Computational Support: With Emphasis on the Information Categorization in the Early Stages of Design
Jieun Kim, Carole Bouchard, Jean-François Omhover, Améziane Aoussat . . . 355

A Negotiation Composition Model for Agent Based eMarketplaces
Habin Lee, John Shepherdson . . . 365

Discovering Technology Intelligence from Document Data in an Organisation
Sungjoo Lee, Letizia Mortara, Clive Kerr, Robert Phaal, David Probert . . . 371

High-Voltage IC Technology: Implemented in a Standard Submicron CMOS Process
J.M. Park . . . 383

Life and Natural Sciences

Electrical Impedance Spectroscopy for Intravascular Diagnosis of Atherosclerosis
Sungbo Cho . . . 395

Mathematical Modelling of Cervical Cancer Vaccination in the UK
Yoon Hong Choi, Mark Jit . . . 405
Particle Physics Experiment on the International Space Station
Chanhoon Chung . . . 413

Effects of Methylation Inhibition on Cell Proliferation and Metastasis of Human Breast Cancer Cells
Seok Heo, Sungyoul Hong . . . 421

Proteomic Study of Hydrophobic (Membrane) Proteins and Hydrophobic Protein Complexes
Sung Ung Kang, Karoline Fuchs, Werner Sieghart, Gert Lubec . . . 429

Climate Change: Business Challenge or Opportunity?
Chung-Hee Kim . . . 437

Comparison of Eco-Industrial Development between the UK and Korea
Dowon Kim, Jane C. Powell . . . 443

On Applications of Semiparametric Multiple Index Regression
Eun Jung Kim . . . 455

Towards Transverse Laser Cooling of an Indium Atomic Beam
Jae-Ihn Kim, Dieter Meschede . . . 463

Heat and Cold Stress Indices for People Exposed to Our Changing Climate
JuYoun Kwon, Ken Parsons . . . 467

The Thrill Effect in Medical Treatment: Thrill Effect as a Therapeutic Tool in Clinical Health Care (Esp. Music Therapy)
Eun-Jeong Lee . . . 477

Status of the Climate Change Policies in South Korea
Ilyoung Oh . . . 485

Trends in Microbial Fuel Cells for the Environmental Energy Refinery from Waste/Water
Sung Taek Oh . . . 495

Cell Based Biological Assay Using Microfluidics
Jung-Uk Shim, Luis Olguin, Florian Hollfelder, Chris Abell, Wilhelm Huck . . . 499

Finite Volume Method for a Determination of Mass Gap in the Two Dimensional O(n) σ-Model
Dong-Shin Shin . . . 503
Understanding the NO-Sensing Mechanism at Molecular Level
Byung-Kuk Yoo, Isabelle Lamarre, Jean-Louis Martin, Colin R. Andrew, Pierre Nioche, Michel Negrerie . . . 517

Environmentally Friendly Approach Electrospinning of Polyelectrolytes from Aqueous Polymer Solutions
Miran Yu, Metodi Bozukov, Wiebke Voigt, Helga Thomas, Martin Möller . . . 525

Author Index . . . 531
Computational Fluid Dynamics (CFD)
A Numerical Study on Rotating Stall Inception in an Axial Compressor

Je Hyun Baek¹ and Minsuk Choi²

¹ POSTECH, Department of Mechanical Engineering, Pohang, South Korea
  [email protected]
² Imperial College, Department of Mechanical Engineering, London, UK
  [email protected]
Abstract. A series of numerical studies was conducted to analyze the stall inception process and to find the mechanism of rotating stall in a subsonic axial compressor. The compressor used in this study showed different flow behaviors near the stall condition depending on the inlet boundary layer thickness. For the thick inlet boundary layer, the hub-corner-separation grew into a full-span separation as the load was increased. The initial disturbance was initiated in these separations on the suction surfaces and was then transferred to the tip region. This disturbance grew into an attached stall cell, which adheres to a blade surface and rotates at the same speed as the rotor. Once the attached stall cell reached a critical size, it moved along the blade row and became the rotating stall. For the thin inlet boundary layer, on the other hand, the hub-corner-separation shrank until it was indistinguishable from the rotor wake, and another large separation occurred near the casing. The inlet boundary layer thickness thus affects both the size of the stall cells and the initial disturbance causing the rotating stall. The stall cell grew larger with increasing boundary layer thickness, causing a large performance drop during the stall development process. The influence of the number of flow passages on the rotating stall was also investigated by comparing results obtained using four and eight flow passages. The stall inception process was similar in both cases, while the number of stall cells differed because of the size of the computational domain. Based on these results, a minimum number of flow passages is suggested for simulating rotating stall in a subsonic axial compressor.
1 Introduction

The rotating stall in a compressor is a phenomenon in which separations in the blade passages advance along the blade row in the circumferential direction. It is generally known to originate in the operating range between normal flow and surge. The moving separation regions, also referred to as stall cells, act as a blockage to the flow in the blade passages, resulting in a reduced operating range. The rotating stall also changes the pressures on the blade surfaces periodically and can break blades. Because this deterioration and damage degrade the reliability of the airplane as well as of the compressor itself, much attention has been paid to the characteristics of the rotating stall in order to establish effective methods for its active control.

Most studies on the rotating stall have been experimental and focused on stall inception [1-6]. According to these results, the rotating stall either follows pre-stall waves such as modal perturbations or originates from spike-type precursors, depending on the compressor operating conditions. It is generally known that the interaction between the tip leakage flow and other flow phenomena, such as the boundary layer on the end-walls and the passage shock, causes the rotating stall. In recent years, many researchers have been using numerical methods to investigate the cause of the rotating stall [7-14]. Hoying et al. [9] established a relationship between stall inception and the trajectory of the center of the tip leakage flow. Vahdati et al. [14] numerically investigated the effects of rotating stall and surge on the blade load. These numerical results allowed researchers to see the stall inception process in detail and gave an intuitive understanding of the rotating stall.

According to these previous studies, it is now recognized that the tip leakage flow plays an important role in stall inception. However, little attention has been paid to the role of the hub-corner-separation in the rotating stall, although it is a common flow feature in an axial compressor operating near the stall point and exerts a large effect on the internal flows and loss characteristics. Only Day [3] noted a relation between corner-separations and modal waves in his stall tests, although several researchers [15-17] investigated the structure of corner-separations and their impact on internal flows.

This work presents the results of three previous studies [18-20] conducted to find the cause and key factors of the rotating stall. First, the stall inception process was analyzed in detail using numerical results; then the effects of the inlet boundary layer thickness and of the number of flow passages on the simulated rotating stall were evaluated.
2 Test Configuration

This work was conducted using a low-speed axial compressor which was tested by Wagner et al. [17]. Because this compressor has a single rotor without stator and inlet guide vane and rotates slowly about its axis at 510 rpm, the maximum pressure ratio between the inlet and the outlet is only about 1.01. The Reynolds number is about 244,000, based on the inlet velocity at the design condition and the blade chord length at mid-span. Unlike other axial compressors with a constant tip clearance, this compressor has a variable tip clearance, as shown in Fig. 1. Detailed geometry specifications are summarized in Table 1.

Fig. 1. Schematic diagram of the single rotor test rig for thick inlet boundary layer (Wagner et al., 1983)

Table 1. Geometric Specifications

No. of blades      | 28
Casing radius      | 0.7620 m
Hub radius         | 0.6096 m
Chord length       | 0.1524 m
Tip clearance      | 0.0036 m, 0.0015 m, 0.0051 m
Blade profile      | NACA 65
Aspect ratio       | 1
Hub/Tip ratio      | 0.8
Rotation speed     | 510 rpm
Stagger angle      | 35.5° (at mid span)
Inlet flow angle   | 59.45° (at mid span)
Outlet flow angle  | 11.50° (at mid span)

Wagner et al. [17] changed the inlet boundary layer thickness on the hub and casing in their experiment in order to investigate separations on the blade surfaces and secondary flows downstream of the rotor. In Fig. 1, there are five measurement stations which have important meanings in their experiments. The boundary layer thickness was changed at STA.-1 with several screens of different wire diameters and spacings. The inlet and the exit flow conditions were measured at STA.1 and STA.2, respectively. To complete the static pressure rise curve, the upstream and downstream static pressures were measured on the hub and casing at STA.1 and STA.3. There is a small gap on the hub between the moving and stationary parts at STA.4. The measurement positions relative to the reference point STA.0 are summarized in Table 2.

Table 2. Measurement Positions

STA. | Thin Boundary Layer | Thick Boundary Layer
 -1  | -0.102 m            |
  0  |  0.000 m            |  0.000 m
  1  |  0.206 m            |  0.229 m
  2  |  0.498 m            |  0.498 m
  3  |  0.744 m            |  0.744 m
  4  |  0.305 m            |  0.279 m

The STA.2 plane is located at 30% axial chord downstream of the rotor for the thick boundary layer but at 10% axial chord downstream of the rotor for the thin boundary layer. However, because the exit flow conditions for the thin boundary layer were measured at four locations (10%, 30%, 50% and 110% axial chord downstream of the rotor), the exit flow conditions in this paper were specified at 30% axial chord downstream of the rotor regardless of the inlet boundary layer thickness.
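As a quick consistency check of the figures quoted above, the mid-span blade speed follows from the rotation speed and the radii in Table 1, and the quoted Reynolds number implies the inlet velocity. A minimal sketch; the kinematic viscosity of air (about 1.5e-5 m²/s) is an assumption, since the paper does not state the fluid properties.

```python
import math

# Values from Table 1
r_hub, r_casing = 0.6096, 0.7620      # m
chord = 0.1524                        # m, mid-span chord
rpm = 510.0

r_mid = 0.5 * (r_hub + r_casing)      # mid-span radius, m
omega = rpm * 2.0 * math.pi / 60.0    # rotor angular speed, rad/s
U_m = omega * r_mid                   # mid-span blade speed, m/s

nu = 1.5e-5                           # m^2/s, assumed kinematic viscosity of air
Re = 244_000.0                        # quoted Reynolds number
V_in = Re * nu / chord                # inlet velocity implied by Re, m/s

print(f"U_m  = {U_m:.1f} m/s")                          # about 36.6 m/s
print(f"V_in = {V_in:.1f} m/s, V_in/U_m = {V_in/U_m:.2f}")
```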
3 Numerical Methods
Simulations of the three-dimensional flow were conducted using the in-house flow solver TFlow, which has been improved continuously to calculate the internal flow in turbomachinery since its development in the mid-1990s [21]. This flow solver has been validated through a series of calculations of a subsonic axial compressor, a transonic axial compressor and a subsonic axial turbine [22-24]. TFlow uses the compressible RANS equations to describe the viscous flows through a blade row. The governing equations were discretized in space by the finite volume method. The upwind TVD scheme based on Van Leer's flux vector splitting method was used to discretize the inviscid flux terms, and the MUSCL technique was used for the interpolation of flow variables. The second order central difference method was used to discretize the viscous flux terms. The equations were solved using the Euler implicit time marching scheme, with first order accuracy to obtain a steady solution and with second order accuracy to simulate unsteady flow. The laminar viscosity was calculated by Sutherland's law, and the turbulent viscosity was obtained by the algebraic Baldwin-Lomax model because the flow field was assumed to be fully turbulent.

The computational domain was fixed in the region between STA.1 and STA.3, and a multi-block hexahedral mesh was generated using ICEM-CFD. To capture the motions of stall cells, four and eight blade passages were used for the unsteady simulations, as shown in Fig. 2. Each passage consists of 125 nodes in the stream-wise direction, 58 nodes in the pitch-wise direction, and 73 nodes in the span-wise direction. To capture the tip leakage flow correctly, the region of the tip clearance was filled with an embedded H-type grid, which has 52 nodes from the leading edge to the trailing edge of the blade, 10 nodes across the blade thickness and 16 nodes from the blade top to the casing. After the steady solutions were obtained using a mesh with about 0.5 million nodes in one passage, the simulation of the rotating stall was conducted using the same mesh with four and eight passages. The whole computational domain therefore has a total of about 2.1 million nodes with four passages and 4.2 million nodes with eight passages. In these computations the distance of the first grid point from the wall was set so that the wall coordinate (y+) was equal to or less than 5.

Fig. 2. Computational domain and grid for stall simulation: (a) grid with four passages; (b) grid with eight passages

In the unsteady simulation of the rotating stall, the computational domain must contain several blade passages of the blade row to show the movement of stall cells. Since an identical grid was used in each passage, each blade passage was assigned to one processor, and the flow variables in contact with other passages were transferred by MPI (Message Passing Interface) libraries.

In the internal flow simulation of turbomachinery, there are four different types of boundaries: inlet, outlet, wall and periodic conditions. Using the temperature and pressure of the standard atmosphere and the velocity profiles with different boundary layers on the hub and casing measured by Wagner et al. [17], the total pressure, total temperature and flow angles were fixed at the inlet, and the upstream-running Riemann invariant was extrapolated from the interior domain. For the outlet condition, the static pressure on the hub was specified and the local static pressures along the span were given by the SRE (Simplified Radial Equilibrium) equation; other flow variables such as density and velocities were extrapolated from the interior. On the wall, the no-slip condition was used to calculate the velocity components, and the surface pressure and density were obtained using the normal momentum equation and the adiabatic wall condition, respectively. Since only a part of the full annulus was calculated, it was necessary to implement a periodic condition between the first and the last passages. The periodic condition was implemented using ghost cells next to the boundary cells, which enabled the flow variables to be continuous across the boundary.

The time accurate unsteady simulation was conducted with the back pressure at the outlet, p3/p1, set to 1.008 for the thick inlet boundary layer and 1.007 for the thin inlet boundary layer, slightly larger values than the outlet static pressures at φ = 0.65 in each case. However, no artificial asymmetric disturbances were imposed at the inlet.
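The SRE equation is not written out in the text; it is commonly taken as dp/dr = ρ v_θ²/r. The sketch below integrates that assumed form outward from the specified hub pressure to distribute the outlet static pressure along the span; the swirl profile and fluid properties are purely illustrative, not values from the paper.

```python
import numpy as np

def sre_outlet_pressure(r, rho, v_theta, p_hub):
    """Integrate the assumed SRE form dp/dr = rho * v_theta^2 / r
    from the hub outward to set the span-wise static pressure."""
    dpdr = rho * v_theta**2 / r
    dp = 0.5 * (dpdr[1:] + dpdr[:-1]) * np.diff(r)   # trapezoidal increments
    return p_hub + np.concatenate(([0.0], np.cumsum(dp)))

# Illustrative use on a 73-point span-wise grid between hub and casing
r = np.linspace(0.6096, 0.7620, 73)      # m
rho = np.full_like(r, 1.2)               # kg/m^3
v_theta = 10.0 * r / r[-1]               # m/s, assumed swirl distribution
p = sre_outlet_pressure(r, rho, v_theta, p_hub=101_325.0)
```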
4 Computational Results

A. Performance Curve

The performance of a compressor can be presented by the flow coefficient and the static pressure rise coefficient, defined as follows; in the experiments, the latter was calculated using the static pressure increment between STA.1 and STA.3 at the tip.

\phi = V_x / U_m, \qquad \psi = \frac{p_3 - p_1}{0.5\,\rho U_m^2} \qquad (1)

As shown in Fig. 3, the static pressure rise curve obtained by the steady numerical computation corresponds to the experimental one from flow coefficients of 0.65 to 0.95, regardless of the inlet boundary layer thickness. The static pressure rise coefficients for the unsteady simulation were calculated using the instantaneous flow data, which were saved five times per period; one period here is defined as the time it takes a blade to traverse the computational domain once. The numerical results of the unsteady simulation are in good agreement with the experimental ones until the rotating stall occurs. However, there are some discrepancies after the development of the rotating stall, because only a part of the whole blade passages was used in the unsteady simulation.

Fig. 3. Static pressure rise curve: (a) Thick BL; (b) Thin BL

The numerical result for the thick inlet boundary layer predicts the stall point relatively well in the static pressure rise curve, and there is an abrupt drop of the performance in the stall development process between φ = 0.58 and φ = 0.53. The magnitude of the performance drop matches the experimental result well, although its slope is not steep. However, there is no abrupt performance drop in the static pressure rise curve of either the experimental or the numerical results for the thin inlet boundary layer. The static pressure rise coefficients in both cases are clustered around the experimental value at the early stage of the unsteady calculation, since the asymmetric disturbance is small.
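For post-processing, Eq. (1) reduces to a few lines. The sketch below assumes the axial velocity and the wall static pressures at STA.1 and STA.3 have already been extracted from the solution; the sample values are only illustrative.

```python
def flow_coefficient(V_x, U_m):
    """phi = V_x / U_m  (Eq. 1)."""
    return V_x / U_m

def pressure_rise_coefficient(p3, p1, rho, U_m):
    """psi = (p3 - p1) / (0.5 * rho * U_m**2)  (Eq. 1)."""
    return (p3 - p1) / (0.5 * rho * U_m**2)

# e.g. a 600 Pa tip static pressure rise at U_m ~ 36.6 m/s gives psi ~ 0.75
print(pressure_rise_coefficient(p3=101_925.0, p1=101_325.0, rho=1.2, U_m=36.6))
```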
The rotating stall initiates at φ = 0.62 and φ = 0.595 for the thick and thin inlet boundary layers, respectively, and it scatters the static pressure rise coefficient because of the disturbance of the stall cell. For the thick inlet boundary layer, the performance drops abruptly below φ = 0.58 as the rotating stall develops from a part-span stall into a full-span stall. For the thin inlet boundary layer, the stall inception is retarded to a lower flow coefficient because the axial velocity near the casing is large in comparison to that for the thick inlet boundary layer.

B. Steady Results
Fig. 4. Inlet velocity profiles at STA.1: (a) Thick BL; (b) Thin BL
Generally, the steady flow near the stall point has a large effect on the rotating stall, so it is very important to investigate the effects of the inlet boundary layer thickness on the steady flow. For a clear understanding of the different inlet boundary layer thicknesses in the cases "Thick BL" and "Thin BL", where "BL" is an abbreviation for boundary layer, Fig. 4 shows the inlet axial velocity profiles obtained in the previous study of Choi et al. [22]. As shown in Fig. 4, the simulation results show a good agreement with the experimental ones at each flow condition, meaning that the computation properly reproduces the inlet conditions of the experiment of Wagner et al. [17]. To investigate the change of the hub-corner-separation and the tip leakage flow, Fig. 5 shows the coefficient of the rotary total pressure at STA.2. The rotary total pressure is used to remove rotational effects from the relative total pressure and is defined as:

p_{t,\mathrm{Rot}} = p + \frac{1}{2}\rho \left( W^2 - U^2 \right) \qquad (2)

Fig. 5. Rotary total pressure distribution at STA.2: (a) Thick BL, φ = 0.65; (b) Thick BL, φ = 0.85; (c) Thin BL, φ = 0.65; (d) Thin BL, φ = 0.85
Its coefficient was calculated using both the area-averaged total pressure at the inlet and the rotary total pressure given in Eq. (2):

C_{p_{t,\mathrm{Rot}}} = \frac{p_{t,1} - p_{t,\mathrm{Rot}}}{0.5\,\rho U_m^2} \qquad (3)
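Eqs. (2) and (3) likewise translate directly into post-processing code. The sketch below evaluates them point-wise on a measurement plane; the area-averaged inlet total pressure is assumed to be computed beforehand, and the sample values are illustrative.

```python
import numpy as np

def rotary_total_pressure(p, rho, W, U):
    """p_t,Rot = p + 0.5 * rho * (W^2 - U^2)  (Eq. 2)."""
    return p + 0.5 * rho * (W**2 - U**2)

def c_pt_rot(pt1_avg, pt_rot, rho, U_m):
    """C_pt,Rot = (p_t,1 - p_t,Rot) / (0.5 * rho * U_m^2)  (Eq. 3)."""
    return (pt1_avg - pt_rot) / (0.5 * rho * U_m**2)

# Point-wise evaluation over extracted plane data (illustrative values)
p = np.array([101_500.0])                      # static pressure, Pa
rho, W, U = 1.2, np.array([40.0]), np.array([36.6])  # relative and blade speeds, m/s
pt_rot = rotary_total_pressure(p, rho, W, U)
print(c_pt_rot(pt1_avg=101_700.0, pt_rot=pt_rot, rho=1.2, U_m=36.6))
```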
These results were compared with the experimental ones in the previous study, revealing a good agreement except for the size of the region affected by the tip leakage flow. Although there are no apparent differences in the hub-corner-separation and the tip leakage flow at the design condition between the two cases, large differences can be found at high load depending on the inlet boundary layer thickness. While the hub-corner-separation grows into a large separation on the suction surface for the thick inlet boundary layer, it is diminished until it is indistinguishable from the rotor wake for the thin inlet boundary layer. Moreover, another corner-separation occurs near the casing due to the interaction between the tip leakage flow and the boundary layer on the suction surface. These corner-separations block the internal flow and result in a large total pressure loss.

C. Effects of the Inlet Boundary Layer Thickness

In order to judge whether the rotating stall occurs or not, a time history of the axial velocity ahead of the leading edge of the rotor has been used. According to the number of flow passages, four or eight numerical sensors were installed at 85% span from the hub and 25% of the chord length upstream of the rotor to record the axial velocity, as shown in Fig. 6. These sensors rotate in the counter-clockwise direction as the calculation goes on, because the numerical simulation was conducted in the rotating frame. The sensors read the axial velocity at each position 480 times per period.

Fig. 6. Positions of the fixed numerical sensors to measure axial velocity

Figure 7 shows the time-history of the axial velocities measured by each numerical sensor in eight flow passages. There is no disturbance at the beginning of the unsteady calculation, but some small disturbance appears in the time-history although no artificial asymmetric disturbance was imposed. For the thick inlet boundary layer, the first disturbance occurs near 3.0 periods and moves at the same speed as the rotor, meaning that it adheres to the blade row. The rotating stall is found in the time-history of the numerical sensors near 7.0 periods, when the flow coefficient has a value of 0.62. The rotational speed of the stall cell quickly comes down to 75% of the rotor speed, so it moves in the direction opposite to the rotor blades in the rotating frame. For the thin inlet boundary layer, the numerical sensors do not catch any signal of a disturbance before 5.0 periods. The first disturbance occurs near 5.0 periods and grows slowly into an attached stall. As the flow rate is reduced, this attached stall turns into a rotating stall at 6.6 periods, when the flow coefficient is about 0.60. This flow coefficient is small in comparison to the case with the thick boundary layer, which means that the large axial velocity near the casing can delay the stall inception. As the flow coefficient is reduced further, the stall cell steadily grows and its rotational speed goes down to 74% of the rotor speed. In both cases, one stall cell is generated first in the blade row, and then one or two other stall cells originate.

Fig. 7. Time-history of axial velocities of numerical sensors with eight flow passages: (a) Thick BL; (b) Thin BL

Figure 8 shows the rotary total pressure distribution in the tip region during the stall inception process for the thick inlet boundary layer. The numerical sensors cannot detect any signal of a disturbance at 1.0 period because the rotary total pressure in the tip region has similar features in all passages. Then a local disturbance can be observed in the tip leakage flow between the eighth and second blades at 3.0 periods, when the numerical sensors detect some disturbance for the first time. The rotary total pressure shows a different pattern in each passage, as shown in Fig. 8(b). This disturbance is fixed inside the blade passage, rotates with the rotor at the same speed, and grows into a bigger attached stall cell during the throttling process, as shown in Fig. 8(c). The front line of the tip leakage flow is located behind the leading edge plane until this moment. When the attached stall cell reaches a critical size, the tip leakage flow locally moves around the leading edge of the next blade and starts spilling into the adjacent flow passage because of the blockage of the attached stall cell. A critical size here means the size of the disturbance at which the front line of the tip leakage flow passes over the leading edge of the next blade. The attached stall cell finally changes into a short-length-scale rotating stall through this stall inception process, as shown in Fig. 8(d). Once the rotating stall is generated, it advances along the blades one by one and grows into a large stall cell, as shown in Fig. 8(e,f). During this period, another two stall cells originate, so that three rotating stall cells are found in the blade row. This stall inception process via an attached stall cell has already been reported by Choi et al. [18] using a rotating stall simulation with four passages.

For the thin inlet boundary layer, the stall inception process is similar to the previous case but has some different features. At 3.0 periods, the rotary total pressure distributions in the tip region are very similar between the blades. The first disturbance appears at 5.0 periods between the fourth and the seventh blades, as shown in Fig. 9(b). At this time, the front line of the tip leakage flow is also located behind the leading edge plane. This disturbance grows into a large attached stall cell as the flow coefficient is reduced, as shown in Fig. 9(c). A high velocity region occurs ahead of the eighth rotor blade; this region is caused by the detour of the inlet flow due to the blockage of the attached stall cell. When this attached stall cell reaches a critical size, the low energy flow spills into the next blades around the leading edge of the rotor and through the tip clearance, and the rotating stall finally occurs, as shown in Fig. 9(d). After stall inception, the stall cell moves along the blade row continuously, preceded by the high velocity region. At 8.0 periods, a new stall cell originates at the sixth rotor blade by the same stall inception process.
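The stall cell speeds quoted here (74-75% of the rotor speed, and 79% later with four passages) can be estimated from sensor traces like those in Fig. 7: in the rotating frame the cell drifts backward, so the time lag between two sensors a known pitch angle apart gives its relative speed. A minimal sketch using cross-correlation, assuming uniformly sampled signals; the sign convention depends on the sensor ordering, so the drift rate is taken as a magnitude.

```python
import numpy as np

def stall_cell_speed_fraction(sig_a, sig_b, dt, pitch_angle, omega_rotor):
    """Estimate the stall cell speed as a fraction of rotor speed from two
    sensor signals separated by pitch_angle (rad) in the rotating frame."""
    a = sig_a - sig_a.mean()
    b = sig_b - sig_b.mean()
    corr = np.correlate(b, a, mode="full")
    lag = (np.argmax(corr) - (len(a) - 1)) * dt   # delay of b relative to a
    omega_rel = pitch_angle / abs(lag)            # cell drift in rotating frame
    # the cell moves against the rotation in the rotating frame, so its
    # absolute speed is the rotor speed minus the relative drift speed
    return 1.0 - omega_rel / omega_rotor
```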
Fig. 8. Rotary total pressure distribution at the tip region in the stall inception process for the thick inlet boundary layer: (a) 1.0 period; (b) 3.0 periods; (c) 7.0 periods; (d) 7.2 periods; (e) 9.0 periods; (f) 12.0 periods

Fig. 9. Rotary total pressure distribution at the tip region in the stall inception process for the thin inlet boundary layer: (a) 3.0 periods; (b) 5.0 periods; (c) 6.6 periods; (d) 6.8 periods; (e) 8.0 periods; (f) 10.0 periods
Finally, two rotating stall cells exist in the blade row, as shown in Fig. 9(f). In comparison to the previous case, the size of each stall cell is small in the circumferential direction. Unlike the results of Hoying et al. [9], in which the localized disturbance near the casing occurred when the front line of the tip leakage flow propagated forward of the leading edge plane, the first disturbance in this study initiated before the tip leakage flow reached the leading edge plane.

Figure 10 shows the rotary total pressure distribution at STA.2 at 1.0 and 3.0 periods for the thick and the thin inlet boundary layers, respectively. At 1.0 period there was no disturbance in the time-history of the axial velocities or in the rotary total pressure near the casing, as mentioned above, but the hub-corner-separation shows some asymmetric disturbance at STA.2 for the thick inlet boundary layer. In the same manner, there is a small disturbance in the corner-separation near the casing at 3.0 periods for the thin inlet boundary layer. These disturbances might have been generated by numerical round-off error or by the change of numerical methods in the unsteady simulation. These results lead the authors to conclude that the local disturbance is transferred from the separations on the blade surfaces to the tip leakage flow.

Fig. 10. Rotary total pressure distribution at STA.2 in unsteady results: (a) Thick BL, 1.0 period; (b) Thin BL, 3.0 periods

Based on the above numerical results, the inlet boundary layer affects the steady flow near the stall condition and makes the rotating stall different in each case. The hub-corner-separation causes the rotating stall for the thick inlet boundary layer, and the corner-separation near the casing causes it for the thin inlet boundary layer. The attached and rotating stall cells for the thick boundary layer are larger than those for the thin boundary layer. This is because the large axial velocity near the casing in the latter case effectively keeps disturbances from growing large, whereas for the thick inlet boundary layer the weak axial velocity near the casing allows a small disturbance to grow quickly into an attached stall cell. Due to this difference in the size of the stall cells, the performance drops abruptly for the thick inlet boundary layer but there is no steep performance drop for the thin inlet boundary layer.
D. Effects of the Number of Flow Passages
Before using eight passages, the rotating stall simulation was first conducted on the grid with four passages (Fig. 2(a)). Figure 11 shows the static pressure rise curve for the thick boundary layer with four passages. The numerical result with four passages also predicts the stall point well and captures an abrupt drop of the performance between φ = 0.58 and φ = 0.53. Although the pressure drop occurs sharply in this case, it is smaller than in the experiment and in the numerical result with eight passages in Fig. 3(a). In the case with eight passages the pressure drop is large but its slope is not steep, because the performance with eight passages is not severely affected by the stalled passages, the computational domain being relatively large, while the performance with four passages is. After the stall inception process, the distribution with four passages is more scattered than with eight passages because of the size of the computational domain.

Fig. 11. Static pressure rise curve for the thick BL with four passages

The time histories of the axial velocities in Fig. 12 are very similar to the case with eight passages (Fig. 7(a)), and there is no disturbance until 9.0 periods. One period here is the time it takes four blades to traverse the computational domain once, which is half of that with eight passages. The first disturbance occurs at 9.0 periods and has the same speed as the rotor, meaning that it is fixed to the blade row. The rotating stall occurs at 19.5 periods, and the flow coefficient has a value of 0.62, which is equal to the value with eight passages. The rotating speed of the stall cell quickly decreases to 79% of the rotor speed. In this case, only one stall cell is initiated and grows large as the simulation goes on, because the small computational domain keeps other disturbances from originating in the circumferential direction.

Fig. 12. Time-history of axial velocities of numerical sensors for the thick BL with four passages

The rotary total pressure in the tip region has similar features in each passage at 3.0 periods. A local disturbance then appears in the tip leakage flow between the first and third blades near 9.0 periods, as shown in Fig. 13(b). This disturbance rotates at the same speed as the rotor and grows into an attached cell, as shown in Fig. 13(c). The front line of the tip leakage flow is also located behind the leading edge plane until this moment. When the attached stall cell reaches a critical size, it changes into the rotating stall, as shown in Fig. 13(d). After the stall inception process, the stall cell grows quickly into a large stall cell in a short time (Fig. 13(e,f)). This case with four passages has just one stall cell, although the stall inception process is the same as in the case with eight passages.

Fig. 13. Rotary total pressure distribution near the tip region in the stall inception process for the thick BL with four passages: (a) 3.0 periods; (b) 9.0 periods; (c) 19.0 periods; (d) 19.5 periods; (e) 20.0 periods; (f) 20.5 periods

Fig. 14. Rotary total pressure distribution at STA.2 with four passages at 3.0 periods
Figure 14 shows the rotary total pressure distribution at STA.2 at a moment when there was no disturbance in the other indicators. The hub-corner-separation, however, shows some asymmetric behavior at STA.2 at this moment. Choi et al. [18] have suggested, using simulation results with four blade passages, that this disturbance in the hub-corner-separation might trigger the disturbance near the casing and cause the rotating stall.

The simulation of the rotating stall is time-consuming work. Most researchers use a few passages instead of the whole annulus in order to capture the motion of stall cells, because it is not easy to simulate the rotating stall with all flow passages due to the limitations of computational time and memory. In this study, the stall inception process before the rotating stall originated was nearly the same regardless of the number of blade passages; after the rotating stall originated, however, the flow showed some discrepancies according to the number of passages. How many passages, then, have to be used in the simulation of the rotating stall? It is of course best to use the whole blade row if the computational resources and time are sufficient. If not, the appropriate number of flow passages depends on the focus of the study. If a researcher wants to know the detailed process of the stall inception and to find the cause of the rotating stall, except for modal waves, it is proper to use the minimum number of passages. However, if a researcher wishes to understand the effect of the fully developed rotating stall on the performance and internal flows, it is better to use as many passages as possible. The maximum number is clearly the whole blade row, but the minimum number cannot be fixed generally. The authors propose that the minimum number should at least make it possible to capture the stall propagation in the circumferential direction. The size of the rotating stall cell just after initiation was about two and a half blade pitches in this study. If only two or three passages were used in the simulation, the operating condition might jump into surge without rotating stall, because all passages could be blocked simultaneously by the separation. In this study, four and eight passages were used and properly showed the movement of the stall cell. For reference, the size of the stall cell in the study of Hoying et al. [9] was also about two pitches at the early stage of stall inception. Therefore, the minimum number of blade passages might be four for a short-length-scale rotating stall in a subsonic axial compressor.
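The sizing argument above can be written as a simple heuristic: the domain must hold the incipient cell (about two and a half pitches here) plus at least one unstalled passage, otherwise every passage is blocked at once and the solution may jump to surge. A trivial sketch of that rule, not a formula from the paper:

```python
import math

def min_passages(cell_size_pitches, margin=1):
    """Smallest passage count that leaves unstalled passages next to an
    incipient stall cell of the given circumferential size (in pitches)."""
    return math.ceil(cell_size_pitches) + margin

print(min_passages(2.5))   # -> 4, matching the four-passage domain used here
```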
5 Conclusion

In this study, several numerical simulations were conducted to analyze the stall inception process in detail and to investigate the effects of the inlet boundary layer thickness and of the number of flow passages on the rotating stall.

The first disturbance occurred in the separations on the blade surfaces. This disturbance was transferred from the separations to the tip leakage flow and grew into an attached stall cell. When this attached stall cell reached a critical size, the rotating stall was initiated.

The inlet boundary layer thickness had a large effect on the flow coefficient at stall inception and on the size of the stall cells. The small axial velocity near the casing allowed the disturbance to grow into a large stall cell for the thick inlet boundary layer, while the large axial velocity for the thin inlet boundary layer kept the disturbance from growing. Therefore, the rotating stall occurred at a lower flow coefficient for the thin boundary layer than for the thick inlet boundary layer. Moreover, the attached and rotating stall cells grew larger with the thick boundary layer. Due to the different sizes of the stall cells, there was an abrupt performance drop for the thick inlet boundary layer but not for the thin boundary layer.

The number of flow passages did not affect the stall inception process, but it did affect the stall development. The stall inception was similar in the two cases with four and eight flow passages, while the number of flow passages had a large effect on the number of stall cells: only one stall cell originated with four passages, but there were three stall cells with eight passages. Therefore, when researchers want to scrutinize the stall inception process only, it is better to use as small a number of flow passages as possible, but at least four.

Acknowledgments. The authors wish to acknowledge the support of the Agency for Defense Development under the contract UD04006AD.
References

[1] I J Day and N A Cumpsty (1978) Measurement and Interpretation of Flow within Rotating Stall Cells in Axial Compressors. Journal of Mechanical Engineering Science 20:101-114
[2] N M McDougall, N A Cumpsty and T P Hynes (1990) Stall Inception in Axial Compressors. Journal of Turbomachinery 112:116-125
[3] I J Day (1993) Stall Inception in Axial Flow Compressors. Journal of Turbomachinery 115:1-9
[4] T R Camp and I J Day (1998) A Study of Spike and Modal Stall Phenomena in a Low-Speed Axial Compressor. Journal of Turbomachinery 120:393-401
[5] M Inoue, M Kuroumaru, T Tanino, S Yoshida and M Furukawa (2001) Comparative Studies on Short and Long Length-Scale Stall Cell Propagating in an Axial Compressor Rotor. Journal of Turbomachinery 123:24-32
[6] M Inoue, M Kuroumaru, S Yoshida and M Furukawa (2002) Short and Long Length-Scale Disturbances Leading to Rotating Stall in an Axial Compressor Stage with Different Stator/Rotor Gaps. Journal of Turbomachinery 124:376-384
[7] L He (1997) Computational Study of Rotating-Stall Inception in Axial Compressors. Journal of Propulsion and Power 13:31-38
[8] L He and J O Ismael (1997) Computations of Blade Row Stall Inception in Transonic Flows. Proceedings of the 13th International Symposium on Airbreathing Engines, Paper ISABE 97-7100, pp 697-707
[9] D A Hoying, C S Tan, Huu Duc Vo and E M Greitzer (1999) Role of Blade Passage Flow Structures in Axial Compressor Rotating Stall Inception. Journal of Turbomachinery 121:735-742
[10] H M Saxer-Felici, A P Saxer, A Inderbitzin and G Gyarmathy (2000) Numerical and Experimental Study of Rotating Stall in an Axial Compressor Stage. AIAA Journal 38:1132-1141
[11] S Niazi (2000) Numerical Simulation of Rotating Stall and Surge Alleviation in Axial Compressors. Ph.D. Dissertation, Aerospace Engineering Dept., Georgia Tech., Atlanta, GA
[12] N Gourdain, S Burguburu, F Leboeuf and H Miton (2004) Numerical Simulation of Rotating Stall in a Subsonic Compressor. AIAA Paper AIAA2004-3929
[13] C Hah, J Bergner and H-P Schiffer (2006) Short Length-Scale Rotating Stall Inception in a Transonic Axial Compressor – Criteria and Mechanisms. ASME Paper GT2006-90045
[14] M Vahdati, G Simpson and M Imregun. Unsteady Flow and Aeroelasticity Behavior of Aeroengine Core Compressors During Rotating Stall and Surge. Journal of Turbomachinery 130, No. 2 [DOI: 10.1115/1.2777188]
[15] C Hah and J Loellbach (1999) Development of Hub Corner Stall and Its Influence on the Performance of Axial Compressor Blade Rows. Journal of Turbomachinery 121:67-77
[16] S A Gbadebo, N A Cumpsty and T P Hynes (2005) Three-Dimensional Separations in Axial Compressors. Journal of Turbomachinery 127:331-339
[17] J H Wagner, R P Dring and H D Joslyn (1983) Axial Compressor Middle Stage Secondary Flow Study. NASA CR-3701
[18] M Choi, J H Baek, S H Oh and D J Ki (in print) Role of the Hub-Corner-Separation on the Rotating Stall in an Axial Compressor. Trans. Japan Soc. Aero. Space Sci. 51, No. 172
[19] M Choi and J H Baek. Influence of the Number of Flow Passages in the Simulation of the Rotating Stall. ISROMAC-12, Paper No. ISROMAC12-2008-20106 (also submitted to Int. J. of Rotating Machinery)
[20] M Choi, S H Oh, H Y Ko and J H Baek. Effects of the Inlet Boundary Layer Thickness on the Rotating Stall in an Axial Compressor. ASME Turbo Expo 2008, Paper No. GT2008-50886
[21] H S Choi and J H Baek (1995) Computations of Nonlinear Wave Interaction in Shock-Wave Focusing Process Using Finite Volume TVD Schemes. Computers and Fluids 25:509-525
[22] M Choi, J Y Park and J H Baek (2006) Effects of the Inlet Boundary Layer Thickness on the Loss Characteristics in an Axial Compressor. International Journal of Turbo & Jet Engines 23, No. 1, pp 51-72
[23] J Y Park, H T Chung and J H Baek (2003) Effects of Shock-Wave on Flow Structure in Tip Region of Transonic Compressor. International Journal of Turbo & Jet Engines 20:41-62
[24] J Y Park, M Choi and J H Baek (2003) Effects of Axial Gap on Unsteady Secondary Flow in One-Stage Axial Turbine. International Journal of Turbo & Jet Engines 20:315-333
A Numerical Study on Flow and Heat Transfer Analysis of Various Heat Exchangers

Myungsung Lee, Chan-Shik Won, and Nahmkeon Hur

Graduate School, Sogang University, Seoul, Korea
Department of Mechanical Engineering, Sogang University, Seoul, Korea
[email protected]
Abstract. This paper describes numerical methodologies for the flow and heat transfer analysis of heat exchangers of various types. The heat exchangers considered in the present study include a louver fin radiator for a vehicle, a shell and tube heat exchanger for HVAC, and plate heat exchangers with herringbone and dimple patterns used in waste heat recovery. For the analysis of the louver fin radiator, a 3-D Semi-microscopic Heat Exchange (SHE) method was used. SHE is characterized by a conjugate heat transfer analysis of a domain consisting of the water in a tube, the tube wall, the region where air passes through the louver fin, and the ambient air. It is shown that both the air flow in the louver fin area and the water flow inside the cooling water passages are successfully predicted along with the heat transfer characteristics. A conjugate heat transfer analysis of a shell and tube heat exchanger was also performed. For the analysis of the entire shell side of the heat exchanger, geometric features such as tubes, baffles, inlet and outlet were modeled in detail. It is shown from the analysis that a design modification for better flow distribution, and thus for better performance, can be proposed. Finally, an analysis method is proposed for the conjugate heat transfer between the hot flow, separating plate and cold flow of a plate heat exchanger. By using periodic boundary conditions for the repeating sections and appropriate inlet and outlet boundary conditions, the heat transfer in plate heat exchangers with herringbone and dimple patterns was successfully analyzed. The present numerical results are in good agreement with available experimental data.
1 Introduction

Heat exchangers are extensively encountered in many engineering applications such as power generation, HVAC, the chemical processing industry and waste heat recovery. Detailed information on the flow and temperature distribution in a heat exchanger is essential for the high performance design of the thermal system. Most previous studies, however, are related to the system performance and/or empirical correlations, and lack the details of the flow and temperature distribution in the heat exchanger. Recent developments in CFD enable us to predict the flow and temperature distribution numerically through modeling of the detailed physics and geometry of the heat exchangers. Such a detailed study, however, requires time and cost for the modeling and analysis. In the present study, numerical methodologies for the heat transfer analysis of heat exchangers of various types are proposed. The heat exchangers considered in this study are a louver fin radiator for a vehicle application, a shell and tube heat exchanger for HVAC, and plate heat exchangers with herringbone and dimple patterns for waste heat recovery.
A design method for louver fin radiators based on experimental correlations is well documented in Kays and London [1]. Correlation equations for heat transfer and pressure drop are given in Davenport [2]. Chang and Wang [3] also proposed a correlation between heat transfer and pressure drop based on experimental data for 91 louver fin radiator types. Previous studies, however, only considered the overall heat transfer coefficient, and not the local heat transfer characteristics which give the local flow and temperature distribution. In the present study a new method of simulating a louver fin heat exchanger is proposed to analyze the underhood thermal management of a vehicle. A conjugate heat transfer analysis of a shell and tube heat exchanger is also performed. For the analysis of the entire shell side, the tubes, baffles, and inlet and outlet are modeled in detail. To improve the heat transfer performance of the shell side, the effect of design factors such as sealing strips is also examined. To obtain higher heat transfer performance, plate heat exchangers have been developed as compact heat exchangers with various corrugation types of plate surface. Among the various groove patterns on the plate surface, the herringbone and dimple types are widely applied to plate heat exchangers for many industrial applications. Most of the earlier heat transfer analyses of the herringbone-type plate heat exchanger are for a single passage of hot or cold fluid, and the effects of geometric parameters like chevron angle and plate pitch and flow parameters like the Reynolds and Prandtl numbers have been studied [4, 5, 6]. This gives the overall performance, but not the local flow and temperature distribution. In the present study an analysis method is proposed for the conjugate heat transfer between the hot flow, separating plate and cold flow in a repeating section of a herringbone-type plate heat exchanger with appropriate boundary conditions, and the effect of pulsatile flow on the heat transfer enhancement is also numerically investigated. A numerical analysis of a dimple plate heat exchanger is also performed. The present study investigated the heat transfer performance and friction characteristics of the dimple plate heat exchanger, proposing correlations for the friction factor and heat transfer coefficient as functions of a geometric factor, which are very useful for practical design work.
2 Louver Fin Radiator in a Vehicle

The radiator is probably the most important component affecting the efficiency and stable operation of the thermal system of a vehicle, due to its role in exhausting the engine heat to the ambient air. Radiators using louver fins (Fig. 1) are known to give the best heat exchange efficiency among the various types of radiator. To analyze underhood thermal management, the radiator is modeled in the VTM module of STAR-CD [7] as a porous medium through which air flows with resistance, gaining heat from the coolant. The flow resistance through the porous medium is modeled from experimental data. The coolant also flows through a porous medium occupying the same space as the air porous medium. This module accurately predicts the temperature distribution of the air passing through the radiator, whereas the coolant temperature distribution is not accurate, since the coolant is modeled to flow through the porous medium as explained above. In the present study, to predict the coolant temperature distribution accurately, a Semi-microscopic Heat Exchange (SHE) method is developed.
Fig. 1. Vehicular louver fin radiator and numerical model
Fig. 2. Composition of louver fin radiator
In the SHE method, the coolant passage is modeled separately along with the tube wall, and only the louver fin area is modeled as a porous medium. In this method, two distinct porous media occupying the same space are modeled: one is the same air porous medium as in the previous case, and the other is a solid porous medium in which only heat transfer by conduction is considered (Fig. 2). The amount of heat transfer is computed from the temperature difference between the air and the solid porous media together with the heat transfer coefficient between them. To model the porous media, the Darcy equation for mass flow and pressure drop is used as the momentum equation:
$$\frac{\rho}{\varepsilon}\left(\frac{\partial U_D}{\partial t} + U_D \cdot \nabla U_D\right) = -\frac{\partial P}{\partial x_i} + \frac{\mu'}{\varepsilon}\nabla^2 U_D - \frac{\mu}{K}U_D - \frac{\rho\, C_E}{K^{1/2}}\left|U_D\right|U_D + \rho f_i \tag{1}$$

where $\varepsilon$ is the porosity, $\mu'$ the effective viscosity and $C_E$ the inertia (Ergun) coefficient.
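For illustration only, the two velocity-dependent resistance terms of Eq. (1) can be evaluated for a given superficial velocity as in the following sketch; the function name and the assumption of uniform, scalar $K$ and $C_E$ are ours, not from the paper.

```python
import numpy as np

def porous_resistance(U_D, mu, rho, K, C_E):
    """Velocity-dependent resistance terms of Eq. (1).

    U_D : superficial (Darcy) velocity vector [m/s]
    mu  : dynamic viscosity [Pa s], rho : density [kg/m^3]
    K   : permeability [m^2], C_E : inertia (Ergun) coefficient [-]
    """
    # -(mu/K) U_D is the Darcy (viscous) drag;
    # -(rho C_E / sqrt(K)) |U_D| U_D is the inertial (Ergun) drag.
    return -(mu / K) * U_D - (rho * C_E / np.sqrt(K)) * np.linalg.norm(U_D) * U_D
```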
Different from the model used here, a one-medium model does not consider the effects of local heat transfer between the air and the louver fin, since it uses only one porous medium with a predetermined heat source. The energy equations of the present two-medium model, in which the heat source/sink is computed from the local temperature difference between the air and the solid porous media, are as follows.

Fluid phase:
$$\frac{\partial T_f}{\partial t} + \left(U_D \cdot \nabla T_f\right) = \left(\frac{k_{fe}}{\varepsilon\left(\rho C_p\right)_f} + D_d\right)\nabla^2 T_f + \frac{h_{sf}\, a}{\varepsilon\left(\rho C_p\right)_f}\left(T_s - T_f\right) \tag{2}$$
Solid phase:

$$\frac{\partial T_s}{\partial t} = \frac{k_{se}}{(1-\varepsilon)\left(\rho C_p\right)_s}\nabla^2 T_s - \frac{h_{sf}\, a}{(1-\varepsilon)\left(\rho C_p\right)_s}\left(T_s - T_f\right) \tag{3}$$
where $k_{fe}$ and $k_{se}$ are the fluid and solid effective thermal conductivities, $h_{sf}$ is the interfacial convective heat transfer coefficient and $a$ is the ratio of surface area to volume. In this study, the j-factor proposed by Chang and Wang [3] is used for the interfacial convective heat transfer coefficient.
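To make the two-medium coupling concrete, the sketch below evaluates the interfacial exchange term $h_{sf}\,a\,(T_s - T_f)$ shared by Eqs. (2) and (3) and applies it in a simple explicit update; convection and diffusion are omitted and all names are illustrative, not from the paper.

```python
import numpy as np

def exchange_update(T_f, T_s, h_sf, a, eps, rho_cp_f, rho_cp_s, dt):
    """One explicit step of the source/sink coupling in Eqs. (2) and (3).

    T_f, T_s : air and solid porous-media temperature fields [K]
    h_sf     : interfacial heat transfer coefficient [W/(m^2 K)]
    a        : surface area per unit volume [1/m], eps : porosity [-]
    """
    q = h_sf * a * (T_s - T_f)                         # volumetric exchange [W/m^3]
    T_f_new = T_f + dt * q / (eps * rho_cp_f)          # source term of Eq. (2)
    T_s_new = T_s - dt * q / ((1.0 - eps) * rho_cp_s)  # sink term of Eq. (3)
    return T_f_new, T_s_new
```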
Fig. 3. Application of SHE method for full automotive underhood model: computational mesh (left) and temperature distribution (right); (a) heat exchanger and cooling fan, (b) section plot of engine room
Fig. 3 shows the computational mesh for underhood thermal management. An unstructured mesh topology is seen in the figure, along with detailed meshes of the radiator and fan assembly. The temperature distributions obtained using the VTM module of STAR-CD and the SHE model are compared, and little difference is seen between them. The heat transfer rate computed with the VTM model is 24.4 kW and with the SHE model 23.9 kW. Therefore the heat transfer amount and temperature distribution given by the present SHE model are comparable to those given by the VTM module of STAR-CD.
Fig. 4. Heat transfer rate as a function of geometric factor
Fig. 5. Temperature distribution of louver fin radiator for various geometric factors: (a) $j/\mathrm{Re}_{L_p}^{-0.49}$ = 0.0838, (b) 0.1676, (c) 0.3352, (d) 0.5028, (e) 0.6704
In the present SHE model, the temperature distribution of the louver fin is successfully predicted, and that of the coolant passages as well, which the VTM model cannot predict at all. The SHE model developed in this study can therefore be used for underhood thermal management analysis in cases where accurate temperature distributions of the coolant and louver fin are required, without modeling the louver fin geometry in detail. The thermal behavior of the radiator is affected by the geometry of the louver fin: the j-factor and heat transfer coefficient vary as the geometry is changed. Thus, the term $j/\mathrm{Re}_{L_p}^{-0.49}$ is a function only of the specific geometry of the louver fin radiator [8]. Fig. 4 represents the heat exchange rate for various louver fin geometries. The temperature distributions in the coolant passages are shown in Fig. 5 for five radiators with different values of the geometric factor. It is also seen from the figure that the present method can predict the temperature distribution of the coolant passages in detail, which is not possible with conventional methods.
3 Shell and Tube Heat Exchanger

Shell and tube heat exchangers are widely used in HVAC and the process industry. In the present study a shell and tube heat exchanger designed for chilling air on the shell side by evaporating refrigerant on the tube side is analyzed. The heat exchanger consists of 330 circular tubes in a U-shape and 10 baffles to hold the tubes in position and to guide the shell-side flow, as in Fig. 6. The detailed mesh structure around the tubes is also shown in the figure. To model the whole shell side of the heat exchanger, around 15 million computational cells were used. As boundary conditions for the computation, the air velocity and temperature are given at the inlet of the shell side, while on the tube walls a constant temperature of 0°C is given, since the refrigerant evaporates as it flows through the tubes. The results are given in Fig. 7. The velocity magnitude plot shows that most of the air flows through the gap between the tubes and the shell wall and little through the tube bundle, which results in lower performance. Thus, a design modification for higher performance is required. Based on the findings of the present study, a new design was proposed with sealing strips to block the high velocity streams near the shell wall (Fig. 8). The velocity magnitude pattern inside the shell was examined with respect to the location of the sealing strips, as shown in Fig. 9. It is shown that the most active heat transfer occurs in model 3 and the least in model 1.
Fig. 6. Geometry of shell and tube heat exchanger: (a) shell, (b) tube, (c) end plate, (d) mesh structure
Fig. 7. Section plots of velocity magnitude inside shell
Fig. 8. Geometry inside shell showing sealing strip
Fig. 9. Velocity magnitude with various locations of sealing strips: (a) model 1, (b) model 2, (c) model 3
Fig. 10. Outlet temperature and pressure drop (left), and heat transfer coefficient (right) as functions of size of sealing strips
The size and location of the sealing strips are factors affecting the performance of the heat exchanger. From the results of the present study it is found that as the sealing strips become longer, the outlet temperature decreases, whereas the pressure drop increases, as shown in Fig. 10. The heat transfer coefficient also increases as the size of the sealing strips becomes larger. These results are useful in the optimal design of the shell and tube heat exchanger.
4 Plate Heat Exchanger

4.1 Herringbone-Type Plate Heat Exchanger
Plate heat exchangers are widely used in industrial applications. Among the great variety of possible corrugation patterns of the plate surface, the herringbone type has proved to be a successful design with good heat transfer performance [9]. In a previous study [10], a heat transfer analysis of a herringbone-type plate heat exchanger was performed using a simplified geometry, concentrating on obtaining the heat transfer coefficient. For this purpose a single flow passage was modeled and the effects of geometric and flow parameters on the heat transfer coefficient were investigated. In the present study, an analysis method is proposed for the conjugate heat transfer between the cold flow passage, separating plate and hot flow passage in a repeating section of a herringbone-type plate heat exchanger with appropriate boundary conditions. The
model for the complex flow passages of a real plate heat exchanger, whose parts are shown in Fig. 11, has about 4 million computational cells (see Fig. 12). The geometry used in the present study is the same as the one used in the experimental study by Han and Kang [11].
Fig. 11. Plate heat exchanger used in experiment [11]
Fig. 12. Computational mesh of periodic section
Fig. 13. Schematic diagram of a plate heat exchanger
Fig. 14. Boundary conditions
Fig. 13 shows the schematic of a heat exchanger consisting of 10 plates, forming four cold flow and five hot flow passages with alternating herringbone patterns. The boundary conditions used in the present computation are shown in Fig. 14. The outer plates are halved and a periodic condition is imposed (plates 4 and 6 in the figure). The total amount of flow entering the computational domain through inlet (1) in the figure is divided into two streams: one into the flow passage and the other to the next flow passage, depicted as inlet (2). The velocities and temperatures of the flows at
these two boundaries are given. The difference between the flow rates at these two boundaries flows through the plates, becomes mixed with the flow from inlet (3) and flows out. The flow from inlet (3) is the flow through the downstream plates, and its flow rate is the same as that at inlet (2). Since the temperature of the flow at inlet (3) is not known a priori, it is updated from the resulting outlet temperature of the heat exchange process as the computation proceeds. In this manner one can obtain the final converged solution.
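This update amounts to a fixed-point iteration around the flow solution; a schematic sketch follows, in which solve_passage is a hypothetical stand-in for one CFD solution of the periodic section (the actual solver coupling is not detailed here).

```python
def converge_inlet3_temperature(solve_passage, T_inlet1, T_guess,
                                tol=1e-3, max_iter=50):
    """Iterate the a-priori unknown inlet (3) temperature to convergence."""
    T_inlet3 = T_guess
    for _ in range(max_iter):
        T_outlet = solve_passage(T_inlet1, T_inlet3)
        if abs(T_outlet - T_inlet3) < tol:   # outlet now consistent with inlet (3)
            break
        T_inlet3 = T_outlet                  # feed the outlet back to inlet (3)
    return T_inlet3
```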
Fig. 15 shows the temperature distribution in the hot and cold flow passages and on the plate in between. It is well shown from the figure that the cold flow entering the passage is heated as it flows through the plates, and the counter-flowing hot flow loses the same amount of heat as the cold flow gains. The temperature distribution on the plate in between is about the average of the temperature distributions of the two flows.
Fig. 15. Temperature distribution in flow passages and separating plate
Fig. 16. Comparison of heat transfer coefficient with experimental data [11]
Fig. 17. Schematic diagram of experimental equipment [11] and inlet velocity for pulsatile flow
Fig. 18. Comparison of heat transfer rate between pulsatile flow and steady state
It is also shown from the figure that the flow is vigorously mixed in the span-wise direction due to the groove pattern. The heat transfer coefficients obtained from the present computation are compared to the existing experimental data of Han and Kang [11] in Fig. 16 for various flow rates from 0.04 to 0.12 kg/s. In these cases the flow rates are the same for both the hot and cold passages. The results of the present computation agree quite well with the experimental data, thus verifying the validity of the present numerical methodology for analyzing the heat transfer in a plate heat exchanger with a herringbone pattern. One can accurately predict the overall performance of a herringbone-type plate heat exchanger without any empirical correlation for the heat transfer coefficient and, at the same time, obtain the detailed temperature and flow distributions in the flow passages by the present computational method. To evaluate the improvement of heat transfer by pulsatile flow, a 7.5 Hz pulsation was imposed at the inlet of the cold flow passage (Fig. 17). In this case, the periodic average was adopted. The calculated heat transfer is shown in Fig. 18. From the result, the periodic-averaged heat transfer rate was obtained as Q = 549 W, whereas the steady-state heat transfer rate was Q = 532 W. The heat transfer rate thus increased by about 3% with the pulsatile flow, since it creates more active mixing than the steady flow [10].
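As a sketch of the periodic averaging adopted here, the mean heat transfer rate over one forcing period can be computed from the time history with the trapezoidal rule; the sample figures below merely reproduce the quoted 3% enhancement and are not additional data.

```python
import numpy as np

def periodic_average(t, q):
    """Period-averaged heat transfer rate from a time history q(t)."""
    return np.trapz(q, t) / (t[-1] - t[0])

# Illustrative check of the quoted values: (549 - 532) / 532 is about 3.2 %.
enhancement = (549.0 - 532.0) / 532.0
```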
4.2 Dimple-Type Plate Heat Exchanger

Recently, plate heat exchangers with dimple patterns for high heat transfer performance have been getting attention in the waste heat recovery industry due to their relatively small size. The present study investigated the heat transfer performance and flow friction characteristics of the dimple plate heat exchanger by numerical simulation. The effects of the dimple height (Hd) and the dimple diameter (D) on the heat transfer performance were also investigated.
Fig. 19. Dimple plate heat exchanger and numerical model
Fig. 20. Computational domain of dimple plate heat exchanger: (a) case 1 (flue gas and water), (b) case 2 (flue gas and air)
Fig. 21. Temperature distribution of each flow passage: (a) case 1 (flue gas and water), (b) case 2 (flue gas and air)
The dimple plate heat exchanger and the numerical model are shown in Fig. 19. In the present study, the total number of computational cells is around 10 million. The inlet and outlet are located so as to give a counter-flow pattern between the flue gas and water in case 1, and a cross-flow pattern between the flue gas and air in case 2 (see Fig. 20). In addition, a spacer of 5 mm was inserted between the two separating plates in case 2, as shown in Fig. 20. To predict the heat transfer characteristics in the heat exchanger, the conjugate heat transfer method was used to analyze the heat transfer between the hot flow passage, separating plate and cold flow passage. From the numerical analysis, Fig. 21 shows the characteristics of the heat transfer in the middle section of each channel. It is also shown that the hot flue gas loses heat as the cold fluid gains it. The heat transfer rate obtained from the present study was compared with the experimental data provided by the manufacturer of the dimple plate heat exchanger. The numerical results are in good agreement with the experimental data, as shown in Fig. 22. In addition, to analyze the characteristics of heat transfer and flow friction, the Reynolds number was evaluated based on the characteristic length (Lc), which was
Fig. 22. Comparison of heat transfer rate between numerical result and experimental data
Fig. 23. Correlations for flow friction (left, Fanning f-factor) and heat transfer (right, Colburn j-factor) as functions of geometric factor
calculated as the fluid volume per unit surface area. The flow friction characteristics were correlated in terms of the Fanning f-factor, defined as follows:

$$f = \frac{\Delta P}{\frac{1}{2}\rho V^2 \left(A/a\right)} \tag{4}$$
In this analysis, the heat transfer coefficient was obtained from the heat transfer area, the temperature difference between inlet and outlet, and the heat transfer rate. The Colburn j-factor was thus defined as follows:

$$j = \left(\frac{h}{\rho c_p V \left(H/L\right)}\right)\left(\frac{\mu c_p}{k}\right)^{2/3}, \qquad h = \frac{Q}{A \cdot \Delta T} \tag{5}$$
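Equations (4) and (5) transcribe directly into code, as sketched below; here A is taken as the heat transfer area, a as the flow cross-sectional area and H/L as the geometric ratio appearing in Eq. (5), since these symbols are not fully defined in the text.

```python
def fanning_f(dp, rho, V, A, a):
    """Fanning friction factor, Eq. (4): f = dp / (0.5 rho V^2 (A/a))."""
    return dp / (0.5 * rho * V**2 * (A / a))

def colburn_j(Q, A, dT, rho, cp, V, H, L, mu, k):
    """Colburn j-factor, Eq. (5), with h = Q / (A dT)."""
    h = Q / (A * dT)                      # heat transfer coefficient [W/(m^2 K)]
    return (h / (rho * cp * V * (H / L))) * (mu * cp / k) ** (2.0 / 3.0)
```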
Simulations were also performed to obtain the effect of the geometric factor (Hd/D), defined as the dimple height over the dimple diameter. In this case, flue gas and air were considered for the hot and cold flow sides, respectively. Fig. 23 shows the Fanning f-factor and Colburn j-factor obtained as functions of the geometric factor (Hd/D) with variation of the velocity at the inlet boundary condition. In the figure, the f-factor rises as the geometric factor becomes larger, whereas the Colburn j-factor decreases over the same range.
5 Concluding Remarks

In the present paper, numerical methodologies for the heat transfer analysis of four types of heat exchanger are studied: a vehicular louver fin radiator, a shell and tube heat exchanger, a plate heat exchanger with a herringbone pattern and a plate heat exchanger with a dimple pattern. A Semi-microscopic Heat Exchange (SHE) method was proposed to examine the louver fin radiator in vehicle underhood thermal management. The SHE method successfully predicts the flow field and temperature distribution of the air side, and those of the coolant passages as well. A flow simulation method for a shell and tube heat exchanger was also proposed to obtain the overall flow and heat
transfer patterns inside the shell. To improve the heat transfer performance, the effects of various locations and lengths of the sealing strips were investigated. A herringbone-type plate heat exchanger was simulated by conjugate heat transfer analysis between the hot flow, separating plate and cold flow with appropriate boundary conditions, giving good agreement with experimental data. Furthermore, pulsatile flow was adopted to improve the heat transfer performance, and it is numerically shown that heat transfer enhancement occurs with the 7.5 Hz pulsatile flow. A numerical analysis of a dimple plate heat exchanger was also performed. As a result of the present numerical studies, one can obtain not only the characteristics of the flow field and temperature distribution, but also correlations for flow friction and heat transfer, which are very useful for the optimal design of heat exchanger systems.
References
[1] Kays WM, London AL (1984) Compact Heat Exchangers, 3rd ed. McGraw-Hill, New York
[2] Davenport CJ (1983) Correlation for heat transfer and flow friction characteristics of louvered fin. AIChE Symp Ser 79(225):19-27
[3] Chang Y, Wang C (1996) Air side performance of brazed aluminum heat exchangers. J Enhanced Heat Transfer 3(1):15-28
[4] Focke WW, Zachariades J, Olivier I (1985) The effect of the corrugation inclination angle on the thermohydraulic performance of plate heat exchangers. Int J Heat Mass Transfer 28(8):1469-1479
[5] Heavner RL, Kumar H, Wanniarachchi AS (1993) Performance of an industrial plate heat exchanger: effect of chevron angle. AIChE Symp Ser 89(295):65-70
[6] Martin H (1996) A theoretical approach to predict the performance of chevron-type plate heat exchangers. Chem Eng Process 35:301-310
[7] STAR-CD V3.24 User Guide (1998) Computational Dynamics, LTD
[8] Hur N, Park J-T, Lee SH (2007) Development of a Semi-microscopic Heat Exchange (SHE) method for a vehicle underhood thermal management. Asian Symposium on Computational Heat Transfer and Fluid Flow, ASCHT2007-K04 (in CD-ROM)
[9] Ko TH (2006) Numerical analysis of entropy generation and optimal Reynolds number for developing laminar forced convection in double-sine ducts with various aspect ratios. Int J Heat Mass Transfer 49:718-726
[10] Chin S-M, Hur N, Kang BH (2004) A numerical study on heat transfer enhancement by pulsatile flow in a plate heat exchanger (in Korean). 3rd National Congress on Fluids Engineering, pp 1479-1484
[11] Han SK, Kang BH (2005) Effects of flow resonance on heat transfer enhancement and pressure drop in a plate heat exchanger. The Korean Society of Mechanical Engineers 17:165-172
Application of a Level-Set Method in Gas-Liquid Interfacial Flows

Sang Hyuk Lee¹, Gihun Son², and Nahmkeon Hur²

¹ Graduate School, Sogang University, Seoul, Korea
² Department of Mechanical Engineering, Sogang University, Seoul, Korea
[email protected]
Abstract. In this study, numerical computations of gas-liquid interfacial flows were performed using a Level Set (LS) method for various applications. The free surface motion from an initially disturbed surface and the motion of a drop on an inclined wall due to gravity were computed to verify the simulation with the LS method. The binary drop collision was also simulated in the present study. After the drop collision, the behavior of the drops and the formation of satellite drops were obtained. Furthermore, the impact of a drop on a liquid film/pool was simulated. From the results, the crown formation and bubble entrapment were successfully observed. The present numerical results showed good agreement with theoretical and available experimental data, and hence the LS method for interface tracking can be applied to various flow problems with sharp gas-liquid interfaces.
1 Introduction

Phenomena involving gas-liquid interfacial flows are commonly encountered in nature and in engineering applications. It is important to predict the behavior of drops and bubbles in the formation of falling drops and the evolution of sprays in industrial applications, and experimental and theoretical analyses of drop and bubble dynamics have been performed for this purpose. Ashgriz and Poo [1] analyzed the interaction regimes of the binary drop collision experimentally. From their results, they proposed boundaries between the interaction regimes: coalescence, reflexive separation and stretching separation. Ko and Ryou [2] proposed theoretical correlations for drop collisions among distributed drops. Not only the collision between drops but also the drop impact on a liquid film or pool has been studied. Cossali et al. [3] proposed a correlation for drop splashing and spreading from the drop impact on a liquid film, and Oguz and Prosperetti [4] analyzed the bubble entrapment from the drop impact on a liquid pool. To analyze the characteristics of gas-liquid interfacial flows, not only experimental but also numerical analyses have been performed. In the numerical simulation of interfacial flow, the interface tracking plays an important role, and methodologies for interface tracking have long been developed. Hirt and Nichols [5] introduced the volume of fluid (VOF) method, in which the interface is tracked by the VOF function based on the volume fraction of a particular phase in each cell. The VOF method, widely used in most commercial CFD programs, is
good for mass conservation. However, it may have difficulties in computing sharp interfaces, since the VOF method requires an assumption on how the interface is located inside the computational cell. To overcome these difficulties, Sussman et al. [6] proposed a level set (LS) method based on a smooth distance function. In the LS method, the interface is tracked by the LS function, defined as the distance from the interface. Since the LS function can locate the interface accurately, it has recently been used in the analysis of gas-liquid two-phase flows. In this study, numerical simulations of gas-liquid interfacial flows were performed using the LS method for various applications. To verify the simulation with the LS method, the free surface motion and the drop motion on an inclined wall were simulated, and the binary drop collision and drop impact on a liquid film or pool were numerically analyzed.
2 Numerical Analysis

2.1 Governing Equations

To analyze the gas-liquid interfacial flow without mixing between the two phases, the mass and momentum equations for incompressible fluids are written as:

$$\nabla \cdot \mathbf{u} = 0$$

$$\rho\left(\frac{\partial \mathbf{u}}{\partial t} + \mathbf{u} \cdot \nabla\mathbf{u}\right) = -\nabla p + \nabla \cdot \mu\left[\left(\nabla\mathbf{u}\right) + \left(\nabla\mathbf{u}\right)^{T}\right] + \rho\mathbf{g} - \sigma\kappa\nabla H$$
where $\mathbf{u}$ denotes the velocity vector in Cartesian coordinates, $p$ the pressure, $\rho$ the density, $\mu$ the dynamic viscosity, $\mathbf{g}$ the gravity force, $\sigma$ the surface tension coefficient, $\kappa$ the interface curvature and $H$ the smoothed step function.

2.2 Level Set Method

For a numerical analysis of the interfacial flow, information on the interface between the liquid and gas is needed. In the LS method, the interface separating the two phases is tracked by a LS function $\phi$, defined as the distance from the interface; the negative sign is used for the gas phase and the positive sign for the liquid phase, and the interface itself is described by $\phi = 0$. For an incompressible flow, the governing equation for the advection of the LS function is:

$$\frac{\partial \phi}{\partial t} + \nabla \cdot \left(\mathbf{u}\phi\right) = 0$$
With the LS function obtained from the above equation, the smoothed step function $H$ and the interface curvature $\kappa$ can be calculated, and the density $\rho$ and viscosity $\mu$ in the governing equations are then obtained from the step function in each cell:

$$H = \min\left\{1, \max\left[0, \frac{1}{2} + \frac{\phi}{3h_n} + \frac{\sin\left(2\pi\phi/3h_n\right)}{2\pi}\right]\right\}$$

$$\kappa = \nabla \cdot \frac{\nabla\phi}{\left|\nabla\phi\right|}$$

$$\rho = \rho_g + \left(\rho_l - \rho_g\right)H, \qquad \mu = \mu_g + \left(\mu_l - \mu_g\right)H$$
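A minimal sketch of these relations, vectorized over a LS field ($h_n$ being the grid spacing as above; the function names are ours), might read:

```python
import numpy as np

def smoothed_step(phi, hn):
    """Smoothed step function H over a band of width 3*hn around phi = 0."""
    s = 0.5 + phi / (3.0 * hn) + np.sin(2.0 * np.pi * phi / (3.0 * hn)) / (2.0 * np.pi)
    return np.clip(s, 0.0, 1.0)   # clip implements the min/max of the definition

def cell_property(prop_gas, prop_liquid, H):
    """Blend a material property (e.g. rho or mu) between gas and liquid values."""
    return prop_gas + (prop_liquid - prop_gas) * H
```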
2.3 Contact Angle Condition

In the LS formulation, a contact angle $\varphi$ is used to evaluate the LS function at the wall. The contact angle varies dynamically between an advancing contact angle $\varphi_a$ and a receding contact angle $\varphi_r$, and this dynamic contact angle is determined by the speed $U_c$ of the contact line. For a simulation of drop impact on a dry wall, Fukai et al. [7] proposed a contact angle model: when the contact angle lies in the range $\varphi_r < \varphi < \varphi_a$, the contact line does not move; while the contact line moves, however, the contact angle remains constant at $\varphi = \varphi_a$ for $U_c > 0$ or $\varphi = \varphi_r$ for $U_c < 0$. Son and Hur [8] proposed an efficient formulation of Fukai et al.'s contact angle model for implementation into the LS method on non-orthogonal grids. Using $\left|\nabla\phi\right| = 1$ and referring to Fig. 1, the LS function $\phi_A$ at the wall can be calculated as
$$\phi_A = \phi_B + \left|\vec{d}\right|\cos\left(\varphi + \beta\right)$$

where

$$\beta = \alpha\,\mathrm{sign}\left[\left(\vec{d} - \vec{d}_n\right)\cdot\nabla\phi\right], \qquad \vec{d} = \vec{x}_B - \vec{x}_A, \qquad \vec{d}_n = \left(\vec{d}\cdot\vec{n}_w\right)\vec{n}_w$$
Fig. 1. Schematic for implementation of a contact angle condition
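A sketch of this wall evaluation of the LS function is given below; the vector names mirror Fig. 1, and treating α as a known angle supplied by the caller is our simplifying assumption.

```python
import numpy as np

def phi_at_wall(phi_B, x_A, x_B, n_w, grad_phi, varphi, alpha):
    """Wall LS value phi_A = phi_B + |d| cos(varphi + beta) from the relation above."""
    d = x_B - x_A                      # vector from the wall point A to the cell B
    d_n = np.dot(d, n_w) * n_w         # component of d along the wall normal n_w
    beta = alpha * np.sign(np.dot(d - d_n, grad_phi))
    return phi_B + np.linalg.norm(d) * np.cos(varphi + beta)
```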
3 Gas-Liquid Interfacial Flows

In this study, various gas-liquid interfacial flows were numerically simulated. The gas-liquid flows are influenced by various parameters, which can be
summarized by non-dimensional parameters, defined as follows:

$$\mathrm{Re} = \frac{\rho_l U D}{\mu_l}, \qquad \mathrm{We} = \frac{\rho_l U^2 D}{\sigma}, \qquad \mathrm{Fr} = \frac{U^2}{gD}, \qquad \mathrm{Oh} = \frac{\mu_l}{\sqrt{\rho_l \sigma D}}$$
where Re denotes the Reynolds number, We the Weber number, Fr the Froude number and Oh the Ohnesorge number; the subscript $l$ pertains to the liquid. The interfacial flows are determined by the effects of these parameters and the initial conditions, and their characteristics were analyzed using the non-dimensional time $\tau = D_0/U$.
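These groups are straightforward to evaluate; the sketch below uses illustrative values for a millimetre-scale water drop, not a case from the paper.

```python
import math

def nondimensional_groups(rho_l, mu_l, sigma, g, U, D):
    """Re, We, Fr and Oh as defined above (subscript l: liquid)."""
    Re = rho_l * U * D / mu_l
    We = rho_l * U**2 * D / sigma
    Fr = U**2 / (g * D)
    Oh = mu_l / math.sqrt(rho_l * sigma * D)
    return Re, We, Fr, Oh

# e.g. a 3 mm water drop at 2 m/s: Re = 6000, We ~ 167, Fr ~ 136, Oh ~ 0.002
print(nondimensional_groups(1000.0, 1.0e-3, 0.072, 9.81, 2.0, 3.0e-3))
```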
3.1 Free Surface Motion

To validate the LS formulation, a numerical analysis was performed for the free surface motion occurring when a heavier liquid is placed under a lighter one [8]. Fig. 2(a) shows the computational grids and the initial conditions. The initial interface between the liquid and gas is disturbed as $y_{int}^{0} = 7 + 2\cos(x/3)$, and the non-dimensional parameters are $\rho_l/\rho_g = 4$, $\mu_l/\mu_g = 1$, Re = 10 and We = 0.333. The resulting behavior of the free surface is shown in Fig. 2(b). During the early period of the simulation, the interface oscillates due to the force imbalance between gravity and surface tension. As time elapses, the oscillation decays due to viscosity and the interface becomes stationary. The interface shape obtained at the stationary state was compared with the exact solution, $y_{int} = 7$; Fig. 2(b) shows that the numerical solution agrees with the exact solution.
Fig. 2. Free-surface motion from an initially disturbed surface: (a) initial interface, (b) behavior of the free surface
3.2 Drop Motion on an Inclined Wall
To validate the LS formulation with the contact angle condition, the drop motion on an inclined wall was also analyzed [8]. The computational grids and initial conditions for the drop motion on the inclined wall are shown in Fig. 3(a). All boundaries of the domain are specified by the no-slip and contact angle conditions. A semicircular drop is initially placed on the left side wall with no gravity force.
Fig. 3. Drop motion on the inclined wall I (without gravity): (a) initial condition, (b) steady drop shapes at $\varphi = 30°$ and $\varphi = 90°$
Fig. 4. Drop motion on the inclined wall II (with gravity): (a) $\varphi_{adv} = \varphi_{rec} = 90°$ and Fr = 0.2, (b) $\varphi_{adv} = 90°$, $\varphi_{rec} = 30°$ and Fr = 0.1
The non-dimensional parameters are $\rho_l/\rho_g = 1000$, $\mu_l/\mu_g = 100$, Re = 44.7 and We = 1. From this simulation, Fig. 3(b) shows the numerical results for contact angles of $\varphi_a = \varphi_r = 30°$ and $\varphi_a = \varphi_r = 90°$. The steady-state drop shape is formed as a straight line or a truncated circle satisfying the specified contact angle, and the numerical results at the steady state show no difference from the exact solution. The drop motions with gravity are shown in Fig. 4. In these cases, the gravitational force is dominant over the surface tension force holding the drop on the left side wall, and hence the drop slides down the inclined wall. The drop behavior is then determined by the contact angle condition.

3.3 Binary Drop Collision
In this study, the binary drop collision was numerically simulated using the LS method. When a drop collision occurs, the interaction of the drops is influenced by the drop properties, drop velocity, drop size and impact parameter. These parameters can be summarized by the non-dimensional parameters: Weber number, Ohnesorge number, drop-size ratio ($\Delta = D_s/D_b$) and non-dimensional impact parameter ($x = 2X/(D_b + D_s)$). Through the effects of these parameters, the collision processes are
generated, involving complicated phenomena. The drop collision can be classified into four interaction regimes: bouncing, coalescence, reflexive separation and stretching separation. The bouncing regime is observed in hydrocarbon drop collisions; when bouncing occurs, the gas layer around the drop hinders the coalescence of the two drops. This phenomenon, however, does not appear in the case of water drops. In this study, simulations of the coalescence, reflexive separation and stretching separation regimes were performed. Fig. 5 shows the schematic of the binary drop collision. The drop collisions were analyzed under two different conditions, head-on and off-center collision: 2D axi-symmetric simulations of head-on collisions and 3D simulations of off-center collisions were performed. From these simulations, the behavior of the drops and the formation of satellite drops were obtained. Fig. 6 shows the results of the head-on collision under various conditions. As the Weber number increases, the reflexive energy
Fig. 5. Schematic of binary drop collision
increases. Therefore, reflexive separation occurs more easily at high Weber number, and the size of the satellite drop increases; secondary satellite drops may then be separated from the bigger satellite drop. Fig. 7 shows the results of the off-center collision under various conditions. At low impact parameter, reflexive separation is generated as in Fig. 6. As the impact parameter increases, the stretching energy becomes larger; when the stretching energy is comparable to the reflexive energy, coalescence of the two drops occurs, and stretching separation is then generated at high impact parameter. From these results, the characteristics of the drop collision were compared with the experimental [1] and theoretical [2] results in Fig. 8. The interaction regimes of the drop collision for various Weber numbers and impact parameters are shown in Fig. 8(a), and the numbers of satellite drops for various impact parameters are shown in Fig. 8(b). These numerical results show good agreement with the previous correlations.
Fig. 6. Drop behavior of head-on collision: (a) We = 23 and Δ = 1.0, (b) We = 40 and Δ = 1.0, (c) We = 56 and Δ = 0.5
Fig. 7. Drop behavior of off-center collision: (a) We = 83, Δ = 1.0 and x = 0.34, (b) We = 83, Δ = 1.0 and x = 0.43
Fig. 8. Comparison of numerical results with experimental and theoretical results: (a) interaction regimes of drop collision, (b) number of satellite drops
3.4 Drop Impact on the Liquid Film
When a drop impacts on a surface, the resulting phenomena are determined by the properties of the surface, such as whether it is a dry or wetted surface or a liquid pool. In this section, the drop impact on a liquid film was numerically analyzed. The phenomena of drop impact on a liquid film depend on the drop properties, impact velocity, drop size and liquid film thickness. These parameters can be summarized by two main non-dimensional parameters: the Weber number and the non-dimensional film thickness ($\delta = h/D_0$). Through the effects of these parameters, splashing and crown formation and propagation are generated. In this study, 2D axi-symmetric simulations of the drop splashing and spreading due to a water drop impacting on a liquid film were performed at a Weber number of 297. Fig. 9 shows the schematic of the drop impact on a liquid film: a spherical drop with velocity U impacts on the liquid film of thickness h. Since the kinetic energy of the impacting drop is reflected by the static liquid film, a crown forms in the contact area between the drop and the liquid film. After the drop impacts on the liquid film, splashing and spreading are generated, as in Fig. 10. The crown height grows, and secondary drops may be generated by the flow instability; after the crown height reaches its maximum value, it is decreased by gravity. Fig. 11(a) shows the evolution of the crown height for various film thicknesses. This numerical result for the crown height shows good agreement with the experimental results of Cossali et al. [3]. After the drop impacts on the liquid film, the crown spreads outward. Fig. 11(b) shows the evolution of the crown diameter, defined as the outer diameter of
Fig. 9. Schematic of the drop impact on liquid film
Fig. 10. Drop splashing and spreading (We = 297, δ = 0.29)
Fig. 11. Characteristics of the crown for various film thicknesses (We = 297): (a) crown heights, (b) crown diameters
the neck below the crown rim. These numerical results were compared with previous empirical correlations [3, 9]. The correlation of Yarin and Weiss [9] underestimates the crown diameter compared with that of Cossali et al. [3], because Yarin and Weiss [9] do not consider the film thickness. The numerical results of the present study correspond with Cossali et al. [3].

3.5 Bubble Entrapment
In this study, the formation of a bubble due to drop impact was numerically analyzed. When a spherical drop impacts on a liquid pool, a bubble can be generated by cavity collapse. This bubble entrapment is influenced by the drop properties, impact velocity and gravity. These parameters can be summarized by two non-dimensional parameters: the Weber number and the Froude number. In the present study, a 2D axi-symmetric simulation of bubble entrapment was performed at a Froude number of 200 and a Weber number of 138. Fig. 12 shows the numerical results for the bubble entrapment. After the drop impact on the liquid pool, the cavity grows in the downward and outward directions. The formation of the cavity generates an imbalance between gravity and surface tension; as time elapses, the interface becomes stationary. The impact velocity and gravity then affect the tendency of the cavity to collapse. In particular, the behavior of the surface at the center of the cavity determines the formation of the bubble; in this case, the bubble was entrapped at the center of the cavity.
42
S.H. Lee, G. Son, and N. Hur
Fig. 12. Bubble entrapment (Fr=200, We=138, H=4D)
4 Concluding Remarks

In the present study, a level set method was used to simulate various gas-liquid interfacial flows. By using the LS formulation and the contact angle condition for non-orthogonal grids, it is shown that the LS method can be applied to various complicated phenomena of sharp interfacial flows. To validate the LS method, the free surface motion from an initially disturbed surface and the drop motion on an inclined wall were simulated; the results with the present LS method were shown to match well with the exact solutions. The drop and bubble dynamics were then numerically investigated. From the simulation of the binary drop collision, the behavior of the drops and the formation of satellite drops were successfully predicted. Furthermore, the impact of a drop on a liquid film or pool was simulated: after the drop impact, crown formation is predicted on the liquid film and bubble entrapment in the liquid pool. These numerical results showed good agreement with the theoretical and available experimental data, and hence the LS method for interface tracking can be applied to various flow problems with sharp gas-liquid interfaces.
References
[1] Ashgriz N, Poo J Y (1990) Coalescence and separation in binary collisions of liquid drops. J Fluid Mech 221:183-204
[2] Ko G H, Ryou H S (2005) Modeling of droplet collision-induced breakup process. Int J Multiphas Flow 31:723-738
[3] Cossali G E, Marengo M, Coghe A, Zhdanov S (2004) The role of time in single drop splash on thin film. Exp Fluids 36:888-900
[4] Oguz H N, Prosperetti A (1990) Bubble entrainment by the impact of drops on liquid surfaces. J Fluid Mech 219:143-179
[5] Hirt C W, Nichols B D (1981) Volume of fluid (VOF) method for the dynamics of free boundaries. J Comput Phys 39:201-225
[6] Sussman M, Smereka P, Osher S (1994) A level set approach for computing solutions to incompressible two-phase flow. J Comput Phys 114:146-159
[7] Fukai J, Shiiba Y, Yamamoto T, Miyatake O, Poulikakos D, Megaridis C M, Zhao Z (1995) Wetting effects on the spreading of a liquid droplet colliding with a flat surface. Phys Fluids 7:236-247
[8] Son G, Hur N (2005) A level set formulation for incompressible two-phase flows on non-orthogonal grids. Numer Heat Transfer B 48:303-316
[9] Yarin A L, Weiss D A (1995) Impact of drops on solid surfaces: self-similar capillary waves, and splashing as a new type of kinematic discontinuity. J Fluid Mech 283:141-173
Modelling the Aerodynamics of Coaxial Helicopters – from an Isolated Rotor to a Complete Aircraft

Hyo Won Kim¹ and Richard E. Brown²

¹ Postgraduate Research Student, Imperial College London, UK (currently at the University of Glasgow as a Visiting Researcher)
[email protected]
² Mechan Chair of Engineering, University of Glasgow, UK
Abstract. This paper provides an overview of recent research on the aerodynamics of coaxial rotors at the Rotorcraft Aeromechanics Laboratory of the Glasgow University Rotorcraft Laboratories. The Laboratory’s comprehensive rotorcraft code, known as the Vorticity Transport Model, has been used to study the aerodynamics of various coaxial rotor systems. Modelled coaxial rotor systems have ranged from a relatively simple twin two-bladed teetering configuration to a generic coaxial helicopter with a stiff main rotor system, a tail-mounted propulsor, and a horizontal stabiliser. Various studies have been performed to investigate the ability of the Vorticity Transport Model to reproduce the detailed effect of the rotor wake on the aerodynamics and performance of coaxial systems, and its ability to capture the aerodynamic interactions that arise between the various components of realistic, complex, coaxial helicopter configurations. It is suggested that the use of such a numerical technique not only allows insight into the performance of such rotor systems but might also eventually allow the various aeromechanical problems that often beset new helicopter designs of this type to be circumvented at an early stage in their design.
1 Introduction

The flow field around a helicopter has posed a significant modelling challenge to the computational fluid dynamics (CFD) community due to the dominant and persistent nature of the vortical structures that exist within the wake generated by its rotors. CFD schemes based on the traditional pressure-velocity formulation of the Navier-Stokes equations generally struggle to preserve these vortical structures as the vorticity in the flow is quickly diffused through numerical dissipation. The effect of the artificial viscosity that arises from numerical dissipation can be reduced by increasing the grid resolution but the computation soon becomes prohibitively expensive. Of course, the problem is exacerbated further when a full helicopter configuration is considered, especially where the interaction between two (or more) geometrically separated components via their wakes acts to modify their aerodynamic loading. The inability to predict the consequences of certain interactional aerodynamics has indeed led to unexpected flight mechanic issues in many prototype helicopters [1-6]. In several cases, such interactions have resulted in significant overrun of development costs.
Modern requirements for high performance call for a new generation of highly innovative rotorcraft that are capable of both heavy-lift and high speed. Several nonconventional helicopter configurations, such as the tilt rotor and compound helicopter, have been put forward as possible solutions to these requirements. One such proposal, Sikorsky Aircraft Corporation’s X2 Technology Demonstrator [7], is based on a rigid coaxial rotor platform similar to the Advancing Blade Concept (ABC) rotor of the XH-59A, developed by the same company in the 1970s [8]. The advantage of a coaxial rotor with significant flapwise stiffness is that the effects of stall on the retreating blade can be delayed to higher forward flight speed as the laterally unbalanced load on one rotor can be compensated for by an equivalent, anti-symmetric loading on the other, contra-rotating rotor. The other limiting factor on the attainable speed, the effect of compressibility at the tip of the advancing blade, can be deferred by using an auxiliary device to augment the propulsive thrust of the main rotor. This allows the main rotor system to be offloaded, thus delaying the effects of compressibility to higher forward speed. In a system of such complexity, it is fair to expect the sub-components to interact aerodynamically, and hence their performance to be quite different when integrated as a complete rotorcraft compared to when analysed in isolation. The aim of the studies surveyed in this paper was to demonstrate that the current state of the art in computational modelling of helicopter aerodynamics has progressed in recent years to the point where the interactive aerodynamic flow field associated with a coaxial rotor system, and hence its performance, can be captured accurately. This survey will demonstrate that high fidelity computational simulations are capable of lending a detailed insight into the interactive aerodynamic environment of a new rotorcraft, even one with as complex a configuration as that of the compounded coaxial helicopter. The hope is that such analyses may soon be integrated early in the development of all rotorcraft, where they might help to avoid some of the costly mistakes that have been committed during the design of this extremely complex type of flying machines in the past.
2 Computational Model

The VTM is a comprehensive code tailored for the aeromechanical analysis of rotorcraft systems. The model was initially developed by Brown [9] and later extended by Brown and Line [10]. Unlike a conventional CFD approach, the governing flow equations are recast into vorticity-velocity form to yield
$$\frac{\partial \boldsymbol{\omega}}{\partial t} + \mathbf{u} \cdot \nabla\boldsymbol{\omega} - \boldsymbol{\omega} \cdot \nabla\mathbf{u} = \nu\nabla^2\boldsymbol{\omega} \tag{1}$$
This form of the Navier-Stokes equation allows vorticity to be conserved explicitly. The vorticity transport equation is discretised and solved using a finite volume TVD scheme which is particularly well suited to preserving the compactness of the vortical structures in the rotor wake for long periods of time. In the context of coaxial rotor aerodynamics, this property of the VTM enables the long-range aerodynamic interactions between the twin main rotors and any other geometrically well-separated components of the aircraft to be captured and resolved in detail. The flow is assumed
to be inviscid everywhere except on the solid surfaces immersed in the flow. The generation of vorticity by lifting elements such as the rotor blades or fuselage appendages is then accounted for by treating these components as sources of vorticity, effectively replacing the viscous term in Equation (1) with a source term, S. The aerodynamics of the rotor blades are modelled using an extension of the Weissinger-L version of lifting-line theory in conjunction with a look-up table for the two-dimensional aerodynamic characteristics of the blade sections. The temporal and spatial variation of the bound vorticity, ωb, then yields the source term
$$S = -\frac{d\boldsymbol{\omega}_b}{dt} + \mathbf{u}_b\,\nabla \cdot \boldsymbol{\omega}_b \tag{2}$$
The aerodynamics of the fuselage is modelled using a vortex panel approach in which the condition of zero through-flow is satisfied at the centroid of each panel. Lift generation by the fuselage is modelled by applying the Kutta condition along pre-specified separation lines on its surface. The viscous wake of the fuselage is not accounted for at present, however. The equations of motion for the blades, as forced by the aerodynamic loading along their span, are derived by numerical differentiation of a pre-specified non-linear Lagrangian for the particular system being modelled. No small-angle approximations are involved in this approach and the coupled flap-lag-feather dynamics of each of the blades are fully represented. The acoustics, where applicable, are computed using the Farassat-1A formulation of the Ffowcs Williams-Hawkings equations. The aerodynamic force contribution from each collocation point along each blade is used to construct a set of point acoustic sources, integration of which over the span of the blades yields the loading noise contribution. The lifting-line approach to the blade aerodynamics assumes an infinitesimally thin blade and hence the thickness contribution to the noise is modelled using a source-sink pair attached to each blade panel. The noise contribution from quadrupole terms as well as that from the fuselage is neglected. The VTM has shown considerable success in both capturing and preserving the complex vortex structures that are contained within the wakes of conventional helicopter rotors [9, 10] and has been used previously to study rotor response to wake encounters [11, 12], the vortex ring state [13, 14], the influence of ground effect [15, 16], and the acoustics of a model rotor [17]. In this paper, a review of the study of coaxial rotor aerodynamics undertaken at the Rotorcraft Aeromechanics Laboratory of Glasgow University Rotorcraft Laboratories using the VTM is provided. The ability of the method to capture convincingly the aerodynamics of a coaxial rotor system in isolation and as a part of a more complex full helicopter system is demonstrated in the following sections of this paper.
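The VTM's finite-volume TVD discretisation operates on the full three-dimensional vorticity transport equation and is beyond the scope of this overview; purely to convey the flavour of such limited schemes, a one-dimensional minmod-limited advection update might look as follows (a generic textbook sketch, not the VTM's implementation):

```python
import numpy as np

def minmod(a, b):
    """Minmod slope limiter."""
    return np.where(a * b > 0.0, np.sign(a) * np.minimum(np.abs(a), np.abs(b)), 0.0)

def advect_tvd_1d(omega, u, dx, dt):
    """One TVD step of d(omega)/dt + u d(omega)/dx = 0 (u > 0, periodic domain)."""
    nu = u * dt / dx                                       # CFL number (<= 1)
    s = minmod(np.roll(omega, -1) - omega, omega - np.roll(omega, 1))
    omega_face = omega + 0.5 * (1.0 - nu) * s              # limited face value at i+1/2
    flux = u * omega_face
    return omega - dt / dx * (flux - np.roll(flux, 1))     # conservative update
```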
3 Aerodynamics of a Hinged Coaxial Rotor

3.1 Aerodynamic Performance of an Isolated Coaxial Rotor

The reliability of the VTM's representation of coaxial rotor performance has been assessed in Ref. 18 by comparing the predicted power consumption of Harrington's coaxial rotor to that experimentally measured by Harrington [19] in hover and by
Fig. 1. Total power consumption (CP) as a function of thrust (CT) in hover – VTM simulations compared to Harrington’s experiment [18]
Fig. 2. Total power consumption (CP) in steady level flight as a function of forward speed (thrust coefficient CT = 0.0048) – Dingeldein’s experiment compared to VTM simulations with two different drag polars, CD′(α) and CD″(α) [18]
Dingeldein [20] in forward flight (see Figs. 1 and 2). Comparison of the numerical predictions against the experimental data shows that the overall power consumption is particularly sensitive to the model that is used to represent the drag polar for the blade aerofoil sections, and possibly thus to the precise operating conditions of the rotor blade. However, some degree of absolute quantification does appear to be justified when this variability in profile drag, hence profile power, is removed to reveal the induced component of the power consumption, as in the comparisons presented in the next section of this paper.

3.2 A Rational Approach for Comparing Coaxial and Single Rotors

The relative merit of a twin coaxial rotor over a conventional single rotor in terms of efficiency and performance has long been a point of contention. Comparisons made in the existing literature have often failed to account correctly for the essential differences in the configurations of the two types of rotor and thus on occasion have drawn seemingly conflicting conclusions [21]. Numerical results from the VTM have been used to establish a rational approach to a like-for-like comparison of performance between coaxial and conventional single rotor systems [18]. It should be borne in mind, however, when extrapolating isolated rotor data to full helicopter systems, that the comparisons of performance can be skewed by the additional 5-10% of the main rotor power that is required by the tail rotor of the single rotor platform to trim the yaw moment in the system [22]. This torque compensation is provided inherently within the coaxial system. The numerical results obtained by replicating Harrington's experiment have been used to highlight the potential for misrepresentation of the relative merit of the coaxial rotor when compared to a rotor of more conventional configuration. In the experiment, the performance of one of the constituent rotors of the coaxial system was compared to the performance of the entire coaxial system. Because of its lower solidity,
Fig. 3. Total power consumption (CP) as a function of thrust (CT) in hover – comparison of the coaxial rotor with one of its constituent rotors (‘single’) after normalising by solidity σ [18]
the single rotor is inherently limited in thrust-generating capability by blade stall and hence, even when the numerical results are normalised by solidity, the comparison of the performance of the two systems is misleading (see Fig. 3). It was proposed that the equivalent, conventional single rotor should thus have the same total number of blades as the coaxial system and that the blades of the two systems should be geometrically identical. In Figs. 4 and 5, this definition is shown to yield a fair like-for-like comparison between the two disparate systems. It is seen that the difference in performance of the two types of rotor is actually of the same order as the plausible variability in the profile power of the two systems. The two rotors have the same solidity and thus the lift potential of the two systems is matched. In other words, blade stall cannot obscure the comparison between the two systems. The blades also operate in a comparable aerodynamic environment. The differences in the performance of the two systems are thus induced solely by the fundamental difference in the way that the wake of the two types of rotor interacts with the blades. Definition of the equivalent conventional rotor in this manner thus yields a rational approach to comparing the relative performance of coaxial and single rotor systems.
Fig. 4. Total power consumption (CP) as a function of thrust (CT) in hover – comparison between rotors with identical solidity and blade properties [18]

Fig. 5. Total power consumption (CP) in steady level flight as a function of forward speed (thrust coefficient CT = 0.0048) – comparison between rotors with identical solidity and blade properties [18]
3.3 Comparison of Performance in Steady and Manoeuvring Flight

The performance of a coaxial rotor in hover, in steady forward flight, and in a level, coordinated turn has been contrasted with that of an equivalent, conventional rotor defined, as motivated in the previous section, as a single, conventional rotor with the same overall solidity, number of blades and blade aerodynamic properties [18]. Simulations using the VTM have allowed the differences in the performance of the two systems (without undue complication from fuselage and tail rotor effects) to be investigated in terms of the profile, induced and parasite contributions to the overall power consumed by the rotors (see Figs. 6 to 8), and to be traced to the differences in the structure of the wakes of the two systems.

In hover, the coaxial system consumes less induced power than the equivalent, conventional system. The wake of the coaxial system in hover is dominated, close to the rotors, by the behaviour of the individual tip vortices from the two rotors as they convect along the surface of two roughly concentric, but distinct, wake tubes. The axial convection rate of the tip vortices, particularly those from the upper rotor, is significantly greater than for the tip vortices of the same rotor operating in isolation. The resultant weakening of the blade-wake interaction yields significantly reduced induced power consumption on the outer parts of the upper rotor that translates into the observed benefit in terms of the overall induced power required by the coaxial system.

In steady forward flight, the coaxial rotor again shows a distinct induced power advantage over its equivalent, conventional system at transitional and low advance ratios, but at high advance ratio there is very little difference between the performance of the two systems. At a thrust coefficient, CT, of 0.0048, the maximum forward flight speed of the systems that were simulated was limited to an advance ratio of about 0.28 by stall on the retreating side of the rotors. The rather limited maximum performance of the two systems was most likely related to their low solidity. With the coaxial system, the near-simultaneous stall on the retreating sides of both upper and
Fig. 6. Total power consumption, together with its constituents, in hover as a function of thrust – comparison of the coaxial rotor with the equivalent single rotor [18]

Fig. 7. Total power consumption, together with its constituents, in steady level flight as a function of forward speed (thrust coefficient CT = 0.0048) [18]
Fig. 8. Total power consumption, together with its constituents, in a level, wind-up turn as a function of the load factor of the turn [18]
lower rotors leads to backwards flapping of both discs, although blade strike occurs at the back of the system because the upper rotor stalls more severely than the lower. The structure of the wake generated by the coaxial and conventional systems is superficially similar at all advance ratios, and shows a transition from a tube-like geometry at low advance ratio to a flattened, aeroplane-like form as the advance ratio is increased beyond approximately 0.1 (see Fig. 9). The formation of the wake of the coaxial rotor at post-transitional advance ratio involves an intricate process whereby the vortices from both upper and lower rotors wind around each other to create a single, merged pair of super-vortices downstream of the system (see Fig. 9). The loading on the lower rotor is strongly influenced by interaction with the wake from the upper rotor, and there is also evidence on both rotors of intra-rotor wake interaction, especially at low advance ratio. In comparison, the inflow distribution on the conventional rotor, since the inter-rotor blade-vortex interactions are absent, is very much simpler in structure.

Simulations of a wind-up turn at constant advance ratio again show the coaxial rotor to possess a distinct advantage over the conventional system (see Fig. 8) – a reduction in power of about 8% for load factors between 1.0 and 1.7 is observed at an
Fig. 9. Wake structure of coaxial rotor (left) and equivalent single rotor (right) in forward flight at advance ratio μ = 0.12: (a) overall wake geometry; (b) tip vortex geometry (coaxial rotor: upper rotor vortices shaded darker than lower)
advance ratio of 0.12 and a thrust coefficient of 0.0048. As in forward flight, the improved performance of the coaxial rotor results entirely from a reduction in the induced power required by the system relative to the conventional rotor. This advantage is offset to a certain degree by the enhanced vibration of the coaxial system during the turn compared to the conventional system. As in steady level flight, the turn performance is limited by stall and, in the coaxial system, by subsequent blade strike, at a load factor of about 1.7 for the low-solidity rotors that were used in this study. The inflow distribution on the rotors is subtly different to that in steady, level flight, and a progressive rearwards shift in the positions of the interactions between the blades and their vortices with increasing load factor appears to be induced principally by the effects of the curvature of the trajectory on the geometry of the wake.

The observed differences in induced power required by the coaxial system and the equivalent, conventional rotor originate in subtle differences in the loading distribution on the two systems that are primarily associated with the pattern of blade-vortex interactions on the rotors. The beneficial properties of the coaxial rotor in forward flight and in steady turns appear to be a consequence of the somewhat greater lateral symmetry of its loading compared to the conventional system. This symmetry allows the coaxial configuration to avoid, to a small extent, the drag penalty associated with the high loading on the retreating side of the conventional rotor. It is important, though, to acknowledge the subtlety of the effects that lead to the reduced induced power requirement of the coaxial system, and thus the rather stringent requirements that are imposed on the fidelity of numerical models in order for these effects to be resolved.
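For orientation on how such a power budget decomposes, the sketch below evaluates the textbook estimates of the induced, profile and parasite contributions for a single rotor in level flight (in the spirit of Leishman [22]). The coefficients kappa, cd0 and f/A are assumed sample values, not the VTM inputs, and the momentum-theory induced term is only a high-speed approximation, so the numbers are indicative rather than a reproduction of Fig. 7.

```python
import numpy as np

# Assumed sample coefficients (not from the VTM simulations):
kappa, sigma, cd0, f_over_A = 1.15, 0.054, 0.011, 0.008
CT = 0.0048                      # thrust coefficient used in Figs. 5 and 7

mu = np.linspace(0.05, 0.30, 6)  # advance ratio
cp_induced = kappa * CT**2 / (2.0 * mu)              # high-speed approximation
cp_profile = sigma * cd0 / 8.0 * (1.0 + 4.6 * mu**2)
cp_parasite = 0.5 * f_over_A * mu**3
cp_total = cp_induced + cp_profile + cp_parasite

for m, ci, c0, cpar, ctot in zip(mu, cp_induced, cp_profile,
                                 cp_parasite, cp_total):
    print(f"mu={m:.2f}  induced={ci:.5f}  profile={c0:.5f}  "
          f"parasite={cpar:.5f}  total={ctot:.5f}")
```

The induced term dominates at low advance ratio and decays with speed, while the parasite term grows with the cube of speed — the same qualitative behaviour as the constituents plotted in Fig. 7.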
4 Aerodynamics of a Stiffened Hingeless Coaxial Rotor

The effects of flapwise stiffness on the performance of a coaxial rotor were studied using a modified form of Harrington's rotor, as described in Ref. 23. The effects of hub stiffness on the natural frequency of blade flapping are introduced into the simulations by modelling the blades of the rotors as completely rigid, but then applying a spring across each flapping hinge of their articulated hubs.

Introduction of flapwise stiffness into the system has a marked effect on the power consumption of the coaxial rotor in forward flight. VTM calculations suggest that the equivalent articulated system has a power requirement that is over twenty percent greater than that of a completely rigid rotor system when trimmed to an equivalent flight condition. Most of this enhanced power requirement can be attributed to a large increase in the induced power that is consumed by the system when the blades of the rotors are allowed to flap freely. Most of the advantage of the rigid configuration is retained if the stiffness of the rotors is reduced to practical levels, but the advantage of the system with finite stiffness over the conventional articulated system deteriorates quite significantly as the forward speed of the system is reduced. In high-speed forward flight, significant further power savings can be achieved, at least in principle, if an auxiliary device is used to alleviate the requirement for the main rotor system to produce a propulsive force component, and, indeed, such an arrangement might be necessary to prevent rotor performance being limited by aerodynamic stall.
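The hub model described above — rigid blades with a spring across each flap hinge — maps directly onto the classical expression for the rotating flap frequency. The sketch below evaluates it over a range of spring stiffnesses; the inertia, rotor speed and stiffness values are assumed for illustration only and are not those of Ref. 23.

```python
import numpy as np

def flap_frequency_per_rev(k_beta, i_beta, omega, e=0.0):
    """Non-dimensional flap frequency (per rev) of a rigid blade:
    nu^2 = 1 + 3/2 * e/(1-e) + K_beta/(I_beta * Omega^2),
    with hinge offset e, flap spring K_beta and flap inertia I_beta."""
    return np.sqrt(1.0 + 1.5 * e / (1.0 - e) + k_beta / (i_beta * omega**2))

i_beta, omega = 250.0, 45.0            # flap inertia [kg m^2], rotor speed [rad/s]
for k_beta in [0.0, 5.0e4, 2.0e5, 1.0e6]:   # articulated -> progressively stiffer hub
    print(f"K_beta = {k_beta:8.0f} N m/rad -> "
          f"nu = {flap_frequency_per_rev(k_beta, i_beta, omega):.2f} /rev")
```

The articulated limit (zero spring) gives the familiar flap frequency of one per rev; raising the spring stiffness pushes the frequency well above one per rev, which is the regime described in the remainder of this section.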
Simulations suggest that the unsteady aerodynamic forcing of very stiff rotor systems is relatively insensitive to the actual flapwise stiffness of the system. This implies that simple palliative measures such as structural tailoring may do little to modify the inherent vibrational characteristics of stiff coaxial rotor systems. The principal effect of the aerodynamic forcing of very stiff coaxial systems is to produce an excitation of the system, primarily at the fundamental blade-passing frequency, in both pitch and heave, but aerodynamic interference between the rotors may introduce a small component of roll excitation into the vibration of the rotor.

The numerical results presented in Ref. 23 lend strong support to the existing contention that the introduction of flapwise stiffness can lead to a coaxial rotor system that possesses clear advantages in performance over the corresponding articulated system, and hence, by comparison against the results of previous studies [18], over the equivalent, conventional, planar rotor system. The results also lend some support, though, to the contention that the advantages of the coaxial rotor configuration in terms of overall performance may need to be offset against fundamentally unavoidable penalties in terms of the vibration, and thus, possibly, the noise that is produced by the lower rotor as a result of its aerodynamic interference with the wake that is produced by the upper rotor. It seems risky to jump to such overarching conclusions in the absence of the further insight that could be gleaned from simulations of rotor systems that are more physically representative of full-scale practice than those tested in Ref. 23, however.
5 Thrust-Compounded Coaxial Helicopter

The thrust-compounded coaxial helicopter with stiffened rotors is a particularly plausible contender to meet modern requirements for a new generation of high performance rotorcraft. The VTM has been used to study the aerodynamic interactions between the sub-components of a generic, but representative, helicopter with this configuration. These studies [24, 25] are summarised below.

5.1 Helicopter Model

The generic helicopter configuration studied in Refs. 24 and 25 comprises a stiffened contra-rotating coaxial rotor system, a tail-mounted propulsor, and a streamlined fuselage featuring a horizontal tailplane at its rear (see Fig. 10).

Fig. 10. Generic thrust-compounded hingeless coaxial helicopter configuration [24, 25]

In the interests of brevity, only a brief description of the configuration is presented here; a more detailed geometric description can be found in Kim et al. [24]. The coaxial system consists of two three-bladed rotors. The stiffness of the rotor system is approximated, somewhat crudely but supported by the results contained in Ref. 23, by assuming the rotor blades and their attachments to the hub to be completely rigid. The propulsor is a five-bladed variable pitch propeller mounted in a pusher configuration to provide auxiliary propulsive thrust,
offloading the main rotor in high speed forward flight. The geometry of the fuselage is entirely fictitious, but the compact and streamlined design is chosen to be representative of a realistic modern helicopter with high speed requirements. In line with current design practice to yield sufficient longitudinal stability and control, a rectangular tailplane, just forward of the propulsor, is also incorporated into the design. This configuration was developed specifically to provide a realistic representation of the aerodynamic interactions that might occur in practical semi-rigid, thrust-compounded coaxial helicopter systems.

5.2 Interactional Aerodynamics and Aeroacoustics

The aerodynamic environment of the configuration described above is characterised by very strong aerodynamic interactions between its various components (see Fig. 11). The aerodynamic environment of the main rotors of the system is dominated by the direct impingement of the wake from the upper rotor onto the blades of the lower rotor, and, particularly at low forward speed, the thrust and torque produced by the system are highly unsteady. The fluctuations in the loading on the coaxial system occur primarily at the fundamental blade-passage frequency and are particularly strong as a result of the phase relationship between the loading on the upper and lower rotors. A contribution to the loading on the coaxial rotor at twice the blade-passage frequency results from the loading fluctuations that are induced on the individual rotors of the system by direct blade overpassage. As shown in Fig. 12, the wake of the main rotor sweeps over the fuselage and tailplane at low forward speed, inducing a significant nose-up pitching moment on the tailplane that must be counteracted by longitudinal cyclic input to the main rotor. This pitch-up characteristic has been encountered during the development of several helicopters and has proved on occasion to be very troublesome to eradicate. Over a broad range of forward flight speeds, the wake from the main rotor is ingested directly into the propulsor, where it induces strong fluctuations in the loading produced by this rotor. These fluctuations occur at both the blade-passage frequency of the main rotor and that of the propulsor, and have the potential to excite significant vibration of the
Fig. 11. Visualisation of the wake structure of the thrust-compounded coaxial helicopter using surface contours of constant vorticity at advance ratio μ = 0.15: (a) bottom view; (b) top view [25]
Fig. 12. Trajectories of the tip vortices of the main rotors (upper and lower) and the tail propeller where they intersect the vertical plane through the fuselage centreline, at advance ratios μ = 0.05, 0.10, 0.15 and 0.30 [24]
aircraft. VTM calculations suggest that this interaction, together with poor scheduling with forward speed of the partition of the propulsive thrust between the main rotor and the rear-mounted propulsor, can lead to a distinctly non-optimal situation where the propulsor produces significant vibratory excitation of the system but little useful contribution to its propulsion. The propulsor also induces significant vibratory forcing on the tailplane at high forward speed. This forcing is at the fundamental blade-passage frequency of the propulsor, and suggests that the tailplane position in any vehicle where the lifting surface and propulsor are as closely coupled as in the present configuration may have to be considered carefully to avoid shortening its fatigue life. Nevertheless, the unsteady forcing of the tailplane is dominated by interactions with the wake from the main rotor – the fluctuations at the fundamental blade-passage frequency that were observed in the pressure distribution on the tailplane are characteristic of the close passage of individual vortices over its surface.

Finally, predictions of the acoustic signature of the system presented in Refs. 24 and 25 suggest that the overall noise produced by the system, at least in slow forward flight, is significantly higher than that produced by similar conventional helicopters in the same weight class. The major contribution to the noise produced by the system in the highly objectionable BVI frequency range comes from the lower rotor because of strong aerodynamic interaction with the upper rotor. The propulsor contributes significant noise over a broad frequency spectrum. At the flight condition that was considered, much of this noise is induced by interactions between the blades of the propulsor and the wakes of the main rotor and tailplane.

The numerical calculations were thus able to reveal many of the aerodynamic interactions that might be expected to arise in a configuration as aerodynamically complex as the generic thrust-augmented coaxial helicopter that formed the basis of the study. It should be acknowledged, though, that the exact form, and particularly the effect on the loading produced on the system, of these interactions would of
course vary depending on the specifics of the configuration, and that many of the pathological conditions exposed in the study could quite feasibly have been rectified by careful aerodynamic re-design.

5.3 Understanding the Interactions

By comparing the aerodynamics of the full configuration of the helicopter to the aerodynamics of various combinations of its sub-components, the influence of the various aerodynamic interactions within the system on its behaviour could be isolated, as described in Ref. 25. The traditional approach to the analysis of interactional effects on the performance of a helicopter relies on an initial characterisation of the system in terms of a network of possible interactions between the separate components of its configuration (see Fig. 13). Thus, within the configuration that was studied in Ref. 25, it is possible to identify the effect of the main rotor on the fuselage and propulsor, the distortion of the wake of the main rotor that is caused by the presence of the fuselage, and so on. The characteristics of these various interactions and their effects on the performance of the system have been described in detail in Ref. 25.
Fig. 13. Schematic summarising the network of aerodynamic interactions between the various components of the simulated configuration (main rotor system with upper and lower rotors, fuselage, tailplane and propulsor) [25]
Many of the interactions that were exposed within the aerodynamics of the configuration have exhibited a relatively linear relationship between cause and effect and hence would be amenable to the reductionist approach described above. For instance, the distortion of the wake of the main rotor by the fuselage has a marked effect on the loading generated by the propulsor, but the effect on the propulsor is prevented from feeding back into the performance of the main rotor. This is because of the isolation that is provided by the particular method that was used to trim the vehicle, and also by the inherent directionality of the interaction that results from its physics being dominated by the convection of the wakes of the two systems into the flow behind the vehicle. Several of the interactions that were observed for this helicopter configuration exhibited a less direct relationship between cause and effect, however. These interactions are characterised by strong feedback or closed-loop type behaviour, in certain cases through a path which remains relatively obscure and hidden within the network
of interactions that form the basis of the traditional reductionist type approach. For instance, the load that is induced on the tailplane by the direct impingement of the wake of the main rotor requires, through the requirement for overall trim of the forces and moments on the aircraft, a compensatory change in the loading distribution on the main rotor itself, which then modifies the strength of its wake and hence, in circular fashion, the loading on the tailplane itself. Without this understanding of the strong mutual coupling between the performance of the tailplane and the main rotor, the observed dependence of the acoustic radiation of the aircraft on the presence or not of the tailplane (or, in practical terms, more likely on its design and positioning) may appear to the analyst as a very obscure and possibly even unfathomable interdependence within the system.

Thus, although the reductionist, network-based approach to classifying the interactions present within the system is conceptually appealing and simple, it must be realised that the possible presence of feedback loops deep within the interactional aerodynamics, such as the one described above, may cause the approach to miss, obscure or hide the presence of interactions between some of the various sub-components of the system. The analysis presented in Ref. 25 warns against an overly literal application of this reductive, building-block type approach to the categorisation of the interactions that are present within the system.
6 Conclusion

The results of a programme of computational study of coaxial rotor aerodynamics conducted using the Vorticity Transport Model at the Rotorcraft Aeromechanics Laboratory of the Glasgow University Rotorcraft Laboratories have been summarised. Analysis of the computational results obtained using the VTM suggests that the differences in performance between a helicopter with a coaxial rotor system and the equivalently defined system with a conventional, single rotor are subtle, and generally result from small differences in the character and strength of the localised interaction between the blades of the rotors and the wakes that they produce. Reliable prediction of these effects is well beyond the scope of simple models and is absolutely dependent on accurate prediction of the detailed structure of the rotor wake. The study lends weight, though, to the assertion that the state of the art of computational helicopter aerodynamic prediction is advancing to a stage where the use of powerful models such as the VTM may allow useful insight into the likely aeromechanical behaviour of realistic helicopter configurations. Furthermore, it is suggested that there may be no real substitute for detailed simulations of the entire configuration if the effects on the performance of the vehicle of the most deeply hidden interactions within the system are to be exposed.

It has been shown that modern numerical techniques are indeed capable of representing the very wide range of aerodynamic interactions that are present within the helicopter system, even one as complex as a compounded coaxial system. This bodes well for the assertion that modern computational techniques may be in a position to help circumvent future repetition of the long history of unforeseen, interaction-induced dynamic problems that have manifested on prototype or production aircraft.
References

[1] Cooper D E (1978) YUH-60A Stability and Control. Journal of the American Helicopter Society 23(3):2-9
[2] Prouty R W, Amer K B (1982) The YAH-64 Empennage and Tail Rotor – A Technical History. American Helicopter Society 38th Annual Forum Proceedings, Anaheim, CA, pp. 247-261
[3] Main B J, Mussi F (1990) EH101 – Development Status Report. Proceedings of the 16th European Rotorcraft Forum, Glasgow, UK, pp. III.2.1.1-12
[4] Cassier A, Weneckers R, Pouradier J (1994) Aerodynamic Development of the Tiger Helicopter. Proceedings of the American Helicopter Society 50th Annual Forum, Washington DC
[5] Eglin P (1997) Aerodynamic Design of the NH90 Helicopter Stabilizer. Proceedings of the 23rd European Rotorcraft Forum, Dresden, Germany, pp. 68.1-10
[6] Frederickson K C, Lamb J R (1993) Experimental Investigation of Main Rotor Wake Induced Empennage Vibratory Airloads for the RAH-66 Comanche Helicopter. Proceedings of the American Helicopter Society 49th Annual Forum, St. Louis, MO, pp. 1029-1039
[7] Bagai A (2008) Aerodynamic Design of the Sikorsky X2 Technology Demonstrator™ Main Rotor Blade. American Helicopter Society 64th Annual Forum Proceedings, Montréal, Canada
[8] Burgess R K (2004) The ABC™ Rotor – A Historical Perspective. American Helicopter Society 60th Annual Forum Proceedings, Baltimore, MD
[9] Brown R E (2000) Rotor Wake Modeling for Flight Dynamic Simulation of Helicopters. AIAA Journal 38(1):57-63
[10] Brown R E, Line A J (2005) Efficient High-Resolution Wake Modeling Using the Vorticity Transport Equation. AIAA Journal 43(7):1434-1443
[11] Whitehouse G R, Brown R E (2003) Modeling the Mutual Distortions of Interacting Helicopter and Aircraft Wakes. AIAA Journal of Aircraft 40(3):440-449
[12] Whitehouse G R, Brown R E (2004) Modelling a Helicopter Rotor's Response to Wake Encounters. Aeronautical Journal 108(1079):15-26
[13] Ahlin G A, Brown R E (2005) Investigating the Physics of Rotor Vortex-Ring State using the Vorticity Transport Model. Paper 89, 31st European Rotorcraft Forum, Florence, Italy
[14] Ahlin G A, Brown R E (2007) The Vortex Dynamics of the Rotor Vortex Ring Phenomenon. American Helicopter Society 63rd Annual Forum Proceedings, Virginia Beach, VA
[15] Brown R E, Whitehouse G R (2004) Modeling Rotor Wakes in Ground Effect. Journal of the American Helicopter Society 49(3):238-249
[16] Phillips C, Brown R E (2008) Eulerian Simulation of the Fluid Dynamics of Helicopter Brownout. American Helicopter Society 64th Annual Forum Proceedings, Montréal
[17] Kelly M E, Duraisamy K, Brown R E (2008) Blade Vortex Interaction and Airload Prediction using the Vorticity Transport Model. American Helicopter Society Specialists' Conference on Aeromechanics, San Francisco, CA
[18] Kim H W, Brown R E (2006) Coaxial Rotor Performance and Wake Dynamics in Steady and Manoeuvring Flight. American Helicopter Society 62nd Annual Forum Proceedings, Phoenix, AZ
[19] Harrington R D (1951) Full-Scale-Tunnel Investigation of the Static-Thrust Performance of a Coaxial Helicopter Rotor. NACA TN-2318
[20] Dingeldein R C (1954) Wind-Tunnel Studies of the Performance of Multirotor Configurations. NACA TN-3236
[21] Coleman C P (1997) A Survey of Theoretical and Experimental Coaxial Rotor Aerodynamic Research. NASA TP-3675
[22] Leishman J G (2006) Principles of Helicopter Aerodynamics, Second Edition. Cambridge University Press, Cambridge, UK
[23] Kim H W, Brown R E (2007) Impact of Trim Strategy and Rotor Stiffness on Coaxial Rotor Performance. 1st AHS/KSASS International Forum on Rotorcraft Multidisciplinary Technology, Seoul, Korea
[24] Kim H W, Kenyon A R, Duraisamy K, Brown R E (2008) Interactional Aerodynamics and Acoustics of a Propeller-Augmented Compound Coaxial Helicopter. American Helicopter Society Aeromechanics Specialists' Meeting, San Francisco, CA
[25] Kim H W, Kenyon A R, Duraisamy K, Brown R E (2008) Interactional Aerodynamics and Acoustics of a Hingeless Coaxial Helicopter with an Auxiliary Propeller in Forward Flight. International Powered Lift Conference, London, UK
State-of-the-Art CFD Simulation for Ship Design

Bettar el Moctar

Germanischer Lloyd, Hamburg, Germany
[email protected]

Abstract. Nowadays, the work of a classification society associated with design assessment increasingly relies on the support of computer-based numerical simulations. Although design assessments based on advanced finite-element analyses have long been part of the services of a classification society, the growth in scope and depth of recently developed and applied simulation methods has been so rapid that we present here a survey of these techniques as well as samples of typical applications. The article focuses on the basics of the techniques and points out progress achieved as well as current improvements. The listed references describe the simulation techniques and the individual applications in more detail.
1 Introduction

We observe an increased scope and greater importance of simulations in the design process of ships. Shipyards frequently outsource the associated extensive analyses to specialists. The trend of modern classification society work is also towards simulation-based decisions, both to assess the ship's design and to evaluate its operational aspects. Stability analyses were among the first applications of computers in naval architecture. Today, the naval architect can perform stability analyses in the intact and the damaged conditions. Two other "classical" applications of computer simulations for ships are CFD (computational fluid dynamics) and FEA (finite-element analysis). Both have been used for several decades to support ship design, but today's applications are far more sophisticated than they were 20 years ago. This article reviews the different simulation fields found in the work of Germanischer Lloyd, showing how advanced engineering simulations have evolved from research activities to frontline applications.
2 Stern and Bow Flare Slamming, Extreme Motions and Loads

Linear approaches, such as standard strip theory methods and panel methods, el Moctar et al. (2006), are appropriate for many ship seakeeping problems, and they are frequently applied. These procedures are fast, and thus they allow the effect of many parameters (frequency, wave direction, ship speed, metacentric height, etc.) on the ship response to be investigated. Nonlinear computations, such as simulation procedures based on Reynolds-averaged Navier-Stokes equation (RANSE) solvers, are necessary for the treatment of extreme conditions. However, such simulations are computationally intensive and, consequently, only relatively short time periods can be analyzed. GL developed a numerical procedure based on the combined use of a boundary element method (BEM), a statistical analysis technique using random process theory, and an
extended RANSE solver to obtain accurate responses of ships in a seaway. The approach starts with a linear analysis to identify the most critical parameter combination for a ship's response and ends with a RANSE simulation that captures the complex free-surface deformation and predicts the local pressures more accurately. Within the scope of commercial projects, Germanischer Lloyd performed RANSE simulations for a variety of ships to investigate effects of bow flare and stern slamming, water on deck, and wave-impact related slamming loads (Figs. 1-7).
Fig. 1. Computed Earthrace Trimaran motions in waves
Fig. 2. Wave induced loads in extreme wave conditions
Fig. 3. Computed motions of Catamaran in heavy sea
Fig. 4. Computed pressure distribution (wetdeck slamming)
Fig. 5. Computed influence of wetdeck slamming on bending moment of a Catamaran
Fig. 6. Computed and measured slamming forces acting on a MY bow
Fig. 7. Computed slamming forces acting on a MY stern
Fig. 8. Computed whipping effects on bending moment
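The "statistical analysis technique using random process theory" mentioned in Sect. 2 can be illustrated with a minimal linear spectral calculation: a response spectrum is formed from a transfer function (RAO) and a wave spectrum, and a short-term extreme is estimated from its variance. The Pierson-Moskowitz spectrum and the single-peak toy RAO below are assumptions for illustration, not GL's BEM output.

```python
import numpy as np

def pm_spectrum(omega, hs, tp):
    """Pierson-Moskowitz wave spectrum for significant wave height hs [m]
    and peak period tp [s]."""
    wp = 2.0 * np.pi / tp
    return 5.0 / 16.0 * hs**2 * wp**4 / omega**5 * np.exp(-1.25 * (wp / omega) ** 4)

omega = np.linspace(0.2, 2.0, 1000)           # wave frequency [rad/s]
d_omega = omega[1] - omega[0]
s_wave = pm_spectrum(omega, hs=10.5, tp=12.0) # a severe design sea state (assumed)

# Toy single-peak transfer function of some normalised response (assumed):
w0, damping = 0.55, 0.10
rao = 1.0 / np.sqrt((1.0 - (omega / w0) ** 2) ** 2 + (damping * omega / w0) ** 2)

s_resp = rao**2 * s_wave                      # linear response spectrum
m0 = float(np.sum(s_resp) * d_omega)          # zeroth moment = response variance

n_cycles = 3.0 * 3600.0 / 10.0                # ~3 h storm, ~10 s mean period
mpm = np.sqrt(2.0 * m0 * np.log(n_cycles))    # most probable extreme value
print(f"response std = {np.sqrt(m0):.2f}, most probable 3-h maximum = {mpm:.2f}")
```

The parameter combination (sea state, heading, speed) that maximises such an extreme is the one passed on to the nonlinear RANSE simulation.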
3 Whipping Effects

Impact-related loads lead to increased vibration-related accelerations and, consequently, to higher internal hull girder loads. An accurate assessment of these hydroelastic effects requires an implicit coupling between a RANSE solver and a structural strength code. To this end, Germanischer Lloyd developed a numerical procedure whereby the RANSE solver is coupled either to a Timoshenko beam model or, via an appropriate interface, to an FE code (Fig. 8).
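As a conceptual illustration of whipping — not GL's coupled RANSE/FE procedure — the sketch below drives a single hull-girder vibration mode with a short slamming pulse via a Duhamel convolution. The modal mass, frequency, damping and pulse are all assumed round values.

```python
import numpy as np

m, f_n, zeta = 2.0e6, 1.0, 0.015     # modal mass [kg], frequency [Hz], damping ratio
wn = 2.0 * np.pi * f_n
wd = wn * np.sqrt(1.0 - zeta**2)     # damped natural frequency

dt = 0.002
t = np.arange(0.0, 20.0, dt)
# Half-sine slamming pulse of 0.1 s duration and 50 MN peak (assumed):
force = np.where(t < 0.1, 5.0e7 * np.sin(np.pi * t / 0.1), 0.0)

# Duhamel integral as a discrete convolution with the unit impulse response:
h = np.exp(-zeta * wn * t) * np.sin(wd * t) / (m * wd)
x = np.convolve(force, h)[: t.size] * dt
print(f"peak modal deflection ~ {np.max(np.abs(x)):.3f} m")
```

The lightly damped ringing of this mode is what superimposes the high-frequency whipping component on the wave-induced bending moment shown in Fig. 8.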
Fig. 9. Computed free surface in a cylindrical tank
Fig. 10. Measured free surface in a cylindrical tank
4 Sloshing

Sloshing is a strongly nonlinear phenomenon, often featuring spray formation and plunging breakers. Surface-capturing methods can reproduce these features, Sames et al. (2002); el Moctar (2006) validated the RANSE solver Comet for sloshing problems (see
Fig. 11. Sloshing simulation for an LNG tank (a) and computed time history of the pressure acting on the LNG tank (b)
Fig. 12. Computed free surface in a prismatic tank
Figs. 9-12). The computed fluid motions also agreed well with videos of the experiments. Extensive experience gathered over the last ten years allows us today to numerically predict with confidence sloshing loads in tanks of arbitrary geometry. A computational procedure to predict sloshing loads was published by el Moctar (2006).
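A first check in any sloshing assessment is whether ship motions can excite a tank's natural modes. The sketch below evaluates the standard linear-theory frequencies of a rectangular tank; the tank length and fill depth are assumed example values, not those of the tanks studied above.

```python
import numpy as np

g, L, h = 9.81, 38.0, 10.0        # gravity [m/s^2], tank length [m], fill depth [m]

# Linear sloshing modes of a rectangular tank: omega_n^2 = g*k_n*tanh(k_n*h)
for n in range(1, 4):
    k = n * np.pi / L             # wavenumber of mode n
    w = np.sqrt(g * k * np.tanh(k * h))
    print(f"mode {n}: period T = {2.0 * np.pi / w:.1f} s")
```

Natural periods of this order lie well inside the range of wave-induced ship motion periods, which is why partially filled tanks are prone to resonant sloshing.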
5 Dynamic Stability of Ships

To assess the safety of modern ships, it is vital for Germanischer Lloyd to have numerical tools available to investigate the dynamic stability of intact and damaged ships in a seaway. Large amplitude motions may lead to high accelerations. In severe seas, ships may also be subject to phenomena like pure loss of stability, broaching-to, and parametric rolling. Linear seakeeping methods are unsuited to predict such phenomena, mainly because they do not account for stability changes caused by passing waves. Furthermore, linear methods are restricted to small amplitude ship motions, and hydrodynamic pressures are only integrated up to the undeformed water surface.
The two simulation tools ROLLSS and GL SIMBEL are available at Germanischer Lloyd to simulate large amplitude ship motions. Depending on the extent of the nonlinearities accounted for, simulation methods tend to be cumbersome to handle and unsuitable for routine application. Therefore, the numerically more efficient method ROLLSS is used to quickly identify regions of large amplitude ship motions, while the fully nonlinear method GL SIMBEL is then employed to yield more accurate motion predictions. To validate these tools and to demonstrate their practical application, extensive simulations were carried out to predict parametrically induced roll motions, which were then compared against model test measurements performed at the Hamburg Ship Model Basin (Brunswig et al. 2006), Figs. 13-14.
Fig. 13. Computed cavitation behaviour on rudder (gray areas)
Fig. 14. Computed roll motions in irregular waves
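The essence of parametric rolling can be illustrated with a Mathieu-type roll equation in which the restoring term varies at the wave encounter frequency; roll grows when that frequency is near twice the natural roll frequency. The coefficients below are assumed illustrative values, not GL SIMBEL input, and this linear model lacks the nonlinear saturation of a real ship.

```python
import numpy as np
from scipy.integrate import solve_ivp

w0 = 2.0 * np.pi / 20.0   # natural roll frequency (20 s roll period, assumed)
zeta = 0.03               # roll damping ratio (assumed)
eps = 0.20                # relative GM fluctuation in waves (assumed)
we = 2.0 * w0             # encounter frequency at the principal resonance

def roll(t, y):
    phi, dphi = y
    # restoring moment with a GM that oscillates as waves pass along the hull
    return [dphi, -2.0 * zeta * w0 * dphi
                  - w0**2 * (1.0 + eps * np.cos(we * t)) * phi]

sol = solve_ivp(roll, (0.0, 600.0), [np.radians(1.0), 0.0], max_step=0.5)
peak = np.degrees(np.max(np.abs(sol.y[0])))
print(f"roll grew from 1 deg to about {peak:.0f} deg (linear model, no saturation)")
```

A frequency-domain method with constant restoring cannot reproduce this growth, which is precisely why the time-domain tools described above are needed.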
6 Ship Appendages, Cavitation Problems

Diagrams to estimate rudder forces were customary in classical rudder design. These diagrams either extrapolate model test results from wind tunnel tests, or they are
Fig. 15. Computed and measured sloshing pressures
Fig. 16. Computed pressure distribution on podded drive
based on potential flow computations. However, the maximum lift is determined by viscous flow phenomena, namely flow separation (stall). Potential flow models are not capable of predicting stall, and model tests predict stall at too small angles. CFD is by now the most appropriate tool to support practical rudder design. The same approach used for propeller and rudder interaction can be applied to podded drives (Fig. 15), el Moctar and Junglewitz (2004). RANSE solvers also allow the treatment of cavitating flows (Fig. 16). The extensive experience gathered in the last five years resulted in a Germanischer Lloyd guideline for rudder design procedures, GL (2005).
7 Room Ventilation

HVAC (heating, ventilation, air conditioning) simulations involve the simultaneous solution of fluid mechanics equations and thermodynamic balances, often involving concentrations of different gases. The increasing use of refrigerated containers on ships motivated the application of advanced CFD simulations, Brehm and el Moctar (2004), to some extent replacing simple analytical methods and model tests. Effects such as the difference between pressure and suction ventilation and the influence of natural thermal buoyancy can be reproduced (Fig. 17).
Fig. 17. Computed roll motions
Fig. 18. Computed temperature distribution in a cargo hold
Fig. 19. Computed smoke propagation
8 Aerodynamics of Ship Superstructures and Smoke Propagation

Aerodynamic issues are increasingly of interest for ships and offshore platforms. Potential applications include smoke and exhaust tracing, operational conditions for take-off and landing of helicopters, and wind resistance and drift forces. The traditional approach to study aerodynamic flows around ships employs model tests in wind tunnels. These tests are a proven tool supporting design and are relatively fast and cheap. Forces are quite easy to measure, but insight into local flow details can be difficult to obtain in some spaces. Computational fluid dynamics (CFD) is increasingly used in related fields to investigate aerodynamic flows, e.g. around buildings or cars. CFD offers some advantages over wind tunnel tests: the complete flow field can be stored, allowing evaluation at any time in the future; there is more control over what to view and what to block out; CFD can capture more flow details; and CFD also allows full-scale simulations. Despite these advantages, CFD has so far rarely been employed for aerodynamic analyses of ships. This is due to a combination of obstacles: the complex geometry of superstructures makes grid generation labor-intensive, and the flows are turbulent and often require unsteady simulations due to large-scale vortex generation. Recent progress in available hardware and grid generation techniques now allows a re-evaluation of CFD for aerodynamic flows around ship superstructures. Hybrid grids with tetrahedral and prism elements near the ship allow partially automatic grid generation for complex domain boundaries. The resulting higher cell count is acceptable for aerodynamic flows because the Reynolds numbers are lower than for hydrodynamic ship flows and thus fewer elements are needed. Germanischer Lloyd performed RANSE simulations for ship superstructures to investigate aerodynamic problems and smoke propagation, el Moctar and Bertram (2002).
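The Reynolds-number argument above is easy to quantify. With assumed example speeds and reference lengths, the superstructure flow in air sits roughly two orders of magnitude below the hull flow in water:

```python
# Kinematic viscosities [m^2/s] and assumed example speed/lengths:
nu_air, nu_water = 1.5e-5, 1.2e-6
U, L_hull, L_superstructure = 10.0, 200.0, 30.0

re_water = U * L_hull / nu_water            # hydrodynamic hull flow
re_air = U * L_superstructure / nu_air      # aerodynamic superstructure flow
print(f"Re(water) = {re_water:.2e},  Re(air) = {re_air:.2e}")
```

Thicker boundary layers at the lower Reynolds number are what make the coarser, partially automatic hybrid grids acceptable.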
9 Fire Simulation

The SOLAS regulations allow the consideration of alternative designs and alternative arrangements concerning fire safety. The requirement is to prove (by engineering analysis) that the safety level of the alternative design is equal to that based on prescriptive rules. The main benefit of these regulations is expected for cruise vessels and ferries, as the alternative design approach allows large passenger and car deck spaces beyond what is possible with the prescriptive rules. In principle, 'engineering analyses' could also mean fire experiments, but these are too costly and time consuming to support ship design. This leaves computer simulations as a suitable option. At present, zone models and CFD tools are considered for fire simulations in ships. Zone models are suitable for examining more complex, time-dependent scenarios involving multiple compartments and levels, but numerical stability can be a problem for multilevel scenarios, for scenarios with Heating, Ventilation and Air Conditioning (HVAC) systems, and for post-flashover conditions. CFD models can yield detailed information on temperatures, heat fluxes, and species concentrations; however, the time penalty of this approach currently makes CFD unfeasible for long periods of real-time simulation or for large computational domains. After initial validation studies, Bertram et al. (2004) presented more complex applications of fire simulations. While reproducing several typical fire characteristics, fire simulations are not yet mature,
and more progress can be expected in the next decade. For example, results are not grid-independent with the currently employed typical grid resolutions, but finer grids appear out of reach for present computer power and algorithms. Despite such shortcomings, fire simulations appear already suitable as a general support both for fire containment strategies and for design alternatives.
10 Conclusion

Technological progress is rapid, both for hardware and software. Simulations for numerous applications now often aid the decision-making process, sometimes 'just' for qualitative ranking of solutions, sometimes for quantitative 'optimization' of advanced engineering solutions. Continuous validation feedback not only improves the simulation tools themselves, but also builds confidence in their use. However, advanced simulation software alone is not enough. Engineering is more than ever the art of modeling and of finding the delicate balance between level of detail and resources (time, man-power). This modeling often requires intelligence and considerable (collective) experience. The true value offered by advanced engineering service providers lies thus not in software or hardware, but in the symbiosis of highly skilled staff and these resources.
References

[1] BERTRAM, V.; EL MOCTAR, O.M.; JUNALIK, B.; NUSSER, S. (2004), Fire and ventilation simulations for ship compartments, 4th Int. Conf. High-Performance Marine Vehicles (HIPER), Rome, pp. 5-17
[2] BREHM, A.; EL MOCTAR, O. (2004), Application of a RANSE method to predict temperature distribution and gas concentration in air ventilated cargo holds, 7th Num. Towing Tank Symp. (NuTTS), Hamburg
[3] BRUNSWIG, J.; PEREIRA, R.; SCHELLIN, T. (2006), Validation of numerical tools to predict parametric rolling, HANSA Journal, September 2006
[4] EL MOCTAR, O.; SCHELLIN, T.E.; PRIEBE, T. (2006), CFD and FE methods to predict wave loads and ship structure response, 26th Symp. Naval Hydrodyn., Rome
[5] EL MOCTAR, O. (2006), Assessment of sloshing loads for tankers, Shipping World & Shipbuilder, pp. 28-31
[6] EL MOCTAR, O.; JUNGLEWITZ, A. (2004), Numerical analysis of the steering capability of a podded drive, Ship Technology Research 51/3, pp. 134-145
[7] EL MOCTAR, O.; BERTRAM, V. (2002), Computation of viscous flow around fast ship superstructures, 24th Symp. Naval Hydrodynamics (ONR), Fukuoka, Japan
[8] FACH, K. (2006), Advanced simulation in the work of a classification society, 5th Int. Conf. Computer and IT Applications in the Maritime Industries (COMPIT), Oegstgeest
[9] GL (2005), Recommendations for preventive measures to avoid or minimize rudder cavitation, Germanischer Lloyd, Hamburg
[10] SAMES, P.C.; MACOULY, D.; SCHELLIN, T. (2002), Sloshing in rectangular and cylindrical tanks, Journal of Ship Research, Vol. 46, No. 3
Investigation of the Effect of Surface Roughness on the Pulsating Flow in Combustion Chambers with LES

Balazs Pritz, Franco Magagnato, and Martin Gabi

Institute of Fluid Machinery, University of Karlsruhe, Germany
[email protected]
Abstract. Self-excited oscillations often occur in combustion systems due to combustion instabilities. The resulting high pressure oscillations can lead to higher emissions and structural damage of the chamber. In recent years intensive experimental investigations were performed at the University of Karlsruhe to develop an analytical model for Helmholtz resonator-type combustion systems [1]. In order to better understand the flow effects in the chamber and to localize the dissipation, Large Eddy Simulations (LES) were carried out. Magagnato et al. [2] describe the investigation of a simplified combustion system where the LES were carried out exclusively with a hydraulically smooth wall. The comparison of the results with experimental data shows the important influence of the surface roughness in the resonator neck on the resonant characteristics of the system. In order to capture this effect with CFD as well, modeling of the surface roughness is needed. In this paper the Discrete Element Method has been implemented into our research code and extended for LES. The simulation of the combustion chamber with roughness agrees well with the experimental results.
1 Introduction For the successful implementation of advanced combustion concepts it is very important to avoid periodic combustion instabilities in combustion chambers of turbines and in industrial combustors [3, 4]. In order to eliminate the undesirable oscillations it is necessary to fully understand the mechanics of feedback of periodic perturbations in the combustion system. The ultimate aim is to evaluate the oscillation disposition of the combustion system already during the design phase. In order to predict the resonance characteristic of Helmholtz resonator-type combustion systems an analytical model has been developed at the Engler-Bunte-Institute, Division for Combustion Technology at the University of Karlsruhe [1]. For the validation of the model a large series of measurements were carried out with variation of the parameters of the geometry (volume of the chamber, length and diameter of the exhaust gas pipe) and of the operation conditions (fluid temperature, mean mass flow rate, amplitude and frequency of the excitation) [1,5,6]. In order to better understand the flow effects in the combustor and to localize the main dissipation Large Eddy Simulations (LES) were carried out. In the first phase the analytical model was developed to describe a combustion chamber (cc) with the exhaust gas pipe (egp) as the resonator neck (single Helmholtz resonator). At the first comparison of the results from the numerical investigation with a set of experimental
data a considerable discrepancy was detected. One possible explanation was that the numerical simulations had been carried out with a perfectly smooth wall, which is the standard case for most simulations. The LES results were later compared with another set of data, where the agreement was quite good. The only difference between these two data sets consisted in the wall roughness in the exhaust gas pipe. The conclusion is that the surface roughness in the resonator neck plays a more important role in the case of pulsating flows than it generally does in the case of stationary flows. To be able to predict the damping correctly for the rough case, a roughness model was needed. The choice fell upon the Discrete Element Method, as explained below, and it was implemented in our research code.
2 Modeling of Roughness

It has been well known for many decades that the roughness of a wall has a strong impact on the wall stresses generated by a fluid flow over that surface. It is of great importance in many technical applications to take this property into account in the simulations, for example for the accurate prediction of the flow around gas turbine blades. The modeling of wall roughness has traditionally been done using the log-law of the wall [7, 8, 9]. There the wall-law is simply modified to account for the roughness effect. Unfortunately the log-law of the wall is not general enough to be applied to complex flows, especially flows with separations and strong accelerations or decelerations (such as the flow in the resonator neck). Another modeling approach, proposed and used in the past, is based on a modification of the turbulence model close to the wall [10, 11, 12]. Here, for example, the non-dimensional wall distance is modified to account for the wall roughness in the algebraic Baldwin-Lomax model or the Spalart-Allmaras one-equation model. These approaches are, however, restricted to Reynolds-averaged Navier-Stokes (RANS) calculations. A more general modeling is the Discrete Element Method first proposed by Taylor et al. [13], which can be used for all types of simulation tools. Since our main focus is on Large Eddy Simulation, this approach seems the most appropriate. The main idea of the method is to model the wall roughness effect by including an additional force term in the Navier-Stokes equations, assuming a force proportional to the height of virtual wall roughness elements.

2.1 The Discrete Element Method

The Discrete Element Method models the effect of the roughness by virtually replacing the inhomogeneous roughness of a surface by equally distributed simple geometric elements, for example cones. The height and the number of cones per unit area are the parameters of the model. Since the force of a single cone on the fluid can be approximated by known relations, it can be used to simulate the drag of the roughness on the flow. While Taylor et al. have modeled the influence of the discrete element by assuming blockage effects onto the two-dimensional Navier-Stokes equations,
Miyake et al. [14] have generalized this idea by explicitly accounting for the wall drag force $f_i$ as a source term in the momentum and energy equations:

$$\frac{\partial \rho}{\partial t} + \frac{\partial \rho u_i}{\partial x_i} = 0 \qquad (1)$$

$$\frac{\partial \rho u_i}{\partial t} + \frac{\partial \rho u_i u_j}{\partial x_j} = -\frac{\partial p}{\partial x_i} + \frac{\partial}{\partial x_j}\left[\mu\left(\frac{\partial u_i}{\partial x_j} + \frac{\partial u_j}{\partial x_i} - \frac{2}{3}\frac{\partial u_k}{\partial x_k}\,\delta_{ij}\right)\right] + f_i \qquad (2)$$

$$\frac{\partial \rho E}{\partial t} + \frac{\partial \rho u_i E}{\partial x_i} = \frac{\partial (q_i - p u_i)}{\partial x_i} + \mu\left[\frac{\partial u_i}{\partial x_j}\left(\frac{\partial u_i}{\partial x_j} + \frac{\partial u_j}{\partial x_i}\right) - \frac{2}{3}\left(\frac{\partial u_i}{\partial x_i}\right)^2\right] + f_i u_i \qquad (3)$$
Miyake et al. assume the specific drag force $f_i$ to be

$$f_i = c_D \cdot \frac{1}{2}\,\rho u_i^2 \cdot \frac{A_C}{V} \qquad (4)$$
Here $A_C$ is the projected surface of the cone in the flow direction, $u_i$ is the velocity component in the appropriate direction, $c_D$ is the drag coefficient of the cone and $V$ is the volume of the cell. In the experiments $c_D$ varies between 0.2 and 1.5. We have chosen $c_D = 0.5$. The effect of the roughness on the wall shear stress must also be accounted for explicitly by:
$$\tau_{w,i} = \mu\,\frac{\partial u_i}{\partial n} + \frac{f_i \cdot V}{A} \qquad (5)$$
Here $A$ is the projected surface of the cone onto the wall and $n$ is the normal distance from the wall. The implemented method has been validated and calibrated by Bühler [14] at the flat plate test case using the experimental findings of Aupoix and Spalart [11]. The prediction of the roughness influence on a high pressure turbine blade was investigated next and compared with the experiments of Hummel and Lötzerich [15]. It was found that the method works very satisfactorily when used with the Spalart-Allmaras one-equation turbulence model for RANS.
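A minimal sketch of how the source term of Eqs. (4)-(5) might be evaluated for a single near-wall finite-volume cell is given below. The cone shape factor (base radius taken as half the height), the cone density and the cell dimensions are assumed placeholders, not the calibration of [14]; only $c_D = 0.5$ is taken from the text above.

```python
def cone_drag_per_volume(rho, u, a_c, cell_volume, c_d=0.5):
    """Specific drag force f_i of Eq. (4), per unit volume; it enters the
    momentum equation as a sink opposing the local velocity component u."""
    return c_d * 0.5 * rho * u**2 * a_c / cell_volume

def wall_shear_with_roughness(mu, du_dn, f_i, cell_volume, wall_area):
    """Eq. (5): viscous wall shear stress plus the explicit roughness part."""
    return mu * du_dn + f_i * cell_volume / wall_area

h_c = 0.5e-3                           # cone height matching the turned tube [m]
a_one_cone = h_c * (0.5 * h_c)         # projected frontal area of one cone (assumed shape)
n_per_area = 2.0e6                     # cones per m^2 of wall (assumed density)
wall_area, cell_volume = 1.0e-6, 5.0e-10   # example near-wall cell
a_c = a_one_cone * n_per_area * wall_area  # total frontal area inside this cell

f_i = cone_drag_per_volume(rho=1.2, u=10.0, a_c=a_c, cell_volume=cell_volume)
tau_w = wall_shear_with_roughness(mu=1.8e-5, du_dn=4.0e4, f_i=f_i,
                                  cell_volume=cell_volume, wall_area=wall_area)
print(f"f_i = {f_i:.3g} N/m^3, tau_w = {tau_w:.3g} Pa")
```

Even with these rough numbers the cone-drag contribution to the wall shear stress exceeds the viscous part by an order of magnitude, which is consistent with the strong influence of the roughness reported below.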
3 Simulated Configuration

In order to compute the resonance characteristics of the system influenced by the surface roughness in the exhaust gas pipe, the same methodology was chosen as described by Magagnato et al. [2]. The only information from the experiments was that the two different sets of data were obtained from measurements with an exhaust
gas pipe made of a polished steel tube and of a turned steel tube, respectively. The polished steel tube could be treated as an aerodynamically smooth wall. The roughness height for the turned steel tube could only be roughly estimated at k = 0.01-1.0 mm. Therefore several simulations were carried out with different heights of the virtual cones, each with an excitation frequency of f_ex = 40 Hz (approximately the resonance frequency of the system). The result nearest to the experiment with the turned tube was extended to two other excitation frequencies. In the simulations the mean mass flow rate was ṁ = 61.2 kg/h, the diameter and length of the chamber were d_cc = 0.3 m and l_cc = 0.5 m, respectively, and the diameter and length of the exhaust gas pipe were d_egp = 0.08 m and l_egp = 0.2 m, respectively (see Fig. 1). The rate of pulsation was Pu = 25% and the temperature of the fluid was T = 298 K.

3.1 Numerical Method

The simulations were carried out with the in-house developed parallel flow solver SPARC (Structured PArallel Research Code) [16]. The code is based on the 3D block-structured finite volume method and parallelized with the message passing interface (MPI). In the case of the combustor the compressible Navier-Stokes equations are solved. The spatial discretization is a second-order accurate central difference formulation. The temporal integration is carried out with a second-order accurate implicit dual-time stepping scheme. For the inner iterations the 5-stage Runge-Kutta scheme was used. The time step was Δt = 2·10⁻⁵ s. The Smagorinsky-Lilly model was used as the subgrid-scale (SGS) model [17, 18], as this model had also been used for the earlier computations in [2]. A detailed description of the boundary conditions can be found in [2]; here only a brief listing is given. A pulsating mass flow rate was imposed on the inlet plane.
Fig. 1. Sketch of the computational domain and boundary conditions
The outlet was placed in the far field. At the surfaces the no-slip boundary condition and an adiabatic wall are imposed. For the first grid point y⁺ < 1 is obtained; the effect of the wall on the turbulence is modeled with the van Driest type damping function. The Discrete Element Method was used on the surface in the exhaust gas pipe. The geometry of the computational domain and the boundary conditions are shown in Fig. 1. The entire computational domain contains about 4.3·10⁶ grid points in 111 blocks.
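As a consistency check on the choice f_ex = 40 Hz, the classical Helmholtz resonance frequency can be estimated directly from the chamber and pipe dimensions given above; the neck end correction used below is an assumed textbook value (about 0.85 d for one open end), not a quantity reported in the experiments.

```python
import numpy as np

gamma, R_gas, T = 1.4, 287.0, 298.0
c = np.sqrt(gamma * R_gas * T)            # speed of sound, ~346 m/s at 298 K

d_cc, l_cc = 0.3, 0.5                     # combustion chamber (resonator volume)
d_egp, l_egp = 0.08, 0.2                  # exhaust gas pipe (resonator neck)
V = np.pi / 4.0 * d_cc**2 * l_cc          # chamber volume
A = np.pi / 4.0 * d_egp**2                # neck cross-section
l_eff = l_egp + 0.85 * d_egp              # neck length with assumed end correction

f_helmholtz = c / (2.0 * np.pi) * np.sqrt(A / (V * l_eff))
print(f"f_H = {f_helmholtz:.1f} Hz")      # ~40 Hz, matching f_ex
```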
4 Results and Discussion

In order to understand the importance of surface roughness in the case of the combustion chamber one has to investigate the pulsating flow in the exhaust gas pipe.

4.1 Flow Features in the Resonator Neck

The investigations described in [2] showed that the pulsation of the flow produces an additional dissipation of mechanical energy in the resonator neck. The exact solution of the Navier-Stokes equations for the flow near an oscillating flat plate (Stokes' second problem) gives the time-dependent solution and the thickness of the laminar boundary layer [19]. This can serve as a rough estimation for our turbulent flow. Furthermore, for oscillating flow in channels or in pipes the exact solution of the Navier-Stokes equations can be derived [19]. This shows that the maximum value of the speed does not
Fig. 2. Velocity distributions at the half length of the exhaust gas pipe. a) Normalized U-profile of the non-pulsating and of the pulsating flow. b) U-profiles of the pulsating flow at different fractions of the period of pulsation.
coincide with the axis of the pipe, but occurs near the wall. This so-called annular effect was also confirmed by measurements [20, 21]. Since there is a nonzero mean flow rate in the case of the combustion chamber, this effect is slightly asymmetric. In Fig. 2a velocity distributions normal to the wall are plotted at the half length of the exhaust gas pipe. The streamwise components are normalized by the velocity at the pipe axis (r = 0), which is approximately U_{r=0,non-puls} = 3.4 m/s for the non-pulsating flow and U_{r=0,puls} = 12.5 m/s for the pulsating flow. The velocity profile for the pulsating flow is taken for this plot at the time of maximum outflow and from the pulsation near the resonance frequency. The most obvious difference is that the profile of the pulsating flow has a local maximum near the wall, which is also predicted by the analytical solution. The consequence is a much higher velocity gradient and shear stress at the wall. In Fig. 2b the velocity profiles of the pulsating flow are plotted during a period. The plots demonstrate the continuous presence of the local maximum of the velocity near the wall. At the last quarter of the period (3/4 T), for example, the value of the local maximum exceeds the maximum value of the non-pulsating flow, which is, however, at the axis of the pipe. The high gradient of the velocity explains the sensitivity of the flow to the surface roughness. If the roughness magnitude increases slightly, a region of much higher velocities is reached and so the drag force increases significantly.

Fig. 3. Frequency response curve of the combustion chamber

4.2 Comparison of the Calculation with Experimental Data

The exact value of the roughness in the turned steel tube was not available; therefore several calculations were carried out with different virtual cone heights. The results for h_c = 0.5 mm are closest to the experimental data of the exhaust gas pipe with rough wall. In Fig. 3 the frequency responses of the smooth and rough configurations are indicated. The figure shows the variation of the amplitude ratio of the mass flow rates in terms of the excitation frequency:
$$A = \frac{\hat{\dot m}_{out}}{\hat{\dot m}_{in}} \qquad (6)$$
In [14] the calibration of the virtual cone height against the sand grain roughness is described. The next step is to calibrate the cones against the roughness parameters most commonly used in engineering practice.
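For reference, the laminar analytical solution for oscillating pipe flow [19, 20] that predicts the annular effect discussed in Sect. 4.1 can be evaluated directly. A moderate Womersley number is assumed below so that the near-wall overshoot is easy to see; since the LES flow is turbulent, this is only a qualitative analogue of the profiles in Fig. 2.

```python
import numpy as np
from scipy.special import jv

alpha = 10.0                              # Womersley number R*sqrt(omega/nu), assumed
r = np.linspace(0.0, 1.0, 400)            # radius normalised by the pipe radius R

# Velocity shape of purely oscillating laminar pipe flow (Sexl solution),
# normalised by the inviscid core amplitude:
arg = 1j**1.5 * alpha
profile = 1.0 - jv(0, arg * r) / jv(0, arg)

amp = np.abs(profile)                     # velocity amplitude over one period
print(f"amplitude on axis : {amp[0]:.3f}")
print(f"peak amplitude    : {amp.max():.3f} at r/R = {r[np.argmax(amp)]:.2f}")
```

The amplitude maximum occurs off the axis, close to the wall, reproducing the annular effect and the associated steep near-wall velocity gradient that makes the pulsating flow so sensitive to roughness.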
5 Conclusion

Self-sustained instabilities in combustion systems are the focus of investigations at the University of Karlsruhe. In the combustion system the amplitude of the pulsation is limited by the damping of the system. An earlier work [2] shed light on the major role that the oscillating boundary layer in the resonator neck plays in the damping. The results of the present investigation show the impact of the roughness in the resonator neck as well. In order to predict the effect of roughness on the damping with numerical simulation, the Discrete Element Method was chosen and implemented in our research code. The investigation of the velocity profiles in the resonator neck shows that the pulsation produces a local maximum of the velocity near the wall, which agrees well with the analytical solution for oscillating flows in pipes. The local maximum at the wall involves a higher velocity gradient which is present during almost the entire period of the pulsation. This high velocity gradient makes the pulsating flow extraordinarily sensitive to the surface roughness. The simulations of the combustion chamber predict the increasing damping of the system with increasing cone height quite well. For the combustion system this means that artificially increasing the roughness in the exhaust gas pipe could moderately stabilize the system.

Acknowledgments. The present work is a part of the subproject A7 of the Collaborative Research Centre (CRC) 606 – "Unsteady Combustion: Transport Phenomena, Chemical Reactions, Technical Systems" at the University of Karlsruhe. The project is supported by the German Research Foundation.
References

[1] Arnold G, Büchner H (2003) Modeling of the Transfer Function of a Helmholtz-Resonator-Type Combustion Chamber. In: Proceedings of the European Combustion Meeting 2003 (ECM2003), Federation of the European Sections of the Combustion Institute
[2] Magagnato F, Pritz B, Büchner H, Gabi M (2005) Prediction of the Resonance Characteristics of Combustion Chambers on the Basis of Large-Eddy Simulation. International Journal of Thermal and Fluid Sciences 14:156-161
[3] Külsheimer C, Büchner H, Leuckel W, Bockhorn H, Hoffmann S (1999) Untersuchung der Entstehungsmechanismen für das Auftreten periodischer Druck-/Flammenschwingungen in hochturbulenten Verbrennungssystemen. VDI-Berichte 1492:463
[4] Büchner H, Bockhorn H, Hoffmann S (2000) Aerodynamic Suppression of Combustion-Driven Pressure Oscillations in Technical Premixed Combustors. In: Proceedings of the Symposium on Energy Engineering in the 21st Century (SEE 2000) 4:1573-1580
[5] Lohrmann M, Büchner H (2004) Scaling of Stability Limits of Lean-Premixed Gas Turbine Combustors. Proceedings of ASME Turbo Expo, Vienna, Austria
[6] Lohrmann M, Arnold G, Büchner H (2001) Modeling of the Resonance Characteristics of a Helmholtz-Resonator-Type Combustion Chamber with Energy Dissipation. Proceedings of the International Gas Research Conference (IGRC), Amsterdam, Netherlands
[7] Zierep J (1997) Grundzüge der Strömungslehre. Springer, Karlsruhe, 6th edition
[8] Abouali M, Geurts BJ, Gieske A (2007) Atmospheric boundary layers over rough terrain. ERCOFTAC 72:7-11
[9] Apsley D (2007) CFD calculation of turbulent flow with arbitrary wall roughness. Flow Turbulence and Combustion 78:153-175
[10] Cebeci T, Chang KC (1978) Calculation of incompressible rough-wall boundary-layer flows. AIAA Journal 16:730-735
[11] Aupoix B, Spalart PR (2003) Extensions of the Spalart-Allmaras turbulence model to account for wall roughness. International Journal of Heat and Fluid Flow 24:454-462
[12] Maruyama T (1993) Optimization of roughness parameters for staggered arrayed cubic blocks using experimental data. Journal of Wind Engineering and Industrial Aerodynamics, 165-171
[13] Taylor RP, Coleman HW, Hodge BK (1985) Prediction of turbulent rough-wall skin friction using a discrete element approach. Journal of Fluids Engineering 107:251-257
[14] Bühler S (2007) Numerische Untersuchung von Wandrauhigkeitsmodellen. Master thesis, Department of Fluid Machinery, University of Karlsruhe
[15] Hummel F, Lötzerich M (2005) Surface roughness effects on turbine blade aerodynamics. Journal of Turbomachinery 127:453-461
[16] Magagnato F (1998) KAPPA – Karlsruhe Parallel Program for Aerodynamics. TASK Quarterly 2:215-270
[17] Smagorinsky J (1963) General Circulation Experiments with the Primitive Equations. Monthly Weather Review 91:99-164
[18] Lilly DK (1967) The Representation of Small-Scale Turbulence in Numerical Simulation Experiments. Proc. IBM Scientific Computing Symposium on Environmental Sciences, Yorktown Heights, N.Y., IBM form no. 320-1951, White Plains, New York, pp. 195-210
[19] Schlichting H, Gersten K (1997) Grenzschicht-Theorie. Springer, 9th ed.
[20] Sexl T (1930) Über den von E. G. Richardson entdeckten „Annulareffekt“. Zeitschrift für Physik 61:349
[21] Richardson EG, Tyler E (1929) The Transverse Velocity Gradient Near the Mouth of Pipes in which an Alternating or Continuous Flow of Air is Established. The Proceedings of the Physical Society 42:1
Numerical Study on Blood Flow Characteristics of the Stenosed Blood Vessel with Periodic Acceleration and Rotating Effect

Kyoung Chul Ro1, Seong Hyuk Lee2, Seong Wook Cho2, and Hong Sun Ryou2

1 Department of Mechanical Engineering, Chung-Ang University, Seoul, Korea
2 School of Mechanical Engineering, Chung-Ang University, Seoul, Korea
[email protected]
Abstract. The present study investigates the effects of periodic acceleration and rotation in a stenosed blood vessel. The blood flow and the wall shear stress change under body movement or acceleration variation. Numerical studies are performed for various phase angles of the periodic acceleration and for various axial rotation speeds. It is found that the blood flow and the wall shear stress change with the phase angle of the acceleration when it has the same frequency as the pressure pulsation, and that the wall shear stress and the blood flow rate increase rapidly with increasing rotation speed.
1 Introduction

The human body is subjected to body accelerations or gravity changes in various situations, such as riding in vehicles or airplanes and fast movements in sports activities. In particular, the blood pressure fluctuations of a person who has an arterial disease or ischemia increase more under instantaneous body movement than those of a healthy person. The blood pressure of orthostatic hypotension patients changes by about 20 mmHg between the lying and standing positions, so that they feel temporary vertigo, eyesight and hearing trouble, and in serious cases faint. High blood pressure patients may suffer from headache, fatigue, an increased heart pulsation and myocardial infarction caused by an arterial stenosis. The blood flow and the wall shear stress therefore change under body movement and acceleration variation. In moderate exercise, the blood flows of the abdominal aorta and the main arteries increase to about twice those at rest [1]. In research on human acceleration, R.R. Burton [1] studied eyesight trouble caused by the change of blood flow in extreme gravity surroundings, and E.E. Hookers [2] reported a clinical study of the side effects of body acceleration. In research on blood flow and acceleration, Misra and Sahu [3] developed a mathematical model to study the blood flow through large arteries under the action of periodic body acceleration. Belardinelli [4] performed an experimental study of the effect of shock acceleration on blood pressure, and P.K. Mandal [5] performed a numerical study of the blood flow characteristics of a cylindrical blood vessel with periodic accelerations. Nakamura [6] and Luo [7] investigated the blood flow characteristics of stenosed and bifurcated blood vessels.
Research considering body movement and human acceleration, however, has remained at an early stage, with simple geometries and conditions, because measurement and simulation are difficult. Hence, the purpose of this paper is a numerical analysis of the effect of periodic acceleration and rotational velocity in a stenosed blood vessel.
2 Theoretical Model and Boundary Condition

2.1 Governing Equations

In order to simulate the blood flow characteristics, the mass and momentum conservation equations are required, and the non-Newtonian behaviour of the fluid and the pulsatile flow have to be considered. In order to apply the periodic acceleration, an unsteady gravity term is used as a source term of the momentum equation:

\frac{\partial \rho}{\partial t} + \nabla \cdot (\rho \vec{v}) = 0   (1)
\frac{\partial}{\partial t}(\rho \vec{v}) + \nabla \cdot (\rho \vec{v}\vec{v}) = -\nabla p + \nabla \cdot \bar{\bar{\tau}} + G(t)   (2)
To simulate the non-Newtonian fluid problem, a constitutive equation is required for the blood rheology, and it is described by the second invariant of the shear rate tensor:

\tau = \eta \dot{\gamma}   (3)
where \eta and \dot{\gamma} are the apparent viscosity and the shear rate. The shear rate \dot{\gamma} is expressed in terms of the shear rate tensor as

\dot{\gamma} = \sqrt{\frac{1}{2} \sum_i \sum_j \dot{\gamma}_{ij} \dot{\gamma}_{ji}}   (4)
The Cross, power-law, and Carreau [8] models are representative constitutive equations for non-Newtonian fluid viscosity. In this study we use the Carreau viscosity model because it is more suitable for representing the rheological characteristics of blood:

\eta = \eta_\infty + (\eta_0 - \eta_\infty)\left[1 + (\lambda \dot{\gamma})^2\right]^{(n-1)/2}   (5)
where \eta_0 is the zero shear viscosity (0.056 Pa·s), \eta_\infty is the infinite shear viscosity (0.00345 Pa·s), \lambda is the time constant (3.313 s) and n is the power-law index (0.356); these are the Carreau model constants for blood. A short sketch of this viscosity law is given below.
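The authors implemented this model within their CFD code; the following Python sketch of Eq. (5), using the blood constants quoted above, is only an illustration of the viscosity law itself:

```python
import numpy as np

# Carreau model constants for blood as given in the text (Eq. (5))
ETA0, ETA_INF = 0.056, 0.00345   # zero/infinite shear viscosity [Pa·s]
LAM, N_IDX = 3.313, 0.356        # time constant [s], power-law index [-]

def carreau_viscosity(gamma_dot):
    """Apparent viscosity eta(gamma_dot) from the Carreau model, Eq. (5)."""
    return ETA_INF + (ETA0 - ETA_INF) * (
        1.0 + (LAM * gamma_dot) ** 2) ** ((N_IDX - 1.0) / 2.0)

# viscosity drops from eta_0 towards eta_inf with increasing shear rate
for g in (0.1, 1.0, 10.0, 100.0, 1000.0):      # shear rates [1/s]
    print(f"gamma_dot = {g:7.1f} 1/s  ->  eta = {carreau_viscosity(g):.5f} Pa·s")
```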
The periodic acceleration is represented by a trigonometric function composed of the period of the flow pulsation, the amplitude of the acceleration, and the phase angle; the waveforms for the different phase angles are shown in Fig. 1:

G(t) = a_0 \cos(w_b t + \phi)   (6)
where a_0 is the amplitude of the acceleration (0.1 m/s²), w_b is the periodic variable of the pressure pulsation (2\pi f_p), and \phi is the phase angle (0, \pi/2, \pi). To simulate the change of blood flow with the phase angle of the acceleration, we assume that the periodic acceleration and the pressure have the same frequency (f_p = 1.2 Hz) [5].

Fig. 1. Inlet unsteady blood pressure (systolic/diastolic) and acceleration profiles

In the simulation of rotating effects, all boundary conditions are the same except for the acceleration source term, but we use the MRF (Multiple Reference Frame) method to apply the rotating effect of the blood vessel [9].
2.2 Boundary Condition
In problems with a body-force calculation or a momentum source, pressure boundary conditions should be used at the inlet and outlet boundaries, because a Neumann condition does not satisfy the continuity equation; we therefore use an unsteady pressure condition at the inlet boundary:

P(t) = A_0 + A_1 \cos(w_b t)   (7)

where A_0 is the steady-state pressure (80 mmHg), A_1 is the amplitude of the blood pressure (±2 mmHg), and w_b has the same frequency as the periodic acceleration. The outlet boundary condition is a steady-state pressure (80 mmHg).

Fig. 2. The schematic view and grid generation of the stenosed blood vessel (periodic acceleration along the z-axis, rotation about the z-axis)

The numerical simulations are performed with FLUENT V6.3, using User Defined Functions (UDF) for the viscosity model and for the pressure and acceleration profiles. For the unsteady simulations, the total flow time is 1.66 s, which is twice the period of the blood pressure, and each time step size is 0.002 s.
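The paper implements these profiles as FLUENT UDFs (written in C); as a language-neutral illustration, the following Python sketch evaluates Eqs. (6) and (7) with the stated constants over the simulated flow time. The mmHg-to-Pa conversion is added here for convenience and is not part of the original text:

```python
import numpy as np

MMHG = 133.322                     # Pa per mmHg (unit conversion, assumed here)
FP = 1.2                           # pulsation frequency [Hz]
WB = 2.0 * np.pi * FP              # angular frequency of the pulsation [rad/s]
A0_P, A1_P = 80 * MMHG, 2 * MMHG   # mean and amplitude of inlet pressure, Eq. (7)
A0_G = 0.1                         # acceleration amplitude [m/s^2], Eq. (6)

def inlet_pressure(t):
    """Unsteady inlet pressure P(t), Eq. (7)."""
    return A0_P + A1_P * np.cos(WB * t)

def body_acceleration(t, phi):
    """Periodic body acceleration G(t) for phase angle phi, Eq. (6)."""
    return A0_G * np.cos(WB * t + phi)

# sample the profiles over the simulated flow time (1.66 s, dt = 0.002 s)
t = np.arange(0.0, 1.66, 0.002)
print(f"inlet pressure range: {inlet_pressure(t).min():.0f}"
      f"..{inlet_pressure(t).max():.0f} Pa")
for phi in (0.0, np.pi / 2, np.pi):
    g = body_acceleration(t, phi)
    print(f"phi = {phi:4.2f} rad: G in [{g.min():+.2f}, {g.max():+.2f}] m/s^2")
```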
2.3 Modeling of a Blood Vessel
Fig. 2 shows the schematic view and the grid generation of the stenosed blood vessel. The stenosis region is modeled by Young's model [10], Eq. (8) below. The diameter of the blood vessel is 15 mm, and the minimum diameter of the stenosis region is half the vessel diameter. The area ratio of the stenosis is 50% with no eccentricity.
R(z) = r_0 - \frac{t}{2}\left[1 + \cos\left(\pi (z - z_1)/z_0\right)\right]   (8)
where t is the maximum width of the stenosis region (0.5 r_0), z_0 is the half length of the entire blood vessel, and z_1 is the center position of the stenosis region. The grid of the stenosed blood vessel is generated with hexahedral cells, and the number of cells is about 70,000.
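A sketch of Young's stenosis profile, Eq. (8), is given below. The total vessel length is not quoted in the paper, so the value used here is an assumption for illustration only:

```python
import numpy as np

R0 = 7.5e-3    # vessel radius [m] (15 mm diameter, as stated in the text)
T = 0.5 * R0   # maximum stenosis width t = 0.5*r0 -> half-diameter throat
L = 0.12       # total vessel length [m] -- assumed, not given in the text
Z0 = L / 2.0   # half length of the entire vessel
Z1 = L / 2.0   # center position of the stenosis region

def radius(z):
    """Stenosed vessel radius R(z) from Young's model, Eq. (8)."""
    return R0 - 0.5 * T * (1.0 + np.cos(np.pi * (z - Z1) / Z0))

# R equals R0 at the ends and 0.5*R0 at the throat (z = Z1)
for zi in np.linspace(0.0, L, 7):
    print(f"z = {zi:6.3f} m  ->  R = {radius(zi)*1e3:5.2f} mm")
```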
3 Results and Discussion

In order to investigate the effect of human acceleration in the stenosed blood vessel, we analyze the change of the velocity profile and of the wall shear stress. Fig. 3 illustrates the axial velocity profile at the center of the stenosis region at the maximum systolic pressure for various phase angles of the periodic acceleration. With the reduction of the cross-sectional area in the stenosis region, the axial velocity increases at least three-fold in comparison with a blood vessel without stenosis. When the pressure and the acceleration have equal or opposite phase angles (0, \pi), the axial velocities are increased or decreased by the superposition of the pressure and acceleration forces, but no phase change or phase delay of the blood flow occurs. For these reasons, the axial velocities of the blood change by about 20% in comparison with the case without acceleration. Fig. 4 shows the variation of the axial velocity at the inlet boundary in time. The axial velocity at the inlet boundary is decreased by about 12% by the occurrence of a stenosis under the same pressure gradient. The reason is the increase of the static pressure in the blood vessel caused by the reduction of the cross-sectional area in the stenosis region. When the periodic acceleration is applied, the trend of the velocity profile is the same as in Fig. 3. Hence, the phase angle of the periodic acceleration is closely related to the blood flow. The axial and tangential velocities at the center of the stenosis region at the maximum systolic pressure are shown in Fig. 5 for several angular velocities. The unit rev/fp means revolutions per heart pulsation period; in this study the period of the unsteady pressure is 0.83 s, so 1 rev/fp is equal to 1.205 rev/s. As shown, a higher angular velocity increases the tangential velocity, but the axial velocity decreases with the increase of the centrifugal force. Also, the position of the maximum tangential velocity is shifted towards the wall, and the wall shear stress increases. The trends of the axial and tangential velocity profiles at the center of the axis do not match at the same instants because of the difference of the flow friction and of the phase angle between the centrifugal and pressure forces.
Fig. 3. Axial velocity profile at the center position of blood vessel in maximum systole
Fig. 4. Average axial velocity profiles at the inlet boundary as time variation
Fig. 6 shows contours of the wall shear stress under variation of the acceleration at the maximum systolic pressure. The wall shear stress increases with the velocity magnitude, because it is proportional to the velocity gradient along the wall. In the case of periodic acceleration, the average wall shear stress is similar for all cases, but the maximum wall shear stress changes by about ±24% at the stenosis region. On the contrary, in the case of the rotating effect, the maximum average wall shear stress increases dramatically, by about 430% at 12 rev/fp, which is a higher rate of increase than that of the velocity. Consequently, the centrifugal force dominates the pressure force at high rotation speed.
Fig. 5. Axial and tangential velocity profiles at the center of the stenosis region in the maximum systole with rotating effect ((a): axial velocity, (b): tangential velocity)
Fig. 6. Wall shear stress (WSS) with acceleration variation in the maximum systole ((a): periodic acceleration, (b): rotating effect; panels at 0, 2, 8 and 12 rev/fp)
4 Conclusion

The effects of the periodic acceleration and rotation on a stenosed blood vessel have been studied numerically. It is found that the human body acceleration has only a small
effect on the net blood flow in the simulation, but it does influence the instantaneous flow variables such as the axial velocity, the tangential velocity, and the wall shear stress distribution. The fluctuations of the flow variables increase with the imposed body acceleration. The results for the periodic acceleration show that not only the magnitude of the body acceleration but also the phase angle between the blood pressure and the body acceleration are important factors affecting the blood flow. In the case of the rotating effect, the maximum average wall shear stress increases dramatically with the rotation speed. Consequently, the centrifugal force dominates the pressure force at high rotation speeds. From these results, the human body acceleration is strongly related to the blood flow. Hence we will investigate complex geometries and various body accelerations in future work.

Acknowledgments. The authors wish to acknowledge the assistance and support of all those who contribute to the Brain Korea 21 (BK21) programs.
References

[1] CA Taylor and T Jr Hughes (1997) Effect of exercise on hemodynamic conditions in the abdominal aorta. Journal of Vascular Surgery 29(6):1077-1089
[2] EE Hookers et al. (1972) A momentum integral solution for pulsatile flow in a rigid tube with and without longitudinal vibration. International Journal of Engineering Science 10:989-1007
[3] JC Misra and BK Sahu (1988) Flow through blood vessels under the action of a periodic acceleration field: A mathematical analysis. Journal of Computational and Applied Mathematics 16:993-1016
[4] Belardinelli et al. (1989) A preliminary theoretical study of arterial pressure perturbations under shock accelerations. ASME J Biomech Eng 111:233-240
[5] PK Mandal et al. (2007) Effect of body acceleration on unsteady pulsatile flow of non-Newtonian fluid through a stenosed artery. Applied Mathematics and Computation 189:766-779
[6] M Nakamura and T Sawada (1981) Numerical Study on the Flow of a non-Newtonian Fluid through an Axisymmetric Stenosis. Journal of Biomechanical Engineering 110:137-143
[7] XY Luo and ZB Kuang (1981) Non-Newtonian Flow Patterns Associated with an Arterial Stenosis. Journal of Biomechanical Engineering 114:512-514
[8] YI Cho, LH Back, and DW Crawford (1985) Experimental investigation of branch flow ratio, angle and Reynolds number effects on the pressure and flow fields in arterial branch models. Journal of Biomechanical Engineering 103:102-15
[9] JY Luo, RI Issa, and AD Gosman (1994) Prediction of Impeller-Induced Flows in Mixing Vessels Using Multiple Frames of Reference. In IChemE Symposium Series 136:549-556
[10] DF Young (1968) Effect of a time-dependent stenosis on flow through a tube. ASME J. Eng 90:148-154
Analysis and Test on the Flow Characteristics and Noise of Axial Flow Fans

Young-Woo Son1, Ji-Hun Choi2, Jangho Lee2, Seong-Ryong Park3, Minsung Kim3, and Jae Won Kim4

1 Graduate School of Mechanical and Automotive Engineering, Kunsan National University, Korea
2 School of Mechanical and Automotive Engineering, Kunsan National University, Korea
3 New and Renewable Energy Research Department, Korea Institute of Energy Research, Korea
4 Department of Mechanical Engineering, Sun Moon University, Chungnam, Korea
[email protected]
Abstract. Two types of axial flow fans are analyzed with a commercial CFD code. The flow noise and the flow characteristics in terms of flow rate and pressure difference are compared with test data. In the analysis, two different setups are used, one for the flow rate and one for the noise. The two fans show very different flow distributions and flow noises. The differences are reviewed in detail in the paper using the computed distributions of flow, pressure, and noise. These results indicate ways to improve the flow rate and the noise of axial flow fans.
1 Introduction

The outdoor unit of multi-room air conditioners is getting bigger to satisfy the customers' need to cool more rooms. The cooling fan therefore also becomes larger to deliver a higher flow rate, and reducing the flow noise generated by the fan is one of the key issues in the household air conditioner industry in Korea. Tip leakage, flow separation, and secondary flow are known to be related to the fan capacity and the flow noise [1]. Recently, flow separation, secondary flow, downstream patterns, and eddy motions around the fan have become accessible to numerical fluid mechanics based on the Navier-Stokes equations. In this study, the commercial numerical code SC/Tetra is used for the
Fig. 1. Two different fans to be analyzed
analysis and understanding of the flows around the fan, and the numerical data are compared with test data. Fig. 1 shows the two different fans to be analyzed.
2 Analysis Conditions

2.1 Numerical Condition for Noise Analysis

2.1.1 Geometric and Boundary Conditions

The geometry for the analysis of the fan noise is designed as shown in Fig. 2 according to the Korean standard (KS B 6361) [2], and the boundary conditions are listed in Table 1. In the table, boundary conditions are assigned to each region of the geometry. The regions in_rotation_a, out_rotation_a, side_rotation_a, in_rotation_b, out_rotation_b, and side_rotation_b are merged into a discontinuous mesh. The rotation region in the table is the moving element, rotating with an angular velocity of 204 rpm. The specifications of the two fans A and B are summarized in Table 2.
Fig. 2. Geometry for the numerical noise analysis

Table 1. List of boundary conditions

         No   Region             Boundary Condition
System   1    Wall               Static pressure = 0
         2-1  in_rotation_a      -
         3-1  out_rotation_a     -
         4-1  side_rotation_a    -
         2-2  in_rotation_b      -
         3-2  out_rotation_b     -
         4-2  side_rotation_b    -
Fan      5    Fan_surf           Mesh velocity
         6    Sound_wall         -
         7    ROTATION (Volume)  -
Table 2. Geometrical and operating conditions for the noise analysis

        Fan diameter (m)  Rotational diameter (m)  Rotational length (m)  Size of square duct (m)
Fan A   0.7               0.703                    0.5                    x-axis: 5, y-axis: 5, z-axis: 9
Fan B   0.745             0.748                    0.5                    x-axis: 5, y-axis: 5, z-axis: 9
In the calculation, the rotational angle of the fan is advanced by 2° clockwise per step up to 5 full rotations (1800°), which corresponds to 900 unsteady calculation steps (the implied physical time step is checked in the sketch below). In each calculation, the acoustic pressure on the fan surface during the last rotation is recorded as a function of time. From this pressure, the flow noise is obtained with FlowNoise, a software package that uses the Ffowcs Williams and Hawkings equation to predict moving noise sources based on a flow similarity method [3]. The k-ε turbulence model is used for the flow calculations.
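The physical time step implied by these settings can be checked with a few lines of arithmetic; the sketch below reproduces the 900 unsteady steps stated above (the time-step value itself is derived here and is not quoted in the paper):

```python
RPM = 204.0      # fan angular speed given in Sect. 2.1.1
D_THETA = 2.0    # rotation per unsteady step [deg]
N_REV = 5        # total rotations computed (1800 deg)

rev_per_s = RPM / 60.0                    # 3.4 rev/s
period = 1.0 / rev_per_s                  # time for one revolution [s]
dt = (D_THETA / 360.0) * period           # physical time per 2-deg step [s]
n_steps = int(N_REV * 360.0 / D_THETA)    # total unsteady steps

print(f"revolution period : {period*1e3:.2f} ms")
print(f"time step         : {dt*1e3:.3f} ms per {D_THETA:.0f} deg")
print(f"unsteady steps    : {n_steps}")   # 900, as stated in the text
```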
2.1.2 Mesh Generation

Fig. 3 shows the meshes for the fan and for the region outside the fan. The mesh contains roughly 1.3 million cells, and many cells are concentrated near the fan. Detailed mesh information is summarized in Table 3.

Fig. 3. Mesh for the noise analysis

Table 3. Mesh information for the noise analysis

        Mesh size (m)                Approximate numbers of meshes (millions)
        System        Fan            system   fan    total
Fan A   0.24~0.0075   0.06~0.0075    0.34     1.47   1.81
Fan B   0.24~0.0075   0.06~0.0075    0.36     1.3    1.66
2.2 Numerical Analysis Condition of Performance

2.2.1 Geometric and Boundary Condition

For the analysis of the fan performance, the system geometry shown in Fig. 4 is designed according to the Korean standard (KS B 6311) [4], and the specifications are listed in Table 4.
Fig. 4. Geometry for the numerical performance analysis

Table 4. Geometric information and operating conditions of the performance analysis

        Fan diameter (m)  Rotational diameter (m)  Rotational length (m)  Duct diameter (m)  Duct length (m)
Fan A   0.7               0.701                    0.5                    0.995              11.2
Fan B   0.745             0.746                    0.5                    1                  11.2
Table 5. Boundary conditions of the performance analysis

         No   Region             Boundary Condition
System   1    inlet              Static pressure = 0
         2    outlet             Static pressure = 0.1~4.75
         3    wall               Stationary wall
         4-1  in_rotation_a      -
         5-1  out_rotation_a     -
         6-1  side_rotation_a    -
         4-2  in_rotation_b      -
         5-2  out_rotation_b     -
         6-2  side_rotation_b    -
Fan      7    Fan_surf           Mesh velocity
         8    ROTATION (Volume)  -
The boundary conditions shown in Fig. 4 are summarized in Table 5. In the analysis, the outlet static pressure is varied over seven conditions: 0.1, 0.75, 1.4, 2, 2.6, 3.6, and 4.75. The calculation is considered converged when the error falls below 10^-4.

2.2.2 Mesh Information

The mesh for the fan performance analysis is shown in Fig. 5 for the fan and for the region outside the fan, and the cell counts are listed in Table 6. The mesh size is about 1~1.4 million cells.
Fig. 5. Mesh for the fan performance analysis

Table 6. Mesh for the fan performance analysis

        Mesh size (m)                Approximate numbers of meshes (millions)
        System        Fan            system   fan    total
Fan A   0.24~0.0075   0.06~0.0075    0.23     1.17   1.40
Fan B   0.24~0.0075   0.06~0.0075    0.41     0.51   0.92
3 Numerical Analysis Result

3.1 Fan Surface Pressure Distribution

Fig. 6 shows the pressure distribution on the fan surface. The pressure on the surface of Fan B is more uniform than on Fan A, which can be attributed to the much smoother curvature of the blade arc of Fan B. In Fan A the pressure is concentrated at the tip of the fan blade, which can be a cause of fan vibration and of flow separation at the tip.
Fig. 6. Pressure distribution on the fan surface
Fig. 7. Turbulent energy distribution around the fan
Fig. 8. Turbulent energy distribution downstream
Fig. 9. Noise analysis results of fan A and fan B
3.2 Turbulent Energy and Noise Analysis

The turbulent energy distribution around the fan and downstream is shown in Fig. 7 and Fig. 8. The turbulent energy of Fan A is greater than that of Fan B, which can be the reason
for the larger noise. The downstream flow of Fan A is concentrated around the fan radius. Fig. 9 shows the flow noise of Fan A and Fan B. The noise of Fan A is much greater than that of B, as can be predicted from the turbulent intensity distribution. Analysis and test data are compared in Fig. 10 and in Table 7. The analysis and the test data are similar except for one microphone (ch_1).
Fig. 10. Experiment and analysis result of noise (fan B)
Table 7. Location of microphones and comparison of noise

Microphone position   Experiment result (dB)   Analysis result (dB)
ch_1                  23.4                     33.3
ch_2                  30.8                     31.6
ch_3                  31.5                     32.4
ch_4                  31.8                     31.8
3.3 Velocity Distribution and Performance

Fig. 11 and Fig. 12 show the velocity vectors and the velocity distribution around the fan. The velocity behind the hub is smaller for Fan A than for Fan B. The hub of Fan B has half the height of that of Fan A, which allows flow behind the hub driven by the centrifugal force. Fan B has larger velocity vectors and a wider velocity distribution than A, which produces a greater flow rate, as shown in Fig. 13. The figure shows the performance curves of Fan A and B; Fan B has an about 30% greater flow rate than A. The numerical flow rate prediction is verified with test data of Fan B. Fig. 14 shows the performance curve of Fan B obtained by numerical analysis and by test. At the higher and lower static pressures, the flow rate from the numerical analysis is similar to the test data, but at medium static pressure the numerically predicted flow rate is about 50% lower than the test data. More work is needed to reduce this discrepancy.
Fig. 11. Comparison of velocity vector at center
Fig. 12. Comparison of velocity distribution at center
Fig. 13. Performance curve of Fan A and B
Fig. 14. Performance curve of Fan B by numerical analysis and test
4 Conclusion

In this study, the flow rate and the noise of two different axial fans are analyzed with a commercial numerical code and compared with test data, and a detailed numerical
analysis has been performed to understand the flow and noise characteristics. The following results are obtained:

1. The flow rate of Fan B is greater than that of A, but its flow noise is smaller. The flow of Fan A is concentrated in a small downstream passage region around the fan radius, which produces larger turbulence intensity and flow noise. Fan B has a more uniform velocity distribution over a wide downstream area, because the lower hub height helps to generate flow smoothly behind the hub by the centrifugal force.

2. The numerical approach is acceptable for understanding the flow characteristics, but additional validation work is needed, because the flow characteristics at medium static pressure showed a large discrepancy with the test data.

Acknowledgments. This research was financially supported in part by the Ministry of Commerce, Industry and Energy (MOCIE) and the Korea Industrial Technology Foundation (KOTEF) through the Human Resource Training Project for Regional Innovation.
References

[1] Seo SJ, Choi SM, Kim KY (2006) Design of an Axial Flow Fan with Shape Optimization. The Korean Society of Mechanical Engineers, vol. B no. 30, pp. 603-611
[2] KS B 6361 (2002) Methods of A-weighted sound pressure level measurement for fans, blowers and compressors. Korean industrial standard
[3] Ffowcs Williams JE, Hawkings DL (1969) Sound Generation by Turbulence and Surfaces in Arbitrary Motion. Philosophical Transactions of the Royal Society of London, vol. 264, no. A1151, pp. 321-342
[4] KS B 6311 (2001) Testing methods for industrial fans. Korean industrial standard
Implicit Algorithm for the Method of Fluid Particle Dynamics in Fluid-Solid Interaction

Yong Kweon Suh1, Jong Hyun Jeong2, and Sangmo Kang1

1 Professor, Department of Mechanical Engineering, Dong-A University, 840 Hadan-dong, Saha-gu, Busan 604-714, Korea
2 Ph.D. Student, Department of Mechanical Engineering, Dong-A University, 840 Hadan-dong, Saha-gu, Busan 604-714, Korea
[email protected]
Abstract. In this paper we present an implicit algorithm for the implementation of the method of FPD (Fluid Particle Dynamics) for use in the simulation of particle motions in a viscous fluid. Compared with the original explicit algorithm, the implicit scheme presented in this paper allows a much larger time step. Applying our method to specific model flows, we demonstrate, based on the simulation results, the effect of the various parameters associated with FPD, such as the viscosity ratio and the interfacial width, on the stability as well as on the accuracy of the numerical method. It was found that the method can be used when simplicity of the code, rather than accuracy, is the most important priority in simulating the motion of single or multiple particles submerged in a viscous fluid.
1 Introduction

Multi-phase flows are very important in engineering applications. For instance, in microfluidics bubbles or drops are important objects to be treated for flow control, such as pumping and/or fluid mixing. Predicting the motion of bio-particles surrounded by a viscous fluid under electric or other kinds of body forces is a key issue among researchers in this area. On the other hand, numerical simulation of multi-phase flows is considered difficult because (a) the interface is in general curved, so that imposing boundary conditions there demands sophisticated algorithms, and (b) the interface must be tracked in time. Most of the current simulation methods for multi-phase problems are based on fixed-grid systems. The VOF (volume of fluid) and level-set methods are among the most popular. However, these methods still require complex algorithms to handle the above two items. The LBM (lattice Boltzmann method) with a phase field function is thought to be one of the most convenient and useful methods; this method, however, also demands considerable arithmetic work for the implementation of the phase-field-function method. In this regard, it is worth paying attention to the method of 'fluid particle dynamics' (FPD hereafter) developed by the group of Tanaka and Araki [1, 2]. This method has been proposed as a very simple way to solve the above problems. In this method the solid is considered to be a fluid with a much higher viscosity than the real fluid, so that it is hardly deformable. A separately defined
phase function is also used in this method, based on which the viscosity as well as the density of each phase is determined. The interfacial forces such as the pressure and the fluid's shear forces are automatically accounted for, and thus we do not need to calculate these forces separately. However, the algorithms presented in their papers were developed with the explicit method, so the time step must be taken small, in particular at low Reynolds numbers. Furthermore, they have not addressed the accuracy of their FPD. In this paper, we present implicit algorithms for the implementation of FPD. The algorithms are applied to the case of a sphere falling under gravity in order to confirm the stability as well as the accuracy of the method. We then apply the method to the case of a falling, horizontal cylinder under gravity in order to study the wall effect.
2 Governing Equations and Numerical Methods

2.1 Governing Equations

For simplicity of presentation, we confine ourselves to two-dimensional problems. Consider a two-dimensional space (x*, y*) filled with an incompressible fluid with density \rho_w^* and viscosity \mu_w^*. A solid particle of circular shape with radius a* and density \rho_s^* is immersed in the fluid; see Fig. 1. As explained in the previous section, in the method of FPD the solid is treated as a kind of fluid, so we denote by \mu_s^* the viscosity of the solid medium. The viscosity ratio, defined as \mu_s = \mu_s^*/\mu_w^*, is assumed to be very high. We also define the density ratio \rho_s = \rho_s^*/\rho_w^*. Arbitrary body forces can be considered, but in this paper we take only the gravitational force into account, as shown in Fig. 1. Later we will show the application of our code to an axisymmetric problem, but in order to avoid complexity of presentation we confine ourselves here to the Cartesian coordinate system; modifying the formulas or algorithms presented in this paper for axisymmetric or 3-D problems is straightforward.

Fig. 1. Geometry of the flow model (a container of width L and height H containing a solid particle of radius a* in the liquid) defined for illustration of the FPD method. Here, g* denotes the gravitational acceleration.

In order to make the governing equations dimensionless, we take a* as the reference length and U = a*^2 \Delta\rho^* g^* / \mu_w^* as the reference velocity (here, \Delta\rho^* = \rho_s^* - \rho_w^* is the
density difference and g * the magnitude of the gravitational acceleration), a * / U as the reference time, and ρ w* U 2 as the reference pressure. Further, we use ρ w* and μw* as the reference density and reference viscosity, respectively; in FPD these two quantities are treated as a function of space and time like the other variables. These variables are only dependent on the distance from the center of the particle,
\sqrt{(x^* - x_p^*)^2 + (y^* - y_p^*)^2}, where (x_p^*(t^*), y_p^*(t^*)) are the coordinates of the particle's center. The reference velocity employed in this formulation comes from the formula for the final speed of a falling sphere in the limit of zero Reynolds number; i.e. 2U/9 corresponds to the final speed. Then, the dimensionless governing equations take the following form:

\frac{\partial u}{\partial x} + \frac{\partial v}{\partial y} = 0,   (1a)

\frac{\partial \rho u}{\partial t} + \frac{\partial \rho u^2}{\partial x} + \frac{\partial \rho uv}{\partial y} = -\frac{\partial p}{\partial x} + \frac{1}{Re}\left[\frac{\partial}{\partial x}\left(2\mu \frac{\partial u}{\partial x}\right) + \frac{\partial}{\partial y}\left\{\mu\left(\frac{\partial u}{\partial y} + \frac{\partial v}{\partial x}\right)\right\}\right],   (1b)

\frac{\partial \rho v}{\partial t} + \frac{\partial \rho uv}{\partial x} + \frac{\partial \rho v^2}{\partial y} = -\frac{\partial p}{\partial y} + \frac{1}{Re}\left[\frac{\partial}{\partial x}\left\{\mu\left(\frac{\partial u}{\partial y} + \frac{\partial v}{\partial x}\right)\right\} + \frac{\partial}{\partial y}\left(2\mu \frac{\partial v}{\partial y}\right) - \phi\right],   (1c)
where the Reynolds number is defined as Re = ρ w* Ua* / μ w* . In Eq. (1c), φ denotes the phase function; it takes φ = 0 for the liquid, φ = 1 for the solid and intermediate values in the thin layer between the liquid and the solid that represents the interface. For the phase function, we use the following formula as suggested in [1].
\phi = 0.5\left[1 + \tanh\left(\frac{1 - \sqrt{(x - x_p)^2 + (y - y_p)^2}}{\xi}\right)\right]   (2)
where ξ controls the interface width. The dimensionless viscosity μ and the dimensionless density ρ are determined solely from the phase function as shown below.
\mu = 1 + (\mu_s - 1)\phi, \quad \rho = 1 + (\rho_s - 1)\phi.   (3)
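As an illustration of Eqs. (2) and (3), the following self-contained Python sketch evaluates the phase function and the resulting property fields across the interface. The viscosity ratio, density ratio and interface width are values used in the test cases of Sect. 3; the particle position is chosen arbitrarily for demonstration:

```python
import numpy as np

MU_S, RHO_S = 200.0, 7.86    # viscosity and density ratios (values of Sect. 3)
XI = 0.005                   # interface width parameter

def phase(x, y, xp, yp):
    """Phase function phi of Eq. (2): 1 in the solid, 0 in the liquid."""
    r = np.sqrt((x - xp) ** 2 + (y - yp) ** 2)
    return 0.5 * (1.0 + np.tanh((1.0 - r) / XI))

def properties(phi):
    """Dimensionless viscosity and density from Eq. (3)."""
    return 1.0 + (MU_S - 1.0) * phi, 1.0 + (RHO_S - 1.0) * phi

# evaluate across the interface of a particle centred (arbitrarily) at (2, 8)
x = 2.0 + np.linspace(-1.1, -0.9, 5)        # radial positions around r = 1
phi = phase(x, 8.0, 2.0, 8.0)
mu, rho = properties(phi)
print(np.round(phi, 3), np.round(mu, 1), np.round(rho, 2))
```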
Imposing boundary conditions is a very simple task. We do not have to pay attention to the interfacial conditions; it is enough to apply the no-slip boundary conditions on the four surrounding walls, i.e.,

u = v = 0 at x = 0, x_m and y = 0, y_m,   (4)
where x_m = L/a* and y_m = H/a*; see Fig. 1 for L and H.

2.2 Numerical Methods of FPD
We employ a uniform, staggered grid system. Fig. 2 shows the arrangement of the definition points for the flow variables. The phase function as well as the viscosity function
needs to be evaluated at the points (p), (c) and (v); we will use these as superscripts to the variables to indicate evaluation of the variables at the corresponding points. The continuity equation is discretized as follows:

\Delta y \,(u_{i,j} - u_{i-1,j}) + \Delta x \,(v_{i,j} - v_{i,j-1}) = 0.   (5)
Here the hat variables denote the ones obtained in the previous time step. Time integration of the momentum equations is performed in three steps by using the projection method [3-6]. In the first step, the pseudo velocity components \tilde{u} and \tilde{v} are obtained implicitly. For instance, discretization of the y-momentum equation for \tilde{v} leads to the following form:

\frac{\tilde{\rho}^v_{i,j}\,\tilde{v}_{i,j} - \hat{\rho}^v_{i,j}\,\hat{v}_{i,j}}{\Delta t} = \frac{1}{2}\left(\tilde{L}_{i,j} + \hat{L}_{i,j}\right) - \bar{P}_{i,j} + \bar{N}_{i,j} + \bar{B}_{i,j}.   (6)
Here, L , P , N and B represent the discretized form of the diffusion, pressure, convection and body-force terms, respectively. For instance, L takes the following form.
L_{i,j} = \frac{1}{Re}\left[ 2\left\{ \mu^p_{i,j+1} \frac{v_{i,j+1} - v_{i,j}}{\Delta y^2} - \mu^p_{i,j} \frac{v_{i,j} - v_{i,j-1}}{\Delta y^2} \right\} + \left\{ \mu^c_{i,j}\left( \frac{u_{i,j+1} - u_{i,j}}{\Delta x \Delta y} + \frac{v_{i+1,j} - v_{i,j}}{\Delta x^2} \right) - \mu^c_{i-1,j}\left( \frac{u_{i-1,j+1} - u_{i-1,j}}{\Delta x \Delta y} + \frac{v_{i,j} - v_{i-1,j}}{\Delta x^2} \right) \right\} \right].   (7)

The over-bar in Eq. (6) indicates evaluation of the corresponding variable at the intermediate time step between the current and previous ones. We use a 2nd-order extrapolation scheme for this:

\bar{N}_{i,j} = 3\hat{N}_{i,j}/2 - \hat{\hat{N}}_{i,j}/2, \quad \bar{P}_{i,j} = 2\hat{P}_{i,j} - \hat{\hat{P}}_{i,j}, \quad \bar{B}_{i,j} = 3\hat{B}_{i,j}/2 - \hat{\hat{B}}_{i,j}/2,   (8)

where the double hat denotes the value from two time steps before. Note that the pressure is formally defined at the intermediate time step, contrary to the other variables. Fortunately, the matrices of the algebraic equation systems for \tilde{u} and \tilde{v} are symmetric, thereby allowing the use of the efficient ICCG solver (incomplete Cholesky conjugate gradient, see e.g. [7]).

Fig. 2. Staggered grid system and the definition points for the various variables (u_{i,j}, v_{i,j}, p_{i,j} and the points (p), (c), (v))

In the third step, we use the following formulas to update the velocity components,

\tilde{\rho}^u_{i,j}\, \frac{u_{i,j} - \tilde{u}_{i,j}}{\Delta t} = -\frac{\varphi_{i+1,j} - \varphi_{i,j}}{\Delta x},   (9a)

\tilde{\rho}^v_{i,j}\, \frac{v_{i,j} - \tilde{v}_{i,j}}{\Delta t} = -\frac{\varphi_{i,j+1} - \varphi_{i,j}}{\Delta y},   (9b)

and the following formula to update the pressure:

p_{i,j} = \hat{p}_{i,j} + \varphi_{i,j}.   (10)
Back in the second step, the pseudo pressure \varphi is obtained by solving the Poisson equation, which can be derived by substituting (9a) and (9b) into (5) and eliminating u and v. Since the solid particle is of circular shape, we can use a very simple method to obtain the particle's position: we only need to integrate the equation of motion for the coordinates of the particle's center,

x_p = \hat{x}_p + \Delta t\, u_p, \quad y_p = \hat{y}_p + \Delta t\, v_p,   (11)
where the velocity components of the particle's center are obtained by taking a weighted average over the whole domain, as shown below:

u_p = \frac{\int u \phi\, dx\, dy}{\int \phi\, dx\, dy} \cong \frac{\sum_{i,j} u^p_{i,j}\, \phi^p_{i,j}}{\sum_{i,j} \phi^p_{i,j}}.   (12)
The numerical procedure can be described as follows.

1. Using the velocity components \hat{u} and \hat{v} obtained in the previous time step, estimate the particle's new position \tilde{x}_p and \tilde{y}_p by using (11).
2. Obtain the estimated value of the phase function \tilde{\phi} and the corresponding viscosity \tilde{\mu} and density \tilde{\rho} by using Eq. (3).
3. Solve the momentum equations (e.g. Eq. (6) for \tilde{v}) with ICCG to obtain the pseudo velocity components \tilde{u} and \tilde{v}.
4. Solve the Poisson equation for \varphi by using ICCG.
5. Update the velocity components u and v by using (9a) and (9b).
6. Apply the boundary conditions (4) to get the velocity components at the grid points outside the domain.
7. Obtain the pressure p from (10).
8. Get u_p and v_p from (12) and then x_p and y_p from (11).
9. Obtain the phase function \phi and the corresponding viscosity and density by using Eq. (3).
10. Replace all the hat variables by the new ones, and repeat steps 1-9 until the required time is reached.
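As an illustration of the particle-update part of this procedure (steps 1 and 8), the following minimal, self-contained Python sketch implements Eqs. (11) and (12) on a toy grid. The field values are invented for demonstration; in the actual solver u, v and the phase function are of course defined on the staggered points of Fig. 2:

```python
import numpy as np

def particle_velocity(u, v, phi):
    """Particle centre velocity from the weighted average of Eq. (12)."""
    w = phi.sum()
    return (u * phi).sum() / w, (v * phi).sum() / w

def advance_particle(xp, yp, up, vp, dt):
    """Explicit update of the particle centre, Eq. (11)."""
    return xp + dt * up, yp + dt * vp

# toy fields on a small grid: fluid at rest, 'solid' region moving downward
phi = np.zeros((8, 8)); phi[3:6, 3:6] = 1.0          # phase function
u = np.zeros_like(phi); v = -0.05 * phi              # velocity fields
up, vp = particle_velocity(u, v, phi)
print(advance_particle(2.0, 8.0, up, vp, dt=0.01))   # centre sinks slightly
```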
3 Numerical Results and Discussion

3.1 A Sphere Falling in an Axisymmetric Space

For the case of a sphere falling down along the central axis of a circular container of dimensionless radius r_m, we know the asymptotic solution, i.e. Faxen's formula, applicable for Re → 0 and r_m → ∞, as shown below [8]:
-v_p = 1 - 2.10444\, r_m^{-1} + 2.08877\, r_m^{-3} - 0.94813\, r_m^{-5} - 1.372\, r_m^{-6} + 3.87\, r_m^{-8} - 4.19\, r_m^{-10} + \ldots   (13)
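The series is easy to evaluate; the short Python sketch below does so. Note one inference added here that is not part of the original text: read together with the scaling of Sect. 2.1 (where 2U/9 is the unbounded settling speed), multiplying the series by 2/9 reproduces the theory value 0.0375 quoted in Fig. 4 for r_m = 2, which serves as a small consistency check:

```python
def faxen_factor(rm):
    """Wall-correction series of Eq. (13) for a sphere settling on the axis
    of a circular container of dimensionless radius rm (Re -> 0)."""
    return (1.0 - 2.10444 / rm + 2.08877 / rm**3 - 0.94813 / rm**5
            - 1.372 / rm**6 + 3.87 / rm**8 - 4.19 / rm**10)

# In the scaling of the paper the unbounded final speed is 2U/9, so the
# theoretical settling speed is -v_p = (2/9) * faxen_factor(rm) -- an
# interpretation inferred from Sect. 2.1, not stated explicitly in Eq. (13).
for rm in (2.0, 4.0, 10.0, 100.0):
    print(f"r_m = {rm:6.1f}  ->  -v_p = {2.0 / 9.0 * faxen_factor(rm):.4f}")
# rm = 2 gives 0.0375, the 'theory' value quoted in Fig. 4
```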
So, after modifying our code for polar coordinates, we applied it to this problem and compared the numerical results with those given by (13), as shown in Fig. 3. The numerical results are in fairly good agreement with the theory; the former under-predicts the theoretical values by as much as 4~10%. For this model flow we tested the effect of the parameters \mu_s and \xi on the numerical accuracy. Fig. 4 presents the results for v_p obtained with five different \mu_s values and seven different \xi values for each \mu_s. It shows that as \mu_s increases, while \xi is fixed, the computation under-predicts the theory.
0.2
-vp
0.15
0.1
0.05
0
0
0.1
0.2
0.3
0.4
0.5
1/rm
Fig. 3. Numerical results (symbols) of the final speed of a falling sphere under gravity in comparison with the Faxen’s asymptotic formula (solid line) Eq. (13) in the text. For this numerical calculation, we set Re = 0.01 , ρ s = 7.86 , μ s = 200 , ξ = 0.005 , I × J = 101× 401 in most cases, and Δt = 0.0002
∼ 0.001 depending on r . m
Fig. 4. Effect of the viscosity ratio \mu_s (20, 50, 100, 200, 500) and the interface width \xi on the final speed of a falling sphere under gravity, obtained numerically at Re = 0.01, \rho_s = 7.86, r_m = 2, y_m = 8, \Delta r = \Delta y = 0.02, and \Delta t = 0.001; the theoretical value is 0.0375.
Fig. 5. Relative velocity vectors near the particle surface obtained numerically after t = 2 with \mu_s fixed at 200 and \xi at (a) 0.005 and (b) 0.02. The other parameters are the same as in Fig. 4.

This under-prediction is justified because the thin layer near the
cylindrical particle, where the fluid viscosity varies significantly, has a higher viscosity when \mu_s is increased, thus resulting in the effect of an increased particle size. A similar trend occurs when \xi is increased with \mu_s fixed, as shown in Fig. 5. At a smaller effective particle size, i.e. at \mu_s = 20, there is a critical value of \xi below which the falling speed is over-predicted. In this case, because the particle's viscosity is low, the medium of the particle deforms significantly, so that the shear stress acting on the particle surface decreases; as a result, the particle falls with an increased speed. As indicated in Fig. 4, decreasing \xi indeed improves the accuracy of the numerical solutions. However, when \xi is very small, significant fluctuations appear in the numerical results, as shown in Fig. 6. We have found from numerical experiments that there is a critical value of \xi below which the solution shows significant fluctuation. For the parameter set of Fig. 6, it was approximately \xi_c = 0.006 for the grid 41 × 161, \xi_c = 0.005 for 61 × 241, \xi_c = 0.004 for 81 × 321 and \xi_c = 0.003 for 101 × 401. Sometimes the numerical results show a smooth evolution in time during the initial transient period but lead to eventual fluctuation, and vice versa. The oscillation observed at low Re in the initial transient period was found to be due to a relatively large time step. For instance, at Re = 0.1, when \Delta t was varied as \Delta t = 0.01, 0.003 and 0.001, larger \Delta t exhibited larger variations in the evolution of the falling speed, as shown in Fig. 7. Based on the numerical results and the subsequent analysis, we can conclude that for the model problems treated in this study, choosing high \mu_s and low \xi such that \mu_s \xi ≈ 1 may be appropriate, in that it produces not only accurate but also non-oscillatory solutions.
Fig. 6. Effect of \xi on the time evolution of v_p with \mu_s fixed at 200 and \xi varied as 0.02, 0.005 and 0.002. The other parameters are the same as in Fig. 4.
Fig. 7. Effect of the time step \Delta t on the initial fluctuation of v_p obtained with \mu_s = 200 and \xi = 0.005. The other parameters are the same as in Fig. 4.
3.2 A Horizontal Circular Cylinder Falling in 2-D Space
We next applied our code to the case of a horizontal circular cylinder falling under gravity, as shown in Fig. 1. First we compared the numerical stability of the explicit and implicit methods. For the parameter set Re = 1, \rho_s = 7.86, x_m = 4, y_m = 16, \Delta x = \Delta y = 0.04, \mu_s = 200 and \xi = 0.005, the explicit method was found to be unstable even at a time step as small as \Delta t = 3 × 10^-6, whereas the implicit method is stable even at a time step as large as \Delta t = 2 × 10^-2. Therefore our implicit algorithm is much more powerful than the original explicit algorithm presented by Tanaka and Araki [1]. We then investigated the motion of the cylinder with its initial position close to the container wall. Figure 8 shows the time sequence of the falling object (Fig. 8(a)) and
the evolution of the particle center's coordinates (Fig. 8(b)). The lateral motion of the cylinder is not as simple as we had anticipated. As seen quantitatively in Fig. 8(b), the cylinder initially approaches the container wall until t = 30, after which it moves away from it, heading to the central region of the container. After approximately t = 100 it moves more and more slowly while approaching the bottom wall, but from about t = 350 its speed increases again promptly. From about t = 390 its speed decreases rapidly due to the retarding effect of the bottom wall. Compared with the horizontal motion, the vertical motion is very simple and monotonous.
Fig. 8. Motion of a horizontal cylinder falling in a rectangular container with its initial position at (x_p, y_p) = (1.5, 14), numerically obtained with the parameters set at Re = 10, \rho_s = 7.86, x_m = 4, y_m = 16, \Delta x = \Delta y = 0.04, \mu_s = 100, \xi = 0.01 and \Delta t = 0.05. In (a), the cylinder's rotating motion is visualized by a diameter line. In (b), the solid and dashed lines represent the time development of the coordinates x_p and y_p, respectively.
4 Conclusions

We have developed an implicit scheme based on the method of fluid-particle dynamics for simulating the motion of a finite-size solid particle together with the flow of the viscous incompressible fluid surrounding it. The implicit method has been found to allow a much larger time step than the original explicit one. Our code has been applied to the problem of a sphere falling under gravity within a finite-size circular container. For a given grid set-up, the numerical result under-predicts the theory with an error of as much as 4~10%. The effects of the viscosity ratio \mu_s and the interface width \xi were found to be significant. For better numerical results, \mu_s/\xi should be as large as possible; however, it turned out that too large values lead to significant fluctuations in the solutions. The reasonable choice of these values should be problem-dependent, but for the model problems treated in this study we may say that \mu_s \xi ≈ 1 is appropriate.
Acknowledgment. This work was supported by the Korea Science and Engineering Foundation (KOSEF) through the National Research Laboratory Program funded by the Ministry of Science and Technology (No. 2005-1091).
References

[1] Tanaka H, Araki T (2000) Simulation method of colloidal suspensions with hydrodynamic interactions: Fluid particle dynamics. Phys. Rev. Lett. 85:1338-1341
[2] Kodama H, Takeshita K, Araki T et al (2004) Fluid particle dynamics simulation of charged colloidal suspensions. J. Phys.: Condens. Matter 16:L115-123
[3] Armfield S, Street R (2004) Modified fractional-step methods for the Navier-Stokes equations. ANZIAM J. 45:C364-377
[4] Almgren AS, Bell JB, Colella P et al (1998) A conservative adaptive projection method for the variable density incompressible Navier-Stokes equations. J. Comput. Phys. 142:1-46
[5] Ye T, Mittal R, Udaykumar HS, Shyy W (1999) An accurate Cartesian grid method for viscous incompressible flows with complex immersed boundaries. J. Comput. Phys. 156:209-240
[6] Guermond J-L, Quartapelle L (2000) A projection FEM for variable density incompressible flows. J. Comput. Phys. 165:167-188
[7] Ferziger JH, Peric M (1996) Computational methods for fluid dynamics. Springer
[8] Happel J, Brenner H (1983) Low Reynolds number hydrodynamics. Martinus Nijhoff Publishers
Mechatronics and Mechanical Engineering
EKC 2008: Summary of German Intelligent Robots Research Landscape 2007

Doo-Bong Chang

CEO, Applied Robot & Technologies, Karlsruhe, Germany
[email protected]
Abstract. As part of a survey of the European intelligent-robot landscape in 2007, AR&T studied and visited the main robot research institutes in Germany. In my opinion there is no single leading institute of robot research, but some of them are world class. German robot institutes fall into three categories: beginner, middle and world class. In general, all German institutes have problems with commercialization, i.e. the transition from basic research to commercial products. Germany is known for its quality machine and precision industries. What will happen if Germany does not focus on developing the intelligent machine industries? The German camera industry in the digital age may be the example. The time for robotics is now! This presentation gives an overview of the German robot research landscape in 2007.
1 Objective

In my opinion, Germany's intelligent robot research can be divided into three levels. At the top are the world-class laboratories. In the middle are the laboratories that are advancing to world-class status. Finally, there are the laboratories working at the base level. The world-class laboratories in Germany include the Karlsruhe area with KIT (Karlsruhe Institute of Technology), München (DLR), the Fraunhofer institutes IPK and IPA for service robots, and DFKI (Deutsches Forschungszentrum für Künstliche Intelligenz), which works on natural-language research among other topics. The laboratories transitioning to world class are Uni. Darmstadt, TU Braunschweig, where Prof. Wahl (Institut für Robotik und Prozessinformatik) works, and TU Magdeburg. The base-level locations are Freiburg, Kaiserslautern and Berlin, among others, but the number of new robotics institutes at almost all technical universities is continuously increasing. The Japanese Honda Europe laboratory for robotics is located in an area with convenient transportation (Offenbach, near Frankfurt and Frankfurt international airport) rather than in an area with an excellent research environment such as München or Karlsruhe. This makes transferring technology from the university institutes to research and development at companies more difficult. For example, Schunk Ltd., a company that commercializes DLR hands, has given up the sale of robot hands in favor of marketing and publicity. The commercialization of the laboratory inventions of intelligent robot research faces financial obstacles. The big companies have not invested in the future-oriented robot market, and micro and small innovative companies cannot get support from the government or from EU bodies. Since universities and
institutes are typically interested in fundamental research and not in its further commercialization, the public funding of robot development focused on universities and institutes depletes the public money available for investment in the commercialization of robots. On the company side, because commercialization does not lead to bulk sales and success, and because a company has to pay to maintain a laboratory without any turnover or return on investment, companies delay their decisions or give up this business. The German laboratories mainly depend on funds from European and national tax money. Even though the government authorities and many specialists recognize how important cooperation between laboratories, industry and education is, it is still hard to realize concretely how they should be connected, and the concepts for a solution appear to be at a mock-up stage. The lack of commercialization consciousness among university and laboratory personnel and the staff turnover at the companies lead to disappointment, and the cases in which such cooperations break up are increasing. The InMach company, which produces public-building cleaning robots, mower robots, etc., is a good example of research transitioning to a commercial product. The real hope for intelligent robotic companies in Germany lies in small spin-off companies, venture-capital companies, etc. In order to achieve the commercialization of humanoid robotics, truly special ideas are necessary. I believe that Germany will develop in these respects in the future. Germany's excellent infrastructure for fundamental robotic research surpasses that of Korea, but commercialization comes very slowly because the vision connecting technology and money is insufficient. Korea, compared to Germany, has the funds and the vision for commercialization. This provides an opportunity to work together: Germany with its very good infrastructure for robotic research and Korea with its funding and vision for commercialization complement each other. Commercialization combines all robot elements and has to succeed in the market; the failure of commercialization is cruel, and one accepts either the brutality of failure or the happiness of success. I do not believe that the people in the universities or laboratories can withstand such a big pressure or responsibility; they probably look for the secure way. Looking at the areas of expertise again: the visualization of humanoids is best at KIT, medical treatment and mechatronics at DLR, natural-language recognition at DFKI (Prof. Uszkoreit, Saarbrücken, and Bremen) and with Prof. Waibel of KIT (Karlsruhe Institute of Technology), object recognition with Mr. Acad of KIT, and service robots at Fraunhofer IPA and IPK. Among the new innovative research directions in robotics are intelligent network methods and group intelligence like insect swarms. Currently the paradigm of robot philosophy in Germany is changing: it is difficult for the small laboratories to go down the same road that the big laboratories have already travelled, wasting funds and manpower. The tendency of the small laboratories to professionalize and specialize is becoming clear, and that is the way to survive.
Laboratories that repeat universal or fundamental research and do not move forward will not keep up with the speed of development in this fast competition and therefore cannot survive. But regarding innovation power, Europe performs greatly in the world and currently cannot be disregarded. The United
States, Japan and Europe lead the world in fundamental research in robot technology, and Germany is at the center of robot research within this European group. Korea wants to achieve world-class status, but because the Republic of Korea has neglected cooperation with Europe and the USA, it has lagged behind.
2 Institutional Research Centers

2.1 Institute of Computer Science and Engineering (CSE), Industrial Applications of Computer Science and Micro Systems (IAIM), at the Karlsruhe Institute of Technology (KIT), FZI (Forschungszentrum Informatik)

Leader: Prof. Dr.-Ing. R. Dillmann (KIT, Karlsruhe Institute of Technology) (http://wwwiaim.ira.uka.de)

He is a professor at KIT and has developed a large network of approx. 50 international and European research scientists. His institute belongs to the world class. Core competence: humanoids (projects: Albert, Armar II and III), autonomous cars and mobile robot platforms, artificial muscles, medical robotics, and a wide spectrum of other robot research areas. Especially the visual and object recognition systems belong to the best class in Europe. This is the result of the Collaborative Research Center on Learning and Cooperating Multimodal Humanoid Robots, SFB (Sonderforschungsbereich) 588, which was established by the DFG in 2001 and will run for 12 years. These projects include mechatronics, control, motion coordination as well as computer vision and system integration. Besides this institute, there are also other institutes at KIT or in the Karlsruhe area:
Fig. 1. Armar III humanoids
• Institute of Process Control and Robotics of Prof. Woern
• IITB, Fraunhofer Karlsruhe, of Dr. Kuntze
• The hand of Dr. Schulz, FZK (part of KIT)

In the Karlsruhe area there are five famous robot institutes, which demonstrates Germany's very good infrastructure for robot research. Robotics at KIT has continued since 1975.

2.2 DLR (Deutsches Zentrum für Luft- und Raumfahrt e.V. / German Aerospace Center), Oberpfaffenhofen

Leader: Prof. Hirzinger (www.robotic.dlr.de)

Approximately 120 research scientists, belonging to the top class in the world, work here. The institute was founded in 1960. The research is focused on the areas: sensors,
Fig. 2. DLR agile humanoid robot
Research is focused on the areas of sensors, actuators, man-machine interfaces, industrial and service robotics, medical technologies (artificial organs, surgical robots), software tools and space robotics. It builds on prior work of the institute, namely the DLR lightweight robot (LWR) and the DLR hand: by using coordinated movements, the torso expands the grasp space of the robot hands beyond what results from arm movements alone. Core competence: space robotics, satellites, the lightweight arm and the DLR hand. The institute cooperates with MIT, the Canadian Space Agency, the Japanese space agency, CMU (Carnegie Mellon University), NASA and others.
2.3 DFKI (Deutsches Forschungszentrum für Künstliche Intelligenz) GmbH, Bremen and Saarbruecken

Leader: Prof. Dr. Frank Kirchner (http://www.dfki.de/web)

This institute belongs to the excellent group that is rising to world class. Prof. Uszkoreit, like Prof. Alexander Waibel of KIT, is an expert in natural language research in Germany. The institute employs approximately 30 people, but its staff has been growing rapidly since its foundation five years ago. Core competence: space robotics, underwater robotics.
Fig. 3. ARAMIES robot (a joint project with DLR) and underwater robot
The goal of the Ambulating Robot for Autonomous Martian Investigation, Exploration and Science (ARAMIES) project is to develop and program a multifunctional, multi-degree-of-freedom, autonomous walking robot for rough terrain. In particular, the project is focused on very steep and uneven terrain, e.g., canyon or crater walls on Mars.
2.4 Fraunhofer Institute for Production Systems and Design Technology (IPK), Berlin, and Fraunhofer IPA, Stuttgart

Leader: Prof. Dr.-Ing. Joerg Krueger (www.ipk.fraunhofer.de)

The IPA and IPK institutes (Prof. Schraft¹) specialize in industrial and service robots. Their solutions are very innovative, for example an intelligent gear box, factory transport systems and cleaning robots. Core competence: industrial and service robots; application-oriented robot solutions.
Fig. 4. Worker assistant robot with intelligent gear box
This concept shows a new kind of robot that assists factory workers in lifting or assembling machinery. Conventionally, a robot does its work by itself, without human cooperation.
3 Robot Industries

• SCHUNK GmbH & Co. KG, Spann- und Greiftechnik, Bahnhofstr. 106-134, 74348 Lauffen/Neckar, Germany; [email protected]; www.schunk.com; Tel. +49-7133-103-0; Fax +49-7133-103-2399
• Festo AG & Co. KG, Ruiter Straße 82, 73734 Esslingen, Germany; Tel. +49 (0)711 347-0; Fax +49 (0)711 347-2144; E-Mail: [email protected]; http://www.festo.com/
• InMach Intelligente Maschinen GmbH, Lise-Meitner-Straße 14, D-89081 Ulm, Germany; Tel. +49 (0)731 55 01 66-0; Fax +49 (0)731 55 01 66-9
¹ Rolf Dieter Schraft: Serviceroboter. Springer Verlag, Berlin, 1996. ISBN 3-540-59359-4.
• NDT Systems & Services AG, Am Hasenbiel 6, D-76297 Stutensee, Germany; Tel. +49 (0)7244 7415-0
• KUKA Roboter GmbH, Zugspitzstraße 140, 86165 Augsburg, Germany; Tel. +49 821 797-4000; Fax +49 821 797-4040; http://www.kuka.com/germany/de
There are many other robot companies, such as Manz Automation, ABB, Sensodrive, Amatec, FANUC Robotics Deutschland GmbH, Reis Robotics GmbH, STÄUBLI TecSystems GmbH, Asuro and AR&T. Many robotics companies in Germany concentrate on industrial robotics.
4 Conclusion

I have visited the most famous institutes of German robotics and have held discussions with many scientists. Forming a definitive picture of German robotics will take more time, but Germany's reputation in mechanical engineering is world class, even the best in the world. It therefore has a very good chance of becoming a leading nation for robotics, yet Germany still has a problem with transferring robot technology to intelligent service robotics, that is, with commercialization. Germany has strong basic research in robotics and great potential in comparison with other countries, but very little money is invested in the development of logical structure, natural language processing, programming languages and the like. Even Japan has achieved remarkable mechanical robotics but shows little innovation in logic. The best infrastructure for robotics research is in Karlsruhe. The future path of robotics research is international cooperation; in my opinion, Korea and Germany will be very good partners for future robotics research in the world.
References

[1] Doo-Bong Chang (2007) Report of robot research 2007 in Germany.
[2] World Technology Evaluation Center, Inc. (2006) International Assessment of Research and Development in Robotics. WTEC Panel Report 2006, World Technology Evaluation Center, Inc., German part, USA, 227-256.
[3] DLR, Institute of Robotics and Mechatronics (2004) Status Report 1997-2004, Part 1, Germany.
Characteristics of the Arrangement of the Cooling Water Piping System for ITER and Fusion Reactor Power Station

K.P. Chang1, Ingo Kuehn2, W. Curd1, G. Dell’Orco1, D. Gupta1, L. Fan1, and Yong-Hwan Kim1

1 ITER-IO, Cooling Water System Section, 13103 St-Paul-lez-Durance, France; [email protected]
2 ITER-IO, Project Office, 13103 St-Paul-lez-Durance, France
Abstract. ITER has been designed to demonstrate the scientific and technical feasibility of nuclear fusion energy conversion using the tokamak magnetic machine. The ITER design and operating experience will guide the realization of a future fusion power plant, called DEMO, which will be designed to produce electricity at a cost competitive with other energy sources. ITER will be operated as a tokamak machine and requires many plasma components and much plasma diagnostic equipment. The cooling water absorbs the heat from the plasma facing components, composed of the first wall/blanket (FW/BLK) and divertor/limiter (DIV/LIM), which operate at higher heat loads than those reached in a nuclear fission power reactor. From the long-term fusion study and research of the international ITER experiment, the characteristics of and considerations in the arrangement of the equipment and piping for the cooling water system can be deduced. The cooling water system for a fusion reactor differs in many respects from the equipment and piping arrangement of a fission plant. This paper introduces the ITER cooling water system design and offers insight into the design features that will be considered in a future fusion reactor power station such as DEMO. To meet its objectives, fusion-specific piping design follows many of the design concepts adopted in nuclear power plants, but additional special features should be taken into account in the design.
1 Introduction

Fusion power is generated in a tokamak machine, in which the burning plasma plays the role of the nuclear fuel in the reactor of a nuclear power plant. The heat transfer is similar to the heat transferred from the fuel elements to the surrounding cooling water in a fission reactor. The tokamak machine, including the cooling water primary heat transfer system, provides the primary confinement. The primary confinement is in turn surrounded by the tokamak building, a concrete structure that provides secondary confinement and a barrier similar to the containment wall of a fission nuclear power plant. The concrete tokamak structure envelopes the magnets and the vacuum vessel that contains the in-vessel components, and it has a confinement function with leak tightness to prevent the spread of activation to the environment. Fusion is a relatively safe and clean energy source, but special care should be taken in the design with regard to potential radioactive releases to the environment. The plasma facing
components will become activated, and the cooling water will become contaminated with activated corrosion products (ACPs) and tritium. The cooling water system will therefore be designed utilizing the As Low As Reasonably Achievable (ALARA) principle, to ensure that workers and the public are protected from harmful radiation from the ACPs and tritium. Cooling water activated above the levels at which it can be discharged to the environment will require periodic offsite disposal. Such water must be handled with care, and the equipment and piping carrying these hazardous materials should be designed and arranged according to safety concepts similar to those of a nuclear power plant. The tokamak is a complex machine, and the following paragraphs provide a description of the ITER tokamak fusion reactor and its cooling water system. There are many differences between the cooling water systems for the ITER fusion reactor and for a conventional fission nuclear power facility. The ITER cooling water system is comprised of three systems in series: the Primary Heat Transfer System (PHTS), the Component Cooling Water System (CCWS) and the Heat Rejection System (HRS). The heat from the plasma facing components is first rejected to the closed-loop PHTS. The PHTS in turn rejects this heat to a secondary closed-loop cooling water system, the CCWS. The PHTS and the CCWS provide two barriers in series against the potential release of radioactivity to the environment. The CCWS in turn rejects its heat to the last of the three cooling systems, the HRS, which is comprised of plate-type heat exchangers and mechanical draft cooling towers. The HRS rejects the heat from the CCWS to the atmosphere through the mechanical draft cooling towers. This paper outlines the ITER cooling water system and explains the differences to be considered in the cooling water piping layout, on the basis of the long-term fusion study and research of ITER.
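To make the series arrangement concrete, the minimal sketch below propagates a steady-state heat balance Q = m·cp·ΔT through the three loops. The heat load and flow rates are illustrative assumptions, not ITER design data.

```python
# Steady-state heat-balance sketch for the three series cooling loops
# (PHTS -> CCWS -> HRS). All numerical values are illustrative
# assumptions, not ITER design data.

CP_WATER = 4186.0  # specific heat of water, J/(kg K)

def delta_t(heat_load_w: float, mass_flow_kg_s: float) -> float:
    """Temperature rise across a loop absorbing heat_load_w at mass_flow_kg_s."""
    return heat_load_w / (mass_flow_kg_s * CP_WATER)

heat_load = 750e6  # W, assumed heat rejected by the plasma facing components
flows = {"PHTS": 3000.0, "CCWS": 4500.0, "HRS": 6000.0}  # kg/s, assumed

# In steady state the same heat load passes through each loop in series;
# only the temperature rise differs with each loop's flow rate.
for loop, flow in flows.items():
    print(f"{loop}: dT = {delta_t(heat_load, flow):.1f} K at {flow:.0f} kg/s")
```

The point of the series chain is visible in the numbers: the heat load is conserved from loop to loop, while each loop's flow rate sets its own temperature rise.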
2 Configuration of the Tokamak Machine and the Components to be Cooled by the Cooling Water System

The core of the tokamak machine, located in the Tokamak building, consists of a torus called the Vacuum Vessel (VV), which is connected to the bioshield via port structures at three elevations. The bioshield is a cylindrical structure made of reinforced concrete, forming the inner part of the building structure that hosts the torus. The chamber of the VV is enveloped by the in-vessel components, which comprise the Blanket Shield Modules (BLK) covered by the First Wall (FW), the Limiter (LIM), and the Divertor (DIV), the latter located in the lower part of the vacuum vessel. Together with the Neutral Beam Injector (NBI), which is located in the northern part of the Tokamak building, these components are the major clients to be cooled by the Tokamak Cooling Water System (TCWS) within the Primary Heat Transfer Systems (PHTS): the vacuum vessel (VV PHTS), the first wall/blanket (FW/BLK PHTS), the divertor/limiter (DIV/LIM PHTS) and the neutral beam injector (NBI PHTS) loops. Fig. 1 shows a vertical section of the Tokamak building with the vacuum vessel including the in-vessel components, the neutral beam injector and the ports connected to the building. The cylindrical bioshield is shown around the vacuum vessel, with its connection to each level of the Tokamak building.
Fig. 1. Vertical section in north-south direction through the Tokamak building
3 General Arrangement of the Cooling Water System in the Tokamak Building

The Tokamak building is a 57-meter-high reinforced concrete structure with a footprint of around 71 × 79 meters. The building consists of five full levels; the lowest level, B2, below the tokamak machine at -11.60 m elevation, contains the circular lower pipe chase around the bioshield, 49 meters in diameter. This area mainly hosts the DIV/LIM PHTS and the inlet pipe-work of the VV PHTS of the cooling water system; however, the volume needs to be shared with the vacuum and cryoline ring manifolds routed to the Port Cells in level B1 above. The lower pipe chase is connected via vertical shafts of around 7.5 m² cross-section to level L3, which hosts the upper pipe chase. This area, with the same dimensions as the lower pipe chase, hosts the FW/BLK PHTS and the outlet pipe-work of the VV PHTS. It is connected via the east gallery to the TCWS vault in level L4, which extends into a TCWS vault annex in the Tritium building. These areas are designed to provide secondary confinement in case of a failure in lines carrying ACPs and tritium. The northern part of the Tokamak building hosts the VV PHTS pipe-work with its equipment, and the vertical piping of loops 1 and 2 connected to the three air coolers per loop located on the roof of the Tokamak building. Fig. 2 shows an isometric view of the primary heat transfer system layout in the Tokamak building: the circular lower and upper pipe chases around the bioshield, the routing of the VV PHTS in the northern part of the Tokamak building, and the connection of the pipe work to the TCWS vault.

3.1 Concept of the First Wall and Blanket Module Cooling (FW/BLK PHTS)

The FW/BLK PHTS consists of three loops and provides cooling mainly for the 440 blanket modules. Each loop cools the blanket modules of three 40-degree VV sectors via the chimneys of the 18 upper ports.
Fig. 2. Isometric View of the Primary Heat Transfer System layout in the Tokamak building
Additional cooling is also provided to components in the equatorial and upper ports. The pipe work routed from the chimneys to the upper pipe chase currently consists of a bundle of 2 × 20 pipes for the FW/BLK PHTS, including inlet and outlet, plus 7 outlet pipes for the VV PHTS. The 47 single pipes are grouped in the area of the cryostat along the bioshield and contained in a penetration of around 1.1 meter diameter, which is welded to the cryostat and closed with a flange outside the cryostat. A plate connected to the penetration provides additional support inside the cryostat. The whole penetration needs to be able to withstand the loads transferred by the pipe work onto the flange and to accommodate the relative displacements with bellows. In the present design the single pipes in the bundles have a minimum distance of 50 mm between inlet and outlet, which can result in a reduction of cooling power, and the space for the FW/BLK PHTS pipe work is shared with the VV PHTS outlet pipe work. Ongoing studies aim at improving the design in order to split the pipe work into independent penetrations and to segregate the SIC systems. The pipe work for the three loops of the FW/BLK PHTS is routed via the east gallery of level L3 up to the TCWS vault in level L4, where the equipment for each loop is located. Each loop comprises a main pump, a low-flow pump, a heater, a pressuriser, and a heat exchanger with bypass and control valves.

3.2 Concept of the Divertor/Limiter Cooling (DIV/LIM PHTS)

The DIV/LIM PHTS consists of a single loop which provides cooling mainly for the 54 divertor cassettes located in the lower part of the VV, and for the two port limiters in equatorial ports 08 and 17, which are served via the vertical shafts 08, 09, 17 and 18 adjacent to Port Cells 08 and 17. The connection of the pipework, with six pipes of 76 mm outside diameter per port, to the divertor cassettes is provided in the 18 Port Cells in level B1. The pipe work in the even-numbered Port Cells penetrates the cryostat and the VV inside the lower port structure, whereas the pipe work in the odd-numbered Port Cells is connected outside the port structure, penetrating the cryostat and VV.
The pipe work coming from the 18 Port Cells is routed via the adjacent 18 vertical pipe shafts to level B2, where the lower pipe chase is located. The pipe work is collected around the bioshield and feeds the main inlet and outlet manifolds, which are connected via vertical shafts 17 and 18 to the TCWS vault in level L4. The configuration of the equipment in the TCWS room in level L4 is identical to that of the FW/BLK PHTS.

3.3 Concept of the Vacuum Vessel Cooling (VV PHTS)

The VV PHTS mainly cools the VV and consists of two loops which are alternately connected to the 20-degree halves of the nine VV sectors. In addition, the VV PHTS provides cooling to the field joints between the VV sectors, to the port extensions and to the NBI ports. The VV PHTS is categorized as SIC and is designed to be able to remove the total heat load of the VV with one loop. It is connected to three air coolers per loop, located on the roof of the Tokamak building, for decay heat removal by natural convection. The cooling inlet is provided via the lower pipe chase for loops 1 and 2, penetrating the bioshield as well as the cryostat, and connected to the lower ports with 114.3 mm outside diameter piping. The lower pipe chase is shared with the DIV/LIM PHTS as well as the vacuum and cryolines. The segregation of loop 1 from loop 2 of the VV PHTS pipe work, owing to its safety function, and a physical separation of the VV PHTS from the other systems in that area are the subject of study, in order to provide operational safety in case of any failure of the systems which could damage the VV PHTS, and to ensure redundancy. The cooling outlet is provided via the upper ports, with the pipe work bundled together with the FW/BLK PHTS through the cryostat and bioshield penetration into the upper pipe chase. The sharing of the same volume in that area between the VV PHTS and the FW/BLK PHTS needs to be studied jointly with the similar situation in the lower pipe chase. Loops 1 and 2 have segregated routing from the upper pipe chase via the crane hall, where a pressuriser for each loop is currently located, up to the three air coolers on the roof of the Tokamak building. The feed pipes for both loops are routed separately and vertically in the northern part of the building, from the HV deck room down through the NB Cell into the drain tank room in level B2, where the pumps, filters and electrical heaters are placed. The loop is closed from the drain tank room to the lower pipe chase.

3.4 Concept of the Neutral Beam Injector Cooling (NBI PHTS)

The NBI PHTS consists of one loop and provides cooling for the low-voltage and high-voltage components of the two heating and current drive (H&CD) injectors and the diagnostic neutral beam injector (DNB). Additional space has been allocated for an upgrade with a third NB system. The components of the NB system are located in the NB Cell and in the high-voltage deck room above it, in the northern part of the Tokamak building. The equipment of the cooling water system is located in the TCWS vault in level L4, with a configuration similar to those of the FW/BLK PHTS and DIV/LIM PHTS. The main inlet cooling pipe supplies coolant for the low-voltage and high-voltage systems of the NB injector. The distributor and collector for the low-voltage system are
located in the upper pipe chase and connected to the NB Cell, whereas the piping for the high-voltage system is routed via the east gallery in level L3 to the HV deck room.
4 Characteristics to be Considered in the Equipment and Piping Layout for the Fusion Project

4.1 Major Elements to be Considered for the General Arrangement

4.1.1 Confinement

For the equipment and piping layout in a fusion project, the following elements should be considered in the safety approach: confinement, earthquake, fire, waste, radiation protection, hazardous materials, etc. The most serious hazards involve tritium and the activated corrosion products (ACPs) arising from erosion of the plasma facing components. The confinement of these hazardous materials is the most fundamental safety function in a fusion plant, where confinement refers to all types of physical and functional barriers that protect against the spread and release of hazardous materials. All primary heat transfer systems, i.e. the FW/BLK, DIV/LIM, NBI and VV PHTS, should be arranged within the confinement boundary. The confinement boundary is designed to maintain leak tightness and to withstand the high pressure following a high energy line break. The design of the confinement barriers shall implement the principles of redundancy, diversity and independence, which are the major concepts considered in the design of SIC in nuclear power plants; these principles can be applied in the ITER design. The VV PHTS, which has a safety function, and the Safety Chilled Water System (CHWS) are designed with a redundancy concept, and each loop of the redundant trains is designed independently. That is to say, each loop should be designed independently so as not to be functionally damaged by a common cause failure.

4.1.2 Magnetic Hazard

Another major concept to be considered in the fusion plant design is the magnetic hazard. The tokamak machine is operated with a magnet system whose function is to provide the toroidal and poloidal magnetic fields necessary to contain and control the plasma during the various phases of machine operation. The TCWS vault and the pipe chases are subject to magnetic fields during plasma operation. PHTS components must either be magnetically shielded or able to operate in the prevailing fields. The arrangement of the PHTS piping must provide appropriate separation from the electrical feeds for the magnets, such that magnetic deformation or arcing cannot simultaneously damage the PHTSs. Components such as motor-operated valves (MOVs) and instrumentation inside the TCWS vault and the pipe chases shall be arranged taking the magnetic fields into account: they should be located outside the boundary of the magnetic field zone so that they are neither damaged nor caused to malfunction. If locating an MOV or instrumentation outside the magnetic field boundary is not feasible, it should be qualified and evaluated through a test program to ensure its workability and reliability. Figs. 3 and 4 show stray-field calculations presented on vertical and horizontal sections of the Tokamak building. According to the figures, the pipe chases and the TCWS vault in the Tokamak building lie within the magnetic field boundary.
Fig. 3. Stray Field on Vertical Section in east west direction of Tokamak building
Fig. 4. Stray Field on the plan drawing of Tokamak Complex
Test results indicate that, in general, 10-15 mT is the safe limit below which components work correctly without any particular provisions. Above this limit, operability depends on the orientation with respect to the magnetic flux density, and a significant sensitivity of all the control components is possible. That is, the magnetic field may influence the MOVs and instrumentation on the TCWS piping.
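As a rough illustration of how this placement rule might be screened, the sketch below checks components against the limits. The 10 and 15 mT values are the quoted test result; the component names and local field values are hypothetical.

```python
# Screening sketch for the magnetic-hazard placement rule described above.
# The 10 and 15 mT limits come from the test result quoted in the text;
# the component names and local field values are hypothetical.

SAFE_LIMIT_T = 0.010    # 10 mT: works correctly without particular provisions
UPPER_LIMIT_T = 0.015   # 15 mT: above this, qualification testing is needed

components = [
    ("MOV in lower pipe chase", 0.040),       # assumed local stray field, T
    ("transmitter in TCWS vault", 0.012),
    ("MOV outside field boundary", 0.004),
]

for name, field_t in components:
    if field_t <= SAFE_LIMIT_T:
        verdict = "OK without particular provisions"
    elif field_t <= UPPER_LIMIT_T:
        verdict = "operability depends on orientation; evaluate"
    else:
        verdict = "relocate outside the field boundary or qualify by test"
    print(f"{name}: {field_t * 1e3:.0f} mT -> {verdict}")
```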
4.2 The Layout for the Safety Function of the VV PHTS

There are many safety-related systems and components for the cooling water system in a nuclear power plant, such as the reactor coolant system (RC), safety injection and shutdown system (SISC), containment spray system (CS), feedwater system (FW), auxiliary feedwater system (AF) and component cooling water system (CC). Some of them are linked to the safe-shutdown function and maintain the safe-shutdown status; these are designated as safety-related systems and components. The same concept can be applied to ITER and fusion plants. In a fusion plant, the systems related to decay heat removal and the confinement function can be defined as SIC. All SIC systems linked to decay heat removal and the confinement function should be designed taking safety criteria into account. They should be designed so as not to be affected by internal and external hazards such as fire, flooding, missiles, pipe failure and pipe whipping, and all of these hazards should be evaluated in the design so that nearby SIC are not affected. Among these hazards, pipe failure or rupture caused by a high energy line break is considered the most severe. A SIC such as the VV PHTS should be designed to be protected against the effects of a high energy line break by means of physical separation, physical barriers and/or pipe whip restraints. The VV PHTS is a SIC system owing to its safety functions of decay heat removal and confinement. Moreover, the VV PHTS is designed with a redundancy concept to ensure that each of the two loops can remove 100 percent of the decay heat loads in case the other PHTSs and one loop of the VV PHTS are unavailable. In nuclear power plants, the physical separation concept has been adopted first for redundant systems, in order to avoid common cause failures caused by fire, flooding, pipe whipping, etc. Therefore, each loop of the VV PHTS should be separated in the design. Where physical separation is not feasible, other provisions can be applied, such as leak-before-break (LBB) or the pipe break exclusion concept.

4.3 Physical Separation

The PHTS piping may carry fluid contaminated with ACPs and tritium and is operated at high temperature and pressure, i.e., under the conditions that define a high energy line; all of these PHTSs therefore belong to the high energy lines. Special consideration is given to high energy lines in the design of nuclear power plants: structures, systems and components are designed and evaluated against a double-ended guillotine break of a high energy line unless other provisions are applied in the safety evaluation. A high energy line break may result in many kinds of hazards, such as pipe whipping, jet impingement, flooding, secondary missiles, compartment pressurization and changes in environmental conditions. These lines should therefore be designed so as not to affect other components, especially safety important class (SIC) components in their vicinity. In the nuclear power plant, SIC can be protected from a high energy line break by means of physical separation, physical barriers and pipe whip restraints. The physical separation concept is applied first, so that as few physical barriers or pipe whip restraints as possible need to be installed. Where adequate protection cannot be achieved by physical separation, an additional physical barrier is installed; and where separation and barriers together are still insufficient, additional pipe whip restraints are installed to restrain pipe whipping. In nuclear power plants, all these high energy lines are arranged in specially designed areas that can withstand the high pressure following a high energy line break, and these areas are separated from the safety-related components. This concept is compatible with the separation criteria stated in the safety-relevant documents, such as the preliminary safety analysis report for the ITER project. The same concept can be applied to fusion plants and to ITER as well.
All high energy lines should be arranged separately in High Energy Line Break (HELB) areas so as not to affect nearby safety important class (SIC) components. In the ITER project, the confinement boundary is designed to accommodate the high energy lines and the piping containing radioactive material.
4.4 Internal/External Hazards

Among the many internal and external hazards, the following should be taken into account in the equipment and piping arrangement.

4.4.1 Failure of High Energy Pipes and Components

If a pipe or component contains fluid with significant stored energy, this energy may be released in such a way as to cause further damage through pipe whipping, jet impingement, high pressure, secondary missiles, pressure waves, flooding, chemical reactions, or changes in environmental conditions with increasing temperature and humidity levels. The failure of high energy pipes and components may also give rise to other accidents, which should be considered in the qualification of the SIC systems. SICs should therefore be protected against the effects of ruptures of high energy pipes and components. The VV PHTS, being a safety system, should be protected by means of physical separation, physical barriers and pipe whip restraints.

4.4.2 Flooding

Flooding may result from the rupture of high or moderate energy pipes, tanks and vessels. Depending on the amount and nature of the liquid concerned, indirect damage to SICs can take the form of electrical short circuits, fire, hydrostatic pressure effects, instrument errors and buoyancy forces. SICs such as the MOVs located in the pipe chases, the TCWS vaults and other areas should be protected against, and evaluated for, flooding effects. Flooding may damage an electrical motor or cause it to malfunction; such components should be located above the flooding level of each room so as not to be damaged.

4.4.3 Secondary Missiles

A missile produced by high-speed rotating equipment, by instruments such as temperature or pressure indicators, or by a whipping pipe may produce secondary missiles, such as pieces of concrete or parts of components, which could do unacceptable damage to SIC. Means and provisions are required to protect SIC from missile effects, and the most prudent course of action is to prevent their generation in the first place. For example, multiple pipe breaks resulting in separated pipe parts becoming missiles can be avoided if piping materials with adequate ductility and fracture toughness are used.
5 Conclusion

ITER is the first large-scale thermonuclear fusion test reactor and represents a first step toward accommodating all required safety features in such a design. A fusion reactor is also based on a neutron reaction and produces hazardous materials, namely activated corrosion products and tritium; moreover, it is operated under vacuum conditions and at extremely high temperatures and pressures. Special care must therefore be taken in the piping layout design. This paper has specified the characteristics to be considered in the design, based on the current ITER design. Further research and testing will be performed to ensure the feasibility and reliability of the components, and the resulting design developments are expected to provide new technology to be adopted in the next fusion plant.
References [1] [2] [3] [4] [5] [6] [7] [8]
[9] [10] [11] [12]
ITER Design Description Document Cooling Water System (DDD 26) July 2004 EFET Final Design Report in ITER - CTA Phase February 2004 Interaction of Tokamak Building Structures with ITER Magnetic Field April 2008 ITER Baseline Documentation, Plant Description Document, November 2007 ITER Baseline Documentation , Plant Design Specification, November 2007 ITER Baseline Documentation, Plant Integration Document, November 2007 Control System Design and Assessment (CSD) August 2004 Soft paper via EFDA tasks to ENEA & CEA “Magnetic compatibility of standard components for electrical installation: Computation of the background field and consequences on the design of the electrical distribution boards and control boards for the ITER Tokamak building” Aug 2005 Regulatory Guide 1.180 “Guidelines for Evaluating Electromagnetic and Radio-Frequency Interference in Safety-Related Instrumentation and Control Systems. October 2003 Design Description Document, Magnet, October 2006 IAEA Safety Standards for protecting people and the environment “Protection against Internal Hazards other than Fire and Explosions in the Design of Nuclear Power Plants” Regulation on Ensuring the Safety of Nuclear Power Plants, June 2008
Development of Integrated Operation, Low-End Energy Building Engineering Technology in Korea

Soo Cho1, Jin Sung Lee2, Cheol Yong Jang3, Sung Uk Joo2, and Jang Yeul Sohn2

1 Building Energy Center, Korea Institute of Energy Research, Daejeon, Korea; [email protected]
2 Dept. of Architecture Engineering, Graduate School, Hanyang University, Seoul, Korea
3 Building Energy Center, Korea Institute of Energy Research, Daejeon, Korea
Abstract. The purpose of this project is to improve efficiency through the aggressive application of individual new technologies related to building energy savings, and through their cross-integration with other technologies. The goal is to reduce national dependence on carbon energy. The main points are as follows: first, the integration of pertinent technologies into a single modular package; second, the combination of climate controls, external surfacing and lighting technologies into an integral system; and third, the development of a building energy operating network for the efficient usage of each engineering system. This project was planned under government leadership, with cooperation and participation from R&D organizations, universities and industry; it is in its first year of implementation. Furthermore, policy development and reliability improvements are planned in order to promote the wide application of the technologies. For this purpose, test implementation and operation of these technologies are planned during the Second Stage. The goal of this project is to achieve a 20% reduction in energy usage from the present level, while limiting the increase in construction cost to within 5% to facilitate greater commercialization.
1 Introduction

For the purpose of energy savings in general, and in response to recent international agreements dealing with climate change (hereafter CCAs, for Climate Change Accords), active research has been undertaken in Korea to reduce CO2 levels. Patterns of energy supply and demand vary by industry, and the proven energy saving measures also differ widely in application and effectiveness, so detailed research for technology development in each field is being pursued by the respective disciplines. When it comes to energy savings applicable to building construction, which comprises 25% of overall domestic energy consumption, solutions through "green" technologies can be multifaceted compared with other fields, and a substantial potential for energy reduction through proper operation and management is also perceived. With the geopolitical changes following the first and second oil crises, research and development on energy saving technologies for buildings has gradually increased, from insulation standards for external surface materials to equipment
installations, motorization, electrical systems, and renewable energy applications, with growing calls from government and industry alike for the development of new paradigms and products in response to the CCAs and the international environment. While individual technical advancements for building energy reduction and efficiency improvement have already reached maturity, most of these development efforts have remained separate, giving rise to challenges of commercialization and harmonization with existing systems. Such problems suggest that effective technology application for building energy savings will not be realized by concentrating on newer technologies; rather, it is necessary to take already proven technologies and improve the net performance of their integration. Practical applications for building energy savings will be an effective blend of existing and innovative technologies, exploiting their synergy. To drive such convergent technology, it will be necessary to develop energy efficient control systems and integral operating algorithms, in order to improve the interoperability of individual technologies and hasten practical application following technology demonstration.
2 Korea's Current State of Technology for Building Energy

2.1 Current Research for Building Energy

At the end of Q1 2007, energy consumption by building structures accounted for approximately 26.6% of the national total. The Korean government has set goals in accordance with the international CCAs and in response to the rising cost of oil, and is pursuing various enterprises. In its state plan for the next CCA, it has adopted greenhouse gas reduction measures through building energy management. The focus of the key building energy initiatives is as follows:

• Tighter energy demand reduction standards at the planning stage for new construction / cap on total energy usage
• "Green" certification of energy efficient buildings / rebate on construction cost
• Establishment of construction criteria for a "Green New Town"
In preparation for these projects, the government is directing support and resources toward technological innovation in building demand (load) reduction, high-efficiency systems installation, and renewable energy application. Government investment budget data from 1999 to 2007 for R&D subsidies to building energy related projects were analyzed for each related individual technology and identified under a sub-category as a percentage of the total investment. Building related technologies in Korea were divided into the three major categories of Demand Reduction, High-Efficiency Systems (including Renewable Energy) and Controls/Operation, as shown in Table 1, for a total of 315 cases completed or in progress. As shown, investments in high-efficiency systems and renewable energy account for over 80% of the total building related R&D, with the rest being load-reduction measures such as improved insulation, and systems operation and controls related technologies, at approximately 9% and 8% respectively.
Table 1. Building energy R&D investment & project cases

Type    | Demand Reduction | High-Efficiency Systems (incl. Renewable Energy) | Controls/Operation | Total
Cases   | 27               | 263                                              | 25                 | 315

Fig. 1. Energy-Saving Building Technology Validation Projects: (a) ZeSH, (b) Super Energy Saving Building, (c) 3L House
2.2 Technology Trends

A building is an amalgam of a myriad of technical disciplines, including its basic electrical, mechanical and materials engineering parts; lately, the prominence of BAS (Building Automation System) applications within IBS (Intelligent Building Systems) has driven rapid convergence with electronics and information technologies. An added requirement for eco-friendly construction has led to buildings with Bio Technology (BT) applications at the center of their microcosm, and more efforts are being made to create a more livable environment within our urban heat islands. For some of the core technologies, the current level of energy efficiency in Korean buildings is roughly equivalent to that of the leading nations. However, most are applications of individually and independently developed technologies, resulting in less than 10% commercialization and causing problems of compatibility and integration with existing systems. Such impediments have loomed large against practical application, burdening the intended operator (consumer) and causing low rates of adoption. To achieve the technological integration of separate innovations, various validation and testing projects are being undertaken in Korea, such as the Zero-Energy Solar House, the 3-Liter House, the Super Energy-Saving Building, and the Green Building. But many latent problems remain, with low availability for wide usage stemming from technological shortcomings (complicated systems, higher costs).
3 Project Outline

3.1 Analysis of Individual Technologies

In order to identify integration technologies for building energy, a study was conducted of buildings in Korea by their intended types of usage. As the method for the
study, shown in Fig. 2, buildings were grouped according to their types of usage. Separately, the applicable technologies were grouped under the four categories of building energy demand reduction, high-efficiency systems, renewable energy usage, and controls/operations. Table 2 shows the categories of construction/building energy saving technology, with 34 individual items listed under them; many others are not listed, the selection being limited to proven and widely used technologies, or to those relatively mature and ready for immediate application. The table shows the applicability of each individual technology to the respective types of building usage; a high rate of commonality appears concentrated in the category of building load reduction.

Fig. 2. Buildings by type of usage in Korea

Table 2. Applicability of energy saving technology by type of building usage. Building types surveyed: Residential - Single (17), Residential - Multiple (16), Office (30), Industrial (13), Schools (27). The 34 individual technologies, grouped by category, are as follows.
Building Energy Demand Reduction Type Construction Technology: Trombe Wall System; Floor Thermal Storage System; Double Skin Facade; Insulating Shutter; Super Envelope Insulation; High Performance Windows & Doors; Latent Heat Control PCM System; Light Shelves; Wind Proof Style Entrance & Exit System; Active Shield Technique; Economizer Cycle; Energy Efficient Building; Air-tightness Construction; Artificial Ground Afforestation Technique.

High-Efficiency Systems: Temperature, Humidity and Air Quality Control; Underground Laying Duct; Local Ventilation System; Task Ambient Lighting; Condensable Water Heat Recovery System; Decompression Boiler by Steam Pressure; High Efficiency Inverter Control System; Low, Mid & High Temperature Isolation Heat Storage; Heat Recovery Steam; Electric Heat Changer.

Renewable Energy Applications Technology: Active Solar Heating System; BIPV; Geothermal & Solar Energy Link HP; Fuel Cell; Small Wind Power System.

Controls/Operation: Control System for Daylight Usage; EMS Control System; DSMS Heat Resources Control System; Co-generation System Control; Central Monitoring/Operation System.
Residential buildings comprise approximately 68% of domestic construction and have continuous 24-hour occupancy, making them a priority for energy savings. They are divided into single domiciles and multiple-unit buildings according to size, and generally show high applicability for load reduction technology. As its size increases, a multiple-unit residential building will require common-use systems and large-scale integrated monitoring technology, but becomes disadvantageous for certain passive system technology applications. Office space and educational facilities are suitable for all four categories of energy saving technology. Since industrial facilities possess different traits from those of general occupancy or usage, and are constructed to enhance productivity, only a fraction of the high-efficiency systems and operating technology is applicable, and the potential for energy savings
is relatively low. Residential buildings possess high potential for load reduction and renewable energy exploitation, and, as stated before, office and educational facilities show potential for all categories of energy efficiency measures. We can also see that load reduction technologies are applicable and relevant regardless of the building's intended type of usage.

3.2 Selection of Technology

The criteria for assessment were: cost input versus efficiency improvement toward availability; social and economic requirements; and considerations for integration as applied to each type of usage. Using this method, an integration technology appropriate for the project was selected according to the respective technical merit within each category.
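The paper does not spell out the scoring procedure, so the sketch below shows one generic way such a multi-criteria selection could be implemented; the criteria follow the list above, while all weights and scores are invented for illustration.

```python
# Generic weighted-sum screening against the three assessment criteria named
# above. All weights and scores are invented for illustration; the paper
# does not publish its actual scoring model.

criteria_weights = {
    "cost_vs_efficiency": 0.4,  # cost input vs. efficiency improvement
    "social_economic": 0.3,     # social and economic requirements
    "integration": 0.3,         # considerations for integration
}

# Hypothetical candidate scores on a 0-10 scale per criterion.
candidates = {
    "Super Envelope Insulation": {"cost_vs_efficiency": 8, "social_economic": 7, "integration": 9},
    "BIPV": {"cost_vs_efficiency": 5, "social_economic": 8, "integration": 6},
    "EMS Control System": {"cost_vs_efficiency": 7, "social_economic": 6, "integration": 9},
}

def weighted_score(scores: dict) -> float:
    """Weighted sum of a candidate's criterion scores."""
    return sum(criteria_weights[c] * s for c, s in scores.items())

for name in sorted(candidates, key=lambda n: -weighted_score(candidates[n])):
    print(f"{name}: {weighted_score(candidates[name]):.2f}")
```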
Fig. 3. Building Energy Saving Integrated System
4 Scope of Project Research

The project planning process served to reinforce the importance of this project. The ETI (Energy Technology Innovation) Project led by the Korean Ministry of Knowledge Economy (MKE) has adopted the "Integrated Low-Energy Building Technology Development" Project under its Building Energy Reduction program, through which solutions to existing problem areas and future growth are being addressed. The major research units of the Integrated Low-Energy Building Technology Development Project are:

1. Integral building energy modularization / unitization and common technology development
2. Environment-adaptive integral exterior surfacing-opening / climate control system technology
3. Building Energy Network development for optimized operation

Each research unit has its own R&D system, with Unit 1 performing tasks derived from Units 2 and 3, such as module applications, effective integration, and technical assessment.
Fig. 4. Change in Paradigm for Low-Energy Building Technology Development
Since the renewable energy technology area is under separate development within the MKE's ETI project, it is excluded here; this area is, however, designed for ultimate integration into the ETI project. The major objective of this project can be summarized as the application of individual technologies and their integration with other technologies, to improve efficiency and promote wide usage. It remains a challenge, though, to actually apply a particular innovation to any specific field of construction. The primary objective of integral low-energy building technology development is to devise a strategy that brings together existing technologies for each type of building usage with new technologies, and to integrate individual energy saving technologies to achieve tangible results. Together with validation projects to prove the innovations, applications technology will be developed for immediate availability upon project completion. The goal is thereby to achieve a 20% reduction in building energy usage with less than a 5% increase in construction cost. Efficient operation will require new synergistic technology that catalyzes the ready, inexpensive application of efficient technologies in the field, meshing seamlessly with technical improvements that enhance the application of existing technologies. The project management section will put in place the following support systems to promote technology application and commercialization:

• Integrated Technology Development Monitoring with Related Support
• Individual & Integral Applicability / Metrics Assessment of Unitized Technology
• Development of Technology to Promote Availability
• Technology Export Application Study
• Technology Education / Marketing
• Information Exchange System Operation
5 Project Outline

5.1 Technological Scope of Project

The scope includes all technologies widely in use for building energy reduction, as well as those with less than 10% market application for economic, social or cultural reasons, including
new innovations, targeted for improvement through individual performance analyses and with regard to their interaction with other technologies. The scope of technologies for this building energy reduction project has been selected as follows:

(a) Building Energy Demand Reducing Construction Technology: external surface fabrication technology to reduce building energy demand, mainly through insulation and sealing; also includes natural ventilation and passive systems using solar energy / lighting.

(b) High-Efficiency Systems Equipment Technology: general building systems with superior performance, and HVAC systems with minimal energy loss from heat source to delivery.

(c) Controls / Operating Technology: the area of technology that reduces building load through the integral management and operation of high-efficiency systems, minimizing energy loss and improving harmonization with other technologies to ultimately increase energy efficiency.

5.2 Integration / Modularization of Respective Technologies

A means to select the energy saving technologies appropriate to a particular type of building usage and then unitize their related functions for optimal performance: high-efficiency systems technology, including building load reduction and renewable energy applications technologies; integrated systems across individual technologies within a group as well as integration across groups; and development efforts for product modularization.

5.3 Operations Technology Development

Optimized building operations technology must be developed in concert, in order to achieve application and commercialization of the modularized product, and to integrate individual building energy saving technologies through feedback between validation research and theoretical analysis. For commercialization, this must include the development of systems for sensing, monitoring and fault diagnosis of energy saving metrics, user controls for each type of energy source for economic factors, and operating algorithms and management systems for the integrated control of groups, as sketched after Section 5.4 below.

5.4 Technology Application

Even upon the successful completion of all developments for optimal integrated operating technology in building energy reduction, including optimal individual technologies and their integrated modularization, the initiative may remain just another project if the market has not been prepared to facilitate systematic entry and availability, with follow-up support thereafter. The project therefore includes a strategy to create a baseline for commercializing the technological developments upon project completion, with plans for industry-wide application of each type of innovation to promote its wide usage. Policy initiatives for commercialization and the education of end-users on the anticipated benefits can improve understanding. Operating procedures, manuals and manufacturing techniques will be developed so that operators can take full advantage of the planned savings.
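As a minimal illustration of the kind of sensing and monitoring logic Section 5.3 calls for, the sketch below compares metered consumption against a baseline and flags hours that miss the 20% savings target. The data and the threshold handling are assumptions, not a specification from the project.

```python
# Minimal sketch of the monitoring and fault-diagnosis logic called for in
# Section 5.3: compare metered consumption with a baseline and flag hours
# that miss the project's 20% savings target. All data are invented.

SAVINGS_TARGET = 0.20  # project goal: 20% reduction versus baseline

baseline_kwh = [120, 118, 125, 130, 128]  # assumed baseline hourly load
measured_kwh = [95, 96, 104, 122, 99]     # assumed metered hourly load

for hour, (base, meas) in enumerate(zip(baseline_kwh, measured_kwh)):
    savings = (base - meas) / base
    status = "on target" if savings >= SAVINGS_TARGET else "below target: diagnose"
    print(f"hour {hour}: savings {savings:.0%} -> {status}")
```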
Fig. 5. Optimized System of Integral Modularization / Unitization
5.5 Sub-projects

The Integrated Low-Energy Building Technology Development Project is divided into three major sub-projects, in accordance with the scope of technology described in Sections 5.1 to 5.4, each managed by its own research area. The scope and objective of each major sub-project are as follows. First, the "Building Energy Integrated Module / Unit and Common Technology Development" sub-project covers a technology applicability (responsiveness) assessment for overall technology development; a plan for integration through modularization / unitization (the method of conjunction with the technologies derived from the second and third sub-projects) and the development of technologies for application; a general performance assessment; operations monitoring technologies; and research into design and construction methods to be used in the Test Bed process of the Second Stage.
Fig. 6. Organization of Technology Development
Second, the "Energy Environment Responsive Integrated Surfaces Openings / Climate Control Systems Technology" sub-project develops and performance-tests modularized products using new materials for external surfaces, door and window systems, structurally integrated external lighting control systems, and externally ventilated hybrid climate control systems that reduce air conditioning / ventilation loads. Third, the "Building Energy Network Development for Optimum Operating Efficiency" sub-project develops systems for formulating energy usage appropriate to each type of building usage and its characteristics, and integrated energy source management through supervisory technology research.
6 Anticipated Results from Technology Development

Through the effort outlined here for building energy reduction in Korea, a system unique to the Korean situation will become established for the design and manufacture of building energy savings. Furthermore, building-related technologies in structures, materials, environment, energy, electronics, communication, etc. will benefit, and exchanges between industry and academia will be enhanced. International cooperation, not only research within Korea, will become possible for the research and development of industry-leading building energy technologies. The Integrated Operation Low-Energy Building Technology gained through this research will realize space-saving, construction-friendly and cost-saving benefits by recombining individual technologies into modularized, optimally integrated solutions. In addition, overall building performance is predicted to improve through integral operations/control technology between modules. The market for building energy savings will be stimulated, and the construction industry will become competitive in the global market, along with substantial savings in the cost of building operations.
7 Future of Development

The final goal of this building energy savings technology development project is common to the government and the civil sector alike: with a minimal increase in the cost of construction (within 5%), to maximize savings in building energy expenditure (over 20%) and thereby reduce carbon dioxide output. The impetus for meeting this objective lies within existing technologies; the core effort is therefore in integrating them and in their application, making them available for wide usage. The method is to turn what has been an impediment, the reliability of the technology, to advantage: by applying the technologies to buildings occupied by tenants, validating their economic benefits for the user, and developing them to be ready immediately for industry. With wide usage, building energy demand will be reduced along with carbon dioxide, and the technology will become readily accepted. These results, aligned with the Korean government's ongoing efforts in innovative urban development, U-ECO City development, and its policy of ubiquitous expansion, will have a synergistic effect for the public good. Throughout this process, parallel policies and systems for infrastructure, and further development leadership, will be necessary for proliferation.
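To see how the two headline targets trade off economically, a back-of-the-envelope payback estimate can be made; the absolute construction cost and annual energy bill below are assumed, illustrative values, not project data.

```python
# Back-of-the-envelope payback for the stated targets: at most a 5% increase
# in construction cost for at least a 20% cut in energy usage. The absolute
# construction cost and annual energy bill are assumed, illustrative values.

construction_cost = 10_000_000_000  # KRW, assumed reference building cost
annual_energy_bill = 300_000_000    # KRW/year, assumed

extra_cost = 0.05 * construction_cost      # 5% construction premium (cap)
annual_saving = 0.20 * annual_energy_bill  # 20% energy reduction (floor)

print(f"extra cost:     {extra_cost:,.0f} KRW")
print(f"annual saving:  {annual_saving:,.0f} KRW")
print(f"simple payback: {extra_cost / annual_saving:.1f} years")
```

Under these assumed figures the premium pays back in under a decade, which is the kind of argument the project intends to validate for end-users.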
Acknowledgments. This work was supported by the Ministry of Knowledge Economy and the Korea Energy Management Corporation.
References

[1] Korea Energy Economic Institute (2005) Energy Survey
[2] Jo DW (2006) Revision Standard of Green Building Certification for an Apartment House. Korea Green Building Council
[3] Hwang KI (2005) A Survey Study on the Electric Power Consumption of Apartments. Journal of Architectural Institute of Korea 21(12)
[4] Choi MS (2007) An Evaluation of the Priority Order in Developing the Construction Technologies for Environment-Friendly Apartment Houses. Journal of Architectural Institute of Korea 23(9)
[5] Ross S (1998) Green Building Materials: A Guide to Product Selection and Specification. John Wiley & Sons
[6] Keenan A (2002) Green Building: Project Planning and Cost Estimating. R.S. Means Company
[7] Moon MR (2008) A Survey Study on the Energy Consumption of Green Apartment Houses. Conference Journal of Architectural Institute of Korea
RF MEMS Switch Using Silicon Cantilevers

Joo-Young Choi

Dept. of Electrical & Electronic Engineering, Imperial College London, UK
[email protected]
Abstract. This paper introduces a new concept in RF MEMS switches intended for high RF power applications. The novel switch architecture employs electrothermal hydraulic microactuators to provide mechanical actuation, together with 3D out-of-plane silicon cantilevers that provide both spring action and a latching mechanism, to create an OFF-state gap separation distance of 200 μm between ohmic contacts. Owing to its simple assembly, many of the inherent problems associated with the more traditional in-plane suspension bridge and cantilever beam architectures can be overcome. A SPST switch has been investigated: its ON-state insertion loss and return loss are less than 0.5 dB and greater than 15 dB, respectively, while its OFF-state isolation is better than 30 dB, up to 12 GHz.
1 Introduction

RF MEMS switches offer many advantages over solid-state switches: low loss, high isolation, low DC power consumption and high linearity [1]. However, RF MEMS switches have reliability issues that are linked to the RF signal power level. Generally, RF MEMS switches are based on designs that employ electrostatic actuation. These switches can be classified as capacitive membrane and metal-to-metal ohmic contact switches [2, 3]. The capacitive membrane switches have large contact areas, separated by a thin dielectric layer, while the metal-to-metal ohmic contact switches can have relatively small contact areas. In terms of mechanical structure, traditional RF MEMS switches can be divided into architectures based on suspension bridges and on cantilever beams [2]. With the suspension bridge structure, there are two anchors, one at each end of a beam. In contrast, the cantilever beam structure has one end fixed to the anchor and the other end free to move. Switches can also be designed in either series or shunt configurations within a circuit. Traditional RF MEMS switches have inherent limitations in RF signal power handling (e.g., failure mechanisms due to self-actuation, stiction, etc.). These problems mainly result from the beam structure itself, which is very thin (typically 0.5 to 2 μm) and has a very small gap separation distance between electrodes (typically 1 to 5 μm). Because the power handling capacity varies with many parameters of the switch architecture, there have been a number of diverse efforts to improve it: for example, the addition of an electrode to pull the beam up [4, 5, 6] or to toggle the cantilever beam downwards [7]; arrays of switching elements to increase isolation and reduce current density [8, 9]; increases in the width and thickness of the beam [8, 9, 10]; increases in the contact force [11, 12, 13]; and the use of special contact materials such as diamond film, Pt or Ir [14, 15, 16]. However, these are not fundamental solutions because an
in-plane beam-based architecture is still employed, and these measures can even increase the complexity of the design and fabrication. As a result, non-beam-based architectures for RF MEMS switches have also been proposed, such as a ridge waveguide integrated with thermally plastic deformable actuators [17] and a water-based absorptive switch [18]. Using a previously established electrothermal hydraulic microactuator technology [19, 20], a very new concept in RF MEMS switches was briefly introduced [21]. Describing that previous research in more detail, this paper develops a single-pole single-throw (SPST) RF MEMS switch architecture that, in principle, can overcome the limitations associated with traditional beam-based architectures employing electrostatic actuation.
2 Concept and Structure 2.1 Characteristics of Paraffin Wax Phase change material (PCM) characteristics can be exploited to realize electrothermal hydraulic microactuators. As a PCM, paraffin wax shows a volumetric expansion of ~15% when it melts, and shrinks back to the initial volume on cooling, as illustrated in Fig. 1. This characteristic has previously been investigated and employed within a number of non-RF MEMS application demonstrators [19, 20]. It can be extended to RF MEMS applications to replace the traditional electrostatic actuation mechanism [21].
Fig. 1. Expansion characteristics of paraffin wax [19]
2.2 Switch Concept The proposed SPST switch consists of paraffin wax microactuators and silicon cantilevers, as illustrated in Fig. 2. Instead of conventional in-plane beams, relatively thick (~90µm) silicon cantilevers having springs and latches are designed to make an ohmic contact with the coplanar waveguide (CPW) transmission line’s signal track. Paraffin wax microactuators control the silicon cantilevers by means of a mechanical push and release mechanism. The simple latching mechanism can maintain both the OFF and ON-state, without continuous DC biasing of any microactuator.
Fig. 2. Basic cross section and top views of the novel RF MEMS switch: (a) OFF-state, (b) ON-state, (c) top view (without cantilever-2). The figure labels the silicon cantilevers 1 and 2, latch, spring, actuators (ON/OFF), metal-plated region, ground (G) and signal (S1, S2) lines, and the Si substrate.
2.3 Paraffin Wax Actuators

The paraffin wax actuator technology has already been described for non-RF MEMS applications [19, 20]. The structure of the container is illustrated in Fig. 3. Paraffin wax fills the bulk-micromachined silicon containers, and is sealed using an elastic diaphragm of polydimethylsiloxane (PDMS). When the required DC bias voltage is applied to the integrated microheater, the paraffin wax expands with the associated increase in heat, and is deliberately shaped into a hemisphere.
2.4 Silicon Cantilevers
The silicon cantilevers are inspired by elements of a microgripper [20]. Two silicon cantilevers are created, one for the ohmic contact and the other for the latching mechanism. With the former, gold is selectively deposited onto the cantilever pins, in order to provide a metal-to-metal ohmic contact with the CPW's signal lines (S1 and S2 in Fig. 2). Fig. 4 shows the cantilever elements and latching mechanisms integrated into one piece of silicon, to simplify assembly. With the most common silicon wafer thickness of ~525 µm, wide ohmic contacts can be created for high RF power applications. Moreover, these microactuators can easily introduce OFF-state gap separation distances greater than 200 µm, to enhance the isolation characteristic.
Fig. 3. Electrothermal hydraulic microactuator
2.5 Assembly and Ohmic Contacts

The paraffin wax microactuators offer only an expansion force, which can lift the cantilever beams; it was therefore very difficult to achieve a constant ohmic contact pressure. As a result, the beam was designed to be tilted at an angle below the horizontal, in order to provide a spring force between the cantilever and the CPW lines after assembly, as illustrated in Fig. 5.
Fig. 4. Designed silicon cantilever with springs and latches
Fig. 5. Beam structure having a tilted angle
3 Simulations

Iterative simulations using ANSYS and HFSS were carried out to optimize the mechanical and RF characteristics, respectively. The mechanical structure simulation tool ANSYS (V9.0) was used to design the silicon cantilever structure. The ANSYS simulations were divided into two parts: cantilever pin alignment simulations and latching mechanism simulations. Fig. 6 shows the ANSYS simulation results. The RF characteristics of the switch, with CPW lines including the trench structure and the silicon cantilever designed using ANSYS, then need to be verified. If the mechanically optimized cantilever structure is not suitable for RF performance, the design has to be modified. The final simulation results up to Ku-band, from 3D models using HFSS™, are shown in Fig. 7. It can be seen that the predicted ON-state insertion and return loss levels up to X-band (12 GHz) are less than 0.7 dB and greater than 20 dB, respectively, and the OFF-state isolation is greater than 35 dB. It should be noted that the insertion loss values include the loss from the feed lines because, unlike the wave port configuration, direct de-embedding is not applicable with the lumped port configuration used within these simulations.
(a) Alignment simulation
(b) Latching mechanism simulation
Fig. 6. Silicon cantilever design using ANSYS (Unit of stress: MPa)
(a) 3D model of the switch in ON-state; (b) 3D model of the switch in OFF-state; (c) predicted insertion and return losses in ON-state, and isolation in OFF-state (loss in dB versus frequency from 0 to 12 GHz; traces: S21_ON insertion loss, S11_ON return loss, S21_OFF isolation)
Fig. 7. Simulated RF characteristics using HFSS
4 Results

A photograph of the assembled SPST switch is shown in Fig. 8. The switch is assembled by inserting the two microactuators into the main silicon substrate and the cantilever substrate into the assigned slots of the main substrate. RF measurements of the assembled switches were conducted up to X-band. Fig. 9 shows the measured frequency responses of the switch in comparison with the electromagnetic simulation results from HFSS. It can be seen that the measured ON-state insertion and return losses are less than 0.8 dB and greater than 15 dB, respectively, up to X-band. The electromagnetic simulation results follow the measured frequency responses well. The measured OFF-state isolation is greater than 30 dB, up to 12 GHz. Once again there is good agreement between the measured and modelled frequency responses. The applied bias and actuation times are summarised in Tables 1 and 2, respectively. The microactuator itself can change its phase within 2 to 3 seconds, but it takes longer to reach a power level sufficient to complete the latching and subsequent releasing of the beam.
Fig. 8. Assembled switch with main substrate dimensions of 15 mm x 7 mm
(a) ON-state
(b) OFF-state
Fig. 9. Measured frequency responses in comparison with HFSS
Table 1. Applied bias

States                  Voltage (V)    Current (mA)
Latching (ON → OFF)     11.5           46
Releasing (OFF → ON)    10.0           48
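As a rough back-of-the-envelope check (ours, not the paper's), the DC power drawn by a microactuator during switching follows directly from Table 1; since the latch holds both states, this power is consumed only during the transitions:

# Python: instantaneous DC actuation power, P = V * I, values from Table 1
latching_W = 11.5 * 0.046    # ON -> OFF: about 0.53 W
releasing_W = 10.0 * 0.048   # OFF -> ON: about 0.48 W
print(f"latching ~{latching_W:.2f} W, releasing ~{releasing_W:.2f} W")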
5 Conclusions

A novel RF MEMS switch, intended for RF power applications, has been developed. The design process for the switch has been established using ANSYS for the mechanical structural design and HFSS for the RF design, and the fabricated switch has excellent RF performance under small-signal conditions. Compared to conventional RF MEMS switches, the switch has a completely unique physical structure and is expected to have advantages in RF power handling. There are additional benefits, such as a latching mechanism to minimize DC power consumption, self-alignment during assembly and robust operation. However, all this comes at the price of slow switching speed and large size. Further work is needed to improve the switching speed, reduce the DC power consumption during switching operations, undertake measurements at high RF power levels, perform reliability and lifetime testing, and then investigate size and packaging issues.
References

[1] Rebeiz G M (2003) RF MEMS – Theory, design and technology. John Wiley & Sons
[2] Lucyszyn S (2004) Review of radio frequency microelectromechanical systems (RF MEMS) technology. IEE Proceedings – Science, Measurement and Technology 151(2):93–103
[3] Katehi L P B, Harvey J F, Brown E (2002) MEMS and Si micromachined circuits for high-frequency applications. IEEE Trans. Microwave Theory and Tech. 50:858–866
[4] Strohm K M, Schauwecker B, Pilz D et al (2001) RF-MEMS switching concept for high power applications. Topical Meeting on Silicon Monolithic Integrated Circuits in RF Systems Dig., Ann Arbor, MI
[5] Peroulis D, Pacheco S P, Katehi L P B (2004) RF MEMS switches with enhanced power-handling capabilities. IEEE Transactions on Microwave Theory and Techniques 52:59–68
[6] Grenier K, Dubuc D, Ducarouge B et al (2005) High power handling RF MEMS design and technology. 18th IEEE Int. Conf. on Micro Electro Mechanical Systems (MEMS'2005), Miami, USA 155–158
[7] Simon W, Schauwecker B, Lauer A et al (2002) Designing a novel RF MEMS switch for broadband power applications. Europ. Microw. Conf., Milan 519–522
[8] Nishijima N, Hung J J, Rebeiz G M (2004) Parallel-contact metal-contact RF MEMS switches for high power applications. Micro Electro Mechanical Systems, 17th IEEE Int. Conf., Maastricht, Netherlands 781–784
[9] McErlean E P, Hong J S, Tan S G et al (2005) 2X2 RF MEMS switch matrix. Microwaves, Antennas and Propagation, IEE Proceedings 9:449–454
[10] Chow L L W, Wang Z, Jensen B D et al (2005) Skin effect aggregated heating in RF MEMS suspended structures. Microwave Symposium Digest, IEEE MTT-S International 2143–2146
[11] Jensen B D, Chow L W, Webbink R F et al (2004) Force dependence of RF MEMS switch contact heating. 17th IEEE International Conference on MEMS 137–140
[12] Palegol C, Pothierl A, Gasseling T et al (2006) RF-MEMS switched varactor for high power applications. 2006 IEEE MTT-S Int. Microwave Symp. Dig., 1:35–38
[13] Siegel C, Ziegler V, Schönlinner B et al (2006) Simplified RF-MEMS switches using implanted conductors and thermal oxide. Proc. 36th European Microwave Conf., Manchester, UK 1735–1738
[14] Kohn E, Menzel W, Hernandez-Guillen F J et al (2003) Evaluation of CVD diamond for heavy duty microwave switches. IEEE MTT-S Dig. 3:1625–1628
[15] Kohn E, Kusterer J, Denisenko A. Diamond for high power electronics. Microwave Symposium Dig. 901–904
[16] Kwon H, Choi D J, Park J H et al (2007) Contact materials and reliability for high power RF-MEMS switches. IEEE 20th International Conference on Micro Electro Mechanical Systems 231–234
[17] Daneshmand M, Mansour R R, Sarkar N (2004) RF MEMS waveguide switch. Microwave Symposium Dig., IEEE MTT-S International 2:589–592
[18] Chen C H, Peroulis D (2007) Liquid RF MEMS wideband reflective and absorptive switches. IEEE Transactions on Microwave Theory and Techniques 55(12) Part 2:2919–2929
[19] Lee J S, Lucyszyn S (2005) A micromachined refreshable Braille cell. IEEE/ASME Journal of Microelectromechanical Systems 14(4):673–682
[20] Lee J S, Lucyszyn S (2007) Design and pressure analysis for bulk-micromachined electrothermal hydraulic microactuators using a PCM. Sensors and Actuators A: Physical, Elsevier 133(2):294–300
[21] Choi J Y, Lee J S, Lucyszyn S (2006) Development of RF MEMS switches for high power applications. IEEE Mediterranean Microwave Symposium (MMS'2006), Genova, Italy 294–297
Robust Algebraic Approach for Radar Signal Processing: Noise Filtering, Time-Derivative Estimation and Perturbation Estimation
Sungwoo Choi1, Brigitte d'Andréa-Novel2, and Jorge Villagra3
1 Ecole des Mines de Paris, Centre de Robotique, Paris, France [email protected]
2 Ecole des Mines de Paris, Centre de Robotique, Paris, France
3 Universidad Carlos III, Departamento de Ingenieria de Sistemas y Automatica, Madrid, Spain
Abstract. In this paper, we propose a robust algebraic approach to deal with noisy radar signals in the context of Stop-and-Go scenarios, which need inter-distance measurements for the control law. In general, commercial radars for cars are of such low quality that the measured signal is very noisy, and the performance is not good enough for the signal to be used directly in the control law. In addition, a constant bias and a distance-proportional perturbation factor, which varies during the lifetime of the radar, can corrupt the signal too. When the signal is noisy, its time derivative is especially corrupted. Hence, an algebraic nonlinear estimation technique is proposed to deal with filtering, estimating time derivatives and estimating perturbation factors. It is important to point out that these filters, differentiators and estimators are not of asymptotic nature and do not require any statistical knowledge of the corrupting noises.
1 Introduction

1.1 Generality

Driving assistance systems like Adaptive Cruise Control (ACC) and Stop-and-Go have been the objective of much research in recent years. The former concerns inter-distance control on highways, where the vehicle velocity remains mainly constant, whereas the latter deals with vehicles circulating in towns, with frequent and sometimes hard stops and accelerations. In most of the reported works (e.g. [1]), ACC and Stop-and-Go controls are based on the inter-distance measurement. An interesting work is [6], where the authors propose a nonlinear dynamic reference model taking into account safety and comfort specifications. Contrary to other similar methods, this work properly unifies the ACC and Stop-and-Go scenarios, using only one reference model and controller, and shows good results in both cases. However, this work makes an assumption which is never fulfilled in real situations: that the velocity and the acceleration of the leader vehicle are perfectly measured by suitable sensors. In general, an ACC-equipped vehicle has a radar (or a laser telemeter) which measures the distance to the preceding vehicles. Measurements coming from low cost
radars are so noisy that a proper time derivative is very difficult to obtain. To deal with this task, simple low-pass filtering can be used, but it produces a phase lag which is not adequate for real-time estimation. Some authors (e.g. [2]) have proposed a constant-gain Kalman filter, but this technique needs a priori statistical knowledge of the noise.

1.2 Outline of the Paper

Firstly, a model for radar measurement signals, based on experimental data, will be introduced. It consists of a constant bias, a multiplicative factor, and an additive noise corruption. The multiplicative factor may vary during the lifetime of the radar; hence, its real-time estimation is important for the robustness of the control strategy. In this context, our second contribution consists in noise filtering and in estimating derivatives and the perturbation factor. An algebraic framework initiated in [5], which is very robust to any type of noise, is not of asymptotic nature, and does not require any statistical knowledge of the noises, has been used for this work. The remainder of the paper is organized as follows. Section 2 presents the algebraic framework for nonlinear estimation. The radar modeling is introduced in Section 3. Section 4 presents estimators for noise filtering, derivatives and the perturbation factor. Experimental results are displayed in Section 5. Finally, conclusions and some future works are drawn in Section 6.
2 Algebraic Framework for Numerical Differentiation¹

Start with a polynomial time function $x_N(t) = \sum_{v=0}^{N} x^{(v)}(0) \frac{t^v}{v!} \in \mathbb{R}[t]$, $t \geq 0$, of degree N. The usual notations of operational calculus (see, e.g., [9]) yield

$X_N(s) = \sum_{v=0}^{N} \frac{x^{(v)}(0)}{s^{v+1}}.$

Multiply both sides by positive powers of $\frac{d}{ds}$. The quantities $x^{(v)}(0)$, $v = 0, 1, \ldots, N$, satisfy the following triangular system of linear equations:

$\frac{d^{\alpha}\left(s^{N+1} X_N\right)}{ds^{\alpha}} = \frac{d^{\alpha}}{ds^{\alpha}} \left( \sum_{v=0}^{N} x^{(v)}(0)\, s^{N-v} \right), \qquad 0 \leq \alpha \leq N-1. \quad (1)$

Multiplying both sides of Eq. (1) by $s^{-\bar{N}}$, $\bar{N} > N$, allows one to get rid of time derivatives, i.e., of $s^{\mu} \frac{d^{\iota} X_N}{ds^{\iota}}$, $\mu = 1, \ldots, \bar{N}$, $0 \leq \iota \leq N$.

¹ See [4] for more details.
Consider now an analytic time function, defined by the power series $x(t) = \sum_{v=0}^{\infty} x^{(v)}(0) \frac{t^v}{v!}$, which is assumed to be convergent around $t = 0$. Approximate $x(t)$ by the truncated Taylor expansion $x_N(t) = \sum_{v=0}^{N} x^{(v)}(0) \frac{t^v}{v!}$ of order N. Good estimates of the derivatives are obtained by the same calculations as above. A most elegant and powerful algorithmic procedure for obtaining a corresponding numerical differentiator is provided in [7]. It will be exploited in the sequel.
3 Radar Signal Modeling

A test was carried out to assess the accuracy of the radar measurements. Objects were placed at different known positions and the radar data were recorded. In general, the raw data from the radar consist of n impact points (n < 100). These points are regrouped, using a clustering algorithm, into clusters that are supposed to represent objects in the scene. Fig. 1 shows the difference between the real distances and the measured ones. We can verify that these perturbations $d_p$ are typically composed of a constant bias $\mu$, a linearly distance-dependent perturbation $\kappa d_r$ and an additive noise corruption $\eta_1$:

$d_p = \mu + \kappa d_r + \eta_1(t)$  (2)

where $d_r$ is the real absolute distance. The measured distance $d_m$ can then be written as follows:

$d_m = \mu + (\kappa + 1) d_r + \eta_1(t).$  (3)

According to Fig. 1, the values $\kappa = 0.014$ and $\mu = 0.26$ are expected.

Fig. 1. Radar measurement accuracy test
Standard radars can also provide relative longitudinal velocity between vehicles. This kind of measurement is usually modeled as follows:
$v_{rm} = v_r + \eta_2(t)$  (4)

where $v_r$ and $v_{rm}$ are the real and measured relative velocities, and $\eta_2$ is the corresponding additive noise corruption.
4 Estimator Developments

4.1 Noise Filtering

Based on the techniques introduced in Section 2, we present here an estimator to obtain the closest value to the real inter-distance, i.e. to properly filter high-frequency additive noise in the inter-distance measurement. Let us consider the measured inter-distance $d_m$ and approximate it locally by a linear polynomial $d_m(t) = a_0 + a_1 t$ in sliding windows. Rewriting this in the operational domain gives:

$d_m(s) = \frac{a_0}{s} + \frac{a_1}{s^2}.$  (5)

Noise-filtered value estimation leads to estimating the coefficient $a_0$ at each instant. To eliminate $a_1$, we first apply the operator $\frac{d}{ds} s^2$ to Eq. (5), and secondly we multiply by $s^{-3}$ in order to avoid any derivative term in the time domain. As a result of these operations, the following estimator for $a_0$ can be written in the time domain:

$\hat{a}_0 = \frac{2!}{T^2} \int_0^T (2T - 3t)\, d_m(t)\, dt.$  (6)
4.2 Time Derivative Estimation

In a similar way, if we apply the operators $\frac{d}{ds}$ and $s^{-2}$ to Eq. (5), we obtain:

$\hat{a}_1 = -\frac{3!}{T^3} \int_0^T (T - 2t)\, d_m(t)\, dt.$  (7)
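To make the estimators concrete, here is a minimal numerical sketch (our illustration, not code from the paper): Eqs. (6) and (7) are discretized with the trapezoidal rule on one uniformly sampled sliding window. The function and signal names are our own.

import numpy as np

def _trapz(y, dt):
    # trapezoidal rule on a uniformly sampled window
    return dt * (y.sum() - 0.5 * (y[0] + y[-1]))

def filter_and_differentiate(d_m, dt):
    """Discretization of Eqs. (6) and (7) on one window [0, T].

    Returns (a0_hat, a1_hat): the noise-filtered inter-distance and its
    time derivative, both referred to the start of the window."""
    n = len(d_m)
    T = (n - 1) * dt
    t = np.linspace(0.0, T, n)
    a0_hat = (2.0 / T**2) * _trapz((2*T - 3*t) * d_m, dt)   # Eq. (6)
    a1_hat = -(6.0 / T**3) * _trapz((T - 2*t) * d_m, dt)    # Eq. (7), 3! = 6
    return a0_hat, a1_hat

# Sanity check on a noisy ramp d_m(t) = 36 - 2 t + noise
rng = np.random.default_rng(0)
t = np.arange(0.0, 0.5, 0.001)
d_m = 36.0 - 2.0 * t + 0.1 * rng.standard_normal(t.size)
print(filter_and_differentiate(d_m, 0.001))   # ~ (36.0, -2.0)

On a purely linear signal both integrals are exact, so the estimators return a0 and a1 up to discretization error; the (2T - 3t) and (T - 2t) weights make them insensitive to zero-mean noise averaged over the window.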
4.3 Perturbation Factor Estimation

The real distance $d_r$ can be considered as follows:

$d_r = F + C,$  (8)

where $F = \int v_r(\tau)\, d\tau$ and C is an initial distance measurement. Replacing $d_r$ by Eq. (8) in Eq. (3), and rewriting it in the operational domain, gives:

$d_m(s) = \frac{\mu}{s} + (\kappa + 1)\left(F(s) + \frac{C}{s}\right).$  (9)

In order to eliminate $\mu$ and C, let us apply the operator $\frac{d}{ds}$. Then, applying $s^{-2}$ allows an integral form in the time domain:

$\widehat{\kappa + 1} = \frac{\int_0^T (T - 2t)\, d_m(t)\, dt}{\int_0^T (T - 2t)\, F(t)\, dt}.$  (10)

The parameter $\kappa$ is therefore identifiable if $\int_0^T (T - 2t)\, F(t)\, dt \neq 0$. Since this multiplicative factor can vary during the life cycle of the radar, it is interesting to estimate it in real time, but we assume that its rate of variation is very small. Therefore, we do not estimate $\kappa$ all the time; once good estimates of $\kappa$ are obtained, it is fixed in order to properly estimate $\mu$ during a predetermined interval of time. In a similar way, the estimator of $\mu$ can be obtained from Eq. (9):

$\hat{\mu} = \frac{2!}{T^2} \int_0^T (T - t)\left(d_m(t) - (\kappa + 1) F(t)\right) dt - (\kappa + 1) C$  (11)

We can also take advantage of the estimation of $\kappa$ as a diagnosis tool. Because of its small expected value (= 0.014), it can be initially ignored. However, if the estimated value of $\kappa$ becomes non-negligible, we take it into account to obtain more accurate noise filtering and time-derivative estimation. These can be obtained by comparing the polynomial $d_m(t) = a_0 + a_1 t$ to Eq. (3):

$\hat{d}_r = \frac{a_0 - \mu}{\kappa + 1}$  (12)

$\dot{\hat{d}}_r = \frac{a_1}{\kappa + 1}.$  (13)
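A corresponding sketch for Eq. (10) (again ours, with hypothetical signal names): the (T - 2t) weight annihilates the constants μ and (κ + 1)C, so the ratio of the two windowed integrals isolates κ + 1; F is obtained by cumulative integration of the relative velocity.

import numpy as np

def _trapz(y, dt):
    return dt * (y.sum() - 0.5 * (y[0] + y[-1]))

def estimate_kappa_plus_1(d_m, v_r, dt):
    """Discretization of Eq. (10) on one window [0, T]."""
    n = len(d_m)
    T = (n - 1) * dt
    t = np.linspace(0.0, T, n)
    w = T - 2.0 * t
    # F(t): cumulative trapezoidal integral of the relative velocity
    F = np.concatenate(([0.0], np.cumsum(0.5 * (v_r[1:] + v_r[:-1])) * dt))
    denom = _trapz(w * F, dt)
    if abs(denom) < 1e-12:            # identifiability condition of Eq. (10)
        raise ValueError("kappa + 1 not identifiable on this window")
    return _trapz(w * d_m, dt) / denom

# Noise-free check with the expected kappa = 0.014 and mu = 0.26 (Fig. 1)
dt = 0.01
t = np.arange(0.0, 5.0, dt)
v_r = -2.0 * np.ones_like(t)                 # closing in at 2 m/s
d_r = 36.11 - 2.0 * t                        # true inter-distance
d_m = 0.26 + 1.014 * d_r                     # Eq. (3) without noise
print(estimate_kappa_plus_1(d_m, v_r, dt))   # ~ 1.014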
5 Experimental Results

In this section, we describe the experimental results obtained. A car-following Stop-and-Go scenario, with a radar-equipped car and a fixed object, has been tested in a straight-road situation. In this scenario, the fixed object is placed in front of the car at an initial distance (= 36.11 m), and the car begins to move toward the object while measuring the distance to it.
Fig. 2. Comparison between noisy distance measurements and their filtered values
Fig. 2 shows a comparison between the noisy distance measurements and the estimated filtered signals ($a_0$ estimation). The right-hand (zoomed) plot in Fig. 2 shows how well the estimator performs. Estimated time derivatives ($a_1$ estimation) of the noisy distance measurements and classical differentiations ($\Delta d_m(t)/\Delta t$) are compared in Fig. 3, which shows that our time-derivative estimator is very robust to noise and gives proper estimates.
Fig. 3. Comparison between estimated derivatives and classical derivatives
The radar used in our experiment shows very low performance for velocity measurements. The velocities measured by the radar, the time derivatives of the distance measurements, and the car velocities (which, particularly in this scenario, can be considered as ideal velocity measurements) are compared in Fig. 4. The velocities measured by the radar are so noisy that good estimates of κ cannot be expected from them. However, we can estimate κ with the car velocities, and we display the results in Fig. 5.
Fig. 4. Comparison of velocities
As shown in Fig. 4, the car was static until 15 s and after 38 s, so the reliable data are those between 15 and 38 seconds. Estimation of κ by Eq. (10) in this interval shows pertinent estimated values, as expected (those in the square in Fig. 5). The values estimated with the car velocities are more precise than those obtained with the velocities measured by the radar, and their mean value is almost the same as the expected value (see Fig. 6). Hence, more precise velocity measurements are needed for good performance in the estimation of κ.
Fig. 5. Perturbation factor κ estimations
Fig. 6. Pertinent values of estimated κ and their mean values
Fig. 7. Estimated constant offset μ
We have tried to estimate the constant offset μ with the well-estimated value of κ (= 0.0149), but the results in Fig. 7 are not as good as expected. A more sophisticated estimator should be considered to tackle this problem.
6 Conclusions and Future Works

A new estimation approach for radar signal processing has been proposed. It is based on algebraic estimation techniques and shows good robustness to measurement noise. In general, the noise filtering and the time-derivative estimation perform well. The estimation of κ is also good, provided that precise velocity measurements are available. If κ is negligible, it is highly recommended to use the a0 and a1 estimators of the distance measurements to obtain the desired realistic data. This algebraic estimation approach can also be used to deal with unmodeled dynamics in control ([3, 8]). Thus, a model-free engine/brake control strategy for a Stop-and-Go context is under study.
Acknowledgments. The authors would like to thank Michel FLIESS (INRIA-ALIEN and LIX) and Hugues MOUNIER (IEF, Univ. Paris-Sud) for useful discussions.
References

[1] M Brackstone and M McDonald (2000) Car-following: A historical review. Transportation Research F, 2:181-196
[2] S Cho and JK Hedrick (1995) Vehicle longitudinal control using an adaptive observer for automated highway systems. Proc. American Control Conf., pp. 3106-3110, Seattle
[3] M Fliess and C Join (2008) Intelligent PID controllers. Proc. 16th Mediterranean Conf. Control Automation, Ajaccio
[4] M Fliess, C Join, H Sira-Ramirez (2008) Non-linear estimation is easy. Internat. J. Modelling Identification Control, Vol. 3
[5] M Fliess and H Sira-Ramirez (2007) Closed-loop parametric identification for continuous-time linear systems via new algebraic techniques. Continuous-Time Model Identification from Sampled Data, Springer
[6] J Martinez and C Canudas-de-Wit (2007) A safe longitudinal control for adaptive cruise control and Stop-and-Go scenarios. IEEE Trans. Control Systems Technology, 15:246-258
[7] M Mboup, C Join, M Fliess (2007) A revised look at numerical differentiation with an application to nonlinear feedback control. Proc. 15th Mediterranean Conf. Control Automation, Athens
[8] J Villagra, B d'Andréa-Novel, S Choi, M Fliess, H Mounier, Robust Stop&Go control strategy: an algebraic approach for nonlinear estimation and control. Submitted to International Journal of Vehicle Autonomous Systems
[9] K Yosida (1984) Operational Calculus: A Theory of Hyperfunctions. Springer, New York (translated from the Japanese)
A New 3-Axis Force/Moment Sensor for an Ankle of a Humanoid Robot
In-Young Cho1 and Man-Wook Han2
1 Dept. of Mechanical Design and Manufacturing Eng., Changwon National University, Changwon
2 Vienna University of Technology, Austria [email protected]

Abstract. For the biped walking of a humanoid robot, a three-axis force/moment sensor in the ankle, which measures the force FZ and the moments MX and MY simultaneously, is necessary. There are many commercially available six-axis force/torque sensors, but most of them are relatively big, heavy and expensive. For this reason, the mechanical design of a new small-size, lightweight and low-cost three-axis force/moment sensor based on rectangular beams is described. In the paper, the equations to calculate the strains of the beams under the force or the moments are derived; they are used to design the size of the sensing parts of the sensor. The reliability of the derived equations is verified by a finite element method analysis.
1 Introduction

The development of humanoid robots that walk like humans is currently one of the favourite research topics in the robotics community. In order to make a humanoid robot which walks like a human being, many requirements have to be fulfilled. One of them is the control of the movement of the robot's centroid, which requires the simultaneous measurement of the force and the moments in the ankles during walking. Currently, six-axis sensors are widely used; for a "Cost Oriented" robot, a three-axis force/moment sensor for the force Fz and the moments Mx and My is enough. First we evaluated commercially available force/moment sensors. As pointed out earlier, they are mostly six-axis sensors with some disadvantages. In the United States and Japan [1-3] some sensors have already been developed, but most of them are quite expensive and not suitable for the ankle of a robot. Therefore, in this paper, a new structure of a three-axis force/moment sensor is described. All necessary equations are derived to predict the bending strains on the surfaces of the sensing parts under the force or the moments. The size of the sensing parts of the force/moment sensor is calculated with the derived equations. Finally, the results are verified by the finite element method (FEM) using the ANSYS software.
2 Sensor Design

2.1 Modelling of the Sensor

Fig. 1 shows 4 different shapes of the sensor, corresponding to different production technologies. The shape has an influence on the strain and deformation. The size of the outer square ring is 8 cm × 8 cm, which is similar to the size of a human ankle.
Table 1. Maximum deformation under the force Fz (1000 N)

Shape                        (a)       (b)       (c)       (d)
Maximum deformation (μm)     0.1260    0.4419    0.3180    0.4860
Under the assumption that the material is Aluminium 2024-T351 and the force Fz is 1000 N, all 4 structures were analyzed by the finite element method using SolidWorks and COSMOSXpress. The maximum deformations are shown in Table 1. Shape (d) has the biggest deformation, and shape (b) has a similar deformation to (d). Because of its simple structure and easy manufacturing, shape (b) is more suitable than shape (d). The structure of the sensing parts of a three-axis force/moment sensor should be modelled to get an interference error of 0% when the force Fz and the moments Mx and My are applied to it. Thus, the structure of the new three-axis force/moment sensor is modelled as shown in Fig. 2. The sensor is composed of an outer square ring serving as a fixture ring, the force- and moment-transmitting block, and the rectangular beams A-D.
Fig. 1. Sensor shapes
Fig. 2. Structure of the new three-axis force/moment sensor
The block transmitting the applied force and moment is located at the center of the sensor, and four rectangular beams measure the force and the moments. The sensing parts are used for the measurement of:
• the force (Fz) on the upper and lower surfaces of rectangular beams A and B
• the moment (Mx) on the upper and lower surfaces of rectangular beams C and D
• the moment (My) on the upper and lower surfaces of rectangular beams A and B.
The fixture ring of the three-axis force/moment sensor is attached to the foot and the transmitting block to the ankle. The forces and/or moments on the ankle and foot are transmitted by the four rectangular beams to the transmitting block.

2.2 Theoretical Analysis
In order to analyze the strains on the rectangular beams A-D, the equations to calculate the strain of a beam under the force Fz need to be derived. It is sufficient to analyze the strain of only one beam, because the four rectangular beams A-D have the same size. Also, the analysis of the strain of a beam under the moment Mx is identical to that under My, so the equations derived for the strains under the applied moment My can also be used for the strains under the applied moment Mx. The size of the sensing parts can be determined using the derived equations.

2.2.1 Strains under Fz

Fig. 3 shows the free body diagram of the rectangular beam under the force, for analyzing the strain of each beam when the force Fz is applied along the z-direction at the end of each rectangular beam, point E.
Fig. 3. Free body diagram under the force Fz
The force $F_{1z}$ for one beam is

$F_{1z} = P = \frac{F_z}{4}$  (1)

where $F_{1z}$ is the z-direction reaction force generated in the rectangular beam due to the force $F_z$. The moment equilibrium condition at point E can be written as

$M_{1y} - F_{1z} \cdot l = 0$  (2)

where $M_{1y}$ is the y-direction moment generated in the rectangular beam due to the force $F_z$.
By substituting Eq. (1) into (2), the moment $M_{1y}$ is

$M_{1y} = \frac{F_z \cdot l}{4}$  (3)

The force $F_z$ is derived using the theoretical finite element method, and can be expressed as

$F_z = \frac{48EI}{l^3}\, v$  (4)

where $v$ is the vertical displacement of point E. From (4) the vertical displacement $v$ follows:

$v = \frac{F_z l^3}{48EI}$  (5)

The equations for the rated strains on the surface of the rectangular beam are derived by substituting Eq. (3) into the bending moment equation $\varepsilon = Mc/(EI)$, and can be written as

$\varepsilon_{upp} = \frac{72 EI}{E b h^2 l^2}\, v$  (6a)

$\varepsilon_{low} = -\frac{72 EI}{E b h^2 l^2}\, v$  (6b)
where $\varepsilon_{upp}$ is the strain on the upper surface and $\varepsilon_{low}$ the strain on the lower surface of each of the beams.

2.2.2 Strains under Mx or My

Fig. 4 shows the free body diagram of the rectangular beam under the moment. The equations under the moment My may be applied to beams C and D for the moment Mx, because these rectangular beams have the same structure.
Fig. 4. Free body diagram under the moment My
When the moment $M_y$ is applied to the transmitting block center point O, the forces $F_{1z}$, $F_{2z}$ along the z-direction, the forces $F_{1x}$, $F_{2x}$ along the x-direction, and the moments $M_{1y}$, $M_{2y}$ along the y-direction are generated at both ends of the rectangular beam due to the moment $M_y$. The force equilibrium conditions at point O are

$F_{1x} = -F_{2x}$  (7)

$F_{1z} = F_{2z}$  (8)

The moment equilibrium condition at point O can be written as

$T - F_{1z}(l + d) - F_{2z}(l + d) = 0$  (9)

By substituting Eq. (8) into (9), the moment $M_y$ can be written as

$M_y = 4 F_{1z} (l + d)$  (10)

where $T = M_y/2$, because the moment $M_y$ applied to beams A and B acts on beams C and D as torsion. The moment $M_y$ is derived using the theoretical finite element method, and can be expressed as

$M_y = \frac{16 EI}{l + d}\, \phi$  (11)

where $\phi$ is the rotational angle at point O. Using the derived equations, the rotational angle $\phi_m$ can be rewritten as

$\phi_m = \frac{3 M_y (l + d)}{4 E b h^3}$  (12)

For the applied torsion T, which produces a torsional angle on beams C and D,

$T = \frac{T_y}{2}$  (13)

where the moment $M_y$ and the torsion $T_y$ have the same value. The twist angle is $\phi = TL/C$, where C is the torsional rigidity of the beam. For a rectangular section $C = \beta b h^3 G$, where $\beta$ is a function of $b/h$ and $L = l + d$. The rotational angle $\phi_t$ can thus be calculated as

$\phi_t = \frac{M_y (l + d)(1 + \nu)}{\beta b h^3 E}$  (14)

The total rotational angle $\phi_T$ is the sum of the rotational angles due to the moment and the torsion:

$\phi_T = \phi_m + \phi_t = \frac{(l + d)\{3\beta + 4(1 + \nu)\}}{4 E \beta b h^3}\, M_y$  (15)

The equations for the rated strains on the surface of the rectangular beam are derived using the equation $\varepsilon = M/(EZ)$, and can be written as

$\varepsilon_{upp} = \frac{12 E \beta b h^3}{E b h^2 (l + d)\{3\beta + 4(1 + \nu)\}}\, \phi_T$  (16a)

$\varepsilon_{low} = -\frac{12 E \beta b h^3}{E b h^2 (l + d)\{3\beta + 4(1 + \nu)\}}\, \phi_T$  (16b)
where $\varepsilon_{upp}$ is the strain on the upper surface and $\varepsilon_{low}$ the strain on the lower surface of each beam.

2.3 Natural Frequency

The force/moment sensor is designed in consideration of the natural frequency and the translational and torsional stiffness, in order to be usable under dynamic conditions. The equation for the translational stiffness under the force is $F = k \cdot \delta$, where $\delta$ is the displacement and $k$ the translational stiffness. By substituting (4) into $F = k \cdot \delta$, the translational stiffness under the force can be rewritten as

$k_F = \frac{48EI}{l^3}$  (17)

The equation for the torsional stiffness under the moments and torsions is $M = k \cdot \theta$, where $\theta$ is the rotational angle due to the moment and torsion. By substituting (15), the torsional stiffness can be rewritten as

$k_M = \frac{4 E \beta b h^3}{(l + d)\{3\beta + 4(1 + \nu)\}}$  (18)
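As a hedged numerical illustration (ours, not the paper's), plugging the beam dimensions chosen in Section 3.1 into Eqs. (17)-(18), together with the natural frequency relation of Eq. (19) below, gives the order of magnitude of the stiffness. The material constants, the torsion coefficient β, the block half-length d and the effective mass are assumed values, not taken from the paper.

import math

# Beam dimensions from Section 3.1 (m); everything below them is assumed.
b, h, l = 0.012, 0.009, 0.024   # width, thickness, length of one beam
E = 73.1e9                       # Young's modulus of Al2024-T351 (Pa), assumed
nu = 0.33                        # Poisson's ratio, assumed
beta = 0.18                      # torsion coefficient for b/h ~ 1.3, assumed
d = 0.010                        # transmitting-block half-length (m), assumed
m_eff = 0.5                      # effective mass (kg), assumed

I = b * h**3 / 12                # second moment of area of the beam section
k_F = 48 * E * I / l**3                                             # Eq. (17)
k_M = 4 * E * beta * b * h**3 / ((l + d) * (3*beta + 4*(1 + nu)))   # Eq. (18)
f_n = math.sqrt(k_F / m_eff) / (2 * math.pi)                        # Eq. (19)

print(f"k_F ~ {k_F/1e6:.0f} MN/m, k_M ~ {k_M/1e3:.1f} kN*m/rad, f_n ~ {f_n:.0f} Hz")

With these assumptions, k_F comes out near 185 MN/m, consistent with the upper curves of Fig. 5.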
3 Design and Analysis of the Sensor

3.1 Design of the Sensor
Like the size of a human ankle, the maximum size of the sensing parts is limited to 8 cm × 8 cm. The width and height of the outer square ring are limited to 1 cm. The outer square ring is not considered in the equations, because it does not deform under the force and the moment. The material used is Al2024-T351.

Fig. 5. The translational stiffness for each width and each thickness of the beam (Kf in MN/m, up to 250 MN/m, versus h from 0.002 m to 0.012 m, curves (a)-(g))
Fig. 5 shows the translational stiffness versus the thickness of the rectangular beam for the cases (a) width b = 0.012 m, length l = 0.024 m; (b) b = 0.011 m, l = 0.0245 m; (c) b = 0.010 m, l = 0.025 m; (d) b = 0.009 m, l = 0.0255 m; (e) b = 0.008 m, l = 0.026 m; (f) b = 0.007 m, l = 0.0265 m; (g) b = 0.006 m, l = 0.027 m. Fig. 6 shows the torsional stiffness versus the thickness of the rectangular beam for the same cases (a)-(g).

Fig. 6. The torsional stiffness for each width and each thickness of the beam (Km in kN·m, versus h from 0.002 m to 0.012 m, curves (a)-(g))

The multi-axis force/moment sensor should have as high a natural frequency as possible, in order to be usable under dynamic conditions. Furthermore, the translational stiffness $k_F$ is greater than the torsional stiffness $k_M$ in Figs. 5 and 6. The equation of the natural frequency is

$\omega_n = \sqrt{\frac{k}{m}}$  (19)

where m is the equivalent mass translated from the force or the moment, and k is the stiffness of the force or moment sensor. In consideration of the value of the translational stiffness $k_F$, the size of the sensing parts is determined: the width (b) is 0.012 m, the thickness (h) is 0.009 m, and the length (l) is 0.024 m. When the width and the thickness are both 0.010 m, $k_F$ is the biggest, but the thickness of 0.009 m is taken on the basis of the results of Table 1: it is better for the deformation behaviour to have a difference between the thickness of the outer square ring and the thickness of the rectangular beams in the inner part.

Fig. 7. Attachment location of the strain gauges

Fig. 7 shows the location of the strain gages for the three-axis
force/moment sensor. The strain gages SG1~SG4 are for the measurement of the moment Mx, SG5~SG8 for the moment My, and SG9~SG12 for the force Fz. A full-bridge circuit is constructed for each sensing channel: the strain gages SG1 (U1), SG2 (U2), SG3 (L1) and SG4 (L2) for the moment Mx; SG5 (U1), SG6 (U2), SG7 (L1) and SG8 (L2) for the moment My; and SG9 (U1), SG10 (U2), SG11 (L1) and SG12 (L2) for the force Fz.

3.2 Strain Analysis
The strains at the attachment locations of the strain gages are used to calculate the rated strain and the interference error of each sensing channel through the equation

$\varepsilon = \varepsilon_{upp1} - \varepsilon_{low1} + \varepsilon_{upp2} - \varepsilon_{low2}$  (20)

where $\varepsilon$ is the strain calculated from the full-bridge circuit, $\varepsilon_{upp1}$ is the strain of tension strain gage U1, $\varepsilon_{upp2}$ the strain of tension strain gage U2, $\varepsilon_{low1}$ the strain of compression strain gage L1, and $\varepsilon_{low2}$ the strain of compression strain gage L2.

3.3 Finite Element Method Analysis
The FEM software ANSYS is used to analyze the strain of the sensor body. Fig. 8 shows the meshed three-dimensional model of the sensor structure used for the FEM analysis.
Fig. 8. Modelling of the sensor body (ANSYS)
Fig. 9. Strain under the force Fz
Fig. 9 shows one of the analysis results for the strain under the force Fz. The bright sections indicate a high strain change; the transmitting block shows few changes. Fig. 10 shows one of the analysis results for the strain under the moment Mx or My. Table 2 shows the rated strain of each sensing channel from the theoretical analysis and the FEM analysis. Comparing the FEM analysis with the theoretical analysis, the rated strain error under the force Fz is less than 7.3%, and under the moments Mx and My less than 7.5%. These errors are small.
Fig. 10. Strain under the moment Mx or My

Table 2. Rated strain in theory and FEM analysis

Load   Analysis   Rated strain (μm/m)
Fz     Theory     507
       FEM        470
Mx     Theory     1267
       FEM        1172
My     Theory     1267
       FEM        1172
4 Summary

For the ankle of a "Cost Oriented" humanoid robot, the development and design of a low-cost three-axis force/torque sensor has been described. Currently, mostly six-axis sensors are used; they are heavy, expensive and big. According to our experience, a three-axis sensor is enough, and the developed sensor can be produced reasonably small and cheap. In the future, experiments will be carried out with the manufactured sensor, and the experimental results will be compared with those from the theoretical analysis and the FEM analysis.

Acknowledgments. This work is financially supported by the Ministry of Education and Human Resources Development (MOE), the Ministry of Commerce, Industry and Energy (MOCIE) and the Ministry of Labor (MOLAB) through the fostering project of the Industrial-Academic Cooperation Centered University.
References

[1] ATI Industrial Automation (2005) Multi-axis force/torque sensor. ATI Industrial Automation
[2] BL Autotec (2003) Multi-axis force/torque sensor (BL-FTS-E020). BL Autotec
[3] Nisso Electric Works Co., Ltd (1999) Multi-component load cell
[4] F Aghili et al. (2001) Design of a hollow hexaform torque sensor for robot joints. The International Journal of Robotics Research, 20(12):967-976
[5] JJ Park et al. (2005) Development of the 6-axis force/moment sensor for an intelligent robot's gripper. Sensors and Actuators A, 118:127-134
[6] KT Chen et al. (1997) Shape optimal design and force sensitivity evaluation of six-axis force sensors. Sensors and Actuators A, 63:105-112
[7] GS Kim et al. (1999) Design and fabrication of three-component force/moment sensor using plate-beams. Measurement Science and Technology, 10:295-301
DGPS for the Localisation of the Autonomous Mobile Robots Man-Wook Han Vienna University of Technology, Austria [email protected]
Abstract. The localisation of an autonomous mobile robot is one of the basic components of navigation. Especially when mobile robots are used outdoors, localisation is more difficult than indoors. For the navigation of cars and bicycles, the Global Positioning System (GPS) is widely used; the major problem is the accuracy of the measurement, and most car navigation systems therefore use GPS in combination with odometry. The accuracy of GPS measurement can be improved using additional techniques, like DGPS (Differential Global Positioning System), RTK (Real Time Kinematic) and others. In this paper, a study of DGPS and RTK for the localisation of a mobile robot in an outdoor environment is reported.

Keywords: Differential Global Positioning System (DGPS), RTK (Real Time Kinematic), mobile robot, localisation.
1 Introduction

Autonomous mobile robots are widely used in many fields: museum guide robots, cleaning robots, demining robots, etc. The navigation of mobile robots involves different sub-tasks, like path planning, collision-free movement, and localisation, i.e. finding the current position. Most mobile robots have encoders coupled to the motor shafts, from which the travelled distance is calculated via the number of revolutions of the motor shaft. Because of the unevenness of the floor, backlash of the gears and other effects, the travelled distance calculated from the encoder information will not be the actual distance. It is therefore necessary to correct the position using external sensor information. For the indoor navigation of mobile robots, beacons and other landmarks are used. Most car navigation systems now use GPS signals to find the current position and plan the route to the goal position. Similarly, there are approaches using GPS for the navigation of mobile robots, but the commonly used GPS has a measurement deviation of up to 15 m. There are several ways to improve the accuracy of GPS. In this paper, DGPS (Differential Global Positioning System) and RTK (Real Time Kinematic) are investigated for the localisation of the mobile robot. The results of different measurements will be reported.
2 GPS (Global Positioning System)

The Global Positioning System, usually called GPS, is the only fully-functional satellite navigation system. A constellation of more than two dozen GPS satellites
broadcasts precise timing signals by radio to GPS receivers, allowing them to accurately determine their location (longitude, latitude, and altitude) in any weather, day or night, anywhere on Earth. GPS has become a vital global utility, indispensable for modern navigation on land, sea, and air around the world, as well as an important tool for map-making and land surveying. GPS also provides an extremely precise time reference, required for telecommunications and some scientific research, including the study of earthquakes (Wikipedia, 2006). GPS allows receivers to accurately calculate their distance from the GPS satellites. The receivers do this by measuring the time delay between when the satellite sent the signal and the local time when the signal was received. This delay, multiplied by the speed of light, gives the distance to that satellite. The receiver also calculates the position of the satellite based on information periodically sent in the same signal. By comparing the two, position and range, the receiver can discover its own location (Wikipedia, 2006); a least-squares sketch of this position computation is given at the end of this section. The GPS uses at least 24 satellites, arranged as 4 satellites in each of 6 orbital planes inclined at 55° to the equatorial plane; adjacent planes are separated by an angle of 60°. This subdivision, together with the fact that up to 31 satellites have been placed in orbit in order to counteract failures, permits the use of at least 4 satellites for each position fix. The achievable accuracy of GPS in the civil domain was approximately 100 m in 95% of the measurements. Since 1 May 2000, the artificial deterioration of the accuracy has been switched off by the US military, and the achievable accuracy therefore lies in the range of approximately 15 m. For military use, the system originally already showed an accuracy of 22 m; the presently achievable accuracy is not known. The accuracy of GPS can be improved in several ways (Wikipedia, 2006):
• Differential GPS (DGPS)
• The Wide Area Augmentation System (WAAS)
• Local Area Augmentation System (LAAS)
• Wide Area GPS Enhancement (WAGE)
• Real Time Kinematic Positioning (RTK)
In this work, DGPS and RTK are investigated for the localisation of mobile robots outdoors.
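To make the position computation described above concrete, here is a minimal least-squares sketch (our illustration, not part of the original text): each pseudorange satisfies ρ_i = ||s_i − p|| + b, with satellite positions s_i, receiver position p and a clock-bias distance b, and Gauss-Newton iterations solve for (p, b). All numbers are synthetic.

import numpy as np

def solve_fix(sats, rho, iters=10):
    """Gauss-Newton solution of ||s_i - p|| + b = rho_i for p (m) and bias b (m)."""
    x = np.zeros(4)                          # [px, py, pz, b]; coarse start
    for _ in range(iters):
        diff = x[:3] - sats                  # (n, 3)
        dist = np.linalg.norm(diff, axis=1)  # geometric ranges
        res = dist + x[3] - rho              # pseudorange residuals
        J = np.hstack([diff / dist[:, None], np.ones((len(rho), 1))])
        x -= np.linalg.lstsq(J, res, rcond=None)[0]
    return x

# Synthetic example: 5 satellites at GPS orbital radius, receiver on the surface
rng = np.random.default_rng(1)
R_E, R_SAT = 6.371e6, 2.66e7
sats = rng.standard_normal((5, 3))
sats *= R_SAT / np.linalg.norm(sats, axis=1, keepdims=True)
p_true, b_true = np.array([R_E, 0.0, 0.0]), 150.0
rho = np.linalg.norm(sats - p_true, axis=1) + b_true
print(solve_fix(sats, rho))   # ~ [6.371e6, 0, 0, 150]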
3 DGPS (Differential Global Positioning System)

DGPS (Differential Global Positioning System) is an additional system to improve the accuracy of GPS positioning. Here, the DGPS receiver receives an additional correction signal from a stationary base station. Because the geographic position of the base station is known very precisely, the error of the general positioning can be calculated for every single satellite and can therefore
be corrected as well. The correction signal is transmitted by means of a GSM network. An accuracy of 0.3 m to 2.5 m in the horizontal, or 0.6 m to 5 m for a height measurement, can be reached; with high-precision systems, accuracy in the millimetre range is possible. The accuracy decreases with increasing distance of the measuring point from the base station.
4 Real Time Kinematic (RTK)

Real Time Kinematic (RTK) satellite navigation is a technique used in land survey, based on carrier-phase measurements of the GPS, GLONASS and/or Galileo signals, where a single reference station provides real-time corrections, even to a centimetre level of accuracy (Wikipedia, 2006). With RTK, the accuracy of the DGPS measurement can be improved to about 3 cm; the accuracy depends on the distance to the reference stations. The differences between DGPS and RTK are:
• RTK offers more accurate measurement than DGPS.
• For the initialization, RTK needs at least 5 GPS satellites; after initialization it needs 4 satellites. DGPS needs at least 3 satellites for the localization; with 4 satellites the accuracy of the measurement will be higher.
• RTK needs a dual-channel receiver, whereas for DGPS a single-channel receiver is enough.
• The initialization of RTK takes longer than that of DGPS (about 1 minute).
5 GPS Devices

One of the application areas of such robots is humanitarian demining, for which a four-wheel-driven mobile robot can be used outdoors. For this work, a DGPS receiver of type HiPer+ from the company TOPCON is used, which is connected by Bluetooth with a GPS controller of type TOPCON FC-2000. The HiPer+ integrates a 40-channel dual-frequency GPS receiver capable of receiving GPS and GLONASS satellite signals, an advanced-performance centre-mounted UHF radio antenna, and an advanced dual-frequency GPS antenna. There are models with internal PDL radio modems of various frequencies and channel separations, or with an integrated GSM modem.

5.1 GPS Standard Protocol NMEA 0183
Fig. 1. Pioneer II-AT
The NMEA 0183 protocol is a standard GPS protocol for exchanging GPS raw data; the first version was announced in 1983. Because more than 60 types of GPS receivers from four different producers are available, a standard is necessary for the data exchange. The NMEA 0183 thus serves as an interface definition for data exchange
independent of the particular device. The Topcon HiPer+ also uses this protocol. The interface uses the following parameters:
• Transfer rate: 4800
• Data bits: 8 (d7 = 0)
• Stop bits: 1 (or more)
• No parity bit
• No handshake
The NMEA 0183 standard uses a simple ASCII serial communications protocol. Each message starts with a dollar sign $. The next five characters identify the type of message: the first two characters identify the device ID (for example, GP = Global Positioning System receiver) and the following three the data ID (for example, GGA = Global Positioning System Fix Data). All data fields that follow are comma-delimited.

Fig. 2. GPS Controller TOPCON FC-2000 and DGPS receiver HiPer+
For example:

$GPGGA,hhmmss.ss,llll.ll,a,yyyyy.yy,a,x,xx,x.x,x.x,M,x.x,M,x.x,xxxx*hh

hhmmss.ss = UTC of position
llll.ll = latitude of position
a = North or South
yyyyy.yy = longitude of position
a = East or West
x = GPS quality indicator (0 = no fix, 1 = GPS fix, 2 = Dif. GPS fix)
xx = number of satellites in use
x.x = horizontal dilution of precision
x.x = antenna altitude above mean sea level
M = units of antenna altitude, meters
x.x = geoidal separation
M = units of geoidal separation, meters
x.x = age of Differential GPS data (seconds)
xxxx = Differential reference station ID
hh = checksum
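As an aside (our sketch, not from the paper), a GPGGA sentence of this form can be decoded in a few lines of Python; the example sentence is a generic textbook one, not a recording from the HiPer+.

def parse_gpgga(sentence):
    """Minimal GPGGA decoder following the field layout listed above."""
    body, _, checksum = sentence.strip().lstrip('$').partition('*')
    f = body.split(',')
    def dm_to_deg(dm, hemi):
        # NMEA packs degrees and minutes: ddmm.mmmm (lat), dddmm.mmmm (lon)
        k = 2 if hemi in 'NS' else 3
        deg = float(dm[:k]) + float(dm[k:]) / 60.0
        return -deg if hemi in 'SW' else deg
    return {
        'utc': f[1],
        'lat': dm_to_deg(f[2], f[3]),
        'lon': dm_to_deg(f[4], f[5]),
        'quality': int(f[6]),        # 0 = no fix, 1 = GPS fix, 2 = DGPS fix
        'n_sats': int(f[7]),
        'hdop': float(f[8]),
        'alt_m': float(f[9]),
    }

print(parse_gpgga('$GPGGA,123519,4807.038,N,01131.000,E,2,08,0.9,545.4,M,46.9,M,,*47'))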
6 Experiment

The main purpose of the experiment is to compare the GPS position data displayed on the GPS controller FC-2000 with the data displayed on the computer. Our own software was written to display the GPS data obtained from the HiPer+. The feasibility of the selected DGPS for the localisation of a mobile robot in outdoor applications is investigated. First of all, measurements were made in two different places: one in an area surrounded by buildings and one in an area without high buildings. Although the courtyard is surrounded by blocks of houses, both DGPS and RTK Float fixes could
be reached in each case. These fixes occur only with a sufficient number of available satellites. As a starting point, a point (Point A, see Fig. 3) was chosen in the middle of a relatively level surface. Starting from it, 3 arbitrary points (B, C, and D), each at a distance of 4.5 m from point A, were measured out. Afterwards, the zero was set at A and every other point was measured. The measured data were automatically written by the provided program into a text file after the end of each measurement.

Fig. 3. Scheme for measurement
Table 1. Measurement at the point A
Table 2. Measurement at the point B
Table 3. Measurement at the point C
Table 4. Measurement at the point D
The measuring duration at the respective point, and with it the number of recorded measuring values, depends on the kind of fixation available. Each measurement was recorded until a sufficient number of measuring values with RTK Float fix existed. The refresh rate of the measurement data of the instrument, and with it the number of measuring values recorded on the PC, amounted to 1 measuring value per second. The number of satellites available for the evaluation was mostly 4 during the measurements. Tables 1 to 4 show the measured values at the points A, B, C, and D.

6.1 Evaluation
Although during some measurements with an RTK Float fixation divergences in the centimetre range were noted over longer time spans, the available measuring values show on average a divergence of approx. 90 cm from the measured-out position. In comparison to a customary GPS measurement this is an excellent result; however, it would not be suitable for exact positioning at the centimetre level. With the height measurement, however, substantial divergences of approx. 5 m are recognizable. The actual height divergence of the 4 points cannot be given for want of a suitable authoritative measurement, but a coarse evaluation puts it at less than 0.5 m.

To narrow down the cause of these substantial divergences from the expected measuring values, further measurements were carried out. If the divergences were caused by the conversion performed by the provided program, a comparative measurement with the hand-held device would show this. The subsequent measurement was carried out on a square in Vienna. This location is very well suited, because only low signal-shadowing effects appear. A triangle (see Fig. 4) was measured out in the middle of the square, and measurements were carried out at its corner points with the hand-held device as well as with the provided program. The measurements by means of the laptop and the provided program gave the values shown in Table 5.

Fig. 4. Scheme for measurement (triangle P6, P7, P8 with measured-out side lengths of 20 m, 20 m and 15.42 m)

Table 5. Measurement
Distance   Average value
P8 – P6    16.32 m
P6 – P7    18 m
P7 – P8    20.7 m
The cited measuring values again show divergences between 0.7 m and 2 m. An evaluation of the coordinates given by the hand-held device yields the measuring values cited in Table 6. From these, divergences between 9 mm and 277 mm arise, which corresponds to the expected accuracy of the system. With this accuracy a robot positioning is also conceivable.
Table 6. Measurement
Distance   Resulted distance
P8 – P6    15.411 m
P6 – P7    20.277 m
These measurements suggest that, with the measurement by means of the hand-held device, either internal conversion algorithms with very high accuracy are used, or additional records beyond the NMEA 0183 standard are transmitted in the communication between the hand-held device and the receiver.
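For reference, a common way to turn two nearby latitude/longitude fixes into a horizontal distance is the equirectangular approximation, adequate over baselines of tens of metres such as the triangle sides above. This is our sketch of the standard formula, not the authors' DLL code.

import math

def local_distance_m(lat1, lon1, lat2, lon2, R=6371000.0):
    """Horizontal distance (m) between two nearby fixes given in degrees."""
    phi = math.radians(0.5 * (lat1 + lat2))            # mean latitude
    dx = math.radians(lon2 - lon1) * math.cos(phi) * R
    dy = math.radians(lat2 - lat1) * R
    return math.hypot(dx, dy)

# Two hypothetical fixes roughly 20 m apart in Vienna (about 48.2° N)
print(local_distance_m(48.20000, 16.37000, 48.20018, 16.37000))   # ~20.0 m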
7 Conclusion and Future Perspective

The object of the study was the possible use of GPS receivers for the precise (e.g. within centimetres) positioning of the above-mentioned robots. The accuracy of position measurement with a GPS receiver depends on a number of factors, including the:
• number of satellites received and their position relative to the receiver.
• state of the ionosphere, through which the position signals travel.
• capability of the receiver (number of channels, use of both frequencies available for civilians, parallel use of the Russian GLONASS, use of signals for error correction (which?), accuracy of the internal clock, etc.).
• status of the "selective availability", which means a deliberate deterioration of the signals available for civilian purposes; switched off since 2000.
• number of measurements: several different results can be computed to obtain a higher level of accuracy.

Typical horizontal accuracy lies within:
• dozens of meters with a normal, stand-alone measurement.
• a few meters with ordinary DGPS correction mode.
• a few centimetres with RTK DGPS correction mode.

This means that a GPS receiver for our purposes should be able to make RTK position fixes. Both the DGPS (Differential GPS) and the RTK (Real Time Kinematics) modes rely on signals for error correction which are provided by a second, fixed GPS receiver, whose exactly known position is compared to the one currently measured and which should be in the relative vicinity of the mobile receiver, e.g. within 10 kilometres for RTK mode. Available was a "HiPer+" device from TOPCON, which is used for land survey, has all of the above-mentioned capabilities, and carries considerable weight and bulk plus a hefty price tag. Control of the receiver was done by a hand-held device linked via Bluetooth. The correction signal for DGPS and RTK was provided by the ÖBB (Austrian Railways) and was received via the built-in GSM modem. The GPS receiver was linked to a personal computer (running Windows XP) via an RS-232 interface, over which the receiver's findings were transmitted as NMEA sentences (NMEA is a widely used GPS protocol). Those NMEA sentences were
processed by a Dynamic Link Library (DLL), written in C#, which provided the formatted results to an executable (EXE), also written in C#. This executable displayed the results (relative position in metres, among others) on the computer's screen and at the same time wrote them into a file. We compared those results with the results displayed on the Bluetooth-linked hand-held device to determine the correctness of the DLL's code running on the personal computer.
Our outdoor tests took place on a square in the city of Vienna. Because this square offers a relatively wide open space, RTK measurements were possible most of the time. They were virtually impossible in narrow streets and backyards. Our tests consisted of the horizontal measurement of two or more points with the GPS receiver and comparing the calculated distance between them with the results from our tape measure. The actual accuracy was found to be within centimetres (with RTK), based on the results displayed on the hand-held device. The tests with the above-mentioned DLL showed differences to the tape-measured distances of up to about one metre (at distances of about 20 metres), but the relative error varied widely. Two (or more) reasons for this seem possible:
• Some bugs in our software, which calculates the distances from the latitude and longitude provided by the GPS receiver. If this is true: why does the relative error not remain constant?
• The hand-held device gets some additional information from the GPS receiver that we do not obtain with the NMEA sentences.
The conclusion is that the use of GPS for the precise positioning of outdoor robots is possible, with some limitations:
• A relatively unobstructed view of the sky is necessary.
• The ever-changing position of the GPS satellites may make RTK fixes impossible at times, as a favourable geometry is needed.
• The weight and price of RTK-capable GPS receivers are considerable.
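For illustration only, a minimal Python sketch of the kind of computation performed by the C# DLL described above is given below: parsing latitude and longitude from a $GPGGA sentence and computing the horizontal distance between two fixes. The function names are ours, not those of the actual DLL, the sample sentences are invented, and the haversine formula is merely one reasonable choice for baselines of a few dozen metres.

import math

def nmea_to_degrees(value, hemisphere):
    # NMEA encodes angles as (d)ddmm.mmmm; split degrees from minutes
    # two digits before the decimal point.
    dot = value.index('.')
    degrees = float(value[:dot - 2])
    minutes = float(value[dot - 2:])
    deg = degrees + minutes / 60.0
    return -deg if hemisphere in ('S', 'W') else deg

def parse_gga(sentence):
    # Fields 2-5 of a $GPGGA sentence hold latitude, N/S, longitude, E/W.
    f = sentence.split(',')
    return nmea_to_degrees(f[2], f[3]), nmea_to_degrees(f[4], f[5])

def distance_m(p1, p2, r=6371000.0):
    # Haversine great-circle distance in metres between two (lat, lon) fixes.
    lat1, lon1, lat2, lon2 = map(math.radians, (*p1, *p2))
    a = (math.sin((lat2 - lat1) / 2) ** 2
         + math.cos(lat1) * math.cos(lat2) * math.sin((lon2 - lon1) / 2) ** 2)
    return 2 * r * math.asin(math.sqrt(a))

p1 = parse_gga("$GPGGA,120000,4812.3456,N,01622.5678,E,1,08,0.9,171.0,M,43.0,M,,")
p2 = parse_gga("$GPGGA,120001,4812.3460,N,01622.5690,E,1,08,0.9,171.0,M,43.0,M,,")
print(distance_m(p1, p2))

A systematic error in such a conversion (for instance a wrong minutes split) would scale with position rather than with distance, which is one way the non-constant relative error noted above could arise.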
References
[1] Everett HR (1995) Sensors for Mobile Robots: Theory and Application. A.K. Peters
[2] Hofmann-Wellenhof B, Lichtenegger H, Collins J (1992) GPS: Theory and Practice. Springer-Verlag Wien
[3] Wikipedia (2006) Global Positioning System http://en.wikipedia.org/w
Verification of Shell Elements Performance by Inserting 3-D Model in Finite Elements Analysis with ANSYS Program

Chang Jun, Jean-Marc Martinez, and Barbara Calcagno

ITER International, Cadarache, Saint Paul Les Durance, France
[email protected]
Abstract. To evaluate the buckling behaviour of the ITER vacuum vessel, shell elements are used in the FE analysis. The forged parts in the lower ports are far from shell behaviour and cannot be represented realistically by shell elements. To overcome this problem, 3-D solid elements are inserted into the shell model using the Multiple Points Constraint method. Based on these calculations, a new shape of shell elements is suggested for further calculations.
1 Introduction

ITER is an international science and engineering project supported and funded by more than 20 countries. The European Union is the host party, and the international central team of ITER is located in Cadarache, near Aix-en-Provence in France. The mission of the ITER project is to build the biggest nuclear fusion machine in history and to verify the possibility and efficiency of a futuristic energy source. More than 300 experts have now gathered from the EU, USA, Japan, Russia, Korea, China, and India.
Fusion is a nuclear reaction in which isotopes of hydrogen merge and turn into helium. When the reaction happens, highly energised neutrons come out; the energy of these highly agitated neutrons is around 500 times bigger than the input energy. This fusion phenomenon is the principle of the hydrogen bomb. The fusion generator concept had already matured right after World War II, and in various experiments the fusion reaction was clearly observed at extremely high temperature. But until now, sufficiently long operation (t > 10 minutes) in self-sustaining mode has not been achieved.
The fusion reaction is generated in the highly heated plasma state of hydrogen. Because of the high energy state of the plasma, successful confinement of this plasma with a stable rotating movement is a critical issue. To achieve this requirement, a torus-shaped vacuum vessel (VV) is used in a normal fusion reactor. The ITER VV will be made of stainless steel, with a diameter of 20 m, a height of 12 m, and a central hole of 6 m diameter. The VV is classified as nuclear safety equipment, which has to be examined and approved by the legitimate regulator of the French government.
To fulfil this demanding design work for the ITER vacuum vessel, FE analysis with the ANSYS program is used. Due to the large size of the model, shell elements are applied for the modelling. But in several critical spots, shell elements are not appropriate to represent the structural behaviour. Therefore, we inserted 3-D parts in the global
Fig. 1. Section view of ITER tokamak machine
model of shell elements. To combine shell elements with 3-D elements, a special technique is necessary due to the different degrees of freedom; the Multiple Points Constraint method is applied to combine them. Through a series of optimising calculations, we finally obtained results within 2% difference between the two models, one consisting 100% of shell elements and the other mainly of shell elements with inserted 3-D elements. The field of this study is an application of structural analysis with the Finite Element Method.
2 Solution Domain and Used Model

The vacuum vessel is axisymmetric in segments of 40 degrees. The nine sectors of the vacuum vessel are not exactly identical, but similar enough to be treated as symmetric. Therefore, one 40-degree sector is selected and meshed for the analysis. The vacuum vessel has many ports on the outer shell. These ports serve for diagnostic measurements and for installing in-vessel components like blankets, which are special tile walls protecting the vessel from the plasma heat. The ports are extruded from the outer vessel at the upper, middle and lower parts of the vessel, as in the right part of Fig. 1. The vessel is supported in the lower port area, which is far from the centre of gravity of the vessel. There are no other supports just below the vessel, the reason being that there is not enough space to insert any supporting structure at the centre part of the vessel: the torus-shaped vessel has to accommodate the Central Solenoid Coil in the centre hole.

Fig. 2. 20-degree FE model of the ITER vacuum vessel and lower port gussets to reinforce the structure (red is the 3-D model of the gussets and blue is the shell model)
18 toroidal coils go around the D-shaped vacuum vessel, one every 20 degrees. The main supports for the vessel, which weighs 9,000 metric tons, are below the lower ports. The distance between the vessel support and the centre of gravity generates a big mechanical moment on the lower port connection part, so reinforcements called gussets are designed to be put on the upper region of the lower ports. To make the FE model, shell elements are used and only half of one sector is taken. The gussets are forged pieces and far from a shell shape, so a 3-D model is inserted into the shell model with the Multiple Points Constraint method.
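The kinematic idea behind such a shell-to-solid tie can be sketched as follows; this is a generic rigid-link formulation, and the exact constraint equations generated by the ANSYS setup may differ. Each translational degree of freedom of a solid interface node s is expressed through the translations u_m and rotations θ_m of the adjacent shell node m:

\mathbf{u}_s = \mathbf{u}_m + \boldsymbol{\theta}_m \times (\mathbf{x}_s - \mathbf{x}_m)

In this way the rotational stiffness of the shell edge is transferred to the solid patch, and no artificial hinge forms at the interface even though the solid elements themselves carry no rotational degrees of freedom.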
3 Load and Boundary Conditions

The vacuum vessel suffers various pressure loads from mechanical and electro-magnetic sources. The vacuum vessel is a double-walled structure: inside the double wall is the vacuum environment, and between the two walls cooling water flows under 1.3 MPa pressure at 100 °C. The basic loads are the vessel structure weight (90 MN), the cooling water weight (6 MN) and the cooling water pressure (1.3 MPa). Another important load is the electro-magnetic pressure caused by the magnetic field and eddy currents in the vessel. This is called the Plasma Vertical Displacement Event (VDE) load. When the fusion machine is turned on and driven in stabilised condition, the plasma turns around inside the vessel without touching the wall. But when the plasma loses stability due to various disturbances, the turning plasma is forced to move down or up. These are the VDE cases, which cause eddy currents in the vessel. These eddy currents push the plasma back towards its original position, but the plasma will hit the vessel if the vertical instability is large enough to overcome this restoring force. The plasma then hits the vessel walls and releases a huge current along the vessel in poloidal direction, the so-called "halo current". The halo current flows along the perimeter in vertical direction; this direction is called the poloidal direction. The main magnetic field is generated along the toroidal direction, i.e. along the horizontal perimeter. This poloidal halo current and the toroidal magnetic field
Fig. 3. Applied VDE Load as Pressure on the Lower Part of Double Walled Vacuum Vessel
generate mechanical pressure on the vessel. A VDE in downward direction is more dangerous than upward, because a downward VDE adds to the gravity load. VDE events are categorised in several stages according to strength and probability. The VDE-III case, considered "unlikely to happen", is applied in this study. The VDE load is not a steady-state load; it can occur several times during 10 years of operation. These total loads exert pressure on the vessel and also cause a reaction force of 30 MN on each support, which is placed every 40 degrees.
4 Non-linear Buckling Analysis

The forged shape of the gussets is important, so we inserted a 3-D model which has a more realistic geometry. The 3-D model has 3 degrees of freedom (x, y and z displacement), but the shell model has 6 degrees of freedom, with the additional xy, yz and zx tilting angles. After combining these two different models in one FE model, the stress results near the MPC connection are not realistic. Therefore, we check the structural integrity of the vacuum vessel not by stress values, but by the results of a non-linear buckling analysis.
The main material of the vacuum vessel is SS316-L(N), a special austenitic stainless steel having high ductility together with high strength. The stress-strain curve used in the non-linear buckling analysis is shown in Fig. 4.

Fig. 4. True Stress-Strain Curve of SS316-L (N)

The concept of the Load Factor is applied to judge structural integrity. In a normal analysis, all the loads are put on the structure and the peak stress is taken from the results; if the peak stress is below the allowable (yield) stress, the structural integrity is judged OK. But a shell structure is different. Usually a shell structure is modelled with shell and beam elements: shells are the main walls and beams are the reinforcements between the shells. Beam elements are usually just lines, so their ends connecting to shell elements show very high stress values due to stress concentration. Moreover, in this analysis, a 3-D model is inserted in the shell model and does not show proper stress values. To avoid that kind of localised, unrealistic stress state, the non-linear buckling analysis is a very useful tool to judge the structural integrity of the vessel structure.
The analysis by the Load Factor approach is as follows: 1) Put all the loads of pressure, force and temperature on the structure. 2) Run the non-linear analysis. 3) Iterate by increasing the loads by a factor from 1.0 to some value, for example up to 5.0 in steps of 0.05. 4) ANSYS will run these time steps with the non-linear analysis. 5) When the displacements increase enormously, the analysis diverges and the program crashes.
Fig. 5. Non-Linear Buckling Analysis Result Curve (Load Factor LF versus toroidal displacement Utor, 20-degree shell model with solid gusset reinforcement; for LF = 1, RFZtot = RFZgravity + RFZVDE = 14.382 MN, with RFZgravity ≈ 4.61 MN and RFZVDE ≈ 9.772 MN; divergence at LF ≈ 1.6)
Fig. 6. Three Chosen Shell Models to Replace 3-D Model of Gussets
We check the result data and read how much load was applied at the point of divergence. For example, if all loads total 30 MN and the program crashed at 90 MN, the Load Factor (LF) is 3. For this vacuum vessel, our criterion was given as LF = 2, as defined by the French construction code.
An important point of this Load Factor approach is that structural integrity is judged by displacement, not by stress itself. Of course, stress is involved in this method, but the maximum-stress control concept is not directly linked. This way of judging structural integrity allows us to avoid all the local problems. A local peak stress is usually not a real situation, and even if a local failure happens, such as leakage by local fracture or opening, it could be repaired quickly without hampering the overall structural integrity. Fig. 5 shows a typical Load Factor result: as LF is increased, the displacement grows with an ever steeper slope, and eventually the analysis diverges and stops.
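The load-stepping logic can be illustrated with a deliberately simplified stand-in for the FE solve. The softening law and all numbers below are invented for illustration only (chosen so that divergence occurs near LF = 1.6, as in Fig. 5) and have nothing to do with the actual ANSYS model:

def solve_nonlinear(load_factor, base_load=30.0e6, k0=4.8e9, u_cr=0.04):
    # Toy softening structure: equilibrium requires load = k0*(1 - u/u_cr)*u,
    # which has no solution above the limit load k0*u_cr/4 (here 48 MN).
    # Loss of a converged equilibrium stands in for the aborted FE solve.
    load = load_factor * base_load
    u = 0.0
    for _ in range(500):
        u_new = load / (k0 * max(1.0 - u / u_cr, 1e-12))
        if u_new > u_cr:
            raise RuntimeError("no converged equilibrium (buckling)")
        if abs(u_new - u) < 1e-9:
            return u_new
        u = u_new
    raise RuntimeError("no convergence")

# Step the load factor upwards until the solve fails; the last converged
# value is reported as the Load Factor of the structure.
lf, step, last_ok = 1.0, 0.05, None
while lf <= 5.0:
    try:
        solve_nonlinear(lf)
        last_ok = lf
    except RuntimeError:
        break
    lf += step
print("Load Factor ~", last_ok)   # last converged step, close to LF = 1.6

The real analysis replaces solve_nonlinear by the full non-linear FE solution; the surrounding loop, and the principle of reading off the last converged load level, are the same.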
Fig. 7. Von Mises strain in shell gussets (above) and Von Mises strain in 3-D gussets (red colour shows the maximum strain, which is 10%)
5 Results and Conclusion

We used three different shapes of shell gussets and a 3-D model which is very similar to the real geometry. Our goal is to find which shell model can represent the 3-D model most closely. In other, more complicated calculations, the model with inserted 3-D elements cannot be used because it is too time-consuming. Fig. 6 shows the three shell models we have chosen. The second model (Model 2) has an additional connection shell (red colour). The third model (Model 3) is the same as the first model (Model 1), but the thickness of the red-coloured part was varied. The thickness of the red parts
is varied from 10 mm to 200 mm, while the thickness of the blue-coloured parts was fixed at 100 mm.
Among the three shell models, the second model gave the most stable results. The Load Factor difference between the second model and the model with implanted 3-D elements was less than 1% over a wide range of red-part thicknesses. For example, in Model 2 the vertical stiffness was more than 99% of that of the model with implanted 3-D elements from a middle-part thickness of 50 mm upwards. For Model 3 (the right one), the vertical stiffness of the whole vessel was very sensitive to the red-part thickness. So if the load conditions may change, Model 2 is the best approximation due to its stability; but when the load value is fixed, Model 3 is the best choice with a proper red-part thickness.
As the figures above show, the shell model and the 3-D model match quite well. The results show a very similar strain pattern, which allows us to use the shell model for the more time-consuming analyses. This result will be used for various analyses including seismic analysis and support design.
Acknowledgments. The authors wish to acknowledge the assistance and support of the ASSYSTEM company, which dispatched key analysts, including the authors, to work together with the ITER International team.
References [1] RCC-MR, “Design and Construction Rules for Mechanical Components of FBR. Nuclear Island”, AFCEN, Edition 2007 [2] ANSYS, Inc. Release 11.0 Documentation for ANSYS. [3] Finite Element Analysis of the 40 degree ITER Vacuum Vessel Standard Sector #01: ITER_D_24APAE (ITER Internal Document) [4] Summary of Vacuum Vessel Materials Data for Structural Analysis: ITER_D_229D7N (ITER Internal Document) [5] Assessment of the Structural Margin of the VV Lower Port Poloidal Gussets: ITER_D_28WBUS (ITER Internal Document) [6] Design Study of the VV lower Port Poloidal Gussets: ITER_D_27TFXS (ITER Internal Document) [7] Design Study of the ITER VV Supporting Structure: ITER_D_27TFYB (ITER Internal Document) [8] PID 2.0, ITER_D_2234RH (ITER Internal Document) [9] DDD 1.5 VV, Version 2.0, ITER_D_22FPWQ (ITER Internal Document) [10] Masses of VV components, VVMassesSum.doc (ITER Internal Document) [11] Analysis of the Structural Margin of the Reinforced VV Lower Port Poloidal Gussets (ITER Internal Document)
Analysis of Textile Reinforced Concrete at the Micro-level

Bong-Gu Kang

RWTH Aachen University, Institute of Building Materials Research, Aachen, Germany
[email protected]
Abstract. To increase knowledge on the load carrying behaviour of textile reinforced concrete (TRC), simulations are important to complement experimental investigations. Due to the heterogeneous structure of TRC, different damage mechanisms at different scales are found. Since not all details can be represented in one model, different modelling levels have to be considered. This paper focuses on the modelling of TRC at the micro-scale and therefore particularly on the experimental and theoretical determination of the material, bond and micro-structural properties needed for the modelling. The consideration of fatigue damage due to cyclic and sustained loading, which is indispensable for the design of structural members, enhances the complexity of the material and bond laws. To verify the developed model, yarn pullout tests under monotonically increasing, cyclic and sustained loading, with additional experimental techniques for monitoring the filament ruptures during the tests, are scheduled.
1 Introduction

Textile reinforced concrete (TRC) is a new, innovative composite material in structural engineering, in which textile fabrics, made of e.g. alkali resistant (AR) glass multi-filament yarns, are used as reinforcement element [1]. A multi-filament yarn itself consists of hundreds of individual filaments, which form the basis of the heterogeneous and complex structure. The main advantage of TRC is that thin-walled structural members with a thickness of only several millimetres can be realised. Thus, a new field of application can complement the existing applications of conventional steel-reinforced concrete. However, to apply the new material effectively, knowledge on the load carrying performance under monotonically increasing, cyclic and sustained loading has to be extended. Besides experimental investigations, numerical simulations provide in this connection detailed stress analyses and cost-effective parameter studies.
Due to the highly heterogeneous structure of TRC, different damage mechanisms at different scales, which interact with each other, are found. Since not all details can be represented in one model, different modelling levels have to be considered. From the most detailed (smallest scale) level, effective characteristics are derived, which are subsequently used in the next higher scale level, etc. Thus, the model at the smallest scale level constitutes the foundation of the modelling chain, and the determination of proper input parameters turns out to be an essential task, since they influence the subsequent modelling chain. A detailed modelling strategy for TRC can be found in [2].
1.1 Motivation and Objective

Load carrying behaviour under monotonically increasing loading has been widely analysed so far. However, for the design of structural elements made of TRC, knowledge on the long-term behaviour as well as on the behaviour under cyclic loading is needed. Yarn pullout tests under cyclic and sustained loading [3]-[4] showed a damage accumulation mechanism caused by successive filament ruptures and debonding, which could be observed with the AE (Acoustic Emission) analysis and the FILT (Failure Investigation using Light Transmission properties [5]) test. The observed damage accumulations motivated the objective of this investigation: to establish a model of TRC at the smallest considered scale in the modelling chain (micro-scale), where the fatigue damage behaviour (damage accumulation) of the filaments and the bond is studied in detail. However, the scope of this investigation is restricted to a low cycle fatigue analysis (damage analysis at high loads) of an AR-glass yarn pullout test with a maximum number of load cycles n ≤ 100 and a maximum test duration of one hour.

1.2 Concept

In Fig. 1 the structure of the scheduled investigation is presented. The first major task is the experimental and theoretical investigation to determine the material, bond and micro-structural properties as basic input parameters for the modelling of TRC. Under the assumption that the concrete is rigid in a yarn pullout test (negligible deformation compared to the pull-out displacement) and that the bond between filaments is negligible compared to the bond between filament and concrete, three main lines result, namely the determination of the micro-structural properties, the filament tensile properties and the bond properties. With the statistical evaluations of the respective experimental basis
Fig. 1. Structure of the scheduled investigation
and the respective models to be developed, the generated bond structures, the filament material law and the bond law finally result, in which the respective model parameters are determined inversely using a hybrid optimisation method [6]. The next step is then to develop a stochastic FEM model of the yarn pullout test. The FEM tool DIANA is used for this purpose. The filament material and bond laws are implemented in DIANA with user subroutines, and an automatic model generation based on the generated bond structure is developed. The last, but not least important, step is to validate the developed model. Therefore, yarn pullout tests under monotonically increasing, cyclic and sustained loading are planned. With the aid of the AE analysis and the FILT test, the successive rupture of filaments can be monitored during testing. A comparison of the test results with the simulation results will finally show the adequacy of the procedure. In this paper, some selected results on the micro-structural properties as well as the filament tensile properties are presented.
2 Micro-structural Properties

The random and incomplete penetration of the concrete matrix into the yarn is mainly responsible for the high scatter of the load carrying behaviour. An essential task is therefore to characterise the real microstructure for a proper simulation of the load carrying behaviour. For this purpose, different specimens with different yarn shapes were ground stepwise and SEM images were taken every 0.5 mm (cf. Fig. 2 left as an example). To allow a statistical evaluation of the microstructure, first the three substantial phases, namely filament, concrete and voids, have to be separated using image analysis methods. Thus, a three-phase idealisation is obtained for each cross-section i (cf. Fig. 2 centre). A 3D arrangement of the filaments can be obtained e.g. using filament matching algorithms. As an example, a simple filament matching between two consecutive cuts, based on the filament diameter and the neighbourhood relation of the filaments, is presented in Fig. 2 right. Within the scope of this paper, an extensive description cannot be given; the detailed procedure can be found in [7].
The most important evaluation is the bond fraction x (contact fraction between concrete and filament) versus the distance from the outer boundary to the centre of the yarn. The yarn cross-sections are first subdivided into nd = 10 distance classes j in such a manner that every class contains the same number of filaments nf. Subsequently, each distance class is further subdivided into nbf = 10 equidistant bond fraction classes xd. In Fig. 3, the relative frequency resulting from this classification is presented as an example for one cross-section i. To achieve spatial information, the penetration behaviour of the concrete along the yarn length has to be evaluated. The 3D arrangement of the filaments enables the description of the discrete change of the bond fraction from cross-section to cross-section for each filament k. Furthermore, a waviness of the filaments can be determined, which can be considered in the filament material law. Gaining a statistical description by this procedure is very expensive. Since different penetration situations, which strongly influence the load carrying behaviour, have to be analysed anyway, a stochastic model is introduced, which generates idealisations of
different microstructures. Assuming that a representative sample size exists in each distance class j, the relative frequency distribution of the bond fraction can be interpreted approximately as a discrete probability distribution Pij(xd) with xd = 1..nbf (cf. Fig. 4). Then the bond fraction of each filament k in each distance class j can be generated with uniform random numbers. However, considering the irregular concrete penetration, regions penetrated by concrete and regions free of concrete exist along the yarn length. Thus, depending on the discretisation of the yarn length into the individual cross-sections to be generated, the actual bond fraction of individual filaments correlates with those in the previous cross-sections. This means that when, e.g., in one cross-section a filament is fully embedded in concrete (bond fraction is one), the probability that the same bond state is found in the next cross-section, e.g. 0.5 mm away, is high (certainly depending on the considered distance class). However, when the bond fraction has not changed over a certain length, the probability decreases that the same bond fraction is found in the next cross-section. This changing probability can be specified by a weighting function W(x) modifying the original discrete probability distribution Pij(xd) of the cross-sections i = 2..ncs. As weighting function, e.g. a normal distribution function with mean µ (representing the bond fraction in the previous cross-section i−1) and standard deviation σ (further denoted as magnification parameter) can be used:
W(x) = \frac{1}{\sigma\sqrt{2\pi}} \cdot e^{-0.5\left(\frac{x-\mu}{\sigma}\right)^2} \quad (1)
In Fig. 5, weighting functions with µ = 3 and different magnification parameters σ are presented. With increasing σ, the weighting function becomes flatter. Since σ depends on the bond fraction in the previous cross-sections, a history parameter δ(k) is introduced, which controls the evolution of σ(δ):

\sigma(\delta) = e^{p \cdot \delta} \quad (2)
postulating an exponential law with a constant parameter p, which has to be determined separately for each distance class j by an inverse analysis using the hybrid optimisation method. In a first step, a simple formulation of the evolution of δ(k) is proposed as follows:

\delta_i(k) = \begin{cases} \delta_{i-1}(k) + 1, & \text{if the bond fraction between } i-2 \text{ and } i-1 \text{ is unchanged} \\ 0, & \text{if the bond fraction between } i-2 \text{ and } i-1 \text{ has changed} \end{cases} \quad (3)

with i = 2..n_{cs} and \delta_2 = 0, from which \sigma can be calculated (cf. Fig. 6).

Fig. 2. Image analysing procedure
Fig. 3. Relative frequency h_i(x_d, j) of the bond fraction classes - cross-section i

Fig. 4. Discrete probability distribution P_{ij} over the bond fraction classes - distance class j, cross-section i

Fig. 5. Weighting factor W_{\mu,\sigma} for \mu = 3 and \sigma = 1, 2, 10 - filament k, distance class j, cross-section i

Fig. 6. Weighted discrete probability distribution - filament k, distance class j, cross-section i
With the discrete weighting factors W(µ_d, σ, x_d) (µ_d: bond fraction class in cross-section i−1), the new normalised weighted discrete probability distribution P^w_{ij}(x_d) is obtained as

P_{ij}^{w}(x_d) = \frac{P_{ij}(x_d) \cdot W(\mu_d, \sigma, x_d)}{\sum_{x_d^*=1}^{n_{bf}} P_{ij}(x_d^*) \cdot W(\mu_d, \sigma, x_d^*)}, \quad x_d = 1..n_{bf} \quad (4)
However, due to the weighting process, the overall probability distribution of the bond fraction within a distance class must not change. Therefore, a consistency condition has to be formulated. With the conditional weighted discrete probability distribution
\tilde{P}_{ij}(x_d \mid \mu_d, \sigma) = W(\mu_d, \sigma, x_d) \cdot P_{ij}(x_d), \quad x_d = 1..n_{bf} \quad (5)
of the bond fraction class x_d in cross-section i under the condition of µ_d and σ, the total discrete probability distribution of the bond fraction class x_d, accumulated over all alternative possible bond fraction classes µ_d = 1..n_bf in cross-section i−1, can be calculated with the law of total probability as follows:

P_{ij}^{tot}(x_d) = \sum_{\mu_d=1}^{n_{bf}} \tilde{P}_{ij}(x_d \mid \mu_d, \sigma) \cdot P_{ij}(\mu_d), \quad x_d = 1..n_{bf} \quad (6)
As consistency condition, the normalised total discrete probability distribution must be equal to the original discrete probability distribution P_{ij}(x_d):

\frac{1}{S} \cdot P_{ij}^{tot}(x_d) \overset{!}{=} P_{ij}(x_d), \quad x_d = 1..n_{bf}, \quad \text{with } S = \sum_{x_d^*=1}^{n_{bf}} \sum_{\mu_d=1}^{n_{bf}} \tilde{P}_{ij}(x_d^* \mid \mu_d, \sigma) \cdot P_{ij}(\mu_d) \quad (7)
To fulfil the consistency condition, correction factors C(µ_d, σ, x_d) of the weighting factors W(µ_d, σ, x_d) are introduced, so that equation (4) can be rewritten as

P_{ij}^{C}(x_d) = \frac{P_{ij}(x_d) \cdot W(\mu_d, \sigma, x_d) \cdot C(\mu_d, \sigma, x_d)}{\sum_{x_d^*=1}^{n_{bf}} P_{ij}(x_d^*) \cdot W(\mu_d, \sigma, x_d^*) \cdot C(\mu_d, \sigma, x_d^*)}, \quad x_d = 1..n_{bf} \quad (8)
equations (5) and (6) as

\tilde{P}_{ij}^{C}(x_d \mid \mu_d, \sigma) = W(\mu_d, \sigma, x_d) \cdot C(\mu_d, \sigma, x_d) \cdot P_{ij}(x_d), \quad x_d = 1..n_{bf} \quad (9)

P_{ij}^{tot,C}(x_d) = \sum_{\mu_d=1}^{n_{bf}} \tilde{P}_{ij}^{C}(x_d \mid \mu_d, \sigma) \cdot P_{ij}(\mu_d), \quad x_d = 1..n_{bf} \quad (10)
and the condition (7) as

\frac{1}{S_C} \cdot P_{ij}^{tot,C}(x_d) \overset{!}{=} P_{ij}(x_d), \quad x_d = 1..n_{bf}, \quad \text{with } S_C = \sum_{x_d^*=1}^{n_{bf}} \sum_{\mu_d=1}^{n_{bf}} \tilde{P}_{ij}^{C}(x_d^* \mid \mu_d, \sigma) \cdot P_{ij}(\mu_d) \quad (11)
A homogeneous system of linear equations results, with n_bf equations and n_bf unknowns. Since no unique solution can be found for the present problem, the hybrid optimisation method is used to approximate the best fit. To verify the stochastic model, the generated cross-sections (n^g_cs) are statistically evaluated. The mean distribution of the relative frequency h^g(x_d, j) over all generated cross-sections converges towards the original initial probability distribution P_i(x_d, j) with an increasing number of generated cross-sections n^g_cs. This shows that the consistency condition is fulfilled. The
calibration of p on the statistical evaluation of the bond fraction along the yarn length will be carried out as soon as all experimental results are available.
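To make the generation procedure concrete, a minimal Python sketch of Eqs. (1)-(4) for a single filament is given below. The input distribution P_j and the history parameter p are placeholders (the calibrated values are not yet available, as stated above), and the correction factors C of Eq. (8) are omitted for brevity.

import numpy as np

rng = np.random.default_rng(1)

def weighting(mu_d, sigma, n_bf=10):
    # Gaussian weighting factors W(mu_d, sigma, x_d), Eq. (1), evaluated
    # on the discrete bond fraction classes x_d = 1..n_bf.
    x = np.arange(1, n_bf + 1)
    return np.exp(-0.5 * ((x - mu_d) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))

def generate_filament(P_j, n_cs, p, n_bf=10):
    # Sequentially draw the bond fraction class of one filament in distance
    # class j over n_cs cross-sections, using the weighted distribution (4).
    classes = np.arange(1, n_bf + 1)
    x = [int(rng.choice(classes, p=P_j))]   # first cross-section: unweighted draw
    delta = 0                               # history parameter, Eq. (3)
    for _ in range(1, n_cs):
        sigma = np.exp(p * delta)           # magnification parameter, Eq. (2)
        Pw = P_j * weighting(x[-1], sigma, n_bf)
        Pw /= Pw.sum()                      # normalised weighted distribution, Eq. (4)
        x_new = int(rng.choice(classes, p=Pw))
        delta = delta + 1 if x_new == x[-1] else 0   # Eq. (3)
        x.append(x_new)
    return x

# Placeholder distribution and history parameter, for illustration only.
P_j = np.full(10, 0.1)
print(generate_filament(P_j, n_cs=20, p=0.3))

Averaging the relative frequencies over many such generated cross-sections and comparing them with P_j reproduces the convergence check described above, once the correction factors are included.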
3 AR-Glass Filament Material Law

Tensile tests on AR-glass filaments under monotonically increasing, cyclic (maximum 100 load cycles) and sustained loading (maximum 1 h) were carried out. The schematic test set-up is presented in Fig. 7a; the detailed description of the testing procedure can be found in [5]. For specimens which reached the scheduled maximum number of load cycles or the maximum loading duration, the residual load carrying capacity was determined subsequently. Load level, amplitude and loading rate were varied (overall 200 specimens tested). The results show a damage accumulation due to cyclic and sustained loading, as expected.
For the determination of the material law for AR-glass filaments, different damage accumulation modelling approaches are discussed. The first approach is based on fracture mechanics, which is commonly used for the description of the damage mechanism of glass. Due to the brittle material characteristic of glass, initial cracks (notches) lead to high stress concentrations at the crack tip. Crack growth occurs when a critical external stress \sigma_{crit} = K_{IC}/\sqrt{\pi \cdot a} (with K_IC: critical stress intensity factor, a: crack depth) is exceeded. Under sustained loading, however, sub-critical crack growth can already occur at low external stress, depending on the relative humidity and temperature. The so-called stress corrosion (cf. [8]) is responsible for this behaviour: diffusing water molecules preferentially split the silicon oxide network of the glass at the crack tip, where a higher chemical potential is found due to the high stress concentration. Studies on the crack velocity of glass had been carried out using the double-cantilever cleavage technique on flat glass, depending on temperature, humidity and glass composition (cf. e.g. [9]-[10]). For high crack velocities of 10^{-7}–10^{-4} m/s, a logarithmic correlation of crack velocity and applied load is found (cf. Fig. 7b, 25 °C) (region 1). For crack velocities below 10^{-7} m/s, the crack velocity slows down more strongly with smaller load (region 2): due to the small crack velocity, reaction products can adhere at the crack tip, which leads to a rounding of the crack tip and thus significantly hampers the crack growth, possibly even bringing it to a complete halt. This is then called crack fixing.
Comparing the filament lifetime in the tensile tests under sustained load (not presented here) with the lifetime according to Fig. 7b, a big discrepancy is found. Consequently, a transfer of the crack velocity of the flat glass to the crack velocity of the filaments is impossible. The reasons for the discrepancy are certainly the different composition and the deviation of the network structure of AR-glass filaments compared to the flat glass. Unfortunately, a test method to directly measure the crack growth of individual filaments with a diameter of about 27 µm is unrealistic, so that quantitative fracture-mechanical parameters of filaments cannot be determined experimentally. However, under the assumption of a qualitatively similar damage mechanism (piecewise logarithmic relation), the relation of crack velocity versus relative load (related to the current strength f_t(t)), with a crack-fixing limit x_1 and the two regions with different slopes, can be assumed (cf. Fig. 7c). Here the consideration of the relative load x(t) = \sigma(t)/f_t(t) is necessary due to the high
scatter of the filament tensile strength f_t0 (initial strength). To calibrate the model parameters on the results of the tensile tests, an inverse analysis using the hybrid optimisation method is carried out. For this purpose, simulations of the strength degradation of filaments are carried out, which further require knowledge of the initial tensile strengths of the specimens. However, a fundamental problem in tests with cyclic loading is that damage accumulation can occur during the tests, so that the original, undamaged load carrying capacities cannot be determined and thus remain unknown. For this reason, a statistical method to estimate the initial tensile strengths dependent on the sample size of a test series has been developed (cf. [11]).
Fig. 7. a) Test set-up, b) Influence of temperature on the fracture behaviour of soda-lime silicate glass in water [10], c) Hypotheses of crack velocity versus relative load, d) Hypotheses of instantaneous damage versus relative load
The current strength is calculated with the equation

f_t(t) = \frac{K_{IC}}{1.12 \cdot \sqrt{\pi \cdot a(t)}} \quad (12)
assuming surface cracks (cf. [12]). The crack depth a(t) is

a(t) = a_0 + \int_0^t v(\tau) \, d\tau, \quad \text{with } a_0 \text{: initial crack depth} \quad (13)
and the crack velocity

\dot{a}(t) = v(t) \quad (14)
is a function of t and a(t). Equation (14) represents an ordinary differential equation, which is solved numerically with the classical fourth-order Runge-Kutta method.
Another damage accumulation modelling approach is the heuristic description of the damage mechanism. The strength degradation is here described phenomenologically via an instantaneous damage accumulation parameter δ(t). Some qualitative assumptions about the progress of δ(t) versus the relative load x(t) can be formulated: at low x(t), almost no damage progress occurs; for x(t)-values very close to the limit 1, a critical rise of the damage progress can be expected. As a result, at least three basic sections must be distinguished to describe δ(t) versus x(t) reasonably. Furthermore, the course of the δ(t)−x(t) curve must be continuous and the gradient may not decrease. Assuming linear curve progression in the sections, δ(t) versus x(t) is defined by a multi-linear function (cf. Fig. 7d), where up to a certain limit of x(t) no damage progress is assumed, analogous to the crack-fixing limit. As an alternative to the multi-linear function, a simpler analytical formulation, a power law

\delta(t) = x^n(t) \quad (15)
is further proposed. Finally, the strength degradation is obtained by solving the ordinary differential equation resulting from the formulation of the instantaneous change of the current strength,

\dot{f}_t(t) = -\delta(t) \cdot f_t(t) \quad (16)
which is a function of t and f_t(t). Here again, the fourth-order Runge-Kutta method is used to solve the ordinary differential equation.
The parameters of the proposed filament damage accumulation models are calibrated on the results of the tensile tests, separately for cyclic and sustained loading, to analyse whether the motion due to the cyclic loading influences the damage accumulation. The results of the inverse analysis show that the sustained load causes less damage accumulation. Consequently, the damage accumulation model must be extended so that a dependence on the loading stress rate \dot{\sigma}(t) can be considered. Another result of the inverse analysis is that the best fit of the experimental results is achieved with the multi-linear law. The worst fit is obtained with the fracture-mechanics-based law, so that the assumption of a piecewise logarithmic relation between crack velocity and relative load may be questionable. However, the simulation results with all three considered laws deviate only marginally when regarding the high scatter of the initial tensile strengths and the limited number of tested specimens. Thus, since the power law has the least number of optimisation parameters, it is chosen for further consideration. The exponent n in the power law (cf. Equation (15)) specifies the magnitude of the damage accumulation. To include the loading stress rate in the formulation of δ(t), an exponential relation between the loading stress rate and n(t),

n(t) = p_e + (p_a - p_e) \cdot e^{-q \cdot \dot{\sigma}(t)} \quad (18)
is assumed, where p_a equals the exponent n for static loading (\dot{\sigma}(t) = 0) and p_e equals the exponent n for an infinite loading stress rate. In Fig. 8, the instantaneous damage accumulation law based on the power law is presented. The parameters p_a, p_e and q are now calibrated on the total results of the tensile tests under cyclic and sustained loading. As an example, the simulation result of one filament is presented in Fig. 9. Continuous strength degradation can be observed due to the combined static and cyclic loading.
Fig. 8. Instantaneous damage accumulation law (δ(t) versus relative load x(t), for static loading and for infinite loading stress rate)
Fig. 9. Simulation example of one filament under combined static and cyclic loading (loading and strength, stress in N/mm², versus time in s)
4 Outstanding Tasks

Although many parts of the investigation not presented in this paper, such as the FE implementation of the material and bond laws, the signal and frequency analysis of the acoustic emission to separate the damage mechanisms in a yarn pullout test, and the stochastic FEM yarn model, are already far advanced, it has not yet been possible to bring the whole complex together due to some missing links. As outstanding tasks, first the investigations on the bond behaviour have to be finished. Preliminary investigations show a plastic slip as well as a decrease of the bond stiffness. Thus, a bond-slip law based on the coupled damage-plasticity theory, extended to cover the bond damage accumulation due to cyclic and sustained loading, must be developed. Thereby, the same basic procedure as for the determination of the filament damage accumulation law can be used.
With the stochastic FEM model, the yarn pullout behaviour under monotonically increasing, cyclic and sustained loading can be simulated. Experimental investigations of the yarn pullout behaviour with the application of the AE analysis and the FILT
test enable a detailed comparison of the damage progress during cyclic and sustained loading. Once validated, the model can be used on the one hand for further parameter studies and on the other hand as the basis for the modelling at the next higher scale level. Acknowledgments. This research project is part of the Collaborative Research Centre 532 "Textile Reinforced Concrete – technical basis for the development of a new technology" and is sponsored by the German Research Foundation (Deutsche Forschungsgemeinschaft, DFG). The support is gratefully acknowledged.
References [1] Brameshuber W (2006) Textile Reinforced Concrete. State-of-the-Art Report of RILEM Technical Committee 201-TRC. Report 36, Bagneux RILEM [2] Chudoba R, Graf W, Meskouris K, Zastrau B (2004) Numerische Modellierung von textilbewehrtem Beton (Numerical modelling of textile reinforced concrete). Beton- und Stahlbetonbau 99(6):460-465 [3] Kang B-G, Brameshuber W (2006) Bond Behaviour of Textile Reinforcement Made of AR-Glass under Cyclic Loading. Bagneux: RILEM. In: Textile Reinforced Concrete. Proceedings of the 1st International RILEM Symposium, Aachen (Hegger, J.; Brameshuber, W.; Will, N. (Eds.)), 111-119 [4] Kang B-G, Hannawald J, Brameshuber W (2008) Bond between textile reinforcement and concrete under sustained load. CCC 2008 Challenges for Civil Construction, Porto (Torres Marques et al. (Eds.)), 9 pages [5] Banholzer B (2004) Bond Behaviour of a Multi-Filament Yarn Embedded in a Cementitous Matrix. In: Schriftenreihe Aachener Beiträge zur Bauforschung, Institut für Bauforschung der RWTH Aachen No. 12, Dissertation. [6] Hannawald J (2006) Determining the Tensile Softening Diagram of Concrete-Like Materials Using Hybrid Optimisation. Dordrecht: Springer, 2006. - In: Measuring, Monitoring and Modeling Concrete Properties. An International Symposium Dedicated to Prof. Surendra P. Shah, Northwestern University, USA, (Konsta-Gdoutos, M.S. (Ed.)), pp. 179-187. [7] Kang B-G, Lange J, Benning W, Brameshuber W (2008) Statistical evaluation of the microstructure of a multi-filament yarn embedded in concrete. CCC 2008 Challenges for Civil Construction, Porto, (Torres Marques et al. (Eds.)), 12 pages. [8] Charles RJ, Hillig WB (1962) The Kinetics of Glass Failure by Stress Corrosion. Charleroi: Union Scientifique Continentale du Verre. In: Compte Rendu, Symposium sur la Resistance Mecanique du Verre et les Moyens de l'Ameliorer, Florence, 25-29 Sept. 1961, pp. 511-527. [9] Wiederhorn SM (1967) Influence of Water Vapor on Crack Propagation in Soda-Lime Glass. Journal of American Ceramic Society 50(8):407-414 [10] Wiederhorn SM, Bolz LH (1970) Stress Corrosion and Static Fatigue of Glass. Journal of American Ceramic Society 53(10):543-548 [11] Kang B-G, Hannawald J, Brameshuber W (2008) Strength Degradation of AR-Glass Filaments due to Cyclic Tensile Loading. Farmington Hills. Mich.: American Concrete Institute, ACI SP-251, 2008. - In: Design & Applications of Textile Reinforced Concrete CD-ROM, 24 pages [12] Freiman SW (1980) Fracture Mechanics of Glass. New York: Academic Press. Elasticity and Strength in Glass (Uhlmann, D.R.; Kreidl, N.J. (Ed.)) 5:21-78
Product Service Systems as Advanced System Solutions for Sustainability

Myung-Joo Kang and Robert Wimmer

GrAT (Centre for Appropriate Technology), Vienna University of Technology, Vienna, Austria
[email protected]
Abstract. Sustainable Product Service Systems (PSS) are package solutions of products and services that are combined to directly satisfy client demands while creating a low environmental impact. PSS offers contrast with product sales offers, which serve those demands rather indirectly. From an environmental point of view, PSS solutions are often less material-intensive than product-based solutions. Therefore PSS, if designed properly, can effectively contribute to achieving the sustainability goal of decoupling value creation from resource consumption. This paper reviews the most recent research on PSS and introduces several interesting examples on the market, in order to help researchers and entrepreneurs understand the concept. A number of socio-economic trends underpinning the substantial advent of PSS strategies and a series of arguments regarding their sustainability are outlined. Based on the study, a practical mental model is suggested within the frame of sustainable development.
1 Introduction to Product Service Systems

In recent years, Product Service Systems (PSS) have been intensively discussed among academics and industries with regard to their potential for sustainability. A number of definitions and typologies have been suggested, and a large number of practical examples have been studied.

1.1 Definition

There are numerous definitions available for PSS. Some of those definitions can be used, in a sense, as a synonym of 'business offers' or 'value proposition' in the business literature. The most commonly known definitions are:
• A marketable set of products and services capable of jointly fulfilling a user's needs [1]
• The result of an innovation strategy, shifting the business focus from designing and selling physical products only, to selling a system of products and services which are jointly capable of fulfilling specific client demands [2]
In addition to these PSS definitions, some authors have defined sustainable PSS by including environmental and/or organisational conditions.
192
M.-J. Kang and R. Wimmer
• Sustainable PSS provide consumers with both functional and non-functional values which increase their satisfaction, yet consuming less material and energy [4] • Eco-efficient services are systems of products and services which are developed to cause a minimum environmental impact with a maximum added value [5] Each definition emphasises slightly different elements of PSS. Yet, the common prerequisites are value creation for needs satisfaction and low environmental load. 1.2 Examples in the Market A number of PSS examples that are available on the market are briefly introduced as follows. • City bike: Instant hiring of bicycles is possible on a time basis in a number of European cities (e.g. Copenhagen, Vienna, Berlin, Paris, etc.) Users can take a bike and return it to stations, either by using a deposit system or by paying for the use time. • Ski rental: Ski rental systems have become available at many ski resorts. The providers rent out an assortment of sports equipment. According to the customisation requirements, some products are specially designed for the rental services. Users do not need to carry heavy equipment by themselves, though they can use up-to-date models. • Car sharing associated with public transport: Car sharing is a network which gives shared access to the cars. In some advanced offers, public transport customers with subscription tickets can use the car sharing service without any deposit and monthly fee. It is assumed as the most economical and ecological traffic concept for urban agglomerations in the future. • Carpet leasing and maintenance: The American company, ‘Interface’, introduced a business strategy of leasing - rather than selling – carpets, so as to be able to take them back to recycle. The service employs the modular floor-covering with carpet tiles, which enables partial maintenance. • Solar power leasing and energy contracting: The company, ‘SunEdison’, pays, installs, owns and operates solar energy generators for clients, who pay a fixed amount. This system encourages firms, educational institutions and communities to get solar energy without a large amount of initial capital. • Reuse & refurbishment of prefabricated house units: ‘Sekisui Chemical’ in Japan produces prefabricated house units within a factory. They operate a resourcerecycling housing system to effectively utilise the units of old houses that have been normally demolished. Old houses are taken back to the factory and disassembled into units, inspected, repaired as necessary, and put on the market again. • Biological pest management: For environmentally friendly cultivation, ‘Koppert’ offers farmers crop-protection per protected square meter by using natural enemies. Clients pay a fixed fee per hectare with no additional charges for the use of natural predators.
Product Service Systems as Advanced System Solutions for Sustainability
193
2 PSS as Corresponding Strategies to Socio-economic Trends PSS judiciously respond to socio-economic trends. The emergence of PSS is a part of the trends as well as an expected strategy to enter the post-industrial age. There are several economic and social trends that foster PSS strategies in business. 2.1 Growth of Service-Based Industries It is obvious that the economic paradigm has shifted from the traditional mass material market to the knowledge, information and experience market. One of the indicators is the remarkable escalation of service industries in the last half century, in comparison with the other industrial activities (Fig.1) [4, 6, 7, 8]. Business activities are increasingly based upon immaterial values in tertiary industries (e.g. consulting, information technology) instead of only relying on the material values in primary and secondary industries (e.g. mining, manufacturing goods). One of the reasons for the increase of service industries lies in the intensive competition of product manufacturers in the saturated market. The margin from product sales is getting smaller in the price competition with the industrializing countries where production capability rises rapidly. In terms of technical functions, the difference between brands is no longer clearly distinguishable. To be outstanding in the market, therefore, companies started to provide additional services such as delivery, guarantee, repair, and take-back [9].
Fig. 1. Overview of changes in working population in industries (from 1934 to 2001) [4]
Another reason of the growth of service sectors is, needless to say, the development of information and communication technologies. Based on this, knowledge and information services are recognised to be more important than ever and extensively used [10, 11]. Numerous traditional business models have been transformed to be operated online (e.g. from libraries to internet searching engines, from paper-based mailing system to e-mail services), and completely new applications have been created (e.g. sharing of user created contents, social networks).
194
M.-J. Kang and R. Wimmer
2.2 Social Trends It seems that people have larger concerns than before, in the quality of life, such as well-being, work-life balance, and happiness. Having seen many social problems ironically increasing in spite of, or even due to, material prosperity, many of consumers started to give a skeptical view on material wealth. One of the results is shown in the change of perception, from the more, the better to the less, the better, or the better, the better. This quality-oriented view has been already observed in food consumption. Today, we have reached a consensus that more and more food does not symbolize a higher quality of life. Instead, less but healthy food of high quality is seen much more desirable. Such fundamental transposition of values can be also applied to other consumption and production areas [12]. Sooner or later, the change will be accelerated as it becomes more evident that quality of life cannot be fulfilled by material artifacts only. One of the signs, for instance, is the new global indices that show ecological efficiency for human well-being country by country [13, 14]. Those innovative measures put the well-being of citizens and the natural environment prior to the economic wealth. What counts in the measure is therefore, how efficiently the natural resources are converted to make the people live a long and happy life. Interestingly, though as predicted, the result shows that high levels of resource consumption do not automatically produce high levels of well-being (life-satisfaction) [13]. In business, the ethics of personal decision of consumers become a major concern. For example, so-called LOHAS (Lifestyles of Health and Sustainability) have been recognised as an emerging consumer type. There lies a great potential to create new service-oriented offers, or to improve current offers, in order to respond to their demands for sustainable alternatives. In the long run, social trends and concerns often lead to institutional policies. For example, the series of European regulations for electric devices (e.g. Integrated Product Policy, Waste of Electrical and Electronic Equipment, Restriction of Hazardous Substances) aim to impose producers with larger responsibility for their products, along the lifecycle. To proactively comply with those legislations, it seems necessary for the producers to find innovative solutions. Through PSS strategies, the conventional manner of production, sales, and disposal can be re-organised. For example, product leasing services can promote remanufacturing of products as manufacturers remain as the owner of the products at end-of-life, thus, can be responsible for continual leasing, recycling or proper disposal [15, 16]. In other words, the leasing service enables them to secure uninterrupted access to their resource assets, and optimize the design and services accordingly.
3 PSS and Sustainability 3.1 Resource Productivity As mentioned earlier, PSS solutions can effectively contribute to decoupling value creation and resource consumption. It is primarily because services can create values out of immaterial sources such as knowledge, information, ideas, labour, and time. In this regard, service-based solutions show, in general, higher resource productivity
compared to pure product solutions. As an example, a series of statistical analyses of the Austrian market is shown. According to the results, the domestic material consumption (DMC) in Austria increased by 26.2% from 1976 to 2005 [6]. During the same period, the gross domestic product rose by approximately 94.7%. As a result, the material intensity (DMC/GDP) fell by about 35% (the ratio 1.262/1.947 ≈ 0.65). This means that more value was generated with lower material consumption; in other words, resource productivity (GDP/DMC) increased. This possibility of decoupling can be linked to the growth of service industries and the decline of manufacturing industries since the '70s (Fig. 1). The increase of service industries is certainly one of the factors explaining the increased resource productivity.
Further proof of the high resource productivity of service solutions is the direct calculation carried out for different industrial sectors. As shown in Table 1, public and private services in the Austrian market have achieved the highest added value creation per material input [4, 6].

Table 1. Resource productivity (GDP/DMC) in different industries in Austria (2003) [4, 6]

Industrial sector                                    Resource productivity
Stone, glass, and ceramics                           48
Chemistry, refinery, and coke                        145
Mining                                               147
Building                                             190
Wood, wooden products and timber                     235
Iron and steel production, non-iron metals           315
Paper, print, and publishing                         428
Food, beverage, and tobacco                          461
Air transport                                        1,575
Land transport                                       1,636
Other production                                     2,177
Energy production and supply                         2,919
Mechanical engineering and machinery production      5,332
Textile and leather                                  5,380
Vehicle production                                   5,645
Ship transport                                       6,103
Public and private services                          6,180
3.2 Efficiency in Product Use

In addition to resource efficiency, the efficiency in product use should be taken into consideration. For example, sharing and renting can multiply the functional benefit of a product by satisfying a bigger number of users. Repair and maintenance services prolong the useful lifetime of products, and take-back and recycling services give a new life to certain materials and products.
Some authors claimed that we have to improve our environmental efficiency by Factor 4 [17, 18] up to Factor 20 [19, 20, 21]. Factor 20 is based on a doubling of the world population (×2) combined with a fivefold increase of wealth per capita (×5) while halving the total global environmental burden (×2) [21]. The authors see that incremental technological innovation and eco-designed products of companies will not be able to reduce the environmental impact by Factor 20. Instead, a combination of technological, cultural and institutional changes is suggested as a strategy to achieve the goal [22, 23]. This view supports the necessity of developing sustainable PSS solutions, which have the potential for a radical improvement.

3.3 Conditions to be Sustainable

PSS can be environmentally beneficial when they achieve higher resource efficiency than competing product-based offers. However, it is difficult to figure out the resource efficiency of a PSS model, due to its far-reaching influences and blurry boundaries. To facilitate the evaluation of the environmental sustainability of a PSS strategy, the following mental model (Fig. 2) is suggested. End-of-pipe management, cleaner production, and eco-product design are sublevel strategies forming the product (hardware) part of PSS.
Fig. 2. Product service systems within the frame of sustainable development
The figure above represents the intrinsic feature of PSS: the superior solution integrates the sublevel strategies. End-of-pipe management needs to be taken into account to perform cleaner production, and both the production processes and the end-of-life treatment are reflected in the design of an eco-product. Likewise, sustainable PSS development needs to integrate all subsequent activities. On the basis of this mental model, a new PSS offer can be created, and existing offers can be improved by adjusting the sublevel strategies. If an enterprise aims at system-level innovation, it should consider PSS first of all as a goal to achieve, because the sublevel strategies are to be organised according to the new system architecture. Regarding evaluation, conversely, the overall PSS strategy can be assessed by determining the environmental performance of the individual strategies. Above all these sustainable development approaches stands sustainable system innovation. Fundamental system innovation can be realised not only by considering environmental sustainability, but also by integrating economic and social aspects.
Additional elements required for system innovation are, for instance, a profit generation model, consumer market trends, institutional regulations, emotional satisfaction, compatibility among various PSS offers, and so on.
4 Methods for Sustainable PSS Development A general finding in previous studies is that PSS are not automatically more sustainable than product solutions, but have to be designed systematically. With methodological help, sustainable PSS solutions are much more likely to be produced than by trial and error. From a business developer's perspective, shifting the business focus from selling products to providing value through service-oriented solutions appears a significant challenge. Especially if an organisation has concentrated on product manufacturing and sales for a long time, it needs guidance in learning how to find innovative alternatives step by step. This also stresses the importance of appropriate development procedures and supportive tools. The available methods for PSS development show different structures of phases and tool sets, depending on the perceptual understanding of PSS, the development goals, the available time and financial circumstances, and so on. One of the methods elaborated through extensive research by European experts in sustainable development is MEPSS (Methodology for Product Service System Innovation) [24]. The method is structured on a modular basis, so companies can select suitable modules according to their intention. Tools employed in MEPSS aid system developers in systematically dealing with dynamic system elements (e.g. physical products, service organisation, stakeholders involved, and material, financial and information flows). Yet methods and tools do need reflective practices [5] to be verified and refined. By applying MEPSS in several real-life cases, some tools were improved and new tools were designed. As a result, a simplified toolset for small and medium-sized companies has been suggested [4], and this is being further developed to reflect the relevant knowledge and experience gained from practice.
5 Conclusion PSS are the most advanced system solutions for sustainable development. The current socio-economic changes as well as environmental concerns encourage professional adoption of the concept. However, for more successful establishment of PSS approaches, methodological support needs to be continuously refined through practice and by monitoring progress. Recent research on PSS has particularly focused on the functional replacement offered by PSS. In fact, PSS solutions have been fairly well accepted in B2B markets. However, there are growing concerns about the non-functional aspects of PSS [25]. In B2C markets, the distinction between successful and unsuccessful service models is recognisable. For example, hotels and restaurants are successful PSS models which have been established as a norm, or even as a gauge measuring quality of life in society. On the other hand, launderettes struggle with the image of being old-fashioned and uncomfortable in comparison with owning a washing machine. Such consumer
behaviour and perception indicate that functional alternatives have a limit in achieving fundamental system innovation in terms of market penetration. The importance of cultural, ethical, and emotional values needs to be scientifically analysed and translated into reliable guidance.
References [1] Goedkoop M J, van Halen C J G, te Riele H, Rommens P J M (1999) Product Service Systems, Ecological and Economic Basics. VROM, EZ, The Hague, Netherlands, p. 18 [2] Manzini E, Vezzoli C (2002) Product-Service-Systems and Sustainability, Opportunities for Sustainable Solutions. Politecnico di Milano, UNEP, Paris [3] Mont O (2004) Product-service systems: Panacea or myth? PhD Thesis, IIIEE, Lund University, Sweden [4] Wimmer R, Kang MJ, Tischner U, Verkuijl M, Fresner J, Möller M, Erfolgsstrategien für Produkt-Service Systeme. Final Report, BMVIT: Fabrik der Zukunft, Vienna, Austria (submitted in November 2007, currently in review) [5] Brezet JC, Bijma AS, Ehrenfeld J, Silvester S (2001) The design of eco-efficient services. TU Delft for the Dutch Ministry of Environment, Delft, Netherlands [6] Statistik Austria (2006) Statistisches Jahrbuch, Bevölkerung, p. 52 [7] Statistics Bureau, the Director-General for Policy Planning (Statistical Standards) and the Statistical Research and Training Institute (2005) Summary of Prompt Sample Tabulation Result, 2005 Population Census, Japan. http://www.stat.go.jp/English/data/kokusei/2005/sokuhou/03.htm. Accessed 07 July 2008 [8] U.S. Census Bureau (2008) Service Annual Survey 2006, Current Business Report. U.S. Department of Commerce, pp. 4-5 [9] Kang MJ, Wimmer R (submitted in March 2008, accepted) Surface Treatment in Buildings: a Sustainable Product Service System. World Sustainable Building Conference 08, Melbourne, Australia [10] Wimmer R (2003) Success and Failure Factors of Product-Service Systems. Proceedings of Sustainable Innovation 03, Creating Sustainable Products, Services and Product-Service-Systems, 27-28 October 2003, Stockholm, Sweden [11] Ness N, Clement S, Field M, Filar J, Pullen S (2005) (Approaches towards) Sustainability in the Built Environment Through Dematerialization. World Sustainable Building Conference, Tokyo, 27-29 September 2005 [12] Kang MJ, Wimmer R (2007) Product Service Systems as Systemic Cures for Obese Consumption and Production. Journal of Cleaner Production, doi:10.1016/j.jclepro.2007.08.009 [13] Happy Planet Index. http://www.happyplanetindex.org/index.htm. Accessed 07 July 2008 [14] Global Footprint Network. http://www.footprintnetwork.org/. Accessed 07 July 2008 [15] Hawken P, Lovins A, Lovins LH (1999) Natural Capitalism: the next industrial revolution. Earthscan Publications, London [16] Fishbein B, McGarry L, Dillon P (2000) Leasing: a step toward producer responsibility. Inform Inc, New York [17] Schmidt-Bleek F (2000) Das MIPS-Konzept: Weniger Naturverbrauch - mehr Lebensqualität durch Faktor 10. Droemer Knaur
[18] Von Weizsäcker E, Lovins AB, Lovins HL (1997) Factor Four: Doubling Wealth, Halving Resource Use. The new report to the Club of Rome, Earthscan Publications Ltd, London [19] Vergragt Ph, Jansen L (1993) Sustainable Technological Development: the making of a long-term oriented technology programme. Project Appraisal 8:134-140 [20] Weterings RAPM, Opschoor JB (1992) The environmental capacity as a challenge to technology development. Advisory Council for Research on Nature & Environment (RMNO), Rijswijk, Netherlands [21] Vergragt P (1998) The Sustainable Household: Technological and Cultural Changes. In: Brand E, de Bruijn T, Schot J (eds), Partnerships and Leadership Building Alliances for a Sustainable Future, Greening of Industry Network Conference, Rome, November 15-18, 1998 [22] Wimmer R (2002) Fuzzy Logic Expertensystem - Ein integrativer Ansatz zur Bewertung von technischen Lösungen unter dem Gesichtspunkt Nachhaltiger Entwicklung. PhD Thesis, Vienna University of Technology, Vienna, Austria [23] Quist J, Knot M, van der Wel M, Vergragt Ph (1999) Strategies for Sustainable Households. 2nd International Symposium on Sustainable Household Consumption, 'Household Metabolism: from concept to application', Groningen, June 3-4, 1999, pp. 175-186 [24] Van Halen C, Vezzoli C, Wimmer R (2005) Methodology for Product Service System Innovation: How to Develop Clean, Clever, and Competitive Strategies in Companies. Koninklijke Van Gorcum, Assen, Netherlands [25] Wimmer R, Kang MJ, Lee KP (2006) Emotional PSS Design: beyond the Function. Proceedings: Changes to Sustainable Consumption, 23-25 November 2006, Wuppertal, Germany. Workshop of the Sustainable Consumption Research Exchange (SCORE!) Network
Life Prediction of Automotive Vehicle’s Door W/H System Using Finite Element Analysis Byeong-Sam Kim1, KwangSoo Lee2, and Kyoungwoo Park3 1
Department of Automotive, Hoseo University, Asan, visiting professor of Pole Universitaire Leonard de Vinci, France [email protected] 2 Department of Automotive, Hoseo University, Asan, Korea 3 Department of Mechanical, Hoseo University, Asan, Korea
Abstract. A Vehicle’s door wireing harness arrangement structure is provided. In vehicle’s door wiring harness (W/H) system is more toward to arrange a passenger compartment than a hinge and a weatherstrip. This article gives some insight into the dimensioning process, with special focus on large deflection analysis of wiring harness (W/H) in vehicle’s door structures for durability problem. A Finite elements analysis for door wiring harness (W/H) is used for residual stresses and dimensional stability with bending flexible. Durability test data for slim test specimens were compared with the numerical predicted fatigue life for verification. The final lifing of the component combines the effects of these microstructural features with the complex stress state arising from the combined service loading and residual stresses.
1 Introduction In vehicle’s door wiring harness (W/H) system is more toward to arrange a passenger compartment than a hinge and a weatherstrip. An opening/closing member of a vehicle is attached to a vehicle by a hinge in a manner enabling easy opening and closing of the opening/closing member. Such members include doors, such as side-doors and rear doors, and other opening/closing members, such as trunk lids. Definitely any wiring harness, it should have sufficient strength to withstand any abrupt situations without affecting the performance of the total system. Fig.1 shows the typical wiring harness system of the front portion of the car [1], [2]. An automotive electronic system has been able to anticipate their needs for reliable and cost effective connection systems. A vehicle’s wiring harness (W/H) system keeps everything else going, powering every component, every switch, and every device. It’s the vehicle’s central nervous system. It must work, every time and all the time. Without connection system, no system will work; it will play vital role any industry whether in automotive. The main function of the connection system is to distribute the power supply from one system to another system. The wiring harness system must not only conform to such mechanical performance requirements; like strength, engage force, mating force, durability, but also to electrical performance requirements like low level termination resistance, voltage drop, isolation resistance, temperature rise. In arranging a wire harness on a vehicle door,
Fig. 1. Automotive front door wiring harness (W/H) system: open/close position
when the wire harness is arranged from the passenger-compartment side of an inner panel to the body side of the door, the wire harness is not passed through an aperture, so that installation becomes easy. However, since the wire harness is arranged at a point closer to the passenger-compartment side than the hinge joining the body and the door, it becomes necessary to extend or contract the wire harness as the door is opened or closed. Under this repeated opening/closing operation, the W/H system suffers fatigue problems, in which a tube, grommet, copper wire, etc. fail after about 1-5 x 10^5 cycles. This paper gives some insight into the dimensioning process, with special focus on fatigue analysis of the W/H in vehicle door structures [3].
2 Finite Element Analysis 2.1 Definition of Model The large-deflection problem considered in this study concerns the behaviour of the front-door W/H, with a physical jig design providing test performance and reliability data for the analysis. Fig. 1 shows the wire line extracted from the body and door structure, used as a reference guide for creating a solid model with sweep capabilities. The scope of this work includes the development of a slam-test method. The slam tester was designed by Packard Korea in collaboration with GM-Daewoo. Endurance life prediction of the door W/H uses finite element analyses together with the slam tester (Fig. 1). In the automotive industry, a long development period is necessary to secure the safety and reliability of the vehicle within fatigue and durability considerations. The slam test is necessary to extend these investigations to the W/H while the door is opened and closed. The cause of W/H failure is analysed with the slam tester [4]. Each time the door is opened or closed, the W/H is subjected to combined tension/bending loading. Hence, a nonlinear large-deflection analysis needs to be performed to find the resulting plastic deformation after the loads are removed. 2.2 FEM Modelling of the W/H Each of the finite element models created for the different test configurations in this work was developed with a computer-aided design pre-processor. The W/H front-door finite element models had the same cable bundle configuration as the samples used in the experimental tests.
Fig. 2. Diagram for analysis procedure (part selection, 3D modelling, IGES/STEP conversion and CAD compatibility; mesh generation, contact, material properties, boundary conditions, loads and parameter set; bending test with temperature change and endurance analysis; slam test specification, tester, prototype and slam test; comparison and verification of analysis and test results, leading to design change or life prediction)
Some geometrical assumptions were used to represent the W/H and to simplify the 3D model; the stiffening effect of the outside tape is estimated to be negligible. Table 1 gives the specification of the bundle composed of 19 wires of 0.19 mm diameter. 2.3 FEM Dynamic Analysis The analyses were performed using the commercial nonlinear finite element code ABAQUS Explicit v6.6 [5], [6], executed on an IBM A-Pro (dual 2 GHz CPUs). The Abaqus explicit dynamics procedure performs a large number of small time increments efficiently. An explicit central-difference time integration rule is used; each increment is relatively inexpensive (compared to the direct-integration dynamic analysis procedure available in Abaqus/Standard) because no set of simultaneous equations has to be solved. The explicit dynamics analysis procedure is based upon the implementation of an explicit integration rule together with the use of diagonal ('lumped') element mass matrices [7]-[10]. The equations of motion for the body are integrated using the explicit central-difference integration rule [6].
\[
\dot u^N_{(i+1/2)} = \dot u^N_{(i-1/2)} + \frac{\Delta t_{(i+1)} + \Delta t_{(i)}}{2}\,\ddot u^N_{(i)},
\qquad
u^N_{(i+1)} = u^N_{(i)} + \Delta t_{(i+1)}\,\dot u^N_{(i+1/2)} \qquad (1)
\]
where \(u^N\) is a degree of freedom and the subscript \(i\) refers to the increment number in an explicit dynamics step. The central-difference integration operator is explicit in the sense that the kinematic state is advanced using known values of \(\dot u^N_{(i-1/2)}\) and \(\ddot u^N_{(i)}\) from the previous increment. The explicit integration rule is quite simple, but by itself does not provide the computational efficiency associated with the explicit dynamics procedure. The key to the computational efficiency of the explicit procedure is the use of diagonal element mass matrices, because the accelerations at the beginning of the increment are computed by
\[
\ddot u^N_{(i)} = (M^{NJ})^{-1}\left(P^J_{(i)} - I^J_{(i)}\right) \qquad (2)
\]
where \(M^{NJ}\) is the mass matrix, \(P^J\) is the applied load vector, and \(I^J\) is the internal force vector. A lumped mass matrix is used because its inverse is simple to compute and because the vector multiplication of the mass inverse by the inertial force requires only \(n\) operations, where \(n\) is the number of degrees of freedom in the model. The explicit procedure requires no iterations and no tangent stiffness matrix.
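To make the update rule of Eqs. (1) and (2) concrete, the following minimal sketch (not part of the original paper; the array names and the one-entry-per-degree-of-freedom layout are illustrative assumptions) advances the kinematic state one increment using a lumped mass vector, so that inverting the mass matrix is an elementwise division:

```python
import numpy as np

def explicit_step(u, v_half, m_lumped, p_ext, f_int, dt_i, dt_ip1):
    """One increment of the explicit central-difference rule, Eqs. (1)-(2).

    u        : displacements u_(i)
    v_half   : mid-increment velocities, u-dot_(i-1/2)
    m_lumped : lumped (diagonal) mass entries, one per degree of freedom
    p_ext    : applied load vector P_(i)
    f_int    : internal force vector I_(i)
    dt_i, dt_ip1 : previous and current time increments
    """
    a = (p_ext - f_int) / m_lumped                # Eq. (2), elementwise mass inverse
    v_next = v_half + 0.5 * (dt_ip1 + dt_i) * a   # Eq. (1), velocity update
    u_next = u + dt_ip1 * v_next                  # Eq. (1), displacement update
    return u_next, v_next
```

Because no simultaneous equations are solved, the cost per increment is linear in the number of degrees of freedom, which is exactly the property the text attributes to the explicit procedure.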
2.4 Material Properties The tube material is assumed to be the same as that of the wires. The wire material is a copper alloy whose properties were provided by Packard Korea [3], [4]. The characteristics of the W/H are presented in Table 1. Several factors are important in the test; this study, however, was restricted to cables meeting the environmental and tolerance criteria shown in Table 1. The material model used for the W/H is elasto-plastic.

Table 1. Specification of W/H types by Standard AVSS series (unit: mm)

Section   Cable bundle        Diameter   Thickness   Outside diameter      Resistance
          (wires/diameter)                           Standard     Max      of cable
0.3       7/0.26              0.8        0.3         1.4          1.6      50.2
0.5       7/0.32              1.0        0.3         1.6          1.7      32.7
0.85      19/0.24             1.2        0.3         1.8          1.9      21.7
1.25      19/0.29             1.5        0.3         2.1          2.2      14.9
2.0       37/0.26             1.8        0.4         2.6          2.7      9.5
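As a rough consistency check on the resistance column of Table 1 (a sketch, not from the paper; it assumes the tabulated values are conductor resistances in mOhm/m and an effective copper-alloy resistivity of about 1.8e-8 Ohm*m):

```python
import math

RHO_ALLOY = 1.8e-8  # assumed effective resistivity of the copper alloy, Ohm*m

# section size -> (number of strands, strand diameter in m), from Table 1
bundles = {"0.3": (7, 0.26e-3), "0.5": (7, 0.32e-3), "0.85": (19, 0.24e-3),
           "1.25": (19, 0.29e-3), "2.0": (37, 0.26e-3)}

for size, (n, d) in bundles.items():
    area = n * math.pi * (d / 2.0) ** 2   # total conductor cross-section, m^2
    r = RHO_ALLOY / area * 1e3            # resistance per metre, mOhm/m
    print(f"AVSS {size}: {r:.1f} mOhm/m")
```

Under these assumptions the computed values (about 48, 32, 21, 14 and 9 mOhm/m) track the tabulated 50.2, 32.7, 21.7, 14.9 and 9.5 closely.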
2.5 Boundary Conditions Contact interactions are considered between the cables. To avoid out-of-plane deformation of the 19 wires, they are enveloped by a tube represented by a shell of 0.15 mm thickness. A rotation is imposed on the bundle, representing the opening of the door by 75°. This rotation induces bending and torsion moments in the wires. The damage to the wire is easily identified in Table 1: for the 7-wire bundle, with a depth of 50 mm and a wire length of 600 mm, the resistance value changes after 350,000 cycles.
3 Slam Test 3.1 W/H Test Model The slam test is necessary to extend these investigations to the W/H while the door is opened and closed. The cause of W/H failure is analysed with the slam tester [11]. Each time the door is opened or closed, the W/H is subjected to combined tension/bending loading. SEM analysis of a Standard AVSS 0.5SQ specimen (sample 14) revealed the failure morphology referred to in Table 1. The W/H failure by cracking is estimated to occur in the elastic tube and in the inner copper cable. Several parameters influence this kind of problem: the number of wires in a bundle, the cable diameter, the clearance, the elasticity of the tube, etc. The slam tester design allows failure analysis to be presented through the design guidelines [1], [2] and [12], although every car manufacturer has its own unique features and system design expertise. 3.2 Test Setting The test equipment is configured such that the door is opened/closed 10 times per minute, and the resistance of each wire is measured every 10,000 cycles from 50,000 up to 350,000 cycles [13]-[15]. An actual vehicle front-door W/H mainly uses 7-wire and 19-wire bundles. The damage to the wire is easily identified in Table 2.
4 Results and Discussion The cyclic loading history can be considered as a sine curve and corresponds to opening/closing of the door through 75°. Fig. 3 presents the damage criteria over the wires; a value of 1 indicates that a crack has occurred. The life cycle obtained in the experiments (300,000) is three times the 100,000-cycle service life usually assumed. This means that a bundle with a larger number of wires deforms in a much more flexible mode, so that lower working stresses are observed. In addition, for the same cable, a larger depth between the cable and the door body also lowers the working stresses, as can be seen in Table 2. The numerical and experimental results obtained for the 7-wire and 19-wire bundles are presented and compared in Table 3 for the 50 mm depth case. The numerical result is supported by the experimental tests. Through this comparison, the endurance analysis method and the results for the flexible-bending endurance of the wire harness can be trusted, as shown in Table 2. For the same depth, a larger number of wires markedly improves the endurance life cycle [8]. The results obtained for the 6 cases of Table 2 and Table 3 show that:
– the 19-wire bundle is more flexible than the 7-wire bundle;
– the maximum stress level is higher in the 7-wire bundle;
– the stress level is higher for the 50 mm depth cases in Table 3.
Table 2. Results of maximum stresses and endurance life cycles for the different cases (unit: N/mm²)

Cable No./diameter   Case                    Analysis     Max. stress   Endurance cycle
7/0.32 mm            Case 1 (depth 50 mm)    Non-linear   7.26          487,000
7/0.32 mm            Case 2 (depth 100 mm)   Non-linear   6.87          518,000
7/0.32 mm            Case 3 (depth 150 mm)   Non-linear   3.04          600,000
19/0.19 mm           Case 4 (depth 50 mm)    Non-linear   3.78          Infinite
19/0.19 mm           Case 5 (depth 100 mm)   Non-linear   3.62          Infinite
19/0.19 mm           Case 6 (depth 150 mm)   Non-linear   1.60          Infinite
Table 3. Comparison of slam test results and endurance analysis (depth 50 mm)

No. of cables/diameter   Standard life cycle   FE analysis result   Test result     Max. damage value
7/0.32 mm                100,000               487,000              353,054         0.616
19/0.19 mm               100,000               Infinite life        Infinite life   -

Note: approximated endurance life = 300,000/0.616 = 487,000 cycles.

Fig. 3. Damage criteria and endurance life cycle: (a) endurance analysis for case 2, with the most damaged area marked; (b) maximum deflection of the cable bundle
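The extrapolation noted under Table 3 amounts to a linear (Miner-type) damage scaling; this reading is an interpretation consistent with the table note rather than a rule stated explicitly in the paper. With damage \(D = 0.616\) accumulated after \(N = 300{,}000\) applied cycles,
\[
N_{\mathrm{life}} = \frac{N}{D} = \frac{300{,}000}{0.616} \approx 487{,}000 \ \text{cycles},
\]
which is the FE-analysis life reported for the 7/0.32 mm bundle.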
5 Conclusion The FE analysis results indicate that the design is well within the design standards. Adopting FE analysis using ABAQUS and FEMFAT not only saves time, money and slam testing, but also guides the product engineer in further improvement and modification of the W/H system. The biggest challenges of such analyses are the FE modelling of the wiring harness with analytical rigid surfaces, and dealing with convergence issues due to large deformation of the elements. This research applied integrated life-cycle design, analysis and testing technologies to improve the endurance life of the W/H and to secure the underlying source technology for deriving a prototype, and the following useful results were obtained. The slam tester, designed and built for the vehicle test, was able to reduce time and cost. The established endurance life-cycle procedure is designed to help improve durability and productivity and to support testing. Comparison of the test results and the analysis shows that the endurance results for the vehicle W/H can be trusted, and provides design guidelines on the number of wires and the depth for endurance life efficiency.
References [1] Bungo EM, Rausch C (1990) Design Requirements for: Metric-Pack and Global Termina. Packard Electric internal report, Warren, Ohio [2] Packard Electric (1984) Environmentally protected connector systems. Packard Electric internal report, Warren, Ohio [3] Kim BS, Lee KS (2007) Life prediction analysis of wiring harness system for automotive vehicle. Internal report of Hoseo University [4] Lakshmi B, N William G, Bhatia SA (2006) Non-linear finite element analysis of typical wiring harness connector and terminal assembly using ABAQUS/CAE and ABAQUS/Standard. 2006 ABAQUS Users' Conference, 1:345-357 [5] ABAQUS User's Manual Ver. 6.6 (2007) Dassault Systèmes Inc [6] ABAQUS Example Problems Manual (2007) Volume I, 6.2 (ed) [7] Miller K, Joldes G, Lance D, Wittek A (2007) Total Lagrangian explicit dynamics finite element algorithm for computing soft tissue deformation. Communications in Numerical Methods in Engineering 23:121-134 [8] Lingtian G, Kaishin L, Ying L (2007) A meshless method for stress-wave propagation in anisotropic and cracked media. International Journal of Engineering Science 45:601-616 [9] Benjamin G, Andrew I, Peter K (2008) Sensitivity Analysis of Real-Time Systems. Int. J. of Computer Science 3:6-13 [10] Park K, Kim BS, Lim HJ, et al (2007) Performance improvement in internally finned tube by shape optimization. Int. J. of Applied Science, Eng. and Technology, Vol. 4, No. 3 [11] Fermer M, Svensson H (2001) Industrial experiences of FE-based fatigue life predictions of welded automotive structures. Fatigue & Fracture of Engineering Materials and Structures 24:489-500 [12] Aichberger W, Riener H, Dannbauer H (2007) Regarding influences of production processes on material parameters in fatigue life prediction. SAE 2007 World Congress, Vol. 26, Detroit
[13] Gaier C, Kose K, Hebisch H, Pramhas G (2005) Coupling forming simulation and fatigue life prediction of vehicle components. NAFEMS 2005 Int. Conference, Malta [14] Halászi C, Gaier C, Dannbauer H (2007) Fatigue life prediction of thermo-mechanically loaded engine components. 11th European Automotive Congress, Budapest [15] FEMFAT User's Manual Ver. 4.6 (2007) MAGNA Powertrain Inc.
Peak Oil and Fusion Energy Development Chang Shuk Kim ITER Organization, Tokamak Department, Cadarache, France [email protected]
Abstract. If industrial civilization does not figure out how to survive and thrive without cheap fossil energy, technological civilization will be a short blip in the history of our species. A child born in 1990, if s/he lives a long, healthy life, may see the end of the age of oil. The peak of global petroleum discovery was around 1962, and you cannot extract more oil than has been found. The oil will take decades to run out (the US oil peak was in 1970); the issue is when supply no longer keeps up with demand, not when oil runs out, and after peak oil the OPEC countries in the Middle East will have most of the remaining oil. We cannot eat cellular phones, computers and the Internet; we must eat food, and oil (energy) is critical to producing food. Many kinds of energy sources are now being investigated, and some of them are already contributing a little. Additionally, 439 nuclear power plants are in operation on the Earth, and their number should grow to some extent; but nuclear power faces safety, transmutation and political hurdles. Fusion is thus the only known technology capable in principle of producing a large fraction of the world's electricity without serious issues. We should burn our oil to develop fusion power plants. Most big countries and regions, such as the EU, the US and Japan, plan to construct commercial fusion power plants around 2040, as does Korea. The testing of key technologies should be done in advance by ITER (International Thermonuclear Experimental Reactor).
1 Introduction The price of crude oil is now well past the mythical limit of 100 dollars a barrel; in fact, we are already talking about 200 dollars. Corn and wheat prices are rising at an alarming rate because of the oil-based fertilizers they depend upon. War has become an everyday fact in the oil-producing regions of the world. Experts are pointing to charts that show how production is decreasing all over the planet. If not on its way to total extinction, oil is at the very least becoming increasingly economically and politically inaccessible. If it weren't for climate change, we could just kick back; after all, we have a subway system, district heating and well-insulated walls. But the fact is that we are all now affected by the consequences of a fossil-based lifestyle, just to differing degrees. Considering our climate and environmental goals, the peak and eventual end of oil, the large fraction of the world's energy that a new source must cover, safety, and technical capability, the author evaluates fusion energy development as viable. The economic value of fusion energy could be huge, but it lies several decades in the future.
2 Peak Oil [1] In the 1860s, a German engineer found a way to insert the fuel directly into the cylinder, inventing the internal combustion engine, which was much more
efficient. At first, it used benzene distilled from coal, before turning to petroleum refined from crude oil, for which it developed an unquenchable thirst. The first automobile took to the road in 1882 and the first tractor ploughed its furrow in 1907. This cheap and abundant supply of energy changed the world in then unimaginable ways, leading to the rapid expansion of industry, transport, trade and agriculture, which has allowed the population to expand six-fold in parallel. These remarkable changes were in turn accompanied by the rapid growth of financial capital, as banks lent more than they had on deposit, confident that Tomorrow's Economic Expansion was collateral for To-day's Debt, without necessarily recognizing that the expansion was driven by an abundant supply of cheap, largely oil-based energy. Oil was formed in the geological past under well-understood processes. In fact, the bulk of current production comes from just two epochs of extreme global warming, 90 and 150 million years ago, when algae proliferated in the warm sunlit waters, and the organic remains were preserved in the stagnant depths to be converted to oil by chemical reactions. Natural gas was formed in a similar way, save that it was derived from vegetal material. It follows that these are finite natural resources subject to depletion, which in turn means that production in any country or region starts following the initial discovery and ends when the resources are exhausted. The peak of production is normally passed when approximately half the total has been taken, termed the midpoint of depletion. The peak of oil discovery was passed in the 1960s, and the world started using more than was found in new fields in 1981. The gap between discovery and production has widened since. Many countries, including some important producers, have already passed their peak, suggesting that the world peak of production is now imminent. Were valid data available in the public domain, it would be a simple matter to determine both the date of the peak and the rate of subsequent decline, but as it is, we find a maze of conflicting information, ambiguous definitions and lax reporting procedures. In short, the oil companies tended to report cautiously, being subject to strict Stock Exchange rules, whereas certain OPEC countries exaggerated during the 1980s when they were competing for quota based on reported reserves. Despite the uncertainties of detail, it is now evident that the world faces the dawn of the Second Half of the Age of Oil, when this critical commodity, which plays such a fundamental part in the modern economy, heads into decline due to natural depletion. A debate rages over the precise date of the peak, but it rather misses the point: what matters, and matters greatly, is the vision of the long remorseless decline that comes into sight on the other side of it. The transition to decline threatens to be a time of great international tension. Petroleum Man will be virtually extinct this century, and Homo sapiens faces a major challenge in adapting to his loss. Peak Oil is by all means an important subject.
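The claim that the production peak is normally passed at the midpoint of depletion corresponds to the classic logistic (Hubbert) depletion model; the model is not cited explicitly here, but it formalises the statement. If \(Q(t)\) is cumulative production and \(U\) the ultimately recoverable total,
\[
P(t) = \frac{dQ}{dt} = r\,Q\left(1 - \frac{Q}{U}\right),
\qquad
\frac{dP}{dQ} = r\left(1 - \frac{2Q}{U}\right) = 0
\;\Longrightarrow\; Q_{\mathrm{peak}} = \frac{U}{2},
\]
so production is maximal exactly when half of the recoverable resource has been extracted.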
3 Other Energy Sources Up to now, there has been no ultimate energy source other than fossil fuels. The policy should therefore be to rely on several different energy sources: solar, wind, hydro, biomass and nuclear. Unfortunately, solar panels, windmills and other renewable energy systems require a remarkable fraction of energy input to manufacture and to prepare their infrastructure. Solar and hydrogen are sufficient in principle, but currently very
expensive and mostly not where needed. Apart from solar, the other renewable energy systems do not have the potential to meet a large fraction of global demand. Right now, there are 439 nuclear power plants in operation on the Earth: EU (163), Korea (20), US (104), Japan (55), Russia (31), etc. Their number should grow to some extent, but safety issues, transmutation and political hurdles remain.
4 Fusion Energy Development The EU, USA, Japan and Russia have their own DEMO and commercial FPP (fusion power plant) concepts, and research on the development of those concepts is mature. The EU and Korea have very similar time schedules for DEMO and FPP: DEMO construction has been scheduled around 2025, and commercial power plant construction around 2040. Designs and plans for DEMO and FPP rely strongly on the ITER project [2]. Contributions from existing and future planned tokamak devices are required. The development of ways to reduce the cost of electricity is essential, through steady-state operation, compact size, and blanket and conversion-cycle improvements. Tritium breeding blankets are not present in ITER but are necessary in DEMO. Their two main functions are to convert neutron and gamma energy into heat collected by a high-grade coolant, reaching high conversion efficiency (>30%), and to produce and recover all the tritium required as fuel for D-T reactors (tritium-breeding self-sufficiency). The European Power Plant Conceptual Study (PPCS) [3] has been a study of the conceptual designs of five commercial fusion power plants, with the main emphasis on system integration. The study focused on five power plant models, which are illustrative of a wider spectrum of possibilities. The models are all based on the tokamak concept and have approximately the same net electrical power output, 1500 MWe. European utilities and industry developed the requirements that a fusion power plant should satisfy to become an attractive source of energy [4]. They concentrated their attention on safety, waste disposal, operation and criteria for an economic assessment. The most important recommendations are summarised as follows:
• There should be no need for an emergency evacuation plan under any accident driven by in-plant energies or due to the conceivable impact of ex-plant energies.
• No active systems should be required to achieve a safe shut-down state.
• No structure should approach its melting temperature under any accidental conditions.
• 'Defence in depth' and, in general, ALARA principles should be applied as widely as possible.
• The fraction of waste which does not qualify for 'clearance' or for recycling should be minimised after an intermediate storage of less than 100 years.
• Operation should be steady state with a power of about 1 GWe for base load, with a friendly man-machine interface. However, as the economics of fusion power improves substantially with increase in the net electrical output of the plant, the net electrical output of all the PPCS models was chosen around 1.5 GWe.
• Maintenance procedures and reliability should be compatible with an availability of 75-80%. Only a few short unplanned shut-downs should occur in a year.
• Since public acceptance is becoming more important than economics, economic comparison should be made with energy sources of comparable acceptability, but including the economic impact of 'externalities'.
The fusion power output is determined primarily by the thermodynamic efficiency and power amplification of the blankets and by the amount of gross electrical power recirculated, in particular for current drive and coolant pumping. An 'improved supercritical Rankine cycle' seems the most promising. A revised configuration of the primary heat transport system leads to closer heat transfer curves between the primary and secondary sides, maximizing the thermal exchange effectiveness. It results in higher steam temperatures (an increase of gross efficiency) and less steam mass flow (an increase of net efficiency) compared to the other supercritical cycles. The improvement of the gross efficiency, with respect to the PPCS reference, is about 4 percentage points. As an alternative, independent supercritical CO2 Brayton cycles were considered for the blanket and divertor cooling circuits, in order to benefit from the relatively high operating temperature of the latter. In this case, it is possible to obtain a gross efficiency similar to the one achieved with the supercritical improved Rankine cycle. In the PPCS models, the favourable inherent features of fusion have been exploited, by appropriate design and material choices, to provide safety and environmental advantages. The following are particularly noteworthy.
• A total loss of active cooling cannot lead to structures melting. This result is achieved without any reliance on active safety systems or operator actions.
• The maximum radiological doses to the public arising from the most severe conceivable accident driven by in-plant energies would be below the level at which evacuation would be considered in many national regulations.
• Material arising from operation and decommissioning will be regarded as non-radioactive or recyclable after one hundred years (recycling of some material could require remote handling procedures, which are still to be validated). An alternative could be shallow land burial, after a time (approximately 100 years) depending on the nuclides contained in the materials and the local regulations.
The cost of electricity from the five PPCS fusion power plants was calculated by applying the codes developed in the Socio-economics Research in Fusion programme [5]. The calculated cost of electricity for all the models was in the range of estimates of the future costs of other environmentally friendly sources [6]. One important outcome of the conceptual study of a fusion power plant (FPP) is to identify the key issues and the schedule for the resolution of these issues prior to the construction of the first-of-a-kind plant. Europe has elected to follow a 'fast track' in the development of fusion power [7], with two main devices prior to the first commercial FPP, namely ITER and DEMO. These devices will be accompanied by extensive R&D and by specialised machines and facilities to investigate specific aspects of plasma physics, plasma engineering, materials and fusion technology. The PPCS results for the near-term models suggest that a first commercial fusion power plant will be economically acceptable, with major safety and environmental advantages. These results also point out some of the key issues that should be resolved in DEMO and, more generally, help to identify the physics, engineering and technological challenges of fusion.
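The dependence of plant output on blanket and balance-of-plant parameters described above can be summarised in a standard, illustrative power balance (the symbols below are generic, not taken from the PPCS report): with fusion power \(P_{\mathrm{fus}}\), blanket energy multiplication \(M\), thermodynamic efficiency \(\eta_{\mathrm{th}}\) and a fraction \(f_{\mathrm{rec}}\) of gross electrical power recirculated for current drive and pumping,
\[
P_{e,\mathrm{net}} \approx \eta_{\mathrm{th}}\, M\, P_{\mathrm{fus}}\,(1 - f_{\mathrm{rec}}),
\]
so the roughly 4-percentage-point gain in gross efficiency quoted above, or any reduction in recirculated power, feeds directly into net electrical output.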
Fig. 1. Korean fusion energy development roadmap
Besides the European fusion energy development, Korea has a very similar time schedule for DEMO and FPP to the EU's: DEMO construction has been scheduled around 2025, and commercial power plant construction around 2040, as shown in Fig. 1. The market share of future fusion energy was already estimated earlier [8]. Fig. 2 summarizes the calculated energy mix in the global electricity supply. Under an environmental constraint that requires energy sources with significantly reduced carbon dioxide emissions, fusion electricity is estimated to take approximately 30% of the global market [9]. In the calculation of the energy mix, it is possible to supply the same amount of electricity without fusion energy while limiting the carbon dioxide emission at the same level; the result is composed of larger fractions of other renewables and of fossil fuels with carbon sequestration. The total cost of electricity of this energy mix without fusion is higher, and this additional expense can be understood as corresponding to a benefit of fusion.
Fig. 2. Estimated energy mix under environmental constraints
ITER is a unique opportunity to test mock-ups in a tokamak environment prior to the construction of DEMO, by means of Test Blanket Modules (TBMs). It is an ITER mission that 'ITER should test tritium breeding module concepts that would lead in a future reactor to tritium self-sufficiency, the extraction of high grade heat and electricity production.' TBMs have to be representative of a DEMO breeding blanket, capable
of ensuring tritium-breeding self-sufficiency using high-grade coolants for electricity production. The ITER TBM Program is therefore a central element in the plans of all seven ITER Parties for the development of tritium breeding and power extraction technology.
5 Conclusion Fusion has the potential to be the ultimate clean energy source. Tritium can be recycled in fusion power plants. Fusion is the only known technology capable in principle of producing a large fraction of the world's electricity without serious issues. We should burn our oil to develop fusion power plants.
References [1] www.peakoil.net [2] www.ITER.org [3] D Maisonnier et al (2006) Power Plant Conceptual Studies in Europe. The 21st IAEA Fusion Energy Conference, Chengdu, China [4] R. Toschi et al (2001) How far is a Fusion Power Reactor from an Experimental Reactor, Fusion Engineering and Design 56-57:163-172 [5] Socio-economic aspects of fusion power, EUR (01) CCE-FU 10/6.2.2, European Fusion Development Agreement Report, April 2001. [6] D J Ward et al (2005) The economic viability of fusion power, Fusion Engineering and Design 75-79:1221-1227. [7] European Council of Ministers, Conclusions of the fusion fast track experts meeting held on 27 November 2001 on the initiative of Mr. De Donnea, President of the Research Council (commonly called the King Report). [8] K Tokimatsu et al (2002) Evaluation of economical introduction of nuclear fusion based on a long-term world energy and environment model, in: 19th IAEA Fusion Energy Conference, Lyon, IAEA-CN-77/SEP/03 [9] P Lako et al (1998) The long-term potential of fusion power in western Europe, ECN-C98-071
Modelling of a Bubble Absorber in an Ammonia-Salt Absorption Refrigerator Dong-Seon Kim Sustainable Energy Systems, arsenal research Giefinggasse 2, 1210 Vienna, Austria [email protected]
Abstract. In this study, a one-dimensional model is developed for a vertical bubble-tube absorber, based on the analogies between momentum, heat and mass transfer in a two-phase co-current upward flow. The model is compared with two-phase flow correlations from the literature, and it is found that a proper choice of pressure-drop and void-fraction correlations allows the heat and mass transfer coefficients of a bubble absorber to be predicted with reasonable accuracy.
1 Objective Ammonia is the second most popular refrigerant used in the field of absorption refrigeration. Although its latent heat is smaller than that of water, ammonia is dominantly used in absorption machines with sub-zero evaporator (heat source) temperatures. Among the many components in an ammonia absorption machine, the absorber is critically important for the energetic and economic performance of the machine. While a falling-film absorber is regarded as inevitable for a water absorption machine, to minimise pressure drop, a bubble-type absorber (a flooded gas-liquid contactor) can be more advantageous for an ammonia machine: an ammonia absorber is relatively insensitive to pressure drop, and therefore the inertial forces of the primary fluids (i.e., solution and vapour) can be appropriately exploited to obtain larger heat and mass transfer coefficients. Although two-phase flows have been intensively investigated in the past, heat and mass transfer coefficient data are scanty and largely inconsistent in the flow regime of a bubble absorber. In this study, a one-dimensional bubble absorber model is developed using analogies between momentum, heat and mass transfer rates in a two-phase co-current upward flow. It is shown that a proper choice of pressure-drop and void-fraction correlations may allow the present model to predict the heat and mass transfer coefficients of a bubble absorber with reasonable accuracy.
2 Model Development 2.1 Description of the Problem Fig. 1 shows a schematic diagram of a bubble absorber. Liquid (ammonia-salt solution in this case) and gas (ammonia vapour) enter the bottom of a vertical tube and flow upward while cooling water flows downward in the exterior of the tube. As a
result of the cooling, the vapour is gradually absorbed into the solution and only liquid comes out at the top of the tube.

Fig. 1. Control volume of a two-phase co-current upward flow in a vertical tube (cooling water outside; solution and vapour inside)

The main subject in modelling this system is the simultaneous heat and mass transfer process at the vapour-liquid interface, and therefore accurate prediction of the corresponding heat and mass transfer coefficients is very important. In the following, starting from general governing equations, a simple one-dimensional bubble absorber model is developed. 2.2 Governing Equations Conservation laws for the control element in Fig. 1 give a solution mass balance equation,
\[ \frac{d\Gamma_s}{dz} = \dot n \,, \qquad (1) \]
where \(\Gamma\) is a peripheral mass flux defined as \(\Gamma \equiv \dot m/(\pi d_h)\); an absorbent mass balance equation,
\[ \frac{d(\Gamma_s x^b)}{dz} = 0 \,, \qquad (2) \]
where \(x\) is the mass fraction of the absorbent (salt); a vapour mass balance equation,
\[ \frac{d\Gamma_v}{dz} = -\dot n \,, \qquad (3) \]
where \(\dot n\) is the mass flux of ammonia being absorbed at the interface; a solution-side energy balance equation,
\[ \frac{d(\Gamma_s h^b)}{dz} = \dot n\, h^v - \dot q_w \,, \qquad (4) \]
and a vapour-side energy balance equation,
\[ \frac{d(\Gamma_v h^v)}{dz} = -\dot n\, h^v - \dot q_{iv} \,, \qquad (5) \]
an energy balance equation for the cooling water,
\[ \frac{dt}{dz} = -\frac{\dot q_w}{(\Gamma Cp)_w} \,, \qquad (6) \]
and finally the pressure drop (the sum of friction, momentum and static losses) across the element,
\[ -\frac{dp}{dz} = \left[ \frac{8 f_{tp}}{\rho_s d_h^3} + \frac{16}{d_h^2}\,\frac{dv_{tp}}{dz} \right](\Gamma_s + \Gamma_v)^2 + \frac{g}{v_{tp}} \,, \qquad (7) \]
where \(v_{tp}\) is a mean specific volume for the two-phase flow and \(f_{tp}\) is a two-phase friction coefficient. For easy solution of the problem, a few equations are simplified as follows using some thermodynamic and transport theories. The solution enthalpy term \(h^b\) in Eq. (4) causes inconvenience in solving the problem because it is often given as a complex function of temperature and concentration. Using the enthalpy relation in [1], Eq. (4) can be written explicitly in terms of the bulk solution temperature as
\[ \frac{d(\Gamma_s Cp_s T^b)}{dz} = \dot n\,\Delta h - \dot q_w \,, \qquad (8) \]
where \(\Delta h\) is the heat of absorption defined as \(\Delta h \equiv h^v - [h^b - (1-x^b)\,\partial h^b/\partial x]\). In the equations above, the wall heat flux \(\dot q_w\) is defined between the bulk solution and water temperatures by
\[ \dot q_w = U (T^b - t) \,, \qquad (9) \]
where the volumetric overall heat transfer coefficient is \(1/U \approx 1/\alpha_w + 1/\alpha_b\), and the vapour-to-interface heat flux \(\dot q_{iv}\) is defined by
\[ \dot q_{iv} = \alpha_{iv} a_{i\text{-}w} (T^v - T^i) \,, \qquad (10) \]
where \(\alpha_{iv}\) and \(a_{i\text{-}w}\) are the vapour-side heat transfer coefficient and the interface-to-wall area ratio (i.e. interface area divided by \(\pi d_h\)), respectively; the interface-to-solution heat flux \(\dot q_{is}\) is defined by
\[ \dot q_{is} = \alpha_{is} a_{i\text{-}w} (T^i - T^b) \qquad (11) \]
and the mass flux \(\dot n\) by
\[ \dot n = \rho\beta a_{i\text{-}w} (x^b - x^i) \,. \qquad (12) \]
Eqs. (10)-(12) require knowledge of the interface variables, i.e., \(T^i\) and \(x^i\). These quantities are removed from the equations as follows. Applying the heat and mass transfer analogy (i.e. \(\alpha_{is} = \beta\rho Cp_s \mathrm{Le}^m\)) in the liquid-side boundary layers near the interface (i.e. \(\Delta x\) and \(\Delta T^s\) in Fig. 1) and using a first-order Taylor series of an equilibrium equation [e.g. \(T^s = f(x,p)\)], Eqs. (11) and (12) reduce to
\[ \dot n = \rho\beta a_{i\text{-}w} (c_1 T^b + c_2 x^b + c_3) \,, \qquad (13) \]
where \(c_{1\sim3}\) are defined at a reference concentration \(x_o\) as \(c_1 = -1/[\partial T^s/\partial x + \Delta h/(Cp_s \mathrm{Le}^m)]\), \(c_2 = -c_1(\partial T^s/\partial x)\) and \(c_3 = -c_1[T^s_o - (\partial T^s/\partial x)x_o]\).
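The reduction from Eqs. (11) and (12) to Eq. (13) is compressed in the text; one way to recover it, reconstructed here so as to be consistent with the printed \(c_{1\sim3}\) (the key assumption being that the absorption heat released at the interface is transferred to the solution side, \(\dot n\,\Delta h = \dot q_{is}\)), is:
\[
\dot n\,\Delta h = \beta\rho\,Cp_s\,\mathrm{Le}^m a_{i\text{-}w}\,(T^i - T^b)
\;\Longrightarrow\;
(x^b - x^i)\,\frac{\Delta h}{Cp_s\,\mathrm{Le}^m} = T^i - T^b .
\]
Eliminating \(x^i\) with the linearised equilibrium \(T^i = T^s_o + (\partial T^s/\partial x)(x^i - x_o)\) and substituting back into Eq. (12) gives
\[
\dot n = \rho\beta a_{i\text{-}w}\,
\frac{-T^b + (\partial T^s/\partial x)\,x^b + T^s_o - (\partial T^s/\partial x)\,x_o}
{\partial T^s/\partial x + \Delta h/(Cp_s\,\mathrm{Le}^m)}
= \rho\beta a_{i\text{-}w}\,(c_1 T^b + c_2 x^b + c_3).
\]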
2.3 Prediction of Heat and Mass Transfer Coefficients The governing equations in the previous section can be solved when the following values are given:
ε : void fraction
f_tp : two-phase friction coefficient
α_w : water-side heat transfer coefficient at the wall
α_b : solution-side heat transfer coefficient at the wall
α_iv a_i : vapour-side volumetric heat transfer coefficient at the interface
β a_i : solution-side volumetric mass transfer coefficient at the interface
Among the variables above, \(\varepsilon\), \(f_{tp}\) and \(\alpha_w\) were intensively investigated in the past, and a number of well-developed correlations are available in the literature. However, for the remaining two-phase transfer coefficients, correlations from the literature are scanty and largely inconsistent. For this reason, a consistent set of simple two-phase heat and mass transfer correlations is developed in the following, assuming analogies between momentum, heat and mass transfer. Assuming that conditions exist for analogies between momentum and heat and mass transfer at the two boundaries in the system, i.e., the heat exchanger wall and the vapour-liquid interface, the heat transfer coefficient \(\alpha\) at one boundary can be related to the corresponding shear stress as
\[ \alpha = Cp_s \mathrm{Pr}_s^{-m} \times \tau\,\Delta u^{-1} \qquad (14) \]
and similarly the mass transfer coefficient \(\beta\) is given as
\[ \beta = \rho^{-1} \mathrm{Sc}_s^{-m} \times \tau\,\Delta u^{-1} \,, \qquad (15) \]
where \(\Delta u\) is a driving potential for the momentum transfer at the corresponding interface. Note that Eqs. (14) and (15) have already been used to obtain Eq. (13). Then \(\alpha_b\) is given by
\[ \alpha_b = Cp_s \mathrm{Pr}_s^{-m} \times \left(-\frac{dp}{dz}\right)_f \times \left(u^s_{avg}\right)^{-1} \,, \qquad (16) \]
\(\alpha_{iv} a_i\) is
\[ \alpha_{iv} a_i = Cp_s \mathrm{Pr}_s^{-m} \times \varepsilon \left(-\frac{dp}{dz}\right)_f \times \left(u^v_{avg} - u^i\right)^{-1} \qquad (17) \]
and \(\beta a_i\) is
\[ \beta a_i = \rho^{-1} \mathrm{Sc}_s^{-m} \times \varepsilon \left(-\frac{dp}{dz}\right)_f \times \left(u^i - u^s_{avg}\right)^{-1} \,, \qquad (18) \]
where \(a_i\) is the volumetric interface area density and \(m = 2/3\) according to the Colburn analogy. It is clearly shown in Eqs. (16)-(18) that calculation of the transfer coefficients requires knowledge of the frictional pressure loss \((dp/dz)_f\), the void fraction \(\varepsilon\) and three characteristic velocities: the average vapour velocity \(u^v_{avg}\), the interface velocity \(u^i\) and the average liquid velocity \(u^s_{avg}\). From the definition of \(\varepsilon\), the average solution velocity is given by
\[ u^s_{avg} = \frac{4\Gamma_s}{\rho_s d_h (1-\varepsilon)} \,, \qquad (19) \]
the average vapour velocity is given by
\[ u^v_{avg} = \frac{4\Gamma_v}{\rho_v d_h \varepsilon} \qquad (20) \]
and, assuming the volumetric interface area density is \(a_i = \varepsilon^{1/2}\) (i.e. laminar annular flow), the interface velocity is given by
\[ u^i = \frac{u^v_{avg}\,(\mu_v/\mu_s)(1-\varepsilon)/\varepsilon + u^s_{avg}}{1 + (\mu_v/\mu_s)(1-\varepsilon)/\varepsilon} \,. \qquad (21) \]
Then, obviously, the transfer coefficient values in Eqs. (16)-(18) will depend on the choice of the \((dp/dz)_f\) and \(\varepsilon\) models. The operating range of a bubble absorber in an absorption machine is typically below 0.2 in vapour quality. Unfortunately, however, this is the region where void fraction models from the literature disagree, as shown in Fig. 2.
Fig. 2. Void fraction models (ρ_v = 3.5, ρ_s = 1000): Smith [4], Chisholm [2], Zivi [3]

Fig. 3. Pressure drop models (quality = 0.1, x^b = 0.55, d_h = 0.015, Γ_s = 0.03-0.4 kg/ms): Lockhart-Martinelli [9], Cicchitti [8], McAdams [10], Dukler [11], Hibiki [6], Friedel [5], Lee [7]; ±20% bounds shown
All three models in Fig. 2 are in good agreement for the vapour qualities above 0.1. However, disagreement is substantial in the low quality range that is of interest in this study. Among the models, Chisholm [2] predicts the largest and Zivi [3] the smallest with Smith [4] in-between. On the other hand, a few pressure drop correlations from literature are shown in Fig. 3. It can be seen that one separate flow model (Hibiki [6]) and one homogeneous model (McAdams [10]) agree with Lockhart-Martinelli [9] within 20% error boundary under the given conditions.
3 Results As shown in the previous section, the \((dp/dz)_f\) and \(\varepsilon\) models from the literature are not in good agreement. Although disputable, the \((dp/dz)_f\) model of Lockhart-Martinelli [9] and the \(\varepsilon\) model of Smith [4] are used in the following without justification.
Fig. 4. Eqs. (16) and (18) and some empirical correlations from the literature (x^b = 0.55, d_h = 0.015, m_s = 0.002-0.025 kg/s; ±30% bounds shown): (a) heat transfer coefficients, α_b from Eq. (16) versus the models of Shah [12], Akers [13] and Groothuis [14]; (b) mass transfer coefficients, βa_i/a_w from Eq. (18) versus the models of Keiser [15], Banerjee [16], Kasturi [17] and Infante Ferreira [18]
Fig. 4 compares the heat and mass transfer coefficients from Eqs. (16) and (18) with some empirical correlations from the literature. The heat and mass transfer coefficients in Fig. 4 are average values for a 1 m long, 15 mm I.D. vertical tube, with m_s varied over 2-25 g/s of 55% LiNO3 solution and inlet quality varied over 0.4-12%. They were averaged over the void fraction profiles obtained from numerical solution of the equations in Section 2.2, using the corresponding empirical correlation. Transport properties were assumed constant to avoid the influence of varying properties. In Fig. 4a, it can be seen that the present model is in particularly good agreement with Shah [12]; disagreement with [13, 14] is significant. The present model and [12] agree better with the experimental data from Infante Ferreira [18], where α_b = 0.52 kW/m²K is reported under similar conditions. For the case of mass transfer in Fig. 4b, the present model is found to be in good agreement with Infante Ferreira [18], Keiser [15] and Banerjee [16]; Kasturi [17] predicts the smallest values over the entire range. Note that the interface area density of Hibiki and Ishii [19] has been used with [16, 17] to calculate volumetric coefficients. Recall that the results in Fig. 4 are based on the particular \((dp/dz)_f\) and \(\varepsilon\) models used here without justification, and should therefore be considered only as ideal references for comparison. Nevertheless, it is indicated that the present model is able to make realistic predictions and may be further improved by elaborating some of the details described in the previous sections.
4 Conclusions A simple one-dimensional bubble absorber model has been developed based on the analogies between momentum, heat and mass transfer in a two-phase co-current upward flow. It is shown that the model with a proper choice of pressure drop and void fraction correlations can provide realistic heat and mass transfer coefficients for a bubble tube absorber. It is thought that the model may be improved by elaborating some of its details.
References [1] Haltenberger Jr W (1939) Enthalpy-concentration charts from vapour pressure data. Ind Eng Chem 31:783-786 [2] Chisholm D (1972) An equation for velocity ratio in two-phase flow. NEL Report 535 [3] Zivi SM (1964) Estimation of steady-state steam void-fraction by means of the principle of minimum entropy production. ASME J Heat Transfer 86:247-252 [4] Smith SL (1969) Void fractions in two-phase flow: a correlation based on an equal velocity head model. Proc Instn Mech Engrs 184:647-664 [5] Friedel L (1979) Improved friction pressure drop correlations for horizontal and vertical two-phase pipe flow. Paper E2, European Two-Phase Group Meeting, Ispra, Italy [6] Hibiki T, Hazuku T, Takamasa T, Ishii M (2007) Some characteristics of developing bubbly flow in a vertical mini pipe. Int J Heat Fluid Flow 28:1034-1048 [7] Lee K, Mudawar I (2005) Two-phase flow in high-heat-flux micro-channel heat sink for refrigeration cooling applications: Part I. Int J Heat Mass Transfer 48:928-940 [8] Cicchitti A, Lombardi C, Silvestri M, Soldaini G, Zavattarelli R (1960) Two-phase cooling experiments: pressure drop, heat transfer and burnout measurements. Energia Nucleare 7:407-425 [9] Lockhart RW, Martinelli RC (1949) Proposed correlation of data for isothermal two-phase two-component flow in pipes. Chem Eng Prog 45:39-48 [10] McAdams WH (1954) Heat Transmission, 3rd ed. McGraw-Hill, New York [11] Dukler AE, Wicks III M, Cleveland RG (1964) Frictional pressure drop in two-phase flow: A. A comparison of existing correlations for pressure loss and holdup. AIChE Journal 10:38-43 [12] Shah MM (1977) A general correlation for heat transfer during subcooled boiling in pipes and annuli. ASHRAE Trans 83:205-215 [13] Akers WW, Deans HA, Crosser OK (1959) Condensation heat transfer within horizontal tubes. Chem Eng Prog Symp Ser 55:171-176 [14] Groothuis H, Hendal WP (1959) Heat transfer in two-phase flow. Chem Eng Sci 11:212-220 [15] Keiser C (1982) Absorption refrigeration machines. Dissertation, Delft University of Technology [16] Banerjee A, Scott DS, Rhodes E (1970) Studies on cocurrent gas-liquid flow in helically coiled tubes. Part II. Canadian J Chem Eng 48:542-551 [17] Kasturi G, Stepanek JB (1974) Two-phase flow IV: Gas and liquid side mass transfer coefficients. Chem Eng Sci 29:1849-1856 [18] Infante Ferreira CA (1985) Vertical tubular absorbers for ammonia-salt absorption refrigeration. Dissertation, Delft University of Technology [19] Hibiki T, Ishii M (2001) Interfacial area concentration in steady fully-developed bubbly flow. Int J Heat Mass Transfer 44:3443-3461
Literature Review of Technologies and Energy Feedback Measures Impacting on the Reduction of Building Energy Consumption
Eun-Ju Lee, Min-Ho Pae, Dong-Ho Kim1, Jae-Min Kim2, and Jong-Yeob Kim3
1 Integrated Simulation Unit, DASS Consultants Ltd., Seoul 143-834, Korea
2 Energy System Research Unit, University of Strathclyde, UK
3 Building Environment & Energy Research Unit, Korea National Housing Corporation, 463-704, Korea
[email protected]
Abstract. In order to reduce energy consumption in buildings, there are a number of available technologies and measures that can be adopted. Energy feedback measures enable energy end-users (e.g. households) to recognize the need for energy reduction and change their behaviour accordingly. The effects of energy feedback measures have been reported in most North American and European industrialized countries, though little research has been conducted in Korea. This paper presents case studies of energy feedback measures and their effectiveness on the basis of a literature review of academic papers, technical reports and website sources. Energy feedback measures can be as effective (10-20% reduction rate) as innovative energy systems that require substantial capital investment. The design strategy of universal human interfaces in support of energy feedback measures is also discussed.
Keywords: Energy Feedback, Information System, Building Energy Consumption, Literature Review.
1 Introduction
Various technologies and measures are adopted for energy saving in buildings. Among them, energy feedback measures enable energy end-users (e.g. households) to recognize the need for energy reduction and to change their energy-use behaviour; this strategy achieves its energy saving effect indirectly. The effects of energy feedback have been reported since the mid-1970s [1]. In this study, the measures and effects of energy feedback are reviewed on the basis of publications available in academic journals and on websites. This paper also proposes a design strategy of universal human interfaces, based on the literature review, in support of energy feedback measures.
2 Literature Review
2.1 Technologies and Measures for Building Energy Saving in Korea
In the previous study [2], academic papers associated with energy saving technologies and measures in buildings were retrieved with the key words 'energy', 'saving (or reduction)' and 'buildings' from the publications of Korean professional societies, including the Society of Air-conditioning and Refrigerating Engineers of Korea (SAREK) and the Architectural Institute of Korea (AIK).
As shown in Table 1, 318 papers published between 1970 and 2008 were selected. According to the results of the search, building materials, refrigeration, air-conditioning systems and building automation systems are popular subjects. While most studies are technology-oriented, there are few studies on measures for end-user policy. The studies associated with end-user policy are mostly introductory to the concept and methods of energy feedback (Moon (2006) [3], Lee (2004) [4]). Bae and Chun (2008) [5] investigated how residents' environmental awareness and behaviour changes affect indoor environmental conditions. The study reports that providing simple information such as temperature and humidity made occupants respond more to their environmental conditions and control indoor air quality actively. They stressed that it is important to provide residents with education and information regularly to sustain the positive effect of education, since it tends to decrease over time. Although that study is not directly concerned with the energy reduction effect, it achieves an information-feedback effect through real monitoring and can serve as a reference for future energy feedback studies. The study of Jong-Yeob Kim (2007) [6] was carried out for the purpose of developing a web-based program on energy consumption information, and an information system was constructed to acquire energy-expenditure monitoring data. He plans to study the saving effect according to energy-usage statistics and the degree of user participation, surveying users' attitudes and managing the energy-information website [7].

Table 1. Cases on building energy reduction

Energy reduction factor | 1980's | 1990's | 2000's
Insulation | 2 | 2 | 25
Building materials | | | 39
Refrigeration | 1 | 2 | 35
Outdoor air cooling system | | | 2
Lighting system | 2 | 3 | 10
Building automatic control system | | 2 | 21
Solar energy | 3 | 1 | 1
Geo thermal system | | 1 | 1
Cogeneration system | | | 2
Air conditioning system | | 1 | 37
Shading devices | | 1 | 13
Windows and doors system | | 1 | 12
Double façade system | | 3 | 14
Ecological architecture | | | 22
Environmental cost | | | 6
Greenhouse system | 1 | 5 | 4
Construction management | | | 29
User policy | | | 4
Total | 9 | 22 | 297
(318 papers in total)

2.2 Review of Studies on Energy Feedback Overseas
Studies of energy reduction through user policy are rare in Korea, but such work has been done in North America and Europe for a long time. We assess the effectiveness of energy reduction through user policy on the basis of a literature review, drawing on Darby S (2006), which summarizes the results.
Darby classified the measures that provide energy information to energy users, as an effective way of reducing building energy use, into direct feedback and indirect feedback; the reviewed studies are summarized in Table 2. Most of the direct feedback studies report reductions of 5 to 14 percent, while the indirect feedback studies mostly fall in the 0 to 4 percent and 10 to 14 percent bands. In addition, three cases of direct energy feedback with savings over 20 percent were reported, showing how much can be expected from voluntary energy saving by users.

[Table 2. Effect of energy feedback: number of reviewed studies by energy-saving band (0~4, 5~9, 10~14, 15~19, 20+ % of peak, unknown) for direct feedback (1975~2000) and indirect feedback (1987~2000)]
2.2.1 Direct Feedback
Direct feedback, as defined by Darby, provides information that the user can obtain immediately at any time, for example from an energy meter, a display installed where energy is used, a personal computer or the Web [8]. In one case of real-time energy feedback at Ontario Hydro in Canada, 25 households reduced their total electricity consumption by 13 percent [9]. A Japanese study reported average savings of 12 percent of total energy use, and suggested that combining several applications could yield a larger effect [10]. Research on such combined feedback was carried out by Harrigan and Gregory in the United States in 1994 [11]. When weatherization costs were subsidized to improve the energy efficiency of poorly insulated buildings, about 14 percent of heating gas use was saved; when an energy-conservation education programme was added to raise awareness of energy saving, heating gas savings reached 26 percent; and applying education, weatherization and energy feedback together led to savings of more than 26 percent. More recent research was performed in Canada in 2006 [12], in which users carried a portable monitor providing real-time information on power use and CO2 emissions on an hourly, periodic and cumulative basis. By separating the thermal load from the other energy loads, it showed savings of up to 16.7 percent. In Europe, web-based user monitoring systems building on the Internet infrastructure are being studied, but progress reports have been delayed because few sites are yet available for study. Upland Technologies of the U.S. offers the portable and fixed monitoring devices shown in Fig. 1, together with a web interface for energy-usage statistics. The Wattson, made by DIY Kyoto, has one of the most effective visual interfaces for energy feedback [13].
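The hourly, periodic and cumulative displays described for the Canadian pilot [12] amount to simple aggregation of meter readings. A minimal sketch, where the hourly readings and the CO2 emission factor are invented for illustration:

```python
# Toy direct-feedback display: hourly and cumulative electricity use and CO2.
CO2_KG_PER_KWH = 0.45  # assumed grid emission factor, not from the paper

kwh_per_hour = [1.2, 0.9, 1.5, 2.1, 1.8]  # hypothetical meter readings

cumulative = 0.0
for hour, kwh in enumerate(kwh_per_hour):
    cumulative += kwh
    print(f"hour {hour}: {kwh:.1f} kWh, "
          f"cumulative {cumulative:.1f} kWh, "
          f"{cumulative * CO2_KG_PER_KWH:.2f} kg CO2")
```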
Fig. 1. Energy viewer (Upland Technologies)
Fig. 2. Wattson (DIY Kyoto Ltd)
Fig. 2 shows a picture of it. Electricity-use data and cost are indicated by colour and brightness. The device can also transfer seasonal electricity-use data to a PC through a USB connector, but it cannot be applied to other energy sources.
2.2.2 Indirect Feedback
In 1979, a study reported that supplying energy bills to end-users 6 days a week resulted in reductions of energy use of up to 18% [14]. A Norwegian study [15] showed that delivering energy bills several times during the year gave rise to a change in the pattern of end-users' energy consumption. The energy bill used in the study contained a comparison analysis between the current season and the same season of the previous year. The effect of the energy feedback was about a 12% reduction. The study had a major impact on establishing regulations requiring accurate energy bills to be delivered to end-users every 3 months. A study was conducted in the UK in 1999 [16] to see the effect of varying the content of energy feedback. In the study, 120 people were divided into 6 groups and provided with the following information:
Group 1: comparison of energy use against other houses.
Group 2: comparison between current and last season's energy use.
Group 3: comparison of cost of energy use.
Group 4: an educational programme on the environmental crisis.
Group 5: information on energy saving technologies.
Group 6: a software program to check energy data on demand.
The study lasted for 9 months and showed that the most effective group in terms of energy savings was Group 6 (the software program users), followed by Group 1 and then Group 2. The study also concluded that the visual design of the energy report, in addition to its general content, was important in increasing the energy feedback effect.
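The comparative bill used in the Norwegian study [15] (current consumption against the same season of the previous year) reduces to a simple percentage comparison. A minimal sketch with invented consumption figures:

```python
# Toy comparison report of the kind printed on an informative energy bill.
def comparison_report(current_kwh, last_year_kwh):
    change = 100.0 * (current_kwh - last_year_kwh) / last_year_kwh
    trend = "more" if change > 0 else "less"
    return (f"This season: {current_kwh:.0f} kWh; same season last year: "
            f"{last_year_kwh:.0f} kWh ({abs(change):.1f}% {trend}).")

print(comparison_report(1180.0, 1340.0))  # hypothetical figures
```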
3 User Interface Design for Energy Feedback
According to the literature review, there is an average energy saving rate of about 10-15% through direct and indirect feedback measures. The effects of energy feedback are, however, dependent on the type of energy information provided and the design of the user interface and energy report.
When energy feedback measures are combined with appropriate education, the energy saving effects increase. When considering the development of an information and communication system in support of energy feedback, appropriate hardware and software systems are required to provide informative content. In addition to the content, user interfaces should be carefully designed to increase human interactivity with energy systems. We propose a design guideline for human interfaces for effective energy feedback based on the findings of the literature review. Fig. 3 illustrates the schematic design guideline, which incorporates 4 principles, namely that the interface be informative, understandable, usable and multi-functional. Recent information and communication technologies allow energy end-users to have ubiquitous energy management systems with which energy use can be monitored at appliance level. The authors have developed a prototype energy monitoring viewer. Fig. 4 shows an example of an energy monitoring viewer displaying energy and environment factors in real time and giving warning messages to users when unnecessary appliance energy use is detected. The human interface also provides emotional image icons according to the status of the appliance energy use pattern. Fig. 5 is an example of an energy monitoring interface implemented in a website. The status of individual appliances is shown with metered consumption data. Graphic icons represent the home appliances currently being used. The web interface is designed to convey an emotional impression as well as quantitative information to energy end-users according to their energy use patterns. An example of energy bills with informative reports (e.g. comparison analysis) is shown in Fig. 6.
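The warning messages and emotional icons mentioned above can be generated from appliance-level readings with a simple threshold rule. A minimal sketch; the appliance names, the standby allowance and the icon mapping are our illustrative assumptions, not the authors' implementation:

```python
# Toy rule: flag an appliance that draws power while declared idle.
STANDBY_ALLOWANCE_W = 5.0  # assumed standby threshold

readings = [  # (appliance, power draw in W, user-declared state) - hypothetical
    ("TV",       95.0, "idle"),
    ("Fridge",  120.0, "active"),
    ("Computer", 60.0, "idle"),
]

for name, watts, state in readings:
    wasteful = state == "idle" and watts > STANDBY_ALLOWANCE_W
    icon = ":-(" if wasteful else ":-)"
    if wasteful:
        print(f"{icon} Warning: {name} draws {watts:.0f} W while idle.")
    else:
        print(f"{icon} {name}: {watts:.0f} W ({state})")
```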
Fig. 3. Design guideline of user interfaces for energy feedback
Fig. 4. An example of an energy monitoring viewer
Fig. 5. An example of an energy monitoring website.
Fig. 6. An example of energy bills
4 Conclusion
Energy costs are rising and global warming is intensifying with increasing CO2 emissions, so energy saving technologies and systems must be developed. According to the literature, efficient energy feedback yields saving rates of 10~15%, but the rate differs with the content and visual form of the information provided. This study therefore considered display types and systems and proposed a design guideline to motivate energy end-users to save energy themselves. For energy feedback to be realized, monitoring technology for indoor conditions and energy consumption must first be established. Secondly, feedback types suited to Korean energy-consumption patterns and circumstances need to be defined, and the content needs to be developed substantially. Finally, various approaches should be connected in order to develop energy feedback technology further.
References
[1] Hayes SC, Cone JD (1977) Reducing residential electrical energy use: payments, information, and feedback. Journal of Applied Behavior Analysis 10(3):425-435
[2] Pae MH (2008) Literature review of technologies and energy feedback measures impacting on the reduction of building energy consumption. KIAEBS 08, 04:125-130
[3] Moon HJ (2006) Research trend about building energy savings and house performance analysis in the U.S.A. Housing & Urban 91:120-131
[4] Lee CH (2004) Power-saving program, which can follow much participation of consumers, should be designed. Energy Management 338:33-35
[5] Bae NR (2008) Changes of residents' indoor environment control behavior as a result of provided education and environmental information. Architectural Institute of Korea 232:285-293
[6] Kim JY (2007) Development of a web-based building energy information system. 2007 Housing and Urban Research Institute Proceedings 183-200
[7] http://www.energysave.or.kr Accessed 3 July 2008
[8] Darby S (2006) Making it obvious: designing feedback into energy consumption. Environmental Change Institute, University of Oxford
[9] Dobson JK, Griffin JDA (1992) Conservation effect of immediate electricity cost feedback on residential consumption behavior. ACEEE 1992 Summer Study on Energy Efficiency in Buildings, American Council for an Energy-Efficient Economy, Washington DC
[10] Ueno T, Inada R, Saeki O, Tsuji K (2005) Effectiveness of displaying energy consumption data in residential houses. ECEEE 2005 Summer Study on Energy Efficiency in Buildings (6), European Council for an Energy Efficient Economy, Brussels:1289
[11] Harrigan MS, Gregory JM (1994) Do savings from energy education persist? Alliance to Save Energy, Washington DC
[12] Mountain D (2006) The impact of real-time feedback on residential electricity consumption: the Hydro One pilot. Mountain Economic Consulting and Associates Inc., Ontario
[13] http://www.DIYkyoto.com Accessed 3 July 2008
[14] Bittle RG, Valesano R, Thaler G (1979) The effects of daily cost feedback on residential electricity consumption. Behavior Modification 3(2):187-202
[15] Wilhite H, Ling R (1995) Measured energy savings from a more informative energy bill. Energy and Buildings 22:145-155
[16] Brandon G, Lewis A (1999) Reducing household energy consumption: a qualitative and quantitative field study. Journal of Environmental Psychology 19:75-85
Mechanical Characteristics of the Hard-Polydimethylsiloxane for Smart Lithography
Ki-hwan Kim1,2, Na-young Song1, Byung-kwon Choo1, Didier Pribat2, Jin Jang1, and Kyu-chang Park1
1 Department of Information Display, Kyung-Hee University, Dongdaemoon-gu, Seoul, Korea
[email protected]
2 Laboratoire de Physique des Interfaces et Couches Minces, École Polytechnique, 91128 Palaiseau CEDEX, France
Abstract. This paper studies the mechanical characteristics of hard-polydimethylsiloxane (h-PDMS); we compare and analyze its physical properties with those of the polydimethylsiloxane (PDMS) used in non-photolithographic patterning processes. As next-generation patterning processes, various low-cost, non-photolithographic soft lithography methods are being actively researched; in particular, the use of PDMS is increasing because of its superior adhesion and moldability. Recently, it has been reported that a new process using h-PDMS, which improves on the low Young's modulus that is the mechanical weak point of PDMS, can deliver good quality in nano-scale patterning. While changing the composition ratio of h-PDMS, we measured the crack density per unit length as a function of the radius of curvature, and also measured the strain, in terms of the radius of curvature, at which cracks start to form owing to the hardness of the h-PDMS. With these experiments we show that the mechanical characteristics of h-PDMS can be controlled through its composition ratio, that the pattern collapse and twisting that are weak points of PDMS can be improved, and that h-PDMS of a desired hardness can be fabricated.
1 Introduction
For the past several decades, the photolithography process that has been used dominantly in the semiconductor and information display industries has continuously developed toward lower cost in semiconductors and larger display panels. However, alternative lithography technologies such as ink-jet printing [1, 2], soft lithography [3], imprinting [4, 5], self-assembly [6] and atom lithography [7] have been brought to public attention as candidates to challenge the conventional photolithography process, in order to lower process cost and minimize material waste. In particular, soft lithography is a very promising technology owing to its lower cost and higher processability than photolithography [3]. Among soft lithography methods such as FLEPS (flexible letterpress stamping) [8] and μCP (micro contact printing), using polydimethylsiloxane (PDMS) has several merits, such as the possibility of multiple printing, precise patterning and improving pattern quality through surface modification; on the other hand, problems such as pairing, sagging, swelling and shrinking of PDMS can occur due to its intrinsic limitations [10, 11, 12].
Furthermore, because patterns made on a substrate using a PDMS mold can suffer from collapse of the mold, hard PDMS, which complements the mechanical characteristics of PDMS, has been newly introduced [13]. Because the surface of hard PDMS is much harder than that of soft PDMS, which is made of Sylgard 184A and Sylgard 184B, the probability of roof collapse or lateral collapse is much lower, and precise patterning is possible [14]. Therefore, hard PDMS can be used as an alternative in soft lithography methods; moreover, it may emerge as a most promising method by which hard PDMS substitutes for the photolithography process. In this paper, we report a study on the mechanical characteristics of hard PDMS for feasible application, comparing it with soft PDMS for making specific patterns on a substrate. We also studied surface modification methods that are usually applied to soft PDMS, as applied to hard PDMS.
2 Experiment
Fig. 1 shows a comparison of the fabrication processes of soft PDMS and hard PDMS. The fabrication of hard PDMS is more complicated than that of soft PDMS. First of all, we mixed 4 types of materials: a vinyl PDMS prepolymer (VDT-731, Gelest Corp., www.gelest.com), a Pt catalyst (platinum divinyltetramethyldisiloxane, SIP 6831.1, Gelest Corp.), a modulator (1,3,5,7-tetravinyl-1,3,5,7-tetramethylcyclotetrasiloxane, SIT-7900, Gelest Corp.) and a hydrosilane prepolymer (HMS-301, Gelest Corp.) in a certain composition ratio. VDT-731 plays a role like Sylgard 184A, SIP 6831.1 acts as the reaction agent, SIT 7900.0 acts as an adhesion promoter, and HMS-301 plays a role like Sylgard 184B. Here, we fixed the amounts of 3 of the materials at 9 μL of SIP 6831.1, 0.1 g of SIT 7900.0 and 0.5 g of HMS-301, respectively, but we varied the amount of VDT-731 to study the relationship between VDT-731 content and the crack density of h-PDMS.
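For reference, the VDT-731 weight fractions examined later (Table 1) follow from these batch masses. A minimal sketch, assuming the fixed additions are taken by mass and the 9 μL Pt-catalyst addition is neglected as having negligible mass (both assumptions are ours, not stated in the text):

```python
# Hypothetical helper: weight fraction of VDT-731 in an h-PDMS batch,
# with 0.1 g SIT-7900.0 and 0.5 g HMS-301 fixed and the Pt catalyst neglected.
def vdt731_wt_percent(m_vdt_g, m_sit_g=0.1, m_hms_g=0.5):
    return 100.0 * m_vdt_g / (m_vdt_g + m_sit_g + m_hms_g)

for m in (1.0, 1.7, 3.4):  # illustrative VDT-731 masses, not from the paper
    print(f"{m:.1f} g VDT-731 -> {vdt731_wt_percent(m):.1f} wt.%")
```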
Fig. 1. Fabrication process of soft PDMS & hard PDMS
After mixing all of the components, we coated the mixture on a cleaned master stamp by spin coating at 1000 rpm for 1 minute, and then cured it in a thermal heater at 60°C for 30 minutes. After that, we used the soft PDMS stamp fabrication process: we first mixed Sylgard 184A and Sylgard 184B in a ratio of 10:1 (the Sylgard 184B acting as the curing agent), poured the mixture onto the hard PDMS coated film, and cured it in a thermal heater at 60°C for 2 h. We thereby obtained a hard PDMS & soft PDMS composite stamp. In fact, this stamp is composed of 2 PDMS layers: the soft PDMS acts as the substrate, and the hard PDMS is the actual surface layer. Therefore, when analyzing this composite stamp, we have to focus on the mechanical characteristics of the hard PDMS. To characterize the mechanical properties of hard PDMS, we first examined the crack density of hard PDMS for various VDT-731 content ratios. Since VDT-731 plays the role of Sylgard 184A in soft PDMS, the hardness of hard PDMS depends on the VDT-731 content ratio, so we studied the radius-of-curvature-dependent crack creation for different VDT-731 contents. For this study, we fixed the dimensions of the composite stamp made of hard PDMS and soft PDMS at 1 cm x 1 cm x 3 mm. There are two types of film stress, as shown in Fig. 2: tensile stress and compressive stress; in this study, because we used soft PDMS as the substrate and hard PDMS as the target film, the stress is applied to the hard PDMS. The film stress was measured using an FLX-2320-S (Toho Technology Corporation). The principle of the measurement is that when a film is formed on a substrate, the mismatch of physical coefficients between substrate and thin film creates a stress on the substrate. This stress bends the substrate, so it can be quantified through the variation of the radius of curvature. The radius of curvature is calculated from the reflection angle of a laser beam off the substrate, so the variation of the radius of curvature is obtained by calculating the difference between the radii of curvature measured before and after coating of the thin film.
Here R is the net radius of curvature due to the film, R1 is the radius of curvature before coating of the thin film, and R2 is the radius of curvature after coating of the thin film.
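The equation referred to here did not survive extraction. The curvature-difference relation, together with the Stoney-type stress formula on which instruments such as the FLX-2320 are commonly based, reads as follows; this is our sketch, and the substrate modulus $E_s$, Poisson ratio $\nu_s$ and thicknesses $t_s$, $t_f$ are notation we introduce, not values given in the text:

\[
\frac{1}{R} = \frac{1}{R_2} - \frac{1}{R_1},
\qquad
\sigma_f = \frac{E_s\, t_s^{2}}{6\,(1-\nu_s)\, t_f}\cdot\frac{1}{R}
\]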
Fig. 2. Two types of film stress on the substrate
Fig. 3. Computational film stress
For a sample fabricated as in Fig. 3, the strain of the substrate surface can be extracted from the measured radius of curvature [14].
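The strain expression itself was lost in extraction. The bending-strain relation given in [14] for the top surface of a film of thickness $d_f$ on a substrate of thickness $d_s$ bent to radius $R$ is, in our transcription ($Y_f$ and $Y_s$ are the film and substrate Young's moduli):

\[
\varepsilon_{\mathrm{surface}} = \frac{d_f + d_s}{2R}\cdot
\frac{1 + 2\eta + \chi\eta^{2}}{(1+\eta)(1+\chi\eta)},
\qquad \eta = \frac{d_f}{d_s},\quad \chi = \frac{Y_f}{Y_s}
\]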
After that, we fabricated samples for studying the tensile strength of the hard PDMS with UV-O3 treatment. There are several methods to modify the surface energy of a substrate, for example by creating a self-assembled monolayer (SAM) using chemical materials such as TMCS (trimethylchlorosilane, Sigma Aldrich) [15], OTS (octyltrichlorosilane, Sigma Aldrich) [16] and Teflon AF [17], or by changing the surface energy using an O2 plasma treatment; in this study, we changed the surface energy of the hard PDMS surface by using a UV-O3 treatment [18]. For the UV-O3 treatment, we set up the equipment shown in Fig. 4. The experiments were carried out in atmosphere at room temperature, and we supplied O2 gas while excluding N2 gas. Treatments of 0 (no treatment), 1, 3, 6, 9, 12 and 15 minutes were performed. The hard and soft PDMS composite stamps treated by the UV-O3 method were fabricated and measured by the KS M 6158 method proposed by the Korea Environment & Merchandise Testing Institute (KEMTI).
Fig. 4. Schematic of UV-O3 equipment
Having analyzed the mechanical characteristics of hard PDMS by the above methods, we studied the actual difference from soft PDMS in patterning, in particular whether collapse occurs. For the experiments, when fabricating the composite stamp of hard PDMS and soft PDMS, the hard PDMS was formed on the target master stamp, and the mixture of Sylgard 184A and Sylgard 184B was then poured on to form the soft PDMS substrate. For the soft PDMS stamp, the soft PDMS was formed directly on the target master stamp. For patterning with the μCP (micro contact printing) method, Novolac resin and PGMEA (propylene glycol methyl ether acetate) were mixed, and patterning was done on the substrate after spin coating at 4500 rpm for 60 seconds on each mold. The formed patterns were measured by optical microscopy.
3 Results and Discussion We report a study on the variation of radius of curvature for measuring the number of cracks of hard PDMS surface as changing VDT 731 contents ratio of fabricated composite stamp. For the convenience to calculate crack density easily, the surface area is fixed as a 1cm x 1cm and is fabricated. There are some examples of samples in table 1 that is fabricated with fixed contents ratio of SIP 6831.1, SIT 7900.0, and HMS 301 to 9μL, 0.1g, and 0.5g respectively, and various VDT 731 contents ratio. Moreover, fig. 5 shows the relationship between the radius of curvature and crack Table 1. The hard PDMS fabrication examples with VDT-731 content ratio variation
Fig. 5. Crack density of the h-PDMS as a function of the radius of curvature, for different VDT wt.% in the hard PDMS mold
Fig. 6. Strain of the hard PDMS with increasing negative and positive radius of curvature. When the radius of curvature goes in the negative direction, compressive stress exists on the surface of the hard PDMS; when it goes in the positive direction, tensile stress exists on the surface of the hard PDMS.
Generally, the more VDT-731 the material contains, the fewer cracks are created as the radius of curvature decreases. In fact, when we touch hard PDMS containing a large amount of VDT-731, it feels as soft as soft PDMS. In general, cracks are created because of the surface hardness of the hard PDMS; but in the composite stamp, if the VDT-731 content ratio is high, the material shows flexibility like soft PDMS, the stress between the hard PDMS and the soft PDMS in the stamp is relatively low, and fewer cracks are created. Measuring the strain of the hard PDMS at the moment cracks are created gives the result shown in Fig. 6. In this experiment, samples were prepared by coating hard PDMS on a 15 cm x 15 cm glass substrate. We measured both the case where the stress applied to the hard PDMS film coated on the glass substrate was tensile and the case where it was compressive. As the graphs show, under compressive stress no crack appears, but under tensile stress cracks are created once εsurface exceeds 39.8%. From the above results, because the surface of hard PDMS is relatively hard, pairing occurs less when we fabricate the mold, and situations where neighbouring patterns also touch the substrate due to collapse of the PDMS happen less. Because the Young's modulus of soft PDMS is relatively low (3 MPa) [12], the probability of collapse at the intaglio of the fabricated mold is high when we make the pattern; as shown in Figs. 7-a and 7-b, other patterns also touched the substrate. Hard PDMS has a relatively higher Young's modulus (9 MPa) than soft PDMS [12], so the probability of collapse of the mold is lower; as shown in Figs. 7-c and 7-d, few unwanted patterns touched the substrate when making the pattern. In Figs. 7-a and 7-c, the mold was fabricated from a master stamp with a 5 μm pattern trench, so even if collapse happens, a relatively small area touches the substrate; in Figs. 7-b and 7-d, the mold was fabricated from a master stamp with a 2 μm pattern trench, so a relatively large area touches the substrate. Therefore, the cases of Figs. 7-b and 7-d show lower quality than those of Figs. 7-a and 7-c [19]. From the above experiments, we were able to confirm that patterning has fewer problems when the mold is fabricated with hard PDMS than when it is fabricated with soft PDMS.
Fig. 7. Optical images after patterning using soft PDMS and hard PDMS (a) 5μm trench pattern, soft PDMS, (b) 2μm trench pattern, soft PDMS, (c) 5μm trench pattern, hard PDMS, (d) 2μm trench pattern, hard PDMS
However, if we want to use hard PDMS in the field of soft lithography in the same way as soft PDMS, surface modification must be possible, as for soft PDMS. In previous reports, chemical treatments such as trichloromethylsilane (TMCS, Sigma Aldrich) [15], octyltrichlorosilane (OTS, Sigma Aldrich) [16] and Teflon AF [17] have been used, and a few reports used the O2 plasma method to change the surface energies of substrates or thin films; in this report, we changed the surface energy of the thin film by using the UV-O3 treatment [18]. When the UV-O3 treatment is applied to a PDMS surface, SiO2 is created on the surface [20], so the surface energy increases and, as a result, the PDMS becomes hydrophilic. Hence the desired resist coats well and good patterns can be made; for this reason, surface modification of the mold is necessary when using the soft lithography method. Among the treatments established for soft PDMS, we applied the UV-O3 treatment to the hard PDMS surface for surface modification. As shown in Fig. 8, the resulting variation of the surface energy of the hard PDMS surface is similar to that of soft PDMS, so we confirmed that hard PDMS can be surface-modified like soft PDMS.
Fig. 8. UV-O3 treatment time dependent variation of surface energies of hard PDMS and soft PDMS
Furthermore, we studied the variation of the surface hardness of hard PDMS upon UV-O3 treatment. To study the difference in hardness, we used two kinds of sample for comparison: a UV-O3 non-treated sample and a sufficiently UV-O3 treated sample.
Here, we fabricated the composite stamp of soft PDMS and hard PDMS as the sample to be measured by the method proposed by KEMTI. However, because this study concerns the variation of the surface hardness of hard PDMS, the exposed surface of the stamp was fabricated with hard PDMS. As a result, the UV-O3 non-treated composite stamp showed 3 MPa, and the UV-O3 treated composite stamp showed 8 MPa. The reason for this difference seems to be that the surface hardness increases because the amount of SiO2 on the surface grows as the UV-O3 treatment time increases. With these experiments, we found that hard PDMS can be surface-treated like soft PDMS, and that if surface modification is done with the UV-O3 treatment the surface hardness also increases, so we confirmed that collapse can be further prevented.
4 Conclusion
In this paper, we report a study on the mechanical characteristics of hard PDMS for feasible application. Because the surface hardness changes with the content ratio of VDT-731, one of the constituents of hard PDMS, which plays a role like Sylgard 184A in PDMS, we clarified that the composition ratios of hard PDMS have to be optimized for practical use, and we studied when cracks are created on the hard PDMS surface under applied compressive and tensile stress. Cracks were not created when compressive stress was applied to the hard PDMS surface, but under tensile stress cracks form when the strain exceeds 39.8%. Moreover, because of the hardness of the hard PDMS surface, collapse happens less than when soft PDMS is used for patterning, so we confirmed that precise patterning is possible. Furthermore, to assess the surface modification that is indispensable for practical application, as for soft PDMS, we studied whether the surface energy can be changed by the UV-O3 treatment, and we confirmed that the hardness of the hard PDMS surface increases with increasing treatment time. In actual fabrication processes, a composite stamp formed of hard PDMS and soft PDMS can be used; the mechanical characteristics of the composite stamp are much better because the stamp surface is hard PDMS. Hence, when the composite stamp is used in applications, it is possible to form much better patterns.
Acknowledgments. This work was supported by the Seoul Research and Business Development program (Grant no. CR 070054). We also thank all those who contributed to EKC2008 for their assistance and support.
References
[1] P Calvert (2001) Inkjet Printing for Materials and Devices. Chemistry of Materials 13(2):3299-3305
[2] WS Wong, S Ready, R Matusiak, SD White, J-P Lu, J Ho, RA Street (2002) Jet-printed fabrication of a-Si:H thin-film transistors and arrays. Journal of Non-Crystalline Solids 299-302(2):1335-1339
[3] Y Xia and G M Whitesides (1998) Soft Lithography. Angewandte Chemie International Edition 37(5):550-575
[4] S Y Chou, P R Krauss, P J Renstrom (1995) Imprint of sub-25 nm vias and trenches in polymers. Applied Physics Letters 67(21):3114-3116
[5] S Y Chou, P R Krauss, P J Renstrom (1996) Imprint Lithography with 25-Nanometer Resolution. Science 272(5258):85-87
[6] A Kumar and G M Whitesides (1993) Features of gold having micrometer to centimeter dimensions can be formed through a combination of stamping with an elastomeric stamp and an alkanethiol "ink" followed by chemical etching. Applied Physics Letters 63(14):2002-2004
[7] M Mützel, S Tandler, D Haubrich, D Meschede, K Peithmann, M Flaspöhler, K Buse (2002) Atom Lithography with a Holographic Light Mask. Physical Review Letters 88(8):083601
[8] S M Miller, S M Troian, S Wagner (2003) Photoresist-free printing of amorphous silicon thin-film transistors. Applied Physics Letters 83(15):3207-3209
[9] E Delamarche, H Schmid, B Michel, H Biebuyck (1997) Stability of molded polydimethylsiloxane microstructures. Advanced Materials 9(9):741-746
[10] A Bietsch and B Michel (2000) Conformal contact and pattern stability of stamps used for soft lithography. Journal of Applied Physics 88(7):4310-4318
[11] T W Lee, Oleg Mitrofanov, Julia W P Hsu (2005) Pattern-Transfer Fidelity in Soft Lithography: The Role of Pattern Density and Aspect Ratio. Advanced Functional Materials 15(10):1683-1688
[12] H Schmid and B Michel (2000) Siloxane Polymers for High-Resolution, High-Accuracy Soft Lithography. Macromolecules 33(8):3042-3049
[13] T W Odom, J C Love, D B Wolfe, K E Paul, G M Whitesides (2002) Improved Pattern Transfer in Soft Lithography Using Composite Stamps. Langmuir 18(13):5314-5320
[14] H Gleskova, S Wagner, Z Suo (1999) Failure resistance of amorphous silicon transistors under extreme in-plane strain. Applied Physics Letters 75(19):3011-3013
[15] B K Choo, K H Kim, K C Park, J Jang (2007) Surface Modification of Thin Film using Trimethylchlorosilane. IMID 2007 proceedings
[16] B K Choo, J S Choi, G J Kim, K C Park, J Jang (2006) Self-Organized Process for Patterning of a Thin-Film Transistor. Journal of the Korean Physical Society 48(6):1719-1722
[17] B K Choo, J S Choi, S W Kim, K C Park, J Jang (2006) Fabrication of amorphous silicon thin-film transistor by micro imprint lithography. Journal of Non-Crystalline Solids 352(9-20):1704-1707
[18] B K Choo, N Y Song, K H Kim, J S Choi, K C Park, J Jang (2008) Ink stamping lithography using polydimethylsiloxane stamp by surface energy modification. Journal of Non-Crystalline Solids 354(19-25):2879-2884
[19] W Zhou, E Menard, N R Aluru, J A Rogers, A G Alleyne, Y Huang (2005) Mechanism for stamp collapse in soft lithography. Applied Physics Letters 87(25):251925-251927
[20] B Schnyder, T Lippert, R Kötz, A Wokaun, V M Graubner, O Nuyken (2003) UV-irradiation induced modification of PDMS films investigated by XPS and spectroscopic ellipsometry. Surface Science 532-535:1067-1071
ITER Plant Support Systems
Yong Hwan Kim and G. Vine
ITER Organization, Joint Work Site, Cadarache, France
[email protected]
Abstract. Fusion energy features essentially limitless fuel available all over the world, without greenhouse gases, with intrinsic safety, without long-lived radioactive waste, and with the possibility of large-scale energy production. The overall objective of ITER is to demonstrate the scientific and technological feasibility of fusion energy for peaceful purposes. A unique feature of ITER is that almost all of the machine will be constructed through in-kind procurement from the Parties (CN, EU, IN, JA, KO, RF, US). The long-term objective of fusion research and development is to create power plant prototypes demonstrating operational safety, environmental compatibility and economic viability. ITER is not an end in itself: it is the bridge toward a first plant, DEMO, which will demonstrate the large-scale production of electrical power. In this paper, the main features of the ITER plant support systems are introduced: the Tritium Plant, Vacuum Systems, Fuelling and Wall Conditioning, Cryoplant and Distribution, Electrical Power Supply, Cooling Water Supply, Radwaste Management System and Hot Cell Facility.
1 Introduction
As one of the few options for a large-scale, non-carbon, future supply of energy, fusion has the potential to make an important contribution to sustainable energy supplies. Fusion can deliver safe and environmentally benign energy, using abundant and widely available fuel, without the production of greenhouse gases or long-term nuclear waste. ITER is an international research project with the programmatic goal of demonstrating the scientific and technological feasibility of fusion energy for peaceful purposes, an essential feature of which is achieving sustained fusion power generation. In terms of its design, it is a unique international collaboration among seven participating teams, namely China, the EU, India, Japan, Korea, the Russian Federation and the United States of America. The partners of the project will contribute in kind to the various subsystems, which will finally be constructed and commissioned on site by the international team in collaboration with the member countries. ITER means "the way" in Latin. It is an intermediate step between the experimental studies of present-day plasma physics and the electricity-producing fusion power plants of the future. It is an effort to build the first fusion science experiment capable of producing a self-sustaining fusion reaction, the "burning plasma".
2 Main Features of ITER
ITER is a tokamak, a type of magnetic confinement device in which strong magnetic fields confine a torus-shaped fusion plasma consisting of a hot ionized gas of hydrogen isotopes (hydrogen, deuterium, tritium).
The fuel, a mixture of deuterium and tritium, two isotopes of hydrogen, is heated to temperatures in excess of 100 million degrees, forming a hot plasma. The plasma is kept away from the walls by strong magnetic fields produced by superconducting coils surrounding the vessel and by an electrical current driven in the plasma. The ITER machine will have a major radius of 6.2 m and a minor radius of 2.0 m. The toroidal magnetic field of 5.3 T will be created with the help of a set of large superconducting coils encompassing the ultra-high-vacuum vessel.
Fig. 1. Three dimensional cutaway view of ITER showing the main elements of the tokamak core

Table 1. Main Parameters of ITER

Total fusion power | 500 MW
Additional heating power | 50 MW
Q (fusion power / additional heating power) | ≥ 10
Average 14 MeV neutron wall loading | ≥ 0.5 MW/m2
Plasma inductive burn time | 300-500 s *
Plasma major radius (R) | 6.2 m
Plasma minor radius (a) | 2.0 m
Plasma current (Ip) | 15 MA
Toroidal field at 6.2 m radius (BT) | 5.3 T
* Under nominal operating conditions.
This vessel will have a volume of more than 800 m3. The plasma will carry a maximum current of 15 mega-amperes and will be shaped with the help of a set of superconducting poloidal coils. All these components will be housed within a high-vacuum chamber, the cryostat, which will be 28 meters in diameter and about 26 meters in height. To further raise the plasma temperature to the conditions necessary for fusion, multi-megawatt heating systems in the form of highly energetic neutral beams and RF waves will be introduced through special openings in the inner vacuum vessel. The plasma conditions will be studied and measured using numerous, highly sophisticated diagnostic systems. The entire facility will be controlled and protected with the help of a complex central control system. The device is designed to generate 500 megawatts of fusion power for periods of 300-500 seconds with a fusion power multiplication factor, Q, of at least 10 (Q ≥ 10).
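As a quick arithmetic check of the headline figures, the power multiplication factor follows directly from Table 1:

\[
Q = \frac{P_{\mathrm{fusion}}}{P_{\mathrm{heating}}} = \frac{500\ \mathrm{MW}}{50\ \mathrm{MW}} = 10 .
\]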
3 Plant Support Systems
3.1 ITER Organization
The ITER Central Engineering and Plant Support Department is responsible for all activities related to plant engineering, fuel cycle engineering, electrical engineering, and CAD/engineering design. Specifically, it is responsible for assuring that the designs of all of these plant systems are completed, that they satisfy the requirements, that the hardware and systems interface correctly and satisfy technical specifications (particularly quality), and that they are assembled, installed and tested correctly in accordance with the integrated on-site installation and assembly plan.
3.2 Tritium Plant
ITER is the first fusion machine fully designed for operation with equimolar deuterium-tritium mixtures. The tokamak vessel will be fuelled through gas puffing and Pellet Injection (PI), and the Neutral Beam (NB) heating system will introduce deuterium into the machine. The ITER tritium systems constitute a rather complex chemical plant. As shown schematically in the diagram in Fig. 2, pure deuterium and tritium are introduced into Storage, where tritium is kept in metal hydride beds for immediate use. Through the Fuelling system, DT is fed into the tokamak vessel. Off-gases from the torus collected in the cryopumps, and tritiated gases from diagnostics or first-wall cleaning, are moved by roughing pumps into exhaust processing ("Detritiation, Hydrogen Isotope Recovery"). Recovered hydrogen isotopes are transferred to the Isotope Separation system, while the remaining waste gas is sent to the Detritiation Systems for decontamination before release into the environment. The inner fuel loop is closed by the return of unburned deuterium and tritium from Isotope Separation to Storage. The systems are designed to process considerable and unprecedented deuterium-tritium flow rates with high flexibility and reliability. Multiple barriers are essential for the confinement of tritium within its respective processing components, and the Detritiation Systems are crucial elements in the concept.
Fig. 2. Outline flow diagram of the ITER fuel cycle
The basis of detritiation is the catalytic oxidation of tritium and tritiated species into water, followed by transfer to Water Detritiation for recovery of the tritium, allowing release of a decontaminated gas stream. The recovery of tritium from tritiated water collected in the Detritiation Systems is a unique feature of ITER tritium confinement. ITER operation is divided into four phases. Before achieving full deuterium-tritium (DT) operation, which itself is split into two phases, ITER is expected to go through two operation phases, a hydrogen phase and a deuterium phase, for commissioning of the entire plant.
3.3 Vacuum Systems
ITER has one of the largest and definitely the most complex vacuum systems ever to be built. The large vacuum volumes of ITER are (a worked estimate of what these pressures mean follows the mission list below):
• Cryostat: volume 8500 m3, pressure < 10^-4 Pa
• Torus: volume 1330 m3, pressure ~ 10^-6 Pa
• NBI: volumes 630 m3 (total for 4), pressure ~ 10^-7 Pa
The mission of the Vacuum System section is:
• to deliver the vacuum systems required for the safe, reliable and successful operation of ITER, to cost and schedule;
• to manage the design and integration of the vacuum procurements and related systems;
• to manage the vacuum and tritium boundary standards for all relevant procurement packages;
• to ensure compliance of direct and DA procurements;
• to integrate all diagnostic and auxiliary vacuum systems so as to ensure the torus vacuum integrity;
• to ensure the vacuum acceptance and assembly test standards necessary for a successful build.
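For a sense of scale, an ideal-gas estimate (ours, assuming 300 K; not a figure from the paper) shows how little gas the cryostat holds at its base pressure:

\[
n = \frac{pV}{RT} = \frac{10^{-4}\,\mathrm{Pa}\times 8500\,\mathrm{m^3}}{8.314\,\mathrm{J\,mol^{-1}\,K^{-1}}\times 300\,\mathrm{K}} \approx 3.4\times 10^{-4}\ \mathrm{mol},
\]

i.e. roughly 2x10^20 molecules in the entire 8500 m3 volume.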
Fig. 3. Vacuum Systems as Part of the ITER Fuel Cycle (Simplified)
3.4 Fuelling and Wall Conditioning
The ITER Fuelling and Wall Conditioning System consists of the Gas Injection System (GIS), the Pellet Injection System (PIS) and the Glow Discharge Conditioning System (GDC). Their configuration and functions are as follows.
Gas Injection System. The GIS consists of fuelling manifolds and gas valve boxes (GVB). The manifolds connect the Storage and Delivery System (SDS) in the tritium plant to the tokamak or NB injectors, via the GVB, to provide hydrogenic and impurity gases through independent lines contained in guard tubes which form a secondary containment boundary.
Pellet Injection System. The PIS employs a pneumatic gas-gun injector to inject hydrogenic ice pellets into the plasma, to control the plasma density and the frequency of plasma instabilities. The same injector will be used to inject impurity pellets for impurity transport studies. Each PIS consists of a twin-screw pellet extruder and a pumping and recirculation system to recover the pellet propellant gas. They are located in a steel cask in the port cell; the cask can accommodate up to 2 injectors.
Glow Discharge Conditioning System. Glow discharge cleaning is a low-temperature plasma process, initiated using in-vessel electrodes, which assists the achievement of clean and stable plasma operation by reducing and controlling impurity and hydrogenic fuel outgassing from plasma-facing components. The GDC system consists of 6 GDC electrodes and associated deployment mechanisms at the lower port level.
3.5 Cryoplant and Distribution System
The ITER cryogenic system will use the most advanced cryogenic technologies developed for accelerator projects, adapted and optimized to fulfil the requirements and constraints of a large fusion installation. A refrigeration capacity equivalent to 65 kW at 4.5 K is distributed for the cooling of the superconducting magnets, their HTS current leads and small users. It also includes the cooling and regeneration in sequence of the cryogenic pumps. A 1300 kW nitrogen plant cools the 80 K thermal shields.
The key design requirement is the capability to cope with large pulsed heat loads deposited in the magnets due to magnetic field variations and neutron production from the fusion reaction. The basic functions are:
• cool-down and warm-up of the cryostat and torus cryopumps
• gradual cool-down and filling of the magnet system and the 80 K thermal shield in about one month
• cool-down of the NB cryopumps, pellet units, small users and gyrotrons
• maintaining the magnets and cryopumps at nominal operating temperatures over a wide range of operating modes with pulsed heat loads due to nuclear heating and magnetic field variations
• accommodating periodic regeneration of the cryopumps
• accommodating resistive transitions and fast discharges of the magnets and recovering from them in a few days
• accommodating the cryoplant operational modes with the five ITER operation states
Additional functions are:
• ensuring high flexibility and reliability in line with machine operation requirements
• low maintenance of the cryogenic equipment for reduced shutdowns
The cryogenic system shall provide the cooling of the ITER cryogenic components at four different primary-loop temperature levels, namely 4.2 K, 4.5 K, 50 K and 80 K. These are the expected cold-source temperature levels to which the component cooling loops are connected. All loops in the tokamak building will use helium as the cryogenic fluid.
3.6 Electrical Power System
The Steady State Electrical Power Network (SSEPN) distributes power to the ITER plant components requiring steady-state electrical power, and the Pulsed Power Supply System (PPSS) provides controlled DC power to the superconducting magnets. The SSEPN receives power from the French 400 kV transmission grid, transforms it to appropriate voltage levels and distributes it to the ITER plant components requiring steady-state electric power at 6.6 kV and 230/400 V AC. The consumers are mainly motors, and the total power of all connected loads is about 170 MVA (145 MW), excluding design margins. The maximum active power consumption is about 120 MW, during the Plasma Operation State. The major consumers are the cooling-water and cryogenic systems, which together require about 80% of the total demand. The SSEPN also supplies Investment Protection and Safety Relevant consumers; for these consumers, about 6 MW must be provided even in case of a loss of off-site power.
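As an aside, the SSEPN ratings quoted above imply an aggregate power factor; this is our inference from the two numbers only:

\[
\cos\varphi = \frac{P}{S} = \frac{145\ \mathrm{MW}}{170\ \mathrm{MVA}} \approx 0.85 .
\]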
The main operational functions of the PPSS are to provide:
• pulsed power for energizing the TF, PF, CS and correction coils to generate, confine and control the plasma current;
• pulsed power to the additional Heating and Current Drive (H&CD) systems (IC, EC, LH and NB systems) for plasma heating and current drive.
This system protects:
• the superconducting coils against fast discharges in case of quench;
• the coils against over-voltages and/or over-currents due to abnormal or faulty operation of power supplies or in case of plasma current disruption;
• the machine structures against high induced voltages due to electrical ground faults, by grounding the magnet system components and monitoring ground currents.
The PPSS consists of the Pulsed Power Distribution System and Reactive Power Compensation (RPC), AC/DC converters, the Switching Network (SN) and Fast Discharge Units (FDU). The ITER magnets are supplied by a large AC/DC conversion plant, which has a total installed power of about 1.6 GVA. The SN is used to generate high voltage for plasma breakdown, and the FDU is used to discharge the energy stored in the superconducting magnets in case of quench.
3.7 Cooling Water Systems
The ITER Cooling Water System (CWS) consists of the Tokamak Cooling Water System (TCWS), the Component Cooling Water System (CCWS), the Chilled Water System (CHWS), and the Heat Rejection System (HRS). The main functions of the cooling water system are to:
• remove heat deposited in the in-vessel components, the vessel, the additional heating systems and the diagnostics during the burn cycle;
• maintain coolant temperatures, pressures and flow rates to limit component temperatures and retain thermal margins during the operating campaign as required;
• remove decay heat during shutdown periods;
• provide baking for the in-vessel components;
• accommodate draining, and provide refilling and drying, for maintenance periods;
• remove heat from components of plant auxiliary systems;
• reject all the above heat to the environment.
The TCWS transfers the heat deposited by the plasma in the in-vessel components and the vacuum vessel to the HRS. The CCWS removes heat from other large process components not serviced by the TCWS. The CHWS provides chilled water to components (inside and outside the vacuum vessel) and to the HVAC system at a temperature lower than can be provided by the normal CCWS and HRS. The HRS is the ultimate heat sink: it receives heat from the tokamak and its in-vessel components via the TCWS, CCWS and CHWS and rejects this heat to the environment via the HRS cooling towers. The TCWS consists of the Primary Heat Transfer Systems (PHTSs), the Chemical and Volume Control Systems (CVCSs), the Draining and Refilling Systems, and the Drying System.
There are three PHTS loops for the First Wall and Blanket (FW/BLK), one for the Divertor and Limiter (DIV/LIM), one for the Neutral Beam Injector (NBI), and two for the Vacuum Vessel (VV).
3.8 Radwaste Management System
ITER radioactive waste will be generated mainly by the activation of materials by the neutrons produced in the fusion reaction and by contamination with the tritium used as fusion fuel. The ITER radioactive waste management systems are designed to ensure the treatment and storage functions for the different types of ITER radioactive waste that will be produced during the ITER operational and de-activation phases. Depending on its radioactivity level and classification, each type of ITER radioactive waste will be treated and stored in appropriate buildings and facilities on the ITER site before it is sent to the Host Country's interim storage or final disposal repositories.
3.9 Hot Cell Facility
The ITER Hot Cell facility is designed to support the tokamak during the assembly and operation phases. The current design of the Hot Cell facility comprises a substantial concrete, standalone building with four floors above ground and one basement; the current volume of this building is estimated at about 95,000 m3, of which the red zones account for about 20,000 m3. The main function of the Hot Cell facility is to repair or refurbish components, tools and equipment which have become activated by neutron exposure and/or contaminated with tritium and beryllium, or covered with activated dust. This facility is needed to meet the maintenance and upgrading requirements of the ITER machine, as well as to ensure the availability of the machine during the operational phases. All operations are performed by remote-handling systems. The Hot Cell facility also houses the remote-handling mock-up for simulation of operations, rehearsal of interventions and upgrading of remote-handling tools. Components entering the Hot Cell facility for repair may be diverted to the Hot Cell waste processing system. The function of the Hot Cell radwaste system is to process these components, including tritium removal, size reduction and packing, and temporary storage, before they are handed over to the Host Country.
4 Summary
In summary, it is evident that the various plant support systems are essential for the ultimate success of the ITER project. Many of the technological areas are very challenging, and the R&D in these areas requires speedy implementation; the IO and all the Domestic Agencies (DAs) need to unite to overcome the challenges. In conclusion, all efforts are being made to finalize the systems for Baseline 2007. Most plant support systems need to be constructed earlier than first plasma, so concentrated efforts from the ITER IO and the DAs are needed, and the R&D in the different areas needs to be completed to achieve the key milestones.
Acknowledgments. This report was prepared as an account of work for the ITER Organization. The Members of the Organization are the People's Republic of China, the European Atomic Energy Community, the Republic of India, Japan, the Republic of Korea, the Russian Federation, and the United States of America. The views and opinions expressed herein do not necessarily reflect those of the Members or any agency thereof. Dissemination of the information in this paper is governed by the applicable terms of the ITER Joint Implementation Agreement. The authors appreciate the contributions of the relevant Responsible Officers in the ITER Organization: W Curd, L Serio, M Benchikoune, B Na, M Glugla, R Pearce, S Maruyama, I Benfatto, I Song, J Hourtoule.
Growth Mechanism of Nitrogen Incorporated Carbon Nanotubes with RAP Process
Chang-Seok Lee1,2, Je-Hwang Ryu1, Han-Eol Im1, Sellaperumal Manivannan1, Didier Pribat2, Jin Jang1, and Kyu-Chang Park1
1
Kyung Hee University, Department of Information Display and Advanced Display Research Center (ADRD), Hoegi-dong, Dongdaemoon-gu, Seoul, South Korea [email protected] 2 Ecole Polytechnique, LPICM, 91128 Palaiseau Cedex, France
Abstract. CNTs were synthesized with various growth durations from 10 min to 120 min by dc-plasma enhanced chemical vapor deposition (dc-PECVD). A mixture of C2H2/NH3 and H2 gases was used for CNT growth. As the growth time increases, the CNTs show a tendency to gather toward the centre of the catalyst pattern, due to the van der Waals interaction between the CNTs, and to bond to each other. For 80 min growth, the CNTs show diameters of ~1 μm and lengths of up to 35 μm. We investigated these CNTs by scanning electron microscopy (SEM), transmission electron microscopy (TEM), energy dispersive X-ray spectroscopy (TEM-EDS) and Raman spectroscopy. The bonded CNTs were covered with silicon originating from the silicon substrate, and nitrogen was incorporated while the catalyst metal was hidden. The N/C and Si/C ratios obtained range over 0.25–0.55 and 0.74–1.19, respectively.
1 Introduction
Since the discovery and high-yield synthesis of carbon nanotubes, they have attracted great interest in the research community due to their remarkable mechanical, electrical and thermal properties [1, 2]. Based on these unique properties, the range of possible applications is still expanding today. From the discovery of CNTs to the present, researchers have conducted numerous experiments and theoretical calculations to understand CNTs. However, most of these efforts have been confined to intrinsic carbon nanotubes. The presence of defects or impurities that are electronically and chemically active can change these properties. Therefore, control of defects and impurities has become an increasingly important key for all applications using CNTs. Doping other atoms into the carbon nanotube is regarded as one approach to modify the electrical properties of CNTs and to build carbon-based nanosized electronics. Nitrogen-doped CNTs show n-type semiconducting behaviour regardless of chirality [3]. Nitrogen has an atomic size similar to carbon, a property that gives it a strong probability of entering the carbon lattice. Boron is widely referred to as a p-type dopant of carbon nanotubes for a similar reason [4]. Beyond these two main dopants, cobalt [5], potassium [6], silicon [7], phosphorus [8], and oxygen [9] may also be able to dope carbon nanotubes and modify their properties.
The roles of nitrogen during carbon nanotube growth have already been analysed by many workers [10–14]. Nitrogen brings pentagon bonds into the inner wall; the induced positive curvature can not only reduce the diameter of the CNTs but also produce the bamboo structure. Trasobares et al. observed experimentally, using elemental line scans, that N is richer in the compartment core and on the inner walls of the compartments, indicating that N preferentially sits on narrow-diameter tubes [15]. Liu et al. synthesized nitrogen-doped CNTs with nitrogen contents from 4.8 at.% to 8.9 at.% [11], and Tang et al. demonstrated a much higher nitrogen concentration, ranging from 10% to 20% [16]. In this work, we show not only quite high nitrogen contents in CNTs but also an unexpected behaviour of silicon.
2 Experiment
Note that plasma-enhanced chemical vapour deposition is the only method to grow CNTs at temperatures below 400 °C [17], and very well aligned CNTs can be obtained by PECVD. Recent research has shown the capability of PECVD to grow both single-walled and multi-walled nanotubes from pre-patterned catalyst particles, thus imposing the CNT position. These features have made PECVD methods very attractive for CNT integration in devices. We use a unique method for CNT growth named the Resist Assisted Patterning (RAP) process; the detailed process flow for RAP was reported earlier [18]. Ni was uniformly sputtered to a thickness of 100 Å and a few hundred ohm/sq, and the catalyst was then patterned by a standard lithography process. The patterned catalyst was strongly bonded beneath the silicon interface through the forming process, which is why no barrier layer such as TiN or SiO2 is needed to prevent diffusion of the catalyst metal. The strongest advantage of the RAP process is the adhesion between the CNTs and the silicon substrate: after growth, the bond between the CNTs and the substrate is not broken even when physical stresses are applied. Generally, pre-treatment using ammonia (NH3) is needed for uniform catalyst agglomeration [19, 20], and the agglomerated catalyst size directly affects both the diameter and the height of the CNTs. In our case the catalyst metal is already agglomerated and bonded into the silicon substrate surface through the forming process, so pre-treatment is not a significant factor for CNT growth. Prior to CNT growth, the patterned Ni catalyst was prepared with a 10 μm dot pattern and a 50 μm pitch between dots. Ni is the most popular catalyst metal because it oxidizes less than other catalyst metals such as iron or cobalt, and oxidized metal adsorbs carbon atoms poorly. The growth of CNTs was carried out in a triode dc-PECVD system, with a mesh grid placed 10 mm above the substrate holder electrode at +300 V bias. The substrate electrode was maintained at −600 V with the top electrode grounded, and the spacing between the two electrodes was 30 mm. The total gas pressure during growth was kept at 2 Torr and the CNTs were grown for various durations from 10 to 120 min. The growth temperature was maintained at 580 °C.
3 Results and Discussions
We made a CNT array with a 10 × 10 dot pattern; each dot is 10 μm in size with a 50 μm pitch between dots. Our technique has several attractive advantages, such as very well aligned CNTs, strong bonding to the substrate, and clean patterning of arbitrary shapes. CNTs grown on each dot pattern for various growth times are shown in Fig. 1. The structural changes with growth time are readily apparent: the number of CNTs per dot pattern decreases drastically as the growth time increases.
Fig. 1. SEM surface images of vertically aligned CNTs (magnified image of one emitter at 5k×) after (a) 10 min, (b) 20 min, (c) 60 min, (d) 80 min, (e) 100 min, and (f) 120 min of growth
Fig. 2. Transmission electron microscopy images of CNTs: (a) 20 min growth, (b) 60 min growth, (c) 80 min growth, middle side, (d) 80 min growth, top side, (e) 120 min growth, bottom side, (f) 120 min growth. The scale bars are 50 nm, 0.5 μm, 100 nm, 100 nm, 50 nm, and 1 μm, respectively.
In the case of 10 min growth, there are more than 600 CNTs per dot pattern. For long growth times, especially 100 min and 120 min, the dots have a similar number of CNTs, from 20 to 25. These changes will be discussed later with additional SEM and TEM images. The diameter of the CNTs increases linearly with growth time, and the maximum length of the CNTs increases up to 35 μm at 80 min of growth. For 100 min and 120 min growth, the maximum CNT lengths decrease to 23 μm and 21 μm, respectively. For short growth times, the edge of the dot pattern has relatively long CNTs; as the growth time increases, the centre region of the dot pattern carries longer CNTs than the edge. The catalyst patterning process is based on wet etching, as a result of which the catalyst shape is somewhat rounded, particularly at the pattern edge. The agglomerated particle size distribution resulting from the forming process is drawn in Fig. 4. The relation between catalyst thickness and CNT diameter is already well known, and our results are consistent with the previous literature. Since the size distribution of the catalyst particles depends strongly on the catalyst thickness, the centre region shows a larger particle size than the edge of the dot pattern. A large catalyst particle needs many more carbon atoms for CNT growth than a small one; in other words, the smaller catalyst particle has a faster CNT growth rate. Its surface also saturates more quickly with amorphous carbon or impurities, after which the etching process by the plasma dominates. Therefore, for long growth times, the relatively larger catalyst particles yield the longer CNTs. We measured TEM and TEM-EDS at the top, middle and bottom for each growth time. For 20 min growth, see Fig. 2 (a), we confirm the closed tip of the CNTs; the catalyst particle size is less than 50 nm, and the white spot might be caused by charge accumulation. At 60 min growth, see Fig. 2 (b), the CNTs become sharper and thicker. Beyond 60 min of growth the CNTs appear open-tipped, without catalyst metal; however, higher-resolution images show some creases inside the CNTs. Fig. 2 (c) and Fig. 2 (e) show the middle of an 80 min growth and the bottom of a 100 min growth, respectively. Both images show many creases surrounded by other material, and catalyst metal is even found inside the CNTs. The amount of incorporation and the elemental changes with time and position were analysed by TEM-EDS. We can obtain the automatically calculated atomic percentages, but the EDS does not have enough resolution to separate carbon, nitrogen and oxygen: the energy separation between these elements is comparable to the resolution limit. We measured 18 cases by TEM-EDS, but only 5 cases were clearly resolved into carbon, nitrogen and oxygen. The results sometimes show unexpected peaks and a copper signal from the grid. We assumed the total content to be composed of carbon, nitrogen and silicon, because we wanted to know the relative elemental change of each atom and the other element contents, such as oxygen and nickel, are negligible. First, we tore the CNTs from the substrate for the analysis. Silicon is well separated from the resolution limit, which is why its behaviour can easily be determined.
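The renormalisation used above (taking C + N + Si as 100 at.%) and the derived N/C and Si/C ratios can be written as a small Python sketch; the raw values in the example are hypothetical, not measured data:

def normalise_c_n_si(c, n, si):
    # Rescale the three retained elements so that C + N + Si = 100 at.%,
    # neglecting minor contributions (O, Ni, Cu grid) as in the text.
    total = c + n + si
    c, n, si = (100.0 * x / total for x in (c, n, si))
    return (c, n, si), n / c, si / c

# Hypothetical EDS readings (at.%) before renormalisation:
(at_pct, n_over_c, si_over_c) = normalise_c_n_si(45.0, 18.0, 40.0)
# ratios ~ 0.40 and 0.89, within the ranges reported in Fig. 3 (d)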
Fig. 3. The sum of carbon, nitrogen and silicon is regarded as 100 percent. (a) The change of silicon atomic percent with growth time. (b) The change of nitrogen atomic percent with growth time. (c) The change of carbon atomic percent with growth time. (d) The atomic ratios of silicon and nitrogen with respect to carbon. The N/C and Si/C ratios range over 0.25–0.55 and 0.74–1.19, respectively.
Fig. 4. Growth mechanism of CNTs. (a) Catalyst patterning by wet etching; each dot pattern is 10 μm in size. (b) Catalyst agglomeration through the forming process at 580 °C; more precisely, the particles are bonded into the silicon surface. (c) and (d) CNT growth with a mixture of C2H2 and NH3. (e) CNT bonding by van der Waals forces. (f) Nitrogen-doped CNTs covered by silicon.
Fig. 5. SEM images of 60 min growth. The left image shows the tendency of the CNTs to gather toward the centre region; the right image shows the initial state of bonding by silicon between CNTs.
Following the CNTs from the bottom to the middle and top, the amount of silicon has a decreasing tendency, see Fig. 3 (a), even though the middle and bottom curves cross in places. As the CNT length increases, the probability of breakage when tearing the CNTs from the substrate also increases; this may be one reason for the crossing between the middle and top curves. The distance from the silicon substrate is the major factor determining the silicon content. The average atomic percent of silicon increases to about 22 at.% at around 85 min of growth and decreases thereafter. The silicon trace thus behaves similarly to the relation between growth time and CNT length. Only the five unambiguous data points of nitrogen atomic percent are plotted, see Fig. 3 (b); these are not sufficient for analysing the nitrogen behaviour along the CNT axis. The average trace shows that the nitrogen content increases drastically at short growth times and saturates at around 85 min of growth. Other research groups have reported nitrogen contents of less than 10 at.%, and recently nitrogen incorporation of up to 20 at.%; in our work, nitrogen is doped up to 24 at.%. The atomic percent of carbon was obtained by subtracting the silicon and nitrogen contents from 100 at.%; the average trace shows that the carbon atomic percent decreases from 70 at.% to 51 at.%. Theoretical work has pointed out that substitutional Si induces a strong outward deformation of the cylindrical surface of the tubes. The silicon impurity introduces a localized level at approximately 0.6 eV above the top of the valence band, whereas for the metallic nanotube the Si impurity introduces a resonant level close to 0.7 eV above the Fermi level. The Si substitutional impurity is expected to be highly reactive, serving as a binding centre for other atoms or molecules. Our case is not considered substitutional silicon doping, but the carbon atomic percent is relatively low because silicon acts as a binding centre and attracts further silicon or nitrogen. The ratios of silicon and nitrogen with respect to carbon decrease after 80 min of growth, a tendency similar to that of the CNT length: the Si/C ratio ranges from 0.74 to 1.19 and the N/C ratio from 0.25 to 0.55. The growth mechanism is drawn in Fig. 4 (a–d), as discussed above. All the SEM images show that the CNTs gather toward the centre due to van der Waals forces between the CNTs. Eventually they are bonded by silicon, from the top side and gradually toward the bottom; therefore the number of CNTs decreases.
Fig. 5 (60 min growth) confirms this behaviour: the contact sides of neighbouring CNTs are mutually oriented, which means the silicon is not a simply amorphous phase; it may have some crystallinity.
Raman spectroscopy is one of the most powerful tools for the characterization of CNTs and is a non-destructive analysis method. It is important to note that Kataura plots are well established for the pure carbon nanotube system [21, 22], but have not been fine-tuned for doped CNTs. An excitation laser of wavelength 514.532 nm (Ar-ion laser) was used in the present experiment, and Raman spectra were measured on 120 min growth CNTs. At low frequency, around 200 cm-1, lies the radial breathing mode (RBM), which depends directly on the diameter of the CNTs; however, the RBM frequency is not related to the chiral angle θ of the nanotube [23–25]. We observed a disappearance of the RBM peak, corresponding to the thick diameter of the CNTs: the RBM frequency is inversely proportional to the CNT diameter, and in our case of 120 min growth the diameter of the CNTs is up to 1.33 μm (see Fig. 1), so the RBM peak is downshifted out of range. Moreover, for very well aligned CNTs the RBM intensity tends to vanish from sight [26]. The peak at around 520 cm-1 is the silicon peak. The D peak and G peak are located at around 1354.63 cm-1 and 1579.26 cm-1 respectively, with ID/IG ≈ 0.8577.

Fig. 6. Raman spectrum (intensity vs. Raman shift, 0–2000 cm-1) of the 120 min growth, nitrogen-doped CNTs (C2H2/NH3 = 40/60). The D peak and G peak are located at around 1354.63 cm-1 and 1579.26 cm-1 respectively (ID/IG = 0.8577).
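The inverse relation between RBM frequency and tube diameter invoked above can be illustrated numerically. A commonly quoted empirical form for isolated single-walled tubes is ω_RBM ≈ A/d with A ≈ 248 cm-1·nm; this coefficient is a literature value assumed here for the sketch, since the paper does not give one:

def rbm_frequency(diameter_nm, A=248.0):
    # omega_RBM (cm^-1) ~ A / d (nm); A ~ 248 cm^-1 nm is an assumed
    # literature coefficient for isolated single-walled tubes.
    return A / diameter_nm

print(rbm_frequency(1.0))     # ~248 cm^-1: a typical SWNT RBM position
print(rbm_frequency(1330.0))  # ~0.19 cm^-1: for the ~1.33 um structures here,
                              # far below detection, consistent with the
                              # vanishing RBM peak observed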
4 Conclusion
Much progress has been made in CNT synthesis; from now on, the control of defects and impurities becomes an increasingly important key for all applications using CNTs. In our work, the synthesized CNTs involve both nitrogen and silicon: the N/C and Si/C ratios range over 0.25–0.55 and 0.74–1.19, respectively. For long growth times, all the CNTs are covered by silicon; the catalyst metal no longer works, and nitrogen is incorporated together with the silicon. The amount of nitrogen decreased after 80 min of growth, following the decrease in silicon. The unexpected behaviour of silicon with CNTs requires further research.
Acknowledgments. This work was supported by the Seoul Research and Business Development program (Grant no. CR 070054). We also thank all those who contributed to EKC2008 for their assistance and support.
References
[1] S Iijima (1991) Helical microtubules of graphitic carbon. Nature 354:56-58
[2] Y Saito, K Hamaguchi, R Mizushima, S Uemura, T Nagasako, J Yotani, T Shimojo (1999) Field emission from carbon nanotubes and its application to cathode ray tube lighting elements. Appl. Surf. Sci. 146:305-311
[3] S Latil, S Roche, D Mayou, J C Charlier (2004) Mesoscopic Transport in Chemically Doped Carbon Nanotubes. Phys. Rev. Lett. 92(25):256805
[4] K McGuire, N Gothard, P L Gai, M S Dresselhaus, G Sumanasekera, A M Rao (2005) Synthesis and Raman characterization of boron-doped single-walled carbon nanotubes. Carbon 43:219-227
[5] K Lafdi, A Chin, N Ali, J F Despres (1996) Cobalt-doped carbon nanotubes. J. Appl. Phys. 79:6007
[6] J Kong, C Zhou, E Yenilmez, H Dai (2000) Alkaline metal-doped n-type semiconducting nanotubes as quantum dots. Appl. Phys. Lett. 77:3977
[7] S B Fagan, R Mota, A J R da Silva, A Fazzio (2004) Substitutional Si Doping in Deformed Carbon Nanotubes. Nano Lett. 4:975
[8] V Jourdain, O Stephan, M Castignolles, A Loiseau, P Bernier (2004) Controlling the morphology of multiwalled carbon nanotubes by sequential catalytic growth induced by phosphorus. Adv. Mater. 16:447
[9] D J Mann, M D Halls (2002) Ab initio simulations of oxygen atom insertion and substitutional doping of carbon nanotubes. J. Chem. Phys. 116:9014
[10] F Villalpando-Paez, A Zamudio, A L Elias, H Son, E B Barros, S G Chou, Y A Kim, H Muramatsu, T Hayashi, J Kong, H Terrones, G Dresselhaus, M Endo, M Terrones, M S Dresselhaus (2006) Synthesis and characterization of long strands of nitrogen-doped single-walled carbon nanotubes. Chem. Phys. Lett. 424(4-6):345-352
[11] J Liu, S Webster, D L Carroll (2005) Temperature and Flow Rate of NH3 Effects on Nitrogen Content and Doping Environments of Carbon Nanotubes Grown by Injection CVD Method. J. Phys. Chem. B 109(33):15769-15774
[12] B G Sumpter, V Meunier, J M Romo-Herrera, E Cruz-Silva, D A Cullen, H Terrones, D J Smith, M Terrones (2007) Nitrogen-Mediated Carbon Nanotube Growth: Diameter Reduction, Metallicity, Bundle Dispersability, and Bamboo-like Structure Formation. ACS Nano 1(4):369-375
[13] S H Lim, H I Elim, X Y Gao, A T S Wee, W Ji, J Y Lee, J Lin (2006) Electronic and optical properties of nitrogen-doped multiwalled carbon nanotubes. Phys. Rev. B 73:045402
[14] H S Kang, S M Jeong (2004) Nitrogen doping and chirality of carbon nanotubes. Phys. Rev. B 70:233411
[15] S Trasobares, O Stephan, C Colliex, W K Hsu, H W Kroto, D R M Walton (2002) Compartmentalized CNx Nanotubes. J. Chem. Phys. 116:8966
[16] C Tang, Y Bando, D Golberg, F Xu (2004) Structure and nitrogen incorporation of carbon nanotubes synthesized by catalytic pyrolysis of dimethylformamide. Carbon 42:2625
[17] T Kato, G-H Jeong, T Hirata, R Hatakeyama, K Tohji, K Motomiya (2003) Single-walled carbon nanotubes produced by plasma-enhanced chemical vapor deposition. Chem. Phys. Lett. 381:422
[18] K C Park, J H Ryu, S H Lim, K S Kim, J Jang (2007) Growth of carbon nanotubes with resist-assisted patterning process. J. Vac. Sci. and Tech. B 25:1261-1264
[19] J Han, J-B Yoo, C Y Park (2002) Tip growth model of carbon tubules grown on the glass substrate by plasma enhanced chemical vapor deposition. J. Appl. Phys. 91(1):483-486
[20] M Cantoro, S Hofmann, S Pisana, C Ducati, A Parvez, A C Ferrari, J Robertson (2006) Effects of pre-treatment and plasma enhancement on chemical vapor deposition of carbon nanotubes from ultra-thin catalyst films. Diamond and Related Materials 15(4-8):1029-1035
[21] C Thomsen, H Telg, J Maultzsch, S Reich (2005) Chirality assignments in carbon nanotubes based on resonant Raman scattering. Phys. Status Solidi B 242:1802
[22] H Kataura, Y Kumazawa, Y Maniwa, I Umezu, S Suzuki, Y Ohtsuka, Y Achiba (1999) Optical Properties of Single-Wall Carbon Nanotubes. Synth. Met. 103:2555
[23] R Jishi, L Venkataraman, M Dresselhaus, G Dresselhaus (1993) Chem. Phys. Lett. 209:77
[24] J Kurti, G Kresse, H Kuzmany (1998) First-principles calculations of the radial breathing mode of single-wall carbon nanotubes. Phys. Rev. B 58:R8869
[25] D Sanchez-Portal, E Artacho, J Soler, A Rubio, P Ordejon (1999) Characterization methods of carbon nanotubes. Phys. Rev. B 59:12768
[26] J Liu, S Webster, D L Carroll (2006) Highly aligned coiled nitrogen-doped carbon nanotubes synthesized by injection-assisted chemical vapor deposition. Appl. Phys. Lett. 88(21):213119
Micro Burr Formation in Aluminum by Electrical Discharge Machining Dal Ho Lee Faculty of Mechanical Engineering, RWTH Aachen, Aachen, Germany [email protected]
Abstract. Currently, the development of high-technology parts such as semiconductor elements, optical elements, electronic elements, and components for automobiles, aircraft and satellites is increasing very actively. Micromachining is the foundation technology for realizing such micro parts. Micromachining methods are varied, including EDM, LBM, ECM and USM. Among them, EDM is one of the most powerful methods for micromachining metals and other electrically conductive materials. However, when machining a workpiece with EDM, a heat-affected zone is formed at the surface of the workpiece and burrs are formed at its edges. Burrs in particular cause many problems in the inspection, assembly and edge finishing of precision micro parts. Despite these problems, the state of the art of burr formation by EDM is not yet sufficient. Thus, this study carried out experiments to observe burr formation in several different materials by EDM, and in particular to observe burr formation as a function of the EDM parameters. As a result, the effect of the EDM parameters on burr size and shape, and the formation mechanism of the burrs, were observed.
1 Introduction
Micromachining is the processing of tiny components with lengths smaller than a millimetre, requiring a level of accuracy that general machining processes do not provide. Commercialized micromachining technologies include Electrical Discharge Machining (EDM), Laser Beam Machining (LBM), Ultrasonic Machining (USM) and Electro-Chemical Machining (ECM) [1]. Among these, EDM is the most widely used because of its high aspect-ratio capability and its ability to process very hard materials such as steel, cement, industrial diamonds, and conductive ceramics [2]. EDM is also used in the production of turbo-engine nozzles for airplanes, fuel-injection nozzles and coolant openings for cars, electron guns for video-signal processing, micro connections and lead frames for the integrated circuits (IC) of high-speed computers, inkjet nozzles, liquid orifices used in the medical field, and nuclear-fusion measuring devices [3, 4]. Despite these excellent processing characteristics, plastic transformation by EDM usually produces unintended burrs on the final product, which largely decrease the processing accuracy of the machine [5]. As EDM technology becomes more precise, the burrs produced also become proportionately smaller, producing what we call micro burrs. Micro burrs formed by EDM have yet to be treated as critical factors that decrease the level of processing accuracy and cause obvious problems in microminiaturization.
However, the potential processing hazards caused by EDM-produced micro burrs are serious enough to warrant this study. First, the standard processing or measuring surface of the discharge machine can be erroneously set up because of micro burrs, which may cause production problems in the automotive and high-technology electronics industries. Moreover, micro burrs have a direct impact on efficiency in assembly lines and on the quality of the final processed components. For example, micro burrs in the minute openings of inkjet nozzles can lead to nozzle blocking and serious performance degradation, and burrs formed in the fuel-injection nozzles of turbo engines disturb the flow of fuel and can have dangerous, unexpected consequences [6, 7]. Therefore, the best recourse for micromachining technology is to conduct research on micro burrs formed by EDM. In addition, the machining design standard should be established more technically, treating micro burrs as an urgent problem and minimizing their formation. This study characterizes micro burrs formed by EDM from a technological and engineering standpoint, with the aim of ensuring a high level of processing accuracy and efficiency in microminiaturization.
2 Theoretical Background
2.1 Discharge Pulse and Discharge Energy
When the peak current between electrodes is Ip (A), the pulse on time is τp (sec), the rest time is τr (sec), and the discharge voltage is eg (V), the single-discharge energy Psingle (W) is given by Eq. (1) and the discharge frequency f (Hz) by Eq. (2) [8].

$P_{single} = e_g \cdot I_p \cdot \tau_p$    (1)

$f = (\tau_p + \tau_r)^{-1}$    (2)

The continuous discharge energy in machining, Pcontinue (W), follows from Eq. (1) and Eq. (2) as Eq. (3).

$P_{continue} = e_g \cdot I_p \cdot \tau_p \cdot f = e_g \cdot I_p \cdot \tau_p \cdot (\tau_p + \tau_r)^{-1} = e_g \cdot I$    (3)
The average machining current $I = I_p \tau_p (\tau_p + \tau_r)^{-1}$ and the discharge voltage eg (V) are constant during discharge, so Pcontinue is determined by the average machining current regardless of the voltage between electrodes. Ultimately, the average machining current I is the EDM factor most closely related to machining performance and speed. An overload of current for high machining speed leads to flameless arc discharge, while an insufficient current also causes flameless arc discharge, both of which make machining impossible [9].
2.2 Discharge Machining Factor
There are electric and non-electric factors that affect EDM performance. Electric factors include peak current, pulse factor, accumulation capacity, and discharge voltage. Non-electric factors include the dielectric fluid and servo devices [9].
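Before examining the individual factors, Eqs. (1)–(3) can be fixed numerically with a short Python sketch (an illustration only, not part of the original analysis), using representative settings from the experiments of Section 4 (eg = 100 V, Ip = 10 A, τp = 15 μs, τr = 1 μs):

def single_discharge_energy(eg, ip, tau_p):
    # Eq. (1): energy of one pulse (J) = eg * Ip * tau_p
    return eg * ip * tau_p

def discharge_frequency(tau_p, tau_r):
    # Eq. (2): f = 1 / (tau_p + tau_r)  (Hz)
    return 1.0 / (tau_p + tau_r)

def continuous_discharge_power(eg, ip, tau_p, tau_r):
    # Eq. (3): P_continue = eg * Ip * tau_p * f = eg * I  (W)
    return eg * ip * tau_p * discharge_frequency(tau_p, tau_r)

eg, ip, tau_p, tau_r = 100.0, 10.0, 15e-6, 1e-6
print(single_discharge_energy(eg, ip, tau_p))            # 0.015 J per pulse
print(discharge_frequency(tau_p, tau_r))                 # 62,500 Hz
print(continuous_discharge_power(eg, ip, tau_p, tau_r))  # 937.5 W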
2.2.1 Peak Current and Pulse Factor
The factors most closely related to discharge performance are the pulse factors, such as peak current Ip, pulse on time τp, and rest time τr. The pulse factors must satisfy two conditions, namely: they should not cause arc discharge, and discharges should not occur at the same point. Their effects are as follows:
1. Peak current (Ip): the peak current between the electrode and the machined subject; it is closely related to the discharge energy and is one of the EDM factors that greatly affect the electrode consumption rate, machining speed, machined surface roughness, and over-cut.
2. Pulse on time (τp): the time during which voltage is imposed between the two electrodes; it is one of the EDM factors closely related to the electrode consumption rate and machining speed.
3. Rest time (τr): the time during which voltage is not imposed between the two electrodes. Insulation recovers during this time to make continuous EDM possible; it is closely related to machining stability.
2.2.2 Accumulation Capacity
Single machining is powered by single-discharge energy; the effect of single discharges therefore accumulates in EDM machining. The accumulation capacity of the condenser controls the energy in the case of RLC-circuit EDM. The discharge energy (when the discharge voltage is not high) can be simply expressed as:

$P_{single} = (C + C') \cdot e_g^2 / 2$    (4)
Where eg is the discharge voltage, C is the condenser capacity of the circuit, and C' is the sum of all the condenser capacities formed between the electrodes and the water tank (or dielectric fluid) [9].
2.2.3 Discharge Voltage
Discharge voltage means the voltage imposed on the condenser from the power source. It is therefore closely related to the discharge energy, as shown in Eq. (4), and is an important parameter of EDM.
2.2.4 Servo-transfer Device
With EDM, the proper maintenance of the distance between the workpiece and the electrode, which is the job of the servo-transfer device, is very important. The difference between the average voltage of the discharge gap and the standard voltage controls the general location of the electrodes. If the average voltage of the gap is higher than the standard voltage, the servo device moves down, narrowing the gap, in response to an electric command; if the standard voltage becomes higher, the servo device moves up, widening the gap. When the gap is open, the servo device adjusts itself, moving the electrode down, until the gap voltage equals the fixed standard voltage. When the electrode comes into contact with the workpiece and a short circuit occurs, the discharge-gap voltage becomes zero, causing the servo device to move up until the gap voltage equals the standard voltage. When the standard
voltage declines, the average discharge gap narrows during discharge; when the standard voltage rises, the gap widens to make machining possible [9].
2.3 Characteristics of Burrs
By definition, burrs formed on the entrance surface when the electrode enters the workpiece are called entrance burrs, and burrs formed on the exit surface when the electrode exits the workpiece are called exit burrs. The height of a burr is defined as its height perpendicular to the workpiece surface, and the width of a burr as the size of the burr measured from the centre of the hole in the workpiece [10].
3 Experimental Conditions and Method
This study used Al6061, which is relatively ductile and has one of the lowest melting points among the various metal alloys used in the aerospace, electronics, and moulding industries. Brass was used as the electrode material to obtain a low electrode-consumption rate and to minimize levelling and vibration amplitude. The electrode was 3 mm in diameter and 250 mm in length. A micro-hole EDM apparatus was used to observe the burrs formed in micro holes. The main body consists of the electrode-transfer device (servo device), the column holding it, the bed and table supporting the column, the drill chuck, and the electrode guide. The hole-machining method was employed. Electric and non-electric factors were chosen as direct factors in the formation of micro burrs, and their effects on micro burr formation were observed. The factors of greatest effect are electrical factors such as peak current, condenser capacity, and discharge voltage. The non-electric factors, such as servo voltage for electrode-transfer control and discharge pressure of the dielectric fluid, also have important effects on the formation of burrs. These machining factors were systematically changed to measure the size and shape of the micro burrs. The electrodes were changed more than ten times in each experiment and for each machining factor to minimize experimental errors arising from electrode consumption. Table 1 shows the experimental conditions of this study.

Table 1. Specification of experimental conditions

EDM condition:
- Peak current: 6–22 [A]
- Pulse-on duration: 10, 13, 15 [µs]
- Pulse-off duration: 1, 2, 5 [µs]
- Discharge voltage: 85–125 [V]
- Capacitance: 100–1000 [pF]
- Servo voltage: 20–28 [V]

Mechanical condition:
- Tool electrode: brass [φ0.3 mm]
- Dielectric: water
- Dielectric fluid pressure: 30–90 [kg/cm2]
4 Experimental Result and Investigation
4.1 The Effect of Discharge-Machining Factors
The effects of changes in the discharge-machining factors, such as EDM peak current, condenser capacity, discharge voltage, servo voltage, and discharge pressure of the dielectric fluid, on the size and shape of the burrs were thoroughly analysed.

Fig. 1. Size of burr by (a) variation of peak current, (b) variation of capacitance, (c) variation of servo voltage, (d) variation of discharge voltage – Peak current: 10 A (Pulse duration: τp = 15 μs, τr = 1 μs)

Fig. 1 (a) shows the hole machining of the Al6061 workpiece by EDM using a 0.3-mm-diameter brass electrode. The dielectric fluid used was water, the discharge
voltage was 100 V, the servo voltage was 28 V, and the condenser capacity was 200 pF. The change in the size of the burrs (height and thickness) was observed as a function of the peak current: burr size increases as the peak current increases, regardless of the entrance or exit surface, but the increase occurs more strongly at the exit than at the entrance. Fig. 1 (b) shows the hole machining of the Al6061 workpiece by EDM using a 0.3-mm-diameter brass electrode. The dielectric fluid used was water, the discharge voltage was 100 V, the servo voltage was 28 V, and the peak current was 10 A. The change in the size of the burrs was observed according to condenser capacity: burr size increases as the condenser capacity increases, and this happens more intensely at the exit than at the entrance. Fig. 1 (c) shows the hole machining of the Al6061 workpiece by EDM using a 0.3-mm-diameter brass electrode. The dielectric fluid used was water, the discharge voltage was 100 V, the condenser capacity was 200 pF, and the peak current was 10 A. The change in the size of the burrs was observed according to servo voltage: burr size decreases as the servo voltage increases, regardless of material and location, but the decrease occurs more strongly at the entrance than at the exit. Fig. 1 (d) shows the hole machining of the Al6061 workpiece by EDM using a 0.3-mm-diameter brass electrode. The dielectric fluid used was water, the peak current was 10 A, the condenser capacity was 200 pF, and the servo voltage was 28 V. The change in the size of the burrs was observed according to discharge voltage: burr size increases as the discharge voltage increases, regardless of material and location, and this happens more intensely at the exit than at the entrance.
4.2 Shape of Burr
The classification of the burrs after hole machining by EDM is shown in Fig. 2. The burr shapes are classified into 3 kinds, namely: type A, in which burrs are rarely observed; type B, in which burrs form in a flower or burst-cone pattern; and type C, in which burrs form lumps or refusion layers.
Fig. 2. Classification of burrs by EDM: (a) Type A, (b) Type B, (c) Type C
5 Conclusion
This study, performed to investigate the creation of micro burrs by EDM with Al6061 as the workpiece, arrived at the following conclusions:
1. The size of the micro burr is greatly dependent on the peak current, the condenser capacity, and the discharge energy related to the discharge voltage.
2. The size of the micro burr is also dependent on the non-electric factors, such as servo voltage and dielectric fluid pressure, related to the discharge voltage.
3. The size of the micro burr is greater on the exit surface than on the entrance surface.
4. The shapes of the micro burr are classified as: without burr, burst shape, and refusion shape.
5. To minimize the formation of micro burrs, the discharge energy should be decreased by reducing electrical discharge machining factors such as peak current, condenser capacity, and discharge voltage to the minimum. It is also necessary to adopt a more precise electrode-transfer device control.
References
[1] T Masuzawa (2000) State of the art of micromachining. Annals of the CIRP 49:473-488
[2] Z Y Yu, K P Rajurkar, H Shen (2002) High Aspect Ratio and Complex Shaped Blind Micro Holes by Micro EDM. Annals of the CIRP 51:359-362
[3] T Masuzawa, J Tsukamoto, M Fujino (1989) Drilling of Deep Microholes by EDM. Annals of the CIRP 38:195-198
[4] F Klocke, M Klotz (2001) Funkenerosion für anspruchsvolle Bearbeitungsaufgaben. wt-Werkstattstechnik 91:409-413
[5] J-P Kruth, L Stevens, L Froyen, B Lauwers, K U Leuven (1995) Study of the White Layer of a Surface Machined by Die-Sinking Electro Discharge Machining. Annals of the CIRP 44:169-172
[6] J M Stein, D A Dornfeld (1997) Burr Formation in Drilling Miniature Holes. Annals of the CIRP 46:63-66
[7] K Gillespie, P T Blotter (1976) The Formation and Properties of Machining Burrs. Transactions of the ASME Journal of Engineering for Industry 98:66-74
[8] K B Lee, J D Kim, B H Choi, H D Song (1998) Determination of Machining Parameters Considering Current Density in Three Dimensional Electrical Discharge Machining. Journal of the Korean Society of Machine Tool Engineers 8:100-106
[9] G J Yoo (1985) Electro Discharge Machining. Daekwang Publishing Company
[10] D A Dornfeld, J S Kim, H Dechow, J Hewson, L J Chen (1999) Drilling Burr Formation in Titanium Alloy, Ti-6Al-4V. Annals of the CIRP 48:73-76
Multi-objective Environmental/Economic Dispatch Using the Bees Algorithm with Weighted Sum Ji Young Lee and Ahmed Haj Darwish Manufacturing Engineering Centre, Cardiff University, Cardiff, CF24 3AA, UK [email protected]
Abstract. This paper presents the use of the Bees Algorithm for the Environmental/Economic power Dispatch (EED) problem, which is formulated as a nonlinear constrained multi-objective optimisation problem in which both fuel cost and emission are to be minimised simultaneously. Simulation results for the standard IEEE 30-bus system using the Bees Algorithm with Weighted Sum are compared to previous approaches; the comparison shows the superiority of the proposed Bees Algorithm and confirms its potential to solve the multi-objective EED problem.
1 Introduction
The sole aim of the classical economic dispatch problem has been to minimise the total fuel cost. However, this single objective can no longer be considered alone, owing to rising concerns about the environmental emissions produced by fossil-fuelled electric power plants. Environmental/Economic Dispatch (EED) is a multi-objective problem with conflicting objectives, because minimum emissions conflict with minimum cost of generation. Many approaches have been proposed to solve this multi-objective problem. More recently, multi-objective evolutionary algorithms [1] have been applied to the problem, and Abido [2, 3] applied the Nondominated Sorting Genetic Algorithm (NSGA), the Niched Pareto Genetic Algorithm (NPGA), the Strength Pareto Evolutionary Algorithm (SPEA) and NSGA-II to the standard IEEE 30-bus system. This paper applies the Bees Algorithm, a new population-based optimisation algorithm that mimics the natural foraging behaviour of a swarm of bees, to the EED problem to minimise fuel cost and nitrogen oxide (NOx) emissions simultaneously. The paper is organized as follows: section 2 reviews multi-objective optimisation and intelligent swarm-based optimisation, section 3 describes the proposed Bees Algorithm, section 4 defines the Environmental/Economic Dispatch problem, section 5 shows the simulation results and section 6 draws the conclusions of this research.
2 Multi-objective Optimisation and Intelligent Swarm-Based Optimisation
2.1 Multi-objective Optimisation
Most realistic optimisation problems require the simultaneous optimisation of more than one objective function, and it is unlikely that the different objectives would be optimised by the same alternative parameter choices. The multi-objective problem (MOP) is almost always solved by combining the multiple objectives into one scalar objective whose solution is a Pareto optimal point for the original MOP. Most algorithms have been developed in the linear framework (i.e. linear objectives and linear constraints), but some techniques such as Weighted Sum, Homotopy Techniques, Goal Programming, Normal-Boundary Intersection (NBI) and Multilevel Programming are also applicable to nonlinear problems. In this paper, the Weighted Sum was used as the solution technique.
Minimising Weighted Sums of Functions. A standard technique for an MOP is to minimise a positively weighted convex sum of the objectives, that is

$\sum_{i=1}^{n} \alpha_i f_i(x)$    (1)
Where αi > 0, i = 1, 2, ..., n. It is easy to prove that the minimiser of this combined function is Pareto optimal, and it is up to the user to choose appropriate weights. This approach gives an idea of the shape of the Pareto surface and provides the user with more information about the trade-off among the various objectives.
2.2 Intelligent Swarm-Based Optimisation
Swarm-based optimisation algorithms (SOAs) mimic nature's methods to drive the search towards the optimal solution. Many of nature's methods, as employed by the various species that form swarms, have been developed into SOAs such as the Genetic Algorithm (GA), Ant Colony Optimisation (ACO) and Particle Swarm Optimisation (PSO). The GA is based on natural selection and genetic recombination and works by choosing solutions from the current population and then applying genetic operators – such as mutation and crossover – to create a new population. The ACO algorithm mimics the behaviour of real ants, which are capable of finding the shortest path from the food source to their nest using a chemical substance called a pheromone. The pheromone is deposited on the ground as the ants move, and the probability that a passing stray ant will follow this trail depends on the quantity of pheromone laid. The PSO is an optimisation procedure based on the social behaviour of groups, such as the flocking of birds or the schooling of fish. Individual solutions are represented by "particles" which evolve or change their positions over time in the search space according to their own experience and that of their neighbours, thus combining local and global search methods. The main difference between SOAs and direct search algorithms such as hill climbing and random walk is that SOAs maintain a population of solutions in every iteration, whereas a direct search algorithm keeps only a single solution.
3 The Bees Algorithm
Bees are well known as social insects with well-organized colonies, and many researchers have studied their behaviour – foraging, mating and nest-site location – to solve many difficult problems. The Bees Algorithm is inspired by the honey bees' foraging behaviour and was developed by D. T. Pham in 2006 [7, 8].
3.1 Bees in Nature
In a colony, the foraging process is conducted by scout bees, which are sent to flower patches to search for a food source. When they return to the hive, they deposit their nectar or pollen and then go to the "dance floor" to perform the "waggle dance", which conveys important information about the flower patch they have found. This information helps worker bees to go precisely to the most promising flower patches without guides or maps. More worker bees are sent to the more promising patches and fewer bees to the less fruitful sites. The colony therefore gathers food more quickly and more efficiently.
3.2 Proposed Bees Algorithm
As mentioned, the Bees Algorithm is an optimisation algorithm for finding the optimal solution. Fig. 1 presents pseudo code for the Bees Algorithm.

1. Initialise population with random solutions
2. Evaluate fitness of the population
While (stopping criterion not met) // Forming new population
  3. Select patches for neighbourhood search
  4. Recruit bees for selected patches (more bees for best patches) and evaluate their fitness
  5. Select the fittest bee from each patch
  6. Assign remaining bees to search randomly and evaluate their fitness
7. End while

Fig. 1. Pseudo code for the Bees Algorithm
The algorithm requires 6 parameters to be set, namely: 1) the number of scout bees (n), 2) the number of selected flower patches (m) among the patches visited by the scout bees, 3) the number of best ("elite") patches (e) among the m selected patches, 4) the number of recruited bees (n2) for the e elite patches, 5) the number of recruited bees (n1) for the other (m−e) patches, and 6) the initial patch size (ngh). The algorithm starts with n scout bees randomly sent to the search space. After evaluating fitness, the bees with the highest fitness scores are chosen as "selected bees" and their patches as "selected patches" for the neighbourhood search. When a neighbourhood search is conducted among the selected patches, more bees are assigned to the best patches, which represent the more promising solutions. This differential recruitment, together with scouting, is the key operation of the Bees Algorithm. After a neighbourhood search finishes, only the single bee with the highest fitness in each selected patch is chosen to form the next bee population. Although there is no such restriction in nature, the Bees Algorithm introduces it to reduce the number of points to be explored. The
remaining bees in the population will be randomly assigned in the search space, in case new potential solutions are scouted. All these steps are repeated until a stopping criterion is met.
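To make the procedure of Fig. 1 concrete, the following minimal Python sketch implements the Bees Algorithm with the six parameters described above. It is an illustration only: the uniform sampling inside a fixed-size patch, and all function and variable names, are our assumptions, as the paper does not prescribe them.

import random

def bees_algorithm(fitness, bounds, n=50, m=8, e=3, n2=5, n1=3,
                   ngh=0.1, iterations=200):
    # Minimise `fitness` over the box `bounds` = [(lo, hi), ...].
    def scout():
        return [random.uniform(lo, hi) for lo, hi in bounds]

    def neighbour(site):
        # Sample uniformly within a patch of size ngh, clipped to the bounds.
        return [min(max(x + random.uniform(-ngh, ngh), lo), hi)
                for x, (lo, hi) in zip(site, bounds)]

    population = [scout() for _ in range(n)]
    for _ in range(iterations):
        population.sort(key=fitness)              # best (lowest) first
        next_population = []
        for rank, site in enumerate(population[:m]):
            recruits = n2 if rank < e else n1     # more bees for elite patches
            candidates = [neighbour(site) for _ in range(recruits)] + [site]
            next_population.append(min(candidates, key=fitness))  # fittest bee per patch
        next_population += [scout() for _ in range(n - m)]        # random scouting
        population = next_population
    return min(population, key=fitness)

Each iteration keeps only the fittest bee per selected patch, mirroring steps 3–6 of Fig. 1.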
4 Environmental/Economic Dispatch
The EED problem involves the conflicting objectives of emission and fuel cost optimisation, and it is formulated as described below.
4.1 Objective Functions
Fuel Cost Objective. The economic dispatch problem in power generation is to minimise the total fuel cost while satisfying the total required demand. The equation used for the optimal combination in this problem is as follows (in $/hr) [1, 4, 5, 6]:

$C = \sum_{i=1}^{n} (a_i + b_i P_{Gi} + c_i P_{Gi}^2)$    (2)
Where C is the total fuel cost ($/hr), ai, bi, ci are the fuel cost coefficients of generator i, PGi is the power generated (p.u.) by generator i, and n is the number of generators.
NOx Emission Objective. The total NOx emission caused by fossil fuel is expressed as (in ton/hr):

$E_{NOx} = \sum_{i=1}^{n} (a_i^N + b_i^N P_{Gi} + c_i^N P_{Gi}^2 + d_i^N \exp(e_i^N P_{Gi}))$    (3)
Where aiN, biN, ciN, diN, and eiN are the NOx emission coefficients of the ith generator.
4.2 Constraints
The optimisation problem is bounded by the following constraints:
Power balance constraint. The total power generated must supply the total load demand and the transmission losses:

$\sum_{i=1}^{n} P_{Gi} - P_D - P_L = 0$    (4)

Where PD is the total load demand (p.u.) and PL the transmission losses (p.u.). In this paper, however, PL is taken as 0.
Maximum and minimum limits of power generation. The power generated PGi by each generator is constrained between its minimum and maximum limits:

$P_{Gi}^{min} \le P_{Gi} \le P_{Gi}^{max}$    (6)

Where PGimin is the minimum and PGimax the maximum power generated.
Fig. 2. Single-line diagram of IEEE 30-bus test system
4.3 Multi-objective Formulation
The multi-objective Environmental/Economic Dispatch optimisation problem is therefore formulated as:

Minimise $[C, E_{NOx}]$    (7)
Subject to:
$\sum_{i=1}^{n} P_{Gi} - P_D = 0$ (power balance), and
$P_{Gi}^{min} \le P_{Gi} \le P_{Gi}^{max}$ (generation limits)

Table 1. Fuel cost coefficients

Unit i |  ai |  bi |  ci | PGi min | PGi max
   1   |  10 | 200 | 100 |  0.05   |  0.50
   2   |  10 | 150 | 120 |  0.05   |  0.60
   3   |  20 | 180 |  40 |  0.05   |  1.00
   4   |  10 | 100 |  60 |  0.05   |  1.20
   5   |  20 | 180 |  40 |  0.05   |  1.00
   6   |  10 | 150 | 100 |  0.05   |  0.60
4.4 System Parameters
Simulations were performed on the standard IEEE 30-bus 6-generator test system using the Bees Algorithm. The power system is interconnected by 41 transmission lines and the total system demand for the 21 load buses is 2.834 p.u. Fuel cost and NOx emission coefficients for this system are given in Tables 1 and 2, respectively.

Table 2. NOx emission coefficients

Unit i |   aiN    |    biN    |   ciN    |  diN   |  eiN
   1   | 4.091e-2 | -5.554e-2 | 6.490e-2 | 2.0e-4 | 2.857
   2   | 2.543e-2 | -6.047e-2 | 5.638e-2 | 5.0e-4 | 3.333
   3   | 4.258e-2 | -5.094e-2 | 4.586e-2 | 1.0e-6 | 8.000
   4   | 5.326e-2 | -3.550e-2 | 3.380e-2 | 2.0e-3 | 2.000
   5   | 4.258e-2 | -5.094e-2 | 4.586e-2 | 1.0e-6 | 8.000
   6   | 6.131e-2 | -5.555e-2 | 5.151e-2 | 1.0e-5 | 6.667
Table 3. Parameters for the Bees Algorithm

Parameter | Value
n : Number of scout bees | 50
m : Number of selected patches | 8
e : Number of elite patches | 3
ngh : Initial patch size | 0.1
n2 : Number of bees allocated to elite patches | 5
n1 : Number of bees allocated to other selected patches | 3
Table 4. Parameters for the Weighted Sum technique

Parameter | Value
W1 : weight for fuel cost | 1
W2 : weight for NOx emission | 1; 50; 100; 500; 1,000; 5,000; 10,000; 50,000; 100,000; 500,000
For all simulations, the parameters given in Tables 3 and 4 were used, to an accuracy of 1.0e-5.
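As an illustration of how Eqs. (1)–(3) and the constraints combine in practice, the Python sketch below codes the two objectives with the coefficients of Tables 1 and 2 and a weighted-sum fitness suitable for the Bees Algorithm of Section 3. The penalty handling of the constraints is our assumption; the paper does not state how constraint violations were treated.

import math

# Coefficients from Table 1 (fuel cost) and Table 2 (NOx emission); PG in p.u.
A  = [10, 10, 20, 10, 20, 10]
B  = [200, 150, 180, 100, 180, 150]
C_ = [100, 120, 40, 60, 40, 100]
AN = [4.091e-2, 2.543e-2, 4.258e-2, 5.326e-2, 4.258e-2, 6.131e-2]
BN = [-5.554e-2, -6.047e-2, -5.094e-2, -3.550e-2, -5.094e-2, -5.555e-2]
CN = [6.490e-2, 5.638e-2, 4.586e-2, 3.380e-2, 4.586e-2, 5.151e-2]
DN = [2.0e-4, 5.0e-4, 1.0e-6, 2.0e-3, 1.0e-6, 1.0e-5]
EN = [2.857, 3.333, 8.000, 2.000, 8.000, 6.667]
PMIN, PMAX = [0.05] * 6, [0.50, 0.60, 1.00, 1.20, 1.00, 0.60]
PD = 2.834                                   # total demand (p.u.); PL = 0

def fuel_cost(PG):                           # Eq. (2), $/hr
    return sum(a + b * p + c * p * p for a, b, c, p in zip(A, B, C_, PG))

def nox_emission(PG):                        # Eq. (3), ton/hr
    return sum(a + b * p + c * p * p + d * math.exp(e * p)
               for a, b, c, d, e, p in zip(AN, BN, CN, DN, EN, PG))

def weighted_fitness(PG, w1=1.0, w2=1000.0, penalty=1.0e6):
    # Weighted sum (Eq. (1)) plus a penalty for violating Eqs. (4) and (6);
    # the penalty formulation is an assumption, not taken from the paper.
    violation = abs(sum(PG) - PD)
    violation += sum(max(lo - p, 0.0) + max(p - hi, 0.0)
                     for p, lo, hi in zip(PG, PMIN, PMAX))
    return w1 * fuel_cost(PG) + w2 * nox_emission(PG) + penalty * violation

Sweeping w2 over the values of Table 4 while minimising weighted_fitness traces out the trade-off curve of Fig. 3.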
5 Results
Fig. 3 shows a good diversity in the Weighted Sum solutions obtained by the Bees Algorithm after 200 iterations. Tables 5 and 6 show the best fuel cost and best NOx emission obtained by the Bees Algorithm as compared to Linear Programming (LP), the Multi-Objective Stochastic Search Technique (MOSST), the Nondominated Sorting Genetic Algorithm (NSGA), the Niched Pareto Genetic Algorithm (NPGA) and the Strength Pareto Evolutionary Algorithm (SPEA). In Table 5, the best fuel cost, 600.1320 $/hr, is obtained by the Bees Algorithm, with a corresponding NOx emission of 0.2212 ton/hr. Although the best emission found by the Bees Algorithm in Table 6 is the same as for SPEA and NSGA-II, the corresponding fuel cost, 636.0475 $/hr, is much lower than for the others. It is quite evident that the proposed approach gives superior results. Therefore the Bees Algorithm attains minimum fuel cost with minimal NOx emission as compared to the other previous approaches.

Fig. 3. Weighted Sum solutions for the EED problem using the Bees Algorithm (NOx emission (ton/hr) plotted against fuel cost ($/hr))

Table 5. Best fuel cost

                  |   LP    |  MOSST  |  NSGA   |  NPGA   |  SPEA   | NSGA-II | Bees Algorithm
PG1               | 0.1500  | 0.1125  | 0.1567  | 0.1080  | 0.1062  | 0.1059  | 0.1132
PG2               | 0.3000  | 0.3020  | 0.2870  | 0.3284  | 0.2897  | 0.3177  | 0.2977
PG3               | 0.5500  | 0.5311  | 0.4671  | 0.5386  | 0.5289  | 0.5216  | 0.5372
PG4               | 1.0500  | 1.0208  | 1.0467  | 1.0067  | 1.0025  | 1.0146  | 1.0024
PG5               | 0.4600  | 0.5311  | 0.5037  | 0.4949  | 0.5402  | 0.5159  | 0.5284
PG6               | 0.3500  | 0.3625  | 0.3729  | 0.3574  | 0.3664  | 0.3583  | 0.3549
Best cost         | 606.314 | 605.889 | 600.572 | 600.259 | 600.15  | 600.155 | 600.1320
Corresp. Emission | 0.22330 | 0.22220 | 0.22282 | 0.22116 | 0.2215  | 0.22188 | 0.221239
Table 6. Best NOx emission

                  |   LP    |  MOSST  |  NSGA   |  NPGA   |  SPEA   | NSGA-II | Bees Algorithm
PG1               | 0.4000  | 0.4095  | 0.4394  | 0.4002  | 0.4116  | 0.4074  | 0.3886
PG2               | 0.4500  | 0.4626  | 0.4511  | 0.4474  | 0.4532  | 0.4577  | 0.4495
PG3               | 0.5500  | 0.5426  | 0.5105  | 0.5166  | 0.5329  | 0.5389  | 0.5376
PG4               | 0.4000  | 0.3884  | 0.3871  | 0.3688  | 0.3832  | 0.3837  | 0.3986
PG5               | 0.5500  | 0.5427  | 0.5553  | 0.5751  | 0.5383  | 0.5352  | 0.5395
PG6               | 0.5000  | 0.5142  | 0.4905  | 0.5259  | 0.5148  | 0.5110  | 0.5199
Best Emission     | 0.19424 | 0.19418 | 0.19436 | 0.19433 | 0.1942  | 0.19420 | 0.1942
Corresp. Cost     | 639.600 | 644.112 | 639.231 | 639.182 | 638.51  | 638.269 | 636.0475
6 Conclusion
In this paper, the multi-objective Environmental/Economic Dispatch problem was solved using the Bees Algorithm with Weighted Sum. The algorithm was tested on the standard IEEE 30-bus system, and the minimum cost and minimum emission solutions found are better than those found by previous approaches. The Bees Algorithm therefore confirms its potential to solve the multi-objective EED problem and, whilst offering great financial savings, also contributes to the reduction of greenhouse gases in the atmosphere.
References
[1] R T F A King, H C S Rughooputh, K Deb Evolutionary Multi-Objective Environmental/Economic Dispatch: Stochastic vs. Deterministic Approaches
[2] M A Abido (2000) A New Multiobjective Evolutionary Algorithm for Environmental/Economic Power Dispatch. IEEE
[3] M A Abido (2003) Environmental/Economic Power Dispatch Using Multiobjective Evolutionary Algorithms. IEEE Transactions on Power Systems 18(4)
[4] R T F A King (2006) Stochastic Evolutionary Multiobjective Environmental/Economic Dispatch. IEEE Congress on Evolutionary Computation, Canada
[5] A Farag, S Al-Baiyat, T C Cheng (1995) Economic load dispatch multiobjective optimisation procedures using linear programming techniques. IEEE Transactions on Power Systems 10(2)
[6] R Yokoyama, S H Bae, T Morita, H Sasaki (1998) Multiobjective optimal generation dispatch based on probability security criterion. IEEE Transactions on Power Systems 3(1)
[7] D T Pham, A Ghanbarzadeh, E Koc, S Otri, S Rahim, M Zaidi (2006) The Bees Algorithm, A Novel Tool for Complex Optimisation Problems. Proc 2nd Int Virtual Conf on Intelligent Production Machines and Systems (IPROMS 2006), Oxford: Elsevier, 454-459
[8] D T Pham, E Koc, J Y Lee, J Phrueksanat (2007) Using the Bees Algorithm to schedule jobs for a machine. Laser Metrology and Performance VIII:430-439
Experimental Investigation on the Behaviour of CFRP Laminated Composites under Impact and Compression After Impact (CAI) J. Lee1 and C. Soutis1,* 1
Aerospace Engineering, The University of Sheffield, Mappin Street, Sheffield, S1 3JD * [email protected]
Abstract. The importance of understanding the response of structural composites to impact and CAI, in order to develop analytical models for impact damage and CAI strength prediction, cannot be overstated. This paper presents experimental findings from quasi-static lateral load tests, low velocity impact tests, CAI strength tests and open hole compressive strength tests on 3mm thick composite plates ([45/-45/0/90]3s – IM7/8552). The conclusion is drawn that the damage areas for the quasi-static lateral load and impact tests are similar, and the curves of several drop weight impacts with varying energy levels (between 5.4J and 18.7J) follow the static curve well. In addition, at a given energy the peak force is in good agreement between the static and impact cases. From the CAI strength and open hole compressive strength tests, it is identified that the failure behaviour of the specimens was very similar to that observed in laminated plates with open holes under compression loading. The residual strengths are in good agreement with the measured open hole compressive strengths when the impact damage site is treated as an equivalent hole. The experimental findings suggest that simple analytical models for the prediction of impact damage area and CAI strength can be developed on the basis of the failure mechanisms observed in these tests.
Keywords: Quasi-static lateral load, low velocity impact, CAI and open hole compressive strength.
1 Introduction The damage caused to carbon fibre composite structures by low velocity impact, and the resulting reduction in compression after impact (CAI) strength has been well known for many years [1,2] and is of particular concern to the aerospace industry, both military and civil. Typically the loss in strength may be up to 60% of the undamaged value and traditionally industrial designers cope with this by limiting compressive strains to the range of 0.3% to 0.4% (3000 to 4000με). Provided buckling is inhibited by good design of the compression panels the material is capable of withstanding more than double these values. This punitive reduction in design allowables is also a result of the fact that we cannot simulate the impact damage. Testing coupons will not simulate the behaviour of larger realistic structures because their dynamic response to low velocity impact may be quite different. It is not economic to perform impact tests on relatively large panels in order to evaluate impact behaviour and damage development. Thus there is a clear need for a
modelling tool which avoids such blanket limitations and addresses the real nature of the damage and the physics of the failure mechanisms when a realistic structure is loaded in compression. An impact damage site in a composite laminate contains delaminations, fibre fracture and matrix cracking. A model to predict the damage area taking into account all these factors would be complex and take considerable time and funding to develop. It was recognised that the problem could be simplified by making some assumptions about the nature of the impact damage. In the present work, experimental findings are presented with a view to developing, in future work, simple analytical models for impact damage and CAI strength prediction based on the failure mechanisms observed in the quasi-static lateral load tests, impact tests and compression-after-impact (CAI) tests. It will be shown that low velocity impact can be modelled as a quasi-static lateral load problem [3], and comparing the quasi-static response with the dynamic response provides a measure of how good this approximation is. In addition, the failure behaviour of composite specimens under static compressive loading after impact will be compared to the compressive behaviour of open hole specimens. Instrumented drop weight impact tests are carried out on quasi-isotropic circular plates made from the IM7/8552 composite system to simulate real-life dropped-tool events, i.e. low velocity impact conditions. Quasi-static lateral load tests are also performed with the same indenter and jig used in the impact tests. The test results are presented and discussed. Finally, compression after impact (CAI) strength tests and open hole compressive strength tests are performed using a Boeing Compression Rig. The details are described in the following sections.
2 Experimental

2.1 Materials and Lay-Up

The material is IM7/8552, supplied by Hexcel Composite Ltd. as a roll of pre-impregnated tape of epoxy matrix (8552) reinforced by continuous intermediate-modulus unidirectional carbon fibres (IM7). The roll was 350mm wide and about 0.125mm thick. The prepreg was cut into 500mm wide by 500mm long sheets using a metal template and laid up in the quasi-isotropic stacking sequence ([45/-45/0/90]3s), giving a total thickness of about 3mm. The in-plane stiffness and strength properties of the IM7/8552 unidirectional laminate are given in Table 1; these parameters were obtained by BAE Systems from material strength tests.

Table 1. Stiffness and strength properties for the IM7/8552 composite system

        E11, GPa   E22, GPa   G12, GPa   ν12   σ11T/σ11C, MPa   σ22T/σ22C, MPa   τ12, MPa
Value   155        10         4.6        0.3   2400/1300        50/250           85
(σ11T/σ11C are longitudinal tensile and compressive strength, σ22T/σ22C are transverse tensile and compressive strength and τ12 is in-plane shear strength).
Table 2. Impact, CAI and OHC test results ([45/-45/0/90]3s – IM7/8552)

Incident Energy (J)                      17.8    18.2    18.7
Peak Force (kN)                          9.7     10.1    10.3
a/W                                      0.13    0.17    0.18
CAI strength (MPa)                       280     243     242
Open hole compressive strength (MPa)     271     –       229

Unimpacted compressive strength: 685 MPa (a = width of impact damaged area; W = laminate width, 100mm).
2.2 Quasi-static Lateral Load Tests

Quasi-static lateral load tests were carried out to measure the maximum deflection at the centre of a circular plate and the strains on the top and bottom surfaces of the plate. For the tests, 150mm diameter circular plates were cut with a diamond saw and their edges were carefully machined. All specimens were first C-scanned before testing in order to check for damage. The bolts of the fully clamped jig were tightened using a torque wrench; the internal diameter of the jig is 102mm. A series of tests were performed using a flat nose loading rod. A single plate was loaded in a number of increments until ultimate failure. The loading was transferred from the cross-head to the plates through a flat nose with a diameter of 12mm, using a screw-driven Zwick 1488 universal testing machine with a load capacity of 200kN. The cross-head speed used in this study was 0.5mm per minute. For measuring the central deflection of the plate, an LVDT displacement transducer was used. The experimental setup and specimen jig are shown in Fig. 1.
Fig. 1. (a) Setup of quasi-static lateral load tests with (b) a jig on circular plates
2.3 Low Velocity Impact Tests

Circular plates cut from a 3mm thick IM7/8552 multidirectional laminate ([45/-45/0/90]3s) are subjected to low-velocity impact using a drop-weight test rig with two different impactor masses (1.58kg and 5.52kg). The specimen dimensions and jig are the same as those used in the quasi-static lateral load tests (see Fig. 1). The test setup is shown schematically in Fig. 2. An impactor with a flat-ended nose of 12mm diameter is instrumented with a strain-gauged load cell providing a record of force-time
history. The impactor was dropped onto the centre of the specimen from a selected height and was captured after the first rebound. The velocity of the impactor carriage is measured by means of a ruled grid attached to the impactor side, which passes a photo-emitter/photo-diode device mounted on the fixed channel guides and gives a pulse of output every time a dark line is crossed. Knowing the spacing of the grid lines and the time for each to pass, the velocity can be calculated before and after the impact event. The output of the device is recorded by a Microlink data capture unit, which provides an interface between the rig instrumentation and a PC-compatible computer. The data capture unit has its own internal timer and up to eight channels for data collection, i.e. the impactor velocity, contact force and six strain histories; the maximum sampling rate of the unit is 250 samples per millisecond. The software provided on the computer allows the collection parameters of the data capture unit to be varied, as well as viewing and storing of the test data. The test data are then converted by the software into a Lotus format file and transferred to a personal computer, where a Microsoft Excel spreadsheet applying the physics of motion detailed in the following section is used for further data reduction and analysis. After impact, each specimen was visually inspected for surface damage; the examination of interior damage was carried out primarily using ultrasonic C-scan and occasionally by X-ray radiography.

Fig. 2. Schematic diagram of the impact test rig

2.4 CAI Strength Tests

Post-impact tests were performed to determine CAI strengths for a range of impact levels which induce fibre damage. Various test methods have been used for this purpose [11]; a side-supported fixture developed by the Boeing Company for compression residual strength tests was used in the current study. The fixture does not need any special instruments and is easy to use. Fig. 3 (a) and (b) show the fixture and the specimen geometry. The fixture was placed on a fixed compression platen in a screw-driven Zwick 1488 universal testing machine with a load capacity of 200kN, and the specimens were loaded until failure at a rate of 0.5mm/min. Alongside the impacted specimens, several plain specimens which had not been impacted were tested in compression, using the modified ICSTM fixture employed in the static compressive tests [16], to provide baseline undamaged data. The gauge section of the plain specimen is 30mm x 30mm (gauge length by width). In addition, open hole compression tests were carried out with the same specimen dimensions as the CAI tests; the hole diameters were obtained from X-ray radiographs by measuring the size of the darkest region of the impacted specimens (see Fig. 7 (b)). These data effectively show what the compressive strength would be if the stiffness of the damaged region were zero.
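To make the data reduction concrete, here is a minimal sketch, in Python, of the velocity and displacement recovery described above (grid timing, then double integration of the force history). The function names, the sign convention (downward positive) and the use of NumPy are our own assumptions, not the authors' spreadsheet:

```python
import numpy as np

def grid_velocity(grid_pitch_m, crossing_times_s):
    """Impactor speed from the ruled grid: line spacing divided by the
    time between successive photo-diode pulses."""
    return grid_pitch_m / np.diff(crossing_times_s)

def cumtrapz0(y, t):
    """Cumulative trapezoidal integral of y(t), starting from zero."""
    out = np.zeros_like(y, dtype=float)
    out[1:] = np.cumsum(0.5 * (y[1:] + y[:-1]) * np.diff(t))
    return out

def reduce_history(t, force, mass, v0, g=9.81):
    """Velocity, displacement and absorbed energy from the force-time
    record: a = g - F/m, then integrate twice (downward positive)."""
    accel = g - force / mass                # plate reaction opposes the motion
    vel = v0 + cumtrapz0(accel, t)
    disp = cumtrapz0(vel, t)                # plate deflection under the nose
    energy = cumtrapz0(force * vel, t)      # area under the load-displacement curve
    return vel, disp, energy
```

The same trapezoidal integration of force against displacement gives the absorbed energy used later when comparing static and impact responses.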
Fig. 3. (a) Boeing fixture and (b) specimen geometry for CAI strength test
3 Results and Discussion

3.1 Quasi-static Lateral Load Tests

Clamped circular specimens were loaded at the centre by a normal load. Fig. 4 shows a typical force-deflection curve for a circular plate, where the deflection was measured by an LVDT at the centre of the plate. During the testing, the first load drop was observed around 10kN with an audible acoustic event (see Fig. 4). The force gradually recovered up to around 11.5kN. The load then fell again, but did not recover to 11.5kN. This clearly indicates that a maximum contact force is reached once fibre fracture occurs, and also illustrates the large amount of energy lost as the fibres fracture and the loading rod penetrates the plate. Visual and C-scan inspections of the specimens were carried out to examine damage at applied loads from 6kN to 11.5kN in increments of 1kN. No visual damage was found on the top and bottom surfaces of the specimen up to an applied load of 10kN. Significant internal damage was, however, detected by C-scan examination at an applied load of 10kN, as shown in Fig. 5 (a); this damage can be considered a combination of delamination and matrix cracks in the specimen. At an applied load of 11kN, no damage was visually observed on the bottom surface, but some circular damage appeared on the top surface due to the contact force of the loading rod, with tiny cracks around the contact area. Finally, much more severe damage was observed around 11.5kN, just prior to ultimate failure, which is caused by tensile fibre fracture on the back face of the specimen allowing the flat nose loading rod to eventually penetrate the specimen (see Fig. 5 (b) and (c)). Fig. 5 (a) and (b) show C-scan images taken at applied loads of 10kN and 11.5kN, and Fig. 5 (c) presents photographs taken from the loading face, showing penetration of the specimen, and from the back face, showing fibre fracture after an applied load of 11.5kN.

Fig. 4. Typical static load-deflection curve for the circular plate with a diameter of 102mm ([45/-45/0/90]3s – IM7/8552)
3.2 Low Velocity Impact Tests

Two sets of impact tests were performed with different impactor masses. Firstly, 3mm thick circular plates of the IM7/8552 system were subjected to low-velocity impact with an impactor mass of 1.58kg over a range of incident energies between 5J and 11J. Secondly, impact tests were carried out with an impactor mass of 5.52kg over a range of incident energies between 16J and 19J. The impact response of laminated plates is commonly described in terms of force-time and incident kinetic energy-time traces [4]. This allows the ability of a material to resist impact damage, and the absorbed kinetic energy, to be assessed. In particular, the shape of these curves usually indicates the onset of damage and its propagation. If the response of the plate were purely elastic, with no form of damage or dissipation, and if the plate mass were very small compared with the impactor, then a pure fundamental simple harmonic response would be expected, with sinusoidal force and displacement histories.

Fig. 5. Static damage taken by C-scan at applied loads of (a) 10 kN and (b) 11.5 kN, and (c) photographs of the loading face and back face after an applied load of 11.5 kN ([45/-45/0/90]3s – IM7/8552)
Fig. 6. (a) Force-time and (b) force-displacement curves for representative impacts on quasi-isotropic ([45/-45/0/90]3s) IM7/8552 laminates, fully clamped with a 102 mm diameter and incident energies of (a) 10.85 J and (b) 18.7 J
Fig. 6 (a) shows a typical force versus time history for an impact; it is for a 3mm thick circular plate with clamped edges at an incident energy of 18.7J (impactor mass: 5.52kg). Fig. 6 (a) exhibits fluctuation around the peak force, making
it possible to identify the onset of damage. A slower recovery also indicates a decrease in the structural stiffness due to damage. By integrating the force-time curve, the displacement can be calculated. On the basis of the displacement data, the force-displacement curve can be plotted, as shown in Fig. 6 (b) for the test of Fig. 6 (a). Once the impactor energy is exhausted, the load starts to drop and reaches zero at a permanent displacement value, as shown in Fig. 6 (b). These force-displacement curves will be compared with the curve measured from the quasi-static lateral load test in the next section. After the drop tests, all the circular specimens were inspected using ultrasonic C-scan to assess the extent of the internal damage caused. The specimens subjected to low-velocity impact with the impactor mass of 1.58kg did not show any damage. However, the specimens impacted with the impactor mass of 5.52kg showed significant internal damage, with or without visible damage on the specimen surface depending on the amount of incident energy. Fig. 7 (a) and (b) present the impact damage of the specimen from the test of Fig. 6 (incident energy: 18.7J), using ultrasonic C-scan and X-ray radiography, respectively. For an incident energy of 17.8J, damage on the top and bottom plies was visually observed: tiny cracks around the impact area on the top surface, but severe damage on the bottom plies, including matrix cracking, delamination, fibre splitting and fibre fracture. The intensity of the dark region shown in Fig. 7 (b) is a measure of the extent of severe damage in the specimen; sectioning/polishing and de-ply studies [5,6] revealed that in the very dark area fibre breakage and delaminations exist at almost all interfaces through the thickness of the laminate. The lighter region in Fig. 7 (b) corresponds to splitting and delamination of the back face rather than internal damage. The circled dark region will be used as the replacement of the impact damage with an equivalent open hole for the compression after impact (CAI) strength prediction, and will be compared to the estimated impact damage area in a future study.

Fig. 7. Impact damage of a quasi-isotropic ([45/-45/0/90]3s) IM7/8552 laminate, taken from (a) C-scan and (b) X-ray radiograph with an incident energy of 18.7 J

3.3 Comparison of Quasi-static Lateral Load and Low Velocity Impact Test Results

Impact events that involve a high mass impacting a relatively small target at low velocities can usually be thought of as quasi-static events [3, 4, 7-10]. The most direct way to determine whether an impact event can be considered quasi-static is to compare the two cases experimentally. Figure 8 shows the static and dynamic responses of 3mm thick quasi-isotropic IM7/8552 panels. In the figure, the force-displacement curves of several drop weight impacts with varying energy levels (between 5.4J and 18.7J) are superposed on a static deflection test. The quasi-static deflection test was carried to complete perforation of the specimen. The loading paths of all the impact events follow the static curve quite well. Obviously, the dynamic response contains
vibrations, but the major features are still clearly distinguishable. The vibrations are caused by inertia forces in the early portion of the impact. The amplitude of these vibrations depends on the velocity and the mass of the plate; increasing the velocity will therefore increase the amplitude of the vibration. Because of these vibrations, more scatter would be expected in a dynamic test than in a quasi-static test. The difference between the static and dynamic responses shown in Figure 8 is within the scatter of the data.

Fig. 8. Comparison of the force-displacement curves of impact tests at increasing energy levels with the curve from a continuous quasi-static lateral loading test for circular plates of a 102mm diameter ([45/-45/0/90]3s – IM7/8552)

Fig. 9. Peak contact force against impact energy for tests on clamped circular plates of 102 mm diameter ([45/-45/0/90]3s – IM7/8552)

In Fig. 9, the quasi-static response is again indicated for the same static and impact events as in Fig. 8, where the peak contact force for each test is plotted as a function of impact energy. This is compared with the force against energy absorbed for a quasi-static test, the energy absorbed being calculated by integrating the area under the load-displacement curve. It can clearly be seen that at a given energy the peak force is in good agreement between the static and impact cases. The first fall in load, which occurs at an energy of approximately 14J on the static curve, is due to interior damage (delamination and matrix cracks) of the specimen without visual damage on either surface, as shown in Fig. 5 (a). The third fall in load, at an energy of approximately 26J on the static curve, is due to the initiation of tensile fibre fracture on the back face of the plate caused by the onset of penetration under the indentor (see Fig. 5 (b)). The impact data are in good agreement with the static curve for impact energies of 15J – 17J, where interior damage alone is detected without visual damage on either surface of the plate (see Fig. 10). At the higher impact energy of 18.7J (peak force 10.3kN), the data are also in reasonable agreement with the quasi-static curve (peak force 11.5kN), where fibre fracture on the bottom face is clearly visible. This is a very good indication that the damage mechanisms, from the point at which damage first initiates to the point at which the indentor has penetrated the plate, are similar for the quasi-static and impact loading considered in this study. In addition, the good agreement between the impact peak forces and the static load as a function of impact energy suggests that the damage areas should also be in good agreement between the static and dynamic test cases (see Fig. 10). Figure 10 shows C-scan images taken
from the static and impact tests at a peak force of approximately 10kN. It can be seen that the damage areas are similar regardless of the test method. These results were also confirmed by Sjoblom and Hartness [7], who performed quasi-static and impact tests on 48-ply quasi-isotropic AS-4/3502 circular plates of 123mm diameter. Both the statically tested and the impacted specimens showed a similar conical shape of damage under the indenter and similar damage areas. They suggested that rate effects on the failure behaviour are minor, and that any effect of elastic wave propagation through the thickness of the specimen is totally negligible for the typical contact times experienced in tests simulating dropped tools or runway debris kicked up during takeoff or landing.

Fig. 10. C-scan images taken from (a) static and (b) impact tests at a peak force of approximately 10 kN for the circular plates of 102 mm diameter ([45/-45/0/90]3s – IM7/8552)

3.4 CAI Strength Tests

Each circular plate was impacted with a known energy level of between 5 and 19J. An energy level between 5 and 16.8J was too insignificant to cause the test-piece to fail at the impact site; most data for fibre breakage were obtained between 17 and 19J. As explained in Section 3.2, impact damage induced in the plates has both local and global effects. The former consist of matrix cracking, fibre-matrix debonding and surface microbuckling, all surrounding the slightly dented impact contact area, whereas the latter consist of extensive internal delamination and fibre breakage. The damage results in lower matrix resin stiffness of the composite material and local changes in fibre curvature, which may contribute to the initiation of local compression failure by shear with a kink band. During the CAI testing, clear cracking sounds were heard around the damaged area, due to matrix cracking, fibre-matrix debonding, delamination and fibre breakage. As the applied load was increased, damage in the form of local buckling, like a crack, grew laterally from the impact damage region. In addition, delaminated regions continued to propagate, first in short discrete increments and then rapidly at the failure load. Examination of the failed specimens removed from the test fixture confirmed that the local delaminations extended completely across the specimen width but only a short distance in the axial direction, with a kink shear band through the laminate thickness (see Fig. 11 (b)). This pattern of damage growth [12-15] is similar to that observed in specimens with open holes under uniaxial compression, as described in Reference [16]. Fig. 11 (a) and (b) show a typical impacted specimen before and after the CAI strength test, imaged by X-ray radiography; specimen failure after the CAI strength test shows a fibre kink band shearing through the thickness. The residual compressive strengths and impact results are summarized in Table 2. In the table, the extent of damage caused by the impact was obtained from X-ray radiographs, as shown in Fig. 7 (b). The compressive strengths of unimpacted plain specimens and open hole specimens, measured to provide reference values, are also included in the table.
Fig. 11. Impacted CAI specimen (a) before and (b) after CAI strength test showing compression failure with a kink-band shear ([45/-45/0/90]3s – IM7/8552)
For the open hole specimens, the observed impact damage is replaced with an equivalent open hole. It can be seen that the residual strengths are reduced by up to 64% of the unimpacted compressive strength between energy levels of 17.8J and 18.7J. For the equivalent open hole specimens, the failure strengths (OHC) are in good agreement with the residual strengths (CAI); the difference is less than 10%. Soutis et al. [12, 15, 17] have also employed this strategy, which considers the impact damage site as an equivalent hole, to predict the CAI strength of different composite systems and lay-ups. They used damage widths measured from X-ray radiographs for the prediction, and their theoretical predictions are in good agreement with the experimental measurements.
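The agreement quoted above can be checked directly from the values in Table 2; a small script (numbers from the table, variable names ours):

```python
unimpacted = 685.0                                  # MPa, plain specimen (Table 2)
cai = {17.8: 280.0, 18.2: 243.0, 18.7: 242.0}       # residual strengths, MPa
ohc = {17.8: 271.0, 18.7: 229.0}                    # equivalent-hole strengths, MPa

for e, s in cai.items():
    print(f"{e} J: retention {s / unimpacted:.0%}, reduction {1 - s / unimpacted:.0%}")
for e in ohc:
    # relative CAI vs OHC difference, reported as below 10 %
    print(f"{e} J: CAI-OHC difference {abs(cai[e] - ohc[e]) / cai[e]:.1%}")
```

Running this gives reductions of about 59-65% and CAI/OHC differences of 3.2% and 5.4%, consistent with the statements above.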
4 Concluding Remarks

A series of quasi-static and dynamic tests was performed using 3mm thick circular plates ([45/-45/0/90]3s – IM7/8552) with a flat-ended impactor, to compare the static response with the dynamic response and to identify the damage patterns of each. In the dynamic tests, two different impactor masses were used with varying impact energy levels: an impactor mass of 1.58kg over the range of incident energies between 5J and 11J, and 5.52kg between 16J and 19J. During the quasi-static and impact testing, the development of damage was monitored using C-scan and X-ray radiography. Significant interior damage was detected at similar applied peak loads (10kN in the quasi-static test and 9.8kN in the impact test), prior to the initiation of tensile fibre damage on the tensile face of the plates under the indenter. In addition, it has been found that the damage areas for both tests are similar (see Fig. 10). From the investigation of damage patterns performed by Sjoblom and Hartness [7] using microscopy, both the statically tested and the impacted circular plate specimens show a similar conical shape of damage under the indenter. Comparison of the force-displacement responses obtained from both tests confirmed that the curves of several drop weight impacts with varying energy levels (between 5.4J and 18.7J) follow the static curve quite well (see Fig. 8). In addition, the peak contact force against impact energy, plotted against the force versus energy absorbed for a static test (see Fig. 9), showed that at a given energy the peak force is in good agreement between the static and impact cases.
Finally, CAI tests were conducted to determine the residual compressive strength of the plates impacted at energy levels between 17J and 19J. The failure behaviour of the specimens was very similar to that observed in laminated plates with open holes under compression loading. The residual strengths between impact energy levels of 17J and 19J varied from 280MPa to 242MPa, a reduction of up to 64% of the unimpacted compressive strength. The measured open hole compressive strengths were in good agreement with the residual strengths when the impact damage site was considered as an equivalent hole, the size of the hole being determined from X-radiograph images. The experimental results above indicate that the low velocity impact response of the plates tested in this study is close to quasi-static behaviour. This means that inertia effects are negligible and hence the plate response is the fundamental, or statically deflected, mode. It also indicates that the impact damage site can be modelled as an equivalent hole for CAI strength prediction. On the basis of these experimental findings, simple analytical models will be developed in future work to predict the impact damage area, the reduced elastic properties due to the impact load, and the CAI strength; the results obtained in this study will be compared with the predicted results. Acknowledgments. This work was carried out with the financial support of the Structural Materials Centre, QinetiQ, Farnborough, UK. The authors are grateful for many useful discussions with Professor G. A. O. Davies of the Department of Aeronautics, Imperial College London and Professor P. T. Curtis of the Defence Science and Technology Laboratory (DSTL), UK.
References [1] Whitehead R S (1985) ICAF National Review, Pisa 10-26 [2] Greszczuk L B (1982) Damage in Composite Panels due to Low Velocity Impact. In: Impact Dynamics, Ed. Zukas J A, J. Wiley [3] Liu D (1988) Impact-induced Delamination – a View of Bending Stiffness Mismatching. Journal of Composite Materials 22:674-692 [4] Zhou G and Davies G A O (1995) Impact Response of Thick Glass Fibre Reinforced Polyester Laminates. International Journal of Impact Engineering 16(3):357-374 [5] Guynn E G and O'Brien T K (1985) The Influence of Lay-up and Thickness on Composite Impact Damage and Compression Strength. Proc. 26th Structures, Structural Dynamics, Materials Conf., Orlando, FL 187-196 [6] Hitchen S A and Kemp R M (1994) The Effect of Stacking Sequence and Layer Thickness on the Compressive Behaviour of Carbon Composite Materials: Impact Damage and Compression after Impact. Technical Report 94003, Defence Research Agency, Farnborough [7] Sjoblom P O and Hartness J T (1988) On Low-Velocity Impact Testing of Composite Materials. Journal of Composite Materials 22:30-52 [8] Delfosse D and Poursartip A (1997) Energy-Based Approach to Impact Damage in CFRP Laminates. Composites Part A 28(7):647-655 [9] Watson S A (1994) The Modelling of Impact Damage in Kevlar-Reinforced Epoxy Composite Structures. PhD Thesis, University of London [10] Hou J (1998) Assessment of Low Velocity Impact Induced Damage on Laminated Composite Plates. PhD Thesis, University of Reading
[11] Hodgkinson J (2000) Mechanical Testing of Advanced Fibre Composites. Woodhead Publishing Ltd [12] Soutis C and Curtis P T (1996) Prediction of the Post-Impact Compressive Strength of CFRP Laminated Composites. Composites Science and Technology 56(6):677-684 [13] Zhou G (1996) Effect of Impact Damage on Residual Compressive Strength of Glass-Fibre Reinforced Polyester (GFRP) Laminates. Composite Structures 35(2):171-181 [14] Davies G A O, Hitchings D and Zhou G (1996) Impact Damage and Residual Strengths of Woven Fabric Glass/Polyester Laminates. Composites Part A 27(12):1147-1156 [15] Hawyes V J, Curtis P T and Soutis C (2001) Effect of Impact Damage on the Compressive Response of Composite Laminates. Composites Part A 32(9):1263-1270 [16] Soutis C, Lee J and Kong C (2002) Size Effect on Compressive Strength of T300/924C Carbon Fibre-Epoxy Laminates. Plastics, Rubber and Composites 31(8):364-370 [17] Soutis C, Smith F C and Matthews F L (2000) Predicting the Compressive Engineering Performance of Carbon Fibre-Reinforced Plastics. Composites Part A 31(6):531-536
Development of and Research on Energy-Saving Buildings in Korea Hyo-Soon Park1, Jae-Min Kim2, and Ji-Yeon Kim3 1
Energy Efficiency Research Department, Korea Institute of Energy Research, Daejeon, Korea [email protected] 2 Mechanical Engineering, University of Strathclyde, ESRU, Glasgow, United Kingdom 3 Architectural Engineering Department, Inha University, Incheon, Korea
Abstract. Korea has sparse energy reserves, and over 97% of the total energy it consumes is imported. Furthermore, fossil fuels comprise more than 80% of the total energy consumed in Korea, resulting in the emission of greenhouse gases like carbon dioxide, which contribute to global warming. The building sector is one of the major energy-consuming sectors in Korea: the energy consumption of buildings represents about 24% of the total national energy consumption, and it is on the rise due to the continued growth of the Korean economy. The energy use of buildings depends on a wide variety of interacting features that define the performance of the building system. Many research buildings that utilize several kinds of energy conservation technologies were constructed at KIER, in Taeduk Science Town, to provide feedback regarding the most effective energy conservation technologies. This paper introduces the energy conservation technologies of new residential houses, passive and active solar houses, super-low-energy office buildings, "green buildings," and "zero-energy houses," whose utilization will help protect the quality of the environment and conserve energy.
1 Introduction

In Korea, the amount of energy consumed, as shown in Table 1, was 173,584 ktoe as of 2006, of which building energy consumption accounts for 22.9% and transportation for 21.0%. The most recently developed energy technologies have the potential to promote the efficient use of energy, to enhance the country's energy self-sufficiency rate, and to prevent environmental pollution due to energy consumption. Since its establishment in 1977, the Korea Institute of Energy Research (KIER), a non-profit scientific research institute supported by the government, has been committed to research on energy conservation policy (the thermal insulation standard (K value), typical-energy-consumption criteria, the ESCO (Energy Service Company) business, and the building energy rating system) for the reduction of energy consumption, and to development in related energy conservation areas (i.e., alternative energy and various environmental technologies related to fossil energy use). With its accumulated technological capacity and professionalism, KIER will continue to contribute to addressing the country's many difficult energy-related problems, and will also concentrate its efforts on the research and development of new energy technologies for the 21st century.
Table 1. Final energy consumption by demand

                            Total      Industry   Building   Transportation
Energy Consumption [ktoe]   173,584    97,235     39,822     36,527
Share                       (100%)     (56.0%)    (22.9%)    (21.0%)
2 Energy Conservation in Residential Houses

2.1 Retrofitting of the Existing Detached Houses

Existing detached houses are being retrofitted for energy conservation. It is important to evaluate the energy-saving effect of retrofitting a non-insulated house, and to demonstrate the importance of retrofitting to home owners by presenting reliable data.
Fig. 1. Perspective of an existing detached test house
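The payback period reported below follows from a simple ratio of investment to annual saving; a minimal sketch, with entirely hypothetical cost figures (the paper reports none):

```python
def simple_payback_years(retrofit_cost, annual_heating_cost, saving_fraction):
    """Initial investment divided by the annual heating-cost saving."""
    return retrofit_cost / (annual_heating_cost * saving_fraction)

# With the measured 51.9 % heating-energy saving, an investment of roughly
# 2.6-3.1 times the pre-retrofit annual heating cost reproduces the
# reported five-to-six-year payback (costs in arbitrary units).
print(simple_payback_years(2.8, 1.0, 0.519))   # ~5.4 years
```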
The energy performance and consumption of a house were measured and analyzed after retrofitting, and a cost-benefit analysis was conducted for each of the retrofitting measures. The following results were obtained from the experiments conducted on the test house: the total reduction in the heating energy requirement achieved by retrofitting was 51.9%, and the payback period of the initial investment was estimated to be five to six years.

Table 2. Before-and-After Comparison

                     Pre-Retrofit [%]   Post-Retrofit [%]   Improvement [%]
Boiler Efficiency    79.8               85.2                5.4
Heating Efficiency   66.8               75.2                8.4
Piping Losses        13.1               10.0                3.1
Boiler Losses        20.2               14.8                5.4

2.2 New Residential Houses

Fig. 2. New residential houses (the former: A type, the other: B type)

The energy conservation problems of residential houses have been viewed macroscopically and microscopically in every detailed field related to the design, construction,
auditing, and maintenance of houses. In particular, studies on energy conservation in newly constructed houses clearly emphasize the need to consider energy savings from the initial stage of house design. The solar-saving fractions of the house are as follows:

– 43~46% (indoor temperature set at 18℃); and
– 39~42% (indoor temperature set at 20℃).

The modified heating load of the A-type experimental model house was reduced by 5.55% compared with the B-type house, due to heavy insulation. With the insulated door attached, the difference between the indoor and outdoor temperatures decreased by 1.58℃ compared with the case without the insulated door, and the difference between the daytime and nighttime indoor temperatures was smaller. The indoor temperature on the southern side was 1.29~1.71℃ higher than on the northern side.
Table 3. K-value comparison of test houses

           A-Type                                                       B-Type
Part       Insulation                             k-value [kcal/m²h℃]   Insulation                             k-value [kcal/m²h℃]
Ceiling    Styrofoam (50mm) + Glass wool (50mm)   0.246                 Styrofoam (25mm) + Glass wool (25mm)   0.399
Wall       Styrofoam (100mm)                      0.277                 Ureafoam (50mm)                        0.465
Floor      Ureafoam (100mm)                       0.338                 Styrofoam (50mm)                       0.346
Window     Pair glass                             2.483                 Pair glass                             2.483
In one room, the temperature at the upper level increased by 0.17~0.34℃ and that at the lower level decreased by 0.51~0.61℃, relative to the central level of the room. The annual boiler fuel consumption of the B-type experimental model house was about 1,889 ℓ (at 20℃) and 1,496 ℓ (at 18℃), and the natural air change rate was observed to be 0.3 times/h (B type).
Table 4. Heating loads and energy saving in each house (2-story house, area: 121.86m²)

                                Non-Insulated House   New Residential House (A-Type)   New Residential House (B-Type)
Hourly Heating Loads [kcal/h]   22,675                7,447.7                          8,591.7
Energy Saving Rate [%]          100 (reference)       67.1                             62.1
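The saving rates in Table 4 follow directly from the hourly heating loads; a short check (values from the table):

```python
base = 22_675.0                                   # kcal/h, non-insulated house
for name, load in {"A-Type": 7_447.7, "B-Type": 8_591.7}.items():
    print(f"{name}: saving rate {1 - load / base:.1%}")
# A-Type: 67.2%, B-Type: 62.1% -- matching the table within rounding.
```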
3 Development of a Model House Using Energy-Efficient Design Methods

Since 1980, the results of research have been used to establish energy-saving methods for residences. Model energy-efficient housing plans were prepared to demonstrate energy-efficient design methods to architects, clients, and constructors, and for nationwide dissemination.
Fig. 3. Model houses (left: A type, right: B type)
The objective annual heating consumption of dwellings and the thermal-comfort criteria of indoor environments in Seoul were also estimated. Model designs of energy-efficient residences, and their specifications, were made after investigating the applicability of current energy-saving methods in dwellings. The annual heating loads, annual heating energy consumption, and construction costs of these buildings were then estimated using the DOE-2 building energy analysis computer program for the Seoul climatic region. The objective annual heating load of the model houses for the Seoul climatic region (100 Mcal/m²·y) can be achieved even with a 50 mm insulation thickness in each building envelope. In addition, a thermal insulating material should be attached to the basement wall to prevent surface condensation in summer. The case study conducted in this research showed annual heating loads of 117.4 Mcal/m²·y and 116.0 Mcal/m²·y for the single-story and two-story residences, respectively.
Table 5. Comparison of annual heating loads 82.5m2
132m2
(A-Type)
(B-Type)
151.5
145.5
[Mcal/m2․y]
117.5
116.0
Energy Saving Rate [%]
22.5
20
Type Non Insulated House [Mcal/m2․y] Model House by the Energy Efficient Design Methods
4 Development of a Solar House

4.1 Development of a Passive Solar House

A passive solar house was developed, with priority on the development and application of passive technology for energy conservation in houses. The principles of the Passive System (Trombe-Michel Wall) are as follows:
– The thermal storage mass for the building is a south-facing wall of masonry or concrete with a glazed surface to reduce heat losses to the outside. Solar radiation falls on the wall and is absorbed by it, is transferred through the wall by conduction, and is then radiated and convected to the living spaces.
– Through the openings or vents at the top and bottom of the storage mass, hot air rises and enters the living space, drawing cooler room air through the lower vents back into the collector air space.
– The solar wall may be used as a solar chimney in summer, when the continued air movement exhausts hot air from the solar wall and draws in cooler air from the north side of the house for ventilation.

Fig. 4. A passive solar house
The thermal performances of passive solar systems were evaluated through actual experiments, and the problems that can arise in relation to their implementation were discussed. Computer simulation programs were also developed for the theoretical performance prediction of passive buildings. The criteria were prepared by examining all the existing design schemes and synthesizing the performance evaluation methods that have been developed up to the present time. The solar-saving fraction was found to be 27.3%.
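A minimal sketch of the solar-saving-fraction bookkeeping, assuming the common definition f = 1 − Q_aux/Q_ref (the paper does not state its definition explicitly):

```python
def solar_saving_fraction(q_aux, q_ref):
    """Share of the reference heating demand met by the solar system:
    f = 1 - Q_aux / Q_ref, where Q_aux is the auxiliary heating needed
    with the solar system and Q_ref the demand without it (assumed definition)."""
    return 1.0 - q_aux / q_ref

print(solar_saving_fraction(7.27, 10.0))   # 0.273, i.e. the reported 27.3%
```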
Table 6. Description of a passive solar house

No    Item                        Description of Installation
1-1   Living Room                 Direct Gain System
      Bed Room                    Trombe-Wall System
      Hall                        Day lighting
1-2   Insulation Thickness        Ceiling: 200mm, Wall: 100mm, Floor: 100mm
2     Heating Area                97.6m²
3     Building Structure          Masonry
      Storage Material            Cement Brick
4     Auxiliary Heating System    Oil Boiler + Floor Radiant Heating
5     Pay-Back Period             6 ~ 10 years
4.2 Development of an Active Solar House

The concepts of active solar systems are well known to the public. Past experience reveals, however, that this technology is not yet ready for massive commercialization in Korea, since it is not yet economical (it has a high initial investment) and it presents difficult maintenance problems. The 1981 project "Development and Improvement of the Active Solar-Heating System" was conducted as basic work to develop low-cost solar-heating systems, which are expected to provide economical solar utilization as a low-temperature source of energy for space or water heating. To improve the chances of attaining this objective, the work was divided into two parts: the software and the hardware aspects.

Fig. 6. Active solar house

The scope of these studies includes the following four major works:
– performance evaluation and formulation of a design method for liquid-type flat-plate solar collectors;
– utilization of a computer simulation program for designing active solar-heating systems;
– construction of a small-scale water-heating system and improvement of the existing active solar space-heating system for demonstration and performance evaluation; and
– preparation of a test procedure, an evaluation scheme, and criteria for installation/performance.

Table 7. Description of an Active Solar House

No   Item                           Description of Installation
1    Heating Area                   127m²
2    Use                            Space Heating & Water Heating
3    Collector Type                 Flat-Plate Type Liquid Collector
4    Collector Area                 24m²
5    Working Fluid                  Water (Drain-Down System)
6    Storage Tank Volume            2.4m³
7    Auxiliary Heating System       Oil Boiler
8    Annual Solar Saving Fraction   50 ~ 60%
5 Super-Low-Energy Office Building

Residential buildings represent more than 70% of all the buildings in Korea. From this point of view, the dissemination of newly developed technologies, and a demonstration project to prove their energy savings and other parameters, are essential. The project was carried out from July 1994 to December 1998. The focus of this study was the demonstration of developed technologies that are not yet used commercially in Korea, in order to promote them. The scope of the study was divided into two categories: research work to provide detailed data for the design, and discussion of the design and construction work with the project managers of six subprojects.

Fig. 7. Super-Low-Energy Office Building

The contents of the study were as follows:
- the design of a double-skin structure;
- a seasonal thermal storage system;
- cool-tube-related technologies;
- a co-generation system;
- a vacuum tube solar collector for heating and cooling; and
- a PV for building application.
The super-low-energy office building was constructed in November 1998, with three stories and an underground floor. It is of reinforced-concrete type, with a total floor area of 1,102m², and is now in test operation. A total of 74 kinds of elementary technologies were applied in the building: 23 kinds of energy-load-reduction methods through architectural planning, 35 kinds of mechanical-system-related technologies, and 16 kinds in electrical-system fields. The following are brief constructional descriptions of six of these major technologies.

Table 8. Comparison of major energy-saving tools

Major Technologies                                    Energy Savings [Mcal/m²·y]   Notes
Design of Double Skin Structure                       22.9                         Width: 1.5m, Height: 10.8m
Cool Tube Related Technologies                        13.0                         Length: 70m, Buried Depth: 3m, Pipe Diameter: 30cm
Small Scale Co-generation System                      56.4                         Effective Area: 265m²
Vacuum Tube Solar Collector for Heating and Cooling   26.0                         Effective Area: 50m²
Total                                                 106.3
6 Development of the Green Building

A "green building" is a building designed, constructed, operated, and eventually demolished so as to have minimum impact on the global and internal environment.
Fig. 8. Exterior and interior of Green building
Sustainable development is the challenge of meeting growing human needs for natural resources, industrial products, energy, food, transportation, shelter, and effective waste management, while conserving and protecting environmental quality and the natural-resource base essential for future life and development. The concept recognizes that long-term human needs cannot be met unless the earth's natural physical, chemical, and biological systems are also conserved. Adopting green-building technologies will not only decrease the energy consumption of buildings but also improve their environmental conditions. Such technologies include:

– double envelope on the south façade;
– an atrium for day lighting and natural ventilation;
– movable shading devices on the west façade;
– a gray-water recycling system;
– a rainwater collection system;
– an energy-efficient HVAC system;
– environmentally friendly building products;
– solar collectors on the roof; and
– solar cells.
7 Development of a Zero-Energy House

Both energy conservation and alternative-energy technologies must be applied in the construction of an energy-self-sufficient, zero-energy house. Zero-energy and solar technologies must be developed to overcome the energy crisis of the near future. This cannot be realized by the separate application of unit technologies, because a building is an integrated system of several energy conservation technologies. As such, related energy technologies, including solar energy, must be developed and gradually adopted, considering the installation cost. The objective of this project was to develop a demonstration house with 78% net-energy self-sufficiency within 3 years (January 2001 – December 2003), to commercialize it within 6 years, and to reach 100% self-sufficiency within 10 years.

Fig. 9. Zero Energy House

Starting in January 2001, we researched the integration of technologies such as super-insulation and air-tightness, solar collectors, and a heat-recovery ventilation system. The most important considerations in super-insulated thermal envelope design are low heat transfer, air leakage, and moisture damage. The heat transmission coefficient of the thermal envelope is under 0.15 kcal/m²h℃. An exterior insulation system was selected for the Zero Energy House. The air/vapor barrier should be installed
inside the insulation, and the best material is 0.03~0.05mm polyethylene film. At the joint between two sheets of polyethylene, the sheets should be overlapped by 50~150mm.
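A minimal sketch of the envelope heat-transmission check implied above, in the paper's units (kcal/m²h℃); the layer conductivities and surface resistances are illustrative assumptions, not values from the paper:

```python
def u_value(layers, r_si=0.14, r_se=0.05):
    """Heat transmission coefficient of a layered envelope,
    U = 1 / (R_si + sum(t_i / k_i) + R_se), in kcal/m2.h.degC.
    layers: list of (thickness m, conductivity kcal/m.h.degC) pairs."""
    return 1.0 / (r_si + r_se + sum(t / k for t, k in layers))

# e.g. 200 mm insulation (k ~ 0.03) plus 150 mm concrete (k ~ 1.4):
print(u_value([(0.200, 0.03), (0.150, 1.4)]))   # ~0.14, under the 0.15 target
```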
8 Conclusion The energy consumption of the building sector in Korea has been on the rise due to the growth of the Korean economy. At the same time, the demand for a comfortable indoor environment is also increasing. It is thus very important to consider not only building energy conservation but also IAQ (indoor air quality). Finally, by constructing, investigating, surveying, and testing experimental and model buildings, this study showed the criteria and measures for energy conservation in buildings (existing and new) and suggested the construction of “green buildings” to protect the environmental quality and the country’s natural-resource base, which are essential for future life and development.
Non-iterative MUSIC-Type Imaging Algorithm for Reconstructing Penetrable Thin Dielectric Inclusions Won-Kwang Park1, Habib Ammari2, and Dominique Lesselier1 1
Département de Recherche en Electromagnétisme Laboratoire des Signaux et Systèmes (CNRS-Supélec-Univ. Paris Sud 11), 3 rue Joliot-Curie, 91192 Gif-sur-Yvette cedex, France [email protected] 2 Centre de Mathématiques Appliquées (CNRS-Ecole Polytechnique), 91128 Palaiseau cedex, France
Abstract. We consider a non-iterative MUSIC-type imaging algorithm for reconstructing thin, curve-like penetrable inclusions in a two-dimensional homogeneous space. It is based on an appropriate asymptotic formula of the scattering amplitude. Operating at fixed nonzero frequency, it yields the shape of the inclusion from scattered fields in addition to estimates of the length of the supporting curve. Numerical implementation shows that it is a fast and efficient algorithm.
1 Introduction

From a physical point of view, an inverse scattering problem is the problem of determining unknown characteristics of an object (shape, internal constitution, etc.) from scattered field data. Throughout the literature, various algorithms for reconstructing an unknown object have been suggested, most based on Newton-type iteration schemes. Yet, for the successful application of these schemes, one needs a good initial guess, close enough to the unknown object; without one, one might suffer from large computational costs. Moreover, iterative schemes often require regularization terms that depend on the specific problem at hand. So, many authors have suggested non-iterative reconstruction algorithms, which at least could provide good initial guesses. Related works can be found in [2, 3, 4, 9, 11, 12]. In this contribution, we consider a non-iterative MUSIC-type algorithm for retrieving the shape and estimating the length of thin inclusions whose electric permittivities differ from that of the embedding (homogeneous) space. The application foreseen is the imaging of cracks (seen as thin screens) from electromagnetic measurements, formulated as an inverse scattering problem for the two-dimensional Helmholtz equation. In general, cracks have different electrical parameters from their surroundings, and it is not necessary to get their exact values; useful information about cracks might simply be their shape, with length as a by-product. The contribution is organized as follows. An asymptotic expansion of the scattering amplitude associated to penetrable thin inclusions is introduced by approaches in
harmony with those recently developed for small bounded inclusions in electromagnetics, e.g., [3] and [4]. This enables us to derive a MUSIC-type non-iterative imaging algorithm. Numerical simulations then illustrate its pros and cons. Let us notice that we consider a pure dielectric contrast between inclusions and embedding medium in the Transverse Magnetic (TM) polarization, yet the analysis can be extended to a pure magnetic case and combination cases as well, in both TM and TE (Transverse Electric) polarizations [12].
2 Asymptotic Expansion of the Scattering Amplitude

Let us consider the two-dimensional electromagnetic scattering by a thin penetrable inclusion, Γ, in a homogeneous space R². The inclusion, illustrated in Fig. 1, is curve-like, i.e., it lies in the neighborhood of a curve: Γ = {x + η n(x) : x ∈ σ, η ∈ (−h, h)}, where the supporting σ is a simple, smooth curve of finite length, n(x) is the unit normal to σ at x, and h is a positive constant which gives the thickness of the inclusion. All materials involved are non-magnetic (permeability μ ≡ 1) and are characterized by their dielectric permittivity at the frequency of operation ω; we define the piecewise constant permittivity 0 < ε(x) < +∞ and wavenumber k(x) as ε(x) = ε, k(x) = ω√ε in Γ, and ε(x) = ε₀, k(x) = k₀ = ω√ε₀ in R²\Γ.

Fig. 1. Illustration of a two-dimensional thin penetrable inclusion
Let u(x) be the time-harmonic total electric field, which satisfies the Helmholtz equation

Δu(x) + k²(x) u(x) = 0 for x ∈ R²   (1)
with standard continuity conditions at the inclusion boundaries and at infinity. As usual, u divides itself into the incident field u₀ and the scattered one u_s, u(x) = u₀(x) + u_s(x) for x ∈ R².
Let us notice that we set u₀(x) = e^{iθ·x} for x ∈ R², and that the unknown scattered field u_s(x) satisfies the Sommerfeld radiation condition

lim_{|x|→∞} √|x| (∂u_s/∂|x| − i k₀ u_s) = 0
uniformly in all directions x̂ = x/|x|. The scattered field u_s reads as

u_s(x) = ∫_Γ (k²(y) − k₀²) Φ(x, y) u(y) dy,
where Φ(x, y) = (i/4) H₀⁽¹⁾(k₀|x − y|) is the two-dimensional fundamental solution of the Helmholtz equation (the Green's function).
Assuming that {ŷ_j}_{j=1}^N ⊂ S¹ is a discrete finite set of observation directions and {θ_l}_{l=1}^N ⊂ S¹ the same number of incident directions, the scattering amplitude is the function K(ŷ, θ) which satisfies

u_s(y) = (e^{ik₀|y|}/√|y|) K(ŷ, θ) + o(|y|^{−1/2})

as |y| → ∞, uniformly on ŷ = y/|y|, letting θ = (θ_x, θ_y) be a two-dimensional vector
on the unit circle S¹ in R² (θ satisfies θ·θ = 1). From results of [4] and [7] and the asymptotic behavior of the Hankel function, the asymptotic formula for the scattering amplitude follows as

K(ŷ, θ) = (h k₀²(1+i)/(4√(k₀π))) ∫_σ (ε(x) − ε₀) e^{ik₀(θ−ŷ)·x} dσ(x) + o(h).   (3)
3 MUSIC-Type Imaging Algorithm

We now apply the asymptotic formula (3) to build up a MUSIC-type algorithm. We use the eigenvalue structure of the Multi-Static Response (MSR) matrix K = (K_jl), K_jl being the amplitude collected at observation number j for the l-th incident wave. We start from the assumption that σ can be partitioned into the set of points {x₁, x₂, ···, x_M}, the length of the interval [x_j, x_{j+1}] being |σ|/M for j = 1, 2, ···, M−1, this step size being a fraction of the wavelength (we will discuss the relation between M, the number of nonzero singular values, and the length of the curve). Let us exclude the asymptotic term o(h) and the constant factor h k₀²(1+i)/(4√(k₀π)) from
∈C is, for
formula (3). For each yˆ j = −θ j , the jl-th element of the MSR matrix Kjl j,l=1, 2,···, N
Matrix K can be rewritten using v_m = (e^{iωθ₁·x_m}, e^{iωθ₂·x_m}, ···, e^{iωθ_N·x_m})ᵀ as K = Σ_{m=1}^{M} c_m v_m v_mᵀ, with c_m = (|σ|/M)(ε(x_m) − ε₀).
The MSR matrix is symmetric, but not Hermitian (the Hermitian version would be K̃ = K̄K). K̃ is the frequency-domain version of a time-reversed MSR matrix [4].
Let us assume that we have enough measurement points, i.e., N > M. For any point z ∈ R², we define the vector g ∈ Cᴺ by g = (e^{iωθ₁·z}, e^{iωθ₂·z}, ···, e^{iωθ_N·z})ᵀ. It can be shown that there exists n₀ ∈ N such that for any n ≥ n₀ ≥ M [4]:
g ∈ Range(K̃) if and only if z ∈ {x₁, x₂, ···, x_M}.   (4)
The singular value decomposition of the matrix K reads K = VSWᵀ. Since the rank of K is M, the first M columns of V, {v₁, v₂, ···, v_M}, yield an orthonormal basis for the signal space of K. The next ones, {v_{M+1}, v_{M+2}, ···, v_N}, yield a basis for its null (or noise) space, with the corresponding projection

P_noise = Σ_{m=M+1}^{N} v_m v_m*.
From (4), an image of the x_j, j = 1, 2, ···, M, follows by computing

W(z) = 1 / |P_noise g(z)|.
Its plot should show high peaks at xj for j=1, 2, ···, M.
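To make the procedure concrete, here is a self-contained numerical sketch of the functional W(z) under the assumptions of this section (point samples on σ, uniform directions, noise-free data). The curve points, the contrast value, the 0.1 threshold used later in Section 4, and all names are illustrative, not the authors' implementation:

```python
import numpy as np

N, M, k0 = 20, 5, 2 * np.pi / 0.6          # directions, curve points, wavenumber
angles = 2 * np.pi * np.arange(N) / N
theta = np.column_stack([np.cos(angles), np.sin(angles)])   # N x 2 unit vectors

# Points x_m sampling the supporting curve sigma (illustrative shape).
xs = np.column_stack([np.linspace(-0.5, 0.5, M),
                      0.2 * np.sin(np.linspace(0, np.pi, M))])

def g(z):
    """Test vector g(z) = (exp(i k0 theta_l . z))_l, normalized."""
    v = np.exp(1j * k0 * theta @ z)
    return v / np.linalg.norm(v)

# MSR matrix K = sum_m c_m v_m v_m^T (symmetric, not Hermitian).
K = np.zeros((N, N), dtype=complex)
for x in xs:
    v = np.exp(1j * k0 * theta @ x)
    K += 4.0 * np.outer(v, v)              # c_m = eps - eps0 = 4, illustrative

U, s, _ = np.linalg.svd(K)
m_sig = int(np.sum(s / s[0] >= 0.1))       # signal-space dimension by threshold
noise = U[:, m_sig:]                       # orthonormal basis of the noise space

def W(z):
    return 1.0 / np.linalg.norm(noise.conj().T @ g(z))

grid = np.linspace(-1.0, 1.0, 101)
image = np.array([[W(np.array([gx, gy])) for gx in grid] for gy in grid])
print(image.max())                          # W(z) peaks near the points x_m
```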
4 Numerical Experiments

Illustrative numerical results are discussed in this section. We set the search domain Ω̃ = [−1,1] × [−1,1], and for each z ∈ Ω̃ the step size of z is taken of the order of 0.02; the thickness h of the thin inclusion Γ is 0.015, and the permittivities ε and ε₀ are 5 and 1, respectively (save for the case of two inclusions considered below). Since ε₀ = 1, the wavelength λ is such that ω = 2π/λ, and is set between a low-frequency value (λ = 1) and a high-frequency one (λ = 0.1). Two supporting curves σ_j, characteristic of the thin inclusions Γ_j, are chosen:
Illumination and observation directions θ_l are distributed uniformly on the unit circle S¹.
Upon computing the MSR matrix K by means of the asymptotic formula (3) with a Nyström solution method [6], the singular value decomposition K = VSWᵀ is performed via a standard Matlab subroutine. In contrast with the theoretical analysis, the computed singular values do not separate neatly into non-zero and zero sets, so we have to discriminate, via thresholding, the singular values significant of the signal subspace (and the vectors spanning it) from those associated with the noise subspace (and the vectors spanning it). Detailed explanation and examples are found in [12]. Several plots of the MUSIC cost functional are shown in Fig. 2 as a function of this threshold. Normalizing the singular values with respect to the one of maximum amplitude and keeping the singular values λj such that λj/λ1 ≥ 0.1 (here, this means that
M=5 singular values are signal, N−M=11 are noise) leads to suitable results: M=5 sharp peaks of close magnitude, regularly distributed along the true curve of the inclusion, emerge, from which its shape and location follow. Each such peak is separated from the next one by half a wavelength, here 0.3. So, an estimate of the length of the inclusion is obtained by counting the number of intervals between peaks, via the relationship

|σ| ≈ (M − 1) λ/2,
this remaining an estimate (the true length might not be a multiple of the half wavelength, here, it is 1.19).
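A one-line check of the estimate, with the numbers of this example (M = 5 peaks at λ = 0.6):

```python
n_peaks, wavelength = 5, 0.6
print((n_peaks - 1) * wavelength / 2)   # 1.2, against the true length 1.19
```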
Fig. 2. Maps of W(z), z ∈ Ω̃, assuming 3 (top left), 4 (top center), 5 (top right), 6 (bottom left), 7 (bottom center), and 8 (bottom right) singular values in the signal subspace
Fig. 3. Distributions of normalized singular values of matrix K (left column) and maps of W(z), z ∈ Ω̃ (right column), for N=15 incidences and a λ=0.5 wavelength (top row), and N=20 and λ=0.3 (bottom row), when the inclusion is Γ1
The quality of the images remains good even though one or even two additional singular values might result from the choice of a lower threshold. Here, choosing M=6 singular values, though the associated M=6 peaks are less sharp and less resembling one another than with M=5, still makes sense. Beyond, results worsen. Examples of the imaging of Γ1 as a function of the wavelength are shown in Fig. 3, the elements of K being collected for N=15 and N=20 at λ=0.5 and λ=0.3, respectively. With reference to Fig. 4 where Γ1 is imaged at 0.7 wavelength, with either N=10 or N=20 illuminations, a too small N might be improper (an isolated high peak emerges, not seen at N=20), though the threshold seems efficient, 4 well defined peaks being observed along the sought supporting curve. Complementary images of Γ2, at high frequency (λ=0.2) and at low frequency (λ=0.8), are displayed in Fig. 5. This shows how the spectrum enlarges, and the number of peaks grows (from 4 to 13) and their sharpness as well, when the frequency is increased, the λ/2 interval always separating the peaks.
Fig. 4. Map of W(z), z ∈ Ω̃, for N=10 (left) and N=20 (right) incidences with a 0.7 wavelength when the inclusion is Γ1
Fig. 5. Distributions of normalized singular values of matrix K (left column) and maps of W(z), z ∈ Ω̃ (right column), for N=10 incidences and a λ=0.8 wavelength (top row), and for N=32 and λ=0.2 (bottom row), when the inclusion is Γ2
Fig. 6. Map of W(z), z ∈ Ω̃, for ε1=ε2=5 (top left), ε1=5, ε2=3 (top right), ε1=10, ε2=3 (bottom left) and ε1=5, ε2=0.5 (bottom right), for N=28 incidences and a 0.7 wavelength, when the thin inclusion is Γ (made of the two inclusions)
Fig. 7. Shape of Γ3 (left), and map of W(z), z ∈ Ω̃, for N=36 incidences and a 0.6 wavelength (center), and for N=64 incidences and a 0.2 wavelength (right)
Fig. 8. Shape of Γ4 (left), and map of W(z), z ∈ Ω̃, for N=96 incidences and a 0.2 wavelength (center), and for N=164 incidences and a 0.1 wavelength (right)
The algorithm can be extended to two (or more) inclusions. Let us refer to Fig. 6, with Γ2 the one previously introduced together with a (moved, mirrored) copy of it, the respective permittivities being ε1 and ε2, letting Γ = Γ1 ∪ Γ2. The MSR matrix is collected for N=28 at λ=0.5. In contrast with the single inclusion case, if an inclusion has a much smaller value of permittivity than the other, it does not significantly affect the MSR
matrix and cannot be retrieved. Here, the scattered field is the first term of an asymptotic formula with respect to the thickness of the inclusions, coupling between inclusions impacting higher-order terms. The algorithm also applies to an oscillating inclusion. The configuration is the same as previously, save the search domain Ω̃ = [−1.5, 1.5] × [−1.5, 1.5]. Two σj are chosen for illustration:
Let us consider Γ3. Typical results are in Fig. 7 at λ=0.6 and λ=0.2. Imaging is rather coarse at the longer wavelength and better at the shorter one. The thresholding looks convenient, whilst a high value of N must be taken, as expected. For a more oscillating inclusion, the results are not as good, as seen in Fig. 8. Nevertheless, a good initial guess is still obtained at low computational cost, to be improved upon by an appropriate iterative algorithm.
5 Concluding Remarks
The applicability of a MUSIC-type algorithm has been investigated for imaging thin, penetrable, curve-like inclusions. Results (not given) show that this approach works as well for impenetrable screens [10], the mathematical formulation requiring further investigation, and that it can be applied to penetrable inclusions buried in a half-space. Such non-iterative imaging of low computational cost could also provide initial guesses for a level-set evolution [1] or a Newton-type algorithm [8]. Finally, it is expected to carry over to supporting surfaces in the full 3-D framework of vector scattering.
References
[1] D Alvarez, O Dorn, M Moscoso (2006) Reconstructing thin shapes from boundary electrical measurements with level sets, Int. J. Informat. Systems, 1:1-14
[2] H Ammari, E Iakovleva, D Lesselier (2005) A MUSIC algorithm for locating small inclusions buried in a half-space from the scattering amplitude at a fixed frequency, SIAM Multiscale Model. Simulat. 3:597-628
[3] H Ammari, E Iakovleva, D Lesselier, G Perrusson (2007) MUSIC-type electromagnetic imaging of a collection of small three-dimensional inclusions, SIAM J. Sci. Comput. 29:674-709
[4] H Ammari and H Kang (2004) Reconstruction of Small Inhomogeneities from Boundary Measurements, Lecture Notes in Mathematics, Volume 1846, Springer-Verlag, Berlin
[5] M Cheney (2001) The linear sampling method and the MUSIC algorithm, Inverse Problems, 17:591-595
[6] D Colton and R Kress (1998) Inverse Acoustic and Electromagnetic Scattering Theory, Springer-Verlag, New York
[7] Y Capdeboscq and M S Vogelius (2008) Imagerie électromagnétique de petites inhomogénéités, ESAIM: Proc. 22:40-51
[8] R Kress (1995) Inverse scattering from an open arc, Math. Methods Appl. Sci. 18:267-293
[9] O Kwon, J K Seo, J R Yoon (2002) A real-time algorithm for the location search of discontinuous conductivities with one measurement, Comm. Pure Appl. Math. 55:1-29
[10] Z T Nazarchuk (1994) Singular Integral Equations in Diffraction Theory, Karpenko Physicomechanical Institute, Ukrainian Academy of Sciences, Lviv
[11] W K Park, H Ammari, D Lesselier (2008) On the imaging of two-dimensional thin inclusions by a MUSIC-type algorithm from boundary measurements, 13th International Workshop on Electromagnetic Nondestructive Evaluation (ENDE), Seoul, June 2008
[12] W K Park and D Lesselier, MUSIC-type, non-iterative imaging of a thin penetrable inclusion from its far-field multi-static response matrix, preprint
Network Based Service Robot for Education
Kyung Chul Shin, Naveen Kuppuswamy, and Hyun Chul Jung
Yujin Robot Co. Ltd.
[email protected]
Abstract. The usage of robotic technology for human services is increasingly becoming a reality. Despite rapid advances, few robots are practically in use in the human living environment. The service robot iRobiQ is a ubiquitous robot which can deliver a wealth of edited contents through the ubiquitous network and has many features enabling intelligent interaction with young children and adults. iRobiQ has been used in the kindergarten environment to support teachers and to play with children through book reading, singing, dancing, games, etc. iRobiQ is equipped with several IT technologies and has outstanding features for human-robot interaction, thus serving as a constant source of attraction for children. In this paper, some of the features of iRobiQ and the network-based service robot paradigm are presented, along with their applicability to the education of children.
1 Introduction
Rapid advances in computer and network technology in recent times have hastened the dawn of the ubiquitous era, typically characterized by objects and devices that are fully networked. Such an environment can ideally be exploited by highly advanced robots to provide us with a variety of services. Ubiquitous computing (UC), coined by Mark Weiser [3], motivated a paradigm shift in computer technology in terms of the relationship between technology and human beings. This shift has hastened the ubiquitous revolution, which has further manifested itself in the new multidisciplinary research area of ubiquitous network robotics [2]. Service robot technology has progressed greatly since its early days to establish itself as a distinctive research area. Service robots may be broadly classified as those with human-service-oriented applications, as opposed to industrial robots, whose application domain lies solely in industry. This broad classification allows various applications to be developed for service robots, but it also poses a wide variety of challenges for their design and development. The magnitude of these challenges has been a limiting factor in the widespread adoption of service robots in our daily lives. Thus, despite many years of research and development, few if any successful service robot products are currently commercially available, whether for various service applications or for end users in homes, schools and offices. Ubiquitous systems have a close symbiosis with the prevalence of networks. The ubiquitous network robot concept allows service robots to harness the on-hand information availability of ubiquitous networks together with the capabilities of a mobile robot platform. Coupling this with the human-service-oriented design features of typical service robots, such as HRI capabilities and hardware for providing
different kinds of dedicated services, results in a networked service robot system with superior capabilities. The ubiquitous network thus allows service robot development to transcend the traditional barriers holding back research, development, usage and penetration. Yujin Robot Co. Ltd., located in Seoul, Republic of Korea, has been in the business of robot development since 1988. It specializes in the development of intelligent robots and has achieved recognition through awards and commendations from both the Government of Korea and various professional bodies. The robot iRobiQ, as mentioned earlier, is a network service robot designed and developed by Yujin Robot Co. Ltd. for applications at home and in education. It features a number of advanced HRI capabilities designed to make it an ideal companion at home and school. The commercial strategy behind iRobiQ, encouraging third-party development of various contents, has also motivated the development of a complete SDK (Software Development Kit) for harnessing the utility of iRobiQ. Indeed, iRobiQ is more than the physical robot itself. It is a medium that provides a new form of active service by combining the mobility of the robot and the interactivity of HRI technology with the contents developed for various dedicated applications. The advantage of this strategy is not just envisioned in various business and marketing strategies. Rather, it tackles the difficulty of service robot development by decoupling the predominantly hardware-based issues faced by the robot manufacturer from the dedicated application issues faced by domain developers. Thus the robot becomes a form of media in which different contents may be offered for various applications. The difference between this paradigm and conventional forms of media is that the robot is no longer a passive, immobile medium: empowered by HRI, it is highly interactive, and empowered by the mobility of the basic robot platform, it can transcend physical barriers. One application domain increasingly under focus is education. Educational researchers have long looked for ways to improve upon existing teaching strategies. This assumes special relevance for the education of young children, especially in domains such as language education, which is a vital requirement in today's globalized environment. Education through various online/computer-based media has also been trialed in some places. This paper discusses how the networked service robot paradigm, as envisioned in iRobiQ, can result in an effective service robot for educational applications. The paper is organized as follows. Section II presents the technical features and some hardware and software characteristics of the iRobiQ platform. Section III discusses the network-based service robot paradigm and iRobiQ's contents distribution model. Section IV presents the educational applications, followed by the results and conclusions.
2 Technical Features and Characteristics of iRobiQ
This section presents the technical features of the iRobiQ networked service robot platform. Many aspects of the design critically consider the problems inherent in service robot development. Another crucial aspect of the design is end-user safety and ease of use. The robot iRobiQ can be seen in Fig. 1.
A. Hardware Characteristics
The iRobiQ is small in size, measuring 45x32x32cm and weighing 7kg. The robot platform is capable of self-navigation to any number of landmark targets and can intelligently avoid obstacles in its path. It has a Celeron-based internal computer for running its software and can store up to 40GB of information. Physically, iRobiQ has 2 arms which are used mainly for emotional interaction and gesturing in various HRI services. The head contains a camera which can be used for different kinds of vision-based HRI; the head itself includes pan-tilt capabilities. It features LCD-based eye units with LEDs in the mouth region which, coupled together, can convey a variety of fixed emotional responses such as happiness, sadness, etc. iRobiQ is also equipped with a number of touch sensors at different locations on its body; this enables the programming of realistic responses when users pat, tap, touch, or nudge the robot. The robot also has a touch-screen LCD display on its body, which displays a main menu and serves as its multimedia interface. It can be used for playing media such as music, movies and interactive games.
B. Software Characteristics
The robot iRobiQ being primarily a service robot for the home, HRI is crucial to its successful application. The software of iRobiQ runs on the embedded Windows XP operating system, which allows the easy porting of many existing contents and services. The various contents and services may access any of the functionality of the robot through the SDK (Software Development Kit). The iRobiQ software also features a dedicated HRI (Human-Robot Interaction) module. There are basically two kinds of HRI: one through the speech and voice capabilities of the voice engine, and the other through the vision and recognition features of the vision engine. Many end users expect speech-based capabilities in a home robot system. The speech and voice capabilities of iRobiQ include voice recognition, name-call recognition, sound source recognition, detection of and response to clapping sounds, and voice synthesis (TTS). Moreover, the voice engine is multilingual in design and currently supports Korean, English and Chinese. An interesting feature is the voice and speech synthesis module (TTS module): depending upon the personality being aimed for, the robot may use a child or adult female voice. For interacting visually with iRobiQ, the vision engine has been designed with the following functionality: face detection, face recognition, object recognition, and gesture recognition.
Fig. 1. iRobiQ – the network based service robot
The vision engine utilizes a USB camera in iRobiQ's forehead. This enables the vision engine responses to be anthropomorphic and realistic, since the field of vision depends on the head position.
3 Network Based Service Robot Concept
Despite the presence of a number of features conducive to the development of advanced service applications, a limiting factor is the time and dedication required for application service development. The SDK based on the iRobiQ business model permits the decoupling needed to optimise this task. The dedicated application may be concentrated upon by the application service developers, leaving the robot development to its original manufacturer, in this case Yujin Robot.
A. Contents Provider and Server-based Distribution Model
The development of an SDK allows third-party development of contents. The SDK of iRobiQ allows access to all of the functionality of the robot (hardware as well as software modules). The SDK is based on both a Win32 API for Windows developers and ActionScript for Flash-based content developers, using Adobe Flash Pro 8. The contents providers can thus concentrate on the applications at hand and need not worry about the actual implementation lying underneath. The distribution model for iRobiQ uses a contents distribution server, to which contents developers upload the various developed contents. The different kinds of services include:
• Basic Services – task scheduling, photo, video database information
• Information Services – news, weather and cooking information
• Security Services – home monitoring
• Education Services – storytelling, sing-along, word train
• Entertainment Services – karaoke, English games, media player
If contents developers wish to develop any other specific applications, they may do so using the SDK or ActionScript. The server-based distribution model allows end
Fig. 2. Contents Design Process and Simulation Screenshot
users to easily download and try out different contents and services. This helps in the easy distribution of advanced, well-designed contents, and is the crux of the network-based service robot paradigm of which iRobiQ is an effective example. A screenshot of the contents design process can be seen in Fig. 2.
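Purely as an illustration of this download-and-run model, a client-side flow could look like the sketch below; the server address, file names and functions are hypothetical inventions for illustration, not part of the iRobiQ SDK.

```python
# Hypothetical sketch of the server-based contents distribution model:
# the robot fetches a catalogue from a contents server and downloads a
# selected content package. All endpoints and fields are illustrative.
import json
import urllib.request

SERVER = "http://contents.example.com"        # placeholder address

def list_contents():
    """Fetch the catalogue of available contents, e.g. education services."""
    with urllib.request.urlopen(f"{SERVER}/catalogue.json") as response:
        return json.load(response)            # e.g. [{"id": "word-train", ...}]

def download(content_id, destination):
    """Download one content package for local installation on the robot."""
    urllib.request.urlretrieve(f"{SERVER}/packages/{content_id}.zip",
                               destination)
```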
4 Education Services of iRobiQ at the Kindergarten
Educational applications have been the recent focus of the development of iRobiQ, particularly in the domain of language teaching for children. This domain has assumed increasing importance owing to the multilingual requirements of the globalised world. In this regard, iRobiQ is ideally placed to leverage the development of language applications. The development of teaching contents closely coupled with HRI-based interaction allows a new level of interactive education, which can lead to better results in the classroom. The educational services of iRobiQ are targeted at the kindergarten level, since children in that age group are the most impressionable and can therefore benefit most from interactive educational services. The main categories of kindergarten services include:
• Storytelling – for learning stories
• Sing-along – English song karaoke
• Word Train – English phrases and words
A screenshot of the educational services can be seen in Fig. 3. The educational contents and services are designed to encourage interaction with children. This allows iRobiQ to transcend the physical barriers that limit conventional passive educational media such as books or passive educational videos. Some forms of interaction include touch and interactive dance. The SDK also allows complex interactions involving vision and voice HRI.
Fig. 3. Screenshot of various Educational Contents
The interaction of iRobiQ with children can be seen in Fig. 5. Hyun et al. [1], along with a team of researchers, have recently performed extensive experiments on the applicability of iRobiQ to language instruction programming for young children. The tests showed significant improvement in the story-making, story-understanding and word-recognition abilities of young children, and furthermore the children were observed to demonstrate a large degree of adaptive behavior when interacting with the LCD screen using touch. Their results on the measurement of adaptive behavior in children over a 4-week period are reproduced here in Fig. 4 for reference. Among the conclusions of that research are that the usage of iRobiQ educational contents allowing bi-directional interaction improved the children's linguistic ability, and that the intelligent robot medium allows adaptive and active behavior. Children were observed not only to actively interact and adapt to the programme's structure, but also to speak to and touch the robot in an act of increasing familiarity. Intelligent robots using bi-directional interaction thus improved the children's linguistic abilities, especially in regard to story making, story understanding, and word recognition.
Fig. 4. Increase in adaptive behavior in children over a 4-week period of the educational service program (reproduced with permission from [1])
Fig. 5. iRobiQ interaction with children in the kindergarten
5 Conclusion
This paper presented some of the features and capabilities of the iRobiQ service robot. The network service robot paradigm was presented and analysed along with its advantages. The applicability of the iRobiQ platform for various services was discussed through its hardware and software capabilities. Finally, the application of this multi-faceted platform to educational services was presented in the form of kindergarten educational contents, and results from actual kindergarten education programs were reported. It is not difficult to envision a future where advanced intelligent service robots like iRobiQ will enable us to improve our quality of life.
References
[1] EJ Hyun, SY Kim, S Jang and S Park (2008) Comparative study of effects of language instruction programming using intelligence robot and multimedia on the linguistic ability of young children. IEEE International Symposium on Robot and Human Interactive Communication, Munich, Germany (accepted for publication)
[2] JH Kim, KH Lee, YD Kim, NS Kuppuswamy and J Jo (2007) Ubiquitous Robot: A new paradigm for integrated services. IEEE International Conference on Robotics and Automation, Italy
[3] M Weiser (1993) Some computer science issues in ubiquitous computing. Communications of the ACM, vol. 36, no. 7, pp. 75-84
Design of Hot Stamping Tools and Blanking Strategies of Ultra High Strength Steels
Hyunwoo So and Hartmut Hoffmann
Technische Universität München, Department of Mechanical Engineering, Munich, Germany
[email protected]
Abstract. In recent years, the automotive industry has been making an effort to reduce vehicle weight in order to reduce CO2 emissions while maintaining the high strength of vehicle components. For lightweight construction, the demand for ultra high strength steels has drastically increased. To increase formability and reduce the high springback, hot stamping of ultra high strength steel is becoming more popular in the automotive industry. In hot stamping, blanks are hot formed and press hardened in a water-cooled tool to achieve high strength. Hence, the design of the tool with an active cooling system significantly influences the final properties of the blank and the process time. However, the achieved high strength causes severe tool wear when blanking the hardened parts in serial production with conventional mechanical cutting methods. Because of the high costs of repairing the blanking tools, laser cutting has had to be used in almost all automotive plants as an alternative, in spite of its high costs and long process time. In this paper, a systematic design method for hot stamping tools is introduced. Subsequently, some of the latest results on the blanking of hardened steels after hot stamping are presented.
1 Introduction
Nowadays, weight reduction while maintaining safety standards is strongly emphasized in the automotive industry. To meet customers' demands for more powerful but safer vehicles and to simultaneously reduce CO2 emissions, more and more car components made of high and ultra high strength steels have been applied. But the use of high strength steels leads to some disadvantages, such as low formability and high springback at room temperature. To resolve these problems, a new forming technology called the direct hot stamping process has consequently been developed for quenchable boron-alloyed steels such as 22MnB5; it combines hot forming and subsequent quenching in one tool and in one process step, as seen in Fig. 1. After hot forming and quenching of the blank, some further manufacturing processes, such as trimming and punching, are needed to complete the final product. The flange remaining after hot stamping is sheared by trimming, and the punching process is used mainly to prepare holes for assembly. However, the press hardened parts exhibit a very high strength of about 1600MPa and a very high hardness of about 50 HRC after hot stamping. These properties cause severe tool wear during trimming and punching of the press hardened parts with conventional mechanical blanking methods. As a consequence, they cause expensive tool repair
Fig. 1. Direct hot stamping process: austenitization → transfer → forming & quenching → transfer → cutting
costs in serial production. Hence, laser cutting has had to be used as an alternative in almost all automotive plants, in spite of its high costs and long process time. This paper nonetheless presents some experimental results with conventional mechanical blanking methods, because research activities in the area of sheet metal blanking of hardened boron-alloyed steels have not been sufficiently carried out. Besides, experimental studies at UTG (Institute of Metal Forming and Casting, Technische Universität München, Germany) show that the experience gained from steel sheet metal blanking cannot simply be transferred to press hardened boron-alloyed steels. Therefore, the demand for research on the blanking of press hardened boron-alloyed steels remains high. In this research, both experimental and finite element analyses are carried out with predefined process parameters, which represent different blanking conditions with different blanking clearances and different blanking angles.
2 Design of Hot Stamping Tools with Cooling System
2.1 Motivation
To enable an economical production procedure and good characteristics of the formed parts, hot stamping tools need to be designed optimally. Therefore, the main objective of this chapter is the optimal design of an economical cooling system in hot stamping tools to obtain an efficient cooling rate in the tool. So far, very little research has been conducted regarding the design of cooling systems in hot stamping tools; an advanced design method is therefore required. Also, an adequate simulation model is required to perform the optimization and investigation of tools and products as quickly and accurately as possible.
2.2 Characteristics of 22MnB5
In the direct hot forming process, the boron-manganese-alloyed steel 22MnB5 is commonly used; it is one of the representative ultra high strength steels. Therefore, in this study, aluminum pre-coated 22MnB5 sheet (Arcelor's USIBOR) was considered as the blank material. The material 22MnB5 has a ferritic-pearlitic microstructure with a tensile strength of approximately 600MPa in the delivery state. After hot stamping, the part has a martensitic microstructure and the tensile strength is significantly increased. The higher tensile strength is achieved by rapid cooling at a rate of at least 27°C/s [1, 2]. The initial sheet of 22MnB5
must be austenitized before the forming process to achieve sufficient ductility of the blank sheet. As the austenite cools very quickly during the quenching process, martensite transformation occurs. This martensitic microstructure provides the hardened final product with a high tensile strength of up to 1600MPa.
Fig. 2. Material properties and microstructures of 22MnB5 by press hardening process (ultimate strain [%] versus ultimate tensile strength [MPa]; the press hardening process, with a cooling rate > 27 K/s, transforms the ferrite-pearlite microstructure of 22MnB5 via austenite into the martensite of hardened 22MnB5)
2.3 Design of Cooling Systems
The schematic of the prototype hot stamping tool, and the initial blank and the proposed test part, are shown in Fig. 3 (a) and (b), respectively.
Fig. 3. Tool components and test part: (a) schematic of the test tool (plunger, faceplate, distance bolts, counter punch, die, blank, blank holder, punch, barrels, table); (b) initial blank (initial thickness: 1.75mm) and drawn part (draw depth: 30mm)
The tool must be designed to cool efficiently in order to achieve the required cooling rate of the hot stamped part. Hence, a cooling system needs to be integrated into the tools. A cooling system with cooling ducts near the tool contour is currently well known as an efficient solution. However, the geometry of the cooling ducts is restricted due to drilling constraints; moreover, the ducts should be placed as near as possible to the contour for efficient cooling, but sufficiently far from it to avoid any deformation of the tool during the hot forming process. To guarantee good characteristics of
Fig. 4. Optimization procedure for each tool. Constraints: boring (position, sealing plug) and minimum distances between loaded contour and cooling duct (x), between unloaded contour and cooling duct (a), and between cooling ducts (s). Optimization (Evolutionary Algorithm): input parameters of the cooling system (number of cooling channels and coolant bores, diameter of cooling duct) and evaluation criteria (cooling intensity and uniform cooling). Solution: one solution per given input → separate optimization
the drawn part, all the active parts of the tool (punch, die, blank holder and counter punch) need to be designed to cool sufficiently. The optimization procedure for the design of a cooling system is presented in Fig. 4. In this procedure, the cooling channels are optimized in each tool by a specific Evolutionary Algorithm (EA), which was developed at ISF (Institut für Spanende Fertigung, Universität Dortmund, Germany) for the optimization of injection molding tools and adapted for the design of cooling systems in hot stamping tools [3, 4]. As constraints for the optimization, the available sizes of connectors and sealing plugs, the minimum wall thicknesses and the non-intersection of drill holes were considered. The admissible minimal distance between a cooling duct and the unloaded/loaded tool contour (a/x) and the minimal distance between cooling ducts (s) were determined through FE analyses. Parameters of the cooling system, such as the number of channels (a chain of sequential drill holes), the drill holes per channel and the diameter of the holes for each tool component, were also provided as input parameters to the optimization. These input parameters can be obtained from existing design guidelines or through FE simulations. Based on the input parameters, an initial solution is generated randomly by the EA or manually by the user. From the initial solution, the EA generates new solutions by recombining current solutions and modifying them randomly. The defined constraints are subsequently used for the correction of the generated solutions and the elimination of inadmissible ones. All the generated solutions are evaluated by optimality criteria such as efficient cooling rate and uniform cooling. Finally, the best solution is selected as the optimized cooling channels for the selected tool component.
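The generate–constrain–evaluate loop just described can be summarized by the following minimal sketch. The geometry representation (duct layouts sampled as point clouds), all function names, and the mutate/recombine/score operators are assumptions for illustration; this is not the ISF implementation.

```python
# Minimal sketch of the EA loop described above (illustration only).
import math
import random

def min_dist(pts_a, pts_b):
    """Smallest pairwise distance between two point clouds."""
    return min(math.dist(p, q) for p in pts_a for q in pts_b)

def admissible(duct_pts, loaded_pts, unloaded_pts, other_ducts, x, a, s):
    """Minimum-distance constraints of Fig. 4 (x, a, s)."""
    return (min_dist(duct_pts, loaded_pts) >= x
            and min_dist(duct_pts, unloaded_pts) >= a
            and all(min_dist(duct_pts, d) >= s for d in other_ducts))

def evolve(initial, recombine, mutate, feasible, score,
           generations=200, pop_size=20):
    """Recombine and mutate solutions, eliminate inadmissible ones,
    keep the best by the cooling-intensity/uniformity score."""
    population = [c for c in initial if feasible(c)]
    for _ in range(generations):
        offspring = [mutate(recombine(random.choice(population),
                                      random.choice(population)))
                     for _ in range(pop_size)]
        population += [c for c in offspring if feasible(c)]
        population.sort(key=score, reverse=True)
        population = population[:pop_size]
    return population[0]
```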
Fig. 5. Optimized cooling channels with 8mm duct diameter
In our research, the selected duct diameters were 8mm and 12mm for the punch, 8mm, 12mm and 16mm for the die, 8mm and 10mm for the counter punch and 8mm for the blank holder. The EA was used to place the cooling channels optimally according to the given input and constraints for each tool component. The optimized channel profiles for a duct diameter of 8mm are shown in Fig. 5.
2.5 Evaluation of the Optimum Cooling Channel Designs
The design of the cooling channels was generated by the EA for each tool component with different bore diameters, and their cooling performances were evaluated using FE simulations [5]. In the design and development phase of hot stamping tools, it is important to estimate the hot stamping process qualitatively and quantitatively within a short time for the economic manufacturing of tools. For this purpose, two transient thermal simulations were carried out with ABAQUS/Standard, which uses an implicit method. In this analysis, steel 1.2379 was selected as the tool material.
Table 1. Combinations of designed tools for FE analysis
        diameter of cooling duct
        punch    counter punch    die      blank holder
V1      8mm      8mm              8mm      8mm
V2      12mm     10mm             16mm     8mm
The simulation model comprises 4 tool components: punch, die, blank holder and counter punch. In Table 1, the selected combinations of tool components with optimized cooling channels are presented. Variant V1 is the combination of optimized tools with small cooling duct diameters, whereas variant V2 uses large cooling duct diameters. In order to represent a series of production processes, a number of cycles of the hot stamping process were simulated as a cyclic heat transfer analysis. Fig. 6 shows the FE model including the boundary conditions.
Fig. 6. FE model and boundary conditions
The hot forming process for the prototype part was designed such that the cycle time is 30sec. In a cycle, the punch movement for forming requires 3sec, the tool is closed for 17sec for quenching the blank, and another 10sec are needed for opening the tool and locating the next blank on the tool. However, in the thermal analysis, the tool motion and the deformation of the blank were not considered, in order to reduce the computation time. Hence, only a heat transfer analysis was performed in a closed tool. In the thermal analysis, the quenching process takes 20sec instead of 17sec, because the motion of the punch was not considered. It was assumed that the blank has an initially homogeneous temperature (Tb,0) of 850°C, due to free cooling from 950°C during the transfer in the environment [6]. The initial tool temperature (Tt,0) was assumed to be 20°C at the first cycle; it changes from cycle to cycle. The temperature of the cooling medium (Tc) was assumed to be room temperature. Besides the boundary conditions, the required material properties of 22MnB5 were obtained from hot tensile tests conducted at LFT (Lehrstuhl für Fertigungstechnologie, Universität Erlangen-Nürnberg, Germany), with whom joint research on hot stamping is being conducted. In this analysis, convection from blank and tools to the environment (he), conduction within each tool, convection from tool into cooling channels (hc) and heat transfer from hot blank to tool (αc) were considered. Here, αc is the contact heat transfer coefficient (CHTC), which describes the amount of heat flux from the blank into the tools. This coefficient usually depends on the gap d between tool and blank and on the contact pressure P; it usually increases as the contact pressure increases. However, in the thermal analysis a pressure-dependent CHTC was not available, so a gap-dependent coefficient was used. The CHTC was assumed to be 5000W/m2°C [7] at zero distance between blank and tool (gap) and is kept constant until the gap increases beyond a critical value.
2.6 Simulation Results and Discussion
Fig. 7 shows the temperature changes in the tool components for 10 cycles for tool combinations V1 and V2.
Fig. 7. Temperature changes in the heat transfer analysis (tool temperature T [°C] versus time t [s] for die and punch, variants V1 and V2)
The results show that the hottest temperatures of the tools at the end of each cycle hardly change after a few cycles. The obtained cooling rates of the blank at the hottest point, from 850°C to 170°C, are 40°C/s with V1 and 33°C/s with V2 at the 10th cycle; both are greater than the required minimum cooling rate of 27°C/s. Furthermore, V1 leads to a more efficient cooling performance than V2. The better cooling performance of V1 compared to V2 can be explained by the geometric restrictions and the minimal wall thickness: a cooling duct with a small diameter can be placed closer to the tool surface in a convex area, and the number of cooling channels can additionally be increased. Usually, the heat dissipation in a convex area is slower than in a concave area [7]. The results also show that the convex area of the punch cools down more slowly than the concave areas of the die. From this it can be concluded that efficient cooling is most needed in convex areas.
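A quick consistency check of these figures, assuming the quoted rates are averages over the closed-tool quench interval:

\[
\text{average cooling rate} \;=\; \frac{T_{b,0}-T_{\mathrm{end}}}{\Delta t} \;=\; \frac{(850-170)\,^{\circ}\mathrm{C}}{17\,\mathrm{s}} \;=\; 40\,^{\circ}\mathrm{C/s},
\]

which matches the V1 value and the 17sec closed-tool time of the physical cycle; the V2 value of 33°C/s correspondingly implies Δt = 680/33 ≈ 20.6s, close to the 20sec quench simulated in the thermal analysis.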
3 Blanking of Press Hardened Ultra High Strength Steel
3.1 Motivation
As a basic investigation of blanking strategies for hardened boron steels, the conventional mechanical blanking process was studied with respect to the geometry of the sheared profile, the mechanical state of the sheared zone, the punch force-penetration curve and the wear evolution of the blanking tools versus the number of blanking cycles. In the blanking process, parameters such as the punch-die clearance, the punch velocity, the tool geometry and the mechanical properties of the materials influence the quality of the cross-section and the dimensional precision. It is therefore necessary to study the fracture of the hardened parts in order to select rational blanking process parameters.
3.2 Process Parameters for Blanking of the Hardened Parts
To study the effects of varying the process parameters on the geometry, tests were mainly conducted at different blanking angles, to see how the blanking angle affects the results. Both positive and negative blanking angles
were selected, as seen in Fig. 8. The data from these tests are intended to help select suitable parameters for the various trimming-flange angles of hardened parts; they can also be used as a reference for hot blanking in our further work. The proposed tests were carried out with the existing press and blanking tool at UTG. All of the tool steels were hardened and tempered to 62 HRC (hardness Rockwell C).
Fig. 8. Blanking angles as a process parameter (blank holder, punch, blank, die; γ: blanking angle): (a) γ = 0°, (b) γ > 0° (positive), (c) γ < 0° (negative)
A square workpiece with a thickness of 1.75mm is taken as the geometry. All experiments were done in a 630kN hydraulic press which allows the punch velocity to vary from 30mm/s up to 100mm/s. The blanking clearance varies from 5% to 20% of the sheet thickness at various blanking angles, namely +5°, +20° and -10°. The sheared profiles of the sheets were obtained with Marsurf equipment (Mahr GmbH), which is based on the tactile stylus method, and the states of the sheared zone were also examined by SEM (Scanning Electron Microscopy). The blanking force was measured by the integrated quartz force transducer type 9061 from Kistler (Germany), and the punch penetration or displacement was measured by a non-contacting displacement transducer produced by Micro-Epsilon (Germany). Finally, these test results were compared with finite element (FE) simulation results from DEFORM-2D, in which the blanking process was simulated with a 2-dimensional plane strain model and the blanking tools were considered as rigid bodies. FE simulations of the blanking process make it much more convenient to understand the fracture mechanism of the hardened boron-alloyed sheet.
3.3 Experiment and Simulation Results and Discussion
Fig. 9 shows the profiles measured with Marsurf at different punch-die clearances and punch velocities in the normal shearing process without a blanking angle. The test results show that the measured profiles of the sheared zone vary little with punch velocity within 30~100mm/s. However, the burr height increases when the clearance becomes larger. In Figs. 10 and 11, test and simulation results, such as the sheared profiles and punch force-penetration curves, are presented at different blanking angles. The test results show that no burr is apparent at blanking angles of +5°, +20° and -10°. From both test and simulation results, different sheared profiles and blanking forces are obtained corresponding to the different blanking angles. As a result, the
blanking angle is the most influential parameter on the blanking result. The measured blanking force reached its maximum of 1.43kN/mm, with the sheet thickness of 1.75mm, at the negative blanking angle of 10°. Of course, the punch-die clearance also influences the cutting force and the sheared profile. From 5% up to 15% clearance, the blanking process shows relatively similar sheared profiles and cutting forces. However, it is observed that a clearance of 20% causes high burr formation and an inappropriate sheared surface, e.g. a high ratio of fracture depth to smooth-sheared depth. Finally, it is noted that the blanking of hardened boron-alloyed steels exhibits brittle fracture, and this causes differences between the experimental and the FE simulation results. In brittle fracture, the crack propagates very quickly after crack initiation. Therefore, the fracture depth on the sheared surface of the hardened boron steels used is relatively high, i.e. 50% up to 80% of the sheet metal thickness.
Fig. 9. Profiles of the sheared surface with Marsurf at different clearance (u) and punch velocity (Vpunch)
Fig. 10. Test results at different blanking angles (u= 10% and Vpunch= 60 mm/s)
Fig. 11. Simulation results at different blanking angles (u= 10% and Vpunch= 60 mm/s)
4 Conclusion
In this paper, a systematic method has been developed for optimizing the geometrical design of the cooling systems of hot stamping tools. This methodology was successfully applied to the design of cooling channels in a prototype tool for efficient cooling performance, which indicates that the method can be used for designing cooling systems in other stamping tools as well. As the next objective, blanking strategies for hot stamped parts have been researched. The experimental results corresponding to different blanking angles and other blanking process parameters can serve as references in the automotive industry for various geometries of blanked parts of ultra high strength steels.
References
[1] Merklein M, Lechler J, Geiger M (2006) Characterisation of the flow properties of the quenchenable ultra high strength steel 22MnB5. Annals of the CIRP 55/1:229-232
[2] Geiger M, Merklein M, Lechler J et al (2007) Basic investigations on hot sheet metal forming of quenchenable high strength steels. 2nd International Conference on New Forming Technology, Bremen, Germany
[3] Weinert K, Mehnen J, Michelitsch T, Bartz-Beielstein T (2004) A multiobjective approach to optimize temperature control systems of molding tools. Production Engineering XI/1:77-80
[4] So H, Steinbeiss H, Hoffmann H (2006) Entwicklung einer Methodik zur Optimierung von Umformwerkzeugen für die Warmblechumformung. Tagungsband zum 1. Erlanger Workshop Warmblechumformung, Berichts- und Industriekolloquium der ortsverteilten DFG-Forschergruppe FOR 552:102-117
[5] Hoffmann H, So H, Steinbeiss H (2007) Design of hot stamping tools with cooling system. Annals of the CIRP 56/1:269-272
[6] Brosius A et al (2007) Modellierung und Simulation der Warmblechumformung: Aktueller Stand und zukünftiger Forschungsbedarf. Tagungsband zum 2. Erlanger Workshop Warmblechumformung, Berichts- und Industriekolloquium der ortsverteilten DFG-Forschergruppe FOR 552:37-58
[7] Lorenz D, Roll K (2005) Modeling and analysis of integrated hotforming and quenching processes. Advanced Materials Research 6/8:787-794
Information and Communications Technology
Efficient and Secure Asset Tracking Across Multiple Domains
Jin Wook Byun1 and Jihoon Cho2
1 Department of Information and Communication, Pyeongtaek University, Korea
[email protected]
2 Information Security Group, Royal Holloway University of London, Surrey, UK
Abstract. Existing RFID systems generally assume that a tag is initiated and identified/traced by a single party. In some applications, however, two or more independent parties can be involved in identifying and tracing tagged assets or personnel. We propose an RFID system where tags can be shared across multiple domains. Universal re-encryption is employed to enhance tag privacy, and we extend it to a multi-recipient setting so that the proposed RFID system can work across multiple domains.
1 Introduction
Tracking and managing corporate assets and people is a time-consuming and labor-intensive process when a large volume of assets and people frequently moves between departments within an organisation. RFID technologies have become increasingly popular in asset tracking systems, providing cost-effective visibility of assets and personnel within organisations. RFID technologies hold great promise, but they also raise significant privacy concerns. Since most tags silently and automatically emit static and unique identifiers, a person or an item carrying tags effectively broadcasts a fixed serial number to nearby readers. A variety of privacy-enhancing solutions assume different capabilities of tags. In the coming years, passive tags with very limited memory and logic gates will be mostly deployed in the mass market, because cost pressures will lead manufacturers to implement very few features in the smallest and cheapest devices. Unfortunately, passive tags lack the resources to perform public-key cryptographic operations. However, even though passive tags themselves cannot perform any cryptographic operation, it is possible for external trusted devices to use standard cryptographic techniques, such as re-encryption [1, 3, 4], to enhance tag privacy. Existing RFID systems generally assume that a tag is initiated and identified/traced by a single party. In some applications, however, two or more independent parties can be involved in identifying and tracing tagged assets or personnel. A simple solution would be to attach or embed a number of tags, one from each domain1, but this is not a practical choice in some applications, such as human-implantable chips.
1 A domain is a logical entity which initiates and identifies/traces tags using its own key material.
Furthermore, embedding a single tag into an object instead of several tags may lead to cost savings. We thus propose an RFID system where tags are shared across multiple domains. Universal re-encryption is employed to enhance tag privacy, and we extend it to a multi-recipient setting so that the proposed RFID system can work across multiple domains. The rest of the paper is organised as follows. In section 2, we introduce RFID systems and the privacy definition, and also describe universal re-encryption. In section 3, we propose an RFID system which can work across multiple domains. Finally, we discuss future work in section 4.
2 Preliminaries
2.1 RFID Systems
An RFID system which makes use of re-encryption involves the following entities.
• A tag is a passive transponder which is initially programmed with a unique identifier id. It has a re-writable memory that can be written to by an external device. When interrogated by a reader or randomiser, it returns the resident data.
• A reader belongs to a specific domain. It initiates a tag by writing into the tag a pseudonym corresponding to the tag identifier, and identifies/traces the tag by recovering the identifier from the pseudonym resident in the tag. It consists of one or more transceivers and a back-end server. The transceivers collect data from tags, which is sent to the back-end server for identification.
• A randomiser is a transceiver which only refreshes the data resident in a tag.
2.2 Privacy Definition
In most cryptographic security models, an adversary is assumed to have more-or-less unfettered access to system components; such access will, however, be a sporadic event in most RFID systems. That is, in order to either scan a tag or listen to messages from readers (or randomisers), an adversary must be in physical proximity to the tag or reader (or randomiser). Moreover, because low-cost tags cannot perform standard cryptographic functions, they cannot provide a meaningful level of security against too strong an adversary. Indeed, it is straightforward to see that an RFID system cannot preserve privacy against a strong adversary who is capable of eavesdropping on all communications between tags and readers/randomisers. We thus consider an adversary which accurately reflects real-world threats and tag capabilities. We assume that an adversary may corrupt any of the randomisers, but not readers. The adversary comprises all dishonest parties. We assume that the adversary cannot eavesdrop on all conversations between honest randomisers and tags; otherwise, all hope for privacy is lost. The privacy requirement is that once a tag has been updated by an honest randomiser, an adversary cannot trace it. That is, if, between two reads of the tag by an adversary, the tag is updated by an honest randomiser, the adversary cannot distinguish whether or not it reads the same tag.
2.3 Universal Re-encryption
As a privacy-enhancing mechanism, the data resident in a tag can be periodically refreshed by external devices using standard cryptographic techniques. Juels and Pappu [4] employ a public-key encryption scheme to enhance the privacy of an RFID-enabled banknote. An encrypted version of the serial number of a banknote is written into the tag embedded in the banknote, and the ciphertext is periodically re-encrypted by any randomiser programmed with the public key. While a single key pair may suffice for the above case, general RFID systems would certainly require multiple key pairs. In order to re-encrypt a ciphertext within a tag, it would then be necessary to know under which public key the ciphertext has been encrypted. However, including a public key on a tag along with the ciphertext would permit a certain degree of tracking and profiling, because the public key itself could be used as a static identifier. To solve this anonymity problem, Golle et al. [3] introduced a cryptographic technique known as universal re-encryption, which permits re-encryption without any need to also provide information about the public key used. They suggest using universal re-encryption, which is based on the ElGamal encryption scheme, to enhance privacy in RFID systems involving multiple public keys.2 A public-key encryption scheme that permits universal re-encryption, which we call a UPE scheme, consists of the following five polynomial-time algorithms [3].
• The randomised common-key generation algorithm UG takes as input a security parameter τ ∈ Ν and returns a common key I.
• The randomised key generation algorithm UK takes as input the common key I and returns a public/secret key pair (pk, sk).
• The randomised encryption algorithm UE takes as inputs a public key pk and a plaintext m, and returns a ciphertext c.
• The randomised re-encryption algorithm URe takes as input a ciphertext c (but not a public key pk), and returns a re-encrypted ciphertext c' which decrypts to the same plaintext as c.
• The deterministic decryption algorithm UD takes as inputs a secret key sk and a ciphertext c (or c') and returns the corresponding plaintext, or a special symbol ⊥ if the ciphertext is invalid.
Golle et al. [3] also introduced a new security notion for re-encryption addressing the use of multiple key pairs in the same environment, namely universal semantic security under re-encryption. Privacy can be preserved in RFID systems which use universal re-encryption via universal semantic security under re-encryption [3].
2 Golle et al. proposed the use of universal re-encryption in MIXnets as a privacy-preserving technique. They also proposed applying the technique in RFID systems to enhance privacy, observing that an RFID system involving refreshing (relabelling) of tags is similar to a MIXnet.
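For concreteness, the following toy sketch implements an ElGamal-based UPE scheme in the spirit of Golle et al. [3]: a ciphertext consists of two ElGamal pairs, the second encrypting the group identity, so that anyone can re-randomise the whole ciphertext without knowing the public key. The group parameters are deliberately tiny, and the function names simply follow the algorithm names above; this is an illustrative sketch (Python 3.8+ for pow(x, -1, p)), not a secure implementation.

```python
# Toy ElGamal-based universal re-encryption, following Golle et al. [3].
import secrets

# UG: the common key is a small safe-prime group (p = 2q+1) with a
# generator g of the order-q subgroup of quadratic residues.
P, Q, G = 23, 11, 4

def UK():
    """Key generation: secret x, public y = g^x mod p."""
    x = secrets.randbelow(Q - 1) + 1
    return pow(G, x, P), x                       # (pk, sk)

def UE(pk, m):
    """Encryption: pair (m*y^k0, g^k0) plus a pair encrypting 1."""
    k0 = secrets.randbelow(Q - 1) + 1
    k1 = secrets.randbelow(Q - 1) + 1
    return ((m * pow(pk, k0, P)) % P, pow(G, k0, P),
            pow(pk, k1, P),           pow(G, k1, P))

def URe(c):
    """Re-encryption: uses only the ciphertext, never the public key."""
    a0, b0, a1, b1 = c
    k0 = secrets.randbelow(Q - 1) + 1
    k1 = secrets.randbelow(Q - 1) + 1
    return ((a0 * pow(a1, k0, P)) % P, (b0 * pow(b1, k0, P)) % P,
            pow(a1, k1, P),            pow(b1, k1, P))

def UD(sk, c):
    """Decryption: valid iff the second pair decrypts to 1, else ⊥ (None)."""
    a0, b0, a1, b1 = c
    if (a1 * pow(pow(b1, sk, P), -1, P)) % P != 1:
        return None
    return (a0 * pow(pow(b0, sk, P), -1, P)) % P

pk, sk = UK()
m = pow(G, 7, P)                 # a tag identifier encoded in the group
c = UE(pk, m)
for _ in range(3):               # randomisers refreshing the tag's data
    c = URe(c)
assert UD(sk, c) == m
```

In the RFID setting above, UE would be run by the initiating reader, URe by any randomiser, and UD by the reader's back-end server.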
3 Proposed Scheme
We briefly sketch the extension of the UPE scheme to a multi-recipient setting, say n recipients, as follows. The multi-recipient UPE scheme (MUPE) consists of the following
polynomial-time algorithms. A common-key generation algorithm MUG, a key generation algorithm MUK, and a decryption algorithm MUD are defined as in the UPE scheme. The randomised encryption algorithm MUE takes as inputs a public-key vector pk = (pk[1],…,pk[n]) and a plaintext vector m = (m[1],…,m[n]), and returns a ciphertext vector c = (c[1],…,c[n]). The randomised re-encryption algorithm MURe takes as input a ciphertext vector c (but not a public-key vector pk), and returns a ciphertext vector c' = (c'[1],…,c'[n]). A trivial construction of the MUPE scheme would be a concatenation of ciphertexts for the multiple recipients. For an efficient extension, we apply the property of randomness re-use to reduce the size of the ciphertext [2]. It is straightforward to construct an RFID system which makes use of the MUPE scheme. It runs the algorithms MUG and MUK to obtain a common key and public/secret key pairs. It initiates a tag by writing into the tag a ciphertext (whose corresponding plaintext is the identifier of the tag) output by the algorithm MUE. The algorithm MURe is used to re-encrypt the ciphertext resident in a tag. The algorithm MUD returns the identifier of an interrogated tag if the ciphertext resident in the tag is valid.
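The size saving from randomness re-use [2] can be illustrated on plain multi-recipient ElGamal, where a single shared g^k serves all n recipients, so a ciphertext holds n+1 group elements instead of 2n. The sketch below shows only that idea, not the MUPE construction itself, whose concrete specification is deferred to future work (section 4); all names are ours.

```python
# Randomness re-use in plain multi-recipient ElGamal (illustration only).
import secrets

P, Q, G = 23, 11, 4                           # same toy group as above

def keygen():
    x = secrets.randbelow(Q - 1) + 1          # secret key
    return pow(G, x, P), x                    # (pk, sk)

def mue(pks, ms):
    """Encrypt one message per recipient, re-using a single exponent k."""
    k = secrets.randbelow(Q - 1) + 1
    shared = pow(G, k, P)                     # g^k, shared by all recipients
    return shared, [(m * pow(y, k, P)) % P for y, m in zip(pks, ms)]

def mud(i, sk, ciphertext):
    """Recipient i removes its mask: m_i * y_i^k / (g^k)^x_i = m_i."""
    shared, parts = ciphertext
    return (parts[i] * pow(pow(shared, sk, P), -1, P)) % P

pairs = [keygen() for _ in range(3)]
msgs = [pow(G, e, P) for e in (2, 5, 7)]      # messages encoded in the group
ct = mue([pk for pk, _ in pairs], msgs)
assert [mud(i, sk, ct) for i, (_, sk) in enumerate(pairs)] == msgs
```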
4 Future Work
As a specific construction of the MUPE scheme, we will extend the ElGamal-based UPE scheme from the single-recipient setting to the multi-recipient setting. For an efficient extension, we apply the property of randomness re-use to reduce the size of a ciphertext [2]. The proposed scheme should satisfy the required security notion, universal semantic security under re-encryption.
References
[1] Ateniese G, Camenisch J, Medeiros B (2005) Untraceable RFID tags via insubvertible encryption. In: Atluri V et al (ed) ACM Conference on Computer and Communication Security
[2] Bellare M, Boldyreva A, Staddon J (2003) Randomness re-use in multi-recipient encryption schemes. In: Desmedt Y (ed) Public Key Cryptography, LNCS 2567, Springer
[3] Golle P, Jakobsson M, Juels A, Syverson P F (2004) Universal re-encryption for Mixnets. In: Okamoto T (ed) Topics in Cryptology - CT-RSA 2004, LNCS 2964, Springer
[4] Juels A and Pappu R (2003) Squealing Euros: Privacy protection in RFID-enabled banknotes. In: Rebecca N. W. (ed) Financial Cryptography, LNCS 2742, Springer
Wireless Broadcast with Network Coding: DRAGONCAST
Song Yean Cho1 and Cedric Adjih2
1 Hipercom Team, LIX, École Polytechnique, Palaiseau, France
[email protected]
2 Hipercom Team, INRIA Paris-Rocquencourt, Le Chesnay Cedex, France
Abstract. Network coding is a recently proposed method for transmitting data, which has been shown to have the potential to improve wireless network performance. In this article, we study the use of network coding for (energy-)efficient broadcasting in multi-hop wireless networks: transmitting data from one source to all nodes with a small number of retransmissions. It is known that finding an efficient method to broadcast essentially amounts to selecting proper transmission rates for each node. Our contribution is to propose a simple and efficient wireless broadcast protocol, DRAGONCAST, which uses Dynamic Rate Adaptation from Gap with Other Nodes (D.R.A.G.O.N.). The rationale of this rate selection method is detailed through some logical arguments. The behavior of DRAGONCAST is analyzed in a tandem network, and its efficiency is experimentally evaluated by simulations.
1 Introduction
The concept of network coding, where intermediate nodes mix information from different flows, was introduced by seminal work from Ahlswede, Cai, Li and Yung [1]. Since then, a rich literature has flourished covering both theoretical and practical aspects. In particular, several results have established network coding as an efficient method to broadcast data to a whole wireless network (see Lun et al. [6] or Fragouli et al. [12] for instance), where efficiency consists in minimizing the total number of packet transmissions for broadcasting from the source to all nodes of the network. From an information-theoretic point of view, the case of broadcast with a single source in a static network is well understood; see for instance Deb et al. [19] or Lun et al. [3] and their references. In practical networks, the simple method of random linear coding from Ho et al. [2] may be used, but several features should be added. Examples of such practical protocols for multicast are CodeCast from Park et al. [20] or MORE from Chachulski et al. [8]. This article also proposes a practical wireless broadcast protocol, with two features: termination and retransmission rate. Termination: this is the ability to receive and decode all packets at the end of the transmission or generation, even in cases with mobility and packet losses. This feature is supported by a dedicated additional protocol: a termination protocol. Retransmission (rate): this is related to the functioning of random linear coding. Every node receives packets and, from time to time, retransmits coded packets. As indicated in section 2, optimal fixed retransmission rates may be computed for static networks; however, in a mobile wireless network, changes of topology would
cause the optimal rates to evolve continuously. Hence a network coding solution should incorporate an algorithm to determine when to retransmit packets and how many of them, such as the ones in Fragouli et al. [12] or MORE [8]. In this article, we propose a protocol for broadcast in wireless networks: DRAGONCAST. It provides the two previous features in a novel way and is based on simplicity and universality. Unlike previous approaches, it does not use explicit or implicit knowledge of the topology (such as the direction or distance to the source, or the loss rates of the links), and hence is perfectly suited to ad-hoc networks. A cornerstone of DRAGONCAST is a rate adjustment method: every node retransmits coded packets at a certain rate, and this rate is adjusted dynamically. Essentially, the rate of a node increases if it detects nodes in its current neighborhood that lack too many coded packets. This lack is called a "dimension gap", and the adaptation algorithm is a Dynamic Rate Adaptation from Gap with Other Nodes (DRAGON). The node state information required for this adaptation is exchanged by piggybacking on coded packets. Ultimate decoding at every node is ensured by integrating a termination protocol. The rest of the paper is organized as follows: section 2 provides some background on practical aspects of network coding, section 3 presents some theoretical aspects, section 4 details our approach and protocols, section 5 analyzes the behavior of the protocol with a model of a tandem network, section 6 explains the evaluation metrics and the experimental simulation results, and section 7 concludes.
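As a purely speculative illustration of this idea — the concrete DRAGON update rule appears later in the paper and may differ — a node could scale its rate with the largest rank deficit observed in its neighborhood:

```python
# Speculative sketch of rate adaptation from a "dimension gap":
# a node raises its retransmission rate when neighbors lag behind its
# own rank. The formula and parameter alpha are illustrative only.
def next_rate(base_rate, own_rank, neighbor_ranks, alpha=0.5):
    gap = max((own_rank - r for r in neighbor_ranks), default=0)
    return base_rate * (1.0 + alpha * max(gap, 0))
```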
2 Practical Framework for Network Coding

In this section, we present the known practical framework for network coding (see also Fragouli et al. [13] for a tutorial) that is used in this article.

2.1 Linear Coding and Random Linear Coding
Network coding differs from classical routing by permitting coding at intermediate nodes. One possible coding algorithm is linear coding, which performs only linear transformations through addition and multiplication (see Li et al. [9] and Koetter et al. [11]). Precisely, linear coding assumes identically sized packets and views the packets as vectors over a fixed Galois field GF(q). In the case of single source multicasting, all packets initially originate from the source, and therefore any coded packet received at a node v at any point of time is a linear combination of the source packets. The ith coded packet at node v is:

P_i(v) = ∑_{j=1}^{k} a_{i,j} P_j,
where (P_j), j = 1, ..., k, are the k source packets and [a_{i,1}, a_{i,2}, ..., a_{i,k}] is the coding vector of the coded packet P_i(v). When a node generates a coded packet with linear coding, an issue is how to select the coefficients. Whereas centralized deterministic methods exist, Ho et al. [2] presented a novel coding algorithm which does not require any central coordination. The coding
algorithm is random linear coding: when a node transmits a packet, it computes a linear combination of all the data it possesses, with randomly selected coefficients (γ_i), and sends the result of that linear combination: coded packet = ∑_i γ_i P_i(v). In practice, a special header containing the coding vector of the transmitted packet may be added, as proposed by Chou et al. [14].

2.2 Decoding, Vector Space, and Rank
A node recovers the source packets {P_j} from the packets {P_i(v)} by considering the matrix of coefficients {a_{i,j}} from section 2.1: decoding amounts to inverting this matrix, for instance with Gauss elimination. Thinking in terms of coding vectors, at any point of time we can associate with node v the vector space spanned by the coding vectors it holds. The dimension of this vector space is denoted D_v; in the rest of this article, we call D_v the rank of the node. The rank of a node is a direct metric for the amount of useful received packets, and a received packet is called innovative when it increases the rank of the receiver node. Ultimately, a node can decode all source packets when its rank D_v is equal to the total number of source packets (the generation size) [14].

2.3 Rate Selection
In random linear coding, the remaining decision is when to send packets. This could be done by deterministic algorithms; for instance, [11] proposes algorithms which decide upon each reception whether or not to send another packet. In this article, we consider "rate selections": at every point of time, an algorithm decides the rate C_v(τ) of a node v in the set of nodes V at time τ. Then, random linear coding operates as indicated in algorithm 1. With this scheduling, the parameter which varies is the delay chosen by the rate selection algorithm: delay ≈ 1/C_v(τ).

Algorithm 1. Random Linear Coding with Rate Selection
1. Source scheduling: the source transmits sequentially the D vectors (packets) of a generation with rate Cs.
2. Nodes' start and stop conditions: the nodes start transmitting when they receive the first vector, and they continue transmitting until they themselves and their neighbors have enough vectors to recover the D source packets.
3. Nodes' scheduling: every node v retransmits linear combinations of the vectors it has, and waits for a delay computed from the rate distribution.
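To make these operations concrete, the following sketch implements random linear coding and the rank computation of sections 2.1-2.3 over a small prime field. It is only an illustration: the field size, data layout and function names are our own choices, not part of the protocol (the simulations in section 6.2 use a much larger prime field).

import random

P = 257  # small prime; packet symbols live in GF(P)

def random_coded_packet(buffered, k):
    """Random linear coding: combine all buffered (coding vector, payload)
    pairs with random coefficients gamma_i (assumes a non-empty buffer)."""
    vec, payload = [0] * k, [0] * len(buffered[0][1])
    for cv, data in buffered:
        g = random.randrange(P)  # random coefficient gamma_i
        vec = [(x + g * c) % P for x, c in zip(vec, cv)]
        payload = [(x + g * d) % P for x, d in zip(payload, data)]
    return vec, payload

def rank(coding_vectors, k):
    """Rank D_v of a node: Gauss elimination over GF(P) on the coding
    vectors it holds; a packet is innovative iff it increases this value."""
    m, r = [v[:] for v in coding_vectors], 0
    for col in range(k):
        piv = next((i for i in range(r, len(m)) if m[i][col]), None)
        if piv is None:
            continue
        m[r], m[piv] = m[piv], m[r]
        inv = pow(m[r][col], P - 2, P)  # modular inverse via Fermat
        m[r] = [(x * inv) % P for x in m[r]]
        for i in range(len(m)):
            if i != r and m[i][col]:
                c = m[i][col]
                m[i] = [(a - c * b) % P for a, b in zip(m[i], m[r])]
        r += 1
    return r

Decoding is the same elimination applied jointly to coding vectors and payloads: once the rank reaches the generation size D, the reduced rows are exactly the source packets.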
3 Theoretical Performance of Wireless Network Coding

For static networks, several important results exist for network coding in the case of single source multicast. First, it has been shown that the simple method of random linear coding from Ho et al. [2] can asymptotically achieve the maximal multicast capacity (optimal performance), and also optimal energy-efficiency (see [6]). Second, for energy-efficiency, only the average rates of the nodes are relevant. Third, the optimal average rates may be found in polynomial time with linear programs, as in
Wu et al. [5], Li et al. [4] and Lun et al. [6]. Last, performing random linear coding with a source rate slightly lower than the maximal one allows all packets to be decoded in the long run (when time grows indefinitely; see [3] and [16]). For mobile ad-hoc networks, if one desires to use the optimal rates at any point of time, an issue is that they are a function of the topology, which should then also be perfectly known.
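The capacity statement can be illustrated numerically: for a wired-like graph, the broadcast rate achievable with coding from source s equals the minimum, over destinations t, of the s-t min-cut. The sketch below uses a textbook Edmonds-Karp max-flow on a hypothetical unit-capacity topology; it deliberately ignores the wireless hyperarc aspect (one transmission reaching several neighbors at once), which the linear programs of [5, 6, 12] capture.

from collections import deque
import copy

def max_flow(capacity, s, t):
    """Edmonds-Karp max-flow; `capacity` maps u -> {v: residual capacity}."""
    cap, flow = copy.deepcopy(capacity), 0
    while True:
        parent, q = {s: None}, deque([s])
        while q and t not in parent:  # BFS for an augmenting path
            u = q.popleft()
            for v, c in cap.get(u, {}).items():
                if c > 0 and v not in parent:
                    parent[v] = u
                    q.append(v)
        if t not in parent:
            return flow
        path, v = [], t
        while parent[v] is not None:
            path.append((parent[v], v))
            v = parent[v]
        b = min(cap[u][v] for u, v in path)  # bottleneck capacity
        for u, v in path:
            cap[u][v] -= b
            cap.setdefault(v, {})[u] = cap.get(v, {}).get(u, 0) + b
        flow += b

# hypothetical 4-node topology with unit-capacity links
links = {'s': {'a': 1, 'b': 1}, 'a': {'b': 1, 't': 1}, 'b': {'a': 1, 't': 1}}
print(min(max_flow(links, 's', d) for d in ('a', 'b', 't')))  # -> 2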
4 Our Approach: DRAGONCAST

As mentioned in section 1, our contribution is a method for broadcast from a single source to the entire network with network coding. It is based on the known principles described in sections 2 and 3; the general framework of our protocol is described in section 4.1. There are two components in this framework:
• DRAGON, a rate selection algorithm, described in section 4.2
• A termination protocol, described in section 4.3

4.1 Framework for Broadcast with Network Coding
In this section, we briefly describe our practical framework for broadcast protocols. It assumes the use of random linear coding. It further details the basic operation presented in algorithm 1, and appears as algorithm 2.

Algorithm 2. Framework for Broadcast with Network Coding
1. Source data transmission scheduling: the source transmits sequentially D vectors (packets) with rate Cs.
2. Nodes' data transmission start condition: the nodes start transmitting when they receive the first vector.
3. Nodes' data storing condition: the nodes store a received vector in their local buffer only if it carries new information, different from the vectors that they already have.
4. Nodes' termination condition: the nodes continue transmitting until they themselves and the neighbors currently known in their local information base have enough vectors to recover the D source packets.
5. Nodes' data transmission scheduling: every node retransmits linear combinations of the vectors in its local buffer after waiting for a delay computed from the rate selection.
6. Nodes' data transmission restart condition: when a node that has already stopped data transmission receives a notification indicating that a neighboring node requires more vectors to recover the D source packets, the node re-enters the transmission state.

As described in algorithm 2, the source initiates broadcasting by sending its first original data packets. Other nodes initiate transmission of encoded data upon receiving the first coded packet, and stay in a transmission state where they transmit packets with an interval decided by the rate selection algorithm; a sketch of this per-node behavior is given below. Upon detection of termination, they stop transmitting.
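In pseudo-Python, a node following algorithm 2 reduces to a small event loop. Everything here (the `node` object, its methods, the event kinds) is an illustrative assumption on our part, not protocol syntax; the case of a zero rate (DRAGON's stop state, section 4.2) is omitted for brevity.

def node_loop(node, D, rate_of):
    """Illustrative per-node event loop for algorithm 2; `rate_of` is the
    rate selection (e.g. DRAGON) and D the generation size."""
    while True:
        ev = node.wait_next_event()
        if ev.kind == 'coded_packet':
            if node.is_innovative(ev.vector):         # step 3: store only new info
                node.buffer.append(ev.vector)
            if not node.started:                      # step 2: start on first vector
                node.started = True
                node.schedule_timer(delay=1.0 / rate_of(node))
        elif ev.kind == 'timer':
            done = node.rank() == D and all(
                r == D for r in node.known_neighbor_ranks())
            if not done:                              # step 4: termination condition
                node.send(node.random_combination())  # step 5: coded retransmission
                node.schedule_timer(delay=1.0 / rate_of(node))
        elif ev.kind == 'nack':                       # step 6: restart on notification
            node.schedule_timer(delay=1.0 / rate_of(node))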
4.2 DRAGON: Rate Selection
In this section, we introduce our core heuristic for rate selection, DRAGON. Before introducing it, we describe our previous heuristics, IRON and IRMS. These heuristics do not assume a specific type of network topology; the only assumption is that one transmission reaches several neighbors at the same time.

4.2.1 Static Heuristic IRON
The heuristic IRON starts from the simple logic of setting the same rate on every node: for instance, let us assume that every node has an identical rate of one packet per second. Let us further assume, optimistically, that near-optimal energy-efficiency is achieved and that every transmission brings innovative information to almost every receiver, and let us denote by M the average number of neighbors of a node in the mobile network. Then every node receives on average M packets per second; hence the source should inject at least M packets per second. This constitutes the heuristic IRON:
• IRON (Identical Rate for Other Nodes than source): every node retransmits with the same rate, except the source, which has a rate M times higher.

4.2.2 Static Adjustment with Local Topology: IRMS
Heuristic IRON assumes networks with similar neighborhood sizes. However, when the nodes have different numbers of neighbors, the first step is to adjust for these differing neighborhood sizes. The goal of this adjustment is to help nodes whose local received rate is lower than the source rate. The heuristic IRMS (Increased Rate for the Most Starving node) in [22] attributes a low receiving rate to a neighborhood smaller than the expected one and adjusts the rate, inspired from [12], as:
• IRMS: the rate of node v is set to C_v = k · max_{u∈H_v} M/|H_u|, where H_w is the set of neighbors of w, |H_w| is its size, and k = 1 is a global adjustment factor.

In [22] and [23], IRMS (k = 1) was explored experimentally, and although overall good performance was observed, in the case of sparser networks phenomena occurred where only a few nodes would connect one part of the network to another, in a fashion similar to the center node in Fig. 1. In networks similar to Fig. 1, the rate of the nodes linking the two parts of the network would depend on how many of them are present: such information is not available from local topology information.

Fig. 1. Example Topology (source: big red circle)

4.2.3 New Dynamic Heuristic, DRAGON
The previous heuristics for rate selection were static, using simple local topology information, and the rate would remain constant as long as the topology remained identical.
In this article, a different approach is chosen. The starting point is the observation that with fixed rates one would expect the rank of a node to grow at the same pace as the source transmission, as in the example of optimal rate selections for static networks (see section 4.2.1). Decreasing the rates of intermediate nodes by too large a factor would not permit the proper propagation of source packets in real time. On the contrary, increasing their rates excessively would not increase the rate of decoded packets (which is naturally bounded by the source rate), while it would decrease energy-efficiency (by increasing the amount of redundant transmissions). The idea of the proposed rate selection is to find a balance between these two inefficient states. As we have seen, ideally the rank of a node would be comparable to the index of the last packet sent by the source. Since we wish to have a simple decentralized algorithm, instead of comparing with the source, we indirectly compare the rank of a node with the ranks of all its perceived neighbors. The key idea is to perform a control so that the ranks of neighboring nodes tend to be equalized: if a node detects that one neighbor has a rank which is too low in comparison with its own, it tends to increase its rate. Conversely, if all its neighbors have greater ranks than itself, the node in fact need not send packets. Precisely, let D_v(τ) denote the rank of node v at time τ, and denote by g_v(τ) the maximum gap of rank with its neighbors, normalized by the number of neighbors. We then propose the dynamic rate selection Dynamic Rate Adaptation from Gap with Other Nodes (DRAGON), using g_v(τ) as follows. DRAGON: the rate C_v(τ) of node v at time τ is set as:
• if g_v(τ) > 0, then C_v(τ) = α g_v(τ), where α is some constant;
• otherwise, the node stops sending encoded packets until g_v(τ) becomes > 0;
where

g_v(τ) = max_{u∈H_v} (D_v(τ) − D_u(τ)) / |H_u|
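In code, the DRAGON rule is a one-liner over the piggybacked neighbor state; the dictionary layout below is our illustrative assumption (compare with IRMS, where the maximum is taken over M/|H_u| instead of the rank gap).

def dragon_rate(D_v, neighbors, alpha=1.0):
    """DRAGON rate of node v from its rank D_v and the piggybacked state of
    its neighbors; `neighbors` maps id -> (rank D_u, neighborhood size |H_u|)."""
    if not neighbors:
        return 0.0
    g_v = max((D_v - D_u) / H_u for D_u, H_u in neighbors.values())
    return alpha * g_v if g_v > 0 else 0.0  # stop sending while g_v <= 0

# example: rank 12, one neighbor far behind, one almost caught up
print(dragon_rate(12, {'u1': (8, 4), 'u2': (11, 2)}))  # alpha * max(1.0, 0.5) = 1.0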
This heuristic has strong similarities with the previous heuristics IRON and IRMS. Consider the local received rate at one destination v: DRAGON ensures that every node receives a total rate at least equal to the average rank gap between the node and its neighbors, scaled by α. That is, the local received rate at time τ verifies:

C(V \ {v}) ≥ α ( (1/|H_v|) ∑_{u∈H_v} D_u(τ) − D_v(τ) ),

which would ensure that the gap would be
closed in a time of at most ≈ 1/α, if the neighbors did not receive new innovative packets.

4.3 Termination Protocol
A network coding protocol for broadcast requires a termination protocol in order to decide when retransmissions of coded packets should stop. Our precise termination condition is as follows: when a node (the source or an intermediate node) and all its known neighbors have sufficient data to recover all source packets, the transmission stops. This stop condition requires information about the status of the neighbors, including their ranks. Hence, each node manages a local information base storing one-hop neighbor information, including their ranks.
Algorithm 3. Brief Description of Local Info Base Management Algorithm
1. Nodes' local info notify scheduling: the nodes start notifying their neighbors of their current rank and its lifetime when they start transmitting vectors. The notification can generally be piggybacked in data packets if the node transmits a vector within the lifetime interval.
2. Nodes' local info update scheduling: on receiving a notification of rank and lifetime, the receivers create or update their local information base by storing the sender's rank and lifetime. If the lifetime of a node's information in the local information base expires, the information is removed.

In order to keep up-to-date information about its neighbors, every entry in the local information base has a lifetime. If a node does not receive a notification updating an entry before the lifetime of that entry expires, the entry is removed. Hence, every node needs to provide updates to its neighbors. To do so, each node notifies its current rank together with a new lifetime. The notification is usually piggybacked in an encoded data packet, but may be delivered in a control packet if a node has no data to send during the lifetime. A precise algorithm organizing the local information base is described in algorithm 3; a sketch is also given below. The notification of rank has two functions: it acts both as a positive acknowledgement (ACK) and as a negative acknowledgement (NACK). When a node has sufficient data to recover all source packets, the notification works as an ACK; when a node needs more data to recover all source packets, the notification has the function of a NACK. In this last case, a receiver of the NACK may have already stopped transmission, and thus detects and acquires a new neighbor that needs more data to recover all source packets. In this case, the receiver restarts transmission. The restarted transmission continues until the new neighbor notifies that it has enough data, or until the entry of the new neighbor expires and is therefore removed.
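A minimal sketch of the local information base of algorithm 3, as a soft-state table; the class and method names are ours, and the lifetime value would be a protocol parameter.

import time

class LocalInfoBase:
    """Soft-state one-hop neighbor table: rank + expiry per neighbor."""

    def __init__(self):
        self.entries = {}  # neighbor id -> (rank, expiry time)

    def on_notification(self, neighbor, rank, lifetime):
        """Create or refresh an entry from a (piggybacked) notification."""
        self.entries[neighbor] = (rank, time.time() + lifetime)

    def purge(self):
        """Drop entries whose lifetime expired (algorithm 3, step 2)."""
        now = time.time()
        self.entries = {n: e for n, e in self.entries.items() if e[1] > now}

    def may_stop(self, own_rank, D):
        """Termination test: the node and all known neighbors can decode."""
        self.purge()
        return own_rank == D and all(r == D for r, _ in self.entries.values())

A notification carrying a rank lower than the generation size then plays the NACK role described above: `may_stop` becomes false again and the receiver re-enters the transmission state.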
5 Theoretical Analysis of DRAGON

5.1 Overview
In effect, DRAGON performs a feedback control which intends to equalize the rank of a node with the ranks of its neighbors, by adapting the rates of the nodes. Notice, however, that a precise control-theoretic analysis of DRAGON is complicated by the fact that the rank gap does not behave like a simple "physical" output, and that we have the following properties for two neighbor nodes u, v:
• If D_u > D_v, then every transmission of u received by v increases the rank of v (is innovative).
• If D_u ≤ D_v, then a transmission of u received by v may or may not increase the rank of v; both cases may occur.

As a result, there is some uncertainty in the control about the effect of increasing the rate of a neighbor that has lower rank than another node. A refined dynamic approach would gather detailed statistics about the innovation rate and use this information in
the control; however, the approach used in DRAGON is simpler and more direct. If a node has a lower rank than all its neighbors, it stops sending packets: this amounts to pessimistically estimating that transmissions from nodes with higher rank are non-innovative. Although this tends to make the rates less stable in time, intuitively this property might allow DRAGON to be more efficient.

5.2 Insights from Tandem Networks
Although the exact modeling of the control is complex, some insight may be gained from approximation. Consider one path (s = v_0, v_1, ..., v_n) from the source s to a given node v_n, as seen in Fig. 2.

Fig. 2. Line Network

Denote D_k = D_{v_k} and C_k = C_{v_k}. Assume now that the ranks of the nodes verify D_{k+1}(τ) < D_k(τ) for k = 0, ..., n − 1. A fluid approximation then yields the following equations:

dD_{k+1}(τ)/dτ = C_k(τ) + I_k(τ),   C_k(τ) = α_k (D_k(τ) − D_{k+1}(τ − δ_k(τ))),   with α_k = α/|H_{v_{k+1}}|    (1)
where I_k(τ) is the extra innovative packet rate at v_{k+1} from neighbors other than v_k, and δ_k(τ) is the delay for node k+1 to get information about the rank of its neighbor k. If the rank is piggybacked on each transmitted packet, then δ_k(τ) ≈ 1/C_k(τ). If we neglect the delay δ_k(τ) and if we consider a linear network composed exclusively of the path, then α_k = α/2 and I_k(τ) = 0. Let β = α/2; then (1) yields a sequence of first-order equations for D_k, solvable with standard resolution methods:

D_k(τ) = Mτ − (kM/β)(1 − e^{−βτ}) − e^{−βτ} P_k(τ),

where P_k(τ) is a polynomial in τ (with P_k(0) = 0). This result shows that for a line network, when τ → ∞, the ranks of the nodes v_0, v_1, ..., v_k are such that the dimension gap between two neighbors is M/β, and this occurs after a time on the order of 1/β = 2/α: the rank of the nodes decreases linearly from the source to the edge of the network.
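The delay-free fluid model can be integrated numerically to check this prediction; the Euler sketch below (a line of n nodes, source rate M, all parameter values illustrative) converges to inter-node gaps of M/β = 2M/α.

def simulate_line(n=10, M=1.0, alpha=1.0, T=200.0, dt=0.01):
    """Euler integration of dD_{k+1}/dt = beta * (D_k - D_{k+1}) on a line."""
    beta = alpha / 2.0          # alpha_k for a line network
    D = [0.0] * (n + 1)         # D[k]: rank of node v_k; v_0 is the source
    for step in range(int(T / dt)):
        D[0] = M * step * dt    # the source injects at rate M
        for k in range(n):
            gap = D[k] - D[k + 1]
            if gap > 0:         # C_k = beta * gap; zero otherwise
                D[k + 1] += beta * gap * dt
    return [D[k] - D[k + 1] for k in range(n)]

print(simulate_line())  # every gap tends to M / beta = 2 * M / alpha = 2.0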
6 Experimental Results

6.1 Evaluation Metrics

6.1.1 Metric for Energy Efficiency
To evaluate the performance of our broadcasting protocol DRAGONCAST, we measured the energy-efficiency of the method using E_ref−eff, the ratio between E_cost and E_bound, where E_cost is the total number of transmissions needed to broadcast one data packet to the entire network and E_bound is a lower bound on the possible value of E_cost. The efficiency metric E_ref−eff is always larger than one, and may approach one only when the protocol becomes close to optimality (the opposite is false). As
indicated previously, E_cost, the quantity appearing in the numerator of E_ref−eff, is the average number of packet transmissions per broadcast of one source packet. We compute E_cost directly as:
• E_cost = total number of transmitted packets / number of source packets

The denominator of E_ref−eff, E_bound, is a lower bound on the number of transmissions needed to broadcast one unit of data to all N nodes, and we compute it as N/M_avg−max, where M_avg−max is the average of the maximum number of neighbors. The value of E_bound comes from the assumption that a node has at most M_max neighbors, so that one transmission can provide new and useful information to at most M_max nodes. Notice that the maximum number of neighbors (M_max) evolves in a mobile network; hence we compute its average M_avg−max over the whole broadcast session, after measuring M_max at periodic intervals.

6.1.2 Energy Efficiency Reference Point for Routing
In order to obtain a reference point for routing, we use the upper bound on efficiency without coding (E_bound−ref−eff) of Fragouli et al. [12]. Their argument works as follows: consider the broadcasting of one packet to an entire network without coding, and consider one node in the network which retransmits the packet. To further propagate the packet through the network, another connected neighbor must receive the forwarded packet and retransmit it, as seen in Fig. 3. Considering the inefficiency due to the fact that any node inside the shared area receives duplicated packets, a geometric upper bound for routing can be deduced.

Fig. 3. Ebound without coding

6.2 Simulation Results
In this section, we start the analysis of the performance of the various heuristics by considering their efficiency from E_cost. The simulations in Fig. 4 were performed on several graphs with default parameters but M = 10 – relatively sparse networks – and with three rate selections: optimal rate selection, IRMS, and DRAGON. In addition to the default parameters of NS-2 (version 2.31), we use the following simulation parameters: number of nodes = 200; range defined by M, the expected number of neighbors; position of the nodes: random uniform i.i.d.; generation size = 500; field F_p with p = 1078071557; α = 1 (for DRAGON).

Fig. 4. Ebound comparison of different heuristics

6.2.1 Theory and Practice
The first two bars in Fig. 4 represent the gap between theory and practice. The first bar (label: opt(th)) is the optimal E_cost as obtained directly from the linear program solution, without simulation. The second bar (label: opt) is the ratio of the actual measured
E_cost in NS-2 simulations to the theoretical optimal E_cost. The comparison of these two values shows that the impact of the physical and MAC layers, as simulated by NS-2 (with 802.11, two-ray ground propagation, omni-antenna), is limited, at ≈ 20%.

6.2.2 Efficiency of Different Heuristics
The third bar and the last bar in Fig. 4 represent the efficiency of DRAGON and IRMS respectively. As one may see in this scenario, the ratio between the optimal rate selection and DRAGON is around 1.5; even without reaching this absolute optimum, DRAGON still offers significantly superior performance to IRMS. The gain in performance comes from the fact that the rate selection IRMS has a lower maximum broadcast rate (in some parts of the network) than the targeted one, and hence than the actual source rate. As a result, in the parts with lower min-cut, the rate of the nodes is too high compared to the innovation rate, whereas with DRAGON such phenomena should not occur for prolonged durations. This is one reason for its greater performance. The fourth bar (E(no-coding)) represents the bound without coding explained in section 6.1.2. The relative efficiency of DRAGON is better than the bound without coding; this result experimentally confirms that DRAGON outperforms routing on some representative networks.

6.3 Closer Analysis of the Behavior of DRAGON

6.3.1 Impact of α
In DRAGON, one parameter of the adaptation is α, which is connected to the speed at which the rates adapt. Table 1 indicates the total number of transmissions in simulations with the default parameters described in section 6.2, for the reference graph of Fig. 1; DRAGON is simulated with different values of α.

Table 1. Impact of α with the graph in Fig. 1
         α = 1    α = 5    α = 10   IRMS
E_cost   32.166   32.544   35.468   128

As one may see, first, the efficiency of IRMS on this graph is rather low (about 1/4 of DRAGON's): indeed, the topology exemplifies the properties found in the cases of networks where IRMS was found to be less efficient, namely two parts connected by one unique node. For the various choices of α, it appears that the performance of DRAGON decreases when α increases. This evidences the usual tradeoff between speed of adaptation and performance.
Fig. 5. Propagation vs. distance
6.3.2 Comparison with Model
Fig. 5 represents the information propagation speed of DRAGON in the network of Fig. 1, for α = 1, 5, 10. The x-coordinate is the distance of a node to the source, and the y-coordinate is the time at which the node has received exactly half of the generation size. Hence, the y-coordinate reflects the propagation of the
coded packets (new information) from the source. First, we see that there is a large step near the middle of the graph: this is the effect of the center node, which is the bottleneck and obviously induces further delay. Second, the linear segments show that a node farther from the source needs more time to receive the same amount of information, and that the required time increases linearly with distance. This result confirms the intuition given by the model in section 5.2 about a linear decrease of the rank (the amount of new information) of the nodes from the source to the edge of the network. Finally, we can measure the difference in the time needed to get the information between the node nearest to the source and the furthest one. It is around 35 for α = 1, 6 for α = 5 and 4 for α = 10: roughly, it is inversely proportional to α, as expected from our model in section 5.2. In addition, we find that with higher values of α (greater reaction to gaps), the impact of the bottleneck at the center node is dramatically reduced.
7 Conclusion

We have introduced a wireless broadcast protocol based on a simple heuristic for performing network coding: DRAGON. The heuristic is based on the idea of selecting the rate of each node, and this selection is dynamic. It operates as a feedback control whose target is to equalize the amount of information in neighboring nodes, and hence, indirectly, in the network. The efficiency properties of DRAGON are inherited from static algorithms constructed with a similar logic. Experimental results have shown the excellent performance of the heuristic. Further work includes the addition of congestion control methods.
References
[1] R Ahlswede, N Cai, S-Y R Li, R W Yeung (2000) "Network Information Flow", IEEE Trans. on Information Theory, 46(4)
[2] T Ho, R Koetter, M Médard, D Karger, M Effros (2003) "The Benefits of Coding over Routing in a Randomized Setting", International Symposium on Information Theory (ISIT 2003)
[3] D S Lun, M Médard, R Koetter, M Effros (2007) "On coding for reliable communication over packet networks", Technical Report #2741, MIT LIDS
[4] Z Li, B Li, D Jiang, L C Lau (2005) "On Achieving Optimal Throughput with Network Coding", Proc. INFOCOM
[5] Y Wu, P A Chou, S-Y Kung (2005) "Minimum-Energy Multicast in Mobile Ad Hoc Networks using Network Coding", IEEE Trans. Commun. 53(11):1906-1918
[6] D S Lun, N Ratnakar, M Médard, R Koetter, D R Karger, T Ho, E Ahmed, F Zhao (2006) "Minimum-Cost Multicast over Coded Packet Networks", IEEE/ACM Trans. Netw. 52(6)
[7] P A Chou, Y Wu, K Jain (2003) "Practical Network Coding", Annual Allerton Conference on Communication, Control, and Computing, Monticello, IL
[8] S Chachulski, M Jennings, S Katti, D Katabi (2007) "Trading Structure for Randomness in Wireless Opportunistic Routing", SIGCOMM'07
[9] S-Y R Li, R W Yeung, N Cai (2003) "Linear network coding", IEEE Transactions on Information Theory
[10] Y Wu, P A Chou, S-Y Kung (2005) "Minimum-Energy Multicast in Mobile Ad Hoc Networks using Network Coding", IEEE Trans. Commun. 53(11):1906-1918
[11] R Koetter, M Médard (2003) "An algebraic approach to network coding", IEEE/ACM Transactions on Networking, 11(5)
[12] C Fragouli, J Widmer, J-Y Le Boudec (2006) "A Network Coding Approach to Energy Efficient Broadcasting", Proceedings of INFOCOM 2006
[13] C Fragouli, J-Y Le Boudec, J Widmer (2006) "Network Coding: an Instant Primer", ACM SIGCOMM Computer Communication Review 36(4)
[14] P A Chou, Y Wu, K Jain (2003) "Practical Network Coding", Annual Allerton Conference on Communication, Control, and Computing, Monticello, IL
[15] A Dana, R Gowaikar, R Palanki, B Hassibi, M Effros (2006) "Capacity of Wireless Erasure Networks", IEEE Trans. on Information Theory, 52(3):789-804
[16] D S Lun, M Médard, R Koetter, M Effros (2005) "Further Results on Coding for Reliable Communication over Packet Networks", International Symposium on Information Theory (ISIT 2005)
[17] B Clark, C Colbourn, D Johnson (1990) "Unit disk graphs", Discrete Mathematics, 86(1-3)
[18] C Fragouli, E Soljanin (2004) "A connection between network coding and convolutional codes", IEEE International Conference on Communications (ICC), 2:661-666
[19] S Deb, M Effros, T Ho, D Karger, R Koetter, D Lun, M Médard, N Ratnakar (2005) "Network Coding for Wireless Applications: A Brief Tutorial", IWWAN 2005
[20] J S Park, M Gerla, D S Lun, Y Yi, M Médard (2006) "CodeCast: a network-coding-based ad hoc multicast protocol", IEEE Personal Communications 13(5):76-81
[21] J S Park, D S Lun, F Soldo, M Gerla, M Médard (2006) "Performance of network coding in ad hoc networks", Proc. IEEE MILCOM
[22] S Y Cho, C Adjih (2007) "Heuristics for Network Coding in Wireless Networks", Proc. Wireless Internet Conference (WICON)
[23] C Adjih, S Y Cho, P Jacquet (2007) "Near Optimal Broadcast with Network Coding in Large Sensor Networks", Proc. Workshop on Information Theory for Sensor Networks (WITS)
A New Framework for Characterizing and Categorizing Usability Problems

Dong-Han Ham
School of Computing Science, Middlesex University, London, UK
[email protected]

Abstract. It is widely known that usability is a critical quality attribute of IT-based systems. Many studies have developed methods for identifying usability problems, and metrics for measuring the several dimensions underlying the concept of usability. Usability professionals have emphasized that usability should be integrated into the development life cycle in order to maximize the usability of systems at minimal cost. To achieve this, it is essential to classify usability problems systematically and to connect the usability problems to the activities of designing user interfaces and tasks. However, there is a lack of frameworks or methods for these two tasks, which thus remain a challenging research issue. As a beginning study, this paper proposes a conceptual framework for addressing the two issues. We first summarize usability-related studies to date, including usability factors and evaluation methods. Second, we review seven approaches to identifying and classifying usability problems. Based on this review and on the opinions of usability engineers in industry, this paper proposes a framework comprising three viewpoints, which can help usability engineers characterize and categorize usability problems.
1 Introduction

As one of the critical quality attributes of IT systems, usability has been much studied during recent decades. It is well known that systems showing a high degree of usability can ensure tangible and measurable business benefits [14]. Usability can be defined as 'the capability of IT systems to be understood, learned, used and be attractive to the user, when used under specified conditions' [17]. Important usability research topics include: usability factors, usability evaluation methods and metrics, user interface design principles and guidelines, usability problem classification, and user-centred design methodology [18]. Many studies have examined the several factors characterizing the concept of usability, which makes it difficult to give an absolute definition of usability. For example, ISO/IEC 9241 [8] specifies three dimensions: effectiveness, efficiency, and satisfaction. Nielsen [13] gives another example of such factors: learnability, efficiency of use, memorability, errors, and satisfaction. Usability factors can be categorized into two groups, objective and subjective. The objective factors are concerned with assessing how users perform their tasks, whereas the subjective factors attempt to evaluate how users actually perceive the usability of the system [2]. There are many usability evaluation methods or techniques, and several ways of classifying them, but they are usually divided into three groups: usability testing, usability inquiry, and usability inspection [20]. In usability testing, users conduct a set of tasks using a system or a prototype, and the evaluator assesses how the users perform those tasks. Co-discovery learning, question-asking protocol, and the
shadowing method are typical examples of usability testing. Usability inquiry observes how users use a system in real work, and asks them questions in order to understand users' feelings about the system and their information needs. Field observation, focus groups, and questionnaire surveys are categorized as inquiry methods. Usability inspection methods examine usability-related aspects in an analytic manner; typical methods are cognitive walkthrough, pluralistic walkthrough, and heuristic evaluation. As there is no absolute best method for all situations, it is necessary to choose an appropriate method, taking into account the evaluation purposes, the available time, the measures to be collected, and so on. Although several design features influence the degree of usability, the user interface is arguably the most important design factor affecting it. For this reason, many user interface design principles and guidelines have been developed to support interface designers' development activities. Design principles are high-level design goals that hold true irrespective of task or context, whereas design guidelines are more specific rules that serve as means of implementing design principles, depending on task and context [18]. Consistency is one example of a design principle, and one corresponding guideline is 'always place the home button at the top left-hand corner'. Usability engineering is an organized engineering process and a set of methods that specify and measure usability quantitatively throughout the development lifecycle [14]. It emphasizes that usability characteristics should be clearly specified from the very early stages, and that usability should lie at the centre of the other development activities. There is no doubt that all usability engineering activities are important and should be integrated under a single, unified framework; in other words, usability should be integrated into the whole systems development lifecycle. A key activity needed to make this possible, however, is to diagnose the causes of usability problems [4]. Influenced by the software engineering community, which developed software defect classification schemes, the usability engineering community has proposed several usability problem classification schemes, which are reviewed in the next section.
2 Usability Problem Classification Studies

COST Action 294-MAUSE (towards the MAturation of information technology USability Evaluation) is a European consortium organized to study usability evaluation methods and usability problems in a more scientific manner (www.cost294.org) [11]. MAUSE provides a good overview of the problems of usability evaluation studies and of usability problem classification schemes. This section draws mainly on the studies carried out by MAUSE.

2.1 Problems of Usability Evaluation Studies
MAUSE identified several significant problems related to usability evaluation studies, listed below. From this list, we can understand how important it is to classify usability problems systematically and to connect them to the design process in order to improve the quality of IT systems.
– A lack of a sound theoretical framework to explain the phenomena observed;
– A lack of a set of empirically based and widely accepted criteria for defining usability problems;
– A lack of a standard approach to estimating values of key usability test parameters;
– A lack of effective strategies to manage systematically the user/evaluator effect;
– A lack of a thoroughly validated defect classification system for analyzing usability problems;
– A lack of widely applicable guidelines for selecting tasks for a scenario-based usability evaluation;
– A lack of a sophisticated statistical model to represent the relationships between usability and other quality attributes like reliability;
– A lack of a clear understanding of the role of culture in usability evaluation.

2.2 Usability Problem Classification Schemes
We present seven different schemes for classifying and organizing usability problems. Some of them are originally defect classification schemes developed in the software engineering community, but they appear to be used flexibly for usability problem classification and are thus included here.
– Orthogonal Defect Classification (ODC) [5]
– Root Cause Analysis (RCA) [12]
– Hewlett Packard Defect Classification Scheme (HP-DCS) [7]
– Usability Problem Taxonomy (UPT) [9]
– User Action Framework (UAF) [1]
– Classification of Usability Problems (CUP) scheme [19]
– Usability problem classification using the Cyclic Interaction model (CI) [16]
ODC was developed by IBM in order to give software developers meaningful feedback on the progress of the current project [5]. It aims to bridge the gap between statistical defect models and causal analysis; thus it strives to find well-defined cause-effect relationships between the software defects found and their effects on development activities. ODC provides a basic capability to extract signatures from defects and to infer the health of the development process. The classification of defects is based on objective findings about a defect, such as its Defect Type or Trigger (explained below), not on subjective opinions regarding where it was injected. It has eight dimensions or factors describing the meaning of defects. These factors are organized according to the two process steps at which defect classification data are collected [6]. The process step Open is carried out when a defect is found and a new defect report is opened in the defect tracking system, whereas the process step Close is performed when the defect has been corrected and the defect report is closed. The step Open has three factors: Activity (when did you detect the defect?), Trigger (how did you detect the defect?), and Impact (what would the user have noticed if the defect had escaped into the field?). The step Close consists of five factors: Target (what high-level entity was fixed?), Source (who developed the target?), Age (what is the history of the target?), Defect Type (what had to be fixed?), and Defect Qualifier (an indication of whether the defect was an omission, a commission, or extraneous). Each factor has its own set of values; for example, the values of Defect Type include assignment, checking, algorithm, function, timing, interface, and relationship. All the factors are necessary to provide the exact semantics of a defect; however, the two factors Defect Type and Trigger play a significant role in ODC.
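The shape of an ODC classification is easy to picture as a structured record split across the two process steps. The sketch below is purely illustrative: the field names, value spellings and example values are ours, and the value lists are abbreviated.

from dataclasses import dataclass
from typing import Optional

@dataclass
class ODCRecord:
    """One defect classified along the eight ODC factors."""
    # step Open: defect found, report opened
    activity: str                      # when was the defect detected?
    trigger: str                       # how was it detected?
    impact: str                        # what would the user have noticed?
    # step Close: defect corrected, report closed
    target: Optional[str] = None       # what high-level entity was fixed?
    source: Optional[str] = None       # who developed the target?
    age: Optional[str] = None          # what is the history of the target?
    defect_type: Optional[str] = None  # e.g. assignment, checking, algorithm
    qualifier: Optional[str] = None    # omission, commission, or extraneous

r = ODCRecord(activity='system test', trigger='workload/stress', impact='usability')
r.defect_type, r.qualifier = 'interface', 'omission'  # filled in at step Close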
RCA is a classification scheme that was used for retrospective analysis of the defect modification requests discovered while building, testing, and deploying a release of a network element forming part of an optical transmission network [12]. In order to capture the semantics of a defect from multiple points of view, RCA has five categories: Phase Detected, Defect Type, Real Defect Location, Defect Trigger, and Barrier Analysis. Phase Detected refers to a phase of the development life cycle, comprising ten phases which begin with system definition and end with deliveries. Defect Type divides defects into three classes, implementation, interface, and external, each of which is in turn composed of several defect types. Real Defect Location specifies where a defect was located, using three values: document, hardware, and software. Defect Trigger captures the actual root causes in RCA; RCA takes the position that there may be several underlying causes rather than just one, and provides four inherently non-orthogonal classes of root causes: phase-related, human-related, project-related, and review-related. Barrier Analysis suggests measures for ensuring earlier defect detection, as well as for preventing defects.

HP-DCS was developed to improve the development process by minimizing the number of software quality defects over time [7]. It has three descriptors characterizing a defect: Origin (the first activity in the lifecycle where the defect could have been prevented, not where it was actually found), Type (the area, within a particular origin, which is responsible for the defect), and Mode (a designator of why the defect occurred). Each descriptor is composed of several factors or factor groups, whose combination classifies defects. The Origin has six factors: specification/requirements, design, code, environment/support, documentation, and other. The Type has six factor groups; one example is a group comprising logic, computation, data handling, module interface, and so on. The Mode explains the reason for defects with five factors, which are concerned with whether information was missing, unclear, wrong, changed, or could have been done in a better way. One important point is that the choice of a factor for the Origin constrains the possible set of factors for the Type.

UPT is a taxonomic model in which usability problems detected in graphical user interfaces with textual components are classified from two perspectives: artefact and task [9]. The UPT was developed on the basis of a systematic review of 400 usability problems collected in real industry projects. It is made up of three hierarchical levels. The UPT has an artefact component and a task component, which are located at the top level. The artefact component focuses on difficulties observed when users interact with individual interface objects, whereas the task component is concerned with difficulties encountered when users conduct a task. The two components are divided into five primary categories: the artefact component comprises three categories (visualness, language, and manipulation), and the task component two (task mapping and task facilitation). Each primary category is in turn composed of multiple subcategories.
For example, visualness consists of five subcategories: object (screen) layout, object appearance, object movement, presentation of information/results, and non-message feedback.

UAF is an interaction-model-based structure for organizing usability concepts, issues, design features, usability problems, and design guidelines [1]. It thus aims to be
an integrated framework for usability inspection, usability problem reporting, usability data management, and the effective use of design guidelines. Another main purpose of the UAF is to support consistent understanding and reporting of the underlying causes of usability problems. Usability problem classification in the UAF uses the interaction design cycle, which adapts and extends Norman's 'stages of action' model. The interaction cycle is all about what users think (cognitive actions), do (physical actions), and see (perceptual actions) during a cycle of interaction with the computer. It is composed of four activities: Translation (determining how to do it with physical actions), Planning (determining what to do), Assessment (determining, via feedback, whether the outcome was favourable), and Physical Action (doing it). This cycle model provides the high-level organization and the entry points to the underlying structure for classifying usability problems. Finding the correct entry point for a usability problem consists in determining the part of the interaction cycle where the user is affected. Examples of relating usability problems to the relevant part of the interaction cycle are: 'unreadable error message' and Assessment, 'user does not understand master document structure' and Planning, 'user cannot directly change a file name in an FTP program' and Translation, and 'user clicks on wrong button' and Physical Action.

CUP was developed, on the basis of a collective review of previous defect classification schemes, for the purpose of classifying usability problems further, so as to give developers better feedback on how to correct the problems [19]. CUP specifies 10 attributes to characterize a usability problem: Identifier (ID), Description, Defect Removal Activity, Trigger, Impact, Expected Phase, Failure Qualifier, Cause, Actual Phase, and Error Prevention. As in the other schemes, most of these attributes have a set of values of their own; for example, the Cause has five values: Personal, Technical, Methodological, Managerial, and Review.

Ryu and Monk [16] developed a classification scheme based on a cyclic interaction model, similar to the interaction cycle in the UAF. It is aimed at examining low-level interaction problems, and they also developed a simple walkthrough method for it. The cyclic interaction model strives to model recognition-based interaction between users and systems by considering three paths in an interaction cycle: the action-effect path, the effect-goal path, and the goal-action path. These three paths give rise to three kinds of usability problems: action-effect problems, effect-goal problems, and goal-action problems. The action-effect problems are deeply related to mode problems, where the same action leads to different system effects. In general, mode problems can be classified into three groups: hidden mode problems, partially hidden mode problems, and misleading mode signals. The effect-goal problems are concerned with the goal reorganization process; ineffective or improper goal reorganization can be explained by four types: missing cues for goal construction, misleading cues for goal construction, missing cues for goal elimination, and misleading cues for goal elimination. The goal-action problems occur when users must perform unpredictable actions to achieve a goal, which can be explained in terms of affordance. Two typical unpredictable actions are: weak affordance of a correct action and strong affordance of an incorrect action.
Classification schemes originating in the software engineering community, such as ODC and HP-DCS, tend to view usability problems within a broad development context and from the side of the developers. By contrast, classification
schemes based on an interaction model or on users' cognitive task models tend to view usability problems from the point of view of users. From the review of existing classification schemes, we found that there is not yet a systematic framework or process to link the usability problems found to the development context and activities.
3 Proposed Framework

The comparative review of existing classification schemes and the opinions of usability engineers in industry indicated a set of requirements that serve as a conceptual basis for the proposed framework. First, the scope of the usability concept should be exactly defined, depending on the purpose of usability problem classification. Traditionally, the concept of usability does not include aspects related to system functions (usefulness), but it is now a trend to incorporate usefulness, as well as satisfaction, into the concept of usability. If the purpose of classification is to improve the design process, the broad concept of usability should be adopted; however, if the purpose is just usability testing of certain interface features and interface-based operation, the narrow concept is preferable. Second, a usability problem should be characterized in terms of the basic 5W1H questions, as follows:
– What are the phenomena resulting from the usability problem?
– Who experienced the usability problem?
– When did users meet the usability problem?
– How did users undergo the usability problem?
– Which part of the user interface (Where) is related to the usability problem?
– Why did the usability problem occur?
Third, usability problems mainly resulting from design deficiencies need to be differentiated from those specific to a particular user (group). As the causes of the latter type of usability problem are likely to be related to the subjective nature of the user (group) rather than to design features, their implications for the design process should be considered from different perspectives. Fourth, related to the second requirement, it should be noted that usability problems can be categorized by several criteria. The correspondence between the criteria and the basic questions is: system function criteria (what), user criteria (who), task criteria (when), interaction step criteria (how), interface object/feature criteria (where), and design principle criteria (partly why). For example, if the usability problems of a word processor are classified in terms of task, some may relate to editing tasks while others affect formatting tasks. If they are categorized by interface object criteria, some can be regarded as menu-related problems and others as information-architecture-related problems. Fifth, if we want usability problems to be used more meaningfully during the design process, design activities and usability problems should be described in the same language or modelling concept. As the proposed framework suggests, the most suitable such concept is the abstraction level of design knowledge.
A New Framework for Characterizing and Categorizing Usability Problems
351
Taking into account the requirements above, we propose a conceptual framework in which existing classification schemes can be compared and usability problems can be interpreted in a new way (Fig. 1).

Fig. 1. Framework for characterizing and classifying usability problems (three perspectives surround the framework: 'Context of Use' with the AUTOS model, 'Design Knowledge' with the AH model, and 'Design Activities' with the FBS model; the framework serves to diagnose usability problems, prevent them, and relate them to activities)
The framework is composed of three perspectives: Context of Use, Design Knowledge, and Design Activities. Within each perspective, it is recommended to use a modelling tool to address usability problems. The Context of Use perspective helps capture the broad semantics of usability problems. For this, the Artefact-Users-Tasks-Organization-Situation (AUTOS) model [3] can be used effectively, as it makes it possible to consider several criteria (task, user, interface object, interaction step) simultaneously. The other two criteria (system function and design principle) are concerned with the Design Knowledge perspective. In order to interpret usability problems in terms of designed system functions and their relevant design principles, the abstraction hierarchy (AH) model [15] is a good modelling tool. The Design Activities perspective, together with the Design Knowledge perspective, allows us to improve the quality of design activities by referring to usability problems. As the Function-Behaviour-Structure (FBS) model is effective for reasoning about and explaining the nature of design activities, it is the recommended modelling tool in this perspective. The FBS model assumes that there are eight abstract engineering design processes (e.g., formulation, analysis, and documentation), which link five elements (function, expected behaviour, exhibited behaviour, structure, and design description). From the FBS model, we can also assume that the cause of a usability problem is highly related to one of the eight processes. Interestingly, the relationships between the five elements of the FBS model can
be reasonably interpreted from the perspective of the abstraction level of design knowledge. This gives an insight into how to bridge the two perspectives (Design Knowledge and Design Activities), which can provide a more scientific method of linking usability problems to the design process.
4 Conclusion

A key element for enhancing the usability of IT systems is to diagnose the causes of the usability problems detected and to give meaningful feedback, associated with those problems, to the design process. In order to help usability engineers assess usability problems more systematically, this paper proposed a framework consisting of three viewpoints (context of use, design knowledge, and design activities). This framework does not by itself offer specific procedures for usability engineers to follow easily, but serves as a thinking tool for dealing with usability problems. Therefore, more specific methods or guidelines for categorizing usability problems are being studied. In particular, the area of linking usability problems to design activities, which are characterized by transformations between abstraction levels of design knowledge, is of primary concern to the author's study.

Acknowledgments. The author wishes to acknowledge the assistance and support of all those who contributed to EKC 2008.
References
[1] Andre T, Hartson H, Belz S, McCreary F (2001) The user action framework: a reliable foundation for usability engineering support tools. International Journal of Human-Computer Studies 54(1):107-136
[2] Bevan N (1999) Quality in use: Meeting user needs in quality. The Journal of Systems and Software 49(1):89-96
[3] Boy G (1998) Cognitive function analysis. Ablex Publishing Corporation
[4] Card D (1998) Learning from our mistakes with defect causal analysis. IEEE Software 15(1):56-63
[5] Chillarege R, Bhandari I, Chaar J, Halliday M, Moebus D, Ray B, Wong M (1992) Orthogonal defect classification: A concept for in-process measurements. IEEE Transactions on Software Engineering 18(11):943-956
[6] Freimut B (2001) Developing and using defect classification schemes (IESE-Report No. 072.01/E). Fraunhofer IESE
[7] Huber J (1999) A Comparison of IBM's Orthogonal Defect Classification to Hewlett Packard's Defect Origins, Types, and Modes. Hewlett Packard Company
[8] ISO/IEC 9241 (1998) Ergonomic requirements for office work with visual display terminals, Part 11: Guidance on usability
[9] Keenan S, Hartson H, Kafura D, Schulman R (1999) The usability problem taxonomy: A framework for classification and analysis. Empirical Software Engineering 4(1):71-104
[10] Kruchten P (2005) Casting software design in the function-behaviour-structure framework. IEEE Software 22(2):52-58
[11] Law E (2004) Proposal for a New COST Action 294: Towards the Maturation of IT Usability Evaluation. COST Office
[12] Leszak M, Perry D, Stoll D (2002) Classification and evaluation of defects in a project retrospective. The Journal of Systems and Software 61(3):173-187
[13] Nielsen J (1993) Usability engineering. AP Professional
[14] Peuple JL, Scane R (2004) User interface design. Crucial
[15] Rasmussen J (1986) Information processing and human-machine interaction: An approach to cognitive engineering. Elsevier Science
[16] Ryu H, Monk A (2004) Analysing interaction problems with cyclic interaction theory: low-level interaction walkthrough. PsychNology Journal 2(3):304-330
[17] Schoeffel R (2003) The concept of product usability: a standard to help manufacturers to help consumers. ISO Bulletin, March:5-7
[18] Te'eni D, Carey J, Zhang P (2007) Human-computer interaction: Developing effective organizational information systems. John Wiley
[19] Vilbergsdottir S, Hvannberg E, Law E (2006) Classification of usability problems (CUP) scheme: Augmentation and exploitation. Proceedings of NordiCHI 2006, Oslo, Norway, 281-290
[20] Zhang Z (2003) Overview of usability evaluation methods. http://www.usabilityhome.com. Accessed 3 July 2008
State of the Art in Designers' Cognitive Activities and Computational Support: With Emphasis on the Information Categorization in the Early Stages of Design

Jieun Kim, Carole Bouchard, Jean-François Omhover, and Améziane Aoussat
New Product Design Laboratory (LCPI), Arts et Metiers ParisTech, Paris, France
jieun.kim@paris.ensam.fr
Abstract. Nowadays there is a growing interest in the analysis of designers' cognitive activities [2][10]. This interest has become a major interdisciplinary topic not only in design science, but also in cognitive psychology, computer science and artificial intelligence. In this context, the analysis of the designer's cognitive activity aims to describe more clearly the mental strategies for solving design problems and to develop computational tools to support designers in the early stages of design. Insofar as much design research depends on findings from empirical studies [11], the results of these studies have neglected the cognitive aspects involved in the construction, categorization and effective application of design information for idea generation in the early stages of design, which remain relatively implicit. In this paper, we therefore provide the state of the art in designers' cognitive activities and computational support, with emphasis on information categorization in the early stages of design, through a review of a wide range of literature. We also present a case study of the 'TRENDS' system, which is positioned at the core of this research tendency. Finally, we discuss the limitations of current research and perspectives for further work.
1 General Introduction

The paradigm of 'design as a discipline' in the 1980s led to a vigorous discussion of the view that design has its own things to know and its own ways of knowing them [1]. While in the past the design research community focused on the former (related to products), nowadays there is growing interest in the analysis of designers' cognitive activities [2-10]. In fact, the view of designers' activities as being primarily cognitive had already been put forth by European researchers in the 1950s and 1960s. At that time it did not widely appeal to researchers in design science, due to the lack of communication between the two disciplines, cognitive psychology and design science, as well as the lack of reference to European work [11, 12]. Since then, research on design cognition has steadily increased, and this interest has become a major interdisciplinary topic not only in design science, but also in cognitive psychology, computer science and artificial intelligence. In this context, the aim of the research is to describe more clearly the mental strategies for solving design problems and to develop computational tools to support designers in the early stages of design. The early stages of design, also called 'conceptualization', are characterized by information processing and idea generation [4, 8]. They are also among the most cognitively intensive stages of the whole design
process [13]. But some phases of the early stages of design still remain incompletely understood. Thus, a full understanding of the designers' cognitive activities underlying the early stages of design is of great interest, in order to formalize the designer's cognitive process and to develop computational support for the early stages of design. In this paper, we provide the state of the art on the study of designers' cognitive activities (Part II) and also take into account worldwide studies on computational support for the designer's activity, through a review of a wide range of literature (Part III). Following this, we present a case study of the 'TRENDS' system (Trends Research Enabler for Design Specifications), to which the authors contributed and which was developed in the 6th Framework of a European Community project (FP6) [14]. In Part IV, we discuss the limitations of current research and perspectives for further work.
2 The Nature of the Designer's Cognitive Activity in the Early Stages of Design

2.1 The Cognitive Aspect of Design as an Information Processing Activity

According to Jones (1970) [15], the designer was long described as a 'black box', because it was thought that designers generated creative solutions without being able to explain or illustrate how the solutions came about. Early studies, however, developed tools and techniques to capture and exploit empirical data on designers' cognitive activities [11]. The dominant research interest was 'what and where' designers retrieve and collect inspirational sources, and 'how' they represent their ideas using physical representations, such as sketches. This matters both for the design research community and for developing computational support in the early stages of design. In this respect, many researchers agree that designers work with various levels of information, reducing abstraction by integrating more and more constraints in the early stages of design [5, 16]. The designer's cognitive activity can therefore be seen as an information processing activity. As Figure 1 shows, this activity can be described as an informational cycle consisting of informative, generative and decision-making (evaluation-selection) phases, whose outcome is an intermediate representation. This cycle is iterated as the design evolves [4].

Fig. 1. Description of an informational cycle [4]

2.2 Information Categorization: Bridging the Informative and Generative Phases

Design information can be divided into external information, such as visual information conveyed by photos and images, and mental representations of design. The former comes from designers collecting inspirational information; the latter is structured by cognitive mechanisms [2, 17, 39]. Both types of information interact in generating ideas and are considered very important in the activity of
expert designers [18]. Insofar as much design research depends on findings from empirical studies [11], work has focused on specific activities such as the collection of information [10, 19-21] and sketching [22-24]. By contrast, the results of these studies show that information categorization phases have been neglected, although they play an important role in bridging the above two activities; they remain relatively implicit. Information categorization is the way in which design information comes to be externalized in the sketching activity, i.e. the cognitive tasks of construction, categorization and application of design information for idea generation in the early stages of design [6, 25]. According to Howard's studies [26, 27] comparing the 'engineering design process' and the 'creative design process', the 'creative process' is defined as 'a cognitive process culminating in the generation of an idea'. This view of creativity brings insight into designers' cognitive activities, particularly with respect to bridging the informative and generative phases in the early stages of design. In the literature on creativity from cognitive psychology, there is the well-known four-stage model of the creative process by Wallas (1926) [28]: preparation, incubation, illumination and verification. The middle phases, how designers incubate information and how they attain creative insight, still remain incompletely understood as regards design in practice [29, 30]. This raises questions similar to those mentioned for information processing activities. In this respect, we believe that integrating cognitive theories of creativity with the results of observations of design practice can bring clearer explanations of designers' cognitive activities in categorizing information, which might bridge the informative and generative phases in the early stages of design.

2.3 Description of a Conventional Method for Information Categorization

As described above, the cognitive mechanisms of information categorisation are still difficult to understand. In observing designers' activities, however, we find that designers try to discover a 'new' or 'previously hidden' association between a certain piece of information and what they want to design [31] through information categorization activities, especially with visual information such as images and photos, which come from specialised magazines, material from exhibitions and the web, and from different areas. Visual information categorization is a natural and meaningful activity and is considered one of the major stages in designers' activity, especially for expert designers [19, 21]. In design practice, even though the purpose of information categorisation may differ depending on the context of application, the visual information categorisation stage provides a unique opportunity to see, in images, how the designer's needs for visual information are shaped by the visual information already accessed [6]. It is also very specific inasmuch as it requires the ability to diverge, generating new categories, and to converge, classifying image resources into existing categories, at once. In addition, visual information categorization is based on the use of attributes ranging from low-level descriptors, such as formal, chromatic and textural attributes, to high-level descriptors (semantic adjectives), for instance 'warm colors' to represent colors from the red series [40].
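To make the two attribute levels concrete, consider how a low-level chromatic attribute can be lifted to a semantic adjective such as 'warm'. The following Python sketch is purely illustrative: the hue thresholds are our own assumptions for the sake of the example, not values drawn from the studies cited above.

import colorsys

# Illustrative only: lift a low-level chromatic attribute (hue) to a
# high-level semantic adjective such as 'warm' or 'cool'. The hue
# thresholds are assumptions made for this example.
def semantic_color_descriptor(r, g, b):
    h, _, _ = colorsys.rgb_to_hsv(r / 255.0, g / 255.0, b / 255.0)
    hue_degrees = h * 360.0
    # Reds, oranges and yellows read as 'warm'; greens and blues as 'cool'.
    if hue_degrees < 90.0 or hue_degrees >= 330.0:
        return "warm"
    return "cool"

print(semantic_color_descriptor(220, 60, 40))  # red swatch  -> 'warm'
print(semantic_color_descriptor(40, 90, 200))  # blue swatch -> 'cool'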
The use of semantic adjectives to link words with images, and vice versa, imposes a much greater cognitive load than the use of low-level attributes [20]. Here, we present the major visual information categorization methods in design
fields: (A) KJ (Kawakita Jiro) clustering (1975), (B) MDS (multidimensional scaling) [32, 33], and (C) trend boards [3-5].

A. KJ (Kawakita Jiro) clustering: used for clustering images relative to each other and for assigning keywords to image clusters. Limitations of this method include an insufficient surface for large numbers of images and difficulty in revealing relationships between groups [34].

B. MDS (multidimensional scaling): its purpose is to find trends between images and to visualize the image distribution under specified conditions [34, 35]. However, it is hard to choose semantic words for the fixed axes [36] and to measure scale values between images [34].

C. Trend boards: to finalize the informative phase, designers build trend boards, determining colours, finishes, textures and forms, and bringing the desired universe closer to the perceived feeling [3-5]. Trend boards stimulate creativity and collaborative communication [10] and are useful for apprehending the physical aspect of the future product [35]. However, the delimitation of parameters is subjective and the method is time consuming [37].
Fig. 2. (From left to right) KJ clustering [34] (A), MDS positioning (B), trend board (C)
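To illustrate method (B), the following Python sketch embeds four images in the plane so that inter-point distances approximate given pairwise dissimilarity judgements; it assumes scikit-learn and NumPy are available, and the dissimilarity values are invented for the example.

import numpy as np
from sklearn.manifold import MDS  # assumes scikit-learn is installed

# Pairwise dissimilarity judgements between four images (invented values):
# images 1-2 and 3-4 are judged similar, the two groups dissimilar.
dissimilarity = np.array([
    [0.0, 0.2, 0.8, 0.9],
    [0.2, 0.0, 0.7, 0.8],
    [0.8, 0.7, 0.0, 0.3],
    [0.9, 0.8, 0.3, 0.0],
])

mds = MDS(n_components=2, dissimilarity="precomputed", random_state=0)
positions = mds.fit_transform(dissimilarity)  # one (x, y) point per image

for name, (x, y) in zip(["img1", "img2", "img3", "img4"], positions):
    print(f"{name}: ({x:+.2f}, {y:+.2f})")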
3 Current Computational Support

3.1 The Upstream Tendency to Digitalize the Design Process

Nowadays, with the penetration of information technology (IT), there is a growing trend towards computational tools and the internet in designers' activities. Designers tend to build their own digital design databases and give them more and more importance within their activities [6, 7, 10]. In this respect, computational support encompassing the design process is very important. However, as shown in Figure 3, the evolution of computational support has run in the reverse direction to the design process.

Fig. 3. Evolution of the computational support

In contrast to the later execution stages, which primarily involve prototyping technology such as CAM (computer-aided manufacturing) and CAD (computer-aided design), computational support to help idea generation and to explore the designer's creativity in the early stages of design (conceptualisation) is relatively undeveloped [4, 37]. Even though commercial image retrieval websites, for instance 'Google', 'Getty Images'
and 'Flickr', allow designers to obtain large numbers of images easily, retrieving image sources from the web is laborious, and the results are often inadequate as inspirational sources for designers. Moreover, given the growing size of databases, structuring the design information is increasingly difficult. What current research requires of a computational tool is therefore support for the designer's cognitive activities in the early stages of design, especially from seeking inspirational information to reusing it through categorization and visualization [5, 10, 34].

Table 1. Computational support for visual information categorization as a generative approach
Reference | Title | Originality
Nakakoji et al. (1999) [29] | EVIDII | Supports collective creativity; shows associations between images, words and persons
Nakakoji et al. (1999) [29] | IAM-eMMa | Uses knowledge-based rules; the system orders images in its library according to physical attributes of images
Stappers & Pasman (2001) [30] | Product World | Expressive possibilities of dynamic and interactive MDS
Büsher (2004) [7] | ACSA | Aesthetic categorisation support for architecture; annotations can be added and modified
Oshima & Harada (2003) [39] | 3DDG Display | Individual system for recollecting past work and displaying it through a three-axis structural model of memory
Keller (2005) [10] | CABINET | Tangible interaction with digital images; database collection for source inspiration
Jung et al. (2007) [34] | I-VIDI | Folksonomy-based collaborative tagging
TRENDS consortium [8, 14] | TRENDS | Based on Conjoint Trends Analysis (CTA); integrates Kansei-Based Image Retrieval (KBIR); auto-generates image mappings
3.2 Current Computational Support for Visual Information Categorization as a Generative Approach

When designers categorize visual information, they naturally identify images not only with low-level attributes, such as formal, chromatic and textural ones, but also with high-level descriptors (semantic adjectives). Similarly, the emerging field called Kansei engineering in Asia has developed many computational systems focused on defining and evaluating the subjective characteristics of products with semantic adjectives. The knowledge and technology coming from Kansei engineering are very useful in enriching the study of designers' cognitive activities. We should make clear, however, that the purpose of the computational support shown in Table 1 is to be dedicated to designers and to support visual information categorization for idea generation. It should also enable designers to explore their creativity. In addition, these computational tools emphasize the importance of graphical interfaces because,
to support the designer's cognitive activities, a graphical interface should match the visual cognitive style [38] and should offer a more intuitive, more 'fun' and more interactive visual exploration of these huge databases [4, 10, 29, 30]. Table 1 shows the state of the art in computational support for visual information categorization as a generative approach. Each computational tool was developed for a different usage context and stage in the evolution of technology. We discuss the limitations of current research and perspectives for further computational support in Part IV.

3.3 Case Study: TRENDS, a Content-Based Information Retrieval System for Designers

TRENDS (Trends Research Enabler for Design Specifications) was developed within the 6th Framework Programme of the European Community (FP6) [14]. The TRENDS content-based information retrieval system aims at improving designers' access to web-based resources by helping them to find appropriate materials, structure these materials in a way that supports their design activities, and identify design trends [8]. The TRENDS system was developed on the basis of Conjoint Trends Analysis (CTA) [3] and digitalizes this manual method (Fig. 4). The CTA method enables the identification of design trends through the investigation of specific sectors of influence and the formalization of trend boards and palettes. These palettes bring together specific attributes linked to particular datasets (e.g., common properties of images in a database) so that they can be used to inspire the early design of new products. CTA results in the production of trend boards that can represent sociological, chromatic, textural, formal, ergonomic and technological trends [3, 8].

Fig. 4. Digitalization of the CTA method [14]

To develop the TRENDS tool interface, we used a specific methodological approach combining a highly user-centred approach with creative collaborative thinking. The TRENDS prototype's interface and functionalities are currently being assessed [14].
4 Discussion and Conclusions

In this paper, research related to designers' cognitive activities and computational support, with emphasis on information categorization in the early stages of design, has been reviewed. This state of the art raises discussion of the following three issues:

1. Limitations of current computational support for designers' cognitive activities. Computational tools for information categorization
allow designers to communicate more easily and to achieve creative inspiration through restructuring and categorizing activities for idea generation in the early stages of design. However, current computational tools still do not overcome the limitations of the conventional methods mentioned in Part II-3. One reason may be the ambiguity of the process, which stems from the fact that information categorization is a largely mental and subjective task [7, 34-36] and is therefore difficult to anticipate. Current methods do not take into account the impact of the designer's mental representations, which can shape the development of the design and relate it to external representations; in short, they suffer from a lack of understanding of the designer's cognitive activities [5, 17, 39]. The other reason is the holistic nature of visual information, which comprises multidimensional data. Designers use high-level descriptors (semantic adjectives) both to characterize images and as a source of creativity. These high-level descriptions of images are semantically related to emotional impact, which raises the problem of the semantic gap between designers' semantic descriptors and the descriptors computed by algorithms [40].

2. Perspectives for further research on designers' cognitive activities. We have examined limitations arising from the nature of designers' cognitive activities and of design information in the early stages of design. The early stages of design are among the most cognitively intensive phases in the whole design process [13] and serve explorative, creative activities. It is therefore very important to understand designers' cognitive activities and to work towards formalizing the cognitive design process through the extraction of design knowledge, rules and skills [5]. Based on theoretical accounts from cognitive psychology, we also need to translate design rules into design algorithms in order to develop computational tools. In this respect, further research should focus on formalizing the information categorisation process so as to bridge the informative and generative phases. At the same time, this could bring useful insights into the still poorly understood creative process, especially the incubation and illumination phases.

3. Computational tools as a further opportune area. The analysis of designers' cognitive activities has been studied in order to develop new computational tools, even though their evolution has run upstream along the design process. Many researchers agree that computational tools are useful in the early stages of design and are becoming more and more important within designers' activities. Meanwhile, even though technological progress lets computational tools resolve most limitations of the conventional methods, arguments remain about whether such support is genuinely useful for exploring idea generation and stimulating the designer's creativity. That is why, in many pioneering research communities, there are currently many questions about creativity and the possibility of supporting it with artificial intelligence (AI), or about AI and design. In addition, further computational tools should take account of the importance of graphical interfaces, which can better match the visual cognitive style [38] and provide more interactive visual exploration [4, 10, 29, 30].
Thus, the analysis of designers' cognitive activities and computational support for them are recognized as a major interdisciplinary topic not only in design science, but also in cognitive psychology, computer science and artificial intelligence. Further research can also benefit from the various insights offered by all these areas and communities.
Acknowledgments. This study refers to the TRENDS project, funded by the European Commission through the 6th Framework Programme for Information Society Technologies (FP6-27916) and running from January 2006 to December 2008 (www.trendsproject.org).
References
[1] Cross N (2007) Forty years of design research. Design Studies 28:1-4
[2] Eckert C, Stacey MK (2000) Sources of inspiration: a language of design. Design Studies 21:99-112
[3] Bouchard C, Aoussat A (2002) Design process perceived as an information process to enhance the introduction of new tools. International Journal of Vehicle Design 31(2):162-175
[4] Bouchard C, Lim D, Aoussat A (2003) Development of a Kansei Engineering system for industrial design: identification of input data for Kansei Engineering Systems. Journal of the Asian Design International Conference 1:12
[5] Bouchard C, Omhover JF, Mougenot C, Aoussat A et al (2008) TRENDS: a content-based information retrieval system for designers. In: Gero JS, Goel A (eds) Design Computing and Cognition DCC'08, pp 593-611
[6] Restrepo J (2004) Information Processing in Design. Delft University Press, The Netherlands
[7] Büsher M, Fruekabeder V, Hodgson E et al (2004) Designs on objects: imaginative practice, aesthetic categorization and the design of multimedia archiving support. Digital Creativity
[8] Stappers PJ, Sanders EBN (2005) Tools for designers, products for users? In: 2005 International Conference on Planning and Design: Creative Interaction and Sustainable Development
[9] McDonagh D, Denton H (2005) Exploring the degree to which individual students share a common perception of specific trend boards: observations relating to teaching, learning and team-based design. Design Studies 26:35-53
[10] Keller AI (2005) For Inspiration Only: designer interaction with informal collections of visual material. Ph.D. Thesis, Delft University of Technology, The Netherlands
[11] Coley F, Houseman O, Roy R (2007) An introduction to capturing and understanding the cognitive behaviour of design engineers. Journal of Engineering Design 311-325
[12] Hubka V, Eder W (1996) Design Science. Springer-Verlag, London
[13] Nakakoji K (2005) Special issue on 'Computational Approaches for Early Stages of Design'. Knowledge-Based Systems 18:381-382
[14] TRENDS website (2008) http://www.trendsproject.org/. Accessed 07 July 2008
[15] Jones JC (1992) Design Methods, 2nd edn. Van Nostrand Reinhold, New York
[16] Bonnardel N, Marmèche E (2005) Towards supporting evocation processes in creative design: a cognitive approach. International Journal of Human-Computer Studies 63:422-435
[17] Eastman CM (2001) New directions in design cognition: studies on representation and recall. In: Eastman CM, McCracken WM, Newstetter WC (eds) Design Knowing and Learning: Cognition in Design Education. Elsevier Science Press, Amsterdam, pp 79-103
[18] Gero JS (2002) Towards a theory of designing as situated acts. The Science of Design International Conference, Lyon
[19] Casakin H, Goldschmidt G (1999) Expertise and the use of visual analogy: implications for design education. Design Studies 20:153-175
[20] Pasman G (2003) Designing with Precedents. Ph.D. Thesis, Delft University of Technology, The Netherlands
[21] Goldschmidt G, Smolkov M (2006) Variances in the impact of visual stimuli on design problem solving performance. Design Studies 27:549-569
[22] Goldschmidt G (1991) The dialectics of sketching. Creativity Research Journal 4:123-143
[23] Goldschmidt G (1994) On visual design thinking: the vis kids of architecture. Design Studies 15:158-174
[24] Do EY-L (2005) Design sketches and sketch design tools. Knowledge-Based Systems 18:383-405
[25] Bilda Z, Gero JS (2007) The impact of working memory limitations on the design process during conceptualization. Design Studies 28:343-367
[26] Howard T, Culley SJ, Dekoninck E (2007) Creativity in the engineering design process. In: International Conference on Engineering Design, ICED'07
[27] Howard TJ, Culley SJ, Dekoninck E (2008) Describing the creative design process by the integration of engineering design and cognitive psychology literature. Design Studies 29:160-180
[28] Wallas G (1926) The Art of Thought. Jonathan Cape, London
[29] Nakakoji K, Yamamoto Y, Ohira M (1999) A framework that supports collective creativity in design using visual images. In: Creativity and Cognition, pp 166-173. ACM Press, New York
[30] Pasman G, Stappers PJ (2001) 'ProductWorld', an interactive environment for classifying and retrieving product samples. In: Proceedings of the 5th Asian Design Conference, Seoul
[31] Sharples M (1994) Cognitive support and the rhythm of design. In: Dartnall T (ed) Artificial Intelligence and Creativity, pp 385-402. Kluwer Academic Publishers, The Netherlands
[32] Kruskal JB, Wish M (1978) Multidimensional Scaling. Sage Publications, Beverly Hills
[33] Borg I, Groenen P (1997) Modern Multidimensional Scaling: Theory and Applications. Springer-Verlag
[34] Jung H, Son MS, Lee K (2007) Folksonomy-based collaborative tagging system for classifying visualized information in design practice. In: CHI, Beijing
[35] Maya Castano J (2007) What user product experiences is it currently possible to integrate into the design process? In: International Conference on Engineering Design, ICED'07
[36] Schütte S (2005) Engineering Emotional Values in Product Design: Kansei Engineering in Development. Ph.D. Thesis, Linköping Studies in Science and Technology
[37] Soublé L, Mougenot C, Bouchard C (2006) Elaboration d'un outil interactif de recherche d'information pour designers industriels. In: Actes de CONFERE 2006
[38] Stappers PJ, Hennessey JM (1999) Computer supported tools for the conceptualization phase. In: Proceedings of the 4th International Design Thinking Research Symposium on Design Representation, pp 177-188
[39] Oshima N, Harada A (2003) Design methodology which recollects memory in creation process. In: 6th Asian Design Conference, Japan
[40] Wang XJ, Ma WY, Li X (2004) Data-driven approach for bridging the cognitive gap in image retrieval. In: 2004 IEEE International Conference on Multimedia and Expo (ICME '04), pp 2231-2234
A Negotiation Composition Model for Agent-Based eMarketplaces

Habin Lee 1 and John Shepherdson 2

1 Brunel Business School, Brunel University West London, Uxbridge, Middlesex, UK, [email protected]
2 Intelligent Systems Research Centre, BT Group CTO, Ipswich, Suffolk, UK
Abstract. Organizations that find partners and form virtual organizations (VOs) with them in open eMarketplaces are involved in many different negotiation processes during the lifecycle of the VOs. As a result, support for negotiation is one of the major requirements for information systems that facilitate the formation of VOs. This paper proposes a component-based approach to the problem, where negotiation processes are implemented as software components that can be dynamically installed and executed by software agents. This approach allows autonomous software agents to participate in various VO formation and execution processes that incorporate negotiation protocols that they were previously unaware of. A component-based approach has advantages in the management of negotiation processes in terms of version control and dynamic switching of negotiation processes.
1 Introduction

Organizations that find partners and form virtual organizations (VOs) [1] with them in open eMarketplaces are involved in many different negotiation processes during the lifecycle of the VOs. As a result, support for the negotiation process is one of the major issues for the information systems (ISs) that support VOs in open eMarketplaces. Furthermore, owing to the complexity involved in handling exceptions [2] within the inter-organizational workflows (IOWs) [3] that underpin VOs, the ISs for an eMarketplace need negotiation strategies that can handle such exceptions effectively. This paper proposes a component-based approach to these issues. Negotiation processes are implemented as software components that can be dynamically installed and executed by software agents. The approach allows autonomous software agents to participate in various VO formation and execution processes that use negotiation protocols that the agents have not previously encountered. A component-based approach also has advantages in the management of negotiation processes, in terms of version control and the dynamic switching of negotiation processes. An NCM (negotiation composition model) is proposed to define sequences of negotiation components and their (data and control) relationships, in order to handle exceptions in the middle of IOW executions. An illustrative example of an IOW is used to show the usefulness of the NCM in the eMarketplace context. The organization of this paper is as follows. The next section details the negotiation composition model, and section 3 applies the model to an illustrative example. Finally, section 4 discusses the novelty of the paper and concludes.
2 NCM: A Negotiation Composition Model

NCM takes a component-based approach to negotiation service composition: a negotiation service is implemented as a software component that can be plugged and played within software agents.
Fig. 1. The structure of a negotiation component (a) and the internal architecture of an agent that enables plug and play of the negotiation component (b)
Figure 1 shows the internal architecture of a negotiation component, called a C-COMnego, and of an agent that uses the component on a plug-and-play basis. The main feature of a C-COMnego is that it enables agents to communicate using interaction protocols that were previously unknown to them. This is achieved by the agents dynamically installing one or more role components, provided by the C-COMnego, which implement the new interaction protocol. A C-COMnego is divided into two or more role components, each of which is of type Initiator or Respondent. An Initiator component can be plugged dynamically into an agent. From the agent's point of view, the Initiator component is a black box, which hides the details of the interaction process (with its designated Respondent component) from the agent that installed it, exposing only an interface that specifies the input and output data needed to execute the role component. A negotiation composition model is a process model in which a negotiation service can be considered as a service and the links between services as process control transitions. In formal terms, an NCM is defined as follows.
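As an illustration of this black-box contract, the sketch below models an Initiator role component in Python. The class and method names are ours, chosen for the example; they are not taken from the authors' implementation.

from abc import ABC, abstractmethod

# The agent sees only the input/output interface of an Initiator role
# component; the interaction protocol with the Respondent stays hidden
# inside the component.
class InitiatorComponent(ABC):
    @abstractmethod
    def execute(self, inputs):
        """Run the hidden interaction protocol and return its outcome."""

class ReverseAuctionInitiator(InitiatorComponent):
    def execute(self, inputs):
        # The message exchange with Respondent components would happen
        # here; for this sketch we simply pick the cheapest offer.
        best = min(inputs["offers"], key=lambda offer: offer["price"])
        return {"winner": best["bidder"], "price": best["price"]}

# An agent can install and run the component without knowing the protocol:
component = ReverseAuctionInitiator()
result = component.execute({"offers": [
    {"bidder": "hotel-A", "price": 120},
    {"bidder": "hotel-B", "price": 95},
]})
print(result)  # {'winner': 'hotel-B', 'price': 95}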
Definition 1 (NCM). A negotiation composition model is a 4-tuple NCM = (O, N, P, L), where:

O = {o1, o2, …, om} is a finite set of ontology items,
N = {n1, n2, …, nn} is a set of negotiation services,
P = {p1, p2, …, pk} is a set of places that are used to contain ontology items and routing rules,
L ⊆ (P × N) ∪ (N × P) ∪ (P × P) is a set of links between negotiation services and places.

A negotiation service is activated when the control of the process reaches it, and is achieved by the execution of a C-COMnego. A place contains a set of ontology items that are produced as a result of the execution of C-COMnego(s) or provided from an external source. Each place is described by its place type, the ontology items that can be contained in it, and routing rules that define how process control is routed to its successors. A routing rule is defined over the ontology items produced in the place. A place is classified as a 'trigger', 'mediator' or 'terminator'. There is exactly one trigger place and one terminator place in an NCM; there is no restriction on the number of mediator places. An NCM can be created dynamically by human users when an unknown type of exception occurs, so that an agent can handle the exception. An NCM is therefore defined so that it can be interpreted by a software agent.
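Definition 1 can be transcribed almost directly into a data structure. The following Python sketch, written under the assumption that ontology items, services and places can be identified by name, also enforces the constraint on L:

from dataclasses import dataclass, field

@dataclass
class NCM:
    ontology_items: set  # O
    services: set        # N
    places: set          # P
    links: set = field(default_factory=set)  # L

    def add_link(self, source, target):
        # Enforce L being a subset of (P x N) u (N x P) u (P x P):
        # every link touches at least one place, never two services.
        for node in (source, target):
            if node not in self.places and node not in self.services:
                raise ValueError(f"unknown node: {node}")
        if source in self.services and target in self.services:
            raise ValueError("service-to-service links are not allowed")
        self.links.add((source, target))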
3 An Illustrative Example

Suppose an eMarketplace exists where travel-related organizations form VOs to provide flexible travel services to customers. Customers can submit their travel schedules (including destinations, arrival and departure times, and maximum and/or minimum budgets) to a travel agency. On receiving an order, the travel agency transforms the schedule into an IOW schema and advertises the business opportunities for the roles in the IOW on the eMarketplace, in order to find hotels, flights and entertainment tickets for each leg of the trip that fulfil the customers' requirements. Organizations interested in the roles can submit their offers to the travel agency. The hotels and airlines selected after negotiations (for example, a reverse auction [4]) with the travel agency form a VO to provide the travel service to the customer. The execution of the IOW is realized as the customers progress through their schedule. For example, once the customers check out of a hotel, the IOW instance progresses to the next service and the relevant information is passed to the organization that provides that service. Now suppose that the customers miss their flight from London to Paris. In this case, they may want to cancel their plan to have a group dinner as well as the hotel reservation for that night. The travel agency then has to start a new negotiation process to rearrange the trip: finding alternative transportation from London to Paris, cancelling that day's schedule in Paris, and modifying the booking of the hotel in Paris. Each activity requires negotiation either with existing organizations (to change pre-existing contracts) or with new organizations (for new services).
Fig. 2. NCM diagram for the travel support IOW
Even though this kind of exception (a missed flight) is predictable, the way it is handled differs from case to case and cannot be modelled in advance. Furthermore, given the time limit for finding alternative transportation to Paris, the travel agency would need to use a faster protocol such as Contract-Net, rather than a reverse auction, which typically takes longer. Figure 2 shows an NCM diagram that specifies the order of execution of the C-COMs needed to handle the exception described above.
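To make this concrete, the exception-handling model of Fig. 2 might be assembled with the NCM sketch from Sect. 2 as follows; the topology and the node names are our reading of the figure, not a specification taken from it:

ncm = NCM(
    ontology_items={"o1", "o2", "o3", "o4"},
    services={"cancel_dinner", "modify_hotel_booking",
              "find_alternative_transportation", "find_flight", "find_train"},
    places={"trigger", "p1", "p2"},
)
# The trigger place starts the re-arrangement; mediator places route
# control to the transportation search services run under Contract-Net.
ncm.add_link("trigger", "cancel_dinner")
ncm.add_link("trigger", "modify_hotel_booking")
ncm.add_link("trigger", "find_alternative_transportation")
ncm.add_link("find_alternative_transportation", "p1")
ncm.add_link("p1", "find_flight")
ncm.add_link("p1", "p2")
ncm.add_link("p2", "find_train")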
4 Discussion and Conclusion

The novelty of this paper is the proposal of a design that enables ad-hoc composition of negotiation services in a VO context; to our knowledge, it is the first such approach in the literature. The core of the design is the implementation of a negotiation service as a dynamically pluggable software component, together with an NCM that can also be provided to an agent dynamically. By referring to the provided NCMs, an agent can handle unexpected exceptions whilst executing IOWs. A future research direction is to design a mechanism that verifies the soundness of dynamically provided NCMs, to guarantee that their execution always results in a sound state (resolving the exception or concluding that it is not resolvable). One possible anomaly is that an agent cannot reach the next state owing to missing ontology items or a 'communication timed out' message.
References
[1] Strader TJ, Lin FR, Shaw MJ (1998) Information infrastructure for electronic virtual organization management. Decision Support Systems 23:75-94
[2] Strong DM, Miller SM (1995) Exceptions and exception handling in computerized information processes. ACM Transactions on Information Systems 13(2):206
[3] Van der Aalst WMP, Kumar A (2003) XML-based schema definition for support of interorganizational workflow. Information Systems Research 14(1):23-46
[4] Wagner SM, Schwab AP (2004) Setting the stage for successful electronic reverse auctions. Journal of Purchasing and Supply Management 10(1):11-26
Discovering Technology Intelligence from Document Data in an Organisation

Sungjoo Lee, Letizia Mortara, Clive Kerr, Robert Phaal, and David Probert

University of Cambridge, Department of Engineering, Cambridge, UK
[email protected]
Abstract. Technology intelligence (TI) systems provide a mechanism for identifying business opportunities and threats. Until now, despite the increasing interest in TI, little attention has been paid to the analysis techniques and procedures for supporting it, especially tools that focus on the data stored in unstructured documents. With the dramatic growth in the number of documents captured and stored by organisations, developing an application framework for building such mining tools, so as to provide purposeful intelligence, would be extremely timely. Detailed guidelines on how to analyse such data, and how to develop tools that meet the specific needs of decision makers, are urgently needed. This research therefore aims to understand how companies can extract TI systematically from large collections of documents. Guidance will be developed to support intelligence operatives in finding suitable techniques and software for getting value from document mining.
1 Introduction

With intensifying global competition and the need for continuous innovation, industrial interest in developing effective capabilities for technology intelligence (TI) is increasing [1]. TI is the collection and delivery of information about new technologies to support the decision-making process within a company; central to the search for TI is thus the identification, gathering, organisation and analysis of technology-related information [2]. In particular, as companies capture increasingly more data about their customers, suppliers, competitors and business environment, it has become critical for them to structure large and complex data sets to meet their information needs [3]. Nevertheless, these data tend to accumulate in companies' database systems without being used effectively, because a large part of them take the form of documents, which are multi-attribute and unstructured in nature and thus hard to analyse. As a result, voluminous documents may pile up in databases even though they have great potential to provide technology intelligence. To address the need to extract value from documents, intelligence applications have been suggested. Knowledge extraction and data visualisation tools constitute one form of technique that presents information to users in a manner that supports technology decision-making processes. However, companies still have difficulties in selecting the appropriate data and techniques to support a specific decision.
This study takes a broader perspective of TI in organisations, encompassing all data about markets, products and technologies, and attempts to develop an overall framework for extracting TI from the large volume of documents produced in an organisation. The specific purposes are to:

• identify internal and external sources of TI in the form of documents
• review techniques for data extraction, analysis and visualisation for documents
• develop a relational table between documents and techniques that tells what intelligence can be provided by each combination of technique and document
• give guidance to companies on how to use their documents more effectively

The remainder of this paper is organised as follows. First, the theoretical and methodological background of this research is briefly explained in section II. The development process of the document-mining framework is then described in section III, and the way to apply the framework follows in section IV. Finally, limitations and further research directions are discussed in section V.
2 Background

2.1 TI and Data Mining

TI provides a mechanism for capturing business and technology opportunities and preparing for threats through the effective delivery of relevant information to a company [4], which can be a critical factor in its decision-making process [5]. In order to sustain growth, a company should respond to both market-pull and technology-push forces, by deploying sustaining technologies in its product range and by adopting radical innovation strategies to counter threats from disruptive technologies [6]. In this process, TI activities can support a firm's strategic decision-making, and a data-mining (DM) approach can improve these activities greatly. DM enables a company to turn large volumes of data into information and eventually into knowledge; it thus helps TI activities by connecting complementary pieces of information across different domains [7] or by providing a broad picture of technical themes in terms of their connections, related institutions or related people [8], which opens up possibilities for technology fusion, early warnings of technological change, and knowledge about networks. Despite the increasing interest in TI, however, little attention has been paid to the analysis techniques and procedures for discovering it. Since many companies face the challenge of interpreting large volumes of document data, which can be an effective source of TI if well analysed, detailed guidelines on how to analyse those data to meet specific TI needs are urgently required. With the dramatic increase in the amount of data being captured by organisations, this is the time to develop an application framework for mining them for practical use.

2.2 DM Activities

DM is defined as a process of exploration and analysis, by automatic or semi-automatic means, of large quantities of data in order to discover meaningful patterns
and rules [9]. With the rapid advance of technology, the volume of technological data and the need for analysis to obtain technology intelligence are both increasing. These phenomena encourage the use of DM techniques in various areas [10] and have led to a number of methods for identifying patterns in data that provide insights and support users' decision-making [11]. In general, the techniques for mining unstructured documents fall into two categories: the first transforms unstructured documents into a structured form, and the second finds implications in the structured documents. For the former, text mining and co-word analysis are frequently used. Text mining is intended to discover useful patterns from textual data [12]. Recently, it has been actively applied to information retrieval from intellectual property data [8, 12, 13, 14]. Co-word analysis uses the frequency of co-occurrence of keywords or classification codes in the literature to derive the relationships between them [15]. An advantage of co-word analysis is that it does not rely on the data's own classification system, allowing analysis results to be interpreted in various ways. The technique has been applied to domain analysis in fields such as chemistry and artificial intelligence, to technology cartography [16], and in data retrieval systems [15]. For the latter, information visualisation techniques are applied to create insightful representations [17]. Most formal models of information visualisation are concerned with presentation graphics [18, 19] and scientific visualisation [20, 21, 22, 23]. Among them, we use multi-dimensional visualisation techniques that represent such data in a two- or three-dimensional visual graphic [24, 25, 26, 27], because the results of text mining or co-word analysis generally take the form of a keyword vector or a co-occurrence matrix with multi-dimensional attributes. Apart from these techniques, data reduction techniques such as PCA (principal component analysis) and SOFM (self-organising feature map), clustering techniques, and network analysis techniques are also useful in visualisation. Until now, a number of DM techniques have been suggested, addressing the techniques themselves, their algorithms, and system development. Another research stream has sought significant implications in DM results by applying the techniques to specific domains. Building on these two streams, one promising but not yet addressed issue is how to apply these techniques to large volumes of data in a company to obtain meaningful TI, taking a holistic approach. The results can facilitate the effective use of data and DM techniques, and thus provide practical help to those in charge of strategic planning as well as to managers in site operations.
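As a minimal illustration of co-word analysis as described above, the following Python sketch counts keyword co-occurrences over a toy document set (the keyword sets are invented); the resulting counts form the co-occurrence matrix that the visualisation techniques then project into two or three dimensions:

from collections import Counter
from itertools import combinations

# Toy document set: each document is reduced to its set of keywords.
documents = [
    {"fuel cell", "membrane", "catalyst"},
    {"fuel cell", "catalyst", "durability"},
    {"membrane", "durability"},
]

cooccurrence = Counter()
for keywords in documents:
    for pair in combinations(sorted(keywords), 2):
        cooccurrence[pair] += 1  # one count per co-occurring keyword pair

for (a, b), count in cooccurrence.most_common():
    print(f"{a} -- {b}: {count}")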
3 Development of the Document-Mining Framework

3.1 Overall Research Process

Document mining is defined as a process of exploration and analysis, by automatic or semi-automatic means, of large quantities of document data in order to discover meaningful patterns and rules. It is not a process of searching for and listing documents of concern according to their relevance. This project aims to understand how companies can extract TI systematically from a large set of document data. The overall research process is described in Fig. 1.
Fig. 1. Overall research process
To develop the document-mining framework, the research involves a large volume of literature review, a series of interviews, and in-depth case studies with organisations that have an established TI system. The aim of this exercise is to capture the implicit challenges associated with establishing a TI analysis framework, sorting and communicating relevant information, and the feedback mechanisms needed to ensure that the framework operates effectively. From the literature review and web search, a draft of the document analysis needs within a firm and of the available techniques for mining documents is identified. Based on these, the document-mining framework is developed and its feasibility is verified with application scenarios. The final goal is to suggest guidelines for document mining customised to each organisation's needs.

3.2 Document Analysis Needs

To identify document analysis needs in an organisation, we first identify the strategic purposes of document analysis and then list possible data sources that can be used for those purposes.

• Step 1: Identify the Strategic Purpose of Document Analysis

Document data can be used for various purposes. Generally, it can be used to extract intelligence on customers, markets, technologies and business processes [28]. At the functional level, it is useful for visualisation, knowledge representation, semantic networks, human resource management, project management, knowledge engineering, information retrieval, personalisation, and lessons-learned systems [29]. Specifically for technology management, it provides information about partners and intellectual property rights [30], and can also be used in technology trend analysis, infringement risk analysis, technologist search, R&D trend analysis, researcher search, literature survey, information gathering, forecasting and planning [3]. In marketing, it helps in understanding sales effectiveness, improving support and warranty analysis, and relating customer relationship management to profitability [31]. Table 1 summarises the strategic purposes of document analysis based on its possible applications.
Table 1. Strategic purposes of document analysis

CEO/business planning | R&D planning | Product planning | Market planning
Business trend | Technology trend | Product trend | Market trend
Partners information | Technology forecasting | New product idea | Market forecasting
Competitors information | New technology idea | Warranty analysis | Customer relations
Regulations | R&D redundancy | Product defaults | Customer needs
Investment status | R&D network | Competitor products | Competitors trend

• Step 2: List Data Sources for Document Analysis

Potential data sources for document analysis have been investigated and applied in previous research and practical projects. For example, Kerr et al. (2006) mentioned
the following as important sources for intelligence: for internal sources, past projects and previous intelligence reports, employees' expertise and past experience, and employees' knowledge derived from networks of contacts; for external sources, industry and cross-industry organisations, printed or online material such as trade magazines, academic journals and newspapers, technology-related events including conferences, trade fairs and seminars, patent records and analysis, commercially prepared intelligence reports, collaboration with universities, other companies or government, and futures studies [1]. In a similar way, Heinrichs and Lim (2003) identified internal and external data sources: for internal sources, e-commerce systems, sales transaction systems, financial and accounting systems, human resource systems, plant operation systems, and market research databases such as customer satisfaction and quality performance information; for external sources, web-based business intelligence organisations, trade associations, industry organisations, and government agencies, covering competitors' sales and market share, regional demographics, and industry trends [32]. Without such a grouping, Cody et al. (2002) listed several data sources, such as business documents, e-mail, news and press articles, technical journals, patents, conference proceedings, business contracts, government reports, regulatory filings, discussion groups, problem report databases, sales and support notes, and the Web [31]. Some researchers have also analysed particular document types, including patent documents, one of the most frequently used data sources [33, 34], research literature [35] and product manuals [36]. Based on this review, potential sources can be summarised as in Table 2 according to two criteria: data source (external/internal) and planning objective (market/product/technology).

Table 2. Potential data sources for document analysis
  | Market | Product | Technology
External | Industry trend reports; News and press articles; Regulatory filings | Product manuals; Discussions on products; Customer complaints | Patent documents; Academic journals; Published TRMs
Internal | Business contracts; Sales and support notes; Marketing reports | New product proposals; Failure reports; Customer surveys | Unpublished TRMs; R&D proposals; Past R&D projects
3.3 Document Analysis Tasks

To identify feasible document analysis tasks, we review the available DM techniques and then design DM tasks for document analysis based on combinations of those techniques.

• Step 1: List Available DM Techniques

Although various techniques have been suggested, the general ones include decision trees (DT), case-based reasoning (CBR), neural networks (NN), market basket analysis (MBA), k-means clustering, principal component analysis (PCA) [9], multi-dimensional scaling (MDS) [37], self-organising maps [38] and hierarchical clustering [39]. Co-word analysis [15, 40], word counting [17], network analysis, semantic analysis, text mining [12] and database tomography [8] are specialised techniques for text analysis. These techniques can be classified according to their functionality as shown in Table 3.

Table 3. Available DM techniques for document analysis
Extraction: Keyword extraction (KE); Word-counting (WC); Summary abstraction (SA); Summary extraction (SE)
Analysis: Decision tree (DT); Neural network (NN); Market basket analysis (MBA); k-Means clustering (KMC); Case-based reasoning (CBR); Co-word analysis (CA); Hierarchical clustering (HC); Basic statistics (BS)
Visualisation: Principal component analysis (PCA); Multi-dimensional scaling (MDS); Self-organizing map (SOM); Network analysis (NA); Hyper-linked (HL); 2-Dimensional mapping (2DM)
• Step 2: Design DM Tasks for Document Analysis

We define a DM task as a process of applying a series of DM techniques to obtain a meaningful output. Many researchers have designed useful DM tasks in the area of document mining, which have been applied for various purposes such as information retrieval [41], summarisation [3, 42], technology trend analysis [33, 34, 43] and automated classification [44, 45]. In detail, Berry and Linoff (1997) described six basic tasks: classification, estimation, prediction, market basket analysis, clustering and description [9], while Kopanakis and Theodoulidis (2003) explained only three, but more advanced, ones: association rules, relevance analysis and classification [46]. In many studies, however, the focus of analysis has been patent documents. Tseng et al. (2007) suggested text segmentation, summary extraction (summary and abstraction), keyword extraction, term association, and cluster generation (document clustering, term clustering, topic identification and information mapping) as methods for patent analysis [41]. Yoon et al. (2008) then extended the scope of analysis from patents to technological documents in general, suggesting more generalised tasks including summarisation, information extraction for ontology generation or trend analysis, clustering, and navigating [3]. Similar efforts have been made in marketing. Information presentation, knowledge evocation and analytic capabilities have been suggested as possible DM tasks [28], which can be concretised as clustering, taxonomy building, classification, information extraction, summarisation,
expertise location, knowledge portals, customer relationship management, and bioinformatics [31]. These tasks can be summarised into 17 document-mining tasks based on the DM techniques (out of those listed in Table 3) that they use at each stage of data extraction, analysis and visualisation. The summarised tasks, with their references, are presented in Table 4.

Table 4. DM tasks and related techniques for document analysis
No. | DM task | Extraction | Analysis | Visualisation
1 | Summarisation [42] | SA/SE | - | -
2 | Retrieval [41] | - | CBR | -
3 | Navigating [3] | - | - | HL
4 | Association-term [40] | KE-WC | CA-NA | -
5 | Association-document [34] | KE | CA-NA | -
6 | Taxonomy [3] | KE-WC | KMC | -
7 | Clustering-multi stage [42] | KE-WC | HC | -
8 | Ontology [3] | KE-WC | HC/KMC | -
9 | Clustering-title [47] | KE-WC | KMC-BS | MDS, SOFM
10 | Topic mapping [42] | KE-WC | HC | MDS
11 | Trend analysis [3] | KE-WC | BS | -
12 | Portfolio analysis [48] | KE | - | MDS
13 | Classification [45] | KE-WC | DT/NN | 2DM
14 | Affinity grouping [9] | - | MBA | -
15 | Prediction [9] | - | DT | -
16 | Estimation [9] | - | NN | -
17 | Description [9] | - | DT, MBA | -
3.4 Document-Mining Framework

Once potential data sources for document analysis have been identified and possible tasks designed, a relational table between documents and tasks can be developed by identifying the intelligence expected from each combination of a specific task and a specific document. The framework enables an organisation to understand what kinds of intelligence can be extracted from documents, and thus what kinds of documents are needed and what kinds of tasks should be applied to obtain the intelligence it requires. The detailed process is described in the next section.
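In implementation terms, the relational table can be thought of as a simple mapping from a (document type, task) pair to the expected intelligence. The Python sketch below illustrates the idea; the cell contents are illustrative, and the cells an organisation actually fills in would depend on its own needs:

# Hypothetical cells of a relational table: (document type, DM task)
# is mapped to the intelligence an analyst can expect from it.
RELATIONAL_TABLE = {
    ("patents", "trend analysis"):
        "emerging and declining technologies (by terms or areas)",
    ("patents", "association"):
        "relations between technologies/patents/competitors",
    ("academic journals", "trend analysis"):
        "emerging and declining research areas",
}

def expected_intelligence(document_type, task):
    return RELATIONAL_TABLE.get(
        (document_type, task),
        "no intelligence identified for this combination")

print(expected_intelligence("patents", "trend analysis"))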
4 Application of the Document-Mining Framework

4.1 Intelligence from Documents

If an organisation is interested in TI based especially on external technological documents, the relational table can be developed for three document types: patents,
academic journals, and published roadmaps (see Table 2), together with the 17 available DM tasks (see Table 4). Table 5 is an example of the TI that can be extracted from the three document types and 9 of the 17 tasks. If, among the intelligence that can be expected from those combinations, there is meaningful TI worth analysing to meet its needs, the organisation starts document mining. For example, if the table is developed for R&D planning and the organisation is especially interested in technology trend analysis, a trend analysis task on the three document types would be meaningful, and for the analysis KE, WC and BS could be conducted in series (see Table 4).

Table 5. Sample relational table to extract TI from three documents and nine tasks
Task | Patents | Academic journals | Roadmaps (published)
Summarisation | Summary of technology; summary of an invention; keywords in an invention | Summary of research; keywords in research | Summary of a roadmap; keywords in a roadmap
Retrieval | Retrieval of patents of concern; extraction of part of a patent | Retrieval of articles of concern; extraction of part of an article | Retrieval of TRMs of concern; extraction of part of a TRM
Association | Relations between technologies/patents/competitors; technology/competitor relations; infringement risk | Relations between technologies/articles/research groups/authors; technology relations; research groups with similar views | Relations between technologies/organisations; technology relations; organisations with similar research concerns
Ontology | Technology ontology (hierarchy): available technologies | Technology ontology (hierarchy): research areas | Technology ontology (hierarchy): to-be-developed technologies
Clustering | Technology/patent/competitor groups based on similarity of contents | Technology/article/research group/author groups based on similarity of contents | Technology/publisher groups based on similarity of contents
Trend analysis | Emerging and declining technologies (described by terms or areas) | Emerging and declining research areas (described by terms or areas) | Development plans and realisation points of technologies
Portfolio analysis | Importance of available technologies | Importance of research areas | Importance of to-be-developed technologies
Classification | Patent assignment to a specific technology area | Article assignment to a specific journal; technology assignment to a specific research group | Technology assignment to specific organisations
Affinity grouping | Principal and subordinate relations between technologies/patents/competitors | Principal and subordinate relations between technologies/journals/research groups/authors | Principal and subordinate relations between technologies/publishers
4.2 Document Analysis Examples Once the documents and tasks for mining are decided, the analysis is followed using the techniques listed in Table 4. The following three figures are the examples of
Discovering Technology Intelligence from Document Data in an Organisation
379
Fig. 2. Document-mining application to product data [49]
Fig. 3. Document-mining application to technology data [48]
document-mining analysis. Fig. 2 describes the results of applying document mining to product data. The left figure shows the customer needs map developed by adopting customer complaints documents to apply association, while the right figure describes the product specification to give the best satisfaction to customers by using customer survey documents to classification. On the other hand, Fig. 3 shows the results of applying document mining to technology data. Application of portfolio analysis and association on patent documents leads to emerging and declining technology areas and technology relations respectively as shown Fig. 3.
5 Conclusion

TI provides an effective delivery of relevant information. Extracting explicit intelligence from an internal repository and monitoring the development of new
technologies identified as relevant for the future would be a very powerful competency, but organizations face the challenge of interpreting a large volume of data, especially in the form of documents that are hard to analyse. To address this challenge, this research aims to understand how organisations can extract TI systematically from a large set of document data and, finally, to develop a document-mining framework. To this end, organisational needs for document data were collected and sources of TI in the form of documents were identified. At the same time, techniques for data search, extraction, analysis, and visualisation, particularly for documents, were reviewed, and the DM tasks were designed based on the techniques reviewed, especially for document data. The techniques were then applied to data sources to develop the document-mining framework, which takes the form of a relational table between documents and tasks that tells what intelligence could be provided when a specific task is applied to a specific document. Finally, the tasks were customised according to organisational needs, and scenarios of application were developed to verify the framework. The research result is expected to give guidance to intelligence operatives and thus to support the effective use of document data, helping organisational managers establish and operate document mining within their own organisations. With all those meaningful implications, this study is subject to several limitations, and so further research is needed. Firstly, a qualitative approach such as an interview or case study is required to develop a more practical framework. Organisational needs for document analysis and potential data sources should be identified not only by literature review but also from the perspective of companies. Secondly, there are a number of commercial software packages for document analysis, whose functionality needs to be reviewed. Finally, this research is mostly based on review, and so a real case study should follow to verify the feasibility of the research.
References
[1] Kerr C, Mortara L, Phaal R et al (2006) A conceptual model for technology intelligence. International Journal of Technology Intelligence and Planning 1(2):73-93
[2] Mothe J, Chrisment C, Dkaki T et al (2006) Combining mining and visualization tools to discover the geographic structure of a domain. Computer, Environment and Urban Systems 30:460-484
[3] Yoon B, Phaal R, Probert D (2008) Structuring technological information for technology roadmapping: data mining approach. In: WSEA Conference, Cambridge
[4] Lichtenthaler E (2003) Third generation management of technology intelligence processes. R&D Management 33(4):361-375
[5] Dou H, Dou J-M (1999) Innovation management technology: experimental approach for small firms in a deprived environment. International Journal of Information Management 19:401-412
[6] Husain Z, Sushil Z (1997) Strategic management of technology – a glimpse of literature. International Journal of Technology Management 14(5):539-578
[7] Smalheiser N (2001) Predicting emerging technologies with the aid of text-based data mining: the micro approach. Technovation 21:689-693
[8] Kostoff R, Toothman D, Eberhart H et al (2001) Text mining using database tomography and bibliometrics: a review. Technological Forecasting and Social Change 68(3):223-253
[9] Berry M, Linoff G (2000) Mastering data mining. John Wiley & Sons, NY
[10] Porter A, Watts R (1997) Innovation forecasting. Technological Forecasting and Social Change 56:25-47
[11] Shaw M, Subramaniam C, Tan G et al (2001) Knowledge management and data mining for marketing. Decision Support Systems 31(1):127-137
[12] Losiewicz P, Oard D, Kostoff R (2000) Textual data mining to support science and technology management. Journal of Intellectual Information System 15:99-119
[13] Feldman R, Dagan I, Hirsh H (1998) Mining text using keyword distributions. Journal of Intelligent Information Systems 10(3):281-300
[14] Zhu D, Porter A (2002) Automated extraction and visualization of information for technological intelligence and forecasting. Technological Forecasting and Social Change 69:495-506
[15] Ding Y, Chowdhury G, Foo S (2001) Bibliometric cartography of information retrieval research by using co-word analysis. Information Processing and Management 37(6):817-842
[16] Engelsman E, van Raan A (1994) A patent-based cartography of technology. Research Policy 23(1):1-26
[17] Keim D (2002) Information visualization and visual data mining. IEEE Transactions on Visualization and Computer Graphics 7(1):100-107
[18] Bertin J (1983) Semiology of graphics: diagrams, networks, maps. University of Wisconsin Press, Madison
[19] Mackinlay J (1986) Automating the design of graphical presentations of relational information. ACM Transactions on Graphics 5(2):110-141
[20] Beshers C, Feiner S (1993) AutoVisual: rule-based design of interactive multivariate visualizations. IEEE Computer Graphics and Applications 13(4):41-49
[21] Beshers C, Feiner S (1994) Automated design of data visualizations. In: Rosemblum L et al (ed) Scientific visualization – advances and applications. Academic Press
[22] Hibbard W, Dryer C, Paul B (1994) A lattice model of data display. Proc. IEEE Visualization '94:310-317
[23] Roth S, Mattis J (1990) Data characterization for intelligent graphics presentations. Proc. Human Factors in Computing Systems Conf. (CHI '90):193-200
[24] Card S, Mackinlay J, Schneiderman B (1999) Readings in information visualization. Morgan Kaufmann
[25] de Oliveira M, Levkowitz H (2003) From visual data exploration to visual data mining: a survey. IEEE Transactions on Visualization and Computer Graphics 9(3):378-394
[26] Spence B (2000) Information visualization. Pearson Education Higher Education Publishers, UK
[27] Ware C (2000) Information visualization: perception for design. Morgan Kaufmann
[28] Lim J, Heinrichs J, Hudspeth L (1999) Strategic marketing analysis: business intelligence tools for knowledge based actions. Pearson Custom Publishing, Needham Heights
[29] Liao S (2003) Knowledge management technologies and applications – literature review from 1995 to 2002. Expert Systems with Applications 25:155-164
[30] Jermol M, Lavrač N, Urbančič T (2003) Managing business intelligence in a virtual enterprise: a case study and knowledge management lessons learned. Journal of Intelligent & Fuzzy Systems 14:121-136
[31] Cody W, Kreulen J, Krishna V et al (2002) The integration of business intelligence and knowledge management. IBM Systems Journal 41(4):697-713
High-Voltage IC Technology: Implemented in a Standard Submicron CMOS Process
J.M. Park
Process Development, austriamicrosystems AG, A-8141 Schloss Premstaetten, Austria
[email protected]
Abstract. This paper describes a high-voltage IC technology. Various novel lateral high-voltage device concepts, which can be efficiently implemented in a submicron CMOS process, are explained and analyzed. It is essential for lateral high-voltage devices to show the best trade-off between specific on-resistance Rsp and breakdown voltage BV, and super-junction devices offer the opportunity to achieve the best Rsp-BV trade-off for BV over 100V. Key issues for the monolithic integration of high-voltage devices and low-voltage CMOS are reviewed in the paper. Finally, the hot-carrier (HC) behaviour of a high-voltage 0.35μm lateral DMOS transistor (LDMOSFET) is presented. It is shown that self-heating effects during HC stress have to be taken into account in HC stress analysis. Together with TCAD simulations and measurements, one can clearly explain the self-heating effects on the HC behaviour of an LDMOSFET.
1 Introduction

High-voltage (HV) ICs implemented in a standard low-voltage (LV) CMOS platform have attracted much attention in a wide variety of applications [1]. Monolithic integration of HV and LV devices has offered efficient protection components, simple drive characteristics, and good control dynamics together with a direct interface to the signal-processing circuitry on the same chip. As a result, the information and telecommunication fields have greatly benefited from advances in high-voltage IC (HVIC) technology. The integration of HVICs into standard logic CMOS by exploiting RESURF principles often requires dedicated implantations and additional processing. The main issues in the development of HVICs are to obtain the best trade-off between Rsp and BV, and to shrink the feature size without degrading device characteristics. New concepts such as the super-junction (SJ) are studied and extended to improve the device characteristics of lateral HV devices. Section 2 provides a review of recent developments in lateral HV device technologies. Process development and qualification of an HV CMOS process increase cost and complexity; by introducing innovative technologies, it is essential to minimize the fabrication cost while keeping the best device performance [2]. The state of the art of HVIC and LV CMOS process integration concepts is described in section 3. Device simulation with TCAD (technology computer-aided design) tools has proven to play an important role for design engineers and researchers in analyzing, characterizing, and developing new devices [3]. Two- and three-dimensional device simulations are performed to study new device concepts. Long-term and short-term reliability issues such as HC behaviour [4, 5], NBTI (negative bias temperature instability), and ESD (electrostatic discharge) are also key issues for the practical
application of HVICs. High electric fields in an HVIC generate hot carriers, which cause device degradation through interface trap generation and charge trapping in the oxide. A new analysis of self-heating effects on hot-carrier-induced degradation is described in this paper.
2 Standard CMOS Compatible Lateral HV Devices

Commonly used CMOS-compatible HV devices are LDMOSFETs and lateral insulated gate bipolar transistors (LIGBTs) implemented in bulk silicon or SOI (silicon on insulator) [6, 7]. Over the past decade a variety of new HV devices has been suggested and commercialized. New structures such as SJ devices [8, 9], lateral trench gate devices [10], and folded gate LDMOS transistors (FG-LDMOSTs) [11] have been proposed to improve the performance of conventional HV devices. SJ devices assume complete charge balance in the drift region, which results in a significant reduction of Rsp. FG-LDMOSTs have been suggested to increase the channel area without consuming more chip area. A lateral trench gate LDMOSFET uses narrow trenches as channels. Contrary to conventional vertical trench MOSFETs, where the current flows in the vertical direction, the lateral trench gate is formed laterally on the side wall of a trench and the channel current flows in the lateral direction through the trench side walls. This gives an increased channel area compared to that of conventional LDMOSFETs.

2.1 LDMOS and LIGBT

Fig. 1 (a) shows the cross section of the LDMOSFET. Normally, LDMOSFETs can be made in an optimized n-type epitaxial layer, and deep p-type diffused regions are used to isolate them. The standard n-type and p-type source and drain regions are used for contacting source/drain and body, respectively. The major limitation of LDMOSFETs is their relatively high Rsp due to the majority-carrier conduction mechanism. The IGBT is a relatively new HV device designed to overcome the high on-state loss of power MOSFETs. The device is essentially a combination of a pnp bipolar transistor, which provides high current-handling capability, and an n-channel MOSFET, which gives a high-impedance voltage control over the bipolar base current. It can be fabricated both as a high-power discrete vertical IGBT and as a low-power lateral IGBT, the latter of which presents interesting possibilities for integration together with control circuitry.

Fig. 1. Cross section of the LDMOS (a) and LIGBT (b)

Fig. 1 (b) shows a cross section of the LIGBT. The structure of the LIGBT is similar to that of an LDMOSFET; the gate
is also formed by double diffusion. The main difference between the LIGBT and the LDMOSFET is that it has a p+-anode instead of the n+-drain of the LDMOSFET. The holes from the anode are injected into the n-drift region and electrons flow into the drift region from the source through the channel. If the hole concentration exceeds the background doping level of the n-drift region, the device characteristics are similar to those of a forward-biased pin-diode. As a result, it can be operated at a higher current density than conventional LDMOSFETs.

2.2 SJ LDMOSFET

Ron and BV are inversely related to each other. Reducing Ron while maintaining the BV rating has been the main issue of HV devices. Although much effort has been put into the reduction of Rsp while maintaining the desired BV, it has been understood that there is a limit [12]. Recently, the SJ concept was suggested and studied; it achieves a significant improvement in the trade-off between on-resistance and BV compared to conventional devices. Assuming complete charge balance between the n- and p-column of the drift region of the SJ structure, the drift doping can be increased drastically. Fig. 2 shows the Rsp versus BV of four different device structures and their theoretical limits. Note that these theoretical limits are only one part of estimating device performance, which is mainly related to the cost of the devices. From this figure, SJ devices have the best trade-off between Rsp and BV (generally, for voltage ratings over 100V), and a lower Rsp is obtained with a smaller column width. The doping concentration rises with decreasing column width; however, small columns become more and more difficult to produce.

Fig. 2. Rsp versus BV of different device structures and their theoretical limits [12]

Fig. 3 (a) shows the potential distribution of the SJ pn-diode, which is the basic structure for all SJ devices, at a reverse voltage of 300V. Potential lines are uniformly distributed throughout the drift region. Fig. 3 (b) shows the electric field distribution of the SJ pn-diode. It shows a rather high electric field along the pn-junction (and the p+n- and n+p-junctions) compared to that
Fig. 3. SJ pn diode: (a) potential distribution at 300V; (b) electric field distribution at 300V
Fig. 4. SJ SOI-LDMOSFET [3]: (a) SJ SOI-LDMOSFET; (b) current flow iso-lines
at each side of the device, but the electric field distribution is nearly square shaped throughout the drift region. The standard SJ SOI-LDMOSFET can be made by introducing extra p-columns in the drift region (Fig. 4). It is assumed that the charge in the n- and p-columns of the drift layer is exactly balanced. To further increase the on-state conduction area in the drift region, an unbalanced SJ SOI-LDMOSFET [3], which has a larger n-column width than p-column width, was suggested. Generally, the unbalanced structure shows a lower on-resistance than conventional SJ devices, by increasing the current path area and by suppressing mobility degradation, although the doping concentration in the active region is lower than that of conventional SJ devices. The BV of SJ devices strongly depends on the charge balance condition. In practical manufacturing it is difficult to achieve perfect charge balance. Generally, it is assumed that the doping can be controlled within ±10% of the nominal charge. Therefore, it is important to reduce the sensitivity of the BV to charge imbalance.
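The charge-balance condition can be checked with a few lines of arithmetic: the donor charge per unit area of the n-column should match the acceptor charge of the p-column. In the sketch below the doping levels and column widths are hypothetical values, not data from a real process.

# Relative charge imbalance between the n- and p-columns of an SJ drift
# region (0 = perfectly balanced, Nd*Wn = Na*Wp).
def charge_imbalance(nd_cm3, wn_um, na_cm3, wp_um):
    qn = nd_cm3 * wn_um          # donor charge per unit area (arbitrary units)
    qp = na_cm3 * wp_um          # acceptor charge per unit area
    return (qn - qp) / qp

# Balanced design: equal widths and doping levels.
print(charge_imbalance(2e15, 3.0, 2e15, 3.0))    # 0.0

# n-column doping 10% high, the control window usually assumed in
# manufacturing; BV degrades as the imbalance grows.
print(charge_imbalance(2.2e15, 3.0, 2e15, 3.0))  # ~0.1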
3 High-Voltage and Low-Voltage Process Integration

3.1 Process Integration

HVICs have always used design rules and technologies which are less efficient than those used for ULSI and VLSI devices. When ULSI devices used submicron IC design rules, HVIC devices were fabricated with 1.5 or 2µm design rules. This difference was essentially linked (i) to the more complex fabrication that must be taken into account: isolation and the combination of different kinds of devices (CMOS, DMOS, bipolar), and (ii) to the important development of VLSI and ULSI devices driven by a large market. Recently, design rules for HVICs went down from 0.8 to 0.13µm. In addition, it is not trivial to monolithically implement HV devices and LV CMOS in a single chip. As shown in Fig. 5, state-of-the-art LV CMOS has a negligible thermal budget to reduce the channel length, shallow junction depth, small chip area consumption, extremely thin gate oxide (e.g. 3.5nm for 1.8V gate voltage), low-voltage operation, and shallow trench isolation (STI), and channel engineering is becoming important to keep the LV CMOS performance high. On the other hand, an HV device has a large thermal budget, deep junction depth, large chip area consumption, thick gate oxide (e.g. 50nm for 20V gate voltage), and high-voltage operation, and drift region engineering is becoming important to obtain the best Rsp-BV trade-off.
Fig. 5. Demands for LV CMOS & HV devices.
LV CMOS: negligible thermal budget; shallow junction depth; small chip area; thin gate oxide; low-voltage operation; shallow trench isolation; channel engineering.
HV devices: large thermal budget; deep junction depth; large chip area; thick gate oxide; high-voltage operation; field oxide (FOX); drift region engineering.
LV CMOS + HV devices: optimum LV CMOS and HV device performance; minimum number of additional masks; minimum chip size to reduce the cost; both thin and thick gate oxides; isolation between LV and HV devices (junction isolation, SOI); reliabilities (oxide, NBTI, HC, ESD, latch-up, ...).
Fig. 6. STI corner issues of the HV device: (a) STI corner of the HV device; (b) CVD + thermal oxide
Considering the application fields, chip cost, and process complexity, one has to decide on a process integration concept. A highly doped buried layer gives good isolation between the device and the substrate together with robust ESD performance, but it increases the process complexity. For design rules below 0.25µm, STI has to be introduced instead of FOX. As shown in Fig. 6, the STI corner shape and the oxide thickness at the STI top corner should be carefully controlled, because they directly affect the electrical performance of the HV devices. One solution to adjust the STI corner is to introduce "CVD + thermal oxide", but this may degrade the oxide quality. In general, high-energy and high-dose implantations for LV wells are used to minimize the thermal budget. It is then necessary to introduce dedicated low-dose wells to form the drift region of HV devices. Therefore, well sharing between LV and HV devices is also one of the key issues in reducing the number of masks.
3.2 TCAD Simulations

Device simulation with TCAD tools has proven to play an important role for design engineers and researchers in analyzing, characterizing, and developing new devices. It saves time and lowers the cost of designing devices compared to the experimental approach. It greatly helps to evaluate the validity of LV CMOS & HV device
process integration without a real silicon process. In addition, it allows one to see physical effects in semiconductor devices clearly when studying new device concepts. A clear understanding of the TCAD used for the analysis of HV semiconductor devices is essential to obtain accurate and reliable simulation results. A great deal of effort has been put into the development of stable and powerful TCAD tools. In this study MINIMOS-NT [13] and DESSIS [14], general-purpose device simulators, are used for the simulations. There are several physical models for the purpose of numerical device simulation, ranging from Boltzmann's transport equation and the Monte Carlo method to the hydrodynamic and the simple drift-diffusion (including thermodynamic) models. Depending on the type of device under investigation and the desired accuracy of the simulations, one of the physical models mentioned above can be chosen.
4 Self-heating Effects on the HC Behaviour of LDMOSFETs

A high electric field near the bird's beak region is believed to be a major cause of HC generation. One can see the peak electric field at the bird's beak region (circle between gate and field oxide in Fig. 7 (a)) at high drain voltage under low gate voltage. With increased gate voltage the peak electric field moves from the bird's beak towards the drain side, and HC generation at the bird's beak region is suppressed at high gate voltages. HCs generate defects in the oxide and at the Si/SiO2 interface. Interface traps (or charge trapping in the oxide) created by this process collect charge over time and directly affect the device operation. Extensive research has been undertaken to minimize the degradation effects of hot carrier injection, and several fabrication steps have been suggested to minimize hot carrier effects in LDMOSFETs. Hot carrier generation can be minimized by using various RESURF techniques in order to reduce the electric field at the interface of the device. Moving the current path away from the high electric field and the impact ionization region, deep into the silicon instead of the surface, can help to reduce hot carrier degradation of the device. The substrate current is an indirect way to see the amount of impact ionization by hot-carrier generation. As shown in Fig. 7 (b), by moving the p-body/n-drift junction from (A) to (B) one can see a suppressed substrate current. Two-dimensional device simulations together with measurements are performed to explain clearly the HC behaviour of a HV LDMOSFET including self-heating effects. The devices used in this study were fabricated in a 0.35μm CMOS-based HV technology.

Fig. 7. (a) High electric field regions in the LDMOSFET; (b) substrate current versus p-body/n-drift junction of a LDMOSFET
A gate oxide thickness of 48nm was formed by thermal oxidation. HV p-channel LDMOSFETs (Fig. 8 (a)) for 50V applications, with a gate length of 0.8μm and a width of 20μm, were used in the study. Fig. 8 (b) shows the potential near the Si/SiO2 interface at gate voltages VG = -10V and -25V, respectively. The direction of the potential (compared to the applied gate voltage) changes in the bird's beak region for the low gate voltage (VG = -10V) and in the middle of the drift region for the high gate voltage (VG = -25V). Near the bird's beak region at low gate voltage (VG = -10V), there are two adjacent regions near the Si/SiO2 interface which show electric field components normal to the current flow vector, with a positive and a negative sign respectively. Consequently, both electron and hole injection have to be considered for the hot-carrier degradation behaviour. Hot carrier stress experiments (under gate voltage VG = -20V and drain voltage VD = -50V) and NBTI stress (under VG = -20V with source and drain grounded, at 423K) were performed for device reliability evaluation.
Fig. 8. 50V p-channel LDMOSFET for HC and NBTI stress: (a) schematic view; (b) potential at the Si surface
Fig. 9. Temperature distribution and Vth-shift during the HC and NBTI stresses: (a) temperature distribution at VG = -20V and VD = -50V; (b) percent change of the threshold voltage Vth versus stress time (1 to 10^5 s) for HC stress (VG = -20V, VD = -50V) and NBTI stress at 423K
Fig. 9 (a) shows the temperature distribution of the p-channel LDMOSFET at VG = -20V and VD = -50V. The temperature rise is highest in the middle of the drift region, and it decreases towards the bottom of the substrate. The channel region also shows a high temperature, over 400K. In our devices Vth shifts were found to be negative (the absolute Vth was increased) under hot carrier stress, which corresponds to a net positive charge build-up at the Si/SiO2 interface. Because of the increased phonon scattering with temperature rise, hot-carrier generation is generally suppressed in an
n-channel LDMOSFET. However, NBTI in a p-channel LDMOSFET is greatly enhanced with temperature increase. Fig. 9 (b) shows a good agreement of ΔVth between hot carrier stress and NBTI stress at 423K. It proves that the temperature rise during hot carrier stress causes a large amount of degradation at the channel region purely due to NBTI-degradation.
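Degradation trends such as the Vth shift in Fig. 9 (b) are often summarised by a power law, delta_Vth = A * t^n; this functional form is a common modelling assumption, not one stated in the paper, and the data points below are synthetic.

import numpy as np

t = np.array([1e1, 1e2, 1e3, 1e4, 1e5])        # stress time (s)
dvth = np.array([1.2, 2.9, 7.1, 17.0, 41.0])   # |Vth shift| (%), synthetic

# Fit log(dvth) = n*log(t) + log(A): a straight line on a log-log plot.
n, log_a = np.polyfit(np.log(t), np.log(dvth), 1)
a = np.exp(log_a)
print(f"exponent n = {n:.2f}, prefactor A = {a:.3f}")

# Extrapolate to a 10-year stress, a common lifetime criterion.
ten_years = 10 * 365 * 24 * 3600
print(f"predicted shift after 10 years: {a * ten_years**n:.0f} %")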
5 Conclusion

The state of the art of HVIC technology was described together with various lateral HV devices which can be implemented in a submicron LV CMOS process. The novel SJ concept has been studied with 2D device simulations, and it shows that SJ devices have the best trade-off between Rsp and BV (generally, for voltage ratings over 100V). Finally, the HC behaviour of a HV 0.35μm LDMOSFET was presented. With TCAD simulations, a large temperature increase due to self-heating was observed during HC stress. Therefore, self-heating effects have to be taken into account in HC stress analysis. Together with TCAD simulations and measurements, one can clearly explain the self-heating effects on the HC behaviour of a p-channel LDMOSFET.
References
[1] T Efland (2003) Earth is Mobile – Power. Proc. Intl. Symp. Power Semiconductor Devices & Integrated Circuits 2-9
[2] V Vescoli, J M Park, S Carniello, and R Minixhofer (2008) 3D-Resurf: The integration of a p-channel LDMOS in a standard CMOS process. Proc. Intl. Symp. Power Semiconductor Devices & Integrated Circuits 123-126
[3] J M Park, R Klima, and S Selberherr (2002) Lateral Trench Gate Super-Junction SOI-LDMOSFETs with Low On-Resistance. Proc. European Solid-State Device Research Conference 283-286
[4] J M Park, H Enichlmair, and R Minixhofer (2007) Hot-Carrier Behaviour of a 0.35μm High-Voltage n-channel LDMOS Transistor. 12th Intl. Conference on Simulation of Semiconductor Devices and Processes, Springer 369-372
[5] V Vescoli, J M Park, H Enichlmair, K Martin, R Georg, M Rainer, S Martin (2006) Hot Carrier Degradation Behaviour of High-Voltage LDMOS Transistors. 8th International Seminar on Power Semiconductors 79-84
[6] S Merchant, E Arnold, H Baumgart, S Mukherjee, H Pein, and R Pinker (1991) Realization of High Breakdown Voltage (>700V) in Thin SOI Devices. Proc. Intl. Symp. Power Semiconductor Devices & Integrated Circuits 31-34
[7] B Murari, C Contiero, R Gatiboldi, S Sueri, and A Russo (2000) Smart Power Technologies Evolution. Proc. Industry Applications Conference 1:10-19
[8] T Fujihira (1997) Theory of Semiconductor Superjunction Devices. Jpn. J. Applied Physics 36(10):6254-6262
[9] M Saggio, D Fagone, and S Musumeci (2000) MDmesh™: Innovative Technology for High Voltage Power MOSFETs. Proc. Intl. Symp. Power Semiconductor Devices & Integrated Circuits 65-68
[10] Y Zhu, Y Liang, S Xu, P Foo, and J Sin (2001) Folded Gate LDMOS Transistor With Low On-Resistance and High Transconductance. IEEE Trans. Electron Devices 48(12):2917-2928
[11] Y Kawaguchi, T Sano, and A Nakagawa (1999) 20V and 8V Lateral Trench Gate Power MOSFETs with Record-Low On-resistance. Proc. Intl. Electron Devices Meeting 197-200
[12] J M Park (2004) Novel Power Devices for Smart Power Applications. Dissertation, IUE, TU Vienna
[13] (2002) MINIMOS-NT 2.0 User's Guide, IUE, TU Vienna
[14] (2007) DESSIS User's Guide, Synopsys
Life and Natural Sciences
Electrical Impedance Spectroscopy for Intravascular Diagnosis of Atherosclerosis
Sungbo Cho
Biohybrid Systems, Fraunhofer IBMT, St. Ingbert
[email protected]
Abstract. The goal of this article is the conception, development, and evaluation of micro-system-based impedance spectroscopy for the diagnosis of atherosclerosis. For this, it was first investigated, using a single cell model, how changes of tissue parameters on the cellular level affect the measured impedance, and it was shown that cellular growth and distribution affect the impedance of tissues. Based on a cell layer model, it was found that cellular alteration induced by atherosclerotic pathology is well reflected in the measured impedance of cell assemblies. For the intravascular impedance measurement, a balloon impedance catheter with integrated flexible microelectrodes was developed. In an in situ test with an animal model, it was successfully demonstrated that aortas containing atherosclerotic fatty plaques can be distinguished from normal aortas by intravascular impedance measurement.
1 Introduction

Chronic diseases, led by cardiovascular disease, are the largest cause of death in the world: more than 17 million people a year die mainly from heart disease and stroke [1]. The main cause of heart attack or stroke is atherosclerosis, a chronic disease affecting the arterial blood vessel and forming multiple plaques within the arteries [2]. Plaque rupture in a blood vessel with subsequent thrombus formation frequently causes acute coronary syndromes (ACS). However, most ACS are triggered by the rupture of plaques showing non-critical stenoses in typical X-ray angiography or intravascular ultrasound (IVUS) [3, 4]. Hence, new methods are required to characterize the plaques in vessels for a more precise diagnosis of atherosclerosis. Electrical impedance spectroscopy (IS), an electrochemical analysis method, has the potential to characterize the plaques in vessels non-destructively and quantitatively. IS is a method to measure the frequency-dependent electrical properties, conductivity and permittivity, of materials [5]. For the intravascular impedance diagnosis of atherosclerotic plaques, an impedance catheter with an array of 5 annular voxels has been developed and used to detect the impedance of disk-shaped plastic droplets representing fatty lesions in a human iliac artery in vitro [6]. For a more sensitive electrical characterization of atherosclerotic plaques in vessels, it was suggested to use a balloon impedance catheter (BIC), which consists of electrodes integrated with a typical balloon catheter [7]. Since the microelectrodes contact the intima when the balloon is inflated, the impedance measurement of vessels can avoid the disturbance of intravascular conditions (e.g. velocity or viscosity of blood components) and therefore
can be more sensitive and stable. So far, however, the utilization of the BIC for the intravascular impedance measurement of vessels has been limited due to the difficulty of fabricating microelectrodes durable to in- and deflation of the balloon catheter, and the lack of knowledge for interpreting IS data measured on such thin and small vessel walls. The goal of this article is the conception, development, and evaluation of micro-system-based IS using a BIC for the intravascular diagnosis of atherosclerosis. For this, it is investigated
1. how changes of tissue parameters on the cellular level can affect the measured impedance,
2. whether cellular alteration related to atherosclerosis can be characterized by IS,
3. whether reproducible impedance measurement with a BIC can be performed in vessels.
2 Electrical Impedance of Cells and Tissue

The electrical impedance of a biological tissue is determined by various tissue parameters such as its structure and composition as well as cellular distribution. To understand basically how changes of tissue parameters affect the impedance of tissue, cellular alteration was characterized electrically by IS with a single cell model, keeping the level of complexity as low as possible. However, the impedance measurement of single cells has been limited by the electrode impedance, which increases with decreasing electrode size [8]. To measure the impedance of single cells without the disturbance of the electrode impedance, the use of a micro-hole-based structure was considered, as shown in Fig. 1 [9]. The insulating layer with the micro hole was fabricated by semiconductor process technology. Onto one side of a (100) silicon wafer, a Si3N4 layer with a thickness of 800nm was deposited by plasma-enhanced chemical vapour deposition (PECVD). By photolithography and reactive ion etching, a micro hole with a radius of 3µm was patterned in the insulating Si3N4 layer. On the other side of the substrate, a SiO2 layer was deposited by PECVD. To make a well for the conservation of the cell, and to connect the well with the hole, the SiO2 layer and Si wafer were etched in sequence. For the insulation of the well, a SiO2 layer with a thickness of 200nm was deposited onto the substrate. The thickness and area of the insulated layer were 1µm and 220µm x 220µm, respectively.

Fig. 1. Schematic of impedance measurement of a single cell on a micro hole

With the experimental setup shown in Fig. 1, the impedance of single cells was measured using the top and bottom electrode pairs connected to an impedance analyzer (Solartron 1260, Solartron Analytical, Farnborough, UK). By controlling the culture
medium (RPMI 1640, 10% FCS, 0.5% Penicillin/Streptavidin) with a microfluidic controller (Cell-Tram, Eppendorf, Wesseling-Berzdorf, Germany), a single L929 cell was positioned on the micro hole.

Fig. 2. Micrograph of a positioned single L929 cell on the hole (middle black) with a radius of 3µm (a) and the cell cultured overnight (b), and measured impedance magnitudes without a cell (No Cell), with a positioned single cell (0), and with cells cultured for 2 or 4 days (c); averages and standard errors of each group are shown as symbols and plus bars (n for each group = 3) [9]

During the aspiration, the spherical suspended cell was positioned on the hole and part of the cell could be inserted into the hole, depending on the pressure, surface tension, and viscoelasticity of the cell [10]. A well-positioned cell was reflected in an increase of the impedance magnitude at low frequencies in comparison to the impedance of a free hole. Since the cell positioned on the micro hole adhered and proliferated on the surface around the hole, the impedance magnitude at low frequencies increased with increasing cultivation period. The difference in impedance magnitude between the groups could not be measured at frequencies above tens of kHz due to the stray current over the insulating layer. From the results based on the single cell model, it was found that the impedance of cells is determined by cell growth and distribution. However, real biological tissues are not a distribution of single cells: in biological tissues, the cells interact, and this interaction cannot be represented by the single cell model. To include cell/cell interaction, models based on cell assemblies or tissues are required. In the next chapter, a cell layer model is used to investigate the suitability of IS for the characterization of cell assemblies and tissues related to atherosclerosis.
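The qualitative frequency behaviour (a cell raises the low-frequency impedance, while above tens of kHz the stray current makes the curves merge) can be illustrated with a lumped equivalent circuit. The sketch below assumes a resistive hole path in parallel with a stray capacitance across the insulating layer; the topology and element values are illustrative, not fitted to the measurements.

import numpy as np

def z_mag(f_hz, r_path_ohm, c_stray_f):
    """|Z| of a resistive path in parallel with a stray capacitance."""
    w = 2 * np.pi * f_hz
    return np.abs(1.0 / (1.0 / r_path_ohm + 1j * w * c_stray_f))

f = np.array([1e2, 1e3, 1e4, 1e5, 1e6])   # frequency sweep (Hz)
c_stray = 50e-12                          # stray capacitance (F), assumed
for label, r in [("free hole", 1e6), ("cell on hole", 5e6)]:
    print(label, np.round(z_mag(f, r, c_stray) / 1e3, 1), "kOhm")
# At low frequency the two cases differ; above tens of kHz the stray
# capacitance dominates and the curves merge, as in the measurements.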
3 Effect of Cellular Alteration on Impedance of Cells

A precondition for using IS for the diagnosis of atherosclerosis is that the alteration in vessels caused by atherosclerotic pathology is reflected in the measurable impedance. To characterize the cellular alteration involved in atherosclerosis by IS, a cell layer model with an electrode-based chip was used, as shown in Fig. 3. For the impedance measurement of the cell layer, a planar electrode-based structure was fabricated using semiconductor process technology [11]. After deposition of an insulating Si3N4 layer on a silicon wafer by PECVD, highly conductive platinum
electrodes and gold interconnection lines were patterned on the substrate. After insulation with a Si3N4 layer over the substrate, the electrodes were opened by reactive ion etching. The opened electrode area was circular with a radius of 500µm, and a pair of electrodes was used for the impedance measurement. A cylindrical glass dish was integrated with the electrode substrate for the conservation of cells on the electrodes. For the impedance measurement of cells, the electrodes were electrically connected to the impedance analyzer.

Fig. 3. Schematic of impedance measurement of a cell layer on electrodes

A herpes simplex virus (HSV) infection model was used since HSV is associated with the development of atherosclerosis [12]. Vero (African green monkey kidney) cells, which exhibit a wide spectrum of virus susceptibility, were prepared. For the experiment, 8 × 10^4 Vero cells with 3ml of culture medium (D-MEM, 10% FBS, 50 units penicillin, 50µg/ml streptomycin) were added to the electrode-based chip. During the impedance measurement of cells in the frequency range of 100Hz to 1MHz, cells were infected with HSV at different multiplicities of infection (MOI: 0.06, 0.006, or 0.0006). To interpret the measured impedance of cells, a mathematical model derived by Giaever and Keese [13] was used. By nonlinear curve fitting, a parameter Rb, reflecting the inter-cellular junctions in the established model, was adjusted to minimize the sum of squared deviations between the model and the measured impedance spectra.
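The fitting step can be sketched generically: adjust Rb until the squared deviation between the model and the measured spectrum is minimal. The model function below is a placeholder standing in for the actual Giaever and Keese equations, and the spectra are synthetic.

import numpy as np
from scipy.optimize import minimize_scalar

freqs = np.logspace(2, 6, 30)    # 100 Hz .. 1 MHz, as in the experiment

def cell_layer_model(f, rb):
    """Placeholder model: |Z| rises with Rb at low and mid frequencies."""
    return 1e3 + rb / (1.0 + (f / 1e4) ** 2)

measured = cell_layer_model(freqs, 850.0) + np.random.normal(0, 5, freqs.size)

def ssq(rb):
    return np.sum((cell_layer_model(freqs, rb) - measured) ** 2)

fit = minimize_scalar(ssq, bounds=(0.0, 5e3), method="bounded")
print(f"fitted Rb = {fit.x:.0f} (value used to synthesise the data: 850)")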
Fig. 4. Micrograph of Vero cells not infected (a) or infected with HSV for 102h at an MOI of 0.006 (b) on the platinum electrode with a radius of 500µm (arrow: exposed electrode area, scale bar: 100µm), and Rb of the Giaever and Keese model [13] during cultivation or infection with HSV at different MOI (c) [11]
Without infection, the cells adhered well and were confluent on the platinum electrode (see Fig. 4 (a)). However, the infected cells became rounded and detached from the electrode, depending on the time of infection and the virus concentration. At 102h after infection with HSV at an MOI of 0.006, the partially exposed electrode area caused by the detachment of cells was clearly observed (see arrows in Fig. 4 (b)). The impedance measurement of cells on planar electrodes was restricted by the electrode polarization at low frequencies, and by the stray capacitance at high frequencies. The impedance of the cell layer revealed at intermediate frequencies is mostly determined by cellular adhesion and the extracellular matrix [14]. At the beginning of the measurement, the parameter Rb, reflecting the inter-cellular junctions, increased due to cellular adhesion and spreading. As the cells lost their adhesion during the infection, however, Rb diminished depending on the MOI (see Fig. 4 (c)). The result demonstrates that the virus-induced alteration in cell assemblies involved in atherosclerotic pathology can be sensitively monitored by IS. Further, it is expected that alterations in vessels related to atherosclerosis can be electrically characterized by IS. For the impedance measurement of vessels, however, it is necessary to understand the influence of the complex structures and properties of vessels with plaques on the intravascular impedance measurement. In the following chapter, it is reported whether BIC-based intravascular characterization of atherosclerotic plaques in vessels can be performed with such sensitivity and reproducibility that relevant medical parameters are determinable in animal models.
4 Intravascular Impedance Measurement of Vessel

For the intravascular impedance measurement of vessels, a BIC was developed by integrating a flexible microelectrode array with a balloon catheter, as shown in Fig. 5. The flexible electrode structure was fabricated based on an insulating polyimide (PYRALIN PI 2611, Du Pont) [15]. The polyimide resin was coated on a silicon wafer by spin coating and imidized in a curing process at 350°C under nitrogen atmosphere. The platinum structures of four rectangular electrodes, transmission lines, and terminal pads were patterned by a lift-off technique. After the deposition of a polyimide layer of 5µm thickness over the patterned metals, the areas of the electrodes and terminal pads were exposed by reactive ion etching. The exposed area of the electrodes was 100µm by 100µm, and the separation distance between the centers of the electrodes was 333µm. The electrodes were connected to an impedance measurement system consisting of the impedance analyzer in combination with a bioimpedance interface (Solartron 1294, Solartron Analytical, Farnborough, UK). Using the fabricated BIC, the intravascular impedance of vessels was measured in in situ animal models, which enabled the impedance analysis of vessels in parallel to histological investigation. Animal experiments were designed in accordance with the German Law for Animal Protection and were approved by the Review Board for the Care of Animal Subjects in Karlsruhe, Germany. The experiments conform to the Guide for the Care and Use of Laboratory Animals published by the US National Institutes of Health (NIH Publication No. 85-23, revised 1996). Six female New Zealand White rabbits (Harlan-Winkelmann, Borchen, Germany), 1.5kg, were kept at standard conditions (temperature 21°C, humidity 55%) and fed with a 5% cholesterol-enriched
diet (Ssniff Spezialdiäten GmbH, Soest, Germany) for 17 weeks to induce atherosclerotic plaques. After preparing the thoracic aorta, black points were marked on the superficial layer of the aorta to guarantee an exact matching of the histology of the marked aortic tissues and the impedance measurement (separation distance between marked points: 5-10mm). After introduction of the guide wire, the BIC was placed distally of the aortic arch. When the position of the electrodes was exactly matched to the marked points on the surface of the aorta under fluoroscopy control, the balloon was inflated with 0.5atm to ensure a close contact of the electrodes to the aortic wall, and then the impedance was measured.
Fig. 5. Fabricated BIC (upper left) and polyimide-based flexible electrode array (upper right), and schematic of intravascular impedance measurement of vessel by using BIC [16]
The four-electrode method used was able to reduce the effect of the electrode impedance on the total measured impedance and therefore to extend the effective frequency range of the impedance measurement. The polyimide-based electrode array was flexible and light enough that the performance of the BIC was not degraded during the expansion and contraction of the balloon. The intravascular impedance measurement with the BIC was determined not only by the vessel thickness relative to the electrode configuration [17] but also by the relative position of the electrode array to the atheromatous plaque in the vessel [18]. Therefore, the histology of sections was analyzed at the marked (measured) points. The plaques induced in the intima of the vessels were of early type according to the definition of Stary et al. [2]. Vessels without plaque were classified as P0 (n=44), those with plaque thinner than the media as PI (n=12), and those with plaque thicker than the media as PII (n=36). To minimize the dependence of the impedance on the different thicknesses of vessels, the impedance change versus frequency (ICF = impedance magnitude at 1kHz – impedance magnitude at 10kHz) was analyzed rather than the raw data. Fig. 6 shows a micrograph of an atherosclerotic aorta and the ICF value versus plaque type of the vessel. The
Fig. 6. Micrograph of an aorta segment with plaque indicated by arrow (a), and ICF versus plaque type with the result of a hypothesis t-test (b); P0: no plaque (n=44), PI: plaque thinner than media (n=12), PII: plaque thicker than media (n=36); symbols and lines are averages and standard errors [16]
ICF of group PII (–22.2±43.29Ω) was significantly lower in comparison to PI (137.7±53.29Ω; p=0.05) and P0 (208.5±55.16Ω; p=0.002). However, there was no difference between groups P0 and PI (p=0.515). The in situ animal experiment showed that the BIC is feasible for the intravascular impedance measurement of vessels and that early-type atherosclerotic plaques can be electrically characterized by ICF analysis. Additionally, the fabricated BIC was able to characterize the advanced plaque types of human aorta in vitro [19]. Due to the dependence of the intravascular impedance measurement on the vessel thickness and on the plaque position, it is necessary to control the distribution of the electric fields in vessels, especially for in vivo application. For this, the use of a multi-electrode arrangement should be investigated in future work. By selecting the required electrodes in the multi-electrode array, it is possible to control the distribution of the electric fields and to image the intravascular impedance. Further, the use of the BIC will be studied in atherosclerotic human models in vivo.
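A minimal sketch of the ICF analysis: compute ICF from the impedance magnitudes at 1kHz and 10kHz per measured point and compare the groups with a two-sample t-test. The impedance values below are synthetic stand-ins, not the measured data.

import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

def icf(z_1khz, z_10khz):
    # ICF = impedance magnitude at 1 kHz minus that at 10 kHz
    return z_1khz - z_10khz

# Hypothetical spectra for normal vessels (P0) and thick plaques (PII).
p0 = icf(rng.normal(900, 150, 44), rng.normal(690, 150, 44))
pii = icf(rng.normal(700, 150, 36), rng.normal(720, 150, 36))

t, p = stats.ttest_ind(p0, pii, equal_var=False)
print(f"mean ICF P0 = {p0.mean():.0f} Ohm, PII = {pii.mean():.0f} Ohm, p = {p:.4f}")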
5 Conclusions

This article described the conception, development, and evaluation of micro-system-based IS using a BIC for the intravascular diagnosis of atherosclerosis. Through the approach from the cellular level to the animal model, it was concluded that
1. the impedance of tissue is determined by the growth and distribution of its cell units,
2. the virus-induced disintegration of cell assemblies involved in atherosclerotic pathology can be characterized by IS,
3. the fabricated BIC is feasible for the intravascular impedance measurement of vessels, and the impedance of vessels with atheromatous plaque thicker than the media can be distinguished from that of normal vessels.
For a clinical trial in vivo, a multi-electrode arrangement can be used for the BIC, which can control the measurement area in vessels and increase the measurement resolution for the various positions of plaque in vessels.
Acknowledgments. The author wishes to acknowledge Prof. Fuhr, director of Fraunhofer IBMT, for his guidance; Dr. Thielecke, head of Biohybrid Systems at Fraunhofer IBMT, and PD Dr. Briesen, head of Cell Biology & Applied Virology at Fraunhofer IBMT, for their support; and PD Dr. Süselbeck and Dr. Streitner, Department of Medicine, University Hospital of Mannheim, University of Heidelberg, for the cooperation on the cardiology research.
References
[1] Yach D, Hawkes C, Gould C L et al. (2004) The global burden of chronic diseases: overcoming impediments to prevention and control. The Journal of the American Medical Association 291:2616-2622
[2] Stary H C, Chandler A B, Dinsmore R E et al. (1995) A definition of advanced types of atherosclerotic lesions and a histological classification of atherosclerosis. Arteriosclerosis, Thrombosis, and Vascular Biology 15:1512-1531
[3] Ambrose J A, Tannenbaum M A, Alexopoulos D et al. (1988) Angiographic progression of coronary artery disease and the development of myocardial infarction. Journal of the American College of Cardiology 12:56-62
[4] Fuster V, Badimon L, Badimon J J et al. (1992) The pathogenesis of coronary artery disease and the acute coronary syndromes. The New England Journal of Medicine 326:242-250
[5] Grimnes S and Martinsen Ø G (2000) Bioimpedance and bioelectricity basics. Academic Press, San Diego
[6] Konings M K, Mali W P Th M, Viergever M A (1997) Development of an intravascular impedance catheter for detection of fatty lesions in arteries. IEEE Transactions on Medical Imaging 16:439-446
[7] Stiles D K and Oakley B A (2003) Simulated characterization of atherosclerotic lesions in the coronary arteries by measurement of bioimpedance. IEEE Transactions on Biomedical Engineering 50:916-921
[8] Ayliffe H E, Frazier A B, Rabbitt R D et al. (1999) Electric impedance spectroscopy using microchannels with integrated metal electrodes. IEEE Journal of Microelectromechanical Systems 8:50-57
[9] Cho S and Thielecke H (2007) Micro hole based cell chip with impedance spectroscopy. Biosensors and Bioelectronics 22:1764-1768
[10] Cho S, Castellarnau M, Samitier J et al. (2008) Dependence of impedance of embedded single cells on cellular behaviour. Sensors 8:1198-1211
[11] Cho S, Becker S, Briesen H et al. (2007) Impedance monitoring of herpes simplex virus-induced cytopathic effect in Vero cells. Sensors and Actuators B: Chemical 123:978-982
[12] Key N S, Vercellotti G M, Winkelmann J C et al. (1990) Infection of vascular endothelial cells with herpes simplex virus enhances tissue factor activity and reduces thrombomodulin expression. The Proceedings of the National Academy of Sciences 87:7095-7099
[13] Giaever I and Keese C R (1991) Micromotion of mammalian cells measured electrically. The Proceedings of the National Academy of Sciences 88:7896-7900 (correction: (1993) The Proceedings of the National Academy of Sciences 90:1634)
[14] Cho S and Thielecke H (2008) Electrical characterization of human mesenchymal stem cell growth on microelectrode. Microelectronic Engineering 85:1272-1274
[15] Stieglitz T, Beutel H, Meyer J U (1997) A flexible, light-weight multichannel sieve electrode with integrated cables for interfacing regenerating peripheral nerves. Sensors and Actuators A 60:240-243
[16] Süselbeck T, Thielecke H, Koechlin J et al. (2005) Intravascular electric impedance spectroscopy of atherosclerotic lesions using a new impedance catheter system. Basic Research in Cardiology 100:446-452
[17] Cho S and Thielecke H (2005) Design of electrode array for impedance measurement of lesions in arteries. Physiological Measurement 26:S19-S26
[18] Cho S and Thielecke H (2006) Influence of the electrode position on the characterisation of artery stenotic plaques by using impedance catheter. IEEE Transactions on Biomedical Engineering 53:2401-2404
[19] Streitner I I, Goldhofer M, Cho S et al. (2007) Intravascular electric impedance spectroscopy of human atherosclerotic lesions using a new impedance catheter system. Atherosclerosis Supplements 8:139
Mathematical Modelling of Cervical Cancer Vaccination in the UK*
Yoon Hong Choi** and Mark Jit
Centre for Infections, Health Protection Agency, UK
[email protected]
Abstract. Human papillomaviruses (HPV) are responsible for causing cervical cancer and anogenital warts. The UK considered a national vaccination programme introducing one of two licensed vaccines, Gardasil™ and Cervarix™. The impact of vaccination is, however, difficult to predict due to uncertainty about the prevalence of HPV infection, the pattern of sexual partnerships, the progression of cervical neoplasias, and the accuracy of screening, as well as the duration of infectiousness and immunity. Dynamic models of HPV transmission, based upon thousands of scenarios incorporating uncertainty in these processes, were developed to describe the spread of infection and the development of cervical neoplasia, cervical cancer (squamous cell and adenocarcinoma) and anogenital warts. Each scenario was then fitted to epidemiological data to estimate transmission probabilities, and the best-fitting scenarios were used to predict the impact of twelve different vaccination strategies. Our analysis provides relatively robust estimates of the impact of HPV vaccination, as multiple sources of uncertainty are explicitly included. The most influential remaining source of uncertainty is the duration of vaccine-induced protection.
1 Introduction

Human papillomavirus (HPV) infection is responsible for the development of cervical cancer in women as well as anogenital warts in both men and women. The most common forms of cervical cancer in the United Kingdom (UK) are squamous cell carcinomas and adenocarcinomas. Two HPV types (16 and 18) account for about 70% of squamous cell carcinomas [1] and 85% of adenocarcinomas [2], while another two types (6 and 11) cause over 90% of cases of anogenital warts [3]. Two prophylactic vaccines against HPV have been developed: a bivalent vaccine (Cervarix™) against types 16 and 18, and a quadrivalent vaccine (Gardasil™) that also includes types 6 and 11. In clinical trials, use of either vaccine in HPV-naive females resulted in at least 90% reduction in persistent infection and associated disease during 30 months of follow-up [4, 5]. The quadrivalent vaccine has been shown to be highly effective at preventing anogenital warts [6]. The vaccines have the potential to reduce the substantial burden of HPV-related disease. However, they are priced at levels significantly higher than other vaccines in national vaccination schedules, so their epidemiological and economic impact needs to be carefully considered. Because of the complexity of HPV infection and pathogenesis, mathematical models are required to estimate the impact of vaccination, to
* This is a shorter version of the original paper (Choi et al. 2008, PLoS Medicine, submitted).
** Corresponding author.
describe such complexity and allow the consequent examination of alternative immunisation programmes.
2 Method

We developed a set of compartmental Markov models to represent the acquisition and heterosexual transmission of infection, with an embedded progression model to represent the subsequent development of HPV-related disease (different stages of pre-cancerous cervical neoplasias, squamous cell carcinomas, adenocarcinomas and anogenital warts). HPV types in the model are divided into five groups: type 16, type 18 and other oncogenic high-risk types for cervical cancers, plus type 6 and type 11 for anogenital warts. For oncogenic HPV types in females, there are type-specific model compartments for being susceptible to HPV infection, infected with HPV, immune to HPV infection, having cervical intraepithelial neoplasias (CINs) of different grades (CIN1, CIN2 or CIN3), having CIN3 carcinoma in situ (CIS) or undiagnosed squamous cell carcinoma, having diagnosed invasive squamous cell carcinoma, and having had a hysterectomy (see Fig. 1). Adenocarcinomas were modelled separately but the same model structure was adopted. Males can only occupy the susceptible, HPV-infected and immune states.

Fig. 1. Flow diagram for models of (a) high-risk (oncogenic) HPV infection and disease in females, and (b) low-risk (warts-related) HPV infection in females or any HPV infection in males

Females move through the various HPV disease states at rates independent of their age and of the time already spent in the state. They may regress to less severe disease states or to the immune state, either as a result of natural regression or of cervical screening followed by treatment. They are also subject to an age-dependent background hysterectomy rate. Assumptions governing disease progression have been previously described in greater detail (Jit et al., submitted manuscript). The dynamics of infection by HPV 6 and 11 are modelled using three compartments in both females and males, representing being susceptible to infection, being infected and being immune to infection. A proportion of susceptibles who become newly infected were assumed to acquire symptomatic warts and present for treatment, thereby contributing towards anogenital warts incidence. Complete model equations for Figure 1 are available upon request. Vaccines were assumed to provide 100% efficacy against vaccine-type infection, but with vaccinated individuals losing vaccine protection at a constant rate.
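The susceptible-infected-immune structure with waning natural immunity can be sketched, for a single HPV type and a single sex, as a small ODE system; the parameter values below are illustrative, not the fitted UK values.

import numpy as np
from scipy.integrate import odeint

def sirs(y, t, beta, gamma, omega):
    s, i, r = y
    ds = -beta * s * i + omega * r   # infection and waning of immunity
    di = beta * s * i - gamma * i    # clearance after a mean of 1/gamma years
    dr = gamma * i - omega * r       # immune compartment, waning at rate omega
    return [ds, di, dr]

beta = 2.0    # transmission coefficient per year (hypothetical)
gamma = 1.0   # mean duration of infection: 1 year
omega = 0.1   # mean duration of natural immunity: 10 years

t = np.linspace(0, 50, 500)
s, i, r = odeint(sirs, [0.99, 0.01, 0.0], t, args=(beta, gamma, omega)).T
print(f"equilibrium prevalence ~ {i[-1]:.3f}")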
Three possible mean durations of protection (ten years, twenty years and lifetime) were modelled. A scenario was also considered where vaccination provided cross-protection with an efficacy of 27% against oncogenic non-vaccine HPV types, as suggested by trials of both vaccines. Sexual transmission was modelled using a structure similar to that previously developed for HIV [8], with the model population stratified into three sexual behaviour groups (low risk, moderate risk and high risk). Women were assumed to be screened at age-dependent rates, with successful screening and treatment leading to women being moved to an HPV-free state. Progression and regression rates between different disease states were determined by fitting to HPV prevalence data. For HPV 6 and 11, the key parameters governing model predictions about annual anogenital warts diagnoses are the proportion of warts diagnoses linked to HPV 6 infection and the proportion of HPV 6 and 11 infections that lead to clinically diagnosed symptoms. The proportion of infections causing clinical symptoms was determined by matching the age-dependent pattern of annual anogenital warts cases to the seroprevalence of HPV 6 and 11 in a recent UK convenience sample, assuming a seroconversion rate of 60-80% [12, 13]. For each of the 18 oncogenic HPV (12 squamous and 6 adenocarcinoma) and 6 low-risk HPV scenarios, a total of 150 combinations of assumptions governing the transmission of HPV infection, natural immunity and vaccine-induced protection were considered. These comprised five possibilities for the duration of natural immunity (zero months, i.e. no natural immunity, three years, ten years, twenty years, and lifelong), five possibilities for the duration of infection (six months, nine months, twelve months, fifteen months and eighteen months), three possibilities for the assortativeness parameter governing mixing between risk groups (0, 0.5, 0.9) and two possibilities for the probability of HPV transmission per partnership (based on high-risk group values or on low-risk group values). Thus, 2,700 combinations of assumptions were investigated for each oncogenic type (16, 18 and other oncogenic types), and 900 parameter combinations were investigated for each warts type. Hence, in total, 9,900 scenarios were fitted. For each of the 9,900 combinations of assumptions, the equilibrium prevalence of the dynamical model was fitted to a data-derived HPV prevalence curve associated with that scenario (described in Jit et al., submitted manuscript) by altering the two parameters governing HPV transmission: the probability of transmission per sexual partnership for the low-risk group, and the coefficient by which this probability is multiplied to get the corresponding probability for the medium- and high-risk groups. These parameters were estimated by minimising the sum of squared residuals weighted by the variance of the HPV prevalence curve over all age groups. Numerical fitting was conducted using the Brent method [14]. However, only cancer scenarios with a sum of squared residuals lying below a goodness-of-fit cut-off of 40 were used for subsequent analysis, representing 70% of all squamous cell carcinoma scenarios. This cut-off was chosen so that the remaining scenarios would match known data about cervical cancer cases. Fig. 2 shows the number of cervical cancers a year indicated by models with different values of the sum of squared residuals. For warts, a goodness-of-fit threshold of 4 for the sum of squared residuals was fixed, based on reports of new and recurrent cases of anogenital
Fig. 2. Relationship between (a) estimated annual cancer incidence and sum of squared residuals for each scenario of oncogenic HPV types, (b) estimated annual warts incidence and sum of squared residuals for each scenario of non-oncogenic HPV types
There was a strong association between goodness of fit and duration of natural immunity for warts: only 3% of scenarios corresponding to lifelong natural immunity were eliminated, whereas 98% of scenarios with no natural immunity were eliminated.

In the base case scenario, routine quadrivalent vaccination was delivered to 12 year old girls with no catch-up campaign. Coverage of 80% for the full three doses was assumed, based on reported three-dose coverage from the trial of a school-based hepatitis B vaccination programme [17]. Alternative vaccination scenarios were considered: (i) vaccinating girls at the age of 13 or 14 years instead of 12 years; (ii) vaccinating boys at age 12 years in addition to girls at age 12 years, assuming the vaccine fully protects boys against infection by all four vaccine types; (iii) vaccinating girls with three-dose coverage of 70% or 90% instead of 80%; (iv) catch-up campaigns for girls up to age 14, 16, 18, 20 or 25 years; and (v) vaccinating 12 year old girls with a vaccine that provides cross-protection against oncogenic non-vaccine types with 27% efficacy, as suggested by clinical trials. The combined effect of vaccination on cervical cancer incidence was calculated by summing results for HPV 16, HPV 18 and other high-risk HPV scenarios with the same assumptions. Similarly, the combined effect on anogenital warts was calculated by summing results for HPV 6 and HPV 11 scenarios with identical assumptions.
3 Results

Fig. 3 shows the estimated impact of vaccinating 12 year old girls at 80% coverage on diagnosed HPV 6/11/16/18-related disease for different assumptions about the duration of vaccine protection. Large reductions in the incidence of cervical dysplasia, cervical cancer and anogenital warts are expected provided vaccine-induced immunity lasts for ten years or more, although the reduction in cervical cancer takes much longer to become apparent. Increasing vaccine coverage reduces the post-vaccination incidence of disease. However, even at 70% coverage in girls it is possible to eliminate vaccine-type HPV in some model scenarios when vaccine protection is assumed to be lifelong, particularly those with a short duration of natural immunity, because females who are not directly protected by vaccination are protected indirectly (by herd immunity).
Fig. 3. Percentage change in the annual number of diagnosed (a) cervical cancer cases and (b) genital warts cases following vaccination of 12 year old girls, for different assumptions about the duration of vaccine-induced immunity, assuming 80% vaccine coverage
Offering routine vaccination at a later age speeds the reduction in disease (data not shown), since vaccine is given closer to the age at which HPV is acquired. If vaccine-induced immunity is relatively short-lived (ten years), then offering vaccination later in childhood protects women for more of the highest-risk period of HPV acquisition (late teens and early twenties). Thus vaccination at 14 years of age provides slightly better outcomes than vaccination at 12 years of age if vaccine-induced immunity is not lifelong. Scenarios where vaccination provides cross-protection with 27% efficacy against oncogenic non-vaccine HPV types show on average about a 5-10% extra reduction in cancer incidence following vaccination, with a broadly similar pattern across assumptions about the duration of vaccine protection.

Extending vaccination to boys provides additional benefit in terms of reductions in cervical cancer and anogenital warts compared to vaccinating girls alone (Fig. 4). The effect on anogenital warts cases is slightly greater since males acquire warts but not cervical cancer. However, there is also an indirect effect on cervical cancer incidence, since vaccinating boys prevents males from infecting females with oncogenic HPV types. The effect on both disease endpoints is small because vaccinating girls alone already reduces HPV prevalence to a very low level, especially if the duration of vaccine protection is long.

Catch-up campaigns usually have a more dramatic short-term effect on cervical cancer and anogenital warts incidence than extending vaccination to boys (Fig. 4). However, the effect of a catch-up campaign is minimal beyond 20 years after the start of the campaign. There are decreasing marginal returns from extending the catch-up programme to older age groups, particularly if the average duration of vaccine protection is long. In particular, if vaccine protection is lifelong, the reduction in cancer incidence from a campaign up to the age of 25 years is not greatly different from that of the more limited campaigns.

This is the first model to comprehensively capture the effect of HPV vaccination on both cervical cancer and warts. All HPV types in the vaccines (6, 11, 16 and 18) are modelled separately, unlike some previous studies that only looked at a single HPV type in isolation [20-22] or grouped some of the vaccine types for the purposes of analysis [23]. Also, no previous (static or dynamic) model explicitly models adenocarcinomas. However, we have combined high-risk non-vaccine HPV types into a single group instead of modelling each type separately. This could cause some inaccuracy in estimates of the impact of HPV vaccination on non-vaccine types (if any), since actual vaccine protection against non-vaccine types is not uniformly 27% against every type.
Fig. 4. The estimated impact of extending vaccination to boys or including a catch-up campaign on annual cervical cancer and anogenital warts cases. Three-dose coverage is assumed to be 80%. Graphs show the additional percentage change in the number of diagnosed cancer or warts cases prevented compared to the base case programme of vaccinating 12 year old girls only. Results are shown for lifelong vaccine protection for (a) cancer and (b) warts.
Also, we have not modelled penile, vulval, vaginal, anal, head and throat cancers, as these are poorly described in the vaccine trials and have a less well-known natural history than cervical cancers. Models incorporating these cancers may become more important when exploring the effect of targeted HPV vaccination for specific risk groups such as homosexual and bisexual men.

Heterosexual males are protected by vaccination of girls alone. However, vaccinating males as well as females produces additional benefits both for the males themselves (by reducing their risk of warts) and, to a lesser extent, for females (by reducing the overall HPV prevalence). The direct effect of vaccinating males on reducing their risk of warts is particularly relevant for homosexual men, who are not included in this model and who are likely to benefit less than their heterosexual counterparts from vaccination of girls only. The marginal benefits derived from vaccinating boys depend on the extent to which HPV incidence is reduced by vaccination of girls only. In many scenarios (particularly those that assume short periods of natural immunity) vaccinating girls alone reduces the incidence of vaccine-preventable types to very low levels (elimination might even be achieved at 80% coverage). Under these circumstances vaccination of boys brings few additional benefits (although it does mean that the probability of elimination is greater). However, if the period of natural immunity is longer, then the additional benefits from vaccinating boys are greater. Clearly, these benefits are larger for prevention of anogenital warts, as males benefit directly from vaccination against this disease. Similarly, if the duration of vaccine protection is short, then vaccinating boys has a larger marginal benefit.

Catch-up campaigns have no effect on long-term incidence. However, in the short term they can bring about a more rapid reduction in disease, particularly for acute diseases (anogenital warts and low-grade neoplasias) associated with HPV infection. As the effects of the campaign wear off it is possible to get a resurgence of disease some time after vaccination (a post-honeymoon epidemic) before the system settles into its new equilibrium state. There are decreasing marginal returns associated with a catch-up programme as the upper age at which vaccination is offered is extended, because the probability of remaining susceptible to the vaccine-preventable types falls rapidly after the age of about 15 years.
We have developed and parameterised a family of transmission dynamic models of infection and disease with oncogenic and anogenital warts-associated HPV types. A unique feature of our approach is that instead of adopting a single model, we have generated thousands of combinations of assumptions and then fitted them to prevalence data. By developing a large number of related models that make differing assumptions about the natural history of HPV infection and disease, we have assessed structural as well as parameter uncertainty and propagated this through our analyses. The results of these analyses provide a robust evidence base for economic analyses of the potential impact of HPV vaccination.

Acknowledgments. We thank John Edmunds, Nigel Gay, Andrew Cox, Geoff Garnett, Kate Soldan and Liz Miller for their contributions to the model development and analysis.
References

[1] Munoz N, Bosch FX, de Sanjose S, et al. (2003) Epidemiologic classification of human papillomavirus types associated with cervical cancer. N Engl J Med 348(6):518-27
[2] Munoz N, Bosch FX, Castellsague X, et al. (2004) Against which human papillomavirus types shall we vaccinate and screen? The international perspective. Int J Cancer 111(2):278-85
[3] Krogh G, Lacey CJ, Gross G, Barrasso R, Schneider A (2001) European guideline for the management of anogenital warts. Int J STD AIDS 12 Suppl 3:40-7
[4] FUTURE II Study Group (2007) Quadrivalent vaccine against human papillomavirus to prevent high-grade cervical lesions. N Engl J Med 356(19):1915-27
[5] Harper DM, Franco EL, Wheeler C, et al. (2004) Efficacy of a bivalent L1 virus-like particle vaccine in prevention of infection with human papillomavirus types 16 and 18 in young women: a randomised controlled trial. Lancet 364(9447):1757-65
[6] Garland SM, Hernandez-Avila M, Wheeler CM, et al. (2007) Quadrivalent vaccine against human papillomavirus to prevent anogenital diseases. N Engl J Med 356(19):1928-43
[7] Kahn JA, Burk RD (2007) Papillomavirus vaccines in perspective. Lancet 369(9580):2135-7
[8] Garnett GP, Anderson RM (1994) Balancing sexual partnerships in an age and activity stratified model of HIV transmission in heterosexual populations. IMA J Math Appl Med Biol 11(3):161-92
[9] Department of Health Statistical Bulletin (2006) Cervical Screening Programme, England: 2005-06. The Information Centre, 20-12-2006. http://www.ic.nhs.uk/pubs/csp0506. Accessed 6-10-2007
[10] Nanda K, McCrory DC, Myers ER, et al. (2000) Accuracy of the Papanicolaou test in screening for and follow-up of cervical cytologic abnormalities: a systematic review. Ann Intern Med 132(10):810-9
[11] Redburn JC, Murphy MF (2001) Hysterectomy prevalence and adjusted cervical and uterine cancer rates in England and Wales. BJOG 108(4):388-95
[12] Dillner J (1999) The serological response to papillomaviruses. Semin Cancer Biol 9(6):423-30
[13] Carter JJ, Koutsky LA, Hughes JP, et al. (2000) Comparison of human papillomavirus types 16, 18, and 6 capsid antibody responses following incident infection. J Infect Dis 181(6):1911-9
[14] Press WH, Teukolsky SA, Vetterling WT, Flannery BP (2002) Numerical Recipes in C++: The Art of Scientific Computing. 2nd ed. Cambridge University Press, Cambridge
[15] Office for National Statistics (2006) Cancer Statistics 2004: Registrations. Series MB1 No 35. http://www.statistics.gov.uk/statbase/Product.asp?vlnk=8843&More=N
[16] Health Protection Agency (2006) Trends in anogenital warts and anogenital herpes simplex virus infection in the United Kingdom: 1996 to 2005. CDR Weekly 16(48)
[17] Wallace LA, Young D, Brown A, et al. (2005) Costs of running a universal adolescent hepatitis B vaccination programme. Vaccine 23(48-49):5624-31
[18] Garland SM, Hernandez-Avila M, Wheeler CM, et al. (2007) Quadrivalent vaccine against human papillomavirus to prevent anogenital diseases. N Engl J Med 356(19):1928-43
[19] Olsson SE, Villa LL, Costa RL, et al. (2007) Induction of immune memory following administration of a prophylactic quadrivalent human papillomavirus (HPV) types 6/11/16/18 L1 virus-like particle (VLP) vaccine. Vaccine 25(26):4931-9
[20] Hughes JP, Garnett GP, Koutsky L (2002) The theoretical population-level impact of a prophylactic human papilloma virus vaccine. Epidemiology 13(6):631-9
[21] French KM, Barnabas RV, Lehtinen M, et al. (2007) Strategies for the introduction of human papillomavirus vaccination: modelling the optimum age- and sex-specific pattern of vaccination in Finland. Br J Cancer 96(3):514-8
[22] Barnabas RV, Laukkanen P, Koskela P, Kontula O, Lehtinen M, Garnett GP (2006) Epidemiology of HPV 16 and cervical cancer in Finland and the potential impact of vaccination: mathematical modelling analyses. PLoS Med 3(5):e138
[23] Elbasha EH, Dasbach EJ, Insinga RP (2007) Model for assessing human papillomavirus vaccination strategies. Emerg Infect Dis 13(1):28-41
Particle Physics Experiment on the International Space Station

Chanhoon Chung

Physikalisches Institut B, RWTH-Aachen University, Aachen, Germany
[email protected]

Abstract. We know that the universe consists of 22% dark matter. The dark matter particle has to be stable, non-relativistic and only weakly interacting. However, we do not know what the dark matter is made of or how it is distributed within our galaxy. In general, cosmic antiparticles are expected to be secondary products of interactions of the primary cosmic rays (CRs) with the interstellar medium during propagation. While the measurements of CR positrons, anti-protons and diffuse gamma rays have become more precise, the results still do not match pure secondary origins. A comparison between the expected background of these CRs and experimental data has been performed using CR propagation models. A phenomenological study based on supersymmetry (SUSY) has been carried out and provides a better interpretation of the CR fluxes by including neutralino annihilations in the galactic halo and centre. AMS-02 will be the major particle physics experiment on the International Space Station (ISS) and will make a profound impact on our knowledge of high-energy CRs, measured with unprecedented accuracy. It will extend our knowledge of CR origin, acceleration and propagation mechanisms. In particular, the measurement of the positron flux may be the most promising channel for the detection of neutralino dark matter, since the predicted flux is less sensitive to the astrophysical parameters responsible for propagation and to the dark matter halo profile. The full AMS-02 detector has been assembled at CERN (European Organization for Nuclear Research) near Geneva, Switzerland. After space qualification tests it will be delivered to NASA-KSC to prepare for launch with a space shuttle. The launch and installation of the AMS-02 detector on the ISS are scheduled for 2010.
1 Introduction

The universe is composed of 22% non-baryonic cold dark matter (CDM), whose nature is one of the outstanding questions in modern physics. The existence of dark matter from the Big Bang to the present day is confirmed by various astrophysical observations, including the recent Wilkinson Microwave Anisotropy Probe (WMAP) measurements of the cosmic microwave background [1]. Dark matter is considered to be a stable, neutral, weakly and gravitationally interacting massive particle (WIMP). Although the Standard Model (SM) of particle physics has been established by various experiments with extreme accuracy, it provides no viable dark matter candidate. SUSY theories predict the existence of relic particles from the Big Bang. The lightest SUSY particle (LSP) in R-parity conserving models is a good candidate for non-baryonic cold dark matter [2]. Its signals have been actively explored in both collider and astrophysics experiments. Direct detection relies on observing the elastic scattering of neutralinos in a detector. Indirect detection, on the other hand, depends on observing the annihilation products
in cosmic rays, such as neutrinos, positrons, antiprotons or gamma rays. Present experiments are just reaching the sensitivity required to discover or rule out some of the candidates, and major improvements are planned over the coming years. The Alpha Magnetic Spectrometer (AMS) is a high-energy particle physics experiment to be operated on the ISS from 2010 for at least three years [3]. One of its main physics motivations is the indirect search for dark matter in cosmic rays. SUSY dark matter particles could annihilate with each other and produce positrons, antiprotons and gamma rays as additional primary CR sources. The observation of a deviation from a simple power-law spectrum would be a clear signal of dark matter annihilation.
2 SUSY Dark Matter Search in CRs

The recent data from the WMAP experiment on the CMB anisotropies have confirmed the existence of CDM. However, the nature of the CDM remains a mystery. Among the CDM candidates, WIMPs are promising since their thermal relic abundances fall naturally within the cosmologically favoured range if they weigh less than 1 TeV [4]. In most SUSY models, the lightest neutralino is a well-studied WIMP candidate. It is a superposition of the super-partners of the gauge and Higgs fields. Great effort has been devoted to detecting neutralino dark matter directly or indirectly. Indirect detection relies on pair annihilation of neutralinos into SM particles with significant cross sections, through the decay of short-lived heavy leptons, quarks and gauge bosons. Various experiments are designed to observe anomalous CRs produced by neutralino annihilations, such as high-energy neutrinos from the Sun or Earth, gamma rays from the galactic centre, and positrons and anti-protons from the galactic halo.

Protons, helium and electrons, as well as carbon, oxygen, iron and other nuclei synthesized in stars, are primaries accelerated by astrophysical sources. Nuclei such as lithium, beryllium and boron are secondaries, which are not abundant end-products of stellar nucleosynthesis. Anti-protons and positrons are also in large part secondaries, produced in interactions of the primaries with the interstellar medium (ISM) during propagation. Accurate measurements of cosmic anti-protons (or anti-deuterons) in the low-energy region, and of positrons and gamma rays at high energies, with efficient background rejection, are necessary to search for primary contributions from neutralino annihilations. The secondary anti-proton flux has a unique spectral peak around 2 GeV due to the production threshold and decreases sharply toward lower energies, as shown in Fig. 1. This provides an opportunity to test exotic contributions, since the neutralino-induced components do not drop as fast at low energies [5]. The recent BESS [6] balloon-borne experiment has measured the anti-proton flux well below a few GeV, where the neutralino-induced signal is expected to be detectable, but the experimental errors are still too large to reach any final conclusion. The major uncertainties affecting the neutralino-induced antiproton flux come from nuclear physics cross sections, propagation parameters and the thickness of the diffuse halo. However, the AMS-02 experiment can take full advantage of precise measurements of cosmic nuclei ratios such as B/C and 10Be/9Be to constrain the astrophysical parameters, and will disentangle the
signal from the background with much higher statistics and reduced systematic uncertainties.

A subtle excess in the cosmic positron fraction from above 5 GeV up to 50 GeV, observed by HEAT [7], has stimulated numerous calculations and interpretations involving dark matter annihilations in the galactic halo. However, it requires a significant flux enhancement factor, which could be explained by clumpy dark matter, since the annihilation flux is proportional to the square of the dark matter density. Confirmation of this excess requires further sensitive measurements with good proton background rejection power, higher statistics and coverage of the wide energy range between 1 GeV and several hundred GeV. The shape also depends on the annihilation cross section and the degree of local inhomogeneity in dark matter halo profiles.
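The clumpiness argument can be stated compactly. Since the annihilation rate scales as the square of the local dark matter density, substructure enhances the mean flux relative to a smooth halo of the same average density; a standard way to quantify this (a generic definition, not a result specific to this analysis) is the boost factor

$$ B = \frac{\langle \rho^{2} \rangle}{\langle \rho \rangle^{2}} \geq 1 , $$

which equals one for a smooth halo and grows with clumpiness, since the variance of the density is non-negative.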
Fig. 1. Compilation of measured CR fluxes with background calculations using the GALPROP program. The upper-left panel (1) shows the χ2 fit of the GALPROP models to recent experimental data; the primary proton and electron injection indices are considered the main uncertainty. The upper-right panel (2) shows the anti-proton data with the model expectation for different primary injection spectra; the solid line is modulated and the yellow band indicates the uncertainty in the primary proton and electron injection rate. The lower-left panel (3) shows a compilation of positron fraction measurements with the model prediction. The lower-right panel (4) shows the corresponding case for diffuse gamma rays from the galactic centre region.
High-energy diffuse gamma-ray emission is produced primarily by CR protons interacting with the ISM via nucleon-nucleon processes, by high-energy electron bremsstrahlung in interactions with the interstellar gas, and by inverse Compton scattering off low-energy photons. However, galactic or extragalactic diffuse gamma rays could also be generated from the decay of neutral pions (π0) produced in jets from neutralino annihilations.

The public GALPROP [8] code is used to investigate the galactic CR propagation model. It is based on the known galactic matter density and on the CR production rate and spectrum measured at the Earth. The proton and helium injection spectra and the propagation parameters are chosen to reproduce the most recent measurements of primary and secondary nuclei, antiprotons, electrons and positrons, as well as gamma rays and synchrotron radiation. Fig. 1 shows the prediction of the conventional GALPROP model. The observed anti-protons, positron fraction and diffuse gamma rays show an excess over predictions: below 2 GeV for anti-protons, above 8 GeV for positrons, and in the few-GeV region for diffuse gamma-ray emission.
Fig. 2. SUSY interpretation of the CR excesses based on the mSUGRA model. A numerical χ2 minimization is displayed in the (m0, m1/2) plane. In order to satisfy the gμ−2 constraint, tanβ = 40 is chosen. The remaining common parameters are set to A0 = 0, mt = 172.5 GeV and sign(μ) = +1.
Several authors have presented each spectrum with a specific exotic model [9], but a successful simultaneous explanation is still under debate. In this paper, an optimized model is investigated to explain the anti-proton, positron and diffuse gamma-ray excesses simultaneously, based on the minimal supergravity (mSUGRA) model [10]. The number of free parameters is reduced to five in this scenario (four continuous and one discrete): the common gaugino mass (m1/2), the common scalar mass (m0), the trilinear scalar coupling (A0), the ratio of the neutral Higgs vacuum expectation values (tanβ) and the sign of the Higgsino mass parameter (μ). Besides the SUSY breaking input parameters, the bottom mass (mb), the strong coupling constant (αs) and the top mass (mt) can also have strong effects on mSUGRA predictions. In scanning the mSUGRA parameter space, the relic dark matter density is required to be consistent with the recent WMAP data [1]. Other strong constraints on mSUGRA models include the lower bounds on the Higgs boson and sparticle masses [11], the branching ratio of the b → s γ decay [12], the upper bound on the branching ratio of Bs → μ+ μ− [13], and the measurements of the muon anomalous magnetic moment (gμ−2) [14].

As shown in Fig. 2, a simultaneous χ2 minimization has been performed using the sum of the anti-proton, positron fraction and gamma-ray contributions. The overall χ2 fit includes only three additional free parameters, which enhance the signals of positrons, antiprotons and gamma rays independently. The most preferred point resides in the focus point region, with a neutralino mass of 90 GeV and a χ2 of 26.8 for 33 degrees of freedom, which corresponds to a probability of 76.8%, indicating good agreement. The fit also yields the positron fraction, anti-proton and gamma-ray spectra at a benchmark point in the focus point region. The positron flux develops a sharp bump around 40 GeV, corresponding to half the neutralino LSP mass, from annihilation into W gauge boson pairs (66%) and b-quark pairs (20%). The corresponding antiproton flux reproduces the mild excess of the data in the low-energy range. The GeV excess of the diffuse gamma-ray spectrum is obtained from the same neutralino pair annihilation [16].
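As a quick check of the quoted goodness of fit (assuming standard χ2 statistics), the fit probability follows from the χ2 survival function; this snippet is an illustration, not part of the analysis code:

# Fit probability for chi^2 = 26.8 with 33 degrees of freedom.
from scipy.stats import chi2

p_value = chi2.sf(26.8, df=33)              # P(chi^2 >= 26.8), the survival function
print(f"fit probability = {p_value:.1%}")   # ~76.8%, matching the quoted value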
3 AMS-02 Detector

The particle identification of CRs relies on precise measurements of rigidity, velocity, energy and electric charge. The AMS detector, shown in Fig. 3, uses a large superconducting magnet at its core and consists of a large-area silicon microstrip tracker, a transition radiation detector (TRD), a time-of-flight (ToF) system with anti-coincidence counters (ACC), a Ring Imaging Cherenkov counter (RICH) and an electromagnetic calorimeter (ECAL). The detector has an overall dimension of 3 × 3 × 3 m^3 and weighs about 7 tons. The TRD, ToF and Tracker have an acceptance of 0.5 m^2·sr, and the combination of Tracker, ToF and RICH provides cosmic nuclei separation up to Z = 26 (Fe) by dE/dx measurements [3]. The velocity is measured by the ToF and the RICH independently. The hadron rejection achieved with the TRD, Tracker and ECAL for positron identification is better than 10^5 up to 300 GeV. The key feature of the detector is the superconducting magnet, whose purpose is to extend the measurable energy range of particles and nuclei to the multi-TeV region with a high bending power BL^2 of 0.862 T·m^2.
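To get a feel for this bending power, the track sagitta can be estimated with the standard formula s ≈ 0.3·z·B·L^2/(8p), with p in GeV/c; this back-of-the-envelope sketch uses only the quoted BL^2 and is not an AMS-02 design calculation:

# Rough sagitta estimate for the quoted bending power BL^2 = 0.862 T*m^2.
BL2 = 0.862   # T * m^2

def sagitta_m(p_gev, bl2=BL2, charge=1):
    # Sagitta s ~ 0.3 * z * B * L^2 / (8 p), with p in GeV/c; result in metres.
    return 0.3 * charge * bl2 / (8.0 * p_gev)

for p in (10.0, 100.0, 500.0):
    print(f"p = {p:6.0f} GV  ->  sagitta = {sagitta_m(p) * 1e6:7.1f} micron")
# At 0.5 TV the sagitta is only ~65 micron, so a ~10 micron point resolution
# is consistent with the ~20 % rigidity resolution quoted below for 0.5 TV protons.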
Fig. 3. Layout of the AMS-02 detector from http://ams.cern.ch
Transition radiation is emitted as X-rays when a charged particle traverses the boundary between different dielectric materials with a Lorentz factor (γ = E/m) larger than about 1000; the TRD exploits this to separate positron (electron) events from the proton (anti-proton) background [15]. In the AMS-02 experiment, cosmic positrons (electrons) can be identified by using the TRD as a threshold device in the momentum range 1 GeV/c ≤ p ≤ 300 GeV/c, which considerably improves the search for a primary positron contribution from neutralino annihilations. The Tracker is composed of 8 layers of double-sided silicon micro-strip sensors inside the superconducting magnet. It measures the trajectory of incoming charged particles and determines the charge sign, with a magnetic rigidity resolution of 20% for 0.5 TV protons. The finer p-side strips are used to measure the bending coordinate and give a spatial resolution better than 10 μm. The ToF system acts as the primary fast trigger and consists of four layers of scintillator paddles, two between the TRD and the Tracker and two below the Tracker. The RICH measures the velocity of singly charged particles with a relative uncertainty of 0.1% and contributes to the charge separation of nuclei together with the Tracker and ToF. The ECAL measures the deposited energy and direction of electrons, positrons and gamma rays with an angular resolution of around 1°. The fine-grained sampling electromagnetic calorimeter consists of 9 superlayers along its depth; it can image shower development in 3D, allowing discrimination between hadronic and electromagnetic cascades.
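The threshold behaviour makes the positron/proton separation transparent: at equal momentum the much lighter positron sits far above γ ≈ 1000 while the proton stays far below. A small illustration using textbook particle masses (not AMS-02 specifications):

# Lorentz factors gamma = E/m for positrons vs. protons at equal momentum.
import math

M_E, M_P = 0.000511, 0.938   # particle masses in GeV/c^2

def lorentz_gamma(p_gev, mass_gev):
    # gamma = E/m = sqrt(p^2 + m^2) / m
    return math.sqrt(p_gev ** 2 + mass_gev ** 2) / mass_gev

for p in (1.0, 10.0, 300.0):
    print(f"p = {p:5.0f} GeV/c: gamma(e+) = {lorentz_gamma(p, M_E):8.0f}, "
          f"gamma(p) = {lorentz_gamma(p, M_P):6.1f}")
# Already at 1 GeV/c a positron has gamma ~ 2000 and radiates, while even a
# 300 GeV/c proton has gamma ~ 320 and does not; hence the TRD veto works
# over the whole 1-300 GeV/c range.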
4 Summary

The AMS is a particle physics detector designed to measure CR spectra on the ISS during a three-year mission starting in 2010. The main physics motivations are the precise measurement of CRs for the indirect dark matter search and the direct search for
Fig. 4. AMS-02 expectation for the positron fraction after three years of operation on the ISS. The error bars account for positron identification and for proton, antiproton and electron background rejection, as a function of positron kinetic energy [16].
heavy anti-matter in space. AMS-02 will measure the cosmic positron spectrum with unprecedented accuracy up to 300 GeV and considerably improve the search for a primary positron contribution from neutralino annihilations, as shown in Fig. 4. A search for high-energy gamma-ray emission from the galactic centre is also planned for AMS-02, using a combination of the tracker and the calorimeter with good angular and energy resolution. The flux of anti-deuterons is expected to be strongly suppressed compared with anti-protons, but its signal could be observable by AMS-02 below energies of 3 GeV/nucleon, where secondary anti-deuterons are absent. This indirect dark matter search will complement accelerator and direct detection experiments. We are now on the threshold of a new and exciting era of unexpected discoveries at the frontiers of particle physics on the ground and in space.
References

[1] WMAP Collaboration, D.N. Spergel et al., Astrophys. J. Suppl. Ser. 148, 175 (2003); astro-ph/0603449 (2006)
[2] H.E. Haber, G.L. Kane, Phys. Rep. 117, 75 (1985); S.P. Martin, Phys. Rev. D62, 067505 (2000); H.P. Nilles, Phys. Rep. 110, 1 (1984)
[3] AMS-02 Collaboration, 'AMS on ISS, Construction of a particle physics detector on the International Space Station', submitted to Nucl. Instrum. Meth. A (2005); C.H. Chung, Proceedings of Beyond Einstein: Physics for the 21st Century (EPS13), Bern, Switzerland (2005)
[4] G. Jungman, M. Kamionkowski, K. Griest, Phys. Rep. 267, 195 (1996); L. Bergstrom, Rep. Prog. Phys. 63, 793 (2000); G. Bertone, D. Hooper, J. Silk, Phys. Rep. 405, 279 (2005)
[5] L. Bergstrom et al., Phys. Rev. D59, 043506 (1999); F. Donato et al., Phys. Rev. D69, 063501 (2004)
[6] BESS Collaboration, H. Matsunaga et al., Phys. Rev. Lett. 81, 4052 (1998); BESS Collaboration, S. Orito et al., Phys. Rev. Lett. 84, 1078 (2000)
[7] J.J. Beatty et al., Phys. Rev. Lett. 93, 241102 (2004)
[8] A.W. Strong, I.V. Moskalenko, ApJ 509, 212 (1998); ApJ 664, L91 (2004)
[9] W. de Boer, C. Sander, V. Zhukov, A.V. Gladyshev, D.I. Kazakov, astro-ph/0508617
[10] G.L. Kane et al., Phys. Rev. D49, 6173 (1994) (hep-ph/9312272)
[11] LEP Higgs Working Group, hep-ex/0107030, hep-ex/0107031, LHWG-Note 2005-01; the latest results are available via http://lephiggs.web.cern.ch/LEPHIGGS/papers/; LEPSUSYWG, ALEPH, DELPHI, L3, OPAL Collaborations; the latest results are available via http://www.cern.ch/lepsusy/. Accessed 3 July 2008
[12] H. Asatrian et al., hep-ph/0512097
[13] G. Buchalla, A.J. Buras, Nucl. Phys. B400, 225 (1993); Nucl. Phys. B412, 106 (1994); Nucl. Phys. B548, 309 (1999); M. Misiak, J. Urban, Phys. Lett. B451, 161 (1999); CDF and D0 Collaborations, R. Bernhard et al., hep-ex/0508058; A. Dedes, A. Pilaftsis, Phys. Rev. D67, 015012 (2003); S. Baek, Y.G. Kim, P. Ko, hep-ph/0406033, hep-ph/0506115
[14] Muon g-2 Collaboration, G. Bennett et al., Phys. Rev. D62, 091101 (2000); Phys. Rev. Lett. 86, 2227 (2001); Phys. Rev. Lett. 89, 101804 (2002); Phys. Rev. Lett. 92, 161802 (2004) (hep-ex/0401008); hep-ex/0602035
[15] C.H. Chung et al., Nucl. Phys. Proc. Suppl. 113, 154-158 (2002); Nucl. Instrum. Meth. A522, 69-72 (2004); IEEE Trans. Nucl. Sci. 51, 1365-1372 (2004)
[16] C.H. Chung et al., Proceedings of the 15th International Conference on Supersymmetry and the Unification of Fundamental Interactions (SUSY07), Karlsruhe, Germany (2007)
Effects of Methylation Inhibition on Cell Proliferation and Metastasis of Human Breast Cancer Cells

Seok Heo1 and Sungyoul Hong2

1 Department of Pediatrics, Medical University Vienna, Vienna, Austria
2 Department of Genetic Engineering, Faculty of Life Science and Technology, Sungkyunkwan University, Suwon, Republic of Korea
[email protected]
Abstract. Breast cancer is the second most fatal cancer in women. Adenosine has been shown to induce apoptosis through various mechanisms, including adenosine receptor activation, adenosine monophosphate (AMP) conversion, AMP-activated protein kinase activation, and conversion to S-adenosylhomocysteine (AdoHcy), an inhibitor of S-adenosylmethionine-dependent methyltransferases. Since the pathways involved in the anticancer activity of adenosine analogues are still not clearly understood, I examined the relationship between methyltransferase inhibition and the anticancer, antimetastatic effects of adenosine dialdehyde (AdOx), a known AdoHcy hydrolase inhibitor whose action results in methylation inhibition, using non-invasive and invasive human breast cancer cells (HBCs; MCF-7 and MDA-MB 231, respectively). Morphological changes and condensed chromatin were observed in HBCs treated with AdOx. Cytotoxicity was increased, and DNA synthesis and cell counts were decreased, by AdOx in HBCs, with cytotoxicity higher in MCF-7 than in MDA-MB 231. In MDA-MB 231, AdOx lowered the expression of G1/S regulators and the tumor suppressor p21WAF1/Cip1. In MCF-7, apoptotic molecules and tumor suppressor p21WAF1/Cip1 expression were induced by AdOx. Colony dispersion and cell migration were inhibited by AdOx, and the activities of matrix metalloproteinase-2/-9, key enzymes for cancer invasion and migration, were decreased by AdOx. However, the mRNA levels of MMP-2 and MMP-9 did not change in accordance with the changes in enzymatic activity. Mammary-specific serine protease inhibitor was increased by AdOx in both cell lines. These results suggest that methyltransferase inhibition by AdOx may decrease cell viability and influence cell cycle distribution and migratory potential, providing evidence for methylation inhibition as a potential target for anticancer and antimetastatic treatment of HBCs.
1 Introduction

1.1 Cancer Metastasis and Breast Cancer

A primary tumor is a mass of cells, growing rapidly compared with normal cells, present at the site of initial transformation. Cancer would be of little clinical importance if the cells at the primary site remained there. Abnormal growth and proliferation of cancer cells exerts pressure on neighbouring tissue, which is ultimately noxious to the host; however, a well-defined tumor mass can be removed by surgery in a straightforward and permanent manner. The difficulty in recognizing and excising the primary tumor arises because tumor cells do not always remain at the primary site, but move away from the primary to
secondary sites. Metastasis of primary tumor cells to secondary sites proceeds in six steps: aggressive proliferation of growth-transformed cells; breakdown of the basal lamina; intravasation and circulation through the bloodstream or lymphatic system; attachment to the inner wall of a blood vessel; extravasation; and abnormal proliferation at the metastasized site. Two steps are crucial for cancer cells to move to other sites: (1) invasion, the movement of cells into adjacent tissue composed of other cell types; and (2) metastasis, the movement of cells to distant secondary sites, usually through the blood or lymphatic vessel system or via body cavities. Substantial invasion usually occurs before any metastasis starts [1].

Invasion is, in essence, the expansion of tumor cells into surrounding tissues as a consequence of uninterrupted cell proliferation, although movement of the rapidly proliferating cells may also occur. Cancer cells tend to lose their adhesiveness to each other, to neighbouring tissues and to the extracellular matrix (ECM), break away from the mass of the tumor, separate from each other and finally start to move away from their original sites. Such movement does not occur in normal tissue. Normal cells move both in culture and in the course of embryological development, but when normal mature cells in culture contact each other, they usually stop not only growing but also moving. Cancer cells, at least under culture conditions, are not controlled by cell-cell contact; in the body, too, they continue to grow and move into neighbouring tissues. To do so, most cancer cells release proteases that help them digest extracellular components, thereby facilitating invasion [1].

Metastasis, the spreading and established growth of tumor cells at a site away from the original tumor, is characteristic of the final stages of malignant cancer. Metastasis is a highly complicated process. Tumor cells are the prime movers, but metastasis requires the participation of several types of normal cells at the primary tumor site and at the metastatic site. Interaction of tumor cells with host extracellular matrices and host basement membrane takes place at several points during the process. Thus an important aspect of metastasis is the ability of tumor cells to degrade ECM and to bind to and degrade basement membranes. Various kinds of proteolytic enzymes are related to the ECM-degrading process, including: (1) matrix metalloproteinases (MMPs) such as transin, the stromelysins and type IV collagenase; (2) plasminogen activator, which converts plasminogen into plasmin; (3) cathepsin B; (4) elastase; and (5) glycosidases. These enzymes are secreted by tumor cells or by normal cells, and they can be induced by extracellular stimulants.

In order for cancer cells to move to distal regions, they must first migrate away from the primary tumor mass as part of the invasive process. Numerous migratory factors have been identified that appear to be associated with cancer cell migration [2]. Previous studies showed that cancer cells appear to secrete migratory factors in culture and can also stimulate non-migratory cells in culture. Migration of cancer cells throughout the body is not by itself sufficient to cause metastasized tumors, because the environmental conditions of a secondary site differ from those of the original site.
The blood and lymphatic vessel systems, the major transport pathways for cancer cells, are harsh environments in which to survive, so most cancer cells either die or reach the lung, where they are either broken up or remain quiescent. Blood serum is thought to contain particular substances that are toxic to cancer cells.
The interactions of cells with the ECM are critical for the normal development and function of organisms. Modulation of cell-matrix interactions occurs through the action of unique proteolytic enzymes responsible for degrading a variety of ECM proteins. By regulating the integrity and composition of the ECM, these enzyme systems play a crucial role in the control of signals and regulate a variety of cellular phenomena such as cell proliferation, differentiation and cell death. The loss and alteration of ECM must be highly regulated, because uncontrolled proteolysis contributes to atypical development and to the generation of many pathological conditions characterized by excessive degradation of ECM.

Matrix metalloproteinases (MMPs) are a well-known group of enzymes that regulate cell-matrix composition. The MMPs are metal ion-dependent (especially zinc-dependent) endopeptidases known for their ability to cleave one or several ECM components, as well as non-matrix proteins. They form a large family of proteases that share general structural and functional elements but are the products of different genes. All members contain a pro-peptide and a catalytic domain; the catalytic domain carries the catalytic machinery, including a conserved methionine and the metal ion binding site. Binding of metal ions such as zinc and calcium is necessary for maintaining the three-dimensional structure, stability and enzymatic activity of MMPs. Substrate specificity differs among these enzymes. Most cells synthesize and immediately secrete MMPs into the ECM [3]; however, inflammatory cells such as neutrophils store these proteases. The tissue distribution of these proteases is divergent. Some MMPs are synthesized constitutively (e.g. the 72 kDa gelatinase), while others are synthesized mainly upon stimulation (e.g. collagenase). Abundant evidence exists on the role of MMPs in both normal and pathological conditions, including embryogenesis, wound healing, arthritis, cardiovascular disease, inflammation and cancer. The expression patterns of MMPs have interesting implications for the use of metalloproteinase inhibitors as therapeutic agents: inhibition of specific MMPs in disease states, and regulation of individual MMP genes through chromatin structure, can be useful therapeutic strategies [4].

Breast cancer is the most common malignant tumor in women. It starts as a local disease, but in most fatal cases it is not the primary but the secondary tumors that are the main cause of death. Recent advances in diagnosis have improved women's survival rates: the survival rate of breast cancer patients younger than 50 years of age has increased by 10%, and in older women the increase is 3% (Early Breast Cancer Trialists' Collaborative Group, 2005). Breast cancer is a clinically heterogeneous disease. Approximately 10-15% of breast cancer patients have an aggressive phenotype and develop distant metastases within 3 years after the initial recognition of disease, but metastases can also appear as long as 10 years later; patients with breast cancer therefore remain at risk of metastasis for their entire lifetime. The heterogeneous character of breast cancer makes it difficult to assess risk factors as well as to characterize a cure for this disease. Improving our insight into the molecular mechanisms of breast cancer might also improve the clinical management of the disease.
1.2 Regulation of the Cell Cycle in Cancer Progression

Oncogenic transformation of normal cells results in cell cycle abnormality and apoptosis dysregulation. Upon activation of a mitogenic signaling cascade, cells commit to enter the cell cycle. The S phase, in which DNA is synthesized, is followed by the M
phase, in which the cell separates into two daughter cells. Between the S phase and the M phase lies the G2 phase, in which mismatched nucleotide base pairs synthesized during S phase are repaired. In contrast, the G1 phase, located between the M and S phases, represents the period of cell growth, including synthesis of subcellular organelles and proteins. Cells may stop cycling in G1 before entering S phase and enter a state of quiescence called the G0 phase; to resume growth and division, they have to re-enter the cell cycle at S phase. In order for cells to continue cycling into the following phase, the prior phase has to be completed; otherwise, cell cycle checkpoint mechanisms are activated. Cell cycle checkpoints are precisely regulated by cyclin-dependent kinase (cdk) complexes, which activate regulators via phosphorylation of serine or threonine residues. These complexes are composed of cdks and cofactors, including cyclins and endogenous cdk inhibitors (CKIs) such as p21cip/waf1. One crucial target of cdks is the retinoblastoma protein (Rb), a known tumor suppressor protein. Manipulating the cdk complexes can therefore be a valuable strategy in cancer therapeutics.

The evidence that most cancer cells are aneuploid, reflecting abnormal sister chromatid separation, has drawn attention to the mitotic checkpoints. Depletion of one of several mitotic checkpoint components, for example by intracellular antibodies, short interfering RNA or dominant-negative alleles, induces cell death in vitro. Another target, relevant not only to cell cycle regulation but also to apoptosis, is the tumor suppressor p53, which is inactivated in many human cancer cells [5]. As the majority of tumor cells have lost the G1 checkpoint as a result of p53 dysfunction, but not the G2 checkpoint, upon DNA damage they arrest in G2. Therefore, therapeutic approaches combining a DNA-damaging agent with small molecules that selectively disrupt the G2 checkpoint represent a promising strategy, as cells are driven into mitosis with unrepaired DNA lesions.

1.3 Regulation of Apoptosis

Apoptosis is a morphological event characterized by cell shrinkage, plasma membrane blebbing, chromatin condensation and nuclear fragmentation [6]. Cysteine aspartyl-specific proteases (caspases) are responsible for the apoptotic morphology; they cleave various substrates involved in the maintenance of the cytoskeleton, chromatin structure and nuclear envelope. Apoptosis proceeds by two main pathways: the intrinsic pathway, characterized by depolarization of mitochondria leading to caspase 9 activation, and the extrinsic pathway, characterized by activation of death receptors by their corresponding ligands and sequential activation of caspase 8. Caspases 9 and 8, known as initiator caspases, promote the activation of effector caspases such as caspase 3, leading to apoptosis [6]. In addition to p53, other proteins relevant to tumorigenesis, such as bcl-2, AKT and ras, are also important in the inhibition of apoptosis. Aberrant upregulation of bcl-2 is observed in various human cancers, and ras, one of the well-known oncogenic proteins, promotes abnormal proliferation and a decrease in apoptosis. Although cell cycle regulation and apoptosis have distinct physiological characteristics, there is abundant circumstantial evidence that the cell cycle and apoptosis are closely related.
In vivo, apoptosis is detected primarily in proliferating tissue and is particularly evident after rapid cell growth. Apoptosis may therefore serve a complementary role, balancing the cell number between increments due to proliferation and reduction
due to programmed cell death. There are many kinds of positive and negative signals that induce or inhibit apoptosis, but ultimately these signals must impinge on the cell cycle if proliferation is to be halted and cells are to die. Several lines of evidence suggest that apoptosis induction is cell cycle-specific [7]: entry into apoptosis is not possible throughout the whole G1 phase but seems to occur predominantly at entry into S phase. Experiments using apoptosis-inducing agents indicate that cells have to progress to late G1 phase for apoptosis to occur; in other words, arrest prior to this stage delays or inhibits apoptosis, while arrest after this stage promotes apoptosis. The key molecule in this regulation is p53, which can be considered the main switch and the molecular meeting point of the cell cycle and apoptosis. One of the best established mechanisms of apoptosis induction by p53 is the induction of Bax. Another protein that influences apoptosis is Bcl-2, and these two proteins are critical to the regulation of apoptosis: Bcl-2 enhances cell survival while Bax promotes cell death, and the ratio of these two proteins determines the cell fate. Under certain conditions, cyclins and cdks also seem to be essential for apoptosis [8]: activation of the cyclin-cdk complexes involved in the G1-S and G2-M checkpoints has been observed in a number of differentiated cells during apoptosis. Caspases, in turn, can serve as a meeting point of apoptosis and cell cycle regulation because of their catalytic activity against cdk family members.

1.4 S-Adenosylmethionine and Methylation

S-Adenosylmethionine (AdoMet)-dependent methylation can occur on a large number of substrates, including proteins, RNA, DNA and lipids. The synthesis of AdoMet is catalyzed by AdoMet synthetase, which transfers the adenosyl group of ATP to methionine. AdoMet exists as two stereoisomers: (S,S)-AdoMet is the biologically active form, while (R,S)-AdoMet is a potent inhibitor of methyltransferases [9]. AdoMet is one of the most widely used and versatile enzyme substrates. The sulfonium group of AdoMet allows it to act both as a methyl donor (it is the major methyl donor in all living organisms) and as a monoalkyl donor in the synthesis of polyamines. As a methyl donor, AdoMet is used in many reactions that transfer the methyl group to nitrogen or oxygen atoms of many different substrates via substrate-specific methyltransferases. S-Adenosylhomocysteine (AdoHcy) is generated after the transfer of methyl groups from AdoMet to the various substrates, and is then hydrolyzed to adenosine (Ado) and homocysteine (Hcy) by AdoHcy hydrolase.

Proteins can undergo posttranslational methylation at one or more nucleophilic side chains in a variety of cell types, from prokaryotic to eukaryotic organisms. Methylation alters the steric orientation, charge and hydrophobicity of the protein molecule involved, with global effects on processes such as the repair of damaged proteins, the response to environmental stress, cell growth and differentiation, and carcinogenesis. Methylation of various amino acid residues is found in nature. The methyl group donated by AdoMet can be transferred by esterification to the γ-carboxyl group of one or more glutamic acid residues; in eukaryotic cells, proteins are methylated on carboxyl groups or on the side-chain nitrogens of lysine, arginine or histidine.
Protein methylation is classified into three subtypes: N-methylation, at the side chains of arginine, lysine and histidine residues; O-methylation, at the free carboxyl groups of glutamyl and aspartyl residues; and S-methylation, at
the side chains of methionine and cysteine residues [10]. Substrate-specific methyltransferases using AdoMet as a methyl donor are categorized into three groups: protein methyltransferase I (protein arginine N-methyltransferases; EC 2.1.1.23), protein methyltransferase II (protein carboxyl O-methyltransferases; EC 2.1.1.24) and protein methyltransferase III (protein lysine N-methyltransferases; EC 2.1.1.43). By methylating various regions of proteins, these enzymes can affect hydrophobicity, protein-protein interactions and other cellular processes. Protein methylation is involved in signal transduction, transcriptional regulation, heterogeneous nuclear ribonucleoprotein export, cellular stress responses and the aging and repair of proteins. However, the potential role of protein methylation as a posttranslational modification in signal transduction has not been explored with the breadth and intensity that protein phosphorylation and acetylation have enjoyed in recent years. Between the first publications on protein methylation in the 1960s and the early 1980s, a variety of target amino acids and substrates were discovered, but in spite of these advances the biological importance of protein methylation remained unclear. Recent studies have made considerable progress in elucidating the functions of protein methylation, providing evidence for roles in gene expression regulation and various kinds of signal transduction.

Besides proteins, DNA can also be methylated, by DNA-specific methyltransferases. Methylation is normally added only at the 5 position of cytosine, in a post-DNA-synthesis reaction catalyzed by one of several DNA methyltransferases. DNA methylation plays a key role in the suppression of the activity of endogenous parasitic sequences, in chromatin remodeling, and in the suppression of gene expression (epigenetic silencing). A prevalent definition of epigenetics is the study of mitotically and/or meiotically heritable changes in gene function that cannot be accounted for by alterations in DNA sequence. Epigenetics has emerged as a crucial biological process in all multicellular organisms. Several epigenetic processes have been described, and all seem to be interconnected: besides DNA methylation, posttranslational modification of histones appears to mediate epigenetic alterations in many organisms, and RNA interference is an important epigenetic mechanism in both plant and mammalian cells.

DNA methylation in the human genome occurs extensively at cytosine residues within the symmetric dinucleotide motif CpG. Methylated cytosine accounts for 0.75-1% of the total DNA and almost 70% of all CpG dinucleotides. Methylated CpGs are distributed all over the genome, with especially high densities in promoters and in the transposons accumulated in the genome. While most CpG islands remain unmethylated and are associated with transcriptionally active genes, certain CpG islands are normally methylated. The DNA methylation pattern of a cell is accurately reproduced after DNA synthesis and stably inherited by the daughter cells. DNA methylation is mediated by enzymes known as DNA methyltransferases (DNMTs), and much evidence accounts for their various roles. The epigenetic balance of normal cells undergoes a striking transformation in cancer cells.
These epigenetic abnormalities can be summarized in five categories: (1) transcriptional silencing of tumor suppressors; (2) global genomic hypomethylation; (3) loss of imprinting events; (4) epigenetic loss of intragenomic parasite repression; and (5) genetic lesions in chromatin-related genes. One thing is certain: promoter CpG island hypermethylation of tumor suppressor genes is a key marker common to all human cancers. Data accumulated in the last few years show that cancer cells exhibit significant changes in DNA methylation patterns compared with their normal counterparts.
These changes can be summarized as global hypomethylation of the genome accompanied by focal hypermethylation events. The origin of these changes is largely unknown, but the most emphasized consequence of aberrant DNA methylation is transcriptional inactivation of tumor suppressor genes by promoter methylation. De novo methylation of these genes can occur at an early stage of cancer progression and disrupt functions including control of the cell cycle, apoptosis and signal transduction. On the other hand, global hypomethylation has been implicated in chromosome instability, loss of imprinting, and reactivation of transposons and retroviruses, all of which may contribute to carcinogenesis. Aberrant DNA methylation patterns are, however, very complex, varying among individuals and among cell and cancer types; these patterns suggest that methylation of specific genes may contribute to the development and progression of specific tumor types. For these reasons, the DNA methylation patterns of target genes may serve as key markers for the diagnosis or prognosis of cancer and other methylation-related diseases [11].

Protein methylation marks, especially on histones, can interact with each other, and DNA methylation cooperates with protein and other kinds of methylation. Histones can be methylated by histone-specific methyltransferases, and the resulting marks cross-talk with each other by acting as molecular switches, enabling or blocking the setting of other covalent marks [12]; this also implies a chronology in the establishment of specific modification patterns. Cross-talk can take place between modifications on different histones. DNA methylation, especially of CpG islands within specific promoters, brings about a heritable chromatin state of transcriptional repression, although the order of events leading to heterochromatin formation may differ from cell to cell. In any case, the epigenetic control of gene expression requires the cooperation of histone modification and DNA methylation, and malfunction of either of these processes results in the aberrant gene expression involved in almost all human diseases, including cancer. There is evidence for the silencing of tumor suppressor and other cancer-related genes by aberrant methylation of CpG islands in their respective promoter regions; DNA methylation of CpG islands has been identified as a silencing mechanism alternative to mutations and chromosomal deletions [13]. Gene methylation interacts with alterations in chromatin structure to silence gene expression, making it difficult for transcription factors to access their binding sites.

As a result of DNA/protein methylation, two products are generated: the methylated substrate and the by-product AdoHcy. Importantly, AdoHcy itself can function as a potent inhibitor of AdoMet-dependent methylation; cells therefore need to break AdoHcy down further into adenosine and homocysteine via S-adenosylhomocysteine hydrolase (AdoHcy hydrolase). Adenosine dialdehyde (AdOx) is a potent AdoHcy hydrolase inhibitor that increases the AdoHcy level and reduces the activity of methyltransferases in cultured cells [14]; AdOx can therefore inhibit AdoMet-dependent methylation. Previous studies have shown that various DNA-specific methylase inhibitors have antineoplastic effects and promising preclinical and clinical activity, especially in leukemia [15, 16].
The action mechanism of 5-aza-CdR, for example, is related to the activation of tumor suppressors and the induction of terminal differentiation or senescence of cancerous cells. AdOx, by contrast, inhibits methylation of both DNA and protein at the same time. Although it has a potent inhibitory effect on DNA and protein methylation, there is no clear evidence about the effect of methylation inhibition by AdOx on metastatic characteristics, focusing on MMPs and other metastasis-related molecules such as maspin, the mammary-specific serine protease inhibitor,
and TIMP, the tissue inhibitor of matrix metalloproteinases. The regulatory mechanisms of MMP activity under DNA/protein methylation inhibition are also unclear.
2 Conclusions
In this review, we have proposed a relationship between cancer progression and DNA/protein methylation. We have also analysed the effect of methylation inhibition by adenosine dialdehyde on different types of breast cancer cells. To find definitive evidence of the relationship between breast cancer progression and DNA/protein methylation, further experiments are in progress using biochemical and proteomic tools.
References
[1] SB Oppenheimer (2006) Cellular basis of cancer metastasis: A review of fundamentals and new advances. Acta Histochem 108:327-334
[2] W Jiang and IF Newsham (2006) The tumor suppressor DAL-1/4.1B and protein methylation cooperate in inducing apoptosis in MCF-7 breast cancer cells. Mol Cancer 5:4
[3] JF Woessner Jr. (1991) Matrix metalloproteinases and their inhibitors in connective tissue remodeling. FASEB J 5:2145-2154
[4] LM Matrisian (1994) Matrix metalloproteinase gene expression. Ann N Y Acad Sci 732:42-50
[5] KH Vousden, X Lu (2002) Live or let die: the cell's response to p53. Nat Rev Cancer 2:594-604
[6] JC Reed (2002) Apoptosis-based therapies. Nat Rev Drug Discov 1:111-121
[7] W Meikrantz and R Schlegel (1995) Apoptosis and the cell cycle. J Cell Biochem 58:160-174
[8] LL Rubin, CL Gatchalian, G Rimon, SF Brooks (1994) The molecular mechanisms of neuronal apoptosis. Curr Opin Neurobiol 4:696-702
[9] RT Borchardt, YS Wu (1976) Potential inhibitors of S-adenosylmethionine-dependent methyltransferases. 5. Role of the asymmetric sulfonium pole in the enzymatic binding of S-adenosyl-L-methionine. J Med Chem 19:1099-1103
[10] WK Paik and S Kim (1990) Protein methylation. In: J Lhoest, C Colson (eds) Ribosomal protein methylation. CRC Press, Boca Raton, FL, pp 155-178
[11] AS Wilson, BE Power, PL Molloy (2007) DNA hypomethylation and human diseases. Biochim Biophys Acta 1775:138-162
[12] W Fischle, Y Wang, CD Allis (2003) Histone and chromatin cross-talk. Curr Opin Cell Biol 15:172-183
[13] RL Momparler and V Bovenzi (2000) DNA methylation and cancer. J Cell Physiol 183:145-154
[14] RF O'Dea, BL Mirkin, HP Hogenkamp, DM Barten (1987) Effect of adenosine analogues on protein carboxylmethyltransferase, S-adenosylhomocysteine hydrolase, and ribonucleotide reductase activity in murine neuroblastoma cells. Cancer Res 47:3656-3661
[15] RL Momparler, J Bouchard, N Onetto, GE Rivard (1984) 5-aza-2'-deoxycytidine therapy in patients with acute leukemia inhibits DNA methylation. Leuk Res 8:181-185
[16] GE Rivard, RL Momparler, J Demers, P Benoit, R Raymond, K Lin, LF Momparler (1981) Phase I study on 5-aza-2'-deoxycytidine in children with acute leukemia. Leuk Res 5:453-462
Proteomic Study of Hydrophobic (Membrane) Proteins and Hydrophobic Protein Complexes
Sung Ung Kang1,2, Karoline Fuchs2, Werner Sieghart2, and Gert Lubec1,*
1 Department of Pediatrics, Division of Biochemistry and Molecular Biology, Medical University of Vienna
2 Center for Brain Research, Medical University of Vienna
* Corresponding author: Dept. of Pediatrics, Medical University of Vienna, Waehringer Guertel 18, 1090 Vienna, Austria, [email protected]
Abstract. Over the past decade, understanding of the structure and function of membrane proteins has advanced significantly, as has the experimental methodology for their detailed characterization. Detergents have played significant roles in this effort: they serve as tools to isolate, solubilize, and manipulate membrane proteins for subsequent biochemical and physical characterization. The combination of detergents with various separation methods coupled to mass spectrometry, e.g. MALDI-TOF/TOF and nano-HPLC-ESI-Q-TOF/MS/MS, now makes it possible to examine the expression of membrane proteins. This study, establishing separation methods for membrane proteins on two modified gel-electrophoresis systems (16-BAC and BN-PAGE, each followed by SDS-PAGE), should make the components of membranes increasingly amenable to identification and characterization. To study the structure (complexes) and function of membrane proteins, we must first pre-fractionate enriched membrane proteins, or isolate and purify membrane complexes. Such proteins can be solubilized by high-salt solutions or detergents, which have affinity both for hydrophobic groups and for water. Because detergents bind preferentially over the hydrophobic regions when integral proteins are exposed to aqueous solution, the protein molecules are prevented from aggregating and maintain their native conformation. Subsequently, diverse kinds of electrophoretic analysis combined with mass spectrometry have been applied with site-specific enzymes (trypsin, chymotrypsin, CNBr and Asp-N). The final goal is to enable high-throughput analysis of ion-channel proteins and major neurotransmitter receptor complexes within the central nervous system by an electrophoretic method allowing quantification with subsequent unambiguous protein identification.
1 Introduction
1.1 Theoretical Study of Biological Membranes and Detergents for the Study of Membrane Transporters (Complexes) in Mammalian Brain
1.1.1 Biological Membrane Proteins and Membrane Protein Complexes in Mammalian Brain
The membranes of living organisms are involved in many aspects of the life, growth and development of all cells, and they are also important targets in various diseases; e.g.
hyperkalemic periodic paralysis and myasthenia gravis involve ion channel defects resulting from genetic mutations or from the actions of specific antibodies that interfere with channel function [1, 2]. Despite this importance, analysis of the membrane proteome has been slow, because the predominant structural elements of these membranes, lipids and proteins, are major insoluble components enriched in hydrophobic amino acids (alanine, valine, isoleucine, leucine, methionine, phenylalanine, tyrosine, tryptophan). Biological membranes are composed of phospholipids and proteins, where phospholipids can be viewed as biological detergents. The majority of the lipids that make up the membrane contain two hydrophobic groups connected to a polar head. This molecular architecture allows lipids to form structures called lipid bilayers, in which the hydrophobic chains face each other while the polar head groups face the aqueous milieu outside. Proteins and lipids, like cholesterol, are embedded in this bilayer. This bilayer model for membranes was first proposed by Singer and Nicolson in 1972 and is known as the fluid mosaic model [3]. The embedded proteins are held in the membrane by hydrophobic interactions between the hydrocarbon chains of the lipids and the hydrophobic domains of the proteins. These membrane proteins, known as integral membrane proteins, are insoluble in water but are soluble in detergent solutions [4].
1.1.2 Solubilization of Hydrophobic (Membrane) Proteins and Hydrophobic Protein Complexes by Detergents
Detergents are amphipathic molecules that contain both polar and hydrophobic groups: a polar group (head) at the end of a long hydrophobic carbon chain (tail). In contrast to purely polar or non-polar molecules, amphipathic molecules exhibit unique properties in water. Their polar group forms hydrogen bonds with water molecules, while the hydrocarbon chains aggregate due to hydrophobic interactions. These properties make detergents soluble in water. In aqueous solution, above the critical micelle concentration (CMC), they form organized spherical structures called micelles, each of which contains several detergent molecules. Because of their amphipathic nature, detergents are able to solubilize hydrophobic compounds in water; indeed, one of the methods used to determine the CMC relies on the ability of detergents to solubilize a hydrophobic dye. Detergents are also known as surfactants because they decrease the surface tension of water. Detergents solubilize membrane proteins by mimicking the lipid-bilayer environment. Micelles formed by detergents are analogous to the bilayers of biological membranes, and proteins incorporate into these micelles via hydrophobic interactions. Hydrophobic regions of membrane proteins, normally embedded in the membrane lipid bilayer, become surrounded by a layer of detergent molecules while the hydrophilic portions are exposed to the aqueous medium. This keeps the membrane proteins in solution [4]. Complete removal of detergent can result in aggregation due to the clustering of hydrophobic regions and, hence, may cause precipitation of membrane proteins [5]. Although phospholipids can be used as detergents to simulate the bilayer environment, they form large structures, called vesicles, which are not easily amenable to the isolation and characterization of membrane proteins. Hence, the use of synthetic detergents is highly preferred for the isolation of membrane proteins.
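To make the phrase "enriched hydrophobic amino acids" concrete, hydropathy can be quantified per sequence. The following minimal Python sketch (an illustration under the published Kyte-Doolittle scale, not part of the original study) computes the grand average of hydropathy (GRAVY), on which membrane-like sequences score strongly positive:

# Kyte-Doolittle hydropathy scale (J Mol Biol 157:105-132, 1982).
KYTE_DOOLITTLE = {
    'A': 1.8, 'R': -4.5, 'N': -3.5, 'D': -3.5, 'C': 2.5,
    'Q': -3.5, 'E': -3.5, 'G': -0.4, 'H': -3.2, 'I': 4.5,
    'L': 3.8, 'K': -3.9, 'M': 1.9, 'F': 2.8, 'P': -1.6,
    'S': -0.8, 'T': -0.7, 'W': -0.9, 'Y': -1.3, 'V': 4.2,
}

def gravy(sequence: str) -> float:
    """Grand average of hydropathy: mean Kyte-Doolittle value per residue.
    Positive values indicate an overall hydrophobic, membrane-like protein."""
    values = [KYTE_DOOLITTLE[aa] for aa in sequence.upper()]
    return sum(values) / len(values)

# A stretch rich in A/V/I/L/M/F, as in a transmembrane helix, scores high:
print(round(gravy("AVILMFAVILMF"), 2))   # strongly positive (about 3.2)
print(round(gravy("DEKRNQDEKRNQ"), 2))   # strongly negative (soluble-like)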
Dissolution of membranes by detergents can be divided into different stages. At low concentrations, detergents bind to the membrane by partitioning into the lipid bilayer. At higher concentrations, when the bilayers are saturated with detergents, the
membranes disintegrate to form mixed micelles with the detergent molecules [4]. In the detergent-protein mixed micelles, hydrophobic regions of the membrane proteins are surrounded by the hydrophobic chains of the micelles. In the final stage, solubilization of the membranes leads to the formation of mixed micelles consisting of lipids and detergents, and of detergent micelles containing proteins. Other combinations, such as micelles containing lipids and detergents or lipid-protein-detergent micelles, are possible at intermediate detergent concentrations. Micelles containing protein and detergent can be separated from the other micelles on the basis of their charge, size, or density. A large number of detergents with various combinations of hydrophobic and hydrophilic groups are now commercially available. Based on the nature of the hydrophilic head group, they can be broadly classified as ionic, non-ionic, and zwitterionic detergents.
1.1.3 Ion-Channel Proteins (Membrane Transporters), Neurotransmitter Receptors, and Their Complexes
In the last few years a new word has entered the medical and scientific vocabulary. This word, channelopathy, describes those human and animal diseases that result from defects in ion channel function [6]. Ion channels are membrane proteins that act as gated pathways for the movement of ions across cell membranes. They play essential roles in the physiology and pathophysiology of all cells, and it is therefore not surprising that an ever increasing number of human and animal diseases have been found to be caused by defective ion channel function. Ion channels regulate the flow of ions across the membrane in all cells. In nerve and muscle cells they are important for controlling the rapid changes in membrane potential associated with the action potential and with the postsynaptic potentials of target cells. For instance, the influx of Ca2+ ions controlled by these channels can alter many metabolic processes within cells, leading to the activation of various enzymes and other proteins, as well as the release of neurotransmitter. Channels differ from one another in their ion selectivity and in the factors that control their opening and closing, or gating. Ion selectivity is achieved through physico-chemical interaction between the ion and the various amino acid residues that line the walls of the channel pore. Gating involves a change of the channel's conformation in response to an external stimulus, such as voltage, a ligand, stretch or pressure. The molecular make-up of a membrane-bound receptor consists of three distinct structural and functional regions: the extracellular domain, the transmembrane-spanning domain and the intracellular domain. Receptors are characterized by their affinity for the ligand, their selectivity, their number, their saturability and their binding reversibility. So-called isoreceptors form families of structurally and functionally related receptors which interact with the same neuroactive substance; they can be distinguished by their response to pharmacological agonists or antagonists. Receptor isoforms can occur in a tissue-restricted manner, but expression of different isoreceptors in the same tissue is also found. Binding of a ligand to a receptor induces a modification in the topology of the receptor by changing its conformation: this either allows an ion current to flow (so-called ionotropic receptors) or elicits a cascade of intracellular events (metabotropic receptors) [7]. The design of intramembranous receptors is quite variable.
Some receptors consist of single polypeptides exhibiting three domains: an intracellular and extracellular domain linked by a transmembrane segment. Other receptors are also monomeric, but
folded in the cell membrane and thus form variable intra- and extracellular as well as transmembrane segments. A large group of receptors consists of polymeric structures with complex tertiary topology.
1.2 Experimental Study of Biological Membranes; Study of Membrane Transporters (Complexes) in Mammalian Brain
1.2.1 Analysis of Hydrophobic (Membrane) Proteins and Hydrophobic Protein Complexes by Modified Gel Electrophoresis
Recently, most protocols have focused on direct-injection LC-MS, and/or LC-MS coupled with multidimensional protein identification technology (MudPIT), for whole membrane protein screening in murine brain [8], and on protein-tagging protocols for targeted membrane protein studies [9]. The application of polyacrylamide gel electrophoresis (PAGE) to membrane proteins, although it has several advantages compared with LC-MS, has been approached with hesitation because of their high insolubility and hydrophobicity, which arise from a large proportion of hydrophobic amino acids. In this study, the modified gel electrophoresis methods BN/SDS-PAGE and 16-BAC/SDS-PAGE are applied, because new applications in the analysis of membrane-transport proteins should help answer neurobiological questions at the protein level, independent of antibody availability and specificity.
1.2.1.1 Blue Native Poly-Acrylamide Gel Electrophoresis. BN-PAGE is a technique developed by Schaegger and von Jagow (1991) for the separation and analysis of membrane protein complexes, often in their enzymatically active form [10]. Its main advantage over conventional PAGE systems is the use of Coomassie Brilliant Blue G in both the sample-loading buffer and the cathode buffer. Binding of the dye to native proteins performs two important functions: (1) it imparts a slight negative charge on the proteins, thereby enabling them to enter the native gel at neutral pH, where protein complexes are most stable; (2) by binding to hydrophobic regions of proteins, the dye prevents protein aggregation during electrophoresis. Electrophoresis separates detergent from protein complexes, which often results in the aggregation of hydrophobic membrane proteins; however, the presence of Coomassie dye in BN-PAGE maintains protein solubility, thereby enabling multiprotein complexes to separate from one another largely according to their apparent molecular mass (Mr). A further advantage of this technique is that protein bands are visible during electrophoresis, so subsequent staining of the gel is not always necessary. Although the charge shift exerted on membrane proteins and their complexes by the Coomassie dye can lead to aberrant molecular masses, the Mr of proteins with a pI below 8.6 does not deviate significantly from that of most soluble protein markers [11]. BN-PAGE has been instrumental in the analysis of protein complexes of the mitochondrial membranes, in particular respiratory complexes [10]. It has also been an important tool in the analysis and assembly of mitochondrial protein translocation complexes. Additionally, the method has been used to study individual or multiple protein complexes from membranes including chloroplasts, the endoplasmic reticulum, and the plasma membrane. BN-PAGE can also be used for the analysis of soluble protein complexes, as has been shown for the heptameric mitochondrial matrix form of Hsp60. Not all membrane proteins and their complexes resolve on BN-PAGE. For
example, many mitochondrial proteins streak from the high to the low Mr range, which may be due to the proteins dissociating from their complexes during electrophoresis or to a change in their solubility. Other complexes resolve extremely well. This variability may depend on a number of factors, including the detergent employed and the stability of the protein complexes, as well as whether the Coomassie dye in fact binds to the proteins being analyzed. It is important to use a detergent that is efficient at solubilizing membranes but does not disrupt the integrity of the membrane protein complex. Initial studies should determine which detergents are most suitable for keeping the particular protein complex intact prior to application and for the duration of the run. Most studies employ Triton X-100, n-dodecyl maltoside, or digitonin as detergents. In the case of mitochondria, digitonin gives more discernible protein complexes of higher Mr than dodecyl maltoside or Triton X-100; indeed, the stable complexes observed on BN-PAGE following Triton X-100 solubilization can instead be seen as supercomplexes when digitonin is used. BN-PAGE has also been used to analyze the subunit composition of membrane protein complexes. Total extracts or purified membrane protein complexes can be subjected to BN-PAGE, and individual subunits can then be separated using SDS-PAGE in the second dimension. Well-resolved protein spots originating from complexes can be observed and subjected to further downstream processing such as Coomassie staining, immunoblot analysis, or amino acid sequencing [12]. This is a particularly useful technique because two-dimensional gel electrophoresis using IEF in the first dimension may be problematic for resolving membrane proteins. Purification of membrane protein complexes for crystallization trials in structural analysis is another application of BN-PAGE.
1.2.1.2 16-BAC Poly-Acrylamide Gel Electrophoresis. The introduction of pH-gradient/SDS-PAGE two-dimensional electrophoresis by O'Farrell revolutionized the analysis of the complex protein mixtures commonly found in biological samples. Numerous modifications of the original procedure have been described that attempt to overcome its limitations, such as the introduction of immobilized pH gradients. These systems have been used to resolve and isolate picomolar quantities of soluble proteins. Due to their high reproducibility, these procedures also form the basis of two-dimensional gel protein databases. However, the separation of integral membrane proteins, particularly those with large hydrophobic domains (multiple transmembrane domains), has remained less than satisfactory, since many of them resolve only poorly in the pH-gradient dimension. This is partially due to the inherent problem that membrane proteins do not solubilize well in nonionic detergents, particularly at low ionic strength (despite the presence of high amounts of urea). Even if solubilization can be achieved, the proteins often precipitate at pH values close to their isoelectric point. Furthermore, the charge heterogeneity commonly found in glycosylated membrane proteins contributes to additional streaking in the first dimension. Numerous attempts have been made to overcome these inherent problems, most of them aimed at improving the initial solubilization of the proteins by using detergents stronger than Nonidet P-40, including SDS, zwitterionic detergents such as CHAPS, and Zwittergent.
While many of these protocols improve the protein patterns derived from membrane fractions, it is often not clear which of the spots represent truly integral membrane proteins. Other modifications have been successful for separating individual membrane proteins, but the techniques must be optimized for
each protein individually and usually do not tolerate the loading of large amounts of protein. Thus, to our knowledge there appears to be no satisfactory protocol involving pH-gradient electrophoresis which is universally suitable for the separation of biological membranes with a complex protein composition. Due to the problems encountered with pH-gradient electrophoresis, various alternative approaches have been used to increase the resolution of membrane proteins beyond that afforded by one-dimensional discontinuous SDS-PAGE. In search of a method that yields optimal resolution with minimal loss of material, we have adapted the procedure developed by Macfarlane [13]. In this procedure, separation in the first dimension is achieved by an "inverse" discontinuous electrophoresis system, using the cationic detergent benzyldimethyl-n-hexadecylammonium chloride (16-BAC), a stacking gel at pH 4.1, and a separation gel at pH 2.1. Similar to SDS-PAGE, proteins are separated according to their molecular mass. However, the properties of the acidic 16-BAC system are sufficiently different from those of the basic SDS system to allow substantial resolution in the second dimension [14].
1.2.2 Identification of Hydrophobic (Membrane) Proteins and Hydrophobic Protein Complexes by Mass Spectrometry and Immunological Approaches
In mass spectrometry, the sample (e.g. a tryptic digest of a protein spot) is ionized and its mass-to-charge ratio is analyzed. The most commonly used ionization techniques for polypeptides and peptides are electrospray ionization (ESI) and matrix-assisted laser desorption/ionization (MALDI). In ESI, polypeptides are ionized and can be sequenced amino acid by amino acid according to mass per charge (m/z); in MALDI, whole peptides are charged and their m/z is measured. To decrease the level of complexity of the analysis, a pre-separation technique such as HPLC is used when peptide mixtures are studied. The presence of peptide fragments matching known sequences in a database identifies the protein. Two problems in database matching are that some different amino acids have the same mass-to-charge ratio, making them indistinguishable from one another, and that reported sequences in the database may contain a substantial number of erroneously annotated entries, so that proteins with incorrect reference data evade identification. Sequencing errors may also make database references incorrect. Mass spectrometry is gaining further in popularity due to the lowered cost of analysis. It is not limited to protein identification, but it has had one of its greatest impacts in proteomics, where it offers new, accurate possibilities for identification [15]. The immunological approach to identification is based on the production of specific antibodies and detection by a secondary antibody linked to an enzyme. The approach relies on the specificity of the antibody used: if the antigen against which the antibody was raised was a long polypeptide sequence, there may be a population of antibodies such that many other proteins with sufficiently similar epitopes cause unspecific binding. If a sample includes several isoforms from a family of proteins with a high degree of similarity, the different isoforms may not be distinguished immunologically.
1.3 Computational Study of Biological Membranes; Bioinformatics for Protein Structure, Function, and Interactions
A typical proteomics experiment might begin with cells grown under some specified set of conditions.
A subset of the cellular proteins is purified through subcellular fractionation or the use of affinity tags or protein interactions. These proteins are then identified or quantified by some combination of one- or two-dimensional gel electrophoresis, high-performance liquid chromatography, amino acid sequencing, proteolysis, and/or mass spectrometry. A diverse set of bioinformatics analyses is required both to perform these experimental steps and to interpret the resulting data (Fig. 1). Beyond the initial step of identifying peptides from their sequences or mass spectra, bioinformatics analysis of these results serves (1) to identify intact proteins from their constituent peptides, (2) to find related proteins in multiple species, (3) to find genes corresponding to the proteins, (4) to find coexpressed genes or proteins, (5) to find or test candidate protein interaction partners, (6) to validate or compare proteomics measurements of posttranslational modifications or protein subcellular localization with computational predictions of the same properties, (7) to predict structures or domains of proteins, and (8) to find the functions of proteins discovered in proteomics experiments.
[Fig. 1 leads from the experiment and protein sequence through sequence and domain database searches; identification of molecular and cellular function (curated domains, Medline links, gene annotation, cellular localization databases, signal prediction, transmembrane segments and topology, inherent sequence signals such as glycosylation, phosphorylation and cleavage sites); structure prediction (secondary structure, homology modeling, fold recognition, ab initio modeling); and finally prediction of functionally linked and interacting proteins from experimental databases (gene fusions, operons, gene neighbors, phylogenetic profiles, protein-protein interactions, metabolic activity, signaling/modification, RNA expression, mutations/phenotypes).]
Fig. 1. Flowchart illustrating the steps for bioinformatics to predict a protein’s structure, function, and interaction partners
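As a toy illustration of the database-matching step outlined above (not the pipeline used in the present study, which relies on dedicated search engines; the sequences and mini "database" below are invented), the following Python sketch performs an in-silico tryptic digest and scores candidate proteins by how many observed peptide masses they explain within a tolerance:

# Monoisotopic residue masses (Da); peptide mass = sum of residues + H2O.
RESIDUE_MASS = {
    'G': 57.02146, 'A': 71.03711, 'S': 87.03203, 'P': 97.05276,
    'V': 99.06841, 'T': 101.04768, 'L': 113.08406, 'N': 114.04293,
    'D': 115.02694, 'K': 128.09496, 'E': 129.04259, 'R': 156.10111,
    'F': 147.06841,
}
WATER = 18.01056

def tryptic_peptides(protein: str) -> list[str]:
    """In-silico trypsin digest: cleave C-terminal to K or R
    (ignoring the proline rule for simplicity)."""
    peptides, start = [], 0
    for i, aa in enumerate(protein):
        if aa in 'KR':
            peptides.append(protein[start:i + 1])
            start = i + 1
    if start < len(protein):
        peptides.append(protein[start:])
    return peptides

def peptide_mass(peptide: str) -> float:
    return sum(RESIDUE_MASS[aa] for aa in peptide) + WATER

def count_matches(observed: list[float], protein: str, tol: float = 0.5) -> int:
    """Number of observed masses explained by the protein's tryptic peptides."""
    theoretical = [peptide_mass(p) for p in tryptic_peptides(protein)]
    return sum(any(abs(m - t) <= tol for t in theoretical) for m in observed)

# Hypothetical database and spectrum: the protein explaining most masses wins.
database = {'protA': 'GASPKVTLNR', 'protB': 'DKEFRGAVK'}
observed = [peptide_mass('GASPK'), peptide_mass('VTLNR')]
best = max(database, key=lambda name: count_matches(observed, database[name]))
print(best)  # -> protA

Real peptide-mass-fingerprinting engines work on the same principle, but with far larger databases, statistical scoring, and allowance for missed cleavages and modifications.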
2 Conclusion
In this review, we introduced an analytical tool for the analysis of membrane proteins using a gel-based proteomics approach and mass spectrometric analysis. In addition, we reported for the first time the unambiguous sequence analysis and identification of two recombinant GABAA receptor subunits [12]. Further work is ongoing in our
laboratories, aiming to analyse receptor proteins from the brain, to identify receptor subtypes and isoforms, and to investigate posttranslational modifications.
References
[1] Ptacek LJ, George AL Jr., Griggs RC, Tawil R, Kallen RG, Barchi RL, Robertson M, Leppert MF (1991) Identification of a mutation in the gene causing hyperkalemic periodic paralysis. Cell 67(5):1021-7
[2] Donnelly D, Mihovilovic M, Gonzalez-Ros JM, Ferragut JA, Richman D, Martinez-Carrion M (1984) A noncholinergic site-directed monoclonal antibody can impair agonist-induced ion flux in Torpedo californica acetylcholine receptor. Proc Natl Acad Sci U S A 81(24):7999-8003
[3] Singer SJ and Nicolson GL (1972) The fluid mosaic model of the structure of cell membranes. Science 175:720
[4] Helenius A and Simons K (1975) Solubilization of membranes by detergents. Biochim Biophys Acta 415:29
[5] Horigome T and Sugano H (1983) A rapid method for removal of detergents from protein solutions. Anal Biochem 130:393
[6] Hoffman EP (1995) Voltage-gated ion channelopathies: inherited disorders caused by abnormal sodium, chloride, and calcium regulation in skeletal muscle. Annu Rev Med 46:431-41, Review
[7] DeLorey TM and Olsen RW (1992) Gamma-aminobutyric acidA receptor structure and function. J Biol Chem 267(24):16747-50, Review
[8] Wu CC and Yates JR 3rd (2003) The application of mass spectrometry to membrane proteomics. Nat Biotechnol 21(3):262-7
[9] Olsen JV, Andersen JR, Nielsen PA, Nielsen ML, Figeys D, Mann M, Wisniewski JR (2004) HysTag - a novel proteomic quantification tool applied to differential display analysis of membrane proteins from distinct areas of mouse brain. Mol Cell Proteomics 3(1):82-92
[10] Schaegger H and von Jagow G (1991) Blue native electrophoresis for isolation of membrane protein complexes in enzymatically active form. Anal Biochem 199(2):223-31
[11] Schaegger H, Cramer WA, von Jagow G (1994) Analysis of molecular masses and oligomeric states of protein complexes by blue native electrophoresis and isolation of membrane protein complexes by two-dimensional native electrophoresis. Anal Biochem 217(2):220-30
[12] Kang SU, Fuchs K, Sieghart W, Lubec G (2008) Gel-based mass spectrometric analysis of recombinant GABAA receptor subunits representing strongly hydrophobic transmembrane proteins. J Proteome Res (in press)
[13] Macfarlane DE (1983) Use of benzyldimethyl-n-hexadecylammonium chloride ("16-BAC"), a cationic detergent, in an acidic polyacrylamide gel electrophoresis system to detect base labile protein methylation in intact cells. Anal Biochem 132:231-5
[14] Bierczynska-Krzysik A, Kang SU, Silberring J, Lubec G (2006) Mass spectrometrical identification of brain proteins including highly insoluble and transmembrane proteins. Neurochem Int 49(3):245-55
[15] Chen WQ, Kang SU, Lubec G (2006) Protein profiling by the combination of two independent mass spectrometry techniques. Nat Protoc 1(3):1446-52
Climate Change: Business Challenge or Opportunity?
Chung-Hee Kim
Department of Management, Strathclyde Business School, Glasgow, UK
[email protected]
Abstract. This paper aims to critically explore and examine why, how and to what extent climate change and energy management is used as a business opportunity in the global market. With the cases of the UK and Germany, the author argues that integrating climate change and energy issues with business management may prove sustainable and environmentally sound for the firm. The implication is that climate change management can be a powerful instrument for integrating current global issues into the economic success of business, when business views these issues through a corporate social responsibility (CSR) lens as an opportunity and a source of legitimacy in the international market. To encourage business involvement and make this a real market, it is argued that international-level cooperation and incentive policy can be the major driving force and contribution.
1 Introduction: Why "Climate Change" Now?
Why is climate change emerging as an important issue, and to what extent is it used as a means of doing business? Climate change and energy management is regarded as an economic challenge as well as an opportunity. In earlier times, business could measure its success mainly by profit maximisation, which was even regarded as its only corporate social responsibility (CSR) [7]. However, this is no longer true in the age of globalisation. Business has to obey the law, be ethical and be a good corporate citizen beyond making a profit [1]. To elaborate, businesses, especially multi-national corporations (MNCs), have to understand recent challenges to traditional market economics [2] and seek legitimacy in the global market in which they operate [3, 9]. They have to build the competency to cope with the transformation of the market from a profit-driven to a value-driven one. Their ultimate goal now is not to gain immediate market share but to win the competition in a race to build competencies [8, 14]. To this point, climate change issues are elaborated through the lens of CSR as a method of value creation. In this regard, this paper elaborates the issues through three questions and seeks the implications for business and other stakeholder groups on climate change issues.
2 Research Questions and Discussions
2.1 What Kinds of Climate Change and Energy Management Will Prove Sustainable and Environmentally Sound?
Business is the principal offender contributing to climate change as well as a critical player in resolving the problem. Therefore, without corporations' active involvement,
the issues will not be settled. If this is so, how can business be engaged on this issue? The answer is to integrate climate change with business management. In this era of globalisation, how should business fit climate change management to its market needs? The author elaborates an answer with the case of Sir Richard Branson's investment announcement on climate change (Clinton Global Initiative, September 2006). Why is Sir Richard Branson contributing £1.6bn to fight global warming and to invest in renewable energy? Can we identify from this case how, and to what extent, energy management can be integrated with business management? It is supposed that this decision came from the entrepreneur's calculation that environmental issues will soon materialise as a source of competitive market advantage, and that is why he would like to be a first-mover. The first and foremost reason is to pursue new corporate value through environmental investment as corporate responsibility; this is also related to risk management and sustainable development management. Through this action, Branson's Virgin can establish legitimacy in the global market through corporate responsibility, and transform the energy challenge into a global business opportunity. CSR is no longer an option for MNCs: it is emerging as 'a must' and 'an operating license' in the global market for winning the market game. It is also assumed that integrating climate change into the business is closely related to stakeholder management: the term stakeholder has become an 'idea of currency' [6] and is now used almost as everyday terminology in business [13]. To elaborate, business can use the CSR issues of energy and environment as an efficient way to communicate with its various stakeholders, especially in the relationship with one of the most important stakeholders, government. Business can also pursue its business purpose through transparent lobbying of relevant governments (including international and host governments). As we are living in a time of decentralisation and diversity, the business's relationship with other stakeholders is emerging as a critical issue at the heart of business management. Power on the global stage is no longer concentrated in one segment (such as government, business, the military or NGOs); there must be appropriate power sharing, dialogue and networking with other members of the community. Such relationships develop the foundations of trust, confidence and loyalty which must be built to foster a corporate reputation [4]. Therefore, according to Hillman and Keim [10], the firm must address "true" stakeholder issues, such as environmental protection, not just "social" and "philanthropic" issues.
2.2 Which Strategies, Instruments, and Programs are Driving Business Climate Change Management? Are There Any Incentives to Innovate?
2.2.1 Driving Force by Business
Many business actions are unsustainable; sustainable development (SD), by contrast, can be regarded as a capacity for continuance in the long term. The concept of SD has increasingly come to represent a new kind of world, where economic growth delivers a more just and inclusive society while preserving the natural environment and the world's non-renewable resources for future generations.
As sustainability concepts move into mainstream asset management and as demand for sustainability funds increases, investors are looking for SD indicators of a firm's value creation beyond economic parameters [11]. For example, the Dow Jones Sustainability Index (DJSI) and the Global Reporting Initiative (GRI) are extensively used by many investors to measure the corporate sustainability performance of a company. In this regard, the European Emission Trading Scheme is regarded as an efficient way of attracting business involvement in environmental activities while pursuing economic efficiency at the same time. If there is an opportunity to make money, business naturally gathers in the market, and this kind of business-driven market is encouraging for both environmental and economic aims. For instance, Morgan Stanley, with the largest financial commitment to the ETS to date, recently announced an investment plan of about $3bn (£1.6bn) in emission trading (26 October 2006, FT). From such cases we can glimpse a blueprint for a future global environmental approach.
2.2.2 Driving Force by Government
How business deals with energy and climate change issues clearly impacts on national competitiveness. Governments put the emphasis on these issues as they play a very important role in promoting energy and resource management efficiency in business, as well as investment in renewable energy. Turning to a comparative analysis of government policy between the UK and Germany, based on the 4th National Communication of each government's policies and measures from the data of the UN Framework Convention on Climate Change (see Table 1), this paper recognises that much of the driving force for the two governments comes similarly from EU-level directives and regulations, whereas the detailed approach to each sector differs somewhat according to national circumstances. Both governments emphasise the importance of energy efficiency and the development of renewable energy from both environmental and economic perspectives, whereas the approach to specific sources of energy differs. For example, the German government has clearly announced its policy concerning nuclear energy, whereas the UK has not clearly stated its policy, because nuclear energy has been hotly debated there both politically and economically. In terms of renewable energy, the UK puts the emphasis on wind, wave and tidal, and biomass energy, whereas Germany focuses on wind, hydro and biomass energy as future renewable energy sources. As can be seen from the comparative analysis of the two countries below, business must become cleverer in following up governments' policies, in measures as well as incentives, and discover an appropriate strategy for investment in this challenging market. Even though governments try to drive business to become actively involved, as indicated in the above analysis, they should recognise that many corporations still consider this kind of political approach and gesture insufficient and too vague. The business sector in the UK has criticised the lack of consistency in government policy, which has to be transformed from a review process to actual regulatory action, and from targets to real action (BWEA, 2006). Only this will make business join the environmental market more actively.
Table 1. Government policy and regulation for the business sector (Germany and UK)*

Framework
  Germany: Gives a long-term perspective to all actors and hence a dependable framework for investment decisions
  UK: Committed to a clear, flexible and stable policy framework for business

Regulation & Tax
  Germany: The National Climate Protection Programme of 13.7.2005; the Renewable Energies Act (EEG); market incentives programme (MAP) (promoting solar collectors and biomass plants)
  UK: Climate change levy (tax on the use of energy in industry/commerce and the public sector); climate change agreement (80% discount from the levy)

Organisation
  Germany: dena (German Energy Agency) (limited liability company, GmbH)
  UK: Carbon Trust (independent company funded by the Government)

Emission Trading Scheme
  Germany: Working Group on Emissions Trading to Combat the Greenhouse Effect (AGE); Project Mechanism Act (ProMechG)
  UK: Voluntary UK ETS (5-year pilot scheme), moving to the EU ETS as of 2007

Building
  Germany: Transposition of Directive 2002/91/EC into national law; Energy Saving Ordinance (EnEV), based on EU Building Directives; energy efficiency in the building sector (including energy certificates)
  UK: Building Regulation (energy standard of new and refurbished buildings); implementation of the EU Energy Performance of Buildings Directive (EPBD)

Nuclear Energy
  Germany: To be phased out gradually over the next 20 years
  UK: Not mentioned in NC4 (a politically and economically debated issue)

Major renewable energy & investment
  Germany: wind energy; hydro power; biomass utilization
  UK: wind energy; wave and tidal energy; biomass heat

* Analysed based on the UK and Germany National Communication 4, UN Framework Convention on Climate Change.
2.3 On What Level Do Incentives Have the Greatest Effect (International, EU, National, Regional, Etc.)?
As climate change issues are not the issues of one nation or one region alone, it is argued that international and EU-level cooperation, as well as incentives, are the most
important factors in making the energy market develop actively. As can be seen from the national policies and measures of Germany and the UK, many national policies have been driven at the EU and multinational level. For example, for the successful continuation of an emission trading scheme, there must be a solid operating system at the EU or global level, and it must be backed up by international policy and control mechanisms. Still, the most severe emitter of greenhouse gases, the United States, has not joined this market, which raises the question of whether it will develop into a real market or fade away. Emission trading must be regarded as the most efficient way to attract the US and other countries that have not yet joined this global endeavour, by introducing business investment [15]. Therefore, international cooperation on this issue is critical. Secondly, there must be an effort at the international level to turn climate change and energy management issues into a global index and standard. Even though each nation may approach this issue differently according to its environmental, institutional and economic situation, there must be a global approach for international trade, with environmental management as a global standard and rigorous policy; what is needed is not just a bureaucratic approach with regulations, but an encouraging policy with incentives for firms to compete in transparency. This also gives business the certainty to pursue concrete strategies. As well, the international community has to recognise that more and more businesses will seek to capture first-mover advantage in this new hybrid market.
3 Conclusion
To conclude, climate change and energy management can be a powerful instrument for integrating current global issues into the economic success of business, when business sees these issues through CSR as an opportunity in the global market. To encourage business involvement and make this a real market, international-level cooperation and incentive policy can be the major driving force and contribution.
Acknowledgement. This is a development of an essay which was introduced at the World Business Dialogue Essay Competition, Cologne University, Germany, in March 2007.
References
[1] Carroll AN and Bucholtz AK (2003) Business and Society: Ethics and Stakeholder Management. Mason, OH: Thomson Learning
[2] Clulow V (2005) Futures dilemmas for marketers: can stakeholder analysis add value? European Journal of Marketing 39(9/10):978-998
[3] Dunning J (2003) Making Globalization Good: The Moral Challenges of Global Capitalism. Oxford: Oxford University Press
[4] Edelman Survey (2006) Annual Edelman Trust Barometer 2006. New York
[5] European Union (2004) Directive 2004/101/EC, amending Directive 2003/87/EC establishing a scheme for greenhouse gas emission allowance trading within the Community, in respect of the Kyoto Protocol's project mechanisms. European Commission
[6] Freeman RE and Phillips R (2002) Stakeholder theory: A libertarian defense. Business Ethics Quarterly 12(3):331-350
[7] Friedman M (1970) The Social Responsibility of Business Is to Increase Its Profits. The New York Times Magazine (13 September 1970)
[8] Hamel G (1990) The Core Competence of the Corporation. Harvard Business Review 68(3):79-91
[9] Gooderham P and Nordhaug O (2003) International Management: Cross Boundary Challenges. Oxford: Blackwell Publishing Ltd.
[10] Hillman AJ and Keim GD (2001) Shareholder value, stakeholder management, and social issues: What's the bottom line? Strategic Management Journal 22:125-139
[11] Holliday CO, Schmidheiny S and Watt SPW (2002) Walking the Talk: The Business Case for Sustainable Development. Sheffield, UK: Greenleaf Publishing
[12] Morrison (2006) Morgan Stanley plans to invest $3bn in Kyoto scheme to reduce emissions. Financial Times, Oct 27, 2006
[13] Pinnington A, Macklin R and Campbell T (2007) Human Resource Management: Ethics and Employment. Oxford: Oxford University Press
[14] Porter M and Kramer MR (2006) Strategy & Society: The Link Between Competitive Advantage and Corporate Social Responsibility. Harvard Business Review 84(12):78-92
[15] Stern Review (2006) The Economics of Climate Change: The Stern Review. Cabinet Office, HM Treasury, London
[16] UK DTI (2006) The Energy Challenge: Energy Review Report 2006. Department for Trade and Industry, London
[17] UNFCCC (2006) Germany National Communication 4. United Nations Framework Convention on Climate Change. http://unfccc.int/resource/docs/natc/gernc4.pdf. Accessed 30 June 2008
[18] UNFCCC (2006) UK National Communication 4. United Nations Framework Convention on Climate Change. http://unfccc.int/resource/docs/natc/uknc4.pdf. Accessed 30 June 2008
Comparison of Eco-Industrial Development between the UK and Korea
Dowon Kim and Jane C. Powell
University of East Anglia, School of Environmental Sciences, Norwich, UK
[email protected]
Abstract. The application of eco-industrial development (EID) has attracted increasing attention worldwide, leading to diverse EID implementation strategies including eco-industrial parks and industrial symbiosis. As eco-industrial developments have achieved varying levels of success, it is necessary for new EID programmes to learn the lessons of both the successful and the less successful earlier cases. This study compares the implications of the National Industrial Symbiosis Programme (NISP) in the UK, regarded as an active EID case, with the Korean eco-industrial park scheme as a newly implemented one. The two EID models were analysed within a framework centred on three practical aspects: the underlying approach to EID, the organisational framework and the operational framework. Comparing the two models on these three aspects, this study suggests several implications for the implementation of EID in Korea, including a business-centred approach, expanding EIPs to regional EID, standardising the operational framework and developing innovative synergy generation tools.
1 Introduction
Eco-industrial development (EID) is an emerging industrial approach to sustainable development grounded in industrial ecology. It compares industry with natural ecology in order to transform conventional industrial systems into more sustainable ones. EID aims to optimise industrial resource use through collaboration among the firms in an industrial network, focusing on recycling industrial by-products and wastes within the network. As a result, not only can EID improve the economic growth and competitiveness of participating firms, but it can also minimise the environmental impacts of industrial resource use and consequently mitigate climate change. EID has developed along diverse strategies since the Kalundborg symbiosis in Denmark was recognised in 1989 [1]. As the application of EID has attracted increasing attention worldwide, diverse EID strategies have been developed and have evolved in many industrial regions of the world. The Kalundborg case and the National Industrial Symbiosis Programme (NISP) in the UK [2] are typical European examples of industrial symbiosis (IS), while many other industrial regions, including those in the USA, China and Korea, have also developed eco-industrial parks (EIPs) [3-8]. In addition, some European regions, such as Styria in Austria, and Finland have developed recycling networks [9, 10]. Although it is hard to evaluate existing EID cases exactly, as their features often vary, some EID programmes are regarded as successes while others are regarded as failures [4, 5, 11]. Therefore it is necessary for new EID
programmes such as the Korean EIPs to understand which strategies were effective in implementing EID at the initial stage, through the lessons from the existing cases. This study has been undertaken to explore the implications for implementing EID, based on the comparison of an active EID case with a newly implemented one. NISP, selected as the active EID case, has been compared with the recently introduced Korean EIP scheme. The two EID models are analysed within a framework centred on three practical aspects: the underlying approach to EID, the organisational framework and the operational framework. The data for this study were collected mainly from the literature, including presentation materials, reports and websites.
2 National Industrial Symbiosis Programme (NISP) in the UK
NISP is a national-scale industrial symbiosis network in the UK aimed at improving cross-industry resource efficiency through the commercial trading of materials, energy and water and the sharing of assets, logistics and expertise. Operating at the forefront of industrial symbiosis thinking and practice, the programme has helped industries and firms take a fresh look at their resources. Integrating the experience and knowledge from four precursor pilot projects, which were developed independently, NISP expanded its network to a national scale [2, 12-17]. The programme is managed by an independent non-profit organisation funded by public organisations [18]. NISP is considered to have created significant economic, environmental and social benefits by creating sustainable business opportunities and improving resource efficiency. Most industrial sectors across the country are involved in NISP, including the oil, chemicals, pharmaceutical, steel, paper, water, cement, construction, engineering, financial and consultancy industries [19], with the number of participants increasing sharply since the programme was launched. In 2007, over six thousand businesses across all industrial sectors were working with NISP [17]. Recently, NISP was selected as an exemplar programme for eco-innovation by the European Commission [20]. NISP reported the following performance of the programme over the 14 months from April 2005 to June 2006 [12]:
• 1,483,646 tonnes diverted from landfill (of which 29% was hazardous waste)
• 1,827,756 tonnes of virgin material saved
• 1,272,069 tonnes CO2 savings
• 386,775,000 litres potable water savings
• £36,080,200 additional sales for industry
• £46,542,129 of cost savings to industry
• 790 jobs created
• £32,128,889 private capital investment in reprocessing
1 Underlying approach to EID: Business-centred industrial symbiosis
One of the most prominent features of NISP is that the programme adopted the notion of industrial symbiosis rather than that of an eco-industrial park or eco-industrial network. NISP's definition of industrial symbiosis emphasises five points: expansion of the geographical boundary of collaboration to the national level; expansion of collaboration types to broader resource sharing; expansion of the participants of symbiosis to
non-industrial organisations; a demand-based approach; and the necessity of a management tool. In particular, the programme expands the concept of resources from by-products or waste to all tangible and intangible resources, including raw materials, expertise and services, which practically diversifies the opportunities for industrial symbiosis [12, 18]. NISP approaches inter-firm collaboration from the viewpoint of business, the main player in eco-industrial development. The programme practically seeks bottom-line benefits for member firms through efficient resource use rather than environmental management; its vision is to change the way business thinks. The programme prefers business terms such as 'new products', 'opportunity' and 'cost reduction' to environmental terms such as 'regulation' and 'waste minimisation', attracting the voluntary participation of businesses by highlighting business benefits rather than emphasising environmental concerns that may cause negative preconceptions among businesses [19]. All synergy projects are led by a project advisory group (PAG), composed of highly experienced and knowledgeable members from industry, business and related areas across the region. As the PAG gives practical advice to the project team from the viewpoint of business and plays an important role in linking industrial sectors using the same 'industrial language', the credibility of the programme as well as the success rate of each synergy project increases, and more businesses are attracted to NISP [21]. NISP keeps the viewpoint of business throughout the industrial symbiosis process. Involvement in NISP is voluntary, relying only on the participant's willingness to collaborate, and its process is not strictly formalised. The information and data of participating firms are strictly protected by a security system to address business concerns about confidentiality. The quality of the data is not mandated but depends wholly on the decision of the participating organisation. Synergy projects are developed by the businesses concerned, based on their own interests, while the NISP practitioners support them by monitoring the process, coordinating with other organisations or experts to solve regulatory or technology issues, and advertising the synergy performance to stimulate industrial symbiosis in similar fields [2, 21].
2 Organisational framework of NISP
A distinctive feature of the NISP model is that inter-firm synergy is sought within a geographically wide spatial boundary. NISP is the first industrial symbiosis initiative in the world to be launched on a national scale [18]. Although each regional programme was implemented one by one, NISP was planned on a national scale from the beginning. The national NISP network consists of twelve large interfacing regional NISPs, each with a critical mass comprising a wide variety of cross-sector industries so that inter-firm synergy can be created flexibly. The regional boundaries were designed to coincide with the administrative areas for regional economic development, so that each regional programme can be supported by regional governance. Although NISP operates on a regional basis, led by each regional PAG and coordinated by each regional team, inter-regional collaboration and communication also take place, depending on the characteristics of each inter-firm collaboration, to cover most national industries [2, 21]. NISP is coordinated and managed by an independent third party.
Although the programme is funded by government and local authorities, and it contributes to the
benefit of businesses, the programme is independent of both governmental organisations and businesses. At the national level, NISP is governed by an independent board consisting of leading academics and representatives from the regulators and government, while at the regional level it is governed by the business-led PAGs. The programme is currently managed by International Synergies Limited, a Birmingham-based independent non-profit organisation partially funded by public organisations [17]. Each regional programme is coordinated by regional NISP practitioners who have proven track records in environmental and resource management in diverse fields. Not only does their wealth of experience make synergy opportunities more likely to succeed, but it also contributes to building credibility between the participants and NISP [2, 12]. NISP has sought support and partnerships from diverse sectors in order to expedite the growth of the programme and to reduce the risks of public funding. NISP is currently supported by academia, non-governmental organisations, and government departments and agencies including the DTI, DEFRA, the Environment Agency, local authorities and regional development agencies (RDAs), as well as industrial organisations. NISP considers the partnership with the RDAs particularly important, as the programme supports regional development. NISP is funded by diverse public funding bodies including DEFRA (Business Resource Efficiency and Waste, BREW) and each RDA. Addressing the increasing need for new technology and expertise in synergy projects, NISP has built a partnership with the Resource Efficiency Knowledge Transfer Network (RE-KTN) to transfer the higher-level technology or knowledge necessary to create more sustainable outputs in synergy projects [18, 21].
3 Operational framework of NISP
While most industrial symbiosis activities are carried out independently on a regional basis, all regional NISPs share an overall operational framework, including operational tools and the IS database accumulated in each region. The national team develops the basic operational framework, such as the IS process and the internet-based database system, and provides it to each regional NISP. In addition, operational tools developed in the regional NISPs are disseminated to all regions by the national team. Each regional NISP identifies synergy partnerships and facilitates them independently in its regional context using the structured operational tools. The operational framework is therefore shared at the national scale, while practical IS tasks are carried out in the regional context, as a combination of top-down and bottom-up approaches. NISP has developed its own operational tools. The basic IS process of NISP consists of six steps, although this process is applied flexibly depending on each case. The first step is securing the commitment and ownership of the firms and organisations in the region. The second and third steps, which take place concurrently, are training participating organisations to develop industrial symbiosis thinking and collecting resource data from them. The fourth step, identifying synergy projects, is the core stage of the industrial symbiosis process, while the fifth step involves implementing commercially viable synergies and the final step is monitoring and maintaining the synergy projects [2]. In addition to the IS process, operational tools for the generation of synergy ideas have been developed by NISP for use during the IS process. The training programme for
member organisations and the web-based data collection system are the main tools in the initial stage of the IS process. As the database is embodied in a web-based framework, all data and information entered in one region can be shared nationally with flexible, remote access. The data held on the system are analysed by regional practitioners to identify synergistic links [21]. IS workshops are also main operational tools for the generation of synergy opportunities. 'Synergy Workshop' events are structured, facilitated and 'themed' forums designed by NISP to provide IS opportunities. By drawing diverse organisations (intra-sector industries, cross-sector industries or governmental bodies) together around specific perspectives or interests on a common theme, such as specific legislation or key resource issues, the event can identify collaboration ideas that add value to each participant [21, 22]. In addition, the 'Quick-Wins Workshop' developed by NISP Humber is widely used by most regional NISPs as one of the most popular tools for generating synergy ideas. The underlying notion of the workshop is that some synergy ideas can be created at low cost and in the short term through discussions between cross-sector parties. During the workshop, resource streams provided by participants are collected in a tabular matrix with two axes, 'Wants to source' and 'Wants to supply', to make the demand-supply matching process simple and quick. Participants from cross-sector organisations are encouraged to match and discuss their potential resource wants and needs with one another. After completing the matrix table, a resource link diagram is drawn by the practitioners to show the matched demand-supply streams more clearly [2].
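The matching step of the Quick-Wins format can be pictured with a small sketch. The following Python fragment is purely illustrative and is not NISP's software; the participant names, resource labels and matching rule are invented for demonstration.

```python
from collections import defaultdict

# Hypothetical participants and resource streams, invented for illustration.
participants = {
    "FoodCo":  {"supply": ["organic waste", "waste heat"], "source": ["packaging"]},
    "BrickCo": {"supply": ["packaging"],                   "source": ["waste heat"]},
    "FarmCo":  {"supply": [],                              "source": ["organic waste"]},
}

def quick_wins_matches(participants):
    """Return (supplier, consumer, resource) triples where a supply meets a want."""
    suppliers = defaultdict(list)                  # resource -> suppliers offering it
    for name, streams in participants.items():
        for resource in streams["supply"]:
            suppliers[resource].append(name)
    matches = []
    for name, streams in participants.items():
        for resource in streams["source"]:
            for supplier in suppliers.get(resource, []):
                if supplier != name:               # exclude self-matches
                    matches.append((supplier, name, resource))
    return matches

for supplier, consumer, resource in quick_wins_matches(participants):
    print(f"{supplier} -> {consumer}: {resource}")
```

In a real workshop the matrix is completed by the delegates themselves and the matched streams are then drawn up as a resource link diagram by the practitioners.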
3 Eco-Industrial Parks (EIPs) in Korea
Eco-industrial development in Korea is in its initial stages, although the necessity of EID has been raised since the 1990s [23]. The Korean government has been concerned about environmental issues in industrial estates, including increasing environmental and resource costs as well as industrial waste problems, in a country where about 500 conventional industrial estates exist with a high density of industry. In order to address these problems, the Ministry of Commerce, Industry and Energy (MOCIE), responsible for the development of industry in Korea, has adopted the concept of an EIP to transform conventional industrial estates into sustainable ones. After a few years of feasibility studies, the Korea National Cleaner Production Centre launched pilot EIP projects supported by MOCIE in 2005 [6]. Five pilot EIP projects have been selected for implementation within existing industrial estates: three were selected in 2005 (Ulsan EIP, Pohang EIP and Yeosu EIP) and a further two (Banwol-Sihwa EIP and Cheongju EIP) in 2006 [24]. In addition to these, a few EIP projects have been prepared voluntarily in several industrial regions [25, 26]. According to the governmental plan for implementing EIPs, the Korean EIP model will be developed in three stages, as illustrated in Table 1. The Korean EID model is intended to emerge after 2015 from the results of the pilot projects [7, 24]. Therefore, this study focuses on common tendencies of the pilot EIP projects in Korea rather than on a typical Korean EID model.
1 Underlying approach to EID: Government-driven eco-industrial parks
The Korean EID model is characterised by eco-industrial parks driven by the effective environmental management of industrial estates. The Korean government has initiated EIPs as a tool of cleaner production to address the environmental problems of industrial estates. The governmental policy on the environmental management of industry has three main dimensions: improving environmental and economic efficiency by accelerating cleaner production at the individual firm level, by implementing EIPs at the industrial estate level, and by disseminating EIPs at the national industry level [24].

Table 1. Long-term EIP plan in Korea

Stage                   Goal                                      Main contents
1st stage (2005~2009)   Pilot EIP projects                        Building an implementation framework for an EIP; networking material and energy flows within an industrial estate through the pilot EIP projects on the selected industrial estates
2nd stage (2010~2014)   Expanding circular resource networks      Expanding the circular resource networks to national scale by leveraging the results of the pilot projects; solidifying cooperation with regional communities
3rd stage (2014~2019)   Establishing Korea-specific EIP model     Designing resource-circular industrial estates based on the principles of industrial ecology; applying the Korea-specific EIP scheme to all new industrial estates from the planning stage
(source: modified from [24])

According to the EIP plan, three necessities of an EIP are stressed: the implementation of a sustainable industrial system, the reformation of conventional industrial estates into more environmentally sound ones, and the reinforcement of environmental policy on the location of industries [27]. Consequently, the Korean EIPs point strongly to more effective environmental management of industrial estates through a top-down managerial approach rather than to business opportunities pursued in a business-centric way.
2 Organisational framework of the Korean EIPs
The five pilot EIP projects are each located at different industrial estates across the country. Inter-firm collaboration tends to be limited to a specific industrial estate, with little cooperation between the EIPs. Each pilot EIP project is, however, directly connected with the government through governmental funding and the guidelines each pilot EIP project must meet. Therefore each project is strongly dependent on the government rather than on local authorities or the local community. Although local authorities also tend to partly fund the projects, the influence of the government is still greater than that of local authorities, as the local authorities' funding is given in expectation of governmental funding.
Each pilot EIP project is composed of several sub-projects or research projects based on a consortium with research institutes and local organisations. As the pilot EIP projects were asked to provide the industrial estate with most of the expertise, including
the technology necessary for an EIP, the pilot EIP projects tended to seek a few grand technology projects. Therefore, although the situation may vary, in general each pilot EIP project forms a consortium with several research institutes and local organisations to perform independent tasks. In addition, the government guidelines, which are grounded in the environmental management policy of accelerating cleaner production at the individual firm level first, require the pilot projects to undertake common compulsory tasks such as process diagnosis or ISO 14000 certification [24]. Therefore the organisational framework inclines towards the sum of specific top-down tasks rather than the collection of bottom-up needs.
3 Operational framework of the Korean EIPs
The independence of the pilot EIPs from one another has also led the operational framework of every pilot EIP project to be quite independent of the other pilot EIP projects. Even every sub-project or research project within a pilot EIP project has its own operational framework. The IT platform of the database and information network used to identify synergy opportunities in one pilot EIP project differs from those of the others. It is therefore difficult to generalise about the operational framework and tools, as every pilot EIP project has developed its own operational framework independently. In addition, the Korean pilot EIP projects seem to have concentrated on developing technological tools and breakthrough technologies. The government leads the pilot projects to develop resource management technologies that can be adopted by every industrial estate, such as water pinch analysis, integrated resource recovery systems and chemical life-cycle management. The government intends to disseminate these resource management technologies to all EIPs after studying their feasibility in the pilot projects until 2009 [24].
4 Reflections from the Comparison of the Two EID Models
As described above, there are several differences between the two EID models, NISP and the Korean EIPs. Table 2 summarises the main differences between the two EID models in terms of their fundamental approach to EID and their organisational and operational frameworks.
1 Reflections from the underlying approach to EID
NISP is business-centric, business-led industrial symbiosis, while the Korean EIPs are best regarded as an environmental management programme. The NISP approach can lead to the voluntary involvement of businesses based on economic benefits, while the Korean EIP programme may improve the environmental quality of industrial estates. It is difficult to determine which approach is more desirable in terms of sustainable development, which should attain both economic and environmental improvements. Since economic benefits may prevail over environmental ones in the NISP context when the two conflict, and since the EID programme is largely supported by public funding, it is debatable whether business benefits should be prioritised. On the other hand, the voluntary participation of businesses can hardly be expected when environmental management is emphasised too strongly, as in the Korean EIP context.
Table 2. Summary of the comparison of EID models between NISP and Korean EIPs

Approach to EID
  NISP in the UK: Business-centred and business-led industrial symbiosis
  EIPs in Korea: EIPs for the effective environmental management of industry

Organisational framework
  NISP in the UK: Large-region-based national network; managed by an independent third party combining top-down and bottom-up; practitioners network for facilitating synergies; funded by public organisations including regional agencies; expertise, including technology, supported from an external network
  EIPs in Korea: Within an industrial estate or closely located estates; government-driven top-down approach; technology-development-oriented organisation; mainly funded by the government; self-development of technologies for synergy

Operational framework
  NISP in the UK: Unified operational framework over the nation; developing diverse operational tools for creating synergy ideas
  EIPs in Korea: Independent operational frameworks between EIPs; developing technological solutions
It is considered that priority should be given to voluntary involvement in the initial stage of EID. If businesses do not participate in an EID programme voluntarily, synergy opportunities rarely occur and consequently industrial sustainability does not improve. NISP regards the voluntary involvement of businesses as a critical factor for the successful implementation of EID, because it considers that synergy ideas should be created by the organisations that can realise them. Businesses that are concerned about leaks of business information can hardly be expected to proactively create synergy ideas and make good progress when they are involved reluctantly. Therefore, the Korean EIPs should consider a business-centric approach to attract the voluntary involvement of businesses in the initial stage, reflecting the experience and approach of NISP.
2 Reflections from organisational framework
One of the most distinct differences between the two models in terms of organisational framework is the geographical boundary of the collaboration network. While NISP adopted a large-region-based national network, each Korean EIP focuses its collaboration boundary on an individual industrial estate. Which scale of geographical boundary is better for EID?
There seem to be a couple of reasons why an EIP model looks suitable for Korean industrial conditions. Firstly, when very large numbers of firms are located in one industrial estate, as occurs in Korea, inter-firm collaboration can be activated within the industrial estate. Secondly, as some large industrial estates in Korea are independently managed by central government, it may be more efficient to transform them directly into EIPs rather than to approach EID through regional development. However, as most large industrial estates in Korea are composed of firms belonging to similar
industrial sectors aimed at cluster effects, it is necessary to address this homogeneity in order to promote by-product exchange. In addition, an EIP centred on large-scale industrial estates may exclude small-scale industrial estates or scattered firms located outside industrial estates.
It has been argued, based on German and Austrian cases, that a large industrial region can generate more effective EID performance than single industrial estates [1, 9]. Single industrial estates that contain a limited number of firms and industrial sectors are not only vulnerable to the inflexibility of business links, which are difficult to replace within an industrial estate, but also restrict the opportunities for by-product exchange. A large industrial region, on the other hand, can increase the flexibility of business links based on industrial diversity and can facilitate by-product markets, as it can contain many potential collaboration candidates. NISP operates on a large regional scale to create flexible inter-firm synergies with a critical mass of cross-sector industries.
Therefore it is recommended that the geographical boundary of the Korean EIPs be expanded to a regional scale. In order to address the homogeneity that is one of the most critical problems for EID in Korea, it is necessary to expand the geographical boundary to a larger region that contains diverse industrial estates and provides greater opportunities for by-product exchange. Then, while the main EID activities at the industrial estate level are inter-firm sharing among firms of similar industrial sectors, by-product exchange can be activated between industrial estates at the regional level. As the distance between industrial estates in Korea is relatively short, the associated economic and environmental burdens may be minor. However, how large is the optimum regional boundary in Korea? The regional scale of NISP cannot be replicated in Korean EID, as the geographical, economic, industrial and administrative conditions in the UK differ from those in Korea. Therefore the optimum scale of the geographical boundary in Korea needs to be studied further. The optimum geographical boundary to promote by-product exchange will be a function of the scale of firms, the density of firms located in an industrial estate, the diversity of industrial sectors, logistics in the region and the governance structure of regional administration.
3 Reflections from operational framework
Every regional NISP shares an operational framework and tools, while the Korean pilot EIP projects do not. Each Korean pilot EIP project develops its own tools independently and flexibly, concentrating on the characteristics of its industrial estate and the participating firms. However, this independence also causes redundancy in resource use and difficulties in disseminating operational tools or good practices between EIPs. As every pilot EIP has spent its resources developing its own IT platform, not only does this cause redundancy in resource use, but it is also hard to share the systems between EIPs. In addition, incompatibility of operational frameworks may impede collaborative synergy projects between EIPs. To overcome these problems and to facilitate inter-regional cooperation, it is necessary to standardise the operational framework.
NISP has developed a diverse range of operational tools, while the Korean EIPs have concentrated on technology development. In practice, priority in an EID programme should be given to practical tools for synergy idea creation and development rather than to technology, as synergy ideas can be generated without high-level technology. Although technological capability is also critical for solving the problems of inter-firm collaboration, it can be developed in partnership with external experts. In addition, as industrial circumstances differ between regions and synergies can be created in diverse directions under rapidly varying industrial conditions, the diverse viewpoints of industrial people can identify more productive synergy ideas than a few experts. Therefore, it is necessary for the Korean EIPs to develop operational tools such as the training programme, the diverse workshops and the RAG programme of NISP. In particular, the Korean EIPs should consider introducing the NISP workshop programmes, as their format appears to be highly interactive as well as very productive: by engaging forty to fifty delegates from diverse cross-sector organisations in one morning workshop, over 100 potential synergy ideas can be identified [21].
In addition, sustainability criteria and economic instruments need to be developed in the EID programme. As industrial symbiosis is realised on the basis of business benefits, the environmental or social benefits may be sacrificed in some instances. As many EID programmes are funded by public organisations, including government and local authorities, the EID programmes should contribute to improvements in public benefits, and thus any decision should be made on the basis of balanced criteria. In order to address this issue it may be necessary to develop drivers that 'push' the environmental and social benefits, including economic instruments or a sustainability index. Although NISP already demonstrates the environmental benefits of its performance, such as the reduction of carbon dioxide and waste and the creation of jobs, it may be necessary to expand and standardise the criteria so as to quantify improvements in sustainability, and to decide which is the better option when several options are being considered.
4 Limitation of the study
This study excludes the policy framework, as the relevant documents on NISP are insufficient for analysis. The policies include domestic and EU regulations, economic instruments and strategies impacting on NISP. As the policy framework can be one of the critical background conditions of the programme, the evaluation of the EID model will bear more reliable results when it is studied further. Peter Laybourn, the Programme Director of NISP, also emphasised the necessity of further study on the policy framework by raising the question, "What within that policy framework could be changed to make the conditions for IS more favourable?" [12, p. 17].
5 Conclusions
It may still be too early to decide whether NISP is a successful case, as the programme has less than ten years' experience, including the pilot projects. Although it is hard to say that the programme is successful in all respects, it can be regarded as one of the most active cases in the world at present, as it has made distinct achievements and the number of participating firms has increased rapidly. The Korean pilot EIP projects started just a few years ago and seem to have experienced much trial and error in their implementation. Therefore, the reflections from the experience of the NISP model are discussed in this paper to provide useful implications for the Korean EID programme.
Comparing the two models on the three aspects, this study has suggested several implications, including a business-centred approach, expanding EIPs to regional EID, standardising the operational framework and developing innovative synergy generation tools.
References
[1] Sterr T, Ott T (2004) The industrial region as a promising unit for eco-industrial development: reflections, practical experience and establishment of innovative instruments to support industrial ecology. Journal of Cleaner Production 12:947-965
[2] Laybourn P, Clark W (2004) National Industrial Symbiosis Programme: A year of achievement. NISP (National Industrial Symbiosis Programme)
[3] Potts Carr AJ (1998) Choctaw Eco-Industrial Park: An ecological approach to industrial land-use planning and design. Landscape and Urban Planning 42:239
[4] Chertow MR (2007) "Uncovering" industrial symbiosis. Journal of Industrial Ecology 11:11-30
[5] Gibbs D, Deutz P (2005) Implementing industrial ecology? Planning for eco-industrial parks in the USA. Geoforum 36:452
[6] Park H-S, Won J-Y (2007) Ulsan Eco-industrial Park: Challenges and opportunities. Journal of Industrial Ecology 11:11-13
[7] Lee K, Yoon C, Lee MY, Kim JY, Kim SD, Byun KJ, Kwon MJ, Lee SI, Ma HR, Jin HJ, An DK, Kim JW, Kim HS, Moon SW, Lee T, Choi J (2003) Master plan of implementing an eco-industrial park for developing the foundation of cleaner production. Korea Ministry of Commerce, Industry and Energy (MOCIE)
[8] Chiu ASF, Yong G (2004) On the industrial ecology potential in Asian developing countries. Journal of Cleaner Production 12:1037
[9] Schwarz EJ, Steininger KW (1997) Implementing nature's lesson: The industrial recycling network enhancing regional development. Journal of Cleaner Production 5:47-56
[10] Korhonen J, Niemeläinen H, Pulliainen K (2002) Regional industrial recycling network in energy supply - The case of Joensuu city, Finland. Corporate Social Responsibility and Environmental Management 9:170-185
[11] Gibbs D (2003) Trust and networking in inter-firm relations: The case of eco-industrial development. Local Economy 18:222
[12] Laybourn P (2007) NISP: Origins and overview. In: Industrial Symbiosis in Action: Report on the 3rd International Industrial Symbiosis Research Symposium, Birmingham, England, August 5-6, 2006 (Lombardi R, Laybourn P, eds). Yale School of Forestry & Environmental Studies
[13] BCSD-NSR (2002) National Industrial Symbiosis Programme (NISP): Business delivery of political strategy on resource productivity - "Making it happen". Business Council for Sustainable Development North Sea Region
[14] Curry RW (2003) Mersey Banks Industrial Symbiosis Project: A study into opportunities for sustainable development through inter-company collaboration. North West Chemical Initiative
[15] Parry C (2005) TVISP Final Report: Jan 2003 - Dec 2004. Clean Environment Management Centre (CLEMANCE), University of Teesside
[16] Kane G, Parry C, Street G (2005) Tees Valley Industrial Symbiosis Project: A case study on the implementation of industrial symbiosis. CLEMANCE, University of Teesside
[17] NISP (2007) Synergie: Newsletter for the National Industrial Symbiosis Programme, Issue No 3. National Industrial Symbiosis Programme
[18] NISP. NISP homepage. Available from: http://www.nisp.org.uk/default.aspx
[19] Laybourn P (2003) National Industrial Symbiosis Programme. In: Business Strategy and the Environment Conference, Leicester, UK
[20] NISP. NISP News. Available from: http://www.nisp.org.uk/article_index.aspx
[21] Clark W, Laybourn PT (2005) A case for publicly-funded macro industrial symbiosis networks: a model from the UK's National Industrial Symbiosis Programme (NISP). In: 11th Annual International Sustainable Development Research Conference, Helsinki, Finland. ERP Environment
[22] NISP West Midlands (2005) NISP: Quarterly Newsletter No 5. NISP West Midlands, Birmingham
[23] Choi J (1995) Development of an ecology oriented industrial estate: Application of industrial ecology's viewpoint. :71-91
[24] Chung D (2006) Eco-industrial park in Korea: Current status and future plan. In: The 4th International Conference on Eco-Industrial Park, Seoul, Korea. Korea National Cleaner Production Center
[25] Oh DS, Kim KB, Jeong SY (2005) Eco-Industrial Park Design: A Daedeok Technovalley case study. Habitat International 29:269
[26] Kim H (2007) Building an eco-industrial park as a public project in South Korea: The stakeholders' understanding of and involvement in the project. Sustainable Development 15:357-369
[27] KNCPC (2004) Understanding of an eco-industrial park. Korea National Cleaner Production Centre
On Applications of Semiparametric Multiple Index Regression

Eun Jung Kim

Laboratoire de Statistique Théorique et Appliquée, Université Paris VI, Paris, France
Laboratoire de Statistique, CREST-INSEE, Malakoff, France
[email protected]
Abstract. Generalized linear models (GLM) have been used as a classical parametric method for estimating a conditional regression function. We intend to examine the practical performance of multiple-index modelling (MIM) as an alternative semiparametric approach. We focus especially on models with a binary response Y and multivariate covariates X. We use two methods to estimate the regression function, namely the refined Outer Product Gradient (rOPG) and the refined Minimum Average Variance Estimation (rMAVE) defined in Xia et al. (2002). We show here by simulation that multiple-index modelling appears to be much more efficient than the GLM methodology and its usual single-index modelling (SIM) generalization for modelling the regression function.
1 Introduction

Let us consider the classical problem of estimating a regression function $m(x) = E(Y \mid X = x)$ from independent observations $(X_i, Y_i) \in \mathbb{R}^{p+1}$ ($i = 1, \ldots, n$) of a random vector $(X, Y) \in \mathbb{R}^{p+1}$. When $Y$ is a binary variable, a commonly used parametric estimation method is the so-called "logit model", which belongs to the class of "generalized linear models" (GLM). One assumes that

$$m(x) = F(\beta_0 + \beta_1^T x)$$

where $F(s) = 1/(1 + \exp(-s))$ and $(\beta_0, \beta_1^T)^T \in \mathbb{R}^{p+1}$. More recently, a number of authors have proposed to estimate simultaneously the parameters $\beta_0$ and $\beta_1$ and the function $F(\cdot)$ by using "single-index modelling" (SIM) methodology. In this paper, as an extension of logit models and semiparametric SIM, we intend to examine the practical performance and the utility of semiparametric "multiple-index modelling" (MIM). The basic assumption of MIM is that all the relevant information provided by $X$ is contained in $d$ factors, that is, $d$ linear combinations of the components of $X$, which can be interpreted as projection directions for $X$. This assumption can be more realistic in practice than the assumption used in GLM and SIM, in which a single factor is supposed to carry all the relevant information of $X$.
In the statistical literature, several papers are concerned with multiple-index inference under the name of "dimension reduction techniques" (Li (1991), Cook (1994), Hristache et al. (2001), Xia et al. (2002) and Delecroix et al. (2006)). MIM requires some troublesome computational implementations. From the literature, we chose two methods for their simplicity of implementation, namely the refined Outer Product Gradient (rOPG) and the refined Minimum Average Variance Estimation (rMAVE) (Xia et al., 2002), the latter derived from the Minimum Average Variance Estimation (MAVE) method. The procedure allows one to estimate simultaneously a regression function as well as multiple indexes by local polynomial fitting; see Xia et al. (2002) for more on their advantages. rOPG and rMAVE applied to SIM were studied in detail, in theory as well as in practice, by Xia (2006). They can also be applied to a multiple-index framework.
The paper is organized as follows. Section 2 explains the MIM methodology, including a comparison with SIM and GLM. Section 3 provides a brief description of the two estimation methods used in the paper. In Section 4, the practical performance of MIM is illustrated using simulated and real data. Section 5 contains a conclusion and a discussion.
2 MIM Methodologies

A semiparametric multiple-index model is defined as

$$m(X) = g(B^T X) \qquad (2.1)$$

where $g(\cdot)$ is an unknown function such that $g : \mathbb{R}^d \to \mathbb{R}$ ($d \ll p$, $d \le 3$), and $B$ is a $p \times d$ orthogonal matrix such that $B = (\beta_1, \ldots, \beta_d)$ and $B^T B = \mathrm{Id}_d$ as an identification condition (Ichimura and Lee, 1991). The function $m(X)$ depends on $X$ only through $B^T X$. The goal is to estimate both $g$ and $B$. If $d = 1$, one obtains a single-index model, which is clearly a generalisation of the usual GLM as defined in McCullagh and Nelder (1989). In GLM, one assumes the conditional density $f_{Y|X=x}(y)$ of $Y \mid X = x$ to belong to the exponential family:

$$f_{Y|X=x}(y) = \exp\left[ A(g(x^T \beta)) + C(g(x^T \beta))\, y + D(y) \right]$$

where $A$, $C$ and $D$ are known functions and $g(\cdot)$ is called the inverse link function. Estimating $m$ under (2.1) is equivalent to searching for the space spanned by the column vectors of $B$, because $g$ is nothing but the conditional expectation $g(B^T X) = E(Y \mid B^T X)$ and can be estimated nonparametrically when $B$ is known. Assuming (2.1) then allows one to escape the classical "curse of dimensionality" when estimating $m$ nonparametrically, provided $d \ll p$ (Li (1991), Cook (1994), Hristache et al. (2001), Xia et al. (2002), and Delecroix et al. (2006)).
Li (1991) proposed a dimension reduction method called Sliced Inverse Regression (SIR), which allows one to estimate the so-called effective dimension reduction space (spanned by the column vectors of $B$). Among the numerous dimension reduction methods proposed thereafter, we selected the methods of Xia et al., which will be detailed in Section 3.
Before detailing the methods, note that, for the sake of simplicity, one can incorporate a sequential procedure into the dimension reduction methods. First, estimate $\beta_1$ under the assumption that the model contains a single index. Second, substitute $\beta_1$ by $\hat\beta_1$ under the assumption that the model contains two indexes, and then estimate $\beta_2$. Continuing in this way, estimate as many parameters as needed. Delecroix et al. (2006) gave a brief description of this sequential procedure in their semiparametric M-estimation method, suggesting a simplified computation in practice. This idea was reviewed by Patilea (2007). In the double-index model, a semiparametric M-estimator of the second component is written as

$$\hat\beta_2 = \arg\min_{\beta_2,\; \beta_2 \perp \hat\beta_1} \frac{1}{n} \sum_{i=1}^{n} \psi\!\left(Y_i, \hat g(\hat\beta_1^T X_i, \beta_2^T X_i)\right) \tau(X_i)$$

where $\psi$ is a contrast function, $\tau(\cdot)$ is a trimming function and $\hat g(\hat\beta_1^T X, \beta_2^T X)$ is a nonparametric estimator of $g(\beta_1^T X, \beta_2^T X) = E(Y \mid (\beta_1^T X, \beta_2^T X))$. For details of semiparametric M-estimation, see Delecroix et al. (2006). This sequential procedure is different from projection pursuit (Friedman and Stuetzle, 1981), in which the regression function is approximated by a sum of univariate smooth functions. Although the MIM estimated by the sequential procedure may perform better than SIM and GLM provided that the true model contains multiple indexes, it may be less accurate than the MIM estimated by the simultaneous method, which estimates all components of $B$ at once. However, for ease of implementation, this procedure is worth considering in practice.
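To make the sequential procedure concrete, here is a rough Python sketch under assumptions that are not from the paper: a bivariate Nadaraya-Watson smoother stands in for the local polynomial estimator of $g$, the contrast $\psi$ is squared error, the trimming function $\tau$ is omitted, and the bandwidth is fixed. The orthogonality constraint $\beta_2 \perp \hat\beta_1$ is enforced by optimizing over the orthogonal complement of $\hat\beta_1$.

```python
import numpy as np
from scipy.optimize import minimize

def nw2d(u1, u2, U1, U2, Y, h=0.3):
    """Bivariate Nadaraya-Watson estimate of E[Y | (U1, U2) = (u1, u2)]."""
    w = np.exp(-((U1 - u1) ** 2 + (U2 - u2) ** 2) / (2 * h ** 2))
    return np.sum(w * Y) / max(np.sum(w), 1e-12)

def sequential_beta2(X, Y, beta1_hat, h=0.3):
    """Estimate beta2 subject to beta2 _|_ beta1_hat (squared-error contrast,
    leave-one-out smoothing; the paper's trimming function is omitted)."""
    n, p = X.shape
    b1 = beta1_hat / np.linalg.norm(beta1_hat)
    P = np.eye(p) - np.outer(b1, b1)       # projector onto the complement of b1

    def loss(theta):
        b2 = P @ theta                      # enforce orthogonality to beta1_hat
        nrm = np.linalg.norm(b2)
        if nrm < 1e-8:
            return 1e10
        b2 = b2 / nrm
        U1, U2 = X @ b1, X @ b2
        total = 0.0
        for i in range(n):                  # leave-one-out prediction error
            mask = np.arange(n) != i
            g_i = nw2d(U1[i], U2[i], U1[mask], U2[mask], Y[mask], h)
            total += (Y[i] - g_i) ** 2
        return total / n

    x0 = np.random.default_rng(0).normal(size=p)
    res = minimize(loss, x0, method="Nelder-Mead")
    b2 = P @ res.x
    return b2 / np.linalg.norm(b2)
```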
3 Presentation of Estimation Procedures

The rOPG and rMAVE methods are based on weighted least squares, more specifically on the local polynomial fitting of Fan and Gijbels (1996). Let us recall the estimator in a simple model such as $Y = f(X) + \varepsilon$, $E(\varepsilon \mid X) = 0$ a.s., for a given sample $(X_i, Y_i)_{i=1}^n$. If $f$ is sufficiently smooth, the local linear estimators of the regression function $f$ and its gradient $\nabla f$ at a given point $X_j$ are defined by minimizing

$$\sum_{i=1}^{n} \left\{ Y_i - a_j - b_j^T (X_i - X_j) \right\}^2 w_h(X_i, X_j) \qquad (3.1)$$

where $h > 0$ is a bandwidth and $w_h(X_i, X_j)$ is a weight function, for example

$$w_h(X_i, X_j) = K_h(X_i - X_j) \Big/ \sum_{s=1}^{n} K_h(X_s - X_j)$$

with $K_h(\cdot) = h^{-p} K(\cdot / h)$, so that $\sum_{i=1}^{n} w_h(X_i, X_j) = 1$.
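As an illustration, a minimal Python sketch of the local linear fit (3.1) follows; the Gaussian product kernel, the bandwidth and the toy data are assumptions made for the example, not choices from the paper.

```python
import numpy as np

def local_linear(Xj, X, Y, h=0.5):
    """Minimize (3.1): return (a_j, b_j), estimates of f(X_j) and grad f(X_j)."""
    n, p = X.shape
    D = X - Xj                                    # X_i - X_j
    K = np.exp(-0.5 * np.sum((D / h) ** 2, axis=1))
    w = K / K.sum()                               # normalised weights w_h(X_i, X_j)
    Z = np.hstack([np.ones((n, 1)), D])           # regressors [1, (X_i - X_j)^T]
    W = np.diag(w)
    theta = np.linalg.solve(Z.T @ W @ Z, Z.T @ W @ Y)   # weighted least squares
    return theta[0], theta[1:]

# Toy check on f(x) = sin(x1) + x2^2, so f(0) = 0 and grad f(0) = (1, 0)
rng = np.random.default_rng(1)
X = rng.normal(size=(300, 2))
Y = np.sin(X[:, 0]) + X[:, 1] ** 2 + 0.1 * rng.normal(size=300)
a_hat, b_hat = local_linear(np.zeros(2), X, Y)
print(a_hat, b_hat)
```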
Now consider the model (2.1), where $g$ and $B$ are unknown. If $B_0 = B$ were known, local linear estimators of $g$ and $\nabla g$ would be a solution to the following minimization problem:

$$\min E\left\{ Y - E(Y \mid B_0^T X) \right\}^2 = \min E\left\{ \sigma_{B_0}^2 (B_0^T X) \right\}$$

where $\sigma_{B_0}^2(B_0^T X) = E\left[ \left\{ Y - E(Y \mid B_0^T X) \right\}^2 \,\middle|\, B_0^T X \right]$. At a point $X_0$, the local linear approximation can be written as

$$E(Y \mid B_0^T X) \approx a + b^T B_0^T (X - X_0).$$
However, $B$ is an unknown parameter. One can naturally substitute $B$ by an estimator $\hat B$ to estimate the regression function. Xia et al. propose to optimize the problem below simultaneously with respect to $B$ and $(a_j, b_j)$:

$$\min_{B : B^T B = I} \; \min_{a_j, b_j,\; j = 1, \ldots, n} \; \sum_{j=1}^{n} \sum_{i=1}^{n} \rho_j \left\{ Y_i - a_j - b_j^T B^T (X_i - X_j) \right\}^2 w_h(X_i, X_j) \qquad (3.2)$$

where $\rho_j$ is a so-called "trimming function" which allows one to exclude from the sum the values $X_i$ at which the density estimator is close to zero, in order to stabilize the computations (see Cizek et al. (2006), Delecroix et al. (2006) and Xia (2006) for details). This procedure is repeated until $\hat B$ converges. Xia et al. (2002) call the method described above MAVE. The simultaneous minimization is more convenient than other existing methods, which adopt two separate strategies to estimate $g$ and $B$.
rMAVE is nothing but a refined version of the estimator (3.2) (the MAVE method), using the refined weight function

$$w_h^*(X_i, X_j) = K_h(\hat B^T (X_i - X_j)) \Big/ \sum_{s=1}^{n} K_h(\hat B^T (X_s - X_j)).$$

As $B$ is unknown, we substitute it by an estimator calculated iteratively (see Xia et al. (2002) for details). The idea is to search for an improved estimate of the reduction space and then use it in the weight function. This improves the accuracy of estimation, because the effect of high dimension in the nonparametric estimation of the regression function is reduced, since $d \ll p$.
Just as rMAVE refines MAVE, rOPG is the refined version of OPG. Based on the MAVE approach, Xia et al. (2002) investigated the OPG method as a generalisation of ADE in Hardle and Stoker (1989). In the model (2.1), one considers the average of the outer product of the gradients, denoted by $\Sigma_{p \times p}$:

$$\Sigma_{p \times p} = E\{\nabla m(X) \nabla^T m(X)\} = B\, E\{\nabla g(B^T X) \nabla^T g(B^T X)\}\, B^T.$$

By Lemma 1 in Xia et al. (2002), the $p \times d$ orthogonal dimension reduction matrix $B$ consists of the $d$ eigenvectors corresponding to the $d$ largest eigenvalues of $\Sigma_{p \times p}$. By simply solving the problem (3.1), one can estimate $b_j$. Therefore, $\Sigma_{p \times p}$ is estimated as

$$\hat\Sigma_{p \times p} = \frac{1}{n} \sum_{j=1}^{n} \hat\rho_j\, \hat b_j \hat b_j^T$$

where $\hat\rho_j$ is a trimming function, included for the same reason as in MAVE. Finally, one determines the $d$ eigenvectors corresponding to the $d$ largest eigenvalues, which is equivalent to estimating all components of $B$. Once $\hat B$ is obtained, one can re-estimate $b_j$; from $\hat b_j$, one can re-estimate $B$. The two steps are repeated until $\hat B$ converges.
4 Simulated and Real Data Results

To assess the practical performance of MIM and compare it to SIM and Logit models, we conducted a simulation study and an application to a well-known credit scoring data set from a southern German bank (Fahrmeir and Tutz, 1994), available from the Fahrmeir R-package (www.r-project.org). For this purpose, we used three estimation methods: maximum likelihood estimation (MLE) for Logit models, and the two estimation methods rOPG and rMAVE for SIM and MIM. The original Matlab implementations of Xia's algorithms are available on his website (www.stat.nus.edu.sg/~staxy/). For the purposes of this paper, we made small modifications to the original programs and implemented the three algorithms in SAS/IML.
4.1 Data Set

In our simulation study, the generated model is defined as $Y = 1$ if $Z > 0$ and $Y = 0$ if $Z \le 0$, where

$$Z = (X^T \beta_1)^2 + X^T \beta_2 + 0.2\,\varepsilon.$$

$X = (X_1, X_2, X_3, X_4)^T$ consists of four independent random variables with a standard normal distribution $N(0,1)$. The residual $\varepsilon$ is a standard normal variable independent of $X$. The true components of $B$ are taken as $\beta_1 = \tfrac{1}{\sqrt{4}}(1,1,1,1)^T$ and $\beta_2 = \tfrac{1}{\sqrt{22}}(-4,1,1,2)^T$. The sample size is $n = 200$, of which 100 observations serve as a validation set. We generate 100 samples of $(Y, X) \in \mathbb{R}^5$.
In the credit scoring example, we used an extract of a data set in Fahrmeir and Tutz (1994). The goal is to determine the creditworthiness of a client from 8 covariates. The binary variable is described as $Y = 0$: creditworthy and $Y = 1$: not creditworthy. There are two continuous covariates, $X_3$: duration of credit in months and $X_4$: amount of credit, expressed in Deutsche Marks (DM). The other 6 covariates are binary variables: $X_1 = 1$: no running account, $X_2 = 1$: good running account, $X_5 = 1$: bad payment record on previous credits, $X_6 = 1$: professional intended use, $X_7 = 1$: male and $X_8 = 1$: living alone. The data contain 300 observations with $Y = 1$ and 700 with $Y = 0$. To obtain 200 observations in a validation set, we divided the 1000 observations randomly into two groups using the existing SAS procedure Proc Surveyselect.
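For concreteness, a short Python sketch of the simulated design described above (the seed is arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)                    # arbitrary seed
beta1 = np.ones(4) / np.sqrt(4.0)
beta2 = np.array([-4.0, 1.0, 1.0, 2.0]) / np.sqrt(22.0)

def simulate(n=200):
    X = rng.normal(size=(n, 4))                   # X ~ N(0, I_4)
    eps = rng.normal(size=n)                      # residual, independent of X
    Z = (X @ beta1) ** 2 + X @ beta2 + 0.2 * eps
    return X, (Z > 0).astype(int)

X, Y = simulate()
X_train, Y_train = X[:100], Y[:100]               # training set
X_valid, Y_valid = X[100:], Y[100:]               # validation set
```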
4.2 Results

Our purpose is to estimate the parameters with a training set and then to estimate the probability $P(Y = 1 \mid X)$ on a validation set by polynomial fitting. We then calculate the rate of misclassification in the validation set for each of the models. For simplicity, we use a typical classification rule based on an estimated probability $\hat P(\cdot)$, a score $S$ and a threshold $s$ ($0 \le s \le 1$): the predicted value is $\hat Y = 1$ if $\hat P(S) \ge s$ and $\hat Y = 0$ if $\hat P(S) < s$. It is easy to see that $\hat Y$ is a function of $s$, i.e. $\hat Y = \hat Y(s)$. The estimated probability is calculated as $\hat P(S) = 1/(1 + \exp(-S))$ for Logit models and as $\hat P(S) = \hat g(S)$ for the other models. The score $S$ is a variable equal to a linear combination of $X$ for Logit models and SIM, while $S$ is a $d$-dimensional vector of $d$ linear combinations of $X$ for MIM.
Table 1 shows the rates of misclassification for the 100 observations in the 30th validation set from the simulated data with $s = 0.5$. The second column shows how many observations are predicted as 0, i.e. $\hat Y = 0$, out of all observations with true value $Y = 1$ (and similarly for the third column). We remark that there is only a small difference in the overall misclassification rate between Logit models and SIM, but for each estimation method the double-index models classify much more accurately than Logit models and SIM. Regarding the performance of the estimation methods in the double-index model, rMAVE (9%) outperforms rOPG (15%).
Indeed, the rMAVE estimators are $\hat\beta_1 = \tfrac{1}{\sqrt{4}}(1.14, 0.91, 0.87, 1.05)^T$ and $\hat\beta_2 = \tfrac{1}{\sqrt{22}}(-3.70, 0.97, 0.75, 2.60)^T$.

Table 1. Rate of misclassification in a simulated experiment, n=100

Model         Observed 1 (Y=1)   Observed 0 (Y=0)   Overall misclassification
LOGIT         9/66 (13.64%)      24/34 (70.59%)     33/100 (33%)
rOPG (d=1)    13/66 (19.70%)     19/34 (55.89%)     32/100 (32%)
rMAVE (d=1)   11/66 (16.67%)     17/34 (50.00%)     28/100 (28%)
rOPG (d=2)    3/66 (4.55%)       12/34 (35.29%)     15/100 (15%)
rMAVE (d=2)   1/66 (1.52%)       8/34 (23.53%)      9/100 (9%)
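The classification rule and the Table 1 breakdown are straightforward to reproduce once fitted probabilities are available. A hedged Python sketch, where P_hat stands for whichever model's estimated probabilities are at hand:

```python
import numpy as np

def misclassification(P_hat, Y, s=0.5):
    """Rates among observed Y=1, observed Y=0, and overall, at threshold s."""
    Y_pred = (P_hat >= s).astype(int)
    miss_1 = np.mean(Y_pred[Y == 1] == 0)         # true 1 predicted as 0
    miss_0 = np.mean(Y_pred[Y == 0] == 1)         # true 0 predicted as 1
    overall = np.mean(Y_pred != Y)
    return miss_1, miss_0, overall

# e.g. for a logit fit with score S = X @ beta_hat:
# P_hat = 1.0 / (1.0 + np.exp(-S))
```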
By calculating the frequency with which each estimation method and model was best over the 100 samples, we confirmed that the lowest rate of misclassification per sample is most often attained by rMAVE (74%), followed by rOPG (22%), in the framework of the double-index model (d = 2).
Table 2 shows the rates of misclassification for the 200 observations in a validation set from the credit scoring data at the thresholds s = 0.3, 0.5, 0.7.

Table 2. Rate of misclassification in an application to credit scoring data, n=200

         LOGIT    rOPG d=1   rOPG d=2   rOPG d=3   rMAVE d=1   rMAVE d=2   rMAVE d=3
s = 0.3  72.4%    35.2%      33.8%      31.6%      38.8%       31.0%       28.2%
s = 0.5  27.6%    27.0%      26.0%      24.2%      23.0%       22.8%       24.2%
s = 0.7  27.6%    26.8%      27.2%      24.2%      28.0%       25.2%       24.6%

In Table 2, except for s = 0.5 with rMAVE, the rates of misclassification improve at all the given thresholds in the presence of three indexes ($B^T B = \mathrm{Id}_3$, $\beta_1, \ldots, \beta_3 \in \mathbb{R}^8$) for each estimation method. At s = 0.3, the misclassification rate of the LOGIT model is much worse than that of SIM. In other words, contrary to the results of the simulated experiment, the flexibility of SIM may contribute to improving the estimates, so that SIM performs better than Logit models in this case.
5 Conclusions and Discussion

We have studied the practical performance of semiparametric MIM as a substitute for SIM and GLM, using the dimension reduction methods called rOPG and rMAVE. We started from a popular GLM, the logit model, which is commonly applied to binary choice responses, and extended it to MIM. The proposed models are well adapted to our classification problems. Generally, rMAVE performs better than rOPG. Although the algorithm for estimating MIM is more complicated to implement and computationally slower than those of SIM and GLM, we have shown that MIM is the most powerful of the three models. Moreover, the algorithms of rOPG and rMAVE are easier to implement than other dimension reduction methods, because they involve only one simultaneous minimization problem. As mentioned above, rOPG performs worse than rMAVE, but it is much easier to implement. To enhance the simplicity of rMAVE, one may incorporate the sequential procedure into rMAVE, as mentioned in Section 2. Although we focus on applications to binary choice data in this paper, MIM may be applied to different types of data. An advantage of rOPG and rMAVE is that they are well adapted to dependent data such as time series.
References
[1] Cizek P and Hardle W (2006) Robust estimation of dimension reduction space. Computational Statistics & Data Analysis 51(2):545-555
[2] Delecroix M, Hristache M, Patilea V (2006) On semiparametric M-estimation. J. Statist. Plan. Infer. 136:730-769
[3] Fan J and Gijbels I (1996) Local Polynomial Modelling and Its Applications. Chapman and Hall, London
[4] Fahrmeir L and Tutz G (1994) Multivariate Statistical Modelling Based on Generalized Linear Models. Second ed. Springer, New York
[5] Friedman JH and Stuetzle W (1981) Projection pursuit regression. J. Amer. Statist. Assoc. 76:817-823
[6] Ichimura H and Lee LF (1991) Semiparametric least squares estimation of multiple index models: single equation estimation. In: Nonparametric and Semiparametric Methods in Statistics and Econometrics. Cambridge University Press, 3-50
[7] McCullagh P and Nelder JA (1989) Generalized Linear Models. Second ed. Chapman and Hall, London
[8] Patilea V (2007) Semiparametric regression models with applications to scoring: a review. Communications in Statistics - Theory and Methods 36:2641-2653
[9] Xia Y, Li WK, Tong H, Zhang D (2002) An adaptive estimation of dimension reduction space. J. Roy. Statist. Soc. Ser. B 64:363-410
[10] Xia Y (2006) Asymptotic distributions for two estimators of the single-index model. Econometric Theory 22:1112-1137
Towards Transverse Laser Cooling of an Indium Atomic Beam

Jae-Ihn Kim and Dieter Meschede

Institut für Angewandte Physik, Rheinische Friedrich-Wilhelms-Universität Bonn, Wegelerstraße 8, D-53115 Bonn, Germany
[email protected]
Abstract. Since the achievement of laser cooling, the field of atomic physics has developed rapidly. One widespread application is atomic nanofabrication (ANF), in which structured arrays are fabricated by means of light and magnetic forces. In this report, the past, present and future of the ANF experiment with neutral indium atoms are described.
1 Introduction

Atomic physics has evolved dramatically since the achievement of the laser cooling technique [1]. Laser-cooled atoms and gases have enabled such broad applications as quantum information [2], Bose-Einstein condensation [3], precision measurements [4], and atomic nanofabrication (ANF) [5]. ANF experiments, in which a periodic structure is fabricated on a substrate by means of light masks or magnetic lenses, have attracted attention. These methods have been demonstrated successfully by many groups [6-9]. ANF experiments using group III atoms, such as In, Al and Ga, are especially interesting because these elements are widely used in compound materials in semiconductor physics. So far, the laser manipulation of Al [10] and Ga [11] for the purpose of ANF has been demonstrated. In our laboratory, we attempt to demonstrate the transverse laser cooling of an indium atomic beam, one of the requirements of an ANF experiment. With the laser-cooled In atomic beam, a fully 3D-structured (In,Al)As crystal with periodically modulated In concentration is expected to be deposited by the ANF method, as shown in Fig. 1. In this report, the previous and current experiments on the laser cooling of an indium atomic beam are presented. Firstly, a Λ-type laser cooling scheme and its problems are briefly introduced. Secondly, an ultraviolet laser system to drive a cycling transition in indium is described. Finally, the current status of the experiment towards laser cooling of an indium atomic beam is briefly mentioned.

Fig. 1. Prospective 3D-structured (In,Al)As crystal with periodically modulated In concentration deposited by the ANF method
2 Transverse Laser Cooling of an Indium Atomic Beam Using Multi-level Λ Transitions

For the past several years, we have tried to laser-cool an indium atomic beam using a Λ-type cooling scheme [12]. Fig. 2 shows the energy level scheme of 115In. The 5P1/2−6S1/2 and 5P3/2−6S1/2 transitions are driven by violet diode lasers and a frequency-doubled Ti:sapphire laser, respectively. The frequencies of the diode lasers are locked to the corresponding atomic resonance frequencies, and that of the Ti:sapphire laser is stabilized by an external cavity.
Fig. 2. (a) Energy level scheme of 115In. (b) Experimental setup. Cooling beams consist of 5 laser frequencies. Atomic fluorescence is detected by a CCD camera to observe the velocity distribution of the atomic beam. The indium atomic beam is produced by a commercial effusion cell which is heated up to 1200°C.
It turned out that this cooling scheme has several serious problems, as reported by Klöter et al. Firstly, due to the large number of ground states, the scattering rate which produces the light force is limited to 1/6, a factor of 3 smaller than that of a two-level system. Secondly, due to the dark states in the F to F and F to F+1 transitions, atoms are trapped in the dark states after only a few scattering processes, which means that the atoms no longer feel light forces. Finally, the capture range in this scheme is limited by the potential depth of the standing wave because of the transient cooling mechanism. Instead of the Λ transition, one can choose a cycling transition in indium, resonant with UV light at 325.609 nm, to laser-cool an indium atomic beam.
3 A UV Light Source to Drive a Cycling Transition in Indium Rather than Λ type cooling scheme a cycling transition, |5P3/2, F=6〉 - |5D5/2, F=7〉, can be employed as a cooling transition in indium. It is more conventional to use a cycling transition for the laser cooling because it allows a high rate of absorption-emission cycles. A UV light source at 326nm is required to drive this cycling transition in indium. Recently a fiber-based tunable UV light source at 326nm for the laser cooling of indium atoms has been developed [13]. Fig. 4 shows the experimental setup for UV light system.
Fig. 3. Energy level scheme of 115In. A cycling transition is resonant to an ultraviolet (UV) light at 325.609 nm.
Fig. 4. Experimental setup for a UV light source for laser cooling of an indium atomic beam [13]. This system is based on an external cavity diode laser, two fiber amplifiers and two external cavities for frequency upconversion.
Two home-made fiber amplifiers were built to obtain a reasonable power of fundamental light at 977 nm. One of the infrared beams at 977 nm is then frequency doubled in an external cavity (EC). Subsequently, the 977 nm light from the other fiber amplifier and the second-harmonic (SH) light at 488 nm are coupled into a doubly resonant cavity in order to generate the third-harmonic light at 326 nm. Eventually, a useful power of 12 mW at 326 nm could be obtained.
4 Current Status of the Laser Cooling Experiment

Using the UV light at 326 nm, Doppler-limited absorption spectroscopy was performed to find the proper transition wavelength of indium atoms in a hollow cathode lamp. More than 50% absorption was observed.
Because the thermal population of the ground state of the cycling transition, |5P3/2, F=6〉, is only 6% at 1200°C, the atoms in the 5P3/2 states must be pumped to this ground
state for efficient cooling. The population was increased to 33% by optical pumping using two violet diode lasers at 410 nm. Recently, an atomic-beam pushing effect has been observed. When the indium atomic beam is irradiated by resonant UV light at 326 nm, a strong deviation of the atomic beam occurs; the corresponding transverse velocity change was 92 cm/s ≈ 87 vr, where vr is the recoil velocity, with a saturation parameter of only 0.8. This velocity change is more than 2 times larger than that of the Λ-type cooling scheme.
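As a back-of-envelope check (not taken from the paper), the single-photon recoil velocity v_r = h/(mλ) for 115In at 325.609 nm is about 1.07 cm/s, so 92 cm/s indeed corresponds to roughly 87 recoil velocities:

```python
# Constants are standard values; this check is not from the paper.
h = 6.62607e-34        # Planck constant (J s)
m = 115 * 1.66054e-27  # mass of 115-In (kg)
lam = 325.609e-9       # cycling-transition wavelength (m)

v_r = h / (m * lam)    # single-photon recoil velocity (m/s)
print(v_r * 100)       # ~1.07 cm/s
print(0.92 / v_r)      # ~86-87, i.e. 92 cm/s is about 87 v_r
```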
5 Conclusion and Outlook

The past and current experiments on laser cooling of an indium atomic beam have been described. Several problems with the Λ-type cooling scheme were discussed, and a new laser cooling experiment employing a cycling transition in indium, designed to avoid these problems, was introduced. Laser manipulation of indium atoms using a cycling transition may make it possible to realize ANF experiments on an atom-by-atom basis and to contribute to nanotechnology and semiconductor research.
References
[1] Metcalf H and van der Straten P (1999) Laser Cooling and Trapping. Springer, New York
[2] Kuhr S, Alt W, Schrader D, Müller M, Gomer V, and Meschede D (2001) Deterministic delivery of a single atom. Science 293:278
[3] Anderson M, Ensher J, Matthews M, Wieman C and Cornell E (1995) Observation of Bose-Einstein condensation in a dilute atomic vapor. Science 269:198
[4] Takamoto M, Hong F, Higashi R, and Katori H (2005) An optical lattice clock. Nature 435:321
[5] Meschede D and Metcalf H (2003) Atomic nanofabrication: atomic deposition and lithography by laser and magnetic forces. J. Phys. D 36:R17
[6] McClelland J J, Scholten R E, Palm E C, and Celotta R (1993) Laser-focused atomic deposition. Science 262:87
[7] Gupta R, McClelland J J, Jabbour Z J, and Celotta R (1995) Nanofabrication of a two-dimensional array using laser-focused atomic deposition. Appl. Phys. Lett. 67:1378
[8] Drodofsky U, Stuhler J, Schulze T, Drewsen M, Brezger B, Pfau T, and Mlynek J (1997) Hexagonal nanostructures generated by light masks for neutral atoms. Appl. Phys. B 65:755
[9] Mützel M, Tandler S, Haubrich D, Meschede D, Peithmann K, Flaspöhler M, and Buse K (2002) Atom lithography with a holographic light mask. Phys. Rev. Lett. 88:083601
[10] McGowan R W, Giltner D M and Lee S A (1995) Light force cooling, focusing and nanometer-scale deposition of aluminum atoms. Opt. Lett. 20:2535
[11] Rehse S J, Bockel K M, and Lee S A (2004) Laser collimation of an atomic gallium beam. Phys. Rev. A 69:063404
[12] Klöter B, Weber C, Haubrich D, Meschede D, and Metcalf H (2008) Laser cooling of an indium atomic beam enabled by magnetic fields. Phys. Rev. A 77:033402
[13] Kim J and Meschede D (2008) Continuous-wave coherent ultraviolet source at 326 nm based on frequency tripling of fiber amplifiers. Opt. Express, accepted for publication
Heat and Cold Stress Indices for People Exposed to Our Changing Climate

JuYoun Kwon and Ken Parsons

Human Thermal Environments Laboratory, Department of Human Sciences, Loughborough University, Loughborough, Leicestershire LE11 3TU, United Kingdom
[email protected]
Abstract. The aim of the experiment presented in this paper was to evaluate and develop climate indices that are currently used to assess human health. Twenty people were exposed to a range of climatic conditions outside, near a weather station, in the UK. Measurements of the climate included air temperature, radiant temperature (including solar load), humidity and wind speed. Subjective responses were taken, and physiological measurements included internal body temperature, heart rate and sweat loss. The responses of the participants over a one-hour exposure were compared with those that would have been predicted using ISO standards, in particular the Wet Bulb Globe Temperature (WBGT) index (ISO 7243), the Predicted Mean Vote (PMV, ISO 7730) and the Wind Chill Index (WCI, ISO 11079), as well as the PMVsolar index, a version of the PMV index created to account for the effects of direct solar radiation. The results show that all indices considered provided statistically significant (p<0.01) correlations between the index value and the thermal sensation of subjects. The highest correlation overall was with the WBGT index. This preliminary finding will be tested in more extensive trials involving more subjects and covering all seasons in the UK.
1 Introduction

It is generally accepted that over the last 50 years, in Europe, there has been an increase in minimum and maximum temperatures, changes in precipitation patterns and an increase in the number of extreme events such as heat waves, heavy rain and droughts. Heat waves alone account for significant increases in mortality; for example, there were well over 20,000 excess deaths across Europe in the 2003 heat wave (Bhattacharya, 2003). This has also been reflected across the planet, and there is a general requirement to set up mechanisms to warn people of the probability of atypical weather, to predict the likely consequences, and to provide methods for reducing damage to health. It is a multi-disciplinary problem that requires a systematic approach. Part of that system involves the use of a valid index to represent the weather conditions to which people are exposed and to predict the physiological strain, and hence the possible health effects, on people.

1.1 Climatic Indices

A system of international standards has been produced over a thirty-year period that provides assessment methods for hot, moderate and cold environments. In hot environments a simple method based upon the Wet Bulb Globe Temperature (WBGT)
index provides a method for monitoring and regulating heat stress (ISO 7243). This index was originally developed by Yaglou and Minard (1957) to reduce heat casualties during the outdoor training of military recruits in the USA. It is expressed in terms of the natural wet-bulb temperature (t_nwb), the globe temperature (t_g) and the dry-bulb temperature (t_a). In moderate environments thermal comfort is assessed using the Predicted Mean Vote (PMV) and the Predicted Percentage of Dissatisfied (PPD) indices (ISO 7730). The PMV is an index which predicts the thermal sensation of people on a seven-point scale (+3 hot, +2 warm, +1 slightly warm, 0 neutral, -1 slightly cool, -2 cool, -3 cold). Hodder (2002), Parsons (2003) and Hodder and Parsons (2007) provided a modification to the PMV index to include the direct effects of the sun on thermal sensation (PMVsolar). The PMVsolar is calculated as the PMV (excluding direct solar radiation) increased by one sensation-scale unit for every 200 Wm-2 of direct solar radiation on the person. The PMV scale is also extended to include +4 very hot, +5 extremely hot, -4 very cold and -5 extremely cold. For cold environments ISO 11079 provides a method of calculating the required clothing insulation (IREQ) as well as wind chill and equivalent temperatures. Siple and Passel (1945) created the Wind Chill Index (WCI), which is calculated from wind speed and air temperature and estimates whether the weather conditions influence human comfort and health. An example calculation of each of the indices is provided below.
The Wet Bulb Globe Temperature (WBGT) is calculated from the following equations:

WBGT = 0.7 t_nwb + 0.3 t_g                 (out of the sun)   (1)
WBGT = 0.7 t_nwb + 0.2 t_g + 0.1 t_a       (in the sun)       (2)

where t_nwb = temperature of a natural wet-bulb thermometer (ºC), t_g = temperature of a 150 mm diameter black globe thermometer (ºC), and t_a = air temperature (ºC).
An example, for a hot day in the sun: t_nwb = 25ºC, t_g = 40ºC, t_a = 30ºC, so WBGT = 28.5ºC.
The Predicted Mean Vote (PMV) is calculated from the air temperature (t_a), mean radiant temperature (t_r), humidity (ø), air velocity (v), an estimate of the clothing insulation worn (clo) and the metabolic heat produced by activity (Met). For the experiment reported in this paper, an example would be: t_a = 19ºC, t_r = 25ºC, ø = 74%, v = 2.6 ms-1, and the subject wore a white short-sleeved t-shirt, blue jeans, underwear and socks with an estimated clothing insulation of 0.47 clo. The activity was a step test at 20 steps per minute, providing an estimated metabolic rate of 244 Wm-2. For these data, PMV = 1.5 (slightly warm to warm) and PPD = 53%.
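A minimal Python sketch of equations (1) and (2), reproducing the worked WBGT example above:

```python
def wbgt(t_nwb, t_g, t_a=None, in_sun=True):
    """WBGT from equations (1) and (2); temperatures in degrees Celsius."""
    if in_sun:
        return 0.7 * t_nwb + 0.2 * t_g + 0.1 * t_a   # equation (2)
    return 0.7 * t_nwb + 0.3 * t_g                   # equation (1)

print(wbgt(25, 40, 30))   # 28.5, as in the worked example
```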
PMVsolar = PMV + RAD/200    (3)
For the PMVsolar example: ta=19ºC, tr=19ºC, ø=74%, v=2.6ms-1, direct solar radiation (RAD)=245Wm-2, and the subject wore a white short-sleeved t-shirt, blue jeans, underwear and socks with an estimated clothing insulation of 0.47clo. The activity was a step test at 20 steps per minute,
providing an estimated metabolic rate of 244Wm-2. The PMV excluding direct solar radiation was 1.3, and 245Wm-2 of direct solar radiation adds 245/200 ≈ 1.2 scale units. Therefore, PMVsolar was 1.3 + 1.2 = 2.5 (warm to hot). The Wind Chill Index (WCI) is calculated from the following equation:
WCI = 1.16 (10√v + 10.45 − v)(33 − ta)  Wm-2    (4)
For v=2.6ms-1 and ta=19ºC, the Wind Chill Index is 390Wm-2. The chilling temperature (tch) is calculated from the following equation:
tch = 33 − WCI/25.5  ºC    (5)
For WCI=390Wm-2, the chilling temperature is 17.7ºC.

1.2 Aim

The aim of this study was to evaluate existing thermal indices (WBGT, PMV, PMVsolar, WCI) when applied to predicting the thermal sensations of people outdoors.
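As with WBGT above, the remaining index calculations can be sketched in a few lines (again a hypothetical illustration, not part of the original paper), reproducing the worked examples of Section 1.1:

```python
import math

def pmv_solar(pmv, rad):
    """Eq. (3): one sensation unit per 200 W/m2 of direct solar radiation."""
    return pmv + rad / 200.0

def wci(v, t_a):
    """Eq. (4): Siple-Passel Wind Chill Index in W/m2."""
    return 1.16 * (10.0 * math.sqrt(v) + 10.45 - v) * (33.0 - t_a)

def t_ch(wci_value):
    """Eq. (5): chilling temperature in degrees C."""
    return 33.0 - wci_value / 25.5

print(round(pmv_solar(1.3, 245.0), 1))   # 2.5, "warm to hot"
w = wci(2.6, 19.0)
print(round(w))                           # ~390 W/m2
print(round(t_ch(w), 1))                  # ~17.7 degrees C
```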
2 Methods

2.1 Environmental Measurement

Environmental conditions were recorded around the subjects. Air temperature and humidity were measured using a whirling hygrometer. Dry and wet bulb temperatures were measured using shielded thermistors placed at heights of 0.2m, 1.2m and 1.7m. The globe thermometer was placed in the sun, at the same location as the air temperature measurement. Radiation levels were measured using a Skye pyranometer SP 1110. Air velocity was measured using a B&K anemometer and an Oregon weather station WMR 928 NX. Clothing worn varied with conditions and consisted of a white cotton short-sleeved shirt, a white cotton/polyester shirt or a grey cotton/polyester sweatshirt, with blue jeans, underwear, socks and trainers.

2.2 Physiological Measurement

The subject was weighed before and after the 'exposure' using Mettler 1D1 Multirange Digital Dynamic Scales. Skin and aural temperatures were measured during the exposure using thermistors. Heart rate was measured using a Polar Sports Tester. Mean skin temperature was calculated using a four-point method (Ramanathan 1964).

2.3 Subjective Measurement

A subjective questionnaire was completed by the subjects every ten minutes throughout the 60-minute exposure (see Appendix A). The ISO 11-point thermal sensation scale was used (ISO 10551). Subjects gave ratings of thermal sensation, comfort, stickiness, preference, pleasantness, acceptance, satisfaction and Borg's Rating of Perceived Exertion (RPE).
2.4 Participants

Twenty fit male and female subjects (see Table 1) conducted a step test for one hour at 20 steps per minute, with a step height of 0.15m or 0.1m, in an open space outdoors in the sun where the weather station was located.

Table 1. Physical characteristics of participants [mean (SD)]

Sex            Age        Height (cm)   Weight (kg)   BSA* (m2)
Male (n=16)    29 (8.5)   176 (5.8)     71 (10.8)     1.86 (0.128)
Female (n=4)   30 (6.6)   162 (4.8)     55 (5.2)      1.57 (0.079)

*: BSA means Body Surface Area (m2).
2.5 Procedures

When participants arrived at the place where the experiments were held, they were told the details of the experiment and completed generic health screening forms. Skin thermistors and aural thermistors were fitted, and heart rate monitors were put on their chests. Their semi-nude and fully clothed body weights were measured before the step test. The participants exercised for 60 minutes in an open space facing the sun, performing a step test in time to a metronome set at a rate of 80bpm on a vertical rise of 150mm or 100mm. Subjects were advised to alter their choice of lead foot periodically to avoid unequal leg strain. Every minute the subjects' physiological measurements and the environmental parameters were recorded, and a subjective questionnaire was completed every ten minutes. At the end of the experiment, the subjects' semi-nude and clothed weights were measured again.
3 Results

3.1 Environmental Measurements

The range of environmental conditions for the 20 participants is presented in Table 2. The lowest air temperature was -1ºC and the highest was 27ºC; other environmental conditions also varied widely. The data were collected between July 2007 and March 2008 at latitude 52.47ºN, longitude 1.11ºW (Loughborough, UK), between 9 am and 5 pm.

Table 2. Mean environmental conditions over 60 mins (n=20)

                                     Range
Air temperature, ta (ºC)             -1 ~ 26.9
Mean radiant temperature, tr (ºC)    4.8 ~ 47.7
Air velocity, v (ms-1)               0.34 ~ 5.3
Relative humidity, ø (%)             43 ~ 100
Solar radiation (Wm-2)               58 ~ 1063
Clothing (clo*)                      0.47 ~ 0.97

* clo is a unit which gives an estimate of the clothing insulation on the human body. For example, 0clo is a nude person and 1.0clo is the insulation of a 'typical business suit'.
3.2 Physiological Measurements

The subjects wore aural thermistors in both ears, and four skin thermistors were fitted to the chest, upper arm, thigh and shin. A heart rate monitor was fitted to the chest. Body weights were measured just before and after the experiments, and total sweat loss was calculated as the difference between the semi-nude weight before and after the experiment. Table 3 shows the range of the physiological data.

Table 3. Final physiological data after 60 mins (n=20)

                          Range
Aural temperature (ºC)    36.6 ~ 37.6
Skin temperature (ºC)     28.8 ~ 34.8
Heart rate (bpm)          74 ~ 138
Total sweat loss (g/h)    75 ~ 613
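The four-point mean skin temperature mentioned in Section 2.2 can be computed as below; this is a sketch assuming the standard Ramanathan (1964) weighting (0.3, 0.3, 0.2, 0.2 for chest, upper arm, thigh and lower leg), with purely illustrative input values:

```python
def mean_skin_temperature(t_chest, t_upper_arm, t_thigh, t_shin):
    """Ramanathan (1964) four-site weighted mean skin temperature."""
    return 0.3 * t_chest + 0.3 * t_upper_arm + 0.2 * t_thigh + 0.2 * t_shin

print(mean_skin_temperature(34.0, 33.5, 32.0, 31.0))  # 32.85 (degrees C)
```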
3.3 Subjective Measurements

Participants rated their thermal sensation, comfort, stickiness, preference, pleasantness, acceptance, satisfaction and Borg's RPE. The results can be seen in Table 4. In the case of acceptance, 4 out of 20 people found the thermal environment unacceptable, and 6 out of 20 were dissatisfied with it.

Table 4. Final subjective data after 60 mins (n=20) (see Appendix A)

                    Range
Thermal sensation   -3 ~ 3 (cold ~ hot)
Comfort             1 ~ 3 (not uncomfortable ~ uncomfortable)
Stickiness          1 ~ 4 (not sticky ~ very sticky)
Preference*         -2 ~ 1 (cooler ~ slightly warmer)
Pleasantness**      -2 ~ 0.5 (unpleasant ~ slightly pleasant)
Borg's RPE          6 ~ 17 (no exertion at all ~ very hard)

*: +3 much warmer, +2 warmer, +1 slightly warmer, 0 no change, -1 slightly cooler, -2 cooler, -3 much cooler.
**: +3 very pleasant, +2 pleasant, +1 slightly pleasant, 0 neither pleasant nor unpleasant, -1 slightly unpleasant, -2 unpleasant, -3 very unpleasant.
3.4 Analysis of Climate Indices

Table 5 shows the correlations between the actual mean votes (AMV) of the thermal sensation of subjects and the values of the climate indices.

Table 5. Pearson correlations between AMV and climate indices

      WBGT        PMV         PMVsolar    WCI          tch
AMV   .680 (**)   .600 (**)   .648 (**)   -.617 (**)   .617 (**)

** Correlation is significant at the 0.01 level (2-tailed).
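For readers wishing to reproduce this kind of analysis, a correlation of this form could be computed as sketched below; the arrays are hypothetical placeholders, not the study data:

```python
from scipy.stats import pearsonr

amv  = [1.0, -2.0, 0.5, 3.0, -1.0, 2.0]    # actual mean votes (illustrative)
wbgt = [15.0, 6.0, 12.0, 24.0, 8.0, 20.0]  # matching WBGT values (illustrative)

r, p = pearsonr(amv, wbgt)
# r**2 is the proportion of variation explained, as quoted in Sections 3.4.1-3.4.4
print(f"r = {r:.3f}, p = {p:.3f}, r^2 = {r * r:.3f}")
```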
3.4.1 Wet Bulb Globe Temperature (WBGT)

The results show that as the WBGT index increased, the thermal sensation of participants became warmer (Fig. 1). The relationship between WBGT and AMV is shown in Table 5. There was a significant correlation between WBGT and AMV (r=.680, N=158, p<.01, two-tailed); this correlation explains 46.3% of the variation. The scatter plot shows that the data points are reasonably well distributed. When WBGT was between 8ºC and 15ºC, participants reported sensations between 'cold' and 'warm'.

Fig. 1. Scatter plot of AMV and WBGT (see Appendix A)
3.4.2 Predicted Mean Vote (PMV)

The value of AMV was generally lower than that of PMV (Fig. 2); in other words, participants actually felt slightly cooler than the index predicted. The relationship between PMV and AMV is shown in Table 5. There was a significant correlation between PMV and AMV (r=.600, N=133, p<.01, two-tailed); this correlation explains 36% of the variation.

Fig. 2. Scatter plot of AMV and PMV (see Appendix A)
3.4.3 PMVsolar

PMVsolar was higher than AMV; that is, participants actually felt cooler than PMVsolar predicted (Fig. 3). The relationship between PMVsolar and AMV is shown in Table 5. There was a significant correlation between PMVsolar and AMV (r=.648, N=148, p<.01, two-tailed); this correlation explains 42% of the variation.

Fig. 3. Scatter plot of AMV and PMVsolar (see Appendix A)
3.4.4 Wind Chill Index (WCI)

According to the WCI, the corresponding sensation should be cool or slightly cold when the WCI is between 400Wm-2 and 700Wm-2. However, about half of the participants felt neutral to warm (Fig. 4); the actual thermal sensation of participants was therefore warmer than the sensation corresponding to the WCI. The relationship between WCI and AMV is shown in Table 5. There was a significant correlation between WCI and AMV (r=-.617, N=151, p<.01, two-tailed); this correlation explains 38% of the variation.

Fig. 4. Scatter plot of AMV and WCI (see Appendix A)
3.4.5 Chilling Temperature (tch)

The chilling temperature is the temperature that people are estimated to actually feel in a cold environment. When the chilling temperature was above 18ºC, most participants felt neutral to hot (Fig. 5); below 15ºC, however, their thermal sensations varied from cold to warm. The correlation between tch and AMV was of the same magnitude as that between WCI and AMV (Table 5).

Fig. 5. Scatter plot of AMV and chilling temperature (tch)
4 Discussion and Conclusion

All 20 exposure environments were below the WBGT reference values (ISO 7243) and would therefore have been acceptable working environments when considering heat stress. PMVsolar showed a significant (p<.01) correlation with AMV; however, its value was consistently higher than AMV. According to the corresponding sensations expected from the WCI, people should feel cool when the WCI is 400Wm-2 and cold when it is 800Wm-2. However, the experimental results were warmer than the sensations corresponding to the WCI: most people felt neutral or slightly warm when the WCI was 400Wm-2, and about half answered slightly cool, neutral or slightly warm when the WCI was 600Wm-2 and 700Wm-2. This indicates that the WCI predicts people to be colder than their actual thermal sensation. The results show that all of the indices considered provided statistically significant (p<0.01) correlations between the index value and the thermal sensation of subjects. The highest correlation overall was with the WBGT index. This preliminary finding will be tested in more extensive trials involving more subjects and covering all seasons in the UK.

Acknowledgments. The authors would like to thank Dr Simon Hodder and Lisa Kelly for their practical support during the experiments.
References

[1] Bhattacharya S (2003) European heatwave caused 35,000 deaths. New Scientist, October 2003
[2] Hodder S G, Parsons K C (2007) The effects of solar radiation on thermal comfort. International Journal of Biometeorology 51(3):233-250
[3] Hodder S G (2002) Thermal comfort in vehicles: The effect of solar radiation. PhD thesis, Loughborough University
[4] ISO 7243 (2003) Hot environments – Estimation of the heat stress on working man, based on the WBGT-index (wet bulb globe temperature). International Standards Organisation, Geneva
[5] ISO 7730 (2005) Ergonomics of the thermal environment – Analytical determination and interpretation of thermal comfort using calculation of the PMV and PPD indices and local thermal comfort criteria. International Standards Organisation, Geneva
[6] ISO 11079 (2007) Ergonomics of the thermal environment – Determination and interpretation of cold stress when using required clothing insulation (IREQ) and local cooling effects. International Standards Organisation, Geneva
[7] Parsons K C (2003) Human Thermal Environments (2nd ed.). Taylor and Francis, London, ISBN 0415237939
[8] Ramanathan N L (1964) A new weighting system for mean surface temperature of the human body. Journal of Applied Physiology 19(3):531-533
[9] Siple P, Passel C (1945) Measurements of dry atmospheric cooling in subfreezing temperatures. Proceedings of the American Philosophical Society 89(1):177-199
[10] Yaglou C P, Minard C D (1957) Control of heat casualties at military training centers. A.M.A. Archives of Industrial Hygiene and Occupational Health 16(4):302-316
Appendix A: Subjective Questionnaire

Thermal comfort assessment
Condition: outdoor   Time:   Subject:   Date:

1. Thermal Environment. Please rate how YOU feel NOW:
Thermal sensation: +5 Extremely hot; +4 Very hot; +3 Hot; +2 Warm; +1 Slightly warm; 0 Neutral; -1 Slightly cool; -2 Cool; -3 Cold; -4 Very cold; -5 Extremely cold
Comfort: 4 Very uncomfortable; 3 Uncomfortable; 2 Slightly uncomfortable; 1 Not uncomfortable
Stickiness: 4 Very sticky; 3 Sticky; 2 Slightly sticky; 1 Not sticky

2. Please rate on the scale how YOU would like to be NOW:
Much warmer / Warmer / Slightly warmer / No change / Slightly cooler / Cooler / Much cooler

3. Please rate on the scale how YOU feel NOW in this thermal environment:
Very pleasant / Pleasant / Slightly pleasant / Neither pleasant nor unpleasant / Slightly unpleasant / Unpleasant / Very unpleasant

4. Please indicate how acceptable YOU find this thermal environment: acceptable / unacceptable

5. Please indicate how satisfied YOU are with this thermal environment: satisfied / dissatisfied

Comments (main source of discomfort):

6. Please rate on the following scale how heavy and strenuous the exercise feels to you (give the number):
6 No exertion at all; 7 Extremely light; 8; 9 Very light; 10; 11 Light; 12; 13 Somewhat hard; 14; 15 Hard (heavy); 16; 17 Very hard; 18; 19 Extremely hard; 20 Maximal exertion
The Thrill Effect in Medical Treatment: Thrill Effect as a Therapeutic Tool in Clinical Health Care (Esp. Music Therapy)*

Eun-Jeong Lee

Institute of Medical Psychology, University of Heidelberg, Heidelberg, Germany
[email protected]

Abstract. The Thrill Effect is a strong and powerful reaction of the whole psycho-somatic system to an "appropriate" stimulus under supporting circumstances. The studies to date, though quite different in design, have found a large variety of symptoms, especially during and/or after musical stimulation (music therapy) [1, 2, 3, 4]. It is now known that the thrill phenomenon strongly affects the human body and mind through music [1]. PET images of the human brain during the thrill effect show that thrill may have emotionally positive effects [5, 6]. This study with European music therapists affirms these findings while also highlighting the effect's main hazards and its differentiation from other strong emotions. Supporting factors were identified, and positive influences on the success of therapies were shown. In-depth research on the Thrill Effect, its supporting factors and the development of appropriate methods for its induction and encouragement will be most helpful in influencing the outcome of many forms of therapy. This non-pharmacological therapeutic research could be a scientific challenge for the neuroscience of music, music psychology and their clinical application.
1 Introduction

Nowadays the effects of music are studied in many scientific fields. Neuroscience and music is a new fundamental research area investigating the function of the human brain under musical stimuli. A PET study [5] and an fMRI study [7] showed differing types of brain activity in reaction to music perceived as pleasant or unpleasant, as well as an activation of emotion processing within the human brain. Another PET study [6] showed how the human brain reacted to music with intensely pleasurable responses. Activity in the right thalamus, anterior cingulate cortex, midbrain, supplementary motor area, bilateral cerebellum, left ventral striatum, bilateral insula and orbitofrontal cortex, regions related to enjoyment, reward or euphoria, correlated positively with the intensity of pleasurable responses to musical stimuli. Conversely, the amygdala, ventral medial prefrontal cortex and hippocampus/amygdala, areas connected with fear or aversive emotions, showed a negative correlation. In the latest studies, this intense response is called "thrill". The term refers to a common human experience [1, 3, 4, 8]: a strong emotional response to music that occurs simultaneously on both physiological (vegetative) and psychological levels [1]. Other terms are "chill" [8, 10, 11], "shiver" [8] and "peak experience" [4, 9]. Such a strong response to music is mostly experienced as pleasurable in itself [1]. Extraordinary pleasure [10], love and sadness [8] were the emotions most often occurring during chills. In fact, women experienced significantly more chills than men when listening to sad music, whereas men experienced more chills with happy music [8]. Thrills were triggered mostly when the music featured unexpected elements such as melodic appoggiaturas or new or unexpected changes of harmony [2]. Chills seemed to be influenced by dramatic increases in volume and could appear in connection with familiar or already known genres of music [10]. This strong emotional and vegetative reaction to music could be helpful for patients experiencing problems in perceiving and realizing their own physical and mental situations. This study aims to demonstrate the therapeutic significance of thrill and to show that thrill may be a worthy therapeutic tool in medical treatment, especially music therapy. Parts of this work will be published in German at a later date.

* This article is based in particular on an unpublished diploma thesis (2005) at the Department of Music Therapy, University of Applied Sciences, Heidelberg, Germany.
2 Method I

2.1 Survey Among Experts I (with Interviews)

Two specialists, one from neurological and one from oncological music therapy, each with more than 5 years of music therapy experience, were interviewed in order to find out whether the thrill phenomenon is universal among music therapy patients and which effects could be observed by the therapist or reported by the patients themselves. For these interviews the symptom categories of Sloboda's study were used [2].

2.2 Result I

Thrill produced by musical stimulation also appeared in clinical contexts, especially in music therapy. Both therapists reported observing thrill symptoms such as laughter, shivers down the spine, a lump in the throat, tears, outbreaks of goose pimples, racing heart, trembling, flushing/blushing, facial tension, accelerated breathing and sweating. Symptoms reported by patients were tickling sensations, shivers down the spine, a 'lump in the throat' and goose pimples. It was particularly difficult to isolate symptoms such as shivers down the spine and goose pimples, as these could also be attributed to the patients' lack of adequate dress, and as patients might not have consciously experienced thrill. Yawning, pit-of-stomach sensations and sighs also occurred in the clinical context, though patients showing such symptoms did not acknowledge them as thrill phenomena. Sexual arousal and hair standing on end did not occur during music therapy. Some thrill phenomena could be specifically observed in relation to secondary body symptoms. For example, the symptom of a 'lump in the throat' often appeared in response to music in clinical contexts, and could be witnessed from swallowing motions, a throaty voice in post-musical dialogues or facial expressions.

Fig. 1. Structure of the questionnaire: general occurrence and characteristics of thrills
3 Method II

3.1 Survey Among Experts II (with Questionnaire)

Fourteen music therapists with over 5 years' music therapy experience in neurology and oncology participated in this study. The oncological music therapists worked in acute care wards and/or rehabilitation clinics. All neurological music therapists worked with patients from neurological rehabilitation phase B (early rehabilitation), and three of them also worked with patients from phase C (early mobilization) [12]. The questionnaires on thrills were based on the results of the first survey, with additional interviews conducted with the two experts. They focused on four key issues, two of which are introduced in this paper:

1. Thrills in the context of music therapy: prior knowledge, occurrence, situation and context, and type of experience (see Fig. 1)
2. Thrills in the context of self-perception

The occurrence of thrill responses was rated on an ordinal scale (never=0, rare=1, often=2), and the emotional quality and self-perception on a five-level ordinal scale (never=0, rarely=1, occasionally=2, often=3, very often=4). The subjects chose the music therapeutic settings in which patients experienced thrills.

3.2 Result II

Findings show that thrill effects appeared more than occasionally in music therapy (µ=2.3). The most frequently observed thrill responses were tears (µ=1.9, often) and laughter (µ=1.6, often). All of the other observed and reported thrill symptoms appeared rarely (µ=0.6 - 1.5), except for tickling sensations (µ=0.2) (see Fig. 2). Thrills appeared most frequently when therapists and patients played music together (µ=0.79). Secondly, thrills frequently occurred in situations where patients listened to music performed by the music therapist (µ=0.64). Patients also experienced thrill responses when listening to their own music (sit. 1) or music they had been asked
to select (sit. 4). However, patients experienced significantly more thrills when listening to recordings of music they had played themselves. No significant difference in thrill occurrence (see Fig. 3) could be shown between the appreciation of live music and music played from CD recordings.

Fig. 2. Observed and reported thrills in music therapy

Fig. 3. Situations of thrill occurrence in music therapy

The contradictory emotions of pleasure (µ=3.0, often) and sadness (µ=2.9, often) were most often experienced with thrill responses. Both emotions were aroused significantly more often in music therapy than other emotions. Love and lust (µ=2.0),
shame (µ=1.8) and curiosity (µ=1.7) appeared only occasionally, and disgust (µ=1.0) rarely. Neurological and oncological patients occasionally perceived conscious thrill responses to music (µ=2.36), and after thrill experiences patients could more easily communicate with therapists about themselves (µ=2.6). Music therapists assumed that patients' ability to perceive themselves clearly and to experience their body and mind more intensely was heightened after the thrill effect (µ=2.29, occasionally) (see Fig. 4).

Fig. 4. Self-perception
4 Discussion

The data of this empirical study verified that patients in oncology and neurology actually feel strong emotional and physical responses to music in their medical context. In comparison to Sloboda's study [2], the four most commonly occurring symptoms in the neurological and oncological context are tears (1), laughter (2), accelerated breathing (3) and sweating (4); Sloboda found shivers down the spine (1s), laughter (2s), 'lump in the throat' (3s) and tears (4s) to be most frequent. The fact that tears are the most frequent thrill symptom in clinical contexts hints at the relevance of patients' illnesses when considering their thrill reactions. Pleasure and sadness are mostly associated with thrill effects; other studies likewise show that thrill is experienced as pleasure, love or sadness (see Introduction). It is interesting to note that both basic emotions are primarily in accordance with the frequently occurring symptoms (tears and laughter) from this study. Panksepp's study [8] showed that chills occur in the context of familiar, liked and favorite music, and this study concurs with his findings. Musical contexts such as patients playing music together with therapists, patients listening to their favorite music, and patients listening to live music according to their liking must feature such familiar and enjoyable elements. It is therefore reasonable to deduce that patients' individual personalities should be considered in music therapy treatment and that therapists should adhere to the ISO-Principle1. Thrill also occurs during surprising
1 ISO-Principle: therapists pick patients up where they are; they attempt to select music that mirrors the patient's current mood and/or emotional state as closely as possible.
moments in music (see Introduction). The present study shows that thrill responses to music often occurred while patients were listening to live improvisations by therapists. A music therapist in this study states that patients often experience surprise when listening to therapists' improvisations, which they have never heard before. In this case too, liking and acceptance, together with comfortable and enjoyable feelings, are basic preconditions for experiencing thrills. Encouraging patients' self-perception, especially in neurology and oncology as well as in other clinical fields, is one of the main elements of therapeutic treatment [13-15]. According to this study, patients may gain increased physical and mental awareness after thrill effects. Nevertheless, the problem remains that patients could not perceive consistent thrill responses each time, due to their disease-conditioned disposition. If patients are able to enter into therapeutic dialogue and find themselves increasingly able to talk about their physical and mental situation after experiencing thrills, it makes sense to assume that patients possessed a basic ability for self-perception prior to treatment. A meaningful interpretation could be that thrill as a peak experience [9] contributes to therapeutic processes. Gabrielsson and Lindström [9] classified the therapeutic implications related to SEM (Strong Experience of Music) in their descriptive study. The fourth category of their classification is: "Music may release barriers or defenses, often unconscious, giving contact with hidden thoughts and feelings and providing feelings of openness and freedom (p. 200)." This delivers the message that thrill, as a strong emotional response, is crucial to the emotional and physiological opening necessary for patients' healing. A further supporting result from the field of neuroscience of music is the activation during chill experiences of the ventral striatum and dorsomedial midbrain, which are associated with motivation/approach behavior and reward, as well as of the thalamus and the anterior cingulate (AC), which are involved in general arousal and attentional processes [6]. Because of its strong power, the thrill effect may at first be perceived as a negative experience by patients new to the clinical context; in this case therapeutic support is necessary to connect to the primarily unperceived sensation.
5 Conclusion

This study has shown that "thrill" does indeed occur in clinical contexts and has helped (music) therapeutic treatment. Induced by music, thrill benefits our somatic and mental well-being. Because the thrill phenomenon is still little known, it is difficult to distinguish these strong responses exactly from other emotional reactions. Thrill as a well-structured therapeutic tool could help to utilize and implement the ISO-Principle in music therapy for the therapeutic process. Further research is necessary, especially on the relationship between the thrill effect and its practical medical utilization in view of diverse diseases such as depression, bulimia nervosa, dementia, drug abuse etc. It would be a great opportunity for psychiatric patients to have a non-pharmacological treatment, utilizing the thrill effect, without any side effects.
References

[1] Goldstein A (1980) Thrills in response to music and other stimuli. Physiological Psychology 8(1), 126-129
[2] Sloboda J (1991) Music structure and emotional response: Some empirical findings. Psychology of Music 19, 110-120
[3] Panksepp J, Bernatzky G (2002) Emotional sounds and the brain: the neuro-affective foundations of musical appreciation. Behavioural Processes 60, 133-155
[4] Gabrielsson A, Lindström S W (2003) Strong experiences related to music: A descriptive system. Musicae Scientiae 7(2), 157-217
[5] Blood A, Zatorre R, Bermudez P, Evans A C (1999) Emotional responses to pleasant and unpleasant music correlate with activity in paralimbic brain regions. Nature Neuroscience 2, 382-387
[6] Blood A, Zatorre R (2001) Intensely pleasurable responses to music correlate with activity in brain regions implicated with reward and emotion. Proc. Natl. Acad. Sci. 98, 11818-11823
[7] Koelsch S, Fritz T, von Cramon D Y, Müller K (2006) Investigating emotion with music: An fMRI study. Human Brain Mapping 27, 239-250
[8] Panksepp J (1995) The emotional sources of "chills" induced by music. Music Perception 13(2), 171-207
[9] Gabrielsson A, Lindström S (1995) Can strong experiences of music have therapeutic implications? In: Steinberg R (ed.) Music and the mind machine. The psychophysiology and psychopathology of the sense of music (pp. 195-202). Springer, Berlin
[10] Grewe O, Nagel F, Kopiez R, Altenmüller E (2005) How does music arouse "chills"? Investigating strong emotions, combining psychological, physiological, and psychoacoustical methods. Ann N Y Acad Sci 1060, 446-449
[11] Grewe O, Nagel F, Kopiez R, Altenmüller E (2007) Listening to music as a re-creative process: physiological, psychological, and psychoacoustical correlates of chills and strong emotions. Music Perception 24(3), 297-314
[12] Baumann M, Geßner C (eds.) (2004) Zwischen Welten. Musiktherapie bei Patienten mit erworbener Hirnschädigung. Reichert Verlag, Wiesbaden
[13] Baumann M (2004) Wo steht die Musiktherapie in der Neurorehabilitation? Musiktherapeutische Umschau 25(1), 45-56
[14] Koch U, Matthey K, Weis J (1998) Bedarf psychoonkologischer Versorgung in Deutschland: Ein Ist-Soll Vergleich. Psychotherapie Psychosomatik Medizinische Psychologie 48, 417-425
[15] Koch U, Weis J (eds.) (1998) Krankheitsbewältigung bei Krebs und Möglichkeiten der Unterstützung. Der Förderschwerpunkt "Rehabilitation von Krebskranken". Schattauer, Stuttgart
Status of the Climate Change Policies in South Korea

Ilyoung Oh

Centre for Environmental Strategy, University of Surrey, Guildford, UK
[email protected]
Abstract. Climate change is considered to be the greatest long-term challenge facing human beings. South Korea has a considerable role to play in dealing with this challenge, given that it ranks 9th in greenhouse gas emissions and 12th in economic volume. The South Korean government has formulated national climate change action plans every three years since 1999. In 2008, new national action plans are under preparation; these new plans include reduction of greenhouse gas emissions, adaptation to climate change and international cooperation. This paper aims to introduce and discuss the direction of the main policies within the new action plans.
1 Introduction

South Korea emitted 591 million tonnes of CO2-equivalent greenhouse gases in 2005, ranking 9th in the world, and its emissions growth rate between 1990 and 2005 (90.1%) is the highest among the OECD countries [1]. Even so, South Korea is, apart from Mexico, the only OECD country not included in the Annex I parties [2]. It can be expected that the leading countries in the UNFCCC (UN Framework Convention on Climate Change) will call for South Korea to become an Annex I party in the agreements following the Kyoto Protocol in the near future. How does the South Korean government intend to deal with these challenges? South Korea formulated and implemented action plans every three years from 1999 to 2007; those action plans encompassed three areas: reduction of greenhouse gases, adaptation to climate change and international cooperation [1]. In 2008, the new government is preparing new action plans, considering the international political landscape after the 2007 UN climate conference in Bali. This paper covers the trends in greenhouse gas emissions, the history of the three previous action plans, the direction of the main policies under consideration by the new government, and related discussion.
2 Greenhouse Gas Emission and the Impacts of Climate Change in South Korea

2.1 Trends of Greenhouse Gas Emission in South Korea

South Korea experienced a steep increase in greenhouse gas emissions in the period 1990 – 2005. The amount of greenhouse gas emitted in 2005 was 591 million tonnes CO2 equivalent, which ranks 9th in the world [1]. Moreover, the growth rate of greenhouse gas emissions between 1990 and 2005 was 90.1%, which ranks 1st among the thirty
OECD countries [2]. These historical emission trends show that the main reasons for the steep increase need to be investigated and that comprehensive climate change policies are needed to curb greenhouse gas emissions in the future. Considering other emission indicators, greenhouse gas emission per unit of GDP is 0.6 tonnes CO2 equivalent per thousand US dollars, and emission per capita is 12.3 tonnes CO2 equivalent per person; 8th and 14th place respectively among the OECD countries [1, 3, 4]. Regarding the contribution of each sector in South Korea, the energy industries sector (electricity generation and petroleum refining) is the biggest contributor to total greenhouse gas emissions, accounting for 29.0% in 2005. The next biggest contribution is from the manufacturing and construction industries sector (25.1%), followed by the transport sector (16.6%), the commercial and residential sector (12.7%) and industrial processes (11.0%) in the same year [1]. In terms of the increase in greenhouse gas emissions from each sector between 1990 and 2005, the energy industries sector experienced a significant increase of 350.3%; similarly, the industrial processes, transport and manufacturing industries sectors rose by 225.6%, 131.6% and 80.7% respectively [1]. With regard to future trends, the 3rd national report of South Korea shows that the greenhouse gas emissions of South Korea are expected to be about 679.2 million tonnes CO2 equivalent in 2010 and 813.9 million tonnes CO2 equivalent in 2020 without additional measures to reduce emissions [1]. The report estimates that emissions from the energy industries and transport sectors will increase by 54.7% and 51.2% respectively in the period 2005 – 2020; conversely, the other sectors, including manufacturing industries, industrial processes and the commercial/residential sectors, are expected to emit less greenhouse gas in 2020 than in 2005. It should also be noted that South Korea's greenhouse gas emissions could overtake the UK's after 2010, considering that the UK's emissions are projected to fall to 620.4 million tonnes CO2 equivalent in 2010 and 623.3 million tonnes CO2 equivalent in 2020 [5]. In conclusion, South Korea has experienced the most dramatic increase in greenhouse gas emissions among the OECD countries and is expected to increase its emissions at a relatively high rate, 37.7% between 2005 and 2020, without aggressive action [1]. In terms of sectors, the energy industries and transport sectors are estimated to make relatively larger contributions to the increasing emission trends than other sectors, even though the rate of increase from both sectors is lower in the fifteen years after 2005 than in the fifteen years before. These emission trends imply that politically leading groups such as the UK and Germany will call for more aggressive climate change policies to be adopted by South Korea, and for South Korea to join the Annex I countries, which have obligations to reduce greenhouse gas emissions, in the near future. Moreover, it has to be pointed out that the greenhouse gas reduction policies of South Korea need to encompass all sectors, particularly concentrating on the energy industries sector, the industrial and domestic sectors that consume significant amounts of electricity, and the transport sector.
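As a quick arithmetic check of the figures quoted above (a sketch using only the inventory numbers cited in the text, in million tonnes CO2 equivalent):

```python
e_2005, e_2010, e_2020 = 591.0, 679.2, 813.9  # cited emissions, Mt CO2-eq

e_1990 = e_2005 / 1.901                        # implied by the +90.1% growth
print(round(e_1990, 1))                        # ~310.9 Mt CO2-eq in 1990

print(round((e_2020 / e_2005 - 1) * 100, 1))   # 37.7 (% increase, 2005-2020)
```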
2.2 Impacts of Climate Change in South Korea

The Office for Government Policy Coordination in South Korea [6], renamed the Prime Minister's Office in 2008, has presented evidence that South Korea has been
impacted by climate change. For instance, the winter season has shortened by about a month since 1920, and the number of storm days per year increased from about 24 in the 1950s to 36.7 in the 2000s. The Korean peninsula has warmed by about 1.5 ℃ during the last century, which is twice the global average. The sea level around JeJu province1, at the southern end of South Korea, has risen at up to three times the global average since the 1960s. Moreover, the damage costs from massive storms are estimated at about 19 billion dollars over the recent decade. Under the IPCC A1B emission scenario, the temperature of South Korea is projected to increase by 4 ℃ above the average level of the last thirty years by the end of the 21st century, and rainfall is estimated to increase by 17% in the same period [1, 7]. In conclusion, half of the southern region of South Korea is expected to change from a temperate to a subtropical climate [6].
3 History of Climate Change Policies in South Korea

The South Korean government has operated the Inter-ministerial Committee on the UNFCCC since 1998 and has formulated and implemented action plans for climate change every three years [1]. The third action plan ended in 2007, and the next action plans are being prepared by the new government in 2008.

Table 1. Summary of climate change policies in South Korea between 1998 and 2007 [1]

Area: Infrastructure for UNFCCC
Policies and measures: Operation of a greenhouse gas registry system; basic research into an emission trading scheme; promoting CDM projects; raising funds for energy saving and renewable energy R&D; promoting PR and partnerships with industries and NGOs.

Area: Mitigation of greenhouse gas
Policies and measures: Expanding VA (voluntary agreements) with the industry sector and financially supporting energy service companies; supporting CHP and VA on renewable energy supply with electricity suppliers; improving the energy efficiency of appliances; driving up the energy standards of new buildings and green building certification; raising the portion of compact cars, upgrading public transportation services and introducing a comprehensive logistics system; promoting landfill gas facilities and control of deforestation.

Area: Adaptation against climate change
Policies and measures: Analysing the data of climate change in East Asia; basic research into the impacts of climate change on ecosystems, agriculture etc.

1 JeJu province is an island located to the south of South Korea, about 200 ㎞ off the Korean peninsula, with an area of 1,848.3 ㎢.
According to the National Communication of South Korea [8], South Korea accomplished 27 tasks, including voluntary agreements with energy-intensive companies, support for energy service companies and promotion of renewable energy supply, through the first action plans, which covered the period 1999 – 2001. For the second period, 2002 – 2004, the action plans were established to achieve three targets: development of greenhouse gas reduction technology, strengthening of greenhouse gas mitigation policies and encouragement of public participation [8]. The third action plan, for the period 2005 – 2007, included three relevant areas: developing the infrastructure for the UNFCCC, mitigating greenhouse gases from each sector and improving the ability to adapt to climate change [1]. Across the three periods, the policies of each action plan were revised in the following periods. The detailed policies and measures of the three periods are summarized in Table 1. Although South Korea has implemented various climate change policies through the previous action plans, it has to be pointed out that these plans failed to actively encourage the public and industry sectors to mitigate greenhouse gases. According to the Office for Government Policy Coordination [6], only 13.6% of the industry sector makes efforts to reduce greenhouse gas emissions, and the South Korean government evaluates this failure as mainly due to the absence of national mitigation targets and non-positive attitudes towards greenhouse gas mitigation. In other words, the South Korean government has not succeeded in making clear to the public and industry sectors how severe the emission trends of South Korea are and how vital reducing greenhouse gas emissions is. In conclusion, it should be stressed that the South Korean government needs not only to maintain a positive attitude towards climate change policies, but also to focus on enforcing a wide range of policies effectively.
4 The Main Direction of Climate Change Policies in the Following Period

The South Korean government is preparing new action plans for the following five years in 2008. Since the former government released drafts of the 4th action plans for the period 2008 – 2012 in December 2007, the new government has been revising the drafts with a more positive attitude, considering the results of the 2007 UN climate conference in Bali. This paper basically refers to the drafts of the 4th action plans, the strategies of the new presidential transition committee and related plans announced by the new government in 2008. The following sections introduce the primary policies with respect to greenhouse gas mitigation, adaptation and international cooperation.

4.1 Medium-Long Term National Targets

As mentioned above, the evaluation of the previous action plans points out that South Korea needs to set up medium- to long-term domestic goals in the next action plans [6]. Following this evaluation, the South Korean government is preparing medium- to long-term goals. As examples from other countries, the UK designed the Climate Change Programme 2006 around a domestic short-term target to reduce CO2 emissions by 20% below 1990 levels by 2010 and a long-term target to cut CO2 emissions by 60%
by 2050 [5]. Recently, the UK government has introduced a new Climate Change Bill which includes medium- and long-term targets to reduce CO2 emissions by 26% by 2020 and by 60% by 2050 against 1990 levels [9]. On the other hand, China announced a domestic target to cut energy consumption per unit of GDP by 20% below 2005 levels by 2010 [6]. There are strong arguments about the political purpose and effectiveness of these targets; however, this is beyond the scope of this paper.

4.2 Changes of Energy Supply Mix

With respect to changing the energy supply mix, the South Korean government highlights the role of renewable energy and nuclear power generation [10]. The government's aspiration to change the energy mix is due to weaknesses in energy supply and consumption in South Korea. For instance, energy intensity, expressed as total primary energy supply per unit of GDP, shows that South Korea (0.34 tonnes of oil equivalent per thousand 2000 US dollars) is relatively more energy intensive than developed countries such as Japan (0.11), Germany (0.18), the United States (0.21) and the UK (0.14) [11]. Moreover, 82.4% of the total primary energy supplied in South Korea in 2005 was accounted for by imported fossil fuels such as oil, coal and natural gas [11]. Considering both factors, the economy of South Korea is significantly vulnerable to world energy prices and supply. In respect of renewable energy, the South Korean government announced a new target for renewable energy supply in April 2008, which is to increase the proportion of energy supplied from renewable sources from 2.5% of primary energy in 2006 to 9% by 2030 [6]. The main policies are an obligation on licensed electricity suppliers to provide a portion of their supply from renewables, raising investment in renewable energy R&D, and expansion of biodiesel supply in the transport sector [6, 10]. The South Korean government has operated voluntary agreements on renewable energy supply with electricity suppliers since 2006, and plans to introduce a renewable energy obligation like the UK's [6]. The government has not only trebled the national budget for renewable energy projects over the last 4 years, but has also raised investment in renewable energy R&D, especially focusing on photovoltaic and wind power generation technologies [10, 12]. In order to increase the consumption of biodiesel, the government plans to raise the proportion of biodiesel (0.5% in 2007) in the diesel distributed at gas stations [6]. Simultaneously, the South Korean government is examining the expansion of nuclear power generation, which contributed 39.0% of total electricity supply in 2005, in order to meet the need to reduce carbon emissions and energy security concerns [10]. Furthermore, the government plans to support South Korea's nuclear power generation industry in order to increase the market share of Korean nuclear energy companies, anticipating that approximately 300 nuclear power stations will be built in the world by 2030 [13].

4.3 Energy Demand Management

Various energy demand policies are due to be launched in the transport and commercial/domestic sectors. As pointed out above, the transport sector is expected to
increase greenhouse gas emissions by 51.2% in the period 2005 – 2020; similarly, the commercial/domestic sectors account for about 50% of total electricity consumption, the emissions from which are projected to rise by 54.7% in the same period [1, 11]. While the transport and commercial/domestic sectors accounted for 20.5% and 20.1% of total energy consumption respectively in 2005, these sectors are estimated to have relatively large potential to reduce energy consumption [6, 10]. In the transport sector, the government is considering revising the emission standards for new vehicles to improve their average fuel efficiency, and increasing the budget allocated to subsidizing environmentally friendly vehicles [6]. Moreover, various plans such as congestion charging, expansion of railway infrastructure and modification of bus networks are being examined in order to encourage a move towards public transport [6]. Likewise, the government is considering a wide range of measures in the commercial/domestic sectors. According to the Ministry of Knowledge Economy [10], the building regulations will be revised to make buildings more energy efficient: for instance, energy efficiency standards will be strengthened and building energy efficiency will be screened by the authorities throughout the building permission process. Smart metering systems will be promoted with the aim of changing residents' behavior.

4.4 Adaptation to the Climate Change

As mentioned above, various pieces of evidence show that South Korea is affected by climate change. To improve its ability to adapt, the South Korean government will take action to expand research into the projection and monitoring of regional climate change impacts in eastern Asia, and to set up an adaptation policy framework [6]. The National Institute of Environmental Research (NIER) and the National Institute of Meteorological Research (NIMR) are revising and expanding climate change scenarios and models to improve predictions of regional climate change impacts, with work planned until the middle of the 2010s [14]. The South Korean government will launch a national forum for climate change adaptation, consisting of experts from government and academia, and will announce a white paper including specific action plans on adaptation in the second half of 2008 [6]. In addition, some initiatives such as a carbon-neutral programme and voluntary agreements with local governments have been launched since 2007 [6]. For instance, Gwacheon city2 has operated a voluntary individual carbon-offsetting project since the beginning of 2008, corresponding to its voluntary agreement with the Ministry of Environment [15].

4.5 Climate Change Bill, Market Based Mechanisms and International Cooperation

The South Korean government has carried out research into carbon saving potential, specific schemes for a climate change bill, a carbon taxing system and carbon emission trading systems [6]. It is expected that the government will determine the pathway to these measures by the beginning of 2009.
2 Gwacheon city is located about 10 ㎞ from Seoul in South Korea. Its area is 35.8 ㎢ and its population was about 60 thousand in 2006.
Investment in CDM projects by the private sector has been encouraged by the government. For example, 14 CDM projects had been registered with the UNFCCC by 2007, and the private sector raised six private funds (about 100 million dollars) in 2007 [6]. As cooperation projects with developing countries, the government will support China and Mongolia by mobilizing about 10 million dollars for a 10-year desert afforestation project, and will operate climate change adaptation programmes, such as water resources management, for developing countries [6, 16].
5 Discussions and Conclusion

As described in the previous chapters, the climate change policies of South Korea encompass mitigation of greenhouse gas emissions, adaptation and international cooperation. Compared with other countries, it can be said that South Korea has developed a varied and balanced portfolio of policies. According to the Office for Government Policy Coordination [6], the average annual growth rate of greenhouse gas emissions decreased from 4.5% in the first period (1999-2001) to 3.5% in the second period (2002-2004) and 2.8% in the third period (2005 – 2007), even though greenhouse gas emissions increased by 90.1% between 1990 and 2005. However, it has to be pointed out that the South Korean government needs to appraise the carbon-saving effects of each policy with in-depth quantitative and qualitative approaches, because the decrease in the average growth rate since 1999 could be affected by various factors such as changes in the industrial mix, rising oil prices etc. Such an approach can provide analytical evidence for the design process of climate change policies. Secondly, South Korea needs to combine some climate change policies with sustainable consumption and production policies. For example, food consumption involves shopping trips, cooking and waste disposal, whereas food production includes agricultural production, transportation, retailing and food services [17]. Environmentally, these activities lead to greenhouse gas emissions, deforestation, landfill and water pollution; socio-economically, they involve employment, food culture and health issues such as obesity. It can therefore be suggested that a strategy for the sustainable consumption of food can comprehensively provide solutions for the mitigation of greenhouse gas emissions, rain forest conservation, public health etc. Likewise, the UK's sustainability strategy addresses other items such as tourism, retailing and transport [17]. Thirdly, comprehensive research into "rebound effects" needs to be carried out to achieve the desired effects of climate change policies. According to the UK Energy Research Centre [18], the rebound effect describes the behavioral responses by which improvements in energy efficiency encourage greater use of energy. That report comments that rebound effects, directly or indirectly, can make the effects of energy policies smaller than expected; moreover, these rebound effects can be larger in developing countries. It has to be pointed out that energy-efficiency-related policies, such as the expansion of nuclear power generation or the supply of fuel-efficient cars, cannot achieve their goals without critical energy demand management. To sum up, the South Korean government is making efforts to introduce comprehensive climate policies. However, in order to maximize their effects, the government needs to evaluate each policy with quantitative and qualitative approaches, to
consider "rebound effects" in the design process of policies, and to develop climate policies combined with sustainable consumption and production, as with food policies in the UK.
References

[1] The Government of the Republic of Korea (2007) The Third National Communication of the Republic of Korea under the United Nations Framework Convention on Climate Change (draft version). Korea Energy Economic Institution, Seoul
[2] United Nations Framework Convention on Climate Change (UNFCCC) (2007) Report on national greenhouse gas inventory data from Parties included in Annex I to the Convention for the period 1990–2005. http://unfccc.int/resource/docs/2007/sbi/eng/30.pdf. Accessed 19 June 2008
[3] OECD (2008) Principle Economic Indicators. http://www.oecd.org/dataoecd/48/4/37867909.pdf. Accessed 19 June 2008
[4] OECD (2008) OECD.StatExtracts. http://stats.oecd.org/WBOS/Default.aspx?QueryName=254&QueryType=View. Accessed 19 June 2008
[5] Department for Environment, Food and Rural Affairs (DEFRA) (2006) The UK's Fourth National Communication under the United Nations Framework Convention on Climate Change. HMSO, London
[6] The Office for Government Policy Coordination (OPC) (2007) The 4th Climate Change Policy in South Korea. http://www.opc.go.kr/opc/board/opc_article_view.jsp?board_cd=BBBF&category=information&article_no=849. Accessed 19 June 2008
[7] Intergovernmental Panel on Climate Change (IPCC) (2007) Climate Change 2007: Synthesis Report. http://www.ipcc.ch/pdf/assessment-report/ar4/syr/ar4_syr.pdf. Accessed 19 June 2008
[8] The Government of the Republic of Korea (2003) The Second National Communication of the Republic of Korea under the United Nations Framework Convention on Climate Change. http://unfccc.int/resource/docs/natc/kornc02.pdf. Accessed 19 June 2008
[9] DEFRA (2008) The Climate Change Bill. http://www.defra.gov.uk/environment/climatechange/uk/legislation/. Accessed 23 June 2008
[10] Ministry of Knowledge Economy (2008) Strategy for energy saving to overcome high oil price; the report of the Energy Saving Promotion Committee in South Korea. http://www.mke.go.kr/news/bodo/bodo_view.jsp?board_id=P_04_02_05&code=1310&sn=41187. Accessed 19 June 2008
[11] International Energy Agency (IEA) (2008) IEA statistics. http://www.iea.org/Textbase/stats/index.asp. Accessed 19 June 2008
[12] Ministry of Science and Technology (MOST) (2006) Plan on the Research and Development responding to Climate Change (2006 – 2010). MOST, Seoul
[13] The Korea Presidential Transition Committee of the Republic of Korea (2008) Strategy on economic development responding to the crisis of climate change. Press conference of the Presidential Transition Committee, 13 February 2008, Seoul
[14] OPC (2007) Evaluation of the outcome from the 3rd Climate Change Policy in 2006. OPC, Seoul
[15] Kim K E (2008) Seoul attempts to raise climate change funds. Kyounghayng online, 29 April. http://newsmaker.khan.co.kr/khnm.html?mode=view&code=115&artid=17380. Accessed 29 June 2008
[16] Korea Forest Service (2008) The efforts to prevent desertification. http://www.forest.go.kr/foahome/user.tdf?a=common.HtmlApp&c=1001&page=/html/kor/information/foreign/foreign_060_010.html&mc=WWW_INFORMATION_FOREIGN_060. Accessed 19 June 2008
[17] HM Government (2005) Securing the Future; delivering UK sustainable development strategy. http://www.sustainable-development.gov.uk/publications/pdf/strategy/SecFut_complete.pdf. Accessed 19 June 2008
[18] UK Energy Research Centre (2007) The Rebound Effect: an assessment of the evidence for economy-wide energy savings from improved energy efficiency. http://www.ukerc.ac.uk/Downloads/PDF/07/0710ReboundEffect/0710ReboundEffectReport.pdf. Accessed 19 June 2008
Trends in Microbial Fuel Cells for the Environmental Energy Refinery from Waste/Water

Sung Taek Oh

Department of Civil Engineering, University of Glasgow, UK
[email protected]
Abstract. A microbial fuel cell (MFC) is a device for bio-electrochemical energy production. The MFC typically consists of two chambers, an anaerobic anode chamber and an aerobic cathode chamber, separated by an ion-conducting membrane. Anaerobic microorganisms at the anode oxidise organic matter and transfer electrons to the anode; these electrons pass through an external circuit, producing current. Protons migrate through the solution and across the membrane to the cathode, where they combine with oxygen and electrons to form water. The device may provide new opportunities for renewable energy in wastewater/sewage treatment plants. The purpose of this paper is to provide an overview of microbial fuel cell technology and to review the anode and cathode limitations of MFCs. These research trends may help researchers obtain more information on current biotechnology and environmental technology.
1 Introduction of Microbial Fuel Cell

In the past 20 years, chemical fuel cells have been developed as novel devices for direct energy conversion. However, the high operating efficiency of fuel cells is partially offset by (a) the limited viability and high cost of the catalysts, (b) the highly corrosive electrolytes and (c) the elevated operating temperatures. The possibility exists to reduce some of these problems through the development of bio-electrochemical fuel cells. Such bio-electrochemical systems incorporate microorganisms and/or enzymes as an active catalytic component within the specified electrode compartments. Recently, anode compartments with electrochemically active microorganisms have been studied to define the mechanism of the observed electrochemical reactions. The investigations have dealt primarily with developing methodology and defining mechanisms for enhancing the rate of electron transfer from the microorganism to the solid electrode surface. Applications of the developing technologies have been envisioned for analytical chemistry (e.g. Biological Oxygen Demand sensing), medical devices and anti-biofilm measures.
2 Anode in Microbial Fuel Cell

The anode of a microbial fuel cell (MFC) is composed of microbial communities on the surface of graphite. The performance of the anode depends on (a) substrate conversion to protons and electrons and (b) the transfer of electrons to the surface of the graphite. The substrate decomposition depends strongly on the composition of the microbial communities. Electron transfer was explicitly explained by Zhang and Halme's model (1995), in which anodic bacteria produce a redox chemical mediator (electron shuttle),
which becomes enriched around the electrode and transfers electrons to it. They demonstrated the potential importance of electron shuttles as the electron transfer mechanism; the biological production of the redox electron shuttle and its diffusion to the anode may be the major limiting factors in electricity production. The redox electron shuttle is not, however, the only proposed electron transfer mechanism in MFCs. Many researchers (Lee, Phung et al. 2003; Kim, Park et al. 2004; Gorby, Yanina et al. 2006; Kato-Marcus, Torres et al. 2007) observed that electric current is produced in the absence of electron shuttles provided the bacteria survive. These empirical observations have been interpreted as conduction driven by electric potential gradients (Reguera, McCarthy et al. 2005). The path that the electrons trace out en route to the solid electrode remains a matter of debate and ongoing research. The most prominent current theory is that they are conducted through a network of bacterial pili and/or conductive extracellular polymeric substances (EPS). Most efforts in the past have been concentrated almost exclusively on the anode compartment of the MFC, whereas the cathode compartment has been less studied.
3 Cathode in Microbial Fuel Cell

The cathode compartment is categorised by the presence or absence of aerobic microorganisms. The aerobic respiration of microorganisms consumes oxygen and electrons at the cathode for their growth and maintenance (Venkata Mohan, Saravanan et al.; Bergel, Feron et al. 2005; Prasad, Sivaram et al. 2006). Bergel (2005) observed the growth of a biofilm (mixed culture) on the cathode at pH 8.2/28°C in seawater, Biffinger (2007) observed Shewanella oneidensis DSP10 in pure culture, and Prasad (2006) observed the growth of Thiobacillus ferrooxidans in artificial wastewater. However, cathode compartments without microorganisms have also been reported recently (Zhao, Harnisch et al. 2005; Walker and Walker 2006; You, Zhao et al. 2006). These studies propose a steeper rate of electron acceptance at the cathode through chemical catalysis (chemical mediators). The mediators are mainly ferricyanide (K3Fe(CN)6), bilirubin oxidase/laccase and permanganate (KMnO4).

3.1 Ferricyanide

Cathodes used in MFCs are often Pt-coated carbon electrodes immersed in water, using dissolved oxygen as the electron acceptor, or immersed in a ferricyanide solution. The ferricyanide in the cathode compartment is reduced to ferrocyanide. Through this reaction (Eq. 3), the electrochemical potential in the cathode compartment can be increased relative to that of the oxygen reactions (Eqs. 1 and 2). Accordingly, many MFC studies (Venkata Mohan, Saravanan et al.; Rabaey, Lissens et al. 2003; Oh, Min et al. 2004; Oh, Min et al. 2004; Oh and Logan 2006; Prasad, Sivaram et al. 2006; Biffinger, Pietron et al. 2007) report increased power generation of up to 3600 mW/m².
O2 + 2H+ + 2e− → H2O2        Eh = 0.695 V        (1)
O2 + 4H+ + 4e− → 2H2O        Eh = 1.228 V        (2)
Fe(CN)6^3− + e− → Fe(CN)6^4−        Eh = 0.358 V        (3)
3.2 Bilirubin Oxidase/Laccase

Bilirubin oxidase and laccase are copper-containing oxidase enzymes found in fungi and other microorganisms, which can be used at the cathode of an enzyme-catalyzed fuel cell. They perform the reduction of oxygen to water and can be paired with an electron mediator. Many researchers (Wingard, Shaw et al. 1982; Bullen, Arnot et al. 2006) widely use them for medical probe applications (i.e. enzymatic MFCs).

3.3 Permanganate (KMnO4)

This oxidant is among the most commonly used in industry. You and Zhao (You, Zhao et al. 2006) and Zhang and Ranta (Zhang, Ranta et al. 2006) reported that using permanganate as the cathodic electron acceptor of an MFC produces much more electrical power than other existing types of electron acceptors: permanganate (115.6 mW/m²), ferricyanide (25.62 mW/m²) and oxygen only (10.2 mW/m²).
MnO4− + 4H+ + 3e− → MnO2 + 2H2O        Eh = 1.692 V        (4)
MnO4− + 2H2O + 3e− → MnO2 + 4OH−        Eh = 0.589 V        (5)
Furthermore, some researchers (Cheng, Liu et al. 2004; Liu, Ramnarayanan et al. 2004; Min, Kim et al. 2005; Kim, Jung et al. 2007) suggested that aeration (without a mediator) of a non-microbial cathode compartment is also possible.
4 Conclusion

The MFC device may provide new opportunities for renewable energy in wastewater/sewage treatment plants. However, there is limited information available about the energy metabolism and nature of the bacteria using the anode and cathode, and about the electron/proton transfer mechanisms produced with chemical mediators. These limitations are explicit barriers to such opportunities and hence further research targets for practical and industrial MFC applications.
References
[1] Bergel A, D Feron, et al. (2005) Catalysis of oxygen reduction in PEM fuel cell by seawater biofilm. Electrochemistry Communications 7(9):900-904
[2] Biffinger J C, J Pietron, et al. (2007) A biofilm enhanced miniature microbial fuel cell using Shewanella oneidensis DSP10 and oxygen reduction cathodes. Biosensors and Bioelectronics 22(8):1672-1679
[3] Bullen R A, T C Arnot, et al. (2006) Biofuel cells and their development. Biosensors and Bioelectronics 21(11):2015-2045
[4] Cheng S A, H Liu, et al. (2004) Optimization of air cathode used in single-chamber microbial fuel cells. Preprints of Extended Abstracts presented at the ACS National Meeting, American Chemical Society, Division of Environmental Chemistry 44(2):1514-1516
[5] Gorby Y A, S Yanina, et al. (2006) Electrically conductive bacterial nanowires produced by Shewanella oneidensis strain MR-1 and other microorganisms. Proceedings of the National Academy of Sciences of the United States of America 103(30):11358-11363
[6] Kato Marcus A, C I Torres, et al. (2007) Conduction-based modeling of the biofilm anode of a microbial fuel cell. Biotechnology and Bioengineering
[7] Kim B H, H S Park, et al. (2004) Enrichment of microbial community generating electricity using a fuel cell type electrochemical cell. Applied Microbiology and Biotechnology 63:672-681
[8] Kim J R, S H Jung, et al. (2007) Electricity generation and microbial community analysis of alcohol powered microbial fuel cells. Bioresource Technology 98(13):2568-2577
[9] Lee J, N T Phung, et al. (2003) Use of acetate for enrichment of electrochemically active microorganisms and their 16S rDNA analyses. FEMS Microbiology Letters 223(2):185-191
[10] Liu H, R Ramnarayanan, et al. (2004) Production of Electricity during Wastewater Treatment Using a Single Chamber Microbial Fuel Cell. Environmental Science and Technology 38(7):2281-2285
[11] Min B, J Kim, et al. (2005) Electricity generation from swine wastewater using microbial fuel cells. Water Research 39(20):4961-4968
[12] Oh S E and B E Logan (2006) Proton exchange membrane and electrode surface areas as factors that affect power generation in microbial fuel cells. Applied Microbiology and Biotechnology 70(2):162-169
[13] Oh S E, B Min, et al. (2004) Characterization of design factors affecting power output in a microbial fuel cell. Abstracts of Papers, 228th ACS National Meeting, Philadelphia, PA, United States, August 22-26, 2004: ENVR-195
[14] Oh S, B Min, et al. (2004) Cathode Performance as a Factor in Electricity Generation in Microbial Fuel Cells. Environmental Science and Technology 38(18):4900-4904
[15] Prasad D, T K Sivaram, et al. (2006) Microbial fuel cell constructed with a micro-organism isolated from sugar industry effluent. Journal of Power Sources 160(2):991-996
[16] Rabaey K, G Lissens, et al. (2003) A microbial fuel cell capable of converting glucose to electricity at high rate and efficiency. Biotechnology Letters 25(18):1531-1535
[17] Rabaey K and W Verstraete (2005) Microbial fuel cells: novel biotechnology for energy generation. Trends in Biotechnology 23(6)
[18] Reguera G, K D McCarthy, et al. (2005) Extracellular electron transfer via microbial nanowires. Nature 435:1098-1101
[19] Venkata Mohan S, R Saravanan, et al. Bioelectricity production from wastewater treatment in dual chambered microbial fuel cell (MFC) using selectively enriched mixed microflora: Effect of catholyte. Bioresource Technology, In Press, Corrected Proof: 496
[20] Walker A L and J C W Walker (2006) Biological fuel cell and an application as a reserve power source. Journal of Power Sources 160(1):123-129
[21] Wingard L B, C H Shaw, et al. (1982) Bioelectrochemical fuel cells. Enzyme and Microbial Technology 4(3):137-142
[22] You S, Q Zhao, et al. (2006) A microbial fuel cell using permanganate as the cathodic electron acceptor. Journal of Power Sources 162(2):1409-1415
[23] Zhang X C and A Halme (1995) Modelling of a microbial fuel cell process. Biotechnology Letters 17:809-814
[24] Zhang X C, A Ranta, et al. (2006) Direct methanol biocatalytic fuel cell - Considerations of restraints on electron transfer. Biosensors and Bioelectronics 21(11):2052-2057
[25] Zhao F, F Harnisch, et al. (2005) Application of pyrolysed iron(II) phthalocyanine and CoTMPP based oxygen reduction catalysts as cathode materials in microbial fuel cells. Electrochemistry Communications 7(12):1405-1410
[26] Zhao F, F Harnisch, et al. (2006) Challenges and Constraints of Using Oxygen Cathodes in Microbial Fuel Cells. Environmental Science & Technology 40(17):5193-5199
Cell Based Biological Assay Using Microfluidics
Jung-uk Shim1,2, Luis Olguin2, Florian Hollfelder1, Chris Abell2, and Wilhelm Huck2
1 Department of Biochemistry, University of Cambridge, Cambridge, UK
[email protected]
2 Department of Chemistry, University of Cambridge, Cambridge, UK
Abstract. A microfluidic device has been designed to measure and manipulate microdroplets in which protein expression is induced in single cells. The device exploits the permeation of water through poly(dimethylsiloxane) (PDMS) in order to keep constant the concentration of solutes in aqueous picoliter-volume microdroplets stored in wells. The device operates by first creating droplets of the water/solute mixture. Next, droplets are transported down channels and then guided into storage wells using surface tension forces. Finally, the solute concentration of each stored droplet is maintained by the chemical potential of a reservoir that is separated from the droplets by a thin layer of PDMS through which water, but not the solutes, permeates. We coexpressed two target proteins, alkaline phosphatase (AP) [1] and red fluorescent protein (mRFP1) [2], in single cells encapsulated in microdroplets. We interrogated the enzymatic activity of AP and the expression of mRFP1 by following the fluorescence of the stored droplets.
1 Introduction

Microfluidic instruments are capable of precisely manipulating sub-nanoliter quantities of fluids. Their purpose is to vastly reduce the amount of fluid used in chemical processing and to provide accurate delivery of fluids in a defined geometry on the micron length scale with a temporal accuracy of milliseconds. A microfluidic device can include channels for transporting fluids, valves for controlling flow, nozzles to create drops, pumps to propel fluids, storage chambers, and mixers to homogenize multiple fluid streams and drops [3-5]. To this panoply of components we add the abilities to store drops and to controllably vary the water content of stored drops. Each of these primitive functions can be combined in numerous ways to create complex devices optimized for specific tasks. Other powerful features of microfluidics are the ease and rapidity of construction and the low cost of materials. Assays based on isolating bacterial colonies on agar or on microtiter plates, or FACS (fluorescence-activated cell sorting) analysis, are end-point assays yielding a single data point per experiment. Therefore, no resolved information about the time course of an enzymatic transformation is obtained. Even with IVC (in vitro compartmentalisation) [6], time-resolved phenomena have hardly been studied, and there is an increasing need to obtain several data points over time to distinguish different mutants in more detail by their kinetic parameters. In order to study the kinetics of enzymatic activity in vivo, it is necessary to isolate the cells to prevent substrates and products from diffusing away. Microfluidics is an attractive technology to carry out these assays in a high-throughput way. Different methods have been used to isolate single
cells and quantify enzymatic activities [7, 8]; however, these methods were not designed to suit high-throughput analysis. In this work we studied the kinetics of induced protein expression of monomeric red fluorescent protein (mRFP1) [2] and the enzymatic activity of alkaline phosphatase (AP); the two target proteins were coexpressed in Escherichia coli cells while encapsulated in microdroplets.
2 Theory

Microdroplets provide a promising methodology to study the kinetics of enzymatic activities. A microfluidic device was constructed using multilayer soft lithography [9], in which aqueous droplets were formed in oil [10], stored in wells [11] and monitored with time-series measurements. After the droplets were formed, they flowed through a flow channel, which has a rectangular cross section of typically 100 μm width and 30 μm height. The device was designed such that the channels flatten and elongate the droplets. The droplets can be stored in wells, which are located above the flow channels, with typical dimensions of 50 μm width and 40 μm depth, spaced 15 μm apart. A droplet in a well takes a more spherical shape, minimizing its surface area and thus its surface tension energy, and the resulting force acts to drive the droplet into the well and store it there. In addition, the bottom of the wells was constructed from a thin PDMS membrane (15 μm thick) that is slightly permeable to water [12]. The other side of the membrane contained a reservoir, which produced a chemical potential gradient between the droplet stored in the well and the reservoir. To maintain the droplet size, a solution which is osmotically equivalent to the droplet must be flowed through the reservoir; otherwise the volume shrinks significantly within a few minutes. The ability to store droplets and to maintain their concentration allowed us to study the kinetics of protein expression in cells.
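A quick calculation with the dimensions quoted above shows why surface tension drives droplets into the wells: a stored droplet of about 30 pl (the volume quoted in the Conclusions) would relax to a sphere of roughly 39 μm diameter, which fits the 50 μm wide, 40 μm deep wells but must flatten, and thereby increase its surface area, inside the 30 μm high channel. A minimal sketch of this arithmetic:

```python
import math

V_PL = 30.0                      # droplet volume in picoliters (from the text)
V_UM3 = V_PL * 1.0e3             # 1 pl = 1000 um^3

# Radius of the sphere the droplet relaxes to inside a well
r = (3.0 * V_UM3 / (4.0 * math.pi)) ** (1.0 / 3.0)
print(f"spherical diameter: {2*r:.1f} um")   # ~38.6 um

# Surface area when spherical vs. flattened to the 30-um channel height,
# modelling the flattened droplet crudely as a disc-shaped pancake.
h = 30.0                                      # channel height, um
A_sphere = 4.0 * math.pi * r**2
r_disc = math.sqrt(V_UM3 / (math.pi * h))     # pancake radius at height h
A_disc = 2.0 * math.pi * r_disc**2 + 2.0 * math.pi * r_disc * h
print(f"area spherical: {A_sphere:.0f} um^2, flattened: {A_disc:.0f} um^2")
```

Since the flattened shape has the larger area, a droplet lowers its surface energy by rounding up inside a well, which is the storage force described above.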
3 Experimental

As described in Fig. 1, droplets were generated at the nozzle of the microfluidic device, encapsulating transformed Escherichia coli cells carrying a single plasmid DNA that contained two promoter expression vectors, together with a mixture of a fluorogenic substrate (fluorescein diphosphate) and inducer (isopropyl β-D-thiogalactopyranoside, IPTG). Monolithic valves embedded in the microfluidic device [9] enabled droplets to flow into the storage region, in which the wells were constructed as shown in Fig. 1-(b). After storing droplets in the wells by closing the valves, water flowed through the reservoir to maintain the droplet concentration. IPTG triggered the gene expression to produce proteins in the cells while encapsulated in the microdroplets. The cell concentration was adjusted to enclose a few cells in each droplet so that we could address the production of proteins in specific individual cells. The fluorescence emitted from the stored droplets at two different wavelengths was measured periodically (typically every 20 min) to report the accumulation of fluorescent products resulting from the enzymatic reaction of AP and the expression levels of mRFP1, using an EMCCD camera (iXon, Andor Tech.) coupled to an inverted
microscope (IX71, Olympus). In order to determine the dynamic range of the expression level and the activity of the enzyme, a coexpression plasmid of wild-type (WT) AP-mRFP1 was prepared.

Fig. 1. Schematic processes of protein expression in vivo. (a) Droplet formation at the nozzle. One syringe contains cells, the other contains inducer and substrate. (b) A photograph showing stored droplets in the device. Squares are wells and circles are stored droplets surrounded by oil. (c) Enzymatic reactions occurring in a stored droplet. AP hydrolyzes the substrate into a fluorescent product. mRFP1 is directly detectable.
4 Results and Discussion
As shown in Fig. 2-(a), fluorescein concentrations in droplets where AP was expressed increased slowly for the first few hours, which is mainly due to auto-hydrolysis of the substrate. Five hours after the droplets were formed, the concentrations began rising significantly for the next 4~5 hours and reached a plateau at around 10 hours. Interestingly, a time lag was observed between droplet formation and the observation of fluorescence. It could be due to the time taken for enzyme synthesis, export to
the periplasm and protein maturation, such as disulfide bond formation and dimerization. Each droplet has a different time lag of between 4 and 6 hours. mRFP1 was detected in cells at around 2 hours after droplet formation. The amount of mRFP1 increased significantly over the next few hours and plateaued after 10 hours, as shown in Fig. 2-(b). The time lag was a few hours shorter than that observed for the enzyme activity. As the two genes of AP and mRFP1 were constructed on the same plasmid, the gene expression of the two target proteins was simultaneously triggered; we therefore attributed the earlier appearance of mRFP1 to the fact that it was neither exported to the periplasm, nor did it require diffusion of the fluorogenic substrate through cell membranes to generate a fluorescent product from the enzymatic reaction.

Fig. 2. Each symbol represents the reaction occurring in a different droplet with identical aqueous conditions; each droplet contains at most a few cells. The solid line is an average of intensities at each measurement. The insets show the time lag in detail. (a) Fluorescence intensities showing the accumulation of fluorescein produced by AP. The time lag is about 5 hours. (b) Kinetics of mRFP1 expression. The inset shows that mRFP1 is observable at 2 hours.
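The behaviour seen in Fig. 2 (a lag of a few hours, a steep rise, then a plateau near 10 h) can be summarized per droplet by fitting a delayed sigmoid to each fluorescence trace. A minimal sketch with synthetic data standing in for the measured intensities; the functional form, parameter names and numerical values are illustrative choices, not the analysis actually used in this work:

```python
import numpy as np
from scipy.optimize import curve_fit

def delayed_sigmoid(t, base, amp, t_half, tau):
    """Baseline + logistic rise: captures the lag, steep rise and plateau."""
    return base + amp / (1.0 + np.exp(-(t - t_half) / tau))

# Synthetic stand-in for one droplet's fluorescence trace (a.u. vs hours)
t = np.linspace(0, 20, 60)
rng = np.random.default_rng(0)
y = delayed_sigmoid(t, 1.0, 10.0, 7.0, 1.2) + rng.normal(0, 0.3, t.size)

popt, _ = curve_fit(delayed_sigmoid, t, y, p0=(1.0, 8.0, 6.0, 1.0))
base, amp, t_half, tau = popt
lag = t_half - 2.0 * tau   # one convention: rise becomes visible ~2*tau early
print(f"half-rise at {t_half:.1f} h, apparent lag ~{lag:.1f} h")
```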
5 Conclusions

We have manufactured a class of microfluidic devices designed to manipulate droplets and to measure protein expression and enzymatic activity. The device consists of a solute formulation stage, a droplet creation stage, and a droplet storage stage. The storage compartment contains a semi-permeable membrane through which water passes, but not proteins, salts or macromolecules. The water content of the stored droplet is maintained by filling a reservoir located across the membrane from the storage well with a salt solution of a specific molarity. We built microfluidic devices able to store 30 pl drops in wells at a density of 200/cm² with an independent dialysis reservoir. We have studied the kinetics of protein expression in individual cells using this microfluidic device. We are currently extending this approach to measure the enzymatic activities of various mutants of AP and to quantify the enzyme expression simultaneously, through which we can identify specific mutations of the enzyme that give promising activity.

Acknowledgments. This research was supported by the RCUK Basic Technology Programme. JUS was supported by Marie Curie Actions (FP7-PEOPLE-2007-4-2-IIF).
References
[1] M B Martinez et al. (1992) Biochemistry 31:11500
[2] R E Campbell et al. (2002) Proceedings of the National Academy of Sciences of the United States of America 99:7877
[3] S K Sia and G M Whitesides (2003) Electrophoresis 24:3563
[4] T M Squires and S R Quake (2005) Reviews of Modern Physics 77:977
[5] B Zheng, C J Gerdts, and R F Ismagilov (2005) Current Opinion in Structural Biology 15:548
[6] A D Griffiths and D S Tawfik (2006) Trends in Biotechnology 24:395
[7] L Cai, N Friedman, and X S Xie (2006) Nature 440:358
[8] M Y He et al. (2005) Analytical Chemistry 77:1539
[9] M A Unger et al. (2000) Science 288:113
[10] S L Anna, N Bontoux, and H A Stone (2003) Applied Physics Letters 82:364
[11] J U Shim et al. (2007) Journal of the American Chemical Society 129:8825
[12] J M Watson and M G Baron (1996) Journal of Membrane Science 110:47
Finite Volume Method for a Determination of Mass Gap in the Two Dimensional O(n) σ-Model
Dong-Shin Shin
Max-Planck-Institute for Physics, Munich, Germany
[email protected]
Abstract. We calculate the finite volume mass gap M(L) at the 3-loop level in the non-linear O(n) σ-model in two dimensions in small volumes. In order to determine the physical mass gap m in the limit of infinite volume, we first apply our results in the O(∞) model, where the mass gap can be calculated exactly. There, we find good agreement between our perturbative determination of m and the analytic value. Motivated by the successful applicability of our method in the O(∞) model, we make use of this method to determine the physical mass gap m in the O(3) model by extrapolating the finite volume mass gap M(L) to infinite volume.
1 Introduction

The non-linear O(n) σ-model, which describes an n-component spin field, has found many applications in theoretical physics. In condensed matter physics it has been investigated to study, e.g., ferromagnets. In elementary particle physics, on the other hand, the model in two dimensions shares many common properties with QCD: on the quantum level it is, like QCD, renormalizable [4], asymptotically free according to perturbation theory [5, 6, 7], and thought to have a mass gap m. In addition, the O(3) model has instanton solutions. Further, the σ-model has an additional advantage: because of its low dimensionality it is simple to simulate numerically. Therefore, in many cases the model can be used for tests of new ideas in lattice theories. One of the most interesting properties of the σ-model is the existence of an infinite set of conserved non-local quantum charges [8, 9, 10, 11]. From these, one can exclude particle production and also derive the "factorization equations" directly. Using these properties, one can determine the S-matrix exactly up to the CDD ambiguity [12], provided the particle spectrum of the model is known [13, 14]. Furthermore, the CDD ambiguity can be restricted with the help of the assumption that there are no bound states. The absence of bound states was shown in the 1/n-expansion to order 1/n [13], and the exact S-matrix constructed in this way was confirmed by the same method to second order in 1/n [15]. Since there is only one scale in the model, one can, in principle, express all physical quantities with the help of this scale. In particular, the mass gap of the theory is proportional to the Λ-parameter. This situation is similar to what happens in QCD with massless quarks, where the Λ-parameter sets the scale for scale violations, for example, in deep inelastic scattering. Using the exact S-matrix, a few years ago Hasenfratz, Maggiore and Niedermayer succeeded in an analytic determination of the ratio m/Λ in the σ-model [2, 3]. In this
derivation, however, various assumptions were made, including the thermodynamic Bethe ansatz, which are plausible but for which rigorous proofs are still lacking. An independent calculation of m/Λ is therefore desirable. By introducing a mass shift δ0 = [M(L) − m]/m [20], Lüscher made the first, lowest-order estimation of the ratio m/Λ in the O(3) model [19, 20], whose value was improved by Floratos and Petcher at the 2-loop level [21]. Although the correction to m/Λ by Floratos and Petcher approaches the analytic value of Hasenfratz et al. quite well, the perturbative determination still deviates noticeably from the analytic value. In order to determine m/Λ more precisely, we decided to evaluate m/Λ to one loop order higher. This requires the calculation of the finite volume mass gap M(L) at the 3-loop level. Our work is also interesting in another respect. It is generally accepted that the O(n) σ-model is asymptotically free. This assumption was, however, questioned by Patrascioiu and Seiler [16, 17, 18]. Since the determination of m/Λ by Hasenfratz et al. required the validity of asymptotic freedom, the confirmation of their result through our independent determination will, at the same time, support the correctness of asymptotic freedom. The work is arranged as follows. In the next chapter, we present our 3-loop calculation of the finite volume mass gap M(L). By computing the spin-spin correlation function, we first evaluate the mass gap on a lattice. We then take the continuum limit and finally convert the result into the MS-scheme. In chapter 3, we apply the result to the determination of the mass gap m in units of the Λ-parameter in the O(3) model. For this purpose, we introduce the mass shift δ0 = [M(L) − m]/m, which is known for L → ∞, and match it with the perturbative M(L) [19, 20, 21]. Finally, the Feynman diagrams and some technical details are presented in the appendices.
2 Calculation of Mass Gap at 3-Loop Level

2.1 Mass Gap on a Lattice

We consider an n-component spin field qi(x) (i = 1, · · · , n) with unit length q(x)² = 1 on a two-dimensional finite lattice¹ where T and L are fixed integers with T, L ≥ 2. We work with periodic boundary conditions in the space direction and, for reasons which will become clear below, free boundary conditions in the time direction. The action is given by
where f0 denotes the bare coupling constant. The forward and backward lattice derivatives are defined by

∂μ q(x) = q(x + μ̂) − q(x),        ∂μ* q(x) = q(x) − q(x − μ̂);
¹ We set the lattice constant a to 1.
μ̂ is the unit vector in the positive μ-direction (μ = 0, 1). The expectation values of observables can be calculated by the formula
where the partition function Z is normalized such that ⟨1⟩ = 1. Concerning the energy spectrum, there is the following possibility for the calculation of the mass gap M(L) in finite volume for f0 → 0. We calculate the expectation value of the spatially averaged spin-spin correlation function in the limit T → ∞ (τ > 0):
If the bare coupling f0 is fixed, the correlation function C(τ) converges to the vacuum expectation value of the 2-point function of spin field operators at large T, independently of whether one chooses free or periodic boundary conditions in the time direction. In perturbation theory, however, we first expand in powers of the coupling f0 and then let T go to infinity. Therefore, we cannot in general expect such converging behaviour, and a proper choice of boundary conditions is required to obtain the desired vacuum expectation value. With free boundary conditions, we are projecting onto states which are O(n) invariant at large times. The energies ε(f0) of these states, on the other hand, have the property
where ε0(f0) denotes the ground state energy. Except for the ground state, all these states are therefore exponentially suppressed at large T, and we arrive at the desired vacuum expectation value even though we first expand in powers of f0 [1]. After T has been taken to infinity, so that C(τ) has been reduced to the vacuum expectation value of the product of two spin field operators, only the vector intermediate states contribute. The energy ε1(f0) of the first excited state of this system has the property
For the energies of the other, higher excited states, eq.(2.5) is valid, and their contribution to the spin correlation function hence vanishes exponentially at large τ. From this consideration, the mass gap M(L), which is defined by the l.h.s. of eq.(2.6), can be calculated by
If one expands C(τ) in perturbation theory for f0 → 0, the mass gap has the general form
Δ(ν) can thus be determined by calculating the 2-point function [eq.(2.4)] with the mentioned boundary conditions in perturbation theory.
The calculations up to the third order in f0 were already done by Lüscher and Weisz [22]. Their results read
where the coefficients are given by
with
γ denotes the Euler constant (γ = 0.577216 · · ·). In order to determine the mass gap at the fourth order, we first evaluate Δ(4) on finite lattices and then extrapolate to the continuum limit, i.e., L/a → ∞. The 2- and 3-loop diagrams contributing to Δ(4) are illustrated in the figures of appendix A, owing to the very large number of diagrams. In carrying out the Wick contractions for them, we used the symbolic language MATHEMATICA. We calculated the generated terms numerically. In the numerical computations, we take T and τ large, but finite. The corrections to the limit T, τ → ∞ are here exponentially suppressed, of the order of e^{−(4π/L)·2τ} and e^{−(4π/L)·(T−τ)}. We therefore achieve the best approximation to the limit T, τ → ∞ by keeping T ≃ 3τ. The numerical work, however, turned out to be very complicated due to problems caused by rounding errors and a running time which increases very quickly with L (of the order of L⁶). In addition, problems arise from the fact that at the 3-loop level at which we are working there are extremely many terms to treat. One therefore needs a very efficient computer program, which requires, among other things, a free propagator that runs fast without significant loss of precision. In this way, we succeeded in evaluating
all diagrams up to L = 20 with good enough precision and reasonable running time (for L = 20 we needed around 7 days of CPU time on an IBM RISC 6000/32H), so that we could extrapolate to the continuum limit to obtain our desired results. With regard to the extrapolation to the continuum limit, we note that in this limit Δ(4) has the general form
up to terms of the order L⁻²(ln L)³, where the coefficients Δ0(4), · · · , Δ3(4) are independent of L. All of these coefficients except Δ0(4) can be derived with the help of the renormalization group equation and the results from the calculations of the mass gap at lower orders:
In order to determine the unknown coefficient Δ0(4), we decompose it into n-independent constants:
The coefficients s0, · · · , s3 can then be determined by inserting the Δ(4) evaluated for different L in eq.(2.21) and extrapolating to the continuum limit. The extrapolation of sk (k = 0, · · · , 3) to L = ∞ can be done very efficiently with the help of the procedure by Lüscher and Weisz described in detail in [23]. We note that in our case the constants sk have corrections of the form
Through the extrapolation in the region 5 ≤ L ≤ 20, we obtain
At this point, we would like to emphasize that for all four constants sk the limit L → ∞ exists. This is, of course, only the case if the Δ(4) calculated by us contains the terms diverging logarithmically with L, so that they cancel exactly with the other divergent factors of eq.(2.21) in a non-trivial way. This correct logarithmic behaviour of Δ(4) is a strong consistency check on our evaluations of the diagrams.

2.2 Conversion of the Mass Gap into the MS-Scheme

The β-function in the MS-scheme of dimensional regularization is defined by
where f denotes the renormalized coupling and μ the normalization mass. The first two coefficients in (2.31) are scheme independent:
The remaining coefficients are known to four loops [24, 25, 26, 27]:
with
where the constant υ is given by υ ≈ 1.2020569. The corresponding Λ-parameter is
The function λ(f) is well-defined at f = 0 and can therefore be expanded as a power series in f. Up to 4-loop, we find
On the other hand, the β-function on the lattice is defined by²

² In this section, we introduce the lattice constant a.
The coefficients here are also known to four loops [28, 29, 30]:
with
where the difficult 4-loop computation was recently performed by Caracciolo and Pelissetto [30]. The Λ-parameter on the lattice, ΛL, can be obtained from ΛMS of eq.(2.38) through the replacements μ → a⁻¹, f → f0 and β(x) → β̂(x). The two Λ-parameters are related by the formula:
From this follows the relation between the coupling constants in the two schemes. Up to order O(f⁴), we find
Finally, we use this relation to express the mass gap [eqs. (2.9)-(2.11) and (2.21)] in terms of the renormalized coupling in the MS-scheme. After a lengthy but straightforward calculation we get
The expressions for the coefficients ρ1, · · · , ρ6 of κ(3) are rather long; we therefore write them in appendix B. Here, the coefficients κ(1) and κ(2) are the result of Lüscher and Weisz [22]. Using dimensional regularization, κ(2) was first calculated by Floratos and Petcher [21] with less numerical precision. Our result for κ(3) is the extension of their 1- and 2-loop computations to 3-loop. With dimensional regularization, κ(ν) = 0 for n = 2, since we formally have a free theory in this case. κ(1) and κ(2) already satisfy this condition. We checked that κ(3) is also zero within the errors, which provides a further check on the correctness of our final results of the lattice calculations, s0, s1, s2 and s3 [eqs.(2.27)-(2.30)]. This is, however, not only a check on our computations, but also on the 4-loop coefficient of the β-function on the lattice, b̂4, which entered our calculations through the conversion of the lattice result into the MS-scheme.

2.3 Expansion in z

In view of the evaluation of the finite volume mass gap M(L) in the infinite volume limit, we introduce the variable z = M(L)L and express the ratio M(L)/ΛMS̄ in terms of this dimensionless and renormalization group invariant parameter in perturbation theory for z → 0, as suggested by Lüscher [19]. Here, ΛMS̄ is related to ΛMS of eq.(2.38) through
For that purpose, we first invert the f-expansion of eq.(2.56) into a z-expansion: where
The insertion of this series in eq.(2.38), together with eq.(2.61), yields finally
with
The expressions for the constants α1, · · · , α4 are given in appendix B. We note that the coefficients aν in eq.(2.67) are independent of μ and L, since M(L), ΛMS̄ and z are renormalization group invariants. In the above result, the leading term and the first coefficient a1 were calculated in ref. [22], and our determination of a2 is the 3-loop correction to their calculations.
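The inversion of the f-expansion into the z-expansion performed above is an instance of power-series reversion, which can be automated order by order. A minimal sketch with sympy; the coefficients c1, c2, c3 below are placeholders, not the actual coefficients of eq.(2.56):

```python
import sympy as sp

f, z = sp.symbols('f z')
c1, c2, c3 = sp.symbols('c1 c2 c3')

# Placeholder expansion z(f) = f + c1*f**2 + c2*f**3 + c3*f**4 + O(f**5);
# the true coefficients come from eq. (2.56) and are not reproduced here.
z_of_f = f + c1*f**2 + c2*f**3 + c3*f**4

# Invert order by order: ansatz f(z) = z + d1*z**2 + d2*z**3 + d3*z**4
d1, d2, d3 = sp.symbols('d1 d2 d3')
f_of_z = z + d1*z**2 + d2*z**3 + d3*z**4

# Require z(f(z)) = z through O(z**4) and solve for d1, d2, d3
composed = sp.series(z_of_f.subs(f, f_of_z), z, 0, 5).removeO()
eqs = [sp.expand(composed - z).coeff(z, k) for k in (2, 3, 4)]
sol = sp.solve(eqs, (d1, d2, d3), dict=True)[0]
print(sol)  # d1 = -c1, d2 = 2*c1**2 - c2, d3 = -5*c1**3 + 5*c1*c2 - c3
```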
3 Results and Discussions

In this chapter, we determine the physical mass gap m in units of the Λ-parameter by applying our results of the last chapter. For this, we estimate the ratio m/ΛMS̄ by matching the behaviour of the finite volume mass gap M(L), evaluated at small L, with that of the mass shift δ0 = [M(L) − m]/m, which is known at large L. We discuss this method in detail and give the results. After that, it is interesting to compare them with the analytic determination performed by Hasenfratz, Maggiore and Niedermayer [2, 3]. Their result in the general O(n) model has the following simple form:
In our work, we will need the value in the O(3) model:
3.1 Determination of m/ΛMS̄ by Applying δ0

Our aim is to determine the mass gap m in infinite volume by applying our result for the mass gap M(L) in finite volume [eq.(2.67)]. For this purpose, we introduce two quantities C0(z) and c0 by (3.3) (3.4) where m is obtained from M(L) by m = limL→∞ M(L). The problem is then the determination of c0, which may be done by attempting to extrapolate C0(z) to the infinite volume limit, i.e. to z → ∞. Inspired by the successful application of the method in the O(n) model at n = ∞, where one can solve the given problem exactly, Lüscher [19] tried, with his 1-loop calculation of C0(z), to estimate c0 in the O(3) model as the value of C0(z) at some intermediate region of z (at around 3 < z < 4), with the hope that in this region the perturbative C0(z) would be a good approximation for the infinite volume. Here, we extend his 1-loop calculation up to 3-loop; the result is plotted in Fig. 1 as a function of z in the O(3) model. The lowest curve is the 1-loop result by Lüscher, while the middle one comes from the 2-loop computation by Floratos and Petcher [21]. Finally, the uppermost curve shows our 3-loop result. The
expected behavior of the curves, supported by the study in the O(∞) model, is that they rapidly decrease at small z and then quickly become flat. From the figure we see, however, that it is difficult to make a clear estimation for C0(∞) in this way.

Fig. 1. C0 as a function of z in the O(3) model. The lowest curve contains only the 1-loop contribution, the middle one the contributions up to 2-loop and the uppermost curve up to 3-loop.

For a better estimation of c0, we make use of the mass shift defined by

δ0 = [M(L) − m]/m,        (3.5)
as done in ref. [21]. δ0 was first introduced by Lüscher [20] and can be calculated at large ζ = mL exactly, up to an exponentially small correction term, if the elastic forward scattering amplitude is known.³ In the O(n) σ-model, the exact scattering matrix was determined by the brothers Zamolodchikov [13, 14]. In particular, the forward scattering amplitude is known and hence one can calculate δ0 in this model at large ζ. By using the definition of δ0 together with eq.(3.4), we obtain the following relation between C0(z) and c0: (3.6) From this, a formula expressing c0 as a function of ζ follows: (3.7)

Actually, c0 is independent of ζ. Therefore, the relation between C0[z(ζ)] and 1 + δ0(ζ) must be valid for all ζ if C0 and δ0 are known exactly. However, we know the two quantities only approximately: C0 for small z and δ0, on the other hand, for large ζ. Although in this approximate relation c0 is in general dependent on ζ, there may exist the possibility that one finds somewhere an overlapping region where both approximations are good and thus c0 becomes independent of ζ. This idea works very well in the model at n = ∞, as shown in ref. [21]. We therefore apply it to the determination of c0 in the O(n) model, explicitly in the O(3) model. In the O(3) model, δ0 was calculated by Lüscher [20]: (3.8) where κ is a constant which is not smaller than √(3/2) and may become as large as 3. If we now insert this quantity together with C0(z) at n = 3 in eq.(3.7), we obtain a functional relation of c0 in terms of ζ, which is illustrated in Fig. 2.

Fig. 2. C0 as a function of ζ in the O(3) model. The lowest curve contains for C0(z) only the 1-loop contribution, the middle one the contributions up to 2-loop and the uppermost curve up to 3-loop.

³ ζ is related to z by z = (1 + δ0)ζ, so that z ≃ ζ at large ζ.
The 1-loop result displays a region where c0 ≃ 1.6. This value was Lüscher's earlier estimation by this method [19, 20]. There, Lüscher gave an optimistic error of 20%, which he apparently underestimated. The 2-loop result corrects this by about 30% to c0 ≃ 2.15. This is in agreement with the estimation by Floratos and Petcher, who give c0 ≃ 2.1 [21]. Finally, we see from our 3-loop result that the curve rises rather monotonically instead of showing a wider flat area. If this method worked well, we would expect the flat area at 3-loop to become wider than at 2-loop. We attribute the poor applicability of this method to the fact that the convergence radius of C0(z) is apparently too small. Nevertheless, we conservatively estimate the 3-loop approximation to c0 by c0 ≃ 2.3. We note that it is very difficult to give a systematic error on our estimation. Our determination is to be compared with that by Hasenfratz et al., which is c0 = 2.94304 · · · [see eq.(3.2)]. The value of c0 ≃ 2.3 at 3-loop still deviates noticeably from that of Hasenfratz et al. One need not, however, be disturbed because, from the convergence of the 1-, 2- and 3-loop approximations, one sees a definite indication that the results at higher orders would arrive at that of Hasenfratz et al.
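The matching procedure just described is straightforward to automate: scan ζ, evaluate c0(ζ) = C0[z(ζ)]/(1 + δ0(ζ)) with z(ζ) = (1 + δ0(ζ))ζ (one natural reading of eqs.(3.4)-(3.7) together with footnote 3), and look for a plateau. In the sketch below both input functions are placeholders with invented coefficients, since the explicit series are not reproduced in this text:

```python
import numpy as np

def C0(z):
    """Placeholder for the perturbative 3-loop C0(z); the real series
    coefficients are given in the paper and are not reproduced here."""
    return 2.3 + 0.8 / (1.0 + z)        # illustrative shape only

def delta0(zeta):
    """Placeholder for the large-zeta mass shift (exponentially small)."""
    return 3.0 * np.exp(-zeta) / zeta   # illustrative form only

zeta = np.linspace(2.0, 8.0, 200)
d = delta0(zeta)
c0 = C0((1.0 + d) * zeta) / (1.0 + d)   # c0(zeta) from the matching relation

i = np.argmin(np.abs(np.gradient(c0, zeta)))  # flattest point of the curve
print(f"plateau estimate: c0 ~ {c0[i]:.2f} near zeta = {zeta[i]:.1f}")
```

With the true 3-loop C0(z) and Lüscher's δ0(ζ), the flatness (or absence) of such a plateau is exactly what distinguishes the 2-loop estimate c0 ≃ 2.1 from the 3-loop behaviour discussed above.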
4 Conclusion

From our determination of the mass gap m, we conclude that in the O(3) model our results support the correctness of those by Hasenfratz et al.

Acknowledgments. I thank Peter Weisz for suggesting this work and for many useful discussions. He also read through this manuscript, which is gratefully acknowledged as well.
References
[1] M Lüscher, P Weisz and U Wolff (1991) Nucl Phys B359:221
[2] P Hasenfratz, M Maggiore, and F Niedermayer (1990) Phys Lett B245:522
[3] P Hasenfratz and F Niedermayer (1990) Phys Lett B245:529
[4] E Brézin, J Zinn-Justin, and JC Le Guillou (1976) Phys Rev D14:2615
[5] AM Polyakov (1975) Phys Lett B59:79
[6] E Brézin and J Zinn-Justin (1976) Phys Rev B14:3110
[7] WA Bardeen, BW Lee, and RE Shrock (1976) Phys Rev D14:985
[8] M Lüscher (1978) Nucl Phys B135:1
[9] M Lüscher (1986) Addendum to ref. [8], unpublished notes
[10] D Buchholz and JT Łopuszański (1979) Lett Math Phys 3:175
[11] D Buchholz and JT Łopuszański (1986) Nucl Phys B263:155
[12] L Castillejo, RH Dalitz, and FJ Dyson (1956) Phys Rev 101:453
[13] AB Zamolodchikov and AB Zamolodchikov (1979) Ann Phys (NY) 120:253
[14] AB Zamolodchikov and AB Zamolodchikov (1978) Nucl Phys B133:525
[15] B Berg, M Karowski, V Kurak, and P Weisz (1978) Phys Lett B76:502
[16] A Patrascioiu and E Seiler. The difference between the abelian and non-abelian models: fact and fancy, AZPH-TH/91-58 and MPI-Ph/91-88
[17] A Patrascioiu and E Seiler (1995) Phys Rev Lett 74:1920
[18] A Patrascioiu and E Seiler (1995) Phys Rev Lett 74:1924
[19] M Lüscher (1982) Phys Lett B118:391
[20] M Lüscher (1984) On a relation between finite size effects and elastic scattering processes. Lecture given at Cargèse (1983), in Progress in gauge field theory, ed. G. 't Hooft et al. (Plenum, New York)
[21] E Floratos and D Petcher (1985) Nucl Phys B252:689
[22] M Lüscher and P Weisz, unpublished notes
[23] M Lüscher and P Weisz (1986) Nucl Phys B266:309
[24] E Brézin and S Hikami (1976) J Phys A11:1141
[25] S Hikami (1981) Phys Lett B98:208
[26] W Bernreuther and FJ Wegner (1986) Phys Rev Lett 57:1383
[27] FJ Wegner (1989) Nucl Phys B316:663
[28] M Falcioni and A Treves (1986) Nucl Phys B265[FS15]:671
[29] P Weisz, unpublished notes, cited in ref. [1]
[30] S Caracciolo and A Pelissetto (1995) Nucl Phys B455[FS]:619
[31] M Hasenbusch, unpublished
[32] K Symanzik (1980) Cutoff dependence in lattice φ⁴₄ theory, in Recent developments in gauge theories (Cargèse 1979), ed. G. 't Hooft et al. (Plenum, New York)
A Feynman Diagrams

We list here the 2- and 3-loop Feynman diagrams contributing to the mass gap in fourth order. In the diagrams, the internal dots denote the vertices, while the crosses denote those coming from the "π² insertion".
Fig. 3. 2-loop diagrams
Fig. 4. 3-loop diagrams
B Expressions of Coefficients

In this appendix, we show the expressions of the coefficients defined in the main text.

B.1 Expressions of the Coefficients Appearing in eq.(2.59)
B.2 Expressions of the Coefficients Appearing in eq.(2.69)
Understanding the NO-Sensing Mechanism at Molecular Level
Byung-Kuk Yoo1, Isabelle Lamarre1, Jean-Louis Martin1, Colin R. Andrew2, Pierre Nioche3, and Michel Negrerie1
1 Laboratory for Optics and Biosciences, INSERM U696, CNRS UMR 7645, Ecole Polytechnique, 91128 Palaiseau Cedex, France
[email protected]
2 Department of Chemistry and Biochemistry, Eastern Oregon University, La Grande, Oregon 97850, USA
3 Laboratoire de Pharmacologie, Toxicologie et Signalisation Cellulaire, INSERM, UMR-S 747, Université Paris V, 75006 Paris, France
[email protected]
Abstract. We present here how ultrafast time-resolved spectroscopy improves our understanding of a new class of proteins: nitric oxide sensors. Nitric oxide (NO) is a small, short-lived, and highly reactive gaseous molecule which acts as a second messenger in several physiological systems. NO-sensors are proteins which bind NO and are able to translate this binding into a signal, in mammalian cells as well as in bacteria. We have studied NO-sensors with the goal of understanding the activation and deactivation mechanism of the human NO-receptor, the enzyme soluble guanylate cyclase (sGC), which is involved in communication between cells. Some bacterial sensors of NO (SONO) have structural homologies and common properties with sGC, but also differences which make them valuable systems for obtaining structural and physiological information on sGC. To understand how NO-sensors interact with NO and control its reactivity, it is essential to probe the dynamics and interactions when NO is present within the protein core and to identify the associated structural changes. For this purpose, we have used time-resolved absorption spectroscopy in the picosecond (10⁻¹² s) time domain. NO can be photodissociated from the heme by a femtosecond laser pulse. Time-resolved transient absorption spectra of the NO-sensors were recorded to follow the NO-protein interaction. In the case of cytochrome c′, we identified the formation of 5-coordinate (5c)-NO and 5c-His hemes from the 4c-heme and demonstrated that the proximal histidine precludes NO rebinding at the proximal site. In bacteria, the adaptation of SONO to temperature changes is not achieved by a simple temperature-dependent NO binding equilibrium, but by a change of the proportion between 5c-NO and 6c-NO species. This amplifies the response to temperature changes, since fast NO rebinding is a property only of the 5c-NO species, which leads to a 4c-heme after dissociation. Our results on NO dynamics provide a model for the regulation of the NO-sensing function at the molecular level.
1 Introduction

In mammals, NO acts as a second messenger in several physiological systems [1, 2], is involved in the production of several cytotoxic chemical species and nitrosative stress, and appears as an intermediate in the denitrification process [3, 4]. Cells have thus developed numerous heme proteins whose diverse functions involve NO binding and/or release. Our goal is to study NO-sensors with the aim of understanding the activation
and deactivation mechanism of the human NO-receptor, the enzyme soluble guanylate cyclase (sGC), which is involved in communication between cells. In some bacteria, heme sensors were discovered with a femtomolar affinity for NO, while cytochromes c' able to bind NO are found in many nitrogen-fixing, denitrifying and photosynthetic bacteria [5, 6]. Contrary to sGC, the bond formation with NO of Alcaligenes xylosoxidans cytochrome c' (AXCP), in which NO clearly replaces the histidine ligand to give a five-coordinate (5c) species, is a two-step and concentration-dependent mechanism. In the presence of NO, sGC is activated to convert GTP to cyclic GMP, and cGMP has various physiological roles in the cardiovascular and neural systems [7]. sGC is a heterodimeric enzyme composed of a β-subunit with a heme, where NO binding takes place, and an α-subunit, which is involved in the catalytic process. A recent breakthrough is the discovery and crystal structure determination of prokaryotic homologues of sGC. Nioche et al. [5] termed these proteins SONO (for Sensor Of Nitric Oxide). Two sensors of NO are from the strict anaerobe thermophile Thermoanaerobacter tengcongensis (Tt) and the filamentous cyanobacterium Nostoc punctiforme (Np), respectively. These sGC-related bacterial NO-sensors have structural homologies with sGC and form a mixture of both 5c-NO and 6c-NO species according to temperature, which makes them very valuable model systems for obtaining structural and physiological information on human sGC. NO is a small, short-lived, and highly reactive gaseous molecule. NO is a free radical and acts as a signal transmitter in physiological pathways because it readily and quickly diffuses through plasma membranes and reacts with the heme iron [8]. Some heme proteins are involved in NO binding to the heme iron, and their regulation closely depends on the dynamic behavior of the ligand within the protein core. How heme proteins interact with NO and control its reactivity is ultimately related to their structure and the heme surroundings. To understand their mechanisms, it is essential to probe the dynamics and interactions when NO is present within the protein core but not bound to the iron. This can be achieved by dissociating NO with a femtosecond laser pulse. The dissociation of the ligand in heme proteins generates a nonequilibrium molecular system that evolves over periods of time extending from femtoseconds to milliseconds. After its dissociation, the diatomic ligand may either rebind to the iron of the protein or diffuse into the solution. Subsequent to photodissociation of a ligand, geminate rebinding is observed on the picosecond to nanosecond time scales [9]. In the case of NO, the geminate recombination kinetics is very sensitive to factors altering the relaxation dynamics of the protein. Under these conditions, the concerted use of biochemical techniques and ultrafast spectroscopy provides a powerful tool to identify the local protein motions that govern the binding reaction.
2 Experimental Methods

Preparation of Samples for Spectroscopy. sGC was purified from bovine lung using a previously described method [10]. Purified AXCP and bacterial NO-sensors were prepared as in refs. [5, 11]. The samples were put in a 1-mm optical path length quartz cell and degassed. The SONO proteins and sGC are already purified in the reduced state, but AXCP needs to be reduced by sodium ascorbate (5 mM final concentration). For preparing NO-liganded proteins, gas phase NO diluted in N2 was directly introduced into the
spectroscopic cell (~20 µM for 10% NO in the aqueous phase), and 10 min were allowed for equilibration between the gas phase and the solution. Equilibrium spectra were recorded at each step to monitor the evolution of ligation. The absorbance of the sample was in the range 0.7-1 at the Soret maximum for a 1-mm path length.

Time-resolved Spectroscopy. Transient absorption measurements were performed with the pump-probe laser system previously described [9]. The photodissociation of NO was achieved with an excitation pulse at 564 nm whose duration was ~40 fs, with a repetition rate of 30 Hz. The transient absorption spectra after a variable delay between pump and probe pulses were recorded by means of a CCD detector as time-wavelength matrix data. All experiments were carried out at room temperature except for the SONO measurements. The global analysis of the data was performed on the time-wavelength matrix such that all spectral components were identified in the time range up to 5 ns. Thus, the kinetics presented here represent the evolution of the associated differential spectra, not those at a single wavelength.
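The global analysis described above amounts to a singular value decomposition of the time-wavelength matrix: each significant singular component pairs one spectral shape with one kinetic trace. A minimal sketch of this decomposition on a synthetic matrix; the band positions and the 7 ps time constant are chosen for illustration only:

```python
import numpy as np

# Synthetic time-wavelength matrix: one spectral component decaying with
# a 7 ps time constant, plus noise (shapes and values invented here).
wl = np.linspace(380.0, 480.0, 200)          # probe wavelengths, nm
t = np.linspace(0.5, 200.0, 80)              # pump-probe delays, ps

spectrum = np.exp(-((wl - 421.0) / 8.0)**2) - np.exp(-((wl - 441.0) / 8.0)**2)
kinetics = np.exp(-t / 7.0)                  # geminate rebinding, tau = 7 ps

rng = np.random.default_rng(1)
data = np.outer(kinetics, spectrum) + rng.normal(0, 0.01, (t.size, wl.size))

# SVD: rows of Vt are spectral components, columns of U their kinetics
U, s, Vt = np.linalg.svd(data, full_matrices=False)
print("singular values:", np.round(s[:4], 2))   # one dominant component
sad = s[0] * Vt[0]          # species-associated difference spectrum
trace = U[:, 0]             # associated kinetic trace
```

Fitting the leading kinetic trace with an exponential then recovers the decay time, which is how the kinetics of the associated differential spectra quoted below are obtained.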
3 Results and Discussion

sGC and Alcaligenes xylosoxidans cytochrome c' [AXCP]. Nitrosylated heme proteins generally form 6c-complexes having histidine and NO as axial ligands, but some become 5c with NO, such as sGC and AXCP. The bleaching at 392 and 394 nm can be assigned to the 5c-heme, as shown in Fig. 1-A. Although sGC and AXCP possess different sequences and tertiary structures, they share the property that the binding of NO leads to a 5c-heme. The crystal structure of AXCP has revealed that NO is bound to the proximal side of the ferrous heme, replacing the endogenous His ligand [12]. In the case of AXCP, we probed the NO dynamics and the proximal His motion and found that after NO dissociation, the structure of the proximal heme pocket confines NO close to the iron, so that an ultrafast (7 ps) and complete (~99%) geminate rebinding occurs, as shown in Fig. 1-D [13]. For sGC, the geminate rebinding of NO was found to be monoexponential and ultrafast (7.5 ps, 97%). The NO dynamics of these NO-sensors is faster than that of endothelial NO-synthase (NOS, the endogenous cellular source of NO) and myoglobin (Mb, an oxygen carrier protein), which is multiphasic and on nanosecond time scales (Fig. 1-B). The comparison of NO kinetics in NOS, "designed" for NO release, in Mb, an oxygen-storage protein, and in NO-sensors is very informative. Yet the ultrafast rebinding of NO in the latter does not imply a high NO affinity, but rather a tight control of NO dynamics. The spectrum of the 1% population of NO which does not rebind is represented by the SVD 2 component (dots; Fig. 1-C). The bleaching at 415 nm is assigned to the 4c-heme and the absorbance at 435 nm is due to the 5c-His heme. The dashed line represents the calculated 4c-heme spectrum (long-wavelength part of the SVD 1 and short-wavelength part of the SVD 2 spectra in Fig. 1-C). SVD 1 represents the rebinding of NO to the heme [τgem = 7 ps], whereas SVD 2 represents the rebinding of His to the heme [τ1 ≈ 100 ps] (Fig. 1-D, inset). The rebinding of NO to the 5c-His heme appears slower than to the 4c-heme of AXCP, with an associated spectrum different from SVD 2. This shows that SVD 2 is not due to 6c-His-NO. If NO escapes the heme pocket, His rebinds to the heme and precludes direct NO rebinding from the solution. NO must then rebind to the distal side, confirming the mechanism
of "ligand trap". This property of AXCP is more likely that of a NO transporter than of a NO reductase. The present results show that in AXCP the distal side controls the initial NO binding, whilst the proximal heme pocket controls the release of NO, with the ability of trapping and gating NO and a virtually unidirectional release of NO.

Fig. 1. A Transient spectral components from global analysis of AXCP, sGC, and Mb. B Kinetics of NO geminate rebinding obtained by global analysis for NOS, Mb, AXCP, and sGC up to 2 ns. C Normalized transient spectra obtained from SVD analysis of AXCP. D Kinetics of the SVD 1 and SVD 2 (inset) components of AXCP.

Thermoanaerobacter tengcongensis SONO [Tt-SONO]. The raw difference spectra at successive time delays after photodissociation of NO at room temperature are displayed in Fig. 2-A. Two main features are observed: 1) the maximum of induced absorption centered at about 441 nm and the minimum of bleaching at 421 nm decrease, and 2) a new bleaching at 430 nm becomes significant after 4.7 ps, resulting in a considerable shift of the isosbestic point from 431 to 443 nm. A similar shift also appears around 390 nm in the same time range. These shifts indicate that two different species are involved in the early-time kinetics. The transient spectra are different when recorded at 72°C, as shown in Fig. 2-B. Instead of the bleaching at 421 nm, a new bleaching at 401 nm appears, which is due to the transient 5c-NO bound heme species caused by NO rebinding to the 4c-heme. Moreover, the bleaching at 429 nm, which corresponds to the 6c-NO species, is clearer and more pronounced than in the data at RT. We performed the global analysis, and the absorbance decay associated with geminate rebinding was fitted with a monoexponential component and a constant (insets). The rebinding of NO to the 4c-heme appears fast (7 ps) at both temperatures, but the constant value at RT is larger than that at 72°C, which means that a larger 6c-NO proportion remains at RT. We found that the
Fig. 2. A Transient spectra at different time delays after photodissociation of NO at 18 °C and B at 72 °C for Tt-SONO, together with the normalized kinetics from the global analysis in two time windows at 18 and 72 °C (insets).
We found that the fast component of rebinding is due to the 4c-heme, which shows the very high reactivity of Fe2+ when it is only 4-coordinate, in a planar configuration; this property is used in the functioning of NO-sensors. Moreover, the affinity of NO towards Tt-SONO is strongly modulated by small changes of temperature, through a modification of the proportion between the 5c-NO and 6c-NO species. Since this bacterium has its growth optimum at ~70 °C, we can infer that these dynamic properties of its SONO protein represent an adaptation to temperature changes.

Nostoc punctiforme SONO [Np-SONO]. Raw difference spectra at successive time delays after photodissociation of NO at the given temperatures are displayed in Fig. 3A-C.
Fig. 3. A Transient spectra at different time delays after photodissociation of NO at 18 °C, B at 50 °C, and C at 72 °C for Np-SONO. D Normalized kinetics of NO rebinding associated with the first SVD spectral component in two time windows at 18 °C, 50 °C, and 72 °C.
At 18 °C, two main features are observed: 1) the maximum of induced absorption centered at about 435 nm decreases and the minimum of bleaching at 417 nm increases, and 2) this results in a small 1-nm shift of the maximum and minimum peaks. These shifts become more evident in the raw difference spectra at 50 °C. The minor bleaching at 400 nm, which corresponds to the 5c-NO species, becomes significant in the early events and is due to the 5c-NO-bound heme species which re-forms rapidly from the 4c transient. We verified that, in the absence of NO, there is no contribution of the 5c-NO species to the early events (not shown). Fig. 3-D shows the normalized kinetics in two time windows at each temperature. Kinetic parameters were obtained by fitting the kinetics-associated transient spectra from the global analysis, which correspond to the NO geminate rebinding; the values are listed in Table 1. These results show that: 1) the amplitude of the fast component (τ1 = 4.4–5.1 ps) increases with temperature, that is to say with the proportion of 5c-NO. This fast component is assigned to the immediate rebinding of NO to the 4c photodissociated heme.

Table 1. Parameters obtained by fitting the kinetics in Fig. 3-D, Fig. 1-D, and ref. [10] with the function A(t) = A1 exp(−t/τ1) + A2 exp(−t/τ2) + A3 exp(−t/τ3) + A4
Temperature   | τ1 (ps) | A1 (a) | τ2 (ps) | A2 (a) | τ3 (ps) | A3 (a) | C (b) | A4 (a)
18 °C         | 5.1     | 0.30   | 37      | 0.19   | 246     | 0.33   | cst   | 0.18
50 °C         | 4.7     | 0.48   | 26      | 0.22   | 329     | 0.27   | cst   | 0.03
72 °C         | 4.4     | 0.99   | 76      | 0.01   | -       | 0      | -     | 0
AXCP at RT    | 7.0     | 0.99   | 100     | 0.01   | -       | 0      | -     | 0
sGC at RT     | 7.5     | 0.97   | -       | 0      | -       | 0      | cst   | 0.03

(a) Relative amplitude: Σi Ai = 1 for the given components (decay of ΔA).
(b) cst: constant component.
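To make the fitting model of the Table 1 caption concrete, the following is a minimal sketch (not the authors' analysis code) of how such a kinetics trace can be fitted with the triexponential-plus-constant function using SciPy. The trace below is hypothetical synthetic data built from the 18 °C parameters with added noise, standing in for a kinetics-associated decay from the global analysis.

# Minimal sketch: fitting A(t) = A1*exp(-t/tau1) + A2*exp(-t/tau2)
#                              + A3*exp(-t/tau3) + A4  with SciPy.
import numpy as np
from scipy.optimize import curve_fit

def model(t, A1, tau1, A2, tau2, A3, tau3, A4):
    """Sum of three exponential decays plus a constant offset."""
    return (A1 * np.exp(-t / tau1)
            + A2 * np.exp(-t / tau2)
            + A3 * np.exp(-t / tau3)
            + A4)

# Synthetic 18 °C-like trace built from the Table 1 parameters, with noise.
rng = np.random.default_rng(0)
t = np.linspace(0.5, 500.0, 300)                      # delay (ps)
dA = model(t, 0.30, 5.1, 0.19, 37.0, 0.33, 246.0, 0.18)
dA += rng.normal(scale=0.005, size=t.size)            # measurement noise

# Initial guesses and positivity bounds keep the fit well-behaved.
p0 = [0.3, 5.0, 0.2, 30.0, 0.3, 200.0, 0.1]
popt, pcov = curve_fit(model, t, dA, p0=p0,
                       bounds=(0, [1, 50, 1, 500, 1, 2000, 1]))
perr = np.sqrt(np.diag(pcov))                         # 1-sigma uncertainties
for name, val, err in zip(("A1", "tau1", "A2", "tau2", "A3", "tau3", "A4"),
                          popt, perr):
    print(f"{name:>4s} = {val:7.3f} +/- {err:.3f}")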
Furthermore, this is slightly faster than the observed geminate rebinding in AXCP (7 ps) and sGC (7.5 ps), showing that the heme pocket of Np-SONO imposes more steric hindrance. 2) The slower components (τ2 = 26–37 ps and τ3 = 246–329 ps) correspond to the rebinding of NO to the 5c-His heme (formed from the 6c-NO heme after photodissociation), whose amplitude decreases with increasing temperature. At 72 °C these slow components vanish and only one slow component appears (τ2 = 76 ps), whose value is uncertain because of its very small amplitude. As a working hypothesis, we may interpret these dynamic features as an adaptation to variations of diatomic ligands (O2, NO) in the medium as a function of temperature. Indeed, this amplifies the response to a change in NO concentration with respect to the case where only the speed of diffusion is modified by a temperature change.

Comparing NO activation mechanisms. Four different NO activation models are depicted in Fig. 4 according to our results. In Fig. 4-A, the cleavage of the iron–methionine (Met) bond allows the binding of NO, triggering a still hypothetical conformational change toward the active state of cytochrome c which triggers apoptosis [14]. For guanylate cyclase (Fig. 4-B), the binding of NO induces bond breaking through the trans effect, which then triggers a conformational change that is transmitted to the catalytic site (α-subunit) and increases the catalytic activity up to 300-fold [15]. An important feature of AXCP is the ability of NO to replace the His ligand to give a 5c species in a concentration-dependent manner (Fig. 4-C).
Fig. 4. NO activation mechanisms of A mitochondrial cytochrome c, B guanylate cyclase, C bacterial cytochrome c′, D bacterial sensors of NO. (1) This state is thermally driven and occurs transiently at room temperature [16], allowing NO to bind in the distal site, replacing Met. While the nitrosyl cytochrome c complex is stable, its function in apoptosis has been proposed [14] and is still hypothetical. (2) The 6-coordinate NO-heme-His state has a very short lifetime in sGC and has not yet been measured. (3) The 6c-NO state in AXCP has a long lifetime and can be measured spectroscopically. (4) The temperature dependence of the equilibrium varies for the different SONO species.
In this case the binding of NO does not lead to a change of the overall structure [12, 13]. Temperature-dependent states can be included in the mechanism for the bacterial sensors of NO, as shown in Fig. 4-D.
4 Conclusion

We explored the NO binding and NO release mechanisms of NO-sensors with time-resolved absorption spectroscopy. The very fast rebinding of NO (~7 ps) observed for the three proteins sGC, AXCP, and SONO is due to the 4c-heme. We have shown that the proteins "use" the properties of heme coordination to control the dynamics of NO, leading to: 1) very different affinities, with selective control of kon and koff; 2) control of the activated state of the protein as a function of NO availability. Even the coordination state of the heme can be controlled, depending upon NO concentration, temperature, and the state of the protein (allostery). This particular behavior of these NO-sensors originates from the particular structure of the respective protein, including the overall structure and that of the heme pocket. Thus, the measurement of the dynamics of NO interacting with its heme binding site in diverse proteins provides valuable information on the mechanisms which regulate NO-sensors at the molecular level.
References

[1] Ignarro L J (2000) Nitric Oxide: Biology and Pathobiology. Academic Press, San Diego, CA, 3–19
[2] Zumft W G (2002) Nitric oxide signaling and NO dependent transcriptional control in bacterial denitrification by members of the FNR-CRP regulator family. J. Mol. Microbiol. Biotechnol. 4:277–286
[3] Corker H and Poole R K (2003) Nitric oxide formation by Escherichia coli. J. Biol. Chem. 278:31584–31592
[4] Reddy D et al (1983) Nitrite inhibition of Clostridium botulinum: electron spin resonance detection of iron-nitric oxide complexes. Science 221:769–770
[5] Nioche P et al (2004) Femtomolar sensitivity of a NO sensor from Clostridium botulinum. Science 306:1550–1553
[6] Davidson P M (2001) Chemical preservatives and natural antimicrobial compounds. In: Food Microbiology: Fundamentals and Frontiers, 2nd ed., 593–627. ASM Press, Washington, D.C.
[7] Poulos T L (2006) Soluble guanylate cyclase. Curr. Opin. Struct. Biol. 16:736–743
[8] Lancaster J Jr (1996) Nitric Oxide: Principles and Actions, 1–18. Academic Press, San Diego, USA
[9] Martin J-L and Vos M H (1992) Femtosecond biology. Annu. Rev. Biophys. Biomol. Struct. 21:199–222
[10] Negrerie M et al (2001) Control of nitric oxide dynamics by guanylate cyclase in its activated state. J. Biol. Chem. 276:46815–46821
[11] Ambler R P (1973) The amino acid sequence of cytochrome c′ from Alcaligenes sp. N.C.I.B. 11015. Biochem. J. 135:751–758
[12] Lawson D M et al (2000) Unprecedented proximal binding of nitric oxide to heme: implications for guanylate cyclase. EMBO J. 19:5661–5671
[13] Negrerie M et al (2007) Molecular basis for nitric oxide dynamics and affinity with Alcaligenes xylosoxidans cytochrome c′. J. Biol. Chem. 282:5053–5062
[14] Schonhoff C M et al (2003) Nitrosylation of cytochrome c during apoptosis. J. Biol. Chem. 278:18265–18270
[15] Stone J R et al (1996) Spectral and kinetic studies on the activation of soluble guanylate cyclase by nitric oxide. Biochemistry 35:1093–1099
[16] Negrerie M et al (2006) Ultrafast heme dynamics in ferrous versus ferric cytochrome c studied by time-resolved resonance Raman and transient absorption spectroscopy. J. Phys. Chem. B 110:12766–12781
Environmentally Friendly Approach Electrospinning of Polyelectrolytes from Aqueous Polymer Solutions

Miran Yu, Metodi Bozukov, Wiebke Voigt, Helga Thomas, and Martin Möller

Deutsches Wollforschungsinstitut an der RWTH Aachen e.V.
[email protected]
Abstract. Polyelectrolytes have been studied for a variety of industrial applications in paints, paper coatings, cosmetics, and pharmaceuticals, as well as in research areas including drug delivery, antimicrobial agents, chemical and biological protective clothing, and biomimetic actuators. We have investigated the electrospinning of aqueous polymer solutions to generate nanofibres under environmentally friendly conditions. As polyelectrolytes, polyvinyl alcohol, polyvinyl amine, and wool keratins were used. Owing to the sensitivity of the resulting nanofibres towards aqueous conditions, the fibres were stabilized by physical and chemical crosslinking. To impart antimicrobial activity, nanosilver was incorporated, using PVA-nanofibres as an example.
1 Introduction

Raw materials for nonwovens are generally natural or man-made fibres with diameters ranging from a few to a few tens of microns. New levels of performance can be enabled by electrospun nano-scaled fibres in all fields of application demanding a high surface-area-to-weight ratio, e.g. filtration and catalysis. A broad range of polymers, including natural and synthetic materials, can be electrospun from solution or melt, allowing the generation of tailored nanofibre webs for various applications. The increasing application potential of nanofibres leads to increasing requirements on special material properties such as chemical and biological functionality or thermal stability. In addition, environmentally friendly production conditions are required for simple technical feasibility without the additional disposal of hazardous organic solvents. Therefore work at DWI is mainly concentrated on avoiding organic solvents during fibre production (i) by applying melt-electrospinning processes or (ii) by using water as the polymer solvent with subsequent stabilisation of the spun fibres towards aqueous surroundings [1-5]. Functionality of nanofibres can be achieved by spinning functional polymers or by direct implementation of functional substances into the fibre and their subsequent fixation to the nanofibre web. In addition, chemical and biological functionality can also be obtained from natural polymers accessible from waste materials. For example, the chitin derivative chitosan is known to provide antimicrobial effectiveness [6], and keratin fibres are known for their propensity for binding air-polluting substances by nucleophilic addition, e.g. formaldehyde [7]. Moreover, the electrospinning of keratin-based proteins offers special advantages which are mainly related to their cystine content, which is advantageous for the stabilisation of the corresponding nanofibres. This is due to the ability of cystine to recombine after reductive cleavage, the latter being a prerequisite for keratin isolation [8].
For special filters, e.g. cabin filters or protective masks, inhibition of the uncontrolled growth of microbes on the filter surface is desired. To interfere with the multiplication of microorganisms, a hygienic functionalisation of the nanofibre web is necessary, by the addition of e.g. chemical oxidants, heavy metals, organic protein denaturants or membrane disrupters, either as individual additives or as polymers bearing a special antimicrobial function. Whereas antimicrobial polymers such as polyamines can be expected to form nanofibres under certain conditions, non-polymeric compounds, e.g. silver ions or elemental silver, can be used as an add-on to the spinning solution. In the present work we describe the formation of keratin-based as well as hygienically functionalised nanofibre webs under environmentally acceptable production conditions. In this context special regard is given (i) to the use of water-based polymer solutions for fibre generation and the stabilisation of the resulting fibres towards humid conditions and (ii) to different strategies for inhibiting microbial growth on the nanofibre webs. The latter includes the use of polyvinylamine as well as the implementation of nanosilver into the fibrous materials.
2 Results and Discussion

Electrospinning of aqueous polymer solutions requires an additional stabilisation of the electrospun fibres. This requirement can be met by (i) electrospinning of polymer derivatives modified with crosslinkable groups, (ii) modification of the polymer through adding appropriate crosslinkers to the spinning solution, or (iii) post-treatment, e.g. annealing or a subsequent chemical reaction of the electrospun fibres. The possibility of generating water stable fibres from aqueous polymer solutions is demonstrated using the example of polyvinylamine (PVAm, Lupamin 9095, BASF). Apart from its intrinsic antimicrobial activity, the presence of primary amino groups along the hydrocarbon chain enables the chemical modification of the polymer which is necessary for imparting water stability to the corresponding nanofibres. Fig. 1 depicts SEM images of fibres electrospun from commercial PVAm (Mw 350,000 g/mol); the average fibre diameter is ca. 85 nm. The commercial product contains a certain amount of salt from the preparation of PVAm by alkaline hydrolysis. The salt influences the morphology and diameter of the resulting fibres because it changes the solution viscosity, conductivity, surface tension, and coil size. For generating water stable PVAm nanofibres, we modified PVAm with p-(6-bromohexyloxy)benzophenone (BHBP) and methacryloyl chloride (mAC) to permit a photo-induced crosslinking of the polymer chains in the generated fibres (Scheme 1). Fig. 2 shows the nanofibres derived from PVAm-BHBP-mAC, which exhibit good water stability after UV-irradiation.

Fig. 1. SEM-images of PVAm electrospun fibres
Scheme 1. Synthesis of a UV-curable PVAm-derivative by reaction with BHBP (1) and mAC (2) in a two-stage process
Water stable fibres from PVAm can also be obtained by blending the polyamine with polymers enabling nucleophilic addition of the PVAm-chains (Fig. 3). This also holds for polyvinyl alcohol (PVA), as indicated in Fig. 4 before and after storage in water over a period of 2 hours. Apart from chemical crosslinking, physical crosslinking can also result in water stable fibres. For example, water resistance of PVA can be achieved by increasing the degree of crystallinity, which is well established to be attainable by annealing of highly hydrolysed PVA samples. Furthermore, PVA is stabilised by heat-treatment due to the formation of crosslinks during intermolecular dehydration. A decrease in crystallinity can be caused by thermal degradation, including dehydration and depolymerisation [9].

Fig. 2. SEM-images of UV-irradiated fibres electrospun from PVAm-BHBP-mAC in water

Fig. 3. SEM-images of PVAm-nanofibres blended with a reactive polymer before (left) and after (right) water exposure

Via DSC-measurements it was shown that for PVA-nanofibre webs an annealing temperature of 150 °C is optimal for stabilisation. However, the generation of water insolubility strongly depends on the annealing period: the crystallinity increases rapidly and is then slowly reduced again when the annealing process exceeds a heating period of 10 min. However, the presence of additives such as
nanosilver (Fig. 5) leads to a general reduction in the degree of crystallinity. This reduction is evidently caused by a more intensive disturbance of the crystallisation process by the Ag-nanoparticles formed during irradiation. This interference is corroborated by the longer exposure time needed to achieve a maximum degree of crystallinity: whereas for pure PVA-fibres a maximum value is obtained after an annealing period of 10 min, silver-containing PVA fibres need an extension of the heating phase to 25 min for maximum crystallisation. Samples which have been heated for shorter or longer periods are not water stable, owing to incomplete crystallisation or progressing polymer degradation, respectively. For samples annealed under the optimal conditions, pronounced water stability is achieved (Fig. 6), as indicated by their withstanding immersion in water over a period of 1 h.

Fig. 4. SEM-images of PVA-nanofibres blended with a reactive polymer enabling nucleophilic addition of PVA before (left) and after (right) water exposure
Fig. 5. PVA-fibres bearing silver nanoparticles generated by electrospinning of an aqueous solution of PVA/AgNO3 and subsequent reduction of silver ions by UV-irradiation
Fig. 6. PVA- (left) and PVA/Ag0-composite (right) fibres after annealing and immersion in water
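As a side note on the crystallinity argument above: the degree of crystallinity is conventionally estimated from DSC as the ratio of the measured melting enthalpy to that of the fully crystalline polymer. The sketch below illustrates this calculation; it is not taken from this work. The enthalpy readings are hypothetical illustration values, and the 100%-crystalline reference for PVA (~138.6 J/g) is a commonly cited literature figure rather than a value determined here.

# Minimal sketch: degree of crystallinity from a DSC melting enthalpy.
DELTA_H_100_PVA = 138.6  # J/g, fully crystalline PVA (literature value)

def crystallinity(delta_h_melt_j_per_g: float,
                  delta_h_100: float = DELTA_H_100_PVA) -> float:
    """Degree of crystallinity in percent: X_c = dH_m / dH_m0 * 100."""
    return 100.0 * delta_h_melt_j_per_g / delta_h_100

# Hypothetical readings for webs annealed at 150 C for different periods:
for label, dh in (("PVA, 10 min", 55.0), ("PVA/Ag, 10 min", 41.0),
                  ("PVA/Ag, 25 min", 52.0)):
    print(f"{label:>15s}: X_c = {crystallinity(dh):.1f} %")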
For the generation of stable nanofibres from wool kerateins, the proteins were extracted from the wool according to Scheme 2 using oxidative sulfitolysis. After reductive cleavage of the cystine bridges with Na2SO3, the cysteine groups are completely converted into S-sulfonates by reaction with Na2S4O6 and Na2SO3. According to their differential solubility behaviour, a separation into microfibrillar (intermediate filament) and matrix (intermediate filament associated) proteins can be achieved by isoelectric precipitation.
Wool → keratin extraction (oxidative sulfitolysis): ker-S-S-ker → 2 ker-S-SO3−
Isoelectric precipitation:
  pellet: microfibrillar proteins (LS-SSO3−)
  supernatant: matrix proteins (HS-SSO3−)
Reduction: LS-SSO3− → LS-S− ; HS-SSO3− → HS-S−
Reoxidation: LS-S− → LS-S-S-LS ; HS-S− → HS-S-S-HS
Scheme 2. Wool keratin extraction scheme
Cleavage of the S-sulfo-groups before electrospinning is achieved by reaction with an excess of reducing agent; this is necessary to permit the recombination of the cystine bridges, e.g. by oxidation with air. Both fractions differ significantly in amino acid composition and molecular weight. The microfibrillar proteins (LS) contain lower amounts of cystine (2.9 mol-%) than the matrix fraction (HS, 6.9 mol-%). In addition, the LS-fraction is composed of higher molecular weight proteins (Mw 45,000–50,000 g/mol) compared to the corresponding HS-fraction (Mw 14,000–28,000 g/mol). The difference in molecular weight between the two fractions leads to a different behaviour during electrostatic spinning of the kerateins: whereas the lower molecular weight of the HS-proteins inhibits fibre formation during electrospinning, nanofibres were generated from the LS-fraction (Fig. 7).
Fig. 7. SEM-picture of electrospun low-sulfur-keratein fraction isolated from wool
3 Conclusions

Different strategies for the generation of water stable nanofibres from aqueous polymer solutions were investigated. Water stabilisation of the nanofibres was achieved (i) by chemical crosslinking and (ii) by physical crosslinking, i.e. by increasing the crystallinity. Chemical crosslinking was achieved by the implementation of reactive groups into the polymer and subsequent crosslinking after electrospinning via UV-irradiation, as well as by reaction with polymers bearing groups susceptible to nucleophilic attack. Semi-crystalline polymers (PVA) offer the possibility of physical crosslinking by
annealing. Furthermore, we investigated the electrospinning of keratin-based proteins. Keratin fibres possess the propensity to bind air-polluting substances, and the diamino acid cystine is advantageous for the stabilisation of the nanofibres generated from keratinaceous proteins. The paper thus presents different strategies for the generation of water stable nanofibres from water-based polymer solutions. The first successful electrospinning of PVAm opens up further applications, and the ultrafine fibre diameter is a major advantage for filter applications.
References

[1] Klinkhammer K, Dalton P, Salber J, Klee D, Möller M (2005) BIOmaterialien 6:187
[2] Dalton P D, Klinkhammer K, Salber J, Klee D, Möller M (2006) Biomacromolecules 7:686
[3] Dalton P D, Calvet J L, Mourran A, Klee D, Möller M (2006) Biotechnology Journal 1:998
[4] Thomas H, Heine E, Wollseifen R, Cimpeanu C, Möller M (2005) Int. Nonwovens Journal 14:12
[5] Voigt W, Thomas H, Heine E, Möller M (2007) In: Anandjiwala R, Hunter L, Kozlowski R, Zaikov G (eds) Textiles for Sustainable Development. Nova Science Publishers Inc.
[6] Hirano S (1996) Biotechnology Annual Review 2:237
[7] Wortmann G, Sweredjuk R, Zwiener G, Doppelmayer F, Wortmann F J (2001) DWI Reports 124:378
[8] Thomas H, Conrads A, Phan K H, van de Löcht M, Zahn H (1986) Intern. J. Biol. Macromol. 8:258
[9] Finch C A (ed) (1973) Polyvinyl Alcohol: Properties and Applications. Wiley, London
Author Index
Abell, Chris 499
Adjih, Cedric 333
Ammari, Habib 297
Andrew, Colin R. 517
Aoussat, Améziane 355
Baek, Je Hyun 3
Bouchard, Carole 355
Bozukov, Metodi 525
Brown, Richard E. 45
Byun, Jin Wook 329
Calcagno, Barbara 171
Chang, Doo-Bong 107
Chang, K.P. 113
Cho, In-Young 153
Cho, Jihoon 329
Cho, Seong Wook 77
Cho, Soo 123
Cho, Song Yean 333
Cho, Sungbo 395
Choi, Ji-Hun 85
Choi, Joo-Young 135
Choi, Minsuk 3
Choi, Sungwoo 143
Choi, Yoon Hong 405
Choo, Byung-kwon 229
Chung, Chanhoon 413
Curd, W. 113
d'Andréa-Novel, Brigitte 143
Darwish, Ahmed Haj 267
Dell'Orco, G. 113
Fan, L. 113
Fuchs, Karoline 429
Gabi, Martin 69
Gupta, D. 113
Ham, Dong-Han 345
Han, Man-Wook 153, 163
Heo, Seok 421
Hoffmann, Hartmut 315
Hollfelder, Florian 499
Hong, Sungyoul 421
Huck, Wilhelm 499
Hur, Nahmkeon 19, 33
Im, Han-Eol 249
Jang, Cheol Yong 123
Jang, Jin 229, 249
Jeong, Jong Hyun 95
Jit, Mark 405
Joo, Sung Uk 123
Jun, Chang 171
Jung, Hyun Chul 307
Kang, Bong-Gu 179
Kang, Myung-Joo 191
Kang, Sangmo 95
Kang, Sung Ung 429
Kerr, Clive 371
Kim, Byeong-Sam 201
Kim, Chang Shuk 209
Kim, Chung-Hee 437
Kim, Dong-Ho 223
Kim, Dong-Seon 215
Kim, Dowon 443
Kim, Eun Jung 455
Kim, Hyo Won 45
Kim, Jae-Ihn 463
Kim, Jae-Min 223, 287
Kim, Jae Won 85
Kim, Ji-Yeon 287
Kim, Jieun 355
Kim, Jong-Yeob 223
Kim, Ki-hwan 229
Kim, Minsung 85
Kim, Yong-Hwan 113
Kim, Yong Hwan 239
Kuehn, Ingo 113
Kuppuswamy, Naveen 307
Kwon, JuYoun 467
Lamarre, Isabelle 517
Lee, Chang-Seok 249
Lee, Dal Ho 259
Lee, Eun-Jeong 477
Lee, Eun-Ju 223
Lee, Habin 365
Lee, J. 275
Lee, Jangho 85
Lee, Ji Young 267
Lee, Jin Sung 123
Lee, KwangSoo 201
Lee, Myungsung 19
Lee, Sang Hyuk 33
Lee, Seong Hyuk 77
Lee, Sungjoo 371
Lesselier, Dominique 297
Lubec, Gert 429
Magagnato, Franco 69
Manivannan, Sellaperumal 249
Martin, Jean-Louis 517
Martinez, Jean-Marc 171
Meschede, Dieter 463
Moctar, Bettar el 61
Möller, Martin 525
Mortara, Letizia 371
Negrerie, Michel 517
Nioche, Pierre 517
Oh, Ilyoung 485
Oh, Sung Taek 495
Olguin, Luis 499
Omhover, Jean-François 355
Pae, Min-Ho 223
Park, Hyo-Soon 287
Park, J.M. 383
Park, Kyoungwoo 201
Park, Kyu-Chang 229, 249
Park, Seong-Ryong 85
Park, Won-Kwang 297
Parsons, Ken 467
Phaal, Robert 371
Powell, Jane C. 443
Pribat, Didier 229, 249
Pritz, Balazs 69
Probert, David 371
Ro, Kyoung Chul 77
Ryou, Hong Sun 77
Ryu, Je-Hwang 249
Shepherdson, John 365
Shim, Jung-Uk 499
Shin, Dong-Shin 503
Shin, Kyung Chul 307
Sieghart, Werner 429
So, Hyunwoo 315
Sohn, Jang Yeul 123
Son, Gihun 33
Son, Young-Woo 85
Song, Na-young 229
Soutis, C. 275
Suh, Yong Kweon 95
Thomas, Helga 525
Villagra, Jorge 143
Vine, G. 239
Voigt, Wiebke 525
Wimmer, Robert 191
Won, Chan-Shik 19
Yoo, Byung-Kuk 517
Yu, Miran 525