Lecture Notes in Computer Science
Commenced Publication in 1973
Founding and Former Series Editors: Gerhard Goos, Juris Hartmanis, and Jan van Leeuwen
Editorial Board
David Hutchison, Lancaster University, UK
Takeo Kanade, Carnegie Mellon University, Pittsburgh, PA, USA
Josef Kittler, University of Surrey, Guildford, UK
Jon M. Kleinberg, Cornell University, Ithaca, NY, USA
Alfred Kobsa, University of California, Irvine, CA, USA
Friedemann Mattern, ETH Zurich, Switzerland
John C. Mitchell, Stanford University, CA, USA
Moni Naor, Weizmann Institute of Science, Rehovot, Israel
Oscar Nierstrasz, University of Bern, Switzerland
C. Pandu Rangan, Indian Institute of Technology, Madras, India
Bernhard Steffen, TU Dortmund University, Germany
Madhu Sudan, Microsoft Research, Cambridge, MA, USA
Demetri Terzopoulos, University of California, Los Angeles, CA, USA
Doug Tygar, University of California, Berkeley, CA, USA
Gerhard Weikum, Max-Planck Institute of Computer Science, Saarbruecken, Germany
5910
Ivan Lirkov, Svetozar Margenov, Jerzy Waśniewski (Eds.)
Large-Scale Scientific Computing
7th International Conference, LSSC 2009
Sozopol, Bulgaria, June 4-8, 2009
Revised Papers
Volume Editors
Ivan Lirkov, Svetozar Margenov
Bulgarian Academy of Sciences, Institute for Parallel Processing
Acad. G. Bonchev, block 25A, 1113 Sofia, Bulgaria
E-mail: {ivan, margenov}@parallel.bas.bg
Jerzy Waśniewski
Technical University of Denmark, Department of Informatics and Mathematical Modelling
Richard Petersens Plads, Building 321, 2800 Kongens Lyngby, Denmark
E-mail: [email protected]
Library of Congress Control Number: 2010924431
CR Subject Classification (1998): G.1, D.1, D.4, F.2, C.2, I.6, J.2, J.6
LNCS Sublibrary: SL 1 – Theoretical Computer Science and General Issues
ISSN 0302-9743
ISBN-10 3-642-12534-4 Springer Berlin Heidelberg New York
ISBN-13 978-3-642-12534-8 Springer Berlin Heidelberg New York
This work is subject to copyright. All rights are reserved, whether the whole or part of the material is concerned, specifically the rights of translation, reprinting, re-use of illustrations, recitation, broadcasting, reproduction on microfilms or in any other way, and storage in data banks. Duplication of this publication or parts thereof is permitted only under the provisions of the German Copyright Law of September 9, 1965, in its current version, and permission for use must always be obtained from Springer. Violations are liable to prosecution under the German Copyright Law. springer.com © Springer-Verlag Berlin Heidelberg 2010 Printed in Germany Typesetting: Camera-ready by author, data conversion by Scientific Publishing Services, Chennai, India Printed on acid-free paper 06/3180
Preface
The 7th International Conference on Large-Scale Scientific Computations (LSSC 2009) was held in Sozopol, Bulgaria, June 4–8, 2009. The conference was organized and sponsored by the Institute for Parallel Processing at the Bulgarian Academy of Sciences.
The conference was devoted to the 70th birthday anniversary of Professor Zahari Zlatev. The Bulgarian Academy of Sciences awarded him the Marin Drinov medal on ribbon for his outstanding results in environmental mathematics and for his contributions to the Bulgarian mathematical society and the Academy of Sciences.
The plenary invited speakers and lectures were:
– P. Arbenz, “μFinite Element Analysis of Human Bone Structures”
– Y. Efendiev, “Mixed Multiscale Finite Element Methods Using Limited Global Information”
– U. Langer, “Fast Solvers for Non-Linear Time-Harmonic Problems”
– T. Manteuffel, “First-Order System Least-Squares Approach to Resistive Magnetohydrodynamic Equations”
– K. Sabelfeld, “Stochastic Simulation for Solving Random Boundary Value Problems and Some Applications”
– F. Tröltzsch, “On Finite Element Error Estimates for Optimal Control Problems with Elliptic PDEs”
– Z. Zlatev, “On Some Stability Properties of the Richardson Extrapolation Applied Together with the θ-method”
The success of the conference and the present volume in particular are an outcome of the joint efforts of many partners from various institutions and organizations. First we would like to thank all the members of the Scientific Committee for their valuable contribution forming the scientific face of the conference, as well as for their help in reviewing contributed papers. We especially thank the organizers of the special sessions. We are also grateful to the staff involved in the local organization.
Traditionally, the purpose of the conference is to bring together scientists working with large-scale computational models of environmental and industrial problems, and specialists in the field of numerical methods and algorithms for modern high-performance computers. The invited lectures reviewed some of the advanced achievements in the field of numerical methods and their efficient applications. The conference talks were presented by university researchers and practicing industry engineers, including applied mathematicians, numerical analysts and computer experts. The general theme for LSSC 2009 was “Large-Scale Scientific Computing” with a particular focus on the organized special sessions.
The special session organizers were:
– Multilevel and Multiscale Preconditioning Methods — J. Kraus, S. Margenov, M. Neytcheva
– Upscaling and Multiscale Methods — M. Katsoulakis, R. Lazarov
– Industrial and Biomedical Multiscale Problems — Y. Efendiev, O. Iliev, P. Popov
– Environmental Modelling — A. Ebel, K. Georgiev, Z. Zlatev
– Control and Uncertain Systems — M. Krastanov, V. Veliov (this session was dedicated to the 60th anniversary of Asen Donchev)
– Applications of Metaheuristics to Large-Scale Problems — F. Luna, S. Fidanova
– Monte Carlo: Methods, Applications, Distributed Computing — I. Dimov, V. Mitin, M. Nedjalkov
– Grid and Scientific and Engineering Applications — A. Karaivanova, E. Atanassov, T. Gurov
– Reliable Numerical Methods for Differential Equations — I. Faragó, J. Karátson, S. Korotov
– Discretization and Fast Solution Techniques for Large-Scale Physics Applications — P. Vassilevski, L. Zikatanov
– Least Squares Finite Element Methods — P. Bochev, T. Manteuffel
– Unconventional Uses of Optimization in Scientific Computing — P. Bochev, D. Ridzal
More than 140 participants from all over the world attended the conference, representing some of the strongest research groups in the field of advanced large-scale scientific computing. This volume contains 99 papers submitted by authors from 24 countries.
The 8th International Conference on Large-Scale Scientific Computations (LSSC 2011) will be organized in June 2011.
January 2010
Ivan Lirkov, Svetozar Margenov, Jerzy Waśniewski
Table of Contents
I  Plenary and Invited Papers
An Efficiency-Based Adaptive Refinement Scheme Applied to Incompressible, Resistive Magnetohydrodynamics . . . . . . . . . . . . . . . . . . . . J. Adler, T. Manteuffel, S. McCormick, J. Nolting, J. Ruge, and L. Tang
1
Discontinuous Galerkin Subgrid Finite Element Method for Heterogeneous Brinkman’s Equations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Oleg P. Iliev, Raytcho D. Lazarov, and Joerg Willems
14
Stochastic Simulation for Solving Random Boundary Value Problems and Some Applications . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Karl Sabelfeld
26
On Finite Element Error Estimates for Optimal Control Problems with Elliptic PDEs . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Fredi Tröltzsch
40
On Some Stability Properties of the Richardson Extrapolation Applied Together with the θ-Method . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Zahari Zlatev, István Faragó, and Ágnes Havasi
54
II  Multilevel and Multiscale Preconditioning Methods
On the Use of Aggregation-Based Parallel Multilevel Preconditioners in the LES of Wall-Bounded Turbulent Flows . . . . . . . . . . . . . . . . . . . . . . . . . . Andrea Aprovitola, Pasqua D'Ambra, Daniela di Serafino, and Salvatore Filippone
67
An Additive Matrix Preconditioning Method with Application for Domain Decomposition and Two-Level Matrix Partitionings . . . . . . . . . . . Owe Axelsson
76
Numerical Study of AMLI Methods for Weighted Graph-Laplacians . . . . Petia Boyanova and Svetozar Margenov
84
A Scalable TFETI Based Algorithm for 2D and 3D Frictionless Contact Problems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Zdeněk Dostál, Tomáš Brzobohatý, Tomáš Kozubek, Alex Markopoulos, and Vít Vondrák
92
Multilevel Preconditioning of Crouzeix-Raviart 3D Pure Displacement Elasticity Problems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Ivan Georgiev, Johannes Kraus, and Svetozar Margenov
100
Element-by-Element Schur Complement Approximations for General Nonsymmetric Matrices of Two-by-Two Block Form . . . . . . . . . . . . . . . . . . Maya Neytcheva, Minh Do-Quang, and He Xin
108
Numerical Simulation of Fluid-Structure Interaction Problems on Hybrid Meshes with Algebraic Multigrid Methods . . . . . . . . . . . . . . . . . . . . Huidong Yang and Walter Zulehner
116
III  Multilevel and Multiscale Methods for Industrial Applications
Recent Developments in the Multi-Scale-Finite-Volume Procedure . . . . . . Giuseppe Bonfigli and Patrick Jenny
124
Boundary Element Simulation of Linear Water Waves in a Model Basin . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Clemens Hofreither, Ulrich Langer, and Satyendra Tomar
132
Numerical Homogenization of Bone Microstructure . . . . . . . . . . . . . . . . . . . Nikola Kosturski and Svetozar Margenov
140
Multiscale Modeling and Simulation of Fluid Flows in Highly Deformable Porous Media . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . P. Popov, Y. Efendiev, and Y. Gorb
148

IV  Environmental Modelling
Assimilation of Chemical Ground Measurements in Air Quality Modeling . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Gabriele Candiani, Claudio Carnevale, Enrico Pisoni, and Marialuisa Volta
157
Numerical Simulations with Data Assimilation Using an Adaptive POD Procedure . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Gabriel Dimitriu, Narcisa Apreutesei, and Răzvan Ştefănescu
165
Game-Method Model for Field Fires . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Nina Dobrinkova, Stefka Fidanova, and Krassimir Atanassov
173
Joint Analysis of Regional Scale Transport and Transformation of Air Pollution from Road and Ship Transport . . . . . . . . . . . . . . . . . . . . . . . . . . . . K. Ganev, D. Syrakov, G. Gadzhev, M. Prodanova, G. Jordanov, N. Miloshev, and A. Todorova
180
Runs of UNI-DEM Model on IBM Blue Gene/P Computer and Analysis of the Model Performance . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Krassimir Georgiev and Zahari Zlatev
188
Sensitivity Analysis of a Large-Scale Air Pollution Model: Numerical Aspects and a Highly Parallel Implementation . . . . . . . . . . . . . . . . . . . . . . . Tzvetan Ostromsky, Ivan T. Dimov, Rayna Georgieva, and Zahari Zlatev
197
Advanced Results of the PM10 and PM2.5 Winter 2003 Episode by Using MM5-CMAQ and WRF/CHEM Models . . . . . . . . . . . . . . . . . . . . . . . Roberto San José, Juan L. Pérez, José L. Morant, F. Prieto, and Rosa M. González
206
Four-Dimensional Variational Assimilation of Atmospheric Chemical Data – Application to Regional Modelling of Air Quality . . . . . . . . . . . . . . Achim Strunk, Adolf Ebel, Hendrik Elbern, Elmar Friese, Nadine Goris, and Lars Peter Nieradzik
214
Numerical Study of Some High PM10 Levels Episodes . . . . . . . . . . . . . . . . A. Todorova, G. Gadzhev, G. Jordanov, D. Syrakov, K. Ganev, N. Miloshev, and M. Prodanova
223

V  Control and Uncertain Systems
Hausdorff Continuous Viscosity Solutions of Hamilton-Jacobi Equations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Roumen Anguelov, S. Markov, and F. Minani
231
Stochastic Skiba Sets: An Example from Models of Illicit Drug Consumption . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Roswitha Bultmann, Gustav Feichtinger, and Gernot Tragler
239
Classical and Relaxed Optimization Methods for Nonlinear Parabolic Optimal Control Problems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . I. Chryssoverghi, J. Coletsos, and B. Kokkinis
247
Exponential Formula for Impulsive Differential Inclusions . . . . . . . . . . . . . Tzanko Donchev
256
Directional Sensitivity Differentials for Parametric Bang-Bang Control Problems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Ursula Felgenhauer
264
Estimates of Trajectory Tubes of Uncertain Nonlinear Control Systems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Tatiana F. Filippova
272
Asymptotics for Singularly Perturbed Reachable Sets . . . . . . . . . . . . . . . . . Elena Goncharova and Alexander Ovseevich
280
On Optimal Control Problem for the Bundle of Trajectories of Uncertain System . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Mikhail I. Gusev
286
High-Order Approximations to Nonholonomic Affine Control Systems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Mikhail I. Krastanov and Vladimir M. Veliov
294
VI  Application of Metaheuristics to Large Scale Problems
A Parametric Multi-start Algorithm for Solving the Response Time Variability Problem . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Albert Corominas, Alberto García-Villoria, and Rafael Pastor
302
Enhancing the Scalability of Metaheuristics by Cooperative Coevolution . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Ciprian Crăciun, Monica Nicoară, and Daniela Zaharie
310
Hybrid ACO Algorithm for the GPS Surveying Problem . . . . . . . . . . . . . . Stefka Fidanova, Enrique Alba, and Guillermo Molina
318
Generalized Nets as Tools for Modeling of the Ant Colony Optimization Algorithms . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Stefka Fidanova and Krassimir Atanassov
326
A Scatter Search Approach for Solving the Automatic Cell Planning Problem . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Francisco Luna, Juan J. Durillo, Antonio J. Nebro, and Enrique Alba
334
Some Aspects Regarding the Application of the Ant Colony Meta-heuristic to Scheduling Problems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Ioana Moisil and Alexandru-Liviu Olteanu
343
High-Performance Heuristics for Optimization in Stochastic Traffic Engineering Problems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Evdokia Nikolova
352
Optimization of Complex SVM Kernels Using a Hybrid Algorithm Based on Wasp Behaviour . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Dana Simian, Florin Stoica, and Corina Simian
361
VII  Monte Carlo: Methods, Applications, Distributed Computing
Enabling Cutting-Edge Semiconductor Simulation through Grid Technology . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Asen Asenov, Dave Reid, Campbell Millar, Scott Roy, Gareth Roy, Richard Sinnott, Gordon Stewart, and Graeme Stewart
369
Incremental Reuse of Paths in Random Walk Radiosity . . . . . . . . . . . . . . . Francesc Castro and Mateu Sbert
379
Monte Carlo Adaptive Technique for Sensitivity Analysis of a Large-Scale Air Pollution Model . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Ivan T. Dimov and Rayna Georgieva
387
Parallel Implementation of the Stochastic Radiosity Method . . . . . . . . . . . Roel Martínez and Jordi Coma
395
Monte-Carlo Modeling of Electron Kinetics in Room Temperature Quantum-Dot Photodetectors . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Vladimir Mitin, Andrei Sergeev, Li-Hsin Chien, and Nizami Vagidov
403
Particle Model of the Scattering-Induced Wigner Function Correction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . M. Nedjalkov, P. Schwaha, O. Baumgartner, and S. Selberherr
411
Analysis of the Monte Carlo Image Creation by Uniform Separation . . . . A.A. Penzov, Ivan T. Dimov, L. Szirmay-Kalos, and V.N. Koylazov
419
The Role of the Boundary Conditions on the Current Degradation in FD-SOI Devices . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . K. Raleva, D. Vasileska, and S.M. Goodnick
427
Gamma Photon Transport on the GPU for PET . . . . . . . . . . . . . . . . . . . . . L. Szirmay-Kalos, B. Tóth, M. Magdics, D. Légrády, and A.A. Penzov
435
Transport in Nanostructures: A Comparative Analysis Using Monte Carlo Simulation, the Spherical Harmonic Method, and Higher Moments Models . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . M. Vasicek, V. Sverdlov, J. Cervenka, T. Grasser, H. Kosina, and S. Selberherr
443
Thermal Modeling of GaN HEMTs . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . D. Vasileska, A. Ashok, O. Hartin, and S.M. Goodnick
451
VIII  Grid and Scientific and Engineering Applications
Tuning the Generation of Sobol Sequence with Owen Scrambling . . . . . . . Emanouil Atanassov, Aneta Karaivanova, and Sofiya Ivanovska
459
Applying the Improved Saleve Framework for Modeling Abrasion of Pebbles . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Péter Dóbé, Richárd Kápolnai, András Sipos, and Imre Szeberényi
467
Information Flow and Mirroring in an Agent-Based Grid Resource Brokering System . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Maria Ganzha, Marcin Paprzycki, Michal Drozdowicz, Mehrdad Senobari, Ivan Lirkov, Sofiya Ivanovska, Richard Olejnik, and Pavel Telegin
475
Scatter Search and Grid Computing to Improve Nuclear Fusion Devices . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Antonio Gómez-Iglesias, Miguel A. Vega-Rodríguez, Francisco Castejón-Magaña, Miguel Cárdenas-Montes, and Enrique Morales-Ramos
483
Service-Oriented Integration of Grid Applications in Heterogeneous Grids . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Radoslava D. Goranova
491
Grid Based Environment Application Development Methodology . . . . . . . Dorian Gorgan, Teodor Stefanut, Victor Bacu, Danut Mihon, and Denisa Rodila
499
User Level Grid Quality of Service . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Anastas Misev and Emanouil Atanassov
507
Dictionary Compression and Information Source Correction . . . . . . . . . . . Dénes Németh, Máté Lakat, and Imre Szeberényi
515
A Parallelization of Finite Volume Method for Calculation of Gas Microflows by Domain Decomposition Methods . . . . . . . . . . . . . . . . . . . . . . Kiril S. Shterev and Stefan K. Stefanov
523
Background Pollution Forecast over Bulgaria . . . . . . . . . . . . . . . . . . . . . . . . D. Syrakov, K. Ganev, M. Prodanova, N. Miloshev, G. Jordanov, E. Katragkou, D. Melas, A. Poupkou, and K. Markakis
531
Climate Change Impact Assessment of Air Pollution Levels in Bulgaria . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . D. Syrakov, M. Prodanova, N. Miloshev, K. Ganev, G. Jordanov, V. Spiridonov, A. Bogatchev, E. Katragkou, D. Melas, A. Poupkou, and K. Markakis
538
Effective Algorithm for Calculating Spatial Deformations of Pre-stressed Concrete Beams . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . János Török, Dénes Németh, and Máté Lakat
546

IX  Reliable Numerical Methods for Differential Equations
Structurally Stable Numerical Schemes for Applied Dynamical Models . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Roumen Anguelov
554
Matrix and Discrete Maximum Principles . . . . . . . . . . . . . . . . . . . . . . . . . . . István Faragó
563
On a Bisection Algorithm That Produces Conforming Locally Refined Simplicial Meshes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Antti Hannukainen, Sergey Korotov, and Michal Křížek
571
A Discrete Maximum Principle for Nonlinear Elliptic Systems with Interface Conditions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . János Karátson
580
Inverse Problem for Coefficient Identification in Euler-Bernoulli Equation by Linear Spline Approximation . . . . . . . . . . . . . . . . . . . . . . . . . . . Tchavdar T. Marinov and Rossitza S. Marinova
588
A Fully Implicit Method for Fluid Flow Based on Vectorial Operator Splitting . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Rossitza S. Marinova, Raymond Spiteri, and Eddy Essien
596
Discrete Maximum Principle for Finite Element Parabolic Operators . . . Miklos E. Mincsovics
604
Climate Change Scenarios for Hungary Based on Numerical Simulations with a Dynamical Climate Model . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Ildikó Pieczka, Judit Bartholy, Rita Pongrácz, and Adrienn Hunyady
613
Numerical Simulations of Reaction-Diffusion Systems Arising in Chemistry Using Exponential Integrators . . . . . . . . . . . . . . . . . . . . . . . . . . . Răzvan Ştefănescu and Gabriel Dimitriu
621
On the Discretization Time-Step in the Finite Element Theta-Method of the Two-Dimensional Discrete Heat Equation . . . . . . . . . . . . . . . . . . . . . Tamás Szabó
629
X  Novel Applications of Optimization Ideas to the Numerical Solution of PDEs
A Locally Conservative Mimetic Least-Squares Finite Element Method for the Stokes Equations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Pavel Bochev and Max Gunzburger
637
Additive Operator Decomposition and Optimization–Based Reconnection with Applications . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Pavel Bochev and Denis Ridzal
645
Least-Squares Spectral Element Method on a Staggered Grid . . . . . . . . . . Marc Gerritsma, Mick Bouman, and Artur Palha
653
Mimetic Least-Squares Spectral/hp Finite Element Method for the Poisson Equation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Artur Palha and Marc Gerritsma
662
Adaptive Least Squares Finite Element Methods in Elasto-Plasticity . . . . Gerhard Starke
671
The Automatic Construction and Solution of a Partial Differential Equation from the Strong Form . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Joseph Young
679

XI  Contributed Papers
Zienkiewicz-Type Finite Element Applied to Fourth-Order Problems . . . Andrey B. Andreev and Milena R. Racheva
687
Acceleration of Convergence for Eigenpairs Approximated by Means of Non-conforming Finite Element Methods . . . . . . . . . . . . . . . . . . . . . . . . . . . . Andrey B. Andreev and Milena R. Racheva
695
A Two-Grid Method on Layer-Adapted Meshes for a Semilinear 2D Reaction-Diffusion Problem . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Ivanka T. Angelova and Lubin G. Vulkov
703
Comparative Analysis of Solution Methods for Delay Differential Equations in Hematology . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Gergana Bencheva
711
Solving Non-linear Systems of Equations on Graphics Processing Units . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Lubomir T. Dechevsky, Børre Bang, Joakim Gundersen, Arne Lakså, and Arnt R. Kristoffersen
719
Computing n-Variate Orthogonal Discrete Wavelet Transforms on Graphics Processing Units . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Lubomir T. Dechevsky, Joakim Gundersen, and Børre Bang
730
Wavelet Compression, Data Fitting and Approximation Based on Adaptive Composition of Lorentz-Type Thresholding and Besov-Type Non-threshold Shrinkage . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Lubomir T. Dechevsky, Joakim Gundersen, and Niklas Grip
738
On Interpolation in the Unit Disk Based on Both Radon Projections and Function Values . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Irina Georgieva and Rumen Uluchev
747
Comparison between the Marching-Cube and the Marching-Simplex Methods . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Joakim Gundersen, Arnt R. Kristoffersen, and Lubomir T. Dechevsky
756
Transitions from Static to Dynamic State in Three Stacked Josephson Junctions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Ivan Christov, Stefka Dimova, and Todor Boyadjiev
765
Factorization-Based Graph Repartitionings . . . . . . . . . . . . . . . . . . . . . . . . . . Kateřina Jurková and Miroslav Tůma
771
An Efficient Algorithm for Bilinear Strict Equivalent (BSE)-Matrix Pencils . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Grigoris I. Kalogeropoulos, Athanasios D. Karageorgos, and Athanasios A. Pantelous
779
Two-Grid Decoupling Method for Elliptic Problems on Disjoint Domains . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Miglena N. Koleva and Lubin G. Vulkov
787
A Method for Sparse-Matrix Computation of B-Spline Curves and Surfaces . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Arne Lakså
796
Parallel MIC(0) Preconditioning for Numerical Upscaling of Anisotropic Linear Elastic Materials . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Svetozar Margenov and Yavor Vutov
805
The Bpmpd Interior Point Solver for Convex Quadratically Constrained Quadratic Programming Problems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Csaba Mészáros
813
On Shape Optimization of Acoustically Driven Microfluidic Biochips . . . Svetozara I. Petrova
821
Block Preconditioners for the Incompressible Stokes Problem . . . . . . . . . . M. ur Rehman, C. Vuik, and G. Segal
829
Author Index . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
837
An Efficiency-Based Adaptive Refinement Scheme Applied to Incompressible, Resistive Magnetohydrodynamics
J. Adler, T. Manteuffel, S. McCormick, J. Nolting, J. Ruge, and L. Tang
University of Colorado at Boulder, Department of Applied Mathematics, Boulder, CO
Abstract. This paper describes the use of an efficiency-based adaptive mesh refinement scheme, known as ACE, on a 2D reduced model of the incompressible, resistive magnetohydrodynamic (MHD) equations. A first-order system least squares (FOSLS) finite element formulation and algebraic multigrid (AMG) are used in the context of nested iteration. The FOSLS a posteriori error estimates allow the nested iteration and ACE algorithms to yield the best accuracy-per-computational-cost. The ACE scheme chooses which elements to add when interpolating to finer grids so that the best error reduction with the least amount of cost is obtained when solving on the refined grid. We show that these methods, applied to the simulation of a tokamak fusion reactor instability, yield approximations to solutions within discretization accuracy using less than the equivalent amount of work needed to perform 10 residual calculations on the finest uniform grid.
Keywords: Magnetohydrodynamics, adaptive mesh refinement, algebraic multigrid, nested iteration.
1  Introduction
Magnetohydrodynamics (MHD) is a model of plasma physics that treats the plasma as a charged fluid. As a result, the set of partial differential equations that describe this model are a time-dependent, nonlinear system of equations. Thus, the equations can be difficult to solve and efficient numerical algorithms are needed. This paper shows the use of such an efficient algorithm on the incompressible, resistive MHD equations. A first-order systems least-squares (FOSLS) [1,2] finite element discretization is used along with nested iteration and algebraic multigrid (AMG) [3,4,5,6,7,8]. The main focus of this paper is to show that if an efficiency-based adaptive mesh refinement (AMR) scheme is used, within the nested iteration algorithm, then a nonlinear system of equations, such as the MHD equations, can be solved in only a handful of work units per time step. Here, a work unit is defined as the equivalent of one relaxation sweep on the finest grid. In other words, the accuracy-per-computational-cost for solving the MHD equations can be maximized by the use of nested iteration and AMR. As I. Lirkov, S. Margenov, and J. Wa´ sniewski (Eds.): LSSC 2009, LNCS 5910, pp. 1–13, 2010. c Springer-Verlag Berlin Heidelberg 2010
is shown in the results section, we were able to resolve an island coalescence instability in less than 10 work units per time step. The MHD system and the FOSLS methodology applied to it are discussed in detail in the companion paper [9], so we include only a brief description here in section 2. The nested iteration algorithm has also been described in [10] and [11], so we only briefly discuss it here in section 3. This section also discusses the efficiency-based AMR method known as ACE, which was developed in [12,13,14]. Finally, in section 4, numerical results are shown for a 2D reduced model that simulates plasma instabilities in a tokamak reactor. These results confirm that the AMR algorithm greatly reduces the amount of work needed to solve the MHD systems.
2  The MHD Equations and FOSLS Formulation
The resistive MHD equations are time-dependent and nonlinear, and involve several dependent variables. The system is a coupling of the incompressible Navier-Stokes and Maxwell's systems. The primitive variables are defined to be the fluid velocity, u, the fluid pressure, p, the magnetic field, B, the current density, j, and the electric field, E. In addition, a resistive form of Ohm's law,

    j = σ(E + u × B),                                          (1)

is used to eliminate the electric field, E, from the equations. After a nondimensionalization using Alfvén units, the following equations for incompressible resistive MHD are obtained (i.e., Navier-Stokes coupled with Maxwell's equations) [15,16]:

    ∂u/∂t + u · ∇u − j × B + ∇p − (1/Re) ∇²u = f,              (2)
    ∂B/∂t − B · ∇u + u · ∇B + (1/SL) (∇ × j) = g,              (3)
    ∇ × B = j,                                                 (4)
    ∇ · B = 0,                                                 (5)
    ∇ · u = 0,                                                 (6)
    ∇ · j = 0.                                                 (7)
Here, Re is the fluid Reynolds Number and SL is the Lundquist Number, both of which are assumed to be constants and adjusted for different types of physical behavior. Using the FOSLS method [1,2], the system is first put into a differential first-order system of equations. This is done based on a vorticity-velocity-pressure-current formulation [17,18,19]. A scaling analysis is performed in [9] for the full MHD system. The resulting scaling yields a nice block structure of the MHD system, which results in good AMG convergence of the linear systems obtained, while still preserving the physics of the system.
Vorticity, ω = ∇ × u, is introduced and the final formulation in 3D used is

    (1/√Re) ∇ × u − √Re ω = 0,                                             (8)
    (1/√Re) ∇ · u = 0,                                                     (9)
    √Re ∇ · ω = 0,                                                        (10)
    (1/√Re) ∂u/∂t − u × ω − j × B − √Re ∇p + (1/√Re) ∇ × ω = f,           (11)
    (1/√SL) ∇ × B − √SL j = 0,                                            (12)
    (1/√SL) ∇ · B = 0,                                                    (13)
    √SL ∇ · j = 0,                                                        (14)
    (1/√SL) ∂B/∂t + (1/√(Re SL)) (u · ∇B − B · ∇u) + (1/√SL) ∇ × j = g.   (15)

The above system is denoted by L(u) = f, where u = (u, ω, p, B, j)ᵀ represents a vector of all of the dependent variables that should not be confused with the vector fluid velocity, u. Then, the L² norm of the residual of this system is minimized. This is referred to as the nonlinear functional,

    F(u) = ||L(u) − f||₀.                                                 (16)
In general, we wish to find the argmin of (16) in some solution space V. In the context of this paper, we choose V to be an H¹ product space with boundary conditions that are chosen to satisfy the physical constraints of the problem as well as the assumptions needed for the FOSLS framework. In practice, a series of nested finite subspaces, V^h, are used to approximate the solution in V. However, in the Newton-FOSLS approach [20,21], system (8)-(15) is first linearized using a Newton step before a FOSLS functional is formed and minimized. This results in the weak form of the problem that produces symmetric positive definite algebraic systems when the problem is restricted to a finite-dimensional subspace, V^h. In addition, proving continuity and coercivity of the resulting bilinear form is equivalent to having H¹ equivalence of the FOSLS functional. Moreover, the FOSLS functional yields a sharp a posteriori local error estimate, which is used to make the algorithm more robust and, under the right conditions, produces algebraic systems that are solved easily by multilevel iterative solvers. Our choice here is algebraic multigrid (AMG) [3,4,5,6,7,8], which, when applied to the FOSLS discretization, has been shown to be an optimal (O(n)) solver [1,2,6]. Using the formulation above, and with appropriate boundary conditions, H¹ equivalence of the linearized FOSLS functional is shown in [11]. Therefore, the FOSLS functionals can be a good measure of the error, or at least the semi-norm of the error, in the solution space. Thus, they can be used to develop an efficient solution algorithm and as aids in the adaptive refinement process. By measuring
the functional norm of the error in each element of the domain, information on where refinement is needed is readily available.
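To make the Newton-FOSLS construction concrete, the display below sketches a single linearization step in the notation of (16). The symbols for the current iterate and the update (u^k, δu^h) are ours and are introduced only to illustrate the procedure described above:

```latex
% One Newton-FOSLS step (sketch): linearize L about the current iterate u^k
% and minimize the resulting quadratic least-squares functional over V^h.
\[
  \delta u^{h} \;=\; \operatorname*{arg\,min}_{v^{h}\in V^{h}}
    \bigl\| L(u^{k}) + L'[u^{k}]\,v^{h} - f \bigr\|_{0}^{2},
  \qquad
  u^{k+1} = u^{k} + \delta u^{h},
\]
% where L'[u^k] denotes the Frechet derivative (Jacobian) of L at u^k.
% The normal equations of this minimization are the symmetric positive
% definite algebraic systems that are handed to AMG.
```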
3  Solution Algorithm
In [10], an algorithm is devised to solve a system of nonlinear equations, L(u) = f. Starting on a coarse grid, given an initial guess, the system is linearized and the linearized FOSLS functional is then minimized on a finite element space. At this point, several AMG V-cycles are performed until there is little to gain in accuracy-per-computational-cost. The system is then relinearized and the minimum of the new linearized FOSLS functional is searched for in the same manner. After each set of linear solves, the relative difference between the computed linearized functional and the nonlinear functional is checked. If they are close and near the minimum of the linearized functional, then it is concluded that we are close enough to the minimum of the nonlinear functional and, hence, we have a good approximation to the solution on the given grid. Next, the approximation is interpolated to a finer grid and the problem is solved on that grid. This process is repeated until an acceptable error has been reached, or until we have run out of computational resources, such as memory. If, as in the case of the MHD equations, it is a time-dependent problem, the whole process is performed at each time step. This algorithm is summarized in the flow chart, figure 1, and in the pseudocode sketch below.
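The following sketch illustrates the control flow just described. It is schematic only: the helper routines are passed in as arguments, and their names and the stopping tolerance are placeholders of ours, not part of the paper or of any particular library.

```python
def nested_iteration_solve(linearize, solve_amg, nonlinear_functional,
                           refine, interpolate, u, grid, num_levels, tol=1e-2):
    """Schematic nested iteration Newton-FOSLS-AMG driver (cf. figure 1).

    linearize(u, grid)            -> (linear system, linearized functional callable)
    solve_amg(system, u)          -> u after AMG V-cycles, stopping when the gain
                                     in accuracy-per-cost becomes marginal
    nonlinear_functional(u, grid) -> value of the nonlinear functional F(u)
    refine(grid, u)               -> next grid (uniform or ACE refinement)
    interpolate(u, grid)          -> u transferred to the new grid
    """
    for level in range(num_levels):
        while True:                                  # Newton loop on this grid
            system, lin_functional = linearize(u, grid)
            u = solve_amg(system, u)
            F_lin = lin_functional(u)
            F_nl = nonlinear_functional(u, grid)
            if abs(F_nl - F_lin) <= tol * F_nl:      # near the nonlinear minimum
                break
        if level < num_levels - 1:
            grid = refine(grid, u)                   # move to the finer grid
            u = interpolate(u, grid)
    return u
```

For a time-dependent problem such as the MHD equations, this entire loop is executed once per time step.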
3.1  Adaptive Refinement
In the nested iteration algorithm, we decide when to stay on a current mesh and iterate further or interpolate to a finer grid. In [10], it was assumed that the grid is uniformly refined. In other words, it was assumed that there are 2d more points on the next grid than the one before, where d is the dimension of the problem.
Fig. 1. Flow chart of nested iteration algorithm
This is generally not the case when the grids are refined locally. On a given mesh, after enough Newton steps and linear iterations have been performed, the nonlinear functional is calculated in each element. This indicates in which region of the domain the functional and, hence, the error is large compared to the rest of the domain. Then, the best use of new degrees of freedom is to concentrate them where the error is large. Since the goal of the algorithm is to increase the accuracy-per-computational-cost, we do not want to over solve in areas where the error is already small.
The adaptive scheme that we describe here is an efficiency-based refinement method, called ACE, that was developed in [12,13,14]. This scheme estimates both the reduction in the functional and the computational cost that would result from any given refinement pattern. These estimates are used to establish a refinement pattern that attempts to optimize the Accuracy-per-Computational cost (Efficiency), which gives rise to the acronym ACE. The square of the functional value on each element, εi, is computed, and the elements are ordered such that these local values are decreasing:

    ε1 ≥ ε2 ≥ . . . ≥ εNl,                                     (17)

where Nl is the total number of elements on level l. Next, we predict the reduction of the squared functional and the estimated computational work that would result if we were to refine a given percentage of the elements with the most error. Denote the percentage by r ∈ (0, 1] and the number of elements to be refined by r ∗ Nl. Define f(r) to be the fraction of the squared functional contained in the r ∗ Nl elements with largest error, that is,

    f(r) = Σ_{i ≤ r∗Nl} εi / Σ_{i=1}^{Nl} εi.                  (18)

The predicted reduction of the squared functional obtained by refining these elements is then

    γ(r) = 1 − f(r) + (1/2)^{2p} f(r).                         (19)
Here, p is the order of the finite element basis. In other words, the predicted error reduction only comes from the regions where the elements have been refined. The predicted error in the remaining elements is left unchanged. In general, this underestimates the reduction. In practice, the unrefined elements can have some error reduction as well. On the other hand, the predicted work is assumed to be a function of the total number of elements on the refined grid. Since each refined element yields 2^d children, we predict the number of elements on the refined grid to be

    Nl+1 = Nl (1 − r + 2^d r).                                 (20)
Fig. 2. Error distribution among elements. Left plot shows uniformly distributed error. Right plot shows distribution where a few elements contain most of the error.
The work required to solve the linear system on the refined grid is assumed to be a function of the number of elements, Nl+1, the error reduction factor, γ(r), and the AMG convergence factor, ρ. Assume that the cost of one AMG V-cycle is C1 work units and the overall setup cost for FOSLS and AMG is C0 work units. The number of V-cycles needed to reduce the error by γ(r) using AMG is

    n = log γ / log ρ.                                         (21)

Thus, the overall predicted work of solving on a refined grid is

    W(r) = (C0 + C1 log γ / log ρ) Nl+1.                       (22)

Replacing Nl+1 using (20) yields

    W(r) = (C0 + C1 log γ / log ρ)(1 − r + 2^d r) Nl.          (23)

With these relations in mind, we choose the r that minimizes the predicted effective functional reduction,

    γ(r)_eff = γ(r)^{1/W(r)}.                                  (24)
Therefore, we refine only r ∗ Nl elements on each level, which gives us the best error reduction for the added cost. In addition, one could allow for multiple refinements of each element. For example, the ACE scheme could call for r1 Nl elements to be refined once and r2 Nl elements to be refined twice. This changes the predicted error reduction, γ(r1, r2), to

    γ(r1, r2) = 1 − f(r1) + (1/2)^{2p} (f(r1) − f(r2)) + (1/2)^{4p} f(r2).   (25)

Then, the new number of elements on the refined grid is

    Nl+1 = Nl (1 − r1 + 2^d (r1 − r2) + 2^{2d} r2),                          (26)
and the predicted work estimate, W(r1, r2), becomes

    W(r1, r2) = (C0 + C1 log γ / log ρ)(1 − r1 + 2^d (r1 − r2) + 2^{2d} r2) Nl.   (27)
Now, the optimal pair, 0 ≤ r1 ≤ r2 ≤ 1 is found to minimize the effective error reduction, equation (24). This allows for more aggressive refinement. In practice, it has been found that using even more refinement, such as triple refinement, is unnecessary. The ACE scheme fits in nicely with the nested iteration approach. Not only do we try to get the best accuracy-per-computational-cost for each linearization and each AMG cycle, but we also take this into account when interpolating to finer grids. A more detailed explanation of the ACE algorithm can be found in [13,14]. Numerical results in section 4 show that using adaptive refinement yields the same accuracy in the MHD test problems while using fewer degrees of freedom.
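As an illustration, the single-refinement ACE selection of r described by (17)-(24) can be carried out as in the following sketch. This is our own schematic rendering of the procedure, not the authors' implementation; all function and variable names are placeholders.

```python
import numpy as np

def ace_choose_r(eps, p, d, C0, C1, rho):
    """Pick the fraction r of elements to refine by minimizing gamma(r)^(1/W(r)).

    eps : squared FOSLS functional value per element (the quantities in (17))
    p   : order of the finite element basis,  d : spatial dimension
    C0  : setup cost (work units),  C1 : cost of one AMG V-cycle (work units)
    rho : AMG convergence factor
    """
    eps = np.sort(np.asarray(eps, dtype=float))[::-1]       # eq. (17): decreasing order
    N = eps.size
    f = np.cumsum(eps) / eps.sum()                           # eq. (18): f(k/N), k = 1..N
    r = np.arange(1, N + 1) / N
    gamma = 1.0 - f + 0.5 ** (2 * p) * f                     # eq. (19)
    n_cycles = np.log(gamma) / np.log(rho)                   # eq. (21)
    W = (C0 + C1 * n_cycles) * (1.0 - r + 2 ** d * r) * N    # eq. (23)
    gamma_eff = gamma ** (1.0 / W)                           # eq. (24)
    return r[np.argmin(gamma_eff)]                           # fraction of elements to refine
```

The two-parameter variant based on (25)-(27) follows the same pattern, with the minimization taken over admissible pairs (r1, r2).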
4  Numerical Results
In this section, we investigate an MHD test problem to show that the nested iteration Newton-FOSLS-AMG method works well with the addition of the ACE adaptive mesh refinement. We show that these methods are capable of solving the complex nonlinear systems in a minimal amount of work units, or fine-grid relaxation equivalents. The full algorithm, as in figure 1, was applied to a tokamak test problem [22,23,24,25]. From the papers by Chacón, Knoll, and Finn [22] and Philip [24], a reduced set of MHD equations is obtained. These equations simulate a “large aspect-ratio” tokamak, with non-circular cross-sections. Here, the magnetic B-field along the z-direction, or the toroidal direction, is very large and mostly constant. In this context, we are able to look at plasma behavior in the poloidal cross-section. This was described in the companion paper, [9]. The reduced model is equivalent to the 2D version of equations (8)-(15). The x-direction denotes the periodic poloidal direction in the tokamak, whereas the y dimension represents a thin annulus in the poloidal cross section. In this 2D setting, vorticity, ω, and current density, j, are both scalar variables. We now apply our methodology to a test problem known as the island coalescence problem.

4.1  Test Problem: Island Coalescence
This test problem simulates an island coalescence in the current density arising from perturbations in an initial current density sheet. A current density sheet in the toroidal direction of the tokamak is perturbed, resulting in an instability that causes a reconnection in the magnetic field lines and merging of two islands in the current density field. This produces a sharp peak in current density where the magnetic field lines reconnect. This region is known as the reconnection zone and the point at which the magnetic field lines break is known as the X point. See [26,27,24] for more details. We choose a low enough resistivity
(i.e., Lundquist number above 50,000) in order to observe the interesting physics. For the following simulations, we define Ω = [−1, 1] × [−1, 1], Re = SL = 50,001. The initial conditions at equilibrium are

    B0(x, y) = 1/(cosh(2πy) + k cos(2πx)) · ( sinh(2πy), k sin(2πx) )ᵀ,           (28)
    u0(x, y) = 0,                                                                  (29)
    ω0(x, y) = 0,                                                                  (30)
    j30(x, y) = ∇ × B0 = 2π(k² − 1)/(cosh(2πy) + 0.2 cos(2πx))²,                   (31)
    p0(x, y) = (1/2) ( 1 + (1 − k²)/(cosh(2πy) + 0.2 cos(2πx))² ),                 (32)
where k = 0.2. These initial conditions are perturbed away from equilibrium as follows:

    δB0(x, y) = ε ( −(1/π) cos(πx) sin(πy/2), (1/(2π)) cos(πy/2) sin(πx), 0 )ᵀ,    (33)
    δj30(x, y) = ε cos(πy/2) cos(πx),                                              (34)

where ε = −0.01. The boundary conditions are periodic in x and Dirichlet for the current density and vorticity on the top and bottom of the domain. We also have n · u and n · B known on the top and bottom. Again, the FOSLS formulation, (8)-(15), is H¹ elliptic.
Results. The problem was run to time 10τA with a timestep of 0.1τA using a BDF-2 implicit time-stepping scheme. By the 80th time step, or time 8τA, the islands have coalesced and the large peak in current density has occurred at the reconnection point. Using both uniform refinement and the ACE scheme, we were able to capture the instability. With ACE employed, the grids evolve over time to refine in areas with steeper gradients. In this problem, as time progresses, a steep gradient occurs at the reconnection point. This is seen in the bottom graph in figure 3. We expect, then, that most of the refinement occurs in this region, which is indeed the case.
To get an idea of how much more efficient adaptive refinement is, we compare the work performed using ACE to that of using uniform refinement. The work at one time step is calculated by first determining the work of all the V-cycles on a given refinement level. Then these values times the number of matrix nonzeros
for the level are summed over all grids and divided by the number of nonzeros on the finest refinement level for the given problem. In table 1, the work unit values given are with respect to the finest level of the given refinement scheme. They are an average over all time steps. To compare the two schemes, the average work unit value is multiplied by the fine-grid nonzeros for that scheme and then the ratio is taken. This ratio is defined as the Work Ratio in table 1. Similarly, the Element Ratio column is the ratio of elements on the finest grid of the adaptive scheme compared to the number of elements on the finest grid of the uniform scheme.
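In code, this work-unit bookkeeping amounts to only a few lines; the helper and its argument names below are ours and are shown purely for illustration:

```python
def work_units(vcycles_per_level, nonzeros_per_level):
    """Work in fine-grid work units for one time step: per-level V-cycle counts
    weighted by that level's matrix nonzeros, summed over all grids and divided
    by the nonzeros of the finest refinement level (assumed last in the lists)."""
    total = sum(v * nnz for v, nnz in zip(vcycles_per_level, nonzeros_per_level))
    return total / nonzeros_per_level[-1]
```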
Fig. 3. Numerical solution using 10 levels of adaptive refinement. SL = Re = 50,001. Top Left: Current Density at Time 4τA. Top Right: Current Density at Time 8τA. Bottom: Zoomed in plot of current density peak at Time 8τA.
Table 1. Average number of work units per timestep using uniform refinement versus ACE refinement. A total of 100 time steps were performed.

                 Uniform                          ACE
    Work Units     Nonzeros          Work Units     Nonzeros        Work Ratio   Element Ratio
      66.87       51,781,975           78.23       7,283,047.4        0.16           0.07
Table 2. Results for a single time step. AMG convergence factor, ρ, total work units (total work units with respect to uniform refinement in parenthesis), and the nonlinear functional norm are given.

    Level   Elements    Nonzeros    Newton Steps     ρ      Work Units        Nonlinear Functional
      2          4        17,493         2         0.35     0.054 (0.006)           4.919
      3         16        63,063         2         0.55     0.355 (0.038)           1.287
      4         40       155,575         1         0.40     0.427 (0.046)           0.604
      5         79       299,145         1         0.46     0.682 (0.073)           0.284
      6        175       640,871         1         0.58     3.328 (0.356)           0.112
      7        337     1,161,741         1         0.50     3.929 (0.420)           0.049
      8        658     2,263,261         1         0.79    19.548 (2.090)           0.023
      9      1,078     3,624,089         1         0.67    20.789 (2.222)           0.013
     10      1,696     5,535,089         1         0.71    29.495 (3.153)           0.008
    Total                                                   78.697 (8.398)
The results show that using adaptive refinement greatly reduces the amount of work needed, compared to that of using uniform refinement as is done in [10]. ACE requires 10% of the work that uniform refinement requires. The physics is more localized in this problem, especially by time 8τA and, thus, the refinement is more localized. Looking at one specific time step, as in table 2, one can see how the nested iteration algorithm along with ACE solves the problem efficiently. By the finest levels, only one Newton step and a handful of work units are needed. The values in parenthesis show the relative work with respect to a grid that was uniformly refined for the given number of levels. Thus, using nested iteration Newton-FOSLS-AMG with ACE yields a good approximation to the solution of the island coalescence problem in less than 10 work units, or the equivalent of 10 relaxation sweeps on a 128 × 128 bi-quadratic uniform grid.
5  Discussion
We showed that using an efficiency-based adaptive refinement scheme, such as ACE, along with the FOSLS finite element method and nested iteration, yields a highly effective method for solving the complicated, current-vorticity form of resistive MHD. Real world MHD applications are solved very efficiently when the focus is on accuracy-per-computational-cost. The use of FOSLS greatly aids this process. Its sharp, a posteriori error estimate allows parameters to be computed that are used to estimate the current accuracy-per-computational-cost. From this, judgements as to what further computation is necessary are made. Such decision making facilitates an efficient local adaptive refinement process, which in turn reduces the amount of cost needed. Several aspects still need to be studied. First, the linear system solvers are what dictate the overall efficiency of the NI-Newton-FOSLS method. For this paper, the algebraic systems are solved with a classical algebraic multigrid method. Deteriorations in the algebraic convergence for increased timestep size as well as
Reynolds and Lundquist numbers are observed. In fact, in table 2, the convergence factor ρ increases slightly as h gets smaller. The current AMG algorithm can be improved in several ways. One might develop an improved AMG for the above type of systems of PDEs. This might involve the use of newly developed adaptive multigrid algorithms described more in [28,29,30]. In the test problems addressed in this paper, tensor product grids are used, with local refinement. Therefore, one might employ a geometric multigrid solver instead that takes into account the hierarchy of grids used, or one that is based on block structured grids. Specifically, using the same multigrid method to solve on a uniform mesh as for an adaptively refined mesh is not ideal. One would prefer a solver that takes into account the unstructured nature of the grid. In addition, many aspects of the adaptive refinement algorithm can be improved. In the above tests, the ACE algorithm requires more levels than uniform refinement to reach the same accuracy because it does not refine as aggressively as uniform refinement. Currently, we are examining modifications to ACE that dictate the number of unknowns on the refined grid. If this number is fixed, then the ACE algorithm chooses where to put these new nodes. This is accomplished by allowing ACE to refine an element more than once at each step. This approach allows for fewer refinement steps and, hence, fewer computations overall. Finally, there are many other MHD problems to be tested, as well as other time-dependent problems in fluid dynamics that have large nonlinearities. Doing most of the hard work on the coarser grids allows us to solve these problems more efficiently. Using a first-order system least squares formulation, we were able to resolve the above MHD physics, and we believe that, with a careful formulation, it can be used for many other time-dependent nonlinear systems. This, with the addition of a parallel implementation, could allow us to tackle even more complicated problems such as Extended or Hall MHD, as well as other complex fluid problems. Acknowledgments. This work was sponsored by the Department of Energy under grant numbers DE-FG02-03ER25574 and DE-FC02-06ER25784, Lawrence Livermore National Laboratory under contract numbers B568677, and the National Science Foundation under grant numbers DMS-0621199, DMS-0749317, and DMS-0811275.
References
1. Cai, Z., Lazarov, R., Manteuffel, T., McCormick, S.: First-Order System Least Squares for Second-Order Partial Differential Equations. SIAM J. Numer. Anal. 31, 1785–1799 (1994)
2. Cai, Z., Manteuffel, T., McCormick, S.: First-Order System Least Squares for Second-Order Partial Differential Equations. II. SIAM J. Numer. Anal. 34, 425–454 (1997)
3. Brandt, A., McCormick, S.F., Ruge, J.: Algebraic Multigrid (AMG) for automatic multigrid solutions with application to geodetic computations. Report, Inst. for Computational Studies, Fort Collins, CO (1982)
4. Brandt, A., McCormick, S.F., Ruge, J.: Algebraic Multigrid (AMG) for sparse matrix equations. Cambridge University Press, Cambridge (1984)
5. Brandt, A.: Algebraic Multigrid Theory: The Symmetric Case. Appl. Math. Comput. 19(1-4), 23–56 (1986)
6. Briggs, W.L., Henson, V.E., McCormick, S.F.: A Multigrid Tutorial. Society for Industrial and Applied Mathematics (SIAM), Philadelphia (2000)
7. Oosterlee, C., Schuller, A., Trottenberg, U.: Multigrid. Academic Press, London (2000)
8. Ruge, J., Stüben, K.: Algebraic Multigrid (AMG). In: McCormick, S.F. (ed.) Multigrid Methods (1986)
9. Adler, J., Manteuffel, T., McCormick, S.F., Ruge, J.: First-order system least squares for incompressible resistive Magnetohydrodynamics. SIAM J. Sci. Comp. (to appear, 2009)
10. Adler, J., Manteuffel, T., McCormick, S., Ruge, J., Sanders, G.: Nested Iteration and First-Order System Least Squares for Incompressible, Resistive Magnetohydrodynamics. SIAM J. on Sci. Comp. (SISC) (submitted) (2009)
11. Adler, J.: Nested Iteration and First Order Systems Least Squares on Incompressible Resistive Magnetohydrodynamics. PhD thesis, University of Colorado at Boulder (2009)
12. Berndt, M., Manteuffel, T., McCormick, S.F.: Local error estimates and adaptive refinement for first-order system least squares (FOSLS). E.T.N.A. 6, 35–43 (1998)
13. DeSterck, H., Manteuffel, T., McCormick, S., Nolting, J., Ruge, J., Tang, L.: Efficiency-based h- and hp-refinement strategies for finite element methods. J. Num. Lin. Alg. Appl. 15, 249–270 (2008)
14. Nolting, J.: Efficiency-based Local Adaptive Refinement for FOSLS Finite Elements. PhD thesis, University of Colorado at Boulder (2008)
15. Nicholson, D.R.: Introduction to Plasma Theory. John Wiley and Sons, New York (1983)
16. Ullrich, P.: Dynamics and Properties of the Magnetohydrodynamics Equations (unpublished) (2005)
17. Bochev, P., Cai, Z., Manteuffel, T., McCormick, S.: Analysis of Velocity-Flux First-Order System Least-Squares Principles for the Navier-Stokes Equations: Part I. SIAM J. Numer. Anal. 35, 990–1009 (1998)
18. Bochev, P., Cai, Z., Manteuffel, T., McCormick, S.: Analysis of Velocity-Flux First-Order System Least-Squares Principles for the Navier-Stokes Equations: Part II. SIAM J. Numer. Anal. 36, 1125–1144 (1999)
19. Heys, J., Lee, E., Manteuffel, T., McCormick, S.: An Alternative Least-Squares Formulation of the Navier-Stokes Equations with Improved Mass Conservation. J. Comp. Phys. 226(1), 994–1006 (2007)
20. Codd, A.: Elasticity-Fluid Coupled Systems and Elliptic Grid Generation (EGG) Based on First-Order System Least Squares (FOSLS). PhD thesis, University of Colorado at Boulder (2001)
21. Codd, A., Manteuffel, T., McCormick, S.: Multilevel First-Order System Least Squares for Nonlinear Elliptic Partial Differential Equations. SIAM J. Numer. Anal. 41, 2197–2209 (2003)
22. Chacon, L., Knoll, D.A., Finn, J.M.: An Implicit, Nonlinear Reduced Resistive MHD Solver. J. of Computational Physics 178, 15–36 (2002)
23. Chacon, L., Knoll, D.A., Finn, J.M.: Nonlinear Study of the Curvature-Driven Parallel Velocity Shear-Tearing Instability. Physics of Plasmas 9, 1164–1176 (2002)
24. Philip, B., Chacon, L., Pernice, M.: Implicit Adaptive Mesh Refinement for 2D Reduced Resistive Magnetohydrodynamics. J. Comp. Phys. 227(20), 8855–8874 (2008)
25. Strauss, H.: Nonlinear, Three-Dimensional Magnetohydrodynamics of Noncircular Tokamaks. Physics of Fluids 19, 134–140 (1976)
26. Bateman, G.: MHD Instabilities. MIT Press, Cambridge (1978)
27. Knoll, D.A., Chacon, L.: Coalescence of Magnetic Islands, Sloshing, and the Pressure Problem. Physics of Plasmas 13(1) (2006)
28. Brezina, M., Falgout, R., MacLachlan, S., Manteuffel, T., McCormick, S., Ruge, J.: Adaptive smoothed aggregation (αsa) multigrid. SIAM Review (SIGEST) 47, 317–346 (2005)
29. Brezina, M., Falgout, R., Maclachlan, S., Manteuffel, T., McCormick, S., Ruge, J.: Adaptive algebraic multigrid. SIAM J. on Sci. Comp. (SISC) 27, 1261–1286 (2006)
30. MacLachlan, S.: Improving Robustness in Multiscale Methods. PhD thesis, University of Colorado at Boulder (2004)
31. Ruge, J.: Fospack users manual, version 1.0. (unpublished) (2000)
Discontinuous Galerkin Subgrid Finite Element Method for Heterogeneous Brinkman's Equations
Oleg P. Iliev¹,², Raytcho D. Lazarov²,³, and Joerg Willems¹,³
¹ Fraunhofer ITWM, Kaiserslautern, Germany
² Inst. Mathematics, Bulgarian Academy of Sciences, Sofia, Bulgaria
³ Dept. Mathematics, Texas A&M University, College Station, TX 77843, USA
Abstract. We present a two-scale finite element method for solving Brinkman's equations with piece-wise constant coefficients. This system of equations models fluid flows in highly porous, heterogeneous media with complex topology of the heterogeneities. We make use of the recently proposed discontinuous Galerkin FEM for Stokes equations by Wang and Ye in [12] and the concept of subgrid approximation developed for Darcy's equations by Arbogast in [4]. In order to reduce the error along the coarse-grid interfaces we have added an alternating Schwarz iteration using patches around the coarse-grid boundaries. We have implemented the subgrid method using the Deal.II FEM library [7], and we present the computational results for a number of model problems.
Keywords: Numerical upscaling, flow in heterogeneous porous media, Brinkman's equations, subgrid approximation, mixed FEM.
1
Introduction
In this paper we consider the Brinkman’s equations for the velocity u and the pressure p: −μΔu + ∇p + μκ−1 u = f in Ω, ∇ · u = 0 in Ω, (1) u = 0 on ∂Ω. Here μ > 0 is the viscosity, Ω is a bounded simply connected domain in Rn , n = 2, 3, with Lipschitz polyhedral boundary having the outward unit normal vector n, 0 < κ ∈ L∞ (Ω) is the permeability and f ∈ L2 (Ω)n is a forcing term. Then problem (1) has unique weak solution (u, p) ∈ (H 1 (Ω)n , L20 (Ω)). Brinkman’s equations adequately describe flows in highly porous media. They are used for modeling many industrial materials and processes such as industrial filters, with porosity above 0.9, thermal insulators made of glass or mineral wool with porosity 0.99, or open foams with porosity above 0.95, see Fig. 1. Equation (1) was introduced by Brinkman in [6] in order to reduce the deviations between the measurements for flows in highly porous media and the Darcybased predictions. This was done without direct link to underlying microscopic I. Lirkov, S. Margenov, and J. Wa´ sniewski (Eds.): LSSC 2009, LNCS 5910, pp. 14–25, 2010. c Springer-Verlag Berlin Heidelberg 2010
Discontinuous Galerkin Subgrid Finite Element Method
15
Fig. 1. Microstructure of industrial foams
Fig. 2. Microstructure and marcostructure of mineral wool
behavior of the flow process, but as a constitutive relation involving a dissipative term scaled by the viscosity. However, advances in homogenization theory made it possible to rigorously derive Brinkman’s equations from Stokes’ equations in the case of slow viscous fluid flow in the presence of periodically arranged solid obstacles, see e.g., [1,2,9,11]. Also, system (1) has been considered from the point of view of fictitious domain or penalty formulation of flows of incompressible liquids around solid or porous obstacles; in this case the coefficients are piecewise constant: the viscosity μ is constant and κ is “small” in the solid obstacles and “infinity” in the fluid, (see, e.g. [3]). In this paper we derive and study numerical methods for solving Brinkman’s equations (1) assuming that the coefficient κ is piece-wise constant and has large jumps. Moreover, the structure of the subdomains, where the coefficient is constant, has quite a complicated geometry, see, e.g. Figure 1. For such problems we shall construct and numerically test two-scale finite element approximations. In the case of problems with scale separation the method captures well the coarse scale behavior of the solution and enhances it with fine scale features. Also, we extend such approximations to numerically treat problems without scale separation. Enhancing the method by subdomains around the coarse-grid edges we devise an iterative alternating Schwarz methods, which converges to the fine-grid approximate solution. All constructions are implemented within
16
O.P. Iliev, R.D. Lazarov, and J. Willems
the Deal.II finite element library. As a byproduct of this development we also obtaine a subgrid finite element approximation of Darcy’s problem, a method proposed and justified by Arbogast [4], and the discontinuous Galerkin method for Stokes’ equations, proposed and studied by Wang and Ye [12]. In this note we present derivation of a subgrid approximation of Brinkman equations and test the method on a number of model problems. Our further goals include theoretical and practical study of the error of the method, testing its performance on more realistic three-dimensional problems, and development and justification of efficient solution methods.
2
Preliminaries
Here we use the standard notation for spaces of scalar and vector-functions defined on Ω. L2 (Ω) is the space of measurable square integrable functions in Ω and L20 (Ω) is its subspace of functions with mean value zero. The Sobolev spaces H 1 (Ω) and H 1 (Ω)n consist of scalar and vector-functions, respectively, with weak first derivatives in L2 (Ω) and L2 (Ω)n . Similarly, H01 (Ω)n := {v ∈ H 1 (Ω)n : v = 0 on ∂Ω}, H(div; Ω) := {v ∈ L2 (Ω)n : ∇ · v ∈ L2 (Ω)}, H0 (div; Ω) := {v ∈ H(div; Ω) : v · n = 0 on ∂Ω}. Further, Pk denotes the set of polynomials of degree k ≥ 0 and Pkn the set of vector functions in Rn with components in Pk . The two-scale finite element method uses various (mixed) finite element spaces, which are defined below. Let TH and Th be quasi-uniform quadrilateral triangulations of Ω with mesh-parameter H and h, respectively, such that each TH ∈ TH is an agglomeration of elements in Th . We will refer to TH and Th as coarse and fine triangulation, respectively. Let EH denote the set of all edges (n = 2) and faces (n = 3) of TH , respectively. Also, we define E˚H to be the set of all internal edges/faces of TH , i.e., E˚H := {eH ∈ EH | eH ∂Ω} and E˚h to be the set of all edges/faces of Th that are internal for the coarse-grid cells TH ∈ EH . Furthermore, for each TH ∈ TH we denote by Th (TH ) the restriction of Th to the coarse finite element TH , Th (TH ) is referred to as a fine triangulation of TH . We denote by (VH , WH ) and (Vh , Wh ) the mixed finite element spaces corresponding to TH (Ω) and Th (Ω), resprectively. Likewise, for each TH ∈ TH (Ω) and eH ∈ E˚H let (δVh (TH ), δWh (TH )) ⊂ (H0 (div; TH ), L20 (TH ))
(2)
be mixed finite element spaces corresponding to Th (TH ). We also consider the direct sums of these local finite element spaces and set (δVh , δWh ) := (δVh (TH ), δWh (TH )), TH ∈TH
Discontinuous Galerkin Subgrid Finite Element Method
17
where functions in (δVh (TH ), δWh (TH )) are extended by zero to Ω\TH . We, furthermore, assume that the finite element spaces satisfy the following properties: and ∇ · VH = WH ,
(3a)
in the L2 -inner-product,
(3b)
∇ · δVh = δWh δWh ⊥ WH and
VH ∩ δVh = {0}.
(3c)
We note that if we choose (VH , WH ) and (δVh (TH ), δWh (TH )), with TH ∈ TH , to be Brezzi-Douglas-Marini (BDM1) mixed finite element spaces of order one (cf. e.g. [5, Section III.3, p. 126]), then (3) is satisfied. The velocity for the BDM1 space in 2-D is given by P12 + span{curl(x21 x2 ), curl(x1 x22 } = P12 + span{(x21 , −2x1 x2 ), (2x1 x2 , −x22 )} on each cell, with the restriction that the normal component is continuous across cell boundaries. In 3-D the spaces are defined in a similar manner, see, e.g. [5, Section III.3, p. 127]. Due to (3b) and (3c) the following direct sum is well-defined (VH,h , WH,h ) := (VH , WH ) ⊕ (δVh , δWh ) .
(a) (Vh , Wh ).
(4)
(b) (VH,h , WH,h ) with 2 coarse cells.
Fig. 3. Degrees of freedom of different mixed finite element spaces corresponding to BDM1 elements for the velocity and piece-wise constant functions for the pressure
3
Subgrid Method for Brinkman’s Equations
Now we outline the numerical subgrid approach for problem (1) in the way T. Arbogast applied it to Darcy’s problem in [4]. It is well known that the mixed variational formulation of (1) reads as follows: Find (u, p) ∈ (H01 (Ω)n , L20 (Ω)) such that for all (v, q) ∈ (H01 (Ω)n , L20 (Ω)) f · vdx, (5) a (u, v) + b (v, p) + b (u, q) = Ω
18
O.P. Iliev, R.D. Lazarov, and J. Willems
where
∇ · vqdx,
(6a)
(μ∇u : ∇v + μκ−1 u · v)dx.
(6b)
b (v, q) := − a (u, v) :=
Ω
Ω
Now we consider a finite element approximation of (5) with respect to (VH,h , ⊂ H01 (Ω)n , and therefore it is a nonconforming finite WH,h ). Note, that VH,h element space. However, we have that VH,h ⊂ H0 (div; Ω) and we can use the discontinuous Galerkin approximation of Stokes equations, derived in [12]: Find (uH,h , pH,h ) ∈ (VH,h , WH,h ) such that for all (vH,h , qH,h ) ∈ (VH,h , WH,h ) we have ah (uH,h , vH,h ) + b (vH,h , pH,h ) + b (uH,h , qH,h ) = F (vH,h ), where ah (u, v) :=
Th ∈Th Th
+
e∈Eh
(7)
∇u : ∇v + κ−1 u · v dx
α u v − {{ε(u)}} v − {{ε(v)}} u ds e |e|
with the average {{ε(·)}} and the jump · defined by 1 + + − − on e ∈ E˚h , 2 n · ∇(u|Th+ · τ ) + n · ∇(u|Th− · τ ) {{ε(u)}} := + + n · ∇(u|T + · τ ) on e ∈ Eh∂
(8)
(9a)
h
and v :=
v|T + · τ + + v|T − · τ − on e ∈ E˚h , h h v|T + · τ + on e ∈ Eh∂ .
(9b)
h
Here α > 0 is a sufficiently large stabilization parameter, n and τ are normal and tangential vectors to the edge e (with right orientation), the superscripts + and − refer to the elements on either side of edge e, and E˚h and Eh∂ denote the sets of all internal and boundary edges, respectively. If H = h, i.e. δVh = ∅, δWh = ∅, we have a single grid approximation of Brinkman’s equations, a method proposed and studied by Wang and Ye in [12] for Stokes equations. The approximate solution obtained on a single grid for H sufficiently small will be further called a reference solution. The subsequent derivation, which follows the reasoning in [4], is the core of the numerical subgrid approach and essentially yields a splitting of (7) into one coarse global and several fine local problems.Due to (4) we know that each element in (VH,h , WH,h ) may be uniquely decomposed into its components from (VH , WH ) and (δVh , δWh ). Thus, (7) may be rewritten as ah (uH + δuh , v + δvh ) + b (v + δvh , pH + δph ) = F (v + δvh ), b (uH + δuh , q + δqh ) = 0, ∀(v, q) ∈ (VH , WH ) and ∀(δvh , δqh ) ∈ (δVh , δWh ).
(10)
Discontinuous Galerkin Subgrid Finite Element Method
19
By linearity we may decompose (10) into ah (uH + δuh , v) + b (v, pH + δph ) + b (uH + δuh , q) = F (v), ∀(v, q) ∈ (VH , WH ), ah (uH + δuh , δvh ) + b (δvh , pH + δph ) + b (uH + δuh , δqh ) = F (δvh ),
∀(δvh , δqh ) ∈ (δVh , δWh ).
(11a)
(11b)
Due to (3a), (3b), and (3c) we may simplify (11) to obtain the equation: ah (uH + δuh , v) + b (v, pH ) + b (uH , q) = F (v) ∀ (v, q) ∈ (VH , WH ) ah (uH + δuh , δvh ) + b (δvh , δph ) + b (δuh , δqh ) = F (δvh ), ∀ (δvh , δqh ) ∈ (δVh , δWh ).
(12a)
(12b)
Remark 1. This last step is crucial to ensure the solvability of (12b). In fact, the equivalence of (11b) and (12b) is a major reason for requiring the properties of the finite element spaces in (3). The requirements (3), however, significantly limit the possible choices of finite elements that might be used in the subgrid method. Indeed, the BDM1 finite element is the only choice that we are aware of that satisfies all constraints and can be used for discretizing (1).
h , δp + δp ) and using Now, by further decomposing (δuh , δph ) = (δuh + δu h h superposition, (12b) may be replaced by the following systems of equations sat h , δp ) and (δuh , δp ), respectively, for all (δvh , δqh ) ∈ (δVh , δWh ): isfied by (δu h h
h , δvh + b δvh , δp
h , δqh = 0 + b δu (13a) ah uH + δu h and
ah δuh , δvh + b δvh , δph + b δuh , δqh = F (δvh ).
(13b)
h , δp ) = (δu
h (uH ), δp (uH )) is a linear operWe easily see by (13a) that (δu h h ator in uH . Unfortunately, as written, these two problems do not lead to local (over the coarse elements) computations since they are connected through the
h and δuh on ∂TH . We achieve penalty involving the tangential component of δu the desired localization by considering equations (13) over each coarse grid cell with zero tangential component of the velocity imposed weakly by the penalty term (for more details, see [13]). Keeping this in mind we further use the same
h (uH ), δp (uH )) as the notation. In the following we refer to (δuh , δph ) and (δu h responses to the right hand side and uH , respectively.
h (uH ) into (12a) we arrive at an upscaled equation, which Plugging δuh + δu is entirely posed in terms of the coarse unknowns, i.e. for all (v, q) ∈ (VH , WH )
h (uH ), v + b (v, pH ) = F (v) − ah δuh , v , ah uH + δu (14) b (uH , q) = 0.
20
O.P. Iliev, R.D. Lazarov, and J. Willems
h (v), that Now, due to (13a) we see, by choosing δvh = δu
h (uH ), δu
h (v) + b δu
h (v), δp (uH ) = 0. ah uH + δu h (uH ) = 0 for any
h (v), δp The second equation in (13a) in turn yields b δu h v ∈ VH . Combining these two results with (14) we obtain the symmetric upscaled problem: find (uH , pH ) ∈ (VH , WH ) so that for all (v, q) ∈ (VH , WH )
h (uH ), v + δu
h (v) + b (v, pH ) + b (uH , q) = F (v) − ah δuh , v , ah uH + δu (15)
h (uH ) One can set up this linear system by first computing the responces δu with uH being a coarse-grid basis function. This could be done in advance and in parallel. Once (15) is solved for (uH , pH ) we obtain the solution of (7) by
h (uH ) + δuh , pH + δp (uH ) + δp ). (uH,h , pH,h ) = (uH + δu h h
4
(16)
Subgrid Method and Alternating Schwarz Iterations
As noted in the previous section we presented a special way of computing the solution of (7), i.e., the finite element solution corresponding to the space (VH,h , WH,h ). The difference between the spaces (VH,h , WH,h ) and (Vh , Wh ) is that the former has no fine degrees of freedom across coarse cell boundaries. Thus, fine-scale features of the solution (u, p) across those coarse cell boundaries are poorly captured by functions in (VH,h , WH,h ). Algorithm 1 addresses this problem by performing alternating Schwarz iterations between the spaces (VH,h , WH,h ) and (Vhτ (eH ), Whτ (eH )) that consist of fine-grid functions defined on overlapping subdomains around each coarse-mesh interface eH ∈ E˚H of size H. Now, problem (17) is of exactly the same form as (7). Thus, by the same reasoning as in the previous section we may replace (17) by the following two problems: Find (δeh , δeh ) ∈ (δVh , δWh ) such that for all (δvh , δqh ) ∈ (δVh , δWh ) j+ 13
ah (δeh , δvh ) + b (δvh , δeh ) = F (δvh ) − ah (uh b (δeh , δqh ) =
j+ 1 −b(uh 3 , δqh ).
j+ 13
, δvh ) − b(δvh , ph
),
(19a)
Find (eH , eH ) ∈ (VH , WH ) such that for all (v, q) ∈ (VH , WH ) we have j+ 13
aH (eH , v) + b (v, eH ) = F (v) − ah (uh b (eH , q) =
j+ 1 −b(uh 3 , q).
j+ 13
, v) − b(v, ph
) − ah (δeh , v),
(19b)
Here, (19a) and (19b) correspond to (13b) and (15), respectively, and analogous to (16), (eH,h , eH,h ) from (17) is obtained by
h (eH ), δp (eH ) + (δeh , δeh ) . (eH,h , eH,h ) = (eH , eH ) + δu (20) h
Discontinuous Galerkin Subgrid Finite Element Method
21
Algorithm 1. Alternating Schwarz extension to the numerical subgrid approach for Darcy’s problem — first formulation. 1: Set (u0h , p0h ) ≡ (0, 0). 2: for j = 0, . . . until convergence do 3: if j > 0 then 4: Starting from (ujh , pih ) perform an additive Schwarz step with respect to j+1/3 j+1/3 , ph ). (Vhτ (eH ), Whτ (eH )) for all eH ∈ EH to get (uh 5: else 1/3 1/3 6: (uh , ph ) = (u0h , p0h ) 7: end if 8: Find (eH,h , eH,h ) ∈ (VH,h , WH,h ) such that for all (v, q) ∈ (VH,h , WH,h ) we have ⎧ ⎨ ah (eH,h , v) + b (v, eH,h ) = F (v) − ah uj+1/3 , v − b v, pj+1/3 , h h (17) j+1/3 ⎩ ,q . b (eH,h , q) = −b uh 9:
Set
j+1/3
j+1 (uj+1 h , ph ) = (u h
j+1/3
, ph
) + (eH,h , eH,h ).
(18)
10: end for
Algorithm 2. Alternating Schwarz extension to the numerical subgrid approach for Darcy’s problem — second formulation.
h for all coarse velocity basis functions, i.e., solve (13a) with uH re1: Compute δu placed by basis functions. 2: Set (u0h , p0h ) ≡ (0, 0). 3: for j = 0, . . . until convergence do 4: Steps 3:–7: of Algorithm 1 5: Solve (19a) for (δeh , δeh ). j+2/3 j+2/3 j+1/3 j+1/3 6: Set (uh , ph ) = (uh , ph ) + (δeh , δeh ). 7: Solve (22) for (eH , eH ). j+2/3 j+1
h (eH ), pj+2/3 + eH + δp (eH )). + eH + δu 8: Set (uj+1 h h , ph ) = (u h h 9: end for
Now, define
j+ 23
(uh
j+ 23
, ph
j+ 13
) := (uh
j+ 13
, ph
) + (δeh , δeh ).
Combining this with (18) and (20) we obtain j+ 23
j+1 (uj+1 h , ph ) = (uh
j+ 23
, ph
h (eH ), δp (eH )). ) + (eH , eH ) + (δu h
(21)
We observe that due to (3a) and (3b) we may simplify (19b) to obtain the equality for all (v, q) ∈ (VH , WH ) aH (eH , v) + b (v, eH ) +b (eH , q) j+ 23
= F (v) − ah uh
j+ 23
, v − b(uh
j+ 23
, q) − b(v, ph
Thus, we may rewrite Algorithm 1 in form of Algorithm 2.
).
(22)
22
O.P. Iliev, R.D. Lazarov, and J. Willems
Remark 2. Algorithm 2 also has a different interpretation than just being some equivalent formulation of Algorithm 1. It is straightforward to see that for j = 0, 2 2 j+ 2 j+ 2 (uh3 , ph3 ) = (δuh , δph ), i.e. it is the solution of (13b). For j ≥ 1 (uh 3 , ph 3 ) is the solution of (13b) with the homogeneous boundary conditions being replaced j+ 1
j+ 1
by (in general) inhomogeneous ones defined by (uh 3 , ph 3 ). Besides, (22) is of exactly the same form as (15). Thus, Algorithm 2 can be viewed as a subgrid algorithm that iteratively improves the local boundary conditions of the response to the right hand side. Remark 3. As a byproduct of our consideration we have a subgrid approximation of the Darcy’s equation. In this case the above algorithms reduce to an overlapping Schwarz domain decomposition method. Such methods were studied in details in [8,10] where a convergence rate independent of H has been established.
(a) Periodic geometry.
(b) Media with obstacles. (c) Media vuggs.
with
large
Fig. 4. Three test geometries. The dark regions indicate obstacles or porous media, i.e. κ−1 is large, and the light regions indicate (almost) free flow regions, i.e. κ−1 is small.
(a) Reference solution.
(b) Subgrid solution.
(c) Schwarz DD, 5 it.
Fig. 5. Velocity u1 for media with periodically arranged obstacles of Figure 4(a)
Discontinuous Galerkin Subgrid Finite Element Method
5
23
Numerical Results
We consider the following example to test the numerical subgrid approach for Brinkman’s problem and the enhanced version with alternating Schwarz iterations. The model problem has non-homogeneous boundary conditions, namely, u = g on ∂Ω. Extending this subgrid method to problems with non-homogeneous boundary data is possible under the condition that g is contained in the trace space of VH on the boundary ∂Ω. This has been elaborated in the PhD thesis of J. Willems, [13]. Example 1. We choose Ω = (0, 1)2 and 1 1e3 in dark regions, f ≡ 0, g ≡ , μ = 1, κ−1 = 0 1 in light regions, where the position of the obstacles is shown in Figure 4(a) (periodic case) and Figure 4(b) (media with small obstacles), and Figure 4(c) (media with relatively large vuggs), respectively. We have chosen a fine mesh of 128 × 128 cells that resolve the geometry. For the subgrid algorithm we choose a coarse 8 × 8 mesh and each coarse-cell is further refined to 16 × 16 fine cells. On all figures we plot the numerical results for the velocity u1 for Example 1. On Figure 5 we report the numerical results for periodically arranged obstacles: Figure 5(a) shows the reference solution, computed on the global fine 128×128 grid, Figure 5(b) — the solution of the subgrid method, and Figure 5(c) — the solution after five iterations of overlapping Schwarz domain decomposition method (Schwarz DD). The relative L2 -norm of the error for the the subgrid solution shown in Figure 5(b) is 7.14e-2, while after five Schwarz iterations (see Figure 5(c)) the L2 -norm of the error is an order of magnitude better. Likewise, on Figure 6 we show the numerical results for the two geometries shown on Figures 4(b) and 4(c). The first row represents the reference solution computed on the global fine grid with 128 × 128 cells. In the second row we give the solution of the subgrid method and in the third row we show the solution obtained after few Schwarz overlapping domain decomposition iterations. In all cases we clearly see the error of the subgrid solution along the coarsegrid boundaries. This error is due to the prescribed zero values for the fine-grid velocity on the coarse-grid boundaries and is characteristic for the multiscale methods (e.g. [14]). As we see in the numerical experiments, few Schwarz iterations substantially decrease the error. For the geometry shown on Figure 4(c) the improvement is achieved after just one iteration, while for the geometry shown on Figure 4(b) the substantial improvement is achieved after 10 iterations. In all cases the approximate solution of Schwarz overlapping method obtained in few iterations is very close to the reference one.
24
O.P. Iliev, R.D. Lazarov, and J. Willems
(a) Reference solution
(b) Reference solution
(c) Subgrid FE solution
(d) Subgrid FE solution
(e) Schwarz DD after 1 iteration
(f) Schwarz DD after 10 iterations
Fig. 6. Velocity component u1 for Example 1; on the left – the results for geometry of Figure 4(b) and the right row – the results for geometry of Figure 4(c)
Discontinuous Galerkin Subgrid Finite Element Method
25
Acknowledgments. The research of O. Iliev was supported by DAAD-PPP D/07/10578 and award KUS-C1-016-04, made by King Abdullah University of Science and Technology (KAUST). R. Lazarov has been supported by award KUS-C1-016-04, made by KAUST, by NSF Grant DMS-0713829, and by the European School for Industrial Mathematics (ESIM) sponsored by the Erasmus Mundus program of the EU. J. Willems was supported by DAAD-PPP D/07/10578, NSF Grant DMS-0713829, and the Studienstiftung des deutschen Volkes (German National Academic Foundation). Part of the research was performed during the visit of O. Iliev to Texas A&M University. The hospitality of the Institute of Applied Mathematics and Computational Science, funded by KAUST, and the Institute for Scientific Computing are gratefully acknowledged. The authors express sincere thanks to Dr. Yalchin Efendiev for his valuable comments and numerous discussion on the subject of this paper.
References 1. Allaire, G.: Homogenization of the Navier-Stokes Equations in Open Sets Perforated with Tiny Holes. I: Abstract Framework, a Volume Distribution of Holes. Arch. Rat. Mech. Anal. 113(3), 209–259 (1991) 2. Allaire, G.: Homogenization of the Navier-Stokes Equations in Open Sets Perforated with Tiny Holes. II: Non-critical Size of the Holes for a Volume Distribution and a Surface Distribution of Holes. Arch. Rat. Mech. Anal. 113(3), 261–298 (1991) 3. Angot, P.: Analysis of Singular Perturbations on the Brinkman Problem for Fictitious Domain Models of Viscous Flows. Math. Methods Appl. Sci. 22(16), 1395– 1412 (1999) 4. Arbogast, T.: Analysis of a Two-Scale, Locally Conservative Subgrid Upscaling for Elliptic Problems. SIAM J. Numer. Anal. 42(2), 576–598 (2004) 5. Brezzi, F., Fortin, M.: Mixed and Hybrid Finite Element Methods. Springer Series in Comput. Mathematics, vol. 15. Springer, New York (1991) 6. Brinkman, H.C.: A Calculation of the Viscose Force Exerted by a Flowing Fluid on a Dense Swarm of Particles. Appl. Sci. Res. A1, 27–34 (1947) 7. deal.II: A Finite Element Differential Equations Analysis Library, http://www.dealii.org/ 8. Ewing, R., Wang, J.: Analysis of the Schwarz Algorithm for Mixed Finite Element Methods. Math. Modelling and Numer. Anal. 26(6), 739–756 (1992) 9. Hornung, U. (ed.): Homogenization and Porous Media, 1st edn. Interdisciplinary Applied Mathematics, vol. 6. Springer, Heidelberg (1997) 10. Mathew, T.: Schwarz Alternating and Iterative Refinement Methods for Mixed Formulations of Elliptic Problems, Part II: Convergence Theory. Numer. Math. 65, 469–492 (1993) 11. Ochoa-Tapia, J.A., Whitaker, S.: Momentum Transfer at the Boundary Between a Porous Medium and a Homogeneous Fluid. I. Theoretical development. Int. J. Heat Mass Transfer 38, 2635–2646 (1995) 12. Wang, J., Ye, X.: New Finite Element Methods in Computational Fluid Dynamics by H(div) Elements. SIAM J. Numer. Anal. 45(3), 1269–1286 (2007) 13. Willems, J.: Numerical Upscaling for Multi-Scale Flow Problems, Ph.D. Thesis, Technical University of Kaiserslautern (2009) 14. Wu, X.H., Efendiev, Y., Hou, T.Y.: Analysis of Upscaling Absolute Permeability. Discrete Contin. Dyn. Syst., Ser B 2, 185–204 (2002)
Stochastic Simulation for Solving Random Boundary Value Problems and Some Applications Karl Sabelfeld Institute Comp. Math. & Math. Geoph., Novosibirsk, Lavrentiev str, 6, 630090 Novosibirsk, Russia
[email protected]
Abstract. I present in this paper (first given as a plenary lecture at the International conference Large-Scale Scientific Computations, Sozopol 2009) an overview of stochastic simulation methods for solving boundary value problems for high dimensional PDEs with random fields as input data. Three classes of boundary value problems are considered: (1) PDEs with random coefficients, in particular, the Darcy equation with random hydraulic conductivity, (2) PDEs with random source terms, in particular, the elasticity problems governed by the Lam´e equation with random loads, and (3) PDEs with random boundary conditions, in particular, the Laplace and Lam´e equations with random velocities prescribed on the boundary. Here of particular interest are applications related to the analysis of dislocations behavior in crystals. In the modern problems of mathematical physics stochastic models governed by PDEs with random parameters attract more and more attention. One of the reasons is the growing complexity of the modern measurement techniques which enables to cover multiscale character of the studied processes. As an example, we mention the X-ray diffraction technique for the analysis of the crystal surfaces, and the flows in porous media. A direct Monte Carlo approach to solve the random boundary value problem ensemblewise is well known, but it takes a huge amount of computer time even in quite simple cases. We give a different approach based on a combined space-ensemble ergodic averaging. To this end, we develop ergodic models of random fields, in particular, the Fourier-wavelet models. We present some applications, in particular, the long-term correlations of the transport in porous media, and the X-ray diffraction measurements of the dislocations in crystals. We discuss also some inverse problems of elastography in a stochastic formulation, governed by the elasticity Lam´e equation with random loads and random displacements prescribed on the boundary. Comparisons are made against some exactly solvable results. Finally we suggest a stochastic fractal model for the mass density distribution in the Universe.
1
Introduction
Random fields provide a useful mathematical framework for representing disordered heterogeneous media in theoretical and computational studies. I. Lirkov, S. Margenov, and J. Wa´ sniewski (Eds.): LSSC 2009, LNCS 5910, pp. 26–39, 2010. c Springer-Verlag Berlin Heidelberg 2010
Stochastic Simulation
27
Example 1. Fully developed turbulent flows. The velocity field representing the turbulent flow is modeled as a random field v(x, t) with statistics encoding important empirical features, and the temporal dynamics of the position X(t) and velocity V(t) = dX dt of immersed particles is then governed by m dV(t) = − γ V(t) − v(X(t), t) dt + 2kB T γ dW(t), where m is the particle mass, γ is its friction coefficient, kB is the Boltzmann constant, T is the absolute temperature, and W(t) is a random Wiener process representing molecular collisions. See [1,11,12]. Example 2. Transport through porous media, such as groundwater aquifers, in which the hydraulic conductivity K(x) is modeled as random field reflecting the empirical variability of the porous medium, see [14,15]. The Darcy flow rate q(x) in response to pressure applied at the boundary is governed by the Darcy equation q(x) = −K(x) grad φ(x),
div q = 0.
Example 3. Stochastic elastostatics problem governed by the Lam´e equation μ Δu(x) + (λ + μ) grad div u(x) = f (x) ,
x∈D ,
u(y)|y∈∂D = g(y)
where u is the displacement vector, and μ and λ are the elastic Lam´e coefficients (scalar random fields), f is a random vector load, and g is a random displacement vector prescribed on the boundary. Example 4. Smoluchowski coagulation equations. Our fourth example deals with a nonlinear coagulation dynamics. In the processes of crystal growth, the kinetics of an ensemble of atom islands that diffuse on the surface and irreversibly merge as they touch each other can be described by the system of Smoluchowski equations: ∞ dcn 1 = Kij ci cj − cn Kjn cj . dt 2 i+j=n j=1 Here cn is the number of islands containing n units (vacancies of atoms in our case) per unit area. In Fig. 1 we show a sample of simulated facetted islands of atoms (upper panel), and the relevant size distributions, see for details [16,17]. What should be evaluated ? In the first example, important characteristics is c(x), the average concentration of particles which is in fact a one-particle statistics. More complicated is the fluctuation of concentration, which is a twoparticle statistics related to the mean square separation ρ2 (t). Flows in porous media: even the simplest statistical characteristics, the mean flow, is a non-trivial function to be calculated. More general Lagrangian statistical characteristics are necessary, to evaluate the mean concentration and its flux, see [11-15]. For tracking the trajectories in the extremely heterogeneous velocity field, one needs a multiscale resolution through a stochastic synthesis of random fields.
28
K. Sabelfeld
Fig. 1. Formation of facetted adatom islands by solving the Smoluchowski equation
Fig. 2. Two samples of a random velocity field, with a unit correlation length L = 1 (left panel) and L = 0.5 (right panel)
Stochastic Simulation
29
Fig. 3. A sample of a random hydraulic conductivity for an isotropic porous medium
Elasticity problems: Standard statistical characteristics are the mean displacements ui , the second moments u2i , and the probability that the displacements exceed some critical value: P rob(ui > ucr ). The same statistics - for the strain and stress tensors in the elasticity problem εij = (ui,j + uj,i )/2,
τij = 2μεij + λδij divu .
In the analysis of dislocations in crystals, the goal is to evaluate the x-ray diffraction peak profiles from distributions of misfit and threading dislocations. The x-ray scattering amplitude from a film of thickness d is given by an integral of the form ∞ A(qx , qz ) =
d dz exp {i[qx x + qz z + V (x, z)]}
dx −∞
0
where V (x, z) =
K(x − x , z)u(x , ω)dx ,
is the total displacement field due to all misfit dislocations, and the kernel K(x− x , z) is the Green function given explicitly or to be calculated numerically, and m the random field u(x , ω) is defined by u(x, ω) = δ(x − ωj ) where δ is the j=1
Dyrac delta-function, the random points ω1 , . . . , ωm are distributed on [0, L] with a density p(ω) which is independent coordinate x. of the spatial The scattered intensity I(qx , qz ) = |A(qx , qz )|2 can be directly calculated by the Monte Carlo double randomization method, as explained above, by a randomized evaluation of the integral representation
30
K. Sabelfeld
∞ I(qx , qz ) =
d d dx
−∞
0
dz1 dz2 ei[qx x+qz (z1 −z2 )] ei[V (x1 ,z1 )−V (x2 ,z2 )] .
0
The random points ω1 , ω2 , . . . , ωm may have quite different distribution on [0, L], say, they may be all independent of each other, they may form a Markov chain with a certain transition probability density, or they may be placed almost periodically, with small but correlated random shifts ωj from their mean positions Xj = j Δx, j = 1, . . . , m: ωj+1 = Δj + ωj . The mean density of points on the [0, L] is ρ = m/L so that Δx = ρ−1 , and the random field u(x, ω) is stationary with the mean u(x, y; ω) = ρ. For details, see [16,17]. Smoluchowski coagulation equation: dX(t) = V(t)dt, dVi (t) = ai (X(t), V(t))dt + σij (X(t), t)dBj (t) 1 dnl = Kij ni nj − Kli nl ni . dt 2 i≥1
i+j=l
Important quantities are the average of the mean size ¯ n(t), as well as the average size spectrum nk (t). More complicated functionals: what are the average and variance of the random time when a “solution explosion” happens? (the so-called gelation phenomenon). Droplet growth in turbulent flows: The coagulation kernel in the case of turbulent coagulation regime has the form: ε(x, t) 3 1/3 r1 (i + j 1/3 )3 Kij = 1.364 ν where ε(x, t) is a random field of the dissipation rate of the turbulent kinetic energy, ν is the kinematic viscosity, and r1 is the monomer size.
2
Spectral Representations of Homogeneous Random Fields
Under quite general conditions, a real-valued Gaussian homogenous random field u(x) can be represented through a stochastic Fourier integral ˜ (dk) u(x) = e2πik·x E 1/2 (k)W (1) R Id
˜ (dk) is a complex-valued white noise random measure on IRd , with where W ˜ (B) = W ˜ (−B), W ˜ (B) = 0, and W ˜ (B)W ˜ (B ) = μ(B ∩ B ) for Lebesgue W measure μ and all Lebesgue-measurable sets B, B . The spectral density E(k) is a nonnegative even function representing the strength (energy) of the random field associated to the wavenumber k, meaning the length scale 1/|k| and direction k/|k|.
Stochastic Simulation
31
Multiscale random fields will have a multiscale spectral density, meaning that E(k) will have substantial contributions over a wide range of wavenumbers kmin |k| kmax , with kmax /kmin 1. This poses a challenge for efficient simulation. More generally, we deal with homogeneous Gaussian l-dimensional vector random fields (see [1-3]) u(x) = (u1 (x), . . . , ul (x))T , x ∈ IRd with a given correlation tensor B(r): Bij (r) = ui (x + r) uj (x), i, j = 1, . . . l, (2) or with the corresponding spectral tensor F : −i 2π k·r Bij (r) dr, Bij (r) = ei 2π r·k Fij (k) dk, Fij (k) = e R Id
i, j = 1, . . . l .
R Id
We assume that the condition
(3) |Bjj (r)| dr < ∞ is satisfied which ensures that
R Id
the spectral functions Fij are uniformly continuous with respect to k. Here Bjj is the trace of B. Let Q(k) be an l × n matrix defined by Q(k)Q∗ (k) = F (k),
¯ Q(−k) = Q(k) .
(4)
Here the star stands for the complex conjugate transpose which is equivalent to taking two operations, the transpose T , and the complex conjugation of each entry. Then the spectral representation of the random field is written as follows u(x) = ei 2π kx Q(k) Z(dk) (5) R Id
where the column-vector Z = (Z1 , . . . Zn )T is a complex-valued homogeneous n-dimensional white noise on IRd with a unite variance and zero mean: Z(dk) = 0,
Zi (dk1 ) Z¯j (dk2 ) = δij δ(k1 − k2 ) dk1 dk2
(6)
¯ satisfying the condition Z(−dk) = Z(dk). Different simulation formulae based on series representations and Fourierwavelet expansions can be found in our recent papers [1-3].
3
Random Walks and Double Randomization Method
Assume we have to solve a PDE which includes a random field σ, say in a right-hand side, in coefficients, or in the boundary conditions: Lu = f,
u|Γ = uγ .
32
K. Sabelfeld
Suppose we have constructed a stochastic method for solving this problem, for a fixed sample of σ. This implies, e.g., that an unbiased random estimator ξ(x| σ) is defined so that for a fixed σ, u(x, σ) = ξ(x| σ) where · stands for averaging over the random trajectories of the stochastic method (e.g., a diffusion process, a Random Walk on Spheres, or a Random Walk on Boundary). Let us denote by Eσ the average over the distribution of σ. The double randomization method is based on the equality: Eσ u(x, σ) = Eσ ξ(x| σ) . The algorithm for evaluation of Eσ u(x, σ) then reads: 1. Choose a sample of the random field σ. 2. Construct the random walk over which the random estimator ξ(x| σ) is calculated. 3. Repeat 1. and 2. N times, and take the arithmetic mean. Suppose one needs to evaluate the covariance of the solution. Let us denote the random trajectory by ω. It is not difficult to show that u(x, σ) u(y, σ) = E(ω1 ,ω2 ,σ) [ξω1 (x, ω)ξω2 (y, ω)] . The algorithm for calculation of u(x, σ) u(y, σ) follows from this relation: 1. Choose a sample of the random field σ. 2. Having fixed this sample, construct two conditionally independent trajectories ω1 and ω2 , starting at x and y, respectively, and evaluate ξω1 (x, ω)ξω2 (y, ω). 3. Repeat 1. and 2. N times, and take the arithmetic mean.
4
Random Loads
Here we solve a problem from elastography: measuring the correlation functions of random displacements under isotropic random loads we try to recover the elastic properties of a molecule consisting of 17 atoms, see Fig. 5, left picture. The spectral tensor of an isotropic random load has the following general structure: Sij (k) = [SLL (k) − SN N (k)]
ki kj + SN N (k)δij , k2
k = |k| .
Here SLL and SN N are longitudinal and transversal spectra, respectively. We consider incompressible random field, SLL = 0, hence in this case, SN N (k) = E(k)/(2πk)
Stochastic Simulation
33
1.8
Longitudinal corr. B(u) (x) 11
1.6 1.4 1.2 1
Transverse corr. B(u)(x) 22
0.8 0.6 0.4
σ=1, L=0.5; α=2
0.2 0 −0.5
0
0.5
1
1.5
x
2
Fig. 4. Longitudinal and transverse correlation functions of the elastic displacements, for fixed intensity fluctuations σ, and different correlation lengths L and elasticity constant α
0.015 0.01
α = 0.90 = 0.75 = 0.50 = 0.36
B (x) 21
0.005
solid dash dash−dot dot
0 −0.005 B (x)
−0.01 −0.015
12
0
0.5
1
1.5
2
2.5
3
3.5
4
4.5
X
5
Fig. 5. Elastic molecule consisting of 17 atoms (left panel), and the cross correlation function of the elastic displacements for different values of the elastic constant α
where E(k) is the energy spectrum. We choose √ 2 2 L −Lk2 E(k) = σ √ e , k = |k| , π here L is a positive parameter, characterizing the correlation length. The Lam´e equation was solved by the Random Walk on spheres method, the calculated correlation functions are plotted in Figures 4 and 5 which show that the elasticity constant can be recovered from the measurements of the correlation functions. For details see [4].
34
5
K. Sabelfeld
Respond to Boundary Excitations
In this section we consider PDEs with random boundary conditions, in particular, the Dirichlet problem for Laplace and Lam´e equations with white noise excitations on the boundary. For details and proofs of Theorems 1-5 see [5,7]. 5.1
Inhomogeneous Random Fields and Karhunen-Lo` eve Expansion
The Karhunen-Lo`eve expansion of an inhomogeneous random field u(x) has the form ∞ u(x) = λk ξk hk (x) . k=1
By definition, B(x1 , x2 ) is bounded, symmetric and positive definite, so the ∞ Hilbert-Schmidt theory says that B(x1 , x2 ) = λk hk (x1 ) hk (x2 ) where the k=1
eigen-values and eigen-functions are the solutions of the following eigen-value problem for the correlation operator: B(x1 , x2 ) hk (x1 ) dx1 = λk hk (x2 ) . G
The eigen-functions form a complete orthogonal set
hi (x) hj (x) dx = δij and
G
1 ξk = √ λk
u(x) hk (x) dx ,
E ξk = 0,
Eξi ξj = δij .
G
Theorem 1. The solution of the Dirichlet problem for the Laplace equation in a disc K(x0 , R) with the white noise boundary function g(y) is an inhomogeneous 2D Gaussian random field uniquely defined by its correlation function u(r1 , θ1 ) u(r2 , θ2 ) = Bu (ρ1 , θ1 ; ρ2 , θ2 ) =
1 1−ρ21 ρ22 2π 1−2ρ1 ρ2 cos(θ2 −θ1 )+ρ21 ρ22
which is harmonic, and it depends only on the angular difference θ2 − θ1 and the product of radial coordinates ρ1 ρ2 = r1 r2 /R2 . The random field u(r, θ) is thus homogeneous with respect to the angular coordinate θ, and its partial discrete spectral density has the form fθ (0) = 1/2π, fθ (k) = (ρ1 ρ2 )k /π, k = 1, . . . . The Karhunen-Lo`eve expansion of the solution has the form ∞ 1 k ξ0 ρ ξk cos(k θ) + ηk sin(k θ) u(r, θ) = √ + √ π 2π k=1
where {ξk }, {ηk } are sets of mutually independent standard Gaussian random variables.
Stochastic Simulation
35
Theorem 2. Assume the boundary function g in the Dirichlet problem is a homogeneous random process with a continuous correlation function Bg (ψ). Then the solution of this problem is partially homogeneous with respect to the angular coordinate, and its correlation function Bu (ρ1 , θ1 ; ρ2 , θ2 ) depends on the angular difference ψ = θ2 − θ1 and the product ρ1 ρ2 , and is explicitly given by the convolution Bu = K ∗ Bg , i.e. , by the Poisson formula 1 Bu (ρ1 ρ2 ; ψ) = 2π
2π
K(ρ1 ρ2 ; ψ − ψ ) Bg (ψ ) dψ
0
which implies that the correlation function Bu (ρ, θ) is harmonic in the unit disc, and it is the unique solution of the Dirichlet boundary value problem ΔBu = 0,
Bu |ρ→1 = Bg .
Theorem 3. Let u(x, y), x = (x1 , . . . xn−1 ) be a random field defined in the halfspace D+ = IRn+ as a harmonic function with the boundary condition u|y=0 = g where g is a zero mean homogeneous random field on the boundary {y = 0} with the correlation function Bg (x) which is bounded in dimension n = 2, or tends to zero as |x| → ∞ if n > 2. Then Bu (x, y) = Bu (x2 − x1 , y1 + y2 ), the correlation function of the solution, is a harmonic function in IRn+ , and is related to Bg by the Poisson type formula: (y1 + y2 ) Bg (x ) dS(x ) Γ (n/2) Bu (x2 − x1 , y1 + y2 ) = . π n/2 [(x − (x2 − x1 ))2 + (y1 + y2 )2 ]n/2 ∂D+
Remark. In practice, it is often important to know the statistical structure of the gradient of the solution. Using the Fourier transform we find that Buxi = −
∂ 2 Bu , ∂xi
i = 1, . . . , n − 1,
Buy =
∂ 2 Bu . ∂2y
Note that since the correlation function Bu is harmonic, this implies the following n−1 remarkable property: Buy = Buxi . So in dimension two, Buy = Bux . i=1
Theorem 4. The exact Karhunen-Lo`eve representations for the covariance tensor and the random field (ur , uθ )T which solves the Lam´e equation under the boundary white noise excitations are given by Bu (ρ1 , θ1 ; ρ2 , θ2 ) = ⎧ ⎫ ∞ ∞ ρ1 ρ2 1 1 k k k k ⎪ ⎪ ⎪ ⎪ + Λ ρ ρ cos [k(θ − θ )] Λ ρ ρ sin [k(θ − θ )] 11 1 2 2 1 12 1 2 2 1 ⎪ ⎪ 2π π π ⎪ ⎪ ⎪ ⎪ k=1 k=1 ⎪ ⎪ ⎪ ⎪ ⎪ ⎪ , ⎪ ⎪ ⎪ ⎪ ∞ ∞ ⎪ ⎪ ⎪ 1 ⎪ ρ1 ρ2 1 k k k k ⎪ ⎪ ⎩ π Λ21 ρ1 ρ2 sin [k(θ2 − θ1 )] + Λ ρ ρ cos [k(θ − θ )] ⎭ 22 2 1 1 2 2π π k=1
k=1
36
K. Sabelfeld
Λ11 = λ11 (ρ1 , k) λ11 (ρ2 , k) + λ12 (ρ1 , k) λ12 (ρ2 , k), Λ12 = λ11 (ρ1 , k) λ21 (ρ2 , k) − λ12 (ρ1 , k) λ22 (ρ2 , k), where the functions λij are known explicitly, see [5], e.g., λ11 (ρ, k) =
2(λ + 2μ) k (λ + μ)(1 − ρ2 ) 1 2μ ρ + + , 2(λ + 3μ) ρ ρ
and ur (r, θ) =
∞ 1 ξ0 ρ + λ11 ρk ξk cos kθ + ηk sin kθ 2π π k=1
+
∞ 1 λ12 ρk − ηk cos kθ + ξk sin kθ , π k=1
uθ (r, θ) =
∞ 1 ξ0 ρ + λ21 ρk − ηk cos kθ + ξk sin kθ 2π π k=1
+
∞ 1 λ22 ρk ξk cos kθ + ηk sin kθ , π k=1
where {ξk , ηk } and {ξk , ηk }, k = 0, 1, 2, . . . are two independent families of standard independent gaussian random variables. Thus the random field is homogeneous with respect to the angular variable, and the respective partial spectra are: = m the spectrum is pure Smm (k) = π1 Λmm ρk1 ρk2 , Smm (0) = ρ1 ρ2 /2π, and for n imaginary: Smn (k) = i π1 Λmn ρk1 ρk2 . The results of calculations for the longitudinal and cross correlations are shown in Fig. 6 for different values of the elasticity constant. These can be used for recovering the elastic properties of the body by measuring the correlations in respond to random boundary excitations.
6
Fractal Density of the Universe
Let us consider a fractional model of the density of the Universe 1−a
(−Δ) 2 u(x, y) = 0, u|∂Rn+ = f (ξ)
x ∈ IRn ,
y ∈ IR+
In [7], we have given the following statement which proves an existence of a model for the mass distribution in the Universe with a correlation function having very heavy tails characterizing the fractal nature of this mass distribution, see Fig. 7 (left panel) and a sample of the fractal density of the Universe (right panel).
Stochastic Simulation
37
Fig. 6. The longitudinal (left panel) and the cross correlation (right panel) functions, for different values of the elastic constant
Theorem 5. The random field u(X), X ∈ IRn × [0, ∞) solving the boundary value problem for the Lam´e equation with the Gaussian white noise f is partially isotropic, that is, its correlation function depends on x = |x1 −x2 |, and on y1 , y2 . It is uniquely defined by its correlation function which has the form 1 Bu (X1 , X2 ) =
2 Cn,a
A(y1 , y2 ) 0
t
n−a−1 2
(1 − t)
n−a−1 2
dt 2 n +1−a y1 t + y22 (1 − t) + t(1 − t)|x1 − x2 |2 2
π n/2 Γ n2 + 1 − a 1−a 1−a A(y1 , y2 ) = n+1−a 2 y1 y2 . Γ 2
where
The partial spectral function is explicitly given by S(k; y1 , y2 ) = C 2 (|k|y1 )
1−a 2
(|k|y2 )
1−a 2
K 1−a (2π|k|y1 ) K 1−a (2π|k|y2 ) 2
2
1−a
where C = 2π 2 /Γ ( 1−a 2 ) , and Kν (z) is the modified Bessel function of the −Iν (z) second kind, also known as the Macdonalds function: Kν (z) = π2 Iνsin(νπ ), where ν Iν (z) = i Jν (iz) is the modified Bessel function in turn defined via Jν , the Bessel function of a pure imaginary argument iz. The random field u(x, y) has the following Randomized Spectral approximation: u(x, y) ≈ V (x, y) = C
(|k|y)
1−a 2
K 1−a (2π |k|y) 2 ξ cos(2πk · x) + η sin(2πk · x) p(k)
where k is a random vector in IRn distributed with a density p(k) which can be chosen quite arbitrarily, satisfying the condition p(k) = 0 if K 1−a (|k|y) = 0, and 2 ξ, η are independent standard Gaussian random variables.
38
K. Sabelfeld
Fig. 7. Correlations of the fractal density of the Universe, for different values of the fractal parameter a, in 3D (left panel) and a sample of the fractal density (right panel)
Acknowledgments The author thanks the organizers of the conference, and acknowledges the support of the RFBR under Grants N 06-01-00498, 09-01-12028-ofi-m, and a joint BMBF and Bortnik Funds Grant.
References Random field simulation. 1. Kramer, P., Kurbanmuradov, O., Sabelfeld, K.: Comparative Analysis of Multiscale Gaussian Random Field Simulation Algorithms. Journal of Computational Physics 226, 897–924 (2007) 2. Kurbanmuradov, O., Sabelfeld, K.K.: Stochastic spectral and Fourier-wavelet methods for vector Gaussian random field. Monte Carlo Methods and Applications 12(5-6), 395–446 (2006) 3. Kurbanmuradov, O., Sabelfeld, K.: Convergence of Fourier-Wavelet models for Gaussian random processes. SIAM Journal on Numerical Analysis 46(6), 3084– 3112 (2008) Random loads in elasticity. 4. Sabelfeld, K., Shalimova, I., Levykin, A.: Stochastic simulation method for a 2D elasticity problem with random loads. Probabilistic Engineering Mechanics 24(1), 2–15 (2009) Random boundary excitations. 5. Sabelfeld, K.: Expansion of random boundary excitations for some elliptic PDEs. Monte Carlo Methods and Applications 13(5-6), 403–451 (2007)
Stochastic Simulation
39
6. Sabelfeld, K.K.: Stokes flows under random boundary velocity excitations. J. Stat. Physics 133(6), 1107–1136 (2008) 7. Sabelfeld, K.: Expansion of random boundary excitations for the fractional Laplacian. Journal of Cosmology and Astroparticle Physics 10(004) (2008), doi:10.1088/ 1475-7516/2008/10/004 8. Sabelfeld, K.K., Shalimova, I.A.: Elastic half-plane under random displacement excitations on the boundary. J. Stat. Physics 132(6), 1071–1095 (2008) Nucleation, coagulation processes and Ostwald ripening. 9. Kaganer, V.M., Ploog, K.H., Sabelfeld, K.K.: Dynamic coalescence kinetics of facetted 2D islands. Physical Review B 73(11) (2006) 10. Kaganer, V.M., Braun, W., Sabelfeld, K.K.: Ostwald ripening of faceted twodimensional islands. Physical Review B 76, 075415 (2007) Footprint problem in canopy. 11. Vesala, T., Kljun, N., Rannik, U., Rinne, J., Sogachev, A., Markkanen, T., Sabelfeld, K., Foken, T., Leclerc, M.Y.: Flux and concentration footprint modeling: state of the art. Environmental Pollution 152(3), 653–666 (2008) Turbulence simulation and transport in porous media. 12. Suciu, N., Vamos, C., Vereecken, H., Sabelfeld, K., Knabner, P.: Memory Effects Induced by Dependence on Initial Conditions and Ergodicity of Transport in Heterogeneous Media. Water Resources Research 44, W08501 (2008) 13. Sabelfeld, K.: Random Field Simulation and Applications. In: Niederreiter, H., Heinrich, S., Keller, A. (eds.) Invited paper of the Plenary Lecture. Proceedings of the 7th International Conference on Monte Carlo and Quasi-Monte Carlo Methods in Scientific Computing, Ulm, Germany, August 14-18, 2006, pp. 143–166. Springer, Heidelberg (2007) 14. Sabelfeld, K., Kurbanmuradov, O., Levykin, A.: Stochastic simulation of particle transport by a random flow inside a porous cylinder. Monte Carlo Methods and Applications 15(1) (2009) 15. Kurbanmuradov, O., Sabelfeld, K.: Stochastic flow simulation and particle transport in a 2D layer of random porous medium. Submitted to Transport in Porous media (2009) X-Ray diffraction peaks in the dislocation analysis. 16. Kaganer, V., Brandt, O., Riechert, H., Sabelfeld, K.: X-ray diffraction of epitaxial films with arbitrarily correlated dislocations: Monte Carlo calculation and experiment. Phys. Rev. B 80, 033306 (2009) 17. Kaganer, V., Sabelfeld, K.: X-ray diffraction peaks from partially ordered misfit dislocations. Phys. Rev. B 80, 184105 (2009)
On Finite Element Error Estimates for Optimal Control Problems with Elliptic PDEs Fredi Tr¨ oltzsch Technische Universit¨ at Berlin, Institut f¨ ur Mathematik 10623 Berlin, Str. d. 17. Juni 136, Sekr. MA 4-5, Germany Abstract. Discretizations of optimal control problems for elliptic equations by finite element methods are considered. The problems are subject to constraints on the control and may also contain pointwise state constraints. Some techniques are surveyed to estimate the distance between the exact optimal control and the associated optimal control of the discretized problem. As a particular example, an error estimate for a nonlinear optimal control problem with finitely many control values and state constraints in finitely many points of the spatial domain is derived.
1
Introduction
In this paper, we consider optimal control problems of the type min J(y, u) := subject to
1 λ y − yd 2 + u2 2 2
−Δy = u y=0
in Ω on Γ,
(1)
(2)
where also further constraints u ∈ Uad (control constraints) and y ∈ Yad (state constraints) may be given. In this setting, Ω ⊂ IR2 is a convex, bounded and polygonal domain, yd ∈ L2 (Ω) a given desired state, and λ > 0 is a fixed regularization parameter. By · , the natural norm of L2 (Ω) is denoted. In the last section, B(u, ρ) ⊂ IRm is the open ball of radius ρ around u. In the paper, c is a generic constant that is more or less arbitrarily adapted. Our main issue is to estimate the error arising from a finite element discretization of such problems. First, we consider the problem without control and state constraints. Next, we explain how the error can be estimated, if control constraints are given. We briefly survey recent results on probems with state constraints and discuss finally a problem with finite-dimensional control space and state constraints given in finitely many points of the spatial domain.
2 2.1
Optimal Control Problem with Control Constraints The Unconstrained Optimal Control Problem
The Problem. Let us start the tour through error estimates by a quite simple approach for the unconstrained problem (1), (2), where no further constraint on u or y are given, i.e. Uad = L2 (Ω) and Yad = L2 (Ω). I. Lirkov, S. Margenov, and J. Wa´ sniewski (Eds.): LSSC 2009, LNCS 5910, pp. 40–53, 2010. c Springer-Verlag Berlin Heidelberg 2010
On Finite Element Error Estimates for Optimal Control Problems
41
For all u ∈ L2 (Ω), there is exactly one state y = y(u) ∈ H 2 (Ω) ∩ H01 (Ω) and the mapping Λ : L2 (Ω) → H 2 (Ω) ∩ H01 (Ω), Λ : u → y(u), is continuous. We consider Λ also with range in L2 (Ω) and denote this “solution operator” by S, i.e. S = EH 2 →L2 Λ, where EH 2 →L2 denotes the continuous injection of H 2 (Ω) in L2 (Ω). By S, we are able to formally eliminate the PDE and to transform the problem to the control reduced quadratic optimization problem (P )
min f (u) := J(Su, u) =
u∈L2 (Ω)
1 λ Su − yd 2 + u2 . 2 2
The existence of a unique (optimal) solution of this problem is a standard result. In all what follows, we denote by u¯ the unique optimal control and by y¯ = y(¯ u) the associated optimal state. All further results on the necessary optimality conditions for (P) and its version with additional control constraints are stated without proofs. They are discussed extensively in the forthcoming textbook [1]. Necessary and Sufficient Optimality Condition. It is clear that f (¯ u) = 0 is necessary for the optimality of u ¯, hence u) = S ∗ (S u ¯ − yd ) + λ¯ u=0 f (¯ must hold, where S ∗ denotes the adjoint operator of S. It is very useful to introduce an auxiliary function p by p¯ := S ∗ (S u ¯ − yd ). This function p¯ is called adjoint state associated with u ¯. Therefore, we have p¯ + λ¯ u = 0.
(3)
The adjoint state p¯ is the solution of the adjoint equation −Δp = y¯ − yd p=0
in Ω on Γ.
(4)
To determine the unknown triplet (¯ y, u ¯, p¯), we have to solve the optimality system (2), (4), (3). Invoking (3), i.e. inserting u = −λ−1 p in the state equation, the optimality system −Δy + λ−1 p = 0 y = 0,
−Δp − y = −yd p=0
in Ω on Γ
(5)
is obtained. Having solved this, we obtain the optimal control by u¯ := −λ−1 p. Discretized Problem and Error Estimate. We assume a regular triangulation T of Ω with mesh size h, triangles Ti , and piecewise linear and continuous ansatz functions Φi , i = 1, . . . , n, which generate the finite-dimensional subspace Vh = span {Φ1 , . . . , Φn } ⊂ H01 (Ω). We do not explain the standard notion of
42
F. Tr¨ oltzsch
regularity of the triangulation. Instead, we just assume that the standard finite element error estimate (7) below is satisfied. In the discretized problem, the state yh associated with u is determined by yh ∈ Vh and ∇yh · ∇Φi dx = u Φi dx ∀i = 1, . . . , n. (6) Ω
Ω
To each u ∈ L2 (Ω), there exists exactly one solution yh ∈ H01 (Ω) of (6) denoted by yh (u). From the finite element analysis for regular grids, the error estimate h yh (u) − y(u)H 1 (Ω) + yh (u) − y(u) ≤ c h2 u
(7)
is known for all sufficiently small h > 0, where the constant c does not depend on u or h. Let us introduce the mapping Sh : L2 (Ω) → L2 (Ω), Sh : u → yh (u). In terms of S and Sh , (7) is equivalent to S − Sh L2 (Ω)→L2 (Ω) ≤ c h2 .
(8)
Analogously to the former section, the discretized optimal control problem can be formulated in control reduced form as (Ph )
min fh (u) :=
1 λ Sh u − yd 2 + u2. 2 2
This discretized problem has a unique optimal control denoted by u¯h with associated state y¯h . The reader might be surprised that the control u in (Ph ) is not discretized. In fact, we do not need this here, since the optimality conditions will automatically imply uh ∈ Vh . ¯. We write down the necessary optiIt is easy to estimate the error ¯ uh − u mality conditions for both optimal controls, S ∗ (S u ¯ − yd ) + λ¯ u=0 ¯h − yd ) + λ¯ uh = 0, Sh∗ (Sh u multiply them scalarly by u¯ − u¯h , subtract the results and re-order. This yields u−u ¯h )2 + λ¯ u−u ¯h 2 ≤ | (yd , (S − Sh )(¯ u − u¯h )) | Sh (¯ ≤ yd c h2 ¯ u−u ¯h , where (· , ·) denotes the natural inner product of L2 (Ω). Consequently, we have the L2 error estimate (9) ¯ u−u ¯h ≤ c λ−1 h2 yd . This was easy, since we considered the unconstrained case that is not really interesting in optimization. Let us include also constraints on the control.
On Finite Element Error Estimates for Optimal Control Problems
2.2
43
Constraints on the Control
Optimality Conditions. Let ua < ub be two real numbers. Consider the control-constrained problem (P C)
min f (u) =
u∈Uad
1 λ Su − yd 2 + u2 2 2
with Uad = {u ∈ L2 (Ω) : ua ≤ u(x) ≤ ub a.e. in Ω}. Again, the problem has a unique optimal control u¯. However, due to the conu) = 0. Instead, the variational inequality straints, we cannot expect that f (¯ f (¯ u)(u − u ¯) ≥ 0 ∀u ∈ Uad is necessary and sufficient for optimality of u¯. It expresses the intuitively clear observation that, in a minimum, the function f cannot decrease in any feasible direction. In terms of S, this means ¯ − yd ) + λ¯ u, u−u ¯) ≥ 0 (S ∗ (S u or equivalently
∀u ∈ Uad
(10)
p(x) + λ¯ u(x) (u(x) − u ¯(x)) dx ≥ 0 ∀u(·) ∈ Uad .
Ω
A simple pointwise discussion of this inequality reveals that almost everywhere u(x) > 0 ua if p(x) + λ¯ u¯(x) = (11) u(x) < 0 ub if p(x) + λ¯ u(x) = 0. From this, one and we have, of course, u¯(x) = −λ−1 p(x) if p(x) + λ¯ derives with some effort the well-known projection formula 1 a.e. in Ω, (12) u ¯(x) = IP[ua ,ub ] − p(x) λ where IP[ua ,ub ] : IR → [ua , ub ] denotes projection onto [a, b]. This projection ¯ can exhibit corners in the formula shows that, although we have p ∈ H 2 (Ω), u points, where the bounds ua and ub are reached. Hence in general we can only expect u ∈ H 1 (Ω). Moreover, (¯ y, u ¯, p¯) cannot be obtained from a smooth coupled system of PDEs as (5). Therefore, error estimates are more difficult. They also depend on the way how the control function u is discretized. Discretization by Piecewise Constant Controls. The most common way of control discretization is working with piecewise constant controls. Here, the set of admissible discretized controls is defined by h Uad = {ua ≤ u(·) ≤ ub : u is constant on each triangle Ti }
44
F. Tr¨ oltzsch
and the associated discretized problem is (P Ch )
min fh (uh ),
h uh ∈Uad
where fh and yh are defined as in the last section. The difference to (Ph ) consists h . Let u¯h denote the unique optimal control of (P Ch ) in the appearance of Uad and let y¯h be the associated (discretized) state. Then a discrete counterpart to the variational inequality (10) must be satisfied, h (Sh∗ (Sh u¯h − yd ) + λ¯ uh , uh − u ¯h ) ≥ 0 ∀uh ∈ Uad .
By the discrete adjoint state ph := Sh∗ (Sh u ¯h − yd ), this is equivalent to 1 u ¯(x) = IP[ua ,ub ] − ph (x) dx ∀ x ∈ Ti , ∀ i = 1, . . . , M, λ |Ti | Ti
(13)
(14)
where |Ti | is the area of the triangle Ti . Now we cannot derive an error estimate in the same way as before. First, u¯ cannot be inserted in (14), as u ¯h is not piecewise constant. As a substitute, we use the interpolant Πh u ¯ defined by 1 (Πh u¯)(x) := − u¯(x) dx ∀ x ∈ Ti . (15) |Ti | Ti h It holds that Πh u ¯ ∈ Uad and, with some c > 0 not depending on h,
¯ ≤ c h. ¯ u − Πh u
(16)
Now, we might insert u ¯h in (10), Πh u¯ in (13), add the two inequalities obtained and resolve for ¯ u−u ¯h to estimate the error. This procedure only yields a non√ optimal estimate of the order h. Some further tricks avoid the square root, cf. [2] or the early papers [3,4]. Below, we explain a perturbation approach. Its main idea goes back to [5] and was applied in [6] to nonlinear elliptic problems. The function Πh u¯ will only fulfill the variational inequality (13) if it is optimal for (P Ch ) by chance. However, it satisfies the variational inequality for the perturbed control problem (P Cζh ) min fh (uh ) + ζh (x) uh (x) dx , h uh ∈Uad
Ω
¯ satisfies the associated projection formula if ζh (x) is defined such that Πh u 1 ¯(x) = IP[ua ,ub ] − (ph (x) + ζh (x)) dx ∀ x ∈ Ti , ∀ i = 1, . . . , M. Πh u λ |Ti | Ti Notice that the derivative of the perturbed functional at a function u is equal to Sh∗ (Sh u − yd ) + ζh + λ u = ph + ζh + λ u so that ph + ζh plays the role of the former ph . How the function ζh must be constructed? If ua < Πh u¯(x) < ub holds in a triangle Ti , then λ|Ti |Πh u ¯(x) + Ti (ph + ζh ) dx = 0 must hold on Ti . This follows from (14), applied to Πh u ¯ and (P Cζh ). Therefore, we define ζh by 1 ζh (x) ≡ −Πh u¯(x) + ph dx on Ti . λ |Ti | Ti
On Finite Element Error Estimates for Optimal Control Problems
45
If, on Ti , Πh u ¯(x) ≡ ua , then λ|Ti |Πh u¯(x) + Ti (ph + ζh ) dx ≥ 0 must hold on Ti (adapt (11) to (PCζh )). To compensate negative values, we define ζh by
1 ζh (x) ≡ Πh u ¯(x) + ph dx on Ti , λ |Ti | Ti − where a− = (|a| − a)/2 ≥ 0 denotes the negative part of a real number. Analogously, we define ζh := −[. . .]+ via the associated positive part to compensate a ¯(x) ≡ ub . It is not difficult to show that positive value, if Πh u ζh ≤ c ¯ u − Πh u¯ ≤ c˜ h. Now we proceed similarly as in the last section. We insert Πh u ¯ in the variational inequality (13) for u ¯h and insert u¯h in the perturbed variational inequality for Πh u ¯ to obtain uh , Πh u ¯−u ¯h ) ≥ 0, (Sh∗ (Sh u¯h − yd ) + λ¯ ∗ (Sh (Sh Πh u ¯ − yd ) + ζh + λΠh u ¯, u ¯ h − Πh u ¯) ≥ 0. Adding both inequalities, we obtain after some re-ordering and ignoring the term Sh (¯ u h − Πh u ¯)2 ¯ uh − Πh u¯2 ≤ λ−1 (ζh , u ¯ h − Πh u ¯) ≤ λ−1 ζh ¯ u h − Πh u ¯. In view of (16), an obvious application of the triangle inequality yields finally ¯ uh − u ¯ ≤ c h
(17)
with some c > 0 not depending on h. This is the optimal error estimate for piecewise constant approximations of u¯. The same order of the error can be derived for problems with semilinear elliptic equations, both for distributed and boundary controls, [6,7]. However, thanks to non-convexity, the situation is more delicate. Different locally optimal controls might appear. They should satisfy a second-order sufficient optimality condition to have a unique approximating locally optimal control in a certain neighborhood, cf. also the discussion in the last section. The error analysis for piecewise linear controls is more difficult. We refer only to [8] for Neumann and to [9,10,11] for Dirichlet boundary control problems. In the Neumann case, the order h3/2 can be expected for the error. Variational Discretization. The situation is easier, if the control functions are (formally) not discretized, i.e. if we consider the discretized problem (Ph )
min fh (u).
u∈Uad
At first glance, this consideration seems to be useless. How should one be able to compute the optimal control without any kind of discretization? However, take a look at the finite element approximation of the optimality system −Δp − y = −yd in Ω −Δy = IP[ua ,ub ] − λ−1 p (18) y = 0, p=0 on Γ,
46
F. Tr¨ oltzsch
where u is eliminated by the projection formula (12). This nonsmooth nonlinear system can be solved numerically to obtain the (discrete) state yh and adjoint state ph , [12]. Then u¯h is found by u¯h = IP[ua ,ub ] {−λ−1 ph }. It is piecewise linear but does not in general belong to Vh . This approach of variational discretization is also useful in iterative methods of optimization, cf. [13]. For this variational discretization, an error estimate of the order h2 is easily derived: We repeat a similar procedure as before and insert u ¯h in the variational inequality for u¯, u ¯ in the variational inequality for u ¯h (all u ∈ Uad are now admitted!), ¯ − yd ) + λ¯ u, u ¯h − u ¯) ≥ 0, (S ∗ (S u ∗ (Sh (Sh u ¯h − yd ) + λ¯ uh , u ¯−u ¯h ) ≥ 0. Next, we add both inequalities and re-order as we proceeded to derive (9), u − u¯h )2 + λ¯ u − u¯h 2 ≤ | (yd , (S − Sh )(¯ u−u ¯h )) | ≤ yd c h2 ¯ u−u ¯h . Sh (¯ Consequently, we have the optimal L2 -estimate ¯ u−u ¯h L2 (Ω) ≤ c λ−1 h2 yd . The same optimal order can be obtained under natural assumptions with piecewise constant controls by one smoothing step, cf. [14].
3 3.1
Problems with State Constraints Available Error Estimates
Here, we admit also state constraints. Now the error analysis is more difficult. Currently, this is a very active field of research. A detailed survey on relevant contributions would go beyond the scope of this paper. We mention first results for problems with finitely many state constraints in [15] and the convergence of discretizations for pointwise state constraints of the type y(x) ≤ c a.e. in Ω in [16]. To have a comparison with the results of the next section, we state also recent results for elliptic problems with pointwise state constraints: In [17], for ¯ u − u¯h the order h1−ε was derived in Ω ⊂ IR2 and h1/2−ε for Ω ⊂ IR3 . Recently, this was improved in [18] to h| log h| for 2D and h1/2 for 3D domains. 3.2
Control in IRm and State Constraints in Finitely Many Points
Let us discuss a simpler problem with semilinear elliptic equation, controls in IRm and state constraints in finitely many points x1 , . . . , x of Ω: ⎧ 1 λ ⎪ ⎪ min J(yu , u) = yu − yd 2 + |u|2 ⎪ ⎪ 2 2 ⎪ ⎨ u∈Uad subject to (P S) ⎪ ⎪ ⎪ g (y (x )) = 0, for all i = 1, . . . , k, ⎪ ⎪ ⎩ i u i gi (yu (xi )) ≤ 0, for all i = k + 1, . . . , ,
On Finite Element Error Estimates for Optimal Control Problems
47
where yu is the solution to the state equation −Δ y(x) + d(y(x), u) = 0 in Ω y(x) = 0 on Γ
(19)
and Uad = {u ∈ IRm : ua ≤ u ≤ ub } with given ua ≤ ub of IRm . We assume l ≥ 1 and set k = 0, if only inequality constraints are given and k = l, if only equality constraints are given. We assume for short that d : IR2 → IR and gi : IR → IR, i = 1, . . . , , are twice differentiable with locally Lipschitz second-order derivatives and that d is monotone non-decreasing with respect to y. In [19], the problem is considered in a slightly more general setting. Thanks to our assumptions, the mapping u → yu ¯ hence the values yu (xi ) are well defined. is continuous from IRm to H01 (Ω)∩C(Ω), ¯ Therefore, we consider S : u → yu as mapping from IRm to H01 (Ω) ∩ C(Ω). To convert (PS) into a finite-dimensional nonlinear programming problem, → IR of class C 2,1 by f (u) = J(yu , u) = J(S(u), u). we define again f : IRm Thanks to our assumptions, in particular the Lipschitz properties of the second derivatives of d and the gi , the mapping S has a locally Lipschitz continuous second-order derivative S . Moreover, we define G : IRm → IR by G(u) := [g1 (yu (x1 )), . . . , g (yu (x ))] .
(20)
To cover the equality and inequality constraints in a unified way, we introduce the convex cone K = {z ∈ IR : zi = 0, i = 1, . . . , k, zi ≤ 0, i = k + 1, . . . , } and write z ≤K 0 iff z ∈ K. By these definitions, (PS) becomes equivalent to the nonlinear programming problem min f (u) (21) (N ) G(u) ≤K 0, u ∈ Uad . The discretized optimal control problem (PSh ) is defined on substituting yu by its finite-element approximation yh,u , obtained from ∇yh · ∇vh dx + d(yh , u) vh dx = 0 ∀vh ∈ Vh . Ω
Ω
Introducing Gh (u) := [g1 (yh,u (x1 )), . . . , g (yh,u (x ))] we express this problem as finite-dimensional nonlinear programming problem min fh (u) (22) (Nh ) Gh (u) ≤K 0, u ∈ Uad . ¯0 ⊂ Ω1 ⊂ Ω ¯1 ⊂ Ω. Let Ω0 and Ω1 be open sets such that {x1 , . . . , x } ⊂ Ω0 and Ω Then there exists a constant c > 0, independent of h and u ∈ Uad , such that (23) yu − yh,u L∞ (Ω0 ) ≤ c h2 | log h| yu W 2,∞ (Ω1 ) + h2 yu H 2 (Ω) ,
48
F. Tr¨ oltzsch
cf. [19]. Thanks to this estimate and to our assumptions, it holds |f (u) − fh (u)| + |f (u) − fh (u)| + |f (u) − fh (u)| ≤ c h2 |G(u) − Gh (u)| + |G (u) − Gh (u)| + |G (u) − Gh (u)| ≤ c h2 | log h|
(24)
for all u ∈ Uad , with some constant c not depending on h and u, [19]. In all what follows, u ¯ is a locally optimal reference solution of (PS), hence also of (N). We show the existence of an associated locally optimal solution u¯h of (PSh ), (or (Nh )) converging to u ¯ as h ↓ 0. Our main aim is to estimate |¯ u−u ¯h |. For short, we use the abbreviation α(h) = h2 | log h|. Our error analysis is based on 3 main assumptions. To formulate them, we need some standard definitions of nonlinear programming which are explained below. ˆ ∈ IR+2m by including the box We first extend the vector G(u) ∈ IR to G(u) constraints defining Uad . We add the 2m further components G+i (u) = ua,i − ui ,
for i = 1, . . . , m
G+m+i (u) = ui − ub,i ,
for i = 1, . . . , m,
ˆ and put G(u) := (Gi (u))i=1,...,+2m . Then all constraints of the problem can ˆ be unified by G(u) ≤K 0, where K is re-defined accordingly. We define the Lagrangian function L : IRm × IR+2m by L(u, ν) = f (u) +
+2m
νi Gi (u).
i=1
The index set A(¯ u) of active constraints at u ¯ is defined by A(¯ u) = {i ∈ {1, . . . , + 2m} : Gi (¯ u) = 0} . We now formulate the main assumptions: Robinson regularity condition: At u ¯, it holds that 0 ∈ int {G(¯ u) + G (¯ u)(Uad − u ¯) + K}, u)(u − u ¯)+ k | u ∈ Uad , k ∈ K}. where the set in braces is defined as ∪{G(¯ u)+ G (¯ It is known that this regularity assumption is sufficient for the existence of a Lagrange multiplier ν¯ associated with the (locally) optimal solution u ¯. Strong second-order sufficient optimality condition: For the pair (¯ u, ν¯), it holds ∂ 2 L(¯ u, ν¯) v>0 ∀v ∈ Cu¯ , v = 0, v ∂u2 where Cu¯ ⊂ IRm is defined by Cu¯ = {v | Gi (¯ u)v = 0 ∀i ∈ {1, . . . , k} ∪ {i ∈ {k + 1, . . . , + 2m} : ν¯i > 0}} . Linear independence condition of active gradients: This condition is satu) | i ∈ A(¯ u)} are linearly independent. isfied if all vectors of the set {∇Gi (¯
On Finite Element Error Estimates for Optimal Control Problems
49
Theorem 1. Under the three assumptions stated above, there exists a constant c > 0 not depending on h such that, for all sufficiently small h > 0, a unique locally optimal control u¯h exists in a neighborhood of u ¯ and it holds |¯ u − u¯h | ≤ c h2 | log h|. We do not entirely show this result here. Instead, we show an estimate of the order h | log h|. In this way, we prepare the proof of the full order in [19]. First, we approximate admissible vectors for (N) by admissible ones for (Nh ) with the order α(h) and vice versa. Lemma 1. Suppose that u¯ is feasible for (N ) and satisfies the Robinson regularity condition. Then there are c > 0 independent of h and h0 > 0 such that, for each h ∈ (0, h0 ) an admissible uh for problem (Ph ) exists with |¯ u − uh | ≤ c α(h).
(25)
Proof. To be consistent with the notation of [20], we write ⎧ ⎨ Gh (u), if u ∈ Uad and h > 0, G(u), if u ∈ Uad and h = 0, G(h, u) = ⎩ ∅, if u ∈ / Uad . Thanks to (24), G and ∂G/∂u are continuous at the point (h, u) = (0, u ¯). Moreover, we have G(0, u ¯) ≤K 0. In view of the Robinson regularity condition, the assumptions of the generalized implicit function in [20] are fulfilled. We obtain the existence of neighborhoods N of h = 0 and O of u¯ such that, for all h ∈ N , the inequality G(h, u) ≤K 0 has a solution u ∈ O, and it holds dist[v, Σ(h)] ≤ c |G(h, v)+ |,
∀h ∈ N , ∀v ∈ O,
(26)
where Σ(h) = {u ∈ Uad | G(h, u) ≤K 0} is the solution set of the inequality and dist denotes the Euclidean distance of a point to a set. The value |G(h, v)+ | is the distance of the set G(h, v) + K to the origin and measures the residual of v with respect to the inequality G(h, v) ≤K 0, cf. [20], p. 498. Inserting v = u ¯ in (26), we deduce dist[¯ u, Σ(h)] ≤ c |G(h, u¯)+ | ≤ c(|G(0, u¯)+ | + |G(h, u¯)+ − G(0, u ¯)+ |) ≤ 0 + c α(h). u − uh | ≤ c α(h). The statement is shown. Hence, there exists uh ∈ Σ(h) with |¯ Lemma 2. Let the reference solution u ¯ satisfy the linear independence condition. Then, for all given ρ > 0 and all sufficiently small h > 0, the auxiliary problem ⎧ ⎨ min fh (u) Gh (u) ≤K 0, (27) (Nh,ρ ) ⎩ u ∈ Uad ∩ cl B(¯ u, ρ) is solvable. If u ¯h is any optimal solution to this problem, then an admissible element vh for (N) exists satisfying with some c > 0 independent of h |¯ uh − vh | ≤ c α(h).
(28)
50
F. Tr¨ oltzsch
Proof. (i) Solvability of (Nh,ρ ) : For a positive h0 and all h ∈ (0, h0 ), the admissible set of (Nh,ρ ) is not empty, because uh constructed in Lemma 1 satisfies all constraints. The existence of an optimal u ¯h follows immediately. We have to find ¯h | ≤ c α(h). Below, we cover the inequalvh in Uad with G(vh ) ≤K 0 and |vh − u ˆ ity constraints of Uad by the extended vector function G(u) : IRm → IR+2m ˆ introduced 2 pages before. Let us set in this proof G := G and := + 2m to avoid an extensive use of the hat sign. Hence we have to construct vh such that Gi (vh ) = 0,
i = 1, . . . , k,
Gi (vh ) ≤ 0,
i = k + 1, . . . , .
(ii) Construction of an equation for vh : Notice that u ¯h ∈ cl B(¯ u, ρ) for all h ≤ h0 . Therefore, if ρ is taken small enough, all inactive components Gi (¯ u) are inactive for u ¯h as well and there exists ε > 0 such that uh ) ≤ −ε < 0 Gi (¯
∀i ∈ I,
∀h ≤ h0 ,
(29)
where I is the set of all inactive indices i of u ¯ in {k + 1, . . . , }. Suppose that r constraints are active at u ¯, k ≤ r ≤ m. After renumbering, if necessary, we can assume that those with the numbers 1 ≤ i ≤ r u) = . . . = Gr (¯ u) = 0. By the independence condiare active, hence G1 (¯ u) are linearly independent. If ρ is small tion, the associated gradients ∇Gi (¯ enough, also ∇G1 (¯ uh ), . . . , ∇Gr (¯ uh ) are linearly independent. Consider the ma trix Bh = [∇G1 (¯ uh ), . . . , ∇Gr (¯ uh )] . Since Bh has full rank r, we find an invertible submatrix Dh such that (after renumbering of the components of u, if necessary) Bh = [Dh , Eh ] holds with some matrix Eh . Define Fh : IRr → IRr by ¯h,m ) − Gh,i (¯ uh ), Fh,i (w) := Gi (w, u¯h,r+1 , . . . , u
i = 1, . . . , r.
To find vh , we fix its m − r last components by vh,i := u ¯h,i , i = r + 1, . . . , m, and determine the first r components as the solution w of the system Fh (w) = 0,
(30)
i.e. we set vh,i := wi , i = 1, . . . , r. (iii) Solvability of (30): In this part of the proof, we follow a technique used by Allg¨ ower et al. [21]. We define for convenience w ¯h := (¯ uh,1 , . . . , u ¯h,r ) , w ¯ := ¯r ) and have (¯ u1 , . . . , u |Fh (w ¯h )| ≤ c α(h), (31) since |Gi (¯ uh ) − Gh,i (¯ uh )| ≤ c α(h) holds for all 1 ≤ i ≤ r. Thanks to (24) and the Lipschitz assumptions, there exist γ > 0, β > 0 with Fh (w1 ) − Fh (w2 ) ≤ γ |w1 − w2 | (Fh (w))−1
≤β
∀wi ∈ B(w, ¯ ρ),
∀w ∈ B(w, ¯ ρ)
for all 0 ≤ h ≤ h0 , if ρ is taken sufficiently small. Notice that ∂G(w)/∂w is then close to ∂G(w)/∂w, ¯ and this matrix is invertible. Define η > 0 by η := β|Fh (w ¯h )|. Then (31) implies β γ η/2 ≤ 1 for all 0 < h < h0 , if h0 is sufficiently small.
On Finite Element Error Estimates for Optimal Control Problems
51
Proceeding as in [21], the Mysovskij theorem, cf. Ortega and Rheinboldt [22], p. 412, ensures that the Newton method starting at w0 := w ¯h generates a solution w of (30) in the ball cl B(w ¯h , c0 η), where c0 is a certain constant. It follows from our construction that = 0, i = 1, . . . , k, Gi (vh ) = Gh,i (¯ uh ) ≤ 0, i = k + 1, . . . , r. Moreover, if h is small, Gi (vh ) < 0 holds for r < i ≤ . Therefore, we have that G(vh ) ≤K 0 and vh ∈ Uad . From w ∈ cl B(w ¯h , c0 η) it follows |w − w ¯ h | ≤ c0 η ≤ c α(h), hence also |vh − u ¯h | ≤ c α(h). Lemma 3. If ρ > 0 is taken sufficiently small and h ∈ (0, h0 (ρ)), then all u, ρ). Therefore, they solutions u¯h of the auxiliary problem (Nh,ρ ) belong to B(¯ are also locally optimal for the problem (Nh ). Proof. First, we compare the solution u¯h of (Nh,ρ ) defined in Lemma 1 with uh that is admissible for (Nh,ρ ) and approximates u ¯ with the order α(h). We get fh (¯ uh ) ≤ fh (uh ) ≤ |fh (uh ) − fh (¯ u)| + |fh (¯ u) − f (¯ u)| + f (¯ u). By |fh (¯ u) − f (¯ u)| + |uh − u ¯| + |fh (¯ uh ) − f (¯ uh )| ≤ c α(h) and by the uniform Lipschitz property of fh , we find f (¯ uh ) ≤ f (¯ u) + c1 α(h).
(32)
Next, we compare u ¯ with vh taken from Lemma 2. The assumed second-order sufficient optimality condition implies a quadratic growth condition. Inserting vh in this condition, we obtain for small h u) + ω |¯ u − vh |2 . f (vh ) ≥ f (¯ From |¯ uh − vh | ≤ c α(h) we deduce u) + ω |¯ u−u ¯h |2 . f (¯ uh ) + c2 α(h) ≥ f (¯
(33)
Combining the inequalities (32)–(33), it follows that f (¯ u) + c1 α(h) ≥ f (¯ u) + ω |¯ u − u¯h |2 − c2 α(h) and hence we obtain the stated auxiliary error estimate |¯ u−u ¯h | ≤ c α(h).
(34)
For all sufficiently small h, this estimate implies |¯ u−u ¯h | < ρ so that u ¯h does not touch the boundary of B(¯ u, ρ). In view of this, u ¯h is locally optimal for (Nh ).
52
F. Tr¨ oltzsch
The error estimate (34) is not optimal. We can get rid of the square root. Moreover, we are able to show the stated local uniqueness of u¯h , i.e. uniqueness of local optima of (Nh ) in a neighborhood of u ¯. Both tasks can be accomplished by the stability theory for optimality systems written as generalized equations. This would go beyond the scope of this paper and we refer the reader to the detailed presentation in [19], where we complete the proof of the optimal error estimate stated in the theorem. Moreover, we mention the recent monography [23], where the theory of generalized equations and associated applications are discussed extensively. The same estimate can also be shown for the associated Lagrange multipliers.
References 1. Tr¨ oltzsch, F.: Optimal Control of Partial Differential Equations: Theory, Methods and Applications. Graduate Studies in Mathematics. American Math. Society (to appear, 2010) 2. Casas, E., Tr¨ oltzsch, F.: Error estimates for linear-quadratic elliptic control problems. In: Barbu, V., et al. (eds.) Analysis and Optimization of Differential Systems, pp. 89–100. Kluwer Academic Publishers, Boston (2003) 3. Falk, F.: Approximation of a class of optimal control problems with order of convergence estimates. J. Math. Anal. Appl. 44, 28–47 (1973) 4. Geveci, T.: On the approximation of the solution of an optimal control problem problem governed by an elliptic equation. R.A.I.R.O. Analyse num´erique/ Numerical Analysis 13, 313–328 (1979) 5. Dontchev, A.L., Hager, W.W., Poore, A.B., Yang, B.: Optimality, stability, and convergence in nonlinear control. Applied Math. and Optimization 31, 297–326 (1995) 6. Arada, N., Casas, E., Tr¨ oltzsch, F.: Error estimates for the numerical approximation of a semilinear elliptic control problem. Computational Optimization and Applications 23, 201–229 (2002) 7. Casas, E., Mateos, M., Tr¨ oltzsch, F.: Error estimates for the numerical approximation of boundary semilinear elliptic control problems. Computational Optimization and Applications 31, 193–220 (2005) 8. Casas, E., Mateos, M.: Error estimates for the numerical approximation of boundary semilinear elliptic control problems. Computational Optimization and Applications 39, 265–295 (2008) 9. Casas, E., Raymond, J.P.: Error estimates for the numerical approximation of Dirichlet boundary control for semilinear elliptic equations. SIAM J. Control and Optimization 45, 1586–1611 (2006) 10. Vexler, B.: Finite element approximation of elliptic Dirichlet optimal control problems. Numer. Funct. Anal. Optim. 28, 957–973 (2007) 11. Deckelnick, K., G¨ unther, A., Hinze, M.: Finite element approximation of Dirichlet boundary control for elliptic PDEs on two- and three-dimensional curved domains. Preprint SPP1253-08-05, DFG-Priority Programme 1253 Optimization with Partial Differential Equations (2008) 12. Neitzel, I., Pr¨ ufert, U., Slawig, T.: Strategies for time-dependent PDE control with inequality constraints using an integrated modeling and simulation environment. Numerical Algorithms 50(3), 241–269 (2009)
On Finite Element Error Estimates for Optimal Control Problems
53
13. Hinze, M.: A variational discretization concept in control constrained optimization: the linear-quadratic case. J. Computational Optimization and Applications 30, 45– 63 (2005) 14. Meyer, C., R¨ osch, A.: Superconvergence properties of optimal control problems. SIAM J. Control and Optimization 43, 970–985 (2004) 15. Casas, E.: Error estimates for the numerical approximation of semilinear elliptic control problems with finitely many state constraints. ESAIM, Control, Optimisation and Calculus of Variations 8, 345–374 (2002) 16. Casas, E., Mateos, M.: Uniform convergence of the FEM. Applications to state constrained control problems. J. of Computational and Applied Mathematics 21, 67–100 (2002) 17. Meyer, C.: Error estimates for the finite element approximation of an elliptic control problem with pointwise constraints on the state and the control. Preprint 1159, WIAS Berlin (2006) 18. Deckelnick, K., Hinze, M.: Numerical analysis of a control and state constrained elliptic control problem with piecewise constant control approximations. In: Kunisch, K., Of, G., Steinbach, O. (eds.) Proceedings of ENUMATH 2007, the 7th European Conference on Numerical Mathematics and Advanced Applications, Graz, Austria. Numerical Mathematics and Advanced Applications, pp. 597–604. Springer, Heidelberg (2007) 19. Merino, P., Tr¨ oltzsch, F., Vexler, B.: Error estimates for the finite element approximation of a semilinear elliptic control problem with state constraints and finite dimensional control space. Accepted for publication by ESAIM, Control, Optimisation and Calculus of Variations 20. Robinson, S.M.: Stability theory for systems of inequalities, part ii: differentiable nonlinear systems. SIAM J. Numer. Analysis 13, 497–513 (1976) 21. Allgower, E.L., B¨ ohmer, K., Potra, F.A., Rheinboldt, W.C.: A mesh-independence principle for operator equations and their discretizations. SIAM Journal on Numerical Analysis 23, 160–169 (1986) 22. Ortega, J.M., Rheinboldt, W.C.: Iterative solution of nonlinear equations in several variables. SIAM Publ., Philadelphia (2000) 23. Dontchev, A.L., Rockafellar, R.T.: Implicit functions and solution mappings. A view from variational analysis. Springer Monographs in Mathematics. Springer, New York (2009)
On Some Stability Properties of the Richardson Extrapolation Applied Together with the θ-Method ´ Zahari Zlatev1 , Istv´an Farag´ o2, and Agnes Havasi3 1 National Environmental Research Institute, Aarhus University Frederiksborgvej 399, P.O. Box 358, DK-4000 Roskilde, Denmark
[email protected] 2 Department of Applied Analysis and Computational Mathematics E¨ otv¨ os Lor´ and University, P´ azm´ any P. s. 1/C, H-1117 Budapest, Hungary
[email protected] 3 Department of Meteorology, E¨ otv¨ os Lor´ and University P´ azm´ any P. s. 1/C, H-1117 Budapest, Hungary
[email protected]
Abstract. Consider the system of ordinary differential equation (ODEs) dy/dt = f (t, y) where (a) t ∈ [a, b] with b > a, (b) y is a vector containing s components and (c) y(a) is given. The θ-method is applied to solve approximately the system of ODEs on a set of prescribed grid-points. If N is the number of time-steps that are to be carried out, then this numerical method can be defined by using the following set of relationships yn = yn−1 +h(1−θ)f (tn−1 , yn−1 ), θ ∈ [0.5, 1.0], n = 1, 2, . . . , N , h = (b−a)/N , tn = tn−1 + h = t0 + nh, t0 = a, tN = b. As a rule, the accuracy of the approximations {yn | n = 1, 2, . . . , N }, can be improved by applying the Richardson Extrapolation under the assumption that the stability of the computational process is preserved. Therefore, it is natural to require that the combined numerical method (Richardson Extrapolation plus the θ-method) is in some sense stable. It can be proved that the combined method is strongly A-stable when θ ∈ [2/3, 1.0]. This is the main result in the paper. The usefulness of this result in the solution of many problems arising in different scientific and engineering areas is demonstrated by performing a series of experiments with an extremely badly-scaled and very stiff atmospheric chemistry scheme which is actually used in several well-known large-scale air pollution models.
1
Richardson Extrapolation
Consider the classical initial value problem for systems of ordinary differential equations (ODEs): dy = f (t, y), dt
t ∈ [a, b],
b > a,
y ∈ s ,
f ∈ s ,
s ≥ 1,
with a given initial value y(a) = y0 . I. Lirkov, S. Margenov, and J. Wa´ sniewski (Eds.): LSSC 2009, LNCS 5910, pp. 54–66, 2010. c Springer-Verlag Berlin Heidelberg 2010
(1)
On Some Stability Properties of the Richardson Extrapolation
55
Assume that N ≥ 1 is a given integer and that h = (b − a)/N is a real constant. It is convenient to consider the equidistant grid defined by TN = {tn , n = 0, 1, . . . , N | t0 = a, tn = tn−1 + h (n = 1, 2, . . . , N ), tN = b} (2) but the results could easily be extended for the case where non-equidistant grids are used. It will also be assumed that yn , zn and wn are some approximations of the exact solution y(t) of (1) at the grid-point t = tn ∈ TN . Furthermore, when the Richardson Extrapolation is studied, it is also appropriate to consider an approximation wn−0.5 of y(t) at the inter-mediate grid-point t = tn−0.5 = tn − 0.5h. The simplest version of the Richardson Extrapolation can be formulated as follows. Assume that two approximations zn and wn of y(tn ) have been calculated by using a numerical method of order p and stepsizes h and 0.5h respectively. The following two relationships can be written by exploiting the fact that the order of the selected numerical method is p: y(tn ) = zn + hp K + O(hp+1 ),
(3)
y(tn ) = wn + (0.5h)p K + O(hp+1 ),
(4)
where K is some quantity depending on the numerical method applied in the calculation of zn and wn . Eliminating K from (3) and (4) gives: y(tn ) =
2p wn − zn + O(hp+1 ). 2p − 1
Denote
(5)
2p wn − zn . (6) 2p − 1 It is clear that the approximation yn , being of order p + 1, will in general be more accurate than both zn and wn (at least when the stepsize h is sufficiently small). Thus, the Richardson extrapolation can be used in the efforts to improve the accuracy of the approximate solution. There are two ways of implementing the Richardson Extrapolation. Assume that the three approximations zn , wn and yn are already calculated and it is necessary to calculate the next approximation yn+1 . The first implementation is based on the use of zn and wn as initial values in the calculation of zn+1 and wn+1 respectively and after that the improved approximation yn+1 can be calculated by utilizing the obtained values of zn+1 and wn+1 . This computational device is called Passive Richardson Extrapolation. In the second approach the calculated improvement yn is used as an initial value in the calculation of both zn+1 and wn+1 . After that again zn+1 and wn+1 are applied to calculate yn+1 . This device is called Active Richardson Extrapolation. It must be emphasized here that only the Active Richardson Extrapolation will be studied in the remaining part of this paper. Moreover, the word “active” will always be omitted. yn =
´ Havasi Z. Zlatev, I. Farag´ o, and A.
56
2
Combining the Richardson Extrapolation and the θ-Method
The Richardson Extrapolation can be combined with the θ-method by using a computational process containing three stages: Stage 1. Perform a large time-step: zn = yn−1 + h[(1 − θ)f (tn−1 , yn−1 ) + θf (tn , zn )].
(7)
Stage 2. Perform two small time-steps: wn−0.5 = yn−1 + 0.5h[(1 − θ)f (tn−1 , yn−1 ) + θf (tn−0.5 , wn−0.5 )],
(8)
wn = wn−0.5 + 0.5h[(1 − θ)f (tn−0.5 , wn−0.5 ) + θf (tn , wn )].
(9)
Stage 3. Calculate an improved approximation: yn = 2wn − zn when θ = 0.5 and yn =
4wn − zn when θ = 0.5. 3
(10)
It is clear that at any time-step n (n = 1, 2, . . . , N ) it is necessary to perform (i) one large step by applying (7), (ii) two small steps by applying (8) - (9) and (iii) to combine the calculated approximations by using (10). The θ-method is reduced to the second-order Trapezoidal Rule when θ = 0.5. Therefore, (6) is used with p = 2 for θ = 0.5. The θ-method is of order one when θ = 0.5 and (6) has to be used with p = 1 in this case.
3
Studying the Stability Properties
The scalar linear test-equation (proposed originally by Dahlquist in 1963, [4]) dy = λy, dt
(11)
where λ is a given complex constant, is often used in stability studies. The application of formulae (7) - (9) in the numerical solution of (11) results in zn = and
wn =
1 + (1 − θ)μ yn−1 1 − θμ
1 + (1 − θ)(0.5μ) 1 − θ(0.5μ)
(12)
2 yn−1 ,
(13)
where μ = λh is a complex number. The application of the Richardson Extrapolation leads to the following relationship: yn = R(μ)yn−1 (14)
On Some Stability Properties of the Richardson Extrapolation
with
1 + (1 − θ)(0.5μ) R(μ) = 2 1 − θ(0.5μ) and R(μ) =
4 3
1 + 0.25μ 1 − 0.25μ
2 −
2 −
1 + (1 − θ)μ for θ = 0.5 1 − θμ
1 1 + 0.5μ for θ = 0.5. 3 1 − 0.5μ
57
(15)
(16)
The last three formulae show clearly that the application of the Richardson Extrapolation combined with the θ-method in the solution of the test-problem (11) is equivalent to the application of a one-step method for solving ODEs with a stability function R(μ). We are mainly interested in solving stiff systems of ODEs. Therefore, it is important to preserve the stability of the computational process. Several definitions related to the stability properties of numerical methods with stability function R(μ) are proposed in the literature (see, for example [2,3,6,7,8,11]). The following three definitions will be used in this paper. Definition 1. Consider the set S containing all values of μ = α + iβ for which |R(μ)| ≤ 1. If S ⊃ C = {ν | ν = γ + iδ, γ ≤ 0}, then the method with stability function R(μ) is called A-stable (see, for example, [6]). Definition 2. A numerical method with stability function R(μ) is said to be strongly A-stable if it is A-stable and if the relationship: lim (|R(μ)|) ≤ ζ,
µ→∞
(17)
where ζ ≤ 1 is some non-negative constant, holds (see [7]). Remark 1. If μ ∈ C, then μ → ∞ will always mean that |μ| grows beyond any assigned positive real number. Definition 3. A numerical method with a stability function R(μ) is called Lstable if it is A-stable and the relationship: lim (|R(μ)|) = 0
µ→∞
(18)
holds (this definition was originally proposed in [5], see also [6,7]). It should be pointed out here that the class of the L-stable methods is a sub-class of the class of strongly A-stable methods, which in their turn form a class which is a sub-class of the class of A-stable methods. The use of numerical methods, which are at least A-stable, is desirable when the system of ODEs is stiff. It is well-known (see, for example, [7]) that the θ-method is – A-stable when θ = 0.5 (the Trapezoidal Rule), – strongly A-stable when θ ∈ (0.5, 1.0), – L-stable when θ = 1.0 (the Backward Euler Formula).
58
´ Havasi Z. Zlatev, I. Farag´ o, and A.
Therefore, it is assumed in this paper that θ ∈ [0.5, 1.0]. The following results related to the stability of the combination consisting of the Richardson Extrapolation and the θ-method can be formulated and proved (see [14,15]). Theorem 1. The combined numerical method consisting of the Richardson Extrapolation and the Trapezoidal Rule is not A-stable. Theorem 2. The combined numerical method consisting of the Richardson Extrapolation and the Backward Euler Formula is L-stable. Theorem 3. The numerical method consisting of a combination of the Richardson Extrapolation and the θ-method is strongly A-stable when θ ∈ (2/3, 1.0]. Theorem 1 shows that that the Richardson Extrapolation may cause stability problems when it is combined with the Trapezoidal Rule and when stiff systems of ODE’s are solved. Theorem 2 and Theorem 3 show that the Richardson Extrapolation has stronger stability properties when the Backward Euler Formula is used instead of the θ-method with θ ∈ (2/3, 1.0). The following theorem indicates that if the accuracy requirements are to be taken into account, then it might be more preferable to apply the θ-method with θ ∈ (2/3, 1.0). Theorem 4. The principal part of the local truncation error of the θ-method with θ = 0.75 is twice smaller than that of the Backward Euler Formula.
4
Introduction of an Atmospheric Chemistry Scheme
An atmospheric chemistry scheme containing s = 56 species was applied in the experiments results of which will be presented below. This chemistry scheme is used in several well-known environmental models (for example, in the EMEP models, [9], and UNI-DEM, [12,13]). The chemistry scheme is described mathematically as a non-linear system of ODEs of type (1). This example is extremely difficult because (a) it is badly scaled and (b) some species vary very quickly during the periods of changes from day-time to night-time and from night-time to day-time. The bad scaling is demonstrated by the results given in Table 1, where the maximal, minimal and mean values of the concentrations of several chemical species during a time-interval of 24 hours are given. It is clearly seen that while Table 1. Maximal, minimal and mean concentrations of some species during a timeperiod of 24 hours. The units are numbers of molecules per cubic centimetre. Species NO N O2 Ozone OH Isoprene
Maximal value 2.5E+09 2.4E+10 1.8E+12 2.3E+07 3.7E+09
Minimal value 8.4E+04 3.7E+08 1.4E+12 3.3E+04 1.1E+06
Mean value 5.5E+08 4.3E+09 1.5E+12 6.3E+06 1.5E+09
On Some Stability Properties of the Richardson Extrapolation
1.5*10
4
1.0*10
4
5.0*10
3
0.0*10
0
12 13 14 15 16 17 18 19 20 21 22 23 24 01 02 03 04 05 06
7
8
59
9 10 11 12
Fig. 1. Diurnal variation of the concentrations of the chemical species OP (P state of oxygen)
some species (as nitrogen dioxide and ozone) do not vary too much, other species vary in a very wide range (sometimes by many orders of magnitude; see also Fig. 1 and Fig. 2). The steep gradients of some of the concentrations in the critical parts of the time-interval (changes from day-time to night-time and from night-time to day-time) are demonstrated in the plots drawn in Fig. 1 and Fig. 2. The concentrations of some species are growing during the day (an example is given in Fig. 1), while other concentrations are growing during the night (see Fig. 2).
5
Organization of the Numerical Experiments
The atmospheric chemistry scheme discussed in the previous section is handled on the time-interval [a, b] = [43200, 129600]. The value a = 43200 corresponds to twelve o’clock at the noon (measured in seconds starting from the mid-night), while b = 129600 corresponds to twelve o’clock on the next day. Thus, the length of the time-interval is 24 hours (86400 seconds) and it contains important changes from day-time to night-time and from night-time to day-time when most of the chemical species are quickly varying. In each experiment the first run is performed by using N = 168 time-steps (this means that the time-stepsize is h ≈ 514.285 seconds). After that the stepsize h is halved eighteen times (which implies that the number of time-steps, N , is
60
´ Havasi Z. Zlatev, I. Farag´ o, and A.
8
*10
2
1
12 13 14 15 16 17 18 19 20 21 22 23 24 01 02 03 04 05 06
7
8
9 10 11 12
Fig. 2. Diurnal variation of the concentrations of the chemical species N O3
doubled in every successive run). The behavior of the errors in this sequence of 19 runs is studied. The error made in each run is measured in the following way. Denote: ref |ym,k − ym,k | ERRm = max , (19) ref |, 1.0) k=1,2,...,56 max(|ym,k ref where ym,k and ym,k are the calculated value and the reference solution of the k-th chemical species at time tm = t0 + mh0 (where m = 1, 2, . . . , 168 and h0 ≈ 514.285 is the time-stepsize that has been used in the first run). The reference solution was calculated by using a three-stage fifth-order L-stable fully implicit Runge-Kutta algorithm with N = 998244352 and href ≈ 6.1307634E − 05. This algorithm is often called RADAU IIA (the formulae on which it is based are listed for example in [2,3,6,8]). It is clear from the above discussion that only the values of the reference solution at the grid-points of the coarse grid used in the first run have been stored and applied in the evaluation of the error (it is, of course, also possible to store all values of the reference solution, but such an action will increase tremendously the storage requirements). The global error made during the computations is estimated by
ERR =
max
(ERRm ).
m=1,2,...,168
(20)
It is desirable to eliminate the influence of the rounding errors when the quantities involved in (19) and (20) are calculated. Normally, this task can easily be
On Some Stability Properties of the Richardson Extrapolation
61
accomplished when double precision arithmetic is used. Unfortunately, this is not true when the atmospheric chemistry scheme is handled. The difficulty can be explained as follows. If the problem is stiff, and the atmospheric chemistry scheme selected by us is as mentioned above a stiff non-linear system of ODEs, then implicit numerical methods are to be used. The application of such numerical methods leads to the solution of systems of non-linear algebraic equations, which are normally treated at each time-step by the Newton Iterative Method (see, for example, [6]). This means that long sequences of systems of linear algebraic equations are to be handled. Normally this does not cause great problems. However, the atmospheric chemistry schemes are very badly scaled and the condition numbers of the involved matrices are very large. It was found, by applying a LAPACK subroutine for calculating eigenvalues and condition numbers ([1]), that the condition numbers of the matrices involved in the Newton Iterative Process during the numerical integration of the atmospheric chemistry scheme on the interval [a, b] = [43200, 129600] vary in the range [4.56E + 08, 9.27E + 12]. Simple application of error analysis arguments from [10] indicates that there is a danger that the rounding errors will affect the fourth significant digit of the approximate solution on most of the existing computers when double precision arithmetic is used. Therefore, all computations reported in the next section were performed by using quadruple-precision (i.e. by using REAL*16 declarations for the real numbers and, thus, about 32-digit arithmetic) in order to eliminate the influence of the rounding errors on the first 16 significant digits of the computed approximate solutions.
6
Results from the Numerical Experiments
The results presented in the next three sub-sections will demonstrate that the combined method (consisting of the Richardson Extrapolation and a strongly Astable θ-method) has the following properties: (a) it behaves as a second-order numerical method, (b) for some values of θ the results produced by this method are more accurate than the results produced by the combination consisting of the Richardson Extrapolation and the Backward Euler Formula (which is obtained when θ = 1.0) and (c) it is much more efficient (in terms of the computing time) when a prescribed accuracy is required. 6.1
Achieving Second Order of Accuracy
Results obtained by using the θ-method directly and in combination with the Richardson Extrapolation are given in Table 2. The value θ = 0.75 is selected, which means that (17) is satisfied with ζ = 5/9. It is clearly seen the θ-method performs as (a) a first-order method when it is applied directly and (b) a stable second-order method when it is used as an underlying method in the Richardson Extrapolation.
62
´ Havasi Z. Zlatev, I. Farag´ o, and A.
Table 2. Numerical results obtained in (a) 19 runs in which the direct implementation of the θ-method with θ = 0.75 is used and (b) 19 runs in which the combination consisting of the Richardson Extrapolation and the θ-method with θ = 0.75 is applied. The errors obtained by (20) are given in the columns under “Accuracy”. The ratios of two successive errors are given in the columns under “Rate”.
Job No. Time-steps 1 168 2 336 3 672 4 1344 5 2688 6 5376 7 10752 8 21504 9 43008 10 86016 11 172032 12 344064 13 688128 14 1376256 15 2752512 16 5505024 17 11010048 18 22020096 19 44040192
6.2
Direct use of the θ-method Richardson Extrapolation Accuracy Rate Accuracy Rate 1.439E-00 3.988E-01 6.701E-01 2.147 5.252E-02 7.593 3.194E-01 2.098 1.503E-02 3.495 1.550E-01 2.060 3.787E-03 3.968 7.625E-02 2.033 9.502E-04 3.985 3.779E-02 2.018 2.384E-04 3.986 1.881E-02 2.009 5.980E-05 3.986 9.385E-03 2.005 1.499E-05 3.989 4.687E-03 2.002 3.754E-06 3.993 2.342E-03 2.001 9.394E-07 3.996 1.171E-03 2.001 2.353E-07 3.993 5.853E-04 2.000 6.264E-08 3.756 2.926E-04 2.000 1.618E-08 3.873 1.463E-04 2.000 4.111E-09 3.935 7.315E-05 2.000 1.036E-09 3.967 3.658E-05 2.000 2.601E-10 3.984 1.829E-05 2.000 6.514E-11 3.993 9.144E-06 2.000 1.628E-11 4.001 4.572E-06 2.000 4.051E-12 4.019
Comparing Results Obtained by the Backward Euler Formula and the θ-Method
Theorem 4 indicates that one should expect, as stated above, the θ-method with θ = 0.75 to be more accurate than the Backward Euler Formula. Some results are shown in Table 3. It is seen that the accuracy of the results obtained by using the θ-method with θ = 0.75 is indeed considerably better than those obtained by the Backward Euler Formula (see the figures given in the third and the fifth columns of Table 3). It is remarkable that the accuracy is improved precisely by a factor of two when the time-stepsize becomes sufficiently small. It is not clear how to derive corresponding expressions for the principal parts of the local truncation error when the Richardson Extrapolation is used together with these two numerical methods for solving systems of ODEs. However the results in Table 3 show that the accuracy is in general improved by a factor greater than two when the θ-method with θ = 0.75 is used as an underlying method instead of the Backward Euler Formula.
On Some Stability Properties of the Richardson Extrapolation
63
Table 3. Comparison of the accuracy achieved when the Backward Euler Formula and θ-method with θ = 0.75 are run with 19 different time-stepsizes. The errors obtained by (20) are given in the columns in this table. The ratios (the errors obtained when the θ-method with θ = 0.75 is used divided by the corresponding errors obtained when the Backward Euler Formula is used) are given in brackets. Backward Euler Formula Job No. Time-steps Direct Richardson 1 168 2.564E-00 3.337E-01 2 336 1.271E-00 1.719E-01 3 672 6.227E-01 5.473E-02 4 1344 3.063E-01 7.708E-03 5 2688 1.516E-01 1.960E-03 6 5376 7.536E-02 5.453E-04 7 10752 3.757E-02 1.455E-04 8 21504 1.876E-02 3.765E-05 9 43008 9.371E-03 9.583E-06 10 86016 46842E-03 2.418E-06 11 172032 2.241E-03 6.072E-07 12 344064 1.171E-03 1.522E-07 13 688128 5.853E-04 3.809E-08 14 1376256 2.926E-04 9.527E-09 15 2752512 1.463E-04 2.382E-09 16 5505024 7.315E-05 5.957E-10 17 11010048 3.658E-05 1.489E-10 18 22020096 1.829E-05 3.720E-11 19 44040192 9.144E-06 9.273E-12
6.3
The θ-method with θ = 0.75 Direct Richardson 1.439E-00 (0.561) 3.988E-01 (1.195) 6.701E-01 (0.527) 5.252E-02 (0.306) 3.194E-01 (0.513) 1.503E-02 (0.275) 1.550E-01 (0.506) 3.787E-03 (0.491) 7.625E-02 (0.503) 9.502E-04 (0.484) 3.779E-02 (0.501) 2.384E-04 (0.437) 1.881E-02 (0.501) 5.980E-05 (0.411) 9.385E-03 (0.500) 1.499E-05 (0.398) 4.687E-03 (0.500) 3.754E-06 (0.392) 2.342E-03 (0.500) 9.394E-07 (0.389) 1.171E-03 (0.500) 2.353E-07 (0.388) 5.853E-04 (0.500) 6.264E-08 (0.411) 2.926E-04 (0.500) 1.618E-08 (0.425) 1.463E-04 (0.500) 4.111E-09 (0.432) 7.315E-05 (0.500) 1.036E-09 (0.435) 3.658E-05 (0.500) 2.601E-10 (0.437) 1.829E-05 (0.500) 6.514E-11 (0.437) 9.144E-06 (0.500) 1.628E-11 (0.438) 4.572E-06 (0.500) 4.051E-12 (0.437)
Comparing Computing Times Needed to Obtain Prescribed Accuracy
Three time-steps (one large and two small) with the underlying numerical method are necessary when one time-step of the Richardson Extrapolation is performed. This means that if the Richardson Extrapolation and the underlying numerical method are used with the same time-stepsize, then the computational cost of the Richardson Extrapolation will be about three times greater than that of the underlying numerical method. In many practical situations this factor will be less than three, but considerably larger than two (because the number of Newton iterations needed for each of the two small time-steps will normally be smaller than the corresponding number for the large time-step). However, the use of the Richardson Extrapolation leads also to an improved accuracy of the calculated approximations (see Table 2). Therefore, it is not relevant (and not fair either) to compare the Richardson Extrapolation with the underlying method under the assumption that both devices are run with equal number of time-steps. It is much more relevant to investigate how much work is needed in order to achieve the same accuracy in the cases where (a) the θ-method with θ = 0.75 is applied directly and (b) when it is combined with the Richardson
64
´ Havasi Z. Zlatev, I. Farag´ o, and A.
Table 4. Comparison of the computational costs (measured by the CPU times given in hours) needed to achieve prescribed accuracy in the cases where (a) the θ-method with θ = 0.75 is implemented directly and (b) the Richardson Extrapolation with the θ-method is used. The computing times measured in hours are given in the columns under “CPU time”. The numbers of time-steps needed to obtain the desired accuracy are given in the columns under “Time-steps”.
Accuracy [1.0E − 2, 1.0E [1.0E − 3, 1.0E [1.0E − 4, 1.0E [1.0E − 5, 1.0E [1.0E − 6, 1.0E
− 1) − 2) − 3) − 4) − 5)
The θ-method with θ = 0.75 Richardson Extrapolation CPU time Time-steps CPU time Time-steps 0.0506 2686 0.0614 336 0.1469 21504 0.0897 1344 1.1242 344032 0.1192 2688 6.6747 2752512 0.2458 10752 43.0650 22020096 0.6058 43008
Extrapolation. The computing times needed in the efforts to achieve prescribed accuracy are given in Table 4. If the desired accuracy is 10−k (k = 1, 2, 3, 4, 5), then the computing time achieved in the first run in which the quantity from (20) becomes less than 10−k is given in Table 4. This means that the error is in the interval [10−(k+1) , 10−k ) when accuracy of order 10−k is required. The results shown in Table 4 show clearly that not only is the Richardson Extrapolation a powerful tool for improving the accuracy of the underlying numerical method, but it is also extremely efficient with regard to the computational cost (this being especially true when the accuracy requirement is not very low). It must be emphasized here that accuracy better than 10−6 was not achieved in the 19 runs when the θ-method is used directly, while accuracy better than 10−11 can be achieved when this method is combined with the Richardson Extrapolation (see Table 2).
7
Concluding Remark
Important properties of the Richardson Extrapolation were studied in the previous sections of this paper. Theorems related to the stability of the computational process and the accuracy of the results were formulated. Numerical experiments were carried out to demonstrate (a) the improvement of the accuracy by applying the Richardson Extrapolation, (b) the better accuracy properties of the combination Richardson Extrapolation plus the θ-method with θ = 0.75 when compared with the accuracy achieved by using the combination the Richardson Extrapolation plus the Backward Euler Formula and (c) the great savings in computing time achieved when a prescribed accuracy is required. There are still many open problems which will be studied in the near future. Four of the open problems are listed below: – It seems plausible to conjecture that Theorem 3 could be extended for some implicit or diagonally-implicit Runge-Kutta methods for solving systems of ODEs when these are to be applied together with the Richardson Extrapolation.
On Some Stability Properties of the Richardson Extrapolation
65
– It is desirable to obtain some stability results related to the Richardson Extrapolation applied together with strongly stable or L-stable numerical methods for solving systems of ODEs that are of higher order, i.e., for combined numerical methods with p ≥ 3. – Theorem 4 and the results shown in Table 3 indicate that not only should one select methods of a given order, but it is also necessary to try to improve the results by designing methods with a smaller (in absolute value) principal parts of the local truncation error. – Splitting procedures can be useful in many applications. It is interesting to try to prove in a rigorous way when the splitting procedures preserve the stability properties of the combined methods (i.e. when the combination consisting of the Richardson Extrapolation plus some splitting procedure plus the numerical methods for solving system of ODEs has good stability properties).
Acknowledgements The research of Z. Zlatev was partly supported by the NATO Scientific Programme (Collaborative Linkage Grants: No. 980505 “Impact of Climate Changes on Pollution Levels in Europe” and No. 98624 “Monte Carlo Sensitivity Studies of Environmental Security”). The Danish Centre for Supercomputing gave us access to several powerful parallel computers for running a long sequence of numerical experiments. ´ The research of Istv´ an Farag´ o and Agnes Havasi was supported by Hungarian Scientific Research Fund (OTKA), grants No. K67819 and F61016, respectively.
References 1. Anderson, E., Bai, Z., Bischof, C., Demmel, J., Dongarra, J., Du Croz, J., Greenbaum, A., Hammarling, S., McKenney, A., Ostrouchov, S., Sorensen, D.: LAPACK: Users’ Guide. SIAM, Philadelphia (1992) 2. Burrage, K.: Parallel and Sequential Methods for Ordinary Differential Equations. Oxford University Press, Oxford (1995) 3. Butcher, J.C.: The Numerical Analysis of Ordinary Differential Methods: RungeKutta and General Linear Methods, 2nd edn. Wiley, New York (2003) 4. Dahlquist, G.: A special stability problem for linear multistep methods. BIT 3, 27–43 (1963) 5. Ehle, B.L.: On Pad´e approximations to the exponential function and A-stable methods for the numerical solution of initial value problems. Research Report CSRR 2010, Dept. AACS, University of Waterloo, Ontario, Canada (1969) 6. Hairer, E., Wanner, G.: Solving Ordinary Differential Equations: II Stiff and Differential-Algebraic Problems. Springer, Berlin (1991) 7. Hundsdorfer, W., Verwer, J.G.: Numerical Solution of Time-Dependent AdvectionDiffusion-Reaction Equations. Springer, Berlin (2003) 8. Lambert, J.D.: Numerical Methods for Ordinary Differential Equations. Wiley, New York (1991)
66
´ Havasi Z. Zlatev, I. Farag´ o, and A.
9. Simpson, D., Fagerli, H., Jonson, J.E., Tsyro, S.G., Wind, P., Tuovinen, J.-P.: Transboundary Acidification, Eutrophication and Ground Level Ozone in Europe, Part I. Unified EMEP Model Description. EMEP/MSC-W Status Report 1/2003, Norwegian Meteorological Institute, Oslo, Norway (2003) 10. Wilkinson, J.H.: The algebraic eigenvalue problem. Oxford University Press, Oxford (1965) 11. Zlatev, Z.: Modified diagonally implicit Runge-Kutta methods. SIAM Journal on Scientific and Statistical Computing 2, 321–334 (1981) 12. Zlatev, Z.: Computer treatment of large air pollution models. Kluwer Academic Publishers, Dordrecht (1995) 13. Zlatev, Z., Dimov, I.: Computational and Environmental Challenges in Environmental Modelling. Elsevier, Amsterdam (2006) ´ Some stability properties and applications of the 14. Zlatev, Z., Farag´ o, I., Havasi, A.: Richardson Extrapolation (submitted, 2009) ´ Stability of Richardson Extrapolation and the 15. Zlatev, Z., Farag´ o, I., Havasi, A.: θ-method (submitted, 2009)
On the Use of Aggregation-Based Parallel Multilevel Preconditioners in the LES of Wall-Bounded Turbulent Flows Andrea Aprovitola1 , Pasqua D’Ambra2 , Daniela di Serafino3 , and Salvatore Filippone4
2
1 Engine Institute (IM) — CNR Via Marconi, 8 I-80125 Naples, Italy
[email protected] Institute for High-Performance Computing and Networking (ICAR) — CNR Via Pietro Castellino 111, I-80131 Naples, Italy
[email protected] 3 Department of Mathematics, Second University of Naples Via Vivaldi 43, I-81100 Caserta, Italy
[email protected] 4 Department of Mechanical Engineering, University of Rome Tor Vergata, Viale del Politecnico 1, I-00133, Rome, Italy
[email protected]
Abstract. This work is concerned with the application of algebraic multilevel preconditioners in the solution of pressure linear systems arising in the large eddy simulation of turbulent incompressible flows in wallbounded domains. These systems, coming from the discretization of elliptic equations with periodic and Neumann boundary conditions, are large and sparse, singular, compatible, and nonsymmetric because of the use of non-uniform grids taking into account the anisotropy of the flow. Furthermore, they generally account for a large part of the simulation time. We analyse, through numerical experiments, the effectiveness of parallel algebraic multilevel Schwarz preconditioners, based on the smoothed aggregation technique, in the iterative solution of the above pressure systems. We also investigate the behaviour of a variant of the smoothed aggregation technique, recently developed to efficiently deal with nonsymmetric systems.
1
Introduction
Projection methods are widely used to solve the Navier-Stokes (N-S) equations for incompressible flows [8]. A main task of these methods consists in computing a velocity field which is divergence-free in some discrete sense; this requires the solution of a discretized elliptic equation, known as pressure-correction or simply pressure equation, which is derived from the continuity equation. In this paper we consider a projection method applied to suitably filtered N-S equations in the context of the Large Eddy Simulation (LES) of wall-bounded turbulent flows. I. Lirkov, S. Margenov, and J. Wa´ sniewski (Eds.): LSSC 2009, LNCS 5910, pp. 67–75, 2010. c Springer-Verlag Berlin Heidelberg 2010
68
A. Aprovitola et al.
The related pressure systems are large and sparse, singular, compatible, with symmetric nonzero pattern and nonsymmetric values. The computational cost of solving these systems is very high if large Reynolds numbers are considered, thus calling for high-performance solvers. Krylov methods coupled with multilevel preconditioners have shown their effectiveness in the context of CFD, as well as in a variety of other applications. Increasing interest has been devoted to algebraic multilevel domain decomposition preconditioners because of their general applicability and their suitability to modern parallel and distributed computing systems. The MultiLevel Domain Decomposition Parallel Preconditioners Package based on PSBLAS (MLD2P4) [7] implements algebraic multilevel preconditioners using Schwarz domain decomposition methods and the smoothed aggregation coarsening technique; the preconditioners can be used with the Krylov solvers available in the PSBLAS library [9]. MLD2P4 has been developed following an object-based design translated into Fortran 95, to pursue goals such as extensibility and flexibility, while preserving runtime efficiency and immediate interfacing with Fortran codes. The package has been integrated in a Fortran 90 LES code implementing the above projection method, to analyse the behaviour of different multilevel preconditioners in the solution of the pressure equations, with the final goal of identifying the most effective ones in the parallel simulation of wall-bounded turbulent flows. We note that the classical smoothed aggregation technique produces very effective multilevel algorithms for symmetric positive-definite systems, where good characterizations of the so-called algebraically smooth error are available. Recently, a generalization of the smoothed aggregation has been developed to efficiently deal with nonsymmetric systems and has been implemented in the ML package [10] available in the Trilinos C++ framework [11]. In order to analyse the impact of this aggregation strategy on the pressure systems arising in our LES application, we performed numerical experiments with ML preconditioners using such strategy. In this case the analysis was carried out “offline”, i.e. by extracting the linear systems at certain steps of the simulation. The paper is organized as follows. In Section 2 we outline the LES approach and the approximate projection method used in our application, focusing on the pressure equation and the related linear system. In Section 3 we briefly describe the algebraic multilevel Schwarz preconditioners, focusing on the smoothed aggregation technique and its generalization. In Section 4 we discuss the results of the numerical experiments carried out with MLD2P4 and ML. In Section 5 we draw some conclusions and future work.
2
Pressure Systems in the LES of Turbulent Channel Flows
We consider incompressible and homothermal turbulent flows modelled by initial boundary-value problems for the N-S equations. In our LES approach, a tophat filter and an approximate deconvolution operator are applied to the N-S
Parallel Multilevel Preconditioners in the LES of Wall-bounded Flows
69
equations in non-dimensional weak conservation form, obtaining the following formulation: ∂ v · n dS = 0, v A−1 (1) = fconv + fdif f + fpress , x ∂t ∂Ω(x)
= Ax (¯ where Ω(x) is a finite volume contained into the region of the flow, v v) is the unknown velocity field, with Ax approximate differential deconvolution ¯ resolved velocity field, and fconv , fdif f and fpress are the convecoperator and v tive, diffusive and pressure fluxes, respectively. Note that the pressure unknown is introduced in the N-S equations through the fpress term. An implicit subgridscale modelling is used, thus the unresolved subgrid-scale terms do not appear explicitly in the equations. For details the reader is referred to [2]. We focus on the solution of channel flows, with periodic boundary conditions assigned in the stream-wise (x) and span-wise (z) directions, and no-slip boundary conditions on the walls (y direction). The domain is discretized by using a structured Cartesian grid, with uniform grid spacings in the x and z directions, where the flow is assumed to be homogeneous, and a non-uniform grid spacing in the y direction, with refinement near the walls. The equations (1) are discretized by using a finite volume method, with flow variables co-located at the centers of the control volumes; a third-order multidimensional upwind scheme is used for the fluxes. A time-splitting technique based on an approximate projection method [3] is used to decouple the velocity from the pressure in the deconvolved ˜ is computed, at each time momentum equation; the unknown velocity field v step, through the following predictor-corrector formula: n+1 = v∗ − Δt∇φn+1 ; v where v∗ is an intermediate velocity field, Δt is the time step, and φ is a scalar field such that ∇φ is an O(Δt) approximation of the pressure gradient. The intermediate velocity v∗ is computed applying a second-order AdamsBashforth/Crank-Nicolson semi-implicit scheme to the deconvolved momentum equation, where the pressure term is neglected. The correction term, needed which is divergence-free in a discrete sense, is obto have a velocity field v tained by computing φn+1 as the solution of the so-called pressure equation, which is a Poisson-type equation with periodic and non-homogeneous Neumann boundary conditions. By discretizing this equation with a second-order central finite-volume scheme we obtain a linear system Ax = b,
(2)
where A = (aij ) ∈ n×n is a sparse matrix with symmetric sparsity pattern, but nonsymmetric values because of the use of a non-uniform grid in the y direction. System (2) is singular but compatible, since a discrete compatibility condition is enforced according to the boundary conditions. The dimension n of the matrix A depends on the number of cells of the discretization grid and hence, owing to
70
A. Aprovitola et al.
resolution needs, increases as the Reynolds number grows; therefore the efficient solution of the above linear system is a key point in the LES simulations. We note that R(A) ∩ N (A) = {0}, where R(A) and N (A) are the range space and the null space of A; this property, coupled with the compatibility of the linear system, ensures that in exact arithmetic the GMRES method computes a solution before a breakdown occurs [4]. Usually, the Restarted GMRES (RGMRES) method is used and the application of an effective preconditioner that reduces the condition number of the restriction of A to R(A) is crucial to decrease the number of iterations to achieve a required accuracy in the solution.
3
Algebraic Multilevel Domain Decomposition Preconditioners
Multilevel domain decomposition preconditioners are based on the idea of applying basic domain decomposition methods to a hierarchy of matrices that represent, in increasingly coarser spaces, the matrix A to be preconditioned, and of combining contributions coming from each space. The use of multiple spaces (levels) allows to correct “errors” arising in the application of the basic domain decomposition methods to the original matrix. These preconditioners are built from two main components: a basic domain decomposition preconditioner (smoother), acting on a single matrix in the hierarchy, and a coarse-space correction, which computes a correction to the basic preconditioner by transferring suitable information from the current space to the next coarser one and vice versa, and by solving a linear system involving the coarse-space matrix. Among the multilevel preconditioners, the algebraic ones generate the hierarchy of matrices using only information on A, without assuming any knowledge of the problem from which it arises. A nice feature of these preconditioners is that they combine a wide applicability with the capability of automatically adapting to specific features of the problem to be solved. A description of algebraic multilevel domain decomposition preconditioners is beyond the scope of this paper. For details the reader is referred, e.g., to [14,15]. We only outline the smoothed aggregation technique for the construction of the hierarchy of matrices and of the related coarse-to-fine and fine-to-coarse transfer operators [5,16], which is used by the multilevel preconditioners discussed in this paper. Given the set of row (column) indices Ω k = {1, 2, . . . , nk } of the matrix Ak concerning a certain level k in the hierarchy, the smoothed aggregation generates a coarser index set Ω k+1 = {1, 2, . . . , nk+1 } by suitably grouping strongly k coupled indices into disjoint subsets called aggregates. Two indices i, j ∈ Ω are considered strongly coupled if |akij | ≥ · |akii akjj |, where > 0 is a given threshold. Then a tentative prolongator P k ∈ nk ×nk+1 is built, where each column identifies an aggregate. According to the general theory on aggregation-based multilevel methods, P k should be such that its range contains the near null space of the matrix Ak , i.e. the eigenspace associated to the smallest eigenvalue of Ak . We consider the following definition of P k
Parallel Multilevel Preconditioners in the LES of Wall-bounded Flows
Pk = ( pkij ),
pkij =
1 0
if point i is in the aggregate j , otherwise
71
(3)
which is a piecewise constant interpolation operator whose range includes the space spanned by the vector of all ones, i.e. the near null space of the matrix Ak related to the pressure linear system of our LES application. The actual prolongator P k is obtained by applying a damped Jacobi smoother to P k , P k = (I − ω k (Dk )−1 Ak )Pk , where ω k is a suitable damping parameter and Dk is the diagonal part of Ak . The coarse matrix Ak+1 is built by using the Galerkin approach, i.e. Ak+1 = Rk Ak P k , with Rk = (P k )T . Convergence results concerning multilevel methods based on the previous aggregation strategy confirm the effectiveness of the above approach for symmetric positive definite matrices [17]. Parallel algebraic multilevel domain decomposition preconditioners are implemented in MLD2P4 [6,7]. Additive Schwarz methods are used as basic preconditioners and the coarse-level corrections are built through the previously described smoothed aggregation, using the tentative prolongator defined in (3). These components can be combined in different multilevel frameworks, obtaining purely additive and hybrid (i.e. multiplicative among the levels and additive within each level) preconditioners; the latter apply basic Schwarz methods as pre-, post- or two-side (i.e. V-cycle) smoothers. Different solvers are available at the coarsest level, including LU and ILU factorizations as well as block-Jacobi solvers with ILU or LU on the blocks. A generalization of the classical smoothed aggregation technique has been recently proposed to obtain on nonsymmetric systems the effectiveness achieved on the symmetric ones [13]. In the new approach the coarse matrix Ak+1 is built using a restriction operator Rk different from (P k )T , which is defined as k (I − Ak W k (Dk )−1 ), Rk = R k = (P k )T is a tentative restrictor and W k is a diagonal matrix whose where R k . Local dampi-th diagonal entry is a damping parameter for the i-th row of R ing parameters are also used in the definition of the smoothed prolongator, i.e. P k = (I − (Dk )−1 Ω k Ak )Pk , where Ω k is a diagonal matrix. This nonsymmetric smoothed aggregation is currently available in the ML package of parallel multilevel preconditioners [10], included in the C++ Trilinos computational framework [11]. We are working to implement it in MLD2P4.
4
Numerical Experiments
Multilevel preconditioners based on the smoothed aggregation techniques described in Section 3 have been applied to the pressure systems arising in the simulation of a bi-periodical channel flow with the LES approach outlined in Section 2. The problem domain is a box of size 2πl × 2l × 2πl, where l is the
72
A. Aprovitola et al.
channel half-width, and the Reynolds number referred to the shear velocity is Reτ = 1050; the initial flow is a Poiseuille flow, with a random Gaussian perturbation. A computational grid with 64 × 96 × 128 cells is considered, with cell aspect ratio varying approximately from 3 to 500. The resulting pressure matrix has dimension 786432 and 5480702 nonzero entries. The time step Δt is 10−4 , to meet stability requirements. Preliminary results concerning the performance of multilevel preconditioners available in MLD2P4, coupled with the Restarted GMRES (RGMRES) from PSBLAS, in the solution of the pressure linear systems arising in the LES simulation, are reported in [1]. In that case, the threshold used in the aggregation algorithm was set to zero and the multilevel algorithms used only post-smoothing. Many other experiments have been performed, varying the type of multilevel cycle, the number of levels, the aggregation threshold, the basic Schwarz preconditioner, and the coarsest-level solver. A significant improvement in the convergence behaviour has been observed in passing from = 0 to > 0, according to the mesh anisotropy; on the other hand, only modest changes have been obtained using different small positive values of . For the sake of space, we report here the results concerning multilevel preconditioners of V-cycle type, with 2, 3 and 4 levels, using = 0.01, the block-Jacobi preconditioner as basic preconditioner, and four sweeps of the block-Jacobi solver at the coarsest level (the ILU(0) factorization is applied on the local submatrices within the block-Jacobi preconditioner/solver). On the average, this type of preconditioner shows on our problem the best results in terms of iterations and execution time of the coupled Krylov solver. In the following, the above preconditioners are denoted by 2LMLD2P4, 3L-MLD2P4, and 4L-MLD2P4. We also report, for comparison, the results obtained with the one-level block-Jacobi preconditioner, using ILU(0) on the local blocks (BJAC-MLD2P4). The results presented in this Section concern the solution of the pressure systems arising in the first ten time steps of the simulation procedure. Furthermore, since the pressure matrix does not change during the whole simulation, the cost for the construction of the multilevel preconditioners is neglected in our discussion. In order to analyse the impact on the pressure systems of the nonsymmetric smoothed aggregation described in Section 3, we have applied to those systems some multilevel preconditioners using the new aggregation strategy, which are available in the Trilinos-ML package. All the other features of the ML preconditioners have been chosen as in the MLD2P4 preconditioners, except the coarsestlevel solver, set to 4 sweeps of the pointwise Jacobi method, since ML does not include the block-Jacobi one. These preconditioners are referred to 2L-ML-NS, 3L-ML-NS, 4L-ML-NS. To focus on the aggregation technique, removing the dependence from the coarsest-level solver in our analysis, we have also applied the corresponding ML preconditioners with classical smoothed aggregation (2L-ML, 3L-ML, 4L-ML). The ML preconditioners have been coupled with the RGMRES solver provided by the Aztec-OO package included in Trilinos. Note that no significative influence on the convergence behaviour of the above ML preconditioners
Parallel Multilevel Preconditioners in the LES of Wall-bounded Flows
73
Table 1. MLD2P4: mean number of iterations and mean execution time (in brackets) Procs 1 2 4 8 16 32 64
BJAC-MLD2P4 91 (62.77) 96 (37.89) 98 (19.80) 98 (9.59) 102 (4.45) 101 (2.09) 109 (1.33)
2L-MLD2P4 3L-MLD2P4 22 (32.51) 14 (23.31) 23 (19.96) 12 (10.85) 22 (9.72) 12 (5.42) 22 (4.81) 16 (3.56) 23 (2.57) 12 (1.45) 23 (1.37) 16 (1.03) 27 (0.99) 19 (0.80)
4L-MLD2P4 13 (21.83) 13 (11.46) 13 (5.59) 14 (3.29) 13 (1.48) 19 (1.34) 21 (1.03)
has been observed varying the aggregation threshold in the decoupled coarsening strategy. This different behaviour between MLD2P4 and ML is probably due to a different local coarsening strategy and it is currently under investigation. All the previous preconditioners have been used as right preconditioners and the restarting parameter of RGMRES has been set to 30. The null vector has been chosen as starting guess and the RGMRES iterations have been stopped when the ratio between the 2-norms of the residual and of the right-hand-side was smaller than 10−7 ; a maximum number of 500 iterations has been also set. A rowblock distribution of the pressure matrix has been considered, with right-hand side and solution vectors distributed accordingly. The experiments have been carried out on a HP XC 6000 Linux cluster with 64 bi-processor nodes, operated by the Naples branch of ICAR-CNR. Each node comprises an Intel Itanium 2 Madison processor, with clock frequency of 1.4 Ghz, and is equipped with 4 GB of RAM; it runs HP Linux for High Performance Computing 3 (kernel 2.4.21). The main interconnection network is Quadrics QsNetII Elan 4, which has a sustained bandwidth of 900 MB/sec. and a latency of about 5 μsec. for large messages. The GNU Compiler Collection, v. 4.3, and the HP MPI implementation, v. 2.01, have been used. MLD2P4 1.1 and and PSBLAS 2.3.1 have been installed on top of ATLAS 3.6.0 and BLACS 1.1. The release 8.0.5 of Trilinos, including ML 5.0, has been installed on top of ATLAS 3.6.0 and LAPACK 3.0. In Table 1 we report the mean number of iterations and the mean execution time, in seconds, obtained with the MLD2P4 preconditioners over the ten linear systems, on 1, 2, 4, 8, 16, 32 and 64 processors. We see that the MLD2P4 multilevel preconditioners show good algorithmic scalability, with a low increase in the number of iterations when the number of processors increases. Similar iteration counts and execution times are obtained with three and four levels, showing that three levels are sufficient to reduce low-energy error modes in this case. Satisfactory speedup values can be observed for all the preconditioners. A significant reduction of the number of iterations and of the execution time with respect to the block-Jacobi preconditioner is obtained with all the multilevel preconditioners. In Table 2 we report the mean number of iterations and the mean execution time for the ML preconditioners based on the classical smoothed aggregation and on the nonsymmetric one. We observe that RGMRES with the ML
74
A. Aprovitola et al.
Table 2. ML: mean number of iterations and mean execution time (in brackets) Procs 1 2 4 8 16 32 64
2L-ML 3L-ML 4L-ML 2L-ML-NS 3L-ML-NS 4L-ML-NS 39 (73.70) 25 (57.68) 24 (57.84) 38 (77.72) 18 (54.00) 9 (39.98) 31 (42.89) 22 (36.46) 21 (36.57) 31 (44.80) 17 (35.54) 9 (28.79) 31 (21.50) 22 (17.79) 21 (17.99) 31 (22.70) 18 (18.26) 9 (13.97) 31 (10.27) 22 (8.42) 21 (8.57) 31 (10.79) 18 (8.75) 10 (6.62) 31 (5.01) 22 (3.92) 22 (4.08) 31 (5.31) 17 (3.95) 12 (3.15) 32 (2.47) 22 (1.88) 26 (2.20) 32 (2.63) 18 (1.91) 15 (1.75) 32 (1.31) 24 (1.03) 27 (1.32) 32 (1.61) 17 (1.02) 15 (0.95)
preconditioners based on the classical smoothed aggregration has a general convergence behaviour similar to that obtained with the MLD2P4 preconditioners (the different number of iterations is due to the different coarsest-level solver). In particular, 3L-ML and 4L-ML have close iteration counts. Conversely, the preconditioners based on the nonsymmetric aggregation strategy produce a large reduction of the iterations when three and four levels are used. The best results are obtained with 4L-ML-NS, which gets the smallest number of iterations and the smallest execution time with respect to the other ML preconditioners. These results show that the aggregation strategy for nonsymmetric matrices can enhance the performance of multilevel preconditioners in the solution of the linear systems arising in our LES application and its inclusion in MLD2P4 is desirable. Finally, we see that the multilevel preconditioners from MLD2P4 generally require smaller execution times than the ML preconditioners; the same is true if we consider the time per single iteration. On the other hand, the ML preconditioners show higher speedup values on 32 and 64 processors; this is mainly due to their large sequential execution times.
5
Conclusions and Future Work
In this work we analysed the performance of parallel algebraic multilevel Schwarz preconditioners, based on aggregation, in the solution of nonsymmetric pressure systems arising in the LES of incompressible wall-bounded flows. Our work was aimed at identifying effective multilevel preconditioners to be used in a scientific code implementing the LES approach described in Section 2, where MLD2P4 has been already integrated. The results obtained show that the MLD2P4 preconditioners based on the classical smoothed aggregation achieve good algorithmic scalability and parallel performance on the systems under consideration. However, experiments with a recently proposed nonsymmetric smoothed aggregation technique indicate that the effectiveness of multilevel algorithms on such systems can be improved. Based on these considerations, we planned to implement the new aggregation technique into the next release of MLD2P4.
Parallel Multilevel Preconditioners in the LES of Wall-bounded Flows
75
References 1. Aprovitola, A., D’Ambra, P., Denaro, F.M., di Serafino, D., Filippone, S.: Application of Parallel Algebraic Multilevel Domain Decomposition Preconditioners in Large Eddy Simulations of Wall-Bounded Turbulent Flows: First Experiments, ICAR-CNR Technical Report RT-ICAR-NA-07-02 (2007) 2. Aprovitola, A., Denaro, F.M.: On the Application of Congruent Upwind Discretizations for Large Eddy Simulations. J. Comput. Phys. 194(1), 329–343 (2004) 3. Aprovitola, A., Denaro, F.M.: A Non-diffusive, Divergence-free, Finite Volumebased Double Projection Method on Non-staggered Grids. Internat. J. Numer. Methods Fluids 53(7), 1127–1172 (2007) 4. Brown, P.N., Walker, H.F.: GMRES on (nearly) singular systems. SIAM J. Matrix Anal. Appl. 18(1), 37–51 (1997) 5. Brezina, M., Vanˇek, P.: A Black-Box Iterative Solver Based on a Two-Level Schwarz Method. Computing 63(3), 233–263 (1999) 6. D’Ambra, P., di Serafino, D., Filippone, S.: On the Development of PSBLAS-based Parallel Two-Level Schwarz Preconditioners. Applied Numerical Mathematics 57, 1181–1196 (2007) 7. D’Ambra, P., di Serafino, D., Filippone, S.: MLD2P4: a Package of Parallel Multilevel Algebraic Domain Decomposition Preconditioners in Fortran 95. ICAR-CNR Technical Report RT-ICAR-NA-09-01 (2009), http://www.mld2p4.it 8. Ferziger, J.H., Peric, M.: Computational Methods for Fluid Dynamics. Springer, Heidelberg (1996) 9. Filippone, S., Buttari, A.: PSBLAS 2.3: User’s and Reference Guide (2008), http://www.ce.uniroma2.it/psblas/ 10. Gee, M.W., Siefert, C.M., Hu, J.J., Tuminaro, R., Sala, M.G.: ML 5.0 Smoothed Aggregation User’s Guide (May 2006), http://trilinos.sandia.gov/packages/ml/publications.html 11. Heroux, M.A., et al.: An Overview of the Trilinos Project. ACM Trans. Math. Soft. 31(3), 397–423 (2005) 12. Keyes, D.: Domain Decomposition Methods in the Mainstream of Computational Science. In: Proc. of the 14th International Conference on Domain Decomposition Methods, pp. 79–93. UNAM Press, Mexico City (2003) 13. Sala, M., Tuminaro, R.: A New Petrov-Galerkin Smoothed Aggregation Preconditioner for Nonsymmetric Linear Systems. SIAM J. Sci. Comput. 31(1), 143–166 (2008) 14. Smith, B., Bjørstad, P., Gropp, W.: Domain Decomposition: Parallel Multilevel Methods for Elliptic Partial Differential Equations. Cambridge University Press, Cambridge (1996) 15. St¨ uben, K.: Algebraic Multigrid (AMG): an Introduction with Applications. In: Sch¨ uller, A., Trottenberg, U., Oosterlee, C. (eds.) Multigrid. Academic Press, London (2000) 16. Vanˇek, P., Mandel, J., Brezina, M.: Algebraic Multigrid by Smoothed Aggregation for Second and Fourth Order Elliptic Problems. Computing 56, 179–196 (1996) 17. Vanˇek, P., Mandel, J., Brezina, M.: Convergence of Algebraic Multigrid based on Smoothed Aggregation. Numer. Math. 88, 559–579 (2001)
An Additive Matrix Preconditioning Method with Application for Domain Decomposition and Two-Level Matrix Partitionings Owe Axelsson Department of Information Technology, Uppsala University, Sweden & Institute of Geonics, AS CR, Ostrava, The Czech Republic
[email protected]
Abstract. Domain decomposition methods enable parallel computation during the iterative solution of partial differential equations of elliptic type. In order to limit the number of iterations one must then use an efficient preconditioner which can significantly reduce the condition number of the given problem and, at the same time, is highly parallelizable. In this paper we describe and analyse such a preconditioner, which is based on local subdomain inverse matrices and is applicable for various types of domain decomposition methods.
1
Introduction
The paper analyses a particular preconditioner,primarily to be used in a conjugate gradient method, for nonsingular matrices in the form ⎤ ⎡ A1 0 A13 (1) A = ⎣ 0 A2 A23 ⎦ A31 A32 A3 Such matrices arise in many applications of which we here just mention the following three: (i) If a domain of definition (Ω) of an elliptic problem discretized by a finite element method, is divided in two parts, the matrix blocks Ai , i = 1, 2 correspond to interior node points in the subdomains Ω1 and Ω2 ,respectively and A3 corresponds to the nodepoints on the common border, Ω¯1 ∩ Ω¯2 between the two subdomains, i.e. an edge in a 2D and a face in a 3D problem. Unless one applies some parallelizable method for the sub-blocks in A3 , the parallelism is here limited to two sub-blocks. Domain decomposition methods, also in a much more general context, have been presented in a number of publications, see e.g. [5,7,9,10]. (ii) If the domain is partitioned in a number of odd-even ordered subdomains then Ai , i=1,2 correspond to the odd- respectively the even-numbered subdomains and A3 to the common edges in a 2D and faces/edges in a 3D I. Lirkov, S. Margenov, and J. Wa´ sniewski (Eds.): LSSC 2009, LNCS 5910, pp. 76–83, 2010. c Springer-Verlag Berlin Heidelberg 2010
An Additive Matrix Preconditioning Method
77
problem. A 3D domain can be first partitioned consecutively in 2D domains, where we use alternatively an odd-even and even-odd orderings. The sub-blocks Ai , i = 1, 2 consist themselves of uncoupled, diagonal matrix sub-blocks, one for each subdomain. Therefore this method can contain a high degree of parallelism. Here, however, A3 has a fairly large size, and the solution of the arising corresponding systems needs special attention. This problem is avoided if the following variant of this DD method is used. (iii) Partition first the given matrix in two-by-two blocks, where the lower diagonal block corresponds to a set of coarse,vertex mesh points and the upper, pivot block to the remaining (fine) mesh nodes. Preconditioners for such matrix partitionings have been discussed in a number of publications, see e.g. [4], and will not be commented on further in this paper. Instead we want to apply the subdomain partitioning to just the pivot block matrix, i.e. the matrix that remains if the solution at all coarse mesh nodes have been chosen to be zero. This is an important part in the preconditioning of matrices partitioned in two-by-two block form, see e.g. [4,1,2]. In this case the corresponding matrix A3 for the pivot block consists of uncoupled sub-blocks and the solution of the arising systems with this matrix can be done efficiently in parallel as well. The sub-blocks are actually not always uncoupled, as some nodes on two different edges might be coupled. If such a coupling exists it can be deleted and compensated for by the method of deletion and diagonal compensation, see [3] for details.This is done only to construct the preconditioner,the given matrix will be unchanged. As will be seen in Section 2, the preconditioner will be of additive form, where each term corresponds to a subdomain. Such preconditioners have been discussed in [2,7],see also references cited therein. Because of the additive form, they can be taken in any order. In fact, the initial odd-even partitioning has been done only to simplify the analysis of the condition number of the resulting preconditioned matrix and is not essential for the actual implementation of the method. We shall show that by use of certain perturbations of the sub-blocks in positions (3,3) of the inverse matrices, we can derive a condition number bound, κ of the preconditioned matrix which, for symmetric and positive definite (spd) matrices depends −1/2 −1/2 1/2 −1 )} , only on the CBS constant, γ = {ρ(A3 (A31 A−1 1 A13 + A32 A2 A23 )A3 2 1/2 as κ =1/(1-γ ) ,where ρ(·) denotes the spectral radius. Note that the CBS constant measures the strength of the off-diagonal blocks in relation to the diagonal blocks of the matrix in (1). The method can be extended by recursion in various ways. In the domain decomposition method in (i), each of the subdomains Ω1 , Ω2 can themselves be partitioned in two sub-blocks and so on. In the context of a recursive odd-even subdomain ordering, the arising set of coarse mesh vertex points can be ordered in such subsets. In Section 2 of the paper we describe the additive preconditioner and derive an expression for a resulting error matrix. The analysis of the corresponding condition number is found in Section 3. Some concluding remarks are found in Section 4.
78
O. Axelsson
The notation A ≥ B, where A and B are positive semidefinite, used in the following, means that A − B is positive semidefinite.
2
Error Matrix for an Additive Preconditioning Method (1)
(2)
(i)
Given a matrix in the form (1), we partition A3 = A3 + A3 , where A3 , i = 1, 2 arise from contributions to edge nodes from the odd and even numbered elements, respectively. The preconditioner takes the form B = B1 + B2 , where ⎡ ⎤ ⎡ ⎤ B11 0 B13 I1 0 0 ⎢ ⎥ 1 = ⎣ 0 0 0 ⎦ B1 = ⎣ 0 0 0 ⎦ and B1 A −1 (1) 0 0 I3 B31 0 S3 ⎡ ⎤ ⎡ ⎤ 0 0 0 0 0 0 ⎢ ⎥ 2 = ⎣0 I2 0 ⎦ B2 = ⎣0 B22 B23 ⎦ and B2 A (2) −1 0 0 I3 0 B32 S3 2 − E, where 1 + A Here the matrix A has been split as A = A ⎡ ⎡ ⎤ ⎤ ⎡ ⎤ A1 0 A13 0 0 0 00 0 1 = ⎣ 0 0 0 ⎦ , A 2 = ⎣0 A2 A23 ⎦ , E = ⎣0 0 0 ⎦ A A31 0 A3 0 A32 A3 0 0 A3 and S3 = A3 − A3i A−1 1 Ai3 , i = 1, 2. The matrices Bi,j need not be specified here as they will not appear in the final result. A computation shows that (i)
1 + B1 (A 2 + B2 (A 2 − E) + B2 A 1 − E)) − A ABA − A = A(B1 A ⎡ ⎤ 00 0 2 − E) + B2 (A 1 − E)) − A = A(I + ⎣0 0 0 ⎦ + B1 (A 0 0 I3 ⎤ ⎡ ⎤ ⎤ ⎡ ⎤⎡ ⎤⎡ ⎡ 0 0 0 I1 0 0 0 0 0 A1 0 A13 0 0 A13 = ⎣0 0 A23 ⎦ + ⎣ 0 0 0 ⎦ ⎣0 A2 A23 ⎦ + ⎣0 I2 0 ⎦ ⎣ 0 0 0 ⎦ 0 0 I3 0 0 I3 A31 0 0 0 0 A3 0 A32 0 2 − E)B1 (A 2 − E) + (A 1 − E)B2 (A 1 − E) +(A ⎤⎡ ⎤⎡ ⎤ ⎡ ⎤ ⎡ 00 0 0 0 A13 0 0 0 0 0 0 ⎢ ⎥ = ⎣ 0 0 A23 ⎦ + ⎣0 A2 A23 ⎦ ⎣0 0 0 ⎦ ⎣0 A2 A23 ⎦ −1 (1) A31 A32 A3 0 A32 0 0 A32 0 0 0 S3 ⎤⎡ ⎡ ⎤⎡ ⎤ 00 0 A1 0 A13 A1 0 A13 ⎢ ⎥ + ⎣ 0 0 0 ⎦ ⎣0 0 0 ⎦ ⎣ 0 0 0 ⎦ (2) −1 A31 0 0 A31 0 0 0 0 S3
An Additive Matrix Preconditioning Method
⎡
i.e.
⎢ ABA − A = ⎣
(2) −1
A13 S3
A13
0 A31
A13
0
79
⎤
⎥ (1) −1 A23 S3 A32 A23 ⎦ := F A32 A3
(2)
Hence it holds ABA − A = F , where F is an error matrix and, if A is spd, A1/2 BA1/2 − I = A−1/2 F A−1/2 .
3
Condition Number Bound for a Perturbed Preconditioner (1)
(2)
(1)
From now on we assume that A is spd. Since A3 = A3 + A3 , where A3 and (2) A3 are the contributions to A3 from odd-and even-numbered elements, respec⎤ ⎡ ⎤ ⎡ 0 0 0 A1 0 A13 tively, it holds that ⎣ 0 0 0 ⎦ and ⎣0 A2 A23 ⎦ are symmetric and positive (1)
(2)
A31 0 A3
0 A32 A3
semidefinite. Hence A3 − A3i Ai −1 Ai3 , i=1,2 are also positive semidefinite, so (i)
(1)
S3
= A3 + A3 − A31 A1 −1 A13 ≥ A3
(1) −1 A3
≥
(2)
(1)
(2)
(2) −1 S3
Hence, (2) −1
A13 S3
(1) −1
A31 ≤ A13 A3
(1) −1 A23 S3 A32
so, by (2)
(2) −1
or A3
≤
A31 (2) −1 A23 A3 A32
(1) −1
≥ S3
and, similarly,
≤ A1 ≤ A2
⎤ A1 0 A13 ABA − A ≤ ⎣ 0 A2 A23 ⎦ = A A31 A32 A3 ⎡
Therefore A1/2 BA1/2 ≤ 2I, and λmax (BA) ≤ 2. To derive a lower bound we shall use perturbations. Let then B be computed as above and let = B + Δ, B (3) ⎤ ⎡ 00 0 to be used as a preconditioner to A, where Δ is such that AΔA = δ ⎣0 0 0 ⎦, 0 0 A3 δ ≥ 0. It follows that ⎡ ⎤⎡ ⎤ −1 ⎤ ⎡ 0 0 A−1 0 0 0 00 0 1 A13 S3 −1 ⎦ ⎣ 0 0 0 ⎦. 0 0 0 ⎦⎣ Δ = δ ⎣0 0 A−1 2 A23 S3 −1 −1 −1 −1 −1 0 0 A3 S3 A31 A1 S2 A32 A2 −S3 −1 0 0 −S3 Here the arising actions of Ai can be computed locally elementwise. The Schur −1 complement matrix S3 = A3 − A31 A−1 1 A13 − A32 A2 A23 can be assembled from local Schur complements exactly. However,the solution of the arising systems
80
O. Axelsson
with S3 can be too costly to implement for just a perturbation matrix. Therefore, this method is less practical and should be seen only as a benchmark method to show what can be ideally achieved. Due to space limitations we will not discuss other alternatives here. One method which can be used in a multilevel context,is to replace S3 with a corresponding version on a coarser mesh and interpolate the so computed values to finer meshpoints. The intention is to keep δ sufficiently big but also sufficiently small not to − A = ABA − A + AΔA. increase the upper bound too much. It holds ABA Let now ξ be a positive number sufficiently big to make (ξ +1)ABA−A ≥ 0. It holds (ξ+1)ABA−A = ξA+(ξ+1)(ABA−A)+(ξ+1)AΔA. Here ABA−A = F, ⎡ ⎤ 00 0 where F is defined in (2) and AΔA = δ ⎣0 0 0 ⎦. 0 0 A3 Hence ⎡ ⎤ A1 0 αA13 ˜ −A= ξ⎣ 0 A2 αA23 ⎦ (ξ + 1)ABA αA31 αA32 α2 D3 ⎡ ⎤ (2) −1 A31 0 βA13 A13 S3 ⎢ ⎥ (1) −1 +(ξ + 1) ⎣ ⎦ 0 A23 S3 A32 βA23 βA31
(1)
(2)
βA32 ⎡ β 2 (S⎤ 3 + S3 ) 00 0 +((2ξ + 1) + (ξ + 1)δ − 2β 2 (ξ + 1)) ⎣0 0 0 ⎦ 0 0 A3 (4) (1) (2) −1 where D3 = A31 A−1 A + A A A and S + S = 2A − D . 13 32 2 23 3 3 1 3 3 Further ξ, α, β are positive numbers that satisfy the following system of equations, ξα + (ξ + 1)β = 2ξ + 1, here equating the off-diagonal entries here equating the D3 -terms ξα2 = (ξ + 1)β 2 , 2(ξ + 1)β 2 = (2ξ + 1) + (ξ + 1)δ, here equating the A3 -terms.
(5)
Theorem 1. Assume that A, defined in (1), is symmetric and positive definite be defined by (3).Then λmin (BA) ≥ 1/(ξ + 1), λmax (BA) ≤ 2 + δ/(1 − and let B −1/2 2 γ ) where ξ = ξ(δ) satisfies the equations (5) and γ = {ρ(A3 (A31 A−1 1 A13 + −1/2 1/2 −1 A32 A2 A23 )A3 ) . As δ → 0,then ξ → ∞, and the condition number is 1 δ 1/2 asymptotically bounded by κ(BA) ≤ 2√ (2δ −1/2 + 1−γ 2 } and minδ≥0 κ(BA) ≤ 2 1/(1 − γ 2 )1/2 .
Proof. It holds ⎡ ⎤ ⎡ ⎤⎡ ⎤ ⎤⎡ 0 0 I1 A1 0 αA13 A1 0 0 I1 0 αA1 −1 A13 ⎣ 0 0 I2 0 ⎦ ⎣ 0 A2 0⎦ ⎣ 0 I2 αA2 −1 A23 ⎦ A2 αA23 ⎦ = ⎣ −1 −1 αA31 αA32 α2 D3 0 0 0 αA31 A1 αA32 A2 I3 0 0 I3
An Additive Matrix Preconditioning Method
81
and ⎡ ⎢ ⎣
(2) −1
A13 S3
0 βA31
A13
0
⎤
βA13
⎥ (1) −1 ⎦= A23 S3 A32 βA23 (1) (2) 2 βA32 β (S3 + S3 )
⎤⎡ ⎤⎡ ⎤ 00 0 0 0 A13 0 0 0 ⎥ ⎣0 0 0 ⎦ ⎢ ⎣0 0 0 ⎦ ⎣ 0 0 0 ⎦ + −1 (2) (2) (2) 0 0 βS3 A31 0 βS3 0 0 S3 ⎤⎡ ⎡ ⎤⎡ ⎤ 00 0 00 0 0 0 0 ⎥ ⎣0 0 A23 ⎦ ⎢ 0 ⎦ ⎣0 0 0 ⎦ ⎣0 0 (1) (1) (1) −1 0 0 βS3 0 A32 βS3 0 0 S3 ⎡
It follows that the first two terms in (4) are positive semidefinite. Since by (5) ≥ A. Further it holds the last term is zero, it follows that (ξ + 1)ABA T = sup xT ABAx/x λmax (BA) Ax ≤ 2 + sup xT AΔAx/xT Ax x=0
= 2 + δ sup xT3 A3 x3 /xT Ax
x=0
x=0
= 2 + δ sup xT3 A3 x3 /xT3 S3 x3 = 2 + δ/(1 − γ 2 ), x=0
−1 where S3 = A3 − A31 A−1 1 A13 − A32 A2 A23 . A computation, using (5), shows 1/2 1/2 β, (ξ + 1)√ ((ξ + 1)1/2 + ξ 1/2 )β = 2ξ + 1, and 1/(ξ + 1) = that α = ((ξ + 1)/ξ)
1/2 (1 + ζ + 2ζ )/ (1 + ζ)δ ≤ 2 2δ, where ζ = ξ/(ξ + 1). Hence ζ < 1 and as √ ≥ 1/(ξ + 1) = 2 2δ. For the δ → 0 it holds ξ → ∞ and ζ → 1, so λmin (BA) condition number it holds,
≤ κ(BA)
2 + δ/(1 − γ 2 ) √ , 2 2δ
˜ ≤ √ which is minimized for δ = 2(1 − γ 2 ), so asymptotically, min κ(BA δ>0
1 1−γ 2
Remark 3.1. If we replace A3 in the perturbation matrix AΔA with the matrix S3 , then ⎡ ⎤ −1 ⎤ ⎡ 0 0 A−1 0 0 0 1 A13 S3 −1 ⎦ ⎣ ⎦ 0 0 0 Δ = δ ⎣0 0 A−1 (6) 2 A23 S3 −1 −1 A31 A−1 A A −I 0 0 −S3 32 3 1 2 ≥ 1/(ξ + Using the same notations as before,it can be shown that λmin (BA) 1) = δ and that the upper eigenvalue bound now becomes λmax (BA) ≤ 2 + δ. ≤ 2/δ + 1, i.e., does Therefore,in this case the condition number satisfies κ(BA) not depend on γ. Typically we let δ = 1. However,we can choose δ arbitrarily large. In the limit, δ → ∞, we get a method which just corresponds to solving the −1 reduced,Schur complement equation, S3 x3 = f3 − A31 A−1 1 x1 − A32 A2 x2 , where
82
O. Axelsson
Ax = F , F = (f1 , f2 , f3 )T , which is an alternative to using the perturbation method. In the latter however, we do not need to solve the Schur complement system very accurately, which is an advantage of it. Note that each action of Δ in (6), in addition to the two local element solutions with matrices Ai , i = 1, 2 requires only one Schur complement equation solution.
4
Concluding Remarks
For domains partitioned in two parts it is known, see e.g.[9,10] that γ 2 = 1−O(h), and hence for partitioning in subdomains, that γ 2 = 1 − O(h/H), where h, H are the characteristic meshsizes for the finest elements and for the macroelements, respectively. Hence it follows from Theorem 1 that the condition num = O((h/H)−1/2 . Therefore, the number of conjugate gradient iteraber κ(BA) tions to solve a system with matrix A, using the perturbed preconditioner with = B(δ), B δ = 2(1 − γ 2 ) = O(h/H), increases at most as O((h/H)−1/4 ), h → 0, which implies a fairly modest increase. For instance, if h = H/16 decreases to h = H/256, then the number of iterations approximately only double. An elementwise construction of preconditioners can be applied also for preconditioning of the Schur complement matrix, which arises in the construction of preconditioners for matrices partitioned in two-by-two blocks, see [2], [6] and [8] for further details. When the method is extended by recursion, an optimal sequence of mesh division parameters mk , where hk+1 = hk /mk , mk ≥ 2 has been derived in [2]. The method approaches then rapidly, asymptotically an optimal order method, i.e., the total computational complexity becomes proportional to the degrees of freedom at the finest mesh level, when the number of levels, k increases When solving the arising block matrix systems, one can use inner iterations for the subblocks, see e.g., [2]. The preconditioner can also be used for nonsymmetric matrices but the corresponding convergence theory is less developed, see however [8] for an approach. Here one must in general use some form of a generalized conjugate gradient method.
Acknowledgements The work of the author was supported by the Academy of Sciences of the Czech Republic through projects AV0Z30860518 and 1ET400300415.
References 1. Axelsson, O.: Iterative Solution Methods. Cambridge University Press, New York (1994) 2. Axelsson, O., Blaheta, R., Neytcheva, M.: Preconditioning for boundary value problems using elementwise Schur complements. SIAM J. Matrix Analysis (to appear) (2009) 3. Axelsson, O., Kolotilina, L.: Diagonally compensated reduction and related preconditioning methods. Numer. Linear Algebra Appl. 1(2), 155–177 (1994)
An Additive Matrix Preconditioning Method
83
4. Axelsson, O., Vassilevski, P.S.: Variable-step multilevel preconditioning methods. I. Selfadjoint and positive definite elliptic problems. Numer. Linear Algebra Appl. 1(1), 75–101 (1994) 5. Dryja, M., Sarkis, M.V., Widlund, O.: Multilevel Schwarz methods for elliptic problems with discontinuous coefficients in three dimensions. Numer. Math. 72(3), 313–348 (1996) 6. Kraus, J.K.: Algebraic multilevel preconditioning of finite element matrices using local Schur complements. Numer. Linear Algebra Appl. 13(1), 49–70 (2006) 7. Mandel, J., Dohrmann, C.R.: Convergence of a balancing domain decomposition by constraints and energy minimization. Numer. Linear Algebra Appl. 10(7), 639–659 (2003) 8. Neytcheva, M., B¨ angtsson, E.: Preconditioning of nonsymmetric saddle point systems as arising in modelling of viscoelastic problems. ETNA 29, 193–211 (2007– 2008) 9. Toselli, A., Widlund, O.: Domain decomposition methods — algorithms and theory. Springer Series in Computational Mathematics, vol. 34. Springer, Berlin (2005) 10. Quarteroni, A., Valli, A.: Domain decomposition methods for partial differential equations. Numerical Mathematics and Scientific Computation. Oxford University Press, New York (1999)
Numerical Study of AMLI Methods for Weighted Graph-Laplacians Petia Boyanova and Svetozar Margenov Institute for Parallel Processing Bulgarian Academy of Sciences Acad. G. Bonchev, bl. 25A 1113 Sofia, Bulgaria
[email protected],
[email protected]
Abstract. We consider a second-order elliptic problem in mixed form that has to be solved as a part of a projection algorithm for unsteady Navier-Stokes equations. The use of Crouzeix-Raviart non-conforming elements for the velocities and piece-wise constants for the pressure provides a locally mass-conservative approximation. Since the mass matrix corresponding to the velocities is diagonal, these unknowns can be eliminated exactly. We address the design of efficient solution methods for the reduced weighted graph-Laplacian system. Construction of preconditioners based on algebraic multilevel iterations (AMLI) is considered. AMLI is a stabilized recursive generalization of two-level methods. We define hierarchical two-level transformations and corresponding block 2x2 splittings locally for macroelements associated with the edges of the coarse triangulation. Numerical results for two sets of hierarchical partitioning parameters are presented for the cases of two-level and AMLI methods. The observed behavior complies with the theoretical expectations and the constructed AMLI preconditioners are optimal.
1
Introduction
The aim of the current article is to study the convergence of algebraic multilevel iterations (AMLI) method for weighted graph-Laplacian systems resulting from mixed finite element approximation of a second-order elliptic boundary value problem with Crouzeix-Raviart non-conforming elements for the vector valued unknown function and piece-wise constants for the scalar valued unknown function. The considered problem: u + ∇p = f
in Ω,
∇ · u = 0 in Ω,
u · n = 0 on ∂Ω,
(1)
where Ω is a convex polygonal domain in IR2 and f is a given smooth vector valued function, has to be solved at the projection step of a composite time stepping method for unsteady Navier-Stokes equations. In the paper we will use the CFD terminology and refer to the vector u as a velocity and p as a I. Lirkov, S. Margenov, and J. Wa´ sniewski (Eds.): LSSC 2009, LNCS 5910, pp. 84–91, 2010. c Springer-Verlag Berlin Heidelberg 2010
Numerical Study of AMLI Methods for Weighted Graph-Laplacians
85
pressure. The chosen FEM discretization provides a locally mass-conservative algorithm (see [3] and the references therein). The Crouzeix-Raviart mass matrix is diagonal and we can eliminate the velocity unknowns exactly. The reduced matrix for the pressure has a structure of a weighted graph-Laplacian (see [8]). The rest of the paper is organized as follows. In Section 2 we introduce the general framework of AMLI methods. In Section 3 we present the preconditioner construction for the considered case of weighted graph-Laplacian systems. Section 4 is devoted to analysis of the numerical results of applying a preconditioned conjugate gradient (PCG) method with two-level and AMLI preconditioners for the 2-D model problem. The final Section 5 contains a summary of the observed behavior and suggestions for future work.
2
AMLI Method
The framework of the algebraic multilevel iterations (AMLI) method was originally proposed in [1] for the case of linear conforming FEs. More recently, AMLI methods for non-conforming FEs and discontinuous Galerkin (DG) systems were developed, see [5,6,7,9] and the references therein. AMLI is a recursive generalization of two-level preconditioners that has optimal behavior due to a proper Schur complement stabilization using Chebyshev polynomials (see also [2]). We consider a sequence of nested triangulations T0 ⊂ T1 ⊂ · · · ⊂ Tl of the domain Ω, constructed by recursive uniform refinement of a given initial mesh. Next we define the corresponding spaces of piece-wise constant functions V (0) ⊂ V (1) ⊂ · · · ⊂ V (l) , the spaces of degrees of freedom V(0) , V(1) , · · · , V(l) , the numbers of degrees of freedom n0 < n1 < · · · < nl , and the system matrix associated with each triangulation level A(0) , A(1) · · · A(l) . We are interested in the solution of the discrete problem (2) at the finest mesh. A(l) x = b
(2)
Let us define a hierarchical 2x2 block partitioning of the system matrix: (k) A (k) }nk − nk−1 T A (k) (k) (k) (k) 11 12 =J A J A = , (k) A (k) }nk−1 A 21 22
(3)
where J (k) is a properly defined nonsingular hierarchical transformation matrix. The generated splitting in the space of degrees of freedom is characterized by the constant γ (k) in the strengthened Cauchy-Bunyakowski-Schwarz (CBS) inequality, associated with the cosine between the two subspaces. Regarding (0) this partitioning, the AMLI method is defined = A(0) and for C as follows: −1 (k) (k) A (k) −1 C −T 0 IC 11 12 11 k = 1, . . . , l, C (k) = J (k) J (k) , where the (k) (k−1) A˜ 0 I A 21 (k) are symmetric positive definite approximations of A (k) , that satisfy blocks C 11
11
v ≤ vt C v ≤ (1 + b)vt A v, for all v ∈ Rnk −nk−1 , vt A 11 11 11 (k)
(k)
(k)
(4)
86
P. Boyanova and S. Margenov
The Schur complement approximation is stabilized by
˜(k−1)−1 = I − Pβ C (k−1)−1 A(k−1) A(k−1)−1 . A
The acceleration polynomial is explicitly defined by
1 + α − 2t 1+α Pβ (t) = 1 + Tβ / 1 + Tβ , 1−α 1−α
where Tβ is the Chebyshev polynomial of degree β with L∞ -norm 1 on (−1, 1). The main result concerning the efficiency of the AMLI preconditioner was originally stated in [1] and is reformulated in the following theorem. Theorem 1. Let the following three assumptions be fulfilled: (i) The hierarchical (k) = A(k−1) ; (ii) There exists an transformation matrices are sparse and A 22 (k) absolute constant γ such that γ ≤ γ < 1 for all k > 0; (iii) √ 1 2 < β < ρ, 1−γ
where ρ = min nk /nk−1 . k
Then there exists α ∈ (0, 1), such that the AMLI preconditioner C (l) has opti−1 mal relative condition number κ(C (l) A(l) ) = O(1), and the total computational complexity is O(nl ). −1
1+b It is known that the relative condition number κ(C (l) A(l) ) = O( 1−γ 2 ). For the case β = 2 the coefficients q0 and q1 of the optimal stabilization polynomial Q1 = (1 − P2 (t))/t = q0 + q1 t, which has to be evaluated in the AMLI W-cycle, are given by (see [1,4]):
q0 =
3
2 −1
, q1 = (5) 2 2 2 −b + 1 + b + b − γ 1 − γ + b(1 + 2b − 2 1 + b + b2 − γ 2 )
Preconditioner Construction for Weighted Graph-Laplacian
In the case of general triangular mesh the reduced weighted graph-Laplacian (k) (k) A(k) = {ai,j } has the following properties: ai,j = 0 only for elements with in(k)
dexes i and j that share a common edge. Then ai,j = −wi,j , i = j, where wi,j > (k) 0 are weights that depend on the angles of the triangles and ai,i = i=j wi,j . In this paper we consider 2-D uniform mesh of right-angled triangles. Then the reduced system for the pressure of the discretized problem (1) corresponds to the T-shaped four point stencil shown in Fig. 1. We assume that the finest triangulation is obtained by consecutive refinement of a coarse triangulation. Let each refined mesh be obtained by dividing the current triangles in four congruent ones connecting the midpoints of the sides. We use an approach proposed in [9] and define two-level hierarchical transformations and corresponding 2x2 block partitionings at each refinement level locally for macroelements associated with the edges of the coarser triangulation. Let us
Numerical Study of AMLI Methods for Weighted Graph-Laplacians
87
−1 4 −1
−2
Fig. 1. Schur complement four point stencil for the pressure
consider two consecutive triangulations Tk−1 ⊂ Tk and a decomposition of the (k) (k) (k) = weighted graph-Laplacian, A(k) = A e∈E Ae , e∈E Ae , as a sum (assembly) of local matrices associated with the set of edges E of the coarser (k) one. We define Ae using a weight parameter t ∈ (0, 1) so that the contribution of the links between the interior nodes among the (macroelement) edge matrices of the current coarse triangle are distributed correctly. Following the numbering (k) from Fig. 2, we introduce the local (macroelement) matrix Ae , corresponding to a hypotenuse, in the form: ⎡
(k)
A(k) = Ae;H e
t + 1 −2t
⎢ ⎢ −2t ⎢ ⎢ t−1 ⎢ 2 ⎢ t−1 ⎢ ⎢ 2 =⎢ ⎢ ⎢ ⎢ ⎢ ⎢ ⎢ ⎢ ⎣
t−1 2
⎤
t−1 2
2t 5−t 2 5−t 2
t + 1 −2t −2t −2 −2 (k)
2t
t−1 2 t−1 2
⎥ ⎥ ⎥ ⎥ −2 ⎥ ⎥ −2 ⎥ ⎥ ⎥. ⎥ t−1 t−1 ⎥ 2 2 ⎥ ⎥ ⎥ ⎥ ⎥ 5−t ⎦ 2 5−t 2
(k)
The local (macroelement) matrix Ae = Ae;C , corresponding to a cathetus, is introduced in a similar way. Then we define the (macroelement) local transfor(k) mation matrix Je as ⎡
Je(k)
⎤ 1 p q q ⎢1 q p q ⎥ ⎢ ⎥ ⎢1 q q p ⎥ ⎢ ⎥ ⎢ 1 p q q ⎥ ⎥, =⎢ ⎢ 1 q p q ⎥ ⎢ ⎥ ⎢ 1 q q p ⎥ ⎢ ⎥ ⎣r r r r ⎦ r r r r
√ where p and q are parameters of the splitting, and r = 2/2 is a scaling factor. (k) A (k) T A (k) (k) (k) (k) e:11 e:12 , and γ 2 ≤ max{γ 2 , γ 2 } = γ 2 e = Je Ae Je Then A = e;H e;C M (k) A (k) A e:21
e:22
(see e.g. [9]). Local analysis has been performed in [10] for the studied case of 2-D regular right-angled triangulation that derived the following estimates for 2 γ: γM = 0.73 for the parameter setting (p = 1, q = −0.5), and the improved
88
P. Boyanova and S. Margenov
Fig. 2. Macroelement of two adjacent triangles from Tk with a common hypotenuse 2 estimate γM = 0.58 for (p = 1, q = −0.1). The AMLI preconditioners for both parameter settings comply with the three assumptions of Theorem 1 for β = 2 or β = 3.
4
Numerical Results
The aim of the performed experiments is to convey a numerical study of the properties of the proposed hierarchical partitioning of weighted graph-Laplacians and the behavior of the corresponding AMLI method. We use PCG method to solve the system (2) for the case of Dirichlet boundary conditions in the unit square Ω = (0, 1) × (0, 1), discretized with a mesh size h. The collected data in the tables that follow represent number of PCG iterations needed to reduce the residual norm by a factor for a zero right hand side and random initial guess. As the constructed AMLI preconditioner is a recursive generalization of a two-level method with a two-level partitioning that has the same properties at each level, we first examine the case of PCG with two-level multiplicative preconditioner based on the splitting (3). The two cases of partitioning parameters (p = 1, q = −0.5) and (p = 1, q = −0.1) are considered. As can be seen in Table 1, the number of PCG iterations for both settings do not depend on the size of the problem but only on the reduction factor , namely linearly w.r.t. ln(1/ ). These results agree with the theoretical expectations, as it is known that the number of iterations nit needed for achieving a relative error for the PCG method is −1
−1
2
nit ≥ 12 κ(C (k) A(k) ) ln (2/ ) + 1 and κ(C (k) A(k) ) ≤ 1/(1 − γ (k) ) where γ (k) ≤ γ < 1 for all k > 0. Next, we consider PCG with AMLI preconditioner with polynomial degree 2 β = 2 and polynomial coefficients calculated by (5) using the estimate γ 2 ≤ γM . In all presented results, the coarsest level of triangulation has mesh size h = 1/16, that corresponds to 512 degrees of freedom. For the experiments summarized in Table 2 we use inner iterations to achieve high accuracy solution for the pivot block systems that let us assume b = 0 in (4). In both cases of splitting parameters the number of iterations is stabilized but for the setting that has a better CBS constant estimate this happens earlier and the convergence rates are considerably better. As opposed to a preconditioner that does not use pivot approximation, we test the convergence of AMLI when the pivot blocks are approximated by their scaled diagonals. It is known that b is uniformly bounded and does not depend
Numerical Study of AMLI Methods for Weighted Graph-Laplacians
89
Table 1. Two-level method: number of PCG iterations PCG stop criteria
(p = 1, q = −0.5) (p = 1, q = −0.1)
= 10−3 = 10−6 = 10−9 = 10−3 = 10−6 = 10−9
32 (2048) 4 7 11 4 7 10
h−1 (Number of degrees of freedom) 64 128 256 (8192) (32768) (131072) 4 4 4 8 8 8 12 12 12 4 4 3 7 7 7 10 10 10
512 (524288) 4 8 12 3 7 10
Table 2. W-cycle, no pivot approximation: number of PCG iterations PCG stop criteria
(p = 1, q = −0.5, 2 γM = 0.73) (p = 1, q = −0.1, 2 γM = 0.58)
= 10−3 = 10−6 = 10−9 = 10−3 = 10−6 = 10−9
1 (2048) 4 7 11 4 7 10
Number of levels of refinement (Number of degrees of freedom) 2 3 4 (8192) (32768) (131072) 8 8 9 16 18 19 25 28 29 4 4 4 8 8 8 11 12 11
5 (524288) 8 18 28 4 8 11
on the problem size (see e.g. [10]). Therefore it is enough to calculate it for a rather coarse mesh and use the obtained value in (5). We construct diagonal pivot approximation with b = 4.67 for (p = 1, q = −0.5) and b = 6.2 for (p = 1, q = −0.1). The AMLI method results are presented in Table 3. As expected, the two parameter settings differ in convergence rates, the one with better CBS constant estimate achieving better performance. The difference in number of iterations for the two AMLI preconditioner types recorded in Table 2 and Table 3 is the theoretically expected. The results presented so far concern AMLI preconditioners that have strong theoretical analysis background. However, the observations of the performed numerical experiments let us think we can go further and try and improve the AMLI method for the setting (p = 1, q = −0.5), using feedback from the program runs. One motivation is that the two-level method is a good practical indicator for the qualities of the hierarchical partitioning defined. As can be seen from Table 1, the PCG for the two cases of parameter settings perform in a very similar way, which let us assume that they also have similar splitting characteristics. The second motivation for the tests that follow, is that we have both the theoretical estimate of the CBS constant for (p = 1, q = −0.5) and the practical numerical proof that the defined two-level and AMLI preconditioners behave in the ex2 = 0.73 is too pected stable way. Next, we assume that the derived estimate γM pessimistic for the case (p = 1, q = −0.5) and we could use the one, obtained for
90
P. Boyanova and S. Margenov Table 3. W-cycle, diagonal pivot approximation: number of PCG iterations PCG stop criteria
(p = 1, q = −0.5 2 γM = 0.73) (p = 1, q = −0.1 2 γM = 0.58)
= 10−3 = 10−6 = 10−9 = 10−3 = 10−6 = 10−9
1 (2048) 8 16 24 9 20 30
Number of levels of refinement (Number of degrees of freedom) 2 3 4 (8192) (32768) (131072) 18 18 20 38 45 47 58 69 74 9 9 9 20 20 20 30 30 30
5 (524288) 19 47 74 9 20 30
Table 4. W-cycle, (p = 1, q = −0.5, γ 2 = 0.58): number of PCG iterations PCG stop criteria
no pivot approx. diagonal pivot approx.
= 10−3 = 10−6 = 10−9 = 10−3 = 10−6 = 10−9
1 (2048) 4 7 11 8 16 24
Number of levels of refinement (Number of degrees of freedom) 2 3 4 (8192) (32768) (131072) 4 4 4 8 8 8 11 13 12 9 9 9 19 19 19 28 28 29
5 (524288) 4 8 12 9 19 29
2 the case (p = 1, q = −0.1), namely γM = 0.58. The results presented in Table 4 are for an AMLI preconditioner with polynomial coefficients calculated using this more optimistic bound. As can be seen, the PCG converges, and the AMLI is stabilized at lower iteration numbers.
5
Concluding Remarks
The purpose of the performed numerical study was to practically examine the properties of AMLI method for weighted graph-Laplacians, following three logical steps. First the two-level method was considered. The results show that the proposed two-level hierarchical transformation defines a proper partitioning of the system matrix and the corresponding two-level preconditioner performs as expected. This fact let us proceed with the multilevel generalization - the AMLI method. The qualities of the preconditioner are most transparently observed when b = 0. Then for both examined splitting parameter sets, the PCG with 2 = 0.58). AMLI preconditioner behaves similarly to the two-level method (for γM Preconditioners with no pivot block approximations are too expensive to use in practice, due to the high computational cost of solving systems with the pivot blocks. That is why, as a next step, we construct a preconditioner with a diagonal pivot approximation. This is a good first choice for pivot block approximation and the AMLI is stabilized again, though at greater iteration numbers.
Numerical Study of AMLI Methods for Weighted Graph-Laplacians
91
When choosing pivot block approximations, it is important to find the balance between their good conditioning and the computational efficiency of solving systems with them. The first affects the convergence rates of the AMLI method, and the second — the computational cost of each iteration. This is the reason why one important topic for future work is the examination of possibilities for improvement of the pivot block approximation for weighted graph-Laplacians, so that the overall computational complexity is decreased. Another important aspect of our future work is the incorporation of the constructed optimal AMLI method for weighted graph-Laplacians in the composite time stepping algorithm for unsteady Navier-Stokes equations. The results presented in this paper concern regular meshes but this approach is applicable for more general irregular triangulation. In the later case the convergence of the AMLI method would depend on the properties of the initial grid, namely, the angles of the coarsest elements used. Acknowledgments. This work is supported by the Bulgarian Science Foundation grants DO02–147 and DO02–338.
References 1. Axelsson, O., Vassilevski, P.S.: Algebraic multilevel preconditioning methods II. SIAM J. Numer. Anal. 27, 1569–1590 (1990) 2. Vassilevski, P.S.: Multilevel Block Factorization Preconditioners: Matrix-based Analysis and Algorithms for Solving Finite Element Equations. Springer, Heidelberg (2008) 3. Bejanov, B., Guermond, J., Minev, P.: A locally div-free projection scheme for incompressible flows based on non-conforming finite elements. Int. J. Numer. Meth. Fluids 49, 239–258 (2005) 4. Georgiev, I., Kraus, J., Margenov, S.: Multilevel preconditioning of rotated bilinear non-conforming FEM problems. RICAM-Report No. 2006–03 (January 2006) 5. Blaheta, R., Margenov, S., Neytcheva, M.: Robust optimal multilevel preconditioners for non-conforming finite element systems. Numer. Lin. Alg. Appl. 12(5-6), 495–514 (2005) 6. Kraus, J., Margenov, S.: Multilevel methods for anisotropic elliptic problems. Lectures on Advanced Computational Methods in Mechanics, Radon Series Comp. Appl. Math. 1, 47–87 (2007) 7. Kraus, J., Tomar, S.: A multilevel method for discontinuous Galerkin approximation of three-dimensional anisotropic elliptic problems. Num. Lin. Alg. Appl. 15, 417–438 (2008) 8. Margenov, S., Minev, P.: On a MIC(0) preconditioning of non-conforming mixed FEM elliptic problems. Mathematics and Computers in Simulation 76, 149–154 (2007) 9. Lazarov, R., Margenov, S.: CBS constants for multilevel splitting of graphLaplacian and application to preconditioning of discontinuous Galerkin systems. J. Complexity 23(4-6), 498–515 (2007) 10. Boyanova, P., Margenov, S.: Multilevel Splitting of Weighted Graph-Laplacian Arising in Non-conforming Mixed FEM Elliptic Problems. In: Margenov, S., Vulkov, L.G., Wa´sniewski, J. (eds.) NAA 2008. LNCS, vol. 5434, pp. 216–223. Springer, Heidelberg (2009)
A Scalable TFETI Based Algorithm for 2D and 3D Frictionless Contact Problems Zdenˇek Dost´ al1 , Tom´aˇs Brzobohat´ y2 , Tom´aˇs Kozubek1 , 2 Alex Markopoulos , and V´ıt Vondr´ a k1 1
2
ˇ FEECS VSB-Technical University of Ostrava, Department of Applied Mathematics, 17. listopadu 15, CZ-708 33 Ostrava, Czech Republic, ˇ FME VSB-Technical University of Ostrava, Department of Mechanics, 17. listopadu 15, CZ-708 33 Ostrava, Czech Republic
Abstract. We report our recent results in the development of theoretically supported scalable algorithms for the solution of large scale complex contact problems of elasticity. The algorithms combine the TFETI based domain decomposition method adapted to the solution of 2D and 3D frictionless multibody contact problems of elasticity with our in a sense optimal algorithms for the solution of the resulting quadratic programming problems. Rather surprisingly, the theoretical results are qualitatively the same as the classical results on scalability of FETI for the linear elliptic problems. The efficiency of the method is demonstrated by the results of numerical experiments with parallel solution of both coercive and semicoercive 2D and 3D contact problems.
1 Introduction
Observing that the classical Dirichlet and Neumann boundary conditions of contact problems are known only after the solution has been found, it is not surprising that the solution of contact problems is more costly than the solution of the related linear problems with classical boundary conditions. In particular, since the cost of the solution of any problem increases at least linearly with the number of the unknowns, even if we should only copy the results, it follows that the development of a scalable algorithm for contact problems is a challenging task, which requires identifying the contact interface, in a sense, for free. In spite of this, a number of interesting results have been obtained by modifications of the methods that were known to be scalable for linear problems, in particular multigrid (see, e.g., Kornhuber [19], Kornhuber and Krause [20], and Wohlmuth and Krause [24]) and domain decomposition (see, e.g., Dureisseix and Farhat [13], Dostál, Gomes, and Santos [7], and Avery et al. [1]). Schöberl [23] even proved the optimality for his approximate variant of the projection method using a domain decomposition preconditioner and a linear multigrid solver on the interior nodes. For the multigrid-based algorithms, it seems that it was the necessity to keep the coarse grid away from the contact interface (see also Iontcheva and Vassilevski [17]) that prevented the authors from proving optimality results similar to the classical results for linear problems.
The point of this paper is to report our optimality results for coercive contact problems of elasticity using TFETI (Total FETI) [9], a variant of the FETI method introduced by Farhat and Roux [16] for parallel solution of linear problems that enforces the prescribed displacements by Lagrange multipliers. For linear problems, the method was considered earlier by Justino, Park, and Felippa [18] and Park, Felippa, and Gumaste [22]. See also the thesis by Of [21]. We use an in a sense optimal “natural coarse grid preconditioning” introduced for linear problems by Farhat, Mandel, and Roux [15]. Since the preconditioning by the “natural coarse grid” uses a projector to the subspace with the solution [8], its application to the solution of variational inequalities does not turn the bound constraints into general bounds and can be interpreted as a variant of the multigrid method with the coarse grid on the interface. This unique feature, as compared with the standard multigrid preconditioning for the primal problem, reduces the development of scalable algorithms for the solution of variational inequalities to the solution of bound and equality constrained quadratic programming problems with the rate of convergence in terms of bounds on the spectrum. For the sake of simplicity, we consider only the frictionless problems of linear elasticity with the linearized, possibly non-matching non-interpenetration conditions implemented by mortars, but the results may be exploited also for the solution of the problems with friction [12] or large deformations with more sophisticated implementation of the kinematic constraints [3]. The basic idea works also in the framework of boundary element methods [2].
2 TFETI and Contact Problems
Assuming that the bodies are assembled from N_s subdomains Ω^(s), the equilibrium of the system may be described as a solution u of the problem
\[
\min\, j(v) \quad \text{subject to} \quad \sum_{s=1}^{N_s} B_I^{(s)} v^{(s)} \le g_I \quad \text{and} \quad \sum_{s=1}^{N_s} B_E^{(s)} v^{(s)} = o, \tag{1}
\]
where o denotes the zero vector and j(v) is the energy functional defined by
\[
j(v) = \sum_{s=1}^{N_s} \frac{1}{2}\, v^{(s)T} K^{(s)} v^{(s)} - v^{(s)T} f^{(s)},
\]
where v^(s) and f^(s) denote the admissible subdomain displacements and the subdomain vector of prescribed forces, K^(s) is the subdomain stiffness matrix, B_I^(s) and B_E^(s) are the blocks of the matrix B = [B_I^T, B_E^T]^T that correspond to Ω^(s), and g_I is a vector collecting the gaps between the bodies in the reference configuration. The matrix B_I and the vector g_I arise from the nodal or mortar description of the non-penetration conditions, while B_E describes the “gluing” of the subdomains into the bodies and the Dirichlet boundary conditions. To simplify the presentation of the basic ideas, we can describe the equilibrium in terms of the global stiffness matrix K, the vector of global displacements u, and the vector of global loads f. In the TFETI method, we have
\[
u = \begin{bmatrix} u^{(1)} \\ \vdots \\ u^{(N_s)} \end{bmatrix}, \quad
K = \operatorname{diag}(K^{(1)}, \ldots, K^{(N_s)}), \quad \text{and} \quad
f = \begin{bmatrix} f^{(1)} \\ \vdots \\ f^{(N_s)} \end{bmatrix},
\]
where K^(s), s = 1, ..., N_s, is a positive semidefinite matrix. The energy function reads
\[
j(v) = \frac{1}{2} v^T K v - f^T v
\]
and the vector of global displacements u solves
\[
\min\, j(v) \quad \text{subject to} \quad B_I v \le g_I \quad \text{and} \quad B_E v = o. \tag{2}
\]
Alternatively, the global equilibrium may be described by the Karush–Kuhn–Tucker conditions (see, e.g., [6])
\[
Ku = f - B^T\lambda, \quad \lambda_I \ge o, \quad \lambda^T (Bu - g) = o, \tag{3}
\]
where g = [g_I^T, o^T]^T and λ = [λ_I^T, λ_E^T]^T denotes the vector of Lagrange multipliers, which may be interpreted as the reaction forces. The problem (3) differs from the linear problem by the non-negativity constraint on the components of the reaction forces λ_I and by the complementarity condition. We can use the left equation of (3) and the sparsity pattern of K to eliminate the displacements. We shall get the problem to find
\[
\max\, \Theta(\lambda) \quad \text{s.t.} \quad \lambda_I \ge o \ \text{ and } \ R^T (f - B^T\lambda) = o, \tag{4}
\]
where
\[
\Theta(\lambda) = -\tfrac{1}{2}\lambda^T B K^{\dagger} B^T \lambda + \lambda^T (B K^{\dagger} f - g) - \tfrac{1}{2}\, f^T K^{\dagger} f, \tag{5}
\]
K^† denotes a generalized inverse that satisfies K K^† K = K, and R denotes the full rank matrix whose columns span the kernel of K. The action of K^† can be effectively evaluated by a variant of the LU–SVD decomposition [14,10]. Recalling the FETI notation
\[
F = B K^{\dagger} B^T, \quad e = R^T f, \quad G = R^T B^T, \quad d = B K^{\dagger} f - g,
\]
we can modify (4) to
\[
\min\, \theta(\lambda) \quad \text{s.t.} \quad \lambda_I \ge o \ \text{ and } \ G\lambda = e, \tag{6}
\]
where
\[
\theta(\lambda) = \tfrac{1}{2}\lambda^T F \lambda - \lambda^T d.
\]
To replace the equality constraint in (6) by a homogeneous one without changing the non-negativity constraint, we can use the following lemma.
Lemma 1. Let G_I = R^T B_I^T, G_E = R^T B_E^T, and G = [G_I, G_E] arise from the TFETI decomposition of a coercive contact problem. Then G_E is a full rank matrix and
\[
\widetilde\lambda = \begin{bmatrix} o_I \\ G_E^T (G_E G_E^T)^{-1} e \end{bmatrix} \tag{7}
\]
satisfies λ̃_I = o_I and Gλ̃ = e.

Proof. First observe that G = [G_I, G_E], so it is enough to prove that G_E^T ξ = B_E R ξ = o implies ξ = o. Since B_E R ξ denotes the jumps of the vector Rξ across the auxiliary interfaces and the violations of the prescribed Dirichlet boundary conditions, it follows that the vector u = Rξ satisfies both the discretized Dirichlet conditions and the “gluing” conditions, but belongs to the kernel of K, which contradicts our assumption that the problem (1) arises from the discretization of a coercive contact problem.

Lemma 1 is an important ingredient in the proof of the numerical scalability of our algorithm. Let us point out that the proof is the only place where we use the assumption that our model problem (1) is coercive, i.e., that the prescribed displacements prevent the body from floating.

Denoting λ = μ + λ̃ and substituting into (6), we get
\[
\theta(\lambda) = \tfrac{1}{2}\mu^T F \mu - \mu^T (d - F\widetilde\lambda) + \text{const.}
\]
After returning to the old notation, problem (6) is reduced to
\[
\min\, \tfrac{1}{2}\lambda^T F \lambda - \lambda^T \widetilde d \quad \text{s.t.} \quad G\lambda = o \ \text{ and } \ \lambda_I \ge o
\]
with d̃ = d − Fλ̃.
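As a concrete illustration of this homogenization step, the following dense NumPy sketch forms the particular multiplier λ̃ from (7) and the shifted term d̃; it is only schematic, assuming small dense blocks G_I, G_E and an explicitly available F, whereas a production TFETI code applies F only through subdomain solves.

```python
import numpy as np

def homogenize_equality_constraints(F, G_I, G_E, d, e):
    """Sketch of the homogenization step (7): build lambda_t with
    lambda_t_I = 0 and G lambda_t = e, and shift d accordingly."""
    m_I = G_I.shape[1]                      # number of inequality multipliers
    # lambda_t_E = G_E^T (G_E G_E^T)^{-1} e   (G_E has full row rank)
    lam_E = G_E.T @ np.linalg.solve(G_E @ G_E.T, e)
    lam_t = np.concatenate([np.zeros(m_I), lam_E])
    d_tilde = d - F @ lam_t                 # shifted linear term entering the reduced problem
    return lam_t, d_tilde
```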
3 Optimality
Our final step is based on the observation that the last problem is equivalent to
\[
\min\, \theta(\lambda) \quad \text{s.t.} \quad G\lambda = o \ \text{ and } \ \lambda_I \ge o, \tag{8}
\]
where
\[
\theta(\lambda) = \tfrac{1}{2}\lambda^T (QFQ + \rho P)\lambda - \lambda^T Q\widetilde d, \qquad
P = G^T (G G^T)^{-1} G, \qquad Q = I - P, \tag{9}
\]
and ρ > 0. The regularization term ρP is introduced in order to simplify the reference to the results of quadratic programming that assume the regularity of the Hessian matrix of the quadratic form. Problem (8) turns out to be a suitable starting point for the development of an efficient algorithm for variational inequalities, thanks to the following classical estimates of the extreme eigenvalues due to Farhat, Mandel, and Roux [15].
Theorem 1. If the decompositions and the discretizations of given contact problems are sufficiently regular, then there are constants C_1 > 0 and C_2 > 0 independent of the discretization parameter h and the decomposition parameter H such that
\[
C_1 \le \lambda_{\min}(QFQ\,|\,\operatorname{Im} Q)
\quad \text{and} \quad
\lambda_{\max}(QFQ\,|\,\operatorname{Im} Q) \le \|QFQ\| \le C_2\,\frac{H}{h}, \tag{10}
\]
where λ_min and λ_max denote the extremal eigenvalues of the corresponding matrices. The theorem states that if we fix the regularization parameter ρ, then the problems (8) resulting from the application of various discretizations and decompositions have the spectrum of their Hessian matrices confined to a positive interval, non-negativity constraints, and homogeneous equality constraints. If we apply SMALBE, our variant of the augmented Lagrangian method with adaptive precision control for the solution of quadratic programming problems with bound and equality constraints, to the solution of (8), then we reduce the solution of (8) to a sequence of bound constrained problems with a uniformly bounded spectrum of the Hessian matrix [8,4]. SMALBE enforces the equality constraints by the Lagrange multipliers generated in the outer loop, while the auxiliary bound constrained problems can be solved approximately in the inner loop by MPRGP proposed by Dostál and Schöberl [11], an active set based algorithm which uses the conjugate gradient method to explore the current face, the fixed steplength gradient projection to expand the active set, the adaptive precision control of auxiliary linear problems, and the reduced gradient with the optimal steplength to reduce the active set. The unique feature of SMALBE with the inner loop implemented by MPRGP, when applied to (8), is the bound on the number of iterations, each of which has a cost proportional to the number of variables, so that an approximate solution is returned at a cost proportional to the number of variables. More information about the algorithms and their implementation can be found in [6].
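The projector P and its complement Q are the computational core of this construction. A minimal sketch of how the operator in (9) can be applied matrix-free is given below; the function names, the dense solve of the small coarse problem GG^T, and the NumPy setting are illustrative assumptions rather than the authors' implementation.

```python
import numpy as np

def make_projected_operator(F_action, G, rho):
    """Return a function applying (Q F Q + rho * P) from (9),
    where P = G^T (G G^T)^{-1} G and Q = I - P."""
    GGt = G @ G.T                                  # small natural coarse grid matrix

    def apply_P(x):
        return G.T @ np.linalg.solve(GGt, G @ x)

    def apply_Q(x):
        return x - apply_P(x)

    def apply_H(lam):
        return apply_Q(F_action(apply_Q(lam))) + rho * apply_P(lam)

    return apply_H, apply_Q
```

In an actual FETI code the coarse matrix GG^T would be factorized once and F_action would apply B K^† B^T subdomain by subdomain.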
4 Numerical Experiments
The algorithms reported in this paper were tested with the aim to verify their optimality and their capability to solve real-life problems. We first tested the scalability on a classical Hertz 2D problem of Fig. 1 with varying discretizations and decompositions using structured grids. We kept the ratio H/h of the decomposition and the discretization parameters approximately constant so that the assumptions of Theorem 1 were satisfied. The solution with the discretization and the traces of decomposition is in Fig. 2. We used the stopping criterion
\[
\max\{\|g^P(\lambda^k, \mu^k, \rho)\|, \|G\lambda^k\|\} \le 10^{-4}\,\|P\widetilde d\|,
\]
where g^P(λ^k, μ^k, ρ_k) denotes the projected gradient of the augmented Lagrangian
\[
L(\lambda^k, \mu^k, \rho_k) = \theta(\lambda^k) + (\mu^k)^T G\lambda^k + \frac{\rho_k}{2}\,\|G\lambda^k\|^2.
\]
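For orientation, a small sketch of how the projected gradient entering such a stopping test can be evaluated is given below; the chopped-gradient convention follows common MPRGP descriptions and all names are illustrative.

```python
import numpy as np

def projected_gradient(lam, grad, bound_idx):
    """Projected gradient for lam[bound_idx] >= 0 (remaining components are
    unconstrained): at an active bound only the part of the gradient that
    points into the feasible set is kept."""
    gP = grad.copy()
    active = bound_idx & (lam <= 1e-12)
    gP[active] = np.minimum(grad[active], 0.0)
    return gP

def stopping_test(lam, grad, G, Pd_norm, bound_idx, tol=1e-4):
    """Criterion max(||g^P||, ||G lam||) <= tol * ||P d~|| used in the paper."""
    gP = projected_gradient(lam, grad, bound_idx)
    return max(np.linalg.norm(gP), np.linalg.norm(G @ lam)) <= tol * Pd_norm
```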
Fig. 1. 2D Hertz problem

Fig. 2. Solution of 2D Hertz problem: total displacement
Table 1. Scalability of the algorithm for the 2D Hertz problem

Primal dimension   Dual dimension   Subdomains   Null space dimension   Matrix–vector multiplications   Time (sec)
    40000                600              2                 6                        45                      10
   640000              11200             32                96                        88                      78
 10240000             198400            512              1536                       134                    1300
The results of the computations are in Table 1. We can observe that the number of matrix–vector multiplications varies only mildly with the increasing dimension of the problem, in agreement with our theory. We have also tested our algorithms on real-life problems, such as the analysis of the stress in the ball bearings of Fig. 3. The problem is difficult because the traction acting on the lower part of the inner ring is distributed through the nonlinear interface of the cage and balls to the outer ring. The solution of the problem discretized by 1688190/408196 primal/dual variables required 2364 matrix–vector multiplications. It took 5383 seconds to identify 20843 active constraints. Though this number is not small, we were not able to solve the problem with commercial software, including ANSYS. We believe that better results can be obtained by enhancing the algorithm with standard FETI preconditioners and some recently proposed improvements. We conclude that the results of the numerical experiments indicate that the algorithm can be useful for the effective solution of real-life problems.
Fig. 3. Ball bearings
Fig. 4. Von Mises stress in the midplane

5 Comments and Conclusions
The TFETI method turns out to be a powerful engine for the solution of contact problems of elasticity. Results of numerical experiments comply with our recent theoretical results and indicate high efficiency of the method reported here. Future research will include adaptation of the standard preconditioning strategies, problems with friction in 3D, and dynamic contact problems.
Acknowledgements This research has been supported by the grants GA CR No. 201/07/0294 and 103/09/H078 and ME CR No. MSM6198910027.
References
1. Avery, P., Rebel, G., Lesoinne, M., Farhat, C.: A numerically scalable dual–primal substructuring method for the solution of contact problems – part I: the frictionless case. Comput. Methods Appl. Mech. Eng. 193, 2403–2426 (2004)
2. Bouchala, J., Dostál, Z., Sadowská, M.: Theoretically Supported Scalable BETI Method for Variational Inequalities. Computing 82, 53–75 (2008)
3. Dobiáš, J., Pták, S., Dostál, Z., Vondrák, V.: Total FETI based algorithm for contact problems with additional non-linearities. Advances in Engineering Software (in print, 2009), doi:10.1016/j.advengsoft.2008.12.006
4. Dostál, Z.: An optimal algorithm for bound and equality constrained quadratic programming problems with bounded spectrum. Computing 78, 311–328 (2006)
5. Dostál, Z.: Inexact semi-monotonic augmented Lagrangians with optimal feasibility convergence for quadratic programming with simple bounds and equality constraints. SIAM J. Numer. Anal. 43(1), 96–115 (2005)
6. Dostál, Z.: Optimal Quadratic Programming Algorithms, with Applications to Variational Inequalities, 1st edn. Springer, US (2009)
7. Dostál, Z., Gomes, F.A.M., Santos, S.A.: Solution of contact problems by FETI domain decomposition with natural coarse space projection. Comput. Methods Appl. Mech. Eng. 190(13-14), 1611–1627 (2000)
8. Dostál, Z., Horák, D.: Theoretically supported scalable FETI for numerical solution of variational inequalities. SIAM J. Numer. Anal. 45, 500–513 (2007)
9. Dostál, Z., Horák, D., Kučera, R.: Total FETI - an easier implementable variant of the FETI method for numerical solution of elliptic PDE. Commun. Numer. Methods Eng. 22, 1155–1162 (2006)
10. Dostál, Z., Kozubek, T., Markopoulos, A., Menšík, M.: Combining Cholesky decomposition with SVD to stable evaluation of a generalized inverse of the stiffness matrix of a floating structure (submitted, 2009)
11. Dostál, Z., Schöberl, J.: Minimizing quadratic functions over non-negative cone with the rate of convergence and finite termination. Comput. Optim. Appl. 30(1), 23–44 (2005)
12. Dostál, Z., Vondrák, V.: Duality Based Solution of Contact Problems with Coulomb Friction. Arch. Mech. 49(3), 453–460 (1997)
13. Dureisseix, D., Farhat, C.: A numerically scalable domain decomposition method for solution of frictionless contact problems. Int. J. Numer. Methods Eng. 50(12), 2643–2666 (2001)
14. Farhat, C., Géradin, M.: On the general solution by a direct method of a large scale singular system of linear equations: application to the analysis of floating structures. Int. J. Numer. Methods Eng. 41, 675–696 (1998)
15. Farhat, C., Mandel, J., Roux, F.-X.: Optimal convergence properties of the FETI domain decomposition method. Comput. Methods Appl. Mech. Eng. 115, 365–385 (1994)
16. Farhat, C., Roux, F.-X.: A method of finite element tearing and interconnecting and its parallel solution algorithm. Int. J. Numer. Methods Eng. 32, 1205–1227 (1991)
17. Iontcheva, A.H., Vassilevski, P.S.: Monotone multigrid methods based on element agglomeration coarsening away from the contact boundary for the Signorini's problem. Numer. Linear Algebra Appl. 11(2-3), 189–204 (2004)
18. Justino, M.R.J., Park, K.C., Felippa, C.A.: The construction of free–free flexibility matrices as generalized stiffness matrices. International Journal for Numerical Methods in Engineering 40, 2739–2758 (1997)
19. Kornhuber, R.: Adaptive monotone multigrid methods for nonlinear variational problems. Teubner–Verlag, Stuttgart (1997)
20. Kornhuber, R., Krause, R.: Adaptive multigrid methods for Signorini's problem in linear elasticity. Comput. Vis. Sci. 4(1), 9–20 (2001)
21. Of, G.: BETI - Gebietszerlegungsmethoden mit schnellen Randelementverfahren und Anwendungen. Ph.D. Thesis, University of Stuttgart (2006)
22. Park, K.C., Felippa, C.A., Gumaste, U.A.: A localized version of the method of Lagrange multipliers. Computational Mechanics 24, 476–490 (2000)
23. Schöberl, J.: Solving the Signorini problem on the basis of domain decomposition techniques. Computing 60(4), 323–344 (1998)
24. Wohlmuth, B., Krause, R.: Monotone methods on nonmatching grids for nonlinear contact problems. SIAM J. Sci. Comput. 25, 324–347 (2003)
Multilevel Preconditioning of Crouzeix-Raviart 3D Pure Displacement Elasticity Problems
Ivan Georgiev¹, Johannes Kraus¹, and Svetozar Margenov²
¹ Johann Radon Institute for Computational and Applied Mathematics, Austrian Academy of Sciences, Altenbergerstraße 69, A-4040 Linz, Austria
[email protected], [email protected]
² Institute for Parallel Processing, Bulgarian Academy of Sciences, Acad. G. Bonchev, Bl. 25A, 1113 Sofia, Bulgaria
[email protected]
Abstract. In this study we demonstrate how some different techniques which were introduced and studied in previous works by the authors can be integrated and extended in the construction of efficient algebraic multilevel iteration methods for more complex problems. We devise an optimal order algorithm for solving linear systems obtained from locking-free discretization of 3D pure displacement elasticity problems. The presented numerical results illustrate the robustness of the method for nearly incompressible materials.
1 Introduction
In this paper we consider the pure displacement elasticity problem
\[
\sum_{j=1}^{3} \frac{\partial \sigma_{ij}}{\partial x_j} + f_i = 0, \quad x \in \Omega, \quad i = 1, 2, 3,
\qquad\quad u = 0, \quad x \in \Gamma_D = \partial\Omega,
\]
where Ω is a polyhedral domain in R³ and ∂Ω is the boundary of Ω. The stresses σ_ij and the strains ε_ij are defined by the classical Hooke's law, i.e.,
\[
\sigma_{ij}(u) = \lambda \Big(\sum_{k=1}^{3} \varepsilon_{kk}(u)\Big)\,\delta_{ij} + 2\mu\,\varepsilon_{ij}(u),
\qquad
\varepsilon_{ij}(u) = \frac{1}{2}\left(\frac{\partial u_i}{\partial x_j} + \frac{\partial u_j}{\partial x_i}\right).
\]
The unknowns of the problem are the displacements u^T = (u_1, u_2, u_3). A generalization to nonhomogeneous boundary conditions is straightforward. The Lamé coefficients are given by
\[
\lambda = \frac{\nu E}{(1+\nu)(1-2\nu)}, \qquad \mu = \frac{E}{2(1+\nu)},
\]
where E stands for the elasticity modulus, and ν ∈ [0, 1/2) is the Poisson ratio. We use the notion nearly incompressible for the case ν = 1/2 − δ (δ > 0 is a
small parameter). Note that the boundary value problem becomes ill-posed when ν = 1/2 (the material is incompressible). The weak formulation of the 3D linear elasticity problem (homogeneous boundary conditions are assumed) reads as follows: for f = (f_1, f_2, f_3)^T ∈ (L²(Ω))³, find u ∈ (H¹_D(Ω))³ = {v ∈ (H¹(Ω))³ : v|_{Γ_D} = 0} such that
\[
A(u, v) = \int_\Omega f^T v \, dx \qquad \forall v \in (H^1_D(\Omega))^3. \tag{1}
\]
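Since the near-incompressibility enters only through the Lamé coefficients, a tiny illustrative sketch of the formulas above (with made-up values) shows how λ grows without bound as ν approaches 1/2 while μ stays bounded:

```python
def lame_parameters(E, nu):
    """Lame coefficients from Young's modulus E and Poisson ratio nu."""
    lam = nu * E / ((1.0 + nu) * (1.0 - 2.0 * nu))
    mu = E / (2.0 * (1.0 + nu))
    return lam, mu

# nearly incompressible regime: lam blows up as nu -> 1/2, mu stays bounded
for nu in (0.3, 0.49, 0.4999):
    print(nu, lame_parameters(E=1.0, nu=nu))
```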
The bilinear form A(u, v) is of the form
\[
A(u, v) = \int_\Omega \Big( \lambda\, \operatorname{div}(u)\, \operatorname{div}(v) + 2\mu \sum_{i,j=1}^{3} \varepsilon_{ij}(u)\,\varepsilon_{ij}(v) \Big)\, dx. \tag{2}
\]
We rewrite (2) as
\[
A(u, v) = \int_\Omega \langle C\, d(u), d(v) \rangle \, dx
\]
and consider also the modified bilinear form
\[
A^s(u, v) = \int_\Omega \langle C^s d(u), d(v) \rangle \, dx,
\]
where
\[
C = \begin{bmatrix}
\lambda+2\mu & 0 & 0 & 0 & \lambda & 0 & 0 & 0 & \lambda \\
0 & \mu & 0 & \mu & 0 & 0 & 0 & 0 & 0 \\
0 & 0 & \mu & 0 & 0 & 0 & \mu & 0 & 0 \\
0 & \mu & 0 & \mu & 0 & 0 & 0 & 0 & 0 \\
\lambda & 0 & 0 & 0 & \lambda+2\mu & 0 & 0 & 0 & \lambda \\
0 & 0 & 0 & 0 & 0 & \mu & 0 & \mu & 0 \\
0 & 0 & \mu & 0 & 0 & 0 & \mu & 0 & 0 \\
0 & 0 & 0 & 0 & 0 & \mu & 0 & \mu & 0 \\
\lambda & 0 & 0 & 0 & \lambda & 0 & 0 & 0 & \lambda+2\mu
\end{bmatrix},
\quad
C^s = \begin{bmatrix}
\lambda+2\mu & 0 & 0 & 0 & \lambda+\mu & 0 & 0 & 0 & \lambda+\mu \\
0 & \mu & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\
0 & 0 & \mu & 0 & 0 & 0 & 0 & 0 & 0 \\
0 & 0 & 0 & \mu & 0 & 0 & 0 & 0 & 0 \\
\lambda+\mu & 0 & 0 & 0 & \lambda+2\mu & 0 & 0 & 0 & \lambda+\mu \\
0 & 0 & 0 & 0 & 0 & \mu & 0 & 0 & 0 \\
0 & 0 & 0 & 0 & 0 & 0 & \mu & 0 & 0 \\
0 & 0 & 0 & 0 & 0 & 0 & 0 & \mu & 0 \\
\lambda+\mu & 0 & 0 & 0 & \lambda+\mu & 0 & 0 & 0 & \lambda+2\mu
\end{bmatrix},
\]
and
\[
d(u) = \left( \frac{\partial u_1}{\partial x_1}, \frac{\partial u_1}{\partial x_2}, \frac{\partial u_1}{\partial x_3},
\frac{\partial u_2}{\partial x_1}, \frac{\partial u_2}{\partial x_2}, \frac{\partial u_2}{\partial x_3},
\frac{\partial u_3}{\partial x_1}, \frac{\partial u_3}{\partial x_2}, \frac{\partial u_3}{\partial x_3} \right)^T.
\]
Then, due to the pure displacement boundary conditions, since for u, v ∈ (H¹_D(Ω))³ we have
\[
\int_\Omega \frac{\partial u_i}{\partial x_j}\,\frac{\partial v_j}{\partial x_i}\, dx = \int_\Omega \frac{\partial u_i}{\partial x_i}\,\frac{\partial v_j}{\partial x_j}\, dx,
\]
one finds that A(u, v) = A^s(u, v) for all u, v ∈ (H¹_D(Ω))³. The matrix C^s is positive definite. It is also important that the rotations are excluded from the kernel of the related (stabilized) Neumann boundary conditions operator. As a direct result, the nonconforming Crouzeix-Raviart FEs are straightforwardly applicable to the variational problem (1)–(2). Let us recall that locking-free error estimates for the 2D pure displacement problem discretized by Crouzeix-Raviart FEs are presented, e.g., in [5,6,7]. The same scheme of analysis is applicable to the 3D case as well.
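Because the compact layout of C and C^s above is easy to misread, the following small sketch assembles both 9×9 matrices explicitly for the ordering of d(u) given above; it is an illustration, not part of the authors' code.

```python
import numpy as np

def elasticity_coefficient_matrices(lam, mu):
    """Assemble the 9x9 matrices C and C^s acting on
    d(u) = (du1/dx1, du1/dx2, du1/dx3, du2/dx1, ..., du3/dx3)."""
    diag = [0, 4, 8]                      # positions of du_i/dx_i
    C = np.zeros((9, 9))
    Cs = np.zeros((9, 9))
    for i in diag:
        for j in diag:
            C[i, j] = lam                 # lambda coupling of the normal strains
            Cs[i, j] = lam + mu           # strengthened coupling in C^s
        C[i, i] = lam + 2.0 * mu
        Cs[i, i] = lam + 2.0 * mu
    # shear couplings: pairs (du_i/dx_j, du_j/dx_i), i != j
    pairs = [(1, 3), (2, 6), (5, 7)]
    for a, b in pairs:
        C[a, a] = C[b, b] = C[a, b] = C[b, a] = mu
        Cs[a, a] = Cs[b, b] = mu          # no off-diagonal shear coupling in C^s
    return C, Cs
```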
Fig. 1. Reference macroelement of 6 Crouzeix-Raviart elements
2 Composite FR Algorithm
Let us assume that the domain Ω is covered by a tetrahedral mesh T_h based on cubic (macro)elements, each of which is split into 6 tetrahedra. We also suppose that the edges of the cubes are parallel to the coordinate axes. The sequence of nested cubic meshes T_1 ⊂ T_2 ⊂ ··· ⊂ T is obtained by uniform refinement of the coarser macroelements into 8 finer cubes. The construction of the robust algebraic multilevel (AMLI) preconditioner is based on a hierarchical two- and multilevel splitting of the finite element space, see [1,2]. One such algorithm is the so-called First Reduce (FR) splitting, see [3,4,11]. Here we present a composite FR algorithm for AMLI preconditioning of the stiffness matrix A_h, which corresponds to the FEM discretization of the modified elasticity problem (1)–(2) using Crouzeix-Raviart elements defined on the finest mesh T_h. The algorithm is described on the macroelement level. The global stiffness matrix A^s_h is written in the form
\[
A^s_h = \sum_{E \in \mathcal{T}} A^s_E,
\]
where E ∈ T are the cubic macroelements. In what follows we use the numbering of nodes from Fig. 1, where the interior nodes 1 to 6 have the coordinates {(x_i, y_i, z_i)}_{i=1}^{6} = {(1/3, 1/3, 1/3), (2/3, 1/3, 1/3), (1/3, 2/3, 1/3), (2/3, 1/3, 2/3), (1/3, 2/3, 2/3), (2/3, 2/3, 2/3)}. Let φ_1, ..., φ_18 be the standard (scalar) nonconforming linear finite element nodal basis functions on the macroelement E. Then for the 3D elasticity problem we use the basis functions φ_i^(1) = (φ_i, 0, 0)^T, φ_i^(2) = (0, φ_i, 0)^T, and φ_i^(3) = (0, 0, φ_i)^T, i = 1, ..., 18. The vector of the macroelement basis functions
\[
\Phi_E = \big( \phi_1^{(1)}, \phi_1^{(2)}, \phi_1^{(3)}, \phi_2^{(1)}, \phi_2^{(2)}, \phi_2^{(3)}, \ldots, \phi_{18}^{(1)}, \phi_{18}^{(2)}, \phi_{18}^{(3)} \big)^T
\]
is transformed into a vector of new hierarchical basis functions Φ̃_E = J_E Φ_E. Following the FR procedure, we consider a transformation matrix J_E corresponding to the splitting Ṽ(E) = Ṽ_0(E) ⊕ Ṽ_1(E) ⊕ Ṽ_2(E),
\[
\begin{aligned}
\widetilde V_0(E) &= \operatorname{span}\{\phi_1^{(k)}, \phi_2^{(k)}, \phi_3^{(k)}, \phi_4^{(k)}, \phi_5^{(k)}, \phi_6^{(k)}\}_{k=1}^{3},\\
\widetilde V_1(E) &= \operatorname{span}\{\phi_8^{(k)}-\phi_7^{(k)},\ \phi_{10}^{(k)}-\phi_9^{(k)},\ \phi_{12}^{(k)}-\phi_{11}^{(k)},\ \phi_{14}^{(k)}-\phi_{13}^{(k)},\ \phi_{16}^{(k)}-\phi_{15}^{(k)},\ \phi_{18}^{(k)}-\phi_{17}^{(k)}\}_{k=1}^{3},\\
\widetilde V_2(E) &= \operatorname{span}\{\phi_8^{(k)}+\phi_7^{(k)},\ \phi_{10}^{(k)}+\phi_9^{(k)},\ \phi_{12}^{(k)}+\phi_{11}^{(k)},\ \phi_{14}^{(k)}+\phi_{13}^{(k)},\ \phi_{16}^{(k)}+\phi_{15}^{(k)},\ \phi_{18}^{(k)}+\phi_{17}^{(k)}\}_{k=1}^{3}.
\end{aligned}
\]
Accordingly, J_E transforms the macroelement stiffness matrix A^s_E into the hierarchical form Ã^s_E = J_E A^s_E J_E^T,
\[
\widetilde A^s_E =
\begin{bmatrix}
\widetilde A^s_{E:00} & \widetilde A^s_{E:01} & \widetilde A^s_{E:02} \\
\widetilde A^s_{E:10} & \widetilde A^s_{E:11} & \widetilde A^s_{E:12} \\
\widetilde A^s_{E:20} & \widetilde A^s_{E:21} & \widetilde A^s_{E:22}
\end{bmatrix}
\begin{matrix} \widetilde V_0(E) \\ \widetilde V_1(E) \\ \widetilde V_2(E) \end{matrix}\;.
\]
The corresponding global hierarchical stiffness matrix
\[
\widetilde A^s_h = \sum_{E \in \mathcal{T}_h} \widetilde A^s_E
\]
has a 3 × 3 block structure
\[
\widetilde A^s_h =
\begin{bmatrix}
\widetilde A^s_{h:00} & \widetilde A^s_{h:01} & \widetilde A^s_{h:02} \\
\widetilde A^s_{h:10} & \widetilde A^s_{h:11} & \widetilde A^s_{h:12} \\
\widetilde A^s_{h:20} & \widetilde A^s_{h:21} & \widetilde A^s_{h:22}
\end{bmatrix}, \tag{3}
\]
which is induced by the partitioning on macroelement level. The block Ã^s_{h:00} corresponds to the interior nodal unknowns with respect to the macroelements E ∈ T. The matrices Ã^s_{h:11} and Ã^s_{h:22} correspond to certain differences and aggregates of nodal unknowns (basis functions) related to the faces of E. The first step of the FR algorithm is to eliminate locally (static condensation) the first block of the unknowns [4]. For this purpose we factor Ã_h and get the Schur complement B̃^s_h in the form
\[
\widetilde B^s_h =
\begin{bmatrix}
\widetilde B^s_{h:11} & \widetilde B^s_{h:12} \\
\widetilde B^s_{h:21} & \widetilde B^s_{h:22}
\end{bmatrix}, \tag{4}
\]
where its first block corresponds to the differences of the (two) basis functions related to each macroelement face. The matrix B̃^s_{h:22} corresponds to the sum of the (same) basis functions from each macroelement face, and thus is associated with the coarse grid. Here, “coarse grid” means the grid of cubic elements associated with T_h. After applying a two-level method to (4) the problem is reduced
Fig. 2. Numerically computed γ_E^2 as a function of the number of coarsening steps (curves for ν = 0.4, 0.49, 0.499, 0.4999)
to a system with the coarse level matrix B̃^s_{h:22}. This is the end of the first step of our composite algorithm. The next observation is that B̃^s_{h:22} has the same structure as the related Rannacher-Turek FE stiffness matrix. This allows us to apply the FR method from [9] as a second step of the algorithm. The convergence of the AMLI algorithm crucially depends on, and is controllable by, the related CBS constants. Fig. 2 shows the multilevel behavior of the local CBS constant γ_E for varying Poisson ratio [10]. The first important observation is that γ_E is uniformly bounded when the Poisson ratio tends to the incompressibility limit ν = 1/2. We also see that in the multilevel setting, the local CBS constant at the first step of the composite FR algorithm is considerably smaller. Then there is a steep (but bounded by 0.52) increase of γ_E^2, corresponding to the first aggregation step of the second stage (where the Rannacher-Turek FR construction is applied) of the composite algorithm. This peak is followed by a monotonically decreasing sequence of γ_E^2(k), which tends to a certain fixed value (close to 0.4), uniformly with respect to the Poisson ratio.
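A minimal sketch of the local static condensation performed in this first step is given below; it assumes a dense macroelement matrix ordered with the interior block first and is purely illustrative.

```python
import numpy as np

def condense_interior_block(A_E, n0):
    """Eliminate the first (interior) block of a macroelement matrix:
    return the local Schur complement B_E = A_11 - A_10 A_00^{-1} A_01,
    where block 0 has size n0 and block 1 collects the remaining unknowns."""
    A00 = A_E[:n0, :n0]
    A01 = A_E[:n0, n0:]
    A10 = A_E[n0:, :n0]
    A11 = A_E[n0:, n0:]
    return A11 - A10 @ np.linalg.solve(A00, A01)
```

The global Schur complement in (4) can then be obtained by assembling these local contributions, so the elimination never requires a global factorization.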
3 Numerical Tests
We consider the pure displacement linear elasticity problem in the unit cube Ω = (0, 1)³. The robustness of the composite AMLI algorithm is studied varying the mesh size h and for a Poisson ratio ν approaching 1/2. The presented results are for the second step of the composite algorithm, i.e., for the solution of the linear systems with the coarse level matrix B̃^s_{h:22} in (4) we use different variants of the AMLI algorithm. Note that the first step of the solution procedure results in an optimal order process if the two-level preconditioner for (4) has a uniformly bounded condition number. We will comment on this issue at the end of this section. If we implement Dirichlet boundary conditions without elimination of variables, e.g., by equating the corresponding
Table 1. Convergence results for AMLI V-cycle

# voxels   ν = 0.49   ν = 0.4999   ν = 0.499999
16³        12 [18]    10 [14]       7 [11]
32³        15 [24]    12 [19]       8 [14]
64³        19 [31]    15 [25]       9 [17]

Table 2. Convergence results for linear AMLI W-cycle

# voxels   ν = 0.49   ν = 0.4999   ν = 0.499999
16³         9 [14]     7 [11]       6 [9]
32³        10 [15]     8 [13]       6 [10]
64³        10 [16]     8 [14]       6 [10]
rows and columns with the identity matrix, the dimension of the full system to be solved is N_T × N_T, where N_T = 18n²(1 + 2n) and n = 1/h. The number of interior DOF on voxel level is N_I = 18n³. Thus the size of the condensed matrix (4) is N_C × N_C, N_C = 18n²(1 + n), and, finally, its lower right block, to which we apply the recursive multilevel method, yields a linear system of dimension N = 9n²(1 + n). In the tables we list the number of outer iterations that are required to reduce the A-norm of the initial error by a factor 10⁶. The right-hand side vector is the vector of all zeros and the iteration is initialized with a random initial guess. The approximate inverses of the pivot blocks are realized in all cases by static condensation of the interior DOF on macroelement level, followed by an incomplete factorization without any additional fill-in, i.e., ILU(0) applied to the Schur complement. The iteration counts are for the multiplicative and, in brackets, for the additive AMLI. In Table 3 we also report the solution time on a Fujitsu Siemens Primergy RX600 S3 workstation with 4 dual core Intel Xeon MP processors (3.4 GHz) performing the linear AMLI W-cycle method. We observe that our method is completely robust with respect to the Poisson ratio ν; the convergence even becomes faster when ν approaches 1/2. In case of the V-cycle method the number of PCG iterations increases moderately according to the increase of the condition number of the preconditioner when adding levels of approximate factorization. The W-cycle method in both cases, linear and nonlinear AMLI, stabilizes the number of outer iterations and thus yields an optimal order solution process. In case of linear AMLI this is achieved by a proper second-degree stabilization polynomial, in case of nonlinear AMLI by two inner (generalized) conjugate gradient iterations at each intermediate level.

Table 3. Solution time (sec) for linear AMLI W-cycle

# voxels   ν = 0.49        ν = 0.4999      ν = 0.499999
16³         0.48 [0.90]     0.39 [0.71]     0.33 [0.58]
32³         4.97 [8.92]     3.96 [7.67]     3.00 [5.34]
64³        43.11 [86.97]   34.59 [71.56]   26.25 [51.41]
Table 4. Convergence results for nonlinear AMLI W-cycle

# voxels   ν = 0.49   ν = 0.4999   ν = 0.499999
16³         8 [13]     7 [11]       5 [9]
32³         8 [14]     7 [12]       5 [9]
64³         8 [15]     7 [12]       5 [9]
Fig. 3. Numerically computed condition number of the first block B̃^s_{h:11} of the Schur complement (4) as a function of the number of coarsening steps (curves for ν = 0.4, 0.49, 0.499, 0.4999)
Remark 1. The first block B̃^s_{h:11} of the Schur complement (4), as well as the pivot blocks of the recursively computed coarse-level matrices after static condensation of the interior unknowns, are well-conditioned with a condition number bound that is uniform with respect to the Poisson ratio ν ∈ (0, 1/2). This can be shown by a local analysis on (macro)element level, see Fig. 3.
4 Concluding Remarks
The starting point of this paper is a generalization of locking-free discretizations of 2D pure displacement elasticity problems by nonconforming Crouzeix-Raviart finite elements to 3D problems. Then we propose a robust optimal order composite multilevel algorithm for the iterative solution of the related systems of linear algebraic equations. The presented method can also be used for solving efficiently certain problems resulting from mixed finite element discretization, e.g. the Stokes problem. By applying the augmented Lagrangian method one can reduce the indefinite problem to a nearly singular system. In case of Stokes flow the nearly singular system corresponds to a linear elasticity problem for a nearly incompressible material. For details see [8,12]. Note that the demonstrated robustness of the presented preconditioners (with respect to the Poisson ratio) results in an optimal order solution algorithm for this kind of nearly singular systems even without using any regularization techniques.
Acknowledgments The authors gratefully acknowledge the support by the Austrian Academy of Sciences. This work has also been partially supported by the Austrian Science Foundation (FWF Project P19170-N18) and the Bulgarian Science Foundation (grants DO02–147 and DO02–338).
References 1. Axelsson, O., Vassilevski, P.S.: Algebraic Multilevel Preconditioning Methods I. Numer. Math. 56, 157–177 (1989) 2. Axelsson, O., Vassilevski, P.S.: Algebraic Multilevel Preconditioning Methods II. SIAM J. Numer. Anal. 27, 1569–1590 (1990) 3. Blaheta, R., Margenov, S., Neytcheva, M.: Aggregation-based multilevel preconditioning of non-conforming FEM elasticity problems. In: Dongarra, J., Madsen, K., Wa´sniewski, J. (eds.) PARA 2004. LNCS, vol. 3732, pp. 847–856. Springer, Heidelberg (2004) 4. Blaheta, R., Margenov, S., Neytcheva, M.: Uniform estimate of the constant in the strengthened CBS inequality for anisotropic non-conforming FEM systems. Numerical Linear Algebra with Applications 11(4), 309–326 (2004) 5. Brenner, S., Scott, L.: The mathematical theory of finite element methods. Texts in applied mathematics, vol. 15. Springer, Heidelberg (1994) 6. Brenner, S., Sung, L.: Linear finite element methods for planar linear elasticity. Math. Comp. 59, 321–338 (1992) 7. Falk, R.S.: Nonconforming finite element methods for the equations of linear elasticity. Math. Comp. 57, 529–550 (1991) 8. Fortin, M., Glowinski, R.: Augmented Lagrangian methods: Applications to the numerical solution of boundary value problems, vol. 15. North-Holland Publishing Co., Amsterdam (1983) 9. Georgiev, I., Kraus, J., Margenov, S.: Multilevel algorithms for Rannacher-Turek finite element approximation of 3D elliptic problems. Computing 82, 217–239 (2008) 10. Kraus, J., Margenov, S.: Robust Algebraic Multilevel Methods and Algorithms. Radon Series Comp. Appl. Math. 5 (2009) 11. Kraus, J., Margenov, S., Synka, J.: On the multilevel preconditioning of CrouzeixRaviart elliptic problems. Numer. Lin. Alg. Appl. 15, 395–416 (2008) 12. Lee, Y., Wu, J., Xu, J., Zikatanov, L.: Robust subspace correction methods for nearly singular systems. Mathematical Models and Methods in Applied Sciences 17(11), 1937–1963 (2007)
Element-by-Element Schur Complement Approximations for General Nonsymmetric Matrices of Two-by-Two Block Form
Maya Neytcheva¹, Minh Do-Quang², and He Xin¹
¹ Department of Information Technology, Uppsala University, Sweden
[email protected], [email protected]
² Department of Mechanics, Royal University of Technology, Stockholm, Sweden
[email protected]
Abstract. We consider element-by-element Schur complement approximations for indefinite and general nonsymmetric matrices of two-by-two block form, as arising in finite element discretized systems of PDEs. The paper provides some analysis of the so-obtained approximation and attempts to quantify the quality of the underlying two-by-two matrix splitting in a way similar to that used for symmetric positive definite matrices. The quality of the approximation is illustrated numerically.
1 Introduction
This paper targets the efficient solution of systems of equations as arising in multiphase flow models, where various patterns and interface motion have to be resolved. Multiphase processes advance through free contact surfaces (sharp fronts) that have to be accurately tracked by the numerical methods. Some existing models, however, fail to provide an efficient treatment of important problem characteristics such as the so-called wetting contact angle at the walls of a solid in contact with a liquid. A good alternative turns out to be the Phase-Field model (PFM), which allows one to easily combine large scale simulation of the hydrodynamic interaction with the constraint of micro scale properties (such as wetting). There, the mathematical tool used is the Cahn-Hilliard equation. We assume that the discretization is done using the conforming Finite Element Method (FEM) on standard or adaptive meshes. For the solution of the so-arising linear systems of equations we consider iterative methods and describe a preconditioner based on the classical two-by-two block factorization of the matrix of the underlying algebraic system. The novel moment is the analysis of the behaviour of the preconditioner for general nonsymmetric matrices. We briefly describe the physical problem and the mathematical model in Section 2. Section 3 describes the suggested preconditioner and some of its properties for the general nonsymmetric case. Section 4 presents some numerical illustrations. Finally, in Section 5, we state some conclusions and directions for future research.
2 The Physical Problem and Mathematical Models Used
To illustrate some of the effects of multiphase flow we choose to consider a ball falling into water. To understand the effect of throwing a solid object (say, a sphere) into a liquid and, in particular, the splashing phenomena observed when the object penetrates the surface of the liquid, has been a challenge for researchers for more than 100 years. When we drop a ball in water, numerous physical phenomena take place, which we want to understand, measure or compute, and control: the force acting on the sphere, the shape of the produced air cavity, the sound effects during the impact between solid and liquid phase, pulsation of the cavity volume, and the existence of a critical velocity, which was found to be a function of the so-called wetting contact angle. During the years, various mathematical models have been considered to describe the above mentioned phenomena. The model which has been shown to have advantages over others is the Phase-Field (PF) model. The constitutive ideas of the model go back to 1893 (van der Waals). To shortly characterize the PF model, consider two different phases A and B. Let C be the corresponding relative concentration of each phase; thus C takes a distinct constant value in each of the bulk phases, say −1 and 1, and changes rapidly but in a continuous manner from one to the other in an interface strip of certain thickness. The profile of the interface, which is what we are mostly interested in, is obtained by minimizing the so-called free energy of the system F = ∫ f dV, where f is the free energy density, expressed in terms of the concentration C, cf. for instance [10]. Originally, the PF model was derived for stationary problems. It has been extended to model time-dependent problems (Cahn, 1961), and later on, further modified by Bates and Fife (1993) to model the creation, evolution and dissolution of phase-field interfaces as a result of diffusive processes. As a next step, to enable incorporating fluid motion, the velocity u has been added to the model. The latter version is referred to as the Cahn-Hilliard equation, which reads as follows:
\[
\frac{\partial C}{\partial t} + (u \cdot \nabla) C = k \nabla^2 \big( \beta\, \Psi'(C) - \alpha \nabla^2 C \big) \quad \text{in } \Omega. \tag{1}
\]
Here k stands for the mobility of the fluid and Ψ is a double-well function with two minima at ±1, corresponding to the two stable phases¹. In order to determine the velocity u, one has to solve the time-dependent Navier-Stokes (NS) equations, where the forcing term depends on C and on the so-called surface tension Φ. The function Φ is also referred to as the chemical potential of the phase mixture and is defined as the rate of change of the free energy F, namely,
\[
\Phi = \frac{\delta F}{\delta C} = \beta\, \Psi'(C) - \alpha \nabla^2 C.
\]
Here α and β are control parameters (cf., e.g., [10]). In this paper we do not describe the model any further. We state the resulting coupled system of equations in its finalized form, obtained after nondimensionalization.
¹ Examples of such functions are Ψ(C) = (C + 1)²(C − 1)² and Ψ(C) = 1/4 (1 − C²)².
\[
\begin{aligned}
\frac{\partial C}{\partial t} + (u \cdot \nabla) C &= \frac{1}{Pe}\, \nabla^2 \big( \Psi'(C) - Cn^2\, \nabla^2 C \big) &(2)\\
Re\,\Big( \frac{\partial u}{\partial t} + (u \cdot \nabla) u \Big) &= -\nabla p + \nabla^2 u - \frac{1}{Ca\,Cn}\, C \nabla \Phi &(3)\\
\nabla \cdot u &= 0 &(4)
\end{aligned}
\]
As is seen, there are four problem parameters involved in the model: Re is the Reynolds number, Pe is the Peclet number, Ca is the Capillary number (defined as the ratio between the viscous and surface tension forces), and Cn is the Cahn number, which is a measure of the ratio between mean field thickness and characteristic length. We note that the convective Cahn-Hilliard equation (2) is mass-preserving. The system (2)-(4) is usually solved using operator splitting techniques, namely, one solves (2) first, then solves the NS problem (3)-(4), and repeats until a proper convergence criterion is met. In this work we address only the solution of equation (2). It is seen from the expression in (2) that the CH equation is a fourth order nonlinear parabolic PDE. A classical approach to avoid discretization of the high derivatives is to decompose (2) into a coupled system of two second-order PDEs, which after some manipulations takes the form
\[
\psi = f(\phi) + \nabla^2 \phi, \qquad
\frac{\partial \phi}{\partial t} + (b \cdot \nabla)\phi = \nabla \cdot (k \nabla \Phi). \tag{5}
\]
Here Φ is a continuous scalar variable that describes the diffusive interface profile. It has constant value in each phase, ±1. There are two straightforward approaches to solve (5). The first one is the operator splitting method when the two equations are solved autonomously in an alternating manner, which is simpler and many well-studied solution techniques are available. However, it is restricted to very small timesteps which entails very long simulation times. Alternatively, (5) can be solved as a system, discretized by mixed FEM which allows for larger timesteps (cf. [6] and the references therein). Either of the approaches requires the use of a suitable numerical solution method for the so arising linear or linearized large scale systems of algebraic equations, which most often governs the computational efficiency of the FEM models. In many cases those linear systems are solved by sparse direct solution methods. Despite their general applicability and robustness, direct solution methods become very costly for large problems and do remain the most time consuming part of the numerical simulations. For example, with one of the best direct solvers, UMFPACK [4], the numerical simulations (in 2D) in [6] required more than 60 hours of computation. Aiming at reducing the simulation time, we consider iterative methods. The major task in this paper is to test and analyse a preconditioning strategy for systems in a two-by-two block form, such as in (5), based on the structure of the matrix and utilizing the discretization, in this case, standard conforming FEM.
3 Preconditioning Strategy and Estimates
When discretizing a system of PDEs, the resulting linear (or linearized) system of algebraic equations itself admits a two-by-two block structure ((6), left),
\[
A = \begin{bmatrix} A_{11} & A_{12} \\ A_{21} & A_{22} \end{bmatrix}, \qquad
B = \begin{bmatrix} A_{11} & 0 \\ A_{21} & S \end{bmatrix}
\begin{bmatrix} I_1 & Z_{12} \\ 0 & I_2 \end{bmatrix}. \tag{6}
\]
To solve systems with A we consider preconditioned iterative methods, such as pcg, minres, gmres or gcg (cf. [1]). The suggested preconditioner utilizes the matrix block structure and is of block-multiplicative form ((6), right), where S is an approximation of the Schur complement S_A of A, namely, S_A = A_{22} − A_{21} A_{11}^{-1} A_{12}, Z_{12} is either exactly equal to or approximates the matrix product A_{11}^{-1} A_{12}, and I_1, I_2 are identity matrices of corresponding order. The issues of how to solve systems with the pivot block A_{11} and how to handle the block Z_{12} are out of the scope of this paper. We discuss only the construction of an approximation of S_A for the case when the discretization of the PDE system is done using conforming FEM. We note that when we discretize systems of PDEs as in (5) with FEM, the local stiffness matrices, corresponding to one finite element, also admit a two-by-two block form, namely, A = Σ_{m=1}^{M} R^{(m)T} A^{(m)} R^{(m)}, where
\[
A^{(m)} = \begin{bmatrix} A^{(m)}_{11} & A^{(m)}_{12} \\ A^{(m)}_{21} & A^{(m)}_{22} \end{bmatrix}
\begin{matrix} \}\, n_1 \\ \}\, n_2 \end{matrix}\;. \tag{7}
\]
Here M denotes the number of finite elements in the discretization mesh. The matrices R^{(m)}(n, N) are the standard Boolean matrices which provide the local-to-global correspondence of the numbering of the degrees of freedom. Let
\[
S = \sum_{m=1}^{M} R_2^{(m)T} S^{(m)} R_2^{(m)}, \tag{8}
\]
where S^{(m)} = A^{(m)}_{22} − A^{(m)}_{21} A^{(m)-1}_{11} A^{(m)}_{12} and R_2^{(m)} are the parts of R^{(m)} corresponding to the degrees of freedom in A_{22}. In several works, e.g. [8,9,2], it has been shown that S is a high quality approximation of S_A. However, so far this has been shown rigorously only in the case of symmetric positive definite matrices, see [8,2]. Here we suggest a framework to study the quality of the preconditioned system S^{-1} S_A for general nonsymmetric matrices. Define the auxiliary matrix
\[
\widetilde A = \begin{bmatrix}
A^{(1)}_{11} & & & & A^{(1)}_{12} R^{(1)}_{2} \\
& A^{(2)}_{11} & & & A^{(2)}_{12} R^{(2)}_{2} \\
& & \ddots & & \vdots \\
& & & A^{(M)}_{11} & A^{(M)}_{12} R^{(M)}_{2} \\
R^{(1)T}_{2} A^{(1)}_{21} & R^{(2)T}_{2} A^{(2)}_{21} & \cdots & R^{(M)T}_{2} A^{(M)}_{21} & \displaystyle\sum_{m=1}^{M} R^{(m)T}_{2} A^{(m)}_{22} R^{(m)}_{2}
\end{bmatrix}, \tag{9}
\]
which
is constructed in such a way that its Schur complement is exactly S. Clearly, A have the same 22-block. Further, A and A Note that A and A can be related C 0 algebraically in the following way. We define a matrix C = T1 12 , where 012 I2 I2 is the identity matrix of order N2 , 012 is a rectangular zero matrix of order (M n1 , N2 ). The block C1 is of order (M n1 , N 1) and is constructed as follows T T T C1T = R1(1) R1(2) · · · R1(M) . A straightforward computation reveals that 11 C1 C T A C1T A T 1 12 A = C AC = . 21 C1 A A22 An additional observation is that C1T C1 = D, where D is a diagonal matrix of order N1 with entries equal to 1 or 2, corresponding to the fact that two local (m ) (m ) element matrix blocks A11 1 , A11 2 can share only one mesh point (in 2D). Applying Sherman-Morrison-Woodbury formula to S −1 and using the rela we obtain: tions between A and A −1 −1 A −1 A S −1 SA = I2 + A (I1 − W )A 22 12 (Γ − I1 ) 11 12 (= I2 + E),
−1 A −1 A 11 12 A22 A21
(10)
T C1 A−1 11 C1 A11 .
and W = Observe that W C1 = C1 , where Γ = thus W > 1. Further, simple computation reveals that W k = W for any k. Equality (10) shows that there holds S −1 SA ≤ 1 + δ, −1 −1 A −1 A where δ = A (W − I1 )A 22 12 (Γ − I1 ) 11 12 . Below, for spd matrices, we show a relation between (10) and the result from [2] which latter reads:
\[
(1 - \gamma^2)\, S_A \le S \le S_A, \tag{11}
\]
with γ of the corresponding hierarchical hasis function representation. Denoting 12 A−1 A 21 ), U = A 1/2 , −1 A −1/2 A12 A−1/2 and W =A 1/2 C1 A−1 C T A γ = ρ(A 11
22
11
22
and applying a spectral transformation on S −1 SA we obtain
11
11
−1/2 = I2 + U T (I1 − U U T )−1 (I1 − W 1/2 S −1 SA A )U. A 22 22
1
11
(12)
T
Taking into account that γ = ρ(U U ), we obtain 2
λmax (S −1 SA ) ≤ 1 +
γ 2 2 1 ≤ 1 + γ I − W = . 1 2 2 1−γ 1− γ 1−γ 2
(13)
= 0, the upper bound in (13) is attained and admits the same form For W measures the error in approximating A−1 by as in (11). However, since W 11 T −1 C1 A11 C1 , which is never zero, we see that the actual upper bound for λmax (S −1 SA ) is smaller than 1/(1 − γ 2 ) and does not deteriorate with γ → 1, confirmed also by the numerical results. We also see from (10) and (12) that in the general case, γ itself is not sufficient to characterize the quality of the two-by-two splitting and the approximation S, considered here.
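To make the element-by-element construction in (8) concrete, here is a small assembly sketch under the assumption of a generic finite element setting with given local blocks and local-to-global index maps for the second group of unknowns; all names are illustrative.

```python
import numpy as np
import scipy.sparse as sp

def assemble_element_schur(local_blocks, dofmaps_2, N2):
    """Assemble S = sum_m R2^(m)T S^(m) R2^(m) from (8); each local Schur
    complement is S^(m) = A22 - A21 A11^{-1} A12 of the element matrix A^(m)."""
    rows, cols, vals = [], [], []
    for (A11, A12, A21, A22), idx2 in zip(local_blocks, dofmaps_2):
        S_m = A22 - A21 @ np.linalg.solve(A11, A12)   # local Schur complement
        r, c = np.meshgrid(idx2, idx2, indexing="ij")
        rows.append(r.ravel()); cols.append(c.ravel()); vals.append(S_m.ravel())
    return sp.coo_matrix(
        (np.concatenate(vals), (np.concatenate(rows), np.concatenate(cols))),
        shape=(N2, N2)).tocsr()                        # duplicate entries are summed
```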
4 Numerical Illustrations
We consider two test problems: one with a symmetric and indefinite system matrix, as arising from the stationary Stokes problem, and a second with a general nonsymmetric two-by-two block matrix, as arising from the discretized system (5). All numerical experiments are done in Matlab.

Problem 1. We consider a model of Oseen's problem in 2D for finding velocity u and pressure p satisfying
\[
-\nu \Delta u + (w \cdot \nabla) u + \nabla p = f(x, y), \qquad \nabla \cdot u = 0
\]
on the unit square Ω = [0, 1]². Here ν is the viscosity and w is the wind. We consider two cases. Case 1: ν = 1, w = 0 (the Stokes problem). For testing purposes, in this case we choose a solution in advance and compute f and the boundary conditions for the velocity correspondingly. Case 2 (Oseen's driven lid problem, see [5] and the references therein): ν = 1/20, ν = 1/80 and ν = 1/320, and
\[
w = \begin{bmatrix} 2(2y-1)\,(1-(2x-1)^2) \\ -2(2x-1)\,(1-(2y-1)^2) \end{bmatrix}.
\]
The boundary conditions are u_1 = u_2 = 0 for x = 0, x = 1 and y = 0, and u_1 = 1, u_2 = 0 for y = 1. The problem is discretized using a square mesh and the so-called modified Taylor-Hood (MTH) elements, i.e., bilinear basis functions for the velocity on a mesh with meshstep h and bilinear basis functions for the pressure on a mesh with meshstep 2h. It is known (cf. e.g. [3]) that MTH provides a stable discretization for which the LBB condition holds.

Problem 2. Simulation of a moving interface with a constant speed by using the Cahn-Hilliard equation as in (5).

The Stokes problem is solved by the minres method, preconditioned by the preconditioner in (6), right. Oseen's problem is solved by the gcg-mr method. We compare the quality of the element-by-element Schur complement approximation (S) with the pressure mass matrix (M), which is known to be a good approximation of the negative Schur complement, in this case equal to A_{21} A_{11}^{-1} A_{12}. Table 1 shows a comparison of the two preconditioning strategies. For the Stokes problem we present the spectral bounds of M^{-1} S_A and S^{-1} S_A, respectively. For these experiments, A_{11}^{-1} A_{12} is used as Z_{12}, and the blocks A_{11} and S, respectively M, are solved directly in order to see only the influence of the approximation of S_A. In Table 2, W and E are the same as in (10). The results for the Stokes problem, Table 1 (left), illustrate that S is at least as good an approximation of S_A as the mass matrix M. For Oseen's problem, Table 1 (right), when the matrices become nonsymmetric, the advantages of the proposed approach are better seen. The obtained iteration counts with S as a preconditioner are slightly better than those reported in [5].
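For orientation, one application of the block-multiplicative preconditioner B from (6), as it would be invoked inside such a Krylov iteration, can be sketched as follows; the solver callbacks for A_{11} and S are assumptions.

```python
import numpy as np

def apply_block_preconditioner(solve_A11, solve_S, A21, Z12, r1, r2):
    """Action of B^{-1} for B = [[A11, 0], [A21, S]] [[I, Z12], [0, I]] in (6):
    a block lower-triangular solve followed by the unit upper-triangular solve."""
    y1 = solve_A11(r1)                 # A11 y1 = r1
    y2 = solve_S(r2 - A21 @ y1)        # S  y2 = r2 - A21 y1
    x2 = y2
    x1 = y1 - Z12 @ x2                 # undo the coupling block Z12
    return x1, x2
```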
Table 1. Problem 1

            Stokes                                                        Oseen (iterations with A, B)
Size of S   eig(S_A, M) min-max   Iter   eig(S_A, S) min-max   Iter      ν=1/20 M/S   ν=1/80 M/S   ν=1/320 M/S
81          0.166-0.986            8     0.101-0.634            8         28/22        66/37        81/45
289         0.164-0.996            7     0.098-0.635            6         30/24       106/65       219/96
1089        0.164-0.999            6     0.096-0.636            5         29/24       117/88       427/210
4225        -                      4     -                      5         27/22       116/89       522/330
16641       -                      3     -                      4         24/19       110/89       514/398
Table 2. Problem 2

Size of S   eig(S_A^{-1} S)                       ‖W‖    ‖E‖/γ        Iter S_A, S   Iter A, B
            min(Re)-max(Re)   max abs(Im)                             (gmres)       (gcg)
153         0.98-1            0.0001              1.25   0.04/0.38    3             4
561         0.68-1            0.0002              1.25   0.35/1.49    6             7/4
2145        0.26-1            7.2e-6              -      -            11            13/7
8385        0.07-1            0                   -      -            22            25/12
33153       -                 -                   -      -            -             47/24
Table 2 illustrates the quality of S constructed for Problem 2. Column 6 in Table 2 shows the iteration counts to solve a system with SA , preconditioned by S (using the Matlab-gmres method with a relative stopping criterion 10−6 ). Column 7 in Table 2 shows the iterations required to solve a system with A, preconditioned by B as in (6), using the gcg - minimal residual method with a relative stopping criterion 10−6 and 10−3 , respectively. Again, the block Z12 is taken to be equal to A−1 11 A12 . Systems with S and A11 are solved directly using the Matlab \ operator. In this way we monitor only the quality of the approximation S. The experiments in Table 2 indicate that (i) cond(S −1 SA ) for the two-level method depends on the discretization parameter h and, thus, is not of optimal order as for spd problems; (ii) γ can take values larger than 1 and seems not to be the proper quality indicator for nonsymmetric matrices; (iii) the norm of W seems to have an upper bound independent of h; (iv) the error matrix E remains relatively small for all the numerical tests, which makes S still an attractive preconditioner to SA , obtained and applied as a sparse matrix on a low computational cost.
5 Conclusions and Some Open Problems
The numerical experiments complement those from earlier works and show that the element-by-element Schur approximation S is a good quality approximation
of the exact Schur complement S_A for a broad spectrum of problems. The results in Section 3 indicate a way to theoretically quantify the relation between S and S_A using the spectral radius of the same matrix product A_{11}^{-1} A_{12} A_{22}^{-1} A_{21} as in the symmetric positive definite case. The study for the general nonsymmetric case needs more systematic analysis to get a better understanding of and insight into both the target application problem and the resulting approximation of the Schur complement matrix. Another important question to answer is how to efficiently solve systems with the 11-block, which will then make the preconditioner not only numerically but also computationally efficient.
Acknowledgements This research is partly supported by a grant from the Swedish Research Council (VR), Finite element preconditioners for algebraic problems as arising in modelling of multiphase microstructures, 2009–2011.
References 1. Axelsson, O.: Iterative Solution Methods. Oxford University Press, Oxford (1994) 2. Axelsson, O., Blaheta, R., Neytcheva, M.: Preconditioning for boundary value problems using elementwise Schur complements. SIMAX 31, 767–789 (2009) 3. Braess, D.: Finite Elements, 2nd edn. Cambridge University Press, Cambridge (2001) 4. Davis, T.A.: A column pre-ordering strategy for the unsymmetric-pattern multifrontal method. ACM Transactions on Math. Software 30, 165–195 (2004) 5. de Niet, A.C., Wubs, F.W.: Two preconditioners for saddle point problems in fluid flows. Int. J. Numer. Meth. Fluids 54, 355–377 (2007) 6. Do-Quang, M., Amberg, G.: The splash of a ball hitting a liquid surface: Numerical simulation of the influence of wetting. Journal Physic of Fluid (2008) (accepted) 7. femLego (Numerical simulation by symbolic computation), http://www.mech.kth.se/~ gustava/femLego 8. Kraus, J.: Algebraic multilevel preconditioning of finite element matrices using local Schur complements. Num. Lin. Alg. Appl. 13, 49–70 (2006) 9. Neytcheva, M., B¨ angtsson, E.: Preconditioning of nonsymmetric saddle point systems as arising in modelling of visco-elastic problems. ETNA 29, 193–211 (2008) 10. Villanueva, W., Amberg, G.: Some generic Capillary-driven flows. Int. J. of Multiphase Flow 32, 1072–1086 (2006)
Numerical Simulation of Fluid-Structure Interaction Problems on Hybrid Meshes with Algebraic Multigrid Methods
Huidong Yang¹ and Walter Zulehner²
¹ Johannes Kepler University Linz, Altenberger Strasse 69, 4040 Linz, Austria/Europe
[email protected], http://www.numa.uni-linz.ac.at/~huidong
² Johannes Kepler University Linz, Altenberger Strasse 69, 4040 Linz, Austria/Europe
[email protected], http://www.numa.uni-linz.ac.at/~zulehner
Abstract. Fluid-structure interaction problems arise in many application fields such as flows around elastic structures or blood flow problems in arteries. The method presented in this paper for solving such a problem is based on a reduction to an equation at the interface, involving the so-called Steklov-Poincar´e operators. This interface equation is solved by a Newton-like iteration. One step of the Newton-like iteration requires the solution of several decoupled linear subproblems in the structural and the fluid domains. These subproblems are spatially discretized by a finite element method on hybrid meshes. For the time discretization implicit first-order methods are used for both subproblems. The discretized equations are solved by algebraic multigrid methods.
1 Problem Setting of the Fluid-Structure Interaction
1.1 Geometrical Description
Let Ω_0 denote the initial domain at time t = 0, consisting of the structural and fluid sub-domains Ω_0^s and Ω_0^f, respectively. The domain Ω(t) at a time t is composed of the deformable structural sub-domain Ω^s(t) and the fluid sub-domain Ω^f(t). The corresponding interface Γ(t) is evolving from the initial interface Γ_0. The evolution of Ω(t) is obtained by an injective mapping, the so-called arbitrary Lagrangian Eulerian (ALE) mapping (Fig. 1):
\[
x : \Omega_0 \times \mathbb{R}^+ \to \mathbb{R}^3. \tag{1}
\]
The position of a point x_0 ∈ Ω_0^s at a time t is given by the mapping for the structure domain
\[
x_t^s : \Omega_0^s \to \Omega^s(t), \tag{2}
\]
given by x_t^s(x_0) ≡ x^s(x_0, t) = x(x_0, t) = x_0 + d^s(x_0, t) for x_0 ∈ Ω_0^s, where d^s(x_0, t) denotes the displacement of the structural domain at a time t.
Fig. 1. ALE mapping
Correspondingly, the position of any point x_0 ∈ Ω_0^f at a time t is given by the mapping for the fluid domain
\[
x_t^f : \Omega_0^f \to \Omega^f(t), \tag{3}
\]
given by x_t^f(x_0) ≡ x^f(x_0, t) = x(x_0, t) = x_0 + d^f(x_0, t) for x_0 ∈ Ω_0^f, where d^f(x_0, t) denotes the displacement in the fluid domain. It is defined as an extension of the structural displacement d^s at the interface Γ_0:
\[
d^f = \operatorname{Ext}(d^s|_{\Gamma_0}), \tag{4}
\]
0
e.g. the harmonic extension, given by: ⎧ f ⎪ ⎨ −Δd = 0 df = ds ⎪ ⎩ df = 0
in Ω0f , on Γ0 ,
(5)
on Γin (t) ∪ Γout (t).
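As a concrete (if much simplified) illustration of such a harmonic extension, the following Python sketch extends prescribed interface displacement values into a uniform 2-D grid by Jacobi sweeps on the discrete Laplace equation. It only conveys the idea under these assumptions; the computation in the paper is carried out on the 3-D finite element mesh, and all names here are hypothetical.

```python
import numpy as np

def harmonic_extension_2d(d_interface, shape, n_iter=500):
    """Harmonically extend interface displacement values into a rectangular grid.

    d_interface : displacement values prescribed on the interface (top row),
                  i.e. the condition d^f = d^s on Gamma_0
    shape       : (ny, nx) grid size of the fluid domain; the remaining
                  boundary values are kept at zero
    """
    ny, nx = shape
    d = np.zeros((ny, nx))
    d[0, :] = d_interface
    for _ in range(n_iter):
        # One Jacobi sweep for the discrete Laplace equation -Delta d^f = 0;
        # only interior values are updated, so the boundary data persist.
        d[1:-1, 1:-1] = 0.25 * (d[:-2, 1:-1] + d[2:, 1:-1] +
                                d[1:-1, :-2] + d[1:-1, 2:])
    return d
```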
Furthermore, we introduce the domain velocities by

    w^s(x_0, t) := ∂x^s/∂t (x_0, t) = ∂d^s/∂t (x_0, t)

and

    w^f(x_0, t) := ∂x^f/∂t (x_0, t) = ∂d^f/∂t (x_0, t)

for the structural and the fluid domain, respectively.

1.2 The Physical Model in Strong Form
We describe the interface conditions which have to be satisfied for this coupled problem.
Interface Conditions. At the interface between the structure and the fluid domain we assume no-slip conditions:

    d^f|_{Γ_0} = d^s|_{Γ_0}    (6)

and the equilibrium of normal stresses:

    (σ_f n_f) ∘ x_t^f + σ_s n_s = 0,    (7)

where σ_s is the first Piola-Kirchhoff stress tensor, σ_f is the Cauchy stress tensor, and n_f and n_s are the outward normals of Ω^f(t) and Ω_0^s, respectively.

Structure and Fluid Sub-problems. With prescribed Dirichlet data λ for the displacement at the interface Γ_0, we compute the Neumann data σ_s n_s at the interface Γ_0 by solving the structure problem

    ρ_s ∂²d^s/∂t² − div(σ_s(d^s)) = 0    in Ω_0^s,
    σ_s(d^s) n_s = 0                     on Γ_0^n,    (8)
    d^s = 0                              on Γ_0^d,
    d^s = λ                              on Γ_0,

where ρ_s is the density. We concentrate on a linear Saint-Venant–Kirchhoff elastic model, i.e. σ_s(d^s) = 2μ^l ε(d^s) + λ^l div(d^s) I with ε(d^s) = (∇d^s + (∇d^s)^T)/2 and the Lamé constants λ^l, μ^l. We introduce S_s as the Dirichlet-to-Neumann mapping

    S_s : H^{1/2}(Γ_0) → H^{−1/2}(Γ_0),    (9)

given by S_s(λ) = σ_s(d^s) n_s, with an appropriate function space H^{1/2}(Γ_0) and its dual space H^{−1/2}(Γ_0).

Let u(x, t) denote the Eulerian velocity of the fluid. The ALE time derivative of u(x, t) is introduced in order to overcome the difficulty of evaluating the time derivative of the velocity u(x, t) in the Eulerian framework on a moving domain. Let x ∈ Ω^f(t) with x_0 = (x_t^f)^{−1}(x); then the ALE time derivative is given by

    ∂u/∂t|_{x_0}(x, t) = d/dt [ u(x_t^f(x_0), t) ].    (10)

Analogously, with prescribed Dirichlet data ∂λ/∂t for the velocity at the interface Γ_0, we compute the Neumann data (σ_f n_f) ∘ x_t^f at the interface Γ_0 by solving the fluid problem

    ρ_f ∂u/∂t|_{x_0} + ρ_f ((u − w^f) · ∇) u − 2μ div ε(u) + ∇p = 0    in Ω^f(t),
    div u = 0                                                          in Ω^f(t),
    σ_f(u, p) n_f = g_in                                               on Γ_in(t),    (11)
    σ_f(u, p) n_f = 0                                                  on Γ_out(t),
    u ∘ x_t^f = ∂λ/∂t                                                  on Γ_0,
where ρ_f is the density of the fluid, μ its dynamic viscosity, σ_f(u, p) = −pI + 2μ ε(u) the stress tensor, p the pressure, and ε(u) = (∇u + (∇u)^T)/2 the strain tensor. In a similar way as before, we introduce the Dirichlet-to-Neumann mapping

    S_f : H^{1/2}(Γ_0) → H^{−1/2}(Γ_0),    (12)

given by S_f(λ) = (σ_f(u, p) n_f) ∘ x_t^f. With these notations the equilibrium condition (7) can be written as

    S(λ) := S_f(λ) + S_s(λ) = 0,    (13)

which is the so-called Steklov-Poincaré equation and will be solved by an iterative method in Section 2.

1.3 Weak Formulations
For the weak formulation we need the function spaces V^s = [H^1(Ω_0^s)]^3, V_0^s = {v^s ∈ V^s | v^s = 0 on Γ_0 ∪ Γ_0^d}, and V_g^s = {v^s ∈ V^s | v^s = λ(t) on Γ_0} for the structure. For the fluid, we define D^f = [H^1(Ω_0^f)]^3, D_0^f = {d ∈ D^f | d = 0 on Γ_0}, D_g^f(t) = {d ∈ D^f | d = λ(t) on Γ_0}, V^f(t) = {v^f | v^f ∘ x_t^f ∈ [H^1(Ω_0^f)]^3}, V_0^f(t) = {v^f ∈ V^f(t) | v^f ∘ x_t^f = 0 on Γ_0}, V_g^f(t) = {v^f ∈ V^f(t) | v^f ∘ x_t^f = w^f ∘ x_t^f on Γ_0}, and Q^f(t) = {q^f | q^f ∘ x_t^f ∈ L^2(Ω_0^f)}, where H^1(Ω_0^s) and H^1(Ω_0^f) denote the standard Sobolev spaces. Then we obtain:
∂ 2 ds a(d , v ) = ρs 2 · v s dx0 + ∂t Ω0s s
s
Ω0s
[λl divds divv s + 2μl ε(ds ) : ε(v s )]dx0 . (15)
The Weak Form for the Harmonic Extension. Find df ∈ Dgf (t) such that for all φ ∈ D0f , (16) a(df , φ) = 0
with a(df , φ) =
Ω0f
∇df : ∇φdx0 .
(17)
The computational fluid domain Ω f (t) is then given by Ω f (t) = Ω0f + df .
(18)
The Weak Form of the Fluid Problem. Find (u, p) ∈ V_g^f(t) × Q^f(t) such that for all (v^f, q^f) ∈ V_0^f(t) × Q^f(t),

    a(u, v^f) + b_1(v^f, p) = ⟨F^f, v^f⟩,
    b_2(u, q^f) − c(p, q^f) = ⟨G^f, q^f⟩,    (19)

where

    a(u, v^f) = d/dt ∫_{Ω^f(t)} ρ_f u · v^f dx − ∫_{Ω^f(t)} div w^f ρ_f u · v^f dx
                + ∫_{Ω^f(t)} ρ_f ((u − w^f) · ∇) u · v^f dx + ∫_{Ω^f(t)} 2μ ε(u) : ε(v^f) dx,
    b_1(v^f, p) = b_2(v^f, p) = − ∫_{Ω^f(t)} p div v^f dx,    (20)
    c(p, q^f) = 0,   ⟨F^f, v^f⟩ = ∫_{Γ_in(t)} g_in · v^f ds,   ⟨G^f, q^f⟩ = 0.

The Weak Form of the Equilibrium Condition. Find λ(t) ∈ H^{1/2}(Γ_0) such that for all v^f × v^s ∈ V^f(t) × V^s,

    ⟨S_f(λ), v^f⟩_{Γ(t)} + ⟨S_s(λ), v^s⟩_{Γ_0} = 0,    (21)

where v^f ∘ x_t^f = v^s on Γ_0 and ⟨·, ·⟩ denotes the corresponding dual product.

1.4 Discretization
The spatial discretization was done by a finite element method on a hybrid mesh consisting of tetrahedra, hexahedra, pyramids and prisms. These elements, see the first line of Fig. 2, are split into pure tetrahedral elements by introducing artificial points at the volume and face centers, see the second line of Fig. 2. We then construct the finite elements based on pure tetrahedral meshes. The introduced additional degrees of freedom are locally eliminated by the mean value approximation; for details see [5]. Concerning time discretization we proceeded as proposed in [1]: For the fluid problem an implicit Euler scheme is used with a semi-implicit treatment of the non-linear convective term. The structure problem is discretized by an implicit first-order scheme. Good stability properties of this scheme were reported in [1].
2 Iterative Methods for the Interface Equation

We apply a preconditioned Richardson method to (13): given λ^0, for k ≥ 0,

    λ^{k+1} = λ^k + ω^k P_k^{−1} ( −S_s(λ^k) − S_f(λ^k) ),    (22)

with the relaxation parameter ω^k and a proper preconditioner P_k.
[Fig. 2. Splitting of hybrid elements into tetrahedral elements (the number of tetrahedra obtained per element is indicated).]
The Newton algorithm applied to (13) is obtained by using iteration (22) and choosing the preconditioner at step k (see [1]): P_k = S_s′(λ^k) + S_f′(λ^k). In our computation, we instead use an approximation of the full tangent operator

    P̂_k ≈ S_s′(λ^k) + S_f′(λ^k),

which includes the Fréchet derivative for the structure operator S_s and, for the fluid operator S_f, the classical Fréchet derivative part which does not take into account the shape change. In each step of the iterative method a problem of the form

    P̂_k μ^k = −( S_s(λ^k) + S_f(λ^k) )    (23)

has to be solved. For this we used a preconditioned GMRES method with preconditioner S_s′(λ^k) (see [1]). Summarizing, the method can be described as follows: for k ≥ 0,

1. update the residual S_s(λ^k) + S_f(λ^k) by solving the structure and fluid sub-problems,
2. solve the tangent problem (23) via the GMRES method,
3. update the displacement λ^{k+1}; if not accurate enough, go to step 1.

Note that Step 1 can be parallelized due to the independence of the sub-problems for given interface boundary conditions. Step 2 requires solving the linearized structure and fluid problems several times during the GMRES iteration; for details we refer to [5]. The algebraic multigrid method (AMG) is used for the fluid and structure sub-problems, see [3] and [4].
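To make the structure of this outer iteration concrete, the following minimal Python sketch mirrors steps 1–3 above. The operator evaluations (apply_Ss, apply_Sf, the approximate tangent apply_tangent and the structure preconditioner apply_prec) are hypothetical placeholders standing for the sub-problem solves described in the text; this is an illustration, not the authors' implementation.

```python
import numpy as np
from scipy.sparse.linalg import LinearOperator, gmres

def interface_newton(lmbda, apply_Ss, apply_Sf, apply_tangent, apply_prec,
                     tol=1e-6, max_outer=20):
    """Newton-like iteration for the interface equation S_s(lmbda) + S_f(lmbda) = 0."""
    n = lmbda.size
    for k in range(max_outer):
        # Step 1: residual evaluation; the two sub-problem solves are independent
        # and could be carried out in parallel.
        residual = apply_Ss(lmbda) + apply_Sf(lmbda)
        if np.linalg.norm(residual) < tol:
            break
        # Step 2: approximate tangent problem, solved by preconditioned GMRES.
        P_hat = LinearOperator((n, n), matvec=lambda v: apply_tangent(lmbda, v),
                               dtype=float)
        M = LinearOperator((n, n), matvec=apply_prec, dtype=float)
        mu, info = gmres(P_hat, -residual, M=M)
        # Step 3: update the interface displacement.
        lmbda = lmbda + mu
    return lmbda
```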
3 Numerical Results
We simulate a pressure wave in a cylinder of length 5 cm and radius 5 mm at rest. The thickness of the structure is 0.5 mm. The structure is considered linear and clamped at both the inlet and outlet. The fluid viscosity is set to μ = 0.035,
[Fig. 3. Fine and coarse meshes for simulations: (a) coarse mesh, (b) fine mesh.]
[Fig. 4. Simulation results at time t = 5 ms, t = 10 ms, t = 15 ms and t = 20 ms.]
the Lamé constants to μ^l = 1.15 × 10^6 and λ^l = 1.73 × 10^6, the density to ρ_f = 1.0 and ρ_s = 1.2. The fluid and structure are initially at rest and a pressure of 1.332 × 10^4 dyn/cm^2 is set on the inlet for a time period of 3 ms. Two meshes (Fig. 3) are used for the simulations. For all simulations, we use the same time step size δt = 1 ms and run the simulation until the same end time t = 20 ms as in [1]. For visualization purposes the deformation is amplified by a factor of 12. Fig. 4 shows the pressure wave propagation on the fine mesh at different time levels. A relative error reduction by a factor of 10^{-5} is achieved in 2-3 outer iterations. Each of these iterations requires 6-8 GMRES iterations for a relative error reduction by a factor of 10^{-5}. For solving the structure problem, about 10 preconditioned conjugate gradient iterations with AMG preconditioning are needed for a relative error reduction by a factor of 10^{-8}; for the fluid problem, about 5 AMG iterations suffice for a relative error reduction by a factor of 10^{-8}. Almost the same numbers of iterations were observed for the coarse and the fine mesh. For the doubled time step size δt = 2 ms the number of inner AMG iterations for the fluid problem increased to about 10. In future work it is planned to implement an adaptive time stepping strategy to improve the efficiency of the method.
References
1. Deparis, S., Discacciati, M., Fourestey, G., Quarteroni, A.: Fluid-structure algorithms based on Steklov-Poincaré operators. J. Comput. Methods Appl. Mech. Engrg. 105, 5799–5812 (2006)
2. Fernández, M.A., Moubachir, M.: A Newton method using exact Jacobians for solving fluid-structure coupling. J. Comput. and Struct. 83(2-3), 127–142 (2005)
3. Reitzinger, S.: Algebraic Multigrid Methods for Large Scale Finite Element Equations. PhD thesis, Johannes Kepler University Linz (2001)
4. Wabro, M.: Algebraic Multigrid Methods for the Numerical Solution of the Incompressible Navier-Stokes Equations. PhD thesis, Johannes Kepler University Linz (2003)
5. Yang, H.: Numerical Simulations of Fluid-Structure Interaction Problems on Hybrid Meshes with Algebraic Multigrid Methods. PhD thesis, Johannes Kepler University Linz (in preparation, 2009)
Recent Developments in the Multi-Scale-Finite-Volume Procedure

Giuseppe Bonfigli and Patrick Jenny

ETH Zurich, CH-8092 Zurich, Switzerland
[email protected]
Abstract. The multi-scale-finite-volume (MSFV) procedure for the approximate solution of elliptic problems with varying coefficients has been recently modified by introducing an iterative loop to achieve any desired level of accuracy (iterative MSFV, IMSFV). We further develop the iterative concept considering a Galerkin approach to define the coarse-scale problem, which is one of the key elements of the MSFV and IMSFV methods. The new Galerkin based method is still a multi-scale approach, in the sense that upscaling to the coarse-scale problem is achieved by means of numerically computed basis functions resulting from localized problems. However, it does not enforce strict conservativity at the coarse-scale level, and consequently no conservative velocity field can be defined as in the IMSFV-procedure until convergence to the exact solution is achieved. Numerical results are provided to evaluate the performance of the modified procedure.
1 Introduction

The multi-scale-finite-volume (MSFV) procedure by Jenny et al. [1] is a method for the approximate solution of discrete problems resulting from the elliptic partial differential equation

    ∇ · (λ∇φ) = q,    x ∈ Ω,  λ > 0,    (1)

when the operator on the left-hand side is discretized on a Cartesian grid with standard five-point stencils. The coefficient λ can be an arbitrary function of the spatial coordinate x. Successive developments of the MSFV-approach have been proposed by Lunati and Jenny [2], Hajibeygi et al. [3] and Bonfigli and Jenny [4]. The iterative version, i.e. the IMSFV-method, has been introduced in [3] and modified in [4] to improve its efficiency in cases in which the coefficient λ varies over several orders of magnitude within the integration domain. The IMSFV-method as presented in [4] foresees two steps for the computation of the approximate solution at each level of the iterative loop: the fine-scale step (fine-scale relaxation) and the coarse-scale step. The fine-scale step requires the solution of several localized problems on subdomains of the original grid with approximated boundary conditions. This allows one to capture the fine-scale features in the spatial distributions of the coefficient λ and of the source
term q. However, due to the local character of the considered problems, the interdependence of any two points in the integration domain, as implied by the elliptic character of equation (1) (global coupling), cannot be accounted for at this stage. Analogies are found here with domain decomposition approaches, in particular if Dirichlet boundary conditions for the localized problems are extracted in the IMSFV-method from the solution at the previous iteration [4]. If the subdomains are reduced to a grid line or to a single grid point, standard line or Gauss-Seidel relaxation is recovered. The coarse-scale step provides the global coupling missing in the fine-scale step. An upscaled problem with a strongly reduced number of degrees of freedom (coarse-scale problem) is defined by requiring the integral fulfillment of the conservation law (1) for a set of coarse control volumes. This provides an approximate solution on a coarse grid spanning the whole integration domain, which is then prolonged to the fine grid as in a two-level multi-grid approach [3,4]. The main novelty of the IMSFV-procedure considered as an elliptic solver is represented by the strategy used for restriction and prolongation, i.e. to define the coarse-scale problem and to interpolate its solution back to the fine grid. Both steps are achieved by means of the basis functions computed by solving numerically localized problems at the fine-scale level, such that the fine-scale features of the coefficient λ are approximately accounted for. Even if the efficiency of the MSFV and IMSFV-methods has been demonstrated for several applications, some aspects still deserve further investigation. In particular:
– The stability of the iterative procedure for critical λ-distributions has been progressively improved [3,4], but problems where the procedure diverges may still be found.
– For large cases, the size of the coarse-scale problem may become significant, thus making a direct solution impracticable. The introduction of a further upscaling step, in analogy with multi-grid methods, would be an attractive alternative.
The critical aspect with respect to the first point is clearly the quality of the coarse-scale operator. At the same time the structure of the coarse-scale problem also matters for the development of a multi-level procedure. The present investigation considers an alternative definition of the coarse-scale problem based on a Galerkin approach, which, as usual for Galerkin methods, leads to a symmetric positive-definite or semi-definite system matrix. This is clearly a desirable feature in view of the targeted development of a multi-level procedure. Due to its analogy with a Galerkin-based finite-element approach we call the new procedure the iterative-multi-scale-finite-element method (IMSFE). As a preliminary step towards a multi-level method, we compare the performance of the IMSFE-procedure with that of the original IMSFV procedure, when the coarse-scale problem is solved directly. The base-line formulation of the IMSFV-procedure is the one presented in [4]. We point out that Galerkin approaches, also iterative ones, have already been used in the context of multi-scale methods for elliptic problems, beginning with the
seminal work by Hou and Wu [5] (see also [6]). The novelty of our contribution is rather their application in the context of the iterative strategy from the IMSFV-method, i.e. considering basis functions as defined in the IMSFV-procedure and including line or Gauss-Seidel relaxation to converge at the fine-scale level.
2 The IMSFV-Procedure

An extensive discussion of the IMSFV-procedure can be found in [3,4]. Here we only present the basics. The different grids considered for the formulation of the IMSFV-procedure are sketched in figure 1. The so-called coarse and dual grids are staggered with respect to each other and their cells (coarse and dual cells) are defined as rectangular blocks of cells from the original grid (fine grid). Neighbouring dual cells share one layer of fine-grid cells (bottom right of figure 1). Let P be the index set for an indexing of the fine-grid cells. The unknowns of the fine-scale problem are the values φ_j, j ∈ P, of the discrete solution φ at the centroids x_j of the fine cells. The unknowns of the coarse-scale problem are the values φ_j, j ∈ P̄ ⊂ P, at fine-grid nodes corresponding to the corners of dual cells (coarse-grid nodes, crosses in figure 1). Given the approximate solution φ^[n], the improved solution φ^[n+1] at the next iteration step is given by

    φ^[n+1] = S^{n_s}(φ^[n] + δφ),    (2)

where S is the fine-scale relaxation operator (we only consider line and Gauss-Seidel relaxation), which is applied n_s times after adding the correction δφ from the coarse-scale step to the old solution. In turn, the correction δφ is defined as the linear superposition

    δφ = Σ_{j∈P̄} δφ_j Φ_j    (3)

of the basis functions Φ_j, scaled with the node values δφ_j, j ∈ P̄, provided by the solution of the coarse-scale problem.
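As a structural illustration (not the authors' code), the following Python sketch shows how one iteration of (2)–(3) could be organized. The prolongation matrix Phi (columns holding the basis functions Φ_j), the coarse solver solve_coarse and the relaxation routine relax are hypothetical placeholders.

```python
import numpy as np

def imsfv_iteration(phi, solve_coarse, Phi, relax, ns):
    """One iteration of the form (2)-(3): coarse correction, then ns relaxations."""
    delta_nodes = solve_coarse(phi)   # coarse-scale node values delta_phi_j, j in P_bar
    delta_phi = Phi @ delta_nodes     # prolongation by the basis functions, eq. (3)
    phi = phi + delta_phi
    for _ in range(ns):               # fine-scale relaxation operator S applied ns times
        phi = relax(phi)
    return phi
```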
[Fig. 1. Coarse grid (cells Ω̄_j, solid lines) and dual grid (cells Ω̃_j, dashed lines), with enlargement of the corner region of coarse (top) and dual cells (bottom). Thin lines mark the boundaries of fine-grid cells, thick lines the boundaries of coarse and dual cells.]
Once the basis functions are assigned, the coarse-scale problem, i.e. a linear system for the coarse-scale unknowns δφ_j, j ∈ P̄, is obtained by requiring the conservation law represented by equation (1) to be fulfilled by (φ^[n] + δφ) for all coarse cells Ω̄_j. By considering the divergence theorem and the definition (3) of δφ, one obtains the following equivalent sets of equations:

    ∫_{Ω̄_j} ∇ · λ∇(φ^[n] + δφ) dV = ∫_{Ω̄_j} q dV,    j ∈ P̄,    (4a)

    Σ_{k∈P̄} δφ_k ∫_{∂Ω̄_j} λ∇Φ_k · n dS = − ∫_{∂Ω̄_j} λ∇φ^[n] · n dS + ∫_{Ω̄_j} q dV,    j ∈ P̄,    (4b)

of which the latter is usually implemented in the IMSFV-procedure. Of course, the discrete version of the equations is considered in the numerical procedure, where the integration and differentiation operators are replaced by the corresponding discrete operators. The equivalence of equations (4a) and (4b) is preserved also in the discrete case, since the divergence theorem holds exactly for the considered finite-volume discretization.

In conclusion, we detail the definition of the basis functions Φ_j, each of which is non-zero only within the dual cells intersecting the corresponding coarse cell Ω̄_j. Fix j ∈ P̄ so that no face of Ω̄_j lies on the domain boundary ∂Ω, and let Ω̃_k be a dual cell so that Ω̃_k ∩ Ω̄_j ≠ ∅. Let furthermore C ⊂ P̄ be the set of indices of the coarse-grid nodes x_l corresponding to corners of Ω̃_k. The restriction of Φ_j to Ω̃_k is then determined by the following localized elliptic problem with so-called reduced boundary conditions:

    ∇ · λ̂∇Φ_j = 0,          x ∈ Ω̃_k,                 (5a)
    Φ_j = 0,                 x = x_l, l ∈ C \ {j},     (5b)
    Φ_j = 1,                 x = x_j,                  (5c)
    ∇_t · λ̂∇_t Φ_j = 0,      x ∈ ∂Ω̃_k.                (5d)

Thereby, the tangential nabla operator ∇_t = ∇ − n(n · ∇) is defined at boundary points x ∈ ∂Ω̃_k analogously to ∇, but ignores derivatives in the direction normal to the boundary. The modified coefficient λ̂ = max{λ, Λ̂}, Λ̂ > 0, follows from λ after removing very low values to avoid bad conditioning of the coarse-scale problem [4]. For the numerical implementation, the discretized version of equation (5d) is imposed in place of (5a) for cells adjacent to the boundary ∂Ω̃_k, leading to 1-d problems along the edges of ∂Ω̃_k, with Dirichlet-like boundary conditions given by (5b) and (5c). Finally, if one or more edges of Ω̃_k lie on the domain boundary, the homogeneous counterpart of the boundary condition prescribed for the solution φ is imposed there in place of equation (5d). The consistency of the definition of Φ_j at fine-grid nodes shared by more than one dual cell and their suitability to define the coarse system are discussed in [3,4].
3 The Galerkin Approach
The coarse-scale problem in its form (4a) can be interpreted as a weak formulation of the elliptic equation (1) considering the test functions

    χ_{Ω̄_j} = { 1,  x ∈ Ω̄_j;   0,  x ∉ Ω̄_j },    j ∈ P̄.    (6)

The obvious alternative considered in this work is represented by the Galerkin approach, in which the basis functions themselves are considered as test functions [5,6]. The following equivalent coarse-scale problems are derived:

    ∫_Ω Φ_j [∇ · λ∇(φ^[n] + δφ)] dV = ∫_Ω Φ_j q dV,    j ∈ P̄,    (7a)

    Σ_{k∈P̄} δφ_k ∫_Ω Φ_j ∇ · λ∇Φ_k dV = ∫_Ω Φ_j (−∇ · λ∇φ^[n] + q) dV,    j ∈ P̄,    (7b)

    Σ_{k∈P̄} δφ_k ∫_Ω ∇Φ_j · λ∇Φ_k dV = ∫_Ω Φ_j (−∇ · λ∇φ^[n] + q) dV,    j ∈ P̄.    (7c)
Summations needed to evaluate the volume integrals in the discrete case may be limited to cells in the support of Φ_j and their immediate neighbours. The formulation (7c) of the coarse-scale problem shows that the system matrix is symmetric and positive definite, except for problems with Neumann conditions on the whole boundary, for which it is positive semi-definite, with one single eigenvalue equal to zero. Constant φ-distributions are in this case the only non-trivial elements of the matrix null-space [4]. Correspondingly, the system is uniquely solvable if the boundary conditions are not only Neumann. In the opposite case, the solution exists, and is determined up to an additive constant, only if the usual compatibility constraint between Neumann boundary conditions and source term is fulfilled. The effects of the Galerkin approach on the performance of the complete iterative procedure are not as evident. Numerical experiments investigating this aspect for a challenging distribution of the coefficient λ are presented in the following section. We point out that the original IMSFV procedure provides, at any convergence level of the iterative procedure, an approximation g̃ for the gradient ∇φ, so that ∇ · g̃ = q (conservative vector field). This is not true for the Galerkin-based IMSFE-procedure, for which only the fully converged solution is conservative.
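In algebraic terms, when the fine-scale discretization of the elliptic operator is available as a sparse matrix A and the basis functions Φ_j are stored as the columns of a prolongation matrix P, a Galerkin coarse-scale problem of the type (7b)/(7c) reduces to the familiar triple product used in algebraic multigrid. The following Python sketch illustrates this view under those assumptions; it is not the authors' implementation, and A, P, phi and q are hypothetical inputs.

```python
import scipy.sparse.linalg as spla

def galerkin_coarse_correction(A, P, phi, q):
    """Coarse-scale correction with a Galerkin coarse operator.

    A   : sparse fine-scale matrix (discretization of -div(lambda grad))
    P   : sparse prolongation matrix whose columns are the basis functions Phi_j
    phi : current fine-scale approximation phi^[n]
    q   : fine-scale source vector
    """
    A_coarse = (P.T @ A @ P).tocsc()            # symmetric whenever A is symmetric
    rhs = P.T @ (q - A @ phi)                   # restricted residual, cf. (7b)/(7c)
    delta_nodes = spla.spsolve(A_coarse, rhs)   # direct coarse solve
    return P @ delta_nodes                      # prolonged correction delta_phi
```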
4 Numerical Tests
We consider the elliptic problem of the form (1), which has to be solved in the simulation of incompressible flows in porous media. The coefficient λ corresponds to a 2-d slice of the permeability field for the SPE-10-bottom-layer test case for reservoir simulations [7]. The fine grid contains 220 × 60 cells, equally divided
into 22 × 6 coarse cells. Concentrated sinks and sources of equal intensity and opposite sign are set at two opposite corners of the integration domain. The distribution for λ and the fine-scale solution are displayed in figure 2.

[Fig. 2. Permeability distribution and solution. Lines in the left figure correspond to the boundaries of the coarse cells, circles to the coarse-grid nodes.]

Convergence rates Γ for the IMSFV and IMSFE-procedures are presented in figure 3 for different values of the lower bound Λ̂ (see equation (5)) and for different stretching factors α = Δx/Δy of the fine-grid cells. The coefficient λ is assumed to be constant within fine cells for the discrete formulation of the problem, and the value associated to each cell is not modified when varying α. Results are presented for computations in which fine-scale relaxation is obtained either by line relaxation (n_s = 10) or Gauss-Seidel relaxation (n_s = 20; each line-relaxation step includes two steps comparable to a Gauss-Seidel step, one for each spatial direction). The convergence rate Γ is defined as Γ = 1/N, where N is the average number of iterations needed to reduce the error ε = ||φ^[n] − φ||∞ by one order of magnitude. We evaluate Γ considering iterations for which 10^{-5} ≤ ε ≤ 10^{-3}. If α ≫ 1, the connectivity between fine cells sharing edges normal to the y-axis is much larger than for cells sharing faces normal to the x-axis (anisotropic problems). This is known to be a critical situation, and similar problems could be handled successfully by IMSFV only after including line relaxation in the fine-scale step [3]. The robustness of the iterative procedure in this respect was enhanced by introducing the clipped coefficient field λ̂ for the computation of basis functions [4]. Figure 3 shows that a further significant improvement may be accomplished by considering the Galerkin-based coarse system. On the other hand, the stability envelope of the original procedure can be enlarged only marginally by increasing the number n_s of relaxation steps. Convergence rates for stable computations depend only moderately on the strategy considered to define the coarse-scale problem. However, the improved stability of the Galerkin approach simplifies the choice of the lower bound Λ̂, which in the standard procedure is not easy, since, for a given α, the optimal value of Λ̂ is often close to the limit of the stability region. When using the Galerkin-based approach, a reasonable choice for moderate α would be Λ̂ = 0 (no lower bound). For very large stretching factors α, the performance is not satisfactory, neither for the IMSFV- nor for the IMSFE-procedure. The highest convergence rates are obtained for Λ̂/||λ||∞ = 1, i.e. when the basis functions are computed
[Fig. 3. Convergence rates log10(Γ) for the IMSFE (left column) and the IMSFV-procedures (right column), plotted over Λ̂/||λ||∞ and the stretching factor α, considering line relaxation (n_s = 10, top row) or Gauss-Seidel relaxation (n_s = 20, bottom row) for fine-scale relaxation. Blanked values indicate divergence of the iterative procedure.]
[Fig. 4. Convergence rates Γ for the IMSFE (lines) and the IMSFV-procedures (lines with symbols) as functions of the number n_s of fine-scale relaxation steps (n_s = 5, 10, 15, 20, 300) and of the grid stretching α, with Λ̂/||λ||∞ = 10^{-2}. Fine-scale relaxation is achieved either by means of line relaxation (left) or Gauss-Seidel relaxation (right).]
for a homogeneous λ-distribution. In this case the IMSFV-approach performs slightly better. If Λ̂ is decreased, the IMSFV-procedure becomes unstable, while convergence rates for the Galerkin-based approach decrease to the level provided by the fine-scale relaxation step. Better results for anisotropic problems could
be achieved by modifying the number of fine cells per coarse cell so as to keep the aspect ratio of the latter close to one. Convergence rates are consistently higher if line relaxation is used in place of Gauss-Seidel relaxation. The dependence of Γ on the number n_s of relaxation steps is investigated in figure 4 for Λ̂/||λ||∞ = 10^{-2}. Convergence rates are found in all cases to grow proportionally to n_s. Since fine-scale relaxation represents a relevant portion of the computational costs, no decisive gain of performance can be attained by increasing n_s.
5 Conclusions
A Galerkin formulation has been proposed for the coarse-scale problem of the IMSFV-procedure, leading to a symmetric positive-definite or semi-definite matrix. This is equivalent to using a finite-element approach at the coarse level [5,6]. Numerical testing has been carried out considering one challenging spatial distribution of the coefficient in the elliptic problem (permeability field according to the terminology used for incompressible flows in porous media). If the coarse system is solved directly, as usual in IMSFV, the resulting IMSFE-procedure is significantly more stable than the original one. For cases where both procedures converge, differences in their convergence rates are moderate. These results seem to indicate that the Galerkin-based approach, due to the favourable features of the coarse-scale-system matrix, could be a good starting point for the generalization of IMSFV as a multi-level procedure. Further tests considering different permeability distributions are needed to allow conclusive statements.
References
1. Jenny, P., Lee, S.H., Tchelepi, H.A.: Multi-scale finite-volume method for elliptic problems in subsurface flow simulation. J. Comp. Phys. 187, 47–67 (2003)
2. Lunati, I., Jenny, P.: Multiscale finite-volume method for density-driven flow in porous media. Comput. Geosci. 12, 337–350 (2008)
3. Hajibeygi, H., Bonfigli, G., Hesse, M.A., Jenny, P.: Iterative multiscale finite-volume method. J. Comp. Phys. 277, 8604–8621 (2008)
4. Bonfigli, G., Jenny, P.: An efficient multi-scale Poisson solver for the incompressible Navier-Stokes equations with immersed boundaries. J. Comp. Phys. (accepted, 2009)
5. Hou, T.Y., Wu, X.H.: A multiscale finite element method for elliptic problems in composite materials and porous media. J. Comp. Phys. 134, 169–189 (1997)
6. Efendiev, Y., Hou, T.Y.: Multiscale Finite Element Methods: Theory and Application, 1st edn. Springer, Heidelberg (2009)
7. Christie, M.A., Blunt, M.J.: 10th SPE comparative solution project: a comparison of upscaling techniques. In: SPE 66599 (February 2001)
Boundary Element Simulation of Linear Water Waves in a Model Basin

Clemens Hofreither, Ulrich Langer, and Satyendra Tomar

DK Computational Mathematics, JKU Linz, Altenberger Straße 69, 4040 Linz, Austria
Institute of Computational Mathematics, JKU Linz, Altenberger Straße 69, 4040 Linz, Austria
Johann Radon Institute for Computational and Applied Mathematics (RICAM), Altenberger Straße 69, 4040 Linz, Austria
Abstract. We present the Galerkin boundary element method (BEM) for the numerical simulation of free-surface water waves in a model basin. In this work, as a first step we consider the linearized model of this timedependent three-dimensional problem. After time discretization by an explicit Runge-Kutta scheme, the problem to be solved at each time step corresponds to the evaluation of a Dirichlet-to-Neumann map on the free surface of the domain. We use the Galerkin BEM for the approximate evaluation of the Dirichlet-to-Neumann map. To solve the resulting large, dense linear system, we use a data-sparse matrix approximation method based on hierarchical matrix representations. The proposed algorithm is quasi-optimal. Finally, some numerical results are given.
1 Introduction
Numerical simulation of free-surface water waves in a model test basin is important when maritime structures, such as freight carriers, ferries, and oil rigs, are tested on a model scale. Before the actual construction, the owners and designers need precise information about the hydrodynamic properties of their design. Several numerical algorithms have been proposed for such problems, see e.g. [8] (a combination of finite element method and finite difference method), or [7] (a complete discontinuous Galerkin finite element method). For such problems, only the discretization of the surface is of interest, which is a typical characteristic of the boundary element method (BEM). Unfortunately, the resulting system of linear equations from BEM is dense. On the other hand, while the finite element method (FEM) requires the discretization of the whole domain, the resulting matrices are sparse, and fast iterative solvers are available for these discrete systems. Thus, asymptotically, FEM has been considered the favorable choice as compared to BEM, at least for three-dimensional problems, see e.g. [4]. However, data-sparse approximations of the dense BEM matrices can overcome this drawback of the BEM, see [2,5] and the references therein. In this paper, we present the Galerkin BEM for the numerical simulation of linear free-surface water waves in a model basin. We use a linearized model for
representing the dynamics of the water, which is introduced in Section 2. Its basic structure is that of an operator ordinary differential equation involving a Dirichlet-to-Neumann map on its right-hand side. For the evaluation of this operator, we employ a boundary element method, which is briefly described in Section 3. The resulting large, dense matrices are approximated via data-sparse H-matrix techniques, as outlined in Section 4. Finally, we present numerical results in Section 5 and conclude our findings in Section 6.
2 Modeling

We briefly outline the mathematical model used to describe the behavior of a model basin. The detailed derivation can be found in [9]. We start from the Navier-Stokes equations for an ideal, incompressible fluid. By assuming an irrotational flow in a simply connected domain, we may introduce a potential φ(x, y, z, t) for the velocity v = ∇φ. If the amplitude of the waves is small in comparison to the depth of the basin, we may linearize the problem; our computational domain Ω ⊂ R^3 is then time-independent. Without loss of generality, let Ω be oriented such that gravity applies in the negative z direction with a magnitude g, and the free surface Γ_F of Ω is a subset of the plane z = 0. The remainder of the surface, Γ_N = ∂Ω \ Γ_F, represents the walls of the basin, where we prescribe a given Neumann boundary condition g_N, possibly time-dependent, for the potential φ. We denote the vertical perturbation due to waves at the free surface by a scalar function ζ(x, y, t) with (x, y, 0) ∈ Γ_F; see Figure 1 for a sketch. Under these assumptions, we obtain the following system of equations, where the first and the second condition on the free surface are called the kinematic and the dynamic boundary condition, respectively:

    Δφ = 0 in Ω,            ∂φ/∂n = g_N(t) on Γ_N,
    ∂ζ/∂t = ∂φ/∂n on Γ_F,   ∂φ/∂t = −g ζ on Γ_F.    (1)
[Fig. 1. A sketch of the free surface parametrization ζ(x, y, t) over the domain Ω; the undisturbed free surface lies in the plane z = 0.]
We now introduce the Dirichlet-to-Neumann map or Steklov-Poincaré operator S(t) : H^{1/2}(Γ_F) → H^{−1/2}(Γ_F). For given Dirichlet data g_D ∈ H^{1/2}(Γ_F), let u ∈ H^1(Ω) be the weak solution of the mixed boundary value problem

    Δu = 0 in Ω,    ∂u/∂n = g_N(t) on Γ_N,    and u = g_D on Γ_F.    (2)

We then define S(t) to be the Dirichlet-to-Neumann map

    S(t) : g_D ↦ (∂u/∂n)|_{Γ_F}.

Using this operator, the behavior of the model problem (1) on the free surface may be specified in the form of a system of two coupled operator ordinary differential equations,

    d/dt (φ_F, ζ)^T = [ 0, −g ; S(t), 0 ] (φ_F, ζ)^T,    (3)

where φ_F represents the trace of the potential φ on Γ_F. As initial values, we use constant zero functions for both φ_F and ζ. In the context of our model problem, this corresponds to the free surface being undisturbed and at rest initially. For the solution of this ODE system in a time interval [0, T], we introduce a discretization of the time axis, 0 = t_0 < t_1 < ... < t_N = T, with the step sizes τ^(n) := t_{n+1} − t_n. After spatial discretization, the Steklov-Poincaré operator S exhibits moderate stiffness with a Lipschitz constant of the order O(h^{−1}). The use of an explicit scheme for time integration, e.g. a classical fourth-order Runge-Kutta method, is thus justifiable as long as the time step size is not chosen too large. For the discretization of the Steklov-Poincaré operator S, we use a boundary element method, as motivated in the introduction.
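As an illustration of the time stepping described above (a sketch under assumptions, not the authors' code), the following Python fragment advances the coupled system (3) by one classical fourth-order Runge-Kutta step; apply_S is a hypothetical placeholder for the discretized Steklov-Poincaré operator, i.e. one BEM solve per evaluation.

```python
import numpy as np

def rk4_step(phi_F, zeta, t, tau, apply_S, g=9.81):
    """One classical RK4 step for d/dt [phi_F, zeta] = [-g*zeta, S(t) phi_F]."""
    def rhs(pf, z, time):
        # Right-hand side of system (3): dynamic and kinematic conditions.
        return -g * z, apply_S(pf, time)

    k1p, k1z = rhs(phi_F, zeta, t)
    k2p, k2z = rhs(phi_F + 0.5 * tau * k1p, zeta + 0.5 * tau * k1z, t + 0.5 * tau)
    k3p, k3z = rhs(phi_F + 0.5 * tau * k2p, zeta + 0.5 * tau * k2z, t + 0.5 * tau)
    k4p, k4z = rhs(phi_F + tau * k3p, zeta + tau * k3z, t + tau)

    phi_F_new = phi_F + tau / 6.0 * (k1p + 2 * k2p + 2 * k3p + k4p)
    zeta_new  = zeta  + tau / 6.0 * (k1z + 2 * k2z + 2 * k3z + k4z)
    return phi_F_new, zeta_new
```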
3 The Boundary Element Method

The BEM operates only on the Cauchy data, that is, on the Dirichlet and Neumann traces on the boundary of the computational domain. The Cauchy data are related to each other via an integral equation on the boundary. This equation is then solved via e.g. a collocation or Galerkin approach. For a thorough treatment of the boundary element method, we refer the reader to e.g. [5]. For the remainder of this section, let u refer to the Dirichlet values of the solution of the PDE on the boundary Γ, and let v := (∂u/∂n)|_Γ refer to its Neumann data. With the help of a fundamental solution E(x, y) = 1/(4π |x − y|) of the Laplace equation in 3D, we now define the boundary integral operators

    V : H^{−1/2}(Γ) → H^{1/2}(Γ),    K : H^{1/2}(Γ) → H^{1/2}(Γ),    D : H^{1/2}(Γ) → H^{−1/2}(Γ),
called the single layer potential operator, double layer potential operator and hypersingular operator, respectively. We have the relation

    (u, v)^T = C (u, v)^T := [ ½I − K, V ; D, ½I + K′ ] (u, v)^T,    (4)

where the two-by-two block operator C is called the Calderón projector and K′ denotes the adjoint of the double layer potential operator K. Consider now the mixed boundary value problem (2) for a fixed t. We define extensions g̃_D ∈ H^{1/2}(Γ) and g̃_N ∈ H^{−1/2}(Γ) of the given boundary values and choose the ansatz u = g̃_D + u_N, v = g̃_N + v_D with u_N ∈ H^{1/2}(Γ), v_D ∈ H^{−1/2}(Γ). Substituting this in (4) and restricting the equations to suitable parts of the boundary yields

    V v_D − K u_N = ½ g̃_D + K g̃_D − V g̃_N    in H^{1/2}(Γ_D),    (5)
    K′ v_D + D u_N = ½ g̃_N − D g̃_D − K′ g̃_N   in H^{−1/2}(Γ_N).   (6)

We then choose a trial space Λ := H̃^{−1/2}(Γ_D) × H̃^{1/2}(Γ_N), where

    H̃^{1/2}(Γ′) := { v = ṽ|_{Γ′} : ṽ ∈ H^{1/2}(Γ), supp ṽ ⊂ Γ′ },
    H̃^{−1/2}(Γ′) := ( H̃^{1/2}(Γ′) )′

for any open subset Γ′ ⊂ Γ. With an arbitrary test function (s, t) ∈ Λ, we multiply (5) by s and (6) by t in order to obtain a variational formulation. We discretize Ω by a quasi-uniform and shape-regular triangulation with mesh size h. On this mesh, we define the space of piecewise linear and continuous functions S_h^1(Γ_N) ⊂ H̃^{1/2}(Γ_N) on the Neumann boundary, and the space of piecewise constant functions S_h^0(Γ_D) ⊂ H̃^{−1/2}(Γ_D) on the Dirichlet boundary. This gives us the natural choice Λ_h := S_h^0(Γ_D) × S_h^1(Γ_N) ⊂ Λ for a finite-dimensional trial space. We thus obtain the Galerkin variational formulation: find (v_Dh, u_Nh) ∈ Λ_h such that for all (s_h, t_h) ∈ Λ_h there holds

    a(v_Dh, u_Nh; s_h, t_h) = F(s_h, t_h),    (7)

with the bilinear form

    a(v, u; s, t) = ⟨V v, s⟩_{Γ_D} − ⟨K u, s⟩_{Γ_D} + ⟨K′ v, t⟩_{Γ_N} + ⟨D u, t⟩_{Γ_N}

and an analogous abbreviation F(s, t) of the right-hand side. The resulting linear system may be solved by a suitable Krylov subspace method, e.g. MINRES.
4 Data-Sparse Approximation

The system matrices obtained from (7) are dense. By discretizing the boundary only, we obtain N_BEM = O(h^{-2}) unknowns, resulting in a fully populated system
matrix with O(h^{-4}) non-zero entries. Even with an optimally preconditioned iterative solver, the solution of the corresponding linear system will thus require at least O(h^{-4}) arithmetical operations. In contrast to the BEM, the FEM, where we have to discretize the entire domain, results in a system with N_FEM = O(h^{-3}) unknowns. However, the finite element stiffness matrices are sparse, i.e. they have only O(h^{-3}) non-zero entries. With an optimal preconditioner, a numerical solver with O(h^{-3}) operations is attainable. However, it is possible to avoid the difficulties associated with classical BEM discretization by using data-sparse approximation of the involved system matrices. In particular, we apply hierarchical matrix (or H-matrix) techniques to represent the system matrix. This results in a considerable reduction of the memory demand for matrix coefficients compared to what the storage of the full matrix would require; see e.g. Bebendorf [2]. The core idea is to approximate a matrix A ∈ R^{m×n} by a sum of outer products of vectors u_i ∈ R^m, v_i ∈ R^n, i.e. A ≈ Ã = Σ_{i=1}^r u_i v_i^T. We call this a low-rank approximation of A with rank (at most) r. One way to construct such approximations is to compute a singular value decomposition of A and then discard all singular values which are below a certain threshold. This is called truncated singular value decomposition. While this produces optimal approximations in the spectral norm [2], it is quite slow. Therefore, faster methods have been developed in recent years. We only mention here the Adaptive Cross Approximation (ACA) method. Its concept is to construct Ã from a sum of crosses, that is, outer products of a column and a row of A. Once a desired error threshold is reached, the process is stopped. The approximation rank is thus determined adaptively. Our BEM system matrices share the common property that they have a near-singularity along the diagonal. For such matrices, typically no suitable low-rank approximation with desirable accuracy exists. Thus, the algorithms described above fail to give useful results if applied to the entire matrix. Instead, it is required to split the matrix recursively into sub-matrices which may be better approximated. This is done by clustering the degrees of freedom which are represented by the matrix rows and columns into so-called cluster trees. Pairs of clusters are then chosen according to some admissibility condition for low-rank approximation. By this method, we obtain a so-called H-matrix approximation. Various matrix operations like addition, factorization or inversion may be generalized to H-matrices. We refer the interested reader to the comprehensive monograph [2] for a detailed discussion of the techniques mentioned above. We also mention that software libraries which implement the methods sketched above are available, e.g. HLib [3] and ahmed [1]. The latter was used in our numerical experiments. Data-sparse approximation is also used for preconditioning. From (7), we get the system of boundary element equations in the form

    [ −V_h, K_h ; K_h^T, D_h ] (v_Dh, u_Nh)^T = (−f_Dh, f_Nh)^T.
Table 1. Performance of the algorithm for T = 80 (CPU times in seconds)

    # triangles (N = O(h^-2))    init    solve    total    ratio
    704                             9      20       29       –
    2816                           62     370      432     16.78
    11264                         520    3214     3734      8.78
    45056                        4162   20473    24635      6.60
As a preconditioner for this block matrix, we choose

    C = [ I, 0 ; −K_h^T V_h^{−1}, I ] [ −V_h, K_h ; 0, D_h ],

where hierarchical Cholesky factorizations of V_h and D_h are used to apply this preconditioner in an approximate way.
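To illustrate the low-rank block approximation that underlies the H-matrix representation used in this section, the following Python sketch compresses a single admissible block by a truncated singular value decomposition, choosing the rank adaptively from a prescribed error threshold; this is far simpler than the ACA routine mentioned above and is an illustration only, not the routine used in the ahmed library.

```python
import numpy as np

def truncated_svd_block(A, eps):
    """Low-rank approximation A ~ U @ V.T of a matrix block.

    The rank r is the smallest one for which the first discarded singular
    value drops below eps times the largest singular value.
    """
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    r = int(np.searchsorted(-s, -eps * s[0]))  # first index with s[r] <= eps * s[0]
    r = max(r, 1)
    return U[:, :r] * s[:r], Vt[:r, :].T       # factors of the rank-r approximation

# Example: a smooth, asymptotically low-rank kernel block 1/|x - y|
x = np.linspace(0.0, 1.0, 200)
y = np.linspace(5.0, 6.0, 200)                 # well-separated clusters
A = 1.0 / np.abs(x[:, None] - y[None, :])
U, V = truncated_svd_block(A, 1e-8)
print(U.shape[1], np.linalg.norm(A - U @ V.T, 2) / np.linalg.norm(A, 2))
```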
5 Numerical Results
In the following, we extend a numerical example from [7] into three dimensions. We assume that we have a test basin with the dimensions Ω = (0, 10) × (0, 1) × (−1, 0). The free surface at rest is the quadrilateral ΓD = (0, 10) × (0, 1) × {0}. On the remaining walls ΓN , we prescribe the normal velocity gN (x, y, z, t) that is (1 + z)a sin(ωt) for all (x, y, z) ∈ {0} × (0, 1) × (−1, 0) and 0 otherwise. That is, the left wall of the basin is assumed to be equipped with a wave maker which exhibits periodic oscillations with maximum amplitude a = 0.02 and frequency ω = 1.8138. The other walls are assumed to be stationary. Note that the oscillations exhibit maximum amplitude at the top of the basin and vanish at the bottom. For time discretization, we use a fixed time step τ = 0.1 over the time interval [0, T = 80]. This results in 800 time steps, each of which requires four solutions of the mixed boundary value problem in the domain Ω. The surface Γ is discretized by triangles using the software package netgen. We use a series of uniformly refined boundary meshes where each refinement step quadruples the number of triangles. The computations are performed on a machine with four Opteron-852 processors and 32 GB of RAM. Table 1 summarizes the performance of our data-sparse BEM algorithm. The first column indicates the number of triangles in the boundary mesh. The second and third columns show the CPU time (in seconds) used for initialization and solution of the problem, while the fourth column shows total CPU time. Finally, the fifth column gives the ratio between total time for the current and the previous smaller problem. Note that there is a significant cost for the generation and factorization of the system matrices. This however has to be performed only once at startup. The scheme is thus better suited for long simulations where many time iterations are to be performed. Also, parallelization was used for this initialization phase, so
[Fig. 2. Wave profiles computed on an asymmetric mesh: (a) resulting wave profile at t = 20.0, t = 38.0, t = 67.0, and t = 120.0 (left to right); (b) asymmetric mesh employed in the computations.]
actual measured times were lower. No appreciable parallelization overhead could be measured, so the CPU times given above can be taken as wall clock times for serial execution on one CPU. As discussed at the beginning of Section 4, a FEM-based implementation of the Dirichlet-to-Neumann map would have O(h^{-3}) unknowns and thus, if optimally preconditioned, a time complexity of O(h^{-3}) = O(N^{3/2}). Since evaluating the Dirichlet-to-Neumann map is the main bottleneck in the numerical simulation, we could then expect a constant ratio 4^{3/2} = 8 in the last column of Table 1. The numbers we have obtained here thus suggest that our scheme may outperform a FEM-based approach for large problems. To investigate the stability of the proposed scheme over a long period of time, we performed the simulations up to T = 120 with N = 11264 and N = 45056 triangles, and did not face any stability problems like those reported in [8]. Figure 2(a) shows the computed wave profile with 45056 boundary triangles at times t = 20.0, 38.0, 67.0 and t = 120.0. At t = 20.0 the wave starts approaching the wall opposite to the wave maker, at t = 38.0 the wave gains full height, and at t = 67.0 the wave reflected from the wall is affecting the pattern in the basin. The wave is also exhibiting a very nice pattern at t = 120.0, without any stability problems arising from the numerical scheme. Since no experimental data is available to validate our results (for a 3-D linear problem), we can rely only on visual results. However, they are in excellent agreement with a behavior similar to that of the 2-D problem reported in [7]. It is also important to note here that for a non-uniform/asymmetric mesh (Figure 2(b) shows the structure of the mesh employed in our computations) we do not need any artificial stabilization in our numerical scheme. This property is highly desirable for simulations over a long period of time. A detailed analysis of the effect of mesh asymmetry in the numerical computations was carried out in [6], and the resulting mechanism was used to stabilize the numerical scheme in [6,7].
6 Conclusion
We have presented a numerical scheme for solving a linearized, time-dependent potential flow problem using the fourth-order explicit Runge-Kutta method and the boundary element method for the time and space discretizations, respectively. Data-sparse approximations of the resulting large-scale, dense matrices were used to make an efficient solution feasible. While this technique incurs a large one-time overhead for setting up the hierarchical matrices, it is worthwhile when many boundary value problems are to be solved on a fixed geometry, as the results in Section 5 indicate. For the same reason, however, a direct generalization to the case of a non-linearized, time-dependent domain seems to be more problematic at first glance, since the system matrices would have to be recomputed at every iteration step. This problem may be alleviated by using the same preconditioner in every iteration and taking advantage of the fact that the matrix generation step may be trivially parallelized. The numerical scheme does not require a separate velocity reconstruction, or different-order polynomials for the velocity field and the potential, to preserve the accuracy in the wave height. A rigorous analysis of the numerical scheme proposed in this paper is still in progress.
References
1. Bebendorf, M.: ahmed. Another software library on hierarchical matrices for elliptic differential equations, Universität Leipzig, Fakultät für Mathematik und Informatik
2. Bebendorf, M.: Hierarchical Matrices. Springer, Heidelberg (2008)
3. Börm, S., Grasedyck, L.: HLib. A program library for hierarchical and H²-matrices, Max Planck Institute for Mathematics in the Sciences, Leipzig
4. Cai, X., Langtangen, H.P., Nielsen, B.F., Tveito, A.: A finite element method for fully nonlinear water waves. J. Comput. Physics 143(2), 544–568 (1998)
5. Rjasanow, S., Steinbach, O.: The Fast Solution of Boundary Integral Equations. Mathematical and Analytical Techniques with Applications to Engineering. Springer-Verlag New York, Inc. (2007)
6. Robertson, I., Sherwin, S.: Free-surface flow simulation using hp/spectral elements. J. Comput. Phys. 155(1), 26–53 (1999)
7. Tomar, S.K., van der Vegt, J.J.W.: A Runge-Kutta discontinuous Galerkin method for linear free-surface gravity waves using high order velocity recovery. Comput. Methods Appl. Mech. Engrg. 196, 1984–1996 (2007)
8. Westhuis, J.-H.: The Numerical Simulation of Nonlinear Waves in a Hydrodynamic Model Test Basin. PhD thesis, Universiteit Twente (2001)
9. Whitham, G.B.: Linear and Nonlinear Waves. John Wiley, New York (1974)
Numerical Homogenization of Bone Microstructure

Nikola Kosturski and Svetozar Margenov

Institute for Parallel Processing, Bulgarian Academy of Sciences
Abstract. The presented study is motivated by the development of methods, algorithms, and software tools for μFE (micro finite element) simulation of human bones. The voxel representation of the bone microstructure is obtained from a high resolution computer tomography (CT) image. The considered numerical homogenization problem concerns isotropic linear elasticity models at micro and macro levels.

Keywords: μFEM, nonconforming FEs, homogenization, bone microstructure.

1 Introduction
The reference volume element (RVE) of the studied trabecular bone tissue has a strongly heterogeneous microstructure composed of solid and fluid phases (see Fig. 1). In a number of earlier articles dealing with μFE simulation of bone structures (see, e.g., [8,10]), the contribution of the fluid phase is neglected. This simplification is the starting point for our study and for the related comparative analysis. In this study the fluid phase located in the pores of the solid skeleton is considered as an almost incompressible linear elastic material. This means that the bulk modulus K_f is fixed and the related Poisson ratio ν_f is approaching the incompressibility limit of 0.5. The elasticity modulus E_s and the Poisson ratio ν_s of the solid phase as well as the bulk modulus of the fluid phase K_f are taken from [4]. Let Ω ⊂ R^3 be a bounded domain with boundary Γ = Γ_D ∪ Γ_N = ∂Ω and u = (u_1, u_2, u_3) be the displacements in Ω. The components of the small strain tensor are

    ε_ij = (1/2) ( ∂u_i/∂x_j + ∂u_j/∂x_i ),    1 ≤ i, j ≤ 3,

and the components of the Cauchy stress tensor are

    σ_ij = λ δ_ij Σ_{k=1}^{3} ε_kk + 2μ ε_ij,    1 ≤ i, j ≤ 3.
Here λ and μ are the Lamé coefficients, which can be expressed by the elasticity modulus E and the Poisson ratio ν ∈ (0, 0.5) as follows:

    λ = Eν / ((1 + ν)(1 − 2ν)),    μ = E / (2 + 2ν).
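As a small worked example of these formulas (a sketch, not part of the paper's software), the following Python function computes the Lamé coefficients from E and ν; the values in the example call are the solid-phase parameters used later in the paper.

```python
def lame_coefficients(E, nu):
    """Lame coefficients lambda and mu from elasticity modulus E and Poisson ratio nu."""
    lam = E * nu / ((1.0 + nu) * (1.0 - 2.0 * nu))
    mu = E / (2.0 + 2.0 * nu)
    return lam, mu

# Example with the solid-phase parameters E_s = 14.7 GPa, nu_s = 0.325:
print(lame_coefficients(14.7e9, 0.325))
```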
Fig. 1. Microstructure of a trabecular bone specimen
Now, we can introduce the Lamé system of linear elasticity (see, e.g., [2])

    Σ_{j=1}^{3} ∂σ_ij/∂x_j + f_i = 0,    i = 1, 2, 3,    (1)

equipped with the boundary conditions

    u_i(x) = g_i(x),                        x ∈ Γ_D ⊂ ∂Ω,
    Σ_{j=1}^{3} σ_ij(x) n_j(x) = h_i(x),    x ∈ Γ_N ⊂ ∂Ω,
where nj (x) are the components of the normal vector of the boundary for x ∈ ΓN . The remainder of the paper is organized as follows. The applied locking-free nonconforming FEM discretization is presented in the next section. Section 3 contains a description of the applied numerical homogenization scheme. Selected numerical results are given in Section 4. The first test problem illustrates the locking-free approximation when the Poisson ratio tends to the incompressibility limit. Then, a set of numerical homogenization tests for real-life bone microstructures are presented and analyzed. Some concluding remarks are given at the end.
2 Locking-Free Nonconforming FEM Discretization

Let

    (H_0^1(Ω))^3 = { u ∈ (H^1(Ω))^3 : u = 0 on Γ_D },
    (H_g^1(Ω))^3 = { u ∈ (H^1(Ω))^3 : u = g on Γ_D }.

The weak formulation of (1) can be written in the form (see, e.g., [3]): for a given f ∈ (L^2(Ω))^3 find u ∈ (H_g^1(Ω))^3 such that for each v ∈ (H_0^1(Ω))^3

    a(u, v) = − ∫_Ω (f, v) dx + ∫_{Γ_N} (h, v) dx.
In the case of pure displacement boundary conditions, the bilinear form a(·, ·) can be written as

    a(u, v) = ∫_Ω (C d(u), d(v)) dx = ∫_Ω (C* d(u), d(v)) dx = a*(u, v),    (2)

where

    d(u) = ( ∂u_1/∂x_1, ∂u_1/∂x_2, ∂u_1/∂x_3, ∂u_2/∂x_1, ∂u_2/∂x_2, ∂u_2/∂x_3, ∂u_3/∂x_1, ∂u_3/∂x_2, ∂u_3/∂x_3 )^T

and the matrices C and C* are

    C  = [ λ+2μ  0  0  0  λ     0  0  0  λ
           0     μ  0  μ  0     0  0  0  0
           0     0  μ  0  0     0  μ  0  0
           0     μ  0  μ  0     0  0  0  0
           λ     0  0  0  λ+2μ  0  0  0  λ
           0     0  0  0  0     μ  0  μ  0
           0     0  μ  0  0     0  μ  0  0
           0     0  0  0  0     μ  0  μ  0
           λ     0  0  0  λ     0  0  0  λ+2μ ],

    C* = [ λ+2μ  0  0  0  λ+μ   0  0  0  λ+μ
           0     μ  0  0  0     0  0  0  0
           0     0  μ  0  0     0  0  0  0
           0     0  0  μ  0     0  0  0  0
           λ+μ   0  0  0  λ+2μ  0  0  0  λ+μ
           0     0  0  0  0     μ  0  0  0
           0     0  0  0  0     0  μ  0  0
           0     0  0  0  0     0  0  μ  0
           λ+μ   0  0  0  λ+μ   0  0  0  λ+2μ ].

Let us note that the modified bilinear form a*(·, ·) is equivalent to a(·, ·) if

    ∫_Ω (∂u_i/∂x_j)(∂u_j/∂x_i) dx = ∫_Ω (∂u_i/∂x_i)(∂u_j/∂x_j) dx    (3)

is fulfilled. It is also important that the rotations are excluded from the kernel of the modified Neumann boundary conditions operator (corresponding to a*(·, ·)). As a result, the nonconforming Crouzeix-Raviart (C.–R.) FEs are applicable to the modified variational problem, providing a locking-free discretization. Locking-free error estimates of the 2D pure displacement problem discretized by C.–R. FEs are presented, e.g., in [3]. A similar analysis is applicable to the 3D case as well. It is easy to check that the equality (3) also holds true for the RVE with Dirichlet boundary conditions with respect to the normal displacements only. As we will see in the next section, this is exactly the case in the studied numerical homogenization scheme.
Fig. 2. Splitting a voxel into six tetrahedra
In order to apply the C.–R. FE discretization of the RVE, we split each voxel (macroelement) into six tetrahedra, see Fig. 2. For an RVE divided into n × n × n voxels, the number of unknowns in the resulting linear system is 18n^2(2n + 1). A block factorization (static condensation) is applied first on the macroelement level, eliminating the unknowns associated with the inner nodes. This step reduces the total number of unknowns to 18n^2(n + 1).
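As a generic illustration of this static condensation step (a sketch under assumptions, not the authors' code), the following Python fragment eliminates the inner unknowns of one macroelement by forming the local Schur complement; the index sets of inner and remaining unknowns are assumed to be given.

```python
import numpy as np

def condense_macroelement(K, f, inner, boundary):
    """Static condensation of the inner unknowns of one macroelement.

    K, f            : local stiffness matrix and load vector of the macroelement
    inner, boundary : index arrays of inner and remaining (boundary) unknowns
    Returns the condensed matrix S = K_bb - K_bi K_ii^{-1} K_ib and the
    corresponding right-hand side acting on the boundary unknowns only.
    """
    K_ii = K[np.ix_(inner, inner)]
    K_ib = K[np.ix_(inner, boundary)]
    K_bi = K[np.ix_(boundary, inner)]
    K_bb = K[np.ix_(boundary, boundary)]
    S = K_bb - K_bi @ np.linalg.solve(K_ii, K_ib)
    g = f[boundary] - K_bi @ np.linalg.solve(K_ii, f[inner])
    return S, g
```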
3 Numerical Homogenization
We study the implementation of a standard numerical homogenization scheme of Dirichlet boundary conditions type [6]. The RVE boundary value problem has zero normal displacements on five of the faces of the cube and a (small) nonzero constant normal displacement on the sixth face (see Fig. 3). The right-hand side is also supposed to be zero. Nonconforming C.–R. finite elements (based on the modified bilinear form a*(·, ·) from (2)) are used to get a locking-free discretization. By symmetry arguments, it simply follows that the homogenized stress and strain tensors have zero shear components, and therefore the following relation between the homogenized normal stress and strain components holds:

    (σ̂_x, σ̂_y, σ̂_z)^T = E(1 − ν)/((1 + ν)(1 − 2ν)) [ 1, ν/(1−ν), ν/(1−ν) ; ν/(1−ν), 1, ν/(1−ν) ; ν/(1−ν), ν/(1−ν), 1 ] (ε̂_x, ε̂_y, ε̂_z)^T.

Let us consider now the case where the nonzero constant normal displacement (in the z direction) is applied on the top face of the RVE cube. Then ε̂_x = ε̂_y = 0 and therefore

    σ̂_x = σ̂_y = Eν/((1 + ν)(1 − 2ν)) ε̂_z,    and    σ̂_z = E(1 − ν)/((1 + ν)(1 − 2ν)) ε̂_z.
Fig. 3. Boundary conditions of the RVE problem
From these relations we can directly calculate the homogenized elasticity coefficients as follows:

    ν = 1/(1 + p),    E = ((1 + ν)(1 − 2ν)/(1 − ν)) r,

where

    p = σ̂_z/σ̂_x = σ̂_z/σ̂_y,    r = σ̂_z/ε̂_z.

In theory σ̂_x = σ̂_y. In the case of numerical homogenization, the average values of the stresses σ_x and σ_y over all tetrahedral finite elements are used. In the case of numerical homogenization of strongly heterogeneous materials (like bone tissues), the average stresses σ̂_x and σ̂_y are not equal. This is why we set

    σ̂_x = σ̂_y = (1/2) ( (1/N_el) Σ_{i=1}^{N_el} σ_x^i + (1/N_el) Σ_{i=1}^{N_el} σ_y^i )
when computing the homogenized elasticity modulus and Poisson ratio. Similar relations hold when the nonzero displacements are alternatively applied in the x and y directions. To determine the elasticity coefficients more precisely, we apply the above numerical scheme in all three cases, and finally average the computed elasticity modulus and Poisson ratio.
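A minimal sketch of this recovery step, for the loading with nonzero normal displacement in the z direction, is given below; the per-element stress arrays and the homogenized σ̂z and ε̂z values are assumed inputs coming from the RVE solution.

import numpy as np

def homogenized_coefficients(sigma_x_el, sigma_y_el, sigma_z_hat, eps_z_hat):
    """sigma_x_el, sigma_y_el: per-element stresses; sigma_z_hat, eps_z_hat: scalars."""
    # average sigma_x and sigma_y over all tetrahedral elements, then symmetrize
    sigma_xy_hat = 0.5 * (np.mean(sigma_x_el) + np.mean(sigma_y_el))
    p = sigma_z_hat / sigma_xy_hat      # p = sigma_z / sigma_x (= sigma_z / sigma_y)
    r = sigma_z_hat / eps_z_hat         # r = sigma_z / eps_z
    nu = 1.0 / (1.0 + p)
    E = (1.0 + nu) * (1.0 - 2.0 * nu) / (1.0 - nu) * r
    return E, nu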
4 Numerical Experiments
A set of numerical tests illustrating the accuracy of the locking-free FEM approximation in the almost incompressible case is presented first. The second part of the section contains results of the numerical homogenization of three real-life
Table 1. Relative error on a fixed 32 × 32 × 32 mesh for ν → 0.5

    ν         ‖r‖∞        ‖f‖∞          ‖r‖∞/‖f‖∞
    0.4       0.033733    214407        1.57331 × 10⁻⁷
    0.49      0.052206    1381450       3.77904 × 10⁻⁸
    0.499     0.551943    13125600      4.20509 × 10⁻⁸
    0.4999    5.551980    1.31 × 10⁸    4.24652 × 10⁻⁸
    0.49999   55.552900   1.31 × 10⁹    4.25009 × 10⁻⁸
bone microstructure RVEs. The (additional) response of the fluid phase within the studied linear elasticity model is analyzed. The studied methods and algorithms address the case of μFEM numerical homogenization, which in general leads to large-scale linear systems. The developed solver is based on the Preconditioned Conjugate Gradient (PCG) [1] method, where BoomerAMG (a parallel algebraic multigrid implementation from the Hypre package developed at LLNL [5]) is used as a preconditioner (see also [7] for an alternative locking-free AMLI solver in 2D). In the presented numerical tests, the relative stopping criterion for the PCG method is rₖᵀC⁻¹rₖ ≤ 10⁻¹² r₀ᵀC⁻¹r₀. The numerical tests are run on an IBM Blue Gene/P supercomputer.
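As an illustration of the solver setup, a minimal preconditioned CG loop with the quoted relative stopping criterion is sketched below; apply_A and apply_prec stand for the action of the stiffness matrix and of the preconditioner C⁻¹ (BoomerAMG in the paper) and are assumed to be provided.

import numpy as np

def pcg(apply_A, apply_prec, b, x0=None, rel_tol=1e-12, max_iter=1000):
    x = np.zeros_like(b) if x0 is None else x0.copy()
    r = b - apply_A(x)
    z = apply_prec(r)
    p = z.copy()
    rz = r @ z
    rz0 = rz                               # r_0^T C^{-1} r_0
    for _ in range(max_iter):
        if rz <= rel_tol * rz0:            # r_k^T C^{-1} r_k <= 1e-12 * r_0^T C^{-1} r_0
            break
        Ap = apply_A(p)
        alpha = rz / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        z = apply_prec(r)
        rz_new = r @ z
        p = z + (rz_new / rz) * p
        rz = rz_new
    return x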
4.1 Model Problem
A homogeneous material with fixed elasticity modulus E = 0.06 GPa is considered. The computational domain Ω is the unit cube [0, 1]³. A pure displacement problem is considered with Dirichlet boundary conditions on Γ_D = ∂Ω corresponding to the given exact solution

    u₁(x, y, z) = x³ + sin(y + z),  u₂(x, y, z) = y³ + z² − sin(x − z),  u₃(x, y, z) = x² + z³ + sin(x − y).

The right-hand side f is obtained by substituting the exact solution and the corresponding material coefficients in Lamé's system (1). The relative errors presented in Table 1 well illustrate the robustness (locking-free approximation) of the C.–R. nonconforming FEM approximation of almost incompressible elasticity problems.
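The right-hand side for such a manufactured solution can be generated symbolically. The sketch below assumes the Lamé (Navier) operator in the form μΔu + (λ + μ)∇(∇·u) + f = 0; since equation (1) is not reproduced in this excerpt, this sign convention is an assumption, not the authors' exact formulation.

import sympy as sp

x, y, z = sp.symbols('x y z')
lam, mu = sp.symbols('lam mu', positive=True)

u = sp.Matrix([x**3 + sp.sin(y + z),
               y**3 + z**2 - sp.sin(x - z),
               x**2 + z**3 + sp.sin(x - y)])

X = (x, y, z)
div_u = sum(sp.diff(u[i], X[i]) for i in range(3))
laplace = lambda w: sum(sp.diff(w, v, 2) for v in X)

# f = -(mu*Laplace(u) + (lam + mu)*grad(div u)), assumed Lame operator
f = sp.Matrix([-(mu * laplace(u[i]) + (lam + mu) * sp.diff(div_u, X[i]))
               for i in range(3)])
print(sp.simplify(f))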
4.2 Homogenization of Bone Microstructures
Three test cases with different RVEs are considered. The related voxel representations (16 × 16 × 16, 32 × 32 × 32 and 64 × 64 × 64) of the bone microstructure are extracted from a high-resolution computer tomography (CT) image [9]. The mechanical properties of bone used here are from [4]. The elasticity modulus of the solid phase is Es = 14.7 GPa and the related Poisson ratio is νs = 0.325. The fluid phase is considered as almost incompressible, where the bulk modulus Kf = 2.3 GPa is
Table 2. Solid skeleton: Es = 14.7 GPa, νs = 0.325

                   16 × 16 × 16   32 × 32 × 32   64 × 64 × 64
    Solid phase    47 %           26 %           19 %
    Ehom           3.21 GPa       1.10 GPa       0.58 GPa
    νhom           0.118          0.141          0.158
Table 3. Solid skeleton with incompressible fluid filling: RVE of 16 × 16 × 16 voxels

    Solid phase          Fluid phase                Homogenized
    Es        νs         Ef             νf          Ehom       νhom
    14.7 GPa  0.325      1.38 × 10⁸ Pa  0.49        5.05 GPa   0.299
    14.7 GPa  0.325      1.38 × 10⁷ Pa  0.499       4.82 GPa   0.307
    14.7 GPa  0.325      1.38 × 10⁶ Pa  0.4999      4.79 GPa   0.309
    14.7 GPa  0.325      1.38 × 10⁵ Pa  0.49999     4.79 GPa   0.309
    14.7 GPa  0.325      1.38 × 10⁴ Pa  0.499999    4.79 GPa   0.309
Table 4. Solid skeleton with incompressible fluid filling: RVE of 32 × 32 × 32 voxels

    Solid phase          Fluid phase                Homogenized
    Es        νs         Ef             νf          Ehom       νhom
    14.7 GPa  0.325      1.38 × 10⁸ Pa  0.49        2.24 GPa   0.376
    14.7 GPa  0.325      1.38 × 10⁷ Pa  0.499       1.96 GPa   0.391
    14.7 GPa  0.325      1.38 × 10⁶ Pa  0.4999      1.88 GPa   0.396
    14.7 GPa  0.325      1.38 × 10⁵ Pa  0.49999     1.86 GPa   0.396
    14.7 GPa  0.325      1.38 × 10⁴ Pa  0.499999    1.86 GPa   0.396
Table 5. Solid skeleton with incompressible fluid filling: RVE of 64 × 64 × 64 voxels

    Solid phase          Fluid phase                Homogenized
    Es        νs         Ef             νf          Ehom       νhom
    14.7 GPa  0.325      1.38 × 10⁸ Pa  0.49        1.54 GPa   0.408
    14.7 GPa  0.325      1.38 × 10⁷ Pa  0.499       1.19 GPa   0.428
    14.7 GPa  0.325      1.38 × 10⁶ Pa  0.4999      1.07 GPa   0.435
    14.7 GPa  0.325      1.38 × 10⁵ Pa  0.49999     1.05 GPa   0.436
    14.7 GPa  0.325      1.38 × 10⁴ Pa  0.499999    1.05 GPa   0.436
known. In the presented tests, the Poisson ratio νf is varied between 0.49 and 0.5 − 10⁻⁶, with, respectively, Ef = 3Kf(1 − 2νf). Tables 3, 4, and 5 contain the homogenized material coefficients of three bone specimens with different mesh sizes. The case when the fluid phase is neglected is presented in Table 2, including the percentage of the solid phase of the considered three RVEs. This is the starting point of our comparative analysis. The last three tables clearly demonstrate the significant contribution of the fluid phase to the homogenized parameters (Ehom, νhom). The numerical stability with respect to νf → 0.5 is also well expressed. High accuracy of the homogenization procedure is achieved for νf ∈ {0.4999, 0.499999}.
The following steps in the numerical homogenization of bone microstructures are planned: a) an anisotropic macro model; b) a poroelasticity micro model. The goal is to further improve (quantitatively and qualitatively) our understanding of the underlying phenomena.
Acknowledgments This work is partly supported by the Bulgarian NSF Grants DO02-115/08 and DO02-147/08. We kindly acknowledge also the support of the Bulgarian Supercomputing Center for the access to the supercomputer IBM Blue Gene/P.
References
1. Axelsson, O.: Iterative solution methods. Cambridge University Press, Cambridge (1994)
2. Axelsson, O., Gustafsson, I.: Iterative methods for the Navier equations of elasticity. Comp. Meth. Appl. Mech. Engin. 15, 241–258 (1978)
3. Brenner, S., Sung, L.: Nonconforming finite element methods for the equations of linear elasticity. Math. Comp. 57, 529–550 (1991)
4. Cowin, S.: Bone poroelasticity. J. Biomechanics 32, 217–238 (1999)
5. Lawrence Livermore National Laboratory, Scalable Linear Solvers Project, https://computation.llnl.gov/casc/linear_solvers/sls_hypre.html
6. Kohut, R.: The determination of Young modulus E and Poisson number ν of geocomposite material using mathematical modelling, private communication (2008)
7. Kolev, T., Margenov, S.: Two-level preconditioning of pure displacement nonconforming FEM systems. Numer. Lin. Algeb. Appl. 6, 533–555 (1999)
8. Margenov, S., Vutov, Y.: Preconditioning of Voxel FEM Elliptic Systems. TASK Quarterly 11(1-2), 117–128 (2007)
9. Vertebral Body Data Set ESA29-99-L3, http://bone3d.zib.de/data/2005/ESA29-99-L3
10. Wirth, A., Flaig, C., Mueller, T., Müller, R., Arbenz, P., van Lenthe, G.H.: Fast Smooth-surface micro-Finite Element Analysis of large-scale bone models. J. Biomechanics 41 (2008), doi:10.1016/S0021-9290(08)70100-3
Multiscale Modeling and Simulation of Fluid Flows in Highly Deformable Porous Media
P. Popov¹, Y. Efendiev², and Y. Gorb²
¹ Institute for Parallel Processing, Bulgarian Academy of Sciences, Sofia, Bulgaria
[email protected]
² Dept. of Mathematics, Texas A&M University, College Station, TX 77843
{efendiev,gorb}@math.tamu.edu
Abstract. In this work a new class of methods for upscaling Fluid-Structure Interaction (FSI) problems from the pore-level to a macroscale is proposed. A fully coupled FSI problem for Stokes fluid and an elastic solid is considered at the pore-level. The solid, due to coupling with the fluid, material nonlinearities, and macroscopic boundary conditions, can deform enough so that the pore-space is altered significantly. As a result, macroscopic properties such as the permeability of the porous media become nonlinearly dependent on the fine-scale displacements. Therefore, classical upscaled models, such as Biot’s equations, can no longer be applied. We propose a series of numerical upscaling models in the context of the Multiscale Finite Element Method (MsFEM) which couple this fine-scale FSI problem to a nonlinear elliptic equation for the averaged pressure and displacements at the coarse scale. The proposed MsFEM schemes correctly transfer the appropriate physics from the fine to the coarse scale. Several numerical examples which demonstrate the methods are also presented.
1 Introduction
In this work we consider a Multiscale Finite Element framework for modeling flows in highly deformable porous media. The physical processes under consideration span two length scales. On the macroscopic level, one has a fluid diffusing through a nonlinear porous solid. At the microscale the solid has a complex pore geometry and interacts with a Stokes flow. We assume good scale separation, with the usual small parameter ε being the ratio of the fine to the coarse length scales. We denote the fine-scale domain by Ω_ε⁰, which contains two subdomains — a fluid part F_ε⁰ and a solid part S_ε⁰. The superscript 0 indicates the reference, or undeformed, configuration of the body. The interface between the solid and fluid domains is denoted by Γ_ε⁰ = ∂F_ε⁰ ∩ ∂S_ε⁰. The physics is described by the strongly coupled, stationary fluid-structure interaction problem [4]: Find the interface Γ_ε, velocity v_ε, pressure p_ε and displacements u_ε such that:

    Γ_ε = { X + u_ε(X) | ∀ X ∈ Γ_ε⁰ },    (1)
the Stokes and elasticity equations are satisfied:

    −∇p_ε + μΔv_ε + f = 0,   ∇ · v_ε = 0   in F_ε,    (2)
    −∇ · S(E_ε) = f   in S_ε⁰,    (3)

with the interface condition

    det(I + ∇u_ε)(−p_ε I + 2μD_ε)(I + ∇u_ε)⁻ᵀ n₀ = S(E_ε) n₀   on Γ_ε⁰.    (4)
Here X is the material and x the spatial coordinate, μ is the fluid viscosity, and f is the body force. Further, S is the first Piola-Kirchhoff stress tensor in the solid. It depends, linearly or nonlinearly, on the strain E(u) = ½(∇u(X) + ∇u(X)ᵀ). Recall that, given a Cauchy stress σ, one has S = det(I + ∇u_ε) σ (I + ∇u_ε)⁻ᵀ. D(v) = ½(∇v(x) + ∇v(x)ᵀ) is the rate of stretching tensor, and n₀ is the reference normal to the interface. Note that the interface condition (4) introduces a nonlinearity into the problem, regardless of the constitutive form for S.
2 Multilevel Algorithm
The combination of poroelasticity with large pore-level deformations is a complex problem. The main difficulty in upscaling it arises when F_ε deforms substantially at the fine scale, breaking the assumptions underlying classic homogenized models such as Darcy and Biot [1,6,5,2]. This necessitates a two-scale numerical approach, such as MsFEM. For developing MsFEM, one needs to (1) identify relevant macroscopic parameters, (2) develop downscaling operators and (3) derive a macroscopic variational formulation for poroelastic media. The proposed multiscale method follows the general MsFEM framework. In the present study, the coarse pressure p₀ and displacement u₀ are taken as the macroscopic variables. For these macroscopic variables, the downscaling operator E^MsFEM is introduced, which maps the macroscopic quantities to microscopic ones [3]. In particular we define the downscaling map

    E^MsFEM : (p₀, u₀) → (p̃_ε, ṽ_ε, ũ_ε),    (5)
where the subscript ε refers to the fine-scale quantities. Note that p̃_ε = p̃_ε(p₀, u₀), ṽ_ε = ṽ_ε(p₀, u₀), ũ_ε = ũ_ε(p₀, u₀), where ˜· is used to differentiate the downscaled quantities from the true solution. For simplicity of the presentation, we avoid using finite element discretizations of the variables. In our simulations, (p₀, u₀) are discretized on the coarse grid (mesh size H) and (p̃_ε, ṽ_ε, ũ_ε) are discretized on the fine grid (mesh size h). The choice of macroscopic variables is motivated by a formal asymptotic expansion of the solid and fluid subproblems. Following a standard scaling argument [5], the Stokes system (2) is expanded in F_ε:

    v_ε(x) = ε² v₀(x, y) + ε³ v₁(x, y) + ...,   p_ε(x) = p₀(x) + ε p₁(x, y) + ...,   y ∈ Y_F.
One can now formally introduce p̃_ε and ṽ_ε as the first-order correctors for the pressure and velocity, respectively:

    p̃_ε(x) = p₀(x) + ε p₁(x, y),   ṽ_ε = ε² v₀.    (6)

For the solid problem (3) one considers ũ_ε(x) as an approximation for the fine-scale displacement u_ε(x), defined as follows:

    ũ_ε(x) = u₀(x) + ε u₁(x, y),   y ∈ Y_S⁰,    (7)
where the zero-order term u₀ contains the smooth part of the displacement, while the Y-periodic part u₁(x, y) approximates the oscillatory part of the solution. The approximation (7) also defines a fine-scale fluid domain F̃_ε:

    Γ̃_ε = { X + ũ_ε(X) | ∀ X ∈ Γ_ε⁰ },    (8)

which is the domain where (p̃_ε, ṽ_ε, ũ_ε) are defined. Together, Γ̃_ε, p̃_ε, ṽ_ε, ũ_ε define an oscillatory approximation to the solution of the FSI problem (1)-(4). To obtain an FSI cell problem in a representative element of volume Y = Y_F ∪ Y_S, with Y_F being the fluid part and Y_S the solid, one uses the expansion for the Stokes equation to obtain the classical cell problem [5,6]:

    −∇_y p₁ + μ Δ_y v₀ + f − ∇_x p₀ = 0,   ∇_y · v₀ = 0   in Y_F̃,    (9)
which in our case is defined in an a priori unknown unit-sized fluid domain Y_F̃. As usual, the subscript y indicates that the respective differential operators are taken with respect to the fast variable y in a unit-sized domain [5]. The above cell problem is subject to periodic boundary conditions for the velocity v₀ and pressure p₁ on ∂Y and a no-slip condition for v₀ on the interface Γ̃_ε. Similarly, nonlinear homogenization of the solid part (3) results in the cell problem [3]:

    −∇_y · S(E_x(u₀) + E_y(u₁)) = 0   in Y_S⁰,    (10)
which, in general, is nonlinear. The main difficulty comes from the interface condition (4). If one substitutes in (4) the approximations p̃_ε, ṽ_ε, ũ_ε, it reads:

    det(I + ∇ũ_ε) [−p̃_ε I + 2μ D(ṽ_ε)] (I + ∇ũ_ε)⁻ᵀ n₀ = S(E(ũ_ε)) n₀   on εY_Γ⁰.    (11)
As before, n₀ is the unit outer normal to the surface Y_Γ⁰. Note that this condition is nonlinear and cannot be separated into different powers of ε. If one wishes to write it in a unit-sized domain, the fluid and solid stresses have to be rescaled in the same way. Thus, one introduces U_ε(y) := ε⁻¹ u_ε(x), which preserves the strain, that is:

    E_y(U_ε(y)) = E_x(u_ε(x)),    (12)
Moreover, by defining S_y(·) := ε⁻¹ S(·), interface condition (11) can now be rescaled, keeping the correct stress scaling on both sides:

    det(I + ∇_y Ũ_ε) [−p₀ I + ε(−p₁ I + 2μ D_y(v₀))] (I + ∇_y Ũ_ε)⁻ᵀ n₀ = S_y(E_y(Ũ_ε)) n₀   on Y_Γ⁰.    (13)

Observe that the left side of the last equation is the fluid traction at the rescaled interface Y_Γ⁰, which we denote by s̃_ε. The FSI cell problem consists of (8)-(11) and is nonlinear, even if the constitutive response of the material is linear. Note also that (7) does not put restrictions on the presence of finer scales in u₁, for example ε². The reason for this is again the interface condition, which cannot be split into powers of ε. Thus it is not possible to determine which orders in the displacement will be influenced by the fluid stress σ̃_ε^f = −p̃_ε I + 2μ D(ṽ_ε), which approximates σ_ε^f up to first order in ε, cf. the left side of (11). The macroscopic equations which correspond to (9) and (10) are (cf. [7]):

    ∇ · (K(p₀, u₀) ∇p₀) = f,    (14)
    ∇ · S₀(E(u₀), p₀) = f.    (15)
The first is the macroscopic balance of mass, with K being the standard Darcy permeability tensor for the cell geometry Y_F̃; for details see [6]. Since p₀ and u₀ drive the cell FSI problem, they determine Y_F̃ and therefore K = K(p₀, u₀). Similarly, the macroscopic homogenized stress S₀ is given by (cf. [3]):

    S₀ := ⟨ S(E_x(u₀) + E_y(u₁)) ⟩,    (16)
where ⟨·⟩ is the Y-averaging operator. In the general case, the local problem (10) is nonlinear, so u₁ depends on u₀ and therefore S₀ is an implicit function of E_x(u₀). Since closed-form expressions for K and S₀ cannot be obtained for general geometries, one can instead linearize (14) and (15), for example as

    ∇ · (Kⁿ(x) ∇p₀ⁿ⁺¹) = f,   ∇ · (Lⁿ(x) ∇u₀ⁿ⁺¹) = f.    (17)
In the case of linearized elastic solids, L = L(p₀, u₀) is simply the homogenized elasticity moduli, which, similarly to K, depend on the macroscopic variables. In the case of nonlinear elasticity, one can use any appropriate linearization of S₀ around (p₀, u₀). This suggests the following two-level MsFEM iteration:

Algorithm 1. Fully coupled FSI model: Initialize p₀ and u₀. For n = 1, 2, ... until convergence, do
1. Given p₀⁽ⁿ⁾ and u₀⁽ⁿ⁾, solve for v₀⁽ⁿ⁾, p₁⁽ⁿ⁾, ũ_ε⁽ⁿ⁾ which satisfy the cell problem (8), (9), (10) and (13), and define Y_Γ̃⁽ⁿ⁾.
2. Based on Y_Γ̃⁽ⁿ⁾, find the permeability K⁽ⁿ⁾ and elasticity tensor L⁽ⁿ⁾.
3. Based on the macroscopic linearization (17), find a new upscaled pressure p₀⁽ⁿ⁺¹⁾ and upscaled displacement u₀⁽ⁿ⁺¹⁾.
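Schematically, Algorithm 1 can be organized as the following loop; all helper functions (solve_cell_fsi, upscale_coefficients, solve_macroscopic, converged) are hypothetical placeholders for the steps described above, not an actual API.

def two_level_msfem(p0, u0, tol=1e-8, max_iter=50):
    for n in range(max_iter):
        # Step 1: solve the cell FSI problem (8)-(10), (13) at the macroscopic points
        cell = solve_cell_fsi(p0, u0)          # hypothetical: returns v0, p1, u_tilde, interface
        # Step 2: upscale permeability K and elasticity tensor L from the cell geometry
        K, L = upscale_coefficients(cell)      # hypothetical
        # Step 3: solve the linearized macroscopic problems (17)
        p0_new, u0_new = solve_macroscopic(K, L)   # hypothetical
        if converged(p0, u0, p0_new, u0_new, tol): # hypothetical convergence check
            return p0_new, u0_new
        p0, u0 = p0_new, u0_new
    return p0, u0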
The key to this algorithm is the ability to solve the fine-scale FSI problem numerically (step 1). The numerical algorithm for that has been presented elsewhere [4]. It is based on successive solutions of fluid and solid subproblems: one starts with a guess for the fluid domain, solves the Stokes equation there, and computes the fluid stress on the interface. This stress is then used to solve an elasticity problem, whose solution is then used to deform the fluid domain. The process is repeated until convergence. Given this scheme, one can further consider a variant of the above two-level algorithm where, in step 1, the FSI cell problem is not solved exactly, but only a few iterations are done, the extreme case being a single one. This can be summarized in the following algorithm:

Algorithm 2. Two-scale linearized FSI model: Initialize p₀ and u₀. For n = 1, 2, ... until convergence, do:
1. Given the downscaled displacement ũ_ε⁽ⁿ⁾, which also defines the cell geometry Y_F̃⁽ⁿ⁾, compute K⁽ⁿ⁾ and solve the first equation in (17) to obtain p̃₀⁽ⁿ⁺¹⁾.
2. Based on p̃₀⁽ⁿ⁺¹⁾, solve (9) for p̃_ε⁽ⁿ⁺¹⁾ and ṽ_ε⁽ⁿ⁺¹⁾, using Y_F̃⁽ⁿ⁾ as the fluid domain.
3. Compute the fluid traction s̃_ε⁽ⁿ⁺¹⁾ (cf. the left-hand side of (13)) based on Y_F̃⁽ⁿ⁾, p̃_ε⁽ⁿ⁺¹⁾ and ṽ_ε⁽ⁿ⁺¹⁾.
4. Based on ũ₀⁽ⁿ⁾ and s̃_ε⁽ⁿ⁺¹⁾, compute new linearized moduli L⁽ⁿ⁾. Then solve the second equation in (17) to obtain ũ₀⁽ⁿ⁺¹⁾.
5. Using s̃_ε⁽ⁿ⁺¹⁾ and u₀⁽ⁿ⁺¹⁾, solve (10), (13) for the downscaled displacement ũ_ε⁽ⁿ⁺¹⁾, which defines a new interface Y_Γ̃⁽ⁿ⁺¹⁾ and fluid domain Y_F̃⁽ⁿ⁺¹⁾ at the cell level, and therefore a downscaled interface Γ̃_ε⁽ⁿ⁺¹⁾ and fluid domain F̃_ε⁽ⁿ⁺¹⁾.

In practice, either of these proposed algorithms is realized as follows. First, a macroscopic discretization of (17) is constructed, for example by the Finite Element Method (FEM). The macroscopic problems are constructed at a number of macroscopic locations which provide a good interpolation of K⁽ⁿ⁾ and L⁽ⁿ⁾, typically the integration points associated with the macroscopic discretization. The computation of the cell problems is done at each iteration of Algorithm 1 or 2. Note that these problems require only the transfer of macroscopic information, e.g. p₀ and u₀, so the computation is very easy to parallelize. An important simplification of Algorithms 1 and 2 can be obtained if the solid is not connected, e.g. one has flow past obstacles. This allows one to eliminate the macroscopic momentum balance (15), and the corresponding equation from the linearization (17). While this situation is not very common in practice, it provides a rich model problem, both in terms of numerical experiments and rigorous analysis. The analysis of both Algorithms 1 and 2, that is, convergence of the iterative process and error bounds in terms of ε and the mesh parameters, is a complex task. A partial analysis for the simplified case of flows past deformable obstacles will be presented in an upcoming publication.
Fig. 1. Unit cell (a) and macroscopic domain (b) used in computations (the unit cell consists of an elastic solid with a rigid support at its center, surrounded by fluid)
3 Numerical Examples
Several numerical examples were considered in order to test the proposed algorithms. The basic fine-scale domain is a 2D periodic arrangement of elastic obstacles (Figure 1). The unit cell (Figure 1(a)) consists of a circular linear elastic material, surrounded by fluid. The elastic material under consideration is isotropic, with Young's modulus E = 1.44 and Poisson's ratio ν = 0.1. It is supported rigidly in the center. The unit cell is arranged periodically to form the macroscopic domain. A series of macroscopic domains with ε⁻¹ = 4, 8, 16, 32 were considered. Shown in Figure 1(b) is the fine-scale domain with ε⁻¹ = 16. The convergence of the nonlinear iterative processes involved in our two-level algorithms is demonstrated first. This is done by considering a set of boundary value problems (BVPs) in which a uniform pressure Pl is applied at the left side of the macroscopic domain. The pressure at the right side is 0 and no-flow boundary conditions are applied at the top and bottom sides of the domain. Both MsFEM algorithms took 6 iterations to converge for Pl = 0.1 and 8 for Pl = 0.2, uniformly with respect to ε and the macroscopic mesh size H. Also, they proved insensitive to the number of iterations performed on the cell FSI problem (step 1 of Algorithm 1), including the extreme case of a single one. Secondly, thanks to the simple geometry and boundary conditions, it was possible to obtain fine-scale solutions for the same set of BVPs via direct numerical simulation (DNS). A typical fine-scale DNS solution and a first-order corrected MsFEM solution are shown in Figure 2 for the case ε = 1/8 and Pl = 0.1. The results demonstrate convergence with respect to ε of the fine-scale approximations obtained via our two-scale algorithm to the DNS results (Table 1). A number of more complicated two-dimensional flows were also computed. Shown in Figure 3 is an example of corner point flow, driven by a pressure difference of 0.2. Again, as in the previous examples, both Algorithms 1 and 2 converged to the same solution. Unlike the previous example, however, quadratic finite elements were used for the macroscopic pressure. The permeability field was evaluated at the nodal points for the macroscopic pressure and was also interpolated by piecewise quadratic polynomials in each element. The number
(a) Fine-scale solution u_ε  (b) First-order corrector ũ_ε  (c) Error, u_ε − ũ_ε
Fig. 2. Displacements in a typical MsFEM model problem. The macroscopic domain has 8 × 8 unit cells, and due to periodicity in the y direction, only one horizontal row of unit cells is shown. The exact fine-scale displacements (a) can be compared with the first-order corrector (b). The difference between the two is shown in (c).
(a) BCs and discretization  (b) Permeability, K11  (c) Coarse pressure, p0
Fig. 3. Corner flow driven by a pressure difference
(a) BCs and discretization  (b) Permeability, K11  (c) Coarse pressure, p0
Fig. 4. Flow driven by a pressure difference between two parallel plates
Table 1. Error in the fine-scale displacements when compared to DNS results

    Pl = 0.1
    ε      Iterations   L∞ Error       L∞ Rel. Error   L2 Error       L2 Rel. Error
    1/4    6            1.23 × 10⁻³    0.18            2.48 × 10⁻⁴    0.23
    1/8    6            3.18 × 10⁻⁴    0.10            4.39 × 10⁻⁵    0.13
    1/16   6            8.07 × 10⁻⁵    0.053           7.75 × 10⁻⁶    0.069
    1/32   6            2.03 × 10⁻⁵    0.027           1.37 × 10⁻⁶    0.0351

    Pl = 0.2
    ε      Iterations   L∞ Error       L∞ Rel. Error   L2 Error       L2 Rel. Error
    1/4    8            2.96 × 10⁻³    0.22            4.93 × 10⁻⁴    0.22
    1/8    8            7.94 × 10⁻⁴    0.126           8.78 × 10⁻⁵    0.127
    1/16   8            2.06 × 10⁻⁴    0.068           1.56 × 10⁻⁵    0.067
    1/32   8            5.25 × 10⁻⁵    0.035           2.75 × 10⁻⁶    0.034
of macroscopic iterations for both algorithms was 6. The results indicate a permeability field (Figure 3(b)) that is strongly correlated with the macroscopic pressure. It is highest where the pressure is highest (lower-left corner) and lowest where the pressure is lowest (upper-left corner). In another example we consider a 2D section of flow generated by a pressure difference between two parallel plates (Figure 4). A unit domain with the same microstructure as in the previous examples is considered. A pressure difference of 0.2 is again applied at the two interior plates (Figure 4(a)) and no-flow boundary conditions are specified at the outer boundary. The computation shown in Figure 4 was performed with ε = 1/8 and both algorithms took 9 iterations to converge. As can be seen from the figure, the permeability plot is strongly correlated to the macroscopic pressure field. The highest value (K11 = 7.86 × 10⁻⁶) is observed at the left plate, where the pressure is highest, and the lowest value (K11 = 2.38 × 10⁻⁶) is observed at the right plate, where the pressure is lowest. Unlike the previous examples, where the pressure was always positive, here negative values of the pressure lead to expansion of the fine-scale inclusion (cf. Figure 1(a)). This in turn decreases the permeability below the value for a rigid obstacle. These two examples further indicate that the proposed algorithms are robust and can effectively solve multiscale FSI problems.
Acknowledgments This work was partially funded by the European Commission, grant FP7PEOPLE-2007-4-3-IRG-230919 as well as the US National Science Foundation, grant NSF-DMS-0811180. The authors would like to specially thank Dr. Steve Johnson for his support in using the Calclab cluster at the Dept. of Mathematics, Texas A&M University. Last but not least, we are particularly indebted to Dr. Richard Ewing for the generous funding and support that he provided through the Institute for Scientific Computing during the initial stage of this research.
References
1. Biot, M.A.: General theory of three dimensional consolidation. J. Appl. Phys. 12, 155–164 (1941)
2. Burridge, R., Keller, J.B.: Poroelasticity equations derived from microstructure. Journal of the Acoustical Society of America 70, 1140–1146 (1981)
3. Efendiev, Y., Pankov, A.: Numerical homogenization of nonlinear random parabolic operators. Multiscale Modeling and Simulation 2(2), 237–268 (2004)
4. Iliev, O., Mikelic, A., Popov, P.: On upscaling certain flows in deformable porous media. SIAM Multiscale Modeling and Simulation 7, 93–123 (2008)
5. Sanchez-Palencia, E.: Non-Homogeneous Media and Vibration Theory. Lecture Notes in Physics, vol. 127. Springer, Berlin (1980)
6. Sanchez-Palencia, E., Ene, H.I.: Equations et phénomènes de surface pour l'écoulement dans un modèle de milieu poreux. Journal de Mécanique 14, 73–108 (1975)
7. Zhikov, V.V., Kozlov, S.M., Oleinik, O.A.: Homogenization of Differential Operators and Integral Functionals. Springer, Berlin (1994)
Assimilation of Chemical Ground Measurements in Air Quality Modeling
Gabriele Candiani, Claudio Carnevale, Enrico Pisoni, and Marialuisa Volta
Department of Electronics for Automation, University of Brescia, Viale Branze, 38 – 25123 – Brescia, Italy
[email protected]
Abstract. Regional authorities need tools helping them to understand the spatial distribution of an air pollutant over a certain domain. Models can provide spatially consistent air quality data, as they consider all the main physical and chemical processes governing the atmosphere. However, uncertainties in formalization and input data (emission fields, initial and boundary conditions, meteorological patterns) can affect simulation results. On the other hand, measurements, which are considered more accurate, have a limited spatial coverage. The integration of monitoring data and model simulations can potentially increase the accuracy of the resulting spatial fields. This integration can be performed using either deterministic techniques or algorithms which allow for uncertainties in both modeled and observed data, to obtain the best physical assessment of concentration fields. In this study ozone daily fields, simulated by the Transport Chemical Aerosol Model (TCAM), are reanalyzed from April to September 2004. The study area covers all of Northern Italy, divided into 64 × 41 cells with a spatial resolution of 10 × 10 km². Ozone data measured by ground stations are assimilated using Inverse Distance Weighting (IDW), kriging, cokriging and Optimal Interpolation (OI) in order to obtain reanalyzed ozone fields. Results show that these methods greatly improve the accuracy of the retrieved ozone maps.
1 Introduction
An accurate description of the air quality state is a very challenging task due to the complexity and non-linearity of the mechanisms which take place in the atmosphere. This is especially true when dealing with secondary pollutants, such as ozone. In order to describe the distribution of an air pollutant over a certain area, regional authorities can use either the data from a monitoring station network or the results of simulations performed by a deterministic modeling system. Both of these approaches have advantages and drawbacks. Monitoring stations are considered to be more accurate than models but, since they lack spatial coverage, the accuracy of the resulting pollutant field is limited by the number and the position of the stations. On the other hand, multiphase chemical and transport models can describe the pollutant distribution with a good spatial resolution.
However, because of the complexity of these models and the need for detailed meteorological and emission inputs, model results are affected by uncertainties [6]. The integration between models and observations can therefore have the potential to decrease the uncertainties in the resulting pollutant fields [7]. In this study, ozone daily fields simulated by the Transport Chemical Aerosol Model (TCAM) are reanalyzed off-line, assimilating ozone ground measurements. The paper is structured as follows: section 2 presents the TCAM features and the theoretical formalization of the assimilation methodologies; section 3 shows the experiment setup; in section 4 the results of the reanalysis are presented and commented for the four techniques.
2 Methodology
The objective of the assimilation is to estimate as accurately as possible the state of the atmosphere at a given time. The reanalysis is a particular case of assimilation which can be formalized as [1]

    x_a = x_b + δx,    (1)
where:
– x_a is the analysis field;
– x_b is the model field (also called background or first guess or a priori field);
– δx is the analysis increment.
The increment δx is the product of a weight matrix K and the differences between the observations y and the background at the monitoring station locations:

    δx = K(y − H(x_b)) = Kψ,    (2)
where H(x) is an operator that maps the model state space x into the observed state space y, and ψ is often called the residual vector. In the literature, reanalysis algorithms differ in the way they compute the matrix K. In this study, Inverse Distance Weighting (IDW) [7], kriging [5], cokriging [5] and Optimal Interpolation (OI) [1] are used to estimate δx over the domain grid points.
2.1 Background Fields
The background fields are provided by the simulations of TCAM [2], a 3-D multiphase Eulerian model. It is part of the Gas Aerosol Modeling Evaluation System (GAMES) [15], which also includes the meteorological pre-processor PROMETEO and the emission processor POEM-PM [3]. TCAM solves at each time step a PDE system which describes the horizontal/vertical transport, the multiphase chemical reactions and the gas-to-particle conversion phenomena, using a splitting operator technique [12]. The horizontal transport is solved by a chapeau function approximation [13] and the nonlinear Forester filter [8], while the vertical transport PDE system is solved by a hybrid implicit/explicit scheme.
The gas chemistry is described by the COCOH97 scheme [16]. The ODE chemical kinetic system is solved by means of the Implicit-Explicit Hybrid (IEH) solver, which splits the species into fast and slow ones, according to their reaction rates. The system of fast species is solved by means of the implicit Livermore Solver for Ordinary Differential Equations (LSODE) [4], implementing an Adams predictor/corrector method in the non-stiff case, and the Backward Differentiation Formula method in the stiff case. The slow species system is solved by the Adams-Bashforth method [17]. The thermodynamic module is derived from SCAPE2 [11].
2.2 Algorithms
Inverse Distance Weighting. One of the simplest techniques to compute K = [k_{i,p}] is the IDW. The weights used in this algorithm are inverse functions of the distance d_{i,p} (d_{i,q}) between the grid point i and the station location p (q), according to the equation

    k_{i,p} = (1/d_{i,p}^β) / ( Σ_{q=1}^n 1/d_{i,q}^β ),    (3)

where n is the number of stations and β is the weighting power.
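A minimal numpy sketch of the IDW weight matrix in (3) (each row normalized to sum to one) is given below; β = 2 is the value chosen later in the experiment setup, and coincident grid/station locations would need special handling.

import numpy as np

def idw_weights(grid_points, station_points, beta=2.0):
    """Rows: grid points i, columns: stations p; each row sums to 1."""
    d = np.linalg.norm(grid_points[:, None, :] - station_points[None, :, :], axis=-1)
    w = 1.0 / d**beta
    return w / w.sum(axis=1, keepdims=True)

# analysis increment on the grid (eq. 2): delta_x = K @ (y - H(x_b))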
Ordinary kriging. In ordinary kriging, the weights K = [k_{i,p}] are based on the knowledge of the semivariogram, which describes the expected squared increment of the values ψ between locations p and q. First, an empirical semivariogram γ_ψ is constructed as:

    γ_ψ(d̄) = 1/(2N(d̄)) Σ_{(p,q)} [ψ(p) − ψ(q)]²,    (4)

where N(d̄) is the number of station pairs (p, q) separated by d̄. Then, the empirical semivariogram is fitted by a model semivariogram γ̂_ψ (e.g. spherical, Gaussian, exponential, ...), which is used to solve the system of n + 1 equations [10]

    Σ_{p=1}^n k_{i,p} γ̂_ψ(d_{q,p}) + μ = γ̂_ψ(d_{q,i}),   q = 1, ..., n,    (5)
    Σ_{p=1}^n k_{i,p} = 1,    (6)
where:
– n is the number of stations;
– d_{q,p} and d_{q,i} are the distances between a station point p and another station point q or a grid point i, respectively;
– μ is the Lagrange multiplier which ensures the unbiasedness condition (6).
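For illustration, the empirical semivariogram (4) can be estimated by binning station pairs by separation distance, as in the following sketch (the bin edges are an assumed user choice, and the result would then be fitted by a model variogram):

import numpy as np

def empirical_semivariogram(coords, psi, bin_edges):
    n = len(psi)
    d, g = [], []
    for p in range(n):
        for q in range(p + 1, n):
            d.append(np.linalg.norm(coords[p] - coords[q]))
            g.append(0.5 * (psi[p] - psi[q]) ** 2)   # half squared increment
    d, g = np.array(d), np.array(g)
    idx = np.digitize(d, bin_edges)
    gamma = np.array([g[idx == k].mean() if np.any(idx == k) else np.nan
                      for k in range(1, len(bin_edges))])
    return gamma   # one semivariogram value per distance bin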
Ordinary Cokriging. Ordinary cokriging is an extension of ordinary kriging. It allows one to use an auxiliary variable, called the secondary variable, which is correlated to the primary variable and is usually more densely sampled. In this case, the increment δx (2) is not only a linear combination of the primary variable ψ, but also a function of the secondary variable η weighted by the matrix L, according to the following equation:

    δx = K(y − H(x_b)) + Lη = Kψ + Lη.    (7)
The weights K = [k_{i,p}] and L = [l_{i,s}] are estimated on the basis of the two semivariograms γ_ψ and γ_η (estimated as in (4)) and the crossvariogram γ_ψη, which describes the correlation between the two variables ψ and η according to

    γ_ψη(d̄) = 1/(2N(d̄)) Σ_{(p,q)} (ψ(p) − ψ(q))(η(p) − η(q)),    (8)

where N(d̄) is the number of station pairs (p, q) separated by d̄. The model semivariograms (γ̂_ψ and γ̂_η) and the model crossvariogram (γ̂_ψη) are then used to compute the set of weights, solving the following n + m + 2 equations [10]

    Σ_{p=1}^n k_{i,p} γ̂_ψ(d_{p,q}) + Σ_{s=1}^m l_{i,s} γ̂_ψη(d_{s,q}) + μ₁ = γ̂_ψ(d_{i,q}),   q = 1, ..., n,    (9)
    Σ_{p=1}^n k_{i,p} γ̂_ψη(d_{p,t}) + Σ_{s=1}^m l_{i,s} γ̂_η(d_{s,t}) + μ₂ = γ̂_ψη(d_{i,t}),   t = 1, ..., m,    (10)
    Σ_{p=1}^n k_{i,p} = 1,    (11)
    Σ_{s=1}^m l_{i,s} = 0,    (12)
where:
– n and m are the numbers of data points for the primary and secondary variables, respectively;
– d_{p,q} and d_{p,t} are the distances between a point p of the primary variable and another point q of the primary variable or a point t of the secondary variable, respectively;
– d_{s,q} and d_{s,t} are the distances between a point s of the secondary variable and a point q of the primary variable or another point t of the secondary variable, respectively;
– d_{i,q} and d_{i,t} are the distances between a grid point i and a point of the primary variable q or of the secondary variable t, respectively;
– μ₁ and μ₂ are the Lagrange parameters which ensure the unbiasedness conditions (11) and (12).
Optimal Interpolation. In the Optimal Interpolation algorithm [1], the weight matrix K (also called the gain matrix) is calculated as

    K = BHᵀ(HBHᵀ + R)⁻¹,    (13)
where B and R are the covariance matrices for the background and observation errors, respectively. The difference from the algorithms presented above is that now the weight matrix K takes into account the errors associated with the model and the observations. The B matrix is of utmost importance, as it weights the model error against the competing observation error and determines how the information is distributed over the domain. In this work, B is estimated using the Gaussian exponential function

    B = [b_{i,j}] = exp(−d²_{i,j}/(2L²_h)) v = D(i, j) v,    (14)
where d_{i,j} is the distance between the grid cells i and j, L_h is a parameter defining the decay of the covariance with respect to the horizontal distance, and v is the model error variance estimate, computed on the basis of previous simulation results. This approximation states that the model error variance is constant for each cell of the domain, while the error covariance between two grid points (i and j) is a function of the horizontal distance between them. The R matrix can be considered diagonal when the measurements performed by the monitoring stations are independent. Moreover, if the same type of instrument is used for the measurements, it can be assumed that all the monitoring stations have the same error variance r, thus R can be rewritten as R = rI. Under these assumptions eq. (13) can be written as

    K = D(i, j)vHᵀ( v(HD(i, j)Hᵀ + (r/v)I) )⁻¹ = D(i, j)Hᵀ( HD(i, j)Hᵀ + σI )⁻¹,    (15)

where the only degree of freedom is σ, the ratio between the observation and model error variances.
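A minimal sketch of the resulting OI analysis step, with the Gaussian covariance (14) and R = rI, is given below; the observation operator H is assumed here to simply select the grid cells containing the stations, and v = 1, r = 0.5 are illustrative values consistent with the ratio σ = 0.5 used later.

import numpy as np

def oi_analysis(xb, grid_xy, obs, obs_cells, Lh=50_000.0, v=1.0, r=0.5):
    n = len(xb)
    d = np.linalg.norm(grid_xy[:, None, :] - grid_xy[None, :, :], axis=-1)
    B = v * np.exp(-d**2 / (2.0 * Lh**2))            # eq. (14)
    H = np.zeros((len(obs), n))
    H[np.arange(len(obs)), obs_cells] = 1.0          # assumed selection operator
    R = r * np.eye(len(obs))
    K = B @ H.T @ np.linalg.inv(H @ B @ H.T + R)     # eq. (13)
    return xb + K @ (obs - H @ xb)                   # eqs. (1)-(2)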
3 Experiment Setup
In this study, the maps of the maximum 8-hour mean (MAX8H) ozone concentration, simulated by TCAM, are reanalyzed with the assimilation of ground measurement data. The study area covers all of Northern Italy (Fig. 1), divided into 41 × 64 cells (10 × 10 km² resolution). Northern Italy is a densely inhabited and industrialized area, where high anthropogenic emissions, frequent stagnating meteorological conditions and Mediterranean solar radiation regularly cause high ozone levels in summer months. The reanalysis covers the period between April and September 2004. The background fields are provided by means of TCAM simulations. The emission fields were estimated by the POEM-PM pre-processor starting from the
Fig. 1. Study area
regional emission inventory collected by ISPRA (Italian Agency for Environmental Protection and Research), while the meteorological fields are computed by means of the MM5 model [9]. The boundary conditions are computed starting from a continental-scale simulation of the CHIMERE model [14]. The observation data are represented by 134 stations, collected from the regional monitoring networks for Italy, and from the AirBase database for France and Switzerland. 75% of the stations (crosses in Figure 1) are used to estimate the daily analysis fields, while the remaining stations (stars in Figure 1) are used for the validation. The parameters used in the algorithms are chosen on the basis of a previous sensitivity analysis. Thus, β in IDW is equal to 2, the secondary variable η used in cokriging is the altitude, while Lh and σ in the OI are equal to 50000 m and 0.5, respectively.
4 Results and Discussions
The validation of the reanalyzed MAX8H ozone fields is performed by comparing the results of the four techniques with the values observed at the monitoring stations chosen for the validation. The same comparison is also performed for the non-reanalyzed TCAM fields, in order to better appreciate the improvements due to the reanalysis. Figure 2 shows the results of the validation performances in terms of mean error, root mean square error, and correlation. Looking at the mean error results (Fig. 2(a)), even if the median of TCAM is the closest to zero, the analysis algorithms present a low variability, suggesting a better accuracy of the estimates. Significant improvements can be seen for the
Fig. 2. Mean error (a), Root Mean Square Error (b), and Correlation (c) for TCAM, IDW, kriging, cokriging, and OI
RMSE index (Fig. 2(b)), where the TCAM value of 27-28 μg m⁻³ becomes less than 20 μg m⁻³ for IDW and cokriging and close to 15 μg m⁻³ for kriging and OI. The improvement is also confirmed by the results for the correlation coefficients (Fig. 2(c)), where the 0.6 value of TCAM increases to 0.85 for IDW, kriging and OI. The improvement seen for cokriging is less marked than for the other techniques, but is still close to 0.8. Overall, the implemented methodologies ensure a large improvement in the accuracy of the retrieved ozone concentration fields.
Acknowledgments The research has been developed in the framework of the Pilot Project QUITSAT (Contract I/035/06/0 — http://www.quitsat.it), sponsored and funded by the Italian Space Agency (ASI).
References
1. Bouttier, F., Courtier, P.: Data assimilation concepts and methods. In: Meteorological Training Course Lecture Series (2002)
2. Carnevale, C., Decanini, E., Volta, M.: Design and validation of a multiphase 3D model to simulate tropospheric pollution. Science of the Total Environment 390, 166–176 (2008)
3. Carnevale, C., Gabusi, V., Volta, M.: POEM-PM: an emission model for secondary pollution control scenarios. Environmental Modeling and Software 21, 320–329 (2006)
4. Chock, D., Winkler, S., Sun, P.: A comparison of stiff chemistry solvers for air quality modeling. In: Proceedings of the Air and Waste Management Association 87th Annual Meeting (1994)
5. Cressie, N.A.C.: Statistics for spatial data. Wiley, Chichester (1991)
6. Cuvelier, C., et al.: CityDelta: A model intercomparison study to explore the impact of emission reductions in European cities in 2010. Atmospheric Environment 41, 189–207 (2007)
7. Denby, B., Horálek, J., Walker, S.E., Eben, K., Fiala, J.: Interpolation and assimilation methods for Europe scale air quality assessment and mapping. Part I: review and recommendations. Technical report, ETC/ACC Technical Paper 2005/7 (2005)
8. Forester, C.: Higher order monotonic convection difference schemes. Journal of Computational Physics 23, 1–22 (1977)
9. Grell, G., Dudhia, J., Stauffer, D.: A description of the Fifth-generation Penn State/NCAR Mesoscale Model (MM5). Technical report, NCAR Tech Note TN-398 + STR, 122 p. (1994)
10. Isaaks, E.H., Srivastava, R.M.: Applied Geostatistics. Oxford University Press, Oxford (1989)
11. Kim, Y., Seinfeld, J., Saxena, P.: Atmospheric gas aerosol equilibrium I: thermodynamic model. Aerosol Science and Technologies 19, 157–187 (1993)
12. Marchuk, G.: Methods of Numerical Mathematics. Springer, Heidelberg (1975)
13. Pepper, D.W., Kern, C.D., Long Jr., P.E.: Modeling the dispersion of atmospheric pollution using cubic splines and chapeau functions. Atmospheric Environment 13, 223–237 (1979)
14. Schmidt, H., Derognat, C., Vautard, R., Beekmann, M.: A comparison of simulated and observed ozone mixing ratios for the summer of 1998 in Western Europe. Atmospheric Environment (2001)
15. Volta, M., Finzi, G.: GAMES, a comprehensive Gas Aerosol Modeling Evaluation System. Environmental Modelling and Software 21, 587–594 (2006)
16. Wexler, A., Seinfeld, J.: Second-generation inorganic aerosol model. Atmospheric Environment 25, 2731–2748 (1991)
17. Wille, D.: New stepsize estimators for linear multistep methods. In: Proceedings of MCCM (1994)
Numerical Simulations with Data Assimilation Using an Adaptive POD Procedure
Gabriel Dimitriu¹, Narcisa Apreutesei², and Răzvan Ştefănescu¹
¹ “Gr. T. Popa” University of Medicine and Pharmacy, Department of Mathematics and Informatics, 700115 Iaşi, Romania
[email protected], [email protected]
² “Gh. Asachi” Technical University, Department of Mathematics, 700506 Iaşi, Romania
[email protected]
Abstract. In this study the proper orthogonal decomposition (POD) methodology to model reduction is applied to construct a reduced-order control space for simple advection-diffusion equations. Several 4D-Var data assimilation experiments associated with these models are carried out in the reduced control space. Emphasis is laid on the performance evaluation of an adaptive POD procedure, with respect to the solution obtained with the classical 4D-Var (full model), and POD 4D-Var data assimilation. Despite some perturbation factors characterizing the model dynamics, the adaptive POD scheme presents better numerical robustness compared to the other methods, and provides accurate results.
1 Introduction
Data assimilation represents a methodology to combine the results of a large-scale numerical model with the measurements available to obtain an optimal reconstruction of the state of the system. The four-dimensional variational data assimilation (4D-Var) method has been a very successful technique used in operational numerical weather prediction at many weather forecast centers ([6,8,9,10]). The proper orthogonal decomposition technique has been used to obtain low dimensional dynamical models of many applications in engineering and science ([1,2,3,4,5,7]). Basically, the idea starts with an ensemble of data, called snapshots, collected from an experiment or a numerical procedure of a physical system. The POD technique is then used to produce a set of basis functions which spans the snapshot collection. The goal of the approach is to represent the ensemble of data in terms of an optimal coordinate system. That is, the snapshots can be generated by a smallest possible set of basis functions. The drawback of the POD 4D-Var consists of the fact that the optimal solution can only be sought within the space spanned by the POD basis of background fields. When observations lie outside of the POD space, the POD 4D-Var solution may fail to fit the observations sufficiently. The above limitation of the POD 4D-Var can be improved by implementing an adaptive POD 4D-Var scheme. In
this study an adaptive proper orthogonal decomposition (A-POD) procedure is applied to set up a reduced-order control space for one-dimensional diffusion-advection equations. The outline of this paper is as follows. Section 2 is dedicated to the numerical model under study and to reviewing the 4D-Var assimilation procedure, together with the POD 4D-Var method and its adaptive variant. Section 3 contains numerical results from data assimilation experiments using the 4D-Var, POD 4D-Var and adaptive POD 4D-Var methods. The paper ends with some conclusions.
2 Numerical Model and Adaptive POD 4D-Var Assimilation Procedure
Our model under study is a one-dimensional diffusion-advection equation defined by the following partial differential equation:

    ∂c/∂t = a ∂²c/∂x² + b ∂c/∂x + f(x, t),   (x, t) ∈ Ω × (0, T),   c(x, 0) = x(1 − x).    (1)
Here, the spatial domain is Ω = [0, 1], and the coefficients a and b are positive constants. The function f is chosen to be f(x, t) = (−x² + (1 + 2b)x + 2a − b)eᵗ. Then, the exact (analytical) solution of (1) is given by c_exact(x, t) = x(1 − x)eᵗ. Details about the numerical implementation of a data assimilation algorithm for a model similar to (1) are presented in [10]. For a successful implementation of the POD 4D-Var in data assimilation problems, it is of utmost importance to construct an accurate POD reduced model. In what follows, we briefly present a description of this procedure (see [2,3,7]). For a temporal-spatial flow c(x, t), we denote by c₁, ..., cₙ an adequately chosen set of n snapshots in a time interval [0, T], that is, cᵢ = c(x, tᵢ). Defining the mean c̄ = (1/n) Σᵢ₌₁ⁿ cᵢ, we expand c(x, t) as

    c^POD(x, t) = c̄(x) + Σᵢ₌₁^M βᵢ(t) Φᵢ(x),    (2)
where Φᵢ(x) (the i-th element of the POD basis) and M are appropriately chosen to capture the dynamics of the flow, as follows:
1. Calculate the mean c̄ = (1/n) Σᵢ₌₁ⁿ cᵢ;
2. Set up the correlation matrix K = [k_ij], where k_ij = ∫_Ω (cᵢ − c̄)(cⱼ − c̄) dx;
3. Compute the eigenvalues λ₁ ≥ λ₂ ≥ ... ≥ λₙ ≥ 0 and the corresponding orthogonal eigenvectors v₁, v₂, ..., vₙ of K;
4. Set Φᵢ := Σⱼ₌₁ⁿ vⱼⁱ (cⱼ − c̄).
Now, we introduce a relative information content to select a low-dimensional basis of size M ≪ n, by neglecting modes corresponding to the small eigenvalues. Thus, we define the index

    I(k) = ( Σᵢ₌₁ᵏ λᵢ ) / ( Σᵢ₌₁ⁿ λᵢ )

and choose M such that M = argmin{I(m) : I(m) ≥ γ}, where 0 ≤ γ ≤ 1 is the percentage of total information captured by the reduced space X^M = span{Φ₁, Φ₂, ..., Φ_M}. The tolerance parameter γ must be chosen to be near unity in order to capture most of the energy of the snapshot basis. The reduced order model is then obtained by expanding the solution as in (2).
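A minimal numpy sketch of this construction (method of snapshots), with the spatial inner product approximated by a plain dot product, is given below:

import numpy as np

def pod_basis(snapshots, gamma=0.99):
    """snapshots: (n_space, n_snap) array of states c_i."""
    c_mean = snapshots.mean(axis=1, keepdims=True)
    A = snapshots - c_mean                     # centered snapshots c_i - c_bar
    K = A.T @ A                                # correlation matrix k_ij (dot-product approximation)
    lam, V = np.linalg.eigh(K)
    lam, V = lam[::-1], V[:, ::-1]             # sort eigenvalues in decreasing order
    I = np.cumsum(lam) / lam.sum()             # relative information content I(k)
    M = int(np.searchsorted(I, gamma) + 1)     # smallest M with I(M) >= gamma
    Phi = A @ V[:, :M]                         # POD modes Phi_i
    Phi /= np.linalg.norm(Phi, axis=0)         # normalize the modes
    return c_mean.ravel(), Phi, lam, M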
Generally, an atmospheric or oceanic model is governed by the following dynamic system:

    dc/dt = F(c, t),   c(x, 0) = c₀(x).    (3)
To obtain a reduced model of (3), we first solve (3) for a set of snapshots and follow the above procedure, then use a Galerkin projection of the model equations onto the space X^M spanned by the POD basis elements (replacing c in (3) by the expansion (2), then multiplying by Φᵢ and integrating over the spatial domain Ω):

    dβᵢ/dt = ⟨ F( c̄ + Σᵢ₌₁^M βᵢ Φᵢ , t ), Φᵢ ⟩,   βᵢ(0) = ⟨ c(x, 0) − c̄(x), Φᵢ(x) ⟩.    (4)
Equation (4) defines a reduced model of (3). In the following, we will analyze applying this model reduction to the 4D-Var formulation. In this context, the forward model and the adjoint model for computing the cost function and its gradient represent the reduced model and its corresponding adjoint, respectively. Over the assimilation time interval [0, T], a prior estimate or 'background estimate' c_b of the initial state c₀ is assumed to be known, and the initial random errors (c₀ − c_b) are assumed to be Gaussian with covariance matrix B. The aim of the data assimilation is to minimize the square error between the model predictions and the observed system states, weighted by the inverse of the covariance matrices, over the assimilation interval. The initial state c₀ is treated as the required control variable in the optimization process. Thus, the objective function associated with the data assimilation for (3) is expressed by

    J(c₀) = (c₀ − c_b)ᵀ B⁻¹ (c₀ − c_b) + (Hc − y_o)ᵀ R⁻¹ (Hc − y_o).    (5)
Here, H is an observation operator, and R is the observation error covariance matrix. In POD 4D-Var, we look for an optimal solution of (5) to minimize the cost function J(c₀^M) = J(β₁(0), ..., β_M(0)) given by

    J(c₀^M) = (c₀^POD − c_b)ᵀ B⁻¹ (c₀^POD − c_b) + (Hc^POD − y_o)ᵀ R⁻¹ (Hc^POD − y_o),    (6)
where c₀^POD is the control vector. In (6), c₀^POD = c^POD(x, 0) and c^POD = c^POD(x, t) are expressed by

    c₀^POD(x) = c̄(x) + Σᵢ₌₁^M βᵢ(0) Φᵢ(x),   c^POD(x) = c̄(x) + Σᵢ₌₁^M βᵢ(t) Φᵢ(x).
Therefore, in POD 4D-Var the control variables are β₁(0), ..., β_M(0). As explained later, the dimension of the POD reduced space can be much smaller than that of the original space. As a consequence, the forward model is the reduced model (4), which can be solved very efficiently. The adjoint model of (4) is then used to calculate the gradient of the cost function (6), and that significantly reduces both the computational cost and the programming effort. The POD model in POD 4D-Var assimilation is established by constructing a set of snapshots, taken from the background trajectory, i.e., by integrating the original model (3) with background initial conditions. The A-POD algorithm used in our numerical experiments is presented below:

A-POD Algorithm:
Step 1: Set k = 1, the iteration level for the POD procedure, and the initial guess controls c₀ᵏ.
Step 2: Set up the snapshot ensemble from the solution of the full forward model, with the controls c₀ᵏ.
Step 3: Compute the POD bases (the number of POD bases is chosen to capture a prescribed energy level γ, mentioned above).
Step 4: Project the controls c₀ᵏ onto the reduced space β^{k,iter} (iter = 1).
Step 5: Optimize the initial controls β^{k,iter} (here, iter denotes the iteration of the optimization process, completely carried out on the reduced space).
Step 6:
(i) Check the value of the cost function (5). If |J_iter| < ε (where ε is the tolerance for the optimization), then STOP;
(ii) If |J_iter| > ε and |J_iter − J_iter−1| > 10⁻³, then set iter = iter + 1 and go to Step 5;
(iii) If |J_iter| > ε and |J_iter − J_iter−1| < 10⁻³, then update the POD bases:
  (a) Find the new controls c₀^{k+1} by projecting the optimized controls β^{k,iter} onto the original domain.
  (b) Set k = k + 1 and go to Step 2.
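Schematically, the A-POD outer loop can be organized as follows; all helper functions (run_full_model, build_pod_basis, project, reconstruct, optimize_reduced_controls) are hypothetical placeholders for the steps above, and the inner reduced-space optimization (Steps 5-6(i)/(ii)) is folded into a single call.

def adaptive_pod_4dvar(c0_guess, gamma=0.99, eps=1e-6, max_outer=20):
    c0 = c0_guess
    for k in range(max_outer):
        snapshots = run_full_model(c0)                         # Step 2: snapshots from the full model
        c_mean, Phi = build_pod_basis(snapshots, gamma)        # Step 3: POD basis for energy level gamma
        beta0 = project(c0 - c_mean, Phi)                      # Step 4: project controls onto the reduced space
        beta, J = optimize_reduced_controls(beta0, c_mean, Phi)  # Steps 5-6(ii): reduced-space optimization
        c0 = reconstruct(c_mean, Phi, beta)                    # Step 6(iii)(a): back to the full space
        if abs(J) < eps:                                       # Step 6(i): stop if converged
            break
    return c0                                                  # otherwise Step 6(iii)(b): refresh basis and repeat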
3 Numerical Results
This section is devoted to the numerical results of several data assimilation experiments carried out to examine the performance of the A-POD 4D-Var procedure, by comparing it with the full 4D-Var and POD 4D-Var data assimilation. All the performed tests used as the 'true' (exact) solution c₀ of the assimilation problem the one computed from the analytical solution c_exact of (1) given in Section 2. We set in our approach T = 1, and used 31 discretization points in space and 69 points in the time interval [0, 1]. By means of a perturbed initial condition we generated the observed state y_o. We assumed that the correlation matrices in (5) and (6) are diagonal matrices, given by B = σ_b² I and R = σ_o² I, with I denoting the identity matrix of appropriate order. We set σ_b² = 0.05² and σ_o² = 0.1², representing the variances for the background and observational errors, respectively.
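With these diagonal covariances, the cost function (5) reduces to two weighted sum-of-squares terms; a minimal sketch is given below, where model_to_obs (the observation operator composed with the model integration) is a hypothetical placeholder.

import numpy as np

def cost_4dvar(c0, cb, y_obs, model_to_obs, sigma_b=0.05, sigma_o=0.1):
    db = c0 - cb                    # background departure
    do = model_to_obs(c0) - y_obs   # observation departure (hypothetical forward map)
    return (db @ db) / sigma_b**2 + (do @ do) / sigma_o**2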
Fig. 1. Relative energy spectrum for POD modes, corresponding to different sets of snapshots for the system state (panels for 12, 18, 23, and 35 snapshots; log10 of the relative energy vs. mode index)
The numerical tests have been performed with the following values of the diffusion and advection parameters: a = 0.0001 and b = 1. The background value c_b and the observation y_o were obtained by perturbing the exact solution of the state equation. Figure 1 illustrates the decay of the eigenvalues for the cases when the spatial correlation matrix K was calculated using 12, 18, 23, and 35 snapshots, respectively. The dimension of the POD reduced model depends on the number of basis functions. We found that only a few basis functions are required to capture a high percentage of the dynamics of the system. For the presented cases, we can remark in Table 1 that more than 90% of the full variability characterizing the model dynamics can be captured with 4, 16, and 18 POD modes, respectively. Thus, choosing M = 16 modes and n = 36 snapshots, the captured energy represents 98.53% of the total energy, while for M = 18 the captured energy percentage is 99.16%.

Table 1. The values of the index I(k) for different numbers of snapshots (n) and POD modes (M)

    Number of snapshots (n)   Number of modes (M)   Index I(M)
    12                         4                    91.74%
    18                        16                    99.89%
    36                        16                    98.53%
    36                        18                    99.16%
Table 2. The values of the RMS errors (RMSE) calculated for the POD and A-POD procedures, corresponding to different numbers of snapshots and POD modes

    Number of snapshots (n)   Number of modes (M)   RMSE for POD   RMSE for A-POD
    12                         4                    0.1193         0.0874
    12                        12                    0.0730         0.0532
    18                        16                    0.0684         0.0476
    36                        16                    0.0640         0.0420
    23                        22                    0.0444         0.0403
    36                        22                    0.0427         0.0351
We performed several numerical tests by implementing three distinct procedures for the data assimilation problem (1), (5): the standard (full) 4D-Var, the POD 4D-Var, and the adaptive POD (A-POD) scheme. In all cases, the performance of the POD scheme with respect to A-POD was evaluated by computing the RMS errors of the assimilation solution (see Table 2). The numerical solution of the optimal control problem was obtained using fminunc — the Matlab unconstrained minimization routine. Its algorithm is based on the BFGS quasi-Newton method with a mixed quadratic and cubic line search procedure.
(Figure 2 shows four panels: the normalized cost function J/J0 and the normalized gradient versus the iteration number, for n = 18 snapshots with M = 10 modes and for n = 35 snapshots with M = 25 modes, comparing 4D-Var, POD, and A-POD.)
Fig. 2. The decrease of the cost function J and of the gradient dJ/dc0, for different numbers of snapshots and POD modes, for the 4D-Var method with the full order model and for the reduced order models using the POD 4D-Var and adaptive POD 4D-Var techniques
(Figure 3 shows two panels: the assimilated initial condition c0 versus the space point index, for n = 23 snapshots with M = 22 modes and for n = 36 snapshots with M = 22 modes, comparing the exact solution with 4D-Var, POD, and A-POD.)
Fig. 3. Comparative results on the assimilation of c0, maintaining the same number of POD modes (M = 22) and increasing the number of snapshots from n = 23 to n = 36

The comparative assimilation results associated with our experiments for certain selections of M and n are depicted in Figure 3. Obviously, one can conclude that better results are obtained when more POD modes and more snapshots are used. We also remark that the variance of the observational errors, chosen twice as large as the variance level of the background errors, caused a relatively high instability of the assimilation solution (even in the best case, where the reduced model was constructed with n = 36 snapshots and M = 22 modes; see Figure 3, bottom plot). A certain contribution to this oscillatory behaviour of the solution could also be attributed to the fact that advection strongly dominates diffusion in our model.
4
Conclusions
In this study we applied, to a simple diffusion-advection model, both a standard 4D-Var assimilation scheme and a reduced-order approach to 4D-Var assimilation using an adaptive POD procedure. The numerical results from several POD models (constructed with different numbers of snapshots and POD modes) are compared with those of the original model. Our numerical tests showed that the variability of the original model could be captured reasonably well by applying an adaptive POD scheme to a low-dimensional system set up with 36 snapshots and 22 leading POD basis functions. At the same time, the available data should
be both representative and sufficiently accurate in order to ensure the desired improvements. In the case when the available data consist of measurements, some information about the uncertainties of the measurements would be very helpful for the assimilation procedure. Several issues, including more general models (containing distributed diffusion and advection parameters, and also nonlinear source terms, directly depending on state variable, c) are under investigation.
Acknowledgments The paper was supported by the project ID 342/2008, CNCSIS, Romania.
References 1. Antoulas, A., Sorensen, S., Gallivan, K.A.: Model Reduction of Large-Scale Dynamical Systems. In: Bubak, M., van Albada, G.D., Sloot, P.M.A., Dongarra, J. (eds.) ICCS 2004, Part III. LNCS, vol. 3038, pp. 740–747. Springer, Heidelberg (2004) 2. Cao, Y., Zhu, J., Luo, Z., Navon, I.M.: Reduced Order Modeling of the Upper Tropical Pacific Ocean Model Using Proper Orthogonal Decomposition. Computer & Mathematics with Applications 52(8-9), 1373–1386 (2006) 3. Cao, Y.H., Zhu, J., Navon, I.M.: A reduced-order approach to four-dimensional variational data assimilation using proper orthogonal decomposition. Int. J. Numer. Meth. Fluids 53(10), 1571–1583 (2007) 4. Dimitriu, G.: Using Singular Value Decomposition in Conjunction with Data Assimilation Procedures. In: Boyanov, T., Dimova, S., Georgiev, K., Nikolov, G. (eds.) NMA 2006. LNCS, vol. 4310, pp. 435–442. Springer, Heidelberg (2007) 5. Fang, F., Pain, C.C., Navon, I.M., Piggot, M.D., Gorman, G.J., Allison, P.A., Goddard, A.J.H.: Reduced-order modelling of an adaptive mesh ocean model. Int. J. Numer. Meth. Fluids 59, 827–851 (2009) 6. Ghil, M., Malanotte-Rizzoli, P.: Data Assimilation in Meteorology and Oceanography. In: Dunowska, R., Saltzmann, B. (eds.) Advances in Geophysics, vol. 33, pp. 141–266 (1991) 7. Luo, Z., Zhu, J., Wang, R., Navon, I.M.: Proper Orthogonal Decomposition Approach and Error Estimation of Mixed Finite Element Methods for the Tropical Pacific Ocean Reduced Gravity Model. Submitted to Computer Methods in Applied Mechanics and Engineering 8. Qui, C.J., Zhang, L., Shao, A.M.: An Expicit four-dimensional variational data assimilation method. Sci. China Ser. D-Earth Sci. 50(8), 1232–1240 (2007) 9. Th´epaut, J.N., Hoffman, R.N., Courtier, P.: Interactions of dynamics and observations in a 4D variational assimilation. Mon. Weather Rev. 121, 3393–3414 (1993) ´ Implementation issues related to variational 10. Zlatev, Z., Brandth, J., Havasi, A.: data assimilation: some preliminary results and conclusions. In: Working Group on Matrix Computations and Statistics, Copenhagen, Denmark, April 1-3 (2005)
Game-Method Model for Field Fires

Nina Dobrinkova1, Stefka Fidanova2, and Krassimir Atanassov3

1 Institute of Mathematics and Informatics, Bulgarian Academy of Sciences
[email protected]
2 Institute for Parallel Processing, Bulgarian Academy of Sciences
[email protected]
3 Centre of Biomedical Engineering, Bulgarian Academy of Sciences
[email protected]
Abstract. Every year about 45000 forest fires occur in Europe, burning half a million hectares of land, some of which are protected zones with rare species of flora and fauna. The existing methods for wildland fire modeling are very complicated and their implementations need a lot of computational capacity. That is why we use another approach, based on the game-method theory, which consumes fewer computational resources.
1
Introduction
The study of wildland fire should begin with the basic principles and mechanisms of the combustion process: fire fundamentals. Fire behavior is what a fire does, the dynamics of the fire event. An understanding of the fundamentals of wildland fire is important for some very practical reasons. The combustion process can be manipulated to some extent: retardants can be applied to affect the combustion process; fuel arrangement can be altered for hazard reduction; and appropriate environmental conditions can be chosen for prescribed fire to reduce smoke impacts, achieve the desired fuel reduction, and still retain control of the fire. The need to understand wildland fire fundamentals is even more pressing than it was in the past. In earlier times the focus was on describing the aspects of fire that are important to suppression efforts. That continues to be a high priority. In addition, there is now increasing emphasis on characterizing fire for its effect on vegetation and for the smoke it produces [4]. The basis for the fire modeling is the Rothermel model for the behavior of surface fires [5]. It calculates, for any given point, local intensity and spread parameters for the head of a surface fire. Inputs for the model are a two-dimensional wind field, terrain parameters, fuel moisture and a detailed description of the fuel bed, based on the local behavior output by the Rothermel model and on a model for the local shape of fire spread. The mathematical models require descriptions of fuel properties as inputs to calculations of fire danger indices or fire behavior potential. The sets of parameters describing the fuel characteristics have become known as fuel models and can be organized into four groups: grass, shrub, timber, and slash. Fuel models for fire danger rating have increased to twenty, while fire behavior predictions and
applications have utilized the thirteen fuel models tabulated by Rothermel [5] and Albini [1]. Each fuel model is described by the fuel load and the ratio of surface area to volume for each size class; the depth of the fuel bed involved in the fire front; and the fuel moisture, including that at which fire will not spread, called the moisture of extinction. The fire models used nowadays by wildland fire managers are specific to distinct types of fire behavior. There are separate models for surface fires, crown fires, spotting, and point-source fire acceleration. These behaviors are really abstractions of an overall three-dimensional process of unconfined combustion that links implicitly to the environment through heat and mass transfer feedbacks. In this work we focus on the game-method principle, which is easy to understand, use, and implement when modeling the spreading of a wildland fire. This model can also be used by non-specialists in the forest fire field for learning about and describing this phenomenon. The game-method model differs from the other models in that it uses a small amount of computational resources. Most of the developed mathematical models require descriptions of fuel properties as inputs to calculations of fire danger indices or fire behavior potential. They are so sophisticated that their resulting simulations produce predictions which cannot be tested analytically. In nature, the natural consistency of a given model is usually tested by calculations of sets of simple experiments (e.g., a hydrodynamics code is always tested by calculations of shock tube experiments). However, often such tests are impossible or ambiguous. In such cases, a more basic question arises, the correctness question, in the sense that all applied conditions and restrictions are valid, no additional conditions and restrictions are applied, and the model rules are not contradictory among themselves. In other words, we mean correctness in the mathematical and logical sense (completeness, conservativeness and consistency). That is why in this paper we focus on the game-method model [2] and apply it to field fires.
2
Description of the Game-Method Model
The combinatorial game-method has so far had mostly theoretical applications, in astronomy for example, but here we try to show its possible application to the fire spreading problem. Let a set of symbols S and an n-dimensional simplex (in the sense of [3]) comprising n-dimensional cubes (at n = 2, a two-dimensional net of squares) be given. Let material points be found in some of the vertices of the simplex and let a set of rules A be given, containing:
– rules for the motions of the material points along the vertices of the simplex;
– rules for the interactions among the material points.
Let the rules of the i-th type be marked as i-rules, where i = 1, 2. Each material point is specified by a number, an n-tuple of coordinates characterizing its location in the simplex, and a symbol from S reflecting the peculiarity of the material point (e.g. in physical applications: mass, charge, concentration, etc.).
We shall call an initial configuration every set of (n + 2)-tuples with an initial component being the number of the material point; the second, third, etc., until the (n + 1)-st, its coordinates; and the (n + 2)-nd, its symbol from S. We shall call a final configuration the set of (n + 2)-tuples having the above form and which is a result from a (fixed) initial configuration by a given number of applications of the rules from A. Applying a rule from A once over a given configuration K will be called an elementary step in the transformation of the model and will be denoted by A1(K). In this sense, if K is an initial configuration and L a final configuration derived from K by m applications of the rules from A, then configurations K0, K1, ..., Km will exist for which K0 = K, Ki+1 = A1(Ki) for 0 <= i <= m - 1, Km = L (the equality "=" is in the sense of coincidence of the configurations), and this will be denoted by L = A(K) = A1(A1(...A1(K)...)). Let a rule P be given, which juxtaposes to a combination of configurations M a single configuration P(M) being the mean of the given ones. We shall call this rule a concentrate one. The concentration can be made over the values of the symbols from S for the material points, as well as over their coordinates, however, not over both of them simultaneously. For each choice of the rule P one should proceed from physical considerations. For example, if the k-th element of M (1 <= k <= s, where s is the number of elements of M) is a rectangle with p x q squares and if the square staying on the (i, j)-th place (1 <= i <= p, 1 <= j <= q) contains a number d^k_{i,j} in {0, 1, ..., 9}, then on the (i, j)-th place of P(M) stays:
– the minimal number d_{i,j} = [ (1/s) sum_{k=1}^{s} d^k_{i,j} ],
– the maximal number d_{i,j} = ceil( (1/s) sum_{k=1}^{s} d^k_{i,j} ),
– the average number d_{i,j} = [ (1/s) sum_{k=1}^{s} d^k_{i,j} + 1/2 ],
where, for a real number x = a + alpha with a a natural number and alpha in [0, 1): [x] = a, while ceil(x) = a if alpha = 0 and ceil(x) = a + 1 if alpha > 0.
Let B be a criterion defined from physical or mathematical considerations. Given two configurations K1 and K2, it answers the question whether they are close to
each other or not. For example, for two configurations K1 and K2 having the form from the above example,

B(K1, K2) = (1/(p.q)) sum_{i=1}^{p} sum_{j=1}^{q} |d^1_{i,j} - d^2_{i,j}| < C1

or

B(K1, K2) = ( (1/(p.q)) sum_{i=1}^{p} sum_{j=1}^{q} (d^1_{i,j} - d^2_{i,j})^2 )^{1/2} < C2,

where C1 and C2 are some constants. For the set of configurations M and the set of rules A we shall define the set of configurations A(M) = {L | (there exists K in M)(L = A(K))}. The rules A will be called statistically correct if, for a large enough (from a statistical point of view) natural number N:

(for all m > N)(for all M = {K1, K2, ..., Km}) (B(A(P(M)), P({Li | Li = A(Ki), 1 <= i <= m})) = 1).   (1)
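A compact sketch of the concentrate rule P(M) and the closeness criterion B for the rectangular example (grids of digits 0 to 9) is given below; the NumPy realization, the function names and the default threshold are our assumptions, not part of the original method description.

import numpy as np

def concentrate(configs, mode="average"):
    """P(M): cell-wise mean of the s configurations, taken back to an integer grid."""
    mean = np.mean(configs, axis=0)
    if mode == "minimal":
        return np.floor(mean).astype(int)           # [x]: round down
    if mode == "maximal":
        return np.ceil(mean).astype(int)            # round up whenever a fractional part remains
    return np.floor(mean + 0.5).astype(int)         # "average": ordinary rounding

def is_close(K1, K2, C1=0.5):
    """B(K1, K2): mean absolute cell-wise difference below the threshold C1."""
    p, q = K1.shape
    return np.sum(np.abs(K1 - K2)) / (p * q) < C1

With these two pieces, the statistical-correctness test (1) amounts to comparing A(P(M)) with P({A(Ki)}) through is_close.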
The essence of the method is the following: the set of rules A, the proximity criterion B and the concentrate rule P are fixed preliminarily. A set of initial configurations M is chosen and the set of the corresponding final configurations is constructed. If equation (1) is valid, we can assume that the rules from the set A are correct in the frames of the model, i.e. they are logically consistent. Otherwise we replace part (or all) of them with others. If the rules become correct, then we can add some new ones to the set or transform some of the existing ones, permanently checking the correctness of the newly constructed system of rules. Thus, in consecutive steps, extending and complicating the rules in A and checking their correctness, we construct the model of the given process. Afterwards we may check the temporal development (as regards the final system of rules A) of a particular initial configuration. In modelling one generally works with one or several particular configurations. Here, however, we first check the correctness of the modelling rules and only then proceed to the actual modelling. This is due to a great extent to the fact that we work with discrete objects and rules convenient for computer realization. Thus a series of checks of equation (1) can be performed, and only then the configuration A(K) is constructed for a given configuration K and a set of rules A. Here we start describing our model. Let us have an NxM matrix, which is an abstract description of the surface of a forest. The visualization will be an NxM net with bins containing numbers from 0 to 9. These values represent the burning coefficient. The value 0 corresponds to a totally burned bin or a bin with nothing to burn (a river, rock, lake, etc.), and everything in the range from 1 to 9 represents the thickness of the forest trees in each bin. Another
parameter is the wind vector, with its direction and intensity. In our software application we include a flag showing the state of each bin at the moment the fire starts: the flag is 1 if the bin is burning and 0 if not. In our simulation the fire starts from the central bin, although of course it can start from any bin. The bin coefficients represent how long the bin will burn until it is totally burned. In this paper we consider the simplest case, without wind; thus the fire is distributed in all directions with the same speed. If the state of a bin is 1 (burning), at every time step (iteration) the burning coefficient decreases by 1 until it becomes 0 (totally burned). If the state of a bin is 0 (not burning) and the states of all neighbour bins are 0, then the burning coefficient stays unchanged. If the state of a bin is 0 and the state of any of the neighbour bins was 1 in the preceding iteration, then the state of the bin becomes 1. Thus with this model we can trace the development of the fire at every time moment.
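One time step of this spreading rule can be sketched as follows (illustrative only; a four-neighbour stencil is assumed, and the array names are ours).

import numpy as np

def fire_step(coeff, burning):
    """coeff: N x M burning coefficients (0..9); burning: boolean N x M flags; returns updated pair."""
    new_coeff = coeff.copy()
    new_coeff[burning] = np.maximum(new_coeff[burning] - 1, 0)   # burning bins lose 1 per iteration

    # a bin ignites if it is next to a bin that was burning in the preceding iteration
    neighbour = np.zeros_like(burning)
    neighbour[1:, :] |= burning[:-1, :]
    neighbour[:-1, :] |= burning[1:, :]
    neighbour[:, 1:] |= burning[:, :-1]
    neighbour[:, :-1] |= burning[:, 1:]

    new_burning = (burning | neighbour) & (new_coeff > 0)        # bins with nothing left to burn stop
    return new_coeff, new_burning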
3
Experimental Results
In this section we present several test problems. Our test area is divided into N x M bins. In a real problem one bin can correspond to 10 square meters. To generate tests we generate random numbers from 1 to 9 for the burning coefficients. We start the burning from the central bin and the burning process continues until the fire reaches the borders of the area. We generate K tests (simulations). To check the correctness of the algorithm, we calculate the average area by averaging every bin with respect to all simulated areas:

b_ij = (1/K) sum_{n=1}^{K} a_ijn.   (2)

We do the same for the burned (resulting) areas:

c_ij = (1/K) sum_{n=1}^{K} a_ijn.   (3)
After that we apply the method to the average initial area, and the result is the burned area d_ij. We compare c_ij with d_ij for every i and j. In Figure 1 the bins that differ between the averaged result areas and the result obtained on the averaged initial area are marked with X.

Table 1. The development of the fire

3 3 3 9 8 4 6 1 3    3 3 3 9 8 4 6 1 3    3 3 3 9 8 4 6 1 3
9 4 2 5 8 4 2 9 9    9 4 2 5 8 4 2 9 9    9 4 2 5 8 4 2 9 9
8 5 8 9 2 5 8 1 9    8 5 8 9 2 5 8 1 9    8 5 7 8 1 4 7 1 9
9 3 5 4 8 8 1 6 5    9 3 5 3 7 7 1 6 5    9 3 4 2 6 6 0 6 5
2 9 2 8 2 6 4 2 9    2 9 2 7 1 5 4 2 9    2 9 1 6 0 4 3 2 9
6 4 4 0 0 0 5 1 1    6 4 4 0 0 0 5 1 1    6 4 3 0 0 0 4 1 1
4 9 7 6 4 1 9 1 1    4 9 7 6 4 1 9 1 1    4 9 7 6 4 1 9 1 1
2 3 5 8 3 1 1 3 5    2 3 5 8 3 1 1 3 5    2 3 5 8 3 1 1 3 5
9 7 8 8 7 5 4 7 2    9 7 8 8 7 5 4 7 2    9 7 8 8 7 5 4 7 2
Fig. 1. Differences between the average of the result areas and the result obtained on the average initial area; the differing bins are marked with X
We prepared an example where the area consists of 143 x 143 bins. We generated 30 simulations. We apply the method to each simulation and to the average area, in which the value of bin (i, j) is the average of the bins (i, j) over all 30 simulations. We average the resulting areas and compare them with the result of the average area (Figure 1). Only 914 bins, which are 4% of the whole area, differ, and they lie on the boundary (the so-called boundary effect). So we can conclude that the results are statistically similar and the method is correct.

Table 2. The area after the third time step

o o o o o o o o o    o o o o o o o o o    o o o o o o o o o    o o o o o o o o o
o o o o o o o o o    o o o o o o o o o    o o o o o o o o o    o X X X X X X X o
o o o o o o o o o    o o o o o o o o o    o o X X X X X o o    o X X X X X X O o
o o o o o o o o o    o o o X X X o o o    o o X X X X O o o    o X X X X X O X o
o o o o X o o o o    o o o X X X o o o    o o X X O X X o o    o X O X O X X X o
o o o o o o o o o    o o o o o o o o o    o o X o o o X o o    o X X o o o X O o
o o o o o o o o o    o o o o o o o o o    o o o o o o o o o    o X X X o O X O o
o o o o o o o o o    o o o o o o o o o    o o o o o o o o o    o o o o o o o o o
o o o o o o o o o    o o o o o o o o o    o o o o o o o o o    o o o o o o o o o
We prepared a small example with 9 x 9 bins to show how the method works and how the fire is dispersed. The first table shows the initial area with the burning coefficients in every bin; the coefficient 0 denotes material that cannot burn (or is already totally burned). In Table 1 we observe the decreasing of the burning coefficients. In Table 2 we show the fire dispersion and how the fire surrounds the area that cannot burn. The bins marked with o are without fire, the bins marked with X are burning, and the bins marked with O are totally burned.
4
Conclusion
In this paper we describe a method for modelling wildland fire based on the game-method approach. We focus on a simplified example in which the area is flat and there is no wind. Our model can trace the fire and predict its development, which is very important for bringing it under control and preventing it from causing large damage. In future work we will extend the model to include areas which are not flat and meteorological data such as wind and humidity. Thus our model will become more flexible and applicable in different situations. We will use real data from a past wildland fire and compare how well we model the development of the process.
References
1. Albini, F.A.: Estimating wildfire behavior and effects. USDA For. Serv. Gen. Tech. Rep. INT-30, 92 p. Intermt. For. and Range Exp. Stn., Ogden, Utah (1976)
2. Atanassov, K., Atanassova, L., Sasselov, D.: On the combinatorial game-method for modelling in astronomy (1994)
3. Kuratovski, K.: Topology. Academic Press, New York (1966)
4. Pyne, S.J., Andrews, P.L., Laven, R.D.: Introduction to Wildland Fire, p. 3. Wiley, New York (1996)
5. Rothermel, R.C.: A mathematical model for predicting fire spread in wildland fuels. Research Paper INT-115. Ogden, UT: US Department of Agriculture, Forest Service, Intermountain Forest and Range Experiment Station, pp. 1–40 (1972)
Joint Analysis of Regional Scale Transport and Transformation of Air Pollution from Road and Ship Transport

K. Ganev1, D. Syrakov2, G. Gadzhev1, M. Prodanova2, G. Jordanov1, N. Miloshev1, and A. Todorova1

1 Geophysical Institute, Bulgarian Academy of Sciences, Sofia, Bulgaria
2 National Institute of Meteorology and Hydrology, Bulgarian Academy of Sciences, Sofia, Bulgaria
Abstract. The objective of the present work is to study in detail the dilution processes of the plumes and the chemical transformations of pollutants jointly generated by road and ship transport. The numerical simulations are carried out using the MODELS-3 system (SMOKE/ MM5/CMAQ) nesting abilities, downscaling to a horizontal resolution of 30 km, the innermost domain including regions with very intensive road and ship transport and different climate (photochemistry reactions intensity). The CMAQ “Process rate analysis” option is applied to discriminate the role of different dynamic and chemical processes to the pollution levels formation. A large number of numerical experiments were carried out, which makes it possible to distinguish the relative contribution of different air pollution factors. Careful and detailed analysis of the obtained results was made, outlining the influence of the domain specific physiographic characteristics, road and ship emission impacts on the pollution characteristics.
1
Introduction
The objective of the present work is to study the regional scale dilution and chemical transformation processes of pollutants generated by road and ship transport. More precisely, the study aims at clarifying the interaction between the pollution from road and ship emissions, their mutual impacts, and their contributions to the overall pollution. It is expected that the results of the current work will give some clues for the specification of the "effective emission indices" linking emission inventories to the emissions to be used as input in large-scale models.
2
Modelling Tools
The US EPA Model-3 system was chosen as a modelling tool because it appears to be one of the most widely used models with proved simulation abilities. The
system consists of three components: MM5 – the fifth generation PSU/NCAR Meso-meteorological Model MM5 ([6,7]) used as meteorological pre-processor; CMAQ – the Community Multiscale Air Quality System ([5,2]); SMOKE – the Sparse Matrix Operator Kernel Emissions Modelling System ([4,8,3]).
3
Methodology
US NCEP Global Analyses data were used for the meteorological background input. The data have a 1 x 1 degree grid resolution covering the entire globe, and the time resolution is 6 hours. The Models-3 system nesting abilities were applied for downscaling the problem to a resolution of 30 km for the domain discussed further. Two sets of emission data are used in the present study:
– the data set created by Visschedijk et al. ([9]) was used for all the emissions except those from ship transport;
– the ship transport emissions were taken from the inventory created by Wang et al. ([10]).
In order to prepare the CMAQ emission input file, these inventory files were handled by a specially prepared computer code. Two main procedures were performed. The temporal allocation is made on the basis of daily, weekly, and monthly profiles, provided by Builtjes in [1]. The temporal profiles are country-, pollutant- and SNAP (Selected Nomenclature for Air Pollution)-specific. A specific approach for obtaining speciation profiles is used here. The US EPA data base is intensively exploited. Typical sources for every SNAP were linked to similar source types from the US EPA nomenclature. The weighted averages of the respective speciation profiles are accepted as SNAP-specific splitting factors. In such a way VOC and PM2.5 speciation profiles are derived. The biogenic emissions of VOC were estimated by the model SMOKE. The study was based on joint analysis of the results from the following emission scenarios:
1. simulations with all the emissions in the simulation domain, the corresponding arbitrary (concentration, deposition, columnar value, process contribution, etc.) characteristic Phi denoted by Phi_all;
2. simulations with the emissions from road transport reduced by a factor of alpha, the corresponding arbitrary characteristic denoted by Phi_reduced road;
3. simulations with the emissions from ship transport reduced by a factor of alpha, the corresponding arbitrary characteristic denoted by Phi_reduced ship;
4. simulations with the emissions from both road and ship transport reduced by a factor of alpha, the corresponding arbitrary characteristic denoted by Phi_reduced road and ship.
The most natural properties which can be constructed from these scenarios are the relative (in %) contributions of road/ship emissions to the formation of the characteristic Phi:
Phi_road = (1/(1 - alpha)) (Phi_all - Phi_reduced road)/Phi_all . 100,
Phi_ship = (1/(1 - alpha)) (Phi_all - Phi_reduced ship)/Phi_all . 100.
Some more "sophisticated" properties, like the impact of road transport on the pollution from ship emissions (the ratio Phi_ship|road of the pollution from ship emissions when road emissions are also excluded to the pollution from ship emissions with road emissions present), or vice versa (Phi_road|ship), can also be defined:

Phi_ship|road = (Phi_reduced road - Phi_reduced road and ship)/(Phi_all - Phi_reduced ship) . 100 and
Phi_road|ship = (Phi_reduced ship - Phi_reduced road and ship)/(Phi_all - Phi_reduced road) . 100.
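Given gridded fields of a characteristic Phi from the four scenarios, the quantities defined above can be evaluated cell by cell. The sketch below only restates this scenario algebra; the array names and the NumPy form are our assumptions, not the authors' post-processing code.

import numpy as np

def contributions(phi_all, phi_red_road, phi_red_ship, phi_red_both, alpha):
    """Relative contributions (in %) of road and ship emissions, and the mutual-impact ratios."""
    phi_road = 100.0 / (1.0 - alpha) * (phi_all - phi_red_road) / phi_all
    phi_ship = 100.0 / (1.0 - alpha) * (phi_all - phi_red_ship) / phi_all
    phi_ship_given_road = 100.0 * (phi_red_road - phi_red_both) / (phi_all - phi_red_ship)
    phi_road_given_ship = 100.0 * (phi_red_ship - phi_red_both) / (phi_all - phi_red_road)
    return phi_road, phi_ship, phi_ship_given_road, phi_road_given_ship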
MM5 and CMAQ simulations were carried out for the periods January 2001– 2005 and July 2001–2005. Averaging the fields over the respective month produces a diurnal behavior of given pollution characteristic, which can be interpreted as “typical” for the month (respectively season). The characteristic, which will be mostly demonstrated and discussed as an example further in this paper is the surface concentration c. Moreover, what will be shown and discussed concerns not only separate pollutants, but also some aggregates like nitrogen compounds (GNOY = NO + NO2 + NO3 + 2*N2O5 + HONO + HNO3 + PNA), organic nitrates (ORG N = PAN + NTR), hydrocarbons (HYDC = PAR + ETH + OLE + TOL + XYL + ISO), CAR PHE = FORM + ALD2 + MGLY + CRES, aerosol NH4, SO4 and H2O, PM2.5 and PMcoarse = PM10 – PM2.5. The Models-3 “Integrated Process Rate Analysis” option is applied to discriminate the role of different dynamic and chemical processes for the pollution from road and ship transport. The processes that are considered are: advection, diffusion, mass adjustment, emissions, dry deposition, chemistry, aerosol processes and cloud processes/aqueous chemistry.
4
Results and Discussions
The respective "typical" concentrations can also be averaged over the day, and (due to volume limitations) most of the illustrations demonstrated here, like Fig. 1, are of this type. The impact of road/ship emissions is rather complex and would take a lot of pages to describe. One cannot help but notice, however, how well displayed the configurations of the most intensive ship/road transport are. Due to the non-linearity of the processes, their impact can be negative as well. Some big cities and the main ship routes are particularly well displayed as sinks in the July ozone plots. The same plots for a "typical" January are demonstrated in Fig. 2. The difference, both qualitative and quantitative, between the "summer" and "winter" fields is quite evident for all
Fig. 1. Averaged for a "typical" day in July fields of croad [%] (a) and cship [%] (b) for ozone, PM2.5, nitrogen compounds, hydrocarbons, CAR PHE, PMcoarse, organic nitrates, and SO2
the species, though the configurations of the most intensive ship/road transport are again very well displayed. The relative contribution of ship/road emissions does have a diurnal course, which is well displayed in Fig. 3. Again the temporal behavior is complex and cannot be described by some very general statements. The most intensive ship/road sources are well visible as sinks, and the sink locations do not change dramatically during the day. The shape and location of the areas with a positive impact on the ozone levels generated by ship/road transport emissions show more prominent temporal variations. It should be noted that, while the detailed spatial distribution shows that the contribution of road/ship transport emissions to the surface concentrations of some compounds can change sign from point to point, when averaged over large enough domains the respective contributions vary with time but their sign, though different for different compounds and aggregated families, does not change during the day.
Fig. 2. Averaged for a "typical" day in January fields of croad [%] (a) and cship [%] (b) for ozone, PM2.5, nitrogen compounds, hydrocarbons, CAR PHE, PMcoarse, organic nitrates, and SO2
The more complex and sophisticated "road-to-ship" and "ship-to-road" contributions are illustrated in Fig. 4. The plots are hard to describe in detail, but they are a very good demonstration of how complex the interactions of the pollutants are and how surprising the impact of one type of transport on the air pollution from the other can be. The absolute value of these contributions at some points can be very large, but generally for most of the compounds the contributions vary within relatively narrow margins around 100%. The corresponding plots for January are of course different, but the most general features mentioned above are valid for them as well. To better understand the interaction of pollutants from road and ship transport, one should closely examine the contributions of different processes to the total pollution, in particular the "road-to-ship" and "ship-to-road" contributions to the respective processes. That is why the CMAQ "Integrated Process Rate Analysis" option was applied and the "portions" of the major processes which
Fig. 3. Diurnal course of croad [%] (a) and cship [%] (b) for ozone for a typical July day
Fig. 4. Fields of cship|road [%] (a) and of croad|ship [%] (b) for ozone, PM2.5, nitrogen compounds and hydrocarbons, July
form the hourly concentration changes were calculated. The process "portions" dc(proc) can be treated like the concentrations, and the respective "road", "ship", "road-to-ship", and "ship-to-road" contributions can be estimated. Plots of the "road-to-ship" dc(chem)ship|road and "ship-to-road" dc(chem)road|ship contributions to the chemical-transformation "portion" of the hourly change of the surface ozone concentration are shown in Fig. 5. The very complex and "mosaic" texture of the plots can hardly be discussed in detail. This is simply an illustration of how discouragingly complex the mutual impacts of pollution from different source types are and
Fig. 5. Diurnal course of dc(chem)ship|road [%] (a) and of dc(chem)road|ship [%] (b) for ozone, July
how difficult it could be to answer a question like "What is the impact of the pollution from road transport on the chemical transformation of the ozone from ship transport?", or vice versa.
5
Conclusion
The numerical experiments performed produced a huge volume of information, which has to be carefully analyzed and generalized before final conclusions can be made. The conclusion that can be made at this stage of the studies is that the transport type/process interactions are indeed very complex. The results produced by the CMAQ "Integrated Process Rate Analysis" demonstrate the very complex behavior and interaction of the different processes: the process contributions change very quickly with time, and these changes hardly correlate at all between the different points in the plane. The analysis of the behavior of the different processes does not give a simple answer to the question of what the impact of pollution from a given source type on the processes that form the pollution from another source type could be.
Acknowledgments The present work is supported by EC through 6FP NoE ACCENT (GOCE-CT2002-500337), the SEE-GRID-SCI project, contract No. FP7 - RI-211338, COST Action 728, as well as by the Bulgarian National Science Fund (grants No. DO02-161/16.12.2008 and DO02-115/2008). The contacts within the framework of the NATO SfP Grant ESP.EAP.SFPP 981393 were extremely stimulating as well.
References 1. Builtjes, P.J.H., van Loon, M., Schaap, M., Teeuwisse, S., Visschedijk, A.J.H., Bloos, J.P.: Project on the modelling and verification of ozone reduction strategies: contribution of TNO-MEP, TNO-report, MEP-R2003/166, Apeldoorn, The Netherlands (2003) 2. Byun, D., Schere, K.L.: Review of the Governing Equations, Computational Algorithms, and Other Components of the Models-3 Community Multiscale Air Quality (CMAQ) Modeling System. Applied Mechanics Reviews 59(2), 51–77 (2006) 3. CEP: Sparse Matrix Operator Kernel Emission (SMOKE) Modeling System, University of Carolina, Carolina Environmental Programs, Research Triangle Park, North Carolina (2003) 4. Coats Jr., C.J., Houyoux, M.R.: Fast Emissions Modeling with the Sparse Matrix Operator Kernel Emissions Modeling System, TheEmissions Inventory: Key to Planning, Permits, Compliance, and Reporting, Air and Waste Management Association, New Orleans (September 2006) 5. Dennis, R.L., Byun, D.W., Novak, J.H., Galluppi, K.J., Coats, C.J., Vouk, M.A.: The Next Generation of Integrated Air Quality Modeling: EPA’s Models-3. Atmosph. Environment 30, 1925–1938 6. Dudhia, J.: A non-hydrostatic version of the Penn State/NCAR Mesoscale Model: validation. Mon. Wea. Rev. 121, 1493–1513 (1993) 7. Grell, G.A., Dudhia, J., Stauffer, D.R.: A description of the Fifth Generation Penn State/NCAR Mesoscale Model (MM5). NCAR Technical Note, NCAR TN-398STR, 138 p. (1994) 8. Houyoux, M.R., Vukovich, J.M.: Updates to the Sparse Matrix Operator Kernel Emission (SMOKE) Modeling System and Integration with Models-3, The Emission Inventory: Regional Strategies for the Future, Raleigh, NC, Air and Waste Management Association (1999) 9. Visschedijk, A.J.H., Zandveld, P.Y.J., Denier van der Gon, H.A.C.: A High Resolution Gridded European Emission Database for the EU Integrate Project GEMS, TNO-report 2007-A-R0233/B, Apeldoorn, The Netherlands (2007) 10. Wang, C., Corbett, J.J., Firestone, J.: Modeling Energy Use and Emissions from North American Shipping: Application of the Ship Traffic, Energy, and Environment Model. Environ. Sci. Technol. 41(9), 3226–3232 (2007)
Runs of UNI–DEM Model on IBM Blue Gene/P Computer and Analysis of the Model Performance

Krassimir Georgiev1 and Zahari Zlatev2

1 Institute for Parallel Processing, Bulgarian Academy of Sciences, Acad. G. Bonchev, bl. 25A, 1113 Sofia, Bulgaria
[email protected]
2 National Environmental Research Institute, Aarhus University, Frederiksborgvej 399, P.O. Box 358, DK-4000 Roskilde, Denmark
[email protected]
Abstract. UNI–DEM is an Eulerian model for studying the long-range transport of air pollutants. The computational domain of the model covers Europe and some neighbouring parts of the Atlantic Ocean, Asia and Africa. The model is mainly developed at the National Environmental Research Institute of Denmark, located at Roskilde. If the UNI–DEM model is to be applied on a large space domain using fine grids, then its discretization leads to a huge computational problem. If the space domain is discretized using a (480 x 480) grid and the number of chemical species studied by the model is 35, then several systems of ordinary differential equations containing 8 064 000 equations have to be treated at every time-step (the number of time-steps being typically several thousand). If a three-dimensional version of the same air pollution model is to be used, then the figure above must be multiplied by the number of layers. This implies that a model such as UNI–DEM must be run only on high-performance computer architectures, like the IBM Blue Gene/P. The implementation of such a complex large-scale model on each new computer is a non-trivial task. An analysis of the runs of UNI–DEM performed until now on the IBM Blue Gene/P computer is presented, and some preliminary results on performance, speed-ups and efficiency are discussed.
1
Introduction
The environmental models are normally described by systems of partial differential equations (PDEs). Often, the number of equations is equal to the number of chemical species studied by the model, and the unknown functions are the concentrations of these species. There are five basic stages in the development of a large environmental model:
• one has to choose carefully the physical and chemical processes that are to be taken into account during the development of the model,
• the selected processes must be described mathematically,
• the resulting system of PDEs must be treated by numerical methods,
• the reliability of the obtained results must be evaluated, and
• conclusions should be drawn.
It is important to take into account all relevant physical and chemical processes during the development of the models. If more physical and chemical processes are included in the model, then one should expect that more accurate and more reliable results can be calculated. On the other hand, there are two difficulties related to the attempt to include as many physical and chemical processes as possible in the model:
• The complexity of the model is increased when more processes are included in it. The treatment of the model on the available computers might become very difficult, and even impossible, when too many processes are taken into account.
• Some of the physical and chemical processes are still not well understood, which means that such processes must either be neglected or some simplified mechanisms, based on uncertain assumptions and/or on experimental data, must be developed.
It is necessary to find a reasonable compromise related to the number of processes that are to be taken into account when a large environmental model is developed. This explains also why it is necessary to validate the model results. The selected physical and chemical processes have to be described by mathematical terms. There exist some more or less standard rules which can be used for the mathematical description of different processes. Several examples, related to air pollution models, are listed below:
• The transport caused by the wind (which is normally called advection) is described by using terms containing first-order spatial derivatives of the unknown functions (the concentrations of the studied pollutants) multiplied by the wind velocities along the coordinate axes.
• The diffusion of the concentrations is expressed by second-order spatial derivatives multiplied by the diffusivity coefficients.
• The chemical reactions are represented by non-linear mathematical terms.
• The change of the concentrations in time is given by first-order derivatives of the concentrations of the pollutants with respect to time.
When all selected physical and chemical processes are expressed by some mathematical terms, then these terms have to be combined in a system of PDEs. The rest of the paper is organized as follows. The mathematical background of UNI-DEM is discussed in Section 2. The importance of the efficient organization of the computational process is outlined in Section 3. Section 4 is devoted to the parallelization strategy and the supercomputer used, the analysis of the performed runs of UNI–DEM on the IBM Blue Gene/P computer and some preliminary results on the performance, speed-ups and efficiency achieved. Finally, short concluding remarks can be found in Section 5.
2
Mathematical Description of the Danish Eulerian Model
The Danish Eulerian Model (DEM) is represented mathematically by the following system of partial differential equations (PDEs):

dcs/dt = - d(u cs)/dx - d(v cs)/dy - d(w cs)/dz
         + d/dx (Kx dcs/dx) + d/dy (Ky dcs/dy) + d/dz (Kz dcs/dz)
         + Es - (kappa1s + kappa2s) cs + Qs(c1, c2, . . . , cq),   s = 1, . . . , q,   (1)
where cs = cs (t, x, y, z) is the concentration of the chemical specie s at point (x, y, z) of the space domain at time t of the time-interval, u = u(t, x, y, z), v = v(t, x, y, z), and w = w(t, x, y, z) are the wind velocities along Ox, Oy, and Oz directions respectively at point (x, y, z) and time t; Kx = Kx (t, x, y, z), Ky = Ky (t, x, y, z), and Kz = Kz (t, x, y, z) are diffusion coefficients; Es are the emissions, κ1s = κ1s (t, x, y, z) and κ2s = κ2s (t, x, y, z) are the coefficients for dry and wet deposition of the chemical specie s at point (x, y, z) and at time t of the time-interval (for some species these coefficients are non-negative constants, the wet deposition coefficients κ2s are equal to zero when it is not raining), and finally, Qs (t, x, y, z, c1 , c2 , . . . , cq ) are expressions that describe the chemical reactions under consideration. Normally, it is not possible to solve exactly the systems of PDEs by which the large environmental models are described mathematically. Therefore, the continuous systems of type (1) are to be discretized. Assume that the space domain, on which (1) is defined, is a parallelepiped (this is as a rule the case when environmental models are to be handled) and that x ∈ [a1 , b1 ], y ∈ [a2 , b2 ], z ∈ [a3 , b3 ] and t ∈ [a, b]. Consider the grid-points (tn , xj , yk , zm ), where xj = a1 + jΔx, j = 0, 1, 2, . . . , Nx , yk = a2 + kΔy, k = 0, 1, 2, . . . , Ny , zm = a3 + mΔz, m = 0, 1, 2, . . . , Nz , and tn = a + nδt, n = 0, 1, 2, . . . , Nt . Assume also that the initial values cs (a, x, y, z) are given on all points of the spatial grid defined above. Then the task of finding the exact solution of the unknown functions cs at all points (t, x, y, z) of the domain (infinite number of points) is reduced to the task of finding approximations of the values of the functions ci at the points (tn , xj , yk , zm ); the number of these points can be very large (up to many millions), but it is finite. This means that the original task is relaxed in two ways: the number of points at which the problem is treated is reduced to the number of grid-points and it is required to find approximate solutions instead of the exact solution. In the example given above equidistant grids are introduced (i.e. Δx, Δy, Δz and Δt are constants). Non-equidistant grids can also be used. The vertical grids are normally not equidistant. It is assumed here that Cartesian coordinates have been chosen. Other coordinates, for example spherical coordinates, can also be used.
The above two remarks illustrate the fact that the discretization can be performed in different ways. The important thing is that the main idea remains the same: one considers approximate values of the unknown functions at a finite number of grid-points, defined by the discretization chosen, instead of the exact solution of (1) on the whole continuous space domain. Numerical methods must be used to find approximate values of the solution at the grid-points. It is also appropriate to split the model, the system of PDEs of type (1), into several sub-models (sub-systems), which are in some sense simpler. There is another advantage when some splitting procedure is applied: the different sub-systems have different mathematical properties and one can try to select the best numerical method for each of the sub-systems. It is clear that if some splitting procedure and appropriate numerical methods are already chosen, then any continuous system of type (1), which represents an environmental model, is replaced by several discrete sub-models which have to be treated on the available computers. More details about the mathematical description of the physical and chemical processes involved in UNI-DEM can be found in [19] or [20].
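As a one-dimensional illustration of the discretization and splitting ideas sketched above, the snippet below performs one split time step: an explicit upwind substep for the advection term followed by a central-difference substep for the diffusion term. It is only an illustration and is not the numerical scheme actually used in UNI-DEM.

import numpy as np

def advect_diffuse_step(c, u, K, dx, dt):
    """One split step for dc/dt = -u dc/dx + K d2c/dx2 with u > 0 and periodic boundaries."""
    c_adv = c - u * dt / dx * (c - np.roll(c, 1))                # upwind advection substep
    return c_adv + K * dt / dx**2 * (np.roll(c_adv, 1) - 2.0 * c_adv + np.roll(c_adv, -1))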
3
Need for Efficient Implementation of the Computations
The discretization of the system of PDEs by which the environmental models are described mathematically leads to huge computational tasks. The following example illustrates clearly the size of these tasks. Assume that
• Nx = Ny = 480 (when a 4800 km x 4800 km domain covering Europe is considered, this choice of the discretization parameters leads to horizontal 10 km x 10 km cells),
• Nz = 10 (i.e. ten layers in the vertical direction are introduced), and
• Ns = q = 56 (the chemical scheme contains 56 species).
Then the number of equations that are to be handled at each time-step is (Nx + 1)(Ny + 1)(Nz + 1)Ns = 142 518 376. A run over a time-period of one year with a time step-size dt = 2.5 minutes will result in Nt = 213 120 time-steps. When studies related to climatic changes are to be carried out, it is necessary to run the models over a time-period of many years. When the sensitivity of the model to the variation of some parameters is studied, many scenarios (up to several hundreds) are to be run. This short analysis demonstrates the fact that the computational tasks arising when environmental studies are to be carried out by using large-scale models are enormous. Therefore, it is necessary:
• to select fast but sufficiently accurate numerical methods and/or splitting procedures,
• to exploit efficiently the cache memories of the available computer,
• to parallelize the code
in the efforts to make a large environmental model tractable on the available computers. It should be mentioned that it may be impossible to handle some
very large environmental models on the computers available at present even when the above three conditions are satisfied. The treatment of UNI–DEM on different supercomputers is discussed in [2,5,6,16,17,18,19,20]. This model has been used to study pollution levels in (a) Bulgaria ([24,25]), (b) Denmark ([19,22,23]), (c) England ([1]), (d) Europe ([3,4,19,22,20]), (e) Hungary ([9,10]) and (f) the North Sea ([8]).
4 Parallel Implementation of DEM

4.1 IBM Blue Gene/P Computer
The supercomputer which is used in this study is the first Bulgarian supercomputer, an IBM Blue Gene/P (see: http://www.bgsc.acad.bg/). As reported, this machine consists of two racks, 2048 Power PC 450 based compute nodes, 8192 processor cores and a total of 4 TB random access memory. Each processor core has a double-precision, dual-pipe floating-point core accelerator. Sixteen I/O nodes are connected via fibre optics to a 10 Gb/s Ethernet switch. The smallest partition size currently available is 128 compute nodes (512 processor cores). The theoretical peak performance of the computer is 27.85 Tflops, while the maximum LINPACK performance achieved is 23.42 Tflops (about 84%).

4.2 Parallelization Strategy
The MPI (Message Passing Interface, [7]) standard library routines are used to parallelize this model. One of the most important advantages of MPI is that it can be used on a much wider class of parallel systems, including shared-memory computers and clustered systems (each node of the cluster being a separate shared-memory machine). Thus it provides a high level of portability of the code. Our MPI parallelization is based on the space domain partitioning. The space domain is divided into several sub-domains (the number of the sub-domains being equal to the number of MPI tasks). Each MPI task works on its own sub-domain. On each time step there is no data dependency between the MPI tasks on both the chemistry and the vertical exchange stages. This is not so with the advection-diffusion stage. Spatial grid partitioning between the MPI tasks requires overlapping of the inner boundaries and exchange of certain boundary values on the neighboring subgrids for proper treatment of the boundary conditions. This leads to two main consequences: (i) certain computational overhead and load imbalance, leading to lower speedup of the advection-diffusion stage in comparison with the chemistry and the vertical transport; (ii) communication necessity for exchanging boundary values on each time step (done in a separate communication stage).
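The space-domain partitioning with overlapping inner boundaries can be sketched with MPI as follows: a one-dimensional strip decomposition with one halo row on each side, refreshed on every time step. The snippet uses mpi4py for brevity; UNI-DEM's actual code is not reproduced here, so all names, sizes and the layout are only an illustration of the strategy.

from mpi4py import MPI
import numpy as np

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()

ny_local, nx = 10, 480                     # owned rows per task x full row length (illustrative sizes)
c = np.zeros((ny_local + 2, nx))           # rows 1..ny_local are owned; rows 0 and -1 are halo copies

up = rank - 1 if rank > 0 else MPI.PROC_NULL
down = rank + 1 if rank < size - 1 else MPI.PROC_NULL

def exchange_halos(c):
    # first owned row goes to the neighbour above; its counterpart arrives into the lower halo
    comm.Sendrecv(c[1].copy(), dest=up, sendtag=0, recvbuf=c[-1], source=down, recvtag=0)
    # last owned row goes to the neighbour below; its counterpart arrives into the upper halo
    comm.Sendrecv(c[-2].copy(), dest=down, sendtag=1, recvbuf=c[0], source=up, recvtag=1)

exchange_halos(c)   # called once per time step, before the advection-diffusion stage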
Table 1. CHUNKSIZE runs on IBM Blue Gene/P computer

Size of chunks            1      2      4      8     16     32     64
Case 1 (CPU in hours)    9.8   10.3    9.4    8.7    8.4    8.2    8.4
Case 2 (CPU in hours)   43.2   37.7   33.4   30.4   30.1   31.1   37.0
Table 2. Characteristics of some parallel runs of UNI–DEM on IBM Blue Gene/P computer

Computational   Measured            Number of processors used
process         quantities         15       30       60      120
Advection       CPU time        16.67     8.55     4.50     2.36
                Speed-up            -      1.95     3.70     7.06
                Percent          35.6     34.6     31.6     26.3
Chemistry       CPU time        24.77    12.51     6.20     3.10
                Speed-up            -      1.98     4.00     7.99
                Percent          53.0     50.1     43.6     34.2
Overhead        CPU time         5.28     3.54     3.52     3.53
                Speed-up            -      1.49     1.50     1.50
                Percent          11.4     15.3     24.8     39.2
Total           CPU time        46.72    24.60    15.21     9.00
                Speed-up            -      1.90     3.07     5.19
                Percent         100.0    100.0    100.0    100.0
The subdomains are usually too large to fit into the fast cache memory of the target processor. In order to achieve good data locality, the smaller lowlevel tasks are grouped in chunks. A parameter CHUNKSIZE is provided in the chemical–emission part of the DEM code, which should be tuned with respect to the cache size of the target machine (see Table 1). Remark: In Table 1 the following model parameters are used: (i) in both cases 480 × 480 grid (10km × 10km cells) and number of processors = 120; (ii) Case 1: advection time step: 150 s., chemical time step: 150 s.; Case 2: advection time step: 90 s.; chemical time step: 9 s. The time for pre-processing and post-processing is, in fact, overhead, introduced by the MPI partitioning strategy. Moreover, this overhead is growing up with increasing the number of MPI tasks and little can be done for its parallel processing. Thus the relative weight of these two stages grows up with increasing the number of MPI tasks, which eventually affects the total speed-up and efficiency of the MPI code (see results reported in Table 2).
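The effect of CHUNKSIZE can be illustrated as follows: the grid points owned by a task are processed in small blocks so that the working set of the chemistry stage stays in the fast cache memory. The kernel below is a placeholder, not the model's actual chemistry, and all names are ours.

import numpy as np

CHUNKSIZE = 48                                   # the chunk length used in the runs of Section 4.3

def chemistry_stage(conc, dt):
    """conc: (n_points, n_species) concentrations on the local subdomain, updated in place."""
    for start in range(0, conc.shape[0], CHUNKSIZE):
        chunk = conc[start:start + CHUNKSIZE]    # a block small enough to stay in cache
        chunk += dt * (-0.1 * chunk)             # placeholder for the chemical integration
    return conc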
4.3 Numerical Results
Results of some preliminary experiments for a one-year period on the IBM Blue Gene/P computer described above are shown in Table 2. Let us mention the following details of the performed runs:
1. The computing times are measured in hours.
2. Discretization parameters are as follows: (a) advection step: 150 s; (b) chemical step: 150 s; 480 x 480 grid in the spatial domain (10 km x 10 km cells).
3. The percentages are related to the total times for the processors used.
4. The speed-up factors are related to the computing times obtained when 15 processors are used.
5. All runs were performed by using chunks of length 48.
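From the total CPU times in Table 2 one can recover the reported speed-ups and also derive the corresponding parallel efficiencies relative to the 15-processor run:

procs = [15, 30, 60, 120]
total_cpu = [46.72, 24.60, 15.21, 9.00]          # total times in hours, from Table 2

for p, t in zip(procs, total_cpu):
    speedup = total_cpu[0] / t                   # 1.90, 3.07 and 5.19 for 30, 60 and 120 processors
    efficiency = speedup / (p / procs[0])
    print(f"{p:4d} processors: speed-up {speedup:4.2f}, efficiency {100 * efficiency:5.1f}%")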
5
Conclusions
By using the IBM Blue Gene/P highly parallel computer to run the variable grid-size version of the Danish Eulerian Model, detailed air pollution results for a large region (the whole of Europe) and for a very long period (one or several years) can be obtained within a reasonable time. The use of many processors (several hundreds) makes it possible to use a finer time resolution in both the advection–diffusion and chemistry–deposition parts. Some preliminary runs with the combinations of time steps 90 s (advection/diffusion) – 9 s (chemistry/deposition) and 9 s (advection/diffusion) – 0.9 s (chemistry/deposition) proved to be very promising. The unified parallel code of DEM, UNI–DEM, created by using the MPI standard library, appears to be highly portable and shows good efficiency and scalability on the IBM Blue Gene/P machine.
Acknowledgments This research is supported in part by grant DO02-115/2008 from the Bulgarian NSF and the Bulgarian Supercomputer Center giving access to IBM Blue Gene/P computer.
References 1. Abdalmogith, S., Harrison, R.M., Zlatev, Z.: Intercomparison of inorganic aerosol concentrations in the UK with predictions of the Danish Eulerian Model. Journal of Atmospheric Chemistry 54, 43–66 (2006) 2. Alexandrov, V., Owczarz, W., Thomsen, P.G., Zlatev, Z.: Parallel runs of large air pollution models on a grid of SUN computers. Math. and Comp. in Simulation 65, 557–577 (2004)
3. Ambelas Skjøth, C., Bastrup-Birk, A., Brandt, J., Zlatev, Z.: Studying variations of pollution levels in a given region of Europe during a long time-period. Systems Analysis Modelling Simulation 37, 297–311 (2000) 4. Bastrup-Birk, A., Brandt, J., Uria, I., Zlatev, Z.: Studying cumulative ozone exposures in Europe during a 7-year period. Journal of Geophysical Research 102, 23917–23935 (1997) 5. Georgiev, K., Zlatev, Z.: Running an advection–chemistry code on message passing computers. In: Alexandrov, V.N., Dongarra, J. (eds.) PVM/MPI 1998. LNCS, vol. 1497, pp. 354–363. Springer, Heidelberg (1998) 6. Georgiev, K., Zlatev, Z.: Parallel sparse matrix algorithms for air pollution models. Parallel and Distributed Computing Practices 2, 429–442 (2000) 7. Gropp, W., Lusk, E., Skjellum, A.: Using MPI: Portable programming with the message passing interface. MIT Press, Cambridge (1994) 8. Harrison, R.M., Zlatev, Z., Ottley, C.J.: A comparison of the predictions of an Eulerian atmospheric transport chemistry model with experimental measurements over the North Sea. Atmospheric Environment 28, 497–516 (1994) ´ Boz´ 9. Havasi, A., o, L., Zlatev, Z.: Model simulation on transboundary contribution to the atmospheric sulfur concentration and deposition in Hungary. Id¨ oj´ ar´ as 105, 135–144 (2001) ´ Zlatev, Z.: Trends of Hungarian air pollution levels on a long time-scale. 10. Havasi, A., Atmospheric Environment 36, 4145–4156 (2002) 11. Hesstvedt, E., Hov, Ø., Isaksen, I.A.: Quasi-steady-state approximations in air pollution modeling: comparison of two numerical schemes for oxidant prediction. Int. Journal of Chemical Kinetics 10, 971–994 (1978) 12. Hov, Ø., Zlatev, Z., Berkowicz, R., Eliassen, A., Prahm, L.P.: Comparison of numerical techniques for use in air pollution models with non-linear chemical reactions. Atmospheric Environment 23, 967–983 (1988) 13. Marchuk, G.I.: Mathematical modeling for the problem of the environment. Studies in Mathematics and Applications, vol. 16. North-Holland, Amsterdam (1985) 14. McRae, G.J., Goodin, W.R., Seinfeld, J.H.: Numerical solution of the atmospheric diffusion equations for chemically reacting flows. J. Comp. Physics 45, 1–42 (1984) 15. Ostromsky, T., Owczarz, W., Zlatev, Z.: Computational Challenges in Large-scale Air Pollution Modelling. In: Proc. 2001 International Conference on Supercomputing in Sorrento, pp. 407–418. ACM Press, New York (2001) 16. Ostromsky, T., Zlatev, Z.: Parallel Implementation of a Large-scale 3-D Air Pollution Model. In: Margenov, S., Wa´sniewski, J., Yalamov, P. (eds.) LSSC 2001. LNCS, vol. 2179, pp. 309–316. Springer, Heidelberg (2001) 17. Ostromsky, T., Zlatev, Z.: Flexible Two-level Parallel Implementations of a Large Air Pollution Model. In: Dimov, I.T., Lirkov, I., Margenov, S., Zlatev, Z. (eds.) NMA 2002. LNCS, vol. 2542, pp. 545–554. Springer, Heidelberg (2003) 18. Owczarz, W., Zlatev, Z.: Running a large-scale air pollution model on fast supercomputer. In: Chok, D.P., Carmichael, G.R. (eds.) Atmospheric Modelling, pp. 185–204. Springer, Heidelberg (2000) 19. Zlatev, Z.: Computer treatment of large air pollution models. Kluwer, Dordrecht (1995) 20. Zlatev, Z., Dimov, I.: Computational and Environmental Challenges in Environmental Modelling. Elsevier, Amsterdam (2006) 21. Zlatev, Z., Dimov, I., Georgiev, K.: Three-dimensional version of the Danish Eulerian Model. Zeitschrift f¨ ur Angewandte Mathematik und Mechanik 76, 473–476 (1996)
196
K. Georgiev and Z. Zlatev
22. Zlatev, Z., Dimov, I., Ostromsky, T., Geernaert, G., Tzvetanov, I., Bastrup-Birk, A.: Calculating losses of crops in Denmark caused by high ozone levels. Environmental Modelling and Assessment 6, 35–55 (2001) 23. Zlatev, Z., Moseholm, L.: Impact of climate changes on pollution levels in Denmark. Environmental Modelling 217, 305–319 (2008) 24. Zlatev, Z., Syrakov, D.: A fine resolution modelling study of pollution levels in Bulgaria. Part 1: SOx and NOx pollution. International Journal of Environment and Pollution 22(1-2), 186–202 (2004) 25. Zlatev, Z., Syrakov, D.: A fine resolution modelling study of pollution levels in Bulgaria. Part 2: High ozone levels. International Journal of Environment and Pollution 22(1-2), 203–222 (2004)
Sensitivity Analysis of a Large-Scale Air Pollution Model: Numerical Aspects and a Highly Parallel Implementation

Tzvetan Ostromsky1, Ivan Dimov1, Rayna Georgieva1, and Zahari Zlatev2

1 Institute for Parallel Processing, Bulgarian Academy of Sciences, Acad. G. Bonchev, bl. 25A, 1113 Sofia, Bulgaria
[email protected], [email protected], [email protected]
http://www.bas.bg/clpp/
2 National Environmental Research Institute, Department of Atmospheric Environment, Frederiksborgvej 399, P.O. Box 358, DK-4000 Roskilde, Denmark
[email protected]
http://www.dmu.dk/AtmosphericEnvironment
Abstract. The Unified Danish Eulerian Model (UNI-DEM) is a powerful air pollution model, used to calculate the concentrations of various dangerous pollutants and other chemical species over a large geographical region (mostly Europe). It takes into account the main physical and chemical processes between these species, as well as the emissions, deposition, advection and diffusion, depending on the changing meteorological conditions. This large and complex task is split into submodels responsible for the main physical and chemical processes. The chemical submodel contains a number of parameters that govern the rates of the corresponding chemical reactions. By simultaneous variation of these parameters we produced a set of multidimensional discrete functions. These are necessary for variance-based sensitivity analysis using adaptive Monte Carlo approaches, which is the subject of another paper. These huge computational tasks require extensive storage and CPU time resources. A highly parallel implementation of UNI-DEM has been created for this purpose and run on the new IBM Blue Gene/P supercomputer, the most powerful parallel machine in Bulgaria to date. Some details of this implementation and a set of results obtained with it are presented in this paper.
1
Introduction
Huge computational tasks arise in the treatment of large-scale air pollution models, and great difficulties are encountered even when modern high-performance computers are used. Therefore, it is an advantage to be able to simplify the model as much as possible, while keeping control over the accuracy and reliability of the results it produces. A detailed sensitivity analysis can help us find out which simplifications can be made with minimum loss of accuracy. On the other hand, it is important to analyze the influence of variations of the initial conditions,
the boundary conditions, the chemical rates, etc., as there is always a certain level of uncertainty in their values. This knowledge can show us which parameters are most critical for the results and, therefore, how we can effectively improve the results by improving the accuracy of certain input data items.
2
Sensitivity Analysis Concept
Sensitivity analysis (SA) is the study of how much the uncertainty in the input data of a model (due to any reason: inaccurate measurements or calculations, approximation, data compression, etc.) is reflected in the accuracy of the output results [4]. Two kinds of sensitivity analysis are considered in the existing literature: local and global SA. Local SA studies how small variations of the inputs around a given value change the value of the output. Global SA takes into account the whole variation range of the input parameters and apportions the output uncertainty to the uncertainty in the input data. The subject of our study in this paper is global sensitivity analysis. Several sensitivity analysis techniques have been developed and used throughout the years [4]. In general, these methods rely heavily on special assumptions about the behaviour of the model (such as linearity, monotonicity and additivity of the relationship between the input and output parameters of the model). Among the quantitative methods, variance-based methods are the most often used. The main idea of these methods is to evaluate how the variance of an input or a group of inputs contributes to the variance of the model output.

2.1 Definition of the Total Sensitivity Indices (TSI)
Assume that a model is represented by the following model function: u = f(x), where the input parameters x = (x_1, x_2, ..., x_d) ∈ U^d ≡ [0, 1]^d are independent (non-correlated) random variables with a known joint probability distribution function (p.d.f.). In this way the output u also becomes a random variable (as it is a function of the random vector x) with its own p.d.f. and mathematical expectation (E). Let D[E(u|x_i)] be the variance of the conditional expectation of u with respect to x_i, and let D_u be the total variance of u. The ratio S_i = D[E(u|x_i)]/D_u is called the first-order sensitivity index by Sobol' [5], or sometimes the correlation ratio. The Total Sensitivity Index (TSI) [5] of an input parameter x_i, i ∈ {1, ..., d}, is the sum of the complete set of mutual sensitivity indices of any order (main effect, two-way interactions (second order), three-way interactions (third order), etc.):

S_{x_i}^{tot} = S_i + \sum_{l_1 \neq i} S_{i l_1} + \sum_{l_1, l_2 \neq i,\ l_1 < l_2} S_{i l_1 l_2} + \cdots + S_{i l_1 \ldots l_{d-1}} ,   (1)

where S_{i l_1 \ldots l_{j-1}} is the j-th order sensitivity index for the parameter x_i (1 ≤ j ≤ d) and S_i is the "main effect" of x_i. According to the values of their total sensitivity indices, the input parameters are classified in the following way: very important (0.8 < S_{x_i}^{tot}), important (0.5 < S_{x_i}^{tot} < 0.8), unimportant (0.3 < S_{x_i}^{tot} < 0.5), irrelevant (S_{x_i}^{tot} < 0.3).
In most practical problems the high-dimensional terms can be neglected, thus significantly reducing the number of summands in (1).
2.2 Sobol' Approach, Based on HDMR and ANOVA
The Sobol' method is one of the most often used variance-based methods. It is based on a unique decomposition of the model function into orthogonal terms (summands) of increasing dimension and zero means. Its main advantage is that it computes in a uniform way not only the first-order indices, but also the higher-order indices (in quite a similar way to the computation of the main effects). The total sensitivity index can then be calculated with just one Monte Carlo integral per factor. The Sobol' method for global SA, applied here, is based on the so-called High Dimensional Model Representation (HDMR) (2) of the (integrable) model function f in the d-dimensional factor space:

f(x) = f_0 + \sum_{s=1}^{d} \sum_{l_1 < \dots < l_s} f_{l_1 \dots l_s}(x_{l_1}, x_{l_2}, \dots, x_{l_s}) ,   (2)

where f_0 is a constant. The representation (2) is not unique. Sobol' has proven that under the conditions (3) on the right-hand-side functions,

\int_0^1 f_{l_1 \dots l_s}(x_{l_1}, x_{l_2}, \dots, x_{l_s})\, dx_{l_k} = 0 , \quad 1 \le k \le s, \quad s = 1, \dots, d ,   (3)

the decomposition (2) is unique and is called the ANOVA (ANalysis Of VAriances) HDMR of the model function f(x). Moreover, the functions on the right-hand side can be defined in a unique way by multidimensional integrals [6].
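As an aside, the following minimal sketch (not taken from the SA-DEM code) illustrates how first-order and total Sobol' indices can be estimated by plain Monte Carlo with the standard pick-and-freeze estimators; the toy_model function is an invented stand-in for one output of the real air pollution model, which is far too expensive to sample this way directly.

```python
# Variance-based Sobol' indices by plain Monte Carlo (illustrative sketch).
import numpy as np

def toy_model(x):
    # Hypothetical smooth response to three normalized factors in [0, 1]^3.
    return np.sin(x[:, 0]) + 2.0 * x[:, 1] ** 2 + 0.3 * x[:, 0] * x[:, 2]

def sobol_indices(model, d, n, rng=np.random.default_rng(0)):
    A = rng.random((n, d))              # first independent sample matrix
    B = rng.random((n, d))              # second independent sample matrix
    fA, fB = model(A), model(B)
    var = np.var(np.concatenate([fA, fB]))
    first, total = np.empty(d), np.empty(d)
    for i in range(d):
        ABi = A.copy()
        ABi[:, i] = B[:, i]             # "freeze" all factors except x_i
        fABi = model(ABi)
        first[i] = np.mean(fB * (fABi - fA)) / var        # S_i
        total[i] = 0.5 * np.mean((fA - fABi) ** 2) / var  # S_i^tot
    return first, total

S, S_tot = sobol_indices(toy_model, d=3, n=100_000)
print("first-order:", S.round(3), "total:", S_tot.round(3))
```

The total cost is n(d + 2) model evaluations, which is exactly why a cheap surrogate of the model response (the approximated mesh functions mentioned in Sect. 7) is so valuable in practice.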
3
The Danish Eulerian Model
In this section we briefly describe the Danish Eulerian Model (DEM) [8,9]. It is mathematically represented by the following system of partial differential equations, in which the unknown concentrations of a large number of chemical species (pollutants and other chemically active components) take part. The main physical and chemical processes (horizontal and vertical wind, diffusion, chemical reactions, emissions and deposition) are represented in the following system:

\frac{\partial c_s}{\partial t} = -\frac{\partial (u c_s)}{\partial x} - \frac{\partial (v c_s)}{\partial y} - \frac{\partial (w c_s)}{\partial z}
  + \frac{\partial}{\partial x}\Big(K_x \frac{\partial c_s}{\partial x}\Big) + \frac{\partial}{\partial y}\Big(K_y \frac{\partial c_s}{\partial y}\Big) + \frac{\partial}{\partial z}\Big(K_z \frac{\partial c_s}{\partial z}\Big)
  + E_s + Q_s(c_1, c_2, \ldots, c_q) - (k_{1s} + k_{2s}) c_s , \qquad s = 1, 2, \ldots, q ,   (4)
where
– c_s are the concentrations of the chemical species;
– u, v, w are the wind components along the coordinate axes;
– K_x, K_y, K_z are the diffusion coefficients;
– E_s are the emissions;
– k_{1s}, k_{2s} are the dry/wet deposition coefficients;
– Q_s(c_1, c_2, ..., c_q) are non-linear functions describing the chemical reactions between the species under consideration.
The above rather complex system is split into three subsystems/submodels, according to the major physical and chemical processes:

\frac{\partial c_s^{(1)}}{\partial t} = -\frac{\partial (u c_s^{(1)})}{\partial x} - \frac{\partial (v c_s^{(1)})}{\partial y} + \frac{\partial}{\partial x}\Big(K_x \frac{\partial c_s^{(1)}}{\partial x}\Big) + \frac{\partial}{\partial y}\Big(K_y \frac{\partial c_s^{(1)}}{\partial y}\Big) = A_1 c_s^{(1)}(t)   (horizontal advection & diffusion)

\frac{\partial c_s^{(2)}}{\partial t} = E_s + Q_s\big(c_1^{(2)}, c_2^{(2)}, \ldots, c_q^{(2)}\big) - (k_{1s} + k_{2s}) c_s^{(2)} = A_2 c_s^{(2)}(t)   (chemistry, emissions & deposition)

\frac{\partial c_s^{(3)}}{\partial t} = -\frac{\partial (w c_s^{(3)})}{\partial z} + \frac{\partial}{\partial z}\Big(K_z \frac{\partial c_s^{(3)}}{\partial z}\Big) = A_3 c_s^{(3)}(t)   (vertical transport)
The Danish Eulerian Model is designed to calculate the pollution over a large geographical region (4800 km × 4800 km), covering the whole of Europe. Spatial and time discretization makes each of the above submodels a huge computational task, and high-performance computing is vital for its real-time solution. That is why parallelization has always been a key point in the computer implementation of DEM since its very early stages. A coarse-grain parallelization strategy based on decomposition of the spatial domain appears to be the most efficient and well-balanced on the widest class of present-day parallel machines (with not too many processors), although some restrictions apply. Other parallelizations are also possible and suitable for certain classes of supercomputers [2,3]. For the purpose of this study, a distributed-memory parallelization of the model via MPI is used [1,2]. The parallelization is based on domain decomposition of the horizontal grid, which implies certain restrictions on the number of MPI tasks and requires communication on each time step. Improved data locality for more efficient cache utilization is achieved by using chunks to properly group the small tasks in the chemistry-deposition and vertical exchange stages. Additional pre-processing and post-processing stages are needed for scattering the input data and gathering the results, causing some overhead.
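The following minimal sketch illustrates the coarse-grain strategy just described under simplifying assumptions of our own (strip-wise decomposition of the horizontal grid, one species, one layer, invented grid sizes); it is not taken from the UNI-DEM/SA-DEM code. Each MPI task owns a block of rows and exchanges a one-row halo with its neighbours on every time step before the horizontal stage.

```python
# Run with e.g.:  mpiexec -n 4 python halo_sketch.py
import numpy as np
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank, nproc = comm.Get_rank(), comm.Get_size()

NX, NY = 96, 96                      # illustrative horizontal grid
assert NY % nproc == 0               # restriction on the number of MPI tasks
my_rows = NY // nproc                # rows of the strip owned by this task

c = np.zeros((my_rows + 2, NX))      # local strip plus two halo rows
c[1:-1, :] = rank + 1.0              # dummy initial concentrations

up   = rank - 1 if rank > 0 else MPI.PROC_NULL
down = rank + 1 if rank < nproc - 1 else MPI.PROC_NULL

for step in range(10):               # time loop (one species, one layer)
    # exchange halo rows with the neighbouring tasks
    comm.Sendrecv(np.ascontiguousarray(c[1, :]), dest=up,
                  recvbuf=c[-1, :], source=down)
    comm.Sendrecv(np.ascontiguousarray(c[-2, :]), dest=down,
                  recvbuf=c[0, :], source=up)
    # simple horizontal diffusion stencil as a stand-in for the real
    # advection-diffusion stage
    c[1:-1, 1:-1] += 0.1 * (c[:-2, 1:-1] + c[2:, 1:-1]
                            + c[1:-1, :-2] + c[1:-1, 2:]
                            - 4.0 * c[1:-1, 1:-1])
```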
Table 1. Choosable parameters for selecting an optional UNI-DEM version

Parameter   Description          Optional values
NX = NY     Grid size            96 × 96      288 × 288     480 × 480
            Grid step            50 km        16.7 km       10 km
NZ          # layers (2D/3D)     1                          10
NEQUAT      # chem. species      35           56            168
The following numerical methods are used in the solution of the submodels:
– Advection-diffusion part: finite elements, followed by predictor-corrector schemes with several different correctors.
– Chemistry-deposition part: an improved version of the QSSA (Quasi-Steady-State Approximation); a sketch of the basic QSSA idea is given after this list.
– Vertical transport: finite elements, followed by θ-methods.
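The sketch below is for illustration only: the production/loss terms and thresholds are invented, and the real UNI-DEM chemistry is an improved QSSA variant applied to the full CBM-IV mechanism. It shows the basic classical QSSA step, in which each species is advanced by the exact solution of dc/dt = P − Lc with the production P and loss rate L frozen over the step, with special handling of very fast and very slow species.

```python
import numpy as np

def qssa_step(c, P, L, dt):
    """One classical QSSA step for species with production P and loss rate L."""
    c_ss = np.divide(P, L, out=np.zeros_like(P), where=L > 0)  # steady state P/L
    fast = L * dt > 10.0          # species that effectively reach steady state
    slow = L * dt < 0.01          # nearly inert species: simple forward Euler
    mid = ~(fast | slow)
    c_new = np.empty_like(c)
    c_new[fast] = c_ss[fast]
    c_new[slow] = c[slow] + dt * (P[slow] - L[slow] * c[slow])
    c_new[mid] = c_ss[mid] + (c[mid] - c_ss[mid]) * np.exp(-L[mid] * dt)
    return c_new

# Toy example: three species with very different loss time scales.
c = np.array([1.0, 1.0, 1.0])
P = np.array([5.0, 0.5, 0.001])
L = np.array([100.0, 1.0, 1e-4])     # 1/s; values and thresholds illustrative
print(qssa_step(c, P, L, dt=0.5))
```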
4
DEM, UNI-DEM, and SA-DEM
The development and improvements of DEM throughout the years have led to a variety of versions with respect to the grid size/resolution, 2D/3D layering and the number of species in the chemical scheme. The most promising of them have been united under a common driver routine, called UNI-DEM. It provides uniform and easy user access to the available up-to-date versions of the model. These versions and their governing parameters are shown in Table 1. SA-DEM is a modification of UNI-DEM, specially adjusted to be used in sensitivity analysis studies. The main differences are the direct user access to the chemical rates (constants in the original model) and the ability to modify them, either separately or in groups, depending on the dimension of the particular sensitivity analysis study. As a large number of experiments have to be done in order to produce the necessary mesh functions, especially in the higher-dimensional studies, a driver routine that automatically changes certain modification coefficients and restarts the model has been developed; a sketch of such a driver is given below. Finally, an additional program for extracting the necessary mean monthly concentrations and computing the variance mesh functions has been developed. The last task is much simpler and not computationally intensive, so we leave it beyond the scope of our supercomputer performance analysis.
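A minimal driver of this kind might look as follows; the file names, the executable name and the configuration variable RC_FACTOR are hypothetical and only illustrate the restart-per-perturbation-factor workflow, the real SA-DEM driver being part of the Fortran code.

```python
import shutil
import subprocess

ALPHAS = [round(0.1 * k, 1) for k in range(1, 21)]    # 0.1, 0.2, ..., 2.0

for alpha in ALPHAS:
    with open("sa_dem.cfg", "w") as cfg:
        cfg.write(f"RC_FACTOR = {alpha}\n")           # scales e.g. reaction #22
    subprocess.run(["./sa_dem.exe"], check=True)      # one model run
    shutil.move("mean_monthly.out", f"mean_monthly_{alpha}.out")
```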
5
Performance Results
In this section we present some timing and speed-up results, showing the scalability of SA-DEM on two parallel supercomputers. (The ratio between the execution times of an algorithm in sequential mode (on one processor) and in parallel (on n processors) on the same machine is called the speed-up on n processors for the corresponding algorithm and machine, and is denoted by Sp(n) throughout this paper.) As in UNI-DEM, the basic
Table 2. Time and speed-up of SA-DEM on a SUN Sunfire E25000 at DTU, (96 × 96 × 1) grid, 35 species, CHUNKSIZE=48

# proc.   Advection           Chemistry           TOTAL                E [%]
          Time [s]   (Sp)     Time [s]   (Sp)     Time [s]   (Sp)
  1        1832       —        7548       —        9956       —         —
  2         826      (2.2)     3756      (2.0)     5167      (1.9)     96%
  4         482      (3.8)     1901      (4.0)     2945      (3.4)     85%
  8         243      (7.5)      939      (8.0)     1654      (6.0)     75%
 16         120     (15.3)      474     (15.9)     1072      (9.3)     58%
 32          66     (27.8)      244     (30.9)      824     (12.1)     38%
run for SA-DEM covers a one-year period. Although for the particular SA study we do not use the whole set of results, but usually take the mean monthly concentrations for a typical summer month (when the ozone concentrations reach their highest levels), the results of one-year runs are generally more accurate and reliable due to the minimal impact of the initial conditions. The results of such runs are presented in the tables of this section. The times and the speed-ups (Sp) of the main computational stages of the model, as well as of the run in total, are given in separate columns. The total time also includes the times for the pre-processing and post-processing stages and some data transfer procedures, which are either inherently sequential or cannot be efficiently parallelized. In addition, the total efficiency E (in percent) is given in the last column, where E = Sp(n)/n · 100% and n is the number of processors (given in the first column). The Sunfire E25000 system consists of 72 dual-core UltraSPARC IV processors (1350 MHz, 2-level cache). The machine (called 'newton') is part of the high-performance cluster of the Technical University of Denmark (DTU). It has been used for a long time for development and experiments with UNI-DEM, so the codes are well tuned on it. The results of one-year experiments with SA-DEM on the Sunfire E25k are presented in Table 2. The IBM Blue Gene/P is a state-of-the-art high-performance system with 8192 CPUs in total and a theoretical peak performance of more than 23 TFLOPS. It is built of 2048 quad-core PowerPC 450 nodes (4 CPUs, 850 MHz, 2 GB RAM each). There is 8 MB L3 cache per node and 32 KB (private) L1 cache per CPU. The system was installed in autumn 2008 in DAITS, Sofia. The results of one-year experiments with SA-DEM on the Blue Gene/P are presented in Table 3.
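For clarity, the (Sp) and E [%] columns of Tables 2 and 3 are obtained from the measured wall-clock times exactly as defined above, Sp(n) = T(1)/T(n) and E = Sp(n)/n · 100%; the short snippet below reproduces them for the TOTAL column of Table 2.

```python
# Speed-up and efficiency from measured wall-clock times (Table 2, TOTAL).
procs = [1, 2, 4, 8, 16, 32]
total = [9956, 5167, 2945, 1654, 1072, 824]      # seconds

for n, t in zip(procs, total):
    sp = total[0] / t                            # Sp(n) = T(1) / T(n)
    eff = 100.0 * sp / n                         # E = Sp(n) / n * 100%
    print(f"{n:3d} procs:  Sp = {sp:5.2f}   E = {eff:5.1f}%")
```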
6
Sensitivity Analysis Experiments
Chemistry is the most difficult and the most time-consuming part of the model, as one can see from the tables in the previous section. It is described by a complex non-linear system for calculating the unknown concentrations. However, other important parameters take part in the calculations, and among them are the chemical rate constants.
Table 3. Time and speed-up of SA-DEM on the IBM Blue Gene/P, (96 × 96 × 1) grid, 35 species, CHUNKSIZE=48

# proc.   Advection           Chemistry            TOTAL                E [%]
          Time [s]   (Sp)     Time [s]   (Sp)      Time [s]   (Sp)
  1        5456       —       32099       —        38848       —         —
  2        2748      (2.0)    16067      (2.0)     19863      (2.0)     98%
  4        1373      (4.0)     8081      (4.0)     10283      (3.8)     95%
  8         698      (7.8)     4054      (7.9)      5690      (6.8)     85%
 16         366     (14.9)     2013     (15.9)      3265     (11.9)     74%
 24         278     (19.6)     1355     (23.8)      2482     (15.7)     65%
 32         229     (23.8)     1008     (31.8)      2198     (17.6)     55%
 48         158     (34.5)      673     (47.7)      1866     (20.8)     43%
There are 69 time-dependent and 47 time-independent chemical reactions in the condensed Carbon Bond IV Mechanism (CBM-IV) used in DEM [9]. The intensity of each reaction is determined by the corresponding chemical rate constant. For the purpose of the sensitivity analysis it can be considered as a random variable. In the first stage of our study we had to determine the most important chemical rate constants with respect to a given criterion. Ozone is known to be one of the most dangerous pollutants in the air; that is why the mean monthly ozone concentration was chosen as the primary criterion. By extensive experiments with perturbation of a large number of coefficients of the suspected chemical reactions, we tried to determine the most critical of them. The results of these experiments revealed the time-dependent chemical reaction #22, NO + OH → HNO_2, as one of the most critical. Two others were also chosen to take part in the SA study: #3 (time-dependent) and #6 (time-independent). By running SA-DEM with a given set of perturbation factors α = (α_i)_{i=1}^{d} (d = 3 in our case, according to the above choice), we generate an output of the following type:

r_s(α) = \frac{c_s^{α}(a_s, b_s)}{c_s(a_0, b_0)} , \qquad α ∈ \{0.1, 0.2, \ldots, 2.0\}^d ,   (5)

where s is the index of the chemical species (s = 1, ..., 35) and the denominator is the mean monthly concentration of this species without any perturbation (α = 1) at the point where the maximal mean monthly concentration of a fixed chemical species k over the spatial domain G is achieved (c_k^{max} = c_k(a_0, b_0) = \max_{(a,b) \in G} \{c_k(a, b)\}). The sensitivity of five important pollutants (nitrogen oxide (NO), nitrogen dioxide (NO_2), sulphur dioxide (SO_2), the peroxy radical (PHO) and ozone (O_3)) to the perturbation of this rate coefficient is shown in Fig. 1. In Fig. 2 only the ozone sensitivity is given, for 5 consecutive years. The graphs are quite similar. Moreover, the graphs for the other chemical species show similar behaviour, which
Fig. 1. July 1996 sensitivity of the mean monthly concentrations of 5 chemical species to the RC22 perturbation factors in the interval [0.1, 2.0]
Fig. 2. Sensitivity of the ozone mean monthly concentrations for 5 consecutive years to the RC22 perturbation factors in the interval [0.1, 2.0]
indicates that the sensitivity to the chemical rate constants is not strongly affected by the variations of the meteorological conditions. By generating the values r_s(α) (mesh functions in G), the first stage of the SA procedure is completed. The second stage consists of two steps: (i) approximation of the mesh functions and (ii) computation of the Sobol' global sensitivity indices; both are beyond the scope of this paper.
7
Conclusions and Plans for Future Work
A 3-stage variance-based sensitivity analysis method is proposed and applied to the Danish Eulerian Model. In the first stage, many numerical experiments with perturbation of some reaction coefficients were executed, producing the necessary mesh functions for the next stage of the proposed sensitivity analysis method. For that purpose, a new SA-oriented version of the model (SA-DEM) was created. It was implemented on two very powerful parallel supercomputers, showing good scalability on both machines. The next stage of this SA research includes approximation of the mesh functions by polynomials of third or fourth degree, or by cubic B-spline functions. A Monte Carlo integration method is then applied to the results. The study could lead to improvements in the chemical scheme of the model as well as to its more efficient use in some time-critical applications. These results will be presented in another paper. Other near-future plans include:
– extending the abilities and improving the performance of SA-DEM;
– experiments with SA-DEM on a 3D coarse grid (96 × 96 × 10), as well as on finer-resolution grids (storage permitting);
– studies of model sensitivity with respect to the emission levels and the boundary conditions.
Acknowledgments
This research is supported by the Bulgarian NSF grants DO 02-115/2008 "Supercomputer Applications" and DO 02-215/2008.
References
1. Dimov, I., Georgiev, K., Ostromsky, T., Zlatev, Z.: Computational challenges in the numerical treatment of large air pollution models. Ecological Modelling 179, 187–203 (2004)
2. Ostromsky, T., Zlatev, Z.: Parallel Implementation of a Large-scale 3-D Air Pollution Model. In: Margenov, S., Waśniewski, J., Yalamov, P. (eds.) LSSC 2001. LNCS, vol. 2179, pp. 309–316. Springer, Heidelberg (2001)
3. Ostromsky, T., Zlatev, Z.: Flexible Two-level Parallel Implementations of a Large Air Pollution Model. In: Dimov, I.T., Lirkov, I., Margenov, S., Zlatev, Z. (eds.) NMA 2002. LNCS, vol. 2542, pp. 545–554. Springer, Heidelberg (2003)
4. Saltelli, A., Tarantola, S., Campolongo, F., Ratto, M.: Sensitivity Analysis in Practice: A Guide to Assessing Scientific Models. Halsted Press, New York (2004)
5. Sobol, I.M.: Sensitivity estimates for nonlinear mathematical models. Mathematical Modeling and Computational Experiment 1, 407–414 (1993)
6. Sobol, I.M.: Global Sensitivity Indices for Nonlinear Mathematical Models and Their Monte Carlo Estimates. Mathematics and Computers in Simulation 55(1-3), 271–280 (2001)
7. Sobol, I.M.: Theorem and examples on high dimensional model representation. Reliability Engineering and System Safety 79, 187–193 (2003)
8. WEB-site of the Danish Eulerian Model, http://www.dmu.dk/AtmosphericEnvironment/DEM
9. Zlatev, Z.: Computer treatment of large air pollution models. Kluwer, Dordrecht (1995)
Advanced Results of the PM10 and PM2.5 Winter 2003 Episode by Using MM5-CMAQ and WRF/CHEM Models

Roberto San José1, Juan L. Pérez1, José L. Morant1, F. Prieto2, and Rosa M. González3

1 Environmental Software and Modelling Group, Computer Science School, Technical University of Madrid - UPM, Campus de Montegancedo, Boadilla del Monte, 28660 Madrid, Spain
[email protected]
http://artico.lma.fi.upm.es
2 Department of Ecology, Building of Sciences, University of Alcalá, 28871 Alcalá de Henares, Madrid, Spain
3 Department of Meteorology and Geophysics, Faculty of Physics, Complutense University of Madrid – UCM, Ciudad Universitaria, 28040 Madrid, Spain
[email protected]
Abstract. During the winter of 2003 there was a special particulate episode over Germany. The application of the MM5-CMAQ model (PSU/NCAR/EPA, US) to simulate the high concentrations of PM10 and PM2.5 during a winter episode (2003) in Central Europe has been carried out. The selected period is January 15 – April 6, 2003. Values of daily mean concentrations up to 75 µgm−3 are found when averaging over several monitoring stations in Northern Germany. Additionally, the WRF/CHEM model (NOAA, USA) has been applied. In this contribution we have performed additional simulations to improve the results obtained in our contribution San José et al. (2008) [5]. We have run both models again, but with changes in the emission inventory and in the turbulence scheme for MM5-CMAQ. In the case of WRF/CHEM many more changes have been made: the Lin et al. (1983) microphysics scheme has been substituted by the WSM 5-class single-moment microphysics scheme (Hong et al. 2004); the Goddard radiation scheme has been substituted by the Dudhia radiation scheme; and the FTUV photolysis model has been substituted by the FAST-J photolysis model. The results substantially improve the PM10 and PM2.5 patterns in both models. The correlation coefficient for PM10 over the 80-day simulation period and for daily averages has been increased to 0.851, and in the case of PM2.5 it has been increased to 0.674. Keywords: Emissions, PM10 and PM2.5, air quality models, air particles.
1
Introduction
During recent years, investigations related to different pollution episodes have increased substantially. Simulations of elevated PM10 and PM2.5
concentrations have always been underestimated by modern three-dimensional air quality modelling tools. This fact has attracted much more attention from researchers in recent years. Three-dimensional air quality models have been developed during the last 15-20 years and substantial progress has occurred in this research area. These models are composed of a meteorological driver and a chemical and transport module. Examples of meteorological drivers are MM5 (PSU/NCAR, USA) [5], RSM (NOAA, USA), ECMWF (Reading, UK), HIRLAM (Finnish Meteorological Institute, Finland) and WRF [3]; examples of dispersion and chemical transport modules are EURAD (University of Cologne, Germany), EUROS (RIVM, The Netherlands), EMEP Eulerian (DNMI, Oslo, Norway), MATCH (SMHI, Norrkoping, Sweden) [2], REM3 (Free University of Berlin, Germany), CHIMERE (ISPL, Paris, France), NILU-CTM (NILU, Kjeller, Norway), LOTOS (TNO, Apeldoorn, The Netherlands), DEM (NERI, Roskilde, Denmark), the OPANA model based on the MEMO and MM5 mesoscale meteorological models and with the chemistry solved on-line by [4], STOCHEM (UK Met. Office, Bracknell, UK) [1], and CMAQ (Community Multiscale Air Quality modelling system), developed by EPA (USA). In the USA, CAMx (Environ Inc.), STEM-III (University of Iowa) and the CMAQ model are the most up-to-date air quality dispersion chemical models. In this application we have used the CMAQ model (EPA, USA), which is one of the most complete models and includes aerosol, cloud and aerosol chemistry. In this contribution we present results from two simulations by two different models. The first air quality modelling system is MM5-CMAQ, a mature modelling system based on the MM5 mesoscale non-hydrostatic meteorological model and the dispersion and chemical transport module CMAQ. The second tool is the WRF/CHEM air quality modelling system, which is an on-line (one code, one system) tool to simulate air concentrations based on the WRF meteorological driver. In WRF/CHEM the chemistry transport and transformations are embedded into WRF as part of the code, so that the interactions between many meteorological and climate variables and the chemistry can be investigated. WRF/CHEM is developed by NOAA/NCAR (USA). The advantage of on-line models lies in the capability to analyze all variables simultaneously and to account for all interactions (or at least as many as possible) with a fully modular approach.
2
PM10 and PM2.5 Episode
During the period January 15, 2003 to April 5, 2003, in central Europe (mainly the northern part of Germany), three high peaks in the PM10 and PM2.5 values were observed at several monitoring stations located in the north-east of Germany. The daily averages were close to 80 µgm−3 for PM10 concentrations and higher than 70 µgm−3 for PM2.5 concentrations. These values are about 4-5 times higher than those registered as "normal" values. The first peak in the PM10 and PM2.5 concentrations occurred from February 1 to February 15. During this period of time, Central Europe was under the influence of a high-pressure
system coming from Russia through Poland and Southern Scandinavia. In the northern part of Germany, we found south-easterly winds and stable conditions with low wind speeds. These meteorological conditions brought daily PM10 concentrations to about 40 µgm−3. The second peak was characterized by a sharp gradient in the PM10 concentrations from February 15 to March 7. This episode reached daily PM10 concentrations of up to 70 µgm−3. The meteorological conditions on March 2 (peak values) were characterized by a wind rotation composed of south-easterly winds from Poland over the north of Germany and north-westerly and westerly winds in the central part of Germany. Finally, a third peak, with values of about 65 µgm−3 on March 27, started on March 20 and ended on April 5, 2003; it had a similar structure and causes to the second one. The observational data used for comparison with the modelling results are described in San José et al. (2008).
3
Emission Data
In both models, we have applied the TNO emissions as area and point sources with a geographical resolution of 0.125° latitude by 0.25° longitude, covering all of Europe. The emission totals by SNAP activity sectors and countries agree with the baseline scenario of the Clean Air for Europe (CAFE) programme. This database gives the PM10 and PM2.5 emissions of the primary particles. We also took from CAFE the PM splitting sub-groups, the height distribution and the breakdown of the annual emissions into hourly emissions. The PM2.5 fraction of the particle emissions was split into an unspecified fraction, elemental carbon (EC) and primary organic carbon (OC). The EC fraction of the PM2.5 emissions for the different SNAP sectors was taken from the EMEP emission inventory. For the OC fraction, the following method is applied: an average OC/EC emission ratio of two was used for all sectors, i.e. the OC fractions were set to twice the EC fractions, except when the sum of the two fractions would exceed unity. In this case (fEC > 0.33), fOC was set as fOC = 1 − fEC. With this prepared input, WRF/CHEM and CMAQ took the information as is. The hourly emissions are derived using sector-dependent monthly, daily and hourly emission factors as used in the EURODELTA (http://aqm.jrc.it/eurodelta/) exercise. The differences with respect to the [5] simulations for MM5-CMAQ are as follows: Albania, Croatia, Bosnia and Serbia use the Bulgaria daily factors; Turkey uses the Hungary daily factors; Belarus, Moldavia, Ukraine and Russia use the Romania daily factors; Germany uses the Federal Republic of Germany daily factors; the Czech Republic uses the Slovakia monthly factors. The VOC to TOC factor is 1.14. In the case of WRF/CHEM the changes are the same as for MM5-CMAQ, but the VOC to TOC factor in the VOC splitting scheme is changed to 3.2.
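As an illustration of the EC/OC split just described, the small sketch below applies the rule fOC = 2·fEC, capped so that fEC + fOC ≤ 1 (i.e. fOC = 1 − fEC when fEC > 1/3); the sector EC fractions in the example are invented placeholders, the real values being SNAP-sector specific and taken from the EMEP inventory.

```python
def oc_fraction(f_ec):
    """OC fraction = 2 * EC fraction, capped so that f_ec + f_oc <= 1."""
    return 2.0 * f_ec if f_ec <= 1.0 / 3.0 else 1.0 - f_ec

# Hypothetical sector EC fractions, for illustration only.
for snap, f_ec in {"SNAP 2 (residential)": 0.25, "SNAP 7 (road transport)": 0.40}.items():
    f_oc = oc_fraction(f_ec)
    unspec = 1.0 - f_ec - f_oc           # remaining unspecified PM2.5 fraction
    print(f"{snap}: f_EC={f_ec:.2f}, f_OC={f_oc:.2f}, unspecified={unspec:.2f}")
```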
4
MM5-CMAQ and WRF-CHEM Architectures and Configurations
MM5 was set up with two domains: a mother domain with 60 × 60 grid cells at 90 km spatial resolution and 23 vertical layers, and a nested domain with 61 × 61 grid cells at
30 km spatial resolution and 23 vertical layers. The central point is set at 50.0 N and 10.0 E. The model is run with a Lambert conformal conic projection. The CMAQ domain is slightly smaller, following the CMAQ architecture rules. We use the reanalysis T62 (209 km) datasets as 6-hour boundary conditions for MM5, with 28 vertical sigma levels and nudging with meteorological observations for the mother domain. We run MM5 with the two-way nesting capability. We use the Kain-Fritsch 2 cumulus parameterization scheme, the MRF PBL scheme, the Schultz microphysics scheme and the Noah land-surface model. In CMAQ we use clean boundary profiles for the initial conditions, the Yamartino advection scheme, ACM2 for vertical diffusion, the EBI solver and the aqueous/cloud chemistry with the CB05 chemical scheme. Since our mother domain includes significant areas outside of Europe (the north of Africa), we have used the EDGAR emission inventory with the EMIMO 2.0 emission model approach to fill those grid cells with hourly emission data. The VOC emissions are treated by SPECIATE Version 4.0 (EPA, USA) and, for the lumping of the chemical species, we have used the procedure for 16 different groups. We use our BIOEMI scheme for biogenic emission modeling. The classical Aitken, accumulation and coarse modes are used (MADE/SORGAM modal approach). In the WRF/CHEM simulation we have used only one domain with 30 km spatial resolution, similar to MM5. We have used the Lin et al. (1983) scheme for the microphysics and the Yamartino scheme for the boundary layer parameterization and for the biogenic emissions. The MOSAIC sectional approach is used with 4 modes for particle modeling.
5
Changes in Model Configurations
In the case of MM5-CMAQ, the changes in the model simulations compared with the report in [5] affect only the emissions (as explained above) and the Kz (eddy diffusivity) coefficient. The option to use the so-called KZMIN, as detailed in the CMAQ code, is applied. If KZMIN is activated, the Kz coefficient is calculated by

Kz = KzL + (KzU − KzL) · UFRAC ,   (1)

where Kz is the eddy diffusivity in m² s⁻¹, KzL is 0.5 (lowest) and KzU is 2.0 (highest). UFRAC represents the fraction (range 0–1) of urban land use in the grid cell. In the case of WRF/CHEM, the changes affect the microphysics scheme, substituting it by the WSM (WRF Single-Moment) 5-class microphysics scheme, where 5 is the number of water species predicted by the scheme. The Goddard/NASA radiation scheme is substituted by the Dudhia radiation scheme. The FTUV photolysis rate model is substituted by the FAST-J scheme.
6
Model Results
The comparison between daily average values (averaged over all monitoring stations) of PM10 concentrations and modeled values has been performed with
Fig. 1. Comparison between daily averaged observed PM10 concentrations and model results produced by MM5-CMAQ. The model gets closer to the maximum peak compared with the previous simulation in [5].
Fig. 2. Comparison between daily averaged observed PM10 concentrations and model results produced by WRF/CHEM. The model captures the magnitude of the PM10 peaks even better than in the previous simulation [5].
Fig. 3. Comparison between daily averaged observed PM2.5 concentrations and model results produced by MM5-CMAQ. The model gets closer to the observations than the simulation performed in [5].
Fig. 4. Comparison between daily averaged observed PM2.5 concentrations and model results produced by WRF/CHEM. The model slightly overestimates the observed data, but the correlation coefficient shows a slight improvement (up to 0.759).
several statistical tools, such as: calculated mean/observed mean; calculated STD/observed STD; bias; squared correlation coefficient (R2); RMSE/observed mean (Root Mean Squared Error); percentage within ±50%; and number of data sets. Figure 1 shows the comparison between the PM10 observed averaged daily values and the values modelled by MM5-CMAQ. The results show that for MM5-CMAQ the new configuration, related to the emission data and the eddy diffusivity, improves the correlation coefficient from 0.828 to 0.851, and the pattern shows a substantial improvement, with the central peak much closer to the observed data. Figure 2 shows the comparison between observed and modelled average daily data for the episode with the new configuration for the WRF/CHEM model. The results show a much better correlation coefficient, going from 0.782 to 0.852 with the new configuration. Figures 3 and 4 show similar results for PM2.5. In the case of MM5-CMAQ the improvement is from 0.608 to 0.674, and for WRF/CHEM the change is from 0.760 to 0.759. These results show that the new configuration is substantially better than the previous one. New experiments are needed to determine the impact of the emissions and the eddy diffusivity, respectively.
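A small sketch of these evaluation statistics, applied to a pair of daily-averaged observed and modelled PM10 series, is given below; the two arrays are synthetic placeholders rather than the campaign data, and the within-±50% criterion is implemented with one common definition (model within half of the observed value), which is an assumption on our part.

```python
import numpy as np

obs = np.array([35.0, 42.0, 55.0, 78.0, 60.0, 44.0, 38.0])   # µg/m3, synthetic
mod = np.array([30.0, 45.0, 50.0, 70.0, 66.0, 40.0, 35.0])   # µg/m3, synthetic

ratio_mean = mod.mean() / obs.mean()                # calculated mean / observed mean
ratio_std  = mod.std() / obs.std()                  # calculated STD / observed STD
bias       = (mod - obs).mean()
r2         = np.corrcoef(obs, mod)[0, 1] ** 2       # squared correlation coefficient
rmse_norm  = np.sqrt(((mod - obs) ** 2).mean()) / obs.mean()   # RMSE / observed mean
within50   = 100.0 * np.mean(np.abs(mod - obs) <= 0.5 * obs)   # % within ±50%

print(f"mean ratio {ratio_mean:.2f}, std ratio {ratio_std:.2f}, bias {bias:.1f}")
print(f"R2 {r2:.3f}, RMSE/obs-mean {rmse_norm:.2f}, within ±50%: {within50:.0f}%")
```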
7
Conclusions
We have implemented and re-run two different models (MM5-CMAQ and WRF/CHEM) for the same episode over the northern part of Germany during the winter period of 2003 (January 15 - April 5, 2003). The comparison between these simulations and those performed in [5] leads to the following results: we have substantially improved the correlation coefficients for the daily averages when comparing observed and modelled data for both models. WRF/CHEM continues to show better results than MM5-CMAQ, but the MM5-CMAQ peaks for PM10 and PM2.5 are getting closer to the observed peaks. The patterns for MM5-CMAQ have improved substantially compared with the results obtained in [5]. New experiments are necessary to determine the impact of the eddy diffusivity and the emission inventory on the new results.
Acknowledgements
We would like to thank Dr. Peter Builtjes (TNO, The Netherlands) for his initial guidance and suggestion for this experiment, and also the COST 728 project (EU), in which the inter-comparison experiment was proposed. The authors thankfully acknowledge the computer resources, technical expertise and assistance provided by the Centro de Supercomputación y Visualización de Madrid (CeSVIMa) and the Spanish Supercomputing Network.
References
1. Collins, W.J., Stevenson, D.S., Johnson, C.E., Derwent, R.G.: Tropospheric ozone in a global scale 3D Lagrangian model and its response to NOx emission controls. J. Atmos. Chem. 86, 223–274 (1997)
2. Derwent, R., Jenkin, M.: Hydrocarbons and the long-range transport of ozone and PAN across Europe. Atmospheric Environment 8, 1661–1678 (1991)
3. Grell, G.A., Dudhia, J., Stauffer, D.R.: A description of the Fifth-Generation Penn State/NCAR Mesoscale Model (MM5), NCAR/TN-398+STR. NCAR Technical Note (1994)
4. Jacobson, M.Z., Turco, R.P.: SMVGEAR: A sparse-matrix, vectorized GEAR code for atmospheric models. Atmospheric Environment 28(2), 273–284 (1994)
5. San José, R., Pérez, J.L., Morant, J.L., González, R.M.: Elevated PM10 and PM2.5 concentrations in Europe: a model experiment with MM5-CMAQ and WRF/CHEM. WIT Transactions on Ecology and the Environment 116, 3–12 (2008)
Four-Dimensional Variational Assimilation of Atmospheric Chemical Data – Application to Regional Modelling of Air Quality

Achim Strunk, Adolf Ebel, Hendrik Elbern, Elmar Friese, Nadine Goris, and Lars Peter Nieradzik

Rhenish Institute for Environmental Research at the University of Cologne
[email protected]
http://www.riu.uni-koeln.de
Abstract. The chemistry transport model system EURAD-IM and its variational data assimilation implementation are applied to air quality assessment problems. The facility for joint initial value and emission rate optimisation, together with a nested 4d-var module, is applied to the measurement campaign BERLIOZ. To emphasise the benefits of computationally efficient data assimilation for monitoring and simulating air quality, results of an operational 3d-var implementation are also given.
1
Introduction
The main objective of data assimilation (DA) is to provide an image of a system's state on regular grids that is as accurate and consistent as possible. This is to be accomplished by taking into account all available information, consisting of procedural knowledge coded in numerical models and declarative knowledge like climatological information, observations and model forecasts. Due to the variety of information sources, given by 1) observations with heterogeneous accuracy, representativeness and spatial and temporal density, 2) retrieval methods of different kinds and reliability, and 3) defective and incomplete model simulation results, only advanced DA techniques – like four-dimensional variational DA (4d-var) – and inverse modelling provide methods for appropriate data fusion and analysis. In traditional (meteorological) DA, the optimal guess of the system's state space variables is the focus of analysis applications, providing new initial conditions for successive forecasts ([9]). Regarding chemical applications in the troposphere, air-surface interactions like emissions have a strong impact on trace gas distributions, while being at the same time insufficiently known ([5,6]). This fact suggests the generalisation of air quality DA to an inversion problem, allowing for the joint analysis of various kinds of model parameters. This exposition will show air quality assessment results obtained by DA applications using variational approaches implemented in the advanced chemistry-transport model (CTM) EURAD-IM.
The paper is dedicated to Zahari Zlatev on the occasion of his 70th birthday.
2
Four-Dimensional Variational Data Assimilation
Using variational data assimilation techniques, the analysis task is formulated as a minimisation problem of the variational calculus. Emission rates are included in the set of control parameters, optimised together with the initial values. Following [5], we want to preserve the daily cycle of each emitted species in each grid cell, while the total amount of emitted mass is scaled by a time-invariant emission factor, which is optimised instead. This serves as a temporal regularisation and provides suitable constraints for the minimisation procedure. The theoretical outline will be discussed briefly, resting on [6]. Climatological information or short-range forecasts serve as a background field x^b of the state space, while the analogous background emission estimate e^b is usually obtained from emission inventories. The differences between the observations y_t and the model equivalents H(x_t) at time t are called innovations and are given by d_t = y_t − H(x_t), with the observation operator H mapping from the model state space to the observations. The model state x_t evolves from the initial conditions x_0 by a model integration, using emission factors f. In order to obtain an incremental formulation of the cost function ([3]), we use perturbations of the initial state δx_0 = x_0 − x_0^b and of the logarithmically transformed emission factors δu = u − u^b = ln(f) − ln(f^b). With B and K being the background error covariance matrices of the background initial state and the emission factors, respectively, we further transform the control variables to v = B^{−1/2} δx_0 and w = K^{−1/2} δu, in order to suitably precondition the ill-posed optimisation problem. This leads to the scalar-valued cost function J with

J(v, w) = \frac{1}{2} v^T v + \frac{1}{2} w^T w + \frac{1}{2} \sum_{j=0}^{N} [d_j - H \delta x_j]^T R^{-1} [d_j - H \delta x_j] ,   (1)
with H being a linearised approximation of H (and R the observation error covariance matrix). This cost function can be minimised using gradient descent or quasi-Newton algorithms, given the availability of the gradient of the cost function with respect to z = (v, w)^T:

\nabla_z J = \begin{pmatrix} v \\ w \end{pmatrix} - \begin{pmatrix} B^{T/2} & 0 \\ 0 & K^{T/2} \end{pmatrix} \sum_{j=0}^{N} M_j^T H^T R^{-1} [d_j - H \delta x_j] ,   (2)

where M_j^T is the adjoint operator of the tangent linear M of the model operator M in δx_t = M_{t,0} δz, and H^T is the adjoint of the linearised observation operator.
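The machinery of Eqs. (1)-(2) can be illustrated with a toy sketch (not the EURAD-IM code), reduced to the initial-value part of the control vector: a tiny linear model M, a linear observation operator H and synthetic observations. The gradient is obtained by one backward (adjoint) sweep and handed to an L-BFGS minimiser, as in the full system; all names and sizes are illustrative only.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(1)
n, m, N = 4, 2, 5                         # state size, obs per time, steps
M = np.eye(n) + 0.05 * rng.standard_normal((n, n))   # toy "model" operator
H = rng.standard_normal((m, n))           # linear observation operator
R_inv = np.eye(m)                         # inverse obs error covariance
B_sqrt = 0.5 * np.eye(n)                  # square root of B
y = [rng.standard_normal(m) for _ in range(N + 1)]   # synthetic observations

def cost_and_grad(v):
    dx = B_sqrt @ v                       # preconditioning: dx0 = B^{1/2} v
    traj = [dx]
    for _ in range(N):                    # forward sweep, store trajectory
        dx = M @ dx
        traj.append(dx)
    resid = [y[j] - H @ traj[j] for j in range(N + 1)]   # innovations d_j
    J = 0.5 * v @ v + 0.5 * sum(r @ R_inv @ r for r in resid)
    lam = np.zeros(n)                     # adjoint variable
    for j in range(N, -1, -1):            # backward (adjoint) sweep
        lam = (M.T @ lam if j < N else lam) + H.T @ R_inv @ resid[j]
    return J, v - B_sqrt.T @ lam          # Eq. (2), initial-value part only

res = minimize(cost_and_grad, np.zeros(n), jac=True, method="L-BFGS-B")
print("analysis increment dx0 =", B_sqrt @ res.x)
```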
3
The EURAD-IM Chemistry Transport Model
The European Air Pollution Dispersion chemistry-transport model and its inverse modelling module (EURAD-IM) is a comprehensive Eulerian model operating from hemispheric to local scale. It involves transport, diffusion, chemical
transformation of about 60 chemical species (depending on the chosen mechanism), and wet and dry deposition of tropospheric trace gases ([7,8]). Moreover, the inverse modelling system allows for the application of variational data assimilation techniques, focussing on episodic scenario analysis and operational forecast enhancements in a near-real-time setup. As meteorological driver the Penn State/NCAR mesoscale model MM5 is used. The emission data in this exposition are produced based on available EMEP inventories of different years of validity. In addition to the (forward) integrating CTM, the 4d-var data assimilation system includes the associated adjoint operators for the gas phase mechanisms and the transport and diffusion schemes. The system is completed by a quasi-Newton minimisation procedure (L-BFGS) and a sophisticated covariance module for the initial value background error covariance matrix (BECMiv), including a suitable preconditioning strategy. The provision of BECMiv, which is hampered by insufficient knowledge and by the impracticality of storing a matrix with more than 10^12 entries in the case of a real-world CTM, uses the algorithm proposed by [11]. The authors advise employing a generalised diffusion approach to provide correlations between background state errors by means of an operator. This technique allows for flexible applications, including inhomogeneous and anisotropic correlation features. An emission factor background error covariance matrix has been constructed using emission statistics from EMEP, broken down by polluter groups. It is coded as a block-diagonal matrix describing emission rate correlations between emitted constituents. The implementation of the covariance matrices in EURAD-IM is described in detail by [6]. The 4d-var setup with its adjoint operators enforces saving the complete forward (direct) model trajectory for later retrieval during the backward (adjoint) integration. Due to the enormous amount of data to save, the current implementation of EURAD-IM applies a two-level recalculation strategy, resulting in two additional forward integrations during the backward sweep. Therefore, one complete iteration takes about five times the duration of a single forward simulation, necessitating a parallel implementation. This is realised using a domain decomposition method and the communication facilities of MPI. The implementation within EURAD-IM is described in detail in [4] and [6]. Due to discontinuities in the EURAD-IM particle phase module MADE ([1]), the adjoint of this module is not yet available. The examples of data assimilation including particle phase components shown below are thus limited to the application of three-dimensional variational (3d-var) data assimilation.
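The diffusion-operator idea for the background error covariances can be illustrated with a tiny 1-D example (grid size, step count and diffusion coefficient below are our own illustrative choices, not EURAD-IM settings): repeatedly applying an explicit diffusion step to a unit impulse yields a smooth, quasi-Gaussian correlation function, so operators like B^{1/2} can be applied to a vector without ever forming or storing B.

```python
import numpy as np

def apply_diffusion(v, n_steps=50, kappa=0.24):
    """Apply n_steps of explicit 1-D diffusion (Neumann boundaries)."""
    v = v.copy()
    for _ in range(n_steps):
        padded = np.pad(v, 1, mode="edge")
        v = v + kappa * (padded[:-2] - 2.0 * v + padded[2:])
    return v

impulse = np.zeros(101)
impulse[50] = 1.0
corr = apply_diffusion(impulse)
corr /= corr.max()          # normalised correlation function of the operator
print("correlation at 0, 5, 10 grid points:",
      corr[50].round(3), corr[55].round(3), corr[60].round(3))
```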
4
Applications and Results
As mentioned above, one core feature of the 4d-var system is the joint optimisation of initial values and emission rates. A short example of the motivation is given in the following: Figure 1 shows the ozone evolution at the Austrian station St. Pölten on June 4, 2003. Having assimilated the observations of the first six hours only, the properties of optimising the different sets of control parameters are obvious: initial value optimisation strongly improves the forecast skill, while still overestimating the ozone values in the second part of the
Fig. 1. Observed and analysed ozone evolution at Austrian station St. Pölten using different sets of control parameters. Grey shaded: window of assimilated observations (00-06 UTC). Vertical bars: ozone observations with error estimates. Dashed line: Control run without data assimilation. Dotted line: initial value optimisation. Dash-dotted line: emission factor optimisation. Solid line: joint initial value and emission factor optimisation.
day. Emission-based optimisation strongly suppresses the ozone precursor emissions (due to the ozone overestimation of the control run during the assimilation interval) and thus leads to ozone being almost completely removed at the end of the day. Only jointly adjusting emission rates and initial values allows for an almost perfect fit to the observed ozone evolution. The following two sections will show results for 1) a nested application of the 4d-var system to an ozone episode and 2) the operational implementation of a 3d-var system and its benefits for the forecast skill.

4.1 Nested 4d-Var Application to the BERLIOZ Campaign
The main stimulus of the BERLIOZ campaign ([10]) in July and August 1998 was to analyse the transport and chemical processes responsible for photo-oxidant formation and the contribution of ozone precursors to its formation in the Berlin area. Due to the limited success of analysing reactive emitted constituents like NOx at coarse resolutions, the EURAD-IM simulation setup for the BERLIOZ campaign includes a coarse grid with a horizontal grid spacing of 54 km and two recursively nested domains with a nesting ratio of 3, resulting in a sequence of 54-18-6 km. The deepest nested domain (nest 2) covers large parts of Eastern Germany with the greater Berlin area, being the region of main interest (see Figure 3). The assimilated observational data include ground-based observations of O3, NO, NO2, CO, SO2, benzene, HNO3, H2O2, CH2O, and PAN within an assimilation window from 06 to 20 UTC on July 20, 1998. Figure 2 shows nitrogen oxide results for two selected stations in the greater Berlin area. A significantly improved simulation skill can be seen when analysis results for nest level 2 are used, even for the subsequent forecast. Hence, observed constituents of strong
Fig. 2. Observations with error estimates (vertical bars) and model realisations on nest level 2 for nitrogen oxides at two selected stations in the greater Berlin area on July 20-21. Dashed line: Control run (no assimilation). Dotted line: First guess run (based on analyses for nest 0 and nest 1). Solid line: Analysis run (based on analysis for nest 2). Assimilated observations: 06-20 UTC on July 20 (grey shaded); later observations used for quality control only.
local variability have become sufficiently well predicted by the nested analysis results applying joint initial value and emission rate optimisation. Figure 3 exhibits the analysed emission factors for NO2 at nest level 2, showing moderate values close to 1. Thus, the analysis runs do not enforce a vigorous change in the emission inventory to reproduce the observational information, and they indicate a reasonably well estimated emission inventory. These optimised emission rates are a direct estimate of improved surface fluxes. The flexibility of the variational technique in taking into account various types of observations, like temporally and spatially dense earth observations, will be emphasised in the following. This is of even more importance, since satellite retrieval for tropospheric height levels is a strongly emerging issue in earth
Fig. 3. Optimised emission factors for NO2 on BERLIOZ nest 2, as obtained by joint optimisation of initial values and emission factors. The different optimisation stages in terms of the boxes of nest levels 0, 1 and 2 are obvious. Each analysis on the recursive nest levels refines the previously determined scaling factors, leading to a more and more refined emission rate analysis.
Fig. 4. Assimilation results on the European grid for GOME tropospheric NO2 columns on July 20, 1998. Upper left: retrievals; upper right: total model column (analysis); lower left: model equivalents (background); lower right: model equivalents (analysis). Column densities are in 1015 molec/cm2 .
observation with focus on air quality monitoring. The results of assimilating nitrogen dioxide tropospheric column retrievals (provided by KNMI, [2]) of GOME observations – which come along with averaging kernel and footprint information – are shown in Figure 4. The observations show features with elevated NO2 column densities coincidently with higher expected NO2 surface concentrations, namely the area from London over the Channel and Netherlands/Belgium to the German Rhine-Ruhr area as well as Paris and the Spanish north-coast. Although underestimated, the higher NO2 column densities over the Channel and the adjacent areas already become apparent in the model equivalents by the background model simulation. The structures in the observed NO2 column densities can be reproduced by the analysis run in a remarkable manner. The remaining discrepancies coincide with very low observed values, which the model does not reproduce. To emphasise the kind of information contained in the observed column densities including footprint information, the total tropospheric model columns as produced by the analysis run is given, too. Here, the locations with high NO2 surface concentrations also push through in the tropospheric column density, but remain concentrated ashore. The higher NO2 column densities over Scotland and the Atlantic Ocean are linked to a cyclonic low, transporting polluted air masses – originating from surface levels – over long distances.
Fig. 5. Scatter plots of O3 on the European domain and PM10 on the nested Northrhine-Westfalian grid, showing the benefit of the operational analysis procedure for the forecast skill on November 27, 2008, as given by bias and root-mean-square errors (rmse) for different model realisations. Locations of measuring stations are given by crosses in the upper right corner of the plot areas. See text for further explanations.
4.2
Operational 3d-Var Application for PROMOTE
Within the framework of the European project PROMOTE, an operational 3d-var data assimilation setup is chosen to produce daily analyses and high-quality forecasts on various European domains. Starting with a coarse European domain with a horizontal grid spacing of 45 × 45 km² and using one intermediate nest level with 15 × 15 km² grid spacing, the target domains include Switzerland, Austria, Ireland, Mecklenburg-Western Pomerania and Northrhine-Westfalia in Germany at a horizontal resolution of 5 × 5 km². A special nest is prepared for the Rhine-Ruhr area with a grid spacing of 1 × 1 km². Due to the numerical demands of the 4d-var technique, only a spatial analysis algorithm has been chosen, allowing for activation of the aerosol module MADE ([1]) in EURAD-IM. Figure 5 shows scatter plots for O3 on the European domain and PM10 on the Northrhine-Westfalian grid, emphasising the benefits of the data assimilation system for the forecast skill. Four model realisations are given, based on different meteorological and chemical analyses: the simulations forecast day-1 and control day-0 are based on the same chemical analysis, but control day-0 uses a more recent meteorological analysis (by 12 hours). The difference in model performance is thus given by different meteorological conditions only. In comparison, forecast day-0 uses the same meteorology as control day-0, but an updated chemical analysis inferred from the observations available from the day before. The lower bias and rmse values are therefore due to the chemical data assimilation procedure only. A
significantly lower bias and rmse can be stated both for ozone at coarse resolution and for particulate matter in the nested domain. The statistics of the fourth model simulation, analysis day-0, are given for the sake of completeness, showing the results after performing 3d-var assimilation of the given observations on November 27 with forecast day-0 as model background.
5
Summary
Allowing for the joint analysis of emission rates and initial values, 4d-var DA proves to be a powerful tool for air quality assessments and simulations. The more ill-posed optimisation problem has been compensated for by the design of flexible and precise error covariance matrices for both sets of optimisation parameters. The operational application of spatial DA techniques on nested domains leads to improved forecast skill for air-quality-relevant constituents. With the upcoming availability of an adjoint particle phase module, operational 4d-var experiments and ensemble setups will further enhance the statistical knowledge needed for further improvement of the covariances for both the gas and the particle phase.
References
1. Ackermann, I.J., Hass, H., Memmesheimer, M., Ebel, A., Binkowski, F.S., Shankar, U.: Modal aerosol dynamics model for Europe: Development and first applications. Atmos. Environ. 32, 2981–2999 (1998)
2. Boersma, K.F., Eskes, H.J., Brinksma, E.J.: Error analysis for tropospheric NO2 retrieval from space. J. Geophys. Res. 109(D04311) (2004)
3. Courtier, P., Thépaut, J.N., Hollingsworth, A.: A strategy for operational implementation of 4D-Var, using an incremental approach. Q. J. R. Meteorol. Soc. 120(519), 1367–1387 (1994)
4. Elbern, H., Schmidt, H.: Chemical 4D variational data assimilation and its numerical implications for case study analyses. In: Chock, D.P., Carmichael, G.R. (eds.) IMA Volumes in Mathematics and its Applications, Atmospheric Modeling, vol. 130, pp. 165–184 (2002)
5. Elbern, H., Schmidt, H., Talagrand, O., Ebel, A.: 4D-variational data assimilation with an adjoint air quality model for emission analysis. Environ. Model. and Software 15, 539–548 (2000)
6. Elbern, H., Strunk, A., Schmidt, H., Talagrand, O.: Emission rate and chemical state estimation by 4-dimensional variational inversion. Atmos. Chem. Phys. 7, 1–59 (2007)
7. Jakobs, H.J., Feldmann, H., Hass, H., Memmesheimer, M.: The use of nested models for air pollution studies: An application of the EURAD model to a SANA episode. J. Appl. Meteor. 34(6), 1301–1319 (1995)
8. Memmesheimer, M., Friese, E., Ebel, A., Jakobs, H.J., Feldmann, H., Kessler, C., Piekorz, G.: Long-term simulations of particulate matter in Europe on different scales using sequential nesting of a regional model. Int. J. Environm. and Pollution 22(1-2), 108–132 (2004)
9. Talagrand, O., Courtier, P.: Variational assimilation of meteorological observations with the adjoint vorticity equation. I: Theory. Q. J. R. Meteorol. Soc. 113, 1311– 1328 (1987) 10. Volz-Thomas, A., Geiss, H., Hofzumahaus, A., Becker, K.H.: Introduction to special section: Photochemistry experiment in BERLIOZ. J. Geophys. Res. 108(D4), 8252 (2003) 11. Weaver, A., Courtier, P.: Correlation modelling on the sphere using a generalized diffusion equation. Q. J. R. Meteorol. Soc. 127, 1815–1846 (2001)
Numerical Study of Some High PM10 Levels Episodes A. Todorova1, G. Gadzhev1 , G. Jordanov1, D. Syrakov2, K. Ganev1 , N. Miloshev1 , and M. Prodanova2 1
Geophysical Institute, Bulgarian Academy of Sciences, Sofia, Bulgaria 2 National Institute of Meteorology and Hydrology, Bulgarian Academy of Sciences, Sofia, Bulgaria
Abstract. The study aims at examining the ability and the limitations of the US EPA Models-3 system to adequately reproduce air pollution episodes and to evaluate the role of different processes in the PM10 pattern formation. The case study focuses on the meteorological situation in Germany in February and March of 2003, during which three major PM10 episodes could be identified. The simulated meteorological fields agree well with the patterns described in the case study definition. The simulated PM10 concentrations are compared with measurements from the EMEP stations in Germany. The Integrated Process Rate Analysis function of CMAQ is applied for clarifying the role of different processes of transport and transformation in forming PM10 concentration peaks.
1
Introduction
The goal of the study is to examine the ability and the limitations of the US EPA Models-3 system to adequately reproduce air pollution episodes and to evaluate the role of different processes of transport and transformation in forming PM10 concentration peaks. The case study focuses on the meteorological situation in Germany in February and March of 2003, during which three major PM10 episodes could be identified. These episodes have already been used for model intercomparison and for studying model simulation abilities ([12]). Extensive joint model comparison studies are still going on within the framework of COST Action 728, mostly aimed at clarifying the reasons for the shortcomings in the simulations and at choosing optimal model set-ups, inputs and parameters. The present study is part of the COST 728 activities as well, but focuses mostly on studying the role of different processes in the PM10 pattern formation and their contribution to the PM10 peaks in the period of interest.
2
Modeling Tools
The US EPA Models-3 system was chosen as a modelling tool. The system consists of three components: the Meso-meteorological Model MM5 ([7,9]); CMAQ
– the Community Multiscale Air Quality System ([6,2,3,14]); and SMOKE – the Sparse Matrix Operator Kernel Emissions Modelling System ([5,10,4,15]).
Fig. 1. Surface concentrations of PM10 for 28 February at 14 and 20h
3
Model Configuration and Brief Description of the Simulations
Since the basic meteorological data for 2003 is the NCEP Global Analysis Data with 1° × 1° resolution, it was necessary to use the MM5 and CMAQ nesting capabilities so as to downscale to 10 km, the innermost domain covering the territory of Germany. The TNO inventory ([13]) is used for the simulations. In order to prepare the CMAQ emission input file, these inventory files were handled by specially prepared computer codes. Two main procedures were performed. The temporal allocation is made on the basis of daily, weekly and monthly profiles, provided by Builtjes in [1]. The temporal profiles are country-, pollutant- and SNAP-specific. The speciation procedure depends on the Chemical Mechanism (CM) used. The Carbon Bond mechanism, v.4 (CB4), is exploited in the present study ([8]). In Version 4.6 of CMAQ, used here, CB4 is upgraded with Version 1.7 of the ISORROPIA aerosol model ([11]). It requires splitting of VOC into 10 lumped pollutants and PM2.5 into 5 groups of aerosol. A specific approach for obtaining speciation profiles is used here. The US EPA data base (see [16]) is intensively exploited. Typical sources for every SNAP were linked to similar source types from the US EPA nomenclature. The weighted averages of the respective speciation profiles are accepted as SNAP-specific splitting factors. In this way VOC and PM2.5 speciation profiles are derived.
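To make the two preprocessing steps concrete, the following sketch (with purely illustrative profile values and weights, not the TNO or US EPA numbers) computes a SNAP-specific PM2.5 splitting factor as a weighted average of source-type speciation profiles and applies monthly/weekly/daily factors for the temporal allocation:

```python
# Minimal sketch of the emission preprocessing steps (illustrative numbers only).

# US EPA-style speciation profiles: mass fractions of PM2.5 per aerosol group,
# here for two source types assumed to resemble a given SNAP category.
profiles = {
    "src_A": {"SO4": 0.20, "NO3": 0.05, "EC": 0.15, "OC": 0.30, "other": 0.30},
    "src_B": {"SO4": 0.10, "NO3": 0.10, "EC": 0.25, "OC": 0.25, "other": 0.30},
}
weights = {"src_A": 0.7, "src_B": 0.3}   # assumed contribution of each source type

# SNAP-specific splitting factors = weighted average of the profiles
species = profiles["src_A"].keys()
split = {s: sum(weights[k] * profiles[k][s] for k in profiles) for s in species}

# Temporal allocation: annual total -> one hour via country/SNAP profile factors
annual_total = 1000.0                     # t/year of PM2.5 for one grid cell and SNAP
monthly, weekly, daily = 1.2, 1.1, 1.5    # dimensionless profile factors (illustrative)
hourly_emission = annual_total / 8760.0 * monthly * weekly * daily

emis_by_species = {s: hourly_emission * f for s, f in split.items()}
print(split, emis_by_species)
```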
It must be stressed that the biogenic emissions of VOC were not estimated. Having in mind that the simulations concern winter episodes in Northern Europe, it is obvious that this is not a great omission. The CB-4 chemical mechanism with Aqueous-Phase Chemistry and the EBI solver (Eulerian iterative method) was used. CMAQ simulations were carried out for the period 1 Jan 2003 - 30 Apr 2003 for the two inner domains with 30 and 10 km resolution. The Models-3 "Integrated Process Rate Analysis" option is applied to discriminate the role of different dynamic and chemical processes for the formation of the observed high PM10 concentration episodes. The processes that are considered are: advection, diffusion, mass adjustment, emissions, dry deposition, chemistry, aerosol processes and cloud processes/aqueous chemistry.
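As a toy illustration of this kind of process budgeting (with invented values, not actual CMAQ IPR output), the hourly change of PM10 at a station is the sum of the individual process contributions, and the dominant process in each hour can be read off from the largest absolute term:

```python
import numpy as np

processes = ["horiz_advection", "vert_advection", "diffusion", "mass_adjustment",
             "emissions", "dry_deposition", "chemistry", "aerosol", "cloud_aqchem"]

hours = 24
rng = np.random.default_rng(0)
# Hypothetical IPR increments (ug/m3 per hour) for one station and one day
ipr = {p: rng.normal(0.0, 1.0, hours) for p in processes}

# Total hourly change in PM10 is the sum of all process contributions
total_change = sum(ipr[p] for p in processes)

# Identify the dominant process in each hour (largest absolute contribution)
stacked = np.vstack([ipr[p] for p in processes])
dominant = [processes[i] for i in np.abs(stacked).argmax(axis=0)]
print(total_change[:3], dominant[:3])
```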
4
Comparison of the PM10 Simulations with Measurements
The case study focuses on the meteorological situation in Germany in February and March of 2003, during which three major PM10 episodes could be identified: between Feb 10 and Feb 14 with observed peak PM10 concentrations from Feb 11 to Feb 13, the core episode between Feb 21 and March 5 with peak PM10 concentrations from Feb 28 to March 4, and the episode between March 24 and March 31 with PM10 maxima on March 27 and March 28. The meteorological simulations are very close to the observed state of the atmosphere: on 10-11 February the main pressure system that influences the domain of interest is a high pressure system over Central and Eastern Europe with winds from S and SE blowing over Germany; from 25 February until 2 March there are two large pressure systems – low pressure over the Atlantic and high pressure over Eastern Europe – and an almost stationary warm front is evident dividing Germany in two parts with different meteorological (especially wind) conditions. Plots of surface PM concentrations for February 28 are shown in Fig. 1 as an example. The sum is over those aerosol species that have a significant contribution to the total concentrations (> 1 µg/m3), and the hours are chosen so that daytime concentrations are shown as well as the peak at 20h. From February 28 to March 4 a large PM plume is evident in western Germany. This plume moves forwards and backwards in the E-W direction, changing shape as well as location and intensity of maximum concentration. The modeled and observed temporal evolution of the daily average PM10 concentrations at the different background stations in Germany is shown in Fig. 2. One can assess the agreement of simulated with measured concentrations as good: for all the stations the temporal evolution of the concentrations is qualitatively fairly well simulated. The observed peaks, however, are severely underestimated by the model for stations DE02R, DE09R, and DE41R. The comparison between simulations made with horizontal resolution of 30 and 10 km shows that the higher resolution does not improve the agreement
Fig. 2. Simulated vs. measured PM10 surface concentrations for the following stations DE02R-Langenbrugge, DE03R-Schauinsland, DE04R-Deuselbach, DE05RBrotjacklriegel, DE07R-Neuglobsow, DE08R-Schmucke, DE09R-Zingst, DE41RWesterland Tinnum, DE44R-Melpitz
with measured data. In fact the 10 and 30 km results barely differ at all. Quite obviously, this is due to the fact that the observed PM10 peaks are a result of some large-scale processes.
5
Process Analysis
To better understand the development of the pollution episodes, one should closely examine the contributions of different processes to the total aerosol concentrations. The analysis of the behavior of different processes points to possible explanations of the genesis of the PM10 concentration peaks. Fig. 3 shows the time evolution of the contribution of each process to the change in PM10 concentration at each of the nine stations between the 57th and 63rd Julian day (February 26 to March 4). In all 9 stations advection is the dominant process, which has a highly variable impact on PM10 concentrations. For DE02 cloud processes also have
Fig. 3. Temporal evolution of the contribution of different processes to the hourly change of PM10 surface concentrations for the following stations: DE02RLangenbrugge, DE03R-Schauinsland, DE04R-Deuselbach, DE05R-Brotjacklriegel, DE07R-Neuglobsow, DE08R-Schmucke, DE09R-Zingst, DE41R-Westerland Tinnum, DE44R-Melpitz
significant contribution — on the 57th day (positive) and 60th day (negative). DE04 experiences a drop in concentration after 57th day due to horizontal advection and vertical diffusion. Between 57th and 58th day the vertical advection acts in the opposite direction, slowing down the rate of decrease. The increase after 60th day is also due to advection — vertical at first and then horizontal. In DE05 there is a peak on 58th day due to vertical advection and aerosol processes. On the 59th and 60th day cloud processes have significant contribution to the rising PM10 concentration while horizontal advection and vertical diffusion act in negative direction all the time until the 61st day. The large drop after 60th day is due to cloud processes while horizontal advection acts in the opposite direction. Station DE07 has high and variable concentrations for this period and there are lots of processes involved. The variations between 57th and 59th day are
due to the variable impact of horizontal advection, which changes its sign from day to day. After 59th day the positive impact of horizontal advection competes with the negative one of cloud processes and vertical advection. In DE08 there is, again, a pronounced impact of horizontal advection, which is highly variable and frequently changes its sign. Vertical diffusion is responsible for the lower PM10 concentration in the afternoon of 58th and cloud processes play a role in the decrease on 60th day. For DE09 advection is again the dominant process — the influence of vertical advections is predominantly positive, while horizontal advection is highly variable and changes its sign. Vertical diffusion contributes to the rapid decreasing of PM10 concentration on 58th day and cloud processes play an important role on 60th day. For DE41 the decrease in concentrations on 59th day is due to horizontal advection and cloud processes acting in negative direction, while after 60th day the concentration rises again due to horizontal and vertical advection. DE42 shows quick changes in simulated PM10 concentration between 57th and 60th day due to various processes. Cloud and aerosol processes play an important role on 57th , while horizontal advection and vertical diffusion are the processes responsible for the decrease on 58th . On the 59th day the negative influence of vertical advection becomes more pronounced. After day 60 horizontal advection and aerosol processes contribute to the slight increase in PM10 concentration.
6
Conclusions
The main conclusions that can be drawn from this study are the following:
– The simulated meteorological fields agree well with the patterns described in the case study definition. The simulated PM10 agreement with measurements is as good (or "as bad") as many other model runs demonstrate for many other cases. The qualitative agreement is good and the quantitative agreement for most of the stations is reasonable;
– Enhancing the horizontal spatial resolution does not improve the results significantly, so most probably the observed PM10 peaks are a result of some large-scale processes;
– The results produced by the CMAQ "Integrated Process Rate Analysis" demonstrate the very complex behavior and interaction of the different processes: process contributions change very quickly with time and these changes for the different stations hardly correlate at all. The analysis of the behavior of different processes does not give a clear explanation of the genesis of the PM10 concentration peaks, but at least outlines the most important and dominant processes and points to possible explanations of the genesis.
Acknowledgments The present work is supported by EC through 6FP NoE ACCENT (GOCECT-2002-500337), SEE-GRID-SCI project, contract No. FP7 RI-211338, COST
Actions 728, as well as by the Bulgarian National Science Fund (grants No. DO02-161/16.12.2008 and DO02-115/2008). The contacts within the framework of the NATO SfP Grant ESP.EAP.SFPP 981393 were extremely stimulating as well. Deep gratitude is due to US EPA, US NCEP and EMEP for providing free-of-charge data and software. Special thanks to the Netherlands Organization for Applied Scientific Research (TNO) for providing us with the high-resolution European anthropogenic emission inventory.
References 1. Builtjes, P.J.H., van Loon, M., Schaap, M., Teeuwisse, S., Visschedijk, A.J.H., Bloos, J.P.: Project on the modelling and verification of ozone reduction strategies: contribution of TNO-MEP, TNO-report, MEP-R2003/166, Apeldoorn, The Netherlands (2003) 2. Byun, D., Ching, J.: Science Algorithms of the EPA Models-3 Community Multiscale Air Quality (CMAQ) Modeling System. EPA Report 600/R-99/030, Washington, DC (1999) 3. Byun, D., Schere, K.L.: Review of the Governing Equations, Computational Algorithms, and Other Components of the Models-3 Community Multiscale Air Quality (CMAQ) Modeling System. Applied Mechanics Reviews 59(2), 51–77 (2006) 4. CEP: Sparse Matrix Operator Kernel Emission (SMOKE) Modeling System, University of Carolina, Carolina Environmental Programs, Research Triangle Park, North Carolina (2003) 5. Coats Jr., C.J., Houyoux, M.R.: Fast Emissions Modeling with the Sparse Matrix Operator Kernel Emissions Modeling System, The Emissions Inventory: Key to Planning, Permits, Compliance, and Reporting, Air and Waste Management Association, New Orleans (September 2006) 6. Dennis, R.L., Byun, D.W., Novak, J.H., Galluppi, K.J., Coats, C.J., Vouk, M.A.: The Next Generation of Integrated Air Quality Modeling: EPA’s Models-3. Atmosph. Environment 30, 1925–1938 7. Dudhia, J.: A non-hydrostatic version of the Penn State/NCAR Mesoscale Model: validation (1993) 8. Gery, M.W., Whitten, G.Z., Killus, J.P., Dodge, M.C.: A Photochemical Kinetics Mechanism for Urban and Regional Scale Computer Modeling. Journal of Geophysical Research 94, 12925–12956 (1989) 9. Grell, G.A., Dudhia, J., Stauffer, D.R.: A description of the Fifth Generation Penn State/NCAR Mesoscale Model (MM5). NCAR Technical Note, NCAR TN-398STR, 138 p. (1994) 10. Houyoux, M.R., Vukovich, J.M.: Updates to the Sparse Matrix Operator Kernel Emission (SMOKE) Modeling System and Integration with Models-3, The Emission Inventory: Regional Strategies for the Future, Raleigh, NC, Air and Waste Management Association (1999) 11. Nenes, A., Pandis, S.N., Pilinis, C.: ISORROPIA: A new thermodynamic equilibrium model for multiphase multicomponent inorganic aerosols. Aquat. Geoch. 4, 123–152 (1998) 12. Stern, R., Builtjes, P., Schaap, M., Timmermans, R., Vautard, R., Hodzic, A., Memmesheimer, M., Feldmann, H., Renner, E., Wolke, R., Kerschbaumer, A.: A model inter-comparison study focussing on episodes with elevated PM10 concentrations. Atmospheric Environment 42, 4567–4588 (2008)
13. Visschedijk, A.J.H., Zandveld, P.Y.J., Denier van der Gon, H.A.C.: A High Resolution Gridded European Emission Database for the EU Integrate Project GEMS, TNO-report 2007-A-R0233/B, Apeldoorn, The Netherlands (2007) 14. http://www.cmaq-model.org/ 15. http://www.smoke-model.org/ 16. http://www.epa.gov/ttn/chief/emch/speciation/
Hausdorff Continuous Viscosity Solutions of Hamilton-Jacobi Equations R. Anguelov1,2, S. Markov2, and F. Minani3 1
Department of Mathematics and Applied Mathematics, University of Pretoria 2 Institute of Mathematics and Informatics, Bulgarian Academy of Sciences 3 Department of Applied Mathematics, National University of Rwanda
Abstract. A new concept of viscosity solutions, namely, the Hausdorff continuous viscosity solution for the Hamilton-Jacobi equation is defined and investigated. It is shown that the main ideas within the classical theory of continuous viscosity solutions can be extended to the wider space of Hausdorff continuous functions while also generalizing some of the existing concepts of discontinuous solutions. Keywords: Viscosity solution, Hausdorff continuous, envelope solution. 2000 Mathematics Subject Classification: 49L25, 35D05, 54C60.
1
Introduction
Hamilton-Jacobi equations are traditionally associated with classical mechanics but in recent times they are also linked to new areas of knowledge, e.g. the modeling of biomotor processes [12]. In their special form of Hamilton-Jacobi-Bellman equations they are associated with problems of optimal control [3], including feedback control problems typical for bioreactors [7,9]. The theory of viscosity solutions was developed in order to accommodate various kinds of nonsmooth solutions of these problems. In its classical formulation, see [6], the theory deals with solutions which are continuous functions. The concept of continuous viscosity solutions is further generalized, e.g. see [3, Chapter V], [5,4], to include discontinuous solutions, with the definition of Ishii given in [8] playing a pivotal role. In this paper we propose a new approach to the treatment of discontinuous solutions, namely, by involving Hausdorff continuous (H-continuous) interval valued functions. We justify the proposed approach by demonstrating that (i) the main ideas within the classical theory of continuous viscosity solutions can be extended to the wider space of H-continuous functions; (ii) the existing theory of discontinuous solutions is a particular case of that developed in this paper in terms of H-continuous functions; (iii) the H-continuous viscosity solutions have a clearer interpretation than the existing concepts of discontinuous solutions, e.g. envelope viscosity solutions [3, Chapter V].
The first author was partially supported by the NRF of South Africa. The second author was partially supported by the Bulgarian NSF Project DO 02-359/2008.
In order to simplify the exposition we will only consider first order Hamilton-Jacobi equations of the form Φ(x, u(x), Du(x)) = 0, x ∈ Ω,
(1)
where Ω is an open subset of Rn, u : Ω → R is the unknown function, Du is the gradient of u and the given function Φ : Ω × R × Rn → R is jointly continuous in all its arguments. The theory of viscosity solutions rests on two fundamental concepts, namely, of subsolution and of supersolution. These concepts are defined in various equivalent ways in the literature. The definition given below is formulated in terms of local maxima and minima. We use the notations
USC(Ω) = {u : Ω → R : u is upper semi-continuous on Ω},
LSC(Ω) = {u : Ω → R : u is lower semi-continuous on Ω}.
Definition 1. A function u ∈ USC(Ω) is called a viscosity subsolution of the equation (1) if for any ϕ ∈ C1(Ω) we have Φ(x0, u(x0), Dϕ(x0)) ≤ 0 at any local maximum point x0 of u − ϕ. Similarly, u ∈ LSC(Ω) is called a viscosity supersolution of the equation (1) if for any ϕ ∈ C1(Ω) we have Φ(x0, u(x0), Dϕ(x0)) ≥ 0 at any local minimum point x0 of u − ϕ.
Without loss of generality we may assume in the above definition that u(x0) = ϕ(x0), exposing in this way a very clear geometrical meaning of this definition: the gradient of the solution u of equation (1) is replaced by the gradient of any smooth function touching the graph of u from above, in the case of a subsolution, and touching the graph of u from below, in the case of a supersolution. This also establishes the significance of the requirement that a subsolution and a supersolution should respectively be upper semi-continuous and lower semi-continuous functions. More precisely, the upper semi-continuity of a subsolution u ensures that any local supremum of u − ϕ is effectively reached at a certain point x0, that is, it is a local maximum, with the geometrical meaning that the graph of u can be touched from above at x = x0 by a vertical translate of the graph of ϕ. In a similar way, the lower semi-continuity of a supersolution u ensures that any local infimum of u − ϕ is effectively reached at a certain point x0, which means that the graph of u can be touched from below at x = x0 by a vertical translate of the graph of ϕ.
Naturally, a solution should incorporate the properties of both a subsolution and a supersolution. In the classical viscosity solutions theory [6] a viscosity solution is a function u which is both a subsolution and a supersolution. Since USC(Ω) ∩ LSC(Ω) = C(Ω), this clearly implies that the viscosity solutions defined in this way are all continuous functions. The concept of viscosity solution for functions which are not necessarily continuous is introduced by using the upper and lower semi-continuous envelopes. For a locally bounded function u these are given by
S(u)(x) = inf{f(x) : f ∈ USC(Ω), u ≤ f} = inf_{δ>0} sup{u(y) : y ∈ Bδ(x)},   (2)
I(u)(x) = sup{f(x) : f ∈ LSC(Ω), u ≥ f} = sup_{δ>0} inf{u(y) : y ∈ Bδ(x)},   (3)
where Bδ (x) denotes the open δ-neighborhood of x in Ω. Using the fact that for any function u : Ω → R the functions S(u) and I(u) are always, respectively, upper semi-continuous and lower semi-continuous functions, a viscosity solution can be defined as follows, [8]. Definition 2. A function u : Ω → R is called a viscosity solution of (1) if S(u) is a viscosity subsolution of (1) and I(u) is a viscosity supersolution of (1). The first important point to note about the advantages of the method in this paper is as follows. Interval valued functions appear naturally in the context of noncontinuous viscosity solutions. Namely, they appear as graph completions. Indeed, the above definition places requirements not on the function u itself but on its lower and upper semi-continuous envelopes or, in other words, on the interval valued function F (u)(x) = [I(u)(x), S(u)(x)], x ∈ Ω,
(4)
which is called the graph completion of u, see [13]. Clearly, Definition 2 treats functions which have the same upper and lower semi-continuous envelopes, that is, have the same graph completion, as identical functions. On the other hand, since different functions can have the same graph completion, a function can not in general be identified from its graph completion, that is, functions with the same graph completion are indistinguishable. Therefore, no generality will be lost if only interval valued functions representing graph completions are considered. Let 𝔸(Ω) be the set of all functions defined on an open set Ω ⊂ Rn with values which are closed finite real intervals, that is, 𝔸(Ω) = {f : Ω → IR}, where IR = {[a̲, ā] : a̲, ā ∈ R, a̲ ≤ ā}. Identifying a ∈ R with the point interval [a, a] ∈ IR, we consider R as a subset of IR. Thus 𝔸(Ω) contains the set A(Ω) = {f : Ω → R} of all real functions defined on Ω. Let u ∈ 𝔸(Ω). For every x ∈ Ω the value of u is an interval [u̲(x), ū(x)] ∈ IR. Hence, the function u can be written in the form u = [u̲, ū] where u̲, ū ∈ A(Ω) and u̲(x) ≤ ū(x), x ∈ Ω. The function w(u)(x) = ū(x) − u̲(x), x ∈ Ω, is called the width of u. Clearly, u ∈ A(Ω) if and only if w(u) = 0. The definitions of the upper semi-continuous envelope, the lower semi-continuous envelope and the graph completion operator F given in (2), (3) and (4) for u ∈ A(Ω) can be extended to functions u = [u̲, ū] ∈ 𝔸(Ω) in an obvious way, e.g. S(u)(x) = inf_{δ>0} sup{z ∈ u(y) : y ∈ Bδ(x)}. We recall here the concept of S-continuity associated with the graph completion operator, [13].
Definition 3. A function u = [u̲, ū] ∈ 𝔸(Ω) is called S-continuous if F(u) = u, or, equivalently, I(u) = u̲, S(u) = ū.
Using the properties of the lower and upper semi-continuous envelopes one can easily see that the graph completions of locally bounded real functions on Ω comprise the set F(Ω) of all S-continuous functions on Ω. Following the above discussion we define the concept of viscosity solution for the functions in F(Ω).
Definition 4. A function u = [u̲, ū] ∈ F(Ω) is called a viscosity solution of (1) if u̲ is a supersolution of (1) and ū is a subsolution of (1).
A second advantage of the method in this paper is as follows. A function u ∈ A(Ω) is a viscosity solution of (1) in the sense of Definition 2 if and only if the interval valued function F(u) is a viscosity solution of (1) in the sense of Definition 4. In this way the level of the regularity of a solution u is manifested through the width of the interval valued function F(u). It is well known that without any additional restrictions the concept of viscosity solution given in Definition 2, and by implication the concept given in Definition 4, is rather weak, [3]. This is demonstrated by the following example, partially discussed in [3].
Example 1. Consider the equation
u′(x) = 1, x ∈ (0, 1).   (5)
The functions
v(x) = x + 1 if x ∈ (0, 1) ∩ Q, v(x) = x if x ∈ (0, 1) \ Q,
w(x) = x if x ∈ (0, 1) ∩ Q, w(x) = x + 1 if x ∈ (0, 1) \ Q
are both viscosity solutions of equation (5) in terms of Definition 2. The interval valued function z = F(v) = F(w) given by z(x) = [x, x + 1], x ∈ (0, 1), is a solution in terms of Definition 4.
With the interval approach adopted here it becomes apparent that the distance between I(u) and S(u) is an essential measure of the regularity of any solution u, irrespective of whether it is given as a point valued function or as an interval valued function. If no restriction is placed on the distance between I(u) and S(u), then we have some quite meaningless solutions like the solutions in Example 1. On the other hand, a strong restriction like I(u) = S(u) gives only solutions which are continuous. In this paper we consider solutions for which the Hausdorff distance, as defined in [13], between the functions I(u) and S(u) is zero, a condition defined through the concept of Hausdorff continuity.
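As a rough numerical illustration of the envelopes (2)–(3) and the graph completion (4), not part of the theory itself, the following sketch approximates I(u) and S(u) for a step function on a uniform grid; at the jump the graph completion is interval valued:

```python
import numpy as np

def envelopes(x, u, delta):
    """Discrete approximations of the lower/upper semi-continuous envelopes I(u), S(u)."""
    I = np.empty_like(u)
    S = np.empty_like(u)
    for i, xi in enumerate(x):
        mask = np.abs(x - xi) < delta      # grid points in the delta-neighbourhood
        I[i] = u[mask].min()
        S[i] = u[mask].max()
    return I, S

x = np.linspace(-1.0, 1.0, 201)
u = np.where(x < 0.0, 0.0, 1.0)           # step function, discontinuous at x = 0
I, S = envelopes(x, u, delta=0.02)

j = np.argmin(np.abs(x))                  # index closest to the jump point
print(I[j], S[j])                         # graph completion F(u)(0) is approximately [0, 1]
```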
2
The Space of Hausdorff Continuous Functions
The concept of Hausdorff continuous interval valued functions was originally developed within the theory of Hausdorff approximations, [13]. It generalizes the concept of continuity of a real function using a minimality condition with respect to inclusion of graphs.
Definition 5. A function f ∈ 𝔸(Ω) is called Hausdorff continuous, or H-continuous, if for every g ∈ 𝔸(Ω) which satisfies the inclusion g(x) ⊆ f(x), x ∈ Ω, we have F(g)(x) = f(x), x ∈ Ω.
As mentioned in the Introduction the concept of Hausdorff continuity is closely connected with the Hausdorff distance between functions. Let us recall that the Hausdorff distance ρ(f, g) between two functions f, g ∈ 𝔸(Ω) is defined as the Hausdorff distance between the graphs of the functions F(f) and F(g) considered as subsets of Rn+1, [13]. Then we have: f = [f̲, f̄] is H-continuous ⇐⇒ f is S-continuous and ρ(f̲, f̄) = 0.
Although every H-continuous function f is, in general, interval valued, the subset of the domain Ω where f assumes proper interval values is a set of first Baire category. One of the most surprising and useful properties of the set H(Ω) of all H-continuous functions is its Dedekind order completeness with respect to the pointwise defined partial order [1,10]
f ≤ g ⇐⇒ f̲(x) ≤ g̲(x), f̄(x) ≤ ḡ(x), x ∈ Ω.
What makes this property so significant is the fact that with very few exceptions the usual spaces in Real Analysis or Functional Analysis are not Dedekind order complete. Hence the space of H-continuous functions can be a useful tool in Real Analysis and in the Analysis of PDEs, particularly in situations involving order. For example, a long outstanding problem related to the Dedekind order completion of spaces C(X) of real valued continuous functions on rather arbitrary topological spaces X was solved through Hausdorff continuous functions [1]. Following this breakthrough a significant improvement of the regularity properties of the solutions obtained through the order completion method, see [11], was reported in [2]. Namely, it was shown that these solutions can be assimilated with the class of Hausdorff continuous functions on the open domains Ω. We may also remark that the concept of viscosity solutions is defined through order. Hence it is natural for it to be considered in the general setting of a Dedekind order complete space like H(Ω).
3
The Envelope Viscosity Solutions and Hausdorff Continuous Viscosity Solutions
Recognizing that the concept of viscosity solution given by Definition 2 is rather weak the authors of [3] introduce the concept of envelope viscosity solution. The concept is defined in [3] for equation (1) with Dirichlet boundary conditions u|∂Ω = g.
(6)
The concepts of subsolution and supersolution are extended to the problem (1),(6) through respective inequalities on the boundary, namely u|∂Ω ≤ g for a subsolution and u|∂Ω ≥ g for a supersolution.
Definition 6. A function u ∈ A(Ω) is called an envelope viscosity solution of (1),(6) if there exist a nonempty set Z1(u) of subsolutions of (1),(6) and a nonempty set Z2(u) of supersolutions of (1),(6) such that
u(x) = sup_{f∈Z1(u)} f(x) = inf_{f∈Z2(u)} f(x), x ∈ Ω.
It is shown in [3] that every envelope viscosity solution is a viscosity solution in terms of Definition 2. Considering the concept from a geometrical point of view, one can expect that by 'squeezing' the envelope viscosity solution u between a set of subsolutions and a set of supersolutions the gap between I(u) and S(u) would be small. However, in general this is not the case. The following example shows that the concept of envelope viscosity solution does not address the problem of the distance between I(u) and S(u). Hence one can have envelope viscosity solutions of little practical meaning similar to the one in Example 1.
Example 2. Consider the following problem on Ω = (0, 1):
−u(x)(u′(x))² = 0, x ∈ Ω,   (7)
u(0) = u(1) = 0.   (8)
For every α ∈ Ω we define the functions
φα(x) = 1 if x = α, φα(x) = 0 if x ∈ Ω \ {α},
ψα(x) = 0 if x = α, ψα(x) = 1 if x ∈ Ω \ {α}.
We have φα ∈ USC(Ω), ψα ∈ LSC(Ω), α ∈ Ω. Furthermore, for every α ∈ (0, 1) the function φα is a subsolution of (7) while ψα is a supersolution of (7). Indeed, both functions satisfy (7) for all x ∈ Ω \ {α} and at x = α we have
−φα(α)p² = −p² ≤ 0 for all p ∈ D+φα(α) = (−∞, ∞),
−ψα(α)p² = 0 ≥ 0 for all p ∈ D−ψα(α) = (−∞, ∞).
We will show that the function
u(x) = 1 if x ∈ Ω \ Q, u(x) = 0 if x ∈ Q ∩ Ω
is an envelope viscosity solution of (7). Define Z1(u) = {φα : α ∈ Ω \ Q}, Z2(u) = {ψα : α ∈ Q ∩ Ω}. Note that the functions in both Z1(u) and Z2(u) satisfy the boundary condition (8). Therefore these sets consist respectively of viscosity subsolutions and viscosity supersolutions of (7),(8). Further, u satisfies u(x) = sup_{w∈Z1(u)} w(x) = inf_{w∈Z2(u)} w(x), which implies that it is an envelope viscosity solution. Clearly neither u nor F(u) is a Hausdorff continuous function. In fact we have F(u)(x) = [0, 1], x ∈ Ω.
The next interesting question is whether every H-continuous solution is an envelope viscosity solution. Since the concept of envelope viscosity solutions requires the existence of sets of subsolutions and supersolutions respectively below and above an envelope viscosity solution, an H-continuous viscosity solution is not in general an envelope viscosity solution, e.g. when the H-continuous viscosity solution does not have any other subsolutions and supersolutions around it. However, in the essential case when the H-continuous viscosity solution is a supremum of subsolutions or an infimum of supersolutions it can be linked to an envelope viscosity solution as stated in the next theorem.
Theorem 1. Let u = [u̲, ū] be an H-continuous viscosity solution of (1) and let
Z1 = {w ∈ USC(Ω) : w is a subsolution, w ≤ u},
Z2 = {w ∈ LSC(Ω) : w is a supersolution, w ≥ u}.
a) If Z1 ≠ ∅ and u̲(x) = sup_{w∈Z1} w(x) then u̲ is an envelope viscosity solution.
b) If Z2 ≠ ∅ and ū(x) = inf_{w∈Z2} w(x) then ū is an envelope viscosity solution.
Proof. a) We choose the sets Z1(u̲) and Z2(u̲) required in Definition 6 as follows: Z1(u̲) = Z1, Z2(u̲) = {u̲}. Then we have u̲(x) = sup_{w∈Z1(u̲)} w(x) = inf_{w∈Z2(u̲)} w(x), which implies that u̲ is an envelope viscosity solution. The proof of b) is done in a similar way.
Let us note that if the conditions in both a) and b) of the above theorem are satisfied then both u̲ and ū are envelope viscosity solutions, and in this case it makes even more sense to consider instead the H-continuous function u.
4
Existence and Uniqueness
One of the primary virtues of the theory of viscosity solutions is that it provides very general existence and uniqueness theorems, [6]. In this section we give existence and uniqueness theorems for H-continuous viscosity solutions in a similar form to the respective theorems for continuous solutions [6, Theorem 4.1], and for general discontinuous solutions [8, Theorem 3.1], [3, Theorem V.2.14].
Theorem 2. (Existence) Assume that there exist Hausdorff continuous functions u1 = [u̲1, ū1] and u2 = [u̲2, ū2] such that ū1 is a subsolution of (1), u̲2 is a supersolution of (1) and u1 ≤ u2. Then there exists a Hausdorff continuous solution u of (1) satisfying the inequalities u1 ≤ u ≤ u2.
The proof of the above theorem, similar to the other existence theorems in the theory of viscosity solutions, uses Perron's method and the solution is constructed as a supremum of a set of subsolutions, this time the supremum being taken in the poset H(Ω) and not pointwise. We should note that due to the fact that the poset H(Ω) is Dedekind order complete it is an appropriate medium for such an application of Perron's method. As in the theory of continuous viscosity solutions, the uniqueness of H-continuous viscosity solutions can be obtained from a comparison principle, which can be formulated for H-continuous functions as follows.
Definition 7. The Hamilton-Jacobi equation (1) satisfies the comparison principle on the set H(Ω) of all H-continuous functions on Ω if for any subsolution u ∈ H(Ω) and any supersolution v ∈ H(Ω) we have (u(x) ≤ v(x), x ∈ ∂Ω) =⇒ (u(x) ≤ v(x), x ∈ Ω).
It is easy to see that the above comparison principle neither follows from nor implies the comparison principle for point-valued functions, e.g. see [3]. The reason is that the partial order of intervals as defined in [10] and used here does not imply a respective ordering between arbitrary points in these intervals. Hence the issue arises of characterizing the equations satisfying the comparison principle in Definition 7. Nevertheless we may note that in the case when u and v are point-valued on the boundary ∂Ω the comparison principle in Definition 7 follows from the comparison principle for point-valued functions.
5
Conclusion
The H-continuous functions are a particular class of interval-valued functions. Nevertheless, recent results have shown that they can provide solutions to problems formulated in terms of point-valued functions, [1,2]. In this paper H-continuous functions are linked with the concept of viscosity solutions. As shown in the Introduction the definition of viscosity solution, see Definition 2, has an implicit interval character since it places requirements only on the upper semi-continuous envelope S(u) and the lower semi-continuous envelope I(u). For an H-continuous viscosity solution u the functions I(u) and S(u) are as close as they can be in the sense of the Hausdorff distance ρ defined in [13], namely, we have ρ(I(u), S(u)) = 0. Hence, the requirement that a viscosity solution is H-continuous has a direct interpretation which we find clearer than the requirements related to some other concepts of discontinuous viscosity solutions. Further research will focus on extending other aspects of the classical theory to the case of H-continuous viscosity solutions. In particular this includes characterizing the equations satisfying the comparison principle in Definition 7.
References 1. Anguelov, R.: Dedekind order completion of C(X) by Hausdorff continuous functions. Quaestiones Mathematicae 27, 153–170 (2004) 2. Anguelov, R., Rosinger, E.E.: Solving Large Classes of Nonlinear Systems of PDE’s. Computers and Mathematics with Applications 53, 491–507 (2007) 3. Bardi, M., Capuzzo-Dolcetta, I.: Optimal Control and Viscosity Solutions of Hamilton-Jacobi-Bellman Equations. Birk¨ auser, Basel (1997) 4. Barles, G.: Discontinuous viscosity solutions of first order Hamilton-Jacobi Equations. Nonlinear Anal.: Theory, Methods and Applications 20(9), 1123–1134 (1993) 5. Barron, E.N., Jensen, R.: Semicontinuous viscosity solutions for Hamilton-Jacobi equations with convex Hamiltonians. Communications in Partial Differential Equations 15(12), 1713–1742 (1990) 6. Crandal, M.G., Ishii, H., Lions, P.-L.: User’s guide to viscosity solutions of second order partial differential equations. Bulletin of AMS 27(1), 1–67 (1992) 7. Dimitrova, N.S., Krastanov, M.I.: Stabilization of a Nonlinear Anaerobic Wastewater Treatment Model. In: Lirkov, I., Margenov, S., Wa´sniewski, J. (eds.) LSSC 2005. LNCS, vol. 3743, pp. 208–215. Springer, Heidelberg (2006) 8. Ishii, H.: Perron’s method for Hamilton-Jacobi equations. Duke Mathematical Journal 55(2), 369–384 (1987) 9. Krastanov, M.I., Dimitrova, N.S.: Stabilizing feedback of a nonlinear process involving uncertain data. Bioprocess and Biosystems Engineering 25(4), 217–220 (2003) 10. Markov, S.: Extended interval arithmetic involving infinite intervals. Mathematica Balkanica 6, 269–304 (1992) 11. Oberguggenberger, M.B., Rosinger, E.E.: Solution on Continuous Nonlinear PDEs through Order Completion. North-Holland, Amsterdam (1994) 12. Perthame, B., Souganidis, P.E.: Asymmetric potentials and motor effect: a homogenization approach. Ann. I. H. Poincar´e — AN (2008), doi:10.1016/j.anihpc.2008.10.003 13. Sendov, B.: Hausdorff Approximations. Kluwer, Dordrecht (1990)
Stochastic Skiba Sets: An Example from Models of Illicit Drug Consumption Roswitha Bultmann, Gustav Feichtinger, and Gernot Tragler Vienna University of Technology (TU Wien), Institute for Mathematical Methods in Economics (IWM), Research Unit for Operations Research and Control Systems (ORCOS), Argentinierstr. 8/105-4, A-1040 Wien, Austria
[email protected],
[email protected],
[email protected]
Abstract. Skiba or DNSS sets are an important feature of many deterministic (usually convex or at least non-concave) optimal control models. For stochastic models they have hardly been studied. Using a classical discretization scheme, we consider a time-discrete stochastic reformulation of a well-known optimal control model of illicit drug consumption, in which the retail drug price is influenced by exogenous random forces. We assume that these exogenous forces are described by i.i.d. random variables on a finite probability space. Having set up the model in this way, we use techniques from dynamic programming to determine the optimal solution and describe the transient sets that constitute the stochastic Skiba/DNSS sets. We also show that the DNSS sets expand with the variance, and the optimal policy becomes a continuous function of the state for sufficiently high levels of the variance. Keywords: Australian heroin drought, DNSS point, drug epidemic, drug policy, market disruption, stochastic optimal control, supply shock, treatment.
1
Introduction
More than ten years ago, the first optimal control models of illicit drug epidemics were studied ([1,17]; see also [2,18]). Since then, realism and hence complexity of the models have increased steadily, while the focus has remained restricted to a deterministic setup. Motivated by the Australian heroin drought (e.g., [19,10]), [4,5] discussed market disruptions in a two-stage setting, in which the drug price changes exactly once at a given time but is otherwise kept constant. That approach captures the very basics of a supply shock (i.e., high/low price in the drought/glut phase and lower/higher price thereafter), but of course it is unrealistic to know in advance how long a given market disruption will last and what the price will be once the disruption is over. One possible move towards more realism is to consider stochastic optimal control models, which is what we do in
Fig. 1. Schematic illustration of a deterministic DNSS point
this paper. To the best of our knowledge, this constitutes the first attempt in that direction. In the simplest case, for deterministic discounted infinite horizon optimal control problems that are autonomous we find a unique optimal steady state solution (often and in what follows mostly referred to as 'equilibrium') to which the dynamical system converges when the optimal policy is applied. If, however, the optimized system exhibits multiple equilibria, so-called DNSS or Skiba sets may occur. In such a case, the DNSS sets constitute the borders of the basins of attraction of the steady states. Being at a DNSS point, one is indifferent between at least two optimal policies leading to different equilibria. In recognition of studies by Sethi [13,14], Skiba [15], Dechert, and Nishimura [8], these points of indifference are denoted as DNSS (Dechert-Nishimura-Sethi-Skiba) or Skiba points (cf. [11] or http://en.wikipedia.org/wiki/DNSS_point). Fig. 1 schematically illustrates the case of a deterministic DNSS point. For stochastic optimal control models, DNSS points have hardly been studied. According to [9], "the stochastic equivalent of a Skiba point is a transient set between two basins of attraction. The basins of attraction are recurrent sets for the Markov dynamics. The two basins of attraction are separated by a region where there is a positive probability that the dynamics will end up in the lower basin of attraction, and a positive probability they will end up in the upper basin of attraction." Reformulating this footnote definition very loosely and adapting it to our purposes so that it best fits the deterministic case, each of the equilibria is surrounded by some set of points from which these equilibria are reached with certainty (absorbing set), while there also exist other points for which there are positive probabilities that the one or the other equilibrium will be approached. In other words, in stochastic optimal control models there may exist a set of points for which the long-run outcome is uncertain, and for obvious reasons we will call this set the set of uncertainty or stochastic DNSS set. Fig. 2 schematically illustrates the case of a stochastic DNSS set. (A remark is in order: in stochastic models, an 'equilibrium' in general is described by an invariant set of points which are visited alternately in the long run with some positive probability; the set of probabilities associated with these points constitutes the limit distribution, which describes the long-run frequency of occurrence of these points. For ease of exposition, we stick to the notation 'equilibrium'.) The further contents of this paper are as follows. In Section 2 we present the time-discrete stochastic optimal control model of drug use including a
Fig. 2. Schematic illustration of a stochastic DNSS set
parameterization based on data from the current U.S. cocaine epidemic. Section 3, which is devoted to the analysis via techniques from dynamic programming, reveals that the occurrence of stochastic DNSS points crucially hinges on the variance of the random variables. Section 4 further discusses the results and concludes.
2
The Model
Optimal control models both capture the dynamic aspects of drug epidemics and seek to optimize some given objective via control instruments (e.g., budgets for prevention, treatment, or price-raising law enforcement measures). Here – as in most other existing optimal control models of drug use – the goal is to minimize the discounted stream of social costs arising from drug consumption plus the control costs over an infinite planning horizon subject to the dynamic evolution of the population of drug users. In mathematical terms, we consider the following time-discrete stochastic reformulation of a well-known model of illicit drug consumption, which we obtain by using a classical discretization scheme:

\[
\min_{u_t \ge 0}\; \mathbb{E}\left[\sum_{t=0}^{\infty} \underbrace{h\,(1-rh)^{t}}_{\text{time step, discounting}} \Big(\underbrace{\kappa\, p_t^{-\omega} A_t}_{\text{social cost}} + \underbrace{u_t}_{\text{control spending}}\Big)\right]
\]

subject to

\[
A_{t+1} = A_t + h\Big(\underbrace{k\, p_t^{-a} A_t\,(\bar A - A_t)}_{\text{initiation into drug use}} \;-\; \underbrace{c\, u_t^{z} A_t^{1-z}}_{\text{outflow due to `treatment'}} \;-\; \underbrace{\mu\, p_t^{b} A_t}_{\text{natural outflow}}\Big),
\]
where ut, At, and pt denote control spending (in the present context called 'treatment'), the number of drug users, and the drug retail price at time t, respectively.
Table 1. Model parameters with numerical values and description

Parameter  Value              Description
h          0.001              time step
r          0.04               annual discount rate
κ          3.396819914        social costs including a proportionality constant
p̄          0.12454            drug retail price (in 1000 US-$)
ω          0.5                absolute value of the short-run elasticity of demand
k          0.00000001581272   initiation proportionality constant
a          0.25               absolute value of the elasticity of initiation with respect to drug price
Ā          16,250,000         maximum number of drug users in the model
c          0.043229           treatment proportionality constant
z          0.52               1 − z reflects treatment's diminishing effects
μ          0.181282758        rate of natural outflow including a proportionality constant
b          0.25               elasticity of quitting with respect to drug price
We want to stress the fact that the considered model is not an approximation to a continuous stochastic model. While in the deterministic model (discussed in detail in its time-continuous version in [11]) the drug price is kept constant over time, in our stochastic reformulation we allow the retail price to be influenced by exogenous random forces. Here, we assume that pt = p̄ · ζt, where p̄ is some (constant) average price and {ζt}t≥0 is a family of positive i.i.d. random variables with IEζt = 1, taking values in a finite set Z. For simplicity we further assume that Z = {Z1, Z2, Z3} with respective probabilities P = {P1, P2, P3}. In non-technical terms, the price evolves randomly but may only take three different values. This simplifying assumption was made primarily to reduce the computation time to a minimum. For Z, we will consider the three cases {0.5, 1, 1.3}, {0.25, 1, 1.45}, and {0.15, 1, 1.51} with corresponding variances 0.12, 0.27, and 0.3468, respectively, but use the same set of probabilities P = {0.3, 0.2, 0.5}. The base case parameter values for the following numerical analysis are given in Table 1 (see [5] for the derivation of these values).
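The stochastic ingredients just described are easy to reproduce numerically. The sketch below (an illustration only, not the implementation used for the results) builds the three-point shock distribution for the intermediate case, checks its mean and variance, and performs a few state transitions with the base-case parameters of Table 1 and an arbitrarily chosen control level:

```python
import numpy as np

# Three-point distribution of the price shock (variance 0.27 case from the text)
Z = np.array([0.25, 1.0, 1.45])
P = np.array([0.3, 0.2, 0.5])
print(P @ Z, P @ (Z - P @ Z) ** 2)   # mean = 1, variance = 0.27

# Base-case parameters (Table 1)
h, p_bar = 0.001, 0.12454
k, a, A_bar = 1.581272e-8, 0.25, 16_250_000
c, z, mu, b = 0.043229, 0.52, 0.181282758, 0.25

def step(A, u, zeta):
    """One state transition: A_{t+1} = A_t + h*(initiation - treatment outflow - natural outflow)."""
    p = p_bar * zeta
    return A + h * (k * p**(-a) * A * (A_bar - A) - c * u**z * A**(1 - z) - mu * p**b * A)

rng = np.random.default_rng(1)
A, u = 2_000_000.0, 50.0             # illustrative initial state and control spending
for _ in range(10):
    A = step(A, u, rng.choice(Z, p=P))
print(A)
```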
3
Analysis
We use techniques from dynamic programming to determine the optimal solution and describe the transient sets that constitute the sets of uncertainty. In particular, we use a modified policy iteration algorithm as described in [3], in which each Bellman operator iteration is followed by 1000 policy operator iterations (cf. [9]). For better performance of the solution algorithm, we have recently started to implement an adaptive space discretization scheme as proposed by [12].
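For readers unfamiliar with the scheme, the following is a rough sketch of modified policy iteration on a discretized state space; it reflects our reading of [3] and [9] rather than the actual code used here, and the grid handling and interpolation are deliberately simplistic. The per-period discount factor is beta = 1 − rh, and cost and step stand for the stage cost and the state transition of Section 2:

```python
import numpy as np

def modified_policy_iteration(A_grid, U_grid, Z, P, cost, step, beta,
                              n_policy=1000, n_outer=200, tol=1e-8):
    """Bellman update followed by n_policy policy-evaluation sweeps."""
    interp = lambda V, A_next: np.interp(A_next, A_grid, V)   # linear interpolation in the state
    V = np.zeros(len(A_grid))
    for _ in range(n_outer):
        # Bellman operator: minimize the expected cost-to-go over the control grid
        Q = np.empty((len(A_grid), len(U_grid)))
        for j, u in enumerate(U_grid):
            Q[:, j] = sum(p * (cost(A_grid, u, zeta) + beta * interp(V, step(A_grid, u, zeta)))
                          for zeta, p in zip(Z, P))
        policy = Q.argmin(axis=1)
        V_new = Q[np.arange(len(A_grid)), policy]
        # Policy operator: evaluate the current feedback policy n_policy times
        u_pol = U_grid[policy]
        for _ in range(n_policy):
            V_new = sum(p * (cost(A_grid, u_pol, zeta) + beta * interp(V_new, step(A_grid, u_pol, zeta)))
                        for zeta, p in zip(Z, P))
        if np.max(np.abs(V_new - V)) < tol:
            break
        V = V_new
    return V_new, policy
```

Supplying grids for A and u, the stage cost h(κ(p̄ζ)^(−ω)A + u), and the transition of Section 2 then yields a discretized value function and feedback policy whose induced state increments can be inspected as in Fig. 3.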
In Fig. 3 we present the space increments f (At , u∗t , ζt ) := At+1 − At , so
\[
f(A_t, u_t^{*}, \zeta_t) = h\Big(k\,(\bar p\,\zeta_t)^{-a} A_t\,(\bar A - A_t) - c\,(u_t^{*})^{z} A_t^{1-z} - \mu\,(\bar p\,\zeta_t)^{b} A_t\Big)
\]
(left panels) and the optimal feedback controls u∗t(At) (right panels) for variances 0.12 (top), 0.27 (center), and 0.3468 (bottom). Note that the space increments f(At, u∗t, ζt) of course depend on the realization of ζt, so we find one curve for each possible value in Z = {Z1, Z2, Z3}. If for a given value of At all f(At, u∗t, ζt) are negative (positive), then the number of users decreases (increases) with certainty for that value of At. If, however, the sign of f(At, u∗t, ζt) is not unique (i.e., we find curves both above and below the horizontal axis for a given value of At), then the state can both decrease and increase, depending on the particular value of ζt. The case with the lowest value of the variance (top panels in Fig. 3) is most similar to the deterministic model. We find a unique DNSS point, which separates the basins of attraction of a low equilibrium at 0 and a high equilibrium denoted by Â. At the DNSS point, the optimal control is discontinuous. While in the deterministic model the equilibria are single points, in our stochastic reformulation the higher equilibrium is an interval, i.e. an invariant set of points. Increasing the variance, the qualitative results remain the same except for the fact that the DNSS point spreads out to an interval, which constitutes our set of uncertainty/stochastic DNSS set, which – in the terminology of [9] – is a transient set between two basins of attraction (center panels in Fig. 3). It is interesting to note, though, that the optimal control still has (only) one jump, which in our case coincides with the upper boundary of the stochastic DNSS set. Finally, when the variance becomes sufficiently high, the DNSS property disappears, i.e., for all initial values we converge to the low equilibrium at 0 (bottom panels in Fig. 3). In this case, the optimal control is continuous over the whole state space. It may, however, take a fairly long time to 'escape' the transient set (indicated by the dashed arrow in Fig. 3), which is what [6] denoted as near invariance (see also the newer results in [7]). (For an interpretation of the non-monotonic behavior of the optimal feedback controls as displayed in the right panels of Fig. 3 see, e.g., [17,18].)
4
Discussion, Extensions, and Concluding Remarks
Motivated by the Australian heroin drought and [9], we were able to compute a stochastic DNSS set as sketched in Fig. 2. From the policy point of view it is interesting to note, though, that there is still a single point in the state space separating two optimal policy strategies, which qualitatively look the same as for the deterministic model and are denoted as 'eradication' and 'accommodation' strategies, respectively (see the center panels in Fig. 3). (The eradication/accommodation strategy refers to the left/right branch of the optimal policy and uses extensive/moderate levels of control; in the deterministic model, the eradication/accommodation strategy leads to a low/high equilibrium.) In our stochastic setting, however, around this point we find an area in which the long-run outcome
Fig. 3. Space increments f (At , u∗t , ζt ) (left panels; for each of the three curves, ζt is kept constant at Z1 , Z2 , Z3 , respectively) and optimal feedback controls u∗t (right panels) for variances 0.12 (top), 0.27 (center), and 0.3468 (bottom). The vertical lines indicate values of At , where we find a change in the sign of the space increment. Left/right solid arrows indicate a decrease/an increase, respectively, of the number of users under the optimal control. With Aˆ we denote the upper equilibrium, which is an invariant set. Within the stochastic DNSS set (center panel, indicated with a question mark) the long-run outcome is uncertain. Within the area for which we display a dashed arrow (bottom panel), the number of users can both increase and decrease, but sooner or later the system will leave this area at the lower boundary and converge to the low equilibrium at 0; that area is hence a transient set but can be ‘nearly invariant’ in that the system remains there for a very long time.
is uncertain; only when this area is left will either the low-level or the high-level equilibrium be approached with certainty. That means, even though the eradication strategy may be optimal initially, a sequence of 'bad' realizations of the drug price might push the epidemic up so that the accommodation strategy becomes optimal. Analogously, we may find scenarios where the accommodation strategy is optimal initially, but a sequence of 'good' realizations of the drug price may pull the epidemic down so that the eradication strategy becomes optimal. However, once this area of uncertainty is left, we can say that the drug epidemic's fate is sealed. An immediate and indeed positive conclusion is that with our stochastic approach we can confirm well-established results from the deterministic case, but of course the solution structure is much richer and more realistic. We further showed that the size of the variance has a strong impact on the optimal solution. If the variance is small enough, the system behaves as in the deterministic case (except for the fact that one equilibrium extends from a point to an invariant set). For intermediate values of the variance, a stochastic DNSS set emerges from what was previously a single point. Finally, if the variance becomes high enough, the DNSS property disappears due to the loss of invariance of the upper equilibrium and we observe near invariance. These results are consistent with those by [6,16]. The immediate next goal is to compute a full bifurcation diagram with respect to the variance, where we expect to be able to determine the bifurcation point at which the invariant set constituting the high equilibrium loses its invariance and becomes transient or 'nearly invariant'. Many extensions seem to be worthwhile. (1) For future work it seems necessary to clarify the theoretical concepts of stochastic DNSS sets to provide a concise definition. Therefore we plan to concentrate on simpler models exhibiting the DNSS property, which may help to work out more precisely what is model specific and what is an inherent property of DNSS sets. (2) The step (back) from the discrete to the continuous case seems important for some following work. (3) We increased the variance of the three-point random variable by increasing and decreasing the extremal values. Alternatively, this could be done by enlarging the probabilities of the extremal values. It will be interesting to see if the latter approach may also give rise to the observed Skiba/DNSS phenomenon.
Acknowledgement This research was partly financed by the Austrian Science Fund (FWF) within the research grant No. P18527-G14 ("Optimal Control of Illicit Drug Epidemics"). We thank three anonymous referees and Lars Grüne from the University of Bayreuth for many constructive ideas for how to improve earlier versions of this paper.
References 1. Behrens, D.A.: The US cocaine epidemic: an optimal control approach to heterogeneous consumption behavior. PhD Thesis, Institute for Econometrics, Operations Research and Systems Theory, Vienna University of Technology (1998)
246
R. Bultmann, G. Feichtinger, and G. Tragler
2. Behrens, D.A., Caulkins, J.P., Tragler, G., Feichtinger, G.: Optimal control of drug epidemics: prevent and treat – but not at the same time? Management Science 46(3), 333–347 (2000) 3. Bertsekas, D.P.: Dynamic Programming and Optimal Control, 2nd edn., vol. II. Athena Scientific, Belmont (2001) 4. Bultmann, R., Caulkins, J.P., Feichtinger, G., Tragler, G.: Modeling Supply Shocks in Optimal Control Models of Illicit Drug Consumption. In: Lirkov, I., Margenov, S., Wa´sniewski, J. (eds.) LSSC 2007. LNCS, vol. 4818, pp. 285–292. Springer, Heidelberg (2008) 5. Bultmann, R., Caulkins, J.P., Feichtinger, G., Tragler, G.: How should policy respond to disruptions in markets for illegal drugs? Contemporary Drug Problems 35(2&3), 371–395 (2008) 6. Colonius, F., Gayer, T., Kliemann, W.: Near invariance for Markov diffusion systems. SIAM Journal of Applied Dynamical Systems 7(1), 79–107 (2008) 7. Colonius, F., Homburg, A.J., Kliemann, W.: Near invariance and local transience for random diffeomorphisms. Journal of Difference Equations and Applications (forthcoming) 8. Dechert, W.D., Nishimura, K.: A complete characterization of optimal growth paths in an aggregated model with a non-concave production function. Journal of Economic Theory 31(2), 332–354 (1983) 9. Dechert, W.D., O’Donnell, S.I.: The stochastic lake game: A numerical solution. Journal of Economic Dynamics and Control 30, 1569–1587 (2006) 10. Degenhardt, L., Reuter, P., Collins, L., Hall, W.: Evaluating explanations of the Australian heroin drought. Addiction 100, 459–469 (2005) 11. Grass, D., Caulkins, J.P., Feichtinger, G., Tragler, G., Behrens, D.A.: Optimal Control of Nonlinear Processes — With Applications in Drugs, Corruption, and Terror. Springer, Heidelberg (2008) 12. Gr¨ une, L.: Error estimation and adaptive discretization for the discrete stochastic Hamilton-Jacobi-Bellman equation. Numerische Mathematik 99, 85–112 (2004) 13. Sethi, S.P.: Nearest feasible paths in optimal control problems: Theory, examples, and counterexamples. Journal of Optimization Theory and Applications 23(4), 563–579 (1977) 14. Sethi, S.P.: Optimal advertising policy with the contagion model. Journal of Optimization Theory and Applications 29(4), 615–627 (1979) 15. Skiba, A.K.: Optimal growth with a convex-concave production function. Econometrica 46(3), 527–539 (1978) 16. Stachursky, J.: Stochastic growth with increasing returns: stability and path dependence. Studies in Nonlinear Dynamics & Econometrics 7(2), Article 1 (2003) 17. Tragler, G.: Optimal Control of Illicit Drug Consumption: Treatment versus Enforcement. PhD Thesis, Institute for Econometrics, Operations Research and Systems Theory, Vienna University of Technology (1998) 18. Tragler, G., Caulkins, J.P., Feichtinger, G.: Optimal dynamic allocation of treatment and enforcement in illicit drug control. Operations Research 49(3), 352–362 (2001) 19. Weatherburn, D., Jones, C., Freeman, K., Makkai, T.: Supply control and harm reduction: Lessons from the Australian heroin “drought”. Addiction 98(1), 83–91 (2002)
Classical and Relaxed Optimization Methods for Nonlinear Parabolic Optimal Control Problems I. Chryssoverghi, J. Coletsos, and B. Kokkinis Department of Mathematics, National Technical University of Athens Zografou Campus, 15780 Athens, Greece
[email protected],
[email protected],
[email protected]
Abstract. A distributed optimal control problem is considered, for systems defined by parabolic partial differential equations. The state equations are nonlinear w.r.t. the state and the control, and the state constraints and cost depend also on the state gradient. The problem is first formulated in the classical and in the relaxed form. Various necessary conditions for optimality are given for both problems. Two methods are then proposed for the numerical solution of these problems. The first is a penalized gradient projection method generating classical controls, and the second is a penalized conditional descent method generating relaxed controls. Using relaxation theory, the behavior in the limit of sequences constructed by these methods is examined. Finally, numerical examples are given.
1
Classical and Relaxed Optimal Control Problems
Let Ω be a bounded domain in Rd with boundary Γ , and I = (0, T ), T < ∞, an interval. Set Q := Ω×I, Σ := Γ ×I, and consider the parabolic state equation yt + A(t)y + a0 (x, t)T ∇y + b(x, t, y(x, t), w(x, t)) = f (x, t, y(x, t), w(x, t)) in Q, y(x, t) = 0 in Σ, y(x, 0) = y 0 (x) in Ω, where A(t) is the second order elliptic differential operator A(t)y := −
d d
(∂/∂xi )[aij (x, t)∂y/∂xj ].
j=1 i=1
We denote by | · | the Euclidean norm in Rn , by (·, ·) and · the inner product and norm in L2 (Ω), by (·, ·)Q and · Q the inner product and norm in L2 (Q), by (·, ·)1 and · 1 the inner product and norm in the Sobolev space V := H01 (Ω), and by < ·, · > the duality bracket between the dual V ∗ = H −1 (Ω) and V . The state equation will be interpreted in the following weak form < yt , v > +a(t, y, v) + (a0 (t)T ∇y, v) + (b(t, y, w), v) = (f (t, y, w), v), ∀v ∈ V, a.e. in I, I. Lirkov, S. Margenov, and J. Wa´ sniewski (Eds.): LSSC 2009, LNCS 5910, pp. 247–255, 2010. c Springer-Verlag Berlin Heidelberg 2010
248
I. Chryssoverghi, J. Coletsos, and B. Kokkinis
y(t) ∈ V a.e. in I, y(0) = y 0 , where the derivative yt is understood in the sense of V -vector valued distributions, and a(t, ·, ·) denotes the usual bilinear form on V × V associated with A(t) d d ∂y ∂v a(t, y, v) := a (x, t) ∂x dx. Ω ij i ∂xj j=1 i=1
We define the set of classical controls W := {w : Q → U | w measurable} ⊂ L∞ (Q) ⊂ L2 (Q),
where U is a compact subset of Rd , and the functionals Gm (w) := Q gm (x, t, y, ∇y, w)dxdt, m = 0, ..., q. The classical optimal control problem is to minimize G0 (w) subject to the constraints w ∈ W,
Gm (w) = 0,
m = 1, ..., p,
Gm (w) 0,
m = p + 1, ..., q.
It is well known that, even if the set U is convex, the classical problem may have no solutions. The existence of such a solution is usually proved under strong, often unrealistic for nonlinear systems, convexity assumptions such as the Cesari property. Reformulated in the so-called relaxed form, the problem is convexified in some sense and has a solution in a larger space under weaker assumptions. Next, we define the set of relaxed controls (Young measures; for the relevant theory, see [6], [5]) 1 ∗ R := {r : Q → M1 (U ) |r weakly measurable} ⊂ L∞ w (Q, M (U )) ≡ L (Q, C(U )) ,
where M (U ) (resp. M1 (U )) is the set of Radon (resp. probability) measures on U . The set R is endowed with the relative weak star topology of L1 (Q, C(U ))∗ , and R is convex, metrizable and compact. If we identify every classical control w(·) with its associated Dirac relaxed control r(·) = δw(·) , then W may be also regarded as a subset of R, and W is thus dense in R. For φ ∈ L1 (Q, C(U )) = ¯ C(U )) (or φ ∈ B(Q, ¯ U ; R), where B(Q, ¯ U ; R) is the set of Caratheodory L1 (Q, functions in the sense of Warga [6]) and r ∈ L∞ w (Q, M (U )) (in particular, for r ∈ R), we shall use for simplicity the notation φ(x, t, r(x, t)) := U φ(x, t, u)r(x, t)(du), where φ(x, t, r(x, t)) is thus linear (under convex combinations, if r ∈ R) in r. A sequence (rk ) converges to r ∈ R in R iff lim Q φ(x, t, rk (x, t))dxdt = Q φ(x, t, r(x, t))dxdt, k→∞
¯ U ; R), or φ ∈ C(Q ¯ × U ). for every φ ∈ L1 (Q; C(U )), or φ ∈ B(Q, The relaxed optimal control problem is then defined by replacing w by r (with the above notation) and W by R in the continuous classical problem. In what follows, we shall make some of the following groups of assumptions.
Classical and Relaxed Optimization Methods
249
Assumptions 1. The boundary Γ is Lipschitz (e.g. appropriately piecewise C 1 , or polyhedral) if b = 0; else, Γ is C 1 and n 3. The coefficients aij satisfy the ellipticity condition d d
aij (x, t)zi zj θ
j=1 i=1
d i=1
zi2 ,
∀zi , zj ∈ R,
∀(x, t) ∈ Q,
with θ > 0, aij ∈ L∞ (Q). We have a0 ∈ L∞ (Q)d , and the functions b,f are defined on Q×R×U, measurable for fixed y, u, continuous for fixed x, t, and satisfy |b(x, t, y, u)| φ(x, t) + βy 2 , b(x, t, y, u)y 0, |f (x, t, y, u)| ψ(x, t) + γ |y| , ∀(x, t, y, u) ∈ Q × R × U, with φ, ψ ∈ L2 (Q), β, γ 0, b(x, t, y1 , u) b(x, t, y2 , u), ∀(x, t, y1 , y2 , u) ∈ Q × R2 × U, with y1 y2 , |f (x, t, y1 , u) − f (x, t, y2 , u)| L |y1 − y2 | , ∀(x, t, y1 , y2 , u) ∈ Q × R2 × U. Assumptions 2. The functions gm are defined on Q × Rd+1 × U, measurable for fixed y, y¯, u, continuous for fixed x, t, and satisfy 2 |gm (x, t, y, y¯, u)| ζm (x, t) + δm y 2 + δ¯m |¯ y| ,
∀(x, t, y, y¯, u) ∈ Q × Rd+1 × U,
with ζm ∈ L1 (Q), δm 0, δ¯m 0. Assumptions 3. The functions b, by , bu , f, fy , fu (resp. gm , gmy , gm¯y , gmu ) are ˜ ), where U ˜ is an open set containing ˜ (resp. Q × Rd+1 × U defined on Q × R × U U , measurable on Q for fixed (y, u) ∈ R × U (resp. (y, y¯, u) ∈ Rd+1 × U ) and continuous on R × U (resp. Rd+1 × U ) for fixed (x, t) ∈ Q, and satisfy |by (x, t, y, u)| ξ1 (x, t) + η1 |y| , |fy (x, t, y, u)| L1 , |bu (x, t, y, u)| ζ2 (x, t) + η2 y 2 , |fu (x, t, y, u)| ξ3 (x, t) + η3 |y|, ∀(x, t, y, u) ∈ Q × R × U, |gmy (x, t, y, y¯, u)| ζm1 (x, t) + δm1 |y| + δ¯m1 |¯ y| , y| , |gm¯y (x, t, y, y¯, u)| ζm2 (x, t) + δm2 y 2 + δ¯m2 |¯ 2 |gmu (x, t, y, y¯, u)| ζm3 (x, t) + δm3 y 2 + δ¯m3 |¯ y| , ∀(x, t, y, y¯, u) ∈ Q × Rd+1 × U, with ξ1 , ξ2 , ξ3 , ζm1 , ζm2 , ζm3 ∈ L2 (Q), η1 , η2 , η3 , δm1 , δ¯m1 , δm2 , δ¯m2 , δm3 , δ¯m3 0. Proposition 1. Under Assumptions 1, for every control r ∈ R and y 0 ∈ L2 (Ω), the relaxed state equation has a unique solution y := yr such that y ∈ L2 (I, V ) and yt ∈ L2 (I, V ∗ ). Proposition 2. Under Assumptions 1, the operator w
→ yw , from W to = 0, and the operator r
→ yr , from R to L2 (I, V ), and to L2 (I, L4 (Ω)) if b L2 (I, V ), and to L2 (I, L4 (Ω)) if b = 0, are continuous. Under Assumptions 1 → Gm (r) on R, are continuous. and 2, the functionals w
→ Gm (w) on W , and r
Theorem 1. Under Assumptions 1 and 2, if the relaxed problem is feasible, then it has a solution.
250
I. Chryssoverghi, J. Coletsos, and B. Kokkinis
We give below some results concerning necessary conditions for optimality, which can be proved by using the techniques of [6] and [4] (see also [5]). Lemma 1. We suppose that Assumptions 1-3 hold and that the derivatives in u are excluded in Assumptions 3. Dropping the index m in gm , Gm , the directional derivative of the functional G, defined on R, is given by
DG(r, r − r) := lim G(r+ε(r −r))−G(r) ε ε→0+ = Q H(x, t, y, ∇y, z, r (x, t) − r(x, t))dxdt, for r, r ∈ R, where the Hamiltonian H is defined by H(x, t, y, y¯, z, u) := z[f (x, t, y, u) − b(x, t, y, u)] + g(x, t, y, y¯, u), and the adjoint state z := zr satisfies the linear adjoint equation − < zt , v > +a(t, v, z) + (aT0 ∇v, z) + (zby (y, r), v) = (zfy (y, r) + gy (y, r), v) + (gy¯(y, ∇y, r), ∇v), z(t) ∈ V a.e. in I,
∀v ∈ V, a.e. in I,
z(T ) = 0, with y := yr .
The mappings r
→ zr , from R to L2 (Q), and (r, r )
→ DG(r, r − r), from R × R to R, are continuous. Theorem 2. Under Assumptions 1-3 and with the derivatives in u excluded in Assumptions 3, if r ∈ R is optimal for either the relaxed or the classical problem, then r is strongly extremal relaxed, i.e. there exist multipliers λm ∈ R, q m = 0, ..., q, with λ0 0, λm 0, m = p + 1, ..., q, |λm | = 1, such that m=0 q (1) λm DGm (r, r − r) 0, ∀r ∈ R, m=0
(2) λm Gm (r) = 0, m = p + 1, ..., q (relaxed transversality conditions). The condition (1) is equivalent to the strong relaxed pointwise minimum principle (3)
H(x, t, y(x, t), ∇y(x, t), z(x, t), r(x, t))
=
min H(x, t, y(x, t), ∇y(x, t), u∈U
z(x, t), u), a.e. in Q, where the complete Hamiltonian and adjoint H, z are deq fined with g := λm gm . m=0
If in addition U is convex, then the minimum principle (3) implies the weak relaxed pointwise minimum principle (4) Hu (x, t, y, ∇y, z, r(x, t))r(x, t) = min Hu (x, t, y, ∇y, z, r(x, t))φ(x, t, r(x, t)), a.e. in Q, φ
where the minimum is taken over the set B(Q, U ; U ) of Caratheodory functions φ : Q × U → U (see [6]), which in turn implies the global weak relaxed condition (5) Q Hu (x, t, y, ∇y, z, r(x, t))[φ(x, t, r(x, t))−r(x, t)]dxdt 0, ∀φ ∈ B(Q, U ; U ). A control r satisfying the conditions (5) and (2) is called weakly extremal relaxed.
Classical and Relaxed Optimization Methods
251
Lemma 2. Under Assumptions 1-3 and dropping the index m, the directional derivative of the functional G, here defined on W , is given by DG(w, w − w) = lim+ G(w+ε(w ε−w)−G(w) ε→0 = Q Hu (x, t, y, ∇y, z)(w − w)dxdt, for w, w ∈ W , where the adjoint state z := zw satisfies the equation − < zt , v > +a(t, v, z) + (aT0 ∇v, z) + (zby (y, w), v) = (zfy (y, w) + gy (y, ∇y, w), v) + (gy¯(y, ∇y, w), ∇v), z(t) ∈ V a.e. in I,
z(T ) = 0,
∀v ∈ V, a.e. in I,
with y := yw .
The mappings w
→ zw , from W to L2 (Q), and (w, w )
→ DG(w, w − w), from W × W to R, are continuous. Theorem 3. We suppose that Assumptions 1-3 hold and that U is convex. If w ∈ W is optimal for the classical problem, then w is weakly extremal classical, i.e. there exist multipliers λm as in Theorem 2 such that q λm DGm (w, w − w) 0, ∀w ∈ W , (6) m=0
(7) λm Gm (w) = 0,
m = p + 1, ..., q
(classical transversality conditions).
The condition (6) is equivalent to the weak classical pointwise minimum principle (8) Hu (x, t, y, ∇y, z, w(x, t)) w(x, t) = min Hu (x, t, y, ∇y, z, w(x, t)) u, a.e. in Q, u∈U
where the complete Hamiltonian and adjoint H, z are defined with g :=
q
λm gm .
m=0
2
Classical and Relaxed Optimization Methods
l l Let (Mm ), m = 1, ..., q, be positive increasing sequences such that Mm → ∞ l as l → ∞, γ > 0, b , c ∈ (0, 1), and (β ), (ζk ) positive sequences, with (β l ) decreasing and converging to zero, and ζk 1. Define first the penalized functionals on W p q l l Mm [Gm (w)]2 + Mm [max(0, Gm (w))]2 }/2. Gl (w) := G0 (w) + { m=1
m=p+1
It can be easily shown that the directional derivative of Gl is given by DGl (w, w − w) = DG0 (w, w − w) p l + Mm Gm (w)DGm (w, w − w) m=1 q
+
l Mm max(0, Gm (w))DGm (w, w − w).
m=p+1
The classical penalized gradient projection method is described by the following Algorithm, where U is assumed to be convex.
252
I. Chryssoverghi, J. Coletsos, and B. Kokkinis
Algorithm 1 Step 1. Set k := 0, l := 1, and choose an initial control w01 ∈ W . Step 2. Find vkl ∈ W such that 2 2 ek := DGl (wkl , vkl −wkl )+ γ2 vkl − wkl Q = min [DGl (wkl , v¯ −wkl )+ γ2 v¯ − wkl Q ], v ¯∈W
and set dk := DGl (wkl , vkl − wkl ). Step 3. If |dk | β l , set wl := wkl , v l := vkl , dl := dk , el := ek , wkl+1 := wkl , l := l + 1, and then go to Step 2. Step 4. (Modified Armijo Step Search) Find the lowest integer value s ∈ Z, say s¯, such that α(s) = cs ζk ∈ (0, 1] and α(s) satisfies Gl (wkl + α(s)(vkl − wkl )) − Gl (wkl ) α(s)b dk , and then set αk := α(¯ s). l Step 5. Set wk+1 := wkl + αk (vkl − wkl ), k := k + 1, and go to Step 2.
A (classical or relaxed) extremal (or weakly extremal) control is called abnormal if there exist multipliers as in the corresponding optimality conditions, with λ0 = 0.An admissible extremal control is abnormal in rather exceptional situations (see [6]). With wl as defined in Step 3, define the sequences of multipliers l λlm := Mm Gm (wl ), m = 1, ..., p, l λlm := Mm max(0, Gm (wl )), m = p+1, ..., q.
Theorems 4 and 5 below are proved by using the techniques of [4]. Theorem 4. We suppose that Assumptions 1-3 hold and that U is convex. l(k) (i) In the presence of state constraints, if the whole sequence (wk ) generated 2 by Algorithm 1 converges to some w ∈ W in L strongly and the sequences (λlm ) are bounded, then w is admissible and weakly extremal for the classical problem. In the absence of state constraints, if a subsequence (wk )k∈K (no index l) converges to some w ∈ W in L2 strongly, then w is weakly extremal classical for the classical problem. (ii) In the presence of state constraints, if a subsequence (wl )l∈L of the sequence generated by Algorithm 1 in Step 3, regarded as a sequence of relaxed controls, converges to some r in R, and the sequences (λlm ) are bounded, then r is admissible and weakly extremal relaxed for the relaxed problem. In the absence of state constraints, if a subsequence (wk )k∈K (no index l) converges to some r in R, then r is weakly extremal relaxed for the relaxed problem. (iii) In any of the convergences cases (i) or (ii) with state constraints, suppose that the classical, or the relaxed, problem has no admissible, abnormal extremal, controls. If the limit control is admissible, then the sequences (λlm ) are bounded, and this control is also extremal as above. Next, define the penalized discrete functionals on R p q l l Mm [G0 (r)]2 + Mm [max(0, Gm (r))]2 }/2. Gl (r) := G0 (r) + { m=1
m=p+1
Classical and Relaxed Optimization Methods
253
The directional derivative of Gl is given by DGl (r, r − r) = DG0 (r, r − r) p l + Mm Gm (r)DGm (r, r − r) + m=1
q m=p+1
l Mm max(0, Gm (r))DGm (r, r − r).
The relaxed penalized conditional descent method is described by the following Algorithm, where U is not necessarily convex. Algorithm 2 Step 1. Set k := 0, l := 1, choose an initial control r01 ∈ R. Step 2. Find r¯kl ∈ R such that dk := DGl (rkl , r¯kl − rkl ) = min DGl (rkl , r − rkl ). r ∈R
Step 3. If |dk | β l , set rl := rkl , r¯l := r¯kl , dl := dk , rkl+1 := rkl , l := l + 1, and then go to Step 2. Step 4. Find the lowest integer value s ∈ Z, say s¯, such that α(s) = cs ζk ∈ (0, 1] and α(s) satisfies Gl (rkl + α(s)(¯ rkl − rkl )) − Gl (rkl ) α(s)bdk , and then set αk := α(¯ s). l Step 5. Choose any rk+1 ∈ R such that l ) Gl (rkl + αk (¯ rkl − rkl )), Gl (rk+1
set k := k + 1, and go to Step 2. If the initial control r01 is classical, using Caratheodory’s theorem, it can be l shown by induction that the control rk+1 in Step 5 can be chosen, for each iteration k, to be a Gamkrelidze relaxed control, i.e. a convex combination of a fixed number of classical (i.e. Dirac) controls. With rl as defined in Step 3, define the sequences of multipliers l Gm (rl ), m = 1, ..., p, λlm := Mm l l λm := Mm max(0, Gm (rl )), m = p + 1, ..., q.
Theorem 5. We suppose that Assumptions 1-3 hold, with the derivatives in u excluded in Assumptions 3. (i) In the presence of state constraints, if the subsequence (rl )l∈L of the sequence generated by Algorithm 2 in Step 3 converges to some r in R and the sequences (λlm ) are bounded, then r is admissible and strongly extremal relaxed for the relaxed problem. In the absence of state constraints, if a subsequence (rk )k∈K (no index l) converges to some r in R, then r is strongly extremal relaxed for the relaxed problem. (ii) In case (i) with state constraints, suppose that the relaxed problem has no admissible, abnormal extremal, controls. If r is admissible, then the sequences (λlm ) are bounded and r is also strongly extremal relaxed for the relaxed problem.
254
I. Chryssoverghi, J. Coletsos, and B. Kokkinis
In practice, by choosing in the above algorithms moderately growing sequences l (Mm ) and a sequence (β l ) relatively fast converging to zero, the resulting sequences of multipliers (λlm ) are often kept bounded. One can choose a fixed ζk := ζ ∈ (0, 1] in Step 4; a usually faster and adaptive procedure is to set ζ0 := 1, and ζk := αk−1 , for k 1.
3
Numerical Examples
Let Ω = I = (0, 1). Example 1. Define the reference control and state
−1, (x, t) ∈ Ω × (0, 0.25] −1 + 32(t − 0.25)x(1 − x)/3, (x, t) ∈ Ω × (0.25, 1] y¯(x, t) := x(1 − x)et , w(x, ¯ t) :=
and consider the following optimal control problem, with state equation yt − yxx + 0.5y |y| = 0.5¯ y |¯ y | + [x(1 − x) + 2]et + y − y¯ + w − w ¯ in Q, y = 0 in Σ, y(x, 0) = y¯(x, 0) in Ω, control constraint set U := [−1, 1], and cost functional 2 y | + (w − w) ¯ 2 ]dxdt. G0 (w) := 0.5 Q [(y − y¯)2 + |∇y − ∇¯ Clearly, the optimal control, state, and cost are w, ¯ y¯, 0. Algorithm 1, without penalties, was applied to this problem using the finite element method with piecewise linear basis functions and the Crank-Nicolson time scheme (step sizes 0.01), with piecewise constant classical controls, and γ = 0.5, b = c = 0.5. After 20 iterations, we obtained the results G0 (wk ) = 4.732·10−8, dk = −1.133·10−15, εk = 1.523·10−4, ηk = 1.596·10−4, where dk was defined in Step 2 of Algorithm 1, εk is the state max-error at the (x, t)-mesh points, and ηk is the control max-error at the centers of the (x, t)-blocks. Example 2. Introducing the state constraint G1 (w) := Q [y(x, t) − 0.3]dxdt = 0, in Example 1, with U := [−1, 0.8], and applying the penalized Algorithm 1, we obtained G0 (wk ) = 1.332334091748·10−2 , G1 (wk ) = −7.229·10−6, dk = −4.279·10−6. Example 3. Define the reference control and state 1, if t ∈ [0, 0.5] w(x, ¯ t) := y¯(x, t) := x(1 − x)e−t , 1 − 2(t − 0.5)(0.2x + 0.4), if t ∈ (0.5, 1], and consider the following optimal control problem, with state equation ¯ yt − yxx + 0.5y |y| + (1 + w − w)y
Classical and Relaxed Optimization Methods
255
= 0.5 y¯ |¯ y | + y¯ + [−x(1 − x) + 2]e−t + y − y¯ + 3(w − w), ¯ y(x, t) = 0 on Σ, y(0, x) = x(1 − x) in Ω, nonconvex control constraint set U := [0, 0.2] ∪ [0.8, 1](or U := {0, 1}, on/off type control), and nonconvex cost functional 2 G0 (u) := Q {0.5 [(y − y¯)2 + |∇y − ∇¯ y | ] − (w − 0.5)2 + 0.25}dxdt One can easily verify that the unique optimal relaxed control r is given by r(x, t){1} := w(x, ¯ t),
r(x, t){0} := 1 − r(x, t){1},
(x, t) ∈ Q,
with optimal state y¯ and cost 0, and we see that r is concentrated at the two points 1 and 0; r is classical if t ∈ [0, 0.5], and non-classical otherwise. Algorithm 2, without penalties, was applied to this problem using the same approximation schemes and step sizes as in Example 1, here with piecewise constant relaxed controls. After 100 iterations in k, we obtained G0 (rk ) = 4.053 · 10−5 ,
dk = −1.430 · 10−4 ,
ηk = 5.976 · 10−3 ,
where dk was defined in Algorithm 2 and ηk is the max state error at the (x, t)mesh points. Example 4. We introduce the equality state constraint G1 (w) := Q ydxdt = 0, in Example 3. Applying here the penalized Algorithm 2, we obtained after 210 iterations in k G0 (rkl ) = 7.515191183881 · 10−2 , G1 (rkl ) = 3.574 · 10−5 , dk = −3.536 · 10−3 .
References 1. Bartels, S.: Adaptive approximation of Young measure solution in scalar non-convex variational problems. SIAM J. Numer. Anal. 42, 505–629 (2004) 2. Cartensen, C., Roub´ıˇcek, T.: Numerical approximation of Young measure in nonconvex variational problems. Numer. Math. 84, 395–415 (2000) 3. Chryssoverghi, I., Coletsos, J., Kokkinis, B.: Discrete relaxed method for semilinear parabolic optimal control problems. Control Cybernet 28, 157–176 (1999) 4. Chryssoverghi, I., Coletsos, J., Kokkinis, B.: Discretization methods for optimal control problems with state constraints. J. Comput. Appl. Math. 191, 1–31 (2006) 5. Roub´ıˇcek, T.: Relaxation in Optimization Theory and Variational Calculus. Walter de Gruyter, Berlin (1997) 6. Warga, J.: Optimal Control of Differential and Functional Equations. Academic Press, New York (1972) 7. Warga, J.: Steepest descent with relaxed controls. SIAM J. Control Optim. 15, 674–682 (1977)
Exponential Formula for Impulsive Differential Inclusions Tzanko Donchev Department of Mathematics University of Architecture and Civil Engineering 1 “Hr. Smirnenski” str., 1046 Sofia, Bulgaria
[email protected] Abstract. This paper studies the graph of the reachable set of a differential inclusion with non-fixed time impulses. Using approximation in L1 –metric, we derive exponential characterization of the reachable set.
1
Preliminaries
One of the most important questions, studying optimal control problems is the description of the (time dependent) reachable set. Here we study its graph. We consider a system having the form: = τi (x(t)), x(t) ˙ ∈ F (x(t)), x(0) = x0 a.e. t ∈ I = [0, 1], t Δx|t=τi (x(t)) = Si (x), i = 1, . . . , p,
(1)
Here F : Rn ⇒ Rn is a multifunction with nonempty compact and convex values, Si : Rn → Rn are impulses (jumps) and τi : Rn → R are the times of impulses. Notice that the times of impulses of the solution x(·) are the fixed points of the maps t → τi (x(t)), i = 1, . . . , p. For convenience we will write simply τi (not τi (x(t))). Recall (compare [12, p. 34]) that the piecewise absolutely continuous function x(·) is said to be a solution of (1) if: a) x(·) is left continuous and satisfies (1) for almost all t ∈ I, t = τi (x(t)) b) It has (possible) jumps on t = τi (x(t)) (discontinuities of the first kind), defined by: Δxt=τi (x(t)) = x(t + 0) − x(t) = Si (x(t)). The aim of the paper is the characterization of the reachable set of (1). The Euler approximation studied in [2] (when the right-hand side is Lipschitz) and in [5] (under stronger assumptions on impulsive surfaces) is good for the solution set, however, in general it is not appropriate to estimate the reachable set. Now we give some definitions and notation. We refer to [4] for all concepts used in this paper but not explicitly discussed. For a bounded set A ⊂ Rn denote by σ(l, A) = sup l, a its support function. The Hausdorff distance between the a∈A
bounded sets A and B is: DH (A, B) = max{sup inf |a−b|, sup inf |a−b|}. Define a∈A b∈B
b∈B a∈A
|A| = haus(A, {0}). Let Y, K be norm spaces. A multifunction F : Y → K is I. Lirkov, S. Margenov, and J. Wa´ sniewski (Eds.): LSSC 2009, LNCS 5910, pp. 256–263, 2010. c Springer-Verlag Berlin Heidelberg 2010
Exponential Formula for Impulsive Differential Inclusions
257
said to be upper semi continuous (USC) (lower semi continuous – LSC) at x ∈ X if for every ε > 0 there exists δ > 0 such that F (x + δB) ⊂ F (x0 ) + εB (if for any y0 ∈ F (x0 ) and xi → x0 there exists yi ∈ F (xi ) with yi → y0 ). Here B denotes the unit ball centered in the origin. The multimap F is called continuous if it is continuous with respect to the Hausdorff distance. Denote hF (p, x) = −σ(−p, F (x)) as lower Hamiltonian, HF (p, x) = σ(p, F (x)) as upper Hamiltonian. A multifunction F : Rn ⇒ Rn is said to be one-sided Lipschitz (OSL) with a constant L if for each x, y ∈ Rn HF (x − y, x) − HF (x − y, y) ≤ L|x − y|2 . Standing hypotheses (SH) We suppose that F (·) is USC with linear growth condition i.e. there exist A > 0, B > 0 such that |F (x)| ≤ A + B|x|. A1. τi (·) are N –Lipschitz and Si : Rn → Rn are μ–Lipschitz. A2. τi (x + Si (x)) = τj (x) ∀ j = i for every x ∈ Rn . For i = 1, . . . , p (i ≤ p − 1 if needed) just one of the following hypotheses holds: A3. τk (x) ≥ τk (x + Sk (x)), τi (x) < τi+1 (x) for every x ∈ Rn and there exist constants δ > 0 and μ < 1 such that HF (∇τi (x), x+ δB) ≤ μ at the points where the gradient ∇τi (·) exists. A4. τi (x) ≤ τi (x + Si (x)) and τi (x) > τi+1 (x) for every x ∈ Rn . There exist constants α > 1 and δ > 0 such that hF (∇τi (x), x + δB) ≥ α at the points x, where the gradient exists. Notice that these conditions are needed in fact only for a neighborhood of the domain where the values ⎛ of solutions⎞lie. co ⎝ ∇τi (y)⎠. It is well known that it is Clarke’s Let ∂x τi (x) = δ>0
y∈x+δB
subdifferential of τi (·) (cf. [3]). Proposition 1. Under the standing hypotheses for every i = 1, . . . , p: L 1) HF (l, x + ε B) ≤ α ∀ l ∈ ∂x τi (x) and all ε < ε in case of A3. 2) hF (l, x + ε B) ≥ κ ∀ l ∈ ∂x τi (x) and all ε < ε in case of A4. The proof is almost obvious. Indeed If δ < ε, then F (y + εB) ⊃ F (x + δB) for every y ∈ x + (ε − δ)B. Furthermore lim σ(y, F (x + δB) ≥ σ(x, F (x + δB). y→x
Lemma 1. Let ε > 0 be such that A1, A2, A4 hold. For every solution of x(t) ˙ ∈ F (x(t) + εB), x(0) = x0 a.e. t ∈ I = [0, 1], t = τi (x(t)), Δx|t=τi (x(t)) = Si (x), i = 1, . . . , p.
(2)
(if it exists) the equation t = τi (x(t)) admits no more than one solution (for t). Proof. The proof s contained in [2]. However, we include it (in case of A4) here.
258
T. Donchev
Let x(·) be a solution. Consider the functions fi (t) = τi (x(t)) − t. If t is a jump point of x(·) then fi (t + 0) ≥ fi (t) due to the assumption A4. Let (s, t) be in an interval of continuity. Thus due to change rule formula for Clarke’s subdifferential: ≥s−t+
s
t
fi (t) − fi (s) = s − t + τi (x(t)) − τi (x(s))
(3)
−σ(x(r), ˙ ∂x (τi (x(t))) ≥ (t − s)(κ − 1) > 0
(4)
Therefore for every i the functions fi (·) are strongly increasing (except the points when some fj (·) with j = i has a jump) and hence fi (t) = 0 has at most one solution, which is a possible jump point.
Corollary 1. Under the conditions of Lemma 1 there exists a constant λ > 0 such that for every solution y(·) of (1) τi+1 (y(t)) − τi (y(t)) ≥ λ for i = 1, 2, . . . , p − 1 and every t ∈ [0, 1]. We let gi (t, x) = τi (x) − t. For X ⊆ Rn , t ∈ I and h > 0 (t + h ≤ 1), the one-step Euler set-valued map is given by E(t, h, X) = {x + hf +
p
χi (t, x, f )Si (x) : x ∈ X, f ∈ F (x)},
i=1
and we write E(t, h, X)1 = E(t, h, X). Here χi (t, x, f ) = 1 when gj (t, x)gj (t + h, x+hf ) > 0 ∀j < i and either gi (t, x)gi (t+h, x+hf ) < 0 or gi (t+h, x+hf ) = 0. In the other cases χi (t, x, f ) = 0. n The k-th step Euler map is defined recursively as E(t, h, X)k = E(t + (k − k=1
1)h, h, X) for k = 2, 3, .... We write E(0, h, x) as E(h, x). T and let ti = ih, i = For a given natural number k and T > 0, set h = k 0, 1, . . . , k be a uniform grid on [0, T ]. The Euler polygonal arcs corresponding to (1) are defined as the linear interpolation of a discrete time trajectory {x(tj )}. These arcs are of the form for i = 1, . . . , p = τi (x(t)), (5) x(t) = x(tj + 0) + (t − tj )fj , fj ∈ F (x(tj )), t ∈ (tj , ti+1 ], t gi (tj , x)gi (tj+1 , x + hfj ) ≤ 0 ⇒ x(tj+1 + 0) = x(tj+1 ) + Si (x(tj+1 )). Here j = 0, 1, . . . , k − 1, x(0) = x0 . The paper is organized as follows: The Euler-type approximation of the solution set of impulsive (ODI) is presented in the next section, and Section 3 is devoted to the semigroup characterization of the reachable set using modified L1 –metric.
2
Euler Approximation of the Solution Set
In this section we study Euler’s discrete approximation of (1) with the scheme (5). We need the following lemma which is Lemma 2 of [11].
Exponential Formula for Impulsive Differential Inclusions
259
Lemma 2. Let a1 , a2 , b, d ≥ 0 and let δ0+ = δ0− = δ0 . If for i = 1, 2, . . . , p δi+ ≤ a1 δi− + d, then δi− ≤ (a2 d + b)
i−1
+ δi− ≤ a2 δi−1 +b
(a1 a2 )j + δ0 (a1 a2 )i , where δ0+ ≥ 0.
j=0
Studying the problem (2) we say that the measure of distance between two solutions Dist(x(·), y(·)) ≤ ε if they intersect successively the impulsive surfaces, i.e. τi (x) ≥ τm (y) is impossible for m > i and vice p versa. Moreover p
+
− − + τi − τi < ε and |x(t) − y(t)| < ε for every t ∈ I \ [τi , τi ] . Here i=1
i=1
τi− = min{τi (x(·)), τi (y(·))} and τi+ is the maximal one. The following theorem is proved in [6] in a more general form: Theorem 1. Under SH for small ε the problem (2) has a nonempty solution set and every solution exists on the whole interval I. Furthermore there exist constants K and M such that |x(t)| ≤ M and |F (x(t) + εB)| ≤ K for every solution x(·) of (2). Theorem 2. (Lemma of Filippov–Plis) Under the standing hypotheses and OSL condition there exists a constant K such that if ε > 0 is sufficiently small, then for for any solution y(·) of √ (2) with y(0) = y0 there exists a solution x(·) of (1) with Dist(x(·), y(·)) ≤ K ε when |x0 − y0 | < ε. Proof. Let y(·) be a solution of (2). It is easy to see that y(t) ˙ ∈ F (¯ y(t)), where |y(t)− y¯(t)| < ε. We are looking for a solution x(·) of (1), satisfying the condition of the theorem. First we assume x(·) does not intersect the i + 1-th surface before y(·) intersects i-th and vice versa. We let δi− = |y(τi− ) − x(τi− )| and δi+ = |y(τi+ + 0) − x(τi+ + 0)|. Define Γ (t, z) := u ∈ F (z) : ¯ y (t) − z, y(t) ˙ − u ≤ L|¯ y(t) − z|2 . It is easy to see that Γ (·, x) is measurable for every x and Γ (t, ·) is USC for a.e. t with nonempty convex and compact values. Let x(·) be a solution of: x(t) ˙ ∈ Γ (t, x(t)), x(0) = y(0).
(6)
− One has that ¯ y (t) − x(t), y(t) ˙ − x(t) ˙ ≤ L|¯ y (t) − x(t)|2 . Let t ∈ (τi+ , τi+1 ). Consequently
y(t) − x(t), y(t) ˙ − x(t) ˙ ≤ L|x(t) − y(t)|2 + ε|x(t) ˙ − y(t)| ˙ + 2|L|ε (|x(t)| + |y(t)| + ε) , 1 d |y(t) − x(t)|2 ≤ L|x(t) − y(t)|2 + 2ε (K + |L|(2M + ε)) . 2 dt Denote r2 (t) = |x(t) − y(t)|2 one has
d 2 r (t) ≤ 2Lr2 + 4(K + 2|L|M + ε)ε. dt
260
T. Donchev
Let a2 = max eLt . Since r2 (t) is less or equal of the solution of s(t) ˙ = 2Ls + t∈I
4(K + 2|L|M + ε)ε, one has that there exists a constant P such that r(t) ≤ √ + √ eL(t−τi ) ri + P ε. Denoting b = P ε we derive r(t) ≤ a2 ri + b with ri = + + + − |x(τi + 0) − y(τi + 0)| = δi . Hence δi+1 ≤ a2 δi+ + b. Assume that τi− = τi (x(τi− )) and τi+ = τi (y(τi+ )) (the method is the same if it is not the case). We have: |y(τi+ ) − x(τi− )| ≤ |y(τi+ ) − y(τi− )| + |y(τi− ) − x(τi− )| ≤ K(τi+ − τi− ) + δi− . τi+ − τi− = τi (y(τi+ )) − τi (x(τi− ))
≥ |τi (y(τi+ )) − τi (y(τi− ))| − |τi (y(τi− )) − τi (x(τi− ))| ≥ |τi (y(τi+ )) − τi (y(τi− ))| − N δi− . Moreover: τi (y(τi+ )) − τi (y(τi− )) =
τi+
τi−
τi+ d τi (y(t)) dt ≥ min l, y(t) ˙ dt. dt l∈∂x τi (x) τi−
Thus τi (y(τi+ )) − τi (y(τi− )) ≥ α(τi+ − τi− ) and N δi− ≥ (α − 1)(τi+ − τi− ) ⇒ τi+ − τi− ≤
N δi− . α−1
(7)
Consequently for δi+ we have: δi+ = |x(τi+ ) − y(τi+ + 0)| ≤ |x(τi− ) − y(τi− )| + |Si (x(τi− )) − Si (y(τi+ ))| τi+ + |x(t) ˙ − y(t)|dt ˙ ≤ δi− + 2K(τi+ − τi− ) + μ(K(τi+ − τi− ) + δi− ) = a1 δi− , τi−
2KN + (α − 1)(1 + μ + μKN ) . Hence Lemma 2 (with d = ε and α−1 a1 , a2 , b just defined) applies. Due to (7) one has that for sufficiently small ε λ τi+ − τi− ≤ , i.e. x(·) and y(·) intersect impulsive surfaces in the same order. 10 The rest of the proof is standard. The proof in case of A3 is similar and it is omitted.
where a1 =
The following theorem is true: Theorem 3. Under SH and OSL condition for k big enough we have: For every solution x(·) of (1) there exists a solution y(·) of (5) such that 1 Dist(x(·), y(·)) ≤ O √ and vice versa. k The proof is very similar to the proof of the corresponding result of [5].
Exponential Formula for Impulsive Differential Inclusions
3
261
L1 –Metric for Piecewise Lipschitz Functions and of the Reachable Sets of Impulsive ODI
Here we will compare the L1 metric and the commonly used distance proving the continuous dependence and approximation of the solutions. Let Imp,L be the set of all L-Lipschitz on (ti (x), ti+1 (x)] functions x(·) with no more than p jump points and ti+1 (x(·)) − ti (x(·) ≥ λ. Proposition 2. The space Imp,L equipped with the usual L1 (I, Rn ) norm becomes a complete metric space. Proof. One must only show that every Cauchy sequence {xk (·)}∞ k=1 converges to a Imp,L function, because Imp,L ⊂ L1 . By Egorov’s theorem if a sequence 1 {y k (·)}∞ k=1 of L-Lipschitz functions converges in L norm to y(·) then the latter is n ∞ also L-Lipschitz. Let {x (·)}n=1 be a Cauchy sequence in Imp,L . Denote by tnj (as j = 1, 2, . . . , k) the times of (possible) jumps of xn (·). Passing to subsequences if necessary one can assume that lim tnj = tj for j = 1, 2, . . . , k. Let 0 ≤ t1 ≤ t2 ≤ n→∞
· · · ≤ tk ≤ 1. Given ε > 0 we consider Iε =
k j=1
tj −
ε ε , tj + . It is easy to see k k
that L1 limit of this (sub)sequence on I \ Iε is L-Lipschitz function. Since it is valid for every ε > 0 one can conclude that the limit function x(·) is L-Lipschitz on every (tj , tj+1 ). The L1 limit is unique and hence x(·) ∈ Imp,L .
∞
Proposition 3. Let the sequence {xm (·)}m=1 of elements of Imp,L converges l with respect to L1 (I, Rn ) to x(·). If {τi }i=1 (l ≤ k) are the impulsive points of x(·), then for every ε > 0 there exists a number n(ε) such that |x(t) − xm (t)| < ε l for every m > n(ε) and every t ∈ I \ A, where A = (τi − ε, τi + ε). i=1
Proof. Due to Proposition 2 x(·) ∈ Imp,L . Hence l ≤ k. Notice that in our settings it is possible the values of some impulses to be 0. The points of impulses of x(·) will be denoted by ψj j = 1, . . . , l. As it was pointed out in the previous proof, due to Egorov’s theorem the limit 1 function x(·) must be L–Lipschitz on every (ψi , ψi+1 ). Fix ε < min (ψj+1 − 6 1≤j≤l−1 ψj ). We have to check that on every interval (ψj + ε, ψj+1 − ε) the convergence is uniform. Suppose the contrary, i.e. ∃ s > 0 such that for every m one has sup t∈(ψj +ε,ψj+1 −ε)
|x(t) − xm (t)| > s.
Consequently for every m there exists tm ∈ (ψj + ε, ψj+1 − ε) such that |x(tm ) − xm (tm )| > s. However x(·) is L–Lipscitz on (ψj + ε, ψj+1 − ε). If tm is not a s such that I = (xm − λ, xm + λ) ⊂ jump point of xm (·), then ∃ 0 < λ < 4L (ψj + ε, ψj+1 − ε) and moreover, both x(·) and xm (·) are L–Lipscitz on I. With trivial calculus one can show that |x(·) − xm (·)|L1 > 2Ls2 - contradiction. If tm
262
T. Donchev
is a jump point of xm (·) then the latter is L–Lipschitz on (xm − λ, xm ) and on (xm , xm + λ), i.e. |x(·) − xm (·)|L1 > Ls2 - a contradiction.
Proposition 4. In Imp,L : lim |xs (·) − x(·)|l1 = 0 if and only if lim Dist(xs (·), x(·)) = 0. s→∞
s→∞
The reachable set of (1) at the time t ∈ I is defined by: Rt (F, x(0)) = {x(t) : t ≥ 0, x(·) is a trajectory of (1) on [0, t] with x(0) = x0 }. If there are two impulsive systems with right-hand sides F (·) and G(·) (initial conditions x0 and y0 ). The reachable sets are Rt (F ) and Rt (G) with graphs D(F, x0 ) and D(G, y0 ) respectively. Let Dt = {x : (τ, x) ∈ D, τ = t} and dist(z, A) := inf |z − a|. a∈A
Definition 1. Let ExL1 (D, G) :=
sup dist ((t, x), G) dt. The L1 distance be-
I x∈Dt
tween two graphs of multi-functions is
dL1 (D, G) := max{ExL1 (D, G) , ExL1 (G, D)}, Proposition 5. Suppose there exists a constant K such that for every solution x(·) of (1) with initial condition x0 there exists a solution y(·) with initial condition y0 with Dist(x(·), y(·)) ≤ Kδ and vise versa. Then dL1 (D(F, x0 ), D(F, y0 )) < (K + 1)Kδ. The following theorem is a consequence of Theorem 2 and Proposition 3. Theorem 4. Let δ > 0 be sufficiently small and let F (·), satisfy SH. If F (·) is OSL (with a constant L) then there exists a (nonnegative) constant L such that √ if |x0 − y0 | < δ, then dL1 (D(F, x0 ), D(F, y0 )) < L δ. Proof. Due to Theorem 2 there exists a constant K such that for every solution x(·) of (1) with x(0) = x0 there exists a solution y(·) of (1) with y(0) = y0 and p p
− |x(t) − y(t)| ≤ Kδ on I \ [τi− , τi+ ], τi+ < τi+1 and (τi+ − τi− ) ≤ Kδ. Here i=1
i=1
τi− = min{τi (x(·)), τi (y(·))}. Let s ∈ I and let z ∈ (F , x0 )s . Hence there exists a solution x(·) of (1) such that x(s) = z. Let y(·) be the corresponding solution of (1) with initial p condition y(0) = y0 . If s ∈ I \ [τi− , τi+ ], then |x(s) − y(s)| < Rδ. Assume that i=1
− ) we s ∈ [τix , τiy ] (if τiy < τix the proof is very similar), then for every t ∈ (τiy , τi+1 y have |x(t)−y(t)| < Kδ. Furthermore for every ε > 0 there exists tε ∈ (τi , τiy +)ε) and hence tε − s < Rδ + ε. If R = 2K then dist ((t, z), (F , y0 )(I))) < Rδ.
n t For t ∈ I we denote Rk (t) = E t, , {x0 } with graph D(Rk ). k
Exponential Formula for Impulsive Differential Inclusions
263
As a trivial corollary of Theorem 3 and Theorem 4 we obtain: Theorem 5. Under SH and OSL condition: lim du (D(F ), D(Rk )(I)) = 0.
k→∞
Acknowledgment This work is partially supported by the Hausdorff Research Institute of Mathematics Bonn in the framework of the Trimester Program “Computational Mathematics”.
References 1. Aubin, J.-P.: Impulsive Differential Inclusions and Hybrid Systems: A Viability Approach. Lecture Notes, Univ. Paris (2002) 2. Baier, R., Donchev, T.: Discrete Approximation of Impulsive Systems (submitted) 3. Clarke, F.: Optimization and nonsmooth analysis. John Wiley & Sons, Inc., New York (1983) 4. Deimling, K.: Multivalued Differential Equations. De Gruyter, Berlin (1992) 5. Donchev, T.: Approximation of the Solution Set of Impulsive Systems. In: Lirkov, I., Margenov, S., Wa´sniewski, J. (eds.) LSSC 2007. LNCS, vol. 4818, pp. 309–316. Springer, Heidelberg (2008) 6. Donchev, T.: Impulsive differential inclusions with constrains. EJDE 66, 1–12 (2006) 7. Donchev, T., Farkhi, E., Wolenski, P.: Characterizations of Reachable Sets for a Class of Differential Inclusions. Functional Differential Equations 10, 473–483 (2003) 8. Lakshmikantham, V., Bainov, D., Simeonov, P.: Theory of Impulsive Differential Equations. World Scientific, Singapore (1989) 9. Lempio, F., Veliov, V.: Discrete approximations of differential inclusions. Bayreuther Mathematische Schiften, Heft 54, 149–232 (1998) 10. Pereira, F., Silva, G.: Necessary Conditions of Optimality for Vector-Valued Impulsive Control Problems. System Control Letters 40, 205–215 (2000) 11. Plotnikov, V., Kitanov, N.: On Continuous dependence of solutions of impulsive differential inclusions and impulse control problems. Cybernetics System Analysis 38, 749–758 (2002) 12. Plotnikov, V., Plotnikov, A., Vitiuk, A.: Differential Equations with Multivalued Right-Hand Side. Asymptotical Methods. Astro Print, Odessa (1999) (in Russian) 13. Wolenski, P.: The exponential formula for the reachable set of a Lipschitz differential inclusion. SIAM J. Control Optim. 28, 1148–1161 (1990)
Directional Sensitivity Differentials for Parametric Bang-Bang Control Problems Ursula Felgenhauer Brandenburgische Technische Universit¨ at Cottbus, Institut f¨ ur Angewandte Mathematik u. Wissenschaftliches Rechnen, PF 101344, 03013 Cottbus, Germany
[email protected] http://www.math.tu-cottbus.de/~ felgenh/
Abstract. We consider optimal control problems driven by ordinary differential equations depending on a (vector-valued) parameter. In case that the state equation is linear w.r.t. the control vector function, and the objective functional is of Mayer type, the optimal control is often of bangbang type. The aim of the paper is to consider the structural stability of bang-bang optimal controls with possibly simultaneous switches of two or more components at a time. Besides of the local invariance of the number of switches for each component taken separately, existence of their directional parameter–derivatives will be shown.
1
Introduction
The problem to be analyzed for parameter dependency is the following optimal control problem in Mayer form: (Ph )
minimize
J(x, u, h) = β(x(1), h) subject to m x(t) ˙ = f (x(t), h) + ui (t)gi (x(t), h) a.e. in [0, 1], x(0) = a(h), | ui (t)| ≤ 1,
i=1
i = 1, . . . , m, a.e. in [0, 1] .
(1) (2) (3)
The parameter h is taken from a neighborhood H ⊂ Rp of the origin. The functions x : [0, 1] → Rn and u : [0, 1] → Rm stand for the state and control variables of the problem. By g = g(x, h) we denote the (n, m) matrix with columns gi representing the controlled vector fields. It will be assumed that all data functions are sufficiently smooth (i.e. at least C 2 ) functions. We will assume that, for the reference parameter h0 = 0, the above problem 1 has a local minimizer (x0 , u0 ) ∈ W∞ × L∞ . The local optimality is supposed to hold in strong local sense, i.e. on a certain L∞ neighborhood w.r.t. x0 but arbitrary admissible controls. By Pontryagin’s Maximum Principle, an adjoint vector function μ0 : [0, 1] → Rn exists such that (x0 , u0 , μ0 ) satisfy the canonical equations and the maximum condition. We will further assume, that the control I. Lirkov, S. Margenov, and J. Wa´ sniewski (Eds.): LSSC 2009, LNCS 5910, pp. 264–271, 2010. c Springer-Verlag Berlin Heidelberg 2010
Directional Sensitivity Differentials
265
function u0 is a bang-bang control, i.e. each component u0j is piecewise continuous and takes only extremal values ±1 on the time interval [0, 1]. Let (x(·, z, h), μ(·, z, h), u(·, z, h)) denote a solution of the following system: x(t) ˙ = f (x(t), h) + g(x(t), h)u(t), T
(4)
μ(t) ˙ = − {∇x [f (x, h) + g(x, h) u] (t)} μ(t), uj (t) ∈ Sign μ(t)T gj (x(t), h) , j = 1, . . . , m
(5)
μ(1) = −∇x β(z, h).
(6)
x(1) = z,
(The function Sign herein stands for the set-valued extended signum defined at zero as Sign(0) = [−1, 1].) In case of bang-bang controls, the above system (4) – (6) together with the initial condition T (z, h) = x(0, z, h) = a(h). (7) can be interpreted as a backward shooting formulation of Pontryagin’s necessary optimality conditions. For h = 0, it has the solution (x0 , μ0 , u0 ) corresponding to z 0 = x0 (1). The present paper has to be seen in a line with current work on stability and optimality investigation of bang-bang control problems when some of the u-components may switch at the same time (simultaneous switch). Sufficient second-order conditions for strong local optimality of bang-bang extremals are given in [1] and [10], [9] (see also references therein) for rather general control problems but with restriction to simple control switches. Related sensitivity results including parameter differentials for simple switching times are obtained in [8]. For multiple control switches, the Lipschitz stability of switching points position was shown in [5] and [6]. Recently, first results on sufficient strong local optimality conditions have been found by [11] for at most double switches, see also [12]. The mentioned work could be combined to proving local existence, structural stability and optimality of solutions for problem (Ph ) in case of double switches of the reference control, see [7]. In this paper, simultaneous switches of more than two control components are allowed. The structural stability result from [7] is generalized to this situation (Lemma 1, section 3). Further, it will be accomplished by construction rules for directional sensitivity differentials for switching points positions under parameter perturbation (Theorem 1, section 5). Notations. Let Rn be the Euclidean space of column vectors with norm | · |, and scalar product written as a matrix product aT b. Superscript T is used for transposition of vectors or matrices. The Lebesgue space of order p of vectorvalued functions on [0, 1] is denoted by Lp (0, 1; Rk ). Wpl (0, 1; Rk ) is the related Sobolev space, and norms are given as · p and · l,p , (1 ≤ p ≤ ∞, l ≥ 1), resp. The symbol ∇x denotes (partial) gradients, and [f, g] = ∇x g f − ∇x f g stands for a Lie bracket of f and g. By convM resp. cl M , the convex hull and closure of a set M are described. For characterizing discontinuities, jump terms s are denoted [v] = v(ts +) − v(ts −) where the index s will become clear from the context.
266
2
U. Felgenhauer
Regular Bang-Bang Controls
Consider an extremal (x, u) of (Ph ) which together with the related costate function μ satisfies (4) – (7). Define σj (t) := μ(t)T gj (x(t), h),
(8)
Σj := {t ∈ [0, 1] : σj (t) = 0}.
(9)
We say the control u is a regular strict bang-bang control if the following conditions are fulfilled: (A1) For each j = 1, . . . , m, the function uj is piecewise constant with uj (t) ∈ {−1, 1} a.e. on [0, 1], the set Σj is finite, and t = 0 or 1 do not belong to Σj . (A2) For j = 1, . . . , m, the function σj is strictly monotone near any ts ∈ Σj . Since u(t) = sign σ(t) under the above assumptions, the function σ is the socalled switching function, and Σ = (Σ1 , . . . , Σm ) the vector of switching times, or the switching structure. A switching point ts ∈ Σj is a simple switching point if only the j-th control component changes its sign at ts . In case that more than one component switch simultaneously we call ts a multiple switching point, and the number of related u-components its multiplicity κ. Assumption (A2) is a condition ensuring that all zeros of σj are regular. While we are interested in solution behavior under parameter perturbation, we should give it a more stable form. Let us reconsider the switching functions (8): for piecewise continuous u, from (4) we see that σ ∈ C 0,1 and the time derivatives are given almost everywhere by σ˙ j = μT [f, gj ] +
m
ui μT [gi , gj ].
(10)
i=1
Thus, σ˙ j may have jump discontinuities at points ts ∈ Σi , i
= j. As long as all switching points are simple, each σj is continuously differentiable in ts ∈ Σj . If, however, u has multiple switches then the σ components may be broken at their = 0 at some ts ∈ Σi ∩ Σj . Under perturbation, a zero points whenever [gi , gj ]
multiple switch will generically bifurcate to several (e.g. simple) switching points of a-priori unknown order. Thus, one has to make sure that for each possible local re-ordering of switches the perturbed σ-components remain to be strictly monotone. This can be done with the following assumption (A2’) If ts is κ-mutiple, and j is such that Σj ts then, at t = ts , the terms m ui (ts −) μT [gi , gj ] (t) + νi [ui ]s μT [gi , gj ] (t) σ˙ jν (t) = μT [f, gj ] (t) + i=1
i: ts ∈Σi
are nonzero, and their sign is independent of the binary vector ν ∈ {0, 1}κ.
Directional Sensitivity Differentials
267
Throughout the paper it is supposed that, at h = 0, the reference solution (x0 , u0 ) of (P0 ) and related μ0 satisfy (A1) and (A2’). Let us finish with some definition: for t close to ts (of multiplicity κ), set t σjν (t) = σj (ts ) + σ˙ jν (τ ) dτ ∀ ν ∈ {0, 1}κ, ∀ j : Σj ts .
(11)
ts
Notice that the particular choice ν = e = (1, 1, . . . , 1) yields σ˙ je (ts ) = σ˙ j (ts +).
3
Strong Stability of Bang-Bang Structure
Under assumptions (A1), (A2’) on the solution of (P0 ), the following stability result holds true: Lemma 1. Assume that, for some h close to h0 = 0, the triple (xh , uh , μh ) with xh (1) = z h is a solution of the shooting system (4) – (7). There exists a positive bound δ such that, if | h − h0 | + xh − x0 ∞ < δ, (12) then each uhi satisfies assumptions (A1), (A2) and (A2’). Moreover, each function uhi has the same number and type of switches as u0i , and |Σ h − Σ 0 | =O(δ). In [7], this result was obtained for the case of at most double switches by analyzing the maximized Hamiltonian flow. An earlier result from [5] had been additionally restricted to L1 neighborhoods w.r.t. u0 . In this sense, the given Lemma strengthens former stability statements and puts it into correspondence to the idea of strong local optimality, cf. [1], [11], or [12]. Proof. Let the points of j Σj0 = {θ1 , . . . , θM } be ordered monotonically decreasing, i.e. θk > θl for k < l, and denote θ0 = 1, θM+1 = 0. Suppose ρ to be small enough so that the balls Bk of radius ρ with centres θk , k = 0, . . . , M + 1, do not intersect. The connecting pieces (θk+1 + ρ, θk − ρ) denote by Dk for k = 1, . . . , M − 1, and D0= (θ1 + ρ, 1), DM = (0, θM − ρ) resp. Notice that none of the σj0 vanishes in Dk . Moreover, if ρ is sufficiently small then, almost = 0 for all j such that θk ∈ Σj0 . everywhere on Bk , we have σ˙ j
The proof makes use of a backward inspection of solution pieces from (4) – (6). If δ is sufficiently small then, by (8), |z h − z 0 | < δ yields that each σjh (1) is of the same sign as σj0 (1), and this remains to be true in some neighborhood of t = 1 independently of the value taken by uh . Remembering the sign condition from (5), we obtain uh ≡ u0 on a certain interval [1 − Δ t, 1], and |xh (t) − x0 (t)| + |μh (t) − μ0 (t)| + |σ h (t) − σ 0 (t)| = O(δ).
(13)
Thus, we can repeat the argumentation and, possibly decreasing δ, by continuity deduce that uh ≡ u0 holds true for all t ∈ D0 . Next let us consider Bk (beginning from k = 1): if θk is a simple switch of u0j then σj0 is differentiable on Bk and has the regular zero θk . By the Implicit
268
U. Felgenhauer
Function Theorem, the same holds for σ h , so that it has a regular zero θkh ∈ Bk (with |θkh − θk | =O(δ)) whereas all other σ-components do not change their sign. Thus, on Bk we have u0 − uh 1 =O(δ), and again the estimate (13) holds on Bk . In particular, we get uh (θk − ρ) = u0 (θk − ρ). If more than one control components switch at θk simultaneously, consider the following prediction-correction procedure: solve backwards the state-adjoint equations from system (4) with u ≡ u0 (θk + ρ), and find σ = σ ˜ h from (8). For h e h e ˙ the solutions, |˜ σ (t) − σ (t)| + |σ ˜ (t) − σ˙ (t)| = O(δ) (cf. (11)), so that each compononent σ ˜jh such that θk ∈ Σj0 has a (locally unique and regular) zero in h Bk . Find θk,1 = max {t : ∃ j such that σ ˜jh (t) = 0}. At this point, some of the h h ˜ih (θk,1 ) = 0 and νi = 1 otherwise. u -components switch, and we set νi = 0 if σ ν 0 h Resolve the system with u = u (θk −) + j νj [u0j ]s for t ≤ θk,1 : the correh ν h sponding σ ˜ then is close to σ from (11) on (θk − ρ, θk,1 ), and again, its continued j-th components switch on Bk . The iteration will be repeated until, for h each j such that θk ∈ Σj0 , a new switching point θk,l is found. Assumption (A2’) h − θk | =O(δ). ensures that these points are unique for each related j, and |θk,l h 0 At t = θk − ρ, again we have u = u , so that the argumentation can be equally applied for the next intervals Dk , Bk+1 etc. until t = 0 is reached.
4
Second-Order Condition
In order to formulate appropriate second-order coercivity conditions for (Ph ), we follow [1] and define certain auxiliary finite-dimensional problem w.r.t. switching points variation. For each j, let Σj0 = {t0js : 1 ≤ s ≤ l(j)} be monotonically ordered. Consider vectors Σ = (tjs : j = 1, . . . , m, s = 1, . . . , l(j) ) of the same length L as Σ 0 , and denote by DΣ the neighborhood of Σ 0 ∈ RL where the following condition holds: 0 < t0js < t0ir < 1 ⇒ 0 < tjs < tir < 1. (14) With tj0 = 0, tj,l(j)+1 = 1 for all j, define uj (t, Σ) ≡ (−1)l(j)−s u0j (1)
for t ∈ (tjs , tj,s+1 ).
(15)
The solution of (1) – (2) for u = u(·, Σ) will be denoted x(·, Σ, h). If (xh , uh ) is a local minimizer of (Ph ) satisfying (A1), (A2’) and (12) then, by Lemma 1, the related switching vector Σ h will belong to DΣ and in particular, is a local solution of (OPh )
min φh (Σ) = β (x(1, Σ, h), h)
w.r.t. Σ ∈ DΣ .
As explained in [5], in case ofsimultaneous control switches, the function φh is 0 piecewise C 2 on DΣ . If t0s ∈ m j=1 Σj has multiplicity κs then, locally, we find several bifurcation patterns into κs simple switching points of different order which can be enumerated by qs = 1, . . . , (κs )! . Assembling qs with κs > 1
Directional Sensitivity Differentials
269
into an multi-index q, we can define open subsets Dq of DΣ where the order of switches will be invariant w.r.t. perturbation. On cl Dq set ∇q (∇φh )|Σ =
lim
Σ ∈Dq ,Σ →Σ
∇2Σ φh (Σ ) .
(16)
It is an obvious necessary optimality condition for (P0 ) that all ∇q (∇φ0 ) should be positive semidefinite at Σ = Σ 0 . We will assume the following strong generalized second-order condition to hold for h = 0 at Σ 0 : (A3) ∃ c > 0: v T ∇q (∇φh )|Σ v ≥ c | v |2 that qs ∈ {1, . . . , (κs )!}.
for all v ∈ RL and each q such
It has been shown in [7] for at most double switches that, assuming (A1) – (A3) to be satisfyed for h = 0 and (x0 , u0 , μ0 ), is sufficient to prove existence and uniqueness of strong local optimizers for (Ph ) with h near h0 = 0. The generalization to simultaneous switch of more than two components is yet open.
5
Directional Sensitivity Differentials
Consider the following extended backward shooting approach introduced in [6]: for given (Σ, z, h) close to (Σ 0 , z 0 , 0) define by (15) a control u = u(Σ). For u(Σ) − u0 1 being sufficiently small, the system x(t) ˙ = f (x(t), h) + g(x(t), h) u(t, Σ),
x(1) = z, T
μ(t) ˙ = − {∇x [f (x, h) + g(x, h) u] (t)} μ(t), μ(1) = −∇x β(z, h), 1 on [0, 1]. has an unique solution with x = x(·, Σ, z, h), μ = μ(·, Σ, z, h) ∈ W∞ T As in (8), set σ(·, Σ, z, h) = g(x(·, Σ, z, h), h) μ(·, Σ, z, h). If the solution components satisfy (4)–(7) then
V (Σ, z, h) = x(0, Σ, z, h) = a(h), W (Σ, z, h) = Γ · σ(·, Σ, z, h)|Σ
(17) = 0.
(18)
In the last equation, for j = 1, . . . , m and each α with tα ∈ Σj it is required that α Wα (Σ, z, h) = u0j σj (tα , Σ, z, h) = 0, i.e. Γ stands for the diagonal matrix with control jump entries. Near (Σ 0 , z 0 , 0), V is differentiable and ∇z V is regular (cf. [6]) so that V = 0 locally defines a C 1 map z = z˜(Σ, h). Inserting it into (18), in accordance to [8] and [5] we obtain Z(Σ, h) := W (Σ, z˜(Σ, h), h) = ∇Σ φh (Σ) = 0. (19) Consider the case when Σj contains a number d of multiple switches ti with multiplicities κi . By (16), from within each Dq × H, the derivatives ∇qΣ Z = ∇qΣ (∇Σ φh ) = ∇qΣ W − ∇z W (∇z V )
−1
∇Σ V
270
U. Felgenhauer
exist and are positive definite at (Σ 0 , 0), see assumption (A3). Thus, Z(Σ, h) = ˜ ˜ 0 has the locally unique and Lipschitz continuous solution Σ = Σ(h), and Σ(0) = 0 Σ . Now let us define M q = ∇qΣ (∇Σ φh )|(Σ 0 ,0) ,
P = ∇h (∇Σ φh )|(Σ 0 ,0) ,
and consider the following piecewise quadratic function of s ∈ RL depending on the parameter h ∈ Rp : 1 T q s M s + sT P h if s + Σ 0 ∈ cl Dq , 2 q ∈ Q = {(q1 , . . . , qd ) : qi ∈ {1, . . . , (κi )!} } .
ψh (s) =
(20) (21)
For fixed h, the function is C 1,1 and strictly convex w.r.t s (the proof of this fact follows [6], Appendix 1, but has to be omitted here). Its minima determine ˜ in the following sense: directional sensitivity differentials of Σ Theorem 1. For given Rp r
= 0, let s¯ = s¯(r) denote the solution of min ψr (s)
s.t. s + Σ 0 ∈ DΣ .
(22)
The solution is represented by s¯ = −(M q )−1 P r if and only if s¯ + Σ 0 ∈ cl Dq . ˜ It yields directional derivative information for Σ = Σ(h) from (19) in the form
˜ ∂h Σ(0), r = s¯ = −(M q )−1 P · r. (23) Proof. The definition of ψ and the conic nature of the sets Dq ensure that, at each point s, the function has a generalized Hessian (in Clarke’s sense) such that ∂s (∇s ψ) ⊆ convq∈Q {M q } where Q is given in (21). Since all matrices therein are positive definite, problem (22) has an unique solution s¯ which is characterized by s) ⊆ convq∈Q {M q s¯} + P r. 0 ∈ ∂s ψ(¯ (cf. [14], Theorem 13.24). In case that s¯ ∈ cl Dq , it can be approached by a minimizing sequence from inside Dq so that s¯ = −(M q )−1 P r. ˜ we consider limit terms In order to obtain directional derivatives of Σ ˜ r) − Σ 0 ) γ −1 (Σ(γ
for
γ 0.
(24)
Assume first that ∃ γ¯ , q¯ such that ˜ r) ∈ Dq¯ ∀ γ : 0 < γ < γ¯ . Σ(γ
˜ r = −(M q¯)−1 P r. Then the limit exists and is expressed by ∂h Σ(0),
(25)
Suppose that assumption (25) is not fulfilled, i.e. there exist q1 , . . . , qN ∈ Q such that, for any given n 0, ∀ n ∃ γnk < n :
˜ k r) ∈ Dqk , Σ(γ n
k = 1, . . . , N.
Directional Sensitivity Differentials
271
˜ each subsequence {(γ k )−1 (Σ(γ k r)−Σ 0 )}∞ Due to the Lipschitz continuity of Σ, n n n=1 is bounded and thus has an accumulation point sk ∈ cl Dqk so that M qk sk +P r = 0. The last relation says that the points sk are stationary points of the convex objective from (22) and, consequently, all coincide with the minimizer s¯. Hence the limit in (24) exists and is given by (23). For completeness notice that the representations (23) for s¯ with different qk coincide, too. Acknowlegdement. The results presented in this paper could not have been achieved without the stimulating contact to and cooperation with Gianna Stefani and Laura Poggiolini (Florence) who the author is deeply indepted to. The work has partly been supported by CNR INDAM grant no. 43169224.
References 1. Agrachev, A., Stefani, G., Zezza, P.L.: Strong optimality for a bang-bang trajectory. SIAM J. Control Optim. 41, 991–1014 (2002) 2. Arutyunov, A.V., Avakov, E.R., Izmailov, A.F.: Directional regularity and metric regularity. SIAM J. Optim. 18(3), 810–833 (2007) 3. Clarke, F.: Optimization and nonsmooth analysis. Wiley Inc., New York (1983) 4. Felgenhauer, U.: On stability of bang-bang type controls. SIAM J. Control Optim. 41(6), 1843–1867 (2003) 5. Felgenhauer, U.: Lipschitz stability of broken extremals in bang-bang control problems. In: Lirkov, I., Margenov, S., Wa´sniewski, J. (eds.) LSSC 2007. LNCS, vol. 4818, pp. 306–314. Springer, Heidelberg (2008) 6. Felgenhauer, U.: The shooting approach in analyzing bang-bang extremals with simultaneous control switches. Control & Cybernetics 37(2), 307–327 (2008) 7. Felgenhauer, U., Poggiolini, L., Stefani, G.: Optimality and stability result for bang-bang optimal controls with simple and double switch behavior. Control & Cybernetics 38(4) (2009) (in print) 8. Kim, J.R., Maurer, H.: Sensitivity analysis of optimal control problems with bangbang controls. In: 42nd IEEE Conference on Decision and Control, Hawaii, vol. 4, pp. 3281–3286 (2003) 9. Maurer, H., Osmolovskii, N.P.: Equivalence of second-order optimality conditions for bang-bang control problems. Part I: Main result. Control & Cybernetics 34(3), 927–950 (2005); Part II: Proofs, variational derivatives and representations, Control & Cybernetics 36(1), 5–45 (2007) 10. Milyutin, A.A., Osmolovskii, N.P.: Calculus of variations and optimal control. Amer. Mathem. Soc., Providence (1998) 11. Poggiolini, L., Spadini, M.: Strong local optimality for a bang–bang trajectory in a Mayer problem (to appear) 12. Poggiolini, L., Stefani, G.: State-local optimality of a bang-bang trajectory: a Hamiltonian approach. Systems Control Lett. 53(3-4), 269–279 (2004) 13. Poggiolini, L., Stefani, G.: Sufficient optimality conditions for a bang-bang trajectory. In: 45th IEEE Conference on Decision and Control, San Diego (USA) (December 2006) 14. Rockafellar, R.T., Wets, R.J.-B.: Variational analysis. Springer, Berlin (1998) 15. Sarychev, A.V.: First- and second-order sufficient optimality conditions for bangbang controls. SIAM J. Control Optim. 35(1), 315–340 (1997)
Estimates of Trajectory Tubes of Uncertain Nonlinear Control Systems Tatiana F. Filippova Institute of Mathematics and Mechanics, Russian Academy of Sciences, 16 S. Kovalevskaya str., GSP-384, 620219 Ekaterinburg, Russia
[email protected] http://www.imm.uran.ru
Abstract. The paper is devoted to state estimation problems for nonlinear dynamic control systems with states being compact sets. The studies are motivated by the theory of dynamical systems with unknown but bounded uncertainties without their statistical description. The trajectory tubes of differential inclusions are introduces as the set-valued analogies of the classical isolated trajectories of uncertain dynamical systems. Applying results related to discrete-time versions of the funnel equations and techniques of ellipsoidal estimation theory developed for linear control systems we present approaches that allow to find estimates for such set-valued states of uncertain nonlinear control systems.
1
Introduction
In many applied problems the evolution of a dynamical system depends not only on the current system states but also on uncertain disturbances or errors in modelling. There are many publications devoted to different aspects of treatment of uncertain dynamical systems (e.g., [5,11,12,13,14,15,17]). The model of uncertainty considered here is deterministic, with set-membership description of uncertain items which are taken to be unknown with prescribed given bounds. We consider problems of control and state estimation for a dynamical control system x(t) ˙ = A(t)x(t) + f (x(t)) + G(t)u(t), x(t) ∈ Rn , t0 ≤ t ≤ T,
(1)
with unknown but bounded initial condition x(t0 ) = x0 , x0 ∈ X0 , X0 ⊂ Rn ,
(2)
u(t) ∈ U, U ⊂ Rm , t ∈ [t0 , T ].
(3)
Here matrices A(t) and G(t) (of dimensions n × n and n × m, respectively) are assumed to be continuous on t ∈ [t0 , T ], X0 and U are compact and convex. The nonlinear n-vector function f (x) in (1) is assumed to be of quadratic type f (x) = (f1 (x), . . . , fn (x)), fi (x) = x Bi x, i = 1, . . . , n, I. Lirkov, S. Margenov, and J. Wa´ sniewski (Eds.): LSSC 2009, LNCS 5910, pp. 272–279, 2010. c Springer-Verlag Berlin Heidelberg 2010
(4)
Estimates of Trajectory Tubes of Uncertain Nonlinear Control Systems
273
where Bi is a constant n × n - matrix (i = 1, . . . , n). Consider the following differential inclusion [6] related to (1)–(3) x(t) ˙ ∈ A(t)x(t) + f (x(t)) + P (t) , t ∈ [t0 , T ],
(5)
where P (t) = G(t)U. Let absolutely continuous function x(t) = x(t, t0 , x0 ) be a solution to (5) with initial state x0 satisfying (2). The differential system (1)–(3) (or equivalently, (5)–(2)) is studied here in the framework of the theory of uncertain dynamical systems (differential inclusions) through the techniques of trajectory tubes X(·, t0 , X0 ) = {x(·) = x(·, t0 , x0 ) | x0 ∈ X0 } of solutions to (1)–(3) with their t-cross-sections X(t) = X(t, t0 , X0 ) being the reachable sets (the informational sets) at instant t for control system (1)–(3). Basing on well-known results of ellipsoidal calculus [14,2,3] developed for linear uncertain systems we present the modified state estimation approaches which use the special structure of the control system (1)–(4). In this paper we continue researches initiated in [8,9].
2 2.1
Preliminaries Basic Notations
In this section we introduce the following basic notations. Let Rn be the n– dimensional Euclidean space and x y be the usual inner product of x, y ∈ Rn with prime as a transpose, x = (x x)1/2 . We denote as B(a, r) the ball in Rn , B(a, r) = {x ∈ Rn : x − a ≤ r}, I is the identity n × n-matrix. Denote by E(a, Q) the ellipsoid in Rn , E(a, Q) = {x ∈ Rn : (Q−1 (x − a), (x − a)) ≤ 1} with center a ∈ Rn and symmetric positive definite n × n–matrix Q. Let h(A, B) = max{h+ (A, B), h− (A, B)}, be the Hausdorff distance for A, B ⊂ Rn , with h+ (A, B) and h− (A, B) being the Hausdorff semidistances between A and B, h+ (A, B) = sup{d(x, B) | x ∈ A}, h− (A, B) = h+ (B, A), d(x, A) = inf{x − y | y ∈ A}. One of the approaches that we will discuss here is related to evolution equations of the funnel type [7,13,16,19]. Note first that we will consider the Caratheodory–type solutions x(·) for (5)–(2)), i.e. absolutely continuous functions x(t) which satisfy the inclusion (5) for a. e. t ∈ [t0 , T ]. Assume that all solutions {x(t) = x(t, t0 , x0 ) | x0 ∈ X0 } are extendable up to the instant T that is possible under some additional conditions ([6], §7, Theorem 2). The precise value of T depending on studied system data is given in [8]. Let us consider the particular case of the funnel equation related to (5)–(2) {x + σ(A(t)x + f (x)) + σP (t)}) = 0, t ∈ [t0 , T ], (6) lim σ −1 h(X(t + σ), σ→+0
x∈X(t)
X(t0 ) = X0 .
(7)
Under above mentioned assumptions the following theorem is true (details may be found in [7,13,16,19]).
274
T.F. Filippova
Theorem 1. The nonempty compact-valued function X(t) = X(t, t0 , X0 ) is the unique solution to the evolution equation (6)-(7). Other versions of funnel equation (6) are known written in terms of semidistance h+ instead of h [14]. The solution to the h+ -equations may be not unique and the ”maximal” one (with respect to inclusion) is studied in this case. Mention here also the second order analogies of funnel equations for differential inclusions and for control systems based on ideas of Runge-Kutta scheme [5,17,18]. Discrete approximations for differential inclusions through a set-valued Euler’s method were developed in [1,4,10]. Funnel equations for differential inclusions with state constraints were studied in [13], the analogies of funnel equations for impulsive control systems were given in [7]. 2.2
Trajectory Tubes in Uncertain Systems with Quadratic Nonlinearity
Computer simulations related to modelling in nonlinear problems of control under uncertainty and based on the funnel equations require in common difficult and long calculations with additional quantization in the state space. Nevertheless in some simpler cases it is possible to find reachable sets X(t; t0 , X0 ) (more precisely, to find their –approximations in Hausdorff metrics) basing on the Theorem 1, related examples are given in [8,9]. In [8,9] we presented also new techniques of constructing the external and internal estimates of trajectory tubes X(·, t0 , X0 ) based on the combination of ellipsoidal calculus [2,3,14] and the techniques of evolution funnel equations. We considered there the control system x˙ = Ax + f˜(x)d, x0 ∈ X0 , t0 ≤ t ≤ T,
(8)
where x ∈ Rn , x ≤ K, d is a given n-vector and a scalar function f˜(x) has a form f˜(x) = x Bx with a symmetric and positive definite matrix B. The following example illustrates the estimating results based on approaches from [8,9]. Example 1. Consider the following control system
x˙ 1 = x1 + x21 + x22 + u1 , , 0 ≤ t ≤ T. x˙ 2 = − x2 + u2 ,
(9)
Here t0 = 0, T = 0.3, X0 = B(0, 1), P (t) = P = B(0, 1). External and internal ellipsoidal estimates of t the rajectory tube X(t) are shown at Fig. 1. In this paper we extend this approach to the case when right-hand sides of the system (5)–(2) contain some quadratic forms with different symmetric positive definite matrices.
Estimates of Trajectory Tubes of Uncertain Nonlinear Control Systems
275
2.5
E(a+12,Q+12)
2 1.5
X0
1
x2
0.5 0 −0.5 −1 −1.5 −2
E(a−12,Q−12)
−2.5 0
X(t)
0.2 t
0.4
−1.5
−1
−0.5
0
0.5
1
1.5
2
2.5
3
x1
Fig. 1. Trajectory tube X(t) and its external and internal ellipsoidal estimates E(a+ (t), Q+ (t)), E(a− (t), Q− (t))
3 3.1
Results Differential System with Uncertainty in Initial States
Consider the following system x˙ = Ax + f (1) (x)d(1) + f (2) (x)d(2) , x0 ∈ X0 , t0 ≤ t ≤ T,
(10)
where x ∈ Rn , x ≤ K, d(1) and d(2) are n-vectors and f (1) , f (2) are scalar functions, f (1) (x) = x B (1) x, f (2) (x) = x B (2) x, with symmetric and positive definite matrices B (1) , B (2) . We assume also that (1) (2) di = 0 for i = k + 1, . . . , n and dj = 0 for j = 1, . . . , k where k (1 ≤ k ≤ n) is fixed. This assumption means that the first k equations of the system (10) contain (1) only the nonlinear function f (1) (x) (with some constant coefficients di ) while (2) f (x) is included only in the equations with numbers k + 1, . . . , n. We will assume further that X0 in (10) is an ellipsoid, X0 = E(a, Q), with a symmetric and positive definite matrix Q and with a center a. We will need some auxiliary results. Lemma 1. The following inclusion is true X0 ⊆ E(a, k12 (B (1) )−1 ) E(a, k22 (B (2) )−1 )
(11)
where ki2 is the maximal eigenvalue of the matrix (B (i) )1/2 Q(B (i) )1/2 (i = 1, 2).
276
T.F. Filippova
Proof. The proof follows directly from the properties of quadratic forms and from the inclusions E(a, Q) ⊆ E(a, k12 (B (1) )−1 ), E(a, Q) ⊆ E(a, k22 (B (2) )−1 ), which should be fulfilled with the smallest possible values of k1 ≥ 0 and k2 ≥ 0. Lemma 2. The following equalities hold true max
z B (1) z≤k12
z B (2) z = k12 λ212 ,
max
z B (2) z≤k22
z B (1) z = k22 λ221 ,
(12)
where λ212 and λ221 are maximal eigenvalues of matrices (B (1) )−1/2 B (2) (B (1) )−1/2 and (B (2) )−1/2 B (1) (B (2) )−1/2 respectively. Proof. The formulas follow from direct computations of maxomal values in (12). Theorem 2. For all σ > 0 and for X(t0 + σ) = X(t0 + σ, t0 , X0 ) we have the following upper estimate X(t0 + σ) ⊆ E(a(1) (σ), Q(1) (σ)) E(a(2) (σ), Q(2) (σ)) + o(σ)B(0, 1), (13) where σ −1 o(σ) → 0 when σ → +0 and a(1) (σ) = a(σ) + σk12 λ212 d(2) , a(2) (σ) = a(σ) + σk22 λ221 d(1) ,
(14)
a(σ) = (I + σA)a + σa B (1) ad(1) + σa B (2) ad(2) ,
(15)
2 (1) −1 ) (I+σR) +(p1 +1)σ 2 ||d(2) ||2 k14 λ412 ·I, (16) Q(1) (σ) = (p−1 1 +1)(I+σR)k1 (B 2 (2) −1 Q(2) (σ) = (p−1 ) (I+σR) +(p2 +1)σ 2 ||d(1) ||2 k24 λ421 ·I, (17) 2 +1)(I+σR)k2 (B
R = A + 2d(1) a B (1) + 2d(2) a B (2)
(18)
and p1 , p2 are the unique positive solutions of related algebraic equations n i=1
n n n 1 1 = = , p1 + αi p1 (p1 + 1) p + β p (p i 2 2 + 1) i=1 2
(19)
with αi , βi ≥ 0 (i = 1, ..., n) being the roots of the following equations det((I + σR)k12 (B (1) )−1 (I + σR) − ασ 2 ||d(2) ||2 k14 λ412 · I) = 0,
(20)
det((I + σR)k22 (B (2) )−1 (I + σR) − βσ 2 ||d(1) ||2 k24 λ421 · I) = 0.
(21)
Proof.
From Theorem 1 we have {x + σ(Ax + f (1) (x)d(1) + f (2) (x)d(2) )}) = 0, lim σ −1 h(X(t0 + σ),
σ→+0
x∈X0
Estimates of Trajectory Tubes of Uncertain Nonlinear Control Systems
therefore X(t0 + σ) ⊆
277
{x + σ(Ax + f (1) (x)d(1) + f (2) (x)d(2) )} + o(σ)B(0, 1),
x∈X0
From Lemma 1 we have the inclusion X(t0 + σ) ⊆ {x + σ(Ax + f (1) (x)d(1) + f (2) (x)d(2) )| x ∈ E(a, k12 (B (1) )−1 ) ∩ E(a, k22 (B (2) )−1 )} + o(σ)B(0, 1). Obviously we have X(t0 + σ) ⊆ ( {x + σ(Ax + f (1) (x)d(1) + f (2) (x)d(2) )|x ∈ E(a, k12 (B (1) )−1 )}) ( {x+ σ(Ax+ f (1) (x)d(1) + f (2) (x)d(2) )|x ∈ E(a, k1 22 (B (2) )−1 )})+ o(σ)B(0, 1). Using the ideas of [8,9], after related calculations we come to the following inclusions: {x + σ(Ax + f (1) (x)d(1) + f (2) (x)d(2) )|x ∈ E(a, k12 (B (1) )−1 )} ⊆ E(a(1) (σ), (I + σR)k12 (B (1) )−1 (I + σR) ) + σ||d(2) ||k12 λ212 B(0, 1), and
(22)
{x + σ(Ax + f (1) (x)d(1) + f (2) (x)d(2) )|x ∈ E(a, k22 (B (2) )−1 )} ⊆ E(a(2) (σ), (I + σR)k22 (B (2) )−1 (I + σR) ) + σ||d(1) ||k22 λ221 B(0, 1).
(23)
We find now the external estimates of the sums of ellipsoids in the right-hand parts of the inclusions (22)-(23) using the results of [2], after that we come finally to the main formula (13). We may formulate now the following scheme that gives the external estimate of trajectory tube X(t) of the system (10) with given accuracy. Algorithm 1. Subdivide the time segment [t0 , T ] into subsegments [ti , ti+1 ] where ti = t0 + ih (i = 1, . . . , m), h = (T − t0 )/m, tm = T . – Given X0 = E(a, Q), take σ = h and define ellipsoids E(a(1) (σ), Q(1) (σ)) and E(a(2) (σ), Q(2) (σ)) from Theorem 2. – Find the smallest (with respect to some criterion [2,14]) ellipsoid E(a1 , Q1 ) which contains the intersection E(a(1) (σ), Q(1) (σ)) E(a(2) (σ), Q(2) (σ)) ⊆ E(a1 , Q1 ). – Consider the system on the next subsegment [t1 , t2 ] with E(a1 , Q1 ) as the initial ellipsoid at instant t1 . – Next steps continue iterations 1-3. At the end of the process we will get the external estimate E(a(t), Q(t)) of the tube X(t) with accuracy tending to zero when m → ∞.
278
3.2
T.F. Filippova
Control System under Uncertainty
Consider the following control system in the form of differential inclusion (1)–(3) x˙ ∈ Ax + f (1) (x)d(1) + f (2) (x)d(2) + P, x0 ∈ X0 = E(a, Q), t0 ≤ t ≤ T, (24) with all previous assumptions being valid. We assume also that P is an ellipsoid, P = E(g, G), with a symmetric and positive definite matrix G and with a center g. In this case the estimate for X(t0 + σ) (the analogy of the formula (13)) takes the form X(t0 + σ) ⊆ E(a(1) (σ), Q(1) (σ)) ∩ E(a(2) (σ), Q(2) (σ)) + σE(g, G) + o(σ)B(0, 1), (25) with all parameters defined in Theorem 2. We should modify now the previous scheme (Algorithm 1) in order to formulate a new procedure of external estimating of trajectory tube X(t) of the system (24). Algorithm 2. Subdivide the time segment [t0 , T ] into subsegments [ti , ti+1 ] where ti = t0 + ih (i = 1, . . . , m), h = (T − t0 )/m, tm = T . – Given X0 = E(a, Q), take σ = h and define ellipsoids E(a(1) (σ), Q(1) (σ)) and E(a(2) (σ), Q(2) (σ)) from Theorem 2. – Find the smallest (with respect to some criterion [2,14]) ellipsoid E(a∗ , Q∗ ) which contains the intersection: E(a(1) (σ), Q(1) (σ)) E(a(2) (σ), Q(2) (σ)) ⊆ E(a∗ , Q∗ ). – Find the ellipsoid E(a1 , Q1 ) which is the upper estimate of the sum [2,14] of two ellipsoids, E(a∗ , Q∗ ) and σE(g, G): E(a∗ , Q∗ ) + σE(g, G) ⊆ E(a1 , Q1 ). – Consider the system on the next subsegment [t1 , t2 ] with E(a1 , Q1 ) as the initial ellipsoid at instant t1 . – Next steps continue iterations 1-3. At the end of the process we will get the external estimate E(a(t), Q(t)) of the tube X(t) with accuracy tending to zero when m → ∞.
4
Conclusion
The paper deals with the problems of state estimation for a dynamical control system described by differential inclusions with unknown but bounded initial state. The solution to the differential system is studied through the techniques of trajectory tubes with their cross-sections X(t) being the reachable sets at instant t to control system. Basing on the results of ellipsoidal calculus developed for linear uncertain systems we present the modified state estimation approaches which use the special nonlinear structure of the control system and simplify calculations.
Estimates of Trajectory Tubes of Uncertain Nonlinear Control Systems
279
Acknowledgments. The research was supported by the Russian Foundation for Basic Research (RFBR) under Projects 06-01-00483, 09-01-00223.
References 1. Chahma, I.A.: Set-valued discrete approximation of state- constrained differential inclusions. Bayreuth. Math. Schr. 67, 3–162 (2003) 2. Chernousko, F.L.: State Estimation for Dynamic Systems. Nauka, Moscow (1988) 3. Chernousko, F.L., Ovseevich, A.I.: Properties of the optimal ellipsoids approximating the reachable sets of uncertain systems. J. Optimization Theory Appl. 120(2), 223–246 (2004) 4. Dontchev, A.L., Farkhi, E.M.: Error estimates for discretized differential inclusions. Computing 41(4), 349–358 (1989) 5. Dontchev, A.L., Lempio, F.: Difference methods for differential inclusions: a survey. SIAM Review 34, 263–294 (1992) 6. Filippov, A.F.: Differential Equations with Discontinuous Right-hand Side. Nauka, Moscow (1985) 7. Filippova, T.F.: Sensitivity Problems for Impulsive Differential Inclusions. In: Proc. of the 6th WSEAS Conference on Applied Mathematics, Corfu, Greece (2004) 8. Filippova, T.F., Berezina, E.V.: On State Estimation Approaches for Uncertain Dynamical Systems with Quadratic Nonlinearity: Theory and Computer Simulations. In: Lirkov, I., Margenov, S., Wa´sniewski, J. (eds.) LSSC 2007. LNCS, vol. 4818, pp. 326–333. Springer, Heidelberg (2008) 9. Filippova, T.F., Berezina, E.V.: Trajectory Tubes of Dynamical Control Systems with Quadratic Nonlinearity: Estimation Approaches. In: Proc. of the Int. Conf. Differential Equations and Topology, Moscow, Russia, pp. 246–247 (2008) 10. H¨ ackl, G.: Reachable sets, control sets and their computation. Augsburger Mathematisch-Naturwissenschaftliche Schriften 7. PhD Thesis, University of Augsburg, Augsburg (1996) 11. Krasovskii, N.N., Subbotin, A.I.: Positional Differential Games. Nauka, Moscow (1974) 12. Kurzhanski, A.B.: Control and Observation under Conditions of Uncertainty. Nauka, Moscow (1977) 13. Kurzhanski, A.B., Filippova, T.F.: On the theory of trajectory tubes — a mathematical formalism for uncertain dynamics, viability and control. In: Kurzhanski, A.B. (ed.) Advances in Nonlinear Dynamics and Control: a Report from Russia. Progress in Systems and Control Theory, vol. 17, pp. 122–188. Birkhauser, Boston (1993) 14. Kurzhanski, A.B., Valyi, I.: Ellipsoidal Calculus for Estimation and Control. Birkhauser, Boston (1997) 15. Kurzhanski, A.B., Veliov, V.M. (eds.): Set-valued Analysis and Differential Inclusions. Progress in Systems and Control Theory, vol. 16. Birkhauser, Boston (1990) 16. Panasyuk, A.I.: Equations of attainable set dynamics. Part 1: Integral funnel equations. J. Optimiz. Theory Appl. 64(2), 349–366 (1990) 17. Veliov, V.M.: Second order discrete approximations to strongly convex differential inclusions. Systems and Control Letters 13, 263–269 (1989) 18. Veliov, V.: Second-order discrete approximation to linear differential inclusions. SIAM J. Numer. Anal. 29(2), 439–451 (1992) 19. Wolenski, P.R.: The exponential formula for the reachable set of a Lipschitz differential inclusion. SIAM J. Control Optimization 28(5), 1148–1161 (1990)
Asymptotics for Singularly Perturbed Reachable Sets Elena Goncharova1 and Alexander Ovseevich2 1
Institute of System Dynamics & Control Theory, Irkutsk, Russia
[email protected] 2 Institute for Problems in Mechanics, Moscow, Russia
[email protected]
Abstract. We study, in the spirit of [1], reachable sets for singularly perturbed linear control systems. The fast component of the phase vector is assumed to be governed by a strictly stable linear system. It is shown in loc.cit. that the reachable sets converge as the small parameter ε tends to 0, and the rate of convergence is O(εα ), where 0 < α < 1 is arbitrary. In fact, the said rate of convergence is ε log 1/ε. Under an extra smoothness assumption we find the coefficient of ε log 1/ε in the asymptotics of the support function of the reachable set.
1
Problem Statement
Consider the following singularly perturbed linear dynamic system x˙ = Ax + By + F u, εy˙ = Cx + Dy + Gu, u ∈ U,
(1)
where ε > 0 is a small parameter, a phase vector z = (x, y) consists of the “slow” component x ∈ X, and the “fast” component y ∈ Y . The matrices A, . . . , G, and convex compacts U are smooth functions of time t, and the parameter ε. The latter means that the support function h = HU of U depends on t, ε smoothly. The matrix D is assumed to be asymptotically stable for any t, and ε ≥ 0. An admissible trajectory of the system is an absolutely continuous solution to (1). We fix an interval of time [0, T ], and study the reachable set Dε (T ) to system (1) as ε → 0. Recall that the reachable set D(T ; τ, M ) of a control dynamic system is a set of the ends at the time moment T of all admissible trajectories, starting at the moment τ at a point of the initial set M . In the linear case the argument M in D(T ; τ, M ) is not very essential, and we put τ = 0, and M = {0}. This issue was addressed in [1] under assumption that D(t)|ε=0 is an asymptotically stable matrix for any t. The main result is that the sets Dε (T ) have a limit with respect to the Hausdorff metric as ε → 0, and the rate of convergence is O(εα ), where 0 < α < 1 is arbitrary. In this paper, under the same assumptions, we present a certain refinement of the result [1]. We show that the rate of convergence is ε log 1/ε. Furthermore, I. Lirkov, S. Margenov, and J. Wa´ sniewski (Eds.): LSSC 2009, LNCS 5910, pp. 280–285, 2010. c Springer-Verlag Berlin Heidelberg 2010
Asymptotics for Singularly Perturbed Reachable Sets
281
by assuming that the support function of U is smooth outside the origin, we find the exact coefficient of the term ε log 1/ε in the asymptotics for the support function of the reachable set. We have to stress that the smoothness assumption is essential. More precisely, the results are valid when all system data are Lipschitz continuous with respect to time t. One can present some examples, where all the system parameters are H¨ older continuous with respect to time (with any positive exponent less than 1), but the assertions of theorems are wrong.
2
Splitting of Dynamic System
Following [4] we problem by using can simplify the original gauge transformax A B F tions. If z = , A = , and B = one can perform a y C/ε D/ε G/ε substitution z = Xw, where X is an invertible 2 × 2 block matrix, and we get a new control system w˙ = A w + B u, where ˙ and B = X −1 B. A = X −1 AX − X −1 X, If X depends on t, and ε in a smooth way, then such a transformation does not change essentially the behavior of the reachable sets, but allow us to simplify the system matrix A so that the smoothness and stability assumptions are preserved. We aim at reducing the system matrix to a block-diagonal form to separate slow and fast variables. For instance, the block A 21 can be eliminated by using the lower-triangular 10 transformations X = . By applying the upper-triangular transformations L1 1N X = , one can get rid of the block A12 . Here, L(t, ε), and N (t, ε) are 0 1 uniformly bounded smooth matrix functions on [0, T ]. Then, we arrive at the split case: + F u, x˙ = Ax + Gu, u ∈ U, εy˙ = Dy
(2)
where all matrices and the convex compact U depend smoothly on time and parameter ε, and D is asymptotically stable at each time instant. We can study the asymptotic behavior of the reachable set Dε (T ) to the control system (2). This simplified problem is equivalent to the original one related to system (1). F, D, G, U To make a further simplification, assume that the parameters A, of system (2) do not depend on ε. The assumption does not result in a loss of generality. Indeed, if we put ε=0 , F = F |ε=0 , D = D| ε=0 , G = G| ε=0 , U = U |ε=0 , A = A| and form the corresponding control system x˙ = A x + F u, εy˙ = D y + G u, u ∈ U ,
(3)
282
E. Goncharova and A. Ovseevich
then, the corresponding reachable sets Dε (T ) coincide with Dε (T ) with accuracy O(ε). This follows immediately from the stability of D and the Levinson theorem (see [4]). The direct computations show that the parameters of systems (1), and (3) are related by A = (A − BD−1 C)|ε=0 , D = D|ε=0 , F = F − BD−1 G|ε=0 , G = G|ε=0 . The asymptotics we are looking for is, in fact, rather crude. Its remainders are of order o(ε log 1/ε) so that an error of order O(ε) is negligible. In what follows, we state our main results in terms of the original system (1) without appealing to splitting, but, in the proofs, the system splitting is heavily used.
3
Asymptotics of the Support Functions to Reachable Sets
Denote by Hε (ξ, η) the support function of the reachable set Dε (T ) to system (1), where ε > 0, ξ, η are dual to the variables x, y. Define a function H0 (ξ, η) =
T
ht (F (t)∗ Φ(T, t)∗ ξ ) dt +
0
∞
∗
hT (G(T )∗ eD(T ) t η) dt,
0
where ξ = ξ − C ∗ D∗−1 η, the function Φ is a fundamental matrix for linear system x˙ = (A − BD−1 C)x, and h = ht is the support function of the set U = Ut of controls. This is, in fact, the support function for the limit reachable set limε→0 Dε (T ). Theorem 1. Let Hε (ξ, η) be the support function of the reachable set Dε (T ) to system (1), then Hε (ξ, η) → H0 (ξ, η) uniformly on compacts as ε → 0. Moreover, we have the asymptotic equivalence: Hε (ξ, η) = H0 (ξ, η) + O(ε log 1/ε(|ξ| + |η|)) as ε → 0. In the proof, we address the split system (3). The proof is based upon the explicit representation of the support function Hε (ξ, η) =
0
T
1 ht (F (t)∗ Φ(T, t)∗ ξ + G(t)∗ Ψε (T, t)∗ η) dt ε
of the corresponding reachable sets, where Ψε is the fundamental matrix of system εy˙ = Dy. The idea underlying the proof goes back at least to [3], and, basically, says that the reachable set to a linear control system can be decomposed in such a way that stable, unstable and neutral parts of the reachable set are formed by using controls supported on nonoverlapping intervals. We divide the time
Asymptotics for Singularly Perturbed Reachable Sets
283
interval [0, T ] into the two subintervals [0, T − δ] and [T − δ, T ], where δ is a small positive parameter. The controls supported on the “long” interval are responsible for the “slow” part of the reachable set, while the controls supported on the “short” interval form the “fast” part of it. The proper choice of δ is crucial for the accuracy of approximation. We choose a small δ > 0 such that δ → 0 and δ/ε → ∞ as ε → 0, The main difference of the present paper with [1] stems from the final choice δ ∼ ε log 1/ε instead of δ ∼ εα . Our main asymptotic result consists in finding the remainder in the previous theorem in a more precise form c(ξ, η)ε log 1/ε + o(ε log 1/ε). We can do this under an extra assumption that the support function h is C 1 -smooth outside ¯ = hT by ζ, and consider the the origin. Denote the argument of the function h average ¯ ¯ ∗ 1 τ ∂h ∂h (G(T )∗ eD(T ) t η) dt. Av τ ( )(η) = ∂ζ τ 0 ∂ζ One can see that there exists the limit Av(f ) = lim Avτ (f ) for any continuous τ →∞
∗
function f of ζ = 0 of degree zero. Indeed, the function φ(t) = G(T )∗ eD(T ) t η has the form of a vector-valued quasipolynomial, and, therefore, φ(t) = e(Re λ)t tN
eiωt aω + o(e(Re λ)t tN ),
where λ is an eigenvalue of D(T ), N + 1 is the maximal size of the corresponding Jordan block, sum is taken over all real ω such that Re λ + iω is an eigenvalue of D(T ), and aω is a time-independent vector. “Generically”, the trigonometric polynomial φ0 (t) = eiωt aω is comprised either of one or two harmonics. We have 1 τ Av (f ) = lim f (φ0 (t)) dt, τ →∞ τ 0 since f is homogeneous of degree zero. But the latter integrand is an almost periodic function, which can be averaged. Define the function c(ξ, η) by 1 c(ξ, η) = Λ
¯ ∗ ∂h ∗ ¯ (η) − h(F (T ) ξ ) , Av F (T ) ξ , ∂ζ
where ξ = ξ − C(T )∗ D(T )∗−1 η, Λ = Λ(η) is the absolute value of the first ∗ Lyapunov exponent of the function t
→ |eD(T ) t η| (this is the modulus of the real part of an eigenvalue of D(T )). Theorem 2. Assume that the support function ¯h = hT of the control set UT is C 1 outside the origin. Then Hε (ξ, η) = H0 (ξ, η) + c(ξ, η)ε log
1 + o (ε log 1/ε) ε
284
4
E. Goncharova and A. Ovseevich
Concluding Remarks
Note that the coefficient c(ξ, η) of ε log 1ε in the asymptotics is nonpositive. ∂¯ h ¯ ∗ ξ ) for any ζ, and the averaging operation preserves (ζ) ≤ h(F Indeed, F ∗ ξ , ∂ζ the inequality. In the generic case, when the set UT of admissible controls at the time instant T is not a singleton, the coefficient c(ξ, η) is not identically (for ¯ all ξ, η) equal to zero. Otherwise, this would imply the equality h(ζ) = ζ, ϕ , ¯ ∂ h ∗ D∗ t ¯ ϕ = Av ( (G e η)), for all ζ and η. The latter necessarily means that h(ζ) ∂ζ ≡0 is the support function for the singleton UT = {ϕ}. The fact that c(ξ, η) means that the estimate given by Theorem 1 is sharp, i.e., for some ξ, η, we have |Hε (ξ, η) − H0 (ξ, η)| ≥ Cε log 1/ε, where C > 0 does not depend on ε. The above results can be illustrated by the simplest possible example of a singularly perturbed linear system: x˙ = u εy ˙ = −y + u, where x, y, u are scalars, and |u| ≤ 1. This example is also presented in [1]. An easy calculation reveals that in this case the difference of the support functions of the prelimit and limit reachable set equals ΔH = Hε (ξ, η) − H0 (ξ, η) = −2tε |ξ| − |η|(2e−tε /ε − e−T /ε ) provided that ξη < 0. Here, tε = ε log 1ε |η| |ξ| . Thus, for fixed ξ, η in this range, the difference ΔH has the form −2|ξ|ε log 1ε + Cε + r, where C is a constant, and the remainder r is exponentially small as ε → 0. This proves again that the estimate in Theorem 1 is sharp. There arises a natural question: Does a greater smoothness of all data implies an improvement of the asymptotic expansion in Theorem 2? We do not know the ultimate answer, but expect that in general there is no such improvement. More precisely, we expect that there exist singularly perturbed systems with C ∞ smoothness of all input data such that the remainder o(ε log 1ε ) in Theorem 2 can be estimated from below as Ω(αε log 1ε ), where α = α(ε) is an arbitrary function such that α(ε) → 0 as ε → 0. For instance, one cannot expect that the remainder is O(ε log 1ε / log log 1ε ).
Acknowledgements The work was supported by RFBR (grants 08-01-00156, 08-08-00292), and partly by Russian Federation President grant for Scientific Schools 1676.2008.1. The authors are grateful to referees for thoughtful comments.
Asymptotics for Singularly Perturbed Reachable Sets
285
References 1. Dontchev, A.L., Slavov, J.I.: Lipschitz properties of the attainable set of singularly perturbed linear systems. Systems & Control Letters 11(5), 385–391 (1988) 2. Dontchev, A.L., Veliov, V.M.: Singular perturbations in Mayer’s problem for linear systems. SIAM J. Control and Opt. 21, 566–581 (1983) 3. Dontchev, A.L., Veliov, V.M.: On the behaviour of solutions of linear autonomous differential inclusions at infinity. C. R. Acad. Bulg. Sci. 36, 1021–1024 (1983); MSC 1991 4. Kokotovich, P.V.: Singular Perturbation Techniques in Control Theory. SIAM Review 6(4), 501–550 (1984) 5. Ovseevich, A.I.: Asymptotic behavior of attainable and superattainable sets. In: Proceedings of the Conference on Modeling, Estimation and Filtering of Systems with Uncertainty, Sopron, Hungary, 1990, pp. 324–333. Birkha¨ user, Basel (1991) 6. Ovseevich, A.I., Figurina, T.Y.: Asymptotic behavior of reachable sets for singularly perturbed linear time-invariant control systems. Appl. Math. & Mech. 62(6), 977– 983 (1998)
On Optimal Control Problem for the Bundle of Trajectories of Uncertain System Mikhail I. Gusev Institute of Mathematics and Mechanics, Russian Academy of Sciences, 16 S. Kovalevskaya Str., 620219 Ekaterinburg GSP-384, Russia
[email protected] Abstract. The problem of optimal choice of inputs is considered for identification of control system parameters by the results of measurements. The integral of information function is accepted as a criterion of optimality. It is shown that the problem may be reduced to an optimal control problem for the bundle of trajectories of the control system generated by the set of unknown parameters.
1
Introduction
The paper deals with the problem of state and parameters estimation for uncertain control system on the basis of available observations corrupted by noise. The considered model of uncertainty is deterministic, with set-membership description of the uncertain items. Unknown parameters of ODEs presenting the system, and the noise in measurements are assumed to be unknown but bounded with preassigned bounds. The set of parameters, consistent with the system equations, measurements, and a priori constraints called information (feasible) set is considered as the solution of estimation problem [10,4,2,8,5]. This set-valued estimate depends on the input (control) of the system. In this paper we consider the problem of optimal input choice [7] for guaranteed estimation of the parameters of dynamic system on the basis of indirect observation. The information sets in the problem may be described as the level sets for so-called information function (information state) [5,1,6]. An information function is defined as a value function for a certain auxiliary optimal control problem. The integral of information function over the set of a priori constraints on parameters is considered as a criterion of optimality. This allows to avoid, when designing an optimal input, the immediate construction of information sets. It is shown that the considered problem may be reduced to an optimal control problem for the bundle of trajectories of the system generated by the set of unknown parameters. We describe the results of numerical simulations for above problem. Finally, an input synthesis procedure based on sequential usage of the results of observations is proposed.
2
Linear Models
Here we consider the input design problem for a linear model in Hilbert space in order to clarify the details of the scheme used in the paper. In this linear I. Lirkov, S. Margenov, and J. Wa´ sniewski (Eds.): LSSC 2009, LNCS 5910, pp. 286–293, 2010. c Springer-Verlag Berlin Heidelberg 2010
On Optimal Control Problem for the Bundle of Trajectories
287
model the output y available for measurements depends on unknown vector of parameters q = (q1 , ..., qm ) under the equality y=
m
qi ai (u) + ξ.
i=1
Here ai : U → H are given maps from the given set of control parameters U to the real Hilbert space H, ξ ∈ H is treated as a measurement error, u ∈ U is a control. Assume that all a priori information on q and ξ is given by the conditions q = (q1 , ..., qm ) ∈ Q ⊂ Rm ,
ξ ∈ Ξ = {ξ : ξ, ξ ≤ 1},
(1)
where ·, · is an inner product in H, Q is a compact set in Rm . An input design problem consists of two stages. The fist one is the identification problem, the second is the choice of optimal input, providing the best quality of identification. The identification problem is related to an estimation of unknown value of q on the basis of measurement of the output y. The solution of this problem is the information (feasible) set [2,5] consisting of all values of q consistent with the results of measurement and a priori data. Let y, u be given. An information set is determined as follows ˆ u) = {q ∈ Q : ∃ ξ, ξ, ξ ≤ 1, y = Q(y,
m
qi ai (u) + ξ}.
i=1
ˆ u) contains an unknown true value of q. The set Q(y, The quality of identification usually is characterized by the value of some ˆ u)), which is defined on the class of sets. A radius, scalar functional Ω(Q(y, diameter or a volume of the set may be considered as a such functional. The problem of optimal input choice takes on the following form: to find the control (input) u ∈ U , solving the problem ˆ u)) → min . max Ω(Q(y, y
u∈U
Here the maximum is taken in all possible values of y, or, equivalently, in all pairs q, ξ, satisfying constraints (1). A disadvantage of such approach is a necessity of constructing of information sets for calculation of optimal inputs. The solution of the last problem requires laborious computing procedures (especially in the case of nonlinear identification problems, where an information set may be nonconvex or even nonconnected). Further, we modify the statement of the problem in order to avoid the direct constructing of information sets in the process of calculation of optimal input. The proposed approach is based on the notion of information function (information state) of the problem. In the case of linearmodel this function m V (y, u, q) m is determined by the equality V (y, u, q) = y − i=1 qi ai (u), y − i=1 qi ai (u).
288
M.I. Gusev
ˆ u) = {q ∈ Q : V (y, u, q) ≤ 1}. If for Obviously, under given u, y we have Q(y, the controls u ˆ, u ¯ under given y V (y, u ˆ, q) ≤ V (y, u ¯, q)
(2)
ˆ u ˆ u for every q ∈ Q, then Q(y, ¯) ⊂ Q(y, ˆ). Hence, u ¯ is more preferable than uˆ because it gives more precise estimate of unknown parameter. Consider a scalar functional on the set of information states, which is monotone with respect to relation (2) and is defined in the following way I(y, u) = V (q, y, u)dμ(q), Q
where μ is a nonnegative measure defined on the σ-algebra of Lebesgue measurable subsets of Q with the property μ(Q) = 1. Denote by q ∗ the true value of uncertain parameter and by ξ ∗ – the realization of disturbance in measurements. The output y ∗ is a function of q ∗ , ξ ∗ and input u: y ∗ = y ∗ (q ∗ , ξ ∗ , u). Depending on the way of accounting the dependence of y from parameters the following statements of the problem are possible. Problem 1. Find u ∈ U , maximizing the functional I1 (y ∗ , u) = V (q, y ∗ (q ∗ , ξ ∗ , u), u)dμ(q).
(3)
Q
Problem 2. Find u ∈ U , maximizing the functional ∗ V (q, y ∗ (q ∗ , ξ ∗ , u), u)dμ(q). I2 (q , u) = ∗inf ξ ∈Ξ
(4)
Q
Problem 3. Find u ∈ U , maximizing the functional V (q, y ∗ (q ∗ , ξ ∗ , u), u)dμ(q). I3 (u) = ∗ inf∗ ξ ∈Ξ,q ∈Q
(5)
Q
Introduce the following definitions. Let K = {k ∈ Rm : ki ∈ {0, 1}, i = 1, ..., m}, q¯ ∈ Rm . By pk denote a mapping Rm → Rm defined by the equalities pi (q) = (−1)ki qi , i = 1, ..., m. The set Q is said to be q¯-symmetrical, if q ∈ Q implies pk (q − q¯) + q¯ ∈ Q for every k ∈ K. For the set E ⊂ Q denote E k = pk (E − q¯) + q¯. For measurable E the set E k is also measurable; if Q q¯ is symmetrical then E k ⊂ Q. The measure μ is said to be q¯-symmetrical if μ(E k ) = μ(E) for each measurable E ⊂ Q. Assumption 1. There exists q¯ ∈ Q such that the set Q and the measure μ are q¯-symmetrical.
On Optimal Control Problem for the Bundle of Trajectories
289
¯ = Q − q¯, as μ Let Assumption 1 holds. Denote Q ¯ denote the on m measure i ¯ Q, defined by the equality μ ¯ (E) = μ(E + q ¯ ). Let y ˜ = y − q ¯ a (u) = i i=1 m ¯i )ai (u) + ξ. Then i=1 (qi − q I(y, u) =
˜ y , y˜dμ(q) − 2
V (q, y, u)dμ(q) = Q
m i=1 Q
Q
+
m
(qi − q¯i )dμ(q)˜ y , ai (u)+
(qi − q¯i )(qj − q¯j )dμ(q),
pij (u)
i,j=1
Q
where pij (u) = ai (u), aj (u). Lemma 1. Under assumption 1 the following equalities hold (qi − q¯i )dμ(q) = 0, (qi − q¯i )(qj − q¯j )dμ(q) = 0, i, j = 1, ..., m, i = j. Q
Q
This lemma follows from the elementary properties of Lebesgue integral. From m lemma 1 it follows that I(y, u) = ˜ y , y˜ + i=1 Ai pii (u), Ai = Q (qi − q¯i )2 dμ. Calculating the infimum in ξ ∗ , we get I2 (q ∗ , u) = ∗inf V (q, y ∗ (q ∗ , ξ ∗ , u), u)dμ(q) ξ ∈Ξ
Q
= φ((q ∗ − q¯) P (u)(q ∗ − q¯)) +
m
Ai pii (u),
i=1
where P (u) is a matrix with the elements pij (u), i, j = 1, ..., m and the function φ(x) is defined by the equality φ(x) =
0 0 ≤ x ≤ 1, √ ( x − 1)2 x ≥ 1.
Calculating the infimum I2 in q ∗ , we have I3 (u) = inf I2 (q ∗ , u) = ∗ q
m
Ai pii (u).
i=1
Proposition 1. Let for every u ∈ U , q ∗ ∈ Q (q ∗ − q¯) P (u)(q ∗ − q¯) ≤ 1, and assumption 1 holds. Then the solutions of problems 2, 3 coincide.
(6)
290
3
M.I. Gusev
Input Design Procedure for Nonlinear Control System
An analog of Problem 2 from the previous part is studied here for nonlinear control system with uncertain parameters. Consider the control system x˙ = f (t, q, x, u(t)), t ∈ [t0 , t1 ], x(t0 ) = x0 , n
(7)
r
(x ∈ R , u ∈ R ) with the right-hand side f depending on unknown parameter q ∈ Rm . We assume that all a priori information on q is given by the inclusion q ∈ Q where Q is a compact set in Rm . As an admissible control (input) we will consider a Lebesgue-measurable function u : [t0 , t1 ] → U , where U ⊂ Rr . We assume that f (t, q, x, u) is continuous and continuously differentiable in x on [t0 , t1 ] × Q × Rn × U . The solution of system (7) is denoted as x(t, q) (or x(t, q, u(·)). Consider the measurement equation on [t0 , t1 ] y(t) = g(t, x(t)) + ξ(t),
t ∈ [t0 , t1 ],
(8) n
k
corrupted by unknown but bounded noise ξ(t). Here g : [t0 , t1 ] × R → R is continuous function. A priori information on ξ(t) is assumed to be given by the inclusion ξ(·) ∈ Ξ, where Ξ is a bounded set in the space Lk2 [t0 , t1 ]. Suppose that Ξ = {ξ(·) : W (ξ(·)) ≤ 1},
t1
W (ξ(·)) =
ξ (t)Rξ(t)dt,
t0
R is a given positive defined matrix. Let y(t) be the result of measurements, generated by unknown ”true” value of q ∗ ∈ Q, input u(t), and measurement error ξ(t). The function q → V (q, y(·), u(·)), defined by the equality V (q, y(·), u(·)) = W (y(·) − g(·, x(·, q))) is said to be an information function(information state) of the problem (7),(8). The set Q(y(·), u(·)) of all parameters q ∈ Q that are consistent with (7), (8) and a priori constraints is referred to as the information set relative to measurement y(t). It follows directly from definitions that Q(y(·), u(·)) = {q ∈ Q : V (q, y(·), u(·)) ≤ 1}. Unknown q ∗ belongs to the information set. We consider an integral of information function as a objective functional of the problem (9) I(y(·), u(·)) = V (q, y(·), u(·))dμ(q). Q
Here μ is a nonnegative measure defined on Lebesgue subsets of Q such that μ(Q) = 1. The functional I is nonnegative, the more value of I corresponds to a more accurate estimate of unknown quantity of parameter q. The integral (9) depends on u(·) and the result of measurements y(·). In turn, y(t) = y ∗ (t) + ξ(t), where y ∗ (t) = g(t, x(t, q ∗ , u(·))) and ξ(t) is the measurement error. In the worst case, the value of I is equal to J(u(·)) = inf V (q, y(·) + ξ(·), u(·))dμ(q). W (ξ(·))≤1
Q
On Optimal Control Problem for the Bundle of Trajectories
291
Direct calculations lead to the following formula for J J(u(·)) = I1 (u(·)) + φ(I2 (u(·))), where
t1 I1 (u(·))) =
(10)
r(t, q) Rr(t, q)dμ(q)dt,
t0 Q
t1 I2 (u(·))) =
r(t, q) dμ(q)R
t0 Q
r(t, q)dμ(q)dt, Q
r(t, q) = g(t, x(t, q ∗ )) − g(t, x(t, q)),
φ(x) =
−x if 0 ≤ x ≤ 1 √ 1 − 2 x if x ≥ 1.
Thus, the optimal input design problem is equivalent to the maximization of the functional J on the tubes of trajectories of uncertain system (7). It is similar in a certain way to the problems of beam optimization [9]. The necessary conditions of optimality for this problem [3] constitute the basis for elaboration of numerical algorithms for constructing optimal inputs. These algorithms are based on conventional gradient method. On Fig. 1 the results of numerical simulation for the system describing oscillations of nonlinear pendulum are presented. The orange and blues lines denote the boundaries of information sets corresponding to optimal and some non optimal inputs. These information sets are constructed for the case of hard (magnitude) constraints on measurement noise: |ξ(t)| ≤ 1, t ∈ [t0 , t1 ].
0
−0.1
−0.2
−0.3
−0.4
−0.5
−0.6
−0.7
−0.8
−0.9
−1
−3
−2
−1
0
1
2
3
4
Fig. 1. Information sets for nonlinear pendulum
5
292
4
M.I. Gusev
Input Synthesis Procedure
The solution of input design problem considered in the previous section depends on unknown value q ∗ of parameter q. In this part we describe the procedure of input construction based on sequential estimates of this parameter. This procedure does not require the knowledge of q ∗ . Consider the partition {τ0 , ..., τN } of the interval [t0 , t1 ]. At the zero step we choose some initial estimate qˆ0 of q ∗ . Define the solution u0 (t) for the problem Iqˆ0 (u(·)) → min . Here notation Iqˆ0 (u(·)) is used for the functional I with qˆ0 being taken instead of q ∗ . Then we apply this control on [τ0 , τ1 ], measure the output y(·) on [τ0 , τ1 ] and go to the next step. Let us describe the k-th step . At this step we have the input u(t) to be given on the interval [t0 , τk ]. Choose the estimate qˆk of q ∗ from the results of measurement of output on [t0 , τk ]. Define the solution uk (t) for the problem Iqˆk (u(·)) → min assuming u(t) on [τ0 , τk ] to be equal to uk−1 (t). Apply this control on [τk , τk+1 ]. Measure the output y(·) on [τk , τk+1 ]. Go to the next step. As an example of application of input synthesis procedure consider the problem of optimal elevator deflection inputs for identifying parameters in the longitudinal shot period equations of aircraft [7]. The second order control system is considered on the interval [0, 4]. The system equations are as follows x˙ 1 = q1 x1 + q2 x2 − 1.66u, x˙ 2 = x1 + 0.737x2 + 0.005u,
(11)
x1 (0) = 0, x2 (0) = 0. Here x1 is the angle of attack, x2 is the pitch rate, u is the elevator command, |u| ≤ 10, t is a time. The coefficients q1 , q2 are unknown and should be specified on the base of observations of y(t) = x1 (t) on the interval [0, 4]. A priori constraints on the measurement error ξ and parameters q = 4 (q1 , q2 ) are given by the relations 0 ξ 2 (t)dt ≤ 1, q ∈ Q, where Q is the square Q = {q : max |qi − qˆi | ≤ 0.4} i
with center qˆ = (−1.25, −0.75). Let the unknown true value of a vector q be equal to q ∗ = (−1.588; −0.562). For μ we take a measure, with equal values at the vertices of Q. Note that in considered example we don’t recalculate the measure μ at each step of the procedure. As the estimate qˆk we take mean value of the corresponding information set. The Figure 2 presents the results of numerical simulation. The blue star on the picture denotes the ”true” value q ∗ of q, and the red crosses denote the sequential estimates qˆk of this parameter. The black lines denote the boundaries of information sets corresponding to q ∗ and qˆk at the final step of procedure. In considered example these sets and corresponding optimal inputs are sufficiently close to each other.
On Optimal Control Problem for the Bundle of Trajectories
293
−0.4 −0.45
q
2
−0.5 −0.55 −0.6 −0.65 −0.7 −0.75 −1.75
−1.7
−1.65
−1.6
−1.55
−1.5
−1.45
−1.4
q1
Fig. 2. Information sets and parameter estimates
Acknowledgment The research was supported by the Russian Foundation for Basic Research (RFBR) under Project 09-01-00589.
References 1. Baras, J.S., Kurzhanski, A.B.: Nonlinear filtering: the set-membership (bounding) and the H-infinity techniques. In: Proc. 3rd IFAC Symp. on nonlinear control systems design, pp. 409–418. Pergamon Press, Oxford (1995) 2. Chernousko, F.L.: State Estimation for Dynamic Systems. CRC Press, Boca Raton (1994) 3. Gusev, M.I.: Optimal Inputs in Guaranteed Identification Problem. Proceedings of the Steklov Institute of Mathematics (suppl. 1 ), S95–S106 (2005) 4. Kurzhanski, A.B.: Control and Observation under Conditions of Uncertainty. Nauka, Moscow (1977) 5. Kurzhanski, A.B., Valyi, I.: Ellipsoidal Calculus for Estimation and Control. Birkhauser, Boston (1997) 6. Kurzhanski, A.B., Varaiya, P.: Optimization Techniques for Reachability Analysis. Journal of Optimization Theory and Applications 108, 227–251 (2001) 7. Mehra, R.K.: Optimal inputs signals for parameter estimation in dynamic systems – Survey and new results. IEEE Trans. on Autom. Control AC–19, 753–768 (1974) 8. Milanese, M., et al. (eds.): Bounding Approach to System Identification. Plenum Press (1995) 9. Ovsyannikov, D.A., Ovsyannikov, A.D.: Mathematical control model for beam dynamics optimization. In: Proceedings of the International Conference on Physics and Control, vol. 3, pp. 974–979 (2003) 10. Schweppe, F.: Uncertain Dynamic Systems. Prentice Hall, Englewood Cliffs (1973)
High-Order Approximations to Nonholonomic Affine Control Systems Mikhail I. Krastanov1 and Vladimir M. Veliov1,2 1
Institute of Mathematics and Informatics, Bulgarian Academy of Sciences Acad. G. Bonchev Str. Bl. 8, 1113 Sofia, Bulgaria
[email protected] 2 Institute of Mathematical Methods in Economics, Vienna University of Technology
[email protected]
Abstract. This paper contributes to the theory of approximations of continuous-time control/uncertain systems by discrete-time ones. Discrete approximations of higher than first order accuracy are known for affine control systems only in the case of commutative controlled vector fields. The novelty in this paper is that constructive second order discrete approximations are obtained in the case of two non-commutative vector fields. An explicit parameterization of the reachable set of the Brockett non-holonomic integrator is a key auxiliary tool. The approach is not limited to the present deterministic framework and may be extended to stochastic differential equations, where similar difficulties appear in the non-commutative case.
1
Introduction
In this paper we present a new result about approximation of an affine control system by appropriately constructed discrete-time systems. The novelty is that the approximation is of second order accuracy with respect to the discretization step, while the vector fields defining the dynamics are not assumed commutative. In the commutative case higher than first order approximations are presented e.g. in [13,4,9]. However, it is well known that the non-commutativity is a substantial problem for second order approximations of control/uncertain systems and of stochastic differential equations (cf. [8]), due to the appearance of mixed multiple integrals in the approximation by Volterra (Chen-Fliess) series. We present an approach for obtaining higher-order approximations that combines the truncated Chen-Fliess series with the constructive description of the reachable set of an auxiliary control system which depends on the bounds on the controls, but is independent of the particular vector fields in the original system. In the particular case of a two-inputs system, on which we focus in this paper, the auxiliary system is the well known Brockett non-holonomic integrator, for the reachable set of which we present an explicit parameterization making use of [12]. The approach is not limited to the present deterministic framework and I. Lirkov, S. Margenov, and J. Wa´ sniewski (Eds.): LSSC 2009, LNCS 5910, pp. 294–301, 2010. c Springer-Verlag Berlin Heidelberg 2010
High-Order Approximations to Nonholonomic Affine Control Systems
295
may be extended to stochastic differential equations, where a similar difficulty appears. Let us consider an affine control system x˙ =
m
ui fi (x),
x(0) = x0 ∈ Rn ,
u = (u1 , . . . , um ) ∈ U,
t ∈ [0, T ],
(1)
i=1
where T > 0 is a fixed time horizon and U is a compact subset of Rm . Admissible inputs are all measurable functions u : [0, T ] → U , and the set of all such functions is denoted by U = U(0, T ). Under the assumptions formulated in the next section every u ∈ U defines an unique trajectory of (1) on [0, T ], denoted further by x[u](t). In parallel, for every natural number N (presumably large) we consider a discrete-time system y k+1 = gN (y k , v k ), y 0 = x0 ,
v k = (v1k , . . . , vpk ) ∈ V,
k = 0, . . . , N − 1, (2)
where V is a compact subset of Rp . An admissible control of this system is any element v = (v 0 , . . . , v N −1 ) ∈ (V )N , and the set of all such vectors is denoted by VN . For v ∈ VN denote by y[v] = (y 0 [v], . . . , y N [v]) the corresponding solution of (2). System (2) will be viewed as an approximation of (1). To make this precise we denote h = T /N , tk = hk, and link y k with x(tk ). In addition, in order to relate a discrete control v ∈ VN with a continuous-time control u ∈ U, we require that a mapping E : V → U(0, h) is defined associating to any v ∈ V an admissible control u = E(v) on the interval [0, h]. Then any v ∈ VN defines an admissible control from U (denoted by EN (v)) as EN (v)(s) = E(v k )(s − tk ) for s ∈ [tk , tk+1 ). Definition 1. The sequence of systems (2) and mappings EN provides an approximation of order r to (1) if there is a constant c such that for every sufficiently large N the following holds true: (i) for each u ∈ U there exists v ∈ VN for which x[u](tk ) − y k [v] ≤ chr , k = 0, . . . , N ; (ii) for each v ∈ VN it holds that x[EN (v)](tk ) − y k [v] ≤ chr , k = 0, . . . , N . The above definition requires that every trajectory of (1) is approximated with accuracy O(hr ) by a trajectory of (2), and that from every control of (2) one can recover a control of (1) which provides the same order of approximation for the respective trajectories. It is well known (see [3]) that the Euler discretization scheme provides even in a much more general formulation a first order approximation. In our case the m respective specifications are V = U , gN (y, v) = y + h i=1 vi fi (y), E(v)(t) ≡ v. In Section 2 a system of the type of (2) that provides a second order approximation to (1) is implicitly defined. Thanks to the investigation of an auxiliary problem in Section 3, the second order approximation is made explicit in the case of two controls (m = 2) in Section 4.
296
2
M.I. Krastanov and V.M. Veliov
A Second Order Approximation: Basic Idea
Below we assume that the control u is two-dimensional, that the set U ⊂ R2 is compact and that the functions fi : X → Rn are differentiable with Lipschitz n continuous derivatives. Here X ⊂ R is a set that contains in its interior the point x0 and all trajectories of (1) on [0, T ]. At a point x ∈ int X and for a “short” horizon [0, h] one can approximate in a standard way the solution of (1) starting from x and corresponding to a measurable control u by the truncated Volterra (Chen-Fliess) series (see e.g. [7]): 2 m h 1 x(h) = x + fi (x) ui (t) dt + f fi (x) ui (t) dt 2 i=1 i 0 0 i=1 h h + fi fj (x) ui (t) dt uj (t) dt
m
h
0
1≤j
+
0
h
[fi , fj ](x)
ui (t)
0
1≤i<j≤m
t
uj (s) ds dt + O(h3 ),
0
with [fi , fj ] = fi fj − fj fi , and |O(h3 )| ≤ ch3 , where c depends only on an upper bound of |fi |, the Lipschitz constant of fi around x and |U | = maxu∈U |u|. Changing the variables of integration we obtain the representation 1 2 m h2 fi (x) ui (ht) dt + f fi (x) ui (ht) dt x(h) = x + h 2 i=1 i 0 0 i=1 1 1 +h2 fi fj (x) ui (ht) dt uj (ht) dt m
1≤j
+h2
1≤i<j≤m
1
0
(3)
0
1
[fi , fj ](x)
ui (ht) 0
t
uj (hs) ds dt + O(h3 ).
0
This representation is not symmetric, but can easily be put into a symmetric form by summing the versions of the above representation corresponding to all permutations of the control variables. In the case m = 2 considered in this paper the symmetric representation gives the alternative formula (to obtain this representation one can use the explicit formulae obtained in [1], [6] and [10] for product expansion of the Chen-Fliess series) x(h) = gN (x, v) + O(h3 ),
(4)
where gN (y, v) = y + h(f1 (y)v1 + f2 (y)v2 ) +
h2 f fj (y)vi vj + h2 [f1 , f2 ](y)v3 2 i,j=1,2 i
High-Order Approximations to Nonholonomic Affine Control Systems
297
and
1
v1 =
u1 (ht) dt, 0
1
u1 (ht)
v3 = 0
v2 =
1
u2 (ht) dt, 0
t
0
1
u2 (hs) ds dt −
t
u2 (ht) 0
u1 (hs) ds dt.
0
The above relations define a mapping L1 ([0, h] → U ) u −→ F (u) = v that maps the admissible controls on [0, h] onto a set V3 ⊂ R3 . The set V3 is exactly the reachable set at time t = 1 of the auxiliary control system v˙ 1 = u1 , v˙ 2 = u2 , v˙ 3 = u1 v2 − u2 v1 ,
v1 (0) = 0, v2 (0) = 0, v3 (0) = 0.
(u1 , u2 ) ∈ U.
(5)
Let E be any selection mapping of the inverse F −1 , that is V v −→ E(v) ∈ F −1 (v), (thus E “recovers” a control function u = E(v) from a vector v ∈ V3 ). By a standard propagation/accumulation error analysis one can obtain the following result. Theorem 1. The discrete-time system (2) with the specifications (2), V3 , E, provides a second order approximation to the control system (1) for the case m = 2, in the sense of Definition 1. If the original control system (1) contains a drift term f , then one can obtain a respective O(h3 ) local approximation applying (3) with the substitution U := 1 × U . In this case the mapping gN and the discrete control set V = V5 are defined as h2 f f (y) + h (f1 f (y)v1 + f2 f (y)v2 ) (6) 2 +h ([f, f1 ](y)v3 + [f, f2 ](y)v4 ) h2 f fj (y)vi vj + h2 [f1 , f2 ](y)v5 , +h(f1 (y)v1 + f2 (y)v2 ) + 2 i,j=1,2 i
gN (y, v) = y + hf (y) +
where (v1 , . . . , v5 ) ∈ V5 , and V5 is the reachable set of the 5-dimensional control system v˙ 1 v˙ 2 v˙ 3 v˙ 4 v˙ 5
= u1 , = u2 , = v1 , = v2 , = u1 v2 − u2 v1 ,
v1 (0) = 0, v2 (0) = 0, v2 (0) = 0, v4 (0) = 0, v5 (0) = 0.
(u1 , u2 ) ∈ U.
A similar theorem holds true with the specifications of gN in (6), V5 , and E, where u = E(v) is a control with values in U that reaches the point v ∈ V5 .
298
M.I. Krastanov and V.M. Veliov
We stress that the sets V3 and V5 are independent of the vector fields fi , hence they can be “pre-calculated” by a representative set of points for typical control constraining sets U , such as boxes, balls, or ellipsoids. Such a “pre-calculation” is possible for V3 but is hard for V5 . Moreover, the utilization of the approximation from Theorem 1 with a set V represented by a huge number of points is limited to simulation purposes. An explicit parameterization of the discrete control set V , that may make it efficiently usable also in the optimal control context, is discussed in the next sections.
3
An Auxiliary Problem
In this section we investigate the reachable set V3 of the auxiliary control system (5) in the most typical case of U = [−1, 1] × [−1, 1]. This is a particular be-linear system well known in the literature as “Brocket non-holonomic integrator” [2]. The pair of vector fields B1 (v) = (1, 0, v2 ) and B2 (v) = (0, 1, −v1 ) that defines the system is nilpotent of order two and generates an Lie algebra of rank three at the origin. Thus, the system is small-time locally controllable at the origin. Hence, V3 contains the origin in its interior (cf. for example [5] or [11]). The description of the reachable set V3 is complicated by the fact that it is non-convex. More precisely, the following lemma holds for the reachable set of (5) on [0, θ], denoted further by V3 (θ). Lemma 1. The reachable set V3 (θ) is not-convex for each θ > 0. The next lemma is obvious. Lemma 2. The projection of V3 (θ) on the (v1 , v2 )-plane is the square V2 (θ) centered at the origin, with sides parallel to the axes and of length 2θ. The following result plays the key role in the characterization of V3 . Lemma 3. If a point (v1 , v2 , v3 ) belongs to V3 (θ), then also (v1 , v2 , αv3 ) belongs to V3 (θ) for each α ∈ [−1, 1]. This lemma is proved, essentially, in [12], although our proof (to be presented elsewhere) is technically different. It employs the chronological calculus (see [1]) and is applicable for multi-input systems (with m > 2) and for systems with a drift term (see the end of the previous section), in contrast to the geometric approach in [12]. The last two lemmas together imply that the set V3 (θ) can be recovered from its upper boundary, that is, from the surface Γ (θ) = {v ∈ R3 : (v1 , v2 ) ∈ V2 (θ), v3 = max{w : (v1 , v2 , w) ∈ V3 (θ)}}. Notice that according to Lemma 1 this surface is not a graph of a concave function. Next, we shall define a parameterized family of controls that generate the upper boundary Γ (θ) of the reachable set V3 (θ). In doing that we rely on the results in [12].
For any θ > 0 define the sets

P^1(θ) = {(μ, τ) : θ/3 ≤ τ ≤ θ, 0 ≤ μ ≤ τ, θ − 2τ ≤ μ ≤ θ − τ},
P^2(θ) = {(μ, τ) : θ/4 ≤ τ ≤ θ/2, 0 ≤ μ ≤ τ, θ − 3τ ≤ μ ≤ θ − 2τ}.

For each parameter p = (μ, τ) ∈ P^1(θ) we define the control u^{1,p} on [0, θ] as follows:

u^{1,p}(t) = (−1, 1) for t ∈ [0, μ),  (1, 1) for t ∈ [μ, μ + τ),  (1, −1) for t ∈ [μ + τ, θ].
Moreover, for each p = (μ, τ) ∈ P^2(θ) we define u^{2,p} as

u^{2,p}(t) = (−1, 1) for t ∈ [0, μ),  (1, 1) for t ∈ [μ, μ + τ),  (1, −1) for t ∈ [μ + τ, μ + 2τ),  (−1, −1) for t ∈ [μ + 2τ, θ].
Let us define the following symmetries in R3 : for v = (v1 , v2 , v3 ) S0 (v) = (v2 , v1 , v3 ),
S1 (v) = (−v1 , v2 , v3 ),
S2 (v) = (v1 , −v2 , v3 ), S12 (v) = (−v1 , −v2 , v3 ), S012 (v) = (−v2 , −v1 , v3 ). Proposition 1. For θ > 0 define the sets Γ i (θ) = {v[u](θ) : u ∈ P i (θ)},
i = 1, 2,
where v[u](·) is the trajectory of (5) corresponding to the control u : [0, θ] → U. Then the following representation holds true: Γ (θ) = Γ 1 ∪ S0 (Γ 1 ) ∪ S12 (Γ 1 ) ∪ S012 (Γ 1 ) ∪ Γ 2 ∪ S1 (Γ 2 ) ∪ S2 (Γ 2 ) ∪ S12 (Γ 2 ) , where the argument θ is suppressed in the right-hand side. The essence of the above proposition is two-fold. First, it implies that every point of the upper boundary of the reachable set V3 (in fact also of the whole boundary) can be reached by a bang-bang control with at most three switches. However, this information is not enough to obtain an one-to-one parameterization, since the three would give a redundant parameterization of the boundary, which is two-dimensional. The proposition claims more, namely that in the case of three switches the distance between the first and the second switch equals the distance between the second and the third switch (see formula for u2,p (t) above). Hence the needed switching points are always determined by the two parameters μ and τ . Our proof of this fact (in contrast to the geometric proof in [12]) makes use of the chronological calculus [1] and is applicable to higher-order auxiliary systems, in particular for the 5-dimensional system in the end of Section 2.
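As an illustration, the endpoint v[u^{1,p}](θ) can be computed exactly, because u^{1,p} and u^{2,p} are piecewise constant and the right-hand side of (5) is then constant on each interval of constancy. The following is a minimal sketch; the helper names are ours, not from the paper.

```python
import numpy as np

def integrate_piecewise(segments):
    """segments: list of ((u1, u2), duration); exact integration of system (5)."""
    v1 = v2 = v3 = 0.0
    for (u1, u2), dt in segments:
        # v3' = u1*v2 - u2*v1 stays constant on each piece (the dt^2 terms cancel)
        v3 += (u1 * v2 - u2 * v1) * dt
        v1 += u1 * dt
        v2 += u2 * dt
    return np.array([v1, v2, v3])

def endpoint_u1p(mu, tau, theta):
    """Endpoint v[u^{1,p}](theta) for p = (mu, tau) in P^1(theta)."""
    return integrate_piecewise([((-1.0,  1.0), mu),
                                (( 1.0,  1.0), tau),
                                (( 1.0, -1.0), theta - mu - tau)])

# one boundary point of Gamma(1), e.g. p = (0.2, 0.5)
print(endpoint_u1p(0.2, 0.5, 1.0))
```

For p ∈ P^2(θ) one would simply pass the four segments of u^{2,p} to the same helper.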
4
A Constructive Second Order Scheme for Two Non-commutative Fields
Thanks to the results of the previous section we can formulate explicitly a discrete-time system of the type of (2) in the case m = 2 and for U = [−1, 1] × [−1, 1]. Namely,

y^{k+1} = gN(y^k, v^k),  y^0 = x0,  v^k = (v1^k, v2^k, v3^k) ∈ V3 ⊂ R^3,  k = 0, . . . , N − 1,

where

gN(y, v) = y + h (f1(y)v1 + f2(y)v2) + (h^2/2) Σ_{i,j=1,2} fi′fj(y)vivj + h^2 [f1, f2](y)v3,
and
V3 = V3^1 ∪ S0(V3^1) ∪ S12(V3^1) ∪ S012(V3^1) ∪ V3^2 ∪ S1(V3^2) ∪ S2(V3^2) ∪ S12(V3^2),
V3^1 = {v = (v1, v2, v3) : v1 = 2μ − 1, v2 = 2(μ + τ) − 1, v3 = 2α(1 − τ)τ, (μ, τ) ∈ P^1(1), α ∈ [−1, 1]},
V3^2 = {v : v1 = 1 − 4τ, v2 = 2(μ + τ) − 1, v3 = 2α[(1 − μ − 2τ)(τ − μ) + τ(μ + τ)], (μ, τ) ∈ P^2(1), α ∈ [−1, 1]}.
Notice that the discrete system is not affine although the original one is. Moreover, the constraint V3 for the discrete-time control v^k is non-convex. All this makes it somewhat questionable whether the obtained second-order approximation would be useful in an optimal control context. This issue deserves additional investigation and perhaps the development of specific optimization methods. The case where the control constraint U is the unit ball in R^2 also requires additional investigation, since to our knowledge the reachable set of the auxiliary system (5) has not been constructively parameterized.
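The following is a hypothetical sketch of one step y^{k+1} = gN(y^k, v^k) for a concrete pair of non-commutative fields (here the Brockett-type fields f1(y) = (1, 0, y2), f2(y) = (0, 1, −y1)), together with a naive sampler of V3^1 from the parameterization above. The coefficients follow the scheme as printed above; the function names, the bracket convention [f1, f2] = f2′f1 − f1′f2 and the rejection sampler are our own choices, not from the paper.

```python
import numpy as np

def f1(y):  return np.array([1.0, 0.0,  y[1]])
def f2(y):  return np.array([0.0, 1.0, -y[0]])
def Df1(y): return np.array([[0., 0., 0.], [0., 0., 0.], [0., 1., 0.]])
def Df2(y): return np.array([[0., 0., 0.], [0., 0., 0.], [-1., 0., 0.]])
def lie_bracket(y):                    # [f1, f2](y) = Df2(y) f1(y) - Df1(y) f2(y)
    return Df2(y) @ f1(y) - Df1(y) @ f2(y)

def g_N(y, v, h):
    """One step of the second-order scheme for y' = f1(y)u1 + f2(y)u2."""
    v1, v2, v3 = v
    f, Df, vv = [f1, f2], [Df1, Df2], [v1, v2]
    quad = sum(Df[i](y) @ f[j](y) * vv[i] * vv[j] for i in range(2) for j in range(2))
    return y + h * (f1(y) * v1 + f2(y) * v2) + 0.5 * h**2 * quad + h**2 * lie_bracket(y) * v3

def sample_V3_1(rng):
    """Draw one point of V3^1 via (mu, tau) in P^1(1) and alpha in [-1, 1]."""
    while True:
        tau = rng.uniform(1.0 / 3.0, 1.0)
        mu = rng.uniform(max(0.0, 1.0 - 2.0 * tau), 1.0 - tau)
        if mu <= tau:
            break
    alpha = rng.uniform(-1.0, 1.0)
    return np.array([2 * mu - 1, 2 * (mu + tau) - 1, 2 * alpha * (1 - tau) * tau])

rng = np.random.default_rng(0)
print(g_N(np.zeros(3), sample_V3_1(rng), h=0.1))
```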
Acknowledgments This research was supported by the Austrian Science Foundation (FWF) under grant No P18161-N13 and partly by the Bulgarian Ministry of Science and Higher Education – National Fund for Science Research under contract DO 02–359/2008.
References 1. Agrachev, A., Gamkrelidze, R.: The exponential representation of flows and the chronological calculus. Math. USSR Sbornik, N. Ser. 107, 467–532 (1978) 2. Brockett, R.W.: Asymptotic stability and feedback stabilization. Differential geometric control theory. In: Proc. Conf., Mich. Technol. Univ., Prog. Math., vol. 27, pp. 181–191 (1982) 3. Dontchev, A., Farkhi, E.: Error estimates for discretized differential inclusion. Computing 41(4), 349–358 (1989) 4. Ferretti, R.: High-order approximations of linear control systems via Runge-Kutta schemes. Computing 58, 351–364 (1997) 5. Hermes, H.: Control systems wuth decomposable Lie lgebras. Special issue dedicated to J. P. LaSalle. J. Differential Equations 44(2), 166–187 (1982) 6. Kawski, M., Sussmann, H.J.: Noncommutative power series and formal Liealgebraic techniques in nonlinear control theory. In: Helmke, U., Pr¨ atzel-Wolters, D., Zerz, E. (eds.) Operators, Systems, and Linear Algebra, pp. 111–128. Teubner (1997) 7. Gr¨ une, L., Kloeden, P.E.: Higher order numerical schemes for affinely controlled nonlinear systems. Numer. Math. 89, 669–690 (2001) 8. Kloeden, E., Platen, E.: Numerical Solutions to Stochastic Differential Equations. Springer, Heidelberg (1992) (third revised printing, 1999) 9. Pietrus, A., Veliov, V.M.: On the Discretization of Switched Linear Systems. Systems & Control Letters 58, 395–399 (2009) 10. Sussmann, H.: A product expansion of the Chen series. In: Byrnes, C.I., Lindquist, A. (eds.) Theory and Applications of Nonlinear Control Systems, pp. 323–335. Elsevier, North-Holland, Amsterdam (1986) 11. Sussmann, H.: A general theorem on local controllability. SIAM Journal on Control and Optimization 25, 158–194 (1987) 12. Vdovin, S.A., Taras’ev, A.M., Ushakov, V.N.: Construction of an attainability set for the Brockett integrator. Prikl. Mat. Mekh. 68(5), 707–724 (2004) (in Russian); Translation in J. Appl. Math. Mech. 68(5), 631–646 (2004) 13. Veliov, V.M.: Best Approximations of Control/Uncertain Differential Systems by Means of Discrete-Time Systems. WP–91–45, International Institute for Applied Systems Analysis, Laxenburg, Austria (1991)
A Parametric Multi-start Algorithm for Solving the Response Time Variability Problem Albert Corominas, Alberto Garc´ıa-Villoria, and Rafael Pastor Institute of Industrial and Control Engineering (IOC), Universitat Polit`ecnica de Catalunya (UPC), Spain
Abstract. The Multi-start metaheuristic has been applied directly or hybridized with other metaheuristics to solve a wide range of optimisation problems. Moreover, this metaheuristic is very easy to adapt and implement for a wide variety of problems. In this study, we propose a parametric multi-start algorithm that keeps its original simplicity. To test the proposed algorithm, we solve the Response Time Variability Problem (RTVP). The RTVP is an NP-hard sequencing combinatorial optimisation problem that has recently been defined in the literature. This problem has a wide range of real-life applications in, for example, manufacturing, hard real-time systems, operating systems and network environments. The RTVP occurs whenever products, clients or jobs need to be sequenced so as to minimise variability in the time between the instants at which they receive the necessary resources. The computational experiment shows the robustness of the proposed multi-start technique.
1
Introduction
The Response Time Variability Problem (RTVP) is a scheduling problem that has recently been defined in the literature [4]. The RTVP occurs whenever products, clients or jobs need to be sequenced so as to minimise variability in the time between the instants at which they receive the necessary resources. The RTVP has a broad range of real-life applications. For example, it can be used to regularly sequence models in the automobile industry [13], for resource allocation in computer multi-threaded systems and network servers [14], to broadcast video and sound data frames of applications over asynchronous transfer mode networks [5], in the periodic machine maintenance problem when the distances between consecutive services of the same machine are equal [1], and in the collection of waste [7]. One of the first settings in which the importance of sequencing regularly appeared is the sequencing of mixed-model assembly production lines at Toyota Motor Corporation under the just-in-time (JIT) production system. One of the most important JIT objectives is to get rid of all kinds of waste and inefficiency and, according to Toyota, the main waste is due to the stocks. To reduce the stock, JIT production systems require producing only the necessary models in the necessary quantities at the necessary time. To achieve this, one main goal, as Monden [13] says, is scheduling the units to be produced to
keep constant consumption rates of the components involved in the production process. Miltenburg [12] deals with this scheduling problem and assumes that models require approximately the same number and mix of parts. Thus, only the demand rates for the models are considered. In our experience with practitioners of manufacturing industries, we noticed that they usually refer to a good mixed-model sequence in terms of having distances between the units of the same model that are as regular as possible. Therefore, the metric used in the RTVP reflects the way in which practitioners refer to a desirable regular sequence. The RTVP is an NP-hard combinatorial optimisation problem [4]. Thus, the use of heuristic or metaheuristic methods for solving real-life instances of the RTVP is justified. Two multi-start algorithms to solve the RTVP were proposed in [6,3]. The general scheme of the multi-start metaheuristic consists of two phases. In the first phase an initial solution is generated. Then, the second phase improves the obtained initial solution. These two phases are iteratively applied until a stop condition is reached. The multi-start algorithm proposed in [6] to solve the RTVP consists of generating a random solution in the first phase and then applying a local search to the random solution in the second phase; the stop condition consists in reaching a given execution time. If the quality of the initial solution is low, the execution time required by the local search to find its local optimum increases. An easy and fast way to obtain better initial solutions could be to generate at each iteration, for example, 5 solutions and to apply a local search only to the best of them. Given an instance, if the execution time allows running 10 iterations, then 50 solutions are generated and 10 solutions are optimised in total. This idea is applied in the multi-start algorithm proposed in [3] to obtain the initial solutions to be optimised. However, better performance might be obtained if all 50 solutions are generated first and then the 10 best of them are optimised; or, for example, by generating, at each iteration, 10 solutions and optimising the 2 best of them, etc. In this paper we propose a multi-start algorithm to solve the RTVP that has two new parameters: the number of initial solutions generated at each iteration and the number of the best generated solutions that are optimised at each iteration. The remainder of this paper is organized as follows. Section 2 presents a formal definition of the RTVP; Section 3 proposes a parametric multi-start algorithm for solving the RTVP; Section 4 provides the results of a computational experiment; finally, some conclusions and suggestions for future research are given in Section 5.
2
The Response Time Variability Problem (RTVP)
The aim of the Response Time Variability Problem (RTVP) is to minimise the variability in the distances between any two consecutive units of the same model. The RTVP is formulated as follows. Let n be the number of models, di the number of units to be scheduled of the model i (i = 1, . . . , n) and D the total number of units (D = Σ_{i=1}^{n} di). Let s be a solution of an instance of the RTVP that consists of a circular sequence of units (s = s1 s2 . . . sD), where sj is the
unit sequenced in position j of sequence s. For each model i with di ≥ 2, let tik be the distance between the positions in which the units k + 1 and k of the model i are found (i.e. the number of positions between them, where the distance between two consecutive positions is considered equal to 1). Since the sequence is circular, position 1 comes immediately after position D; therefore, ti,di is the distance between the first unit of the model i in a cycle and the last unit of the same model in the preceding cycle. Let ti be the average distance between two consecutive units of the model i (ti = D/di). For each model i with di = 1, ti1 is equal to ti. The objective is to minimise the metric Response Time Variability (RTV), which is defined by the following expression:

RTV = Σ_{i=1}^{n} Σ_{k=1}^{di} (tik − ti)²        (1)
For example, let n = 3, dA = 2, dB = 2 and dC = 4; thus, D = 8, tA = 4, tB = 4 and tC = 2. Any sequence that contains exactly di times the symbol i (∀i) is a feasible solution. For example, the sequence (C, A, C, B, C, B, A, C) is a solution, where RTV = [(5 − 4)² + (3 − 4)²] + [(2 − 4)² + (6 − 4)²] + [(2 − 2)² + (2 − 2)² + (3 − 2)² + (1 − 2)²] = 2.00 + 8.00 + 2.00 = 12.00.
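A minimal helper reproducing this computation of the metric (1) for a circular sequence; the function name is ours.

```python
from collections import defaultdict

def rtv(sequence):
    D = len(sequence)
    positions = defaultdict(list)
    for pos, model in enumerate(sequence):
        positions[model].append(pos)
    total = 0.0
    for model, pos in positions.items():
        d_i = len(pos)
        if d_i == 1:
            continue                     # t_i1 = t_i, so the contribution is zero
        t_i = D / d_i                    # average distance for model i
        for k in range(d_i):
            # circular distance between consecutive units k and k+1 of the model
            t_ik = (pos[(k + 1) % d_i] - pos[k]) % D
            total += (t_ik - t_i) ** 2
    return total

print(rtv(list("CACBCBAC")))             # -> 12.0, as in the example above
```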
3
The Multi-start Algorithm for Solving the RTVP
The multi-start metaheuristic is a general scheme that consists of two phases. The first phase obtains an initial solution and the second phase improves the obtained initial solution. These two phases are applied iteratively until a stop condition is reached. This scheme was first used at the beginning of the 1980s ([2,9]). The generation of the initial solution, the way it is improved and the stop condition can be very simple or very sophisticated. The combination of these elements gives a wide variety of multi-start methods. For a good review of multi-start methods, see [11,8]. The multi-start algorithm proposed in [6] for solving the RTVP is based on generating, at each iteration, a random solution and on improving it by means of a local search procedure. The algorithm stops after it has run for a preset time. Random solutions are generated as follows. For each position, a model to be sequenced is randomly chosen. The probability of each model is equal to the number of units of this model that remain to be sequenced divided by the total number of units that remain to be sequenced. The local search procedure used is applied as follows. A local search is performed iteratively in a neighbourhood that is generated by interchanging each pair of consecutive units of the sequence that represents the current solution; the best solution in the neighbourhood is chosen; the optimisation ends when no neighbouring solution is better than the current solution. To improve the obtained results, an improved multi-start algorithm was proposed in [3]. The difference with respect to the aforementioned multi-start algorithm is that, at each iteration, several random solutions are generated and only the best of them is improved by means of the local search procedure.
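A hypothetical sketch of the two building blocks just described: the biased random construction of an initial sequence and the best-improvement local search over swaps of consecutive units. It reuses the rtv() helper sketched in Section 2; the names are ours.

```python
import random

def random_solution(demands):
    """demands: dict model -> number of units d_i; returns a random sequence."""
    remaining = dict(demands)
    sequence = []
    for _ in range(sum(demands.values())):
        models = list(remaining)
        weights = [remaining[m] for m in models]     # proportional to units left
        m = random.choices(models, weights=weights)[0]
        sequence.append(m)
        remaining[m] -= 1
        if remaining[m] == 0:
            del remaining[m]
    return sequence

def local_search(sequence):
    """Best-improvement descent over swaps of consecutive positions."""
    current, best_val = list(sequence), rtv(sequence)
    while True:
        best_neighbour, best_neighbour_val = None, best_val
        for i in range(len(current) - 1):
            neighbour = current[:]
            neighbour[i], neighbour[i + 1] = neighbour[i + 1], neighbour[i]
            val = rtv(neighbour)
            if val < best_neighbour_val:
                best_neighbour, best_neighbour_val = neighbour, val
        if best_neighbour is None:        # no neighbour improves: local optimum
            return current, best_val
        current, best_val = best_neighbour, best_neighbour_val
```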
1. Set the values of parameters P and N
2. Let the best solution found X initially be void
3. Let the RTV value of the best solution found be Z = ∞
4. While execution time is not reached do:
5.   Generate P random solutions
6.   Let Xi (i = 1, . . . , P) be the ith best solution generated at step 5
7.   For each j = 1, . . . , N do:
8.     Apply the local optimisation to Xj and get Xj^opt
9.     If RTV(Xj^opt) < Z, then X = Xj^opt and Z = RTV(Xj^opt)
10.  End For
11. End While
12. Return X
Fig. 1. Pseudocode of the multi-start algorithm
The multi-start technique is also used in [10]. In [10] a multi-start algorithm is compared with seven other metaheuristics (Boltzmann Machine, Evolution Strategy, Genetic Algorithm, Sampling and Clustering, Simulated Annealing, Tabu Search and Immune Networks) when solving the Quadratic Assignment Problem (QAP), which is also a hard combinatorial optimisation problem. Their computational experiment shows the effectiveness of the multi-start approach versus other more complex approaches. We propose a multi-start algorithm that is an interesting and new extension of those proposed in [6,3]. The local search procedure is the same as the one used in [6,3]. The difference of our proposed algorithm is that, at each iteration, instead of generating one or more random solutions and optimising the best one, our algorithm generates P random solutions and optimises the N best of them. P and N are the two parameters of the algorithm, where N ≤ P. The number of iterations is limited by the available execution time, as it is in [6,3]. To the best of our knowledge, this approach has not been tested yet. Note that the algorithms proposed in [6,3] can be thought of as special cases of our more general parametric algorithm in which P = N = 1 and P ≥ 1 ∧ N = 1, respectively. Figure 1 shows the pseudocode of our algorithm. As has been mentioned, when the execution time of the algorithm is reached, the algorithm is immediately stopped (that is, the current local optimisation is also stopped).
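A compact sketch of the parametric multi-start of Fig. 1, built on the random_solution() and local_search() helpers sketched above; the simple wall-clock test stands in for the immediate-stop rule described in the text.

```python
import time

def parametric_multistart(demands, P, N, time_limit):
    assert N <= P
    best_seq, best_val = None, float("inf")
    deadline = time.time() + time_limit
    while time.time() < deadline:
        candidates = [random_solution(demands) for _ in range(P)]
        candidates.sort(key=rtv)                 # keep only the N best for optimisation
        for cand in candidates[:N]:
            seq, val = local_search(cand)
            if val < best_val:
                best_seq, best_val = seq, val
            if time.time() >= deadline:
                break
    return best_seq, best_val

# e.g. parametric_multistart({"A": 2, "B": 2, "C": 4}, P=100, N=1, time_limit=1.0)
```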
4
Computational Experiment
The RTVP instances used in the computational experiment are divided in two groups according to their size. For the first group (called G1 ) 185 instances were generated using a random value of D (number of units) uniformly distributed between 50 and 100, and a random value of n (number of models) uniformly distributed between 3 and 30; for the second group (called G2 ) 185 instances were
generated using a random value of D between 100 and 200 and a random value of n between 3 and 65. For all instances and for each model i = 1, . . . , n, a random value of di (number of units of the model i) is generated between 1 and ⌊(D − n + 1)/2.5⌋ such that Σ_{i=1}^{n} di = D. The multi-start algorithm was coded in Java and the computational experiment was carried out using a 3.4 GHz Pentium IV with 1.5 GB of RAM. For each instance, the multi-start algorithm was run once for 180 seconds with different values of P (number of random solutions generated at each iteration) and N (number of the best generated solutions optimised at each iteration). We have also used the values P = N = 1, which is equivalent to using the multi-start algorithm proposed in [6]. Tables 1 and 2 show the averages per instance of the number of iterations done by the algorithm (it), the total number of random solutions generated (sg), the total number of solutions optimised (so) and the averages per instance of the RTV values of the best solutions obtained for the G1 and G2 instances, respectively (RTV). Table 1. Average results of the G1 instances for 180 seconds
N
N/P
it
sg
so
RTV
1
1
1
1,483.19
1,484
1,483.19
38.90
10 50 100 200 400
1 1 1 1 1
0.1 0.02 0.01 0.005 0.0025
1,746.24 1,757.40 1,655.48 1,455.79 1,164.63
17,470 87,900 165,600 291,200 466,000
1,746.24 1,757.40 1,655.48 1,455.79 1,164.63
37.48 37.06 37.37 37.27 37.08
10 50 100 200 400
2 10 20 40 80
0.2 0.2 0.2 0.2 0.2
853.70 172.75 86.35 43.15 21.53
8,540 8,650 8,700 8,800 8,800
1,707.41 1,727.52 1,727.07 1,725.83 1,722.69
37.45 37.58 37.62 37.60 37.76
10 50 100 200 400
5 25 50 100 200
0.5 0.5 0.5 0.5 0.5
322.67 64.77 32.39 16.21 8.10
3,230 3,250 3,300 3,400 3,600
1,613.33 1,619.31 1,619.51 1,620.56 1,620.73
38.02 38.17 37.92 38.43 38.38
Tables 1 and 2 show that the differences between the average RTV values obtained for different P and N values do not seem significant enough. Thus, the parametric multi-start algorithm seems to be quite insensitive to the values of the parameters P and N. Anyway, note that when P = N = 1 or N/P = 0.5 worse solutions are obtained than when N = 1 or N/P = 0.2. One reason is that fewer solutions are optimised because the initial solutions are more likely to be worse and more time is needed in the local optimisations. Fewer solutions are also optimised when a considerable execution time is spent on generating random
Table 2. Average results of the G2 instances for 180 seconds
N
N/P
it
sg
so
RTV
1
1
1
178.82
179
178.82
163.53
10 50 100 200 400
1 1 1 1 1
0.1 0.02 0.01 0.005 0.0025
202.33 212.00 212.18 207.39 195.45
2,030 10,650 21,300 41,600 78,400
202.33 212.00 212.18 207.39 195.45
161.72 161.17 157.85 158.99 159.65
10 50 100 200 400
2 10 20 40 80
0.2 0.2 0.2 0.2 0.2
99.31 20.04 10.05 5.03 2.52
1,000 1,050 1,100 1,200 1,200
198.62 200.42 200.90 201.08 201.30
161.93 160.54 160.41 161.51 161.72
10 50 100 200 400
5 25 50 100 200
0.5 0.5 0.5 0.5 0.5
37.97 7.64 3.83 1.92 1.00
380 400 400 400 400
189.87 190.99 191.59 192.22 193.59
162.87 161.99 161.37 164.20 162.28
Table 3. Averages of the number of times that the best solution is obtained from the ith best initial solution
Group   P    N    1st    2nd    3rd    4th    5th    6th    7th    8th    9th    10th
G1      10   2    1.40   1.28   *      *      *      *      *      *      *      *
G1      50   10   0.21   0.21   0.26   0.23   0.22   0.22   0.23   0.22   0.22   0.17
G1      10   5    0.43   0.39   0.51   0.57   0.4    *      *      *      *      *
G2      10   2    0.56   0.56   *      *      *      *      *      *      *      *
G2      50   10   0.11   0.09   0.13   0.11   0.11   0.14   0.08   0.12   0.11   0.10
G2      10   5    0.23   0.30   0.17   0.18   0.23   *      *      *      *      *
solutions (P = 400 and N = 1). Therefore, it is recommended to use a quite high value of P (100 ≤ P ≤ 200) and a value of N equal to 1 instead of using the multi-start algorithm proposed in [6] (i.e., P = N = 1). We have observed how many times the best initial solution of the N solutions to be optimised gives the best local optimum found by the algorithm during its running. This observation has been also done for the remaining N solutions to be optimised (i.e., the second, the third, . . . , the N th best initial solution). The results show that all initial solutions have similar probability to be optimised to the best solution found by the algorithm. Therefore, the quality of an initial solution is not a sign of the quality of the local optimum. Although the quality of a local optimum is independent of the quality of the initial solution, notice that it is still advisable to obtain good initial solutions because the time needed for the
Fig. 2. Average of the RTV values obtained during the execution time: (a) for G1 instances; (b) for G2 instances
local optimisation is lower, as it has been explained before. Table 3 shows some examples of the number of times, on average, that the best solution found by the algorithm has been obtained from the ith best initial solution. Finally, Fig. 2 shows how the averages of the RTV values of the best obtained solutions for the G1 and the G2 instances decrease over the execution time for the multi-start algorithm (with P = 100 and N = 1).
5
Final Conclusions
The RTVP occurs in diverse environments such as manufacturing, hard real-time systems, operating systems and network environments. The RTVP occurs whenever products, clients or jobs need to be sequenced so as to minimise variability in the time between the instants at which they receive the necessary resources. Since it is an NP-hard combinatorial optimisation problem, heuristic methods are needed for solving real-life instances. We propose a parametric multi-start algorithm for solving an NP-hard sequencing combinatorial optimisation problem such as the RTVP. Better solutions, on average, are obtained compared with the solutions of the multi-start algorithm proposed in [6]. The computational experiment also shows that the proposed algorithm does not seem to be very sensitive to the parameters. Thus, the multi-start technique is robust. Moreover, the multi-start algorithm is very easy to design and to implement. It is also shown that the best local optimum can be obtained from an initial solution independently of its fitness although, as is expected, the worse the initial solution is, the more time is spent on the local search. Our future research will focus on testing our proposed parametric multi-start algorithm on other combinatorial optimisation problems and using other local search methods.
Acknowledgment Supported by the Spanish Ministry of Education and Science under project DPI2007-61905; co-funded by the ERDF.
References 1. Anily, S., Glass, C.A., Hassin, R.: The scheduling of maintenance service. Discrete Applied Mathematics 82, 27–42 (1998) 2. Boender, C.G.E., Rinnooy, A.H.G., Stougie, L., Timmer, G.T.: A Stochastic Method for Global Optimization. Mathematical Programming 22, 125–140 (1982) 3. Corominas, A., Garc´ıa-Villoria, A., Pastor, R.: Solving the Response Time Variability Problem by means of Multi-start and GRASP metaheuristics. Frontiers in Artificial Intelligence and Applications 148, 128–137 (2008) 4. Corominas, A., Kubiak, W., Moreno, N.: Response Time Variability. Journal of Scheduling 10, 97–110 (2007) 5. Dong, L., Melhem, R., Mossel, D.: Time slot allocation for real-time messages with negotiable distance constraint requirements. In: Real-time Technology and Application Symposium, RTAS, Denver (1998) 6. Garc´ıa, A., Pastor, R., Corominas, A.: Solving the Response Time Variability Problem by means of metaheuristics. Artificial Intelligence Research and Development 146, 187–194 (2006) 7. Herrmann, J.W.: Fair Sequences using Aggregation and Stride Scheduling. Technical Report, University of Maryland, USA (2007) 8. Hoos, H., St¨ utzle, T.: Stochastic local research: foundations and applications. Morgan Kaufmann Publishers, San Francisco (2005) 9. Los, M., Lardinois, C.: Combinatorial Programming, Statistical Optimization and the Optimal Transportation Network Problem. Transportation Research 2, 89–124 (1982) 10. Maniezzo, V., Dorigo, M., Colorni, A.: Algodesk: An experimental comparison of eight evolutionary heuristics applied to the Quadratic Assignment Problem. European Journal of Operational Research 81, 188–204 (1995) 11. Mart´ı, R.: Multi-start methods. In: Glover, Kochenberger (eds.) Handbook of Metaheuristics, pp. 355–368. Kluwer Academic Publishers, Dordrecht (2003) 12. Miltenburg, J.: Level schedules for mixed-model assembly lines in just-in-time production systems. Management Science 35, 192–207 (1989) 13. Monden, Y.: Toyota Production Systems Industrial Engineering and Management. Press, Norcross (1983) 14. Waldspurger, C.A., Weihl, W.E.: Stride Schedulling: Deterministic ProportionalShare Resource Management. Technical Report MIT/LCS/TM-528, Massechusetts Institute of Technology, MIT Laboratory for Computer Science (1995)
Enhancing the Scalability of Metaheuristics by Cooperative Coevolution Ciprian Cr˘ aciun, Monica Nicoar˘ a, and Daniela Zaharie Department of Computer Science, West University of Timi¸soara, Romania
[email protected],
[email protected],
[email protected]
Abstract. The aim of this paper is to analyze the ability of cooperative coevolution to improve the scalability of population based metaheuristics. An extensive set of experiments on high dimensional optimization problems has been conducted in order to study the particularities and effectiveness of some elements involved in the design of cooperative coevolutionary algorithms: groupings of variables into components, choice of the context based on which the components are evaluated, length of evolution for each component. Scalability improvements have been obtained in the case of both analyzed metaheuristics: differential evolution and harmony search.
1
Introduction
Optimization of high dimensional functions is challenging and results obtained by population-based metaheuristics (e.g. evolutionary algorithms, particle swarm methods, differential evolution etc.) for small or medium size problems are not necessarily valid for high dimensional ones. Cooperative coevolution is a framework developed in the context of evolutionary algorithms in order to enhance their ability to solve complex problems. In the case of large scale problems the idea of cooperative coevolution is to decompose the problem into constituent subproblems and solve them by using cooperative evolutionary processes (e.g. multiple subpopulations, each one corresponding to a subproblem). Designing a cooperative coevolution variant of a metaheuristic raises several issues, the most important ones being that of identifying the components of the problem and that of designing a cooperation strategy (e.g. choosing the context in which to evaluate the components). Since the first cooperative coevolutionary genetic algorithm was proposed in [5], several cooperative approaches for function optimization have been developed [1,7]. In some recent works [10,11] a strategy based on a random grouping of variables, applied to a synchronous differential evolution algorithm, is proposed. The aim of this paper is to extend the studies in [10] by analyzing both synchronous and asynchronous coevolutionary strategies and the impact produced by the length of evolution of each component. The influence of the cooperative coevolution on the scalability has been tested for both Differential Evolution and Harmony Search, a recent population-based metaheuristic which has not been tested up to now for high dimensional problems.
The main elements of the cooperative coevolutionary approach are presented in Section 2. Section 3 describes the particularities of Differential Evolution and Harmony Search while Section 4 contains the numerical results. Some conclusions are presented in Section 5.
2
Cooperative Coevolution for Large Scale Optimization
Let us consider the problem of minimizing a function f : D = [a1 , b1 ] × . . . × [an , bn ] → R. Any evolutionary approach for solving such a problem is based on evolving a population of potential solutions, X = {x1 , . . . , xm } ⊂ D, for a given number of generations, G. As n is larger, the number of generations needed to approximate the optimum with a given accuracy is also larger. If the required G does not depend linearly on n then we face the scalability problem. A possible solution to the scalability issue is to decompose the problem in subproblems by dividing the n-dimensional vector corresponding to a potential solution in smaller size components and by co-evolving in a cooperative way these components [5]. There are two main questions which should be answered when designing a cooperative coevolutionary algorithm: (i) how to divide the population elements in smaller size components? (ii) how to coevolve these components? For each of these questions there are several possible answers which are shortly reviewed in the following. Defining the components. Dividing a n-dimensional vector in K ≤ n components is equivalent with defining a function c : {1, . . . , n} → {1, . . . , K} which assigns to each variable i its corresponding component, c(i). Thus for a vector x = (x1 , x2 , . . . , xn ) the k-th component can be defined as Ck (x) = {xj |c(j) = k}, i.e. the set of all variables xj having its index, j, in the pre-image of k through the function c. The original vector x can be obtained by merging all its components, i.e. x = C1 (x), . . . , CK (x). If X = {x1 , x2 , . . . , xm } denotes a population then Ck (X) = {Ck (x1 ), . . . , Ck (xm )} will denote the (sub)population corresponding to the k-th component. In the traditional cooperative coevolutionary algorithms [5] the number of components is equal to the number of variables (K = n) and the function c satisfies c(i) = i. Such an approach is rather inappropriate in the case of interrelated variables, when an appropriate decomposition should group in the same component the highly correlated variables. However, in practice, it is difficult to know which variables are significantly correlated, therefore a compromise solution is to use random groupings as it is proposed in [10]. The groupings used in [10] are constructed based on random permutations and are characterized by equally sized groups. The size of each group depends on the number of groups which should be preset. In order to avoid the choice of the number of groups the same authors recently proposed in [11] an adaptive variant which selects between several possible values for the group sizes (e.g. {5, 10, 25, 50, 100}) according to some selection probabilities (computed by using historical data on the performance of different group sizes). A simpler approach is that based on randomly choosing the number of components K ∈ {Kmin , . . . , Kmax } and on randomly assigning the variables to components. Thus the components do not
Algorithm 1. A generic synchronous cooperative coevolutionary algorithm

CCEvolveS(X)
1: Initialize and evaluate X
2: while goal for X not reached do
3:   Choose K
4:   Construct C1(X), . . . , CK(X)
5:   for k = 1, K do
6:     Ck(V) ← evolveS(Ck(X), X)
7:   end for
8:   Evaluate V
9:   X ← select(X, V)
10: end while
return X

evolveS(Ck(X), X)
1: Ck(W) ← Ck(X)
2: while goal for Ck(X) not reached do
3:   Ck(Y) ← generate(Ck(W))
4:   Y ← merge(Ck(Y), X)
5:   evaluate Y
6:   Ck(Y) ← select(Ck(Y), Ck(W))
7:   Ck(W) ← Ck(Y)
8: end while
return Ck(Y)

Algorithm 2. A generic asynchronous cooperative coevolutionary algorithm

CCEvolveA(X)
1: Initialize and evaluate X
2: while goal for X not reached do
3:   Choose K
4:   Construct C1(X), . . . , CK(X)
5:   for k = 1, K do
6:     while goal for Ck(X) not reached do
7:       for each Ck(x) ∈ Ck(X) do
8:         Ck(x) ← evolveA(Ck(x), Ck(X), X)
9:       end for
10:     end while
11:   end for
12: end while
return X

evolveA(Ck(x), Ck(X), X)
1: Ck(y) ← generate(Ck(X))
2: y ← merge(Ck(y), X)
3: evaluate y
4: Ck(y) ← select(Ck(x), Ck(y))
return Ck(y)
necessarily have exactly the same size, but only their average size is the same. The numerical tests we conducted illustrated the fact that the behavior of this simple variant is similar to that of the adaptive variant proposed in [11]. Coevolving the components. Two variants of coevolving the components, a synchronous and an asynchronous one, are presented in Algorithms 1 and 2, respectively. The main difference between these two variants is related to the moment when the context used in evaluating the components is updated. In the synchronous variant the global population X (which represents the evaluation context) is updated only after all components have been evolved while in the asynchronous case the global population is updated after evolving each component. Since the metaheuristic itself can be synchronous or asynchronous we combined these strategies in order to obtain a fully synchronous (Algorithm 1) and a fully asynchronous (Algorithm 2) variant. Numerical experiments have been conducted for both variants and some comparative results are presented in Section 4.
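A minimal sketch of the simple random decomposition just described, in which K is drawn at random and each variable index is assigned to a random component, so only the average component size is fixed; the helper name is ours.

```python
import random

def random_grouping(n, k_min, k_max):
    """Return a list of components, each a list of variable indices in {0, ..., n-1}."""
    K = random.randint(k_min, k_max)
    c = [random.randrange(K) for _ in range(n)]          # c[i] = component of variable i
    components = [[i for i in range(n) if c[i] == k] for k in range(K)]
    return [comp for comp in components if comp]         # drop accidentally empty groups
```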
The function evolveS(Ck (X), X) returns the population corresponding to component k obtained by applying a population-based metaheuristic to Ck (X) using the context X. The stopping condition of the while loop inside evolveS is related to the number of generations allowed for each component. On the other hand the function evolveA(Ck (x),Ck (X), X) returns just one element corresponding to component k. The new element is generated starting from the current element and other elements selected from the population Ck (X) according to the used metaheuristic. The evaluation of the generated element is made using as context the current global population, X. When implementing the coevolutionary process, besides the choice of a metaheuristic there are two other issues to be solved: (i) how to choose collaborators from the context when evaluating Ck (X)? (ii) how many resources (e.g. number of generations or number of function evaluations) should be allocated to the evolution of each component? The first question has been addressed in several previous works [5,9] and the typical variants to choose the collaborators are: the best element of the population, the current element in the population and a random one. Concerning the second issue some authors use just one generation for each component [1,5] while others suggest to allocate a given number of function evaluations for each component [10]. However there is no systematic study regarding the influence of the evolution length for each component on the behavior of the coevolutionary approach. Therefore one of the aims of the numerical experiments we conducted is to analyze the influence of the evolution length per component on the behavior of cooperative coevolution.
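A minimal sketch of the context-based evaluation shared by evolveS and evolveA, using the "current element" collaborator variant; the helper is ours and assumes the population X is a NumPy array.

```python
import numpy as np

def evaluate_in_context(candidate_part, component_indices, X, row, f):
    """Merge a candidate for one component into the collaborator X[row] and evaluate f."""
    y = X[row].copy()                                   # collaborator: current element of X
    y[np.asarray(component_indices)] = candidate_part
    return f(y), y
```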
3
Cooperative Coevolutionary Differential Evolution and Harmony Search
In order to analyze the impact of the cooperative coevolution on the metaheuristics scalability we implemented cooperative coevolutionary variants for two metaheuristics: Differential Evolution (DE) [6] and Harmony Search (HS) [3]. The reason for choosing the Differential Evolution technique is the existence of some recent results on coevolutionary DE [10,11], while the reason for choosing Harmony Search is the fact that all reported results concerning HS are for small size problems. Let us shortly describe DE, HS, and their cooperative coevolutionary variants. In the case of the DE technique, for each population element, xi, a so-called trial element yi = (yi^1, . . . , yi^n) is constructed according to (1). The offspring of xi is the better of xi and yi.

yi^j = xr1^j + Fi · (xr2^j − xr3^j)   if U < CRi,
yi^j = xi^j                            otherwise,        j = 1, . . . , n.        (1)
which are randomly initialized and during the evolutionary process are reset, with a small probability, to other random values). Thus the only parameter to be specified is the population size m. There are two main variants for designing the DE evolutionary process: a synchronous and an asynchronous one. The synchronous DE (denoted as sDE in the experiments) replaces the population elements with their offspring only after the offspring have been generated for the entire population. The asynchronous DE (denoted as aDE) corresponds to the steady state approach in evolutionary algorithms and replaces the current element with its offspring just after the construction of the offspring. For each DE variant the coevolutionary variant is obtained by using the corresponding DE generation rule in Step 3 of evolveS and in Step 1 of evolveA, respectively. Thus one obtains two algorithms: a synchronous cooperative coevolutionary DE (denoted as sCCDE) and the asynchronous version (denoted as aCCDE). Harmony Search (HS) was developed by Z. W. Geem et al. in 2001 [3], starting from an interesting analogy between the music improvisation process and the search for the optimum. Based on this analogy, HS constructs new potential solutions by combining in a stochastic way the elements of the current population and by randomly perturbing them. The rule to construct a new element yi is given in (2). In the classical variant this new element replaces the worst element in the population if it is better than it. Thus HS follows an asynchronous strategy, therefore the coevolutionary HS (denoted as CCHS) should also be based on the asynchronous approach. Due to the particularities of the coevolutionary approach the new element replaces not the worst but the current one in the population (the ith element).

yi^j = xr^j + bw · U3   if U1 < p1 and U2 < p2,
yi^j = xr^j             if U1 < p1 and U2 ≥ p2,
yi^j = rand(aj, bj)     otherwise,        j = 1, . . . , n.        (2)

In (2), r is a random element of {1, . . . , m}, U1 and U2 denote random values uniformly generated in (0, 1) for each j, U3 is randomly generated in (−1, 1), p1 ∈ (0, 1) denotes the probability of taking a variable from the current population (HMCR in [3]), p2 ∈ (0, 1) denotes the probability to perturb the variable (PAR in [3]) and rand(aj, bj) is a random value in the domain corresponding to the jth variable. The values of these parameters used in our experiments are p1 = 0.99, p2 = 0.75. The parameter bw controls the size of the perturbation; an effective way of choosing bw based on the variance of the current population, i.e. bw = var(X^j), where var(X^j) is the variance of the jth variable over the current population X, has recently been proposed in [4]. Despite the fact that HS has been successfully applied to various optimization problems, its behavior on large scale ones has not been studied up to now. The first tests on problems having a size larger than 50 suggested that HS does not scale well with the problem size. Therefore we designed a cooperative coevolutionary variant (CCHS) based on using in Algorithm 2 the generation rule specified in (2).
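The two generation rules can be sketched as follows; X is the m-by-n population array, the parameter names (F, CR, p1, p2, bw) follow the text, while the helper functions, the vectorized crossover and the choice of a single random harmony r per improvisation are our own reading.

```python
import numpy as np

rng = np.random.default_rng()

def de_trial(X, i, F, CR):
    """Trial element for x_i according to rule (1)."""
    m, n = X.shape
    r1, r2, r3 = rng.choice(m, size=3, replace=False)   # three distinct random indices
    y = X[i].copy()
    mask = rng.random(n) < CR
    y[mask] = X[r1, mask] + F * (X[r2, mask] - X[r3, mask])
    return y

def hs_improvise(X, a, b, p1=0.99, p2=0.75):
    """New element according to rule (2); a, b are the lower/upper bound vectors."""
    m, n = X.shape
    bw = np.var(X, axis=0)              # bandwidth from the population variance, as in the text
    r = rng.integers(m)                 # one reading: a single random harmony r
    y = np.empty(n)
    for j in range(n):
        if rng.random() < p1:
            xr = X[r, j]
            y[j] = xr + bw[j] * rng.uniform(-1, 1) if rng.random() < p2 else xr
        else:
            y[j] = rng.uniform(a[j], b[j])
    return y
```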
4
Numerical Results
For both classical and coevolutionary DE and HS variants we analyzed two aspects: (i) the amount of resources needed to reach a given performance, i.e. the number of function evaluations, nfe, needed to approximate the optimal value with accuracy ε; (ii) the performance reached by using a given amount of resources, i.e. the best value, f*, obtained after a given number of function evaluations, nfe*. In all experiments the accuracy is ε = 10^-10, the maximal number of function evaluations is nfe* = 5000 · n and the reported values are obtained as averages over 30 independent runs. As benchmark we used a subset, consisting of two non-separable functions (Ackley and Griewank) and a separable one (Rastrigin), of the test suite provided in [8]. The experimental design consisted in combining, for each coevolutionary variant (sCCDE, aCCDE, and CCHS), two values for the number of components (K = 10 and K = 20) with six possible values for the length of evolution per component (expressed as a number of generations: Gk ∈ {1, 5, 10, 50, 100, n/K}). In a set of preliminary tests we analyzed several variants of choosing the collaborators when evaluating a component: the current element in the population, the best one, and a randomly selected one. Since the results of these tests suggested that the variant using as Table 1. Influence of the method, the number of components (K) and length of evolution per component (Gk) on the performance (f*) reached after 5000 · n function evaluations
Method   Ackley (K, Gk, f*)                Griewank (K, Gk, f*)              Rastrigin (K, Gk, f*)
n = 100
sDE      -   -    (8.8 ± 2.2)·10^-14       -   -    (3.0 ± 0.7)·10^-14       -   -    (3.2 ± 17)·10^-2
aDE      -   -    (9.9 ± 2.0)·10^-14       -   -    (3.1 ± 8.5)·10^-14       -   -    (3.2 ± 17)·10^-2
sCCDE    2   50   (8.8 ± 1.3)·10^-14       2   50   (2.9 ± 0.5)·10^-14       2   50   0 ± 0
aCCDE    2   50   (8.5 ± 1)·10^-14         2   50   2.8·10^-14 ± 0           2   50   0 ± 0
HS       -   -    (8.7 ± 1.1)·10^-5        -   -    (3.3 ± 2.7)·10^-8        -   -    150 ± 4.5
CCHS     10  1    2.7·10^-13 ± 0           10  1    (5.9 ± 0.8)·10^-14       10  1    (2.0 ± 2.7)·10^-14
n = 500
sDE      -   -    (1.0 ± 3)·10^-11         -   -    (2.4 ± 1.3)·10^-3        -   -    1.59 ± 1.50
aDE      -   -    (4.3 ± 6.5)·10^-12       -   -    (2.3 ± 0.1)·10^-2        -   -    1.8 ± 1.3
sCCDE    5   100  (4.6 ± 0.2)·10^-13       5   100  (1.6 ± 0.08)·10^-13      10  50   (1.3 ± 2.4)·10^-14
aCCDE    5   100  (4.8 ± 0.3)·10^-13       5   100  (1.7 ± 0.1)·10^-13       10  50   (9.4 ± 21)·10^-15
HS       -   -    16 ± 0.07                -   -    (1.9 ± 0.04)·10^3        -   -    (1.6 ± 0.01)·10^3
CCHS     20  1    (9.1 ± 0.1)·10^-13       20  1    (3.8 ± 1.8)·10^-13       20  5    (5.1 ± 1.7)·10^-14
n = 1000
sDE      -   -    (4.0 ± 4.6)·10^-1        -   -    (4.1 ± 4.4)·10^-1        -   -    34.5 ± 14.7
aDE      -   -    (9.3 ± 3.7)·10^-1        -   -    (3.8 ± 5.5)·10^-1        -   -    47.6 ± 13.6
sCCDE    10  100  (1.0 ± 0.04)·10^-12      10  100  (3.6 ± 0.1)·10^-13       20  5    (4.5 ± 2.2)·10^-14
aCCDE    10  100  (1.0 ± 0.05)·10^-12      10  100  (3.7 ± 0.1)·10^-13       20  50   (4.5 ± 2.2)·10^-14
HS       -   -    19 ± 0.03                -   -    (8.6 ± 0.09)·10^3        -   -    (4.6 ± 0.04)·10^3
CCHS     20  1    (2.5 ± 0.04)·10^-12      20  1    (5.8 ± 0.1)·10^-13       20  1    (2.0 ± 0.6)·10^-13
Table 2. Influence of the method on the success ratio (SR), the scalability factor (SF = nfe(n)/nfe(100)) and the efficiency expressed as the number of function evaluations needed to reach the accuracy ε = 10^-10 (nfe)

Method   Ackley (SR, SF, nfe)              Griewank (SR, SF, nfe)            Rastrigin (SR, SF, nfe)
n = 100
sDE      1     1    238766 ± 3399          1     1    141766 ± 3726          0.96  1    153548 ± 4533
aDE      1     1    221766 ± 4533          1     1    132433 ± 4229          0.96  1    144582 ± 4973
sCCDE    1     1    322963 ± 1813          1     1    193683 ± 6432          1     1    200080 ± 4040
aCCDE    1     1    297433 ± 4422          1     1    179100 ± 7000          1     1    186100 ± 4898
HS       0     -    -                      0     -    -                      0     -    -
CCHS     1     1    306433 ± 4818          1     1    182433 ± 6155          1     1    218433 ± 4533
n = 500
sDE      0.96  5.9  1427786 ± 6E5          0.96  6.4  910544 ± 4E4           0.36  6.1  9456540 ± 6E4
aDE      1     6.0  1350200 ± 4E4          0.9   6.7  896496 ± 4E4           0.20  6.5  9502000 ± 5E4
sCCDE    1     3.8  1257610 ± 1E4          1     4.1  798360 ± 1E4           1     5.2  1050530 ± 8E3
aCCDE    1     4.0  1200100 ± 0            1     4.1  751766 ± 2E4           1     5.3  986766 ± 2E4
HS       0     -    -                      0     -    -                      0     -    -
CCHS     1     5.2  1600100 ± 0            1     5.4  1001767 ± 8975         1     7.1  1551767 ± 1E4
n = 1000
sDE      0.1   18   4400200 ± 3E5          0.16  18   2560200 ± 2E5          0     -    -
aDE      0.06  17   3900200 ± 3E5          0.46  18   2400200 ± 2E5          0     -    -
sCCDE    1     7.9  2576007 ± 4E4          1     8.3  1615047 ± 3E4          1     10   2016733 ± 1E4
aCCDE    1     8.0  2400100 ± 0            1     8.4  1513433 ± 3E4          1     10   2000100 ± 0
HS       0     -    -                      0     -    -                      0     -    -
CCHS     1     14   4300100 ± 0            1     15   2800100 ± 0            1     20   4396767 ± 6E4
collaborator the current element behaves the best, this is the variant we used in all experiments presented here. The population size was set to m = 100 for all coevolutionary variants. For non-coevolutionary variants (sDE, aDE, and HS) m was set to 100 when n < 500, while for n ≥ 500 it was set to m = 200. Table 1 contains for each method the results corresponding to the pair (K, Gk) which led to the smallest averaged f*. The results illustrate that while for DE a small number of components and a large evolution length are more beneficial, in the case of HS the situation is reversed (a larger number of components and a short evolution per component). Table 2 shows that all coevolutionary variants have good reliability (the success ratio is 1) and the scalability factor, which ideally should be close to n/100, is significantly improved, especially in the case of DE. Concerning the comparison between the synchronous and asynchronous variants, by applying a t-test with a significance level of 0.05, one obtained that the number of function evaluations needed to reach a given accuracy (nfe) is smaller in the case of aCCDE than in the case of sCCDE. On the other hand, with respect to the f* values, the synchronous and asynchronous variants behave almost similarly (only in three cases did sCCDE produce results significantly better than aCCDE).
5
Conclusions
Several cooperative coevolutionary variants obtained by combining different ways of updating the contextual population, different numbers of components, different lengths of the evolution/component have been numerically analyzed. The obtained results illustrate the fact that the asynchronous variant behaves slightly better than the synchronous one with respect to the number of function evaluations and that the adequate length of the evolution/component depends on the method (it should be small in the case of HS and can be larger in the case of DE). Concerning the scalability of HS one can conclude that it can be significantly improved by coevolution. Acknowledgment. This work is supported by Romanian project PNCD II 11028/ 14.09.2007 (NatComp).
References 1. van den Bergh, F., Engelbrecht, A.P.: A Cooperative Approach to Particle Swarm Optimization. IEEE Transactions on Evolutionary Computation 8(3), 225–239 (2004) ˇ 2. Brest, J., Boˇskoviˇc, B., Greiner, S., Zurner, V., Mauˇcec, M.S.: Performance comparison of self-adaptive and adaptive differential evolution algorithms. Soft Computing 11(7), 617–629 (2007) 3. Geem, Z.W., Kim, J., Loganathan, G.: A New Heuristic Optimization Algorithm: Harmony Search. Simulation 76(2), 60–68 (2001) 4. Mukhopadhyay, A., Roy, A., Das, S., Das, S., Abraham, A.: Population variance and explorative power of Harmony Search: An analysis. In: Proc. ICDIM 2008, pp. 775–781 (2008) 5. Potter, M., De Jong, K.: A cooperative coevolutionary approach to function optimization. In: Davidor, Y., M¨ anner, R., Schwefel, H.-P. (eds.) PPSN 1994. LNCS, vol. 866, pp. 249–257. Springer, Heidelberg (1994) 6. Price, K.V., Storn, R., Lampinen, J.: Differential Evolution. A Practical Approach to Global Optimization. Springer, Heidelberg (2005) 7. Shi, Y., Teng, H., Li, Z.: Cooperative Co-evolutionary Differential Evolution for Function Optimization. In: Wang, L., Chen, K., S. Ong, Y. (eds.) ICNC 2005. LNCS, vol. 3611, pp. 1080–1088. Springer, Heidelberg (2005) 8. Tang, K., Yao, X., Suganthan, P.N., MacNish, C., Chen, Y.P., Chen, C.M., Yang, Z.: Benchmark Functions for the CEC 2008 Special Session and Competition on Large Scale Global Optimization, Technical Report, USTC, China (2007), http://nical.ustc.edu.cn/cec08ss.php 9. Wiegand, R.P., Liles, W.C., De Jong, K.A.: An Empirical Analysis of Collaboration Methods in Cooperative Coevolutionary Algorithms. In: Proc. of Genetic and Evolutionary Computation Conference, pp. 1235–1242. Morgan Kaufmann Publ., San Francisco (2001) 10. Yang, Z., Tang, K., Yao, X.: Large scale evolutionary optimization using cooperative coevolution. Information Sciences 178, 2985–2999 (2008) 11. Yang, Z., Tang, K., Yao, X.: Multilevel Cooperative Coevolution for Large Scale Optimization. In: Proc. of the 2008 IEEE Congress on Evolutionary Computation, pp. 1663–1670. IEEE Press, Los Alamitos (2008)
Hybrid ACO Algorithm for the GPS Surveying Problem
Stefka Fidanova¹, Enrique Alba², and Guillermo Molina²
¹ IPP – Bulgarian Academy of Sciences, Acad. G. Bonchev, bl.25A, 1113 Sofia, Bulgaria
[email protected]
² Universidad de Málaga, E.T.S.I. Informática, Grupo GISUM (NEO), Málaga, España
{eat,guillermo}@lcc.uma.es
Abstract. Ant Colony Optimization (ACO) has been used successfully to solve hard combinatorial optimization problems. This metaheuristic method is inspired by the foraging behavior of ants, which manage to establish the shortest routes from their nest to feeding sources and back. In this paper, we propose a hybrid ACO approach to solve the Global Positioning System (GPS) surveying problem. In designing a GPS surveying network, a given set of earth points must be observed consecutively (schedule). The cost of the schedule is the sum of the time needed to go from one point to another. The problem is to search for the best order in which this observation is executed. Minimizing the cost of this schedule is the goal of this work. Our results outperform those achieved by the best-so-far algorithms in the literature, and represent a new state of the art in this problem.
1
Introduction
Satellite navigation systems have an impact on geoscience, in particular on surveying work, by quickly and effectively determining positions and changes in positions of networks. The most widely known space systems are: the American NAVSTAR global positioning system (also known as GPS), the Russian GLObal NAvigation Satellite System (GLONASS), currently only partially functional, and the forthcoming European satellite navigation system (GALILEO), expected for 2010. GPS satellites continuously transmit radio signals to the Earth while orbiting it. A receiver, with unknown position on Earth, has to detect and convert the signals received from all of the satellites into useful measurements. These measurements would allow a user to compute a three-dimensional coordinate position: the location of the receiver [7]. Solving this problem to optimality requires a very high computational time. Therefore, metaheuristic methods are used to provide near-optimal solutions for large networks within an acceptable amount of computational effort [4,6,8]. In this paper, we implement a Hybrid Ant Colony Optimization algorithm. We combine the ACO with various local search procedures to improve the achieved
results. Our aim is to find the best parameter settings and the most appropriate local search procedure for this complex problem. We compare our results with the results achieved by the local search used in [8]. The ACO algorithm has a parallel nature, thus the computational time for the GPS problem can be decreased by running the algorithm on supercomputers. The rest of the paper is organized as follows. The general framework for a GPS surveying network problem as a combinatorial optimization problem is described in Section 2. Then, the ACO algorithm and its hybridization with different local search strategies are explained and applied in Section 3. The numerical results are presented and discussed in Section 4. The paper ends with conclusions and directions for future research.
2
Problem Description
The GPS network can be defined as a set of stations (a1, a2, . . . , an), which are co-ordinated by placing receivers (X1, X2, . . .) on them to determine sessions (a1a2, a1a3, a2a3, . . .) among them. The GPS surveying problem consists in searching for the best order in which these sessions can be organized to give the best schedule. Every session must start at the point where the previous one finished. Thus, the schedule can be defined as a sequence of sessions to be observed consecutively. The solution is represented by a linear graph with weighted edges. The nodes represent the stations, the edges represent the sessions and the weights of the edges represent the moving costs. The objective function of the problem is the cost of the solution, which is the sum of the costs (time) to move from one point to another, C(V) = Σ C(ai, aj), where the sum is taken over the sessions aiaj in the solution V. For example, if the number of points (stations) is 4, a possible solution is V = (a1, a3, a2, a4) and it can be represented by the linear graph a1 → a3 → a2 → a4. The moving costs are as follows: C(a1, a3), C(a3, a2), C(a2, a4). Thus the cost of the solution is C(V) = C(a1, a3) + C(a3, a2) + C(a2, a4). The initial data is a cost matrix, which represents the cost of moving a receiver from one point to another. The cost could be evaluated purely upon the time or purely upon the distance; for more details see Dare [4]. This problem is a relaxation of the Travelling Salesman Problem (TSP) since there is no requirement that one ends where one started. Thus the strategies to solve the GPS surveying problem can be different from those for the TSP.
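A minimal sketch of the objective, assuming the cost matrix is indexed by station; the function name is ours.

```python
def schedule_cost(schedule, cost):
    """Cost C(V) of a schedule: sum of consecutive session-to-session moving costs."""
    return sum(cost[a][b] for a, b in zip(schedule, schedule[1:]))

# e.g. for V = (a1, a3, a2, a4):
#   schedule_cost([1, 3, 2, 4], cost) = cost[1][3] + cost[3][2] + cost[2][4]
```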
3
Hybrid ACO Algorithm
Real ants foraging for food lay down quantities of pheromone (chemical cues) marking the path that they follow. An isolated ant moves essentially at random but an ant encountering a previously laid pheromone will detect it and decide to follow it with high probability and thereby reinforce it with a further quantity of pheromone. The repetition of the above mechanism represents the auto-catalytic behavior of a real ant colony where the more the ants follow a trail, the more attractive that trail becomes [2].
The ACO algorithm uses a colony of artificial ants that behave as cooperative agents in a mathematical space where they are allowed to search and reinforce pathways (solutions) in order to find the optimal ones [3]. The problem is represented by a graph and the ants walk on the graph to construct solutions. The solutions are represented by paths in the graph. After the initialization of the pheromone trails, the ants construct feasible solutions, starting from random nodes, and then the pheromone trails are updated. At each step the ants compute a set of feasible moves and select the best one (according to some probabilistic rules) to continue the rest of the tour. The structure of the ACO algorithm is shown by the pseudocode below. The transition probability pij, to choose the node j when the current node is i, is based on the heuristic information ηij and the pheromone trail level τij of the move, where i, j = 1, . . . , n:

pij = τij^α ηij^β / Σ_{k∈Unused} τik^α ηik^β        (1)
The higher the value of the pheromone and the heuristic information, the more profitable it is to select this move and resume the search. For heuristic information we use one over the moving cost from one point to another. Thus the sessions with lower cost are more desirable. In the beginning, the initial pheromone level is set to a small positive constant value τ0; later, the ants update this value after completing the construction stage. ACO algorithms adopt different criteria to update the pheromone level.

Hybrid Ant Colony Optimization
  Initialize number of ants;
  Initialize the ACO parameters;
  while not end-condition do
    for k = 0 to number of ants
      ant k starts from random node;
      while solution is not constructed do
        ant k selects higher probability node;
      end while
    end for
    Local search procedure;
    Update-pheromone-trails;
  end while

In our implementation we use MAX-MIN Ant System (MMAS) [10], which is one of the best performing ant algorithms. This approach uses a fixed upper bound τmax and a lower bound τmin of the pheromone trails. Thus the accumulation of a large amount of pheromone on part of the possible movements, and the repetition of the same solutions, is partially prevented. The main features of MMAS are:
– Intensification of the search process. This can be achieved by allowing one ant, the best one, to add pheromone after each iteration.
– Diversification of the search process. After the first iteration the pheromone trails are reinitialized to τmax. In the subsequent iterations only the movements that belong to the best solution receive pheromone, while the other pheromone values are only evaporated. The main purpose of using only one solution is that solution elements which frequently occur in the best found solutions get a large reinforcement. The pheromone trail update rule is:

τij ← ρ τij + Δτij,
Δτij = 1/C(Vbest) if (i, j) belongs to the best solution, and Δτij = 0 otherwise,

where Vbest is the iteration-best solution and i, j = 1, . . . , n. To avoid stagnation of the search, the range of possible pheromone values on each movement is limited to an interval [τmin, τmax]. τmax is an asymptotic maximum of τij, with τmax = 1/((1 − ρ)C(V∗)) and τmin = 0.087 τmax [10], where V∗ is the optimal solution; since it is unknown, we use Vbest instead.

We hybridize the MMAS algorithm by using Local Search (LS) procedures, and thus try to improve the achieved costs. The main concept of LS is searching the local neighborhood of the current solution [9]. The LS procedure perturbs a given solution to generate different neighbors using a move generation mechanism. In general, neighborhoods for large-size problems can be costly to search. Therefore, LS attempts to improve a current schedule V of a GPS network by a small series of local improvements. A move generation is a transition from a schedule V to another one V′ ∈ I(V) in one step (iteration). The returned schedule V′ may not be optimal, but it is the best schedule in its local neighborhood I(V). A locally optimal schedule is a schedule with the locally minimal cost value. In this paper several local search procedures L(k, l) are applied, where k is the number of generated neighbor solutions and l is the index of the perturbation method used. We denote by L(0, −) the MMAS algorithm without LS. LS procedure 1 is from [8]; in the next section we compare the results achieved by our LS procedures with it. The LS procedures we prepared are as follows (a code sketch of the edge-deletion moves is given after the list):
1. Nodes Sequential Swaps: for i = 1 to n − 1; for j = i + 1 to n; swap ai and aj; [8]
2. Nodes Random Swaps: two nodes are chosen randomly and swapped;
3. Randomly Delete an Edge: let the current solution be (a1, a2, . . . , ai, ai+1, . . . , an). The edge (i, i+1) is randomly chosen and deleted. The new solution is (ai+1, . . . , an, a1, . . . , ai);
4. Greedy Delete an Edge: the longest (most expensive) edge is deleted. The new solution is constructed as in the previous case;
5. Randomly Delete 2 Edges: let the current solution be (a1, a2, . . . , ai, ai+1, . . . , aj, aj+1, . . . , an). The edges (i, i + 1) and (j, j + 1) are randomly chosen and deleted. The new solutions are (ai+1, . . . , aj, a1, . . . , ai, aj+1, . . . , an),
(aj+1, . . . , an, ai+1, . . . , aj, a1, . . . , ai), and (a1, . . . , ai, aj+1, . . . , an, ai+1, . . . , aj);
6. Greedy Delete 2 Edges: the two longest edges are deleted. The new solutions are prepared as in the previous case.
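As an illustration of LS procedures 3 and 5, the following Python sketch (ours; the solution is represented as a list of station indices) generates the neighbor solutions obtained by deleting one or two randomly chosen edges, exactly as enumerated above.

import random

def delete_one_edge(sol):
    # LS procedure 3: cut at a random edge (i, i+1) and rotate,
    # giving (a_{i+1}, ..., a_n, a_1, ..., a_i)
    i = random.randrange(len(sol) - 1)
    return sol[i + 1:] + sol[:i + 1]

def delete_two_edges(sol):
    # LS procedure 5: cut at two random edges (i, i+1) and (j, j+1), i < j,
    # and return the three recombinations of the resulting segments a, b, c
    i, j = sorted(random.sample(range(len(sol) - 1), 2))
    a, b, c = sol[:i + 1], sol[i + 1:j + 1], sol[j + 1:]
    return [b + a + c, c + b + a, a + c + b]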
4 Experimental Results
In this section we analyze the experimental results obtained using the Hybrid MMAS algorithm described in the previous section. As test problems, we use real data from the Malta and Seychelles GPS networks. The Malta GPS network is composed of 38 sessions and the Seychelles GPS network is composed of 71 sessions. We also use six larger test problems, taken from http://www.informatik.uni-heidelberg.de/groups/comopt/software/TSLIB95/ATSP.html. These test problems range from 100 to 443 sessions. For every experiment, the results are obtained by performing 30 independent runs and then averaging the obtained fitness values in order to ensure statistical confidence in the observed differences. Analysis of the data with the ANOVA/Kruskal-Wallis test has been used to obtain a 95% level of statistical confidence in the results.

Parameter settings play a crucial role in algorithm behavior. Therefore we first investigate the transition probability parameters α and β. When the parameter α is large, the pheromone value has a strong influence on the choice the ants make. When the value of β is large, the transition probability is highly driven by the heuristic information and the algorithm operates in a greedy way. In Table 1 we show the costs achieved for every test problem. The best values are highlighted using boldface. The values of the parameters (α, β) are taken from the set {(2,1), (1,1), (1,2), (1,3), (1,4)}. Analyzing the results, we conclude that the MMAS algorithm using (α, β) = (1,2) outperforms the other configurations.

Table 1. Influence of the parameters α and β, L(0, −)

Tests        sessions   (2,1)    (1,1)    (1,2)    (1,3)    (1,4)
Malta              38     924      924      920      920      920
Seychelles         71     938      935      918      923      919
rro124            100   42124    41980    41454    41471    41457
ftv170            170    3581     3467     3417     3438     3429
rgb323            323    1677     1660     1670     1676     1665
rgb358            358    1692     1679     1683     1688     1682
rgb403            403    3428     3426     3443     3447     3452
rgb443            443    3776     3791     3754     3787     3792
Table 2. Influence of the evaporation parameter ρ, L(0, −)

Tests        sessions     0.1      0.2      0.5      0.8      0.9
Malta              38     924      924      923      902      902
Seychelles         71     921      921      918      918      919
rro124            100   42183    41981    41454    41485    41276
ftv170            170    3578     3460     3417     3340     3356
rgb323            323    1679     1671     1670     1665     1659
rgb358            358    1697     1691     1683     1682     1679
rgb403            403    3443     3452     3443     3438     3427
rgb443            443    3782     3787     3754     3778     3750
The minimal costs are obtained for (α, β) = (1, 2) in most of the cases, except rgb323 and rgb403. For these two test problems the configuration with α = β achieves the best costs. For Malta and rgb358 there are no significant differences between (α, β) = (1, 1) and (α, β) = (1, 2). Therefore we decide to continue the tests using (α, β) = (1, 2). The other parameters are set as follows: τ0 = 0.5, ρ = 0.5, the number of ants is 10, and the number of iterations is equal to the number of sessions of the tested problem. The value of the parameter τ0 is not important, because after the first iteration (as mentioned in the previous section) all pheromone values are set to τmax. We can use a small number of ants because of the random start of the ants in every iteration, and thus the number of calculations decreases. We decide to fix the number of ants at 10.

The next parameter we determine is the evaporation rate ρ. In Table 2 we show the costs achieved for all tested problems when the parameter ρ takes values in {0.1, 0.2, 0.5, 0.8, 0.9}. The best values are highlighted using boldface. Comparing the results, we conclude that the MMAS algorithm using ρ = 0.9 outperforms the other configurations. The minimal costs are obtained with ρ = 0.9 in five of the eight tests. In the other tests there are no significant differences between ρ = 0.8 and ρ = 0.9. Therefore we decide to continue our research using ρ = 0.9. The other parameters used in these tests take the same values as in the previous case.

We include in this study various kinds of local search procedures to improve the algorithm performance. To keep the running time low, the neighbor set consists of as many solutions as the number of sessions. The number of iterations is set equal to the size of the problem. Table 3 shows the costs achieved for all tested problems. The best values are highlighted using boldface. After analysis of the results, we conclude that the LS procedure L(n, 5) outperforms the others. It obtains the best results in most of the tests, except ftv170. In this test problem there are no significant differences between the costs achieved by L(n, 3) and L(n, 5). We observe that there is no relevant difference between the costs achieved by L(n, 1), L(n, 2), and L(0, −).
Table 3. Hybrid MMAS algorithms, n is the size of the problem

Tests        sessions  L(n,1)   L(n,2)   L(n,3)   L(n,4)   L(n,5)   L(n,6)   L(0,–)
Malta              38     902      902      895      895      872      902      902
Seychelles         71     920      920      915      915      851      893      919
rro124            100   41276    41276    41004    41004    39971    41281    41276
ftv170            170    3356     3356     3229     3290     3266     3320     3356
rgb323            323    1661     1661     1691     1666     1378     1423     1659
rgb358            358    1680     1680     1689     1702     1477     1549     1679
rgb403            403    3428     3428     3401     3401     2408     2710     3427
rgb443            443    3751     3751     3838     3838     2631     3349     3750
We conclude that L(n, 1) and L(n, 2) do not improve the costs achieved by MMAS.
5 Conclusions and Future Work
We address in this paper the GPS surveying problem. Instances containing from 38 to 443 sessions have been solved using an MMAS algorithm, both in its canonical form and hybridized with several configurations of a Local Search (LS) procedure. For each case, the best parameter settings are investigated. In most of the test cases, the MMAS+LS hybrid has shown improvements over the canonical MMAS. In particular, the best results have been obtained using the LS procedure L(n, 5), where n is equal to the number of sessions. To keep the running time low, the neighbor set of the LS consists of as many solutions as the number of sessions, and the number of iterations is equal to the size of the problem. In general, using larger neighborhoods and more iterations we can achieve better costs. We can state that the proposed algorithm obtains promising results, and that the hybridization with an appropriate LS produces significant improvements of the results. Ant algorithms, including MMAS, are parallel in nature, with a large transfer of data after every iteration, so they are very suitable for parallel implementation on massively parallel architectures and supercomputers. In future work, other metaheuristic algorithms will be developed, applied, and compared. Our aim is to obtain a reliable sample of the results that can be expected for this problem when using different kinds of metaheuristics, and to establish a state of the art for this problem.
Acknowledgments

This work has been partially supported by the Bulgarian NSF under contract DO 02–115. Guillermo Molina is supported by grant AP2005-0014 from the Spanish government.
References
1. Beckers, R., Deneubourg, J.L., Goss, S.: Trail and U-turns in the Selection of the Shortest Path by the Ants. J. of Theoretical Biology 159, 397–415 (1992)
2. Bonabeau, E., Dorigo, M., Theraulaz, G.: Swarm Intelligence: From Natural to Artificial Systems. Oxford University Press, New York (1999)
3. Corne, D., Dorigo, M., Glover, F. (eds.): New Ideas in Optimization. McGraw Hill, London (1999)
4. Dare, P.J., Saleh, H.A.: GPS Network Design: Logistics Solution Using Optimal and Near-Optimal Methods. J. of Geodesy 74, 467–478 (2000)
5. Dorigo, M., Gambardella, L.M.: Ant Colony System: A Cooperative Learning Approach to the Traveling Salesman Problem. IEEE Transactions on Evolutionary Computation 1, 53–66 (1997)
6. Fidanova, S.: An Heuristic Method for GPS Surveying Problem. In: Shi, Y., van Albada, G.D., Dongarra, J., Sloot, P.M.A. (eds.) ICCS 2007. LNCS, vol. 4490, pp. 1084–1090. Springer, Heidelberg (2007)
7. Leick, A.: GPS Satellite Surveying, 2nd edn. Wiley, Chichester (1995)
8. Saleh, H.A., Dare, P.: Effective Heuristics for the GPS Survey Network of Malta: Simulated Annealing and Tabu Search Techniques. Journal of Heuristics 7(6), 533–549 (2001)
9. Schaffer, A.A., Yannakakis, M.: Simple Local Search Problems that are Hard to Solve. Society for Industrial and Applied Mathematics Journal on Computing 20, 56–87 (1991)
10. Stutzle, T., Hoos, H.H.: MAX-MIN Ant System. In: Dorigo, M., Stutzle, T., Di Caro, G. (eds.) Future Generation Computer Systems, vol. 16, pp. 889–914 (2000)
Generalized Nets as Tools for Modeling of the Ant Colony Optimization Algorithms

Stefka Fidanova (1) and Krassimir Atanassov (2)

(1) IPP – Bulgarian Academy of Sciences, Acad. G. Bonchev, bl. 25A, 1113 Sofia, Bulgaria
[email protected]
(2) CLBME – Bulgarian Academy of Sciences, Acad. G. Bonchev, bl. 105, 1113 Sofia, Bulgaria
[email protected]
Abstract. Ant Colony Optimization (ACO) has been used successfully to solve hard combinatorial optimization problems. This metaheuristic method is inspired by the foraging behavior of ant colonies, which manage to establish the shortest routes to feeding sources and back. We discuss some possibilities for describing ACO algorithms by Generalized Nets (GNs), which help us to understand the processes in depth and to improve them. An important feature of GNs is that they are expandable; for example, some of the GN places can be replaced with a new GN. Thus we can include procedures that improve the search process and the achieved results, or various kinds of estimations of the algorithm behavior.
1 Introduction
Generalized Nets (GNs) [1,2] are extensions of ordinary Petri Nets (PNs) and of their other extensions, such as Time PNs, Colored PNs, Stochastic PNs, Self-organizing PNs, etc. They are not only a tool for modelling parallel processes; the GN approach to representing real processes can also play the role of a generator of new ideas for extending the initial processes. In the present paper we discuss some possibilities for applying GNs as a tool for modelling Ant Colony Optimization (ACO) algorithms. Real ants foraging for food lay down quantities of pheromone (chemical cues) marking the path that they follow. An isolated ant moves essentially at random, but an ant encountering a previously laid pheromone will detect it and decide to follow it with high probability, thereby reinforcing it with a further quantity of pheromone. The repetition of the above mechanism represents the auto-catalytic behavior of a real ant colony, where the more the ants follow a trail, the more attractive that trail becomes. ACO is inspired by real ant behavior and is used to solve hard combinatorial optimization problems. Examples of such optimization problems are the Traveling Salesman Problem, Vehicle Routing, Minimum Spanning Tree, Constraint Satisfaction, the Knapsack Problem, etc. The ACO algorithm uses a colony of artificial ants that behave as cooperative agents in a mathematical space where they are allowed to
search and reinforce pathways (solutions) in order to find the optimal ones. The problem is represented by a graph and the ants walk on the graph to construct solutions. The solutions are represented by paths in the graph. After the initialization of the pheromone trails, the ants construct feasible solutions, starting from random nodes, and then the pheromone trails are updated. At each step the ants compute a set of feasible moves and select the best one (according to some probabilistic rules) to continue the rest of the tour. We discuss some possibilities for describing ACO algorithms by GNs, which help us to understand the processes in depth and to improve them. The ACO algorithm has a parallel nature, so the computational time can be decreased by running the algorithm on supercomputers. Through the GN description, the parallel implementation of ACO can be improved. The paper is organized as follows. Section 2 gives short remarks on GNs. In Section 3, GNs are presented as a tool for modeling ACO algorithms. In Section 4, an example of a GN for ACO applied to the knapsack problem is given. At the end, some conclusions and directions for future work are given.
2 Short Remarks on Generalized Nets
In this section we shall introduce the concept of a Generalized Net. A GN consists of transitions, places, and tokens. Every GN-transition is described by a seven-tuple (Fig. 1):

Z = ⟨L′, L′′, t1, t2, r, M, □⟩,

where:
(a) L′ and L′′ are finite, non-empty sets of places (the transition's input and output places, respectively); for the transition in Fig. 1 these are L′ = {l′1, l′2, . . . , l′m} and L′′ = {l′′1, l′′2, . . . , l′′n};
(b) t1 is the current time-moment of the transition's firing;
(c) t2 is the current value of the duration of its active state;
(d) r is the transition's condition determining which tokens will pass (or transfer) from the transition's inputs to its outputs; it has the form of an Index Matrix (IM; see [4]) whose rows are indexed by the input places l′1, . . . , l′m, whose columns are indexed by the output places l′′1, . . . , l′′n, and whose element in row i and column j is a predicate ri,j (1 ≤ i ≤ m, 1 ≤ j ≤ n). The predicate ri,j corresponds to the i-th input and the j-th output place. When its truth value is "true", a token from the i-th input place can be transferred to the j-th output place; otherwise, this is not possible;
Fig. 1. GN-transition
(e) M is an IM of the capacities of the transition's arcs, with rows indexed by the input places l′1, . . . , l′m, columns indexed by the output places l′′1, . . . , l′′n, and elements mi,j ≥ 0 (natural numbers), 1 ≤ i ≤ m, 1 ≤ j ≤ n;
(f) □ is an object having a form similar to a Boolean expression. It may contain as variables the symbols which serve as labels for the transition's input places, and it is an expression built up of variables and the Boolean connectives ∧ and ∨, whose semantics is defined as follows: ∧(li1, li2, . . . , liu) — every place li1, li2, . . . , liu must contain at least one token; ∨(li1, li2, . . . , liu) — there must be at least one token in at least one of the places li1, li2, . . . , liu, where {li1, li2, . . . , liu} ⊂ L′. When the value of a type (calculated as a Boolean expression) is "true", the transition can become active, otherwise it cannot.

The ordered four-tuple
E = ⟨⟨A, πA, πL, c, f, θ1, θ2⟩, ⟨K, πK, θK⟩, ⟨T, to, t∗⟩, ⟨X, Φ, b⟩⟩
is called a Generalized Net (GN) if:
(a) A is a set of transitions;
(b) πA is a function giving the priorities of the transitions, i.e., πA : A → N, where N = {0, 1, 2, . . .} ∪ {∞};
(c) πL is a function giving the priorities of the places, i.e., πL : L → N, where L = pr1 A ∪ pr2 A, and pri X is the i-th projection of the n-dimensional set X, where n ∈ N, n ≥ 1 and 1 ≤ i ≤ n (obviously, L is the set of all GN-places);
(d) c is a function giving the capacities of the places, i.e., c : L → N;
(e) f is a function which calculates the truth values of the predicates of the transition's conditions (for the GN described here, let the function f have the value "false" or "true", i.e., a value from the set {0, 1});
(f) θ1 is a function giving the next time-moment when a given transition Z can be activated, i.e., θ1(t) = t′, where pr3 Z = t, t′ ∈ [T, T + t∗] and t ≤ t′. The value of this function is calculated at the moment when the transition terminates its functioning;
(g) θ2 is a function giving the duration of the active state of a given transition Z, i.e., θ2(t) = t′, where pr4 Z = t ∈ [T, T + t∗] and t′ ≥ 0. The value of this function is calculated at the moment when the transition starts functioning;
(h) K is the set of the GN's tokens. In some cases, it is convenient to consider this set in the form K = ∪_{l∈QI} Kl,
where Kl is the set of tokens which enter the net from place l, and QI is the set of all input places of the net; (i) πK is a function giving the priorities of the tokens, i.e., πK : K → N ; (j) θK is a function giving the time-moment when a given token can enter the net, i.e., θK (α) = t, where α ∈ K and t ∈ [T, T + t∗ ]; (k) T is the time-moment when the GN starts functioning. This moment is determined with respect to a fixed (global) time-scale; (l) to is an elementary time-step, related to the fixed (global) time-scale; (m) t∗ is the duration of the GN functioning; (n) X is the set of all initial characteristics the tokens can receive when they enter the net; (o) Φ is a characteristic function which assigns new characteristics to every token when it makes a transfer from an input to an output place of a given transition. (p) b is a function giving the maximum number of characteristics a given token can receive, i.e., b : K → N . A GN may lack some of the components, and such GNs give rise to special classes of GNs called reduced GNs. The omitted elements of the reduced GNs are marked by “*”.
3 GNs as Tools for Modelling of ACO Algorithms
An interesting feature of GNs is that they are expandable. A new transition can be included between any two transitions or at the beginning and the end of the net. Any of the places of the net can be replaced with a GN. Thus the GN representation can show us the weaknesses of the method and possibilities for improvement. Hybridization of the method is represented by including new transitions, or a new GN, between two transitions. In [5] the functioning of the standard ACO algorithm is described by a GN. In [6] an extension of the first GN is constructed. It describes the functioning and the
results of the work of a hybrid ACO algorithm that includes a local search procedure. A next step in extending the GN-model is taken in [7], where an ACO algorithm with a fuzzy start of the ants on every iteration is described, including a new transition at the beginning of the ACO-GN. By replacing a place of the GN with a new GN we can include a new function of the ant. We now present some new modifications of ACO algorithms, the ideas for which come from the GN description of the method.
1. If the path of some ant in the previous simulation contains a linear part, i.e., a part without branches, and if some ant treads on a node that is the beginning of such a path region, then in the present simulation this whole part can be traversed in one step.
2. We discuss introducing elements of intuitionistic fuzzy estimations of the pheromone quantity. In this way we can obtain the degree of existing pheromone for a fixed node and the degree of evaporated pheromone.
3. A GN-model of multiplying ants can be constructed. In this case we can simulate the process of joint ant transfer to a node where the paths of the different ants will be divided. The GN-model can use some restrictions in this situation, because if the ants multiply at each step, their number will increase exponentially. For example, we can allow multiplication only when the ants are in a near neighborhood of a node where there was a big value in the previous simulation. Another version of this idea is: only the ant that is nearest to such a node can multiply, and at each time-moment this is possible for only one ant.
4. Including "mortal" ants in the algorithm. In this case some of the ants can die, e.g., when they tread on a node through which another ant has already gone.
5. A modification of this model will be a model in which some (or all) ants have a limited life-time. In this case we must find the maximal values within a limited time.
4 A GN-Model of ACO for Multiple Knapsack Problem
In this section we construct a GN for the Multiple Knapsack Problem (MKP). A description of the MKP can be found in [3]. The Generalized Net (GN, see [2,4]) has 4 transitions, 11 places, and three types (α, ε, and β1, β2, . . . — a number determined in advance) of tokens (see Fig. 2). These tokens enter, respectively, place l1 with the initial characteristic "volumes of knapsacks" and place l2 with the initial characteristic, for the i-th β-token, "i-th thing, volume, price".

Z1 = ⟨{l1}, {l3}, r1⟩, where r1 is the index matrix with the single row l1, the single column l3, and the predicate W1,3.
Fig. 2. GN net model for ACO
Here W1,3 = "transition function". The aim of this construction is for token α to go to place l3 when all β-tokens are already in place l5, so that we can calculate the common price of the things (let them be N in number):

CP = Σ_{i=1}^{N} pr3 x_0^{βi},

where x_s^ω is the s-th characteristic (s ≥ 0; the 0-th characteristic is the initial one) of token ω. Now token α can obtain as a new characteristic the "middle price per unit volume", i.e., CP / x_0^α.
Token α from place l1 enters place l3 with a characteristic "vector of current transition function results ϕ1,cu, ϕ2,cu, . . . , ϕn,cu", while token ε stays only in place l4, obtaining the characteristic "new m-dimensional vector of heuristics with elements the graph vertices, or new l-dimensional vector of heuristics with elements the graph arcs".

Z2 = ⟨{l2}, {l4, l5}, r2⟩, where r2 is the index matrix with the single row l2, columns l4 and l5, and predicates W2,4 and W2,5.
Here W2,4 = "new m-dimensional vector of heuristics", and W2,5 = "all β-tokens are already collected in place l5".

Z3 = ⟨{l3, l4, l6, l9, l10}, {l6, l7, l8, l9}, r3⟩, where r3 is the index matrix:

        l6      l7      l8      l9
l3      W3,6    W3,7    false   W3,9
l4      W4,6    W4,7    false   W4,9
l6      W6,6    false   false   false
l9      false   W9,7    W9,8    W9,9
l10     true    true    true    false

where:
W3,6 = W3,7 = W3,9 = ¬W6,6 ∨ ¬W9,8,
W4,6 = W4,7 = W4,9 = ¬W6,6 ∨ ¬W9,8,
W6,6 = "the current iteration is not finished",
W9,7 = "the current best solution is better than the global best solution",
W9,8 = "the truth-value of the expression C1 ∨ C2 ∨ C3 is true",
W9,9 = ¬W9,8,
and C1, C2, and C3 are the following end-conditions:
C1 – "the computational time (maximal number of iterations) is reached",
C2 – "the number of iterations without improving the result is reached",
C3 – "if the upper/lower bound is known, then the current results are close (e.g., within 5%) to the bound".

Token α from place l3 enters place l6 with a characteristic "S1,cu, S2,cu, . . . , Sn,cu", where Si,cu is the current partial solution for the current iteration, made by the i-th ant (1 ≤ i ≤ n). If W6,6 = true, it splits into three tokens α, α′ and α′′ that enter place l6 (token α) with a characteristic "new n-dimensional vector with elements the couples of the new ant coordinates", place l7 (token α′) with the last α-characteristic, and place l9 (token α′′) with a characteristic "the best solution for the current iteration; its number". Token α′′ can enter place l8 only when W9,8 = true, and there it obtains the characteristic "the best achieved result".

Z4 = ⟨{l7}, {l10, l11}, r4⟩, where r4 is the index matrix with the single row l7, columns l10 and l11, and predicates W7,10 and W7,11, where:
W7,10 = "the truth-value of the expression C1 ∨ C2 ∨ C3 is true",
W7,11 = ¬W7,10.
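For readability, a small Python sketch (our own; the 5% tolerance follows the textual description above, while the parameter names and default values are assumptions) of the end-condition C1 ∨ C2 ∨ C3 used in the predicates W9,8 and W7,10:

def end_condition(iteration, max_iterations, stagnation, max_stagnation,
                  best_cost=None, bound=None, tol=0.05):
    c1 = iteration >= max_iterations                 # C1: computational budget reached
    c2 = stagnation >= max_stagnation                # C2: no improvement for too long
    c3 = (bound is not None and best_cost is not None
          and abs(best_cost - bound) <= tol * abs(bound))   # C3: within 5% of a known bound
    return c1 or c2 or c3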
5 Conclusion
This paper has presented a description of the ACO algorithm by means of GNs. This description shows us the weaknesses of the algorithm and points us to possible improvements. An important feature of GNs is that they are expandable, so we can add new transitions and replace places with other GNs. This gives us ideas on how to construct new, modified ACO algorithms and to improve the achieved results. As future work we shall develop new ACO algorithms and apply them to various difficult optimization problems. The aim is to construct the most appropriate algorithm for a given class of problems. Ant algorithms are parallel in nature, with a large transfer of data after every iteration, so they are very suitable for parallel implementation on massively parallel architectures and supercomputers.
Acknowledgments

This work has been partially supported by the Bulgarian NSF under contract DO 02–115.
References
1. Alexieva, J., Choy, E., Koychev, E.: Review and bibliography on generalized nets theory and applications. In: Choy, E., Krawczak, M., Shannon, A., Szmidt, E. (eds.) A Survey of Generalized Nets. Raffles KvB Monograph, vol. 10, pp. 207–301 (2007)
2. Atanassov, K.: Generalized Nets. World Scientific, Singapore (1991)
3. Fidanova, S.: Ant colony optimization and multiple knapsack problem. In: Renard, J.P. (ed.) Handbook of Research on Nature Inspired Computing for Economics and Management, pp. 498–509. Idea Group Inc. (2006) ISBN 1-59140-984-5
4. Atanassov, K.: On Generalized Nets Theory. Prof. M. Drinov Publishing House, Sofia (2007)
5. Fidanova, S., Atanassov, K.: Generalized Net Models of the Process of Ant Colony Optimization. Issues on Intuitionistic Fuzzy Sets and Generalized Nets 7, 108–114 (2008)
6. Fidanova, S., Atanassov, K.: Generalized Net Model for the Process of Hybrid Ant Colony Optimization. Comptes Rendus de l'Academie Bulgare des Sciences 62(3), 315–322 (2009)
7. Fidanova, S., Atanassov, K.: Generalized Net Models of the Process of the Ant Colony Optimization with Intuitionistic Fuzzy Estimations. In: Proc. of the Ninth Int. Conf. on Generalized Nets, Sofia, pp. 41–48 (2008)
A Scatter Search Approach for Solving the Automatic Cell Planning Problem

Francisco Luna, Juan J. Durillo, Antonio J. Nebro, and Enrique Alba

Departamento de Lenguajes y Ciencias de la Computación, E.T.S. Ingeniería Informática, Campus de Teatinos, 29071 Málaga, Spain
{flv,durillo,antonio,eat}@lcc.uma.es
Abstract. Planning a cellular phone network makes engineers face a number of challenging optimization problems. This paper addresses the solution of one of these problems, Automatic Cell Planning (ACP), which lies in positioning the antennae of the network and configuring them properly in order to meet several objectives and constraints. This paper approaches, for the first time, the ACP problem with a Scatter Search technique. The algorithm used is called AbYSS. Three large-scale real-world instances have been used for evaluating the search capabilities of AbYSS on this optimization problem. The results show that AbYSS not only reaches very accurate solutions for the three instances, but also scales well with increasingly sized instances.
1 Introduction
Automatic Cell Planning (ACP) [10] is one of the main design tasks that emerges in the deployment of cellular telecommunication networks. Given a set of candidate sites where the antennae of the network can be placed, the problem lies in choosing a subset of these sites and then configuring their installed antennae properly in such a way that both the cost of the entire infrastructure (i.e., the number of installed sites) and the signal interference are minimized, while the traffic capacity is maximized. In real-world cellular networks, like those tackled in this work, the ACP problem also becomes a large-scale optimization problem, since the number of sites required to meet the cellular operator requirements (traffic and capacity) is high and each of these sites has many configuration settings. As can be observed, the ACP problem is multiobjective in nature, i.e., contradictory objectives (cost vs. quality of service, measured in terms of traffic capacity and signal interference) are to be optimized simultaneously. Instead of aggregating these objectives into one single function, we have used a multiobjective formulation. Contrary to single-objective optimization, the solution of such a multiobjective problem is not one single solution, but a set of nondominated solutions known as the Pareto optimal set, which is called the Pareto border or Pareto front when it is plotted in the objective space [1,2]. Every solution of this set is optimal in the sense that no improvement can be achieved on one objective without worsening at least another one. The main goal in the resolution
of a multiobjective problem is to compute the set of solutions within the Pareto optimal set and, consequently, the Pareto front. Our approach for addressing the ACP problem is a multiobjective Scatter Search (SS) algorithm called AbYSS (Archive-based hYbrid Scatter Search) [9]. To the best of our knowledge, this is the first time an SS method is used for solving this problem. Our goal is therefore twofold. On the one hand, we want to evaluate the suitability of AbYSS for addressing this kind of problem and, on the other hand, to analyze its ability to handle instances of increased size. As a comparison basis we have used three real-world cellular networks coming from the telecommunication industry (with 249, 568, and 747 sites), and PAES [4], a widely known algorithm in the literature previously used by the authors for this problem [7]. The experiments show that AbYSS reaches very accurate results, providing the cellular network designer with highly efficient configurations for the given instances. The results also show that our approach is able to scale well with the size of the instances. The rest of the paper is structured as follows. The next section provides both an overall view of the ACP problem and the particular approach used in this paper. Section 3 describes AbYSS and the modifications engineered to meet the requirements imposed by the optimization problem at hand. The experimental analysis and evaluation are included in Sect. 4. Finally, Sect. 5 presents the main conclusions and future lines of work.
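For reference, a minimal Python sketch (ours, not taken from the paper) of the Pareto-dominance relation underlying the nondominated sets discussed above, assuming all objectives are expressed as minimization (a maximized objective such as the held traffic would first be negated):

def dominates(u, v):
    # u dominates v: no worse in every objective, strictly better in at least one
    return all(a <= b for a, b in zip(u, v)) and any(a < b for a, b in zip(u, v))

def nondominated(points):
    # Keep only the objective vectors that no other vector dominates
    return [u for u in points if not any(dominates(v, u) for v in points if v is not u)]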
2 The ACP Problem
Engineering a cellular phone network gives rise to many challenging optimization tasks at different levels of the system deployment (business planning, design, routing, resource allocation, etc.) [12]. The ACP problem, also known as the network dimensioning problem or the capacity planning problem, appears as a major design problem that may have a large impact not only in the early phases of the network deployment but also in subsequent expansions/modifications of existing networks. In general, the ACP problem can be described as follows: given an area where the service has to be guaranteed, determine where to locate the antennae or BTSs (Base Transceiver Stations) and select their configurations (antenna type, number of antennae, emission power, tilt, and azimuth) so that each point (or each user) in the area receives a sufficiently strong signal. Several approaches have been proposed in the literature to address this problem [10]. The one used in this work is based on the so-called "Cell and Test Point Model", in which the working area is discretized into a set of test points that are spread over the whole area. These test points are used to measure the amount of signal strength in the region where the network operator intends to service the traffic demand of a set of customers. Three subsets of test points are defined: Reception Test Points (RTPs), where the signal quality is tested; Service Test Points (STPs), where the signal quality must exceed a minimum threshold to be usable by the customers; and Traffic Test Points (TTPs), where a certain traffic amount is associated to the customers (measured in Erlangs). The model
ensures that TTP ⊆ STP ⊆ RTP. Instead of allowing the location of BTSs at any position, a set of candidate sites where BTSs could be installed is identified, so as to represent real-world scenarios with more realism. Not only the working area is discretized; the BTS configuration parameters, such as the emission power, tilt, and azimuth, also take values only from a discrete subset of possible values. Finally, three objectives are to be optimized: minimizing both the cost of the network (fcost) and the interferences between cells (finterf), and maximizing the traffic held, or network capacity (ftraffic). Also, two constraints have to be satisfied: at least 80% of the working area has to be covered (ccov) and a minimum handover (or cell overlapping, to guarantee communication continuity) has to be guaranteed. A fully detailed description of the mathematical model used in this work can be found in [11,13].
3 Setting Up AbYSS
AbYSS (Archive-based hYbrid Scatter Search) [9] is a hybrid metaheuristic algorithm which follows the scatter search template [3] but using mutation and crossover operators coming from the Evolutionary field. This algorithm combines ideas of three state-of-the-art multiobjective evolutionary algorithms. On the one hand, an external archive is used to store the nondominated solutions found during the search, following the scheme applied by PAES [4], but using the crowding distance of NSGA-II as a niching measure instead of the adaptive grid; on the other hand, the selection of solutions from the initial set to build the reference set applies the density estimation of SPEA2 (see [1] for the details on these last two algorithms). The SS template, and thus AbYSS, is defined by five methods: diversification generation, improvement, reference set update, subset generation, and solution combination. These five methods are structured as shown in Fig. 1. The SS template is based on using a small population, known as the reference set, whose
Fig. 1. Outline of the standard scatter search algorithm
individuals are combined to build new solutions which are generated systematically. Furthermore, these new individuals can be improved by applying a local search method. The reference set is initialized from an initial population composed of diverse solutions, and it is updated with the solutions resulting from the local search improvement. The five SS methods are described below, right after detailing how the solutions manipulated by the algorithms are encoded.

3.1 Solution Representation

A tentative solution of the ACP problem encodes the entire cellular network configuration [13]. A multilevel encoding has been used in which level 1 handles the site activation, level 2 sets up the number and type of antennae, and level 3 configures the parameters of the BTS. Then, when a given site is enabled, one or more antennas are activated, always keeping either one single omnidirectional antenna or from one to three directive antennas. Figure 2 displays the hierarchical encoding used. As can be seen, it is not a classical encoding, so the genetic operators used by the EAs have to be properly designed.

Fig. 2. Hierarchical encoding
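A possible in-memory realisation of this hierarchical encoding is sketched below in Python (our own illustration; the class and field names, and the use of discrete value sets, are assumptions based on the description above).

from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class Antenna:
    kind: str                       # "omni", "small_directive" or "large_directive" (level 2)
    power: int                      # level-3 parameters, drawn from discrete sets
    tilt: Optional[int] = None      # only meaningful for directive antennas
    azimuth: Optional[int] = None

@dataclass
class Site:
    active: bool                                            # level 1: site activation
    antennas: List[Antenna] = field(default_factory=list)   # one omni, or 1 to 3 directive

Solution = List[Site]               # one entry per candidate site of the network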
3.2 The Template Methods
AbYSS was originally proposed for solving continuous multiobjective optimization problems, so its methods have to be re-engineered to deal with the new solution encoding:
– Diversification generation: initial solutions are generated by using a fully random strategy (a sketch of this random generation is given after this list). It works as follows: each site is randomly activated/deactivated. If it is activated, a random configuration for this site is then generated. First, either an omnidirectional antenna or several directive antennas are installed. In the former setting, a random value for the antenna power has to be chosen. In the latter one, the number of BTSs has to be set (from one to three). Then, for each directive BTS, its loss diagram (either short or large directive) as well as its power, tilt, and azimuth values are randomly selected.
– Improvement: in AbYSS, this method is defined by iteratively changing a given solution by means of a mutation operator [9]. The same improvement approach is used here, but with the multilevel mutation proposed in [8].
– Solution combination: a crossover operator is used as the solution combination method in AbYSS. The geographic crossover presented in [8] has been used.
– Subset generation: the same strategy as in AbYSS is used here.
Fig. 3. Topology of the instances (a) Arno 1.0, (b) Arno 3.0, and (c) Arno 3.1
– Reference set update: this method remains the same as in AbYSS, but one main issue emerges here: how the distance between solutions is measured in order to check whether a given solution becomes a member of the diversity subset in the RefSet. We have used the distance proposed in [6], which is aimed at reporting high values when comparing tentative solutions in which the same candidate sites have very different configurations.
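Reusing the Site and Antenna classes from the sketch in Sect. 3.1, the fully random diversification generation could look as follows (the 0.5 activation probability and the discrete value sets are placeholders of ours; the paper only states that the choices are random).

import random

POWERS, TILTS, AZIMUTHS = [26, 30, 34], [0, 2, 4, 6], list(range(0, 360, 30))   # placeholder sets

def random_site():
    if random.random() < 0.5:                       # assumed 50/50 site activation
        return Site(active=False)
    if random.random() < 0.5:                       # one omnidirectional antenna
        antennas = [Antenna("omni", power=random.choice(POWERS))]
    else:                                           # one to three directive antennas
        antennas = [Antenna(random.choice(["small_directive", "large_directive"]),
                            power=random.choice(POWERS),
                            tilt=random.choice(TILTS),
                            azimuth=random.choice(AZIMUTHS))
                    for _ in range(random.randint(1, 3))]
    return Site(active=True, antennas=antennas)

def random_solution(n_sites):
    return [random_site() for _ in range(n_sites)]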
4 Experimentation
This section is devoted to presenting the experiments conducted in this paper. First, we include the details of the ACP instances tackled. The methodology used is described afterwards. Finally, the results achieved are presented and analyzed.

4.1 ACP Instances
Three real-world instances provided by France Telecom R&D have been tackled here. They correspond with two well distinguished scenarios: the instance Arno 1.0 is a highway environment, whereas the instances Arno 3.0 and Arno 3.1 model two urban areas of different size. Their topologies are displayed in Fig. 3. Table 1 includes the most relevant values that characterize the three instances. It can be seen that they have an increasing size, ranging from 250 sites in Arno 1.0 to 743 sites in the Arno 3.1 instance (almost three times larger). It is important to note that not only the number of sites is increased but also the number of test points used to measure the quality of service in the working area.
Table 1. Several values that characterize the three instances tackled

Instance                               Arno 1.0   Arno 3.0   Arno 3.1
Total traffic (Erlangs)                3 210.91   2 988.12   8 089.78
Number of sites                             250        568        747
Number of test points (|ST|)             29 955     17 394     42 975
Number of traffic test points (|T|)       4 967      6 656     21 475
4.2 Algorithm Settings
As mentioned before, PAES has been considered in this preliminary study to check the suitability of AbYSS for addressing the ACP problem. In order to guarantee a fair and meaningful comparison, the two algorithms stop when reaching 25000 function evaluations. The maximum number of nondominated solutions in the approximated Pareto front is 100. PAES also applies the multilevel mutation operator used in AbYSS. In the two algorithms, tentative solutions are always mutated by changing one single site. The geographical crossover (solution combination method in AbYSS) is also applied with a probability of 1.0. The remaining settings of AbYSS are the same as those proposed in [9].
4.3 Methodology
Given that the Pareto fronts of the studied ACP instances are not known, we have used the hypervolume (HV) quality indicator [14], which does not require knowing that information beforehand. It is a Pareto-compliant indicator [5] which measures both the convergence towards Pareto optimal solutions and the spread of the solutions along the nondominated front. Higher values of the hypervolume indicator are desirable. Since it is not free from an arbitrary scaling of the objectives, all the fronts have been normalized in order to guarantee a fair comparison among the algorithms. Since we are dealing with stochastic algorithms and we want to provide the results with statistical significance, 30 independent runs for each algorithm and each problem instance have been carried out. The HV indicator is then computed for each of the approximated fronts, thus obtaining 30 HV values for each algorithm/instance pair. Next, the following statistical analysis has been carried out. First, a Kolmogorov-Smirnov test is performed in order to check whether the values of the results follow a normal (Gaussian) distribution or not. If so, the Levene test checks for the homogeneity of the variances. If the samples have equal variance (positive Levene test), an ANOVA test is done; otherwise we perform a Welch test. For non-Gaussian distributions, the non-parametric Kruskal-Wallis test is used to compare the medians of the algorithms. We always consider in this work a confidence level of 95% (i.e., a significance level of 5%, or a p-value under 0.05) in the statistical tests, which means that the differences are unlikely to have occurred by chance with a probability of 95%.
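One way to realise this statistical pipeline for the two-algorithm case is sketched below in Python with SciPy (our own sketch; the paper does not prescribe an implementation, and we use a Welch two-sample t-test as the Welch procedure).

import numpy as np
from scipy import stats

def significant_difference(a, b, alpha=0.05):
    # a, b: the 30 HV values of the two algorithms on one instance
    def normal(x):
        x = np.asarray(x, dtype=float)
        return stats.kstest(x, 'norm', args=(x.mean(), x.std(ddof=1))).pvalue > alpha

    if normal(a) and normal(b):
        if stats.levene(a, b).pvalue > alpha:            # equal variances -> ANOVA
            p = stats.f_oneway(a, b).pvalue
        else:                                            # unequal variances -> Welch test
            p = stats.ttest_ind(a, b, equal_var=False).pvalue
    else:                                                # non-Gaussian -> Kruskal-Wallis
        p = stats.kruskal(a, b).pvalue
    return p < alpha                                     # True: significant at the 95% level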
Table 2. HV values of AbYSS and PAES for the three tackled instances

Problem      AbYSS (x̄ ± σn)       PAES (x̄ ± σn)
Arno 1.0     0.3208 ± 0.0229      0.2343 ± 0.0680   +
Arno 3.0     0.4157 ± 0.0208      0.1889 ± 0.0290   +
Arno 3.1     0.3637 ± 0.0237      0.1166 ± 0.0255   +
Fig. 4. Example fronts of the instances (a) Arno 1.0, (b) Arno 3.0, and (c) Arno 3.1 (objective axes: held traffic, interferences, and number of sites)
4.4 Results
Table 2 includes the mean, x̄, and the standard deviation, σn, of the HV indicator values reached by the approximated Pareto fronts computed by AbYSS and PAES on the three studied ACP instances. The "+" symbol in the last column points out that the differences between the mean HV values obtained are statistically significant (successful tests in all the experiments). We have also used a gray background to highlight the larger (better) HV value obtained for each instance. The first clear conclusion that can be drawn from the table is that AbYSS always reaches approximated Pareto fronts with better (higher) HV values. These are expected results, since the SS approach has a more powerful search engine that uses not only a mutation operator (the multilevel one) but also a recombination one
(geographic crossover). Table 2 also shows an interesting fact: the larger the instance, the greater the difference between the HV values of the two algorithms. Indeed, the HV values obtained by PAES on Arno 1.0, Arno 3.0, and Arno 3.1 are, respectively, 73.02%, 45.43%, and 32.07% of the HV of AbYSS. This means that the latter is more capable of exploring larger search spaces. To graphically show the differences between the two algorithms, Fig. 4 displays the Pareto fronts of the three ACP instances obtained in one single run. This figure clearly points out the enhanced search capabilities of AbYSS, i.e., it is able to cover a larger portion of the Pareto front in the three instances. From the point of view of the network designer, the benefits are really relevant: AbYSS provides a richer set of network configurations (nondominated solutions) which present a wider trade-off among the three contradictory ACP objectives. Specifically, it can be seen that the SS-based algorithm reaches solutions involving a very low cost (low number of sites) and causing little interference, but holding a low amount of traffic.
5 Conclusions and Future Work
This paper presented a preliminary approach to using a Scatter Search algorithm called AbYSS for addressing the Automatic Cell Planning problem. This is a very relevant planning problem that appears in the design of cellular phone networks. In many real-world instances, like those considered here, it is also a large-scale optimization problem, since these networks may cover large geographical areas and have to provide a potentially large number of users with a high quality of service. In this preliminary study, AbYSS has been compared to PAES over three increasingly sized real-world instances. The hypervolume indicator has been used to measure the quality of the obtained nondominated sets of solutions. The results have shown that AbYSS reaches approximations of the Pareto front having better (higher) values for this quality indicator. This means that our approach is able to provide the network designer with a wide set of network configurations from which the most appropriate one can be chosen. The experiments have also revealed that AbYSS, contrary to PAES, can better deal with large-scale instances, since its HV values do not diminish when the instance size grows. As future work, the effectiveness of AbYSS with respect to more advanced multiobjective optimizers such as NSGA-II or SPEA2 has to be checked. Further experiments with other configurations of AbYSS and with other problem instances are also planned.
References
1. Coello, C.A., Lamont, G.B., Veldhuizen, D.A.V.: Evolutionary Algorithms for Solving Multi-Objective Problems, 2nd edn. Genetic and Evolutionary Computation Series. Springer, Heidelberg (2007)
2. Deb, K.: Multi-Objective Optimization Using Evolutionary Algorithms. John Wiley & Sons, Chichester (2001)
3. Glover, F.: A template for Scatter Search and Path Relinking. In: Hao, J.-K., Lutton, E., Ronald, E., Schoenauer, M., Snyers, D. (eds.) AE 1997. LNCS, vol. 1363, pp. 13–54. Springer, Heidelberg (1998)
4. Knowles, J., Corne, D.: The Pareto archived evolution strategy: A new baseline algorithm for multiobjective optimization. In: IEEE CEC, pp. 9–105. IEEE Press, Los Alamitos (1999)
5. Knowles, J.D., Thiele, L., Zitzler, E.: A tutorial on the performance assessment of stochastic multiobjective optimizers. Technical Report TIK-Report 214, Computer Engineering and Networks Laboratory, ETH Zurich (2006)
6. Luna, F.: Metaheurísticas avanzadas para problemas reales en redes de telecomunicaciones (in Spanish). PhD thesis, University of Málaga (2008)
7. Luna, F., Nebro, A.J., Alba, E.: Parallel evolutionary multiobjective optimization. In: Nedjah, N., Alba, E., de Macedo, L. (eds.) Parallel Evolutionary Computations. Studies in Computational Intelligence, vol. 22, pp. 33–56. Springer, Heidelberg (2006)
8. Meunier, H., Talbi, E.G., Reininger, P.: A multiobjective genetic algorithm for radio network optimization. In: IEEE CEC, pp. 317–324. IEEE Press, Los Alamitos (2000)
9. Nebro, A.J., Luna, F., Alba, E., Dorronsoro, B., Durillo, J.J., Beham, A.: AbYSS: Adapting scatter search to multiobjective optimization. IEEE Transactions on Evolutionary Computation 12(4), 439–457 (2008)
10. Raisanen, L.: Multi-objective site selection and analysis for GSM cellular network planning. PhD thesis, Cardiff University (2006)
11. Reininger, P., Caminada, A.: Model for GSM radio network optimization. In: 2nd ACM Int. Conf. on Discrete Algorithms and Methods for Mobility, ACM Sigmobile, pp. 35–42 (1998)
12. Resende, M.G.C., Pardalos, P.M. (eds.): Handbook of Optimization in Telecommunications. Springer, Heidelberg (2006)
13. Talbi, E.G., Meunier, H.: Hierarchical parallel approach for GSM mobile network design. Journal of Parallel and Distributed Computing 66(2), 274–290 (2006)
14. Zitzler, E., Thiele, L.: Multiobjective evolutionary algorithms: A comparative case study and the strength Pareto approach. IEEE Transactions on Evolutionary Computation 3(4), 257–271 (1999)
Some Aspects Regarding the Application of the Ant Colony Meta-heuristic to Scheduling Problems

Ioana Moisil and Alexandru-Liviu Olteanu

"Lucian Blaga" University of Sibiu, Faculty of Engineering, Str. Emil Cioran, Nr. 4, Sibiu, 550025, Romania
[email protected], [email protected]
Abstract. Scheduling is one of the most complex problems that appear in various fields of activity, from industry to scientific research, and it has a special place among optimization problems. In this paper we present the results of our computational study, i.e., an Ant Colony Optimization algorithm for the Resource-Constrained Project Scheduling Problem that uses dynamic pheromone evaporation.
1 Introduction
Scheduling is one of the most complex problems that appear in various fields of activity, from industry to scientific research, and it has a special place among combinatorial optimization problems (COPs). Different operating environments, with different constraints and optimization criteria, make the scheduling problem quite challenging. In short, a scheduling problem is a problem where a given set of activities or jobs needs to be put in order such that certain optimization criteria are minimized. A job can have many operations, and in general operations from the same job cannot be processed at the same time. The activities can have precedence constraints, resource needs, start times, deadlines, and other restrictions. The number of machines and jobs varies between different kinds of scheduling problems (Single Machine Total Tardiness Problem — SMTTP, Job Shop — JSP, Open Shop — OSP, Group Shop scheduling Problem — GSP, Resource-Constrained Project Scheduling Problems — RCPSP, Multi-Modal Resource-Constrained Project Scheduling Problems — MMRCPSP, Multi-Project Scheduling Problems — MPSP, and variants). For any kind of problem, different orders of operations result in different amounts of time needed to complete all operations (the makespan), and in almost all cases the main objective is to minimize the makespan. Most scheduling problems that are of practical interest are NP-hard (non-deterministic polynomial-time hard). There are many methods for finding feasible and optimal solutions of a scheduling problem, ranging from exact methods to metaheuristics [15,5]. Exact methods, such as CPM (the critical path method), linear programming, or bounded
enumeration (decision trees), can lead to an optimal solution and have the advantage that they can determine whether the problem is infeasible, but they become impractical for large-size problems and large sets of constraints, and cannot deal with the uncertainty and dynamics of real-life problems. Heuristic methods (MIN SLK — minimum slack time; MIN LFT — nearest latest finish time; SFM — shortest feasible duration; LRP — the least resource proportion; hierarchical approaches; Simulated Annealing, etc.) and metaheuristics (Evolutionary Algorithms, Swarm Intelligence [1]) cannot determine whether the problem is infeasible. Other methods are evolutionary algorithms, molecular computing [10], DNA and membrane computing [16], and hybrid approaches. In our paper we consider applying the ACO (Ant Colony Optimization) meta-heuristic to the Resource-Constrained Project Scheduling Problem, bringing a variation to the classic approach and comparing the performance with that obtained using a genetic algorithm and a best-search method.
1.1 Resource-Constrained Project Scheduling
In the following we describe the Resource-Constrained Project Scheduling Problem (RCPSP). The RCPSP is a more general case of the scheduling problems above. Let J = {J1, J2, . . . , Jn} be a set of n jobs, M = {M1, M2, . . . , Mm} a set of m machines, and P = {p1, p2, . . . , pn} the processing times of the jobs. These jobs are interrelated in the form of a digraph. The precedence relation is unidirectional: if Ji links to Jj, then the first is a predecessor of the second. The graph is constructed so that no loops are created. Two dummy jobs are added; they have no processing times or resource needs. The starting job S links to all the jobs that have no predecessors, while the finishing job F is linked to by all the jobs that have no successors. An example of a precedence graph is shown in Fig. 1. Each job Ji has a set of machines mi ⊆ M that need to work at the same time in order for the job to be completed.
Fig. 1. A precedence graph
Fig. 2. Solution space
A valid solution is a chain starting from the first job S and finishing at the ending job F. The jobs need to be chosen in such an order that the precedence relations are not broken. In Fig. 2 we show the solution space for a problem with 4 jobs. This figure contains all the chains that can be formed between these jobs if no precedence constraints are active. With the increase in problem size, the solution space grows exponentially. Even if the precedence constraints reduce the number of valid solutions, enforcing them adds another layer of computational complexity, which illustrates the complexity of the RCPSP. Fig. 3 shows the resource usage for the optimal solution of the problem with the precedence constraints depicted in Fig. 1.
2 Problem Solution
In the following we present how the Ant Colony Optimization algorithm works and the changes we have brought to it. In the results section we also show how these changes affect the overall performance of the algorithm, and we compare it with two other methods.

2.1 ACO (Ant Colony Optimization)
The idea behind the ACO metaheuristic was inspired by the foraging behaviour of ant colonies, which are able to find the shortest path to a source of food by communicating indirectly through a substance called pheromone [7,3]. Ants do not have a systematic way of searching for food; they start by randomly exploring the area near their nest. When an ant finds a source of food, it determines in some way the quantity and the quality of the food, takes a certain amount to carry to the nest, and on its way back deposits on the path a trail of pheromone, a chemical substance. The quantity of pheromone depends on the quantity and quality of the food. The pheromone trail then becomes an indication for other ants of the path to the food. The shorter the path, the more pheromone is
Fig. 3. Optimal solution
deposited [7,13]. This behaviour was adopted by the model of artificial ant colonies for solving combinatorial optimization problems. In the process of building a solution for a problem, artificial ants move concurrently and asynchronously on an appropriately constructed graph [8]. The ACO metaheuristic was introduced by Dorigo et al. [8]. The basic form of the meta-heuristic is:

Foreach Iteration
  Construct Solution
  Update Pheromone
  Daemon Actions (optional)
Endfor

The first CO problem to which ACO was applied was the JSP (Colorni, Dorigo, Maniezzo, and Trubian, 1994). They defined a construction graph made up of the operations to be scheduled, with an additional node representing the empty sequence from which the ants start. The pheromone is associated with the edges of the graph. In later applications the pheromone is associated with the absolute position of an operation in the permutation; hence there is no further need for an additional node. The first application of the ACO metaheuristic to the RCPSP was presented by Merkle et al. [13]. In their approach, the eligibility of an activity to be scheduled next is based on a weighted evaluation of the latest start time (LST) priority rule and on pheromones that represent the learning effect of previous ants. A pheromone value τij describes how promising it seems to put activity j as the i-th activity into the schedule. Their approach includes separate ants for forward and backward scheduling. There is also a second optimization phase based on local search.
2.2 Building the Solution
In this step the ants build the solutions in the form of the order in which the activities are chosen to be assigned to the resources. The steps involved in building a solution are:
Start with S
while not finished
    C = Candidate Nodes
    if C not null
        Choose Next Node
    else
        if C contains F
            finished
        else
            Advance Time
        end if
    end if
end while

The ant agents start from the initial phase and choose the next job from the list of candidate jobs until they reach the final phase. This list contains all the jobs whose predecessors have all finished execution and which have enough resources available to start execution immediately. As we saw in the precedence graph, the final phase F has all the possible terminal jobs connected to it. It appears in the candidate list only when all the other jobs have finished execution. The ants choose the next node from the candidate list based on:
– Heuristic value — ηij
– Pheromone value — τij
The heuristic value ηij encodes problem-specific knowledge. For our problem, in order to minimize the makespan of the project, at each decision point jobs that have a longer critical path towards the final phase have a higher probability of being chosen. This is achieved by assigning such jobs a higher heuristic value. The heuristic value is the difference between the latest finishing time of all activities, denoted LFT_F, and the latest finishing time of the job [13]:

    ηij = LFT_F − LFT_j + 1.    (1)
The value is set at start time for all the jobs. It does not change throughout the execution of the algorithm and is computed by starting from the finishing node and recursively finding the longest path to the current job node. The pheromone value τij is a measure of past experience and shows whether scheduling job j after job i has produced good solutions in the past. The probability of choosing job j to follow job i is:

    pij = ([ηij]^α · [τij]^β) / ( Σ_{h∈D} [ηih]^α · [τih]^β ),    (2)

where D is the set of possible jobs that can be scheduled after job Ji.
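A minimal sketch of how this selection rule could be implemented is given below. The class and field names (eta, tau, alpha, beta) are illustrative assumptions and are not taken from the paper; the method performs standard roulette-wheel selection over the weights defined by Eq. (2).

    import java.util.List;
    import java.util.Random;

    /** Illustrative roulette-wheel selection of the next job according to Eq. (2). */
    class AntSelection {
        double[][] eta;   // heuristic values eta[i][j]
        double[][] tau;   // pheromone values tau[i][j]
        double alpha = 1.0, beta = 1.0;
        Random rnd = new Random();

        /** Chooses the next job j to follow job i among the candidate set D. */
        int chooseNext(int i, List<Integer> candidates) {
            double[] weights = new double[candidates.size()];
            double sum = 0.0;
            for (int k = 0; k < candidates.size(); k++) {
                int j = candidates.get(k);
                weights[k] = Math.pow(eta[i][j], alpha) * Math.pow(tau[i][j], beta);
                sum += weights[k];
            }
            double r = rnd.nextDouble() * sum;   // spin the roulette wheel
            double acc = 0.0;
            for (int k = 0; k < candidates.size(); k++) {
                acc += weights[k];
                if (r <= acc) return candidates.get(k);
            }
            return candidates.get(candidates.size() - 1); // numerical fallback
        }
    }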
2.3 Updating the Pheromone
The pheromone deposit is based on the quality of the solution [13]: the better the solution, the more the pheromone values along it increase. The pheromone deposit formula is:

    τij = τij + ρ / (2 · T),    (3)
where ρ is the pheromone deposit value, set beforehand, and T is the completion time of the solution. An elitist approach is used: after each iteration only the iteration-best and overall-best solutions are reinforced. At the beginning, poor solutions may be chosen and reinforced, which can make the algorithm converge to such a solution and ignore others. To prevent this, the pheromone evaporates at a constant rate during each iteration [13], allowing new and better solutions to be found and older ones to be forgotten. The pheromone evaporation formula is:

    τij = τij · χ,    (4)
where χ is the evaporation rate and takes a value between 0 and 1.
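A compact sketch of the two update steps is shown below. It assumes the successor interpretation of τij used in Eq. (2) (pheromone on the pair "job j scheduled after job i"); the field names and the way a solution is passed in are assumptions for illustration only.

    /** Illustrative pheromone update following Eqs. (3) and (4). */
    class PheromoneUpdate {
        double[][] tau;      // pheromone matrix, tau[i][j]
        double rho;          // deposit value, set beforehand
        double chi;          // evaporation factor in (0, 1), as in Eq. (4)

        /** Reinforces the pairs (i -> j) of a solution with completion time T, Eq. (3). */
        void deposit(int[] jobOrder, double T) {
            for (int k = 0; k + 1 < jobOrder.length; k++) {
                int i = jobOrder[k], j = jobOrder[k + 1];
                tau[i][j] += rho / (2.0 * T);
            }
        }

        /** Constant evaporation of the whole matrix, Eq. (4). */
        void evaporate() {
            for (int i = 0; i < tau.length; i++)
                for (int j = 0; j < tau[i].length; j++)
                    tau[i][j] *= chi;
        }
    }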
2.4 Dynamic Evaporation Approach
We observed that having the pheromone evaporate constantly and at a slow rate introduces unnecessary computations in each iteration and slows down the algorithm. The purpose of evaporation is to forget suboptimal solutions that have gathered enough pheromone to attract other ants unnecessarily, so why not trigger the evaporation only when it is needed? Moreover, a static evaporation rate may not be optimal for every problem instance and could lead to premature convergence to a particular solution. For this purpose, we monitor the pheromone levels along the solutions on which pheromone has been deposited; these are the most likely to hold the highest values in the pheromone matrix. If a pheromone value passes a preset threshold, the evaporation process is triggered. A higher evaporation value is used, since we only want to trigger this step occasionally.
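The sketch below illustrates this threshold-triggered scheme; the threshold, the stronger evaporation factor and all names are illustrative assumptions, not values from the paper.

    /** Illustrative threshold-triggered ("dynamic") evaporation. */
    class DynamicEvaporation {
        double[][] tau;
        double threshold;    // preset trigger level for a single pheromone value
        double strongChi;    // stronger evaporation factor, applied only when triggered

        /** Checks the entries touched by the reinforced solution and evaporates only if needed. */
        void maybeEvaporate(int[] reinforcedOrder) {
            for (int k = 0; k + 1 < reinforcedOrder.length; k++) {
                if (tau[reinforcedOrder[k]][reinforcedOrder[k + 1]] > threshold) {
                    for (double[] row : tau)
                        for (int j = 0; j < row.length; j++)
                            row[j] *= strongChi;   // occasional, stronger evaporation step
                    return;
                }
            }
        }
    }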
2.5 Results
For our simulations we have used the j120 set from the Project Scheduling Problem Library [21]. These benchmarks contain 120 jobs and 4 resource types, with a fixed amount of each of the 4 resource types. Each result represents the average over the first 20 instances of the benchmark, each run 50 times for 10 seconds on a machine with a 2.2 GHz processor. We have set α and β to 1 and χ to 0.01 [14]; ρ was set to 100, and 10 ants per iteration was found to work best.
Fig. 4. Pheromone evaporation approaches
The graph in Fig. 4 shows the difference between the constant evaporation approach and the dynamic one. The first approach uses a 1% evaporation rate, while the second uses a 20% evaporation rate and a 90% threshold. The results show that the dynamic approach obtains slightly better results. We have also compared ACO with two other approaches: a genetic algorithm and beam-search. The genetic algorithm has four steps: selection, reproduction, evaluation, and forming the new population. It starts from an initial population of solutions built using the same heuristic as ACO, but without the pheromone information. In this implementation we let all individuals participate equally in forming the new generation. Pairs are randomly selected, and two children are formed using a two-point crossover operator and a mutation operator that interchanges each job in the solution with another with a 50% probability, as long as the precedence constraints are fulfilled. The evaluation step uses the solution time of each individual directly, and the new generation is formed from both old and new individuals by keeping only the best half. The beam-search algorithm is a variation of the branch-and-bound algorithm. It starts from the first node and constructs all the partial solutions that can be formed by incrementally adding one more job. These partial solutions are then compared through the use of another heuristic and only a fixed number
k are selected to be expanded further. The heuristic η^k_ij for choosing Jj after Ji for the partial solution k is:

    η^k_ij = tk / ηij,    (5)

where tk is the current time that solution k has reached and ηij is the heuristic used in the Ant Colony Optimization approach for choosing Jj after Ji. The Ant Colony Optimization approach clearly performs better than the other algorithms considered.

Fig. 5. Comparison between ACO, genetic, and beam-search algorithms
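A compact sketch of the beam-search pruning step based on heuristic (5) is given below. The beam width, the data layout and the assumption that smaller values of tk/ηij are preferred are all illustrative, not specified in the paper.

    import java.util.Comparator;
    import java.util.List;

    /** Illustrative beam-search step: score partial solutions with Eq. (5) and keep the best k. */
    class BeamStep {
        /** Score of extending partial solution k (elapsed time tK) with the job scored etaIJ. */
        static double score(double tK, double etaIJ) {
            return tK / etaIJ;   // assuming, as the formula suggests, that smaller is better
        }

        /** Each entry holds {score, id of the partial solution}; keep only beamWidth entries. */
        static List<double[]> prune(List<double[]> scoredPartials, int beamWidth) {
            scoredPartials.sort(Comparator.comparingDouble(p -> p[0]));
            return scoredPartials.subList(0, Math.min(beamWidth, scoredPartials.size()));
        }
    }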
3 Conclusion
Metaheuristic algorithms have found wide application in scheduling problems. This paper considered applying the ACO metaheuristic to the Resource-Constrained Project Scheduling Problem. We have presented results of numerical experiments involving pheromone evaporation. Dynamic evaporation brings slightly better results, and overall ACO performs much better than the other methods considered. Further research will focus on testing hybrid approaches, combining the ACO metaheuristic with genetic algorithms.
References
1. Abdelaziz, F.B., Krichen, S., Dridi, O.: A Multiobjective Resource-Constrained Project-Scheduling Problem. In: Mellouli, K. (ed.) ECSQARU 2007. LNCS (LNAI), vol. 4724, pp. 719–730. Springer, Heidelberg (2007)
2. Angus, D.: Ant Colony Optimization: From Biological Inspiration to an Algorithmic Framework. Technical Report TR013, Centre for Intelligent Systems & Complex Processes, Faculty of Information & Communication Technologies, Swinburne University of Technology, Melbourne, Australia (2006)
3. Blum, C.: Theoretical and Practical Aspects of Ant Colony Optimization. Dissertations in Artificial Intelligence, vol. 282. Akademische Verlagsgesellschaft Aka GmbH, Berlin (2004)
4. Brucker, P.: Scheduling Algorithms. Springer, Heidelberg (2001)
5. Cook, W.J., Cunningham, W.H., Pulleyblank, W.R., Schrijver, A.: Combinatorial Optimization, 1st edn. John Wiley & Sons, Chichester (1997)
6. Dorigo, M.: Optimization, learning and natural algorithms. Ph.D. Thesis, Dip. Elettronica e Informazione, Politecnico di Milano, Italy (1992)
7. Dorigo, M., Birattari, M., Stützle, T.: Ant Colony Optimization. Artificial Ants as a Computational Intelligence Technique. IRIDIA Technical Report Series, Technical Report No. TR/IRIDIA/2006-023 (September 2006)
8. Dorigo, M., Stützle, T.: Ant Colony Optimization. MIT Press, Cambridge (2005)
9. Hartman, S.: A Self-Adapting Genetic Algorithm for Project Scheduling under Resource Constraints. Naval Research Logistics 49, 433–448 (2002)
10. Păun, G.: Membrane computing: some non-standard ideas. In: Jonoska, N., Păun, G., Rozenberg, G. (eds.) Aspects of Molecular Computing. LNCS, vol. 2950, pp. 322–337. Springer, Heidelberg (2003)
11. Lenstra, J.K., Rinnooy Kan, A.H.G.: Complexity of Scheduling under Precedence Constraints. Oper. Res. 26, 22–35 (1978)
12. Daniel, M., Middendorf, M., Schmeck, H.: Pheromone Evaluation in Ant Colony Optimization. In: 26th Annual Conf. of the IEEE, vol. 4, pp. 2726–2731 (2000)
13. Merkle, D., Middendorf, M., Schmeck, H.: Ant Colony Optimization for Resource-Constrained Project Scheduling. IEEE Transactions on Evolutionary Computation 6(4), 333–346 (2002)
14. Olteanu, A.-L.: Ant Colony Optimization Meta-Heuristic in Project Scheduling. In: 8th WSEAS International Conference on Artificial Intelligence, Knowledge Engineering and Data Bases, pp. 29–34. World Scientific and Engineering Academy and Society (WSEAS), Stevens Point, Wisconsin, USA (2009)
15. Papadimitriou, C.H., Steiglitz, K.: Combinatorial Optimization: Algorithms and Complexity. Dover Publications (1998) ISBN 0-486-40258-4
16. Păun, G.: Membrane Computing — An Introduction. Natural Computing Series. Springer, Heidelberg (2002)
17. Stützle, T., Dorigo, M.: ACO algorithms for the Traveling Salesman Problem. In: Evolutionary Algorithms in Engineering and Computer Science: Recent Advances in Genetic Algorithms, Evolution Strategies, Evolutionary Programming, Genetic Programming and Industrial Applications, ch. 9, pp. 163–184. Wiley, Chichester (1999)
18. Tavares, L.V., Weglarz, J.: Project Management and Scheduling: A Permanent Challenge for OR. European Journal of Operational Research 49(1), 1–2 (1990)
19. Valente, J.M.S., Alves, R.A.F.S.: Beam-search Algorithms for the Early/Tardy Scheduling Problem with Release Dates. Investigação — Trabalhos em curso 143 (2004)
20. Wall, M.B.: A Genetic Algorithm for Resource-Constrained Scheduling. MIT Press, Cambridge (1996)
21. Project Scheduling Problem Library, http://129.187.106.231/psplib/main.html
High-Performance Heuristics for Optimization in Stochastic Traffic Engineering Problems
Evdokia Nikolova
Massachusetts Institute of Technology, Cambridge, MA, U.S.A.
[email protected]
Abstract. We consider a stochastic routing model in which the goal is to find the optimal route that incorporates a measure of risk. The problem arises in traffic engineering, transportation and even more abstract settings such as task planning (where the time to execute tasks is uncertain), etc. The stochasticity is specified in terms of arbitrary edge length distributions with given mean and variance values in a graph. The objective function is a positive linear combination of the mean and standard deviation of the route. Both the nonconvex objective and exponentially sized feasible set of available routes present a challenging optimization problem for which no efficient algorithms are known. In this paper we evaluate the practical performance of algorithms and heuristic approaches which show very promising results in terms of both running time and solution accuracy.
1 Introduction
Consider the problem one faces every day going to work: find the best route between two points in a complex network of exponentially many routes, filled with uncertainty. In the absence of uncertainty, there are known polynomial-time algorithms, as well as numerous metaheuristics developed to yield very fast and practical running times for shortest path computations. However, the case of uncertainty still leaves unresolved theoretical problems and calls for practical metaheuristic methods. Note that with uncertainty it is not even clear how to define the optimal route: is it the route that minimizes the expected travel time? Or its variance, or some other metric? In this paper we consider the natural definition of an optimal route as the one that minimizes a convex combination of mean and standard deviation. As it turns out, the solution approaches for finding the optimal route under this metric also yield solutions to the related problem of arriving on time (in which the optimal path maximizes the probability that the route travel time does not exceed a given deadline). The stochasticity is defined in terms of given mean and variance values for each edge in the network, under arbitrary distributions. With this definition of an optimal route, traditional methods of solving shortest path problems fail (the problem no longer has the property that a subpath of
an optimal path is optimal and thus one cannot use dynamic programming techniques to solve it). The structure of the problem allows us to reduce the number of local optima so that we do not need to examine all exponentially many routes. However in the worst case, the local optima are still superpolynomially many. In the current paper we evaluate the performance of several algorithms and heuristic approaches. First, for a class of grid graphs, we provide experimental results that the number of local optima is sublinear and thus examining all of them can be done efficiently and yield the exact optimum. Second, for the purpose of practical implementation, it is preferable to examine only a small (constant or logarithmic) number of local optima. To this end, we examine heuristics that pick a small subset of the local optima, and provide bounds and experimental evaluation on the quality of the resulting solution.
2 Problem Statement and Preliminaries
We are given a graph G with n nodes and m edges and are interested in finding a route between a specified source node S and a destination node T. The edge lengths in the graph are stochastic and come from arbitrary independent¹ distributions with given mean μi and variance τi (which can be different for different edges i). Our goal is to find the optimal route that incorporates a measure of risk. As such, we define the optimal route via a natural family of objectives, namely to minimize a convex combination of the route's mean and standard deviation:

    minimize  α Σ_{i∈P} μi + (1 − α) √( Σ_{i∈P} τi )    (1)
    such that P is an ST-path.

The summations above are over the edges i in a given ST-route and the minimization is over all valid ST-routes P. The parameter α ∈ [0, 1] specifies the objective function (convex combination) of choice. It will be helpful to consider a continuous formulation of the above discrete problem, by denoting an ST-route by its corresponding incidence vector x = (x1, ..., xm), where xi = 1 if edge i is present in the route and xi = 0 otherwise. Denote also the vector of means of all edges by μ = (μ1, ..., μm) and the vector of variances by τ = (τ1, ..., τm). The set of all feasible routes is represented by {0,1}-vectors x ∈ R^m, which are a subset of the vertices of the unit hypercube in m dimensions. The convex hull of these feasible vectors is called the path polytope. Thus, the continuous formulation of the routing problem is:

    minimize  α μ^T x + (1 − α) √(τ^T x)    (2)
    such that x ∈ path polytope.
¹ The results here generalize to dependent distributions; however, we focus this presentation on the independent case for clarity and brevity of the mathematical exposition.
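To make objective (1) concrete, the sketch below evaluates it for a fixed ST-path given per-edge means and variances; all names are illustrative and not part of the paper.

    /** Illustrative evaluation of objective (1) for a fixed path. */
    class RouteObjective {
        /** mu[i], tauVar[i]: mean and variance of edge i; pathEdges: edge indices of the route. */
        static double value(double alpha, double[] mu, double[] tauVar, int[] pathEdges) {
            double meanSum = 0.0, varSum = 0.0;
            for (int e : pathEdges) {
                meanSum += mu[e];
                varSum += tauVar[e];   // variances add because edge lengths are independent
            }
            return alpha * meanSum + (1.0 - alpha) * Math.sqrt(varSum);
        }
    }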
This objective function is concave for all values of the parameter α ∈ [0, 1], so it attains its minimum at an extreme point² of the feasible set [2]. This is a key property of the objective which establishes that the optimal solution of the continuous problem (whose feasible set is a superset of the original feasible set) will be a valid path and thus will coincide with the optimal solution of the discrete problem. We emphasize this important observation in the following proposition.
Proposition 1. The optimal solution of the continuous formulation (2) is the same as the optimal solution of the discrete problem (1).
Furthermore, the objective function is monotone increasing in the route mean and variance, thus its optimum also minimizes some convex combination of the mean and variance (as opposed to the mean and standard deviation!).
Proposition 2. The optimal solution of the nonconvex problem (2) minimizes the linear objective βμ^T x + (1 − β)τ^T x for some β ∈ [0, 1].
This second observation is critical for yielding a subexponential exact algorithm for the stochastic routing problem. The exact algorithm enumerates the candidate set of extreme points or paths (which is a small subset of all extreme points) in time linear in the number of such paths, which is at most n^{O(log n)} in the worst case. In this paper, we seek to reduce this superpolynomial complexity.
Related Work. Our work is most closely related to the work of Nikolova et al. [10], which proposes and analyzes the worst-case running time of the exact algorithm for a related objective (in the above notation, to maximize the objective (t − μ^T x)/√(τ^T x), which arises in maximizing one's probability of arriving on time, that is, arriving within a specified time frame t, under normally distributed edge lengths). Our problem is also related to the parametric shortest path problem [3], in which the edge lengths, instead of being stochastic, are deterministic linear functions of a variable λ and the question is to enumerate all shortest paths over all values of the variable λ ∈ [0, ∞). The literature on stochastic route planning and traffic engineering is vast, though most commonly the different models minimize the expected route cost or length (e.g., [12,11,8]). As such, the work in this domain is typically based on stochastic routing formulations of a very different nature and solutions. A sample of work that is closest to the problem we consider is [9,7,5,1]. In particular, Loui [7] considers monotone increasing cost functions; however, the algorithms proposed have exponential worst-case running time. On the other hand, Fan et al. [5] provide heuristics for a different model of adaptive routing, of unknown approximation guarantee. Nikolova et al. [9] prove hardness results for a broad class of objective functions and provide pseudopolynomial algorithms. Lim et al. [6] provide empirical results showing that the independence assumption of edge distributions does not affect the accuracy of the answer by too much.
² An extreme point of a set C is a point that cannot be represented as a convex combination of two other points in the set C.
Fig. 1. Finding and enumerating the candidate solution-paths P1 , P2 , ...
3 Performance of Exact Algorithm
The exact algorithm for solving the problem can be thought of as a search over the values of the parameter β from Proposition 2. For a given β and a path P with incidence vector x, the linear objective βμ^T x + (1 − β)τ^T x can be separated into a sum of edge weights Σ_{i∈P} (βμi + (1 − β)τi) (note that the variance of the path P equals the sum of variances of the edges along the path, by the independence of the edge length distributions). Thus, finding the ST-route that minimizes the linear objective can be done with any deterministic shortest path algorithm such as Dijkstra, Bellman–Ford, etc. [4] with respect to edge weights (βμi + (1 − β)τi). Both the exact and heuristic algorithms we consider will consist of a number of calls to a deterministic shortest path algorithm of the user's choice, for appropriately chosen values β: this makes our approach very flexible, since different implementations of shortest path algorithms are more efficient for different types of networks and one can thus take advantage of the most efficient implementations available. We thus characterize the running time performance in terms of the number of such calls or iterations to an underlying shortest path algorithm. The exact stochastic routing algorithm first sets β = 1 and β = 0 and solves the resulting deterministic shortest path problems (namely, it finds the route with smallest mean and the route with smallest variance). Denote the mean and variance of the resulting routes P1 and P2 by (m1, s1) and (m2, s2) respectively. We next set β so that the slope of the linear objective is the same as the slope of the line connecting the points P1 and P2 (see figure 1). Denote the resulting path, if any, by P3. We continue similarly to find a path between P1 and P3 and between P3 and P2, etc., until no further paths are found.
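A sketch of this enumeration is given below. It assumes a helper shortestPath(beta) that returns the (mean, variance) of the route minimizing β·mean + (1−β)·variance; all names are illustrative, and the recursion simply mirrors the bisection-on-slope description above.

    import java.util.ArrayList;
    import java.util.List;

    /** Illustrative enumeration of extreme-point paths by recursive slope bisection. */
    abstract class ExtremePointEnumerator {
        /** Returns {pathMean, pathVariance} of the path minimizing beta*mean + (1-beta)*variance. */
        abstract double[] shortestPath(double beta);

        List<double[]> enumerate() {
            List<double[]> points = new ArrayList<>();
            double[] p1 = shortestPath(1.0);   // smallest-mean route
            double[] p2 = shortestPath(0.0);   // smallest-variance route
            points.add(p1); points.add(p2);
            refine(p1, p2, points);
            return points;
        }

        private void refine(double[] a, double[] b, List<double[]> points) {
            // slope of the segment between a and b in the (mean, variance) plane; assumes a[0] != b[0]
            double slope = (b[1] - a[1]) / (b[0] - a[0]);
            // beta chosen so that the linear objective has the same slope: -beta/(1-beta) = slope
            double beta = -slope / (1.0 - slope);
            double[] c = shortestPath(beta);
            // stop if no new extreme point strictly between a and b was found
            if ((c[0] == a[0] && c[1] == a[1]) || (c[0] == b[0] && c[1] == b[1])) return;
            points.add(c);
            refine(a, c, points);
            refine(c, b, points);
        }
    }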
Fig. 2. Number of local optima (extreme point paths) in grids of size 10×10 to 250×250, with edge mean length μe ← Uniform[0, 1] and variance τe ← Uniform[0, μe]
If there are k extreme points (paths) minimizing some positive linear objective of the mean and variance, then this algorithm finds all of them with 2k applications of a deterministic shortest path algorithm. In the worst case the number of extreme points k can be n^{1+log n} [10]—that is superpolynomial (albeit subexponential) and too large to yield an efficient algorithm. However, in most networks and mean–variance values of interest, this number seems to be much lower, implying that the exact algorithm may have good performance in practice. We thus investigate the performance of the algorithm in a class of networks that are the standard first test set for indicating performance on realistic traffic networks. We consider grid networks of size ranging from 10 × 10 (100 nodes) to 250 × 250 (62,500 nodes). For each network size 10z × 10z (where z = 1, ..., 25), we run 100 instances of the stochastic routing problem. In a single instance, we generate the mean values uniformly at random from [0, 1], and the variance values uniformly at random from the interval [0, mean] for a corresponding edge with an already generated mean value. (By scaling all edge means if necessary, we can assume without loss of generality that the maximum mean has value 1.) Out of these 100 simulations per network size, we record the minimum and the maximum number of extreme points and plot them against the square root of the network size (i.e., the square root of the number of nodes in the network). The resulting plots are shown in figure 2. To put these empirical results in context: the maximum number of extreme points on a network with 10,000 nodes found in the simulations is k = 45 (meaning the exact algorithm consisted of only 2k = 90 iterations of a deterministic shortest path algorithm to find the optimal stochastic route), as opposed to the predicted worst-case value of 10,000^{1+log 10,000} ≈ 10^57! Similarly, the highest number of extreme points found in graphs of 40,000 and 62,500 nodes is k = 75 and k = 92 respectively, as opposed to the theoretical worst-case values of 40,000^{1+log 40,000} ≈ 10^75 and 62,500^{1+log 62,500} ≈ 10^81.
Fig. 3. Number of local optima in grids with n nodes vs n^0.4 (left) and n^0.6 (right)
In other words, despite the pessimistic theoretical worst-case bound of the exact stochastic routing algorithm, it has a good performance in practice that is orders of magnitude smaller. In figure 2, we have fitted the maximum number of extreme points to a linear function of √n: the extreme points for a graph with n nodes are always strictly less than √n and asymptotically less than 0.4√n. To ensure that √n is the right function to bound the asymptotics, we have also plotted the number of extreme points with respect to n^0.4 and n^0.6 in figure 3. The latter two plots confirm that the rate of growth of the number of extreme points is faster than Θ(n^0.4) and slower than Θ(n^0.6). On the basis of these empirical results, we conclude that a strict upper bound on the number of iterations of the exact algorithm for any grid graph with n nodes is √n. We leave as an intriguing open problem to give a theoretical proof of this performance bound.
4 High-Performance Heuristics
In this section we present a heuristic for finding the optimal stochastic route which dramatically reduces the running time of the exact algorithm above. Instead of enumerating all extreme points, we select a very small subset of them, leading to high practical performance of the algorithm. Remarkably, the big reduction in the number of extreme points does not lead to a big sacrifice in the quality of the resulting solution. Our experiments show that even on the large networks of 40,000 nodes, the heuristic examines only 3 to 6 extreme point paths (compared with the experimental 75 and the theoretical worst-case bound of 40,000^{1+log 40,000} ≈ 10^75 points of the exact algorithm above), and in all our simulations the value of the solution is within a multiplicative factor of 0.0001 = 0.01% of the optimum.
Fig. 4. (left) Plot of all extreme point paths of a 10,000-node network. The optimal path is marked with a circle and labelled 'OPT'. The heuristic algorithm minimizes a small number of linear functions as shown, each yielding one extreme point. It then outputs the extreme point (path) with smallest objective function value. (right) Number of iterations in the heuristic vs the exact algorithm, in 10×10 up to 200×200 grid networks. Recall that the number of iterations to find all extreme points in the exact algorithm is two times the number of extreme points, where each iteration is a call to a deterministic shortest path algorithm of the user's choice.
The heuristic again utilizes Proposition 2, but instead of searching all possible parameter values β that yield a different extreme point (path), it tests an appropriate geometric progression of values and selects the best of the resulting small set of paths. We illustrate the details of the heuristic through figure 4 (left). This figure plots all extreme point paths for a 10,000-node network (with mean and variance values of its edges generated as explained in the previous section). Each point in the figure corresponds to one path in the network, with mean equal to the x-coordinate of the point and variance equal to the y-coordinate of the point. In this mean–variance plot, the optimum happens to be a path with mean 49.83 and variance 21.77 (marked by a circle in the figure). By Proposition 2, and as depicted in the figure, the optimal path minimizes the linear objective βμ^T x + (1 − β)τ^T x (this corresponds to a line with slope b = −β/(1 − β)) for some range of parameter values β, or equivalently for some range of slopes. If we could guess the optimal slope, then we would find the optimal route with a single iteration of a deterministic shortest path algorithm with respect to edge weights βμi + (1 − β)τi. Instead, we test a geometric progression of slopes with a multiplicative step a. The smaller the step a, the more linear objectives we end up testing, which increases the accuracy but also increases the running time of the algorithm. In our simulations, we experiment with different multiplicative steps. It turns out that using a multiplicative step of 1.01 results in very high accuracy of 99.99%, and also very few extreme point tests and iterations (up to 6 for all graphs with 2,500 to 40,000 nodes). We compare the running time of the heuristic with that of the exact algorithm in figure 4 (right). This plot gives the highest number of deterministic shortest
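A sketch of this slope-sweep heuristic is shown below, reusing the hypothetical shortestPath(beta) helper from the earlier sketch; the slope range and the multiplicative step are illustrative parameters.

    /** Illustrative heuristic: test a geometric progression of slopes and keep the best path. */
    abstract class SlopeSweepHeuristic {
        abstract double[] shortestPath(double beta);   // returns {pathMean, pathVariance}

        double[] bestPath(double alpha, double minSlope, double maxSlope, double step) {
            double[] best = null;
            double bestValue = Double.POSITIVE_INFINITY;
            for (double s = minSlope; s <= maxSlope; s *= step) {   // e.g. step = 1.01
                double beta = s / (1.0 + s);   // slope magnitude s corresponds to beta/(1-beta) = s
                double[] p = shortestPath(beta);
                double value = alpha * p[0] + (1.0 - alpha) * Math.sqrt(p[1]);  // objective (2)
                if (value < bestValue) { bestValue = value; best = p; }
            }
            return best;
        }
    }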
path iterations that each algorithm has run over 100 simulations per network size. Recall again that the exact algorithm needs to run two times as many iterations as the total number of extreme points. The plot shows that the high performance and accuracy of the heuristic make it a very practical and promising approach for the stochastic routing problem.
5 Conclusion
We investigated the practical performance of exact and heuristic algorithms for the stochastic routing problem in which the goal is to find the route minimizing a positive linear combination of the route mean and standard deviation. The latter is a nonconvex integer optimization problem for which no efficient algorithms are known. Our experimental results showed that the exact algorithm, which is based on enumerating all paths that are potential local optima (extreme points of the feasible set), has surprisingly good running time performance O(√n)·R on networks of practical interest, compared with its predicted theoretical worst-case performance n^{O(log n)}·R, where R is the running time of any deterministic shortest path algorithm of the user's choice. We also showed that a heuristic that appropriately selects a small subset of the potentially optimal paths has very high performance, using a small constant number of deterministic shortest path iterations and returning a solution that has 99.99% accuracy. Heuristics of this type are thus a very promising practical approach.
References
1. Ackermann, H., Newman, A., Röglin, H., Vöcking, B.: Decision making based on approximate and smoothed pareto curves. Theor. Comput. Sci. 378(3), 253–270 (2007)
2. Bertsekas, D., Nedić, A., Ozdaglar, A.: Convex Analysis and Optimization. Athena Scientific, Belmont (2003)
3. Carstensen, P.: The complexity of some problems in parametric linear and combinatorial programming. Ph.D. Dissertation, Mathematics Dept., Univ. of Michigan, Ann Arbor, Mich. (1983)
4. Cormen, T.H., Leiserson, C.E., Rivest, R.L., Stein, C.: Introduction to Algorithms, 2nd edn. The MIT Press, Cambridge (2001)
5. Fan, Y., Kalaba, R., Moore II, J.E.: Arriving on time. Journal of Optimization Theory and Applications 127(3), 497–513 (2005)
6. Lim, S., Balakrishnan, H., Gifford, D., Madden, S., Rus, D.: Stochastic motion planning and applications to traffic. In: Proceedings of the Eighth International Workshop on the Algorithmic Foundations of Robotics (WAFR), Guanajuato, Mexico (December 2008) (accepted)
7. Loui, R.P.: Optimal paths in graphs with stochastic or multidimensional weights. Communications of the ACM 26, 670–676 (1983)
8. Miller-Hooks, E.D., Mahmassani, H.S.: Least expected time paths in stochastic, time-varying transportation networks. Transportation Science 34, 198–215 (2000)
9. Nikolova, E., Brand, M., Karger, D.R.: Optimal route planning under uncertainty. In: Long, D., et al. (eds.) Proceedings of the Sixteenth International Conference on Automated Planning and Scheduling, pp. 131–141. AAAI, Menlo Park (2006)
10. Nikolova, E., Kelner, J.A., Brand, M., Mitzenmacher, M.: Stochastic shortest paths via quasi-convex maximization. In: Azar, Y., Erlebach, T. (eds.) ESA 2006. LNCS, vol. 4168, pp. 552–563. Springer, Heidelberg (2006)
11. Papadimitriou, C., Yannakakis, M.: Shortest paths without a map. Theoretical Computer Science 84, 127–150 (1991)
12. Polychronopoulos, G., Tsitsiklis, J.: Stochastic shortest path problems with recourse. Networks 27(2), 133–143 (1996)
Optimization of Complex SVM Kernels Using a Hybrid Algorithm Based on Wasp Behaviour
Dana Simian, Florin Stoica, and Corina Simian
University "Lucian Blaga" of Sibiu, Faculty of Sciences, 5-7 dr. I. Rațiu str., 550012 Sibiu, România
Abstract. The aim of this paper is to present a new method for the optimization of SVM multiple kernels. The kernel substitution can be used to define many other types of learning machines distinct from SVMs. We introduce a new hybrid method which uses, at the first level, an evolutionary algorithm based on wasp behaviour and on the co-mutation operator LR − Mijn and, at the second level, an SVM algorithm which computes the quality of the chromosomes. The most important details of our algorithms are presented. Testing and validation prove that the multiple kernels obtained using our genetic approach improve the classification accuracy up to 94.12% for the "leukemia" data set.
1 Introduction
The classification task is to assign an object to one or several classes, based on a set of attributes. A classification task supposes the existence of training and testing data given in the form of data instances. Each instance in the training set contains one target value, named the class label, and several attributes, named features. The accuracy of the model for a specific test set is defined as the percentage of test set items that are correctly classified by the model. If the accuracy is acceptable, the model can be used to classify items for which the class label is unknown. Two types of approaches for classification can be distinguished: classical statistical approaches (discriminant analysis, generalized linear models) and modern statistical machine learning (neural networks, evolutionary algorithms, support vector machines — SVM, belief networks, classification trees, Gaussian processes). In recent years, SVMs have become a very popular tool for machine learning tasks and have been successfully applied in classification, regression, and novelty detection. Many applications of SVM have been developed in various fields: particle identification, face identification, text categorization, bioinformatics, database marketing, classification of clinical data. The goal of SVM is to produce a model which predicts the target value of the data instances in the testing set, for which only the attributes are given. Training involves optimization of a convex cost function. If the data set is separable we obtain an optimal separating hyperplane with a maximal margin (see Vapnik [12]). In the case of non-separable data, a successful method is the kernel method. Using an appropriate kernel, the data are projected into a higher-dimensional space in which they are separable by a hyperplane [2,12].
Standard SVM classifiers use a single kernel. Many recent studies have shown that multiple kernels work better than single ones. It is very important to build multiple kernels adapted to the input data. For the optimization of the weights, two different approaches are used. One possibility is to use a linear combination of simple kernels and to optimize the weights [1,3,7]. Others use evolutionary methods [8] for optimizing the weights. Complex nonlinear multiple kernels were proposed in [3,4,5,6,10,11], where hybrid approaches using a genetic algorithm and an SVM algorithm are employed. The aim of this paper is to introduce a hybrid method for kernel optimization based on a genetic algorithm which uses an improved co-mutation operator driven by a modified wasp behaviour computational model. The paper is organized as follows: section 2 briefly presents the kernel substitution method. In sections 3 and 4 we characterize the co-mutation operator LR − Mijn and the wasp behaviour computational method. Section 5 contains our main results: the construction and evaluation of our method. Conclusions and further directions of study are presented in section 6.
2 Kernel Substitution Method
The SVM algorithm can solve binary or multiclass classification problems. We focus in this paper only on binary classification, due to the numerous methods for generalizing a binary classifier to an n-class classifier [12]. Let the data points xi ∈ R^d, i = 1, . . . , m, and their labels yi ∈ {−1, 1} be given. We are looking for a decision function f which associates to each input data x its correct label y = f(x). For data sets which are not linearly separable we use the kernel method, which projects the input data X into a feature Hilbert space F, φ : X → F; x → φ(x). The kernel represents the inner product of data objects implicitly mapped into a higher dimensional Hilbert space of features. Kernel functions can be interpreted as representing the inner product of the nonlinear feature space, K(xi, xj) = ⟨φ(xi), φ(xj)⟩. The functional form of the mapping φ(xi) does not need to be known. The basic properties of a kernel function are derived from Mercer's theorem. Different simple kernels are often used: linear, polynomial, RBF, sigmoidal. These kernels are defined in subsection 5.1. Multiple kernels can be designed using the set of operations (+, ∗, exp), which preserves Mercer's conditions.
3 The Co-mutation LR − Mijn Operator
We introduced in [11] a new co-mutation operator, named LR − Mijn, which makes long jumps by finding the longest sequence of σp elements situated to the
left or to the right of position p. Let us denote by σ a generic string, σ = σ_{l−1} . . . σ_0, where σ_q ∈ A, ∀q ∈ {0, . . . , l − 1}, with A = {a_1, . . . , a_s} a generic alphabet. The set of all sequences of length l over the alphabet A is denoted by Σ = A^l. By σ(q, i) we denote that on position q within the sequence σ there is the symbol a_i of the alphabet A. σ^z_{p,j} denotes the presence of z symbols a_j within the sequence σ, starting from position p and going left, and σ^{right,n}_{left,m}(p, i) specifies the presence of symbol a_i on position p within the sequence σ, between right symbols a_n on the right and left symbols a_m on the left. We suppose that σ = σ(l − 1) . . . σ(p + left + 1, m) σ^{right,n}_{left,m}(p, i) σ(p − right − 1, n) . . . σ(0). The expression of LR − Mijn depends on the position of p. There are four different cases: p = right and p = l − left − 1; p = right and p ≠ l − left − 1; p ≠ right and p = l − left − 1; p ≠ right and p ≠ l − left − 1. The expression of LR − Mijn in all these cases can be found in [11]. As an example, let us consider the binary case, the string σ = 11110000 and the randomly chosen application point p = 2. In this case σ_2 = 0, so we have to find the longest sequence of 0s within the string σ, starting from position p. This sequence goes to the right and, because we have reached the end of the string and no occurrence of 1 has been met, the new string obtained after the application of LR − Mijn is 11110111. The most interesting point of the co-mutation operator presented above is that it allows long jumps, which are not possible in the classical Bit-Flip Mutation (BFM); thus the search can reach points very far from where it currently is. Our co-mutation operator has the same local search capabilities as ordinary mutation, and it also has the possibility of performing jumps to reach far regions of the search space which cannot be reached by BFM. This leads to a better convergence of an evolutionary algorithm based on LR − Mijn in comparison with classical mutation operators. This statement was verified through experimental results in [11], where a simple evolutionary algorithm based only on selection and LR − Mijn is used for the optimization of some well-known benchmark functions (proposed by Rastrigin, Schwefel and Griewangk).
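The sketch below reproduces only the boundary case illustrated by the example above (the run of σp's value reaches an end of the string and is flipped); the full operator distinguishes the four cases listed above. Method and variable names are illustrative, not taken from [11].

    /** Illustrative sketch of the LR-Mijn boundary case on binary strings (positions counted from the right). */
    class CoMutationSketch {
        static String mutateAtBoundary(String sigma, int p) {
            char[] s = sigma.toCharArray();
            int idx = s.length - 1 - p;          // array index of position p
            char v = s[idx], flipped = (v == '0') ? '1' : '0';
            // scan to the right of position p (towards position 0) for the run of v
            int end = idx;
            while (end < s.length && s[end] == v) end++;
            if (end == s.length) {               // boundary reached without meeting the other symbol
                for (int k = idx; k < s.length; k++) s[k] = flipped;   // long jump: flip the whole run
            }
            return new String(s);
        }

        public static void main(String[] args) {
            System.out.println(mutateAtBoundary("11110000", 2));   // prints 11110111
        }
    }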
4 Wasp Behaviour Computational Models
In a real colony of wasps, an individual wasp interacts with its local environment in the form of a stimulus–response mechanism, which governs distributed task allocation. An individual wasp has a response threshold for each zone of the nest. Based on a wasp's threshold for a given zone and the amount of stimulus from brood located in this zone, a wasp may or may not become engaged in the task of foraging for this zone. A hierarchical social order among the wasps of the colony is formed through interactions among individual wasps of the colony. When two individuals of the colony encounter each other, they may with some probability interact in a dominance contest. The wasp with the higher social rank will have a higher probability of dominating the interaction. Computational analogies of these systems have served as inspiration for wasp behaviour computational
models. An algorithm based on wasp behaviour is essentially an agent-based system that simulates the natural behaviour of insects. An artificial wasp probabilistically decides whether or not it bids for a task; the probability depends on the threshold level and the stimulus. A hierarchical order among the artificial wasps is established using a probability function. Models using wasp agents are used for solving large, complex problems with a dynamic character, and for distributed coordination of resources and manufacturing control tasks in a factory. The elements which particularize an algorithm based on wasp behaviour are the way in which the response thresholds are updated and the way in which the "conflicts" between two or more wasps are solved.
5 Main Results: The Model for Constructing Complex SVM Kernels
5.1 Basic Construction
Our goal is to build and analyze a multiple kernel starting from the following simple kernels:
– Polynomial: K^{d,r}_{pol}(x1, x2) = (x1 · x2 + r)^d, r, d ∈ Z+
– RBF: K^γ_{RBF}(x1, x2) = exp(−|x1 − x2|² / (2γ²))
– Sigmoidal: K^γ_{sig}(x1, x2) = tanh(γ · x1 · x2 + 1)
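The three simple kernels can be written directly as functions of the dot product and squared distance of the two vectors; the sketch below is illustrative and is not taken from the paper's implementation.

    /** Illustrative implementations of the simple kernels used as building blocks. */
    class SimpleKernels {
        static double dot(double[] a, double[] b) {
            double s = 0.0;
            for (int i = 0; i < a.length; i++) s += a[i] * b[i];
            return s;
        }

        static double polynomial(double[] x1, double[] x2, int d, double r) {
            return Math.pow(dot(x1, x2) + r, d);
        }

        static double rbf(double[] x1, double[] x2, double gamma) {
            double sq = dot(x1, x1) - 2.0 * dot(x1, x2) + dot(x2, x2);   // |x1 - x2|^2
            return Math.exp(-sq / (2.0 * gamma * gamma));
        }

        static double sigmoid(double[] x1, double[] x2, double gamma) {
            return Math.tanh(gamma * dot(x1, x2) + 1.0);
        }
    }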
We use the idea of the model proposed in [5]. The hybrid techniques are structured in two levels: a macro level and a micro level. The macro level is represented by a genetic algorithm which builds the multiple kernel. The micro level is represented by the SVM algorithm which computes the quality of chromosomes. The accuracy rate is computed by the SVM algorithm on a validation set of data. In a first level, we will build and evaluate multiple kernels using the set of operations opi ∈ {+, ∗, exp}, i = 1, . . . , 3 and a genetic algorithm based on a modified LR − Mijn operator using a wasp-based computational scheme. In the fig. 1 the multiple kernel (K1 op2 K2 )op1 (K3 op3 K4 ) is represented: If a node contains the operation exp only one of its descendants is considered (the “left” kernel). We will consider in our construction at most 4 simple kernels. Every chromosome codes the expression of a multiple kernel. The chromosome which codes the multiple kernel described above has the structure given in fig. 2, where opi ∈ {+, ∗, exp}, i ∈ {1, 2, 3}, ti ∈ {P OL, RBF, SIG} represents the
Fig. 1. General representation of multiple kernel
Fig. 2. Chromosome’s structure
type of the kernel, and di, ri, γi are parameters from the definition of the kernel Ki, i ∈ {1, . . . , 4}. Each operation opi is represented using two genes, the type of kernel ti is also represented using two genes, a degree dj is allocated 4 genes, and the parameter ri is represented using 12 genes. If the associated kernel is not polynomial, these 16 genes are used to represent a real value of the parameter γi in place of di and ri. Thus, our chromosome is composed of 78 genes.
5.2 Optimization of the LR − Mijn Operator Using a Wasp-Based Computational Scheme
In this subsection we present our approach to the optimization of the LR − Mijn operator using a scheme based on a computational model of wasp behaviour. Our intention was to allow the operations in the multiple kernel's structure to change more often. The representation length of one operation inside the chromosome is 2 genes, significantly less than the representation length of the multiple kernel's parameters (4 and 12 genes). Therefore, in the optimization process, the probability of changing the multiple kernel's parameters is usually larger than the probability of changing the operations of the multiple kernel. Each chromosome C has an associated wasp. Each wasp has a response threshold θC. The set of operations coded within the chromosome broadcasts a stimulus SC which is equal to the difference between the maximum classification accuracy (100) and the actual classification accuracy obtained using the multiple kernel coded in the chromosome:

    SC = 100 − CAC.    (1)

The modified LR − Mijn operator will perform a mutation that changes the operations coded within the chromosome with probability

    P(θC, SC) = SC² / (SC² + θC²).    (2)

The threshold values θC may vary in the range [θmin, θmax]. When the population of chromosomes is evaluated, the threshold θC is updated as follows:

    θC = θC − δ, δ > 0,    (3)

if the classification accuracy of the new chromosome C is lower than in the previous step, and

    θC = θC + δ, δ > 0,    (4)

if the classification accuracy of the new chromosome C is greater than in the previous step.
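A compact sketch of this wasp scheme is given below; the bounds θmin, θmax, the step δ and all field names are illustrative assumptions.

    /** Illustrative wasp-based trigger for mutating the operation genes, following Eqs. (1)-(4). */
    class WaspScheme {
        double theta;                 // response threshold of the chromosome's wasp
        double thetaMin = 1.0, thetaMax = 100.0, delta = 1.0;   // assumed bounds and step

        /** Probability of changing the operations, Eq. (2), with stimulus S = 100 - accuracy, Eq. (1). */
        double mutationProbability(double classificationAccuracy) {
            double s = 100.0 - classificationAccuracy;
            return (s * s) / (s * s + theta * theta);
        }

        /** Threshold update after evaluation, Eqs. (3) and (4). */
        void updateThreshold(double newAccuracy, double previousAccuracy) {
            if (newAccuracy < previousAccuracy) theta -= delta; else theta += delta;
            theta = Math.max(thetaMin, Math.min(thetaMax, theta));   // keep theta in [thetaMin, thetaMax]
        }
    }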
5.3 SVM Algorithm
The evaluation of a chromosome is made using the SVM algorithm for a particular set of data. To do this we divide the data into two subsets: the training subset, used for problem modelling, and the test subset, used for evaluation. The training subset is further randomly divided into a subset for learning and a subset for validation. The SVM algorithm uses the data from the learning subset for training and the data from the validation subset for computing the classification accuracy, which is used as the fitness function of the genetic algorithm. For the implementation, testing and validation of our method we used the "leukemia" data set from the LIBSVM data sets page [2]. In order to replace the default polynomial kernel from libsvm, we extend the svm_parameter class with the following attributes:

    // our kernel is "hybrid"
    public static final int HYBRID = 5;
    // parameters for multiple polynomial kernels
    public long op[];
    public long type[];
    public long d[];
    public long r[];
    public double g[];
The class svm_predict was extended with the method predict(long op[], long type[], long d[], long r[], double g[]). The Kernel class was modified to accomplish the kernel substitution: in the k_function method we compute our simple kernels, and the kernels are then combined using the operations given in the array param.op[]. In the genetic algorithm, the operations, the types of the simple kernels and all other parameters are obtained from a chromosome, which is then evaluated using the result of the predict method presented above. After the end of the genetic algorithm, the best chromosome gives the multiple kernel, which can be evaluated on the test subset of data. The way this multiple kernel is constructed ensures that it is a genuine kernel, that is, it satisfies Mercer's conditions.
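The following sketch shows one way the four simple kernel values might be combined according to the tree of Fig. 1; the operation encoding (0 = +, 1 = ∗, 2 = exp) and the method names are assumptions for illustration only, not the paper's actual libsvm modification.

    /** Illustrative combination (K1 op2 K2) op1 (K3 op3 K4); for exp only the left operand is used. */
    class MultipleKernelSketch {
        static double apply(long op, double left, double right) {
            if (op == 0) return left + right;        // assumed encoding: 0 = +
            if (op == 1) return left * right;        // 1 = *
            return Math.exp(left);                   // 2 = exp, right operand ignored
        }

        static double combine(long[] op, double k1, double k2, double k3, double k4) {
            double leftSubtree = apply(op[1], k1, k2);     // K1 op2 K2
            double rightSubtree = apply(op[2], k3, k4);    // K3 op3 K4
            return apply(op[0], leftSubtree, rightSubtree);
        }
    }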
5.4 Experimental Results
Using the standard libsvm package, the following classification accuracy is obtained for the "leukemia" data set:

    java -classpath libsvm.jar svm_predict leu.t leu.model leu.output
    Accuracy = 67.64705882352942% (23/34) (classification)

Multiple kernels obtained using the genetic approach improve the classification accuracy up to 94.12%. Fig. 3 presents the results from three runs of our genetic algorithm based on the modified LR − Mijn operator. For each execution, the population size was 35 and the number of generations was 30.
Fig. 3. Classification accuracy using multiple kernels
Fig. 4. Optimal kernels
One “optimal” multiple kernel obtained is depicted in fig. 4 (left), where γ = 1.97, d1 = 3, r1 = 609, d2 = 2, r2 = 3970, d3 = 1, r3 = 3615. Another “optimal” multiple kernel obtained is depicted in fig. 4 (right), where γ = 0.5, d1 = 3, r1 = 633, d2 = 2, r2 = 3970, d3 = 1, r3 = 4095.
6 Conclusions and Further Directions of Study
In this paper we presented a hybrid approach for the optimization of SVM multiple kernels. The idea of using hybrid techniques for optimizing multiple kernels is not new, but it is very recent, and the way in which we designed the first level of the method is original. Modifying the co-mutation operator LR − Mijn using a wasp behaviour model increases the probability of choosing new operations in the structure of the multiple kernels. We observed that the parameter r of the simple polynomial kernels which appear in the optimal multiple kernel is large. After many executions we also observed that better accuracy and convergence are obtained if we impose an upper limit on the parameter γ of the sigmoidal/RBF simple kernels. Experimental results show that the genetic algorithm based on the modified LR − Mijn co-mutation operator has better convergence and improves the accuracy compared with the classical genetic algorithm used in [2]. Further numerical experiments are required in order to assess the power of our evolved kernels.
References
1. Bach, F.R., Lanckriet, G.R.G., Jordan, M.I.: Multiple kernel learning, conic duality, and the SMO algorithm. In: Proceedings of ICML 2004. ACM International Conference Proceeding Series, vol. 69. ACM, New York (2004)
2. Chang, C.-C., Lin, C.-J.: LIBSVM: a library for support vector machines (2001), http://www.csie.ntu.edu.tw/~cjlin/libsvm
3. Diosan, L., Oltean, M., Rogozan, A., Pecuchet, J.P.: Improving SVM Performance Using a Linear Combination of Kernels. In: Beliczynski, B., Dzielinski, A., Iwanowski, M., Ribeiro, B. (eds.) ICANNGA 2007. LNCS, vol. 4432, pp. 218–227. Springer, Heidelberg (2007)
4. Diosan, L., Rogozan, A., Pecuchet, J.P.: Une approche evolutive pour generer des noyaux multiples (An evolutionary approach for generating multiple kernels). Portal VODEL (2008), http://vodel.insarouen.fr/publications/rfia
5. Diosan, L., Oltean, M., Rogozan, A., Pecuchet, J.P.: Genetically Designed Multiple Kernels for Improving the SVM Performance. Portal VODEL (2008), http://vodel.insa-rouen.fr/publications/rfia
6. Nguyen, H.N., Ohn, S.Y., Choi, W.J.: Combined kernel function for support vector machine and learning method based on evolutionary algorithm. In: Pal, N.R., Kasabov, N., Mudi, R.K., Pal, S., Parui, S.K. (eds.) ICONIP 2004. LNCS, vol. 3316, pp. 1273–1278. Springer, Heidelberg (2004)
7. Sonnenburg, S., Ratsch, G., Schafer, C., Scholkopf, B.: Large scale multiple kernel learning. Journal of Machine Learning Research 7, 1531–1565 (2006)
8. Stahlbock, R., Lessmann, S., Crone, S.: Genetically constructed kernels for support vector machines. In: Proc. of German Operations Research, pp. 257–262. Springer, Heidelberg (2005)
9. Simian, D.: A Model For a Complex Polynomial SVM Kernel. In: Proceedings of the 8th WSEAS Int. Conf. on Simulation, Modelling and Optimization, Santander, Spain. Within Mathematics and Computers in Science and Engineering, pp. 164–170 (2008)
10. Simian, D., Stoica, F.: An evolutionary method for constructing complex SVM kernels. In: Proceedings of the 10th International Conference on Mathematics and Computers in Biology and Chemistry (MCBC 2009), Prague, Czech Republic, pp. 172–178. WSEAS Press (2009)
11. Stoica, F., Simian, D., Simian, C.: A new co-mutation genetic operator. In: Advanced Topics on Evolutionary Computing, Proceedings of the 9th Conference on Evolutionary Computing, Sofia, pp. 76–82 (2008)
12. Vapnik, V.: The Nature of Statistical Learning Theory. Springer, Heidelberg (1995), http://www.csie.ntu.edu.tw/~cjlin/libsvmtools/datasets
Enabling Cutting-Edge Semiconductor Simulation through Grid Technology
Asen Asenov, Dave Reid, Campbell Millar, Scott Roy, Gareth Roy, Richard Sinnott, Gordon Stewart, and Graeme Stewart
Device Modelling Group, University of Glasgow; National e-Science Centre, University of Glasgow; Department of Physics & Astronomy, University of Glasgow
[email protected]
Abstract. The progressive CMOS scaling drives the success of the global semiconductor industry. Detailed knowledge of transistor behaviour is necessary to overcome the many fundamental challenges faced by chip and systems designers. Grid technology has enabled the constantly increasing statistical variability introduced by discreteness of charge and matter to be examined in unprecedented detail. Over 200,000 transistors subject to random discrete dopants variability have been simulated, the results of which provide detailed insight into underlying physical processes. This paper outlines recent scientific results of the nanoCMOS project, and describes the way in which the scientific goals have been reflected in the grid-based e-infrastructure. Keywords: nanoCMOS electronics, virtual organization, security, variability.
1 Introduction to NanoCMOS Challenges
The progressive scaling of Complementary Metal Oxide Semiconductor (CMOS) transistors drives the success of the global semiconductor industry. This is at the heart of the famous Moore's Law [5]. As device dimensions approach the nanometre scale, however, chip and systems designers must overcome problems associated with the increasing statistical variability in nano-scale CMOS transistors due to the discreteness of charge and matter. The EPSRC funded project Meeting the Design Challenges of nanoCMOS Electronics (nanoCMOS) aims to explore and find solutions to the problems caused by the increasing statistical variability and its impact on the system-on-a-chip (SoC) design process. The EPSRC funded nanoCMOS e-Science pilot project [25] develops e-Science technology to support the computationally intensive simulations needed to gain a deeper understanding of the various sources of statistical variability and their impact on circuit and system design. These technologies also target the management and manipulation of the large amounts of resultant simulation data. The simulation of statistical variability requires 3D simulation of ensembles of devices to be performed on a statistical scale [16]. The increasing number
of transistors in modern chips also necessitates the simulation of very large statistical samples to allow the study of statistically rare devices with potentially detrimental effects on circuit performance. Previously, the computational complexity of 3D device simulation restricted studies of variability to small ensembles of approximately 200 devices [24,13]. However, this results in inaccurate predictions of the statistical distribution of transistor parameters [21]. This motivates the deployment of grid technologies for performing the simulations of large statistical samples and for data mining of the large amount of output data generated. In the current 45 nm technology generation [5], the main source of threshold voltage fluctuation in bulk MOSFETs is random discrete dopants (RDD). The billions of transistors in contemporary chips mean that statistically rare devices (beyond 5–6 σ from the mean) are beginning to have a significant effect on designs and yield. In this paper, we present groundbreaking results in which ensembles in excess of 100,000 MOSFETs with 35 nm [19] and 13 nm gate lengths have been simulated in 3D using a grid-based methodology. In total, these simulations required approximately 20 CPU years on a 2.4 GHz AMD Opteron system.
2 NanoCMOS E-Infrastructure and Simulation Methodology
The nanoCMOS project has adopted a hierarchical simulation methodology, which is shown in Fig. 1. In this paper we are primarily concerned with the bottom level of this food chain, where individual devices are simulated 'atomistically'. Results from this level will be passed up to higher levels of abstraction in the design flow in the form of transistor compact models, which will be used in circuit level SPICE-like (Berkeley SPICE [1]) simulations. To support these simulations, the project employs a variety of grid middleware and services, including technologies provided by the Open Middleware Infrastructure Institute (OMII-UK — http://www.omii.ac.uk) and the Enabling Grids for E-Science (EGEE — http://public.eu-egee.org) project. The early phase of work focused upon the development of a family of OMII-UK services which supported the device modelling and compact model generation phases of electronics design. These services were developed to exploit the OMII-UK GridSAM job submission system [4]. The aim of GridSAM is to provide a web service for submitting and monitoring jobs managed by a variety of Distributed Resource Managers (DRMs). This web service interface allows jobs to be submitted from a client as a Job Submission Description Language (JSDL) [11] document and supports retrieval of their status as a chronological list of events detailing the state of the job. GridSAM translates the submission instruction into a set of resource-specific actions — file staging, launching and monitoring — using DRM connectors for each stage. Proof-of-concept nanoCMOS job submission solutions were implemented with GridSAM that showed how access to resources such as the NGS,
Fig. 1. Hierarchical simulation methodology adopted by the nanoCMOS project
Sun Grid Engine clusters and Condor pools at Glasgow could be supported. The main computational resource available to nanoCMOS is the ScotGrid cluster [7] at the University of Glasgow. The ScotGrid cluster supports the EGEE middleware stack, including its own particular flavour of job submission and management software, all based around the gLite middleware. One of the frontend software environments used for EGEE job submission is Ganga [2]. Ganga supports bulk job submission, where the maximum number of concurrent jobs per user is limited to 1000, but since multiple devices can be simulated in a single job this was sufficient for our methodology. The nanoCMOS researchers have become the primary end users of the ScotGrid cluster with a combined CPU usage of 23% of the total resource. Thus far, the simulations undertaken in nanoCMOS have resulted in over 175,000 CPU hours of device simulations run on ScotGrid. More details on the implementation of the simulation framework for nanoCMOS are available in [17]. The project has explored a variety of data management and security solutions. Given that the vast majority of the simulation data is file-based, a performance and security evaluation of the Storage Resource Broker [8] and the Andrew File System (AFS) [15] was undertaken. The results of this are described in [27] along with our justification for the adoption of AFS. A variety of metadata associated with simulations is captured and made available to targeted metadata services, which are in turn coupled with the predominantly file-based AFS data. A range of query interfaces to these metadata systems are also under production. Key to both the data management and simulation frameworks of nanoCMOS is support for fine-grained security. We have identified that no single security solution fulfils the needs of nanoCMOS research. Instead, it has been necessary
372
A. Asenov et al.
Fig. 2. I_D–V_G characteristics of the 35 nm Toshiba device as produced by the Glasgow simulator at a low drain voltage of 50 mV and a high drain voltage of 850 mV, calibrated against results obtained from both TCAD process simulation and experiment. The continuous doping profile is shown inset.
to integrate a range of security solutions, including Kerberos (for AFS) [20], the Virtual Organisation Membership Service (VOMS) [9], MyProxy [12], PERMIS [14], GSI [3], and Shibboleth [6]. The justification for integrating these technologies and the way in which they have been integrated is described in more detail in [28,29,26].
3 NanoCMOS Scientific Results and Discussion
The actual device simulations were carried out with the Glasgow 3D 'atomistic' drift/diffusion (DD) simulator [24]. The DD approach accurately models transistor characteristics in the sub-threshold regime, making it well suited for the study of VT fluctuations. The simulator includes Density Gradient quantum corrections [10], which accurately capture quantum confinement effects and are essential for preventing artificial charge trapping in the sharply resolved Coulomb potential of discrete impurities. Each device is also fully independent, allowing the problem to be easily parallelized using a task-farming approach. The 35 nm gate length transistor used in the simulation studies was published by Toshiba [19] and the simulator was calibrated to the experimental characteristics of this device. Structural data for the device was obtained through commercial TCAD process simulation. The doping profile structure and characteristics of the Toshiba device are shown in Fig. 2. Having performed 100,000 3D simulations of the 35 nm MOSFET we were able to investigate its statistics with a high degree of confidence. The distribution of random dopant induced threshold voltage fluctuations in the simulated ensemble can be seen in Fig. 3. The fluctuations arise from the fact that each
Fig. 3. (a) The distribution of simulated VT data compared to Gaussian and Pearson IV distributions. (b) Tails of the VT distribution. The inaccuracy of the Gaussian fit in the tails of the distribution can clearly be seen.
microscopically different device will have both a different number of dopant atoms and a microscopically different configuration of those dopants. It is important to model accurately the tails of the distribution (at 6 σ and greater) so that the impact of statistically rare devices on circuit performance may be properly assessed. This is currently only possible through a 'brute force' approach, necessitating very large numbers of simulations. In order to assess the accuracy of the simulation ensemble we have calculated the χ² errors of the statistical moments of the threshold voltage distribution as a function of the ensemble size. These are shown in Fig. 4, where it can be seen that enabling larger scale simulations using grid technology has greatly improved the accuracy of our description of random dopant fluctuations. From Fig. 3 it is clear that the RDD-induced statistical distribution is asymmetrical, as has been reported in experimental measurements [22]. A Gaussian distribution with the data mean and standard deviation, which is typically used to describe RDD variability, yields a relatively large χ² error of 2.42. Fitting the data using a Pearson Type IV distribution [18] results in a much better fit, with a χ² error of 0.38. The analytical fits are shown in Fig. 3 for comparison, and
Fig. 4. χ² error of the statistical moments as a function of sample size. It is assumed that the final values are the population values.
Table 1. Statistical moments of the VT distributions. Note that the mean value has been normalised to the experimental mean threshold.
Fig. 5. Potential landscapes from the extremes of the VT distribution with V_G = 225.9 mV
the moments of the distribution and analytical fits are given in Table 1. A more detailed view of the tails is shown in Fig. 3(b), where the systematic error in the Gaussian is more apparent. The effect of RDD on an individual device can be understood by examining the electrostatic potential within the devices. Fig. 5 shows the potential profiles extracted from devices taken from the upper and lower tails, along with a 'mean' device chosen from the centre of the distribution for comparison. This clearly illustrates the physical causes of variation in device characteristics, since the behaviour of a MOSFET is determined by the height of the potential barrier in the channel. It can be seen that even at the nominal threshold voltage, the device from the lower tail has already switched on and the device from the upper tail is off. While statistically rare devices can be generated artificially, it is only through the large-scale simulation undertaken here that we can obtain the correct statistical description that governs the occurrence of such devices, and through this achieve a better understanding of the underlying physical causes of variability. Each device within the ensemble was analysed by sectioning it into 1 nm thick volumes in the x and z directions, as demonstrated in Fig. 6(a). The correlation between the number of dopants in each volume and the measured VT of the device can be calculated. The correlation between dopant position and threshold voltage is illustrated in Fig. 6(b). This allows the definition of the statistically significant region (SSR) where individual dopants can affect the bulk device behaviour, which, as expected, corresponds approximately to the device channel but also includes small portions of the source and drain regions. For a fixed number of dopants in the SSR, there will also be a distribution of threshold voltage arising from the different positional configurations of dopants that can occur in the silicon lattice. The VT distributions for n = 35, 45, 55 dopants (corresponding to lower tail, mean, and upper tail devices) in the SSR can be seen in Fig. 7. From this it is clear that both the mean and standard deviation
Fig. 6. The device is divided into 1 nm slices in x and z. The correlation between number of dopants in each slice and VT is used to determine (b) the SSR.
Fig. 7. Gaussian distribution of VT for selected occurrences of a number of dopants. Note the increasing mean and standard deviation of VT as a function of n.
of VT increase with the number of dopants. By further examining this relationship using all of the available data, it can be determined that the mean and standard deviation depend linearly on the number of dopants [23]. Therefore, this relationship can be extrapolated to arbitrary numbers within the SSR as necessary. From knowledge of the physical processes governing dopant implantation, the number of dopants within the SSR must be governed by a Poisson distribution. Therefore, a VT distribution can be constructed from the convolution of this Poisson distribution with the Gaussian distributions from the random position of dopants [23], which extends to very large values of σ. This calculation is illustrated graphically in Fig. 8. The linear relationship between the number of dopants and the mean and standard deviation of the positional Gaussians allows the distribution to be extrapolated to an arbitrary value of σ, resulting in a significantly more accurate prediction of the probability of finding devices
Fig. 8. Graphical illustration of how the full distribution is built up from the convolution of a Poissonian and the positional Gaussians
Fig. 9. (a) Tails of the convolved distribution and (b) the χ² errors across the distribution. The convolution yields an excellent fit across the whole distribution.
in the tails of the distribution, where real gains can be made in billion-transistor-count chips. The accuracy of the distribution resulting from this convolution is shown in Fig. 9(a), along with the extrapolated distribution; the χ² errors calculated from the comparison of this distribution with simulation data can be seen in Fig. 9(b). The total calculated χ² error values are 0.94 for the convolution using simulation data and 0.55 for the extrapolated distribution. These values are comparable to the fitting error of the Pearson IV, demonstrating the accuracy of this analytical description.
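The construction of the convolved distribution lends itself to a compact numerical sketch. The Python fragment below only illustrates the convolution idea described above and is not the authors' code; the Poisson mean and the linear coefficients of the positional mean and standard deviation are hypothetical placeholders.

    import numpy as np
    from scipy.stats import poisson, norm

    # Hypothetical parameters: Poisson mean of the dopant number in the SSR and
    # an assumed linear dependence of the positional Gaussians on n.
    lam = 45.0
    mean_vt = lambda n: 0.150 + 0.002 * n        # mean VT for n dopants (V), placeholder
    std_vt = lambda n: 0.010 + 0.0002 * n        # std. dev. of VT for n dopants (V), placeholder

    vt = np.linspace(0.0, 0.5, 1000)             # VT grid (V)

    # p(VT) = sum_n Poisson(n; lam) * Normal(VT; mean(n), std(n)); truncating the
    # Poisson sum far beyond its tails keeps the approximation accurate.
    pdf = sum(poisson.pmf(n, lam) * norm.pdf(vt, mean_vt(n), std_vt(n))
              for n in range(0, 200))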
4 Conclusions and Future Work
It is clear that small statistical ensembles do not provide sufficient accuracy and information for design when variations at 6 σ and beyond are important. Large datasets are necessary, as some effects, such as the asymmetry of the VT distribution observed here, are not visible in small datasets and will have seriously detrimental effects if not incorporated into design strategies. We have shown that
by applying grid and e-science technology to the problem of intrinsic parameter fluctuations we have gained a fundamental insight into device behaviour in the presence of random dopants. The large datasets obtained have provided sufficient data to develop and verify an accurate analytical model of the underlying physical processes that affect VT fluctuations in nano-scale semiconductor MOSFETs. The future work will be to explore the impact of these atomic variations throughout the design process. We are also exploring the atomistic variability of a variety of novel device architectures.
Acknowledgements This work was funded by a grant from the UK Engineering and Physical Sciences Research Council. We gratefully acknowledge their support.
References

1. Berkeley SPICE, http://bwrc.eecs.berkeley.edu/Classes/icbook/SPICE/
2. Ganga webpage, http://ganga.web.cern.ch/ganga/
3. Globus Security Infrastructure, http://www.globus.org/security
4. GridSAM — Grid Job Submission and Monitoring Web Service, http://gridsam.sourceforge.net/
5. International technology roadmap for semiconductors (2005), http://www.itrs.net/
6. Internet2 Shibboleth Architecture and Protocols, http://shibboleth.internet2.edu
7. Scotgrid webpage, http://www.scotgrid.ac.uk
8. Storage Resource Broker (SRB), http://www.sdsc.edu/srb/index.php
9. Alfieri, R., et al.: VOMS: an authorization system for virtual organizations. In: Fernández Rivera, F., Bubak, M., Gómez Tato, A., Doallo, R. (eds.) Across Grids 2003. LNCS, vol. 2970, pp. 33–40. Springer, Heidelberg (2004)
10. Ancona, M.G., Tiersten, H.F.: Macroscopic physics of the silicon inversion layer. Phys. Rev. B 35(15), 7959–7965 (1987)
11. Anjomshoaa, A., et al.: Job Submission Description Language (JSDL) Specification, Version 1.0 (2005), http://www.ogf.org/documents/GFD.56.pdf
12. Basney, J., Humphrey, M., Welch, V.: The MyProxy Online Credential Repository. Software Practice and Experience 35(9), 801–816 (2005)
13. Brown, A., Roy, G., Asenov, A.: Poly-Si-gate-related variability in decananometer MOSFETs with conventional architecture. IEEE Transactions on Electron Devices 54(11), 3056–3063 (2007)
14. Chadwick, D.W., Otenko, A., Ball, E.: Role-based Access Control with X.509 Attribute Certificates. IEEE Internet Computing 7(2), 62–69 (2003)
15. Edward, R., Zayas, R.: Andrew File System (AFSv3) Programmer's Reference: Architectural Overview. Transarc Corporation, Pittsburgh (1991)
16. Frank, D.J., Taur, Y.: Design considerations for CMOS near the limits of scaling. Solid State Electronics 46, 315–320 (2002)
17. Han, L., Sinnott, R.O., Asenov, A., et al.: Towards a Grid-enabled Simulation Framework for nanoCMOS Electronics. In: 3rd IEEE International Conference on e-Science, Bangalore, India (December 2007)
18. Heinrich, J.: A guide to the Pearson type IV distribution. Technical report, University of Pennsylvania (2004)
19. Inaba, S., Okano, K., et al.: High performance 35 nm gate length CMOS with NO oxynitride gate dielectric and Ni salicide. IEEE Transactions on Electron Devices 49(12), 2263–2270 (2002)
20. Kohl, J.T., Neuman, B.C., T'so, T.Y.: The Evolution of the Kerberos Authentication System. Distributed Open Systems, 78–94 (1994)
21. Millar, C., Reid, D., et al.: Accurate statistical description of random dopant induced threshold voltage variability. IEEE Electron Device Letters 29(8) (2008)
22. Nassif, S., et al.: High performance CMOS variability in the 65 nm regime and beyond. IEDM Digest of Technical Papers, pp. 569–571 (2007)
23. Reid, D., Millar, C., et al.: Prediction of random dopant induced threshold voltage fluctuations in nanoCMOS transistors. In: SISPAD 2008 (2008) (in publication)
24. Roy, G., Brown, A., et al.: Simulation study of individual and combined sources of intrinsic parameter fluctuations in conventional nano-MOSFETs. IEEE Transactions on Electron Devices 53(12), 3063–3070 (2006)
25. Sinnott, R.O., Asenov, A., et al.: Meeting the Design Challenges of nanoCMOS Electronics: An Introduction to an EPSRC pilot project. In: Proceedings of the UK e-Science All Hands Meeting (2006)
26. Sinnott, R.O., Asenov, A., et al.: Integrating Security Solutions to Support nanoCMOS Electronics Research. In: IEEE International Symposium on Parallel and Distributed Processing Systems with Applications, Sydney, Australia (2008c)
27. Sinnott, R.O., Bayliss, C., et al.: Secure, Performance-Oriented Data Management for nanoCMOS Electronics. Submitted to e-Science 2008, Indiana, USA (December 2008d)
28. Sinnott, R.O., Chadwick, D., et al.: Advanced Security for Virtual Organizations: Exploring the Pros and Cons of Centralized vs Decentralized Security Models. In: 8th IEEE International Symposium on Cluster Computing and the Grid (CCGrid 2008), Lyon, France (2008a)
29. Sinnott, R.O., Stewart, G., et al.: Supporting Security-oriented Collaborative nanoCMOS Electronics e-Research. In: International Conference on Computational Science, Krakow, Poland (2008b)
Incremental Reuse of Paths in Random Walk Radiosity

Francesc Castro and Mateu Sbert

Graphics and Imaging Laboratory, IIiA, Universitat de Girona
{castro,mateu}@ima.udg.edu
Abstract. Radiosity techniques produce highly realistic synthetic images for diffuse environments. Monte Carlo random walk approaches, widely applied to radiosity, have as a drawback the necessity of a high number of random paths to obtain an acceptable result, involving a high computation cost. The reuse of paths is a strategy to reduce this cost by allowing a path to distribute light power from several light positions, resulting in a noticeable speed-up factor. We present a new strategy of reuse of paths, which will allow us, given a previously computed set of radiosity solutions corresponding to n light positions, to add new light positions and accurately compute the radiosity solution at a reduced cost by reusing paths. Our incremental strategy can be applied to light positioning in interior design, allowing the set of authorized light locations to be enriched by adding new positions chosen by the user.
1 Introduction
Radiosity techniques [4] are commonly used in global illumination. They follow the radiosity equations, which model the exchange of radiant energy between the objects in the environment. Monte Carlo random walk approaches have been widely applied to radiosity. A drawback of such approaches is the necessity of a high number of paths to obtain an acceptable result, involving a high computation cost. One way to reduce such a cost is the reuse of paths [10,11]. In the case of shooting random walk, reuse allows a random path to distribute light power from several light positions, resulting in a noticeable speed-up factor. We present in this article a new strategy of reuse of shooting paths. Such a strategy will allow us, given a previously computed set of radiosity solutions corresponding to n light positions, to add new light positions and accurately compute their corresponding radiosity solutions at a reduced cost thanks to the storage and reuse of the paths generated in the computation of the initial set of solutions. Multiple importance sampling [14] will guarantee the unbiasedness of the new solutions. Our incremental strategy can be applied to light positioning in interior design. The set of authorized light locations to be chosen or combined in order to decide
the best lighting can be enriched by adding new light positions. This allows an interactive application in which the user can choose new light locations and, at a reduced computation cost thanks to the reuse, see their corresponding lighting, or light combinations involving the new computed solution. This paper is structured as follows. Section 2 describes previous work in reuse of paths in radiosity. Section 3 describes our contribution, the incremental reuse of paths. Results are presented in section 4, while section 5 outlines the conclusions and some possible lines of future research.
2 Previous Work: Reuse of Shooting Paths in Radiosity
Radiosity [4] is a global illumination algorithm that simulates the multiple reflections of light in a scene. Radiosity assumes that the exiting radiance of a point is direction independent, i.e., it works with diffuse surfaces. The equation for the radiosity B(x) (emitted plus reflected light leaving point x) is:

B(x) = \int_D \rho(x) F(x, y) B(y)\, dy + E(x),    (1)
where D stands for the set of all the surfaces in the scene, ρ(x) and E(x) respectively stand for the reflectance (fraction of the incoming light that is reflected) and emittance (emission radiosity) at point x, and F(x, y) stands for the form factor between points x and y, a geometric term (Fig. 1, left) defined in (2):

F(x, y) = \frac{\cos\theta_x \cos\theta_y}{\pi r^2} V(x, y),    (2)
where θx, θy are the angles between the line from x to y and the respective normal vectors, r is the distance between x and y, and V(x, y) is the visibility function between x and y (with value 0 or 1). In practice, a discrete version of the radiosity equation is considered, corresponding to a discretization of the environment into small polygons called patches. The radiosity system of equations can be solved using Monte Carlo random walk [8,9,1]. Each path traced from a light source can be seen as a light particle carrying an amount of power equal to Φ/N, N being the number of paths to be traced, and Φ being the emission power of the light. The transition probabilities are simulated by tracing rays from points uniformly distributed on the light surface, using a cosine-distributed direction (according to the kernel of the form factor integral). The walk continues from the patch hit by this ray, to which a contribution equal to the power carried by the path is added. A survival test is done at each hit, with the reflectance of the hit patch being the probability of survival. This technique is known as collision shooting random walk. The idea of reusing full paths was introduced by Halton in [7], in the context of the random walk solution of an equation system. Reuse of paths in random walks has been exploited in global illumination and radiosity, providing a significant reduction of the computation cost. Let us outline next some approaches in which shooting paths have been reused.
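As a purely illustrative sketch of the collision shooting random walk just described (not code from this paper), the light and patch objects and the trace_cosine_ray intersection routine below are hypothetical placeholders that a real implementation would have to provide.

    import random

    def shoot_paths(light, num_paths, trace_cosine_ray):
        # Each path carries Phi/N of the light power and deposits it on every
        # patch it hits; survival at each hit uses the patch reflectance.
        power = light.emission_power / num_paths
        for _ in range(num_paths):
            # Uniform point on the light, cosine-distributed direction
            # (the kernel of the form factor integral).
            origin, direction = light.sample_point_and_cosine_direction()
            while True:
                patch = trace_cosine_ray(origin, direction)
                if patch is None:                      # ray left the scene
                    break
                patch.incoming_power += power          # contribution of this path
                if random.random() >= patch.reflectance:
                    break                              # absorbed: walk terminates
                origin, direction = patch.sample_point_and_cosine_direction()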
Fig. 1. (left) Point-to-point form factor geometry. V (x, y) is 1 if points x and y are mutually visible, and 0 otherwise. (right) Reuse of shooting paths. Consider 3 points x1 , x2 , x3 in the corresponding light positions. The path started at x1 (blue/continuous) is reused to distribute power from x2 and x3 , and the same with the paths started at x2 (black/dotted) and x3 (red/dashed). Note that point y2 is not visible from x3 , so the path started at x2 will not contribute to expand any power from x3 .
Sbert et al. [12] presented a fast light animation algorithm based on the virtual lights illumination method proposed by Wald et al. [15]. They reused paths in all frames, presenting an acceleration factor close to the number of frames of the animation. Also, Sbert et al. [11] presented a radiosity algorithm that allowed the reuse of paths for different light positions. Paths generated at a position could be used to distribute power from all the positions, in an each-for-all strategy (Fig. 1, right). Such an idea, which can be useful for light animation and light positioning [3], will be detailed next. Let us consider N points on a light in n different positions, and let x_1^k, ..., x_n^k be the n positions considered for x^k, one of such N points. Points y (first hit of the paths) will be sampled from any of the points x_1^k, ..., x_n^k, which corresponds to using a mixture pdf mix(y) [13], which is just the average of the n form factors F(x_1^k, y), ..., F(x_n^k, y). Thus, the power to be distributed from x_j^k can be estimated, according to MC integration, in the following way:

\Phi(x_j^k) = \frac{1}{n} \sum_{i=1}^{n} \frac{F(x_j^k, y_i^k)}{mix(y_i^k)} \times \frac{\Phi}{N},    (3)

which means that each of the n samples considered y_1^k, ..., y_n^k is generated from one of the n positions. The i-th term in the sum in (3) can be seen as the amount of the power from x_j^k carried by the shooting path \Gamma_i^k, started at x_i^k. Let \Phi_j(\Gamma_i^k) be this power:

\Phi_j(\Gamma_i^k) = \frac{F(x_j^k, y_i^k)}{mix(y_i^k)} \times \frac{\Phi}{Nn} = \frac{F(x_j^k, y_i^k)}{\sum_{l=1}^{n} F(x_l^k, y_i^k)} \times \frac{\Phi}{N}.    (4)
We have to note that this strategy involves a bias, because we do not consider the hits of the paths against the light. If we consider small lights, such a bias is rather unnoticeable, however.
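A minimal sketch of the weight computation in equation (4), under the assumption that a form_factor routine implementing equation (2) (including visibility) is available: given the form factors from the n light positions to a path's first hit point y, the power carried by the path on behalf of every position follows directly.

    def reuse_weights(light_positions, y, total_power, paths_per_position, form_factor):
        # Power distributed from each position j by one path with first hit y:
        # Phi_j = F(x_j, y) / sum_l F(x_l, y) * Phi / N   (equation (4)).
        factors = [form_factor(x, y) for x in light_positions]
        total = sum(factors)
        if total == 0.0:
            return [0.0] * len(light_positions)   # y not visible from any position
        return [f / total * total_power / paths_per_position for f in factors]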
3 Our Contribution: Incremental Reuse of the Shooting Paths
We decide to use directional point lights, made up of a differential area plus a normal vector. This is a simplification of the model with respect to using area lights, and it carries some advantages. First, the bias described in the previous section disappears. Second, and more important, since there is only one origin point x_i at each light position i, there is a noticeable reduction of the information to be stored to reuse the paths for the new light positions. We aim at computing the radiosity solution for a new light position p = n + 1. Such a new position will be chosen by the user. Next we will show how this new solution can be computed in an unbiased way at a reasonable cost by reusing the paths previously generated at the n original positions. We will consider the paths which distribute the emission power from position p to be generated in n + 1 different ways, that is, from the original n positions plus the new position p. This corresponds to taking as mixture pdf the average of the n + 1 form factors F(x_1, y), ..., F(x_{n+1}, y), weighted by N_i, the number of paths generated at each position, to allow a different number of samples at each position. Let M = \sum_{i=1}^{n+1} N_i. Equation (5) shows the power from the light at p to be distributed by a path \Gamma_i^k generated at position i ∈ [1, n + 1]. Thus, such an equation allows us to reuse the paths traced during the previous computation of the set of solutions, and it is also valid for the new paths traced from p:

\Phi_p(\Gamma_i^k) = \frac{F(x_p, y_i^k)}{mix(y_i^k)} \times \frac{\Phi}{M} = \frac{F(x_p, y_i^k)}{\sum_{i=1}^{n+1} N_i F(x_i, y_i^k)} \times \Phi.    (5)
It is worth mentioning that some new paths have to be cast from the new position p in order not to introduce bias. Multiple importance sampling [14,13] guarantees unbiasedness only if all the points in the domain with a non-zero value of the integrand can be generated using the mixture pdf. In practice, note that points y with F(x_p, y) > 0, but not reachable from the n original positions, can only be sampled by casting paths from the new position p. The reuse at the new position p of the shooting paths cast from the n original positions, according to equation (5), demands some data storage. For each shooting path Γ_k to be reused, we have to store:
– L_k, the list of intersected patches.
– y_k, the first-hit point.
– S_k, the sum of the form factors (weighted by N_i) from each of the n positions to y_k.
In Fig. 2 we outline the reuse algorithm for computing the set of solutions corresponding to the n original positions; note that the information above is stored. Fig. 3 shows the algorithm in which the radiosity solution for a new light position p is computed by taking advantage of the information stored by the algorithm in Fig. 2.
    forEach original position i in 1..n do
      forEach k in 1..Ni do
        start path Γik at xi
        obtain and store first hit-point yik
        Sik = 0
        forEach original position j in 1..n do
          compute F(xj, yik)
          Sik += Nj × F(xj, yik)
        endfor
        store Sik
        forEach original position j in 1..n do
          Φki[j] = Φ × F(xj, yik) / Sik
        endfor
        continue path Γik until termination
        store list of patches intersected by Γik
        forEach original position j in 1..n do
          increase by Φki[j] the incoming power for each patch in the list
        endfor
      endfor
    endfor

Fig. 2. Reuse and storage of paths for the original set of positions

    Choose a new position p = n + 1
    forEach k in 1..Np do
      start path Γpk at xp
      obtain first hit-point yk
      Sk = 0
      forEach position j in 1..n+1 do
        compute F(xj, yk)
        Sk += Nj × F(xj, yk)
      endfor
      continue path Γpk until termination
      increase by Φ × F(xp, yk) / Sk the incoming power for each intersected patch
    endfor
    forEach stored path Γk in 1..M (M = Σ_{i=1}^{n} Ni) do
      obtain stored values Lk, Sk, yk
      compute F(xp, yk)
      Sk += Np × F(xp, yk)
      at position p, increase by Φ × F(xp, yk) / Sk the incoming power for each patch in Lk
    endfor

Fig. 3. Reuse of paths for an a-posteriori-added light position
3.1 Applications
This new strategy can be applied to light positioning in interior design. From a given set of light locations with their respective radiosity solutions, the user
will be able to choose new light locations. Thanks to the storage of the shooting paths, the new radiosity solutions will be obtained fast. This allows the user to see the effect of the resulting illumination for the light placed at the new location, or even to replace an old position by the new one. The strategy can also be useful for heuristic-search-based algorithms of light positioning [6,5,2], in which an optimal configuration of lights is pursued by applying heuristic methods such as local-search, genetic algorithms, etc.
4 Results
We aim at comparing the performance of our strategy of reusing the previously computed paths (B) against the performance of computing the new solution without reuse (A). Computation time and mean square error (MSE) with respect to a reference solution will be taken into account in our comparisons. It is worth mentioning that our reuse strategy is noticeably accelerated [3] by running a pre-process in which the directional hemisphere over each light position is discretized into regions and information about the nearest intersection in the corresponding direction is stored in each region. Such a pre-process, which also has to be run for the newly-added positions, allows us to accelerate the computation of the point-to-point form factors needed for the reuse, at the cost of introducing a small bias. We consider a room (see Fig. 4), discretized into nearly 40000 patches, in which n = 18 light positions have been taken into account in the original position set. Such positions are distributed in a non-regular way on the ceiling and walls of the room. The position added a posteriori is also placed on one of the walls. Table 1 shows the cost due to the newly-added position and the MSE for different numbers of paths cast from this new position. In the case of reuse (B), we have generated 10 times fewer new paths than when no reuse is applied (A). Fig. 4 illustrates the advantage of the reuse of paths for newly added positions. On the left, an image corresponding to the radiosity solution obtained without reuse. On the right, an image corresponding to the solution obtained by reusing the stored paths.

Table 1. A-posteriori addition of a new light. New paths, computation time (in seconds), and MSE with respect to a reference solution. (A) without reuse; (B) reusing the paths from the previous computation of a set of 18 radiosity solutions. Time in (B) includes pre-process, form factor computation, and tracing paths from the new position. The relative efficiency of (B) with respect to (A) is approximately 8 in this experiment. (*) The bias due to the discretization of the directions over the positions has been subtracted from the MSE.

NEW PATHS (A)  TIME (A)  MSE (A)  | NEW PATHS (B)  TIME (B)  MSE (B)(*)
100K           32        0.210    | 10K            8         0.080
200K           64        0.098    | 20K            16        0.039
400K           128       0.052    | 40K            31        0.029
800K           258       0.029    | 80K            60        0.014
Fig. 4. (left) (A) (no reuse). 400K new paths. 128 sec. MSE= 0.052. (right) (B) (reusing paths). 40K new paths. 31 sec. MSE= 0.029. We have reused the paths stored from 18 previously computed solutions corresponding to 18 light positions distributed on the ceiling and walls of the room. Note that the image obtained with reuse is slightly less noisy, while its computation time is about 4 times lower than the one for the image without reuse. Small white squares represent the directional point light sources.
The storage of the information needed for each path to be reused has been done in RAM in the form of dynamic lists. It requires 24 bytes for a path of an average length of 4, resulting in about 172 MB of memory for the run corresponding to the image in Fig. 4 (right), where 400 K paths have been stored for each of the 18 original positions (18 × 400 K paths × 24 bytes ≈ 172 MB). All the runs have been done on a Pentium IV at 3 GHz with 2 GB of RAM.
5 Conclusions and Future Work
We have presented a new strategy that allows, starting from a previously computed radiosity solution for a set of light positions, to unbiasedly estimate, at a reduced cost, the solutions for newly-added light positions, thanks to the reuse of the stored paths. Our approach appears to be clearly superior to computing independent new paths. The relative efficiencies, which are about 1 order of magnitude in our experiments, depend on many factors such as the scene geometry, the average reflectance, the number and position of the initial set of locations, the relative position of the newly-added locations, and the number of paths we decide to cast from the new locations. Our strategy can be easily applied to light positioning in interior design, allowing the user to choose a new light location given an initial set of solutions, and, once computed the new solution, to visualize it independently or combined with previous solutions.
We plan in the future to take into account not only new positions but also orientations. Our strategy can also be applied to animations involving moving lights in which the light trajectory is not known a priori but decided on the fly. Acknowledgment. This project has been funded in part with grant number TIN2007-68066-C04-01 from the Spanish Government.
References 1. Bekaert, P.: Hierarchical and stochastic algorithms for radiosity. Ph.D. Thesis. Catholic Univ. of Leuven (1999) 2. Castro, F., Acebo, E., Sbert, M.: Heuristic-search-based light positioning according to irradiance intervals. In: Butz, A., Fisher, B., Christie, M., Kr¨ uger, A., Olivier, P., Ther´ on, R. (eds.) SG 2009. LNCS, vol. 5531, pp. 128–139. Springer, Heidelberg (2009), http://ima.udg.edu/~ castro/articles/SG09.pdf 3. Castro, F., Sbert, M., Halton, J.: Efficient reuse of paths for random walk radiosity. Computers and Graphics 32(1), 65–81 (2008) 4. Cohen, M., Wallace, J.: Radiosity and Realistic Image Synthesis. Academic Press Professional, London (1993) 5. Delepoulle, S., Renaud, C., Chelle, M.: Improving light position in a growth chamber through the use of a genetic algorithm. Artificial Intelligence Techniques for Computer Graphics 159, 67–82 (2008) 6. Elorza, J., Rudomin, I.: An interactive system for solving inverse illumination problems using genetic algorithms. Computation Visual (1997) 7. Halton, J.: Sequential Monte Carlo techniques for the solution of linear systems. Journal of Scientific Computing 9(2), 213–257 (1994) 8. Pattanaik, S., Mudur, S.: Computation of global illumination by Monte Carlo simulation of the particle model of light. In: Proceedings of third Eurographics WorkShop on Rendering, pp. 71–83. Springer, Heidelberg (1992) 9. Sbert, M.: The use of global random directions to compute radiosity. Global Monte Carlo methods. Ph.D. Thesis. Univ. Polit`ecnica de Catalunya (1997) 10. Sbert, M., Bekaert, P., Halton, J.: Reusing paths in radiosity and global illumination. In: Proceedings of 4th IMACS Seminar on Monte Carlo Methods, Berlin, Germany, vol. 10(3-4), pp. 575–585 (2004) 11. Sbert, M., Castro, F., Halton, J.: Reuse of paths in light source animation. In: Proceedings of CGI 2004, Crete (Greece), pp. 532–535. IEEE Computer Society Press, Los Alamitos (2004) 12. Sbert, M., Szecsi, L., Szirmay-Kalos, L.: Real-time light animation. In: Computer Graphics Forum (proc. EG 2004), vol. 23(3), pp. 291–299 (2004) 13. Veach, E.: Robust Monte Carlo methods for light transport simulation. Ph.D. Thesis. Stanford University (1997) 14. Veach, E., Guibas, L.: Optimally combining sampling techniques for monte carlo rendering. In: ACM SIGGRAPH 1995 proceedings, pp. 419–428. Addison Wesley Publishing Company, Reading (1995) 15. Wald, I., Kollig, T., Benthin, C., Keller, A., Slussalek, P.: Interactive global illumination using fast ray tracing. In: Rendering Techniques 2002. ACM International Conference Proceeding Series, vol. 28, pp. 15–24. Eurographics Association (2002)
Monte Carlo Adaptive Technique for Sensitivity Analysis of a Large-Scale Air Pollution Model

Ivan Dimov(1,2) and Rayna Georgieva(2)

(1) ACET, The University of Reading, Whiteknights, PO Box 225, Reading, RG6 6AY, UK
(2) IPP, Bulgarian Academy of Sciences, Acad. G. Bonchev 25 A, 1113 Sofia, Bulgaria
[email protected], [email protected]
Abstract. Variance-based sensitivity analysis has been performed to study the contribution of input parameters to the output variability of a large-scale air pollution model — the Unified Danish Eulerian Model. The problem of computing numerical indicators of sensitivity — Sobol' global sensitivity indices — leads to multidimensional integration. Plain and Adaptive Monte Carlo techniques for numerical integration have been analysed and applied. Numerical results for the sensitivity of pollutant concentrations to chemical rate variability are presented.
1 Introduction
Sensitivity analysis (SA) is a study of how uncertainty in the output of a model can be apportioned to different sources of uncertainty in the model input [6]. There are several available sensitivity analysis techniques [6]. Variance-based methods deliver sensitivity measures that are independent of the model's behaviour: they do not require linearity, monotonicity, or additivity of the relationship between input factors and model output. The aim of our research is to develop an Adaptive Monte Carlo (MC) algorithm for evaluating Sobol' global sensitivity indices, increasing the reliability of the results by reducing the variance of the corresponding Monte Carlo estimator and applying the adaptive concept to the numerical integration of functions with local difficulties. This makes a robust SA possible — in our particular case, a comprehensive study of the influence of variations of the chemical rates on the model results. It helps to identify the most influential parameters and mechanisms and to improve the accuracy. It also helps to simplify the model by ignoring insignificant parameters after careful complementary analysis of the relations between parameters.
2 Background Studies
The investigations and the numerical results reported in this paper have been obtained by using a large-scale mathematical model called Unified Danish
Eulerian Model (UNI-DEM) [7,8]. This model simulates the transport of air pollutants and has been developed by Dr. Z. Zlatev and his collaborators at the Danish National Environmental Research Institute (http://www2.dmu.dk/AtmosphericEnvironment/DEM/). Both non-linearity and stiffness of the equations are mainly introduced by the chemistry (CBM-4 chemical scheme) [8]. Thus, the motivation to choose UNI-DEM is that it is one of the models of atmospheric chemistry where the chemical processes are taken into account in a very accurate way. Our main interest is to find out how changes in the input parameters of the model influence the model output. We consider the chemical rate constants to be input parameters and the concentrations of pollutants to be output parameters. In the context of this paper, the term "constants" means variables with normal distribution (established experimentally) with mean 1.0.
2.1 Sobol' Global Sensitivity Indices Concept
It is assumed that the mathematical model can be presented as a model function

u = f(x), where x = (x_1, x_2, \ldots, x_d) \in U^d \equiv [0; 1]^d    (1)

is a vector of independent input parameters with a joint probability density function (p.d.f.) p(x) = p(x_1, \ldots, x_d). The total sensitivity index (TSI) of input parameter x_i, i \in \{1, \ldots, d\}, is defined in the following way [3]:

S_{x_i}^{tot} = S_i + \sum_{l_1 \neq i} S_{i l_1} + \sum_{l_1, l_2 \neq i,\; l_1 < l_2} S_{i l_1 l_2} + \ldots + S_{i l_1 \ldots l_{d-1}},

where S_i is called the main effect (first-order sensitivity index) of x_i and S_{i l_1 \ldots l_{j-1}} is the j-th order sensitivity index (respectively, two-way interactions for j = 2, and so on) for parameter x_i (2 \le j \le d). The variance-based Sobol' method [3] uses these sensitivity measures (indices) and takes into account interaction effects between inputs. An important advantage of this method is that it allows the computation not only of the first-order indices, but also of higher-order indices in a way similar to the computation of the main effects; the total sensitivity index can be calculated with just one Monte Carlo integral per factor. The computational cost of estimating all first-order and total sensitivity indices via the Sobol' approach is proportional to dN, where N is the sample size and d is the number of input parameters (see [2]). The method is based on a decomposition of an integrable model function f in the d-dimensional factor space into terms of increasing dimensionality:

f(x) = f_0 + \sum_{\nu=1}^{d} \sum_{l_1 < \ldots < l_\nu} f_{l_1 \ldots l_\nu}(x_{l_1}, x_{l_2}, \ldots, x_{l_\nu}), \qquad f_0 = \int_{U^d} f(x)\, dx,    (2)

where f_0 is a constant. The representation (2) is unique (called the ANOVA representation of the model function f(x) [4]) if \int_0^1 f_{l_1 \ldots l_\nu}(x_{l_1}, x_{l_2}, \ldots, x_{l_\nu})\, dx_{l_k} = 0 for 1 \le k \le \nu, \nu = 1, \ldots, d. The quantities D = \int_{U^d} f^2(x)\, dx - f_0^2 and D_{l_1 \ldots l_\nu} = \int f_{l_1 \ldots l_\nu}^2\, dx_{l_1} \ldots dx_{l_\nu} are called variances (total and partial variances, respectively), where f(x) is a square integrable function. Therefore, the total variance
of the model output is partitioned into partial variances [3] in a way analogous to the model function, that is, the ANOVA decomposition D = \sum_{\nu=1}^{d} \sum_{l_1 < \ldots < l_\nu} D_{l_1 \ldots l_\nu}. The ratios

S_{l_1 \ldots l_\nu} = \frac{D_{l_1 \ldots l_\nu}}{D}, \qquad \nu \in \{1, \ldots, d\},    (3)
are called Sobol' global sensitivity indices [3,4]. The results discussed above make clear that the mathematical treatment of the problem of providing global sensitivity analysis consists in evaluating total sensitivity indices and, in particular, Sobol' global sensitivity indices (3) of the corresponding order. This leads to the computation of multidimensional integrals I = \int_\Omega g(x) p(x)\, dx, \Omega \subset R^d, where g(x) is a square integrable function in \Omega and p(x) \ge 0 is a p.d.f. such that \int_\Omega p(x)\, dx = 1. The procedure for computation of the global sensitivity indices is based on the following representation of the variance D_y: D_y = \int f(x) f(y, z')\, dx\, dz' - f_0^2 (see [4]), where y = (x_{k_1}, \ldots, x_{k_m}), 1 \le k_1 < \ldots < k_m \le d, is an arbitrary set of m variables (1 \le m \le d - 1) and z is the set of the d - m complementary variables, i.e. x = (y, z). Let K = (k_1, \ldots, k_m), and let the complement of the subset K in the set of all parameter indices be denoted by \bar{K}. The last equality allows the construction of a Monte Carlo algorithm for evaluating f_0, D, and D_y, where \xi = (\eta, \zeta):

\frac{1}{N} \sum_{j=1}^{N} f(\xi_j) \xrightarrow{P} f_0, \qquad \frac{1}{N} \sum_{j=1}^{N} f^2(\xi_j) \xrightarrow{P} D + f_0^2,

\frac{1}{N} \sum_{j=1}^{N} f(\xi_j) f(\eta_j, \zeta'_j) \xrightarrow{P} D_y + f_0^2, \qquad \frac{1}{N} \sum_{j=1}^{N} f(\xi_j) f(\eta'_j, \zeta_j) \xrightarrow{P} D_z + f_0^2.

2.2 Monte Carlo Approach for Small Sensitivity Indices
The standard Monte Carlo algorithm for estimating global sensitivity indices, proposed in [3], is spoilt by a loss of accuracy when D_y \ll f_0^2, i.e. in the case of small (in value) sensitivity indices. That is why here we have applied one of the approaches for evaluating small sensitivity indices — the so-called combined approach [2]. The concept of the approach consists in replacing the original integrand (the mathematical model function) with a function of the following type: \varphi(x) = f(x) - c, where c \approx f_0. The following estimators for the variances have been proposed for this approach: D_y = \int \varphi(x)\, [\varphi(y, z') - \varphi(x')]\, dx\, dx' and D = \int \varphi(x)\, [\varphi(x) - \varphi(x')]\, dx\, dx'.
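A minimal Plain Monte Carlo sketch of the combined estimator for a first-order index S_y = D_y/D is given below; it assumes the model f is a cheap Python function accepting an (N, d) array of points in [0, 1]^d, and the constant c is a crude estimate of f_0, as in the text. It illustrates the structure of the estimators, not the implementation used for UNI-DEM.

    import numpy as np

    def first_order_index_combined(f, d, idx, n_samples, c, seed=0):
        # phi = f - c; D_y and D are estimated with two independent
        # sample sets x and x', following the combined-approach formulas above.
        rng = np.random.default_rng(seed)
        x = rng.random((n_samples, d))
        xp = rng.random((n_samples, d))           # independent resample x'
        mixed = xp.copy()
        mixed[:, idx] = x[:, idx]                 # (y, z'): column idx from x, rest from x'

        phi_x, phi_xp, phi_mixed = f(x) - c, f(xp) - c, f(mixed) - c
        d_y = np.mean(phi_x * (phi_mixed - phi_xp))
        d_total = np.mean(phi_x * (phi_x - phi_xp))
        return d_y / d_total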
3 Description of the Algorithms
Two Monte Carlo algorithms have been applied: Plain and Adaptive. Plain (Crude) Monte Carlo is the simplest possible MC approach for solving multidimensional integrals [1]. Let us consider the problem of the approximate computation of the integral I = \int_\Omega g(x) p(x)\, dx. Let \xi be a random point with a p.d.f.
p(x). Introduce the random variable \theta = g(\xi) such that E\theta = \int_\Omega g(x) p(x)\, dx. Let the random points \xi_1, \xi_2, \ldots, \xi_N be independent realizations of the random point \xi with p.d.f. p(x) and \theta_1 = g(\xi_1), \ldots, \theta_N = g(\xi_N). Then an approximate value of I is \bar{\theta}_N = \frac{1}{N} \sum_{i=1}^{N} \theta_i. This defines the Plain Monte Carlo algorithm. There are various Adaptive Monte Carlo algorithms depending on the technique of adaptation [1]. Our Adaptive algorithm uses a posteriori information about the variance. The idea of the algorithm is the following: the domain of integration \Omega is initially separated into subdomains of identical volume. The corresponding interval in every coordinate direction is partitioned into M subintervals, i.e. \Omega = \bigcup_j \Omega_j, j = 1, \ldots, M^d. Denote by p_j and I_{\Omega_j} the following expressions: p_j = \int_{\Omega_j} p(x)\, dx and I_{\Omega_j} = \int_{\Omega_j} f(x) p(x)\, dx. Consider now a random point \xi^{(j)} \in \Omega_j with a density function p(x)/p_j. In this case I_{\Omega_j} = E\left[\frac{p_j}{N} \sum_{i=1}^{N} f(\xi_i^{(j)})\right]. The algorithm starts with a relatively small number M which is given as input data. For every subdomain the integral I_{\Omega_j} and the variance are evaluated. Then the variance is compared with a preliminarily given value. The obtained information is used for the next refinement of the domain and for increasing the density of the random points. The subdomain with the largest standard deviation is divided into 2^d new subdomains. The algorithm stops when the standard deviation in all subdomains obtained after division satisfies the preliminarily given accuracy \varepsilon (or when a given maximum number of levels, or of subdomains where the stop criterion is not satisfied, has been reached).
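A compact sketch of this adaptive refinement loop over a d-dimensional box is shown below, intended only to illustrate the control flow; the per-subdomain sample count, the split budget and the assumption of a uniform p(x) are simplifications, and f is assumed to accept an array of sample points.

    import numpy as np

    def adaptive_mc(f, low, high, n_sub=500, eps=0.1, max_splits=50, seed=0):
        # Repeatedly split the subdomain with the largest standard deviation
        # into 2^d children until every subdomain meets the tolerance eps.
        rng = np.random.default_rng(seed)
        low, high = np.asarray(low, float), np.asarray(high, float)
        d = len(low)
        boxes = [(low, high)]

        def estimate(a, b):
            pts = rng.uniform(a, b, size=(n_sub, d))
            vals = f(pts) * np.prod(b - a)          # volume-weighted samples
            return vals.mean(), vals.std() / np.sqrt(n_sub)

        for _ in range(max_splits):
            stats = [estimate(a, b) for a, b in boxes]
            worst = max(range(len(boxes)), key=lambda i: stats[i][1])
            if stats[worst][1] < eps:               # all subdomains within tolerance
                return sum(m for m, _ in stats)
            a, b = boxes.pop(worst)
            mid = 0.5 * (a + b)
            for corner in np.ndindex(*(2,) * d):    # 2^d children of the worst box
                c = np.asarray(corner)
                boxes.append((np.where(c == 0, a, mid), np.where(c == 0, mid, b)))
        return sum(estimate(a, b)[0] for a, b in boxes)   # split budget exhausted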
4 Analysis of Numerical Results and Discussion
The first stage of the computations includes the generation of input data for our procedure using UNI-DEM. The model runs have been done for chemical rate variations with a fixed set of perturbation factors \alpha = \{\alpha_i\}, i = 1, \ldots, d, where every \alpha_i corresponds to a chemical rate among the set of 69 time-dependent chemical reactions and 47 constant chemical reactions, and d is the total number of chemical reactions taken into account in the numerical experiments. The generated data are ratios of the following type:

r_s(\alpha) = \frac{c_s^{\alpha}(a_{i_{max}}^s, b_{j_{max}}^s)}{c_s^{max}}, \qquad \alpha_i \in \{0.1, 0.2, \ldots, 2.0\},

where the lower index s corresponds to the chemical species (pollutants). The denominator c_s^{max} = c_s^{max}(a_{i_{max}}^s, b_{j_{max}}^s) is the maximum mean value of the concentration of chemical species s obtained for \alpha = (1, \ldots, 1), i.e. without any perturbations; a_{i_{max}}^s and b_{j_{max}}^s are the coordinates of the point where this maximum has been reached, and i_{max}, j_{max} are the mesh indices of this point. The numerator represents the values of the concentrations of the corresponding pollutant for a given set of values of the perturbation parameters \alpha_i \in \{0.1, \ldots, 2.0\}, computed at the point (a_{i_{max}}^s, b_{j_{max}}^s). Thus we consider a set of pollutant concentrations normalized according to the maximum mean value of the concentration of the corresponding chemical species. We also study
Fig. 1. Sensitivity of ozone concentrations to changes of chemical rates
Fig. 2. Model function for x1 = 1
numerically how different chemical rate reactions influence the concentrations of a given pollutant. An example of how the chemical rates of three different reactions (## 3, 6, and 22 of the CBM-4 chemical scheme) influence ozone concentrations (for July 1998) is presented in Figure 1. One can see that in this particular case the influence of reactions ## 3 and 22 is significant and the influence of reaction 6 is almost negligible. Sensitivity analysis computations consist of two steps: approximation and computation of Sobol' global sensitivity indices. As a result of computations with UNI-DEM we obtain mesh functions in the form of tables of values of the model function. The first step is to represent the model as a continuous function (1). To do that we use approximation by polynomials of third and fourth degree, where p_s(x) is the polynomial approximating the mesh function that corresponds to the s-th chemical species. The approximation domain [0.1; 2.0]^3 has been chosen somewhat wider than the integration domain \Omega = [0.6; 1.4]^3 in order to represent the mesh model function more precisely. The squared 2-norm of the residual, defined as \|p_s - r_s\|_2^2 = \sum_{l=1}^{n} [p_s(x_l) - r_s(x_l)]^2, x_l \in [a; b]^3, in the case of a polynomial of 4-th degree in three variables was \|p_s - r_s\|_2^2 = 0.022 for x_l \in [0.1; 2.0]^3 and \|p_s - r_s\|_2^2 = 0.00022 for x_l \in [0.6; 1.4]^3 in our numerical experiments. If one is not happy with the accuracy of the polynomial approximation, other tools should be used. So, we consider polynomials of 4-th degree in three variables as a case study which is completely satisfactory at this stage. The approximating function is smooth but it has a single peak at one of the corners of the domain. A section of the model function graph (with the first variable fixed to 1.0) is presented in Figure 2. The adaptive approach seems promising for functions like this. The Adaptive Monte Carlo algorithm (see Section 3) has been applied to the problem of numerical integration. The results have been compared with Plain MC (see Section 3). One of the best available random number generators, the SIMD-oriented Fast Mersenne Twister (SFMT) [5], a 128-bit pseudorandom number generator of period 2^19937 − 1, has been used to generate the required random points.
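The approximation step can be reproduced in outline with an ordinary least-squares fit of a polynomial of total degree four in three variables; the sketch below, using hypothetical mesh data, only illustrates the idea and is not the fitting procedure actually used.

    import numpy as np
    from itertools import combinations_with_replacement

    def fit_polynomial(points, values, degree=4):
        # Least-squares fit of a polynomial of total degree <= degree in
        # points.shape[1] variables; returns the exponent list and coefficients.
        dim = points.shape[1]
        exponents = [[combo.count(v) for v in range(dim)]
                     for deg in range(degree + 1)
                     for combo in combinations_with_replacement(range(dim), deg)]
        # One design-matrix column per monomial x1^e1 * x2^e2 * x3^e3.
        A = np.column_stack([np.prod(points ** np.asarray(e), axis=1) for e in exponents])
        coeffs, *_ = np.linalg.lstsq(A, values, rcond=None)
        return exponents, coeffs

The squared residual norm reported above corresponds to np.sum((A @ coeffs - values) ** 2) evaluated on the mesh points.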
Table 1. First-order and total sensitivity indices of input parameters estimated using different approaches of sensitivity analysis applying Plain Monte Carlo algorithm
Quantity   N      Standard (Sobol'):          Combined:
                  Est. value   Rel. error     Est. value   Rel. error
g0         10^4   0.5155       8e-05          0.2525       0.0002
g0         10^6   0.5156       3e-05          0.2526       7e-05
g0         10^7   0.5155       2e-05          0.2526       5e-05
D          10^4   0.2635       0.0002         0.0052       0.0081
D          10^6   0.2636       6e-05          0.0052       0.0009
D          10^7   0.2636       5e-05          0.0052       0.0012
S1         10^4   0.2657       0.0127         0.5349       0.0048
S1         10^6   0.2654       0.0116         0.5337       0.0025
S1         10^7   0.2653       0.0114         0.5325       0.0003
S3         10^4   0.2525       0.0013         0.0012       0.4060
S3         10^6   0.2521       0.0002         0.0019       0.0333
S3         10^7   0.2521       0.0002         0.0019       0.0213
Sx1^tot    10^4   0.4183       0.0002         0.5389       0.0021
Sx1^tot    10^6   0.4185       0.0003         0.5392       0.0026
Sx1^tot    10^7   0.4184       0.0001         0.5380       0.0003
Sx3^tot    10^4   0.4017       0.0002         0.0009       0.6078
Sx3^tot    10^6   0.4018       5e-06          0.0022       0.0354
Sx3^tot    10^7   0.4018       8e-06          0.0022       0.0192
Results for some first-order and total sensitivity indices obtained by the Plain Monte Carlo algorithm are presented in Table 1. Two approaches for the sensitivity indices have been applied — the standard approach (Sobol', 2001) and the combined approach. For the implementation of the combined approach a Monte Carlo estimate of f0 has been used — c = 0.51365. The following notation is used in the tables: ε is the desired estimate of the standard deviation, #sub is the number of subdomains after domain division, Nsub is the number of samples in each subdomain, N is the number of samples in the domain of integration (for the Plain algorithm), D is the variance; g0 is the integral over the domain [0.6; 1.4]^3, where the integrand for the standard approach is the model function f(x), and for the combined approach f(x) − c, respectively. The relative error is the absolute error divided by the exact value. Each estimated value is obtained after 10 algorithm runs. The exact values are known for our special case of mesh function approximation. One of the advantages of Sobol'-type approaches has been exploited in the implementation of the Plain Monte Carlo algorithm — the possibility to compute first-order and total sensitivity indices of a given input parameter using only one Monte Carlo integral and two independent multidimensional sequences of random numbers. The results obtained confirm the expected decrease of the relative error with the increase of the number of samples. On the other hand, the order of the relative error deteriorates for the values of the sensitivity indices in comparison with g0 and the variance D for both approaches. These quantities are
Table 2. First-order and total sensitivity indices of input parameters estimated using the combined approach applying Plain and Adaptive Monte Carlo algorithms

Quantity   Plain:                            Adaptive:
           N       Est. value  Rel. error    #sub  Nsub   ε       Est. value  Rel. error
g0         192     0.2516      0.0038        —     —      —       —           —
g0         7200    0.2524      0.0005        —     —      —       —           —
g0         32000   0.2524      0.0005        —     —      —       —           —
D          192     0.0055      0.0493        64    3      0.5     0.0056      0.0725
D          7200    0.0052      0.0130        180   40     0.0165  0.0051      0.0333
D          32000   0.0052      0.0034        64    500    0.1     0.0052      0.0003
S1         192     0.6502      0.2214        64    3      0.5     0.5072      0.0473
S1         7200    0.5299      0.0046        180   40     0.0165  0.5307      0.0031
S1         32000   0.5326      0.0004        64    500    0.1     0.5323      0.0001
S3         192     0.0055      1.7695        64    3      0.5     0.0016      0.1790
S3         7200    0.0009      0.5367        64    500    0.1     0.0027      0.3463
S3         32000   0.0013      0.3250        64    10^4   0.1     0.0019      0.0581
Sx1^tot    192     0.5875      0.0923        64    3      0.5     0.5108      0.0503
Sx1^tot    7200    0.5346      0.0061        180   40     0.0165  0.5345      0.0061
Sx1^tot    32000   0.5368      0.0020        64    500    0.1     0.5376      0.0004
Sx3^tot    192     0.0004      1.1693        64    3      0.5     0.0047      1.1013
Sx3^tot    7200    0.0006      0.7529        180   40     0.0165  0.0021      0.0895
Sx3^tot    32000   0.0018      0.2094        64    500    0.1     0.0022      0.0153
presented by only one (g0) or two (D) integrals, while each sensitivity index (first-order or total) is presented by a ratio of integrals estimated by the Plain Monte Carlo algorithm, which leads to an accumulation of errors. The value of the variance for the combined approach is much smaller than the value of the variance for the standard approach, and the division by that relatively small quantity leads to larger relative errors for the total sensitivity indices using the combined approach. Nevertheless, the results for the total sensitivity indices obtained by the combined approach are more reliable — the values of the total effects are fully consistent with the expected tendencies according to Figure 1. A comparison between computed first-order and total sensitivity indices obtained by the combined approach using the Plain and Adaptive Monte Carlo algorithms is given in Table 2. The concepts of the combined approach and the developed adaptive approach require numerical integration over a 6-dimensional domain, i.e. twice the dimension of the model function. That is why g0 has not been computed in this case. Since the total number of estimated quantities is seven — the variance and the main and total effects of three input parameters — the adaptive procedure applied to all of them would be inefficient in terms of its computational cost. The criterion for achieving the desired accuracy in computing the variance has been adopted as a common criterion in computing the other quantities because all main and total effects depend on the variance. In contrast
to that, two independent random sequences in the 3-dimensional domain [0.6; 1.4]^3 have been used for the implementation of the Plain Monte Carlo algorithm. The number of samples for both Monte Carlo techniques has been chosen following the requirement for consistency of the obtained results, i.e. the number of samples for the Plain algorithm is the product of the average number of subdomains (from several runs) and the number of samples in each subdomain. It has been observed that the computational times of the two Monte Carlo approaches for numerical integration (Plain and Adaptive) used to estimate the unknown quantities are comparable. For example, t = 0.073 s for the Plain algorithm (N = 32 000) and t = 0.078 s for the Adaptive one. Thus, conclusions about the efficiency of the applied algorithms in computing the desired quantities can be made by comparing the orders of the estimated errors. The Adaptive algorithm has an advantage over the Plain algorithm for a fixed number of samples, which confirms the variance-reducing effect of the applied adaptive technique. Moreover, the approximate values of the quantities are sufficiently close to the exact values even for the smallest chosen number of samples.
Acknowledgment The research reported in this paper is partly supported by NATO Science under grant CLG.982641, as well as by the Bulgarian NSF Grants DO 02-215/2008 and DO 02-115/2008. The authors thank Z. Zlatev and Tz. Ostromsky for providing necessary data for our analysis and for stimulating discussions.
References 1. Dimov, I.: Monte Carlo Methods for Applied Scientists. World Scientific, Singapore (2008) 2. Saltelli, S.: Making best use of model valuations to compute sensitivity indices. Computer Physics Communications 145, 280–297 (2002) 3. Sobol, I.M.: Sensitivity estimates for nonlinear mathematical models. Mathematical Modeling and Computational Experiment 1, 407–414 (1993) 4. Sobol, I.M.: Global sensitivity indices for nonlinear mathematical models and their Monte Carlo estimates. Mathematics and Computers in Simulation 55(1-3), 271–280 (2001) 5. Saito, M., Matsumoto, M.: SIMD-oriented Fast Mersenne Twister: a 128-bit Pseudorandom Number Generator. Monte Carlo and Quasi-Monte Carlo Methods 2006, pp. 607–622. Springer, Heidelberg (2008) 6. Saltelli, A., Tarantola, S., Campolongo, F., Ratto, M.: Sensitivity Analysis in Practice. A Guide to Assessing Scientific Models. John Wiley & Sons Publishers, Chichester (2004) 7. Zlatev, Z.: Computer Treatment of Large Air Pollution Models. Kluwer Academic Publishers, Dordrecht (1995) 8. Zlatev, Z., Dimov, I.: Computational and Numerical Challenges in Environmental Modelling. Elsevier, Amsterdam (2006)
Parallel Implementation of the Stochastic Radiosity Method Roel Martínez1 and Jordi Coma2 1
Institut d'Informàtica i Aplicacions, Universitat de Girona
[email protected] 2 Escola Politècnica Superior, Universitat de Girona
Abstract. Computing high-quality images of scenes with many polygons or patches may take hours. For this reason parallel processing is a good option for decreasing the computational cost. On the other hand, Monte Carlo methods offer good alternatives for parallelization, given their intrinsic decomposition into independent subtasks. We have implemented the stochastic method for radiosity using a cluster of PCs. Results are presented for 1 to 8 processors, exhibiting good efficiency and scalability.
1
Introduction
Global illumination algorithms compute the single reflection of the light many times to simulate the multiple reflections. To obtain a single reflection at a point, the estimate of the incoming radiance from a direction is weighted by the probability of reflection towards a given direction and integrated over all possible incoming directions. Consequently, global illumination is basically a numerical integration problem. Thus global illumination is computationally very expensive, which means that obtaining a high-quality image, with many polygons or patches, may take hours. This high cost has led researchers in the area to look into parallelization alternatives to reduce it [9]. Local Monte Carlo approaches sample the domain of the integration randomly using a probability density p, evaluate the integrand f(x) there, and provide the f/p ratio as the primary estimate of the integral. This estimate is accurate if we can find p to mimic f precisely, i.e. to make f/p as flat as possible. This strategy, which is commonly referred to as importance sampling, places more samples where the integrand is high. Since in practice p can be very far from the integrand, the estimator may have high variance. Thus, to get an accurate result, many independent primary estimators should be used and the secondary estimator is computed as their average. Global Monte Carlo methods do not rely on finding a good sampling density. Instead, they take advantage of the fact that it is usually easy to evaluate f at a well structured set of sample points x1, ..., xn. The emphasis is on the fact that the simultaneous computation of f(x1), ..., f(xn) is much cheaper than the individual computation of f(x1), ..., and f(xn) by a local method; thus in this way we can have many more samples for the same
computational effort. The drawback of this technique is that finding a probability density that simultaneously mimics the integrand at many points is very difficult; thus practical methods usually use a uniform sampling probability. In this way, global methods are implemented using global uniformly distributed lines, first used in [3], in contrast with “local” lines, generated from sampled points in the scene. The stochastic iteration algorithm for radiosity [16] is part of this family of global-line algorithms, which has seen further development in [12,10,14,2,5]. On the other hand, task-farm (or naive) Monte Carlo parallelization is based on the fact that we can decompose a Monte Carlo computation with n lines or rays into m independent smaller ones with n/m rays each, with no loss in precision [18,11,1]. We have implemented our parallel solution on a cluster of PCs. We chose a high-performance cluster to decrease the execution time of the radiosity algorithm. The rest of the paper is organized as follows. In section 2 we present the stochastic iteration radiosity method. In sections 3 and 4 we describe the sequential and parallel implementations of the stochastic method, respectively. In section 5 we show our results and finally we present our conclusions in section 6.
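A minimal sketch of the local Monte Carlo estimator described in this introduction: samples are drawn from a density p, the primary estimates f(x)/p(x) are averaged into a secondary estimator. The one-dimensional integrand and density below are hypothetical stand-ins chosen only to keep the example self-contained.

#include <cmath>
#include <iostream>
#include <random>

int main() {
    // Integrand f(x) = x^2 on [0,1]; exact integral is 1/3.
    auto f = [](double x) { return x * x; };
    // Importance density p(x) = 2x, roughly mimicking f; sampled by inversion.
    auto p = [](double x) { return 2.0 * x; };

    std::mt19937_64 gen(42);
    std::uniform_real_distribution<double> u(0.0, 1.0);

    const int M = 100000;     // number of independent primary estimators
    double secondary = 0.0;
    for (int i = 0; i < M; ++i) {
        // Inversion of the CDF x^2; using 1-u keeps x strictly positive.
        const double x = std::sqrt(1.0 - u(gen));
        secondary += f(x) / p(x);            // primary estimate f/p
    }
    secondary /= M;                          // secondary estimator = average
    std::cout << "importance-sampled estimate: " << secondary << '\n';
    return 0;
}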
2
Stochastic Iteration Method
Szirmay-Kalos presented in [16] a new method based on a stochastic iteration scheme (see [15]), where a random operator is selected randomly in each iteration. This new method is a finite element based iteration method, which uses a single ray or a set of ray bundles to transfer energy in a single random direction. The concept of stochastic iteration has been proposed and applied to the diffuse radiosity problem in [6,7,8,13]. The basic idea of stochastic iteration is that instead of approximating the transport operator in a deterministic way, a much simpler random operator is used during the iteration, which in most cases has the same behavior as the real operator. Szirmay-Kalos basically used two operators: the first one is a single-ray based transport operator and the second one is a ray-bundle based operator. The first operator uses a single ray having a random origin and direction generated with a probability that is proportional to the cosine-weighted radiance of this point. This ray transports the whole energy to the point which is hit by the ray. The second operator transfers the radiance of all surface points of the scene in a single random direction. The algorithm works as follows: the scene is tessellated into patches and it is assumed that a patch has uniform radiance in a given direction (but this does not mean that the patch has the same radiance in every direction, thus the non-diffuse case can also be handled). In order to evaluate the transport operator, a random direction and a plane perpendicular to this random direction are defined, the so-called transillumination direction and plane, respectively. The plane is decomposed into n × n pixels. All patches from the scene are projected onto the transillumination plane and it is computed which patches are visible from a
Fig. 1. Bundle of parallel lines exiting patch i and arriving at patch j. Pp is a plane perpendicular to the direction of the bundle and the thick lines represent the common projected area of the patches i and j onto the plane Pp.
Fig. 2. First shot method. (a) Scene with a light source (LS) and two objects. (b) The light source sends its energy to the scene. (c) The walls, floor and the objects received energy from the light source. The dash-dot line represents the light source and the thick lines represent surfaces which have received some energy.
given patch, in other words, the projected area of a patch that is visible from a given patch. This computation is done by counting the number of pixels that two face-to-face patches have in common after the projection. Finally, for each patch in the scene, the transfer of energy is done using the common projected areas between patches (see figure 1). The stochastic iteration method is only efficient in “smooth” scenes, with emittance occupying a large part of the scene and more or less equilibrated. For this reason a first shot (see figure 2), distributing the direct illumination before applying the algorithm, is necessary [4,17].
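One concrete ingredient of the method is the random transillumination direction. The sketch below generates a direction uniformly distributed over the unit sphere; it is a minimal, self-contained illustration and does not show the patch projection itself.

#include <array>
#include <cmath>
#include <random>

// Uniformly distributed random direction on the unit sphere,
// suitable as a transillumination direction.
std::array<double, 3> randomGlobalDirection(std::mt19937_64& gen) {
    const double pi = 3.141592653589793;
    std::uniform_real_distribution<double> u(0.0, 1.0);
    const double z   = 1.0 - 2.0 * u(gen);                 // cos(theta) uniform in [-1,1]
    const double r   = std::sqrt(std::max(0.0, 1.0 - z * z));
    const double phi = 2.0 * pi * u(gen);
    return { r * std::cos(phi), r * std::sin(phi), z };
}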
3
Sequential Implementation
The sequential implementation first reads the scene geometry and then the scene is subdivided into patches. Second, the first shot step, with a number of lines predefined by the user, is computed. Third, the stochastic method with a predefined number of lines is applied. Finally, the resulting scene is saved. The first shot step and the stochastic method are given by the following pseudo-code:

begin firstShot()
  for i=0 to total number of patches do
    if patch[i] is a source then
      compute number of rays for this source according to its power
      for j=0 to number of rays do
        cast a local line
        transfer energy from the source to the first intersected patch
      endFor
    endIf
  endFor
end

begin stochastic()
  create a bounding sphere for the whole scene
  for i=0 to total number of ray bundles do
    send_Bundle()
  endFor
end
where the send_Bundle() function computes the projection of all patches, for a random direction, onto the projection plane and exchanges the energy between the patches. The most time-consuming parts of our sequential implementation are the firstShot and stochastic steps.
4
Parallel Implementation
In our parallel implementation we have two regions in our code: the sequential region and the parallel region. In our case the parallel region is given by the first shot step and the stochastic method. Every cluster node has its own copy of the scene data and of the radiosity solution vector. Thus, every node computes its own solution of the first shot step and of the stochastic method according to the code explained in the previous section. The sequential region is given by three sections. The first section is loading the data at the beginning of the process. The second one is the combination of the results after the first shot and the distribution of this result to all the nodes. And the third one is the computation of the final results at the end of the process. The tasks for each node are balanced because the predefined number of rays and bundles of rays is divided by the number of processors. For every local line cast from the light source the first intersection is computed and then the intersected surface increments its accumulated power. The cost of every local line is almost the same. In the case of bundles of rays the cost can be different because it depends on the number of patches projected on every pixel. Considering that a scene needs hundreds or thousands of projections, the average projection cost will be almost the same at the end of the process. On the other hand, the algorithmic cost depends on the communication cost plus the master node CPU time. Our communication cost is very low because
Fig. 3. Museum stairs scene with 614 778 patches computed using 100 million rays for the first shot step and 3000 bundles of rays for the stochastic method
there are just two communication steps in our implementation. First, a bidirectional communication process is used after the first shot step; the master node combines the results of all nodes. Second, a unidirectional communication step is applied at the end of the process. Finally, the master node CPU adds, for every scene patch, all the radiosity results and divides them by the number of processors. Thus the execution time, in the master node, is proportional to the number of patches.
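A rough sketch of the bidirectional combination step after the first shot, assuming MPI and a radiosity solution stored as one value per patch; the function and variable names are illustrative and not taken from the actual implementation.

#include <mpi.h>
#include <vector>

// Combine per-node radiosity solutions: the master sums the vectors,
// divides by the number of processes, and redistributes the result.
void combineRadiosity(std::vector<double>& radiosity, MPI_Comm comm) {
    int rank = 0, size = 1;
    MPI_Comm_rank(comm, &rank);
    MPI_Comm_size(comm, &size);

    std::vector<double> global(radiosity.size(), 0.0);
    // Sum all partial solutions on the master node.
    MPI_Reduce(radiosity.data(), global.data(), static_cast<int>(radiosity.size()),
               MPI_DOUBLE, MPI_SUM, 0, comm);
    if (rank == 0)
        for (double& r : global) r /= size;   // average over the processes
    // Redistribute the averaged result to all nodes.
    MPI_Bcast(global.data(), static_cast<int>(global.size()), MPI_DOUBLE, 0, comm);
    radiosity = global;
}

A final gather of the same kind, without the broadcast, corresponds to the unidirectional step at the end of the process.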
5
Results
We have used a high-performance cluster composed of four HP ProLiant DL145 servers; each server has two AMD Opteron 244 (1.8 GHz) processors and 2 GB of RAM. A 100 Mbps network switch is used. The cluster has been configured with OSCAR (Open Source Cluster Application Resources) on a GNU/Linux Suse 10.0 OS. We did our implementation in C++ with the LAM/MPI (Message Passing Interface) library. We tested our implementation with the museum stairs and airplane cabin scenes (see figures 3 and 4). Speed-up and efficiency are measures that indicate how well a program has been parallelized. Let T(p) be the execution time on p CPUs. The speed-up S(p) and efficiency E(p) are defined as:

S(p) = T(1)/T(p),   E(p) = S(p)/p,   p = 1, 2, 3, ..., n
Fig. 4. Airplane cabin scene with 438 518 patches computed using 100 million rays for the first shot step and 3000 bundles of rays for the stochastic method
Figure 5 shows the speed-up and efficiency of our parallel implementation for the museum stairs scene in figure 3. We can see that the speed-up stays at about 90% of the ideal speed-up for different numbers of CPUs. The efficiency E(p) remains roughly constant over a different number of processors. We can see that our implementation has good scalability. The projection plane was defined with 256 by 256 pixels, so the projection plane is simulating more than 65 thousand global lines.
Fig. 5. (a) Speed-up S(p) and (b) Efficiency E(p) versus number of processors (2 to 8) for the museum stairs scene in figure 3
The museum stairs scene was subdivided into 614 778 patches, and 100 million rays for the first shot step and 3000 bundles of rays for the stochastic method were cast. The execution time using the sequential implementation was 1395.2 seconds (485.1 seconds for the first shot and 910.1 for the stochastic method) and for our parallel implementation, using 8 CPUs, it was 197.9 seconds (70.3 seconds for the first shot and 127.6 for the stochastic method). The airplane cabin scene was subdivided into 438 518 patches, and 100 million rays for the first shot step and 3000 bundles of rays for the stochastic method were cast. The execution times for the sequential and parallel implementations were 967.3 seconds (337.9 seconds for the first shot and 629.2 for the stochastic method) and 137.3 seconds (47 seconds for the first shot and 90.3 for the stochastic method), respectively. This scene shows similar speed-up and efficiency as the museum stairs scene.
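Plugging the reported museum stairs timings into the definitions of S(p) and E(p) gives the values plotted in Figure 5; a trivial check:

#include <iostream>

int main() {
    const double t1 = 1395.2;   // sequential time, museum stairs scene [s]
    const double t8 = 197.9;    // parallel time on 8 CPUs [s]
    const double S  = t1 / t8;  // speed-up S(8)
    const double E  = S / 8.0;  // efficiency E(8)
    std::cout << "S(8) = " << S << ", E(8) = " << E << '\n';  // ~7.05 and ~0.88
    return 0;
}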
6
Conclusions
We have presented a parallel implementation of the stochastic iteration method for radiosity. The implementation has been done on a cluster of PCs. Tests have been done on a system containing 8 processors, showing good efficiency and scalability. As future work it is possible to implement other related global-line Monte Carlo algorithms. Also, other architectures, like multicore shared memory, are a good option for Monte Carlo methods.
Acknowledgement This project has been funded in part with grant number TIN2007-68066-C04-01 of the Spanish Government.
References 1. Alme, H., Rodrigue, G., Zimmerman, G.: Domain decomposition methods for parallel laser-tissue models with Monte Carlo transport. In: Niederreiter, H., Spanier, J. (eds.) Proceedings of the Third International Conference on Monte Carlo and Quasi-Monte Carlo methods in Scientific Computing, Claremont, California, USA. Springer, Heidelberg (1998) 2. Bekaert, P.: Hierarchical and Stochastic Algorithms for Radiosity. PhD thesis, Department of Computer Science, Katholieke Universiteit Leuven, Leuven, Belgium (1999) 3. Buckalew, C., Fussell, D.: Illumination Networks: Fast Realistic Rendering with General Reflectance Functions. Computer Graphics (ACM SIGGRAPH 1989 Proceedings) 23, 89–98 (1989) 4. Castro, F., Mart´ınez, R., Sbert, M.: Quasi-Monte Carlo and extended first-shot improvement to the multi-path method. In: Szirmay-Kalos, L. (ed.) Proc. Spring Conference on Computer Graphics 1998, Budimerce, Slovakia. Comenius University, pp. 91–102 (1998), http://www.dcs.fmph.uniba.sk/~ sccg/proceedings/1998.index.htm
5. Mart´ınez, R.: Adaptive and Depth Buffer Solutions with Bundle of Parallel Rays for Global Line Monte Carlo Radiosity. PhD thesis, Universitat Polit`ecnica de Catalunya, Barcelona, Spain (2004), http://ima.udg.es/~ roel 6. Neumann, L., Feda, M., Kopp, M., Purgathofer, W.: A New Stochastic Radiosity Method for Highly Complex Scenes. In: Fifth Eurographics Workshop on Rendering, Darmstadt, Germany, pp. 195–206 (1994) 7. Neumann, L.: Monte Carlo Radiosity. Computing 55(1), 23–42 (1995) 8. Neumann, L., Purgathofer, W., Tobler, R.F., Neumann, A., Elias, P., Feda, M., Pueyo, X.: The Stochastic Ray Method for Radiosity. In: Hanrahan, P.M., Purgathofer, W. (eds.) Rendering Techniques 1995 (Proceedings of the Sixth Eurographics Workshop on Rendering), New York, pp. 206–218. Springer, Heidelberg (1995) 9. Reinhard, E., Chalmers, A.G., Jansen, F.W.: Overview of Parallel Photo-Realistic Graphics. In: Eurographics 1998 State of the Art Reports, pp. 1–25 (1998), http://www.cs.bris.ac.uk/Tools/Reports/Authors/alan.html 10. Sbert, M.: The Use of Global Random Directions to Compute Radiosity: Global Monte Carlo Techniques. PhD thesis, Universitat Polit`ecnica de Catalunya, Barcelona, Spain (1997), http://ima.udg.es/~ mateu 11. Sbert, M., Perez, F., Pueyo, X.: Global Monte Carlo: A Progressive Solution. In: Hanrahan, P.M., Purgathofer, W. (eds.) Rendering Techniques 1995 (Proceedings of the Sixth Eurographics Workshop on Rendering), New York, pp. 231–239. Springer, Heidelberg (1995) 12. Sbert, M., Pueyo, X., Neumann, L., Purgathofer, W.: Global Multipath Monte Carlo Algorithms for Radiosity. The Visual Computer 12(2), 47–61 (1996) 13. Szirmay-Kalos, L., Foris, T., Neumann, L., Csebfalvi, B.: An Analysis of QuasiMonte Carlo Integration Applied to the Transillumination Radiosity Method. Computer Graphics Forum (Proceedings of Eurographics 1997) 16(3), C271–C281 (1997) 14. Szirmay-Kalos, L., Purgathofer, W.: Global Ray-Bundle Tracing with Hardware Acceleration. In: Drettakis, G., Max, N. (eds.) Rendering Techniques 1998 (Proceedings of Eurographics Rendering Workshop 1998), New York, pp. 247–258. Springer Wien, Heidelberg (1998) 15. Szirmay-Kalos, L.: Stochastic methods in global illumination — state of the art report. Technical Report TR-186-2-98-23, Vienna University of Technology, Vienna, Austria (1998), http://www.fsz.bme.hu/~ szirmay/puba.html 16. Szirmay-Kalos, L.: Stochastic Iteration for Non-Diffuse Global Illumination. Computer Graphics Forum (Proceedings Eurographics 1999) 18, C-233–C-244 (1999) 17. Szirmay-Kalos, L., Sbert, M., Mart´ınez, R., Tobler, R.F.: Incoming First-Shot for Non-Diffuse Global Illumination. In: Spring Conference on Computer Graphics, Budmerice, Slovakia (2000), http://www.fsz.bme.hu/~ szirmay/puba.htm 18. Zareski, D., Wade, B., Hubbard, P., Shirley, P.: Efficient parallel global illumination using density estimation. In: Proceedings of Visualization 1995 — Parallel Rendering Symposium, pp. 219–230 (1995)
Monte-Carlo Modeling of Electron Kinetics in Room Temperature Quantum-Dot Photodetectors Vladimir Mitin, Andrei Sergeev, Li-Hsin Chien, and Nizami Vagidov University at Buffalo, The State University of New York, Buffalo NY 14260, USA
[email protected]
Abstract. Results of our many-particle Monte-Carlo modeling of kinetics and transport of electrons in InAs/GaAs quantum-dot infrared photodetectors are reviewed. We studied the dependence of the electron capture time on the electric field at different heights of the potential barriers around the dots. The capture time is almost independent of the electric field up to a critical field of about 1 kV/cm, and then substantially decreases as the field increases. We found that the capture time has an exponential dependence on the inverse of the average electron energy, which is in agreement with theory. Our results show that controllable kinetics in quantum-dot structures may provide a significant increase in the photoconductive gain, device detectivity, and responsivity.
1
Introduction
There are numerous applications, ranging from infrared (IR) cameras and remote sensing to healthcare, which require sensitive scalable IR sensors. Currently, IR technologies are mainly based on quantum-well infrared photodetectors (QWIPs). QWIPs are widely utilized in various sensors and imaging devices operating at 77 K and below. However, QWIPs have drawbacks such as insensitivity to the normal incidence radiation and fast carrier capture at room temperature. Therefore, one of the main goals for the next generation of imaging IR systems is to increase the operating temperature without reduction in sensitivity.
2
Simulation Models
It has been expected that quantum-dot infrared photodetectors (QDIPs) (see Fig. 1) may operate at room temperature. Theory has predicted a long photoexcited electron lifetime due to the reduced relaxation rate associated with the “phonon bottleneck”. This effect assumes that only specific phonons can provide electron transitions between narrow atom-like energy levels in quantum dots (QDs). However, the experiments showed that the electron relaxation in QDs is just slightly slower than in quantum wells [2]. Nevertheless, there are
Fig. 1. Schematic cross-section of a QD-structure with vertical electron transport
Fig. 2. QD-structure with the local potential barriers around QDs
still some other possibilities to suppress the capture processes. In order to improve the detector performance, one should suppress photoelectron capture into QDs, which means that one should separate the conducting electron states, which provide electron transport, from the localized states, which are excited by IR radiation. The QDs in real space can be separated from the conducting channels by potential barriers, which prevent the photoelectron capture. Moreover, the potential barriers may also be used to suppress the thermionic emission from QDs. In our papers [6,7], two realizations of potential barriers have been suggested: 1) local potential barriers may be formed around each QD (see Fig. 2) and 2) collective potential barriers may be formed in modulation-doped quantum-dot structures (see Fig. 3). In the first case the local potential barriers may be formed by means of homogeneous doping of the interdot space. The potential barriers are formed by the electrons trapped in QDs and by the ions in the depletion regions. There are two main mechanisms of photoelectron capture in QDIP structures. One is due to electron tunneling (see Fig. 4(a)), and the other one is due to thermo-excitation (see Fig. 4(b)). The height of the potential barriers can be controlled over a wide range by varying the host materials and the doping level, and by changing the characteristic length scale associated with the quantum-dot structure. In our model, the detailed form of the potential barriers is not critical. The only important assumption we accept is that the probability of
Fig. 3. Collective barriers and conducting channels in the modulation-doped quantum-dot structure
Fig. 4. Photoelectron capture due to: (a) electron tunneling and (b) thermoexcitation; Vm is the barrier height
Fig. 5. Schematic cross-section of the MD-QDIP structure
Fig. 6. Potential profile of the MD-QDIP structure with lateral electron transport
tunneling processes is small compared with the capture probability via thermo-excitation. The collective barriers may be created if the electrons populating the QDs are taken relatively far from the QDs. The potential barriers shown in Fig. 3 separate the groups of QDs from the electrons with high mobility in the undoped conducting channels. Currently, most of the QDIP structures use electron tunneling in the vertical direction through the stacked QDs (see Fig. 1). However, in such structures it might be difficult to have high electron mobility and high gain. An alternative structure was suggested, which is based on the lateral transport along the heterointerfaces [3,4] (see its cross-section and potential profile in Figs. 5 and 6). This modulation-doped quantum-dot infrared photodetector (MD-QDIP) is one of the possible realizations of structures with collective potential barriers. In such structures long carrier lifetimes can be achieved due to the large distances between the QDs and the heterointerfaces. High responsivity can be achieved due to long photo-excited carrier lifetimes and high-mobility transport of carriers. The optimal structures will have the advantages of quantum wells (strong coupling to radiation due to the large number of electrons in localized states), as well as the advantages of QDs (manageable kinetics and low capture rate), which make them promising candidates for advanced IR detectors.
3
Results of Simulation
We have developed a many-particle Monte-Carlo program for the simulation of three-dimensional electron transport and capture processes by QDs. Such a method is an effective and versatile tool for self-consistent simulations and for the study of transient processes. Our program includes all basic scattering processes, including electron scattering on acoustic, polar optical, and intervalley phonons. The modeling includes electrons in the Γ-, L-, and X-valleys and takes into account the redistribution of carriers between valleys. In the current paper we present the results of numerical simulations for two types of QDIPs: 1) the one with local potential barriers around QDs (see Fig. 2) and 2) the one with collective barriers and
Fig. 7. Capture time, τcapt , as a function of Vm . The inset shows the radius of a QD, a, and interdot distance, b.
Fig. 8. Capture time, τcapt , as a function of electric field, Ey , at different potential barrier heights, Vm
conducting channels (MD-QDIP structure) (see Figs. 3 and 6). We assume that the intradot relaxation processes can be associated with the inelastic electron-phonon scattering in the InAs dot area (Fig. 4(b)). We consider the electron capture process as a specific scattering process: (i) it is limited in space by the dot volume and (ii) during this process a carrier transits from a conducting state to a localized bound state, which is below the potential barrier, Vm. Let us discuss the results of simulation for the structure with local potential barriers around QDs. Figure 7 demonstrates significant changes in the dependence of the electron capture time, τcapt, on the height of the local barriers, Vm, in fields higher than 1 kV/cm. Figure 8 shows the dependence of the capture time, τcapt, on the electric field, Ey, for different heights of the potential barrier, Vm. The capture time, τcapt, is practically independent of Ey up to a critical field of the order of 1 kV/cm. At fields higher than the critical electric field the capture time substantially decreases as the field increases. Let us analyze the obtained simulation results in the framework of the electron heating model (for the detailed equations, please see Ref. [1]). The dependence of the average electron energy, ε̄, on the electric field, Ey, is shown in Fig. 9. The height of the barriers, Vm, does not change the energy ε̄ gained in the electric field, Ey. Figure 10 shows the exponential dependence of the capture time, τcapt,
Fig. 9. Average electron energy, ε̄, as a function of the electric field, Ey
Fig. 10. Capture time, τcapt, as a function of the inverse average electron energy, 1/ε̄
Fig. 11. Photoconductive gain as a function of the electric field, Ey
on the inverse average electron energy, 1/ε̄. We will compare these results of simulation with the analytical ones later. Figure 11 shows the photoconductive gain as a function of Ey for a device with a length of 1 μm. The gain reaches its maximum at Ey ≈ 1 kV/cm, which is the characteristic field for the dependences shown in Figs. 7 and 8. The nonmonotonic dependence of the gain has a simple explanation. At low electric fields τcapt is almost constant, while the gain increases due to the decrease of the transit time, ttr. At high electric fields, the gain decreases due to the exponential decrease of τcapt. Let us discuss the results obtained for the MD-QDIP structure with the collective barriers. We have used a slightly simplified potential profile of the GaAs layer: instead of the self-consistent potential profile, shown in Fig. 6 by the solid line, we use its triangular approximation, shown in the same figure by the dash-dotted line. We used a value of the electric field perpendicular to the heterointerface, Ex, equal to 11 kV/cm. Figure 12 shows the dependence of τcapt on the GaAs thickness, d. As expected, τcapt increases as d increases. Two factors are responsible for the increase of τcapt: 1) the increase of the distance between dots in the x-direction and 2) the increase of the height of the effective potential barrier, Vm. With the increase of d the electrons are more and more localized at the AlGaAs/GaAs interface. Figure 12 demonstrates an exponential dependence on d, which is in good agreement with
Fig. 12. Electron capture time, τcapt , as a function of GaAs thickness, d
Fig. 13. Electron capture time, τcapt , as a function of longitudinal electric field, Ey
Fig. 14. Electron capture time, τcapt, as a function of the inverse energy 1/ε̄
Fig. 15. The capture time, τcapt , as a function of the inverse temperature, 1/T
the experimental results of [3]. We can compare the results obtained for the model with the collective barriers with those obtained for the model with the local potential barriers. Figure 13 shows the dependence of τcapt on the applied longitudinal electric field, Ey. This dependence has the same characteristic features as the dependences obtained for the model with the local potential barriers (see the dependence with Vm = 0.1 eV in Fig. 8). Figure 14 shows the influence of electron heating on the capture time, τcapt: it shows the exponential dependence of τcapt on the inverse average electron energy, 1/ε̄. This result is almost the same as in Fig. 10, which was obtained for the model with local potential barriers. The comparison of the two results demonstrates that the model of electron heating can also be applied to the structures with collective barriers. The operating temperature of MD-QDIPs plays a significant role in the device performance. Figure 15 shows the exponential dependence of τcapt on the inverse temperature, 1/T.
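As a small numerical illustration of the gain behaviour discussed for Fig. 11, one can assume the standard photoconductive-gain relation g = τcapt/ttr with ttr = L/v; this relation and the parameter values below are assumptions chosen only for illustration and are not taken from the simulations.

#include <iostream>

int main() {
    const double L = 1.0e-4;        // device length, 1 micrometer in cm
    const double vDrift = 1.0e7;    // assumed drift velocity [cm/s]
    const double tTransit = L / vDrift;          // transit time t_tr
    const double tauCapture = 1.0e-10;           // assumed capture time [s]
    const double gain = tauCapture / tTransit;   // photoconductive gain g
    std::cout << "t_tr = " << tTransit << " s, gain ~= " << gain << '\n';
    return 0;
}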
4
Analytical Results
Here, as in our previous works [6,7,1], we have considered a quantum-dot structure with the local potential barriers around QDs. Without tunneling processes, the photoelectron capture rate,

1/τ_capt = N_d σ ṽ,   (1)

is given by the equation for the trapping cross-section [1],

σ = π α a² exp(−eV_m/kT) [1 + (3/4)(αa/l) F(V)]^(−1),   (2)

where

F(V) = a exp(−eV_m/kT) ∫_a^b (dr/r²) exp(eV(r)/kT),   (3)

ṽ is the electron thermal velocity, N_d is the concentration of QDs, a is the radius of the dot, b is the interdot distance, l is the electron mean-free path with respect
to elastic electron scattering, α is the probability for an electron at r ≤ a to be captured by the QD, and V_m is the maximum value of the potential barrier, i.e. V_m = V(a). We would like to emphasize that (2) and (3) are valid for any relation between l, a, and αa as well as for a wide variety of potentials. To compare these results with the conclusions of [5], let us consider the flat-potential approximation V = 0. In this case,

σ = π α a² [1 + (3/4)(αa/l)]^(−1).   (4)

Following Ref. [5], we accept that the inelastic intradot relaxation processes are described by the relaxation time τ_ε. In this case, the coefficient α can be evaluated as α ≈ a/l_ε, where l_ε = ṽ_d τ_ε and ṽ_d is the electron thermal velocity in the dot. Then, if a² ≪ l l_ε, we obtain for the capture rate:

1/τ_capt = π N_d a³ (ṽ/ṽ_d) (1/τ_ε).   (5)

In the opposite case, a² ≫ l l_ε, the capture rate is independent of the coefficient α and is given by:

1/τ_capt = 4π N_d D a,   (6)

where D = ṽl/3 is the diffusion coefficient. Both limiting cases are in agreement with the results of [5]. Note that the second term in the brackets in (4) describes the reduction of the carrier concentration near the dot due to the capture processes. As we already discussed, this effect becomes important in the electron capture process if l < αa, which results in (6). In the opposite case, the electron concentration is practically homogeneous in space and the carrier capture is determined by (5). Let us note that in the presence of the potential barriers the second term in the brackets in (2) also describes the reduction of the carrier concentration near the dot. Due to the repulsive potential barriers this effect is increased by a factor of F(V) given by (3). Comparing this result with the electron capture by repulsive impurity traps, one can associate F(V) with the Sommerfeld factor, which shows the reduction of the carrier density (electron wavefunction) near the trap. If the local reduction of the carrier density is negligible, the capture rate is equal to:

1/τ_capt = π N_d a³ (ṽ/ṽ_d) (1/τ_ε) exp(−eV_m/kT),   (7)

where the exponential factor describes the effect of the potential barriers on the capture processes. As seen in Fig. 10, τ_capt is proportional to exp(1/ε̄). Therefore, we may conclude that the carrier capture processes in the electric field may be described by (7), where the thermal energy kT has to be replaced by a factor of 2ε̄/3. Besides, Eq. (7) shows that the capture time has an exponential dependence on the inverse thermal energy 1/kT, which corresponds to the result shown in Fig. 15. Thus, our model correctly describes the electron kinetics in MD-QDIPs.
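For orientation, Eq. (7) can be evaluated directly; the sketch below computes a capture time for an assumed parameter set (the dot concentration, dot radius, velocity ratio, intradot relaxation time, and barrier height are illustrative values, not those of the simulated structures).

#include <cmath>
#include <iostream>

int main() {
    const double pi     = 3.141592653589793;
    const double kT     = 0.0259;    // thermal energy at 300 K [eV]
    const double Nd     = 5.0e16;    // dot concentration [cm^-3] (assumed)
    const double a      = 5.0e-7;    // dot radius, 5 nm in cm (assumed)
    const double vRatio = 1.0;       // ratio of thermal velocities v~/v~_d (assumed)
    const double tauEps = 1.0e-12;   // intradot relaxation time [s] (assumed)
    const double Vm     = 0.1;       // barrier height eV_m [eV]

    // Eq. (7): 1/tau_capt = pi * Nd * a^3 * (v~/v~_d) * (1/tau_eps) * exp(-eVm/kT)
    const double rate = pi * Nd * a * a * a * vRatio / tauEps * std::exp(-Vm / kT);
    std::cout << "tau_capt ~= " << 1.0 / rate << " s\n";
    return 0;
}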
5
Conclusion
Novel properties of QDIPs may be realized through the manipulation of dots and potential barriers created by the selective doping in quantum-dot structures. The structure with the local potential barriers around dots and lateral photoconductivity transport may be used to separate the localized intradot electron states from the conducting states in the matrix and, in this way, to control all electron processes. The controllable kinetics may provide significant increase in the photoconductive gain, device detectivity, and responsivity.
References 1. Chien, L.H., Sergeev, A., Vagidov, N., Mitin, V.: Hot-electron transport in quantumdot photodetectors. International Journal of High Speed Electronics and Systems 18, 255–264 (2008) 2. Ferreira, R., Bastard, G.: Phonon-assisted capture and intradot Auger relaxation in quantum dots. Appl. Phys. Lett. 74, 2818–2820 (1999) 3. Hirakawa, K., Lee, S.-W., Lelong, P., Fujimoto, S., Hirotanu, K., Sakaki, H.: Highsensitivity modulation-doped quantum-dot infrared photodetectors. Microelectronic Engineering 63, 185–192 (2002) 4. Lee, S.-W., Hirakawa, K.: Lateral conduction quantum-dot infrared photodetectors using photoionization of holes in InAs quantum dots. Nanotechnology 17, 3866–3868 (2006) 5. Lim, H., Movaghar, B., Tsao, S., Taguchi, M., Zhang, W., Quivy, A.A., Razeghi, M.: Gain and recombination dynamics of quantum-dot infrared photodetectors. Phys. Rev. B 74, 205321–1–8 (2006) 6. Mitin, V.V., Pipa, V.I., Sergeev, A.V., Dutta, M., Stroscio, M.: High-gain quantumdot infrared photodetector. Infrared Physics and Technol. 42, 467–472 (2001) 7. Vagidov, N., Sergeev, A., Mitin, V.: Infrared quantum-dot detectors with diffusionlimited capture. International Journal of High Speed Electronics and Systems 17, 585–591 (2007)
Particle Model of the Scattering-Induced Wigner Function Correction M. Nedjalkov1,2 , P. Schwaha1 , O. Baumgartner1, and S. Selberherr1 1
2
Institute for Microelectronics, TU Wien Gußhausstraße 27-29/E360, A-1040 Vienna, Austria Institute for Parallel Processing, Bulgarian Academy of Sciences Acad. G.Bontchev, Bl 25A, 1113 Sofia, Bulgaria
Abstract. The ability to account for quantum-coherent and phase-breaking processes is a major feature of the Wigner transport formalism. However, the coherent case is only obtained at significant numerical cost. Therefore, a scheme which uses coherent data obtained with a Green's function formalism has been developed. This scheme calculates the necessary corrections due to scattering using the Wigner approach and the associated Boltzmann collision models. The resulting evolution problem is not only theoretically derived, but simulation results are presented as well.
1
Introduction
Modeling and simulation of electronic devices is a part of science where knowledge of mathematics, physics, and electrical engineering is required simultaneously to design, analyze, and optimize these core components of integrated circuits. For the semiconductor industry it appears as the only alternative to the enormously expensive trial-and-error manufacturing approach. By means of device modeling and simulation the physical characteristics of semiconductor devices are explored in terms of charge transport and electrical behavior [2]. The progress in this field depends on the level of complexity of the transport models and the application of efficient numerical and programming techniques. Physics provides a hierarchy of charge transport models, summarized in Table 1. The ongoing miniaturization of devices forces the use of ever more sophisticated models to be able to capture all relevant effects and correctly calculate device behavior. The sophistication of the models, which increases from the bottom to the top, is also characterized by an increase of the numerical complexity. The analytical models utilized during the infancy of microelectronics for circuit design were replaced by more rigorous drift-diffusion and hydrodynamic formulations, based on the moments of the Boltzmann equation. As these can be solved only numerically, corresponding deterministic methods have been developed. When microelectronic devices enter the sub-micrometer scale, the moment equations fail, while the Boltzmann equation remains relevant. The Boltzmann equation provides a detailed classical picture of carrier evolution, where physical probability functions are associated with the various processes describing
Table 1. Carrier Transport Models

QUANTUM (nanometer and femtosecond scales):
  MODEL: Green's Functions Method. FEATURES: Accounts for both spatial and temporal correlations.
  MODEL: Wigner equation; von-Neumann equation for the density matrix. FEATURES: Quantum-kinetic models of spatial correlations.
  MODEL: Quantum-hydrodynamic equations. FEATURES: Quantum corrections to classical hydrodynamics.
CLASSICAL:
  MODEL: Boltzmann transport equation. FEATURES: Comprehensive down to sub-µm and/or ps scales.
carrier dynamics. The probabilistic nature along with the fact that the equation deals with a seven-dimensional phase space calls for a stochastic approach. The development of Monte Carlo (MC) methods for solving the equation can be considered as a major step in this field. The nanometer and femtosecond scales of operation of modern devices give rise to a number of phenomena which are beyond purely classical description. These phenomena are classified in the International Technology Road-map for Semiconductors (ITRS, www.itrs.net) according to their importance to the performance of next generation devices. As is recognized in ITRS Table 122, ‘Methods, models and algorithms that contribute to prediction of CMOS limits’ are of utmost interest next to ‘... computationally efficient quantum based simulators’. Quantum models capable of describing not only purely coherent phenomena such as quantization and tunneling but also phase breaking processes such as interactions with phonons are especially relevant. The rising computational requirements resulting from the increasing complexity are a major concern for the development and deployment of these approaches. The harmony between theoretical and numerical aspects of the classical models is disturbed in the upper part of Table 1. At the very top is the rigorous non-equilibrium Green’s function approach (NEGF). It allows the simultaneous consideration of coherent processes of correlations in space and time and of phase breaking processes due to the lattice. However, because of numerical issues the applicability is restricted to stationary structures, basically in the ballistic limit [6]. The computational burden can be reduced by working in a mode space, obtained by separation of the problem into longitudinal and transverse directions. Furthermore, if the transverse potential profile along the transport direction remains uniform, the modes in these directions can be decoupled so that the transport becomes quasimultidimensional. However, this is not the case for devices with a squeezed channel or with abruptly flared out source/drain contacts which require to consider two- and three-dimensional effects [3]. Phonon scattering has been included within a quasi-two-dimensional transport description [5]. Local approximations are commonly utilized in this case, where the phonon self-energy terms are diagonal in the coordinate representation. This is well justified for deformation
potential interaction, but must be adopted for interactions with polar phonons, surface roughness, and ionized impurities. One level below in complexity are the Markovian in time (for pure states) approaches based on the density matrix or the unitary equivalent Wigner function. The recent interest in the semiconductor device society to simulation methods which rely on the Wigner transport picture is due to the ability of the latter to account for quantum-coherent and phase-breaking processes of de-coherence due to scattering of the carriers with phonons and other crystal lattice imperfections [4]. In this picture scattering can be accounted for in a straightforward way by using the Boltzmann collision models, while the coherent counterpart gives rise to a heavy numerical burden. This is in contrast to the Green’s function approach which is numerically efficient in the cases of coherent transport.
2
Scattering-Induced Wigner Function Correction
We propose an approach [1] which combines the advantages of the two pictures. Green's function calculations of the coherent transport determined by the boundary conditions deliver the coherent Wigner function f_wc:

ρ(x, x′) = −2i ∫ dE/(2π) G^<(x, x′, E) ;   f_wc(x, k_x) = (1/2π) ∫ ds e^(−i k_x s) ρ(x + s/2, x − s/2)   (1)

The lesser Green's function G^< depends on the coordinates x, x′ and the energy E. The coherent Wigner function f_wc(x, k_x) is obtained from the density matrix ρ(x1, x2) with the help of the center-of-mass transformation x = (x1 + x2)/2, s = x1 − x2. Furthermore, f_wc is a solution of the coherent part of the Wigner-Boltzmann equation

(ħk_x/m) ∂f_w(x, k)/∂x = ∫ dk′_x V_w(x, k_x − k′_x) f_w(x, k′_x, k_yz) + ∫ dk′ f_w(x, k′) S(k′, k) − f_w(x, k) λ(k)   (2)

V_w is the Wigner potential. Phase-breaking processes are accounted for by the Boltzmann scattering operator with S(k, k′), the scattering rate for a transition from k to k′; λ(k) = ∫ dk′ S(k, k′) is the total out-scattering rate. The coherent problem is obtained from (2) by setting the scattering rate S (and thus λ) to zero. In this case the k_yz dependence remains arbitrary and can be specified via the boundary conditions. Formally, the extrapolation must be such that f_wc(x, k_x) is recovered by the integral over k_yz. Moreover, we want to cancel the Boltzmann scattering operator at the boundaries, where standard equilibrium conditions are assumed. Hence, a Maxwell-Boltzmann distribution f_MB(k_yz) is assumed in the yz directions, and the functions

f_wc(x, k) = f_wc(x, k_x) (ħ²/2πmkT) e^(−ħ²(k_y² + k_z²)/2mkT) ;   f_w^Δ(x, k) = f_w(x, k) − f_wc(x, k)
can be introduced. The equation for the correction fwΔ is obtained by subtracting the coherent counterpart from (2). The correction is zero at the device boundaries, since the same boundary conditions are assumed for both cases.
3
Classical Limit
The obtained equation is approximated with the help of the classical limit:

∫ dk′_x V_w(x, k_x − k′_x) f_w^Δ(x, k′_x, k_yz) = −(eE(x)/ħ) ∂f_w^Δ(x, k_x, k_yz)/∂k_x   (3)

This approximation is valid for slowly varying potentials, so that the force F(x) = eE(x) can only be a linear function within the spatial support of f_w^Δ. The force gives rise to Newton's trajectories

X(t) = x − ∫_t^0 (ħK_x(τ)/m) dτ ;   K_x(t) = k_x − ∫_t^0 (F(X(τ))/ħ) dτ   (4)

initialized by x, k_x, 0. In this definition, if t > 0 the trajectory is called forward, otherwise it is a backward one. A backward trajectory crosses the boundary of the device at a certain time t_b, so that f_w^Δ(X(t_b), k(t_b)) = 0. The approximated equation can be transformed with the help of (4)

f_w^Δ(x, k) = ∫_{t_b}^0 dt ∫ dk′ f_w^Δ(X(t), k′) S(k′, k(t)) e^(−∫_t^0 λ(k(τ)) dτ)
 + ∫_{t_b}^0 dt [ ∫ dk′ f_wc(X(t), k′) S(k′, k(t)) − f_wc(X(t), k(t)) λ(k(t)) ] e^(−∫_t^0 λ(k(τ)) dτ)   (5)
into a Fredholm integral equation of the second kind with a free term given by the second row of (5) determined by fwc . The solution can be presented as a Neumann series with terms obtained by iterative application of the kernel to the free term. The series corresponds to a Boltzmann kind of evolution process, where the initial condition is given by the free term. The genuine mixed mode problem posed by the boundary conditions is transformed into a classical evolution of the quantum-coherent solution fwc . The latter, however, allows negative values and thus cannot be interpreted as an initial distribution of classical electrons: rather positive and negative particles initiate the evolution process. In this way the quantum information remains in (5) by the sign of the evolving particles. The boundary is still presented by tb , however, it has a different physical meaning: it only absorbs particles, since trajectories with evolution time t < tb < 0 do not contribute to the solution. In very small devices the carrier dwelling time can be so small that the probability for multiple scattering events tends to zero. In such cases the initial condition itself presents the correction fwΔ . In all other cases the evaluation of the initial condition is a necessary step for finding the solution. A particle approach derived for this purpose with the help of the numerical Monte Carlo theory is presented next.
4
Monte Carlo Approach
The general Monte Carlo task is to compute the averaged value I(Ω) of f_w^Δ in a given domain Ω of the phase space:

I = ∫dx ∫dk_x θ_Ω(x, k_x) f_w^Δ(x, k_x) = ∫dx ∫dk_x ∫dk_y ∫dk_z θ_Ω(x, k_x) f_w^Δ(x, k)

The domain indicator θ_Ω(x, k_x) is unity if its arguments belong to Ω, and 0 otherwise. We first consider the contribution of the second component f_0B(x, k_x) of the initial condition, which is the last term in (5):

I_0B(Ω) = ∫_{−∞}^0 dt ∫dx ∫dk_x ∫dk_y ∫dk_z (ħ²/2πmkT) e^(−ħ²(k_y²+k_z²)/2mkT) f_wc(X(t), K_x(t)) θ_Ω(x, k_x) λ(K_x(t), ·) e^(−∫_t^0 λ(K_x(τ), ·) dτ) θ_D(X(t))
The lower bound of the time integral has been extended to −∞, since the introduced device domain indicator θ_D takes care of its correct value at t_b. The backward parametrization of the trajectories will be changed to a forward one, aiming at a more heuristic picture of the evolution of the real carriers. Two important properties of the trajectories will be utilized. (I) Any phase space point reached by the trajectory at any given time can be used for initialization, since it obeys a system of first-order differential equations. A full notation of a trajectory X(t), K_x(t) contains the initialization point: X(t) = X(t; x, k_x, 0) = x^t, K_x(t) = K_x(t; x, k_x, 0) = k_x^t. According to (I), the initialization can be changed from x, k_x, 0 to x^t, k_x^t, t so that x = X(0; x^t, k_x^t, t), k_x = K_x(0; x^t, k_x^t, t). The second property (II) replaces, for stationary transport, the absolute clock by a relative one: trajectories are invariant with respect to a shift of both the initialization and the parametrization time. Applying these properties consecutively gives:

X(τ) = X(τ; x, k_x, 0) = X(t − τ; x^t, k_x^t, t) = X(−τ; x^t, k_x^t, 0) = X^t(−τ)
K_x(τ) = K_x(τ; x, k_x, 0) = K_x(t − τ; x^t, k_x^t, t) = K_x(−τ; x^t, k_x^t, 0) = K_x^t(−τ)

With the introduced short notations X^t, K^t for the new initialization x^t, k_x^t, 0 (of the same trajectory) it holds that x = X^t(−t), k_x = K_x^t(−t) (recall that t < 0). Finally, according to the Liouville theorem, dx dk_x = dx^t dk_x^t. Here we need to discuss the involved domains in the phase space: the initially specified domain Ω is mapped backwards in time onto some domain Ω(t), so that the integration over x^t, k_x^t must be confined to Ω(t). However, we can augment the integration domain to the whole space, since by default trajectories defined by points outside Ω(t) will not enter Ω after time −t. For such trajectories θ_Ω = 0, so that the value I_0B remains unchanged by this step. With these relations the expression for I_0B becomes

I_0B = ∫_{t_b}^0 dt ∫dx^t ∫dk_x^t ∫dk_y ∫dk_z (ħ²/2πmkT) e^(−ħ²(k_y²+k_z²)/2mkT) f_wc(x^t, k_x^t) θ_Ω(X^t(−t), K_x^t(−t)) λ(k_x^t, k_y, k_z) e^(−∫_t^0 λ(K_x^t(−τ), k_y, k_z) dτ) θ_D(x^t).
We change the sign of τ in the integral of the exponent, then replace t by −t, reorder the terms according to their appearance in time, and augment the expression by completing the probability densities in the curly brackets:

I_0B = ∫_0^∞ dt ∫dx^t ∫dk_x^t ∫dk_y ∫dk_z θ_D(x^t) { (ħ²/2πmkT) e^(−ħ²(k_y²+k_z²)/2mkT) } f_wc(x^t, k_x^t) [ λ(k_x^t, k_y, k_z) / λ(K_x^t(t), k_y, k_z) ] { λ(K_x^t(t), k_y, k_z) e^(−∫_0^t λ(K_x^t(τ), k_y, k_z) dτ) } θ_Ω(X^t(t), K_x^t(t))   (6)

These conditional probabilities give rise to a MC algorithm for the evaluation of I_0B(Ω). The computational task is now specified as the calculation of the value f_0B^{i,j} = f_0B(x_i, k_x,j) at a given point (x_i, k_x,j). Then Ω = Ω^{i,j} can be determined by the phase space area with a small volume Δ = Δk_x Δx around (x_i, k_x,j), so that f_0B^{i,j} = I_0B(Ω^{i,j})/Δ. Another peculiarity is the pointwise evaluation of f_wc, giving rise to the approximation

∫dx^t ∫dk_x^t f_wc(x^t, k_x^t) ≈ Σ_{i,j} f_wc(i, j) Δ

in (6). The following algorithm for the evaluation of f_0B in the points of the mesh (i, j) is suggested:
1. Associate to each node i, j of the mesh in the phase space an estimator ξ^{i,j} initialized to zero.
2. The k_x^t, x^t integrals in (6) correspond to a loop over the i, j nodes; initiate l = 1, 2, ..., N trajectories from each node: the initial points can be fixed on the node or randomized within the cell. For each trajectory:
3. Select the k_y,l, k_z,l values according to the term in the first curly brackets: algorithms for the generation of Gaussian random numbers are well established. This step accounts for the k_y, k_z integrals.
4. The point x^t, k_x^t initializes the trajectory K_x^t(t), X^t(t) at time t = 0. Along the trajectory, λ(K_x^t(t), k_y,l, k_z,l) becomes a function of the time t. A value of the time t = t_l, the free-flight time, is generated by the term in the last curly brackets by applying one of the well-known algorithms utilized for device Monte Carlo simulation.
5. Add to the estimator ξ^{i,j} at the mesh node i, j nearest to K_x^t(t_l), X^t(t_l) the weight
   w_l = f_wc(x^t, k_x^t) λ(k_x^t, k_y,l, k_z,l) / λ(K_x^t(t_l), k_y,l, k_z,l)
6. At the end of the i, j loop divide ξ^{i,j} by N.
This algorithm can be further enhanced by a selection of the contribution term in the i, j sum according to |f_wc(i, j)|. The current version, however, is flexible in providing information about the significance of particular device regions. In a similar way the first term gives rise to the multiple integral

I_0A = ∫_0^∞ dt ∫dx^t ∫dk_x^t ∫dk_y ∫dk_z ∫dk′ θ_D(x^t) { (ħ²/2πmkT) e^(−ħ²(k_y²+k_z²)/2mkT) } f_wc(x^t, k′_x) { S(k′, k_x^t, k_y, k_z)/λ(k′) } [ λ(k′)/λ(K_x^t(t), ·) ] { λ(K_x^t(t), ·) e^(−∫_0^t λ(K_x^t(τ), ·) dτ) } θ_Ω(X^t(t), K_x^t(t))

When compared to the previous case, this algorithm accounts for an additional k′ integration related to the extra scattering event in I_0A:
1. Associate to each node i, j an estimator ξ^{i,j}.
2. Loop over the i, j nodes, corresponding to the k_x and x^t integrals, and initiate l = 1, 2, ..., N trajectories from each node.
3. Select the k_y,l, k_z,l values according to the term in the first curly brackets, thus accounting for the k_y, k_z integrals.
4. Select a wave vector according to the term in the second curly brackets. Input parameters are k_x, k_y,l, k_z,l; the particular value of the after-scattering wave vector is denoted by k′ = k′_x,l, k_y,l, k_z,l.
5. The point x^t, k′_x,l initializes the trajectory K_x,l^t(t), X_l^t(t) at time t = 0. Generate a free-flight time value t_l according to the term in the last curly brackets.
6. Add to the estimator ξ^{i,j} at the mesh node i, j nearest to K_x,l^t(t_l), X_l^t(t_l) the weight
   w_l = f_wc(x^t, k′_x) λ(k′_x, k_y,l, k_z,l) / λ(K_x,l^t(t_l), k_y,l, k_z,l)
7. At the end of the i, j loop divide ξ^{i,j} by N.
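A compact sketch of the first estimator (the algorithm for f_0B) is given below, for a constant force, a parabolic band, a constant total out-scattering rate λ, and a hypothetical coherent function f_wc; all of these are simplifying assumptions made only to keep the example self-contained, since the real λ and f_wc come from the scattering model and the Green's function calculation.

#include <cmath>
#include <random>
#include <vector>

struct Params {
    double force  = 1.0e-3;   // constant force F/hbar (assumed, arbitrary units)
    double mOverH = 1.0;      // m/hbar in matching units (assumed)
    double lambda = 0.01;     // constant out-scattering rate (assumed)
    double sigmaK = 0.05;     // std. dev. of the Maxwell-Boltzmann k_y, k_z sampling
};

// Hypothetical coherent Wigner function standing in for the NEGF input f_wc.
double fwc(double x, double kx) {
    return std::exp(-x * x / 200.0) * std::exp(-kx * kx / 0.01);
}

// Estimate f_0B on an (x, kx) mesh: N trajectories per node, Gaussian (ky, kz),
// exponential free flight, weight recorded at the node nearest to the end point.
std::vector<double> estimateF0B(const std::vector<double>& xs,
                                const std::vector<double>& kxs,
                                int N, const Params& p, unsigned seed = 1) {
    std::mt19937_64 gen(seed);
    std::normal_distribution<double> gauss(0.0, p.sigmaK);
    std::uniform_real_distribution<double> u(0.0, 1.0);
    std::vector<double> xi(xs.size() * kxs.size(), 0.0);   // estimators xi_{i,j}

    for (std::size_t i = 0; i < xs.size(); ++i)
        for (std::size_t j = 0; j < kxs.size(); ++j)
            for (int l = 0; l < N; ++l) {
                gauss(gen); gauss(gen);   // sample (ky, kz); with a constant lambda
                                          // they drop out of the weight
                const double t = -std::log(u(gen)) / p.lambda;    // free-flight time
                // Newton trajectory for a constant force (analytic integration):
                const double kxt = kxs[j] + p.force * t;
                const double xt  = xs[i] + (kxs[j] * t + 0.5 * p.force * t * t) / p.mOverH;
                // Locate the nearest mesh node and record the weight w_l.
                std::size_t ib = 0, jb = 0;
                for (std::size_t ii = 1; ii < xs.size(); ++ii)
                    if (std::abs(xs[ii] - xt) < std::abs(xs[ib] - xt)) ib = ii;
                for (std::size_t jj = 1; jj < kxs.size(); ++jj)
                    if (std::abs(kxs[jj] - kxt) < std::abs(kxs[jb] - kxt)) jb = jj;
                xi[ib * kxs.size() + jb] += fwc(xs[i], kxs[j]);   // w_l = fwc * lambda/lambda
            }
    for (double& v : xi) v /= N;
    return xi;
}

Calling estimateF0B with uniform x and kx meshes returns the ξ estimators of the first algorithm above; with a constant λ the selected (ky, kz) values cancel in the weight, which the sketch makes explicit.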
Fig. 1. Initial carrier density, densities due to the first and second correction integrals, and corrected density (carrier concentration in cm⁻³ versus position in nm), calculated with N = 10^4. As expected theoretically, the two integrals overlap deep in the equilibrium contacts. The scattering gives rise to an increase of the electron density in the well as compared to the coherent case.
5
Simulation Results
The developed algorithm has been applied to a GaAs resonant tunneling diode. The 4 nm wide quantum well is surrounded by 1 nm barriers with a height of 0.3 eV, and a bias of 0.3 V is applied. For such a small device the initial condition represents the Wigner function correction. The results shown in Fig. 1 demonstrate that the physical effect of the scattering processes cannot be neglected, as is done in a purely coherent approximation.
Acknowledgment This work has been partially supported by the Österreichische Forschungsgemeinschaft (ÖFG), Project MOEL273, and by the Austrian Science Fund, special research program IR-ON (F2509).
References 1. Baumgartner, O., Schwaha, P., Karner, M., Nedjalkov, M., Selberherr, S.: Coupling of non-equilibrium Green’s function and Wigner function approaches. In: Proc. Simulation of Semiconductor Processes and Devices, Hakone, Japan, pp. 931–934 (2008) ISBN: 978-1-4244-1753-7 2. Goodnick, S.M., Vasileska, D.: Computational electronics. In: Buschow, K.H.J., Cahn, R.W., Flemings, M.C., Kramer, E.J., Mahajan, S. (eds.) Encyclopedia of Materials: Science and Technology, vol. 2, pp. 1456–1471. Elsevier, New York (2001) 3. Luisier, M., Schenk, A., Fichtner, W.: Quantum transport in two- and three- dimensional nanoscale transistors: Coupled mode effects in the non-equilibrium Green’s function formalism. Journal of Applied Physics 100, 043713–1–043713–12 (2006) 4. Querlioz, D., Saint-Martin, J., Do, V., Bournel, A., Dolfus, P.: Fully quantum self-consistent study of ultimate DG-MOSFETs including realistic scattering using Wigner Monte Carlo approach. In: Intl. Electron Devices Meeting, pp. 1–4 (2006) 5. Svizhenko, A., Antram, M.P.: Role of scattering in nanotransistors. IEEE Trans. on Electron Devices 50, 1459–1466 (2003) 6. Venugopal, R., Ren, Z., Datta, S., Lundstrom, M.S.: Simulating quantum transport in nanoscale transistors: real versus mode space approach. Journal of Applied Physics 92, 3730–3739 (2002)
Analysis of the Monte Carlo Image Creation by Uniform Separation A.A. Penzov1, I.T. Dimov1,2 , L. Szirmay-Kalos3, and V.N. Koylazov4 1
4
Institute for Parallel Processing - Bulgarian Academy of Sciences Acad. G. Bonchev, bl. 25A, 1113 Sofia, Bulgaria 2 ACET Centre, University of Reading, Whiteknights, PO Box 217, Reading, RG6 6AH, UK 3 Budapest University of Technology and Economics, Magyar Tudósok krt. 2, H-1117 Budapest, Hungary Chaos Software, Alexander Malinov Blvd., bl. 33 entr. B., 1729 Sofia, Bulgaria
Abstract. This paper considers the uniform separation technique of the integration domain for Monte Carlo image creation. The Uniform Separation technique is based on a symmetric separation into small equal-size sub-domains to solve the rendering equation, which describes the photon propagation in a virtual scene. It tries to reduce the variance when solving the rendering equation pixel by pixel in the image pixel matrix. Our goal is to analyze and study the Monte Carlo estimators by Uniform Separation for the numerical treatment of the rendering equation, as well as their influence on the total process of image creation. We develop our considerations on the basis of the mathematical theory of Monte Carlo estimation and the experience of image creation. New results for the symmetric separation of the integration domain are obtained and presented.
1
Introduction
The solution of the rendering equation is equivalent to the estimation of the global illumination (the light transport problem in a virtually described scene) for photo-realistic image creation [6]. It is a Fredholm integral equation of the second kind, describing the light propagation in a closed domain (see Figure 1). The radiance L, leaving a point x on the surface of the scene in direction ω ∈ Ω_x, where Ω_x is the hemisphere at point x, is the sum of the self-radiated light source radiance L_e and the reflected radiance:

L(x, ω) = L_e(x, ω) + ∫_{Ω_x} L(h(x, ω′), −ω′) f_r(−ω′, x, ω) cos θ′ dω′,   (1)
where h(x, ω′) is the first point that is hit when shooting a ray from x into direction ω′. The radiance L_e has a non-zero value if the considered point x is a point of a solid light source. Therefore, the reflected radiance in direction ω is an integral of the radiance incoming from all points which can be seen through the hemisphere Ω_x at point x, attenuated by the surface BRDF (Bidirectional
Fig. 1. The geometry for the rendering equation
Reflectance Distribution Function) fr(−ω′, x, ω) and the projection cos θ′. The angle θ′ is the angle between the surface normal nx at point x and the direction ω′. The law of energy conservation holds, i.e.,
$$\int_{\Omega_x} f_r(-\omega', x, \omega)\, \cos\theta'\, d\omega' < 1,$$
because a real scene always reflects less light than it receives from the light sources, due to light absorption by the objects. If the point x lies on a transparent object, the transmitted light component must be added to Equation (1) and the integration domain becomes the sphere Ω(x) at point x, where Ω(x) = Ωx ∪ Ω̄x. According to the Neumann series [12], the global illumination can be modeled as a stationary linear iterative process
$$L(x, \omega) = L_e + \mathcal{T}L_e + \mathcal{T}^2 L_e + \ldots + \mathcal{T}^i L_e + \ldots = \sum_{i=0}^{\infty} \mathcal{T}^i L_e \approx \sum_{i=0}^{s} \mathcal{T}^i L_e + \varepsilon_s, \qquad (2)$$
where the integral operator $\mathcal{T}$ is a contraction reflecting the law of energy conservation, and εs is the truncation error. Because the rendering equation involves multi-dimensional integrals, it is very difficult or sometimes impossible to solve it analytically. Monte Carlo (MC) methods have proved to be the only effective methods for solving the rendering equation. In order to improve the Monte Carlo method and speed up the computation, much of the effort is directed to variance reduction techniques (see [3]). Separation of the integration domain is a widely used Monte Carlo variance reduction method. In this paper we consider variance reduction at image creation by a stratified Monte Carlo technique with uniform separation of the integration domain. The question of how to estimate the convergence rate for the creation of the total image, and what this rate is in the worst case, is important for comparing different Monte Carlo algorithms. We analyze the convergence rate in different situations to find an estimate for the considered approach.
2 Uniform Separation of Integration Domain
The Uniform Separation technique of the integration domain for the rendering equation was introduced by us in [4] and developed in [8]. It utilizes the symmetry property
Fig. 2. Uniform Triangle Separation - UTS
for hemisphere or sphere separation as well as symmetric sample generation. Based on it, the unit hemisphere with center at the origin of the Cartesian coordinate system can be partitioned into equal-size symmetric sub-domains:
– the coordinate planes partition the hemisphere into 4 equal areas,
– the bisector planes of the dihedral angles (X, Y), (X, Z) and (Z, Y) partition each area into 6 equal orthogonal spherical triangles, as shown in Figure 2 for the area with positive coordinate values of X, Y, and Z.
In this way we can separate the whole hemisphere Ωx into 24 symmetric parts Ωi, called Uniform Triangle Separation (UTS). Then, through a bijection F(u, v) : [0, 1]² → Ω_ABC, we generate samples in Ω_ABC, and by applying the symmetric sampling scheme [4] we can generate sampling points over Ωx. Since the symmetry is an isometry, the generation of uniformly distributed random samples in a sub-domain leads to a uniform distribution of all samples in the hemisphere. Because of the mapping F(u, v) : [0, 1]² → Ω_ABC, more sampling points are grouped around the vertices coincident with the coordinate axes, as shown in [9]. This could affect the uniformity of the sampling point distribution over Ωx partitioned into 24 spherical triangles Ω△, which might be a drawback for numerical integration. To avoid this and strengthen the uniformity of the sampling point distribution over Ωx, we can apply Uniform Quadrangle Separation (UQS), as shown in Figure 3. The hemisphere Ωx is partitioned into 12 equal-size spherical quadrangles Ω□, each constructed as the union of two neighboring triangles Ω△. The symmetry property still holds and all Ω□ are symmetric to each other.
Fig. 3. Uniform Quadrangle Separation - UQS
Another possible partition of Ωx, shown in Figure 4, is achieved by combining UTS and UQS. The hemispherical integration domain is separated into 16 parts. The first 8 sub-domains are equal-size orthogonal spherical triangles Ω△. They are symmetric to each other and grouped with a common vertex around the normal vector to the surface. The hemispherical integration domain is completed with 8 more sub-domains of equal-size spherical quadrangles Ω□, also symmetric to each other. The symmetry property allows us to calculate the coordinates of the symmetric points in parallel. Once a sampling point P0(x0, y0, z0) is generated in ΩABCD or ΩCED, the coordinates of all symmetric points in the other sub-domains can be obtained by simply changing the signs and/or swapping the places of x0 and y0, while z0 is not affected; for example, P1(y0, x0, z0), P2(y0, −x0, z0), ..., and P7(x0, −y0, z0), as illustrated by the sketch below. Useful bijections F1(u, v) : [0, 1]² → Ω△ and F2(u, v) : [0, 1]² → Ω□, mapping the uniformly distributed random variables u, v ∈ [0, 1] into sampling points on ΩCED and ΩABCD, are derived in [1,4,8].
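To make the symmetric sampling scheme concrete, the following sketch (a simplified Python illustration of our own, not the implementation from [4,8]; the base-domain mapping is only a stand-in for the exact bijections F1 and F2) generates one point in a base sub-domain and produces the remaining seven points by the sign changes and coordinate swaps listed above.

    import math, random

    def sample_base_point():
        # Stand-in for the bijection F(u, v): [0,1]^2 -> base sub-domain.
        # We sample the unit hemisphere uniformly (z = cos(theta) uniform in [0,1])
        # and fold the point into the region with x >= y >= 0, z >= 0.
        u, v = random.random(), random.random()
        z = v
        r = math.sqrt(max(0.0, 1.0 - z * z))
        phi = 2.0 * math.pi * u
        x, y = abs(r * math.cos(phi)), abs(r * math.sin(phi))
        if x < y:
            x, y = y, x
        return x, y, z

    def symmetric_points(x0, y0, z0):
        # The eight symmetric copies of P0; the z coordinate is never affected.
        return [( x0,  y0, z0), ( y0,  x0, z0), ( y0, -x0, z0), (-x0,  y0, z0),
                (-x0, -y0, z0), (-y0, -x0, z0), (-y0,  x0, z0), ( x0, -y0, z0)]

    samples = symmetric_points(*sample_base_point())  # eight directions from one (u, v) pair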
3 Error Analysis
Let us consider the iterative Monte Carlo solution of (2) following the Uniform Separation technique. On each iteration we solve multidimensional integrals that are equivalent to the problem of computing the integral
$$J(L) = \int_{\Omega} f(x)\, p(x)\, dx,$$
Fig. 4. Combined Uniform Separation - CUS
where Ω ≡ E_s = {x : 0 ≤ x_i < 1, i = 1, 2, ..., s} is the s-dimensional unit cube and p is a probability density function, i.e. p(x) ≥ 0 and ∫_Ω p(x) dx = 1. Let us apply a uniform separation of Ω into m equal cubes Ω_j (j = 1, 2, ..., m), so that
$$\Omega = \bigcup_{j=1}^{m} \Omega_j, \qquad \Omega_i \cap \Omega_j = \emptyset, \; i \ne j.$$
Then $J = \sum_{j=1}^{m} J_j$, where $J_j = \int_{\Omega_j} f(x)\, p(x)\, dx$ is the mean of the random variable $p_j f(x_j)$, $x_j$ is a random point in Ω_j with probability density function $p(x)/p_j$, and obviously $p_j = \int_{\Omega_j} p(x)\, dx$. It is possible to estimate J_j by the average of N_j samples,
$$\bar{\xi}_{N_j} = \frac{p_j}{N_j} \sum_{i=1}^{N_j} f(x_j^{(i)}),$$
and J by $\xi_N^{*} = \sum_{j=1}^{m} \bar{\xi}_{N_j}$, where $\sum_{j=1}^{m} N_j = N$ (see [10]).
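For illustration, a minimal Python sketch of this estimator (our own toy example with a uniform density p ≡ 1 and a smooth test integrand, not the rendering integrand) separates the unit square into m = k² equal sub-squares and draws one sample per sub-domain, so that p_j = 1/m.

    import random

    def stratified_estimate(f, k):
        # Uniform separation of [0,1]^2 into m = k*k equal sub-squares,
        # one random point per sub-domain (N_j = 1, p_j = 1/m).
        m = k * k
        total = 0.0
        for i in range(k):
            for j in range(k):
                x = (i + random.random()) / k
                y = (j + random.random()) / k
                total += f(x, y) / m      # contribution p_j * f(x_j)
        return total

    f = lambda x, y: x * x + y            # smooth toy integrand; exact value is 5/6
    print(stratified_estimate(f, 32))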
Theorem 1 (Dupach (1956); Haber (1966)). Let N_j = 1 for j = 1, ..., m (so m = N). Let the function f(x) have continuous and bounded derivatives, $\left|\frac{\partial f}{\partial x_j}\right| \le \alpha$ for every j = 1, 2, ..., s, and let there exist constants c1 and c2 such that the conditions
$$p_j \le \frac{c_1}{N} \quad \text{and} \quad d_j \le \frac{c_2}{N^{1/s}}$$
hold, where $p_j = \int_{\Omega_j} p(x)\, dx$ is the probability and $d_j = \sup_{x_1, x_2 \in \Omega_j} |x_1 - x_2|$ is the diameter of the domain Ω_j for each j = 1, 2, ..., m. Then for the variance of the MC estimator $\xi_N^{*(s)}$ the following relation is fulfilled:
$$\mathrm{Var}\left[\xi_N^{*(s)}\right] \le (s\, c_1 c_2\, \alpha)^2\, N^{-1-\frac{2}{s}}.$$
Based on the considerations in [5], for the multidimensional integrals in (2) the integration domain is
$$\Omega_x = \Omega_x^1 \times \Omega_x^2 \times \ldots \times \Omega_x^s = \bigcup_{j=1}^{m^s} \Omega_j,$$
where m^s is the number of sub-domains Ω_j and s is the dimension of the domain Ωx. One can see that with Uniform Separation $p_j = 1/m^s$ and $d_j = \sqrt{s}/m$ for j = 1, ..., m^s, and the constants can be estimated by $1 \le c_1$ and $(\sqrt{s}/m)\, N^{1/s} \le c_2$. According to the above Theorem 1, the variance can then be estimated as
$$\mathrm{Var}\left[\xi_N^{*(s)}\right] \le s^3 \alpha^2\, N^{-1-\frac{2}{s}}.$$
Therefore the convergence rate of the MC solution with Uniform Separation for a continuous integrand is O(N^{-1}). The main difficulties of Monte Carlo methods for the numerical integration of the rendering equation are related to the smoothness of the integrand. Due to the scene geometry, in the general case the integrand is discontinuous. Because the integration domain is partitioned into sub-domains that are uniformly small both in probability and in size, the solution of the rendering equation by the Monte Carlo method with Uniform Separation is very similar to Quasi-Monte Carlo approaches. Therefore, the same error analysis can be applied to Monte Carlo with Uniform Separation as to the Quasi-Monte Carlo approaches. Since the integrands in computer graphics are discontinuous, only very pessimistic upper error bounds for the integral approximation with the Quasi-Monte Carlo approach can be expected. Taking this fact into account, an analytic generalization of the Quasi-Monte Carlo convergence rate (see [11,12]) for realistic image creation was derived by Szirmay-Kalos:
– Quasi-Monte Carlo integration of the rendering equation is analyzed by decomposing the discontinuous function into smooth and discontinuous parts;
– the error bound for Quasi-Monte Carlo integration of a discontinuous function is still better than the order of the Monte Carlo bound for small dimensions.
The observed convergence rate for discontinuous functions with the Quasi-Monte Carlo approach is $O\left(N^{-\frac{s+1}{2s}}\right)$, where s is the dimension. So, we expect to obtain the same convergence rate for Monte Carlo with Uniform Separation. Another point of view for analyzing the convergence rate is suggested by Mitchell in [7]. Mitchell investigates the variance of image pixels with stratified Monte Carlo image creation in different situations. The convergence rate of a pixel depends on whether the integrand function is smooth or only piece-wise continuous. Also, stratified sampling can increase the convergence rate noticeably in low-dimensional domains, but has little effect in high-dimensional domains. Let us continue in the same direction of consideration and analyze the rendering of all pixels in the image pixel matrix by the Monte Carlo approach with Uniform
Separation. The convergence rate can be derived by applying the theorem of Emelyanov-Ilin (see [2]) for the error r(A_Eq) associated with a numerical algorithm A_Eq.

Theorem 2 (Emelyanov-Ilin). For the problem S_Eq of solving s-dimensional Fredholm integral equations of the second kind with p-smooth data, the error is
$$r\left(A_{Eq}\right) \le c\, N^{-\frac{p}{2s}}$$
for the deterministic algorithms A_Eq, as well as
$$r\left(A^R_{Eq}\right) \le c\, N^{-\frac{p}{2s}-\frac{1}{2}}$$
for the randomized algorithms A^R_Eq.

One can see that a realistic image is calculated by applying a certain randomized algorithm A^R_Eq consecutively, pixel by pixel, for all pixels in the image pixel matrix. In the case when the data are s-smooth (p ≡ s) for each pixel in the image pixel matrix, we obtain a convergence rate of O(N^{-1}). However, in the general case the data behind each pixel are different and p-smooth, where 1 ≤ p ≤ s. Following Theorem 2 we can write for the average pixel error:
$$r_{avg}\left(A^R_{Eq}\right) = \frac{1}{P_N} \sum_{i=1}^{P_N} r\left(A^R_{Eq,i}\right) \le \frac{1}{P_N} \sum_{p=1}^{s} c_p P_p\, N^{-\frac{p}{2s}-\frac{1}{2}},$$
where P_N is the total number of pixels in the image pixel matrix and P_p is the number of processed pixels with p-smooth data (1 ≤ p ≤ s). If c = max(c_p P_p), 1 ≤ p ≤ s, then
$$r_{avg}\left(A^R_{Eq}\right) \le \frac{c}{P_N} \sum_{p=1}^{s} N^{-\frac{p}{2s}-\frac{1}{2}} = \frac{c}{P_N}\, N^{-\frac{1}{2}}\, N^{-\frac{1}{2s}}\, \frac{1 - N^{-\frac{1}{2}}}{1 - N^{-\frac{1}{2s}}},$$
and obviously $\frac{1 - N^{-1/2}}{1 - N^{-1/(2s)}} \le 1$. Therefore, the average error for Monte Carlo image creation with Uniform Separation is
$$r_{avg}\left(A^R_{Eq}, P_N\right) \le \frac{c}{P_N}\, N^{-\frac{s+1}{2s}}.$$
4 Conclusions
The data to be rendered behind each pixel are different. Almost always there exist large areas of pixels in the image pixel matrix where the function is continuous. The analysis shows that those pixels are rendered with an O(N^{-1}) rate of convergence. When the integrand is piece-wise continuous, the pixels are solved
by a Quasi-Monte Carlo convergence rate of $O\left(N^{-\frac{s+1}{2s}}\right)$. Therefore, the convergence rate of the considered creation of images by Monte Carlo with uniform separation will be between $O\left(N^{-\frac{s+1}{2s}}\right)$ and $O\left(N^{-1}\right)$. One can see that in the worst case the estimated rate of convergence is the same as for Quasi-Monte Carlo methods, which suggests the use of low-discrepancy sequences. The obtained convergence rate qualifies Monte Carlo image creation by Uniform Separation as a super-convergent Monte Carlo method.
Acknowledgments. This work was supported by the International Scientific Cooperation between the Bulgarian Academy of Sciences and the Hungarian Academy of Sciences under Joint Research Project Grant No. 7 and by OTKA ref. No. T042735.
References
1. Arvo, J.: Stratified sampling of spherical triangles. In: Proceedings of SIGGRAPH 1995, ACM SIGGRAPH, pp. 437–438 (1995)
2. Dimov, I.T.: Optimal Monte Carlo Algorithms. In: ISMC 2006, Proceedings of IEEE – John Vincent Atanasoff, pp. 125–131 (2006)
3. Dimov, I.T.: Monte-Carlo Methods for Applied Scientists. World Scientific Publishing, Singapore (2008)
4. Dimov, I.T., Penzov, A.A., Stoilova, S.S.: Parallel Monte Carlo Sampling Scheme for Sphere and Hemisphere. In: Boyanov, T., Dimova, S., Georgiev, K., Nikolov, G. (eds.) NMA 2006. LNCS, vol. 4310, pp. 148–155. Springer, Heidelberg (2007)
5. Dimov, I.T., Penzov, A.A., Stoilova, S.S.: Parallel Monte Carlo Approach for Integration of the Rendering Equation. In: Boyanov, T., Dimova, S., Georgiev, K., Nikolov, G. (eds.) NMA 2006. LNCS, vol. 4310, pp. 140–147. Springer, Heidelberg (2007)
6. Kajiya, J.T.: The Rendering Equation. In: Proceedings of SIGGRAPH 1986 (1986); ACM SIGGRAPH Computer Graphics 20(4), 143–150 (1986)
7. Mitchell, D.P.: Consequences of stratified sampling in computer graphics. In: Proceedings of SIGGRAPH 1996, ACM SIGGRAPH, pp. 277–280 (1996)
8. Penzov, A.A., Dimov, I.T., Koylazov, V.N.: A New Solution of the Rendering Equation with Stratified Monte Carlo Approach. In: ICNAAM 2008, AIP Conference Proceedings, vol. 1048, pp. 432–435 (2008)
9. Penzov, A.A., Stoilova, S.S., Dimov, I.T., Mitev, N.M.: Uniform Separation for Parallel Monte Carlo Image Creation. In: Fourth Hungarian Conference on Computer Graphics and Geometry, SZTAKI, Budapest, Hungary, pp. 117–124 (2007)
10. Sobol, I.M.: Monte Carlo Numerical Methods. Nauka, Moscow (1975) (in Russian)
11. Szirmay-Kalos, L., Purgathofer, W.: Analysis of the Quasi-Monte Carlo Integration of the Rendering Equation. In: Proceedings of WSCG 1999, pp. 281–288. University of West Bohemia, Plzen (1999)
12. Szirmay-Kalos, L.: Monte-Carlo Methods in Global Illumination – Photo-realistic Rendering with Randomization. VDM Verlag Dr. Mueller e.K., Saarbrücken (2008)
The Role of the Boundary Conditions on the Current Degradation in FD-SOI Devices K. Raleva1 , D. Vasileska2 , and S.M. Goodnick2 1
2
Faculty of Electrical Engineering and ITs, UKIM, Skopje, Macedonia
[email protected] Department of Electrical Engineering and Center for Solid State Electronics Research Arizona State University, Tempe, AZ 85287-5706, USA
[email protected],
[email protected]
Abstract. In this paper we demonstrate the convergence properties of our electro-thermal Monte Carlo device simulator, which self-consistently solves the Boltzmann transport equation for the electrons and the energy balance equations for the acoustic and optical phonon baths. We also illustrate that the amount of current degradation in devices of different technology generations depends upon the channel length and the boundary conditions imposed on the gate electrode and the artificial boundaries. Finally, we address the importance of including non-stationary transport for both electrons and phonons. We show, via comparison with standard Silvaco simulations, that any simulator which does not take into account the non-stationary nature of the carrier transport in the system will give wrong predictions of the current degradation and of the position of the hot spot, which arises because of the different time scales involved in the electron-optical-phonon interaction and the optical-to-acoustic phonon decay.
1 Introduction
Over the last 30 years, silicon CMOS has emerged as the predominant technology of the microelectronic industry. The concept of device scaling has been consistently applied over many technology generations, resulting in consistent improvement in both device density and performance. But sometime within the next five years, traditional CMOS technology is expected to reach the limits of scaling. As channel lengths shrink below 22 nm, complex channel profiles are required to achieve the desired threshold voltages and to alleviate short-channel effects. To fabricate devices beyond current scaling limits, IC companies are pursuing fully-depleted (FD) Silicon On Insulator (SOI), dual gate (DG) structures, and FinFETs. One of the major problems in the operation of nano-scale SOI devices is self-heating. These effects arise because SOI devices are thermally isolated from the substrate by the buried oxide layer (BOX). Self-heating leads to a substantial elevation of the local device temperature, which consequently
modifies the device output characteristics. As transistor dimensions and the silicon film thickness are scaled to the order of tens of nanometers, i.e. much less than the phonon mean free path (λ = 300 nm in bulk silicon at room temperature), sub-continuum effects are expected to become important. Phonon boundary scattering [1] and phonon confinement effects [2] are known to decrease the effective thermal conductivity of thin silicon films. Nanoscale heat conduction measurements are very difficult to perform; therefore, simulations play an increasingly important role in understanding heat generation and transfer at such small length and time scales. In modeling nanoscale heat transfer, the continuum heat theory (Fourier law, Joule heating) is not valid and must be replaced by a more sophisticated formulation which takes into account the energy transport via phonons and the non-equilibrium between the acoustic and optical phonon baths. In previous work, we reported on the use of a new device simulator based on the self-consistent solution of the Boltzmann transport equation for the electrons with a balance equation solution to the phonon Boltzmann equation [6,5]. In that work, we demonstrated that the non-locality of the electron transport and velocity overshoot effects significantly reduce self-heating and the associated current degradation in nanoscale devices, due to the fact that ballistic carriers do not dissipate energy in the active region of the device, but rather in the drain contact [6]. It was also shown that in nanoscale devices the hot spot corresponding to the peak phonon temperature moves towards the drain end of the channel, where removal of heat is more effective and where it has less effect on the transport dynamics underneath the gate. However, very little detail was given of the numerical aspects of the self-consistent approach, including the overall convergence of the scheme and the role of the boundary conditions.
Fig. 1. Flow chart of the electro-thermal simulator
Fig. 2. Cross-section of the simulated device with applied thermal boundary conditions
2 Convergence of the Scheme
As illustrated in Fig. 1, we self-consistently couple the Monte Carlo solution of the electron Boltzmann transport equation with the energy balance equations for both optical and acoustic phonons. This by itself is a difficult problem, as we are coupling a particle picture for the electrons (which is inherently noisy) with a continuum model for the phonons. To achieve convergence of the coupled scheme, both temporal and spatial averaging of the variables extracted from the Monte Carlo (MC) solver (e.g. the electron density, drift velocity and temperature) must be performed. The number of simulated particles in the model contributes significantly to the smoothness of the variables being transferred to the energy balance solver. Within each 'outer iteration', we solve the Boltzmann transport equation for the electrons using the Ensemble Monte Carlo (EMC) method for a time period of 10 ps to ensure that steady-state conditions have been achieved. The required variables are then passed to the thermal solver, which gives the updated optical and acoustic phonon temperatures. This constitutes one Gummel cycle or 'outer iteration' [5]. To test the overall convergence of the coupled EMC and thermal codes, we recorded the variation of the drain current with the number of thermal iterations for a given bias condition. For this study, we simulated the 25 nm fully-depleted SOI structure shown in Fig. 2, with the corresponding thermal boundary conditions indicated, discussed in more detail in Section 3. The device dimensions are: channel length 25 nm, source/drain length 25 nm, silicon layer thickness 10 nm, and BOX layer thickness 50 nm. The simulation results are given in Fig. 3, in which we plot the percent current decrease compared to the result of the isothermal simulation. It is evident that only 3-5 iterations are necessary to obtain the steady-state solution for the current. To further investigate the accuracy of the results and the convergence of the outer current iterative scheme, we increase the device width in order to increase
Fig. 3. Current decrease relative to the isothermal steady state current as a function of the number of thermal iterations for different number of simulated electrons for 25 nm fully-depleted SOI MOSFET. For the thermal part of simulations, the bottom of the BOX and the gate electrode are assumed to be isothermal boundaries, set to 300K. Substrate region is not modeled.
the number of simulated electrons. Namely, the statistical error of the MC method decreases as 1/√N_sim [3], where N_sim is the number of simulated carriers. The statistical uncertainty of the results is 1.14% when the device width is 1 μm and 0.65% for a 3 μm device width, so smoother convergence of the results is obtained with the larger number of simulated carriers (see Fig. 3).
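The outer iteration described above can be summarized by the following schematic Python sketch (our illustration only; emc_solve and thermal_solve are placeholders for the Ensemble Monte Carlo kernel and the phonon energy balance solver, which are not reproduced here).

    def electro_thermal_gummel(emc_solve, thermal_solve, n_outer=5, t_sim=10e-12):
        # Schematic outer (Gummel) loop coupling the EMC transport kernel with the
        # acoustic/optical phonon energy balance solver.
        T_ac = T_op = 300.0                     # initial lattice temperatures [K]
        history = []
        for it in range(n_outer):
            # EMC run to steady state (10 ps) with the current temperature fields;
            # returns smoothed electron density, drift velocity, electron temperature
            # and the drain current.
            density, velocity, T_el, current = emc_solve(T_ac, T_op, t_sim)
            # Thermal solver updates the acoustic and optical phonon temperatures
            # from the averaged electron quantities.
            T_ac, T_op = thermal_solve(density, velocity, T_el)
            history.append(current)
        return history                           # current typically settles within 3-5 iterations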
3 The Role of the Thermal Boundary Conditions
To properly solve the phonon energy balance equations, the device should be attached to a heat sink somewhere along the boundary. The substrate is typically treated as a thermal contact in commercial simulation packages such as the Silvaco ATLAS simulation package (THERMAL3D module) [7], as shown in Fig. 2. In the case of SOI technology, the boundary condition should be applied to the bottom of the Si substrate. The thermal boundary conditions chosen need to reflect the physics of the individual device, as well as those in the surrounding environment. We have shown [6,5] that, due to the relatively high thermal conductivity of Si, assuming 300K on the back of the BOX layer is actually a good approximation. In obtaining the results presented in Section 2 of the paper we have assumed that the neighboring device is either on or the device is near a circuit hot-spot so that Neumann boundary conditions at the artificial boundaries (Fig. 2) were applied
Fig. 4. Current degradation vs. technology generation ranging from 25 nm to 180 nm channel length FD-SOI devices. An isothermal boundary condition of 300K is set on the bottom of the BOX. The parameter is the temperature on the gate electrode. Neumann boundary conditions are applied at the vertical sides.
to estimate the worst-case scenario of the current degradation. A summary of the current degradation for different technology generations is presented in Fig. 4. As discussed previously, the performance degradation increases with increasing gate length, since the role of ballistic transport, which reduces self-heating, diminishes at longer gate lengths. Also, the degradation is more severe for a higher fixed gate temperature. With regard to the role of the thermal boundary conditions on the gate and side electrodes, we now look at different combinations of Dirichlet and Neumann conditions (see Fig. 5). The simulation results show that when the gate electrode is used as an ideal heat sink (Tgate = 300K), there is no difference in the current degradation due to self-heating between thermal Neumann and Dirichlet boundary conditions applied at the vertical sides. But when heat transfer is not allowed through the gate electrode (Neumann boundary condition), the use of different boundary conditions at the vertical sides has an impact on the current degradation only for smaller devices. The use of Dirichlet boundary conditions at both the vertical sides and the gate electrode is more adequate for digital devices, where the operation mode of the transistors changes in certain intervals from on to off, to on, etc. The worst-case scenario for self-heating arises when Neumann boundary conditions are imposed on all outer boundaries except the bottom of the BOX.
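To make the distinction concrete, the sketch below (our simplified illustration, not the simulator's code) shows how the two kinds of boundary conditions typically enter one Jacobi relaxation sweep of a steady-state heat equation on a rectangular grid: Dirichlet sides are pinned to the heat-sink temperature, while Neumann (zero-flux) sides simply mirror the neighbouring interior values.

    import numpy as np

    def jacobi_sweep(T, dirichlet_sides, t_sink=300.0):
        # One Jacobi update of Laplace's equation for the temperature field T.
        # dirichlet_sides is a subset of {'top', 'bottom', 'left', 'right'};
        # the remaining sides get homogeneous Neumann (adiabatic) conditions.
        Tn = T.copy()
        Tn[1:-1, 1:-1] = 0.25 * (T[:-2, 1:-1] + T[2:, 1:-1] + T[1:-1, :-2] + T[1:-1, 2:])
        # Dirichlet: boundary acts as an ideal heat sink at t_sink.
        # Neumann:   zero normal gradient, i.e. copy the adjacent interior row/column.
        Tn[0, :]  = t_sink if 'top'    in dirichlet_sides else Tn[1, :]
        Tn[-1, :] = t_sink if 'bottom' in dirichlet_sides else Tn[-2, :]
        Tn[:, 0]  = t_sink if 'left'   in dirichlet_sides else Tn[:, 1]
        Tn[:, -1] = t_sink if 'right'  in dirichlet_sides else Tn[:, -2]
        return Tn

    T = np.full((60, 120), 300.0)
    for _ in range(2000):                        # 'worst case': only the bottom acts as a heat sink
        T = jacobi_sweep(T, dirichlet_sides={'bottom'})
        T[30, 80] = 500.0                        # keep a fixed hot-spot temperature as a crude source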
Fig. 5. Current degradation vs. technology generation ranging from 25 nm to 180 nm channel length FD-SOI devices for different combinations of Dirichlet and Neumann conditions at the gate electrode and vertical sides.
Fig. 6. Lattice temperature in the active silicon layer for 25 nm channel length FD-SOI device when using MC simulations (top two panels) and Silvaco ATLAS simulations (bottom panel). Different boundary conditions are applied at vertical sides. Temperature on the gate electrode is set to 300K.
4 Non-stationary Electron and Phonon Transports
In commercial device simulators, one couples the heat-conduction equation through the Joule heating term with either the drift-diffusion or energy balance
Fig. 7. Lattice temperature in the active silicon layer for 180 nm channel length FD-SOI device when using MC simulations (top two panels) and Silvaco ATLAS simulations (bottom panel). Different boundary conditions are applied at vertical sides. Temperature on the gate electrode is set to 300K.
equations of the corresponding carriers that participate in transport, thus arriving at the so-called non-isothermal drift-diffusion or energy balance models [7]. The Joule heating model is a local model and cannot properly predict heating effects in nanoscale devices in which non-stationary transport dominates. To prove this argument, we consider the 25 nm and 180 nm channel length devices. The temperature profiles and the corresponding current degradations for the 25 nm and 180 nm channel length devices are shown in Figs. 6 and 7, respectively. Simulation results for the 180 nm channel length device show that both Silvaco and MC predict almost the same position of the hot spot. Even though the MC simulations predict a higher temperature at the hot spot, the predicted current degradation is slightly smaller, because in the Silvaco simulations the temperature along the whole channel section of the device is higher: once heat is generated, it is immediately transported, in contrast to our thermal model, where there is a bottleneck in the heat transfer between optical and acoustic phonons. We might, however, say that the simple Joule heating model is not such a bad approximation here, since velocity overshoot does not dominate carrier transport. The situation is completely different for the 25 nm channel length device, in which we have non-stationary transport throughout the whole length of the channel. Here, the MC simulations with Dirichlet boundary conditions predict that the hot spot is at the boundary between the drain end of the channel and the drain contact (again due to the bottleneck in heat transfer between optical and acoustic phonons), whereas in the Silvaco ATLAS simulations the hot spot extends throughout the whole active region of the device; here as well, due to
the larger size of the hot spot, more current degradation is predicted. The difference in the predicted current degradation between the MC simulations with Dirichlet boundary conditions and the Silvaco ATLAS simulations is on the order of 60%. From the above analysis we conclude that Silvaco ATLAS, which utilizes the Joule heating model, completely misses the physics of lattice heating in deca-nano devices, and the non-local model proposed in [6,5] must be used.
5 Conclusions
We have investigated the convergence properties of the electro-thermal device simulator that has been developed as part of our research work. We have also investigated the influence of the boundary conditions at the gate on the current degradation across different technology nodes. Finally, we show that the Joule heating model is insufficient to capture the non-stationary nature of the thermal transport.
References
1. Ashegi, M., Leung, Y., Wong, S., Goodson, K.E.: Phonon-boundary scattering in thin silicon layers. Applied Physics Letters 71, 1798 (1997)
2. Balandin, A., Wang, K.L.: Significant decrease of the lattice thermal conductivity due to phonon confinement in a free-standing semiconductor quantum well. Physical Review B 58, 1544 (1998)
3. Ferry, D.K.: Semiconductor Transport. Taylor and Francis, New York (2000)
4. Raleva, K., Vasileska, D., Goodnick, S.M.: The Role of the Substrate and the Material Properties of the BOX in Fully-Depleted SOI, SOD and SOAlN Devices. IEEE Transactions on Electron Devices (submitted)
5. Raleva, K., Vasileska, D., Goodnick, S.M., Dzekov, T.: Modeling thermal effects in nano-devices. J. Computational Electronics 7, 226 (2008)
6. Raleva, K., Vasileska, D., Goodnick, S.M., Nedjalkov, M.: Modeling Thermal Effects in Nanodevices. IEEE Trans. Electron Devices 55(6), 1306 (2008)
7. http://www.silvaco.org
Gamma Photon Transport on the GPU for PET
L. Szirmay-Kalos (1), B. Tóth (1), M. Magdics (1), D. Légrády (1), and A. Penzov (2)
(1) Budapest University of Technology, Hungary
(2) Institute for Parallel Processing, Bulgarian Academy of Sciences, Bulgaria
Abstract. This paper proposes a Monte Carlo algorithm for gamma-photon transport that partially reuses random paths and is appropriate for parallel GPU implementation. According to the requirements of the application of the simulation results in reconstruction algorithms, the method aims at similar relative rather than absolute errors of the detectors. The resulting algorithm is SIMD-like, which is a requirement of efficient GPU implementation, i.e. all random paths are built with the same sequence of instructions and can thus be simulated on parallel threads that have practically no conditional branches. The algorithm is a combined method that separates the low-dimensional part, which cannot be well mimicked by importance sampling, and computes it by a deterministic quadrature, while the high-dimensional part, which is made low-variance by importance sampling, is handled by the Monte Carlo method. The deterministic quadrature is based on a geometric interpretation of the direct, i.e. non-scattered, effect of a photon on all detectors.
1 Introduction
The simulation of gamma photon transport in scattering media is important in engineering simulations, nuclear technology, radiotherapy, PET/SPECT reconstruction, etc. In Positron Emission Tomography (PET) a pair of gamma photons is born from each positron-electron collision [Gea07]. Due to the special character of PET, several simplifying assumptions can be made. Assuming that the electron and the positron are "not moving" before the collision, the energy E of the photons can be obtained from the rest mass me of the colliding particles and the speed of light c: E = me c² = 0.511 MeV. As these photons fly in the medium, they might collide with the electrons of the material. The probability that such a collision happens in unit distance is the cross section σ. During such a collision the photon may get scattered, absorbed according to the photoelectric effect, or a new photon pair may be generated, but in our energy range only scattering is relevant. When scattering happens, there is a unique correspondence between the relative scattered energy and the cosine of the scattering angle, as defined by the Compton formula:
$$\varepsilon = \frac{1}{1 + \varepsilon_0 (1 - \cos\theta)} \quad \Longrightarrow \quad \cos\theta = 1 - \frac{1 - \varepsilon}{\varepsilon\, \varepsilon_0},$$
where ε = E1/E0 expresses the ratio of the scattered energy E1 and the incident energy E0, and ε0 = E0/(me c²) is the incident energy relative to the energy of the electron.
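For reference, the two directions of the Compton relation translate into the following small helper functions (a Python sketch of our own; eps stands for ε and eps0 for ε0).

    def scattered_energy_ratio(eps0, cos_theta):
        # Compton formula: relative scattered energy eps = E1/E0 for a photon of
        # relative incident energy eps0 = E0/(m_e c^2) deflected by angle theta.
        return 1.0 / (1.0 + eps0 * (1.0 - cos_theta))

    def cos_scattering_angle(eps0, eps):
        # Inverse relation: cos(theta) from the relative energies.
        return 1.0 - (1.0 - eps) / (eps * eps0)

    # A 511 keV photon (eps0 = 1) scattered by 90 degrees keeps half of its energy:
    eps = scattered_energy_ratio(1.0, 0.0)          # -> 0.5
    assert abs(cos_scattering_angle(1.0, eps)) < 1e-12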
The differential of the scattering cross section at point x, i.e. the probability density that the photon is scattered from direction ω′ into differential solid angle dω in direction ω, is given by the Klein-Nishina formula:
$$\frac{d\sigma_s(x, \cos\theta, \varepsilon_0)}{d\omega} = C(x)\left(\varepsilon + \varepsilon^3 - \varepsilon^2 \sin^2\theta\right), \quad \text{where } \cos\theta = \omega' \cdot \omega.$$
In this equation C(x) is a material property at point x that is proportional to the number of electrons in a unit volume of the material. Note that the Klein-Nishina formula depends on the incident energy ε0 indirectly, through ε. The probability density of the scattering direction can be expressed as the product of the total scattering cross section, which is the probability of scattering in a unit distance,
$$\sigma_s(x, \varepsilon_0) = \int_{\Omega} \frac{d\sigma_s(x, \cos\theta, \varepsilon_0)}{d\omega}\, d\omega = 2\pi C(x) \int_{-1}^{1} \left(\varepsilon + \varepsilon^3 - \varepsilon^2 \sin^2\theta\right) d(\cos\theta),$$
and of the material-invariant phase function describing the conditional probability density of the scattering direction given that scattering happened:
$$P(\cos\theta, \varepsilon_0) = \frac{d\sigma_s(x, \cos\theta, \varepsilon_0)}{\sigma_s(x, \varepsilon_0)\, d\omega} = \frac{\varepsilon + \varepsilon^3 - \varepsilon^2 \sin^2\theta}{\sigma_s(x, \varepsilon_0)/C(x)}.$$
As photons travel in the considered volume, they may get scattered several times before they leave the volume or are captured by a detector. We need to compute the expected landing energy at a detector of area A_D:
$$D = \int_{\Omega_H} \int_{A_D} \int_{0}^{1} F(x, \omega, \varepsilon_0)\, \varepsilon_0\, d\varepsilon_0\, dx\, d\omega,$$
where Ω_H is the hemisphere above the detector plane. The probability density F(x, ω, ε0) that a photon of energy ε0 is at point x and traveling in direction ω satisfies a Fredholm-type integral equation:
$$\omega \cdot \nabla F(x, \omega, \varepsilon_0) = -\sigma_s(x, \varepsilon_0)\, F(x, \omega, \varepsilon_0) + \int_{\Omega} F(x, \omega', \varepsilon_0')\, \frac{d\sigma_s(x, \omega' \cdot \omega, \varepsilon_0')}{d\omega}\, d\omega', \qquad (1)$$
where Ω is the directional sphere, and ε0, ε0′, and ω′ · ω are related by the Compton formula. If ε0 = 1, then a source term E(x) should also be added to the right-hand side, expressing the probability density of newly generated gamma photons at point x with a uniformly distributed direction. The system contains detectors forming a grid of typical resolution 256 × 256. Thus, for every initial sample point x0 we have to compute roughly 10⁵ functionals of the density F, which can be expressed as a Neumann series of integrals of ever increasing dimension for every detector [SK08]. Equation (1) is usually solved by Monte Carlo simulation that directly mimics the physical process. A typical algorithm is the following:
for each sample do
    terminated = FALSE;
    Generate initial sample with energy E0 = me c² and direction ω;
    while not terminated do
        Traverse a line segment of direction ω and of the sampled length;
        if detector d is hit then
            Add photon energy to detector d;
            terminated = TRUE;
        else if the examined volume is left then
            terminated = TRUE;
        else
            Sample new direction ω and relative energy change ε;
            ε0 = ε · ε0;
        endif
    endwhile
endfor
There are several problems with this approach. Due to the high number of voxels and detectors, such a simulation may take days or even weeks of computation on conventional computers. As we compute not a single integral but a high number of functionals, the integrands contain factors representing scattering inside the volume and a factor representing the detector response, which is typically an indicator function. Volume scattering and the free path can be well mimicked by importance sampling, but the detector sensitivity cannot, which leads to high-variance estimates. When the physical process is mimicked directly, the absolute errors of the detectors will be similar. However, in the application of the computed result we need similar relative errors instead, since reconstruction algorithms take the ratio of measured and simulated detector responses [SV82]. In this paper we propose an approach that simultaneously solves all of the mentioned problems. The high computational power is provided by a Graphics Processing Unit (GPU) programmed under CUDA [NVI07]. GPUs have a special parallel architecture and are effective only for Single Instruction Multiple Data (SIMD) like algorithms. This means that we may run hundreds of parallel threads, but, to get high performance, all threads should execute the very same instruction at a time on different data. Thus, we eliminate conditionals from the Monte Carlo algorithm and ensure that all threads always execute the same sequence of instructions.
2 New Algorithm
In order to reuse a single random sample for all detectors and to guarantee similar relative errors everywhere, the path simulation is decomposed into a random path building part and a deterministic splitting, called the connection part. During the path building part, a certain number of random scattering points, called virtual sources [SKSS08], are generated in the volume. Then the deterministic splitting part connects all virtual sources to all detectors and computes the impact of this random path on each of the detectors (Fig. 1). As the deterministic connection
Fig. 1. The simulated physical process and the decomposition of the computation to Monte Carlo simulation and to classical quadrature
does not consider additional scattering events, only the accumulated extinction needs to be calculated between the scattering points and the detectors. The path building part involves free-path sampling, termination handling, and scattering angle sampling. The deterministic part computes the accumulated extinction. In the following subsections we discuss how these elementary tasks can be attacked by programs having minimal dynamic loops and branching instructions.

2.1 Free Path Sampling
The cumulative probability density of the flight length S along a ray of origin x and direction ω is
$$CDF(S) = 1 - \exp\left(-\int_0^S \sigma_s(x + \omega s)\, ds\right),$$
which can be sampled by transforming a uniformly distributed random variable r1 and finding S = nΔs where a running sum exceeds the threshold of the transformed variable:
$$\sum_{i=0}^{n-1} \sigma_s(x + \omega i \Delta s)\, \Delta s \le -\log r_1 < \sum_{i=0}^{n} \sigma_s(x + \omega i \Delta s)\, \Delta s.$$
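In a voxelized medium this free-path sampling is a simple ray march with a running sum, as in the sketch below (our illustration; sigma_s is a callable returning the total scattering cross section at a point, obtained in the actual implementation from the texture described next).

    import math, random

    def sample_free_path(x, omega, sigma_s, ds, n_max=4096):
        # Ray-march until the accumulated optical depth exceeds -log(r1);
        # returns n such that the flight length is S = n*ds, or None if the photon
        # escapes the traced range without scattering.
        threshold = -math.log(1.0 - random.random())   # r1 uniform in (0, 1]
        accum = 0.0
        for n in range(n_max):
            p = (x[0] + omega[0] * n * ds,
                 x[1] + omega[1] * n * ds,
                 x[2] + omega[2] * n * ds)
            accum += sigma_s(p) * ds
            if accum > threshold:
                return n
        return None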
In order to get the energy-dependent total scattering cross section, we build a one-dimensional texture (Fig. 2) in the preprocessing phase that stores the normalized values
$$T_1(\varepsilon_0) = 2\pi \int_{-1}^{1} \left(\varepsilon + \varepsilon^3 - \varepsilon^2 (1 - \cos^2\theta)\right) d\cos\theta, \quad \text{where } \varepsilon = \frac{1}{1 + \varepsilon_0 (1 - \cos\theta)},$$
for ε0 = 1/128, 2/128, ..., 1. During sampling, σs(y) is obtained as the product of the normalized value and the density of electrons C, that is, σs(y) = T1(ε0) C(y).
Fig. 2. Contents of textures T1(ε0) = σs/C (left) and T2(ε0, r2) = cos θ (right)
2.2 Termination Handling
The random path is terminated when the photon leaves the examined volume, which means that we have paths of random length, which are difficult to simulate on a SIMD machine (in fact, the longest path would determine the time of computation, which is unacceptable). Thus, in order not to waste time on the computation of very long paths, the maximum path lengths are set deterministically. We generate N1 paths of length 1, N2 paths of length 2, etc., and finally NL paths of length L. Simultaneously, we assign weight w = 1/(N1 + ... + NL) to the first scattering points, weight w = 1/(N2 + ... + NL) to the second scattering points, etc. It can happen that a path of length N leaves the volume before the N-th scattering point. Such cases are handled by assigning energy ε0 = 0 to the virtual sources falling outside the volume. Path lengths are set to ensure that the numbers of samples generating the first, second, etc. virtual sources are proportional to a geometric series 1, λ, λ², etc., where λ approximates the probability of leaving the volume.
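A small Python sketch (our illustration only; the numbers are arbitrary) shows how such a deterministic path-length budget and the associated weights can be set up.

    def path_length_budget(n_paths_depth1, lam, L):
        # Number of paths reaching scattering depth k is chosen proportional to the
        # geometric series 1, lam, lam^2, ...; weights follow the text:
        # w_k = 1 / (N_k + ... + N_L).
        reach = [max(1, round(n_paths_depth1 * lam ** k)) for k in range(L)]
        counts = [reach[k] - reach[k + 1] for k in range(L - 1)] + [reach[-1]]  # N_1..N_L
        weights = [1.0 / reach[k] for k in range(L)]
        return counts, weights

    counts, weights = path_length_budget(1000, lam=0.5, L=4)
    # e.g. counts = [500, 250, 125, 125] and weights = [1/1000, 1/500, 1/250, 1/125]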
2.3 Scattering Direction
The cumulative probability distribution of the cosine of the scattering angle is
$$CDF(\cos\theta, \varepsilon_0) = \frac{2\pi}{T_1(\varepsilon_0)} \int_{-1}^{\cos\theta} \left(\varepsilon + \varepsilon^3 - \varepsilon^2 (1 - \cos^2\Theta)\right) d\cos\Theta.$$
The cumulative distribution is just a one-variate integral for a given incident energy ε0. Let us compute these integrals for cos θ ∈ [−1, 1] in a pre-processing phase for regularly sampled values. We set up a 128 × 128 resolution two-dimensional array, called texture T2(u, v) (Fig. 2), addressed by texture coordinates u, v in [0, 1], and in the texel addressed by u, v we store cos θ obtained as the solution of v = CDF(cos θ, u). Note that this texture is independent of the material properties and needs to be computed only once during pre-processing.
During particle tracing, the direction sampling is executed in the following way. A random or quasi-random sample r2 ∈ [0, 1) is obtained and we look up texture T2(u, v) with it and with the incident energy ε0, resulting in the scattering angle cos θ = T2(ε0, r2), and consequently in the relative scattered energy ε. Note that the texture lookup automatically involves bi-linear interpolation of the precomputed data at no additional cost. The other spherical coordinate φ is sampled from a uniform distribution, i.e. φ = 2πr3, where r3 is a uniformly distributed random value in the unit interval. Let us establish a Cartesian coordinate system i, j, k where k = ω′ is the incident direction, i = k × v/|k × v|, and j = i × k. Here v is an arbitrary vector that is not parallel to ω′. Using these unit vectors, the scattering direction is ω = sin θ cos φ i + sin θ sin φ j + cos θ k.
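The direction sampling step then looks roughly as follows (a Python sketch of our own; lookup_T2 stands for the bilinear texture fetch of cos θ described above).

    import math, random

    def cross(a, b):
        return (a[1]*b[2] - a[2]*b[1], a[2]*b[0] - a[0]*b[2], a[0]*b[1] - a[1]*b[0])

    def normalize(a):
        n = math.sqrt(a[0]*a[0] + a[1]*a[1] + a[2]*a[2])
        return (a[0]/n, a[1]/n, a[2]/n)

    def sample_scattering_direction(omega_in, eps0, lookup_T2):
        # Scattering angle from the precomputed table, azimuth uniform in [0, 2*pi).
        cos_t = lookup_T2(eps0, random.random())
        sin_t = math.sqrt(max(0.0, 1.0 - cos_t * cos_t))
        phi = 2.0 * math.pi * random.random()
        # Local frame (i, j, k) with k along the incident direction.
        k = omega_in
        v = (1.0, 0.0, 0.0) if abs(k[0]) < 0.9 else (0.0, 1.0, 0.0)  # any vector not parallel to k
        i = normalize(cross(k, v))
        j = cross(i, k)
        return tuple(sin_t * math.cos(phi) * i[a] +
                     sin_t * math.sin(phi) * j[a] +
                     cos_t * k[a] for a in range(3))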
2.4 Deterministic Detector Response Calculation
Having the random scattering points in the volume, each of them is treated as an individual virtual source and their contributions to all detectors are obtained. Solving equation (1) for a single virtual source of position x, incident photon direction ω′, incident energy ε0, and weight w, when additional scattering is ignored, we get the following detector response:
$$D \approx w\, \exp\left(-\sum_i \sigma_s(i)\, \Delta s\right) \varepsilon\, \varepsilon_0\, P(\omega' \cdot \omega, \varepsilon_0)\, \Delta\omega,$$
where Δω is the solid angle in which the detector is visible from the scattering point, which can be cheaply approximated or can even be computed analytically [Eri90]. The attenuation exp(−Σ σs(i) Δs) is obtained by ray-marching between the detector and the virtual source. The relative energy change ε is computed from ε0 and cos θ = ω′ · ω using the Compton formula.
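Accordingly, the deterministic connection of one virtual source to one detector reduces to a ray march for the attenuation plus an evaluation of the phase function, roughly as in this sketch (our illustration; sigma_s, phase_function and det_solid_angle are placeholders for the quantities defined above, and the energy factor follows the response formula as written here).

    import math

    def detector_contribution(src, omega_in, eps0, w, det_center, det_solid_angle,
                              sigma_s, phase_function, ds, n_steps):
        # Contribution of one virtual source to one detector, ignoring further scattering.
        d = tuple(det_center[a] - src[a] for a in range(3))
        dist = math.sqrt(sum(c * c for c in d))
        omega = tuple(c / dist for c in d)                    # direction towards the detector
        cos_theta = sum(omega_in[a] * omega[a] for a in range(3))
        eps = 1.0 / (1.0 + eps0 * (1.0 - cos_theta))          # Compton formula
        # Accumulated extinction along the connection ray (ray marching).
        tau = sum(sigma_s(tuple(src[a] + omega[a] * i * ds for a in range(3))) * ds
                  for i in range(n_steps))
        return w * math.exp(-tau) * eps * eps0 * phase_function(cos_theta, eps0) * det_solid_angle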
3 Results
The proposed method has been implemented in CUDA and run on nVidia GeForce 8800 GFX graphics hardware. We took a 128³ voxel array of a human head to describe the electron density C(x). The detector module has 256 × 256 detectors. The primary source is placed at the center of the head. The distribution of the virtual sources is shown in Fig. 3. The detector images of the scattered contribution (the direct contribution is computed separately and is not included in the error analysis) and the respective error and time graphs are shown in Fig. 4 and Fig. 5, respectively. Note that we could obtain the result of the scattered contribution within 10% relative L1 error using as few as 1000 photon paths. The explanation is that, due to the deterministic connection, these photon paths correspond to about 2000 virtual sources, which translate to 2000 samples in each detector (the real effective sample number is smaller since photons may fly out of the examined volume). The simulation of 1000 photon paths of length 2, including the deterministic connection of each scattering point with all 256² detectors, requires just 3.5 seconds on the GPU. This allows interactive placement of the source inside the volume.
128 paths        512 paths        1024 paths
Fig. 3. Distribution of the scattering points in the volume
256 paths 1.44 secs
512 paths 2.26 secs
1024 paths 3.62 secs
4096 paths 14 secs
Fig. 4. Detector images after different numbers of photon paths
Fig. 5. The relative L1 error and the computation time in seconds with respect to the number of photon paths. The length of the paths is 2.
4 Conclusions
This paper proposed a Monte Carlo gamma photon transport solver running on massively parallel GPU hardware. The algorithm provides the same relative accuracy at each detector, since PET reconstruction algorithms use ratios of measured and computed detector responses. In order to meet the requirements of the SIMD processing concept of the GPU, we eliminated the conditional loops
from the algorithm. As a result, the GPU is able to solve the transport problem interactively, while such simulations typically take hours on a single CPU.
Acknowledgement This work has been supported by the Terratomo project of NKTH, Hungary, the National Scientific Research Fund (OTKA ref. No.: T042735), the Bulgarian Academy of Sciences, and by the Hungarian Academy of Sciences.
References
[Eri90] Eriksson, F.: On the measure of solid angles. Mathematics Magazine 63(3), 184–187 (1990)
[Gea07] Geant: Physics reference manual, Geant4 9.1. Technical report, CERN (2007)
[NVI07] CUDA (2007), http://developer.nvidia.com/cuda
[SV82] Shepp, L., Vardi, Y.: Maximum likelihood reconstruction for emission tomography. IEEE Trans. Med. Imaging 1, 113–122 (1982)
[SK08] Szirmay-Kalos, L.: Monte-Carlo Methods in Global Illumination – Photo-realistic Rendering with Randomization. VDM Verlag Dr. Müller, Saarbrücken (2008)
[SKSS08] Szirmay-Kalos, L., Szécsi, L., Sbert, M.: GPU-Based Techniques for Global Illumination Effects. Morgan and Claypool Publishers, San Rafael (2008)
Transport in Nanostructures: A Comparative Analysis Using Monte Carlo Simulation, the Spherical Harmonic Method, and Higher Moments Models
M. Vasicek, V. Sverdlov, J. Cervenka, T. Grasser, H. Kosina, and S. Selberherr
Institute for Microelectronics, TU Wien, Gußhausstr. 27-29, A 1040 Vienna, Austria
[email protected]
Abstract. With modern transistor sizes shrinking below 45 nm, the classical drift-diffusion model used to describe transport in the conducting channel is losing its validity. In short-channel devices carriers get accelerated by the driving field and do not thermalize before they reach the drain contact. Thus, the assumption underlying the classical transport model, that the driving electric field produces a weak perturbation of the local equilibrium distribution function, is violated. Several generalizations of the classical drift-diffusion model are possible. The most common approach in the TCAD community is to introduce higher moments of the distribution function. Another approach is to use a spherical harmonic expansion of the distribution function. We perform a comprehensive analysis of the validity of the higher-moments transport models and of the model based on the spherical harmonic expansion by rigorously comparing their results with results of the Monte Carlo solution of the Boltzmann transport equation.
1 Introduction
The success of microelectronics technology has been partly enabled and supported by sophisticated Technology Computer-Aided Design (TCAD) tools, which are used to assist in IC development and engineering at practically all stages from process definition to circuit optimization. At this moment, the TCAD-related research and development cost reduction amounts to 35% and is expected to increase to 40% in the near future [1]. Most TCAD tools are based on semi-classical macroscopic transport models. From an engineering point of view, semi-classical models, such as the drift-diffusion transport model, have enjoyed an amazing success due to their relative simplicity, numerical robustness, and the ability to perform two- and three-dimensional simulations on large unstructured meshes [9]. However, with device sizes dramatically reduced, the TCAD tools based on a semi-classical transport description begin to show shortcomings. With downscaling of the devices' channel length, the driving field and its gradient increase dramatically in the short
channel. As a result the carrier distribution along the channel can no longer be described by a shifted and heated Maxwellian distribution, and the solution of the Boltzmann transport equation is needed to determine the distribution function with sufficient accuracy. The Monte Carlo method is a well-established numerical technique to solve the Boltzmann transport equation. Traditionally, the so-called forward Monte Carlo method [7] is used to find the distribution function. For realistic structures, however, a direct numerical solution of this equation by discretization of the phase space is computationally too expensive. TCAD tools usually do not solve the Boltzmann equation and are based on simplified transport models. Approximate solutions can be obtained by the method of moments. Defining the moments of the distribution function f(r, k, t), one successively obtains the drift-diffusion model [4], the hydrodynamic model [2], the energy-transport models [10], or the six moments model [3]. Transport models based on the moments of the Boltzmann equation are well accepted in TCAD. Another way of generalizing the classical drift-diffusion model is to use a spherical harmonic expansion of the distribution function. In this work we perform a comprehensive analysis of the validity of the higher-moments transport models and of the model based on the spherical harmonic expansion by rigorously comparing their results with results of the Monte Carlo solution of the Boltzmann transport equation. The analytical conduction band model with the non-parabolicity parameter α = 0.5 eV⁻¹ is assumed in all methods.
2 Simulation Methods

2.1 Monte Carlo Solution of the Boltzmann Equation
The Monte Carlo methods have been applied to a variety of materials [6,11]. Within this approach, particles move on classical trajectories interrupted by scattering processes with phonons and impurities. Although solving the Boltzmann equation with a Monte Carlo technique is computationally very expensive, it is necessary in order to obtain accurate closure relations expressing higher moments via the moments of lower order, to introduce and control scattering mechanisms at the microscopic level and, most importantly, to incorporate the peculiarities of the semiconductor band structure. Thus, different Monte Carlo algorithms are conveniently used for the calibration of TCAD tools [7].

2.2 Higher Moments Models
From an engineering point of view, the advantages of the drift-diffusion model are its efficiency and numerical robustness. These properties make two- and three-dimensional numerical studies of fairly complex device structures feasible. However, hot-carrier effects are difficult to estimate correctly and non-local effects such as velocity overshoot are completely neglected. Higher-order transport models such as the hydrodynamic transport [2] and the energy transport [10]
models are designed to overcome some of the shortcomings of the drift-diffusion model. The energy-transport model additionally takes into account the carrier energy balance. However, it typically tends to overestimate the non-local effects, and thus the on-current of a device, motivating the development of transport models that include higher-order moments. Some time ago a six moments transport model was proposed [3]. Such a model, while computationally more efficient than the Monte Carlo method, provides additional information on the shape of the distribution function.
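For illustration, the moments that underlie this hierarchy can be estimated directly from an ensemble of particle energies, as in the following sketch (our own illustration, not part of any TCAD tool); the drift-diffusion, energy-transport, and six moments models transport two, four, and six such moments, respectively.

    import numpy as np

    def energy_moments(energies, k_max=3):
        # Return <E^0>, <E^1>, ..., <E^k_max> estimated from an ensemble of
        # particle energies; higher moments carry information on the shape
        # (e.g. the kurtosis) of the distribution function.
        e = np.asarray(energies, dtype=float)
        return [np.mean(e ** k) for k in range(k_max + 1)]

    # Example: a Maxwellian-like energy ensemble (dimensionless energies).
    rng = np.random.default_rng(0)
    moments = energy_moments(rng.gamma(shape=1.5, scale=1.0, size=100_000))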
2.3 Spherical Harmonic Expansion
The steady state Boltzmann equation can conveniently be solved by expanding the angular dependence of the distribution function f(r, k) on k using a complete set of spherical harmonics Y_lm(θ, φ):
$$f(\mathbf{r}, \mathbf{k}) = \sum_{lm} f_{lm}(\mathbf{r}, k)\, Y_{lm}(\theta, \phi), \qquad (1)$$
where the θ and φ are the polar angles between the electric field E and k. In the low-field limit one can truncate the expansion (1) after the terms with l = 1, which in case of parabolic isotropic bands and randomizing elastic scattering results in a drift-diffusion transport model with low-field mobility. As is shown in [5], in case of elastic scattering this approximation gives good results for silicon where the valleys are not isotropic. For general scattering processes and realistic band structures as well as at higher driving fields more terms in the expansion (1) are needed [8].
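To make the truncation concrete, the following sketch (our simplified illustration with real-valued harmonics and made-up coefficients, not the simulator's expansion) evaluates the l ≤ 1 part of the series, which is the term structure behind the drift-diffusion limit mentioned above; short-channel devices require many more terms, as shown in the results below.

    import numpy as np

    def f_truncated_l1(f0, f1, cos_theta):
        # Distribution truncated after l = 1: isotropic part f0 plus the first
        # anisotropic correction f1 * cos(theta) along the field direction
        # (the Y_00 and Y_10 terms, normalization constants absorbed into f0, f1).
        return f0 + f1 * cos_theta

    theta = np.linspace(0.0, np.pi, 181)
    f = f_truncated_l1(f0=1.0, f1=0.2, cos_theta=np.cos(theta))  # small anisotropy = low-field limit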
3 Results
In order to investigate the validity of macroscopic transport models we have carried out extensive Monte Carlo simulations of transport through a silicon nin structure. The length of the two heavily doped contacts (ND = 10²⁰ cm⁻³) is kept constant, while the length of the intrinsic channel (NA = 10¹⁶ cm⁻³) varies. The band structure of silicon is approximated by six valleys with a parabolic dispersion relation. Electron scattering with acoustic and optical phonons as well as with ionized impurities is taken into account. The average velocity determines carrier transport in the structure. The average velocity, as a moment of the distribution function, can be computed along the device and is thus a more sensitive measure of the validity of a particular transport model than any integral characteristic of a device like the total current. A typical result for the average velocity of carriers along an nin structure with an intrinsic region length of 40 nm computed with the Monte Carlo method is shown in Fig. 1. The solution of the Boltzmann equation by spherical harmonic expansion, with different numbers of terms in the series (1), is also displayed in Fig. 1. In such a short device the series (1) truncated at l = 1 gives only a poor approximation of the velocity profile. At the same time it is demonstrated that
[Figure 1 plot: average velocity ve [10⁷ cm/s] versus position x [nm]; curves for the Monte Carlo solution and for spherical harmonic expansions with l = 1, 5, 9, and 15.]
Fig. 1. Average velocity along the nin structure computed by solving the Boltzmann transport equation with the Monte Carlo method and using the spherical harmonic expansion method, with different numbers of terms included in the series (1). It is demonstrated that even in a short-channel structure the series expansion with terms only up to l = 9 included provides excellent results as compared to the more time consuming Monte Carlo data.
the spherical harmonic expansion (1) including terms up to l = 9 gives a perfect result. The inclusion of terms with l > 9 into (1) does not change the velocity profile, confirming the rapid convergence of the series (1). Therefore, the spherical harmonic expansion method gives excellent results with only a few terms included in (1), even for short-channel devices. For this reason the method is much less time consuming than the Monte Carlo solution of the Boltzmann equation. However, Monte Carlo data are needed in order to validate the applicability of the spherical harmonic expansion method to describe transport in short-channel devices. Current-voltage characteristics computed with the Monte Carlo method (identical to those computed with the spherical harmonic expansion method) and with the macroscopic transport models based on the moments of the distribution function are shown in Fig. 2 and Fig. 3 for several channel lengths. It is demonstrated that for long devices (1000 nm) the drift-diffusion (DD), the energy-transport (ET), and the six moments (SM) models give almost equivalent results, which are in perfect agreement with the results of the spherical harmonic expansion method. For a device with Lch = 250 nm the drift-diffusion model underestimates the current. Since the carrier temperature is constant, the drift-diffusion model does not account for any non-local effects and cannot capture the non-local transport inside short-channel devices. This causes the accuracy of the drift-diffusion model to decrease for gate lengths shorter than 250 nm, where the restriction of constant carrier temperature must be relaxed. Due to the temperature gradient, heat flow and thermal diffusion appear. The drift-diffusion transport model must be augmented with the energy flow, or the third moment equation. The energy-transport model takes into account the energy flux equation in addition to the carrier energy balance equation. The energy-transport
[Figure 2 plot: drain current Id [10⁻³ A/μm²] versus Vd/LCh [V] for the DD, ET, SM, and SHE models; channel lengths 100 nm, 250 nm, and 1000 nm.]
Fig. 2. Current-voltage characteristics for devices with different channel lengths computed with the spherical harmonic expansion method and using macroscopic transport models based on moments of the distribution function. Here DD stands for the drift-diffusion model, ET for the energy-transport model, SM for the six moments model, and SHE for spherical harmonic expansion.
[Figure 3 plot: drain current Id [A/μm²] versus Vd/LCh [V] for the DD, ET, SM, and SHE models; channel lengths 40 nm and 80 nm.]
Fig. 3. The same as in Fig. 2 for shorter devices. The six moments model gives results closest to the results of the Monte Carlo and spherical harmonic expansion methods.
model requires modeling of two mobilities, for the current density and for the energy flux, for each carrier type, one relaxation time, and the non-parabolicity factor for non-parabolic bands. The simplified four-moments model is implemented in standard TCAD simulation tools and can be applied to a large set of semiconductor materials. The energy-transport model overestimates the drive current. Fig. 4 illustrates the average velocity profile in a device with a 40 nm long channel. The drift-diffusion model underestimates the average velocity, while the energy-transport model overestimates it. In order to reduce this spurious velocity overshoot effect, the next moments should be included for devices with Lch shorter than 100 nm, where the energy-transport model starts failing to predict the current accurately. Going one step further in the model hierarchy, one obtains a transport model of sixth order. A balance equation for the average squared energy and the related flux equation are added. To close the equation system, the moment of sixth
(Figure: average velocity versus normalized distance along the device with LCh = 40 nm; curves labeled DD, ET, SM, and SHE; the saturation velocity vsat is marked.)
Fig. 4. Average velocity along the device with 40 nm channel length computed with the macroscopic transport models and with the spherical harmonic expansion method (equivalent to the Monte Carlo results). While the drift-diffusion model underestimates the velocity and current, the energy-transport model overestimates the velocity in short-channel devices. The six moments model improves the situation.
(Figure: relative error [%] of the current versus LCh [nm] at Vd/LCh = 1 V; curves labeled DD, ET, and SM.)
Fig. 5. Relative error of the current computed with macroscopic transport models. While the drift-diffusion and the energy-transport models are gradually losing their validity with the channel length reduced, the six moments model maintains its accuracy down to the channel length of 40 nm.
order has to be approximated using lower order moments. From Monte Carlo simulations serving as an accurate reference, an empirical closure relation has been proposed taking into account also the second order temperature. Compared to the energy-transport models, the six moments model requires two additional relaxation times for the second order temperature and the kurtosis flux. Having too many adjustable parameters is a particular inconvenience of the six moments model. A solution to this problem is based on tabulating some parameters of the model using Monte Carlo simulations. The parameter dependences on temperature, doping, and driving field are determined from the condition that the six moments transport model reproduces exactly all the six moments obtained from the Monte Carlo simulator under homogeneous conditions.
The inclusion of higher moments improves the transport model significantly. The current-voltage characteristics are reproduced fairly well even in devices as short as 40 nm, as demonstrated in Fig. 3, because of the more accurate results for the velocity (Fig. 4). The relative error of the current computed by the macroscopic transport models, presented in Fig. 5 as a function of the channel length, clearly demonstrates the applicability of the six moments model to describe the output characteristics of devices with a channel as short as 40 nm.
4 Conclusion
Transport modeling for scientific computing applications has grown into a mature field of research. Models of different complexity, precision, and accuracy are offered. Monte Carlo techniques are used to obtain solutions of the Boltzmann transport equation with arbitrary scattering mechanisms and general band structure. These methods require significant computing resources and are, therefore, relatively rarely used for industrial TCAD applications, where timely, but perhaps less accurate, results are of primary importance. For example, a typical time to obtain a single point on the output characteristic of a 100 nm long device on a 3 GHz AMD Opteron processor with a Monte Carlo method is 24 hours, while it is only 30 minutes with the spherical harmonic expansion method, in which terms up to ninth order are included. It is demonstrated that the spherical harmonic expansion method provides accurate results for structures with a channel length as short as 50 nm. At the same time, as the channel length decreases, the drift-diffusion and even the energy-transport models gradually lose their validity, and higher moments must be included into the model. By a comprehensive comparison of the simulation results with the data obtained with the spherical harmonics expansion method and with Monte Carlo simulations, we demonstrate that the six moments transport model, which needs only 60 seconds of CPU time for a single I-V point, provides accurate results for devices with a gate as short as 40 nm.
Acknowledgments

We gratefully acknowledge financial support from the Austrian Science Fund FWF, project P19997-N14.
References

1. International Technology Roadmap for Semiconductors: 2005 Edition (2005), http://www.itrs.net/Common/2005ITRS/Home2005.htm
2. Blotekjaer, K.: Transport equations for electrons in two-valley semiconductors. IEEE Trans. Electron Devices 17, 38-47 (1970)
3. Grasser, T., Kosina, H., Gritsch, M., Selberherr, S.: Using six moments of Boltzmann's transport equation for device simulation. J. Appl. Phys. 90, 2389-2396 (2001)
4. Gummel, H.: A self-consistent iterative scheme for one-dimensional steady state transistor calculations. IEEE Trans. Electron Devices ED-11, 455-465 (1964)
5. Herring, C., Vogt, E.: Transport and deformation-potential theory for many-valley semiconductors with anisotropic scattering. Physical Review 101, 944-961 (1956)
6. Jacoboni, C., Reggiani, L.: The Monte Carlo method for the solution of charge transport in semiconductors with applications to covalent materials. Reviews of Modern Physics 55, 645-705 (1983)
7. Kosina, H., Nedjalkov, M., Selberherr, S.: Theory of the Monte Carlo method for semiconductor device simulation. IEEE Trans. Electron Devices 47, 1899-1908 (2000)
8. Pham, A., Jungemann, C., Meinerzhagen, B.: Deterministic multisubband device simulations for strained double gate PMOSFETs including magnetotransport. IEDM Techn. Dig., 895-898 (2008)
9. Selberherr, S.: Analysis and Simulation of Semiconductor Devices. Springer, Heidelberg (1984)
10. Stratton, R.: Diffusion of hot and cold electrons in semiconductor barriers. Physical Review 126, 2002-2014 (1962)
11. Sverdlov, V., Ungersboeck, E., Kosina, H., Selberherr, S.: Current transport models for nanoscale semiconductor devices. Materials Science and Engineering R 58, 228-270 (2008)
Thermal Modeling of GaN HEMTs

D. Vasileska¹, A. Ashok¹, O. Hartin², and S.M. Goodnick¹

¹ Department of Electrical Engineering and Center for Solid State Electronics Research, Arizona State University, Tempe, AZ 85287-5706, USA
² Freescale Inc., Tempe, AZ, USA
[email protected],
[email protected],
[email protected],
[email protected]
Abstract. Thermal effects were investigated to get a better understanding of the role of self-heating effects on the electrical characteristics of AlGaN/GaN HEMTs. This is implemented by solving simultaneously the acoustic and optical phonon energy balance equations, also taking into account the coupling of the two subsystems. The electro-thermal device simulator was used to observe the temperature profiles across the device. Hot spots, or regions of higher temperatures, were found along the channel in the gate-drain spacing. These preliminary results from the electro-thermal simulations suggest that the thermal effects do not have a drastic impact on the electrical characteristics; the current reduction falls between 5-10% over the simulated range of voltages. However, the non-equilibrium phonon effects might play an important role in determining the thermal distribution in these HEMTs, thus resulting in reliability issues such as current collapse.
1 Theoretical Model
The solutions of the heat problem in semiconductor devices have focused on several issues: (1) independent analysis of the phonon heat bath problem via direct solutions of the phonon Boltzmann Transport Equations (BTE), and (2) inclusion of lattice heating in device simulators such as Silvaco by solving the Joule heating (Fourier law) equation, which does not take into consideration the microscopic nature of the heat flow and the non-equilibrium between the acoustic and optical phonon heat baths. Briefly, three different models are most commonly used and these include: (1) Joule heating, (2) electron-lattice scattering, and (3) the phonon model. Although these three models yield identical results in equilibrium, under non-equilibrium conditions the results of the three models can vary significantly. Lai and Majumdar [2] developed a coupled electro-thermal model for studying thermal non-equilibrium in submicron silicon MOSFETs. Their results showed that the highest electron and lattice temperatures occur under the drain side of the gate electrode, which also corresponds to the region where non-equilibrium effects such as impact ionization and velocity overshoot are maximum. Majumdar et al. [3] have analyzed the variation of hot electrons and associated hot phonon effects in GaAs MESFETs. These hot carriers were observed to decrease the output drain current by as much as 15%. Recently,
there have been numerous studies on describing thermal effects in devices that couple the Monte Carlo/Poisson approach to electro-thermal modeling in SOI and nitride devices [4]. In these studies, most of the approaches use a simplistic model for non-equilibrium phonons, which does not distinguish the acoustic and the optical phonons as separate subsystems. In addition to that, none of the approaches includes all the thermal effects that come into play in GaN HEMTs operating at large drain biases — hot electrons, self-heating and non-equilibrium phonon effects. Raleva et al. [5] here at ASU, using a similar approach to Majumdar et al., solved the Boltzmann transport equation (BTE) for electrons using the ensemble Monte Carlo (EMC) method coupled with moment expansion equations for the phonons, both acoustic and optical. In their work, they solved simultaneously the acoustic and optical phonon energy balance equations and also took into account the coupling of the two subsystems in fully depleted SOI devices [5]. In our work, we have utilized the same approach to include thermal effects in GaN HEMTs. In GaN, recent experimental findings have shown that the optical phonon (LO) lifetime has a carrier density dependence [7]. The lifetime was found to decrease from 2.5 ps at low densities to 0.35 ps at higher densities in Raman spectroscopy measurements. We have formulated an analytical expression for the optical phonon (LO) lifetime that incorporates this density dependence. Figure 1 shows the experimental data and the analytical form of the phonon lifetime used in our simulations. To simulate the steady-state behavior of a device, the system is started in some initial condition, with the desired potential applied to the contacts, and then the simulation proceeds in a time stepping manner until steady state is reached. When the system is driven into a steady-state regime and the MC simulation time has elapsed, the steady-state current through a specified terminal is calculated. To continue with the thermal part of the simulation, the average electron density, drift velocity and electron temperature must be calculated on the device grid. These are used in the phonon balance equations to compute the acoustic and optical phonon temperature distribution all over the device. During the simulation, the gate contact and the bottom of the substrate are set to 300 K, while Neumann boundary conditions for the heat transfer are used on all other outer surfaces. When the simulation starts, all variables obtained from the first iteration of the EMC solver are calculated using a uniform distribution for the acoustic and optical phonon temperatures. This means that only one scattering table is used for all electrons, no matter where they are located in the device. When the phonon temperatures are computed from the phonon energy balance equations, they are "returned" at the beginning of the MC free-flight — scattering phase. Now, for each mesh point, we have a scattering table which corresponds to the acoustic and optical phonon temperatures at that point. In this case, the electron position defines which scattering table is valid and then, by generating a random number, the scattering mechanism is chosen for the given electron energy.
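The coupling between the electron and phonon solvers described above can be summarized by the following outline. The sketch is purely illustrative: the three callables stand in for the actual EMC and phonon-balance solvers, and none of the names refer to real simulator code.

```python
import numpy as np

def electro_thermal_loop(run_emc, solve_phonon_balance, build_tables,
                         grid_shape, max_iters=10, tol=1e-3):
    """Outer loop coupling an ensemble Monte Carlo electron solver with the
    acoustic/optical phonon energy-balance solver.  The three callables are
    placeholders for the actual solvers; only the coupling structure
    described in the text is shown here."""
    T_ac = np.full(grid_shape, 300.0)   # acoustic phonon temperature [K]
    T_op = np.full(grid_shape, 300.0)   # optical phonon temperature [K]
    tables = build_tables(T_ac, T_op)   # one scattering table per mesh point
    current_prev = None
    for _ in range(max_iters):
        # EMC phase: each electron scatters according to the table of the
        # mesh cell it occupies; returns averaged density, velocity, electron
        # temperature, and the terminal current.
        density, velocity, T_e, current = run_emc(tables)
        # Phonon phase: balance equations for both phonon subsystems,
        # including their mutual coupling.
        T_ac, T_op = solve_phonon_balance(density, velocity, T_e)
        tables = build_tables(T_ac, T_op)
        # Convergence test on the terminal current (typically 5-10 iterations).
        if current_prev is not None and abs(current - current_prev) <= tol * abs(current_prev):
            break
        current_prev = current
    return current, T_ac, T_op
```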
Fig. 1. Carrier density dependent phonon lifetime, τ_LO
The outer Gummel loop between the MC solver and the phonon energy balance solver ends when the steady-state conditions for the phonon temperatures and the device current are reached. To test the overall convergence of the coupled thermal and EMC codes, the variations of the drain current with the number of thermal iterations are registered for every bias condition. The results of our simulations show that only 5–10 thermal iterations are necessary to obtain the steady-state solution of the current. A flowchart depicting the sequence of steps involved in the coupled electro-thermal Monte Carlo device simulator is shown in Figure 2.
2 Simulation Results
Simulations were run to investigate the effects of heating on the device characteristics of GaN HEMTs. The coupled formulation polarization model was utilized in these simulations and the Monte Carlo routine was run for a steady-state time of 10 ps. After steady state, the time-averaged electron density, drift velocity and energy are used in computing the lattice and phonon temperature distributions by solving the phonon equations. This procedure is repeated for several thermal iterations until current convergence is obtained. Short range electron-electron
Fig. 2. Flowchart of the electro-thermal Monte Carlo Device Simulator
interactions were not included in these simulations in order to reduce the huge strain placed on the computing resources (memory and computational time). The current convergence is achieved after around 5–10 iterations in our simulations for various gate and drain biases. Figure 3 shows the observed current convergence for zero gate voltage and 10 V applied on the drain. There is a reduction of about 11% in the drain current after 10 thermal iterations. The electron temperature profile shows a very different temperature distribution across the device. This is better seen in the temperature contour shown in Figure 4. The hot spot occurs in the gate-drain spacing, right where the gate terminates, but is restricted closer to the AlGaN/GaN interface. This means that most of the hot electrons are close to the AlGaN/GaN interface. The profile also shows that there might be some high energy electrons in the AlGaN barrier layer on the drain end. However, in the Silvaco simulations the same temperature distribution is not observed. The acoustic phonon (lattice) temperature and optical phonon temperature profiles are shown in Figures 5 and 6 respectively. Both profiles show that the peak temperatures are experienced along the channel in the gate-drain spacing. The phonon temperature profiles look fairly similar except that the optical phonons have a slightly larger spread around the middle of the channel. This “phonon hot spot” has been observed in a recent theoretical study by Yoder et al. done on GaN HEMTs using a dispersionless optical phonon model coupled
Fig. 3. Drain current vs. number of thermal iterations for VGS = 0 V and VDS = 10 V
to the Monte Carlo transport routine. This study reveals that the hot spot is localized at the drain end of the channel at low drain-source bias, but expands towards the drain towards the middle portion of the channel at higher bias, significantly degrading channel mobility [6]. It is important to mention that they have used polarization charges as fixed source terms in the Poisson equation and also used a constant optical phonon lifetime of 5 ps. Figure 7 shows the ID-VD characteristics of the device. In order to quantify the impact of these non-equilibrium thermal effects on the device performance, we started with isothermal simulations run at 300 K and 400 K. These I-V characteristics were then compared with the results obtained from our electro-thermal simulations. The ID-VD characteristics show that including the thermal effects reduces the drain current at larger drain voltages. They also show that this reduction in drain current increases as the drain voltage increases, thereby bringing a negative rolloff — an effect typically attributed to lattice heating by many researchers and scientists. The drain current reduction ranges between 5-10% over the simulated range of drain voltages. The isothermal 400 K simulation brings about a reduction of around 8% at VD = 10 V in comparison to the thermal simulations. The influence of the thermal effects becomes smaller as we increase the gate bias negatively, since fewer electrons are present in the channel as we go towards the threshold voltage.
Fig. 4. Electron Temperature Contour for VGS = 0V and VDS = 10V
Fig. 5. Lattice Temperature Contour for VGS = 0V and VDS = 10V
Fig. 6. Phonon Temperature Contour for VGS = 0V and VDS = 10V
Fig. 7. ID-VD characteristics with and without thermal effects
3 Conclusions
In summary, the thermal effects arising due to contributions from the lattice, phonons and hot electrons were included in our simulations, and current degradation is observed in the device characteristics. The temperature profiles show that hot spots in the device are confined close to the channel, suggesting a strong dependence of the thermal distribution on the average electron density and energy. The high thermal conductivity of GaN and its alloys greatly helps in the faster heat dissipation seen in these devices. The presence of some electrons in the AlGaN barrier layer introduces the possibility of these electrons being lost to surface states and buffer trap sites. Several studies on GaN HEMT reliability have focused on the current collapse observed in short-term DC stress tests. Most researchers have proposed that the role of these trap sites in trapping/detrapping electrons can lead to a permanent reduction in the polarization charge density available at the AlGaN/GaN interface [1]. Our simulation results also point in the same direction, but extensive studies on electrothermal simulations for different device structures are needed to clearly identify the mechanism responsible for the current reliability issues that have plagued GaN HEMT technology.
Acknowledgements

This work was supported in part by Freescale, Tempe, AZ and the Army Research Lab (ARL).
References

1. Bouya, M., Malbert, N., Labat, N., Carisetti, D., Perdu, P., Clément, J.C., Lambert, B., Bonnet, M.: Analysis of traps effect on AlGaN/GaN HEMT by luminescence techniques. Microelectronics Reliability 48, 1366 (2008)
2. Lai, J., Majumdar, A.: Concurrent thermal and electrical modeling of submicrometer silicon devices. Journ. of Appl. Phys. 79, 7353 (1996)
3. Majumdar, A., Fushinobu, K., Hijikata, K.: Effect of gate voltage on hot electron and hot phonon interaction and transport in a submicrometer transistor. Journ. of Appl. Phys. 77, 6686 (1995)
4. Matulionis, A.: Hot phonons in GaN channels for HEMTs. Phys. Stat. Sol. A 203(10), 2313 (2006)
5. Raleva, K., Vasileska, D., Goodnick, S.M., Nedjalkov, M.: Modeling Thermal Effects in Nanodevices. IEEE Trans. on Elec. Devices 55(6), 1306 (2008)
6. Sridharan, S., Venkatachalam, A., Yoder, P.D.: Electrothermal analysis of AlGaN/GaN high electron mobility transistors. Journ. of Compt. Electronics 7, 236 (2008)
7. Tsen, K.T., Kiang, J.G., Ferry, D.K., Morkoç, H.: Subpicosecond time-resolved Raman studies of LO phonons in GaN: Dependence on photoexcited carrier density. Appl. Phys. Lett. 89, 112111 (2006)
Tuning the Generation of Sobol Sequence with Owen Scrambling

Emanouil Atanassov, Aneta Karaivanova, and Sofiya Ivanovska

Institute for Parallel Processing, Bulgarian Academy of Sciences, Acad. G. Bonchev, Bl. 25A, 1113 Sofia, Bulgaria
{emanouil,anet,sofia}@parallel.bas.bg
Abstract. The Sobol sequence is the most widely used low-discrepancy sequence for the numerical solution of multiple integrals and other quasi-Monte Carlo computations. Owen first proposed scrambling of this sequence through permutation in a manner that maintained its low discrepancy. Scrambling is necessary not only for error analysis but also for parallel implementations. Good scrambling is especially important for GRID applications. However, scrambling is often difficult to implement and time consuming. In this paper we propose fast generation of the Sobol sequence with Owen scrambling, tuned to specific hardware. Numerical and timing results, demonstrating the advantages of our approach, are presented and discussed.
1 Introduction
Quasi-Monte Carlo methods are deterministic methods based on the use of highly uniform sequences (called quasirandom sequences). They can be interpreted as variants of Monte Carlo methods with change of the underlying pseudorandom number generator [14]. Usually, quasi-Monte Carlo methods outperform Monte Carlo in terms of a faster convergence rate [3]. This fact is due to the famous Koksma-Hlawka inequality for numerical integration, later extended for other problems. This inequality gives a theoretical error bound, and in order to have practical error estimates from quasi-Monte Carlo samples one can use randomized quasi-Monte Carlo methods. The core of randomized quasi-Monte Carlo is to find an effective and fast algorithm to scramble (randomize) the underlying quasirandom sequences. This approach for error estimation is based on treating each scrambled sequence as a different and independent random sample from a family of randomly scrambled quasirandom numbers. Thus, randomized quasi-Monte Carlo overcomes the main disadvantage of quasi-Monte Carlo while maintaining their favorable convergence rate. Secondarily, scrambling gives us a simple and unified way to generate quasirandom numbers for parallel, distributed, and Grid-based computational environments [6]. The idea of scrambling quasirandom sequences was first proposed by Cranley and Patterson who took lattice points and randomized them by adding random shifts to the sequences [7]. Later, Owen [15] and Tezuka [21] independently
developed two powerful scrambling methods for (t, s)-sequences. Owen also explicitly pointed out that scrambling can be used to provide error estimates for Quasi-Monte Carlo (QMC). Although many other methods for scrambling (t, s)-sequences [13] have been proposed, most of them are modified or simplified Owen or Tezuka schemes. Owen's scheme is theoretically powerful for (t, s)-sequences. The problem with Owen scrambling is its computational complexity. In this paper we propose and study effective generation of the most popular quasirandom sequence, the Sobol sequence, with Owen scrambling using GPU computing. The model for GPU computing is to use a CPU and GPU together in a heterogeneous computing model. The sequential part of the application runs on the CPU and the computationally intensive part runs on the GPU. From the user's perspective, the application just runs faster because it is using the high performance of the GPU to boost performance. The paper is organized as follows. In the next section we give some preliminaries: a description of the Sobol sequence and Owen scrambling. The third section presents our algorithm for GPU implementation and some numerical results. The last section gives some conclusions and directions for future work.
2 Preliminaries

2.1 Sobol Sequence
The Sobol sequence [19] is the most popular quasirandom sequence because of its simplicity and efficiency in implementation. Many methods have been proposed for scrambling quasirandom sequences. Some scrambling methods [1,4,11,16,17] were designed specifically for the Sobol sequence. The construction of the Sobol sequence uses linear recurrence relations over the finite field $\mathbb{F}_2$, where $\mathbb{F}_2 = \{0, 1\}$; that is why pure digital permutation, where zero is not changed, is not suitable for the Sobol sequence [4]. The linear permutation is also not a proper method for scrambling the Sobol sequence [4]. We use the Definition below, which covers most digital (t, m, s)-nets in base 2. The Sobol sequence is a (t, s)-sequence in base 2 and is a particular case of this definition.

Definition. Let $A_1, \ldots, A_s$ be infinite matrices $A_k = \{a_{ij}^{(k)}\}$, $i, j = 0, 1, \ldots$, with $a_{ij}^{(k)} \in \{0, 1\}$, such that $a_{ii}^{(k)} = 1$ for all $i$ and $k$, and $a_{ij}^{(k)} = 0$ if $i < j$. The $\tau^{(1)}, \ldots, \tau^{(s)}$ are sequences of permutations of the set $\{0, 1\}$. Each non-negative integer $n$ may be represented in the binary number system as
$$n = \sum_{i=0}^{r} b_i 2^i.$$
Then the $n$th term of the low-discrepancy sequence $\sigma$ is defined by
$$x_n^{(k)} = \sum_{j=0}^{r} 2^{-j-1}\, \tau_j^{(k)}\Bigl(\bigoplus_{i=0}^{j} b_i a_{ij}^{(k)}\Bigr),$$
where by "$\oplus$" we denote the operation "bit-wise addition modulo 2". The next lemma explains how we generate consecutive terms of the sequence.

Lemma. Let $\sigma$ be a sequence or net satisfying the above Definition, and let the non-negative integers $n, p, m$ be given. Suppose that we desire the first $p$ binary digits of the elements in $\sigma$ with indices of the form $2^m j + n < 2^p$; this implicitly defines a set of compatible $j$'s. Thus the only numbers we need to compute are
$$y_j^{(k)} = 2^p x_{2^m j + n}^{(k)}.$$
The integers $\{v_r^{(k)}\}_{r=0}^{\infty}$, which we call "twisted direction numbers", are defined by
$$v_r^{(k)} = \sum_{t=0}^{p-1} 2^{p-1-t} \bigoplus_{j=m}^{p-1} a_{tj}^{(k)}.$$
Suppose that the largest power of two that divides $2^m(j + 1) + n$ is $2^l$, i.e. $2^m(j + 1) + n = 2^l(2K + 1)$. Then the following equality holds
$$y_{j+1}^{(k)} = y_j^{(k)} \oplus v_l^{(k)}.$$
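To make the recurrence of the Lemma concrete, the following short sketch implements the classic Antonov-Saleev (Gray-code ordered) form of the same xor recurrence for the one-dimensional Sobol sequence; the direction integers below are an illustrative stand-in for the twisted direction numbers of the Lemma, which deliver the points in natural order.

```python
def sobol_1d(npoints, bits=30):
    """Illustration of the xor recurrence behind the Lemma, in its classic
    Antonov-Saleev (Gray-code) form.  The direction integers used here are
    those of the one-dimensional Sobol sequence (the van der Corput sequence
    in base 2); higher dimensions use direction numbers built from primitive
    polynomials."""
    v = [1 << (bits - 1 - i) for i in range(bits)]  # direction integers 2^(bits-1-i)
    y = 0
    points = []
    for n in range(npoints):
        points.append(y / float(1 << bits))
        # position of the lowest zero bit of n = number of trailing ones
        c, m = 0, n
        while m & 1:
            m >>= 1
            c += 1
        y ^= v[c]   # one xor per generated point, as in y_{j+1} = y_j XOR v_l
    return points

print(sobol_1d(8))  # [0.0, 0.5, 0.75, 0.25, 0.375, 0.875, 0.625, 0.125]
```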
2.2 Owen Scrambling
The purpose of Owen scrambling is not only to improve the quality of the Sobol sequence but to make it suitable for parallel applications. The parallelization of quasi-Monte Carlo applications has been under investigation during the last two decades due to the computationally intensive nature of the problems solved. The successful parallel implementation of a quasi-Monte Carlo application depends crucially on the parallelization of the quasirandom sequences used [5,6]. Much of the recent work on parallelizing quasi-Monte Carlo methods has been aimed at splitting a quasirandom sequence into many subsequences which are then used independently on various parallel processes, for example in [1,2]. This method works well for the parallelization of pseudorandom numbers, but due to the nature of quality in quasirandom numbers, this technique has some difficulties. Scrambling proved to be a very efficient way of parallelization especially suitable for heterogeneous computing environments.

Owen scrambling. Let $x_n = (x_n^{(1)}, x_n^{(2)}, \ldots, x_n^{(s)})$ be a quasirandom number in $[0, 1)^s$, and let $z_n = (z_n^{(1)}, z_n^{(2)}, \ldots, z_n^{(s)})$ be the scrambled version of the point $x_n$. Suppose each $x_n^{(j)}$ can be represented in base $b$ as $x_n^{(j)} = (0.x_{n1}^{(j)} x_{n2}^{(j)} \ldots x_{nK}^{(j)} \ldots)_b$, with $K$ being the number of digits to be scrambled. Then nested scrambling proposed by Owen [15,18] can be defined as follows:
$$z_n^{(j)} = \sigma(x_n^{(j)}), \quad j = 1, 2, \ldots, s,$$
where $\sigma = \{\pi_1, \pi_2, \ldots, \pi_K\}$ and $z_{ni}^{(j)} = \pi_i(x_{ni}^{(j)})$ for $i = 1, 2, \ldots, K$. Here $\pi_i$ is a uniformly chosen permutation of the digits $\{0, \ldots, b-1\}$. Of course, a (t, m, s)-net remains a (t, m, s)-net under nested scrambling. However, nested scrambling
requires $b^{i-1}$ permutations to scramble the $i$th digit. Owen scrambling (nested scrambling), which can be applied to all (t, s)-sequences, is powerful; however, from the implementation point of view, nested scrambling or so-called path-dependent permutations requires a considerable amount of bookkeeping, and leads to a more problematic implementation. Owen [15] shows that a scrambled (t, m, s)-net is a (t, m, s)-net with probability one, and a scrambled (t, s)-sequence is a (t, s)-sequence with probability one. Owen [15,16,17] shows that the variance of a scrambled net estimator converges to zero faster than the variance of an ordinary Monte Carlo estimator does, while cautioning that the faster rate may not set in until the number of points becomes very large. For sufficiently smooth integrands, the variance is $O(1/n^{3-\varepsilon})$ in the sample size $n$.
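The bookkeeping cost of nested scrambling can be illustrated with a minimal sketch in base b = 2, where every permutation of {0, 1} is just a random bit flip that has to be stored per digit prefix. The code below is for illustration only and is not the implementation used in this paper.

```python
import random

def owen_scramble_base2(x, K=30, flips=None, rng=random):
    """Nested (Owen) scrambling of one coordinate in base 2.
    The flip applied to digit i depends on the i-1 preceding digits, so up to
    2^(i-1) independent flips may have to be stored for digit i -- the
    bookkeeping cost mentioned in the text.  Illustrative sketch only."""
    if flips is None:
        flips = {}                       # digit prefix (tuple) -> 0/1 flip
    z, prefix = 0.0, ()
    for i in range(1, K + 1):
        d = int(x * 2) & 1               # i-th binary digit of x
        x = x * 2 - d
        if prefix not in flips:
            flips[prefix] = rng.getrandbits(1)
        z += (d ^ flips[prefix]) / 2.0 ** i   # apply the permutation pi_i
        prefix = prefix + (d,)           # the next flip depends on the digits seen so far
    return z, flips
```

Reusing the same `flips` dictionary for all points of one coordinate gives one scrambled replicate; a fresh dictionary gives an independent replicate, which is what randomized quasi-Monte Carlo error estimation relies on.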
3 Algorithm for GPU Implementation and Computational Tests

3.1 The Graphics Processing Unit
The recent advances in hardware functionality and programmability of graphics processors (GPUs) have greatly increased their appeal as add-on co-processors for general purpose computing. The programming is based on CUDA (Compute Unified Device Architecture) [23], which is a hardware and software architecture that issues and manages data-parallel computations on the GPU. The GPU is a Single Instruction Multiple Data (SIMD) parallel device. The GPU has to be considered as a coprocessor, while the CPU is the host. The model for GPU computing is to use a CPU and GPU together in a heterogeneous computing model. The sequential part of the application runs on the CPU and the computationally intensive part runs on the GPU. From the user's perspective, the application just runs faster because it is using the high performance of the GPU to boost performance. The application developer has to modify their application to take the compute-intensive kernels and map them to the GPU. The rest of the application remains on the CPU (Intel(R) Core 2 Quad CPU Q6600 @ 2.40GHz). Mapping a function to the GPU involves rewriting the function to expose the parallelism in the function and adding calls to I/O functions to move data to and from the GPU. Below we present our algorithm for GPU/CPU implementation, where we use an NVIDIA GeForce 9800 GT GPU which allows vectorized operations only in single precision. Nevertheless, generation of the Sobol sequence in double precision can also be done efficiently on this GPU because only a simple operation (subtraction of a 1) must be carried out on the CPU, while the more expensive operations related to the scrambling are done in the same way as in the single precision case on the GPU.
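The host/device split described above can be illustrated with a small example; Numba's CUDA bindings are used here purely as a stand-in for the CUDA C code of our implementation, and the kernel itself is a trivial placeholder.

```python
import numpy as np
from numba import cuda

@cuda.jit
def scale_and_shift(x, out):
    # compute-intensive part runs on the GPU: one thread per element
    i = cuda.grid(1)
    if i < x.size:
        out[i] = 0.5 * x[i] + 0.25

def run_on_gpu(x_host):
    # the sequential part stays on the CPU; data is moved to and from the GPU
    d_x = cuda.to_device(x_host.astype(np.float32))
    d_out = cuda.device_array(x_host.size, dtype=np.float32)
    threads = 256
    blocks = (x_host.size + threads - 1) // threads
    scale_and_shift[blocks, threads](d_x, d_out)
    return d_out.copy_to_host()
```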
3.2 The Algorithm
The algorithm has two phases. The first phase involves generation of the terms of the sequence without scrambling in the way described in [1], which involves several bitwise xor operations for generating the first term of the sequence and only one bitwise operation per coordinate for generating the subsequent terms, plus one floating point operation (subtraction of the constant 1) per coordinate. Using the Gray code is not necessary because the preprocessing eliminates the need to obtain the sequence in a different order. This algorithm is described in detail in [1] and in this work we only make use of the high number of cores in the GPU to speed up the implementation. The most demanding part of the generation is the implementation of scrambling that follows the requirements for statistical independence imposed by Owen's construction. Instead of keeping in memory all the terms of the sequence and performing the random permutations, we utilize a family of pseudorandom functions to generate random trees on the fly. Using the GPU and porting the publicly available implementation of the SALSA cipher [24], this method becomes feasible and usable in real-world situations. The algorithm for scrambling the nth coordinate of the Nth term of the sequence follows:

1. Input — coordinate n, term of the sequence without scrambling τ = 0.t1 t2 t3 . . ., initial seed s for the pseudorandom function family.
2. Scramble 9 consecutive bits of τ using a sequence of 512 random bits, generated from the initial seed s, the coordinate n, and the sequence of bits scrambled so far. The 512 random bits are interpreted as a random binary tree, where the root of the tree is bit number 0, and if the jth random bit corresponds to a node of the binary tree, bit number 2j + 1 corresponds to the left node and bit number 2j + 2 corresponds to the right node. The first bit of the consecutive 9 bits is changed if and only if the root of the tree is 1; in such case we move to the right leaf, otherwise we move to the left leaf. We continue until we have scrambled all the 9 bits.
3. Continue with the next 9 bits until we have scrambled all the bits of τ.

We used the SALSA function with 20 rounds. It is amenable to implementation on a GPU since it uses only a small number of variables, uses integer operations, and does not involve a high number of logical operations and conditional statements. The cost for scrambling 9 bits of the terms of the sequence comes mainly from the invocation of the SALSA pseudorandom function. Scrambling one term is achieved with 3 invocations in single precision and with 6 invocations in double precision. We used in our tests the 20-round version of SALSA, with 512-bit output.
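A rough serial sketch of the tree-walk scrambling is given below. SHA-512 replaces the SALSA pseudorandom function, the descent rule follows one natural reading of step 2 (descend right when the scrambled bit is 1), and 27 leading bits are scrambled; all of these choices are illustrative and differ in detail from the GPU implementation.

```python
import hashlib

def prf_512_bits(seed, coordinate, prefix_bits):
    """512 pseudorandom bits for one 9-bit block; SHA-512 stands in for the
    Salsa20-based function of the paper (it returns exactly 64 bytes)."""
    msg = b"%d|%d|" % (seed, coordinate) + bytes(prefix_bits)
    digest = hashlib.sha512(msg).digest()
    return [(digest[k // 8] >> (k % 8)) & 1 for k in range(512)]

def scramble_coordinate(t, coordinate, seed, nbits=27):
    """Scramble the leading binary digits of t in blocks of 9, walking a
    random binary tree stored in 512 pseudorandom bits per block."""
    digits, x = [], t
    for _ in range(nbits):               # extract the leading binary digits of t
        x *= 2.0
        d = int(x)
        digits.append(d)
        x -= d
    out = []
    for block in range(0, nbits, 9):
        tree = prf_512_bits(seed, coordinate, out)   # depends on bits scrambled so far
        node = 0
        for d in digits[block:block + 9]:
            d ^= tree[node]                          # flip iff the node bit is 1
            out.append(d)
            node = 2 * node + 2 if d else 2 * node + 1   # right child if 1, left if 0
    return sum(d * 2.0 ** -(i + 1) for i, d in enumerate(out))
```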
3.3 Numerical Tests
The computational tests are done with high-dimensional integral problems which always are a good way to test the quality of quasirandom sequences. We use a published set of test integrands [8,9,12,20,22]. Consider a class of test functions:
$$I_1(f) = \int_0^1 \cdots \int_0^1 \prod_{i=1}^{s} \frac{\pi}{2}\sin(\pi x_i)\, dx_1 \ldots dx_s = 1,$$
$$I_2(f) = \int_0^1 \cdots \int_0^1 \prod_{i=1}^{s} \frac{|4x_i - 2| + a_i}{1 + a_i}\, dx_1 \ldots dx_s = 1,$$
where $a_i$ are parameters. First, we present the accuracy achieved using the Sobol sequence with Owen scrambling (Table 1). For comparison, Table 2 shows the accuracy when solving I1 and I2 using the Sobol sequence without scrambling. Our GPU/CPU implementation fully supports the theoretical estimates that the variance of the scrambled estimator converges to zero faster than the variance of the ordinary Monte Carlo estimator does. Fig. 1 compares the elapsed time of the two implementations: CPU (dashed line) and GPU/CPU (i.e., the implementation that uses the GPU as a co-processor). The GPU/CPU implementation is very efficient.

Table 1. Accuracy for I1 and I2 with Owen scrambling (for dimension s = 10)

  N     | I1       | I2, a_i = 0 | I2, a_i = i | I2, a_i = i^2
  ------|----------|-------------|-------------|--------------
  2^10  | 8.32E-03 | 1.23E-01    | 1.26E-03    | 7.70E-05
  2^11  | 9.34E-03 | 7.23E-02    | 6.50E-04    | 3.24E-05
  2^12  | 4.53E-03 | 6.26E-03    | 1.70E-04    | 2.54E-05
  2^13  | 1.34E-03 | 7.15E-03    | 1.47E-04    | 1.69E-05
  2^14  | 8.05E-03 | 6.51E-03    | 4.14E-05    | 2.02E-06
  2^15  | 3.01E-03 | 4.98E-05    | 1.02E-05    | 2.51E-07
  2^16  | 7.53E-04 | 2.68E-03    | 1.18E-06    | 3.27E-08
  2^17  | 3.58E-05 | 1.17E-03    | 6.20E-07    | 2.38E-08
  2^18  | 7.91E-04 | 5.64E-04    | 9.76E-07    | 1.19E-08
  2^19  | 4.47E-04 | 6.33E-04    | 1.85E-07    | 1.25E-08
  2^20  | 1.22E-04 | 5.39E-04    | 6.18E-07    | 1.27E-07

Table 2. Accuracy for I1 and I2 without scrambling (for dimension s = 10)

  N     | I1       | I2, a_i = 0 | I2, a_i = i | I2, a_i = i^2
  ------|----------|-------------|-------------|--------------
  2^10  | 2.56E-02 | 4.51E-02    | 7.34E-04    | 3.44E-05
  2^11  | 2.05E-02 | 4.42E-02    | 6.02E-04    | 4.55E-05
  2^12  | 7.56E-03 | 1.92E-02    | 3.02E-04    | 2.91E-05
  2^13  | 4.61E-03 | 8.20E-03    | 1.69E-04    | 1.50E-05
  2^14  | 4.23E-03 | 6.40E-03    | 3.16E-05    | 2.33E-07
  2^15  | 2.06E-03 | 3.94E-03    | 1.30E-05    | 2.20E-07
  2^16  | 1.56E-03 | 3.80E-03    | 2.49E-06    | 5.97E-09
  2^17  | 9.13E-04 | 8.33E-04    | 2.76E-07    | 5.98E-08
  2^18  | 7.94E-04 | 5.56E-04    | 1.77E-06    | 5.35E-08
  2^19  | 6.90E-04 | 5.53E-04    | 5.08E-07    | 3.42E-08
  2^20  | 1.69E-04 | 4.73E-04    | 5.08E-07    | 3.98E-08
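For readers who wish to reproduce the flavour of these tests, the following sketch evaluates I1 and I2 with scrambled Sobol points and estimates the error from independent replicates; SciPy's scrambled Sobol generator is used here only as a stand-in for our GPU generator.

```python
import numpy as np
from scipy.stats import qmc

s = 10                                    # dimension
a = np.arange(1, s + 1, dtype=float)      # the case a_i = i

def f1(x):
    return np.prod(0.5 * np.pi * np.sin(np.pi * x), axis=1)

def f2(x, a):
    return np.prod((np.abs(4.0 * x - 2.0) + a) / (1.0 + a), axis=1)

def qmc_estimate(func, m=14, replicates=10, seed=0):
    """Average over independently scrambled Sobol replicates; the spread of
    the replicate means gives a practical error estimate (randomized QMC)."""
    means = []
    for r in range(replicates):
        sob = qmc.Sobol(d=s, scramble=True, seed=seed + r)
        x = sob.random_base2(m=m)         # 2^m points
        means.append(func(x).mean())
    means = np.asarray(means)
    return means.mean(), means.std(ddof=1) / np.sqrt(replicates)

est, err = qmc_estimate(f1)
print("I1 ~", est, "+/-", err)            # exact value is 1
est, err = qmc_estimate(lambda x: f2(x, a))
print("I2 ~", est, "+/-", err)            # exact value is 1
```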
(Figure: elapsed time [ms], logarithmic scale, versus number of integration points 2^N for N = 11, ..., 20; CPU and GPU curves.)
Fig. 1. Numerical results for I1 (for dimension s = 10)
4 Conclusions
Our computational results demonstrate that the GPU offers a low-cost and high-performance computing solution for Owen scrambling, which is theoretically better but has been avoided due to the computational burden. Various improvements to this algorithm are possible, for instance by using a more efficient implementation of the SALSA pseudorandom function on this GPU or by choosing a different family of pseudorandom functions. Continued improvements will only make the GPU more attractive as a high-performance computing device for quasirandom applications. Future research and algorithm design in this area will include Grid/GPU implementation.
Acknowledgement

Supported by the Ministry of Education and Science of Bulgaria under Grant No. DO02-146/2008.
References

1. Atanassov, E.: A New Efficient Algorithm for Generating the Scrambled Sobol Sequence. In: Dimov, I.T., Lirkov, I., Margenov, S., Zlatev, Z. (eds.) NMA 2002. LNCS, vol. 2542, pp. 83-90. Springer, Heidelberg (2003)
2. Bromley, B.C.: Quasirandom Number Generation for Parallel Monte Carlo Algorithms. Journal of Parallel Distributed Computing 38(1), 101-104 (1996)
3. Caflisch, R.: Monte Carlo and quasi-Monte Carlo methods. Acta Numerica 7, 1-49 (1998)
4. Chi, H.: Scrambled Quasirandom Sequences and Their Applications. PhD dissertation, FSU (2004)
5. Chi, H., Jones, E.: Generating Parallel Quasirandom Sequences by using Randomization. Journal of Distributed and Parallel Computing 67(7), 876 (2007)
6. Chi, H., Mascagni, M.: Efficient Generation of Parallel Quasirandom Sequences via Scrambling. In: Shi, Y., van Albada, G.D., Dongarra, J., Sloot, P.M.A. (eds.) ICCS 2007. LNCS, vol. 4487, pp. 723-730. Springer, Heidelberg (2007)
7. Cranley, R., Patterson, T.: Randomization of number theoretic methods for multiple integration. SIAM Journal of Numerical Analysis 13(6), 904-914 (1976)
8. Davis, P., Rabinowitz, P.: Methods of Numerical Integration. Academic Press, New York (1984)
9. Genz, A.: The numerical evaluation of multiple integrals on parallel computers. In: Keast, P., Fairweather, G. (eds.) Numerical Integration, pp. 219-230. D. Reidel, Dordrecht (1987)
10. Goeddeke, D., Strzodka, R., Turek, S.: Performance and accuracy of hardware-oriented native-, emulated- and mixed-precision solvers in FEM simulations. International Journal of Parallel, Emergent and Distributed Systems 22(4), 221-256 (2007)
11. Hong, H., Hickernell, F.: Algorithm 823: Implementing scrambled digital sequences. ACM Transactions on Mathematical Software 29(2), 95-109 (2003)
12. Joe, S., Kuo, F.: Remark on Algorithm 659: Implementing Sobol's quasirandom sequence generator. ACM Transactions on Mathematical Software 29(1), 49-57 (2003)
13. Niederreiter, H.: Low-discrepancy and low-dispersion sequences. Journal of Number Theory 30(1), 51-70 (1988)
14. Niederreiter, H.: Random Number Generations and Quasi-Monte Carlo Methods. SIAM, Philadelphia (1992)
15. Owen, A.: Randomly permuted (t, m, s)-nets and (t, s)-sequences. In: Monte Carlo and Quasi-Monte Carlo Methods in Scientific Computing. Lecture Notes in Statistics, vol. 106, pp. 299-317 (1995)
16. Owen, A.: Scrambled net variance for integrals of smooth functions. Annals of Statistics 25, 1541-1562 (1997)
17. Owen, A.: Scrambling Sobol' and Niederreiter-Xing points. Journal of Complexity 14, 466-489 (1998)
18. Owen, A.: Variance with alternative scramblings of digital nets. ACM Transactions on Modeling and Computer Simulation 13(4), 363-378 (2003)
19. Sobol, I.M.: Uniformly distributed sequences with additional uniformity properties. USSR Comput. Math. and Math. Phy. 16, 236-242 (1976)
20. Radovic, I., Sobol, I.M., Tichy, R.: Quasi-Monte Carlo methods for numerical integration: Comparison of different low discrepancy sequences. Monte Carlo Methods and Appl. 2, 1-14 (1996)
21. Tezuka, S.: Quasi-Monte Carlo discrepancy between theory and practice. In: Fang, K.T., Hickernell, F., Niederreiter, H. (eds.) MC&QMC Methods, pp. 124-140. Springer, Berlin (2002)
22. Wang, X., Fang, K.: The effective dimension and quasi-Monte Carlo. Journal of Complexity 19(2), 101-124 (2003)
23. http://www.nvidia.com/cuda
24. http://en.wikipedia.org/wiki/Salsa20
Applying the Improved Saleve Framework for Modeling Abrasion of Pebbles

Péter Dóbé, Richárd Kápolnai, András Sipos, and Imre Szeberényi

Budapest University of Technology and Economics, Hungary
{dobe,kapolnai,szebi}@iit.bme.hu, [email protected]
Abstract. Saleve is a generic framework for making the development of Parameter Study tasks easy for scientists and engineers not familiar with distributed technologies. In this paper we present our lightweight authentication procedure for Saleve to delegate user credentials towards the grid. Then we present detailed statistics of abrasion of pebbles gained with Saleve. Finally we make remarks on the makespan of the application and look for ways to reduce it.
1 Introduction
A significant number of scientific and engineering problems could be regarded as a Parameter Study (PS) problem, i.e. a parameter space is to be traversed which can be partitioned into subdomains, and these subdomains can be processed independently. Consequently, these problems can be parallelized trivially. Although there are many computational infrastructures to execute a PS problem, these systems usually need deep knowledge for optimal utilization, thus a researcher could benefit from a tool which liberates him/her from the task of managing jobs in a distributed system. The Saleve framework [14] aims to fulfil this mission. To use the services of Saleve is not challenging: the researcher needs to make some straightforward modification in the source code of the PS program, recompile it, and the binary can be executed either on the local desktop machine, or the same binary can submit itself to a grid or a cluster. The architecture of the Saleve component connecting to the EGEE grid [11] was presented in [10] and [7]. In this paper first we wish to introduce our architectural development in the method of authentication which integrates the authentication between Saleve and the EGEE grid, and between the Saleve services and the researcher (user). The user does not have to be aware whether the jobs were submitted to a grid or to a cluster; however, these systems need different authentication. Our method is based on the proxy certificate mechanism. If the user holds a grid-enabled certificate, a common proxy will be generated with the appropriate grid-specific extension. Otherwise, a dummy proxy certificate will be generated for authentication purposes. Compared to other solutions, the main advantage of our approach is the lightweight client side, i.e. no heavy software dependency on the user's desktop machine. Second, we apply the Saleve framework for a computational model for a particular problem in modeling abrasion of pebbles which clearly belongs to the
domain of PS. Our final goal is to treat the general, three-dimensional case and investigate the evolution of the shapes emerging in the abrasion process. Here we present a two-dimensional model, which requires much less computational effort; however, a detailed investigation cannot be carried out rapidly without a parallel implementation. Finally we take observations on the performance of this Saleve application, and present ideas for improvement which may be generalized to other PS applications.
2 Grid Authentication Sequence in Saleve
This section describes how Saleve mediates the authentication of the user towards a grid. More specifically, we use as grid the EGEE infrastructure and its middleware, gLite [11], but this method probably could be generalized with minor modifications to other ones built on the proxy and attribute certificate mechanism (RFC 3820 and 3281). In this section we distinguish between a user who is a researcher (and thus a Saleve user), and a grid user who submits jobs to the grid. To present the authentication procedure, we need to outline the basics of Saleve. In a general usage scenario the user has a C/C++ scientific application to submit to a distributed environment. The first task the user has to accomplish in order to exploit the framework is making the Saleve client. This means some straightforward modifications need to be done in the original source code, i.e. separating the parameter space partitioning logic and the main task, and the new source code is to be compiled and linked against the Saleve client library. We call the created application the Saleve client. The client can be executed on the local computer for testing and developing, but in case of a computationally intensive task the client can submit itself to a Saleve server. The Saleve server receives the client's task already cut into pieces and dispatches these pieces as jobs to a selected adapter (plug-in) to handle the jobs. Currently the available adapters are for Condor, gLite, and local execution. Here we discuss the gLite plugin. A detailed presentation of Saleve can be found in [10,7,14].

2.1 Authentication towards the gLite Middleware
When the Saleve server submits jobs to the grid, it has to authenticate itself as a grid user. Consequently, the researcher using Saleve (the user) needs to be a grid user at the same time, and has to delegate his/her right of accessing grid resources to the Saleve server. This supersedes our previous approach where the Saleve server itself was the grid user. Usually when a grid user wishes to submit a job to the grid, a short-term proxy certificate is created [12], signed by the grid user with the long-term key and extended by the VO. This short-term certificate and private key can be used by the job for delegation purposes with much less security risk. We itemize these steps as we need to alter the roles in the following subsection.
Step 1. The grid user generates a proxy certification request only for authenticating to the VO.
Step 2. The grid user signs the proxy request with the long-term key.¹
Step 3. The grid user asks the VO to issue an attribute certificate (AC) based on his/her membership.
Step 4. The grid user creates a new proxy request extended by the returned AC, and signs it with the old proxy generated in steps 1 and 2.

The steps above are integrated in the tool voms-proxy-init in gLite.

2.2 Involving the Client in the Delegation Process
Although the four steps above can be carried out by voms-proxy-init, it would have expensive assumptions on client side. If we used voms-proxy-init in the Saleve client, many software dependency would need to be installed, but our intention is making the client as lightweight as possible. On the other hand, in order to let the server authenticate the user, the client has to delegate the user credentials. This is achieved by distributing the four steps between the client and the server after a mutual authentication: Step 1 The server generates a proxy request with subject to the distinguished name of the user, and sends it to the client. Step 2 The client signs the request with the long-term key and sends it back. Step 3 The server asks the VO to issue an AC for the subject of the proxy. Step 4 The server creates a new proxy request extended by the returned AC, and signs it with the old proxy signed by the client (the user). The mutual authentication is performed in the SSL handshake when the clientserver connection is built up, maintaining a list of certificate authorities to be accepted. Alternatively, the server could verify the identity of client by verifying the signature on the proxy. This delegation procedure is favourable from many aspects: the client stays lightweight as it only needs to sign a standard certificate request, and the longterm credentials are safely stored only by the client. If the server is going to submit the jobs to a batch system where there is no need to delegate user credentials (e.g. a local Condor pool), step 3 and 4 are simply omitted as they belong to the Saleve gLite plugin. Moreover, step 3 and 4 can be executed by calling voms-proxy-init. Related Work. There are other toolkits that already implemented delegation mechanisms. The Globus Toolkit provides these functionalities [12], which can be utilized even via the GSI plugin in the gSOAP toolkit [4,15]. 1
¹ Steps 1 and 2 can be repeated in a chain by replacing the "long-term key" with the newly created proxy, or they can even be omitted, using directly the long-term key to contact the VO, but these scenarios are not relevant in our case.
Another approach to implement credential delegation is extending the web service interface of the grid service with delegation-related methods, more similarly to our case. One notable example is CREAM, the new computing element implementation of gLite [2], another one is the Delegation Interface offered by services of the ARC middleware [1]. The developers of GridSite also proposed a Delegation portType [13]. However, we could not afford to rely on these tools in order to keep the Saleve client lightweight which now only depends on the gSOAP [15] and the openssl libraries.
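Steps 1 and 2 of the delegation can be sketched as follows with the Python cryptography package; the sketch omits the RFC 3820 proxy-specific extensions and the VOMS attribute certificate of steps 3 and 4, and none of the function names correspond to actual Saleve code.

```python
# Sketch of delegation steps 1-2: the server builds a key pair and a signing
# request carrying the user's distinguished name; the client signs it with its
# long-term key, producing a short-lived certificate.  A real proxy certificate
# would additionally carry the ProxyCertInfo extension and a CN=proxy element.
import datetime
from cryptography import x509
from cryptography.x509.oid import NameOID
from cryptography.hazmat.primitives import hashes, serialization
from cryptography.hazmat.primitives.asymmetric import rsa

def server_make_request(user_cn):
    key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
    csr = (x509.CertificateSigningRequestBuilder()
           .subject_name(x509.Name([x509.NameAttribute(NameOID.COMMON_NAME, user_cn)]))
           .sign(key, hashes.SHA256()))
    return key, csr           # the key stays on the server, the CSR goes to the client

def client_sign_request(csr, user_cert, user_key, hours=12):
    now = datetime.datetime.utcnow()
    cert = (x509.CertificateBuilder()
            .subject_name(csr.subject)
            .issuer_name(user_cert.subject)
            .public_key(csr.public_key())
            .serial_number(x509.random_serial_number())
            .not_valid_before(now)
            .not_valid_after(now + datetime.timedelta(hours=hours))  # short-lived
            .sign(user_key, hashes.SHA256()))
    return cert.public_bytes(serialization.Encoding.PEM)
```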
3 Simulating the Abrasion of Pebbles
A wide range of possible applications of the Saleve framework could be mentioned here; as an example we take a new model of abrasion of pebbles [8,9]. The final goal is to develop a three-dimensional discrete model of the abrasion process. Even the experience with the planar, two-dimensional model showed that a parallel implementation of the code is needed to gain fine results. A detailed description is given in [8]. The model captures the abrasion process of one pebble by collisions with several impactors over time. The shape of the pebble is a convex polygon. We distinguish between two possible events:

1. the impactor is much larger than the pebble (a randomly chosen vertex of the polygon is chopped and replaced by a small edge);
2. the impactor is much smaller than the pebble (a randomly chosen edge of the polygon is retreated parallel to itself).

One of the key questions is the shape of a pebble after a long procedure of abrasion, i.e. a long sequence of events 1 or 2 with probability p and (1 - p), respectively. The initial shape of the pebbles is gained by discretising an ellipse into a polygon with 40 vertices. The elongation of the shape is given by $r^0 = R_{\max}^0/R_{\min}^0$, where $r$ denotes the flatness, and $R_{\max}$ and $R_{\min}$ are the maximal and minimal distances between the centroid of the polygon and its perimeter. The superscript denotes the number of the numerical step, i.e. $r^0$ is the initial flatness. By the initial shape and the parameter p we aim to investigate the outcome of the abrasion process. After a threshold in the number of collisions (in the presented simulation this number was 1000) we regard a pebble to be a needle if at any step $R_{\max}^i/R_{\min}^i > 8.0$ and $R_{\max}^i/R_{\min}^i > 1.5 \cdot R_{\max}^0/R_{\min}^0$. On the other hand, for $R_{\max}^i/R_{\min}^i < 1.5$ and $R_{\max}^i/R_{\min}^i < R_{\max}^0/R_{\min}^0 + \varepsilon$ the final shape is considered to be a circle ($\varepsilon$ is an arbitrarily small number). To avoid endless runs the number of collisions is limited to 90000 steps; the pebbles in this group have reached none of the mentioned limit states. Fig. 1(a) shows the bifurcation diagram and 1(b) contains the number of steps required for the decision about the type of the abrasion process.
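A compact sketch of the two-event polygon model is given below; the chopping ratio, the retreat distance, the centroid-based flatness measure and the absence of degeneracy handling are simplified illustrative choices rather than the calibrated model of [8].

```python
import numpy as np

rng = np.random.default_rng(0)

def centroid(P):
    x, y = P[:, 0], P[:, 1]
    xn, yn = np.roll(x, -1), np.roll(y, -1)
    cross = x * yn - xn * y
    A = cross.sum() / 2.0
    return np.array([((x + xn) * cross).sum(), ((y + yn) * cross).sum()]) / (6.0 * A)

def flatness(P):
    c = centroid(P)
    r_max = np.linalg.norm(P - c, axis=1).max()
    e = np.roll(P, -1, axis=0) - P
    n = np.stack([e[:, 1], -e[:, 0]], axis=1)
    n /= np.linalg.norm(n, axis=1, keepdims=True)
    r_min = np.abs(((c - P) * n).sum(axis=1)).min()   # distance to the perimeter (convex case)
    return r_max / r_min

def line_intersection(p1, d1, p2, d2):
    s, _ = np.linalg.solve(np.array([[d1[0], -d2[0]], [d1[1], -d2[1]]]), p2 - p1)
    return p1 + s * d1

def chop_vertex(P, t=0.1):                 # event 1: vertex becomes a small edge
    i = rng.integers(len(P))
    a, v, b = P[i - 1], P[i], P[(i + 1) % len(P)]
    return np.vstack([P[:i], [v + t * (a - v), v + t * (b - v)], P[i + 1:]])

def retreat_edge(P, d=0.01):               # event 2: edge moved inward, parallel to itself
    m = len(P)
    i = rng.integers(m); j = (i + 1) % m
    e = P[j] - P[i]
    n = np.array([-e[1], e[0]]); n /= np.linalg.norm(n)   # inward normal (CCW polygon)
    q = P[i] + d * n
    Q = P.copy()
    Q[i] = line_intersection(P[i - 1], P[i] - P[i - 1], q, e)
    Q[j] = line_intersection(P[j], P[(j + 1) % m] - P[j], q, e)
    return Q

def simulate(r0=2.0, p=0.5, n_vertices=40, max_steps=90000):
    th = np.linspace(0.0, 2.0 * np.pi, n_vertices, endpoint=False)
    P = np.stack([r0 * np.cos(th), np.sin(th)], axis=1)   # discretised ellipse
    r_init = flatness(P)
    for step in range(max_steps):
        P = chop_vertex(P) if rng.random() < p else retreat_edge(P)
        r = flatness(P)
        if step > 1000:
            if r > 8.0 and r > 1.5 * r_init:
                return "needle"
            if r < 1.5 and r < r_init + 1e-3:
                return "circle"
    return "undecided"
```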
(Figure, two panels over the (r⁰, p) parameter plane: (a) final distribution of pebbles, with regions labeled needle, circle, and between; (b) steps needed for classification, up to 90000.)
Fig. 1. Bifurcation diagrams for the abrasion of pebbles. The black points forming a thick curve on 1(a) denote those runs, where none of the limit states has been reached in 90000 steps of abrasion. The points above this curve denote pebbles became a needle, the points below the curve correspond to circle final shapes.
4 Performance of the Application
Although the simulation presented in Section 3 is an ordinary PS task, executing it efficiently on the grid raises non-trivial problems. We define efficiency as the maximum completion time or makespan, because it is the primary indicator of the satisfaction of a Saleve user. Below we give an illustration of our observations on the performance of a Saleve task, and suggest some ways for further improvement. The completion times of the jobs, thus the makespan as well, basically depend on two weakly correlated factors: the processing time, and the queueing time (or latency) of each job submitted to the grid.² The impact of these factors is illustrated in Fig. 2, where the parameter space was partitioned into 700 pieces. Since both the maximum queueing time and the maximum processing time for an arbitrary job are difficult to foresee, we propose a posteriori approaches to reduce them, i.e. we intervene in the execution after submitting the jobs.

Reducing the Queueing Time. Handling latency a priori is assigned to the grid schedulers; however, it is inevitable that a small percentage of jobs spend extremely long time in a queue. To avoid this, which is crucial to reduce the makespan, we have tried to resubmit these jobs after a certain timeout, and cancel the old ones. The resulting completion times are shown in Fig. 3. This method guarantees that sooner or later every job gets into a limited queue. While today the grid has no cost of usage for a researcher yet, our objective function (the efficiency) does not directly depend on the load of the worker nodes or on the latency experienced by other users. Consequently it may be reasonable
² For simplicity, we consider the waiting time after the completion negligible compared to the queueing time. Note that while the two types of time are independent for a single job, the maximum values of the two sets may be correlated.
(Figure: completion time [sec] of the 700 jobs, split into processing and queueing components, ordered by queueing time.)
Fig. 2. Job completion times ordered by queueing time
to experiment with a greedier procedure. When a job is considered to be in an extremely long queue, instead of cancelling and resubmitting it, the Saleve server could resubmit it several times at once. This could decrease the expected completion time of the resubmitted single job. Additionally, if we resubmitted only a limited percentage of the jobs, it would not significantly increase the load of the grid.

Reducing the Processing Time. The most intuitive idea to reduce the maximum processing time is selecting a finer partitioning. Although the average processing time would be decreased then, the maximum processing time would not surely be decreased: as the sizes of the jobs are usually not identical (the width of the black stripe in Fig. 2), in rare cases the maximum can even increase. Furthermore, incrementing massively the number of jobs may also increase the expected maximum latency because, first, the probability of catching a very long queue is increased and, second, the available grid resources are limited. This is why we plan to design a more sophisticated, iterative partitioning method. It starts from an initial partitioning, and if the processing time of a job exceeds a given threshold (a timeout), it cuts the job into smaller pieces and resubmits these pieces as new jobs.

Related Work. A number of efforts have been made to support the deployment of parameter study tasks. The iterative partitioning of the parameter space has also been introduced in [16], but with the interactivity of the user, not for efficiency purposes. The APST framework also has an adaptive scheduling algorithm with task duplication for generic middlewares [5,6]. A possible extension
(Figure: completion time [sec] of the 700 jobs, split into processing, queueing, and timeout components, with a resubmission timeout of 3 hours.)
Fig. 3. Job completion times with a timeout of 3 hours
of our approach could be taking costs into account, for which a cost-time optimisation algorithm is discussed in e.g. [3].
5 Concluding Remarks
We presented our authentication improvement for Saleve which involves the user credentials in authenticating towards the grid. The most important feature of our solution is a practical one: the client could stay lightweight, resulting in better portability of the scientific application due to its minimal requirements. We applied Saleve for the simulation of abrasion of pebbles and presented empirical high-resolution statistics. We also gave some feedback on the performance of the distributed application and outlined possibilities to improve it.

Acknowledgements. The authors are grateful to Ákos Nagy for his contribution to the implementation. Part of this work was funded by the Péter Pázmány program (RET-06/2005) of the National Office for Research and Technology, and by the Hungarian Scientific Research Fund (OTKA 72146). The authors would like to thank the Enabling Grids for E-sciencE project (INFSO-RI-222667).
References

1. D1.2-1 Extensive description of functionality and definition of interfaces (WSDLs) for the core components, KnowARC public deliverable edition (August 2007)
2. Aiftimiei, C., et al.: Job submission and management through web services: the experience with the CREAM service. Journal of Physics: Conference Series 119(6), 062004, 10 p. (2008)
3. Buyya, R., Murshed, M., Abramson, D., Venugopal, S.: Scheduling parameter sweep applications on global grids: a deadline and budget constrained cost-time optimization algorithm. Softw. Pract. Exper. 35(5), 491-512 (2005)
4. Cafaro, M., Lezzi, D., Fiore, S., Aloisio, G., van Engelen, R.: The GSI plug-in for gSOAP: Building cross-grid interoperable secure grid services. In: Wyrzykowski, R., Dongarra, J., Karczewski, K., Wasniewski, J. (eds.) PPAM 2007. LNCS, vol. 4967, pp. 894-901. Springer, Heidelberg (2008)
5. Casanova, H., Berman, F.: Parameter Sweeps on the Grid with APST. In: Grid Computing: Making the Global Infrastructure a Reality. John Wiley & Sons, Inc., New York (2003)
6. Casanova, H., Zagorodnov, D., Berman, F., Legrand, A.: Heuristics for scheduling parameter sweep applications in grid environments. In: Proceedings of the 9th Heterogeneous Computing Workshop, p. 349. IEEE Computer Society, Washington (2000)
7. Dóbé, P., Kápolnai, R., Szeberényi, I.: Saleve: Supporting the deployment of parameter study tasks in the grid. In: Cracow Grid Workshop, Krakow, Poland, pp. 276-282. Academic Computer Centre CYFRONET AGH (2007)
8. Domokos, G., Sipos, A.A., Várkonyi, P.L.: Continuous and discrete models for abrasion processes. Periodica Polytechnica 40(1) (2009) (in press)
9. Domokos, G., Várkonyi, P.: Static equilibria of rigid bodies: dice, pebbles and the Poincaré-Hopf theorem. Journal of Nonlinear Science 16, 255-281 (2006)
10. Dóbé, P., Kápolnai, R., Szeberényi, I.: Simple grid access for parameter study applications. In: Lirkov, I., Margenov, S., Waśniewski, J. (eds.) LSSC 2007. LNCS, vol. 4818, pp. 470-475. Springer, Heidelberg (2008)
11. EGEE Information Sheets, http://www.eu-egee.org/
12. Foster, I., Kesselman, C., Tsudik, G., Tuecke, S.: A security architecture for computational grids. In: 5th ACM Conference on Computer and Communications Security, pp. 83-92. ACM, New York (1998)
13. McNab, A., Kaushal, S.: The GridSite security framework. UK e-Science All Hands Meeting (2005) ISBN 1-904425-53-4
14. Molnár, Z., Szeberényi, I.: Saleve: simple web-services based environment for parameter study applications. In: The 6th IEEE/ACM International Workshop on Grid Computing. IEEE Computer Society, Los Alamitos (2005)
15. Van Engelen, R.A., Gallivan, K.A.: The gSOAP toolkit for web services and peer-to-peer computing networks. In: 2nd IEEE/ACM International Symposium on Cluster Computing and the Grid, pp. 128-135. IEEE Computer Society, Los Alamitos (2002)
16. Wibisono, A., Zhao, Z., Belloum, A., Bubak, M.: A framework for interactive parameter sweep applications. In: 8th IEEE International Symposium on Cluster Computing and the Grid, p. 703. IEEE Computer Society, Washington (2008)
Information Flow and Mirroring in an Agent-Based Grid Resource Brokering System

Maria Ganzha1, Marcin Paprzycki1, Michal Drozdowicz1, Mehrdad Senobari2, Ivan Lirkov3, Sofiya Ivanovska3, Richard Olejnik4, and Pavel Telegin5

1 Systems Research Institute, Polish Academy of Science, Warsaw, Poland
2 Tarbiat Modares University, Tehran, Iran
3 Institute for Parallel Processing, Bulgarian Academy of Sciences, Sofia, Bulgaria
4 University of Sciences and Technologies of Lille, Lille, France
5 SuperComputing Center, Russian Academy of Sciences, Moscow, Russia

Abstract. We are developing an agent-team-based Grid resource brokering and management system. One of the issues that has to be addressed is team preservation through mirroring of key information. We focus our attention on information generated within the agent team. In this paper we discuss the sources of information generated in the system and consider which information should be mirrored to increase the long-term survival of the team.
1
Introduction
In our work, to develop an agent-team-based high-level intelligent Grid middleware, we have established when access to information, generated and stored in the team, is needed. For instance, when a team leader (the LMaster agent) receives a Call for Proposals (CFP) message asking about the conditions of executing a job, its response (an offer, or a rejection) is to be based on knowledge of the client and of market conditions (see [7, 11]). Such knowledge is to be based on data collected during past interactions with (potential) client(s). Since we assume that a Global Grid is a highly dynamic structure [12], in which nodes can crash, it is important to assure that the team knowledge will not be lost if its leader crashes. Therefore, in [8, 13] we have suggested utilization of an LMirror agent, which should keep a copy of the information necessary to prevent team disintegration in the case of an LMaster crash. The aim of this paper is to point to the sources of information useful for an agent team, and to discuss which information should be mirrored and when. To this effect, in the next section, we present a brief overview of the proposed system, followed by arguments for the need for information mirroring. We follow with a description of the sources of information to be mirrored. In each case we discuss when and where this information should be persisted.
2
System Overview
To discuss the bird's-eye view of the system and the two main processes taking place in it (a Worker joining a team and a team accepting a job to be executed) we
Fig. 1. AML social diagram of the proposed system
will utilize an AML Social Diagram [5] in Figure 1. In our approach, agents work in teams. Each team is supervised by the LMaster agent. Agent teams utilize services of the Client Information Center (represented by the CIC agent) to advertise their resources and the Workers they are seeking. Teams consist of the LMaster, Workers, and the LMirror agent (which stores a copy of the information necessary to persist the team when the LMaster crashes). When the User is seeking a team to execute its job, it specifies the job constraints to the LAgent. The LAgent contacts the CIC and obtains the list of teams that can execute its job. Next, it utilizes trust information and the FIPA Contract Net protocol [15] to either find a team, or establish that no team fulfills the User-specified conditions. When the User would like its computer(s) to become Worker(s) in a team, it specifies the conditions of joining. The LAgent interacts with the CIC to obtain a list of teams that are seeking Workers. Next, it utilizes trust information and the FIPA Contract Net to establish if a team to join can be found. In both cases, trust information is a part of the LAgent knowledge. Similarly, when responding to the CFP from an LAgent, the LMaster utilizes Team knowledge (see [11,13]).
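To make the job-placement flow concrete, the following Python sketch plays through a single round as described above: the LAgent asks the CIC for candidate teams, filters them with its trust data, runs a one-round Contract Net and accepts the best offer. This is only an illustration; every class and method name used here (query_teams, call_for_proposals, accept_proposal, ...) is an assumption, not the system's actual API.

# Hypothetical sketch of one FIPA Contract Net round as described above.
class LAgent:
    def __init__(self, cic, trust_db):
        self.cic = cic              # Client Information Center proxy
        self.trust_db = trust_db    # locally collected trust information

    def place_job(self, job_constraints):
        # 1. Ask the CIC which teams advertise matching resources.
        candidates = self.cic.query_teams(job_constraints)
        # 2. Filter candidates using locally collected trust data.
        trusted = [t for t in candidates if self.trust_db.get(t.team_id, 0) >= 0]
        # 3. Single-round Contract Net: send a CFP to every trusted LMaster.
        offers = []
        for team in trusted:
            reply = team.lmaster.call_for_proposals(job_constraints)
            if reply is not None:          # None models a rejection
                offers.append((team, reply))
        if not offers:
            return None                    # no team fulfills the conditions
        # 4. Accept the cheapest offer and reject the rest.
        best_team, best_offer = min(offers, key=lambda o: o[1].price)
        best_team.lmaster.accept_proposal(best_offer)
        for team, offer in offers:
            if team is not best_team:
                team.lmaster.reject_proposal(offer)
        return best_team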
3
Need for Information Mirroring
While collecting knowledge by the LAgent, and utilizing it in decision making, are interesting topics, we focus here on processes taking place within the agent team. Recall our assumption that in the Global Grid any node can disappear at any time and for an unspecified time. Obviously, this applies also to the LMaster agent. Let us now see what happens when the LMaster becomes unavailable (e.g., due to an Internet access failure), and there is no information mirroring. Observe that the LMaster is the only agent that Users (their LAgents) contact. Note that in the case of contract negotiations, there would be no long-term consequences of a short-term disappearance of the LMaster (while the team would lose a potential contract, it should still be able to manage the team). More serious is the fact that the LMaster is the only gateway through which jobs are received and results sent back. The LMaster is also the only agent that Workers know about. In the case of a more permanent disappearance of the LMaster, the damage could be irreparable; e.g. Workers would not be able to send results of completed
tasks and/or receive new tasks, and would leave the team and not come back (see also [11]). Finally, observe that in this situation all knowledge collected by the LMaster becomes unavailable (and may potentially be lost forever). In response to these challenges we have decided to use data mirroring. In [6, 12, 13] we have proposed that an LMirror agent should store the information needed to keep the team alive and competitive. Furthermore, in [13] we have described restoring the LMaster and the LMirror in the case when either one of them crashes. Let us now consider three issues: (1) what information should be mirrored, (2) when should it be mirrored, and (3) where should it be mirrored.
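Before going through the individual data sources, the sketch below illustrates, under assumed names, the two mirroring regimes that the following subsections distinguish: team-critical records pushed to the LMirror immediately, and gracefully degrading data batched into periodic digests. It is a minimal illustration, not the actual implementation.

# Minimal sketch of immediate vs. batched mirroring; all names are assumptions.
import time

class LMasterMirroring:
    def __init__(self, lmirror, batch_interval=86400):
        self.lmirror = lmirror
        self.batch_interval = batch_interval   # e.g. once a day
        self.pending = []                      # digest of non-critical updates
        self.last_flush = time.time()

    def mirror_now(self, record):
        """Small, team-critical data (Worker IDs, signed contracts)."""
        self.lmirror.store(record)

    def mirror_later(self, record):
        """Data that degrades gracefully (trust counters, timing statistics)."""
        self.pending.append(record)
        if time.time() - self.last_flush > self.batch_interval:
            self.lmirror.store_batch(self.pending)
            self.pending = []
            self.last_flush = time.time()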
4 What Should Be Mirrored, When and Where?
4.1 Team Member Data
Let us start with information about members of the team. When a Worker joins the team, the LMaster obtains the following information:
1. ID of the LAgent joining the team. We have assumed that each agent utilizing our system has to be registered within it (a function of the CIC). It is thus possible to check the registration of a potential Worker before accepting it into the team. This can be treated as a mini-security measure that can be extended as the system develops.
2. Available resources. In the CFP the LAgent includes (ontologically demarcated; [10]) information about resources offered for the team. This information about each Worker is to be, among others, used for job scheduling [14].
3. Details of the contract. In [13], we have suggested that the contract proposal should contain: (a) a specification of the time of availability, and (b) the length of the contract. However, results of the MiG project (see papers available at [16]) indicate that more complex data-sets can be involved in contract negotiations and thus in the contract.
Out of these three items, the ID of the Worker has to be mirrored immediately. Without this information it will be impossible to contact this LAgent in the future. The remaining two could be recovered from the Worker. However, since information about item (1) is to be sent to the LMirror regardless, information about items (2) and (3) may be included as well. In this way a crash of the LMaster will not disturb team Workers with unnecessary exchanges of messages with the new LMaster. Note that the total amount of information mirrored here is proportional to the number of Workers in the team and thus should not be very large.
4.2
Job Contracts and Their Execution
In [8, 7, 6] we have specified a limited set of parameters describing the job contract. In the CFP the User could specify: (a) job start time, (b) job end time, and (c) resource requirements, while the response contained the price proposed by
the LMaster. Obviously, additional parameters may be added, but this does not affect the material presented here. As soon as the contract is signed, all data concerning it has to be mirrored. Furthermore, losing information about a contract would mean that it would either not be completed, or the results would have no User to be sent to (the latter would happen in the case when the information was lost while the task was already being executed by one of the Workers). In both cases, the team would rapidly lose its reputation and potential future contracts [11]. Signing a contract means that files pertinent to the job will be sent to the LMaster and have to be kept and mirrored until the job is completed. As soon as the job is completed, its results have to be sent back to the User. However, they should also be kept by the Worker until it receives a confirmation from the LMaster that they were successfully delivered to the customer. At the same time the LMaster keeps a second copy of the results. In this way we, again, have two copies of sensitive material available in the system. Information about job completion and successful delivery of results has to be stored. It is to be useful, first, for job scheduling (see, for instance, [14] and references collected there); second, for trust management (see [11] and section 4.3); and third, for contract disputes. Interestingly, only the fact that the job has been completed and its results successfully delivered to the customer has to be mirrored immediately (it describes the current status of the team). Information needed for future job scheduling may be mirrored "later". Losing data about execution times of some jobs (due to a crash of the LMaster) does not pose a threat to the existence of the team. Similarly, some information needed for trust management may be lost without damaging the team. Finally, note that scheduling and trust related information may be sent to the team data warehouse instead of the LMirror (see section 4.4).
4.3
Trust Information
Thus far we have dealt mostly with information that has to be mirrored immediately. Now we devote our attention to the remaining data generated in the system. In [11] we have considered trust management in the system. As noted above, here we are interested only in the relationships between the LMaster and its Workers. Following the discussion presented in [11], we assume that the contract between a Worker and the team is based on paying the Worker for availability (not for actual work done). It was also stipulated that a Worker has the right not to be available a limited number of times during its contract. Here, we can use the "pinging-mechanism" described in [6] to monitor Worker availability. Under these assumptions, for each Worker, we will collect information on how many times (a) it fulfilled the contract (#fc) — was available when it was expected to be available; (b) violated the contract (#vc) — was not available when it was supposed to be; and (c) did more than the contract required (#ac) — was available even though it could have been gone. This combined data will be stored, for each Worker, as a triple (#fc, #vc, #ac). While this information is in a cumulative
form, we also have access to detailed information about the results of each "pinging procedure," and each assigned and completed job. Since the latter data is not proportional to the number of Workers, but to the number of processed jobs, we consider it separately in section 4.4. Obviously, while the trust-related information has to be mirrored, some losses (e.g. missing the fact that Worker 31B successfully fulfilled contract 324 55B 3) are not a big problem. Therefore, it is enough to update information about trust related issues in a digested format at predefined time intervals; e.g. once a day. The frequency of updates depends on the nature of the jobs that a given team is executing and on the type of Worker contracts. If a team is executing time-consuming jobs and its members have long-term contracts, then the update of trust information can occur much less frequently than in the case when a large number of short jobs are completed by a team consisting of Workers with short-term contracts. We believe that a good approximation of the right time to send trust information to the LMirror is when, on average, each Worker has completed one contract. Note that the LMaster can adaptively adjust the time between updates (responding to the changing jobs and contracts of its team).
4.4
Volume Data Collection and Storage
Most information considered thus far was relatively small in volume, on the order of the size of the team or of the number of currently contracted jobs. There are, however, important data sets that grow over time and thus can become very large. First, data generated during contract negotiations (either job execution, or team joining), needed to establish market conditions and to support team behavior adaptivity (see also [6,11]). Second, information about execution time of completed jobs, used for advanced scheduling techniques (see [14] and references collected there for more details). Third, detailed information about the behavior of each worker (e.g. history of execution of each job, results of pinging sessions, etc.). This information is crucial not only for trust management, but also to build a realistic and comprehensive economy-based job scheduling model. Let us consider LAgent–LMaster negotiations (the remaining cases follow the same general pattern and are omitted for lack of space). Even though in [1,2,3,4] a large variety of contract negotiation mechanisms have been listed, in our work, for both job execution and team joining negotiations, we utilize the FIPA Contract Net Protocol [15]. The primary reason is that, while allowing for calls for proposals with a practically unlimited variety of constraints, it generates the contract (or establishes the impossibility of an agreement) within a single round of negotiations. This also reduces the total amount of information that needs to be stored for further use (compare, for instance, with the amount of data generated during any of the multi-round auction mechanisms; [9]). Now, information about a single job contract negotiation consists (in the minimal case) of:
– content of the CFP — LAgent ID, hardware and/or software sought; "start time"; time to be contracted / or was it an open-ended contract?
– content of the response — the proposed price; etc.;
– result of the negotiation — success/failure.
Similarly, in the case of negotiation concerning a Worker joining the team, at least the following information can be stored:
– content of the CFP — including information about available hardware and software; length of contract and availability,
– content of the response — the proposed price,
– result of the negotiation — did the Worker join the team, or not.
Interestingly, in all cases losing some of the data may decrease the competitiveness of the team, but the process is characterized by graceful degradation. Therefore it is possible to perform updates in a digested fashion at times of reduced utilization of the system. However, observe that, as time passes, the total volume of collected information increases rapidly. While it would be possible to store only a specific amount of the most current data, this would reduce the utility of data mining techniques (long-term trends may be lost). Therefore, it is important to preserve all historical data, and a data warehouse may be a solution. It should be obvious that in these cases mirroring utilizing the LMaster and LMirror pair can be very costly. First, an increasing volume of data would have to be stored by each one of them. Second, large amounts of data would have to be passed from the LMaster to the LMirror. Naturally, since we utilize single-round negotiations, passing updates in a compressed and digested form would reduce the burden somewhat, but this would help only in the short term. Finally, recovery of a crashed LMaster (or LMirror) would be extremely resource consuming. In this case, as a part of the recovery, an entire data warehouse would have to be transferred. For a large team that exists for a long time period (and our assumption is that good teams will be large and will exist for a long time), this would mean the necessity of transferring hundreds of gigabytes of data. Such a transfer would have to happen each time during recovery of the crashed managerial agent. Since, as suggested in [13], recovery of one crashed managerial agent stops the other one from doing anything else, it also stops all managerial activities in the team. Therefore, such an approach seems infeasible in the long run. Considering this, and based on the current trend of IT infrastructure outsourcing and specialization, we propose the following solution. The team data warehouse will be outsourced for storage to a team within the system that specializes in data warehousing. This is similar to a popular e-commerce scenario, where merchants contract software companies to design, implement, and run the infrastructure of their e-stores, while the software companies contract storage companies to actually store the data (this solution facilitates quality assurance for data preservation). Note that utilization of a data warehouse makes it possible to completely redefine the roles of the LMaster and the LMirror. Here, the LMaster stores information in the contracted "storage facility". Only data needed for team operation, e.g. the list of team members, is kept locally (and "mirrored" in the data warehouse, not with the LMirror). The role of the LMirror is not to store team data, but to store procedures for: (a) checking the existence of the LMaster, and (b) replacing the LMaster in the case when it crashes. In this way the LMirror can work like all other Workers (it will not be burdened by the mirroring procedures), which may also simplify the overall management and economic structure of the team.
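As a rough illustration of this redefined role, the sketch below shows an LMirror reduced to a watchdog that periodically checks for the LMaster and promotes itself, recovering minimal team state from the external data warehouse, only when the LMaster is gone. All names (ping, load_team_state, promote_to_lmaster) are hypothetical; the actual recovery procedure is the one described in [13], not this code.

# Hypothetical sketch of the LMirror watchdog role described above.
import time

def lmirror_watchdog(lmaster, warehouse, ping_interval=60, max_misses=3):
    misses = 0
    while True:
        # "Pinging-mechanism": check whether the LMaster is still reachable.
        misses = 0 if lmaster.ping() else misses + 1
        if misses >= max_misses:
            # Recover only the minimal team state kept in the contracted
            # data warehouse and take over as the new LMaster.
            team_state = warehouse.load_team_state(lmaster.team_id)
            return promote_to_lmaster(team_state)   # hypothetical helper
        time.sleep(ping_interval)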
5
Concluding Remarks
The aim of this paper was to discuss issues involved in information mirroring in an agent-based Grid resource management system. We have focused our attention on information generated within the agent team and considered four important cases: (1) team data, (2) job contracts and their execution, (3) trust-related information, and (4) other sources of large-volume information. We have established that we deal with two main situations: (a) small-volume data that has to be mirrored immediately, and (b) large-volume data that may be mirrored infrequently. Further analysis indicated that collection of large-volume data may be best achieved through utilization of a contracted data storage facility. This latter solution is our solution of choice and we plan to utilize it in our system.
Acknowledgments Work of Maria Ganzha and Michal Drozdowicz was supported from the “Funds for Science” of the Polish Ministry for Science and Higher Education for years 2008-2011, as a research project (contract number N516 382434). Collaboration of the Polish and Bulgarian teams is partially supported by the Parallel and Distributed Computing Practices grant. Collaboration of Polish and French teams is partially supported by the PICS grant New Methods for Balancing Loads and Scheduling Jobs in the Grid and Dedicated Systems. Collaboration of the Polish and Russian teams is partially supported by the Efficient use of Computational Grids grant. Work of Marcin Paprzycki and Sofiya Ivanovska was supported in part by the National Science Fund of Bulgaria under Grants No. D002-146 and D002-115.
References 1. Abraham, A., Buyya, R., Nath, B.: Nature’s heuristics for scheduling jobs on computational grids. In: 8th IEEE International Conference on Advanced Computing and Communications (ADCOM 2000), pp. 45–52 (2000) 2. Buyya, R., Abramson, D., Giddy, J., Stockinger, H.: Economic models for resource management and scheduling in grid computing. Concurrency and Computation: Practice and Experience 14(13-15), 1507–1542 (2002) 3. Buyya, R., Abramson, D., Venugopal, S.: The grid economy. Proceedings of the IEEE 93(3), 698–714 (2005) 4. Buyya, R., Giddy, J., Abramson, D.: An evaluation of economy-based resource trading and scheduling on computational power grids for parameter sweep applications. In: Second Workshop on Active Middleware Services (AMS 2000), p. 221. Kluwer Academic Press, Pittsburgh (2000) 5. Cervenka, R., Trencansky, I.: Agent Modeling Language (AML): A Comprehensive Approach to Modeling MAS. Whitestein Series in Software Agent Technologies and Autonomic Computing. Birkhauser, Basel (2007) 6. Dominiak, M., Ganzha, M., Gawinecki, M., Kuranowski, W., Paprzycki, M., Margenov, S., Lirkov, I.: Utilizing agent teams in grid resource brokering. International Transactions on Systems Science and Applications 3(4), 296–306 (2008)
7. Dominiak, M., Ganzha, M., Paprzycki, M.: Selecting grid-agent-team to execute user-job — initial solution. In: Proc. of the Conference on Complex, Intelligent and Software Intensive Systems, pp. 249–256. IEEE CS Press, Los Alamitos (2007) 8. Dominiak, M., Kuranowski, W., Gawinecki, M., Ganzha, M., Paprzycki, M.: Utilizing agent teams in grid resource management — preliminary considerations. In: IEEE J. V. Atanasoff Conference, pp. 46–51. IEEE CS Press, Los Alamitos (2006) 9. Drozdowicz, M., Ganzha, M., Paprzycki, M., Gawinecki, M., Legalov, A.: Information Flow and Usage in an E-Shop operating within an Agent-Based E-commerce system. Journal of Sibirian Federal University, SFU, Krasnoyarsk (in press) 10. Drozdowicz, M., Ganzha, M., Paprzycki, M., Olejnik, R., Lirkov, I., Telegin, P., Senobari, M.: Ontologies, agents and the grid: An overview. In: Topping, B., Ivanyi, P. (eds.) Parallel, Distributed, and Grid Computing for Engineering. Computational Scientce, Engineering and Technology Series, vol. 21, pp. 117–140. SaxeCoburg Publications, Stirligshire (2009) 11. Ganzha, M., Paprzycki, M., Lirkov, I.: Trust management in an agent-based grid resource brokering system — preliminary considerations. In: Todorov, M. (ed.) Applications of Mathematics in Engineering and Economics’33. AIP Conf. Proc., vol. 946, pp. 35–46. American Institute of Physics, College Park (2007) 12. Kuranowski, W., Ganzha, M., Gawinecki, M., Paprzycki, M., Lirkov, I., Margenov, S.: Forming and managing agent teams acting as resource brokers in the grid — preliminary considerations. International Journal of Computational Intelligence Research 4(1), 9–16 (2008) 13. Kuranowski, W., Ganzha, M., Paprzycki, M., Lirkov, I.: Supervising agent team an agent-based grid resource brokering system — initial solution. In: Xhafa, F., Barolli, L. (eds.) Proceedings of the Conference on Complex, Intelligent and Software Intensive Systems, pp. 321–326. IEEE CS Press, Los Alamitos (2008) 14. Senobari, M., Drozdowicz, M., Ganzha, M., Paprzycki, M., Olejnik, R., Lirkov, I., Telegin, P., Charkari, N.: Resource management in grids: Overview and a discussion of a possible approach for a agent-based middleware. In: Topping, B., Ivanyi, P. (eds.) Parallel, Distributed, and Grid Computing for Engineering. Computational Scientce, Engineering and Technology Series, vol. 21, pp. 141–164. Saxe-Coburg Publications, Stirligshire (2009) 15. Welcome to the FIPA, http://www.fipa.org/ 16. Projekt Minimum intrusion Grid, http://www.migrid.org/MiG/Mig/published_papers.html
Scatter Search and Grid Computing to Improve Nuclear Fusion Devices

Antonio Gómez-Iglesias1, Miguel A. Vega-Rodríguez2, Francisco Castejón-Magaña1, Miguel Cárdenas-Montes3, and Enrique Morales-Ramos4

1 National Fusion Laboratory, CIEMAT, Madrid, Spain
{antonio.gomez,francisco.castejon}@ciemat.es
http://www.ciemat.es
2 Dep. of Technologies of Computers and Communications, University of Extremadura, Cáceres, Spain
[email protected]
3 Dep. of Basic Research, CIEMAT, Madrid, Spain
[email protected]
4 ARCO Research Group, Cáceres, Spain
[email protected]
Abstract. Even though nuclear fusion is the next generation of energy, there are still many problems present in current nuclear fusion devices. Several of these problems can be solved by means of modelling tools. These tools are extremely demanding in terms of computational resources and they also use a large number of parameters to represent the behaviour of nuclear fusion devices. The possibility of introducing Evolutionary Algorithms (EAs) like Scatter Search (SS) to look for approximate configurations offers a great solution for some of these problems. However, since these applications require high computational costs to perform their operations, the grid is an ideal environment in which to carry out these tests. The distributed paradigm of the grid, as well as the high number of computational resources, represents an excellent alternative for executing these tools. Within this paper, we explain our work in these three fields, looking for optimized configurations of a nuclear fusion device.
1
Introduction
Scatter Search is a metaheuristic process that has been applied to many different optimisation areas with very good results. SS works on a set of solutions, combining them to obtain new ones that improve the original set. Unlike other evolutionary methods, such as genetic algorithms (GA), SS is not based on large random populations. Rather, it is based on strategic selections among small populations: while GAs usually work with populations of hundreds or thousands of individuals, SS uses a small set of different solutions. In the case of optimisation problems related to many scientific areas, the required time to get a fitness value is so high that traditional computation cannot be used to perform an exploration of the entire solution space. Modern
paradigms, like grid computing, offer the computational resources and capabilities to carry out these optimisation problems, but they are not easy to use [3,5,9,10]. Until now, many investigations have been carried out in order to use parallel architectures and GAs [2], but the coupling of grid capabilities and complex EAs is still in an immature state. The development of the grid has created a way that could potentially lead to an increase in the performance of these kinds of algorithms in terms of execution time and problem size. Nevertheless, a high level of expertise is required to develop and execute grid applications because many problems can arise due to the specifics of the grid infrastructure. Our goal consists of improving the equilibrium of plasma in a nuclear fusion device (TJ-II, a stellarator [4] located in Madrid) by means of the SS algorithm and grid computing. The rest of this paper is organised as follows: section 2 introduces some basic concepts about nuclear fusion, while section 3 gives a description of the elements and methods of the SS algorithm based on the implementation proposed in [11]. Section 4 shows the workflow required to optimise nuclear fusion devices, whereas section 5 is devoted to the distributed fitness calculation using the grid. In section 6, we collect the results obtained in our experiments. Finally, in section 7, we conclude the paper and summarise a variety of perspectives on the presented work.
2
Nuclear Fusion
Nuclear fusion is the process by which multiple atomic particles join together to form a heavier nucleus. It is accompanied by the release or absorption of energy. The fusion of two light nuclei generally releases energy while the fusion of heavy nuclei absorbs energy [6]. Among the different fusion reactions, the most promising one is shown in (1). Most current fusion devices work with these elements in order to obtain the reaction. In the equation, D denotes a deuteron, T a triton, 4He an isotope of helium (an α-particle), n a neutron and MeV megaelectron-volts.

D + T = 4He + n + 17.6 MeV    (1)

The quality of magnetic confinement is characterized by different criteria [12]. Equilibrium means that the characteristics of the plasma are varying slowly, so we have sustainable characteristics of the particles inside the device. By improving the equilibrium of the plasma, we improve the efficiency of the device, therefore increasing the probability of achieving fusion reactions.
3
Scatter Search Algorithm
Descriptions of this algorithm, as well as explanations of all the stages involved, are given in great detail in a variety of related works [7,11]. The design of these stages is deliberately broad and general, with the aim that anyone can implement other techniques for any of them.
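As a point of reference, the following skeleton lists the usual Scatter Search stages (diversification, reference set update, subset generation, combination, improvement) in the order they are applied; the stage functions are placeholders standing for the template of [11], not the authors' code.

# Skeleton of the Scatter Search template; the stage functions
# (diversification, combine, improve, ...) are placeholders.
def scatter_search(fitness, iterations, pop_size=100, refset_size=10):
    population = diversification(pop_size)               # e.g. GRASP-based
    refset = build_reference_set(population, fitness, refset_size)
    for _ in range(iterations):
        candidates = []
        for subset in generate_subsets(refset):
            child = combine(subset)                       # combination method
            child = improve(child, fitness)               # improvement method
            candidates.append(child)
        refset = update_reference_set(refset, candidates, fitness)
    return min(refset, key=fitness)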
3.1
Concurrent Implementation
For many problems we can get a new generation after a few seconds, but when each fitness value requires several minutes or even hours to be evaluated, this template does not offer an optimal result. Furthermore, we could have a scenario where the researchers involved in a specific problem do not have access to all the computational power required, but do have access to the grid, something that many researchers can obtain fairly easily. For these reasons, we have modified this template by adding functionalities that allow us to handle grid jobs and proxies without human supervision.
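A minimal sketch of such an unattended loop is given below, assuming thin wrappers around the metascheduler command line (submit, status, renew_proxy, ...); these names are illustrative, not an existing API.

# Sketch of unattended grid evaluation: submit, poll, resubmit failures,
# renew the proxy. All grid-facing calls are assumed wrappers.
import time

def evaluate_generation(grid, individuals, poll_interval=300):
    jobs = {grid.submit(ind): ind for ind in individuals}
    results = {}
    while jobs:
        if grid.proxy_lifetime() < 3600:      # renew before expiration
            grid.renew_proxy()
        for job_id, ind in list(jobs.items()):
            state = grid.status(job_id)
            if state == "DONE":
                results[id(ind)] = grid.fetch_output(job_id)
                del jobs[job_id]
            elif state in ("FAILED", "ABORTED"):
                new_id = grid.submit(ind)     # resend failed jobs
                del jobs[job_id]
                jobs[new_id] = ind
        time.sleep(poll_interval)
    return results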
4 Workflow to Measure the Equilibrium
4.1 Measuring the Equilibrium
What we have to measure in order to get a realistic idea of the equilibrium of our device is the value of \vec{v}_d, or drift velocity of particles within the confined plasma. This means to calculate the value given by (2), that is our target function (a deeper explanation of this expression can be found in the related work [8]).

F_{target function} = \sum_{i=1}^{N} \left[ \frac{\vec{B} \times \nabla |B|}{B^{3}} \right]_i    (2)
In this equation, i represents the different magnetic surfaces existing inside TJ-II and B, the intensity of the magnetic field. The target is to minimise the value given by this function. This is the fitness function implemented in our SS algorithm. Once we have the data to solve this function, it takes nearly ten minutes, on average, to compute the final value. For current configurations of TJ-II we get values greater than 1.0E+16 for this expression. Our goal will be to reduce this value which, in fact, means we are improving the confinement of particles inside the device. To get all of the values involved in this function, we need to execute a workflow application to calculate the magnetic surfaces existing in a magnetic confinement device. These applications are Fortran codes running on specific computers, often with deprecated compilers which are no longer available. The effort needed in order to run these applications in heterogeneous environments, such as grid computing, is also high.
4.2
Workflow Application
The workflow we need to execute to measure the equilibrium of the TJ-II, by solving (2), is widely explained in the related work [8]. The entire workflow requires one hour and a half on average to finish its computation and less than one hour for optimal configurations.
5
Grid-Oriented SS Implementation
The grid cannot be compared with supercomputers or special parallel machines, which can offer excellent results in terms of efficiency. Nevertheless, the distributed paradigm of the grid, along with the number of computational resources available, makes using SS for complex optimisation problems a very promising alternative. In these problems, the different fitness functions can be calculated on different Worker Nodes (WNs) of the grid, reducing the overall execution time. We have developed a set of python scripts to interact with the command line in the User Interface (UI) in order to perform complex operations, such as proxy management, allowing the user to cancel any job or resubmit failed jobs. New tests will only require the modification of the selection and combination methods to get a new implementation of the algorithm, but the entire source code (SS algorithm and python scripts) will remain the same.
5.1
Python Scripts
To get a non-supervised system, we have developed a set of python scripts that interact with the metascheduler and the proxy to properly manage all of the required processes. We have developed the interaction with two metaschedulers: GridWay and WMS (Workload Management System). With GridWay, we can access the grid environment by using both gLite and Globus, while WMS is the default metascheduler in the gLite middleware. As a result, we get a highly customizable system that can interact with different kinds of grid middleware. These scripts not only provide functionalities related to job management, they also interact with the whole grid environment: loading the metascheduler, detecting any failures in the grid infrastructure, collecting usage statistics, and performing proxy management.
5.2
SS Implementation
Diversification Method. We are using a diversification method based on Greedy Randomized Adaptive Search Procedure (GRASP) constructions [13]. This method uses a local search procedure [1] to improve the generated solutions. The evaluation function given for this GRASP method is the fitness value obtained for the solution when a value is added to one of the elements within the solution. The resulting solution is the best overall solution found with this local search.
Combination Method. We have chosen a mutation-based procedure to mix individuals and obtain a new one. This mutation-based procedure uses the sample standard deviation of each chromosome in the whole population to perform the mutation. This function assures convergence within the values of the chromosomes, even though this convergence is only noticed after a large number of generations.
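One possible reading of this combination operator is sketched below: each chromosome of the offspring is drawn around the parents' mean with a spread equal to the sample standard deviation of that chromosome over the whole population. This is an interpretation for illustration only, not necessarily the exact operator used by the authors.

# Hypothetical mutation-based combination using the per-chromosome
# sample standard deviation of the whole population.
import random
import statistics

def combine(parents, population):
    n = len(parents[0])
    child = []
    for i in range(n):
        mean_i = statistics.mean(p[i] for p in parents)
        sigma_i = statistics.stdev(ind[i] for ind in population)
        child.append(random.gauss(mean_i, sigma_i))
    return child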
Improvement Method. The improvement method we have to use cannot perform many evaluations because of the high execution time required for each of these evaluations. For this reason, we have developed a method that introduces small variations in the values of the solution and checks if the solution is better than the original one.
Diversity Measure. In our case, to measure the diversity among the elements in the population, we use the normalised value of each chromosome of any individual. This normalisation is calculated using (3). In other cases, this normalisation is not necessary; however, for this problem the difference between the possible values of the chromosomes makes it impossible to use the current value of each chromosome.

Norm_value_i = \frac{value_i - Min_i}{Max_i - Min_i}    (3)

Finally, with the expression (4) we can obtain the distance between two individuals, p and q, by:

d(p, q) = \sum_{i=1}^{n} |p_i - q_i| ,    (4)
where n is the number of chromosomes in the individuals p and q. Using this expression, we can get different levels of diversity within the elements in the population, which is also one of the objectives of the SS algorithm.
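Expressions (3) and (4) translate directly into code; the short sketch below normalises each chromosome to [0, 1] and then sums the absolute differences.

# Direct transcription of (3) and (4).
def normalise(individual, mins, maxs):
    return [(v - lo) / (hi - lo) for v, lo, hi in zip(individual, mins, maxs)]

def distance(p, q, mins, maxs):
    p_n = normalise(p, mins, maxs)
    q_n = normalise(q, mins, maxs)
    return sum(abs(pi - qi) for pi, qi in zip(p_n, q_n))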
6
Results
Here we present several results obtained by running our system on the grid. First, we show results related to the time required to evaluate 50 iterations of the SS. Next, we present results focused on the evolution of the fitness function within the population across the generations. As mentioned before, the computational time required for the local search process is very high, so different tests with this process activated and deactivated have been carried out and are also described in this section. The number of chromosomes of each solution is set to 100 for all the tests to get comparable values for the fitness function. We have run our tests using two different VOs (Virtual Organizations) and accessing two different grid infrastructures. In the first infrastructure we can access 5,088 computational resources in a gLite infrastructure using Scientific Linux 4.7. The second infrastructure consists of 11,321 computational resources, also running the gLite middleware and Scientific Linux 4.7.
6.1
Computational Results
First, we tried two tests with the same configuration and without local search, as shown in Table 1. The size of the subsets in the Reference Set is indicated in the table with RefSet Size. The results are also shown in this table.
Table 1. Execution times for two different configurations without local search and with local search. All the times are in the hh:mm:ss format.

Test                         First Test   Second Test   Third Test   Fourth Test
Number of Iterations                  2            50            2            50
Number of Chromosomes               100           100          100           100
Population Size                     100           100          100            50
RefSet Size                          10            10           10             5
Number of Jobs                      574        12,361       11,427         6,781
Execution Time                 36:10:19     535:36:25    495:12:51     310:12:53
Aggregated Execution Time     783:07:21   9,095:41:07  8,727:29:07   5,321:03:57
In this table, Execution Time represents the time from the start of the SS until it finishes, using the grid. Aggregated Execution Time is the sum of the execution times of all the jobs sent to the grid which, in fact, is the time that the algorithm would take to perform all the calculations on a single machine. If we focus on the results obtained for 50 iterations, we can see that the time required to obtain these results was 535:36:25 (more than 22 days). Considering the aggregated execution time of more than 9,095 hours, which is almost 379 days, the advantage of using grid computing becomes clear. The time required to get these results depends on the number of computational resources available, as well as the number of jobs submitted by other users. In the ideal case of simultaneously having as many free WNs as the maximum number of jobs, with one single user, this time would be lower, but this is just an ideal case which is difficult to reach in real grid environments. For the next tests, we just activated the local search and used the configurations in the last columns of Table 1. With these configurations the execution times required are as shown in the same table, which also shows the aggregated execution time required by all the processes. The main problem with the configuration that has 2 iterations is the long execution time required. The local search process takes a long time as a result of the high number of configurations to be processed. For this reason, when we tried the fourth test, we reduced the population, as well as the reference set size, which allowed us to increase the number of iterations.

Table 2. Best values for the different tests performed

Test                                                          Best Value
Initial Random Individuals                                    1.2E+22
Current Configurations                                        1.0E+10
2 Iterations - Local Search Deactivated                       1.76781E+05
40 Iterations - Local Search Deactivated                      7.056280E+03
2 Iterations - Local Search Activated                         1.47256E+04
50 Iterations - Local Search Activated - Smaller Population   1.02314E+04
6.2
Evaluation Results
In Table 2 we show the best results obtained for the previous tests, as well as the fitness value (on average) for the random individuals used at the beginning of the process and the value for existing configurations of the device. There are big differences depending on the configurations used. The resulting design allows for better confinement of particles within the device, increasing the probability of obtaining fusion reactions and improving the efficiency of the device.
7
Conclusions and Future Work
In this paper we have shown an implementation of the SS algorithm using the capabilities offered by grid computing to improve the equilibrium of confined plasma in a nuclear fusion device. The computational cost of the existing modelling tools and the time required to run different tests make it almost impossible to use traditional environments. In this work, grid computing has been used and the results are very promising. Even though execution times remain long, thanks to the number of computational resources that the grid provides, this time has been reduced from several months to just a few weeks. In the future, we plan to perform more tests with different configurations of the algorithm, as well as changing the combination method. One of the characteristics that can be measured with the designed workflow is the equilibrium, but many other characteristics can be measured with VMEC. This will lead us to a multi-objective implementation of the SS using grid computing. Acknowledgments. This work has been funded in part through the Research Infrastructures initiatives of the 7th Framework Programme of the European Commission, grant agreement Number 211804 (EUFORIA project) and grant agreement Number 222667 (EGEE III). The authors want to thank Ashley Bucholz for her interest and comments about the work.
References 1. Aarts, E., Lenstra, J.K.: Local Search in Combinatorial Optimization. Princeton University Press, Princeton (2003) 2. Alba, E., Tomassini, M.: Parallelism and Evolutionary Algorithms. IEEE Transactions on Evolutionary Computation 6(5), 443–462 (2002) 3. Berman, F., Hey, A., Fox, G.C.: Grid Computing. Making the Global Infrastructure a Reality. John Wiley & Sons, Chichester (2003) 4. Castej´ on, F., et al.: Ion Orbits and Ion Confinement Studies on ECRH Plasmas in TJ-II Stellarator. Fusion Science and Technology 50(3), 412–418 (2006) 5. Foster, I., Kesselman, C.: The Grid: Blueprint for a New Computing Infrastructure. Morgan Kaufmann, San Francisco (1999)
6. Freidberg, J.: Plasma Physics and Fusion Energy. Cambridge University Press, Cambridge (2007) 7. Glover, F.: A Template for Scatter Search and Path Relinking. In: Hao, J.-K., Lutton, E., Ronald, E., Schoenauer, M., Snyers, D. (eds.) AE 1997. LNCS, vol. 1363, pp. 13–54. Springer, Heidelberg (1998) 8. G´ omez-Iglesias, A., et al.: Grid Computing in Order to Implement a ThreeDimensional Magnetohydrodynamic Equilibrium Solver for Plasma Confinement. In: 16th Euromicro International Conference on Parallel, Distributed and networkbased Processing, pp. 435–439. IEEE Computer Society, Los Alamitos (2008) 9. Joseph, J., Fellenstein, C.: Grid Computing. Prentice Hall, Englewood Cliffs (2003) 10. Juh´ asz, Z., Kacsuk, P., Kranzlm¨ uller, D.: Distributed and Parallel Systems: Cluster and Grid Computing. Springer Science, Heidelberg (2005) 11. Laguna, M., Mart´ı, R.: Scatter Search — Methodology and Implementations. Kluwer Academic Publishers, Boston (2003) 12. Miyamoto, K.: Plasma Physics and Controlled Nuclear Fusion. Springer, Heidelberg (2005) 13. Pitsolulis, L.S., Resende, M.G.C.: Greedy Randomized Adaptive Search Procedures. In: Handbook of Applied Optimization. Oxford University Press, Oxford (2002)
Service-Oriented Integration of Grid Applications in Heterogeneous Grids

Radoslava D. Goranova

University of Sofia “St. Kl. Ohridski”, FMI, 5 James Bourchier Blvd., 1164 Sofia, Bulgaria
Abstract. Open Grid Service Architecture (OGSA) is an architectural model for service-oriented Grids. Over the years, OGSA has dominated as the Grid architecture for building Grid systems, and it tightly follows SOA principles. However, we cannot claim that OGSA is supported by every Grid environment. On the contrary, the development of OGSA as a Grid standard is not finished yet, and many Grid environments do not implement it. Heterogeneous Grids use different technologies and protocols to provide services for access to Grid resources. Some of them are not service-oriented, others are fully or partially so. This causes problems for developers of Grid applications, who have to use different approaches to access the different Grids. Moreover, from the users' point of view, it is very difficult to simultaneously use resources from these different types of Grids. Even leaving aside the different security policies that every Grid implements, there is no unified way to access Grid services or to integrate them. A solution to the problem is SOA. In this article we present our service-oriented approach for integration. For test purposes we use the Grid middleware Globus Toolkit 4 and g-Lite.
1
Introduction
The Grid [1] is a hardware and software infrastructure that provides coordinated resource sharing and high-end computational capabilities, based on standard open protocols, for delivering various qualities of service. The idea of the grid is to provide inexpensive and consistent access to unlimited computational and storage resources. From an architectural point of view, the Grid is five-layered (Figure 1). The Fabric layer provides resources: computational and storage systems, network resources, specific devices, sensors, even clusters or pools. The Connectivity layer defines communication and authentication protocols for exchanging data with the fabric layer and a secure mechanism for resource access. The Collective layer contains protocols and services for resource interaction: monitoring, diagnostics, scheduling, brokering, service discovery, etc. The services expose specific grid functionalities and are not bound to any particular technology. The Application layer provides software tools: application toolkits, portals and libraries for access to the services defined and implemented in the collective layer. The User application layer covers user-defined applications and user-defined grid services. Independently of the fact that every grid project designs and builds its own architecture to realize project requirements, we can say that each of these projects' architectures in its base
Fig. 1. Grid architecture
follows the grid architectural model described above. Based on project requirements, the different Grid projects realize this model using different approaches. Some of them, for example, follow the principles of service-oriented architecture (SOA), while others rely on client-server architecture. Based on architectural design, Grid environments can be classified as fully service-oriented, partially service-oriented, or not service-oriented. Fully service-oriented grid environments are those that provide well-defined and self-described services for all grid functionalities and capabilities. An example architecture of this kind is OGSA [2]. Open Grid Service Architecture (OGSA) is a Grid architecture which tries to impose itself as a standard for building service-oriented Grid systems. The OGSA specification is being developed by the Grid community with leadership from the Globus Project. OGSA is an architectural model for service-oriented Grids that elaborates the traditional Grid architecture. It tightly follows SOA principles. The aim of OGSA is to define a common, standard, and open architecture for grid-based applications and to standardize all the services in grid systems by specifying a set of standard interfaces for them. Although OGSA dominates as a standard, we cannot claim that it is supported by every Grid environment. On the contrary, the development of OGSA as a Grid standard is not finalized yet, and many Grid environments do not implement it. Typical examples are middleware such as g-Lite and BOINC. The first one is partially service-oriented and the second is not service-oriented at all. OGSA itself is implemented by the Globus Toolkit middleware. In this article we generalize the main problems which arise during the development of grid applications for heterogeneous grid environments, and we propose an approach for solving these problems. Our research is based on the examples of the g-Lite middleware and the Globus Toolkit.
2
Problem Definition
The Grid middleware is a software environment that implements the connectivity, collective and application layers of the grid architecture by coordinating and integrating Grid resources and services. Heterogeneous Grids use different technologies and protocols in their middleware for the design and implementation of the
environment's services. The development of grid applications, however, involves access to and invocation of the middleware's services. This causes problems for developers of Grid applications, who have to use different approaches to access the different Grids. Globus Toolkit 4 (GT4), for example, is a service-oriented toolkit for the development of Grid infrastructures. It realizes the OGSA requirements for stateful services by implementing the Web Services Resource Framework (WSRF) specification. GT4 provides high-level services such as a discovery service, job submission and security services, as well as common runtime libraries and tools. The technology used in GT4 for the realization of all high-level services is Axis2. From the developers' point of view, the advantage of this middleware is that the high-level services cover all the functionality necessary for a grid application, and one technology is used for the realization of all of them. Another advantage is that the end user can define, register and deploy his or her own user-defined service. A disadvantage is the service description: the high-level services cannot be invoked by a standard Axis2 tools client. Another example of commonly used middleware is g-Lite. The environment is designed and implemented for the EGEE Grid infrastructure and is tightly specified for the needs of this project. G-Lite provides grid services (components) such as the Workflow Management System, computing services such as the Computing Element, information services such as BDII and RGMA, storage services such as the Storage Element, and security services. Some of them are service-oriented high-level services and others are not. The technology used for the realization of the service-oriented high-level services is Axis 1.4. For other grid functionality, the middleware provides different APIs (Java, Python, C) for access. From the developers' point of view, the advantage of this middleware is that the service-oriented high-level services are well defined and described. A disadvantage is that the end user cannot define his or her own service and that there is no unified access to all high-level services. Integration of services provided by heterogeneous grid environments would require unification and standardization of the provided services. This requirement, of course, cannot be satisfied for Grid environments which use different technologies for service realization. On the other hand, this is a problem for the development of grid applications which use services and resources from heterogeneous grid environments. On the basis of the two given examples, we can generalize the following types of problems, concerning partially the g-Lite environment and partially integration in general:
1. No unified way to access high-level services — the developer has to use different technologies to access services from one middleware.
2. Lack of service-oriented high-level services — not all important high-level services are accessible as actual (Grid) services.
3. Lack of an opportunity to define and register user-defined services — the user can develop his or her own grid service, but cannot register it in the grid environment.
4. Difficulties in integrating high-level services from more than one grid environment — it is difficult to use services from different types of Grids simultaneously, even leaving aside the different security policies.
A solution to these problems is service-oriented integration.
3
Service-Oriented Approach for Integration
Service-oriented integration (SOI) [3] is an approach to integrating computing entities using only service interactions in a service-oriented architecture. As we mentioned above, most grid environments provide services and APIs to applications for access to grid functionalities. Through the use of APIs, end users or applications can access the functionality of the underlying grid middleware. However, the APIs exposed by different environments differ in the way they are accessed and in the technology used to access them. The objective of heterogeneous grid integration is to understand and use the APIs for accessing the required functionalities and to mask the differences between the technologies used for the APIs and their access. The service-oriented approach for integration achieves this by using services which expose interfaces. The interfaces, as they are defined by SOA, provide the contract between different applications. This contract assures that the interface of the service stays unchanged, regardless of the interface's implementation. The main accent in the service-oriented approach for integration is well-defined interfaces which are loosely coupled. How this approach can be applied as a solution to the problems defined above is a matter of more extensive research. In the rest of this article we describe the initial steps and conclusions that were made during the investigation and the future work that remains to be done. All results and conclusions are based on the example of the two middleware stacks GT4 and g-Lite. As we mentioned above, not all services in legacy Grid environments are service-oriented. We cannot change this, but we can define a set of well-defined service interfaces (service adaptors, service proxies and service mediators) which provide a mechanism for legacy service access. Such a service layer has to be developed or adapted for every grid environment used by applications. If we reconsider the problems described above, the problem of unified access to the services and the lack of service-oriented high-level services can be easily solved by creating service adaptors and service mediators. A service adaptor is a service which wraps a legacy function and makes it callable through web services. In this way, functionality that is not service-enabled can be exposed as a service in the sense of SOA. Service mediators hide the complexity of the underlying legacy function by providing service-enabled access to this functionality. It is important to use only one technology for the implementation of all the services. That provides unification and standardization of the way these services are accessed. The possibility for a user to register a set of services (chosen or developed by him/her) is of great benefit for every service-oriented environment. Using the service-oriented approach, this problem can be solved by defining a registry, a service which stores user-defined service descriptions, service endpoints and service security policies in a database. To implement this service, a UDDI specification with the standard functions browse, get, and publish can be used. The integration of high-level services from more than one grid environment, however, is not an easy task. The different security policies and mechanisms which heterogeneous Grids use will require the creation of proxy services for every Grid to be integrated.
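As an illustration of the registry idea (not the actual implementation, which uses MySQL and Axis services), the following sketch shows the three UDDI-style operations over a minimal table whose columns follow the attributes listed in the next section; SQLite stands in for the real database, and all table and column names are assumptions.

# Hypothetical sketch of a service registry with publish/browse/get operations.
import sqlite3   # stand-in for the MySQL back end used in the paper

class ServiceRegistry:
    def __init__(self, db_path="registry.db"):
        self.db = sqlite3.connect(db_path)
        self.db.execute("""CREATE TABLE IF NOT EXISTS services (
            name TEXT, service_type TEXT, grid_type TEXT,
            endpoint TEXT, description TEXT)""")

    def publish(self, name, service_type, grid_type, endpoint, description=""):
        self.db.execute("INSERT INTO services VALUES (?, ?, ?, ?, ?)",
                        (name, service_type, grid_type, endpoint, description))
        self.db.commit()

    def browse(self, service_type=None):
        query = "SELECT name, grid_type, endpoint FROM services"
        params = ()
        if service_type:
            query += " WHERE service_type = ?"
            params = (service_type,)
        return self.db.execute(query, params).fetchall()

    def get(self, name):
        return self.db.execute(
            "SELECT * FROM services WHERE name = ?", (name,)).fetchone()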
Fig. 2. Services
4
Applying Approach Experience
In order to solve the above problems, we start with the definition and development of registries. The function of the registries in our layer is very important, because they contain information about users, certification authorities, virtual organizations and service descriptions. The technology that we used for the layer development is based on the Tomcat Server, the Axis SOAP Engine and MySQL as the database management system. First, we investigate the g-Lite and GT4 services and classify them according to several criteria: level, origin, and description. The level of the service specifies whether the service is used for access to a grid resource (low level) or is accessed by another service or by a grid application (high level). According to that criterion, g-Lite has both low-level and high-level services. The origin of the service defines whether the service is a part of the existing environment or is part of an external application. According to this criterion, services are system or user-defined services. G-Lite has only system services. The description of the service is its most important characteristic, since it defines its functionality. According to that criterion, services are computational (providing functionality for job submission), security (user authentication, authorization and proxy creation), storage (file transfer and storage), informational (service discovery), and composite (service composition). All registered user-defined services and system services are stored in a database in the services registry (Figure 2). The information which we store about the services is based on the criteria defined above. We store the service type (origin and description), for example JSB for job submission, USR for user-defined, and INF for information services. We also store the grid type, indicating in which environment the service is defined. High-level services for adding a service into the registry, browsing the registry and finding a service remain to be developed. For every grid user who has a valid certificate and is registered in a virtual organization, we store information in the users registry (Figure 3). We store information about the user certificate, the generated user proxy and the distinguished name. We also store information about the user key and key password; however, this raises many security issues. The problems are mainly related to the fact that if an application can access this registry, then it can create more and more
Fig. 3. Users
This violates the security policies of the Grid, based on X509 PKI, which require that only the user has access to his or her private key. A solution that can partially address the problem is for the application to use a dynamically built connection to the registry based on the DN of the user who invokes the service. Access to a row in the users table can then be restricted based on the data contained in that row (the user DN). In that way the application will be able to see only those rows where the value in the DN column is equal to the DN passed in the dynamic connection. Still, more security issues remain to be solved, such as how the user will securely upload his private key, and whether the two fields for the user key and key password have to be doubly encrypted for better security. In g-Lite, such user information (user DN, user rights and the corresponding CA) is stored in a file in a directory of every component (grid service) that can be accessed by the user. The same is valid for the public certificates of the certification authorities. Even though every valid certification authority has a URL from which its public certificate can be downloaded, these URLs are also stored in a file on the same grid service. That makes the development of a grid application dependent on the operating system and tightly related to the deployment of that application on a machine where all the mentioned user and certification authority files exist. That, of course, is a disadvantage, which can be avoided exactly by using these two types of registries: users and certification authorities. For certification authorities (Figure 4), we store information about the caURL, status, version, PEM certificate, etc. At the moment that information is loaded and updated through a shell script (both for g-Lite and for GT4) that scans and reads the files from the specified directory where all certificates are located. The registry is updated every time a new CA is added, updated or removed. This registry is also applicable for GT4 certification authorities, which use the same principles of user authorization. Interesting problems that can occur in this case are related to certification authorities from different grid environments that use the same alias but differ in their meaning. Such problems can be solved by identifying every certification authority in the registry not only by its alias, but by the type of the grid as well. The advantage of using a database, especially for services whose security is based on X509 certificates and grid user proxies, is that all necessary grid security information is stored in one place, which can be accessed from everywhere. That allows grid applications to be developed independently of the operating system.
Fig. 4. Certification Authorities
Using this feature we developed an example WMS mediator, which loads the user's proxy and CA from the registries and exposes the main WMS functionality, requiring as input only the IP address of the WMS Proxy Server. The problem with the WMS service is that its security depends on the proxy certificate and its verification. The WMS service provides a Java based client API for access to the service functionality. To use this API, a developer has to have access to a directory with the valid certificates of all certification authorities, in order to verify the user certificate. This makes WMS usable only from a place (computer) where such a directory exists. The technology we used for this service development is Axis2. For test purposes, and in order to unify the way services in GT4 and g-Lite are accessed, we started with the development of a service adaptor for job submission in the g-Lite and GT4 environments. The technology we use for service development is again Axis2. For the moment the service provides functionality for job submission only in the g-Lite environment (Figure 5). We are still working on developing an Axis2 proxy service, which will provide functionality for grid proxy and VOMS proxy creation and store the generated proxy certificate in the database.
Fig. 5. Job Submission service
5
Conclusion and Future Work
In the g-Lite environment there are a lot of components which are not service-enabled. The aim of this research is not to develop services for every non-service-enabled component in g-Lite, but to provide a conceptual way in which access to the services of g-Lite and GT4, two heterogeneous Grids, can be unified. However, many things remain to be done. More service adaptors have to be defined for the information services in the two types of grids. In g-Lite, for example, the information services are realized based on two components: BDII and the MON box. G-Lite provides client service discovery functionality (API and CLI), which searches for requested information in both components. The provided service discovery functionality is not service-enabled. GT4 provides the MDS4 index service for component information and discovery. In this case, the service adaptor has to provide service discovery functionality as a service, using the API or CLI that g-Lite provides and a mechanism for invoking GT4 services based on user credentials. Moreover, a well defined common description of all services has to be provided for the service registry. And last but not least, a mechanism for how and where user-defined services can be deployed has also to be provided.
References
1. Foster, I., Kesselman, C.: The Grid 2: Blueprint for a New Computing Infrastructure. Morgan Kaufmann, San Francisco (2004)
2. Foster, I., Kishimoto, H.: The Open Grid Services Architecture, Version 1.5 (2006), http://forge.gridforum.org/projects/ogsa-wg
3. Juric, M., Loganathan, R.: SOA Approach to Integration. Packt Publishing (2007)
Grid Based Environment Application Development Methodology
Dorian Gorgan, Teodor Stefanut, Victor Bacu, Danut Mihon, and Denisa Rodila
Computer Science Department, Technical University of Cluj-Napoca
{dorian.gorgan,teodor.stefanut,victor.bacu}@cs.utcluj.ro, {danut mihon,denisa rodila}@yahoo.com
Abstract. The satellite image oriented processing in the Grid based environment applications needs flexible tools, process description and execution in order to reveal significant information. The main processing consists of imagery classification which is actually a search of information through combinations of multispectral bands of satellite data. The Grid based application development process should assist the developer to implement required functionality, but making transparent to the user the Grid complexity and the mapping of the conceptual application over the physical Grid resources. The paper exemplifies the gProcess based Grid application development methodology throughout the development phases of the GreenView environment application, and analyzes the efficiency of such an approach. The primary aim of the GreenView application is the refinement of surface and vegetation parameters in SEE and CE regions based on satellite images. The GreenView application is based on ESIP platform, which lies on the gProcess toolset specific satellite image processing operators and services. Keywords: Environment application, development methodology, Grid, workflow, process description.
1
Introduction
The Environment oriented Satellite Data Processing Platform (ESIP) has been developed through the SEE-GRID-SCI (SEE-GRID eInfrastructure for regional eScience) [11] two-year project (2008–2010), which is funded by the European Commission. The ESIP platform supports the development and execution of Grid based applications concerned particularly with the processing of satellite images and, more generally, with environment-related processing and studies. ESIP is based on the gProcess platform [10] developed by the MedioGrid [8] national research project. It provides the user with the possibility to explore optimal solutions for Grid processing and information searching in the multispectral bands of the satellite images. The gProcess platform is an interactive toolset supporting the flexible description, instantiation, scheduling and execution of Grid processing. The gProcess platform is a collection of Web and
Grid services and tools providing the following basic functionality: visual, manipulation based, interactive description of Grid based satellite image processing by pattern workflows; development of hypergraphs as a composition of basic operators, services, and subgraphs; pattern workflow instantiation for a particular satellite image; satellite data management, access and visualization; and workflow based Grid execution. The applications based on the ESIP platform aim to benefit from Grid processing and storage capabilities but, at the same time, to provide the final users with functionalities that make the data processing and acquisition complexity transparent to them. This paper presents a methodology for creating environmental applications (e.g. GreenView [11]) using the resources provided by the ESIP platform. The methodology concerns mainly algorithm identification and analysis, data model definition, identification of the atomic parts of the algorithms, parallel or serial processing design, description of the processing using gProcess, graph description and instantiated graph specification, process execution, and the graphical user interface.
2
Related Works
The large amount of information received from satellites — about 1.5 Terabytes of raw data per day from the next generation of HES [6,12] geostationary satellites — requires substantial processing resources and massive data storage. This is one of the main reasons for the need for parallel distributed computing in GIS information extraction and real-time processing. Over the years, the scientific community has developed large data repositories in the Earth Observation domain, but these are usually addressed only to specific organizations and are not available for public access. Currently, projects like GPOD [4] and GENESI-DR [5] aim to offer open access to repositories that provide functionalities for satellite image management, indexing and retrieval. These projects allow both manual and automatic data access, through a Web interface or Web services. Simple and complex Grid operators based on workflow descriptions have been developed for data processing as well. The workflow systems use directed acyclic graphs as a solution for process modeling. UNICORE [3] and Symphony [7] use directed acyclic graph (DAG) based descriptions, while Kepler [1] and Triana [2] use directed cyclic graphs. Another available solution is related to Petri nets [9]. The description languages of Grid workflows mainly use XML descriptions, such as the Grid Service Flow Language (GSFL) and Grid Workflow. The GreenView application relies on the gProcess workflow based description and execution, which has been developed through the MedioGrid project.
3
Grid Application Development Methodology
As mentioned above, currently most of the components needed to create an environmental application are scattered among different platforms that have been
developed independently. Many times, creating a reliable bridge that connects all these heterogeneous tools is a very complex task. Moreover, the complexity of an environmental application requires that specialists from different domains such as mathematics, informatics, geography, and earth observation work together in most of the development phases. As a result, in order to develop a reliable environmental application efficiently, a development methodology must be considered. This paper presents such a methodology based on the ESIP and the gProcess workflow description and execution platforms, with the following development phases:
1. Algorithm identification and analysis;
2. Data model definition;
3. Identify atomic parts of the computation;
4. Algorithm implementation;
5. gProcess based process description;
6. User interface development.
4
Algorithm Identification and Analysis
At the beginning of the application development it must be established what types of processing algorithms (e.g. interpolation, random number generators, etc.) will be implemented. After choosing the types of algorithms, where applicable, a thorough analysis must be conducted in order to choose the best algorithm for each processing type (e.g. for interpolation we could use: neighbor interpolation, bicubic interpolation, bilinear interpolation, etc.). There are many criteria that can be used for algorithm selection, such as: precision, computation resources needed on a specific data set, parallelization possibilities, etc. The selection process must be conducted by experts from mathematics, informatics and the application domain together in order to assure the best result in the algorithm selection. At the end of this phase, the algorithms must be described as mathematical formulas that will later be implemented as operators in a Grid processing structure. For example, in our application we will implement a coarse to fine interpolation aiming to create an image with a better resolution than the original. The missing points will be computed using a distance weighted nonlinear interpolation that computes the value of a point Vm from four surrounding known values: V1, V2, V3, and V4. The mathematical formulas that will be implemented are:

V_m = \frac{\sum_{i=1}^{4} \cos^4\left(\frac{\pi}{2}\cdot\frac{d_i}{d_{max}}\right) V_i}{\sum_{j=1}^{4} \cos^4\left(\frac{\pi}{2}\cdot\frac{d_j}{d_{max}}\right)} \qquad (1)

d_i = \arcsin\left(\sqrt{\sin^2\left(\frac{\varphi_a-\varphi_b}{2}\right) + \cos\varphi_a \cos\varphi_b \sin^2\left(\frac{\lambda_a-\lambda_b}{2}\right)}\right) \cdot R \qquad (2)
where: Vi — one of the four surrounding pixels, Vi (Φi , Λi ); di — the great circle distance between two points; dmax — the great circle distance between the furthest two pixels of the surrounding four pixels; R — average radius for a spherical approximation of the Earth (≈ 6371.01 km).
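The two formulas translate directly into code. The sketch below is a minimal Python illustration (the paper does not prescribe an implementation language); it assumes that coordinates are given in radians and that exactly four surrounding values are supplied, and the numbers in the example call are made up.

```python
import math

R_EARTH = 6371.01  # average radius for a spherical approximation of the Earth, in km

def great_circle_distance(phi_a, lam_a, phi_b, lam_b):
    """Great circle distance between two points given in radians, as in formula (2)."""
    h = (math.sin((phi_a - phi_b) / 2.0) ** 2
         + math.cos(phi_a) * math.cos(phi_b) * math.sin((lam_a - lam_b) / 2.0) ** 2)
    return math.asin(math.sqrt(h)) * R_EARTH

def interpolate_point(values, distances, d_max):
    """Distance-weighted value V_m of the missing point, as in formula (1)."""
    weights = [math.cos(math.pi / 2.0 * d / d_max) ** 4 for d in distances]
    return sum(w * v for w, v in zip(weights, values)) / sum(weights)

# Example: four known temperatures and their great circle distances to the missing point.
print(interpolate_point([21.0, 22.5, 20.0, 23.0], [40.0, 75.0, 110.0, 150.0], 150.0))
```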
5
Data Model Definition
This second phase of the development process aims to establish what kinds of data types (e.g. satellite images) and sources can be used and processed in the application. There are several satellite image types (MODIS, Landsat, Aster, etc.) that provide different information in specific data formats and structures. The selection of image types is conditioned mainly by the information required for the functionalities included in the application and, where multiple image types are compatible, by the data structures (e.g. coordinate systems) and information quality (e.g. image resolution). The next step in data model definition is to identify the available data sources (repositories) for the established data types. These sources should provide functionalities for automatic data management, indexing and retrieval, as these databases should give the end user the possibility to search in real time for the needed information. For our example application the data model will be defined by a MODIS satellite image representing a certain area of the Earth and temperature values recorded at specific points inside the same area. The resolution of the MODIS image is 1 km² and the resolution of the temperature measurements is about 150 km². The aim of the application will be to compute the temperature in every point of the MODIS image using the interpolation algorithm described above and the recorded temperature values. As output, an image in .tiff format will be obtained, representing the temperature values for the same area but with a higher resolution of 1 km².
6
Identify Atomic Parts of the Computation
In order to take advantage of the Grid processing capabilities, the execution of the algorithms needed by the application should be parallelized. The parts of the processing algorithms that can be executed independently, named atomic parts, should be identified and analyzed for potential parallel execution. This analysis should also consider the time needed for data transfer between the nodes of the Grid network, as there are situations when this operation could take longer than the overall serialized execution of the algorithms' atomic parts on the same Grid node. The implementation of the algorithms as atomic parts significantly increases the degree of re-use, as these parts can be used in the same application for different functionalities or even in different applications, speeding up the development process. The interpolation algorithm used in our example is a simple one and does not involve separate processing steps, so it can all be considered as one atomic part. Nevertheless, the conversion of the temperature file to the metric coordinate system and the pseudo-coloring algorithm are separate atomic parts of the application and will be implemented as separate Grid operators. As the data involved in the processing can be quite large, all three operators are serialized and executed on the same node in the grid. In these conditions,
the processing time for every month is about 32 seconds and grows linearly with the number of months. Therefore we group the computations for one month on the same node and parallelize the computations for different months by using the Grid parallel processing capabilities. This way, two or more processings can take place at the same time and the overall computing duration will be significantly shorter.
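The month-level parallelization can be pictured with the small sketch below, which uses a local process pool merely as a stand-in for Grid worker nodes; the function body, names and output file names are placeholders for the serialized chain of the three operators described above.

```python
from multiprocessing import Pool

def process_month(month):
    # Placeholder for the serialized operator chain run on one node:
    # coordinate conversion -> interpolation -> pseudo-coloring (~32 s per month).
    return f"temperature_{month:02d}.tiff"

def process_months(months, workers=4):
    # Months are independent, so they can be processed concurrently,
    # mirroring the Grid-level parallelization described in the text.
    with Pool(workers) as pool:
        return pool.map(process_month, months)

if __name__ == "__main__":
    print(process_months(range(1, 13)))
```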
7
Algorithm Implementation
This phase decides the implementation form of the algorithms' atomic parts. Grid or web services, procedures or separate applications can be used to implement a specific part of the algorithm, while the other parts could be implemented in the same or in different forms. Some of the most important criteria that could influence the choice are: the source of the needed information (a locally saved file, a resource from a distributed database, a remote file that can be acquired through HTTP or FTP), Grid platform specific requirements, or the possibility of the operator being used in other applications over the Web or the Grid. As a result of the analysis carried out in the previous step of the methodology, we will need three operators in our example application: coordinate system conversion, interpolation algorithm computation, and pseudo-coloring. As an example, we consider that the algorithm for conversion from geographical to metric coordinates is already available, implemented as a Grid or Web service.
8
Graph Based Process Description
Conceptually, the algorithm can be described as an acyclic graph (Fig. 1, left) that has arcs representing the execution dependencies between nodes and four different types of nodes:
Input data or resource. This type of node can represent a satellite image or a data value (int, float, string data types) used for special operations (e.g. threshold).
Operator. Represents any atomic operation identified in the 3rd phase of the methodology and implemented in the 4th phase as a procedure or a standalone application.
Service. A Web or Grid service that may be included in the processing graph in a similar way to an ordinary operator. The main difference comes from the way in which these nodes are executed over the Grid.
Sub-graph. Used for developing complex processing description graphs, this type of node allows the integration of a sub-graph into another graph. Virtually any acyclic processing graph could be included as a sub-graph in another one.
8.1 Pattern Process Description
Fig. 1. PDG examples: left — workflow describing the Grid based processing, by satellite image bandwidths (Input), operators (OP), Grid and Web services (S), subgraphs (SG), and processed images (Output); right — the non-linear interpolation algorithm.
Making use of the graph elements described above, and through the interface provided by the gProcess platform, the processing graph for each algorithm can easily be described by a pattern process description graph (PDG) [10], combining the atomic parts and the data defined in the 1st and 2nd phases of the methodology (Fig. 1, right). Particular attention should be paid to the compatibility of two connected nodes, in terms of input and output types and formats. The PDG description of the algorithm specifies the data models used, the operators (i.e. atomic parts) involved and how all these nodes are connected, but does not make any reference to specific data resources (e.g. a specific image file, an int or float value, etc.). This way, the same PDG can be used to process any specific data that meets the data model requirements, avoiding remaking the graph based description for every data set.
8.2 Instantiated Process Description Graph
In order to apply an algorithm to a compatible data set, the PDG that describes the algorithm must be mapped over the specific data by an instantiated PDG (iPDG). This operation requires that, for every resource node defined in the PDG, a data resource be specified that meets the format and type requirements (e.g. satellite image, int, float, string, etc.). All the data files should meet the description of the data models defined in the 2nd phase of the development methodology.
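To illustrate the distinction, the sketch below represents a PDG as a small Python data structure whose input nodes carry only data types, together with a function that produces an iPDG by binding those nodes to concrete resources. The structure, node names and file names are assumptions of this example and do not reflect the actual gProcess description format.

```python
# Pattern PDG: input nodes name only data types, so the graph is data-independent.
pdg = {
    "nodes": {
        "img":    {"kind": "input",    "type": "satellite_image"},
        "temps":  {"kind": "input",    "type": "temperature_grid"},
        "conv":   {"kind": "service",  "name": "coordinate_conversion"},
        "interp": {"kind": "operator", "name": "nonlinear_interpolation"},
        "color":  {"kind": "operator", "name": "pseudo_coloring"},
        "out":    {"kind": "output",   "type": "tiff_image"},
    },
    "arcs": [("img", "conv"), ("conv", "interp"), ("temps", "interp"),
             ("interp", "color"), ("color", "out")],
}

def instantiate(pdg, bindings):
    """Produce an iPDG by binding every input node of the PDG to a concrete resource."""
    ipdg = {"nodes": dict(pdg["nodes"]), "arcs": list(pdg["arcs"])}
    for node, resource in bindings.items():
        if ipdg["nodes"][node]["kind"] != "input":
            raise ValueError(f"{node} is not an input/resource node")
        ipdg["nodes"][node] = {**ipdg["nodes"][node], "resource": resource}
    return ipdg

ipdg = instantiate(pdg, {"img": "modis_scene.hdf", "temps": "temperatures.csv"})
```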
8.3 Process Execution
The execution of an algorithm over a data set can be done using the iPDG file described by the user. When the execution is requested, the application verifies whether the specified iPDG is correctly formed and, if the file meets the requirements, it is passed to the Executor service provided by the gProcess platform. The service parses the iPDG description and generates the appropriate internal data structure, expanding the sub-graph nodes and considering for execution only the atomic nodes such as operators, services and data resources. At this point, before launching an operator for execution, the Executor service performs a final consistency check, similar to the one performed at the PDG description phase but extended with verifications concerning the availability of the operators (procedures, applications, services, etc.) and data sources that are to be used in the execution.
Fig. 2. Sample of temperature computation for coarse to fine resolution of a satellite image: left — initial temperature measurements of the coarse resolution satellite image; right — computed temperature of the fine resolution satellite image
9
User Interface Development
The application interface must provide the user with access to all the application functionalities while hiding the complexity of the processing that is carried out in the background. The development of the interface may start earlier, just after the data model definition phase, when all the required information for the user interface design is already available. The graphical user interface development should be user-centered in order to maximize the interface usability and efficiency. As most of the functionalities needed by the environmental application are already implemented in ESIP platform, the interface development will be constrained only by the very specific information related to the project (e.g. operators, data model, workflow, and parallelization).
10
Conclusions
The development of environmental applications based on Grid infrastructures and dedicated to non-technical experts from different domains is a challenging task. The complex computing and data acquisition processes must be covered with a simple and straightforward interface that allows specialists with average computer skills and no Grid related skills to process massive data, distributed or available through http or ftp protocols, in order to conduct research or observation studies based on satellite image analysis. The development process requires a thorough analysis of the algorithms that are to be implemented, the required data and its availability, the technologies used to develop the application, and the interface design and implementation. All these studies must be performed by specialists from very different
domains such as mathematics, informatics, and Earth Observation-related domains, who must work together on a single, unitary product. The methodology presented in this paper offers guiding recommendations that allow a better collaboration between the experts participating in the development process and a more efficient management of resources. Acknowledgments. This research is supported by the SEE-GRID-SCI (SEE-GRID eInfrastructure for regional eScience) project, funded by the European Commission through contract no. RI-211338.
References 1. Altintas, I., Berkley, C., Jaeger, E., Jones, M., Ludaescher, B., Mock, M.: Kepler: Towards a Grid-Enabled System for Scientific Workflows. In: Proceedings of Workflow in Grid Systems Workshop in GGF10, Berlin, Germany (2004) 2. Churches, D., Gombas, G., Harrison, A., Maassen, J., Robinson, C., Shields, M., Taylor, I., Wang, I.: Programming scientific and distributed workflow with Triana services. Journal of Concurrency and Computation: Practice and Experience 18(10), 1021–1037 (2005) 3. Erwin, D.W., Snelling, D.F.: UNICORE: A Grid Computing Environment. In: Sakellariou, R., Keane, J.A., Gurd, J.R., Freeman, L. (eds.) Euro-Par 2001. LNCS, vol. 2150, pp. 825–834. Springer, Heidelberg (2001) 4. Fusco, L., Cossu, R., Retscher, C.: Open Grid Services for Envisat and Earth observation applications. In: Plaza, A., Chang, C. (eds.) High Performance Computing in Remote Sensing, pp. 237–280. Chapman & Hall/CRC/Taylor & Francis Group (2008) 5. GENESI-DR project consortium (2009), http://www.genesi-dr.eu 6. Hawick, K.A., Coddington, P.D., James, H.A.: Distributed frameworks and parallel algorithms for processing large-scale geographic data. Parallel Computing 29(10), 1297–1333 (2003) 7. Lorch, M., Kafura, D.: Symphony — A Java-based Composition and Manipulation Framework for Computational Grids. In: Proceedings of 2nd IEEE/ACM International Symposium on Cluster Computing and the Grid (CCGrid 2002), Berlin, Germany, pp. 136–144. IEEE Computer Society, Washington (2002) 8. Petcu, D., Zaharie, D., Gorgan, D., Pop, F., Tudor, D.: MedioGrid: a Grid-based Plat-form for Satellite Images. In: Procs. IDAACS 2007, pp. 137–142. IEEE Press, Los Alamitos (2007) 9. Peterson, J.L.: Petri Nets. ACM Computing Surveys 9(3), 223–252 (1977) 10. Radu, A., Bacu, V., Gorgan, D.: Diagrammatic Description of Satellite Image Processing Workflow. In: Proc. Int. Symp. Symbolic and Numeric Algorithms for Scientific Computing (SYNASC 2007), pp. 341–348. IEEE Press, Los Alamitos (2007) 11. SEE-GRID-SCI project consortium, SEE-GRID eInfrastructure for regional eScience (2009), http://www.see-grid-sci.eu/ 12. Schmit, T.J., Li, J., Gurka, J.: Introduction of the Hyperspectral Environmental Suite (HES) on GOES-R and beyond. Presented at the International (A)TOVS Science Conference (ITSC-13) in Sainte Adele, Quebec, Canada (2003)
User Level Grid Quality of Service
Anastas Misev1 and Emanouil Atanassov2
1 University Sts Cyril and Methodius, Faculty of Natural Sciences & Mathematics, Institute of Informatics, Skopje, Macedonia
[email protected]
2 Bulgarian Academy of Sciences, Institute for Parallel Processing, Sofia, Bulgaria
[email protected]
Abstract. Improving the Quality of Service of the Grid infrastructure is one of the most important ongoing issues in the Grid community. It has many implications, from broader users’ acceptance of the technology to the shift of the Grid usage from the scientific world toward the businesses. While best effort can be acceptable in the scientific community, the business applications must have clear and well defined service levels based on deterministic QoS metrics. In this paper we introduce a different type of Grid QoS, one under users’ control. Using it, the users can have better control over the level of services they are using from the Grid infrastructure.
1
Introduction
To have a wide acceptance of some technology, it has to be user oriented and user controlled, as much as possible. Even traditionally one-way services like audio or video broadcasting gain a whole new dimension when the users can control them, in the form of on-demand services, personalized streams, etc. We expect the same from Grid technology. Since its beginnings, it has been a technology adopted mostly by the scientific community. But having in mind its potential and possible directions of development like cloud computing, network computing, etc., something has to be done to make it closer to the users, but at the same time attractive to businesses. Improving the Quality of Service of the Grid infrastructure is one of the most important ongoing issues in the Grid community. It has many implications, from broader users' acceptance of the technology to the shift of the Grid usage from the scientific world toward the businesses. While best effort can be acceptable in the scientific community, the business applications must have clear and well defined service levels based on deterministic QoS metrics. Using techniques like process mining [1] we have realized that there are many cases of jobs with very high queue waiting times [9]. The long waiting time is one of the most common reasons for job failures, usually due to expired proxies. Similar results can be found in [7]. Also, many new Grid users note that even though their jobs are short, they can get stuck in the waiting queues for a very long time. This makes them quite skeptical toward accepting this powerful infrastructure.
The paper is structured as follows: in the second part we discuss the general need for and justification of QoS in Grid computing, in the third part we review the current work in the area of Grid QoS, in the fourth part we introduce our proposed system, in the fifth part we explain its operation in more detail, in the sixth we give our initial experimental results, and in the final two parts we give guidelines for future work and conclude.
2
The Need for Grid QoS
The issue of Grid QoS has been addressed by many authors, from various perspectives [11]. Foster and Kesselman [5] refer to Grid QoS as one of the greatest challenges. They identify it as an end-to-end resource management system, capable of orchestrating multiple elements in order to meet the growing needs of the users, including businesses. A common approach is the categorization of the QoS attributes into two major categories: quantitative and qualitative. Qualitative attributes are usually very hard to measure and refer to elements including user satisfaction and service reliability. On the other hand, quantitative attributes can be measured exactly. They include network attributes (throughput, delay, jitter, bandwidth), processing attributes (exclusive or shared usage, time intervals of dedicated processing) or storage attributes (capacity, bandwidth). Although qualitative attributes are hard to measure, they are very important, especially from the users' perspective. Maintaining Grid QoS can be crucial for some types of applications. Interactive jobs, jobs involving strict deadlines, visualization and rendering are only a few examples where a certain level of QoS is needed to support them. Without it, most of them will produce unusable results. According to the work of Al-Ali et al. [2], several requirements related to QoS are expected to be fulfilled by Grid resource management systems:
– Advance Resource Reservation
– Reservation Policy
– Agreement Protocol
– Security
– Simplicity
– Scalability
QoS has been extensively used in the areas of multimedia and networking. Although the ultimate goal is similar, satisfying user requirements, QoS in grid computing usually requires centralized control [4]. This central system will then be queried to determine the resources to be allocated for the execution of specific tasks. One of the main tasks of this system will be the enforcement of the Service Level Agreements governing the QoS. Grid QoS should be observed at several layers: the networking, middleware and application layers [13]. Maintaining QoS has different requirements and implementation techniques at each of these layers. To identify the possible bottlenecks in a job's lifetime, and thus pinpoint the elements where applying QoS is crucial, we first need to deeply understand the
jobs’ workflow. As a Grid middleware platform for analysis, we have chosen the gLite middleware, used in most European Grid installation. By applying a process mining tool ProM [17] to the data collected by the Logging and Bookkeeping [12] service of the gLite middleware, we obtained an objective overview of the job workflow. Most important conclusions are elaborated in [9]. Two of the used plug-ins leads us to a similar solution. Both Performance sequence diagram and Petri net performance analysis pointed to a relatively high percentage (up to 20% for some users) of the jobs spend between 57 and 65 hours waiting in the queues, before failing. Even the average waiting time for all the jobs is more than 6 hours [8]. This conclusion is in line with other researchers. Khalli et al [7] identify waiting times at batch schedulers as “the most relevant and prevalent source of performance variability”.
3
Current Grid QoS
Since it has been identified as one of the most important aspects of Grid computing, QoS has been addressed by many authors. The most important work in this area is presented in this section. The General-purpose Architecture for Reservation and Allocation (GARA) is the most commonly known framework for supporting QoS in the context of computational Grids. GARA [6] provides programmers with the capability to specify end-to-end QoS requirements. It provides advance reservations, with uniform treatment of various types of resources such as network, computation, and storage. GARA's reservation aims to provide a guarantee that the client or application initiating the reservation will receive a specific QoS from the resource manager. GARA also provides an application programming interface to manipulate reservation requests, with operations such as create, modify, bind, and cancel. GARA uses the Dynamic Soft Real-Time (DSRT) scheduler. The most important drawback of this framework is that it lacks support for web services, meaning that it is not OGSA-compliant. An OGSA based programmable framework must export APIs for managing the allocation of CPU and storage resources to Grid services. GT3 provides a set of management mechanisms enabling job submission on distributed resources, namely the Globus Resource Allocation Manager (GRAM). GRAM defines and implements APIs and protocols allowing Grid clients to securely instantiate jobs, according to the functionality of remote schedulers. Members of the Global Grid Forum (GGF) have identified the issues of concern related to Grid QoS. The GGF Grid Resource Agreement and Allocation Protocol (GRAAP) Working Group (WG) has produced the WS-Agreement specification [15]. The specification consists of three parts which may be used in a composable manner: a schema for specifying an agreement, a schema for specifying an agreement template, and a set of port types and operations for managing the agreement life-cycle, including creation, expiration, and monitoring of agreement states. This agreement defines the required behavior (QoS) of a delivered service, with reference to a particular service consumer.
Grid Quality of Service Management (G-QoSm) [2] is a framework to support QoS management in computational Grids in the context of the Open Grid Service Architecture (OGSA) [16]. G-QoSm consists of three main operational phases: establishment, activity, and termination. During the establishment phase, a client application states the desired service and QoS requirements. G-QoSm then undertakes a service discovery, based on the specified QoS properties, and negotiates an agreement for the client application. During the activity phase additional operations such as QoS monitoring, adaptation, accounting and possibly re-negotiation may take place. During the termination phase the QoS session is ended due to resource reservation expiration, agreement violation, or service completion; resources are then freed for use by other clients. The framework supports these three phases using a number of specialist components.
4
User Level QoS System
Our proposed system of job prioritization is intended to supplement the current gLite based grid infrastructure in as non-invasive a manner as possible. We propose an implementation in which it is optional for sites to deploy it, giving the users some control over the QoS level of the middleware. Using this model of QoS gives users with short running jobs the possibility to have them wait less time in the queues, while maintaining fair resource sharing with other users. To avoid misuse of the prioritization, the system will constantly monitor the actual job performance and compare it with the user reported estimations. Hence, if a user tries to run a long running job and gives a wrong estimate of its running time, he will be penalized in his further usage. Since the system is integrated with the ULMON tool [10], the deployment of the prioritization part on the Computing Elements is optional. The implementation of the system on more CEs can be done gradually. If a job is matched to a CE that does not support prioritization, the user's credits will be returned to his pool for future usage. On the other hand, using the requirements part of the JDL, users can pick the sites that support this system and have their jobs matched only to these sites. In the future, this feature might become part of the GLUE schema, making the selection of the sites much easier. The proposed tool satisfies most of the requirements mentioned earlier, defined by Al-Ali et al., with some modifications.
5
ULQoS Operation
Our QoS system operation is illustrated in Figure 1. The user submits a job to the WMS, using either command line tools or other means (portal, IDE, etc). Then, using the web interface of the User Level MONitoring tool, he gets a list of his jobs, including the newly submitted one. He selects the job that he wants to be prioritized and assigns it the expected running time and some amount of credits. The system, using the amount of credits allocated c and the expected
Fig. 1. Sequence diagram of the operation of the tool
execution time et, calculates the priority level p of the job. The priority is calculated using (1):

p = k_1 \cdot \frac{c}{et} \qquad (1)

To ensure proper operation and fair usage, we have limited the execution time et to 100 minutes and the maximum number of credits per job c to 100. We note that the constants k1, k2 and k3 used in the calculations will be corrected using empirical results. The system then sends a message containing the priority to the computing element that the job has been matched to, so that it can prioritize the job. After the job finishes, the system (again using the data from the ULMON tool), having the actual running and waiting times, calculates the correction of the credits. If the infrastructure obeyed the user's QoS level request, there is no correction. If the infrastructure partially obeyed the prioritization, cret credits are returned to the user. Finally, if the infrastructure ignored the QoS request, all the credits are returned to the user. The correction of the credits is done using (2), where wt is the actual waiting time and ext is the actual running time:

c_{ret} = k_2 \cdot \frac{wt}{\max(et, ext)} \qquad (2)
If the user tried to misuse the system by specifying a short execution time while the actual running time was much longer, then he is penalized by taking cpen credits from his pool. The penalty is calculated using

c_{pen} = k_2 \cdot c \cdot \frac{\max(et, ext)}{et} \qquad (3)
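The three formulas can be summarized in a few lines of code. The sketch below is only illustrative: the constant values are placeholders, since the paper states that the constants are to be corrected using empirical results, and the hard limits follow the 100-minute and 100-credit bounds mentioned above.

```python
K1 = 1.0  # placeholder value; to be tuned empirically
K2 = 1.0  # placeholder value; to be tuned empirically
MAX_ET, MAX_CREDITS = 100, 100

def priority(credits, expected_minutes):
    """Formula (1): priority grows with allocated credits, shrinks with expected runtime."""
    c = min(credits, MAX_CREDITS)
    et = min(expected_minutes, MAX_ET)
    return K1 * c / et

def credits_returned(waiting_minutes, expected_minutes, actual_minutes):
    """Formula (2): credits given back when the requested priority was not fully honored."""
    return K2 * waiting_minutes / max(expected_minutes, actual_minutes)

def penalty(credits, expected_minutes, actual_minutes):
    """Formula (3): credits taken when the user underestimated the running time."""
    return K2 * credits * max(expected_minutes, actual_minutes) / expected_minutes

# A 10-minute job backed by 50 credits gets a higher priority than a 90-minute job with the same stake.
print(priority(50, 10), priority(50, 90))
```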
The idea to prioritize short running jobs comes from the user experience and user perception of the infrastructure. Users with long running jobs rely on the fact that they will get the results over a longer period. On the other hand, users that submit shorter jobs are much more affected by long waiting times. The proposed system also allows the users to give priority to longer running jobs, but for that purpose they will have to allocate more credits to acquire the same priority level. The limited number of available credits per time period also ensures that the resources will be shared fairly, preventing the users who only submit short running jobs from being in a privileged position over the ones that run longer jobs.
6
Usage Results
To verify the proposed QoS mechanism, we have used simulation. The simulation was done using the SimWiz simulator [3]. The input data was generated using the guidelines obtained from the real life L&B data [8].
Fig. 2. Waiting times in the queues with and without prioritization
Fig. 3. Comparison of the ratio of the waiting and execution time
The most important results are shown in Figures 2 and 3. Figure 2 shows the comparison of the waiting times for a mixture of jobs with different priorities. We note here that the average waiting time in both cases is the same. Only the jobs with lower priority, usually longer running ones, will experience longer waiting times, which is expected and is not considered a disadvantage. Users that submit long running jobs count on getting the results back much later. The effects of prioritization are most obvious for short running jobs with high priority. Figure 3 shows that the ratio of the waiting time to the job execution time is much lower when prioritization is used.
7
Future Work
Future versions will include priority aging: the system will correct the priority periodically, using the time spent in the queues wt as an additional parameter. This way, jobs with some initial priority and with some waiting time in the queues will not be suppressed by new jobs with equal or slightly higher priority. The next step is production testing on real life systems. We will start with a small number of sites, using only one WMS and one ULMON instance. Further on, more sites and more WMS instances will be included in the system. To enable easier identification of the sites that will potentially use ULQoS, we also propose future integration into the middleware, meaning that site administrators can advertise their sites that use ULQoS using the GLUE [14] schema, so the users can pick them in their JDLs.
8
Conclusion
Giving the users greater monitoring and QoS control over the Grid middleware will be one of the most important factors toward greater user acceptance of the technology. Also, defining clear QoS possibilities will make the infrastructure more appealing to businesses too. Our proposed system focuses only on the user perspective of Grid QoS, giving some advantages to the users submitting short running jobs, while keeping the average waiting times the same. We expect that with its successful implementation on more sites, the users will get more comfortable using this technology and relying on it for their daily operations. Further research on this matter may produce industry level QoS standards that, when applied, will provide overall QoS mechanisms and metrics, giving the possibility to define clear and sound usage relations between all the participants, including service providers and business and scientific users. Acknowledgement. This paper is based on the work done in the framework of the SEE-GRID-SCI FP7 EC funded project, with partial support from NSFB grant D002–146/2008.
References 1. van der Aalst, W.M.P., Weijters, A.J.M.M. (eds.): Process Mining. Special Issue of Computers in Industry, vol. 53. Elsevier Science Publishers, Amsterdam (2004) 2. Al-Ali, R., von Laszewski, G., Amin, K., Hategan, M., Rana, O., Walker, D., Zaluzec, N.: QoS support for high-performance scientific Grid applications. In: Cluster IEEE International Symposium on Computing and the Grid, CCGrid 2004, pp. 134–143. IEEE Computer Society, Los Alamitos (2004) 3. Cohen, M.: SimWiz — a Self-Learning Tool for Learning Discrete Event System Simulation, Innovative Techniques in Instruction Technology, E-learning, Eassessment, and Education, pp. 370–374. Springer, Netherlands (2008) 4. Czajkowski, K., Fitzgerald, S., Foster, I., Kesselman, C.: Grid Information Services for Distributed Resource Sharing. In: Proceedings of the Tenth IEEE International Symposium on High-Performance Distributed Computing, HPDC-10 (2001) 5. Foster, I., Kesselman, C.: The grid: blueprint for a new computing infrastructure, 2nd edn., pp. 31–32. Morgan Kaufmann, San Francisco (2004) 6. Foster, I., Kesselman, C., Lee, C., Lindell, R., Nahrstedt, K., Roy, A.: A distributed resource management architecture that supports advance reservation and coallocation. In: Proceedings of the International Workshop on Quality of Service, pp. 27–36. IEEE Communications Society (1999) 7. Khalili, O., He, J., Olschanowsky, C., Snavely, A., Casanova, H.: Measuring the Performance and Reliability of Production Computational Grids. In: Proceedings of the 7th IEEE/ACM International Conference on Grid Computing, pp. 293–300. IEEE Computer Society, Los Alamitos (2006) 8. Misev, A.: Performance analysis of GRID middleware using process mining, Technical report for the Scientific Seminar, IPP BAS, August 28 (2007) 9. Misev, A., Atanassov, E.: Performance Analysis of GRID Middleware Using Process Mining. In: Bubak, M., van Albada, G.D., Dongarra, J., Sloot, P.M.A. (eds.) ICCS 2008, Part I. LNCS, vol. 5101, pp. 203–212. Springer, Heidelberg (2008) 10. Misev, A., Atanassov, E.: User Level Monitoring Tool. In: 4th EGEE User Forum, Catanina, Italy, EGEE (2009) 11. Oguz, A., Campbell, A.T., Kounavis, M.E., Liao, R.F.: The Mobiware Toolkit: Programmable Support for Adaptive Mobile Networking. IEEE Personal Communications Magazine 5(4), 32–43 (1998) 12. Sitera, Z., Skrabal, J., Vocu, M.: Advances in the L&B Grid Job Monitoring Service (2009), http://lindir.ics.muni.cz/dg_public/lb2.pdf 13. Soldatos, J., Polymenakos, L., Kormentzas, G.: Programmable Grids Framework Enabling QoS in an OGSA Context. In: Bubak, M., van Albada, G.D., Sloot, P.M.A., Dongarra, J. (eds.) ICCS 2004, Part III. LNCS, vol. 3038, pp. 195–201. Springer, Heidelberg (2004) 14. GLUE Specification v. 2.0 (2009), http://forge.gridforum.org/sf/go/doc15023?nav=1 15. Grid Resource Allocation Agreement Protocol (GRAAP) WG: Web Services Agreement Specification (WS-Agreement) (2007), http://forge.gridforum.org/sf/projects/graap-wg 16. OGSA-WG: Defining the Grid: A Roadmap for OGSA Standards (2005), http://www.ogf.org/documents/GFD.53.pdf 17. ProM tool for Process mining (2009), http://www.processmining.org/
Dictionary Compression and Information Source Correction
Dénes Németh, Máté Lakat, and Imre Szeberényi
Budapest University of Technology, Magyar Tudósok körútja 2, H-1117 Budapest, Hungary
Abstract. This paper introduces a method to compress, store, and search a dictionary of a natural language. The dictionary can be represented as groups of words derived from a stem. We describe how to represent and store a word group in a way that is compact and efficiently searchable. The compression efficiency of the used algorithm highly depends on the quality of the information source. The currently available tools and data sources contain several mistakes, which can be cleaned by the introduced method. The paper also analyzes the efficiency of XML and two binary formats, and proposes two methods, directed acyclic graph transformation and word group regrouping, that can be used to increase efficiency.
1
Introduction
With the development of computer technologies, human interaction with machines has also moved from punching and reading holes on a card to a more user friendly form. For humans, one of the most natural ways of conveying thoughts would be to express one's wishes to the machine in one's mother tongue. Taking into account the current state of technology, we are far from achieving this. The first step would be to correct the inputted data (typed or voice recognized). The purpose of this paper is to show how a dictionary of a language can be created, stored and searched in an effective way. The first section explains very briefly how a group of words derived from a single stem can be compressed and stored. The second section introduces the developed storage formats with their metrics and properties. The third section evaluates the conclusions that can be drawn from the statistical analysis of the compressed data, highlights the problems and proposes enhancements to the compression. The fourth section proposes an algorithm that can be used to determine the words which were incorrectly inserted into a single group of words derived from a stem, while the last section defines the data compression problem which would have to be solved to move the compression to the global level.
2
Word Group Compression
Let W denote the set of words derived from one stem. Let us assume that the size of this set is N, that W = {ω1, ω2, . . . , ωN}, and that it is a set with distinct elements. Let
ωi,j be the i-th word's j-th character, and let |ωi| be the length of the i-th word. Let us define ∗ as the operation of concatenation, so ωi = ωi,1 ∗ ωi,2 ∗ · · · ∗ ωi,|ωi|. Let us define the base word (B) to be the longest character sequence which is contained in all the ωi words. Let us use the word decomposition algorithm described in [2] to create the affixums. The affixums consist of a minimal number of prefix and suffix set pairs (P1, S1), . . . , (Pm, Sm), which fulfill the following requirements:
1. \bigcup_{i=1}^{m} \bigcup_{p \in P_i} \bigcup_{s \in S_i} p ∗ B ∗ s = W
2. There exist no i, j such that, with P'_j = P_j \ {p_i ∈ P_j}, the system (P_1, S_1), . . . , (P'_j, S_j), . . . , (P_m, S_m) still fulfills requirement 1.
3. There exist no i, j such that, with S'_j = S_j \ {s_i ∈ S_j}, the system (P_1, S_1), . . . , (P_j, S'_j), . . . , (P_m, S_m) still fulfills requirement 1.
The affixums and base words for each word group are stored in a database. The database consists of prefixes (Pi), suffixes (Si), and base words (Bi), which are represented with variable branching trees. An affix is a pair of a prefix set and a suffix set. The database also holds the array of affix elements (AFFi) and the array of affixums (AFSi). If one type of object contains another type of element (for example, an affix contains a prefix), only a reference is stored. This means that if any part of two components is exactly the same, only one instance and a reference are stored.
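As a rough illustration of the decomposition, the sketch below finds the base word of a group by brute force as the longest substring contained in every word, and splits each word into a (prefix, suffix) pair around it. It deliberately ignores the grouping of the pairs into the minimal (Pi, Si) sets produced by the algorithm of [2], and the English example words are illustrative only (the paper targets Hungarian).

```python
def base_word(words):
    """Longest character sequence contained in every word of the group (brute force)."""
    shortest = min(words, key=len)
    for length in range(len(shortest), 0, -1):
        for start in range(len(shortest) - length + 1):
            candidate = shortest[start:start + length]
            if all(candidate in w for w in words):
                return candidate
    return ""

def decompose(words):
    """Split every word into a (prefix, suffix) pair around the common base word."""
    b = base_word(words)
    pairs = []
    for w in words:
        i = w.find(b)
        pairs.append((w[:i], w[i + len(b):]))
    return b, pairs

print(decompose(["playing", "replayed", "plays", "replay"]))
# -> ('play', [('', 'ing'), ('re', 'ed'), ('', 's'), ('re', '')])
```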
3
Storage Formats
This section describes the different memory and disk based database storage formats that were developed. The database can be stored in XML, in an indexed binary format, and in several non-indexed binary formats. Each of these formats has its advantages and disadvantages. On the one hand XML is the most compatible format, but on the other hand it consumes significantly more space than the binary formats. The indexed property of the binary format allows much faster pattern matching, but consumes some extra space compared to the non-indexed binary format. In all of these formats some parts of the database are represented by variable branching trees. It is hard to store variable branching trees in computer systems due to the fact that the size and number of connections to children may differ for each node. This makes indexing complicated, so we are using a method which transforms a variable branching tree into a binary tree. The transformation is the following: every node has three connected nodes: its parent, its first child, and its next sibling. Figure 1 illustrates how a variable branching tree can be transformed into an identical binary tree. The horizontal arrows on the right figure represent the sibling relation, while the vertical arrows represent the first child relation. From this point onward let us always use this transformed form of the variable branching tree.
Fig. 1. Variable branching tree transformation
The information to be stored on a node is the data related to the node. If the amount of this information is the same, then there are three types of nodes with respect to the amount of space required to store a single node with its connections: nodes with 0, 1, or 2 connected nodes. This type of tree can easily be represented in XML due to the recursive capabilities of the XML format, so we will only introduce the developed binary indexed and non-indexed formats.
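A compact sketch of the first-child/next-sibling transformation illustrated in Fig. 1, with a simple node class assumed for illustration; the example tree at the bottom is made up and need not match the one in the figure.

```python
class Node:
    """Node of the transformed tree: payload plus first-child and next-sibling links."""
    def __init__(self, data):
        self.data = data
        self.first_child = None    # vertical arrow in Fig. 1
        self.next_sibling = None   # horizontal arrow in Fig. 1

def to_binary(data, children=()):
    """Build the first-child/next-sibling form of a variable branching tree.

    `children` is a sequence of (data, grandchildren) pairs describing the subtrees.
    """
    node = Node(data)
    previous = None
    for child_data, grandchildren in children:
        child = to_binary(child_data, grandchildren)
        if previous is None:
            node.first_child = child
        else:
            previous.next_sibling = child
        previous = child
    return node

# A small example: A has children B, C, D, and D has children E and F.
root = to_binary("A", [("B", []), ("C", []), ("D", [("E", []), ("F", [])])])
print(root.first_child.data, root.first_child.next_sibling.data)   # B C
```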
3.1 Binary Non-indexed Tree Representation
In this format each node is represented in the following way, recursively, starting with the root node. The first bit is used to represent whether the node has a first child, then a second bit is used to represent whether the node has a next sibling, and then the information residing on the node is serialized. The only requirement on this data is that it is stored in a way which is able to reconstruct itself: at the start of the reconstruction phase the binary stream handler is positioned at the first bit of the data on the node, and after serialization the binary stream handler should be placed after the serialized binary data. Each node is followed by its child, and the next sibling is placed after the complete sub-tree of its child, if it exists. In this way no links representing the connections are needed. This format is the most compact, but has the disadvantage of being unable to process the next sibling node without processing the complete sub-tree of the first child. Due to this property this format can mainly be used for data transportation or persistent data serialization, but not as a cost-efficiently searchable database.
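The sketch below illustrates the non-indexed layout as a flat list of bits, using plain dictionaries for nodes. The per-node payload encoding (8 bits per character plus a zero terminator) is an assumption of this example, since the text only requires that the node data can reconstruct itself from the stream.

```python
def serialize(node, bits):
    """Pre-order layout: two flag bits, the node data, the child subtree, then the sibling."""
    child, sibling = node.get("first_child"), node.get("next_sibling")
    bits.append(1 if child else 0)
    bits.append(1 if sibling else 0)
    for ch in node["data"]:                    # assumed payload: 8 bits per character
        bits.extend((ord(ch) >> i) & 1 for i in range(7, -1, -1))
    bits.extend([0] * 8)                       # assumed terminator so the data is self-delimiting
    if child:
        serialize(child, bits)                 # the child subtree directly follows its parent
    if sibling:
        serialize(sibling, bits)               # the sibling comes only after the child's whole subtree
    return bits

# Root "a" with children "b" and "c", written as plain dictionaries:
tree = {"data": "a", "first_child": {"data": "b", "next_sibling": {"data": "c"}}}
print(len(serialize(tree, [])))                # 54 bits for this tiny tree
```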
3.2 Binary Indexed Tree Representation
This format stores the tree in an indexed way, which eliminates the drawback of the non-indexed representation of being unable to process the next sibling of a node without processing the complete sub-tree of the first child, if it exists. This serialization consists of two parts: the nodes in the tree and a header, which stores the metadata of the serialization and the indexes. The header consists of the following items, in which the number in brackets represents the amount
of bits the data element is stored on. In the header let K = log2(sizeof(int)). Every number in the header is represented in the following way: first, on K bits, the length is stored, which specifies on how many bits the number is stored after these K bits. The header consists of the following elements:
– (2 bit) The type of the root
– (K bit) The size of the leaf index size
– (The size of the leaf index size): The number of leaf nodes
– (K bit) The size of the single index size
– (The size of the single index size): The number of single nodes
– (K bit) The size of the double index size
– (The size of the double index size): The number of double nodes
– (K bit) The size of the single offset size
– (The size of the single offset size): The offset of the first single node relative to the end of the header
– (K bit) The size of the double offset size
– (The size of the double offset size): The offset of the first double node relative to the end of the header
The second part of the serialization is the serialized nodes contained in the tree. The nodes are grouped into three categories: nodes with 0, 1 or 2 connected nodes. Each node in all categories begins with 2 bits, which represent the category of the node. The first bit is 1 if it has a first child, and the second bit is 1 if it has a next sibling node. After these two bits the data portion of the node is serialized, which is followed by the node type dependent indexes of the directly descended nodes. Let us define INDEX as the number of bits required to represent any index offset within one single tree in any of the categories.
– Leaf nodes: no index is stored
– Single node:
  • (2): The category of the connected node
  • (1): 1 = the descended node is a first child
  • (INDEX): The index of the descended node (the location of the descended node is typeoffset + nodesize * index bits after the header)
– Double node:
  • (2): The category of the first child
  • (2): The category of the next sibling
  • (INDEX): The index of the first child (the location of the descended node is typeoffset + nodesize * index bits after the header)
  • (INDEX): The index of the next sibling (the location of the descended node is typeoffset + nodesize * index bits after the header)
Figure 2 shows the achieved compression ratio. The x axis represents the number of word groups, while the y axis represents the compression ratio relative to the uncompressed size of the data. The lines in the figures were created by fitting a power law using the least squares method on the different data sets. These lines are nearly parallel, which means that the efficiencies of the developed storage formats differ only by a constant factor.
Fig. 2. The compression ratio (compression ratio relative to the uncompressed size vs. number of base words, log-log scale; curves: uncompressed XML.raw, bin.indexed, bin.noindex, XML.bz2, bin.indexed.bz2, bin.noindex.bz2)
Fig. 3. The size of the database (size of the DB in MB vs. number of base words, log-log scale; curves: bin.indexed, bin.noindex, bz2(xml), bz2(bin.noindex), bz2(bin.indexed))
Figure 3 shows the size of the database for the different storage formats. If bzip2 is applied to the different forms of the database, a higher compression ratio can be achieved. Comparing each format with its bzip2-compressed counterpart, the biggest difference is between the XML and its compressed form. The XML format has no indexing capability, so the bzip2-compressed XML should be compared with the bzip2-compressed non-indexed binary format, which provides better compression. Comparing the compression ratio of the indexed binary format with that of its bzip2-compressed form shows that bzip2 is not able to compress the indexed data as well as it could the non-indexed data.
4
Statistical Analysis
This section elaborates on the conclusions that can be drawn from the statistical analysis of the data in the database. Let us call a word group valid if all its words share at least one common letter. During the compression 50000 word groups were processed and only 39000 were valid. Word groups can become invalid if the stem is mutilated through the derivation process or through the faulty creation of the word group. The first is a natural, but very rare event; in Hungarian it mainly happens when the stem consists of one or two letters. This concerns less than 0.1 percent of the words; however, in the compression the share of faulty word groups was around 17 percent. They were mainly caused by errors in the dictionary and by bugs in the hunspell and aspell spellchecker programs [3,4]. These errors may cause words derived from different stems to be grouped into the same group.
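A hedged sketch of the validity test just described is given below: a group is valid if at least one letter occurs in every one of its words. The function name and types are ours, for illustration only.

#include <set>
#include <string>
#include <vector>

bool isValidGroup(const std::vector<std::string>& group) {
    if (group.empty()) return false;
    std::set<char> common(group[0].begin(), group[0].end());
    for (const std::string& w : group) {
        std::set<char> letters(w.begin(), w.end());
        std::set<char> kept;
        for (char c : common)
            if (letters.count(c)) kept.insert(c);   // keep only letters present in w as well
        common.swap(kept);
    }
    return !common.empty();
}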
4.1 Word Group Cutting
This section proposes an algorithm that can be used to divide a word group into smaller groups. These smaller groups are intended to couple words for which the word group compression provides a better compression ratio. These subgroups are then divided into two categories: 1) groups of correct words; 2) words which do not belong to the group (faulty stem determination) or misspelled words (accepted as correct by the spellchecker). The type 2 groups can be manually analyzed or automatically dropped from the database. The algorithm that can be used to determine these types is the following. Let the original word group be G = {ω1, ω2, . . . , ωN}. For every word ωi let us select every possible subword S = ωi,j ωi,j+1 . . . ωi,j+l. Let M be the number of words which contain this subword S. Let the table T consist of the different (l, M) pairs, and let us store one index i for each (l, M) pair. For every subword S a line is added to the table only if all lines of the table remain distinct after the addition. This table contains at most |ωi| · N entries, and can be used to select the best base word.
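A hedged sketch of how such a table could be built is shown below; the names and the brute-force counting are ours and serve only to illustrate the idea.

#include <map>
#include <string>
#include <vector>

struct Entry { std::size_t l, M, i; };     // subword length, match count, word index

std::vector<Entry> buildTable(const std::vector<std::string>& group) {
    std::map<std::pair<std::size_t, std::size_t>, std::size_t> seen;  // (l, M) -> i
    for (std::size_t i = 0; i < group.size(); ++i) {
        const std::string& w = group[i];
        for (std::size_t j = 0; j < w.size(); ++j)
            for (std::size_t l = 1; j + l <= w.size(); ++l) {
                std::string S = w.substr(j, l);
                std::size_t M = 0;
                for (const std::string& other : group)
                    if (other.find(S) != std::string::npos) ++M;
                seen.emplace(std::make_pair(l, M), i);   // keep the first i for each pair
            }
    }
    std::vector<Entry> table;
    for (const auto& kv : seen)
        table.push_back({kv.first.first, kv.first.second, kv.second});
    return table;
}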
4.2 DAG Transformation
We analyzed why bzip2 was able to further compress the non-indexed binary database, and we came to the conclusion that this is mainly caused by two properties of the database. First, the variable branching tree representing a prefix or suffix has subtrees that are alike; without indexes, bzip2 recognizes these as identical character sequences and can compress them. Second, since prefixes and suffixes are not connected to any further elements, each element can be compressed independently. Neither our binary representation nor the pattern matching functionality depends on the parent relation between tree elements. Without the requirement of the parent relation, the tree can easily be transformed into a DAG [1], which enhances the compression.
The non-indexed tree representation requires the ancestor nodes of an element to always be present, so among our binary formats only the indexed version is capable of handling DAGs. In this case no modification of the storage format is needed.
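One common way to obtain such a DAG is to merge nodes that have the same payload and point to the same (already merged) first child and next sibling, for example by hashing a canonical description of each node bottom-up. The sketch below illustrates this idea; the Node type and the string-based canonical form are our assumptions, not the authors' method.

#include <string>
#include <unordered_map>

struct Node {
    std::string payload;
    Node* firstChild  = nullptr;
    Node* nextSibling = nullptr;
};

// Returns a unique representative for nodes with identical payload,
// first-child subtree and following-sibling chain.
Node* share(Node* n, std::unordered_map<std::string, Node*>& pool,
            std::string& key) {
    if (!n) { key = "."; return nullptr; }
    std::string childKey, siblingKey;
    n->firstChild  = share(n->firstChild,  pool, childKey);
    n->nextSibling = share(n->nextSibling, pool, siblingKey);
    key = "(" + n->payload + "|" + childKey + "|" + siblingKey + ")";
    auto it = pool.find(key);
    if (it != pool.end()) return it->second;   // identical node already stored
    pool.emplace(key, n);
    return n;
}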
5
Towards Global Optimality
The algorithm described in the article [2] provides an efficient solution for compressing a single group of words. If the affix groups created from the groups of words are regrouped, sacrificing local optimality within an affix group (prefix or suffix group), a better overall compression can be achieved. This section describes the mathematical problem that has to be solved to provide a better compression.
Let M = {ω1; ω2; . . . ; ωn} be the set of all affix words (prefix or suffix), let ωi be the i-th affix word, and let Gi denote the word set of the i-th affix. Let us assume that we have m affix sets, so G1 ⊆ M, G2 ⊆ M, . . . , Gm ⊆ M. Let k be the storage cost of a single word ωi, and let ci be the number of references to the i-th affix. Let us find a minimal number of sets G′1 ⊆ M, G′2 ⊆ M, . . . , G′m ⊆ M, together with the n × m matrix T and the m × m matrix S, which fulfil the following requirements:
– T[i, j] = 1 if ωi ∈ G′j, and T[i, j] = 0 if ωi ∉ G′j
– S[i, j] = 1 if G′j ⊆ Gi, and S[i, j] = 0 otherwise
– ∀i ∈ [1, m]: Gi = ∪_{j: S[i,j]=1} G′j
– k · Σ_{∀i,j} T[i, j] + Σ_{i=1...m} ci · Σ_{j=1...m} S[i, j] is minimal

6
Conclusions
The single word group compression algorithm can only be used efficiently if the information source to be compressed contains very few errors. The currently available spellchecker tools and databases unfortunately cannot produce high-quality groups of words derived from a single stem. As a consequence, the data has to be cleaned beforehand, and only those algorithms which take multiple word groups into account simultaneously will be able to compress the data efficiently. The indexed binary form of the database is efficiently searchable and makes it possible to use DAGs in place of the variable branching trees. On standard computers the database easily fits into memory, which makes searching most efficient. However, if the database needs to be stored and used on systems with very limited memory, it can be searched by partially importing the required parts of the database.
Acknowledgments Part of this work was funded by the Pázmány Péter program (RET-06/2005) of the National Office for Research and Technology, and the authors would like to thank the Enabling Grids for E-sciencE (EGEE) project (INFSO-RI-222667).
References
1. Buneman, P., Grohe, M., Koch, C.: Path queries on compressed XML. In: Freytag, J.C., et al. (eds.) Proc. VLDB 2003, pp. 141–152. Morgan Kaufmann, San Francisco (2003)
2. Németh, D.: Parallel dictionary compression using grid technologies. In: Lirkov, I., Margenov, S., Waśniewski, J. (eds.) LSSC 2007. LNCS, vol. 4818, pp. 492–499. Springer, Heidelberg (2008)
3. Németh, L., Trón, V., Halácsy, P., Kornai, A., Rung, A., Szakadát, I.: Leveraging the open-source ispell codebase for minority language analysis. In: Proceedings of SALTMIL 2004, European Language Resources Association, pp. 56–59 (2004)
4. Trón, V., Gyepesi, G., Halácsy, P., Kornai, A., Németh, L., Varga, D.: Hunmorph: open source word analysis. In: Proceedings of the ACL Workshop on Software, Association for Computational Linguistics, Ann Arbor, Michigan, pp. 77–85 (2005)
A Parallelization of Finite Volume Method for Calculation of Gas Microflows by Domain Decomposition Methods
Kiril S. Shterev and Stefan K. Stefanov
Institute of Mechanics, Bulgarian Academy of Sciences, Acad. G. Bonchev, Block 4, Sofia 1113, Bulgaria
{kshterev,stefanov}@imbm.bas.bg
http://www.imbm.bas.bg
Abstract. In this paper a parallel organization of a finite volume algorithm for the calculation of two-dimensional compressible, viscous gas flows is presented. The problem belongs to the newly emerging area of micro gas flows, taking place in Micro-Electro-Mechanical Systems (MEMS). Technically, the parallel algorithm is organized using standard MPI with non-blocking communications, and the latency of the communications is overlapped with useful work. The speedup was estimated on two clusters. The first cluster uses Myrinet interconnection (BG04-ACAD). The second uses conventional cards for interconnection (BG03-NGCC). Both clusters are part of the GRID infrastructure of the European Research Area (ERA). An ideal speedup is obtained on BG04-ACAD for numbers of processors up to 20 CPUs. The speedup obtained on BG03-NGCC is also very good, although it depends on the mesh refinement. Keywords: Finite volume method, gas microflows, parallel algorithms, GRID.
1
Introduction
The computational analysis of fluid dynamics problems depends strongly on the computational resources [11]. The computational demands are related mainly to the CPU performance and the memory size. The example considered in this paper, the calculation of a two-dimensional gas flow past a particle moving at supersonic speed in a planar microchannel, is a typical problem demonstrating this kind of computational requirements. We consider a supersonic flow with Mach number equal to 2.43. The problem is described in detail in [9]. The supersonic speed leads to the existence of a bow shock wave, which reflects from the channel walls. As a result, a complex picture of interacting reflected shock waves is obtained past the particle (see Fig. 1). The shock waves have significant gradients of velocity, pressure, and temperature. Thus, an accurate calculation of the flow requires a very fine or adaptive grid to be used. Calculations of this problem have been carried out for a set of gradually refined meshes. Finally, a mesh with 8000x1600 cells was found to give stable and accurate enough results.
Fig. 1. Horizontal velocity (upper part) and pressure (lower part) fields calculated by parallel FVM [9]
It is easy to predict that a calculation of this problem on a single PC would take years. Obviously, a parallel organization of the computational process is required in order to overcome this difficulty. In the next section we present how this can be accomplished.
2
Parallel Organization
Parallel organizations of FVM algorithms are presented in many papers (see, for example, [12,6]). We present here the main points of the parallel organization of the finite volume method (FVM) [8,11] for the calculation of the considered gas flow. The corresponding serial algorithm is presented in detail in [10]; the same notations are used here. A domain decomposition (data partitioning) approach is used. The idea is to use a single instruction stream and multiple data streams (SIMD) in accordance with Flynn's taxonomy. The realization was accomplished using standard MPI (Message Passing Interface) [3] instructions. This approach makes it possible to run the code on symmetric multiprocessing (SMP) systems and computer clusters. Most modern supercomputers are now highly tuned computer clusters using commodity processors combined with custom interconnects [5]. Here the sent/received packages are of small size: the size of a package is less than 50 KB, which is negligible for a bandwidth of 1 Gbps. Therefore the bottleneck of the communications is the latency. There are many communications in one iteration of the FVM. To reach a high parallel efficiency, non-blocking communications are used and the latency is overlapped with calculations. Figure 2 presents an example of decomposition of a domain into two processes. The halo region for process 0 contains one column of variables, some of them positioned in the centre and others on the face of the cells. The halo region for process 1 contains two columns of variables, placed in the centre and on the face
Fig. 2. Domain decomposition of FVM computational domain
of the cells. When halo regions are set in this way, the minimum number of messages sent/received between neighbours is reached. The steps in the parallel FVM are the same as in the serial FVM [10].
Parallel FVM
1. Initial guess of u, v, p, T and calculation of ρ, using the equation of state.
Start loop 1
• 2. Initial conditions for the current time step: t = t + Δt, u^(n-1) = u, v^(n-1) = v, p^(n-1) = p, T^(n-1) = T, ρ^(n-1) = ρ.
• Start loop 2 (current time step calculations):
•• 3. Evaluate flux and diffusion terms: F^x, F^y, D^ux, D^uy, D^vx, D^vy, D^Tx, D^Ty.
•• 4.1. Evaluate pseudo velocity û, coefficient d^u and coefficients a^px, b^px of the pressure equation in the boundary region.
•• 4.2. Initiate sending of boundary values of a^px and b^px to neighbours and initiate receipt of halo values of a^px and b^px from neighbours.
•• 4.3. Calculate boundary values of pseudo velocity v̂, coefficient d^v and coefficients a^py, b^py for the pressure equation.
•• 4.4. Initiate sending of boundary values of a^py and b^py to neighbours and initiate receipt of halo values of a^py and b^py from neighbours.
•• 4.5. Calculate non-boundary values of pseudo velocity û, coefficient d^u and coefficients a^px and b^px for the pressure equation.
•• 4.6. Calculate non-boundary values of pseudo velocity v̂, coefficient d^v and coefficients a^py and b^py for the pressure equation.
•• 4.7. Wait for completion of sending of boundary values of a^px, b^px, a^py, and b^py and wait for completion of receipt of halo values of a^px, b^px, a^py, and b^py.
•• Start loop 3:
••• 5.1. Calculate temperature and pressure in the boundary cells, using the energy and pressure equations.
••• 5.2. Initiate sending of the boundary values of p and T to neighbours and initiate receipt of halo values of p and T from the neighbours.
••• 5.3. Calculate temperature and pressure in the non-boundary cells.
••• 5.4. Wait for completion of the sending of boundary values of p and T and wait for completion of receipt of halo values of p and T.
••• 5.5. Initiate sending of the maximum residuals for the equations of pressure and energy from each process to process 0 and initiate receipt of the maximum residuals for the equations of pressure and energy from all processes in process 0.
••• 5.6. Initialize array pold for the next iteration in loop 3: pold = p.
••• 5.7. Wait for completion of sending of the maximum residuals for the equations of pressure and energy and wait for completion of receipt of the maximum residuals for the equations of pressure and energy in process 0.
••• 5.8. Loop 3 continues until the convergence criteria are satisfied or the maximum number of iterations is reached. Initiate sending and receipt of the information to stop or continue loop 3.
••• 5.9. Initialize array Told for the next iteration in loop 3: Told = T.
••• 5.10. Wait for completion of sending and receipt of the information to stop or continue loop 3.
•• End loop 3. At least 2 iterations have to be performed to ensure convergence. The detailed analysis of a two-dimensional unsteady pressure-driven channel flow [10] showed that 2 iterations were enough to achieve good accuracy.
•• 6.1. Calculate u in the boundary cells.
•• 6.2. Initiate sending of boundary values of u to neighbours and initiate receipt of halo values of u from neighbours.
•• 6.3. Calculate v in the boundary cells.
•• 6.4. Initiate sending of boundary values of v to neighbours and initiate receipt of halo values of v from neighbours.
•• 6.5. Calculate u in the non-boundary cells.
•• 6.6. Initiate sending of the maximum residuals for u from each process to process 0 and initiate receipt of the maximum residuals for u from all processes in process 0.
•• 6.7. Calculate v in the non-boundary cells.
•• 6.8. Initiate sending of the maximum residuals for v from each process to process 0 and initiate receipt of the maximum residuals for v from all processes in process 0.
•• 6.9. Calculate the array containing information for √T. (This is useful as √T is often used in the algorithm.)
•• 6.10. Wait for completion of sending of boundary values of u and v and wait for completion of receipt of halo values of u and v.
•• 6.11. Wait for completion of sending of the maximum residuals for u and v and wait for completion of receipt of the maximum residuals for u and v in process 0.
•• 6.12. Check the conditions to stop loop 2.
•• 6.13. Initiate sending of the information to continue or to stop loop 2 from process 0 to all processes and initiate receipt of the decision to stop or continue loop 2 in all processes.
•• 6.14. Calculate density ρ, using the equation of state and the p and T calculated in loop 3.
•• 6.15. Wait for completion of sending and receipt of the information to stop or continue loop 2.
• End loop 2.
End loop 1: The calculation is stopped when the end time or the stationary state is reached. If these criteria are not reached, go to step 2.
Note: The equations for p and T in loop 3 are solved using the Jacobi iterative method (see [7]).
The organization of sending and receiving messages is done fully manually in order to make the use of non-blocking communications possible. To this aim a structure DomainDecomposition is created. It contains information for every process. The following functions are used for the calculation of the tags and request indices of the messages:
• CalculateTagForSendVariableToNeighbour and CalculateIndexRequestForSendOrRecvVariableNeighbour calculate the tag and the request index of each message sent to (or received from) a neighbour. Both functions receive a unique index for each variable (i_variable) that must be sent (for example, the index for pressure is 1, for temperature 2, etc.) and the index of the neighbour (i_neighbour). The variable zero_tag_for_this_IJ is the maximum tag of all processes to the left (Fig. 2).
• CalculateTagForSendVariableToIJ0 and CalculateIndexRequestForSendVariableToIJ0 calculate the tags and request indices of the messages sent to process 0. Both functions receive i_variable and use the following quantities: N_exchanged_variables_with_neighbours, the number of all variables exchanged with neighbours; N_neighbours, the number of neighbours (4 in the 2D case); and N_actions, which is equal to 2 (two possible actions: send and receive).
• CalculateTagForSendVariableFromIJ0 and CalculateIndexRequestForRecvVariableFromIJ0 calculate the tags and request indices of the messages sent from process 0 to the other processes, where N_send_variables_to_IJ0 is the number of variables sent to process 0.
A part of the source code containing the functions for calculation of the tags and request indices of messages:
// Tag and request-index helpers for messages exchanged with neighbours.
inline int CalculateTagForSendVariableToNeighbour(
    const int& i_variable, const int& i_neighbour) {
  return (zero_tag_for_this_IJ + i_variable * N_neighbours + i_neighbour);
};

inline int CalculateIndexRequestForSendOrRecvVariableNeighbour(
    const int& i_variable, const int& i_neighbour,
    const int& send_or_recv_data) {
  return (i_variable * N_actions * N_neighbours +
          send_or_recv_data * N_neighbours + i_neighbour);
};

// Messages gathered in process 0 (residuals, stop/continue decisions).
inline int CalculateTagForSendVariableToIJ0(const int& i_variable) {
  return (zero_tag_for_this_IJ +
          N_exchanged_variables_with_neighbours * N_neighbours + i_variable);
};

inline int CalculateIndexRequestForSendVariableToIJ0(const int& i_variable) {
  return (N_exchanged_variables_with_neighbours * N_actions * N_neighbours +
          i_variable);
};

// Messages sent from process 0 back to the other processes.
inline int CalculateTagForSendVariableFromIJ0(const int& i_variable) {
  return (zero_tag_for_this_IJ +
          N_exchanged_variables_with_neighbours * N_neighbours +
          N_send_variables_to_IJ0 + i_variable);
};

inline int CalculateIndexRequestForRecvVariableFromIJ0(const int& i_variable) {
  return (N_exchanged_variables_with_neighbours * N_actions * N_neighbours +
          N_send_variables_to_IJ0 + i_variable);
};
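For illustration, the following hedged sketch shows how such non-blocking halo exchanges can be overlapped with the non-boundary computations. The buffer names, tags and the restriction to left/right neighbours are our simplifications, not the authors' code.

#include <mpi.h>
#include <vector>

void exchange_and_compute(std::vector<double>& send_left,
                          std::vector<double>& send_right,
                          std::vector<double>& halo_left,
                          std::vector<double>& halo_right,
                          int left, int right, MPI_Comm comm) {
    MPI_Request req[4];
    // post receives and sends for the boundary columns
    MPI_Irecv(halo_left.data(),  static_cast<int>(halo_left.size()),
              MPI_DOUBLE, left,  0, comm, &req[0]);
    MPI_Irecv(halo_right.data(), static_cast<int>(halo_right.size()),
              MPI_DOUBLE, right, 1, comm, &req[1]);
    MPI_Isend(send_left.data(),  static_cast<int>(send_left.size()),
              MPI_DOUBLE, left,  1, comm, &req[2]);
    MPI_Isend(send_right.data(), static_cast<int>(send_right.size()),
              MPI_DOUBLE, right, 0, comm, &req[3]);

    // ... update the non-boundary (interior) cells here, which does not
    //     need the halo values, so the latency is hidden ...

    MPI_Waitall(4, req, MPI_STATUSES_IGNORE);
    // ... now the boundary-dependent part of the update can proceed ...
}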
3
Speedup Analysis
The speedup of the parallel FVM for calculations of gas microflows was estimated on two clusters. The first cluster uses Myrinet [4] interconnection (BG04-ACAD). The second cluster uses conventional cards for interconnection (BG03-NGCC). The characteristics of the clusters are shown in Table 1. They are part of the GRID infrastructure [2] of the European Research Area (ERA). The tests were submitted using a user certificate [1]. The speedup is calculated as Sn = Ts/Tpar and the efficiency as En = Sn/n, where n is the number of cores (CPUs), Sn is the speedup when n cores are used, Ts is the time for a sequential run of the program (on a single core), Tpar is the run time of the code on n cores, and En is the efficiency when n cores are used. The test problem mentioned in the introduction is a calculation of a supersonic flow past a small square-shaped particle moving in a microchannel, Fig. 1. The test calculations were carried out on two meshes: 500x100 and 1000x200 cells. The calculation of the case with 500x100 cells was completed in around 2.5 hours on a single core.
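The speedup and efficiency definitions above can be written as a small helper, shown purely for clarity (the names are ours, not part of the solver):

struct Scaling { double speedup; double efficiency; };

Scaling scaling(double t_serial, double t_parallel, int n_cores) {
    Scaling s;
    s.speedup    = t_serial / t_parallel;   // Sn = Ts / Tpar
    s.efficiency = s.speedup / n_cores;     // En = Sn / n
    return s;
}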
Table 1. The characteristics of the clusters BG04-ACAD and BG03-NGCC

Cluster:            BG04-ACAD                       BG03-NGCC
Numbers of cores:   80                              200
CPU model:          AMD Opteron(tm) Processor 250   Intel(R) Xeon(R) CPU E5430
CPU GHz:            2.4 GHz                         2.66 GHz
Cache size:         1024 KB                         6144 KB
RAM per core:       2 GB                            2 GB
Interconnection:    Myrinet                         Conventional cards

Fig. 3. The speedup of the clusters BG04-ACAD and BG03-NGCC, for meshes: a) 500x100 cells and b) 1000x200 cells
The results concerning the speedup are shown in Fig. 3. The speedup for BG04-ACAD was calculated by taking the minimal run time obtained from 6 runs of each case. The speedup for BG03-NGCC was calculated by taking the minimal time obtained from 11 runs of each case. The tests on BG03-NGCC needed more runs than those performed on BG04-ACAD, because there was a larger variation in the run times obtained from BG03-NGCC due to the use of conventional cards for interconnection. A super-linear speedup was reached on the cluster BG04-ACAD for the case with the 500x100-cell mesh, Fig. 3 a). The reason for this is the cache effect. The program, run on a single core, needs around 14.5 MB of memory. Increasing the number of processes decreases the memory used per process and, in the case with 20 processes, all data can be placed in the cache memory of the processors (Table 1). The case with 4 times more cells (1000x200) needs around 58 MB on a single core, Fig. 3 b); hence the available cache memory is not enough to hold all data and a super-linear speedup cannot be reached. The speedup for BG03-NGCC is very good, Fig. 3 a), even for the small problem on the 500x100-cell mesh. For the finer mesh (1000x200 cells) the speedup, Fig. 3 b), is excellent, even though BG03-NGCC uses conventional cards
for interconnection. The speedup line is not perfectly smooth (Fig. 3) because of slight interference with other runs performed simultaneously by other users on the clusters.
4
Conclusions
The non-blocking communications make it possible to reach an excellent speedup. A super-linear speedup was reached on the cluster with Myrinet interconnection (BG04-ACAD). A very good speedup was reached on the cluster BG03-NGCC for the 500x100-cell mesh, and an excellent speedup for the 1000x200-cell mesh. These results show that a good parallel organization makes the algorithm efficient even when run on clusters with conventional cards for interconnection. Acknowledgments. The authors appreciate the financial support of NSFB DO 02-115, Module 1. The author K. Shterev also appreciates the financial support of THE EUROPEAN SOCIAL FUND, OPERATIONAL PROGRAMME HUMAN RESOURCES DEVELOPMENT, GRANT No: BG051PO001/07/3.3-0255/17.06.2008. The results were obtained using the grid sites of the South East European Regional Operating Centre of the EGEE project (http://public.eu-egee.org).
References
1. Bulgarian Academic Certification Authority (BG.ACAD|CA), http://ca.acad.bg
2. Enabling Grids for E-sciencE (EGEE), http://www.eu-egee.org
3. MPI Forum, http://www.mpi-forum.org
4. Myricom Inc., http://www.myri.com
5. Supercomputers, www.top500.org
6. Bui, T.T.: A parallel finite-volume algorithm for large-eddy simulation of turbulent flows. Computers & Fluids 29, 877–915 (2000)
7. Em, K.G., Kirby II, R.M.: Parallel Scientific Computing in C++ and MPI. A Seamless Approach to Parallel Algorithms and their Implementation. Cambridge University Press, Cambridge (2003)
8. Patankar, S.V.: Numerical heat transfer and fluid flow. Hemisphere Publishing Corporation, New York (1980)
9. Shterev, K.S., Stefanov, S.K.: Finite Volume Calculations of Gaseous Slip Flow Past a Confined Square in a Two-Dimensional Microchannel. In: Proceedings of the 1st European Conference on Microfluidics, La Societe Hydrotechnique de France (2008)
10. Shterev, K.S., Stefanov, S.K.: Pressure based finite volume method for calculation of compressible viscous gas flows. Journal of Computational Physics 229, 461–480 (2010), doi:10.1016/j.jcp.2009.09.042
11. Versteeg, H.K., Malalasekra, W.: An Introduction to Computational Fluid Dynamics: The Finite Volume Method, 2nd edn. Prentice Hall, Pearson (2007)
12. Wang, P., Ferraro, R.D.: Parallel multigrid finite volume computation of three-dimensional thermal convection. Computers & Mathematics with Applications 37, 49–60 (1999)
Background Pollution Forecast over Bulgaria
D. Syrakov1, K. Ganev2, M. Prodanova1, N. Miloshev2, G. Jordanov2, E. Katragkou3, D. Melas3, A. Poupkou3, and K. Markakis3
1 National Institute of Meteorology and Hydrology, Bulgarian Academy of Sciences, Sofia, Bulgaria
[email protected]
2 Geophysical Institute, Bulgarian Academy of Sciences, Sofia, Bulgaria
3 Laboratory of Atmospheric Physics, Aristotle University of Thessaloniki, Greece
Abstract. Both the current level of air pollution studies and the social needs in the country are mature enough for creating a Bulgarian Chemical Weather Forecasting and Information System. The system is foreseen to provide in real time a forecast of the spatial/temporal air quality behaviour for the country and (with higher resolution) for selected sub-regions and cities, on the basis of the weather forecast and the national emission inventory. The country-scale part of the system is designed. It is based on the US EPA Models-3 system. The meteorological input is the ALADIN output, ALADIN being the national numerical weather forecast tool. The boundary conditions are prepared by a similar system running operationally at the Aristotle University of Thessaloniki, Greece (AUTH). A special interface is created to retrieve the AUTH-system forecasts in real time, producing boundary files uploaded to a dedicated server in Bulgaria. In the paper, a detailed description of the system is given together with first results of its testing.
1
Introduction
Air quality forecasting is an extremely challenging scientific problem, which has recently emerged as an important priority in many urbanized and industrialized countries due to the increasing harmful effect on health and environment caused by airborne pollution constituents like ozone and particulate matter (PM). One of the requirements of the Ozone Directive 2002/3/EC [5] is the availability of a trustworthy air pollution forecast system. The goals of reliable air quality forecasts are the efficient control and protection of population exposure as well as possible emission abatement measures. In recent years the concept of "chemical weather" has arisen, and in many countries respective forecast systems are being developed along with the usual meteorological weather forecasts (see, for instance, [7,9,10,11,12], and many others). As air pollution crosses national borders, it would be cost-effective and beneficial for citizens, society and decision-makers if national chemical weather forecast and information systems were networked across Europe. For this
purpose, a new COST Action ES0602, "Towards a European Network on Chemical Weather Forecasting and Information Systems (ENCWF)" [15], was launched, aiming at providing a forum for harmonizing, standardizing and benchmarking approaches and practices in data exchange for real-time air quality forecast information systems in Europe. Bulgaria was one of the first countries that joined COST Action ES0602, but so far it has not had its own chemical weather forecast system, although air pollution causes serious damage to human health and to buildings. The harmful effects of the background pollution on natural and agricultural ecosystems are noticeable, too. All this motivated a project, financially supported by the National Science Fund, aiming at creating the Bulgarian Chemical Weather Forecast and Information System (BGCW), intended to provide timely, informative and reliable forecast products tailored to the needs of various users and decision-makers. The system is planned to have a nesting structure, starting from the region of Bulgaria and the nearest territories as a whole (background pollution) and zooming to smaller areas of interest. In the paper, the structure of the country part of the system is described. Some preliminary results concerning model validation and verification are presented as well.
2
Operational Design of BGCW
The country (background) part of BGCW is designed to fit the real-time constraints and to deliver forecasts twice a day (00 and 12 UTC) for the next 48 hours. The US EPA Models-3 air quality modelling system is used here, consisting of: (i) CMAQ, the Community Multi-scale Air Quality model [16], being the chemical-transport model (CTM) of the system; (ii) MM5, the 5th generation PSU/NCAR Meso-meteorological Model [14], used as meteorological pre-processor to CMAQ; and (iii) SMOKE, the Sparse Matrix Operator Kernel Emissions Modelling System [17], being the emission pre-processor to CMAQ. Meteorological forecasts are obtained at the main synoptic terms using the MM5 mesoscale meteorological model forced by the ALADIN output, ALADIN being the national weather forecast tool. Three nested modelling domains are set: the ALADIN domain is the outer and biggest one, the MM5 domain is nested in it, and the CMAQ domain, covering Bulgaria, is nested in the MM5 one. The MM5 (respectively CMAQ) domain resolution is 10 km. The MM5 vertical structure consists of 23 σ-levels with varying thickness, extending up to 100 hPa height. Proper physical options are set. MM5 starts its calculation 12 hours earlier for spin-up reasons (60-hour run). In Fig. 1, the data flow diagram for one 48-hour cycle is displayed. In the boxes, together with the names of the system's elements, the format of the respective output is given. The white boxes present the Models-3 components, the dark gray ones the newly created interface modules (Fortran codes). The light grey boxes present the data input to the system. First of all, this is the meteorological forecast created by ALADIN, whose calculations drive MM5.
Fig. 1. 48-hour data flow of CW forecast calculations
MCIP, the Meteorology-Chemistry Interface Processor, is part of CMAQ; together with the needed meteorological parameters it prepares some other data (fluxes, dry deposition velocities, etc.) to be used by CMAQ and SMOKE. The Area Source (AS) gridded inventory feeds AEmis (the AS emission processor). The Large Point Source (LPS) inventory is input to SMOKE, together with the ambient meteorological data, so as to produce the LPS emissions (LPS processor). The met-data together with the gridded land-use are used by SMOKE to produce the biogenic emissions (BgS processor). SMOKE is used once more to merge these 3 emission files into a model-ready emission input. Apart from these two main inputs (meteorology and emissions), CMAQ needs initial and boundary conditions. The initial conditions are taken from the previous CMAQ run. The case of the boundary conditions (BC) is much more complicated. BCs are of great importance for small regions like Bulgaria; for such small areas, multiple nesting is usually applied. In BGCW, the boundary conditions are provided by the chemical weather forecast system running operationally at the Aristotle University of Thessaloniki (AUTH), Greece [9,10]. The AUTH system exploits the nested domain approach: there, the air quality forecast is carried out for Europe (50 km spatial resolution), the Balkans (10 km), and Athens (2 km) using a modelling system which consists of the prognostic meteorological model MM5 and the photochemical air quality model CAMx [4]. Over the European region MM5 is forced by the AVN/NCEP global forecast with default (climatic) boundary conditions. The AUTH system is run once a day, producing a 3-day pollution forecast. The boundary conditions for CMAQ are prepared in two steps. First, the CAMx data is interpolated to the CMAQ boundary points, an operational procedure that takes place at AUTH, the results being uploaded via the Internet to a dedicated server in Sofia. Here, this data is processed so as to produce a 3-day CMAQ-ready boundary condition file. The last box in Fig. 1 marks activities that are not implemented yet. So far, the main efforts concern system verification and validation against measurement data.
3
Emission Modelling
CMAQ demands its emission input in a specific format reflecting the time evolution of all pollutants accounted for by the chemical mechanism used. The emission inventory is usually made on an annual basis for, as a rule, large territories (municipalities, counties, countries, etc.), and many pollutants are estimated as groups like VOC and PM2.5. In preparing the CMAQ emission file, a number of specific processing steps must be performed. First, all this information must be gridded. Secondly, time variation profiles must be superimposed on these annual values to account for seasonal, weekly, and daily variations. Finally, the VOC and PM2.5 emission estimates must be split into more precisely defined compounds (speciation) in order for their chemical transformations and deposition to be modelled properly. Obviously, models are needed as emission pre-processors to CTMs. Such a component in the Models-3 system is SMOKE. Unfortunately, it is highly adapted to the US conditions and is only partly used here, namely for calculating the LPS and BgS emissions and for merging the three types of emission files into a unique CMAQ input. The emission input to BGCW is foreseen to be the Bulgarian National Emission Inventory for 2007. It is elaborated for AS and LPS separately, distributed over 10 SNAPs (Selected Nomenclature for Air Pollution) classifying pollution sources according to the processes leading to harmful material release to the atmosphere [3]. The temporal allocation is made on the basis of daily, weekly, and monthly profiles provided by Builtjes et al. [1]. The speciation procedure depends on the Chemical Mechanism (CM) used. Here, the Carbon Bond mechanism, version 4 (CB4), is exploited [6], upgraded with the ISORROPIA aerosol model [8]. It requires splitting VOC into 10 lumped pollutants and PM2.5 into 5 groups of aerosols. A specific approach for obtaining speciation profiles is used here, exploiting the US EPA data base [18]. A Bulgarian emission expert has matched the main Bulgarian sources for every SNAP with similar source types from the US EPA nomenclature. The weighted averages of the respective speciation profiles are accepted as SNAP-specific splitting factors, the weights being the percentage contribution of every source type to the emission of the particular SNAP. As shown in Fig. 1, the AS emission file is prepared by the interface program AEmis. The gridded emission inventory data is input to AEmis. The program performs speciation and time allocation for every grid cell and for every SNAP. SMOKE produces the LPS emission file; for the purpose, the LPS inventory is transformed into the required IDA format. A serious update of the SMOKE profiles and reference files had to be made in order to make the model produce Europe-specific results.
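As a purely illustrative sketch of the kind of arithmetic such an emission pre-processor performs, the fragment below turns an annual emission of one SNAP in one grid cell into an hourly, speciated value. The function and parameter names and the flat-rate normalisation are our assumptions, not the actual AEmis code.

// annual emission -> hourly, speciated emission using temporal profiles
double hourlyEmission(double annual,          // annual emission of the SNAP in the cell
                      double monthlyFactor,   // relative weight of the month (mean 1)
                      double weeklyFactor,    // relative weight of the weekday (mean 1)
                      double dailyFactor,     // relative weight of the hour (mean 1)
                      double speciesSplit)    // SNAP-specific splitting factor
{
    const double meanHourly = annual / 8760.0;   // flat hourly rate over the year
    return meanHourly * monthlyFactor * weeklyFactor * dailyFactor * speciesSplit;
}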
4
CMAQ Calculations and Validation
Fourteen σ-levels with varying thickness determine the vertical structure of CMAQ. The Planetary Boundary Layer is represented by 8 of these levels. The CMAQ calculations are run twice a day; the previous forecast for the respective hour of the day is the initial concentration for the next run.
Before starting to use BGCW, validation of the model results is needed in order to assure the quality and credibility of the delivered output. Here, validation for ozone and for the year 2000 is done. The up-to-date national emission inventory is in a stage of completion; for the validation, the TNO inventory [13] is exploited. Its resolution is on average 15 × 15 km. GIS technology is applied to produce the AEmis input from this data base. The background measurements in Bulgaria are quite scanty. Fortunately, in the frame of a research project [2], hourly measurements of ozone for the whole year 2000 have been made at two background points: peak Rojen (Rodopi mountains, south Bulgaria) and Ahtopol (a small Black Sea town, southeastern Bulgaria). The respective meteorology data for 2000 is taken from the ALADIN archive. The validation calculations are made in off-line mode. Four scenarios are set up so as to distinguish the importance of different processes for ozone formation. Scenario A is the basic one: all kinds of emissions and the boundary conditions are accounted for. In Scenario B, the boundary values of all pollutants are set to zero. Scenarios C and D are as A and B, respectively, but the biogenic emissions are excluded. Further on, results only for Rojen will be presented and discussed; Ahtopol's results are quite similar. In the left graph of Fig. 2, one can notice that almost all scatter points are within the FA2 boundaries (dashed lines). The fitting lines are quite close to the ideal fitting line, showing a slight underestimation of the measured data. The correlation coefficients of both fits are quite high, which reflects the good quality of the simulation. The comparison of Scenarios A and B (the middle-left graph of Fig. 2) shows that it is quite important to account correctly for the boundary conditions, especially in air quality calculations over such a small domain: BCs form almost 90% of the pollution levels in the region. The middle-right graph of Fig. 2 shows that the Scenario D ozone values are at least three times lower than in Scenario B, pointing out the importance of biogenic emissions for ozone formation. As the best way to compare ozone data is the use of daily maxima, the simulation quality of the BGCW system is demonstrated once more in the right graph of Fig. 2, where the scatter diagram of measured vs. calculated daily maxima (for Rojen, again) is displayed. One can notice that the fitting lines are close to the ideal one, the correlation coefficients are over 0.95, and the measured daily maxima are somewhat underestimated. The simulation results are rather good. This means that the BGCW system is able to reproduce quite realistically the everyday ozone variations and to serve as an air quality forecast and information system.
Fig. 2. Scatter diagrams: Measurements vs. Scenario A (left); A vs. B (middle–left), B vs. D (middle–right), daily maxima measured vs. daily maxima Sc. A (right)
5
Conclusion
Evaluation of the BGCW simulations showed that the modelling system has a satisfactory performance with respect to O3, as shown by the different plots discussed. Despite using boundary conditions from another modelling system, the basic spatial and temporal O3 patterns are captured by the model. The best simulation quality is obtained for summertime daily maxima. The reasonable performance of the BGCW system for the past-time simulations justifies its use for future forecast and information. Due to the considerable volume of calculations and the operational needs, it is worth checking the ability to use the BGCW system on the GRID. Efforts in this direction are planned.
Acknowledgments The presented results would not have been possible without the experience obtained during the participation in the fifth FP project BULAIR, the sixth FP Network of Excellence ACCENT and the sixth FP Integrated Project QUANTIFY. The recent funding by the National Science Fund (Grants DO02-161/16.12.2008 and DO02-115/2008) must be specially acknowledged.
References
1. Builtjes, P.J.H., van Loon, M., Schaap, M., Teeuwisse, S., Visschedijk, A.J.H., Bloos, J.P.: Project on the modelling and verification of ozone reduction strategies: contribution of TNO-MEP, TNO-report, MEP-R2003/166, Apeldoorn, The Netherlands (2003)
2. Donev, E., Zeller, K., Avramov, A.: Preliminary background ozone concentrations in the mountain and coastal areas of Bulgaria. Environmental Pollution 17, 281–286 (2002)
3. EMEP/CORINAIR: Atmospheric emission inventory guidebook, 3rd edn., Copenhagen, European Environmental Agency (2002), http://reports.eea.europa.eu/EMEPCORINAIR3/en/page002.html
4. ENVIRON: User's Guide to the Comprehensive Air Quality Model with Extensions (CAMx), Version 4.40, ENVIRON International Corporation, Novato, CA (2006)
5. European Parliament: DIRECTIVE 2002/3/EC of 12 February 2002 relating to ozone in ambient air. Official Journal of the European Communities L67, 14–30 (March 9, 2002)
6. Gery, M.W., Whitten, G.Z., Killus, J.P., Dodge, M.C.: A Photochemical Kinetics Mechanism for Urban and Regional Scale Computer Modeling. Journal of Geophysical Research 94, 12925–12956 (1989)
7. Monteiro, A., Lopes, M., Miranda, A., Borrego, C., Vautard, R.: Air pollution forecast in Portugal: a demand from the new air quality framework directive. Int. J. Environment and Pollution 25(1-4), 4–15 (2005)
8. Nenes, A., Pandis, S.N., Pilinis, C.: ISORROPIA: A new thermodynamic equilibrium model for multiphase multicomponent inorganic aerosols. Aquat. Geoch. 4, 123–152 (1998)
9. Poupkou, A., Kioutsioukis, I., Lisaridis, I., Markakis, K., Giannaros, T., Katragkou, E., Melas, D., Zerefos, C., Viras, L.: Evaluation in the Greater Athens Area of an air quality forecast system. In: Proc. of the IX EMTE National-International Conference of Meteorology-Climatology and Atmospheric Physics, Thessaloniki, Greece, pp. 759–766 (2008)
10. Poupkou, A., Kioutsioukis, I., Lisaridis, I., Markakis, K., Melas, D., Zerefos, C., Giannaros, T.: Air quality forecasting for Europe, the Balkans and Athens. In: 3rd Environmental conference of Macedonia, Thessaloniki, Greece (2008b)
11. San Jose, R., Perez, J., Gonzalez, R.: An Operational Real-Time Air Quality Modelling System for Industrial Plants. Environmental Modelling & Software 22, 297–307 (2007)
12. Sofiev, M., Siljamo, P., Valkama, I., Ilvonen, M., Kukkonen, J.: A dispersion modeling system SILAM and its evaluation against ETEX data. Atmospheric Environment 40, 674–685 (2005/2006)
13. Visschedijk, A.J.H., Zandveld, P.Y.J., Denier van der Gon, H.A.C.: A High Resolution Gridded European Emission Database for the EU Integrate Project GEMS, TNO-report 2007-A-R0233/B, Apeldoorn, The Netherlands (2007)
14. http://box.mmm.ucar.edu/mm5/
15. http://www.chemicalweather.eu/
16. http://www.cmaq-model.org/
17. http://www.smoke-model.org/
18. http://www.epa.gov/ttn/chief/emch/speciation/
Climate Change Impact Assessment of Air Pollution Levels in Bulgaria
D. Syrakov1, M. Prodanova1, N. Miloshev2, K. Ganev2, G. Jordanov2, V. Spiridonov1, A. Bogatchev1, E. Katragkou3, D. Melas3, A. Poupkou3, and K. Markakis3
1 National Institute of Meteorology and Hydrology, Bulgarian Academy of Sciences, Sofia, Bulgaria
[email protected]
2 Geophysical Institute, Bulgarian Academy of Sciences, Sofia, Bulgaria
[email protected]
3 Laboratory of Atmospheric Physics, Aristotle University of Thessaloniki, Greece
Abstract. The presented work aims at climate change impact and vulnerability assessment in Bulgaria. Climate change may affect exposure to air pollutants by affecting weather and thereby local and regional pollution concentrations. Local weather patterns influence atmospheric chemical reactions and can also affect atmospheric transport and deposition processes. The US EPA Models-3 system is exploited here for a region covering Bulgaria with a resolution of 10 km. The meteorological background is produced by the climatic version of the ALADIN weather forecast system. The TNO emission inventory for 2000 is used. The chemical boundary conditions are extracted from 50-km resolution runs over Europe made at the Aristotle University of Thessaloniki, Greece. Calculations for the period 1991-2000 are performed and the results are presented in the study. For the year 2000, several scenarios are run and the results are compared with measured data.
1
Introduction
A great many scientific projects and related publications are aimed at assessing the impact of climate change on various areas of human activity and on the environment. The impact of climate change on pollution levels has been the object of various publications. Special attention is due to the final report of NATO Project CLG 980505 [2]. There, long-term runs are performed over Europe with a resolution of 10 km exploiting the UNI-DEM AQM [11,12] and meteorology for a period of 16 years (1989-2004). Different kinds of scenarios (traditional, climatic, emission) are combined in 14 different runs. The meteorology for the prospective climatic runs is prepared by suitably changing the available meteorological variables so as to fit the means from the IPCC climate scenario SRES A2 [6]. The discussion of the results is quite interesting. The EC FP6 project CECILIA [15] is devoted to climate change impacts. CECILIA's Work Package 7 aims at long-term simulations of chemistry air quality
models (AQM) driven by Regional Climate Models (RCMs) for the present climate and for future projections with a fine resolution of 10 km for some target regions of Central and Eastern Europe. The boundary conditions for these regions are prepared by calculations covering the whole of Europe with a grid resolution of 50 × 50 km. The CECILIA approach differs from the one just described. The meteorological fields are determined as the output of RCMs forced by available forecast/reanalysis or global-scale climatic model runs. Although the investigated periods are quite long, the emission scenario is fixed: only the EMEP 50 km data base for 2000 [18] is used. For the fine-scale simulations a proper disaggregation of the EMEP data base is exploited [10]. The aim is to estimate only the influence of the changes in meteorology on the pollution levels in regional and local aspect. In this paper, first results of applying the CMAQ chemical transport model to long-term simulations over the region of Bulgaria are presented, with emphasis on model validation.
2
Meteorological Data Base
The meteorological data base for this study is created off-line by the ALADIN Regional Climatic Model (ALADIN-RCM). It is a modification of the current operative weather forecast system of the National Institute of Meteorology and Hydrology (NIMH) of Bulgaria. A description of the small-size high-resolution RCM with the ALADIN dynamics (ALADIN-RCM) is given in [9] and [3]. The ALADIN dynamics was entirely built on the notion of compatibility with its "mother" system, IFS/ARPEGE. The ALADIN/CMAQ meteorological data base consists of 6-hour GRIB files containing the main meteorological parameters on 31 standard p-levels going up to 50 hPa.
3
Air Quality Models Used, Computational Domains and Data Flow
The US EPA Models-3 air quality modelling system is used here, consisting of: (i) CMAQ (Community Multi-scale Air Quality model, [16]), being the chemical-transport model; (ii) MM5 (the 5th generation PSU/NCAR Meso-Meteorological Model, [13]), used as meteorological pre-processor to CMAQ; and (iii) SMOKE (Sparse Matrix Operator Kernel Emissions Modelling System, [17]), being the emission pre-processor to CMAQ. The calculations are performed for a region containing mainly Bulgaria with a 10 × 10 km resolution. For the purpose, three nested computational domains are set: ALADIN's, MM5's, and CMAQ's. A Lambert conformal projection is used with parameters Lat1 = 42.5°, Lat2 = 42.5°, λc = 25°, ϕc = 42.5°. A number of interfaces (Linux scripts and Fortran codes) were created to link these models with the different types of input information into a system capable of performing long-term calculations. The 1-day data flow is shown in Fig. 1. The white boxes present the Models-3 components, MCIP being the Meteorology-Chemistry Interface Processor of CMAQ.
Fig. 1. Data flow of calculations for estimating climate change impact on air quality
The light grey boxes present the data input to the system, and the dark grey ones the Fortran codes. First of all, this is the ALADIN data that drives MM5. Its output feeds MCIP, which produces the meteorological input to both CMAQ and SMOKE. The CAMx data base [7,8] contains air pollution calculations for Europe on a 50-km grid. It is used for off-line interpolations to the CMAQ boundary points (module BG.BC1). The uploaded results are the basis for on-line preparation of the CMAQ boundary conditions input file (module BG.BC2).
4
Modelling
CMAQ demands its emission input in a specific format reflecting the time evolution of all pollutants accounted for by the chemical mechanism used. As a rule, the emission inventory is made on an annual basis for big territories (municipalities, counties, countries, etc.). Many pollutants are estimated as groups like VOC (Volatile Organic Compounds) and PM2.5 (Particulate Matter with d < 2.5 μm). In preparing model-ready emission data, a number of specific processing steps must be performed (gridding, time allocation, speciation) for the main types of sources: Area (AS), Large Point (LPS), and Biogenic (BgS) sources. Obviously, emission models are needed as pre-processors to the chemical transport models. Such a component in the EPA Models-3 system is SMOKE. Unfortunately, it is highly adapted to the US conditions and is only partly used here. Fed by gridded land-use data and the MCIP output, it calculates the biogenic sources and merges the AS-, LPS- and BgS-files into a single CMAQ emission input file. As shown in Fig. 1, the anthropogenic emission files (AS and LPS) are prepared by the interface programs AEmis and PEmis. Input to them are the gridded emission inventories for the year 2000. Here, the TNO high resolution inventory [10] is exploited. GIS technology is applied to produce the 10 × 10 km AEmis and PEmis input from this data base.
The temporal allocation is made by superimposing daily, weekly, and monthly profiles (see [1]). The speciation profiles are elaborated following the recommendations given on the US EPA site [14]. AEmis and PEmis perform speciation and time allocation for every grid cell and every pollutant for the current Julian date. The obtained hourly values are output in NetCDF format and enter SMOKE to be merged with the BgS emissions. Fourteen σ-levels with varying thickness determine the vertical structure of CMAQ. The Planetary Boundary Layer is represented by 8 of them. The daily CMAQ output is a NetCDF file with 3D hourly data for 78 pollutants. Besides meteorology, boundary conditions and emissions, CMAQ also needs an Initial Condition (IC) input. As the CMAQ calculations proceed day by day, the last-hour output for a day is the IC for the next day. Finally, a number of post-processing programs (XtrMET, XtrCON) are created, aimed at saving the important part of the output information.
5
Validation of Model Results
Before starting the assessment of the climate change impact on air quality, validation of the model results is needed in order to assure the quality and credibility of the delivered output. There are quite few usable measurements in the territory of Bulgaria (no EMEP station in the domain). Fortunately, during the whole year 2000, in the frame of a research project [4], hourly ozone measurements were performed at two points in Bulgaria: peak Rojen, Rhodopi mountain (41N41, 24E44, altitude 1700 m), and Ahtopol, a small town on the Black Sea coast (42N05, 27E57, altitude 10 m). This data is the main source for evaluating the model results. For the year 2000 four scenarios are calculated (see Fig. 2). The aim is to estimate the importance of the biogenic sources and of the boundary conditions. The scatter diagrams of Scenario A vs. the measurements are displayed in Fig. 3. One can notice that almost all scatter points are within the FA2 boundaries (dashed lines). The fitting lines are quite close to the ideal fitting, showing a slight underestimation of the measurement data at Rojen and an overestimation at Ahtopol. The correlation coefficients of both fits are quite high, which reflects the good quality of the simulation.
Fig. 2. Measurement points (left) and scenario’s description (right)
Fig. 3. Scatter diagrams of measurement and modelled (Scenario A) 1-hour ozone data
Fig. 4. Scenarios comparison: A vs. B (left), A vs. C (middle), and B vs. D (right), Ahtopol
The main insights from the sensitivity analysis will be shown by the results from the different scenarios for Ahtopol; the Rojen diagrams are quite similar. The left graph of Fig. 4 demonstrates the exclusive importance of the boundary conditions for such a small area as the Bulgarian territory. The lack of pollution at the boundaries, i.e. the action of only the Bulgarian pollution sources, produces ozone concentrations several times lower than in the basic scenario. The other two graphs demonstrate the importance of the biogenic sources in ozone formation. Together with the direct comparisons, some indices recommended by the European Ozone Directive [5] are calculated for both the measured data and the Scenario A simulations, mainly AOT40 (ozone Accumulated Over a Threshold of 40 ppb in the day-time hours during the period from May 1 to July 31), NOD60 (Number Of Days on which the day-time 8-hour running averages of the ozone concentration exceed the critical value of 60 ppb), and ADM (Averaged Daily Maxima). The comparison of the AOT40 and NOD60 indices is not satisfactory, which is not strange keeping in mind the high sensitivity of these indices to measurement and calculation errors as well as to the daily ozone variations. ADM, which is mostly used for validation purposes, shows quite close results: 59.3 and 54.4 ppb for Rojen and 54.1 and 53.6 ppb for Ahtopol for the measured and calculated ozone, respectively. These comparisons support the conclusion that the simulation results are rather
good and can be used with certainty for investigation of the climate change impact on pollution levels.
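For clarity, the AOT40 index defined above can be computed from an hourly ozone series roughly as in the sketch below; the exact day-time window and the data handling are our assumptions, not the authors' implementation.

#include <vector>

struct HourlyOzone { int month; int hour; double ppb; };   // local time, ozone in ppb

double aot40(const std::vector<HourlyOzone>& series) {
    double sum = 0.0;
    for (const auto& s : series) {
        bool inPeriod  = (s.month >= 5 && s.month <= 7);   // May 1 - July 31
        bool inDaytime = (s.hour >= 8 && s.hour < 20);     // assumed day-time window
        if (inPeriod && inDaytime && s.ppb > 40.0)
            sum += s.ppb - 40.0;                           // accumulate the exceedance
    }
    return sum;   // in ppb hours
}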
6
Long Term Simulations: Results for 1991–2000 Time–Slice
Following the technology just described, the 10-year period of the CECILIA Present Climate time-slice was simulated. No other comparisons with measurement data are performed; here, mainly demonstrations will be presented. In the left part of Fig. 5, the "climatic" ozone field is presented, compared with the field of the 10-year mean ADM. One can notice some resemblance between the spatial patterns, but the values are quite different: the maximum of the climatic ozone field is close to the minimum of the ADM field. The inter-annual variability of the ADM is demonstrated in Fig. 6. In spite of the similar spatial distribution, the ADM for 2000 are by several ppb higher than for 1997. This can be explained by the prevailing meteorological conditions: a cold and wet summer in 1997 and a dry and warm year 2000. The last illustration of the ALADIN/CMAQ output, presented in Fig. 7, are the "climatic" fields of SO2 and PSO4 (particulate sulfate). The left graph shows two main pollution spots in the Bulgarian territory: TPP "Maritza-Iztok" in
Fig. 5. 10-year mean ozone concentration (left) and Averaged Daily Maxima (right)
Fig. 6. Averaged Daily Maxima for years 1997 (left) and 2000 (right)
Fig. 7. 10-year averaged fields of SO2 (gas, left) and PSO4 (aerosol, right)
the centre (3 neighbouring TPPs that used to be the most powerful SO2 polluter in the Balkans) and the industrial area "Sofia-Pernik" on the left. The lignite-burning TPPs are quite well expressed on the PSO4 map as well (the right graph). Many plots of the inter-annual variations of these and all other saved pollutants, as well as of related meteorological parameters, can be presented and interpreted; this will be the object of further work.
7
Conclusion
Evaluation of the ALADIN/CMAQ simulations showed that the modelling system has a satisfactory performance with respect to O3, as shown by the plots discussed in this study. Despite the use of boundary conditions from another modelling system, the basic spatial and temporal O3 patterns are captured by the model. The best simulation quality is obtained for summertime daily maxima. The reasonable performance of the ALADIN/CMAQ system for the present-time simulations justifies its use for future-time projections. The enormous volume of calculations related to the assessment of the climate change impact on air quality requires powerful computational platforms like the GRID. Since two new 10-year time slices must be simulated in the frame of the CECILIA project, some effort was put into producing a version of the above-described data stream suitable for GRID application. The results are very promising: the performance is faster than on a single worker and the obtained results are practically identical. Many jobs, covering different time pieces, can be prepared and run almost simultaneously, exploiting to a great extent the parallel computational abilities of the GRID.
Acknowledgments. This study was made under the financial support of the European Commission, 6th FP Integrated Project CECILIA [15], as well as by the Bulgarian National Science Fund (grants No. DO02-161/16.12.2008 and DO02-115/2008). The presented results would not have been possible without the experience obtained during the
participation in the 5th FP project BULAIR, the 6th FP Network of Excellence ACCENT and the 6th FP Integrated Project QUANTIFY.
References 1. Builtjes, P.J.H., van Loon, M., Schaap, M., Teeuwisse, S., Visschedijk, A.J.H., Bloos, J.P.: Project on the modelling and verification of ozone reduction strategies: contribution of TNO-MEP, TNO-report, MEP-R2003/166, Apeldoorn, The Netherlands (2003) 2. Csomos, P., Cuciureanu, R., Dimitriu, G., Dimov, I., Doroshenko, A., Farago, I., Georgiev, K., Havasi, A., Horvath, R., Margenov, S., Moseholm, L., Ostromsky, T., Prusov, V., Syrakov, D., Zlatev, Z.: Impact of Climate Changes on Pollution Levels in Europe, NATO Project CLG 980505 Final Report, NATO HQ, Brussels, Belgium (2006) 3. D´equ´e, M., Piedelievre, J.-P.: High-Resolution climate simulation over Europe. Climate Dynamics 11, 321–339 (1995) 4. Donev, E., Zeller, K., Avramov, A.: Preliminary background ozone concentrations in the mountain and coastal areas of Bulgaria. Environmental Pollution 17, 281–286 (2002) 5. European Parliament: DIRECTIVE 2002/3/EC of 12 February 2002 relating to ozone in ambient air. Official Journal of the European Communities L67, 14–30 (March 9, 2002) 6. Houghton, J.T., Ding, Y., Griggs, D.J., Noguer, M., van der Linden, P.J., Dai, X., Maskell, K., Johnson, C.A. (eds.): Climate Change 2001: The Scientific Basis. Cambridge University Press, Cambridge (2001) 7. Katragkou, E., Zanis, P., Tegoulias, I., Melas, D., Krueger, B.C., Huszar, P., Halenka, T.: Tropospheric Ozone over Europe: An air quality model evaluation for the period 1990-2001. In: Proceedings of IX EMTE National-International Conference of Meteorology-Climatology and Atmospheric Physics, p. 649. ishing house of Aristotle University of Thessaloniki, Thessaloniki (2008) URL, 649 8. Krueger, B.C., Katragkou, E., Tegoulias, I., Zanis, P., Melas, D., Coppola, E., Rauscher, S.A., Huszar, P., Halenka, T.: Regional decadal photochemical model calculations for Europe concerning ozone levels in a changing climate. Idojaras — Quarterly Journal of the Hungarian Meteorological Service (in press) 9. Spiridonov, V., D´equ´e, M., Somot, S.: ALADIN-CLIMATE: from the origins to present date. ALADIN Newsletter 29 (2005) 10. Visschedijk, A.J.H., Zandveld, P.Y.J., Denier van der Gon, H.A.C.: A High Resolution Gridded European Emission Database for the EU Integrate Project GEMS, TNO-report 2007-A-R0233/B, Apeldoorn, The Netherlands (2007) 11. Zlatev, Z.: Computer treatment of large air pollution models. Kluwer Academic Publishers, Dordrecht (1995) 12. Zlatev, Z., Moseholm, L.: Impact of climate changes on pollution levels in Denmark. Environmental Modelling 217, 305–319 (2008) 13. http://box.mmm.ucar.edu/mm5/ 14. http://www.epa.gov/ttn/chief/emch/speciation/ 15. http://www.cecilia-eu.org/ 16. http://www.cmaq-model.org/ 17. http://www.smoke-model.org/ 18. http://webdab.emep.int/
Effective Algorithm for Calculating Spatial Deformations of Pre-stressed Concrete Beams
János Török, Dénes Németh, and Máté Lakat
Centre of Information Technology, Budapest University of Technology
Magyar tudósok krt. 2, H-1117 Budapest, Hungary
Abstract. An effective algorithm is presented in this paper to solve boundary value problems describing the deformations of pre-stressed concrete beams. The presented solution aims to deliver a speedup of several orders of magnitude compared to existing algorithms, in order to be able to handle more complex problems that have been inaccessible so far. Parallelization problems are investigated and a solution is proposed which allows this algorithm to run in various environments, from desktop computers to clusters and Grids.
1
Introduction
There are many fields where the solution of more complicated problems requires exponentially more computation time [5]. In such cases accessing more computation power does not really help. In order to make a considerable step forward one has to renew the tools and achieve an orders-of-magnitude speed gain. The problem we address here is the algorithms used for calculating the deformation of simple architectural elements, including prestressed concrete beams, arches, beam assemblies, etc. It is unnecessary to describe the importance of providing physically correct results in a field where the general engineering procedures consist of heuristics and, in some cases, finite element methods with mainly empirical models [1]. Physically exact calculations would be priceless, but they need the solution of a large number of non-linear equations, the solutions being one-dimensional objects in a d-dimensional domain [2]. An important requirement makes these problems even more difficult, namely that it is not enough to calculate the solutions only once and use them for different cases, because by changing the parameters of the structures the topology of the solutions may change. Therefore one must come up with an algorithm that can be run by the architects and does not require access to supercomputers. There are two different approaches used in the literature [8]: 1) find some "trivial" solution and try to follow it by slowly changing the parameters, 2) take the parameter space and scan it for solutions [7]. The drawback of the first is that it is unable to find solutions that are not connected to the "trivial" one and thus may predict false stability; on the other hand, the second approach requires an immense amount of CPU time.
A delicate combination of the two approaches led so far to the best algorithm [4]. However as usual engineers want more. In previous papers [3,6] we described the basics of a new idea that searches the solutions in an artificial potential by gradient algorithm. This solution was integrated into a hybrid algorithm that combined approaches 1) and 2). It gave promising results, but due to the limitations of the original approach (see Sec. 3) a new algorithm had to be designed. In this paper we describe the analyses that gave birth to an algorithm which is optimized on both memory and CPU consumption and is able to access problems too difficult for previous algorithms.
2
Description
2.1
The Problem
The non-linear differential equations of the boundary value problems describing the problems can be transformed to a set of non-linear equations [4]. A problem having d degrees of freedom can thus be represented as

0 = f1(x1, x2, . . . , xd),
0 = f2(x1, x2, . . . , xd),
      ...
0 = fd−1(x1, x2, . . . , xd),   (1)
where fj are non-linear functions. It is important to note that we have one equation less than the number of unknowns (meaning one physically independent parameter). Thus we will have one-dimensional solutions (lines, curves) in the d-dimensional parameter space (x1, x2, . . . , xd ∈ X). One important remark is that in general these functions are very complicated to evaluate, which takes a lot of CPU time. Moreover, due to their complexity we assume that in general no analytic information is available about the functions. From now on we use the letters x (any point) and y (solution candidate) to designate points in the parameter space, with index i = 1, 2, . . . , d, and the letter f for the functions, with index j = 1, 2, . . . , d − 1.
2.2
Solution Testing
It is easy to see that trivial solution testing methods based on limiting |fj({yi})| are useless in our case, as no information is available on the functions. Therefore we chose the following method for solution testing. Let us consider a simplex in the d-dimensional space. If a solution line goes through the simplex, all functions must cross zero and thus change sign, which can easily be detected. This assumption is true only if the simplices are small enough to contain only one zero line, which is checked by repeating the calculation with different simplex sizes.
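The test can be illustrated by the following minimal sketch; the two functions and the simplex are toy placeholders chosen only to show the sign-change criterion, they are not the beam equations of [4].

```python
import numpy as np

def simplex_may_contain_solution(vertices, funcs):
    """Return True if every f_j changes sign over the corners of the simplex.

    vertices : (d+1, d) array, the corners of a simplex in R^d
    funcs    : the d-1 functions f_j; if a solution line crosses the simplex,
               each f_j must take both signs at the corners (for simplices
               small enough to contain only one zero line, see the text).
    """
    vertices = np.asarray(vertices, dtype=float)
    for f in funcs:
        values = np.array([f(v) for v in vertices])
        if values.min() > 0.0 or values.max() < 0.0:
            return False          # constant sign -> no zero of f_j inside
    return True

# toy d = 3 example: the solution set x^2 + y^2 = 1, z = x*y is a curve
f1 = lambda x: x[0] ** 2 + x[1] ** 2 - 1.0
f2 = lambda x: x[2] - x[0] * x[1]
simplex = np.array([[0.9, 0.0, -0.1], [1.1, 0.0, -0.1],
                    [0.9, 0.2, -0.1], [0.9, 0.0,  0.3]])
print(simplex_may_contain_solution(simplex, [f1, f2]))   # True
```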
The next task is to create the simplices. To work directly with simplices in d dimensions is a nightmare, so we chose to handle hypercubes and, if necessary, decompose them into simplices. The simplex decomposition of a hypercube is non-trivial; even the number of simplices may vary. Since there is no algorithm that provides the decomposition with the minimum number of simplices, we used the easiest method with d! simplices. One can fix two points of each simplex, which can be chosen to be the origin (0, 0, . . . , 0) and the opposite corner (1, 1, . . . , 1). The other corners of the simplices are chosen by permuting the unit vectors without repetition. On the faces of the simplices a linear interpolation can be carried out to locate approximate solution points, which requires the solution of a set of linear equations. The solutions of these equations will be points on the surface of the simplices that are more accurate than the corners. If such a point also lies on the surface of the containing hypercube, we can follow the solution line by choosing the touching hypercube next.
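The permutation-based construction can be sketched as follows. This is a Kuhn-type decomposition of the unit reference cube and an illustrative reading of the description above; actual implementations may order and scale the simplices differently.

```python
import numpy as np
from itertools import permutations

def hypercube_simplices(d):
    """Decompose the unit hypercube [0,1]^d into d! simplices.

    Each permutation of the coordinate directions yields one simplex whose
    corners are obtained by adding the permuted unit vectors one by one, so
    every simplex contains both the origin and the opposite corner (1,...,1).
    """
    eye = np.eye(d)
    for perm in permutations(range(d)):
        corners = [np.zeros(d)]
        for axis in perm:
            corners.append(corners[-1] + eye[axis])
        yield np.array(corners)          # (d+1, d) array of simplex corners

for s in hypercube_simplices(3):         # 3! = 6 simplices for the unit cube
    print(s.astype(int).tolist())
```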
2.3
Potential
In general there is no hint about the position of the solutions, so up to now mainly parameter scanning was used. Here we propose a method to find a solution faster. We create an artificial potential which has minima at the solutions, so that gradient algorithms can be used to find them. A simple choice of the potential was presented in [6]:

U({xi}) ≡ Σ_{j=1}^{d−1} cj fj²({xi}),   (2)

where cj > 0 are coefficients, taken from random sampling in X:

cj = Σ_{k=1}^{m} fj^{−2}({xi^(k)}),   (3)

where {xi^(k)} is a random point in X. Obviously, if {yi} is a solution of (1), then U({yi}) = 0, otherwise U > 0.
2.4
Algorithm
The new algorithm presented here uses a mixed approach. The ability to find solutions by a gradient method in the artificial potential U makes the scanning of the parameter space unnecessary. Therefore the strategy adopted here is the following (a sketch of the gradient search of step 1(a) is given after the list):
1. Generate a solution candidate point using one of the following methods:
   (a) gradient method (starting from random points, a regular lattice, etc.);
   (b) read from file;
   (c) from an algorithm (if the previous two methods cannot be used).
2. Test for solutions the hypercube which contains the point.
3. If a solution was found, follow it.
4. Return to 1.
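The following sketch illustrates step 1(a), the gradient search in the artificial potential of (2)-(3). It is only a schematic implementation under stated assumptions: the gradient of U is formed with central finite differences, the descent uses a fixed normalized step, and the test functions, step sizes and iteration limits are illustrative choices.

```python
import numpy as np

def candidate_by_gradient(funcs, x0, samples, step=1e-2, tol=1e-8, max_iter=500):
    """Step 1(a): gradient search in the artificial potential U of (2)-(3).

    funcs   : the d-1 functions f_j of system (1)
    x0      : starting point (random point, lattice point, ...)
    samples : random points of X used to build the weights c_j, cf. (3)
    """
    c = np.array([sum(f(x) ** (-2) for x in samples) for f in funcs])   # eq. (3)
    U = lambda x: sum(cj * f(x) ** 2 for cj, f in zip(c, funcs))        # eq. (2)

    def grad_U(x, h=1e-5):
        # central finite differences, since no analytic derivatives are assumed
        g = np.zeros_like(x)
        for i in range(x.size):
            e = np.zeros_like(x)
            e[i] = h
            g[i] = (U(x + e) - U(x - e)) / (2.0 * h)
        return g

    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        if U(x) < tol:
            break
        g = grad_U(x)
        x = x - step * g / (np.linalg.norm(g) + 1e-30)   # fixed-length descent step
    return x   # solution candidate; the hypercube containing it is tested in step 2

# toy d = 3 example (one free parameter, so the solutions form curves)
f1 = lambda x: x[0] ** 2 + x[1] ** 2 - 1.0
f2 = lambda x: x[2] - x[0] * x[1]
rng = np.random.default_rng(0)
samples = rng.uniform(-2.0, 2.0, size=(20, 3))
print(candidate_by_gradient([f1, f2], x0=[0.5, 0.5, 0.5], samples=samples))
```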
3
Analysis
3.1
Concept
The algorithm has two distinct parts: the solution candidate point generation and the solution following. The solution following is done similarly to previous realizations: If solution was found in a hypercube, the simplex algorithm is used to determine on which faces it exits from the cube (can be more than two if there are branches). The neighbouring hypercubes in these directions are considered as candidates for solutions and put in a list to be tested. This procedure is repeated until the list gets empty. This list is expected to remain small, if we do not insert hypercubes which were tested: So as a solution is followed a cube is taken from the list and a new is inserted. The only increase in size of the queue may come from branches. As we said, we have to keep track the tested hypercubes (or the ones with solutions) to avoid back and forth walking on a solution line. Due to the fine resolution, large number of hypercubes should be stored and searched effectively. An optimized solution is presented in Sec. 3.4. The gradient method was found to be the most effective solution candidate point generation method. One can choose the starting point of the gradient in many different ways (random, regular, hierarchical, etc.) but we found no significant difference in efficiency. In some cases the gradient method can not be used, because, for example, parts of X are not in the domain of fj , or the landscape of U is too noisy. In these cases we have to generate the solution candidate points in conventional ways. Either in some special cases analytic solutions exist and this can be supplied or a kind of sweeping algorithm can be used. The first choice is simple — one has to provide points to the code which are checked for solutions, the second is more complicated and implemented in hierarchically cutting X into half and scanning the cutting surface. This requires the scan of a d − 1 dimensional space, which is one dimension less than the original, but still very large. This latter option is useful for two purposes: 1) if everything else fails this should always work. This can even be used for small domains where the gradient algorithm seems to fail. 2) To ensure that there are no solutions above a given length scale (distance between two cuts). This can be useful when the mapping of X is done for a new problem. 3.2
Computation Need of Gradient Method
In general, it is impossible to generate analytically the derivative of the fj functions, so finite difference must be used, which makes the gradient method time consuming. We made several (∼ 100 − 1000) runs of the gradient method with real test problems to study the computation need of the gradient method. The result is summarized in Table 1. The important thing we can learn from the above analysis is that the cost (number of function calculations) of a gradient run grows with the dimension as:
Table 1. The average number of gradient steps and the required number of function calls as functions of the space dimension. An average over a few hundred runs is presented.

dim. d | grad. steps ng | f calls nf | normalized calls nf/d^4.92
   3   |      18.6      |    178     |          0.80
   5   |     133.6      |   2223     |          0.81
   6   |     270.5      |   5355     |          0.80
nf ∼ d5 which is an algebraic (power law) increase which should be compared to the exponential increase of any sweeping algorithm (nf ∼ N d , where N is the linear resolution). Thus the gradient algorithm has two major advantages over previous approaches: 1) The scale free nature of it, 2) the power law computation time increase with d compared to the exponential one. 3.3
Computation Need of Simplex Decomposition
The simplex decomposition [8] can be very expensive since we have d! simplices in a d dimensional cube (see Sec. 2.2). Furthermore we have to solve a set of linear equations which requires d2 operations. Even if the simplex decomposition does not require the calculation of the expensive function values, due to its factorial increase in computation need necessarily, it will become the most time consuming part of the code with increasing d. However, current measurements in parameter space dimensions show only 0.1 − 2% computation overhead, which is negligible. However this might become the bottleneck in higher dimensions. 3.4
Memory Need
Keeping track of all hypercubes in X is impossible: for example in d = 5 a resolution of 100 hypercubes per dimension would require 10 gigabytes of memory. Therefore we keep track only the hypercubes with solution. The one-dimensional nature of the solutions may suggest that the list of sites with solution is small, however, there are cases which resemble more to a bowl of spaghetti and the number of hypercubes with solution may be several hundred times the linear solution. In order to keep memory usage small, we used the feature that solutions are curves. Thus in each step we only move to a neighboring cube. The algorithm used for storing hypercubes with solutions is the following: 1. Define a box-size parameter s 2. We consider a continuous section of the solutions which is not longer than s in any dimension. It is defined by its starting point, its path, and its bounding box 3. The starting point is a d dimensional vector 4. The bounding box consists of two d dimensional vectors pointing to the two opposite corners of the box which contains the entire section
5. The path is a bit-coded array in which one element indicates the next hypercube. Each element is composed of a header and a data part which are put together in a continuous bit array: (a) the header consists of d bits indicating (with bit 1) in which directions the coordinates of the next hypercube differ from those of the previous one; let c be the number of bits set to 1; then (b) the data part consists of c bits, indicating a difference of +1 (bit set = 1) or −1 (bit not set = 0) in the respective direction between the coordinates of successive hypercubes. The parameter s must be chosen carefully, because both extreme values of it are unfavorable: if s is too small, the number of sections gets too high; if s is too large, the linear search inside the box gets too long. Of course the optimum is a function of the resolution, the number of solution lines, etc. In the studied cases the optimum was around 2 − 10% of the linear size of the system.
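A small sketch of this header/data encoding is given below. For readability it stores the path as a Python list of 0/1 values rather than a packed bit array; the packing itself, as well as the function names, are implementation details not fixed by the text.

```python
def encode_step(prev, nxt, d):
    """Encode the move between two neighbouring hypercubes as header + data bits."""
    header = [1 if nxt[i] != prev[i] else 0 for i in range(d)]              # d header bits
    data = [1 if nxt[i] > prev[i] else 0 for i in range(d) if header[i]]    # +1 -> 1, -1 -> 0
    return header + data

def decode_path(start, bits, d):
    """Recover the sequence of hypercube coordinates from the bit-coded path."""
    cubes, cur, pos = [list(start)], list(start), 0
    while pos < len(bits):
        header, pos = bits[pos:pos + d], pos + d
        cur = cur[:]
        for i in range(d):
            if header[i]:
                cur[i] += 1 if bits[pos] else -1
                pos += 1
        cubes.append(cur)
    return cubes

d = 3
path = [(2, 5, 1), (3, 5, 1), (3, 4, 1), (3, 4, 2)]
bits = []
for a, b in zip(path, path[1:]):
    bits += encode_step(a, b, d)
print(bits)                        # the continuous bit array
print(decode_path(path[0], bits, d))
```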
3.5
Parallelization
The algorithm presented here can easily be parallelized, since both parts of the algorithm (solution candidate point generation, solution following) take their work from a kind of list which would be ideal for a Master-Slave type scenario. In our case we chose to avoid direct parallelization to be able to run the code on Grids, where inter process communication is not supported. We chose to simply parallelize the 1st part of the algorithm, namely the solution candidate point generation and leave the rest as it is. In this way we may end up calculating certain solutions multiple times. In Section 4 we investigate the price of this solution.
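The chosen scheme can be summarized by the following sketch: the candidate starting points are pre-divided into chunks and each chunk is handed to an independent job, with no inter-process communication. The splitting routine and the job layout shown here are illustrative assumptions only.

```python
import numpy as np

def split_candidates(points, n_jobs):
    """Pre-divide the solution candidate starting points into independent jobs.

    Each chunk is processed by a separate (Grid) job running the full
    generate-test-follow cycle; some solutions may be followed by several
    jobs, which is the accepted price discussed in Sec. 4.
    """
    return np.array_split(np.asarray(points), n_jobs)

rng = np.random.default_rng(1)
starts = rng.uniform(-1.0, 1.0, size=(1000, 5))    # e.g. ng = 1000 starts, d = 5
for j, chunk in enumerate(split_candidates(starts, n_jobs=10)):
    print(f"job {j}: {len(chunk)} starting points")
```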
4
Results
We chose two problems for test runs of the presented algorithm: the 3-dimensional euler and the 5-dimensional sp [9]. The latter was run with two different parameter spaces X. We recorded the time used by the codes as well as the number of solutions. Let us denote by N the number of hypercubes in each dimension, by ng the number of gradient runs, by τ the running time of the tests, and by ns the number of hypercubes with solution. The solution lines are visualized in Fig. 1. Table 2 details the time needs and the efficiency of the algorithm with different parameters in different dimensions. The roughly linear increase of τ with N can be observed mainly in the case of the euler model, where the solution is a big connected line set and thus most of the time the algorithm follows solutions. The time need of the gradient can even be estimated from the data to be ∼ 4 s for 900 gradient runs. In the other cases, when the gradient part is more pronounced, τ increases even more slowly with N. Thus a
Fig. 1. Visualization of the solution lines of the euler (left) and the sp (right) problem. The different colors indicate solutions found with different numbers of gradient runs. As the same random seed was used for all runs, the solutions are always subsets of another one with higher ng.

Table 2. Time and efficiency measurements of two algorithms called euler (d = 3) and sp (d = 5). spa and spb differ in the size of X.

   N  |  spa: ng     τ   ns/N  |  spb: ng     τ   ns/N  |  euler: ng     τ   ns/N
  200 |      100    31     29  |      100    52     27  |        10   0.36     23
  400 |      100    39     20  |      100    66     24  |        10   0.69     24
  800 |      100    96     31  |      100    95     22  |        10   1.46     25
 1600 |      100   198     29  |      100   159     20  |        10   3.49     26
  200 |     1000   189     36  |     1000   342     27  |       100   0.86     33
  400 |     1000   214     40  |     1000   374     49  |       100   1.29     33
  800 |     1000   270     39  |     1000   408     46  |       100   2.02     29
 1600 |     1000   396     37  |     1000   528     35  |       100   4.41     29
  200 |    10000  1782     38  |    10000  3266     54  |      1000   4.68     37
  400 |    10000  1812     40  |    10000  3326     87  |      1000   5.24     38
  800 |    10000  1879     39  |    10000  3377     78  |      1000   6.48     39
 1600 |    10000  1997     37  |    10000  3553     59  |      1000   9.67     38
much higher resolution solution can be obtained with a marginal increase in the computational time. Let us note that the increase of N should be followed by a strengthening of the convergence conditions of the gradient algorithm, because the solution candidate point found may not lie in the right hypercube. This effect can be observed for the model spb, where the largest resolution gives fewer solutions. One can also read from the table that it is indeed extremely difficult to tell how many initial gradient runs should be used to find all solutions. Sometimes a 10% increase in the number of solutions requires 10 times more gradient runs, because the domain of convergence of certain branches is too small. This effect is difficult to overcome, but two pieces of advice can be given: 1) Branches close to the boundary are difficult to detect → increase the size of X well beyond the domain of interest. 2) Branches are too close to each other → either restrict the domain of gradient start generation to that part, or use a parameter sweep in those congested areas.
The sp model shows that in high dimensions the chosen parallelization scheme, namely that we run independent runs with pre-divided solution candidate point sets is optimal. If we compare runs with N =200 and N =1600 for large number of ng we can see, that the difference in computation time is about 10% which is spent on solution following. Thus the gradient part (the solution candidate point search) accounts for 98 − 88% of the computation time. So the part that might be calculated multiple times account only for 2 − 12% which is much better than the general gain in direct parallelization.
5
Conclusion
In this paper we introduced a new algorithm capable of solving non-linear boundary value problems used in engineering fields. The computation time gain compared to previous algorithms can be as large as 10^4 in d = 5. The same (or even larger) gain is achieved in memory usage, which makes the code fit on clusters of virtual computers having little memory. The independent parallelization scheme enables the code to be run in Grid and stand-alone environments. All these factors make the new concept much more flexible. Acknowledgment. Part of this work was funded by the Péter Pázmány program RET-06/2005 of the National Office for Research and Technology and the EGEE-III project (INFSO-RI-222667).
References 1. Bangash, M.Y.H.: Manual of Numerical Methods in Concrete. Thomas Telford, London (2001) 2. Domokos, G., G´ asp´ ar, Z., Szeber´enyi, I.: CAMES 4(1), 55–68 (1997) 3. Domokos, G., Sipos, A., V´ arkonyi, P.: Szerkezet-tervez´es az interneten: egy g´epi ´ ıt´es-Ep´ ´ ıt´eszettudom´ algoritmus gyors´ıt´ asi lehet˝ os´egei. Ep´ any 34(3-4), 271–291 (2006) 4. Domokos, G., Szeber´enyi, I.: A Hybrid Parallel Approach to One-Parameter Nonlinear Boundary Value Problems. Computer Assisted Mechnics and Engineering Sciences 11, 15–34 (2004) 5. Garey, M.R., Johnson, D.S.: Computers and Intractability: A Guide to the Theory of NP-Completeness. W.H. Freeman, New York (1979) 6. Pasztuhov, D., T¨ or¨ ok, J.: A Gradient Hybrid Parallel Algorithm to One-Parameter Nonlinear Boundary Value Problems. In: Lirkov, I., Margenov, S., Wa´sniewski, J. (eds.) LSSC 2007. LNCS, vol. 4818, pp. 500–507. Springer, Heidelberg (2008) 7. Sipos, A., Domokos, G.: Assymetrical, spatial deformations of reinforced concrete columns and prestressed beams. In: Bal´ azs, G.L., Borosny´ oi, A. (eds.) Proc. FIB Symposium on “Keep concrete active”, vol. II, pp. 693–699. Publishing Co. of BUTE, Budapest (2005) 8. Szeber´enyi, I.: PVM Implementation of Boundary Value Problems on Heterogeneous Parallel Systems. Periodica Polytechnica Electrical Engineering 44(3-4), 317 (2000) 9. Szeber´enyi, I.: Parallel Algorithms for Solving Nonlinear Boundary Value Problems, PhD thesis, BME (2003)
Structurally Stable Numerical Schemes for Applied Dynamical Models
Roumen Anguelov
Department of Mathematics and Applied Mathematics, University of Pretoria
Institute of Mathematics and Informatics, Bulgarian Academy of Sciences
[email protected]
Abstract. The paper deals with the construction of reliable numerical discretizations of continuous dynamical systems arising as models for different natural phenomena, with a focus on schemes which correctly replicate the properties of the original dynamical systems. The work is based on the new concept of topological dynamic consistency, which describes in precise terms the alignment of the properties of the discrete dynamical system and the approximated continuous dynamical system. The derivation of structurally stable numerical schemes via the nonstandard finite difference method is also demonstrated.
1
Introduction
Let W be an open subset of Rd, d ≥ 1. Consider the initial value problem

dy/dt = f(y),   (1)
y(0) = x,   (2)

where x ∈ W and f ∈ C0(W, W). We assume that (1) defines a (positive) dynamical system on D ⊂ W. This means that for every x ∈ D the problem (1)–(2) has a unique solution y = y(x, t) ∈ D for all t ∈ [0, ∞). In a typical setting D is a compact subset of Rd, but we do not need to make such an assumption upfront. For a given t ∈ (0, ∞), the mapping S(t) : D → D given by S(t)(x) = y(x, t) is called the evolution operator and the set

{S(t) : t ∈ (0, ∞)}   (3)

is the well-known evolution semi-group. For every x ∈ D the set {S(t)(x) : t ∈ (0, ∞)} is called the (positive) orbit of x. Suppose that the solution of (1)–(2) is approximated on the time grid {tk = kh : k = 0, 1, ...}, where h is the time step, by a difference equation of the form

yk+1 = F(h)(yk),   (4)
y0 = x,   (5)
where the maps F(h) : D → D are defined for every h > 0. Hence, for every given h > 0, equation (4) defines a discrete dynamical system with an evolution
semi-group {(F (h))k : k = 1, 2, ...}. The orbit of a point x ∈ D is the sequence {(F (h))k (x) : k = 0, 1, 2, ...}. The main aim of our investigation is the alignment of the properties of the systems (1)–(2) and (4)–(5). In this conceptual setting the structural stability of the evolution operators (3) of the original system and the evolution maps of its discretization, that is, {F (h) : h ∈ (0, ∞)} (6) plays an important role. More precisely, it is a typical setting in Numerical Analysis to solve a well-posed problem by a stable numerical method. This, in particular, means that small changes of the data change neither the solution of the original problem nor the approximate solution by the numerical method in significant ways. Therefore, looking at the problem and its discretization as dynamical systems one would require by analogy that small changes of the data do not affect in a significant way the properties of these dynamical systems. This, in essence, means that they both need to be structurally stable. Definition 1. Let V be a topological space of maps from X to X. A map f ∈ V is called V structurally stable if there exists a neighborhood U of f in the topology of V such that every map g ∈ U is topologically equivalent to f , that is, there exists a homeomorphism μ : X → X such that f ◦ μ = μ ◦ g.
(7)
In the general theory of topological dynamics the space V is typically C 1 (X, X). For models in practical applications this space is not always applicable. Hence the generalization given in Definition 1 is essential. Structural stability for flows is defined in a similar way where topological equivalency is replaced by orbit equivalency. It is important to note the different nature of the dynamical systems defined by (1) and (4). The first is a continuous dynamical system (a flow), while the second one is a discrete dynamical system. Most of the concepts of dynamical systems are defined for both continuous and discrete system but very often, as it is the case with structural stability as well, the meaning is different. In particular let us note that even if a flow is structurally stable the maps S(t1 ) and S(t2 ) are not necessarily topologically equivalent when t1 = t2 . To simplify the matters and enhance the main ideas, we assume here that equation (1) is such that S(t), t > 0, are all topologically equivalent to each other. This is for example the case of Morse-Smale flows with fixed points only. In this setting the requirement that the flow defined by (1) is structurally stable can be replaced with the stronger requirement that the maps S(t), t > 0, are structurally stable. The reliability of the method (4) is in terms of correctly replicating the properties of the dynamical system it approximates.
2
Topological Dynamic Consistency
The concept of topological dynamic consistency, introduced in [4], describes in precise terms the alignment of the properties of the discrete dynamical system and the approximated continuous dynamical system. This development is
based on the important observation that the “dynamics” of dynamical systems, e.g. fixed points, periodic orbits, non-wandering sets, and the way these final states are approached, are topological properties. The similarity of a continuous dynamical system with the associated numerical methods, viewed as discrete dynamical systems, is in the sense of topological equivalence between the corresponding evolution operators. Definition 2. The difference scheme (4) is called topologically dynamically consistent with the dynamical system (1), whenever all the maps in the set (3) are topologically equivalent to each other and every map in the set (6) is topologically equivalent to them. (Thus, the maps S(t) and F (h) are topologically the same for every t > 0 and h > 0). The next theorem provides a straightforward way of proving topological dynamic consistency via the structural stability of the evolution operators of the original system and its numerical method. Theorem 1. Let V be a topological space of maps from D to D such that it contains the sets (3) and (6) and the mappings S : (0, ∞) → V and F : (0, ∞) → V are both continuous. Let also the following conditions hold: (i) for each t > 0 the map S(t) is V structurally stable; (ii) for each h > 0 the map F (h) is V structurally stable; (iii) there exists h > 0 such that S(h) and F (h) are topologically equivalent. Then the numerical method (4) is topologically dynamically consistent with the dynamical system (1). Proof. The topological equivalence of maps is an equivalence relation in the space V. Thus, we need to prove that maps in the two sets (3) and (6) belong to the same equivalence class. Let us see first that condition (i) implies that the maps S(t), t > 0, are all topologically equivalent. Let 0 < t1 < t2 . From the structural stability of the maps S(t), t > 0, it follows that for every t ∈ [t1 , t2 ] there exists a neighborhood Wt of S(t) in V such that S(t) ∼ g for every g ∈ Wt . Since the map S : (0, ∞) → V is continuous, for every t ∈ (0, ∞) there exists δt such that S(τ ) ∈ Wt whenever |τ − t| < δt . Then t∈[t1 ,t2 ] (t − δt , t + δt ) is an open cover of [t1 , t2 ]. By the compactness of [t1 , t2 ] it follows that there exists a finite set {τ1 , τ2 , ..., τk } such that [t1 , t2 ] ⊂
∪_{i=1}^{k} (τi − δτi, τi + δτi).
Without loss of generality we may assume that τ1 = t1, τk = t2 and that the set is arranged in increasing order, that is, τi < τi+1. For an arbitrary pair τi, τi+1 there exists τi+1/2 such that τi+1/2 ∈ (τi − δτi, τi + δτi) ∩ (τi+1 − δτi+1, τi+1 + δτi+1). Therefore S(τi+1/2) ∈ Wτi ∩ Wτi+1, which implies that S(τi) ∼ S(τi+1). Then, using induction, we obtain that S(τ1) ∼ S(τk), or equivalently S(t1) ∼ S(t2). Hence all maps S(t), t > 0, are topologically equivalent to each other.
Using a similar argument we obtain from condition (ii) that all maps F (h), h > 0 are topologically equivalent to each other. The statement of the theorem then follows from (iii). Remark 1. Properties (i) and (ii) in Theorem 1 are attributes of the problem and the method respectively. Property (iii) provides a link between the problem and the method which may be considered an analogue of the consistency of the scheme since the value of h in (iii) is typically unknown and is described as sufficiently small. This property is established for a variety of numerical methods in the works of B.M. Garay and M-C Li, see [5,8]. In all cases the topological equivalence is established for sufficiently small h. We should emphasize the fact that Theorem 1 provides for topological equivalence of the respective evolution operators for all values of h and t, thus implying the topological dynamic consistency of the scheme. The main ingredient in Theorem 1 is the structural stability of the involved maps. Deriving sufficient conditions for structural stability of flows and diffeomorphisms is one of the greatest achievements of Topological Dynamics with main contributions from Anosov, Moser, Palis, Smale, Robin, and Robinson, [7]. The current form of the structural stability theorem, proved by Robinson, [12,13], states that flows and diffeomorphisms on a compact manifold without boundary which satisfy Axiom A and the strong transversality condition are structurally stable. In particular, Morse-Smale flows and diffeomorphisms on a compact manifold are structurally stable, see [14, Theorem 12.2]. In all these theorems the structural stability is considered with respect to the space C 1 (X, X). Hence, and as mentioned in the Introduction, their practical application could be rather limited. For example, the evolution operator of the epidemiological model considered in the next section is not C 1 structurally stable since it has an invariant set on the boundary and a fixed point which is not necessarily hyperbolic. Our approach in such cases is to design for the given model an appropriate space V so that the respective evolution operator is structurally stable. Then we construct a discretization such that the evolution operator of the numerical scheme is in V and is also V structurally stable. Then the topological dynamic consistency of the scheme follows from Theorem 1.
3
SEIR(→S) Model
As a model example for the application of the theory derived so far we consider a basic compartmental model for the spread of an infectious disease in a given population. The course of the disease is schematically represented as S −→ E −→ I −→ R ( −→ S), where S denotes the number of susceptible individuals, E — the number of exposed (carriers which are not yet infective), I — the number of infectives, and R — the number of recovered with immunity. The following mathematical
model is derived in [16, Chapter 21] as a system of differential equations for the fractions of the respective classes in the total population N, that is, u = S/N, x = E/N, y = I/N, z = R/N:

du/dt = ν(1 − u) − uy + ηz,
dx/dt = uy − (ξ + ν)x,
dy/dt = ξx − (θ + ν)y,   (8)
dz/dt = θy − (η + ν)z.

The time is scaled in such a way that the coefficient of the nonlinear term uy representing the mass action principle for the spread of the infection equals one. The nonnegative constants ξ, θ, and η model the transfer rates between the respective compartments, while ν is linked to the life expectancy under the assumption of constant population. The system of ODEs (8) defines a dynamical system on the three-dimensional closed simplex G = {(u, x, y, z) : u ≥ 0, x ≥ 0, y ≥ 0, z ≥ 0, u + x + y + z = 1}. Hence, one can eliminate one of the variables, e.g. u, and obtain the system in the following equivalent form:

dx/dt = (1 − x − y − z)y − (ξ + ν)x,
dy/dt = ξx − (θ + ν)y,   (9)
dz/dt = θy − (η + ν)z,

where (9) defines a dynamical system on the compact domain D = {(x, y, z) : x ≥ 0, y ≥ 0, z ≥ 0, x + y + z ≤ 1}. The point (0, 0, 0) is always an equilibrium of (9). This is the Disease Free Equilibrium (DFE). The system may have another equilibrium, namely,

ze = θν(R0 − 1)/(ν + η − ηθR0),   ye = ((ν + η)/θ) ze,   xe = ((ν + θ)/ξ) ye,   (10)

where R0 = ξ/((ν + ξ)(ν + θ)) is the basic replacement ratio. The point in (10) is an equilibrium of the dynamical system (9) whenever it belongs to its domain D, that is, when R0 > 1. It is called an Endemic Equilibrium (EE) since it describes a permanent presence of the disease. Our concern here are the properties of the
dynamical system (9). It was proved in [16, Theorem 21.2] that: • If R0 ≤ 1 then DFE is globally asymptotically stable on D.
(11)
• If R0 > 1 then DFE is a hyperbolic saddle point with stable manifold Γ = {(x, y, z) ∈ D : x = y = 0}, EE is stable and attracting with basin of attraction D \ Γ .
(12)
It is easy to see that the maps S(t) for the dynamical system (9) are not C1 structurally stable due to the fact that they have a fixed point and an orbit on the boundary of the domain D. In order to obtain structural stability for S(t) we consider the following smaller space:

V = { g : D → D :  1) g : D → g(D) is a diffeomorphism,  2) DFE is a fixed point of g,  3) Γ is invariant and in the stable manifold of DFE }.

Note that all the maps in V have the "unstable" features of S(t), which leads to the following theorem.

Theorem 2. For every t > 0 the evolution operator S(t) is V structurally stable provided

(ν + ξ + θ)(ν + η) > θξ.   (13)

The proof is carried out essentially by the same method as the proof of [6, Theorem 16.3.1], where the structural stability of flows with a single globally attractive fixed point is established. Condition (13) ensures that whenever EE ∈ D \ Γ, its basin of attraction is D \ Γ, see [16, Theorem 21.11]. Hence it can be replaced by any other condition implying the said property, e.g. η = 0, see [16, Theorem 21.12].
4
V-Structurally Stable Scheme by the Nonstandard Finite Difference Method
Traditionally the main concerns of Numerical Analysis are the stability, convergence and rate of convergence of numerical methods. While the importance of convergence is not in doubt, it often happens that essential properties of the approximated models are lost in the discretization of the differential equations. These may include the physical laws (e.g. conservation of mass, energy or momentum) used to construct the differential models. The main contribution of the Non-Standard Finite Difference Method to the field of Numerical Analysis is the conceptual base and the tools for preserving essential physical properties of the models, [9,10,3]. There has been a considerable effort in recent years to construct numerical procedures which correctly replicate the properties of the original dynamical system by using the non-standard finite difference method. In fact the concept of dynamic consistency, which was made precise in the recent works of the author and his collaborators, originally appeared in the context of
this method, [1,11]. The following scheme, which uses nonlocal approximation of the nonlinear term, is crafted in such a way that the operator F(h) is in V and it is also V structurally stable:

(uk+1 − uk)/h = −uk+1 yk + ν xk+1 + ν yk+1 + (η + ν) zk+1,
(xk+1 − xk)/h = uk+1 yk − (ξ + ν) xk+1,
(yk+1 − yk)/h = ξ xk+1 − (θ + ν) yk+1,   (14)
(zk+1 − zk)/h = θ yk+1 − (η + ν) zk+1.

Let us note that we discretize the four-equation form (8) of the dynamical system since in this way the method is more convenient for both implementation and theoretical analysis. This does not increase the computational complexity since the values of uk+1 need to be calculated anyway. The method is implicit but any time step involves only the solution of a linear system

C(yk) (uk+1, xk+1, yk+1, zk+1)^T = (uk, xk, yk, zk)^T,   (15)

where

          | 1 + h yk    −hν             −hν              −h(η + ν)     |
C(yk) =   | −h yk       1 + h(ξ + ν)     0                0            |
          | 0          −hξ               1 + h(θ + ν)     0            |
          | 0           0               −hθ               1 + h(η + ν) |
From this formulation it is easy to see that starting from an initial condition in G the approximations remain nonnegative for all k = 1, 2, ... and that uk+1 + xk+1 + yk+1 + zk+1 = uk + xk + yk + zk = 1.
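One time step of the scheme can be sketched as below: the matrix C(yk) of (15) is assembled and the linear system is solved. The parameter values, the step size and the initial state are illustrative assumptions only, chosen to mimic the qualitative behaviour discussed in the text.

```python
import numpy as np

def nsfd_step(u, x, y, z, h, nu, xi, theta, eta):
    """One step of the nonstandard scheme (14): solve C(y_k) w_{k+1} = w_k, cf. (15)."""
    C = np.array([
        [1 + h * y, -h * nu,            -h * nu,              -h * (eta + nu)],
        [-h * y,     1 + h * (xi + nu),  0.0,                  0.0],
        [0.0,       -h * xi,             1 + h * (theta + nu), 0.0],
        [0.0,        0.0,               -h * theta,            1 + h * (eta + nu)],
    ])
    return np.linalg.solve(C, np.array([u, x, y, z]))

# illustrative parameters and an initial state on the simplex G
nu, xi, theta, eta, h = 0.02, 0.5, 0.3, 0.1, 0.1
state = np.array([0.9, 0.05, 0.05, 0.0])           # (u, x, y, z), sums to 1
for _ in range(200):
    state = nsfd_step(*state, h, nu, xi, theta, eta)
print(state, state.sum())                           # stays nonnegative, sum stays 1
```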
Fig. 1. Approximate solution, R0 = 3.9
Fig. 2. Approximate solution, R0 = 0.93
Hence the scheme (14) defines a discrete dynamical system on G. Therefore, the respective difference operator F(h) for the system (9) maps D onto D. Further, one can see that DFE and EE are preserved together with their stability. Therefore, F(h) ∈ V. The proof of the V structural stability of F(h) is rather technical, but it uses arguments similar to those in Theorem 2. The approximate solutions for two sets of constants are presented in Figs. 1 and 2. In Fig. 1 we have R0 = 3.9 > 1, while in Fig. 2 R0 = 0.93 < 1, and one may observe that the properties of the exact solutions in (11) and (12) are correctly replicated. We note that it may happen that standard methods also preserve the stated properties. However, in general, this cannot be guaranteed, or at least cannot be guaranteed for all step sizes. Examples to that effect for similar systems can be found in [2,4]. See also [3,9,10] for a general discussion on the issue.
References 1. Al-Kahby, H., Dannan, F., Elaydi, S.: Non-standard discretization methods for some biological models. In: Mickens, R.E. (ed.) Applications of nonstandard finite difference schemes, pp. 155–180. World Scientific, Singapore (2000) 2. Anguelov, R.: Dynamically Consistent Schemes for Epidemiological Models. In: Proceedings of BGSIAM 2008, Demetra, Sofia, pp. 11–14 (2009) 3. Anguelov, R., Lubuma, J.M.-S.: Contributions to the mathematics of the nonstandard finite difference method and applications. Numerical Methods for Partial Differential Equations 17, 518–543 (2001) 4. Anguelov, R., Lubuma, J.M.-S., Shillor, M.: Dynamically consistent nonstandard finite difference schemes for continuous dynamical systems. In: AIMS Proceedings (to appear) 5. Garay, B.M.: On structural stability of ordinary differential equations with respect to discretization methods. Numerische Mathematik 72, 449–479 (1996) 6. Hirsch, M.W., Smale, S.: Differential equations, dynamical systems and linear algebra. Academic Press, New York (1974) 7. Katok, A., Hasselblatt, B.: Introduction to the Modern Theory of Dynamical Systems. Cambridge University Press, New York (1995) 8. Li, M.-C.: Structural stability for the Euler method. SIAM Journal of Mathematical Analysis 30, 747–755 (1999) 9. Mickens, R.E. (ed.): Applications of Nonstandard Finite Difference Schemes. World Scientific, Singapore (2000) 10. Mickens, R.E. (ed.): Advances in the applications of nonstandard finite difference schemes, pp. 1–9. World Scientific, Singapore (2005) 11. Mickens, R.E.: Discrete models of differential equations: the roles of dynamic consistency and positivity. In: Allen, L.J.S., Aulbach, B., Elaydi, S., Sacker, R. (eds.) Difference equations and discrete dynamical systems, pp. 51–70. World Scientific, Singapore (2005) 12. Robinson, C.: Structural stability of C 1 diffeomorphisms. Journal of Differentrial Equations 22, 238–265 (1976) 13. Robinson, C.: Structural stability on manifolds with boundary. Journal of Differentrial Equations 37, 1–11 (1980)
14. Robinson, C.: Dynamical systems: stability symbolic dynamics and chaos, 2nd edn. CRC Press, Boca Raton (1999) 15. Stuart, A.M., Humphries, A.R.: Dynamical systems and numerical analysis. Cambridge University Press, New York (1998) 16. Thieme, H.R.: Mathematics of population biology. Princeton University Press, Princeton (2003)
Matrix and Discrete Maximum Principles
István Faragó
Eötvös Loránd University, Pázmány P. s. 1/c, 1117 Budapest, Hungary
Abstract. Qualitative properties play a central role in constructing reliable numerical models for parabolic problems. One such basic property is the discrete maximum principle. In this paper we analyze its relation to the so-called matrix maximum principles. We analyze the different matrix maximum principles (the Ciarlet, Stoyan and Ciarlet-Stoyan maximum principles) and their relations. Introducing the iterative algebraic problem (IAP), we show that the discrete maximum principles for discrete parabolic problems are more general than the algebraic maximum principles. We also analyze and compare the conditions which ensure the above qualitative properties. Keywords: Heat equation, qualitative properties, discrete maximum principle, linear finite element, finite difference method. Subject Classifications: 65N06, 65N30.
1
Introduction
When we construct mathematical and/or numerical models in order to describe or solve real-life problems, these models should have certain qualitative properties which typically arise from some basic principles of the modelled phenomena. In other words, it is important to preserve the characteristic properties of the original process, i.e., the models have to possess the equivalents of these properties. In this paper we investigate the parabolic problem

C ∂u/∂t − ∇(κ∇u) = f  on QT = Ω × [0, T],   (1)
u|ΓT = g,   (2)
(2)
where C : Ω → IR is a given bounded function with the property 0 < Cmin ≤ C ≡ C(x) ≤ Cmax , the known bounded function κ : Ω → IR has continuous first derivatives and fulfills the property 0 < κmin ≤ κ ≡ κ(x) ≤ κmax , the function g : ΓT → IR is continuous on ΓT , the function f : QT → IR is bounded in QT , ∇
The work was supported by the Hungarian Research Grant OTKA K 67819 and by Project HS-MI-106/2005 of NSF of Bulgaria.
denotes the usual nabla operator, and the solution u is sought in C 2,1 (QT ∪ ΓT ). (Later we use the notation u∂ (x, t) := g(x, t) for x ∈ ∂Ω and u0 (x) := g(x, 0) for x ∈ Ω.) The problem (1)-(2) is generally applied for the description of heat conduction processes. The variables x and t play the role of the space and time variables, respectively. Problem (1)-(2) is suitable to describe diffusion and transport phenomena.
2
The Discrete Heat Problem and the Iterative Algebraic Problems
In order to construct the numerical model, we introduce a mesh where we denote the interior mesh points by P1 , . . . , PN ∈ Ω, and the points on the boundary by PN +1 , . . . , PN +N∂ ∈ ∂Ω, respectively. Using the finite difference or finite element method for the space discretization and the θ-method with the step-size Δt for the time discretization, we arrive at the one-step iterative method of the form (M + θΔtK)u(n) = (M − (1 − θ)ΔtK)u(n−1) + Δt f (n,θ)
(3)
where M and K are known matrices and θ ∈ [0, 1] is a fixed given parameter. In the sequel, we denote M + θΔtK and M − (1 − θ)ΔtK by A and B, respectively, and we use the following partitions of the matrices and vectors:

A = [A0 | A∂],   B = [B0 | B∂],   u^(n) = (u0^(n), u∂^(n))^T,

where A0 and B0 are square matrices from IR^{N×N}; A∂, B∂ ∈ IR^{N×N∂}; u0^(n) ∈ IR^N denotes the unknown approximation and u∂^(n) ∈ IR^{N∂} is known from the boundary condition, i.e., from the given function g. Then, iteration (3) can be written as

[A0 | A∂] (u0^(n), u∂^(n))^T = [B0 | B∂] (u0^(n−1), u∂^(n−1))^T + f^(n).   (4)
The relation (4) defines a one-step iterative process

A0 u0^(n) = B0 u0^(n−1) + B∂ u∂^(n−1) − A∂ u∂^(n) + f^(n),   (5)

which is defined on the set of the given vectors

H = {u0^(n−1), f^(n) ∈ IR^N;  u∂^(n−1), u∂^(n) ∈ IR^{N∂}},   (6)
and we want to find the only unknown vector u0 . We will refer to the problem (5) as iterative algebraic problem (IAP). Similarly to the continuous problem, the preservation of the qualitative properties in the IAP is a crucial task too. To this aim, we introduce the following definitions. (For more details, we refer to [2].)
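A minimal sketch of one step of the IAP (5) is given below. The matrices come from a crude one-dimensional finite-difference discretization with a lumped (identity) mass matrix and unit mesh size; these are illustrative assumptions, not the discretizations analyzed in the paper, and serve only to show how the partitioned matrices enter the iteration.

```python
import numpy as np

def iap_step(A0, A_b, B0, B_b, u0_prev, ub_prev, ub_new, f_new):
    """One step of (5): A0 u0_new = B0 u0_prev + B_b ub_prev - A_b ub_new + f_new."""
    rhs = B0 @ u0_prev + B_b @ ub_prev - A_b @ ub_new + f_new
    return np.linalg.solve(A0, rhs)

# illustrative data: 1-D heat equation with N interior and 2 boundary nodes,
# lumped mass matrix (M = I), second differences for K, theta-method in time
N, dt, theta = 5, 0.01, 0.5
K = 2.0 * np.eye(N) - np.eye(N, k=1) - np.eye(N, k=-1)        # interior block
K_b = np.zeros((N, 2)); K_b[0, 0] = K_b[-1, 1] = -1.0          # boundary columns
A0, A_b = np.eye(N) + theta * dt * K, theta * dt * K_b
B0, B_b = np.eye(N) - (1 - theta) * dt * K, -(1 - theta) * dt * K_b
u0 = np.zeros(N); ub = np.array([1.0, 0.0])                    # initial and boundary data
u0 = iap_step(A0, A_b, B0, B_b, u0, ub, ub, np.zeros(N))
print(u0)
```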
Definition 1. We say that the IAP (5) is discrete non-negativity preserving (DNP) if the implication (n−1)
f (n) ≥ 0; u∂
(n)
≥ 0; u∂
(n−1)
≥ 0 and u0
(n)
≥ 0 ⇒ u0
≥0
(7)
is true. Definition 2. We say that the IAP (5) satisfies the discrete weak boundary maximum principle (DWBMP) when the implication (n)
f (n) ≤ 0 ⇒ max u0
(n−1)
≤ max{0, u0
(n−1)
, u∂
(n)
, u∂ }
(8)
is true. Definition 3. We say that the IAP (5) satisfies the discrete strong boundary maximum principle (DSBMP), when the implication (n)
f (n) ≤ 0 ⇒ max u0
(n−1)
≤ max{u0
(n−1)
, u∂
(n)
, u∂ }
(9)
is true. In the following we analyze the relation of these qualitative properties to the properties defined for the partitioned matrices.
3
Matrix Maximum Principles and Their Relations
In the literature, for partitioned matrices with a certain structure some qualitative properties have been introduced (e.g., [1,3,6]). We consider the block-matrix H ∈ IRk×k and the block-vector y ∈ IRk in the form H1 H2 y1 H= , y= , (10) 0 I y2 where submatrices H1 ∈ IRk1 ×k1 , I ∈ IRk2 ×k2 , H2 ∈ IRk1 ×k2 , 0 ∈ IRk2 ×k1 , y1 ∈ IRk1 and y2 ∈ IRk2 with k = k1 + k2 . In the sequel, for arbitrary vectors v, w ∈ IRk , we will use the following notations: max{v} := max{v1 , v2 , . . . , vk }, max{0, v} := max{0, max{v}}.
(11)
According to the papers [1,5,6], we introduce some definitions. Definition 4. We say that the matrix H satisfies the Ciarlet matrix maximum principle (CMMP) if for arbitrary vectors y1 ∈ IRk1 and y2 ∈ IRk2 , such that H1 y1 + H2 y2 ≤ 0, the inequality max{y1 } ≤ max{0, y2 } holds. Definition 5. We say that the matrix H satisfies the Stoyan matrix maximum principle (SMMP) if for arbitrary vectors y1 ∈ IRk1 and y2 ∈ IRk2 , such that H1 y1 + H2 y2 = 0 and y2 ≥ 0, the inequality max{y1 } ≤ max{y2 } holds.
566
I. Farag´ o
The above definitions give information about the location of the maximum components of the unknown vector y ∈ IRk , using some a priori information: for the CMMP the non-negative maximum, while for the SMMP the maximum is taken over the last indices i = k1 + 1, k1 + 2, . . . , k, i.e., on the sub-vector y2 . Remark 1. If a matrix H satisfies the CMMP, then it is necessarily regular. To show this, it is enough to prove that the relation H1 y1 = 0 implies that y1 = 0. By the choice y2 = 0, the application of the CMMP implies the inequality y1 ≤ 0. Repeating this argument for −y1 , we obtain −y1 ≤ 0, which shows the validity of the required equality. The similar statement holds for the SMMP, too. The CMMP and SMMP properties can be guaranteed in the following way. Theorem 1 ([1]). The matrix H satisfies the CMMP if and only if the following two matrix conditions hold: (C1): H is monotone, i.e., H−1 =
−1 H−1 1 −H1 H2 0 I
≥ 0;
(12)
(C2): using the notation ekm ∈ IRkm (m = 1, 2) for the vectors with all coordinates equal to one, we have −H−1 1 H2 ek2 ≤ ek1 .
(13)
The condition (C2) can be relaxed by the following sufficient condition (C2’): the row sums of the matrix H are all non-negative, i.e., H1 ek1 + H2 ek2 ≥ 0.
(14)
Remark 2. The CMMP obviously implies the SMMP. Therefore, the above conditions also guarantee the SMMP property. It is worth mentioning that for the un-partitioned matrix H the monotonicity and the condition He ≥ 0 (which is, in fact, the analogue of the condition (14)) are necessary conditions for validity of the SMMP. We can combine the CMMP and the SMPP as follows: we require that under the CMMP condition the implication in the SMMP is true, i.e., we introduce Definition 6. We say that the matrix H satisfies the Ciarlet-Stoyan matrix maximum principle (CSMMP) if for arbitrary vectors y1 ∈ IRk1 and y2 ∈ IRk2 , such that H1 y1 + H2 y2 ≤ 0, the relation max{y1 } ≤ max{y2 } holds. Obviously, the CSMMP implies both the CMMP and SMMP properties. This property can be guaranteed by the following statement (cf. [4]). Lemma 1. A matrix H has the CSMMP property if and only if it is monotone and the condition (15) H1 ek1 + H2 ek2 = 0 holds.
Matrix and Discrete Maximum Principles
567
Proof. First we assume that H is monotone and (15) holds. Let y1 ∈ IRk1 and y2 ∈ IRk2 be arbitrary vectors with the property H1 y1 + H2 y2 ≤ 0. Since H is −1 monotone, therefore H−1 1 ≥ 0 and −H1 H2 ≥ 0 (cf. (12)). Therefore we have −1 −1 y1 ≤ −H−1 1 H2 y2 ≤ −H1 H2 (max{y2 }ek2 ) = −(max{y2 })H1 H2 ek2 .
(16)
Due to the assumption (15), the relation (16) implies that y1 ≤ (max{y2 })ek1 ,
(17)
which proves the first part of the statement. Now we assume that H has the CSMMP property. Since the CSMMP property implies the CMMP property, therefore, due to the Theorem 1, H is monotone. In order to show (15), first, let us put y1 = −H−1 1 H2 ek2 and y2 = ek2 . Since for this choice the CSMMP is applicable, we get the estimation max{−H−1 1 H2 ek2 } ≤ max{ek2 } = 1. Let us put now y1 = H−1 1 H2 ek2 and y2 = −ek2 . The CSMMP is again applicable and we get the estimation max{H−1 1 H2 ek2 } ≤ max{−ek2 } = −1. The above two estimations clearly result in the equality −H−1 1 H2 ek2 = ek1 , which yields the required relation.
4
MMP to Linear Algebraic Systems of Special Form
We use the notation b for the vector Hy ∈ IRk , i.e., we consider the system Hy = b. Hence, b has the partitioning
b=
b1 b2
(18)
,
(19)
where b1 ∈ IRk1 and b2 ∈ IRk2 , respectively. Moreover, let (n) (n) ¯f A −B u H= , y= , b= , (n−1) 0 I u u(n−1) with A=
A0 A∂ 0 I
, B=
B0 B∂ 0 I
, u
(n)
=
(n)
u0 (n) u∂
, ¯f (n) =
(20)
f (n) (n) f∂
. (21)
Here A0 , B0 ∈ IRN ×N , A∂ , B∂ ∈ IRN ×N∂ , u0 , f (n) ∈ IRN and u∂ , f∂ ∈ IRN∂ . ¯ ¯ ¯ Then in (18) H ∈ IR2N ×2N , y, b ∈ IR2N . The problem (18),(20),(21) is equivalent to the system of linear algebraic equations of the form (n)
Au(n) = Bu(n−1) + ¯f (n) . (n)
(n)
(n−1)
(n)
(n)
(22)
, then (22) becomes equivalent to (5). If we use the notation f∂ = u∂ − u∂ We will refer to this problem as the canonical algebraic problem (CAP). The qualitative properties of this problem can be defined as follows.
568
I. Farag´ o
Definition 7. We say that the CAP is non-negativity preserving, when b ≥ 0 results in the relation y ≥ 0, that is, the implication (n−1)
f (n) ≥ 0; u0
(n)
≥ 0 and u∂
(n−1)
≥ u∂
(n)
≥ 0 ⇒ u0
≥0
(23)
is true. Definition 8. We say that the CAP satisfies the Ciarlet maximum principle, when the matrix H has the CMMP property, i.e., the implication (n)
f (n) ≤ 0, u∂
(n−1)
≤ u∂
(n)
(n−1)
⇒ max{u0 } ≤ max{0, u0
(n−1)
, u∂
(n)
, u∂ } (24)
is true. Analogically, we introduce the following Definition 9. We say that the CAP satisfies the Stoyan maximum principle, when the correspondig matrix H by (20),(21), has the SMMP property, which yields that the implication (n−1)
f (n) = 0, u0 ⇒
(n) max{u0 }
≤
(n)
≥ 0, u∂
(n−1)
= u∂
≥0
(n−1) (n−1) (n) max{u0 , u∂ , u∂ }
(25)
is true. It follows from the definitions that the validity of the Ciarlet maximum principle implies the validity of the Stoyan maximum principle. First we investigate the non-negativity preservation property of the CAP. Since, according to (12), −1 −1 A A B H−1 = , (26) 0 I we need the monotonicity of the matrix H, which is valid if and only if the following conditions hold: A−1 ≥ 0;
A−1 B ≥ 0.
(27)
(This yields that the matrices A and B must form a weak regular splitting of the matrix A − B.) Using (21), we get that (27) is valid if and only if the relations A−1 0 ≥ 0,
−A−1 0 A∂ ≥ 0,
A−1 0 B0 ≥ 0,
A−1 0 (B∂ − A∂ ) ≥ 0
(28)
are true. Hence, the following statement is true. Lemma 2. The CAP is non-negativity preserving if and only if (28) is true. We pass to the investigation of the Ciarlet maximum principle property of the CAP. Due to Theorem 1, it is sufficient to require the monotonicity of the matrix H and the relation (A − B)e ≥ 0. (29)
Matrix and Discrete Maximum Principles
569
Substituting (21), the inequality (29) yields the condition (A0 − B0 )e0 + (A∂ − B∂ )e∂ ≥ 0.
(30)
Hence we get Lemma 3. The CAP satisfies both the Ciarlet and Stoyan maximum principle properties if the conditions (28) and (30) are satisfied.
5
Qualitative Properties of IAP
The discretization (by using finite element/difference method) results in a CAP (n) (n) (n−1) with notation f∂ = u∂ − u∂ . (This means that f∂ is not a free parameter in the CAP.) In the sequel, we analyze the relation between the qualitative properties of the CAP and the IAP. We define some subsets of H in (6) (n−1)
H+ = {f (n) ≥ 0, u0 M H+
= {f
(n)
≥ 0,
(n−1)
≥ 0, u∂
(n)
(n) u∂
(n−1) u∂
≥ 0, u∂
(n−1) u0
≥ 0,
≥
C HDBMP = {f (n) ≤ 0}, HDW BMP = HDBMP S HDSBMP = {f (n) = 0, u0
(n−1)
(31) ≥ 0},
(n−1)
{ u∂
(n−1)
≥ 0, u∂
≥ 0},
(n)
= u∂
(n)
≥ u∂ },
(32)
≥ 0}.
(33)
The inclusions M C S H+ ⊃ H+ ; HDBMP ⊃ HDW BMP ; HDBMP ⊃ HDSBMP
(34)
are obvious. For the CAP the different qualitative properties (non-negativity preservation, Ciarlet maximum principle, Stoyan maximum principle) are deM S C , HDSBMP , and HDSBMP , respectively. For the IAP, fined on the sub-sets H+ the corresponding qualitative properties (non-negativity preservation, DWBMP, DSBMP) are defined on the wider subsets H+ and HDBMP , respectively. Moreover, if some qualitative property is guaranteed for the IAP, then the corresponding CAP also possesses this qualitative property on the smaller subset, where it is defined. Hence, from the definitions we have Theorem 2. Using the notations f (x, t) for the source (forcing) function, u0 (x) for the initial function and u∂ (x, t) for the boundary function, Table 1 shows the applicability of IAP and CAP to the definition of the different qualitative properties in the initial first boundary value problem for the time-dependent linear PDE. We may observe that the applicability of the Ciarlet and Stoyan maximum principles is rather restrictive: the first one can be applied only for a problem with sign-determined source function and to boundary conditions decreasing in time
570
I. Farag´ o
Table 1. Conditions for the given data of the continuous problem providing different qualitative properties
DNP NPCAP DWBMP Ciarlet DSBMP Stoyan
f u0 u∂ non-negative non-negative non-negative non-negative non-negative non-negative and time-decreasing non-positive any any non-positive any time-decreasing non-positive any any zero non-negative non-negative and time-independent
(e.g., time-independent). The Stoyan maximum principle gives some information about the maximum (minimum) only for a homogeneous equation with non-negative initial and time-independent, non-negative boundary conditions. If one of the above conditions does not hold, we must apply another principle. We can also compare the conditions which guarantee the qualitative properties. Definition 1 yields that the IAP (5) has DNP property if and only if the conditions A−1 0 ≥ 0;
A−1 0 B∂ ≥ 0,
A−1 0 B0 ≥ 0,
−A−1 0 A∂ ≥ 0
(35)
are satisfied. This yields that the conditions (35) imply the conditions (28), i.e., the non-negativity preservation property of the IAP implies the non-negativity preservation property of the CAP. Similar results can be shown for the different discrete maximum principles, too.
References 1. Ciarlet, P.G.: Discrete maximum principle for finite-difference operators. Aequationes Math. 4, 338–352 (1970) 2. Farag´ o, I., Horv´ ath, R.: Discrete maximum principle and adequate discretizations of linear parabolic problems. SIAM Sci. Comput. 28, 2313–2336 (2006) 3. Fujii, H.: Some remarks on finite element analysis of time-dependent field problems. In: Theory and practice in finite element structural analysis, pp. 91–106. Univ. Tokyo Press, Tokyo (1973) 4. Kar´ atson, J., Korotov, S.: Discrete maximum principles in finite element solutions of nonlinear problems with mixed boundary conditions. Numer. Math. 99, 669–698 (2005) 5. Stoyan, G.: On a maximum principle for matrices and on conservation of monotonicity with applications to discretization methods. Z. Angew. Math. Mech. 62, 375–381 (1982) 6. Stoyan, G.: On maximum principles for monotone matrices. Lin. Alg. Appl. 78, 147–161 (1986)
On a Bisection Algorithm That Produces Conforming Locally Refined Simplicial Meshes Antti Hannukainen1 , Sergey Korotov2, and Michal Kˇr´ıˇzek3 1
2
Institute of Mathematics, Helsinki University of Technology P.O. Box 1100, FI–02015 Espoo, Finland
[email protected] Department of Mathematics, Tampere University of Technology P.O. Box 553, FI–33101 Tampere, Finland
[email protected] 3 Institute of Mathematics, Academy of Sciences ˇ a 25, CZ–115 67 Prague 1, Czech Republic Zitn´
[email protected]
Abstract. First we introduce a mesh density function that is used to define a criterion to decide, where a simplicial mesh should be fine (dense) and where it should be coarse. Further, we propose a new bisection algorithm that chooses for bisection an edge in a given mesh associated with the maximum value of the criterion function. Dividing this edge at its midpoint, we correspondingly bisect all simplices sharing this edge. Repeating this process, we construct a sequence of conforming nested simplicial meshes whose shape is determined by the mesh density function. We prove that the corresponding mesh size of the sequence tends to zero for d = 2, 3 as the bisection algorithm proceeds. It is also demonstrated numerically that the algorithm seems to produce only a finite number of similarity-distinct triangles.
1
Introduction
Bisection algorithms which are very convenient for refining simplicial partitions were originally used for solving nonlinear equations [4,18]. Various properties of partitions generated by such algorithms were proved in a number of works in the 70-th [6,17,19,20]. Later, in the eighties, mainly due to efforts of M. C. Rivara, bisection-type algorithms became popular also in the FEM (finite element method) community for mesh refinement/adaptation purposes [12,13,15,16]. Several another variants of the algorithm suitable for standard FEMs were also proposed, analysed and numerically tested in [1,2,3,7,10,11] (see also references therein). A practical realization of bisection algorithms is much simpler than red, red-green, and green refinements of simplices to provide mesh conformity, especially in the case of local mesh refinements and in three or higher dimensions (see [9]). The guiding rules for mesh refinements/adaptivity often come from a posteriori error estimation which generally delivers estimates in the form of integrals I. Lirkov, S. Margenov, and J. Wa´ sniewski (Eds.): LSSC 2009, LNCS 5910, pp. 571–579, 2010. c Springer-Verlag Berlin Heidelberg 2010
572
A. Hannukainen, S. Korotov, and M. Kˇr´ıˇzek
over the solution domain. Thus, we usually have in hands a certain function over a given domain which dictates the actual mesh reconstruction, see e.g. paper [5]. Its general idea is essentially used in this work, where we propose to apply the longest-edge bisection algorithm, introduced in [12,13] and later analysed in [7], in the following (modified) form. We choose for bisection not the longest edge in the partition, but the edge which has a maximal value of its length multiplied by the value of mesh density function, which is defined a priori. Some properties of such a bisection algorithm are analyzed in this work. Our approach entirely differs from the others, as we do not need any postrefinements of meshes (which can be rather nontrivial task, see e.g. [14, p. 2228]) to provide their conformity. Let Ω ⊂ Rd be a bounded polygonal or polyhedral domain. By T we denote a usual conforming simplicial mesh of Ω, i.e., with no hanging nodes. Let E = E(T ) be the set of all edges of all simplices of T . A set F of meshes is said to be a family of meshes if for every ε > 0 there exists a mesh T ∈ F such that |e| < ε for all edges e ∈ E(T ), where | · | stands for the Euclidean norm.
2
Mesh Density Function
Local simplicial mesh refinements of Ω can be done by means of a priori given positive mesh density function m which is supposed here to be Lipschitz continuous over Ω, i.e., there exists a constant L such that |m(x) − m(y)| ≤ L|x − y|,
x, y ∈ Ω.
(1)
Ideally, it should be large over those parts of Ω, where we need a very fine mesh. On the other hand, m is defined to be small over those parts of Ω, where we do not need a fine mesh (see, e.g. Example 1 and Remark 2). From the positiveness and continuity of m we see that there exists a constant m0 such that 0 < m0 ≤ m(x)
∀x ∈ Ω.
(2)
Denote by Me the midpoint of the edge e and define the criterion function J(e) = |e| m(Me ).
(3)
We shall look for an edge e∗ ∈ E (see Remark 1 below) at which J attains its maximal value, i.e., (4) J(e∗ ) = max J(e). e∈E
Further, we bisect all simplices from T sharing e∗ through the midpoint Me∗ (see Figs. 1 and 2). For d = 3 each bisection plane contains the edge opposite to e∗ in a particular simplex. This refinement strategy will be called the generalized conforming bisection (GCB-) algorithm. It is used repeatedly to produce a family of conforming nested meshes. If m ≡ 1 (or if m is constant) then we get the standard conforming longest-edge bisection algorithm which has been recently analyzed in [7].
Algorithm That Produces Conforming Locally Refined Simplicial Meshes
573
Fig. 1. Generalized bisection when e∗ is inside Ω and at the boundary ∂Ω for d = 2. The dotted lines represent the last bisections.
Fig. 2. Bisection of all tetrahedra sharing the edge e∗
Remark 1. If there exists more than one edge for which J attains its maximum, we may choose e∗ randomly. (This happens, e.g., if all triangles in T are equilateral and m ≡ 1.) Another way would be to modify slightly the definition of J as follows J(ei ) = |ei |m(Mei ) + 10−10 i, ei ∈ E, i = 1, . . . , n, where n is the number of edges in E.
3
Convergence of the Mesh-Size for GCB-Algorithm
In [6], Kearfott proved for the longest-edge bisection algorithm (which however produces hanging nodes, in general, see [7, p. 1688]) that the largest diameter of all simplices tends to zero. In Theorem 2 below we prove the same result for our GCB-algorithm and d = 2, 3. Before that we show that the maximal value of J monotonically tends to zero after the “mesh size becomes sufficiently small”.
574
A. Hannukainen, S. Korotov, and M. Kˇr´ıˇzek
Theorem 1. For each newly generated edge e after one step of the GCBalgorithm applied to T we always have J(e ) ≤ 0.9 J(e∗ ), provided |e| ≤ 0.03
m0 L
∀e ∈ E(T ).
(5) (6)
Proof. Let e∗ be the edge satisfying (4) and (6). Let T ∈ T be a triangle which will be bisected for d = 2. If d = 3, then T will stand for one of the triangular faces containing e∗ of the tetrahedra from T that will be bisected. There will be three new edges in T : two halves of e∗ and the median to e∗ . Let e be the first (or second) half of e∗ . Then from the positiveness and Lipschitz continuity of m we obtain that 0.5 m(Me ) ≤ 0.9 m(Me∗ ). (7) Indeed, from (1), (6), and (2) we find that m(Me ) ≤ m(Me∗ ) + L|Me − Me∗ | 1 = m(Me∗ ) + L|e∗ | ≤ m(Me∗ ) + 0.8 m0 ≤ 1.8 m(Me∗ ). 4 Hence, (7) holds. Multiplying (7) by |e∗ | = 2|e |, we obtain (5), namely J(e ) = |e |m(Me ) ≤ 0.9 |e∗ |m(Me∗ ) = 0.9 J(e∗ ). Now let e ⊂ T be the median to e∗ and let the lengths of edges a, b, c of T satisfy |a| ≤ |b| ≤ |c|. (8) Consider three possible cases: 1) Let c = e∗ (see Fig. 3). Then by (6), (2), and (3) √ 3 0.09 3 2 L|c| ≤ |c|m0 ≤ 0.9 − J(c). 8 8 2
(9)
Let t = e be the median on the edge c. Since the angle opposite to c is greater than or equal to π3 , we have √ 3 |c|. (10) |t| ≤ 2 Applying (3) and also (10) twice, we find by the Lipschitz continuity of m that √ √ √ 3 3 3 J(e ) = |t|m(Mt ) ≤ |c|m(Mt ) = |c|m(Mc ) + |c|(m(Mt ) − m(Mc )) 2 √ √2 √2 √ |t| 3 3 3 3 ≤ J(c) + L|c||Mt − Mc | ≤ J(c) + L|c| 2 2 2 2 2 √ √ √ √ 3 3 1 3 3 3 ≤ J(c) + L|c| |c| ≤ J(c) + L|c|2 ≤ 0.9 J(c) = 0.9 J(e∗ ), 2 2 2 2 2 8
Algorithm That Produces Conforming Locally Refined Simplicial Meshes
575
C
b
a
t
A
c
c
2
2
B
Fig. 3. Bisection of a triangle T ∈ T for which c = e∗
where the last inequality follows from (9). Thus, (5) holds. 2) Assume now that b = e∗ and let u = e be the median on b. From (6) and (2) we see that √ √ 3 |e∗ | 3 L ≤ 0.03m0 ≤ 0.9 − m(Mc ). 2 4 2 From this, the Lipschitz continuity of m, and the fact that |Mu − Mc | = 4b we obtain √ √ √ 3 3 3 |e∗ | m(Mu ) ≤ m(Mc ) + L ≤ 0.9m(Mc ). 2 2 2 4 By (8) we can find that (see Fig. 4) √ 3 |u| ≤ |c|. (11) 2 The equality in (11) is attained when the triangle ABC is equilateral as marked in Fig. 4. Thus, √ 3 J(e ) = J(u) = |u|m(Mu ) ≤ |c|m(Mu ) ≤ 0.9|c|m(Mc ) 2 = 0.9 J(c) ≤ 0.9 J(b) = 0.9 J(e∗ ) and (5) holds again. 3) Finally, let a = e∗ . We shall distinguish two cases: a) Let |b| 2 ≤ |a|. Then |b| m0 ∗ ≤ |e | = |a| ≤ due to (6). From this, the Lipschitz continuity of m and 2 9L (2) we find further that 9m(Ma ) ≤ 9m(Mc ) + 9L
|b| ≤ 9m(Mc ) + m0 ≤ 10m(Mc ). 2
From above and the inequality J(c) ≤ J(a) we obtain that |c| ≤
m(Ma ) 10 |a| ≤ |a|. m(Mc ) 9
576
A. Hannukainen, S. Korotov, and M. Kˇr´ıˇzek C
B
a 2
c
a
b
u
v
a 2
A
b
b
2
2
C
A
Fig. 4. Admissible region for the vertex B. The position of B in its left corner yields the maximal value of the ratio |u| . |c|
This implies that (see Fig. 5) |v| ≤
1−
B
c
Fig. 5. A very small admissible region for the vertex C. The position of C in its right corner yields the . The maximal value of the ratio |v| |c| triangle ABC is almost equilateral.
√ 9 2 319 |c| = |c|. 20 20
(12)
The equality on the left-hand side of (12) is attained when the vertex C is in the 9 right corner (where |a| = 10 |c| and |b| = |c|) of the admissible region marked in Fig. 5. From (12), the positiveness and Lipschitz continuity of m, (4), and (6) we obtain √ √ 319 319 |a| J(e ) = J(v) = |v|m(Mv ) ≤ |c|m(Mv ) ≤ |c|(m(Mc ) + L ) 20 20 4 √ 319 0.03 ≤ |c|(m(Mc ) + m0 ) ≤ 0.9 |c|m(Mc ) 20 4 = 0.9 J(c) ≤ 0.9 J(a) = 0.9 J(e∗ ). b) Now, let |a| <
|b| 2 .
Then by (6) we have
1 1 |b| |b| |c| m(Ma ) ≤ (m(Mb ) + L ) = J(b) + L|b||c| 2 2 2 2 4 1 0.03 0.03 1 ≤ J(b) + m0 |b| ≤ J(b) + J(b) < J(b). 2 4 2 4
J(a) = |a|m(Ma ) <
However, this contradicts the relation J(e∗ ) = J(a) ≥ J(b). Hence, the case 3b) cannot happen. Theorem 2. The GCB-algorithm yields a family of nested conforming meshes whose longest edges tend to zero if the initial mesh satisfies (6).
Algorithm That Produces Conforming Locally Refined Simplicial Meshes
577
1 0.8 0.6 0.4 0.2 0 −0.2 −0.4 −0.6 −0.8 −1 −1
−0.5
0
0.5
1
Fig. 6. The initial mesh of Ω
Proof. Let T be a triangle (possibly also a face of a tetrahedron) that will be bisected. Then all three newly generated edges will be shorter than the longest edge c of T . Therefore, the length of the longest edge of the whole mesh is a nonincreasing function. Thus its limit exists as the bisection proceeds. We prove that this limit is zero. Due to Theorem 1, the value J(e∗ ) tends monotonically to zero, since for all newly generated edges we have J(e ) ≤ 0.9J(e∗ ) provided (6) holds. Therefore, J(e∗ ) = |e∗ |m(Me∗ ) ≥ |e|m(Me ) ≥ |e|m0 . Since m0 in (2) is positive, we find that also |e| → 0
4
Numerical Results
In this section we demonstrate the performance of GCB-algorithm, which illustrates its convergence even though condition (6) is not satisfied for the initial mesh. We also present numerical evidence that the number of similarity-distinct triangles generated by our method seems to be finite. Example 1. Consider the L-shaped domain Ω = (−1, 1) × (−1, 1) \ [0, 1) × [0, 1). Its initial mesh is illustrated in Fig. 6. Set m(x) =
1 . 1 + 4|x|
(13)
This function and the corresponding meshes from two different iterations are plotted in Fig. 7. Figures 6 and 7 nicely illustrate that the longest edge tends to zero as refinements proceed. Example 2. In [19,20] Martin Stynes assumes that all triangles are bisected at the same time. He showed that this repeated process yields only a finite number of similarity-distinct subtriangles, but also hanging nodes may appear. Our algorithm does not produce hanging nodes. Moreover, from Fig. 8 we observe
578
A. Hannukainen, S. Korotov, and M. Kˇr´ıˇzek 1
1
1
0.8
0.8
0.8
0.6
0.6
0.6
0.4
0.4
0.4
0.2
0.2
0.2
0
0
0
−0.2
−0.2
−0.2
−0.4
−0.4
−0.4
−0.6
−0.6
−0.6
−0.8
−0.8
−1 −1
−0.5
0
0.5
1
−0.8
−1 −1
−0.5
0
0.5
1
−1 −1
−0.5
0
0.5
1
Fig. 7. Mesh after 100 and 500 refinements for the mesh density function (13) (left and centre). The right picture shows the behaviour of function (13). 90
2.5
80 70
2
60 50 1.5 40 30 1
20 10 0
0.5 0
10
20
30
40
50
60
Fig. 8. Various colours indicate the value of the decimal logarithm of the number of similarity-distinct subtriangles for α ≤ β ≤ γ, α ∈ (0, π/3], β ∈ (0, π/2) and m ≡ 1
that the number of similarity-distinct subtriangles is also finite if Ω is a triangle that changes over all possible shapes. This result is proved in [7, p. 1691] under some angle conditions and m ≡ 1. Remark 2. In practice, m need not be Lipschitz continuous, but it can have jumps or singularities. The function m can also be modified during computational process. For instance, we may put m(x) = |˜ u(x) − uh (x)| to control adaptive mesh refinement for a posteriori error estimates. Here u˜ is a higher order approximation of the exact solution of some boundary value problem and uh is a numerical approximation of the true solution. The function u ˜ can be obtained, e.g., by some averaging superconvergence technique [8].
Algorithm That Produces Conforming Locally Refined Simplicial Meshes
579
Acknowledgement. This paper was supported by Institutional Research Plan nr. AV0Z 10190503 and Grant nr. IAA 100190803 of the Academy of Sciences of the Czech Republic. Also, the support of Projects 211512 and 124619 from the Academy of Finland is acknowledged. The authors are indebted to reviewers for very useful comments.
References 1. Adler, A.: On the bisection method for triangles. Math. Comp. 40, 571–574 (1983) 2. Arnold, D.N., Mukherjee, A., Pouly, L.: Locally adapted tetrahedral meshes using bisection. SIAM J. Sci. Comput. 22, 431–448 (2000) 3. B¨ ansch, E.: Local mesh refinement in 2 and 3 dimensions. IMPACT Comp. Sci. Engrg. 3, 181–191 (1991) 4. Eiger, A., Sikorski, K., Stenger, F.: A bisection method for systems of nonlinear equations. ACM Trans. Math. Software 10, 367–377 (1984) 5. Eriksson, K.: An adaptive finite element method with efficient maximum norm error control for elliptic problems. Math. Models Methods Appl. Sci. 4, 313–329 (1994) 6. Kearfott, R.B.: A proof of convergence and an error bound for the method of bisection in Rn . Math. Comp. 32, 1147–1153 (1978) 7. Korotov, S., Kˇr´ıˇzek, M., Krop´ aˇc, A.: Strong regularity of a family of face-to-face partitions generated by the longest-edge bisection algorithm. Comput. Math. Math. Phys. 48, 1687–1698 (2008) 8. Kˇr´ıˇzek, M., Neittaanm¨ aki, P.: On superconvergence techniques. Acta Appl. Math. 9, 175–198 (1987) 9. Kˇr´ıˇzek, M., Strouboulis, T.: How to generate local refinements of unstructured tetrahedral meshes satisfying a regularity ball condition. Numer. Methods Partial Differential Equations 13, 201–214 (1997) 10. Liu, A., Joe, B.: On the shape of tetrahedra from bisection. Math. Comp. 63, 141–154 (1994) 11. Liu, A., Joe, B.: Quality of local refinement of tetrahedral meshes based on bisection. SIAM J. Sci. Comput. 16, 1269–1291 (1995) 12. Rivara, M.-C.: Algorithms for refining triangular grids suitable for adaptive and multigrid techniques. Internat. J. Numer. Methods Engrg. 20, 745–756 (1984) 13. Rivara, M.-C.: Selective refinement/derefinement algorithms for sequences of nested triangulations. Internat. J. Numer. Methods Engrg. 28, 2889–2906 (1989) 14. Rivara, M.-C.: Lepp-algorithms, applications and mathematical properties. Appl. Numer. Math. 59, 2218–2235 (2009) 15. Rivara, M.-C., Iribarren, G.: The 4-triangles longest-side partition and linear refinement algorithm. Math. Comp. 65, 1485–1502 (1996) 16. Rivara, M.-C., Levin, C.: A 3D refinement algorithm suitable for adaptive and multigrid techniques. Comm. Appl. Numer. Methods Engrg. 8, 281–290 (1992) 17. Rosenberg, I.G., Stenger, F.: A lower bound on the angles of triangles constructed by bisection of the longest side. Math. Comp. 29, 390–395 (1975) 18. Sikorski, K.: A three dimensional analogue to the method of bisections for solving nonlinear equations. Math. Comp. 33, 722–738 (1979) 19. Stynes, M.: On faster convergence of the bisection method for certain triangles. Math. Comp. 33, 717–721 (1979) 20. Stynes, M.: On faster convergence of the bisection method for all triangles. Math. Comp. 35, 1195–1201 (1980)
A Discrete Maximum Principle for Nonlinear Elliptic Systems with Interface Conditions J´ anos Kar´ atson Department of Applied Analysis and Computational Mathematics ELTE University, H-1117 Budapest, Hungary
[email protected]
Abstract. A discrete maximum principle is proved for some nonlinear elliptic systems. A recent result is extended to problems with interface conditions. The discrete maximum principle holds on meshes with suitably small mesh size and under proper acuteness type conditions. In particular, this implies the discrete nonnegativity property under nonnegative data. As an example, reaction-diffusion systems in chemistry are considered, using meshes with the above properties; one can derive that the numerically computed concentrations are nonnegative as required.
1
Introduction
The maximum principle is an important qualitative property of second order elliptic equations, hence its discrete analogues, the so-called discrete maximum principles (DMPs) have drawn much attention. Various DMPs have been given for FD and FE discretizations, including geometric conditions on the computational meshes, see e.g. [1,2,3,14,17,19] for linear and [10,11,15] for nonlinear equations. For elliptic operators including lower order terms as well, the DMP has the form (1) max uh ≤ max{0, max uh }, ∂Ω
Ω
and it can be ensured for sufficiently fine mesh under geometric conditions expressing uniform acuteness or non-narrowness in the case of simplicial or rectangular FEM meshes, respectively. (Similar conditions also appear for prismatic FEM.) The above results concern a single elliptic equation. The DMP has first been extended to certain systems in [12]. The considered class has cooperative and weakly diagonally dominant coupling, which conditions also appear in the underlying continuous maximum principle [4,16]. In the case of mixed boundary conditions and nonpositive right-hand sides, we have max max uhk ≤ max max{0, max gkh },
k=1,...,s
Ω
k=1,...,s
ΓD
where ΓD is the Dirichlet boundary. The acuteness type conditions for simplicial FE meshes have also been suitably weakened in [12]. The goal of this paper is to extend the above DMP to systems with interface conditions. I. Lirkov, S. Margenov, and J. Wa´ sniewski (Eds.): LSSC 2009, LNCS 5910, pp. 580–587, 2010. c Springer-Verlag Berlin Heidelberg 2010
A Discrete Maximum Principle for Nonlinear Elliptic Systems
2
581
Some Algebraic Background
Let us consider a system of equations of order (k+m)×(k+m) with the structure ˜ d c AA ¯, ¯ (2) A¯ c := = ˜ =: d ˜ c d 0 I where I is the m × m identity matrix and 0 is the m × k zero matrix. Many known results on discrete maximum principles for FEM or FDM discretizations ¯ is of generalized are based on the ‘matrix maximum principle’, proved in [2]: if A ¯ nonnegative type and d is (elementwise) nonnegative, then max
i=1,...,k+m
ci ≤ max{0,
max
i=k+1,...,k+m
ci }.
(3)
¯ to be of generalized nonnegative type is the irreducibilA key assumption for A ity of A i.e., for any i = j there must exist a sequence of nonzero entries {ai,i1 , ai1 ,i2 , . . . , ais ,j } of A. This is a technical condition which is sometimes difficult to check in applications, see e.g. [6], therefore a weakened property has been introduced in [12]. This requires two definitions. Definition 1. Let A be an arbitrary k × k matrix. The irreducible blocks of A are the matrices A(l) (l = 1, . . . , q) defined as follows. Let us call the indices i, j ∈ {1, . . . , k} connectible if there exists a sequence of nonzero entries {ai,i1 , ai1 ,i2 , . . . , ais ,j } of A, where i, i1 , i2 , . . . , is , j ∈ {1, . . . , k} are distinct indices. Further, let us call the indices i, j mutually connectible if both i, j and j, i are connectible in the above sense. Let N1 , . . . , Nq be the equivalence classes, i.e. the maximal sets of mutually connectible indices. Letting (l) (l) Nl = {s1 , . . . , skl } for l = 1, . . . , q, we have k1 + · · ·+ kq = k. Then we define for (l)
all l = 1, . . . , q the kl × kl matrix A(l) by Ap q := as(l) ,s(l) p
q
(p, q = 1, . . . , kl ).
¯ with the structure (2) is said to be Definition 2. A (k + m) × (k + m) matrix A of generalized nonnegative type with irreducible blocks if the following properties hold: (i) aii > 0, i = 1, . . . , k, (ii) aij ≤ 0, i = 1, . . . , k, j = 1, . . . , k + m k+m aij ≥ 0, i = 1, . . . , k, (iii)
(i = j),
j=1
(iv’) For each irreducible block of A there exists an index i0 = i0 (l) ∈ Nl = (l) (l) {s1 , . . . , skl } for which k ai0 ,j > 0. (4) j=1
¯ be a (k + m) × (k + m) matrix with the structure (2), Theorem 1. [12]. Let A ¯ is of generalized nonnegative type with irreducible blocks in and assume that A the sense of Definition 2. ¯ c)i ≤ 0, i = If the vector ¯ c = (c1 , . . . , ck+m )T ∈ Rk+m is such that di ≡ (A¯ 1, . . . , k, then (3) holds.
582
3
J. Kar´ atson
The Problem
Let us consider the system ⎫ a.e. in Ω \ Γ, ⎪ ⎪ ⎪ ⎪ ⎪ ⎪ ⎪ ⎬ = γk (x) a.e. on ΓN , ⎪
−div bk (x, ∇uk ) ∇uk + qk (x, u1 , . . . , us ) = fk (x) k bk (x, ∇uk ) ∂u ∂ν
[ uk ]Γ = 0
and
k bk (x, ∇u) ∂u ∂ν
uk = gk (x)
Γ
= k (x)
⎪ a.e. on ΓD , ⎪ ⎪ ⎪ ⎪ ⎪ ⎪ ⎪ ⎭ on Γ,
(k = 1, . . . , s)
(5) where [ u]Γ and b(x, ∇u) ∂u denote the jump (i.e., the difference of the limits ∂ν Γ from the two sides of the interface Γ ) of the solution u and the normal flux b(x, ∇u) ∂u ∂ν , respectively. We impose the following
Assumptions 3.1 (i) Ω ⊂ Rd is a bounded domain with a piecewice C 1 boundary; ΓD , ΓN are disjoint open measurable subsets of ∂Ω such that ∂Ω = Γ D ∪ Γ N , and Γ is a piecewise C 1 surface lying in Ω. (ii) (Smoothness and growth.) For all k = 1, . . . , s we have bk ∈ (C 1 ∩L∞ )(Ω× Rd ) and qk ∈ C 1 (Ω × Rs ). Further, let 2 ≤ p < p∗ , where p∗ :=
2d d−2
if d ≥ 3 and p∗ := +∞ if d = 2,
and let there exist constants β1 , β2 ≥ 0 such that ∂qk p−2 (k, l = 1, . . . , s; x ∈ Ω, ξ ∈ Rs ). ∂ξl (x, ξ) ≤ β1 + β2 |ξ|
(6)
(7)
(iii) (Ellipticity.) There exists m > 0 such that bk ≥ m holds for all k = 1, . . . , s. Further, defining ak (x, η) := bk (x, η)η for all k, the Jacobian ∂ matrices ∂η ak (x, η) are uniformly spectrally bounded from both below and above. ∂qk (iv) (Cooperativity.) We have (x, ξ) ≤ 0 for all k, l = 1, . . . , s, k = l, ∂ξl x ∈ Ω, ξ ∈ Rs . (v) (Weak diagonal dominance for the Jacobians w.r.t. rows and columns.) s ∂qk l=1
∂ξl
(x, ξ) ≥ 0,
s ∂ql (x, ξ) ≥ 0 ∂ξk
(k = 1, . . . , s; x ∈ Ω, ξ ∈ Rs ).
l=1
(vi) For all k = 1, . . . , s we have fk ∈ L2 (Ω), γk ∈ L2 (ΓN ), k ∈ L2 (Γ ), and ∗ gk = gk|Γ with g ∗ ∈ H 1 (Ω). D Remark 1. Assumptions (iv)-(v) imply ξ ∈ Rs ).
∂qk ∂ξk (x, ξ)
≥ 0 (k = 1, . . . , s; x ∈ Ω,
A Discrete Maximum Principle for Nonlinear Elliptic Systems
583
For the weak formulation of such problems, we define the Sobolev space 1 HD (Ω) := {z ∈ H 1 (Ω) : z|ΓD = 0} . The weak formulation of problem (5) then reads as follows: find u ∈ H 1 (Ω)s such that A(u), v = ψ, v ∗
and u − g ∈ where A(u), v =
s
1 (∀v ∈ HD (Ω)s )
(8)
1 HD (Ω)s ,
bk (x, ∇u) ∇uk · ∇vk +
Ω k=1
(9) s
qk (x, u) vk
(10)
k=1
1 for given u = (u1 , . . . , us ) ∈ H 1 (Ω)s and v = (v1 , . . . , vs ) ∈ HD (Ω)s , further,
ψ, v =
s
fk vk +
Ω k=1
s
ΓN k=1
γk vk +
s
k vk
(11)
Γ k=1
1 (Ω)s , and finally, g ∗ := (g1∗ , . . . , gs∗ ). Existence for given v = (v1 , . . . , vs ) ∈ HD and uniqueness of the weak solution can be derived following the general framework of monotone elliptic operators, see e.g. [5]. For a single equation, this was achieved and a discrete maximum principle was considered in [11].
4
Finite Element Discretization
We define the finite element discretization of problem (5) in the following way. First, let n ¯0 ≤ n ¯ be positive integers and let us choose basis functions 1 (Ω), ϕ1 , . . . , ϕn¯ 0 ∈ HD
1 ϕn¯ 0 +1 , . . . , ϕn¯ ∈ H 1 (Ω) \ HD (Ω),
which correspond to homogeneous and inhomogeneous boundary conditions on ΓD , respectively. These basis functions are assumed to be continuous and to satisfy n ¯ ¯ ), ϕp ≡ 1, ϕp (Bq ) = δpq (12) ϕp ≥ 0 (p = 1, . . . , n p=1
for given node points Bp ∈ Ω (p = 1, . . . , n ¯ 0 ) and Bp ∈ ΓD (p = n ¯ 0 + 1, . . . , n ¯ ), where δpq is the Kronecker symbol. (These conditions hold e.g. for standard linear, bilinear or prismatic finite elements.) Finally, we assume that any two interior basis functions can be connected with a chain of interior basis functions with overlapping support. This assumption is natural by its geometric meaning, counterexamples only occur in quasi-degenerate cases as mentioned in [7]. We in fact need a basis in the corresponding product spaces, which we define by repeating the above functions in each of the s coordinates and setting zero in the other coordinates. That is, let n0 := s¯ n0 and n := s¯ n. First, for any 1 ≤ i ≤ n0 , if i = (k − 1)¯ n0 + p
for some 1 ≤ k ≤ s and 1 ≤ p ≤ n ¯ 0 , then
584
J. Kar´ atson
φi := (0, . . . , 0, ϕp , 0, . . . , 0)
where ϕp stands at the k-th entry,
that is, (φi )m = ϕp if m = k and (φi )m = 0 if m = k. From these, we let 1 (Ω)s . Vh0 := span{φ1 , . . . , φn0 } ⊂ HD
(13)
Similarly, for any n0 + 1 ≤ i ≤ n, if i = n0 + (k − 1)(¯ n−n ¯0) + p − n ¯0
for some 1 ≤ k ≤ s and n ¯0 + 1 ≤ p ≤ n ¯,
T
then
φi := (0, . . . , 0, ϕp , 0, . . . , 0)
where ϕp stands at the k-th entry,
that is, (φi )m = ϕp if m = k and (φi )m = 0 if m = k. From this and (13), we let Vh := span{φ1 , . . . , φn } ⊂ H 1 (Ω)s . Using the above FEM subspaces, the finite element discretization of problem (5) leads to the task of finding uh ∈ Vh such that A(uh ), v = ψ, v (∀v ∈ Vh0 ) and uh − g h ∈ Vh0 , i.e., uh = g h on ΓD (where g h = n
uh =
n
(14) (15)
gj φj ∈ Vh is the approximation of g ∗ on ΓD ). Then, setting
j=n0 +1
cj φj and v = φi (i = 1, . . . , n0 ) in (8), we obtain the n0 × n system
j=1
of algebraic equations n
aij (¯ c) cj = di
(i = 1, . . . , n0 ),
(16)
j=1
where for any ¯ c = (c1 , . . . , cn )T ∈ Rn (i = 1, . . . , n0 , j = 1, . . . , n), the entries ¯ c) are of A(¯ s s c) = bk (x, ∇uh ) (∇φj )k · (∇φi )k + Vkl (x, uh ) (φj )l (φi )k , (17) aij (¯ Ω k=1
k,l=1
where
1
Vkl (x, uh (x)) = 0
and
di :=
s Ω k=1
∂qk (x, tuh (x)) dt ∂ξl fˆk (φi )k +
s
ΓN k=1
(k, l = 1, . . . , s; x ∈ Ω),
γk (φi )k +
s Γ k=1
k (φi )k
(18)
where fˆk (x) := fk (x) − qk (x, 0). We enlarge system (16) to a square one with the structure (2) by adding a proper zero and an identity block, and write it briefly as ¯ c)¯ A(¯ c = d. ¯ c) has the entry aij (¯ I.e. for i = 1, . . . , n0 and j = 1, . . . , n, matrix A(¯ c) from (17).
A Discrete Maximum Principle for Nonlinear Elliptic Systems
5
585
The Discrete Maximum Principle
We will consider a family of finite element discretizations, and thereby we need a regularity notion for the mesh. Definition 3. Let us consider a family of FEM subspaces V = {Vh }h→0 constructed as above. The corresponding mesh will be called quasi-regular w.r.t. problem (5) if c1 hγ ≤ meas(supp ϕp ) ≤ c2 hd , where the positive real number γ satisfies d ≤ γ < γd∗ (p) := 2d − from Assumption 3.1 (ii).
(d−2)p 2
with p
First we verify the desired generalized nonnegativity for the stiffness matrix. Theorem 2. Let problem (5) satisfy Assumptions 3.1. Let us consider a family of finite element subspaces Vh (h → 0), such that the corresponding family of meshes is regular from above, according to Definition 3, for some proper γ. Assume further that for any indices p = 1, ..., n ¯ 0 , t = 1, ..., n ¯ (p = t), if meas(supp ϕp ∩ supp ϕt ) > 0 then ∇ϕt · ∇ϕp ≤ 0 on Ω and ∇ϕt · ∇ϕp ≤ −K0 hγ−2 (19) Ω
with some constant K0 > 0 independent of p, t and h. ¯ c) defined in (17) is of generalized Then for sufficiently small h, the matrix A(¯ nonnegative type with irreducible blocks in the sense of Definition 2. Proof. In [12, Theorem 4.10], the statement was proved for problems like (5) without interface conditions. Note that the presence of the interface conditions c). only affects the r.h.s. (18) of the algebraic system, but not the entries aij (¯ Hence, we can apply the quoted theorem to (17). A classical way to satisfy the integral condition in (19) is a pointwise inequality on the support, which expresses uniform acuteness for a family of simplicial meshes. However, one can ensure it with less strong conditions as well, see discussed in [12]. Now we can derive the discrete maximum principle for problem (5): Corollary 1. Let the assumptions of Theorem 2 hold. If fk ≤ qk (x, 0),
γk ≤ 0
and
k ≤ 0
(k = 1, . . . , s),
then for sufficiently small h, the FEM solution uh = (uh1 , . . . , uhs ) of system (5) satisfies max max uhk ≤ max max{0, max gkh }. (20) k=1,...,s
Ω
k=1,...,s
ΓD
Proof. Using (18) and that (φi )k ≥ 0 by (12), the coordinate vector ¯ c = ¯ c)i ≤ 0, i = 1, ..., k. Then Theorem (c1 , ..., ck+m )T ∈ Rk+m is such that di ≡ (A¯ 1 is valid. From (3), using (12) and the same derivation as in [12, Theorem 4.5], we obtain (20).
586
6
J. Kar´ atson
Some Related Results
Analogously to Corollary 1, a change of signs yields the discrete minimum principle for problem (5): under the assumptions of Theorem 2, if fk ≥ qk (x, 0),
γk ≥ 0
and k ≥ 0
(k = 1, . . . , s),
then for sufficiently small h, the FEM solution uh = (uh1 , . . . , uhs ) of system (5) satisfies min min uhk ≥ min min{0, min gkh }. k=1,...,s Ω
ΓD
k=1,...,s
In particular, if in addition gk ≥ 0 for all k, then we obtain the discrete nonnegativity principle: (k = 1, . . . , s). uhk ≥ 0 As an important example, let us consider reaction-diffusion systems in chemistry. The steady states of such processes can sometimes be described by systems ⎫ −bk Δuk + Pk (x, u1 , . . . , us ) = fk (x) in Ω, ⎪ ⎪ ⎪ ⎪ ⎪ ⎪ ∂uk bk ∂ν = γk (x) on ΓN , ⎬ (k = 1, . . . , s). ⎪ uk = gk (x) on ΓD , ⎪ ⎪ ⎪ ⎪ ⎪ ∂uk ⎭ [ uk ]Γ = 0 and bk ∂ν Γ = k (x) on Γ (21) Here, for all k, the quantity uk describes the concentration of the kth species, and Pk is a polynomial which characterizes the rate of the reactions involving the k-th species. A common way to describe such reactions is the so-called mass action type kinetics [8], which implies that Pk (x, 0) ≡ 0 on Ω for all k. The function fk ≥ 0 describes a source independent of concentrations. System (21) was considered in [12] without the interface conditions. Now the latter express that a part of the process is localized on the interface. We consider system (21) as a special case of (5) under Assumptions 3.1. As pointed out in [12], such chemical models describe processes with cross-catalysis and strong autoinhibiton. Since the concentrations uk are nonnegative, a proper numerical model must produce such numerical solutions. Based on the abovementioned discrete nonnegativity principle, we obtain the required property: if the FEM discretization of system (21) satisfies the conditions of Theorem 2, and fk ≥ 0,
γk ≥ 0,
gk ≥ 0,
k ≥ 0
(k = 1, . . . , s), h
then for sufficiently small h, the FEM solution u = (uh 1 , . . . , uh s )T of system (21) satisfies (k = 1, . . . , s). uhk ≥ 0 on Ω Remark 2. Both in the general problem (5) and the example (21), one may include additional terms sk (x, u1 , . . . , us ) on ΓN and/or Γ , which we omit here for technical simplicity. Then sk must satisfy similar properties as assumed for qk or Pk , respectively. Such localized reactions for a single equation (i.e. one chemical agent) have been considered e.g. in [9].
A Discrete Maximum Principle for Nonlinear Elliptic Systems
587
References 1. Brandts, J., Korotov, S., Kˇr´ıˇzek, M.: The discrete maximum principle for linear simplicial finite element approximations of a reaction-diffusion problem. Linear Algebra Appl. 429, 2344–2357 (2008) 2. Ciarlet, P.G.: Discrete maximum principle for finite-difference operators. Aequationes Math. 4, 338–352 (1970) 3. Ciarlet, P.G., Raviart, P.-A.: Maximum principle and uniform convergence for the finite element method. Comput. Methods Appl. Mech. Engrg. 2, 17–31 (1973) 4. de Figueiredo, D.G., Mitidieri, E.: Maximum principles for cooperative elliptic systems. C. R. Acad. Sci. Paris S´er. I Math. 310(2), 49–52 (1990) 5. Farag´ o, I., Kar´ atson, J.: Numerical solution of nonlinear elliptic problems via preconditioning operators. Theory and applications. In: Advances in Computation, vol. 11. NOVA Science Publishers, New York (2002) 6. Draganescu, A., Dupont, T.F., Scott, L.R.: Failure of the discrete maximum principle for an elliptic finite element problem. Math. Comp. 74(249), 1–23 (2005) 7. Hannukainen, A., Korotov, S., Vejchodsk´ y, T.: On Weakening Conditions for Discrete Maximum Principles for Linear Finite Element Schemes. In: Margenov, S., Vulkov, L.G., Wasniewski, J. (eds.) Numerical Analysis and Its Applications. LNCS, vol. 5434, pp. 297–304. Springer, Heidelberg (2009) 8. H´ ars, V., T´ oth, J.: On the inverse problem of reaction kinetics. In: Farkas, M. (ed.) Qualitative Theory of Differential Equations, Szeged, Hungary (1979); Coll. Math. Soc. J´ anos Bolyai, vol. 30, pp. 363–379. North-Holland–MSJB, Budapest (1981) 9. Kandilarov, J.D., Vulkov, L.G.: Analysis of immersed interface difference schemes for reaction-diffusion problems with singular own sources. Comput. Methods Appl. Math. 3(2), 253–273 (2003) 10. Kar´ atson, J., Korotov, S.: Discrete maximum principles for finite element solutions of nonlinear elliptic problems with mixed boundary conditions. Numer. Math. 99, 669–698 (2005) 11. Kar´ atson, J., Korotov, S.: Discrete maximum principles for FEM solutions of some nonlinear elliptic interface problems. Int. J. Numer. Anal. Modelling 6(1), 1–16 (2009) 12. Kar´ atson, J., Korotov, S.: A discrete maximum principle in Hilbert space with applications to nonlinear cooperative elliptic systems. SIAM J. Numer. Anal. 47(4), 2518–2549 (2009) 13. Korotov, S., Kˇr´ıˇzek, M.: Acute type refinements of tetrahedral partitions of polyhedral domains. SIAM J. Numer. Anal. 39, 724–733 (2001) 14. Korotov, S., Kˇr´ıˇzek, M., Neittaanm¨ aki, P.: Weakened acute type condition for tetrahedral triangulations and the discrete maximum principle. Math. Comp. 70, 107–119 (2001) 15. Kˇr´ıˇzek, M., Qun, L.: On diagonal dominance of stiffness matrices in 3D. East-West J. Numer. Math. 3, 59–69 (1995) 16. Mitidieri, E., Sweers, G.: Weakly coupled elliptic systems and positivity. Math. Nachr. 173, 259–286 (1995) ˇ ın, P.: Discrete maximum principle for higher-order finite ele17. Vejchodsk´ y, T., Sol´ ments in 1D. Math. Comp. 76(260), 1833–1846 (2007) 18. Varga, R.: Matrix iterative analysis. Prentice Hall, New Jersey (1962) 19. Xu, J., Zikatanov, L.: A monotone finite element scheme for convection-diffusion equations. Math. Comp. 68, 1429–1446 (1999)
Inverse Problem for Coefficient Identification in Euler-Bernoulli Equation by Linear Spline Approximation Tchavdar T. Marinov1 and Rossitza Marinova2 1
2
Southern University at New Orleans, Department of Natural Sciences, 6801 Press Drive, New Orleans, LA 70126
[email protected] Department of Mathematical and Computing Sciences, Concordia University College of Alberta, Edmonton, AB, Canada T5B 4E4, Adjunct professor at the Department of Computer Science, University of Saskatchewan, SK, Canada
[email protected]
Abstract. We display the performance of the technique called Method of Variational Imbedding for solving the inverse problem of coefficient identification in Euler-Bernoulli equation from over-posed data. The original inverse problem is replaced by a minimization problem. The EulerLagrange equations comprise an eight-order equation for the solution of the original equation and an explicit system of equations for the coefficients of the spline approximation of the unknown coefficient. Featuring examples are elaborated numerically. The numerical results confirm that the solution of the imbedded problem coincides with the exact solution of the original problem within the order of approximation error.
1
Introduction
Consider the simplest form of Euler-Bernoulli equation d2 d2 u σ(x) = f (x), 0 ≤ x ≤ 1. dx2 dx2
(1)
The function f (x) represents the transversely distributed load. The coefficient σ(x), called flexural rigidity, is the product of the modulus of elasticity E and the moment of inertia I of the cross-section of the beam about an axis through its centroid at right angles to the cross-section. If the coefficient σ(x) > 0 and the right-hand side function f (x) ≥ 0 are given, under proper initial and/or boundary conditions, the problem possesses a unique solution, usually referred as a direct solution. In practice, there exist lots of interesting problems, in which the coefficient σ(x) is not exactly known. In reality, Euler-Bernoulli equation models a tensioned “beam”. Under environmental loads, caused by environmental phenomena such as wind, waves, current, tides, earthquakes, temperature, ice, the structure of the I. Lirkov, S. Margenov, and J. Wa´ sniewski (Eds.): LSSC 2009, LNCS 5910, pp. 588–595, 2010. c Springer-Verlag Berlin Heidelberg 2010
Inverse Problem for Coefficient Identification in Euler-Bernoulli Equation
589
ingredients of the “beam” is changing. Usually it is expensive, even not possible, to measure the changes of the properties of the materials directly. On the other hand, the changes in the physical properties of the materials cause changes in the coefficient σ in equation (1) and, respectively, changes in the solution. Thus, a new, so called inverse, problem appears: to find simultaneously the solution u and the coefficient σ of the Euler-Bernoulli equations. A method for transforming an inverse problem into a correct direct problem, but for a higher-order equation, was proposed in [1] and called Method of Variational Imbedding (MVI). The idea of MVI is to replace the incorrect problem with a well-posed problem for minimization of a quadratic functional of the original equations, i.e. we “embed” the original incorrect problem in a higher-order boundary value problem which is well-posed (see [1,7,8]). For the latter, a difference scheme and a numerical algorithm for its implementation can easily be constructed. The advantage of MVI comparing to a regularization method (see, for example, [10,4]) is that there are no “boundary layers” at the two ends of the interval as it was observed for a similar problem in [5]. Recently, in [9], the MVI was applied to the problem for identifying the coefficient σ in (1) in the case when the coefficient is a piecewise constant function. In the present work we are considering the case when the coefficient is a piecewise linear function. Although this paper is focused on a fourth order ordinary differential equation, the proposed method can be generalized for identification of coefficient in partial differential equations. Similar to the procedure proposed here, the approach for the identification of a coefficient in parabolic partial differential equation is given in [2]. In [8] MVI was successfully applied to the problem for identification of a coefficient in elliptic partial differential equation. The paper is organized as follows. In Section 2 the inverse problem for identification of the unknown coefficient is formulated. The application of the MVI to the inverse problem is described in Section 3. The numerical scheme is given in Section 4. Illustration of the constructed numerical scheme is given in Section 5.
2
Inverse Problem Formulation
Consider the Euler-Bernoulli equation (1) where the function f (x) is given. We expect that the functions under consideration are as many time differentiable as necessary. If the coefficient σ is not given, in order to identify it one needs additional information. Suppose the solution satisfies the conditions u(0) = α0,0 ,
u(1) = α1,0 ,
(2)
u (0) = α0,1 ,
u (1) = α1,1 ,
(3)
u (0) = α0,2 , u(ξi ) = γi ,
u (1) = α1,2 , i = 1, 2, . . . , n − 1,
(4) (5)
where the points 0 < ξi < 1 are given. We suppose that the coefficient σ is a piecewise linear function σ(x) = σi (x) = ai + bi (x − ξi−1 ) for ξi−1 < x < ξi , i = 1, 2, . . . , n,
(6)
590
T.T. Marinov and R. Marinova
where the constants ai , bi , are unknown (ξ0 = 0, ξn = 1). The number of unknown constants ai , bi is 2n. On the other hand, the additional number of conditions in (2)-(5) for the fourth order ordinary differential equation is n + 1 = (n + 5) − 4. Therefore, if we add the condition for continuity of the function σ, i.e., σi (ξi ) = σi+1 (ξi ),
(7)
i = 1, 2, . . . , n − 1 the number of the conditions is exactly equal to the number of unknown constants. There may be no solution (u, σ), satisfying all of the conditions (2)–(5), for arbitrary αk,l , (for k = 0, 1 and l = 0, 1, 2), and γi , (for i = 1, 2, . . . , n − 1). For this reason, we suppose that the problem is posed correctly after Tikhonov, [10] i.e., it is known a-priori that the solution of problem exists. In other words, we assume that the data in the boundary conditions (2)–(5) have “physical meaning” and, therefore, the solution exists. The problem is how to convert this additional information to the missing information on the coefficients. The solution approach proposed here is a generalization of the implementation of MVI to a similar problem given in [6] for identification of a piecewise coefficient of two parts (see also [7] and [8]), and continuation of the idea proposed in [9].
3
Variational Imbedding
Following the idea of MVI, we replace the original problem with the problem of minimization of the functional 1
1 I(u, σ) =
A (u, σ)dx = 2
0
d2 dx2
2 d2 u σ(x) 2 − f (x) −→ min , dx
(8)
0
where u satisfies the conditions (2)–(5), σ is an unknown piecewise function, defined with equation (6). The functional I(u, σ) is a quadratic and homogeneous function of A(u, σ) and, hence, it attains its absolute minimum if and only if A(u, σ) ≡ 0. In this sense there is an one-to-one correspondence between the original equation (1) and the minimization problem (8). Since σ(x) is a piecewise linear function, we can rewrite the functional I as ξ 2 n i 2 d d2 u σi − f (x) −→ min . I(u, σ) = dx2 dx2 i=1
(9)
ξi−1
The necessary condition for minimization of the functional I is expressed by the Euler-Lagrange equations for the functions u(x) and σ(x). 3.1
Equation for u
The Euler-Lagrange equation with respect to the function u reads 2 d2 d2 d d2 d2 d2 u σ A= σ σ − f (x) = 0. dx2 dx2 dx2 dx2 dx2 dx2
(10)
Inverse Problem for Coefficient Identification in Euler-Bernoulli Equation
591
Therefore, in each interval ξi−1 < x < ξi , the function u satisfies the equation d4 d2 u d2 d2 d2 σi 4 σi 2 = σi 2 f (x), 2 2 dx dx dx dx dx
(11)
under the boundary conditions (2)–(5). Since each equation (11) is of eight order we need some additional boundary conditions. From the original problem we have d2 d2 u σi − = f (ξi ), dx2 dx2 ξi
d2 u d2 σ + = f (ξi ), i+1 dx2 dx2 ξi
(12)
where ξi− and ξi+ stand for the left-hand and right-hand derivatives and d4 d2 u d2 σ f (ξi ), − = i dx4 dx2 ξi dx2 2 2 d u d u σi 2 ξ− = σi+1 2 ξ+ , dx i dx i
d4 d2 d2 u σ f (ξi ), + = i+1 dx4 dx2 ξi dx2 2 2 d d u d d u σi 2 ξ− = σi+1 2 ξ+ , dx dx i dx dx i
(13) (14)
where i = 1, 2, . . . , n − 1. 3.2
Equation for σ
The problem is coupled by the equation for σ. Since σ is a piecewise function, for the functional I one arrives at the problem for minimization of the function q(a1 , . . . , an , b1 , . . . , bn ) =
n
(Ai11 a2i +2Ai12 ai bi +Ai22 b2i +Ai1 ai +Ai2 bi +Ai0 ), (15)
i=1
with respect to a1 , . . . , an , b1 , . . . , bn under the continuity conditions (7) which we rewrite in the form ai + bi (ξi − ξi−1 ) − ai+1 = 0.
(16)
Minimizing the function q under the constraints (16) we introduce Lagrange multipliers μi and introduce the function: Q=
n i 2
A11 ai + 2Ai12 ai bi + Ai22 b2i + Ai1 ai + Ai2 bi + Ai0
(17)
i=1
+
n−1
μi (ai + bi (ξi − ξi−1 ) − ai+1 ) .
i=1
We obtain the following five-diagonal system of linear equations for ai , bi , μi : ∂Q = 2A111 a1 + 2A112 b1 + A11 + μ1 = 0, ∂a1 ∂Q = 2A112 a1 + 2A122 b1 + A12 + μ1 ξ1 = 0, ∂b1 ∂Q = a1 + b1 ξ1 − a2 = 0, ∂μ1
(18) (19) (20)
592
T.T. Marinov and R. Marinova
and ∂Q = 2Ai11 ai + 2Ai12 bi + Ai1 + μi − μi−1 = 0, ∂ai ∂Q = 2Ai12 ai + 2Ai22 bi + Ai2 + μi (ξi − ξi−1 ) = 0, ∂bi ∂Q = ai + bi (ξi − ξi−1 ) − ai+1 = 0, ∂μi
(21) (22) (23)
for i = 2, . . . , n − 1, and ∂Q = 2An11 an + 2An12 bn + An1 − μn−1 = 0, ∂an ∂Q = 2An12 an + 2An22 bn + An2 = 0. ∂bn
4
(24) (25)
Difference Scheme
We solve the formulated eight-order boundary value problem using finite differences. It is convenient for the numerical treatment to rewrite the eight order equation (11) as a system of four second order equations. In each of the subintervals [ξi−1 , ξi ], i = 1, 2, . . . , n we solve the following system of four equations u = v, 4.1
(σv) = w,
w = z,
(σz) = (σf ) .
(26)
Grid Pattern and Approximations
We introduce a regular mesh with step hi (see Fig. 1) in each of the subintervals [ξi−1 , ξi ], i = 1, 2, . . . , n, allowing to approximate all operators with standard central differences with second order of approximation. i−1 For the grid spacing in the interval [ξi−1 , ξi ] we have hi ≡ ξin−ξ , where ni i −2 is the total number of grid points for the i-th interval. Then, the grid points are defined as follows: xij = ξi−1 + (j − 1.5)hi for j = 1, 2, . . . , ni . Let us introduce the notation uij = u(xij ) for i = 1, 2, . . . , n, and j = 1, . . . , ni . We employ symmetric central differences for approximating the differential operators. The differential operators in the boundary conditions are approximated with second order formulae using central differences and half sums.
•
ξi−1 | •
•
•
xi1
xi2
xi3
xi4
...
•
•
•
xini −3
xini −2
xini −1
Fig. 1. The mesh used in our numerical experiments
ξi |
• xini
Inverse Problem for Coefficient Identification in Euler-Bernoulli Equation
4.2
593
General Construction of the Algorithm
(I) With the obtained “experimentally observed” values of αk,l , (for k = 0, 1 and l = 0, 1, 2, 3), and γi , (for i = 1, 2, . . . , n − 1) the eight-order boundary value problem (10), (2)–(5), (12)–(14) is solved for the function u with an initial guess for the function σ. (II) The approximation of the function σ for the current iteration is calculated from the system (18)-(25). If the l2 -norm of the difference between the new and the old field for σ is less than ε0 then the calculations are terminated. Otherwise, the algorithm returns to (I) with the new calculated σ.
5
Numerical Experiments
The accuracy of the difference scheme is validated with tests involving various grid spacing h. We have run a number of calculations with different values of the mesh parameters and verified the practical convergence and the O(h2 ) approximation of the difference scheme. For all calculations presented here, ε0 = 10−12 and the initail values for the coefficients in σ are ai = 0.5 and bi = 1.5. Here we illustrate the developed difference scheme using two coefficient identification problems. 5.1
Linear Coefficient
Consider the case when σ(x) = 1 + x and f (x) = (3 + x) exp(x) for which under proper boundary conditions the exact solution is u(x) = exp(x).
(27)
For this test we let the number of intervals n in the definition (6) of σ equal to 1, i.e., n = 1. In other words, we know a-priori that the coefficient is a linear function. The goal of this test is to confirm second order of approximation of the proposed scheme. The values of the identified coefficient σ = a + bx with four different steps h are given in Table 1. The rates of convergence, calculated as a2h − aexact b2h − bexact , , rate = log (28) rate = log2 2 ah − aexact bh − bexact Table 1. Obtained values of the coefficients a and b, and the rate of convergence for four different values of the mesh spacing h exact 0.1 0.05 0.025 0.0125
a 1.0 0.996671245 0.999166953 0.999791684 0.999947911
|a − aexact | rate b |b − bexact | rate — — 1.0 — — 3.328754087E-03 — 0.998334860 1.665139917E-03 — 8.330468464E-04 1.99851 0.999583428 4.165712337E-04 1.99901 2.083158588E-04 1.99962 0.999895839 1.041605270E-04 1.99975 5.208818084E-05 1.99974 0.999973961 2.603891276E-05 2.00007
594
T.T. Marinov and R. Marinova
Table 2. l2 norm of the difference u − uexact and the rate of convergence for four different values of the mesh spacing h 0.1 0.05 0.025 0.0125
||u − uexact ||l2 rate 1.018490804573E-04 — 2.367271717866E-05 2.10514 5.686319062981E-06 2.05766 1.392183281145E-06 2.03015
Table 3. l2 norm of the differences u−uexact and σ −σexact and the rate of convergence for four different values of the mesh spacing h 0.01 0.005 0.0025 0.00125
||σ − σexact ||l2 rate ||u − uexact ||l2 rate 1.069187970901E-08 — 4.552581248664E-05 — 2.584956511555E-09 2.0483 1.138599436962E-05 1.99942 6.379806228978E-10 2.01856 9.614682942106E-07 3.56588 1.586000035587E-10 2.00812 1.624871713448E-07 2.56491
are also shown in Table 1. Similar results for the l2 norm of the difference between the exact and the numerical values of the function u are presented in Table 2. This test clearly confirms the second order of convergence of the numerical solution to the exact one. 5.2
Linear Coefficient as a Piecewise Linear Function
Consider again the solution (27) but now we do not assume a-priori that the coefficient is the same function in the whole interval. We identify the coefficient as a piecewise linear function, as defined in (6), for n = 10. In each subinterval, the expected values of the coefficient σ are ai = 1 and bi = 1. For this test we performed a number of calculations with different spacings h. The l2 norm of the difference between the exact and the numerical values of the functions u and σ, and the rate of convergence, calculated using the norm of the difference, for four different steps h, are given in Table 3. The fact that the numerical solution approximates the analytical one with O(h2 ) is clearly seen from the Table 3.
Acknowledgment This work was partially supported by MITACS and NSERC.
References 1. Christov, C.I.: A method for identification of homoclinic trajectories. In: Proc. 14th Spring Conf., Sunny Beach, Sofia, Bulgaria. Union of Bulg. Mathematicians, Sofia (1985)
Inverse Problem for Coefficient Identification in Euler-Bernoulli Equation
595
2. Christov, C.I., Marinov, T.T.: Identification of heat-conduction coefficient via method of variational imbedding. Mathematics and Computer Modeling 27(3), 109–116 (1998) 3. Hadamard, J.: Le Probleme de Cauchy et les Equations aux Derivatives Partielles Lineares Hyperboliques, Hermann, Paris (1932) 4. Latt`es, R., Lions, J.L.: M`ethode de quasi-reversibilite et applications, Dunod, Paris (1967) 5. Lesnic, D., Elliott, L., Ingham, D.B.: Analysis of coefficient identification problems associated to the inverse Euler-Bernoulli beam theory. IMA J. of Applied Math. 62, 101–116 (1999) 6. Marinov, T.T., Christov, C.I.: Identification the unknown coefficient in ordinary differential equations via method of variational imbedding. In: Deville, M., Owens, R. (eds.) 16th IMACS World Congress 2000 Proceedings, paper 134–2 (2000) ISBN 3-9522075-1-9 7. Marinov, T.T., Christov, C.I., Marinova, R.S.: Novel numerical approach to solitary-wave solutions identification of Boussinesq and Korteweg-de Vries equations. Int. J. of Bifurcation and Chaos 15(2), 557–565 (2005) 8. Marinov, T.T., Marinova, R.S., Christov, C.I.: Coefficient identification in elliptic partial differential equation. In: Lirkov, I., Margenov, S., Wa´sniewski, J. (eds.) LSSC 2005. LNCS, vol. 3743, pp. 372–379. Springer, Heidelberg (2006) 9. Marinov, T.T., Vatsala, A.: Inverse Problem for Coefficient Identification in EulerBernoulli Equation. Computers and Mathematics with Applications 56(2), 400–410 (2008) 10. Tikhonov, A.N., Arsenin, V.: Methods for Solving Incorrect Problems, Nauka, Moscow (1974)
A Fully Implicit Method for Fluid Flow Based on Vectorial Operator Splitting Rossitza S. Marinova1, , Raymond Spiteri2 , and Eddy Essien1 1
Department of Mathematical and Computing Sciences, Concordia University College of Alberta, 7128 Ada Boulevard, Edmonton, AB T5B 4E4, Canada
[email protected] http://www.math.concordia.ab.ca/marinova 2 Department of Computer Science, University of Saskatchewan 176 Thorvaldson Bldg, 110 Science Place, Saskatoon, SK S7N 5C9 Canada
Abstract. A fully implicit finite difference method for the incompressible Navier–Stokes equations is presented that conserves mass, momentum, the square of velocity components, and, in the inviscid limit, kinetic energy. The method uses a staggered arrangement of velocity and pressure on a structured Cartesian grid and retains its conservation properties for both uniform and non-uniform grids. The convergence properties of the method are confirmed via numerical experiments.
1
Introduction
Flows with high Reynolds numbers or complex geometries are of great interest to industry; hence there is significant demand for robust and stable algorithms and software even at the expense of increased computational cost. Fully implicit time-stepping methods are generally more robust and stable than explicit and semi-explicit methods. Therefore, as mentioned in [7], fully implicit methods should be further investigated and developed. The most popular time-stepping methods for the Navier–Stokes equations are the so-called projection or operator-splitting methods (e.g., fractional step or pressure correction methods) are not fully implicit; see [7] and [6]. Decoupling velocity and pressure reduces the system into simpler sub-problems, but the choice of boundary conditions for the pressure in these procedures is problematic. Moreover, the explicit element introduced by this decoupling requires small time steps. Although operator-splitting methods can work well, they must be used with care in terms of how well the overall solution algorithm behaves. They are usually not suitable for high Reynolds number flows and long time calculations because a small time step is required for accuracy and stability. Because a fully implicit approach leads to a system of nonlinear equations that may be singular, special spatial discretization or stabilization techniques are needed. On the other hand, strongly coupled solution strategies need to deal with
Adjunct professor at the Department of Computer Science, University of Saskatchewan, SK, Canada.
I. Lirkov, S. Margenov, and J. Wa´ sniewski (Eds.): LSSC 2009, LNCS 5910, pp. 596–603, 2010. c Springer-Verlag Berlin Heidelberg 2010
A Fully Implicit Method for Fluid Flow
597
large nonlinear algebraic systems that must be solved iteratively. Although fully implicit schemes may lead to unconditionally stable solvers, these approaches can be too expensive for practical computations. In this paper, we employ coordinate splitting, a generalization of the Douglas– Rachford scheme [4], leaving the system coupled at each fractional step to allow the satisfaction of the boundary conditions and avoid the introduction of artificial boundary conditions for the pressure. For the temporal discretization, we use the backward Euler scheme.
2 Problem Statement
2.1 Incompressible Navier–Stokes Equations
Consider the incompressible Navier–Stokes equations in dimensionless form
$$\frac{\partial u}{\partial t} + u\cdot\nabla u = \nu\nabla^{2}u - \nabla p + g \qquad (1)$$
coupled with the continuity equation, also called the incompressibility constraint,
$$\operatorname{div} u = \nabla\cdot u = 0 \qquad (2)$$
on Ω × (0, T), where Ω is a bounded compact domain with a piecewise smooth boundary ∂Ω. Here u = u(x, t) = (u, v, w) is the fluid velocity at position x ∈ Ω and time t ∈ (0, T), T is given, p = p(x, t) is the fluid kinematic pressure, ν = 1/Re is the kinematic viscosity, Re is the Reynolds number, g is an external force, ∇ is the gradient operator, and ∇² is the Laplacian operator.
2.2 Initial and Boundary Conditions
We assume the initial condition
$$u\big|_{t=0} = u_0(x), \qquad (3)$$
where ∇ · u 0 = 0. Remark: In order to avoid singularities, initial and boundary conditions should agree at t = 0 and x ∈ ∂Ω. Navier–Stokes equations can be classified as partial differential-algebraic equations (PDAEs), e.g., [1]. The challenges in the numerical solution are well known; they are connected with the fact that the Navier–Stokes equations are not an evolutionary system of Cauchy–Kovalevskaya type and that the pressure is an implicit function responsible for the satisfaction of the continuity equation. Furthermore, no boundary conditions on the pressure can be imposed on the rigid boundaries. This creates formidable obstacles for the construction of fully implicit schemes.
3 Singularity of Direct Fully Implicit Schemes
We first write the system (1), (2) in the following form,
$$\frac{\partial u}{\partial t} + (C + L)u + \nabla p = g, \qquad \nabla\cdot u = 0, \qquad (4)$$
where C is the nonlinear convection operator and L = −(1/Re)∇² is the linear viscosity operator. The use of a fully implicit approach such as the backward Euler method for time stepping leads to the solution of the following nonlinear stationary problem at each time step:
$$\frac{u(x,t+\Delta t)-u(x,t)}{\Delta t} + (C+L)\,u(x,t+\Delta t) + \nabla p(x,t+\Delta t) = g(x,t+\Delta t), \qquad \nabla\cdot u(x,t+\Delta t) = 0, \qquad (5)$$
or
$$\frac{u(x,t+\Delta t)}{\Delta t} + (C+L)\,u(x,t+\Delta t) + \nabla p(x,t+\Delta t) = \frac{u(x,t)}{\Delta t} + g(x,t+\Delta t), \qquad \nabla\cdot u(x,t+\Delta t) = 0. \qquad (6)$$
The choice of discretization in space is crucial for the stability of the scheme. Because the equations to be solved are conservation laws, it is highly desirable that the numerical scheme also preserve these laws [9]. We choose approximations of the differential operators for which the numerical scheme preserves the integral properties of the respective differential problem. Constructing such schemes, especially in the case of operator splitting, is a challenging task. After discretizing the system of equations (6) in space, one arrives at a linear system, which can be written in matrix form as
$$\begin{pmatrix} N & Q \\ Q^{T} & 0 \end{pmatrix} \begin{pmatrix} u \\ p \end{pmatrix} = \begin{pmatrix} f \\ 0 \end{pmatrix}. \qquad (7)$$
In the case of a fully implicit approach, the matrix N is not symmetric positive definite, in contrast to systems arising from an explicit treatment of the convective term. Stabilization techniques are usually based on perturbed versions of the continuity equation. There exist many variations of pressure stabilization techniques [2]; see [7] and [6] for reviews. Although not originally derived as stabilization methods, the artificial incompressibility method [3] and the penalty method [10] can also be placed in this category. All these methods aim at stabilizing pressure oscillations and allowing standard grids and elements. They are usually used with finite element discretizations. The most popular time-stepping methods, including fully implicit schemes such as (5), typically do not solve the resulting system in a fully coupled manner. Velocity and pressure are usually decoupled, and this requires imposition of
pressure boundary conditions. Solving the system (7) in a fully coupled approach is preferred because it preserves the implicitness of the scheme, but such solvers require further development. Direct linear solvers, such as Gaussian elimination, are not efficient for 3D problems. Iterative strategies, such as BiCGStab and GMRES, combined with suitable preconditioners, can be effectively used for solving such systems of equations.
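As a rough illustration of such a fully coupled strategy, the sketch below assembles a small system with the block structure of (7) and solves it with GMRES. The blocks N and Q here are simple 1-D stand-ins, not the discretization used in this paper.

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

# Toy 1-D stand-in for (7); N and Q are NOT the paper's operators, only an
# illustration of solving the coupled velocity-pressure system iteratively.
m = 40                      # number of pressure unknowns
n = m + 1                   # number of velocity unknowns
e = np.ones(n)
N = sp.diags([-e[:-1], 2.0 * e, -e[:-1]], [-1, 0, 1], format="csr")  # viscous-like block
N = N + 0.3 * sp.diags(e[:-1], 1)                                     # nonsymmetric convective part
Q = sp.diags([np.ones(m), -np.ones(m)], [0, -1], shape=(n, m), format="csr")  # gradient-like block

A = sp.bmat([[N, Q], [Q.T, None]], format="csr")    # saddle-point matrix as in (7)
b = np.concatenate([np.ones(n), np.zeros(m)])        # right-hand side [f; 0]

x, info = spla.gmres(A, b, restart=80, maxiter=400)
print("GMRES flag:", info, "residual:", np.linalg.norm(A @ x - b))
```

In practice a preconditioner (e.g., a block-diagonal or pressure Schur complement approximation) would be supplied to GMRES; the sketch omits it for brevity.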
4 Balanced Pressure Equation
A formulation with a pressure equation is preferable to the continuity equation because we can construct a solver for the resulting nonlinear stationary problem that is not only robust with respect to the physical and numerical parameters but also computationally efficient. For this reason, we use a special pressure equation, which is equivalent to the standard Poisson equation for pressure on the differential level. A similar form of pressure equation is presented in [5]. In this paper, the Laplacian of the pressure in a pressure equation is balanced using the divergence of the momentum equations,
$$\nabla\cdot u = \varepsilon\,[\nabla^{2}p + \nabla\cdot(Cu - g)]. \qquad (8)$$
On the discrete level, the right-hand side of equation (8) does not necessarily vanish. Here, ε is a balancing parameter that can be varied. As noted in [5], this parameter is not related to the time step Δt. In fact, the formulation of the problem (1), (2) is equivalent to the formulation with the pressure equation (1), (8) (or (9)) if and only if the continuity equation is satisfied on the boundary. The equation we use is similar to (8). The balancing coefficient in our pressure equation is equal to the viscosity in the momentum equation, ν = 1/Re, and we also use a balancing coefficient γ for the term ∇·u, so that the modified pressure equation becomes
$$\gamma\,(\nabla\cdot u) = \nu\,[\nabla^{2}p + \nabla\cdot(Cu - g)]. \qquad (9)$$
We find that using a balanced pressure equation such as (9) in combination with conservative difference approximations (see next section) improves the convergence of (7) considerably. Conservative discretizations are not considered in [5], nor is a splitting procedure used to improve the efficiency of the solver for linear systems of equations.
5 Difference Problem
5.1 Analytical Requirements
We want a difference scheme that satisfies the following analytical requirements: 1. Conservation properties Following [9], we call T (ϕ) conservative if it can be written in divergence form T [·] = ∇ · (S[·]), where S is an operator that can be used to write the
system of equations in an equivalent form on the continuous level provided the continuity equation is satisfied. In general, however, these forms are not equivalent on the discrete level. It is known that
(a) The mass is conserved a priori because the continuity equation (2) appears in divergence form.
(b) The momentum is conserved a priori if the continuity equation (2) is satisfied: the pressure and viscous terms are conservative a priori; the convective term is also conservative a priori if ∇·u = 0.
(c) The square of a velocity component ϕ² is of importance in the case of coordinate splitting. If the convective term is written in the skew-symmetric form C[u] = ½[∇·(uu) + u·∇u], then it conserves ϕ². For instance, in direction x
$$\varphi\, C_x[\varphi] = \frac{1}{2}\,\varphi\left(\frac{\partial(\varphi u)}{\partial x} + u\,\frac{\partial\varphi}{\partial x}\right) = \frac{1}{2}\,\frac{\partial(\varphi^2 u)}{\partial x}. \qquad (10)$$
The convective term in skew-symmetric form is conservative a priori, whereas the pressure and viscous terms are not conservative.
(d) The kinetic energy K := ½(u² + v² + w²): the skew-symmetric convective term is energy conservative, the pressure term is energy conservative if the continuity equation is satisfied, and the viscous term is not energy conservative.
In addition to conservation we also ensure that the scheme satisfies the following properties:
2. Compatibility for Poisson's equation for pressure.
3. Commutativity of the Laplacian and divergence operators.
4. Consistency between gradient and divergence operators.
5. A velocity field that is solenoidal at each time step, i.e., ∇·u = 0.
Satisfaction of properties 1–5 leads to strong L² stability of the scheme [8].
5.2 Coordinate Splitting and Spatial Discretization
We consider a flow in a region with rectilinear boundaries in Cartesian coordinates. The boundary conditions derived from the continuity equation ∇·u = 0 in the three-dimensional case read
$$\left.\frac{\partial u}{\partial x}\right|_{(x=c_1,y,z)} = \psi_1(y,z), \quad \left.\frac{\partial v}{\partial y}\right|_{(x,y=c_2,z)} = \psi_2(x,z), \quad \left.\frac{\partial w}{\partial z}\right|_{(x,y,z=c_3)} = \psi_3(x,y), \qquad (11)$$
where (x = c₁, y, z), (x, y = c₂, z), and (x, y, z = c₃) are boundary points, cᵢ are constants, and ψᵢ, i = 1, 2, 3, are given functions. The stationary equation in general form can be written as
$$A[\varphi] = G, \qquad A = L + P + N, \qquad (12)$$
where P[u] = ∇p and P[p] = γ∇·u, C = Cx + Cy + Cz, and G contains the source term and terms coming from the time discretization. Equation (12) is converted to an evolution system by adding derivatives with respect to an artificial time s,
$$\frac{\partial\varphi}{\partial s} + A[\varphi] = G. \qquad (13)$$
We split the operator A = A₁ + ··· + A_l in (13) as follows:
$$\frac{\varphi^{n+1/l}-\varphi^{n}}{\Delta s} = A_1\varphi^{n+1/l} + A_1\varphi^{n} + \sum_{i=2}^{l} A_i\varphi^{n} + G^{n},$$
$$\frac{\varphi^{n+i/l}-\varphi^{n+(i-1)/l}}{\Delta s} = A_i\bigl(\varphi^{n+i/l}-\varphi^{n}\bigr), \qquad i = 2,\ldots,l,$$
where l = 2, 3 in 2D and 3D, respectively. The splitting procedure is a generalization of the scheme of Douglas and Rachford [4]. Standard three-point differences are used for the second derivatives, which inherit the negative definiteness of the respective differential operators. The first derivatives for pressure are discretized with second-order differences. The grid is staggered in each direction. Also, we keep the coupling between the pressure and the respective velocity component through the boundary conditions at each fractional step. This allows us to construct a robust implicit splitting scheme with strong L2 -stability.
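The structure of such a fractional-step sweep can be illustrated with the classical two-operator Douglas–Rachford (stabilizing correction) scheme, which the procedure above generalizes. The sketch below is generic: the operators A1, A2 and the 1-D test data are stand-ins, and it omits the boundary-condition coupling that the paper's scheme keeps at each fractional step.

```python
import numpy as np

def douglas_rachford_step(phi, A1, A2, G, ds):
    """One Douglas-Rachford (stabilizing correction) step for
    d(phi)/ds = (A1 + A2) phi + G, treating each split operator
    implicitly in its own fractional step."""
    I = np.eye(phi.size)
    # First fractional step: A1 implicit, A2 explicit.
    rhs1 = phi + ds * (A2 @ phi + G)
    phi_star = np.linalg.solve(I - ds * A1, rhs1)
    # Second fractional step: correct the A2 part implicitly.
    rhs2 = phi_star - ds * (A2 @ phi)
    return np.linalg.solve(I - ds * A2, rhs2)

# Tiny demonstration: two 1-D diffusion-like operators acting on a vector.
n = 50
main = -2.0 * np.ones(n)
off = np.ones(n - 1)
Lap = np.diag(main) + np.diag(off, 1) + np.diag(off, -1)
A1, A2 = 0.6 * Lap, 0.4 * Lap          # an artificial additive splitting
phi = np.random.default_rng(0).standard_normal(n)
G = np.zeros(n)
for _ in range(200):                   # march in the artificial time s
    phi = douglas_rachford_step(phi, A1, A2, G, ds=0.05)
print("norm after pseudo-time marching:", np.linalg.norm(phi))
```

Each fractional step requires only a solve with I − Δs Aᵢ, which is the main computational advantage of the splitting over solving with the full operator.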
6 Numerical Results
We now verify the convergence properties of the method in space and time. To confirm the discretization in space, we perform calculations on a uniform mesh h = hx = hy = hz and the following analytic solution
$$u = \begin{pmatrix} u \\ v \\ w \end{pmatrix} = \begin{pmatrix} \sqrt{2}\,\exp(-\sqrt{2}x)\cos(y+z) \\ \exp(-\sqrt{2}x)\sin(y+z) \\ \exp(-\sqrt{2}x)\sin(y+z) \end{pmatrix}, \qquad p = -\exp(-2\sqrt{2}x),$$
with Re = 100 in the unit square. No boundary condition is imposed on p. We only present results for the velocity component u because they are representative of the three components. We use maximum and average errors as accuracy measures:
$$\mathrm{Error}_{f,\max} = \max_{i,j,k}\bigl|f^{\mathrm{num.}}_{i,j,k} - f^{\mathrm{anal.}}_{i,j,k}\bigr|, \qquad \mathrm{Error}_{f,\mathrm{aver.}} = \frac{\sum_{i,j,k}\bigl|f^{\mathrm{num.}}_{i,j,k} - f^{\mathrm{anal.}}_{i,j,k}\bigr|}{N_x N_y N_z}, \qquad (14)$$
where f can be any of the functions u, v, w, or p. The discretization errors for u are presented in Table 1 and Figure 1. In the last column of Table 1, the convergence rate $\log_2\bigl(\mathrm{Error}_{f,\max}(2h)/\mathrm{Error}_{f,\max}(h)\bigr)$ is reported.
Table 1. Discretization Error: h = hx = hy = hz

1/h   h^2           Error_u,max.   Error_u,aver.   rate
8     1.56250E-02   3.20504E-04    4.64364E-05     -
16    3.90625E-03   1.14828E-04    2.06725E-05     1.480867308
32    9.76563E-04   3.36453E-05    6.70046E-06     1.770999827
64    2.44141E-04   8.95454E-06    1.87366E-06     1.909714065
Fig. 1. Discretization error as a function of the mesh spacing h = hx = hy = hz (•– h2 ; – maximum error; – average error
Fig. 2. Divergence Δt = 0.1; 0.2; 0.4; 0.8
It can be seen that the rate of convergence approaches second order; i.e., the discretization error is of order O(h²) in space.
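The reported rates can be reproduced directly from the maximum errors in Table 1:

```python
import math

# Maximum errors for u from Table 1, for 1/h = 8, 16, 32, 64.
err_max = [3.20504e-04, 1.14828e-04, 3.36453e-05, 8.95454e-06]

# rate = log2(Error(2h) / Error(h)); it approaches 2 as the mesh is refined.
rates = [math.log2(err_max[k - 1] / err_max[k]) for k in range(1, len(err_max))]
print(rates)   # approximately [1.48, 1.77, 1.91]
```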
In order to validate the 3D unsteady algorithm, we perform tests by using the following 3D analytical solution of the incompressible Navier–Stokes equations
$$u = v = w = e^{t}, \qquad p = (x + y + z)\,e^{t}. \qquad (15)$$
The results, summarized in Figure 2, confirm the first-order convergence in time.
7 Conclusion
A fully implicit method for the numerical solution of the incompressible Navier– Stokes equations is presented. The most important characteristic of the method is its stability due to the implicit treatment of the boundary conditions and conservative spatial discretization. Future work will focus on implementation of higher-order time-stepping and comparison with existing approaches.
Acknowledgment This work was partially supported by MITACS and NSERC.
References 1. Ascher, U.M., Petzold, L.R.: Computer Methods for Ordinary Differential Equations and Differential-Algebraic Equations. SIAM, Philadelphia (1998) 2. Brezzi, F., Fortin, M.: Mixed and Hybrid Finite Element Methods. Springer, Berlin (1991) 3. Chorin, A.J.: Numerical solution of the Navier-Stokes equations. Mathematics of Computation 22(104), 745–762 (1968) 4. Douglas, J., Rachford, H.H.: On the numerical solution of heat conduction problems in two and three space variables. Trans. Amer. Math. Soc. 82, 421–439 (1956) 5. Hafez, M., Soliman, M.: Numerical solutions of the incompressible Navier-Stokes equations in primitive variables, pp. 183–201 (1993) 6. Kwak, D., Kiris, C., Kim, C.S.: Computational challenges of viscous incompressible flows. Computers & Fluids 34, 283–299 (2005) 7. Langtangen, H.P., Mardal, K.-A., Winther, R.: Numerical methods for incompressible viscous flow. Advances in Water Resources 25, 1125–1146 (2002) 8. Marinova, R.S., Takahashi, T., Aiso, H., Christov, C.I., Marinov, T.T.: Conservation properties of vectorial operator splitting. Journal of Computational and Applied Mathematics 152(1-2), 289–303 (2003) 9. Morinishi, Y., Lund, T.S., Vasiliev, O.V., Moin, P.: Fully conservative higher order finite difference schemes for incompressible flow. Journal of Computational Physics 143, 90–124 (1998) 10. Reddy, J.N.: On penalty function methods in the finite element analysis of flow problems. International Journal for Numerical Methods in Fluids 18, 853–870 (1982)
Discrete Maximum Principle for Finite Element Parabolic Operators
Miklos E. Mincsovics
Department of Applied Analysis and Computational Mathematics, ELTE, Pázmány Péter sétány I/C, Budapest H-1117, Hungary
[email protected]
Abstract. When we construct continuous and/or discrete mathematical models in order to describe a real-life problem, these models should possess various qualitative properties, which typically arise from some basic principles of the modelled phenomenon. In this paper we investigate this question for the numerical solution of initial-boundary value problems for parabolic equations with nonzero convection and reaction terms with function coefficients in higher dimensions. The Dirichlet boundary condition will be imposed, and we will solve the problem by using linear finite elements and the θ-method. The principally important qualitative properties for this problem are the non-negativity preservation and different maximum principles. We give the conditions for the geometry of the mesh and for the choice of the discretization parameters, i.e., for θ and the time-step sizes, under which these discrete qualitative properties hold. Finally, we give numerical examples to investigate how sharp our conditions are.
1 Introduction
Let Ω ⊂ ℝ^d denote a bounded simply connected domain with a Lipschitz boundary ∂Ω. We use the notations Q_T = Ω × (0, T), Q̄_T = Ω̄ × [0, T], and Γ_T = (∂Ω × [0, T]) ∪ (Ω × {0}) for the parabolic boundary. In this paper we consider the parabolic operator which is defined for the functions v(x, t) ∈ C^{2,1}(Q_T) ∩ C(Q̄_T) and which can be described as
$$L_{a,b,c}\, v = \frac{\partial v}{\partial t} - \operatorname{div}(a\,\operatorname{grad} v) + \langle b, \operatorname{grad} v\rangle + c\,v, \qquad (1)$$
where a, c : Ω → ℝ, b : Ω → ℝ^d and a ∈ C¹(Ω), b, c ∈ C(Ω). The symbol ⟨·,·⟩ is the usual scalar product in ℝ^d. In the sequel we assume that 0 < a_m ≤ a ≤ a_M, ‖b‖ ≤ b_M and 0 ≤ c ≤ c_M hold with the constants a_m, a_M, b_M, c_M; ‖·‖ denotes the norm of ℝ^d induced by the scalar product ⟨·,·⟩. Then the operator L_{a,b,c} satisfies the so-called weak boundary maximum principle (WBMP), which means the following: for any v ∈ dom L_{a,b,c} the relation
$$\max\{0,\ \max_{\Gamma_T} v\} = \max_{\bar{Q}_T} v \qquad \text{for the functions } L_{a,b,c}\, v \le 0 \qquad (2)$$
holds, i.e., the continuous function v on the bounded and closed Q̄_T takes its non-negative maximum on the parabolic boundary Γ_T. For the operator L_{a,b,0} a stricter relation holds, the strong boundary maximum principle (SBMP):
$$\max_{\Gamma_T} v = \max_{\bar{Q}_T} v \qquad \text{for the functions } L_{a,b,0}\, v \le 0. \qquad (3)$$
The operator L_{a,b,c} also has the so-called non-negativity preservation (NP) property, which says that the conditions v|_{Γ_T} ≥ 0 and L_{a,b,c} v ≥ 0 imply v ≥ 0 for any function v ∈ dom L_{a,b,c}. These results can be found in [4]. The weak/strong boundary maximum principle and the non-negativity preservation property are physically based, and their violation contradicts nature. Therefore, their preservation in the discrete models is very important, and it is quite a natural requirement for reliable and meaningful numerical modelling of various real-life phenomena, like the parabolic process formulated above. This problem was already investigated for constant coefficients and in the absence of convection terms, see [3,6,7,8]. In this work we analyze the discrete boundary maximum principles and the NP for the discrete parabolic operator corresponding to (1), where the discretization is done by linear finite elements with respect to the space variable and the one-step θ-method is applied for the time integration. We give the conditions under which these qualitative properties hold.
2 Mesh Operators, the Discrete Boundary Maximum Principles and the Non-negativity Preservation Property
The sets P = {x₁, x₂, ..., x_N} and P_∂ = {x_{N+1}, x_{N+2}, ..., x_{N+N_∂}} consist of different vertices in Ω and on ∂Ω, respectively. We set N̄ = N + N_∂ and P̄ = P ∪ P_∂. Let T be as before and M any given integer. We define the time discretization step-size Δt as Δt = T/M. We introduce the notations R_M = {t = nΔt, n = 1, 2, ..., M−1}, R̄_M = {t = nΔt, n = 0, 1, ..., M} and the sets Q_M = P × R_M, Q̄_M = P̄ × R̄_M, G_M = (P_∂ × R̄_M) ∪ (P̄ × {0}). (The set G_M is the discrete parabolic boundary.) The values ν(x_i, nΔt) of the function ν defined in Q̄_M will be denoted by ν_i^n. Similar notation is applied to the function Lν. We introduce the vectors ν^n = [ν₁^n, ..., ν_{N̄}^n] ∈ ℝ^{N̄}, e = [1, ..., 1] ∈ ℝ^{N̄}.
Definition 1. Linear mappings that map from the space of real-valued functions defined on Q̄_M to the space of real-valued functions defined on Q_M are called discrete (linear) mesh operators.
The discrete version of the boundary maximum principles and the non-negativity preservation property can be formulated as follows.
Definition 2. We say that the discrete mesh operator L satisfies the discrete weak/strong boundary maximum principle (DWBMP/DSBMP) if for any function ν ∈ dom L the relation
$$\max\{0,\ \max_{G_M}\nu\} = \max_{\bar{Q}_M}\nu \qquad / \qquad \max_{G_M}\nu = \max_{\bar{Q}_M}\nu \qquad (4)$$
holds for the mesh-functions Lν ≤ 0.
Definition 3. The discrete mesh operator L is called non-negativity preserving (DNP) if for any ν ∈ dom L such that min_{G_M} ν ≥ 0 and Lν ≥ 0, the relation ν ≥ 0 holds.
In the sequel we analyze the discretization of the operator L_{a,b,c}, defined in (1). We denote the FEM basic functions by φ_i(x) (i = 1, ..., N̄), which have the properties for x ∈ Ω̄
$$\sum_{i=1}^{\bar{N}} \varphi_i(x) = 1, \qquad \varphi_i(x) \ge 0. \qquad (5)$$
Then, introducing the bilinear form on H¹(Ω) as
$$B[u,v] := \int_\Omega \bigl(a\,\langle\operatorname{grad} u, \operatorname{grad} v\rangle + \langle b, \operatorname{grad} u\rangle\, v + c\,uv\bigr)\,dx,$$
we define the elements K_ij and M_ij of the stiffness matrix and mass matrix K, M ∈ ℝ^{N×N̄} as K_ij = B[φ_j, φ_i], M_ij = ∫_Ω φ_j φ_i dx. Then L has the form
$$(L\nu)_i^n = (X_1\nu^n - X_2\nu^{n-1})_i, \qquad i = 1,\ldots,N, \quad n = 1,\ldots,M, \qquad (6)$$
where X₁ = (1/Δt)M + θK and X₂ = (1/Δt)M − (1−θ)K with θ ∈ [0, 1].
We denote by L_{a,b,c} also the two-level discrete mesh operator obtained by finite element space discretization and θ-method time discretization of the operator L_{a,b,c}. For such operators the following theorems describe the relation between DWBMP/DSBMP and DNP; the proofs go similarly as in [7].
Theorem 1. If the discrete mesh operator of type (6) satisfies the DWBMP/DSBMP, then the DNP property is also valid. On the other hand, under the DNP property and the condition K e ≥ 0 / K e = 0 the DWBMP/DSBMP holds.
Theorem 2. Let us assume (5). If c ≥ 0, then K e ≥ 0. In addition, if c ≡ 0, then K e = 0 holds.
Theorems 1 and 2 show that we only need to deal with the DNP of L_{a,b,c}, which is defined by the rectangular matrices X₁ and X₂. These matrices can be partitioned according to the inner and boundary points into a convenient form, namely X₁ = [X₁₀ | X₁∂], X₂ = [X₂₀ | X₂∂]. Here X₁₀ and X₂₀ are square matrices from ℝ^{N×N}, and X₁∂, X₂∂ ∈ ℝ^{N×N_∂}. Introducing the notation λ₀^n = [(L_{a,b,c}ν)₁^n, ..., (L_{a,b,c}ν)_N^n] with an arbitrarily chosen mesh
function ν for the operator L_{a,b,c}, and supposing the regularity of the matrix X₁₀, we arrive at the iteration form
$$\nu_0^n = -(X_{10})^{-1}X_{1\partial}\,\nu_\partial^n + (X_{10})^{-1}X_2\,\nu^{n-1} + (X_{10})^{-1}\lambda_0^n, \qquad n = 1,\ldots,M.$$
Thus, it is easy to see that the next theorem is valid.
Theorem 3. Let us suppose that the matrix X₁₀ is regular. Then L_{a,b,c} possesses the discrete non-negativity preservation property if and only if the following relations hold:
(P1) (X₁₀)⁻¹ ≥ 0,
(P2) −(X₁₀)⁻¹X₁∂ ≥ 0,
(P3) (X₁₀)⁻¹X₂ ≥ 0.
The following statement gives a sufficient condition for (P1)–(P3) in terms of the matrices M and K, see [7].
Theorem 4. Under the assumptions
(P1′) K_ij ≤ 0,  i ≠ j, i = 1, ..., N, j = 1, ..., N̄,
(P2′) Δt(X₁)_ij = M_ij + Δt θ K_ij ≤ 0,  i ≠ j, i = 1, ..., N, j = 1, ..., N̄,   (7)
(P3′) Δt(X₂)_ii = M_ii − Δt(1−θ) K_ii ≥ 0,  i = 1, ..., N,
the non-negativity assumptions (P1)–(P3) in Theorem 3 are satisfied.
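As an illustration of how (P1′)–(P3′) translate into a computable time-step window, the following sketch checks the conditions entrywise for given assembled matrices and returns the Δt interval implied by (P2′) and (P3′). The matrices below are 1-D linear finite element stand-ins, not the 2-D meshes used later in the paper.

```python
import numpy as np

def dt_bounds(M, K, theta, interior):
    """Check (P1')-(P3') and return the admissible time-step interval
    (lower bound from (P2'), upper bound from (P3')), or None if impossible."""
    n = M.shape[0]
    lower, upper = 0.0, np.inf
    for i in interior:
        for j in range(n):
            if i == j:
                if theta < 1.0 and K[i, i] > 0:       # (P3'): M_ii - dt (1-theta) K_ii >= 0
                    upper = min(upper, M[i, i] / ((1.0 - theta) * K[i, i]))
            else:
                if K[i, j] > 0:                        # (P1') violated
                    return None
                if M[i, j] > 0:
                    if K[i, j] == 0:                   # (P2') cannot hold for any time step
                        return None
                    lower = max(lower, M[i, j] / (-theta * K[i, j]))   # assumes theta > 0
    return (lower, upper) if lower <= upper else None

# Stand-in 1-D linear FEM mass and stiffness matrices on a uniform mesh.
nel, h = 10, 0.1
n = nel + 1
M = np.zeros((n, n)); K = np.zeros((n, n))
for e in range(nel):
    idx = [e, e + 1]
    Me = h / 6.0 * np.array([[2.0, 1.0], [1.0, 2.0]])
    Ke = 1.0 / h * np.array([[1.0, -1.0], [-1.0, 1.0]])
    for a in range(2):
        for b in range(2):
            M[idx[a], idx[b]] += Me[a, b]
            K[idx[a], idx[b]] += Ke[a, b]

interior = range(1, n - 1)      # Dirichlet problem: rows of interior vertices only
print(dt_bounds(M, K, theta=1.0, interior=interior))   # lower bound only
print(dt_bounds(M, K, theta=0.5, interior=interior))   # nonempty two-sided interval
```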
3 Conditions for DWBMP/DSBMP of L_{a,b,c}
In this section, based on Theorems 1, 2, and 4, we give conditions under which the assumptions (P1′)–(P3′) are satisfied, in order to guarantee the DWBMP/DSBMP for the operator L_{a,b,c}/L_{a,b,0}. We assume that Ω can be covered by a regular simplicial mesh T_h, and that this mesh is of nonobtuse type, i.e., all the angles made by any faces of each simplex T ∈ T_h are not greater than π/2. Further, we assume that the FEM basic functions φ_i(x) (i = 1, 2, ..., N̄) are the so-called hat functions, which are continuous and linear over each simplex T ∈ T_h and satisfy φ_i(x_j) = δ_ij. In the light of these restrictions on the mesh and the basic functions, we remark the following about Theorem 4:
– Since M_ij = K_ij = 0 (i ≠ j) for the index pairs which determine non-neighbouring vertices, we need to investigate only the remaining index pairs (we will not repeat this in the following).
– (P1′) is one additional restriction on the mesh; (P2′) and (P3′) give a lower and an upper bound for the time-step Δt. Naturally, the lower bound must be smaller than the upper bound; this can be attained by a suitable choice of θ.
– In the case θ = 0 the condition (P2′) cannot be fulfilled. Thus, we fix that θ ∈ (0, 1]. (However, if we use the lumped mass technique, then θ = 0 is possible, too.) In the case θ = 1, (P3′) is automatically fulfilled.
– (P2′) implies (P1′). However, we need to require a strict inequality in (P1′) for the index pairs which determine neighbouring vertices, to make (P2′) possible. Let us denote this modified condition by (P1″). Since we want to get a usable condition for the mesh, we investigate (P1″) instead of (P1′) in the following.
– Finally, we remind the reader that conditions (P1′)–(P3′) are only sufficient but not necessary to guarantee maximum principles.
3.1 Local Conditions for DWBMP/DSBMP of L_{a,b,c}
For this case we define/estimate the elements of the local mass and stiffness matrices similarly as in [2,7,9]. The contributions to the mass matrix M over the simplex T ∈ T_h are
$$M_{ij}|_T = \frac{\operatorname{meas}_d T}{(d+1)(d+2)} \quad (i \ne j), \qquad M_{ii}|_T = \frac{2\operatorname{meas}_d T}{(d+1)(d+2)}. \qquad (8)$$
We estimate the contribution to the stiffness matrix K over the simplex T in the following way. If the simplex T is spanned by the d+1 vertices x_i, and we denote by S_i the (d−1)-dimensional face opposite to the vertex x_i, then cos γ_ij is the cosine of the interior angle between the faces S_i and S_j. Note that (meas_d T) d = (meas_{d−1} S_i) m_i, where m_i is the (Euclidean) distance between S_i and the opposite vertex x_i. Let us introduce the notations a_m(T) = min_T a, a_M(T) = max_T a, b_M(T) = max_T ‖b‖, c_M(T) = max_T c. Then
$$\int_T a\,\langle\operatorname{grad}\varphi_j, \operatorname{grad}\varphi_i\rangle\,dx = -\int_T a\,\|\operatorname{grad}\varphi_j\|\,\|\operatorname{grad}\varphi_i\|\cos\gamma_{ij}\,dx = -\frac{\cos\gamma_{ij}}{m_i m_j}\int_T a\,dx \le -\frac{a_m(T)(\operatorname{meas}_d T)}{m_i m_j}\cos\gamma_{ij}\ (\le 0)$$
in case i ≠ j, otherwise
$$\int_T a\,\langle\operatorname{grad}\varphi_i, \operatorname{grad}\varphi_i\rangle\,dx = \int_T a\,\|\operatorname{grad}\varphi_i\|^2\,dx \le \frac{a_M(T)(\operatorname{meas}_d T)}{m_i^2},$$
and
$$\int_T \langle b, \operatorname{grad}\varphi_j\rangle\,\varphi_i\,dx \le \int_T |\langle b, \operatorname{grad}\varphi_j\rangle|\,\varphi_i\,dx \le \int_T \|b\|\,\|\operatorname{grad}\varphi_j\|\,\varphi_i\,dx \le \frac{b_M(T)}{m_j}\int_T \varphi_i\,dx = \frac{b_M(T)(\operatorname{meas}_d T)}{m_j(d+1)}$$
hold. Thus we have the estimate
$$K_{ij}|_T \le (\operatorname{meas}_d T)\left(-\frac{a_m(T)\cos\gamma_{ij}}{m_i m_j} + \frac{b_M(T)}{m_j(d+1)} + \frac{c_M(T)}{(d+1)(d+2)}\right) \qquad (9)$$
for the non-diagonal elements and
$$K_{ii}|_T \le (\operatorname{meas}_d T)\left(\frac{a_M(T)}{m_i^2} + \frac{b_M(T)}{m_i(d+1)} + \frac{2c_M(T)}{(d+1)(d+2)}\right) \qquad (10)$$
for the diagonal elements.
If we require (P1″), (P2′), and (P3′) elementwise, i.e., on every simplex T ∈ T_h, then we get a sufficient condition for fulfilling them for the assembled matrices. Thus, one can easily check, on the basis of (8)–(10), that the next theorem is valid.
Theorem 5. Let us assume that for the simplicial mesh T_h the geometrical condition
$$\cos\gamma_{ij} > \frac{b_M(T)}{a_m(T)}\,\frac{m_i}{d+1} + \frac{c_M(T)}{a_m(T)}\,\frac{m_i m_j}{(d+1)(d+2)} \qquad (11)$$
is satisfied. Then, for Δt chosen in accordance with the lower bound
$$\Delta t \ge \frac{1}{\theta}\left(\frac{(d+1)(d+2)}{m_i m_j}\,a_m(T)\cos\gamma_{ij} - \frac{d+2}{m_j}\,b_M(T) - c_M(T)\right)^{-1} \qquad (12)$$
and the upper bound
$$\Delta t \le \frac{1}{1-\theta}\left(\frac{(d+1)(d+2)}{2m_i^2}\,a_M(T) + \frac{d+2}{2m_i}\,b_M(T) + c_M(T)\right)^{-1}, \qquad (13)$$
respectively, the linear finite element discrete mesh operator L_{a,b,c}/L_{a,b,0} satisfies the discrete weak/strong boundary maximum principle.
3.2 Global Conditions for DWBMP/DSBMP of L_{a,b,c}
Theorem 5 is of little use in practice, since conditions (11)–(13) should be checked for each T ∈ T_h; moreover, it does not contain any useful information about the corresponding choice of θ. In the following we deal with getting rid of these problems. To this aim, let us introduce the notations m = min_{T_h} m_i, M = max_{T_h} m_i, G = min_{T_h} cos γ_ij,
$$\spadesuit = \frac{a_M\,(d+1)(d+2)}{2m^2} + \frac{b_M\,(d+2)}{2m} + c_M \qquad \text{and} \qquad \heartsuit = \frac{a_m G\,(d+1)(d+2)}{M^2} - \frac{b_M\,(d+2)}{m} - c_M.$$
Then, from Theorem 5 it follows:
Corollary 1. Let us assume that for the simplicial mesh T_h the geometrical condition
$$\heartsuit > 0 \qquad (14)$$
holds. Moreover, assume that for the parameter θ the inequality
$$\theta \ge \frac{\spadesuit}{\spadesuit + \heartsuit} \qquad (15)$$
holds, too. Then, under the condition
$$\frac{1}{\theta\,\heartsuit} \le \Delta t \le \frac{1}{(1-\theta)\,\spadesuit}, \qquad (16)$$
the linear finite element discrete mesh operator L_{a,b,c} with c ≥ 0 satisfies the DWBMP, and with c ≡ 0 it satisfies the DSBMP, similarly as in the continuous model.
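The quantities of Corollary 1 are cheap to evaluate once m, M, and G are known for a given mesh. The sketch below is a hedged illustration: the mesh parameters m, M are placeholders, while the coefficient bounds follow the values a ≡ 1, b ≡ (6, 0), c ≡ 10 used in Section 4.

```python
import math

def corollary1_bounds(a_m, a_M, b_M, c_M, m, M, G, d):
    """Evaluate the spade/heart quantities of Corollary 1 and the resulting
    theta and time-step restrictions; returns None if condition (14) fails."""
    spade = a_M * (d + 1) * (d + 2) / (2.0 * m**2) + b_M * (d + 2) / (2.0 * m) + c_M
    heart = a_m * G * (d + 1) * (d + 2) / M**2 - b_M * (d + 2) / m - c_M
    if heart <= 0:                                    # geometrical condition (14) violated
        return None
    theta_min = spade / (spade + heart)               # condition (15)
    def dt_interval(theta):                           # condition (16)
        lo = 1.0 / (theta * heart)
        hi = math.inf if theta == 1.0 else 1.0 / ((1.0 - theta) * spade)
        return (lo, hi) if lo <= hi else None
    return spade, heart, theta_min, dt_interval

# d = 2, a uniformly regular mesh (G = 1/d = 0.5); m = M = 0.05 is a placeholder size.
res = corollary1_bounds(a_m=1.0, a_M=1.0, b_M=6.0, c_M=10.0, m=0.05, M=0.05, G=0.5, d=2)
if res is not None:
    spade, heart, theta_min, dt_interval = res
    print("theta must be at least", theta_min)
    print("for theta = 1 the admissible time steps are", dt_interval(1.0))
```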
Fig. 1. Mesh and refined mesh on two different domains Ω
We remark that the geometrical condition (14) can be replaced by the less restrictive condition G > (b_M/a_m)·M/(d+1) + (c_M/a_m)·M²/((d+1)(d+2)). However, with this condition we cannot guarantee that the right side of (15) is not greater than one. The condition (14) is an upper bound for the angles, and it depends on how fine the mesh is, i.e., the ratio M²/m cannot be too large. Note that G ≤ 1/d, and here the equality holds only in the case where T_h is a uniformly regular simplicial mesh, i.e., one consisting of congruent regular simplices, see [7]. Naturally, this can be attained only if Ω is special. This case allows us the widest choice of the parameters θ, Δt. However, even in this case ♠ > ♥ for d > 2, which means that the Crank–Nicolson method is excluded. If T_h and θ are such that the conditions (14) and (15) hold, then the lower and upper bounds for Δt determine a nonempty interval; this is condition (16). Note that our bounds contain as a special case the bounds in [7] — in which the operator L_{a,0,c} with constant coefficients was investigated — if we set the parameters as a_M = a_m = a, b_M = 0, c_M = c.
4 Numerical Examples
As one can see, the conditions collected in the last section are sufficient, but not necessary, to guarantee the DWBMP/DSBMP. Consequently, we need to investigate how sharp our conditions are. This section is devoted to illustrating this question with several numerical examples. We fix the dimension d = 2 and the parameters a ≡ 1, b ≡ (6, 0), c ≡ 10. We investigate two operators L_{a,b,c} with homogeneous Dirichlet boundary conditions, which differ only in their domains, see Figure 1. In the first case the domain is a rhombus, determined by the vertices (0, 0), (1, 0), (3/2, √3/2), (1/2, √3/2), which allows us to use a uniformly regular simplicial mesh. In the second case the domain is a unit square; here we used a mesh which contains right-angled triangles, which is problematic from the point of view of Corollary 1. The question is which bounds we obtain from Corollary 1 – see Table 1 – and how these
Table 1. Bounds of DWBMP obtained from Corollary 1

                           rhombus (l)     rhombus (r)   square (l)      square (r)
geometrical condition      not fulfilled   fulfilled     not fulfilled   not fulfilled
lower bound for θ          -               0.9644        -               -
θ = 1/2, bounds for Δt     -               -             -               -
θ = 1, bound for Δt        -               0.1399        -               -

Table 2. The real bounds of DWBMP

                           rhombus (l)     rhombus (r)   square (l)      square (r)
K₀⁻¹ ≥ 0                   true            true          false           true
lower bound for θ          0               0.8525        -               0.9809
θ = 1/2, bounds for Δt     0 and 0.0476    -             -               -
θ = 1, bound for Δt        0               0.0415        -               0.0699
compare with the real bounds of the DWBMP – see Table 2 – insisting, of course, on stability. Stability means that ρ(X₁₀⁻¹X₂₀) < 1, where ρ(·) denotes the spectral radius of the matrix. The next theorem – see [1] – shows us that we can lose stability already with an unsuitable choice of the mesh; namely, this is not repairable with a clever choice of θ and Δt. This explains the column of square (l) in Table 2: for a mesh for which K₀⁻¹ ≱ 0 holds, the DWBMP rules out stability. However, Theorem 6 also has an advantage: for a mesh for which K₀⁻¹ ≥ 0 holds, the DWBMP implies stability.
Theorem 6. Assume that K₀ = X₁₀ − X₂₀, and X₁₀ is nonsingular with X₁₀⁻¹ ≥ 0 and (X₁₀)⁻¹X₂₀ ≥ 0. Then the following statements are equivalent. (i) K₀⁻¹ ≥ 0. (ii) ρ(X₁₀⁻¹X₂₀) < 1.
The conclusion is that our conditions are useful; however, they delimit our mesh generation, and there are a lot of domains (e.g., in 3D the cube) for which no acute angled triangulation is known.
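A small sketch of the stability check ρ(X₁₀⁻¹X₂₀) < 1 and of the sign test on K₀⁻¹ appearing in Theorem 6; the matrices below are one-dimensional stand-ins, not the meshes of Tables 1 and 2.

```python
import numpy as np

def stability_and_K0(X10, X20):
    """Check rho(X10^{-1} X20) < 1 and the sign condition on K0^{-1} = (X10 - X20)^{-1}."""
    T = np.linalg.solve(X10, X20)                    # X10^{-1} X20
    rho = max(abs(np.linalg.eigvals(T)))
    K0_inv = np.linalg.inv(X10 - X20)
    return rho, bool(np.all(K0_inv >= -1e-12))

# Stand-in two-level matrices from a 1-D toy problem restricted to interior points:
# X10 = M/dt + theta*K0,  X20 = M/dt - (1-theta)*K0.
n, h, dt, theta = 9, 0.1, 0.005, 1.0
K0 = (np.diag(2.0 * np.ones(n)) - np.diag(np.ones(n - 1), 1) - np.diag(np.ones(n - 1), -1)) / h
M0 = h * (np.diag(4.0 * np.ones(n)) + np.diag(np.ones(n - 1), 1) + np.diag(np.ones(n - 1), -1)) / 6.0
X10 = M0 / dt + theta * K0
X20 = M0 / dt - (1.0 - theta) * K0
rho, K0_inv_nonneg = stability_and_K0(X10, X20)
print("rho =", rho, " K0^{-1} >= 0:", K0_inv_nonneg)
```

For this stand-in both checks pass, in agreement with the equivalence stated in Theorem 6 under its hypotheses.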
References 1. Berman, A., Plemmons, A.R.J.: Nonnegative matrices in the mathematical sciences. Academic Press, New York (1997) 2. Brandts, J., Korotov, S., Krizek, M.: Simplicial finite elements in higher dimensions. Applications of Mathematics 52, 251–265 (2006) 3. Farag´ o, I., Horv´ ath, R.: Discrete maximum principle and adequate discretizations of linear parabolic problems. SIAM Sci. Comput. 28, 2313–2336 (2006) 4. Farag´ o, I., Horv´ ath, R.: Qualitative properties of monotone linear operators. Electronic Journal of Qualitative Theory of Differential Equations 8, 1–15 (2008)
5. Farag´ o, I., Horv´ ath, R.: Continuous and discrete parabolic operators and their qualitative properties. IMA Journal of Numerical Analysis 29, 606–631 (2009) 6. Farag´ o, I., Horv´ ath, R., Korotov, S.: Discrete maximum principle for linear parabolic problems solved on hybrid meshes. Appl. Num. Math. 53, 249–264 (2005) 7. Farag´ o, I.: Discrete maximum principle for finite element parabolic models in higher dimensions. Math. Comp. Sim (2009), doi:10.1016/j.matcom.2009.01.017 8. Fujii, H.: Some remarks on finite element analysis of time-dependent field problems. Theory and Practice in Finite Element Structural Analysis, 91–106 (1973) 9. Holand, I., Bell, K.: Finite element methods in stress analysis. Tapir, Trondheim (1996)
Climate Change Scenarios for Hungary Based on Numerical Simulations with a Dynamical Climate Model
Ildikó Pieczka, Judit Bartholy, Rita Pongrácz, and Adrienn Hunyady
Department of Meteorology, Eötvös Loránd University, Pázmány st. 1/a, H-1117 Budapest, Hungary
[email protected],
[email protected],
[email protected],
[email protected] http://nimbus.elte.hu/
Abstract. Climate models are systems of partial differential equations based on the basic laws of physics, fluid motion, and chemistry. The use of high resolution model results are essential for the generation of national climate change scenarios. Therefore we have adapted the model PRECIS (Providing REgional Climates for Impacts Studies), which is a hydrostatic regional climate model HadRM3P developed at the UK Met Office, Hadley Centre, and nested in HadCM3 GCM. It uses 25 km horizontal resolution transposed to the Equator and 19 vertical levels with sigma coordinates. First, the validation of the model (with two different sets of boundary conditions for 1961–1990) is accomplished. Results of the different model experiments are compared to the monthly climatological data sets of the Climatic Research Unit (CRU) of the University of East Anglia as a reference. Significance of the seasonal bias fields is checked using Welch’s t-test. Expected future changes — in mean values, distributions and extreme indices — are analysed for the period 2071–2100. The results suggest that the significant temperature increase expected in the Carpathian Basin may considerably exceed the global warming rate. The climate of this region is expected to become wetter in winter and drier in the other seasons. Keywords: Regional climate modeling, model PRECIS, Carpathian Basin, temperature, precipitation, Welch’s t-test.
1 Introduction
Climate models are widely used for estimating the future changes. They are computer-based numerical models of the governing partial differential equations of the atmosphere. Global climate models — due to their coarse horizontal resolution — do not provide enough information about regional scale. For the generation of national climate change scenarios in European subregions the use of fine resolution (10–25 km) regional climate models (RCMs) is crucial [8].
The expected regional climate change in the Carpathian Basin (located in Central-East Europe) is modelled by four different RCMs. For instance, validation studies of models RegCM and ALADIN-Climate are discussed in [17] and [4], respectively, and preliminary results of REMO are published in [16]. The present paper discusses the climate modeling experiments using the model PRECIS. We evaluated the model capability of reconstructing the present climate (1961– 1990) using two different sets of boundary conditions, (i) from the European Centre for Medium Range Weather Forecast (ECMWF) ERA-40 reanalysis database, (ii) from the HadCM3 GCM output data. In order to fulfill the validation task the results of the different model experiments are compared to the monthly climatological data sets of the Climatic Research Unit (CRU) of the University of East Anglia as a reference. For the future (2071–2100) we completed two simulation runs, namely the IPCC A2 and B2 scenario runs. In order to objectively evaluate the results, Welch’s t-test is used for testing the simulated seasonal fields.
2 Regional Climate Model PRECIS
The installation and the adaptation of the regional climate model PRECIS at the Department of Meteorology, Eötvös Loránd University (Budapest, Hungary) started in 2004. PRECIS is a high resolution limited area model with both atmospheric and land surface modules. The model was developed at the Hadley Centre of the UK Met Office [20], and it can be used over any part of the globe (e.g., [14,1]). The PRECIS regional climate model is based on the atmospheric component of HadCM3 [7] with substantial modifications to the model physics [10]. The atmospheric component of PRECIS is a hydrostatic version of the full primitive equations (1)–(5):
$$\frac{\partial v}{\partial t} = -v\cdot\nabla v - \omega\frac{\partial v}{\partial p} + f\,k\times v - \nabla\Phi + D_M \qquad (1)$$
$$\frac{\partial T}{\partial t} = -v\cdot\nabla T + \omega\left(\frac{\kappa T}{p} - \frac{\partial T}{\partial p}\right) + \frac{Q_{\mathrm{rad}}}{c_p} + \frac{Q_{\mathrm{con}}}{c_p} + D_H \qquad (2)$$
$$\frac{\partial q}{\partial t} = -v\cdot\nabla q - \omega\frac{\partial q}{\partial p} + E - C + D_q \qquad (3)$$
$$\frac{\partial \omega}{\partial p} = -\nabla\cdot v \qquad (4)$$
$$\frac{\partial \Phi}{\partial p} = -\frac{RT}{p} \qquad (5)$$
where DM = (Dλ , Dφ ) are dissipation terms for momentum and DH and Dq are diffusion terms for heat and moisture, respectively, q is the specific humidity, E and C are the rates of evaporation and condensation due to cloud processes, and Φ is the geopotential [18].
Table 1. Configuration and properties of PRECIS

Prognostic variables: Surface pressure, zonal and meridional wind components, water vapour, potential temperature [10]
Horizontal coordinate-system: Spherical polar coordinates rotated to the equator [10]
Horizontal resolution: 0.44° × 0.44° or 0.22° × 0.22° [10]
Horizontal grid: Arakawa B grid [2]
Horizontal discretization: Finite differences (split-explicit) [10]
Vertical coordinate-system: Hybrid: terrain-following + pressure [15]
Coupling (LBC treatment): Davies relaxation scheme [10]
Linear term: Explicit scheme [10]
Advection: Heun scheme [11]
Timestep: 5 min [10]
Table 1 summarizes the main properties and the configuration of the model. Splitting techniques are commonly used when large-scale models — like climate models — are treated numerically [5]. In our model during the integration the geostrophic adjustment is separated from the advection part: adjustment is iterated three times per 5 minutes advection timestep. Averaged velocities over the three adjustment timesteps are used for advection, which is integrated in time using the Heun scheme [11]. This finite difference scheme is 4th order accurate except at high wind speeds when it is reduced to 2nd order accuracy for stability. In our studies, we used the finest possible horizontal resolution (0.22◦ ) for modeling the Central European climate. Hence, the target region contains 123 × 96 grid points, with special emphasis on the Carpathian Basin and its Mediterranean vicinity containing 105 × 49 grid points (Fig. 1). In the post-processing of the RCM outputs, daily mean values are used. In case of the control period (1961-1990), the initial conditions and the lateral boundary conditions (IC&LBC) for the regional model are taken from (i) the ERA-40 reanalysis database [6] using 1◦ horizontal resolution, compiled by the ECMWF, and (ii) the HadCM3 ocean-atmosphere coupled GCM using 150 km as a horizontal resolution. For the validation of the PRECIS results CRU TS 1.0 ([12,13]) data sets are used.
3 Design of the Statistical Analysis
In order to improve our knowledge about the reliability of the model results, we have made a hypothesis test. We cannot assume that the variances are equal in our samples (e.g. the variances of the model are equal with the variances of the observational data set, or the variances of the model are equal for the two different periods), so we decided to use Welch’s t-test [19]. It is an adaptation of Student’s t-test intended for use with two samples having possibly unequal variances. It defines the statistic t by the following formula:
Fig. 1. Topography of the selected Central European domain used in model PRECIS
$$t = \frac{\bar{X}_1 - \bar{X}_2}{\sqrt{\dfrac{s_1^2}{N_1} + \dfrac{s_2^2}{N_2}}} \qquad (6)$$
where X̄_i, s_i², and N_i are the i-th sample mean, sample variance, and sample size, respectively. The degrees of freedom can be calculated from (7)–(8):
$$C = \frac{N_2 s_1^2}{N_1 s_2^2 + N_2 s_1^2}, \qquad (7)$$
$$l = \frac{(N_1-1)(N_2-1)}{C^2(N_2-1) + (1-C)^2(N_1-1)}, \qquad (8)$$
so the degree of freedom is the closest integer value to l . Once they have been computed, these statistics can be used with the t-distribution to test the null hypothesis that the two population means are equal (using a two-tailed test). In our work we used the seasonal mean values of the 30-year periods. For the period of 1961–1990 we analysed only the results from the PRECIS run where the initial and lateral boundary conditions were provided by a GCM, because in case of the other run (where we had the IC&LBC from the ERA-40 reanalysis) it cannot be assumed that the samples are independent. For that run we plan to calculate the one-tailed t-test of the bias.
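A compact sketch of the test defined by (6)–(8); the two samples below are synthetic placeholders standing in for 30-element sets of seasonal means.

```python
import numpy as np
from scipy import stats

def welch_test(x1, x2):
    """Welch's t statistic, degrees of freedom, and two-tailed p-value
    following Eqs. (6)-(8)."""
    N1, N2 = len(x1), len(x2)
    m1, m2 = x1.mean(), x2.mean()
    s1, s2 = x1.var(ddof=1), x2.var(ddof=1)          # sample variances
    t = (m1 - m2) / np.sqrt(s1 / N1 + s2 / N2)
    C = N2 * s1 / (N1 * s2 + N2 * s1)
    l = (N1 - 1) * (N2 - 1) / (C**2 * (N2 - 1) + (1 - C)**2 * (N1 - 1))
    df = int(round(l))                               # closest integer, as in the text
    p = 2.0 * stats.t.sf(abs(t), df)                 # two-tailed test of equal means
    return t, df, p

rng = np.random.default_rng(1)
sim = rng.normal(20.0, 2.0, 30)     # placeholder: 30 seasonal means from a model run
obs = rng.normal(19.0, 1.5, 30)     # placeholder: 30 seasonal means from observations
print(welch_test(sim, obs))
```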
4 Results
During the validation process, we analyzed monthly, seasonal, and annual temperature mean values and precipitation amounts for the control period. We found that, in general, the seasonal mean temperature fields are overestimated by the PRECIS simulations. The largest bias values are found in summer, when the average overestimation of PRECIS over Hungary is 2.2◦ C.
After completing the statistical hypothesis tests the results show that in case of Hungary the temperature bias values are not significant at 0.05 level at 93% and at 70% of the gridpoints located within the country in autumn and spring, respectively. In winter significant positive bias is found at 76% of the gridpoints at 0.05 level, while in summer bias values are significant at all the gridpoints of the Hungarian domain. Precipitation is far more variable both in time and in space than temperature. The spatially averaged precipitation is overestimated in the entire model domain, especially, in spring and winter. In case of Hungary, the spring precipitation is overestimated (by 35% on average), while in the other three seasons the precipitation is slightly underestimated (by less than 10% on average) in the country. For the 1961–1990 reference period statistical hypothesis tests suggest that in case of precipitation the spring bias values are significantly large (at 99% of all the gridpoints located inside the Hungarian borders). Interesting to see, that although the bias is only 9% in winter (this means about 10 mm/month) on average, it is large enough to make the bias significantly large at 28% of the gridpoints. Fig. 2 (left column) shows the differences between the simulated (PRECIS outputs) and the observed (CRU data) values for the 1961–1990 reference period in case of temperature and precipitation in the gridpoints where they are significant (non-significant bias values are indicated by white color). Temperature and precipitation bias fields of the PRECIS simulations can be considered acceptable if compared to other European RCM simulations ([9,3]). In our research plan several future scenario runs are scheduled using the PRECIS model for the period 2071–2100. Here, the results of the first accomplished PRECIS experiments are summarized using the SRES A2 and B2 emission scenarios [8]. The two right columns of Fig. 2 show the expected mean seasonal temperature and precipitation change in the gridpoints where it is significant at 0.05 level by the end of the 21st century (middle column — A2 scenario, right column — B2 scenario). The changes are more pronounced in case of A2 than B2, which is due to the higher CO2 concentration level projected for A2 scenario (it is 850 ppm, while about 600 ppm for B2 scenario). The expected temperature changes are significantly large for all seasons by 2071–2100 for both scenarios. The largest warming is projected for summer, 8◦ C and 6◦ C on spatial average for Hungary in case of A2 and B2 scenarios, respectively. For precipitation the projected drying expected in summer is significant at 0.05 level in each gridpoint located in the Carpathian Basin. According to the PRECIS simulations, the expected precipitation decreases are 58% and 43% on spatial average for the entire country for A2 and B2 scenarios, respectively. Seasonal precipitation amounts, in spring and autumn are also projected to decrease by the end of the 21st century, however the expected change is significant only in some of the gridpoints within Hungary (e.g., in the northern part of the country the expected spring drying is significant at 0.05 level in case of A2 scenario). Winter is projected to become wetter in Hungary, especially for A2 scenario in
[Fig. 2 panels: Temperature BIAS (°C), PRECIS/CTL−CRU (1961–1990), IC&LBC: HadCM3 GCM; Expected temperature change (°C), PRECIS/SCEN−PRECIS/CTL (2071–2100), IC&LBC: IPCC A2 and B2 scenarios; Precipitation BIAS (mm/month) and expected precipitation change (mm/month) for the same experiments; rows: DJF, MAM, JJA, SON.]
Fig. 2. Figure shows the BIAS (left) and the expected change (right) of the seasonal mean temperature (◦ C) and precipitation (mm/month) predicted by PRECIS in the gridpoints where it is significant at 0.05 level in the Carpathian Basin
the western Transdanubian region, where the expected change exceeds 5–10% and it is significant at 0.05 level.
5 Conclusions
Based on the results presented in this paper, the following conclusions can be drawn.
1. Performance of the model for the past is acceptable compared to other simulations.
2. The sign of the expected changes is the same for the A2 and B2 scenarios, but the amplitude of the change is larger in case of A2.
3. In all four seasons significant warming is projected at the 0.05 level; the largest warming can be expected in summer.
4. Precipitation amount is expected to decrease by the end of the century, except in winter. The projected drying is significant at the 0.05 level in summer.
MEC – Albert Apponyi programme – Established by the support of the National Office for Research and Technology.
Acknowledgments. Research leading to this paper has been supported by the following sources: the Hungarian Academy of Sciences under the program 2006/TKI/246 titled Adaptation to climate change, the Hungarian National Research Development Program under grants NKFP-3A/082/2004 and NKFP6/079/2005, the Hungarian National Science Research Foundation under grants T-049824, K-67626, K-69164, and K-678125, the Hungarian Ministry of Environment and Water, and the CECILIA project of the European Union Nr. 6 program (contract no. GOCE-037005).
References 1. Akhtar, M., Ahmad, N., Booij, M.J.: The impact of climate change on the water resources of Hindukush-Karakorum-Himalaya region under different glacier coverage scenarios. Journal of Hydrology (2008), doi:10.1016/j.jhydrol.2008.03.015 2. Arakawa, A., Lamb, V.R.: Computational design of the basic dynamical processes of the UCLA general circulation model. In: Chang, J. (ed.) Methods in Computational Physics, vol. 17, pp. 173–265. Academic Press, New York (1977) 3. Bartholy, J., Pongr´ acz, R., Gelyb´ o, G.: Regional climate change expected in Hungary for 2071–2100. Applied Ecology and Environmental Research 5, 1–17 (2007) 4. Csima, G., Hor´ anyi, A.: Validation of the ALADIN-Climate regional climate model at the Hungarian Meteorological Service. Id˝ oj´ ar´ as 112, 155–177 (2008)
´ Zlatev, Z.: Different splitting techniques with 5. Dimov, I., Farag´ o, I., Havasi, A., application to air pollution models. Int. J. Environment and Pollution 32(2), 174– 199 (2008) 6. Gibson, J.K., Kallberg, P., Uppala, S., Nomura, A., Hernandez, A., Serrano, A.: ERA description, Reading. ECMWF Reanalysis Project Report Series, vol. 1, p. 77 (1997) 7. Gordon, C., Cooper, C., Senior, C.A., Banks, H., Gregory, J.M., Johns, T.C., Mitchell, J.F.B., Wood, R.A.: The simulation of SST, sea ice extents and ocean heat transports in a version of the Hadley Centre coupled model without flux adjustments. Climate Dynamics 16, 147–168 (2000) 8. IPCC: Climate Change 2007: The Physical Science Basis. In: Solomon, S., Qin, D., Manning, M., Chen, Z., Marquis, M., Averyt, K.B., Tignor, M., Miller, H.L. (eds.) Contribution of Working Group I to the Fourth Assessment Report of the IPCC, 996 p. Cambridge University Press, Cambridge (2007) 9. Jacob, D., B¨ arring, L., Christensen, O.B., Christensen, J.H., de Castro, M., D´equ´e, M., Giorgi, F., Hagemann, S., Hirschi, M., Jones, R., Kjellstr¨ om, E., Lenderink, G., Rockel, B., S´ anchez, E., Sch¨ ar, C., Seneviratne, S.I., Somot, S., van Ulden, A., van den Hurk, B.: An inter-comparison of regional climate models for Europe: Model performance in Present-Day Climate. Climatic Change 81, 21–53 (2007), doi:10.1007/s10584-006-9213-4 10. Jones, R.G., Noguer, M., Hassell, D.C., Hudson, D., Wilson, S.S., Jenkins, G.J., Mitchell, J.F.B.: Generating high resolution climate change scenarios using PRECIS, 40 p. UK Met Office Hadley Centre, Exeter (2004) 11. Mesinger, F.: Horizontal Advection Schemes of a Staggered Grid — An Enstrophy and Energy-Conserving Model. Monthly Weather Review 109, 467–478 (1981) 12. New, M., Hulme, M., Jones, P.: Representing twentieth-century space-time climate variability. Part I: Development of a 1961–90 mean monthly terrestrial climatology. Journal of Climate 12, 829–856 (1999) 13. New, M., Hulme, M., Jones, P.: Representing twentieth-century space-time climate variability. Part 2: Development of 1901–96 monthly grids of terrestrial surface climate. Journal of Climate 13, 2217–2238 (2000) 14. Rupa Kumar, K., Sahai, A.K., Krishna Kumar, K., Patwardhan, S.K., Mishra, P.K., Revadekar, J.V., Kamala, K., Pant, G.B.: High-resolution climate change scenarios for India for the 21st century. Current Science 90, 334–345 (2006) 15. Simmons, A.J., Burridge, D.M.: An energy and angular-momentum conserving vertical finite difference scheme and hybrid vertical coordinates. Monthly Weather Review 109, 758–766 (1981) 16. Sz´epsz´ o, G., Hor´ anyi, A.: Transient simulation of the REMO regional climate model and its evaluation over Hungary. Id˝ oj´ ar´ as 112, 203–232 (2008) 17. Torma, C., Bartholy, J., Pongr´ acz, R., Barcza, Z., Coppola, E., Giorgi, F.: Adaptation and validation of the RegCM3 climate model for the Carpathian Basin. Id˝ oj´ ar´ as 112, 233–247 (2008) 18. Trenberth, K.E. (ed.): Climate System Modeling, 788 p. Cambridge University Press, Cambridge (1992) 19. Welch, B.L.: The significance of the difference between two means when the population variances are unequal. Biometrika 29, 350–361 (1938) 20. Wilson, S., Hassell, D., Hein, D., Jones, R., Taylor, R.: Installing and using the Hadley Centre regional climate modelling system, PRECIS. Version 1.5.1, 157 p. UK Met Office Hadley Centre, Exeter (2007)
Numerical Simulations of Reaction-Diffusion Systems Arising in Chemistry Using Exponential Integrators
Răzvan Ştefănescu and Gabriel Dimitriu
"Gr. T. Popa" University of Medicine and Pharmacy, Department of Mathematics and Informatics, 700115 Iaşi, Romania
[email protected],
[email protected]
Abstract. We perform a comparative numerical study of two reaction-diffusion models arising in chemistry by using exponential integrators. Numerical simulations of the reaction kinetics associated with these models are shown, including both the local and global errors as a function of the time step and the error as a function of computational time.
1 Introduction
Reaction-diffusion models can display a wide variety of spatio-temporal phenomena in response to large amplitude perturbations. The nature of the kinetic terms plays the important role in determining the solution behaviour. The efficient and accurate simulation of such systems, however, represents a difficult task. This is because they couple a stiff diffusion term with a (typically) strongly nonlinear reaction term. When discretised, this leads to large systems of strongly nonlinear, stiff ODEs. In this work we carry out a comparative numerical study of two reaction-diffusion models arising in chemistry by using exponential integrators. The paper is organized as follows. Section 2 briefly describes the exponential integrators and their features. In Section 3 the two reaction kinetics — the chlorite-iodide-malonic acid (CIMA) starch reaction and a glycolysis model governed by a cross kinetics, respectively — are presented, on the basis of which the numerical study is carried out. Section 4 is devoted to a short description of the numerical schemes applied to the models under study, together with results of the numerical simulations. Some concluding remarks are drawn at the end.
2 General Framework of the Exponential Integrators
Exponential integrators represent numerical schemes specifically constructed for solving differential equations (see [11] for details), where it is possible to split the problem into a linear and a nonlinear part,
$$\dot{y} = Ly + N(y,t), \qquad y(t_{n-1}) = y_{n-1}, \qquad (1)$$
where y ∈ ℂ^d, L ∈ ℂ^{d×d} and N : ℂ^d × ℝ → ℂ^d for all t. In some cases (discretizations of PDEs), depending on the model under study, the matrix L is unbounded as d tends to infinity. The goal of the exponential integrators is to treat the linear term exactly and allow the remaining part of the integration to be integrated numerically using an explicit scheme. An exponential integrator has two main characteristics: (i) If L = 0, then the scheme reduces to a standard general linear scheme. This is often called the underlying general linear scheme; (ii) If N(y, t) = 0 for all y and t, then the scheme reproduces the exact solution of (1). To satisfy (ii) the exponential function must be used within the numerical scheme. Despite the fact that L is unbounded, typically the coefficients of the scheme will be bounded. For an s-stage exponential integrator of Runge-Kutta type, we define the internal stages and output approximation:
$$Y_i = h\sum_{j=1}^{s} a_{ij}(hL)\,N(Y_j, t_{n-1}+c_j h) + u_{i1}(hL)\,y_{n-1}, \qquad i = 1,\ldots,s,$$
$$y_n = h\sum_{i=1}^{s} b_i(hL)\,N(Y_i, t_{n-1}+c_i h) + v_1(hL)\,y_{n-1}. \qquad (2)$$
The feature (i) above is satisfied if we require in (2) as ui1 (0) = 1, aij (0) = aij , v1 (0) = 1, and bi (0) = bi , where the real numbers aij and bj represent the coefficients of the underlying Runge-Kutta scheme. A step of length h in an exponential general linear scheme, requires to import r approximations into the [n−1] step, denoted as yi , i = 1, . . . , r. The internal stages (as in the Runge-Kutta case) are written as Yi , i = 1, . . . , s. After the step is completed, r updated approximations are computed. These are then used in the next step. Each step in an exponential general linear scheme can be written as Yi = h [n]
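As a concrete illustration of the class (2), the sketch below implements the two simplest members, an exponential (ETD) Euler step and a Lawson–Euler step, for a small semilinear system y′ = Ly + N(y, t). It is generic illustration code, not taken from the EXPINT package discussed in Section 4, and it assumes hL is nonsingular.

```python
import numpy as np
from scipy.linalg import expm

def phi1(Z):
    """phi_1(Z) = Z^{-1} (e^Z - I), evaluated naively for a small dense matrix."""
    return np.linalg.solve(Z, expm(Z) - np.eye(Z.shape[0]))

def etd_euler_step(y, t, h, L, N):
    """Exponential (ETD) Euler: y_{n+1} = e^{hL} y_n + h phi_1(hL) N(y_n, t_n)."""
    hL = h * L
    return expm(hL) @ y + h * (phi1(hL) @ N(y, t))

def lawson_euler_step(y, t, h, L, N):
    """Lawson-Euler: explicit Euler applied after the integrating-factor transformation."""
    return expm(h * L) @ (y + h * N(y, t))

# Tiny stiff test problem y' = L y + N(y, t) with a fast linear mode.
L = np.diag([-1.0, -100.0])
N = lambda y, t: np.array([np.sin(t), y[0] ** 2])
y = np.array([1.0, 1.0])
t, h = 0.0, 0.01
for _ in range(100):
    y = etd_euler_step(y, t, h, L, N)
    t += h
print("ETD Euler solution at t = 1:", y)
```

Both steps treat the linear part through the matrix exponential, so the stiffness of L does not restrict the step size as it would for an explicit scheme.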
yi
=h
s j=1 s
aij (hL)N (Yj , tn−1 + cj h) + bij (hL)N (Yj , tn−1 + cj h) +
j=1
r j=1 r
[n−1]
,
i = 1, . . . , s ,
[n−1]
,
i = 1, . . . , r. (3)
uij (hL)yj vij (hL)yj
j=1
The exponential integrators of Runge-Kutta type are easily seen to be a special case when r = 1 with ui1 (z) = ai0 (z), v11 (z) = b0 (z) and b1j (z) = bj (z).
3
Defining the Model Equations
In this section we shortly describe the models governing different reaction kinetics arising in chemistry, which will be solved numerically in Section 4. Following nondimensionalization of the system in space, such that the spatial domain becomes [0, 1], the general system is represented by the following equations: ut = Du uxx + f (u, v) ,
vt = Dv vxx + g(u, v) ,
Numerical Simulations of Reaction-Diffusion Systems Arising
623
where u and v are the concentrations of the two morphogens, Du and Dv are the corresponding diffusion coefficients, and f (u, v) and g(u, v) encode the reaction kinetics between u and v. Model describing the chlorite-iodide-malonic acid starch reaction. The first Turing patterns were noticed in the chlorite-iodide-malonic acid (CIMA) starch reaction [7]. The model proposed by Lengyel and Epstein [10] stresses three processes: the reaction between malonic acid (MA) and iodine to create iodide, and the reactions between chlorite and iodide and chloride and iodide. By imposing the experimentally realistic hypothesis that the concentration of malonic acid, chlorine dioxide, and iodine are constant, Lengyel and Epstein obtained the following model: 4uv ∂v uv ∂u 2 2 = k1 − u − +∇ u, = k2 k3 u − + c∇ v , ∂t ∂t 1 + u2 1 + u2 where u, v are the concentrations of iodide and chlorite, respectively and k1 , k2 , k3 , and c are positive constants. Glycolysis model with cross kinetics. We also analyze numerically a glycolysis model [1] with cross kinetics (where the activator upregulates itself, but downregulates the inhibitor, and the inhibitor likewise upregulates the activator and downregulates itself). In this model, the functions f and g are given by f (u, v) = ru2 v + νv − μu ,
g(u, v) = r(1 − u2 v) − νv .
Here, the parameters r, μ, and ν are positive constants.
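To make the two models concrete, the following sketch collects their reaction terms and a simple method-of-lines discretization of the diffusion on [0, 1], producing a stiff semilinear system y′ = Ly + N(y) of the kind the exponential integrators above are designed for. The no-flux grid closure and the diffusion coefficient of u are illustrative assumptions; k1, k2, k3, c and the glycolysis kinetics follow the text.

```python
import numpy as np

def cima_kinetics(u, v, k1, k2, k3):
    """CIMA (Lengyel-Epstein) reaction terms."""
    f = k1 - u - 4.0 * u * v / (1.0 + u**2)
    g = k2 * (k3 * u - u * v / (1.0 + u**2))
    return f, g

def glycolysis_kinetics(u, v, r, mu, nu):
    """Glycolysis cross-kinetics reaction terms."""
    f = r * u**2 * v + nu * v - mu * u
    g = r * (1.0 - u**2 * v) - nu * v
    return f, g

def diffusion_operator(nd, Du, Dv):
    """Second-order finite differences on [0, 1] with a crude no-flux closure,
    giving the linear operator L of the split system y' = L y + N(y)."""
    h = 1.0 / (nd - 1)
    main = -2.0 * np.ones(nd); main[0] = main[-1] = -1.0
    off = np.ones(nd - 1)
    A = (np.diag(main) + np.diag(off, 1) + np.diag(off, -1)) / h**2
    Z = np.zeros((nd, nd))
    return np.block([[Du * A, Z], [Z, Dv * A]])

nd = 128
L = diffusion_operator(nd, Du=1.0, Dv=0.1)   # +grad^2 u and c*grad^2 v with c = 0.1 (Table 1)

def N_cima(y, t, k1=0.095, k2=0.5, k3=3.5):   # kinetic parameters as in Table 1
    u, v = y[:nd], y[nd:]
    f, g = cima_kinetics(u, v, k1, k2, k3)
    return np.concatenate([f, g])
```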
4
Numerical Schemes and Computational Issues
Here, we present the numerical schemes defining the exponential integrators that have been used in our simulations. All these integrators belong to the package EXPINT written in Matlab [3]. In this description, we will use two terms of order. The non-stiff order refers to the case when the operator L is bounded, such conditions were derived in [11]. The stiff order refers to the case when L is unbounded [5], for various schemes. The first scheme that has been applied to our models is named Lawson4. The scheme Lawson4 belongs to the Lawson schemes constructed by applying the Lawson transformations [9] to the semilinear problem. It is based on the classical fourth order scheme of Kutta (see [4], Eq. (235i)), and this scheme has stiff order one. The scheme denoted by hochost4 was developed by Hochbruck and Ostermann. It has five-stages and is the only known exponential Runge-Kutta method with stiff order four. Nørsett designed in [13] a class of schemes which reduced to the AdamsBashforth methods when the linear part of the problem is zero. ABLawson4 scheme has stiff order one and is based on the Adams-Bashforth scheme of order four and is represented in this way so that the incoming approximation has the form y [n−1] = [yn−1 , hNn−2 , hNn−3 , hNn−4 ]T .
ABNørsett4 is a stiff order four scheme of Nørsett [13], implemented so that the incoming approximation has the same form as in ABLawson4. ETD schemes are based on algebraic approximations to the nonlinear term in the variation of constants formula; ETD stands for "Exponential Time Differencing" and the name stems from [6]. The scheme ETD4RK due to Cox and Matthews (in [6], Eqs. (26)-(29)) started the recent focus on exponential integrators; unfortunately, it has only stiff order two. ETD5RKF is a non-stiff fifth order scheme based on the six-stage fifth order scheme of Fehlberg. The scheme RKMK4t uses a convenient truncation of the $\operatorname{dexp}^{-1}$ operator, leading to the method of Munthe-Kaas [12], which again is of stiff order two but suffers from instabilities when non-periodic boundary conditions are used. Krogstad [8] constructed the generalized Lawson schemes as a means of overcoming some of the undesirable properties of the Lawson schemes. This class of schemes uses approximations of the nonlinear term from previous steps, resulting in an exponential general linear method. The scheme genlawson45, included in the package mentioned above, is also used in our numerical study. In what follows we present some comparative results concerning the quality of the numerical schemes applied to the CIMA reaction model. We show the relationship between the global error, the time step h (varying from 10^{-3} to 10^{-1}), and the computational time. The global error is calculated at the final time (t = 1) by the formula $\|Y(:,1) - Y_{\mathrm{ref}}(:,1)\|_2$, where $Y(:,1) := (u(:,1), v(:,1))$ and $Y_{\mathrm{ref}}(:,1) := (u_{\mathrm{ref}}(:,1), v_{\mathrm{ref}}(:,1))$. All reference solutions $Y_{\mathrm{ref}}$ were obtained with the Matlab routine ode15s. We notice for the CIMA model that the etd5rkf scheme has the smallest global error but, at the same time, the largest computational time, while the ablawson4 scheme shows a significantly larger global error together with the lowest computational time. We found essentially the same behaviour for the other numerical schemes implemented for this model, which is why we present numerical results only for these three schemes.
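As an illustration of how such a global error is measured, the sketch below advances a fixed-step scheme to the final time and compares it with a tight-tolerance stiff reference solution. SciPy's BDF integrator plays the role of Matlab's ode15s here, and scheme_step is a placeholder for any of the integrators above.

```python
import numpy as np
from scipy.integrate import solve_ivp

def global_error(scheme_step, y0, rhs, t_final, h, ref_tol=1e-10):
    """Advance y' = rhs(t, y) with a fixed-step scheme and return the
    2-norm distance to a stiff (BDF) reference solution at t_final."""
    n_steps = int(round(t_final / h))
    y, t = y0.copy(), 0.0
    for _ in range(n_steps):
        y = scheme_step(y, t, h)
        t += h
    ref = solve_ivp(rhs, (0.0, t_final), y0, method="BDF",
                    rtol=ref_tol, atol=ref_tol, t_eval=[t_final])
    return np.linalg.norm(y - ref.y[:, -1], 2)
```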
Table 1. Comparative results concerning the quality of the numerical schemes: the global error as a function of the time step h and the time used for the CIMA reaction kinetics (CIMA, ND=128, IC: Smooth, c = 0.1, k1 = 0.095, k2 = 0.5, k3 = 3.5)

Time step   Etd5rkf (time / error)   Ablawson4 (time / error)   Genlawson45 (time / error)
0.0010      1.203 / 3.940E-08        0.625 / 8.816E-05          1.297 / 5.498E-07
0.0020      0.656 / 8.524E-07        0.313 / 4.193E-04          0.625 / 7.032E-06
0.0032      0.422 / 5.271E-06        0.234 / 9.923E-04          0.438 / 3.136E-05
0.0063      0.266 / 5.428E-05        0.125 / 2.893E-03          0.219 / 2.134E-04
0.0127      0.188 / 3.494E-04        0.063 / 6.866E-03          0.141 / 1.007E-03
0.0313      0.094 / 2.264E-03        0.094 / 1.701E-02          0.078 / 4.987E-03
0.0625      0.078 / 7.078E-03        0.016 / 2.411E-02          0.063 / 1.380E-02
0.1000      0.063 / 1.445E-02        0.031 / 1.332E-01          0.047 / 1.001E-01
[Figure 1 is a log-log plot of the global error versus the computational time for the glycolysis model (ND=128, IC: Smooth, r=0.025, μ=0.045, ν=0.35), with curves for lawson4, hochost4, etd4rk, rkmk4t, abnorsett4, ablawson4, etd5rkf, genlawson45, and modgenlawson45.]
Fig. 1. Comparative results concerning the global error as a function of computational time for the glycolysis model with cross reaction kinetics
From a theoretical point of view, the genlawson45 scheme, which has non-stiff order 6 and stiff order 5, should have a smaller global error than the etd5rkf scheme, which has non-stiff order 5 and stiff order 1. The results contained in Table 1 contradict this statement, so we tried other model parameters to see whether this behaviour persists. We noticed that if we increase the regularity of the initial conditions, the global error in the genlawson45 case attains lower values (see [2] for a similar interpretation related to the nonlinear Schrödinger equation). In the case of the cross reaction kinetics for the glycolysis model, the schemes genlawson45 and modgenlawson45 showed the best behaviour in comparison with the other seven schemes involved in our study (see Fig. 1). Figure 2 depicts the dependence of the global error on the time step h for the glycolysis model. In this case, good results have been obtained with the lawson4, etd4rk, and rkmk4t schemes. Despite their low stiff order, the ABLawson4 and etd5rkf schemes (both of stiff order 1) behaved as order 3.7 and order 5.2 schemes, respectively, on the glycolysis problem with smooth initial conditions. Also, the genlawson45 scheme (with stiff order 5) behaved like an order 5.6 scheme (see Fig. 3). We remark that with a proper choice of the initial condition we can raise the observed order of the schemes, especially those with low stiff order, but further investigation is needed.
[Figure 2 is a log-log plot of the global error versus the time step h for the glycolysis model (ND=128, IC: Smooth, r=0.025, μ=0.045, ν=0.35), with curves for lawson4, hochost4, etd4rk, rkmk4t, abnorsett4, ablawson4, etd5rkf, genlawson45, and modgenlawson45.]
Fig. 2. Comparative results concerning the quality of the numerical schemes: the global error as a function of the time step h for the glycolysis model with cross reaction kinetics

[Figure 3 is a log-log plot of the global error versus the time step h for the glycolysis model (ND=128, IC: Smooth, r=0.025, μ=0.045, ν=0.35), with curves for ablawson4, etd5rkf, and genlawson45 and dotted indicator lines of slopes 3.7, 5.2, and 5.6.]
Fig. 3. An order test applied to the ablawson4, etd5rkf, and genlawson45 schemes. The dotted lines are indicator lines showing how the corresponding orders look.
Table 2. Comparative results concerning the quality of the numerical schemes: the local error evaluated as a function of the time step h for the CIMA reaction kinetics (CIMA, ND=128, IC: Smooth, c = 0.1, k1 = 0.095, k2 = 0.5, k3 = 3.5)

Inner points  Lawson4    Hochost4   Etd4rkt    Rkmk4t     Friedli
0.001         3.864E-06  6.181E-06  6.169E-06  3.467E-05  6.176E-06
0.002         3.993E-05  5.126E-05  5.122E-05  2.134E-04  5.125E-05
0.004         3.391E-04  3.036E-04  3.038E-04  8.104E-04  3.037E-04
0.016         6.391E-03  4.553E-03  4.565E-03  8.255E-03  4.551E-03
0.032         1.693E-02  1.293E-02  1.294E-02  3.911E-02  1.291E-02
0.126         7.011E-02  7.457E-02  7.325E-02  4.028E-01  7.363E-02
0.251         1.569E-01  1.739E-01  1.443E-01  1.024E+00  1.447E-01

Table 3. Comparative results concerning the quality of the numerical schemes: the local error evaluated as a function of the time step h for the glycolysis model with cross reaction kinetics (Glycolysis, ND=128, IC: Smooth, r = 0.025, μ = 0.045, ν = 0.35)

Inner points  Lawson4    Hochost4   Etd4rkt    Rkmk4t     Friedli
0.001         5.700E-14  1.690E-13  1.610E-13  5.770E-13  1.650E-13
0.002         1.820E-13  5.012E-12  4.771E-12  1.792E-11  4.884E-12
0.004         4.216E-12  1.524E-10  1.450E-10  5.453E-10  1.485E-10
0.016         2.157E-09  1.169E-07  1.111E-07  4.210E-07  1.139E-07
0.032         3.287E-08  2.628E-06  2.495E-06  9.591E-06  2.562E-06
0.126         7.105E-06  5.068E-04  4.753E-04  2.038E-03  4.925E-04
0.251         9.375E-05  3.737E-03  3.439E-03  1.709E-02  3.609E-03
Tables 2 and 3 contain the results of a local order test applied to the two models under investigation. Here another exponential Runge-Kutta method, the Friedli scheme with stiff order 3, is added. This test does not make sense for multistep schemes. The vector of time steps is built so that the smallest time step divides all other time steps used; otherwise, the computation would take an unnecessarily long time. The local error is calculated at every inner time point by $\|Y(:,t) - Y_{\mathrm{ref}}(:,t)\|_2$, where, as before, $Y(:,t) := (u(:,t), v(:,t))$ and $Y_{\mathrm{ref}}(:,t) := (u_{\mathrm{ref}}(:,t), v_{\mathrm{ref}}(:,t))$. The plot titles specify ND = 128 and "IC: Smooth". This means that we used 128 Fourier modes in the spatial direction (the number of modes must be a power of 2) and, as initial conditions for the model variables, a set of values with a Gaussian distribution.
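The observed orders quoted above can be extracted from such error data by a least-squares fit of log(error) against log(h); a minimal sketch:

```python
import numpy as np

def observed_order(hs, errors):
    """Least-squares slope of log(error) vs. log(h): the observed order of a scheme."""
    slope, _ = np.polyfit(np.log(np.asarray(hs)), np.log(np.asarray(errors)), 1)
    return slope
```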
5 Conclusions
In this study we carried out a comparative numerical study of two models arising in chemistry: the chlorite-iodide-malonic acid (CIMA) starch reaction and
a glycolysis model governed by cross kinetics. The numerical results were obtained using several exponential integrators from the Matlab package EXPINT [3]. For the glycolysis model we computed the observed orders of three numerical schemes. We also noticed that the behaviour of schemes with low stiff order depends on the regularity of the initial conditions. Illustrative plots and numerical results were presented concerning comparative global and local errors as well as timing results of the integration schemes.
Acknowledgments. The paper was supported by the project ID 342/2008, CNCSIS, Romania.
References
1. Ashkenazi, M., Othmer, H.G.: Spatial Patterns in Coupled Biochemical Oscillators. J. Math. Biol. 5, 305-350 (1978)
2. Berland, H., Skaflestad, B.: Solving the nonlinear Schrödinger equation using exponential integrators. Modeling, Identification and Control 27(4), 201-217 (2006)
3. Berland, H., Skaflestad, B., Wright, W.: EXPINT - A Matlab package for exponential integrators. Numerics 4 (2005)
4. Butcher, J.C.: Numerical methods for ordinary differential equations. John Wiley & Sons, Chichester (2003)
5. Celledoni, E., Marthinsen, A., Owren, B.: Commutator-free Lie group methods. FGCS 19(3), 341-352 (2003)
6. Cox, S.M., Matthews, P.C.: Exponential time differencing for stiff systems. J. Comp. Phys. 176(2), 430-455 (2002)
7. De Kepper, P., Castets, V., Dulos, E., Boissonade, J.: Turing-type chemical patterns in the chlorite-iodide-malonic acid reaction. Physica D 49, 161-169 (1991)
8. Krogstad, S.: Generalized integrating factor methods for stiff PDEs. Journal of Computational Physics 203(1), 72-88 (2005)
9. Lawson, D.J.: Generalized Runge-Kutta processes for stable systems with large Lipschitz constants. SIAM J. Numer. Anal. 4, 372-380 (1967)
10. Lengyel, I., Epstein, I.R.: Modeling of Turing structures in the chlorite-iodide-malonic acid-starch reaction system. Science 251, 650-652 (1991)
11. Minchev, B., Wright, W.M.: A review of exponential integrators for semilinear problems. Technical Report 2, The Norwegian University of Science and Technology (2005)
12. Munthe-Kaas, H.: High order Runge-Kutta methods on manifolds. Applied Numerical Mathematics 29, 115-127 (1999)
13. Nørsett, S.P.: An A-stable modification of the Adams-Bashforth methods. In: Conf. on Numerical Solution of Differential Equations, Dundee, pp. 214-219. Springer, Berlin (1969)
On the Discretization Time-Step in the Finite Element Theta-Method of the Two-Dimensional Discrete Heat Equation

Tamás Szabó

Eötvös Loránd University, Institute of Mathematics, 1117 Budapest, Pázmány P. S. 1/c, Hungary
[email protected]
Abstract. In this paper the numerical solution of the two-dimensional heat conduction equation is investigated, applying a Dirichlet boundary condition on the upper side and Neumann boundary conditions on the left, right, and lower sides. For the discretization in space we apply the linear finite element method, and for the time discretization the well-known theta-method. The aim of the work is to derive an adequate numerical solution for the homogeneous initial condition by this approach. We theoretically analyze the possible choices of the time-discretization step-size and establish the interval in which the discrete model reliably describes the original physical phenomenon. The discrete model takes the form of a one-step iterative method. We point out that both lower and upper bounds on the time-step size are needed to preserve the qualitative properties of the real physical solution. The main results of the work are the determination of the interval for the time-step size to be used in this special finite element method and the analysis of the main qualitative characteristics of the model. Our theoretical results are verified by different numerical experiments.
1 Preliminaries
Proton exchange membrane fuel cells (PEMFC) are electrochemical conversion devices that produce electricity from fuel and an oxidant, which react in the presence of an electrolyte. The most commonly used membrane is Nafion®, which relies on liquid water humidification of the membrane. It is not appropriate to operate it above 80-90°C, since the membrane would dry; this implies the importance of a deeper analysis of the heat conduction modeling of a fuel cell. Since the heat produced in fuel cells appears on a specified side of the cell, we need to apply different boundary conditions on different sides of the two-dimensional mathematical model. Minimum time step sizes for different discrete one-dimensional diffusion problems have been analyzed by many researchers [7]. Thomas and Zhou [10] have constructed an approach to develop the minimum time step size that can be used in the finite element method (FEM) for one-dimensional diffusion problems. In a previous work [9], we pointed out its imperfections and extended the analysis to
the theta-method as well, and developed an upper bound for the maximum time step size. In this paper, the bounds on the time step size that can be used in the FEM with bilinear shape functions [6] are investigated for the two-dimensional classical diffusion problem. During the analysis the heat conduction equation is considered as a prototypical parabolic partial differential equation. The general form of this equation on Ω × (0, T), where Ω := [0, 1] × [0, 1], is

$$c\,\frac{\partial u}{\partial t} = \kappa \nabla^2 u, \quad (x,y) \in \Omega,\; t \in (0,T), \qquad u|_{\Gamma_D} = \tau, \quad \left.\frac{\partial u}{\partial n}\right|_{\Gamma_N} = 0, \quad t \in [0,T), \qquad (1.1)$$
$$u(x,y,0) = u_0(x,y), \quad (x,y) \in \Omega,$$

where c represents the specific heat capacity, u is the temperature of the analyzed domain, t and x, y denote the time and space variables, respectively, and κ is the coefficient of thermal conductivity. Moreover, τ, the given temperature at Γ_D, is a non-negative real number. The left-hand side of this equation expresses the rate of the temperature change at a point in space over time; the right-hand side describes the spatial thermal conduction. Γ_N denotes the part of the boundary of Ω where the Neumann boundary condition holds (x = 0 or x = 1 or y = 1), and Γ_D denotes the part of the boundary where the Dirichlet boundary condition is applied (y = 0). During the analysis of the problem the space was divided into 2n² equilateral triangle elements (Fig. 1). The heat capacity and the coefficient of thermal conductivity are assumed to be constants. We seek the spatially discretized temperature u_d in the form

$$u_d(x,y,t) = \sum_{i,j=0}^{n} \phi_{i,j}(t)\, N_{i,j}(x,y), \qquad (1.2)$$
where N_{i,j}(x, y) are the following shape functions:

$$N_{i,j}(x,y) := \begin{cases}
1 - \tfrac{1}{h}(x_i - x) - \tfrac{1}{h}(y_j - y), & \text{if } (x,y) \in \omega_1^{i,j},\\
1 - \tfrac{1}{h}(x_i - x), & \text{if } (x,y) \in \omega_2^{i,j},\\
1 + \tfrac{1}{h}(y_j - y), & \text{if } (x,y) \in \omega_3^{i,j},\\
1 + \tfrac{1}{h}(x_i - x) + \tfrac{1}{h}(y_j - y), & \text{if } (x,y) \in \omega_4^{i,j},\\
1 + \tfrac{1}{h}(x_i - x), & \text{if } (x,y) \in \omega_5^{i,j},\\
1 - \tfrac{1}{h}(y_j - y), & \text{if } (x,y) \in \omega_6^{i,j}.
\end{cases} \qquad (1.3)$$

The functions φ_{i,j} are unknown, and n² is the number of nodes. The index j of the unknown temperatures starts from 1, because due to the boundary condition at y = 0 the temperature there is known, namely φ_{0,i}(t) = τ for all i = 0, 1, ..., n.
Fig. 1. The equilateral triangle grid on the analysed domain
By redistributing the indices, we obtain the following equations:

$$\sum_{i,k=0}^{n} \phi_{i,k}'(t) \int_\Omega c\, N_{i,k} N_{j,l}\, dy\, dx + \sum_{i,k=0}^{n} \phi_{i,k}(t) \int_\Omega \kappa \left( \frac{\partial N_{i,k}}{\partial x}\frac{\partial N_{j,l}}{\partial x} + \frac{\partial N_{i,k}}{\partial y}\frac{\partial N_{j,l}}{\partial y} \right) dy\, dx = 0, \qquad (1.4)$$
$$j = 1, 2, \ldots, n, \qquad l = 0, 1, \ldots, n.$$
Let $K, M \in \mathbb{R}^{(n+1)\times n}$ denote the so-called stiffness and mass matrices, respectively, defined by

$$(K_{k,l})_{i,j} = \int_\Omega \kappa \left( \frac{\partial N_{i,k}}{\partial x}\frac{\partial N_{j,l}}{\partial x} + \frac{\partial N_{i,k}}{\partial y}\frac{\partial N_{j,l}}{\partial y} \right) dy\, dx, \qquad (1.5)$$
$$(M_{k,l})_{i,j} = \int_\Omega c\, N_{i,k} N_{j,l}\, dy\, dx. \qquad (1.6)$$

Then (1.4) can be expressed as

$$M \Phi' + K \Phi = 0, \qquad (1.7)$$

where Φ is a vector function with the components φ_{i,j}. For the time discretization of the system of ODEs (1.7) we apply the well-known theta-method, which results in the equation

$$M\, \frac{\Phi^{m+1} - \Phi^m}{\Delta t} + K\left( \Theta \Phi^{m+1} + (1-\Theta)\Phi^m \right) = 0. \qquad (1.8)$$

Clearly, this is a system of linear algebraic equations w.r.t. the unknown vector Φ^{m+1}, the approximation of the temperature at the new time level, which
is an array on the discretization of Ω. Here the parameter Θ is related to the applied numerical method; it is an arbitrary parameter in the interval [0, 1]. It is worth emphasizing that for Θ = 0.5 the method yields the Crank-Nicolson implicit method, which has higher accuracy in the time discretization [2]. In order to preserve the qualitative characteristics of the solution, the connections between the equations and the real problem must be analyzed. To obtain a lower bound for the time-step size, Lorenz's theorem was applied [5]. As is well known, the temperature (in Kelvin) is a non-negative function in physics. In this article the following sufficient condition will be shown for the time-step size of the finite element theta-method to retain the physical characteristics of the solution:

$$\frac{h^2 c\,(3+\sqrt{14})}{12\,\Theta\,\kappa} \le \Delta t \le \frac{h^2 c}{8(1-\Theta)\kappa}, \qquad (1.9)$$

where h is the length of the spatial approximation. This sufficient condition is well known for problems with pure Dirichlet boundary conditions, but not for problems with mixed (Neumann and Dirichlet) boundary conditions; see, e.g., [3,4].
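For concreteness, the admissible time-step interval (1.9) can be evaluated as in the following sketch, assuming 0 < Θ < 1 so that both bounds are finite; the function name is illustrative.

```python
import math

def theta_method_dt_bounds(h, c, kappa, theta):
    """Admissible time-step interval (1.9) for the FEM theta-method:
    the lower bound enforces inverse-positivity of P1, the upper one P2 >= 0."""
    dt_min = h**2 * c * (3.0 + math.sqrt(14.0)) / (12.0 * theta * kappa)
    dt_max = h**2 * c / (8.0 * (1.0 - theta) * kappa)
    return dt_min, dt_max
```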
2 Analysis of FEM Equation
Let Φ_i denote the transpose of the temperature vector at the i-th row of the discretization of Ω, namely $\Phi_i = (\Phi_{i,0}, \Phi_{i,1}, \ldots, \Phi_{i,n})^T$. First we analyze the case when a homogeneous initial condition is given, i.e., u_0(x) = 0. Then Φ_i^0 = 0 for i = 1, 2, ..., n. Since Φ_0 = τ, if τ is greater than zero there is a discontinuity in the initial condition at y = 0. We investigate the condition under which the first iteration, denoted by Φ = Φ^1, results in a non-negative approximation. Considering the fact that the matrices of equation (1.8) are block tridiagonal, the following system is obtained:

$$A\Phi_0 + B\Phi_1 + C\Phi_2 = 0, \qquad (2.1(1))$$
$$A\Phi_1 + B\Phi_2 + C\Phi_3 = 0, \qquad (2.1(2))$$
$$\vdots$$
$$A\Phi_{n-2} + B\Phi_{n-1} + C\Phi_n = 0, \qquad (2.1(n-1))$$
$$A\Phi_{n-1} + D\Phi_n = 0, \qquad (2.1(n))$$
where A, B, C, D are n-by-n tridiagonal matrices that are obtained easily by performing the integrals in (1.5) and (1.6) for the given shape functions. When A = 0_{n,n}, the solution of this system is identically zero: Φ_n = 0 follows from (2.1(n)) by the regularity of D, then (2.1(n-1)) yields Φ_{n-1} = 0, and so on. This means that the numerical scheme does not change the initial state, which contradicts the physical process. Therefore, in the sequel we assume that A ≠ 0_{n,n}.
We seek the solution in the form

$$\Phi_i = Z_i \Phi_0, \quad i = 0, 1, \ldots, n. \qquad (2.2)$$

Obviously, Z_0 is the n-by-n identity matrix. Using (2.1(n)), Φ_n can be expressed as

$$\Phi_n = -D^{-1} A Z_{n-1} \Phi_0, \qquad (2.3)$$

which implies that

$$Z_n = -D^{-1} A Z_{n-1} = X_{n-1} Z_{n-1}, \qquad (2.4)$$

where

$$X_{n-1} = -D^{-1} A. \qquad (2.5)$$

In the next step, Z_{n-1} can be expressed from (2.1(n-1)), applying (2.4), and then for the i-th equation the following relation holds:

$$Z_i = -(B + C X_i)^{-1} A Z_{i-1} = X_{i-1} Z_{i-1}, \quad i = 1, 2, \ldots, n-1, \qquad (2.6)$$

where

$$X_{i-1} = -(B + C X_i)^{-1} A, \quad i = n-1, n-2, \ldots, 1. \qquad (2.7)$$
Hence we obtain the following statement.

Theorem 1. The solution of the system of linear algebraic equations (2.1) can be computed by the following algorithm.
1. We put Z_0 = I_n;
2. We define X_{n-1}, X_{n-2}, ..., X_0 by the formulas (2.5) and (2.7), respectively;
3. We define Z_1, Z_2, ..., Z_n by the formulas (2.4) and (2.6), respectively;
4. By the formula (2.2) we define the values of Φ_i.
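A direct transcription of this algorithm into NumPy, assuming dense n-by-n blocks A, B, C, D and using linear solves instead of explicit inverses, could look as follows (a sketch, not the author's code):

```python
import numpy as np

def solve_block_system(A, B, C, D, Phi0, n):
    """Theorem 1: backward recursion for the X_i, forward recursion for the Z_i,
    then Phi_i = Z_i @ Phi0 for i = 0, ..., n."""
    X = [None] * n
    X[n - 1] = -np.linalg.solve(D, A)            # (2.5)
    for i in range(n - 1, 0, -1):                # (2.7), i = n-1, ..., 1
        X[i - 1] = -np.linalg.solve(B + C @ X[i], A)
    Z = [np.eye(A.shape[0])]                     # Z_0 = I_n
    for i in range(1, n + 1):                    # (2.4) and (2.6)
        Z.append(X[i - 1] @ Z[i - 1])
    return [Zi @ Phi0 for Zi in Z]               # (2.2)
```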
The relation Φ_i ≥ 0 holds only under the condition Z_i ≥ 0. From (2.6) we can see that this is equivalent to the non-negativity of X_i for all i = 0, 1, ..., n-1. Based on (2.7), it is hard to obtain a condition for the non-negativity of the solution, since it contains the inverse of the sum of two matrices. It is worth emphasizing that D is an M-matrix, and therefore its inverse is positive. Applying the one-step iterative method to the discretization of (1.7), the following system of linear algebraic equations is obtained:

$$P_1 \Phi^{m+1} = P_2 \Phi^m, \quad m = 0, 1, \ldots, \qquad (2.8)$$

where P_1 = M + ΔtΘK and P_2 = M - Δt(1-Θ)K. It is clear that for all Φ^{m+1} to be non-negative, the non-negativity of the following matrix is required:

$$P = P_1^{-1} P_2. \qquad (2.9)$$

Sufficient conditions for the non-negativity of P are

$$P_1^{-1} \ge 0 \quad \text{and} \quad P_2 \ge 0. \qquad (2.10)$$
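In matrix terms, the one-step operators and the sufficient conditions (2.10) can be checked numerically as in the following sketch, for small problems where forming P_1^{-1} explicitly is affordable:

```python
import numpy as np

def one_step_matrices(M, K, dt, theta):
    """P1 and P2 of the one-step form (2.8): P1 Phi^{m+1} = P2 Phi^m."""
    P1 = M + dt * theta * K
    P2 = M - dt * (1.0 - theta) * K
    return P1, P2

def nonnegativity_conditions_hold(P1, P2, tol=1e-12):
    """Check the sufficient conditions (2.10): P2 >= 0 elementwise and P1^{-1} >= 0."""
    return bool((P2 >= -tol).all() and (np.linalg.inv(P1) >= -tol).all())
```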
For P_2 it is easy to give a condition that guarantees its non-negativity. By analysing the elements of the matrix, the following condition is obtained:

$$\frac{h^2 c}{2\Delta t} - 4\kappa(1-\Theta) \ge 0, \qquad (2.11)$$

which is equivalent to the condition

$$\Delta t \le \frac{h^2 c}{8(1-\Theta)\kappa}. \qquad (2.12)$$
For the homogeneous initial condition it is clear that this condition on the time-step size disappears. It is not possible to obtain a sufficient condition for the non-negativity of the matrix P_1^{-1} by the so-called M-matrix method (similarly to the one-dimensional case [9]); this follows from the fact that P_1 contains some positive elements in its off-diagonal. Therefore, a sufficient condition for the inverse-positivity of P_1 is obtained from Lorenz [5].

Lemma 2 (Lorenz). Let A be an n-by-n matrix, and denote by A_d and A_- the diagonal and the negative off-diagonal part of the matrix A, respectively. Let $A_- = A_z + A_s = (a^z_{ij}) + (a^s_{ij})$. If

$$a_{ij} \le \sum_{k=1}^{n} a^z_{ik}\, a_{kk}^{-1}\, a^s_{kj}, \qquad \forall\, a_{ij},\; i \ne j, \qquad (2.13)$$

then A is a product of two M-matrices, i.e., A is monotone.

We can analyse the matrix P_1 with this lemma if it is decomposed into the diagonal part, the positive off-diagonal part, and the upper triangular and lower triangular negative parts. All the conditions of the lemma are satisfied if

$$\frac{1}{12} \le \frac{\left(\kappa\Theta\dfrac{\Delta t}{h^2 c} - \dfrac{1}{12}\right)^2}{\dfrac{1}{2} + 4\kappa\Theta\dfrac{\Delta t}{h^2 c}}\,, \qquad (2.14)$$

which implies the lower bound

$$\frac{h^2 c\,(3+\sqrt{14})}{12\,\Theta\,\kappa} \le \Delta t. \qquad (2.15)$$
Hence, the following statement is proven.

Theorem 3. Let us assume that the conditions (2.12) and (2.15) hold. Then for the problem (1.1) with an arbitrary non-negative initial condition the linear finite element method results in a non-negative solution on any time level.
3 Numerical Experiments
For the numerical experiments, for the boundary condition at the left hand side of the space domain we choose the value τ = 273. A special type of Gaussian elimination was used for the inversion of the sparse tridiagonal matrices [8]. The following figures are three-dimensional: in Fig. 2 the first dimension is time, the second is the spatial variable y at x = 1, and the third is the temperature at the nodes. First, we apply a relatively long time step, which results in a negative P_2. One can see in Fig. 2 that the numerical method is quite unstable: there is an oscillation with decreasing tendency in the results. When we apply a time step smaller than the lower bound defined by (2.15), small negative peaks appear close to the first node. These solutions are unrealistic, since the absolute temperature should be non-negative. For the sake of completeness, in Fig. 3 we applied a time-step size from the interval (1.9), which yields a more stable numerical method. In this figure the first two dimensions are the spatial ones (x, y) and the third is the temperature at the nodes. It is easy to see that, by use of appropriate time steps, the solution becomes much smoother than in Fig. 2.
[Figure: surface plot of the temperature [K] versus time and the spatial coordinate y at x = 1, showing an oscillating solution.]
Fig. 2. The solution when a small time step is applied
[Figure: surface plot of the temperature [K] over the spatial coordinates x and y.]
Fig. 3. The solution obtained by applying time step from the interval (1.9)
4 Conclusions and Further Works
In this article a sufficient condition was given for the time-step size of the finite element theta-method to preserve the physical characteristics of the solution. For the homogeneous initial condition we have shown that only a lower bound on the time-step size is needed in order to preserve non-negativity at the first time level. Since we were interested in the non-negativity preservation property over the whole discretized time interval, we showed the existence of bounds from both directions, i.e., there are both upper and lower bounds for the time-step. A detailed analysis of other types of shape functions and of rectangular domains will be done in the future. This work is important for a better understanding of the chemical and physical phenomena of fuel cells by way of mathematical modeling.
References 1. Berman, A., Plemmons, R.J.: Nonnegative matrices in the mathematical sciences. Computer Science and Applied Mathematics. Academic Press (Harcourt Brace Jovanovich, Publishers), New York (1979) 2. Crank, J., Nicolson, P.: A practical method for numerical evaluation of solutions of partial differential equations of the heat conduction type. Proceedings of the Cambridge Philosophical Society 43, 50–64 (1947) 3. Farago, I.: Non-negativity of the difference schemes. Pour Math. Appl. 6, 147–159 (1996) 4. Farkas, H., Farag´ o, I., Simon, P.: Qualitative properties of conductive heat transfer. In: Sienuitycz, S., De Vos, A. (eds.) Thermodynamics of Energy Conversion and Transport, pp. 199–239. Springer, Heidelberg (2000) 5. Lorenz, J.: Zur Inversmonotonie diskreter Probleme. Numer. Math. 27, 227–238 (1977) (in German) 6. Marchuk, G.: Numerical methods. Mˇ uszaki K¨ onyvkiad´ o, Budapest (1976) (in Hungarian) 7. Murti, V., Valliappan, S., Khalili-Naghadeh, N.: Time step constraints in finite element analysis of the Poisson type equation. Comput. Struct. 31, 269–273 (1989) 8. Samarskiy, A.A.: Theory of difference schemes, Moscow, Nauka (1977) (in Russian) 9. Szab´ o, T.: On the Discretization Time-Step in the Finite Element Theta-Method of the Discrete Heat Equation. In: Margenov, S., Vulkov, L., Wa`sniewski, J. (eds.) Numerical Analysis and Its Applications. LNCS, vol. 5434, pp. 564–571. Springer, Berlin (2009) 10. Thomas, H.R., Zhou, Z.: An analysis of factors that govern the minimum time step size to be used in finite element analysis of diffusion problem. Commun. Numer. Meth. Engng. 14, 809–819 (1998)
A Locally Conservative Mimetic Least-Squares Finite Element Method for the Stokes Equations

Pavel Bochev¹ and Max Gunzburger²

¹ Applied Mathematics and Applications, P.O. Box 5800, MS 1320, Sandia National Laboratories, Albuquerque, NM 87185-1320, USA
[email protected]
² Department of Scientific Computing, Florida State University, Tallahassee FL 32306-4120, USA
[email protected]
Abstract. Least-squares finite element methods (LSFEMs) for the incompressible Stokes equations that use nodal C 0 elements conserve mass approximately. This has led to the common and widespread misconception that LSFEMs are unable to provide the local conservation of their mixed cousins. We develop a mimetic LSFEM for the Stokes equations that yields divergence-free velocity fields and satisfies a discrete momentum equation. The LSFEM uses the velocity-vorticity-pressure Stokes equations, div-conforming elements for the velocity, curl-conforming elements for the vorticity and discontinuous elements for the pressure.
1 Introduction
In this paper we develop a mimetic least-squares finite element method (LSFEM) for the first-order velocity-vorticity-pressure (VVP) form

$$\nabla \times \omega + \nabla p = f \quad \text{in } \Omega, \qquad \nabla \times u - \omega = 0 \quad \text{in } \Omega, \qquad \nabla \cdot u = 0 \quad \text{in } \Omega, \qquad (1)$$

of the incompressible Stokes equations, augmented with the normal velocity-tangential vorticity boundary condition

$$u \cdot n = 0 \quad \text{and} \quad \omega \times n = 0 \quad \text{on } \Gamma, \qquad (2)$$

and the zero mean pressure constraint

$$\int_\Omega p\, d\Omega = 0. \qquad (3)$$
Sandia is a multiprogram laboratory operated by Sandia Corporation, a Lockheed Martin Company, for the United States Department of Energy under contract DEAC04-94-AL85000.
In (1)–(3), u, ω, p, ν, and f are the (non-dimensional) velocity, vorticity, pressure, inverse Reynolds number, and body force per unit mass, respectively. In what follows we restrict attention to simply connected, contractible bounded regions Ω ⊂ R3 with Lipschitz-continuous boundary Γ = ∂Ω. The paper is organized as follows. Section 2 introduces notation and reviews key technical results needed in the sequel. The new method is presented in Section 3 and its conservation properties are established in Section 4.
2 Notation
As usual, L²(Ω) is the Hilbert space of all square integrable functions with inner product and norm (·,·) and ‖·‖₀, respectively, L²₀(Ω) is the subspace of L²(Ω) consisting of functions with zero mean, bold face denotes vector spaces, and¹

$$H_0^1(\Omega) = \{ u \in L^2(\Omega) \mid \nabla u \in L^2(\Omega);\ u = 0 \text{ on } \Gamma \}, \qquad (4)$$
$$H_0(\Omega, \mathrm{curl}) = \{ u \in L^2(\Omega) \mid \nabla \times u \in L^2(\Omega);\ u \times n = 0 \text{ on } \Gamma \}, \qquad (5)$$
$$H_0(\Omega, \mathrm{div}) = \{ u \in L^2(\Omega) \mid \nabla \cdot u \in L_0^2(\Omega);\ u \cdot n = 0 \text{ on } \Gamma \}. \qquad (6)$$

The spaces (4)-(6), equipped with the graph norms

$$\|u\|_G^2 = \|u\|_0^2 + \|\nabla u\|_0^2, \qquad \|u\|_C^2 = \|u\|_0^2 + \|\nabla \times u\|_0^2, \qquad \|u\|_D^2 = \|u\|_0^2 + \|\nabla \cdot u\|_0^2, \qquad (7)$$

are Hilbert spaces, as is H(Ω, curl) ∩ H₀(Ω, div) endowed with the norm

$$\|u\|_{CD}^2 = \|u\|_0^2 + \|\nabla \times u\|_0^2 + \|\nabla \cdot u\|_0^2. \qquad (8)$$
We recall that under the assumptions on Ω the DeRham complex

$$\mathbb{R} \to H_0^1(\Omega) \xrightarrow{\nabla} H_0(\Omega,\mathrm{curl}) \xrightarrow{\nabla\times} H_0(\Omega,\mathrm{div}) \xrightarrow{\nabla\cdot} L_0^2(\Omega) \to 0 \qquad (9)$$

is exact; see [4,1]. We also recall the Poincaré-Friedrichs inequalities:

$$\|u\|_0 \le C_P \|\nabla u\|_0 \qquad \forall u \in H^1(\Omega) \cap L_0^2(\Omega), \qquad (10)$$
$$\|u\|_0 \le C_P \left( \|\nabla \times u\|_0 + \|\nabla \cdot u\|_0 \right) \qquad \forall u \in H(\Omega,\mathrm{curl}) \cap H_0(\Omega,\mathrm{div}). \qquad (11)$$
Mimetic LSFEMs for the VVP Stokes system (1)-(3) are formulated using finite element approximations of the DeRham complex (9). Such approximations comprise conforming finite element subspaces $G_0^h(\Omega) \subset H_0^1(\Omega)$, $C_0^h(\Omega) \subset H_0(\Omega,\mathrm{curl})$, $D_0^h(\Omega) \subset H_0(\Omega,\mathrm{div})$ and $S_0^h(\Omega) \subset L_0^2(\Omega)$ which form an exact sequence, and bounded projection operators $\Pi_G : H_0^1(\Omega) \to G_0^h(\Omega)$, $\Pi_C : H_0(\Omega,\mathrm{curl}) \to C_0^h(\Omega)$, $\Pi_D : H_0(\Omega,\mathrm{div}) \to D_0^h(\Omega)$, and $\Pi_S : L_0^2(\Omega) \to S_0^h(\Omega)$ such that the diagram

$$\begin{array}{ccccccc}
H_0^1(\Omega) & \xrightarrow{\nabla} & H_0(\Omega,\mathrm{curl}) & \xrightarrow{\nabla\times} & H_0(\Omega,\mathrm{div}) & \xrightarrow{\nabla\cdot} & L_0^2(\Omega) \\
\Pi_G \downarrow & & \Pi_C \downarrow & & \Pi_D \downarrow & & \Pi_S \downarrow \\
G_0^h(\Omega) & \xrightarrow{\nabla} & C_0^h(\Omega) & \xrightarrow{\nabla\times} & D_0^h(\Omega) & \xrightarrow{\nabla\cdot} & S_0^h(\Omega)
\end{array} \qquad (12)$$

commutes. The standard L²(Ω) projections onto compatible finite element spaces are denoted by π_G, π_C, π_D and π_S. In what follows we restrict attention to affine partitions T_h of Ω into simplicial elements κ because the construction of finite element spaces and projection operators such that (12) holds is best understood in that setting; see [1,3]. For brevity, LSFEMs are formulated using the lowest-order finite element DeRham complex, for which $G_0^h(\Omega)$ is the piecewise linear C⁰ nodal space, $C_0^h(\Omega)$ is the lowest-order Nédélec edge element space of the first kind [6], $D_0^h(\Omega)$ is the lowest-order Raviart-Thomas [7] face element space, and $S_0^h(\Omega)$ is the piecewise constant space.

¹ We write H¹(Ω), H(Ω, curl) and H(Ω, div) when (4)-(5) are not constrained by boundary conditions.
3 Mimetic LSFEM for the Stokes Equations
The key to well-posed LSFEMs for the VVP Stokes boundary value problem (1)–(2) and (3) is the following stability result for that system. Theorem 1. Assume that Ω is a simply connected bounded domain in R3 with a Lipschitz boundary. Then, there exists a positive constant C such that C uCD + ωC + pG ≤ ∇ × ω + ∇p0 + ∇ × u − ω0 + ∇ · u0 (13) for all {u, ω, p} ∈ H0 (Ω, div) ∩ H(Ω, curl) × H0 (Ω, curl) × H 1 (Ω) ∩ L20 (Ω). Proof. Owing to the boundary condition ω × n = 0 on ∂Ω, ∇ × ω + ∇p20 = ∇ × ω20 + ∇p20
(14)
∇ × u − ω20 = ∇ × u20 + ω20 − (∇ × u, ω) − (u, ∇ × ω) .
(15)
and Using this identity and the Cauchy inequality, it follows that ∇ × u − ω20 + ∇ · u20 ≥ ∇ × u20 + ∇ · u20 + ω20 − ∇ × u0 ω0 − u0 ∇ × ω0 . Because u ∈ H0 (Ω, div) ∩ H(Ω, curl), the Poincar´e-Friedrichs inequality (11) implies that 1 ∇ × u20 + ∇ · u20 ≥ 2 u20 . (16) CP
In conjunction with the previous bound, we see that 1 ∇ × u20 + ∇ · u20 2
∇ × u − ω20 + ∇ · u20 ≥ +
1 u20 + ω20 − ∇ × u0 ω0 − u0 ∇ × ω0 . 2CP2
Using the ε-inequality for the last two terms gives β∇ × ω + ∇p20 + ∇ × u − ω20 + ∇ · u20 1 1 ≥ β ∇ × ω20 + ∇p20 + u20 + ω20 ∇ × u20 + ∇ · u20 + 2 2CP2 ε1 1 ε2 1 − ∇ × u20 − ω20 − u20 − ∇ × ω20 2 2ε1 2 2ε2
1 1 2 ≥ β− ∇ × ω0 + 1 − ω20 2ε2 2ε1
1 1 1 1 2 + (1 − ε1 ) ∇ × u0 + − ε2 u20 + β∇p20 + ∇ · u20 , 2 2 CP2 2 where β = 2CP2 . Setting ε1 = 2/3, ε2 = 1/(2CP2 ), and using the Poincar´eFriedrichs inequality (10) yields 2CP2 ∇ × ω + ∇p20 + ∇ × u − ω20 + ∇ · u20 1 1 1 ∇ × u20 + ∇ · u20 + u20 6 2 4CP2 1 2CP2 p2G +CP2 ∇ × ω20 + ω20 + 4 (1 + CP2 ) ≥ min Cu , Cω , 2CP2 (1 + CP2 )−1 u2CD + ω2C + p2G , ≥
with Cω = min
CP2 ,
1 4
and
Cu = min
1 1 1 , , 4CP2 6 2
.
The theorem follows by noting that ∇ × ω + ∇p20 + ∇ × u − ω20 + ∇ · u20 1 ≥ 2CP2 ∇ × ω + ∇p20 + ∇ × u − ω20 + ∇ · u20 . 2 max{1, 2CP } 2 Stability bound (13) asserts that J(u, ω, p; f ) = ∇ × ω + ∇p − f 20 + ∇ × u − ω20 + ∇·u20
(17)
is a norm–equivalent least-squares functional on X = H0 (Ω, div) ∩ H(Ω, curl) × H0 (Ω, curl) × H 1 (Ω) ∩ L20 (Ω) ,
(18)
and so, (17) has a unique minimizer out of X; see [2]. Unfortunately, conforming discretization of (17) requires finite element subspaces of H(Ω, curl)∩H0 (Ω, div) which makes this formulation ill-suited2 for problems where the latter is not equivalent to [H01 (Ω)]3 . Instead, we propose to use a non-conforming discretization of (17) in which X is approximated by the non-conforming finite element space X h = Dh0 × Ch0 × S0h .
(19)
In other words, we approximate the first component of X by a div-conforming space. Note that this choice allows us to impose the boundary condition (2) on the approximating space. The third component of X is also approximated in a non-conforming fashion. This enables additional conservation properties in the corresponding least-squares formulation. Because the first and the last component spaces of X h are non-conforming, the original least-squares functional (17) has to be modified to a functional that is well-defined for X h . To this end we replace the curl operator acting on the velocity field by a discrete weak curl defined by ∗ (20) ∇h × vh , uh = vh , ∇ × uh ∀ uh ∈ Ch0 (Ω) , and the gradient operator acting on the pressure by a discrete weak gradient ∇∗h : S0h (Ω)
→ Dh0 (Ω) defined by ∗ h h h ∀ vh ∈ Dh0 (Ω) . (21) ∇h u , v = u , −∇ · vh The weak curl operator defined in (20) provides discrete version of the Poincar´e– Friedrichs inequality (11) for div-conforming elements: ∀ uh ∈ Dh0 (Ω) ; uh 0 ≤ CP ∇∗h × uh 0 + ∇ · uh 0 (22) see [2] for further details and properties of discrete operators. Thus, we are led to the following non-conforming least-squares functional J h (uh , ωh , ph ; f ) = ∇ × ωh + ∇∗h ph − f 20 + ∇∗h × uh − ωh 20 + ∇ · uh20 (23) The associated non-conforming discrete least-squares principle is given by min
{uh ,ω h ,ph }∈X h
J h (uh , ωh , ph ; f ) .
(24)
The following theorem asserts that (24) is well-posed. 2
A finite element subspace CDh of H(Ω, curl) ∩ H0 (Ω, div) must contain piecewise smooth vector fields that are both tangentialy and normally continuous, i.e., fields that are necessarily of class C 0 . It follows that CDh ⊂ [H01 (Ω)]3 . Unless Ω has smooth boundary or is a convex polyhedron, the latter is not equivalent to H(Ω, curl) ∩ H0 (Ω, div) and CDh does not have approximability property with respect to H(Ω, curl) ∩ H0 (Ω, div), see [5, Corollary 3.20, p. 97].
Theorem 2. Define the div-conforming version of (8) by uh 2C ∗ D = uh 20 + ∇∗h × uh 20 + ∇ · uh 20
∀uh ∈ Dh0 .
(25)
There exists a positive constant C, independent of h, such that C uh C ∗ D + ωh C + ∇∗h ph 0 ≤ ∇ × ω h + ∇∗h ph 0 + ∇∗h × uh − ω h 0 + ∇ · uh 0
(26)
for all {uh , ωh , ph } ∈ X h ≡ Dh0 × Ch0 × S0h . Proof. The key junctures in the proof of Theorem 1 are (14), (15) and (16). We show that, thanks to the use of a finite element DeRham complex, discrete versions of these key junctures continue to hold in the discrete setting. From definition (21) of ∇∗h we see that h ∗ h w , ∇h q = − ∇ · wh , q h ∀ q h ∈ S0h
and ∀ wh ∈ Dh0 ,
and because the range of ∇× is in Dh0 it follows that ∇ × ωh , ∇∗h ph = ∇ · ∇ × ω h , ph = 0 . This provides a discrete version ∇ × ω h + ∇∗h ph 20 = ∇ × ω h 20 + ∇∗h ph 20 of the first key juncture (14). To obtain a discrete version of (15) we use (20) which implies that ∗ ∇h × vh , wh = vh , ∇ × wh
∀ vh ∈ Dh0
and ∀ wh ∈ Ch0 .
From this identity it easily follows that ∇∗h × uh − ωh 20 = ∇∗h × uh 20 + ωh 20 − ∇∗h × uh , ωh − uh , ∇ × ω h . (27) Finally, (22) provides a discrete analogue of (16): ∇∗h × uh 20 + ∇ · uh 20 ≥
1 uh 20 . CP2
(28)
The proof of (26) now follows by repeating the remaining steps from the proof 2 of Theorem 1, and taking into consideration the definition of · C ∗ D . Theorem 2 implies that (24) is a well-posed minimization problem, i.e., that the least-squares functional (23) has a unique minimizer out of X h .
4 Conservative Properties
In this section we establish the conservative properties of the least-squares principle defined in (24). Specifically, we show that the least-squares solution satisfies the following discrete version of the VVP Stokes system ⎧ ∇ × ωh + ∇∗h ph = πD f ⎪ ⎪ ⎨ ∇∗h × uh − ω h = 0 (29) ⎪ ⎪ ⎩ ∇ · uh = 0 . The bound (26) in Theorem 2 implies that (29) has a unique solution in X h . Theorem 3. Solution of the LSFEM (24) coincides with the solution of (29). Proof. Let {uh , ωh , ph } ∈ X h = Dh0 × Ch0 × S0h denote the solution of (29). We show that {uh , ω h , ph } satisfies the first-order optimality condition of (23). To this end, write the corresponding variational problem as ∗ ⎧ h h ∗ h h h × u − ω , ∇ × v , ∇ · v ∇ + ∇ · u =0 ∀ vh ∈ Dh0 ⎪ h h ⎨ ∇ × ω h + ∇∗h ph , ∇ × ξ h − ∇∗h × uh − ωh , ξ h = f , ∇ × ξh ∀ ξh ∈ Ch0 ⎪ ⎩ ∇ × ωh + ∇∗h ph , ∇∗h q h = f , ∇∗h q h ∀ q h ∈ S0h . The second and third equations in (29) imply that ⎧ ∗ h h h ⎪ ⎨ ∇h × u − ω , ξ = 0 ∇∗h × uh − ωh , ∇∗h × vh = 0 ⎪ ⎩ ∇ · uh , ∇ · vh = 0 for all ξ h ∈ Ch0 and all vh ∈ Dh0 . Thus, it remains to show that ⎧ ⎨ ∇ × ω h + ∇∗h ph , ∇ × ξ h = f , ∇ × ξh ∀ ξh ∈ Ch0 ⎩ ∇ × ωh + ∇∗ ph , ∇∗ q h = f , ∇∗ q h ∀ q h ∈ S0h h h h
(30)
holds for the solution of (29). From the exact sequence property it follows that ∇ × ξh ∈ Dh0 for any ξ h ∈ Ch0 . This, in conjunction with the first equation in (29) and definition of the L2 (Ω)-projection πD , implies that f , ∇ × ξ h = πD f , ∇ × ξ h = ∇ × ω h + ∇∗h ph , ∇ × ξh , i.e., the first equation in (30). Likewise, ∇∗h q h ∈ Dh0 for any q h ∈ S0h and so, f , ∇∗h q h = πD f , ∇∗h q h = ∇ × ω h + ∇∗h ph , ∇∗h q h , i.e., the second equation in (30) also holds.
2
This theorem establishes that LSFEMs based on (23) result in velocity approximations that are divergence-free on every element. Moreover, we see that the least-squares solution satisfies discrete versions of the momentum equation and the second equation in (1). These properties cannot be achieved by using standard nodal finite element spaces and are the main reason to consider compatible finite elements in least-squares methods. Because (24) is a non-conforming LSFEM, its error analysis cannot be accomplished by standard elliptic arguments. This subject, as well as implementation and computation with the mimetic LSFEM will be considered in forthcoming papers.
References 1. Arnold, D.N., Falk, R.S., Winther, R.: Finite element exterior calculus, homological techniques, and applications. Technical Report 2094, Institute for Mathematics and Its Applications (2006) 2. Bochev, P., Gunzburger, M.: Least-Squares finite element methods. Applied Mathematical Sciences, vol. 166. Springer, Heidelberg (2009) 3. Christiansen, S.H., Winther, R.: Smoothed projections in finite element exterior calculus. Math. Comp. 77, 813–829 (2007) 4. Dezin, A.: Multidimensional Analysis and Discrete Models. CRC Presss, Boca Raton (1995) 5. Ern, A., Guermond, J.L.: Theory and Practice of Finite Elements. Applied Mathematical Sciences, vol. 159. Springer, New York (2004) 6. N´ed´elec, J.C.: Mixed finite elements in R3 . Numerische Mathematik 35, 315–341 (1980) 7. Raviart, P.A., Thomas, J.M.: A mixed finite element method for second order elliptic problems. In: Galligani, I., Magenes, E. (eds.) Mathematical Aspects of the Finite Element Method. LNM, vol. 606. Springer, New York (1977)
Additive Operator Decomposition and Optimization-Based Reconnection with Applications

Pavel Bochev¹ and Denis Ridzal²

¹ Applied Mathematics and Applications, ² Optimization and Uncertainty Quantification,
Sandia National Laboratories, Albuquerque, NM 87185-1320, USA
[email protected], [email protected]
Abstract. We develop an optimization-based approach for additive decomposition and reconnection of algebraic problems arising from discretizations of partial differential equations (PDEs). Application to a scalar convection–diffusion PDE illustrates the new approach. In particular, we derive a robust iterative solver for convection–dominated problems using standard multilevel solvers for the Poisson equation.
1 Introduction
Decomposition of PDEs into component problems that are easier to solve is at the heart of many numerical procedures: operator split [15] and domain decomposition [17] are two classical examples. The use of optimization and control ideas to this end is another possibility that has yet to receive proper attention despite its many promises and excellent theoretical foundations. For previous work in this direction we refer to [11,4,8] and the references cited therein. In this paper we develop an optimization–based approach for additive decomposition and reconnection of algebraic equations that is appropriate for problems associated with discretized PDEs. Our approach differs from the ideas in [11,4,8] in several important ways. The main focus of [4,8] is on formulation of non–overlapping domain–decomposition via optimization, whereas our approach targets the complementary single domain/multiple physics setting. Our ideas are closer to the decomposition framework in [11]. Nevertheless, there are crucial differences in the definition of the objectives, controls, and the constraints. The approach in [11] retains the original equation, albeit written in a form that includes the controls, and the constraints impose equality of states and controls. In other words, reconnection in [11] is effected through the constraints rather than the objective.
Sandia is a multiprogram laboratory operated by Sandia Corporation, a Lockheed Martin Company, for the United States Department of Energy under contract DEAC04-94-AL85000.
In contrast, we replace the original problem by component problems that, by themselves, are not equivalent to the original problem. These problems define the constraints, while reconnection is effected via the objective functional. Consequently, in our approach the states are different from the controls and the objective functional is critical for “closing” the formulation.
2 Additive Operator Decomposition
For clarity we present the approach in an algebraic setting, i.e., we consider the solution of the linear system

$$A x = b, \qquad (1)$$

where $A \in \mathbb{R}^{n\times n}$ is a nonsingular matrix and x and b are vectors in $\mathbb{R}^n$. We present a method suitable for the scenario in which the matrix A comprises multiple operators with fundamentally different mathematical properties, e.g. resulting from an all-at-once discretization of a multiphysics problem, in which case the linear system (1) requires nonstandard, highly specialized solution techniques. For a concrete example and discussion, see Section 4. The proposed optimization-based approach for the solution of (1) rests on the assumption that the matrix A can be written as the sum of two component matrices

$$A = A_1 + A_2 \qquad (2)$$

for which robust solvers are readily available. We note that A_1 and A_2 can represent the original operator components of a multi-operator equation; however, other, nontrivial decompositions are oftentimes more useful, see Section 4. We assume that A_1 and A_2 are nonsingular. To motivate the optimization formulation, we consider an equivalent formulation of (1) in terms of the component matrices,

$$(A_1 x - u - b) + (A_2 x + u) = 0,$$

where $u \in \mathbb{R}^n$ is an arbitrary coupling vector. As the overall intent is to make use of robust solvers available for the solution of linear systems involving A_1 and A_2, we aim to develop a procedure that allows us to repeatedly solve linear systems of the type

$$A_1 x = u + b \qquad \text{and} \qquad A_2 x = -u,$$

instead of the original problem. Our approach, based on ideas from optimization and control, is presented next.
3 Reconnection via an Optimization Formulation
We propose to replace (1) by the following constrained optimization problem:

$$\begin{array}{ll}
\text{minimize} & J^{\varepsilon}(x_1, x_2, u) = \dfrac{1}{2}\|x_1 - x_2\|_2^2 + \dfrac{\varepsilon}{2}\|u\|_2^2 \\[6pt]
\text{subject to} & A_1 x_1 - u = b \\
& A_2 x_2 + u = 0,
\end{array} \qquad (3)$$
where ε > 0 is a regularization parameter and ‖·‖₂ denotes the Euclidean 2-norm. In the language of optimization and control, x_1 and x_2 are the state variables and u is the distributed control variable. We first show that (3) is well-posed, i.e., that it has a unique solution {x_1^ε, x_2^ε, u^ε}. Then we examine the connection between {x_1^ε, x_2^ε} and the solution x of the original equation (1).

3.1 Existence of Optimal Solutions
The first-order necessary conditions for {x_1^ε, x_2^ε, u^ε} to be a solution of (3) state that there is a Lagrange multiplier vector pair {λ_1^ε, λ_2^ε}, λ_1^ε, λ_2^ε ∈ ℝⁿ, such that the KKT system of equations is satisfied,

$$\begin{pmatrix} I & -I & 0 & A_1^T & 0 \\ -I & I & 0 & 0 & A_2^T \\ 0 & 0 & \varepsilon I & -I & I \\ A_1 & 0 & -I & 0 & 0 \\ 0 & A_2 & I & 0 & 0 \end{pmatrix} \begin{pmatrix} x_1^\varepsilon \\ x_2^\varepsilon \\ u^\varepsilon \\ \lambda_1^\varepsilon \\ \lambda_2^\varepsilon \end{pmatrix} = \begin{pmatrix} 0 \\ 0 \\ 0 \\ b \\ 0 \end{pmatrix}, \qquad (4)$$

where I is the n×n identity matrix. In the following, we let H denote the 3n×3n (full) Hessian matrix and Z the 3n×n null-space matrix, respectively,

$$H = \begin{pmatrix} I & -I & 0 \\ -I & I & 0 \\ 0 & 0 & \varepsilon I \end{pmatrix}, \qquad Z = \begin{pmatrix} A_1^{-1} \\ -A_2^{-1} \\ I \end{pmatrix},$$

with the property

$$\begin{pmatrix} A_1 & 0 & -I \\ 0 & A_2 & I \end{pmatrix} Z = \begin{pmatrix} 0 \\ 0 \end{pmatrix}.$$

Lemma 1. The matrix $\widetilde{H} = Z^T H Z$, known as the reduced Hessian, is symmetric positive definite.
648
P. Bochev and D. Ridzal
resulting from the discretization of a PDE, i.e., that we are dealing with a family of linear systems parametrized by some measure h of the mesh size, instead of with a single linear system. In this case the regularization term is needed to guarantee the uniform in h invertibility of A; see [2]. 3.2
Reformulation Error
In general, as ε > 0, the state solutions xε1 and xε2 of (3) will differ from the solution x of the original problem (1). In this section we calculate the error induced by ε. −1 Lemma 2. The matrix (A−1 1 + A2 ) is nonsingular, with the inverse given by −1 T −1 −1 −1 A1 − A1 A A1 . (It follows trivially that the matrix (A−1 1 + A2 ) (A1 + A2 ) is symmetric positive definite.)
Proof. By inspection, −1 −1 −1 −1 A1 ) = I + A−1 A1 − A−1 A1 (A−1 1 + A2 )(A1 − A1 A 2 A1 − A 2 A1 A −1 −1 −1 −1 A1 = I + A2 − (A1 + A2 ) − A2 A1 (A1 + A2 ) −1 −1 −1 = I + A2 (A1 + A2 ) − I − A2 A1 (A1 + A2 ) A1 = I.
Lemma 3. The following statements hold: −1 (I − A1 A−1 )b , xε1 − x = ε A−1 1 H and
−1 A2 A−1 b . xε2 − x = −ε A−1 2 H
(5) (6)
Proof. We present a proof for (6), statement (5) can be verified analogously. The KKT system (4) yields expressions for the states, ε xε1 = A−1 1 (u + b),
ε xε2 = −A−1 2 u ,
(7)
the Lagrange multipliers, ε ε λε1 = −A−T 1 (x1 − x2 ),
ε ε λε2 = A−T 2 (x1 − x2 ),
and the controls, uε = (1/ε)(λε1 − λε2 ) . By substitution, we obtain the so–called reduced system for the controls, ε −1 −1 −1 −1 −1 T −1 T (A1 + A−1 2 ) (A1 + A2 ) + εI u = −(A1 + A2 ) A1 b ,
(8)
implies which using the notation for the reduced Hessian H −1 (A−1 + A−1 )T A−1 − A−1 b xε2 − x = A−1 2 H 1 2 1 −1 −1 −1 −1 T −1 (A1 + A2 ) A1 = A2 H −1 −1 −1 T −1 b − (A1 + A−1 2 ) (A1 + A2 ) + εI A2 A −1 (A−1 + A−1 )T A−1 − (A−1 + A−1 )A2 A−1 − εA2 A−1 b = A−1 2 H 1 2 1 1 2 −1 (A−1 + A−1 )T A−1 (A1 + A2 ) − (A−1 + A−1 )A2 A−1 = A−1 2 H 1 2 1 1 2 −1 A2 A−1 b . −εA2 A−1 b = −ε A−1 H 2
Additive Operator Decomposition and Optimization–Based Reconnection
649
−1 T −1 −1 Lemma 4. Let M = (A−1 1 +A2 ) (A1 +A2 ) and let Λmin denote the smallest eigenvalue of M. Let ε < (Λmin /2). Then,
−1 2 < H
2 Λmin
.
= M I − (−εM −1 ) . Due to ε < (Λmin /2) and e.g. [7, Proof. We have H Lemma 2.3.3], the matrix I − (−εM −1 ) is nonsingular, hence we can write −1 = I − (−εM −1 ) −1 M −1 , which implies H 1
−1 I − (−εM −1 ) ≤
1 Λmin Λmin (1 − εM −1 2 ) 2 1 2 1 = < . = Λmin (1 − (ε/Λmin )) Λmin − ε Λmin
−1 2 ≤ H
Theorem 1. Let the assumptions of Lemma 4 be satisfied. There exists a constant C, independent of ε, such that xε1 − x2 + xε2 − x2 ≤ εCb2 . Proof. Lemmas 3 and 4 yield the claim directly, with C=
2 Λmin
−1 A (I − A1 A−1 ) + A−1 A2 A−1 . 1 2 2 2 2 2 2
Remark 2. For parametrized linear systems (1) corresponding to discretized PDEs the constant C may depend on the mesh size h. 3.3
A Solution Algorithm
We now exploit the structure of the reformulated problem to develop robust solution methods for (1). Our approach is based on the premise that robust solution methods for linear systems involving A1 and A2 are readily available. We focus on a solution method known as the null–space or reduced–space approach, which effectively decouples the component equations in (4). In this approach one typically solves the reduced system (8) iteratively, using a Krylov subspace method. The main computational cost of such an approach is in the repeated application of the reduced Hessian operator, specified in Algorithm 1. A similar procedure can be derived for the (one–time) computation of the right–hand side in (8). Optimal states can be recovered by solving either of the equations given in (7).
4
Numerical Results
In this section we apply the optimization–based reformulation to an SUPG– stabilized [10] discretization of the scalar convection–diffusion elliptic problem −νΔu + c · ∇u = f in Ω,
u = uD on ΓD ,
∇u · n = 0 on ΓN .
(9)
650
P. Bochev and D. Ridzal
to vector u Algorithm 1. Application of reduced Hessian H 1 2 3 4
input : vector u u output: vector H Solve: A1 y1 = u, A2 y2 = u Compute: y3 = y1 + y2 Solve: AT1 y4 = y3 , AT2 y5 = y3 u = y4 + y5 + εu Compute: H
(state equations) (adjoint equations)
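A matrix-free transcription of Algorithm 1 and of the reduced-space solve of (8) is sketched below. The arguments solve_A1, solve_A2 and their transposed counterparts are assumed to be black-box solvers supplied externally (e.g. multigrid-preconditioned Krylov solvers), and the function names are ours.

```python
import numpy as np
from scipy.sparse.linalg import LinearOperator, gmres

def make_reduced_hessian(solve_A1, solve_A2, solve_A1T, solve_A2T, eps, n):
    """Matrix-free reduced Hessian of Algorithm 1."""
    def matvec(u):
        y3 = solve_A1(u) + solve_A2(u)                  # state solves
        return solve_A1T(y3) + solve_A2T(y3) + eps * u  # adjoint solves + regularization
    return LinearOperator((n, n), matvec=matvec)

def solve_reconnection(solve_A1, solve_A2, solve_A1T, solve_A2T, b, eps=1e-12):
    """Reduced-space approach: solve the reduced system (8) for the control u
    with GMRES, then recover the state from A1 x1 = u + b."""
    n = b.shape[0]
    H = make_reduced_hessian(solve_A1, solve_A2, solve_A1T, solve_A2T, eps, n)
    w = solve_A1(b)
    rhs = -(solve_A1T(w) + solve_A2T(w))   # right-hand side of (8)
    u, _ = gmres(H, rhs)                   # default GMRES tolerances
    return solve_A1(u + b)
```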
We assume that Ω is a bounded open domain in R^d, d = 2, 3, with the Lipschitz continuous boundary ∂Ω = Γ_D ∪ Γ_N, c is a given velocity field with ∇·c = 0, u_D is a given Dirichlet boundary function, n is the outward unit normal, and ν > 0 is a constant diffusion coefficient. We focus on settings where ν is small compared to |c|, i.e., when (9) is convection-dominated. The linear system resulting from (9) is typically of the form

$$(\nu D + C)\, x = b, \qquad (10)$$

where D is a (pure) diffusion matrix (a discretization of the Laplace operator), C corresponds to the convection contribution (including any terms due to the SUPG stabilization), and b stands for a discretization of the source term f (plus stabilization terms). In order to apply our optimization-based approach to (10), we make the identification

$$A_1 = D + C \qquad \text{and} \qquad A_2 = (\nu - 1)D. \qquad (11)$$
The key property of this decomposition is that, unlike the original operator (νD + C), the matrices A1 and A2 are diffusion–dominated. The scalable iterative solution of linear systems arising from PDEs of the type (9) is a very active area of research. Algebraic multigrid methods (see [16] and references therein) work well for diffusion–dominated problems but their performance degrades in the convection–dominated case. The efforts to extend multigrid methods to such problems [1,3,13,14] have led to improvements, albeit at the expense of increased algorithm complexity. As we show below, widely used “off–the–shelf” multigrid solvers, such as BoomerAMG (hypre library) [9] or ML (Trilinos project) [6], can lack robustness when applied to problems with complex convection fields, particularly in the case of very large P´eclet numbers (defined here as 1/ν). We consider the so–called double–glazing problem, see [5, p.119]. The domain is given by Ω = [−1, 1]2 , subject to a uniform triangular partition, generated by dividing each square of a uniform square partition of Ω into two triangles, diagonally from the bottom left to the top right corner. The convection field is given by c = 2y(1 − x2 ), −2x(1 − y 2 ) , and the diffusion constant is set to ν = 10−8 . Boundary conditions are specified by
ΓN = ∅,
uD = 0 on {[−1, 1) × {−1}} ∪ {{−1} × (−1, 1)} ∪ {[−1, 1) × {1}} , uD = 1 on {{1} × [−1, 1]} .
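As a concrete illustration of the splitting (11) and of the double-glazing convection field, consider the following sketch; it assumes the diffusion and convection matrices D and C have already been assembled by the discretization code.

```python
import numpy as np
import scipy.sparse as sp

def split_operators(D, C, nu=1e-8):
    """Splitting (11): both A1 = D + C and A2 = (nu - 1) D are diffusion-dominated.
    By construction A1 + A2 equals the full operator nu*D + C."""
    A1 = (D + C).tocsr()
    A2 = ((nu - 1.0) * D).tocsr()
    return A1, A2

def double_glazing_velocity(x, y):
    """Recirculating convection field of the double-glazing problem."""
    return np.array([2.0 * y * (1.0 - x**2), -2.0 * x * (1.0 - y**2)])
```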
We compare the solution of linear systems arising from the full convection-diffusion problem (10), denoted the full problem, to the solution of the optimization reformulation (3) with A_1 and A_2 defined in (11). For the solution of (10) we use the multigrid solvers BoomerAMG and ML as preconditioners for the GMRES method with a maximum Krylov subspace size of 200, denoted by GMRES(200). Solver abbreviations, options, and failure codes used hereafter are summarized below. We note that the stated MLILU and BAMG parameters reflect the best solver settings that we could find for the example problem.

MLILU: ML with incomplete LU smoother (IFPACK, ILU, threshold=1.05), W cycle
BAMG: BoomerAMG with Falgout-CLJP coarsening (6), symmetric Gauss-Seidel / Jacobi hybrid relaxation (6), V cycle (1)
—MX: exceeded maximum number of GMRES iterations (2000)
Table 1. Number of outer GMRES(200) iterations for the optimization approach; total number of GMRES(200) iterations for multigrid applied to the full problem

         ν = 10^-8                           128 × 128
         64×64    128×128   256×256          ν = 10^-2   ν = 10^-4   ν = 10^-8
OPT      114      114       113              84          114         114
ML       71       196       —MX              9           96          196
BAMG     72       457       —MX              7           33          457

The optimization reformulation is solved via the reduced-space approach, denoted here by OPT. The outer optimization loop amounts to solving the linear system (8) using unpreconditioned GMRES(200). Within every optimization iteration, we solve four linear systems, see Algorithm 1. The linear systems involving A_1 and A_2 are solved using GMRES(10), preconditioned with ML with a simple smoother, to a relative stopping tolerance of 10^-10. We remark that in either case only 5-8 iterations are required for the solution of each system. The regularization parameter ε is set to 10^-12, see [2]. Table 1 presents a comparison of the number of outer GMRES(200) iterations for the optimization approach and the total number of GMRES(200) iterations for multigrid solvers applied to the full problem. Relative stopping tolerances are set to 10^-6. For our test problem, MLILU and BAMG show very strong mesh dependence, and eventually fail to converge on the 256 × 256 mesh. OPT, on the other hand, is robust to mesh refinement, and successfully solves the problem for all mesh sizes. In addition, Table 1 clearly demonstrates that while MLILU and BAMG are very sensitive to the size of the Péclet number, OPT's performance is affected only mildly, and in fact does not change when the diffusion constant is lowered from ν = 10^-4 to ν = 10^-8. Overall, our optimization-based strategy provides a promising alternative for the robust solution of problems on which widely used multigrid solvers struggle. It is best used, and easily implemented, as a robustness-improving wrapper around standard multigrid solvers. To better
To better assess the general effectiveness of the approach, additional numerical experiments on a wider range of target problems are clearly needed and will be the subject of future studies.
References 1. Bank, R.E., Wan, J.W.L., Qu, Z.: Kernel preserving multigrid methods for convection-diffusion equations. SIAM J. Matrix Anal. Appl. 27(4), 1150–1171 (2006) 2. Bochev, P., Ridzal, D.: An optimization-based approach for the design of robust solution algorithms. SIAM J. Numer. Anal. (submitted) 3. Brezina, M., Falgout, R., MacLachlan, S., Manteuffel, T., McCormick, S., Ruge, J.: Adaptive smoothed aggregation (αsa) multigrid. SIAM Review 47(2), 317–346 (2005) 4. Du, Q., Gunzburger, M.: A gradient method approach to optimization-based multidisciplinary simulations and nonoverlapping domain decomposition algorithms. SIAM J. Numer. Anal. 37, 1513–1541 (2000) 5. Elman, H., Silvester, D., Wathen, A.: Finite Elements and Fast Iterative Solvers. Oxford University Press, Oxford (2005) 6. Gee, M.W., Siefert, C.M., Hu, J.J., Tuminaro, R.S., Sala, M.G.: ML 5.0 Smoothed Aggregation User’s Guide. Technical Report SAND2006-2649, Sandia National Laboratories, Albuquerque, NM (2006) 7. Golub, G.H., van Loan, C.F.: Matrix Computations, 3rd edn. Johns Hopkins University Press, Baltimore (1996) 8. Gunzburger, M., Lee, H.K.: An optimization-based domain decomposition method for the Navier-Stokes equations. SIAM J. Numer. Anal. 37, 1455–1480 (2000) 9. Henson, V.E., Meier Yang, U.: Boomeramg: a parallel algebraic multigrid solver and preconditioner. Appl. Numer. Math. 41(1), 155–177 (2002) 10. Hughes, T.J.R., Brooks, A.: A theoretical framework for Petrov-Galerkin methods with discontinuous weighting functions: Application to the streamline-upwind procedure. In: Rao, R.H.G., et al. (eds.) Finite Elements in Fluids, vol. 4, pp. 47–65. J. Wiley & Sons, New York (1982) 11. Lions, J.L.: Virtual and effective control for distributed systems and decomposition of everything. Journal d’Analyse Math´ematique 80, 257–297 (2000) 12. Nocedal, J., Wright, S.J.: Numerical Optimization, 1st edn. Springer, Heidelberg (1999) 13. Notay, Y.: Aggregation-based algebraic multilevel preconditioning. SIAM J. Matrix Anal. Appl. 27(4), 998–1018 (2006) 14. Sala, M., Tuminaro, R.S.: A new Petrov–Galerkin smoothed aggregation preconditioner for nonsymmetric linear systems. SIAM J. Sci. Comp. 31(1), 143–166 (2008) 15. Strang, G.: On the construction and comparison of difference schemes. SIAM J. Numer. Anal. 5, 506–517 (1968) 16. St¨ uben, K.: A review of algebraic multigrid. J. Comp. Appl. Math. 128(1-2), 281– 309 (2001) 17. Toselli, A., Widlund, O.: Domain Decomposition Methods - Algorithms and Theory. Springer, New York (2005)
Least-Squares Spectral Element Method on a Staggered Grid Marc Gerritsma, Mick Bouman, and Artur Palha TU Delft, Delft, The Netherlands
[email protected]
Abstract. This paper describes a mimetic spectral element formulation for the Poisson equation on quadrilateral elements. Two dual grids are employed to represent the two first order equations. The discrete Hodge operator, which connects variables on these two grids, is the derived Hodge operator obtained from the wedge product and the inner product. The gradient operator is not discretized directly, but derived from the symmetry relation between gradient and divergence on dual meshes, as proposed by Hyman et al., [5], thus ensuring a symmetric discrete Laplace operator. The resulting scheme is a staggered spectral element scheme, similar to the staggering proposed by Kopriva and Kolias, [6]. Different integration schemes are used for the various terms involved. This scheme is equivalent to a least-squares formulation which minimizes the difference between the dual velocity representations. This generates the discrete Hodge-⋆ operator. The discretization error of this scheme equals the interpolation error.
1 Mimetic Formulation of the Poisson Equation
The Poisson equation is given by

Δφ = f ,   (1)

supplemented with appropriate boundary conditions (Dirichlet, Neumann, etc.). This equation can be written in vector form as a first order system

u = grad φ ,   div u = f .   (2)

In terms of differential forms this equation is given as

u^{(1)} = dφ^{(0)} ,   dq^{(n−1)} = f^{(n)} ,   q^{(n−1)} = ⋆u^{(1)} ,   (3)
where u^{(1)} denotes a 1-form, φ^{(0)} is a 0-form and f^{(n)} is a prescribed n-form, n = dim(Ω). The Hodge ⋆-operator associates the 1-form with a corresponding (n − 1)-form. Mimetic discretizations are based on the strong analogy between differential geometry and algebraic topology. The global, metric-free description can be rephrased without error in terms of cochains, while the local description requires
a differential field reconstruction. The two operators which switch between these two descriptions are the reduction operator, R, and the interpolation operator, I. For a thorough discussion of the interplay between differential forms and cochains the reader is referred to [1,3,7,8]. In this paper, we work on two dual meshes, the Gauss-Lobatto-Legendre (GLL) mesh and the extended Gauss (EG) mesh. All variables, functions and points on the EG-mesh are referred to with a twiddle. So h̃_i(ξ_q) is a basis function on the EG-mesh evaluated at a point on the GLL-mesh, while h_i(ξ̃_q) is a basis function defined on the GLL-mesh evaluated at a point on the EG-mesh.
2 Dual Spectral Element Meshes
The two equations in (3) can both be represented exactly in discrete form. However, the u in the first equation, u^{(1)} = dφ^{(0)}, is internally oriented along contours (or line segments), whereas the q^{(n−1)} in the second equation is externally oriented with respect to (n − 1)-dimensional hyper-surfaces. This duality between k-forms acting internally on k-dimensional geometric objects and externally oriented (n − k)-forms acting on (n − k)-dimensional geometric objects can be represented by two dual meshes, [7,8]. For the first mesh we choose a spectral element mesh with Gauss-Lobatto-Legendre (GLL) points. Let ξ_i, i = 0, . . . , N, denote the zeros of (1 − ξ²) L′_N(ξ), where L′_N(ξ) is the derivative of the Legendre polynomial of degree N. Then the GLL-mesh is given by (ξ_i, ξ_j), i, j = 0, . . . , N, in the case n = 2. The GLL-mesh is depicted in Fig. 1 with the black squares.
Fig. 1. Dual spectral element grids, GLL grid (solid lines) and the EG grid (dashed lines)
The mesh, which is dual to the GLL-mesh, needs to associate p-cells with (n − p)-cells. Geometrically, this means that given a cell-complex (the GLL-mesh in the spectral element) as shown in Fig. 1 by the solid lines, we need to set up a corresponding dual element in which p-cells on the primal element correspond to (n − p)-cells on the dual element. In this paper we use the Gauss points supplemented with nodes on the boundary as dual nodes, see Fig. 1, which form the intersection of the dashed lines in Fig. 1. Gauss points:

L_N(ξ̃_i) = 0 ,   i = 1, . . . , N .   (4)
So we have (ξ̃_i, η̃_j) supplemented with the boundary nodes

(ξ̃_0, η̃_j) = (−1, η̃_j) ,   (ξ̃_{N+1}, η̃_j) = (1, η̃_j) ,   j = 1, . . . , N ,   (5)

and

(ξ̃_i, η̃_0) = (ξ̃_i, −1) ,   (ξ̃_i, η̃_{N+1}) = (ξ̃_i, 1) ,   i = 1, . . . , N .   (6)
This topological positioning of the dual points follows the location of the scalar variables in the support operator method by Hyman et al., [5]. Representing different quantities on different meshes has also been investigated by Kopriva and Kolias, [6]. The extended Gauss mesh will be referred to as the EG-mesh.
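As an illustration of the two node families, the following Python sketch (our own addition, not part of the paper) computes the GLL points as the zeros of (1 − ξ²)L′_N(ξ) and the extended Gauss points of (4)–(6) with NumPy:

```python
import numpy as np
from numpy.polynomial.legendre import Legendre, leggauss

def gll_points(N):
    """Gauss-Lobatto-Legendre points: +-1 together with the zeros of L_N'(xi)."""
    interior = Legendre.basis(N).deriv().roots()
    return np.concatenate(([-1.0], np.sort(interior), [1.0]))

def extended_gauss_points(N):
    """Gauss points (zeros of L_N) supplemented with the boundary nodes -1 and 1."""
    gauss, _ = leggauss(N)
    return np.concatenate(([-1.0], np.sort(gauss), [1.0]))

N = 4
print(gll_points(N))             # N+1 GLL points
print(extended_gauss_points(N))  # N+2 extended Gauss points
```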
3 The Reduction Operators R and Interpolation Operators I

3.1 The Conservation Law dq^{(n−1)} = f^{(n)}
On the GLL-grid, the black grid in Fig. 1, we discretize the conservation law dq^{(1)} = f^{(2)}. Let us associate with each line segment, which connects two GLL-points, a normal velocity integrated over that line segment, see Fig. 2:

q^x_{i,j} = ∫_{η_{j−1}}^{η_j} q^{(1)}(ξ_i, s) ds   and   q^y_{i,j} = ∫_{ξ_{i−1}}^{ξ_i} q^{(1)}(s, η_j) ds .   (7)
The operation of reducing the flux field to integrated values over faces (for n = 2 line segments) is the reduction operator R. For the reconstruction operator I we need an interpolant which interpolates values defined in the GLL-nodes, given by

h_i(ξ) = ( (1 − ξ²) L′_N(ξ) ) / ( N(N + 1) L_N(ξ_i)(ξ_i − ξ) ) ,   i = 0, . . . , N ,   (8)

and an interpolant which interpolates integral quantities defined on edges, given by

e_i(ξ) = − ∑_{k=0}^{i−1} dh_k(ξ) ,   (9)
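A small Python sketch (ours, for illustration; the helper names are hypothetical) that evaluates the nodal basis h_i of (8) and the edge basis e_i of (9), the latter through the derivatives of the nodal basis:

```python
import numpy as np
from numpy.polynomial.legendre import Legendre

def gll_basis(N, nodes, x):
    """h_i(x) of (8): Lagrange basis through the N+1 GLL nodes, evaluated at the array x."""
    LN, dLN = Legendre.basis(N), Legendre.basis(N).deriv()
    h = np.empty((N + 1, len(x)))
    for i, xi in enumerate(nodes):
        with np.errstate(divide="ignore", invalid="ignore"):
            h[i] = (1.0 - x**2) * dLN(x) / (N * (N + 1) * LN(xi) * (xi - x))
        h[i, np.isclose(x, xi)] = 1.0   # remove the removable singularity at x = xi
    return h

def edge_basis(N, nodes, x, eps=1e-7):
    """e_i(x) of (9): e_i = -sum_{k<i} h_k'(x); derivatives approximated by central differences."""
    dh = (gll_basis(N, nodes, x + eps) - gll_basis(N, nodes, x - eps)) / (2.0 * eps)
    return -np.cumsum(dh, axis=0)[:-1]   # rows e_1, ..., e_N

# nodes would be the GLL points, e.g. gll_points(N) from the sketch above.
```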
Fig. 2. Location of the externally oriented flux components on the GLL-mesh as defined by (7)
then the interpolation of these edge values (7) is given by the 1-form

IRq^{(1)}(ξ, η) = − ∑_{i=1}^{N} ∑_{j=0}^{N} q^y_{i,j} e_i(ξ) h_j(η) + ∑_{i=0}^{N} ∑_{j=1}^{N} q^x_{i,j} h_i(ξ) e_j(η) .   (10)
Note that the basis functions e_i have the property that

∫_{ξ_{k−1}}^{ξ_k} e_i(ξ) dξ = 1 if i = k ,   0 if i ≠ k ,   (11)
which amounts to RI = Id, so this interpolation is the left inverse of the reduction operator R. Since I is only an approximate right inverse of R, (10) is an approximation to the true solution q^{(1)}. If we take the exterior derivative of (10) we obtain

dIRq^{(1)}(ξ, η) = − ∑_{i=1}^{N} ∑_{j=0}^{N} q^y_{i,j} e_i(ξ) dh_j(η) + ∑_{i=0}^{N} ∑_{j=1}^{N} q^x_{i,j} dh_i(ξ) e_j(η)
               = ∑_{i=1}^{N} ∑_{j=1}^{N} ( q^x_{i,j} − q^x_{i−1,j} + q^y_{i,j} − q^y_{i,j−1} ) e_i(ξ) e_j(η)   (12)
               = IδRq^{(1)} ,

where δ is the coboundary operator acting on (n − 1)-cochains. This shows that the interpolation operator satisfies the commutation relation dI = Iδ, the second commuting diagram property, CDP2, by Bochev and Hyman, [3].
The reduction operator applied to the 2-form f^{(2)} is defined as the integration of f^{(2)} over the volumes [ξ_{i−1}, ξ_i] × [η_{j−1}, η_j],

f_{i,j} = Rf^{(2)} := ∫_{ξ_{i−1}}^{ξ_i} ∫_{η_{j−1}}^{η_j} f^{(2)} ,   i, j = 1, . . . , N .   (13)
The interpolation of the 2-cochain f_{i,j} is defined as

If_{i,j} = ∑_{i=1}^{N} ∑_{j=1}^{N} f_{i,j} e_i(x) e_j(y) .   (14)
Using (12) and (14) we can write dq^{(1)} = f^{(2)} as

∑_{i=1}^{N} ∑_{j=1}^{N} ( q^x_{i,j} − q^x_{i−1,j} + q^y_{i,j} − q^y_{i,j−1} − f_{i,j} ) e_i(x) e_j(y) = 0 .   (15)
This equation can be interpreted as a spectral interpolation of a finite volume method. An alternative way of writing this equation is in terms of the incidence matrix, F^{2,1}, which is a matrix representation of the coboundary operator acting on 1-cochains:

∑_{i=1}^{N} ∑_{j=1}^{N} [ F^{2,1} (q^x; q^y)^T − f ]_{i,j} e_i(x) e_j(y) = 0 .   (16)
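To make the structure of F^{2,1} concrete, the following Python sketch (our own illustration; the degree-of-freedom ordering is an assumed convention, not fixed by the paper) assembles the incidence matrix that maps the 2N(N+1) edge fluxes q^x_{i,j}, q^y_{i,j} of (7) to the N² cell values q^x_{i,j} − q^x_{i−1,j} + q^y_{i,j} − q^y_{i,j−1} appearing in (15):

```python
import numpy as np

def incidence_F21(N):
    """Incidence matrix F^{2,1} on an N x N grid of GLL cells.

    Assumed flux ordering: first q^x_{i,j}, i=0..N, j=1..N, then q^y_{i,j}, i=1..N, j=0..N.
    Cells are numbered row-wise, j=1..N, i=1..N.
    """
    qx = lambda i, j: i * N + (j - 1)                        # vertical edges
    qy = lambda i, j: (N + 1) * N + (i - 1) * (N + 1) + j    # horizontal edges
    F = np.zeros((N * N, 2 * N * (N + 1)), dtype=int)
    for j in range(1, N + 1):
        for i in range(1, N + 1):
            cell = (j - 1) * N + (i - 1)
            F[cell, qx(i, j)] = 1
            F[cell, qx(i - 1, j)] = -1
            F[cell, qy(i, j)] = 1
            F[cell, qy(i, j - 1)] = -1
    return F

print(incidence_F21(2))   # each row carries exactly two +1 and two -1 entries
```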
3.2 Discretization of u^{(1)} = dφ^{(0)}
For the discretization of u^{(1)} = dφ^{(0)} we use the EG-mesh as depicted in Fig. 1. The reduction operator for the 0-form φ^{(0)} is defined by sampling the scalar function φ^{(0)} in the extended Gauss points, i.e.

φ̃_{i,j} = Rφ^{(0)} := φ^{(0)}(ξ̃_i, η̃_j) .   (17)
A nodal interpolation of the φ̃_{i,j} is given by

Iφ̃_{i,j} = ∑_{i=1}^{N} ∑_{j=1}^{N} φ̃_{i,j} h̃_i(ξ) h̃_j(η) + ∑_{i=1}^{N} ( φ̃_{i,0} h̃_0(η) + φ̃_{i,N+1} h̃_{N+1}(η) ) h̃_i(ξ) + ∑_{j=1}^{N} ( φ̃_{0,j} h̃_0(ξ) + φ̃_{N+1,j} h̃_{N+1}(ξ) ) h̃_j(η) ,   (18)

where

h̃_i(ξ) = ((−1)^N / 2) (1 − ξ) L_N(ξ)                              if i = 0 ,
h̃_i(ξ) = (1 − ξ²) L_N(ξ) / ( (1 − ξ̃_i²) L′_N(ξ̃_i)(ξ − ξ̃_i) )     if 1 ≤ i ≤ N ,
h̃_i(ξ) = (1/2) (1 + ξ) L_N(ξ)                                     if i = N + 1 .   (19)
The connection between the divergence equation on the GLL-mesh and the gradient equation on the EG-mesh is established by

∫_Ω ( dφ^{(0)}, q^{(1)} ) ω^n = ∫_Ω dφ^{(0)} ∧ ⋆q^{(1)}   (20)
                              = ∫_{∂Ω} φ^{(0)} ∧ ⋆q^{(1)} − ∫_Ω φ^{(0)} ∧ d⋆q^{(1)} ,   (21)
where in (20) we use the definition of the Hodge ⋆-operator, see for instance [4], and in (21) we use integration by parts for the wedge product. Equations (20) and (21) are used to implicitly define ⋆dφ^{(0)}. By using (21) we ensure that the resulting Laplace operator will be symmetric, due to the similarity to the support operator method proposed by Hyman et al., [5]. Let us discretize the Hodge of the gradient, ⋆dφ, as Gφ = (G^x φ, G^y φ) with

Gφ = − ∑_{k=1}^{N} ∑_{l=0}^{N} (G^y φ)_{k,l} e_k(ξ) h_l(η) + ∑_{k=0}^{N} ∑_{l=1}^{N} (G^x φ)_{k,l} h_k(ξ) e_l(η) .   (22)
If we take the inner product between (10) and (22), and integrate over the domain, then we obtain an approximation of the term on the left hand side of (20). For this integration we use a mixture of Gauss and Gauss-Lobatto integration: for the first term we use Gauss integration in the ξ-direction and Gauss-Lobatto integration in the η-direction, while for the second term we use Gauss-Lobatto in the ξ-direction and Gauss integration in the η-direction:

(Gφ, q)_Ω = ∑_{p=0}^{N} ∑_{q=1}^{N} { ∑_{j=1}^{N} q^x_{p,j} e_j(η̃_q) } { ∑_{l=1}^{N} (G^x φ)_{p,l} e_l(η̃_q) } w^{GL}_p w^G_q   (23)
          + ∑_{p=1}^{N} ∑_{q=0}^{N} { ∑_{i=1}^{N} q^y_{i,q} e_i(ξ̃_p) } { ∑_{k=1}^{N} (G^y φ)_{k,q} e_k(ξ̃_p) } w^G_p w^{GL}_q .   (24)
For the boundary integral, the first term in (21), we insert the reconstructed polynomials:

∫_{∂Ω} Iφ̃_{k,l} ∧ ⋆Iq_{i,j} = ∑_{i=1}^{N} ∑_{p=1}^{N} ( q^y_{i,N} φ̃_{p,N+1} − q^y_{i,0} φ̃_{p,0} ) w^G_p + ∑_{j=1}^{N} ∑_{q=1}^{N} ( q^x_{N,j} φ̃_{N+1,q} − q^x_{0,j} φ̃_{0,q} ) w^G_q .   (25)
And finally, we need to discretize the second term in (21). Here we use Gauss integration in both directions
∫_Ω φ^{(0)} ∧ dq^{(1)} = ∑_{p=1}^{N} ∑_{q=1}^{N} φ_{p,q} ∑_{i=1}^{N} ∑_{j=1}^{N} [ F^{2,1} (q^x; q^y)^T ]_{i,j} e_i(ξ̃_p) e_j(η̃_q) w^G_p w^G_q .   (26)
After we have evaluated the integrals numerically, we can write the resulting system as

\overline{Gφ}^T A q̄ = φ̄^T F^{2,1} B q̄ ,   ∀ q̄ = (q^x; q^y)^T ,   (27)

where we have assembled the contribution of the two terms in (21) on the right hand side. Note also that A is symmetric. This relation is satisfied for all q̄ if

\overline{Gφ} = A^{−1} B^T (F^{2,1})^T φ̄ = A^{−1} B^T E^{1,0} φ̄ ,   (28)

where A^{−1} B^T plays the role of the discrete Hodge and E^{1,0} is the incidence matrix relating 0-cells to 1-cells on the EG mesh. Once we have this cochain interpolation of ⋆dφ^{(0)} on the GLL mesh, we can apply the coboundary operator in integral form as derived in (26) to get the discrete version of d⋆dφ^{(0)} = f^{(2)}:

(E^{1,0})^T B A^{−1} B^T E^{1,0} φ̄ = f̄ ,   (29)
where f̄ is a vector containing the 2-cochains f_{ij}.

Lemma 1. The same discretization is obtained in terms of a least-squares formulation, if we minimize the difference between Gφ and dφ as: Find Gφ ∈ Λ^{n−1}(Ω; L²) such that for φ ∈ Λ^0(Ω; L²)

⋆Gφ = arg min_{v ∈ Λ^{n−1}(Ω; L²)} (1/2) ‖v − dφ‖²_{L²(Ω)} .   (30)

Proof. A necessary condition for a minimizer is

(v, w) = (⋆dφ, w) = ∫_Ω dφ ∧ w = ∫_{∂Ω} φ ∧ w − ∫_Ω φ ∧ dw ,   ∀w ∈ Λ^{n−1}(Ω; L²) .

The trivial norm equivalence C₁‖v‖_{L²(Ω)} ≤ ‖⋆v‖_{L²(Ω)} ≤ C₂‖v‖_{L²(Ω)}, for C₁ = C₂ = 1, establishes well-posedness. Inserting the conforming discrete function space completes the proof.
This minimization is the same as the least-squares minimization described in [2], equations (4.5) and (4.6), and in [3], equation (7.10), with γ = 0 and g = 0, respectively. The lemma shows that the mixed mimetic method, as described in [1], Section 6.4.2, (161), equals the least-squares minimization as described in [2,3].
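Assuming the matrices A, B and E^{1,0} of (27)–(29) have been assembled, the final system is symmetric and can be solved directly. The sketch below (our own, with generic dense arrays standing in for the assembled matrices) illustrates this last step:

```python
import numpy as np

def solve_poisson_cochain(A, B, E, f_bar):
    """Solve (E^{1,0})^T B A^{-1} B^T E^{1,0} phi = f_bar, cf. (29).

    A : symmetric matrix from the volume term (23)-(24)
    B : matrix collecting the boundary term (25) and the term (26)
    E : incidence matrix E^{1,0} relating 0-cells to 1-cells on the EG mesh
    """
    # Discrete Hodge applied to the 0-cochain gradient, cf. (28):  G = A^{-1} B^T E
    G = np.linalg.solve(A, B.T @ E)
    K = E.T @ B @ G            # discrete, symmetric Laplace operator
    return np.linalg.solve(K, f_bar)
```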
Fig. 3. Convergence of the L2 -error vs the polynomial degree N for the cases m = 1 (blue dashed line) and m = 2 (blue solid line) and the interpolation error of the exact solution on the EG mesh for m = 1 (red dashed line) and m = 2 (red solid line)
4 Results
As a first test to assess the above described method the following test case is considered. We solve

Δφ = f(x, y) ,   (x, y) ∈ [−1, 1]² ,   (31)

with the exact solution given by φ(x, y) = sin(πx) sin(mπy). The exact boundary conditions for φ are prescribed along the boundary of the domain. In Fig. 3 the L²-error versus the polynomial degree is plotted for m = 1 and m = 2 in blue. The norm equivalence in Lemma 1 implies that the error of the least-squares approximation is equal to the interpolation error of the exact solution on the EG mesh. This interpolation error of the exact solution is indicated with the red lines in Fig. 3, which confirms that the discretization error equals the interpolation error. This would imply that the numerical solution is nodally exact.
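As a side note, the right hand side producing this exact solution follows from differentiating φ twice. The short SymPy check below (our own addition) verifies that f = Δφ = −π²(1 + m²) sin(πx) sin(mπy):

```python
import sympy as sp

x, y, m = sp.symbols("x y m")
phi = sp.sin(sp.pi * x) * sp.sin(m * sp.pi * y)

# Right hand side of (31) obtained from the exact solution
f = sp.simplify(sp.diff(phi, x, 2) + sp.diff(phi, y, 2))
print(f)   # -pi**2*(m**2 + 1)*sin(pi*x)*sin(pi*m*y)
assert sp.simplify(f + sp.pi**2 * (m**2 + 1) * phi) == 0
```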
References 1. Bochev, P.B.: A discourse on variational and geometric aspects of stability of discretizations. In: Deconinck, H. (ed.) VKI LS 2003-2005. 33rd Computational Fluid Dynamics Lecture Series, vol. 72, Von Karman Institute for Fluid Dynamics, Chaussee de Waterloo, Rhode Saint Genese (2005) (B-1640) ISSN0377-8312 2. Bochev, P.B., Gunzburger, M.: On least-squares finite element methods for the Poisson equation and their connection to the Dirichlet and Kelvin Principles. SIAM J. Numer. Anal. 43(1), 340–362 (2005) 3. Bochev, P.B., Hyman, J.M.: Principles of mimetic discretizations of differential equations. In: Arnold, D., Bochev, P., Lehoucq, R., Nicolaides, R., Shashkov, M. (eds.). IMA, vol. 142. Springer, Heidelberg (2006)
4. Flanders, H.: Differential forms with applications to the physical sciences. Academic Press, Inc., New York (1963) 5. Hyman, J., Shaskov, M., Steinberg, S.: The numerical solution of diffusion problems in strongly heterogeneous non-isotropic materials. Journal of Computational Physics 132, 130–148 (1997) 6. Kopriva, D.A., Kolias, J.H.: A Conservative Staggered-Grid Chebyshev Multidomain Method for Compressible Flows. Journal of Computational Physics 125, 244– 261 (1996) 7. Mattiussi, C.: The Finite Volume, Finite Difference, and Finite Elements Methods as Numerical Methods for Physical Field Problems. Advances in Imaging and Electron Physics 113, 1–146 (2000) 8. Tonti, E.: On the mathematical structure of a large class of physical theories. Accademia Nazionale dei Lincei, estratto dai Rendiconti della Classe di Scienze fisiche, matematiche e naturali, Serie VIII, Vol. LII, fasc. 1, Gennaio (1972)
Mimetic Least-Squares Spectral/hp Finite Element Method for the Poisson Equation Artur Palha and Marc Gerritsma Delft University of Technology, Delft, The Netherlands
[email protected]
Abstract. Mimetic approaches to the solution of partial differential equations (PDE’s) produce numerical schemes which are compatible with the structural properties – conservation of certain quantities and symmetries, for example – of the systems being modelled. Least Squares (LS) schemes offer many desirable properties, most notably the fact that they lead to symmetric positive definite algebraic systems, which represent an advantage in terms of computational efficiency of the scheme. Nevertheless, LS methods are known to lack proper conservation properties which means that a mimetic formulation of LS, which guarantees the conservation properties, is of great importance. In the present work, the LS approach appears in order to minimise the error between the dual variables, implementing weakly the material laws, obtaining an optimal approximation for both variables. The application to a 2D Poisson problem and a comparison will be made with a standard LS finite element scheme.
1 Introduction
The role of physics lies, essentially, in the translation of a physical phenomenon (heat transport, fluid flow, electromagnetism, etc.) into an adequate mathematical statement that yields correct predictions of the phenomenon, that is, models it. This translation is usually accomplished by setting up a set of partial differential or integro-differential equations that relate the different physical quantities. In order to do this, these physical quantities have to be represented by scalar fields (e.g., temperature) or vector fields (e.g., velocity). The main drawback of the classical approach which uses scalar and vector fields to represent physical properties is the fact that they obscure the topological and geometrical content of the model. Differential geometry provides a more adequate framework for the analysis of these problems. A specific feature of differential forms is that they possess algebraic as well as geometrical, topological, differential, integral, and many other properties. The generality of this framework allows one, for example, to express the classical ∇, ∇× and ∇· differential operators as well as the gradient theorem, Kelvin-Stokes theorem and Gauss theorem, in a concise manner in terms of differential forms and an operator, called exterior derivative, acting on these objects.
Apparently the equations that describe different branches of science look completely distinct. However, the concept of “conservation law” ranges from the conservation of specific quantities (energy, mass, etc.), to the conservation of the balance between a change of physical quantities (energy, linear momentum, angular momentum and mass) and relevant external forcing (such conservation laws may be called the balance conservation laws). What do they have in common? Tonti [15], aided by the generality provided by differential geometry, pointed out that there exists a fundamental similarity between these physical theories that can be traced back to common geometric principles from which they stem. For this, Tonti developed the so-called Tonti diagrams. These diagrams are graphical representations of these field equations. With them it becomes clear that physical theories are made up of equations that relate global integral quantities, associated with appropriate geometrical objects (oriented p-dimensional space-time surfaces), and equations that relate local physical quantities with differently oriented geometrical objects. The first, known as conservation or equilibrium laws, have an inherently topological nature in the sense that they are independent of metric. The latter, also known as material constitutive relations, are intrinsically connected to the metric. For more comprehensive information the reader is referred to the works by Tonti [15], Bossavit [4] and Mattiussi [13]. Numerical schemes are an essential tool for solving PDE’s. These schemes, being a model reduction, inherently lead to loss of information of the system being modeled, namely on its structure, e.g. conservation of certain quantities – mass, momentum, energy, etc. – and symmetries, which are embedded into the PDE’s as a result of the geometrical properties of the differential operators. It is known today that the well-posedness of many PDE problems reflects geometrical, algebraic topological and homological structures underlying the problem. It is, therefore, important for the numerical scheme to be compatible with these structures (the physics), i.e., to mimic them. The goal of mimetic methods is to satisfy exactly, or as well as possible, the structural properties of the continuous model. It is becoming clear that in doing so, one obtains stable schemes. Additionally, a clear separation between the processes of discretization and approximation arises; the latter only takes place in the constitutive relations. Least Squares (LS) schemes offer many desirable properties, most notably the fact that they lead to symmetric positive definite algebraic systems, which represent an advantage in terms of computational efficiency of the scheme. Nevertheless, LS methods fail to satisfy the conservation laws, [14]. A mimetic LS formulation will satisfy the conservation law exactly. In the current paper, the LS approach is used to minimise the error between the dual variables, obtaining an optimal approximation. This LS approximation is known in the literature as the implementation of Weak Material Laws, as proposed by Bochev and Hyman in [1].
2 Mimetic Approaches for the 2D Poisson Equation
For mimetic schemes we introduce basic concepts from differential geometry. The construction of the Tonti diagram for the 2D Poisson equation will be presented
and also an analysis of possible routes for mimetic discretizations, in particular, the one taken in this work.

2.1 Differential Geometry, a Refresher
For a brief introduction to differential geometry, see for instance [5,8]; one needs essentially to introduce one object (the k-differential form, or k-form) and four operators on these objects (the wedge product ∧, the exterior derivative d, the inner product, and the Hodge-⋆ operator). A k-differential form, or k-form, is a k-linear and antisymmetric mapping

ω^k : TM_x × . . . × TM_x → ℝ   (k factors),   (1)

where TM_x is the tangent space of a k-differentiable manifold (smooth k-dimensional surface) at x. The wedge product, ∧, in an n-dimensional space, is defined as a mapping

∧ : Λ^k × Λ^l → Λ^{k+l} ,   k + l < n,   (2)
with the property that

ω^k ∧ α^l = (−1)^{kl} α^l ∧ ω^k ,   (3)

where Λ^k is the space of k-forms. This means that, in ℝ^n, one can write any k-form as

ω^k = ∑_{i_1 < ... < i_k} w_{i_1,...,i_k}(x) dx^{i_1} ∧ . . . ∧ dx^{i_k} ,   (4)

where dx^i are elemental basis 1-forms. In this way one can define the set of functions {w_{i_1,...,i_k}(x)} as proxies of k-forms, in the sense that there is a one-to-one correspondence between these sets and k-forms. Moreover, one can verify that these sets, in ℝ² for example, have a correspondence to well known entities: scalar fields as proxies for 0-forms and 2-forms, vector fields as proxies for 1-forms. The inner product in ℝ^n and this correspondence between k-forms and their proxies provides the mapping (·, ·) : Λ^k × Λ^k → ℝ:

(α^k, β^k) = ∑_{i_1 < ... < i_k} a_{i_1,...,i_k}(x) b_{i_1,...,i_k}(x) .   (5)
The L² inner product for k-forms, in n-dimensional space, can also be defined by

(α^k, β^k)_Ω = ∫_Ω (α^k, β^k) ω^n ,   (6)

where the n-form ω^n = dx^1 ∧ · · · ∧ dx^n in an n-dimensional space. The exterior derivative d, in an n-dimensional space, is a mapping

d : Λ^k → Λ^{k+1} ,   k = 0, 1, . . . , n − 1,   (7)
which satisfies

d(ω^k ∧ α^l) = dω^k ∧ α^l + (−1)^k ω^k ∧ dα^l ,   (8)

and

d d ω^k = 0 ,   ∀ω^k ∈ Λ^k .   (9)
The Hodge-⋆ operator, in an n-dimensional space, is a mapping

⋆ : Λ^k → Λ^{n−k} ,   k ≤ n,   (10)

that satisfies

α ∧ ⋆β = (α, β) ω^n ,   ∀α, β ∈ Λ^k .   (11)
With the above mentioned definitions it is straighforward to verify that in 2D, the usual, ∇f , ∇×v and ∇·v, are easily expressed as df 0 , dv 1 , dv 1 , respectively. Additionally, for a simply connected domain Ω the spaces of differential forms together with the exterior derivative d constitute an exact sequence, called the de Rham complex, which, in 2D is1 : / Λ0
R
d
d
/ Λ2
Λ˜2 o
2.2
/ Λ1
d
Λ˜1 o
(12)
d
Λ˜0 o
R
Tonti Diagram for the 2D Poisson Equation and Its Discretizations
In order to construct the Tonti diagram for the 2D Poisson equation, −∇2 φ = f , one needs first to restate it as a system of first order PDE’s2 : ⎧
⎨ −dφ0 = u1 −∇φ = u ⇔ (13) d˜ v 1 = f˜2 , ∇·u=f ⎩ v˜1 = u1 where one clearly sees that the u1 that appears in the first equation is not equal to the v˜1 (a twisted 1-form) that appears in the second equation, but rather 1
By using the and -operators, one can convert this differential complex to a complex in terms of vector operators. In 2D the associated De Rham complex in vector form is given by R
/ H1
∇
L2 o 2
∇·
∇×
/ H 1 (curl)
/ L2
H 1 (div) o
−∇⊥
H1 o
R
This special form can be found in [3,6]. Here f 2 denotes a 2-form. Not to be confused with the square of f .
666
A. Palha and M. Gerritsma
it is related to it through a material constitutive relation. This is not explicit when using standard vector calculus and, as it will be seen later (sections 3 and 4), it plays an important role in the accurate numerical solution of the Poisson equation. The Tonti diagram for this system of equations is: f˜O 2
φ0
d
d
u1
(14)
/ v˜1
In order to numerically solve this problem, different discretization approaches may be implemented. Three main approaches may be followed: dual grid methods, elimination methods and primal-dual methods. The first approach, albeit not always possible since it depends on the existence of a pair of topologically dual grids, is the one that enables a one-to-one correspondence between dual variables which results in a simple discrete Hodge- operator and enables one to satisfy exactly all equations globally. The second approach sacrifices one of the equillibrium equations while satisfying exactly the other equilibrium equation and the constitutive equation locally. The third approach satisfies exactly the equilibrium equations between unkown physical quantities but relaxes the constitutive equation, being enforced weakly. This third approach is the one followed in this work. Hence, discretization of the problem in appropriate function spaces leads to the following Tonti diagram to be solved (dotted lines represent weakly imposed relations): (15) φ0h f˜h2 O d
u1h
3
d
/ v˜1 h
Weak Material Laws: The Role of Least-Squares
As proposed by Bochev and Hyman [1], a way of defining the Hodge- operator and hence the constitutive equation in a weak sense is by using a least-squares minimization process that penalizes the discrepancy between the dual physical quantities. The exact equillibrium equation appears as a linear constraint that must be satisfied by the minimizers of the functional. Hence the problem is reduced to a constrained minimization problem: Seek (φ0h , u1h , v˜h1 ) in Λ0h × Λ1h × Λ˜1h such that I(φ0h , u1h , v˜h1 ) = 12 v˜h1 + u1h 20 + d˜ vh1 − f˜2 20
(16)
subject to: − dφ0h = u1h
(17)
If the subspaces Λ0h , Λ1h and Λ2h , and the twisted ones, are chosen in such a way that they constitute a de Rham complex: R
/ Λ0 h
d
d
h
Λ˜2h o
/ Λ1
d
Λ˜1h o
/ Λ2 h
(18)
d
Λ˜0h o
R
then (17) is satisfied exactly and one can substitute u1h by −dφ0h without any approximation involved. In this way the constrained minimization problem is reduced to a simple minimization problem only on two variables, φ0h and v˜h1 : Seek (φ0h , v˜h1 ) in Λ0h × Λ˜1h such that I(φ0h , v˜h1 ) = 12 v˜h1 − dφ0 20 + d˜ vh1 − f˜2 20
(19)
In this way, the Hodge- operator is implemented as L2 projections between the different dual spaces.
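To give a flavour of how such a weak material law looks once everything is assembled, the sketch below (entirely our own and generic: the mass matrices Mv, Mf, the discrete gradient Dphi and the incidence matrix Dv are assumed inputs, not quantities defined in the paper) minimizes a quadratic functional of the form of (19) through its weighted normal equations:

```python
import numpy as np

def mimetic_least_squares(Mv, Mf, Dphi, Dv, f, n_phi):
    """Minimize 1/2 ||v - Dphi@phi||^2_Mv + 1/2 ||Dv@v - f||^2_Mf over (phi, v)."""
    n_v = Dv.shape[1]
    # Residual operator R [phi; v] - b with b = [0; f]
    R = np.block([[-Dphi, np.eye(n_v)],
                  [np.zeros((Dv.shape[0], n_phi)), Dv]])
    W = np.block([[Mv, np.zeros((n_v, Dv.shape[0]))],
                  [np.zeros((Dv.shape[0], n_v)), Mf]])
    b = np.concatenate([np.zeros(n_v), f])
    # Weighted normal equations: symmetric positive (semi)definite system
    K, rhs = R.T @ W @ R, R.T @ W @ b
    sol = np.linalg.lstsq(K, rhs, rcond=None)[0]
    return sol[:n_phi], sol[n_phi:]
```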
4 Application to the 2D Poisson Equation
To apply the above mentioned scheme to the 2D Poisson equation first the appropriate subspaces Λ0h , Λ1h and Λ2h , and the associated twisted form spaces, must be specified. Since one will use a spectral/hp LS method, these spaces are defined as:
Λ1h,p
Λ0h,p = span hpi (x)hpj (y) , i = 0, . . . , p j = 0, . . . , p ˜ p−1 (x)hp (y) ⊗ hp (x)h ˜ p−1 (y) , i, m = 1, . . . , p j, n = 0, . . . , p = span h n m i j ˜ p−1 (x)h ˜ p−1 (y) , i = 1, . . . , p j = 1, . . . , p Λ2h,p = span h i j
where hpi (ξ) is the i-th Lagrange interpolant of order p over Gauss-Lobatto˜ p (ξ) is the i-th Lagrange interpolant of order p over the Legendre points and h i Gauss-Legendre points. We see that with this choice the degrees of freedom associated with these bases of the discrete subspaces are located where they should be: at nodal points (for 0-forms), at edges (for 1-forms) and at volumes (for 2-forms). In this way, the resulting reconstructed physical quantities will have different continuity properties: continuity across elements (0-forms), tangential continuity along edges (1-forms) and no continuity across elements (2-forms). It is possible to show that these subspaces constitute a De Rham complex, as in (18) and hence they are suitable to be used to represent the unknown degrees of freedom. Kopriva, [12], employed a similar use of staggered spectral element grids.
Fig. 1. Numerical solution of φ for mimetic LS approach (left) and standard LS (right), with p = 4 and 4 elements
Fig. 2. Numerical solution of uy for mimetic LS approach (left) and standard LS (right), with p = 4 and 4 elements
In order to assess the above described method, it will be applied to the solution of the 2D Poisson equation with a particular right hand side and Dirichlet boundary conditions in order to obtain the following analytical solution: φ(x, y) = cos(x2 ) + y 2 ,
x ∈ [−1, 1] × [−1, 1]
(20)
Results using mimetic least-squares method and a standard least-squares approach are compared. In Fig. 1 the numerical solutions for φ(x, y) for the mimetic LS method and for the standard LS method are presented. In Fig. 2 the numerical solutions for uy (x, y) for the mimetic LS approach and for the standard LS method are presented. One clearly observes the oscillations in the solution of the standard LS solution. These wiggles are responsible for the increase in the L2 error and also on the increased value of the curl of u(x, y). In Fig. 3 the L2 errors for the
Fig. 3. L2 convergence for mimetic and standard LS (left) and ∇ × u for mimetic LS and standard LS (right)
mimetic and standard LS approaches are presented (for φ, u and primal–dual velocity differences).
5 Concluding Remarks
As can be seen in Fig. 3, the mimetic LS method results in smaller errors, especially for the u variable. Additionally, with regard to ∇ × u one sees a great improvement in the mimetic LS method, where the errors are approximately 2 orders of magnitude smaller compared to conventional LS. The primal–dual differences, v˜h1 + u1h 0 , appears to be a good estimator for the error of the numerical solution, as can be seen in Fig. 3.
Acknowledgments The authors would like to acknowledge Foundation for Science and Technology (Portugal) for the funding given by the PhD grant SFRH/BD/36093/2007.
References 1. Bochev, P., Hyman, J.: Principles of mimetic discretizations of differential operators. IMA 142, 89–119 (2006) 2. Bochev, P.: Discourse on variational and geometric aspects of stability of discretizations. 33rd Computational Fluid Dynamics Lecture Series, VKI LS 20032005 (2003) 3. Bochev, P., Robinson, A.C.: Matching algorithms with physics: exact sequences of finite element spaces. In: Estep, D., Tavener, S. (eds.) Collected lectures on preservation of stability under discretization. SIAM, Philadelphia 4. Bossavit, A.: On the geometry of electromagnetism. J. Japan Soc. Appl. Electromagn. & Mech. 6 (1998)
5. Burke, W.L.: Applied differential geometry. Cambridge University Press, Cambridge (1985) 6. Demkowicz, L.: Computing with hp-adaptive finite elements, vol. 1. Chapman and Hall/CRC (2007) 7. Desbrun, M., Kanso, E., Tong, Y.: Discrete differential forms for computational modeling. In: SIGGRAPH 2005: ACM SIGGRAPH 2005 Courses (2005) 8. Flanders, H.: Differential forms with applications to the physical sciences. Academic Press, Inc., New York (1963) 9. Proot, M., Gerritsma, M.: A least-squares spectral element formulation for the Stokes problem. J. Sci. Computing 17, 285–296 (2002) 10. Hiptmair, R.: Discrete Hodge operators. Numer. Math. 90, 265–289 (2001) 11. Jiang, B.: The least squares finite element method: theory and applications in computational fluid dynamics and electromagnetics. Springer, Heidelberg (1998) 12. Kopriva, D.A., Kolias, J.H.: A Conservative Staggered-Grid Chebyshev Multidomain Method for Compressible Flows. Journal of Computational Physics 125, 244– 261 (1996) 13. Mattiussi, C.: An analysis of finite volume, finite element, and finite difference methods using some concepts from algebraic topology. J. Comp. Physics 133, 289– 309 (1997) 14. Proot, M.M.J., Gerritsma, M.I.: Mass- and momentum conservation of the leastsquares spectral element method for the Stokes problem. Journal of Scientific Computing 27(1-3), 389–401 (2007) 15. Tonti, E.: On the formal structure of physical theories. Consiglio Nazionale delle Ricerche, Milano (1975)
Adaptive Least Squares Finite Element Methods in Elasto-Plasticity Gerhard Starke Institut f¨ ur Angewandte Mathematik, Universit¨ at Hannover, 30167 Hannover, Germany
[email protected] http://www.ifam.uni-hannover.de/~ gcs/
Abstract. In computational mechanics applications, one is often interested not only in accurate approximations for the displacements but also for the stress tensor. Least squares finite element methods are perfectly suited for such problems since they approximate both process variables simultaneously in suitable finite element spaces. We consider an H 1 × H(div) least squares formulation for the incremental formulation of elasto-plasticity using a plastic flow rule of von Mises type. The nonlinear least squares functional constitutes an a posteriori error estimator on which an adaptive refinement strategy may be based. The variational formulation under plane strain and plane stress conditions is investigated in detail. Standard conforming elements are used for the displacement approximation while the stress is represented by Raviart-Thomas elements. The algebraic least squares problems arising from the finite element discretization are nonlinear and nonsmooth and may be solved by generalized Newton methods.
1
Introduction
A least-squares mixed finite element method for the incremental formulation of elasto-plastic deformation models is investigated in this contribution. It is based on a first-order system based on the stress tensor and the displacement field as independent process variables. The method was introduced and analyzed in [9] where a benchmark example under the plane strain assumption was used for computational testing, In this contribution we show that the plane stress case can be handled by our approach in much the same way and we discuss the solution of the arising nonlinear and non-smooth algebraic least-squares problems by a generalized Newton method. The monograph by Bochev and Gunzburger [5] gives a survey the many variants and application areas for finite element methods of least-squares type. These methods constitute an alternative to mixed finite element methods of saddle point structure whenever accurate approximations of the stress tensor is desired. Among its advantages is the greater flexibility in combining finite element spaces for the different process variables which are not restricted by an inf-sup condition. Moreover, if the least-squares functional is shown to be equivalent to a I. Lirkov, S. Margenov, and J. Wa´ sniewski (Eds.): LSSC 2009, LNCS 5910, pp. 671–678, 2010. c Springer-Verlag Berlin Heidelberg 2010
norm of the error, then its local evaluation provides an a posteriori error estimator. This may be used in an adaptive refinement technique, see [3] for a detailed study of such strategies in the context of least-squares formulations. The most appropriate combination for the elasto-plasticity models treated in this paper consists of Raviart-Thomas elements for the stresses coupled with conforming finite element spaces of the same polynomial degree for the displacement components. This is due to the fact that the same order of approximation is achieved for the individual variables. In particular, next-to-lowest order Raviart-Thomas spaces are combined with piecewise quadratic conforming finite elements in our computations. Such a least-squares finite element approach was extended to the Signorini contact problem in [1]. The numerical simulation of elasto-plastic deformation processes has been an intensive area of research for several decades. Two monographs which appeared at the end of the last century cover the state of the art from a more engineeringoriented perspective [8] and a more abstract mathematical view [7]. The solution of the nonlinear algebraic systems associated with finite element discretizations of elasto-plastic models was the subject of much recent research acitivities including [2,4,11]. The connection of generalized Newton-type methods with the radial return projection approaches common in the engineering community was investigated in [12].
2
Least-Squares Formulation of Incremental Elasto-Plasticity
The common models for elasto-plastic deformation processes may be written as a first-order system div σ = 0 ,
(1)
σ = C(ε(u) − p) ,
(2)
for the stress tensor σ : Ω → IR3×3 and the displacement field u : Ω → IR3 . In (2), div σ means row-wise application of the divergence operator, and ∇u contains the gradient vectors of the components of u in each row. Similarly to the model of linear elasticity, ε(u) =
1 (∇u + (∇u)T ) 2
(3)
denotes the linearized strain tensor, and Cε = 2με + λ (tr ε)I
(4)
represents a linear material law. The difference to the elastic case lies in the term p which stands for the plastic strains satisfying additional constraints. In particular, for the von Mises plasticity model, the deviatoric stress part dev(σ) = σ −
1 (tr σ) I 3
(5)
satisfies the constraint
2 K(α) (6) 3 with a hardening function K(α). The hardening parameter α : Ω → IR constitutes an additional process variable in the case of elasto-plasticity with hardening which is governed by an evolution equation as is the case for the plastic strains. Therefore, elasto-plasticity models become time-dependent with the need to employ an appropriate time-discretization scheme. The model for elasto-plastic deformation processes described above is taken from [8, Chap.2]. Discretization in time by an implicit Euler scheme leads to a first-order system for the increments σ inc and uinc in the representation σ = σ old + σ inc and u = uold + uinc , respectively. The system associated with one time-step in an incremental formulation of elasto-plasticity may be written as |dev(σ)| ≤
div(σ old + σ inc ) = 0 , σ
inc
− R(ε(u
inc
); σ
old
,α
old
) =0.
(7) (8)
The stress operator R(ε; σ old , αold ) in (8) depends, in general, nonlinearly and non-smoothly on ε as soon as plastic deformation occurs. For notational convenience the increments σ inc and uinc are simply denoted by σ and u (which had a different meaning in (2)) throughout the rest of this paper. For simplicity, we will also omit the dependence on σ old and αold in the stress operator and simply write R(ε) instead of R(ε; σ old , αold ). With the Sobolev spaces HΓN (div, Ω) = {s ∈ H(div, Ω) : s · n = 0 on ΓN } , HΓ1D (Ω) = {p ∈ H 1 (Ω) : p = 0 on ΓD } , the solution of the system (7), (8) is then sought in σ N + HΓN (div, Ω)3 for σ : Ω → IR3×3 and in HΓ1D (Ω)3 for u : Ω → IR3 . Here, σ N ∈ H(div, Ω)3 satisfies the boundary conditions σ N · n = g on ΓN where g denotes the increment of the boundary traction. For the case of von Mises plasticity with isotropic hardening, the stress response is given by 1 dev(σ old + C ε) γR (dev(σ old + C ε)) R(ε) = C ε − , (9) 2μ |dev(σ old + C ε)| where the return parameter γR (dev(σ old + C ε)) is implicitly defined as the solution of the nonlinear equation 2 2 γR old old γR = |dev(σ + C ε)| − K α + , (10) 3 3 2μ if |dev(σ old + C ε)| > 2/3K(αold ) and γR (dev(σ old + C ε)) = 0 otherwise. The hardening parameter is updated by 2 γR (dev(σ old + C ε)) old α=α + . (11) 3 2μ
The least-squares formulation, associated with (7) and (8), consists in minimizing the functional F (σ, u; σ old , αold ) = div(σ old + σ)2 + C −1/2 (σ − R(ε(u)))2
(12)
among all suitable (σ, u) ∈ H(div, Ω)3 × H 1 (Ω)3 where · abbreviates the L2 (Ω)d norm. More precisely, our aim is to find σ ∈ σ N + HΓN (div, Ω)3 and u ∈ HΓ1D (Ω)3 such that F (σ, u; σ old , αold ) ≤ F(σ N + τ , v; σ old , αold )
(13)
holds for all τ ∈ HΓN (div, Ω)3 and v ∈ HΓ1D (Ω)3 . Due to the fact that R(ε(u)) is not differentiable everywhere, the subdifferential ∂R(ε(u)) needs to be employed in the formulation of the variational problem. The minimum of (12) is given by the solution of the variational problem (div(σ old + σ), div τ ) + (C −1 (σ − R(ε(u)), τ ) = 0 0 ∈ (C −1 (σ − R(ε(u)), ∂R(ε(u))[ε(v)])
(14) (15)
for all τ ∈ HΓN (div, Ω)3 and v ∈ HΓ1D (Ω)3 (see e.g. [7, Section 4] for the use of the subdifferential in the context of plasticity models). If K(α) satisfies K(α) ≥ K0 > 0 , K (α) ≥ K1 > 0
(16)
for all α > 0, then it is known that the system (7), (8) possesses a unique solution (cf. [7, Section 8]). Obviously, this solution is also the unique minimizer of (12). Under the assumption (16), it can also be shown that the least-squares functional (12) is equivalent to the H(div, Ω) × H 1 (Ω) norm of the error in the following sense (see [9, Theorem 3.2]). Let (σ, u) ∈ (σ N +HΓN (div, Ω)3 )×HΓ1D (Ω)3 denote the solution of the system (7), (8). Then, there are positive constants β1 , β2 such that ¯ 2div,Ω + u − u ¯ u ¯ ; σold , αold ) ¯ 21,Ω ≤ F(σ, β1 σ − σ ¯ 2div,Ω + u − u ¯ 21,Ω ≤ β2 σ − σ ¯ u ¯ ) ∈ (σ N + HΓN (div, Ω)3 ) × HΓ1D (Ω)3 (e.g. in hold for any approximation (σ, finite element spaces). This norm equivalence implies that the solution of (14), (15) is unique.
3
Plane Stress Model and Finite Element Approximation
There are certain situations in which the simulation may actually be carried out in a two-dimensional domain. For thin plane-like structures where loads are only applied in tangential directions, all stress components perpendicular to the plane vanish leading to the plane stress model. The plane strain case where the
strains are restricted to two dimensions is treated in [9]. For standard finite element approaches, the plane stress model is often avoided due to additional complications associated with the third displacement component. Within the least-squares framework the treatment of the plane-stress situation is just as straightforward as the plane-strain case as we shall describe now. Assuming plane stress conditions means that ⎞ ⎛ σ11 σ12 0 σ = ⎝ σ21 σ22 0 ⎠ , (17) 0 0 0 where σ 1 = (σ11 , σ12 ) ∈ H(div, Ω) and σ 2 = (σ21 , σ22 ) ∈ H(div, Ω) for a twodimensional domain again denoted as Ω. For the strain tensor, this implies ⎛ ⎞ (∂2 u1 + ∂1 u2 )/2 0 ∂1 u1 ⎠. ∂2 u2 0 ε(u) = ⎝ (∂2 u1 + ∂1 u2 )/2 0 0 ∂3 u3 Since we are not interested in the displacement component u3 itself but only require the strain component ε33 (x1 , x2 , 0) for (x1 , x2 ) ∈ Ω to complete our model, we may write ⎛ ⎞ (∂2 u1 + ∂1 u2 )/2 0 ∂1 u1 ∂2 u2 0 ⎠ ε(u) = ⎝ (∂2 u1 + ∂1 u2 )/2 0 0 ε33 with ε33 (x1 , x2 , 0) ∈ L2 (Ω). For the two remaining displacement components we still have u1 , u2 ∈ HΓ1D (Ω). This implies that dev(Cε(u)) ⎛2 ⎞ 1 1 1 0 3 ∂1 u1 − 3 ∂2 u2 − 3 ε33 2 (∂2 u1 + ∂1 u2 ) 1 2 1 1 ⎠ 0 = 2μ ⎝ 2 (∂2 u1 + ∂1 u2 ) 3 ∂2 u2 − 3 ∂1 u1 − 3 ε33 2 1 1 0 0 3 ε33 − 3 ∂1 u1 − 3 ∂2 u2 and the least-squares functional (12) is to be minimized with respect to σ i ∈ 1 2 σN i + HΓN (div, Ω), ui ∈ HΓD (Ω), i = 1, 2 and ε33 ∈ L (Ω). The choice of appropriate finite element spaces Σ h , Uh and Sh for the approximation of (σ 1 , σ 2 ), (u1 , u2 ) and ε33 , respectively, is done with the aim to achieve a certain approximation order with respect to the norm in HΓN (div, Ω)2 ×HΓ1D (Ω)2 × L2 (Ω). Suitable for the stress approximation is a product space of Raviart-Thomas elements (of degree k ≥ 0) for σ 1 and σ 2 . The interpolation estimate for Raviart-Thomas elements (cf. [6, Prop. III.3.9]) yields div (σ i − Π h σ i ) + σ i − Π h σ i ≤ Chk+1 (|div σ i |k+1,Ω + |σ i |k+1,Ω ) (18) for i = 1, 2 with a suitable interpolation operator Π h . H 1 -conforming finite elements which consist of piecewise polynomials of degree k + 1 lead to ∇(u − Ψ h u) ≤ Chk+1 |u|k+2,Ω
(19)
for the interpolation error. Discontinuous piecewise polynomial interpolation of degree k for ε33 , separately on each element T ∈ Th , leads to ε33 − Qh ε33 ≤ Chk+1 |ε33 |k+1,Ω ,
(20)
where Qh may be chosen as the L2 (Ω) projection. For the least-squares finite element approach, the functional (12) is minimized with respect to the spaces Σ h × Vh × Sh . Combined with the equivalence of the least-squares functional shown in [9], (18), (19) and (20) imply the error estimate div (σ − σ h ) + C −1/2 (σ − σ h ) + C 1/2 ε(u − uh ) ≤ Chk+1 (|div σ 1 |k+1,Ω + |div σ 2 |k+1,Ω + |σ|k+1,Ω + |u|k+2,Ω ) for the least-squares finite element approximation. The regularity assumptions div σ i ∈ H k+1 (Ω), σ ∈ H k+1 (Ω)2 for i = 1, 2 and u ∈ H k+2 (Ω)2 will rarely be fulfilled in applications of practical relevance even for k = 0. More importantly, the equivalence of the least-squares functional (12) to the error norm implies that the functional itself may be used as an a posteriori error estimator. This allows the least-squares finite element method to be implemented in an adaptive fashion based on the local evaluation of the least-squares functional for a posteriori error estimation.
4
Generalized Gauss-Newton Method
For the solution of the nonlinear least-squares problem (13), a Gauss-Newton type iteration is the natural approach. This consists in minimizing a quadratic least-squares functional of the form F (k) (δσ, δu) = div(σ old + σ (k) + δσ)2 +C −1/2 (σ (k) + δσ − R(ε(u(k) )) − S (k) [ε(δu)]2 in each step and setting (σ (k+1) , u(k+1) ) = (σ (k) , u(k) ) + (δσ, δu). Since R(ε(u(k) )) is not necessarily differentiable, S (k) [ε(δu)] is chosen as an element of the subdifferential, in general. For |dev(σ old + C ε(u(k) ))| > 2/3K(αold ), differentiating (9) yields S (k) [ε(δu)] = Cε(δu) (k) (k) (ξ :ε(δ u))ξ 1 − 2μ γR (ξ(k) )[ε(δu)] + γR (ξ (k) ) ε(δ(k)u) − (k) |ξ | |ξ |3 where ξ (k) = dev(σ old + C ε(u(k) )). Differentiating (10) leads to γR (ξ (k) )[ε(δu)]
=
ξ (k) : ε(δu) |ξ(k) |
2 γR (ξ (k) ) 1 (k) old γ (ξ )[ε(δu)] K α + − 3μ R 3 2μ
which implies the explicit formula (ξ (k) )[ε(δu)] = γR
1 1+ K 3μ
αold +
2 γR (ξ (k) ) 3 2μ
−1
ξ(k) : ε(δu) |ξ (k) |
(k) old (k) to be usedoldfor the computation of S [ε(δu)]. If |dev(σ + C ε(u ))| < 2/3K(α ) holds, then we are inside the elastic domain and
S (k) [ε(δu)] = Cε(δu) . We may simply use (21) also in the case |dev(σ old + C ε(u(k) ))| = since this certainly constitutes an element of the subdifferential.
5
(21) 2/3K(αold )
A Computational Example
We present numerical results for a benchmark problem of elasto-plasticity taken from [10]. The problem is given by a quadratic plate of an elasto-plastic isotropic material with a circular hole in the centre. At the upper and lower edges of the plate, traction forces pointing outwards are applied. Because of the symmetry of the domain, it suffices to discretize only a fourth of the total geometry. The computational domain is then given by Ω = {x ∈ IR2 : 0 < x1 < 10, 0 < x2 < 10, x21 + x22 > 1} . The boundary conditions on the top edge of the computational domain (x2 = 10, 0 < x1 < 10) are set to σ · n = (0, t)T , on the right edge (x1 = 10, 0 < x2 < 10) and on the circular arc (x21 + x22 = 1) the boundary conditions are σ · n = (0, 0). Symmetry boundary conditions are prescribed on the rest of the boundary, i.e., (σ11 , σ12 ) · n = 0, u2 = 0 on the bottom (x2 = 0, 1 < x1 < 10), and u1 = 0, (σ21 , σ22 ) · n = 0 on the left (x1 = 0, 1 < x2 < 10). The Poisson ratio is ν = 0.29 which implies for the Lam´e constants λ = 1.381 μ. A combination of linear and exponential isotropic hardening of the form K(α) = K0 + Hα + (K∞ − K0 )(1 − e−ωα ) with K0 = 450, K∞ = 750, H = 129 and ω = 16.93 is used as in [10]. The load is increased starting from t = 0 in steps of Δt = 2.5. For each load step, an initial triangulation consisting of 52 elements is successively refined based on the local evaluation of the least-squares functional. The refinement rule is calibrated in such a way that the number of degrees of freedom approximately doubles with each refinement step. Table 1 shows the size of the least-squares functional, at different stages of the simulation. Shown in brackets is the number of iteration steps required to reduce the square of the energy norm of the nonlinear residual in (14), (15) below 10−6 F (σ h , uh ). For t = 150, the results are still well within the elastic domain which means that the results in Table 1 simply correspond to a linear elasticity
G. Starke Table 1. Least-squares functional F(σ h , uh ) and iteration counts l t = 150 t = 175 t = 200 t = 250 t = 300 t = 350 t = 400 t = 450
2.24 2.24 2.24 2.18 2.27 2.40 3.72 1.46
0 e-5 e-5 e-5 e-5 e-5 e-5 e-5 e-4
(1) (1) (1) (1) (2) (2) (2) (2)
3.04 3.04 2.98 3.14 4.02 1.89 5.78 8.45
1 e-6 e-6 e-6 e-6 e-6 e-5 e-5 e-5
(1) (1) (2) (2) (2) (2) (2) (2)
5.77 5.77 6.05 8.05 2.65 2.00 1.87 2.65
2 e-7 e-7 e-7 e-7 e-6 e-5 e-5 e-5
(1) (2) (2) (2) (2) (2) (3) (3)
1.37 1.41 1.47 2.05 1.62 5.44 5.60 7.17
3 e-7 e-7 e-7 e-7 e-6 e-6 e-6 e-6
(1) (2) (2) (2) (2) (3) (3) (3)
4.04 4.23 4.37 5.01 3.19 6.95 8.20 9.38
4 e-8 e-8 e-8 e-8 e-7 e-7 e-7 e-7
(1) (2) (2) (2) (3) (3) (3) (3)
problem which, of course, is solved in one Newton step. Inelastic deformation starts around t = 170 which already becomes apparent from the increase to 2 Newton steps on finer levels for t = 175. The maximum number of Newton steps, however, remains bounded by 3 as the load is further increased which indicates that our generalized Newton approach is very effective in this example.
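For readers who want to reproduce the material response used in this benchmark, the following Python sketch (our own illustration, using the hardening parameters quoted above) evaluates K(α) and solves the scalar equation (10) for the return parameter γ_R by Newton's method; the shear modulus μ is left as an input, since the text fixes only λ = 1.381 μ through ν = 0.29:

```python
import numpy as np

K0, K_inf, H, omega = 450.0, 750.0, 129.0, 16.93

def K(alpha):
    """Isotropic hardening K(alpha) = K0 + H*alpha + (K_inf - K0)*(1 - exp(-omega*alpha))."""
    return K0 + H * alpha + (K_inf - K0) * (1.0 - np.exp(-omega * alpha))

def dK(alpha):
    return H + omega * (K_inf - K0) * np.exp(-omega * alpha)

def return_parameter(dev_norm, alpha_old, mu, tol=1e-12, max_iter=50):
    """Solve gamma = dev_norm - sqrt(2/3)*K(alpha_old + sqrt(2/3)*gamma/(2*mu)), cf. (10)."""
    c = np.sqrt(2.0 / 3.0)
    if dev_norm <= c * K(alpha_old):      # elastic step: no plastic correction
        return 0.0
    gamma = 0.0
    for _ in range(max_iter):
        alpha = alpha_old + c * gamma / (2.0 * mu)
        g = gamma - dev_norm + c * K(alpha)
        if abs(g) < tol:
            break
        dg = 1.0 + (2.0 / 3.0) * dK(alpha) / (2.0 * mu)
        gamma -= g / dg
    return gamma
```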
References 1. Attia, F.S., Cai, Z., Starke, G.: First-order system least squares for the Signorini contact problem in linear elasticity. SIAM J. Numer. Anal. 47, 3027–3043 (2009) 2. Axelsson, O., Blaheta, R., Kohut, R.: Inexact Newton solvers in plasticity: Theory and experiments. Numer. Linear Algebra Appl. 4, 133–152 (1997) 3. Berndt, M., Manteuffel, T.A., McCormick, S.F.: Local error estimates and adaptive refinement for first-order system least squares. Electr. Trans. Numer. Anal. 6, 35–43 (1997) 4. Blaheta, R.: Convergence of Newton-type methods in incremental return mapping analysis of elasto-plastic problem. Comput. Methods Appl. Mech. Engrg. 147, 167– 185 (1997) 5. Bochev, P., Gunzburger, M.: Least-Squares Finite Element Methods. Springer, Berlin (2009) 6. Brezzi, F., Fortin, M.: Mixed and Hybrid Finite Element Methods. Springer, New York (1991) 7. Han, W., Reddy, B.D.: Plasticity: Mathematical Theory and Numerical Analysis. Springer, New York (1999) 8. Simo, J.C., Hughes, T.J.R.: Computational Inelasticity. Springer, New York (1998) 9. Starke, G.: An adaptive least-squares mixed finite element method for elastoplasticity. SIAM J. Numer. Anal. 45, 371–388 (2007) 10. Stein, E., Wriggers, P., Rieger, A., Schmidt, M.: Benchmarks. In: Stein, E. (ed.) Error-controlled Adaptive Finite Elements in Solid Mechanics, ch. 11, pp. 385–404. John Wiley and Sons, Chichester (2002) 11. Wieners, C.: Multigrid methods for Prandtl-Reuss plasticity. Numer. Linear Algebra Appl. 6, 457–478 (1999) 12. Wieners, C.: Nonlinear solution methods for infinitesimal perfect plasticity. Z. Angew. Math. Mech. 87, 643–660 (2007)
The Automatic Construction and Solution of a Partial Differential Equation from the Strong Form Joseph Young Institutt for informatikk Universitetet i Bergen
[email protected]
Abstract. In the last ten years, there has been significant improvement and growth in tools that aid the development of finite element methods for solving partial differential equations. These tools assist the user in transforming a weak form of a differential equation into a computable solution. Despite these advancements, solving a differential equation remains challenging. Not only are there many possible weak forms for a particular problem, but the most accurate or most efficient form depends on the problem’s structure. Requiring a user to generate a weak form by hand creates a significant hurdle for someone who understands a model, but does not know how to solve it. We present a new algorithm that finds the solution of a partial differential equation when modeled in its strong form. We accomplish this by applying a first order system least squares algorithm using triangular B´ezier patches as our shape functions. After describing our algorithm, we validate our results by presenting a numerical example.
1 Introduction The variety of algorithms used to solve a partial differential equation has been both an asset as well as a burden. On one hand, we have an assortment of extremely sophisticated tools that allow us to solve a diverse set of problems. On the other, the heterogeneous nature of these algorithms makes it challenging to design a general modeling tool. Within the realm of finite element methods, there has been considerable progress towards this goal. Modeling tools such as deal.II [2], FEniCS [7,10], FreeFEM [9], GetDP [6], and Sundance [11] allow the user to specify the weak form of a differential equation by hand. Then, given a specific kind of element, these tools either assist in or automate the construction of the linear system that arises from the discretization. In spite of their usefulness, these tools assume that their user possesses the technical expertise to find the weak form of a differential equation. Unfortunately, this can be a difficult task. Ideally, we would like a system that can transform the original strong form of the differential equation into a computable solution. This would allow a user with far less technical knowledge to solve a problem than is currently possible. While it is doubtful that such a perfect mechanism exists for all differential equations, we focus on a system that can achieve this goal for a relatively broad class of problems.
This research was financed by the Research Council of Norway through the SAGA-geo project. Special thanks to Magne Haveraaen for his suggestions and guidance.
Specifically, we automate the straightforward least squares algorithm [3,4] using triangular B´ezier patches as our shape functions. Neither our choice of the straightforward least squares algorithm nor our choice of B´ezier patches is unique [14,1,13]. Nonetheless, we combine these pieces in such way that we can automate the construction and solution of any polynomial differential equation where every function can be adequately approximated by a surface composed of several B´ezier patches. This includes all smooth functions as well as, in a practical sense, some discontinuous functions. We do not intend nor claim that this system will provide the best possible solution in all cases. Simply, it provides a smooth solution given relatively little analytical work by the user. In this way, we view it as a tool that allows an end user to rapidly prototype a problem and then determine whether further investigation into an alternative algorithm is necessary.
2 A Calculus for Bézier Patches and Surfaces

The key to this method is the manipulation of surfaces composed of Bézier patches. In the following section, we introduce and develop a calculus for these surfaces.

2.1 Basic Definition

Let us define the $I$th Bernstein polynomial of degree $k$ over the $j$th simplex within the set $t$ as $b^k_{I,j,t} : \mathbb{R}^p \to \mathbb{R}$,

$$b^k_{I,j,t}(x) = \begin{cases} \binom{k}{I}\,(T_j(x))^I & [T_j(x)]_i \ge 0 \text{ for all } i, \\ 0 & \text{otherwise}, \end{cases}$$

where $I \in \mathbb{N}_0^{p+1}$, $|I| = k$, and $T_j(x)$ denotes the solution $y$ of the $(p+1)\times(p+1)$ linear system

$$\begin{pmatrix} z_1 & z_2 & \cdots & z_{p+1} \\ 1 & 1 & \cdots & 1 \end{pmatrix} y = \begin{pmatrix} x \\ 1 \end{pmatrix},$$

where $t_j = (z_1, \ldots, z_{p+1}) \in \mathbb{R}^{(p+1)\times p}$ and $z_i$ denotes a corner of the simplex $j$. Based on these polynomials, we form Bézier patches by taking the sum over all possible polynomials of degree $k$. We form a surface by summing over all possible simplices.

2.2 Sums, Negation, Subtraction, and Multiplication

Our core manipulations include the sum, negation, subtraction, and multiplication of patches. Assume we have two patches, $f$ and $g$, defined over the same simplex, where

$$f = \sum_{|I|=k_1} \alpha_I\, b^{k_1}_I, \qquad g = \sum_{|I|=k_2} \beta_I\, b^{k_2}_I.$$

When $k_1 = k_2$, we find the sum $f + g$ by simply adding the corresponding coefficients. However, when the degrees differ, we must elevate the degree of one of the patches until they match. Without loss of generality, assume $k_1 > k_2$. Then, let

$$\gamma_I = \sum_{\substack{|J| = k_2 \\ (I-J)_i \ge 0\ \forall i}} \frac{\binom{k_2}{J}\binom{k_1-k_2}{I-J}}{\binom{k_1}{I}}\,\beta_J$$

where $|I| = k_1$. Then, we have that [5,8,12]

$$g = \sum_{|I|=k_2} \beta_I\, b^{k_2}_I = \sum_{|I|=k_1} \gamma_I\, b^{k_1}_I.$$

In a similar manner, in order to negate a patch, we simply negate the coefficients. Then, in order to subtract one patch from another, we negate the second patch and add it to the first. Another useful manipulation is the product between two different Bézier patches, which is equal to [5,12]

$$h = \sum_{|I|=k_1+k_2} \gamma_I\, b^{k_1+k_2}_I$$

where

$$\gamma_I = \sum_{\substack{|I_1| = k_1,\ |I_2| = k_2 \\ I = I_1 + I_2}} \frac{\binom{k_1}{I_1}\binom{k_2}{I_2}}{\binom{k_1+k_2}{I_1+I_2}}\,\alpha_{I_1}\beta_{I_2}.$$
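As a concrete illustration of the basic definition above, the following Python sketch (an assumption about one possible implementation, not the author's code) evaluates a Bernstein polynomial over a simplex by solving the barycentric system for $T_j(x)$ and forming the multinomial term.

```python
# Sketch: evaluate b^k_{I,j,t}(x) as defined above.
import numpy as np
from math import factorial

def multinomial(k, I):
    # k! / (I_1! * ... * I_{p+1}!) with |I| = k
    out = factorial(k)
    for i in I:
        out //= factorial(i)
    return out

def bernstein(I, corners, x):
    """Evaluate b^k_I at x over the simplex with the given corners (the z_i above)."""
    k = sum(I)
    A = np.vstack([np.array(corners, dtype=float).T, np.ones(len(corners))])
    rhs = np.append(np.asarray(x, dtype=float), 1.0)
    T = np.linalg.solve(A, rhs)        # barycentric coordinates T_j(x)
    if np.any(T < -1e-12):             # outside the simplex: the patch is zero there
        return 0.0
    return multinomial(k, I) * np.prod(T ** np.array(I))

# Example: a quadratic Bernstein polynomial on the unit triangle in R^2.
tri = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0)]
print(bernstein((1, 1, 0), tri, (0.25, 0.25)))   # 2 * 0.5 * 0.25 = 0.25
```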
2.3 Derivatives

Next, we give the directional derivative in the direction $h$ of a Bézier patch as [5,8,12]

$$\Bigl(\sum_{|I|=k} \alpha_I\, b^k_{I,j,t}\Bigr)'(x)(h) = \sum_{|\tilde I|=k-1} \gamma_{\tilde I}\, b^{k-1}_{\tilde I,j,t}(x)$$

where

$$\gamma_{\tilde I} = \sum_{|I|=k} \alpha_I \beta_{\tilde I}, \qquad \beta_{\tilde I} = \begin{cases} k\,[T^0_j(h)]_i & I - \tilde I = e_i, \\ 0 & \text{otherwise}. \end{cases}$$

In addition, $p$ describes the length of $x$, $e_i$ denotes the $i$th canonical vector, $b^{k-1}_{I-e_i,j,t}(x) = 0$ for all $x$ when $I - e_i$ contains negative indices, and $T^0_j(x)$ denotes the solution to the linear system

$$\begin{pmatrix} z_1 & z_2 & \cdots & z_{p+1} \\ 1 & 1 & \cdots & 1 \end{pmatrix} y = \begin{pmatrix} x \\ 0 \end{pmatrix}$$

where $t_j = (z_1, \ldots, z_{p+1}) \in \mathbb{R}^{(p+1)\times p}$. This is the same transformation as $T_j$ except that the right-hand side includes a 0 in the final element rather than a 1. In other words, the derivative of a Bézier patch is simply a Bézier patch of degree one lower.

2.4 Smoothness

Although a single Bézier patch is smooth and differentiable, a surface composed of many patches may be neither differentiable nor continuous between patches. In order to stitch these patches together, we leverage the blossom of the Bernstein polynomials. This introduces a set of linear constraints on the coefficients with which we can tune the amount of smoothness between patches.
We compute the blossom of the $I$th Bernstein polynomial of degree $k$, $\hat b_{I,j,t} : (\mathbb{R}^p)^k \to \mathbb{R}$, through the recurrence relation

$$\hat b^k_{I,j,t}(x_1, \ldots, x_k) = \sum_{i=1}^{p+1} [T_j(x_1)]_i\, \hat b^{k-1}_{I-e_i,j,t}(x_2, \ldots, x_k),$$

where we define $\hat b^0_{I,j,t}(x) = 1$ for all $x$ and $\hat b^k_{I,j,t}(x) = 0$ for all $x$ when $I$ contains negative indices. Next, let us consider two simplices, defined by the corners $r$ and $s$, that share a boundary. The face that defines this boundary requires $p$ points. Thus, without loss of generality let $s_{p+1}$ denote the single point not needed to define this boundary. For example, given a triangulation of two adjacent triangles with corners $r_1, r_2, r_3$ and $s_1, s_2, s_3$ sharing the edge $s_1 s_2$, the points $s_1$ and $s_2$ define the boundary while $s_3$ remains unneeded. Then, the boundary between the Bézier patches defined over the simplices $r$ and $s$ is $q$ times continuously differentiable when [8,12]

$$\alpha^s_I = \sum_{\hat I} \alpha^r_{\hat I}\, \hat b^k_{\hat I,r,t}(\underbrace{s_1,\ldots,s_1}_{I_1},\underbrace{s_2,\ldots,s_2}_{I_2},\ldots,\underbrace{s_{p+1},\ldots,s_{p+1}}_{I_{p+1}})$$

for all $I$ where $I_{p+1} \le q$, and $\alpha^r_I$ and $\alpha^s_I$ denote the coefficients of the Bézier patch over the simplices $r$ and $s$, respectively.

2.5 Symbolic Coefficients

When we use Bézier surfaces within a differential equation, some coefficients are constant whereas others are symbolic. As a result, we require a mechanism to track our symbolic variables. Fortunately, each manipulation we define above simply forms a new Bézier surface where the new coefficients are polynomials of the old. Thus, assume we have $n$ unknown variables $x_i$. We represent each coefficient by a tuple $(\alpha, a, A, \ldots)$ where $\alpha$ is a constant, $a$ is a vector, $A$ is a matrix, and higher order terms are represented as tensors. In this manner, the actual value of this coefficient is $\alpha + \langle a, x\rangle + \langle Ax, x\rangle + \cdots$ Therefore, we must also define operations such as addition, subtraction, and multiplication on symbolic coefficients. We define addition as

$$(\alpha, a, A, \ldots) + (\beta, b, B, \ldots) = (\alpha + \beta,\ a + b,\ A + B, \ldots),$$

multiplication of linear terms as

$$(\alpha, a)(\beta, b) = \bigl(\alpha\beta,\ \alpha b + \beta a,\ (ab^T + ba^T)/2\bigr),$$

and the other operations analogously.
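A minimal sketch of this symbolic-coefficient representation (an assumption about one possible data structure, not the paper's implementation), restricted to constant, linear, and quadratic parts:

```python
# Sketch: coefficients (alpha, a, A) with value alpha + <a, x> + <A x, x>.
import numpy as np

class SymCoeff:
    def __init__(self, n, const=0.0, lin=None, quad=None):
        self.const = const
        self.lin = np.zeros(n) if lin is None else np.asarray(lin, dtype=float)
        self.quad = np.zeros((n, n)) if quad is None else np.asarray(quad, dtype=float)

    def __add__(self, other):
        return SymCoeff(len(self.lin), self.const + other.const,
                        self.lin + other.lin, self.quad + other.quad)

    def mul_linear(self, other):
        # (alpha, a)(beta, b) = (alpha*beta, alpha*b + beta*a, (a b^T + b a^T)/2);
        # valid when both factors have no quadratic part.
        a, b = self.lin, other.lin
        return SymCoeff(len(a), self.const * other.const,
                        self.const * b + other.const * a,
                        0.5 * (np.outer(a, b) + np.outer(b, a)))

    def value(self, x):
        x = np.asarray(x, dtype=float)
        return self.const + self.lin @ x + x @ self.quad @ x

# The unknown coefficient x_0 times (2 + x_1):
n = 2
u0 = SymCoeff(n, lin=[1.0, 0.0])
c = SymCoeff(n, const=2.0, lin=[0.0, 1.0])
print(u0.mul_linear(c).value([3.0, 4.0]))   # 3 * (2 + 4) = 18
```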
3 Algorithm

In the following section, we describe the algorithm used to solve the differential equation. We describe this process in three steps. First, we decompose the problem into a first order system. Second, we replace all functions with their Bézier surface equivalents. Third, we construct and solve the least-squares problem. This involves combining all functions together using the calculus defined above, integrating the resulting surfaces, and solving the final optimization problem.

3.1 Decomposition into a First Order System

As with most least squares approaches, we decompose our problem into a first order system. In the straightforward least squares algorithm, this is not necessary, but it yields many benefits such as reducing the condition number of the final optimality system [3,4]. We emphasize that there are an infinite number of ways to decompose most problems and choosing one particular scheme may have benefits over another. Our decomposition method is very simple and simply demonstrates that this decomposition is possible to automate. Our scheme mirrors the trick used to decompose an ordinary differential equation into a system of first order equations. Let us consider a $k$th order PDE, $F(x, u, D^{\alpha_1}u, \ldots, D^{\alpha_m}u) = 0$, where $\alpha_i$ is a multiindex in $\mathbb{N}_0^p$. We construct a graph where we label each node by $\alpha$, for $|\alpha| \le k$, and connect two nodes $\alpha_i$ and $\alpha_j$ when $\|\alpha_i - \alpha_j\|_1 = 1$. Thus, we connect two nodes if they differ by a single index. Further, we only allow nodes $\alpha$ when $\alpha \preceq \alpha_i$ for some $\alpha_i$, where $\preceq$ denotes pointwise inequality. By its construction, this graph must be connected and contain a spanning tree. The spanning tree gives us our decomposition (see the sketch after this example)

$$F(x, u, D^{\alpha_1-\beta_1}u_{\beta_1}, \ldots, D^{\alpha_m-\beta_m}u_{\beta_m}) = 0, \qquad D^{\delta-\gamma}u_{\gamma} = u_{\delta}.$$

In the first equality, the constant $\beta_i$ is a label where $\alpha_i \succeq \beta_i$ and both $\alpha_i$ and $\beta_i$ are connected in the spanning tree. The second equality must hold for all labels $\delta$ and $\gamma$ such that $\beta_i \succeq \delta \succ \gamma$ and both $\delta$ and $\gamma$ are connected in the spanning tree. Finally, we replace any instance of $u_0$ by $u$. For example, consider the problem $D^{20}u + D^{11}u + D^{02}u = 0$. The spanning tree generated by the problem connects $(0,0)$ to $(1,0)$ and $(0,1)$, $(2,0)$ to $(1,0)$, and $(1,1)$ and $(0,2)$ to $(0,1)$. Thus, according to this decomposition, we would rewrite our problem as

$$D^{10}u_{10} + D^{10}u_{01} + D^{01}u_{01} = 0, \qquad D^{10}u = u_{10}, \qquad D^{01}u = u_{01}.$$
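The graph construction above is mechanical, so it is easy to automate. The following Python sketch builds the admissible multi-index graph and one spanning tree by breadth-first search; it assumes, as the text suggests, that any spanning tree rooted at the zero multi-index yields a valid decomposition (the particular tree it finds may differ from the one in the example).

```python
# Sketch: spanning-tree decomposition of a PDE's multi-indices into first-order relations.
from itertools import product

def decompose(alphas, p=2):
    kmax = max(sum(a) for a in alphas)
    # admissible nodes: alpha <= alpha_i pointwise for some original multi-index
    nodes = {a for a in product(range(kmax + 1), repeat=p)
             if any(all(a[d] <= b[d] for d in range(p)) for b in alphas)}
    root = (0,) * p
    parent, frontier = {root: None}, [root]
    while frontier:                      # breadth-first spanning tree
        nxt = []
        for g in frontier:
            for d in range(p):
                child = tuple(g[i] + (1 if i == d else 0) for i in range(p))
                if child in nodes and child not in parent:
                    parent[child] = g
                    nxt.append(child)
        frontier = nxt
    # each tree edge (delta, gamma) contributes D^(delta-gamma) u_gamma = u_delta
    return [(delta, parent[delta]) for delta in parent if parent[delta] is not None]

for delta, gamma in decompose([(2, 0), (1, 1), (0, 2)]):
    print(f"D^{tuple(d - g for d, g in zip(delta, gamma))} u_{gamma} = u_{delta}")
```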
3.2 Approximating Non-Bézier Surfaces

Our algorithm requires that all functions be Bézier surfaces. Therefore, all non-Bézier surfaces must be approximated. In order to accomplish this, we force the error in the approximation to be orthogonal to the Bernstein polynomials used to construct our Bézier patches. This leads to the system of equations generated by

$$\sum_{j=1}^{|t|} \sum_{|I|=k} \alpha_{I,j} \bigl\langle b^k_{I,j,t},\ b^k_{\hat I,\hat j,t} \bigr\rangle_{L^2(\Omega)} = \bigl\langle f,\ b^k_{\hat I,\hat j,t} \bigr\rangle_{L^2(\Omega)}$$

for all $|\hat I| = k$ and all $1 \le \hat j \le |t|$. The domain $\Omega$ is a square domain that encompasses the domain of the differential equation. After solving, the variable $\alpha$ defines the coefficients of a Bézier surface that approximates $f$.

3.3 Constructing the Least Squares Problem

Given a first order decomposition, we rewrite a system of differential equations

$$F_i(x, u, D^{\alpha_1}u, \ldots, D^{\alpha_m}u) = 0 \text{ on } \Omega, \qquad G_j(x, u, D^{\alpha_1}u, \ldots, D^{\alpha_m}u) = 0 \text{ on } \partial\Omega$$

as the least squares problem

$$\min_u\ \sum_i \|F_i(x, u, D^{\alpha_1}u, \ldots, D^{\alpha_m}u)\|^2_{L^2(\Omega)} + \sum_j \|G_j(x, u, D^{\alpha_1}u, \ldots, D^{\alpha_m}u)\|^2_{L^2(\partial\Omega)}.$$
We have chosen the $L^2$ norm since we implement the straightforward least squares algorithm. From a constructive point of view, we require a norm that uses our algebra on Bézier surfaces. As long as we restrict ourselves to the real numbers, this includes all $W^{k,p}$ norms for an integer $k$ and even $p$. As a result, it may be possible to automate a more complicated least-squares algorithm as long as it adheres to these norms. However, we do not explore this idea further within this paper. Next, we replace all non-Bézier surfaces with Bézier surface approximations. We also discretize $u$ using Bézier surfaces as our shape functions. The coefficients for each of these surfaces use our symbolic scheme from above. In the case of our unknown coefficients, initially all terms are zero except for the linear term, which is the $i$th canonical vector $e_i$, where $i$ denotes the index of that unknown coefficient. Once we have made this substitution, we use our calculus on Bézier patches from above to combine all patches into a single patch. In other words, we simplify all subtraction, addition, multiplication, negation, and differentiation operations, including the squaring of the problem due to the $L^2$ norm. Since we require this simplification, we focus solely on polynomial differential equations composed of these operations. In the end, we are left with the optimization problem

$$\min_{x\in\mathbb{R}^n}\ \int_{\Omega} \sum_{j=1}^{|t|} \sum_{|I|=k_i} p_{I,j}(x)\, b^{k_i}_{I,j,t} + \int_{\partial\Omega} \sum_{j=1}^{|t|} \sum_{|I|=\hat k_i} q_{I,j}(x)\, b^{\hat k_i}_{I,j,t}$$
where $p_{I,j}$ and $q_{I,j}$ are polynomials whose coefficients are still represented using our symbolic scheme. Then, we integrate away all the Bézier surfaces and obtain a polynomial $r$ based on these symbolic coefficients. Finally, we add stitching constraints based on the blossoming scheme described above. Thus, our complete problem has the form

$$\min_{x\in\mathbb{R}^n}\ r(x) \quad \text{s.t.} \quad Ax = 0,$$
where A is a matrix generated by including all possible stitching constraints. As a note, we require the differentiability between patches to be equal to the highest order derivative in the problem. Even though we decompose our problem into first order form, we can not guarantee convergence unless all functions conform to the original space of functions. For example, if we solve Poisson’s equation, Δu = f , we require two derivatives between each patch. In the end, we have a linearly constrained polynomial program. In the case of a linear differential equation, this yields a linearly constrained quadratic program.
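For the linear case the resulting linearly constrained quadratic program can be solved through its KKT system. The following Python sketch is purely illustrative (it is not the paper's solver) and assumes the objective has been put in the generic form $\tfrac12 x^TQx + c^Tx$ with equality constraints $Ax = 0$:

```python
# Sketch: equality-constrained QP solved via the KKT system.
import numpy as np

def solve_eq_qp(Q, c, A):
    """Minimize 0.5 x^T Q x + c^T x subject to A x = 0."""
    n, m = Q.shape[0], A.shape[0]
    K = np.block([[Q, A.T], [A, np.zeros((m, m))]])
    rhs = np.concatenate([-c, np.zeros(m)])
    sol = np.linalg.solve(K, rhs)
    return sol[:n]                      # first n entries are the primal variables

# Tiny example: minimize 0.5*(x1^2 + x2^2) - x1 subject to x1 - x2 = 0.
Q = np.eye(2)
c = np.array([-1.0, 0.0])
A = np.array([[1.0, -1.0]])
print(solve_eq_qp(Q, c, A))             # -> [0.5, 0.5]
```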
4 Numerical Experiments

In order to experimentally verify the convergence of our method, we solve Poisson's equation in two dimensions on the unit square:

$$u_{xx} + u_{yy} = -8\pi^2 \cos(2\pi x)\sin(2\pi y) \ \text{ on } [0,1]\times[0,1],$$
$$u = \sin(2\pi y) \ \text{ when } x = 0 \text{ or } x = 1, \qquad u = 0 \ \text{ when } y = 0 \text{ or } y = 1.$$

The solution to this problem is the function $u$ where $u(x, y) = \cos(2\pi x)\sin(2\pi y)$. We solved the problem formulated as both a first order system as well as an undecomposed problem using the original formulation. In both cases, we used fourth order Bézier surfaces. The table below summarizes the relative error in these solutions as well as the condition number of the KKT conditions.

              First Order System            Original Formulation
Subdivisions  Error     Rate    κ           Error     Rate    κ
1             6.00e-1   -       6.02e+3     6.43e-1   -       1.84e+3
2             4.03e-1   0.574   5.19e+3     2.85e-1   1.17    5.57e+3
4             1.18e-1   1.77    1.79e+4     1.88e-1   0.600   8.21e+4
8             -         -       -           4.30e-2   2.13    1.28e+6

We do not have results for the first order system on an eight times subdivided surface since we ran out of memory. Although a better implementation would greatly alleviate the problem, this highlights that the symbolic manipulations are memory intensive. As a result, decomposing the problem into a first order system significantly increases the amount of memory required. In spite of this drawback, the method seems to converge.
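The rates in the table follow from successive errors on meshes refined by a factor of two; a small helper (using the error values listed above) reproduces them:

```python
# Sketch: observed order of convergence from two successive errors.
import math

def observed_order(e_coarse, e_fine):
    return math.log2(e_coarse / e_fine)

print(observed_order(6.00e-1, 4.03e-1))   # ~0.57, first order system, 1 -> 2 subdivisions
print(observed_order(1.88e-1, 4.30e-2))   # ~2.13, original formulation, 4 -> 8 subdivisions
```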
5 Conclusion We have presented an algorithm that takes a differential equation modeled in the strong form and automatically produces a solution. It accomplishes this by applying a first order system least squares finite element method. As our shape functions, we use surfaces composed of triangular B´ezier patches. We use these functions because their special properties allow us to combine all functions in the differential equation into a single B´ezier surface whose coefficients are simply polynomials of our unknown variables. This allows us to produce a linearly constrained polynomial program which, when solved, produces our solution. Certainly, this algorithm will not work in all cases. However, it provides a mechanism that produces a solution with a minimal amount of effort. Even in cases when the true solution contains features that are difficult to model, we can use this method to quickly produce a rough approximation to the true solution.
References 1. Awanou, G., Lai, M.J., Wenston, P.: The multivariate spline method for scattered data fitting and numerical solution of partial differential equations. In: Chen, G., Lai, M.J. (eds.) Wavelets and Splines: Athens 2005, pp. 24–74. Nashboro Press (2006) 2. Bangerth, W., Hartmann, R., Kanschat, G.: deal.II–a general purpose object-oriented finite element library. ACM Transactions on Mathematical Software 33(2) (August 2007) 3. Bochev, P.B., Gunzburger, M.D.: Finite element methods of least-squares type. SIAM Review 40(4), 789–837 (1998) 4. Bochev, P.B., Gunzburger, M.D.: Least-Squares: Finite Element Methods. Springer, Heidelberg (2009) 5. de Boor, C.: B-form basics. In: Farin, G. (ed.) Geometric Modeling: Algorithms and New Trends (1987) 6. Dular, P., Geuzaine, C., Henrotte, F., Legros, W.: A general environment for the treatment of discrete problems and its application to the finite element method. IEEE Transactions on Magnetics 34(5), 3395–3398 (1998) 7. Dupont, T., Hoffman, J., Johnson, C., Kirby, R., Larson, M., Logg, A., Scott, R.: The FEniCS project, PREPRINT 2003-21 (2003) 8. Farin, G.: Triangular Bernstein-B´ezier patches. Computed Aided Geometric Design 3, 83– 127 (1986) 9. Hecht, F., Pironneau, O., Hyaric, A.L., Ohtsuka, K.: Freefem++, 2nd edn., Version 2.24-2-2, www.freefem.org 10. Logg, A.: Automating the finite element method. Sixth Winter School in Computational Mathematics (March 2006) 11. Long, K.: Sundance 2.0 tutorial. Technical Report SAND2004-4793, Sandia National Laboratory (July 2004) 12. Prautzsch, H., Boehm, W., Paluszny, M.: B´ezier and B-Spline Techniques. Springer, Heidelberg (2002) 13. Schumaker, L.L.: Computing bivariate splines in scattered data fitting and the finite-element method. Numerical Algorithms 48, 237–260 (2008) 14. Zheng, J., Sederberg, T.W., Johnson, R.W.: Least squares methods for solving differential equations using B´ezier control points. Applied Numerical Mathematics (48), 237–252 (2004)
Zienkiewicz-Type Finite Element Applied to Fourth-Order Problems Andrey B. Andreev1 and Milena R. Racheva2 1
Department of Informatics Technical University of Gabrovo 5300 Gabrovo, Bulgaria 2 Department of Mathematics Technical University of Gabrovo 5300 Gabrovo, Bulgaria
Abstract. In general, a finite element method (FEM) for fourth-order problems requires trial and test functions belonging to subspaces of the Sobolev space H 2 (Ω), and this would require C 1 −elements, i.e., piecewise polynomials which are C 1 across interelement boundaries. In order to avoid this requirement we will use nonconforming Zienkiewicz-type (Z-type) triangle applied to biharmonic problem. We propose a new approach to prove the order of convergence by comparison to suitable modified Hermite triangular finite element. This method is more natural and it could be also applied to the corresponding fourth-order eigenvalue problem. Some computational aspects are discussed and numerical example is given.
1 Introduction
A motivation for avoiding the use of $C^1$ finite elements is their very high dimension. Also, in many cases the feasible $C^0$-elements for fourth-order problems give simpler and more flexible computational schemes. Clearly, the effective choice of a method is complex, depending on many aspects of the underlying problem. Our purpose is to illustrate a new convergence analysis of the Z-type nonconforming triangular finite element.
Let $\Omega$ be a bounded polygonal domain in $\mathbb{R}^2$ with boundary $\partial\Omega$. Let also $H^m(\Omega)$ be the usual $m$-th order Sobolev space on $\Omega$ with norm $\|\cdot\|_{m,\Omega}$ and seminorm $|\cdot|_{m,\Omega}$. Throughout this paper $(\cdot,\cdot)$ denotes the $L^2(\Omega)$-inner product. Consider the following fourth-order model problem for $f \in L^2(\Omega)$:

$$\Delta^2 u = f \text{ in } \Omega, \qquad u = \frac{\partial u}{\partial \nu} = 0 \text{ on } \partial\Omega, \qquad (1)$$

where $\nu = (\nu_1, \nu_2)$ is the unit outer normal to $\partial\Omega$ and $\Delta$ is the standard Laplacian operator. The weak form of the problem (1) is: find $u \in H^2_0(\Omega)$ such that

$$a(u, v) = (f, v), \quad \forall v \in H^2_0(\Omega), \qquad (2)$$
Fig. 1.
where

$$a(u, v) = \int_\Omega \sum_{i,j=1}^{2} \partial^2_{ij} u\, \partial^2_{ij} v\, dx \quad \forall u, v \in H^2(\Omega).$$
We approximate the solution of (2) by the finite element method. Consider a family of triangulations τh of Ω which fulfill standard assumptions [6]. The partitions τh consist of triangles K and h is the mesh parameter. We introduce the FE spaces Vh related to the partitions τh . They are defined by means of Z-type triangular elements which will be introduced. It is well-known, that the Zienkiewicz triangle represents a reduced cubic Hermite finite element (see [4,5,9]) for which: – K is a 2-simplex with vertices ai , 1 ≤ i ≤ 3; – one possible set of degrees of freedom is (for any test function p): p(ai ), 1 ≤ i ≤ 3 and Dp(ai )(aj − ai ), 1 ≤ i, j ≤ 3, i = j. – PK ⊂ P3 (K) and dimPK = 9 (Fig. 1). Using directional derivatives, there is a variety of ways to define a finite element using P3 , two of which are shown in Fig. 1. Note that arrows represent directional derivatives along the indicated directions at the points. Some Z-type triangular elements having the same degrees of freedom can be also proposed by means of different ways (see, e.q. [9]). The following properties of Z-type element could be mentioned: (i) it is an incomplete and non-conforming C 0 −element for fourth-order problems; (ii) it uses the degrees of freedom just the same with the Zienkiewicz triangle, but its shape function space is different; (iii) it takes values of function and its derivatives at vertices as degrees of freedom and by this the global number of degrees of freedom is an optimal one; (iv) it is convergent (applied to fourthorder problems) in contrast to Zienkiewicz triangle, which is only convergent in parallel line condition and divergent in general grids.
Then, the corresponding approximate variational problem of (2) is: find $u_h \in V_h \subset H^2_0(\Omega)$ such that

$$a_h(u_h, v_h) = (f, v_h), \quad \forall v_h \in V_h, \qquad (3)$$

where

$$a_h(u_h, v_h) = \sum_{K\in\tau_h} \int_K \sum_{i,j=1}^{2} \frac{\partial^2 u_h}{\partial x_i \partial x_j}\, \frac{\partial^2 v_h}{\partial x_i \partial x_j}\, dx.$$
For any 2-simplex $K$ we define

$$P(K) = P_3(K) = P_2(K) + \operatorname{span}\{\lambda_i^2\lambda_j - \lambda_i\lambda_j^2,\ 1 \le i < j \le 3\},$$

where $\lambda_i$, $i = 1, 2, 3$, are the barycentric coordinates of $K$. Then we can define the shape function space by $P_3(K)$.
Lemma 1. ([9], Lemma 1) The set of degrees of freedom is $P_K$-unisolvent.
2 Main Result
In order to carry out the convergence analysis of the considered Z-type element, we also consider the Hermite triangle with a suitably modified 10th degree of freedom; namely, the integral value on $K$ is taken instead of the value at the barycenter of $K$ (Fig. 1). Let $\Pi_h$ denote the interpolation operator corresponding to the Z-type finite element partition $\tau_h$ and $\pi_h$ be the interpolation operator related to the modified Hermite finite element. Our convergence analysis is based on the estimation of $\Pi_h v - \pi_h v$ for any $v \in H^2_0 \cap H^3(\Omega)$ on each element $K \in \tau_h$ [1]. For any $v \in L^2(\Omega)$ with $v|_K \in H^m(K)$, $\forall K \in \tau_h$, we define the following mesh-dependent norm and seminorm [9]:

$$\|v\|_{m,h} = \Bigl(\sum_{K\in\tau_h} \|v\|^2_{m,K}\Bigr)^{1/2}, \qquad |v|_{m,h} = \Bigl(\sum_{K\in\tau_h} |v|^2_{m,K}\Bigr)^{1/2}.$$

Theorem 1. Let $V_h$ be the FE space corresponding to the nonconforming Z-type element. Then there exists a constant $C = C(\Omega) > 0$, independent of $h$, such that

$$\inf_{v_h\in V_h} \sum_{m=0}^{2} h^m |v - v_h|_{m,h} \le C h^3 \|v\|_{3,\Omega}, \quad \forall v \in H^2_0 \cap H^3(\Omega).$$
Proof. First, we shall estimate Πh v − πh v on each finite element K ∈ τh . For this purpose we transform any triangle K to the reference element T = {(t1 , t2 ) : t1 , t2 ≥ 0, t1 + t2 ≤ 1}.
The shape functions of Z-type element on the reference element T are: 1 3 1 ϕ1 (t1 , t2 ) = − t1 t2 − t21 + t21 t2 + t1 t22 + t31 ; 2 2 2 ϕ2 (t1 , t2 ) = 2t1 t2 + 3t21 − 2t21 t2 − 2t1 t22 − 2t31 ; 1 1 1 ϕ3 (t1 , t2 ) = − t1 t2 − t21 t2 + t1 t22 ; 2 2 2 1 1 2 1 ϕ4 (t1 , t2 ) = − t1 t2 + t1 t2 − t1 t22 ; 2 2 2 ϕ5 (t1 , t2 ) = 2t1 t2 + 3t22 − 2t21 t2 − 2t1 t22 − 2t32 ; 1 1 3 ϕ6 (t1 , t2 ) = − t1 t2 − t22 + t21 t2 + t1 t22 + t32 ; 2 2 2 3 1 3 ϕ7 (t1 , t2 ) = −t2 + t1 t2 + 2t22 − t21 t2 − t1 t22 − t32 ; 2 2 2 ϕ8 (t1 , t2 ) = 1 − 4t1 t2 − 3t21 − 3t22 + 4t21 t2 + 4t1 t22 + 2t31 + 2t32 ; 3 3 1 ϕ9 (t1 , t2 ) = −t1 + 2t21 − t31 + t1 t2 − t21 t2 − t1 t22 . 2 2 2 By analogy, we obtain the shape functions of the Hermite element on T consecutively from a1 to a3 : ψ1 (t1 , t2 ) = 2t1 t2 − t21 − t21 t2 − 2t1 t22 + t31 ; ψ2 (t1 , t2 ) = −18t1t2 + 3t21 + 18t21 t2 + 18t1 t22 − 2t31 ; ψ3 (t1 , t2 ) = 2t1 t2 − 3t21 t2 − 2t1 x22 ; ψ4 (t1 , t2 ) = 2t1 t2 − 2t21 t2 − 3t1 t22 ; ψ5 (t1 , t2 ) = −18t1t2 + 3t22 + 18t21 t2 + 18t1 t22 − 2t32 ; ψ6 (t1 , t2 ) = 2t1 t2 − t22 − 2t21 t2 − t1 t22 + t32 ; ψ7 (t1 , t2 ) = −t2 + 4t1 t2 + 2t22 − 3t21 t2 − 4t1 t22 − t32 ; ψ8 (t1 , t2 ) = 1 − 24t1 t2 − 3t21 − 3t22 + 24t21 t2 + 24t1 t22 + 2t31 + 2t32 ; ψ9 (t1 , t2 ) = −t1 + 4t1 t2 + 2t21 − 4t21 t2 − 3t1 t22 − t31 ; ψ10 (t1 , t2 ) = 60t1 t2 − 60t21 t2 − 60t1 t22 . Using these shape functions we calculate: (Πh v − πh v)|T = (60t1 t2 − −
$60\,t_1^2 t_2 - 60\,t_1 t_2^2\bigr)\Bigl[\dfrac{v(a_1) + v(a_2) + v(a_3)}{3} - \dfrac{\partial_{(a_1-a_2)}v(a_1) + \partial_{(a_1-a_3)}v(a_1)}{24} - \dfrac{\partial_{(a_2-a_1)}v(a_2) + \partial_{(a_2-a_3)}v(a_2)}{24} - \dfrac{\partial_{(a_3-a_1)}v(a_3) + \partial_{(a_3-a_2)}v(a_3)}{24} - \dfrac{1}{\operatorname{meas} T}\displaystyle\int_T v(t)\,dt\Bigr]$

$$= 60\,t_1 t_2 (1 - t_1 - t_2)\,E_T(v) \le \frac{20}{9}\,|E_T(v)|, \qquad (4)$$

where

$$E_T(v) = \frac{1}{3}\sum_{i=1}^{3} v(a_i) - \frac{1}{24}\sum_{\substack{i,j=1 \\ i\ne j}}^{3} \partial_{(a_i-a_j)}v(a_i) - \frac{1}{\operatorname{meas} T}\int_T v(t)\,dt$$
is the error functional of a quadrature formula, and $E_T(v) = 0$ for any $v \in P_2(T)$. Let $F_K$ be the invertible affine mapping which maps the reference finite element $T$ onto the finite element $K$: $F_K : T \to K$, $t \mapsto x = F_K(t) = B_K t + b_K$, with $B_K \in \mathbb{R}^{2\times 2}$, $b_K \in \mathbb{R}^{2\times 1}$, and where $\det B_K = O(h^2)$. Therefore, from the Bramble-Hilbert lemma [6] there exists a constant $C$ such that

$$|E_T(v)| \le C\,|v|_{3,T} \le C h^3 (\det B_K)^{-1/2} |v|_{3,K}. \qquad (5)$$

Thus, using that $\|\Pi_h v - \pi_h v\|_{0,K} = (\det B_K)^{1/2} \|\Pi_h v - \pi_h v\|_{0,T}$, from (4) and (5) it follows that

$$\|\Pi_h v - \pi_h v\|_{0,h} = \Bigl(\sum_{K\in\tau_h} \|\Pi_h v - \pi_h v\|^2_{0,K}\Bigr)^{1/2} \le C h^3 \|v\|_{3,\Omega}. \qquad (6)$$
Applying explicit calculations, we obtain:

$$|\partial_i(\Pi_h v - \pi_h v)|_T \le C h^2 |v|_{3,K}, \qquad |\partial_{ij}(\Pi_h v - \pi_h v)|_T \le C h\, |v|_{3,K}, \quad i, j = 1, 2.$$

These inequalities and (6) give

$$\|\Pi_h v - \pi_h v\|_{m,h} \le C h^{3-m} \|v\|_{3,\Omega}, \quad m = 0, 1, 2. \qquad (7)$$

Finally, we use the fact that the order of $\|v - \pi_h v\|_{m,h}$, $m = 0, 1, 2$, is optimal for the cubic Hermite polynomials [5,6]. From (7) and applying

$$\|v - \Pi_h v\|_{m,h} \le \|v - \pi_h v\|_{m,h} + \|\Pi_h v - \pi_h v\|_{m,h},$$

from the FE interpolation theory we prove the theorem.
Remark 1. However, for $m = 0, 1$ we have $\|\Pi_h v - \pi_h v\|_{m,h} = \|\Pi_h v - \pi_h v\|_{m,\Omega}$.
Theorem 2. Let $u \in H^3(\Omega) \cap H^2_0(\Omega)$ and $u_h \in V_h$ be the solutions of the problems (2) and (3), respectively. Then there exists a constant $C = C(\Omega) > 0$, independent of $h$, such that

$$\|u - u_h\|_{2,h} \le C h \|u\|_{3,\Omega}. \qquad (8)$$

Proof. By Theorem 1, for any $v \in H^2_0(\Omega)$,

$$\lim_{h\to 0}\ \inf_{v_h\in V_h} \|v - v_h\|_{2,h} = 0.$$

Therefore $\lim_{h\to 0}\|u - u_h\|_{2,h} = 0$. Having in mind that $V_h$ is constructed by Z-type nonconforming elements, we have (see also [9], Lemma 3):

$$|a_h(v, v_h) - (\Delta^2 v, v_h)| \le C h\, |v|_{3,\Omega}\, |v_h|_{2,h}, \quad \forall v \in H^3(\Omega),\ \forall v_h \in V_h. \qquad (9)$$

For the solutions $u$ and $u_h$ we apply the second Strang lemma (see [6], Theorem 31.1):

$$\|u - u_h\|_{2,h} \le C \Bigl( \inf_{v_h\in V_h} \|u - v_h\|_{2,h} + \sup_{\substack{v_h\in V_h \\ v_h \ne 0}} \frac{|a_h(u, v_h) - (f, v_h)|}{\|v_h\|_{2,h}} \Bigr).$$

Then, the main estimate (8) follows from (9) and the result of Theorem 1.
We can apply the previous results to the corresponding fourth-order eigenvalue problem (EVP) [3]: $\Delta^2 w = \lambda w$ in $\Omega \subset \mathbb{R}^2$, subject to homogeneous boundary conditions. The variational EVP takes the form

$$a(w, v) = \lambda (w, v), \quad \forall v \in V \subset H^2_0(\Omega). \qquad (10)$$
The finite dimensional analogue of the problem (10) by means of Z-type nonconforming elements is (see [8]):

$$a_h(w_h, v_h) = \lambda_h (w_h, v_h), \quad \forall v_h \in V_h. \qquad (11)$$
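Once the stiffness and mass matrices over the Z-type basis are assembled, (11) becomes a generalized symmetric matrix eigenproblem $K w = \lambda M w$. A minimal sketch of that last algebraic step (the matrices below are illustrative placeholders, not an actual Z-type assembly):

```python
# Sketch: solve the discrete generalized eigenproblem K w = lambda M w.
import numpy as np
from scipy.linalg import eigh

K = np.array([[4.0, -1.0], [-1.0, 3.0]])   # placeholder stiffness matrix
M = np.array([[2.0, 0.5], [0.5, 1.0]])     # placeholder (positive definite) mass matrix
eigenvalues, eigenvectors = eigh(K, M)      # generalized symmetric eigenproblem
print(eigenvalues)                          # approximate eigenvalues, smallest first
```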
It is to be noted here that the sesquilinear form $a_h$ is uniformly elliptic, i.e. there exists $\alpha > 0$ such that $\alpha\|\Delta u\|^2_{0,\Omega} \le a_h(u, u)$ for all $u \in H^2(\Omega)$.
Theorem 3. (see [2]) Let $(\lambda, w)$ and $(\lambda_h, w_h)$ be eigensolutions of (10) and (11), respectively. Then $\lambda^{(k)}_h \to \lambda^{(k)}$ ($h \to 0$), $k = 1, \ldots, N_h$, and for any sequence of normalized eigenfunctions $w^{(k)}_h \in V_h$, $\|w^{(k)}_h\|_{0,\Omega} = 1$, there exist eigenfunctions $w \in H^2_0(\Omega)$ such that

$$\|w^{(k)}_h - w\|_{2,h} \to 0 \quad (h \to 0).$$

Moreover, if $w \in H^3(\Omega) \cap H^2_0(\Omega)$, then

$$\|w^{(k)}_h - w\|_{2,h} \le C h \|w\|_{3,\Omega}, \qquad |\lambda^{(k)}_h - \lambda| \le C h^2 \|w\|^2_{3,\Omega}.$$
3 Numerical Results
To illustrate our theoretical results, we report in this example on a related two-dimensional biharmonic eigenvalue problem. Let $\Omega$ be the square domain

$$\Omega:\ -\frac{\pi}{2} < x_i < \frac{\pi}{2}, \quad i = 1, 2.$$

Consider the following model problem:

$$\Delta^2 u = \lambda u \ \text{ in } \Omega, \qquad u = \frac{\partial u}{\partial \nu} = 0 \ \text{ on } \partial\Omega.$$

For this problem the exact eigenvalues are not known. We use their lower and upper bounds obtained by Weinstein and Stenger [10] (see also Ishihara [7]). In Table 1 the results from our numerical experiments for the first four eigenvalues are given. The domain is divided into a uniform mesh of $2n^2$ isosceles triangular Z-type elements, and the mesh parameter is $h = \pi/n$, $n = 5, 6, 7, 8$. As can be seen, the numerical implementation confirms the convergence asserted by Theorems 1-3. The proposed Z-type elements are eligible especially for computing eigenvalues. Moreover, for the approximate eigenvalues and eigenfunctions obtained by means of the proposed elements, some postprocessing technique could easily be applied for the sake of improving both the rate of convergence to the exact solution and the properties of the approximate eigenfunctions [2].
Table 1. Eigenvalues computed by means of Z-type finite elements

h        λ1        λ2        λ3        λ4
π/5      14.8023   64.1267   70.4634   162.3236
π/6      14.0465   60.3152   65.3925   156.5718
π/7      13.6470   58.6170   62.5207   153.7372
π/8      13.4721   57.7318   61.3971   152.4870
bounds:
lower    13.2820   55.2400   55.2400   120.0070
upper    13.3842   56.5610   56.5610   124.0740
Acknowledgement This work is partially supported by the Bulgarian Ministry of Science under grant VU-MI 202/2006.
References 1. Andreev, A.B., Racheva, M.R.: Optimal Order FEM for a Coupled Eigenvalue Problem on 2-D Overlapping Domains. In: Margenov, S., Vulkov, L.G., Wa´sniewski, J. (eds.) NAA 2008. LNCS, vol. 5434, pp. 151–158. Springer, Heidelberg (2009) 2. Andreev, A.B., Racheva, M.R.: Acceleration of Convergence for eigenpairs Approximated by means of nonconforming Finite Element Methods. In: Lirkov, I., Margenov, S., Wa´sniewski, J. (eds.) LSSC 2009. LNCS, vol. 5910, pp. 695–702. Springer, Heidelberg (2010) 3. Babuska, I., Osborn, J.: Eigenvalue Problems. In: Ciarlet, P.G., Lions, J.L. (eds.) Handbook of Numerical Analysis, vol. II, pp. 641–787. North-Holland, Amsterdam (1991) 4. Bazaley, G.P., Cheung, Y.K., Irons, B.M., Zienkiewicz, O.C.: Triangular Elements in Plate Bending - Conforming and Non-conforming Solutions. In: Proceedings of the Conference on Matrix Methods in Structural Mechanics, pp. 547–576. Wright Patterson A.F. Base, Ohio (1965) 5. Brenner, S., Scott, L.R.: The Mathematical Theory for Finite Element Methods. Springer, New York (1992) 6. Ciarlet, P.: Basic Error Estimates for the FEM, vol. 2, pp. 17–351. Elsevier, Amsterdam (1991) 7. Ishihara, K.: A mixed finite element method for the biharmonic eigenvalue problem of plate bending. Publ. Res. Institute of Mathematical Sciences 14, 399–414 (1978) 8. Rannacher, R.: Non-conforming Finite Element Methods for eigenvalue Problems in Linear Plate Theory. Numer. Math. 33, 23–42 (1979) 9. Wang, M., Shi, Z., Xu, J.: A New Class of Zienkiewicz-type Non-conforming Element in Any Dimensions. Numer. Math. 106, 335–347 (2007) 10. Weinstein, A., Stenger, W.: Methods of intermediate problems for eigenvalues, theory and applications. Academic Press, London (1972)
Acceleration of Convergence for Eigenpairs Approximated by Means of Non-conforming Finite Element Methods Andrey B. Andreev1 and Milena R. Racheva2 1 Department of Informatics Technical University of Gabrovo 5300 Gabrovo, Bulgaria 2 Department of Mathematics Technical University of Gabrovo 5300 Gabrovo, Bulgaria
Abstract. Non-conforming finite elements are commonly used for approximating fourth-order eigenvalue problems in linear plate theory. We derive error estimates for eigenpairs by nonconforming Zienkiewicz-type triangular elements. We also propose a simple postprocessing method which improves the order of convergence of finite element eigenpairs. Thus, an a posteriori analysis is presented by means of different triangular elements. The technique applied is general. It is illustrated and discussed by numerical results.
1 Introduction
This study deals with a procedure for accelerating the convergence of finite element approximations of the eigenpairs of a fourth-order problem. A Zienkiewicz-type (Z-type) two-dimensional nonconforming triangular element is applied. Then we can use conforming elements for the simpler elliptic problem.
Let $\Omega \subset \mathbb{R}^2$ be a bounded polygonal domain with boundary $\partial\Omega$. Let also $H^m(\Omega)$ be the usual $m$-th order Sobolev space on $\Omega$ with norm $\|\cdot\|_{m,\Omega}$ and seminorm $|\cdot|_{m,\Omega}$. Consider a thin elastic plate corresponding to the domain $\Omega$. If the material is homogeneous and isotropic, the question about the possible small vibrations of the plate leads to the basic eigenvalue problem:

$$\Delta^2 u = \lambda u \text{ in } \Omega, \qquad u = \frac{\partial u}{\partial \nu} = 0 \text{ on } \partial\Omega, \qquad (1)$$

where $\nu = (\nu_1, \nu_2)$ is the unit outer normal to $\partial\Omega$ and $\Delta$ is the standard Laplacian operator. The variational eigenvalue problem (EVP) corresponding to (1) is: find $(\lambda, u) \in \mathbb{R} \times H^2_0(\Omega)$ such that

$$a(u, v) = \lambda(u, v), \quad \forall v \in V \subset H^2_0(\Omega), \qquad (2)$$
where

$$a(u, v) = \int_\Omega \sum_{i,j=1}^{2} \partial^2_{ij} u\, \partial^2_{ij} v\, dx, \quad \forall u, v \in H^2(\Omega),$$
and (·, ·) denotes the L2 (Ω)−inner product. Obviously, the bilinear form a(·, ·) is symmetric and V −elliptic. Moreover, the inclusion of V in L2 (Ω) is compact. Therefore, problem (2) has a countable infinite set of eigenvalues λj , all being strictly positive and having finite multiplicity, without a finite accumulation point (see, e.g. [9]). The corresponding eigenfunctions uj can be chosen to be orthonormal in L2 (Ω). They constitute a Hilbert basis for V . We shall approximate the solution of (2) bythe finite element method (FEM). Consider a family of regular partitions τh = i Ki of Ω consisting of triangular finite elements Ki which fulfill standard assumptions (see [7], Chapter 3). If hi is diameter of Ki , the h = maxi hi is the finite element parameter corresponding to any partition τh . With a partition τh we associate a finite dimensional subspace Vh of V by means of Z-type triangular elements such that every FE is an incomplete cubic polynomial (see [1,6,10]). For a test function p one possible set of degree of freedom is: = j, p(ai ), 1 ≤ i ≤ 3 and Dp(ai )(aj − ai ), 1 ≤ i, j ≤ 3, i where ai , i = 1, 2, 3 are the vertices of the element K. It is clear that PK ⊂ P3 (K) and dimPK = 9 (Fig. 1). We determine the approximate eigenpairs (λh , uh ) by FEM using nonconforming Z-type element. Then, the EVP corresponding to (2) is: find (λh , uh ) ∈ R × Vh such that ah (uh , vh ) = λh (uh , vh ), ∀ vh ∈ Vh ,
Fig. 1.
(3)
where

$$a_h(u_h, v_h) = \sum_{K\in\tau_h} \int_K \sum_{i,j=1}^{2} \partial^2_{ij} u_h\, \partial^2_{ij} v_h\, dx.$$

2 Convergence Analysis
First, we introduce the mesh-dependent norm and seminorm [1,10]. For any $v \in L^2(\Omega)$ with $v|_K \in H^m(K)$, $\forall K \in \tau_h$, we define:

$$\|v\|_{m,h} = \Bigl(\sum_{K\in\tau_h} \|v\|^2_{m,K}\Bigr)^{1/2}, \qquad |v|_{m,h} = \Bigl(\sum_{K\in\tau_h} |v|^2_{m,K}\Bigr)^{1/2}.$$

We shall use the essential property of the FE space $V_h$ (see [1], Theorem 1): if $V_h$ corresponds to the nonconforming Z-type element, then there exists a constant $C = C(\Omega) > 0$, independent of $h$, such that

$$\inf_{v_h\in V_h} \sum_{m=0}^{2} h^m |v - v_h|_{m,h} \le C h^3 \|v\|_{3,\Omega}, \quad \forall v \in H^2_0 \cap H^3(\Omega). \qquad (4)$$
Let us also define the elliptic projection Rh ∈ L(V, Vh ) by ah (Rh u − u, vh ) = 0, ∀ vh ∈ Vh .
(5)
Thus, Rh is an operator of orthonormal projection from V over Vh with respect to the scalar product ah (·, ·). The next theorem gives error estimate of the eigenvalues using nonconforming Z-type finite element. Theorem 1. Let (λ, u) and (λh , uh ) be eigensolutions of (2) and (3), respectively. Then for any simple eigenvalue λm (m ≥ 1), λm,h −→ λm (h → 0). Moreover, if the corresponding eigenfunction um belongs to H02 (Ω)∩H 3 (Ω), then |λm − λm,h | ≤ Ch2 um 23,Ω .
(6)
Proof. From (4) and (5) it is easy to see that for any $u \in V$

$$\|u - R_h u\|_{2,h} \le C \inf_{v_h\in V_h} \|u - v_h\|_{2,h}. \qquad (7)$$
Now, for any integer $m \ge 1$ we estimate the difference $\lambda_m - \lambda_{m,h}$. For this purpose we introduce the space $V_m$ which is generated by the first $m$ (exact) eigenfunctions $\{u_i\}$, $1 \le i \le m$. The approximate eigenvalue $\lambda_{m,h}$ can be characterized as various extrema of the Rayleigh quotient [5]. Then ($\dim(R_h V_m) = m$):

$$\lambda_{m,h} \le \max_{0\ne v_h \in R_h V_m} \frac{a_h(v_h, v_h)}{\|v_h\|^2_{0,\Omega}} = \max_{\substack{v\in V_m \\ \|v\|_{0,\Omega}=1}} \frac{a_h(R_h v, R_h v)}{\|R_h v\|^2_{0,\Omega}}.$$
Since $R_h v$ is an orthogonal projection over $V_h$ w.r.t. $a_h(\cdot,\cdot)$, we have $a_h(R_h v, R_h v) \le a_h(v, v)$, therefore

$$\lambda_{m,h} \le \sup_{\substack{v\in V_m \\ \|v\|_{0,\Omega}=1}} \frac{a_h(v, v)}{\|R_h v\|^2_{0,\Omega}} \le \lambda_m \sup_{\substack{v\in V_m \\ \|v\|_{0,\Omega}=1}} \frac{1}{\|R_h v\|^2_{0,\Omega}}. \qquad (8)$$

In the last inequality we suppose that $a_h(v, v) = a(v, v)$. This is true if, for example, $V_m \subset H^2(\Omega)$. Let us also emphasize that $\|v_h\|_{s,h} = \|v_h\|_{s,\Omega}$, $s = 0, 1$, for all $v_h \in V_m$. Now let us consider a function $v \in V_m$ such that $\|v\|_{0,\Omega} = 1$. Then $v = \sum_{i=1}^m \alpha_i u_i$ with $\sum_{i=1}^m \alpha_i^2 = 1$. Using that $v$ is normalized w.r.t. $L^2(\Omega)$ we obtain

$$1 - \|R_h v\|^2_{0,\Omega} = (v - R_h v, v + R_h v) = 2(v - R_h v, v) - \|v - R_h v\|^2_{0,\Omega},$$

or

$$\|R_h v\|^2_{0,\Omega} \ge 1 - 2(v - R_h v, v). \qquad (9)$$
On the other hand, from (2) we derive

$$(v - R_h v, v) = \sum_{i=1}^{m} \alpha_i (v - R_h v, u_i) = \sum_{i=1}^{m} \frac{\alpha_i}{\lambda_i}\, a_h(v - R_h v, u_i).$$

Applying equality (5) we get

$$(v - R_h v, v) = \sum_{i=1}^{m} \frac{\alpha_i}{\lambda_i}\, a_h(v - R_h v, u_i - R_h u_i).$$

Next, as $a_h(\cdot,\cdot)$ is continuous, i.e. $a_h(u, v) \le M\|u\|_{2,h}\|v\|_{2,h}$, we obtain:

$$(v - R_h v, v) \le M \|v - R_h v\|_{2,h} \Bigl\| \sum_{i=1}^{m} \frac{\alpha_i}{\lambda_i}\, (u_i - R_h u_i) \Bigr\|_{2,h}.$$

By the Cauchy-Schwarz inequality we estimate:

$$\Bigl\| \sum_{i=1}^{m} \frac{\alpha_i}{\lambda_i}\, (u_i - R_h u_i) \Bigr\|_{2,h} \le \Bigl(\sum_{i=1}^{m} \frac{\alpha_i^2}{\lambda_i^2}\Bigr)^{1/2} \Bigl(\sum_{i=1}^{m} \|u_i - R_h u_i\|^2_{2,h}\Bigr)^{1/2} \le \frac{\sqrt{m}}{\lambda_1} \sup_{\substack{v\in V_m \\ \|v\|_{0,\Omega}=1}} \|v - R_h v\|_{2,h}.$$

Combining (10), (9) and (8), we get

$$\lambda_{m,h} \le \Bigl(1 + C \sup_{\substack{v\in V_m \\ \|v\|_{0,\Omega}=1}} \|v - R_h v\|^2_{2,h}\Bigr)\,\lambda_m. \qquad (10)$$
We rewrite the last result as

$$|\lambda_{m,h} - \lambda_m| \le C(\lambda) \sup_{\substack{v\in V_m \\ \|v\|_{0,\Omega}=1}} \|v - R_h v\|^2_{2,h} \le C \sum_{i=1}^{m} \|u_i - R_h u_i\|^2_{2,h}.$$

Now, let us suppose that $V_m \subset H^3(\Omega) \cap H^2_0(\Omega)$ and $u_i \in H^3(\Omega)$, $i = 1, \ldots, m$. Applying (7) and the approximation property (4) to the last inequality we prove the estimate (6).
As a corollary of the considerations above, if $\lambda_m$ is a simple eigenvalue, for the corresponding eigenfunctions we have:

$$\|u_{m,h} - u_m\|_{2,h} \le C \sup_{\substack{v\in V_m \\ \|v\|_{0,\Omega}=1}} \|v - R_h v\|_{2,h} \le C \Bigl(\sum_{i=1}^{m} \|u_i - R_h u_i\|^2_{2,h}\Bigr)^{1/2}.$$

Under $V_m \subset H^3(\Omega) \cap H^2_0(\Omega)$ we get

$$\|u_{m,h} - u_m\|_{0,h} \le C h \Bigl(\sum_{i=1}^{m} \|u_i\|^2_{3,h}\Bigr)^{1/2}. \qquad (11)$$
3 Superconvergent Postprocessing Technique
Procedures for accelerating the convergence of FE approximations of the eigenpairs have been developed by the authors for different problems (see, e.g., [2,3,4,8]). Herein we prove that these ideas can be applied to the biharmonic eigenvalue problem approximated by nonconforming finite elements.
Now we present a relatively simple postprocessing method that gives better accuracy for eigenvalues. It is based on a postprocessing technique that involves an additional solve of a source problem on an augmented FE space. Let $u_h$ be any approximate eigenfunction of (3) with $\|u_h\|_{0,\Omega} = 1$. Since the finite element solution $u_h$ obtained by the nonconforming Z-type element is already known, we consider the following variational elliptic problem:

$$a(\tilde u, v) = (u_h, v), \quad \forall v \in V. \qquad (12)$$

We define the number

$$\tilde\lambda = \frac{1}{(\tilde u, u_h)}.$$

Theorem 2. Let the FE space $V_h$ be constructed by Z-type nonconforming triangular elements. If $(\lambda, u)$ is an eigenpair of problem (2), $u \in H^3(\Omega)$, and $(\lambda_h, u_h)$ is the corresponding solution of (3), with the eigenfunctions normalized so that $\|u\|_{0,\Omega} = \|u_h\|_{0,\Omega} = 1$, then

$$|\lambda - \tilde\lambda| = O(\|u - u_h\|^2_{0,\Omega}).$$

If in addition the finite element partitions are regular, then the following superconvergent estimate holds:

$$|\lambda - \tilde\lambda| \le C h^6 \|u\|^2_{3,\Omega}. \qquad (13)$$
The proof may proceed similarly to the same result obtained for other problems in [2,3,8]. The superconvergent estimate is very useful from a theoretical point of view. Nevertheless, in practice the exact solution of the source problem (12) is hardly ever available. So, the finite element problem which corresponds to (12) is: find $\tilde u_h \in \widetilde V_h$ such that

$$a_h(\tilde u_h, v_h) = (u_h, v_h), \quad \forall v_h \in \widetilde V_h, \qquad (14)$$

where $\widetilde V_h$ will be specified. Namely,
(i) $\widetilde V_h^{(1)}$ is constructed by the modified Hermite element (Fig. 1). It uses the integral value over any element $K$ as a degree of freedom instead of the value of the shape function at the center of gravity of $K$. This is also a nonconforming finite element.
(ii) $\widetilde V_h^{(2)}$ is a finite element space obtained by using the conforming Bell triangle (Fig. 1) (see also [7]). We point out here that Bell's triangle is optimal among polynomial triangular finite elements of class $C^1$, since $\dim P_K \ge 18$ for such finite elements.
We denote

$$\tilde\lambda_h = \frac{1}{(\tilde u_h, u_h)},$$

where $u_h$ and $\tilde u_h$ are the solutions of (3) and (14), respectively.
Theorem 3. Let the assumptions of Theorem 2 be fulfilled and let us use the finite element subspaces $\widetilde V_h^{(s)}$, $s = 1, 2$. Then the following estimate holds:

$$|\lambda - \tilde\lambda_h| \le C h^{2(s+1)}, \quad s = 1, 2. \qquad (15)$$
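At the algebraic level the postprocessing amounts to one extra linear solve followed by an inner product. The sketch below is an algebraic analogue under assumptions (placeholder matrices A and M standing for stiffness and mass matrices assembled elsewhere), not the authors' code; it mirrors (14) and the definition of $\tilde\lambda_h$ above.

```python
# Sketch: eigenvalue postprocessing lambda_tilde = 1 / (w^T M u_h), with A w = M u_h.
import numpy as np

def postprocess_eigenvalue(A, M, u_h):
    u_h = u_h / np.sqrt(u_h @ M @ u_h)      # normalize ||u_h||_0 = 1 discretely
    w = np.linalg.solve(A, M @ u_h)          # discrete analogue of a_h(w, v) = (u_h, v)
    return 1.0 / (w @ M @ u_h)

# Tiny check on a matrix pair whose exact eigenvalues are known:
A = np.diag([2.0, 5.0])
M = np.eye(2)
u_h = np.array([1.0, 0.05])                  # a slightly polluted first eigenvector
print(postprocess_eigenvalue(A, M, u_h))     # closer to the exact smallest eigenvalue 2.0
```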
Proof. First we express:

$$\frac{1}{\tilde\lambda} - \frac{1}{\tilde\lambda_h} = (\tilde u, u_h) - (\tilde u_h, u_h) = a(\tilde u, \tilde u) - a_h(\tilde u_h, \tilde u_h)$$
$$= a_h(\tilde u - \tilde u_h, \tilde u) + a_h(\tilde u_h, \tilde u) - a_h(\tilde u_h, \tilde u_h) = a_h(\tilde u - \tilde u_h, \tilde u) - a_h(\tilde u - \tilde u_h, \tilde u_h),$$

consequently

$$\frac{1}{\tilde\lambda} - \frac{1}{\tilde\lambda_h} = a_h(\tilde u - \tilde u_h, \tilde u - \tilde u_h).$$

The continuity of the $a_h$-form gives:

$$|\tilde\lambda - \tilde\lambda_h| \le C \|\tilde u - \tilde u_h\|^2_{2,h}.$$

By similar arguments as in Theorem 1 and by standard assumptions on the smoothness of $\tilde u$ we can derive an estimate as in (11) (see also [7]):

$$\|\tilde u - \tilde u_h\|_{2,h} \le C h^{s+1} \|\tilde u\|_{2s+1,\Omega}, \quad s = 1, 2.$$

The superconvergent estimate (15) follows from the last inequality, (13) and the triangle inequality.
4 Numerical Results
Consider the following model problem:

$$\Delta^2 u = \lambda u \ \text{ in } \Omega, \qquad u = \frac{\partial u}{\partial \nu} = 0 \ \text{ on } \partial\Omega,$$

where $\Omega$ is the square domain

$$\Omega:\ -\frac{\pi}{2} < x_i < \frac{\pi}{2}, \quad i = 1, 2.$$
For this problem, the only information known about the exact eigenvalues is their lower and upper bounds obtained by Weinstein and Stenger [11]. In Table 1 the results from numerical experiments for the first four eigenvalues are given. Regardless of the fact that the second and the third eigenvalues are equal, the proposed postprocessing technique is put into effect for both of them. Using a uniform mesh, the domain is divided into $2n^2$ isosceles triangular Z-type elements, so that the mesh parameter is $h = \pi/n$, $n = 4, 6, 8$. As shown in Table 1, the postprocessing takes effect especially on a coarse grid. Postprocessing is implemented using Hermite elements (Fig. 1). It is to be noted here that if one is mainly interested in computing eigenvalues, one can successfully apply Hermite elements, because they completely ensure an improvement of the approximate values. However, concerning eigenfunctions, the use of Bell finite elements for the postprocessing procedure gives not only a better improvement compared to the use of Hermite elements; it also improves the smoothness of the approximate eigenfunctions.

Table 1. Eigenvalues computed by means of Z-type finite elements (FEM) and their improvements as a result of the postprocessing procedure (PP) by means of Hermite finite elements

h          λ1        λ2        λ3        λ4
π/4  FEM   15.0780   67.1107   75.2068   168.1998
     PP    14.8992   63.2144   71.7732   153.0853
π/6  FEM   14.0465   60.3152   65.3925   156.5718
     PP    13.9459   58.7350   63.3821   149.8718
π/8  FEM   13.4721   57.7318   61.3971   152.4870
     PP    13.4152   56.1393   58.9924   148.6162
bounds:
lower      13.2820   55.2400   55.2400   120.0070
upper      13.3842   56.5610   56.5610   124.0740
Acknowledgement This work is partially supported by the Bulgarian Ministry of Science under grant D002–147/2008.
References 1. Andreev, A.B., Racheva, M.R.: Zienkiewicz-type Finite Element Applied to Fourthorder Problems. In: Lirkov, I., Margenov, S., Wa´sniewski, J. (eds.) LSSC 2009. LNCS, vol. 5910, pp. 687–694. Springer, Heidelberg (2010) 2. Andreev, A.B., Racheva, M.R.: On the Postprocessing Technique for Eigenvalue Problems. In: Dimov, I.T., Lirkov, I., Margenov, S., Zlatev, Z. (eds.) NMA 2002. LNCS, vol. 2542, pp. 363–371. Springer, Heidelberg (2003) 3. Andreev, A.B., Racheva, M.R.: Superconvergent Finite Element Postprocessing for Eigenvalue Problems with Nonlocal Boundary Conditions. In: Lirkov, I., Margenov, S., Wa´sniewski, J. (eds.) LSSC 2007. LNCS, vol. 4818, pp. 645–653. Springer, Heidelberg (2008) 4. Andreev, A.B., Lazarov, R.D., Racheva, M.R.: Postprocessing and Higher Order Convergence of the Mixed Finite Element Approximations of Biharmonic Eigenvalue Problems. JCAM 182, 333–349 (2005) 5. Babuska, I., Osborn, J.: Eigenvalue Problems. In: Ciarlet, P.G., Lions, J.L. (eds.) Handbook of Numerical Analysis, vol. II, pp. 641–787. North-Holland, Amsterdam (1991) 6. Bazaley, G.P., Cheung, Y.K., Irons, B.M., Zienkiewicz, O.C.: Triangular Elements in Plate Bending - Conforming and Non-conforming Solutions. In: Proceedings of the Conference on Matrix Methods in Structural Mechanics, pp. 547–576. Wright Patterson A.F. Base, Ohio (1965) 7. Ciarlet, P.: Basic Error Estimates for the FEM, vol. 2, pp. 17–351. Elsevier, Amsterdam (1991) 8. Racheva, M.R., Andreev, A.B.: Superconvergence Postprocessing for Eigenvalues. Comp. Meth. in Appl. Math. 2(2), 171–185 (2002) 9. Raviart, P.A., Thomas, J.M.: Introduction a l’Analyse Numerique des Equations aux Derivees Partielles, Masson Paris (1988) 10. Wang, M., Shi, Z., Xu, J.: A New Class of Zienkiewicz-type Non-conforming Element in Any Dimensions. Numer. Math. 106, 335–347 (2007) 11. Weinstein, A., Stenger, W.: Methods of intermediate problems for eigenvalues, theory and applications. Academic Press, London (1972)
A Two-Grid Method on Layer-Adapted Meshes for a Semilinear 2D Reaction-Diffusion Problem Ivanka T. Angelova and Lubin G. Vulkov Faculty of Natural Science and Education University of Rousse, 8 Studentska str., Rousse 7017, Bulgaria
[email protected],
[email protected]
Abstract. A singularly perturbed semilinear reaction-diffusion equation, posed in the unit square, is discetizied by a two-grid scheme on layer-adapted meshes. In the first step, the nonlinear problem is discretized on coarse grid. In the second step, the problem is discretized on a fine grid and linearized around the interpolation of the computed solution on the first step. We show theoretically and numerically that the global error on Shishkin or Bakhvalov mesh is the same as would have been obtained if the nonlinear problem had been solved directly on the fine grid.
1 Introduction
Reaction-convection-diffusion equations (where reaction or convection terms, but not both, may be absent) appear in various applications [11,12]. Since 2000 there has been a surge of interest in numerical methods for such problems. Several papers prove convergence results that are uniform in the singular parameters. Most of the results are based on the fitted (layer adapted) mesh methods (FMM). The layer-adapted meshes allow resolution of the structure of the layer [10,11,12] and provide uniform convergence in the singular perturbation parameter(s) even when they are combined with classical discretizations [10,11,12]. Besides Bakhvalov meshes (B-mesh) [3] and Shishkin’s piecewise equidistant meshes (S-mesh) [10,11,12], there are grids that are based on equidistribution [10,12], Bakhvalov-Shishkin meshes (B&S-mesh) and Vulanovic’s improved Shishkin mesh [10,11,12]. The choice of the mesh plays a key role for the uniform convergence of the numerical methods for nonlinear problems, see [9,15]. Interest in the construction of high-order approximation to singularly perturbed problems is currently at high level. But such constructions often lead to extension of the stencil or to discretizations that are not inverse monotone [12]. Another way to increase the accuracy of the numerical solution to singularly problems is the use of Richardson extrapolation. However the Richardson procedure requires solution of systems of nonlinear algebraic equations on each of the nested meshes, see [13]. For large nonlinear problems (2D, 3D scalar equations or systems of nonlinear ordinary and partial differential equations) the question for effective solution of the nonlinear algebraic systems of equations that arise I. Lirkov, S. Margenov, and J. Wa´ sniewski (Eds.): LSSC 2009, LNCS 5910, pp. 703–710, 2010. c Springer-Verlag Berlin Heidelberg 2010
after the discretizations is also very important. Two-grid method (TGM) is a discretization technique for nonlinear equations based on two grids of different sizes and in the same time provides fast and economical algorithms for solution oft he discrete nonlinear systems. The idea is to use a coarse-grid space to produce a rough approximation of the solution of nonlinear problems, and then use it as the initial guess for one Newton-like iteration on the fine grid. The method involves a nonlinear solve on the coarse grid of diameter H and a linear solve on the fine grid of diameter h << H. Two-grid method was first introduced in [2] and [18] for nonlinear elliptic equations. Later on, two-grid methods was further investigated by many authors for different problems in physics, mechanics, engineering. In this paper we will consider FMM combined with TGM to solve the model singularly perturbed semilinear elliptic problem: −ε2 u + f (x, y, u) = 0 in Ω = (0, 1) × (0, 1),
(1)
u = g(x, y) on ∂Ω.
(2)
Our main assumption is that $f$ has partial derivative $f_u(x, y, u) > c_0^2 > 0$ for all $(x, y, u) \in \Omega \times \mathbb{R}^1$
(3)
for some constant c0 > 0. Under this condition the reduced problem has a unique solution u0 which is sufficiently smooth in Ω. N. Kopteva considered uniform convergence for (1), (2) under weaker assumptions [12]. Numerical experiments show that our algorithms still work for these more general problems. The paper is organized as follows. In Section 2 we recall some layer-adapted meshes and discuss uniform convergent numerical methods for (1)-(3) in the linear case. In Section 3 we prove uniform convergence for (1)-(3) via a Newton’s linearization procedure. Section 4 provides two-grid algorithms while Section 5 presents results of numerical experiments for the proposed algorithms. In one-dimension two-grid algorithms for singularly perturbed problems was constructed and studied in [1,16,17]. The present results can be extended to other adapted meshes, for example the graded meshes of [5].
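Returning to the two-grid idea described above, the following Python sketch is a one-dimensional analogue under simplifying assumptions (uniform mesh rather than a layer-adapted one, a simple model nonlinearity with $f_u > 0$); it only illustrates the mechanics of a coarse Newton solve, interpolation, and a single linearized fine-grid solve.

```python
# Sketch: 1D analogue of the two-grid method for -eps^2 u'' + f(x,u) = 0, u(0)=u(1)=0.
import numpy as np

eps = 0.1
f = lambda x, u: u + u**3 - 1.0           # model f(x, u) with f_u = 1 + 3u^2 > 0
f_u = lambda x, u: 1.0 + 3.0 * u**2

def residual_and_jacobian(U, x):
    h = x[1] - x[0]
    n = len(x)
    F, J = np.zeros(n), np.zeros((n, n))
    F[0], F[-1] = U[0], U[-1]             # homogeneous Dirichlet data
    J[0, 0] = J[-1, -1] = 1.0
    for i in range(1, n - 1):
        F[i] = -eps**2 * (U[i-1] - 2*U[i] + U[i+1]) / h**2 + f(x[i], U[i])
        J[i, i-1] = J[i, i+1] = -eps**2 / h**2
        J[i, i] = 2 * eps**2 / h**2 + f_u(x[i], U[i])
    return F, J

def newton(x, U, iters):
    for _ in range(iters):
        F, J = residual_and_jacobian(U, x)
        U = U - np.linalg.solve(J, F)
    return U

# Coarse nonlinear solve, interpolation, then one linearized (Newton) step on the fine grid.
xc, xf = np.linspace(0, 1, 17), np.linspace(0, 1, 257)
Uc = newton(xc, np.zeros_like(xc), iters=8)
Uf = newton(xf, np.interp(xf, xc, Uc), iters=1)
print(Uf[len(xf) // 2])
```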
2 Meshes and Linear Problem
On $\bar w$ introduce the mesh $\bar w_h = \bar w_{hx} \times \bar w_{hy}$,

$$\bar w_{hx} = \{x_i,\ 0 \le i \le N_x;\ x_0 = 0,\ x_{N_x} = 1;\ h_{xi} = x_{i+1} - x_i\},$$
$$\bar w_{hy} = \{y_j,\ 0 \le j \le N_y;\ y_0 = 0,\ y_{N_y} = 1;\ h_{yj} = y_{j+1} - y_j\}. \qquad (4)$$
To discretize problem (1)-(2) in the linear case $f(x, y, u) = c(x, y)u - d(x, y)$ we use the standard central difference scheme

$$-\varepsilon^2 (U_{xx} + U_{yy}) + f(x, y, U) = 0, \quad (x, y) \equiv (x_i, y_j) \in w_h, \qquad U = g \ \text{ on the boundary } \partial w_h, \qquad (5)$$
where Ux = Ux,i = (U (xi , yj ) − U (xi−1 , yj ))/hxi , Ux = Ux,i = Ux,i+1 , Uy = Uy,j = (U (xi , yj ) − U (xi , yj−1 ))/hyj , Uy = Uy,j = Uy,j+1 , Ux = Ux ,i = (U (xi+1 , yj ) − U (xi , yj ))/xi , Uy = Uy,j = (U (xi , yj+1 ) − U (xi , yj ))/yj , xi = (hxi + hxi+1 )/2, hx0 = hx1 /2, xNx = hxNx /2, yj = (hyj + hyj+1 )/2, hy0 = hy1 /2, yNy = hyNy /2, Uxx = Uxx,i = (Uxi − Uxi )/xi , Uyy = Uyy,j = (Uy,j − Uy,j )/yj . For the reaction-diffusion problem (1)-(2), a layer-adapted mesh from [11,12] is formed in the following manner. We divide each of the interval w x = [0, 1] and w y = [0, 1] into three parts [0, σx ], [σx , 1 − σx ], [1 − σx , 1] and [0, σy ], [σy , 1 − σy ], [1 − σy , 1], respectively. Assuming that Nx , Ny are divisible by 4, in the parts [0, σx ], [1 − σx , 1] and [0, σy ], [1 − σy , 1] we allocate Nx /2 + 1 and Ny /2 + 1 mesh points, respectively. Points σx , (1 − σx ) and σy , (1 − σy ) correspond to transition to the boundary layers. We consider meshes w hx and w xy which are equidistant in [xNx /4 , x3Nx /4 ] and [yNy /4 , y3Ny /4 ] but graded in [0, xNx /4 ], [x3Nx /4 , 1] and [0, yNy /4 ], [y3Ny /4 , 1]. On [0, xNx /4 ], [x3Nx/4 , 1] and [0, yNy /4 ], [y3Ny /4 , 1] let our mesh be given by a mesh generating function φ with φ(0) = 0 and φ(1/4) = 1 which is supposed to be continuous, monotonically increasing, and piecewise continuously differentiable. Then our mesh is defined by ⎧ ξi = i/Nx , i = 0, . . . , Nx /4; ⎨ σx φ(ξi ), i = Nx /4 + 1, . . . , 3Nx /4 − 1; xi = ihh ⎩ 1 − σx (1 − φ(ξi )), ξi = (i − 3Nx /4)/Nx, i = 3Nx /4 + 1, . . . , Nx , ⎧ ξj = j/Ny , j = 0, . . . , Ny /4; ⎨ σy φ(ξj ), j = Ny /4 + 1, . . . , 3Ny /4 − 1; yj = jhy ⎩ 1 − σy (1 − φ(ξj )), ξj = (j − 3Ny /4)/Ny , j = 3Ny /4 + 1, . . . , Ny , hx = 2(1 − 2σx )Nx−1 , hy = 2(1 − 2σy )Ny−1 . We also assume that φ does not decrease. This condition implies that hxi ≤ hxi+1 , i = 1, . . . , Nx /4 − 1, hxi ≥ hxi+1 , i = 3Nx /4 + 1, . . . , Nx − 1, hyj ≤ hyj+1 , j = 1, . . . , Ny /4 − 1, hyj ≥ hyj+1 , j = 3Ny /4 + 1, . . . , Ny − 1, For S-mesh (see Fig.1 a) we choose the transition points σx , 1 − σx and σy , 1 − σy −1 −1 as in [11] σx = min{4−1 , c−1 , c0 ε ln Ny }. If σx = σy = 0 ε ln Nx }, σy = min{4 −1 −1 1/4 , then Nx , Ny are very small relative to ε. In this case, the difference scheme (5) can be analyzed using standard techniques. We therefore assume −1 that σx = c−1 0 ε ln Nx , σy = c0 ε ln Ny .
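A sketch of generating the piecewise-equidistant S-mesh described above in one coordinate direction (transition point $\sigma = \min\{1/4,\ c_0^{-1}\varepsilon\ln N\}$, $N/4$ subintervals in each layer region and $N/2$ in the interior); this is an illustration, not the authors' code.

```python
# Sketch: 1D Shishkin (S-) mesh on [0, 1].
import numpy as np

def shishkin_mesh(N, eps, c0=1.0):
    sigma = min(0.25, eps * np.log(N) / c0)
    left = np.linspace(0.0, sigma, N // 4 + 1)
    mid = np.linspace(sigma, 1.0 - sigma, N // 2 + 1)
    right = np.linspace(1.0 - sigma, 1.0, N // 4 + 1)
    return np.unique(np.concatenate([left, mid, right]))

x = shishkin_mesh(16, 1e-2)
print(len(x), x[:5])   # N + 1 points, clustered near the boundary layers
```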
Fig. 1. a) S-mesh. b) B-mesh.
Consider the mesh generating function $\varphi$ in the form $\varphi(\xi) = 4\xi$. In this case the meshes $\bar w_{hx}$ and $\bar w_{hy}$ are piecewise equidistant with step sizes

$$N_x^{-1} < h_x < 2N_x^{-1}, \qquad h_x = h_x(\varepsilon) = 4c_0^{-1}\varepsilon N_x^{-1}\ln N_x,$$
$$N_y^{-1} < h_y < 2N_y^{-1}, \qquad h_y = h_y(\varepsilon) = 4c_0^{-1}\varepsilon N_y^{-1}\ln N_y.$$
In the linear case, under some requirements on the input data $c$, $f$ that provide sufficient smoothness of the solution $u$, it was proved in [4] that the difference scheme (5) on the S-mesh converges $\varepsilon$-uniformly to the solution of (1)-(3) and the estimate

$$\|U - u\|_{\bar w_h,\infty} \le C N^{-2}\ln^2 N, \quad N = \min\{N_x, N_y\}, \qquad (6)$$

holds. The constant $C$ is independent of $\varepsilon$ and $N_x$, $N_y$. For the B-mesh (see Fig. 1b) one chooses the transition points $\sigma_x = c_0^{-1}\varepsilon\ln(\varepsilon^{-1})$, $\sigma_y = c_0^{-1}\varepsilon\ln(\varepsilon^{-1})$ and the mesh generating function $\varphi$ in the form $\varphi(\xi) = \ln[1 - 4(1-\varepsilon)\xi]/\ln\varepsilon$. For the difference scheme (5) on the B-mesh the estimate holds [4,12]

$$\|U - u\|_{\bar w_h,\infty} \le C N^{-2}, \quad N = \min\{N_x, N_y\}. \qquad (7)$$

3 Uniform Convergence for the Nonlinear Problem
From the theory of Holder space solutions of nonlinear elliptic problems [7] and results of Han&Kellog [8] for linear problems, we can prove that if f has second order derivatives with respect to u that are functions of C 2,α (Ω) and g ∈ C 4,α (S), where S is any side of the rectangle Ω , then u ∈ C 4,α (Ω). On this base we will develop a constructive proof for uniform convergence of the difference scheme (5) via the Newton’s linearization of problem (1)-(3): −ε2 u(m+1) + fu (x, y, u(m) )u(m+1) = −f (x, y, u(m) ) + fu (x, y, u(m) )u(m) , (8) u(m+1) |∂Ω = g(x, y), m = 0, 1, 2, · · · •
The initial guess $u^{(0)}(x, y)$ one chooses from mathematical or physical considerations, but in the next section it plays a key role in the construction of the two-grid algorithm. It follows from the maximum principle that

$$\|u\|_{L^\infty(\Omega)} \le l = c_0^{-1}\|f(x, y, 0)\|_{L^\infty(\Omega)} + \|g\|_{L^\infty(\partial\Omega)}. \qquad (9)$$
Let us introduce

$$\theta = \max_{(x,y)\in\Omega,\ |\xi|\le l+2\rho} f_{uu}(x, y, \xi).$$
Assume that for the initial guess $u^{(0)}(x, y)$ the estimate

$$\|u^{(0)} - u\|_{L^\infty(\Omega)} \le \rho = \text{const} \qquad (10)$$

holds.
Lemma 1. Suppose that $c_0^{-2}\theta\rho < 1$. Then

$$\|u^{(m)} - u\|_{L^\infty(\Omega)} \le c_0^{2}\theta\,(c_0^{-2}\theta\rho)^{2^m}, \quad m = 0, 1, 2, \ldots \qquad (11)$$
The proof is by induction applying the maximum principle to the boundary value problem for v = u(m+1) − u and it is similar to the corresponding one in the 1D case [1]. Let us consider the finite-difference analogue of the iterative process (8): (m+1)
Lh U (m+1) = −ε2 (Uxx = −f (xi , yj , U
(m)
)
(m+1)
+ Uyy
) + fu (xi , yj , U (m) )U (m+1)
(12)
+ fu (xi , yj , U (m) )U (m) , (m)
U
(x, y) ∈ wh , m = 0, 1, 2, . . . = g, on the boundary ∂wh .
Lemma ρ0 and h0 , independent of ε such that if h = 2. There exist constants max max hxi , max hyj ≤ h0 and ρ ≤ ρ0 , C independent of ε, h, such 1≤i≤Nx
1≤j≤Ny
that m
2 U (m) − u ≤ C[N −2 lnk N + (c−2 ], m = 0, 1, 2, . . . , 0 θρ)
(13)
where k = 2 for S-mesh and k = 0 for B-mesh. The proof is based on introducing of an axillary finite difference process corresponding to (8) and is similar to the corresponding one in the 1D case [1]. Now combining the estimates (6), (7) with Lemmas 1, 2 one can prove the following assertion. Theorem 1. Let u be the solution of problem (1)-(3) u and U be the solution of (9) on B-mh or on S-mh. Then U − u∞ ≤ CN
−2
k
ln N,
k = 0 for B − mesh, k = 2 for S − mesh.
(14)
708
4
I.T. Angelova and L.G. Vulkov
Two-Grid Algorithm
In this section we propose a simple two-grid algorithm based on the estimate (13). If U = U (Nx , Ny ) is the solution of the nonlinear discrete problem (5) we denote its bilinear interpolant
(x, y) = U
Nx ,Ny
Uij φi (x)φj (y),
i,j
where φi (x) is the standard piecewise linear basis function associated with the interval (xi−1 , xi+1 ). If follows from Theorem 1 that
− u∞ ≤ u − uI + CN −2 lnk N, U where uI = uI (x, y) is the interpolant of the continuous solution u. From argu − u∞ ≤ ments in [4,14] we have the global parameter - uniform error bound U CN −2 lnk N . If in the iterative process (8) one takes m = 1 and the initial guess
(x, y), then in (13) we will have u(0) (x, y) = U m
(C0−2 θρ)2 = CN −4 ln2k N.
(15)
Next, if one solves the linearized problem (12) on a finer mesh, for example $N_x^f = N_x^2$, $N_y^f = N_y^2$ (f ≡ fine), then the first term in (13) becomes
$$(N^f)^{-2}\ln^k N^f = 2N^{-4}\ln^{2k} N, \qquad k = 0 \ \text{for B-mesh}, \quad k = 2 \ \text{for S-mesh}. \qquad (16)$$
Therefore, (15) and (16) imply that as a result we obtain an accuracy of up to almost fourth order for the S-mesh and fourth order for the B-mesh, with respect to the coarse grid.

Let us now present our algorithm. We first introduce a fine mesh, for example $w^f_h = w^f_{h_x}\times w^f_{h_y}$, defined as in (4) but with $N_x^f = N_x^2$, $N_y^f = N_y^2$.

1. Solve the discrete problem (5) on the coarse grid $w_h$ and then construct the function $\widetilde U(x,y)$ defined on the domain $\Omega$.
2. Solve the linear discrete problem
$$-\varepsilon^2(U_{xx,i} + U_{yy,j}) + f_u(x_i,y_j,\widetilde U(x_i,y_j))\,U(x_i,y_j) = f_u(x_i,y_j,\widetilde U(x_i,y_j))\,\widetilde U(x_i,y_j) - f(x_i,y_j,\widetilde U(x_i,y_j)),$$
$$i = 1,\ldots,N_x^2-1,\quad j = 1,\ldots,N_y^2-1, \qquad U = g \ \text{on the boundary}\ \partial w^f_h,$$
to find the fine mesh numerical solution $U^f(x,y)$, $(x,y)\in\Omega$.
3. Interpolate $U^f$ to obtain $\widetilde U^f(x,y)$, $(x,y)\in w_h$.
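The three steps above can be illustrated compactly in code for a 1D analogue of the problem. The following sketch rests on simplifying assumptions of our own (homogeneous Dirichlet data, a two-sided-layer S-mesh, a dense linear solver, hypothetical function names such as shishkin_mesh and two_grid) and is not the authors' implementation.

```python
import numpy as np

def shishkin_mesh(N, eps, sigma=2.0):
    """1D piecewise-uniform S-mesh on [0, 1] with boundary layers at both ends."""
    tau = min(0.25, sigma * eps * np.log(N))
    return np.unique(np.concatenate([
        np.linspace(0.0, tau, N // 4 + 1),
        np.linspace(tau, 1.0 - tau, N // 2 + 1),
        np.linspace(1.0 - tau, 1.0, N // 4 + 1)]))

def solve_linearized(x, eps, c, rhs, g0=0.0, g1=0.0):
    """Solve -eps^2 u'' + c(x) u = rhs(x), u(0)=g0, u(1)=g1, with the standard
    three-point difference scheme on the (possibly nonuniform) mesh x."""
    n = len(x)
    A, b = np.zeros((n, n)), np.zeros(n)
    A[0, 0] = A[-1, -1] = 1.0
    b[0], b[-1] = g0, g1
    for i in range(1, n - 1):
        hl, hr = x[i] - x[i - 1], x[i + 1] - x[i]
        A[i, i - 1] = -eps**2 * 2.0 / (hl * (hl + hr))
        A[i, i + 1] = -eps**2 * 2.0 / (hr * (hl + hr))
        A[i, i] = -(A[i, i - 1] + A[i, i + 1]) + c[i]
        b[i] = rhs[i]
    return np.linalg.solve(A, b)

def two_grid(eps, N, f, fu, newton_iters=8):
    # Step 1: Newton iteration (8) for the nonlinear problem on the coarse S-mesh.
    xc = shishkin_mesh(N, eps)
    U = np.zeros_like(xc)
    for _ in range(newton_iters):
        U = solve_linearized(xc, eps, fu(xc, U), -f(xc, U) + fu(xc, U) * U)
    # Step 2: one linearized solve (12) on the fine mesh N^f = N^2,
    # with the interpolant of the coarse solution as initial guess.
    xf = shishkin_mesh(N * N, eps)
    U0 = np.interp(xf, xc, U)
    Uf = solve_linearized(xf, eps, fu(xf, U0), -f(xf, U0) + fu(xf, U0) * U0)
    # Step 3: interpolate the fine-mesh solution back to the coarse mesh.
    return xc, np.interp(xc, xf, Uf), xf, Uf
```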
The next theorem is the main result of the present paper.

Theorem 2. Let the assumptions in Theorem 1 hold. Then for the error of the two-grid method we have
$$\|\widetilde U^f - u\|_{\infty,w_h} \le CN^{-4}\ln^k N, \qquad k = 4 \ \text{for S-mesh}, \quad k = 0 \ \text{for B-mesh}. \qquad (17)$$
5 Numerical Experiments
In this section we present numerical experiments obtained by applying the two-grid algorithm. The test problem and the exact solution are
$$-\varepsilon^2\Delta u + \frac{u-1}{2-u} = f(x,y), \qquad u|_{\partial\Omega} = 0, \quad (x,y)\in\Omega,$$
$$u_\varepsilon(x,y) = \left(1 - \frac{\exp(-x/\varepsilon) + \exp(-(1-x)/\varepsilon)}{1 + \exp(-1/\varepsilon)}\right)\left(1 - \frac{\exp(-y/\varepsilon) + \exp(-(1-y)/\varepsilon)}{1 + \exp(-1/\varepsilon)}\right),$$
and $f(x,y)$ is calculated from the exact solution. The tables below present the errors $E_N = \|U - u\|_\infty$, where $U$ is the numerical solution on a mesh with $N$ mesh steps.

Table 1. The maximum error and the numerical order of convergence for $\varepsilon = 1, 10^{-1}, 10^{-2}, 10^{-3}, 10^{-4}$ for the scheme (11) and the TGA on S- and B-mesh, where $N_x = N_y = N$ and $N_x^f = N_y^f = N^f$

ε 1
10−1
10−2
10−3
10−4
N Nf I st ON II st ON f I st ON II st ON f I st ON II st ON f I st ON II st ON f I st ON II st ON f
4 8 16 64 6,558E-5 1,644E-5 1,9960 1,9990 4,105E-6 2,566E-7 4,0000 4,0001 8,911E-2 3,295E-2 1,4351 1,8486 8,827E-3 1,124E-3 2,9735 3,6891 1,016E-1 6,830E-2 0,5731 0,7524 1,679E-2 3,656E-3 2,1994 2,7158 1,013E-1 6,649E-2 0,6078 0,7145 1,566E-2 2,397E-3 2,7078 2.1115 1,099E-1 6,635E-2 0,7247 0,7157 1,555E-2 2,382E-03 2,7062 2,1026 S-mesh
16 256 4,113E-6
ε 1
1,604E-8 9,150E-3
10−1
8,714E-05 4,055E-2
10−2
5,565E-4 4,052E-2
10−3
5,548E-3 4,049E-2 5,546E-3
10−4
N Nf I st ON II st ON f I st ON II st ON f I st ON II st ON f I st ON II st ON f I st ON II st ON f
4 8 16 64 6,558E-5 1,644E-5 1,9960 1,9990 4,105E-6 2,566E-7 4,0000 4,0001 8,911E-2 2,295E-2 1,9568 1,9001 8,827E-3 1,124E-3 2,9735 3,8649 2,090E-2 5,571E-3 1,9073 2,0056 2,300E-2 2,090E-3 3,4604 3,7377 2,339E-1 5,979E-2 1,9681 2,0082 2,803E-2 1,138E-3 3,9860 3,9967 2,372E-1 5,980E-2 1,9875 2,0062 1,857E-2 1,138E-3 4,0284 4,0008 B-mesh
16 256 4,113E-6 1,604E-8 6,150E-3 7,714E-05 1,388E-3 1,566E-4 1,486E-2 7,129E-5 1,489E-2 7,107E-5
Also, we calculated numerical orders of convergence by the formula $O_N = (\ln E_N - \ln E_{2N})/\ln 2$. Table 1 displays the maximum error $E_N$ and the numerical orders of convergence $O_N$ at the first and second steps of the two-grid algorithm on S- and B-meshes. These results indicate that the uniform order of convergence is in agreement with Theorems 1 and 2.
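For completeness, the order formula above is trivial to evaluate from a set of errors; the helper below is our own illustrative snippet, not part of the paper's code, and reproduces, for example, the first-step ε = 1 orders of Table 1.

```python
import math

def convergence_orders(errors):
    """errors: dict {N: E_N}; returns {N: O_N} with O_N = (ln E_N - ln E_2N) / ln 2."""
    orders = {}
    for N, E in sorted(errors.items()):
        if 2 * N in errors:
            orders[N] = (math.log(E) - math.log(errors[2 * N])) / math.log(2.0)
    return orders

# Example with the first-step S-mesh errors for eps = 1 from Table 1:
print(convergence_orders({4: 6.558e-5, 8: 1.644e-5, 16: 4.113e-6}))  # about {4: 2.00, 8: 2.00}
```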
References

1. Angelova, I.T., Vulkov, L.G.: Comparison of the Two-Grid Method on Different Meshes for a Singularly Perturbed Semilinear Problem. Amer. Inst. of Phys. 1067, 305–312 (2008)
2. Axelsson, O.: On Mesh Independence and Newton Methods. Applications of Math. 4-5, 249–265 (1993)
3. Bakhvalov, N.S.: On the Optimization of Methods for Solving Boundary Value Problems with Boundary Layers. Zh. Vychisl. Math. Fiz. 24, 841–859 (1969) (in Russian)
4. Clavero, C., Gracia, J., O'Riordan, E.: A Parameter Robust Numerical Method for a Two Dimensional Reaction-Diffusion Problem. Math. Comp. 74, 1743–1758 (2005)
5. Duran, A., Lombardi, A.: Finite element approximation of convection-diffusion problems using graded meshes. Appl. Numer. Math. 56, 1314–1325 (2006)
6. Farrell, P.A., Miller, J.J.H., O'Riordan, E., Shishkin, G.I.: On the Non-Existence of ε-Uniform Finite Difference Method on Uniform Meshes for Semilinear Two-Point Boundary Value Problems. Math. Comp. 67(222), 603–617 (1999)
7. Gilbarg, D., Trudinger, N.: Elliptic Partial Differential Equations of Second Order. Springer, Heidelberg (1998)
8. Han, H., Kellogg, R.B.: Differentiability Properties of Solutions of the Equation $-\varepsilon^2\Delta u + ru = f(x,y)$ in a Square. SIAM J. Math. Anal. 21, 394–408 (1990)
9. Herceg, D., Surla, K., Radeka, I., Malicic, I.: Numerical Experiments with Different Schemes for a Singularly Perturbed Problem. Novi Sad J. Math. 31, 93–101 (2001)
10. Linss, T.: Layer-Adapted Meshes for Convection-Diffusion Problems. Habilitation, TU Dresden (2007)
11. Miller, J.J.H., O'Riordan, E., Shishkin, G.I.: Fitted Numerical Methods for Singular Perturbation Problems. World Scientific, Singapore (1996)
12. Roos, H., Stynes, M., Tobiska, L.: Numerical Methods for Singularly Perturbed Differential Equations: Convection-Diffusion and Flow Problems. Springer, Berlin (2008)
13. Shishkin, G.I., Shishkina, L.P.: A High-Order Richardson Method for a Quasilinear Singularly Perturbed Elliptic Reaction-Diffusion Equation. Diff. Eqns. 41(7), 1030–1039 (2005) (in Russian)
14. Stynes, M., O'Riordan, E.: A Uniformly Convergent Galerkin Method on a Shishkin Mesh for a Convection-Diffusion Problem. J. Math. Anal. Applic. 214, 36–54 (1997)
15. Vulanovic, R.: Finite-Difference Methods for a Class of Strongly Nonlinear Singular Perturbation Problems. Num. Math. Theor. Appl. 1(2), 235–244 (2008)
16. Vulkov, L., Zadorin, A.: Two-Grid Interpolation Algorithms for Difference Schemes of Exponential Type for Semilinear Diffusion Convection-Dominated Equations. Amer. Inst. of Phys. 1067, 284–292 (2008)
17. Vulkov, L., Zadorin, A.: A Two-Grid Algorithm for Solution of the Difference Equations of a System of Singularly Perturbed Semilinear Equations. LNCS, vol. 5434, pp. 582–589. Springer, Heidelberg (2009)
18. Xu, J.: A Novel Two-Grid Method for Semilinear Elliptic Equations. SIAM J. Sci. Comput. 15(1), 231–237 (1994)
Comparative Analysis of Solution Methods for Delay Differential Equations in Hematology Gergana Bencheva Institute for Parallel Processing, Bulgarian Academy of Sciences, Acad. G. Bontchev, Bl. 25A, 1113 Sofia, Bulgaria
[email protected]
Abstract. The considered model of blood cell production and regulation under the action of growth factors consists of a system of ordinary differential equations with delays. Six methods for the solution of such systems are experimentally compared. XPPAUT, a software tool that brings together various algorithms and provides means for visualisation of the solution, is used for this purpose. The computing times and the quality of the solution are analyzed with the help of test data for red blood cells.
1 Introduction
Recent achievements in the development of high-performance computer facilities turn computer modelling (CM) into a classical tool for analysis and prediction of various real-life phenomena. Problems in biology and medicine are among the most widespread applications of CM, since the "trial-and-error" approach is not recommended for dealing with questions related to understanding and predicting human physiological processes in health and disease. The investigations in the current paper are inspired by a problem in hematology, where blood cells (BCs) production and regulation should be modelled in relation to blood pathologies. The complex biological process during which the mature BCs evolve from primitive cells in the bone marrow, called hematopoietic pluripotent stem cells (HSCs), is known as hematopoiesis. Each type of BCs is a result of the action of specific proteins, known as Growth Factors or Colony Stimulating Factors (CSF), at specific moments during the hematopoiesis. This is possible due to the high self-renewal and differentiation capacity of the HSCs. The nonlinear system of delay differential equations (DDEs) proposed in [1] is considered as the initial model of hematopoiesis. DDEs provide an important way of describing the time evolution of biological systems whose rate of change also depends on their configuration at previous time instances. The introduction of delays (in this case corresponding to the cell cycle duration) allows one to improve the models by taking into account important aspects and to face more complicated phenomena based on feedback control. The applicability of various solution methods to the considered model is analyzed with the help of XPPAUT – a software tool available from the web page of G. B. Ermentrout [7]. XPPAUT brings together a number of algorithms for
various types of equations, including DDEs, and provides means for visualisation of the solution. The implemented methods for DDEs are divided into groups – explicit/implicit, with fixed/adaptive step, designed for nonstiff/stiff problems. A set of numerical tests for representative test examples are performed and analyzed for solvers from each group in order to tune the software parameters and to compare their performance. The remainder of the paper is organized as follows. Section 2 deals with the mathematical model of hematopoiesis as it is formulated in [1]. The package XPPAUT and the considered methods for numerical solution of DDEs are briefly presented in section 3. Section 4 is devoted to the numerical tests and to the comparative analysis of the methods with respect to both measured cpu time and behaviour of the solution. The latter is investigated for different time steps and its proximity to the steady states is studied.
2 Mathematical Model
In this section we briefly present the system of DDEs proposed in [1] to model the blood cell production and regulation mediated by CSF. In the bone marrow, HSCs are divided into two groups: proliferating cells and nonproliferating (or quiescent) cells. Their populations at time $t$ are denoted by $P(t)\ge 0$ and $Q(t)\ge 0$, respectively. The population of the circulating mature BCs is denoted by $M(t)\ge 0$ and the growth factor concentration is $E(t)\ge 0$. The following system of stiff nonlinear ordinary DDEs has to be solved:
$$\frac{dQ}{dt} = -\delta Q(t) - g(Q(t)) - \beta(Q(t),E(t))Q(t) + 2e^{-\gamma\tau}\beta(Q(t-\tau),E(t-\tau))Q(t-\tau),$$
$$\frac{dM}{dt} = -\mu M(t) + g(Q(t)),$$
$$\frac{dE}{dt} = -kE(t) + f(M(t)), \qquad (1)$$
with appropriate initial conditions for t ∈ [−τ, 0]. The delay τ corresponds to the proliferating phase duration, which is assumed to be constant. The rates at which the proliferating and quiescent cells can die are represented in (1) by the parameters γ and δ respectively. The degradation rates μ and k of the mature BCs and of CSF in the blood are assumed to be positive. Quiescent cells can either be introduced in the proliferating phase with a rate β(Q(t), E(t)) or differentiate in mature BCs with a rate g(Q(t)). The negative feedback control f (M (t)) of the bone marrow production on the CSF production acts by the mean of circulating mature BCs: the more circulating BCs are, the less growth factor is produced. The trivial steady-state of (1) is not a biologically interesting equilibrium since it describes a pathological situation that can only lead to death without appropriate treatment. The existence of nontrivial positive steady-state is ensured by the conditions (see [1] for details):
$$0 < \delta + g'(0) < \beta\Bigl(0,\frac{f(0)}{k}\Bigr), \qquad 0 \le \tau < \tau_{\max} := \frac{1}{\gamma}\,\ln\frac{2\beta\bigl(0,\frac{f(0)}{k}\bigr)}{\delta + g'(0) + \beta\bigl(0,\frac{f(0)}{k}\bigr)}. \qquad (2)$$
The parameters and the coefficient functions involved in the model depend on the particular type of blood cells.
3 Solution Methods and Software Tools
Various numerical methods for solution of systems of ordinary differential equations are developed for both stiff and nonstiff problems, as well as for the case of retarded arguments (see e.g. [4,5,6,2] and references therein). They fall in general into two classes: those which use one starting value at each step (“one-step methods”) and those which are based on several values of the solution (“multistep methods”). Each of the methods in these classes can be either explicit or implicit, and in addition they can be with fixed or adaptive step. When dealing with delays, a special attention should be focused on the locating and including into the mesh of the so called breaking points (or primary discontinuities), at which the solution possesses only a limited number of derivatives, and remains piecewise regular between two consecutive such points. The explicit and implicit Euler’s methods are the most popular one-step methods. The s-stage Runge-Kutta methods are one-step methods, where s auxiliary intermediate approximations are computed to obtain a sufficiently high accuracy for the approximations at the main step points. Rosenbrock methods are Runge-Kutta type methods for stiff ODEs which are linearly implicit, i.e. the systems to be solved are linear even for nonlinear ODEs. This is not the case, e.g. for implicit Euler method, where Newton iterations are additionally applied to handle nonlinear problems. Adams methods are the first representatives of the linear multi-step methods. If they are explicit they are known as AdamsBashforth, and the implicit ones are known as Adams-Moulton methods. The predictor-corrector methods are special multi-step methods in which an explicit linear multi-step method (predictor) is combined with an implicit one (corrector). Nordsieck methods are equivalent to the implicit Adams methods. They provide representation of multi-step methods which allows a convenient way of changing the step size. XPPAUT is “a tool for simulating, animating and analyzing dynamical systems”, created by G. B. Ermentrout and freely available through the webpage [7]. Some of the features of XPPAUT are: a) brings together algorithms for solving various types of equations, including DDEs; b) contains the code for bifurcation program AUTO; c) provides means for visualisation of the solution; d) is portable on various systems and its building requires the standard C compiler and Xlib. More details about the features and the usage of XPPAUT may be found in [3] and in the documentation distributed together with the package. The behaviour of the XPPAUT implementations of six widely used ODE solvers is compared in the current paper, namely: a) backward Euler (BE) – the simplest implicit method; b) classical Runge-Kutta (RK) – four stage method that gives O(h4 ) accuracy; c) Dormand-Prince 5 (DP5) – based on Runge-Kutta method of order 5 with adaptive step and automatic step size control; d) Rosenbrock (RB2) – based on Matlab version of the two stage Rosenbrock algorithms;
e) Adams-Bashforth (AD) – fourth order predictor-corrector; f) CVODE (CV) – uses the C-version of LSODE, written by S. D. Cohen and A. C. Hindmarsh, LLNL; LSODE is an adaptive method based on the Nordsieck representation of the fixed step size Adams formulas. The first four of these methods are one-step methods, while the last two are representatives of the multi-step solvers. The correspondence of the considered methods to the classes of explicit/implicit, fixed/adaptive step or nonstiff/stiff integrators is indicated in Table 1.

Table 1. The methods

Method                    Explicit  Implicit  Fixed Step  Adaptive Step  Stiff
Runge-Kutta (RK)             +                    +
Adams-Bashforth (AD)         +                    +
Dormand-Prince 5 (DP5)       +                                  +
Backward Euler (BE)                     +         +
CVODE (CV)                              +                       +          +
Rosenbrock (RB2)                        +                       +          +

Delay equations are solved in XPPAUT by storing previous data and using cubic polynomial interpolation to obtain the delayed value. All three types of delays – constant, time dependent and state dependent – can be handled with XPPAUT. The author recommends the usage of fixed step integrators for solving DDEs "although adaptive will sometimes work" (see p. 56 of the documentation distributed together with the package). This remark is kept in mind for the analysis of the solvers in the next section.
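The storage-and-interpolation treatment of the delayed terms described above can be imitated outside XPPAUT in a few lines. The following sketch is not XPPAUT code: it uses a fixed-step classical Runge-Kutta step with linear (rather than cubic) interpolation of the stored history, the delayed state is frozen over each step, and all function names are our own.

```python
import numpy as np

def solve_dde_fixed_step(rhs, history, tau, t_end, dt):
    """Integrate y'(t) = rhs(t, y(t), y(t - tau)) on [0, t_end] with a fixed step.

    history(t) must return the state (as an array) for t in [-tau, 0]; rhs must
    return a NumPy array.  Delayed values are read from the stored solution by
    linear interpolation and frozen over each step, for simplicity."""
    n_hist = max(2, int(round(tau / dt)) + 1)
    ts = list(np.linspace(-tau, 0.0, n_hist))
    ys = [np.asarray(history(t), dtype=float) for t in ts]

    def delayed(t):
        data = np.array(ys)
        return np.array([np.interp(t - tau, ts, data[:, i]) for i in range(data.shape[1])])

    t, y = 0.0, ys[-1].copy()
    for _ in range(int(round(t_end / dt))):
        yd = delayed(t)
        k1 = rhs(t, y, yd)
        k2 = rhs(t + dt / 2, y + dt / 2 * k1, yd)
        k3 = rhs(t + dt / 2, y + dt / 2 * k2, yd)
        k4 = rhs(t + dt, y + dt * k3, yd)
        y = y + dt / 6 * (k1 + 2 * k2 + 2 * k3 + k4)
        t += dt
        ts.append(t)
        ys.append(y.copy())
    return np.array(ts), np.array(ys)
```

For the system (1) one would take y = (Q, M, E) and assemble rhs from the coefficient functions of the test example in the next section.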
4 Numerical Tests and Analysis of the Results
The numerical tests are performed with version 5.98 of XPPAUT, which has been slightly modified to allow measurement of the computing time and recompiled under Linux for 64-bit processors. The comparative analysis concerns the test data for erythropoiesis presented in [1].

Test example. The coefficient functions for normalized initial conditions are
$$\beta(E) = \beta_0\frac{E}{1+E},\ \beta_0 > 0; \qquad g(Q) = GQ,\ G > 0; \qquad f(M) = \frac{a}{1+KM^r},\ a,K,r > 0; \qquad \tau\in[0,\tau_{\max}),\ \tau_{\max} = 2.99\ \text{days},$$
and the values of the parameters involved in the model are taken as follows:

Parameter   Value           Range (day^-1)
δ           0.01 day^-1     0 – 0.09
G           0.04 day^-1     0 – 0.09
β0          0.5 day^-1      0.08 – 2.24
γ           0.2 day^-1      0 – 0.9
μ           0.02 day^-1     0.001 – 0.1
k           2.8 day^-1      —
a           6570            —
K           0.0382          —
r           7               —
With the notation $\alpha(\tau) = 2e^{-\gamma\tau} - 1$ for $\tau\in[0,\tau_{\max}]$, the nontrivial steady states of (1) for this choice of $\beta$, $g$ and $f$ are defined by
$$Q^* = \frac{\mu}{G}\,\frac{1}{K^{1/r}}\left(\frac{a\beta_0\alpha(\tau) - (\delta+G)(a+k)}{k(\delta+G)}\right)^{1/r}, \qquad M^* = \frac{G}{\mu}\,Q^*, \qquad E^* = \frac{\delta+G}{\beta_0\alpha(\tau) - (\delta+G)}. \qquad (3)$$
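For the parameter values of the test example above, the quantities in (2) and (3) are easy to evaluate directly; the following small script is our own cross-check (independent of XPPAUT) and reproduces τ_max ≈ 2.99 and the steady states quoted below for τ = 0.5 and τ = 2.9.

```python
import math

# Test-example parameters
delta, G, beta0, gamma, mu, k = 0.01, 0.04, 0.5, 0.2, 0.02, 2.8
a, K, r = 6570.0, 0.0382, 7.0

def beta(E):                       # beta(Q, E) = beta0 * E / (1 + E); no Q-dependence here
    return beta0 * E / (1.0 + E)

# (2): admissible delay range; note f(0) = a and g'(0) = G
tau_max = (1.0 / gamma) * math.log(2.0 * beta(a / k) / (delta + G + beta(a / k)))

def steady_state(tau):             # (3)
    alpha = 2.0 * math.exp(-gamma * tau) - 1.0
    E = (delta + G) / (beta0 * alpha - (delta + G))
    Q = (mu / G) * (1.0 / K) ** (1.0 / r) * \
        ((a * beta0 * alpha - (delta + G) * (a + k)) / (k * (delta + G))) ** (1.0 / r)
    return Q, (G / mu) * Q, E

print(tau_max)             # about 2.99 days
print(steady_state(0.5))   # about (3.196, 6.392, 0.141)
print(steady_state(2.9))   # about (1.916, 3.832, 5.051)
```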
They are stable for τ close to zero and for τ ≥ 2.82 days. A Hopf bifurcation occurs for τ = 1.4, and the steady states become unstable until the stability switch. The software parameters are first tuned for each solver (the default values are not appropriate in some cases), and the results are compared with respect to both the computing time and the behaviour of the solution for three values of the delay τ (0.5, 1.4, 2.9) and for three values of the mesh parameter dt. The latter denotes the step size for the fixed-step integrators and the step at which the solution is stored in the output data file for the adaptive ones.

Time comparison. The measured CPU times in seconds are collected in Table 2. The first column is for the method, where the error tolerance (1.e−6 or 1.e−9) for the adaptive solvers is given in brackets. The rest of the columns are in groups of three, one for each value of τ; the columns in each group are for the values of dt: 0.2, 0.1 and 0.05. As a general rule, when dt is reduced twice, the computation time increases approximately twice for all values of τ. Exceptions are CV and RB2, which is expected because they are implicit methods with adaptive step. The methods with the smallest and largest computing times are AD and CV with tolerance 1.e−9, respectively, which requires special attention to the properties of their solutions. If their solutions are comparable to (for AD) or much better than (for CV) those of the rest of the methods, AD and/or CV could become favourites for reliable future computer simulations.

Table 2. Computing times

Method \ dt        τ = 0.5               τ = 1.4               τ = 2.9
                   0.2    0.1    0.05    0.2    0.1    0.05    0.2    0.1    0.05
RK                 0.05   0.09   0.2     0.04   0.09   0.18    0.04   0.1    0.19
AD                 0.02   0.04   0.1     0.03   0.05   0.1     0.02   0.05   0.1
BE                 0.13   0.25   0.46    0.15   0.30   0.56    0.15   0.26   0.49
DP5 (1.e−6)        0.09   0.17   0.34    0.08   0.18   0.35    0.09   0.17   0.34
DP5 (1.e−9)        0.12   0.19   0.35    0.17   0.25   0.34    0.13   0.18   0.34
CV (1.e−6)         0.12   0.1    0.07    0.28   0.29   0.14    0.07   0.03   0.02
CV (1.e−9)         0.62   1.0    1.66    0.75   1.28   1.96    0.58   0.67   0.96
RB2 (1.e−6)        0.17   0.34   0.65    0.17   0.32   0.65    0.17   0.32   0.64
RB2 (1.e−9)        0.49   0.57   0.8     1.17   1.18   1.27    0.59   0.53   0.78

Solution comparison. Various plots are used to analyze the solvers; some of them are presented below. The first set of plots compares the influence of the time step (see Fig. 1). For the case τ = 0.5, the solutions for dt = 0.1 and dt = 0.05 almost coincide for all solvers, while the one for dt = 0.2 is different from the others. This confirms the expectation that better approximations of the solution are obtained for smaller step sizes.
Fig. 1. Solution of the DDE system (1) with RK, AD, and CV: a) solution with Runge-Kutta for τ = 0.5, dt = 0.05 (cell populations Q and M, and growth factor E); b) solution with Adams-Bashforth for τ = 1.4 (cell population M and growth factor E for dt = 0.2, 0.1, 0.05); c) solution with CVODE for τ = 2.9, tol = 1.e−9 (cell population Q and growth factor E for dt = 0.2, 0.1, 0.05). Plotted over t ∈ [0, 1000].
The three populations for Runge-Kutta with dt = 0.05 are presented in Fig. 1 a). After some oscillations at the beginning, they stabilize around the steady states: Q* = 3.19592, M* = 6.39183, E* = 0.14091. The solution obtained with AD for τ = 1.4 and with CV for τ = 2.9 is presented in Fig. 1 b) and in Fig. 1 c), respectively, for the three values of dt. The periodic solution for τ = 1.4 is due to the Hopf bifurcation. The behaviour for τ = 2.9 is similar to the case of τ = 0.5, but now the steady states are: Q* = 1.91601, M* = 3.83203, E* = 5.05134. The second set of plots aims to compare the behaviour in groups of solvers – explicit/implicit, fixed/adaptive step, for nonstiff/stiff problems. They are prepared for each population separately with the smallest value of dt. Initially, the solution in the whole integration interval t ∈ [0, 1000] is drawn.
Fig. 2. Comparison of the methods in groups: a) comparison of fixed-step (RK, AD, BE) and adaptive-step (DP5, CV, RB2) methods for the growth factor E, τ = 2.9, dt = 0.05, t ∈ [800, 1000], together with the steady state E*; b) comparison of nonstiff (RK, AD, BE, DP5) and stiff (CV, RB2) solvers for the growth factor E, τ = 2.9, dt = 0.05, t ∈ [0, 1000].
In the cases where the figure looks as if the solutions for various methods coincide, the plots are "zoomed" for better analysis and the last 150–200 days are plotted (i.e., t ∈ [800, 1000] or t ∈ [850, 1000]). Zoomed plots of the growth factor for fixed-step (left) and adaptive-step (right) solvers are shown in Fig. 2 a). The additional line in both cases represents the steady state E*. The solutions with the fixed-step integrators RK and BE coincide and are closer to E* than the solution with AD. The adaptive integrators behave in a similar way – the curves for DP5 and RB2 coincide and are closer to E* than the one for CV. The comparisons of nonstiff versus stiff solvers (Fig. 2 b)) as well as of the explicit versus implicit solvers lead to the same conclusion – the one-step methods BE, RK, DP5 and RB2 have similar quality of the solution for this test example, while the methods AD and CV are not that close to the steady states. This observation requires an additional set of plots in order to compare the two representatives of the multi-step methods (Fig. 3). For both values of τ with stable steady states, the solution obtained with AD is between the solutions obtained with CV for tol = 1.e−6 and tol = 1.e−9. For τ = 0.5 the closest to the steady state is the curve for CV, tol = 1.e−9, while for τ = 2.9 this holds for AD. A possible reason for this behaviour of CV is hidden in the remark at the end of Section 3: CV is an adaptive method and the way of handling DDEs in XPPAUT is not appropriate for it. Other techniques for handling DDEs in combination with CV and its latest version SUNDIALS (available for download from [8]) should be analyzed in order to solve problems in hematology efficiently.
Fig. 3. Comparison of AD and CV (tol = 1.e−6 and 1.e−9) for the growth factor E, τ = 0.5 (left) and τ = 2.9 (right), dt = 0.05, t ∈ [850, 1000], together with the steady state E*.
Concluding Remarks. XPPAUT implementations of six methods for DDEs have been experimentally investigated. The solutions with the representatives of one-step methods have comparable quality, while they are slightly worse for the multi-step methods. Taking into account both the computing time and the quality of the solution, one could use XPPAUT implementations of RK, DP5 and RB2 as a toolbox for reliable future computer simulations. The particular choice among them depends on the properties of the problem – RK is the fastest one, DP5 can be applied in the cases where an adaptive step has to be used for a non-stiff problem, while RB2 has better properties for stiff problems. Acknowledgement. Supported in part by the Bulgarian NSF Grant DO02-214.
References 1. Adimy, M., Crauste, F., Ruan, S.: Modelling Hematopoiesis Mediated by Growth Factors with Applications to Periodic Hematological Diseases. Bulletin of Mathematical Biology 68(8), 2321–2351 (2006) 2. Bellen, A., Guglielmi, N., Maset, S.: Numerical methods for delay models in biomathematics. In: Quarteroni, A., Formaggia, L., Veneziani, A. (eds.) Complex Systems in Biomedicine, pp. 147–185. Springer, Milano (2006) 3. Ermentrout, B.: Simulating, analyzing and animating dynamical systems: a guide to XPPAUT for researchers and students. SIAM, Philadelphia (2002) 4. Hairer, E., Norsett, S.P., Wanner, G.: Solving ordinary differential equations I, Nonstiff problems. Springer Series in Computational Mathematics. Springer, Heidelberg (2000) 5. Hairer, E., Wanner, G.: Solving ordinary differential equations II, Stiff and Differential-Algebraic Problems. Springer Series in Computational Mathematics. Springer, Heidelberg (2002) 6. Lambert, J.D.: Numerical methods for ordinary differential systems: the initial value problem. John Wiley & Sons Ltd., Chichester (1991) 7. http://www.math.pitt.edu/~ bard/xpp/xpp.html 8. https://computation.llnl.gov/casc/sundials/main.html
Solving Non-linear Systems of Equations on Graphics Processing Units Lubomir T. Dechevsky, Børre Bang, Joakim Gundersen, Arne Laks˚ a, and Arnt R. Kristoffersen Narvik University College, P.O.B. 385, N-8505 Narvik, Norway
Abstract. In [6], [7] a method for isometric immersion of smooth mvariate n-dimensional vector fields, m = 1, 2, 3, 4, . . ., n = 1, 2, 3, 4, . . . onto fractal curves and surfaces was developed, thereby creating an opportunity to process high-dimensional geometric data on graphics processing units (GPUs) which in this case are used as relatively simple parallel computing architectures (with relatively very low price). For this construction, the structure of multivariate tensor-product orthonormal wavelet bases was of key importance. In the two afore-mentioned papers, one of the topics discussed was the spatial localization of points in high dimensional space and their images in the plane (corresponding to pixels in the images when processed by the GPU). In the present work we show how to compute approximately on the GPU multivariate intersection manifolds, using a new orthonormal-wavelet scaling-function basismatching algorithm which offers considerable simplifications compared to the original proposed in [6], [7]. This algorithm is not as general as the Cantor diagonal type of algorithm considered in [6], [7], but is much simpler to implement and use for global mapping of the wavelet basis indices in one and several dimensions. This new, simpler, approach finds also essential use in the results obtained in [5] which can be considered as continuation of the present paper, extending the range of applications of the present simplified approach to GPU-based computation of multivariate orthogonal wavelet transforms. The new method can be also used to accelerate the initial phase of the so-called Marching Simplex algorithm (or any other ’marching’ algorithm for numerical solution of nonlinear systems of equations).
1 Introduction
* This work was supported in part by the 2006, 2007 and 2008 Annual Research Grants of the priority R&D Group for Mathematical Modeling, Numerical Simulation and Computer Visualization at Narvik University College, Norway.

In recent years, the performance of graphics hardware has increased more rapidly than that of central processing units (CPUs). CPU designs are optimized for high performance on sequential code, and it is becoming increasingly difficult to use additional transistors to improve performance on this code [12]. One of the ways to address the persistent need of ever-increasing computational power is, of
course, the increasing use of multiprocessor architectures, ranging from petaflop supercomputers to large-scale heterogeneous networks and to individual multicore personal computers. This approach certainly “does the job”, but it tends to be rather expensive; moreover, the necessity to balance the load of all CPUs in the multiprocessor system throughout the whole time of the computation leads to severe complications in the parallelization of sequential algorithms. On the other hand, commodity graphic processing units (GPUs) are evolving rapidly in terms of processing speed, memory size, and most significantly, programmability. During the last decade the performance of the GPUs has doubled every 9 months, [14]. In addition to the improvement in performance, the features have increased drastically, enhancing the flexibility and programmable functionality. This development has largely been driven by the mass-market demand for faster and more realistic computer games and multi-media entertainment. In the last years graphics hardware design has been particularly sensitive to memory bandwidth problems due to a rapidly increasing data transfer volume. As a result, modern GPUs can access and transfer large data blocks tremendously faster than CPUs, [13]. Moreover, this access and transfer is parallel, thus turning the GPU into an increasingly advanced parallel computing architecture (which is comparatively very cheap and quite affordable for the average user!). Most scientific applications have to handle large amounts of data and thus they suffer from both limited main memory bandwidth and the obsolete repeated command transfer, when the same operation has to be performed on each component of the data block. The GPU will do this operation simultaneously on the entire data block. Before the age of GPU-programming (also referred to as generalpurpose programming on GPUs, or GPGPU programming, see, e.g., [15]), only a minor part of the GPU’s capacity was in use, while now GPU-programming exploits this capacity to its very limits, and provides a framework which is open to dynamical upgrades of the features of the GPU. Let Vj−1 = Vj ⊕ Wj , j ∈ Z, be the nested V-subspaces and W-spaces of a multi-resolution analysis in L2 (Rn , Rm ) (see, e.g., [1], [11]). In [6], [7] we studied an algorithm for isometric mapping between smooth n-variate m-dimensional vector fields and fractal curves and surfaces, by using orthonormal wavelet bases. This algorithm matched the orthonormal bases of scaling-functions (the bases of the “V-spaces” of multiresolution analyses). Here we propose a new alternative to the Cantor diagonal type of algorithm of [6], [7], for matching the “V -space” bases. The new algorithm is applicable only when the definition domain of the n-variate m-dimensional vector fields in the setting of [6], [7] is a bounded hyper-cube (hyper-rectangle) in Rn , but it is much simpler than the general Cantor diagonal type of algorithm. In the present paper we shall consider applications of this new approach for GPU-based solving of nonlinear equations, not only in the case when this system has a unique solution, but also in the much more general situation when the solution is a d-dimensional manifold immersed in Rm , d = 0, 1, . . . , min{m, n} (here the notation for m and n has the same meaning as in [6], [7]). As a continuation of the present study, in [5] we shall propose a new algorithm which matches the orthonormal bases of
(mother) wavelets (the “W-spaces” of multiresolution analyses) with application to GPU-based computing of multivariate orthogonal wavelet transforms. This work, together with [6], [7] and [5], is part of the research of the R&D Group for Mathematical Modeling, Numerical Simulation and Computer Visualization at Narvik University College within two consecutive Strategic Projects of the Norwegian Research Council – the completed project ’GPGPU – Graphics Hardware as a High-end Computational Resource’ (2004-2007), and the ongoing project ’Heterogeneous Computing’ (2008-2010).
2 Preliminaries
A well-known property of Hilbert spaces with equal dimension/cardinality of the basis is that all such spaces are isometric, i.e., for any two such Hilbert spaces $H_1$ and $H_2$ there exists a linear bijective operator $T: H_1\to H_2$ with $\langle Tx,Tx\rangle_{H_2} = \langle x,x\rangle_{H_1}$ for every $x\in H_1$, where $\langle\cdot,\cdot\rangle_{H_i}$ is the respective inner product (see, e.g., [9]). In this case the inverse operator $T^{-1}$ exists, and $T^{-1} = T^*$, where $T^*$ is the Hilbert adjoint of $T$; $T^*$ is an isometry between $H_2$ and $H_1$. This is the fact which permits the study of Hilbert spaces to be reduced to the study of only their canonical representatives: $l_2$ or $L_2$ with the respective cardinality of the basis. All spaces in consideration in this study are Hilbert spaces. When $A$ is an orthogonal operator (isometry) acting between finite-dimensional Hilbert spaces $H_1$ and $H_2$ with equal dimensions $\dim H_1 = \dim H_2 = \mu < \infty$, then $A$ can be represented by a $\mu\times\mu$ orthogonal matrix $\mathbf A$. When appropriate, here and in [5] we shall identify a linear operator with its representation matrix (and sometimes we shall need to distinguish between them).

Let $\vec f\in V_{j_1}$ be compactly supported (to ensure finiteness of the sums in $k$). Then the following representation holds:
$$\vec f(\vec x) = \begin{pmatrix} f_1(x_1,\ldots,x_n)\\ \vdots \\ f_m(x_1,\ldots,x_n) \end{pmatrix} \qquad (1)$$
$$= \sum_{k_1=-\infty}^{\infty}\cdots\sum_{k_n=-\infty}^{\infty} \begin{pmatrix} \alpha_{1,j_1,k_1,\ldots,k_n}\\ \vdots \\ \alpha_{m,j_1,k_1,\ldots,k_n} \end{pmatrix} \varphi^{[0]}_{j_1}(x_1)\cdots\varphi^{[0]}_{j_1}(x_n) \qquad (2)$$
$$= \sum_{\vec k\in\mathbb Z^n} \vec\alpha_{j_1\vec k}\, \varphi^{[0]}_{j_1\vec k}(x_1,\ldots,x_n) \qquad (3)$$
$$= \sum_{\vec k\in\mathbb Z^n} \vec\alpha_{j_0\vec k}\, \varphi^{[0]}_{j_0\vec k}(x_1,\ldots,x_n) + \sum_{l=1}^{2^n-1}\sum_{j=j_0}^{j_1-1}\sum_{\vec k_j\in\mathbb Z^n} \vec\beta^{[l]}_{j\vec k_j}\, \psi^{[l]}_{j\vec k_j}(x_1,\ldots,x_n), \qquad (4)$$
and if, in particular, $n = m = 1$, then $g\in V_{\tilde j_1}$ is a functional curve, with
$$g(t) = \sum_{k=-\infty}^{\infty} \tilde\alpha_{\tilde j_1 k}\,\tilde\varphi_{\tilde j_1 k}(t) \qquad (5)$$
$$= \sum_{k\in\mathbb Z} \tilde\alpha_{\tilde j_0 k}\,\tilde\varphi_{\tilde j_0 k}(t) + \sum_{j=\tilde j_0}^{\tilde j_1-1}\sum_{k_j\in\mathbb Z} \tilde\beta_{jk_j}\,\tilde\psi_{jk_j}(t). \qquad (6)$$
In (4), (6) $j_1$ and $\tilde j_1$ are independent and $j_0\le j_1$, $\tilde j_0\le\tilde j_1$; the scaling functions $\varphi^{[0]}_{j\vec k}$ and wavelets $\psi^{[l]}_{j\vec k}$ belong to an orthonormal wavelet basis of $L_2(\mathbb R^n)$; the scaling functions in (2), (3), taken at the finest level $j_1$, may belong to a (generally different) orthonormal wavelet basis of $L_2(\mathbb R^n)$; $\tilde\varphi_{jk}$, $\tilde\psi_{jk}$ belong to an orthonormal wavelet basis of $L_2(\mathbb R)$. Here high-dimensional orthonormal wavelet bases are of tensor-product type (see, e.g., [1], [4]). Typically, in the default case the respective bases coincide, namely the basis in (2), (3) coincides with $\varphi^{[0]}_{j\vec k}$, and $\tilde\varphi_{jk} = \varphi_{jk}$, $\tilde\psi_{jk} = \psi_{jk}$ for all $j$, $k$ and $\vec k$, where $\varphi_{jk}$ and $\psi_{jk}$ are the univariate orthonormal scaling functions and wavelets generating the tensor-product basis functions $\varphi^{[0]}_{j\vec k}$, $\psi^{[l]}_{j\vec k}$. In the sequel, we shall always consider only compactly supported (multi)wavelet bases. Biorthonormal (in particular, orthonormal) compactly supported wavelet bases with sufficient regularity are unconditional Riesz bases in a broad variety of function spaces, and induce equivalent weighted $l_2$ wavelet-coefficient sequence norms in these spaces (see [1] and [4]). The norms used in deriving the isometric conversion results in [6], [7] and in the present paper are
$$\bigl\|\vec f\,\bigr\|^2_{L_2(\mathbb R^n,\mathbb R^m)} = \sum_{k_1=-\infty}^{\infty}\cdots\sum_{k_n=-\infty}^{\infty}\bigl\|\vec\alpha_{j_1 k_1\cdots k_n}\bigr\|^2_{\mathbb R^m} \qquad (7)$$
$$= \sum_{\vec k\in\mathbb Z^n}\bigl\|\vec\alpha_{j_0\vec k}\bigr\|^2_{\mathbb R^m} + \sum_{j=j_0}^{j_1-1}\sum_{\vec k_j\in\mathbb Z^n}\sum_{l=1}^{2^n-1}\bigl\|\vec\beta^{[l]}_{j\vec k_j}\bigr\|^2_{\mathbb R^m}, \qquad (8)$$
where $\|\cdot\|_{\mathbb R^m}$ is the usual Euclidean Hilbert norm in $\mathbb R^m$, for compactly supported $\vec f\in V_{j_1}\subset L_2(\mathbb R^n,\mathbb R^m)$, and
$$\|g\|^2_{L_2(\mathbb R^1,\mathbb R^1)} = \sum_{k=-\infty}^{\infty}\tilde\alpha^2_{\tilde j_1 k} \qquad (9)$$
$$= \sum_{k\in\mathbb Z}\tilde\alpha^2_{\tilde j_0 k} + \sum_{j=\tilde j_0}^{\tilde j_1-1}\sum_{k_j\in\mathbb Z}\tilde\beta^2_{jk_j}, \qquad (10)$$
for compactly supported $g\in V_{\tilde j_1}\subset L_2(\mathbb R,\mathbb R)$.
3 Main Results

3.1 Isometric Conversion between Dimension and Resolution: A Simpler Approach
The concept of conversion between dimension and resolution, proposed in [6], [7], is based on the isometry between Hilbert spaces spanned over orthonormal
wavelet bases with a higher number of variables and lower resolution level, and ones spanned over orthonormal wavelet bases with a lower number of variables and higher resolution level, so that the dimensions of the respective Hilbert spaces are equal. For functions defined on unbounded domains, e.g., on the whole $\mathbb R^n$, the number of nonzero scaling coefficients at a given resolution level depends on the concrete choice of the function, and for a compactly supported function is finite, but may be arbitrarily large, depending on the size of the support. Hence, in the general problem setting of [6], [7], the algorithm for matching of the orthonormal bases had to be of Cantor diagonal type. The complexity of the resulting basis-matching algorithm in [6], [7] is, however, rather high. Here we propose a different new algorithm, valid only for the particular (but practically most important) case of a bounded hyper-rectangle in $\mathbb R^n$. First of all, without loss of generality, two reductions can be made: the hyper-rectangle can be rescaled to the "unit" hyper-cube $[-1,1]^n\subset\mathbb R^n$, and the support of the univariate scaling function generating the multivariate tensor-product wavelet basis (see [1], [4], [11]) can be rescaled to $[-1,1]$. We shall also make the natural assumption that the scaling-function basis is boundary-corrected [2] or appropriately periodized, with period 2 [3]. Under these assumptions, the range of the indices $k_i$, $i = 1,\ldots,n$, in (2) and $k$ in (5) will be reduced to $k_i = 0,1,\ldots,2^{j_1}-1$ and $k = 0,1,\ldots,2^{\tilde j_1}-1$, respectively. Furthermore, according to the results in [6], [7], the spans (3) and (5), endowed with the respective Hilbert norms (7) and (9), will be isometric if, and only if, their dimensions are the same, i.e., when $\tilde j_1 = nj_1$. Denoting $\mu = j_1$, we define an orthogonal operator $A_{n,\mu}$ (with matrix $\mathbf A_{n,\mu}$) acting isometrically between the two Hilbert spaces by establishing a 1-1 correspondence between the elements of the bases, as follows. The mapping $A_{n,\mu}: (k_1,\ldots,k_n)\in\{0,1,\ldots,2^\mu-1\}^n \to \nu\in\{0,1,\ldots,2^{\mu n}-1\}$ is defined via the $2^\mu$-adic representation of integer numbers,
$$\nu = \sum_{i=1}^{n} k_i\,2^{\mu(n-i)}. \qquad (11)$$
Its inverse (and, in view of its orthogonality, also its adjoint) $A^T_{n,\mu}: \nu\in\{0,1,\ldots,2^{\mu n}-1\} \to (k_1,\ldots,k_n)\in\{0,1,\ldots,2^{\mu}-1\}^n$ is given by the Euclid-type algorithm with consecutive steps
$$\Bigl\lfloor\frac{\nu}{2^{\mu(n-1)}}\Bigr\rfloor = k_1,\ \ \nu - k_1 2^{\mu(n-1)} = \nu_1, \qquad \Bigl\lfloor\frac{\nu_1}{2^{\mu(n-2)}}\Bigr\rfloor = k_2,\ \ \nu_1 - k_2 2^{\mu(n-2)} = \nu_2, \qquad \ldots, \qquad \Bigl\lfloor\frac{\nu_{n-1}}{2^{\mu(n-n)}}\Bigr\rfloor = k_n,\ \ \nu_{n-1} - k_n 2^{0} = 0. \qquad (12)$$
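In implementation terms, (11) and (12) are simply base-2^μ packing and unpacking of the multi-index; a minimal sketch (with function names of our own choosing) is:

```python
def pack_index(k, mu):
    """(11): map (k_1, ..., k_n), each in {0, ..., 2**mu - 1}, to nu in {0, ..., 2**(mu*n) - 1}."""
    nu = 0
    for ki in k:                    # nu = sum_i k_i * 2**(mu*(n-i))
        nu = nu * 2**mu + ki
    return nu

def unpack_index(nu, n, mu):
    """(12): Euclid-type inverse, recovering (k_1, ..., k_n) from nu."""
    k = [0] * n
    for i in range(n):              # peel off the most significant base-2**mu digit first
        k[i], nu = divmod(nu, 2**(mu * (n - 1 - i)))
    return tuple(k)

# round-trip check
assert unpack_index(pack_index((3, 0, 5), mu=3), n=3, mu=3) == (3, 0, 5)
```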
Clearly this new construction cannot be applied in the most general case considered in [6], [7], while the Cantor diagonal type of basis matching proposed in [6], [7] will work, at least in principle, whenever the sums in (2),(3),(5) and the
respective sums in (7), (9) are all finite. It is also clear, however, that the new construction proposed here is much simpler and geometrically more straightforward. This is confirmed also by a comparison between the planar images of the implicit manifolds which are solutions of the model systems of nonlinear equations provided below, and respective planar images of the same or similar manifolds in parametric form obtained by the Cantor diagonal type basis matching in [6], [7].

3.2 Computing the Solutions of Nonlinear Operator Equations
The general idea how to use isometric conversion between dimension and resolution for solving nonlinear operator equations was already outlined in the concluding subsection 2.8 of [6] and section 3 of [7] but, due to the complexity of the Cantor diagonal basis-matching algorithm, in [6], [7] it was applied only for the representation of parametric manifolds, and not for manifolds defined implicitly by systems of equations. With $A_{n,\mu}$ and $A^{-1}_{n,\mu}$ defined via (11) and (12), respectively, it is convenient to apply the ideas of [6], [7] to both parametric and implicit manifolds. Here we shall consider the latter case in detail. In this case, the main steps of the solution of a given nonlinear system of equations are as follows.

1. Reduce the system of equations to the canonical operator form
$$\vec f_m(\vec x_n) = \vec O_m, \qquad \vec x_n\in\Omega\subset\mathbb R^n \qquad (13)$$
($m$ equations with $n$ unknowns, homogeneous RHS and vector-valued LHS with coordinates $(f_{i,m}(\vec x_n))_{i=1}^m$).
2. Scale the domain $\Omega\subset\mathbb R^n$ to $\tilde\Omega\subset[-1,1]^n$, so that at least one point of the boundary of $\tilde\Omega$ belongs also to the boundary of $[-1,1]^n$ in $\mathbb R^n$.
3. Scale the range $\vec f_m(\Omega)$ to $\vec{\tilde f}_m(\tilde\Omega)\subset[-1,1]^m$ (for example, by multiplying the $i$-th equation in the canonical form (13) by an appropriately chosen factor $c_i > 0$, $i = 1,\ldots,m$). (Note that $\vec O_m\in\vec f_m(\Omega)$ and $\vec O_m\in\vec{\tilde f}_m(\tilde\Omega)$ is a necessary condition for the system of equations to have a solution.)
4. Depending on the available greyscale resolution, the nature of $\vec f$ and the context of the problem, divide $[-1,1]$ into $M$ non-overlapping intervals, with $M$ being the number of available greyscale levels, and assign 1-1 to each consecutive interval the respective consecutive greyscale level. In this construction, 0 (zero) should be an interior point of "its" interval (and the size of "its" interval will determine the resolution of the method, so in practice this size will be tuned depending on the pixel size).
5. Expand $\vec{\tilde f}_m(\vec{\tilde x}_n)$ in the $n$-variate expansion (2), (3). (The resolution level $j_1$ depends on $n$ and the pixel resolution.)
6. Apply the direct mapping (11) to transform $\tilde f_{i,m}(\vec{\tilde x}_n)$ into a functional curve $\tilde g_{i,1}(t_i)$, $t_i\in[-1,1]$, $i = 1,\ldots,m$, where $\tilde g_{i,1}$ is given via (5).
7. Apply the inverse mapping (12) and the scaling in item 4 to transform the graph of $\tilde g_{i,1}(t_i)$ into the $i$-th planar greyscale pixel image, $i = 1,\ldots,m$.
8. For the $i$-th greyscale pixel image, identify the pixels "having the greyscale level of 0", mark these pixels as "black" and the remaining pixels as "white", $i = 1,\ldots,m$.
9. Intersect the $i$-th sets of "black" pixels over all $i = 1,\ldots,m$. The intersection of all $m$ sets is a "black pixel" "point cloud" in the plane (or, eventually, an empty set).
10. All mappings invoked in items 1–7 were invertible. Now invert all these mappings, compose them in reverse order, and apply the method of pointwise localization proposed in [6], [7] to transform the planar point cloud in item 9 into a respective point cloud in $\Omega\subset\mathbb R^n$. This point cloud in $\mathbb R^n$ is accepted as the numerical solution of the nonlinear system of equations (operator equation) at the given resolution level.
11. Depending on the context of the problem, the point cloud obtained in item 10 can be processed further in one of several ways (see the next section for further details).
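Steps 4 and 8–9 amount to quantizing each scaled field into M greyscale levels and intersecting the zero-level ("black") pixel sets; a schematic CPU-side NumPy version (only an illustration of the idea, not the GPU implementation used in practice) could read:

```python
import numpy as np

def grey_level(values, M=256):
    """Step 4: quantize values in [-1, 1] into M levels, with 0 interior to its level."""
    lev = np.floor((values + 1.0) * 0.5 * M + 0.5).astype(int)
    return np.clip(lev, 0, M - 1)

def black_mask(values, M=256):
    """Step 8: pixels whose greyscale level equals the level containing 0."""
    return grey_level(values, M) == grey_level(np.zeros(1), M)[0]

def intersect_black(fields, M=256):
    """Step 9: intersect the 'black' pixel sets of the m scaled fields."""
    cloud = black_mask(fields[0], M)
    for f in fields[1:]:
        cloud &= black_mask(f, M)
    return cloud  # boolean 'point cloud' of candidate solution pixels
```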
4 Concluding Remarks and Simple Model Examples
Case d = 0 in item 11 in subsection 3.2. In the case when the dimension d of the solution manifold is equal to 0 (i.e., there is no solution, or there is a unique solution, or there is a finite number of isolated solutions in Ω ⊂ Rn ) the method will provide localization of the solutions up to tolerance which depends on the pixel resolution and the greyscale resolution of the GPU and on the number of variables n; it does not depend on the number of equations m = 1, . . . , n. For LHS in (13) which are not spatially inhomogeneous, efficient (although rough) localization of solutions can still be achieved for n up to 10-20, but for more spatially inhomogeneous LHS this range rapidly drops to 3-8. Nevertheless, the latter range is sufficient at least for intersection problems in computer aided geometric design where n is typically 2, 3, or 4. Case d > 0 in item 11 in subsection 3.2. In the case when d > 0 (d = 1 for curves in Rn , d = 2 for surfaces in Rn , ... , d = n − 1 for hypersurfaces in Rn ) the initial idea proposed in [6], [7] was to generate topology in the intersection manifold starting from a sufficiently dense wireframe in [−1, 1]n , however, due to the very different nature of the planar image that a smooth curve on the manifold may have (ranging from smooth planar curve to area-filling fractal), this approach was later abandoned. Instead, we found out that it is very efficient to use the present algorithm described in items 1-10 as a replacement of the first several local-refinement iterations of the Marching-Simplex algorithm [8], [10]. The criterion for detection of singularities used in the Marching-Simplex algorithm is based on the Morse lemma and computes locally ranks of matrices; using the present algorithm for localization of the solution and keeping the use of the Marching-Simplex algorithm as the end solution-refining algorithm leads to a considerable speed-up. Moreover, for values of n in the range 3-6, the resolution of the GPU is sufficient to achieve reasonable error tolerance in most cases, and for these cases the resulting point cloud in item 10 of subsection 3.2 can be used
Fig. 1. Example 1 (see section 4)
for the design of a stopping criterion for the local-refinement iterations of the Marching-Simplex algorithm which is simpler and computationally cheaper to verify than the general stopping criterion given in [8]. Future work on this topic includes efficient software implementation of the present algorithm (see items 1-10 of subsection 3.2) and its combination with the Marching-Simplex algorithm for values of n : 3 ≤ n ≤ 6, as well as comprehensive testing versus the “pure” standalone Marching-Simplex algorithm. Another promising direction of research is the use of the new basis-matching algorithm (11), (12), for GPU-based computation of d-variate orthogonal wavelet transforms, d = 3, 4, . . . (see [5]). It should be noticed that the representation of a GPU-computing architecture as a planar array of pixels with greyscale value is, of course, a gross oversimplification. It is possible to enrich the model by
Fig. 2. Example 2 (see section 4)
considering RGB colours, α-values, number of pipelines, etc., but these additional degrees of freedom only increase the dimensionality of the computing array architecture, thereby improving the maximal possible resolution of the computing architecture. The idea of the algorithm considered in section 3 remains essentially the same. In conclusion of this exposition, let us consider two model examples.

Example 1. $m = 1$, $n = 4$. Consider the 4-dimensional spherical hypersurface with center at $(0,0,0,0)$ and radius 1, given by the implicit equation
$$x^2 + y^2 + z^2 + w^2 - 1 = 0, \qquad (14)$$
the 4-dimensional ellipsoid hypersurface with implicit equation
$$x^2 + 2y^2 + 3z^2 + 4w^2 - 1 = 0, \qquad (15)$$
and the 4-dimensional hyperboloid hypersurface with implicit equation
$$x^2 + y^2 - z^2 - w^2 - 1 = 0. \qquad (16)$$
In Figure 1a, b, c, respectively, are given (with the same error tolerance) the graphs of the planar point clouds of item 9 in subsection 3.2 for each one of the three manifolds, including also respective zooms into these planar graphs. (For $n > 4$ a hierarchy of at least $n - 3$ consecutive zooms into each of the graphs would be needed to reveal the ellipticity, hyperbolicity, etc., with respect to each coordinate.)

Example 2. $m = 2$, $n = 4$. Given is a nonlinear system of 2 equations, the first one being (14) and the second being the implicit equation of 2 parallel hyperplanes in 4D-space,
$$2w^2 - 1 = 0. \qquad (17)$$
The solution is a surface in 4D-space, the points of which have constant $w$-component ($= \pm 1/\sqrt 2$), and its projection onto the 3D-subspace $Oxyz$ is the sphere with implicit equation
$$x^2 + y^2 + z^2 = 1/2. \qquad (18)$$
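Indeed, substituting w = ±1/√2 into (14) leaves exactly x² + y² + z² = 1/2, i.e., the sphere (18); a purely illustrative numerical check of this is:

```python
import numpy as np

rng = np.random.default_rng(2)
# random points on the sphere x^2 + y^2 + z^2 = 1/2, with w = +1/sqrt(2)
p = rng.standard_normal((1000, 3))
p *= np.sqrt(0.5) / np.linalg.norm(p, axis=1, keepdims=True)
w = np.full((1000, 1), 1.0 / np.sqrt(2.0))
x, y, z = p[:, :1], p[:, 1:2], p[:, 2:3]
assert np.allclose(x**2 + y**2 + z**2 + w**2 - 1.0, 0.0)   # equation (14)
assert np.allclose(2.0 * w**2 - 1.0, 0.0)                  # equation (17)
```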
In Figure 2, subfigure 1a, is given the superposition of the point clouds of item 9 of subsection 3.2 for the two hypersurfaces given by (14) and (17), respectively. In subfigure 1b is given a 3D-space projection of the resulting point cloud in item 10 of subsection 3.2, and this 3D-space point cloud is compared to the sphere (18) corresponding to the exact solution. It is seen that the distance between the point cloud and the sphere (i.e., the error) is relatively large. In Figure 2, subfigures 2a, 2b, are given the respective graphical results with higher resolution. In subfigure 2b it is seen that the error has considerably decreased.
References 1. Cohen, A.: Wavelet methods in numerical analysis. In: Ciarlet, P.G., Lions, J.L. (eds.) Handbook of Numerical Analysis, vol. VII, pp. 417–712. Elsevier, Amsterdam (2000) 2. Cohen, A., Daubechies, I., Vial, P.: Wavelets and fast wavelet transforms on the interval. Appl. Comput. Harmon. Anal. 1, 54–81 (2007) 3. Dahmen, W.: Wavelet and multiscale methods for operator equations. Acta Numerica, 55–228 (1997) 4. Dechevsky, L.T.: Atomic decomposition of function spaces and fractional integral and differential operators. In: Rusev, P., Dimovski, I., Kiryakova, V. (eds.) Transform Methods and Special Functions, part A (1999); Fractional Calculus & Applied Analysis, vol. 2(4), pp. 367–381 (1999) 5. Dechevsky, L.T., Gundersen, J., Bang, B.: Computing n-variate Orthogonal Discrete Wavelet Transforms on Graphics Processing Units. In: Lirkov, I., Margenov, S., Wa´sniewski, J. (eds.) LSSC 2009. LNCS, vol. 5910, pp. 730–737. Springer, Heidelberg (2010)
6. Dechevsky, L.T., Gundersen, J.: Isometric Conversion Between Dimension and Resolution. In: Dæhlen, M., Mørken, K., Schumaker, L. (eds.) 6th International Conference on Mathematical Methods for Curves and Surfaces, pp. 103–114. Nashboro Press, Tromsø 2004 (2005) 7. Dechevsky, L.T., Gundersen, J., Kristoffersen, A.R.: Wavelet-based isometric conversion between dimension and resolution and some of its applications. In: Proceedings of SPIE: Wavelet applications in industrial processing V, Boston, Massachusetts, USA, vol. 6763 (2007) 8. Dechevsky, L.T., Kristoffersen, A.R., Gundersen, J., Laks˚ a, A., Bang, B., Dokken, T.: A ‘Marching Simplex’ algorithm for approximate solution of non-linear systems of equations and applications to branching of solutions and intersection problems. Int. J. of Pure and Appl. Math. 33(3), 407–431 (2006) 9. Dunford, N., Schwartz, J.T.: Linear Operators. General Theory, vol. 1. Wiley, Chichester (1988) 10. Gundersen, J., Kristoffersen, A.R., Dechevsky, L.T.: Comparing Between the Marching-Cube and the Marching Simplex Methods. In: Lirkov, I., Margenov, S., Wa´sniewski, J. (eds.) LSSC 2009. LNCS, vol. 5910, pp. 756–764. Springer, Heidelberg (2010) 11. Mallat, S.: A Wavelet Tour of Signal Processing, 2nd edn. Acad. Press, New York (1999) 12. Purcell, T.J., Buck, I., Mark, W.R., Hanrahan, P.: Ray Tracing on Programmable Graphics Hardware. ACM Transactions on Graphics 21(3), 703–712 (2002); Proceedings of ACM SIGGRAPH 2002, ISSN 0730-0301 13. Rumpf, M., Strzodka, R.: Using Graphics Cards for Quantized FEM Computations. In: Proceedings VIIP 2001, pp. 193–202 (2001) 14. Rumpf, M., Strzodka, R.: Numerical Solution of Partial Differential Equations on Parallel Computers. In: Graphics Processor Units: New Prospects for Parallel Computing. Springer, Heidelberg (2004) 15. General-Purpose Computation Using Graphics Hardware, http://www.gpgpu.org/
Computing n-Variate Orthogonal Discrete Wavelet Transforms on Graphics Processing Units Lubomir Dechevsky, Joakim Gundersen, and Børre Bang Narvik University College, P.O.B. 385, N-8505 Narvik, Norway
Abstract. In [4,5] an algorithm was proposed for isometric mapping between smooth n-variate m-dimensional vector fields and fractal curves and surfaces, by using orthonormal wavelet bases. This algorithm matched only the orthonormal bases of scaling functions (the “V-spaces” of multiresolution analyses). In the present communication we shall consider a new algorithm which matches the orthonormal bases of wavelets (the “W-spaces” of multiresolution analyses). Being of Cantor diagonal type, it was applicable for both bounded and unbounded domains, but the complexity of its implementation was rather high. In [3] we proposed a simpler algorithm for the case of boundary-corrected wavelet basis on a bounded hyper-rectangle. In combination with the algorithm for the “V-spaces” from [4,5], the new algorithm provides the opportunity to compute multidimensional orthogonal discrete wavelet transform (DWT) in two ways – via the “classical” way for computing multidimensional wavelet transforms, and by using a commutative diagram of mappings of the bases, resulting in an equivalent computation on graphics processing units (GPUs). The orthonormality of the wavelet bases ensures that the direct and inverse transformations of the bases are mutually adjoint (transposed in the case of real entries) orthogonal matrices, which eases the computations of matrix inverses in the algorithm. 1D and 2D orthogonal wavelet transforms have been first implemented for parallel computing on GPUs using C++ and OpenGL shading language around the year 2000; our new algorithm allows to extend general-purpose computing on GPUs (GPGPU) also to higher-dimensional wavelet transforms. If used in combination with the Cantor diagonal type algorithm of [4,5] (the “V-space” basis matching) this algorithm can in principle be applied for computing DWT of n-variate vector fields defined on the whole Rn . However, if boundary-corrected wavelets are considered for vector-fields defined on a bounded hyper-rectangle in Rn , then the present algorithm for GPU-based computation of n-variate orthogonal DWT can be enhanced with the new simple “V-space”-basis matching algorithm of [3]. It is this version that we consider in detail in the present study.
This work was supported in part by the 2007, 2008 and 2009 Annual Research Grants of the priority R&D Group for Mathematical Modeling, Numerical Simulation and Computer Visualization at Narvik University College, Norway.
1 Introduction
Let Vj−1 = Vj ⊕ Wj , j ∈ Z, be the nested V-subspaces and W-spaces of an orthonormal multiresolution analysis in L2 (Rn , Rm ) (see, e.g., [1,6]), “⊕” denoting the direct sum of orthogonal subspaces. In [3] we proposed a new method for matching the bases of scaling functions in the V-spaces of orthonormal multiresolution analyses in L2 -spaces of n-variate m-dimensional vector-fields. In comparison with the Cantor diagonal type method proposed earlier in [4,5], the new method of [3] is less general, but also much simpler. In [3] we apply this new simple method for the solution of systems of nonlinear equations and the solution of geometric intersection problems in n-dimensions, n = 2, 3, 4, . . .. In the present study we expand the range of applications of the new approach proposed in [4,5,3] to the computation of n-variate orthogonal discrete wavelet transforms (DWTs). All needed preliminary notation and preliminary results about isometric Hilbert spaces, orthogonal operators and their adjoints, orthogonal wavelet expansions in one and several variables and sequence norms in terms of wavelet coefficients can be found in section 2 of [3]. This work, together with [4,5,3] is part of the research of the R&D Group for Mathematical Modelling, Numerical Simulation and Computer Visualization at Narvik University College within two consecutive Strategic Projects of the Norwegian Research Council – the completed project ’GPGPU – Graphics Hardware as a High-end Computational Resource’ (2004-2007), and the ongoing project ’Heterogeneous Computing’ (2008-2010).
2 Main Result: A New Basis-Matching Algorithm
In this section we shall consider in detail the case $n = d$, $m = 1$ (the $d$-dimensional DWT, i.e., the DWT for processing $d$-variate scalar-valued fields). The case of $m$-dimensional vector-fields, $m = 2,3,\ldots$, follows easily by applying the results of the present section to each coordinate of the vector-field.

2.1 Canonical Isometries between Hilbert Spaces
Since all bases in consideration are orthonormal, the linear (transformation) operators/matrices A for change between such bases are unitary. For conciseness of the exposition, here we shall consider only real-valued Hilbert spaces, which means that the unitary operator A is orthogonal, or, in equivalent matrix form, A has real entries and $A^{-1} = A^T$, where $A^T$ is the transposed matrix of A. Consider the Hilbert spaces

$V_{d,J} = \underbrace{V_J \otimes V_J \otimes \cdots \otimes V_J}_{d\ \text{times}}$,

with dimension d and resolution level J, and $V_{1,dJ}$, with dimension 1 and resolution level dJ, where the operator $\otimes$ corresponds to the tensor product of spaces spanned over scaling function bases, as in [1,2]. The bases in $V_{d,J}$ and $V_{1,dJ}$ are denoted by $\Phi_{d,J}$ and $\Phi_{1,Jd}$, respectively, with $\dim \Phi_{d,J} = 2^{Jd} = \dim \Phi_{1,Jd}$.
Since the spaces $V_{d,J}$ and $V_{1,dJ}$ have the same dimension, and each one of them has a selected fixed basis in it, there is a "canonical" isometry $A_{d,J}$ between them which maps the fixed basis of $V_{d,J}$ 1–1 onto the fixed basis of $V_{1,dJ}$. This choice of the canonical isometry is unique modulo right composition with an orthogonal operator $\sigma_{d,J}$ and modulo left composition with an orthogonal operator $\sigma_{1,dJ}$ corresponding to a permutation of the basis in $V_{d,J}$ and $V_{1,dJ}$, respectively. Any such permutation operator would be represented by a permutation matrix (an orthogonal matrix with only one non-zero element in every row and every column, the non-zero element being equal to 1). In the commutative diagrams given below, all linear operators will be identified with their matrices.

2.2 Discrete Wavelet Transform

The space $\bar{V}_{d,J,j} = V_j \oplus \bigoplus_{\mu=j}^{J-1} W_\mu$ is isometric to $V_{d,J}$ for every $j \le J-1$; they
are both Hilbert spaces and have the same dimension. Note, however, that while the orthonormal basis in $V_{d,J}$ (the "space of all reconstructed signals") consists entirely of $2^{Jd}$ scaling (father wavelet) functions, the orthonormal basis in $\bar{V}_{d,J,j}$ (the space of all transformed (decomposed) signals) consists of only $2^{jd}$ scaling (father wavelet) functions, complemented with $2^{jd}\left(2^{(J-j)d} - 1\right)$ wavelets (mother wavelet functions). The orthonormal basis of $\bar{V}_{d,J,j}$ will be denoted by $\Psi_{d,J,j}$, $\dim \Psi_{d,J,j} = 2^{Jd}$. The isometry from $V_{d,J}$ to $\bar{V}_{d,J,j_d}$, $j_d \le J-1$, is given by an orthogonal operator (and orthogonal matrix) $\Omega_{d,J,j_d}$. In this case

$\Psi_{d,J,j_d} = \Omega_{d,J,j_d}\, \Phi_{d,J}$   (1)
is the matrix of change of basis between $\Phi_{d,J}$ and $\Psi_{d,J,j_d}$. Indeed, let $f$ be an element of $V_{d,J}$ expanded in the basis $\Phi_{d,J}$ by $f = X^T \Phi_{d,J}$, and an element of $\bar{V}_{d,J,j}$ expanded in the basis $\Psi_{d,J,j}$ by $f = Y^T \Psi_{d,J,j}$, where $X$ and $Y$ are $2^{Jd} \times 2^{Jd}$ matrices. (We refer to the coordinate matrix $X$ as the reconstructed d-dimensional signal of $f$ and to the coordinate matrix $Y$ as the decomposed d-dimensional signal of $f$.) Changing the basis, we obtain

$Y = \left(\Omega_{d,J,j_d}^T\right)^{-1} X$   (2)

and it is seen from (1) that the matrix $\left(\Omega_{d,J,j_d}^T\right)^{-1}$ is the matrix of the d-dimensional DWT. But since $\Omega_{d,J,j_d}$ is an orthogonal matrix, $\left(\Omega_{d,J,j_d}^T\right)^{-1} = \Omega_{d,J,j_d}$ holds, and $Y = \Omega_{d,J,j_d} X$.
Fig. 1. Commutative diagram and its inverse
So, in this notation the direct DWT corresponds to the orthogonal $2^{Jd} \times 2^{Jd}$ matrix $\Omega_{d,J,j}$. The inverse (reconstruction) DWT then corresponds to $\Omega_{d,J,j_d}^T$.

2.3 Problem Setting
The spaces $\bar{V}_{d,J,j_d}$ and $\bar{V}_{1,dJ,j_1}$ are both Hilbert spaces with the same dimension $2^{Jd}$ and are, therefore, isometric. There are infinitely many isometries between these spaces – each one being given by one, and only one, orthogonal $2^{Jd} \times 2^{Jd}$ matrix $B_{d,J}$, provided that a fixed basis is considered in each of the two spaces. The main object of research in the present study is as follows. Given the fixed orthonormal bases $\Phi_{d,J}$, $\Phi_{1,Jd}$, $\Psi_{d,J,j_d}$ and $\Psi_{1,Jd,j_1}$ in the isometric Hilbert spaces $V_{d,J}$, $V_{1,dJ}$, $\bar{V}_{d,J,j_d}$ and $\bar{V}_{1,Jd,j_1}$, respectively, compute the matrix $B_{d,J}$ of the orthogonal operator which makes the diagrams in Figure 1 commutative. Analogously to $A_{d,J}$, for the fixed bases in consideration, $B_{d,J}$ is unique modulo left composition with a permutation $S_{d,J}$ of the basis in $\bar{V}_{d,J,j_d}$ and right composition with a permutation $S_{1,dJ}$ of the basis in $\bar{V}_{1,Jd,j_1}$ (see Figure 1). From the commutative diagram it is clear that $B_{d,J}$ can be computed as $B_{d,J} = \Omega_{1,J,j_1} A_{d,J} \Omega_{d,J,j_d}^T$, provided that $\Omega_{d,J,j_d}$ and $\Omega_{1,J,j_1}$ have been computed. Because of the orthonormality of all bases and the orthogonality of all operators involved in the commutative diagram in Figure 1, it is possible to invert this diagram, with all inverses being easy to compute via transposition (see Figure 1). In particular, the inverse of $B_{d,J}$ needed for the computation of $\Omega_{d,J,j_d}^T$ is equal to $B_{d,J}^T$. Rather than computing $\Omega_{d,J,j}$ via the CPU, we shall use the afore-mentioned commutative diagram and its inverse to compute $\Omega_{d,J,j}$ for $j = j_d$ by

$\Omega_{d,J,j_d} = B_{d,J}^T\, \Omega_{1,J,j_1}\, A_{d,J},$   (3)

assuming that we have a realization of the orthogonal operator $B_{d,J}$ and that $\Omega_{1,J,j_1}$ is computed on the GPU.
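The following small NumPy sketch (our illustration only – random orthogonal matrices are used as stand-ins for the actual DWT and basis-matching matrices, and all names are ours) shows how relation (3) and its inversion by transposition can be exercised numerically once the matrices are available:

```python
import numpy as np

def random_orthogonal(n, rng):
    # QR factorization of a Gaussian matrix yields an orthogonal matrix;
    # it only serves here as a placeholder for the actual transform matrices.
    q, _ = np.linalg.qr(rng.standard_normal((n, n)))
    return q

rng = np.random.default_rng(0)
N = 16                                   # stands for 2**(J*d)

A      = random_orthogonal(N, rng)       # placeholder for A_{d,J} (V-space basis matching)
Omega1 = random_orthogonal(N, rng)       # placeholder for Omega_{1,J,j_1} (1D DWT, computed on the GPU)
OmegaD = random_orthogonal(N, rng)       # placeholder for Omega_{d,J,j_d} (d-dimensional DWT)

# B_{d,J} from the commutative diagram: B = Omega_1 A Omega_d^T
B = Omega1 @ A @ OmegaD.T

# Inverting the diagram needs only transposes: Omega_d = B^T Omega_1 A, i.e. formula (3)
assert np.allclose(B.T @ Omega1 @ A, OmegaD)

# Orthogonality: the inverse (reconstruction) DWT is just the transpose
X = rng.standard_normal(N)               # a coefficient vector (vector used here for simplicity)
Y = OmegaD @ X                           # decomposed signal
assert np.allclose(OmegaD.T @ Y, X)
```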
Therefore, our task from now on is to find a way to compute one possible solution for $B_{d,J}$.

Remark 1. In (3), $\Omega_{1,J,j_1}$ can be replaced by its 2D representation via $\Omega_{1,J,j_1} = B_{2,J}\, \Omega_{2,J,j_2}\, A_{2,J}^T$. Then $\Omega_{d,J,j_d} = B_{d,J}^T\, B_{2,J}\, \Omega_{2,J,j_2}\, A_{2,J}^T\, A_{d,J}$ is a new isometry better adapted to the 2-dimensional architecture of the GPU. For simplicity of presentation, in this exposition we shall continue to consider $\Omega_{1,J,j_1}$. It can also be mentioned that a 1D DWT on the GPU is available in the CUDA¹ SDK².

2.4 Computing $B_{d,J}$
Let $d \in \mathbb{N}$. Without loss of generality, we may assume that $J \ge 1$ and $0 \le j = j_d \le J-1$. Choose and fix $j_0$: $0 \le j_0 \le J-1$, and set $j = j_d = j_0$, $j_1 = j_0 \cdot d$. Decompose $V_{d,J}$ and $V_{1,Jd}$ into direct sums of V- and W-spaces, as follows:

$V_{d,J}$ isometric to $\bar{V}_{d,J,j_0} = V_{d,j_0} \oplus \bigoplus_{\mu=j_0}^{J-1} W_{d,\mu}$

$V_{1,Jd}$ isometric to $\bar{V}_{1,Jd,j_1} = V_{1,j_0 d} \oplus \bigoplus_{\nu=j_0 d}^{Jd-1} W_{1,\nu}$

Now we shall construct an isometry between $\bar{V}_{d,J,j_0}$ and $\bar{V}_{1,Jd,j_1}$ by proposing an appropriate 1–1 bijection between their bases. Rewriting the $\bigoplus$-term in the RHS of $\bar{V}_{1,Jd,j_1}$ to match the RHS of $\bar{V}_{d,J,j_0}$ in the following way

$\bigoplus_{\nu=j_0 d}^{Jd-1} W_{1,\nu} = \bigoplus_{\mu=j_0}^{J-1} \underbrace{\bigoplus_{\nu=\mu d}^{(\mu+1)d-1} W_{1,\nu}}_{=: W_{1,\mu}^{d}} = \bigoplus_{\mu=j_0}^{J-1} W_{1,\mu}^{d},$

we get

$V_{d,J}$ isometric to $\bar{V}_{d,J,j_0} = V_{d,j_0} \oplus \bigoplus_{\mu=j_0}^{J-1} W_{d,\mu}$

$V_{1,Jd}$ isometric to $\bar{V}_{1,Jd,j_1} = V_{1,j_0 d} \oplus \bigoplus_{\mu=j_0}^{J-1} W_{1,\mu}^{d}.$
Therefore it now suffices to design appropriate bijections between the orthonormal bases of $W_{d,\mu}$ and $W_{1,\mu}^{d}$, $\mu = j_0, \ldots, J-1$, given by

$W_{d,\mu} = \bigoplus_{l=1}^{2^d - 1} Z_{1,\mu}^{\sigma_{1,l}} \otimes Z_{1,\mu}^{\sigma_{2,l}} \otimes \cdots \otimes Z_{1,\mu}^{\sigma_{d,l}}$

and

$W_{1,\mu}^{d} = \bigoplus_{n=0}^{d-1} W_{1,\,n+\mu d}.$

Here $\sigma_{k,l} \in \{0, 1\}$, $k = 1, \ldots, d$, $l = 1, \ldots, 2^d - 1$; the $\sigma_{k,l}$, $k = 1, \ldots, d$, are uniquely determined by the binary representation of $l$, $l = \sum_{k=1}^{d} \sigma_{k,l}\, 2^{d-k}$, and

$Z_{1,\mu}^{\sigma_{k,l}} = \begin{cases} V_{1,\mu}, & \sigma_{k,l} = 0 \\ W_{1,\mu}, & \sigma_{k,l} = 1 \end{cases}$

$k = 1, \ldots, d$, $l = 1, \ldots, 2^d - 1$, $\mu = j_0, j_0 + 1, \ldots, J-1$.

¹ CUDA: the computing engine in NVIDIA's GPUs. ² SDK: Software Development Kit.

Fig. 2. $2^{\mu d}$-block bijection between $W_{d,\mu}$ and $W_{1,\mu}^{d}$, $\mu = j_0, \ldots, J-1$. The blocks of $W_{d,\mu}$ are d-dimensional, of size $2^{\mu}$ in each dimension; the blocks of $W_{1,\mu}^{d}$ are 1-dimensional, of size $2^{\mu d}$.
Now $W_{d,\mu}$ is mapped 1-1 onto $W_{1,\mu}^{d}$ for every $\mu = j_0, \ldots, J-1$, in terms of blocks of basis functions of dimension $2^{\mu d}$ each, as follows (see Figure 2).

1. $V_{d,j_0}$ maps 1-1 onto $V_{1,j_0 d}$ by the isometric operator $A_{d,j_0}$.
2. For every $\mu = j_0, \ldots, J-1$, $W_{d,\mu}$ will be mapped to the $2^d - 1$ corresponding blocks of basis functions of $W_{1,n+\mu d}$, $n = 0, \ldots, d-1$.
3. Every corresponding block of basis functions in $W_{d,\mu}$ (with dimension d and size $2^{\mu}$ in each dimension) is then mapped 1-1 onto a respective block of basis functions of $W_{1,\mu}^{d}$ (with dimension 1 and size $2^{\mu d}$) by the isometric operator $A_{d,\mu}$.

The selection of the operator $A_{d,\mu}$ can be made in many different ways, for example, by the original Cantor diagonal method developed in [4,5], or by the method proposed in [3]. Here we opt for the latter method, which is simpler to implement. Namely (see subsection 3.1 in [3]), $A_{d,\mu}: (k_1, \ldots, k_d) \in \{0, 1, \ldots, 2^{\mu}-1\}^d \to n \in \{0, 1, \ldots, 2^{\mu d}-1\}$ is defined by

$n = \sum_{i=1}^{d} k_i \cdot 2^{\mu(d-i)}.$
Its inverse (and, in view of its orthogonality, also its adjoint) $A_{d,\mu}^{T}: n \in \{0, 1, \ldots, 2^{\mu d}-1\} \to (k_1, \ldots, k_d) \in \{0, 1, \ldots, 2^{\mu}-1\}^d$ is given by

$\left[\frac{n}{2^{\mu(d-1)}}\right] = k_1, \qquad n - k_1 \cdot 2^{\mu(d-1)} = n_1,$
$\left[\frac{n_1}{2^{\mu(d-2)}}\right] = k_2, \qquad n_1 - k_2 \cdot 2^{\mu(d-2)} = n_2,$
$\vdots$
$\left[\frac{n_{d-1}}{2^{\mu(d-d)}}\right] = k_d, \qquad n_{d-1} - k_d \cdot 2^{0} = 0,$

where $[a]$ denotes, as customary, the integer part of $a \ge 0$.
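As an illustration of this index map (our own sketch; the function names are ours and the code is not taken from [3]), $A_{d,\mu}$ simply reads $(k_1, \ldots, k_d)$ as the digits of a base-$2^{\mu}$ number, and the inverse recovers the digits by successive integer division:

```python
def flatten_index(k, mu):
    """A_{d,mu}: map (k_1, ..., k_d) in {0, ..., 2**mu - 1}^d to n in {0, ..., 2**(mu*d) - 1}."""
    d = len(k)
    return sum(k[i] * 2 ** (mu * (d - 1 - i)) for i in range(d))

def unflatten_index(n, mu, d):
    """Inverse (adjoint) map: recover (k_1, ..., k_d) from n by repeated integer division."""
    k = []
    for i in range(d):
        q, n = divmod(n, 2 ** (mu * (d - 1 - i)))
        k.append(q)
    return tuple(k)

# Round-trip check on a small example
assert unflatten_index(flatten_index((3, 0, 5), mu=3), mu=3, d=3) == (3, 0, 5)
```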
3 Concluding Remarks
The isometric mapping constructed here can be used as the basis for completion of several tasks, as follows:

– Fast solution of nonlinear operator equations on the GPU. For example, consider finding the manifold obtained by the implicit d-dimensional m-variate equation $\vec{f}(\vec{x}) = 0$:
  • developing a pixel-based intersection algorithm in d-dimensional space, for any d = 2, 3, 4, . . .;
  • generating the topology of intersection (the ordering of the point cloud);
  • first results in this direction are announced in [3].
– GPU-computable multidimensional orthogonal DWT. As an important part of future work on this topic we see a detailed comparison of the performance of the new GPU-computable d-dimensional DWT discussed in this study to a standard CPU-computable d-dimensional DWT. As part of
the research leading to [8], we conducted a comparative study of the execution time for processing a set of model images by a standard CPU-computable 2-dimensional wavelet transform and a respective GPU-computable 2-dimensional wavelet transform based on the OpenGL shading language. On average, the GPU-based version was approximately 30 times faster [7]. More recently, a new generation of GPUs has been introduced, with a vast increase in programmability, computational power and speed, and it can be expected that using such a newer GPU would result in a speed-up of more than $10^2$ times. We conjecture that increasing the dimension d of the DWT will lead to an increase of this speed-up.
References

1. Cohen, A.: Wavelet methods in numerical analysis. In: Ciarlet, P.G., Lions, J.L. (eds.) Handbook of Numerical Analysis, vol. VII, pp. 417–712. Elsevier, Amsterdam (2000)
2. Dechevsky, L.T.: Atomic decomposition of function spaces and fractional integral and differential operators. In: Rusev, P., Dimovski, I., Kiryakova, V. (eds.) Transform Methods and Special Functions, Part A (1999); Fractional Calculus & Applied Analysis, vol. 2(4), pp. 367–381 (1999)
3. Dechevsky, L.T., Bang, B., Gundersen, J., Lakså, A., Kristoffersen, A.R.: Solving Non-linear Systems of Equations on Graphics Processing Units. In: Lirkov, I., Margenov, S., Waśniewski, J. (eds.) LSSC 2009. LNCS, vol. 5910, pp. 719–729. Springer, Heidelberg (2010)
4. Dechevsky, L.T., Gundersen, J.: Isometric Conversion Between Dimension and Resolution. In: Dæhlen, M., Mørken, K., Schumaker, L. (eds.) 6th International Conference on Mathematical Methods for Curves and Surfaces, pp. 103–114. Nashboro Press, Tromsø (2005), ISBN 0-0-9728482-4-X
5. Dechevsky, L.T., Gundersen, J., Kristoffersen, A.R.: Wavelet-based isometric conversion between dimension and resolution and some of its applications. In: Proceedings of SPIE: Wavelet Applications in Industrial Processing V, Boston, Massachusetts, USA, vol. 6763 (2007)
6. Mallat, S.: A Wavelet Tour of Signal Processing, 2nd edn. Acad. Press, New York (1999)
7. Moguchaya, T.: GM-Waves: A Wavelet Software Library. MSc thesis, Narvik University College (2004)
8. Moguchaya, T., Grip, N., Dechevsky, L.T., Bang, B., Lakså, A., Tong, B.: Curve and surface fitting by wavelet shrinkage using GM Waves. In: Dæhlen, M., Mørken, K., Schumaker, L. (eds.) Mathematical Methods for Curves and Surfaces, pp. 263–274. Nashboro Press, Brentwood (2005)
Wavelet Compression, Data Fitting and Approximation Based on Adaptive Composition of Lorentz-Type Thresholding and Besov-Type Non-threshold Shrinkage

Lubomir T. Dechevsky¹, Joakim Gundersen¹, and Niklas Grip²

¹ Narvik University College, P.O.B. 385, N-8505 Narvik, Norway
² Department of Mathematics, Luleå University of Technology, SE-971 87 Luleå, Sweden
Abstract. In this study we initiate the investigation of a new advanced technique, proposed in Section 6 of [3], for generating adaptive Besov–Lorentz composite wavelet shrinkage strategies. We discuss some advantages of the Besov–Lorentz approach compared to firm thresholding.
1 Introduction
In [3] we considered three types of signals:

– The typical case of quasi-sparseness of the wavelet-coefficient vector, which occurs when the signal is sufficiently smooth. In this case, it is usually sufficient to apply nonadaptive threshold shrinkage strategies such as, for example, hard and soft thresholding (in their various global, levelwise or block versions) (see, e.g., [6,7,8,9]).
– Fractal signals and images which are continuous but nonsmooth everywhere (a simple classical example being the Weierstrass function). In this case, the vector of wavelet coefficients looks locally full everywhere. This general case was specifically addressed in [5], where a family of wavelet-shrinkage procedures of nonthreshold type was considered for very general classes of signals that belong to the general scale of Besov spaces and may have a full, non-sparse, vector of wavelet coefficients (see, in particular, [5], Appendix B, item B9).
– The intermediate case of spatially inhomogeneous signals which exhibit both smooth regions and regions with (isolated, or continual fractal) singularities. For this case, which is the most interesting and important for applications, nonadaptive thresholding shrinkage tends to oversmooth the signal in a neighbourhood of every singularity, while nonadaptive nonthresholding shrinkage tends
Research supported in part by the 2008 and 2009 Annual Research Grants of the priority R&D Group for Mathematical Modeling, Numerical Simulation and Computer Visualization at Narvik University College, Norway. Research supported in part by the Swedish Research Council (project registration number 2004-3862).
to undersmooth the signal in the regions where the signal has regular behaviour and a locally quasi-sparse wavelet-coefficient vector. On the basis of the preliminary study of this topic in [5,12], a method was proposed in [3] (Section 6) for developing a next, second, generation of composite wavelet shrinkage strategies having the new and very remarkable property of adapting to the local sparseness or nonsparseness of the vector of wavelet coefficients, with simultaneously improved performance near singularities as well as in smooth regions. A first attempt at designing an efficient strategy for adaptive wavelet thresholding was the so-called firm thresholding, which consistently outperformed adaptive non-composite wavelet shrinkage techniques such as soft and hard wavelet thresholding, which appear as limiting cases of the newer firm thresholding. For this purpose, in [3] we proposed to upgrade firm thresholding so as to incorporate all of the afore-mentioned wavelet shrinkage strategies within the very general setting of the so-called K-functional Tikhonov regularization of incorrectly posed inverse deterministic and stochastic problems [5], thereby obtaining a lucid uniform comparative characterization of the above-said approaches and their interrelations. The new approach proposed in [3] suggests, instead, to apply for the first time a new type of thresholding – the so-called Lorentz-type thresholding (based on the decreasing rearrangement of the wavelet coefficient vector, as outlined for the first time in [5], Appendix B, item B10(b)). Using this idea, in [3] we propose to upgrade firm thresholding to a composite adaptive shrinkage procedure based on a data-dependent composition of the new Lorentz-type thresholding with the nonthreshold shrinkage procedures of [5]. It can be shown that the composition of these new highly adaptive strategies achieves the best possible rate of compression over all signals with prescribed Besov regularity (smooth signals as well as fractals). This is valid for univariate and multivariate signals. A full theoretical analysis of this construction would be very lengthy and would require a very considerable additional theoretical and technical effort. We intend to return to this theoretical analysis in the near future. In this short exposition we shall carry out a preliminary comparative graphical analysis on a benchmark image with a local singularity (which has already been used for this purpose in [3,4,5,12]).
2 Preliminaries

2.1 Riesz Unconditional Wavelet Bases in Besov Spaces
For the definition of the inhomogeneous Besov spaces $B^{s}_{pq}(\mathbb{R}^n)$ (and for the respective range of the parameters $p, q, s$) we refer to [3, Section 4]. The same section in [3] contains the necessary information about the Riesz wavelet bases of orthogonal scaling functions $\varphi^{[0]}_{j_0 k}$ and wavelets $\psi^{[l]}_{jk}$, as well as their respective scaling and wavelet coefficients $\alpha_{j_0 k}$ and $\beta^{[l]}_{jk}$, $j = j_0, \ldots, j_1 - 1$, where $j_0, j_1 \in \mathbb{N}$, $j_0 < j_1$.
2.2 Non-parametric Regression
For the benchmark model of non-parametric regression with noise variance $\delta^2$ considered in this paper, see [3, Section 4, formula (7)]. The empirical scaling and wavelet coefficients $\hat{\alpha}_{j_0 k}$ and $\hat{\beta}^{[l]}_{jk}$ are also defined there, in formula (13).

2.3 Wavelet Shrinkage
The methodology to estimate f is based on the principle of shrinking wavelet coefficients towards zero to remove noise, which means reducing the absolute value of the empirical coefficients. (Mother) wavelet coefficients (β-coefficients) having small absolute value contain mostly noise. The important information at every resolution level is encoded in the coefficients on that level which have large absolute value. One of the most important applications of wavelets – denoising – began after observing that shrinking wavelet coefficients towards zero and then reconstructing the signal has the effect of denoising and smoothing. To fix terminology, a thresholding shrinkage rule sets to zero all coefficients with absolute values below a certain threshold level $\lambda \ge 0$, whilst a non-thresholding rule shrinks the non-zero wavelet coefficients towards zero, without actually setting to zero any of the nonzero coefficients. The cases when threshold shrinkage should be preferred over non-threshold shrinkage, and vice versa, were briefly discussed in Section 1.

I. Threshold shrinkage. This is the most explored wavelet shrinkage technique.

(i) Hard and soft threshold shrinkage. The hard and soft thresholding rules proposed by Donoho and Johnstone [7,8,9] (see also the seminal paper [6] of Delyon and Juditsky) for smooth functions are given respectively by

$\delta^{hard}(x; \lambda) = \begin{cases} x, & \text{if } |x| > \lambda \\ 0, & \text{if } |x| \le \lambda \end{cases}$   (1)

and

$\delta^{soft}(x; \lambda) = \begin{cases} \mathrm{sgn}(x)(|x| - \lambda), & \text{if } |x| > \lambda \\ 0, & \text{if } |x| \le \lambda \end{cases}$   (2)
where $\lambda \in [0, \infty)$ is the threshold value. Asymptotically, both hard and soft shrinkage estimates achieve within a $\log n$ factor of the ideal performance. Soft thresholding (a continuous function) is a 'shrink' or 'kill' rule, while hard thresholding (a discontinuous function) is a 'keep' or 'kill' rule.

(ii) Firm thresholding. After the initial enthusiasm of wavelet practitioners and theorists in the early 1990s generated by the introduction of hard and soft thresholding, a more serene second look at these techniques revealed a number of imperfections, and various attempts were made to design more comprehensive threshold rules (for further details
on this topic, see [5,12,3] and the references therein). A relatively successful attempt was the introduction by Gao and Bruce [10] of the firm thresholding rule

$\delta(x, \lambda_1, \lambda_2) = \begin{cases} \mathrm{sgn}(x)\,\lambda_2\,\dfrac{|x| - \lambda_1}{\lambda_2 - \lambda_1}, & \text{if } |x| \in (\lambda_1, \lambda_2] \\ x, & \text{if } |x| > \lambda_2 \\ 0, & \text{if } |x| \le \lambda_1 \end{cases}$   (3)

By choosing appropriate thresholds $(\lambda_1, \lambda_2)$, firm thresholding outperforms both hard and soft thresholding, which are its two extreme limiting cases. Indeed, note that

$\lim_{\lambda_2 \to \infty} \delta^{firm}(x, \lambda_1, \lambda_2) = \delta^{soft}(x, \lambda_1)$   (4)

and

$\lim_{\lambda_2 \to \lambda_1} \delta^{firm}(x, \lambda_1, \lambda_2) = \delta^{hard}(x, \lambda_1).$   (5)
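A minimal NumPy sketch of the three rules (1)–(3) (our own illustration, vectorized over a whole coefficient array; the function names are ours):

```python
import numpy as np

def hard_threshold(x, lam):
    # keep-or-kill rule (1)
    return np.where(np.abs(x) > lam, x, 0.0)

def soft_threshold(x, lam):
    # shrink-or-kill rule (2)
    return np.sign(x) * np.maximum(np.abs(x) - lam, 0.0)

def firm_threshold(x, lam1, lam2):
    # firm rule (3) of Gao and Bruce; tends to soft as lam2 -> inf and to hard as lam2 -> lam1
    ax = np.abs(x)
    mid = np.sign(x) * lam2 * (ax - lam1) / (lam2 - lam1)
    return np.where(ax <= lam1, 0.0, np.where(ax > lam2, x, mid))

coeffs = np.array([-2.0, -0.6, -0.1, 0.05, 0.3, 1.5])
print(hard_threshold(coeffs, 0.5))
print(soft_threshold(coeffs, 0.5))
print(firm_threshold(coeffs, 0.5, 1.0))
```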
The advantage of firm thresholding compared to its limiting cases comes at a price: two thresholds are required instead of only one, which doubles the dimensionality of the optimization problems related to finding data-dependent optimal thresholds (such as cross-validation, entropy minimization, etc.). Nevertheless, with the ever increasing computational power of the new computer generations, the advantages of firm thresholding compared to soft and hard thresholding tend to outweigh the higher computational complexity of firm thresholding algorithms.

(iii) Lorentz-curve thresholding. Vidakovic [13] proposed a thresholding method based on the Lorentz curve for the energy in the wavelet decomposition. A brief outline of the idea of Lorentz-curve thresholding can be found in subsection 5.3 of [3].

(iv) General Lorentz thresholding. This is a far-reaching generalization of the concept of Lorentz-curve thresholding based on the combined use of two deep function-analytical and operator-theoretical facts: (A) computability of the Peetre K-functional between Lebesgue spaces in terms of the non-increasing rearrangement of a measurable function; (B) isometricity of Besov spaces to vector-valued sequence spaces of Lebesgue type. This construction was introduced in [5], and the details, which we omit here for conciseness, can be found in [5], Appendix B, Item B10(b), or in [3], subsection 6.4. Item (B) is extended in a fairly straightforward way to the more general scale of Nikol'skii–Besov spaces with (quasi)norm (see [5], Appendix B, Item B12 and the references therein).

II. Non-thresholding Shrinkage. In the last 10–15 years there has been increasing interest in the study of fractals and singularity points of functions (discontinuities of the function or its derivatives, cusps, chirps, etc.), and
this raised the necessity of studying non-threshold wavelet shrinkage. At this point, while threshold rules can be considered well explored, non-threshold rules, on the contrary, are fairly new, and the corresponding theory is so far in an initial stage. This can be explained by the fact that traditionally only very smooth functions have been estimated. In [5] a family of wavelet-shrinkage estimators of non-threshold type was proposed, and further studied in [12,3], which is particularly well adapted for functions belonging to Besov spaces that have a full, non-sparse, vector of wavelet coefficients. The approach, proposed by Dechevsky, Ramsay and Penev in [5], parallels Wahba's spline smoothing technique based on Tikhonov regularization of ill-posed inverse problems, upgraded in [5] to be able to handle in a uniform way also the case of fitting less regular curves, surfaces, volume deformations and, more generally, multivariate vector fields belonging to Besov spaces. The relevant details about the Besov non-threshold wavelet shrinkage proposed in [5] can be found there in Appendix B, Item B9(a,b), as well as in [12], Section 5, or in [3], subsection 5.2. Similar to the general Lorentz thresholding, the above Besov shrinkage strategies are obtained as a consequence of some deep function-analytical and operator-theoretical facts, namely, as follows. (C) The metrizability of quasinormed abelian groups via the Method of Powers (see [1], section 3.10). (D) The Theorem of Powers for the real interpolation method of Peetre–Lions (see [1], section 3.11), leading to explicit computation of the K-functional, which in this case is also called quasilinearization (see [1]).

2.4 Composite Besov-Lorentz Shrinkage
One criterion, which was announced in Section 6 of [3] for the first time, is to use the real interpolation spaces between $B^{\sigma}_{\pi\pi}$ and $B^{s}_{pp}$, where the parameters $p, \pi, s$ and $\sigma$ are as in [5], Appendix B, Item B9(a). It is known that the Besov scale is closed under real interpolation, i.e., $(B^{\sigma}_{\pi\pi}, B^{s}_{pp})_{\theta, p(\theta)} = B^{s(\theta)}_{p(\theta)p(\theta)}$, where $0 \le \theta \le 1$, and $p(\theta)$ is defined by $\frac{1}{p(\theta)} = \frac{1-\theta}{\pi} + \frac{\theta}{p}$. The parameter $\theta$, which is a coefficient in a convex combination, determines the degree to which the composite estimator is of Lorentz type and the degree to which it is a Besov shrinkage-type estimator. Since general Lorentz shrinkage is a threshold method, while Besov shrinkage is of non-threshold type, the last observation also implies that $\theta$ can be used as a control parameter for regulating the compression rate. The parameter $\theta$ can be computed via cross-validation, considerations for asymptotic minimax rate, Bayesian techniques, or any other procedure for statistical parametric estimation. The resulting composite estimator is in this case highly adaptive to the local smoothness of the estimated function. We omit a more detailed exposition of this topic here, since it merits a much more detailed study.
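As a quick worked example of this interpolation of the integrability parameter (our own illustration, with arbitrarily chosen parameter values not taken from [3] or [5]): taking, say, $\pi = 1$, $p = 2$ and $\theta = \frac{1}{2}$,

$\frac{1}{p(\theta)} = \frac{1-\theta}{\pi} + \frac{\theta}{p} = \frac{1/2}{1} + \frac{1/2}{2} = \frac{3}{4}, \qquad \text{i.e.} \quad p(\theta) = \frac{4}{3},$

so this choice of $\theta$ places the composite estimator half-way, in the sense of real interpolation, between the two end-point spaces of the scale.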
Fig. 1. Non-parametric regression estimation of "λ-tear", λ = 0.25. Noise: white, unbiased, $\delta^2 = 0.01$. Sample size N = 1024. Wavelet basis of the estimators: orthonormal Daub 6. Wavelet shrinkage strategies: firm and Lorentz-Besov. (Curves shown: noisy data, original, firm, Lorentz-Besov.)
Note that the valuable additional functionality offered by the composite Besov–Lorentz shrinkage is based again on several deep function-analytical and operator-theoretical facts which can be outlined as follows: properties (A–D) stated above, and facts (E), (F) given below. (E) The reiteration theorem for the real interpolation method of Peetre–Lions (see [1], sections 3.4 and 3.11). (F) The generalization, via the Holmstedt formula (see [1], section 3.6), of the formula for computation of the Peetre K-functional between Lebesgue spaces in terms of the non-increasing rearrangement of a measurable function (see [1], section 5.2).
3 Besov-Lorentz Shrinkage versus Firm Thresholding
Composite Besov-Lorentz shrinkage has considerably more control parameters than firm thresholding and, therefore, optimization with respect to all parameters of the Besov-Lorentz model would be a considerably more challenging computational problem than optimization related to firm thresholding. However, this
Fig. 2. Under the conditions in Figure 1, the noise variance is $\delta^2 = 0.1$. (Curves shown: noisy data, original, firm, Lorentz-Besov.)
is outweighed by several advantages of Besov-Lorentz shrinkage, some of the most important of which are listed below.

1. The firm thresholding rule has been designed as a unification of the hard and soft thresholding scale; it is not related to any function-analytic properties of the estimated signal. On the contrary, Besov-Lorentz shrinkage is derived from the important properties (A–F) stated above. In particular, Besov-Lorentz shrinkage provides a convenient framework for fine control of the optimization which, unlike firm thresholding, can be performed under a rich variety of meaningful constraints. This allows the introduction of bias in the estimation process, whenever information about such bias is available, with drastic improvement in the quality of estimation.
2. The optimization criterion proposed in [10] for optimizing the thresholds of firm thresholding is of entropy type. It is general but, at the same time, it is inflexible with respect to the introduction of meaningful bias information which may be available a priori.
3. For appropriate values of the parameters of Besov shrinkage (even without invoking Lorentz thresholding) one of the two possible limiting cases of firm thresholding – soft thresholding – is attained within the Besov scale (see [5], p. 344). On the other hand, the other limiting case of firm thresholding, that is, hard thresholding, can be attained within the Lorentz thresholding model, provided that this model is based on a Nikol'skii–Besov space scale. (The
standard Besov space scales (with s(x) ≡ const, x ∈ Rn) are insufficient for implementing the hard threshold rule within the Lorentz threshold setting.) It can further be shown that any shrinkage strategy attained via the firm thresholding rule can also be attained within the Besov-Lorentz composite strategy, but not vice versa. Some more conclusions on the comparison between firm and Besov-Lorentz shrinkage can be drawn after comparative graphical analysis of the performance of the two shrinkage strategies in Figures 1 and 2, for the benchmark image "λ-tear" (see [5], Example 1), with λ = 0.25. In order to make firm thresholding sufficiently competitive, instead of the usual entropy-based optimization criterion we have optimized firm thresholding here with respect to the same criterion as Besov-Lorentz shrinkage: a criterion based on a Nikol'skii–Besov scale which takes into consideration the singularity of "λ-tear" at x = 0. Figure 1 displays the performance of the two estimators for medium-to-large noise (the noise variance $\delta^2 = 0.01$ is the same as in Example 1 in [5]; the average noise amplitude is about 35% of the amplitude of the original signal); Figure 2 shows the performance of the estimators in the case of extremely large noise amplitude (noise variance $\delta^2 = 0.1$; the average noise amplitude exceeds 100% of the amplitude of the original signal); the sample size for both Figures 1 and 2 is N = 1024; the compression rate of the Besov-Lorentz shrinkage is very high: 98.6% for Figure 1 and 99.4% for Figure 2. Note that in both Figures 1 and 2 Besov-Lorentz shrinkage outperforms firm thresholding in a neighbourhood of the singularity at x = 0 (this is especially well distinguishable in the presence of large noise (Figure 2), where firm thresholding exhibits the typical "oversmoothing" behaviour in a neighbourhood of x = 0). Note that the "overfitting" by the Besov-Lorentz estimator in the one-sided neighbourhoods left and right of the singularity at x = 0 is due to the relatively large support of the Daub 6 orthonormal wavelet used (which, in its turn, is a consequence of the requirement for sufficient smoothness of the Daubechies wavelet). If in the place of Daubechies orthonormal wavelets one uses biorthonormal wavelets or, even better, multiwavelets or wavelet packets (see, e.g., [11]), the decrease of the support of the wavelets will lead to removal of the above-said overfit while retaining the advantages of better fitting of the singularity. Summarizing our findings, we can extend the list of comparison items 1–3, as follows.

4. The trade-off between error of approximation and rate of compression can be efficiently controlled with Besov-Lorentz shrinkage, while with firm thresholding no such control is available.
5. Besov-Lorentz shrinkage outperforms firm thresholding in fitting singularities. If conventional orthonormal Daubechies wavelets are used, the good fit of isolated singularities comes at the price of overfitting smooth parts of the signal neighbouring the respective isolated singularity. However, this overfit can be removed by using multiwavelets or wavelet packets which, unlike Daubechies orthogonal wavelets, simultaneously combine sufficient smoothness with narrow support.
References

1. Bergh, J., Löfström, J.: Interpolation Spaces. An Introduction. Grundlehren der Mathematischen Wissenschaften, vol. 223. Springer, Berlin (1976)
2. Dechevsky, L.T.: Atomic decomposition of function spaces and fractional integral and differential operators. In: Rusev, P., Dimovski, I., Kiryakova, V. (eds.) Transform Methods and Special Functions, Part A (1999); Fractional Calculus & Applied Analysis, vol. 2, pp. 367–381 (1999)
3. Dechevsky, L.T., Grip, N., Gundersen, J.: A new generation of wavelet shrinkage: adaptive strategies based on composition of Lorentz-type thresholding and Besov-type non-thresholding shrinkage. In: Proceedings of SPIE: Wavelet Applications in Industrial Processing V, Boston, MA, USA, vol. 6763, article 676308, pp. 1–14 (2007)
4. Dechevsky, L.T., MacGibbon, B., Penev, S.I.: Numerical methods for asymptotically minimax non-parametric function estimation with positivity constraints I. Sankhya, Ser. B 63(2), 149–180 (2001)
5. Dechevsky, L.T., Ramsay, J.O., Penev, S.I.: Penalized wavelet estimation with Besov regularity constraints. Mathematica Balkanica (N.S.) 13(3-4), 257–356 (1999)
6. Delyon, B., Juditsky, A.: On minimax wavelet estimators. Applied and Computational Harmonic Analysis 3, 215–228 (1996)
7. Donoho, D.L., Johnstone, I.M.: Ideal spatial adaptation via wavelet shrinkage. Biometrika 81(3), 425–455 (1994)
8. Donoho, D.L., Johnstone, I.M.: Minimax estimation via wavelet shrinkage. Annals of Statistics 26(3), 879–921 (1998)
9. Donoho, D.L., Johnstone, I.M., Kerkyacharian, G., Picard, D.: Wavelet shrinkage: Asymptopia? Journal of the Royal Statistical Society Series B 57(2), 301–369 (1995)
10. Gao, H.-Y., Bruce, A.G.: WaveShrink with firm shrinkage. Statist. Sinica 7(4), 855–874 (1997)
11. Mallat, S.: A Wavelet Tour of Signal Processing, 2nd edn. Acad. Press, New York (1999)
12. Moguchaya, T., Grip, N., Dechevsky, L.T., Bang, B., Lakså, A., Tong, B.: Curve and surface fitting by wavelet shrinkage using GM Waves. In: Dæhlen, M., Mørken, K., Schumaker, L. (eds.) Mathematical Methods for Curves and Surfaces, pp. 263–274. Nashboro Press, Brentwood (2005)
13. Vidakovic, B.: Statistical Modeling by Wavelets. Wiley, New York (1999)
On Interpolation in the Unit Disk Based on Both Radon Projections and Function Values

Irina Georgieva¹ and Rumen Uluchev²

¹ Institute of Mathematics and Informatics, Bulgarian Academy of Sciences, Acad. G. Bonchev, Bl. 8, Sofia 1113, Bulgaria
[email protected]
² Department of Mathematics and Informatics, Todor Kableshkov HST (VTU), 158 Geo Milev Str., Sofia 1574, Bulgaria
[email protected]
1 Introduction and Preliminaries
There are many practical problems, e.g. in computer tomography, plasma tomography, electron microscopy, etc., where the available information about the processes comes in the form of line integrals of functions. Due to the great importance of these fields of science, many researchers have turned to mathematical methods for processing data of Radon projection type. Such methods in engineering and medicine have been investigated by many mathematicians [5,12,13,15,16] and others. Interpolation, smoothing, and numerical integration of bivariate functions with data consisting of prescribed Radon projections was considered in [1,2,3,4,6,7,8,10,12,14]. In [7] smoothing of mixed type of data – both Radon projections and function values – was studied. Recent efforts of the authors are directed to interpolation by bivariate polynomials of mixed type of data. Here we present a regular scheme of mixed type consisting of chords and points of the unit circle in the plane. Denote by $\Pi_n^2$ the set of all real algebraic polynomials in two variables of total degree at most n. Then, if $P \in \Pi_n^2$ we have

$P(x, y) = \sum_{i+j \le n} \alpha_{ij}\, x^i y^j, \qquad \alpha_{ij} \in \mathbb{R}.$

The dimension of the linear space $\Pi_n^2$ is $\binom{n+2}{2}$.
Let $B := \{\mathbf{x} = (x, y) \in \mathbb{R}^2 : \|\mathbf{x}\| \le 1\}$ be the unit disk in the plane, where $\|\mathbf{x}\| = \sqrt{x^2 + y^2}$. Given $t \in [-1, 1]$ and an angle of measure $\theta \in [0, \pi)$, the equation $x\cos\theta + y\sin\theta - t = 0$ defines a line perpendicular to the vector $(\cos\theta, \sin\theta)$ and passing through the point $(t\cos\theta, t\sin\theta)$. The set $I(\theta, t) := \{(x, y) : x\cos\theta + y\sin\theta - t = 0\} \cap B$ is a chord of the unit disk B, where $\theta$ determines the direction of $I(\theta, t)$ and t is the distance of the chord from the origin. Suppose that for a given function $f: \mathbb{R}^2 \to \mathbb{R}$ the integrals of f exist along all line segments in the unit disk B. For fixed $t \in [-1, 1]$ and $0 \le \theta < \pi$ the Radon projection of the function f over the segment $I(\theta, t)$ is defined by

$R_\theta(f; t) := \int_{I(\theta, t)} f(\mathbf{x})\, d\mathbf{x}.$

Observe that $R_\theta(\,\cdot\,; t)$ is a linear functional. Since $I(\theta, t) \equiv I(\theta + \pi, -t)$ it follows that $R_\theta(f; t) = R_{\theta+\pi}(f; -t)$. Thus, the assumption above on the direction of the chords, $0 \le \theta < \pi$, is not a loss of generality. It is well-known that the set of Radon projections $\{R_\theta(f; t) : -1 \le t \le 1,\ 0 \le \theta < \pi\}$ determines f uniquely (see John [11], Radon [16]). According to a more recent result of Solmon [17], an arbitrary function $f \in L^1(\mathbb{R}^2)$ with compact support in B is uniquely determined by any infinite set of Radon projections. It was shown by Marr [12] that every polynomial $P \in \Pi_n^2$ can be reconstructed uniquely by its projections on only a finite number of directions. Marr's formula is a basic equality in the Chebyshev–Fourier analysis which shows the reduction of the multivariate case to a univariate one. Chebyshev polynomials of the second kind play an important role in the approximation of functions on the unit disk. We denote, as usual, the Chebyshev polynomial of degree m by

$U_m(t) := \frac{\sin(m+1)\theta}{\sin\theta}, \qquad t = \cos\theta.$

They are used for the construction of an orthonormal basis of $\Pi_n^2$, which is a helpful tool in studying functions on the unit disk.
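As a simple numerical illustration of this definition (our own sketch, not part of the paper; names are ours), a chord $I(\theta, t)$ can be parametrized by its midpoint $(t\cos\theta, t\sin\theta)$ plus a multiple of the direction $(-\sin\theta, \cos\theta)$, and the Radon projection can then be approximated by Gauss–Legendre quadrature along it:

```python
import numpy as np

def radon_projection(f, theta, t, n_nodes=64):
    """Approximate R_theta(f; t), the integral of f over the chord I(theta, t) of the unit disk."""
    half = np.sqrt(max(1.0 - t * t, 0.0))      # half-length of the chord
    nodes, weights = np.polynomial.legendre.leggauss(n_nodes)
    s = half * nodes                            # arc-length parameter along the chord
    x = t * np.cos(theta) - s * np.sin(theta)
    y = t * np.sin(theta) + s * np.cos(theta)
    return half * np.sum(weights * f(x, y))

# Sanity check: for f = 1 the projection equals the chord length 2*sqrt(1 - t^2)
print(radon_projection(lambda x, y: np.ones_like(x), theta=0.3, t=0.5))  # ~1.7320508
```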
2 Interpolation Problem for Radon Projections Type of Data
For a given scheme of chords $I_k$, $k = 1, \ldots, \binom{n+2}{2}$, of the unit circle $\partial B$, find a polynomial $P \in \Pi_n^2$ satisfying the conditions:

$\int_{I_k} P(\mathbf{x})\, d\mathbf{x} = \gamma_k, \qquad k = 1, \ldots, \binom{n+2}{2}.$   (1)
If (1) has a unique solution for every given set of values {γk } the interpolation problem is called poised and the scheme of chords – regular.
Firstly, Hakopian [10] proved that the set of $\binom{n+2}{2}$ chords joining any given $n+2$ points on the unit circle is a regular scheme. Later Bojanov and Georgieva [1] provided a new family of regular schemes. The Radon projections are taken along a set of $\binom{n+2}{2}$ chords $\{I(\theta, t)\}$ of the unit circle, partitioned into $n+1$ subsets, such that the k-th subset consists of $k+1$ parallel chords. More precisely, consider the scheme $(\Theta, T)$, where $\Theta := \{\theta_0, \theta_1, \ldots, \theta_n\}$, $0 \le \theta_0 < \cdots < \theta_n < \pi$, and $T := \{t_{ki}\}$ is a triangular matrix of distances $1 > t_{kk} > \cdots > t_{kn} > -1$, corresponding to the angle measures $\theta_k$, $k = 0, \ldots, n$. The problem is to find a polynomial $P \in \Pi_n^2$ satisfying the interpolation conditions

$\int_{I(\theta_k, t_{ki})} P(\mathbf{x})\, d\mathbf{x} = \gamma_{ki}, \qquad k = 0, \ldots, n, \quad i = k, \ldots, n.$   (2)

A necessary and sufficient condition for regularity of the schemes of this type was proved by Bojanov and Georgieva [1]. But for a given set of points T, it is often difficult to determine whether the problem (2) has a unique solution. Several regular schemes of this type were suggested by Georgieva and Ismail [6] and by Georgieva and Uluchev [7], where the distances of the segments from the origin correspond to the zeros of orthogonal polynomials. Another kind of regular scheme was proposed by Bojanov and Xu [4]. It consists of $2\lfloor\frac{n+1}{2}\rfloor + 1$ equally spaced directions with $\lfloor\frac{n}{2}\rfloor + 1$ chords, associated with the zeros of the Chebyshev polynomials, in each direction.
3 Interpolation Problem for Mixed Type of Data
Now we consider interpolation using mixed type of data – both Radon projections and function values at points lying on the unit circle. Suppose that

• $\Theta := \{\theta_0, \theta_1, \ldots, \theta_n\}$, $0 \le \theta_0 < \cdots < \theta_n < \pi$;
• $T := \{t_{ki}\}$ is a triangular matrix with $1 > t_{kk} > \cdots > t_{k,n-1} > -1$, $k = 0, \ldots, n-1$;
• $X := \{x_0, \ldots, x_n\}$, where $x_k$ are points on the unit circle.

The problem is to find a polynomial $P \in \Pi_n^2$ satisfying the interpolation conditions

$\int_{I(\theta_k, t_{ki})} P(\mathbf{x})\, d\mathbf{x} = \gamma_{ki}, \qquad k = 0, \ldots, n-1, \quad i = k, \ldots, n-1,$
$P(x_k) = f_k, \qquad k = 0, \ldots, n.$   (3)

Note that the difference with problem (2) is that we replace the interpolation condition on the smallest chord in each direction $\theta_k$ with a function value interpolation condition at a point $x_k$. If (3) has a unique solution for every given set of values $\{\gamma_{ki}\}$ and $\{f_k\}$ we call the interpolation problem (3) poised and the scheme of chords and points $(\Theta, T, X)$ – regular.
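The following sketch (our own illustration with hypothetical helper names; it is not the authors' code and assumes the scheme supplies exactly $\dim \Pi_n^2 = \binom{n+2}{2}$ conditions) shows how problem (3) can be assembled and solved numerically as a square linear system in the monomial basis, with the chord integrals approximated by Gauss–Legendre quadrature:

```python
import numpy as np

def monomials(n):
    """Exponent pairs (i, j) with i + j <= n, a basis of Pi_n^2."""
    return [(i, total - i) for total in range(n + 1) for i in range(total + 1)]

def chord_integral(i, j, theta, t, n_nodes=64):
    """Approximate integral of x^i y^j over the chord I(theta, t) of the unit disk."""
    half = np.sqrt(max(1.0 - t * t, 0.0))
    s, w = np.polynomial.legendre.leggauss(n_nodes)
    x = t * np.cos(theta) - half * s * np.sin(theta)
    y = t * np.sin(theta) + half * s * np.cos(theta)
    return half * np.sum(w * x ** i * y ** j)

def solve_mixed_interpolation(n, chords, points, gammas, fvals):
    """Solve problem (3): chord-integral conditions plus point-value conditions."""
    basis = monomials(n)
    rows, rhs = [], []
    for (theta, t), g in zip(chords, gammas):         # Radon projection conditions
        rows.append([chord_integral(i, j, theta, t) for (i, j) in basis])
        rhs.append(g)
    for (x, y), f in zip(points, fvals):              # function value conditions
        rows.append([x ** i * y ** j for (i, j) in basis])
        rhs.append(f)
    coeffs = np.linalg.solve(np.array(rows), np.array(rhs))
    return dict(zip(basis, coeffs))                   # coefficient of x^i y^j per (i, j)
```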
Let us introduce the following matrices:

$U_k^* := \begin{pmatrix}
U_k(t_{kk}) & U_{k+1}(t_{kk}) & \cdots & U_{n-1}(t_{kk}) & U_n(t_{kk}) \\
U_k(t_{k,k+1}) & U_{k+1}(t_{k,k+1}) & \cdots & U_{n-1}(t_{k,k+1}) & U_n(t_{k,k+1}) \\
\vdots & \vdots & & \vdots & \vdots \\
U_k(t_{k,n-1}) & U_{k+1}(t_{k,n-1}) & \cdots & U_{n-1}(t_{k,n-1}) & U_n(t_{k,n-1}) \\
U_k(-1) & U_{k+1}(-1) & \cdots & U_{n-1}(-1) & U_n(-1)
\end{pmatrix}$

The following result is a necessary and sufficient condition for the interpolation problem (3) to be poised. The proof will appear in a subsequent article [9].

Theorem 1. For a given set of chords and points $(\Theta, T, X)$ of the unit circle in the plane with $0 \le \theta_0 < \cdots < \theta_n < \pi$, $1 > t_{kk} > \cdots > t_{k,n-1} > -1$, and $x_k = (-\cos\theta_k, -\sin\theta_k)$, $k = 0, \ldots, n$, the interpolation problem (3) is poised if and only if $\det U_k^* \ne 0$, $k = 0, \ldots, n$.
4 Regular Schemes for Mixed Type of Data
Here we give a regular scheme for interpolation based on mixed type of data. The following assertion on Chebyshev systems is well-known.

Proposition 1. The functions $\{\sin lx\}_{l=1}^{m}$ form a Chebyshev system in $(0, \pi)$.

Now we state and prove the main result of this survey.

Theorem 2. Let n be a given positive integer, and
(i) $\Theta = \{\theta_0, \ldots, \theta_n\}$, $0 \le \theta_0 < \cdots < \theta_n < \pi$;
(ii) $t_{ki} = \eta_i = \cos\frac{(i+1)\pi}{n+1}$, $i = k, \ldots, n-1$, be the zeros of the Chebyshev polynomial of the second kind $U_n(x)$;
(iii) $x_k = (-\cos\theta_k, -\sin\theta_k)$, $k = 0, \ldots, n$.
Then the interpolation problem (3) is poised, i.e. the scheme for interpolation $(\Theta, T, X)$ is regular.

Proof. According to Theorem 1 it is sufficient to prove that $\det U_k^* \ne 0$ for all $k = 0, \ldots, n$. Let us fix an integer k, $0 \le k \le n-1$. For each $j = 1, \ldots, n$, we obtain

$U_{n-j}(t_{k,n-l}) = \frac{(-1)^{n-j+1}\,\sin\frac{l(n-j+1)\pi}{n+1}}{\sin\frac{(n-l+1)\pi}{n+1}}.$
We shall make use of the notation

$\Delta_k = \begin{vmatrix}
\varphi_{n-k}(x_{n-k}) & \varphi_{n-k}(x_{n-k-1}) & \cdots & \varphi_{n-k}(x_0) \\
\varphi_{n-k-1}(x_{n-k}) & \varphi_{n-k-1}(x_{n-k-1}) & \cdots & \varphi_{n-k-1}(x_0) \\
\vdots & \vdots & & \vdots \\
\varphi_1(x_{n-k}) & \varphi_1(x_{n-k-1}) & \cdots & \varphi_1(x_0) \\
(k+1)(-1)^{1} & (k+2)(-1)^{2} & \cdots & (n+1)(-1)^{n-k+1}
\end{vmatrix},$
where $\varphi_l(x) = \sin lx$, $l = 1, \ldots, n$, and $x_j = \frac{(n-j+1)\pi}{n+1}$, $j = 1, \ldots, n$. Now recall the property $U_m(-1) = (m+1)(-1)^m$ of the Chebyshev polynomials of the second kind. Factoring out the common multipliers of the rows and columns in the determinant $\det U_k^*$ we get

$\det U_k^* = \Delta_k \prod_{j=1}^{n-k} \frac{(-1)^{n-j+1}}{\sin\frac{(n-j+1)\pi}{n+1}},$
where obviously the product factor does not vanish. We expand $\Delta_k$ in minors corresponding to the elements of the last row. If we denote by $A_{n-k+1,j}$ the minor of $\Delta_k$ obtained by taking the determinant of $\Delta_k$ with the last row and column j "crossed out", we obtain

$\Delta_k = \sum_{j=1}^{n-k+1} (-1)^{j+n-k+1}(k+j)(-1)^{j} A_{n-k+1,j} = (-1)^{n-k+1}\sum_{j=1}^{n-k+1} (k+j)\, A_{n-k+1,j}.$
Observe that from Proposition 1 above it follows that

$\begin{vmatrix}
\varphi_{n-k}(\tau_{n-k}) & \varphi_{n-k}(\tau_{n-k-1}) & \cdots & \varphi_{n-k}(\tau_1) \\
\varphi_{n-k-1}(\tau_{n-k}) & \varphi_{n-k-1}(\tau_{n-k-1}) & \cdots & \varphi_{n-k-1}(\tau_1) \\
\vdots & \vdots & & \vdots \\
\varphi_1(\tau_{n-k}) & \varphi_1(\tau_{n-k-1}) & \cdots & \varphi_1(\tau_1)
\end{vmatrix} \ne 0$

for all $0 < \tau_1 < \cdots < \tau_n < \pi$. Then all minors $A_{n-k+1,j}$ do not vanish and have one and the same sign, hence

$\sum_{j=1}^{n-k+1} (k+j)\, A_{n-k+1,j} \ne 0.$

Therefore $\Delta_k \ne 0$ for all $k = 0, 1, \ldots, n-1$. For $k = n$ we have $\Delta_n = U_n(-1) = (n+1)(-1)^n$ and $\Delta_n \ne 0$, too. Applying Theorem 1 we conclude that the interpolation problem (3) is poised and the given data $(\Theta, T, X)$ is a regular scheme.
5 Numerical Experiments
Here we present some results from numerical experiments carried out using the Mathematica software by Wolfram Research Inc. We apply the regular scheme of chords and points $(\Theta, T, X)$ from Theorem 2 with $\theta_k = \frac{(k+1)\pi}{n+2}$, $k = 0, \ldots, n$, in all examples below.
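A small sketch (ours, with hypothetical helper names; the paper's own experiments were carried out in Mathematica) of how this scheme can be generated, together with a brute-force numerical check of the poisedness condition of Theorem 1:

```python
import numpy as np

def cheb_u(m, t):
    """Chebyshev polynomial of the second kind U_m(t) via the recurrence U_m = 2t U_{m-1} - U_{m-2}."""
    u_prev, u = 1.0, 2.0 * t
    if m == 0:
        return u_prev
    for _ in range(m - 1):
        u_prev, u = u, 2.0 * t * u - u_prev
    return u

def scheme_and_check(n):
    """Build (Theta, T, X) of Theorem 2 with theta_k = (k+1)pi/(n+2); check det(U*_k) != 0, k < n."""
    theta = [(k + 1) * np.pi / (n + 2) for k in range(n + 1)]
    eta = [np.cos((i + 1) * np.pi / (n + 1)) for i in range(n)]        # zeros of U_n
    chords = [(theta[k], eta[i]) for k in range(n) for i in range(k, n)]
    points = [(-np.cos(th), -np.sin(th)) for th in theta]
    poised = True
    for k in range(n):                                                  # k = n is trivially U_n(-1) != 0
        nodes = eta[k:n] + [-1.0]                                       # t_{kk}, ..., t_{k,n-1}, and -1
        U = np.array([[cheb_u(m, x) for m in range(k, n + 1)] for x in nodes])
        poised = poised and np.linalg.matrix_rank(U) == U.shape[0]
    return chords, points, poised

chords, points, poised = scheme_and_check(16)
print(len(chords), len(points), poised)   # 136 chords and 17 points (153 conditions), True
```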
Example 1. We compare the interpolation based on Radon projections only with the interpolation based on mixed type of data for the function

$f(x, y) = \frac{\sin\left(3\pi\sqrt{(x-0.3)^2 + y^2 + 10^{-18}}\right)}{3\pi\sqrt{(x-0.3)^2 + y^2 + 10^{-18}}}.$

On Figure 1 the graphs of $f(x, y)$, the interpolant $P_{16}(x, y)$ and the error $f(x, y) - P_{16}(x, y)$ for the problem (2) are shown. The regular scheme consists of 153 chords only, corresponding to the zeros of $U_{17}(t)$, with $\theta_k = \frac{(k+1)\pi}{18}$, $k = 0, \ldots, 16$, and $t_{ki} = \cos\frac{(i+1)\pi}{18}$, $i = k, \ldots, 16$. Figure 2 represents the graphs of the interpolating polynomial and the error function. The interpolating polynomial $Q_{16}(x, y)$ is the solution of the problem (3) and it is based on 136 pieces of Radon projections and 17 function values at the points $x_k = \left(-\cos\frac{(k+1)\pi}{18}, -\sin\frac{(k+1)\pi}{18}\right)$, $k = 0, \ldots, 16$. It should be pointed out that in the mixed case we get a much better approximation. The relative error of interpolation is as follows:

$\frac{\|f - P_{16}\|_2}{\|f\|_2} = 4.77945 \times 10^{-3}, \qquad \frac{\|f - Q_{16}\|_2}{\|f\|_2} = 1.16826 \times 10^{-4},$
$\frac{\|f - P_{16}\|_\infty}{\|f\|_\infty} = 2.91338 \times 10^{-2}, \qquad \frac{\|f - Q_{16}\|_\infty}{\|f\|_\infty} = 2.59069 \times 10^{-4}.$
Fig. 1. Panels: $f(x, y)$, $P_{16}(x, y)$, $f(x, y) - P_{16}(x, y)$.
Fig. 2. Panels: $Q_{16}(x, y)$, $f(x, y) - Q_{16}(x, y)$.
Example 2. We try our mixed regular scheme for interpolation of the function

$f(x, y) = \frac{\sin\left(3\pi\sqrt{(x-0.3)^2 + y^2 + 10^{-18}}\right)}{3\pi\sqrt{(x-0.3)^2 + y^2 + 10^{-18}}} + 1.3\,\frac{\sin\left(3\pi\sqrt{(x+0.4)^2 + (y+0.1)^2 + 10^{-18}}\right)}{3\pi\sqrt{(x+0.4)^2 + (y+0.1)^2 + 10^{-18}}}$

with polynomials $P_n(x, y)$ of degree $n = 10, 12, 14, 16$. On Figure 3 we present the graphs of the function, its interpolating polynomial of degree $n = 10$ and the error function. Calculations show that for $n \ge 12$ there is no visible difference between the graphs of the function and its interpolating polynomial of degree n. The relative interpolation error is given in Table 1.
Fig. 3. Panels: $f(x, y)$, $P_{10}(x, y)$, $f(x, y) - P_{10}(x, y)$.
Table 1.

n                        10                12                14                16
||f − Pn||2 / ||f||2     6.15932 × 10^-2   8.82093 × 10^-3   4.67868 × 10^-4   8.09702 × 10^-5
Example 3. We interpolate the function f (x, y) = sin 2x + sin 3y with polynomials Pn (x, y) of degree n = 8, 10, 12, 15. The relative error for the interpolation problem (3) is given in Table 2.
Table 2.

n                        8                 10                12                15
||f − Pn||2 / ||f||2     5.01005 × 10^-4   9.77689 × 10^-6   1.55919 × 10^-7   8.92670 × 10^-9
||f − Pn||∞ / ||f||∞     1.51321 × 10^-3   4.80045 × 10^-5   9.79461 × 10^-6   1.75293 × 10^-7
Acknowledgement This research was supported by the Bulgarian Ministry of Education and Science under Grant No. VU-I-303/07.
References

1. Bojanov, B., Georgieva, I.: Interpolation by bivariate polynomials based on Radon projections. Studia Math. 162, 141–160 (2004)
2. Bojanov, B., Petrova, G.: Numerical integration over a disc. A new Gaussian cubature formula. Numer. Math. 80, 39–59 (1998)
3. Bojanov, B., Petrova, G.: Uniqueness of the Gaussian cubature for a ball. J. Approx. Theory 104, 21–44 (2000)
4. Bojanov, B., Xu, Y.: Reconstruction of a bivariate polynomial from its Radon projections. SIAM J. Math. Anal. 37, 238–250 (2005)
5. Davison, M.E., Grünbaum, F.A.: Tomographic reconstruction with arbitrary directions. Comm. Pure Appl. Math. 34, 77–120 (1981)
6. Georgieva, I., Ismail, S.: On recovering of a bivariate polynomial from its Radon projections. In: Bojanov, B. (ed.) Constructive Theory of Functions, pp. 127–134. Marin Drinov Academic Publishing House, Sofia (2006)
7. Georgieva, I., Uluchev, R.: Smoothing of Radon projections type of data by bivariate polynomials. J. Comput. Appl. Math. 215(1), 167–181 (2008)
8. Georgieva, I., Uluchev, R.: Surface reconstruction and Lagrange basis polynomials. In: Lirkov, I., Margenov, S., Waśniewski, J. (eds.) LSSC 2007. LNCS, vol. 4818, pp. 670–678. Springer, Heidelberg (2008)
9. Georgieva, I., Uluchev, R.: Interpolation of mixed type data by bivariate polynomials (to appear)
10. Hakopian, H.: Multivariate divided differences and multivariate interpolation of Lagrange and Hermite type. J. Approx. Theory 34, 286–305 (1982)
11. John, F.: Abhängigkeiten zwischen den Flächenintegralen einer stetigen Funktion. Math. Ann. 111, 541–559 (1935)
12. Marr, R.: On the reconstruction of a function on a circular domain from a sampling of its line integrals. J. Math. Anal. Appl. 45, 357–374 (1974)
13. Natterer, F.: The Mathematics of Computerized Tomography. Classics in Applied Mathematics, vol. 32. SIAM, Philadelphia (2001)
14. Nikolov, G.: Cubature formulae for the disk using Radon projections. East J. Approx. 14, 401–410 (2008)
15. Pickalov, V., Melnikova, T.: Plasma Tomography. Nauka, Novosibirsk (1995) (in Russian)
16. Radon, J.: Über die Bestimmung von Funktionen durch ihre Integralwerte längs gewisser Mannigfaltigkeiten. Ber. Verh. Sächs. Akad. 69, 262–277 (1917)
17. Solmon, D.C.: The X-ray transform. J. Math. Anal. Appl. 56(1), 61–83 (1976)
Comparison between the Marching-Cube and the Marching-Simplex Methods

Joakim Gundersen, Arnt R. Kristoffersen, and Lubomir T. Dechevsky

Narvik University College, P.O.B. 385, N-8505 Narvik, Norway
Abstract. The marching-cube algorithm is one of the efficient algorithms for computing the solutions of low-dimensional nonlinear systems of equations. It is widely used in industrial applications involving intersection problems in 2, 3 and, possibly, higher dimensions. In 2006, a research team including the authors of this article proposed a new 'marching' approach which differs essentially from the marching-cube approach. We called this new algorithm the Marching Simplex Algorithm. Some of the advantages of the marching simplex algorithm were mentioned already at the time of its introduction. However, a detailed comparison between the two algorithms has not been made so far, and the purpose of this article is to address the issues of such a comparison.
1 Introduction
We consider the following problem about the solution of a system of n nonlinear equations, n = 1, 2, . . ., with n+m variables, m = 0, 1, 2, . . . (or, in equivalent geometric form, the problem of computing the intersection of n hyper-surfaces in $\mathbb{R}^{n+m}$):

$f_i(x_1, \ldots, x_n, x_{n+1}, \ldots, x_{n+m}) = 0, \qquad i = 1, \ldots, n.$   (1)

We consider (1) under the following general assumptions:

– The solution manifold has possibly variable rank r, $0 \le r \le n$, at different points of the solution manifold.
– At different points of the solution manifold the system (1) may be solvable with respect to a different (n+m−r)-tuple of variables among $x_1, \ldots, x_{n+m}$, which will be implicit functions of the remaining r independent variables among $x_1, \ldots, x_{n+m}$ in a neighborhood of this point of the solution manifold.

If we assume that the point $P = (x_{1,P}, \ldots, x_{n,P}, x_{n+1,P}, \ldots, x_{n+m,P})^T \in \mathbb{R}^{n+m}$ belongs to the solution manifold of (1), then (1) can be rewritten at P in the following way:

$f_i(x_1, \ldots, x_{r(P)}, y_{r(P)+1}, \ldots, y_{n+m}) = 0, \qquad i = 1, \ldots, n,$   (2)
Research supported in part by the 2006, 2007 and 2008 Annual Research Grants of the priority R&D Group for Mathematical Modeling, Numerical Simulation and Computer Visualization at Narvik University College, Norway.
where $0 \le r(P) \le n$ and

$x_j = x_{j,P}, \quad j = 1, \ldots, r(P), \qquad y_l = x_{r(P)+l,P}, \quad l = 1, \ldots, n+m-r(P),$   (3)

and at P the system (2) can be solved explicitly with respect to the y-variables in (3) as functions of the x-variables in (3). To cover all possible cases when P varies over the solution manifold, it is necessary to extend (2) onto all possible permutations of the n+m variables $x_1, \ldots, x_{n+m}$ in (2). Note that, for all possible permutations in (3), and for any choice of the point P on the solution manifold in (1), (2),

$1 \le r(P) \le n, \qquad n+m-r(P) \ge m.$   (4)

We shall pay special attention to the cases of planar curves (m = n = 1), surfaces in 3D (n = 1, m = 2) and curves in 3D obtained as intersections of two surfaces (n = 2, m = 1). These most important geometric cases correspond to constant maximal rank of the solution manifold. These are also the main cases of interest in computer-aided design (CAD), computer-aided geometric design (CAGD), scientific visualization in natural sciences, engineering and industrial applications, as well as for the development of simulators and computer games. For all cases in two and three dimensions (i.e., m + n = 2 or m + n = 3) there are some established "industrial standard" methods for computing and visualizing intersections. One of these methods is ray tracing (see, e.g., [13,7]), which has the advantage of only dealing with 1-dimensional intersection problems, since it is based on collecting information about intersections with geometric objects on the way of a visual ray, and transforming this information into colour corresponding to a given pixel within a given frame (time step). This method produces highly realistic images, but at the cost of extreme computational demands. A high-resolution real-time ray tracing algorithm is very unlikely to appear in the foreseeable future. An alternative real-time algorithm producing realistic level curves in 2 dimensions and level surfaces in 3 dimensions is the Marching Square/Cube algorithm. We have investigated in detail and used this algorithm for creating a software application for animated scientific visualization of the results obtained via the simulation testbed BedSim of the Swedish company LKAB (http://lkab.com); see [9]. In the process of developing this software, we became aware not only of the computational strongpoints of this algorithm, but also of its weaknesses. Because of these weaknesses, we addressed the issue of upgrading it in [11], where we considered one vertex-based CPU-computational and one pixel-based GPU-computational upgrade of the Marching Square/Cube algorithm¹. Although these two upgrades proved to enhance considerably the range of applicability and the efficiency of the original Marching Square/Cube algorithm, these upgrades were still quite restrictive in a number of important aspects. In [5] we
Acronyms: CPU and GPU stand for “central” and “graphics” processing units of the computer, respectively.
proposed a “marching” algorithm which is essentially free from all limitations and constraints typical for the Marching Square/Cube algorithm and its various upgrades. This algorithm works in the most general setting (1-4) and remains efficient for values of m and n which are higher than with any of the marching square/cube versions. The purpose of the present paper is to concisely compare the original Marching Square/Cube algorithm, as used in [9], its CPU/vertex-based and its GPU/ pixel-based upgrades from [11], and the Marching Simplex algorithm proposed in [5]. The main new contribution in this paper is the comparative analysis in section 3. Section 4 contains important information about future work related to the connection between the Marching Simplex algorithm and a new upgrade of the Marching Cube algorithm based on the concept of wavelet-based isometric conversion between dimension and resolution - see [3] and the references therein.
2 A Hierarchy of "Marching" Algorithms for Intersection
Here we give a concise outline of the Marching Square/Cube algorithm, its vertex-based CPU-computable upgrade from [11], its pixel-based GPU-computable upgrade from [11], and the most general Marching Simplex algorithm proposed in [5].

2.1 Marching Square/Cube Algorithm
In the general setting (1-4), the Marching Cube algorithm² can be used for finding only hypersurfaces in $\mathbb{R}^{m+1}$ (n = 1, m = 0, 1, . . .). Its practical use is mainly for computing isocurves (n = m = 1) in $\mathbb{R}^2$ and isosurfaces (n = 1, m = 2) in $\mathbb{R}^3$. These algorithms are based on different lookup tables, depending on the dimension. This is a straightforward way which in 2 and 3 dimensions is fairly fast. The lookup table for 2D has only $2^4 = 16$ entries and for 3D it has $2^8 = 256$ entries. There are several places on the Internet where such tables are available. We have chosen to base our algorithm on the lookup table in [2]. The basic principle behind the marching square/cube algorithm is:

– Subdivide the space into a series of smaller squares/cubes.
– "March" through each of them, testing whether the corner points are above or below the given iso-value (i.e., testing the sign of the LHS in the one and only equation (n = 1) in (1)).
– Depending on the result of the testing, the lookup tables are used to decide what kind of iso-curve/-surface configuration we have inside the square/cube.
– By interpolating along the edges between the corners it is possible to find an approximate solution to where the iso-curve/-surface intersects the square/cube.

For more details about the marching square/cube algorithm see, e.g., [2].
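A minimal single-cell sketch of this principle (our own illustration; instead of a full 16-entry lookup table it derives the configuration from the sign changes on the cell edges and pairs crossings in a simplified way, so the two ambiguous cases are resolved arbitrarily, unlike the table-driven implementation in [2]):

```python
import numpy as np

def cell_segments(f00, f10, f11, f01, x0, y0, h):
    """Approximate the zero level set of a scalar field inside one grid cell.

    Corner values are given counter-clockwise starting at (x0, y0); h is the cell size.
    Returns a list of line segments ((x1, y1), (x2, y2)).
    """
    corners = [(x0, y0, f00), (x0 + h, y0, f10), (x0 + h, y0 + h, f11), (x0, y0 + h, f01)]
    crossings = []
    for (xa, ya, fa), (xb, yb, fb) in zip(corners, corners[1:] + corners[:1]):
        if (fa > 0) != (fb > 0):                  # sign change along this edge
            s = fa / (fa - fb)                    # linear interpolation of the zero crossing
            crossings.append((xa + s * (xb - xa), ya + s * (yb - ya)))
    # 0 or 2 crossings in the unambiguous cases; the 4-crossing (ambiguous) cases
    # are resolved here by simply pairing consecutive crossings
    return [(crossings[i], crossings[i + 1]) for i in range(0, len(crossings) - 1, 2)]

# Example: the circle x^2 + y^2 = 1 crossing the cell [0.6, 1.1] x [0.0, 0.5]
f = lambda x, y: x * x + y * y - 1.0
print(cell_segments(f(0.6, 0.0), f(1.1, 0.0), f(1.1, 0.5), f(0.6, 0.5), 0.6, 0.0, 0.5))
```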
The marching cube algorithm has, until recently, been patented by William E. Lorensen and Harvey E. Cline. This 20-year patent expired in June 2005.
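To make the principle above concrete, the following is a minimal illustrative sketch (in Python; it is our own illustration, not code from [2] or [9], and all names are hypothetical) of the two per-cell operations on which the Marching Square algorithm rests: computing the 4-bit lookup index from the corner signs and linearly interpolating an edge crossing.

```python
def cell_index(corner_values, iso):
    """4-bit lookup index for one grid cell.
    Assumed corner order: bottom-left, bottom-right, top-right, top-left."""
    index = 0
    for bit, value in enumerate(corner_values):
        if value < iso:          # test the sign of f - iso at each corner
            index |= 1 << bit
    return index                 # 0..15, i.e. one of the 2^4 = 16 lookup-table entries

def edge_crossing(p1, p2, v1, v2, iso):
    """Linear interpolation of the iso-curve crossing on the edge p1-p2."""
    t = (iso - v1) / (v2 - v1)   # assumes a sign change on this edge, so v1 != v2
    return (p1[0] + t * (p2[0] - p1[0]), p1[1] + t * (p2[1] - p1[1]))
```

The full algorithm only adds the 16-entry table that maps each index to the edges whose crossings are to be connected.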
2.2 A Vertex-Based CPU-Computable Upgrade
The Marching Square/Cube algorithm, even considered in its greatest possible generality (to which general case one should refer as the Marching Hypercube algorithm), is still quite restrictive: within the general setting (1-4) it handles only the particular case n = 1 (i.e., when the intersection is a hypersurface in R^{n+m}). In [11] we made an attempt to construct an upgrade of "the Marching Hypercube algorithm" which should be able to handle problem (1-4) in full generality, i.e., for all n = 1, 2, . . ., under the only constraint that either the solution manifold is compact, or only a compact part of the solution manifold will be computed. The main idea can be outlined as follows.
1. Assuming that Ω ⊂ R^{n+m}, where Ω is compact, find a hypercube (or hyper-rectangle) K_{n+m} ⊂ R^{n+m}, so that Ω ⊂ K_{n+m} (it may be additionally required that K_{n+m} be minimal with the above-said property; we shall not discuss this topic in detail here). We assume that the sides of K_{n+m} are parallel to the coordinate axes in R^{n+m}.
2. For every one of the n + m sides of K_{n+m} consider a strictly increasing knot-vector (i.e., all knots are simple) with sufficiently small maximal distance between two neighbouring knots, and define the piecewise affine B-spline system for this knot-vector. Then, approximate the LHS of each of the n equations in (1) by (n + m)-variate tensor product Lagrange interpolation with piecewise affine B-splines in every variable.
3. In the place of (1) solve the approximating system of n equations with n + m unknowns on K_{n+m}, as obtained in item 2. The solution amounts to a "Marching hyper-rectangle" algorithm where for each hyper-rectangle a system of n (n+m)-polylinear equations has to be solved. In [11] we outlined how this system can be solved for n+m = 2 and n+m = 3. The construction of the solution can be extended by induction for any k = 1, 2, 3, 4, . . ., with n + m = k.
4. The solution of the approximating system of item 3 for any given hyper-rectangle in K_{n+m} is an intersection (possibly void) of n hypersurfaces in R^{n+m}. If this solution has a non-void intersection with the considered hyper-rectangle in K_{n+m}, then, and only then, this intersection is accepted as part of the approximate solution manifold. In this way, while "marching" over all hyper-rectangles in the tensor-product grid in K_{n+m}, the whole approximate solution manifold in K_{n+m} is obtained. (The procedure would eventually require also a finalizing intersection with Ω ⊂ K_{n+m}.)
For further details on this upgrade of the Marching Square/Cube algorithm see [11].
2.3 A Pixel-Based GPU-Computable Upgrade
This was the second alternative of an upgrade of the original Marching Square / Cube algorithm which was proposed in [11], as a GPU-computable analogue of
the CPU-computable construction in subsection 2.2. In its basic version considered in detail in [11], this method works for the cases

n + m = 1: n = 1, m = 0;
n + m = 2: n = 1, m = 1 or n = 2, m = 0;    (5)
n + m = 3: n = 1, m = 2 or n = 2, m = 1 or n = 3, m = 0;
which essentially cover all cases of 3-dimensional geometric modelling of intersections in CAGD. In [11] we specifically considered the cases n = 1, m = 1 (implicit equation of a curve in the plane), and n = 2, m = 1 (implicit equations of a curve in the 3-dimensional space). The key tool for solving both of these problems using the GPU was the 1-1 mapping of the values of a parametric surface in 3D defined on Ω ⊂ R^2 as an RGB-colouring of its parametric domain Ω, where the value of "red", "green", and "blue" corresponded to the (appropriately scaled) value of the x, y, and z coordinate of a point P on the surface, respectively. (This type of bijective colour coding of parametric surfaces in 3D was introduced in [4] as "RGB-colour map (mode 2)".) For further details and examples about this upgrade of the Marching Square/Cube algorithm, see [11].
2.4 The Marching Simplex Algorithm
Testing of the algorithm in subsection 2.2 showed that it has a number of deficiencies (more about this will be said in the next section). The source of these deficiencies was, in the first place, the nonlinearity of the polylinear system of equations to be solved for every hyper-rectangle in the grid in K_{n+m}. In [5] we addressed the problem of eliminating these deficiencies by modifying the tiling over which the "marching" was performed: in the place of hyper-rectangles in R^{n+m}, we considered hyper-simplices in R^{n+m}. The most important consequence of this change of the tiling was that the system of n equations with n + m unknowns which had to be solved for every tile turned out, in the case of hyper-simplices, to be linear (and not just polylinear, as was the case with hyper-rectangular tiles). The Marching Simplex algorithm proved to be a simple and robust general approach to computing intersections, free of the deficiencies of the tensor-product approach of subsection 2.2. For further details, see [5].
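As a hedged illustration of why the per-tile problem becomes linear (a sketch under our own naming, not code from [5]): on a simplex in dimension d = n + m, replacing each of the n left-hand sides by its affine interpolant through the d + 1 vertex values reduces the local intersection problem to a small linear system G x + c = 0.

```python
import numpy as np

def affine_interpolant(V, FV):
    """Affine interpolant of F : R^d -> R^n on one d-simplex.
    V  : (d+1, d) array of simplex vertices,
    FV : (d+1, n) array of the values of F at those vertices.
    Returns (G, c) with L(x) = G @ x + c and L(V[i]) = FV[i] for all i."""
    E = (V[1:] - V[0]).T            # d x d matrix whose columns are the edge vectors
    DF = (FV[1:] - FV[0]).T         # n x d matrix of value differences
    G = DF @ np.linalg.inv(E)       # Jacobian of the affine interpolant
    c = FV[0] - G @ V[0]            # constant term
    return G, c                     # the zero set {x : G x + c = 0} is affine
```

Intersecting this affine zero set with the simplex, while marching over all simplices, thus amounts to a sequence of small and well-conditioned linear problems.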
3
Comparison of the Considered Four Marching Algorithms
The main advantage of the original Marching Square/Cube algorithm (see subsection 2.1) is that it is a fast and simple method for computing iso-hypersurfaces in R^{m+1}. However, its applicability is limited to essentially only the values m + 1 = 2 and m + 1 = 3 since the number of entries for higher dimensions increases fast, e.g., for 4D we get 2^16 = 65536 entries and, for 5D,
Fig. 1. On the left hand side, the grid size is 22 × 22. In the middle, the grid size is 21 × 21. Typical artefact arising when using the Marching Square/Cube algorithm of subsection 2.1 or its tensor-product upgrade in subsection 2.2. On the right hand side, the grid size is 107 × 107. Visibly, the defect in the singularity point has disappeared, but it can still be found after sufficient zooming in. This situation may arise with any of the marching algorithms in subsections 2.1, 2.2 and 2.3. However, for the pixel-based GPU-computable algorithm in subsection 2.3 it is necessary to zoom down to "subpixel level", which is impossible, and the singularity is correctly resolved at the minimal possible "pixel level".
2^32 = 4294967296 entries! Other essential limitations of the original Marching Square/Cube algorithm include the following.
– The algorithm works only when the solution manifold has maximal rank everywhere and this maximal rank is 1.
– If the equation (1-4) has isolated solutions (e.g., a unique solution), in which case the rank is 0, the algorithm can easily "lose" the solution.
– The algorithm has no mechanism for detection of singularities and can easily misrepresent the topology of the solution manifold (see Figure 1).
– The storage of the m-dimensional simplices obtained by the algorithm as part of the solution hyper-surface in R^{m+1} becomes increasingly complicated, especially taking into consideration that these simplices will in general be nonuniform (unlike the Marching Simplex algorithm where the "marching" can be performed over uniform simplices only).
The tensor-product upgrade considered in subsection 2.2 improves upon the original Marching Square/Cube algorithm in several aspects, as follows.
– The algorithm is applicable in the most general setting (1-4) for an arbitrary number of equations n = 1, 2, . . ..
– It can be used not only when the solution has constant maximal rank everywhere, but also in the case of variable rank and in the presence of singularities, including branching of solutions.
– Conceptually, this algorithm has a mechanism for detection of singularities.
Weaknesses of the algorithm are as follows.
In [1] an algorithm for automatic generation of a lookup table of the iso-surface and its triangulation for all possible 2^(2^k) entries of the hypercube in a k-dimensional regular grid is proposed, but its efficiency is already questionable for k = m+1 = 4, 5.
– The system of n equations to be solved on every "marching" step is nonlinear, albeit polylinear. The total degree of the equations in this system is n + m.
– With the increase of n, the computations become increasingly ill-conditioned.
– The number of conditional statements that have to be verified is equal to the number of A-coefficients of the monomials in the polylinear expansion of the product $\prod_{j=1}^{n+m}(a_j + b_j x_j) = A_1 + \sum_{j=1}^{n+m} A_{1j} x_j + \cdots + A_{n+m,1}\prod_{j=1}^{n+m} x_j$. It can be seen that this number is 2^{n+m}, i.e., it increases exponentially with n + m. (The ordering of the conditional statements is as follows (see [11]): first the condition A_{n+m,1} = 0 is verified, then A_{n+m−1,j} = 0, j = 1, . . . , n+m, and so on. At level n + m − j there are $\binom{n+m}{n+m-j} = \binom{n+m}{j}$ conditional statements to verify.)
– Because of the ill-conditioned computations, the method is very sensitive (non-robust) with respect to round-off errors and other systematic and random errors.
– Due to its non-robustness, it is possible to lose part of the solution or wrongly detect the presence or absence of singularity, or the type of singularity, leading to possible misinterpretation of the topology of the solution manifold (see Figure 1).
– It is possible to ensure correctness of the results obtained by this algorithm by appropriately chosen small tolerances, which is a common practice in the computation of intersections [6,12] but, due to the intrinsic non-robustness of the algorithm, tuning the tolerances is quite delicate.
The algorithm in subsection 2.3 provides a limited upgrade of the algorithm in subsection 2.1: the efficient range of the latter is n = 1, m = 1 or m = 2 as mentioned earlier, while the former algorithm of subsection 2.3 handles well all cases in (5). This algorithm is not as generally applicable as the one from subsection 2.2 but within its range of application it is highly efficient and quite robust. Because of its robustness it is relatively straightforward to adjust the tolerances [11] for it. A very interesting, and perhaps rather surprising, fact is that within the given resolution this algorithm resolves correctly the singularities and, as a consequence, does not miss parts of the solution. The reason for this feature is explained by the situation given in Figure 1. As weaknesses of this algorithm we note the following.
– Some important computations, in particular related to sorting, are still done on the CPU, and act as a bottleneck slowing down the overall high performance of the algorithm [11].
– In cases requiring long computations, the GPU, which is oriented towards real-time applications, tends to cut its computations and produce random results instead. This problem begins to be addressed only in the latest generation of GPUs with increased programmability.
The Marching Simplex algorithm of subsection 2.4 works for all n and m in (1), like the algorithm in subsection 2.2, but without the above-listed deficiencies of the latter.
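To make the exponential count of A-coefficients in the tensor-product upgrade discussed above concrete, here is a small enumeration sketch (our own illustration, not part of [11]): each coefficient of the polylinear expansion of the product corresponds to one of the 2^{n+m} subsets of the variables.

```python
from itertools import product

def polylinear_coefficients(a, b):
    """Coefficients of prod_j (a_j + b_j x_j), indexed by the subset of variables
    appearing in the corresponding monomial (1 = variable present)."""
    d = len(a)                                   # d = n + m
    coefficients = {}
    for mask in product((0, 1), repeat=d):       # 2^d subsets
        value = 1.0
        for j, pick in enumerate(mask):
            value *= b[j] if pick else a[j]
        coefficients[mask] = value               # e.g. coefficients[(1,)*d] = b_1 * ... * b_d
    return coefficients
```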
– It outperforms the Marching Square/Cube algorithm of subsection 2.1 in iso-surfacing already for n = 2, 3, and its performance continues to be good for higher values of n, partly because no look-up tables are needed for it.
– It also considerably outperforms the tensor-product approach in subsection 2.2 because the system of equations to be solved for every tile is linear.
– Due to the convex nature of the simplex and the linearity of the related system of equations, for every n the needed computations are convex and, hence, intrinsically well-conditioned.
– As a consequence of the previous items, the algorithm is robust with respect to round-off and random errors.
– The algorithm allows highly efficient and simple kd-tree refinement (kd-tree in k dimensions: quad-tree in 2D, oct-tree in 3D, etc.).
– The kd-tree refinement is coupled with a comprehensive criterion for singularity detection based on the Morse lemma [14]. Moreover, due to the above-said convexity of the computations, this criterion is also quite robust with respect to round-off and random errors.
– As noted in the concluding section of [5], this algorithm can also be used to compute approximately solution manifolds which have a fractal nature. One such example has been considered in [10].
The only shortcoming of the algorithm we are aware of is that it is CPU-computable and currently it is run only as a sequential "marching" algorithm.
4
Future Work
Our plans for future work in the directions of research discussed in this paper are related to further development of the algorithms presented in subsections 2.3 and 2.4, and include the following.
1. Removing the bottleneck in the algorithm of subsection 2.3 due to sorting on the CPU, by using instead the GPU-computable bitonic sort presented in [8] (a small reference sketch of this sorting network is given after this list).
2. Upgrading the algorithm from subsection 2.3 by replacing the RGB colour (mode 2) coding [4] by RGB colour (mode 3) coding [4] which is based on the principle of isometric conversion between dimension and resolution [4]. In this context, we intend also to replace the Cantor diagonal type of basis-matching algorithm in [4] with the simpler basis-matching algorithm proposed in [3].
3. We intend to combine the algorithms in subsections 2.3 and 2.4 after the upgrade in item 2 (see [3]) by using the GPU-computable algorithm as an initial localization stage, with a subsequent refinement stage via the CPU-computable Marching Simplex algorithm of subsection 2.4.
4. Full-scale parallelization of the Marching Simplex algorithm of subsection 2.4 is possible thanks to the kd-tree refinement it uses.
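For item 1, the following is a hedged CPU reference sketch of the compare-exchange network of bitonic sort, the structure that GPU implementations such as [8] map onto graphics hardware; it is only an illustration of the network, not the GPUSort code.

```python
def bitonic_sort(a):
    """In-place bitonic sorting network; len(a) must be a power of two."""
    n = len(a)
    k = 2
    while k <= n:               # size of the bitonic sequences being merged
        j = k // 2
        while j > 0:            # compare-exchange distance
            for i in range(n):
                partner = i ^ j
                if partner > i:
                    ascending = (i & k) == 0
                    if (ascending and a[i] > a[partner]) or (not ascending and a[i] < a[partner]):
                        a[i], a[partner] = a[partner], a[i]
            j //= 2
        k *= 2
    return a
```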
References
1. Bhaniramka, P., Wenger, R., Crawfis, R.: Isosurfacing in Higher Dimensions. In: Proceedings of the 11th IEEE Visualization 2000 Conference (VIS 2000). IEEE Computer Society, Los Alamitos (2000)
2. Bourke, P.: Polygonising a Scalar Field (1997), http://astronomy.swin.edu.au/~pbourke/modelling/polygonise/
3. Dechevsky, L.T., Bang, B., Gundersen, J., Lakså, A., Kristoffersen, A.R.: Solving nonlinear systems of equations on graphics processing units. In: Lirkov, I., Margenov, S., Waśniewski, J. (eds.) LSSC 2009. LNCS, vol. 5910. Springer, Heidelberg (2009)
4. Dechevsky, L.T., Gundersen, J.: Isometric Conversion Between Dimension and Resolution. In: Dæhlen, M., Mørken, K., Schumaker, L. (eds.) Mathematical Methods for Curves and Surfaces, pp. 103–114. Nashboro Press (2005)
5. Dechevsky, L.T., Kristoffersen, A.R., Gundersen, J., Lakså, A., Bang, B., Dokken, T.: A "marching simplex" algorithm for approximate solution of non-linear systems of equations and applications to branching of solutions and intersection problems. Int. J. Pure Appl. Math. 33(3), 407–431 (2006)
6. Dokken, T.: Aspects of Intersection Algorithms and Approximation. Ph.D. Thesis, University of Oslo (1996)
7. Glassner, A.S.: An Introduction to Ray Tracing. Morgan Kaufmann, San Francisco (2000)
8. Govindaraju, N.K., Manocha, D., Raghuvanshi, N., Tuft, D.: GPUSort: High Performance Sorting using Graphics Processors. Department of Computer Science, UNC Chapel Hill (2005), http://gamma.cs.unc.edu/GPUSORT/results.html
9. Gundersen, J., Dechevsky, L.T.: Scientific visualization for the ODE-based simulator BedSim of LKAB. Int. J. Pure Appl. Math. 41(9), 1197–1217 (2007)
10. Kristoffersen, A.R.: M.Sc. Diploma Thesis, Narvik University College (2004)
11. Kristoffersen, A.R., Georgiev, D.T., Dechevsky, L.T., Lakså, A., Gundersen, J., Bang, B.: Vertex versus Pixel Based Intersection: Using the GPU for Investigation of Surface Intersection. Int. J. Pure Appl. Math. 33(3), 407–431 (2006)
12. Sederberg, T.W., Chen, F.: Implicitization using moving curves and surfaces. Computer Graphics Annual Conference Series, pp. 301–308 (1995)
13. Shirley, P.: Realistic Ray Tracing. A K Peters, Natick (2000)
14. Zorich, V.A.: Mathematical Analysis, vol. 1. Nauka, Moscow (1981) (in Russian)
Transitions from Static to Dynamic State in Three Stacked Josephson Junctions
Ivan Christov, Stefka Dimova, and Todor Boyadjiev
Faculty of Mathematics and Informatics, University of Sofia, 5 James Bourchier Boulevard, 1164 Sofia, Bulgaria
{ivanh,dimova,todorlb}@fmi.uni-sofia.bg
Abstract. Using the Sakai-Bodin-Pedersen model, a system of three perturbed sine-Gordon equations is numerically studied. Effective numerical algorithms are proposed and realized to investigate the transitions from static to dynamic state in a stack of three Josephson junctions. Critical currents for individual junctions are found for different values of the damping parameter at low magnetic field. We show that the switching from static to dynamic state of the interior junction can trigger the switching of the exterior ones, and this process leads to current locking. We find that the critical current of the individual junction depends on the damping parameter and on the static or dynamic states of the other junctions.
1
Introduction
Multistacked long Josephson Junctions (JJs) form an interesting physical system, where both nonlinearity and interactions between subsystems play an important role. Such systems allow one to study physical effects that do not occur in single JJs. One of the most interesting experimental results for two stacked JJs found in recent years is the so-called current locking (CL). The essence of this phenomenon is as follows: there exists a range of the external magnetic field, where the different junctions switch to dynamic state simultaneously when the external current exceeds some critical value. It was shown by means of numerical simulation [3] that the experimentally found CL for two stacked JJs can be obtained and well explained in the framework of the inductive coupling model [5]. The simplest generalizable model of stacked JJs is the stack of three JJs, because it takes into account the different behavior of the interior and exterior junctions. The first and the third junctions are coupled only to one neighboring junction, while the second junction is coupled to its two neighbors below and above. In the next section the mathematical model is described. In the third section effective numerical methods and algorithms are proposed to investigate the transitions of the junction from static to dynamic state. The numerical results are discussed in the last section. Critical currents for individual junctions are found for different values of the damping parameter at low magnetic field. It is shown that for three stacked symmetric junctions the critical current of the
interior junction strongly depends on the dynamic state of the exterior ones and vice versa. At the same time, there is a range in the magnetic field in which the junctions switch simultaneously to dynamic state, i.e., current locking occurs.
2
Mathematical Model
The dynamics of the Josephson phases $\varphi(x,t) = (\varphi_1(x,t), \varphi_2(x,t), \varphi_3(x,t))^T$ in symmetric three stacked inductively coupled long JJs is described by the following system of perturbed sine-Gordon equations [5]:

$$\varphi_{tt} + \alpha \varphi_t + J + \Gamma = L^{-1} \varphi_{xx}, \qquad -l < x < l, \quad 0 < t \le T, \qquad\qquad (1)$$

where $\alpha$ is the dissipation coefficient (damping parameter), $\Gamma = \gamma\,(1,1,1)^T$ is the vector of the external current density, $J = (\sin\varphi_1, \sin\varphi_2, \sin\varphi_3)^T$ is the vector of the Josephson current density, $L = \mathrm{tridiag}\,(1, S, 1)$, $(-0.5 < S \le 0)$, and $2l$ is the length of the stack. $L^{-1}$ is the following symmetric matrix:

$$L^{-1} = \frac{1}{1-2S^2}\begin{pmatrix} 1-S^2 & -S & S^2 \\ -S & 1 & -S \\ S^2 & -S & 1-S^2 \end{pmatrix}.$$
In this work we consider stacks of overlap geometry placed in external magnetic field $h_e$, therefore the system (1) should be solved together with the boundary conditions:

$$\varphi_x(-l) = \varphi_x(l) = H, \qquad\qquad (2)$$

where $H$ is the vector $H = h_e\,(1,1,1)^T$. To close the differential problem appropriate initial conditions must be posed:

$$\varphi(x,0) \ \text{-- given}, \qquad \varphi_t(x,0) \ \text{-- given}. \qquad\qquad (3)$$
The existence of Josephson current generates a specific magnetic flux. When the external current is less than some critical value all the junctions are in some static state, i.e., we have a time-independent solution of (1), (2). In this case the measured voltages in all junctions are zero. Exceeding this critical value leads to switching of the system to dynamic state and we have nonzero voltage in at least one of the junctions. The voltage in the i-th junction is mathematically given by:

$$V_i = \lim_{T \to \infty} \frac{1}{2lT} \int_0^T \!\! \int_{-l}^{l} \varphi_{i,t}(t,x)\, dx\, dt. \qquad\qquad (4)$$
We define the critical current of an individual junction as the value at which this junction switches to nonzero voltage. We also want to make a correspondence between the critical current values of switching to dynamic state and the bifurcation values of some static solutions.
In the three stacked case we consider static solutions which are combinations of solutions existing in the one-junction case. The important solutions in the case of a single junction are:
– Meissner solutions, denoted by M, for parameters he = 0, γ = 0: ϕ(x) = kπ, k = 0, ±1, ±2, . . . ;
– fluxon (antifluxon) solutions, for which there are exact analytical expressions in the case of infinite junctions (l = ∞) and he = 0, γ = 0. The single fluxon/antifluxon solution has the well-known form ϕ(x) = 4 arctan(exp(±x)) + 2kπ.
Further, for n-fluxon distributions we use the simple notation F^n. In this work we consider the triplets (M, M, M), (M, F^1, M), (M, F^2, M), because, as the numerical experiments show, they play a significant role at low magnetic field he. To study the global stability of a possible static solution, the following Sturm-Liouville problem (SLP) is generated:

$$-L^{-1} u_{xx} + Q(x)\,u = \lambda\, u, \qquad u_x(\pm l) = 0, \qquad\qquad (5a)$$
$$\int_{-l}^{l} \langle u, u \rangle \, dx - 1 = 0, \qquad\qquad (5b)$$
where Q(x) = Jz (ϕ(x)). To study the global stability is equivalent to study the positive definiteness of the second variation of the potential energy of the system. The minimal eigenvalue λmin determines the stability of the solution under consideration. A minimal eigenvalue, equal to zero, means a bifurcation caused by change of some parameter, in our case — the external current γ. We use the calculated bifurcation values for the static solutions, found when solving the dynamic problem, for comparison.
3
Numerical Methods and Algorithms
To solve the dynamic problem (1), (2), (3), we use the finite difference method. The main equation (1) is approximated by the "cross-shaped" scheme. To approximate the boundary conditions (2), three-point one-sided finite differences are used. Let h and τ be the steps in space and time respectively, and δ = (τ/h)^2. Then the difference scheme is:

$$\hat y^{\,l}_k = (1 + 0.5\alpha\tau)^{-1}\Big[\,2 y^l_k + (0.5\alpha\tau - 1)\,\check y^{\,l}_k - \tau^2(\sin y^l_k + \gamma) + \sum_{m=1}^{3} \delta\, a_{lm}\, y^m_{\bar x x,\,k}\,\Big],$$
$$k = 1, \dots, n-1, \quad l = 1, 2, 3, \quad L^{-1} = (a_{lm})_{l,m=1}^{3},$$
$$\hat y^{\,l}_0 = (4\hat y^{\,l}_1 - \hat y^{\,l}_2 - 2 h h_e)/3, \qquad \hat y^{\,l}_n = (4\hat y^{\,l}_{n-1} - \hat y^{\,l}_{n-2} + 2 h h_e)/3.$$

The approximation error of this scheme is O(τ^2 + h^2).
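A minimal vectorized sketch of one interior step of this scheme follows (our own illustration based on the reconstructed update formula above; the boundary columns still require the one-sided boundary formulas, and all names are hypothetical).

```python
import numpy as np

def time_step(y, y_prev, Linv, alpha, tau, h, gamma):
    """One explicit step of the cross-shaped scheme for the interior grid points.
    y, y_prev : 3 x (n+1) arrays of phases at the current and previous time levels,
    Linv      : the 3 x 3 matrix L^{-1}, gamma : external current density."""
    delta = (tau / h) ** 2
    second_diff = np.zeros_like(y)
    second_diff[:, 1:-1] = y[:, :-2] - 2.0 * y[:, 1:-1] + y[:, 2:]   # y_xx-bar in space
    y_new = (2.0 * y + (0.5 * alpha * tau - 1.0) * y_prev
             - tau ** 2 * (np.sin(y) + gamma)
             + delta * (Linv @ second_diff)) / (1.0 + 0.5 * alpha * tau)
    return y_new    # boundary columns must afterwards be set from the one-sided formulas
```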
To check the numerical stability and the real order of accuracy, we have made computations for fixed time levels and embedded meshes in space. The results show second order of convergence in space and time. The numerical procedure for finding the critical currents of the individual junctions for fixed parameters S, l, α works as follows. We start with he = 0, γ = 0, ϕ(x, 0) = 0, ϕt(x, 0) = 0. For given magnetic field he, increasing the current γ from zero by a small amount δγ, we calculate the current-voltage dependence (denoted further as the I-V characteristic) by using the difference scheme. To find the approximate value of the voltage Vi given by (4), an averaging procedure is proposed. The numerous experiments made show that it is reasonable to neglect the transient stage from one value of γ to the next, taking place in a time interval (0, t0). The value of t0 is determined by numerical experiments. Thus, we calculate the average voltages $\bar V_{i,k}$, i = 1, 2, 3, for t ≥ t0 for embedded intervals in t, Tk = T0 + kΔ:

$$\bar V_{i,k} = \frac{1}{2 l T_k} \int_{t_0}^{t_0+T_k} \!\! \int_{-l}^{l} \varphi_{i,t}(t,x)\, dx\, dt = \frac{1}{2 l T_k} \int_{-l}^{l} \big[\varphi_i(t_0 + T_k, x) - \varphi_i(t_0, x)\big]\, dx.$$
The averaging is finished when the differences $|\bar V_{i,k+1} - \bar V_{i,k}|$ become less than a given value ε. Thus, the next point of the I-V characteristic is found. In our computations, for the next value of the current γ, the phase distributions of the last two time levels achieved are used as initial time levels. We increase the current until all the junctions switch to nonzero voltage. Then the external field he is increased by a small amount and the I-V characteristic is calculated for this new value of he. As initial data, the phase distributions for γ = 0 and the previous value of he are used. Because of the symmetry of the stack we investigate only the case he ≥ 0. To find the critical currents for he ∈ [0, 1.5] more than 500 000 000 time steps are needed. In order to find the bifurcation curves of the static distributions of the magnetic flux we solve numerically the static problem, corresponding to the dynamic one (1), (2). To solve the boundary-value problem for the nonlinear system of second order ordinary differential equations we use an iterative algorithm, based on the continuous analog of Newton's method (CANM) [4]. CANM gives a linear boundary value problem at each iteration step, which is solved numerically by means of the Galerkin finite element method (FEM) and quadratic finite elements. FEM is used also to reduce the matrix Sturm-Liouville problem (5) to an algebraic eigenvalue problem whose few smallest eigenvalues and the corresponding eigenfunctions are found by the subspace iteration method [1].
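A small sketch of the voltage averaging described above (our own illustration, with hypothetical names): because the time integral of φ_{i,t} telescopes, only the phase profiles at t0 and t0 + Tk and a quadrature in x are needed.

```python
import numpy as np

def average_voltage(phi_at_t0, phi_at_t0_plus_Tk, x, Tk):
    """Approximate average voltage of one junction from two stored phase profiles."""
    l = 0.5 * (x[-1] - x[0])                      # half-length of the junctions
    return np.trapz(phi_at_t0_plus_Tk - phi_at_t0, x) / (2.0 * l * Tk)
```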
Fig. 1. Bifurcation diagram in the plane (he, γ), α = 0.1. Parameters: 2l = 10, S = −0.3; R – JJin in resistive state, FF – JJin in flux-flow state. (Axes: external magnetic field vs. critical current; curves labelled MMM, MF1M, MF2M; legend: M – JJex, filled marker – JJin.)
To test the accuracy of the above methods, we have used the method of Runge by computing the solutions on three embedded meshes. The numerous experiments made show a super-convergence of order four. For a more detailed explanation of these methods see [2].
4
Numerical Results
Let us mention that the stack is symmetric, but it allows asymmetric solutions, for example the asymmetric static solution (F, M, M). Nevertheless, all of the solutions achieved when solving the dynamic problem were symmetric: the solutions in the exterior junctions were of the same type. The results for the critical currents of the individual junctions with stack parameters S = −0.3, α = 0.1, 2l = 10 are graphically shown in Fig. 1. We consider the interval [0, 1.5] in he. In this interval the static solutions which play a role are (M, M, M), (M, F^1, M), (M, F^2, M), i.e., the fluxons enter the interior junction (JJin) first. The following conclusions can be made. There is a domain in the vicinity of zero field where the junctions switch to nonzero voltage simultaneously, i.e., we have CL.
Fig. 2. I-V characteristic, he = 0.67 (2l = 10, S = −0.3, α = 0.1; curves for JJex and JJin, with switchings to the resistive state marked).
Fig. 3. I-V characteristic, he = 1.37 (2l = 10, S = −0.3, α = 0.1; curves for JJex and JJin, with switchings to the resistive and flux-flow states marked).
Fig. 4. Bifurcation diagram in the plane (he, γ), α = 0.01. Parameters: 2l = 10, S = −0.3; simultaneous switching to dynamic state (axes: external magnetic field vs. critical current; curves MMM and MF2M).
In addition there are two smaller domains at which we have CL, divided by domains at which JJin switches first to the resistive (R) state. At higher fields JJin switches to the flux-flow (FF) state (a moving chain of fluxons). In this domain we have no CL at all. The moving fluxons create a time-dependent perturbation of the Josephson phase in the exterior junctions (JJex), which results in switching to nonzero voltage at a lower current than in the case when JJin is in the R-state. The I-V characteristics for he = 0.67 and he = 1.37 are shown in Fig. 2 and Fig. 3. Let us mention that for every he the lower critical current point lies on some bifurcation curve of the solutions (M, M, M), (M, F^1, M), (M, F^2, M). As opposed to the case α = 0.1, for α = 0.01 we obtain simultaneous switching to nonzero voltage in the whole interval [0, 1.5]. In this case the critical current points lie only on the bifurcation curves of the (M, M, M) and (M, F^2, M) solutions. The results are graphically shown in Fig. 4.
Acknowledgment. This work is supported by the Sofia University Scientific Foundation under Grant No. 39/2009.
References
1. Bathe, K.J., Wilson, E.: Numerical Methods in Finite Element Analysis. Prentice Hall, Englewood Cliffs (1976)
2. Christov, I., Dimova, S., Boyadjiev, T.: Stability and Bifurcation of the Magnetic Flux Bound States in Stacked Josephson Junctions. In: Margenov, S., Vulkov, L.G., Waśniewski, J. (eds.) NAA 2008. LNCS, vol. 5434, pp. 224–232. Springer, Heidelberg (2009)
3. Goldobin, E., Ustinov, A.V.: Current locking in magnetically coupled long Josephson junctions. Phys. Rev. B 59, 11532–11538 (1999)
4. Puzynin, I.V., et al.: Methods of computational physics for investigation of models of complex physical systems. Particles & Nuclei 38 (2007)
5. Sakai, S., Bodin, P., Pedersen, N.F.: Fluxons in thin-film superconductor-insulator superlattices. J. Appl. Phys. 73, 2411–2418 (1993)
Factorization-Based Graph Repartitionings
Kateřina Jurková 1 and Miroslav Tůma 2
1 Technical University of Liberec, Studentská 2, 461 17 Liberec 1, Czech Republic
[email protected]
2 Institute of Computer Science, Czech Academy of Sciences, Pod Vodárenskou věží 2, 18207 Praha 8, Czech Republic
[email protected]
Abstract. The paper deals with the parallel computation of matrix factorization using graph partitioning-based domain decomposition. It is well-known that the partitioned graph may have both a small separator and well-balanced domains but sparse matrix decompositions on domains can be completely unbalanced. In this paper we propose to enhance the iterative strategy for balancing the decompositions from [13] by graph-theoretical tools. We propose the whole framework for the graph repartitioning. In particular, new global and local reordering strategies for domains are discussed in more detail. We present both theoretical results for structured grids and experimental results for unstructured large-scale problems.
1
Introduction
The problem of proper graph partitioning is one of the classical problems of parallel computing. The actual process of obtaining high-quality partitionings of undirected graphs which arises in many practical situations is reasonably well understood. In addition, the resulting algorithms are sophisticated enough [5,1]. Such situations are faced, e.g., if standard criteria for partitionings expressed by balancing sizes of domains and minimizing separator sizes are considered. However, the situation may be different if one needs to balance the time to perform some specific operations. An example can be the time to compute sparse matrix decompositions, their incomplete counterparts, or the time for some auxiliary numerical transformations. It can happen that a partitioning which is well-balanced with respect to the above-mentioned standard criteria may be completely unbalanced with respect to some time-critical operations on the domains. The general framework of multi-constraint graph partitioning may not solve the problem. The graph partitioning problem is closely coupled with the general problem of load balancing. In particular, the partitioning represents a static load balancing. In practice, load distribution in a computation may be completely different from the original distribution at the beginning of the computation. Further, dynamic load balancing strategies can then redistribute the work dynamically. A lot of interest was devoted to analysis of basic possible sources of such problems [4].
Principles for curing such problems can be found, e.g., in [7,14]. In some situations, in order to cover complicated and unpredictably time-consuming operations on the individual domains, one can talk about minimization with respect to complex objectives [13], see also [12]. The strategy proposed in [13] consists in improving the partitioning iteratively during the course of the computation. In some cases much more is known about such critical operations. This paper aims at exploiting this knowledge. Then the additional information may be included into the graph partitioner, or used to improve the graph partitioning in one simple step, providing some guarantees on its quality at the same time. Both these strategies have their own pros and cons. While integration of the additional knowledge into the graph partitioner seems to be the most efficient approach, it may not be very flexible. In addition, an analysis of such an approach may not be simple when the typical multilevel character of partitioning algorithms is taken into account. A careful redistribution of the work in one subsequent step which follows the partitioning seems to provide useful flexibility. Since the time-critical operation performed on the domains is the sparse matrix factorization, the key to our strategy is to exploit the graph-theoretic tools and indicators for the repartitioning. Let us concentrate on the complete factorization of a symmetric and positive definite (SPD) matrix which is partitioned into two domains. In this case, the underlying graph model of the factorization is the elimination tree. Our first goal is to show the whole framework of theoretical and practical tools which may allow post-processing of a given graph partitioning in one simple step. Then the repartitioned graph should be better balanced with respect to the factorization. Further we will discuss one such tool in more detail. Namely, we will mention that we can directly determine counts of columns which should be modified in the new factorization once we change the border nodes, which are vertices of the separated domains incident to the separator. We confirm both theoretically and experimentally that by carefully chosen reorderings we can decrease the number of these modifications. Section 2 summarizes some terminology and describes the problem which we would like to solve. Section 3 explains basic ideas of our new framework. Then we discuss the problem of minimizing modifications in factorized matrices on domains both theoretically and experimentally.
2
Basic Terminology and Our Restrictions
Let us first introduce some definitions and concepts related to the complete sparse matrix factorizations and reorderings. For simplicity we assume that adjacency graphs of all considered matrices are connected. Also, we will discuss only the standard pointwise graph model. But note that practical strategies for graph repartitioning should be based on blocks or other coarse representations described, e.g., by factorgraphs or hypergraphs. The decomposition of an SPD matrix A is controlled by the elimination tree. This tree and its subtrees provide most of the structural information relevant to the sparse factorization. Just by traversing the elimination tree, sizes of matrix
factors, their sparsity structure, supernodal structure or other useful quantities [3,10] can be quickly determined. The elimination tree T is the rooted tree with the same vertex set as the adjacency graph G of A and with the vertex n as its root. It may be represented by one vector, typically called PARENT[.], defined as follows:

$$\mathrm{PARENT}[j] = \begin{cases} \min\{\, i > j \mid l_{ij} \neq 0 \,\}, & \text{for } j < n, \\ 0, & \text{for } j = n, \end{cases} \qquad\qquad (1)$$

where l_ij are entries of L. The n-th column is the only column which does not have any off-diagonal entries. When applying Cholesky factorization to a sparse matrix, it often happens that some matrix entries which were originally zeros become nonzeros. These new nonzero entries are called fill-in. High-quality sparse Cholesky factorization strongly minimizes the fill-in. Tools for this minimization are called fill-in minimizing reorderings. Basically, there are two categories of these reordering approaches. Global reorderings such as nested dissection (ND) consider the graph as one entity and divide it into parts by some predefined, possibly recursive heuristics. Local reorderings are based on subsequent minimization of the quantities which represent local estimates of the fill-in. Important cases of such reorderings are MMD and AMD variations of the basic minimum degree (MD) algorithm. Many quantities related to the sparse factorization of SPD matrices can be efficiently computed only if the matrix is preordered by an additional specific reordering apart from a chosen fill-in minimizing reordering. One such additional reordering useful in practical implementations is the postordering. It is induced by a postordering of the elimination tree of the matrix, being a special case of a topological ordering of the tree. For a given rooted tree, its topological ordering labels children vertices of any vertex before their parent. Note that the root of a tree is always labeled last. Further note that any reordering of a sparse matrix that labels a vertex earlier than its parent vertex in the elimination tree is equivalent to the original ordering in terms of fill-in and the operation count. In particular, postorderings are equivalent reorderings in this sense.
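The vector PARENT[.] in (1) does not require the factor L: it can be computed directly from the structure of A by the classical elimination-tree algorithm (see, e.g., [3,10]). The following is a hedged sketch of that algorithm with 0-based indices and our own naming.

```python
def elimination_tree(lower_structure, n):
    """parent[j] of the elimination tree of a symmetric matrix A.
    lower_structure[i]: column indices k < i with a_ik != 0."""
    parent = [-1] * n
    ancestor = [-1] * n                 # virtual forest with path compression
    for i in range(n):
        for k in lower_structure[i]:
            j = k
            while ancestor[j] != -1 and ancestor[j] != i:
                nxt = ancestor[j]
                ancestor[j] = i          # compress the path towards i
                j = nxt
            if ancestor[j] == -1:
                ancestor[j] = i
                parent[j] = i
    return parent
```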
3
Framework for the Graph-Based Repartitioning
In this section we will split the problem of repartitioning into several simpler tasks. Based on this splitting we will propose individual steps of our new approach. As described above, the problem arises if we encounter a lack of balance between sizes of the Cholesky factors on the domains. Using the elimination tree mentioned above, we are able to detect this imbalance without doing any actual factorization. This detection is very fast having its time complexity close to linear [10]. Then, the result of the repartitioning step is the new distribution of the graph vertices into domains which also implicitly defines the graph separator. The repartitioning step can be naturally split into two simpler subproblems. First, one needs to decide which vertices should be removed from one domain. Second, it should be determined where these removed vertices should be placed into the reordering sequence of the other domain. Alternatively, the domains may
be reordered and their factorizations recomputed from scratch. In the following two subsections, we will consider these two mentioned subproblems separately. For both of them we present new considerations. The third subsection of this section will present one simpler task in more detail as well as both theoretical and experimental results.
3.1 Removal of Vertices
Assume that the matrices on domains were reordered by a fill-in minimizing reordering. Further assume that some vertices should be removed from one domain to decrease the potential fill-in in the factorization. An important task is to determine which vertices should be removed from the domain such that their count would be as small as possible, in addition to the further constraints mentioned below. In other words, the removal of chosen vertices should decrease the fill-in as fast as possible. The following Algorithm 1 offers a tool to solve this problem. It counts the number of row subtrees of the elimination tree in which each vertex is involved. Note that row subtrees represent sparsity structures of rows of the Cholesky factor and they can be found by traversing the elimination tree. The algorithm is new, but it was obtained by modifying the procedure which determines the leaves of the row subtrees in the elimination tree in [11].
Algorithm 1. Count the number of row subtrees in which the vertices are contained.
  for column j = 1, n do
    COUNT(j) := n − j + 1
    PREV ROWNZ(j) := 0
  end for
  for column j = 1, n do
    for each a_ij ≠ 0, i > j do
      k := PREV ROWNZ(i)
      if k < j − |T[j]| + 1 then
        for ξ = c_{t−1}, . . . , c_t − 1 do
          COUNT(ξ) := COUNT(ξ) − 1
        end for
      end if
      PREV ROWNZ(i) := j
    end for
  end for
Here T denotes the elimination tree of matrix A, and T[i] denotes the subtree of T rooted in the vertex i. T[i] also represents the vertex subset associated with the subtree, that is the vertex i and all its proper descendants in the elimination tree. |T[i]| denotes the number of vertices in the subtree T[i]. Consequently, the number of proper descendants of the vertex i is given by |T[i]| − 1. PREV ROWNZ is an auxiliary vector for tracking nonzeros in previously traversed rows. The
computed quantity is denoted by COUNT. A critical assumption here is that the elimination tree is postordered. Having computed the counts, our heuristic rule for fast decrease of the fill-in is to remove vertices with the largest COUNT. Let us note that the removal of vertices may also change the shape of the elimination tree, and our rule does not take this fact into account. To consider this, the recent theory of sparse exact updates which uses multi-indices should be taken into account, see the papers by Davis and Hager quoted in [3]. Further note that the removal should also take into account the distance of the removed vertices from the border vertices. Therefore, we propose to use the counts from Algorithm 1 as the secondary cost for the Fiduccia-Mattheyses improvement of the Kernighan-Lin algorithm [6]. This is an iterative procedure which, in each iteration, looks for a subset of vertices from the two graph domains such that their swapping leads to a partition with a smaller size of the edge separator. Our modification of the cost function then seems to enable more efficient repartitionings.
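A naive, quadratic-time reference computation of the same COUNT values, useful for checking Algorithm 1 on small examples, is sketched below (our own illustration with hypothetical names; it uses the standard fact that the row subtree of row i is the union of the elimination-tree paths from the nonzero positions of row i of A up to i).

```python
def row_subtree_counts(parent, lower_structure, n):
    """COUNT[v] = number of row subtrees of the elimination tree containing v.
    parent[v]         : parent of v in the elimination tree (-1 at the root),
    lower_structure[i]: column indices j < i with a_ij != 0."""
    count = [0] * n
    for i in range(n):
        members = {i}                        # the row subtree of row i always contains i
        for j in lower_structure[i]:
            v = j
            while v != i and v not in members:
                members.add(v)               # climb the path j -> ... -> i
                v = parent[v]
        for v in members:
            count[v] += 1
    return count
```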
3.2 Insertion of Vertices
Having a group of vertices to be inserted into the new domain D2, we need to determine where these vertices should appear in the new reordering sequence. Here the objective function is to minimize the effect on the fill-in in the corresponding Cholesky factor. Note that in the next subsection we will mention another motivation: minimize the number of columns to be recomputed in the Cholesky factor, if it was computed. In short, theoretical considerations related to the delayed elimination in [8] motivate an insertion of a vertex to the position of the parent of the least common ancestor of its neighbors in the elimination tree T at hand. Consider a vertex to be inserted, and denote by N the set of its neighbors in D2. Let α be the least common ancestor of N in T. Denote by Tr[α] the unique subtree of T determined by α and N. Then the increase of the fill-in in the new decomposition includes one edge for each vertex of Tr[α] and at most β multiples of the union of adjacency sets of the vertices from Tr[α], where β is the distance from α to the root of T plus one. In order to minimize the effect of the insertion on the fill-in, we need to minimize this amount. As in the previous subsection, this criterion may represent an additional cost function for a local heuristic like Kernighan-Lin, and we are about to perform an experimental study of its application.
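The least common ancestor α of the neighbour set N in the elimination tree, on which the insertion rule above is built, can be found by the usual walk-up procedure; a small sketch follows (our own helper, assuming vertex depths are available).

```python
def least_common_ancestor(parent, depth, nodes):
    """LCA of a nonempty set of vertices in a rooted tree given by parent[] and depth[]."""
    nodes = list(nodes)
    a = nodes[0]
    for b in nodes[1:]:
        while depth[a] > depth[b]:
            a = parent[a]
        while depth[b] > depth[a]:
            b = parent[b]
        while a != b:                 # climb in lockstep until the two paths meet
            a, b = parent[a], parent[b]
    return a
```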
3.3 Repartitioning for Generalized Decompositions
Consider the problem of repartitioning when we construct a factorization for which it may be difficult to obtain a tight prediction of the fill-in. An example can be an incomplete Cholesky decomposition. A similar situation can be faced when solving a sequence of systems of linear equations obtained, e.g., from Newton's method applied to solve a nonlinear boundary value problem. In this case, we may need a partial reuse of the partitionings over the sequence.
Table 1. Counts of columns which should be recomputed in the Cholesky decomposition if boundary vertices are modified

matrix       application               dimension   nonzeros    standard MD   new approach
bmw7st_1     structural mechanics      141,347     3,740,507   7,868         5,039
bodyy6       structural mechanics      19,366      77,057      2,354         476
cfd1         CFD pressure matrix       70,656      949,510     10,924        7,497
cfd2         CFD pressure matrix       123,440     1,605,669   15,021        10,416
hood         car hood                  220,542     5,494,489   7,099         2,192
kohn-sham4   quantum chemistry         90,300      270,598     3,564         2,233
m_t1         tubular joint             97,578      4,925,574   9,093         7,095
pwtk         pressurized wind tunnel   217,918     5,926,171   8,218         4,437
x104         beam joint                108,384     5,138,004   4,656         3,842
Then we face the following two problems: repartitioning as well as the recomputation of the decomposition. In this subsection we propose techniques that minimize in advance the effort needed for recomputing the partition by a careful choice of reorderings. The efficiency of the new strategies is measured by the counts of columns or block columns which should be recomputed in order to get the new decomposition. These counts measure the additional work for the decomposition once the partition was changed. For simplicity, here we restrict ourselves to changes in the domain from which the vertices are removed. The first approach which we propose generalizes the concept of local reorderings with constraints. This concept was introduced in [9] to combine local and global approaches, and recently investigated in [2]. Our procedure exploits the minimum degree reordering and uses the distance of vertices from the main separator as the second criterion which breaks the MD ties. Table 1 summarizes numerical experiments with the new reordering. All matrices except for the discrete Kohn-Sham equation are from the Tim Davis collection. The counts of factor columns to be recomputed (standard and new strategy, respectively) when a group of border nodes of size fixed to two hundred is removed are given in the last two columns. The counts were computed via the elimination tree. For the ND reorderings we present a formalized theoretical result for structured grids. We will show that the choice of the first separator in the case of a k × k regular grid problem strongly influences the number of columns to be recomputed in case the border is modified by removal or insertion. The situation is depicted in Figure 1 for k = 7. The figures represent the separator and the subdomain sets after four steps of ND. The border vertices are on the right and they are filled. The following theorem uses the separator tree, in which the vertices describe the subdomain sets and separators, and which is a coarsening of the standard elimination tree. Theorem 1. Consider the matrix A from a k × k regular grid problem with ND ordering having l levels of separators. Assume that the matrix entries corresponding to the border vertices are modified. Denote by a_l and b_l, respectively,
Fig. 1. Grids with the ND separator structure related to Theorem 1. Type-I-grid on the left and Type-II-grid on the right.
the maximum number of matrix block columns which may change in the Cholesky decomposition of A from the Type-I-grid or the Type-II-grid. Then lim_{l→∞} a_l/b_l = 3/2 for odd l and lim_{l→∞} a_l/b_l = 4/3 for even l.
Proof. Clearly, a_1 = 3 since the changes influence both domains. Consequently, all block columns which correspond to the entries of the separator tree have to be recomputed. Similarly we get b_1 = 2 since the block factor column which corresponds to the subdomain without the border vertices does not need to be recomputed. Consider the Type-I-grid with k > 1 separators. It is clear that the separator structure of this grid is obtained by doubling the Type-II-grid and separating the two copies by a central separator. Consequently, a_{k+1} = 2 b_k + 1, where the additional 1 corresponds to the central separator. Similarly we get the relation b_{k+1} = a_k + 1, since its separator structure is the same as if we added another Type-II-grid to the considered Type-II-grid with k > 1 separators and separated them by the central separator. The block columns of the new Type-II-grid do not need to be recomputed. Putting the derived formulas together we get a_{k+2} = 2 a_k + 3 and b_{k+2} = 2 b_k + 2. This gives a_l = 3(2^{l+1} − 1) and b_l = 2(2^{l+1} − 1) for k = 2l + 1, and a_l = 4 · 2^l − 3 and b_l = 3 · 2^l − 2 for k = 2l, and we are done.
Clearly, the choice of the first separator of ND plays a decisive role. Further, there exist accompanying results for the generalized ND and one-way dissection. The counts of modified vertices were obtained from the separator tree [10].
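The recursions derived in the proof are easy to check numerically; the following small sketch (our own illustration) iterates a_{k+1} = 2 b_k + 1, b_{k+1} = a_k + 1 from a_1 = 3, b_1 = 2 and prints the ratios, which approach 3/2 for odd k and 4/3 for even k.

```python
def block_column_counts(levels):
    """(k, a_k, b_k) for k = 1..levels, following the recursions of Theorem 1."""
    a, b = 3, 2
    sequence = [(1, a, b)]
    for k in range(2, levels + 1):
        a, b = 2 * b + 1, a + 1
        sequence.append((k, a, b))
    return sequence

for k, a, b in block_column_counts(12):
    print(k, a, b, round(a / b, 4))   # ratio tends to 3/2 for odd k, 4/3 for even k
```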
4
Conclusion
We considered new ways to find proper and fast graph repartitioning if our task is to decompose matrices on the domains. In this case it is possible to propose new and efficient methods refining the general-purpose concept of complex objectives. The approach goes beyond a straightforward use of symbolic factorization. After describing a comprehensive framework of the whole approach we
presented theoretical and experimental results for one particular problem. The explained techniques can be generalized to more domains and for general LU decomposition.
Acknowledgement. This work was supported by the project No. IAA100300802 of the Grant Agency of the Academy of Sciences of the Czech Republic.
References
1. Çatalyürek, Ü.V., Aykanat, C.: Hypergraph-partitioning based decomposition for parallel sparse-matrix vector multiplication. IEEE Transactions on Parallel and Distributed Systems 20, 673–693 (1999)
2. Chen, Y., Davis, T.A., Hager, W.W., Rajamanickam, S.: Algorithm 887: CHOLMOD, Supernodal sparse Cholesky factorization and update/downdate. ACM Trans. Math. Softw. 35, 22:1–22:14 (2008)
3. Davis, T.A.: Direct Methods for Sparse Linear Systems. SIAM, Philadelphia (2006)
4. Hendrickson, B.: Graph partitioning and parallel solvers: Has the emperor no clothes? In: Ferreira, A., Rolim, J.D.P., Teng, S.-H. (eds.) IRREGULAR 1998. LNCS, vol. 1457, pp. 218–225. Springer, Heidelberg (1998)
5. Karypis, G., Kumar, V.: A fast and high quality multilevel scheme for partitioning irregular graphs. SIAM J. Sci. Comput. 20, 359–392 (1999)
6. Kernighan, B.W., Lin, S.: An efficient heuristic procedure for partitioning graphs. The Bell System Technical Journal 49, 291–307 (1970)
7. Kumar, V., Grama, A., Gupta, A., Karypis, G.: Introduction to Parallel Computing. Benjamin-Cummings (1994)
8. Liu, J.W.H.: A tree model for sparse symmetric indefinite matrix factorization. SIAM J. Matrix Anal. Appl. 9, 26–39 (1988)
9. Liu, J.W.H.: The minimum degree ordering with constraints. SIAM J. Sci. Comput. 10, 1136–1145 (1989)
10. Liu, J.W.H.: The role of elimination trees in sparse factorization. SIAM J. Matrix Anal. Appl. 11, 134–172 (1990)
11. Liu, J.W.H., Ng, E.G., Peyton, B.W.: On finding supernodes for sparse matrix computations. SIAM J. Matrix Anal. Appl. 14, 242–252 (1993)
12. Pinar, A., Hendrickson, B.: Combinatorial Parallel and Scientific Computing. In: Heroux, M., Raghavan, P., Simon, H. (eds.) Parallel Processing for Scientific Computing, pp. 127–141. SIAM, Philadelphia (2006)
13. Pinar, A., Hendrickson, B.: Partitioning for complex objectives. In: Parallel and Distributed Processing Symposium, vol. 3, pp. 1232–1237 (2001)
14. Schloegel, K., Karypis, G., Kumar, V.: A unified algorithm for load-balancing adaptive scientific simulations. In: Proceedings of the ACM/IEEE Symposium on Supercomputing, vol. 59. ACM, New York (2000)
An Efficient Algorithm for Bilinear Strict Equivalent (BSE)-Matrix Pencils
Grigoris I. Kalogeropoulos 1, Athanasios D. Karageorgos 1, and Athanasios A. Pantelous 2
1 Department of Mathematics, University of Athens, Greece
{gkaloger,athkar}@math.uoa.gr
2 Department of Mathematical Sciences, University of Liverpool, U.K.
[email protected]
Abstract. In this short paper, we have two main objectives. First, to present the basic elements of the strict bilinear equivalence. Secondly, to describe an efficient algorithm for investigating the conditions for two homogeneous matrix pencils sF1 − ŝG1 and sF2 − ŝG2 to be bilinear strict equivalent. The proposed problem is very interesting since the applications are numerous. The algorithm is implemented in a numerically stable manner, giving efficient results. Keywords: Bilinear-Strict Equivalence, Matrix Pencil Theory, Algorithmic Technique.
1
Introduction
Matrix pencils are naturally associated with differential (difference) systems of the type S(F, G): F ẋ(t) = Gx(t) (respectively, F x_{k+1} = G x_k), where F, G ∈ F^{m×n}, x ∈ F^n (F is a field, R or C), which in turn describe a variety of problems of descriptor system theory and constitute a more general class than linear state systems. The pencils sF − G and F − ŝG, where s, ŝ ∈ F, are related by the homogeneous pencil sF − ŝG (or equivalently by an ordered pair (F, −G)). In our paper, an efficient algorithm for the investigation of the conditions for two homogeneous matrix pencils sF1 − ŝG1 and sF2 − ŝG2 to be bilinear strict equivalent is presented. The study of the Bilinear-Strict Equivalence (BSE) has been initiated by Turnbull and Aitken, see [6]. In their work, the covariance property of invariant polynomials and the invariance of the minimal indices have been established. In [2], the complete set of invariants for the BSE class of a matrix pencil (F, G) is defined, extending further the results proposed by [6]. Analytically, the complete set of invariants of the homogeneous matrix pencil sF − ŝG under the BSE is provided by studying the invariants of the homogeneous binary polynomials f(s, ŝ) under the Projective-Equivalence transformations. Moreover, it is derived that the study of such type of invariants can be
provided by the equivalent invariants of matrices under the Extended-Hermite type transformations, see for more details [2]. The paper [2] provides us with the set of column and row minimal indices (i.e. c.m.i. and r.m.i.), the set of all indexes for the pencil (F, −G), and the set of Plücker and Grassmann-type vectors. The proposed results construct a solid framework for the study of properties of linear state space systems under space and frequency coordinate transformations. For instance, recently, in [3], through the notion of bilinear equivalence an interesting stabilization criterion for the homogeneous matrix pencil is finally derived. Now, let us define (F, G) ∈ F^{m×n} × F^{m×n} and let (s, ŝ) be a pair of indeterminates. The polynomial matrix sF − ŝG ∈ F^{m×n}[s, ŝ] is defined as the homogeneous matrix pencil of the pair (F, G). Clearly, sF − ŝG is a matrix over the ring F[s, ŝ], i.e. polynomials in (s, ŝ) with coefficients from F, but it may be also viewed as a matrix over the rings F(s)[ŝ] or F(ŝ)[s].
Definition 1. Let $\mathcal{L} \triangleq \{ L : L = (F, -G);\ F, G \in \mathbb{F}^{m \times n} \}$ be the set of ordered pairs of m × n matrices and $\Theta \triangleq \{ \theta : \theta = (s, \hat{s}) \}$ be the set of ordered pairs of indeterminates. For every $L = (F, -G) \in \mathcal{L}$ and $\theta = (s, \hat{s}) \in \Theta$, the matrix $[L] = [\,F \ \ {-G}\,] \in \mathbb{F}^{m \times 2n}$ is called a matrix representation of L and the homogeneous polynomial matrix
$$ L_\theta = L(s, \hat{s}) \triangleq [\,F \ \ {-G}\,] \begin{bmatrix} s I_n \\ \hat{s} I_n \end{bmatrix} = sF - \hat{s}G $$
is referred to as the θ-matrix pencil of L.
Definition 2. Define the following sets of matrix pencils
$$ \mathcal{L}_\theta \triangleq \{ L_\theta : \text{for a fixed } \theta = (s, \hat{s}) \in \Theta \text{ and for every } L = (F, -G) \in \mathcal{L} \}, $$
$$ \mathcal{L}(\Theta) \triangleq \{ L_\theta : \text{for every } \theta = (s, \hat{s}) \in \Theta \text{ and for every } L = (F, -G) \in \mathcal{L} \}. $$
With the following definition, the BSE is defined on L or equivalently on L(Θ). This equivalence relation is generated by the action of appropriate transformation groups acting on L or equivalently on L(Θ).
Definition 3. Consider first the set
$$ \mathcal{K} \triangleq \{ k : k = (M, N);\ M \in \mathbb{F}^{m \times m},\ N \in \mathbb{F}^{2n \times 2n};\ \det M, \det N \neq 0 \} $$
and a composition rule (∗) defined on K as follows: ∗ : K × K → K : for every $k_1 = (M_1, N_1) \in \mathcal{K}$ and $k_2 = (M_2, N_2) \in \mathcal{K}$, then
$$ k_1 * k_2 \triangleq (M_1, N_1) * (M_2, N_2) = (M_1 M_2,\ N_2 N_1). $$
{It may be easily verified that (K, ∗) is a group with identity element $(I_m, I_{2n})$.}
The action of K on L is defined by ◦ : K × L → L : for every k = (M, N ) ∈ K and L = (F, −G) ∈ L, then
$$ k \circ L = k \circ (F, -G) \triangleq L' = (F', -G') \in \mathcal{L} : [L'] = M\,[L]\,N, $$
or equivalently,
Δ F −G = M F −G N.
Such an action defines an equivalence relation EK on L, and EK (L) denotes the equivalence class or orbit of L ∈ L under K. Now, an important subgroup of K and the notion of the Bilinear-Strict Equivalence on L or L (Θ) is introduced. (Bilinear - Strict Equivalence): The subgroup (H − B, ∗) of (K, ∗), where Δ
H − B = {r : r = h ∗ b; for every h ∈ H for every b ∈ B}
(1)
where Δ =0 H = h : h = (R, P ) ; R ∈ Fn×n , P = diag {Q, Q} , Q ∈ Fn×n ; det R, det P and
Δ
B=
b:b=
αIn βIn αβ Im , ∈ F2×2 ; αδ − βγ =0 = (Im , Td ) , d = γ δ γIn δIn
is called the Bilinear-Strict Equivalence Group (BSEG). The action of H − B on L is defined by ◦ : H − B × L → L : for every r ∈ H − B and for a L = (F, −G) ∈ L, then Δ
r ◦ L = (h ∗ b) ◦ (F, −G) = h ◦ {b ◦ (F, −G)} = b ◦ {h ◦ (F, −G)} = L = (F , −G ) ∈ L : [L ] = Im R [L] F Td or equivalently
Δ αIn βIn F −G = Im R F −G Q = αRF Q − γRGQ βRF Q − δRG Q γIn δIn
The equivalence relation EH−B defined on L is called BSE, see [2] and [4]. Two ˆ 2 ∈ Lθ , θ = (s, sˆ) ∈ Θ and pencils L1θ = sF1 − sˆG1 ∈ Lθ , L2θ = λF2 − λG ˆ θ = (λ, λ) ∈ Θ are said to be bilinearly - strict equivalent, i.e. L1θ EH−B L2θ , if ˆ and thus a b ∈ B and only if there exists a transformation d : (s, sˆ) → (λ, λ) generated by d, and a h ∈ H such that (F2 , −G2 ) = (h ∗ b)◦(F1 , −G1 ). In matrix form the above condition may be expressed by 2 Lθ = (h ∗ b) ◦ L1θ ,
782
G.I. Kalogeropoulos, A.D. Karageorgos, and A.A. Pantelous
or
Δ αQ βQ F2 −G2 = R F1 −G1 γQ δQ = αRF1 Q − γRG1 Q βRF1 Q − δRG1 Q .
By EB (F, G) is denoted the BSE class or orbit of Lθ = sF − sˆG, or equivalently of L = (F, −G).
2
Creating the Conditions for the BSE
In the beginning of this section, we should define the Projective Equivalence (PE). Let Fd {Θ} be the set of homogeneous polynomials of degree d with coefficients from the filed F (i.e. R or C) in all possible indeterminates θ = (s, sˆ) ∈ Θ. ˆ be the projective transformation defined by Let ξ : (s, sˆ) →(λ, λ) λ s αβ ξ: = (2) ˆ . sˆ γ δ λ The action of ξ on f (s, sˆ) ∈ Fd {Θ} may be defined by ˆ = f (αλ + β λ, ˆ γλ + δ λ) ˆ ξ ◦ f (s, sˆ) ≡ f˜(λ, λ)
(3)
ˆ ∈ Fd {Θ} will be said to be projectively equivTwo polynomials f1 (s, sˆ), f2 (λ, λ) ˆ if there exists a ξ ∈ P GL(1, C/R) alent and it is denoted by f1 (s, sˆ)EP f2 (λ, λ), (i.e. the general projective group on the projective straight line of C/R) such ˆ and a c ∈ R − {0} that ξ : (s, sˆ) →(λ, λ) ˆ ξ ◦ f1 (s, sˆ) = c · f2 (λ, λ)
(4)
Clearly, (4) defines an equivalence relation EP on Fd {Θ}, which is called PE. ˆ i ∈ ρ} are two ordered sets of hoLet F1 = {fi (s, sˆ), i ∈ ρ}, F2 = {f˜i (λ, λ), mogeneous polynomials. F1 , F2 are said to be projectively equivalent, F1 EP F2 ˆ for every i ∈ ρ and for the same transportaif and only if fi (s, sˆ)EP f˜i (λ, λ) tion ξ. Moreover, the projective equivalence class of f (s, sˆ)(F ) shall be denoted by EP (f )(EP (F )). Now, in order to find the conditions for BSE, the following Proposition is needed. ˆ = λF2 − λG ˆ 2 , F (F1 , G1 ) = Proposition 1. Let L1 (s, sˆ) = sF1 − sˆG1 , L2 (λ, λ) ˆ i ∈ ρ2 − k2 } be the correspond{fi (s, sˆ), i ∈ ρ1 − k1 }, and F (F2 , G2 ) = {f˜i (λ, λ), ˆ where (ρ1 , k1 ), ing homogeneous invariant polynomial sets of L1 (s, sˆ), L2 (λ, λ), (ρ2 , k2 ) are the respective pairs of rank and power. ˆ for some h ∈ H and some b ∈ B is generated by the If L1 (s, sˆ)EH−B L2 (λ, λ) ξ ˆ then function ξ ∈ P GL(1, C/R) such that (s, sˆ) →(λ, λ),
An Efficient Algorithm for Bilinear Strict Equivalent (BSE)- Matrix Pencils
783
1. ρ1 = ρ2 = ρ and k1 = k2 = k. 2. And F (F1 , G1 )EP F (F2 , G2 ). An interesting result proposed and proved by [2] and [6] is presenting below. ˆ = λF2 −λG ˆ 2 , and Ic (F1 , G1 ), Proposition 2. Let L1 (s, sˆ) = sF1 −ˆ sG1 , L2 (λ, λ) Ic (F2 , G2 ) and Ir (F1 , G1 ), Ir (F2 , G2 ) be the corresponding sets of column and ˆ respectively. If row minimal indices (c.m.i.) and (r.m.i.) of L1 (s, sˆ) and L2 (λ, λ) ˆ L1 (s, sˆ)EH−B L2 (λ, λ), then Ic (F1 , G1 ) = Ic (F2 , G2 ) and Ir (F1 , G1 ) = Ir (F2 , G2 ). Thus, Propositions 1 and 2 express the covariance property of the homogeneous invariant polynomials and the invariance property of the sets of c.m.i. and r.m.i. of L(s, sˆ) under EH−B equivalence, respectively. By combinig the above two results, the following criterion for EH−B equivalence of matrix pencils is obtained. ˆ = λF2 − λG ˆ 2. Theorem 1. Let L1 (s, sˆ) = sF1 − sˆG1 , L2 (λ, λ) ˆ L1 (s, sˆ)EH−B L2 (λ, λ) if and only if the following conditions hold: 1. Ic (F1 , G1 ) = Ic (F2 , G2 ), Ir (F1 , G1 ) = Ir (F2 , G2 ) 2. F (F1 , G1 )EP F (F2 , G2 ) Proof. The necessity of the theorem is actually a straightforward result of Proposition 1 and 2. Next, in order to prove the sufficiency of the theorem, we assume that the conditions (1) and (2) are true. Now, since F (F1 , G1 )EP F (F2 , G2 ), ˆ have the same rank ρ (note that ρ1 = ρ2 = ρ) and then L1 (s, sˆ), L2 (λ, λ) the same power k (also note that k1 = k2 = k). Moreover, there exists a ξ ˆ for which ξ ◦ fi (s, sˆ) = ci · transformation ξ ∈ P GL(1, C/R) : (s, sˆ) →(λ, λ) ˆ for every i = 1, 2, ..., ρ − k. This transformation ξ generates a b ∈ B fi (λ, λ) ˆG ˆ Now, since L1 (s, sˆ)EB L1 (λ, λ), ˆ ˜1 ≡ L ˜ 1 (λ, λ). such that b ◦ L1 (s, sˆ) = λF˜1 − λ ˜ ˜ ˜ ˜ then Ic (F1 , G1 ) = Ic (F1 , G1 ) = Ic (F2 , F2 ), and Ir (F1 , G1 ) = Ir (F1 , G1 ) = Ir (F2 , G2 ). Furthermore, the sets of the homogeneous invariant polynomials of ˜ 1 (s, sˆ) and L2 (s, sˆ) differ only by scalars units of F[s, sˆ] (following the conL ˜ 2 (s, sˆ) have the same Smith form struction of ξ). Consequently L1 (s, sˆ) and L over F[s, sˆ] (or equivalently the same sets of elementary divisors) and the same ˜ 2 (s, sˆ), see for more details [1] and [2]. sets of c.m.i., r.m.i. and L1 (s, sˆ)EH L ˜ 2 (s, sˆ). Now, since Therefore, there exists a h ∈ H such that L1 (s, sˆ) = h ◦ L ˆ ˆ and finally, it ˜ L2 (s, sˆ) = b ◦ L2 (λ, λ), it follows that L1 (s, sˆ) = (h ∗ b) ◦ L2 (λ, λ) ˆ is derived that L1 (s, sˆ)EH−B L2 (λ, λ). As we have already seen in this section, the notion of EP equivalence, which is defined on the set of homogeneous invariant polynomials F (F, G) of the pencil sF − sˆG is the milestone for the characterization of the EH−B . Moreover, the whole problem is reduced into the investigation of the conditions under which ˆ ∈ Fd {Θ} are EP equivalence. Thus, the problem two polynomials f (s, sˆ), f˜(λ, λ) ˆ is equivalent with the of finding the conditions under which f (s, sˆ)EP f˜(λ, λ) determination of the complete and independent set of invariant for the orbit EP (f (s, sˆ)).
784
3
G.I. Kalogeropoulos, A.D. Karageorgos, and A.A. Pantelous
The Algorithm for the BSE
Before we state the main result of this paper, we introduce some notations. We denote by D(F, G) the symmetric set of elementary divisors (e.d.) over C, by B(F, G) = {BR (F, G); BC (F, G)} the unique factorization set, and by J (F, G) = {JR (F, G); JC (F, G)} the list of F (F, G) (and thus for L(s, sˆ)). The set B(F, G) is assumed to be naturally ordered and if π ¯ B(F, G) = (πBR (F, G), π BC (F, G)) ∈< B(F, G) >, then the (π, π )-matrix basis is designed by the Tπ,π . Note that the T (F, G) denotes the family of matrix representations of B(F, G). In order to analyze further the significant theorem 1, we need the following definition, see also [2] and [5]. Definition 4. Let f (s, sˆ) ∈ Rk {Θ}, B(f ) = {BR(f ); BC (f )} and let μ #BR (f ), ν #BC (f ) (# denotes the number of elements of the corresponding set). Let us also assume that π ∈< BR (f ) >, π ∈< BC (f ) > and that π
[B (f )] R
Tπ,π = B π,π (f ) = BCπ (f ) is the (π, π )-matrix basis of f (s, sˆ). We may define: (i) T (f ) {Tπ,π : ∀π ∈< BR (f ) >, ∀π ∈< BC (f ) >} as the family of matrix representation of B(f ) or of f (s, sˆ). (ii) If μ + ν ≥ 3, Tπ,π ∈ T (f ), r ∈< μ + ν >, (i1 , i2 ) ∈ Qr2,μ+ν−1 , then the (r, i1 , i2 ) − R-canonical Grassmann vector (CGV) of Tπ,π , g˜π,π is well r,i1 ,i2 defined and is referred as the (π, π ) − (r, i1 , i2 ) − R-CGV of f (s, sˆ). The set of all such vectors Gf {˜ gπ,π : ∀π ∈< BR (f ) >, ∀π ∈< BC (f ) >, ∀r ∈< μ + ν > r,i1 ,i2 and ∀(i1 , i2 ) ∈ Qr2,μ+ν−1 } is well defined and is called the R-CGV set of f (s, sˆ). Now, as an extension of theorem 1 and in order to be able to develop our algorithm, the following Theorem is very important, see also [2]. Theorem 2. Let L(s, sˆ) = sF − sˆG ∈ LΘ be a general order matrix pencil, B(F, G), J (F, G), G(F, G), Ic (F, G) and Ir (F, G) be the associated sets with the L(s, sˆ). The complete set of invariants for the EH−B equivalence class is defined by: (i) Ic (F, G), Ir (F, G), (ii) J (F, G) = {JR (F, G); JC (F, G)}, ¯ where ¯(E) = diag{E, E, ..., E}, E = [j]R . (iii) G(F, G) or G(F, G) modulo-C2 (E),
Finally, for every Tπ,π ∈ T (F, G), we denote by Prπ,π (F, G),P π,π (F, G) and P ∗ (F, G) the set of (π, π ) − r-prime Pl¨ ucker vectors (r ∈< μ + ν >) of Tπ,π , the set of all r-prime Pl¨ ucker of Tπ,π , and the Pl¨ ucker vector set of T (F, G) on the L(s, sˆ), respectively. Now, in this part of the section, we develop the algorithm for testing the EH−B ˆ = λF2 −λG ˆ 2 . During equivalence of two pencils L1 (s, sˆ) = sF1 −ˆ sG1 and L2 (λ, λ)
An Efficient Algorithm for Bilinear Strict Equivalent (BSE)- Matrix Pencils
785
this process the sets of G(Fi , Gi ) or P ∗ (Fi , Gi ) should be computed. A systematic procedure for testing the EH−B equivalence implies the following steps. The algorithm for the BSE ˆ respectively. Step 1: Find the ranks ρ1 , ρ2 of L1 (s, sˆ), L2 (λ, λ), ˆ If ρ1 = ρ2 , we stop, since we do not have L1 (s, sˆ)EH−B L2 (λ, λ). Step 2: If ρ1 = ρ2 , find the sets of degrees J1 and J2 of F (Fi , Gi ), for i = 1, 2. ˆ If J1 = J2 stop, since since we do not have L1 (s, sˆ)EH−B L2 (λ, λ). = Step 3: If J1 = J2 , compute the J (Fi , Gi ), for i = 1, 2. If J (F1 , G1 ) ˆ J (F2 , G2 ) stop, since we do not have L1 (s, sˆ)EH−B L2 (λ, λ). Step 4: If J (F1 , G1 )=J (F2 , G2 ), compute Ic (Fi , Gi ) and Ir (Fi , Gi ) for i=1, 2. If Ic (F1 , G1 ) = Ic (F2 , G2 ), or (and) Ir (F1 , G1 ) = Ir (F2 , G2 ) stop, ˆ since we do not have L1 (s, sˆ)EH−B L2 (λ, λ). Step 5: Now, if Ic (F1 , G1 ) = Ic (F2 , G2 ) and Ir (F1 , G1 ) = Ir (F2 , G2 ), then compute the unique factorization set B(Fi , Gi )={BR (Fi , Gi ); BC (Fi , Gi )} for i = 1, 2. The sets of BR (Fi , Gi ), BC (Fi , Gi ) characterize the sets of real and complex e.d. DR (Fi , Gi ), DC (Fi , Gi ) respectively. Thus define the set of homogeneous polynomials FR (Fi , Gi ) ∈ SRJa {Θ}, FC (Fi , Gi ) ∈ SRJb {Θ}. If we do not have FR (F1 , G1 )EB FR (F2 , G2 ). or FC (F1 , G1 )EB FC (F2 , G2 ), then stop, since we do not have ˆ F (F1 , G1 )EB F (F2 , G2 ) and thus we do not take L1 (s, sˆ)EH−B L2 (λ, λ). Step 6: Finally, if FR (F1 , G1 )EB FR (F2 , G2 ) and FC (F1 , G1 )EB FC (F2 , G2 ), then proceed to the computation of G(Fi , Gi ), for i = 1, 2 or continuing to the special cases’ tests for checking the EP equivalence of F (Fi , Gi ) for i = 1, 2.
4
Numerical Examples
In order to evaluate further the results of the previous sections, especially the proposed Algorithm for the BSE, which is based on the Theorem 1 and 2, we consider the homogenous pencils ⎡ ⎤ ⎤ ˆ 0 λ + 4λ 0 0 s − 2ˆ s 0 0 0 ⎢ ˆ ⎢ 0 2s − 4ˆ 0 ⎥ s0 0 ⎥ ˆ = ⎢ 0 λ + 4λ 0 ⎥ ⎥ , L2 λ, λ L1 (s, sˆ) = ⎢ ⎣ ⎣ 0 ⎦ ˆ 0 3s 0 0 0 λ + 2λ 0 ⎦ ˆ 0 0 0 −ˆ s 0 0 0 λ + 3λ ⎡
ˆ = 4. Thus, we It is easily verified that ρ1 = det(L1 (s, sˆ)) = ρ2 = det(L2 (λ, λ)) nd continue to the 2 step.
786
G.I. Kalogeropoulos, A.D. Karageorgos, and A.A. Pantelous
ˆ (λ + Now, F (F1 , G1 ) = {s − 2ˆ s, 6(s − 2ˆ s)s(−ˆ s)} and F (F2 , G2 ) = {λ + 4λ, ˆ ˆ ˆ 4λ)(λ + 2λ)(λ + 3λ)}, then it easily derived that J1 = J2 . So, following the 3rd step of the Algorithm, we should compute the J (Fi , Gi ), for i = 1, 2. Since, J (F1 , G1 ) = {(1, 1), (1, 3)}, and J (F2 , G2 ) = {(1, 1), (1, 3)}, we have to compute the Ic (Fi , Gi ) and Ir (Fi , Gi ) for i = 1, 2. However, in our case the Ic (F1 , G1 ) = Ic (F2 , G2 ) = O and Ir (F1 , G1 ) = Ir (F2 , G2 ) = O. Here, i.e. in the Step 5, we compute B(F1 , G1 ) = {BR (F1 , G1 )} = {(1, −2; 1), (1, −2; 1), (1, 0; 1), (0, 1; 1).}, and B(F2 , G2 ) = {BR (F2 , G2 )} = {(1, 4; 1), (1, 4; 1), (1, 2; 1), (1, 3; 1).}. Thus, the sets FR (Fi, Gi ) for i = 1, 2 are bilinearly equivalent, with the trans12 formation β = . Finally, according to the 6th step, we have to proceed 13 to the computation of G(Fi , Gi ), for i = 1, 2 or continuing to the special cases’ tests for checking the EP equivalence of F (Fi , Gi ) for i = 1, 2. In this case, we ˆ and b ◦ (6(s − 2ˆ prefer the second choice, i.e. b ◦ (s − 2ˆ s) = −(λ + 4λ) s)s(−ˆ s)) = ˆ ˆ ˆ Consequently, the two pencils are BSE. 6(λ + 4λ)(λ + 2λ)(λ + 3λ).
5
Conclusion-Further Research
In this paper, we have created an algorithm for testing the EH−B equivalence of two pencils. The EH−B equivalence class has been characterized by a complete set of invariants. However, there exists a number of other functions (also defined for pencils), which are not EH−B invariant and generally speaking, they can ˜ ˜. This kind of problem might give different values for the elements Lθ and L θ be interested in studying different topics of numerical analysis or theoretical applications of system and control theory. Acknowledgments. The authors are very grateful to the anonymous referee for the comments and the significant suggestions.
References 1. Gantmacher, R.F.: The theory of matrices, Chelsea, New York, U.S.A., vol. 1, 2 (1959) 2. Kalogeropoulos, G.I.: Matrix Pencils and linear systems theory, PhD thesis, Control Engineering Centre, City University, London, U.K. (1985) 3. Kalogeropoulos, G.I., Karageorgos, A.D., Pantelous, A.A.: A stabilization criterion for matrix pencils under bilinear transformation. Linear Algebra and its Applications 428(11–12), 2852–2862 (2008) 4. Karcanias, N., Kalogeropoulos, G.I.: Bilinear-strict equivalence of matrix pencils and autonomous singular differential systems. In: Proceedings of 4th I.M.A. Intern. Conf. on Control Theory, Robinson College, Cambridge, U.K. (1984) 5. Marcus, M.: Finite dimensional multilinear algebra (two volumes). Marcel and Deker, New York (1973) 6. Turnbull, H.W., Aitken, A.C.: An introductionto the theory of canonical matrices. Dover Publications, New York (1961)
Two-Grid Decoupling Method for Elliptic Problems on Disjoint Domains Miglena N. Koleva and Lubin G. Vulkov Faculty of Natural Science and Education University of Rousse, 8 Studentska str., Rousse 7017, Bulgaria {mkoleva,lvalkov}@ru.acad.bg
Abstract. We propose a two-grid finite element method for solving elliptic problems on disjoint domains. With this method, the solution of the multi-component domain problem(simple example — two disjoint rectangles) on a fine grid is reduced to the solution of the original problem on a much coarser grid together with solution of several problems (each on a single-component domain) on fine meshes. The advantage is the computational cost although the resulting solution still achieves asymptotically optimal accuracy. Numerical experiments demonstrate the efficiency of the algorithms.
1
Introduction
In this paper, we consider coupling elliptic transmission problem on a multicomponent domain with nonlocal interface conditions on parts of the boundary of the components. The study of such problems could be motivated physically by the occurrence of various nonstandard boundary and coupling conditions in modern physics, biology, engineering [2,3,10]. One dimensional problems was numerically solved in [5-8]. However, here we want to illustrate the two-grid idea in a new direction, namely we use the two-grid discretization method to decouple the multi-component domain problem to several elliptic equations, each of them solved on its own domain. The two-grid method was proposed of Axelsson [1] and J. Xu [12], independently of each other, for a linearization of the nonlinear problems. The two-grid finite element method was also used by J.Xu and many others scientists (see the reference in [4]) for discretizing nonsymmetric indefinite elliptic and parabolic equations. By employing two finite element spaces of different scales, one coarse and one fine space, the method was used for symmetrization of nonsymmetric problems, which reduces the solution of a nonsymmetric problem on a fine grid to the solution of a corresponding (but much smaller) nonsymmetric problem, discretized on the coarse grid and the solution of a symmetric positive definite problem on the fine grid. There are also many applications to nonlinear elasticity problem, fluid mechanics [9,11] etc. The two-grid approach in the present paper is an extension of the idea in [4], where it was used to decouple of a Shr¨ odinger system of differential equations. The system of partial differential equations is first discretized on the coarse I. Lirkov, S. Margenov, and J. Wa´ sniewski (Eds.): LSSC 2009, LNCS 5910, pp. 787–795, 2010. c Springer-Verlag Berlin Heidelberg 2010
788
M.N. Koleva and L.G. Vulkov
grid, then a decoupled system is discterized on the fine mesh. As a result, the computational complexity of solving the Shr¨ odinger system is comparable with solving two decoupled Poisson equations on the same grid. In this paper we consider a system of two elliptic equations, each of them solved on its own rectangle. The two problems are coupled with nonlocal interface conditions on parts of the rectangles’s boundary. First, the original system is approximated on the coarse mesh and then a decoupled system is discretized on the fine grid. The rest of the paper is organized as follows. In Section 2, we introduce the model problem, used to demonstrate our method. In the next section, we study the finite element solution, while in Section 4 we propose the two grid finite element algorithms and analyze the convergence. Section 5 presents results of numerical experiments, showing the effectiveness of our method. In the next section, by C we denote positive constant, independent of the boundary value solution and mesh sizes.
2
Two-Rectangles Model Problem
As a model example we study the elliptic problem, defined on the disjoint rectangles Ωn = (an , bn ) × (cn , dn ) with boundaries ∂Ωn , n = 1, 2, see Figure 1, (1) − (pn (x, y)unx )x − q n (x, y)uny y + rn (x, y)un = f n (x, y), (x, y) ∈ Ωn , un (x, cn ) = un (x, dn ) = 0, an ≤ x ≤ bn ,
(2)
u (a1 , y) = 0, c1 ≤ y ≤ d1 , u (b2 , y) = 0, c2 ≤ y ≤ d2 , (3) d 2 p1 (b1 , y)u1x (b1 , y)+ s1 (b1 , y)u1 (b1 , y)= ϕ2 (a2 , η)u2 (a2 , η)dη, c1 ≤ y ≤ d1 , (4) 1
2
c2
−p
2
d1 ϕ1 (b1 , η)u1 (b1 , η)dη, c2 ≤ y ≤d2 . (5)
(a2 , y)u2x (a2 , y)+ s2 (a2 , y)u2 (a2 , y)=
c1
Throughout the paper we assume that the input datum satisfy the usual regularity and ellipticity conditions in Ωn , n = 1, 2, pn (x, y), q n (x, y), rn (x, y) ∈ L∞ (Ωn ), f n ∈ L2 (Ωn ),
(6)
0 < pn0 ≤ pn (x, y) < pn1 , 0 < q0n ≤ q n (x, y) < q1n , 0 ≤ rn (x, y) ≤ r1n ,
(7)
for (x, y) ∈ Ωn . In the real physical problems (see [2,3]) we often have ϕ1 , ϕ2 > 0. We introduce the product space L = {v = (v 1 , v 2 ) | v n ∈ L2 (Ωn )}, n = 1, 2, endowed with the inner product and the associated norm 1/2
(u, v)L = (u1 , v 1 )L2 (Ω1 ) + (u2 , v 2 )L2 (Ω2 ) , vL = (v, v)L , where n n (un , v n )L2 (Ωn ) = u v dxdy, n = 1, 2. Ωn
Two-Grid Decoupling Method for Elliptic Problems on Disjoint Domains
789
y d
2
Γ2
d1
Ω
Ω1
2
Γ1 c
c
Eij3
Eij4
1
Eij2
Eij1
Eij6
Eij5
2
0
a1
a2
b
1
b2
x
Fig. 1. Domains Ω1 and Ω2
We also define the spaces Hm = {v = (v 1 , v 2 ) | v n ∈ Hm (Ωn )}, m = 1, 2, . . . endowed with the inner products and norms 1/2
(u, v)Hm = (u1 , v 1 )Hm (Ω1 ) + (u2 , v 2 )Hm (Ω2 ) , vHm = (v, v)Hm , where (un , v n )Hm (Ωn ) =
j k j=0 l=0
∂ j un ∂ j vn , , n = 1, 2, m = 1, 2, . . . ∂xl ∂y j−l ∂xl ∂y j−l L2 (Ωn )
In particular, we set H0m = {v = (v 1 , v 2 ) ∈ H1 | v n = 0 on Γn , n = 1, 2}, m = 1, 2, . . . , where Γ1 = ∂Ω1 \{(b1 , y) | y ∈ (c1 , d1 )} and Γ2 = ∂Ω2 \{(a2 , y) | y ∈ (c2 , d2 )}, see Figure 1. Finally, with u = (u1 , u2 ) and v = (v 1 , v 2 ) we define the following bilinear form:
a(u, v) =
p1 u1x vx1
+
q 1 u1y vy1
1 1 1
+r u v
c1
c2
p2 u2x vx2 + q 2 u2y vy2 + r 2 u2 v 2 dx dy + 2
d1
1
1
1
s (b1 , y)u (b1 , y)v (b1 , y) dy c1
Ω2
d1 d2
dx dy +
Ω1
+
−
2
1
ϕ (a2 , η)u (a2 , η)v (b1 , y) dηdy − 1
1
1
2
2
2
d2
s2 (a2 , y)u2 (a2 , y)v 2 (a2 , y) dy
c2
d2 d1 c2 1
c1
1
1
2
ϕ (b1 , η)u (b1 , η)v (a2 , y) dηdy
= a (u , v ) + a (u , v ) + b (u , v ) + b2 (u1 , v 2 )
and c(v) = c2 (v 2 ) + c2 (v 2 ), where cn (v n ) =
2
(8)
1
f n v n dxdy, n = 1, 2.
Ωn
Lemma 1. Under the conditions (6) and (7) the bilinear form a(u, v), defined by (8), is bounded on H1 × H1 . If besides it, the conditions sn ∈ L∞ (cn , dn ), ϕn ∈ L∞ ((cn , dn ) × (cn , dn )) a. e. in ∂Ωn , n = 1, 2, (9)
790
M.N. Koleva and L.G. Vulkov
are fulfilled, this form satisfies the G˚ arding’s inequality on H01 , i.e. there exist positive constants m and κ such that a(u, u) + κ u2L ≥ m u2H1 ,
∀ u ∈ H01 .
The equivalent variational problem of (1)-(5) is defined as follows: find (u1 , u2 ) ∈ H1 such that a(u, v) = c(v), ∀v ∈ H01 .
(10)
The proof of the next assertion is similar to this of Theorem 1 in [5]. Theorem 1. Let in additional to (6), (7) and (9), the first derivatives of pn (x, y) and q n (x, y), n = 1, 2 also satisfy (6). Then the variational problem (10) has an unique weak solution u ∈ H2 and uH2 ≤ Cf L2 .
3
Finite Element Method Solution
The discretization of Ω1 ∪ Ω2 is generated by the uniform mesh ωh = ω h1 ∪ ω h2 , ω hn = {(xi , yj ) : xi = an + (i − 1)hn , i = 1, . . . , Nn , xNn = bn , yj = cn + (j − 1)kn , j = 1, . . . , Mn , yMn = dn }, n = 1, 2. Now, we consider a standard piecewise finite element space Vh = Vh1 ∪ Vh2 , associated with ω h . Namely, we use linear triangular elements, see Figure 1. With Esij , s = 1, . . . , 6 we denote the sixth elements, associated with the grid node (xi , yj ). Let Φnh (x, y) is a basis of Vhn , n = 1, 2, Φh = Φ1h ∪ Φ2h . We seek the finite element solution solution u1h (x, y) ∈ Vh1 , u2h (x, y) ∈ Vh2 in the form u1h (x, y) =
N1 M1
1 Ui,j Φ1hi,j (x, y), u2h (x, y) =
i=1 j=1
N2 M2
2 Ui,j Φ2hi,j (x, y).
(11)
i=1 j=1
We lump the mass matrix by nodal-point integration, that is we get nonzero contributions only for diagonal elements of the local (element) mass matrix. For right hand side of (1) we use the product approximation formula. With integrals of known function ϕ and unknown solution U we deal as follows: for n = 1, 2
dn
cn
ϕn (·, η)U n (·, η)dη ≈
yl+1 Mn −1 1 [U n (·, yl ) + U n (·, yl+1 )] ϕn (·, η)dη. (12) 2 yl l=1
The remaining integral can be computed exactly, depending on the function ϕn . Such approximation conserves the second order of convergence rate.
Two-Grid Decoupling Method for Elliptic Problems on Disjoint Domains
791
The resulting discrete approximation of (1)-(5) is as follows
n n ˆn + Q ˇn Pˆi,j Pˇi,j Pˆ n + Pˇ n Q n n n n − 2 Ui+1,j − 2 Ui−1,j + + + ri,j Ui,j hn hn h2n h2n i,j
ˆn ˇn Q Q i,j n i,j n n − 2 Ui,j+1 − 2 Ui,j−1 = fi,j , n = 1, 2, i = 2, . . . , Nn −1, j = 1, . . . , Mn −1, hn hn n 1 2 n Ui,1 = U1,j = UN = Ui,M = 0, i = 1, . . . , Nn , j = 1, . . . , M1 , n = 1, 2, 2 ,j n
1 ˇ ˆ1 + Q ˇ 1 ) h1 r 1 PN1 ,j 1 Pˇ 1 h1 (Q 1 1 UN1 −1,j + + + +s UN − 1 ,j h1 h1 2k12 2 N1 ,j
M2 ˆ1 ˇ1 h1 Q h1 Q 1 h1 1 N1 ,j 1 N1 ,j 1 2 − UN1 ,j+1 − UN1 ,j−1 − U1,l ϕ 2l = f , 2k12 2k12 2 2 N1 ,j
(13)
l=1
j = 2, . . . , M1 − 1,
ˆ2 + Q ˇ 2 ) h2 r 2 Pˇ 2 h2 (Q 2 2 U + + + +s − h2 2,j h2 2k22 2 2 Pˆ1,j
2 U1,j − 1,j
ˆ2 h2 Q 1,j 2 U1,j+1 2k22
M1 ˇ2 h2 Q 1 h2 2 1,j 2 1 f , j = 2, . . . , M2 − 1, − U − UN ϕ 1 = 1,j−1 1 ,l l 2k22 2 2 1,j l=1
n ˆ n, Q ˇ n, where Ui,j = U n (xi , yj ), n = 1, 2 and the coefficients of (13): Pˆ n , Pˇ n , Q n ϕ , n = 1, 2 can be computed exactly or numerically. For example,
Pˆijn =
⎧ ⎪ ⎪ ⎨
1 ( + )pn dxdy, hn kn ij ij E1 E6 pn h1 , ⎪ ⎪ ⎩ 1i+ n2 ,j n (pi,j + pi+1,j ), 2
exact midpont approx. trapezoidal approx.
⎫ 1 ( + )q n dxdy ⎪ hn kn ⎪ ⎬ ij ij E5 E6 n qi,j− k1 ⎪ ⎪ 2 ⎭ n 1 (q n + qi,j−1 ) 2 i,j
ˇn =Q ij
and ϕ nl are obtained from bn , (11) and (12). The error analysis of the above finite element discretization can be achieved by standard techniques, see e.g. [4]. Theorem 2. Under the assumptions in Theorem 1, uh = (u1h , u2h ) has the error estimate u − uh Hsh ≤ Ch2−s uL , s = 0, 1, where · s is the discrete s-Sobolev norm.
4
Two-Grid Decoupling FEM Algorithms
H h Let define a new mesh (coarse mesh) ωH = ωH 1 ∪ ω 2 , analogically to ω with h mesh steps: Hn > hn and Kn > kn , n = 1, 2. The mesh ω we will call fine mesh. As before, we consider a standard piecewise finite element space VH (with basis
792
M.N. Koleva and L.G. Vulkov
ΦH ), associated with ω H . Let Uni = [U n (xi , y1 ), U n (xi , y2 ), . . . , U n (xi , yMn −1 ), U n (xi , Mn )]T , i = 1, . . . , Nn , n = 1, 2. Algorithm 1 step 1. Find U1N1 and U21 from (13) on the coarse mesh ω H . step 2. For all Φh ∈ Vh on the fine mesh ω h : a) Find ˚ u1h ∈ Vh1 such that a1 (˚ u1h , Φ1h ) = c1 (Φ1h ) − b1 (u2H , Φ1h ). 2 2 2 2 b) Find ˚ uh ∈ Vh such that a (˚ uh , Φ2h ) = c2 (Φ2h ) − b2 (˚ u1h , Φ2h ). Note that at step 2 the problems in Ω1 and Ω2 are computed consequently and separately. Now, Algorithm 1 can be improved in a successive fashion, organizing an iteration process, such that at each step to account the last computed values of U1N1 and U21 . Algorithm 2 n(1)
step 1. Find ˚ unh , n = 1, 2 from Algorithm 1, ˚ uh
:= ˚ unh
step 2. For k = 1, 2, . . . and all Φh ∈ Vh on the fine mesh ω h : 1(k+1) 1(k+1) 2(k) a) Find ˚ uh ∈ Vh1 : a1 (˚ uh , Φ1h ) = c1 (Φ1h ) − b1 (˚ uh , Φ1h ). 2(k+1) 2(k+1) 1(k+1) b) Find ˚ uh ∈ Vh2 : a2 (˚ uh , Φ2h ) = c2 (Φ2h ) − b2 (˚ uh , Φ2h ). The proofs of the next theorems are based on the ideas in [4]. Theorem 3. Under the assumptions in Theorem 1, ˚ unh for n = 1, 2, computed with Algorithm 1 has the following error estimate un − ˚ unh 1 0.5(h2n + kn2 ) + 0.5(Hn3 + Kn3 ), n = 1, 2. n(k)
The next theorem shows that ˚ uh , k = 2, 3, . . . can reach the optimal accu2/(k+1) racy in H1 -norm, if the coarse mesh step size Hn (Kn ) is taken to be hn 2/(k+1) (kn ). As the dimension Vh is much smaller than VH , the efficiency of the algorithms is then evident. n(k)
Theorem 4. Under the assumptions in Theorem 1, ˚ uh with Algorithm 2, has the following error estimate n(k)
unh − ˚ uh
for n = 1, 2, computed
1 0.5(Hnk+1 + Knk+1 ), n = 1, 2, k ≥ 1,
Consequently, n(k)
un − ˚ uh n(k)
1 0.5(h2n + kn2 ) + 0.5(Hnk+1 + Knk+1 ), n = 1, 2. 2/(k+1)
Namely, ˚ uh , k ≥ 1 has the same accuracy as ˚ unh in H1 -norm, if Hn = hn 2/(k+1) and Kn = kn , n = 1, 2.
Two-Grid Decoupling Method for Elliptic Problems on Disjoint Domains
5
793
Numerical Examples
The test example is problem (1)-(5), with a1 = 1, b1 = 2, c1 = 0.2, d1 = 0.8, a2 = 3, b2 = 4.5, c2 = 0, d2 = 1 and zero Dirichlet boundary conditions. The coefficients are: p1 (x, y) = ex+y , q 1 (x, y) = sin(x + y), r1 (x, y) = x + y, s1 (x, y) = 2xy, 2 2 2 p (x, y) = x + y , q 2 (x, y) = xy, r2 (x, y) = x − y, s2 (x, y) = 2(x + y), 2x x 1 2 ϕ (x, y) = 2b1 y(1−y) , ϕ (x, y) = a2 y(1−y) . In the right hand side of equations (4),(5) we add functions f 3 (x, y) and f 4 (x, y), respectively and determine f s (x, y), s = 1, . . . , 4, so that u = (u1 , u2 ), u1 (x, y) = 2[cos(10πy − π) + 1](x − a1 )2 y(d1 + 0.2 − y), u2 (x, y) = 2[cos(4πy − π) + 1](x − b2 )2 y(d2 − y), is the exact solution of the problem (1)-(5). In the examples, the coefficients Pˆ , Pˇ , ˆ and Q ˇ are computed using midpoint approximation, while for the right hand Q side we use trapezoidal rule approximation. Mesh parameters are H1 = H2 = H, K1 = K2 = K for the coarse mesh and h1 = h2 = h, k1 = k2 = k for the fine mesh. The results are given in H1 ( · 1 ), L2 and max discrete norms and the convergence rate (CR) is calculated using double mesh principle: E h = uh −u, h CR = log2 [E h /E 2 ]. All computations are performed by MATLAB 7.1. The generated linear systems of algebraic equations are solved by QR decomposition method with pivoting, using MATLAB libraries function ‘mldivide’. Example 1. (Algorithm 1 ) We chose h = H 3/2 and k = K 3/2 , such that to reveal the accuracy of the solution, computed with Algorithm 1. The results are listed in Table 1, which show that ˚ uh − u1 ≈ O(H 3 + K 3 ) = O(H 3 + K 3 + 2 2 h + k ), i.e. the assertion of Theorem 3. Example 2. (Algorithm 2 ) Here we will demonstrate the efficiency of Algon(k) rithm 2. In order to check the convergence rate, each ˚ uh , n = 1, 2 are computed (k+1)/2 (k+1)/2 with fine mesh step size h = H ,k=K for all 1, . . . , k iterations. 3
3
n 2 2 Table 1. Error and CR in different norms ˚ un h − u , h = H , k = K , Algorithm 1
Ω1 mesh H = 1/4, K = 1/5 H = 1/8, K = 1/10 H = 1/16, K = 1/20 H = 1/32, K = 1/40
2
Ω2 max
L2
H1
1.92 2.70e-1
8.82e-2 1.24e-2
3.97e-2 4.85e-3
5.70e-1 7.36e-2
(3.2299 )
(2.8301 )
(2.8304 )
(3.0331 )
(2.9532 )
1.24e-3
3.34e-2
1.73e-3
6.13e-4
9.43e-3
(3.0402 )
(3.0151 )
(2.8415 )
(2.9840 )
(2.9644 )
max
L
H
3.25e-1 4.02e-2
9.57e-2 1.02e-2
(3.0152 )
5.07e-3 (2.9871 )
1
7.64e-4
1.83e-4
4.25e-3
2.80e-4
8.84e-5
1.37e-3
(2.7303 )
(2.7604 )
(2.9743 )
(2.6273 )
(2.7921 )
(2.7831 )
794
M.N. Koleva and L.G. Vulkov k+1 2
Table 2. Error and CR in different norms for h < H
,k
k+1 2
Ω1 mesh
Ω2
L2
max
, Algorithm 2
H1
L2
max
H1
k = 2, h H 5/2 , k K 5/2 — fixed for all iterations 1.00e-2 2.58e-3 7.57e-2 3.17e-3 1.27e-3 2.00e-2 6.96e-4 1.50e-4 2.75e-3 1.63e-4 5.16e-5 7.43e-4
H = 1/4, K = 1/5 H = 1/8, K = 1/10
(3.8448 )
(4.1043 )
(4.7828 )
(4.2815 )
(4.6213 )
(4.7505 )
k = 3, h H 7/2 , k K 7/2 — fixed for all iterations 4.48e-4 1.14e-4 3.40e-3 1.46e-4 5.70e-5 9.96e-4
H = 1/4, K = 1/5
2
0.02 0.015
1.5 0.01 1 U
Error
0.005
0.5
0 −0.005
0 −0.01 −0.5 1
−0.015 1 0.8
4.5 4
0.6
3.5 3
0.4
2.5
4.5 4
0.6
3.5
0
2.5 2
0.2
1.5 1
3
0.4
2
0.2 y
0.8
x
y
0
1.5 1
x
Fig. 2. Numerical solution (left) and error (right) for H = 0.1, K = 0.02, Alg.1, step 1
The results — errors in different discrete norms and convergence rate, are shown n(k) in Table 2. We can see that ˚ uh −un 1 ≈ O(h2 +H k+1 ), which is the statement of Theorem 4. Moreover, we observe that for k ≥ 2 the convergence increases rather faster than the theoretical assertion.
6
Conclusions
The main advantage of Algorithms 1,2 is that we reach a high accuracy of the numerical solution, solving the whole problem (Ω1 ∪ Ω2 ) only once — on the coarse mesh, then the problem is separated into two problems and we solve them separately and consequently on the fine mesh, increasing the convergence. This study can obviously extended to elliptic and parabolic problems with polygonal and curvilinear domains. Also, the nonlinear two-grid decoupling method [1,12] can be developed to nonlinear differential problems on disjoint domains of type [2,3].
Acknowledgement This research is supported by the Bulgarian National Fund of Science under Project Sk-Bg-203.
Two-Grid Decoupling Method for Elliptic Problems on Disjoint Domains
795
References 1. Axelsson, O.: On mesh independence and Newton methods. Appl. Math. 38(4–5), 249–265 (1993) 2. Amosov, A.A.: Global solvability of a nonlinear nonstationary problem with a nonlocal boundary condition of radiative heat transfer type. Diff. Eqns. 41, 96–109 (2005) 3. Druet, P.E.: Weak solutions to a time-dependent heat equation with nonlocal radiation condition and right hand side Lp (p ≥ 1). WIAS Preprint 1253 (2008) 4. Jin, J., Shu, S., Xu, J.: A two-grid discretization method for decoupling systems of partial differential equations. Math. Comp. 75, 1617–1626 (2006) 5. Jovanovi´c, B.S., Vulkov, L.G.: Numerical solution of a parabolic transmission problem. IMA J. Numer. Anal. (2009) (to appear) 6. Jovanovi´c, B.S., Vulkov, L.G.: Numerical solution of a hyperbolic transmission problem. Comp. Meth. in Appl. Math. 4(4), 374–385 (2009) 7. Koleva, M.: Finite element solution of boundary value problems with nonlocal Jump Conditions. J. of Math. Modl. and Anal. 13(3), 383–400 (2008) 8. Koleva, M., Vukov, L.: A two-grid approximation of an interface problem for the nonlinear Poisson-Boltzmann equation. In: Margenov, S., Vulkov, L.G., Wa´sniewski, J. (eds.) NAA 2008. LNCS, vol. 5434, pp. 369–376. Springer, Heidelberg (2009) 9. Mu, M., Xu, J.: A Two-grid Method of a Mixed Stokes-Darcy Model for Coupling Fluid Flow with Porous Media Flow. SIAM J. Numerical Analysis 45, 1801–1813 (2007) 10. Tikhonov, A.N.: On functional equations of the Volterrra type and their applications to someproblems of mathematical physics. Byull. Mosk. Gos. Univ., Ser. Mat. Mekh. 1(8), 1–25 (1938) 11. Xu, J.: Two-grid discretization techniques for linear and nonlinear PDEs. SIAM Journal on Numerical Analysis 33, 1759–1777 (1996) 12. Xu, J.: A novel two-grid method for semilinear elliptic equations. SIAM J. Sci. Comput. 15(1), 231–237 (1994)
A Method for Sparse-Matrix Computation of B-Spline Curves and Surfaces Arne Laks˚ a Narvik University College, P.O.B. 385, N-8505 Narvik, Norway
[email protected]
Abstract. Matrix methods of computing B-spline curves and surfaces have been considered in the work of several authors. Here we propose a new, more general matrix formulation and respective upgraded notation. The new approach is based on non-commutative operator splitting, where the domain and range of every factoring operator differ by one dimension, and the factoring operators are represented by a product of sparse rectangular matrices with expanding dimensions differing by 1, so that these matrices are d × (d + 1)-dimensional (with d increasing with an increment of 1) and have nonzero values only on their two main diagonals (ai,i ) and (ai,i+1 ), i = 1, . . . , d. In this new matrix formulation it is possible to obtain the generation of the B-spline basis and the algorithms of de Casteljau and Cox–de Boor in a very lucid unified form, based on a single matrix product formula. This matrix formula also provides an intuitively clear and straightforward unified approach to corner cutting, degree elevation, knot insertion, computing derivatives and integrals in matrix form, interpolation, and so on. For example, computing the matrix product in the formula from left to right results in the successive iterations of the de Casteljau algorithm, while computing it from right to left is equivalent to the successive iterations in the Cox– de Boor algorithm. Although the new matrix factorization is essentially non-commutative, in Theorem 1 we formulate and prove an important commutativity relation between this matrix factorization and the operator of differentiation. We use this relation further to propose a new, considerably more concise form of matrix notation for B-splines, with respective efficient computation based on sparse-matrix multiplication.
1
Introduction
Modern B-spline theory has several branches. One of them is based on the blossoming method proposed by Ramshaw [8] and, in a different form by de Casteljau [2]. However, here we shall look first at B-splines in (what we can call) the classical way, introduced in 1972 by at least three authors simultaneously. Namely, Cox [7] established a recursive algorithm for simple knots, while deBoor [4] introduced the algorithm for general knots; in his paper deBoor mentioned that
Research supported in part by the 2006, 2007 and 2008 Annual Research Grants of the priority R&D Group for Mathematical Modeling, Numerical Simulation and Computer Visualization at Narvik University College, Norway.
I. Lirkov, S. Margenov, and J. Wa´ sniewski (Eds.): LSSC 2009, LNCS 5910, pp. 796–804, 2010. c Springer-Verlag Berlin Heidelberg 2010
A Method for Sparse-Matrix Computation of B-Spline Curves and Surfaces
797
Louis Mansfield had also discovered the recursion. It can also be mentioned, that among others, Knut Moerken and Tom Lyche have been using matrix notation on B-splines. We usually denote B-splines in two different ways, using a short and a more complete notation. These notations are Bk,i (t) = B(t; ti , . . . , ti+k ). When referring to about B-splines we mean normalized B-splines, that is, all B-splines defined on an infinite knot sequence sum up to 1 (also called form the partition of unity), i.e., j
B(t; ti , . . . , ti+k ) = 1,
tj ≤ t < tj+1 .
i=j−k
Compared to the original Curry-Schoenberg B-splines M (t; ti , . . . , ti+k ) (see [5]) we thus have the following relationship B(t; ti , . . . , ti+k ) =
ti+k − ti M (t; ti , . . . , ti+k ). k
Note that for integer non-multiple knots, i.e, (. . . , 1, 2, . . .), B(t; ti , . . . , ti+k ) = M (t; ti , . . . , ti+k ). This leads to the following definition of the B-spline that usually is called the Cox-de Boor recursion formula Definition 1. Given are k increasing real numbers {ti , ti+1 , . . . , ti+k }, where ti+k > ti (they do not otherwise need to be strictly increasing). The B-spline of degree d and order k = d + 1 is defined by the recursion formula B(t; ti , . . . , ti+k ) = wd,i (t) B(t; ti , . . . , ti+k−1 )+(1 − wd,i+1 (t)) B(t; ti+1 , . . . , ti+k ) where the termination of the recursion is 1, B(t; tj , tj+1 ) = 0, and where wd,i (t) =
2
tj ≤ t < tj+1 , otherwise,
t − ti . ti+d − ti
Bezier Curves and the De Casteljau Algorithm
The de Casteljau corner-cutting algorithm is basically recursive convex linear interpolation between two points. This can be expressed in matrix form, computing from right to left, as follow; ⎛ ⎞ ⎛ ⎞ c0 1−t t 0 0 ⎜ ⎟ 1−t t 0 ⎝ 0 1 − t t 0 ⎠ ⎜ c1 ⎟ c(t) = 1 − t t (1) ⎝ c2 ⎠ 0 1−t t 0 0 1−t t c3
798
A. Laks˚ a
where the vector on right-hand side (RHS) is a vector with entries which are points, i.e., elements of an affine space determined by a vector basis and a point origin (usually ∈ R2 or R3 ). The consecutive multiplication by matrices from right to left in the formula is reducing the dimension of the vector by 1 for each computation, i.e, the factor matrices are of dimension d × (d + 1); we denote them by Td (t). It follows that equation (1) can be rewritten as c(t) = T1 (t) T2 (t) T3 (t) c where c = (c0 , c1 , c2 , c3 )T . If we only compute the three matrices from the left we get a vector with the four Bernstein polynomials [1] of degree three, (1 − t)3 , 3(1 − t)2 t, 3(1 − t)t2 , t3 . We also define the matrix notation of the set of Bernstein polynomials of degree n to be T n (t) = T1 (t)T2 (t) · · · Tn (t). It follows that equation (1) can be expressed as c(t) = T 3 (t) c
3
The B-Spline Factor Matrix T (t)
A Bezier curve is actually a B-spline curve where the k = d + 1 first knots are 0, and the k last knots are 1, and without internal knots. The B-spline recursion algorithm (definition 1) is, therefore, analogous to equation 1 computing only the matrices from left-hand side (LHS) to the right. We can, therefore, extend the d×(d+1) matrix Td (t) to a more comprehensive version, i.e. the B-spline type of this matrix. Recall the linear translation and scaling function from definition 1, t−ti , if ti ≤ t < ti+d (2) wd,i (t) = ti+d −ti 0, otherwise. As we can see, (2) describes a piecewise straight line. Recall here that in the case of Bernstein/Bezier type we just have k = d + 1 equal knots at 0 and d + 1 equal knots at 1. It then follows that wd,i (t) = t for all relevant d and i. Definition 2. The B-spline factor matrix Td (t) is a d × (d + 1) band-limited matrix with two nonzero elements on each row. The matrix is, as follows ⎛ ⎞ 1− wd,i−d+1 (t) wd,i−d+1 (t) 0 ... ... 0 ⎜ ⎟ ⎜ ⎟ .. ⎜ ⎟ ⎜ ⎟ 0 1− w (t) w (t) 0 · · · . d,i−d+2 d,i−d+2 ⎜ ⎟ Td (t) = ⎜ .. .. ⎟ .. .. .. .. ⎜ ⎟ . . . . . . ⎜ ⎟ ⎜ ⎟ .. .. ⎝ . (t) w (t) 0 ⎠ . 0 1− w 0
···
···
d,i−1
0
d,i−1
1− wd,i (t) wd,i (t)
A Method for Sparse-Matrix Computation of B-Spline Curves and Surfaces
799
Note here that in the matrix Td (t) the first index is d for all w. The last index in the w is denoted i. This index is determined by the knot vector, and it is the index of the knot fixed by the requirement ti ≤ t < ti+1 .
(3)
We observe in the matrix Td (t) that the last index of w is decreased by 1 from one line to the next line up. For the bottom line it is equal to i, for the line above the bottom line it will then equal to i − 1, and so on. For d = 3 we have the following formula for a B-spline curve (note the indices of the coefficients on the RHS), 1 − w2,i−1 (t) w2,i−1 (t) 0 c(t) = 1 − w1,i (t) w1,i (t) 0 1 − w2,i (t) w2,i (t) ⎛ ⎞ ⎛ ⎞ ci−3 1 − w3,i−2 (t) w3,i−2 (t) 0 0 ⎜ ci−2 ⎟ ⎟ ⎝ 0 1 − w3,i−1 (t) w3,i−1 (t) 0 ⎠⎜ ⎝ ci−1 ⎠ , 0 0 1 − w3,i (t) w3,i (t) ci where the index i is determined by the requirement (3) in Definition 2. A general expression for a third degree B-spline curve is thus equivalent to the Bezier case c(t) = T1 (t)T2 (t)T3 (t) c = T 3 (t) c. T
where c = (ci−3 , ci−2 , ci−1 , ci ) and where the index i is determined by the requirement (3) in Definition 2. To investigate the derivative of Td (t) we first look at the derivative of the linear translation and scaling function (2) which we denote 1 , if ti ≤ t < ti+d δd,i = ti+d −ti . (4) 0, otherwise. δd,i is a piecewise constant, independent of the translation, only reflecting the scaling. In the Bezier case, δd,i = 1 for all relevant i and d. The derivative of the matrix T (t) is then defined by the following. Definition 3. The B-spline derivative matrix T is a d × (d + 1) band-limited matrix with two nonzero constant elements on each row (independent of t). The matrix is, as follows ⎞ ⎛ 0 ... ... 0 −δd,i−d+1 δd,i−d+1 ⎜ .. ⎟ ⎜ ⎟ ⎜ ··· . ⎟ 0 −δd,i−d+2 δd,i−d+2 0 ⎜ ⎟ ⎜ ⎟ . . . . . . ⎜ ⎟ . . . . . . Td = ⎜ . . . . . . ⎟ ⎜ ⎟ ⎜ ⎟ .. .. ⎟ ⎜ . . 0 −δd,i−1 δd,i−1 0 ⎠ ⎝ 0
···
···
0
−δd,i δd,i
The index i in the definition follows the rule (3) of the index in Td (t) from definition 2.
800
3.1
A. Laks˚ a
Commutativity Relations between T (t) Matrices and Their Derivatives
Before we can look at derivatives of B-spline curves we have to look at a special property of the matrix T (t) and its derivative T . Lemma 1. Commutativity relation between differentiation and multiplication of two T (t) matrices. For all d > 0 and for a given knot sequence tj−d , tj−d+1 , . . . , tj+d+1 , and ti ≤ t < ti+1 there holds Td (t) Td+1 = Td Td+1 (t).
(5)
Proof. We start by computing the LHS of 5. Taking the two elements that are different from zero in one line of Td (t) and computing them with the sub-matrix of Td+1 yields something different from zero, −δd+1,j−1 δd+1,j−1 0 1 − wd,j (t) wd,j (t) 0 −δd+1,j δd+1,j which gives −(1 − wd,j (t))δd+1,j−1 (1 − wd,j (t))δd+1,j−1 − wd,j (t)δd+1,j
wd,j (t)δd+1,j . (6)
We now apply the same procedure at the RHS of 5, 1 − wd+1,j−1 (t) wd+1,j−1 (t) 0 −δd,j δd,j 0 1 − wd+1,j (t) wd+1,j (t) which gives −(1 − wd+1,j−1 (t))δd,j
(1 − wd+1,j (t))δd,j − wd+1,j−1 (t)δd,j
wd+1,j (t)δd,j . (7)
We compute the first element of (6), (1 − wd,j (t))δd+1,j−1 =
tj+d − t , (tj+d − tj )(tj+d+1 − tj−1 )
(8)
tj+d − t , (tj+d+1 − tj−1 )(tj+d − tj )
(9)
and the first element of (7), (1 − wd+1,j−1 (t))δd,j =
which shows that the first element of (6) is equal to the first element of (7). We now compute the third element of (6), wd,j (t)δd+1,j =
t − tj , (tj+d − tj )(tj+d+1 − tj )
(10)
t − tj , (tj+d+1 − tj )(tj+d − tj )
(11)
and the third element of (7), wd+1,j (t)δd,j =
A Method for Sparse-Matrix Computation of B-Spline Curves and Surfaces
801
which shows that the third element of (6) is equal to the third element of (7). Finally, we compute the second element of (6), which is minus the first element minus the third element of (6), i.e. (1 − wd,j (t))δd+1,j−1 − wd,j (t)δd+1,j = (8) − (10), and if we reorganize the second element of (7), we get minus the first element minus the third element of (7), i.e. (1 − wd+1,j (t))δd,j − wd+1,j−1 (t)δd,j = (1 − wd+1,j−1 (t))δd,j − wd+1,j (t)δd,j = (9) − (11), which also shows that the second element of (6) is equal to the second element of (7). This shows that the respective elements are equal for the whole line (row) and, thus, every line, the whole expressions are equal, which completes the proof. The main commutativity result now follows. Theorem 1. Commutativity relation between differentiation and multiplication of several consecutive of T (t) matrices. For all d > 0 and for a given knot vector, the value of a matrix product of several consecutive T (t)-matrices, one of which is differentiated, does not depend on the choice of the differentiated matrix, i.e., Td (t) Td+1 (t) · · · Td+j−1 (t) Td+j = Td Td+1 (t) · · · Td+j−1 (t) Td+j (t).
(12)
Proof. This follows from the result of Lemma 1 by induction 3.2
B-Splines in Matrix Notation
First, a concrete example: given a third degree B-spline curve c(t) = T1 (t) T2 (t) T3 (t) c = T 3 (t) c. It follows from Theorem 1 that the derivative is c (t) = (T1 T2 (t) T3 (t) + T1 (t) T2 T3 (t) + T1 (t) T2 (t) T3 ) c = 3 T 2 (t) T3 c. We are now ready to give a general expression of a B-spline function/curve and its derivatives in matrix notation. Definition 4. A B-spline function/curve of degree d is given in matrix notation, as follows c(t) = T d(t) c (13) where the set of B-splines of degree d are the T d (t) = T1 (t) T2 (t) · · · Td (t),
802
A. Laks˚ a
where T1 (t) T2 (t) · · · Td (t) are given according to Definition 2, and where c is the coefficient vector with d + 1 elements c = (ci−d , ci−d+1 , . . . , ci )T , and where the index i is given by the requirement ti ≤ t < ti+1 . The j-th derivative, 0 < j ≤ d, of a B-spline function/curve of degree d is of degree d − j and is expressed, as follows c(j) (t) = where
4
d! T d−j (t) T j c (d − j)!
(14)
T j = Td−j+1 Td−j+2 · · · Td .
Knot Insertion
Knot insertion is one of the most important algorithms on B-splines. It was introduced simultaneously in 1980 by Boehm [3] for single knots and Cohen, Lyche and Riesenfield [6] for general knot insertion. Knot insertion can also be expressed in the context of matrix notation. In the case of insertion of single knots we have the following expression c = Td ( t) c
(15)
where t is the value of the new knot and c is a vector of the d new coefficients/points replacing the d − 1 points that are in the interior of the vector c at the RHS (except for the first and the last coefficients). Here, as usual, the index i is determined by ti ≤ t < ti+1 . We shall look at an example, a second degree B-spline function where we get: ⎞ ⎛ ci−2 ci−1 1 − w2,i−i (t) w2,i−i (t) 0 ⎝ ci−1 ⎠ . = (16) ci 0 1 − w2,i ( t) w2,i ( t) ci In Figure 1 there is, at the LHS, a second degree B-spline curve (in R2 ) and its control polygon (c0 , c1 , c2 , c3 , c4 ). At the RHS the knot vector is illustrated (on R, horizontal). There are three equal knots at the beginning and at the end of the curve. In the middle a new knot, t, is to be inserted. It follows from the position of t that i = 3. The two “linear translation and scaling” functions involved in t) in expression (16), w2,2 and w2,3 , are shown at the RHS of the the matrix T2 ( figure. The result of the knot insertion is that the internal coefficient at the RHS c2 and c3 . of (16), c2 , is replaced by two new coefficients at the LHS of (16), This can clearly be seen on the LHS of Figure 1.
A Method for Sparse-Matrix Computation of B-Spline Curves and Surfaces
803
Fig. 1. At the LHS, there is a 2nd degree B-spline curve and its control polygon (in t is inserted. The R2 ). At the RHS, the knot vector is illustrated (in R). One new knot, result is, as we can see from the LHS, two new control points c2 and c3 instead of the old control point c2 .
5
The B-Spline Matrix
The B-spline matrix Bd (t) is a (d + 1) × (d + 1) matrix defined as follows ⎞ ⎛ T d (t) ⎟ ⎜ d T d−1 (t) T d ⎟ ⎜ ⎜ (d − 1)d T d−2 (t) T d−1 ⎟ Bd (t) = ⎜ ⎟. ⎟ ⎜ .. ⎠ ⎝ . d! T 1
It follows that
6
⎛
⎞ ci−d ⎜ ⎟ ⎜ .. ⎟ ⎜ ⎟ ⎟ ⎜ ⎜ ⎟ = Bd (t) ⎜ . ⎟ ⎝ ⎠ ⎝ ci−1 ⎠ ci c(d) (t) c(t) c (t) .. .
⎞
⎛
Complete Hermite Interpolation with B-Splines
Complete Hermite interpolation with B-splines requires that the number of coefficients n = m(d + 1), where m is the number of interpolation points and d is the polynomial degree of the B-spline curve. It follows that if ti ≤ t < ti+1 then Bd (t) is invertible and we get ⎞ ⎛ ⎞ ⎛ c(t) ci−d ⎜ c (t) ⎟ ⎜ .. ⎟ ⎜ ⎟ ⎜ . ⎟ ⎟ = Bd (t)−1 ⎜ .. ⎟ ⎜ ⎝ . ⎠ ⎝ ci−1 ⎠ ci c(d) (t) It thus follows that for each interpolation point where the position and d consecutive derivatives are given, we get d + 1 coefficients using the equation above.
804
A. Laks˚ a
References 1. Bernstein, S.: D´emonstration du theor`eme de Weierstrass fond´ee sur le calcul des probabiliti´es. Comm. Soc. Math. 13(1-2) (1912) 2. de Casteljau, P.: Shape Mathematics and CAD. Kogan Page, London (1986) 3. Boehm, W.: Inserting new knots into b-spline curves. Computer Aided Geometric Design 12(4), 199–201 (1980) 4. de Boor, C.: On calculation with B-splines. Journal of Approximation Theory 6, 50–62 (1972) 5. Curry, B.H., Schoenberg, I.J.: On Polya frequency functions IV: The fundamental spline functions and their limits. J. d’Analyse Math. 17, 71–107 (1966) 6. Cohen, E., Lyche, T., Riesenfeld, R.: Discrete B-splines and subdivision techniques in computer aided geometric design and computer graphics. Comp. Graphics and Image Process 14(2), 87–111 (1980) 7. Cox, M.G.: Curve fitting with piecewice polynomials. J. Inst. Math. Appl. 8, 36–52 (1972) 8. Ramshaw, L.: Blossoms are polar forms. Computer Aided Geometric Design 6(4), 323–359 (1989)
Parallel MIC(0) Preconditioning for Numerical Upscaling of Anisotropic Linear Elastic Materials Svetozar Margenov and Yavor Vutov Institute for Parallel Processing Bulgarian Academy of Sciences, Acad. G. Bontchev, Bl. 25A, 1113 Sofia, Bulgaria {margenov,yavor}@parallel.bas.bg
Abstract. Numerical homogenization is used for upscaling of the linear elasticity tensor of strongly heterogeneous microstructures. The implemented 3D algorithm is described in terms of six auxiliary elastic problems for the reference volume element (RVE). Rotated trilinear RannacherTurek finite elements are used for discretization of the involved subproblems. A parallel PCG method is implemented for efficient solution of the arising large-scale systems with sparse, symmetric, and positive semidefinite matrices. The implemented preconditioner is based on modified incomplete Cholesky factorization MIC(0). The numerical homogenization scheme is derived on the assumption of periodic microstructure. This implies periodic boundary conditions (PBCs) on the RVE. From algorithmic point of view, an important part of this study concerns the incorporation of PBCs in the parallel MIC(0) solver. Numerical upscaling results are shown. The test problem represents a trabecular bone tissue, taking into account the elastic response of the solid phase. The voxel microstructure of the bone is extracted from a high resolution computer tomography image. The presented results evidently demonstrate that the bone tissues could be substantially anisotropic. The computations are performed on IBM Blue Gene/P machine at the Bulgarian Supercomputing Center.
1
Introduction
The Preconditioned Conjugate Gradient (PCG) method is known to be the best solution tool for large systems of linear equations with symmetric and positive definite sparse matrices [3]. The used preconditioning technique is crucial for the PCG performance. It is also know that the PCG method converges for semidefinite matrices in the orthogonal to the kernel subspace. This paper is organized as follows. The applied numerical homogenization scheme is given in section 2. In section 3 the parallel MIC(0) preconditioner is described. Some results from numerical experiments are presented in the last section. I. Lirkov, S. Margenov, and J. Wa´ sniewski (Eds.): LSSC 2009, LNCS 5910, pp. 805–812, 2010. c Springer-Verlag Berlin Heidelberg 2010
806
2
S. Margenov and Y. Vutov
Homogenization Scheme
Let Ω be a hexahedral domain representing our RVE and u = (u1 , u2 , u3 ) be the displacements in Ω. The components of the small strain tensor are: 1 ∂ui (x) ∂uj (x) εij (u (x)) = + (1) 2 ∂xj ∂xi We assume that the Hooke’s law holds: σij (x) = cijkl (x)εkl (x)
(2)
Here, the Einstein summation convention is assumed. The tensor c is called stiffness tensor and σ is the stress tensor. We can rewrite (2) in matrix notation ⎡ ⎤ ⎡ ⎤⎡ ⎤ σ11 c1111 c1122 c1133 c1123 c1113 c1112 ε11 ⎢ σ22 ⎥ ⎢ c2211 c2222 c2233 c2223 c2213 c2212 ⎥ ⎢ ε22 ⎥ ⎢ ⎥ ⎢ ⎥⎢ ⎥ ⎢ σ33 ⎥ ⎢ c3311 c3322 c3333 c3323 c3313 c3312 ⎥ ⎢ ε33 ⎥ ⎢ ⎥=⎢ ⎥⎢ ⎥ (3) ⎢ σ23 ⎥ ⎢ c2311 c2322 c2333 c2323 c2313 c2312 ⎥ ⎢ 2ε23 ⎥ . ⎢ ⎥ ⎢ ⎥⎢ ⎥ ⎣ σ13 ⎦ ⎣ c1311 c1322 c1333 c1323 c1313 c1312 ⎦ ⎣ 2ε13 ⎦ σ12 c1211 c1222 c1233 c1223 c1213 c1212 2ε12 The symmetric 6 × 6 matrix C is called stiffness matrix. For an isotropic materials C has only 2 independent degrees of freedom. For orthotropic materials (there are three orthogonal planes of symmetry in this case), the matrix C has 9 independent degrees of freedom — 3 Young’s moduli E1 , E2 , E3 , 3 Poisson’s ratios ν23 , ν31 , ν12 and 3 shear moduli μ23 , μ31 , μ12 .
C Ort
⎡ 1−ν ν 23 32 ν21 + ν31 ν23 ν31 + ν21 ν32 E E Δ E2 E3 Δ ⎢ E2 E3 Δ ⎢ ν12 + ν13 ν32 1 −2ν313 ν13 ν32 + ν31 ν12 ⎢ ⎢ E3 E1 Δ E E Δ E E 3 1 3 1Δ ⎢ ν13 + ν12 ν23 ν23 + ν13 ν21 1 − ν12 ν21 =⎢ ⎢ ⎢ E1 E2 Δ E1 E2 Δ E1 E2 Δ ⎢ μ23 ⎢ ⎣ μ31
⎤ ⎥ ⎥ ⎥ ⎥ ⎥ ⎥, ⎥ ⎥ ⎥ ⎥ ⎦
(4)
μ12 1 − ν12 ν21 − ν13 ν31 − ν23 ν32 − 2ν12 ν23 ν31 ν12 ν21 ν23 ν32 where Δ = , = , = , E1 E2 E3 E1 E2 E2 E3 ν13 ν31 = . E3 E1 We follow the numerical upscaling method from [9], see also [5]. The homogenization scheme requires to find the functions ξ kl = (ξ1kl , ξ2kl , ξ3kl ), k, l = 1, 2, 3, satisfying the following problem in a week formulation:
∂ξpkl ∂φi ∂φi dΩ = cijkl (x) dΩ, ∀φ ∈ H1P (Ω), (5) cijpq (x) ∂xq ∂xj ∂xj Ω Ω
Numerical Upscaling of Anisotropic Linear Elastic Materials
807
where φ = {φi }3i=1 and H1P (Ω) = {φ ∈ H1 : φi are Ω − periodic}. After computing the characteristic displacements ξ kl , we find the homogenized elasticity tensor cH using the explicit formula:
∂ξpkl 1 H cijkl = cijkl (x) − cijpq (x) dΩ. (6) |Ω| Ω ∂xq Due to the symmetry of the stiffness tensor c, the following relations ξ kl = ξ lk hold. Therefore, it is enough to solve six problems (5) to get the homogenized elasticity tensor. Rotated trilinear (Rannacher-Turek) finite elements [11] are used for numerical solution of (5). This choice is motivated by the additional stability of the nonconforming FEM discretization in the case of strongly heterogeneous materials [2]. The construction of robust non-conforming FEM methods are generally based on application of mixed formulation leading to a saddle-point system. By the choice of non continuous finite elements for the dual (pressure) variable, it can be eliminated at the (macro)element level, and we get a symmetric positive (semi-)definite FEM system in primal (displacements) variables. We use this approach which is referred as reduced and selective integration (RSI) [10].
3
Parallel MIC(0) Preconditioning
Our preconditioning algorithm is based on a preexisting parallel MIC(0) elasticity solver [13], based on a parallel MIC(0) solver for symmetric and positive definite scalar elliptic problems [1]. The preconditioner uses the isotropic variant of the displacement decomposition (DD) method (see, e.g., [6]). We write the DD auxiliary matrix in the form ⎡ ⎤ A CDD = ⎣ A ⎦ (7) A where A is the stiffness matrix corresponding to the bilinear form
3
∂u ∂v a(u , v ) = E(x) ∂x i ∂xi Ω i=1 h
h
dx,
(8)
where u and v are Ω-periodic functions. The DD splitting is motivated by the second Korn’s inequality, which holds for the RSI FEM discretization under consideration. A brief introduction to the modified incomplete factorization [8] is given below. Let us rewrite the real N × N matrix A = (aij ) in the form A = D − L − LT
808
S. Margenov and Y. Vutov
where D is the diagonal and (−L) is the strictly lower triangular part of A. Then we consider the approximate factorization of A which has the form: CMIC(0) = (X − L)X −1 (X − L)T
(9)
with X = diag(x1 , · · · , xN ) being the diagonal matrix determined by the condition of equal rowsums. We are interested in the case when X > 0 and thus CMIC(0) is positive definite for the purpose of preconditioning. If this holds, we speak about stable MIC(0) factorization. Concerning the stability of MIC(0), the following theorem holds. Theorem 1. Let A = (aij ) be a symmetric real N × N matrix and let A = D − L − LT be the splitting of A. Let us assume that L ≥ 0, Ae ≥ 0, Ae + LT e > 0,
e = (1, · · · , 1)T ∈ RN ,
(10)
i.e. that A is a weakly diagonally dominant with nonpositive offdiagonal entries and that A + LT = D − L is strictly diagonally dominant. Then the relation xi = aii −
i−1 N
aik k=1
xk
akj > 0
j=k+1
holds and the diagonal matrix X = diag(x1 , · · · , xN ) defines stable MIC(0) factorization of A. The perturbed version of MIC(0) algorithm is used in our study. This means that ˜ The diagonal the incomplete factorization is applied to the matrix A˜ = A + D. ˜ ˜ ˜ ˜ perturbation D = D(ξ) = diag(d1 , . . . dN ) is defined as follows: ξaii if aii ≥ 2wi d˜i = 1/2 ξ aii if aii < 2wi where 0 < ξ < 1 is a properly chosen parameter, and wi = j>i −aij . In particular, this allows us to satisfy the stability conditions (10) in the case of PBCs. The idea of our parallel algorithm is to apply the MIC(0) factorization on an auxiliary matrix B, which approximates A. The matrix B has a special block structure, which allows a scalable parallel implementation. Following the stan dard FEM assembling procedure we write A in the form A = e∈ωh LTe Ae Le , where Ae is the element stiffness matrix, Le stands for the restriction mapping of the global vector of unknowns to the local one, corresponding to the current element e. Let us consider the following approximation Be of Ae : ⎡ ⎤ ⎡ ⎤ a11 a12 a13 a14 a15 a16 b11 a12 a13 a14 a15 a16 ⎢ a21 a22 a23 a24 a25 a26 ⎥ ⎢ a21 b22 a23 a24 a25 a26 ⎥ ⎢ ⎥ ⎢ ⎥ ⎢ a31 a32 a33 a34 a35 a36 ⎥ ⎢ a31 a32 b33 0 0 0 ⎥ ⎢ ⎥ ⎢ ⎥ Ae = ⎢ ⎥ , Be =⎢ a41 a42 0 b44 0 0 ⎥ . ⎢ a41 a42 a43 a44 a45 a46 ⎥ ⎢ ⎥ ⎣ a51 a52 a53 a54 a55 a56 ⎦ ⎣ a51 a52 0 0 b55 0 ⎦ a61 a62 a63 a64 a65 a66 a61 a62 0 0 0 b66
Numerical Upscaling of Anisotropic Linear Elastic Materials
809
Fig. 1. Structure of the solid phase: 128 × 128 × 128 voxels
The local numbering follows the pairs of the opposite nodes of the reference element. The diagonal entries of Be are modified to hold the rowsum criteria. the locally defined matrices Be we get the global matrix B = Assembling T −1 A) ≤ 3 holds uniformly e∈ωh Le Be Le . The condition number estimate κ(B with respect to mesh parameter and possible coefficient jumps (see for the related analysis in [1]). The modified matrix B has diagonal blocks, corresponding to the (x, y) cross sections. This allows to perform in parallel the solution of linear systems with matrix (9) [1]. It is important no note that the PBCs do not change the diagonal blocks of the stiffness matrix A as well as of the auxiliary matrix B. However, there are changes in the structure of the offdiagonal blocks, which require some principal modifications in the parallel code. Finally, the implemented parallel preconditioner for the considered linear elasticity nonconforming FEM systems has the form: ⎤ ⎡ CMIC(0) (B) ⎦. CMIC(0) (B) CDD MIC(0) = ⎣ CMIC(0) (B)
4
Numerical Experiments
The analyzed test specimen is a part of trabecular bone tissue extracted from a high resolution computer tomography image [4]. The trabecular bone has a strongly expressed heterogeneous microstructure composed of solid and fluid phases. To get a periodic RVE, the specimen is mirrored three times, see Fig. 1. In this article, our goal is to obtain the homogenized elasticity tensor of the trabecular bone tissue, taking into account the elastic response of the solid phase only. To this purpose, numerical experiments with exponentially decreasing Young modulus for the voxels corresponding to the fluid phase are performed. In other words, the empty (fluid) voxels are interpreted as fictitious domain. Homogenized properties of different RVEs with varying size of 32 × 32 × 32, 64 × 64 × 64
810
S. Margenov and Y. Vutov
and 128×128×128 voxels are studied. The Young modulus and the Poisson ratio of the solid phase are taken from [7] as follows: E s = 14.7GPa and ν s = 0.325. We set also ν f = ν s which practically doesn’t influence the numerical upscaling results. In what follows, the fictitious domain Young modulus E f is given in Pascals (see, e.g., the values in the parentheses bellow). ⎤ ⎡ 4.86×108 7.49×107 6.74×107 ⎥ ⎢ 7.49×107 3.03×108 9.45×107 ⎥ ⎢ 7 7 8 ⎥ ⎢ 6.74 ×10 9.45 ×10 7.43 ×10 H 7 ⎥ ⎢ C [1.47×10 ] = ⎢ ⎥ 8.45 ×107 ⎥ ⎢ ⎦ ⎣ 6.22×107 3.37×107 ⎤
⎡
4.35×108 5.65×107 5.03×107 ⎢ 5.65×107 2.38×108 7.69×107 ⎢ ⎢ 5.03×107 7.69×107 7.06×108 H 6 C [1.47×10 ] = ⎢ ⎢ 6.66×107 ⎢ ⎣ 4.89×107
⎥ ⎥ ⎥ ⎥ ⎥ ⎥ ⎦ 1.40×107
⎡
⎤
4.29×108 5.46×107 4.86×107 ⎢ 5.46×107 2.30×108 7.49×107 ⎢ ⎢ 4.86×107 7.49×107 7.01×108 H 5 C [1.47×10 ] = ⎢ ⎢ 6.44×107 ⎢ ⎣ 4.74×107
⎥ ⎥ ⎥ ⎥ ⎥ ⎥ ⎦ 1.18×107
Here we have vanished the entries of C H which tend to zero with the increase of the PCG accuracy. The structure of C H corresponds to the case of orthotropic materials which is due to the enforced triple mirroring procedure. Following (4), the Young moduli Ei in each of the coordinate directions and the Poisson ratios νij = − εj /εi are computed explicitly by the formulas Ei = 1/sii
νij = −Ei sji
where sij stand for the elements of the matrix S = (C H )−1 , see [12]. Tables 1, 2 and 3 contain the computed homogenized Young moduli and Poisson ratios varying the fictitious domain Young modulus E f for the considered three different specimens. A stable behaviour of the implemented numerical homogenization scheme is observed in all cases. From practical point of view, a good accuracy of the computed homogenized Young moduli and Poisson ratios is achieved if the fictitious domain modulus E f = δE s for δ ∈ 10−3 , 10−4 . The next observation is that in all three cases (RVEs composed of 323 , 643 and 1283 voxels) the orthotropy ratio is more than 3. This evidently confirms that the hypothesis that the trabecular bone structure could be interpreted (approximated) as isotropic is not realistic. The last table illustrates the convergence rate of the implemented DD MIC(0) preconditioner. The available theoretical
Numerical Upscaling of Anisotropic Linear Elastic Materials
811
Table 1. Homogenized linear elasticity coefficients: n=32 Ef 1.47×109 1.47×108 1.47×107 1.47×106 1.47×105
E1 4.52×109 2.03×109 1.67×109 1.63×109 1.62×109
E2 6.23×109 4.72×109 4.48×109 4.46×109 4.45×109
E3 6.24×109 4.67×109 4.45×109 4.42×109 4.42×109
ν12 0.208 0.095 0.074 0.072 0.071
ν23 0.300 0.271 0.264 0.263 0.262
ν31 0.286 0.229 0.212 0.210 0.210
μ23 2.29×109 1.73×109 1.66×109 1.65×109 1.65×109
μ31 1.39×109 4.81×108 3.56×108 3.42×108 3.40×108
μ12 1.35×109 3.80×108 2.42×108 2.26×108 2.24×108
Table 2. Homogenized linear elasticity coefficients: n=64 Ef 1.47×109 1.47×108 1.47×107 1.47×106 1.47×105
E1 2.86×109 8.73×108 5.69×108 5.33×108 5.29×108
E2 3.11×109 1.12×109 8.02×108 7.62×108 7.58×108
E3 3.55×109 1.94×109 1.73×109 1.71×109 1.71×109
ν12 0.288 0.191 0.127 0.117 0.116
ν23 0.270 0.164 0.124 0.119 0.118
ν31 0.281 0.185 0.117 0.102 0.101
μ23 1.19×109 4.94×108 3.90×108 3.77×108 3.76×108
μ31 9.07×108 1.62×108 5.22×107 3.88×107 3.74×107
μ12 9.50×108 1.71×108 5.22×107 3.77×107 3.62×107
Table 3. Homogenized linear elasticity coefficients: n=128 Ef 1.47×109 1.47×108 1.47×107 1.47×106 1.47×105
E1 2.66×109 7.90×108 4.65×108 4.20×108 4.15×108
E2 2.47×109 5.97×108 2.81×108 2.24×108 2.16×108
E3 2.67×109 9.51×108 7.10×108 6.78×108 6.75×108
ν12 0.315 0.282 0.228 0.222 0.222
ν23 0.284 0.180 0.114 0.100 0.098
ν31 0.278 0.171 0.094 0.076 0.073
μ23 8.76×108 1.93×108 8.46×107 6.66×107 6.44×107
μ31 8.78×108 1.68×108 6.22×107 4.89×107 4.75×107
μ12 8.87×108 1.64×108 3.37×107 1.40×107 1.18×107
Table 4. Number of iterations n 32 64 128
E f [Pa] 1.47×109 1.47×108 1.47×107 1.47×106 1.47×105 N 2 359 296 222 363 575 711 734 18 874 368 343 577 848 1 236 1 436 150 994 944 481 840 1 505 2 311 2 482
estimates concern some more model problems for homogeneous materials. In such cases, the number of iterations is nit = O(n1/2 ) = O(N 1/6 ). Here the number of iterations has very similar behaviour for coefficient jumps of the range {10 − 102 }. The good news here is that even for very large jumps of up to 105 , the convergence is only slightly deteriorating. Based on the reported results we can conclude that the developed numerical homogenization algorithmis and software tools provide a reliable tool for computer simulation of strongly heterogeneous anisotropic voxel microstructure. As a next important step we plan to incorporate in the upscaling scheme the contribution of the fluid phase of the bone tissues. The implementation of additionally stabilized solvers for PBC FEM problems as well as the incorporation of scalable AMG preconditioners in the composed algorithm are also under development.
812
S. Margenov and Y. Vutov
Acknowledgments The authors gratefully acknowledge the partial support provided via Bulgarian NSF Grant DO 02-147/08.
References 1. Arbenz, P., Margenov, S., Vutov, Y.: Parallel MIC(0) preconditioning of 3D elliptic problems discretized by Rannacher-Turek finite elements. Computers and Mathematics with Applications 55(10), 2197–2211 (2008) 2. Arnold, D.N., Brezzi, F.: Mixed and nonconforming finite element methods: Implementation, postprocessing and error estimates. RAIRO, Model. Math. Anal. Numer. 19, 7–32 (1985) 3. Axelsson, O.: Iterative Solution Methods. Cambridge University Press, Cambridge (1994) 4. Beller, G., Burkhart, M., Felsenberg, D., Gowin, W., Hege, H.-C., Koller, B., Prohaska, S., Saparin, P.I., Thomsen, J.S.: Vertebral Body Data Set ESA29-99-L3, http://bone3d.zib.de/data/2005/ESA29-99-L3/ 5. Bensoussan, A.: Asymptotic Analysis for Periodic Structures. Elsevier, Amsterdam (1978) 6. Blaheta, R.: Displacement decomposition — incomplete factorization preconditioning techniques for linear elasticity problems. Num. Lin. Algebr. Appl. 1, 107–126 (1994) 7. Cowin, S.: Bone poroelasticity. J. Biomechanics 32, 217–238 (1999) 8. Gustafsson, I.: Modified incomplete Cholesky (MIC) factorization. In: Evans, D.J. (ed.) Preconditioning Methods; Theory and Applications, pp. 265–293. Gordon and Breach, New York (1983) 9. Hoppe, R.H.W., Petrova, S.I.: Optimal shape design in biomimetics based on homogenization and adaptivity. Math. Comput. Simul. 65(3), 257–272 (2004) 10. Malkus, D., Hughes, T.: Mixed finite element methods. Reduced and selective integration techniques: an uniform concepts. Comp. Meth. Appl. Mech. Eng. 15, 63–81 (1978) 11. Rannacher, R., Turek, S.: Simple nonconforming quadrilateral Stokes Element. Num. Meth. Part. Diff. Equs. 8(2), 97–112 (1992) 12. Saad, M.H.: Elasticity — Theory, Applications and Numerics. Elsevier, Amsterdam (2005) 13. Vutov, Y.: Parallel DD-MIC(0) Preconditioning of Nonconforming Rotated Trilinear FEM Elasticity Systems. In: Lirkov, I., Margenov, S., Wa´sniewski, J. (eds.) LSSC 2007. LNCS, vol. 4818, pp. 745–752. Springer, Heidelberg (2008)
The Bpmpd Interior Point Solver for Convex Quadratically Constrained Quadratic Programming Problems Csaba M´esz´aros Laboratory of Operations Research and Decision Systems, Hungarian Academy of Sciences
[email protected]
Abstract. The paper describes the convex quadratically constrained quadratic solver Bpmpd which is based on the infeasible–primal–dual algorithm. The discussion includes subjects related to the implemented algorithm and numerical algebra employed. We outline the implementation with emhasis to sparsity and stability issues. Computational results are given on a demonstrative set of convex quadratically constrained quadratic problems.
1
Introduction
During the past two decades perhaps the most dramatic progress in computational optimization has been achieved in the field of implementation of interior point methods. The recent implementation techniques follow basically the major contributions of the developers of commercial and public domain interior–point packages [1]. The success of interior point methods (IPM) in the linear programming (LP) practice resulted in an increased interest in the application of the IPM technology in nonlinear optimization. It is now widely accepted that on large–scale problems interior point methods can significantly outperform the algorithms based on the generalizations of the simplex approach (see the benchmark results in [11]). The paper describes our IPM solver, Bpmpd, which is an efficient implementation of the infeasible–primal–dual algorithm for solving linear (LP), convex quadratic (QP) and convex quadratically constrained quadratic (QCQP) programming problems. The paper is organized as follows: Section 2 presents the infeasible–primal– dual method on which our implementation is based. In section 3 we outline some of the most important implementation techniques used in Bpmpd ensuring efficiency and robustness. Numerical results are shown in Section 4 on large–scale QCQP test problems. I. Lirkov, S. Margenov, and J. Wa´ sniewski (Eds.): LSSC 2009, LNCS 5910, pp. 813–820, 2010. c Springer-Verlag Berlin Heidelberg 2010
814
2
C. M´esz´ aros
The Infeasible Primal–Dual Method
Without loss of generality, the convex QCQP problem is usually assumed to be in the following form: 1 T x Qi x 2
min 12 xT Q0 x + cT0 x, + cTi x ≥ bi for i = 1, . . . , m, x ≥ 0,
(1)
where x, ci ∈ Rn , bi ∈ R, Qi ∈ Rn×n , furthermore, Q0 and −Qi for i = 1 . . . m are symmetric positive semidefinite. A dual problem in the Wolfe-sense can be associated to (1) as follows: T max bT y − 12 xT Q0 x − m i=1 yi x Qi x , m C T y ≤ c0 + Q0 x − i=1 yi Qi x, (2) y ≥ 0, T
where y ∈ Rm and C = [c1 , . . . , cm ] . Primal–dual interior point algorithms are iterative approaches which seek to satisfy the Karush–Kuhn–Tucker type optimality conditions for the primal–dual problem (1-2). Megiddo, Mizuno and Kojima [4] and others contributed to the theoretical background of this type of algorithms, while the first remarkable implementation results were shown by Lustig, Marsten, and Shanno [5]. The first step to obtain the infeasible–primal–dual algorithm is to replace the nonnegativity constraints with logarithmic penalty terms for strictly positive approximate solutions x > 0 and s > 0, where s ∈ Rm . This results in the following logarithmic barrier problem: n m min 12 xT Q0 x + c0 T x − μ i=1 ln xi − μ i=1 ln si , (3) 1 T T 2 x Qi x + ci x − si = bi for i = 1, . . . , m, where μ > 0 barrier parameter. From the Lagrangian of the above problem, L(x, y, s), we can derive a set of nonlinear equations that describes the first–order optimality conditions: ∇x L(x, y, s) = c0 + Q0 x − μX −1 e − C T y −
m
yi Qi x = 0,
(4)
i=1
1 T x Qi x + cTi x − si − bi = 0, for i = 1, . . . , m, 2 ∇s L(x, y, s) = −μS −1 e + y = 0,
∇y L(x, y, s) =
(5) (6)
T
where X = diag(x1 , . . . xn ), S = diag(s1 , . . . sm ), e = (1, . . . , 1) . Furthermore, we introduce z ∈ Rm as: z = μX −1 e and rewrite (7) and (6) as: z = μX −1 e =⇒ Xz = μe, −μS −1 e + y = 0 =⇒ Sy = μe.
(7)
The Bpmpd Interior Point Solver
815
The main idea of the primal-dual log barrier methods is that the resulted nonlinear system of equations is solved by the Newton method, while μ is decreased to zero. For describing the algorithm, we use the notations bellow: H(y) = Q0 − ai (x) = ci +
m
yi Qi ,
i=1 m
xT Qi , for i = 1, . . . , m,
i=1 T
A(x) = (a1 (x) , . . . , am (x)T )T , 1 ξi = bi − xT Qi x − cTi x + si for i = 1, . . . , m, 2 m ζ = c0 + Q 0 x − z − C T y − yi Qi x. i=1
It is easy to derive that in every iteration, the Newton method results in the following system of equations: ⎡ ⎤⎡ ⎤ ⎡ ⎤ A(x, y) −I 0 0 Δx ξ ⎢ −H(y) 0 A(x, y)T I ⎥ ⎢ Δs ⎥ ⎢ ⎥ ζ ⎢ ⎥⎢ ⎥=⎢ ⎥, (8) ⎣Z 0 0 X ⎦ ⎣ Δy ⎦ ⎣ μe − XZe ⎦ 0 Y S 0 Δz μe − SY e which can be reduced by pivoting on X and Y to
−ZX −1 − H(y) A(x, y)T Δx ζ − μX −1 e + Ze = . A(x, y) Y −1 S Δy ξ + μY −1 e − Se
(9)
Let M denote the matrix of the augmented system (9) and D and F denote: D = ZX −1 + H(y), F = Y −1 S. It is easy to see that the assumptions for (1) provide the positive definiteness of D and F which make the matrix of the augmented system quasidefinite [13]. In every iteration an implementation of the primal–dual interior point method defines the matrix and right–hand side of (1) and solves the new linear equation system. The result is used for deriving the solution of (8) which will be applied as search direction from the current iterate. The implementation of a usual interior point solver consists of the following parts: – Presolve and scaling, sparse matrix ordering. – Choose starting point (x0 , s0 , y 0 , z 0 ) > 0. – Iteration cycle.
816
C. M´esz´ aros
• • • •
Check stopping criteria. Define M and μ. Solve the Newton system. Compute steplengths and update primal and dual solutions.
– Postsolve and descaling. The development of Bpmpd started in 1993 and in 1996 it was released in the Netlib in source code form as an interior-point implementation to solve LP problems. Since then, the functionality of the solver has been extended [6] and the efficiency [7] and robustness [8] were increased significantly.
3
Implementation
The efficient and robust implementation of a numerical algorithm is a highly complex task. The recent version of Bpmpd consists of approximately 25 thousand C source lines. The numeric computations are the steps most involved in any implementation. Making them efficient is of extraordinary importance. Even the symbolical investigations are closely related to the numerical procedures, because they are used for reducing the number of necessary arithmetical operations as possible. Therefore, during the development of Bpmpd a significant effort was concentrated on making the “numerical algebra kernel” as robust and fast as possible. Nowadays, all general–purpose large-scale interior point implementations use direct decomposition to solve the reduced Newton system (9). This can be done by the symmetric factorization of a quasidefinite system as
−D AT (10) P T = LΛLT , P A F where L ∈ R(m+n)×(m+n) lower triangular, Λ ∈ R(m+n)×(m+n) diagonal and P ∈ R(m+n)×(m+n) permutation. Direct factorization approaches have some disadvantages, i.e., L may fill up as the models getting larger, which increases memory and floating point computation requirements. However, due to the quasidefinite property the P permutation can be determined in a pure symbolical step, without numerical considerations. Once the permutation has been determined, the factorization pattern is fixed for all interior point iterations [13]. Then the numerical factorization can take advantage of the fixed factorization pattern by exploiting the computational hardware. 3.1
Improving the Numerical Properties of Factorizations
As shown in [9] one drawback of direct factorization approaches is that in some situations the sufficient numerical accuracy cannot be achieved. One possible solution, suggested in [8], overcomes the numerical difficulties by applying iterative refinement and regularization
(0)
(k+1)
(k) Δx Δx Δx ∗ −1 = I − (M ) M + , (11) Δy Δy Δy
The Bpmpd Interior Point Solver
817
where M ∗ ∈ R(m+n)×(m+n) and differs from M in the diagonal values only:
−D∗ AT M∗ = A F where where D∗ − D is diagonal. One can see that the requirement −1 λ I − (M ∗ ) M < 1 for the convergence of the iterative refinement process (11) is equivalent to the condition. Dii max ∗ < 2, (12) Dii which describes the necessary conditions for the regularization matrix D∗ . This kind of regularized iterative refinement process is implemented in Bpmpd as described in [8]. 3.2
Implementation of the Numerical Kernels
Virtually, all efficient computer hardware have similar features: – Hierarchical memory. – Vector computing capabilities. – Parallel processing capabilities. The exploitation of hardware features like these is difficult in sparse computations, therefore efficient implementations try to identify parts of the sparse factorization which can be done in dense mode. The ideas in [7] were based on the efficient exploitation of supernodal partitions [2] of the factors. A supernode is a set of continuous columns of L, which share the same sparsity structure. In order to speed up numerical updates by columns that do not form supernodes as described above, we have generalized the supernodes in the following manner: a set of columns is considered as a generalized supernode if the corresponding diagonal block in L is a dense lower triangular matrix. Since each column of a supernode like this will contain the sparsity pattern of the preceding columns as a subset, one can reorder the row indices and form a submatrix with dense blocks inside. The following example shows a generalized supernode in the original and permuted forms. ⎡ ⎤ ⎡ ⎤ ∗ 1 1 ∗ ⎥ ⎢∗ ∗ ⎥ 2 2⎢ ∗ ∗ ⎢ ⎥ ⎢ ⎥ ⎢ ⎥ ⎥ 3⎢ ⎢ ∗ ∗ ∗ ⎥ Permute rows ⎢ ∗ ∗ ∗ ⎥ 3 ⎢ ⎥ ⎢ ⎥ 4 ⎢ ∗ ∗⎥ =⇒ ⎢∗ ∗ ∗⎥ 5 ⎣ ⎣ ⎦ ∗ ∗ ∗⎦ 6 5 ∗∗∗ ∗∗ 4 6 ∗∗∗ Whereas the permuted index ordering will make the sparse update process a bit more costly, the computation of the update with a generalized supernode can be easily organized by dense block updates where the number of necessary
818
C. M´esz´ aros
dense block operations is at most the number of columns in the supernode. The generalization of the supernodes in the above manner has an additional benefit, namely, this makes it possible to store the sparsity pattern of the columns of the generalized supernodes in a compressed form. This can result in significant savings in the required storage space and make data localization better, which is also beneficial for the hardware. By using the supernode decompositions above, a significant part of the floating point operations can be performed in dense mode which can be implemented efficiently by using loop unrolling and blocking techniques [12,10]. The computational kernel process of our implementation can be described as follows: after removing the empty rows, the supernodes and generalized supernodes are decomposed into blocks: ⎤ ⎡ B0 ⎢ B1 ⎥ ⎢ ⎥ ⎣...⎦, Bk where B0 is dense lower triangular matrix and, Bi>0 is rectangular. Bi>0 is either a dense matrix or it has a dense block structure. A natural idea is to take the size of the cache memory into consideration when determining the supernode partitioning, i.e., splitting large supernodes into smaller ones such that each Bi fits entirely to the cache memory. This approach balances well the benefit of the faster cache memory and the overhead presented by the multiple updates of split supernode partitions. The Bi blocks are stored row–wise, allowing the use of inner product computations during the factorization process, which is efficient for vector processing. In our push factorization approach the supernodes are processed one by one. The inner cycle of this factorization process computes the numerical values of a supernode and its numerical contribution on the remaining matrix as follows: 1. Compute the symmetric decomposition B0 = F0 Λ0 F0t . (parallel) for i = 1, . . . , k. 2. Update Bi ←− Bi F0−T Λ−1 0 for i = 1, i ≤ k 3. Compute the update matrices Uj = Bi Λ0 BjT (parallel) for j = 1, . . . , i−1. 4. Compute the lower triangular update matrix Ui = Bi Λ0 BiT . 5. Update the remaining matrix by Uj (parallel) for j = 1, . . . , i. end for In step 1 we store F0 in the storage place of B0 , thus the algorithm exploits the cache memory by re-using the data of B0 in steps 1 and 2 and the data of Bi in steps 3 and 4. This is a key point in parallel computations, since the access to the main memory is often a limiting factor. As noted in [7], the above process exploits vector processing very efficiently. 3.3
Sparse Matrix Ordering
The practical success in large–scale optimization highly depends on the exploitation of sparsity structures. In case of interior point methods, the pivot order of
The Bpmpd Interior Point Solver
819
the symmetric factorization has a very strong influence on the number of fill–in and, therefore, on the storage requirement and the speed of factorizations. In the symbolic phase we try to find a good row permutation P for (10) that minimizes the number of nonzeros in L. Since finding the “optimal” permutation is an NP complete problem [14], heuristics are used in practice. Bpmpd applies a variant of the multilevel bisection algorithm [3] combined with local heuristics [2].
4
Numerical Results and Conclusions
In this section we shall compare the efficiency of Bpmpd with Cplex version 11. Our tests were performed on the convex QCQP test problems published by Mittelmann at http://plato.asu.edu/ftp/ampl_files/qpdata_ampl/. The experiments were performed on a desktop machine equipped with one 3 Ghz Core 2 Duo Intel processor. From the test problems we omitted the small ones which were solved by both solvers in less than 2.5 seconds. Table 1 contains the statistics of the test problems and the solution times in seconds. Table 1. Numerical results on QCQP problems Problem Problem size Solution time name m n nz(A) CPLEX BPMPD boyd1 20 93262 652258 19.73 2.3 boyd2 186533 93264 423788 29.75 1.12 cont-100 9803 10198 58808 3.02 0.73 cont-200 39603 40398 237608 14.88 4.21 cont-201 40200 40398 209402 12.42 5.19 cont-300 90300 90598 471602 20.97 16.86 stadat1 4001 2002 11999 5.58 0.06 aug2dc 10002 20201 60202 2.56 0.28 cvxqp1-l 5002 10001 15000 424.98 5.78 cvxqp2-l 2502 10001 7501 210.58 3.78 cvxqp3-l 7502 10001 22499 638.61 7.04 ksip 1003 21 17393 4.67 0.03 liswet11 10002 10003 40004 8.77 0.36 liswet12 10002 10003 40004 8.02 1.33 liswet1 10002 10003 40004 10.64 0.36 liswet7 10002 10003 40004 12.7 0.26 liswet8 10002 10003 40004 15.2 0.78 liswet9 10002 10003 40004 11.17 0.95 stcqp2 2054 4098 413340 3.3 0.06 yao 2002 2003 8004 3.45 0.05
Our numerical results indicate that nonseparable QCQPs can be solved efficiently in the standard form, and the performance of our implementation compares favorably to that of Cplex. Furthermore, our experiences show that interior point methods with adequate numerical and sparsity techniques are robust and efficient tools in large–scale nonlinear optimization.
820
C. M´esz´ aros
Acknowledgments This work was supported in part by Hungarian Research Fund OTKA K-77420 and K-60480.
References 1. Andersen, E., Gondzio, J., M´esz´ aros, C., Xu, X.: Implementation of interior point methods for large scale linear programs. In: Terlaky, T. (ed.) Interior Point Methods of Mathematical Programming, pp. 189–252. Kluwer Academic Publishers, Dordrecht (1996) 2. George, A., Liu, J.: Computer Solution of Large Sparse Positive Definite Systems. Prentice-Hall, Englewood Cliffs (1981) 3. Karypis, G., Kumar, V.: A fast and high quality multilevel scheme for partitioning irregular graphs. SIAM Journal on Scientific Computing 20(1), 359–392 (1999) 4. Kojima, M., Megiddo, N., Mizuno, S.: Theoretical convergence of large-step primaldual interor-point algorithms for linear programming. Mathematical Programming 59(1) (1993) 5. Lustig, I., Marsten, R., Shanno, D.: Computational experience with a primal-dual interior point method for linear programming. Linear Algebra Appl. 20, 191–222 (1991) 6. M´esz´ aros, C.: The BPMPD interior-point solver for convex quadratic problems. Optimization Methods and Software 11-12, 431–449 (1999) 7. M´esz´ aros, C.: On the performance of the Cholesky factorization in interior point methods on Pentium 4 processors. CEJOR 13(4), 289–298 (2005) 8. M´esz´ aros, C.: On numerical issues of interior point methods. SIAM J. on Matrix Analysis 30(1), 223–235 (2008) 9. M´esz´ aros, C.: On the Cholesky factorization in interior point methods. Computers & Mathematics with Applications 50, 1157–1166 (2005) 10. M´esz´ aros, C.: Fast Cholesky factorization for interior point methods of linear programming. Computers & Mathematics with Applications 31(4/5), 49–54 (1996) 11. Mittelmann, H., Spellucci, P.: Decision tree for optimization software. World Wide Web (1998), http://plato.la.asu.edu/guide.html 12. Rothberg, E., Gupta, A.: Efficient Sparse Matrix Factorization on High-Performance Workstations-Exploiting the Memory Hierarchy. ACM Trans. Mathematical Software 17(3), 313–334 (1991) 13. Vanderbei, R.: Symmetric quasi-definite matrices. SIAM J. on Optimization 5(1), 100–113 (1995) 14. Yannakakis, M.: Computing the minimum fill–in is NP–complete. SIAM J. Algebraic Discrete Methods 2, 77–79 (1981)
On Shape Optimization of Acoustically Driven Microfluidic Biochips Svetozara I. Petrova1,2 1
2
Department of Mathematics, University of Applied Sciences, Am Stadtholz 24, 33609 Bielefeld, Germany Institute for Parallel Processing, Bulgarian Academy of Sciences, Acad. G. Bontchev, Block 25A, 1113 Sofia, Bulgaria
Abstract. The paper deals with the shape optimization of acoustically driven microfluidic biochips. This is a new type of nanotechnological devices with a wide range of applications in physics, pharmacology, clinical diagnostics, and medicine. The complex system of channels inside the chip can be optimized with regard to different objectives and various optimal criteria. To simplify the model, we solve Partial Differential Equations (PDEs) constraint optimization problem based on a stationary Stokes-like system as equality constraints and bilateral inequality constraints with upper and lower bounds for the design parameters coming from geometrical and technological restrictions. The shape of the domain is designed by control parameters determining certain parts of the boundary. Preliminary fixed segments of the lateral walls of the channel are approximated by B´ezier-curves with design parameters taken as the B´ezier control points. For the solution of the problem we use a pathfollowing interior-point method with inexact Newton solvers. Keywords: Surface acoustic waves, biochips, shape optimization, pathfollowing interior-point method, Newton solver. Mathematics subject classification: 65K10, 74F10, 90C30.
1
Introduction
In the last years, the use of acoustically driven microfluidic biochips as a novel type of nanotechnological devices has attracted a lot of attention (cf., e.g. [2,4,5,7,9]). These miniaturized chips with a surface approximately one quadrat centimeter or less contain a complex system of channels with appropriately located interdigital transducers, pumps, reservoirs, capillary barriers, mixers, valves, etc. and serve as biochemical labs, also called labs-on-a-chip. In particular, they can transport droplets or fluid volumes of nanoliters and perform biochemical analysis of tiny samples. The small size of the devices in a scale of micrometers is of a significant importance for reducing the costs in many practical applications. I. Lirkov, S. Margenov, and J. Wa´ sniewski (Eds.): LSSC 2009, LNCS 5910, pp. 821–828, 2010. c Springer-Verlag Berlin Heidelberg 2010
822
S.I. Petrova
Fig. 1. Transportation of a droplet containing probe in SAWs driven biochip
The basic working principle and the operating mode of the microfluidic biochips is based on piezoelectrically actuated Surface Acoustic Waves (SAWs) on the surface of a chip [9]. The acoustic streaming effects excited by SAWs have been used for a long time in high frequency applications (cf., [2,7]). The piezoelectric substrate material on the chip is equipped with the so-called interdigital transducers on the surface. They convert electromagnetic input signals into elastic waves on the surface of the substrate and produce SAWs. The interaction of the SAWs generated by electric pulses of high frequency together with fluid volumes of amounts of nanoliters leads to streaming patterns in the fluid and serves as a driving force for a motion of the fluid as whole. The acoustic waves propagate like a miniaturized earthquake and in this way transport liquids along the surface of the chip. A transportation of a droplet containing probe along a lithographically produced network of channels to marker molecules placed at prespecified surface locations is presented in Fig. 1. On the fluidic network a small portion of titrate solution (middle) is separated from a larger volume (right). Surface acoustic waves transport this quantity towards the analyte (left) at the reaction site. Once a critical concentration is attained, it can be either detected by a change of the color of the analyte or a change of the conductivity. In the latter case, this can be easily measured by a sensor that is integrated on the same chip. In the recent years, the described acoustically driven microfluidic biochips have been actively used in physics, pharmacology, molecular biology, and clinical diagnostics. In most of these applications one needs a precise handling of very small amounts of tiny samples and costly reagents, cost-effectiveness, better sensitivity, and significant speed-up of biochemical analysis. We are interested in modeling and numerical simulation of microfluidic flows on biochips and optimization techniques for an improved design of these devices (channel structure, location of the capillary barriers, optimal positioning of the interdigital transducers). More precisely, we focus on shape optimization of domains with channel geometries filled with a flow and consider an objective functional depending on specific physical applications. The paper is organized as follows. Section 2 describes the physical model using the classical Navier-Stokes equations. Due to different time scales, the model consists of two systems of Partial Differential Equations (PDEs). Our PDEs constrained optimization problem based on a stationary Stokes-like system and numerical results are presented in Section 3.
On Shape Optimization of Acoustically Driven Microfluidic Biochips
2
823
The Physical Model
The physical model relies on the compressible Navier-Stokes equations with velocity v, pressure p, and density ρ of the fluid in a time-dependent computational domain Ω(t) ∂v η + [∇v]v = −∇p + ηΔv + ζ + ∇(∇ · v) , ρ ∂t 3 ∂ρ + ∇ · (ρv) = 0 , (1) ∂t p = p(ρ) = aργ in Ω(t), t > 0. Here, η and ζ are the shear and bulk viscosity coefficients and a, γ > 0. Usually, a linear dependence (γ = 1) of fluid (such as water) is assumed. Note that the right-hand side of the first equation in (1) is the divergence of the Newtonian stress tensor Σ = Σ(v, p) defined componentwise as follows ∂vi ∂vj 2 Σij (v, p) = −p δij + η + + ζ − η (∇ · v)δij . ∂xj ∂xi 3 The second row in (1) represents the continuity equation. The last (constitutive) equation is considered to close the system. The problem (1) is coupled by the boundary conditions v(t, x + u) =
∂u (t, x) ∂t
on ∂Ω(t), t > 0,
where u = u(t, x) is the SAW displacement (from the equilibrium state) of the substrate at the surface, which is typically a harmonic function of time. Suitable initial conditions for the unknowns v = 0 and p = p0 are prescribed at t = 0 in Ω0 := Ω(0). The system (1) is a multi-scale problem due to the extremely different time scales. The acoustics (elastic waves on the chip surface interact with acoustic waves of fluid) is a process with a time scale of nanoseconds /10−8s/, while the acoustic streaming is in milliseconds /10−3 s − 100 s/. We use the idea behind the approximation theory, assuming an expansion of the unknowns v, p, ρ with a scale parameter ε v = 0 + εv + ε2 v + O(ε3 ) , p = p0 + εp + ε2 p + O(ε3 ) , ρ = ρ0 + ερ + ε2 ρ + O(ε3 ) , where ε, 0 < ε 1, is proportional to the maximal SAWs displacement of the domain boundary. We define v 1 = εv , v 2 = ε2 v and analogously p1 = εp , p2 = ε2 p and ρ1 = ερ , ρ2 = ε2 ρ . Assume all quantities with subscript 0 to be known and constant in time and space. Collecting all terms of order O(ε) we get the linear system for damped propagation of acoustic waves ∂v 1 η − ηΔv 1 − ζ + ∇(∇ · v 1 ) + ∇p1 = 0 , ρ0 ∂t 3
824
S.I. Petrova
∂ρ1 + ρ0 ∇ · v 1 = 0 , ∂t p1 − c20 ρ1 = 0
(2) in Ω(t), t > 0
∂u (t, x) on ∂Ω(t), t > 0, coming from the ∂t observation u = O(ε) and the Taylor expansion with boundary condition v 1 (t, x) =
v(t, x + u) = v(t, x) + [∇v] u + O( u 2 ) =
∂u (t, x) ∂t
and initial conditions v 1 = 0 and p1 = 0 in Ω0 . The constant c0 stands for the small signal speed of the fluid with c20 := aγργ−1 . The problem (2), with 0 dominated acoustic effects, is our microscopic time-scale problem. After having solved the acoustic system (2) for the variables v 1 , p1 , and ρ1 , the collection of all second order terms O(ε2 ) results in ∂v 2 ∂v 1 η ρ0 − ηΔv 2 − ζ + ∇(∇ · v 2 ) + ∇p2 = −ρ1 − ρ0 [∇v 1 ]v 1 , ∂t 3 ∂t ∂ρ2 + ρ0 ∇ · v 2 = −∇ · (ρ1 v 1 ) , (3) ∂t in Ω(t), t > 0, p2 − c20 ρ2 = 0 v 2 = −[∇v 1 ] u p2 = 0 v 2 = 0,
on ∂Ω(t), t > 0, on Ω0 .
The system (3) can be interpreted as an instationary Stokes problem and refers to our macroscopic time-scale problem. We are interested in the acoustic streaming effects observable on the larger (macroscopic) time scale. For that reason we consider the time-averaging operator 1 t0 +T < a >:= a(t) dt, T t0 where T := 2πω is the period of the harmonic SAW oscillation. To simplify the model and perform a numerical simulation, we neglect the time dependence of the domain which is possible due to the very small SAW displacements u = O(ε) and the scale parameter ε 1. Thus, we arrive at the following stationary Stokes system for the acoustic streaming: η ∂v 1 ∇(∇ · v 2 ) + ∇p2 = − ρ1 + ρ0 [∇v 1 ]v 1 , − ηΔv 2 − ζ + 3 ∂t ρ0 ∇ · v 2 = −∇ · (ρ1 v 1 ) in Ω , (4) 2 in Ω , p 2 − c0 ρ 2 = 0 v 2 = − [∇v 1 ] u
on ∂Ω.
The linear system (4) describes the stationary average flow field, resulting after relaxation of SAWs of high frequency. For more details of the model description see [2,7,9].
On Shape Optimization of Acoustically Driven Microfluidic Biochips
3
825
Shape Optimization Problem
In this section, we focus on the solution of the Stokes-like problem (4) and consider it as a state system in our PDEs constrained optimization. The standard algorithms for the physical model requires first a solution of subproblem (2) for the damped propagation of acoustic waves to find v 1 , p1 , ρ1 with a given periodic function u as the SAW displacement on the boundary of the domain. For this subproblem we rely on the numerical simulation presented in [5] and concentrate further on the solution of the second subproblem (4). For convenience, in what follows, we set v = v 2 , p = p2 , and ρ = ρ2 and denote ∂v 1 f v := − ρ1 + ρ0 [∇v 1 ]v 1 , fp := −∇ · (ρ1 v 1 ) . ∂t Our purpose is to find the optimal structural design of a channel located on the chip surface. We suppose that the computational domain Ω ⊂ lR2 is occupied by a part of a channel with a capillary barrier and a reservoir, as illustrated in Fig. 2. The capillary barrier is placed between the channel and the reservoir to insure a proper filling of the reservoir with a precise amount of liquid probe. The additional outlet valves on upper and lower walls of the boundary are passive when the capillary barrier is opened and activate when it is in stopping mode. We consider an optimization problem with an objective functional J(w, α) with respect to the state variables w = (v, p, ρ)T and the design parameters α in the domain Ω(α) with a part of the boundary depending on α. Our aim is to compute the maximal pumping rate for the surface acoustic wave driven fluidic transport. For that purpose, we consider a cross section Ω(α) of the channel and define the objective functional as follows J(w, α) := max ρv ·ds (5) α Ω(α) Suppose that α = (α1 , . . . , αM )T is the vector of the B´ezier control points of order M on both sides of the lateral walls of the channel. More precisely, we consider Mu points (α1 , . . . , αMu )T on the upper part of the lateral wall and Ml points on the lower part of the wall, so that M = Mu + Ml . Typically, the B´ezier control points are the prespecified coefficients in the linear combination for the B´ezier-curve by means of Bernstein polynomials. Taylor-Hood P2/P1 (quadratic with respect to the velocity components and linear for the pressure and density) finite elements are used for the spatial discretization of the Stokes-like system with shape regular triangulation of the domain. Denote by v ∈ lRN1 , p ∈ lRN2 , and ρ ∈ lRN3 respectively, the velocity, the pressure, and the density components in the discretized model, so that w = (v, p, ρ)T ∈ lRN with N = N1 + N2 + N3 . In the sequel, we consider the discretized PDEs constrained shape optimization problem to optimize the objective (5) subject to the PDEs equality constraints η ∇(∇ · v) + ∇p − f v = 0 (6) C1 (w, α) := −ηΔv − ζ + 3
826
S.I. Petrova
C2 (w, α) := ρ0 ∇ · v C3 (w, α) := p −
c20
− fp = 0
in Ω(α) ,
=0
ρ
and the inequality constraints on α given entrywise as follows (i)
αmin < αi < α(i) max ,
1 ≤ i ≤ M.
(7)
The third constraint in (6) relates the third equation in (4) and is included only for clarity in presentation. Practically, this constraint is not taken into account in the numerical experiments due to its simplicity. Coupling the constraints by Lagrange multipliers we define the Lagrangian function associated with problem (5)-(7) L(w, α, λ, z) := J(w, α) + λT1 C1 (w, α) + λT2 C2 (w, α) + λT3 C3 (w, α)
(8)
− z T1 (α − αmin ) − z T2 (αmax − α). Here, λ := (λ1 , λ2 , λ3 )T ∈ lRN with λi ∈ lRNi , i = 1, 2, 3 refers to the Lagrange multipliers for the equality constraints. The vector z := (z 1 , z 2 )T with z 1 ≥ 0, z 2 ≥ 0 stands for the Lagrange multipliers corresponding to the inequality constraints. The limits vectors for the design parameters are denoted by T T (1) (M) (M) αmin = αmin , . . . , αmin , αmax = α(1) , . . . , α . max max Thus, we are led to the saddle-point problem inf sup L(w, α, λ, z) w , α λ,z with primal variables (w, α)T ∈ lRN × lRM and dual variables (λ, z)T ∈ lRN × lR2M . The inequality constraints (7) are treated by logarithmic barrier terms resulting in the perturbed objective functional β (μ) (w, α) := J(w, α) − μ
M
(i) log(αi − αmin ) + log(α(i) max − αi ) ,
(9)
i=1
where μ > 0, μ → 0, is the barrier parameter. The introduction of logarithmic terms leads to the following parameterized family of equality constrained optimization subproblems (10) inf β (μ) (w, α) w ,α subject to C1 (w, α) = 0 , C2 (w, α) = 0 , C3 (w, α) = 0 .
(11)
The parameterized Lagrangian function corresponding to (10)-(11) is defined L(μ) (w, α, λ) := β (μ) (w, α) +
3 i=1
λTi Ci (w, α) .
(12)
On Shape Optimization of Acoustically Driven Microfluidic Biochips
827
Denote the unknown solution by ψ := ψ(μ) = (w(μ), α(μ), λ(μ))T . The firstorder necessary Karush-Kuhn-Tucker (KKT)-conditions for optimality read F (μ) (ψ) := ∇L(μ) = 0,
(13)
T (μ) (μ) (μ) (μ) (μ) where ∇L(μ) := Lv , Lp , Lρ , Lα , L is the gradient of the Lagrangian λ (12) with respect to the unknown variables. Then, (13) results in
(μ)
3
(μ)
3
(μ)
3
(μ)
i=1 3
Lv = ∇v J + Lp = ∇p J + Lρ = ∇ρ J + Lα = ∇α J + (μ)
∂v (λTi Ci (w, α)) = 0 ,
i=1
∂p (λTi Ci (w, α)) = 0 ,
i=1
∂ρ (λTi Ci (w, α)) = 0 , ∂α (λTi Ci (w, α)) − μ D1−1 e + μ D2−1 e = 0 ,
i=1
T
L = ( C1 (w, α), C2 (w, α), C3 (w, α) ) λ
= 0.
Here, we have denoted by e = (1, . . . , 1)T the vector of all ones and consider the so-called perturbed complementarity conditions D1 z 1 = μ e and D2 z 2 = μ e with (i) (i) diagonal matrices D1 = diag(αi −αmin ) and D2 = diag(αmax −αi ). The sequence Table 1. Convergence results of the path-following method N 1680 3424 6852
Mu Ml iter µ itern 8 8 4 2.1e-5 2 8 8 7 3.5e-5 3 8 8 10 7.4e-5 4
Fig. 2. Domain with capillary barrier and reservoir (initial and optimal shape)
828
S.I. Petrova
of solutions ψ(μ), μ → 0 of the nonlinear system of equations (13) define the central path (referred also as barrier trajectory). For the solution procedure we rely on path-following interior-point method with an inexact Newton solver. For description of the method and recent applications, see [1,3]. The optimal shape is obtained by using M = 16 B´ezier points on both sides of the boundary as design variables and is presented by dotted line in Fig.2. In Table 1 we report the convergence results of the path-following method. Here, N is the number of all degrees of freedom, Mu and Ml are the number of B´ezier points on upper and lower boundary, iter is the number of global iterations with a tolerance tol = 10−5 , μ is the final value of the barrier parameter (starting value μ = 1), and itern is the number of iterations for the Newton solver with a tolerance toln = 10−3 .
Acknowledgements The main research in this paper has been supported by the German NSF Grant HO8779/1 within DFG SPP 1253 and the Bulgarian NSF Grant DO02–115/08. The author is thankful to all collaborators involved in the first project for the fruitful discussions. The work has been also partially granted by the internal Research Funding Program 2010 and FSP AMMO at the University of Applied Sciences Bielefeld, Germany.
References 1. Antil, H., Hoppe, R.H.W., Linsenmann, C.: Optimal design of stationary flow problems by path-following interior-point methods. Control and Cybernetics 37(4) (2008) 2. Bradley, C.E.: Acoustic streaming field structure: The influence of the radiator. J. Acoust. Soc. Am. 100, 1399–1408 (1996) 3. Hoppe, R.H.W., Petrova, S.I.: Path-following methods for shape optimal design of periodic microstructural materials. Optimization Methods and Software 24(2), 205– 218 (2009) 4. Karniadakis, G., Beskok, A., Aluru, N.: Microflows and Nanoflows. Fundamentals and Simulation. Springer, Berlin (2005) 5. K¨ oster, D.: Numerical simulation of acoustic streaming on surface acoustic wavedriven biochips. SIAM J. Sci. Comput. 29(6), 2352–2380 (2007) 6. Mohammadi, B., Pironneau, O.: Applied Shape Optimization for Fluids. Oxford University Press, Oxford (2001) 7. Uchida, T., Suzuki, T., Shiokawa, S.: Investigation of acoustic streaming excited by surface acoustic waves. In: Proc. of the 1995 IEEE Ultrasonics Symposium, pp. 1081–1084. IEEE, Los Alamitos (1995) 8. Ulbrich, S., Ulbrich, M.: Primal-dual interior-point methods for PDE-constrained optimization. Math. Program. 117, 435–485 (2009) 9. Wixforth, A.: Acoustically driven programmable microfluidics for biological and chemical applications. JALA 11(6), 399–405 (2006)
Block Preconditioners for the Incompressible Stokes Problem M. ur Rehman, C. Vuik, and G. Segal Faculty of Electrical Engineering, Mathematics and Computer Science, Delft Institute of Applied Mathematics, Delft University of Technology Mekelweg 4, 2628 CD Delft, The Netherlands
[email protected]
Abstract. This paper discusses the solution of the Stokes problem using block preconditioned iterative methods. Block preconditioners are based on the block factorization of the discretized problem. We focus on two specific types: SIMPLE-type preconditioners and the LSC preconditioner. Both methods use scaling to improve their performance. We test convergence of GCR in combination with these preconditioners both for a constant and a non-constant viscosity Stokes problem.
1
Introduction
Solution of the Stokes problem is a hot topic in the research community nowadays. Discretization of Stokes results in a so-called saddle point type problem. Saddle point problems appear not only in fluid dynamics but also in elasticity problems and some other fields. An iterative method that is developed for one type of saddle point can be applied in other areas as well [1]. Our work is focused on solving the saddle point problem that arises from the finite element discretization of the Stokes problem. In case of stable discretization we can formulate the problem as: f F BT u = . (1) B 0 p g F corresponds to the viscos part, B T is the discretized gradient operator, and B the divergence operator. We define n as the number of velocity unknowns and m the number of pressure unknowns (n ≥ m). The system is symmetric positive semi-indefinite. A Krylov subspace method is employed to solve the incompressible Stokes problem. Convergence of Krylov methods depends on the eigenvalue spectrum. A preconditioning technique is used to improve the convergence. Instead of solving Ax = b, one solves a system P −1 Ax = P −1 b, where P is the preconditioner. A good preconditioner should lead to fast convergence and the system of the form P z = r should be easy to solve. Since system (1) is symmetric indefinite , in the literature [2], preconditioned MINRES [3] is frequently used to solve the Stokes problem. However, one of the requirement of MINRES is that the preconditioner should be symmetric positive definite. This restricts the choice to a I. Lirkov, S. Margenov, and J. Wa´ sniewski (Eds.): LSSC 2009, LNCS 5910, pp. 829–836, 2010. c Springer-Verlag Berlin Heidelberg 2010
830
M. ur Rehman, C. Vuik, and G. Segal
block diagonal preconditioner. Since we are using block triangular preconditioners which are not SPD, it is impossible to use MINRES. Therefore, we use GCR [4,7] to solve the Stokes problem. GCR also allows variable preconditioners so we can use inaccurate solvers for the subsystems. We compare preconditioners that use scaling based on the velocity matrix and the velocity mass matrix. The preconditioners that uses scaling with the velocity mass matrix perform better than the rest of the preconditioners even in the variable viscosity Stokes problem. 1.1
Block Preconditioners
Block preconditioners are based on factorization of formulation (1) and can be written as: As = Lb Db Ub , (2) where As is the system matrix, Lb the lower block, Db the block diagonal and Ub the upper block matrices represented as: I 0 F 0 I Mu−1 B T , Db = and Ub = , (3) Lb = 0 S 0 I Ml−1 B T I where Ml = Mu = F and S = −BF −1 B T is known as the Schur complement matrix. Since the computation of F −1 is not practical for large problem it is necessary to approximate the Schur complement. This is the basis for all block preconditioners. The generalized form of the Schur complement matrix can be written as Sˆ = BM −1 B T , where M −1 is a suitable approximation of F −1 which is cheap to compute. We define two extra block matrices F BT , (4) Lbt = Lb Db = 0 Sˆ
and Ubt = Db Ub =
F 0 . B Sˆ
(5)
We will discuss preconditioners based on these definitions in the next section. 1.2
SIMPLE-Type Preconditioners
In this section, we discuss variants of the SIMPLE preconditioner [8]. The SIMPLE preconditioner can be written as PS = Lbt Ub ,
(6)
where M = Mu = D with D the diagonal of the velocity matrix. The preconditioner consists of one velocity solve and one pressure solve. It appears from our experiments that the number of iterations increases with an increase in problem
Block Preconditioners for the Incompressible Stokes Problem
831
size (number of unknowns). However, this preconditioner shows robust convergence with rough accuracy for the subsystems. A variant of SIMPLE, SIMPLER, uses the same approximation for the Schur complement. To solve PSR Zup = rup , where PSR is the SIMPLER preconditioner and rup the residual, SIMPLER performs the following steps: −1 −1 zup = Ubt Lb rup , Ml = D,
(7)
Zup = zup + Ub−1 L−1 bt rup (rup − As zup ), Mu = D.
(8)
The above formulation shows that SIMPLER consists of two velocity solves and two pressure solves. Usually SIMPLER is used with one velocity solve and two pressure solves, it appears that the second velocity solve in (8) can be skipped without any effect on the convergence. Lemma 1. In the SIMPLER preconditioner/algorithm, both variants (one or two velocity solves) are identical when the subsystems are solved by a direct solution method. For a proof see [9]. In this paper, we will use hSIMPLER instead of SIMPLER because we have observed that for Stokes, hSIMPLER performs better than SIMPLER. In hSIMPLER (PhSR ) we use SIMPLE in the first iteration, whereas SIMPLER is employed afterwards. Another variant of SIMPLE known as MSIMPLER (Modified SIMPLER) uses the same formulation. However, instead of using D, the diagonal of the velocity mass matrix Q is used as scaling matrix. Q is defined by: φi φj dΩ, (9) Qi,j = Ω
where φj and φi are the velocity basis functions. MSIMPLER (PMSR ) uses Ml = M = Mu = diag(Q) as scaling. This preconditioner gives better convergence than other variants of SIMPLE. Convergence of MSIMPLER is almost independent of the inner accuracy. 1.3
LSC Preconditioner
Next we discuss the diagonally scaled LSC preconditioner. A block triangular preconditioner is used with a special approximation of the Schur complement, defined by: PLSC = Lbt , (10) where Sˆ is approximated by a least squares commutator approach [10]. The approximation is based on commutator of the convection diffusion operator (on the velocity and pressure space) with the gradient operator. The Schur complement approximation is given as: Sˆ−1 ≈ −Fp (BM1−1 B T )−1 ,
(11)
832
and
M. ur Rehman, C. Vuik, and G. Segal
Fp = (BM2−1 B T )−1 (BM2−1 F M1−1 B T ),
where M1 and M2 are scaling matrices. With M1 = M2 = diag(Q). The preconditioner shows nice convergence in solving Stokes and Navier-Stokes. Since it also involves two Poisson and one velocity solve, it construction and cost per iteration is comparable to MSIMPLER. In general the convergence of LSC depends both on the mesh size and the Reynolds number. According to [2] sometimes there is no h-dependency. For LSC some results with stabilized elements have been published. In this paper, we use scaling based on the diagonal of the velocity matrix (M1 = M2 = D), which we call LSCD . The same type of scaling is also used in [11]. In the next section, we compare all these preconditioners with two types of scaling for an isoviscos and a varying viscosity problem. For the subsystems solves, we use both ICCG(0) [5] and AMG preconditioned CG from PETSc (Portable, Extensible Toolkit for Scientific Computation) [6].
2
Numerical Experiments
We divide this section into two parts. In the first part, we solve the Stokes problem with constant viscosity and in the second part, a variable viscosity Stokes problem is solved. The Stokes problem is solved up to an accuracy of k 2 10−6 . The iteration is stopped if the linear systems satisfy r b2 ≤ tol, where rk is the residual at the kth step of Krylov subspace method, b is the right hand side, and tol is the desired tolerance value. In all experiments, the Stokes equation discretized with Taylor-Hood Q2-Q1 elements. 2.1
Isoviscos Problem
In this section, we solve the driven cavity Stokes problem by employing preconditioners discussed in this paper. The definition of region and boundary conditions is given on the left-hand side of Fig. 1. The velocity subsystem is solved with AMG preconditioned CG and the pressure subsystem with ICCG(0). One V(1,1) cycle of AMG is used per preconditioning step in CG. In SIMPLE, MSIMPLER and LSC we compute the residual with accuracy 10−1 for velocity and 10−2 for the pressure subsystems. In hSIMPLER and LSCD we keep the inner accuracy 10−6 for both subsystems. This is done since the convergence of both preconditioners depends on the inner accuracy. A general convergence profile of block preconditioners discussed in this paper is given in Fig. 2 that shows that the pairs of LSC and MSIMPLER and LSCD and hSIMPLER have the same convergence pattern. Compared to other preconditioners, SIMPLE shows slow convergence. From Table 1, it is clear that MSIMPLER and LSC outperform other preconditioners. MSIMPLER and LSC show less dependence on the grid size compared
Block Preconditioners for the Incompressible Stokes Problem
833
Fig. 1. Solution of the Stokes problem: (Left) Driven cavity, (Right) Die problem with varying viscosity
3
10
2
LSCD
10
SIMPLE hSIMPLER LSC MSIMPLER
1
GCR Relative Residual
10
0
10
−1
10
−2
10
−3
10
−4
10
−5
10
0
50
100
150 No. of iterations
200
250
300
Fig. 2. Convergence pattern of various block preconditioners Table 1. GCR iterations: Solution of the constant viscosity Stokes problem Grid 32 × 32 64 × 64 128 × 128 256 × 256
PS 84 162 310 705
PhSR PM SR PLSCD PLSCD 25 13 19 11 43 16 26 17 80 21 40 21 161 28 70 27
to other preconditioners in which the number of outer iterations is almost proportional to the number of elements in each direction. If we look at the inner iterations for the velocity and the pressure in Fig. (3), we see that MSIMPLER and LSC are the best choices in terms of consumption of inner and outer iterations. SIMPLE shows robust behavior with small inner accuracy. Per iteration, hSIMPLER and LScD are expensive due to the demand for high inner accuracy. Therefore, even with a large number of outer iterations in SIMPLE, its inner iterations are comparable to hSIMPLER.
834
M. ur Rehman, C. Vuik, and G. Segal
5
4
10
10
SIMPLE SIMPLER MSIMPLER LSC
SIMPLE hSIMPLER MSIMPLER LSC
D
4
10 ICCG(0) iterations
AMG PCG iterations
D
LSC
3
10
2
LSC
3
10
10
2
1
10 32x32
64x64
128x128
256x256
Grid size
10 32x32
64x64
128x128
256x256
Grid Size
Fig. 3. Inner iterations: Solution of the driven cavity Stokes problem: Velocity subsystem(Left), Pressure subsystem (Right)
2.2
Variable Viscosity Problem
The right-hand side of Fig. 1 gives the configuration of the variable viscosity Stokes problem. The viscosity levels are plotted in color. The problem we consider is that of an aluminum round rod, which is heated and pressed through a die. In this way a prescribed shape can be constructed. In this specific example we consider the simple case of a small round rod to be constructed. The viscosity model used describes the viscosity as function of shear stress and temperature. The temperature and shear stress are the highest at the die where the aluminum is forced to flow into a much smaller region. The end rod is cut which is modelled as a free surface. Boundary conditions are prescribed velocity at the inlet and stress free at the outlet. At the container surface (boundary of thick rod), we have a no-slip condition. At the die we have friction, which is modeled as slip condition along the tangential direction and a no-flow condition in normal direction. The round boundary of the small rod satisfies free slip in tangential direction and no-flow in normal direction. We observed that in constant viscosity problem, MSIMPLER and LSC show better convergence than the other preconditioners discussed. The main improving factor in these two preconditioners is scaling with the velocity mass matrix. Here, we solve the variable viscosity Stokes problem and we expect that since the velocity mass matrix does not contain any information about the viscosity variations, other preconditioners may perform better than these two. From Table 2, it seems that even for the variable viscosity problem, MSIMPLER and LSC perform much better than the other preconditioners. In this problem, instead of viscosity variation, grid size plays a dominant role. Although LSCD and SIMPLE are scaled with the velocity matrix, which depends on the viscosity, their performance is poor compared to MSIMPLER and LSC. For this problem hSIMPLER is not reported since it does not show convergence even with high inner
Block Preconditioners for the Incompressible Stokes Problem
835
Table 2. GCR iterations: Solution of the variable viscosity Stokes problem using block preconditioned GCR Unknowns PS PM SR PLSCD PLSC 33k 257 14 52 15 66k 419 16 64 17 110k 491 17 92 15 195k 706 17 110 19
Fig. 4. Inner iterations: Solution of the variable viscosity Stokes problem: Velocity subsystem (left), Pressure subsystem (right). (Both panels plot the number of ICCG(0) iterations against the number of unknowns, 33k up to 195k, for SIMPLE, MSIMPLER, LSCD, and LSC.)
The inner accuracies of the preconditioners are the same as those used for the constant viscosity problem, except for LSC, whose velocity tolerance is changed to 10^{-2} because it stagnates for 10^{-1}. The numbers of iterations consumed by the subsystems, shown in Fig. 4, follow the same pattern as in the constant viscosity Stokes problem.
3 Conclusions
In this paper, we solve the constant and the variable viscosity Stokes problem using block preconditioned GCR. SIMPLE-type and LSC-type preconditioners are used, each with two different scalings. Table 3 summarizes the dependence of the preconditioners on the mesh size and on the inner accuracy.

Table 3. Dependence of various block preconditioners on mesh size and subsystem accuracies

Dependence on     P_S   P_hSR   P_MSR   P_LSCD   P_LSC
Mesh size         Yes   Yes     Mild    Yes      Mild
Inner accuracy    No    Yes     No      Yes      Small

It is observed that MSIMPLER and LSC show
better performance than the other preconditioners for both types of problems. Both use a scaling with the velocity mass matrix, which contains no information about the variation of the viscosity. In future research, both preconditioners will be tested on problems with large viscosity contrasts.
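As a point of reference for what "block preconditioned GCR" amounts to, a minimal outer loop can be sketched as follows. This is an illustration under the same assumptions as the earlier sketch: apply_K and apply_P are placeholder names for the saddle point matrix-vector product and for one (inexact) application of any of the block preconditioners, and no claim is made that this matches the authors' code. GCR is a natural choice here because it tolerates a preconditioner that changes slightly from step to step, as happens when the inner solves are truncated.

```python
import numpy as np

def gcr(apply_K, apply_P, b, tol=1e-6, maxiter=300):
    """Minimal preconditioned GCR for K x = b.
    apply_K(v): product with the saddle point matrix [[F, B^T], [B, 0]].
    apply_P(r): one (inexact) application of a block preconditioner, e.g. the
                SIMPLE-type sketch above with velocity and pressure parts
                concatenated into a single vector."""
    x = np.zeros_like(b, dtype=float)
    r = b - apply_K(x)
    b_norm = np.linalg.norm(b) or 1.0
    cs, zs = [], []                      # stored K-images and search directions
    its = 0
    for k in range(maxiter):
        if np.linalg.norm(r) <= tol * b_norm:
            break
        its = k + 1
        z = apply_P(r)                   # the inner iterations are spent here
        c = apply_K(z)
        for ci, zi in zip(cs, zs):       # orthogonalize c against earlier c_i
            beta = ci @ c
            c = c - beta * ci
            z = z - beta * zi
        nrm = np.linalg.norm(c)
        c, z = c / nrm, z / nrm
        cs.append(c)
        zs.append(z)
        gamma = c @ r                    # residual-minimizing step along z
        x = x + gamma * z
        r = r - gamma * c
    return x, its
```

With this split, the counts in Tables 1 and 2 correspond to the outer loop above, while the inner iterations reported in Figs. 3 and 4 are the work hidden inside apply_P.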
Author Index
Adler, J. 1 Alba, Enrique 318, 334 Andreev, Andrey B. 687, 695 Angelova, Ivanka T. 703 Anguelov, Roumen 231, 554 Apreutesei, Narcisa 165 Aprovitola, Andrea 67 Asenov, Asen 369 Ashok, A. 451 Atanassov, Emanouil 459, 507 Atanassov, Krassimir 173, 326 Axelsson, Owe 76
Bacu, Victor 499 Bang, Børre 719, 730 Bartholy, Judit 613 Baumgartner, O. 411 Bencheva, Gergana 711 Bochev, Pavel 637, 645 Bogatchev, A. 538 Bonfigli, Giuseppe 124 Bouman, Mick 653 Boyadjiev, Todor 765 Boyanova, Petia 84 Brzobohatý, Tomáš 92 Bultmann, Roswitha 239 Candiani, Gabriele 157 Cárdenas-Montes, Miguel 483 Carnevale, Claudio 157 Castejón-Magaña, Francisco 483 Castro, Francesc 379 Cervenka, J. 443 Chien, Li-Hsin 403 Christov, Ivan 765 Chryssoverghi, I. 247 Coletsos, J. 247 Coma, Jordi 395 Corominas, Albert 302 Crăciun, Ciprian 310 D'Ambra, Pasqua 67 Dechevsky, Lubomir T. 719, 730, 738, 756 Dimitriu, Gabriel 165, 621
Dimova, Stefka 765 Dimov, Ivan T. 197, 387, 419 di Serafino, Daniela 67 Dóbé, Péter 467 Dobrinkova, Nina 173 Donchev, Tzanko 256 Do-Quang, Minh 108 Dostál, Zdeněk 92 Drozdowicz, Michal 475 Durillo, Juan J. 334 Ebel, Adolf 214 Efendiev, Y. 148 Elbern, Hendrik 214 Essien, Eddy 596 Faragó, István 54, 563 Feichtinger, Gustav 239 Felgenhauer, Ursula 264 Fidanova, Stefka 173, 318, 326 Filippone, Salvatore 67 Filippova, Tatiana F. 272 Friese, Elmar 214 Gadzhev, G. 180, 223 Ganev, K. 180, 223, 531, 538 Ganzha, Maria 475 García-Villoria, Alberto 302 Georgieva, Irina 747 Georgieva, Rayna 197, 387 Georgiev, Ivan 100 Georgiev, Krassimir 188 Gerritsma, Marc 653, 662 Gómez-Iglesias, Antonio 483 Goncharova, Elena 280 González, Rosa M. 206 Goodnick, S.M. 427, 451 Goranova, Radoslava D. 491 Gorb, Y. 148 Gorgan, Dorian 499 Goris, Nadine 214 Grasser, T. 443 Grip, Niklas 738 Gundersen, Joakim 719, 730, 738, 756 Gunzburger, Max 637 Gusev, Mikhail I. 286
Hannukainen, Antti 571 Hartin, O. 451 Havasi, Ágnes 54 Hofreither, Clemens 132 Hunyady, Adrienn 613 Iliev, Oleg P. 14 Ivanovska, Sofiya 459, 475 Jenny, Patrick 124 Jordanov, G. 180, 223, 531, 538 Jurková, Kateřina 771 Kalogeropoulos, Grigoris I. 779 Kápolnai, Richárd 467 Karageorgos, Athanasios D. 779 Karaivanova, Aneta 459 Karátson, János 580 Katragkou, E. 531, 538 Kokkinis, B. 247 Koleva, Miglena N. 787 Korotov, Sergey 571 Kosina, H. 443 Kosturski, Nikola 140 Koylazov, V.N. 419 Kozubek, Tomáš 92 Krastanov, Mikhail I. 294 Kraus, Johannes 100 Kristoffersen, Arnt R. 719, 756 Křížek, Michal 571 Lakat, Máté 515, 546 Lakså, Arne 719, 796 Langer, Ulrich 132 Lazarov, Raytcho D. 14 Légrády, D. 435 Lirkov, Ivan 475 Luna, Francisco 334 Magdics, M. 435 Manteuffel, T. 1 Margenov, Svetozar 84, 100, 140, 805 Marinova, Rossitza S. 588, 596 Marinov, Tchavdar T. 588 Markakis, K. 531, 538 Markopoulos, Alex 92 Markov, S. 231 Martínez, Roel 395 McCormick, S. 1 Melas, D. 531, 538
Mészáros, Csaba 813 Mihon, Danut 499 Millar, Campbell 369 Miloshev, N. 180, 223, 531, 538 Minani, F. 231 Mincsovics, Miklos E. 604 Misev, Anastas 507 Mitin, Vladimir 403 Moisil, Ioana 343 Molina, Guillermo 318 Morales-Ramos, Enrique 483 Morant, José L. 206 Nebro, Antonio J. 334 Nedjalkov, M. 411 Németh, Dénes 515, 546 Neytcheva, Maya 108 Nicoară, Monica 310 Nieradzik, Lars Peter 214 Nikolova, Evdokia 352 Nolting, J. 1 Olejnik, Richard 475 Olteanu, Alexandru-Liviu 343 Ostromsky, Tzvetan 197 Ovseevich, Alexander 280 Palha, Artur 653, 662 Pantelous, Athanasios A. 779 Paprzycki, Marcin 475 Pastor, Rafael 302 Penzov, A.A. 419, 435 Pérez, Juan L. 206 Petrova, Svetozara I. 821 Pieczka, Ildikó 613 Pisoni, Enrico 157 Pongrácz, Rita 613 Popov, P. 148 Poupkou, A. 531, 538 Prieto, F. 206 Prodanova, M. 180, 223, 531, 538 Racheva, Milena R. 687, 695 Raleva, K. 427 Reid, Dave 369 Ridzal, Denis 645 Rodila, Denisa 499 Roy, Gareth 369 Roy, Scott 369 Ruge, J. 1
Sabelfeld, Karl 26 San José, Roberto 206 Sbert, Mateu 379 Schwaha, P. 411 Segal, G. 829 Selberherr, S. 411, 443 Senobari, Mehrdad 475 Sergeev, Andrei 403 Shterev, Kiril S. 523 Simian, Corina 361 Simian, Dana 361 Sinnott, Richard 369 Sipos, András 467 Spiridonov, V. 538 Spiteri, Raymond 596 Starke, Gerhard 671 Ştefănescu, Răzvan 165, 621 Stefanov, Stefan K. 523 Stefanut, Teodor 499 Stewart, Gordon 369 Stewart, Graeme 369 Stoica, Florin 361 Strunk, Achim 214 Sverdlov, V. 443 Syrakov, D. 180, 223, 531, 538 Szabó, Tamás 629 Szeberényi, Imre 467, 515 Szirmay-Kalos, L. 419, 435 Tang, L. 1 Telegin, Pavel 475 Todorova, A. 180, 223 Tomar, Satyendra 132 Török, János 546 Tóth, B. 435 Tragler, Gernot 239 Tröltzsch, Fredi 40 Tůma, Miroslav 771 Uluchev, Rumen 747 ur Rehman, M. 829 Vagidov, Nizami 403 Vasicek, M. 443 Vasileska, D. 427, 451 Vega-Rodríguez, Miguel A. 483 Veliov, Vladimir M. 294 Volta, Marialuisa 157 Vondrák, Vít 92 Vuik, C. 829 Vulkov, Lubin G. 703, 787 Vutov, Yavor 805 Willems, Joerg 14 Xin, He 108 Yang, Huidong 116 Young, Joseph 679
Zaharie, Daniela 310 Zlatev, Zahari 54, 188, 197 Zulehner, Walter 116