Lecture Notes in Earth Sciences Editors: S. Bhattacharji, Brooklyn G. M. Friedman, Brooklyn and Troy H. J. Neugebauer, Bonn A. Seilacher, Tuebingen and Yale
90
Roland Klees Roger Haagmans (Eds.)
Wavelets in the Geosciences With 102 Figures and 3 Tables
Springer
Editors Prof. Dr. Roland Klees Roger Haagmans Delft University of Technology Delft Institute of Earth-Oriented Space Research (DEOS) Thijsseweg 11, 2629 JA Delft, The Netherlands
From January 1st, 2000 on, Mr. Haagmans will have the following address: Roger Haagmans, Agricultural University of Norway, Department of Mapping Sciences, P.O. Box 5034, 1432 Ås, Norway. Cataloging-in-Publication data applied for. Die Deutsche Bibliothek - CIP-Einheitsaufnahme: Wavelets in the geosciences: with 3 tables / Roland Klees; Roger Haagmans (ed.). - Berlin; Heidelberg; New York; Barcelona; Hong Kong; London; Milan; Paris; Singapore; Tokyo: Springer, 2000 (Lecture notes in earth sciences; 90) ISBN 3-540-66951-5
"For all Lecture Notes in Earth Sciences published till now please see final pages of the book" ISSN 0930-0317 ISBN 3-540-66951-5 Springer-Verlag Berlin Heidelberg New York This work is subject to copyright. All rights are reserved, whether the whole or part of the material is concerned, specifically the rights of translation, reprinting, re-use of illustrations, recitation, broadcasting, reproduction on microfilms or in any other way, and storage in data banks. Duplication of this publication or parts thereof is permitted only under the provisions of the German Copyright Law of September 9, 1965, in its current version, and permission for use must always be obtained from Springer-Verlag. Violations are liable for prosecution under the German Copyright Law. © Springer-Verlag Berlin Heidelberg 2000 Printed in Germany The use of general descriptive names, registered names, trademarks, etc. in this publication does not imply, even in the absence of a specific statement, that such names are exempt from the relevant protective laws and regulations and therefore free for general use. Typesetting: Camera ready by author Printed on acid-free paper SPIN: 10723309
32/3142-543210
Preface
This book is a collection of the Lecture Notes of the School of Wavelets in the Geosciences held in Delft (The Netherlands) from 4-9 October 1998. The objective of the school was to provide the necessary information for understanding the potential and limitations of the application of wavelets in the geosciences. Three lectures were given by outstanding scientists in the field of wavelet theory and applications: Dr. Matthias Holschneider, Laboratoire de Géomagnétisme, Institut de Physique du Globe de Paris, France; Dr. Wim Sweldens, Mathematical Sciences Research Centre, Lucent Technologies, Bell Laboratories, Murray Hill, NJ, U.S.A.; Prof. Dr. Willi Freeden, Geomathematics Group, Department of Mathematics, University of Kaiserslautern, Germany. The lectures were supplemented by intensive computer exercises. The school was very successful thanks to the engagement and excellent presentations of the teachers, the very illustrative and instructive computer exercises, the lively interest and participation of the participants, and the many fruitful discussions we had. I therefore want to thank the teachers for the excellent job they did, for providing typewritten Lecture Notes, and for their excellent co-operation. Thanks also to the students, who were actively engaged in the lectures and exercises during the whole week. The organisation of such a school is not possible without the support of many others. First of all I want to thank Prof. Dr. Willi Freeden and his co-workers from the Geomathematics Group, Department of Mathematics, University of Kaiserslautern, who co-organised the school. They made a considerable contribution to its success.
I also want to express my thanks for the support of Michael Bayer, Martin van Gelderen, Roger Haagmans and my secretary, Wil Coops-Luyten, who were heavily involved in the organisation of the school and in all practical aspects, including the wonderful social program which gave the participants some flavour of Dutch culture and the beautiful city of Delft. In the organisational work we were supported by the entire staff of the Section Physical, Geometric and Space Geodesy (FMR). Last but not least I want to thank Martin van Gelderen and Roger Haagmans for preparing the introduction. Essential to the success of the school was the support we received from various organisations: the International Association of Geodesy (IAG), the Netherlands Geodetic Commission (NCG), the Department of Geodesy at Delft University of Technology, and the Delft Institute for Earth-Oriented Space Research (DEOS). Their support is gratefully acknowledged.
Delft, October 1999
Roland Klees
Table of Contents
Introduction ............................................................ XIII

Introduction to Continuous Wavelet Analysis ................................ 1
Matthias Holschneider (Institut de Physique du Globe de Paris)

Building Your Own Wavelets at Home ........................................ 72
Wim Sweldens (Bell Laboratories) and Peter Schröder (California Institute of Technology)

Factoring Wavelet Transforms into Lifting Steps .......................... 131
Ingrid Daubechies (Princeton University) and Wim Sweldens (Bell Laboratories)

Spherical Wavelets: Efficiently Representing Functions on a Sphere ....... 158
Peter Schröder (California Institute of Technology) and Wim Sweldens (Bell Laboratories)

Least-squares Geopotential Approximation by Windowed Fourier Transform
and Wavelet Transform .................................................... 189
Willi Freeden and Volker Michel (University of Kaiserslautern)
Organization
Organization Committee Program: Roland Klees (Delft University of Technology) and Willi Freeden (University of Kaiserslautern) Local organization: Michael Bayer (University of Kaiserslautern), Wil Coops, Martin van Gelderen, Roger Haagmans, René Reudink, Huiping Xu (Delft University of Technology)
Sponsors International Association of Geodesy Netherlands Geodetic Commission Department of Geodesy, Delft University of Technology Delft Institute for Earth-Oriented Space Research
Lecturers Matthias Holschneider, CPT, CNRS Luminy, Case 907, F-13288 Marseille, France, and Institut de Physique du Globe de Paris, Laboratoire de Géomagnétisme, 4 place Jussieu, F-75252 Paris, France, [email protected] Wim Sweldens, Bell Laboratories, Lucent Technologies, Murray Hill NJ 07974, U.S.A., [email protected] Willi Freeden, University of Kaiserslautern, Laboratory of Technomathematics, Geomathematics Group, P.O. Box 3049, 67653 Kaiserslautern, Germany, [email protected]
Other contributors Peter Schröder, Department of Computer Science, California Institute of Technology, 1200 E. California Blvd., MS 256-80, Pasadena, CA 91125, U.S.A., [email protected] Ingrid Daubechies, Princeton University, Department of Mathematics, Fine Hall, Washington Rd., Princeton, NJ 08544-1000, U.S.A., [email protected] Volker Michel, University of Kaiserslautern, Laboratory of Technomathematics, Geomathematics Group, P.O. Box 3049, 67653 Kaiserslautern, Germany, [email protected]
Martin van Gelderen, Roger Haagmans 1 and Roland Klees, Delft Institute for Earth-Oriented Space Research, Delft University of Technology, Thijsseweg 11, 2629 JA Delft, The Netherlands, [email protected], [email protected], [email protected]
Participants C.M.C. Antunes, Faculty of Science, University of Lisbon, Rua da Escola Politécnica 58, 1250 Lisbon, Portugal, (351)1 392 1855,
[email protected] L. Barens, Heriot-Watt University, Dept. of Petroleum Engineering, Riccarton, Edinburgh EH14 4AS, United Kingdom, (44)131 451 3691,
[email protected] G. Beyer, TU Dresden, Institut ffir Planetare Geod~sie, George-B~hr-Str. 1, D-01062 Dresden, Germany, (49)35 1463 4483, beyerQipg.geo.tu-dresden.de M. Boccolari, Universita degli Studi di Modena, Dipart. Di Scienze dell'Ingegeneria, Sez. Osservatorio Geofisico, Via Campi 213/A, 1-4100 Modena, Italy, (39)59 370703, MaurobQrainbow.unimo.it F. Boschetti, CSIRO, Division of Exploration and Mining, 39 Fairway, Nedlands WA 6009, Australia, (61)8 9389 8421,
[email protected] J. Bouman, DEOS, Delft University of Technology, Thijsseweg 11, NL-2629 JA Delft, The Netherlands, (31)15 278 2587,
[email protected] J. Branlund, University of Minnesota, Dept. of Geology and Geophysics, 310 Pillsbury Drive SE, Minneapolis, MN 55455, U.S.A., (1)612 624 9318,
[email protected] A. de Bruijne, DEOS, Delft University of Technology, Thijsseweg 11, 2629 JA Delft, The Netherlands, (31)15 278 2501,
[email protected] F. Chane-Ming, Laboratoire de Physique de l'Atmosphere, Universit~ de la R@union, 15 Avenue Ren@Cassia, F-97715 St. Denis cedex 9, France, (33)262 938239, fchane'Quniv-reunion.fr I. Charpentier, Projet Idopt, Inria Rhone Alpes, LMC, Domaine Universitaire, P.O. Box 53, F-38041 Grenoble cedex 9, France, (33)4 76 63 57 14, IsabeUe.CharpentierQimng.fr G.R.T. Cooper, Geophysics Department, University of the Watersrand, Private Bag 3, Johannesburg, South Africa, (27)11 716 3159,
[email protected] D. Di Manro, National Institute of Geophysics, Via di Vigna Murata 605, 1-00143 Rome, Italy, (39)6 51860328, dimauroQingrm.it A. Duchkov, Novosibirsk State University, Morskoy pr. 36-16, Novosibirsk 630090, Russia, (7)3832 34 30 35,
[email protected] L. Duval, Institut Francais du Petrole, Avenue de Bois-Preau 1-4, F-92852 Rueil Malmaison Cedex, France, (33)1 47 52 61 02, lanrent.duvalQifp.fr L.I. Fern£ndez, Facultad de Ciencias Astron6micas y Geofisicas, Universidad Nacional de La Plata, Paseo del Bosue S/N, 1900 La Plata, Buenos Aires, Argentina, (54)21 83 8810, lanrafQfcaglp.unlp.edu.ar 1 Now at: Agricultural University of Norway, Dept. of Mapping Sciences, P.O. Box 5034, N-1432/~s, Norway
C. Fosu, Inst. of Geodesy & Navigation, University FAF Munich, D-85577 Neubiberg, Germany, (49)89 6004 3382,
[email protected] G. Franssens, Belgian Institute for Space Acromomy (BISA), Ringlaan 3, B-1180 Brussels, Belgium, (32)2373 0370,
[email protected] M. Giering, C.D.S., Mars Inc., 100 International Drive, Mt. Olive, NJ 07828, U.S.A., (1)973 691 3719,
[email protected] A. Gilbert, University of Stuttgart, Geodetic Institute, Geschwister-Scholl-Str. 24 D, D-70174 Stuttgart, Germany, (49)7 11 121 4086,
[email protected] T. Harte, University of Cambridge, The Computer Laboratory, Pembroke Street, Cambridge CB2 2GQ, United Kingdom, (44)1223 33 5089, tphl001GCL.cam.ac.uk A. Hermann, DEOS, Delft University of Technology, Thijsseweg 11, 2629 JA Delft, The Netherlands, (31)15 278 4169,
[email protected] J. Krynski, Institute of Geodesy and Cartography, Jasna 2/4, P-00950 Warsaw, Poland, (48)22 827 0328,
[email protected] R. Kucera, Dept. Of Mathematics, VSB-Technical University of Ostrava, Str. 17, Pistopadu 15, 70833 Ostrava Poruba, Czech Republic, (42)69 699 126,
[email protected] J.N. Larsen, Geophysical Department, University of Copenhagen, Vibekegade 27, 2.th, DK-2100 Kopenhagen, Danmark, (45)3927 0105, nykjaer @math.ku.dknykjaer @math.ku.dk D.I.M. Leitner, Institute of Meteorology and Geophysics, Heinrichstrasse 55, A-8010 Graz, Austria, (43)316 382446, milQbimgs6.kfunigraz.ac.at M. Lundhavg, Nansen Environmental and Remote Sensing Center, Eduard Griegsv. 3A, N-5037 Solheimsviken, Norway, (47)55 29 72 88,
[email protected] Mehmet Emin Ayhan, General Command of Mapping, Geodesy Department, TR-06100 Cebeci, Ankara, Turkey, (90)312 363 8550/2265,
[email protected] R. Montelli, UMR - Geosciences Azur, 250 rue Albert Einstein Bat. 4, F-06560 Valbonne, France, (33)4 9294 2605,
[email protected] G. Olsen, Dept. of Mathematical Sciences, Agricultural University of Norway, P.O. Box 5034, N-1432 As, Norway, (47)64948863, gunnar.olsenQimf.nlh.no, P. Paun, Projet Idopt, Inria Rhone Alpes, LMC, Domaine Universitaire, P.O. Box 53, F-38041 Grenoble cedex 9, France, (33)4 76 63 57 14,
[email protected] G. Plank, Institute of Theoretical Geodesy, Dept. of Math. and Geoinf., TU Graz, Stein£cherstr. 36/18, A-8052 Graz, Austria, (43)316 583 662,
[email protected] R. Primiceri, Dipartimento di Scienze dei Materiali, Universita'degli Studi de Lecci, Via Arnesano, C.A.P. 73100 Lecce, Italy, (39)832 320 549,
[email protected]
I. Revhaug, The Agricultural University of Norway, Dept. of Mapping Sciences, P.O. Box 5034, N-1432 Ås, Norway, (47)649 48 841,
[email protected], F. Sacerdote, Dipartimento di Ingegneria Civile, Universitk di Firenze, Via di S. Marta 3, 1-50139 Firenze, Italy, (39)55 479 6220,
[email protected] G. Schug, Geomathematics group, University of Kaiserslautern, P.O. Box 3049, D-67653 Kaiserslautern, BRD, (49)631-205-3867,
[email protected] F.J. Simons, Dept. of Earth, Atmospheric and Planetary Sciences, Massachusets Institute of Technology, Room 54-517A, 77 Massachusetts Avenue, Cambridge, MA 02139-4307, U.S.A., (1)617 253 0741,
[email protected] J. Spetzler, Universiteit Utrecht, Faculteit Aardwetenschappen, Budapestlaan 4, 3584 CD UTRECHT, The Netherlands, (31)30 253 51 35, spetzlerQgeo.uu.nl G. Strykowski, National Survey and Cadastre, Rentemestervej 8, DK-2400 Copenhagen NV, Danmark, (45)35 87 5316,
[email protected] J. Vazquez, JPL/Caltech, 300/323 4800 Oak Grove Drive, Pasadena, CA 91109, U.S.A., (1)818 354 6980, JVQPacific.JPL.NASA.GOV H. Wen, TU-Graz, Steyrergasse 30, A-8010 Graz, Austria, (43)316 873 63 55,
[email protected] P.M. Zeeuw, Centrum voor Wiskunde en Informatica, Postbus 94079, 1090 GB Amsterdam, The Netherlands, (31)20 592 42 09,
[email protected]
Introduction
Background
The international school on Wavelets in the Geosciences fits within the tradition of the summer schools of the International Association of Geodesy, which have existed for many years. The topics are generally carefully selected and cover an interesting field of current and future interest. In this respect wavelets fulfil such expectations: they offer challenging mathematical opportunities and many fields of application. The range of applications is so wide that the focus was limited to the geosciences. In this way participants with different backgrounds were offered the opportunity to meet and exchange ideas with each other and with an excellent team of lecturers, consisting of Matthias Holschneider, Wim Sweldens and Willi Freeden. The school was organized by Prof. R. Klees of the Delft University of Technology in the Netherlands and by Prof. W. Freeden of the University of Kaiserslautern in Germany. The school was hosted by the Department of Geodesy of the Faculty of Civil Engineering and Geosciences of the Delft University of Technology in Delft, the Netherlands. It was held from October 4th to October 9th 1998. The number of participants was limited to 41 due to the restricted number of computers available for the exercises. The group was diverse in several respects: 5 continents and 19 countries were represented. The majority of participants were Ph.D. students or members of the academic staff, and the group was completed by a few post-docs, M.Sc. students and non-academics. Around 29% had a background in geodesy, 48% in geophysics, 19% in mathematics and the rest in other disciplines. So, from the point of view of bringing people from different backgrounds and different nationalities together in a stimulating scientific environment, which is one of the major goals of the "summer" schools, it was a success.
The basic objective of the school was to provide the necessary information to understand the potential and limitations of the application of wavelets in the geosciences. This includes:
- the mathematical representation in one and more dimensions, e.g. on the sphere
- the properties as compared to Fourier techniques
- the signal representation and analysis ability
- the use of operators in terms of wavelets
- gaining experience with wavelets using examples from the geosciences in computer exercises

Lectures and Notes

The school lasted six days and contained three major subjects. Every subject was covered in two days' time. All topics were supported by practical exercises on the
computer with examples from geodynamics, topography representation, gravity field modeling, etc. The program consisted of three parts that are described in these lecture notes. The idea behind the overall program is to present, as a first step, the current status of continuous wavelet analysis for data-analysis applications in the geosciences. The second part has a more specific focus on discrete analysis, with special emphasis on second generation wavelets. This allows one to loosen the link with Fourier analysis, and enables the extension to complex surfaces and fast, efficient data approximation. The third part presents the current status of global data analysis on the sphere, with the potential field as a special field of application. This setup makes it possible for participants and readers to get a view on wavelets beyond their own field of interest, which may lead to curiosity, discussion and possibly new developments.
Part 1
The first part was lectured and written by Matthias Holschneider. The basic facts about harmonic analysis through Fourier transforms in one dimension are recalled. In particular, several sampling theorems are presented linking the Fourier transform over the real line to the Fourier transform over the circle and to the so-called FFT (Fourier transform over the discretized circle). This is important in applications since only the FFT is implementable on computers, whereas the signals one treats are, at least in spirit, functions over the real line. Many "errors" in data processing come from an unclear distinction between the various transforms. To remedy certain shortcomings of standard Fourier techniques, the wavelet transform is introduced as a possible time-frequency technique. Important general features of the wavelet transform are discussed, such as energy conservation, inversion formulas, covariance properties and reproducing kernels. This is important for being able to read wavelet transforms of signals. Wavelet transforms of functions on the circle (periodic functions) are treated as well. The possibility of down-sampling wavelet coefficients is linked to the existence of frames and orthonormal bases of wavelets. Two algorithms for the implementation of the continuous wavelet transform are presented. The first is based on the computation of convolutions using the FFT with a suitable boundary treatment. The second is an algorithm based on a dyadic interpolation scheme. Several applications in the fields of non-stationary filtering, singularity processing, and detection of sources of potential fields are evaluated and discussed. First, by manipulating and modifying wavelet coefficients rather than Fourier coefficients, it is shown how to construct non-stationary filters. In examples this technique is applied to the decomposition of the polar motion of the earth into several components. The underlying algebra of wavelet Toeplitz operators is discussed.
In particular the user is made aware of problems of non-commutativity. Wavelet transforms of noise and de-noising methods are treated, based on adaptive filter design in wavelet space (thresholding). Secondly, it is shown how local singularities may be detected and processed through wavelet techniques, in particular how the scaling behavior of the wavelet coefficients reveals the local singularity exponent. Applications of this to fractal analysis of data (extraction of generalized fractal wavelet dimensions) are introduced. In particular the wavelet correlation dimension is important for the interpretation of scattering on fractal objects. In the case of isolated singularities the wavelet technique is applied to the detection and analysis of geomagnetic jerks in the data of secular variation of the magnetic field. Thirdly, a special family of wavelets based on the Poisson semigroup is used to detect hidden singularities, corresponding to remote sources of potential fields. Only the remote field is available for the analysis, but cross-scale relations of the wavelet coefficients may be used to localize and characterize sources of the field inhomogeneity.
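The FFT-based convolution algorithm mentioned above can be sketched in a few lines. The following is our own illustrative implementation, not the code used at the school: it assumes a Morlet wavelet, periodic boundary treatment, and NumPy's FFT.

```python
import numpy as np

def cwt_fft(signal, scales, omega0=6.0):
    """Continuous wavelet transform evaluated scale by scale as an
    FFT-based convolution: multiply the signal spectrum by the
    spectrum of the dilated (analytic Morlet) wavelet."""
    n = len(signal)
    s_hat = np.fft.fft(signal)
    omega = 2 * np.pi * np.fft.fftfreq(n)        # angular frequency grid
    coeffs = np.empty((len(scales), n), dtype=complex)
    for i, a in enumerate(scales):
        # Fourier transform of the Morlet wavelet, dilated by scale a
        psi_hat = np.pi ** -0.25 * np.exp(-0.5 * (a * omega - omega0) ** 2)
        psi_hat *= (omega > 0)                   # analytic: positive frequencies only
        coeffs[i] = np.fft.ifft(s_hat * np.conj(psi_hat))
    return coeffs

# For a pure tone the coefficient modulus peaks at the matching scale,
# a ~ omega0 / (angular frequency in radians per sample).
t = np.linspace(0, 1, 512, endpoint=False)
sig = np.sin(2 * np.pi * 40 * t)                 # 40 cycles over 512 samples
scales = np.arange(1, 64)
W = cwt_fft(sig, scales)
best = scales[np.argmax(np.abs(W).sum(axis=1))]  # ~ 6 * 512 / (2*pi*40)
```

Reading off the scale of maximal modulus is exactly the kind of "reading the wavelet transform" the energy-conservation and covariance properties above make possible.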
Part 2
The second part was lectured by Wim Sweldens and the lecture notes are co-authored by Peter Schröder and Ingrid Daubechies. Starting from historical developments they try to answer the question: why multi-resolution? To illustrate this, the contrast with Fourier methods is explained, and the connection with filter banks is shown. This is illustrated with a simple example: Haar wavelets. At this point the focus is on the need for spatial constructions, as opposed to Fourier-based constructions, revisiting Haar. Here the essentials of lifting are introduced: predict and update. The next step goes beyond Haar, where linear and higher order polynomial-based wavelets are considered. Finally, implementation aspects of lifting, such as speed-ups, in-place memory calculations, and integer-to-integer transforms, get special attention. The lecture proceeds with a look at the need for second generation wavelets. Aspects of importance are wavelets for boundaries, domains, irregular samples, weighted measures, and manifolds. Here the use of lifting to construct second generation wavelets is explained. In order to construct wavelets on the sphere one needs to build triangular grids on a sphere: the geodesic sphere construction. The notion of multi-resolution on a sphere also needs to be defined. Then one can focus on constructing filter operators on a sphere. Again lifting is used, now to build spherical wavelets. This is where one arrives at the fast spherical wavelet transform. An application from spherical image processing is discussed. The next discussion points at techniques for triangulating general 2-manifolds. For this purpose wavelets defined on 2-manifolds are required, and multi-resolution transforms on manifolds are introduced. This is supported by examples.
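The predict/update mechanism can be written down in a few lines for the Haar case. This is our own sketch of the idea, not code from the lecture notes: split into evens and odds, predict each odd sample by its even neighbour (the detail), then update the evens so that the pairwise average is preserved (the coarse signal).

```python
import numpy as np

def haar_lift(x):
    """One level of the Haar transform written as lifting steps."""
    even, odd = x[0::2].astype(float), x[1::2].astype(float)
    d = odd - even            # predict: odd sample predicted by its even neighbour
    s = even + d / 2          # update: the evens become the pairwise averages
    return s, d

def haar_unlift(s, d):
    """Invert by undoing the lifting steps in reverse order."""
    even = s - d / 2
    odd = d + even
    x = np.empty(2 * len(s))
    x[0::2], x[1::2] = even, odd
    return x

x = np.array([5.0, 7.0, 2.0, 4.0])
s, d = haar_lift(x)           # s = pairwise means, d = pairwise differences
y = haar_unlift(s, d)         # perfect reconstruction
```

Because each step only ever adds a function of the *other* channel, inversion is trivial: subtract in reverse order. This is the property that carries over unchanged to second generation wavelets on irregular samples and manifolds.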
Part 3
The third and last part of the school was lectured by Willi Freeden and the lecture notes are co-authored by Volker Michel, with a focus on harmonic wavelets on the sphere. For the determination of the earth's gravity field many types of observations are nowadays available, such as terrestrial gravimetry, airborne gravimetry, satellite-to-satellite tracking, and satellite gradiometry. The mathematical connection between these observables on the one hand and the gravity field and the shape of the earth on the other is called the integrated concept. In this lecture, windowed Fourier transforms and harmonic wavelets on the sphere are introduced, for example for approximating the gravitational part of the gravity field progressively better and better. The classical outer harmonic models of physical geodesy, concerned with the earth's gravity field, triggered the development of the integrated concept in terms of bounded linear functionals on reproducing Hilbert (Sobolev) spaces. It is important to deal with completeness properties and closure theorems for dense systems of linear (observational) functionals acting on outer harmonics and reproducing kernel functions. The uncertainty principle for functions harmonic outside a sphere is treated. This leads us to the terminology of 'space-frequency localization'. It is shown that the uncertainty principle is a restriction in gravitational field determination which tells us that sharp localization in space and in frequency are mutually exclusive. Finally, a space-frequency investigation is carried out for the most important trial functions used in physical geodesy. Here two types of scaling functions can be distinguished, viz. band-limited and non-band-limited. It is illustrated that in all cases the constituting elements of a multilevel approximation by convolution consist of 'dilation' and 'shifting' of a mother kernel, i.e. a potential with vanishing zero order moment. 
Next, the concept of multi-resolution analysis for Sobolev spaces of harmonic functions is introduced, which is especially relevant for geophysical purposes. Two possible substitutes for the Fourier transform in geopotential determination are the Windowed Fourier Transform (WFT) and the Wavelet Transform (WT). The harmonic WFT and WT are introduced and it is shown how these can be used to give information about the geopotential simultaneously in the space domain and the frequency (angular momentum) domain. The counterparts of the inverse Fourier transform are derived, which allow a reconstruction of the geopotential from its WFT and WT, respectively. Moreover, a necessary and sufficient condition is derived that an otherwise arbitrary function of space and frequency has to satisfy in order to be the WFT or WT of a potential. Finally, least-squares approximation and minimum norm (i.e. least-energy) representation, which play a particular role in geodetic applications of both WFT and WT, are discussed in more detail.
Exercises and Demonstrations
All lectures were accompanied by practical exercises using flexible wavelet programs. Academic signals as well as true geophysical data could be treated and analysed. Next to these 'hands-on' sessions, examples from a diversity of more complex or computationally intensive applications were demonstrated to support the theory. Two videos illustrated the power of the methods in the fields of computer animation and earth gravity field approximation. The enthusiasm of the lecturers and the combination of theoretical foundations and developments, and the link to the practical applications, led to a good insight into the current status and the future challenges in the field of wavelets in the geosciences. We very much enjoyed organizing and taking part in the school, so we hope that you enjoy reading the book and applying the wavelets in a similar fashion.
Delft, October 1999
Roger Haagmans Martin van Gelderen
Introduction to Continuous Wavelet Analysis

Matthias Holschneider 1,2

1 CPT, CNRS Luminy, Case 907, F-13288 Marseille, France
2 Institut de Physique du Globe de Paris, Laboratoire de Géomagnétisme, 4 place Jussieu, F-75252 Paris, France
[email protected]
1 A short motivation: why time-frequency analysis?
Imagine yourself listening to a piano concerto by Beethoven. Did you ever wonder how it is possible that something like melodies, and hence music, exists? In more mathematical terms the miracle is the following: our ears receive a one-dimensional signal p(t) over the real line \mathbb{R} that describes how the local pressure varies with the time t. However, this one-dimensional information is then somehow "unfolded" into a two-dimensional time-frequency plane: the function p(t) over the real line is mapped into a function over the time-frequency plane that tells us "when" which "frequency" occurs. Now this is, strictly speaking, a contradiction in itself. A pure frequency is given by the one-parameter family of complex exponentials e_\omega(t) = e^{i\omega t}. Consequently it has no time-point attached to it, since it extends from -\infty to +\infty. On the other hand, a precise time point, represented by the delta distribution \delta(t), has all frequencies in it, and hence no frequency can be associated with it. Therefore neither p(t) nor its Fourier transform \hat{p}(\omega) gives us sufficient information. The hearing process is thus based on some compromise between time-localization and frequency-localization. This is also true for wavelet transforms, as we shall see. The idea is to replace the elementary frequencies e_\omega(t) = e^{i\omega t} on which the Fourier transform is based (see next section) by a two-parameter family of elementary functions g_{t,f} that are localized around the time point t and the frequency f. Before we construct these functions, we first recall some basics about the central tool in signal processing, the Fourier transform.
2 The Fourier transform

We consider four types of Fourier transforms: over the real line \mathbb{R}, over the integers \mathbb{Z}, over the circle \mathbb{T}, and over the discretized circle \mathbb{Z}/N\mathbb{Z}.

2.1 Fourier transform over \mathbb{R}

Recall that the Fourier transform of a function s(t) over the real line is obtained by "comparing" s with the one-parameter family of pure oscillations e_\omega(t) = e^{i\omega t}
by taking all possible scalar products; that is, for s \in L^1(\mathbb{R}) we have

    s \mapsto Fs, \qquad (Fs)(\omega) = \langle e_\omega \mid s \rangle_{\mathbb{R}} = \int_{-\infty}^{+\infty} dt \, e^{-i\omega t} s(t),

where we have introduced the notation

    \langle s \mid r \rangle_{\mathbb{R}} = \int_{-\infty}^{+\infty} dt \, \overline{s(t)} \, r(t),

whenever the integral converges absolutely. We usually write \hat{s} for Fs. The Fourier transform of s then is a function of the parameter \omega, which may be interpreted as a frequency. We therefore call \hat{s}(\omega) the frequency content or frequency representation of s, and s itself may be referred to as the time representation. It is well known that no information is lost, since the Fourier transform preserves the energy in the sense that

    \langle s \mid s \rangle = \frac{1}{2\pi} \langle \hat{s} \mid \hat{s} \rangle, \qquad \int dt \, |s(t)|^2 = \frac{1}{2\pi} \int d\omega \, |\hat{s}(\omega)|^2.    (1)
The inverse Fourier transform is given by the adjoint operator: for r \in L^1(\mathbb{R}) \cap L^2(\mathbb{R}) it reads

    r \mapsto F^{-1}r, \qquad (F^{-1}r)(t) = \frac{1}{2\pi} \int_{-\infty}^{+\infty} d\omega \, e^{i\omega t} r(\omega), \qquad F F^{-1} = F^{-1} F = \mathbb{1}.

It allows us to write s as a superposition of the elementary functions e_\omega:

    s(t) = \frac{1}{2\pi} \int d\omega \, \hat{s}(\omega) \, e_\omega(t) = \frac{1}{2\pi} \int d\omega \, \hat{s}(\omega) \, e^{i\omega t}.

We may summarize the formulas by saying that (2\pi)^{-1/2} F is unitary.
The convolution theorem. The convolution product of two signals reads

    F * s(t) = \int du \, F(t - u) \, s(u).

It is commutative (F * s = s * F), but the roles of F and s are distinct in applications: F is the filter and s is the signal to be filtered. The convolution theorem states

    \widehat{F * s}(\omega) = \hat{F}(\omega) \, \hat{s}(\omega).

2.2 Fourier transform over \mathbb{Z}

In practice we have only sampled signals. Mathematically these are sequences, or functions over \mathbb{Z}. The Fourier transform of a sequence in L^2(\mathbb{Z}) reads

    F^{\mathbb{Z}} s(\omega) = \sum_{m \in \mathbb{Z}} e^{-i\omega m} s(m).
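For a finitely supported sequence the sum above can be evaluated directly. A small sketch of ours (the helper name `dtft` is not from the text):

```python
import numpy as np

def dtft(s, omega):
    """Evaluate the Fourier transform over Z of a finitely supported
    sequence s (indices 0..len(s)-1) at angular frequencies omega,
    as the direct sum  F^Z s(w) = sum_m e^{-i w m} s(m)."""
    m = np.arange(len(s))
    return np.exp(-1j * np.outer(np.atleast_1d(omega), m)) @ s

s = np.array([1.0, 2.0, 3.0, 2.0])
w = np.linspace(0, 2 * np.pi, 7)
# 2*pi-periodicity: shifting the frequency by 2*pi changes nothing,
# since every e^{i w m} with integer m has period 2*pi.
same = np.allclose(dtft(s, w), dtft(s, w + 2 * np.pi))
```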
Again we write \hat{s}(\omega) instead of F^{\mathbb{Z}} s(\omega). Now, \omega is a priori a real number. But since all functions e^{i\omega m} are 2\pi-periodic, it turns out that

    F^{\mathbb{Z}} s(\omega + 2\pi) = F^{\mathbb{Z}} s(\omega).

We can identify periodic functions with functions over the circle \mathbb{T}. Thus the Fourier transform maps functions over the integers \mathbb{Z} to functions over \mathbb{T}:

    F^{\mathbb{Z}} : L^2(\mathbb{Z}) \to L^2(\mathbb{T}).

Again it preserves the energy:

    \sum_{m \in \mathbb{Z}} |s(m)|^2 = \frac{1}{2\pi} \int_0^{2\pi} d\omega \, |\hat{s}(\omega)|^2.

The inversion formula reads

    s(m) = \frac{1}{2\pi} \int_0^{2\pi} d\omega \, e^{i\omega m} \, \hat{s}(\omega).

The convolution of two sequences reads

    F * s(n) = \sum_{m \in \mathbb{Z}} F(n - m) \, s(m),

and again we have

    \widehat{F * s}(\omega) = \hat{F}(\omega) \, \hat{s}(\omega).
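The energy conservation for sequences is easy to check numerically. In this sketch of ours, the integral over one period of the trigonometric-polynomial integrand is computed exactly by averaging over a uniform frequency grid (the grid size is arbitrary):

```python
import numpy as np

# Check: sum_m |s(m)|^2 = (1/2pi) * integral_0^{2pi} |s_hat(w)|^2 dw.
s = np.array([1.0, -2.0, 0.5, 3.0])
m = np.arange(len(s))

w = np.linspace(0, 2 * np.pi, 4096, endpoint=False)
s_hat = np.exp(-1j * np.outer(w, m)) @ s       # F^Z s on the grid
energy_freq = np.mean(np.abs(s_hat) ** 2)      # uniform average = (1/2pi) * integral
energy_time = np.sum(np.abs(s) ** 2)
```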
2.3 The Fourier transform over \mathbb{T}
For 2\pi-periodic functions we can define the n-th Fourier coefficient through

    F^{\mathbb{T}} s(n) = \frac{1}{2\pi} \int_0^{2\pi} dt \, e^{-int} s(t).

Thus the Fourier transform over the circle maps periodic functions (= functions over the 1-torus) to sequences (= functions over \mathbb{Z}). Again we have conservation of energy:

    \sum_{m \in \mathbb{Z}} |F^{\mathbb{T}} s(m)|^2 = \frac{1}{2\pi} \int_0^{2\pi} dt \, |s(t)|^2.

The convolution of two functions over the 1-torus reads

    F * s(t) = \int_0^{2\pi} du \, F(t - u) \, s(u).

In this formula F has to be taken as wrapped around (see Fig. 1).
Fig. 1. The wrap-around in the convolution over the circle
2.4 The Fourier transform over \mathbb{Z}/N\mathbb{Z}

In numerical algorithms we use yet another Fourier transform. It is defined for periodic sequences of finite length N:

    F^{\mathbb{Z}/N\mathbb{Z}} s(m) = \sum_{k=1}^{N} s(k) \, e^{-2\pi i k m / N}.

Again this transform preserves the energy:

    \sum_{m=1}^{N} |s(m)|^2 = \frac{1}{N} \sum_{k=1}^{N} |F^{\mathbb{Z}/N\mathbb{Z}} s(k)|^2.

It can be inverted through

    s(n) = \frac{1}{N} \sum_{k=1}^{N} \hat{s}(k) \, e^{2\pi i k n / N}.

Again a convolution theorem holds. The convolution of two periodic sequences is defined in the obvious way:

    F * s(n) = \sum_{k=1}^{N} F(n - k) \, s(k).

Again we have to take F in a wrap-around manner. Now we have

    \widehat{F * s}(m) = \hat{F}(m) \, \hat{s}(m).
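Up to indexing conventions, this finite transform is what the FFT computes, so the convolution theorem over \mathbb{Z}/N\mathbb{Z} can be checked directly. A sketch of ours (the helper name is not from the text):

```python
import numpy as np

def circular_convolve(F, s):
    """Direct wrap-around convolution of two period-N sequences:
    (F*s)(n) = sum_k F((n-k) mod N) s(k)."""
    N = len(s)
    return np.array([sum(F[(n - k) % N] * s[k] for k in range(N))
                     for n in range(N)])

F = np.array([1.0, 2.0, 0.0, -1.0])    # the "filter"
s = np.array([3.0, 0.0, 1.0, 2.0])     # the signal

direct = circular_convolve(F, s)
# Convolution theorem: pointwise product of the transforms, inverted.
via_fft = np.fft.ifft(np.fft.fft(F) * np.fft.fft(s)).real
ok = np.allclose(direct, via_fft)
```

This identity is the reason convolutions with long filters are computed through the FFT in practice, at O(N log N) instead of O(N^2) cost.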
2.5 Sampling and the Poisson summation formula
The various Fourier transforms we have encountered are linked together via the Poisson summation formula. It shows that there is a link between sampling and periodizing.
Perfect sampling. More specifically, for a function s(t) over the real line we associate the sequence of its perfect samples at all integer points via

\Sigma : (\mathbb{R}) \to (\mathbb{Z}), \qquad (\Sigma s)(m) = s(m).
In the same way we may sample a periodic function s at N evenly distributed points:

\Sigma_N^{\mathbb{T}} : (\mathbb{T}) \to (\mathbb{Z}/N\mathbb{Z}), \qquad (\Sigma_N^{\mathbb{T}} s)(m) = s(2\pi m / N).

A function over Z (= a sequence) may be down-sampled k times (= throw away all points except those situated at ..., −2k, −k, 0, k, 2k, ...):

\Sigma_k : (\mathbb{Z}) \to (\mathbb{Z}), \qquad \Sigma_k s(m) = s(km).
Finally a periodic sequence of length N may be down-sampled further by a factor k if k divides N, in which case we set

\Sigma_k^{\mathbb{Z}/N\mathbb{Z}} : (\mathbb{Z}/N\mathbb{Z}) \to (\mathbb{Z}/(N/k)\mathbb{Z}), \qquad \Sigma_k^{\mathbb{Z}/N\mathbb{Z}} s(m) = s(km \bmod N), \quad m = 0, 1, \dots, N/k - 1.
Periodizing. There is a natural map to come from functions over the real line to functions over the 1-torus, that is, to periodic functions. It is the periodization operator

\Pi : (\mathbb{R}) \to (\mathbb{T}), \qquad \Pi s(t) = \sum_{k \in \mathbb{Z}} s(t + 2\pi k).
In the analogous way we may periodize periodic functions to obtain functions with a k-times smaller period. For this purpose we write T_T for the space of functions with period T; as before, T stands for T_{2π}:

\Pi_k^{\mathbb{T}} : (\mathbb{T}_T) \to (\mathbb{T}_{T/k}), \qquad \Pi_k^{\mathbb{T}} s(t) = \sum_{m=0}^{k-1} s((t + Tm/k) \bmod T).
A sequence may be periodized to obtain a periodic sequence with period N:

\Pi_N^{\mathbb{Z}} : (\mathbb{Z}) \to (\mathbb{Z}/N\mathbb{Z}), \qquad \Pi_N^{\mathbb{Z}} s(m) = \sum_{k \in \mathbb{Z}} s(m + kN).
Finally a periodic sequence of period N may be periodized to obtain a sequence with period N/k, provided k divides N:

\Pi_k^{\mathbb{Z}/N\mathbb{Z}} : (\mathbb{Z}/N\mathbb{Z}) \to (\mathbb{Z}/(N/k)\mathbb{Z}), \qquad \Pi_k^{\mathbb{Z}/N\mathbb{Z}} s(m) = \sum_{l=0}^{k-1} s(m + lN/k \bmod N), \quad m = 0, 1, \dots, N/k - 1.
The Poisson summation formulas. The Poisson summation formula states in its easiest form that periodizing and sampling are related through Fourier transforms:

\Sigma F = F_{\mathbb{T}}\, \Pi, \qquad F_{\mathbb{Z}}\, \Sigma = \Pi F.

Or, in diagram language, the following squares commute:

         F                             F
  (R) -------> (R)              (R) -------> (R)
   |            |                |            |
   Pi           Sigma            Sigma        Pi
   v            v                v            v
  (T) -------> (Z)              (Z) -------> (T)
        F_T                           F_Z
The analogous relations for the other transforms, with the above definitions, are

\Sigma_N^{\mathbb{T}}\, F_{\mathbb{Z}} = F_{\mathbb{Z}/N\mathbb{Z}}\, \Pi_N^{\mathbb{Z}}, \qquad \Sigma_k^{\mathbb{Z}/N\mathbb{Z}}\, F_{\mathbb{Z}/N\mathbb{Z}} = F_{\mathbb{Z}/(N/k)\mathbb{Z}}\, \Pi_k^{\mathbb{Z}/N\mathbb{Z}}.

There is another useful picture for these relations. If we identify a sequence with a modulated δ-comb,

s(m) \longleftrightarrow \sum_{m \in \mathbb{Z}} s(m)\, \delta(t - m),
and if we introduce the equally spaced delta comb as

\Delta(t) = \sum_{m \in \mathbb{Z}} \delta(t - m),
then the sampling operator may be written as

\Sigma : s \mapsto \Delta \cdot s.

The periodization operator reads

\Pi : s \mapsto \Delta_{2\pi} * s,

where \Delta_{2\pi}(t) = \sum_{m} \delta(t - 2\pi m), and the Poisson formula finally reads simply

F \Delta = \Delta_{2\pi}

(up to the normalization conventions of the Fourier transform).
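In the finite setting of Z/NZ the link between sampling and periodizing can be seen directly with the FFT: down-sampling a period-N sequence by k periodizes its Fourier transform (the familiar aliasing formula). A sketch, with numpy assumed:

```python
import numpy as np

rng = np.random.default_rng(1)
N, k = 16, 4
s = rng.standard_normal(N) + 1j * rng.standard_normal(N)

# Down-sampling by k on one side ...
down = s[::k]                              # (Sigma_k s)(m) = s(km)
# ... corresponds to periodizing the Fourier transform on the other,
# here with an extra factor 1/k from the normalization of the DFT.
lhs = np.fft.fft(down)
rhs = np.fft.fft(s).reshape(k, N // k).sum(axis=0) / k
assert np.allclose(lhs, rhs)
```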
Band-limited signals. A signal is band-limited if its Fourier transform is supported by some interval, say [ω₁, ω₂]. In the case of a band-limited signal we may actually compute with the samples only, provided they are sufficiently close. More precisely, let Δω = ω₂ − ω₁ be the bandwidth of the signal s. Suppose we sample s evenly with a frequency Ω; that means we know the sequence

\Sigma_{2\pi/\Omega}\, s(k) = s(2\pi k / \Omega), \qquad k \in \mathbb{Z}.
Suppose Ω > Δω. Then s may be recovered from its samples via

s(t) = \sum_{k \in \mathbb{Z}} (\Sigma_{2\pi/\Omega}\, s)(k)\; H(t - 2\pi k / \Omega),

where H is the function whose Fourier transform is the rectangular window of the interval [ω₁, ω₂]. Indeed, the Fourier transform of the sampled sequence (\Sigma_{2\pi/\Omega} s)(k) is the periodized Fourier transform of the original signal. Since this is band-limited and since the sampling frequency is high enough, the spectrum is just repeated without overlap. We multiply it in Fourier space with the rectangular window of the interval [ω₁, ω₂] to cut out the spectrum of the original signal. Now this multiplication in Fourier space amounts to a convolution with H of the modulated δ-comb \Sigma_{2\pi/\Omega} s, and this yields the above formula. In particular, suppose we have a band-limited filter F and a band-limited signal s, both in the same band [ω₁, ω₂]. We then may compute with the samples only. Indeed, by Poisson summation we have

\Sigma_{2\pi/\Omega}(F * s) = \frac{2\pi}{\Omega}\, (\Sigma_{2\pi/\Omega} F) * (\Sigma_{2\pi/\Omega} s).
The convolution product on the left-hand side is over the real line; the one on the right-hand side is the convolution of sequences.
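A fully discrete version of this reconstruction can be sketched as follows (numpy assumed; the signal lives on Z/NZ and occupies only frequencies below the sub-sampled Nyquist rate, so sub-sampling by k loses nothing): the spectrum of the samples is the periodized spectrum of the signal, and cutting it with the rectangular window — here, zero-padding the DFT of the samples — recovers the signal exactly.

```python
import numpy as np

rng = np.random.default_rng(2)
N, k = 64, 4
M = N // k                          # number of retained samples

# A real band-limited signal: only frequencies |n| < M/2 occur.
spec = np.zeros(N, dtype=complex)
for n in range(1, M // 2):
    c = rng.standard_normal() + 1j * rng.standard_normal()
    spec[n], spec[-n] = c, np.conj(c)
s = np.fft.ifft(spec).real

samples = s[::k]                    # the sub-sampled signal

# The DFT of the samples is the periodized (here non-overlapping)
# spectrum; cut out one period with the rectangular window:
spec_samples = np.fft.fft(samples)
padded = np.zeros(N, dtype=complex)
padded[:M // 2] = spec_samples[:M // 2]
padded[-(M // 2):] = spec_samples[-(M // 2):]
s_rec = np.fft.ifft(padded).real * k
assert np.allclose(s_rec, s)
```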
The sampling spaces. Spaces of band-limited functions are particular instances of so-called sampling spaces. These are spaces of functions over the real line which are determined by their sample values only. For example, consider the space of continuous, piecewise affine functions with knots at the integers. A function s in this space is determined if we know its values at the integers. The value at every intermediate point t with n < t < n + 1 is then obtained by linear interpolation:

s(t) = (t - n)\, s(n+1) + (n + 1 - t)\, s(n).

Consider the unique function h in this space that has h(0) = 1 and h(n) = 0 for n ≠ 0. This is the triangular function

h(t) = \begin{cases} 1 - |t| & \text{for } |t| < 1 \\ 0 & \text{else.} \end{cases}

A minute's reflection shows that

s(t) = \sum_{n \in \mathbb{Z}} s(n)\, h(t - n).
This last formula holds for all sampling spaces. Again, for sampling spaces we may compute with the samples only. Indeed, let F be an arbitrary function over ℝ and s in some sampling space generated by the elementary function h. Now let f = \Sigma(F * h) be the discrete filter. We then have

\Sigma(F * s) = f * \Sigma s.
But note that in general F * s is not in the original sampling space any more! Therefore, in particular, although we know the sample values of the convolution product, we cannot compute F * s at points in between just from f * Σs by means of interpolation. Instead we have to use a different filter for each shift τ:

\Sigma(T_\tau F * s)(m) = F * s(m - \tau) = f_\tau * \Sigma s(m), \qquad f_\tau(m) = (F * h)(m - \tau).

Finite signals. In general the signals we work with are of finite length only. Moreover, they are sampled. What is the relation between the Fourier transform computed with the discrete Fourier transform of the truncated samples and the Fourier transform of the original signal?
3 The wavelet transform

It is clear that the Fourier transform alone does not help us to understand the hearing process alluded to in the beginning. The wavelet transform is based on the two-parameter family of dilated functions; see [10], [4] and [15]. We introduce the following operators:

T_b : s(t) \mapsto s(t - b),

E_{\omega_0} : s(t) \mapsto e^{i\omega_0 t}\, s(t), \qquad \hat s(\omega) \mapsto \hat s(\omega - \omega_0),

and

D_a : s(t) \mapsto \frac{1}{a}\, s\!\left(\frac{t}{a}\right), \qquad \hat s(\omega) \mapsto \hat s(a\omega).
Let g(t) be a fixed function, called the wavelet, and consider the two-parameter family of dilated and translated functions

g_{b,a} = T_b D_a g, \qquad g_{b,a}(t) = \frac{1}{a}\, g\!\left(\frac{t - b}{a}\right).
Note that dilation comes first, since dilations and translations do not commute. For a complex-valued signal s(t) the wavelet transform of s with respect to g reads

\mathcal{W}_g s(b,a) = \langle g_{b,a}, s \rangle = \int_{-\infty}^{+\infty} dt\, \frac{1}{a}\, \bar g\!\left(\frac{t - b}{a}\right) s(t), \qquad b \in \mathbb{R},\ a > 0.
Note that we write \bar g for the complex conjugate of g. Another notation for the wavelet transform is \mathcal{W}[g; s](b,a). These numbers will sometimes be called the wavelet coefficients of s with respect to the wavelet g. The wavelet transform can also be seen as a family of convolutions indexed by the parameter a. The convolution product of two functions is defined as
s * r(t) = \int_{-\infty}^{+\infty} dt'\, s(t - t')\, r(t') = r * s(t).
Then \mathcal{W}_g s(b,a) = \breve g_a * s(b), where \breve g(t) = \bar g(-t). The two-dimensional parameter space of wavelet analysis may be identified with the upper half-plane

\mathbb{H} = \{ (b,a) : b \in \mathbb{R},\ a > 0 \}.
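This convolution formulation can be checked numerically. The sketch below (numpy assumed; the wavelet g(t) = −t e^{−t²/2}, the derivative of a Gaussian bump, is an illustrative choice and not taken from the text) compares the defining integral at one point with the convolution with the dilated, reflected wavelet:

```python
import numpy as np

# Illustrative real-valued wavelet: derivative of a Gaussian bump,
# g(t) = -t exp(-t^2/2); it has zero mean.
g = lambda t: -t * np.exp(-t**2 / 2)

dt = 0.01
t = np.arange(-20, 20, dt)
s = np.exp(-(t - 2)**2)                     # some test signal
b, a = 1.5, 0.7

# The defining integral: W_g s(b,a) = int (1/a) g((t-b)/a) s(t) dt
w_direct = np.sum(g((t - b) / a) * s) * dt / a

# The same number from the convolution with the dilated, reflected
# wavelet (g is real, so conjugation is omitted).
voice = np.convolve(g(-t / a) / a, s, mode="full") * dt
i = int(round((b - 2 * t[0]) / dt))         # index of position b
assert abs(voice[i] - w_direct) < 1e-8
```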
3.1 Wavelets and time-frequency analysis
As we explained in the introduction, we want to build a kind of mathematical ear, based on atoms localized at different positions and frequencies. One way to enforce the basic wavelet to have oscillations is to require that

\int dt\, g(t) = 0 \quad \Longleftrightarrow \quad \hat g(\omega = 0) = 0.
Such a function has at least one oscillation. A bump is never a wavelet in that sense. Note, however, that the derivative of a bump may do very well. Now, since the 0 frequency is not contained in the spectrum of g, it is necessarily localized around a frequency different from 0, say ω₀. But then the scaled wavelet g_a = g(·/a)/a is oscillating with a frequency ω₀/a (see Fig. 2). Thus the upper half-plane of wavelet analysis is also a time-frequency half-plane. The wavelet transform is a kind of mathematical ear if we identify

b \longleftrightarrow \text{time}, \qquad \frac{\omega_0}{a} \longleftrightarrow \text{frequency}.
3.2 Wavelets and approximation theory

By construction, the wavelet transform is a kind of mathematical microscope if we make the following identifications:

b \longleftrightarrow \text{position}, \qquad a^{-1} \longleftrightarrow \text{enlargement}, \qquad g \longleftrightarrow \text{optics}.
The interpretation of a as a scale becomes even clearer if we consider the following. Suppose we want to analyse an arbitrary function s over the real line ℝ on different length scales. A first attempt to look at s at different length scales might be to look at smoothed versions of s. Therefore pick a real-valued, localized, non-negative "smoothing filter" φ(t), consider the family φ_a(t) = φ(t/a)/a, a > 0, of dilated versions of φ, and look at the smoothed versions σ_a = φ_a * s
Fig. 2. The dilation operator in time and frequency. Note that small scales (a ≪ 1) correspond to high frequencies.
of s. Since the width of the support of φ_a is proportional to a, one might say that the smoothed version contains the details of s up to length-scale a, where the scale is measured in units of the size of the support of φ_{a=1} = φ. The features of s living on a smaller scale are smoothed out and no longer visible in σ_a. As a gets smaller, more and more details are added, and eventually one gets back all of s. Indeed, one can prove that σ_a → s in L²(ℝ). The idea of wavelet analysis is now to look at the details that are added if one goes from scale a to scale a − da with da > 0 but infinitesimally small. Since σ_a contains all details of s up to scale a, it follows that the difference σ_{a−da} − σ_a contains the details of s that are living at length-scale a. Going to the limit da → 0 we are led to look at

W(b,a) = -a\, \partial_a\, \sigma_a(b). \qquad (2)

The factor a in front is for convenience only. Now since φ is smooth and compactly supported we may take the derivative under the integral, and we obtain, thanks to the identity -a\,\partial_a(\varphi(t/a)/a) = (\varphi(t/a) + (t/a)\,\varphi'(t/a))/a, that
W(b,a) = -a\, \partial_a \int_{-\infty}^{+\infty} dt\, \frac{1}{a}\, \varphi\!\left(\frac{t - b}{a}\right) s(t) = g_a * s(b),

with g(t) = (t\, \partial_t + 1)\, \varphi(t) and g_a(t) = g(t/a)/a. This expression looks similar to the one we have used before in the definition of the approximations σ_a. However, one important thing has changed: φ, which was a bump - as expressed by ∫φ = 1
- is replaced by g, which rather satisfies

\int_{-\infty}^{+\infty} dt\, g(t) = 0,

as can be seen by partial integration: \int t\, \partial_t \varphi = -\int \varphi. This follows also from

\hat g(\omega) = -\omega\, \partial_\omega\, \hat\varphi(\omega),

which can be verified by straightforward computation. Therefore, to obtain the details of s at a length scale a we have to look at the convolution of s with the dilated version of a function g that is of 0-mean; that is, a wavelet. The convolution W_a = g_a * s, corresponding to the details of s at length scale a, is the wavelet transform of s with respect to the wavelet \bar g(-t). If we sum up all the details W_a = W(·, a) over all scales we might hope to recover s in the sense that
\int_\epsilon^\rho \frac{da}{a}\, W_a = \int_\epsilon^\rho \frac{da}{a}\, g_a * s \to s

in the limit ε → 0, ρ → ∞.
This actually holds if the wavelet g is derived from the smoothing bump φ as before in (2). Then, since W_a = -a\, \partial_a \sigma_a, it follows that the above integral evaluates to σ_ε − σ_ρ. The first term goes uniformly to s in the limit ε → 0, and the second term goes to 0. Therefore the wavelet transform allows us to unfold a function over the one-dimensional space ℝ into a function over the two-dimensional half-plane ℍ of positions and details. It is this two-dimensional picture that has made the success of wavelet analysis.
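The identity (2) can be tested numerically. In the sketch below (numpy assumed; the Gaussian bump φ, the test signal and the finite-difference step da are illustrative choices) the derivative −a∂_aσ_a is approximated by central differences and compared with the convolution with g = (t∂_t + 1)φ:

```python
import numpy as np

phi = lambda t: np.exp(-t**2 / 2) / np.sqrt(2 * np.pi)  # bump, integral 1
g = lambda t: (1 - t**2) * phi(t)                        # (t d/dt + 1) phi

dt = 0.005
t = np.arange(-30, 30, dt)
s = np.sin(t) * np.exp(-t**2 / 50)

def sigma(a):
    """The smoothed version sigma_a = phi_a * s on the grid."""
    return np.convolve(phi(t / a) / a, s, mode="same") * dt

a, da = 1.3, 1e-4
# Details at scale a: W(., a) = -a d/da sigma_a (central differences) ...
W_fd = -a * (sigma(a + da) - sigma(a - da)) / (2 * da)
# ... equal the convolution with the dilated wavelet g_a.
W_g = np.convolve(g(t / a) / a, s, mode="same") * dt

mid = slice(2000, len(t) - 2000)          # stay away from the boundary
assert np.max(np.abs(W_fd[mid] - W_g[mid])) < 1e-5
```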
4 Some elementary algebraic properties

We list some obvious properties of the wavelet transform for later reference.

Linearity. The wavelet transform is a linear transformation, or linear operator, so that the superposition principle applies; that is, we have
\mathcal{W}_g(s + r) = \mathcal{W}_g s + \mathcal{W}_g r, \qquad \mathcal{W}_g(\alpha s) = \alpha\, \mathcal{W}_g s

for all functions s and r and every complex number α ∈ C. With respect to the wavelet it is anti-linear:

\mathcal{W}[g + v; s] = \mathcal{W}[g; s] + \mathcal{W}[v; s], \qquad \mathcal{W}[\alpha g; s] = \bar\alpha\, \mathcal{W}[g; s].
Symmetry g ↔ s. If we exchange the roles of g and s we then have the following useful formula.
Wavelet and parity. For arbitrary functions the parity operator is defined by

s \mapsto Ps, \qquad (Ps)(t) = s(-t).
A function is called even if Ps = s, and it is called odd if Ps = −s. The analogue for functions over the half-plane is

\mathcal{T} \mapsto P\mathcal{T}, \qquad (P\mathcal{T})(b,a) = \mathcal{T}(-b,a).
By direct computation we can show that

\mathcal{W}[Pg; Ps] = P\, \mathcal{W}[g; s], \qquad \mathcal{W}[Pg; s] = P\, \mathcal{W}[g; Ps].

Therefore the wavelet transform of an even (odd) function with respect to an even (odd) wavelet is itself even: P\mathcal{W}_g s = \mathcal{W}_g s. The wavelet transform of an even (odd) function with respect to an odd (even) wavelet is odd: P\mathcal{W}_g s = -\mathcal{W}_g s.
Wavelet transform in Fourier space. We may write the action of dilation and translation in Fourier space:

s(t) \mapsto s(t/a)/a \iff \hat s(\omega) \mapsto \hat s(a\omega), \qquad s(t) \mapsto s(t - b) \iff \hat s(\omega) \mapsto \hat s(\omega)\, e^{-ib\omega}. \qquad (4)

Because of Parseval's equation for the Fourier transform (1) we may rewrite the wavelet transform in frequency space. Since by (4) we have \hat g_{b,a}(\omega) = \hat g(a\omega)\, e^{-ib\omega}, it follows that for g, s ∈ L²(ℝ) we may compute the wavelet coefficients in Fourier space via

\mathcal{W}_g s(b,a) = \frac{1}{2\pi} \int_{-\infty}^{+\infty} d\omega\, \overline{\hat g(a\omega)}\, e^{ib\omega}\, \hat s(\omega). \qquad (5)
The restriction \mathcal{W}_g s(\cdot, a) of \mathcal{W}_g s to one of the lines a = const is called a voice. It is given by the convolution of s with the dilated analyzing wavelet: \mathcal{W}_g s(\cdot, a) = \breve g_a * s(\cdot), where \breve g_a(t) = \bar g(-t/a)/a. Accordingly, since the convolution theorem states that for r ∈ L¹(ℝ), s ∈ L²(ℝ) we have point-wise almost everywhere

\widehat{r * s}(\omega) = \hat r(\omega)\, \hat s(\omega), \qquad (6)

the Fourier transform of a voice reads

\widehat{\mathcal{W}_g s(\cdot, a)}(\omega) = \overline{\hat g(a\omega)}\, \hat s(\omega). \qquad (7)
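Formula (7) suggests the standard way to compute a voice numerically: multiply the FFT of the signal by \overline{\hat g(a\omega)} and transform back. A sketch (numpy assumed; the derivative-of-Gaussian wavelet and its transform ĝ(ω) = i√(2π) ω e^{−ω²/2}, with the convention ĝ(ω) = ∫e^{−iωt} g(t) dt, are illustrative choices):

```python
import numpy as np

g = lambda t: -t * np.exp(-t**2 / 2)
g_hat = lambda w: 1j * np.sqrt(2 * np.pi) * w * np.exp(-w**2 / 2)

dt = 0.01
t = np.arange(-40, 40, dt)
s = np.cos(3 * t) * np.exp(-t**2 / 30)
a = 0.5

# Voice via (7): the FT of W_g s(., a) is conj(g_hat(a w)) * s_hat(w).
w = 2 * np.pi * np.fft.fftfreq(len(t), d=dt)
voice = np.fft.ifft(np.conj(g_hat(a * w)) * np.fft.fft(s)).real

# Direct evaluation of the defining integral at one position b.
b = 2.0
w_direct = np.sum(np.conj(g((t - b) / a)) * s).real * dt / a
i = int(round((b - t[0]) / dt))
assert abs(voice[i] - w_direct) < 1e-6
```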
Because a > 0, the positive (negative) frequencies of g will only interact with the positive (negative) frequencies of s; that is, the wavelet transform does not mix the positive frequencies of the wavelet with the negative frequencies of the analyzed function. It is therefore natural to treat the positive and the negative parts separately, and we define
Definition 1. A function s ∈ L²(ℝ) is called progressive or prograde iff its Fourier transform is supported by the positive frequencies only: supp ŝ ⊂ ℝ₊.

It is called regressive or retrograde iff the time-reversed function s(−t) is progressive, or, what is the same, iff its Fourier transform is supported by the negative frequencies only. The spaces of progressive and regressive functions in L²(ℝ) are closed subspaces, and we shall denote them by H²₊(ℝ) and H²₋(ℝ), respectively. The whole Hilbert space splits into an orthogonal sum L²(ℝ) = H²₊(ℝ) ⊕ H²₋(ℝ). Since both spaces are closed it follows that the orthogonal projectors onto the positive (negative) frequencies are continuous. In Fourier space they read

s \mapsto \Pi^+ s, \qquad \hat s(\omega) \mapsto \Theta(\omega)\, \hat s(\omega),
s \mapsto \Pi^- s, \qquad \hat s(\omega) \mapsto \Theta(-\omega)\, \hat s(\omega),

where the Heaviside function Θ is defined as

\Theta(t) = \begin{cases} 1 & \text{for } t \ge 0 \\ 0 & \text{for } t < 0. \end{cases}
Because Π⁺ + Π⁻ = 1 we may use these two projectors to split the analyzing wavelet and the analyzed function into a progressive and a regressive component. For the wavelet transform we obtain

\mathcal{W}[g; s] = \mathcal{W}[\Pi^+ g; \Pi^+ s] + \mathcal{W}[\Pi^- g; \Pi^- s].

Thus if g is a progressive wavelet, only the positive frequencies of the analyzed function are "seen" by the analyzing wavelet, and we have

\mathcal{W}[g; s] = \mathcal{W}[g; \Pi^+ s] \qquad \text{for } g \text{ progressive}.
From (7) it follows that for progressive wavelets each voice is a progressive function. In general, analysis with a progressive wavelet means an a priori loss of information on s for a non-progressive s ∈ L²(ℝ). However, if s is real-valued, again no information is lost a priori. Indeed, a real-valued function is determined by its positive frequencies alone, since it satisfies the Hermitian symmetry

\hat s(\omega) = \overline{\hat s(-\omega)}.

Thus it may be recovered from its progressive part Π⁺s:

s(t) = 2\, \Re\, \Pi^+ s(t) = \frac{1}{\pi}\, \Re \int_0^{\infty} d\omega\, e^{i\omega t}\, \hat s(\omega),
where ℜz denotes the real part of z ∈ C. Therefore the wavelet transform of a real function s with respect to a real wavelet g may be expressed in terms of the associated progressive functions:

\mathcal{W}[g; s] = 2\, \Re\, \mathcal{W}[\Pi^+ g; \Pi^+ s] = 2\, \Re\, \mathcal{W}[g; \Pi^+ s] = 2\, \Re\, \mathcal{W}[\Pi^+ g; s]. \qquad (8)
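The recovery of a real signal from its progressive part is easy to check in the discrete setting (a sketch with numpy; the splitting of the DC and Nyquist bins between Π⁺ and Π⁻ is one possible discrete convention):

```python
import numpy as np

rng = np.random.default_rng(4)
N = 256
s = rng.standard_normal(N)                # a real-valued signal

s_hat = np.fft.fft(s)
theta = np.zeros(N)
theta[1:N // 2] = 1.0                     # strictly positive frequencies
theta[0] = 0.5                            # share the DC bin ...
theta[N // 2] = 0.5                       # ... and the Nyquist bin evenly
s_prog = np.fft.ifft(theta * s_hat)       # discrete stand-in for Pi+ s

# A real-valued signal is recovered from its progressive part alone.
assert np.allclose(2 * np.real(s_prog), s)
```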
Co-variance of wavelet transforms. We have already encountered the dilation operator and the translation operator acting on functions over the real line via

T_b : s(t) \mapsto s(t - b), \qquad D_a : s(t) \mapsto s(t/a)/a.

Correspondingly we have the dilation \mathcal{D}_{a'}, a' > 0, and translation operator \mathcal{T}_{b'}, b' ∈ ℝ, acting on functions over the half-plane:

\mathcal{D}_{a'} : \mathcal{T}(b,a) \mapsto \frac{1}{a'}\, \mathcal{T}\!\left(\frac{b}{a'}, \frac{a}{a'}\right), \qquad \mathcal{T}_{b'} : \mathcal{T}(b,a) \mapsto \mathcal{T}(b - b', a).
Fig. 3. The translation operator and the dilation operator acting on a function on the half-plane
On the real axis a = 0 we get back the dilation and translation of functions over ℝ. The wavelet transform now satisfies the following co-variance property, as can be verified by direct computation:

\mathcal{W}[g; D_a s] = \mathcal{D}_a\, \mathcal{W}[g; s], \qquad \mathcal{W}[g; T_b s] = \mathcal{T}_b\, \mathcal{W}[g; s],

or, more explicitly,

\mathcal{W}[g; s(t - b')](b,a) = \mathcal{W}[g; s](b - b', a); \qquad \mathcal{W}[g; s(t/a')](b,a) = \mathcal{W}[g; s](b/a', a/a'). \qquad (9)
This means that the wavelet transform of a dilated (translated) function is obtained by dilating (translating) the wavelet transform. With respect to the wavelet the following co-variance holds
)/V[g(t - b'); s](b, a) = Wig; s](b + b' a, a); )/Y[a ~g(a't); s](b, a) = Wig; s](b, a/a'). In order to gain a little more geometric intuition for these transforms let us look at the invariant subsets of the half-plane; that is those subsets of IH that are mapped into themselves when an arbitrary re-scaling (shifting) is applied. Clearly every straight line passing through the origin is mapped onto itself under all possible dilations and the same holds for any collection of such lines. Vice versa every invariant subset must contain together with the point (b, a) all its rescaled points {(b/a', a/a') : a' > 0}. But this is the straight line passing through (b, a) and the origin (0, 0). Therefore the invariant sets for the re-scaling are all cone-like structures with top at the origin. A similar argument shows that the invariant subsets for the translations are the strips parallel to the real axis. See Fig. 4 for an illustration.
Fig. 4. An example of a translation invariant subset (left) and a dilation invariant subset (right) of the half-plane.
4.1 The uncertainty relation

As is well known, it is impossible to be simultaneously arbitrarily well localized in time and in frequency. More precisely, if we view |s(t)|² and |ŝ(ω)|² (after suitable normalization) as probability densities for the distribution of times and
frequencies, we may introduce the mean values

\langle t \rangle = \frac{\int dt\, t\, |s(t)|^2}{\int dt\, |s(t)|^2}, \qquad \langle \omega \rangle = \frac{\int d\omega\, \omega\, |\hat s(\omega)|^2}{\int d\omega\, |\hat s(\omega)|^2},

and widths

\Delta t = \sqrt{\langle t^2 \rangle}, \qquad \Delta\omega = \sqrt{\langle \omega^2 \rangle},

where

\langle t^2 \rangle = \frac{\int dt\, t^2\, |s(t)|^2}{\int dt\, |s(t)|^2}, \qquad \langle \omega^2 \rangle = \frac{\int d\omega\, \omega^2\, |\hat s(\omega)|^2}{\int d\omega\, |\hat s(\omega)|^2};
then the Heisenberg uncertainty relation states that Δt Δω > 1. If we associate with any signal a rectangle of size Δt and Δω with its center at (⟨t⟩, ⟨ω⟩) in the time-frequency plane, then the Heisenberg uncertainty relation gives a lower bound for the area of such "phase-space cells". How do our translation and dilation operations act in terms of these quantities? It is easy to see that the following holds:

T_b : \langle t \rangle \mapsto \langle t \rangle + b, \quad \Delta t \mapsto \Delta t, \quad \langle \omega \rangle \mapsto \langle \omega \rangle, \quad \Delta\omega \mapsto \Delta\omega;

D_a : \langle t \rangle \mapsto a \langle t \rangle, \quad \Delta t \mapsto a\, \Delta t, \quad \langle \omega \rangle \mapsto \langle \omega \rangle / a, \quad \Delta\omega \mapsto \Delta\omega / a.

Therefore we do not change the area in phase space of our wavelets, but we choose between time localization and frequency localization in such a way that Δω/⟨ω⟩ remains invariant. See Fig. 5 for an illustration.
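These transformation rules can be verified numerically. In the sketch below (numpy assumed; the Morlet-like atom is an illustrative choice, and Δω is computed here as a centered width) the dilation by a = 2 scales ⟨ω⟩, Δt and Δω as stated, leaving the phase-space area Δt·Δω and the relative bandwidth Δω/⟨ω⟩ unchanged:

```python
import numpy as np

def phase_cell(t, f):
    """<omega>, Delta t, Delta omega with |f|^2 and |f_hat|^2 as densities."""
    dt = t[1] - t[0]
    pt = np.abs(f)**2
    pt = pt / (pt.sum() * dt)
    Dt = np.sqrt(np.sum(t**2 * pt) * dt)              # <t> = 0 for our atoms
    w = 2 * np.pi * np.fft.fftfreq(len(t), d=dt)
    dw = 2 * np.pi / (len(t) * dt)
    pw = np.abs(np.fft.fft(f))**2
    pw = pw / (pw.sum() * dw)
    mw = np.sum(w * pw) * dw
    Dw = np.sqrt(np.sum((w - mw)**2 * pw) * dw)
    return mw, Dt, Dw

t = np.arange(-20, 20, 0.01)
g = np.exp(5j * t) * np.exp(-t**2 / 2)                # Morlet-like atom
a = 2.0
g_dil = np.exp(5j * t / a) * np.exp(-(t / a)**2 / 2) / a   # D_a g

m1, Dt1, Dw1 = phase_cell(t, g)
m2, Dt2, Dw2 = phase_cell(t, g_dil)

# D_a: <w> -> <w>/a,  Delta t -> a Delta t,  Delta w -> Delta w / a
assert np.allclose([m2, Dt2, Dw2], [m1 / a, a * Dt1, Dw1 / a], rtol=1e-3)
assert np.isclose(Dt2 * Dw2, Dt1 * Dw1, rtol=1e-3)    # phase-space area kept
assert np.isclose(Dw2 / m2, Dw1 / m1, rtol=1e-3)      # relative bandwidth kept
```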
5 The basic functions: the wavelets

Let us come back to the wavelets themselves. As the name says, these functions are elementary oscillations. As such they should be localized in time, they should also be localized in frequency space, and they should be oscillating. Localization in time and frequency can be quantified by the following conditions:
|g(t)| \le c\, (1 + |t|^2)^{-\alpha/2} \quad \text{and} \quad |\hat g(\omega)| \le c\, (1 + |\omega|^2)^{-\alpha/2} \quad \text{for some } \alpha > 1. \qquad (10)
To enforce the oscillation of the wavelets, we have to require that some of the moments of the wavelet vanish:

\int_{-\infty}^{+\infty} dt\, g(t)\, t^m = 0, \qquad m = 0, \dots, n. \qquad (11)
Fig. 5. The localization in phase space of the wavelets.
This last condition can easily be seen to be equivalent to

\partial_\omega^m\, \hat g(\omega = 0) = 0, \qquad m = 0, \dots, n,

whenever g ∈ L¹(ℝ) and tⁿg ∈ L¹(ℝ). According to this last condition a wavelet has some oscillations, which justifies the name "wavelet". A bump (e.g. e^{−t²/2}) is not a wavelet. But the derivative of a bump is a wavelet.

6 The real wavelets
These wavelets are, as the name says, real-valued functions over the real line. As for any real-valued function, the Fourier transform satisfies the Hermitian symmetry

\hat g(\omega) = \overline{\hat g(-\omega)}.

Often the real wavelets are either even, g(−t) = g(t), or odd, g(−t) = −g(t). In this case the Fourier transform is real-valued and even, or imaginary-valued and odd, respectively.
The wavelet of A. Haar. Its explicit expression is given by

g(t) = \begin{cases} -1 & \text{for } 0 \le t < 1/2 \\ +1 & \text{for } 1/2 \le t < 1 \\ 0 & \text{else.} \end{cases}

Obviously this wavelet is extremely localized in time - it even has compact support - but it has only poor regularity - it is not even continuous. This is mirrored in a poor localization in frequency space:

\hat g(\omega) = 2\, \frac{1 - \cos(\omega/2)}{\omega}\; e^{-i(\omega + \pi)/2}.
Fig. 6. The wavelet of A. Haar. Note the oscillations of the Fourier transform, which produce a slow decrease of at most 1/ω. This is due to the low regularity of the time representation.
In Fig. 6 we have sketched both representations. This function is actually the first wavelet that has been used in mathematics. It was introduced by Alfred Haar in 1910. He also showed that the set g(2^j t − k) with j, k ∈ Z, suitably normalized, is an orthonormal basis of L²(ℝ). The set of points {(2^j k, 2^j)} forms a grid in the half-plane. It is called the dyadic grid. It is enough to know the wavelet transform of s ∈ L²(ℝ) with respect to g at all points of the dyadic grid to know it everywhere. This is an example of a sampling theorem in wavelet space that we shall discuss in greater detail in later sections. We shall also see in later sections that there is a whole family of more regular functions that have the same property.
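On a dyadic grid the orthonormality of the (suitably normalized) dilates and translates can be checked exactly; a sketch, assuming numpy and the sign convention for g used above:

```python
import numpy as np

def haar(t):
    """Haar wavelet: -1 on [0, 1/2), +1 on [1/2, 1), 0 elsewhere."""
    return np.where((t >= 0) & (t < 0.5), -1.0,
                    np.where((t >= 0.5) & (t < 1.0), 1.0, 0.0))

dt = 1 / 1024                       # dyadic grid => inner products exact
t = np.arange(-4, 4, dt)

def h(j, k):
    """L2-normalized dilate/translate 2^{j/2} g(2^j t - k)."""
    return 2**(j / 2) * haar(2**j * t - k)

# <h_{j,k}, h_{j',k'}> = delta_{jj'} delta_{kk'} for supports inside [-4, 4]
pairs = [(0, 0), (0, 1), (1, 0), (1, 1), (2, -3), (-1, 0)]
for (j, k) in pairs:
    for (jp, kp) in pairs:
        ip = np.sum(h(j, k) * h(jp, kp)) * dt
        expected = 1.0 if (j, k) == (jp, kp) else 0.0
        assert abs(ip - expected) < 1e-12
```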
The Poisson wavelets. Consider the function

g(t) = (t\, \partial_t + 1)\, P(t), \qquad P(t) = \frac{1}{\pi}\, \frac{1}{1 + t^2}. \qquad (12)
In Fourier space this wavelet reads \hat g(\omega) = |\omega|\, e^{-|\omega|}. The analysis of a function s with respect to this wavelet is closely related to an initial value problem of the Laplace operator. Recall that the Laplace operator acting on functions over the half-plane ℍ reads

\Delta = \partial_b^2 + \partial_a^2.

A function \mathcal{T} = \mathcal{T}(b,a) is called harmonic in the half-plane if it is two times continuously differentiable at every point in the open half-plane and if it satisfies

\Delta \mathcal{T}(b,a) = \partial_b^2 \mathcal{T}(b,a) + \partial_a^2 \mathcal{T}(b,a) = 0

at every point (b,a) ∈ ℍ.
Fig. 7. The Poisson wavelet in the time and frequency representation. Note the singularity at ω = 0.
Consider now the following initial value problem: given a function s in L^p(ℝ), find a harmonic function \mathcal{T} in the half-plane such that

i) \int_{-\infty}^{+\infty} db\, |\mathcal{T}(b,a)|^p \le c < \infty,
ii) \mathcal{T}(\cdot, a) \to s(\cdot), \quad a \to 0, \quad \text{in } L^p(\mathbb{R}). \qquad (13)
In general a second-order differential operator needs two boundary conditions (one for the value and a second one for its normal derivative). However, it turns out that the finiteness condition i) ensures uniqueness, and we have the following theorem due to Fatou.

Theorem 1. The initial value problem (13) has exactly one solution for 1 ≤ p ≤ ∞. It is given by

\mathcal{T}(b,a) = P_a * s(b), \qquad P_a(t) = \frac{1}{a}\, P\!\left(\frac{t}{a}\right),

with P as given by (12). In addition we have for almost every b ∈ ℝ that \mathcal{T}(b,a) → s(b) as a → 0. Note that the following semi-group property holds:

P_\alpha * P_\beta = P_{\alpha + \beta}.

By direct computation one can verify that \mathcal{T} actually is harmonic in the half-plane. The proof of the uniqueness, however, is a little more complicated. The kernel P_a(b) is called the Poisson kernel, and the function \mathcal{T} in the theorem is called the harmonic continuation of s into the upper half-plane.
The wavelet transform with respect to the Poisson wavelet g is therefore given by

\mathcal{W}_g s(b,a) = -a\, \partial_a\, \mathcal{T}(b,a),

where \mathcal{T} is the harmonic continuation of s into the upper half-plane. In words: use the harmonic extension operator to go into the upper half-plane, then take derivatives. We will use this wavelet in Sect. 12 to analyze potential fields.
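The harmonic continuation and this wavelet transform are convenient to compute in Fourier space, where P̂_a(ω) = e^{−a|ω|} and ĝ(aω) = a|ω| e^{−a|ω|}. The sketch below (numpy assumed; grid sizes are illustrative, and the FFT makes everything periodic) checks the semi-group property and the identity W_g s = −a ∂_a T by finite differences:

```python
import numpy as np

N, dt = 4096, 0.02
t = (np.arange(N) - N // 2) * dt
w = 2 * np.pi * np.fft.fftfreq(N, d=dt)

def poisson_smooth(s, a):
    """Harmonic continuation T(., a) = P_a * s via P_a_hat(w) = e^{-a|w|}."""
    return np.fft.ifft(np.exp(-a * np.abs(w)) * np.fft.fft(s)).real

s = 1.0 / (1.0 + t**2)                   # some test signal

# Semi-group property  P_alpha * P_beta = P_{alpha + beta}
T1 = poisson_smooth(poisson_smooth(s, 0.4), 0.6)
T2 = poisson_smooth(s, 1.0)
assert np.allclose(T1, T2)

# Poisson wavelet g_hat(w) = |w| e^{-|w|}:  W_g s(., a) = -a d/da T(., a)
a, da = 1.0, 1e-5
W_fd = -a * (poisson_smooth(s, a + da) - poisson_smooth(s, a - da)) / (2 * da)
W_g = np.fft.ifft(np.abs(a * w) * np.exp(-np.abs(a * w)) * np.fft.fft(s)).real
assert np.max(np.abs(W_fd - W_g)) < 1e-6
```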
The wavelet of D. Marr. As we shall see now, the speed of approach to equilibrium in a diffusion process is related to a wavelet transform. Let us adopt a physical language at this stage. Then s(x) corresponds to a (one-dimensional) temperature profile. If no heat sources are present, this distribution will evolve according to the heat equation

(\partial_t - \partial_x^2)\, T(x,t) = 0, \qquad t > 0,\ x \in \mathbb{R},

with the boundary condition T(x,0) = s(x). This gives a family of smoothed functions s_t = T(·, t) which will eventually tend to 0, the thermodynamic equilibrium. The speed of heat loss defines a function \mathcal{R} over the half-plane:

\mathcal{R}(x, \sqrt{t}) = t\, \partial_t\, T(x,t).
Fig. 8. (Left) The diffusion process starting from a characteristic function. (Right) The heat-loss rate. Note that the heat loss is most important at the sharp edges of the initial condition.
This function has been used by D. Marr in the problem of edge detection [17]. The idea is that the diffusion process acts most strongly at the sharp corners of the function, and hence the coefficients \mathcal{R}(x, \sqrt{t}) will, in the limit t → 0, be largest near the points x where s(x) has a sharp transition. This is actually true, as can
be seen in Fig. 8. As is well known, the time evolution is given by convolution with the heat kernel:

T(x,t) = K_t * s(x), \qquad K_t(x) = \frac{1}{\sqrt{\pi t}}\, e^{-x^2/t}.

Again a semi-group property holds: K_s * K_t = K_{s+t}.
For the function \mathcal{R} over the half-plane we therefore get (changing back to our standard b, a notation upon setting b = x, a = √t)

\mathcal{R}(b,a) = g_a * s(b), \qquad g(t) = \frac{1}{\sqrt{\pi}} \left( t^2 - \frac{1}{2} \right) e^{-t^2}, \qquad g_a(t) = g(t/a)/a,

which is again a wavelet transform. The first two moments of g vanish, in accordance with the decrease \hat g(\omega) \propto \omega^2 e^{-\omega^2/4} of the Fourier transform at ω → 0. In Fig. 9 we have sketched a draft.
Fig. 9. The wavelet of D. Marr in the time and frequency representation. Note that in the time representation it looks like a smoothed second derivative.
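Marr's edge-detection idea can be illustrated numerically: convolved with a box signal, the dilated wavelet responds most strongly near the jumps. A sketch (numpy assumed; the normalization of g is the one reconstructed above, and the scale and thresholds are illustrative):

```python
import numpy as np

# Marr ("Mexican hat") wavelet; constants differ between references.
g = lambda t: (t**2 - 0.5) * np.exp(-t**2) / np.sqrt(np.pi)

dt = 0.01
t = np.arange(-10, 10, dt)
s = np.where(np.abs(t) < 2, 1.0, 0.0)      # box signal, sharp edges at +-2

a = 0.2
detail = np.convolve(g(t / a) / a, s, mode="same") * dt

# The heat-loss rate is largest near the sharp edges of the signal:
# the 20 largest values of |detail| all sit close to b = +-2.
top = t[np.argsort(np.abs(detail))[-20:]]
assert np.all(np.abs(np.abs(top) - 2.0) < 0.25)
```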
6.1 The progressive wavelets

In this section we shall have a closer look at H²₊(ℝ), the space of progressive functions. Recall that we defined this space as the closed subspace of L²(ℝ) of functions having only positive frequencies. Accordingly, a progressive wavelet g ∈ H²₊(ℝ) is a superposition of progressive exponentials:
g(t) = \frac{1}{2\pi} \int_0^{\infty} d\omega\, \hat g(\omega)\, e^{i\omega t},
where the integral is to be taken in the mean-square limit sense. Since ω is never negative under the integral, we may replace t by z = t + ix with x > 0, and the integral is still convergent; it is easy to see that g(z) is an analytic function over the upper half-plane. In addition, since |e^{−ωx}| ≤ 1 under the integral, the integrand is square integrable, and it follows by Parseval's equation that we have

\int_{-\infty}^{+\infty} dt\, |g(t + ix)|^2 = \frac{1}{2\pi} \int_0^{\infty} d\omega\, |\hat g(\omega)|^2\, e^{-2\omega x} \le c < \infty

for some constant not depending on x > 0. This is actually a complete characterization of H²₊(ℝ) as the boundary values of functions that are analytic in the upper half-plane.

Theorem 2 (Paley and Wiener). Let s be a function in H²₊(ℝ). Then there is a unique function \mathcal{T}, analytic in the upper half-plane, such that

i) \int_{-\infty}^{+\infty} dt\, |\mathcal{T}(t + ix)|^2 \le c < \infty,
ii) \lim_{x \to 0} \int_{-\infty}^{+\infty} dt\, |\mathcal{T}(t + ix) - s(t)|^2 = 0.
Vice versa, for any analytic function \mathcal{T} over the upper half-plane satisfying i) there is a function s ∈ H²₊(ℝ) such that s is the boundary value of \mathcal{T} in the sense of ii). Since \int_0^{\infty} d\omega\, e^{-\omega x + i\omega t} = (x - it)^{-1}, it follows that for s ∈ H²₊(ℝ) the analytic continuation into the upper half-plane is given by the Cauchy kernel:

s(t + ix) = C_x * s, \qquad C_x = x^{-1}\, C(\cdot / x), \qquad C(t) = \frac{1}{2\pi(1 - it)}.

Again a semi-group property holds for the Cauchy kernels:

C_x * C_y = C_{x + y}.
The analogue holds for H²₋(ℝ), where the upper half-plane has to be exchanged with the lower half-plane. A second possibility is to keep the upper half-plane but to exchange analyticity with anti-analyticity. Recall that this means that \overline{\mathcal{T}(\bar z)} is analytic.
6.2 Progressive wavelets with real-valued frequency representation

A special set of progressive wavelets is the set of functions for which the Fourier transform is real-valued. Here the real and the imaginary part are given by

\Re g(t) = \frac{1}{4\pi} \int_{-\infty}^{+\infty} d\omega\, (\hat g(\omega) + \hat g(-\omega))\, e^{i\omega t} = \frac{1}{2\pi} \int_0^{\infty} d\omega\, \hat g(\omega)\, \cos(\omega t),

\Im g(t) = \frac{1}{4\pi i} \int_{-\infty}^{+\infty} d\omega\, (\hat g(\omega) - \hat g(-\omega))\, e^{i\omega t} = \frac{1}{2\pi} \int_0^{\infty} d\omega\, \hat g(\omega)\, \sin(\omega t),
and thus the real part is an even function whereas the imaginary part is an odd function. They are related by a Hilbert transform:

\Re g = -H\, \Im g, \qquad \Im g = H\, \Re g.
This transformation is essentially defined as multiplication by sign ω = ω/|ω| in Fourier space:

H : \hat s(\omega) \mapsto -i\, \mathrm{sign}(\omega)\, \hat s(\omega). \qquad (14)

Since in Fourier space H is multiplication with a bounded function, it follows that H is continuous on L²(ℝ). Now multiplication in Fourier space corresponds to convolution, which in the case of the Hilbert transform reads

H s(t) = \frac{1}{\pi} \lim_{\epsilon \to 0} \int_{-\infty}^{+\infty} du\, s(u)\, \frac{t - u}{(t - u)^2 + \epsilon^2}.
Note that this is formally a convolution with 1/(πt). From the representation of the Hilbert transform in Fourier space (14) we see that the orthogonal projectors Π⁺ : L²(ℝ) → H²₊(ℝ) and Π⁻ : L²(ℝ) → H²₋(ℝ) onto the progressive and regressive functions can be expressed as follows:

\Pi^+ = \frac{1}{2}\, (1 + iH), \qquad \Pi^- = \frac{1}{2}\, (1 - iH).
The Cauchy wavelets. They are defined as

g(t) = (2\pi)^{-1}\, \Gamma(\alpha + 1)\, (1 - it)^{-(1+\alpha)},

where the gamma function is defined for ℜz > 0 by

\Gamma(z) = \int_0^{\infty} dt\, t^{z-1}\, e^{-t}.

These wavelets are highly regular but have an at most polynomial decay at ∞. This is mirrored in the fast decrease of the Fourier transform and a lower regularity of the Fourier transform (at ω = 0):

\hat g(\omega) = \begin{cases} \omega^\alpha\, e^{-\omega} & \text{for } \omega > 0 \\ 0 & \text{otherwise.} \end{cases}
Here we recognize that for α ∈ N these wavelets are nothing but the α-th derivative of the Cauchy kernel. Accordingly we obtain the following interpretation of the wavelet transform of s ∈ L²(ℝ) with respect to these wavelets: take the projection Π⁺s of s into the space of progressive functions H²₊(ℝ). This function can be extended to an analytic function over the upper half-plane. If we call this function \mathcal{T}(z), we have (α ∈ N)

\mathcal{W}_g s(b,a) = a^\alpha\, \mathcal{T}^{(\alpha)}(b + ia), \qquad \mathcal{T}^{(\alpha)}(z) = \partial_z^\alpha\, \mathcal{T}(z).
Thus the wavelet transform with respect to these wavelets is closely connected to the analysis of analytic functions over the half-plane. This observation is due to Paul [20], who has used these wavelets in quantum mechanics. These wavelets are complex-valued, and we may distinguish between the modulus |g(t)| and the phase arg g(t). The phase is only defined up to an integer multiple of 2π. But on every open set on which |g(t)| ≠ 0 it may be chosen to be a continuous function, if g itself is continuous. At every point t where g(t) = 0 the phase is not defined. The phase speed is called the instantaneous frequency:

\Omega(t) = \frac{d}{dt}\, \arg g(t).

This name is justified by the fact that the instantaneous frequency of the pure frequency e^{iωt} is equal to ω. By direct computation we can verify that for the Cauchy wavelets we have

|g(t)| \propto (1 + t^2)^{-(1+\alpha)/2}, \qquad \arg g(t) = (1 + \alpha)\, \arctan t.

Therefore the phase turns (1 + α)/2 times as t goes from −∞ to +∞.
Fig. 10. The Cauchy wavelet for α = 2 in time and frequency representation. The phase turns only finitely many times. Note the singular (= non-smooth) behavior at ω = 0.
The Gaussian "wavelets" of J. Morlet. They were first used in geophysical exploration [10] and are at the origin of the development that wavelet analysis has taken since that time. These wavelets are obtained by shifting a Gaussian function in Fourier space, or, what is the same, by multiplying it with an exponential:

g(t) = e^{i\omega_0 t}\, e^{-t^2/2}.

A more convenient way to parameterize them is to fix the central oscillation at ω₀ = 1, say, and to change the size of the envelope:

g(t) = e^{it}\, e^{-t^2/2\alpha^2}.

Strictly speaking this function is not a progressive wavelet; it is not even a wavelet, because it is not of zero mean. However, the negative frequency components of g are small compared to the progressive component if ω₀ > 0 is large enough or (what is the same) if α is large enough. The phase pulses - up to a correction - at a constant speed:

\frac{d\phi(t)}{dt} = \Omega(t) = \omega_0 + \rho(t),

where ρ(t) is a small correction term.
Fig. 11. (Left) The wavelet of J. Morlet for ω₀ = 5.336... in time representation. Note that the real part (solid line) and the imaginary part (dotted line) oscillate around each other with constant phase speed. The modulus and negative modulus (thick solid line) form an envelope for these oscillations. (Right) The Fourier transform of the wavelet of J. Morlet for ω₀ = 5.336.... Note that it does not vanish at the origin ω = 0, but that it is numerically small.
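The claim that the Morlet wavelet is only "numerically" a wavelet can be quantified. A small sketch (assuming the Fourier transform ĝ(ω) ∝ e^{−(ω−ω₀)²/2}, constants omitted): for ω₀ = 5.336 the value at the origin and the energy fraction sitting on negative frequencies are far below any realistic noise floor:

```python
import numpy as np

w0 = 5.336
w = np.arange(-30.0, 30.0, 0.01)
ghat = np.exp(-(w - w0) ** 2 / 2)      # Fourier transform, up to normalization

# value at the origin: exp(-w0^2/2), tiny but non-zero
print(np.exp(-w0**2 / 2))              # ~ 6.6e-07

# fraction of the energy on negative frequencies
neg = w < 0
ratio = np.sum(ghat[neg] ** 2) / np.sum(ghat ** 2)
assert ratio < 1e-10
```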
7 Some explicit analyzed functions and easy examples

In this section we list some examples that reveal the main features of the wavelet transform. Not all statements will be made mathematically precise; this section serves to enlarge our intuition about wavelet transforms.

7.1 The wavelet transform of pure frequencies
These functions are related to the invariant strips in the half-plane. Consider the function e_{ξ₀}(t) = e^{iξ₀t}, which describes an oscillation with frequency ξ₀. The analyzing wavelet should be absolutely integrable in order to have an everywhere well-defined wavelet transform.
Fig. 12. The modulus of the wavelet transform of the pure oscillation with the help of the wavelet of Haar. Black corresponds to high values whereas white corresponds to small values. Note that the maximum of the amplitude is at scale a = 1/4. However, there are oscillations at large scales due to the poor localization in frequency space of the Haar wavelet. Note that the scale-parameter axis is logarithmic.
Since the e_{ξ₀} are progressive functions, every voice of the wavelet transform will be a progressive function too. The pure oscillations are, up to a phase, invariant under translations:

T_b e_{ξ₀} = e^{−iξ₀b} e_{ξ₀}.

Some of the main features of the wavelet transform of pure oscillations follow from this translation invariance and from the covariance of the wavelet transform. Using the covariance property (9) we may write:

W[g; e_{ξ₀}](b, a) = W[g; e_{ξ₀}(t + b)](0, a).
From the invariance of e_{ξ₀} and the linearity of the wavelet transform it follows that

W[g; e_{ξ₀}](b, a) = e^{iξ₀b} W[g; e_{ξ₀}](0, a).

Therefore every voice (the scale a is fixed to a constant) is a pure oscillation with the same frequency ξ₀ but with an amplitude and a phase that may vary from voice to voice. The whole transform is determined by the wavelet transform along a straight line meeting the border line of the half-plane, e.g. b = 0. Carrying out the integration

W[g; e_{ξ₀}](0, a) = ∫_{−∞}^{+∞} dt (1/a) \overline{g(t/a)} e^{iξ₀t} = ĝ(aξ₀),

we obtain the modulus and the phase of the wavelet transform:

M(b, a) = |ĝ(aξ₀)|,   Φ(b, a) = arg ĝ(aξ₀) + ξ₀ b.   (15)
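The modulus behavior along a = const can be illustrated numerically. The sketch below (direct Riemann sum, Morlet wavelet; the frequencies ω₀ and ξ₀ are illustration values) computes |W[g; e_{ξ₀}](0, a)| over a range of scales and confirms that the modulus peaks at the central voice a = ω₀/ξ₀ of (16):

```python
import numpy as np

w0, xi0 = 5.336, 2.0                 # wavelet center frequency / signal frequency
t = np.linspace(-60, 60, 24001)
dt = t[1] - t[0]

def morlet(u):
    return np.exp(1j * w0 * u) * np.exp(-u**2 / 2)

scales = np.linspace(0.5, 6.0, 221)
mod = [abs(np.sum(np.conj(morlet(t / a)) / a * np.exp(1j * xi0 * t)) * dt)
       for a in scales]

a_star = scales[int(np.argmax(mod))]
assert abs(a_star - w0 / xi0) < 0.05   # modulus is maximal at the central voice
```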
Therefore the instantaneous frequency of every voice is imposed by the analyzed pure oscillation:

∂_b Φ(b, a) = ξ₀.

The modulus of the wavelet transform is constant along every voice (a = const), and its behavior along the line b = 0 is given by the modulus of the Fourier transform of the wavelet. Note that the phase itself is only defined modulo 2π, but if W is differentiable, then the derivatives ∂_b Φ(b, a) and ∂_a Φ(b, a) are uniquely defined wherever M(b, a) ≠ 0. Suppose that |ĝ| has its maximal value at ω = ω₀. Then the modulus of the wavelet transform takes its maximal value for the voice at scale

a = ω₀/ξ₀.   (16)

This might indicate that ω₀/a is somehow related to a frequency. The localization around this central voice depends on the localization of the spectral envelope of ĝ around ω₀. Therefore the analysis of a pure oscillation with the help of the Haar wavelet will give rise to a poor localization in the half-plane (see Fig. 12), whereas e.g. the Morlet wavelet gives rise to a high localization around the central voice (see Fig. 13). Consider the lines of constant phase, that is, the set of points in ℍ where the phase Φ has a given value. If the phase of ĝ is constant, as e.g. when the wavelet has a real-valued Fourier transform, then these lines are the straight lines b = const. For wavelets with non-real Fourier transforms, these lines are curved.

7.2 The real oscillations

If we analyze real-valued functions it may nevertheless be useful to use a progressive wavelet, since then we may distinguish between modulus and phase.
Fig. 13. (Left) The modulus of the wavelet transform of a pure oscillation with the help of the Morlet wavelet. Note that it is well localized around the central voice a = ω₀/ξ₀ = 1. (Right) The phase of the wavelet transform of a pure oscillation with the help of the Bessel wavelet. The phase pulses at a constant speed ξ₀ = 1, independent of the voice.
If on the contrary we use a real-valued wavelet, then the wavelet transform is oscillating itself, making the frequency analysis more difficult. Indeed, consider

s(t) = cos(ξ₀t + φ) = ½ (e^{−iφ} e^{−iξ₀t} + e^{iφ} e^{iξ₀t}).

The frequency content of s is localized at the frequencies −ξ₀ and ξ₀. The associated progressive function is the pure oscillation

Π⁺s(t) = e^{iφ} e^{iξ₀t} / 2.

For progressive wavelets the wavelet transform of s is given by the wavelet transform of the associated progressive function Π⁺s, which was treated in the previous example. A real-valued wavelet instead "sees" both the positive and the negative frequency component. By (8) we obtain

W[g; cos(ξ₀t + φ)](b, a) = ℜ( ĝ(aξ₀) e^{i(ξ₀b + φ)} ).

This implies that every voice is a real oscillation with frequency ξ₀ and a phase and an amplitude that vary from one voice to the other:

W[g; cos(ξ₀t + φ)](b, a) = A(a) cos(ξ₀b + φ(a)),   A(a) = |ĝ(aξ₀)|,   φ(a) = arg ĝ(aξ₀) + φ.

See Fig. 14 for a sketch.
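This can be checked with a concrete real wavelet. A sketch with the Mexican hat g(t) = (1 − t²) e^{−t²/2} (an assumption for illustration, not a wavelet used above; it is convenient because its Fourier transform √(2π) ω² e^{−ω²/2} is real and positive, so that φ(a) = φ):

```python
import numpy as np

xi0, phi = 2.0, 0.7
t = np.linspace(-60, 60, 24001)
dt = t[1] - t[0]
s = np.cos(xi0 * t + phi)

def mexhat(u):                            # real wavelet (1 - u^2) e^{-u^2/2}
    return (1 - u**2) * np.exp(-u**2 / 2)

a = 0.8
b_grid = np.linspace(-3, 3, 61)
W = np.array([np.sum(mexhat((t - b) / a) / a * s) * dt for b in b_grid])

# the voice is A(a) cos(xi0*b + phi) with A(a) = ghat(a*xi0)
A = np.sqrt(2 * np.pi) * (a * xi0) ** 2 * np.exp(-((a * xi0) ** 2) / 2)
assert np.allclose(W, A * np.cos(xi0 * b_grid + phi), atol=1e-6)
```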
Fig. 14. The wavelet transform of a real oscillation with respect to the real wavelet of D. Marr. The wavelet transform is real-valued. It oscillates at the speed of the analyzed function.
7.3 The homogeneous singularities

These functions are related to the dilation-invariant regions in the half-plane. Consider a function that is 0 for negative times t < 0 and that suddenly starts more or less smoothly:

s(t) = |t|₊^α = θ(t) t^α,   α > −1,   (17)

where we have used the notation |t|± = (|t| ± t)/2. The bigger α is, the smoother is the transition at t = 0. In order to analyze these functions, the analyzing wavelet g should be localized such that t^α g is still in L¹(ℝ). For instance

|g(t)| ≤ c (1 + |t|)^{−(1+α+ε)}

with some c > 0 and ε > 0 will be enough. The onsets are, up to a scalar, dilation invariant; that is, they are homogeneous functions of degree α:

s(λt) = λ^α s(t).   (18)
The most general dilation-invariant function with exponent α > −1, however, can be written as

s(t) = c₋ |t|₋^α + c₊ |t|₊^α.

We now want to show how we may recover the parameters α, c± from the wavelet coefficients. As for the pure oscillations, many of the general features of wavelet transforms of homogeneous functions follow from this invariance and the covariance of the wavelet transform. From the homogeneity of s and (9) we have

W[g; s](b, a) = W[g; s(at)](b/a, 1) = a^α W[g; s](b/a, 1).
Therefore the wavelet transform satisfies the same type of homogeneity as the analyzed function: (λ > 0)

W[g; s](λb, λa) = λ^α W[g; s](b, a).   (19)

Because α is real valued, the phase of the wavelet transform remains, according to (19), unchanged along every line b/a = const:

W(b, a) = a^α F(b/a),   F(b) = W(b, 1).

Therefore the lines of constant phase converge towards the origin [9]. The modulus of a zoom instead decreases with a power law that mirrors the type of singularity of the analyzed function. We therefore may recover α by looking at the scaling behavior of |W| along a straight line passing through the origin. Explicit computation yields that with respect to a progressive wavelet we have

W(0, a) = C e^{iφ} a^α.

Here C is a real-valued constant; it encodes the size of c±. The phase φ is real valued, and it encodes the relative size of c±, or, what is the same, the local geometry of the singularity. Finally, the exponent α is the scaling exponent of the singularity. See Fig. 15.
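The recovery of α from the log-log slope is easy to reproduce numerically. A sketch (Cauchy wavelet with β = 3, onset exponent α = 1/2, crude Riemann sums; all normalization constants dropped since only the slope matters):

```python
import numpy as np

alpha, beta = 0.5, 3.0
dt = 0.01
t = np.arange(0.0, 400.0, dt)
s = t ** alpha                             # onset theta(t) t^alpha

scales = np.geomspace(0.5, 2.0, 10)
mod = [abs(np.sum((1 + 1j * t / a) ** (-(1 + beta)) / a * s) * dt)
       for a in scales]

# |W(0, a)| = C a^alpha, so the log-log slope is the singularity exponent
slope = np.polyfit(np.log(scales), np.log(mod), 1)[0]
assert abs(slope - alpha) < 0.05
```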
Fig. 15. A log-log plot reveals the scaling exponent of the singularity. For a given exponent, the phase along a straight line is constant. Its value encodes the local geometry.
Consider as a first explicit example the wavelet transform of a delta function localized at the origin. It is not quite of the kind (17), but it satisfies the invariance (18) with α = −1. By direct computation we find

W[g; δ](b, a) = (1/a) \overline{g(−b/a)},
Fig. 16. (Left) The modulus of the wavelet transform of δ with respect to a Morlet wavelet. For representational reasons we re-scale each voice to take out the trivial factor 1/a; thus actually a|W| is shown. (Right) The associated phase picture. Note that the lines of constant phase converge towards the point where the singularity is located. We have used a cut-off: for small values of the modulus the phase is put to 0.
and thus a behavior of a^{−1} of every zoom. In Fig. 16 we have sketched the wavelet analysis of δ with the help of a Morlet wavelet. To give an explicit example of a true onset we will use the Cauchy wavelet

g_β(t) = (2π)^{−1} Γ(β + 1) (1 − it)^{−(1+β)}

with β > α. As we have seen in Sect. 6.2, for this wavelet we may identify the half-plane with the complex upper half-plane by setting z = b + ia. Indeed, to give a different argument, the complex conjugated, dilated and translated wavelet may be written as

(1/a) \overline{g_β}((t − b)/a) = (2π)^{−1} i^{1+β} Γ(1 + β) a^β (b + ia − t)^{−1−β}.

Therefore the wavelet transform with respect to g_β is, up to the pre-factor a^β, an analytic function in z = b + ia. In the case of the onsets we obtain by direct computation

W[g_β; |t|₊^α](b, a) = c a^β (−iz)^{α−β},   c = (2π)^{−1} e^{−iπ(1+α)/2} Γ(1 + α) Γ(β − α).

7.4 The wavelet analysis of a hyperbolic chirp

Consider the function

s(t) = (−it)^{iα}.   (20)
Fig. 17. The wavelet analysis of the onset √t with the help of some Cauchy wavelet. (Left) The re-scaled modulus √a |W| is shown. Note that the a axis is logarithmic. (Right) The value of the phase of the zoom b = 0 is −3π/4, showing a specific local topology of the singularity. The straight lines of constant phase appear curved due to the logarithmic scale in a.
This is a progressive function (at least in the sense of distributions). In addition this function satisfies the discrete scaling invariance

s(γt) = s(t),   γ = e^{2π/α},

which is mirrored in its wavelet transform via

W[g; s](γb, γa) = W[g; s](b, a).

The modulus of s is constant, with the exception of t = 0 where it jumps,

M(t) = cosh(πα/2) + sign(t) sinh(πα/2),

and its phase speed decreases like α/t as |t| → ∞,

Ω(t) = α/t,

which is the reason for calling s a "hyperbolic chirp".
which is the reason for calling s a "hyperbolic chirp". The wavelet transform of the hyperbolic chirp with respect to the Cauchy wavelet g~ is obtained using the analytic continuation in a of (20)
w i g . ; (-~t)~](b, a) = ca" (-iz) ~ - ' , c = (iTr)-1 sinh(c~Tr) F(1 + is) F ( . - ic~). For the modulus and the phase we obtain explicitly
M(b, a) -- lcl a~ (b2 + a2) -~'/~ e -'~a~¢~n'~/', ~(b, a) = arg c_,~ arctan (a/b) + ol log V/~ +
as
The lines of constant phase are now logarithmic spirals turning around the point (0, 0). However, only the part of the spirals that lies in the upper half-plane is visible in the wavelet transform. These spirals are the solutions of

α log |z| − β arg z = const   ⟺   |z| = C e^{(β/α) arg z}.

Thus the density of the spirals is determined by the quotient β/α.
Fig. 18. (Left) The modulus of the wavelet analysis of a hyperbolic chirp with the help of a Cauchy wavelet. Note that the modulus is localized around the straight line a = b that corresponds to the instantaneous frequency of the chirp. (Right) The phase picture of the wavelet transform of a hyperbolic chirp. Note that the lines of constant phase are logarithmic spirals turning around the point (0, 0).
Let us now locate the points where the modulus of a voice is maximal; that is, we look for the points in the half-plane where ∂_b M(b, a) = 0 and ∂_b² M(b, a) < 0. By direct computation we find that these points are located on the straight line

a = (β/α) b.   (21)

The central frequency of the analyzing wavelet g_β is given by ω₀ = β. If we identify as in (16) the quantity β/a with a frequency, we see from (21) that the modulus of the wavelet transform is located around the points in the upper half-plane where the time corresponds to the position, b ↔ t, and the inverse of the scale corresponds to the instantaneous frequency, β/a ↔ Ω(t), of the analyzed function. The phase speed of every voice on the line (21) corresponds to the instantaneous frequency of the analyzed function if we identify again the position parameter b with the time t:

∂_b Φ(b, a) = Ω(b) = α/b.

All this can be observed in Fig. 18.
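The ridge (21) can be confirmed from the closed form of the modulus: for fixed a, only the factor (b² + a²)^{−β/2} e^{−α arctan(a/b)} depends on b, and its maximum sits at b = (α/β) a. A small sketch (α = 4, β = 8 are arbitrary illustration values):

```python
import numpy as np

alpha, beta = 4.0, 8.0
b = np.linspace(0.01, 20.0, 8000)
for a in (1.0, 2.0, 4.0):
    # b-dependent part of the modulus M(b, a) of the chirp transform
    M = (b**2 + a**2) ** (-beta / 2) * np.exp(-alpha * np.arctan(a / b))
    b_star = b[int(np.argmax(M))]
    assert abs(b_star - (alpha / beta) * a) < 0.01   # ridge a = (beta/alpha) b
```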
Fig. 19. The modulus (left) and phase (right) of the wavelet analysis of the superposition of two pure frequencies with the help of the Morlet wavelet with internal frequency ω₀ = 6. The two components are clearly visible.
7.5 Interactions

In the last examples we want to combine the previous elementary examples to obtain more complex functions. Clearly the superposition principle tells us that the wavelet transform of a superposition of functions is the superposition of the respective transforms. However, this does not apply to the modulus and the phase pictures, because they are obtained by non-linear operations on the transform. Consider e.g. the superposition of two pure frequencies at frequencies ξ₀ and ξ₁. For the sake of simplicity we suppose that ĝ is real with a single maximum at ω₀ and monotonic to both sides. We then have

Φ(b, a) = arg( ĝ(aξ₀) e^{iξ₀b} + ĝ(aξ₁) e^{iξ₁b} ).

Thus for voices where |ĝ(aξ₀)| ≫ |ĝ(aξ₁)| the phase is determined by ξ₀, whereas in the inverse case the phase is essentially determined by ξ₁. Thus in the phase picture we observe a transition from the phase speed ξ₀ to ξ₁. Compare Fig. 19. For the modulus the same reasoning shows that it consists essentially of two strips if |ĝ(ω₀ξ₀/ξ₁)| ≪ 1 and |ĝ(ω₀ξ₁/ξ₀)| ≪ 1. Thus it depends on the frequency ratio (and not on the difference) whether or not two frequencies may be well separated through the wavelet transform. Note that music, too, is based on frequency ratios and not on frequency differences. An octave for instance is a ratio of 1 : 2. See Fig. 20 for an illustration.
Fig. 20. The same as Fig. 19, but the two frequencies have a frequency ratio closer to 1. Note that in the phase picture (right) we can still distinguish the two frequencies.

7.6 Two deltas
Now consider two delta functions located at t₀ and t₁. At large scales the wavelet will not be able to distinguish the two deltas, and modulus and phase are similar to the associated pictures of one single delta localized at the midpoint. At small scales, however, the wavelet will be localized enough to distinguish both functions. At intermediate scales a strong interaction can be observed. (See Fig. 21.)

7.7 Delta and pure frequency
To end this section, consider a function that is a superposition of a delta function in time and a delta function in frequency (a pure oscillation). Again an interaction zone can be observed, outside of which the features of the delta function and the pure frequency are qualitatively undisturbed (see Fig. 22).

8 The wavelet synthesis

Consider a function r over the parameter space ℍ. Given a wavelet h we can consider the following operator:

M[h; r](x) = ∫₀^∞ (da/a) ∫_{−∞}^{+∞} db r(b, a) (1/a) h((x − b)/a).

It consists in assembling the dilated and translated wavelets, each weighted with the (possibly complex) weight r(b, a). Other notations are M[h; r](x) = M_h r(x).
Fig. 21. Two delta functions at −1 and +1 analyzed by some Morlet wavelet. The modulus picture has been re-scaled again by the trivial factor a^{−1}. Note that at small scales the two delta functions are separated. Again the lines of constant phase converge towards the singularities.

Fig. 22. Superposition of δ and a pure frequency analyzed by some Morlet wavelet. The modulus picture has been re-scaled again by the trivial factor a^{−1}.
8.1 Some easy properties

Linearity. We have linearity in r and h:

M[h; c₁r₁ + c₂r₂] = c₁ M[h; r₁] + c₂ M[h; r₂],
M[d₁h₁ + d₂h₂; r] = d₁ M[h₁; r] + d₂ M[h₂; r].
Covariance. The following covariance with respect to dilations and translations over the half-plane holds:

r(b, a) ↦ r(b − β, a)  ⟹  M_h r(x) ↦ M_h r(x − β),
r(b, a) ↦ r(b/λ, a/λ)/λ  ⟹  M_h r(x) ↦ M_h r(x/λ)/λ.

8.2 Formal relations with W

Adjoint. The wavelet synthesis with respect to g is the adjoint of the wavelet analysis with respect to g:

∫_ℍ (db da/a) \overline{r(b, a)} W_g s(b, a) = ∫_ℝ dt \overline{M_g r(t)} s(t).

Inversion. Consider the combination wavelet analysis followed by wavelet synthesis, M_h W_g. By covariance this operator commutes with translations and dilations. Therefore it is given by a Fourier multiplier, which is homogeneous of degree 0:

M_h W_g = F^{−1} m_{g,h} F.

The right-hand side means: take the Fourier transform, multiply it with the function m_{g,h}, and take the inverse Fourier transform again. The transfer function m_{g,h} takes only two values:

m_{g,h}(ω) = ∫₀^∞ (da/a) \overline{ĝ(aω)} ĥ(aω)   for ω > 0,
m_{g,h}(ω) = ∫₀^∞ (da/a) \overline{ĝ(−aω)} ĥ(−aω)   for ω < 0.

Indeed, that m_{g,h} is of this form follows by covariance: the combination M_h W_g is an operator that commutes with both dilations and translations, hence it must be a 0-degree homogeneous Fourier multiplier. The specific form can be found by applying M_h W_g to the δ distribution.
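That the multiplier is constant on each half-axis can be seen numerically: substituting u = aω in the integral removes ω entirely. A sketch for g = h the Mexican hat (assumed Fourier side ĝ(ω) = ω² e^{−ω²/2}, normalization omitted), integrating over log a so that da/a becomes a flat measure:

```python
import numpy as np

ghat = lambda w: w**2 * np.exp(-w**2 / 2)    # Mexican hat on the Fourier side

loga = np.linspace(-10.0, 5.0, 150001)       # da/a = d(log a)
a = np.exp(loga)
h = loga[1] - loga[0]

m = [np.sum(ghat(a * w) ** 2) * h for w in (0.5, 1.0, 3.0)]

# m_{g,g}(w) is the same for every w > 0: int_0^inf u^3 e^{-u^2} du = 1/2
assert np.allclose(m, 0.5, atol=1e-6)
```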
Kernel. Consider now the operation wavelet synthesis followed by wavelet analysis, W_g M_h. It is given by a non-commutative convolution over the half-plane:

W_g M_h r = Π_{g,h} ∗ r,

where the convolution product reads

Π ∗ r(b, a) = ∫_ℍ (db′ da′/a′) Π((b − b′)/a′, a/a′) r(b′, a′),

and the kernel is given by

Π_{g,h}(b, a) = W_g h(b, a).

All formulas can be proven by simple exchange of integrals.
8.3 The reconstruction formula

A wavelet is called admissible if it satisfies

∫₀^∞ (dω/ω) |ĝ(±ω)|² < ∞.

All wavelets we have encountered, except the Morlet wavelets, are admissible. The Morlet wavelets are numerically admissible. For admissible wavelets we can define two constants

c_g^± = ∫₀^∞ (dω/ω) |ĝ(±ω)|².

For real wavelets we have c_g^+ = c_g^− thanks to Hermitian symmetry. For progressive wavelets we have c_g^− = 0, since they vanish on the negative frequencies. For regressive wavelets we have c_g^+ = 0. If h is another admissible wavelet, we may consider the constants

c_{g,h}^± = ∫₀^∞ (dω/ω) \overline{ĝ(±ω)} ĥ(±ω).
We then have

M_h W_g s = c_{g,h}^+ Π⁺s + c_{g,h}^− Π⁻s = ½ (c_{g,h}^+ + c_{g,h}^−) s + (i/2) (c_{g,h}^+ − c_{g,h}^−) Hs,

where Π± are the projectors on the positive / negative frequencies, respectively, and H denotes the Hilbert transform. Some particular cases are:

• For g = h, real valued, admissible, we have

M_g W_g s = c_g s.

Thus s may be recovered from its wavelet transform via a wavelet synthesis.

• For s progressive and g, h arbitrary we have

M_h W_g s = c_{g,h}^+ s.

• For s regressive and g, h arbitrary we have

M_h W_g s = c_{g,h}^− s.

• For s arbitrary and g or h progressive we have

M_h W_g s = c_{g,h}^+ Π⁺s.

Thus we extract the positive frequencies of the signal.

• For s arbitrary and g or h regressive we have

M_h W_g s = c_{g,h}^− Π⁻s.
Thus we extract the negative frequencies of the signal.

• For s real valued and g or h progressive we obtain the analytic signal associated with s. In particular we have

2ℜ M_h W_g s = c_{g,h}^+ s,   2ℑ M_h W_g s = c_{g,h}^+ Hs.

Thus wavelet analysis followed by wavelet synthesis allows us to obtain the Hilbert transform of the signal.

• For s real valued and g or h regressive we obtain the conjugate of the analytic signal associated with s. In particular we have

2ℜ M_h W_g s = c_{g,h}^− s,   2ℑ M_h W_g s = −c_{g,h}^− Hs.

8.4 The energy conservation

For any admissible wavelet we have that the energy of the transform stays bounded if the signal has bounded energy:

∫_ℍ (db da/a) |W_g s(b, a)|² ≤ (c_g^+ + c_g^−) ∫_ℝ dt |s(t)|².

We may even have equality in the following form:

∫_ℍ (db da/a) |W_g s(b, a)|² = C_g ∫_ℝ dt |s(t)|².

This holds for s progressive and g arbitrary, admissible, with C_g = c_g^+; for s regressive and g arbitrary, admissible, with C_g = c_g^−; or for s arbitrary and g real valued with C_g = c_g^+ + c_g^− = 2c_g^+. In all other cases we have in general only the estimate above. The proof is based in all cases on the fact that the adjoint of W_g is, up to a factor, also its inverse:

⟨W_g s | W_g s⟩_ℍ = ⟨s | M_g W_g s⟩_ℝ = C_g ⟨s | s⟩_ℝ.

8.5 The space of wavelet transforms and reproducing kernels
Not all functions over the half-plane are the wavelet transform of some function on the real line. A function r is the wavelet transform with respect to an admissible wavelet g of some progressive function s if the coefficients r satisfy

r(b, a) = (1/c_g) W_g M_g r(b, a).

Indeed, if r = W_g s, then s = (1/c_g) M_g r, and thus the above equation. Now this is a non-commutative convolution with a kernel:

r(b, a) = Π ∗ r(b, a),   Π = (1/c_g) W_g g.
More explicitly it reads

r(b, a) = (1/c_g) ∫_ℍ (db′ da′/a′) W_g g((b − b′)/a′, a/a′) r(b′, a′).

This equation is called the equation of the reproducing kernel. Thus there are correlations, with a correlation length of the order of magnitude a, in the wavelet coefficients.
Fig. 23. The wavelet transform of the Cauchy wavelet with respect to itself: the reproducing kernel.
The reproducing kernel can formally be interpreted as the minimal internal correlation of the wavelet coefficients. We only do formal computations. Consider a random function s over the real line. We denote by E(s(t)) the expectation of the random variable s(t). The correlation of two time points t, u is the mean of the random function \bar{s}(t) s(u):

Φ(t, u) = E( \bar{s}(t) s(u) ).

A white noise s is a "function" that is of zero mean and for which different time points are completely un-correlated:

E(s(t)) = 0,   E( \bar{s}(t) s(u) ) = δ(t − u).

Consider the wavelet transform of such a random function. It is itself a random function, but this time over the half-plane. The correlation between two points in the half-plane is given by

Φ(b, a; b′, a′) = E( \overline{W_g s(b, a)} W_g s(b′, a′) ).
We exchange formally the integration with the mean over all realizations and obtain (g_{b,a} = T_b D_a g)

Φ(b, a; b′, a′) = ∫∫ dt du g_{b,a}(t) \overline{g_{b′,a′}(u)} E( \bar{s}(t) s(u) ) = ∫ dt g_{b,a}(t) \overline{g_{b′,a′}(t)}.

In the last equation we have used the correlation function of the white noise. Integrating over the delta function we see that the correlation function is given by the reproducing kernel:

Φ(b, a; b′, a′) = c_g Π((b − b′)/a′, a/a′).

Therefore even if the analyzed function is completely un-correlated, the wavelet coefficients show a correlation because of the reproducing kernel property. The correlation depends on (b − b′)/a′ and a/a′. Look at Fig. 24, where the scale-dependency of the correlations is visible. Note however that we did not quite explain this phenomenon. We only showed that in the mean over many realizations the correlation is given by the reproducing kernel. Nothing was said about a single realization.
Fig. 24. The modulus (left) and phase (right) of the wavelet transform of a white noise with respect to the Morlet wavelet. The regions of correlation change proportionally with the scale.
9 Extensions to higher dimensions

In higher dimensions there are two distinct approaches. The first is to use dilations and translations as before to generate a family of analyzing wavelets. That is, we define for functions in L²(ℝⁿ)

W[g; s](b, a) = ∫_{ℝⁿ} dx (1/aⁿ) \overline{g((x − b)/a)} s(x),   b ∈ ℝⁿ, a > 0.

The parameter space of wavelet analysis is now the upper half-space

ℍⁿ = {(b, a) : b ∈ ℝⁿ, a > 0}.

Typically g has a Fourier transform that is localized in some corona. Therefore we may identify a ↔ 1/|ω|, the inverse of the size of the spatial frequency.
Note that a is degenerate in the sense that it contains only information about the size of the spatial frequency but no directional information. The wavelet synthesis now reads

M[h; r](x) = ∫₀^∞ (da/a) ∫_{ℝⁿ} db (1/aⁿ) h((x − b)/a) r(b, a).

We have the following relations.
Adjoint.

∫_{ℍⁿ} (db da/a) \overline{r(b, a)} W_g s(b, a) = ∫_{ℝⁿ} dx \overline{M_g r(x)} s(x).

Inversion.

M_h W_g = F^{−1} m_{g,h} F,   m_{g,h}(ω) = ∫₀^∞ (da/a) \overline{ĝ(aω)} ĥ(aω).

In case m_{g,h}(ω) = const we say that (g, h) is an analysis-reconstruction pair.
Energy conservation. If (g, g) is a reconstruction pair, then

∫_{ℍⁿ} (db da/a) |W_g s(b, a)|² = c_g ∫_{ℝⁿ} dx |s(x)|²,   c_g = ∫₀^∞ (da/a) |ĝ(aω)|².
Kernels.

W_g M_h r = Π ∗ r,   Π = W_g h,

where

Π ∗ r(b, a) = ∫_{ℍⁿ} (db′ da′/a′) (1/a′ⁿ) Π((b − b′)/a′, a/a′) r(b′, a′).

Reproducing kernel. In particular, for (g, h) an analysis-reconstruction pair, we have that r = W_g s for some s iff

r = Π_{g,h} ∗ r,   Π_{g,h} = (1/c_{g,h}) W_g h,   c_{g,h} = ∫₀^∞ (da/a) \overline{ĝ(aω)} ĥ(aω).

Cross kernel. If (g, h) is an analysis-reconstruction pair, then for any other wavelet h′

W_{h′} s = Π_{g→h′} ∗ W_g s,   Π_{g→h′} = (1/c_{g,h}) W_{h′} h.

9.1 The rotation group
There is a second approach to higher dimensions. Let SO(n) be the group of rotations of Euclidean n-space. We denote by θx the rotated vector x ∈ ℝⁿ for the rotation θ ∈ SO(n). We then construct a wavelet analysis based on dilations, rotations, and translations:

W_g s(b, a, θ) = ∫_{ℝⁿ} dx (1/aⁿ) \overline{g(θ^{−1}((x − b)/a))} s(x).

Now the wavelet coefficients are functions over the space Ω = ℍⁿ × SO(n). The rotation part allows us to obtain directional information too. Now a wavelet is admissible if

0 < c_g = ∫_{ℝⁿ} (dξ/|ξ|ⁿ) |ĝ(ξ)|² < ∞.

For two wavelets we define

c_{g,h} = ∫_{ℝⁿ} (dξ/|ξ|ⁿ) \overline{ĝ(ξ)} ĥ(ξ).

The wavelet synthesis reads

M_h r(x) = ∫_Ω (db da/a) dμ(θ) r(b, a, θ) (1/aⁿ) h(θ^{−1}((x − b)/a)).
In that case we always have energy conservation, and essentially every wavelet is a reconstruction wavelet, since thanks to the rotations we cover the frequencies evenly, without favoring any direction:

(1/c_g) ∫_Ω (db da/a) dμ(θ) |W_g s(b, a, θ)|² = ∫_{ℝⁿ} dx |s(x)|².

The other formulas hold in the analogous way. The possibility of using a different wavelet for the reconstruction than for the analysis has been used to invert the Radon transform, since it was interpreted as a wavelet transform with rotations with respect to a δ-shaped wavelet [14]. For more general groups and wavelets on them see [11], [2] and [13].

10 Partial reconstructions from partial information

10.1 Interpolating the wavelet coefficients
The reproducing kernel appears also in the following interpolation problem. Suppose we are given N pairwise distinct points in the half-plane, (b₀, a₀), ..., (b_{N−1}, a_{N−1}), and N complex numbers γ₀, ..., γ_{N−1}. Can we find a function s over the real line such that its wavelet transform with respect to a given wavelet g takes on the values γ_n at the points (b_n, a_n), i.e.

W_g s(b_n, a_n) = ⟨g_{b_n,a_n} | s⟩ = γ_n,   n = 0, ..., N − 1?

Equivalently, we may ask whether there is a function over the half-plane that satisfies

i) T ∈ image W_g ⟺ T = P_g ∗ T,   (22)
ii) T(b_n, a_n) = γ_n.   (23)

If we can answer this question we know how to interpolate wavelet coefficients without leaving the image space of the wavelet transform. Clearly we have to suppose that the functions g_{b_n,a_n} are linearly independent. This is a very weak assumption, however. Unfortunately, there is then an infinity of ways to do this interpolation. Indeed, the space of square integrable functions over the half-plane satisfying the reproducing kernel equation (= that are in the image of the wavelet transform) is an infinite-dimensional separable Hilbert space, and the N linear equations are therefore satisfied on an infinite-dimensional affine subspace. We therefore must choose a particular solution. A canonical choice might be the function in the image space that has minimal energy, as expressed by

iii) for all T′ ∈ L²(ℍ) satisfying (22) and (23) we have ‖T‖₂ ≤ ‖T′‖₂.

Conditions (i), (ii) and (iii) define a unique function that can be constructed explicitly, as is shown by the following theorem, which in the context of wavelet analysis is due to [10].
Theorem 3. Let g ∈ H²₊(ℝ) be admissible and let P_g = c_g^{−1} W_g g be the associated reproducing kernel. Given N points (b_n, a_n) ∈ ℍ, n = 0, ..., N − 1, let M_{n,m} be the N × N matrix defined by

M_{n,m} = ⟨g_{b_n,a_n} | g_{b_m,a_m}⟩ = (1/a_m) P_g((b_n − b_m)/a_m, a_n/a_m).

Suppose further that det M ≠ 0. Let γ₀, ..., γ_{N−1} ∈ ℂ be given. Then the interpolation problem (i), (ii) and (iii) has a unique solution T. It is given by

T(b, a) = Σ_{n=0}^{N−1} β_n (1/a_n) P_g((b − b_n)/a_n, a/a_n),

with β = (β₀, ..., β_{N−1}) given by

β = M^{−1} γ   ⟺   γ_n = Σ_{m=0}^{N−1} M_{n,m} β_m.
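The linear algebra of Theorem 3 can be sketched directly: build the Gram matrix of a few translated and dilated Morlet wavelets by numerical integration, solve for β, and check that the interpolant takes the prescribed values at the nodes. The points and values below are arbitrary illustration data:

```python
import numpy as np

w0 = 5.336
t = np.linspace(-40.0, 40.0, 16001)
dt = t[1] - t[0]

def g_ba(b, a):                          # translated and dilated Morlet wavelet
    u = (t - b) / a
    return np.exp(1j * w0 * u) * np.exp(-u**2 / 2) / a

pts = [(-1.0, 0.8), (0.0, 1.0), (1.5, 1.3), (2.0, 0.6)]
gamma = np.array([1.0, -0.5, 0.3, 2.0], dtype=complex)

G = np.array([g_ba(b, a) for b, a in pts])
M = np.conj(G) @ G.T * dt                # Gram matrix M[n,m] = <g_n | g_m>

beta = np.linalg.solve(M, gamma)

# T(b_n, a_n) = sum_m M[n,m] beta_m reproduces the prescribed values
assert np.allclose(M @ beta, gamma)
assert np.allclose(M, np.conj(M.T))      # M is Hermitian
```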
10.2 Reconstruction over voices

Consider now the set of voices at scales a = αⁿ, n ∈ ℤ, with α > 1. These scales are equidistant on a logarithmic scale log a. Now look at the partial reconstruction s# along these voices:

s#(t) = Σ_{n=−∞}^{+∞} ∫_{−∞}^{+∞} db W_g s(b, αⁿ) h_{b,αⁿ}(t).   (24)
By mere superposition of the previous results we see that s# is related to s by a "Fourier multiplier":

ŝ#(ω) = τ(ω) ŝ(ω),   τ(ω) = Σ_{n=−∞}^{+∞} \overline{ĝ(αⁿω)} ĥ(αⁿω).   (25)

However, this filter τ is not a smooth function, since its Fourier transform is not decaying at infinity. As one can see in Fig. 25, τ typically is a bounded function that oscillates around some mean value. If we define the operator B : s ↦ s#, ŝ#(ω) = τ(ω) ŝ(ω), with h = g, then it is immediately verified that B can only be bounded if g is admissible. Indeed, τ ∈ L^∞(ℝ) implies that

Σ_{j∈ℤ} |ĝ(αʲω)|² ≤ M < ∞ for almost every ω.

Therefore we can write
∫₀^∞ (dω/ω) |ĝ(ω)|² = Σ_{j∈ℤ} ∫_{αʲ}^{α^{j+1}} (dω/ω) |ĝ(ω)|² = ∫₁^α (dω/ω) Σ_{j∈ℤ} |ĝ(αʲω)|² ≤ M log α < ∞,
Fig. 25. The Fourier multiplier (25) of the partial reconstruction s# (24) over infinitely many voices.
showing that g has to be admissible. Before exploiting this relationship we make some elementary considerations in an H²₊(ℝ) context. First a formal consideration. Note that the sum (24) may be considered as a Riemann sum; namely, for ρ regular enough,

∫₀^∞ (da/a) ρ(a) = lim_{α→1} (α − 1) Σ_{n=−∞}^{+∞} ρ(αⁿ).

Therefore if h is a reconstruction wavelet, then (α − 1) τ(ω) → c_{g,h} as α → 1, and therefore (α − 1) ŝ# → c_{g,h} ŝ, and a complete reconstruction of s is obtained. But even for α different from 1, s# may be close to s, provided the multiplier is close to a constant. The next theorem gives an estimate of the quality of the partial reconstruction.

Theorem 4. Let g, h ∈ L¹(ℝ) ∩ H²₊(ℝ) satisfy, for almost every ω > 0,

Σ_{j∈ℤ} |ĝ(αʲω)|² ≤ M < ∞,   Σ_{j∈ℤ} |ĥ(αʲω)|² ≤ M < ∞,

and let s ∈ H²₊(ℝ). Suppose that there is a constant ε, 0 < ε < 1, and another constant c ∈ ℂ such that

ess sup_{ω>0} | c Σ_{n∈ℤ} ĥ(αⁿω) \overline{ĝ(αⁿω)} − 1 | ≤ ε.

Then the following limit exists in H²₊(ℝ):

s# = lim_{N→∞} s_N = lim_{N→∞} c Σ_{n=−N}^{+N} ∫_{−∞}^{+∞} db W_g s(b, αⁿ) h_{b,αⁿ},   (26)
and we have the estimate

‖s − s#‖₂ ≤ ε ‖s‖₂.
For h = g the condition of the theorem is satisfied iff

0 < A = ess inf_{ω>0} Σ_{n∈ℤ} |ĝ(αⁿω)|²,   B = ess sup_{ω>0} Σ_{n∈ℤ} |ĝ(αⁿω)|² < ∞,

in which case we can set

c = 2/(B + A),   ε = (B − A)/(B + A).   (27)
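The constants A, B and hence c and ε of (27) are easy to compute for a concrete wavelet. A sketch for the Morlet wavelet on the Fourier side with voices at scales αⁿ, α = 2^{1/4} (four voices per octave, an arbitrary choice); since the sum Σ_n |ĝ(αⁿω)|² is α-periodic in log ω, one period suffices:

```python
import numpy as np

w0, alpha = 5.336, 2 ** 0.25
ghat = lambda w: np.exp(-(w - w0) ** 2 / 2)

w = np.linspace(1.0, alpha, 2001)            # one period in log omega
S = sum(ghat(alpha ** n * w) ** 2 for n in range(-60, 61))

A, B = S.min(), S.max()
c, eps = 2 / (A + B), (B - A) / (A + B)
assert A > 0 and 0 < eps < 1                 # partial reconstruction works
```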
Therefore the smaller ε is, or the closer the constants A and B are to each other, the better the reconstruction is, at least in the mean square sense.

10.3 An iteration procedure
Until now we have only obtained a partial reconstruction s# that approximates s. As we shall see now, this is only the first step in an approximation scheme that allows us to reconstruct s completely from the knowledge of the voices at scales a = αⁿ. We state it in the following abstract form.

Theorem 5. Let K : V → V be a bounded operator acting in a Banach space V that satisfies

‖K − 1‖_{B(V)} ≤ ε < 1.

Then K^{−1} exists and is in B(V), the set of bounded linear operators acting in V. Let s ∈ V be given. If we set

s_{n+1} = s_n + r_{n+1},   r_{n+1} = r_n − K r_n,

with s₀ = r₀ = s, then s_n → K^{−1} s as n → ∞, and the difference may be estimated as

‖K^{−1} s − s_n‖_V ≤ (ε^{n+1}/(1 − ε)) ‖s‖_V.
With the help of this theorem the complete reconstruction of s from its voices Wgs(., aJ) can be obtained by applying it to the partial reconstruction operator ee
B:s -, e with c as given by (26).
r+cc
J-oo eb w,8(b,
--
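In finite dimensions the scheme of Theorem 5 is just the Neumann series for $K^{-1}$. A self-contained sketch, with a random well-conditioned matrix standing in for the partial reconstruction operator (an illustrative assumption):

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for the partial reconstruction operator: K = 1 + E with ||E|| < 1.
N = 50
E = rng.standard_normal((N, N))
E *= 0.4 / np.linalg.norm(E, 2)          # enforce ||K - 1|| = 0.4 < 1
K = np.eye(N) + E

s = rng.standard_normal(N)

# Iteration of Theorem 5: s_{n+1} = s_n + (s - K s_n), s_0 = s;  s_n -> K^{-1} s.
s_n = s.copy()
for _ in range(60):
    s_n = s_n + (s - K @ s_n)

err = np.linalg.norm(s_n - np.linalg.solve(K, s))
print(err)
```

The residual decays geometrically like $\epsilon^n$, as the error bound of the theorem predicts.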
Matthias Holschneider
10.4 Sampling the wavelet coefficients

Because of the reproducing kernel, the information contained in the wavelet transform is highly redundant. Indeed, suppose we take as wavelet the band-limited, progressive wavelet whose Fourier transform is given by the characteristic function of the interval $[\pi, 2\pi]$. Then it is enough to know the voices $\mathcal{W}_g s(b, a = 2^j)$, $j \in \mathbb{Z}$, to know everything. Moreover, each voice is band-limited and therefore may be sampled. The sampling rate has to be adapted to the scale. And thus it is enough to know the wavelet transform on a grid of the form

$$\Lambda = \{(b, a) = (k 2^j, 2^j) : k, j \in \mathbb{Z}\}.$$
Fig. 26. The dyadic grid.
This is the dyadic grid. One can show (see e.g. [6]) that for reasonably smooth wavelets there is a grid of the form $\Lambda_{\delta,\gamma} = \{(\delta k \gamma^j, \gamma^j) : k, j \in \mathbb{Z}\}$, with $\delta$ small enough and $\gamma$ close enough to 1, such that the family of functions $g_{b,a}$, $(b,a) \in \Lambda_{\delta,\gamma}$, is complete in the sense that it is possible to reconstruct $s$ from the knowledge of the values

$$\langle g_{b,a} \mid s \rangle,\qquad (b, a) \in \Lambda_{\delta,\gamma},$$

in a stable way. In particular this shows that the wavelet transform may be sampled on such a grid without loss of information.
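The bookkeeping of dyadic sampling is easily sketched: on the voice $a = 2^j$ one keeps only every $2^j$-th sample of $b$. (The Morlet-type analytic wavelet below is an illustrative assumption; the snippet shows the sampling pattern only, not a stable reconstruction.)

```python
import numpy as np

def cwt_voice(s, fs, a, omega0=6.0):
    """One voice W_g s(., a) of a Morlet-type analytic wavelet, via FFT."""
    n = len(s)
    omega = 2 * np.pi * np.fft.fftfreq(n, d=1.0 / fs)
    g_hat = np.where(omega > 0, np.exp(-0.5 * (a * omega - omega0) ** 2), 0.0)
    return np.fft.ifft(np.fft.fft(s) * np.conj(g_hat))

fs, n = 256.0, 1024
t = np.arange(n) / fs
s = np.cos(2 * np.pi * 8.0 * t)

# Dyadic sampling: on voice a = 2**j keep only b = k * 2**j (in samples).
grid = {}
for j in range(0, 6):
    voice = cwt_voice(s, fs, a=2.0 ** j * 8 / fs)
    grid[j] = voice[:: 2 ** j]            # position subsampled by 2**j

# coarser voices carry fewer samples, as the text prescribes
print([len(v) for v in grid.values()])
```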
11 Processing of singularities with wavelets
We have seen before that homogeneous functions give rise to cone-like structures in wavelet space. The exact statements are as follows. The most general signal having a singularity of order $\alpha$ at $t_0$ can be written as

$$s(t) = c_-\,|t - t_0|_-^{\alpha} + c_+\,|t - t_0|_+^{\alpha} + r(t), \tag{28}$$

where $r(t)$ is of higher regularity; that is, $r \in \Lambda_\beta$, $\beta > \alpha$. Recall that the Hölder spaces $\Lambda_\beta$ are defined as the set of functions that satisfy

$$|r(t) - r(u)| \le c\,|t - u|^{\beta},\qquad 0 < \beta \le 1.$$

For $1 < \beta \le 2$ the above definition would imply $r = \mathrm{const}$. Therefore we require $\partial r$ to satisfy the above condition, and so on for larger values of $\beta$. Thus $\beta$ is a fractional degree of smoothness. Functions in $\Lambda_\beta$ are $[\beta]$-times differentiable with a bounded derivative. The wavelets, however, will detect singularities of any order, provided they have enough vanishing moments. By linearity, the wavelet transform of $s$ in (28) is given by the superposition of the transform of the homogeneous part and of the smoother remainder. The first part has been discussed in Sect. 7.3. The second part is covered by the following two theorems. They show that at small scales the singular nature of $s$ may essentially be recovered, since the principal part of the singularity determines the wavelet coefficients at small scales.

Theorem 6. (Global regularity) A function $r$ is in $\Lambda_\beta$, $\beta \notin \mathbb{N}$, if and only if for some regular wavelet $g \neq 0$ with

$$\int dt\,(1+|t|)^{\beta}\,|g(t)| < \infty,\qquad \int dt\;g(t)\,t^k = 0,\quad k = 0, 1, \dots, [\beta],$$

we have

$$|\mathcal{W}_g r(b, a)| \le c\,a^{\beta}.$$

Theorem 7. (Local regularity) If a bounded function is Hölder continuous at $t_0$, $|s(t_0+t) - s(t_0)| \le c\,|t|^{\beta}$, $0 < \beta \le 1$, then its wavelet coefficients satisfy

$$|\mathcal{W}_g s(t_0 + b, a)| \le c'\,(a^{\beta} + |b|^{\beta}),\qquad (b, a) \to 0,$$

provided the wavelet is localized enough. On the other hand, suppose the wavelet coefficients of a bounded function satisfy

$$|\mathcal{W}_g s(b, a)| \le c\,a^{\epsilon},\qquad |\mathcal{W}_g s(t_0 + b, a)| \le c\,(a^{\beta} + |b|^{\beta}),\qquad c > 0,\ \epsilon > 0,\ 0 < \beta < 1;$$

then $|s(t_0 + t) - s(t_0)| \le c'\,|t|^{\beta}$. Therefore local decrease of the wavelet coefficients is equivalent to local regularity of the analyzed function. Important, however, is that $g$ has enough vanishing moments [16]. This can be used to analyze fractals at various scales [3], [12]. In [1] this technique is used to analyze singularities in the time behavior of the geomagnetic field, the so-called geomagnetic jerks.
12 Wavelet analysis of potential fields
A particular instance of the analysis of singularities is the analysis of hidden singularities. Consider a point source located at the origin, and assume that you can measure the field it generates in a distant hyper-plane. The problem is to recover the properties (type, strength, location, orientation, ...) of the source from this measurement. A naive approach might be to use a deconvolution technique. However, because of the finite accuracy of measurements, this method cannot really be used in practice. The approach we propose is based on the continuous wavelet transform. Since this is a family of convolutions with well-localized functions, we shall not encounter the instabilities of deconvolution. This kind of technique might have applications in remote sensing of sources. Obvious examples are subsurface imaging in geophysics from potential field measurements on the Earth's surface. Also in medicine, where stationary temperature fields obey the Poisson equation, applications to infrared thermography are in sight. This section is essentially taken from [18].

12.1 The field of homogeneous sources
Consider now the Poisson equation in $\mathbb{R}^{n+1}$,

$$\Delta\phi(q) = \sigma(q),\qquad q \in \mathbb{R}^{n+1},$$

where the (generalized) function $\sigma$ is a source term, and this equation is to be understood in the sense of distributions. We work from now on in $n$ dimensions, because the $(n+1)$-th direction will play a privileged role, and we shall write $q = (x, z)$, with $x \in \mathbb{R}^n$, $z \in \mathbb{R}$. Clearly, in applications in geophysics we have $n = 2$, the two horizontal dimensions; the third dimension is the vertical direction. The solution $\phi$ in terms of $\sigma$ is essentially unique if we impose some suitable global growth conditions. We are particularly interested in homogeneous sources of the type discussed in Sect. 11. Suppose now that $\sigma$ is a homogeneous distribution in $\mathbb{R}^{n+1}$ of degree $\alpha$. Consider first the case where $\alpha \notin \mathbb{N}$. We claim that there is a unique distribution $\phi$ which is homogeneous of degree $\alpha + 2$ and satisfies $\Delta\phi = \sigma$. Indeed, the Fourier transform $\hat\sigma$ is a homogeneous distribution in $\mathbb{R}^{n+1}$ of degree $-n-1-\alpha$. It follows that $\hat\phi = -\hat\sigma/|u|^2$ defines a distribution in $\mathbb{R}^{n+1}\setminus\{0\}$. Now this distribution is homogeneous of degree $p = -n-3-\alpha$. Since $\alpha \notin \mathbb{N}$, the degree $p$ avoids the critical values, and we may extend $\hat\phi$ to a homogeneous distribution in all of $\mathbb{R}^{n+1}$. Its inverse Fourier transform $\phi$ is then clearly a solution of the Poisson equation, and the degree of homogeneity is $\alpha + 2$, as claimed.

Suppose now that $n + 1 - \alpha \in \mathbb{N}$. Again we may set $\hat\phi = -\hat\sigma/|u|^2$ to define a distribution in $\mathbb{R}^{n+1}\setminus\{0\}$. But now it is not clear whether or not it has a homogeneous extension. However, a quasi-homogeneous extension exists. Its inverse Fourier transform therefore satisfies the decomposition of quasi-homogeneous distributions

$$\phi(\lambda q) = \lambda^{\alpha+2}\,\phi(q) + \theta(q)\,\ln\lambda, \tag{29}$$
where $\theta$ is a polynomial of degree $\alpha + 2$. Therefore, in particular for $\alpha < -2$, the field is again homogeneous. Note that this general discussion is nicely exhibited by the Green's functions, where $\sigma = \delta$:

$$G_0(q) = \begin{cases} c_1 \ln|q| & (n = 1)\\ c_n\,|q|^{1-n} & (n \neq 1). \end{cases}$$

Until now we have discussed general homogeneous sources. However, for obvious physical reasons we have to require that $\sigma$ is supported by a subset of the lower half-space $z < 0$. As an additional property we may introduce the boundary distribution of the field $\phi$ in the hyper-plane $z = 0$. More precisely, the following limit exists in the sense of distributions:

$$\phi(\cdot, z) \to \phi(\cdot, 0+),\qquad (z \to 0+).$$

This boundary distribution satisfies the same homogeneity and quasi-homogeneity properties as $\phi$. In addition, the field $\phi(\cdot, z)$ may be recovered from the boundary distribution by means of the harmonic continuation formula,

$$\phi(\cdot, z) = D_z p * \phi(\cdot, 0+).$$

12.2 Wavelets based on the Poisson semi-group
We now introduce a class of wavelets that behave nicely under the Poisson semi-group. This will be necessary to analyze homogeneous potential fields and will be used in the next section. More precisely, we say that a wavelet $g$ satisfies the dilation-continuation condition if the following holds true:

$$D_a g * D_{a'} p = c\, D_{a''} g. \tag{30}$$

Here $c = c(a, a')$ and $a'' = a''(a, a')$ are functions of the scales $a$ and $a'$. This means that the continuation operator maps the dilated wavelet $D_a g$ into a wavelet at the same position but at scale $a''$ and amplitude $c$. In this section we will discuss some properties of wavelets that satisfy the dilation-continuation property. In particular we want to construct a large family of solutions of (30). First note that an immediate solution is given by the Poisson kernel itself. Indeed, the semi-group property

$$D_a p * D_{a'} p = D_{a+a'}\, p$$

shows that the Poisson kernel is a (non-admissible) wavelet that satisfies the dilation-continuation property with $c = 1$ and $a'' = a + a'$. To obtain more general solutions, consider a linear operator $L$ which satisfies the following properties with respect to the dilation and translation operators:

$$D_a L = a^{\beta} L D_a,\qquad \beta \in \mathbb{R}, \tag{31}$$

$$T_b L = L T_b. \tag{32}$$
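The semi-group property of the Poisson kernel is easy to check numerically; we do it in one dimension, where $p_a(x) = a/\pi(x^2 + a^2)$ (the 1-D case is chosen purely for illustration):

```python
import numpy as np

def poisson(x, a):
    # one-dimensional Poisson kernel D_a p
    return a / (np.pi * (x ** 2 + a ** 2))

dx = 0.05
x = np.arange(-200.0, 200.0, dx)

a1, a2 = 1.0, 2.5
# D_a p * D_a' p, computed as a discrete convolution
conv = np.convolve(poisson(x, a1), poisson(x, a2), mode="same") * dx

# semi-group property: the result should equal D_{a+a'} p
err = np.max(np.abs(conv - poisson(x, a1 + a2)))
print(err)
```

The residual is dominated by truncation of the slowly decaying tails, not by the identity itself.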
Property (32) means that $L$ is a Fourier multiplier, which by (31) is homogeneous of degree $\beta$. Thus

$$L: \hat s(u) \mapsto m(u)\,\hat s(u),\qquad m(\lambda u) = \lambda^{\beta}\,m(u). \tag{33}$$

Now if $g$ has the dilation-continuation property, we claim that $Lg$ also has the dilation-continuation property, with the same function $a''$ and with $c$ replaced by $(a/a'')^{\beta} c$. Indeed, we may write

$$D_a Lg * D_{a'} p = a^{\beta} L\,(D_a g * D_{a'} p) = (a/a'')^{\beta}\, c\, D_{a''} Lg.$$

Therefore, in particular, all functions given by

$$g = L\,p$$

are solutions of equation (30). In the special case where we have in addition $c = c(a')$ and $a'' = a + a'$, a second family of solutions can be obtained as follows: suppose $g$ satisfies

$$D_a g * D_{a'} p = c(a')\, D_{a+a'} g; \tag{34}$$

a derivation with respect to $a$ shows that $(x\partial_x)\,g$ is again a solution of (34) with the same function $c$. Therefore a general family of solutions is given by

$$P(x\partial_x)\,L\,p, \tag{35}$$
where $P$ is a polynomial in one variable and $L$ is the operator (33).

12.3 Wavelet analysis of homogeneous fields
Both the covariance of the wavelet transform with respect to dilations and the homogeneity of degree $\alpha + 2$ of the field $\phi$ ensure that

$$\mathcal{W}[g, \phi(\cdot,z)](b, a) = \mathcal{W}[g, D_z p * \phi(\cdot,0+)](b, a) = [D_a \tilde g * D_z p * \phi(\cdot,0+)](b) = c(a, z)\,[D_{a''(a,z)} \tilde g * \phi(\cdot,0+)](b) = c(a, z)\,\mathcal{W}[g, \phi(\cdot,0+)](b, a''(a, z)).$$

Now, using the covariance of the wavelet transform and the homogeneity of the boundary distribution $\phi(\cdot, 0+)$, we obtain

$$\mathcal{W}[g, \phi(\cdot,z)](b, a) = c(a, z)\,a''(a, z)^{-\alpha-2}\;\mathcal{W}[g, \phi(\cdot,0+)]\Big(\frac{b}{a''(a, z)},\, 1\Big).$$
To simplify the discussion, let us assume that the wavelet $g$ belongs to the class defined by (35). Then the last equation becomes

$$\mathcal{W}[g, \phi(\cdot,z)](b, a) = \Big(\frac{a}{a+z}\Big)^{\beta}\,(a+z)^{-\alpha-2}\;\mathcal{W}[g, \phi(\cdot,0+)]\Big(\frac{b}{a+z},\, 1\Big).$$

In order to get some insight into the geometry of this last equation, note that there are two functions $f$ and $F$ such that the wavelet transform can be written as

$$\mathcal{W}[g, \phi(\cdot,z)](b, a) = f(a)\, F\Big(\frac{b}{a+z}\Big),$$

where $f$ and $F$ read

$$f(a) = (a+z)^{-\alpha-2}\Big(\frac{a}{a+z}\Big)^{\beta},\qquad F(b) = \mathcal{W}[g, \phi(\cdot,0+)](b, 1).$$

Note that the set of points $(b, a)$ which satisfy $b/(z+a) = \mathrm{cst}$ are located on a straight line in the half-space. For various constants $\mathrm{cst}$ we obtain a family of lines that intersect at the point $(0, -z)$ outside the half-space $\mathbb{H}$. Therefore the wavelet transform exhibits a cone-like structure whose apex is shifted to the location of the source outside the half-space. This provides a natural geometric way to locate the source. The homogeneity $\alpha$ of the source can be obtained either from the full expression of $f(a)$, using the previously estimated $z$, or from the asymptotic behavior $f(a) \simeq a^{-\alpha-2}$ in the limit $a \to \infty$. It might be instructive to give a second derivation of these results, using only the homogeneity of the field $\phi$ and not the boundary field $\phi(\cdot, 0+)$. Again, the covariance of the wavelet transform with respect to dilations and the homogeneity of the field $\phi$ imply that
$$\mathcal{W}[g, \phi(\cdot,z)](b, a) = \mathcal{W}\big[g,\, D_{a'/a}\,\phi(\cdot,z)\big]\Big(\frac{a'}{a}\,b,\; a'\Big).$$

Here the dilation is acting on the first $n$ variables only. Now the harmonic extension relation enables us to obtain $\phi(\cdot, z a'/a)$ from $\phi(\cdot, z)$:

$$\mathcal{W}[g, \phi(\cdot,z)](b, a) = \Big(\frac{a'}{a}\Big)^{n-\alpha-2}\,\mathcal{W}\big[g,\, D_{z(a'/a - 1)}\, p * \phi(\cdot,z)\big]\Big(\frac{a'}{a}\,b,\; a'\Big).$$
Fig. 27. (Left) Potential fields caused by homogeneous sources with exponents γ = 1 (top), γ = 1.5 (middle), γ = 2 (bottom). All sources have a common depth z = 20 units of length below the measurement plane of the fields, and their horizontal positions are marked with the vertical arrows. (Right) The voices a = 6 (solid lines) and a = 20 (dashed lines) of the wavelet transforms of the fields shown on the left, computed with the analyzing wavelet given by equation (38). These voices are identical up to a dilation and a scaling factor which involve the source parameters γ and z (see equation (39)).
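The structure $\mathcal{W}(b,a) = f(a)\,F(b/(a+z))$ derived below suggests a direct numerical test: given two voices, scan candidate depths $z$ and keep the one that makes the voices proportional after the predicted dilation. This is a hypothetical sketch with a synthetic voice profile $F$ and amplitude law $f$ (both assumed, not computed from a real field):

```python
import numpy as np

z_true = 20.0
a1, a2 = 6.0, 20.0
b = np.linspace(-200.0, 200.0, 4001)

# assumed ingredients of the model W(b,a) = f(a) * F(b/(a+z))
F = lambda u: u * np.exp(-u ** 2)              # synthetic voice profile
f = lambda a: (a + z_true) ** -2.0             # synthetic amplitude law

V1 = f(a1) * F(b / (a1 + z_true))              # voice at scale a1
V2 = f(a2) * F(b / (a2 + z_true))              # voice at scale a2

def estimate_depth(b, V1, V2, a1, a2, z_grid):
    best_z, best_err = None, np.inf
    for z in z_grid:
        # voice 2 resampled on the dilated abscissa predicted by depth z
        V2w = np.interp(b * (a2 + z) / (a1 + z), b, V2)
        scale = V1 @ V2w / (V2w @ V2w)         # optimal proportionality factor
        err = np.linalg.norm(V1 - scale * V2w)
        if err < best_err:
            best_z, best_err = z, err
    return best_z

z_est = estimate_depth(b, V1, V2, a1, a2, np.arange(10.0, 30.5, 0.5))
print(z_est)
```

At the estimated depth, the proportionality factor gives $f(a_1)/f(a_2)$, from which the homogeneity exponent follows, exactly as in the two-step procedure of Sect. 12.4.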
As before, assume that the wavelet $g$ has the dilation-continuation property (30). Then we obtain

$$\mathcal{W}[g, \phi(\cdot,z)](b, a) = c\,\Big(\frac{a'}{a}\Big)^{n-\alpha-2}\,\mathcal{W}[g, \phi(\cdot,z)]\Big(\frac{a'}{a}\,b,\; a''\Big), \tag{36}$$

where $c$ and $a''$ are functions of $a'$, $a$ and $z$. If, in addition, $g$ belongs to the family (35), this last expression simplifies to
$$\mathcal{W}[g, \phi(\cdot,z)](b, a) = \Big(\frac{a}{a''}\Big)^{\beta}\Big(\frac{a''+z}{a+z}\Big)^{\alpha+2+\beta}\,\mathcal{W}[g, \phi(\cdot,z)]\Big(b\,\frac{a''+z}{a+z},\; a''\Big). \tag{37}$$
This equation is valid for all $a'' > 0$, which now plays the role of a parameter. In order to recover from this expression the geometry of the shifted cone-like structure in the wavelet transform, consider two points $\Omega = (b, a)$ and $\Omega'' = (b\,(a''+z)/(a+z),\, a'')$. The straight line they define passes through the location of the source $(0, -z)$. From (37) we see that the ratio of the wavelet coefficients at these points can be written as

$$\frac{\mathcal{W}[g, \phi(\cdot,z)](\Omega)}{\mathcal{W}[g, \phi(\cdot,z)](\Omega'')} = \frac{f(a)}{f(a'')}.$$

12.4 Examples of source characterization
We shall now examine how the wavelet transform enables both a localization and a characterization of the homogeneous sources responsible for an observed field. For an easy display of the results we work in a 2-dimensional physical space (i.e. with $n = 1$). Equation (37) is our basic working equation, from which the horizontal and vertical coordinates of the source are to be determined, together with its multi-polar index $\gamma$. The wavelet used in the present example is a member of the family (35),

$$g_1 = L\,p, \tag{38}$$

chosen such that $\beta = 1$. For this wavelet, equation (37) reduces to
a (a"+z~ ~+1~/Y [gl, ¢ (', z)] (ba"+z a"'~ \ a + z ' ]"
]/Y [gl, ¢ (', z)] (b, a) = ~7 \ \ ]
(39)
We consider three examples of sources localized at the origin and corresponding to 7 = 1, 1.5, and 2. The potential fields ¢ (x, z = 20) created by each source are shown on the left-hand side of Fig. 27 and are displayed from top to bottom for 9' = 1 , 1 . 5 , and 2 respectively. The right-hand side of Fig. 27 represents two voices [a = 6 (solid lines) and a = 20 (dashed lines)] of the wavelet transform of these fields. A two-step algorithm can be used to estimate both the multi polar index -y and the depth z to the source from the measurement plane (here, z = 20). First, the dilation (a + z) / (a" + z) necessary to transform voice a" into voice a in formula (39) is determined and the source depth z is readily obtained. Then the exponent 7 is computed from the amplitude ratio of the experimental voices. When applied to the data shown in Fig. 27, this procedure accurately (i.e. up to the numerical precision of the computer) restitutes the theoretical values of V and z. A second way to derive both the depth z and the horizontal position of the source is provided by using the cone-like structure of the wavelet transform (see Fig. 28). Indeed, formula (37) shows that the set of points where ObW [gL, ¢ (', z)] (b, a) = 0 forms cone-like structures pointing towards the point
56
Matthias Holschneider
[
4.0]
4,0-:
~
~" ~.oi
3.0.
~ 2.0.
2.oi
L0]
~ L0i 0,02
0.0i
Horizontal Distance
Horizontal Distance
~i I0
o 3.0
:i
0,0 Horizontal Distance
04 IIi'J
i 3.0 2,0
~ I
1.0
0
1
2.0
.
0
~
Horizontal Distance
~ 3,0 ~ 2.0 1.0 0.0 ......... ,
Horizontal Distanc~
................
Horizontal Distance
Fig. 28. (Left) Wavelet maps, W [gl, ¢ (., z -- 20)] (b, a), of the fields shown on the left side of Fig. 27. The magnitude of the wavelet transforms takes large values in a limited cone-like area roughly centered on the horizontal positions of the sources (marked with vertical arrows). (Right) Tile focusing of the cone-like ~ e a s may be precisely observed by tracking the lines of extrema (taken along the b direction) of the wavelet map.
( 0 , - z ) located outside the wavelet transform half-space ]Hn. This is shown in Fig. 29 where the dilation axis is drawn with a linear scale, Remark that this convergence of the lines of maxima of the wavelet transform enables a good determination of both the vertical and the horizontal position of the source which creates the analyzed field. 13
Wavelet
analysis on the sphere
Recently the construction of wavelets on the sphere has attracted attention in particular in view of applications to geophysical situations. S o m e recent con-
Continuous Wavelet Analysis
20.0:
57
7=I
10,0 0.0
i
f
-101 I
-20.(
z =19.0
~
20.0. i0,0, 0,0
r ii
-10.4 z =19,1
-20.0
![
20.0-"
2 10.0 0.0
la ill iiJ
-10.( ,~, ~
-20.( . . . .
i
. . . .
J
. . . .
itl ~ . . . .
z =19,0 i
. . . .
i
. . . .
/
H o r i z o n t a l Distance
Fig. 29. Same lines of extrema as for Fig. 28 but displayed with a linear dilation axis. The lines are straight lines whose intersection lies outside the ]H~ half-space at the source location.
structions of decompositions of unity on the sphere are [5] and [7] (see also the contribution of Freeden et al. in this volume). The first construction is based on group representations in the tangent bundle of the sphere. The second construction is mainly based on the Poisson or Heat semi-group. It also focuses on rotation invariant wavelets. The construction we propose in this paper is differ-
58
Matthias Holschneider
ent in that it constructs a family of wavelets in a more ad hoc way. At small scale we shall see that. the sphere is flat and asymptotically our wavelet analysis tends to the usual wavelet analysis of functions on 1~2 based ori dilations and rotations. A discretized version of continuous wavelet analysis on the sphere can e.g. be found in [21]. 13.1
T h e r o t a t i o n group
We now want to decompose the Hilbert space L2(S 2) = (S2,d~) of square integrable functions over the unit sphere $2; that is S 2 is the subset of all points x in 3-dimensional Euclidean space with Ixl = 1. Here d~2 is the surface measure of the sphere. We sometimes use spherical coordinates (0, ¢), where 0 is the colatitude and ¢ the longitude. Thus 8 = 0 corresponds to the north pole, that we occasionally denote by N. The south-pole S corresponds to 0 = ~r. We denote by SO(n) the group of rotations of n-dimensional space. Then the 2-sphere may be identified with the homogeneous space S 0 ( 3 ) / S 0 ( 2 ) . We therefore have natural left action x ~+ ~.x of{ E SO(3) on x E S 2. (Note that, since the rotation g r o u p is not comutative, we have to distinguish between a left and a right action.) It is given by the rotations of the sphere. We thus have an obvious unitary representation of SO(3) in L2(S 2) by means of (U(~)s)(x) = s ( x - U ~ ) , 13.2
U(~)U(~') = U ( ~ ' ) .
Spherical harmonics
As is well known, the representation U is reducible. It splits into irreducible components L 2 (S 2) = ~ Wz. Each Wt is 21 + 1 dimensional. Consider the spherical harmonics Yl~, with I = 0, 1, ..., n = - l , - l + 1, ..., I. The collection {Y1m I Iml < l}, l fixed, is an orthonormal basis of Wt. Thus the complete collection {Ytm } is an ortho-normal basis of L~(S2). In terms of the Ylm the representation U of SO(3) reads (~'~1 U(~) t Y~n) = a ,z,D t~'~ where the D m'n are the Wiegner functions. For n = 0 we explicitly have =~¢
~
D,m~O( { ) = y n ( x ) .
(40)
That means Din,°({) depends only on the co-set ~ / S 0 ( 2 ) which can identify with a point in S 2. It is natural to introduce the Fourier coefficients of s E L2(S 2) via
Then Plancherel's formula takes on the following form
2
=Z Z
l=O Iml< &
Continuous Wavelet Analysis 13.3
59
Directional wavelets on the sphere
We now present the construction of directional wavelets on the sphere. The problem is that there is no natural dilation operator on the sphere. One way to see this is that the infinitesimal dilation
s(e, ¢) ~ eOes(e, ¢) with core C~(S2\S) is symmetric (S = south-pole), but has no self-adjoint extensions. Hence its exponential does not generate a group. We therefore have to introduce the different scales in a more or less ad hoc way. Let ga E L2(S 2, dr2) be a family of functions indexed by a parameter a > 0. We usually suppose that a --~ ga is Bochner integrable but for all practical purposes one might suppose at least continuity (a ~ ga) E C ° ( l ~ , L2($2)). We then define a wavelet transform of s C L2(S2,df2) with respect to this family of functions as the following set of scalar products.
W[{ga},S](,~,a) = ,
~ • SO(3),a e ]~I-.
It is a function over the parameter half-space = so(3) x ~_.
Since the group SO(3) is compact, it is unimodular and hence it has an essentially unique invariant measure denoted by dZ. We normalize it such that
fs
0(3)
d Z = 4~ 2,
which is possible since SO(3) is compact. The formal wavelet synthesis with respect to a family of wavelets satisfying ha C L2(S2,d~) reads now
J~4[{ha}, T](x) = ~o~ -da a / s 0(3) dZ(¢)T(¢, a) (U(¢)ha)(x). D e f i n i t i o n 2. We say a family of/unctions ga E C ~ ( S ~) is admissible if the following three conditions are met. Condition 1: (energy conservation) there is a finite constant cg, 0 < ]c91 < cc such that for all s E L2(S 2) with f s 2 s = 0 we have
/
dZ(~) Ada a
lW[{ga}, s](~, a)t 2 = eg ~ :
df2(x)Is(x)t
2 .
Condition 2: (large scale decay) for all c~ > 0 there is a finite constant ca, such that for a > 1 we have
IW[(ga},s](~,a)l ~_e~a-".
60
Matthias Holschneider
Condition 3: (Euclidean limit) There is a function g E L2(~ 2) such that the following limit holds point wise almost everywhere lira a2go(~ -1 (ax)) = g(x),
a--+O
where • : $2\{0, 0 , - 1 } ~ 1~2 is the stereographic projection of the sphere with the south-pole removed onto the open two dimensional plane. Some comments are in order. Condition I says that the wavelet transform is a partial isometry. Condition 2 and 3 allow us to interpret a as a scale parameter: condition 2 says that there are no features at large scale. This is natural since the functions on the sphere live on a compact space. This also is similar to what can be observed on the circle. Condition 3 finally tells us that at small scales, the earth is fiat; that is a behaves asymptotically like a dilation parameter as a -+ 0. Clearly one might be interested in stronger convergence properties depending on the application. In analogy with the case of wavelet analysis over [~2 we also consider admissible pairs D e f i n i t i o n 3. We say.that {ga} and {ha} is an admissible analysis reconstruc-
tion pair if they satisfy Condition 2 and Condition 3 and if in addition Condition 1 is replaced by Condition 4: (boundedness of wavelet transform) there is a constant cg such that
f~ dZ(~)a ^ da iW[{go}, s](~, a)l 2 _< Cg fs2 d~(x)Is(x)l ~ , Condition 5: (boundedness of wavelet synthesis) there is a constant Ch such that
~ d~(x)IM~o>T(x)I ~ <_c~ f~ d~(~)a A da iT(~,a)t2, Condition 6: (inversion formula) M{ho}~/V{go}
= :n.
We always have the relation that ]vi{ao} is the adjoint of W{go} as can be formally seen by exchanging the integrals
fs2 d~(x)(A4{g,})w(x)s(x) = f ~ dX(~)a A daw(~,a) W{.q~}s(~, a). We now want to give sufficient conditions on {g~} that ensure admissibility of g~. T h e o r e m 8. Condition i (energy conservation) holds if and only if ~ ( l , rn) =
(Yt"~ [ g~) L2(S2,da) ,satisfies Cg(l)
=
STr f ~ 2l + 1 Jo
with some cg independent of 1.
da E I g~J l , a f,~l<_l
--
m)l
2=
co
Continuous Wavelet Analysis
61
Proof. For fixed w E L2(S 2, d[2) consider the operator
B: ~ ~ f
as o(a)
d~(¢) .U(¢)~
It is a bounded operator from L2(S2,d~) into itself. In addition, since d Z is invariant under the group translations it is easy to verify that
Bu(¢)
=
U(~)B
for all ~. Thus, since the representation of the rotations group is irreducible in each subspace Wl, we have 0o
l----O
where the //z are the projectors on the invariant spaces ~¥I and cl are some constants. To compute the constants we note that the Wiegner functions D~ 'm' are known to satisfy at
m2,m2,(~) - 211 8~r2 ~s 0(3) dZ(~)D~"ml'(¢)* D~2 + 15t~'t25ml'm25m'~'m'~" We may take s = Yz° and hence 8~r2
c(1)- 21 + 1 ~
t~(t"~)12
Iml<~ Now choosing w = g~ a n d integrating over a, the theorem follows.
[]
C o r o l l a r y 1. Condition ~ (boundedness of transform) holds if] there is a con-
stant c < co such that cg(1) <_c. Condition 5 holds iff ch(1) <_c. Finally Condition 6 holds iff 8~r L °° da E ~(l,m)ha(1,m) = 1. 2l+1
a Iml
Proof. The first statement is obvious. For the second we note that the wavelet synthesis is the adjoint of the wavelet analysis. Hence it has the same norm. For the last statement it is enough to consider the operator B:
f
S
Js 0(3)
dZ(¢) (U(~)g~ I s) V(¢)h~
and to integrate over a.
[]
C o r o l l a r y 2. If ga and hh is an admissible analysis -reconstruction pair the we have the following identity Valid for s, w E L 2(S 2, d[2) and f s -- f w = O.
J~ da/s a
= c~,~ /
o(3)
3S 2
dZ(#)W[g,s](¢,a) W[h, wl(~, a )
dn(~) s(~) ~/)(X).
62
Matthias Holschneider
Proof. Indeed, we may write
= <s I w)L~(s~ ) []
Condition 2, the large scale decay, is easy to meat. T h e o r e m 9. Condition 2 (large scale decay) holds iff oo
Z
I~(l,m)l 2 < O(a-~)
a~
4=0 Jmi
Proof. Use Parseval's equation on the sphere.
[]
As usual, the image is characterized by a reproducing kernel equation. T h e o r e m 10. Let g, h E S(I~2) be an analysis-reconstruction pair. Then T C image VPg if and only if T(G a) = / ~ I dZ(P)a,A da' IIg,h(~. p-l, a, a'),
where Hg,h(~,a,a') -~ }Vgha,(~,a). For arbitrary T the right hand side defines a projection operator onto image Wg. In ease g = h, this projector is orthogonal. Proof. This is a standard argument for partial isometries. We may suppose Cg,h = 1. Consider H = WgMh. We have H ~ = W.(MhWg)Mh
= W ~ M h = H.
Thus H is a projector. Its range lies obviously in image ~/Yg and since T = 1/Yes implies H T - = W A M h W . ) ~ = W~s = H T , the other inclusion holds as well. In case g = h, the operator is self-adjoint. The reproducing formula above follows now by exchanging the integrals in
T(~,a) = / s 2 dl2(x) /lg dZ(p)a' A da' U(~)ga (x) T(p, a') U(p)h~, (x) / ~ dZ(p)a I A de' T(p, a') (U(~)ga(x) ] V(p)h~(X))L2(S2,d~) But
now
(U(¢)ga(x) l U(p)h~(x)) = (U(P-I~)ga(x) I ha,(x)) Wgh~,(p-iGa), and the theorem is proved.
[]
Continuous Wavelet Analysis
{}{} {}{}
63
Fig. 30. The modulus and the phase at scale a = 4.
Fig. 31. The modulus and the phase at scale a = 2.
13.4
The Euclidean limit
We now come to the construction. Take the two dimensional Fourier transform of g and write it in polar coordinates, ~ = ~(k, ¢). Decompose for each k into a Fourier series 1
^
~(k,~) = ~ ( k , n ) ~
•
f2~
~"~, ~(w,n) = Jo d~g(k'~)~-i"~
nEZ
We first shall prove that the Euclidean limit holds for the following family of functions (X) ~=o I~l_
Later we modify this family slightly to make it admissible. T h a t the Euclidean limit holds can be observed numerically by looking at Fig. 31 to 34, where the scale of the wavelet changes from a = 2 to a = 1/4. The thick outermost circle corresponds to the south-pole. The inner circle is the equator. The center of the picture is the north-pole. In all figures we have fixed the rotation around the north-pole to 2Ir/3. We come to the details. As we did before let us also write g in polar coordinates g = g ( w , ¢), w > 0, ¢ E [0, 2~r), and let us introduce the Fourier coefficients with respect to ¢ 1
g(w,¢) = ~ ~ nCZ
-
~(~,n)~"*, ~(~,~)=
] i 2~
d¢~(~,¢)~-~.
64
Matthias Holschneider
F i g . 32. The modulus and the phase at scale a = 1.
F i g . 33. The modulus and the phase at scale a = 1/2.
F i g . 34. The modulus and the phase at scale a = 1/4.
Lemma
1. The following relation holds:
~(k, •) =
waw~/(w, n) &(kw),
where J~ is the n-th order Bessel function. Proof. T h e F o u r i e r t r a n s f o r m of g r e a d s in s p h e r i c a l c o o r d i n a t e s k > 0, ¢ E
[0, 2~)
__ __1 wdw 2~r n~z o
d(p 7(w, n) e -i(kw cos(¢-~)+ne)
Continuous Wavelet Analysis
= 2--~E
ein¢
wdw
65
d~ 7(w, n) e -i(kw cos(~)+n~)
nEZ
wdw "/(w, n) Jn(kw) e inC.
= -27r
Upon identifying the terms of both expressions that the expansion coefficients ~f and ~ are related as stated in the lemma. [] Now the following limit is known to hold uniformly for 0 E I, compact (see e.g. [22]) r----
lim J2-~FYlm(Oll , ¢) = Jm(O)e im¢. l-+c~o
v
t
We now propose to look at ~he limit a ~ 0 in oo
Note that
z~w Ty~12 2I + 1 41r
f,~t
Therefore, by Schwarz's inequality~ discarding a term smaller than a2e, we may suppose t h a t g is such that ~ is supported by a corona cl < lkl _< c2. Now as a gets small, the l that contribute to the sum get large since a- 1 stays in [cl, c2]. We thus can use the above asymptotic for Ytm without changing the limit, and we obtain oo
a~ /=0
~ (a~)~(al,m)J~(al)~' ~ [ml<_t
As a -+ 0, the sum over 1 may be replaced by a Riemann integral and we obtain as limit (al --+ w, a --+ dw)
1 2re
wdw ~(w, m) Jm (wO) e im¢.
By the Fourier inversion formula (Lemma 1) this equals g(O, ¢) in polar coordinates. 13.5
An admissible family
We now modify the family slightly to make it admissible. Consider again a function g C So (]~2). Then let as before
~(k, ~) =
ff
de ~(k, ¢) ~ - ~ .
66
Matthias Holschneider
Now set
oo
/=0 tml
~(al, m) ~'o(k,.~) =
/ 8~2
V~-r~
oo
fo
da ~
z:-,l.~l
l~(a',.~)l ~
T h e o r e m 11. The {ga} is an admissible family.
Proof. Clearly Condition 1 and 2 are satisfied• To show that the Euclidean limit holds we note that by Parseval's equation we have for l -+ c~
/7
da E
-+
lg(a"¢)l =
lg(k)12
Iml
Note that the integral on the right hand side is the admissibility condition of Murenzi [19]. It therefore follows that
vq~(al,-~) -+ £(I,.~) as it should.
14
[3
Filtering in wavelet space
As we have seen, we may recover the analyzed signal from its wavelet coefficients by means of the wavelet synthesis. We now want to consider linear operators that are defined through manipulations of the wavelet coefficients. T h a t is we look at operators of the following form : s ~ .a4hMWgs,
where M stands for some linear manipulation• We will mostly consider M to be just a multiplication with some function. Particular interesting instances are where M is the characteristic" function of some subset Z of the time-scale halfspace. This is the function that takes tile value 1 for every point inside Z and 0 outside. In that way our o p e r a t o r / 2 consists in cutting out of the wavelet coefficients some features of interest and reconstructing them. This may be used to decompose a complex signal into relevant smaller parts that may be interpreted individually.
14.1
T r a n s l a t i o n i n v a r i a n t filtering
Let G(a) be function of the scales only and consider the operator a? : s ~ M h G W g s .
Continuous Wavelet Analysis
67
T h e n Y2 is a linear operator t h a t commutes with the translations. Thereibre it is a convolution operator C2s -- F • s. To know the filter function F it is enough to let $2 act on a ~ distribution, F = f25. Formally we m a y write
D:s~+F*s,
F=
-a- F ( a )
O,h)-
.
In Fourier space we have
F ( w ) = foo~-~F(a)g(aw)h(w) • Typically F(a) is equal to 1 on some interval of scales and 0 outside. This m a y be useful in the design of band-pass filters, adapted to the specific needs of the signal.
Derivation operators. Consider the special weight G(a) = a^\alpha. In that case, we may compute the action explicitly:

\hat F(\omega) = \begin{cases} c_+ (i\omega)^{-\alpha} & \text{for } \omega > 0 \\ c_- (-i\omega)^{-\alpha} & \text{for } \omega < 0 \end{cases}, \quad\text{with}\quad c_\pm = i^{\alpha} \int_0^\infty \frac{da}{a}\,a^{\alpha}\,\hat h(\pm a)\,\overline{\hat g(\pm a)}.

In particular for \alpha = -1, -2, \dots we can distinguish the following cases. If either g or h is progressive (regressive), then Omega is a derivation operator acting on the progressive (regressive) part of s:

Omega = c_\pm \partial^{-\alpha} \Pi_\pm,

where \Pi_\pm projects onto the progressive (regressive) part. If g and h are both symmetric or both anti-symmetric (g(-t) = g(t) and h(-t) = h(t), or g(-t) = -g(t) and h(-t) = -h(t)), then \overline{\hat g}\,\hat h is symmetric, so c_+ = c_- = c, and hence

Omega = c\, \partial^{-\alpha}.

Another way to understand the relation between derivatives and multiplication by a^{-1} is obtained from the integration-by-parts formula

W[g, \partial s](b, a) = -a^{-1}\, W[\partial g, s](b, a).

It shows that multiplication with a^{-1} amounts to taking the wavelet transform with respect to the derived wavelet.
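The integration-by-parts formula follows from a one-line computation (a sketch, using the convention $W[g,s](b,a)=\int a^{-1}\,\overline{g((t-b)/a)}\,s(t)\,dt$ and assuming $s$ and $g$ decay at infinity; other normalizations only change constant factors):

```latex
\begin{aligned}
W[g,\partial s](b,a)
 &= \int \frac{1}{a}\,\overline{g\!\left(\frac{t-b}{a}\right)}\,s'(t)\,dt
  = -\int \frac{d}{dt}\!\left[\frac{1}{a}\,\overline{g\!\left(\frac{t-b}{a}\right)}\right] s(t)\,dt \\
 &= -\frac{1}{a}\int \frac{1}{a}\,\overline{g'\!\left(\frac{t-b}{a}\right)}\,s(t)\,dt
  = -a^{-1}\,W[\partial g, s](b,a).
\end{aligned}
```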
68
Matthias Holschneider
14.2 Non-stationary filtering
Now consider two domains Gamma_1 and Gamma_2 in the time-scale half-plane, together with their respective projectors M_1, M_2, given by the multiplication with their characteristic functions. Clearly both operators commute, M_1 M_2 = M_2 M_1. Moreover, if Gamma_1 \cap Gamma_2 = \emptyset, then the projectors are independent, as expressed through M_1 M_2 = M_2 M_1 = 0.
These elementary, intuitive facts do not hold if we consider the associated non-stationary time-scale filters

Omega_i = M_h M_i W_g,   i = 1, 2.

Indeed, let us first consider the commutator, and let us suppose for simplicity g = h. With \Pi = W_g M_h, the operator whose integral kernel is the reproducing kernel, it comes

Omega_1 Omega_2 - Omega_2 Omega_1 = M_h (M_1 \Pi M_2 - M_2 \Pi M_1) W_g.
Now in general M_1 \Pi M_2 - M_2 \Pi M_1 \neq 0, since the reproducing kernel is not a delta function, nor even compactly supported. However, if we suppose that the kernel \Pi(b, a) is well localized, like

|\Pi(b, a)| \le c_m (a + 1/a)^{-m} (1 + |b|)^{-m},   for all m > 0,
then the commutator vanishes asymptotically if the two regions in the time-scale plane are well separated. More precisely, if we define the distance between two points (b, a) and (b', a') as

\Delta((b,a),(b',a')) = \exp(|\log(a/a')|) + \frac{|b - b'|}{a} + \frac{|b - b'|}{a'},

then one can show that

\|[\Omega_1, \Omega_2]\| \le c_m\, \Delta(\Gamma_1, \Gamma_2)^{-m},   \Delta(\Gamma_1, \Gamma_2) \to \infty,

where the distance between the two sets Gamma_1 and Gamma_2 is defined as

\Delta(\Gamma_1, \Gamma_2) = \inf_{(b,a)\in\Gamma_1,\,(b',a')\in\Gamma_2} \Delta((b,a),(b',a')).
Therefore, depending on the localization of the reproducing kernel, the two regions in the time-scale half-plane are more or less independent. Note that this non-independence is intimately related to the Heisenberg uncertainty relation. In the same way, the product Omega_1 Omega_2 does not vanish in general even if the two associated regions do not overlap. However, again, as the distance gets large compared to the size of a reproducing kernel, the interaction between the two regions becomes small and the two non-stationary filters can be considered as independent.
Fig. 35. IERS polar motion series. The X component is measured along the Greenwich meridian and counted positive towards Greenwich, and the Y component is measured along the 90°W meridian and counted positive westward.
14.3 Wavelet extraction of the Chandler wobble
This technique of non-stationary filtering has been applied to the analysis of polar motion in [8], where the reader may find additional details. In this section we briefly outline the steps in the analysis that have been used in the above paper to extract the so-called Chandler component from the polar motion. In Fig. 35 you can see polar motion since the beginning of the century. It represents the intersection of the instantaneous rotation axis of the earth with a plane tangent to the conventional north pole. The X axis is along the Greenwich meridian and the Y axis is along the 90°W meridian. It is convenient to combine both components into a complex signal via s = X - iY. The choice of the sign for Y ensures that s is essentially prograde (see Def. 1). In the analysis we will use a wavelet of the Morlet type. Since we have a complex valued signal we have to distinguish between prograde and retrograde components. Now, as you can see in Fig. 36, the wavelet analysis separates the two components, which are clearly visible in the prograde transform of the signal and to a much lesser extent in the retrograde part (Fig. 36). The two components are the annual, forced, component and the free Chandler wobble around 435 mean solar days. We now extract the two components by putting to 0 all coefficients outside some polygonal region containing the main energy of the component we want to extract. This is precisely a non-stationary filter as described in the previous section. Doing this for the prograde and retrograde part we obtain four components plus a residual term. In Fig. 37 the reconstructed retrograde and prograde components are displayed in a right-handed coordinate system. The original polar motion is shown at the bottom left together with the residuals (bottom right) obtained by removing the retrograde and prograde components from the total motion. The gray trajectory is for the whole data set and the circles represent the
Fig. 36. Prograde (left) and retrograde (right) wavelet transform of the polar motion data shown in Fig. 35.
Fig. 37. The four components of polar motion and their residual: (a) retrograde annual, (b) prograde annual, (c) retrograde Chandler, (d) prograde Chandler, (e) total polar motion, (f) residual. Both axes are in arc seconds.
residual pole position every ten years starting in 1890 at the top right. Observe the fast westward trend in the residuals around 1960 (the eighth circle from the top right). The prograde Chandler component extracted in this way can now be analyzed further; we refer to [8] for details.
References

1. M. Alexandrescu, D. Gibert, G. Hulot, J. L. Le Mouël, and G. Saracco. Detection of geomagnetic jerks using wavelet analysis. J. Geophys. Res., 100:12557-12572, 1995.
2. S. T. Ali, J. P. Antoine, J. P. Gazeau, and U. A. Mueller. Coherent states and their generalizations: A mathematical overview. Rev. Math. Phys., 7(7):1013-1104, 1995.
3. A. Arneodo, G. Grasseau, and M. Holschneider. On the wavelet transform of multifractals. Phys. Rev. Lett., 61:2281-2284, 1988.
4. J. M. Combes, A. Grossmann, and Ph. Tchamitchian. Wavelets, Time-Frequency Methods and Phase Space. Springer-Verlag, 1989.
5. St. Dahlke and P. Maass. Wavelet Analysis on the Sphere. Technical report, University of Potsdam, 1995.
6. I. Daubechies. Ten Lectures on Wavelets. SIAM, 1992.
7. W. Freeden and U. Windheuser. Wavelet Analysis on the Sphere. Technical report, University of Kaiserslautern, 1995.
8. D. Gibert, M. Holschneider, and J. L. Le Mouël. Wavelet Analysis of the Chandler Wobble. Technical report, IPGP (to appear in JGR), 1998.
9. A. Grossmann, M. Holschneider, R. Kronland-Martinet, and J. Morlet. Detection of abrupt changes in sound signals with the help of wavelet transforms. In Advances in Electronics and Electron Physics, volume 19 of Inverse Problems, pages 289-306. Academic Press, 1987.
10. A. Grossmann and J. Morlet. Decomposition of Hardy functions into square integrable wavelets of constant shape. SIAM J. Math. Anal., 15:723-736, 1984.
11. A. Grossmann, J. Morlet, and T. Paul. Transforms associated to square integrable group representations I: general results. J. Math. Phys., 26:2473-2479, 1985.
12. M. Holschneider. On the wavelet transformation of fractal objects. J. Stat. Phys., 50(5/6):953-993, 1988.
13. M. Holschneider. General inversion formulas for wavelet transforms. J. Math. Phys., 34(9):4190-4198, 1993.
14. M. Holschneider. Inverse Radon transforms through inverse wavelet transforms. Inverse Problems, 7:853-861, 1993.
15. M. Holschneider. Wavelets: An Analysis Tool. Oxford University Press, 1995.
16. M. Holschneider and Ph. Tchamitchian. Pointwise regularity of Riemann's "nowhere differentiable" function. Inventiones Mathematicae, 105:157-175, 1991.
17. D. Marr. Vision. Freeman and Co., 1982.
18. F. Moreau, D. Gibert, M. Holschneider, and G. Saracco. Wavelet analysis of potential fields. Inverse Problems, 13:165-178, 1997.
19. R. Murenzi. Ondelettes multidimensionnelles et application à l'analyse d'images. Thèse (Louvain-la-Neuve), 1990.
20. Th. Paul. Functions analytic on the half-plane as quantum mechanical states. J. Math. Phys., 25:3252-3263, 1984.
21. P. Schröder and W. Sweldens. Spherical Wavelets: Texture Processing. Technical report, University of South Carolina, 1995.
22. D. A. Varshalovich, A. N. Moskalev, and V. K. Khersonskii. Quantum Theory of Angular Momentum. World Scientific, 1988.
Building Your Own Wavelets at Home*

Wim Sweldens¹ and Peter Schröder²

¹ Bell Laboratories, Lucent Technologies, Murray Hill NJ 07974, U.S.A. wim@bell-labs.com
² Department of Computer Science, California Institute of Technology, Pasadena, CA 91125, U.S.A. ps@cs.caltech.edu
Part I: First Generation Wavelets

1 Introduction
Wavelets have been making an appearance in many pure and applied areas of science and engineering. Computer graphics with its many and varied computational problems has been no exception to this rule. In these notes we will attempt to motivate and explain the basic ideas behind wavelets and what makes them so successful in application areas. The main motivation behind the development of wavelets and the many related ideas (see Fig. 1) was the search for fast algorithms to compute compact representations of functions and data sets. How can such compact representations be achieved? There are many approaches, some more computationally intensive, others less, but they all amount to exploiting structure in the data or underlying functions. Depending on the application area this goes by different names such as the exploitation of "structure," "smoothness," "coherence," or "correlation." Of course, for purely random signals or data no compact representations can be found. But most of the time we are interested in realistic data and functions which do exhibit some smoothness or coherence. In these cases wavelets and the fast wavelet transform turn out to be very useful tools. While the name "wavelets" is relatively young (early 80's) the basic ideas have been around for a long time in many areas from abstract analysis to signal processing and theoretical physics. The main contribution of the wavelet field as such has been to bring together a number of similar ideas from different disciplines and create synergy between these techniques. The result is a flexible and powerful toolbox of algorithmic techniques combined with a solid underlying theory. Because of the different "parents" of wavelets, there are many ways to motivate their construction and understand their properties. One example is subband filtering from the area of signal processing, where the aim is to decompose a given signal into frequency bands. In this case filter design and Fourier analysis are essential tools. Researchers in approximation theory and abstract analysis were interested in the characterization of function spaces defined through various notions of smoothness. Yet others were attempting to build approximate eigenfunctions for certain integral operators to enhance their understanding of the underlying structures. Instead of retracing these developments we will focus primarily on the idea of coherence, or smoothness, and its exploitation to motivate and derive the wavelet transform together with a large class of different wavelets. The tool that we use to build wavelet transforms is called the lifting scheme [26, 27]. The main feature of the lifting scheme is that all constructions are derived in the spatial domain. This is in contrast to the traditional approach, which relies heavily on the frequency domain. Staying in the spatial domain leads to two major advantages. First, it does not require the machinery of Fourier analysis as a prerequisite. This leads to a more intuitively appealing treatment better suited to those interested in applications, rather than mathematical foundations. Secondly, lifting leads to algorithms that can easily be generalized to complex geometric situations which typically occur in computer graphics and in geosciences. This will lead to so-called "Second Generation Wavelets." The lifting scheme was developed in 1994, but has numerous connections with earlier developments and can even be traced back all the way to the Euclidean algorithm! The development of lifting was inspired by earlier work of Lounsbery et al. concerning wavelet transforms of meshes [18] and work of Donoho concerning interpolating wavelet transforms [13]. Both these developments are special cases of lifting. Lifting is also closely related to filter bank constructions of Vetterli and Herley [28] and local decompositions of Carnicer, Dahmen and Peña [2].

* W. Sweldens and P. Schröder. Wavelets in Computer Graphics, from SIGGRAPH '96 Course Notes, 1996, ACM Conference on Computer Graphics and Interactive Techniques; reprinted by permission of the authors.

Fig. 1. Many areas of science, engineering, and mathematics have contributed to the development of wavelets. Some of these are indicated surrounding the center bubble: time-frequency analysis, Besov spaces, multigrid, subband filtering, splines, subdivision, coherent states, transient analysis, adaptive gridding, multiresolution analysis, surfaces, integral equations, image compression.
To make the treatment as accessible as possible we will take a very "nuts and bolts" algorithmic approach. In particular we will initially ignore many of the mathematical details and introduce the basic techniques with a sequence of examples. Other sections will be devoted to more formal and rigorous mathematical descriptions of the underlying principles. These sections are marked with an asterisk and can be skipped on first reading. In this first part, we treat the classical or "First Generation Wavelets". We begin with a simple example of a wavelet transform to introduce the basic ideas. Later we introduce lifting in general and move on to the mathematical background. The second part is concerned with generalizations to more complex geometries and "Second Generation Wavelets". A word of caution is in order before we dive into the wavelet sea. Some readers might be familiar with other overview or tutorial material concerning wavelets. In most cases these expositions use the classical frequency domain framework. Since we are staying entirely in the spatial domain our exposition may initially look rather foreign. This is due to the fact that our approach relies entirely on the new lifting philosophy. However, we assure the reader that by the end of the first part the connections between lifting and the classical treatment will be apparent. We hope that as a result of this approach the reader will gain new insight into what makes wavelets "tick."

2 A Simple Example: The Haar Wavelet
Consider two numbers a and b and think of them as two neighboring samples of a sequence. So a and b have some correlation which we would like to take advantage of. We propose a well-known, simple linear transform which replaces a and b by their average s and difference d:

s = (a + b)/2,   d = b - a.   (1)

The idea is that if a and b are highly correlated, the expected absolute value of their difference d will be small and can be represented with fewer bits. In case that a = b the difference is simply zero. We have not lost any information because given s and d we can always recover a and b as:

a = s - d/2,   b = s + d/2.

These reconstruction formulas can be found by inverting a 2 x 2 matrix. This simple observation is the key behind the so-called Haar wavelet transform. Consider a signal s_n of 2^n sample values s_{n,l}: s_n = {s_{n,l} | 0 <= l < 2^n}. Apply the average and difference transform onto each pair a = s_{n,2l} and b = s_{n,2l+1}. There are 2^{n-1} such pairs (l = 0, ..., 2^{n-1} - 1); denote the results by s_{n-1,l} and d_{n-1,l}:

s_{n-1,l} = (s_{n,2l} + s_{n,2l+1})/2,   d_{n-1,l} = s_{n,2l+1} - s_{n,2l}.
Fig. 2. Structure of the wavelet transform: recursively split into averages and differences.
Fig. 3. Structure of the inverse wavelet transform: recursively merge averages and differences.
The input signal s_n, which has 2^n samples, is split into two signals: s_{n-1} with 2^{n-1} averages s_{n-1,l}, and d_{n-1} with 2^{n-1} differences d_{n-1,l}. Given the averages s_{n-1} and differences d_{n-1} one can recover the original signal s_n. We can think of the averages s_{n-1} as a coarser resolution representation of the signal s_n and of the differences d_{n-1} as the information needed to go from the coarser representation back to the original signal. If the original signal has some local coherence, e.g., if the samples are values of a smoothly varying function, then the coarse representation closely resembles the original signal and the detail is very small and thus can be represented efficiently. We can apply the same transform to the coarser signal s_{n-1} itself. By taking averages and differences, we can split it into a (yet) coarser signal s_{n-2} and another difference signal d_{n-2}, where each of them contains 2^{n-2} samples. We can do this n times before we run out of samples, see Fig. 2. This is the Haar transform. We end up with n detail signals d_j with 0 <= j <= n - 1, each with 2^j coefficients, and one signal s_0 on the very coarsest scale. The coarsest level signal s_0 contains only one sample s_{0,0}, which is the average of all the samples of the original signal, i.e., it is the DC component or zero frequency of the signal. By using the inverse transform we start from s_0 and d_j for 0 <= j < n and obtain s_n again. Note that the total number of coefficients after transform is 1 for s_0 plus 2^j for each d_j.
This adds up to

1 + \sum_{j=0}^{n-1} 2^j = 2^n,

which is exactly the number of samples of the original signal. The whole Haar transform can be thought of as applying an N x N matrix (N = 2^n) to the signal s_n. The cost of computing the transform is only proportional to N. This is remarkable, as in general a linear transformation of an N vector requires O(N^2) operations. Compare this to the Fast Fourier Transform, whose cost is O(N log N). It is the hierarchical structure of a wavelet transform which allows switching to and from the wavelet representation in O(N) operations.
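The full transform of this section can be coded in a few lines. The sketch below uses our own layout choices (the final average ends up in s[0], followed by the detail signals from coarse to fine) and repeatedly replaces the current smooth part by averages and differences:

```c
#include <assert.h>

/* Haar transform by repeated averaging/differencing; n must be a
   power of two, n <= 64 for this scratch buffer.  After the call,
   s[0] holds the overall average s_{0,0}, s[1] the coarsest
   difference, and so on up to the finest differences at the end. */
static void haar_forward(double *s, int n) {
    double tmp[64];
    for (int len = n; len > 1; len /= 2) {
        for (int l = 0; l < len / 2; l++) {
            double a = s[2 * l], b = s[2 * l + 1];
            tmp[l]           = (a + b) / 2.0; /* average s    */
            tmp[len / 2 + l] = b - a;         /* difference d */
        }
        for (int l = 0; l < len; l++) s[l] = tmp[l];
    }
}
```

For eight samples this runs three levels and touches 4 + 2 + 1 = 7 pairs, i.e. the work is proportional to N as claimed.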
3 Haar transform and Lifting
In this section we propose a new way of looking at the Haar transform. The novelty lies in the way we compute the difference and average of two numbers a and b. Assume we want to compute the whole transform in-place, i.e., without using auxiliary memory locations, by overwriting the locations that hold a and b with the values of respectively s and d. This cannot immediately be done with the formulas of (1). Indeed, assume we want to store s in the same location as a and d in the same location as b. Then the formulas (1) would lead to the wrong result: computing s first and overwriting a leads to a wrong d, while computing d first and overwriting b leads to a wrong s. We therefore suggest an implementation in two steps. First we only compute the difference:
d = b - a,

and store it in the location for b. As we have now lost the value of b, we next use a and the newly computed difference d to find the average as:

s = a + d/2.

This gives the same result because a + d/2 = a + (b - a)/2 = (a + b)/2. The advantage of the splitting into two steps is that we can overwrite b with d and a with s, requiring no auxiliary storage. A C-like implementation is given by

b -= a; a += b/2;

after which b contains the difference and a the average. The computations can be done in-place. Moreover we can immediately find the inverse without formally solving a 2 x 2 system: simply run the above code backwards (i.e., change the order and flip the signs)! Assume a contains the average and b the difference. Then

a -= b/2; b += a;

recovers the values a and b in their original memory locations. This particular scheme of writing a transform is a first, simple instance of the lifting scheme.
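Iterating the two in-place steps over all levels gives the whole transform, and the inverse literally runs the code backwards. A sketch (the stride-based indexing is our choice; it keeps every coefficient in its original memory slot):

```c
#include <assert.h>

/* In-place Haar transform: at stride 1 the pairs (s[0],s[1]),
   (s[2],s[3]), ... are lifted; at stride 2 the pairs (s[0],s[2]), ...;
   and so on.  n must be a power of two. */
static void lift_haar(double *s, int n) {
    for (int stride = 1; stride < n; stride *= 2)
        for (int l = 0; l + stride < n; l += 2 * stride) {
            s[l + stride] -= s[l];                /* b -= a  : difference */
            s[l]          += s[l + stride] / 2.0; /* a += b/2 : average   */
        }
}

/* Inverse: the same loops in reverse order with the signs flipped. */
static void unlift_haar(double *s, int n) {
    for (int stride = n / 2; stride >= 1; stride /= 2)
        for (int l = 0; l + stride < n; l += 2 * stride) {
            s[l]          -= s[l + stride] / 2.0;
            s[l + stride] += s[l];
        }
}
```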
Fig. 4. The lifting scheme, forward transform: first compute the detail as the failure of a prediction rule, then use that detail in an update rule to compute the coarse signal.
4 The Lifting Scheme
In this section we describe the lifting scheme in more detail. Consider a signal s_j with 2^j samples which we want to transform into a coarser signal s_{j-1} and a detail signal d_{j-1}. A typical case of a wavelet transform built through lifting consists of three steps: split, predict, and update. Let us discuss each stage in more detail.

- Split: This stage does not do much except for splitting the signal into two disjoint sets of samples. In our case one group consists of the even indexed samples s_{2l} and the other group consists of the odd indexed samples s_{2l+1}. Each group contains half as many samples as the original signal. The splitting into evens and odds is called the Lazy wavelet transform. We thus build an operator so that

(even_{j-1}, odd_{j-1}) := Split(s_j).

Remember that in the previous example a was an even sample while b was an odd sample.

- Predict: The even and odd subsets are interspersed. If the signal has a local correlation structure, the even and odd subsets will be highly correlated. In other words, given one of the two sets, it should be possible to predict the other one with reasonable accuracy. We always use the even set to predict the odd one. In the Haar case the prediction is particularly simple. An odd sample s_{j,2l+1} will use its left neighboring even sample s_{j,2l} as its predictor. We then let the detail d_{j-1,l} be the difference between the odd sample and its prediction:

d_{j-1,l} = s_{j,2l+1} - s_{j,2l},

which defines an operator P such that

d_{j-1} = odd_{j-1} - P(even_{j-1}).

As we already argued, it should be possible to represent the detail more efficiently. Note that if the original signal is a constant, then all details are exactly zero.
Fig. 5. The lifting scheme, inverse transform: first undo the update and recover the even samples, then add the prediction to the details and recover the odd samples.
- Update: One of the key properties of the coarser signals is that they have the same average value as the original signal, i.e., the quantity

S = 2^{-j} \sum_{l=0}^{2^j - 1} s_{j,l}

is independent of j. This results in the fact that the last coefficient s_{0,0} is the DC component or overall average of the signal. The update stage ensures this by letting

s_{j-1,l} = s_{j,2l} + d_{j-1,l}/2.

Substituting this definition we easily verify that

\sum_{l=0}^{2^{j-1}-1} s_{j-1,l} = \sum_{l=0}^{2^{j-1}-1} (s_{j,2l} + d_{j-1,l}/2) = 1/2 \sum_{l=0}^{2^{j-1}-1} (s_{j,2l} + s_{j,2l+1}) = 1/2 \sum_{l=0}^{2^j - 1} s_{j,l},

which defines an operator U of the form

s_{j-1} = even_{j-1} + U(d_{j-1}).

All this can be computed in-place: the even locations can be overwritten with the averages and the odd ones with the details. An abstract implementation is given by:

(even_{j-1}, odd_{j-1}) := Split(s_j);
odd_{j-1} -= P(even_{j-1});
even_{j-1} += U(odd_{j-1});

These three stages are depicted in a wiring diagram in Fig. 4. We can immediately build the inverse scheme, see the wiring diagram in Fig. 5. Again we have three stages:
- Undo update: Given d_{j-1} and s_{j-1} we can recover the even samples by simply subtracting the update information:

even_{j-1} = s_{j-1} - U(d_{j-1}).

In the case of Haar, we compute this by letting s_{j,2l} = s_{j-1,l} - d_{j-1,l}/2.

- Undo predict: Given even_{j-1} and d_{j-1} we can recover the odd samples by adding the prediction information:

odd_{j-1} = d_{j-1} + P(even_{j-1}).

In the case of Haar, we compute this by letting s_{j,2l+1} = d_{j-1,l} + s_{j,2l}.

- Merge: Now that we have the even and odd samples we simply have to zipper them together to recover the original signal. This is the inverse Lazy wavelet:

s_j = Merge(even_{j-1}, odd_{j-1}).

Assuming that the even slots contain the averages and the odd ones contain the differences, the implementation of the inverse transform is:

even_{j-1} -= U(odd_{j-1});
odd_{j-1} += P(even_{j-1});
s_j := Merge(even_{j-1}, odd_{j-1});

The inverse transform is thus always found by reversing the order of the operations and flipping the signs. The lifting scheme has a number of algorithmic advantages:

- In-place: all calculations can be performed in-place, which can result in important memory savings.
- Efficiency: in many cases the number of floating point operations needed to compute both smooth and detail parts is reduced since subexpressions are reused.
- Parallelism: "unrolling" a wavelet transform into a wiring diagram exhibits its inherent SIMD (Single Instruction Multiple Data stream) parallelism at all scales, with single write and multiple read semantics.
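The three stages and their inverses can be written down once and for all, with P and U plugged in. Below is a sketch in C (the signatures are our own; a real implementation would work in-place, while for clarity this one uses separate coarse/detail arrays):

```c
#include <assert.h>

typedef double (*pred_fn)(const double *even, int h, int l);
typedef double (*upd_fn)(const double *detail, int h, int l);

/* Forward: split, then d = odd - P(even), then s = even + U(d). */
static void lift_step(const double *s, int n, double *coarse,
                      double *detail, pred_fn P, upd_fn U) {
    int h = n / 2;
    for (int l = 0; l < h; l++) { coarse[l] = s[2*l]; detail[l] = s[2*l+1]; }
    for (int l = 0; l < h; l++) detail[l] -= P(coarse, h, l);
    for (int l = 0; l < h; l++) coarse[l] += U(detail, h, l);
}

/* Inverse: undo update, undo predict, merge.  Overwrites its inputs. */
static void unlift_step(double *coarse, double *detail, int h,
                        double *s, pred_fn P, upd_fn U) {
    for (int l = 0; l < h; l++) coarse[l] -= U(detail, h, l);
    for (int l = 0; l < h; l++) detail[l] += P(coarse, h, l);
    for (int l = 0; l < h; l++) { s[2*l] = coarse[l]; s[2*l+1] = detail[l]; }
}

/* Haar instances: predict odd by its left even neighbor, update by d/2. */
static double haar_P(const double *e, int h, int l) { (void)h; return e[l]; }
static double haar_U(const double *d, int h, int l) { (void)h; return d[l] / 2.0; }
```

Other transforms are obtained by swapping in different P and U functions, which is exactly the generality argued for below.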
But perhaps more importantly, lifting has some structural advantages which are both theoretically and practically relevant:

- Inverse transform: writing the wavelet transform as a sequence of elementary predict and update (lifting) steps, it is immediately obvious what the inverse transform is: simply run the code backwards. In the classical setting, the inverse transform can typically only be found with the help of Fourier techniques.
Fig. 6. Example of linear prediction. On the top left the original signal with a piecewise linear approximation. To its right is the coarser approximation based only on the even samples. Using the even samples to predict values at the odd locations based on a linear predictor is shown in the bottom left. The detail coefficients are defined as the difference between the prediction and the actual value at the odd locations (bottom right.) We may think of this as the failure of the signal to be locally like a first degree polynomial.
- Generality: this is the most important advantage. Since the design of the transform is performed without reference to Fourier techniques, it is very easy to extend it to settings in which, for example, samples are not placed evenly or constraints such as boundaries need to be incorporated. It also carries over directly to curves, surfaces and volumes.

It is for these reasons that we built our exposition entirely around the lifting scheme.
5 The Linear Wavelet Transform
One way to build other wavelet transforms is through the use of different predict and/or update steps. What is the incentive behind improving predict and update? The Haar transform uses a predictor which is correct in case the original signal is a constant. It eliminates zeroth order correlation. We say that the order of the predictor is one. Similarly, the order of the update operator is one as it preserves the average or zeroth order moment. In many cases it is desirable to have predictors which can exploit coherence beyond zeroth order correlation, and similarly it is often desirable to preserve higher order moments beyond the zeroth in the successively coarser versions of the function. In this section we build a predictor and update which are of order two. This means that the predictor will be exact in case the original signal is linear and the update will preserve the average and the first moment. This turns out to be fairly easy. For an odd sample s_{j,2l+1} we let the predictor be the average of the neighboring sample on the left (s_{j,2l}) and the neighboring sample on the right (s_{j,2l+2}). The detail coefficient is given by

d_{j-1,l} = s_{j,2l+1} - 1/2 (s_{j,2l} + s_{j,2l+2}).

Fig. 6 illustrates this idea. Notice that if the original signal is a first degree polynomial, i.e., if s_l = \alpha l + \beta for some \alpha and \beta, this prediction is always correct and all details are zero; the order of the predictor is N = 2. In other words, the detail coefficients measure to which extent the original signal fails to be linear. The expected value of their magnitudes is small. In terms of frequency content, the detail coefficients capture the high frequencies present in the original signal. In the update stage, we first assure that the average of the signal is preserved, or
\sum_l s_{j-1,l} = 1/2 \sum_l s_{j,l}.

We therefore update the even samples s_{j,2l} using the previously computed detail signals d_{j-1,l}. Again we use the neighboring wavelet coefficients and propose an update of the form:

s_{j-1,l} = s_{j,2l} + A (d_{j-1,l-1} + d_{j-1,l}).
s¢-l,t = sj,2t + A (dj-l,f-1 + dj-l,l). To find A we compute the average:
E sj-t,1 = E sZ2t + 2A E 1
l
dj-l,t ~ (1-
~A)E
l
sj,21 "~ 2 A E l
sJ,2l+l" l
From this we get A = 1/4 as the correct choice to maintain the average. Because of the symmetry of the update operator we also preserve the first order moment or
E
1 sj-l,l = 1/2 E l
l sLl" l
One step in the wavelet transform is shown in the scheme in Fig. 7. By iterating this scheme we get a complete wavelet transform. The inverse is as easy to compute, letting

s_{j,2l} = s_{j-1,l} - 1/4 (d_{j-1,l-1} + d_{j-1,l})

to recover the even, and

s_{j,2l+1} = d_{j-1,l} + 1/2 (s_{j,2l} + s_{j,2l+2})

to recover the odd samples.
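One step of this transform, with periodic boundaries (our choice; the text defers the treatment of edges to a later section), can be sketched as:

```c
#include <assert.h>
#include <math.h>

/* Forward step of the linear transform: predict an odd sample with the
   average of its two even neighbors, then update with A = 1/4.
   n must be even; h = n/2 coarse and detail coefficients result. */
static void linear_forward(const double *s, int n, double *c, double *d) {
    int h = n / 2;
    for (int l = 0; l < h; l++)   /* predict */
        d[l] = s[2*l + 1] - 0.5 * (s[2*l] + s[(2*l + 2) % n]);
    for (int l = 0; l < h; l++)   /* update */
        c[l] = s[2*l] + 0.25 * (d[(l - 1 + h) % h] + d[l]);
}

/* Inverse step: undo the update, then undo the predict. */
static void linear_inverse(const double *c, const double *d, int h, double *s) {
    int n = 2 * h;
    for (int l = 0; l < h; l++)
        s[2*l] = c[l] - 0.25 * (d[(l - 1 + h) % h] + d[l]);
    for (int l = 0; l < h; l++)
        s[2*l + 1] = d[l] + 0.5 * (s[2*l] + s[(2*l + 2) % n]);
}
```

On a linear signal the interior details vanish (only the periodic wrap-around produces a nonzero detail), and the coarse signal keeps the average of the input.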
Fig. 7. At the top the initial vector of coefficients. As a first step all odd locations have 1/2 of their neighboring even locations subtracted. In the second step all even locations get a contribution of 1/4 of their neighboring odd locations leaving the smooth and detail coefficients. Now one can recurse by repeating the same set of operations with a memory stride of 2, 4, 8, and so forth. In the end the entire transform sits in the original memory locations.
Remarks:
- In this section we have not mentioned anything about what should happen at the edges of the signal. For now one can assume that signals are either periodic or infinite. In a later section, we show how lifting can be used to correctly deal with edge effects.
- The wavelet transform presented above is the biorthogonal (2,2) of Cohen-Daubechies-Feauveau [5]. One might not immediately recognize this, but by substituting the predict in the update, one can check that the coarser coefficients are given by

s_{j-1,l} = -1/8 s_{j,2l-2} + 1/4 s_{j,2l-1} + 3/4 s_{j,2l} + 1/4 s_{j,2l+1} - 1/8 s_{j,2l+2}.

Note that, when written in this form (which is not lifting), the transform cannot be computed in-place. Also, to find the inverse transform one would have to rely on Fourier techniques. This is much less intuitive and does not generalize to irregular settings.
6 Subdivision Methods
In the previous section we used the ideas of predict and update to build wavelet transforms. As examples of prediction steps we saw constant prediction and linear prediction. In this section we will focus on subdivision, which is a powerful paradigm to build predictors. Subdivision is used extensively in CAGD to generate curves and surfaces. In that context it is used to refine a given mesh (1D or 2D) through a simple local procedure. Choosing this procedure carefully will result in an ever better approximation of some smooth limit curve or surface. Spline methods with the de Casteljau and the de Boor algorithms, as well as certain interpolating subdivisions, such as the method of Deslauriers-Dubuc, fall into this category [10, 11]. Considering subdivision methods as sources of predictors corresponds to focusing on the design of various forms of P function box(es) in wavelet transform wiring diagrams such as shown in Figs. 4 and 5. Later on we will see how to construct suitable U function boxes, but for now we will work in the setting without a U box, see Fig. 8. Equivalently, we may think of subdivision as an inverse wavelet transform with no detail coefficients. In this context, subdivision is often referred to as "the cascade algorithm." In later sections, we will see how for a given subdivision scheme one can define different ways to compute the detail coefficients.

Fig. 8. The simplest subdivision and by implication inverse wavelet transform is interpolating subdivision. Values at odd locations are computed as a function of some set of neighboring even locations. The even locations do not change in the process.

We begin by describing interpolating subdivision, which is often useful if one is interested in constructions which interpolate a given data set. This corresponds to constructions with the simplest form of the P function box, as shown in Fig. 8. Here subdivision corresponds to predicting new values at the odd positions s_{j+1,2k+1} at the next finer level, while all old values s_{j+1,2k} = s_{j,k} remain the same. Next we will consider a subdivision which is based on average interpolation and is related to the Haar transform and, through differentiation, to the interpolating transform. As the name suggests, this method interpolates local averages, rather than samples of a function. This method will result in a P box which will be put in front of the inverse Haar transform, as shown in Fig. 9. Finally we discuss cubic B-splines as another example of building predictors.
In this case we will see how powerful predictors can be built by allowing the prediction to consist of several elementary stages, including rescaling (see the division by 2 on the top wire in Fig. 10).
Wim Sweldens and Peter Schröder
Fig. 9. Average-interpolating subdivision can be used to enhance an existing inverse Haar transform. Based on a number of even neighbors a Haar detail signal is computed and added to the odd locations. Applying the usual inverse Haar transform yields average-interpolating functions.
Fig. 10. B-splines can also be built with lifting. For cubic B-splines this diagram results. It is an instance of multiple P stages and also introduces a new element, scaling.
Wavelets at Home
6.1 Interpolating Subdivision
Interpolating subdivision starts by considering the problem of building an interpolant for a given data sequence. For example, we might be given a sequence of samples of some unknown function at regular intervals and the task to fill in intermediate values in a smooth fashion. Deslauriers and Dubuc attacked this problem by defining a recursive procedure for finding the value of an interpolating function at all dyadic points [10, 11]. This algorithm proceeds by inserting a new predicted coefficient inbetween each pair of existing coefficients. Since none of the already existing coefficients gets changed, interpolation of the original data is assured. Perhaps the simplest way to set up such an interpolating subdivision scheme is the following. Let {s_{0,k}} with k ∈ Z be the original sample values. Now define a refined sequence of sample values recursively as

s_{j+1,2k} = s_{j,k}
s_{j+1,2k+1} = 1/2 (s_{j,k} + s_{j,k+1}),

and place the s_{j,k} at locations x_{j,k} = k 2^{-j}. In words, new values are inserted halfway between old values by linearly interpolating the two neighboring old values (see Fig. 11). This subdivision rule was already used in the linear prediction transformation in Section 5. In the limit, values at all dyadic points will be defined and the function can be extended to a continuous function over all real numbers. The result is a piecewise linear interpolation of the original sample values.

Suppose the initial sample values given to us were actually samples of a linear polynomial. In that case our subdivision scheme will exactly reproduce that linear polynomial. We then say that the order of the subdivision scheme is 2. The order of polynomial reproduction is important in quantifying the quality of a subdivision (or prediction) scheme. To see how to build more powerful versions of such subdivisions we look at the procedure above in a slightly different light. Instead of thinking of it as averaging we can describe it via the construction of an interpolating polynomial. Given data s_{j,k} and s_{j,k+1} at x_{j,k} = k 2^{-j} and x_{j,k+1} = (k+1) 2^{-j}, determine the unique linear polynomial which interpolates this data. Now define the new coefficient s_{j+1,2k+1} as the value that this polynomial takes on at x_{j+1,2k+1} = (2k+1) 2^{-(j+1)}, which is halfway between k 2^{-j} and (k+1) 2^{-j}. Given this way of looking at the problem, an approach to building higher order predictors immediately suggests itself: instead of using only immediate neighbors to build a linear interpolating polynomial, use more neighbors on either side to build higher order interpolating polynomials and define the new value by evaluating the resulting polynomial at the new midpoint. For example, we can use two neighboring values on either side and define the (unique) cubic polynomial p(x) which interpolates those four values

s_{j,k-1} = p(x_{j,k-1})
s_{j,k} = p(x_{j,k})
Fig. 11. The linear subdivision, or prediction, step inserts new values inbetween the old values by averaging the two old neighbors. Repeating this process leads in the limit to a piecewise linear interpolation of the original data.
Fig. 12. On the left, the linear subdivision (or prediction) step inserts new values inbetween the old values by averaging the two old neighbors. On the right cubic polynomials are used for every quad of old values to determine a new inbetween value.
s_{j,k+1} = p(x_{j,k+1})
s_{j,k+2} = p(x_{j,k+2}).

The new coefficient at x_{j+1,2k+1} is defined to be the value that this cubic polynomial takes on at the midpoint. With the old samples untouched we get

s_{j+1,2k} = s_{j,k}
s_{j+1,2k+1} = p(x_{j+1,2k+1}).

Note that the polynomial is in general a different one for each successive set of 4 old values. Fig. 12 illustrates these ideas with the linear prediction on the left side and the cubic prediction on the right. Observing that this unique cubic interpolating polynomial is itself a linear function of the values {s_{j,k-1}, s_{j,k}, s_{j,k+1}, s_{j,k+2}} we find, after some algebraic manipulation, and under the assumption that the associated x_{j,k} are equally spaced, the new value s_{j+1,2k+1} to be a simple weighting of {s_{j,k-1}, s_{j,k}, s_{j,k+1}, s_{j,k+2}} with weights {-1/16, 9/16, 9/16, -1/16}. This stencil is well known in the CAGD literature as the 4-point scheme [16]. In general we use N (N = 2D even) samples and build a polynomial of degree N - 1. For each group of N = 2D coefficients {s_{j,k-D+1}, ..., s_{j,k}, ..., s_{j,k+D}}, the scheme involves two steps:

1. Construct a polynomial p of degree N - 1 so that

   s_{j,k+l} = p(x_{j,k+l}) for -D + 1 ≤ l ≤ D.

2. Calculate one coefficient on the next finer level as the value of this polynomial at x_{j+1,2k+1}:

   s_{j+1,2k+1} = p(x_{j+1,2k+1}).
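These two steps can be sketched directly in code. The following minimal NumPy illustration (not from the original text) predicts the odd samples of one subdivision step by fitting the degree N-1 polynomial through N neighboring samples and evaluating it at the midpoint; the function name `interp_predict` and the periodic boundary handling are assumptions made purely for this sketch.

```python
import numpy as np

def interp_predict(s, N=4):
    """One interpolating-subdivision step of order N (N even).

    Odd samples are predicted by the degree N-1 polynomial through the
    N neighboring even samples, evaluated at the midpoint.  Boundaries
    are handled periodically here, purely for simplicity.
    """
    D = N // 2
    n = len(s)
    out = np.empty(2 * n)
    out[0::2] = s                                  # even samples stay (interpolation)
    for k in range(n):
        idx = np.arange(k - D + 1, k + D + 1)      # N neighboring sample indices
        ys = np.array([s[i % n] for i in idx])     # periodic wrap at the ends
        p = np.polyfit(idx.astype(float), ys, N - 1)
        out[2 * k + 1] = np.polyval(p, k + 0.5)    # evaluate at the new midpoint
    return out
```

Running this with N = 4 on samples of a cubic polynomial reproduces the cubic exactly (away from the periodic seam), and feeding it a delta sequence recovers the 4-point weights {-1/16, 9/16, 9/16, -1/16}.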
Fig. 13. Scaling functions resulting from interpolating subdivision. Going from left to right, top to bottom the order N of the subdivision is 2, 4, 6, and 8. Each function takes on the value one at the origin and zero at all other integers. Note that the functions have support, i.e., they are non-zero, over somewhat larger intervals than shown here.
We say that the order of the subdivision scheme is N. What makes interpolating subdivision so attractive from an implementation point of view is that we only need a routine which can evaluate an interpolating polynomial at a single location given some number of sample values and locations. The new sample value is defined through evaluation of this polynomial at the new, refined location. A particularly efficient (and stable) procedure for this is Neville's algorithm [24, 21]. If all samples are evenly spaced, this polynomial and the associated weights need to be computed only once and can be used from then on. Notice also that nothing in the definition of this procedure requires the original samples to be located at integers. This feature can be used to define scaling functions over irregular subdivisions. Interval boundaries for finite sequences are also easily accommodated. We will come back to these observations in Part II.
6.2 Interpolating Scaling Functions
Here we formally define the notion of scaling functions. Each coefficient s_{j,k} has one scaling function denoted as φ_{j,k}(x) associated with it. This scaling function is defined as follows: set all s_{j,l} on level j equal to zero except for s_{j,k}, which is set to 1. Now run the interpolating subdivision scheme starting from level
j ad infinitum. The resulting limit function is φ_{j,k}(x). Consider some original sequence of sample values s_{j,k} at level j. Simply using linear superposition and starting the subdivision scheme at level j yields a limit function f(x) of the form

f(x) = Σ_k s_{j,k} φ_{j,k}(x).
If the sample locations are regularly spaced (x_{j,k} = k 2^{-j}) it is easy to see that all scaling functions are translates and dilates of one fixed function φ(x) = φ_{0,0}(x):

φ_{j,k}(x) = φ(2^j x - k).
This function is also called the fundamental solution of the subdivision scheme. Fig. 13 shows the scaling functions φ_{0,0} which result from the interpolating subdivision of order 2, 4, 6, and 8 (left to right, top to bottom). In the case of linear interpolation it is easy to see that the fundamental solution is the well known piecewise linear hat function. Perhaps surprisingly, the fundamental solution of the cubic scheme described above is not a (piecewise) cubic polynomial. However, it is still true that the subdivision process will reproduce all cubic polynomials: if the initial sequence of samples came from a cubic polynomial P(x) then all refinement steps will use exactly that polynomial p(x) = P(x) (due to uniqueness) to define new intermediate values. As a consequence all new points will be samples of the original polynomial, in the limit reproducing the original cubic. Equivalently we say that any cubic polynomial can be written as a linear combination of scaling functions. The properties of φ(x) are:

1. Compact support: φ(x) is exactly zero outside the interval [-N + 1, N - 1]. This easily follows from the locality of the subdivision scheme.
2. Interpolation: φ(x) is interpolating in the sense that φ(0) = 1 and φ(k) = 0 for k ≠ 0. This immediately follows from the construction.
3. Polynomial reproduction: Polynomials up to degree N - 1 can be expressed as linear combinations of scaling functions. More precisely:

   Σ_k (k 2^{-j})^p φ_{j,k}(x) = x^p for 0 ≤ p < N.

   This can be seen by starting the scheme on level j with the sequence (k 2^{-j})^p and using the fact that the subdivision definition insures the reproduction of polynomials up to degree N - 1.
4. Smoothness: Typically φ_{j,k} ∈ C^α where α = α(N). We know that α(4) < 2 and α(6) < 2.830 (strict bounds). Also, for large N the smoothness increases linearly as .2075 N. This fact is much less trivial than the previous ones. We refer to [10, 11] and [9, p. 226].
5. Refinability: This means the scaling function satisfies a refinement relation of the form

   φ(x) = Σ_{l=-N}^{N} h_l φ(2x - l).
This can be seen as follows. Do one step in the subdivision starting from s_{0,k} = δ_{k,0}. Call the result h_l = s_{1,l}. It is easy to see that only 2N + 1 coefficients h_l are non-zero. Now start the subdivision scheme from level 1 with these values s_{1,l}. The refinement relation follows from the fact that this should give the same result as starting from level 0 with the values s_{0,k}. Also, because of interpolation, it follows that h_{2l} = δ_{0,l}. We refer to the h_l as filter coefficients. With a change of variables in the refinement relation we get
φ_{j,k}(x) = Σ_l h_{l-2k} φ_{j+1,l}(x).    (2)
In the case of linear subdivision, the filter coefficients are h_l = {1/2, 1, 1/2}. The associated scaling function is the familiar linear B-spline "hat" function. The cubic case leads to the filter h_l = {-1/16, 0, 9/16, 1, 9/16, 0, -1/16}. For those familiar with the more traditional treatment of wavelets, the h_l coefficients describe the impulse response sequence of the low-pass filter used in the inverse wavelet transform. Once we have the filter coefficients, we can write the subdivision as
s_{j+1,l} = Σ_k h_{l-2k} s_{j,k}.
We see that because h_{2l} = δ_{0,l}, the subdivision scheme is interpolating, i.e., even indexed samples remain unchanged. Note that the h_l are used in a refinement relation to go from finer level scaling functions to coarser level scaling functions, while during subdivision the same h_l are used to go from coarser level samples to finer level samples.
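The filter form of subdivision is easy to check numerically. The sketch below is an illustration, not part of the original text; the helper name `subdivide` and the output alignment convention are assumptions. It places a shifted copy of the filter at every coarse sample and sums, which is exactly s_{j+1,l} = Σ_k h_{l-2k} s_{j,k}; values near the ends are boundary artifacts and not meaningful here.

```python
import numpy as np

def subdivide(s, h, center):
    """One subdivision step s_{j+1,l} = sum_k h_{l-2k} s_{j,k}.

    `h` holds the filter coefficients and `center` is the index of h_0
    within `h`.  The output is trimmed to 2*len(s) samples aligned with
    the input; samples near the ends are boundary artifacts.
    """
    n = len(s)
    out = np.zeros(2 * n + len(h))
    for k, sk in enumerate(s):
        out[2 * k : 2 * k + len(h)] += h * sk   # shifted filter copy per sample
    return out[center : center + 2 * n]

h_linear = np.array([1/2, 1, 1/2])                        # hat function filter
h_cubic  = np.array([-1/16, 0, 9/16, 1, 9/16, 0, -1/16])  # cubic (4-point) filter
```

With `h_linear` the odd outputs are neighbor averages; with `h_cubic` the even outputs reproduce the input (h_{2l} = δ_{0,l} in action) and cubic data is refined exactly.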
6.3 Average-Interpolating Subdivision
In contrast to the interpolating subdivision scheme of Deslauriers-Dubuc we now consider another subdivision scheme: average-interpolation as introduced by Donoho [12]. To begin with we focus on the basic idea, i.e., how to produce the new values at the finer level directly from the old values at the coarser level. At the end of this section, we will fit average-interpolation into lifting as shown in the wiring diagram of Fig. 9. The starting point of interpolating subdivision was a set of samples of some function. Average-interpolation can be motivated similarly. Suppose that instead of samples we are given averages of some unknown function over intervals

s_{0,k} = ∫_k^{k+1} f(x) dx.
For purposes of exposition we will assume that the intervals are all unit sized, a restriction which will be removed in the second generation setting (see Part II). Such values might arise from a physical device which does not perform point sampling but integration, as is done, for example, by a CCD cell (to a first approximation). How can we use such values to define a function whose averages
Fig. 14. Examples of average-interpolation. On the left a diagram showing the constant average-interpolation scheme. Each subinterval gets the average of a constant function defined by the parent interval. This is what happens in the inverse Haar transform if the detail coefficients are zero. On the right the same idea is applied to higher order average-interpolation using a neighboring interval on either side. The unique quadratic polynomial which has the correct averages over one such triple is used to compute the averages over the subintervals of the middle interval. This process is repeated ad infinitum to define the limit function.
are the measurement values given to us? One obvious answer is to use these values to define a piecewise constant function which takes on the value s_{0,k} for x ∈ [k, k + 1]. This corresponds to the following constant average-interpolation scheme

s_{j+1,2k} = s_{j,k}
s_{j+1,2k+1} = s_{j,k}.
This is the inverse Haar transform with all detail coefficients equal to zero. Cascading this procedure ad infinitum we get a function which is defined everywhere and is piecewise constant. Furthermore its averages over intervals [k, k + 1] match the observed averages. The disadvantage of this simple scheme is that the limit function is not smooth. In order to understand how to increase the smoothness of such a reconstruction we define a general average-interpolating procedure. One way to think about the previous scheme is to describe it as follows. We assume that the (unknown) function we are dealing with is a constant polynomial over the interval [k, k + 1]. The values of s_{j+1,2k} and s_{j+1,2k+1} then follow as the averages of this polynomial over the respective subintervals. The diagram on the left side of Fig. 14 illustrates this scheme. Just as before we can extend this idea to higher order polynomials. The next natural choice is quadratic. For a given interval consider the intervals to its left
and right. Define the (unique) quadratic polynomial p(x) such that

s_{j,k-1} = 2^j ∫_{(k-1)2^{-j}}^{k 2^{-j}} p(x) dx
s_{j,k} = 2^j ∫_{k 2^{-j}}^{(k+1)2^{-j}} p(x) dx
s_{j,k+1} = 2^j ∫_{(k+1)2^{-j}}^{(k+2)2^{-j}} p(x) dx.
Now compute s_{j+1,2k} and s_{j+1,2k+1} as the average of this polynomial over the left and right subintervals of [k 2^{-j}, (k+1)2^{-j}]:

s_{j+1,2k} = 2^{j+1} ∫_{k 2^{-j}}^{(k+1/2)2^{-j}} p(x) dx
s_{j+1,2k+1} = 2^{j+1} ∫_{(k+1/2)2^{-j}}^{(k+1)2^{-j}} p(x) dx.
Fig. 14 (right side) shows this procedure. It is not immediately clear what the limit function of this process will look like, but it is easy to see that the procedure will reproduce quadratic polynomials. The argument is the same as in the interpolating case: assume that the initial averages {s_{0,k}} were averages of a given quadratic polynomial q(x). In that case the unique polynomial p(x) which has the prescribed averages over each triple of intervals will always be that same polynomial q(x) which gave rise to the initial set of averages. Since the interval sizes go to zero and the averages over the intervals approach the value of the underlying function in the limit, the original quadratic polynomial q(x) will be reproduced. We can define the scaling function exactly the same way as in the interpolating subdivision case. In general we use N intervals (N odd) to construct a polynomial of degree N - 1. The order of the subdivision scheme is given by N. Fig. 15 shows the scaling functions of order 1, 3, 5, and 7 (left to right, top to bottom). This scheme also has the virtue that it is very easy to implement. The conditions on the integrals of the polynomial result in an easily solvable linear system relating the coefficients of p(x) to the s_{j,k}. In its simplest form (we will see more general versions later on) we can streamline this computation even further by taking advantage of the fact that the integral of a polynomial p(x) is itself another polynomial. With P(x) = ∫_{(k-1)2^{-j}}^x p(y) dy this leads to another interpolation problem

0 = P((k-1)2^{-j})
2^{-j} s_{j,k-1} = P(k 2^{-j})
2^{-j} (s_{j,k-1} + s_{j,k}) = P((k+1)2^{-j})
2^{-j} (s_{j,k-1} + s_{j,k} + s_{j,k+1}) = P((k+2)2^{-j}).
Fig. 15. Scaling functions which result from average-interpolation. Going from left to right, top to bottom the orders of the respective subdivision schemes were 1, 3, 5, and 7.
Given such a polynomial P(x) the finer averages become

s_{j+1,2k} = 2^{j+1} (P((k+1/2)2^{-j}) - P(k 2^{-j}))
s_{j+1,2k+1} = 2^{j+1} (P((k+1)2^{-j}) - P((k+1/2)2^{-j})).
This computation, just like the earlier interpolating subdivision, can be implemented in a stable and efficient way with Neville's algorithm. More generally we define the average-interpolating subdivision scheme of order N as follows. For each group of N = 2D + 1 coefficients {s_{j,k-D}, ..., s_{j,k}, ..., s_{j,k+D}}, it involves two steps:

1. Construct a polynomial p(x) of degree N - 1 so that

   s_{j,k+l} = 2^j ∫_{(k+l)2^{-j}}^{(k+l+1)2^{-j}} p(x) dx for -D ≤ l ≤ D.

2. Calculate two coefficients on the next finer level as

   s_{j+1,2k} = 2^{j+1} ∫_{k 2^{-j}}^{(k+1/2)2^{-j}} p(x) dx
   s_{j+1,2k+1} = 2^{j+1} ∫_{(k+1/2)2^{-j}}^{(k+1)2^{-j}} p(x) dx.
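For unit-sized intervals (j = 0 in the formulas above, so the powers of 2 drop out) these two steps can be sketched as follows. This is an illustration, not code from the original text; the helper name `avg_interp_predict` and the use of `np.polyfit` on cumulative sums (in place of Neville's algorithm) are choices made for this sketch, and boundaries are simply left undefined.

```python
import numpy as np

def avg_interp_predict(s, N=3):
    """One average-interpolating subdivision step of order N (N odd).

    Each s[k] is the average of an unknown function over [k, k+1].
    The primitive P(x) of the degree N-1 polynomial p interpolates the
    cumulative sums of the averages, so we fit P directly and read off
    the two half-interval averages.  Interior intervals only.
    """
    D = N // 2
    n = len(s)
    out = np.full(2 * n, np.nan)
    for k in range(D, n - D):
        xs = np.arange(k - D, k + D + 2, dtype=float)      # N+1 interval endpoints
        cums = np.concatenate(([0.0], np.cumsum(s[k - D:k + D + 1])))
        P = np.polyfit(xs, cums, N)                        # primitive, degree N
        Pl, Pm, Pr = np.polyval(P, [k, k + 0.5, k + 1.0])
        out[2 * k] = 2 * (Pm - Pl)                         # left half average
        out[2 * k + 1] = 2 * (Pr - Pm)                     # right half average
    return out
```

By construction (out[2k] + out[2k+1])/2 equals s[k], so the coarse averages are preserved, and for N = 3 the scheme is exact on quadratics.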
These definitions will also work if the intervals are not on a dyadic grid but have unequal size. In that case one has to carefully account for the given interval width when computing the averages. However, nothing fundamental changes, even in the presence of boundaries or weighted measures. We will turn to these generalizations in Part II. There is one issue we have not yet addressed (see the remark in the first paragraph of this section). The equations we have given so far do not fit into the lifting scheme framework, since one cannot simply overwrite s_{j,k} with s_{j+1,2k} as is desirable. Instead we use an already existing inverse Haar transform and use the average-interpolating subdivision as a P box before entering the inverse Haar transform. The diagram in Fig. 9 illustrates this setup. Instead of computing s_{j+1,2k} and s_{j+1,2k+1} directly we will compute their difference d_{j,k} = s_{j+1,2k+1} - s_{j+1,2k} and feed this as a difference signal into the inverse Haar transform. Given that the average of s_{j+1,2k} and s_{j+1,2k+1} is s_{j,k}, it follows that the inverse Haar transform, when given s_{j,k} and d_{j,k}, will compute s_{j+1,2k} and s_{j+1,2k+1} as desired. This leads to a transform with three lifting steps (the average-interpolating prediction, the Haar update, and the Haar prediction), and all the advantages of lifting, such as in-place computation and easy invertibility, remain.
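In the quadratic (order 3) case the P box has a particularly simple closed form: working out the polynomial conditions gives the predicted detail d_{j,k} = (s_{j,k+1} - s_{j,k-1})/4. The sketch below (an illustration with an assumed function name, interior coefficients only) feeds this detail into the ordinary inverse Haar transform.

```python
import numpy as np

def lifted_avg_interp_step(s):
    """One inverse-transform step of quadratic average-interpolation
    written as lifting: a P box predicts the Haar detail
    d_k = (s[k+1] - s[k-1]) / 4, then the inverse Haar transform turns
    each (average, detail) pair into the two finer samples.
    """
    n = len(s)
    out = np.full(2 * n, np.nan)
    for k in range(1, n - 1):
        d = (s[k + 1] - s[k - 1]) / 4    # P box: predicted Haar detail
        out[2 * k] = s[k] - d / 2        # inverse Haar: average minus half detail
        out[2 * k + 1] = s[k] + d / 2    # inverse Haar: average plus half detail
    return out
```

The result agrees with the direct two-step computation above; setting d = 0 instead recovers the constant (pure Haar) scheme.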
6.4 Average-Interpolating Scaling Functions
Scaling functions are defined exactly the same way as in the interpolating case. If the samples are regularly spaced, they are translates and dilates of one fixed function φ(x). The limit function f(x) of the subdivision scheme is given by
f(x) = Σ_k s_{j,k} φ_{j,k}(x).

The properties of the associated scaling function are as follows:

1. Compact support: φ(x) is exactly zero outside the interval [-N + 1, N]. This easily follows from the locality of the subdivision scheme.
2. Average-interpolation: φ(x) is average-interpolating in the sense that

   ∫_k^{k+1} φ(x) dx = δ_{k,0}.

   This immediately follows from the definition.
3. Polynomial reproduction: φ(x) reproduces polynomials up to degree N - 1. In other words

   Σ_k 1/(p+1) ((k+1)^{p+1} - k^{p+1}) φ(x - k) = x^p for 0 ≤ p < N.

   This can be seen by starting the scheme with this particular coefficient sequence and using the fact that the subdivision reproduces polynomials up to degree N - 1.
4. Smoothness: φ(x) is continuous of order R, with R = R(N) > 0. One can show that R(3) > .678, R(5) > 1.272, R(7) > 1.826, R(9) ≥ 2.354, and asymptotically R(N) ≈ .2075 N [12].
5. Refinability: φ(x) satisfies a refinement relation of the form

   φ(x) = Σ_{l=-N+1}^{N} h_l φ(2x - l).

   This follows from similar reasoning as in the interpolating case, starting from s_{0,k} = δ_{0,k}. As before, subdivision can be written as

   s_{j+1,l} = Σ_k h_{l-2k} s_{j,k}.
In the case of quadratic subdivision, the filter coefficients are h_l = {-1/8, 1/8, 1, 1, 1/8, -1/8}. For quartic subdivision h_l = {3/128, -3/128, -11/64, 11/64, 1, 1, 11/64, -11/64, -3/128, 3/128} results. These constructions are part of the biorthogonal family of scaling functions described by Cohen-Daubechies-Feauveau [5], where they are named (1,3) and (1,5) respectively. An interesting connection between Deslauriers-Dubuc interpolation and average-interpolation was pointed out by Donoho [12, Lemma 2.2]: Given some sequence {s_{0,k}}, apply interpolating subdivision of order N = 2D to it. Comparing this to the sequence that results from applying average-interpolation of order N' = 2D - 1 to the sequence {s'_{0,k} = s_{0,k+1} - s_{0,k}} we find s'_{j,k} = 2^j (s_{j,k+1} - s_{j,k}) (once again assuming equal sized intervals). This observation follows directly from the fact that the average of the derivative of an interpolating polynomial p is simply the difference of the values that the polynomial takes on at the end and start of an interval, divided by the size of the interval. As a consequence we have
dφ^I(x)/dx = φ^{AI}(x + 1) - φ^{AI}(x),    (3)
where the superscripts stand for I-nterpolating scaling function and A-verage I-nterpolating scaling function. This provides a simple recipe for computing the derivative of a function f(x) which results from interpolation of some sequence {s_{0,k}}: simply take successive differences first and then apply the average-interpolating subdivision of one order less. Obviously, one could also take the differences and then apply an interpolating subdivision method, not necessarily average-interpolation. In that case the limit function would be an approximation of the derivative, while Equation 3 holds exactly.

6.5 B-Spline Subdivision
B-splines and in particular cubic B-splines are a very common primitive in computer graphics. Their popularity stems from the fact that they are C^2, which
Fig. 16. Cubic B-spline scaling function.
is important in many rendering applications to ensure smooth shading. They also possess the convex hull property and are variation diminishing, which makes them a favorite for curve and surface modeling tasks. In this section we will show how cubic B-splines can be used as a predictor in wavelet constructions. In order to keep the exposition simple we will only consider cardinal cubic B-splines here, i.e., all control points are associated with integer parameter positions. We will also assume a bi-infinite sequence for now and only discuss the boundary case later. Given a set of cubic B-spline control points at the integers {s_{0,k}}, subdivision tells us how to find a set of control points at the half integers which describe the same underlying B-spline curve. Repeating this process as before, the control points become dense and converge to the actual curve (see Fig. 16). Typically this subdivision rule is described through a convolution with the sequence {h_k} = 1/8 {1, 4, 6, 4, 1}, i.e.,

s_{j+1,2k} = 1/8 (s_{j,k-1} + 6 s_{j,k} + s_{j,k+1})
s_{j+1,2k+1} = 1/8 (4 s_{j,k} + 4 s_{j,k+1}).

As stated, this implementation of cubic B-spline subdivision does not fit into our wiring diagram framework, since it cannot be executed in place: the memory locations for s_{j,k} and s_{j+1,2k} are identical. Computing s_{j+1,2k} will leave us with the wrong values needed for the computation of s_{j+1,2k+2}, for example! This is the first instance of a subdivision method which requires a sequence of elementary P function boxes, as well as a new primitive: scaling. Scaling fits into the lifting philosophy as it can be done in-place and is trivially inverted. There are a number of advantages to implementing the cubic B-spline subdivision in that way. Aside from allowing us to do the computation in place, it will be maximally parallel and, as we will see later, make it trivial to derive elementary wavelets (and through further lifting an entire class of wavelets). We begin by averaging even coefficients into odd locations

s_{j+1,2k+1} = 1/2 (s_{j,k} + s_{j,k+1}).
This is the P_odd box in the diagram of Fig. 10. Next apply a P_even function box to get

s_{j+1,2k} = s_{j,k} + 1/2 (s_{j+1,2k-1} + s_{j+1,2k+1}).

Finally, dividing all even locations by 2,

s_{j+1,2k} = s_{j+1,2k} / 2,

we get our desired sequence. Substitution of the definition of the odd locations immediately verifies that the result is as intended:

s_{j+1,2k} = 1/2 (1/4 s_{j,k-1} + 6/4 s_{j,k} + 1/4 s_{j,k+1}).

We have succeeded in building cubic B-spline scaling functions with a simple in-place computation which only involves immediate neighbors. As we will see later, choosing an appropriate U function box to put before this inverse transform (see Fig. 10) can give us associated spline wavelets.
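The three lifting stages can be sketched as follows. This is an illustration with an assumed function name; interior samples only, and an explicit output array stands in for the in-place computation.

```python
import numpy as np

def bspline_subdivide(s):
    """One cubic B-spline subdivision step built from lifting stages:
    a P_odd average, a P_even average, and a scaling of the even samples.
    Interior samples agree with convolution by {1, 4, 6, 4, 1}/8.
    """
    n = len(s)
    out = np.full(2 * n, np.nan)
    out[0::2] = s                                       # evens start as coarse samples
    for k in range(n - 1):                              # P_odd: average even neighbors
        out[2 * k + 1] = (out[2 * k] + out[2 * k + 2]) / 2
    for k in range(1, n - 1):                           # P_even: add average of new odds
        out[2 * k] += (out[2 * k - 1] + out[2 * k + 1]) / 2
    out[2:2 * n - 2:2] /= 2                             # scaling of the interior evens
    return out
```

For the interior the result matches the convolution rule: out[2k] = (s[k-1] + 6 s[k] + s[k+1])/8 and out[2k+1] = (s[k] + s[k+1])/2.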
6.6 The Next Step
We have just seen a number of ways to generate scaling functions which fit well with the overall philosophy of lifting and give the practitioner a rich set of constructions from which to choose. Before we discuss the construction of the associated wavelets, and how they "drop out of" the wiring diagrams, we first consider the notion of a multiresolution analysis more formally in the following section.

7 Multiresolution Analysis
In this section, we will go into some more mathematical detail about multiresolution analysis as originally conceived by Mallat and Meyer [20, 19]. In the previous section we defined the notion of a scaling function φ(x) and saw how all scaling functions are simply translates and dilates of one fixed function: φ_{j,l}(x) = φ(2^j x - l). In this section we will use these functions to build a multiresolution analysis. Assume we start a subdivision scheme of the previous section on level j from a sequence {s_{j,l}}. Because of linearity, we can write the resulting limit function s_j(x) as

s_j(x) = Σ_l s_{j,l} φ_{j,l}(x).
The scaling functions have compact support, so that for a fixed x the summation only involves a finite number of terms. We next define the space V_j of all limit functions obtained from starting the subdivision on level j. This is the linear space spanned by the scaling functions φ_{j,l} with l ∈ Z:

V_j = span{φ_{j,l}(x) | l ∈ Z}.
For example, if φ(x) is the Haar scaling function, then V_j is the space of functions which are piecewise constant on the intervals [k 2^{-j}, (k+1) 2^{-j}) with k ∈ Z. If φ(x) is the piecewise linear hat (or tent) function, V_j is the space of continuous functions which are piecewise linear on the intervals [k 2^{-j}, (k+1) 2^{-j}) with k ∈ Z. The different V_j spaces satisfy the following properties which make them a multiresolution analysis:
1. Nestedness: V_j ⊂ V_{j+1}.
2. Translation: if f(x) ∈ V_j then f(x + k 2^{-j}) ∈ V_j.
3. Dilation: if f(x) ∈ V_j then f(2x) ∈ V_{j+1}.
4. Completeness: every function f(x) of finite energy (∈ L^2) can be approximated with arbitrary precision by a function from V_j for suitably high j.

The nestedness property follows immediately from the refinement relation. If we can write φ(x) as a linear combination of the φ(2x - k), then we can also write φ_{j,l} as a linear combination of the φ_{j+1,k} and thus V_j ⊂ V_{j+1}. The translation and dilation properties follow from the definition of φ_{j,l}. The proof of the completeness property is beyond the scope of this tutorial. We refer to [9] for more details. A very important property of a multiresolution analysis is the order. We say that the order of a multiresolution analysis is N in case every polynomial p(x) of degree strictly less than N can be written as a linear combination of scaling functions of a given level. In other words, polynomials of degree less than N belong to all spaces V_j. For example, in case of the Haar multiresolution analysis, N = 1: constant functions belong to all V_j spaces. In case of the hat function, N = 2 because all linear functions belong to all V_j spaces. Note that the order of a multiresolution analysis is the same as the order of the predictor used to build the scaling functions. This concludes the discussion of subdivision, scaling functions, and multiresolution. Remember that the original motivation for treating subdivision was that it provides predictors which readily fit into the lifting framework. In the next section we turn to the other main component of lifting: update.
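For the hat function the nestedness argument can be checked numerically: the refinement relation with filter {1/2, 1, 1/2} expresses the coarse hat exactly in terms of three finer-scale hats. A small sketch (an illustration only, with an assumed helper name):

```python
import numpy as np

def hat(x):
    """Piecewise linear hat function, the order-2 scaling function."""
    return np.maximum(0.0, 1.0 - np.abs(x))

# Refinement relation phi(x) = 1/2 phi(2x+1) + phi(2x) + 1/2 phi(2x-1):
# the coarse hat lies in the span of the finer-scale hats, so V_j is
# contained in V_{j+1}.
x = np.linspace(-2.0, 2.0, 401)
lhs = hat(x)
rhs = 0.5 * hat(2 * x + 1) + hat(2 * x) + 0.5 * hat(2 * x - 1)
```

The identity holds pointwise, so `lhs` and `rhs` agree to machine precision.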
8 Update Methods
We have seen several examples of wiring diagram constructions. In the very beginning of this part we started with the Haar forward and inverse transform. In that case it was clear from inspection what the update box had to do, given that we wanted the coarser signal to be an average of the finer signal. For the linear transform, finding the update box required some algebraic manipulations. Generally update boxes are designed to ensure that the coarser signal has the same average as the higher resolution signal. In the section on subdivision methods we discussed various ways of realizing predictors in the context of an inverse transform with, effectively, zero wavelet
Fig. 17. On the left pure interpolating subdivision is indicated: odd samples are computed using a predictor based on even samples. Beginning with m degrees of freedom, (2m - 1) result after one step (in the bounded interval case). Simply extending the lower wire to the left is a proper completion, i.e., it describes the extra degrees of freedom added by one subdivision step. The proof consists of observing that the completed diagram is invertible (as opposed to the pure subdivision diagram, which is not). Given that it is invertible, the bottom wire must represent the remaining degrees of freedom. In the language of matrices (all operations are linear) this simply states that subdivision can be completed in such a way that the resulting matrix is banded and its inverse is banded as well.
coefficients. The diagrams in Figs. 8, 9, and 10 show where the update boxes go, but we have not yet explained how they should be designed. That is the subject matter of this section. We begin by making some simple observations about forward and inverse transforms and give a number of design criteria for update boxes. As in the section on subdivision we will only consider the regular setting and postpone generalizations to Part II.

8.1 Forward and Inverse Transforms
One of the nice features of the lifting formalism is the ease with which one finds the inverse transform: simply flip the diagram left to right, switch additions and subtractions, and switch multiplications and divisions. In this way forward and inverse transform are equivalent from a design point of view. For some questions, it is easier to consider one rather than the other. For example, in the case of borrowing predictors from subdivision methods it is most natural to first consider the inverse transform, and later derive the forward transform. These relationships between forward and inverse "wiring diagrams" lead directly to important consequences. Let us consider the case of subdivision more closely. It is a linear transformation which takes in m degrees of freedom on the top wire, say coefficients associated with the integers, and outputs (2m - 1) degrees of freedom, i.e., coefficients associated with the half integers due to interpolating subdivision. We are a bit sloppy here when ignoring issues such as the boundary, but for purposes of our argument we may do so for now (later on, when we discuss the generalization to boundaries, we will see that the present argument is solid). Fig. 17 indicates this idea on the left.
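The flip-and-invert rule is easy to demonstrate with the linear transform mentioned in Section 5. In this sketch (an illustration, not from the original text; the function names are assumptions, and periodic boundaries via `np.roll` are used for simplicity) the inverse simply runs the same two lifting steps backwards with the signs flipped:

```python
import numpy as np

def forward_linear_lifting(x):
    """One level of the linear wavelet transform as two lifting steps."""
    even, odd = x[0::2].astype(float), x[1::2].astype(float)
    odd -= (even + np.roll(even, -1)) / 2    # predict: subtract neighbor average
    even += (np.roll(odd, 1) + odd) / 4      # update: preserve the signal average
    return even, odd                          # (coarse, detail)

def inverse_linear_lifting(s, d):
    """Inverse transform: the same steps in reverse order, signs flipped."""
    even, odd = s.copy(), d.copy()
    even -= (np.roll(odd, 1) + odd) / 4      # undo update
    odd += (even + np.roll(even, -1)) / 2    # undo predict
    x = np.empty(2 * len(even))
    x[0::2], x[1::2] = even, odd
    return x
```

The round trip is exact by construction, and the update step makes the coarse signal keep the mean of the input.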
Wim Sweldens and Peter Schröder
Fig. 18. The result of putting a delta sequence on the top (smooth) or bottom (detail) wire of an interpolating subdivision (inverse transform) in the case of the linear predictor. On the right the resulting functions, after subdivision ad infinitum, for a sequence of consecutive delta inputs. Note how the detail functions sit "in between" the scaling functions. The picture is essentially the same for higher order interpolating predictors: the detail functions are in between the scaling functions and dilated (by a factor of 2) versions of the scaling functions.
A question that immediately arises is whether we can characterize the remaining m - 1 degrees of freedom in a simple and useful form. This is an important aspect of multiresolution analysis where we have the spaces V_j ⊂ V_{j+1} and we may ask how to characterize the difference between two such consecutive spaces: what is the difference in "resolution" between the two spaces? What are we losing when we go from a finer resolution to a coarser resolution? Since subdivision is a linear transformation we may think of it as a (2m - 1) x m matrix. In that language the question of characterizing the extra degrees of freedom is to ask for a set of columns to add to this matrix so that it becomes square, is banded, and its inverse is banded as well. At first sight these requirements appear to be rather hard to satisfy. However, we already know the answer to this question! Suppose we have the inverse transform diagram with the detail wire added (on the right side of Fig. 17); then we can immediately build a diagram with the forward transform ("running everything backwards"). Concatenating the two we get the identity by construction. Since the whole operation is linear we have just convinced ourselves that the linear operator represented by the inverse transform is invertible, if we also consider the lower detail wire. In other words, writing subdivision in our somewhat nonstandard way, we can immediately read off a
Fig. 19. The result of putting a delta sequence on the top (smooth) or bottom (detail) wire of a cubic B-spline subdivision (inverse transform). On the right the resulting functions, after subdivision ad infinitum, for a sequence of consecutive delta inputs.
representation of the additional degrees of freedom introduced as one goes from V_j to V_{j+1}: simply put the delta sequence δ_{0,l} on the detail wire. In the case of interpolating subdivision (Fig. 18 shows the example of linear interpolating subdivision) the result is a delta sequence δ_{1,2l+1}. If we continue the subdivision process ad infinitum the functions shown result. Note how the functions due to (successive) smooth coefficients overlap. The functions due to detail coefficients, the actual wavelets, are of the same shape in this case, but smaller, and sit in between. A more complicated (and interesting) example is provided by the subdivision diagram for the cubic B-splines. Fig. 19 illustrates the behavior of this subdivision diagram as a function of putting successive delta sequences on the smooth and detail wire respectively. For the smooth wire δ_{0,k} results in the well known sequence 1/8 {1, 4, 6, 4, 1} of refinement coefficients. Putting δ_{0,l} on the detail wire results in the sequence 1/4 {1, 4, 1}. From our considerations above we know that this sequence must be a completion of the space spanned by the 1/8 {1, 4, 6, 4, 1} and in this way captures the degrees of freedom in between V_j and V_{j+1}. In principle we could stop here and would not have to worry about update boxes. The problem is that there are many possible completions and the ones we get "for free" from the subdivision diagram are not necessarily the only ones or the best ones. Note that the detail functions which we got in the linear (Fig. 18) and cubic B-spline case (Fig. 19) do not have a zero integral, a condition we need to ensure that the coarser version of the signal has the same integral as the finer version (see the next subsection). Designing update boxes is all about manipulating the representation of these differences, of "in between" degrees of freedom. That this should be so is easy
to see. The update box on the inverse transform side makes a contribution to the smooth wire based on a signal on the odd wire (see Figs. 8, 9, and 10). Considering the sequence δ_{0,l} on the detail wire we can see how having a U box will alter the sequence generated at the end (see Fig. 20). From the examples of the Haar and linear transform we remember that the main purpose of the update box is to ensure that the average of the coarser level signals is maintained, i.e.,

2^{-j} Σ_{l=0}^{2^j - 1} s_{j,l}    (4)
is independent of j. From this we see that there is no need for an extra update box in the case of wavelet transforms built from average-interpolation. Given that we can write the forward transform as a Haar transform followed by one average-interpolation prediction operation, the update box which is part of the Haar transform already assures (4). We next discuss how to design update boxes for interpolating subdivision and cubic B-spline subdivision.
8.2 Update for Interpolation
We already encountered one update step for the linear example in the very beginning. It consisted of computing the coarser level coefficients as

s_{j-1,l} = s_{j,2l} + 1/4 (d_{j-1,l-1} + d_{j-1,l}).
We show here how such an update can be derived in general. The main purpose of the update step is to ensure that the average is maintained. Saying that the average of the signal s_j is equal to the average of the signal s_{j-1} is equivalent to saying that the average of the detail or difference signal d_{j-1} is zero. Given that the detail signal is nothing else but a linear combination of complementary sequences as derived in the previous section, it is sufficient to construct complementary sequences with zero average. We consider Fig. 18 again. The complementary sequence, i.e., the result from putting a 1 on the odd wire, is {0, 0, 1, 0, 0}. Obviously this does not have a zero average. Therefore we use the update box from the odd wire to the even wire (see Fig. 20). The result of putting a 1 on the even wire is {1/2, 1, 1/2, 0, 0} or {0, 0, 1/2, 1, 1/2} depending on the position of the original 1. The update box now combines these sequences to build a complementary sequence with average zero. We propose an update which adds a fraction A of the odd element into the two neighboring evens. This leads to a complementary sequence of the form
{0, 0, 1, 0, 0} - A {1/2, 1, 1/2, 0, 0} - A {0, 0, 1/2, 1, 1/2}.

Choosing A = 1/4 leads to the complementary sequence {-1/8, -1/4, 3/4, -1/4, -1/8}, which has average zero as desired.
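The derivation above can be checked mechanically. The following sketch (variable names are ours, not from the text) combines the three sequences, solves the zero-average condition for the fraction A, and prints the resulting complementary sequence:

```python
# Hedged sketch: solve for the update fraction A that gives the linear
# complementary sequence a zero average (equivalently, zero sum).
from fractions import Fraction as F

one_on_odd = [F(0), F(0), F(1), F(0), F(0)]         # 1 on the odd wire
even_left  = [F(1, 2), F(1), F(1, 2), F(0), F(0)]   # 1 on the left even
even_right = [F(0), F(0), F(1, 2), F(1), F(1, 2)]   # 1 on the right even

# zero average requires sum(one_on_odd) = A * (sum(even_left) + sum(even_right))
A = sum(one_on_odd) / (sum(even_left) + sum(even_right))
seq = [o - A * (l + r) for o, l, r in zip(one_on_odd, even_left, even_right)]
print(A)   # -> 1/4, and seq is {-1/8, -1/4, 3/4, -1/4, -1/8}
```

Exact rational arithmetic (`fractions.Fraction`) avoids any rounding in the check.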
Fig. 20. Starting with the delta sequence on the bottom wire, the update box makes a contribution to the even wire, which in turn causes further changes in the odd wire after prediction. The result is the sequence at the end, which corresponds to the indicated limit functions. The average of the coefficients is zero, as expected.
Other types of update can be used as well. They ensure that not only the average but also the first Ñ moments of the sequences are preserved, i.e.,

m_p = 2^{-j} Σ_{l=0}^{2^j - 1} l^p s_{j,l}    (5)

is independent of j for 0 ≤ p < Ñ. The update weights can be found easily as well. In [26] it is shown that as long as Ñ ≤ N one can simply take the same weights for the update as one used for prediction, divided by two. For example, in case the predictor is of order N ≥ 4, one can get an update with Ñ = 4 by letting
s_{j-1,l} = s_{j,2l} - 1/32 d_{j-1,l-2} + 9/32 d_{j-1,l-1} + 9/32 d_{j-1,l} - 1/32 d_{j-1,l+1}.
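As a sanity check, one forward step of the lifted transform with the cubic predictor (N = 4) and this update can be sketched as follows. Periodic boundaries and the helper name `forward_step` are our own assumptions for the sake of a short, runnable example:

```python
# Hedged sketch: split/predict/update with N = 4 predictor weights
# {-1/16, 9/16, 9/16, -1/16} and the update weights above (predictor
# weights divided by two).  Periodic indexing is assumed.
import random

def forward_step(s):
    """One lifted forward step; len(s) must be even. Returns (coarse, detail)."""
    n = len(s) // 2
    even, odd = s[0::2], s[1::2]
    d = [odd[l] - (-even[(l - 1) % n] + 9 * even[l]
                   + 9 * even[(l + 1) % n] - even[(l + 2) % n]) / 16
         for l in range(n)]
    c = [even[l] + (-d[(l - 2) % n] + 9 * d[(l - 1) % n]
                    + 9 * d[l] - d[(l + 1) % n]) / 32
         for l in range(n)]
    return c, d

random.seed(1)
s = [random.random() for _ in range(16)]
c, d = forward_step(s)
# the average of the coarser signal matches the finer one, as in (4)
print(abs(sum(c) / len(c) - sum(s) / len(s)) < 1e-12)   # -> True
```

Since the predictor weights sum to 1 and the update weights sum to 1/2, the mean is preserved for any periodic input, which the final line verifies numerically.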
8.3 Update for Cubic B-splines
The update box for the B-splines can be found using the same reasoning as in the previous section. The complementary sequence now is

{0, 0, 1/4, 1, 1/4, 0, 0} - A/8 {1, 4, 6, 4, 1, 0, 0} - A/8 {0, 0, 1, 4, 6, 4, 1},

which leads to A = 3/8.
9 Wavelet Basis Functions
In this section we formally establish the relationship between detail coefficients and the wavelet functions. In the earlier section on multiresolution analysis (Section 7) we described spaces Vj which are spanned by the fundamental solutions
of the subdivision process, the scaling functions φ_{j,l}. Here we will examine the differences V_{j+1} \ V_j and the wavelets ψ_{j,l}(x) which span these differences. Consider an initial signal s_n = {s_{n,l} | l ∈ Z}. We can associate a function s_n(x) in V_n with this signal:

s_n(x) = Σ_l s_{n,l} φ_{n,l}(x).

Calculate one level of the wavelet transform as described in an earlier section. This yields coarser coefficients s_{n-1,l} and detail coefficients d_{n-1,l}. The coarser coefficients s_{n-1,l} correspond to a new function in V_{n-1}:

s_{n-1}(x) = Σ_l s_{n-1,l} φ_{n-1,l}(x).
Recall that φ_{n-1,l}(x) is the function that results if we run the inverse transform on the sequence δ_{n-1,l}. With this the above equation expresses the fact that the function s_{n-1}(x) is the result of "assembling" the functions associated with a 1 in each of the positions (n - 1, l), each weighted by s_{n-1,l}. At first sight it is unclear which function the signal d_{n-1} corresponds to. Somehow we feel that it corresponds to the "difference" of the signals s_n(x) and s_{n-1}(x). To find the solution we need to define the wavelet function. Consider a detail coefficient d_{0,0} = 1 and set all other detail coefficients d_{0,k} with k ≠ 0 to zero. Now compute one level of an inverse wavelet transform. This corresponds to what we did in Figs. 18 and 19 when putting a single 1 on the lower wire. After a single step of an inverse transform this yields coefficients g_1 of a function in V_1, e.g., g_1 = {0, 1, 0} in the case of linears without lifting in Fig. 18 or g_1 = {-1/8, -1/4, 3/4, -1/4, -1/8} in the case of linears with lifting. We define this function to be the wavelet function ψ(x) = ψ_{0,0}(x). Thus:
ψ_{0,0}(x) = Σ_l g_{1,l} φ_{1,l}(x) = Σ_l g_{1,l} φ(2x - l).
This wavelet function is often called the mother wavelet. All other wavelets ψ_{j,k} are obtained by setting d_{j,k} = 1 and d_{j,l} with l ≠ k to zero, calculating one level of the inverse wavelet transform and then subdividing ad infinitum to get the corresponding function in V_{j+1}. Using an argument similar to the case of scaling functions, one can show that all wavelets are translates and dilates of the mother wavelet:
ψ_{j,k}(x) = ψ(2^j x - k).
Fig. 21 shows several mother wavelets coming from interpolation, Fig. 22 shows wavelets resulting from average-interpolation, and Fig. 23 shows a wavelet associated with cubic B-spline scaling functions. Now we can answer the question from the start of this section. We first define the detail function d_{n-1}(x) to be
d_{n-1}(x) = Σ_l d_{n-1,l} ψ_{n-1,l}(x).
Fig. 21. Wavelets from interpolating subdivision. Going from left to right, top to bottom are the wavelets of order N = 2, 4, 6, and 8. Each time Ñ = 2.
Because of linear superposition of the wavelet transform, it follows that
s_n(x) = s_{n-1}(x) + d_{n-1}(x) = Σ_l s_{n-1,l} φ_{n-1,l}(x) + Σ_l d_{n-1,l} ψ_{n-1,l}(x).
In words, the function defined by the sequence s_{n,l} is equal to the result of performing one forward transform step, computing s_{n-1,l} (upper wire) and d_{n-1,l} (lower wire), and then using these coefficients in a linear superposition of the elementary functions which result when running an inverse transform on a single 1 on the top or bottom wire, in the respective position. So what we expected is true: the detail function d_{n-1}(x) is nothing but the difference between the original function and the coarser version. As we will see later, there are many different ways to define a detail function d_{n-1}. They typically depend on two things: first, the detail coefficients d_{n-1,l} (which are computed in the forward transform) and secondly the wavelet functions ψ_{n-1,l} (which are built during the inverse transform). The n-level wavelet decomposition of a function s_n(x) is defined as

s_n(x) = s_0(x) + Σ_{j=0}^{n-1} d_j(x).
The space W_j is defined to be the space that contains the difference functions:

W_j = span{ψ_{j,l}(x) | l ∈ Z}.
Fig. 22. Wavelets from average-interpolation. Going from left to right, top to bottom are the wavelets which correspond to the orders N = 1, 3, 5, and 7. Each time Ñ = 1.
It then follows that W_j is a space complementing V_j in V_{j+1}:

V_{j+1} = V_j ⊕ W_j.

In an earlier section we defined the notion of order of a multiresolution analysis. If the order is N, then the wavelet transform started from any polynomial p(x) = s_n(x) of degree less than N will only yield zero wavelet coefficients d_{j,l}. Consequently all detail functions d_j(x) are zero and all s_j(x) with j < n are equal to p(x). In this section we introduce the dual order Ñ of a multiresolution analysis. We say that the dual order is Ñ in case the wavelets have Ñ vanishing moments:
∫ x^p ψ_{j,l}(x) dx = 0 for 0 ≤ p < Ñ.
Because of translation and dilation, if the mother wavelet has Ñ vanishing moments, then all wavelets do. As a result all detail functions d_j(x) in a wavelet representation also have Ñ vanishing moments and all coarser versions s_j(x) of a function s_n(x) have the first Ñ moments independent of j.
Fig. 23. Cubic B-spline wavelet
Remark: Readers who are familiar with the more traditional signal processing introduction to wavelets will note that order and dual order relate to the localization of the signals in frequency. Typically the coarser function s_{n-1}(x) will contain the lower frequency band while the detail function d_{n-1}(x) contains the higher frequency band. The order of a MRA is related to the smoothness of the scaling functions and thus to how much aliasing occurs from the lower band to the higher band. The dual order corresponds to the cancellation of the wavelets and thus to how much aliasing occurs from the higher band to the lower. In the lifting scheme, the predict part ensures a certain order, while the update part ensures a particular dual order.
Part II: Second Generation Wavelets

10 Introduction
In the first part we only considered the regular setting, i.e., all samples are equally spaced, and subdivision always puts new samples in the middle between old samples. Consequently a sample value s_{j,k} "lives" at the location k 2^{-j} and all the scaling functions and wavelets we constructed were dyadic translates and dilates of one fixed "mother" function. We refer to these as first generation wavelets. We described the construction of wavelets with the help of the lifting scheme, which only uses techniques of the spatial domain. However, historically first generation wavelets were always constructed in the frequency domain with the help of the Fourier transform, see e.g. [9]. All the wavelets and scaling functions we described in the first part can be derived with these classical methods. Using the lifting scheme though makes it very straightforward to build wavelets and scaling functions in much more general settings in which the Fourier transform is not applicable anymore as a construction tool. In the following sections we consider more general settings such as boundaries, irregular samples, and arbitrary weight functions. These cases do not allow for wavelets which are translates and dilates of one fixed function, i.e., the wavelets at an interval boundary are not just translates of nearby wavelets. This lack of translation and dilation invariance requires new construction tools, such as lifting, to replace the Fourier transform. Lifting constructions are performed entirely in the spatial domain and can be applied in the more general, irregular settings. Even though the wavelets which result from using the lifting scheme in the more general settings will not be translates and dilates of one function anymore, they still have all the powerful properties of first generation wavelets: fast transforms, localization, and good approximation. We therefore refer to them as Second Generation Wavelets [27].
The purpose of the remainder of this part is to show that subdivision, interpolation, and lifting taken together result in a versatile and straightforward to implement second generation wavelet toolkit. All the algorithms can be derived via simple arguments involving little more than the manipulation of polynomials (as already seen in the first section). We focus on three settings which lead to Second Generation Wavelets:

- Intervals: When working with finite data it is desirable to have basis functions adapted to life on an interval. This way no awkward solutions such as zero padding, periodization, or reflection are needed. We point out that many wavelet constructions on the interval already exist, see [1, 6, 3], but we would like to use the subdivision schemes adapted to boundaries since they lead to more straightforward constructions and implementations.
- Irregular samples: In many practical applications, the samples do not necessarily live on a regular grid. Resampling is fraught with pitfalls and may even be impossible. A basis and transform adapted to the irregular grid is desired.
- Weighted inner products: Often one needs a basis adapted to a weighted inner product instead of the regular L^2 inner product. A weighted inner product of two functions f and g is defined as

  ⟨f, g⟩ = ∫ w(x) f(x) g(x) dx,

  where w(x) is some positive function. Weighted wavelets are very useful in the solution of boundary value ODEs, see [25]. Also, as we will see later, they are useful in the approximation of functions with singularities.

Obviously, we are also interested in combinations of these settings. Once we know how to handle each separately, combined settings can be dealt with easily.
11 Interpolating Subdivision and Scaling Functions

11.1 Interval Constructions
Recall that interpolating subdivision assembles N = 2D coefficients s_{j,k} in each step. These uniquely define a polynomial p(x) of degree N - 1. This polynomial is then used to generate one new coefficient s_{j+1,l}. The new coefficient is located in the middle of the N old coefficients. When working on an interval the same principle can be used as long as we are sufficiently far from the boundary. Close to the boundary we need to adapt this scheme. Consider the case where one wants to generate a new coefficient s_{j+1,l}, but is unable to find the same number of old samples s_{j,k} to the left as to the right of the new sample, simply because they are not available. The basic idea is then to choose, from the set of available samples s_{j,k}, those N which are closest to the new coefficient s_{j+1,l}. To be concrete, take the interval [0, 1]. We have 2^j + 1 coefficients s_{j,k} at locations k 2^{-j} for 0 ≤ k ≤ 2^j. The leftmost coefficient s_{j+1,0} is simply s_{j,0}. The next one, s_{j+1,1}, is found by constructing the interpolating polynomial to the points (x_{j,k}, s_{j,k}) for 0 ≤ k < N and evaluating it at x_{j+1,1}. For s_{j+1,3} we evaluate the same polynomial p at x_{j+1,3}. Similar constructions work for the other N boundary coefficients and the right side. Fig. 24 shows this idea for a concrete example.
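The "N closest available samples" rule can be sketched in a few lines using Neville's algorithm (mentioned later in the text). The helpers below are our own illustration, not the authors' implementation; note that nothing requires the grid to be regular:

```python
# Hedged sketch: one boundary-adapted interpolating subdivision step.
def neville(xs, ys, t):
    """Evaluate the polynomial interpolating (xs, ys) at t (Neville)."""
    p = list(ys)
    n = len(p)
    for m in range(1, n):
        for i in range(n - m):
            p[i] = ((t - xs[i + m]) * p[i] + (xs[i] - t) * p[i + 1]) \
                   / (xs[i] - xs[i + m])
    return p[0]

def subdivide(x, s, N=4):
    """Keep old samples; predict each midpoint from the N closest samples."""
    fx, fs = [], []
    for k in range(len(x) - 1):
        fx.append(x[k]); fs.append(s[k])
        mid = 0.5 * (x[k] + x[k + 1])
        idx = sorted(sorted(range(len(x)), key=lambda i: abs(x[i] - mid))[:N])
        fx.append(mid)
        fs.append(neville([x[i] for i in idx], [s[i] for i in idx], mid))
    fx.append(x[-1]); fs.append(s[-1])
    return fx, fs

x = [k / 4 for k in range(5)]        # 2^2 + 1 samples on [0, 1]
s = [t**3 - t for t in x]            # cubic data
fx, fs = subdivide(x, s)
err = max(abs(v - (t**3 - t)) for t, v in zip(fx, fs))
print(err < 1e-12)   # -> True: cubics survive, even at the boundary
```

Near the boundary the N closest samples automatically come from one side, which is exactly the modification described above.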
11.2 Irregular Samples
The case of irregular samples can also be accommodated by observing that interpolating subdivision does not require the samples to be on a regular grid. We can take an arbitrarily spaced set of points x_{j,k} with x_{j+1,2k} = x_{j,k} and x_{j,k} < x_{j,k+1}. A coefficient s_{j,k} lives at the location x_{j,k}. The subdivision schemes can now be applied in a straightforward manner.
Fig. 24. Behavior of the cubic interpolating subdivision near the boundary. The midpoint samples between k = 2, 3 and k = 1, 2 are unaffected by the boundary. When attempting to compute the midpoint sample for the interval k = 0, 1 we must modify the procedure since there is no neighbor to the left for the cubic interpolation problem. Instead we choose 3 neighbors to the right. Note how this results in the same cubic polynomial as used in the definition of the midpoint value k = 1, 2, except this time it is evaluated at 1/2 rather than 3/2. The procedure clearly preserves the cubic reconstruction property even at the interval boundary and is thus the natural choice for the boundary modification.
11.3 Weighted Inner Products
Interpolating subdivision does not involve an inner product, hence a weighted inner product does not change the subdivision part. However, the update part does change since it involves integrals of scaling functions, as we shall see soon. We postpone the details of this until after the general section on computing update weights.

11.4 Scaling Functions
As in the first generation case, we define the scaling function φ_{j,k} to be the result of running the subdivision scheme ad infinitum starting from a sequence s_{j,k'} = δ_{k,k'}. The main difference with the first generation case is that, because of the irregular setting, the scaling functions are not necessarily translates and dilates of each other. For example, Fig. 25 shows the scaling functions affected by the boundary. The main feature of the second generation setting is that the powerful properties such as approximation order, the refinement relation, and the connection with wavelets remain valid. We summarize the main properties:

- The limit function of the subdivision scheme starting at level j with coefficients s_{j,k} can be written as

  f = Σ_k s_{j,k} φ_{j,k}.

- The scaling functions are compactly supported.
- The scaling functions are interpolating: φ_{j,k}(x_{j,k'}) = δ_{k,k'}.
Fig. 25. Examples of scaling functions affected by a boundary. Left to right, top to bottom: scaling functions of cubic (N = 4) interpolation with k = 0, 1, 2, 3. Note how the boundary scaling functions are still interpolating as one would expect.
- Polynomials up to degree N - 1 can be written as linear combinations of the scaling functions at level j. Because of the interpolating property, the coefficients are samples of the polynomial:

  Σ_k x_{j,k}^p φ_{j,k} = x^p for 0 ≤ p < N.
- Very little is known about the smoothness of the resulting scaling functions in the irregular case. Recall that they are defined as the limit of a fully nonstationary subdivision scheme. Work in progress though suggests that with some very reasonable conditions on the weight function or the placement of the sample locations, one can obtain roughly the same smoothness as in the regular case.
- They satisfy refinement relations. Start the subdivision on level j with s_{j,k'} = δ_{k,k'}. We know that the subdivision scheme converges to φ_{j,k}. Now do only one step of the subdivision scheme. Call the resulting coefficients h_{j,k,l} = s_{j+1,l}. Only a finite number are non-zero. Since starting the subdivision scheme at level j + 1 with the {h_{j,k,l} | l} coefficients also converges to φ_{j,k}, we have that

  φ_{j,k}(x) = Σ_l h_{j,k,l} φ_{j+1,l}(x).    (6)
Note how in this case the coefficients of the refinement relation are (in general) different for each scaling function. Compare this with the first generation setting of Equation (2), where h_{j,k,l} = h_{l-2k}. In that setting the h_{j,k,l} are translation and dilation invariant.
Depending on the circumstances there are two basic ways of performing this subdivision process. The more general way is to always construct the respective interpolating polynomials on the fly using an algorithm such as Neville's. This has the advantage that none of the sample locations have to be known beforehand. However, in case the sample locations are fixed and known ahead of time, one can precompute the subdivision, or filter, coefficients. These will be the same as the ones from the refinement relation: assume we are given the samples {s_{j,k} | k} and we want to compute the {s_{j+1,l} | l}. Given that the whole process is linear, we can simply use superposition. A 1 at location (j, k) would, after subdivision, give the sequence {h_{j,k,l} | l}. Superposition now immediately leads to:

s_{j+1,l} = Σ_k h_{j,k,l} s_{j,k}.    (7)
This is an equivalent formulation of the subdivision. It requires the precomputation of the h_{j,k,l}. These can be found once offline by using the polynomial interpolation algorithm of Neville. Compare Equations (6) and (7), which use the same coefficients h_{j,k,l}. In (6) the summation ranges over l and the equation allows us to go from a fine level scaling function to a coarse level scaling function. In (7) the summation ranges over k and the equation allows us to go from coarse level samples to fine level samples.
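Precomputing the filter coefficients is literally "feed a 1 at each coarse location and record the response". A sketch with the simplest (linear, N = 2) interpolating subdivision, using our own helper names:

```python
# Hedged sketch: precompute h_{j,k,l} by subdividing delta sequences,
# then verify that superposition (Equation (7)) matches on-the-fly
# subdivision.  Linear subdivision on an interval with 5 samples.
def subdivide_linear(s):
    """One linear (N = 2) interpolating subdivision step on an interval."""
    out = []
    for k in range(len(s) - 1):
        out.append(s[k])
        out.append(0.5 * (s[k] + s[k + 1]))    # midpoint prediction
    out.append(s[-1])
    return out

m = 5
# column k of the filter bank: the response {h_{j,k,l} | l} to a 1 at (j, k)
h = [subdivide_linear([1.0 if i == k else 0.0 for i in range(m)])
     for k in range(m)]

s = [0.25, 1.0, -0.5, 2.0, 0.75]
direct = subdivide_linear(s)                          # on the fly
via_filters = [sum(h[k][l] * s[k] for k in range(m))  # Equation (7)
               for l in range(2 * m - 1)]
print(direct == via_filters)   # -> True
```

The sample values are exact binary fractions, so the two routes agree bit for bit.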
12 The Unbalanced Haar Transform
Before we discuss how average-interpolation works in the second generation setting, we first take a look at an example. The Unbalanced Haar transform is the generalization of the Haar wavelet to the second generation setting [17]. The first difference with the interpolating case is that a coefficient sj,k does not "live" at location Xj,k any more, but rather on the interval [xj,k,xj,k+l]. We define the generalized length of an interval [Xj,k, xj,k+l] as Xj,k-I-1
5,k =
w(x) dx, d Xj,k
where w(x) is some positive weight function. From the definition, it immediately follows that Ij-l,k = Ij,2k "~ Ij,2k+l. (8) With this definition Ij,k measures the weight given to a particular interval and its associated coefficient Sj,k. Given a signal s j, the important quantity to preserve is not so much the average of the coefficients, but their weighted average:
Ij,k Sj,k k
With this in mind we can define the generalization of the Haar transform, which is called the Unbalanced Haar Transform. The detail wavelet coefficient is still computed as before:

d_{j-1,k} = s_{j,2k+1} - s_{j,2k},

but the average is now computed as a weighted average:

s_{j-1,k} = (I_{j,2k} s_{j,2k} + I_{j,2k+1} s_{j,2k+1}) / I_{j-1,k}.
Defining the transform this way assures two things:

1. If the original signal is a constant, then all detail coefficients are zero and all coarser versions are constants as well. This follows from the definition of the transform and Equation (8).
2. The weighted average of all coarser signals is the same, i.e.,

   Σ_l I_{j,l} s_{j,l}

   does not depend on j. This follows from computing the coarser signals as weighted averages.

Later we will see that the order of the Unbalanced Haar MRA as well as its dual order are one. Next we have to cast this in the lifting framework of split, predict, and update. The split divides the signal in even and odd indexed coefficients. The prediction for an odd coefficient s_{j,2k+1} is its left neighboring even sample s_{j,2k}, which leads to the detail coefficient:
d_{j-1,k} = s_{j,2k+1} - s_{j,2k}.
In the update step, the coarser level is computed as [27]

s_{j-1,k} = s_{j,2k} + (I_{j,2k+1} / I_{j-1,k}) d_{j-1,k}.

Using Equation (8), one can see that this is equivalent to the weighted average computation.
13 Average-Interpolating Subdivision
In this section we discuss how average-interpolation works in the second generation case as introduced in [25]. The setting is very similar to the first generation case. We first describe the average-interpolating subdivision scheme and then show how this fits into the lifting strategy. We start by assuming that we are given weighted averages of some unknown function over intervals:

s_{n,l} = (1 / I_{n,l}) ∫_{x_{n,l}}^{x_{n,l+1}} w(x) f(x) dx.
Just as before we define average-interpolating subdivision through the use of higher order polynomials, with the first interesting choice being quadratic. For a given interval consider the intervals to its left and right. Define the (unique) quadratic polynomial p(x) so that

s_{j,k-1} = (1 / I_{j,k-1}) ∫_{x_{j,k-1}}^{x_{j,k}} w(x) p(x) dx,
s_{j,k} = (1 / I_{j,k}) ∫_{x_{j,k}}^{x_{j,k+1}} w(x) p(x) dx,
s_{j,k+1} = (1 / I_{j,k+1}) ∫_{x_{j,k+1}}^{x_{j,k+2}} w(x) p(x) dx.
Now compute s_{j+1,2k} and s_{j+1,2k+1} as the average of this polynomial over the left and right subintervals of [x_{j,k}, x_{j,k+1}]:

s_{j+1,2k} = (1 / I_{j+1,2k}) ∫_{x_{j+1,2k}}^{x_{j+1,2k+1}} w(x) p(x) dx,
s_{j+1,2k+1} = (1 / I_{j+1,2k+1}) ∫_{x_{j+1,2k+1}}^{x_{j+1,2k+2}} w(x) p(x) dx.
It is easy to see that the procedure will reproduce quadratic polynomials. Assume that the initial averages {s_{0,k}} were weighted averages of a given quadratic polynomial P(x). In that case the unique polynomial p(x) which has the prescribed averages over each triple of intervals will always be that same polynomial P(x) which gave rise to the initial set of averages. Since the interval sizes go to zero and the averages over the intervals approach the value of the underlying function, in the limit the original quadratic polynomial P(x) will be reproduced. Higher order schemes can be constructed similarly. We construct a polynomial p of degree N - 1 (where N = 2D + 1) so that
fxj,k+~+~w(x) p(x) dx
8j'kq-I - - I j , k T l ~"wj,~,+t
for - D < l < D,
Then we calculate two coefficients on the next finer level as 1
s_{j+1,2k} = (1 / I_{j+1,2k}) ∫_{x_{j+1,2k}}^{x_{j+1,2k+1}} w(x) p(x) dx,
s_{j+1,2k+1} = (1 / I_{j+1,2k+1}) ∫_{x_{j+1,2k+1}}^{x_{j+1,2k+2}} w(x) p(x) dx.

The computation is very similar to the first generation case, except for the fact that the polynomial problem cannot be recast into a Neville algorithm any longer, since the integral of a polynomial times the weight function is not necessarily a polynomial. These algorithms take care of the weighted setting and the irregular samples setting. In the case of an interval construction, we follow the same philosophy
Fig. 26. Behavior of the quadratic average-interpolation process near the boundary. The averages for the subintervals k = 2 and k = 1 are unaffected. When attempting to compute the finer averages for the left most interval the procedure needs to be modified since no further average to the left of k = 0 exists for the average-interpolation problem. Instead we use 2 intervals to the right of k = 0, effectively reusing the same average-interpolating polynomial constructed for the subinterval averages on k = 1. Once again it is immediately clear that this is the natural modification to the process near the boundary, since it ensures that the crucial quadratic reproduction property is preserved.
as in the interpolating case. We need to assemble N coefficients to determine an average-interpolating polynomial. In case we cannot align them symmetrically around the new samples, as at the end of the interval, we simply take more from one side than the other. This idea is illustrated in Fig. 26. Next we cast average-interpolation into the lifting framework. We use the average-interpolating subdivision as a P box before entering the inverse Unbalanced Haar transform. The diagram in Fig. 9 illustrates this setup. Instead of computing s_{j+1,2k} and s_{j+1,2k+1} directly we will compute their difference d_{j,k} = s_{j+1,2k+1} - s_{j+1,2k} and feed this as a difference signal into the inverse Unbalanced Haar transform. Given that the weighted average of s_{j+1,2k} and s_{j+1,2k+1}, as computed by average-interpolation, is s_{j,k}, it follows that the inverse Unbalanced Haar transform, when given s_{j,k} and d_{j,k}, will compute s_{j+1,2k} and s_{j+1,2k+1} as desired.
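The quadratic case with w(x) = 1 is small enough to sketch directly: fit a + b x + c x^2 to the three prescribed interval averages and then average it over the two halves of the middle interval. All helper names below are ours, and exact rationals make the quadratic reproduction visible:

```python
# Hedged sketch: one quadratic (N = 3) average-interpolation step,
# weight w(x) = 1, on an irregular grid.
from fractions import Fraction as F

def avg_basis(a, b):
    """Exact averages of 1, x, x^2 over [a, b] with weight w(x) = 1."""
    return [F(1), (a + b) / 2, (a * a + a * b + b * b) / 3]

def solve3(M, r):
    """Solve a 3x3 linear system exactly by Gauss-Jordan elimination."""
    A = [row[:] + [v] for row, v in zip(M, r)]
    for i in range(3):
        p = next(j for j in range(i, 3) if A[j][i] != 0)
        A[i], A[p] = A[p], A[i]
        A[i] = [v / A[i][i] for v in A[i]]
        for j in range(3):
            if j != i:
                A[j] = [u - A[j][i] * v for u, v in zip(A[j], A[i])]
    return [A[i][3] for i in range(3)]

def refine_middle(x0, x1, x2, x3, s):
    """Averages of the fitted quadratic over the two halves of [x1, x2]."""
    coef = solve3([avg_basis(x0, x1), avg_basis(x1, x2), avg_basis(x2, x3)], s)
    mid = (x1 + x2) / 2
    left = sum(c * e for c, e in zip(coef, avg_basis(x1, mid)))
    right = sum(c * e for c, e in zip(coef, avg_basis(mid, x2)))
    return left, right

# quadratic averages are reproduced exactly, even on an irregular grid
x0, x1, x2, x3 = F(0), F(1), F(3), F(4)
avg_P = lambda a, b: (a * a + a * b + b * b) / 3 - (a + b) / 2   # avgs of x^2 - x
s = [avg_P(x0, x1), avg_P(x1, x2), avg_P(x2, x3)]
l, r = refine_middle(x0, x1, x2, x3, s)
print(l == avg_P(F(1), F(2)) and r == avg_P(F(2), F(3)))   # -> True
```

For a general weight w(x) the basis averages would involve moments of w and could no longer be written in closed polynomial form, which is exactly the point made in the text about Neville's algorithm not applying.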
14 Average-Interpolating Scaling Functions
As before, an average-interpolating scaling function φ_{j,k}(x) is defined as the limit function of the subdivision process started on level j with the sequence δ_{l,k}. We here list the main properties:
- The limit function of the subdivision scheme started at level j with coefficients s_{j,k} can be written as
  f = Σ_k s_{j,k} φ_{j,k}.
- The scaling functions are compactly supported.
Wim Sweldens and Peter Schröder
Fig. 27. Examples of scaling functions affected by a boundary. Left to right, top to bottom: scaling functions of quadratic (N = 3) average-interpolation at k = 0, 1, 2, 3. The functions continue to have averages over each integer subinterval of either 0 or 1.
- The scaling functions are average-interpolating:
  (1/I_{j,l}) ∫_{x_{j,l}}^{x_{j,l+1}} w(x) φ_{j,k}(x) dx = δ_{k,l}.
- The scaling functions of level j reproduce polynomials up to degree N - 1:
  Σ_k c_{j,k} φ_{j,k} = x^p for 0 ≤ p < N,
  with coefficients
  c_{j,k} = (1/I_{j,k}) ∫_{x_{j,k}}^{x_{j,k+1}} w(x) x^p dx.
- The scaling functions satisfy refinement relations:
  φ_{j,k} = Σ_l h_{j,k,l} φ_{j+1,l}.    (9)

Fig. 27 shows the average-interpolating boundary functions.
15 Cubic B-spline Scaling Functions
In the case of cubic B-splines we need to worry about the endpoints of a finite-sized interval. Because of their support, the scaling functions close to the endpoints would overlap the outside of the interval. This issue can be addressed in a number of different ways. One treatment, used by Chui and Quak [3], uses multiple knots at the endpoints of the interval. The appropriate subdivision weights then follow from the evaluation of the de Boor algorithm for those control points. The total number of scaling functions at level j becomes 2^j + 3 in this setting. Consequently it is no longer so easy to express everything in a framework based on insertion of new control points in between old ones. We used a different treatment which preserves this property. The P_odd box remains as before at the boundary (every odd location still has an even neighbor on either side), but we change the P_even box, since the left- (right-)most even position has only one odd neighbor. In this case the P_even box makes no contribution to the boundary control point, and furthermore the boundary control point does not get rescaled. This leads to endpoint-interpolating piecewise cubic polynomial scaling functions, as shown in Fig. 28.
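A minimal sketch of one such subdivision step may be helpful (Python; the interior rules are the standard cubic B-spline rules of (1, 6, 1)/8 smoothing for old points and midpoint averaging for new points, and the endpoint rule implements the boundary treatment just described; the function name is ours):

```python
def cubic_bspline_subdivide(s):
    """One step of cubic B-spline subdivision on the interval.
    Old (even) interior points are smoothed with weights (1, 6, 1)/8,
    new (odd) points are midpoint averages, and the two endpoint control
    points are kept unchanged (no neighbor contribution, no rescaling),
    which yields endpoint interpolation."""
    n = len(s)
    fine = [0.0] * (2 * n - 1)
    for k in range(n):                # even positions: smoothed old points
        if k == 0 or k == n - 1:
            fine[2 * k] = s[k]        # boundary: control point kept as-is
        else:
            fine[2 * k] = (s[k - 1] + 6.0 * s[k] + s[k + 1]) / 8.0
    for k in range(n - 1):            # odd positions: midpoint averages
        fine[2 * k + 1] = (s[k] + s[k + 1]) / 2.0
    return fine
```

Note that constants and linear sequences are reproduced exactly, and the endpoint values survive every subdivision step.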
Fig. 28. In the case of cubic B-splines only the two leftmost splines change for the particular adaptation to the boundary which we chose.
16 Multiresolution Analysis
Now that we have defined subdivision and scaling functions in the second generation setting, it is a small step to multiresolution analysis. Remember that the result of the subdivision algorithm started from level j can always be written as a linear combination of scaling functions,
  Σ_k s_{j,k} φ_{j,k}.
The definition of the V_j spaces is now exactly the same as in the first generation case:
  V_j = span{φ_{j,k} | 0 ≤ k < K_j}.
We assume K_j scaling functions on level j. It follows from the refinement relations (6) that the spaces are nested:
  V_j ⊂ V_{j+1}.
Again we want that any function of finite energy (∈ L_2) can be approximated arbitrarily closely with scaling functions. Mathematically we write this as: ∪_{j≥0} V_j is dense in L_2.
The order of the MRA is defined similarly to the first generation case. We say that the order is N in case every polynomial of degree less than N can be written as a linear combination of scaling functions on a given level. The subdivision schemes we saw in the previous sections have order N where N is odd for interpolating subdivision and even for average-interpolating subdivision.
Integrals of Scaling Functions

We will later see that in order to build wavelets, it is important to know the integral of each scaling function. In the first generation case this is not an issue: due to translation and dilation, the integral of φ_{j,l}(x) is always 2^{-j}. In the second generation case the integrals are not given by such a simple rule, so we need an algorithm to compute them. We define
  M_{j,k} := ∫_{-∞}^{+∞} w(x) φ_{j,k}(x) dx.
The computation goes in two phases. We first approximate the integrals on the finest level n numerically using a simple quadrature formula. The ones on the coarser levels j < n can then be computed iteratively: integrating the refinement relation (6) immediately yields
  M_{j,k} = Σ_l h_{j,k,l} M_{j+1,l}.
Once we have computed the h_{j,k,l} coefficients, the recursive computation of the integrals of the scaling functions is straightforward. We will later need them in the update stage.
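One level of this recursion can be sketched in a few lines (Python; the sparse dictionary storage of the refinement coefficients is an assumption of ours, chosen only to keep the sketch short):

```python
def coarse_integrals(M_fine, h):
    """Compute the level-j integrals from the level-(j+1) integrals via the
    refinement relation M_{j,k} = sum_l h_{j,k,l} M_{j+1,l}.
    `h[k]` is a dict {l: h_{j,k,l}} holding the few non-zero refinement
    coefficients of the k-th coarse scaling function."""
    return [sum(c * M_fine[l] for l, c in h[k].items()) for k in sorted(h)]
```

For the Unbalanced Haar case, where φ_{j,k} = φ_{j+1,2k} + φ_{j+1,2k+1}, each coarse integral is simply the sum of the two fine ones.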
17 Lifting and Interpolation
In this section, we discuss how to use the lifting scheme to compute second generation wavelet transforms. The steps will be exactly the same as in the first generation case: split, predict, update. The split stage again is the Lazy wavelet transform: it simply consists of splitting the samples s_{j,l} into the even indexed samples s_{j,2k} and the odd indexed samples s_{j,2k+1}. In the predict stage we take the even samples and use interpolating subdivision to predict each odd sample. The detail coefficient is the difference of the odd sample and its predicted value. Suppose we use a subdivision scheme of order N = 2D to build the predictor. The detail coefficient is then computed as
d_{j-1,k} := s_{j,2k+1} - p(x_{j,2k+1}),
where p(x) is the interpolating polynomial of degree N - 1 which interpolates the points (x_{j-1,k+l}, s_{j-1,k+l}) with -D + 1 ≤ l ≤ D. Thus if the original signal is a polynomial of degree strictly less than N, the detail signal is exactly zero. The purpose of the update stage is to preserve the weighted average on each level; we want that
  ∫_{-∞}^{+∞} w(x) Σ_k s_{j,k} φ_{j,k}(x) dx
does not depend on the level j. According to the lifting scheme we do this using the details computed in the previous step. We propose an update step of the form
  s_{j-1,k} := s_{j,2k} + A_{j,k-1} d_{j-1,k-1} + A_{j,k} d_{j-1,k}.    (10)
In order to find the A_{j,k}, assume we run the inverse transform from level j - 1 to j starting with all s_{j-1,l} zero and only one d_{j-1,k} non-zero. Then undoing the update and running the subdivision scheme should result in a function with integral zero. Undoing the update will result in two non-zero even coefficients, s_{j,2k} and s_{j,2k+2}. Undoing the update involves computing
  s_{j,2k}   := s_{j-1,k}   - A_{j,k-1} d_{j-1,k-1} - A_{j,k}   d_{j-1,k},
  s_{j,2k+2} := s_{j-1,k+1} - A_{j,k}   d_{j-1,k}   - A_{j,k+1} d_{j-1,k+1}.
Given that only d_{j-1,k} is non-zero, we have that s_{j,2k} = -A_{j,k} and s_{j,2k+2} = -A_{j,k}. Now running the subdivision scheme results in a function given by
  φ_{j,2k+1}(x) - A_{j,k} φ_{j-1,k}(x) - A_{j,k} φ_{j-1,k+1}(x).    (11)
This function has to have integral zero. Thus
  A_{j,k} = M_{j,2k+1} / (M_{j-1,k} + M_{j-1,k+1}).
This shows us how to choose the A_{j,k}. One can also build more powerful update methods, which not only assure that the integral of the s_j(x) functions is preserved, but also their first (generalized) moment:
  ∫_{-∞}^{+∞} w(x) x s_j(x) dx.
This requires the calculation of the first order moments of the scaling functions, which can be done analogously to the integral calculations. The lifting (11) then has different lifting weights for φ_{j-1,k} and φ_{j-1,k+1} (as opposed to A_{j,k} for both), which can be found by solving a 2 × 2 linear system.
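The simple update weight derived above is easy to compute and check (Python sketch; the function and argument names are ours): with A_{j,k} = M_{j,2k+1}/(M_{j-1,k} + M_{j-1,k+1}), the wavelet in (11) has a vanishing weighted integral by construction.

```python
def update_weight(M_fine_odd, M_coarse_left, M_coarse_right):
    """Lifting weight A_{j,k} = M_{j,2k+1} / (M_{j-1,k} + M_{j-1,k+1}).
    With this choice the wavelet
        psi = phi_{j,2k+1} - A phi_{j-1,k} - A phi_{j-1,k+1}
    has integral M_{j,2k+1} - A (M_{j-1,k} + M_{j-1,k+1}) = 0."""
    return M_fine_odd / (M_coarse_left + M_coarse_right)
```

The vanishing integral can be verified numerically for any positive integrals M.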
18 Wavelet Functions
We can now define wavelet functions exactly the same way as in the first generation case. Compute an inverse wavelet transform from level j to level j + 1 with all coarse scale coefficients s_{j,k} set to zero and only one detail coefficient d_{j,k} set to one, while the other detail coefficients are zero. Call the resulting sequence {s_{j+1,l}}. Then run the subdivision algorithm. The resulting function is the wavelet:
  ψ_{j,k}(x) = Σ_l s_{j+1,l} φ_{j+1,l}(x).
Define the detail function as
  d_j(x) = Σ_k d_{j,k} ψ_{j,k}(x).
Figs. 29, 30, and 31 show the wavelets affected by the boundary in the interpolating, average-interpolation, and cubic B-spline cases respectively. In the interpolating case N = 4 and the wavelets, built with lifting, have Ñ = 2 vanishing moments. In the average-interpolation case N = 3 and the wavelets have Ñ = 1. For the B-splines the wavelets have Ñ = 2 vanishing moments.

Fig. 29. Examples of wavelets affected by a boundary. Going left to right, top to bottom: wavelets with Ñ = 2 vanishing moments and order N = 4 for k = 0, 1, 2, 3 (interpolating case).
Fig. 30. Examples of wavelets affected by a boundary. Going left to right, top to bottom: wavelets with Ñ = 1 vanishing moment and order N = 3 for k = 0, 1, 2, 3 (average-interpolation).
Fig. 31. The two leftmost (top row) wavelets are influenced by the boundary. Thereafter they default to the usual B-spline wavelets (bottom left) (cubic B-spline case).
The multiresolution representation of a function s_n(x) can now be written as
  s_n(x) = s_0(x) + d_0(x) + d_1(x) + ... + d_{n-1}(x) = s_0(x) + Σ_{j=0}^{n-1} Σ_k d_{j,k} ψ_{j,k}.
The main advantage of the wavelet transform is the fact that the expected value of the detail coefficient magnitudes is much smaller than that of the original samples. This is how we obtain a more compact representation of the original signal. Remember that the order of a multiresolution analysis is N if the multiresolution representation of any polynomial of degree strictly less than N yields only zero detail signals. In other words, if s_n(x) = p(x) is a polynomial of degree less than N, then s_n(x) = s_0(x) and all details are identically zero. Another quantity which characterizes a multiresolution analysis is its dual order. We say that the dual order is Ñ in case all wavelets have Ñ vanishing moments, or
  ∫_{-∞}^{+∞} w(x) ψ_{j,k}(x) x^p dx = 0 for 0 ≤ p < Ñ.
We only consider the case Ñ = 1; the same techniques can be used for the more general case if so needed. Consequently all detail signals have a vanishing integral,
  ∫ w(x) d_j(x) dx = 0.
This is equivalent to saying that
  ∫_{-∞}^{+∞} w(x) s_j(x) dx
is independent of the level j. We define the subspaces W_j as
  W_j = span{ψ_{j,k} | 0 ≤ k < K_{j+1} - K_j},
then
  V_{j+1} = V_j ⊕ W_j.    (13)
The dimension of W_j is thus the dimension of V_{j+1} (K_{j+1}) minus the dimension of V_j (K_j).
19 Applications
In this section we describe results of some experiments involving the ideas presented earlier. The examples were generated with a simple C code whose implementation is a direct transliteration of the algorithms described above. The only essential piece of code imported was an implementation of Neville's algorithm from Numerical Recipes [21]. All examples were computed on the unit interval, that is, all constructions are adapted to the boundary as described earlier. The only code modification to accommodate this is to ensure that the moving window of coefficients does not cross the left or right end point of the interval. The case of a weight function requires somewhat more machinery, which we describe in Section 19.3.
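For concreteness, here is a minimal Python transliteration of that imported routine (the paper's own code was C): Neville's algorithm evaluates, at a point x, the polynomial interpolating a set of possibly irregularly spaced points, which is exactly the predictor used in the interpolating transform.

```python
def neville(xs, ys, x):
    """Evaluate at x the polynomial interpolating the points (xs[i], ys[i])
    using Neville's algorithm.  The xs may be irregularly spaced; the
    interpolation is exact for polynomials of degree < len(xs)."""
    p = list(ys)
    n = len(p)
    for m in range(1, n):
        for i in range(n - m):
            # Combine two overlapping lower-order interpolants.
            p[i] = ((x - xs[i + m]) * p[i] + (xs[i] - x) * p[i + 1]) \
                   / (xs[i] - xs[i + m])
    return p[0]
```

With N = 2D points taken from the even samples around an odd location, `neville` returns the predicted odd value; the detail coefficient is the difference between the actual sample and this prediction.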
19.1 Interpolation of Randomly Sampled Data
The first and simplest generalization concerns the use of x_{j_0,k} placed at random locations. Fig. 32 shows the scaling functions (top) and wavelets (bottom) which result for such a set of random locations. The scaling functions are of order N = 4 (interpolating subdivision) and the wavelets have Ñ = 2 vanishing moments. In this case we placed 7 uniformly random samples between x_{3,0} = 0 and x_{3,8} = 1. These locations are discernible in the graph as the unique points at which all scaling functions have a root, save for one which takes on the value 1 (indicated by solid diamond marks). Sample points at finer levels were generated recursively by simply adding midpoints, i.e., x_{j+1,2k+1} = (x_{j,k} + x_{j,k+1})/2 for j ≥ 3. An interesting question is how the new sample points should be placed. A disadvantage of always adding midpoints is that imbalances between the lengths of the intervals are maintained. A way to avoid this is to place new sample points only in intervals whose length is larger than the average interval length. Doing so repeatedly will bring the ratio of largest to smallest interval length ever closer to 1. Another possible approach would add new points so that the length of the intervals varies in a smooth manner, i.e., no large intervals neighbor small intervals. This can be done by applying an interpolating subdivision scheme, with integers as sample locations, to the x_{j,k} themselves to find the x_{j+1,2k+1}. This would result in a smooth mapping from the integers to the x_{j,k}. After performing this step the usual interpolating subdivision would follow. Depending on the application, one of these schemes may be preferable. Next we took some random data over a random set of 16 sample locations and applied linear (N = 2) and cubic (N = 4) interpolating subdivision to them. The resulting interpolating functions are compared on the right side of Fig. 33.
These functions can be thought of as a linear superposition of the kinds of scaling functions we constructed above for the example j = 3. Note how sample points which are very close to each other can introduce sharp features in the resulting function. We also note that the interpolation of order 4 exhibits some of the overshoot behavior one would expect when encountering long and steep sections of the curve followed by a reversal of direction. This behavior gets worse for higher order interpolation schemes. These experiments suggest that it might be desirable to enforce some condition on the ratio of the largest to the smallest interval in a random sample construction.
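The largest-interval placement strategy discussed above is easy to make concrete (Python sketch; the function name is ours, and samples are assumed stored as a sorted list):

```python
def refine_largest(xs):
    """Insert midpoints only into intervals longer than the average interval
    length.  Repeating this drives the ratio of largest to smallest
    interval length toward 1."""
    gaps = [b - a for a, b in zip(xs, xs[1:])]
    avg = sum(gaps) / len(gaps)
    out = []
    for a, g in zip(xs, gaps):
        out.append(a)
        if g > avg:                 # refine only the over-long intervals
            out.append(a + g / 2.0)
    out.append(xs[-1])
    return out
```

Starting from [0, 0.1, 1], only the long interval [0.1, 1] receives a new point, reducing the largest-to-smallest length ratio from 9 to 4.5.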
Fig. 32. Example of scaling functions (top) with N = 4 (interpolating subdivision) and wavelets (bottom) with Ñ = 2 vanishing moments adapted to irregular sample locations. The original sample locations x_{3,k} are highlighted with diamond marks. Note that one of the scaling functions is 1 at each of the marks, while all others are 0.
Fig. 33. Example of data defined at random locations x_{4,k} on the unit interval and interpolated with interpolating subdivision of order N = 2 and 4 respectively.
Fig. 34. A sine wave with additive noise sampled at uniformly distributed random locations in the unit interval and reconstructed with quintic average-interpolation (upper left). Successive smoothings are performed by going to coarser resolutions and cascading back out (upper right and bottom row).
19.2 Smoothing of Randomly Sampled Data
A typical use for wavelet constructions over irregular sample locations is smoothing of data acquired at such locations. As an example we took 512 uniformly random locations on the unit interval (V_9) and initialized them with averages of sin(3/4 πx) with ±20% additive white noise. The resulting function is plotted on the top left of Fig. 34 at level 9. The scaling functions used were based on average-interpolation with N = 5 and Ñ = 1. Smoothing was performed by going to coarser spaces (lower index), setting all wavelet coefficients to zero, and subdividing back out. From left to right, top to bottom, the coarsest level used in the transform is V_9, V_7, V_5, and V_3. We hasten to point out that this is a very simple and naive smoothing technique. Depending on the application and knowledge of the underlying processes, much more powerful smoothing operators can be constructed [15,14]. This example merely serves to suggest that such operations can also be performed over irregular samples.
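A toy version of this smoothing pipeline can be sketched with the Unbalanced Haar transform alone as the multiresolution machinery (a deliberate simplification of the quintic average-interpolation actually used for Fig. 34; all names are ours):

```python
def haar_forward(s, I):
    """One level of the Unbalanced Haar transform on averages s over
    intervals of length I (len(s) even).  Returns coarse averages,
    coarse interval lengths, and detail coefficients."""
    sc, Ic, d = [], [], []
    for k in range(0, len(s), 2):
        w = I[k] + I[k + 1]
        sc.append((I[k] * s[k] + I[k + 1] * s[k + 1]) / w)  # weighted average
        Ic.append(w)
        d.append(s[k + 1] - s[k])                            # difference
    return sc, Ic, d

def haar_inverse(sc, I_fine, d):
    """Invert one level; I_fine holds the fine-level interval lengths."""
    s = []
    for k, (sck, dk) in enumerate(zip(sc, d)):
        Ie, Io = I_fine[2 * k], I_fine[2 * k + 1]
        se = sck - Io / (Ie + Io) * dk
        s.extend([se, se + dk])
    return s

def smooth(s, I, levels):
    """Naive smoothing as in Fig. 34: transform `levels` levels down,
    discard every detail coefficient, and cascade back out."""
    fine_lengths = []
    for _ in range(levels):
        fine_lengths.append(I)
        s, I, _ = haar_forward(s, I)     # details are thrown away
    for If in reversed(fine_lengths):
        s = haar_inverse(s, If, [0.0] * len(s))
    return s
```

Keeping the details instead of zeroing them gives perfect reconstruction, and the weighted mean of the data is preserved at every level.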
"051/ oo
o.o.
-I.o
0!2
Fig. 35. Comparison of weighted (solid line) and unweighted (dotted line) wavelets at the left endpoint of the interval where the weight function x^{-1/2} becomes singular. On the left N = 4 and Ñ = 2; on the right N = 5 and Ñ = 1. Note how the weighted wavelets take on smaller values at zero in order to adapt to the weight function, whose value tends to infinity.
19.3 Weighted Inner Products
When we discussed the construction of scaling functions and wavelets we pointed out how a weight function in the inner product can be incorporated in the transform. The only change in the code is due to the fact that we cannot cast the average-interpolation problem into the form of a Neville interpolation algorithm anymore, since in general the integral of a polynomial times the weight function is not another polynomial. Instead we first explicitly construct the polynomial p in the subdivision and use it to find the filter coefficients. This implies solving the underlying linear system which relates the coefficients of p(x) to the observed
weighted averages. Similarly, when lifting the interpolating wavelets to give them 2 vanishing moments, the weighted moments of the scaling function enter. In both of these cases the construction of weighted bases requires additional code to compute moments and solve the linear systems involved in finding the filters. We saw in Section 16 how moment and integral calculations can be performed recursively from the finest level on up by using the refinement relationship for the scaling function. Without going into any further detail we point out that moment calculations and the solution of the linear system to find p(x) can be numerically delicate. The stability depends on which polynomial basis is used. For example, we found the linear systems that result when expressing everything with respect to global monomial moments so ill-conditioned as to be unsolvable even in double precision. The solution lies in using a local polynomial basis, i.e., a basis which changes for each interval. A better choice might be a basis of local orthogonal polynomials.
In our experiments we used the weight function x^{-1/2}, which is singular at the left interval boundary. To compute the moments we used local monomials, resulting in integrals for which analytic expressions are available. Fig. 35 shows some of the resulting wavelets. In both cases we show the leftmost wavelet, which is most impacted by the weight function. Weighted and unweighted wavelets further to the right become ever more similar. Part of the reason why they look similar is the normalization: for example, both weighted and unweighted scaling functions have to satisfy Σ_k φ_{j,k} = 1. The images show wavelets with N = 4 (interpolating) and Ñ = 2 vanishing moments on the left, and wavelets with N = 5 (average-interpolation) and Ñ = 1 vanishing moment on the right. In both cases the weighted wavelet is shown with a solid line and the unweighted case with a dotted line.
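For this particular weight the local monomial moments indeed have a closed form, which can be sketched directly (Python; the function name is ours):

```python
def weighted_moment(a, b, p):
    """Analytic local monomial moment for the weight w(x) = x**(-1/2):
    the integral over [a, b] of x**p * x**(-1/2) dx equals
    (b**(p + 1/2) - a**(p + 1/2)) / (p + 1/2).
    The result is finite even for a = 0, where the weight is singular
    but still integrable."""
    q = p + 0.5
    return (b ** q - a ** q) / q
```

For instance, the total weight over [0, 1] is ∫ x^{-1/2} dx = 2, and the first moment is 2/3.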
Fig. 36. Comparison of approximation error when expanding the function sin(4πx^{1/2}) over [0, 1/2] using wavelets constructed with respect to an unweighted inner product (left) and a weighted inner product with weight x^{-1/2} (right). Here N = 1, 3, 5, and 7.
The weighted and unweighted wavelets are only slightly different in shape. However, when applied to the expansion of some function they can make a dramatic difference. As an example we applied both types of wavelets to the function f(x) = sin(4πx^{1/2}), which has a divergent derivative at zero. With unweighted wavelets the convergence will be slow close to the singularity, typically O(h) with h = 2^{-j}, independent of N. In other words, there is no gain in using higher order wavelets. However, if we build weighted wavelets for which the weight function times f is an analytic function, we can expect O(h^N) behavior everywhere again. For our example we can take w(x) = x^{-1/2}. This way the weighted wavelets are adapted to the singularity of the function f. Fig. 36 shows the error in the resulting expansions with order N = 1, 3, 5, and 7 and dual order Ñ = 1. For unweighted wavelets higher order constructions only get better by a constant factor, while the weighted wavelets show higher order convergence when going to higher order wavelets.
20 Warning
Like every "do it yourself at home" product this one comes with a warning. Most of the techniques we presented here are straightforward to implement, and before you know it you will be generating wavelets yourself. However, we did not discuss most of the deeper underlying mathematical properties which assure that everything works as we expect it to. These address issues such as: What are the conditions on the subdivision scheme so that it generates smooth functions? Do the resulting scaling functions and wavelets generate a stable, i.e., Riesz basis? These questions are not easily answered and require some heavy mathematics. One of the fundamental questions is how properties such as convergence of the subdivision algorithm, Riesz bounds, and smoothness can be related back to properties of the filter sequences. This is a very hard question, and at this moment no general answer is available to our knowledge. We restrict ourselves here to a short description of the extent to which these questions have been answered. In the classical case, i.e., regular samples and no weight function, everything essentially works. The regularity of the basis functions varies linearly with N. In the case of the interval, regular samples, and no weight function, the same results hold. This is because the boundary basis functions are finite linear combinations of the ones from the real line. In the case of regular samples with a weight function, it can be shown that, with some minimal conditions on the weight function, the basis functions have the same regularity as in the unweighted case. In the case of irregular samples, little is known at this moment. Everything essentially depends on how irregular the samples are. It might be possible to obtain results under the condition that the irregular samples are not too far from regular samples, but this has to be studied in detail in the future.
Recent results concerning general multiscale transforms and their stability were obtained by Wolfgang Dahmen and his collaborators. They have been working (independently from [27, 26]) on a scheme which is very similar to the lifting scheme [2, 8]. In particular, Dahmen shows in [7] which properties, in addition to invertibility of the transform, are needed to assure stable bases. Whether this result can be applied to the bases constructed here needs to be studied in the future.

21 Outlook
So far we have only discussed the construction of second generation wavelets on the real line or the interval. Most of the techniques presented here, such as polynomial subdivision and lifting, extend easily to much more general sets, in particular domains in ℝ^n, curves, surfaces, and manifolds. One example is the construction of wavelets on the sphere [22]. There we use the lifting scheme to construct locally supported, biorthogonal spherical wavelets and their associated fast transforms. The construction starts from a recursive triangulation of the sphere and is parameterization independent. Since the construction does not rely on any specific properties of the sphere, it can be generalized to other surfaces. The only question which needs to be addressed is what the right replacement for polynomials is. Polynomials restricted to a sphere are still a natural choice because of the connection with spherical harmonics, but on a general surface this is no longer the case.

References

1. L. Andersson, N. Hall, B. Jawerth, and G. Peters. Wavelets on closed subsets of the real line. In [23], pages 1-61.
2. J. M. Carnicer, W. Dahmen, and J. M. Peña. Local decompositions of refinable spaces. Appl. Comput. Harmon. Anal., 3:127-153, 1996.
3. C. Chui and E. Quak. Wavelets on a bounded interval. In D. Braess and L. L. Schumaker, editors, Numerical Methods of Approximation Theory, pages 1-24. Birkhäuser-Verlag, Basel, 1992.
4. C. K. Chui, L. Montefusco, and L. Puccio, editors. Conference on Wavelets: Theory, Algorithms, and Applications. Academic Press, San Diego, CA, 1994.
5. A. Cohen, I. Daubechies, and J. Feauveau. Bi-orthogonal bases of compactly supported wavelets. Comm. Pure Appl. Math., 45:485-560, 1992.
6. A. Cohen, I. Daubechies, and P. Vial. Multiresolution analysis, wavelets and fast algorithms on an interval. Appl. Comput. Harmon. Anal., 1(1):54-81, 1993.
7. W. Dahmen. Stability of multiscale transformations. J. Fourier Anal. Appl., 2(4):341-361, 1996.
8. W.
Dahmen, S. Prössdorf, and R. Schneider. Multiscale methods for pseudodifferential equations on smooth manifolds. In [4], pages 385-424, 1994.
9. I. Daubechies. Ten Lectures on Wavelets. CBMS-NSF Regional Conf. Series in Appl. Math., Vol. 61. Society for Industrial and Applied Mathematics, Philadelphia, PA, 1992.
10. G. Deslauriers and S. Dubuc. Interpolation dyadique. In Fractals, dimensions non entières et applications, pages 44-55. Masson, Paris, 1987.
11. G. Deslauriers and S. Dubuc. Symmetric iterative interpolation processes. Constr. Approx., 5(1):49-68, 1989.
12. D. L. Donoho. Smooth wavelet decompositions with blocky coefficient kernels. In [23], pages 259-308.
13. D. L. Donoho. Interpolating wavelet transforms. Preprint, Department of Statistics, Stanford University, 1992.
14. D. L. Donoho and I. M. Johnstone. Ideal spatial adaptation via wavelet shrinkage. Biometrika, to appear, 1994.
15. D. L. Donoho and I. M. Johnstone. Adapting to unknown smoothness via wavelet shrinkage, 1995.
16. N. Dyn, D. Levin, and J. Gregory. A 4-point interpolatory subdivision scheme for curve design. Comput. Aided Geom. Des., 4:257-268, 1987.
17. M. Girardi and W. Sweldens. A new class of unbalanced Haar wavelets that form an unconditional basis for Lp on general measure spaces. J. Fourier Anal. Appl., 3(4), 1997.
18. M. Lounsbery, T. D. DeRose, and J. Warren. Multiresolution surfaces of arbitrary topological type. ACM Trans. on Graphics, 16(1):34-73, 1997.
19. S. G. Mallat. Multiresolution approximations and wavelet orthonormal bases of L²(ℝ). Trans. Amer. Math. Soc., 315(1):69-87, 1989.
20. Y. Meyer. Ondelettes et Opérateurs, I: Ondelettes, II: Opérateurs de Calderón-Zygmund, III: (with R. Coifman), Opérateurs multilinéaires. Hermann, Paris, 1990. English translation of first volume, Wavelets and Operators, is published by Cambridge University Press, 1993.
21. W. H. Press, B. P. Flannery, S. A. Teukolsky, and W. T. Vetterling. Numerical Recipes. Cambridge University Press, 2nd edition, 1993.
22. P. Schröder and W. Sweldens. Spherical wavelets: Efficiently representing functions on the sphere. Computer Graphics Proceedings (SIGGRAPH 95), pages 161-172, 1995.
23. L. L. Schumaker and G. Webb, editors. Recent Advances in Wavelet Analysis. Academic Press, New York, 1993.
24. J. Stoer and R. Bulirsch. Introduction to Numerical Analysis. Springer Verlag, New York, 1980.
25. W. Sweldens. Construction and Applications of Wavelets in Numerical Analysis. PhD thesis, Department of Computer Science, Katholieke Universiteit Leuven, Belgium, 1994.
26. W. Sweldens. The lifting scheme: A custom-design construction of biorthogonal wavelets. Appl. Comput. Harmon. Anal., 3(2):186-200, 1996.
27. W. Sweldens. The lifting scheme: A construction of second generation wavelets. SIAM J. Math. Anal., 29(2):511-546, 1997.
28. M. Vetterli and C. Herley. Wavelets and filter banks: Theory and design. IEEE Trans. Acoust. Speech Signal Process., 40(9):2207-2232, 1992.
Factoring Wavelet Transforms into Lifting Steps*

Ingrid Daubechies¹ and Wim Sweldens²

¹ Program for Applied and Computational Mathematics, Princeton University, Princeton NJ 08544, U.S.A. ingrid@math.princeton.edu
² Bell Laboratories, Lucent Technologies, Murray Hill NJ 07974, USA wim@bell-labs.com
1 Introduction
Over the last decade several constructions of compactly supported wavelets originated both from mathematical analysis and the signal processing community. The roots of critically sampled wavelet transforms are actually older than the word "wavelet" and go back to the context of subband filters, or more precisely quadrature mirror filters [35, 36, 42, 50-53, 57, 55, 59]. In mathematical analysis, wavelets were defined as translates and dilates of one fixed function and were used both to analyze and to represent general functions [13, 18, 22, 34, 21]. In the mid eighties the introduction of multiresolution analysis and the fast wavelet transform by Mallat and Meyer provided the connection between subband filters and wavelets [30, 31, 34]; this led to new constructions, such as the smooth orthogonal and compactly supported wavelets [16]. Later many generalizations to the biorthogonal or semiorthogonal (pre-wavelet) case were introduced. Biorthogonality allows the construction of symmetric wavelets and thus linear phase filters. Examples are: the construction of semiorthogonal spline wavelets [1, 8, 10, 11, 49], fully biorthogonal compactly supported wavelets [12, 56], and recursive filter banks [25]. Various techniques to construct wavelet bases, or to factor existing wavelet filters into basic building blocks, are known. One of these is lifting. The original motivation for developing lifting was to build second generation wavelets, i.e., wavelets adapted to situations that do not allow translation and dilation, like non-Euclidean spaces. First generation wavelets are all translates and dilates of one or a few basic shapes; the Fourier transform is then the crucial tool for wavelet construction. A construction using lifting, on the contrary, is entirely spatial and therefore ideally suited for building second generation wavelets when Fourier techniques are no longer available.
When restricted to the translation and dilation invariant case, or the "first generation," lifting comes down to well-known ladder type structures and certain factoring algorithms. In the next few paragraphs, we explain lifting and show how it provides a spatial construction and allows for second generation wavelets; later we focus on the first generation case and the connections with factoring schemes. * I. Daubechies and W. Sweldens. From J. Fourier Anal. and Appl., 4, 1998, 245-267. Reprinted by permission of Birkhäuser Boston.
The basic idea of wavelet transforms is to exploit the correlation structure present in most real life signals to build a sparse approximation. The correlation structure is typically local in space (time) and frequency; neighboring samples and frequencies are more correlated than ones that are far apart. Traditional wavelet constructions use the Fourier transform to build the space-frequency localization. However, as the following simple example shows, this can also be done in the spatial domain. Consider a signal x = (x_k)_{k∈Z} with x_k ∈ ℝ. Let us split it into two disjoint sets, called the polyphase components: the even indexed samples x_e = (x_{2k})_{k∈Z}, or "evens" for short, and the odd indexed samples x_o = (x_{2k+1})_{k∈Z}, or "odds." Typically these two sets are closely correlated. Thus it is only natural that, given one set, e.g., the even, one can build a good predictor P for the other set. Of course the predictor need not be exact, so we need to record the difference or detail d: d = x_o - P(x_e). Given the detail d and the even, we can immediately recover the odd as x_o = P(x_e) + d. If P is a good predictor, then d will be approximately a sparse set; in other words, we expect the first order entropy to be smaller for d than for x_o. Let us look at a simple example. An easy predictor for an odd sample x_{2k+1} is simply the average of its two even neighbors; the detail coefficient then is d_k = x_{2k+1} - (x_{2k} + x_{2k+2})/2.
From this we see that if the original signal is locally linear, the detail coefficient is zero. The operation of computing a prediction and recording the detail we will call a lifting step. The idea of retaining d rather than x_o is well known and forms the basis of so-called DPCM methods [26, 27]. This idea connects naturally with wavelets as follows. The prediction steps can take care of some of the spatial correlation, but for wavelets we also want to get some separation in the frequency domain. Right now we have a transform from (x_e, x_o) to (x_e, d). The frequency separation is poor since x_e is obtained by simply subsampling, so that serious aliasing occurs. In particular, the running average of the x_e is not the same as that of the original samples x. To correct this, we propose a second lifting step, which replaces the evens with smoothed values s with the use of an update operator U applied to the details:

  s = x_e + U(d).
Again this step is trivially invertible: given (s, d) we can recover x_e as x_e = s - U(d), and then x_o can be recovered as explained earlier. This illustrates one of the built-in features of lifting: no matter how P and U are chosen, the scheme is
Factoring Wavelet Transforms into Lifting Steps
133
Fig. 1. Block diagram of predict and update lifting steps.
always invertible and thus leads to critically sampled perfect reconstruction filter banks. The block diagram of the two lifting steps is given in Fig. 1. Coming back to our simple example, it is easy to see that an update operator that restores the correct running average, and therefore reduces aliasing, is given by

  s_k = x_{2k} + (d_{k-1} + d_k)/4.
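In code, the predict and update steps of this (2,2) example look as follows; a minimal Python sketch, where the periodic boundary handling is an illustrative assumption (the text does not fix what happens at the ends of a finite signal):

```python
def forward_22(x):
    """One level of the (2,2) lifting transform: predict odds from even
    neighbors, then update evens to preserve the running average."""
    n = len(x) // 2
    xe, xo = x[0::2], x[1::2]
    # predict: d_k = x_{2k+1} - (x_{2k} + x_{2k+2}) / 2
    d = [xo[k] - (xe[k] + xe[(k + 1) % n]) / 2 for k in range(n)]
    # update: s_k = x_{2k} + (d_{k-1} + d_k) / 4
    s = [xe[k] + (d[(k - 1) % n] + d[k]) / 4 for k in range(n)]
    return s, d

def inverse_22(s, d):
    """Invert by undoing the steps in reverse order with flipped signs."""
    n = len(s)
    xe = [s[k] - (d[(k - 1) % n] + d[k]) / 4 for k in range(n)]
    xo = [d[k] + (xe[k] + xe[(k + 1) % n]) / 2 for k in range(n)]
    x = [0.0] * (2 * n)
    x[0::2], x[1::2] = xe, xo
    return x
```

No matter how predict and update are chosen, the scheme inverts exactly; in addition 2 * sum(s) = sum(x), so the running average indeed survives in the coarse signal.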
This can be verified graphically by looking at Fig. 2. This simple example, when put in the wavelet framework, turns out to correspond to the biorthogonal (2,2) wavelet transform of [12], which was originally constructed using Fourier arguments. By the construction above, which did not use the Fourier transform but instead reasoned using only spatial arguments, one can easily work in a more general setting. Imagine for a moment that the samples were irregularly spaced. Using the same spatial arguments as above we could then see that a good predictor is of the form β x_{2k} + (1 - β) x_{2k+2}, where β varies spatially and depends on the irregularity of the grid. Similarly, spatially varying update coefficients can be computed [46]. This thus immediately allows for a (2,2) type transform for irregular samples. These spatial lifting steps can also be used in higher dimensions (see [45]) and lead, e.g., to wavelets on a sphere [40] or more complex manifolds. Note that the idea of using spatial wavelet constructions for building second generation wavelets has been proposed by several researchers:
- The lifting scheme is inspired by the work of Donoho [19] and Lounsbery et al. [29]. Donoho [19] shows how to build wavelets from interpolating scaling functions, while Lounsbery et al. build a multiresolution analysis of surfaces using a technique that is algebraically the same as lifting.
- Dahmen and collaborators, independently of lifting, worked on stable completions of multiscale transforms, a setting similar to second generation wavelets [7, 15]. Again independently, both of Dahmen and of lifting, Harten developed a general multiresolution approximation framework based on spatial prediction [23].
Fig. 2. Geometric interpretation for piecewise linear predict and update lifting steps. The original signal is drawn in bold. The wavelet coefficient d_k is computed as the difference of an odd sample and the average of the two neighboring evens. This corresponds to a loss d_k/2 in the area drawn in grey. To preserve the running average this area has to be redistributed to the even locations, resulting in a coarser piecewise linear signal s_k drawn in a thin line. Because the coarse scale is twice the fine scale and two even locations are affected, d_k/4, i.e., one quarter of the wavelet coefficient, has to be added to the even samples to obtain the s_k. Then the thin and bold lines cover the same area. (For simplicity we assumed that the wavelet coefficients d_{k-1} and d_{k+1} are zero.)
- In [14], Dahmen and Micchelli propose a construction of compactly supported wavelets that generates complementary spaces in a multiresolution analysis of univariate irregular knot splines.

The construction of the (2,2) example via lifting is one example of a two-step lifting construction for an entire family of Deslauriers-Dubuc biorthogonal interpolating wavelets.¹ Lifting thus provides a framework that allows the construction of certain biorthogonal wavelets which can be generalized to the second generation setting. A natural question now is how much of the first generation wavelet families can be built with the lifting framework. It turns out that every FIR wavelet or filter bank can be decomposed into lifting steps. This can be seen by writing the transform in the polyphase form. Statements concerning perfect reconstruction or lifting can then be made using matrices with polynomial or Laurent polynomial entries. A lifting step then becomes a so-called elementary matrix, that is, a triangular matrix (lower or upper) with all diagonal entries equal to one. It is a well known result in matrix algebra that any matrix with polynomial entries and determinant one can be factored into such elementary matrices. For those familiar with the common notation in this field, this is written as SL(n; R[z, z^{-1}]) = E(n; R[z, z^{-1}]). The proof relies on the 2000 year old Euclidean algorithm. In the filter bank literature, subband transforms built using elementary matrices are known as ladder structures and were introduced in [5]. Later several constructions concerning factoring into ladder steps were given [28, 41, 48, 32, 33]. Vetterli and Herley [56] also use the Euclidean algorithm and the connection to diophantine equations to find all high-pass filters that, together with a given low-pass filter, make a finite filter wavelet transform. Van Dyck et al. use ladder structures to design a wavelet video coder [20]. In this paper we give a self-contained constructive proof of the standard factorization result and apply it to several popular wavelets. We consider the Laurent polynomial setting as opposed to the standard polynomial setting because it is more general, allows for symmetry, and also poses some interesting questions concerning non-uniqueness. This paper is organized as follows. In Section 2 we review some facts about filters and Laurent polynomials. Section 3 gives the basics behind wavelet transforms and the polyphase representation, while Section 4 discusses the lifting scheme. We review the Euclidean algorithm in Section 5 before moving to the main factoring result in Section 6. Section 7 gives several examples. In Section 8 we show how lifting can reduce the computational complexity of the wavelet transform by a factor of two. Finally, Section 9 contains comments.

¹ This family was derived independently, but without the use of lifting, by several people: Reissell [38], Tian and Wells [47], and Strang [43]. The derivation using lifting can be found in [44].

2 Filters and Laurent polynomials
A filter h is a linear time invariant operator and is completely determined by its impulse response {h_k in R | k in Z}. The filter h is a Finite Impulse Response (FIR) filter in case only a finite number of filter coefficients h_k are non-zero. We then let k_b (respectively k_e) be the smallest (respectively largest) integer k for which h_k is non-zero. The z-transform of an FIR filter h is a Laurent polynomial h(z) given by

  h(z) = Σ_{k=k_b}^{k_e} h_k z^{-k}.

In this paper, we consider only FIR filters. We often use the symbol h to denote both the filter and the associated Laurent polynomial h(z). The degree of a Laurent polynomial h is defined as

  |h| = k_e - k_b.
So the length of the filter is the degree of the associated polynomial plus one. Note that the polynomial z^p seen as a Laurent polynomial has degree zero, while as a regular polynomial it would have degree p. In order to make consistent statements, we set the degree of the zero polynomial to -∞. The set of all Laurent polynomials with real coefficients has a commutative ring structure. The sum or difference of two Laurent polynomials is again a Laurent polynomial. The product of a Laurent polynomial of degree l and a Laurent polynomial of degree l' is a Laurent polynomial of degree l + l'. This ring is usually denoted as R[z, z^{-1}]. Within a ring, exact division is not possible in general. However, for Laurent polynomials, division with remainder is possible. Take two Laurent polynomials a(z) and b(z) ≠ 0 with |a(z)| ≥ |b(z)|; then there always exists a Laurent polynomial q(z) (the quotient) with |q(z)| = |a(z)| - |b(z)|, and a Laurent polynomial r(z) (the remainder) with |r(z)| < |b(z)|, so that

  a(z) = b(z) q(z) + r(z).

We denote this as (C-language notation):

  q(z) = a(z) / b(z)  and  r(z) = a(z) % b(z).

If |b(z)| = 0, which means b(z) is a monomial, then r(z) = 0 and the division is exact. A Laurent polynomial is invertible if and only if it is a monomial. This is the main difference with the ring of (regular) polynomials, where constants are the only polynomials that can be inverted. Another difference is that the long division of Laurent polynomials is not necessarily unique. The following example illustrates this.
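Before the worked example, here is how such a division can be carried out mechanically; a sketch with Laurent polynomials stored as exponent-to-coefficient dicts (the dict representation and the `side` parameter are illustrative choices, not notation from the text):

```python
def deg(p):
    """Laurent degree |p|: highest exponent minus lowest; zero polynomial -> -inf."""
    return max(p) - min(p) if p else float("-inf")

def laurent_divmod(a, b, side="low"):
    """Divide a by b, repeatedly cancelling the remainder's lowest-order
    term (side="low") or highest-order term (side="high")."""
    q, r = {}, dict(a)
    while r and deg(r) >= deg(b):
        e_r = min(r) if side == "low" else max(r)
        e_b = min(b) if side == "low" else max(b)
        shift, c = e_r - e_b, r[e_r] / b[e_b]
        q[shift] = q.get(shift, 0.0) + c
        for e, v in b.items():
            r[e + shift] = r.get(e + shift, 0.0) - c * v
            if abs(r[e + shift]) < 1e-12:
                del r[e + shift]
    return q, r
```

Different `side` choices produce different valid quotient/remainder pairs, which is exactly the non-uniqueness the example below illustrates.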
Example 1. Suppose we want to divide a(z) = z^{-1} + 6 + z by b(z) = 4 + 4z. This means we have to find a Laurent polynomial q(z) of degree 1 so that r(z) given by

  r(z) = a(z) - b(z) q(z)

is of degree zero. This implies that b(z) q(z) has to match a(z) in two terms. If we let those terms be the term in z^{-1} and the constant, then the answer is q(z) = 1/4 (z^{-1} + 5). Indeed,

  r(z) = (z^{-1} + 6 + z) - (4 + 4z)(1/4 z^{-1} + 5/4) = -4z.

The remainder thus is of degree zero and we have completed the division. However, if we choose the two matching terms to be the ones in z and z^{-1}, the answer is q(z) = 1/4 (z^{-1} + 1). Indeed,

  r(z) = (z^{-1} + 6 + z) - (4 + 4z)(1/4 z^{-1} + 1/4) = 4.

Finally, if we choose to match the constant and the term in z, the solution is q(z) = 1/4 (5 z^{-1} + 1) and the remainder is r(z) = -4 z^{-1}. The fact that division is not unique will turn out to be particularly useful later. In general b(z) q(z) has to match a(z) in at least |a(z)| - |b(z)| + 1 terms, but we are free to choose these terms in the beginning, the end, or divided between the beginning and the end of a(z). For each choice of terms a corresponding long division algorithm exists. In this paper, we also work with 2 x 2 matrices of Laurent polynomials, e.g.,

  M(z) = [ a(z)  b(z) ]
         [ c(z)  d(z) ].
These matrices also form a ring, which is denoted by M(2; R[z, z^{-1}]). If the determinant of such a matrix is a monomial, then the matrix is invertible. The set of invertible matrices is denoted GL(2; R[z, z^{-1}]). A matrix from this set is unitary (sometimes also referred to as para-unitary) in case

  M(z)^{-1} = M(z^{-1})^T.

3 Wavelet transforms
Fig. 3. Discrete wavelet transform (or subband transform): the forward transform consists of two analysis filters h̃ (low-pass) and g̃ (high-pass) followed by subsampling, while the inverse transform first upsamples and then uses two synthesis filters h (low-pass) and g (high-pass).

Fig. 3 shows the general block scheme of a wavelet or subband transform. The forward transform uses two analysis filters h̃ (low-pass) and g̃ (high-pass) followed by subsampling, while the inverse transform first upsamples and then uses two synthesis filters h (low-pass) and g (high-pass). For details on wavelet and subband transforms we refer to [43] and [57]. In this paper we consider only the case where the four filters h, g, h̃, and g̃ of the wavelet transform are FIR filters. The conditions for perfect reconstruction are given by

  h(z) h̃(z^{-1}) + g(z) g̃(z^{-1}) = 2
  h(z) h̃(-z^{-1}) + g(z) g̃(-z^{-1}) = 0.

We define the modulation matrix M(z) as

  M(z) = [ h(z)  h(-z) ]
         [ g(z)  g(-z) ].

We similarly define the dual modulation matrix M̃(z). The perfect reconstruction condition can now be written as

  M̃(z^{-1})^T M(z) = 2 I,     (1)
Fig. 4. Polyphase representation of the wavelet transform: first subsample into even and odd, then apply the dual polyphase matrix. For the inverse transform: first apply the polyphase matrix and then join even and odd.
where I is the 2 x 2 identity matrix. If all filters are FIR, then the matrices M(z) and M̃(z) belong to GL(2; R[z, z^{-1}]). A special case are orthogonal wavelet transforms, in which case h = h̃ and g = g̃. The modulation matrix M(z) = M̃(z) is then √2 times a unitary matrix. The polyphase representation is a particularly convenient tool to express the special structure of the modulation matrix [3]. The polyphase representation of a filter h is given by

  h(z) = h_e(z^2) + z^{-1} h_o(z^2),

where h_e contains the even coefficients and h_o contains the odd coefficients:

  h_e(z) = Σ_k h_{2k} z^{-k}  and  h_o(z) = Σ_k h_{2k+1} z^{-k},

or

  h_e(z^2) = (h(z) + h(-z))/2  and  h_o(z^2) = (h(z) - h(-z))/(2 z^{-1}).

We assemble the polyphase matrix as

  P(z) = [ h_e(z)  g_e(z) ]
         [ h_o(z)  g_o(z) ],

so that

  M(z) = P(z^2)^T [ 1       1        ]
                  [ z^{-1}  -z^{-1}  ].

We define P̃(z) similarly. The wavelet transform now is represented schematically in Fig. 4. The perfect reconstruction property is given by

  P(z) P̃(z^{-1})^T = I.     (2)
Again we want P(z) and P̃(z) to contain only Laurent polynomials. Equation (2) then implies that det P(z) and its inverse are both Laurent polynomials; this is possible only in case det P(z) is a monomial in z: det P(z) = C z^l. P(z) and P̃(z) then belong to GL(2; R[z, z^{-1}]). Without loss of generality we assume that det P(z) = 1, i.e., P(z) is in SL(2; R[z, z^{-1}]). Indeed, if the determinant is not one, we can always divide g_o(z) and g̃_o(z) by the determinant. This means that for a given filter h, we can always scale and shift the filter g so that the determinant of the polyphase matrix is one.

Fig. 5. The lifting scheme: first a classical subband filter scheme and then lifting the low-pass subband with the help of the high-pass subband.

Fig. 6. The dual lifting scheme: first a classical subband filter scheme and then lifting the high-pass subband with the help of the low-pass subband.

The problem of finding an FIR wavelet transform thus amounts to finding a matrix P(z) with determinant one. Once we have such a matrix, P̃(z) and the four filters for the wavelet transform follow immediately. From (2) and Cramer's rule it follows that

  h̃_e(z) = g_o(z^{-1}),  h̃_o(z) = -g_e(z^{-1}),  g̃_e(z) = -h_o(z^{-1}),  g̃_o(z) = h_e(z^{-1}).

This implies

  g̃(z) = z^{-1} h(-z^{-1})  and  h̃(z) = -z^{-1} g(-z^{-1}).

The most trivial example of a polyphase matrix is P(z) = I. This results in h(z) = h̃(z) = 1 and g(z) = g̃(z) = z^{-1}. The wavelet transform then does nothing else but subsampling even and odd samples. This transform is called the polyphase transform, but in the context of lifting it is often referred to as the Lazy wavelet transform [44]. (The reason is that the notion of the Lazy wavelet can also be used in the second generation setting.)

4 The Lifting Scheme
The lifting scheme [44, 45] is an easy relationship between perfect reconstruction filter pairs (h, g) that have the same low-pass or high-pass filter. One can then start from the Lazy wavelet and use lifting to gradually build one's way up to a multiresolution analysis with particular properties.
Definition 1. A filter pair (h, g) is complementary in case the corresponding polyphase matrix P(z) has determinant 1.

If (h, g) is complementary, so is (h̃, g̃). This allows us to state the lifting scheme.

Theorem 1 (Lifting). Let (h, g) be complementary. Then any other finite filter g^{new} complementary to h is of the form

  g^{new}(z) = g(z) + h(z) s(z^2),

where s(z) is a Laurent polynomial. Conversely, any filter of this form is complementary to h.

Proof. The polyphase components of h(z) s(z^2) are h_e(z) s(z) for the even part and h_o(z) s(z) for the odd part. After lifting, the new polyphase matrix is thus given by

  P^{new}(z) = P(z) [ 1  s(z) ]
                    [ 0  1    ].

This operation does not change the determinant of the polyphase matrix.

Fig. 5 shows the schematic representation of lifting. Theorem 1 can also be written relating the low-pass filters h̃ and h. In this formulation, it is exactly the Vetterli-Herley lemma [56, Proposition 4.7]. The dual polyphase matrix is given by

  P̃^{new}(z) = P̃(z) [ 1           0 ]
                     [ -s(z^{-1})  1 ].

We see that lifting creates a new h̃ filter given by

  h̃^{new}(z) = h̃(z) - g̃(z) s(z^{-2}).

Theorem 2 (Dual lifting). Let (h, g) be complementary. Then any other finite filter h^{new} complementary to g is of the form

  h^{new}(z) = h(z) + g(z) t(z^2),

where t(z) is a Laurent polynomial. Conversely, any filter of this form is complementary to g.

After dual lifting, the new polyphase matrix is given by

  P^{new}(z) = P(z) [ 1     0 ]
                    [ t(z)  1 ].

Dual lifting creates a new g̃ given by

  g̃^{new}(z) = g̃(z) - h̃(z) t(z^{-2}).
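The determinant argument in these proofs can be checked mechanically; a small sketch with Laurent polynomials as exponent-to-coefficient dicts (the Haar polyphase matrix and the particular s(z) below are illustrative choices):

```python
def pmul(p, q):
    """Product of two Laurent polynomials stored as {exponent: coefficient}."""
    r = {}
    for e1, c1 in p.items():
        for e2, c2 in q.items():
            r[e1 + e2] = r.get(e1 + e2, 0.0) + c1 * c2
    return {e: c for e, c in r.items() if abs(c) > 1e-12}

def padd(p, q):
    r = dict(p)
    for e, c in q.items():
        r[e] = r.get(e, 0.0) + c
    return {e: c for e, c in r.items() if abs(c) > 1e-12}

def det2(P):
    """Determinant of a 2x2 matrix of Laurent polynomials."""
    minus = {e: -c for e, c in pmul(P[0][1], P[1][0]).items()}
    return padd(pmul(P[0][0], P[1][1]), minus)

def lift(P, s):
    """P_new = P * [[1, s], [0, 1]]: updates the g column, as in Theorem 1."""
    (he, ge), (ho, go) = P
    return [[he, padd(ge, pmul(he, s))], [ho, padd(go, pmul(ho, s))]]
```

Lifting with any s(z) leaves det P(z) = 1 untouched, so the new pair stays complementary.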
Fig. 6 shows the schematic representation of dual lifting. In [44] lifting and dual lifting are used to build wavelet transforms starting from the Lazy wavelet. There a whole family of wavelets is constructed from the Lazy wavelet followed by one dual lifting and one primal lifting step. All the filters h constructed this way are half band and the corresponding scaling functions are interpolating. Because of the many advantages of lifting, it is natural to try to build other wavelets as well, perhaps using multiple lifting steps. In the next section we will show that any wavelet transform with finite filters can be obtained starting from the Lazy wavelet followed by a finite number of alternating lifting and dual lifting steps. In order to prove this, we first need to study the Euclidean algorithm in closer detail.

5 The Euclidean Algorithm
The Euclidean algorithm was originally developed to find the greatest common divisor of two natural numbers, but it can be extended to find the greatest common divisor of two polynomials; see, e.g., [4]. Here we need it to find common factors of Laurent polynomials. The main difference with the polynomial case is again that the solution is not unique. Indeed, the gcd of two Laurent polynomials is defined only up to a factor z^p. (This is similar to saying that the gcd of two polynomials is defined only up to a constant.) Two Laurent polynomials are relatively prime in case their gcd has degree zero. Note that they can share roots at zero and infinity.

Theorem 3 (Euclidean Algorithm for Laurent Polynomials). Take two Laurent polynomials a(z) and b(z) ≠ 0 with |a(z)| ≥ |b(z)|. Let a_0(z) = a(z) and b_0(z) = b(z) and iterate the following steps starting from i = 0:

  a_{i+1}(z) = b_i(z)                    (3)
  b_{i+1}(z) = a_i(z) % b_i(z).          (4)

Then a_n(z) = gcd(a(z), b(z)), where n is the smallest number for which b_n(z) = 0.

Given that |b_{i+1}(z)| < |b_i(z)|, there is an m so that |b_m(z)| = 0. The algorithm then finishes for n = m + 1. The number of steps thus is bounded by n ≤ |b(z)| + 1. If we let q_{i+1}(z) = a_i(z) / b_i(z), we have that

  [ a_{i+1}(z) ]   [ 0  1            ] [ a_i(z) ]
  [ b_{i+1}(z) ] = [ 1  -q_{i+1}(z)  ] [ b_i(z) ].

Consequently,

  [ a(z) ]              [ q_i(z)  1 ] [ a_n(z) ]
  [ b(z) ] = Π_{i=1}^{n} [ 1      0 ] [ 0      ],

and thus a_n(z) divides both a(z) and b(z). If a_n(z) is a monomial, then a(z) and b(z) are relatively prime.
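Theorem 3 translates directly into code; a sketch reusing a dict representation for Laurent polynomials (the division below always cancels the lowest-order term, one of several valid choices, so its quotients need not match those of the worked example that follows):

```python
def deg(p):
    return max(p) - min(p) if p else float("-inf")

def divmod_laurent(a, b):
    """Long division cancelling the lowest-order remainder term each step."""
    q, r = {}, dict(a)
    while r and deg(r) >= deg(b):
        shift, c = min(r) - min(b), r[min(r)] / b[min(b)]
        q[shift] = q.get(shift, 0.0) + c
        for e, v in b.items():
            r[e + shift] = r.get(e + shift, 0.0) - c * v
            if abs(r[e + shift]) < 1e-12:
                del r[e + shift]
    return q, r

def gcd_laurent(a, b):
    """Iterate a_{i+1} = b_i, b_{i+1} = a_i % b_i until b_n = 0 (Theorem 3)."""
    n = 0
    while b:
        _, r = divmod_laurent(a, b)
        a, b, n = b, r, n + 1
    return a, n
```

For a(z) = z^{-1} + 6 + z and b(z) = 4 + 4z this stops after n = 2 steps with a monomial gcd, confirming that the two are relatively prime.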
Example 2. Let a(z) = a_0(z) = z^{-1} + 6 + z and b(z) = b_0(z) = 4 + 4z. Then the first division gives us (see the example in Section 2):

  a_1(z) = 4 + 4z
  b_1(z) = 4
  q_1(z) = 1/4 z^{-1} + 1/4.

The next step yields

  a_2(z) = 4
  b_2(z) = 0
  q_2(z) = 1 + z.
Thus a(z) and b(z) are relatively prime, and

  [ a(z) ]   [ 1/4 z^{-1} + 1/4  1 ] [ 1 + z  1 ] [ 4 ]
  [ b(z) ] = [ 1                 0 ] [ 1      0 ] [ 0 ].

The number of steps here is n = 2 = |b(z)| + 1.

6 The Factoring Algorithm
In this section, we explain how any pair of complementary filters (h, g) can be factored into lifting steps. First, note that h_e(z) and h_o(z) have to be relatively prime, because any common factor would also divide det P(z), and we already know that det P(z) is 1. We can thus run the Euclidean algorithm starting from h_e(z) and h_o(z), and the gcd will be a monomial. Given the non-uniqueness of the division, we can always choose the quotients so that the gcd is a constant. Let this constant be K. We thus have that

  [ h_e(z) ]              [ q_i(z)  1 ] [ K ]
  [ h_o(z) ] = Π_{i=1}^{n} [ 1      0 ] [ 0 ].

Note that in case |h_o(z)| > |h_e(z)|, the first quotient q_1(z) is zero. We can always assume that n is even. Indeed, if n is odd, we can multiply the h(z) filter with z and g(z) with -z^{-1}. This does not change the determinant of the polyphase matrix. It flips (up to a monomial) the polyphase components of h and thus makes n even again. Given a filter h we can always find a complementary filter g^0 by letting

  P^0(z) = Π_{i=1}^{n} [ q_i(z)  1 ] [ K  0   ]
                       [ 1       0 ] [ 0  1/K ].

Here the final diagonal matrix follows from the fact that the determinant of a polyphase matrix is one and n is even. Let us slightly rewrite the last equation. First observe that

  [ q(z)  1 ]   [ 1  q(z) ] [ 0  1 ]        [ q(z)  1 ]   [ 0  1 ] [ 1     0 ]
  [ 1     0 ] = [ 0  1    ] [ 1  0 ]  and   [ 1     0 ] = [ 1  0 ] [ q(z)  1 ].     (5)

Using the first equation of (5) in case i is odd and the second in case i is even, and noting that the middle flip matrices cancel pairwise, yields:

  P^0(z) = Π_{i=1}^{n/2} [ 1  q_{2i-1}(z) ] [ 1           0 ] [ K  0   ]
                         [ 0  1           ] [ q_{2i}(z)  1  ] [ 0  1/K ].     (6)

Finally, the original filter g can be recovered by applying Theorem 1. Now we know that the filter g can always be obtained from g^0 with one lifting:

  P(z) = P^0(z) [ 1  s(z) ]
                [ 0  1    ].     (7)

Combining all these observations, we now have shown the following theorem:

Theorem 4. Given a complementary filter pair (h, g), there always exist Laurent polynomials s_i(z) and t_i(z) for 1 ≤ i ≤ m and a non-zero constant K so that

  P(z) = Π_{i=1}^{m} [ 1  s_i(z) ] [ 1        0 ] [ K  0   ]
                     [ 0  1      ] [ t_i(z)  1  ] [ 0  1/K ].

The proof follows from combining (6) and (7), setting m = n/2 + 1, t_m(z) = 0, and s_m(z) = K^2 s(z). In other words, every finite filter wavelet transform can be obtained by starting with the Lazy wavelet, followed by m lifting and dual lifting steps, followed by a scaling. The dual polyphase matrix is given by

  P̃(z) = Π_{i=1}^{m} [ 1             0 ] [ 1  -t_i(z^{-1}) ] [ 1/K  0 ]
                     [ -s_i(z^{-1})  1 ] [ 0  1            ] [ 0    K ].
From this we see that in the orthogonal case (P̃(z) = P(z)) we immediately have two different factorizations. Figs. 7 and 8 represent the different steps of the forward and inverse transform schematically.

7 Examples

We start with a few easy examples. We denote filters either by their canonical names (e.g., Haar), by (N, Ñ), where N (resp. Ñ) is the number of vanishing moments of g̃ (resp. g), or by (l_a - l_s), where l_a is the length of the analysis filter h̃ and l_s is the length of the synthesis filter h. We start with a sequence x = {x_l | l in Z} and denote the result of applying the low-pass filter h̃ (resp. high-pass filter g̃) and downsampling as a sequence s = {s_l | l in Z} (resp. d). The intermediate values computed during lifting we denote with sequences s^{(i)} and d^{(i)}. All transforms are instances of Fig. 7.
Fig. 7. The forward wavelet transform using lifting: first the Lazy wavelet, then alternating lifting and dual lifting steps, and finally a scaling.
Fig. 8. The inverse wavelet transform using lifting: First a scaling, then alternating dual lifting and lifting steps, and finally the inverse Lazy transform. The inverse transform can immediately be derived from the forward by running the scheme backwards.
7.1 Haar wavelets

In the case of (unnormalized) Haar wavelets we have h(z) = 1 + z^{-1}, g(z) = -1/2 + 1/2 z^{-1}, h̃(z) = 1/2 + 1/2 z^{-1}, and g̃(z) = -1 + z^{-1}. Using the Euclidean algorithm we can thus write the polyphase matrix as:

  P(z) = [ 1  -1/2 ]   [ 1  0 ] [ 1  -1/2 ]
         [ 1   1/2 ] = [ 1  1 ] [ 0  1    ].

Thus on the analysis side we have:

  P̃(z^{-1})^T = [ 1  1/2 ] [ 1   0 ]
                [ 0  1   ] [ -1  1 ].

This corresponds to the following implementation of the forward transform:
  s_l^{(0)} = x_{2l}
  d_l^{(0)} = x_{2l+1}
  d_l = d_l^{(0)} - s_l^{(0)}
  s_l = s_l^{(0)} + 1/2 d_l,

while the inverse transform is given by:

  s_l^{(0)} = s_l - 1/2 d_l
  d_l^{(0)} = d_l + s_l^{(0)}
  x_{2l+1} = d_l^{(0)}
  x_{2l} = s_l^{(0)}.

7.2
Givens rotations
Consider the case where the polyphase matrix is a Givens rotation (α ≠ π/2). We then get

  [ cos α  -sin α ]   [ 1              0 ] [ 1  -sin α cos α ] [ cos α  0       ]
  [ sin α   cos α ] = [ sin α / cos α  1 ] [ 0  1            ] [ 0      1/cos α ].

We can also do it without scaling, with three lifting steps (here assuming sin α ≠ 0):

  [ cos α  -sin α ]   [ 1  (cos α - 1)/sin α ] [ 1      0 ] [ 1  (cos α - 1)/sin α ]
  [ sin α   cos α ] = [ 0  1                 ] [ sin α  1 ] [ 0  1                 ].

This corresponds to the well known fact in geometry that a rotation can always be written as three shears. The lattice factorization of [51] allows the decomposition of any orthonormal filter pair into shifts and Givens rotations. It follows that any orthonormal filter can be written as lifting steps, by first writing the lattice factorization and then using the example above. This provides a different proof of Theorem 4 in the orthonormal case.
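Both identities are easy to confirm numerically; a sketch for the three-shear version (the angle is an arbitrary test value):

```python
import math

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def rotation_as_three_shears(alpha):
    """Build the Givens rotation from the three lifting (shear) steps."""
    c, s = math.cos(alpha), math.sin(alpha)
    shear = [[1.0, (c - 1.0) / s], [0.0, 1.0]]   # upper shear, used twice
    lower = [[1.0, 0.0], [s, 1.0]]               # lower shear in between
    return matmul(matmul(shear, lower), shear)
```

The product reproduces [[cos α, -sin α], [sin α, cos α]] exactly, which is the geometric fact about rotations and shears quoted above.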
Scaling
These two examples show that the scaling from Theorem 4 can be replaced with four lifting steps:
[
-- [10
[
°1]
or
Given that one can always merge one of the four lifting steps with the last lifting step from the factorization, only three extra steps are needed to avoid scaling. This is particularly important when building integer to integer wavelet transforms in which case scaling is not invertible [6].
7.4 Interpolating filters

In case the low-pass filter is half band, i.e., h(z) + h(-z) = 2, the corresponding scaling function is interpolating. Since h_e(z) = 1, the factorization can be done in two steps:

  P(z) = [ 1       g_e(z)             ]   [ 1       0 ] [ 1  g_e(z) ]
         [ h_o(z)  1 + h_o(z) g_e(z)  ] = [ h_o(z)  1 ] [ 0  1      ].
The filters constructed in [44] are of this type. This gives rise to a family of (N, Ñ) (N and Ñ even) symmetric biorthogonal wavelets built from the Deslauriers-Dubuc scaling functions mentioned in the introduction. The degrees of the filters are |h_o| = N - 1 and |g_e| = Ñ - 1. In case Ñ ≤ N, these are particularly easy, as g_e^{(Ñ)}(z) = -1/2 h_o^{(Ñ)}(z^{-1}). (Beware: the normalization used here is different from the one in [44].) Next we look at some examples that had not been decomposed into lifting steps before.

7.5 4-tap orthonormal filter with two vanishing moments (D4)

Here the h and g filters are given by [16]:

  h(z) = h_0 + h_1 z^{-1} + h_2 z^{-2} + h_3 z^{-3}
  g(z) = -h_3 z^2 + h_2 z - h_1 + h_0 z^{-1},
with

  h_0 = (1 + √3)/(4√2),  h_1 = (3 + √3)/(4√2),  h_2 = (3 - √3)/(4√2),  and  h_3 = (1 - √3)/(4√2).
The polyphase matrix is

  P(z) = P̃(z) = [ h_0 + h_2 z^{-1}  -h_3 z - h_1 ]
                 [ h_1 + h_3 z^{-1}   h_2 z + h_0 ],     (8)
and the factorization is given by:

  P(z) = [ 1  -√3 ] [ 1                          0 ] [ 1  z ] [ (√3 + 1)/√2  0           ]
         [ 0  1   ] [ √3/4 + (√3 - 2)/4 z^{-1}  1 ] [ 0  1 ] [ 0            (√3 - 1)/√2 ].     (9)

As we pointed out in Section 6, we have two options. Because the polyphase matrix is unitary, we can use (9) as a factorization for either P(z) or P̃(z). In the latter case the analysis polyphase matrix is factored as:

  P̃(z^{-1})^T = [ (√3 + 1)/√2  0           ] [ 1       0 ] [ 1  √3/4 + (√3 - 2)/4 z ] [ 1    0 ]
                [ 0            (√3 - 1)/√2 ] [ z^{-1}  1 ] [ 0  1                   ] [ -√3  1 ].
This corresponds to the following implementation for the forward transform:

  d_l^{(1)} = x_{2l+1} - √3 x_{2l}
  s_l^{(1)} = x_{2l} + √3/4 d_l^{(1)} + (√3 - 2)/4 d_{l+1}^{(1)}
  d_l^{(2)} = d_l^{(1)} + s_{l-1}^{(1)}
  s_l = (√3 + 1)/√2 s_l^{(1)}
  d_l = (√3 - 1)/√2 d_l^{(2)},
while the inverse transform follows from reversing the operations and flipping the signs:

  d_l^{(2)} = (√3 + 1)/√2 d_l
  s_l^{(1)} = (√3 - 1)/√2 s_l
  d_l^{(1)} = d_l^{(2)} - s_{l-1}^{(1)}
  x_{2l} = s_l^{(1)} - √3/4 d_l^{(1)} - (√3 - 2)/4 d_{l+1}^{(1)}
  x_{2l+1} = d_l^{(1)} + √3 x_{2l}.

The other option is to use (9) as a factorization for P(z). The analysis polyphase matrix then is factored as:
  P̃(z^{-1})^T = P(z)^{-1} = [ (√3 - 1)/√2  0           ] [ 1  -z ] [ 1                            0 ] [ 1  √3 ]
                            [ 0            (√3 + 1)/√2 ] [ 0  1  ] [ -√3/4 - (√3 - 2)/4 z^{-1}  1 ] [ 0  1  ],

and leads to the following implementation of the forward transform:

  s_l^{(1)} = x_{2l} + √3 x_{2l+1}
  d_l^{(1)} = x_{2l+1} - √3/4 s_l^{(1)} - (√3 - 2)/4 s_{l-1}^{(1)}
  s_l^{(2)} = s_l^{(1)} - d_{l+1}^{(1)}
  s_l = (√3 - 1)/√2 s_l^{(2)}
  d_l = (√3 + 1)/√2 d_l^{(1)}.

Given that the inverse transform always follows immediately from the forward transform, from now on we only give the forward transform. One can also obtain an entirely different lifting factorization of D4 by shifting the filter pair, corresponding to:

  h(z) = h_0 z + h_1 + h_2 z^{-1} + h_3 z^{-2}
  g(z) = h_3 z - h_2 + h_1 z^{-1} - h_0 z^{-2},
with

  P(z) = P̃(z) = [ h_1 + h_3 z^{-1}  -h_2 - h_0 z^{-1} ]
                 [ h_0 z + h_2        h_3 z + h_1      ]

as polyphase matrix. This leads to a different factorization:

  P̃(z) = [ 1  -1/√3 z^{-1} ] [ 1                        0 ] [ 1  -1/3 ] [ (3 + √3)/(3√2)  0           ]
          [ 0  1            ] [ (6 - 3√3)/4 + √3/4 z  1   ] [ 0  1    ] [ 0               (3 - √3)/√2 ],

and corresponds to the following implementation:

  d_l^{(1)} = x_{2l+1} - 1/√3 x_{2l+2}
  s_l^{(1)} = x_{2l} + (6 - 3√3)/4 d_l^{(1)} + √3/4 d_{l-1}^{(1)}
  d_l^{(2)} = d_l^{(1)} - 1/3 s_l^{(1)}
  s_l = (3 + √3)/(3√2) s_l^{(1)}
  d_l = (3 - √3)/√2 d_l^{(2)}.
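Any of these D4 factorizations can be exercised numerically. A sketch of the first implementation, with periodic boundaries as an illustrative assumption (the text leaves boundary handling open):

```python
import math

S3, S2 = math.sqrt(3.0), math.sqrt(2.0)

def d4_forward(x):
    """Forward D4 transform via the first lifting implementation above."""
    n = len(x) // 2
    xe, xo = x[0::2], x[1::2]
    d1 = [xo[l] - S3 * xe[l] for l in range(n)]
    s1 = [xe[l] + S3 / 4 * d1[l] + (S3 - 2) / 4 * d1[(l + 1) % n]
          for l in range(n)]
    d2 = [d1[l] + s1[(l - 1) % n] for l in range(n)]
    return [(S3 + 1) / S2 * v for v in s1], [(S3 - 1) / S2 * v for v in d2]

def d4_inverse(s, d):
    """Reverse the operations and flip the signs."""
    n = len(s)
    d2 = [(S3 + 1) / S2 * v for v in d]   # inverse of scaling by (sqrt3-1)/sqrt2
    s1 = [(S3 - 1) / S2 * v for v in s]   # inverse of scaling by (sqrt3+1)/sqrt2
    d1 = [d2[l] - s1[(l - 1) % n] for l in range(n)]
    xe = [s1[l] - S3 / 4 * d1[l] - (S3 - 2) / 4 * d1[(l + 1) % n]
          for l in range(n)]
    xo = [d1[l] + S3 * xe[l] for l in range(n)]
    x = [0.0] * (2 * n)
    x[0::2], x[1::2] = xe, xo
    return x
```

Note how each inverse step is obtained purely mechanically from the corresponding forward step, which is the structural advantage of lifting.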
d, = (3 - v/-3)/(3vf2) 42). This second factorization can also be obtained as the result of seeking a factorization of the original polyphase matrix (8) where the final diagonal matrix has (non-constant) monomial entries. 7.6
6-tap orthonormal filter with three vanishing moments (D6)
Here we have
3
h(~) = ~
h~ z -~,
k-=- 2
with [16] h_2 = ~/2 (1 + V~-0+ ~/5 + 2 vrio)/32 h_l : v/2 (5 + v ~ + 3 ~/5 + 2 v/l-0)/32 ho = v~ (10 - 2 ~T0 + 2 V/5 + 2 v~-0)/32 hi : v~ (10 - 2 v / ~ - 2 ~5 + 2 v/~)/32 h2 : v~ (5 + v~-0- 3 ~5 + 2 ~/~)/32
The polyphase components are h e ( z ) = h - 2 z "4- ho + h2 z - 1
ge(z) = - h a z - h i - h - 1 z - 1 h o ( z ) = h - 1 z + hi + h3 z - 1 go(Z) = h2 z + ho + h - 2 z - 1 .
In the factorization algorithm the coefficients of the remainders are calculated as:

  r_0 = h_{-1} - h_3 h_{-2}/h_2
  r_1 = h_1 - h_3 h_0/h_2
  s_1 = h_0 - h_{-2} r_1/r_0 - h_2 r_0/r_1
  t = -h_3/h_{-2} s_1^2.

If we now let

  α  = h_3/h_2    ≈ -0.4122865950
  β  = h_2/r_1    ≈ -1.5651362796
  β' = h_{-2}/r_0 ≈ 0.3523876576
  γ  = r_1/s_1    ≈ 0.0284590896
  γ' = r_0/s_1    ≈ 0.4921518449
  δ  = t          ≈ -0.3896203900
  ζ  = s_1        ≈ 1.9182029462,

then the factorization is given by:

  P(z) = [ 1  0 ] [ 1  β z^{-1} + β' ] [ 1          0 ] [ 1  δ ] [ ζ  0   ]
         [ α  1 ] [ 0  1             ] [ γ + γ' z  1  ] [ 0  1 ] [ 0  1/ζ ].

We leave the implementation of this filter as an exercise for the reader.

7.7
(9-7) filter
Here we consider the popular (9-7) filter pair. The analysis filter h̃ has 9 coefficients, while the synthesis filter h has 7 coefficients. Both high-pass filters g and g̃ have 4 vanishing moments. We choose the filter with 7 coefficients to be the synthesis filter because it gives rise to a smoother scaling function than the 9-coefficient one (see [17, p. 279, Table 8.3]; note that the coefficients need to be multiplied with √2). For this example we run the factoring algorithm starting from the analysis filter:

  h_e(z) = h_4 (z^2 + z^{-2}) + h_2 (z + z^{-1}) + h_0  and  h_o(z) = h_3 (z^2 + z^{-1}) + h_1 (z + 1).

The coefficients of the remainders are computed as:

  r_0 = h_0 - 2 h_4 h_1/h_3
  r_1 = h_2 - h_4 - h_4 h_1/h_3
  s_0 = h_1 - h_3 - h_3 r_0/r_1
  t_0 = r_0 - 2 r_1.
Then define

  α = h_4/h_3  ≈ -1.586134342
  β = h_3/r_1  ≈ -0.05298011854
  γ = r_1/s_0  ≈ 0.8829110762
  δ = s_0/t_0  ≈ 0.4435068522
  ζ = t_0 = r_0 - 2 r_1  ≈ 1.149604398.

Now

  P̃(z) = [ 1  α (1 + z^{-1}) ] [ 1           0 ] [ 1  γ (1 + z^{-1}) ] [ 1           0 ] [ ζ  0   ]
          [ 0  1              ] [ β (1 + z)  1  ] [ 0  1              ] [ δ (1 + z)  1  ] [ 0  1/ζ ].
Note that here, too, many other factorizations exist; the one we chose is symmetric: every quotient is a multiple of 1 + z or 1 + z^{-1}. This shows how we can take advantage of the non-uniqueness to maintain symmetry. The factorization leads to the following implementation:

  s_l^{(0)} = x_{2l}
  d_l^{(0)} = x_{2l+1}
  d_l^{(1)} = d_l^{(0)} + α (s_l^{(0)} + s_{l+1}^{(0)})
  s_l^{(1)} = s_l^{(0)} + β (d_l^{(1)} + d_{l-1}^{(1)})
  d_l^{(2)} = d_l^{(1)} + γ (s_l^{(1)} + s_{l+1}^{(1)})
  s_l^{(2)} = s_l^{(1)} + δ (d_l^{(2)} + d_{l-1}^{(2)})
  s_l = ζ s_l^{(2)}
  d_l = d_l^{(2)} / ζ.

7.8
Cubic B-splines
We finish with an example that is used frequently in computer graphics: the (4,2) biorthogonal filter from [12]. The scaling function here is a cubic B-spline. This example can be obtained again by using the factoring algorithm. However, there is also a much more intuitive construction in the spatial domain [46]. The filters are given by

  h(z) = 3/4 + 1/2 (z + z^{-1}) + 1/8 (z^2 + z^{-2})
  g(z) = 5/4 z^{-1} - 5/32 (1 + z^{-2}) - 3/8 (z + z^{-3}) - 3/32 (z^2 + z^{-4}),

and the factorization reads:

  P(z) = [ 1  1/4 (1 + z^{-1}) ] [ 1      0 ] [ 1  -3/16 (1 + z^{-1}) ] [ 1/2  0 ]
         [ 0  1                ] [ 1 + z  1 ] [ 0  1                  ] [ 0    2 ].

8 Computational complexity
In this section we take a closer look at the computational complexity of the wavelet transform computed using lifting. As a point of comparison we use the standard algorithm, which corresponds to applying the polyphase matrix. This already takes advantage of the fact that the filter outputs will be subsampled, and thus avoids computing samples that would be discarded immediately. The unit we use is the cost, measured in number of multiplications and additions, of computing one coefficient pair (s_l, d_l). The cost of applying a filter h is |h| + 1 multiplications and |h| additions. The cost of the standard algorithm thus is 2(|h| + |g|) + 2. If the filter is symmetric and |h| is even, the cost is 3|h|/2 + 1. Let us consider a general case not involving symmetry. Take |h| = 2N, |g| = 2M, and assume M ≥ N. The cost of the standard algorithm now is 4(N + M) + 2. Without loss of generality we can assume that |h_e| = N, |h_o| = N - 1, |g_e| = M, and |g_o| = M - 1. In general the Euclidean algorithm started from the (h_e, h_o) pair now needs N steps with the degree of each quotient equal to one (|q_i| = 1 for 1 ≤ i ≤ N). To get the (g_e, g_o) pair, one extra lifting step (7) is needed with |s| = M - N. The total cost of the lifting algorithm is:

  scaling:             2
  N lifting steps:     4N
  final lifting step:  2(M - N + 1)
  total:               2(N + M + 2)
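The tally above can be sketched as a quick calculation. This is a hypothetical helper (names are ours), assuming |h| = 2N and |g| = 2M with M ≥ N, and using the relative speedup S/L - 1 as defined in the text:

```python
# Operation counts per coefficient pair, as derived above.
def standard_cost(N, M):
    # standard (polyphase) algorithm: 4(N + M) + 2
    return 4 * (N + M) + 2

def lifting_cost(N, M):
    # scaling (2) + N lifting steps (4N) + final lifting step 2(M - N + 1)
    return 2 + 4 * N + 2 * (M - N + 1)   # simplifies to 2(N + M + 2)

def speedup(N, M):
    # relative speedup S/L - 1; tends to 1 (i.e., 100%) for long filters
    return standard_cost(N, M) / lifting_cost(N, M) - 1
```

For large N = M the ratio S/L approaches 2, which is the content of Theorem 5 below.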
We have shown the following:

Theorem 5. Asymptotically, for long filters, the cost of the lifting algorithm for computing the wavelet transform is one half of the cost of the standard algorithm.

In the above reasoning we assumed that the Euclidean algorithm needs exactly N steps with each quotient of degree one. In a particular situation the Euclidean algorithm might need fewer than N steps but with larger quotients. The interpolating filters form an extreme case; with two steps one can build arbitrarily long filters. However, in this case Theorem 5 holds as well; the cost of the standard algorithm is 3(N + Ñ) - 2 while the cost of the lifting algorithm is 3/2 (N + Ñ). Of course, in any particular case the numbers can differ slightly. Table 1 gives the cost S of the standard algorithm, the cost L of the lifting algorithm, and the relative speedup (S/L - 1) for the examples in the previous section. One has to be careful with this comparison. Even though it is widely used, the standard algorithm is not necessarily the best way to implement the wavelet transform. Lifting is only one idea in a whole tool bag of methods to improve the speed of a fast wavelet transform. Rioul and Duhamel [39] discuss several other schemes to improve the standard algorithm. In the case of long filters, they suggest an FFT-based scheme known as the Vetterli algorithm [56]. In the case of short filters, they suggest a "fast running FIR" algorithm [54]. How these ideas combine with the idea of using lifting, and which combination will be optimal
Table 1. Computational cost of lifting versus the standard algorithm. Asymptotically the lifting algorithm is twice as fast as the standard algorithm.

  Wavelet                 Standard        Lifting        Speedup
  Haar                    3               3              0%
  D4                      14              9              56%
  D6                      22              14             57%
  (9-7)                   23              14             64%
  (4,2) B-spline          17              10             70%
  (N, Ñ) interpolating    3(N + Ñ) - 2    3/2 (N + Ñ)    ≈ 100%
  |h| = 2N, |g| = 2M      4(N + M) + 2    2(N + M + 2)   ≈ 100%
for a certain wavelet goes beyond the scope of this paper and remains a topic of future research.

9 Conclusion and Comments
In this tutorial presentation, we have shown how every wavelet filter pair can be decomposed into lifting steps. The decomposition amounts to writing arbitrary elements of the ring SL(2; R[z, z^{-1}]) as products of elementary matrices, something that has been known to be possible for a long time [2]. The following are a few comments on the decomposition and its usefulness. First of all, the decomposition of arbitrary wavelet transforms into lifting steps implies that we gain, for all wavelet transforms, the traditional advantages of lifting implementations, i.e.:

1. Lifting leads to a speed-up when compared to the standard implementation.
2. Lifting allows for an in-place implementation of the fast wavelet transform, a feature similar to the Fast Fourier Transform. This means the wavelet transform can be calculated without allocating auxiliary memory.
3. All operations within one lifting step can be done entirely in parallel; the only sequential part is the order of the lifting operations.
4. Using lifting it is particularly easy to build nonlinear wavelet transforms. A typical example is wavelet transforms that map integers to integers [6]. Such transforms are important for hardware implementation and for lossless image coding.
5. Using lifting and integer-to-integer transforms, it is possible to combine biorthogonal wavelets with scalar quantization and still keep cubic quantization cells, which are optimal as in the orthogonal case. In a multiple description setting, it has been shown that this generalization to biorthogonality allows for substantial improvements [58].
6. Lifting allows for adaptive wavelet transforms. This means one can start the analysis of a function from the coarsest levels and then build the finer levels by refining only in the areas of interest; see [40] for a practical example.
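Point 4 can be illustrated with the simplest integer-to-integer lifting pair, a Haar-like "S transform" (a minimal sketch in the spirit of [6], not that paper's construction verbatim; names are ours). The floor truncation does not break invertibility, because the inverse subtracts exactly what the forward step added:

```python
def haar_int_forward(x):
    """Integer-to-integer Haar step via lifting; len(x) must be even."""
    e, o = x[0::2], x[1::2]
    d = [oo - ee for ee, oo in zip(e, o)]        # predict odd from even
    s = [ee + (dd >> 1) for ee, dd in zip(e, d)]  # update: truncated average
    return s, d

def haar_int_inverse(s, d):
    """Exact inverse: undo the update, then the predict step."""
    e = [ss - (dd >> 1) for ss, dd in zip(s, d)]
    o = [ee + dd for ee, dd in zip(e, d)]
    x = [0] * (2 * len(s))
    x[0::2], x[1::2] = e, o
    return x
```

In Python, `>> 1` is a floor division by 2 (also for negative integers), so the round trip is exact on arbitrary integer data.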
The decomposition in this paper also suggests the following comments and raises a few open questions:

1. Factoring into lifting steps is a highly non-unique process. We do not know exactly how many essentially different factorizations are possible, how they differ, and what a good strategy is for picking the "best" one; this is an interesting topic for future research.
2. The main result of this paper also holds in case the filter coefficients are not necessarily real, but belong to any field such as the rationals, the complex numbers, or even a finite field. However, the Euclidean algorithm does not work when the filter coefficients themselves belong to a ring such as the integers or the dyadic numbers. It is thus not guaranteed that filters with binary coefficients can be factored into lifting steps with binary filter coefficients.
3. In this paper we never concerned ourselves with whether filters were causal, i.e., only had filter coefficients for k ≥ 0. Given that all subband filters here are finite, causality can always be obtained by shifting the filters. Obviously, if both analysis and synthesis filters have to be causal, perfect reconstruction is only possible up to a shift. By executing the Euclidean algorithm over the ring of polynomials, as opposed to the ring of Laurent polynomials, it can be assured that all lifting steps are causal as well.
4. The long division used in the Euclidean algorithm guarantees that, except for at most one quotient of degree 0, all the quotients will be at least of degree 1, and the lifting filters thus contain at least 2 coefficients. In some cases, e.g., hardware implementations, it might be useful to use only lifting filters with at most 2 coefficients. Then, in each lifting step, an even location will only get information from its two immediate odd neighbors, or vice versa. Such lifting steps can be obtained by not using a full long division, but rather stopping the division as soon as the quotient has degree one. The algorithm is still guaranteed to terminate, as the degree of the polyphase components will decrease by exactly 1 in each step. We are then guaranteed to be in the setting used to sketch the proof of Theorem 5.
5. In the beginning of this paper, we pointed out how lifting is related to the multiscale transforms and the associated stability analysis developed by Wolfgang Dahmen and co-workers. Although their setting looks more general than lifting, since it allows for a non-identity operator K on the diagonal of the polyphase matrix while lifting requires identities on the diagonal, this paper shows that, in the first generation or time invariant setting, no generality is lost by restricting oneself to lifting. Indeed, any invertible polyphase matrix with a non-identity polynomial K(z) on the diagonal can be obtained using lifting. Note that some of the advantages of lifting mentioned above rely fundamentally on K = I and disappear when allowing a general K.
6. This factorization generalizes to the M-band setting. It is known that an M × M polyphase matrix with elements in a Euclidean domain and with determinant one can be reduced to an identity matrix using elementary row and column operations; see [24, Theorem 7.10]. This reduction, also known as the Smith normal form, allows for lifting factorizations in the M-band case.
In [48] the discussion of the decomposition into ladder steps (which is the analog, in different notation, of what we have called here the factorization into lifting steps) is carried out for the general M-band case; please check this paper for details and applications.

7. Finally, under certain conditions it is possible to construct ladder-like structures in higher dimensions using factoring of multivariate polynomials. For details, we refer to [37].

Acknowledgments

The authors would like to thank Peter Schröder and Boon-Lock Yeo for many stimulating discussions and for their help in computing the factorizations in the example section, Jelena Kovačević and Martin Vetterli for drawing their attention to reference [28], Paul Van Dooren for pointing out the connection between the M-band case and the Smith normal form, and Geert Uytterhoeven, Avraham Melkman, Mark Masten, and Paul Abbott for pointing out typos and oversights in an earlier version. Ingrid Daubechies would like to thank NSF (grant DMS-9401785), AFOSR (grant F49620-95-1-0290), ONR (grant N00014-96-1-0367) as well as Lucent Technologies, Bell Laboratories for partial support while conducting the research for this paper. Wim Sweldens is on leave as Senior Research Assistant of the Belgian Fund of Scientific Research (NFWO).
References

1. A. Aldroubi and M. Unser. Families of multiresolution and wavelet spaces with optimal properties. Numer. Funct. Anal. Optim., 14:417-446, 1993.
2. H. Bass. Algebraic K-theory. W. A. Benjamin, Inc., New York, 1968.
3. M. G. Bellanger and J. L. Daguet. TDM-FDM transmultiplexer: Digital polyphase and FFT. IEEE Trans. Commun., 22(9):1199-1204, 1974.
4. R. E. Blahut. Fast Algorithms for Digital Signal Processing. Addison-Wesley, Reading, MA, 1984.
5. A. A. M. L. Bruekens and A. W. M. van den Enden. New networks for perfect inversion and perfect reconstruction. IEEE J. Selected Areas Commun., 10(1), 1992.
6. R. Calderbank, I. Daubechies, W. Sweldens, and B.-L. Yeo. Wavelet transforms that map integers to integers. Appl. Comput. Harmon. Anal., 5(3):332-369, 1998.
7. J. M. Carnicer, W. Dahmen, and J. M. Peña. Local decompositions of refinable spaces. Appl. Comput. Harmon. Anal., 3:127-153, 1996.
8. C. K. Chui. An Introduction to Wavelets. Academic Press, San Diego, CA, 1992.
9. C. K. Chui, L. Montefusco, and L. Puccio, editors. Conference on Wavelets: Theory, Algorithms, and Applications. Academic Press, San Diego, CA, 1994.
10. C. K. Chui and J. Z. Wang. A cardinal spline approach to wavelets. Proc. Amer. Math. Soc., 113:785-793, 1991.
11. C. K. Chui and J. Z. Wang. A general framework of compactly supported splines and wavelets. J. Approx. Theory, 71(3):263-304, 1992.
12. A. Cohen, I. Daubechies, and J. Feauveau. Bi-orthogonal bases of compactly supported wavelets. Comm. Pure Appl. Math., 45:485-560, 1992.
13. J. M. Combes, A. Grossmann, and Ph. Tchamitchian, editors. Wavelets: Time-Frequency Methods and Phase Space. Inverse Problems and Theoretical Imaging. Springer-Verlag, New York, 1989.
14. W. Dahmen and C. A. Micchelli. Banded matrices with banded inverses II: Locally finite decompositions of spline spaces. Constr. Approx., 9(2-3):263-281, 1993.
15. W. Dahmen, S. Prössdorf, and R. Schneider. Multiscale methods for pseudodifferential equations on smooth manifolds. In [9], pages 385-424, 1994.
16. I. Daubechies. Orthonormal bases of compactly supported wavelets. Comm. Pure Appl. Math., 41:909-996, 1988.
17. I. Daubechies. Ten Lectures on Wavelets. CBMS-NSF Regional Conf. Series in Appl. Math., Vol. 61. Society for Industrial and Applied Mathematics, Philadelphia, PA, 1992.
18. I. Daubechies, A. Grossmann, and Y. Meyer. Painless nonorthogonal expansions. J. Math. Phys., 27(5):1271-1283, 1986.
19. D. L. Donoho. Interpolating wavelet transforms. Preprint, Department of Statistics, Stanford University, 1992.
20. R. E. Van Dyck, T. G. Marshall, M. Chine, and N. Moayeri. Wavelet video coding with ladder structures and entropy-constrained quantization. IEEE Trans. Circuits Systems Video Tech., 6(5):483-495, 1996.
21. M. Frazier and B. Jawerth. Decomposition of Besov spaces. Indiana Univ. Math. J., 34(4):777-799, 1985.
22. A. Grossmann and J. Morlet. Decomposition of Hardy functions into square integrable wavelets of constant shape. SIAM J. Math. Anal., 15(4):723-736, 1984.
23. A. Harten. Multiresolution representation of data: A general framework. SIAM J. Numer. Anal., 33(3):1205-1256, 1996.
24. B. Hartley and T. O. Hawkes. Rings, Modules and Linear Algebra. Chapman and Hall, New York, 1983.
25. C. Herley and M. Vetterli. Wavelets and recursive filter banks. IEEE Trans. Signal Process., 41(8):2536-2556, 1993.
26. A. K. Jain. Fundamentals of Digital Image Processing. Prentice Hall, 1989.
27. N. S. Jayant and P. Noll. Digital Coding of Waveforms. Prentice Hall, Englewood Cliffs, NJ, 1984.
28. T. A. C. M. Kalker and I. Shah. Ladder structures for multidimensional linear phase perfect reconstruction filter banks and wavelets. In Proceedings of the SPIE Conference on Visual Communications and Image Processing (Boston), pages 1220, 1992.
29. M. Lounsbery, T. D. DeRose, and J. Warren. Multiresolution surfaces of arbitrary topological type. ACM Trans. on Graphics, 16(1):34-73, 1997.
30. S. G. Mallat. Multifrequency channel decompositions of images and wavelet models. IEEE Trans. Acoust. Speech Signal Process., 37(12):2091-2110, 1989.
31. S. G. Mallat. Multiresolution approximations and wavelet orthonormal bases of L2(R). Trans. Amer. Math. Soc., 315(1):69-87, 1989.
32. T. G. Marshall. A fast wavelet transform based upon the Euclidean algorithm. In Conference on Information Science and Systems, Johns Hopkins, MD, 1993.
33. T. G. Marshall. U-L block-triangular matrix and ladder realizations of subband coders. In Proc. IEEE ICASSP, volume III, pages 177-180, 1993.
34. Y. Meyer. Ondelettes et Opérateurs, I: Ondelettes, II: Opérateurs de Calderón-Zygmund, III: (with R. Coifman), Opérateurs multilinéaires. Hermann, Paris,
1990. English translation of the first volume, Wavelets and Operators, published by Cambridge University Press, 1993.
35. F. Mintzer. Filters for distortion-free two-band multirate filter banks. IEEE Trans. Acoust. Speech Signal Process., 33:626-630, 1985.
36. T. Q. Nguyen and P. P. Vaidyanathan. Two-channel perfect-reconstruction FIR QMF structures which yield linear-phase analysis and synthesis filters. IEEE Trans. Acoust. Speech Signal Process., 37:676-690, 1989.
37. H.-J. Park. A computational theory of Laurent polynomial rings and multidimensional FIR systems. PhD thesis, University of California, Berkeley, May 1995.
38. L.-M. Reissell. Wavelet multiresolution representation of curves and surfaces. CVGIP: Graphical Models and Image Processing, 58(2):198-217, 1996.
39. O. Rioul and P. Duhamel. Fast algorithms for discrete and continuous wavelet transforms. IEEE Trans. Inform. Theory, 38(2):569-586, 1992.
40. P. Schröder and W. Sweldens. Spherical wavelets: Efficiently representing functions on the sphere. Computer Graphics Proceedings (SIGGRAPH 95), pages 161-172, 1995.
41. I. Shah and T. A. C. M. Kalker. On ladder structures and linear phase conditions for bi-orthogonal filter banks. In Proceedings of ICASSP-94, volume 3, pages 181-184, 1994.
42. M. J. T. Smith and T. P. Barnwell. Exact reconstruction techniques for tree-structured subband coders. IEEE Trans. Acoust. Speech Signal Process., 34(3):434-441, 1986.
43. G. Strang and T. Nguyen. Wavelets and Filter Banks. Wellesley-Cambridge Press, 1996.
44. W. Sweldens. The lifting scheme: A custom-design construction of biorthogonal wavelets. Appl. Comput. Harmon. Anal., 3(2):186-200, 1996.
45. W. Sweldens. The lifting scheme: A construction of second generation wavelets. SIAM J. Math. Anal., 29(2):511-546, 1997.
46. W. Sweldens and P. Schröder. Building your own wavelets at home. In Wavelets in Computer Graphics, pages 15-87. ACM SIGGRAPH Course Notes, 1996.
47. J. Tian and R. O. Wells. Vanishing moments and biorthogonal wavelet systems. In Mathematics in Signal Processing IV. Institute of Mathematics and Its Applications Conference Series, Oxford University Press, 1996.
48. L. M. G. Tolhuizen, H. D. L. Hollmann, and T. A. C. M. Kalker. On the realizability of bi-orthogonal M-dimensional 2-band filter banks. IEEE Transactions on Signal Processing, 1995.
49. M. Unser, A. Aldroubi, and M. Eden. A family of polynomial spline wavelet transforms. Signal Process., 30:141-162, 1993.
50. P. P. Vaidyanathan. Theory and design of M-channel maximally decimated quadrature mirror filters with arbitrary M, having perfect reconstruction property. IEEE Trans. Acoust. Speech Signal Process., 35(2):476-492, 1987.
51. P. P. Vaidyanathan and P.-Q. Hoang. Lattice structures for optimal design and robust implementation of two-band perfect reconstruction QMF banks. IEEE Trans. Acoust. Speech Signal Process., 36:81-94, 1988.
52. P. P. Vaidyanathan, T. Q. Nguyen, Z. Doganata, and T. Saramäki. Improved technique for design of perfect reconstruction FIR QMF banks with lossless polyphase matrices. IEEE Trans. Acoust. Speech Signal Process., 37(7):1042-1055, 1989.
53. M. Vetterli. Filter banks allowing perfect reconstruction. Signal Process., 10:219-244, 1986.
54. M. Vetterli. Running FIR and IIR filtering using multirate filter banks. IEEE Trans. Signal Process., 36:730-738, 1988.
55. M. Vetterli and D. Le Gall. Perfect reconstruction FIR filter banks: Some properties and factorizations. IEEE Trans. Acoust. Speech Signal Process., 37:1057-1071, 1989.
56. M. Vetterli and C. Herley. Wavelets and filter banks: Theory and design. IEEE Trans. Signal Process., 40(9):2207-2232, 1992.
57. M. Vetterli and J. Kovačević. Wavelets and Subband Coding. Prentice Hall, Englewood Cliffs, NJ, 1995.
58. Y. Wang, M. Orchard, A. Reibman, and V. Vaishampayan. Redundancy rate-distortion analysis of multiple description coding using pairwise correlation transforms. In Proc. IEEE ICIP, volume I, pages 608-611, 1997.
59. J. W. Woods and S. D. O'Neil. Subband coding of images. IEEE Trans. Acoust. Speech Signal Process., 34(5):1278-1288, 1986.
Spherical Wavelets: Efficiently Representing Functions on a Sphere*

Peter Schröder¹ and Wim Sweldens²

¹ Department of Computer Science, California Institute of Technology, Pasadena, CA 91125, U.S.A. ps@cs.caltech.edu
² Bell Laboratories, Lucent Technologies, Murray Hill, NJ 07974, U.S.A. wim@bell-labs.com
1 Introduction

1.1 Wavelets
Over the last decade wavelets have become an exceedingly powerful and flexible tool for computations and data reduction. They offer both theoretical characterization of smoothness, insights into the structure of functions and operators, and practical numerical tools which lead to faster computational algorithms. Examples of their use in computer graphics include surface and volume illumination computations [17, 31], curve and surface modeling [18], and animation [20], among others. Given the high computational demands and the quest for speed in computer graphics, the increasing exploitation of wavelets comes as no surprise. While computer graphics applications can benefit greatly from wavelets, these applications also provide new challenges to the underlying wavelet technology. One such challenge is the construction of wavelets on general domains as they appear in graphics applications and in geosciences. Classically, wavelet constructions have been employed on infinite domains (such as the real line R and plane R²). Since most practical computations are confined to finite domains, a number of boundary constructions have also been developed [6]. However, wavelet-type constructions for more general manifolds have only recently been attempted and are still in their infancy. Our work is inspired by the groundbreaking work of Lounsbery et al. [22, 21] (hereafter referred to as LDW). While their primary goal was to efficiently represent surfaces themselves, we examine the case of efficiently representing functions defined on a surface, and in particular the case of the sphere. Although the sphere appears to be a simple manifold, techniques from R² do not easily extend to the sphere. Wavelets are no exception. The first construction of wavelets on the sphere was introduced by Dahlke et al. [7] using a tensor product basis where one factor is an exponential spline. To our knowledge a computer implementation of this basis does not exist at this moment. A continuous wavelet transform and its semi-discretization were proposed in [14]. Both these approaches make use of a (θ, φ) parametrization of the sphere. This is the main difference with our method, which is parametrization independent. Aside from being of theoretical interest, a wavelet construction for the sphere that leads to efficient algorithms has practical applications, since many computational problems are naturally stated on the sphere. Examples from computer graphics include: manipulation and display of earth and planetary data such as topography and remote sensing imagery, simulation and modeling of bidirectional reflection distribution functions, illumination algorithms, and the modeling and processing of directional information such as environment maps and view spheres. In this paper we describe a simple technique for constructing biorthogonal wavelets on the sphere with customized properties. The construction is an instance of a fairly general scheme referred to as the lifting scheme [29, 30]. The outline of the paper is as follows. We first give a brief review of applications and previous work in computer graphics involving functions on the sphere. This is followed by a discussion of wavelets on the sphere. In Section 3 we explain the basic machinery of lifting and the fast wavelet transform. After a section on implementation, we report on simulations and conclude with a discussion and suggestions for further research.

* P. Schröder and W. Sweldens. From Computer Graphics Proceedings, 1995, 161-172, ACM SIGGRAPH. Reprinted by permission of ACM.
Fig. 1. The geodesic sphere construction starting with the icosahedron on the left (subdivision level 0) and the next 2 subdivision levels.
1.2 Representing Functions on the Sphere
Geographical information systems have long had a need to represent sampled data on the sphere. A number of basic data structures originated here. Dutton [11] proposed the use of a geodesic sphere construction to model planetary relief, see Fig. 1 for a picture of the underlying subdivision. More recently, Fekete [13] described the use of such a structure for rendering and managing spherical geographic data. By using hierarchical subdivision data structures these workers naturally built sparse adaptive representations. There also exist many non-hierarchical interpolation methods on the sphere (for an overview see [24]).
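The sizes involved in the geodesic construction of Fig. 1 are easy to tabulate. This is our own illustration, not from the text: each subdivision level splits every triangle into four, so starting from the icosahedron the face count is 20·4^j, and the edge and vertex counts follow from Euler's formula V - E + F = 2.

```python
def geodesic_counts(level):
    """Vertex, edge, and face counts of the geodesic sphere at a given
    subdivision level, starting from the icosahedron (level 0)."""
    f = 20 * 4 ** level       # each level quadruples the face count
    e = 30 * 4 ** level       # edges quadruple as well
    v = 10 * 4 ** level + 2   # from Euler's formula V - E + F = 2
    assert v - e + f == 2     # sanity check: the surface is a topological sphere
    return v, e, f
```

The roughly fourfold growth of vertices per level is what makes hierarchical, adaptive representations on such meshes attractive.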
An important example from computer graphics concerns the representation of functions defined over a set of directions. Perhaps the most notable in this category are bi-directional reflectance distribution functions (BRDFs) and radiance. The BRDF, f_r(ω_i, x, ω_o), describes the relationship at a point x on a surface between incoming radiance from direction ω_i and outgoing radiance in direction ω_o. It can be described using spherical harmonics, the natural extension of Fourier basis functions to the sphere; see e.g. [32]. These basis functions are globally supported and suffer from some of the same difficulties as Fourier representations on the line, such as ringing. To our knowledge, no fast (FFT-like) algorithm is available for spherical harmonics. Westin et al. [32] used spherical harmonics to model BRDFs derived from Monte Carlo simulations of micro geometry. Noting some of the disadvantages of spherical harmonics, Gondek et al. [16] used a geodesic sphere subdivision construction [11, 13] in a similar context. The result of illumination computations, the radiance L(x, ω), is a function which is defined over all surfaces and all directions. For example, Sillion et al. [28] used spherical harmonics to model the directional distribution of radiance. As in the case of BRDF representations, the disadvantages of using spherical harmonics to represent radiance are due to their global support and high cost of evaluation. Similarly, no locally controlled level of detail can be used. In finite element based illumination computations wavelets have proven to be powerful bases; see e.g. [26, 4]. By either reparameterizing directions over the set of visible surfaces [26], or mapping them to the unit square [4], wavelets defined on standard domains (rectangular patches) were used. Mapping classical wavelets on some parameter domain onto the sphere by use of a parameterization provides one avenue to construct wavelets on the sphere. However, this approach suffers from distortions and difficulties due to the fact that no globally smooth parameterization of the sphere exists. The resulting wavelets are in some sense "contaminated" by the parameterization. We will examine the difficulties due to an underlying parameterization, as opposed to an intrinsic construction, when we discuss our construction. We first give a simple example relating the compression of surfaces to the compression of functions defined on surfaces.
Fig. 2. Recursive subdivision of the octahedral base shape as used by LDW for sphere-like surfaces. Level 0 is shown on the left, followed by levels 2 and 4.
1.3 An Example
LDW constructs wavelets for surfaces of arbitrary topological type which are parameterized over a polyhedral base complex. For the case of the sphere they employed an octahedral subdivision domain (see Fig. 2). In this framework a given goal surface, such as the earth, is parameterized over an octahedron whose triangular faces are successively subdivided into four smaller triangles. Each vertex can then be displaced radially to the limit surface. The resulting sequence of surfaces represents the multiple levels of detail of the final surface. As pointed out by LDW, compressing surfaces is closely related to compressing functions on surfaces. Consider the case of the unit sphere and the function f(s) = f(θ, φ) = cos² θ with s ∈ S². We can think of the graph of this function as a surface over the sphere whose height (displaced along the normal) is the value of the function f. Hence an algorithm which can compress surfaces can also compress the graph of a scalar function defined over some surface. At this point the domain over which the compression is defined becomes crucial. Suppose we want to use the octahedron O. Define the projection T : O → S², s = T(p) = p/||p||. We then have f̃(p) = f(T(p)) with p ∈ O. Compressing f(s) with wavelets on the sphere is now equivalent to compressing f̃(p) with wavelets defined on the octahedron. While f is simply a quadratic function over the sphere, f̃ is considerably more complicated. For example, a basis over the sphere which can represent quadratics exactly (see Section 3.4) will trivially represent f. The same basis over the octahedron will only be able to approximate f̃. This example shows the importance of incorporating the underlying surface correctly in any construction which attempts to efficiently represent functions defined on that surface. In the case of compression of surfaces themselves one has to assume some canonical domain; in LDW this domain was taken to be a polyhedron. By limiting our program to functions defined on a fixed surface (the sphere) we can custom-tailor the wavelets to it and gain more efficiency. This is one of the main points in which we depart from the construction in LDW.
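The projection and pullback in this example are easy to state in code. A minimal sketch (function names are ours; we take θ as the polar angle, so cos θ is simply the z coordinate of the unit vector, and f(T(p)) = (p_z/||p||)²):

```python
import math

def T(p):
    """Projection T(p) = p / ||p|| from the octahedron to the unit sphere."""
    r = math.sqrt(sum(c * c for c in p))
    return tuple(c / r for c in p)

def f(s):
    """f on the sphere: cos^2 of the polar angle, i.e., the z coordinate squared."""
    return s[2] ** 2

def ftilde(p):
    """The same function pulled back to the octahedron: ftilde(p) = f(T(p))."""
    return f(T(p))
```

Over the sphere f is a quadratic polynomial in the coordinates; composed with the non-polynomial normalization p/||p||, the pullback f̃ is not, which is exactly why a quadratic-reproducing basis on the octahedron can only approximate it.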
Fig. 3. A simple example of refinement on the line. The basis functions at the top can be expressed as linear combinations of the refined functions at the bottom.
2 Wavelets on the Sphere

2.1 Second Generation Wavelets
Wavelets are basis functions which represent a given function at multiple levels of detail. Due to their local support in both space and frequency, they are suited for sparse approximations of functions. Locality in space follows from their compact support, while locality in frequency follows from their smoothness (decay towards high frequencies) and vanishing moments (decay towards low frequencies). Fast O(n) algorithms exist to calculate wavelet coefficients, making the use of wavelets efficient for many computational problems. In the classic wavelet setting, i.e., on the real line, wavelets are defined as the dyadic translates and dilates of one particular, fixed function. They are typically built with the aid of a scaling function. Scaling functions and wavelets both satisfy refinement relations (or two-scale relations). This means that a scaling function or wavelet at a certain level of resolution (j) can be written as a linear combination of scaling basis functions of the same shape but scaled at one level finer (level j + 1); see Fig. 3 for an example. The basic philosophy behind second generation wavelets is to build wavelets with all desirable properties (localization, fast transform) adapted to much more general settings than the real line, e.g., wavelets on manifolds. In order to consider wavelets on a surface, we need a construction of wavelets which are adapted to a measure on the surface. In the case of the real line (and classical constructions) the measure is dx, the usual translation invariant (Haar) Lebesgue measure. For a sphere we will denote the usual area measure by dω. Adaptive constructions rely on the realization that translation and dilation are not fundamental to obtaining wavelets with the desired properties. The notion that a basis function can be written as a finite linear combination of basis functions at a finer, more subdivided level is maintained and forms the key behind the fast transform.
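The refinement relation illustrated in Fig. 3 can be checked numerically. As a standard example (our illustration, since the figure is not reproduced here), the piecewise linear "hat" scaling function satisfies the two-scale relation φ(x) = ½ φ(2x + 1) + φ(2x) + ½ φ(2x - 1):

```python
def hat(x):
    """Piecewise linear 'hat' scaling function, supported on [-1, 1]."""
    return max(0.0, 1.0 - abs(x))

def refined_hat(x):
    """The same function written as a combination of half-width hats,
    per the two-scale relation phi(x) = 0.5 phi(2x+1) + phi(2x) + 0.5 phi(2x-1)."""
    return 0.5 * hat(2 * x + 1) + hat(2 * x) + 0.5 * hat(2 * x - 1)
```

Evaluating both sides on a grid confirms they agree, which is the property the fast transform exploits.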
The main difference with classical wavelets is that the filter coefficients of second generation wavelets are not the same throughout, but can change locally to reflect the changing (non translation invariant) nature of the surface and its measure. Classical wavelets and the corresponding filters are constructed with the aid of the Fourier transform. The underlying reason is that translation and dilation become algebraic operations after Fourier transform. In the setting of second generation wavelets, translation and dilation can no longer be used, and the Fourier transform thus becomes worthless as a construction tool. An alternative construction is provided by the lifting scheme.
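A tiny example of position-dependent filter coefficients (our own illustration, not the construction used later in this paper): an "unbalanced Haar" step on a non-uniform measure, where the coarsening weights equal the cell measures. The weights differ from pair to pair, yet constants are still annihilated by the detail coefficients:

```python
def unbalanced_haar_step(values, measures):
    """One coarsening step with measure-weighted averages.
    values, measures: lists of equal even length; measures are positive cell sizes."""
    s, d, m = [], [], []
    for k in range(0, len(values), 2):
        a, b = measures[k], measures[k + 1]
        s.append((a * values[k] + b * values[k + 1]) / (a + b))  # weighted average
        d.append(values[k + 1] - values[k])  # detail: difference across the pair
        m.append(a + b)                      # measure of the merged coarse cell
    return s, d, m
```

Because the average uses the local measures a and b, the effective filter coefficients a/(a+b) and b/(a+b) vary with position, exactly the non translation invariant behavior described above, while a constant input still yields zero details.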
2.2 Multiresolution Analysis
We first introduce multiresolution analysis and wavelets and set some notation. For more mathematical detail the reader is referred to [10]. All relationships are summarized in Table 1.
Table 1. Quick reference to the notation and some basic relationships for the case of second generation biorthogonal wavelets.

Functions:
  primal scaling functions: φ_{j,k}
  dual scaling functions:   φ̃_{j,k}
  primal wavelets:          ψ_{j,m}
  dual wavelets:            ψ̃_{j,m}

Biorthogonality relationships:
  ⟨φ_{j,k}, φ̃_{j,k'}⟩ = δ_{k,k'}             (φ_{j,k} and φ̃_{j,k'} are biorthogonal)
  ⟨ψ_{j,m}, ψ̃_{j',m'}⟩ = δ_{m,m'} δ_{j,j'}   (ψ_{j,m} and ψ̃_{j',m'} are biorthogonal)
  ⟨φ_{j,k}, ψ̃_{j,m}⟩ = 0 and ⟨ψ_{j,m}, φ̃_{j,k}⟩ = 0

Vanishing moment relations:
  ψ̃_{j,m} has N vanishing moments  ⇔  φ_{j,k} reproduces polynomials of degree < N
  ψ_{j,m} has Ñ vanishing moments  ⇔  φ̃_{j,k} reproduces polynomials of degree < Ñ

Refinement relations:
  scaling function:       φ_{j,k} = Σ_{l∈K(j+1)} h_{j,k,l} φ_{j+1,l}
  dual scaling function:  φ̃_{j,k} = Σ_{l∈K(j+1)} h̃_{j,k,l} φ̃_{j+1,l}
  wavelet:                ψ_{j,m} = Σ_{l∈K(j+1)} g_{j,m,l} φ_{j+1,l}
  dual wavelet:           ψ̃_{j,m} = Σ_{l∈K(j+1)} g̃_{j,m,l} φ̃_{j+1,l}

Spaces:
  V_j = clos span{φ_{j,k} | k ∈ K(j)}, with V_0 the coarsest space
  W_j = clos span{ψ_{j,m} | m ∈ M(j)}, with W_0 the coarsest space;
  wavelets encode the difference between levels of approximation

Coefficients:
  scaling function coefficient: λ_{j,k} = ⟨f, φ̃_{j,k}⟩
  wavelet coefficient:          γ_{j,m} = ⟨f, ψ̃_{j,m}⟩

Forward wavelet transform (analysis), fine to coarse:
  λ_{j,k} = Σ_{l∈K(j+1)} h̃_{j,k,l} λ_{j+1,l}
  γ_{j,m} = Σ_{l∈K(j+1)} g̃_{j,m,l} λ_{j+1,l}

Inverse wavelet transform (synthesis), coarse to fine:
  λ_{j+1,l} = Σ_{k∈K(j)} h_{j,k,l} λ_{j,k} + Σ_{m∈M(j)} g_{j,m,l} γ_{j,m}
Consider the function space L² = L²(S², dω), i.e., all functions of finite energy defined over S². We define a multiresolution analysis as a sequence of closed subspaces V_j ⊂ L², with j ≥ 0, so that

I.   V_j ⊂ V_{j+1} (finer spaces have higher index),
II.  ∪_{j≥0} V_j is dense in L²,
III. for each j, scaling functions φ_{j,k} with k ∈ K(j) exist so that {φ_{j,k} | k ∈ K(j)} is a Riesz basis¹ of V_j.

Think of K(j) as a general index set, where we assume that K(j) ⊂ K(j+1). In the case of the real line we can take K(j) = 2^{−j}Z, while for an interval we might have K(j) = {0, 2^{−j}, ..., 1 − 2^{−j}}. Note that, unlike the case of a classical multiresolution analysis, the scaling functions need not be translates or dilates of one particular function. Property (I) implies that for every scaling function φ_{j,k} coefficients {h_{j,k,l}} exist so that

φ_{j,k} = Σ_l h_{j,k,l} φ_{j+1,l}.      (1)
The h_{j,k,l} are defined for j ≥ 0, k ∈ K(j), and l ∈ K(j+1). Each scaling function satisfies a different refinement relation. In the classical case we have h_{j,k,l} = h_{l−2k}, i.e., the sequences h_{j,k,·} are independent of scale and position. Each multiresolution analysis is accompanied by a dual multiresolution analysis consisting of nested spaces Ṽ_j with bases given by dual scaling functions φ̃_{j,k}, which are biorthogonal to the scaling functions:
⟨φ_{j,k}, φ̃_{j,k'}⟩ = δ_{k,k'}   for k, k' ∈ K(j),

where ⟨f, g⟩ = ∫ f g dω is the inner product on the sphere. The dual scaling functions satisfy refinement relations with coefficients {h̃_{j,k,l}}. In case scaling functions and dual scaling functions coincide (φ_{j,k} = φ̃_{j,k} for all j and k), the scaling functions form an orthogonal basis. In case the multiresolution analysis and the dual multiresolution analysis coincide (V_j = Ṽ_j for all j, but not necessarily φ_{j,k} = φ̃_{j,k}), the scaling functions are semi-orthogonal. Orthogonality or semi-orthogonality sometimes implies globally supported basis functions, which has obvious practical disadvantages. We will assume neither and always work in the most general biorthogonal setting (neither the multiresolution analyses nor the scaling functions coincide), introduced for classical wavelets
in [5].

One of the crucial steps when building a multiresolution analysis is the construction of the wavelets. They encode the difference between two successive levels of representation, i.e., they form a basis for the spaces W_j with V_j ⊕ W_j = V_{j+1}. Consider the set of functions {ψ_{j,m} | j ≥ 0, m ∈ M(j)}, where M(j) ⊂ K(j+1) is again an index set. If

¹ A Riesz basis of some Hilbert space is a countable subset {f_k} so that every element f of the space can be written uniquely as f = Σ_k c_k f_k, and positive constants A and B exist with A ‖f‖² ≤ Σ_k |c_k|² ≤ B ‖f‖².
1. the set is a Riesz basis for L²(S²),
2. the set {ψ_{j,m} | m ∈ M(j)} is a Riesz basis of W_j,

we say that the ψ_{j,m} define a spherical wavelet basis. Since W_j ⊂ V_{j+1}, we have

ψ_{j,m} = Σ_l g_{j,m,l} φ_{j+1,l}   for m ∈ M(j).      (2)
An important property of wavelets is that they have vanishing moments. The wavelets ψ_{j,m} have N vanishing moments if N independent polynomials P_i, 0 ≤ i < N, exist so that

⟨ψ_{j,m}, P_i⟩ = 0   for all j ≥ 0, m ∈ M(j).

Here the polynomials P_i are defined as the restrictions to the sphere of polynomials on R³. Note that independent polynomials on R³ can become dependent after restriction to the sphere, e.g., {1, x², y², z²}. For a given set of wavelets we have dual basis functions ψ̃_{j,m} which are biorthogonal to the wavelets:

⟨ψ_{j,m}, ψ̃_{j',m'}⟩ = δ_{m,m'} δ_{j,j'}   for j, j' ≥ 0, m ∈ M(j), m' ∈ M(j').

This implies ⟨ψ_{j,m}, φ̃_{j,k}⟩ = ⟨φ_{j,k}, ψ̃_{j,m}⟩ = 0 for m ∈ M(j) and k ∈ K(j), and for f ∈ L² we can write the expansion

f = Σ_{j,m} ⟨ψ̃_{j,m}, f⟩ ψ_{j,m} = Σ_{j,m} γ_{j,m} ψ_{j,m}.      (3)
Given all of the above relationships we can also write the scaling functions φ_{j+1,l} as a linear combination of coarser scaling functions and wavelets using the dual sequences (cf. Eqs. (1, 2)):

φ_{j+1,l} = Σ_k h̃_{j,k,l} φ_{j,k} + Σ_m g̃_{j,m,l} ψ_{j,m}.

If not stated otherwise, summation indices are understood to run over k ∈ K(j), l ∈ K(j+1), and m ∈ M(j). Given the set of scaling function coefficients of a function f, {λ_{n,k} = ⟨f, φ̃_{n,k}⟩ | k ∈ K(n)}, where n is some finest resolution level, the fast wavelet transform recursively calculates the {γ_{j,m} | 0 ≤ j < n, m ∈ M(j)} and {λ_{0,k} | k ∈ K(0)}, i.e., the coarser approximations to the underlying function. One step in the fast wavelet transform computes the coefficients at a coarser level (j) from the coefficients at a finer level (j+1):

λ_{j,k} = Σ_l h̃_{j,k,l} λ_{j+1,l}   and   γ_{j,m} = Σ_l g̃_{j,m,l} λ_{j+1,l}.

A single step in the inverse transform takes the coefficients at the coarser level and reconstructs the coefficients at the finer level:

λ_{j+1,l} = Σ_k h_{j,k,l} λ_{j,k} + Σ_m g_{j,m,l} γ_{j,m}.
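A single analysis/synthesis step, and the recursion over levels, can be sketched in code. The following is a minimal sketch (our own, not from the text) using the simplest classical, translation-invariant instance: unnormalized Haar filters on the real line, h̃ = (1/2, 1/2), g̃ = (1/2, −1/2), h = (1, 1), g = (1, −1).

```python
def analysis(lam, levels):
    """Fine-to-coarse: split lam (length 2**n) into coarse scaling
    coefficients and a list of wavelet coefficient bands."""
    gammas = []
    for _ in range(levels):
        even, odd = lam[0::2], lam[1::2]
        gammas.append([(a - b) / 2 for a, b in zip(even, odd)])  # γ_{j,m}
        lam = [(a + b) / 2 for a, b in zip(even, odd)]           # λ_{j,k}
    return lam, gammas

def synthesis(lam, gammas):
    """Coarse-to-fine: invert the analysis steps in reverse order."""
    for gam in reversed(gammas):
        fine = []
        for s, d in zip(lam, gam):
            fine += [s + d, s - d]   # λ_{j+1,2k}, λ_{j+1,2k+1}
        lam = fine
    return lam

f = [3.0, 1.0, 4.0, 1.0, 5.0, 9.0, 2.0, 6.0]
coarse, details = analysis(f, 3)
f_rec = synthesis(coarse, details)
```

Since each level halves the number of scaling coefficients, the full recursion visits O(n) coefficients in total.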
3 Wavelet Construction and Transform
We first discuss the lifting scheme [29, 30]. After introducing the algebra we consider two important families of wavelet bases, interpolating and generalized
Haar. At the end of this section we give a concrete example which shows how the properties of a given wavelet basis can be improved by lifting it, leading to better compression. Lifting allows us to build our bases in a fully biorthogonal framework. This ensures that all bases are of finite (and small) support and that the resulting filters are small and easy to derive. As we will see, it is also straightforward to incorporate custom constraints into the resulting wavelets.
3.1 The Lifting Scheme
The whole idea of the lifting scheme is to start from one basic multiresolution analysis, which can be simple or even trivial, and construct a new, more performant one, i.e., one whose basis functions are smoother or whose wavelets have more vanishing moments. In case the basic filters are finite, the lifted filters will also be finite. We denote coefficients of the original multiresolution analysis with an extra superscript o (from old or original), starting with the filters h°_{j,k,l}, h̃°_{j,k,l}, g°_{j,m,l}, and g̃°_{j,m,l}. The lifting scheme now states that a new set of filters can be found as

h_{j,k,l} = h°_{j,k,l},
g_{j,m,l} = g°_{j,m,l} − Σ_k s_{j,k,m} h°_{j,k,l},
g̃_{j,m,l} = g̃°_{j,m,l},
h̃_{j,k,l} = h̃°_{j,k,l} + Σ_m s_{j,k,m} g̃°_{j,m,l},

and that, for any choice of {s_{j,k,m}}, the new filters will automatically be biorthogonal and thus lead to an invertible transform. The scaling functions φ_{j,k} are the same in the original and lifted multiresolution analysis, while the dual scaling function and primal wavelet change. They now satisfy the refinement relations
ψ_{j,m} = Σ_l g°_{j,m,l} φ_{j+1,l} − Σ_k s_{j,k,m} φ_{j,k},      (4)

φ̃_{j,k} = Σ_l h̃°_{j,k,l} φ̃_{j+1,l} + Σ_m s_{j,k,m} ψ̃_{j,m}.

Note that the dual wavelet has also changed, since it is a linear combination (with the old coefficients g̃°) of a now changed dual scaling function. Equation (4) is the key to finding the {s_{j,k,m} | k} coefficients. Since the scaling functions are the same as in the original multiresolution analysis, the only unknowns on the right hand side are the s_{j,k,m}. We can choose them freely to enforce some desired property on the wavelets ψ_{j,m}. For example, in case we want the wavelet to have vanishing moments, the condition that the integral of a wavelet multiplied with a certain polynomial P_i is zero can now be written as

0 = Σ_l g°_{j,m,l} ⟨φ_{j+1,l}, P_i⟩ − Σ_k s_{j,k,m} ⟨φ_{j,k}, P_i⟩.

For a fixed j and m, this is a linear equation in the unknowns {s_{j,k,m} | k}. If we choose the number of unknown coefficients s_{j,k,m} equal to the number of equations N, we need to solve a linear system of size N × N for each j and m. A priori we do not know if this linear system can always be solved. We will come back to this later.
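The claim that any choice of {s_{j,k,m}} yields an invertible transform can be checked directly: the lifting step only adds a linear combination of the γ to the λ, and the inverse subtracts the same combination again. A toy 1D sketch (our own names, the Lazy wavelet as the old multiresolution analysis, dense random s purely for illustration):

```python
import random

random.seed(1)
n = 8
# arbitrary lifting weights s_{k,m}: invertibility requires NO conditions on s
s = [[random.uniform(-1, 1) for _ in range(n // 2)] for _ in range(n // 2)]

def forward(x):
    # step 1: old (here: Lazy) analysis filters -- plain subsampling
    lam, gam = list(x[0::2]), list(x[1::2])
    # step 2: lifting update, λ_k += Σ_m s_{k,m} γ_m
    for k in range(len(lam)):
        lam[k] += sum(s[k][m] * gam[m] for m in range(len(gam)))
    return lam, gam

def inverse(lam, gam):
    lam = list(lam)
    # undo the lifting update, then the old synthesis (merge even/odd)
    for k in range(len(lam)):
        lam[k] -= sum(s[k][m] * gam[m] for m in range(len(gam)))
    x = [0.0] * (2 * len(lam))
    x[0::2], x[1::2] = lam, gam
    return x

x = [random.uniform(0, 1) for _ in range(n)]
x_rec = inverse(*forward(x))
```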
The fast wavelet transform after lifting can be written as

γ_{j,m} = Σ_l g̃°_{j,m,l} λ_{j+1,l},
λ_{j,k} = Σ_l h̃°_{j,k,l} λ_{j+1,l} + Σ_m s_{j,k,m} γ_{j,m},

i.e., as a sequence of two steps: first the old dual high and low pass filters, then the update of the old scaling function coefficients with the wavelet coefficients using the {s_{j,k,m} | k}. The inverse transform becomes

λ_{j+1,l} = Σ_k h°_{j,k,l} (λ_{j,k} − Σ_m s_{j,k,m} γ_{j,m}) + Σ_m g°_{j,m,l} γ_{j,m}.

Instead of writing everything as a sequence of two steps involving {s_{j,k,m} | k}, we could have formed the new filters h̃ and g first and then applied those in a single step. Structuring the computation as two stages, however, simplifies the implementation considerably and is also more efficient.
Remarks:

1. The multiple index notation might look confusing at first sight, but its power lies in the fact that it immediately corresponds to the data structure of the implementation. The whole transform can also be written as one giant sparse matrix multiplication, but this would obscure the implementation ease of the lifting scheme.

2. Note how the inverse transform has a simple structure directly related to the forward transform. Essentially the inverse transform subtracts exactly the same linear combination of wavelet coefficients from λ_{j,k} as was added in the forward transform.

3. It is also possible to keep the dual scaling function fixed and put the conditions on the dual wavelet. The machinery is exactly the same provided one switches primals and duals and thus toggles the tildes in the equations. We refer to this as the dual lifting scheme, which employs coefficients s̃_{j,k,m}. It allows us to improve the performance of the dual wavelet. Typically, the number of vanishing moments of the dual wavelet is important to achieve compression. Also, the lifting scheme and the dual lifting scheme can be alternated to bootstrap one's way up to a desired multiresolution analysis (cakewalk construction).

4. The construction in LDW can be seen as a special case of the lifting scheme. They use the degrees of freedom to achieve pseudo-orthogonality (i.e., orthogonality between scaling functions and wavelets of one level within a small neighborhood) starting from an interpolating wavelet. The lifting scheme is more general in the sense that it uses a fully biorthogonal setting and that it can start from any multiresolution analysis with finite filters. The pseudo-orthogonalization requires the solution of linear systems which are of the size of the neighborhood (typically 24 by 24). Since many wavelets may in fact be the same, caching of matrix computations is possible.

5. After finishing this work, the authors learned that a similar construction was obtained independently by Dahmen and collaborators. We refer to the original papers [2, 9] for details.
6. Evidently, the lifting scheme is only useful in case one has an initial set of biorthogonal filters. In the following sections we will discuss two such sets.

3.2 Fast Lifted Wavelet Transform
Before describing the particulars of our bases we give the general structure of all transforms. Forward (analysis) and inverse (synthesis) transforms are always performed level wise. The former begins at the finest level and goes to the root, while the latter starts at the root and descends to the leaf level. AnalysisI computes the unlifted wavelet coefficients at the parent level, while AnalysisII performs the lifting if the basis is lifted and is empty otherwise. Similarly, SynthesisI performs the inverse lifting, if any, while SynthesisII computes the scaling function coefficients at the child level.

Analysis:
  For level = leaflevel to rootlevel
    AnalysisI(level)
    AnalysisII(level)

Synthesis:
  For level = rootlevel to leaflevel
    SynthesisI(level)
    SynthesisII(level)
The transforms come in two major groups: (A) Lifted from the Lazy wavelet: this involves interpolating scaling functions and a vertex based transform; (B) Lifted from the Haar wavelet: this involves a face based transform. We next discuss these in detail.
3.3 Interpolating Scaling Functions
We first give a trivial example of a wavelet transform: the Lazy wavelet [29, 30]. The Lazy wavelet transform is an orthogonal transform that essentially does not compute anything. However, it is fundamental as it is connected with interpolating scaling functions. The filters of the Lazy fast wavelet transform are given as

h°_{j,k,l} = h̃°_{j,k,l} = δ_{k,l}   and   g°_{j,m,l} = g̃°_{j,m,l} = δ_{m,l}.

Consequently, the transform does not compute anything; it only subsamples the coefficients. Fig. 4 (left) illustrates this idea for the case of the real line. Scaling functions {φ_{j,k} | j ≥ 0, k ∈ K(j)} are called interpolating if a set of points {x_{j,k} | j ≥ 0, k ∈ K(j)} with x_{j,k} = x_{j+1,k} exists, so that

∀k, k' ∈ K(j): φ_{j,k}(x_{j,k'}) = δ_{k,k'}.

An example of such functions on the real line is shown on the right side of Fig. 4. In case of interpolating scaling functions, we can always take the dual scaling
[Fig. 4 consists of two diagrams, the Lazy wavelet on the real line (left) and the interpolating wavelet on the real line (right), each showing primal and dual scaling functions and wavelets at levels j and j+1.]

Fig. 4. For the Lazy wavelet all primals are Kronecker functions (1 at the origin, 0 otherwise), while all duals are unit pulses (Dirac distributions). Going from a finer to a coarser scale is achieved by subsampling, with the missing samples giving the wavelet spaces (K(j) = 2^{−j}Z, M(j) = 2^{−j}(Z + 1/2), and x_{j,k} = k). The well known linear B-splines as primal scaling and wavelet functions with Diracs as duals can be reached with dual lifting (s̃_{j,k,m} = 1/2 δ_{k−2^{−j−1},m} + 1/2 δ_{k+2^{−j−1},m}), resulting in φ̃_{j,k} = δ(· − k) and ψ̃_{j,m} = −1/2 δ(· − m − 2^{−j−1}) + δ(· − m) − 1/2 δ(· − m + 2^{−j−1}).
functions to be Dirac distributions, φ̃_{j,k}(x) = δ(x − x_{j,k}), which are immediately biorthogonal (see the dual scaling functions on the right of Fig. 4). This leads to trivial inner products with the duals, namely evaluation of the function at the points x_{j,k}. The set of filters resulting from interpolating scaling functions and Diracs as their formal duals can be seen as a dual lifting of the Lazy wavelet. This implies that

h_{j,k,k'} = δ_{k,k'},   h_{j,k,m} = s̃_{j,k,m},   g̃_{j,m,k} = −s̃_{j,k,m},   g̃_{j,m,m'} = δ_{m,m'}.

The wavelets are given by ψ_{j,m} = φ_{j+1,m} and the dual wavelets by

ψ̃_{j,m} = φ̃_{j+1,m} − Σ_k s̃_{j,k,m} φ̃_{j,k}.

The linear B-spline (right side of Fig. 4) can be seen to be the dual lifting of the Lazy wavelet. Since we applied dual lifting, the primal wavelet does not yet have a vanishing moment. Below we present other choices for the filter coefficients h_{j,k,m}. Typically one can choose the s̃_{j,k,m} to insure that ψ̃_{j,m} has vanishing moments (this will lead to the Quadratic scheme), or that φ_{j,k} is smooth (this will lead to the Butterfly scheme). At this point we have an interpolating multiresolution analysis, which was dually lifted from the Lazy wavelet. A disadvantage of this multiresolution analysis is that the functions cannot provide Riesz bases for L². The dual functions do not even belong to L². This is related to the fact that the wavelet does not have a vanishing integral, since it coincides with a scaling function. Consequently, unconditional convergence of the expansion (3) is not guaranteed. One can now apply the primal lifting scheme to try to overcome this drawback by ensuring
that the primal wavelet has at least 1 vanishing moment. Note that this is only a necessary and not a sufficient condition. This yields

g_{j,m,l} = δ_{m,l} − Σ_k s_{j,k,m} h_{j,k,l}.

The resulting wavelet can be written as

ψ_{j,m} = φ_{j+1,m} − Σ_k s_{j,k,m} φ_{j,k}.      (5)

In the situation of Fig. 4, setting s_{j,k,m} = 1/4 δ_{m,k+2^{−j−1}} + 1/4 δ_{m,k−2^{−j−1}} results in ψ_{j,m} having a vanishing integral. This choice leads us to the well known (2, 2) biorthogonal wavelet of [5].
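On a periodic grid this (2, 2) construction is easy to verify numerically. A sketch (the periodic boundary handling is our own choice): the dual lift predicts each odd sample from its even neighbors (s̃ = 1/2, 1/2), the primal lift then updates the even samples with s = 1/4, 1/4. Wavelet coefficients of a linear signal vanish wherever the wrap-around plays no role, and the transform reconstructs perfectly.

```python
def forward(x):
    lam, gam = list(x[0::2]), list(x[1::2])
    n = len(lam)
    # dual lift (predict): γ_m -= (λ_left + λ_right) / 2
    for m in range(n):
        gam[m] -= 0.5 * (lam[m] + lam[(m + 1) % n])
    # primal lift (update): λ_k += (γ_left + γ_right) / 4 -> vanishing integral
    for k in range(n):
        lam[k] += 0.25 * (gam[(k - 1) % n] + gam[k])
    return lam, gam

def inverse(lam, gam):
    lam, gam = list(lam), list(gam)
    n = len(lam)
    for k in range(n):
        lam[k] -= 0.25 * (gam[(k - 1) % n] + gam[k])
    for m in range(n):
        gam[m] += 0.5 * (lam[m] + lam[(m + 1) % n])
    x = [0.0] * (2 * n)
    x[0::2], x[1::2] = lam, gam
    return x

linear = [0.5 * i for i in range(16)]   # a linear "signal"
lam, gam = forward(linear)
interior = gam[:-1]                     # the last γ sees the periodic wrap

orig = [3.0, 1.0, 4.0, 1.0, 5.0, 9.0, 2.0, 6.0]
x_rec = inverse(*forward(orig))
```

The interior wavelet coefficients of the linear signal are exactly zero, reflecting the one vanishing moment of the dual (prediction) step.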
[Fig. 5 is a diagram of a vertex site m and the neighborhood used by the transforms.]

Fig. 5. Neighbors used in our bases. Members of the index sets used in the transforms are shown in the diagram (m ∈ M(j), {v1, v2, f1, f2, e1, e2, e3, e4} = K_m ⊂ K(j)).
3.4 Vertex Bases
Up to this point we have treated all index sets involved in the various filters as abstract sets. We now make these index sets more concrete. In order to facilitate our description we consider all index sets as defined locally around a given site x_{j+1,m}. A diagram is given in Fig. 5. The index of a given site is denoted m ∈ M(j), and all the neighboring vertices (x_{j,k} with k ∈ K(j)) needed in the transform have indices v, f, and e respectively. To give some more intuition for these index sets, recall wavelets on the real line as in Fig. 4. In that case the set K(0) ∋ l would consist of all integers, while M(−1) ∋ m would contain the odd and K(−1) ∋ k the even integers. For vertex based schemes we may think of the sites m ∈ M(j) as always living on the midpoint of some parent edge (these being the "odd" indices), while the endpoints of a given edge form the "even" indices (k ∈ K(j)), and their union l ∈ K(j) ∪ M(j) = K(j+1) gives the set of all indices. For each m the filters only range over some small neighborhood. We will refer to the elements in these neighborhoods by a local naming scheme (see
Fig. 5), k ∈ K_m ⊂ K(j). For example, the site m lies in between the elements of K_m^v = {v1, v2}. For all vertex bases the unlifted scaling coefficients are simply subsampled during analysis and upsampled during synthesis, while the wavelet coefficients involve some computation.

AnalysisI(j):
  ∀k ∈ K(j): λ_{j,k} := λ_{j+1,k}
  ∀m ∈ M(j): γ_{j,m} := λ_{j+1,m} − Σ_{k∈K_m} s̃_{j,k,m} λ_{j,k}

SynthesisII(j):
  ∀k ∈ K(j): λ_{j+1,k} := λ_{j,k}
  ∀m ∈ M(j): λ_{j+1,m} := γ_{j,m} + Σ_{k∈K_m} s̃_{j,k,m} λ_{j,k}
We now give the details of the wavelet coefficient computations.

Lazy: As mentioned above, the Lazy wavelet does nothing but subsampling. The resulting analysis and synthesis steps then become γ_{j,m} := λ_{j+1,m} and λ_{j+1,m} := γ_{j,m}, respectively. The corresponding stencil encompasses no neighbors, i.e., the sums over s̃_{j,k,m} are empty.

Linear: This basic interpolatory form uses the stencil k ∈ K_m = {v1, v2} (see Fig. 5) for analysis and synthesis
γ_{j,m} := λ_{j+1,m} − 1/2 (λ_{j,v1} + λ_{j,v2})   and   λ_{j+1,m} := γ_{j,m} + 1/2 (λ_{j,v1} + λ_{j,v2}),

respectively. Note that this stencil does properly account for the geometry provided that the m sites at level j+1 have equal geodesic distance from the {v1, v2} sites on their parent edge. Here s̃_{j,v1,m} = s̃_{j,v2,m} = 1/2.

Quadratic: The stencil for this basis is given by K_m = {v1, v2, f1, f2} (see Fig. 5) and exploits the implied degrees of freedom to kill the functions x², y², and z² (and, by implication, the constant function 1). Using the coordinates of the involved sites, a small linear system results:

Σ_{k∈K_m} s̃_{j,k,m} P(x_{j,k}) = P(x_{j+1,m})   for P = x², y², z², 1.

Since x² + y² + z² = 1 this system is singular (but solvable), and the answer is chosen so as to minimize the l₂ norm of the resulting filter coefficients. Note that this is an instance of dual lifting with effective filters s̃_{j,k,m} = h_{j,k,m} = −g̃_{j,m,k}.
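Numerically, the minimum-norm solution of such a singular but consistent system is exactly what a least-squares solver returns. A sketch with a hypothetical stencil configuration on the unit sphere (our own choice of points, not from the text):

```python
import numpy as np

# Hypothetical stencil: edge endpoints v1, v2 and two "face" neighbors f1, f2.
v1, v2 = np.array([1.0, 0.0, 0.0]), np.array([0.0, 1.0, 0.0])
f1, f2 = np.array([0.0, 0.0, 1.0]), np.array([0.0, 0.0, -1.0])
m = (v1 + v2) / np.linalg.norm(v1 + v2)     # edge midpoint pushed to the sphere

def moments(p):
    """Values of the test functions P = x^2, y^2, z^2, 1 at a point."""
    x, y, z = p
    return np.array([x * x, y * y, z * z, 1.0])

# rows: test functions P, columns: stencil sites; rank-deficient since
# the P = 1 row is the sum of the other three (x^2 + y^2 + z^2 = 1).
A = np.column_stack([moments(p) for p in (v1, v2, f1, f2)])
b = moments(m)

# lstsq returns the exact solution of minimum l2 norm for this
# singular-but-consistent system, matching the choice made in the text.
s, *_ = np.linalg.lstsq(A, b, rcond=None)
residual = np.linalg.norm(A @ s - b)
```

For this symmetric configuration the minimum-norm answer puts all the weight on the edge endpoints, s = (1/2, 1/2, 0, 0), i.e., it degenerates to the Linear stencil.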
Butterfly: This is the only basis which uses other than immediate neighbors (all the sites K_m denoted in Fig. 5). Here s̃_{v1} = s̃_{v2} = 1/2, s̃_{f1} = s̃_{f2} = 1/8, and s̃_{e1} = s̃_{e2} = s̃_{e3} = s̃_{e4} = −1/16. It is inspired by a subdivision scheme of Dyn et al. [12] for the construction of smooth surfaces.

3.5 Lifting Vertex Bases
All of the above bases, Lazy, Linear, Quadratic, and Butterfly, can be lifted. In this section we use lifting to assure that the wavelet has at least one vanishing moment. It does not improve the ability of the dual wavelet to annihilate more functions. Consequently the ability of the bases to compress is not increased, but a smaller error results when using them for compression (see the example in Section 3.8 and the results in Section 5). We propose wavelets of the form

ψ_{j,m} = φ_{j+1,m} − s_{j,v1,m} φ_{j,v1} − s_{j,v2,m} φ_{j,v2}.      (6)
Sj,k,m = Ij+l,m/2 Ij,~ with Ij,k = fS2 ~j,k dw. During analysis lifting is a second phase (at each level j ) after the ~'j,m computation, while during synthesis it is a first step followed by the regular synthesis step (Linear, Quadratic, or Butterfly as given above). The simplicity of the expressions demonstrates the power of the lifting scheme. Any of the previous vertex basis wavelets can be lifted with the same expression. The integrals Ij,k can be approximated on the finest level and then recursively computed on the coarser levels (using the refinement relations).
AnalysisII(j):
  ∀m ∈ M(j): { λ_{j,v1} += s_{j,v1,m} γ_{j,m};  λ_{j,v2} += s_{j,v2,m} γ_{j,m} }

SynthesisI(j):
  ∀m ∈ M(j): { λ_{j,v1} −= s_{j,v1,m} γ_{j,m};  λ_{j,v2} −= s_{j,v2,m} γ_{j,m} }
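The same recipe can be tried on a 1D "second generation" toy problem (our own construction, not from the text): hat functions on a non-uniform grid are not translates of one another, yet s_{j,k,m} = I_{j+1,m} / (2 I_{j,k}) still gives the combination φ_{j+1,m} − s_{v1} φ_{j,v1} − s_{v2} φ_{j,v2} a vanishing integral, which we can confirm by quadrature:

```python
import numpy as np

rng = np.random.default_rng(0)
coarse = np.sort(rng.uniform(0.0, 1.0, 6))            # non-uniform coarse knots
mids = (coarse[:-1] + coarse[1:]) / 2
fine = np.sort(np.concatenate([coarse, mids]))        # fine[2i] = coarse[i]

def hat(grid, i, x):
    """Piecewise linear hat on `grid`: 1 at grid[i], 0 at its neighbors."""
    l, c, r = grid[i - 1], grid[i], grid[i + 1]
    return np.where(x < c, np.clip((x - l) / (c - l), 0.0, 1.0),
                    np.clip((r - x) / (r - c), 0.0, 1.0))

def integral(grid, i):
    """Exact integral of that hat function."""
    return (grid[i + 1] - grid[i - 1]) / 2

# midpoint site (fine index 5) between coarse vertices v1 = 2 and v2 = 3
mf, v1, v2 = 5, 2, 3
s1 = integral(fine, mf) / (2 * integral(coarse, v1))
s2 = integral(fine, mf) / (2 * integral(coarse, v2))

x = np.linspace(0.0, 1.0, 200001)
psi = hat(fine, mf, x) - s1 * hat(coarse, v1, x) - s2 * hat(coarse, v2, x)
int_psi = float(np.sum((psi[1:] + psi[:-1]) / 2 * np.diff(x)))  # trapezoid rule
```

The vanishing integral holds for any knot spacing, since each s_k donates exactly half of I_{j+1,m} per endpoint.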
For the interpolating case in the previous section, the scaling function coefficients at each level are simply samples of the function to be expanded (inner products with the φ̃_{j,k}). In the lifted case the coefficients are defined as the inner product of the function to be expanded with the (new) dual scaling function. This dual scaling function is only defined as the limit function of a non-stationary subdivision scheme. The inner products at the finest level therefore need to be
Fig. 6. Images of the graphs of all vertex based wavelets. On the left is the scaling function (or unlifted wavelet) while the right shows the lifted wavelet with 1 vanishing moment. From top to bottom: Linear, Quadratic, and Butterfly. Positive values are mapped to a linear red scale while negative values are shown in blue. The gray area shows the support.
approximated with a quadrature formula, i.e., a linear combination of function samples. In our implementation we use a simple one point quadrature formula at the finest level. Fig. 6 shows images of the graphs of all the vertex based wavelet functions for the interpolating and lifted cases.
[Fig. 7 shows primal (left) and dual (right) Haar scaling functions on a triangular subdivision.]

Fig. 7. Example Haar scaling functions on a triangular subdivision. On the left are primal functions, each of height 1. On the right are the biorthogonal duals, each of height α(T_i)^{−1}. Disjoint bases have an inner product of 0, while overlapping (coincident) supports lead to an inner product of 1. (For the sphere all triangles are spherical triangles.)
3.6 The Generalized Haar Wavelets and Face Bases
Consider spherical triangles resulting from a geodesic sphere construction, T_{j,k} ⊂ S² with k ∈ K(j) (note that the face based K(j) are not identical to the vertex based K(j) defined earlier). They satisfy the following properties:

1. S² = ∪_{k∈K(j)} T_{j,k} and this union is disjoint, i.e., the T_{j,k} provide a simple cover of S² for every j,
2. for every j and k, T_{j,k} can be written as the union of 4 "child" triangles T_{j+1,l}.

Let α(T_{j,k}) be the spherical area of a triangle and define the scaling functions and dual scaling functions as

φ_{j,k} = χ_{T_{j,k}}   and   φ̃_{j,k} = α(T_{j,k})^{−1} χ_{T_{j,k}}.

Here χ_T is the function whose value is 1 for x ∈ T and 0 otherwise. The fact that the scaling function and dual scaling function are biorthogonal follows immediately from their disjoint supports (see Fig. 7). Define the V_j ⊂ L² as V_j = clos span{φ_{j,k} | k ∈ K(j)}. The spaces V_j then generate a multiresolution analysis of L²(S²). Now fix a triangle T_{j,*}. For the construction of the generalized Haar wavelets, we only need to consider the set of children T_{j+1,l}, l = 0, 1, 2, 3, of T_{j,*}. We call these bases the Bio-Haar functions (see Fig. 8). The wavelets (m = 1, 2, 3) are chosen as
ψ_{j,m} = 2 (φ_{j+1,m} − (I_{j+1,m} / I_{j+1,0}) φ_{j+1,0}),
[Fig. 8 shows the four children of a parent triangle and the three wavelets Bio-Haar 1, Bio-Haar 2, and Bio-Haar 3.]

Fig. 8. The Bio-Haar wavelets. Note that the heights of the functions are not drawn to scale.
so that their integral vanishes. A set of semi-orthogonal dual wavelets is then given by

ψ̃_{j,m} = 1/2 (φ̃_{j+1,m} − φ̃_{j,*}).

These bases are inspired by the construction of orthogonal Haar wavelets for general measures, see [15, 23], where it is shown that such wavelets form an unconditional basis.
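One Bio-Haar analysis/synthesis step on a single parent triangle is small enough to write out. A sketch in our own notation (a[l] are the child areas; the λ values are local averages, i.e., inner products with the duals α⁻¹χ, and γ_m = ⟨f, ψ̃_m⟩ = (λ_child[m] − λ_parent)/2):

```python
import random

random.seed(2)
a = [random.uniform(0.1, 1.0) for _ in range(4)]        # child areas (unequal)
lam_children = [random.uniform(-1.0, 1.0) for _ in range(4)]

def bio_haar_analysis(a, lam):
    # parent λ is the area-weighted average of the child averages
    parent = sum(ai * li for ai, li in zip(a, lam)) / sum(a)
    gammas = [(lam[m] - parent) / 2 for m in (1, 2, 3)]
    return parent, gammas

def bio_haar_synthesis(a, parent, gammas):
    children = [0.0] * 4
    for m, g in zip((1, 2, 3), gammas):
        children[m] = parent + 2 * g
    # child 0 restores the parent's area-weighted average
    children[0] = (sum(a) * parent
                   - sum(a[m] * children[m] for m in (1, 2, 3))) / a[0]
    return children

parent, gammas = bio_haar_analysis(a, lam_children)
rec = bio_haar_synthesis(a, parent, gammas)
```

The wavelets ψ_m = 2(χ_m − (a_m/a_0) χ_0) have vanishing integral by construction, since 2(a_m − (a_m/a_0) a_0) = 0.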
Fig. 9. Illustration of the dual lifting of the dual Bio-Haar wavelets. New dual wavelets can be constructed by taking linear combinations of the original dual Bio-Haar wavelets and parent level dual scaling functions. Each such linear combination is signified by a row. Solving for the necessary weights s̃_{j,k,m} requires the solution of a small matrix problem whose right hand side encodes the desired constraints.
The Bio-Haar wavelets have only 1 vanishing moment, but using the dual lifting scheme we can build a new multiresolution analysis in which the dual wavelet has more vanishing moments. Let T_{j,k=4,5,6} be the neighboring triangles of T_{j,*} (at level j), and K_m = {*, 4, 5, 6}. The new dual wavelets are

ψ̃_{j,m} = 1/2 (φ̃_{j+1,m} − φ̃_{j,*}) − Σ_{k∈K_m} s̃_{j,k,m} φ̃_{j,k}.
Note that this is a special case of Equation (4). The coefficients s̃_{j,k,m} can now be chosen so that ψ̃_{j,m} has vanishing moments. Fig. 9 illustrates this idea. In the left column are the three dual Bio-Haar wavelets created before. The following four columns show the dual scaling functions over the parent and aunt triangles T_{j,k=*,4,5,6}. Each row signifies one of the linear combinations. Similarly to the Quadratic vertex basis, we construct dually lifted Bio-Haar wavelets which kill the functions x², y², z², and thus 1. This leads to the equations

Σ_{k∈K_m} s̃_{j,k,m} ⟨φ̃_{j,k}, P⟩ = 1/2 ⟨φ̃_{j+1,m} − φ̃_{j,*}, P⟩   with P = x², y², z², 1.

The result is a 4 × 4 singular (but solvable) matrix problem for each m = 1, 2, 3. The unknowns are the s̃_{j,k,m} with k = *, 4, 5, 6, and the entries of the linear system are moments of dual scaling functions. These can be computed recursively from the leaf level during analysis. The Bio-Haar and lifted Bio-Haar transforms compute the scaling function coefficient during analysis at the parent triangle as a function of the scaling function coefficients at the children and possibly the scaling function coefficients at the neighbors of the parent triangle (in the lifted case). The three wavelet coefficients of the parent level are stored with the children T1, T2, and T3 for convenience in the implementation. During synthesis the scaling function coefficient at the parent and the wavelet coefficients stored at children T1, T2, and T3 are used to compute the scaling function coefficients at the 4 children. As before, lifting is a second step during analysis and modifies the wavelet coefficients. During synthesis lifting is a first step before the inverse Bio-Haar transform is calculated.
AnalysisII(j):
  ∀m ∈ M(j): γ_{j,m} −= Σ_{k∈K_m} s̃_{j,k,m} λ_{j,k}

SynthesisI(j):
  ∀m ∈ M(j): γ_{j,m} += Σ_{k∈K_m} s̃_{j,k,m} λ_{j,k}
3.7 Basis Properties
The lifting scheme provides us with the filter coefficients needed in the implementation of the fast wavelet transform. To find the basis functions and dual basis functions associated with them, we use the cascade algorithm. To synthesize a scaling function φ_{j0,k0}, one simply initializes the coefficients λ_{j0,k} = δ_{k,k0}. The inverse wavelet transform starting from level j0, with all wavelet coefficients γ_{j,m} with j ≥ j0 set to zero, then results in λ_{j,k} coefficients which converge to function values of φ_{j0,k0} as j → ∞. In case the cascade algorithm converges in L² for both primal and dual scaling functions, biorthogonal filters (as given by the lifting scheme) imply biorthogonal basis functions.
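The cascade algorithm is easy to carry out for the unlifted Linear basis on the real line, where the inverse transform step is interpolating subdivision; starting from a Kronecker delta, the computed values converge to (and here exactly sample) the hat function φ(x) = max(0, 1 − |x|). A sketch with our own names:

```python
def subdivide(lam):
    """One inverse-transform step of the unlifted Linear basis:
    keep the old samples, predict midpoints by averaging."""
    fine = []
    for k in range(len(lam) - 1):
        fine += [lam[k], 0.5 * (lam[k] + lam[k + 1])]
    fine.append(lam[-1])
    return fine

# coefficients on the integer grid -4..4: a Kronecker delta at 0
lam = [0.0] * 9
lam[4] = 1.0
spacing = 1.0
for _ in range(5):
    lam = subdivide(lam)
    spacing /= 2

values = lam
xs = [-4 + i * spacing for i in range(len(values))]
hat_exact = [max(0.0, 1.0 - abs(x)) for x in xs]
```

For this interpolating scheme the cascade values agree with the limit function at every computed dyadic point, so the convergence is visible after only a few levels.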
One of the fundamental questions is how properties such as convergence of the cascade algorithm, Riesz bounds, and smoothness can be related back to properties of the filter sequences. This is a very hard question and at this moment no general answer is available to our knowledge. We thus have no mathematical proof that the wavelets constructed form an unconditional basis, except in the case of the Haar wavelets. A recent result addressing these questions was obtained by Dahmen [8]. In particular, it is shown there which properties in addition to biorthogonality are needed to assure stable bases. Whether this result can be applied to the bases constructed here needs to be studied in the future.

Regarding smoothness, we have some partial results. It is easy to see that the Haar wavelets are not continuous and that the Linear wavelets are. The original Butterfly subdivision scheme is guaranteed to yield a C¹ limit function provided the connectivity of the vertices is at least 4. The modified Butterfly scheme that we use on the sphere will also give C¹ limit functions, provided a locally smooth (C¹) map from the spherical triangulation to a planar triangulation exists. Unfortunately, the geodesic subdivision we use here does not have this property. However, the resulting functions appear visually smooth (see Fig. 6). We are currently working on new spherical triangulations which have the property that the Butterfly scheme yields a globally C¹ function.

In principle, one can choose either the tetrahedron, octahedron, or icosahedron to start the geodesic sphere construction. Each of them has a particular number of triangles on each level, and therefore one of them might be more suited for a particular application or platform. The octahedron is the best choice in case of functions defined on the hemisphere (cf. BRDF). The icosahedron will lead to the least area imbalance of triangles on each level and thus to (visually) smoother basis functions.
[Fig. 10 is a log-log plot of relative l₁ error versus number of coefficients, comparing Linear and lifted Linear wavelets.]

Fig. 10. Relative l₁ error as a function of the number of coefficients for the example function f(s) = √|s_x| and (lifted) Linear wavelets. With the same number of coefficients the error is smaller by a factor of 3, or conversely a given error can be achieved with about 1/3 the number of coefficients if the lifted basis is used.
3.8 An Example
We argued at the beginning of this section that a given wavelet basis can be made more performant by lifting. In the section on interpolating bases we pointed out that for a wavelet basis with Diracs for duals and a primal wavelet which does not have 1 vanishing moment, unconditional convergence of the resulting series expansions cannot be insured. We now give an example on the sphere which illustrates the numerical consequences of lifting. Consider the function f(s) = √|s_x| for s = (s_x, s_y, s_z) ∈ S². This function is everywhere smooth except on the great circle s_x = 0, where its derivative has a discontinuity. Since it is largely smooth but for this singularity, it is ideally suited to exhibit problems in bases whose primal wavelet does not have a vanishing moment. Fig. 10 shows the relative l₁ error as a function of the number of coefficients used in the synthesis stage. In order to satisfy the same error threshold, the lifted basis requires only approximately 1/3 the number of coefficients compared to the unlifted basis.
4 Implementation
We have implemented all the described bases in an interactive application running on an SGI Irix workstation. The basic data structure is a forest of triangle quadtrees [11]. The root level starts with 4 (tetrahedron), 8 (octahedron), or 20 (icosahedron) spherical triangles. These are recursively subdivided into 4 child triangles each. Naming edges after their opposite vertex, and children after the vertex they retain (the central child becomes T0), leads to a consistent naming scheme throughout the entire hierarchy. Neighbor finding is a simple O(1) (expected cost) function using bit operations on edge and triangle names to guide pointer traversal [11]. A vertex is allocated once, and any level which contains it carries pointers to it. Each vertex carries a single λ and γ slot for vertex bases, while face bases carry a single λ and γ slot per spherical triangle. Our actual implementation carries other data such as surface normals and colors used for display, function values for error computations, and copies of all γ and λ values to facilitate experimentation. These are not necessary in a production system, however. Using a recursive data structure is more memory intensive (due to the pointer overhead) than a flat, array based representation of all coefficients as was used by LDW. However, using a recursive data structure enables the use of adaptive subdivision and results in simple recursive procedures for analysis and synthesis and a subdivision oracle. For interactive applications it is straightforward to select a level for display appropriate to the available graphics performance (polygons per second). In the following subsections we address particular issues in the implementation.
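The quadtree layer described above might be sketched as follows (hypothetical names, and a simple midpoint cache standing in for the paper's bit-manipulation naming scheme; vertices are shared between triangles and levels by reference):

```python
class Vertex:
    def __init__(self, pos):
        self.pos = pos          # point on the sphere
        self.lam = 0.0          # scaling function coefficient λ
        self.gam = 0.0          # wavelet coefficient γ (vertex bases)

class Tri:
    def __init__(self, v0, v1, v2, level=0):
        self.v = (v0, v1, v2)
        self.level = level
        self.children = None    # None until subdivided

    def subdivide(self, midpoint):
        """Split into 4 children; `midpoint` returns (and caches) the
        shared vertex on an edge, so neighbors reuse the same object."""
        m01 = midpoint(self.v[0], self.v[1])
        m12 = midpoint(self.v[1], self.v[2])
        m20 = midpoint(self.v[2], self.v[0])
        L = self.level + 1
        self.children = [Tri(m01, m12, m20, L),        # central child T0
                         Tri(self.v[0], m01, m20, L),  # T1 keeps v0
                         Tri(self.v[1], m12, m01, L),  # T2 keeps v1
                         Tri(self.v[2], m20, m12, L)]  # T3 keeps v2
        return self.children

cache = {}
def midpoint(a, b):
    key = frozenset([id(a), id(b)])
    if key not in cache:
        p = tuple((x + y) / 2 for x, y in zip(a.pos, b.pos))
        n = sum(c * c for c in p) ** 0.5
        cache[key] = Vertex(tuple(c / n for c in p))   # push to the sphere
    return cache[key]

root = Tri(Vertex((1.0, 0, 0)), Vertex((0, 1.0, 0)), Vertex((0, 0, 1.0)))
kids = root.subdivide(midpoint)
```

Because midpoints are cached per edge, subdividing a neighboring triangle later would reuse the very same Vertex objects, mirroring the allocate-once vertex policy of the text.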
Spherical Wavelets: Efficiently Representing Functions on a Sphere
Peter Schröder and Wim Sweldens

4.1 Restricted Quadtrees
In order to support lifted bases and bases whose stencils encompass some neighborhood, the quadtrees produced need to satisfy a restriction criterion. For the Linear vertex bases (lifted and unlifted) and the Bio-Haar basis no restriction is required. For the Quadratic and lifted Bio-Haar bases no neighbor of a given face may be off by more than 1 subdivision level (every child needs a proper set of "aunts"). For the Butterfly basis a two-neighborhood must not be off by more than 1 subdivision level. These requirements are easily enforced during the recursive subdivision. The fact that we only need "aunts" (as opposed to "sisters") for the lifting scheme allows us to have wavelets on adaptively subdivided hierarchies. This is a crucial departure from previous constructions, e.g., the tree wavelets employed by Gortler et al. [17], who also needed to support adaptive subdivision.

4.2 Boundaries
In the case of a hemisphere (the top 4 spherical triangles of an octahedral subdivision), which is important for BRDF functions, the issues associated with the boundary need to be addressed. Lifting of vertex bases is unchanged, but the Quadratic and Butterfly schemes (as well as the lifted Bio-Haar bases) need neighbors, which may not exist at the boundary. This can be addressed by simply using another, further neighbor instead of the missing neighbor (across the boundary edge) to solve the associated matrix problem. It implicitly corresponds to adapting filter coefficients close to the boundary as done in interval constructions, see e.g. [6]. This construction automatically preserves the vanishing moment property even at the boundary. In the implementation of the Butterfly basis we took a different approach and simply reflected any missing faces across the boundary.

4.3 Oracle
One of the main components in any wavelet based approximation is the oracle. The function of the oracle is to determine which coefficients are important and need to be retained for a reconstruction which is to meet some error criterion. Our system can be driven in two modes. The first selects a deepest level to which to expand all quadtrees. The storage requirements for this approach grow exponentially in the depth of the tree. For example, our implementation cannot go deeper than 7 levels (starting from the tetrahedron) on a 32 MB Indy class machine without paging. Creating full trees, however, allows for the examination of all coefficients throughout the hierarchies to in effect implement a perfect oracle. The second mode builds sparse trees based on a deep refinement oracle. In this oracle quadtrees are built depth first, exploring the expansion to some (possibly very deep) finest level. On the way out of the recursion a local analysis is performed and any subtrees whose wavelet coefficients are all below a user-supplied threshold are deallocated. Once the sparse tree is built the restriction criterion is enforced and the (possibly lifted) analysis is run level wise. The time complexity of this oracle is still exponential in the depth of the tree, but the storage requirements are proportional to the output size. With extra knowledge about the underlying function more powerful oracles can be built whose time complexity is proportional to the output size as well.

Table 2. Representative timings for wavelet transforms beginning with 4 spherical triangles and expanding to level 9 (2^20 faces and 2^19 + 2 vertices). All timings are given in seconds and measured on an SGI R4400 running at 150 MHz. The initial setup time (allocating and initializing all data structures) took 100 seconds.

Basis      Analysis  Synthesis | Lifted Basis  Analysis  Synthesis
Linear         3.59       3.55 | Linear            5.85       5.83
Quadratic     21.79      21.00 | Quadratic        24.62      24.68
Butterfly      8.43       8.42 | Butterfly        10.64      10.62
Bio-Haar       4.31       6.09 | Bio-Haar         42.43      36.08
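The deep refinement oracle can be illustrated in a simplified one-dimensional setting. This is our own toy construction, not the paper's spherical implementation: a binary tree over [0, 1], an interpolating detail coefficient computed on the way out of the depth-first recursion, and deallocation of any subtree all of whose coefficients fall below the user-supplied threshold.

```python
# Toy analogue (our construction, not the paper's code) of the deep
# refinement oracle: expand depth first to a finest level, compute a
# simple interpolating "wavelet" coefficient on the way out of the
# recursion, and deallocate subtrees whose coefficients are all below
# the threshold.  Storage is proportional to the surviving output,
# while time remains exponential in the depth.

def refine(f, a, b, level, max_level, threshold):
    """Return (subtree, max_abs_coeff); (None, 0.0) means pruned."""
    if level == max_level:
        return None, 0.0
    m = 0.5 * (a + b)
    # Detail coefficient: deviation of f at the midpoint from the
    # linear prediction of its parents (a stand-in for local analysis).
    coeff = f(m) - 0.5 * (f(a) + f(b))
    left, lmax = refine(f, a, m, level + 1, max_level, threshold)
    right, rmax = refine(f, m, b, level + 1, max_level, threshold)
    biggest = max(abs(coeff), lmax, rmax)
    if biggest < threshold:
        return None, 0.0            # whole subtree deallocated
    return {"interval": (a, b), "coeff": coeff,
            "children": (left, right)}, biggest

def count_coeffs(node):
    if node is None:
        return 0
    left, right = node["children"]
    return 1 + count_coeffs(left) + count_coeffs(right)
```

For a function that is smooth except at one point, e.g. f(x) = √|x − 0.4| (the 1D analogue of the test function used above), the surviving coefficients concentrate around the singularity and the sparse tree is far smaller than the full tree of the same depth.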
4.4 Transform Cost
The cost of a wavelet transform is proportional to the total number of coefficients, which grows by a factor of 4 for every level. For example, 9 levels of subdivision starting from 4 spherical triangles result in 2^20 coefficients (each of λ and γ) for face bases and 2^19 + 2 (each of λ and γ) for vertex bases. The cost of analysis and synthesis is proportional to the number of basis functions, with a constant of proportionality that is a function of the stencil size. Table 2 summarizes timings of wavelet transforms for all the new bases. The initial setup took 100 seconds and includes allocation and initialization of all data structures and evaluation of the λ_{9,k}. Since the latter is highly dependent on the evaluation cost of the function to be expanded, we used the constant function 1 for these timings. None of the matrices which arise in the Quadratic and Bio-Haar bases (lifted and unlifted) was cached, thus the cost of solving the associated 4 × 4 matrices with a column pivoted QR (for Quadratic and lifted Bio-Haar) was incurred during both analysis and synthesis. If one is willing to cache the results of the matrix solutions this cost could be amortized over multiple transforms. We make three main observations about the timings: (A) lifting of vertex bases adds only a small extra cost, which is almost entirely due to the extra recursions; (B) the cost of the Butterfly basis is only approximately twice the cost of the Linear basis even though the stencil is much larger; (C) solving the 4 × 4 systems implied by the Quadratic and lifted Bio-Haar bases increases the cost by a factor of approximately 5 over the linear case (note that there are twice as many coefficients for face bases as for vertex bases). While the total cost of an entire transform is proportional to the number of basis functions, evaluating the resulting expansion at a point is proportional to
the depth of the tree (the log of the number of basis functions) times a constant dependent on the stencil size. The latter provides a great advantage over bases such as spherical harmonics, whose evaluation cost at a single point is proportional to the total number of basis functions used.
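The O(depth) point-evaluation property can be sketched in a one-dimensional hierarchical basis (our own illustration, not the paper's spherical code): at each level only the few hat functions whose support contains the evaluation point are nonzero, so evaluation walks one root-to-leaf path instead of touching all coefficients.

```python
# Sketch (our illustration) of O(depth)-per-point evaluation in a
# hierarchical interpolating ("hat") basis on [0, 1]: per level, only a
# constant number of basis functions overlap the point x.

def hat(center, h, x):
    """Hat function centered at `center` with support radius h."""
    return max(0.0, 1.0 - abs(x - center) / h)

def eval_hierarchical(coeffs, x):
    """coeffs[level] maps a grid index k to a detail coefficient whose
    hat is centered at k * 2**-level with support radius 2**-level."""
    total = 0.0
    for level, details in enumerate(coeffs):
        h = 2.0 ** (-level)
        k = int(x / h)
        # Only the hats straddling x can be nonzero: O(1) work per level.
        for kk in (k - 1, k, k + 1):
            if kk in details:
                total += details[kk] * hat(kk * h, h, x)
    return total
```

The loop body does constant work per level, so the total cost is proportional to the tree depth times a stencil-dependent constant, in contrast to a global basis where every coefficient contributes at every point.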
5 Results
In this section we report on experiments with the compression of a planetary topographic data set, a BRDF function, and the illumination of an anisotropic glossy sphere. Most of these experiments involved some form of coefficient thresholding (in the oracle). In all cases this was performed as follows. Since all our bases are normalized with respect to the L∞ norm, L² thresholding against some user supplied threshold ε becomes: if |γ_{j,m}| √(supp ψ_{j,m}) < ε, set γ_{j,m} := 0. Furthermore, ε is scaled by (max(f) − min(f)) for the given function f to make thresholding independent of the scale of f.
Fig. 11. Relative l1 error as a function of the number of coefficients used during the reconstruction of the earth topographic data set (left and middle) and the BRDF function (right). The six vertex bases and two face bases perform essentially the same for the earth. On the left: full expansion of the quadtrees to level 9 followed by thresholding. In the middle: the results of the deep refinement oracle to level 10 with only a sparse tree construction. The curves are identical, validating the refinement strategy. On the right: the results of deep refinement to level 9 for the BRDF. Here the individual bases are clearly distinguished.
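The thresholding rule described above can be sketched as follows. This is our own formulation (the variable names and the `(gamma, support_area)` pair representation are assumptions, not the paper's data layout): since the bases are L∞-normalized, the L² test weights each coefficient by the square root of its wavelet's support area, and the user threshold is scaled by the range of f.

```python
# Sketch of the coefficient thresholding rule: zero gamma_{j,m} when
# |gamma_{j,m}| * sqrt(supp psi_{j,m}) < eps * (max f - min f).
# `coeffs` is assumed to be a list of (gamma, support_area) pairs.
import math

def threshold_coeffs(coeffs, eps, f_min, f_max):
    """Return (thresholded coefficients, number of coefficients kept)."""
    cutoff = eps * (f_max - f_min)   # scale-independent threshold
    out, kept = [], 0
    for gamma, support in coeffs:
        if abs(gamma) * math.sqrt(support) < cutoff:
            out.append((0.0, support))        # coefficient dropped
        else:
            out.append((gamma, support))
            kept += 1
    return out, kept
```

Sweeping `eps` over successive powers of two, as in the experiments below, then traces out an error-versus-coefficient-count curve like those in Fig. 11.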
5.1 Compression of Topographic Data
In this series of experiments we computed wavelet expansions of topographic data over the entire earth. This function can be thought of as both a surface,
and as a scalar valued function giving height (depth) for each point on a sphere. The original data, ETOPO5 from the National Oceanic and Atmospheric Administration, gives the elevation (depth) of the earth from sea level in meters at a resolution of 5 arc minutes at the equator. Due to the large size of this data set we first resampled it to 10 arc minutes resolution. All expansions were performed starting from the tetrahedron followed by subdivision to level 9. Fig. 11 shows the results of these experiments (left and middle). After computing the coefficients of the respective expansions at the finest level of the subdivision, an analysis was performed. After this step all wavelet coefficients below a given threshold were zeroed and the function was reconstructed. The thresholds were successively set to 2^{-i} for i = 0, ..., 17, resulting in the number of coefficients and relative l1 error plotted (left graph). The error was computed with a numerical quadrature one level below the finest subdivision to ensure an accurate error estimation. The results are plotted for all vertex and face bases (Linear, Quadratic, Butterfly, Bio-Haar, lifted and unlifted). We also computed l2 and l∞ error norms and the resulting graphs (not shown) are essentially identical (although the l∞ error initially stays high before falling off, due to deep canyon features). The plot reaches to about one quarter of all coefficients. The observed regime is linear, as one would expect from the bases used. The most striking observation about these error graphs is the fact that all bases perform similarly. This is due to the fact that the underlying function is non-smooth. Consequently smoother bases do not perform any better than less performant ones. However, when drawing pictures of highly compressed versions of the data set, the smoother bases produce visually better pictures (see Fig. 12). Depending on the allowed error the compression can be quite dramatic.
For example, 7 200 coefficients are sufficient to reach 7% error, while 119 000 are required to reach 2% error. In a second set of experiments we used the deep refinement oracle (see Section 4.3) to explore the wavelet expansion to 10 levels (potentially quadrupling the number of coefficients) with successively smaller thresholds, once again plotting the resulting error in the middle graph of Fig. 11. The error as a function of coefficients used is the same as the relationship found by the perfect oracle. This validates our deep refinement oracle strategy. The memory requirements of this approach are drastically reduced. For example, using a threshold of 2^{-9} during oracle driven refinement to level 10 resulted in 4 616 coefficients and consumed a total of 27 MB (including 10 MB for the original data set). Lowering the threshold to 2^{-10} yielded 10 287 coefficients and required 43 MB (using the lifted Butterfly basis in both cases). Finally, Fig. 12 shows some of the resulting adaptive data sets rendered with RenderMan using the Butterfly basis and a pseudo coloring which maps elevation onto a piecewise linear color scale. The total runtime for oracle driven analysis and synthesis was 10 minutes on an SGI R4400 at 150 MHz.

Comparison with LDW. The earth data set allows for a limited comparison of our results with those of LDW. They also compressed the ETOPO5 data set
Fig. 12. Two views of the earth data set with 15 000 and 190 000 coefficients respectively, using the Butterfly basis, with l1 errors of 0.05 and 0.01 respectively. In the image on the left coastal regions are rather smoothed since they contain little height variation (England and the Netherlands are merged and the Baltic Sea has dried up). However, spiky features such as the Cape Verde Islands off the coast of Africa are clearly preserved.
using pseudo orthogonalized (over a 2-neighborhood) Linear wavelets defined over the octahedron. They subdivide to 9 levels (on a 128 MB machine), which corresponds to twice as many coefficients as we used (on a 180 MB machine), suggesting a storage overhead of about 3 in our implementation. It is hard to compare the quality of the bases without knowing the exact basis used or the errors in the compressed reconstruction. However, LDW report the number of coefficients selected for a given threshold (741 for 0.02, 15 101 for 0.002, and 138 321 for 0.0005). Depending on the basis used we generally select fewer coefficients (6 000 - 15 000 for 0.002 and 28 000 - 65 000 for 0.0005). As timings they give 588 seconds (on a 100 MHz R4000) for analysis, which is significantly longer than our smoothest basis (lifted Butterfly). Their reconstruction time ranges from 75 seconds (741 coefficients) to 1 230 seconds (138 058 coefficients), which is also significantly longer than our times (see Table 2). We hypothesize that the timing and storage differences are largely due to their use of flat, array based data structures. These do not require as much memory, but they are more compute intensive in the sparse polygonal reconstruction phase.

5.2 BRDF Compression
In this series of experiments we explore the potential for efficiently representing BRDF functions with spherical wavelets. BRDF functions can arise from measurements, simulation, or theoretical models. Depending on the intended application different models may be preferable. Expanding BRDF functions in terms
of locally supported hierarchical functions is particularly useful for wavelet based finite element illumination algorithms. It also has obvious applications for simulation derived BRDF functions such as those of Westin et al. [32] and Gondek et al. [16]. The domain of a complete BRDF is a hemisphere times a hemisphere. In our experiments we consider only a fixed incoming direction and expand the resulting function over all outgoing directions (a single hemisphere). To facilitate the computation of errors we used the BRDF model proposed by Schlick [25]. It is a simple Padé approximant to a microfacet model with geometric shadowing and a microfacet distribution function, but no Fresnel term. It has roughness (r ∈ [0, 1], where 0 is Dirac mirror reflection and 1 perfectly diffuse) and anisotropy (p ∈ [0, 1], where 0 is Dirac style anisotropy and 1 perfect isotropy) parameters. To improve the numerical properties of the BRDF we followed the suggestion of Westin et al. [32] and expanded cos θ_o · f_r(ω_i; ·).
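The role of the r and p parameters can be illustrated with a hedged sketch of two of Schlick's rational factors, as they are commonly quoted. This is not necessarily the exact variant used in the paper (the full model also contains the geometric shadowing term, which we omit), and all function names here are ours.

```python
# Hedged sketch of two factors commonly quoted from Schlick's rational
# BRDF approximation [25]; the exact variant used in the paper may
# differ in detail.  r in [0, 1]: 0 = mirror-like, 1 = perfectly
# diffuse.  p in [0, 1]: 0 = extreme anisotropy, 1 = perfect isotropy.

def zenith_factor(t, r):
    """Roughness-controlled microfacet factor Z(t); t = cos(theta_h)."""
    d = 1.0 + r * t * t - t * t
    return r / (d * d)

def azimuth_factor(w, p):
    """Anisotropy factor A(w); w = cos(phi) in the tangent frame."""
    return (p / (p * p - p * p * w * w + w * w)) ** 0.5

def expansion_target(cos_theta_o, brdf_value):
    """Following Westin et al. [32] (as noted above), the quantity
    expanded over the outgoing hemisphere is cos(theta_o) * f_r."""
    return cos_theta_o * brdf_value
```

Note that the parameter semantics match the text: setting r = 1 flattens the zenith factor to a constant (perfectly diffuse), and setting p = 1 removes all azimuthal dependence (perfect isotropy).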
Fig. 13. Graphs of adaptive oracle driven approximations of the Schlick BRDF with 19, 73, and 203 coefficients respectively (left to right), using the lifted Butterfly basis. The associated thresholds were 2^{-3}, 2^{-6}, and 2^{-9} respectively, resulting in relative l1 errors of 0.35, 0.065, and 0.015 respectively.
In the experiments we used all 8 bases, specialized to the hemisphere. The parameters were θ_i = π/3, r = 0.05, and p = 1. The results are summarized in Fig. 11 (rightmost graph). It shows the relative l1 error as a function of the number of coefficients used. This time we can clearly see how the various bases differentiate themselves in terms of their ability to represent the function within some error bound with a given budget of coefficients. We make several observations:
- all lifted bases perform better than their unlifted versions, confirming our assertion that lifted bases are more performant;
- increasing smoothness in the bases (Butterfly) is more important than increasing the number of vanishing moments (Quadratic);
- dual lifting to increase dual vanishing moments increases compression ability dramatically (Bio-Haar and lifted Bio-Haar);
- overall the face based schemes do not perform as well as the vertex based schemes.
Fig. 13 shows images of the graphs of some of the expansions. These used the lifted Butterfly basis with an adaptive refinement oracle which explored the expansion to level 9 (i.e., it examined 2^19 coefficients). The final number of coefficients and associated relative l1 errors were (left to right) 19 coefficients (l1 = 0.35), 73 coefficients (l1 = 0.065), and 203 coefficients (l1 = 0.015). The total runtime was 170 seconds on an SGI R4400 at 150 MHz.
Fig. 14. Results of an illumination simulation using the lifted Butterfly basis. A red, glossy, anisotropic sphere is illuminated by 2 area light sources. Left: solution with 2 000 coefficients (l1 = 0.017). Right: solution with 5 000 coefficients (l1 = 0.0035).
5.3 Illumination
To explore the potential of these bases for global illumination algorithms we performed a simple simulation computing the radiance over a glossy, anisotropic sphere due to two area light sources. We emphasize that this is not a solution involving any multiple reflections, but it serves as a simple example to show the potential of these bases for hierarchical illumination algorithms. It also serves as an example of applying a finite element approach to a curved object (a sphere) without polygonalizing it. Fig. 14 shows the results of this simulation. We used the lifted Butterfly basis and the BRDF model of Schlick with r = 0.05, p = 0.05, and an additive diffuse component of 0.005. Two area light sources illuminate the red sphere. Note the fine detail in the pinched off region in the center of the "hot" spot and also at the north pole where all "grooves" converge.
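The one-bounce nature of this simulation (no interreflections) can be sketched as follows. This is our own minimal construction, not the paper's renderer: each area light is approximated as a small disc so its solid angle is π·radius²/distance², and the radiance leaving a point is the sum over lights of f_r · L_e · cos θ_i · solid angle.

```python
# Minimal direct-illumination sketch (our construction): radiance
# leaving surface point x with normal n toward w_o, due to a set of
# area lights, with no interreflections.  `brdf` is any callable
# f_r(w_i, w_o, n); lights are (center, radius, emitted_radiance).
import math

def normalize(v):
    n = math.sqrt(sum(c * c for c in v))
    return tuple(c / n for c in v)

def dot(a, b):
    return sum(p * q for p, q in zip(a, b))

def direct_radiance(x, n, w_o, lights, brdf):
    total = 0.0
    for center, radius, L_e in lights:
        to_light = tuple(c - p for c, p in zip(center, x))
        dist2 = dot(to_light, to_light)
        w_i = normalize(to_light)
        cos_i = max(0.0, dot(n, w_i))          # clamp: below-horizon lights
        solid_angle = math.pi * radius * radius / dist2   # small-disc approx.
        total += brdf(w_i, w_o, n) * L_e * cos_i * solid_angle
    return total
```

In the paper's setting this integrand (as a function of outgoing direction over the sphere) is what gets expanded in the lifted Butterfly basis; here we only show the pointwise one-bounce evaluation.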
6 Conclusions and Future Directions
In this paper we have introduced two new families of wavelets on the sphere, one based on interpolating scaling functions and one on generalized Haar wavelets. They employ a generalization of multiresolution analysis to arbitrary surfaces and can be derived in a straightforward manner from the trivial multiresolution analysis with the lifting scheme. The resulting algorithms are simple and efficient. We reported on the application of these bases to the compression of earth data sets, BRDF functions, and illumination computations, and showed their potential for these applications. We found that:
- for smooth functions the lifted bases perform significantly better than the unlifted bases;
- increasing the dual vanishing moments leads to better compression;
- smoother bases, even with only one vanishing moment, tend to perform better for smooth functions;
- our constructions allow non-equally subdivided triangulations of the sphere.
We believe that many applications can benefit from these wavelet bases. For example, using their localization properties a number of spherical image processing algorithms, such as local smoothing and enhancement, can be realized in a straightforward and efficient way [27]. While we limited our examination to the sphere, the construction presented here can be applied to other surfaces. In the case of the sphere, enforcing vanishing polynomial moments was natural because of their connection with spherical harmonics. In the case of a general, potentially non-smooth (Lipschitz) surface, polynomial moments do not necessarily make much sense. Therefore, one might want to work with local maps from the surface to the tangent plane and enforce vanishing moment conditions in this plane. It is possible to generalize this construction to wavelets on arbitrary surfaces, see, e.g., [19]. An interesting question for future research is the construction of compactly supported surface wavelets with smooth C² scaling functions such as Loop's. These functions are not necessarily interpolating, so two lifting steps may not suffice. Applications of these bases lie in the solution of differential and integral equations on the sphere as needed in, e.g., illumination, climate modeling, or geodesy.

Acknowledgments. The first author was supported by DEPSCoR Grant (DoD-ONR) N00014-94-1-1163. The second author was supported by NSF EPSCoR Grant EHR 9108772 and DARPA Grant AFOSR F49620-93-1-0083. He is also Senior Research Assistant of the National Fund of Scientific Research Belgium (NFWO). Other support came from Pixar Inc. We would also like to thank Princeton University and the GMD, Germany, for generous access to computing resources. Help with geometric data structures was provided by David Dobkin. Finally, the comments of the referees were very helpful in revising the paper.
References
1. Alfeld, P., Neamtu, M., and Schumaker, L. L. Bernstein-Bézier polynomials on circles, spheres, and sphere-like surfaces. Preprint.
2. Carnicer, J. M., Dahmen, W., and Peña, J. M. Local decompositions of refinable spaces. Tech. rep., Institut für Geometrie und angewandte Mathematik, RWTH Aachen, 1994.
3. Certain, A., Popović, J., DeRose, T., Duchamp, T., Salesin, D., and Stuetzle, W. Interactive Multiresolution Surface Viewing. Computer Graphics (SIGGRAPH '96 Proceedings) (1996), 91-98.
4. Christensen, P. H., Stollnitz, E. J., Salesin, D. H., and DeRose, T. D. Wavelet Radiance. In Proceedings of the 5th Eurographics Workshop on Rendering, 287-302, June 1994.
5. Cohen, A., Daubechies, I., and Feauveau, J. Bi-orthogonal bases of compactly supported wavelets. Comm. Pure Appl. Math. 45 (1992), 485-560.
6. Cohen, A., Daubechies, I., Jawerth, B., and Vial, P. Multiresolution analysis, wavelets and fast algorithms on an interval. C. R. Acad. Sci. Paris Sér. I Math. 316 (1993), 417-421.
7. Dahlke, S., Dahmen, W., Schmitt, E., and Weinreich, I. Multiresolution analysis and wavelets on S² and S³. Tech. Rep. 104, Institut für Geometrie und angewandte Mathematik, RWTH Aachen, 1994.
8. Dahmen, W. Stability of multiscale transformations. Tech. rep., Institut für Geometrie und angewandte Mathematik, RWTH Aachen, 1994.
9. Dahmen, W., Prössdorf, S., and Schneider, R. Multiscale methods for pseudodifferential equations on smooth manifolds. In Conference on Wavelets: Theory, Algorithms, and Applications, C. K. Chui et al., Eds. Academic Press, San Diego, CA, 1994, pp. 385-424.
10. Daubechies, I. Ten Lectures on Wavelets. CBMS-NSF Regional Conf. Series in Appl. Math., Vol. 61. Society for Industrial and Applied Mathematics, Philadelphia, PA, 1992.
11. Dutton, G. Locational Properties of Quaternary Triangular Meshes. In Proceedings of the Fourth International Symposium on Spatial Data Handling, 901-910, July 1990.
12. Dyn, N., Levin, D., and Gregory, J. A Butterfly Subdivision Scheme for Surface Interpolation with Tension Control. ACM Transactions on Graphics 9, 2 (April 1990), 160-169.
13. Fekete, G. Rendering and Managing Spherical Data with Sphere Quadtrees. In Proceedings of Visualization 90, 1990.
14. Freeden, W., and Windheuser, U. Spherical Wavelet Transform and its Discretization. Tech. Rep. 125, Universität Kaiserslautern, Fachbereich Mathematik, 1994.
15. Girardi, M., and Sweldens, W. A new class of unbalanced Haar wavelets that form an unconditional basis for Lp on general measure spaces. Tech. Rep. 1995:2, Industrial Mathematics Initiative, Department of Mathematics, University of South Carolina, 1995. (ftp://ftp.math.scarolina.edu/pub/imi_95/imi95_2.ps).
16. Gondek, J. S., Meyer, G. W., and Newman, J. G. Wavelength Dependent Reflectance Functions. In Computer Graphics Proceedings, Annual Conference Series, 213-220, 1994.
17. Gortler, S., Schröder, P., Cohen, M., and Hanrahan, P. Wavelet Radiosity. In Computer Graphics Proceedings, Annual Conference Series, 221-230, August 1993.
18. Gortler, S. J., and Cohen, M. F. Hierarchical and Variational Geometric Modeling with Wavelets. In Proceedings Symposium on Interactive 3D Graphics, 35-42, April 1995.
19. Lee, A., Sweldens, W., Schröder, P., Cowsar, L., and Dobkin, D. MAPS: Multiresolution Adaptive Parameterization of Surfaces. Computer Graphics (SIGGRAPH '98 Proceedings), 95-104, 1998.
20. Liu, Z., Gortler, S. J., and Cohen, M. F. Hierarchical Spacetime Control. Computer Graphics Proceedings, Annual Conference Series, 35-42, July 1994.
21. Lounsbery, M. Multiresolution Analysis for Surfaces of Arbitrary Topological Type. PhD thesis, University of Washington, 1994.
22. Lounsbery, M., DeRose, T. D., and Warren, J. Multiresolution Surfaces of Arbitrary Topological Type. Tech. Rep. 93-1005, Department of Computer Science and Engineering, University of Washington, October 1993. Updated version available as 93-1005b, January 1994.
23. Mitrea, M. Singular integrals, Hardy spaces and Clifford wavelets. No. 1575 in Lecture Notes in Math., 1994.
24. Nielson, G. M. Scattered Data Modeling. IEEE Computer Graphics and Applications 13, 1 (January 1993), 60-70.
25. Schlick, C. A customizable reflectance model for everyday rendering. In Fourth Eurographics Workshop on Rendering, 73-83, June 1993.
26. Schröder, P., and Hanrahan, P. Wavelet Methods for Radiance Computations. In Proceedings 5th Eurographics Workshop on Rendering, June 1994.
27. Schröder, P., and Sweldens, W. Spherical wavelets: Texture processing. Tech. Rep. 1995:4, Industrial Mathematics Initiative, Department of Mathematics, University of South Carolina, 1995. (ftp://ftp.math.scarolina.edu/pub/imi_95/imi95_4.ps).
28. Sillion, F. X., Arvo, J. R., Westin, S. H., and Greenberg, D. P. A global illumination solution for general reflectance distributions. Computer Graphics (SIGGRAPH '91 Proceedings), Vol. 25, No. 4, pp. 187-196, July 1991.
29. Sweldens, W. The lifting scheme: A construction of second generation wavelets. Department of Mathematics, University of South Carolina.
30. Sweldens, W. The lifting scheme: A custom-design construction of biorthogonal wavelets. Tech. Rep. 1994:7, Industrial Mathematics Initiative, Department of Mathematics, University of South Carolina, 1994. (ftp://ftp.math.scarolina.edu/pub/imi_94/imi94_7.ps).
31. Westermann, R. A Multiresolution Framework for Volume Rendering. In Proceedings ACM Workshop on Volume Visualization, 51-58, October 1994.
32. Westin, S. H., Arvo, J. R., and Torrance, K. E. Predicting reflectance functions from complex surfaces. Computer Graphics (SIGGRAPH '92 Proceedings), Vol. 26, No. 2, pp. 255-264, July 1992.
Least-Squares Geopotential Approximation by Windowed Fourier Transform and Wavelet Transform

Willi Freeden and Volker Michel

University of Kaiserslautern, Laboratory of Technomathematics, Geomathematics Group, P.O. Box 3049, 67653 Kaiserslautern, Germany
[email protected]
[email protected]
http://www.mathematik.uni-kl.de/~wwwgeo
1 Introduction
The spectral representation of the potential U of a continuous mass distribution, such as the earth's external gravitational potential, by means of outer harmonics is essential to solving many problems in today's physical geodesy and geophysics. In future research, however, Fourier expansions

U = \sum_{n=0}^{\infty} \sum_{j=1}^{2n+1} \int_A U(y) H_{n,j}(a; y) \, d\omega(y) \, H_{n,j}(a; \cdot)

(dω denotes the surface element on the sphere A around the origin with radius a) in terms of outer harmonics H_{n,j}(a; ·), n = 0, 1, ...; j = 1, ..., 2n + 1,
will not be the most natural or useful way of representing (a harmonic function such as) the earth's gravitational potential. In order to explain this in more detail, we think of the earth's gravitational potential as a signal in which the spectrum evolves over space in a significant way. We imagine that at each point on the sphere A the potential refers to a certain combination of frequencies, and that, depending on the mass distribution inside the earth, the contributions to the frequencies, and therefore the frequencies themselves, are spatially changing. This space-evolution of the frequencies is not reflected in the Fourier transform in terms of non-space-localizing outer harmonics, at least not directly. In theory, a member U of the Sobolev space H_0 (of harmonic functions in the outer space A_ext of the sphere A with square-integrable restrictions on A) can be reconstructed from its Fourier transform, i.e. the 'amplitude spectrum'
with

(U, H_{n,j}(a; \cdot))_{H_0} = \int_A U(x) H_{n,j}(a; x) \, d\omega(x),
but the Fourier transform contains information about the frequencies of the potential over all positions instead of showing how the frequencies vary in space. This paper will present two methods of achieving a space-dependent frequency analysis in geopotential determination, which we refer to as the windowed Fourier transform and the wavelet transform. The essential tool is the concept of a harmonic scaling function {Φ_ρ^{(2)}}, ρ ∈ (0, ∞). Roughly speaking, a scaling function is a kernel Φ_ρ^{(2)} : Ā_ext × Ā_ext → ℝ, Ā_ext = A_ext ∪ A, of the form

\Phi_\rho^{(2)}(x, y) = \sum_{n=0}^{\infty} (\varphi_\rho(n))^2 \sum_{j=1}^{2n+1} H_{n,j}(a; x) H_{n,j}(a; y)

converging to the 'Dirac kernel' δ as ρ tends to 0. The Dirac kernel is given by

\delta(x, y) = \sum_{n=0}^{\infty} \sum_{j=1}^{2n+1} H_{n,j}(a; x) H_{n,j}(a; y).

Consequently, {φ_ρ(n)}_{n=0,1,...} is a (suitable) sequence satisfying

\lim_{\rho \to 0,\, \rho > 0} \varphi_\rho(n) = 1

for each n = 0, 1, .... According to this construction principle, Φ_ρ^{(2)} * U, ρ ∈ (0, ∞), constitutes an approximate convolution identity, i.e. the convolution integral

(\Phi_\rho^{(2)} * U)(x) = \int_A \Phi_\rho^{(2)}(x, y) U(y) \, d\omega(y)

formally converges to

U(x) = (\delta * U)(x) = (\delta(x, \cdot), U)_{H_0} = \int_A \delta(x, y) U(y) \, d\omega(y)

for all x ∈ Ā_ext as ρ tends to 0. Therefore, if U is a potential of class H_0, then

\lim_{\rho \to 0,\, \rho > 0} \| U - \Phi_\rho^{(2)} * U \|_{H_0} = 0. \qquad (1)
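In the outer-harmonic (frequency) domain the convolution above is simply a multiplier: the coefficient of degree n is damped by (φ_ρ(n))². The approximate-identity property (1) can be illustrated numerically with an assumed symbol φ_ρ(n) = exp(−ρn) (an Abel-Poisson-type choice of ours for illustration; the construction only requires φ_ρ(n) → 1 as ρ → 0).

```python
# Spectral-domain sketch of the approximate convolution identity (1):
# Phi_rho^(2) * U damps the degree-n coefficient by (phi_rho(n))**2.
# phi_rho(n) = exp(-rho * n) is our assumed symbol, not the paper's.
import math

def phi(rho, n):
    return math.exp(-rho * n)

def convolve_spectral(U_coeffs, rho):
    """U_coeffs: {(n, j): value}.  Coefficients of Phi_rho^(2) * U."""
    return {(n, j): (phi(rho, n) ** 2) * c for (n, j), c in U_coeffs.items()}

def h0_error(U_coeffs, rho):
    """|| U - Phi_rho^(2) * U || in the (sketched) coefficient norm."""
    V = convolve_spectral(U_coeffs, rho)
    return math.sqrt(sum((c - V[k]) ** 2 for k, c in U_coeffs.items()))
```

Shrinking ρ drives the multiplier (φ_ρ(n))² toward 1 degree by degree, so the error decreases monotonically toward 0, which is exactly the content of Eq. (1) in this discretized setting.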
The windowed Fourier transform and the wavelet transform are two-parameter representations of a one-parameter (spatial) potential in H_0. This indicates the existence of some redundancy in both transforms, which in turn makes it possible to establish the least-squares approximation properties promised in the title of the paper. The windowed Fourier transform (WFT) can be formulated as a technique known as 'short-space Fourier transform'. This transform works by first dividing a potential (signal) into short consecutive (space) segments and then computing the Fourier coefficients of each segment. The windowed Fourier transform is a space-frequency localization technique in that it determines the frequencies associated with small space portions of the potential. The windowed Fourier segments are constructed by shifting in space and modulating in frequency the 'window kernel' Φ_ρ given by
\Phi_\rho(x, y) = \sum_{n=0}^{\infty} \varphi_\rho(n) \sum_{j=1}^{2n+1} H_{n,j}(a; x) H_{n,j}(a; y), \qquad (x, y) \in \bar{A}_{ext} \times \bar{A}_{ext}.
Note that

\Phi_\rho^{(2)}(x, y) = (\Phi_\rho * \Phi_\rho)(x, y) = \int_A \Phi_\rho(x, z) \Phi_\rho(z, y) \, d\omega(z)

for all (x, y) ∈ Ā_ext × Ā_ext. Once again, the way of describing the windowed Fourier transform (i.e. the 'short-space Fourier transform') is as follows: Let U be a potential of class H_0. The Fourier transform FT is given by
(FT)(U)(n, j) = \int_A U(y) H_{n,j}(a; y) \, d\omega(y)

((n, j) ∈ N, N = {(k, l) | k = 0, 1, ...; l = 1, ..., 2k + 1}), i.e., FT maps H_0 into the space H_0(N) of all sequences {H(n, j)}_{(n,j)∈N} with Σ_{(n,j)∈N} (H(n, j))² < ∞.
Now Φ_ρ, with ρ arbitrary but fixed, is a space window ('cutoff kernel'). Chopping up the potential amounts to multiplying U by the kernel Φ_ρ, i.e. forming U(y)Φ_ρ(x, y) with x ∈ Ā_ext, y ∈ A, where the fixed value ρ measures the length of the window (i.e., the 'cutoff cap') on the sphere A. The Fourier coefficients of the product in terms of outer harmonics H_{n,j}(a; ·), n = 0, 1, ...; j = 1, ..., 2n + 1, are then given by

(FT)(\Phi_\rho(x, \cdot) U)(n, j) = \int_A U(y) \Phi_\rho(x, y) H_{n,j}(a; y) \, d\omega(y),

(n, j) ∈ N, x ∈ Ā_ext. In other words, we have defined the H_0-inner product of U with a discrete set of 'shifts' and 'modulations' of U. The windowed Fourier transform is the operator (WFT)_{Φ_ρ}, which is defined for potentials U ∈ H_0 by

(WFT)_{\Phi_\rho}(U)(n, j; x) = (\Phi_\rho^{(2)}(1))^{-1/2} \int_A U(y) \Phi_\rho(x, y) H_{n,j}(a; y) \, d\omega(y)
for in, j) E /¢" and x E Aext (note that ~(2)(1 ) is a normalization constant determined by the choice of ~@(p~'!~). If ~p is concentrated in space at a point ) k x E A, then (WFT)¢,(U)(n,j; x) gives intbrmation of U at position x E A and frequency (n,j) EAf. The potential U E 7/0 is completely characterized by the values of
{ (W FT) (U)(n, j; x) } (.a)es , ~EAext
$U$ can be recovered via the reconstruction formula
\[
U = (\Phi_p^{(2)}(1))^{-1/2} \sum_{n=0}^{\infty} \sum_{j=1}^{2n+1} \int_A (WFT)_{\Phi_p}(U)(n,j;x)\, \Phi_p(\cdot,x)\, d\omega(x)\; H_{n,j}(a;\cdot)
\]
in the $\|\cdot\|_{\mathcal{H}_0}$-sense. Obviously, $(WFT)_{\Phi_p}$ converts a potential $U$ of one spatial variable into a function of two variables $x \in \overline{A_{\mathrm{ext}}}$ and $(n,j) \in \mathcal{N}$ without changing its 'total energy', i.e.:
\[
\|U\|^2_{\mathcal{H}_0} = \sum_{n=0}^{\infty} \sum_{j=1}^{2n+1} \left\| (WFT)_{\Phi_p}(U)(n,j;\cdot) \right\|^2_{\mathcal{H}_0} .
\]
But, as we shall see in this paper, the space $\mathcal{W} = (WFT)_{\Phi_p}(\mathcal{H}_0)$ of all windowed Fourier transforms is a proper subspace of the space $\mathcal{H}_0(\mathcal{N} \times \overline{A_{\mathrm{ext}}})$ of all (two-parameter) functions $G : (n,j;x) \mapsto G(n,j;x)$, $(n,j) \in \mathcal{N}$, $x \in \overline{A_{\mathrm{ext}}}$, such that $G(n,j;\cdot) \in \mathcal{H}_0$ for all $(n,j) \in \mathcal{N}$ and $\|G\|_{\mathcal{H}_0(\mathcal{N}\times\overline{A_{\mathrm{ext}}})} < \infty$ (this simply means that $\mathcal{W}$ is a subspace of $\mathcal{H}_0(\mathcal{N} \times \overline{A_{\mathrm{ext}}})$ but not equal to the latter). Thus being a member of $\mathcal{H}_0(\mathcal{N} \times \overline{A_{\mathrm{ext}}})$ is a necessary but not sufficient condition for membership in $\mathcal{W}$ (note that the extra condition that is both necessary and sufficient is called the consistency condition). The essential meaning of $\mathcal{W} = (WFT)_{\Phi_p}(\mathcal{H}_0)$ in the framework of the space $\mathcal{H}_0(\mathcal{N} \times \overline{A_{\mathrm{ext}}})$ can be described by the following least-squares property: Suppose we want a potential with certain properties in both space and frequency. We look for a potential $U \in \mathcal{H}_0$ such that $(WFT)_{\Phi_p}(U)(n,j;x)$ is closest to $H(n,j;x)$, $(n,j) \in \mathcal{N}$, $x \in \overline{A_{\mathrm{ext}}}$, in the $\mathcal{H}_0(\mathcal{N} \times \overline{A_{\mathrm{ext}}})$-metric, where $H \in \mathcal{H}_0(\mathcal{N} \times \overline{A_{\mathrm{ext}}})$ is given. Then the solution to the least-squares problem is provided by the function
\[
U_H = (\Phi_p^{(2)}(1))^{-1/2} \sum_{n=0}^{\infty} \sum_{j=1}^{2n+1} \int_A H(n,j;x)\, \Phi_p(\cdot,x)\, d\omega(x)\; H_{n,j}(a;\cdot) \tag{2}
\]
which indeed is the unique potential in $\mathcal{H}_0$ that minimizes the $\mathcal{H}_0(\mathcal{N} \times \overline{A_{\mathrm{ext}}})$-error:
\[
\| H - (WFT)_{\Phi_p}(U_H) \|_{\mathcal{H}_0(\mathcal{N}\times\overline{A_{\mathrm{ext}}})} = \inf_{U \in \mathcal{H}_0} \| H - (WFT)_{\Phi_p}(U) \|_{\mathcal{H}_0(\mathcal{N}\times\overline{A_{\mathrm{ext}}})} . \tag{3}
\]
LS Geopotential Approximation
193
Moreover, if $H \in \mathcal{W}$, then Eq. (2) reduces to the reconstruction formula. In the context of oversampling a signal, Eq. (3) means that the tendency for minimizing errors is expressed in the least-squares property of the windowed Fourier transform. Although the oversampling of a potential (signal) might seem inefficient, such redundancy has certain advantages: it can detect and correct errors, which is impossible when only minimal information is given. Although the shape of the window may vary depending on (the space width) $p$, the uncertainty principle (see the considerations of M. Holschneider in this volume for the Euclidean case and [6,7,10] for the spherical and harmonic case) gives a restriction in space and frequency. This relation is optimal (cf. [7]), in the sense of localization in both domains, when $\Phi_p$ is a Gaussian, in which case the windowed Fourier transform is called the Gabor transform (cf. [14]). An essential problem of the windowed Fourier transform is that it poorly resolves phenomena of spatial extension shorter than the a priori chosen (fixed) window. Moreover, shortening the window to increase spatial resolution can result in unacceptable increases in computational effort. In practice, therefore, the use of the windowed Fourier transform is limited. This serious drawback, however, is avoided by the wavelet transform. The wavelet transform acts as a space and frequency localization operator in the following way. Roughly speaking, if $\{\Phi_p^{(2)}\}$, $p \in (0,\infty)$, is a harmonic scaling function and $p$ is a small positive value, then $\Phi_p^{(2)}(y,\cdot)$, $y \in A$, is highly concentrated about the point $y$. Moreover, as $p$ tends to infinity, $\Phi_p^{(2)}(y,\cdot)$ becomes more and more localized in frequency. Correspondingly, the uncertainty principle (cf. [10]) states that the space localization of $\Phi_p^{(2)}(y,\cdot)$ then becomes weaker and weaker.
In conclusion, the products $y \mapsto \Phi_p^{(2)}(x,y)U(y)$, $y \in A$, $x \in \overline{A_{\mathrm{ext}}}$, for each fixed value $p$, display information in $U \in \mathcal{H}_0$ at various levels of spatial resolution or frequency bands. Consequently, as $p$ approaches infinity, the convolution integrals
\[
\Phi_p^{(2)} * U = \Phi_p * \Phi_p * U = \int_A \Phi_p^{(2)}(\cdot,y)\, U(y)\, d\omega(y)
\]
display coarser, lower-frequency features. As $p$ approaches zero, the integrals give sharper and sharper spatial resolution. In other words, like a windowed Fourier transform, the convolution integrals can measure the space-frequency variations of spectral components, but they have a different space-frequency resolution. Each scale-space approximation $\Phi_p^{(2)} * U = \Phi_p * \Phi_p * U$ of a potential $U \in \mathcal{H}_0$ must be made directly by computing the relevant convolution integrals. In doing so, however, it is inefficient to make no use of the information contained in the approximation $\Phi_p^{(2)} * U$ within the computation of $\Phi_{p'}^{(2)} * U$ for $p' < p$. In fact, the efficient
construction of wavelets begins with a multiresolution analysis, i.e. a completely recursive method which is therefore ideal for computation. In this context we observe that
\[
\Phi_R * \Phi_R * U \longrightarrow U, \quad U \in \mathcal{H}_0, \quad R \to 0 ,
\]
i.e.
\[
\lim_{\substack{R\to 0 \\ R>0}} \left\| U - \int_A \Phi_R(\cdot,z) \int_A \Phi_R(z,y)\, U(y)\, d\omega(y)\, d\omega(z) \right\|_{\mathcal{H}_0} = 0 ,
\]
provided that
\[
\Psi_p^{(2)}(x,y) = \sum_{n=0}^{\infty} (\psi_p(n))^2 \sum_{j=1}^{2n+1} H_{n,j}(a;x)\, H_{n,j}(a;y), \quad (x,y) \in \overline{A_{\mathrm{ext}}} \times \overline{A_{\mathrm{ext}}} ,
\]
is given such that
\[
(\psi_p(n))^2 = -p\, \frac{d}{dp} (\varphi_p(n))^2 \tag{4}
\]
for $n = 0,1,\ldots$ and all $p \in (0,\infty)$. Conventionally, the family $\{\Psi_p\}$, $p \in (0,\infty)$, is called a (scale continuous) wavelet. The (scale continuous) wavelet transform $WT : \mathcal{H}_0 \to \mathcal{H}_0((0,\infty) \times \overline{A_{\mathrm{ext}}})$ is defined by
\[
(WT)(U)(p;x) = (\Psi_p * U)(x) = (\Psi_p(x,\cdot), U)_{\mathcal{H}_0} .
\]
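Relation (4) between the scaling-function symbol and the wavelet symbol can be checked numerically. The following sketch uses the exponential symbol $\varphi_p(n) = e^{-pn}$ as an assumed example (one admissible choice; the paper does not prescribe it), for which $(\psi_p(n))^2 = 2pn\,e^{-2pn}$ in closed form:

```python
import math

def phi(p, n):
    # assumed scaling-function symbol: phi_p(n) = exp(-p*n)
    return math.exp(-p * n)

def psi_sq(p, n, h=1e-6):
    # wavelet symbol via Eq. (4): (psi_p(n))^2 = -p * d/dp (phi_p(n))^2,
    # the derivative approximated by a central finite difference
    return -p * (phi(p + h, n)**2 - phi(p - h, n)**2) / (2 * h)

p, n = 0.3, 5
closed_form = 2 * p * n * math.exp(-2 * p * n)  # -p d/dp e^{-2pn} = 2pn e^{-2pn}
```

Observe that $\psi_p(0) = 0$, reflecting the band-pass character of the wavelet: the mean (degree $0$) part of a potential is invisible to $WT$.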
In other words, the wavelet transform is defined as the $\mathcal{H}_0$-inner product of $U \in \mathcal{H}_0$ with the set of 'shifts' and 'dilations' of the wavelet kernel. The (scale continuous) wavelet transform $WT$ is invertible on $\mathcal{H}_0$, i.e.
\[
U = \int_0^{\infty} \int_A (WT)(U)(p;y)\, \Psi_p(\cdot,y)\, d\omega(y)\, \frac{dp}{p}
\]
in the sense of $\|\cdot\|_{\mathcal{H}_0}$. From Parseval's identity in terms of outer harmonics it follows that
\[
\int_0^{\infty} \int_A \left( (\Psi_p(\cdot,y), U)_{\mathcal{H}_0} \right)^2 d\omega(y)\, \frac{dp}{p} = (U,U)_{\mathcal{H}_0} ,
\]
i.e., W T converts a potential U of one variable into a function of two variables x E Aext and p E (0, oo) without changing its total energy.
The scaling function and the wavelet give rise to a low-pass filter and a band-pass filter, respectively. Correspondingly, the convolution operators are given by
\[
P_p(U) = \Phi_p * \Phi_p * U, \quad U \in \mathcal{H}_0 ,
\]
\[
R_p(U) = \Psi_p * \Psi_p * U, \quad U \in \mathcal{H}_0 .
\]
The Fourier transforms read as follows:
\[
(FT)(P_p(U))(n,j) = (FT)(U)(n,j)\, (\varphi_p(n))^2, \quad (n,j) \in \mathcal{N} ,
\]
\[
(FT)(R_p(U))(n,j) = (FT)(U)(n,j)\, (\psi_p(n))^2, \quad (n,j) \in \mathcal{N} .
\]
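On the level of Fourier coefficients the two filters are thus simple multiplications of the amplitude spectrum by $(\varphi_p(n))^2$ and $(\psi_p(n))^2$. A minimal sketch, again with the assumed symbol $\varphi_p(n) = e^{-pn}$ (illustrative only, not the choice of the paper):

```python
import math

def lowpass(coeffs, p):
    # (FT)(P_p U)(n,j) = (FT)(U)(n,j) * (phi_p(n))^2
    return {(n, j): c * math.exp(-p * n)**2 for (n, j), c in coeffs.items()}

def bandpass(coeffs, p):
    # (FT)(R_p U)(n,j) = (FT)(U)(n,j) * (psi_p(n))^2,
    # with (psi_p(n))^2 = 2*p*n*exp(-2*p*n) from Eq. (4) for this symbol
    return {(n, j): c * 2 * p * n * math.exp(-2 * p * n)
            for (n, j), c in coeffs.items()}

U = {(0, 1): 1.0, (3, 2): 0.5, (10, 4): 0.2}   # toy amplitude spectrum
low, band = lowpass(U, 0.5), bandpass(U, 0.5)
```

The degree-0 coefficient passes the low-pass filter unchanged and is annihilated by the band-pass filter, while high degrees are strongly damped by the low-pass filter.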
These formulae provide the transition from the wavelet transform to the Fourier transform. The scale spaces $\mathcal{V}_p = P_p(\mathcal{H}_0)$ form a (continuous) multiresolution analysis:
\[
\mathcal{V}_p \subset \mathcal{V}_{p'} \subset \mathcal{H}_0, \quad 0 < p' \le p ,
\]
\[
\overline{ \{ U \in \mathcal{H}_0 \mid U \in \mathcal{V}_p \text{ for some } p \in (0,\infty) \} }^{\,\|\cdot\|_{\mathcal{H}_0}} = \mathcal{H}_0 .
\]
Just as the windowed Fourier transform uses modulation in the space domain to shift the 'window' in frequency, the wavelet transform makes use of scaling in the space domain to scale the 'window' in frequency. Since all scales $p$ are used, the reconstruction is highly redundant. Of course, the redundancy leads us to the following question, which is of particular importance in data analysis:
- Given an arbitrary $H \in \mathcal{H}_0((0,\infty) \times \overline{A_{\mathrm{ext}}})$, how can we know whether $H = (WT)(U)$ for some potential $U \in \mathcal{H}_0$?
In analogy to the case of the windowed Fourier transform, the answer to the question amounts to finding the range of the (scale continuous) wavelet transform $WT : \mathcal{H}_0 \to \mathcal{H}_0((0,\infty) \times \overline{A_{\mathrm{ext}}})$, i.e. the subspace
\[
\mathcal{W} = (WT)(\mathcal{H}_0) \subset \mathcal{H}_0((0,\infty) \times \overline{A_{\mathrm{ext}}}) .
\]
Actually it can be shown that the tendency for minimizing errors by use of the wavelet transform is again expressed in least-squares approximation: Let $H$ be an arbitrary element of $\mathcal{H}_0((0,\infty) \times \overline{A_{\mathrm{ext}}})$. Then the unique function $U_H \in \mathcal{H}_0$ with the property
\[
\| H - (WT)(U_H) \|_{\mathcal{H}_0((0,\infty)\times\overline{A_{\mathrm{ext}}})} = \inf_{U \in \mathcal{H}_0} \| H - (WT)(U) \|_{\mathcal{H}_0((0,\infty)\times\overline{A_{\mathrm{ext}}})}
\]
is given by
\[
U_H = \int_0^{\infty} \int_A H(p;y)\, \Psi_p(\cdot,y)\, d\omega(y)\, \frac{dp}{p} .
\]
$(WT)(U_H)$ is indeed the orthogonal projection of $H$ onto $\mathcal{W}$. Another important question in the context of the wavelet transform is:
- Given $H(p;x) = (WT)(U)(p;x)$, $p \in (0,\infty)$ and $x \in \overline{A_{\mathrm{ext}}}$, for some $U \in \mathcal{H}_0$, how can we reconstruct $U$?

The answer is provided by the so-called least-energy representation. It states: Of all possible functions $H \in \mathcal{H}_0((0,\infty) \times \overline{A_{\mathrm{ext}}})$ with $(WT)^{-1}(H) = U$ for given $U \in \mathcal{H}_0$, the function $H = (WT)(U)$ is unique in that it minimizes the 'energy' $\|H\|_{\mathcal{H}_0((0,\infty)\times\overline{A_{\mathrm{ext}}})}$. More explicitly,
\[
\|(WT)(U)\|_{\mathcal{H}_0((0,\infty)\times\overline{A_{\mathrm{ext}}})} = \inf_{\substack{H \in \mathcal{H}_0((0,\infty)\times\overline{A_{\mathrm{ext}}}) \\ (WT)^{-1}(H) = U}} \|H\|_{\mathcal{H}_0((0,\infty)\times\overline{A_{\mathrm{ext}}})} .
\]
The layout of the paper is the following: Chapter 2 presents the basic material about outer harmonics needed for our harmonic variants of both the windowed Fourier transform and the wavelet transform. Chapter 3 deals with the Sobolev space structure of $\mathcal{H}_0$, while Chapter 4 gives the definition of product kernels in $\mathcal{H}_0$. Central for our considerations is the introduction of harmonic scaling functions, which will be discussed in Chapter 5 in a mathematically rigorous way. The windowed Fourier transform and its least-squares approximation properties will be explained in Chapter 6. Chapter 7 is concerned with the scale continuous wavelet transform and its least-squares approximation property. Chapter 8 shows what happens in scale-discrete wavelet analysis. Chapter 9 lists a collection of wavelets which are of particular significance for the data analysis in geopotential modelling. Finally, a variant of the fully discrete harmonic wavelet transform is illustrated by use of approximate integration rules. The paper ends with some graphical illustrations. This paper is essentially based on constructive approximation methods which have been developed by the Geomathematics Group, Kaiserslautern, during the last five years. The point of departure for the spherical wavelet theory is the paper [13] (see also the contribution of M. Holschneider in this volume). The basic facts of spherical wavelet theory have been summarized in the monograph [7]. Applications of harmonic wavelets to classical boundary-value problems of potential theory corresponding to (geodetically relevant) regular surfaces can be found in [12]. Theoretical and numerical aspects of harmonic modelling by wavelets and anharmonic modelling by splines in the gravimetry problem of physical geodesy have been discussed in [19]. Numerical aspects (pyramid schemata, fast summation methods, data compression, etc.) in the data analysis of spaceborne geopotential data have been presented in the ECMI research notes [6].
2 Preliminaries

We begin with some basic facts to be needed for our introduction of harmonic windowed Fourier theory and wavelet theory.

2.1 Regular Surfaces

A surface $\Sigma \subset \mathbb{R}^3$ is called a regular surface if it has the following properties:
- $\Sigma$ divides the three-dimensional Euclidean space $\mathbb{R}^3$ into the bounded region $\Sigma_{\mathrm{int}}$ (inner space) and the unbounded region $\Sigma_{\mathrm{ext}}$ (outer space) defined by $\Sigma_{\mathrm{ext}} = \mathbb{R}^3 \setminus \overline{\Sigma_{\mathrm{int}}}$, $\overline{\Sigma_{\mathrm{int}}} = \Sigma_{\mathrm{int}} \cup \Sigma$.
- $\Sigma_{\mathrm{int}}$ contains the origin $0$.
- $\Sigma$ is a closed and compact surface free of double points.
- $\Sigma$ has a continuously differentiable normal field $\nu$ (pointing into the outer space $\Sigma_{\mathrm{ext}}$).

Examples are the sphere, the ellipsoid, the spheroid, and the smoothed earth's surface. Given a regular surface $\Sigma$, there exists a positive constant $\alpha$ such that
\[
\alpha < \sigma^{\mathrm{inf}} = \inf_{x\in\Sigma} |x| \le \sup_{x\in\Sigma} |x| = \sigma^{\mathrm{sup}} . \tag{5}
\]
$A$ (and $\Sigma^{\mathrm{inf}}$, respectively) denotes the sphere around the origin with radius $\alpha$ (and $\sigma^{\mathrm{inf}}$, respectively). $A_{\mathrm{int}}$ ($A_{\mathrm{ext}}$) denotes the inner (outer) space of the sphere $A$. Correspondingly, $\Sigma^{\mathrm{inf}}_{\mathrm{int}}$ ($\Sigma^{\mathrm{inf}}_{\mathrm{ext}}$) is the inner (outer) space of $\Sigma^{\mathrm{inf}}$. $\Sigma^{\mathrm{inf}}$ is known in geodesy as the Bjerhammar sphere. Obviously, $\Sigma_{\mathrm{ext}} \subset \Sigma^{\mathrm{inf}}_{\mathrm{ext}} \subset A_{\mathrm{ext}}$.
Fig. 1. Illustration of the regular surface $\Sigma$ and the sphere $A$
The set
\[
\Sigma(\tau) = \{ x \in \mathbb{R}^3 \mid x = y + \tau\nu(y),\; y \in \Sigma \}
\]
generates a parallel surface which is exterior to $\Sigma$ for $\tau > 0$ and interior for $\tau < 0$. For sufficiently small $|\tau|$ the regularity of $\Sigma$ implies the regularity of $\Sigma(\tau)$, and the normal to one parallel surface is a normal to the other (see, for example, [6,9]). Furthermore it is easily seen that
\[
\inf_{x,y\in\Sigma} \left| x + \tau\nu(x) - (y + \sigma\nu(y)) \right| = |\tau - \sigma|
\]
provided that $|\tau|$, $|\sigma|$ are sufficiently small. In what follows we use the following abbreviation:
\[
U^+(x) = \lim_{\substack{s\to 0 \\ s>0}} U(x + s\nu(x)), \quad x \in \Sigma, \quad U \in C(\overline{\Sigma_{\mathrm{ext}}}) .
\]
2.2 Outer Harmonics

Let $\{Y_{n,j}\}$, $n = 0,1,\ldots$, $j = 1,\ldots,2n+1$, be an orthonormal system of (surface) spherical harmonics on the unit sphere $\Omega$, i.e.,
\[
\int_{\Omega} Y_{n,j}(\xi)\, Y_{m,l}(\xi)\, d\omega(\xi) = \delta_{n,m}\, \delta_{j,l} .
\]
Then the system $\{H_{n,j}(a;\cdot)\}$, $n = 0,1,\ldots$, $j = 1,\ldots,2n+1$, of 'outer harmonics', defined by
\[
H_{n,j}(a;x) = \frac{1}{a} \left( \frac{a}{|x|} \right)^{n+1} Y_{n,j}\!\left( \frac{x}{|x|} \right), \quad x \in \overline{A_{\mathrm{ext}}} ,
\]
satisfies the following conditions:
- $H_{n,j}(a;\cdot)$ is of class $C^{(\infty)}(\overline{A_{\mathrm{ext}}})$,
- $H_{n,j}(a;\cdot)$ satisfies the Laplace equation $\Delta_x H_{n,j}(a;x) = 0$, $x \in A_{\mathrm{ext}}$,
- $H_{n,j}(a;\cdot)|_A = \frac{1}{a}\, Y_{n,j}\!\left(\frac{\cdot}{a}\right)$,
- $\int_A H_{n,j}(a;x)\, H_{k,l}(a;x)\, d\omega(x) = \delta_{n,k}\, \delta_{j,l}$.
The addition theorem of outer harmonics (cf. [7,21]) reads as follows:
\[
\sum_{j=1}^{2n+1} H_{n,j}(a;x)\, H_{n,j}(a;y) = \frac{2n+1}{4\pi a^2} \left( \frac{a^2}{|x|\,|y|} \right)^{n+1} P_n\!\left( \frac{x}{|x|} \cdot \frac{y}{|y|} \right)
\]
for all $(x,y) \in \overline{A_{\mathrm{ext}}} \times \overline{A_{\mathrm{ext}}}$, where $P_n$ is the Legendre polynomial of degree $n$. $\mathrm{Harm}_n(\overline{A_{\mathrm{ext}}})$ denotes the space of all outer harmonics of non-negative order $n$:
\[
\mathrm{Harm}_n(\overline{A_{\mathrm{ext}}}) = \mathrm{span}_{j=1,\ldots,2n+1}(H_{n,j}(a;\cdot)), \quad n = 0,1,\ldots
\]
It is well-known that
\[
\dim\left( \mathrm{Harm}_n(\overline{A_{\mathrm{ext}}}) \right) = 2n+1 .
\]
We set
\[
\mathrm{Harm}_{0,\ldots,m}(\overline{A_{\mathrm{ext}}}) = \mathrm{span}_{\substack{n=0,\ldots,m \\ j=1,\ldots,2n+1}}(H_{n,j}(a;\cdot)) .
\]
Of course,
\[
\mathrm{Harm}_{0,\ldots,m}(\overline{A_{\mathrm{ext}}}) = \bigoplus_{n=0}^{m} \mathrm{Harm}_n(\overline{A_{\mathrm{ext}}})
\]
so that
\[
\dim\left( \mathrm{Harm}_{0,\ldots,m}(\overline{A_{\mathrm{ext}}}) \right) = \sum_{n=0}^{m} (2n+1) = (m+1)^2 .
\]
For brevity, we let $\mathrm{Harm}_{0,\ldots,m}(\Sigma) = \mathrm{Harm}_{0,\ldots,m}(\overline{A_{\mathrm{ext}}})|_{\Sigma}$ whenever $\Sigma$ is a subset of $\overline{A_{\mathrm{ext}}}$. The kernel $K_{\mathrm{Harm}_{0,\ldots,m}(\overline{A_{\mathrm{ext}}})}(\cdot,\cdot) : \overline{A_{\mathrm{ext}}} \times \overline{A_{\mathrm{ext}}} \to \mathbb{R}$ defined by
\[
K_{\mathrm{Harm}_{0,\ldots,m}(\overline{A_{\mathrm{ext}}})}(x,y) = \sum_{n=0}^{m} \frac{2n+1}{4\pi a^2} \left( \frac{a^2}{|x|\,|y|} \right)^{n+1} P_n\!\left( \frac{x}{|x|} \cdot \frac{y}{|y|} \right)
\]
is the reproducing kernel of the space $\mathrm{Harm}_{0,\ldots,m}(\overline{A_{\mathrm{ext}}})$ with respect to $(\cdot,\cdot)_{\mathcal{L}^2(A)}$.
2.3 Fourier Approximation by Outer Harmonics

$\mathrm{Pot}(\Sigma_{\mathrm{ext}})$ denotes the space of all $U : \Sigma_{\mathrm{ext}} \to \mathbb{R}$ with the following properties:
- $U$ is a member of class $C^{(2)}(\Sigma_{\mathrm{ext}})$,
- $U$ is harmonic in $\Sigma_{\mathrm{ext}}$, i.e. $U$ satisfies the Laplace equation $\Delta U = 0$ in $\Sigma_{\mathrm{ext}}$,
- $U$ satisfies the regularity conditions at infinity
\[
|U(x)| = O(|x|^{-1}), \quad |(\nabla U)(x)| = O(|x|^{-2}), \quad |x| \to \infty .
\]
For $q = 0,1,\ldots$ we let
\[
\mathrm{Pot}^{(q)}(\overline{\Sigma_{\mathrm{ext}}}) = \mathrm{Pot}(\Sigma_{\mathrm{ext}}) \cap C^{(q)}(\overline{\Sigma_{\mathrm{ext}}}) .
\]
From the maximum/minimum principle of potential theory (see e.g. [17,20]) we know that
\[
\sup_{x\in\Sigma_{\mathrm{ext}}} |U(x)| \le \sup_{x\in\Sigma} |U^+(x)|, \quad U \in \mathrm{Pot}^{(0)}(\overline{\Sigma_{\mathrm{ext}}}) .
\]
Moreover, using arguments of the theory of integral equations (cf. [3]) we have
\[
\sup_{x\in K} |U(x)| \le D_{\mathcal{L}^2(\Sigma)} \left( \int_{\Sigma} |U^+(x)|^2\, d\omega(x) \right)^{1/2}, \quad U \in \mathrm{Pot}^{(0)}(\overline{\Sigma_{\mathrm{ext}}}), \tag{6}
\]
where $D_{\mathcal{L}^2(\Sigma)}$ is a positive constant (dependent on $\Sigma$) and $K$ is a subset of $\Sigma_{\mathrm{ext}}$ with $\mathrm{dist}(K,\Sigma) > 0$. In our nomenclature the classical formulation of the Dirichlet boundary-value problem of the Laplace equation, i.e. the representation of a harmonic function corresponding to continuous boundary values on $\Sigma$, reads as follows: Exterior Dirichlet Problem (EDP): Given $F \in C(\Sigma)$, find a member $U \in \mathrm{Pot}^{(0)}(\overline{\Sigma_{\mathrm{ext}}})$ such that $U^+ = F$. We recall the results of the solution theory of Dirichlet's problem (cf. [17]) that are of interest for our purposes below: EDP is well-posed in the sense that its solution exists, is unique, and depends continuously on the boundary values. From [3,4,9] we know that
\[
C(\Sigma) = \overline{\mathcal{P}^+}^{\,\|\cdot\|_{C(\Sigma)}}, \quad \mathcal{P}^+ = \mathrm{span}_{\substack{n=0,1,\ldots \\ j=1,\ldots,2n+1}}\left( H_{n,j}(a;\cdot)^+ \right) . \tag{7}
\]
Moreover,
\[
\mathcal{L}^2(\Sigma) = \overline{\mathcal{P}^+}^{\,\|\cdot\|_{\mathcal{L}^2(\Sigma)}} = \overline{ \mathrm{span}_{\substack{n=0,1,\ldots \\ j=1,\ldots,2n+1}}\left( H_{n,j}(a;\cdot)^+ \right) }^{\,\|\cdot\|_{\mathcal{L}^2(\Sigma)}} . \tag{8}
\]
Eq. (8) equivalently means that the space $\mathcal{L}^2(\Sigma)$ is the completion of the set
\[
\mathrm{span}_{\substack{n=0,1,\ldots \\ j=1,\ldots,2n+1}}\left( H_{n,j}(a;\cdot)^+ \right)
\]
of all finite linear combinations of functions $H_{n,j}(a;\cdot)^+$ with respect to the $\|\cdot\|_{\mathcal{L}^2(\Sigma)}$-topology.
2.4 Outer Harmonic Fourier Expansions on Regular Surfaces

In order to make use of the identity (8) in constructive approximation we have to orthonormalize the system $\{H_{n,j}(a;\cdot)\}_{n=0,1,\ldots;\, j=1,\ldots,2n+1}$ (for example, by virtue of the well-known Gram-Schmidt process) with respect to the $\|\cdot\|_{\mathcal{L}^2(\Sigma)}$-topology, obtaining a system $\{K^*_{n,j}\}_{(n,j)\in\mathcal{N}} \subset \mathrm{Pot}^{(0)}(\overline{\Sigma_{\mathrm{ext}}})$ with the following properties (cf. [4]):
(i) each $K^*_{n,j}$, $(n,j) \in \mathcal{N}$, is the unique solution of the boundary-value problem $K^*_{n,j} \in \mathrm{Pot}^{(0)}(\overline{\Sigma_{\mathrm{ext}}})$ corresponding to the boundary data $(K^*_{n,j})^+ = H^*_{n,j}$,
(ii) $\{H^*_{n,j}\}_{(n,j)\in\mathcal{N}}$ defined by $H^*_{n,j} = (K^*_{n,j})^+$, $(n,j) \in \mathcal{N}$, is a complete orthonormal system in the Hilbert space $(\mathcal{L}^2(\Sigma), (\cdot,\cdot)_{\mathcal{L}^2(\Sigma)})$ (obtainable by $\mathcal{L}^2(\Sigma)$-orthonormalization from $\{H_{n,j}(a;\cdot)^+\}_{(n,j)\in\mathcal{N}}$).

Let $U$ be the uniquely determined solution of the EDP:
\[
U \in \mathrm{Pot}^{(0)}(\overline{\Sigma_{\mathrm{ext}}}), \quad U^+ = F .
\]
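The orthonormalization step above can be sketched numerically once the surface integral is replaced by a quadrature rule, i.e. once functions are represented by samples on $\Sigma$ with quadrature weights. In the following Gram-Schmidt sketch the sample values and weights are hypothetical stand-ins for restrictions $H_{n,j}(a;\cdot)^+$ and an integration rule on $\Sigma$:

```python
import math

def gram_schmidt(samples, w):
    # orthonormalize a list of sampled functions with respect to the
    # discrete inner product <f,g> = sum_i w[i]*f[i]*g[i], a quadrature
    # stand-in for (f,g)_{L2(Sigma)}
    dot = lambda f, g: sum(wi * fi * gi for wi, fi, gi in zip(w, f, g))
    ortho = []
    for f in samples:
        v = list(f)
        for q in ortho:                 # remove components along
            c = dot(v, q)               # previously built functions
            v = [vi - c * qi for vi, qi in zip(v, q)]
        nrm = math.sqrt(dot(v, v))
        ortho.append([vi / nrm for vi in v])
    return ortho

w = [0.25] * 8                          # hypothetical quadrature weights
fs = [[1.0] * 8,                        # three linearly independent
      [float(i) for i in range(8)],     # sampled functions
      [float(i * i) for i in range(8)]]
qs = gram_schmidt(fs, w)
```

The returned samples are orthonormal with respect to the discrete inner product; in the setting of the paper each orthonormalized boundary function would then be continued harmonically into $\Sigma_{\mathrm{ext}}$.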
Then, in accordance with our construction (cf. [3,4]), the $\mathcal{L}^2(\Sigma)$-convergence of the orthogonal expansion
\[
\sum_{n=0}^{N} \sum_{j=1}^{2n+1} (F, H^*_{n,j})_{\mathcal{L}^2(\Sigma)}\, H^*_{n,j} \tag{9}
\]
to the function $F$ (in the $\|\cdot\|_{\mathcal{L}^2(\Sigma)}$-sense) implies ordinary pointwise convergence of the sum
\[
\sum_{n=0}^{N} \sum_{j=1}^{2n+1} (F, H^*_{n,j})_{\mathcal{L}^2(\Sigma)}\, K^*_{n,j} \tag{10}
\]
to the potential $U$ as $N \to \infty$ for every point $x \in K$ with $K \subset \Sigma_{\mathrm{ext}}$ and $\mathrm{dist}(K,\Sigma) > 0$. For every compact subset $K \subset \Sigma_{\mathrm{ext}}$, the convergence is uniform. More explicitly,
\[
\lim_{N\to\infty} \left\| F - \sum_{n=0}^{N} \sum_{j=1}^{2n+1} (F, H^*_{n,j})_{\mathcal{L}^2(\Sigma)}\, H^*_{n,j} \right\|_{\mathcal{L}^2(\Sigma)} = 0 \tag{11}
\]
implies
\[
\lim_{N\to\infty} \sup_{x\in K} \left| U(x) - \sum_{n=0}^{N} \sum_{j=1}^{2n+1} (F, H^*_{n,j})_{\mathcal{L}^2(\Sigma)}\, K^*_{n,j}(x) \right| = 0 \tag{12}
\]
provided that $K \subset \Sigma_{\mathrm{ext}}$ is a set with $\mathrm{dist}(K,\Sigma) > 0$. Truncated orthogonal (Fourier) expansions, as indicated by (11), (12) for approximation, have the following least-squares property:
\[
\left\| U^+ - \sum_{n=0}^{N} \sum_{j=1}^{2n+1} (U^+, H^*_{n,j})_{\mathcal{L}^2(\Sigma)}\, H^*_{n,j} \right\|_{\mathcal{L}^2(\Sigma)} = \inf_{V \in \mathrm{span}_{\substack{n=0,\ldots,N \\ j=1,\ldots,2n+1}}(K^*_{n,j})} \| U^+ - V^+ \|_{\mathcal{L}^2(\Sigma)} ,
\]
i.e. the problem of finding a linear combination in $\mathrm{span}_{n=0,\ldots,N;\, j=1,\ldots,2n+1}(K^*_{n,j})$ whose boundary values are closest to $U^+$ in the $\mathcal{L}^2(\Sigma)$-norm is solved by the orthogonal projection of $U^+$ onto the span of the $H^*_{n,j}$. More explicitly, we have
\[
\left\| U^+ - \sum_{n=0}^{N} \sum_{j=1}^{2n+1} (U^+, H^*_{n,j})_{\mathcal{L}^2(\Sigma)}\, H^*_{n,j} \right\|^2_{\mathcal{L}^2(\Sigma)} = \| U^+ \|^2_{\mathcal{L}^2(\Sigma)} - \sum_{n=0}^{N} \sum_{j=1}^{2n+1} \left( (U^+, H^*_{n,j})_{\mathcal{L}^2(\Sigma)} \right)^2 .
\]
In particular, for $U \in \mathrm{Pot}^{(0)}(\overline{A_{\mathrm{ext}}})$ with $U^+ = F$,
\[
\lim_{N\to\infty} \left\| F - \sum_{n=0}^{N} \sum_{j=1}^{2n+1} (F, H_{n,j}(a;\cdot))_{\mathcal{L}^2(A)}\, H_{n,j}(a;\cdot) \right\|_{\mathcal{L}^2(A)} = 0 \tag{13}
\]
implies
\[
\lim_{N\to\infty} \sup_{x\in\overline{A_{\mathrm{ext}}}} \left| U(x) - \sum_{n=0}^{N} \sum_{j=1}^{2n+1} (F, H_{n,j}(a;\cdot))_{\mathcal{L}^2(A)}\, H_{n,j}(a;x) \right| = 0 . \tag{14}
\]
Eq. (13) indicates the conventional approach to modelling the earth's gravitational potential in (today's spherical) geodesy. Particularly important spherical outer harmonic models are GRIM4 (cf. [23]), OSU91A (cf. [22]), and EGM96 (cf. [18]). Non-spherical outer harmonic models of type (11), (12) corresponding to the actual earth's surface $\Sigma$ (which today may be supposed to be known from GPS measurements) are a challenge for future (low-wavelength) approximation.
3 Sobolev Spaces

Let $\{A_n\}_{n\in\mathbb{N}_0}$ be a real sequence. The sequence $\{A_n\}_{n\in\mathbb{N}_0}$ is called $\{B_n\}$-summable if $A_n \ne 0$ for all $n$ and the sum
\[
\Sigma(\{B_n\},\{A_n\}) = \sum_{n=0}^{\infty} \frac{2n+1}{4\pi a^2}\, \frac{|B_n|^2}{|A_n|^2}
\]
is finite. A $\{1\}$-summable sequence is simply called summable, i.e.
\[
\Sigma(\{A_n\}) = \Sigma(\{1\},\{A_n\}) = \sum_{n=0}^{\infty} \frac{2n+1}{4\pi a^2}\, \frac{1}{|A_n|^2}
\]
is finite.
3.1 Definition

For a given real sequence $\{A_n\}$ with $A_n \ne 0$ for all $n$, consider the linear space $\mathcal{E} = \mathcal{E}(\{A_n\}; A_{\mathrm{ext}}) \subset \mathrm{Pot}^{(\infty)}(A_{\mathrm{ext}})$ of all potentials $U$ of the form
\[
U = \sum_{n=0}^{\infty} \sum_{j=1}^{2n+1} (U, H_{n,j}(a;\cdot))_{\mathcal{L}^2(A)}\, H_{n,j}(a;\cdot) \tag{15}
\]
with
\[
(FT)(U)(n,j) = (U, H_{n,j}(a;\cdot))_{\mathcal{L}^2(A)} = \int_A U(y)\, H_{n,j}(a;y)\, d\omega(y)
\]
satisfying
\[
\sum_{n=0}^{\infty} \sum_{j=1}^{2n+1} |A_n|^2 \left( (FT)(U)(n,j) \right)^2 < \infty .
\]
From the Cauchy-Schwarz inequality it follows that
\[
\sum_{n=0}^{\infty} \sum_{j=1}^{2n+1} |A_n|^2\, (U, H_{n,j}(a;\cdot))_{\mathcal{L}^2(A)}\, (V, H_{n,j}(a;\cdot))_{\mathcal{L}^2(A)} \tag{16}
\]
\[
\le \left( \sum_{n=0}^{\infty} \sum_{j=1}^{2n+1} |A_n|^2\, (U, H_{n,j}(a;\cdot))^2_{\mathcal{L}^2(A)} \right)^{1/2} \left( \sum_{n=0}^{\infty} \sum_{j=1}^{2n+1} |A_n|^2\, (V, H_{n,j}(a;\cdot))^2_{\mathcal{L}^2(A)} \right)^{1/2}
\]
for all $U, V \in \mathcal{E}$. In other words, the left hand side of (16) is finite whenever each member of the right hand side is finite. Therefore we are able to impose on $\mathcal{E}$ an inner product $(\cdot,\cdot)_{\mathcal{H}(\{A_n\};\overline{A_{\mathrm{ext}}})}$ by letting
\[
(U,V)_{\mathcal{H}(\{A_n\};\overline{A_{\mathrm{ext}}})} = \sum_{n=0}^{\infty} \sum_{j=1}^{2n+1} |A_n|^2\, (U, H_{n,j}(a;\cdot))_{\mathcal{L}^2(A)}\, (V, H_{n,j}(a;\cdot))_{\mathcal{L}^2(A)} .
\]
The associated norm is given by $\|\cdot\|_{\mathcal{H}(\{A_n\};\overline{A_{\mathrm{ext}}})} = \sqrt{(\cdot,\cdot)_{\mathcal{H}(\{A_n\};\overline{A_{\mathrm{ext}}})}}$.

Definition 1. Let $\{A_n\}$ be a real sequence such that $A_n \ne 0$ for all $n$. Then the Sobolev space $\mathcal{H}$ (more accurately: $\mathcal{H}(\{A_n\}; \overline{A_{\mathrm{ext}}})$) is the completion of $\mathcal{E}$ under the norm $\|\cdot\|_{\mathcal{H}(\{A_n\};\overline{A_{\mathrm{ext}}})}$:
\[
\mathcal{H}(\{A_n\}; \overline{A_{\mathrm{ext}}}) = \overline{ \mathcal{E}(\{A_n\}; A_{\mathrm{ext}}) }^{\,\|\cdot\|_{\mathcal{H}(\{A_n\};\overline{A_{\mathrm{ext}}})}} .
\]
$\mathcal{H}(\{A_n\}; \overline{A_{\mathrm{ext}}})$ equipped with the inner product $(\cdot,\cdot)_{\mathcal{H}(\{A_n\};\overline{A_{\mathrm{ext}}})}$ is a Hilbert space. From the Cauchy-Schwarz inequality it follows that $(U,V)_{\mathcal{H}(\{1\};\overline{A_{\mathrm{ext}}})}$ exists if $U \in \mathcal{H}(\{A_n\}; \overline{A_{\mathrm{ext}}})$ and $V \in \mathcal{H}(\{A_n^{-1}\}; \overline{A_{\mathrm{ext}}})$. Moreover,
\[
\left| (U,V)_{\mathcal{H}(\{1\};\overline{A_{\mathrm{ext}}})} \right| \le \|U\|_{\mathcal{H}(\{A_n\};\overline{A_{\mathrm{ext}}})}\, \|V\|_{\mathcal{H}(\{A_n^{-1}\};\overline{A_{\mathrm{ext}}})} .
\]
Hence, $(\cdot,\cdot)_{\mathcal{H}(\{1\};\overline{A_{\mathrm{ext}}})}$ defines a duality of $\mathcal{H}(\{A_n\}; \overline{A_{\mathrm{ext}}})$ and $\mathcal{H}(\{A_n^{-1}\}; \overline{A_{\mathrm{ext}}})$. For brevity, we let $\mathcal{H}_s(\overline{A_{\mathrm{ext}}}) = \mathcal{H}(\{(n+1/2)^s\}; \overline{A_{\mathrm{ext}}})$ for each real value of $s$. In particular, $\mathcal{H}_0(\overline{A_{\mathrm{ext}}}) = \mathcal{H}(\{1\}; \overline{A_{\mathrm{ext}}})$. In what follows we simply write $\mathcal{H}_0$ (instead of $\mathcal{H}_0(\overline{A_{\mathrm{ext}}})$) when confusion is not likely to arise.

3.2 Sobolev Lemma
If we associate to $U$ the series (15), it is of fundamental importance to know when the series (15) converges uniformly on $\overline{A_{\mathrm{ext}}}$. The answer is provided by the following lemma.

Lemma 1. (Sobolev) Let the sequence $\{A_n\}$ be $\{B_n\}$-summable (with $B_n \ne 0$ for all $n$). Then each $U \in \mathcal{H}(\{B_n^{-1}A_n\}; \overline{A_{\mathrm{ext}}})$ corresponds to a potential of class $\mathrm{Pot}^{(0)}(\overline{A_{\mathrm{ext}}})$.

Proof. For each sufficiently large $N$ and $x \in \overline{A_{\mathrm{ext}}}$, the Cauchy-Schwarz inequality together with the addition theorem yields
\[
\left( \sum_{n=0}^{N} \sum_{j=1}^{2n+1} \left| B_n A_n^{-1} H_{n,j}(a;x) \right| \left| A_n B_n^{-1} (U, H_{n,j}(a;\cdot))_{\mathcal{L}^2(A)} \right| \right)^2 \le \Sigma(\{B_n\},\{A_n\})\, \|U\|^2_{\mathcal{H}(\{B_n^{-1}A_n\};\overline{A_{\mathrm{ext}}})} ,
\]
so that the series (15) converges absolutely and uniformly on $\overline{A_{\mathrm{ext}}}$. This proves Lemma 1. □
By similar arguments we obtain the following results (cf. [5,6]):

Lemma 2. If $U \in \mathcal{H}_s(\overline{A_{\mathrm{ext}}})$, $s > k+1$, then $U$ corresponds to a potential of class $\mathrm{Pot}^{(k)}(\overline{A_{\mathrm{ext}}})$.

Furthermore, we have (cf. [6,11])

Lemma 3. Suppose that $U$ is of class $\mathcal{H}_s(\overline{A_{\mathrm{ext}}})$, $s > [l] + 1$. Then
\[
\sup_{x\in\overline{A_{\mathrm{ext}}}} \left| (\nabla^l U)(x) - \sum_{n=0}^{N} \sum_{j=1}^{2n+1} (U, H_{n,j}(a;\cdot))_{\mathcal{L}^2(A)}\, (\nabla^l H_{n,j})(a;x) \right| \le C\, N^{[l]-s}\, \|U\|_{\mathcal{H}_s(\overline{A_{\mathrm{ext}}})}
\]
holds for all positive integers $N$ (with $\nabla^l = \partial^{[l]}/(\partial x_1)^{l_1}(\partial x_2)^{l_2}(\partial x_3)^{l_3}$, $l_i$ non-negative integers, $l_1 + l_2 + l_3 = [l]$), where $C$ is a positive constant independent of $U$.
3.3 Outer Harmonic Fourier Expansions in $\mathcal{H}_0$

The outer harmonic Fourier transform $FT : U \mapsto (FT)(U)$, $U \in \mathcal{H}_0$, is defined by
\[
(FT)(U)(n,j) = (U, H_{n,j}(a;\cdot))_{\mathcal{H}_0} = \int_A U(x)\, H_{n,j}(a;x)\, d\omega(x) .
\]
As mentioned in our introduction, the Fourier transform $FT$ forms a mapping from $\mathcal{H}_0$ into the space $\mathcal{H}_0(\mathcal{N})$ of sequences $\{V(n,j)\}_{(n,j)\in\mathcal{N}}$ with $V(n,j) = (V, H_{n,j}(a;\cdot))_{\mathcal{H}_0}$, $V \in \mathcal{H}_0$, satisfying
\[
\|V\|^2_{\mathcal{H}_0(\mathcal{N})} = \sum_{(n,j)\in\mathcal{N}} |V(n,j)|^2 = \sum_{n=0}^{\infty} \sum_{j=1}^{2n+1} |V(n,j)|^2 < \infty .
\]
Any potential $U \in \mathcal{H}_0$ is characterized by its 'amplitude spectrum'
\[
\{ (U, H_{n,j}(a;\cdot))_{\mathcal{H}_0} \}_{(n,j)\in\mathcal{N}} .
\]
More explicitly, for $U, V \in \mathcal{H}_0$,
\[
\lim_{N\to\infty} \left\| U - \sum_{n=0}^{N} \sum_{j=1}^{2n+1} (V, H_{n,j}(a;\cdot))_{\mathcal{H}_0}\, H_{n,j}(a;\cdot) \right\|_{\mathcal{H}_0} = 0
\]
implies $U = V$ (in the sense of $\|\cdot\|_{\mathcal{H}_0}$). In addition, for $U \in \mathcal{H}_0$,
\[
\lim_{N\to\infty} \left\| U - \sum_{n=0}^{N} \sum_{j=1}^{2n+1} (U, H_{n,j}(a;\cdot))_{\mathcal{H}_0}\, H_{n,j}(a;\cdot) \right\|_{\mathcal{H}_0} = 0
\]
implies
\[
\lim_{N\to\infty} \sup_{x\in K} \left| U(x) - \sum_{n=0}^{N} \sum_{j=1}^{2n+1} (U, H_{n,j}(a;\cdot))_{\mathcal{H}_0}\, H_{n,j}(a;x) \right| = 0
\]
for all $K \subset A_{\mathrm{ext}}$ with $\mathrm{dist}(K,A) > 0$. In particular, we have
\[
\lim_{N\to\infty} \sup_{x\in\overline{\Sigma_{\mathrm{ext}}}} \left| U(x) - \sum_{n=0}^{N} \sum_{j=1}^{2n+1} (U, H_{n,j}(a;\cdot))_{\mathcal{H}_0}\, H_{n,j}(a;x) \right| = 0
\]
for all (geodetically relevant) regular surfaces satisfying (5).
4 Product Kernels

Of particular importance for all our considerations on the windowed Fourier transform and the wavelet transform are product kernels of the form
\[
K(x,y) = \sum_{n=0}^{\infty} \kappa(n) \sum_{j=1}^{2n+1} H_{n,j}(a;x)\, H_{n,j}(a;y), \quad (x,y) \in \overline{A_{\mathrm{ext}}} \times \overline{A_{\mathrm{ext}}} , \tag{17}
\]
where $\kappa(n)$ are real numbers for $n = 0,1,\ldots$. Notice that
\[
K(x,y) = K(y,x), \quad (x,y) \in \overline{A_{\mathrm{ext}}} \times \overline{A_{\mathrm{ext}}} .
\]
By virtue of the addition theorem (see e.g. [7,21]) the product kernel $K$ may be rewritten as follows:
\[
K(x,y) = \sum_{n=0}^{\infty} \kappa(n)\, \frac{2n+1}{4\pi a^2} \left( \frac{a^2}{|x|\,|y|} \right)^{n+1} P_n\!\left( \frac{x}{|x|} \cdot \frac{y}{|y|} \right), \quad (x,y) \in \overline{A_{\mathrm{ext}}} \times \overline{A_{\mathrm{ext}}} .
\]
The sequence $\{K^{\wedge}(n)\}_{n=0,1,\ldots}$ with
\[
K^{\wedge}(n) = \kappa(n), \quad n = 0,1,\ldots
\]
is called the symbol of the product kernel $K$.

4.1 $\mathcal{H}_0$-kernel Functions
A product kernel $K$ of the form (17) is called an $\mathcal{H}_0$-kernel if $\{(K^{\wedge}(n))^{-1}\}$ is summable, i.e.
\[
\sum_{n=0}^{\infty} \frac{2n+1}{4\pi a^2} \left( K^{\wedge}(n) \right)^2 < \infty .
\]
Let $K$ be an $\mathcal{H}_0$-kernel. Suppose that $U$ is of class $\mathcal{H}_0$. Then we understand the convolution of $U$ and $K$ to be the function of class $\mathcal{H}_0$ given by
\[
(K * U)(x) = (K(x,\cdot), U)_{\mathcal{H}_0} = \int_A K(x,y)\, U(y)\, d\omega(y), \quad x \in \overline{A_{\mathrm{ext}}} .
\]
Obviously, $K * U$ is a member of class $\mathcal{H}_0$. Furthermore, we have
\[
(FT)(K * U)(n,j) = (FT)(U)(n,j)\, K^{\wedge}(n), \quad (n,j) \in \mathcal{N} .
\]
If $L$ is another $\mathcal{H}_0$-kernel, then $L * K$ is defined by
\[
(L * K)(x,y) = \int_A L(x,z)\, K(z,y)\, d\omega(z), \quad (x,y) \in \overline{A_{\mathrm{ext}}} \times \overline{A_{\mathrm{ext}}} .
\]
It is readily seen that
\[
(L * K)^{\wedge}(n) = L^{\wedge}(n)\, K^{\wedge}(n), \quad n = 0,1,\ldots .
\]
We usually write $K^{(2)} = K * K$ to indicate the convolution of a kernel with itself. $K^{(2)}$ is said to be the iterated kernel of $K$. Obviously,
\[
(K^{(2)})^{\wedge}(n) = (K^{\wedge}(n))^2, \quad n = 0,1,\ldots .
\]
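Since a product kernel is determined by its symbol, kernel values can be computed directly from the Legendre form given by the addition theorem, and convolutions reduce to multiplication of symbols. A sketch (the truncated symbol lists and the unit radius $a = 1$ are our own illustrative choices):

```python
import math

def legendre(n, t):
    # Legendre polynomial P_n(t) via the three-term recurrence
    p0, p1 = 1.0, t
    if n == 0:
        return p0
    for k in range(2, n + 1):
        p0, p1 = p1, ((2 * k - 1) * t * p1 - (k - 1) * p0) / k
    return p1

def kernel(sym, x, y, a=1.0):
    # K(x,y) = sum_n K^(n) (2n+1)/(4 pi a^2) (a^2/(|x||y|))^(n+1) P_n(xi . eta)
    # for a (truncated) symbol list sym = [K^(0), K^(1), ...]
    rx = math.sqrt(sum(c * c for c in x))
    ry = math.sqrt(sum(c * c for c in y))
    t = sum(cx * cy for cx, cy in zip(x, y)) / (rx * ry)
    return sum(s * (2 * n + 1) / (4 * math.pi * a * a)
               * (a * a / (rx * ry))**(n + 1) * legendre(n, t)
               for n, s in enumerate(sym))

# convolution on the level of symbols: (L*K)^(n) = L^(n) * K^(n)
L_sym, K_sym = [1.0, 0.5, 0.25], [1.0, 0.8, 0.4]
LK_sym = [l * k for l, k in zip(L_sym, K_sym)]
```

The pointwise product of the symbol lists represents the convolved kernel $L * K$ exactly, without evaluating any surface integral.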
4.2 $\mathcal{H}_0$-kernel Fourier Expansions on Regular Surfaces
Let $K$ be an $\mathcal{H}_0$-kernel. Then the following results are known (cf. [3,5,6,9]): (EDP) Let $\{x_k\}_{k=1,2,\ldots}$ be a countable dense system of points $x_k$ on $\Sigma$. Then
\[
C(\Sigma) = \overline{\mathcal{D}^+}^{\,\|\cdot\|_{C(\Sigma)}}, \quad \mathcal{D}^+ = \mathrm{span}_{k=1,2,\ldots}\left( K(\cdot,x_k)^+ \right),
\]
and
\[
\mathcal{L}^2(\Sigma) = \overline{\mathcal{D}^+}^{\,\|\cdot\|_{\mathcal{L}^2(\Sigma)}} = \overline{ \mathrm{span}_{k=1,2,\ldots}\left( K(\cdot,x_k)^+ \right) }^{\,\|\cdot\|_{\mathcal{L}^2(\Sigma)}} .
\]
As in the case of outer harmonics, for purposes of constructive approximation, we again have to orthonormalize the system $\{K(\cdot,x_k)\}_{k=1,2,\ldots}$ with respect to the $\|\cdot\|_{\mathcal{L}^2(\Sigma)}$-topology, obtaining a system $\{K^*_k\}_{k=1,2,\ldots} \subset \mathrm{Pot}^{(0)}(\overline{\Sigma_{\mathrm{ext}}})$ with the following properties:
(i) each $K^*_k$, $k \in \mathbb{N}$, is the unique solution of the boundary-value problem $K^*_k \in \mathrm{Pot}^{(0)}(\overline{\Sigma_{\mathrm{ext}}})$ corresponding to the boundary data $(K^*_k)^+ = L^*_k$,
(ii) $\{L^*_k\}_{k=1,2,\ldots}$ defined by $L^*_k = (K^*_k)^+$, $k \in \mathbb{N}$, is a complete orthonormal system in $(\mathcal{L}^2(\Sigma), (\cdot,\cdot)_{\mathcal{L}^2(\Sigma)})$.

Given $U \in \mathrm{Pot}^{(0)}(\overline{\Sigma_{\mathrm{ext}}})$ with $U^+ = F$, then
\[
\lim_{N\to\infty} \left\| F - \sum_{k=1}^{N} (F, L^*_k)_{\mathcal{L}^2(\Sigma)}\, L^*_k \right\|_{\mathcal{L}^2(\Sigma)} = 0 .
\]
Consequently, in connection with (6), we find
\[
\lim_{N\to\infty} \sup_{x\in K} \left| U(x) - \sum_{k=1}^{N} (F, L^*_k)_{\mathcal{L}^2(\Sigma)}\, K^*_k(x) \right| = 0
\]
for all subsets $K \subset \Sigma_{\mathrm{ext}}$ with $\mathrm{dist}(K,\Sigma) > 0$.
5 Scaling Function

The wavelet approach presented now is an extension of ideas developed in spherical theory (cf. [7,12,13]). Starting point is a 'continuous version $\varphi$' of a symbol $\{\Phi^{\wedge}(n)\}_{n=0,1,\ldots}$ associated to an $\mathcal{H}_0$-kernel
\[
\Phi(x,y) = \sum_{n=0}^{\infty} \varphi(n) \sum_{j=1}^{2n+1} H_{n,j}(a;x)\, H_{n,j}(a;y), \quad (x,y) \in \overline{A_{\mathrm{ext}}} \times \overline{A_{\mathrm{ext}}} , \tag{18}
\]
i.e.,
\[
\Phi^{\wedge}(n) = \varphi(n), \quad n = 0,1,\ldots .
\]
5.1 Admissibility Condition

Definition 2. A piecewise continuous function $\gamma : [0,\infty) \to \mathbb{R}$ is said to be admissible if it satisfies the admissibility condition:
\[
\sum_{n=0}^{\infty} \left( \sup_{x\in[n,n+1)} |\gamma(x)| \right)^2 < +\infty . \tag{19}
\]
Lemma 4. Let $\gamma : [0,\infty) \to \mathbb{R}$ be piecewise continuous. Furthermore, assume that there exists a number $\varepsilon > 0$ such that
\[
\gamma(t) = O(t^{-1-\varepsilon}), \quad t \to \infty . \tag{20}
\]
Then $\gamma$ satisfies the admissibility condition.

Proof. As $\gamma$ is piecewise continuous and $t^{1+\varepsilon}\gamma(t)$ is bounded as $t \to \infty$, there exists a constant $M$ such that $|\gamma(t)| \le M (1+t)^{-1-\varepsilon}$ for all $t \in [0,\infty)$. Hence,
\[
\sum_{n=0}^{\infty} \left( \sup_{x\in[n,n+1)} |\gamma(x)| \right)^2 \le M^2 \sum_{n=0}^{\infty} (1+n)^{-2-2\varepsilon} < +\infty .
\]
This shows Lemma 4. □
Note that, on the other hand, the function $\gamma$ given by
\[
\gamma(t) = \begin{cases} (t \log t)^{-1} & \text{for } t \ge e \\ 1 & \text{for } 0 \le t < e \end{cases}
\]
satisfies the admissibility condition, as
\[
(\gamma(t))^2 = (t \log t)^{-2} \le t^{-2}, \quad \text{if } t \ge e .
\]
However, this $\gamma$ does not satisfy (20). This can be seen as follows: Assume that there exists a value $\varepsilon > 0$ such that $\gamma(t) = O(t^{-1-\varepsilon})$ as $t \to \infty$. Then there exists a constant $M \in \mathbb{R}$ such that
\[
(t \log t)^{-1} \le M\, t^{-1-\varepsilon}
\]
for all sufficiently large $t$, i.e. $t^{\varepsilon} \le M \log t$ for all sufficiently large $t$. But $t^{\varepsilon}/\log t \to \infty$ as $t \to \infty$.
This is a contradiction. Hence, the implication of Lemma 4 is not true in the opposite direction. An immediate consequence of Definition 2 is that a kernel $\Phi$ with $\Phi^{\wedge}(n) = \gamma_1(n)$ for $n = 0,1,\ldots$, where $\gamma_1$ is admissible, i.e. satisfies the admissibility condition, is an $\mathcal{H}_0$-kernel. Using an admissible generator $\gamma = \gamma_1$ we can define a dilated generator $\gamma_p : (0,\infty) \to \mathbb{R}$ by letting
\[
\gamma_p(t) = D_p \gamma_1(t) = \gamma_1(pt), \quad t \in (0,\infty) \tag{21}
\]
(cf. [12]). We are now able to verify the admissibility condition for dilated functions.

Lemma 5. Let $\gamma_1 : (0,\infty) \to \mathbb{R}$ satisfy the admissibility condition and let $p \in (0,1)$ be a given number. Then the dilated function $\gamma_p$ satisfies the admissibility condition.

Proof. We use the notations $\lceil\cdot\rceil$ and $\lfloor\cdot\rfloor$ for rounding real numbers: $\lfloor t \rfloor = \max\{n \in \mathbb{Z} \mid n \le t\}$, $\lceil t \rceil = \min\{n \in \mathbb{Z} \mid n \ge t\}$, where $t \in \mathbb{R}$. We obtain
\[
\sum_{n=0}^{N} \sup_{x\in[n,n+1)} (\gamma_p(x))^2 = \sum_{n=0}^{N} \sup_{s\in[pn,p(n+1))} (\gamma_1(s))^2 \le \sum_{n=0}^{N} \sup_{s\in[\lfloor pn \rfloor,\, \lceil p(n+1) \rceil)} (\gamma_1(s))^2 \tag{22}
\]
\[
\le \sum_{n=0}^{N} \left( \sup_{s\in[\lfloor pn \rfloor,\, \lceil pn \rceil)} (\gamma_1(s))^2 + \sup_{s\in[\lceil pn \rceil,\, \lceil p(n+1) \rceil)} (\gamma_1(s))^2 \right) .
\]
As $0 < p < 1$, every interval in the last line is either empty or has the form $[q, q+1)$, where $q \in \mathbb{N}_0$. But some intervals can occur several times. There are at most $\lceil 1/p \rceil + 1$ equal intervals of the kind $[\lfloor pn \rfloor, \lceil pn \rceil)$, as $\lfloor pn \rfloor = \lfloor pm \rfloor$ ($n, m \in \mathbb{N}_0$) implies
\[
pn = q + \alpha, \quad pm = q + \beta ,
\]
where $q = \lfloor pn \rfloor \in \mathbb{N}_0$ and $\alpha, \beta \in [0,1)$. Without loss of generality we assume that $\alpha \le \beta$, i.e. $n \le m$. Thus, $\beta - \alpha = p(m-n)$ implies
\[
m - n = \frac{\beta - \alpha}{p} < \frac{1}{p} .
\]
Analogously, we see that there are at most $\lceil 1/p \rceil + 1$ equal intervals of the form $[\lceil pn \rceil, \lceil p(n+1) \rceil)$. Furthermore, the largest values that we obtain for $s$ in (22) are in the intervals $[\lfloor pN \rfloor, \lceil pN \rceil)$ and $[\lceil pN \rceil, \lceil p(N+1) \rceil)$, where $\lceil p(N+1) \rceil = \lceil pN + p \rceil \le \lceil pN \rceil + 1$. Hence, we obtain
\[
\sum_{n=0}^{N} \sup_{t\in[n,n+1)} (\gamma_1(pt))^2 \le 2 \left( \left\lceil \frac{1}{p} \right\rceil + 1 \right) \sum_{n=0}^{\infty} \sup_{t\in[n,n+1)} (\gamma_1(t))^2 < +\infty .
\]
This proves Lemma 5. □
Lemma 6. Let $\gamma_1 : (0,\infty) \to \mathbb{R}$ satisfy the admissibility condition and let $p \in (1,\infty)$ be a given number. Then the dilated function $\gamma_p$ satisfies the admissibility condition.

Proof. Note that
\[
\lceil pn + p \rceil \le \lceil pn \rceil + \lceil p \rceil \le \lfloor pn \rfloor + 1 + \lceil p \rceil \le \lfloor pn \rfloor + \lfloor p \rfloor + 2 .
\]
We obtain for an arbitrary but fixed number $N \in \mathbb{N}$
\[
\sum_{n=0}^{N} \sup_{s\in[pn,p(n+1))} (\gamma_1(s))^2 \le \sum_{n=0}^{N} \sup_{s\in[\lfloor pn \rfloor,\, \lceil pn+p \rceil)} (\gamma_1(s))^2
\]
\[
\le \sum_{n=0}^{N} \left( \sup_{s\in[\lfloor pn \rfloor,\, \lfloor pn \rfloor+1)} (\gamma_1(s))^2 + \cdots + \sup_{s\in[\lfloor pn \rfloor+\lfloor p \rfloor+1,\, \lfloor pn \rfloor+\lfloor p \rfloor+2)} (\gamma_1(s))^2 \right)
\]
\[
= \sum_{n=0}^{N} \sum_{m=0}^{\lfloor p \rfloor+1} \sup_{s\in[\lfloor pn \rfloor+m,\, \lfloor pn \rfloor+m+1)} (\gamma_1(s))^2 = \sum_{m=0}^{\lfloor p \rfloor+1} \sum_{n=0}^{N} \sup_{s\in[\lfloor pn \rfloor+m,\, \lfloor pn \rfloor+m+1)} (\gamma_1(s))^2 .
\]
Let us keep $m$ fixed for a moment. We see that $\lfloor pn \rfloor + m = \lfloor p\nu \rfloor + m$; $n, \nu \in \mathbb{N}_0$; is equivalent to
\[
pn = q + \alpha, \quad p\nu = q + \beta; \quad q = \lfloor pn \rfloor \in \mathbb{N}_0; \quad \alpha, \beta \in [0,1) .
\]
Hence,
\[
n - \nu = \frac{q+\alpha}{p} - \frac{q+\beta}{p} = \frac{\alpha - \beta}{p} .
\]
As $\alpha - \beta \in (-1,1)$ and $p > 1$, we see that $n - \nu = 0$ and, consequently, $n = \nu$. Thus, for fixed $m$, the intervals used are disjoint. Consequently, we obtain
\[
\sum_{n=0}^{N} \sup_{s\in[pn,p(n+1))} (\gamma_1(s))^2 \le \sum_{m=0}^{\lfloor p \rfloor+1} \sum_{k=0}^{\lfloor pN \rfloor+m} \sup_{s\in[k,k+1)} (\gamma_1(s))^2
\]
\[
\le \sum_{m=0}^{\lfloor p \rfloor+1} \sum_{k=0}^{\lfloor pN \rfloor+\lfloor p \rfloor+1} \sup_{s\in[k,k+1)} (\gamma_1(s))^2 \le (\lfloor p \rfloor + 2) \sum_{k=0}^{\infty} \sup_{s\in[k,k+1)} (\gamma_1(s))^2 < +\infty .
\]
Hence, $\gamma_p$ satisfies the admissibility condition, as required. □

We can summarize Lemma 5 and Lemma 6 in the following way.
Theorem 1. Let $\gamma_1 : (0,\infty) \to \mathbb{R}$ satisfy the admissibility condition. Then $\gamma_p$ is admissible for all $p \in (0,\infty)$.

Definition 3. A function $\varphi : [0,\infty) \to \mathbb{R}$ satisfying the admissibility condition is called an $\mathcal{H}_0$-generator of the kernel $\Phi : \overline{A_{\mathrm{ext}}} \times \overline{A_{\mathrm{ext}}} \to \mathbb{R}$ given by (18) if $\Phi^{\wedge}(n) = \varphi(n)$ for all $n = 0,1,\ldots$.
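Theorem 1 can be probed numerically: dilating an admissible generator leaves the sum in (19) finite. The sketch below uses the assumed generator $\gamma_1(t) = e^{-t}$ (our illustrative choice, not prescribed by the paper) and approximates each supremum by sampling:

```python
import math

def dilate(gamma1, p):
    # dilation operator, Eq. (21): (D_p gamma_1)(t) = gamma_1(p*t)
    return lambda t: gamma1(p * t)

def admissibility_sum(gamma, n_terms=200, samples=50):
    # truncated numerical version of the admissibility sum (19); the
    # supremum over [n, n+1) is approximated by sampling
    total = 0.0
    for n in range(n_terms):
        total += max(gamma(n + k / samples)**2 for k in range(samples))
    return total

g1 = lambda t: math.exp(-t)          # assumed admissible generator
g_half, g_two = dilate(g1, 0.5), dilate(g1, 2.0)
```

For a decreasing generator the supremum on $[n,n+1)$ is attained at the left endpoint, so the truncated sums agree with the geometric series $\sum_n e^{-2pn} = (1 - e^{-2p})^{-1}$.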
From Section 4.1 it is clear that $\Phi$ is an $\mathcal{H}_0$-kernel provided that $\varphi$ is an admissible generator of $\Phi$. Another consequence is that each function $\varphi_p$, $p \in (0,\infty)$, defined by (21) is an $\mathcal{H}_0$-generator of the kernel $\Phi_p$ via $\Phi_p^{\wedge}(n) = \varphi_p(n)$, $n = 0,1,\ldots$. But this enables us to write $\Phi_p = D_p \Phi_1$. $D_p$ is called the dilation operator of level $p$. We are also able to introduce the inverse of $D_p$, denoted by $D_{p^{-1}}$, $p \in (0,\infty)$. To be more specific,
\[
\Phi_{p^{-1}}(x,y) = D_{p^{-1}} \Phi(x,y) = \sum_{n=0}^{\infty} \varphi(p^{-1}n)\, \frac{2n+1}{4\pi a^2} \left( \frac{a^2}{|x|\,|y|} \right)^{n+1} P_n\!\left( \frac{x}{|x|} \cdot \frac{y}{|y|} \right),
\]
$(x,y) \in \overline{A_{\mathrm{ext}}} \times \overline{A_{\mathrm{ext}}}$, whenever $\Phi$ is an $\mathcal{H}_0$-kernel of the form (18) with $\Phi^{\wedge}(n) = \varphi(n)$, $n = 0,1,\ldots$.
We now introduce those $\mathcal{H}_0$-generators which define scaling functions.

Definition 4. A function $\varphi_1 : [0,\infty) \to \mathbb{R}$ satisfying the admissibility condition is called an $\mathcal{H}_0$-generator of a scaling function if it satisfies the following properties:
- $\varphi_1$ is monotonically decreasing on $[0,\infty)$,
- $\varphi_1$ is continuous at $0$ with value $\varphi_1(0) = 1$.

Indeed, if $\varphi_1$ satisfies the assumptions of an $\mathcal{H}_0$-generator of a scaling function, then $\varphi_1$ and its dilates $\varphi_p$ generate the scaling function $\{\Phi_p\}_{p\in(0,\infty)} \subset \mathcal{H}_0$ via $\Phi_p^{\wedge}(n) = \varphi_p(n)$. It is easily seen that $\varphi_1(t) \ge 0$ for all $t \in [0,\infty)$. Furthermore, for each $t \in [0,\infty)$, we find
\[
\lim_{\substack{p\to 0 \\ p>0}} \varphi_p(t) = \lim_{\substack{p\to 0 \\ p>0}} \varphi_1(pt) = \varphi_1(0) = 1 ,
\]
since $\varphi_1$ is continuous at $0$. Moreover, the monotonicity of $\varphi_1$ on $[0,\infty)$ and the definition of $\varphi_p$ imply the monotonicity of the family $\{\varphi_p(t)\}_{p\in(0,\infty)}$ for each $t \in [0,\infty)$. Our considerations now enable us to verify an approximate convolution identity.

Theorem 2. Let $\varphi_1$ be a generator of a scaling function $\{\Phi_p\}$, $p \in (0,\infty)$. Then
\[
\lim_{\substack{p\to 0 \\ p>0}} \| U - \Phi_p * \Phi_p * U \|_{\mathcal{H}_0} = 0
\]
for every $U \in \mathcal{H}_0$.

Proof. Observing the Parseval identity we obtain
\[
\| U - \Phi_p * \Phi_p * U \|^2_{\mathcal{H}_0} = \sum_{n=0}^{\infty} \sum_{j=1}^{2n+1} \left( 1 - (\varphi_p(n))^2 \right)^2 \left( (FT)(U)(n,j) \right)^2 .
\]
Letting $p$ tend to $0$ we obtain the desired result, as the series on the right-hand side converges uniformly with respect to $p \in (0,\infty)$. □
6 Windowed Fourier Transform
We begin our considerations with the definition of the windowed Fourier transform.

Definition 5. For arbitrary but fixed $p \in (0,\infty)$, let $\Phi_p$ be a member of a scaling function $\{\Phi_p\}$. Assume that $U$ is of class $\mathcal{H}_0$. Then the windowed Fourier transform is defined by
\[
(WFT)_{\Phi_p}(U)(n,j;x) = (\Phi_p^{(2)}(1))^{-1/2}\, (U, \Phi_p(x,\cdot)\, H_{n,j}(a;\cdot))_{\mathcal{H}_0}
= (\Phi_p^{(2)}(1))^{-1/2} \int_A U(y)\, \Phi_p(x,y)\, H_{n,j}(a;y)\, d\omega(y)
= (\Phi_p^{(2)}(1))^{-1/2}\, (FT)(\Phi_p(x,\cdot)U)(n,j)
\]
for $(n,j) \in \mathcal{N}$ and $x \in \overline{A_{\mathrm{ext}}}$, where $\Phi_p^{(2)}(1)$ is a normalization constant given by
\[
\Phi_p^{(2)}(1) = \sum_{n=0}^{\infty} \frac{2n+1}{4\pi a^2}\, (\varphi_p(n))^2 . \tag{23}
\]
The windowed Fourier transform converts a potential U C 740 of one spacevariable into a potential (WFT)ep(U)(n,j;x) of the two variables (n,j) E A f and x E Aext- The windowed Fourier transform is generated by the 'y-shift operator' Sy and the (n, j)-moduIation operator Mn,j defined by
Sy : ~p(x,.) ~ Sy~p(x,.) = ~p(x,y), (x,y) E A~xt x Aext M~,j : ~p(x,') ~-~ Mnj~p(x,') = ~p(x,.)H~j(a;.),(n,j) e Af,
x e Aext,
respectively. In other words,
(WFT),I,p(U)(Tt, j;x) = (~p2)(1))-l/2(y, in,j~x~p(',.))7.to, 6.1
U E 740.
6.1 Reconstruction Formula

Denote by $\mathcal{H}_0(\mathcal{N} \times \overline{\mathcal{A}_{\mathrm{ext}}})$ the space of all functions $G : \mathcal{N} \times \overline{\mathcal{A}_{\mathrm{ext}}} \to \mathbb{R}$ such that $G(n,j;\cdot) \in \mathcal{H}_0$ for all $(n,j) \in \mathcal{N}$ and
\[
\sum_{(n,j)\in\mathcal{N}} \|G(n,j;\cdot)\|_{\mathcal{H}_0}^2 = \sum_{n=0}^{\infty}\sum_{j=1}^{2n+1} \|G(n,j;\cdot)\|_{\mathcal{H}_0}^2 < \infty .
\]
On the space $\mathcal{H}_0(\mathcal{N} \times \overline{\mathcal{A}_{\mathrm{ext}}})$ we are able to impose an inner product $(\cdot,\cdot)_{\mathcal{H}_0(\mathcal{N}\times\overline{\mathcal{A}_{\mathrm{ext}}})}$ by
\[
(F,G)_{\mathcal{H}_0(\mathcal{N}\times\overline{\mathcal{A}_{\mathrm{ext}}})} = \sum_{n=0}^{\infty}\sum_{j=1}^{2n+1} (F(n,j;\cdot), G(n,j;\cdot))_{\mathcal{H}_0}\, ; \quad F,G \in \mathcal{H}_0(\mathcal{N} \times \overline{\mathcal{A}_{\mathrm{ext}}}) .
\]
214
Willi Freeden and Volker Michel
The corresponding norm reads
\[
\|G\|_{\mathcal{H}_0(\mathcal{N}\times\overline{\mathcal{A}_{\mathrm{ext}}})} = \left( \sum_{n=0}^{\infty}\sum_{j=1}^{2n+1} \|G(n,j;\cdot)\|_{\mathcal{H}_0}^2 \right)^{1/2} .
\]
This enables us to formulate the following theorem.

Theorem 3. Let $U$ be a potential of class $\mathcal{H}_0$. The windowed Fourier transform forms a mapping from the space $\mathcal{H}_0$ into $\mathcal{H}_0(\mathcal{N} \times \overline{\mathcal{A}_{\mathrm{ext}}})$, i.e. $(WFT)_{\Phi_p} : \mathcal{H}_0 \to \mathcal{H}_0(\mathcal{N} \times \overline{\mathcal{A}_{\mathrm{ext}}})$, and we have
\[
\|U\|_{\mathcal{H}_0}^2 = \sum_{n=0}^{\infty}\sum_{j=1}^{2n+1} \left\|(WFT)_{\Phi_p}(U)(n,j;\cdot)\right\|_{\mathcal{H}_0}^2 .
\]

Proof. The Parseval identity of the theory of outer harmonics shows us that
\[
\sum_{n=0}^{\infty}\sum_{j=1}^{2n+1} \int_A \left|(WFT)_{\Phi_p}(U)(n,j;x)\right|^2 d\omega(x) = (\Phi_p^{(2)}(1))^{-1} \int_A |U(y)|^2 \int_A |\Phi_p(x,y)|^2\, d\omega(x)\, d\omega(y) .
\]
Now we observe that for $y \in A$
\[
\int_A |\Phi_p(x,y)|^2\, d\omega(x) = \Phi_p^{(2)}(y,y) = \Phi_p^{(2)}(1) .
\]
This yields the desired result. □
Theorem 3 is equivalent to the statement that any potential $U \in \mathcal{H}_0$ can be recovered from its windowed Fourier expansion (relative to the kernel $\Phi_p$). To be more specific, we have
\[
U = (\Phi_p^{(2)}(1))^{-1/2} \sum_{n=0}^{\infty}\sum_{j=1}^{2n+1} \int_A (WFT)_{\Phi_p}(U)(n,j;y)\,\Phi_p(\cdot,y)\, d\omega(y)\, H_{n,j}(\alpha;\cdot)
\]
in the sense of $\|\cdot\|_{\mathcal{H}_0}$. In particular, for every subset $K \subset \overline{\mathcal{A}_{\mathrm{ext}}}$ with $\operatorname{dist}(K,A) > 0$, the convergence is uniform.
6.2 Least-squares Property
By virtue of the Cauchy–Schwarz inequality we obtain for $U \in \mathcal{H}_0$, $x \in \overline{\mathcal{A}_{\mathrm{ext}}}$, and arbitrary but fixed $p \in (0,\infty)$
\[
\left|(WFT)_{\Phi_p}(U)(n,j;x)\right| \leq (\Phi_p^{(2)}(1))^{-1/2} \left|(U, \Phi_p(x,\cdot)H_{n,j}(\alpha;\cdot))_{\mathcal{H}_0}\right|
\leq (\Phi_p^{(2)}(1))^{-1/2}\, \|U\|_{\mathcal{H}_0}\, \|\Phi_p(x,\cdot)H_{n,j}(\alpha;\cdot)\|_{\mathcal{H}_0} .
\]
In other words, $U \in \mathcal{H}_0$ implies that $(WFT)_{\Phi_p}(U) \in \mathcal{H}_0(\mathcal{N} \times \overline{\mathcal{A}_{\mathrm{ext}}})$ is bounded. Hence, the transform $(WFT)_{\Phi_p}$ is not surjective on $\mathcal{H}_0(\mathcal{N} \times \overline{\mathcal{A}_{\mathrm{ext}}})$ (note that $\mathcal{H}_0(\mathcal{N} \times \overline{\mathcal{A}_{\mathrm{ext}}})$ contains unbounded elements). Therefore
\[
\mathcal{G} = (WFT)_{\Phi_p}(\mathcal{H}_0)
\]
is a proper subspace of $\mathcal{H}_0(\mathcal{N} \times \overline{\mathcal{A}_{\mathrm{ext}}})$. Hence, we are led to the question of how to characterize $\mathcal{G}$ within the framework of $\mathcal{H}_0(\mathcal{N} \times \overline{\mathcal{A}_{\mathrm{ext}}})$. For this purpose we consider the operator $P : \mathcal{H}_0(\mathcal{N} \times \overline{\mathcal{A}_{\mathrm{ext}}}) \to \mathcal{G}$ given by
\[
(PH)(n,j;x) = \sum_{p'=0}^{\infty}\sum_{q=1}^{2p'+1} \int_A K_p(n,j;x \mid p',q;y)\, H(p',q;y)\, d\omega(y), \tag{24}
\]
where
\[
K_p(n,j;x \mid p',q;y) = (\Phi_p^{(2)}(1))^{-1} \int_A \Phi_p(x,z)H_{n,j}(\alpha;z)\,\Phi_p(z,y)H_{p',q}(\alpha;z)\, d\omega(z) . \tag{25}
\]
Our aim is to show that $P$ defines a projection operator.

Lemma 7. The operator $P : \mathcal{H}_0(\mathcal{N} \times \overline{\mathcal{A}_{\mathrm{ext}}}) \to \mathcal{G}$ defined by (24), (25) is a projection operator.

Proof. Assume that $H = \tilde{U} = (WFT)_{\Phi_p}(U) \in \mathcal{G}$, $U \in \mathcal{H}_0$. Then we obtain
\[
(PH)(n,j;x) = \sum_{p'=0}^{\infty}\sum_{q=1}^{2p'+1} \int_A K_p(n,j;x \mid p',q;y)\, \tilde{U}(p',q;y)\, d\omega(y) = \tilde{U}(n,j;x) = H(n,j;x),
\]
hence $PH = H$ for all $H \in \mathcal{G}$. Next we show that for all $H^\perp \in \mathcal{G}^\perp$ we have $PH^\perp = 0$. Assume therefore that for all $U \in \mathcal{H}_0$
\[
\left(H^\perp, (WFT)_{\Phi_p}(U)\right)_{\mathcal{H}_0(\mathcal{N}\times\overline{\mathcal{A}_{\mathrm{ext}}})} = 0 . \tag{26}
\]
If $x \in \overline{\mathcal{A}_{\mathrm{ext}}}$, $(n,j) \in \mathcal{N}$, then it follows from (26) with the special choice
\[
U = (\Phi_p^{(2)}(1))^{-1/2}\,\Phi_p(x,\cdot)H_{n,j}(\alpha;\cdot)
\]
that
\[
0 = \sum_{p'=0}^{\infty}\sum_{q=1}^{2p'+1} \int_A H^\perp(p',q;y)\,(\Phi_p^{(2)}(1))^{-1} \int_A \Phi_p(x,z)H_{n,j}(\alpha;z)\,\Phi_p(y,z)H_{p',q}(\alpha;z)\, d\omega(z)\, d\omega(y)
= \sum_{p'=0}^{\infty}\sum_{q=1}^{2p'+1} \int_A H^\perp(p',q;y)\, K_p(n,j;x \mid p',q;y)\, d\omega(y)
= (PH^\perp)(n,j;x) .
\]
Hence it is clear that $PH^\perp = 0$ for all $H^\perp \in \mathcal{G}^\perp$. Summarizing our results we therefore obtain
\[
P\left(\mathcal{H}_0(\mathcal{N} \times \overline{\mathcal{A}_{\mathrm{ext}}})\right) = \mathcal{G}, \quad P\,\mathcal{G}^\perp = 0, \quad P^2 = P . \qquad \square
\]
From our investigations we are therefore able to deduce that $\mathcal{G}$ is characterized as follows:

Lemma 8. $H \in \mathcal{G}$ if and only if
\[
H(n,j;x) = \sum_{p'=0}^{\infty}\sum_{q=1}^{2p'+1} \int_A K_p(n,j;x \mid p',q;y)\, H(p',q;y)\, d\omega(y) . \tag{27}
\]

In windowed Fourier theory Eq. (27) is known as the consistency condition associated with the 'kernel' $\Phi_p$ (cf. [15, 16]). From the consistency condition it follows that not every function $H \in \mathcal{H}_0(\mathcal{N} \times \overline{\mathcal{A}_{\mathrm{ext}}})$ can be the windowed Fourier transform of a potential $U \in \mathcal{H}_0$. In fact, if there were no such restriction, then we could design space-dependent potentials with arbitrary space–momentum properties and thus violate the uncertainty principle. It is not difficult to see that $K_p(n,j;y \mid \cdot,\cdot;\cdot) \in \mathcal{G}$ and $K_p(\cdot,\cdot;\cdot \mid p',q;y) \in \mathcal{G}$. The kernel
\[
(n,j;x \mid p',q;y) \mapsto K_p(n,j;x \mid p',q;y), \quad (n,j) \in \mathcal{N},\ (p',q) \in \mathcal{N},\ (x,y) \in \overline{\mathcal{A}_{\mathrm{ext}}} \times \overline{\mathcal{A}_{\mathrm{ext}}},
\]
is the reproducing kernel in $\mathcal{G}$. Next we prove the following theorem.

Theorem 4. Let $H$ be an arbitrary element of $\mathcal{H}_0(\mathcal{N} \times \overline{\mathcal{A}_{\mathrm{ext}}})$. Then the unique function $U_H \in \mathcal{H}_0$ satisfying
\[
\left\|H - \tilde{U}_H\right\|_{\mathcal{H}_0(\mathcal{N}\times\overline{\mathcal{A}_{\mathrm{ext}}})} = \inf_{U\in\mathcal{H}_0} \left\|H - \tilde{U}\right\|_{\mathcal{H}_0(\mathcal{N}\times\overline{\mathcal{A}_{\mathrm{ext}}})}
\]
(with $\tilde{U} = (WFT)_{\Phi_p}(U)$) is given by
\[
U_H(x) = (\Phi_p^{(2)}(1))^{-1/2} \sum_{n=0}^{\infty}\sum_{j=1}^{2n+1} \int_A H(n,j;y)\,\Phi_p(x,y)\, d\omega(y)\, H_{n,j}(\alpha;x) . \tag{28}
\]
Proof. We know that $\tilde{U}_H$ is the orthogonal projection of $H$ onto $\mathcal{G}$. This proves Theorem 4. □

Our considerations have shown that the coefficients in $\mathcal{H}_0(\mathcal{N} \times \overline{\mathcal{A}_{\mathrm{ext}}})$ for reconstructing a function $U \in \mathcal{H}_0$ are not unique. This can be immediately seen from the identity
\[
U = (\Phi_p^{(2)}(1))^{-1/2} \sum_{n=0}^{\infty}\sum_{j=1}^{2n+1} \int_A \left(\tilde{U}(n,j;y) + \tilde{U}^\perp(n,j;y)\right) \Phi_p(\cdot,y)\, d\omega(y)\, H_{n,j}(\alpha;\cdot),
\]
where $\tilde{U} = (WFT)_{\Phi_p}(U)$ and $\tilde{U}^\perp$ is an arbitrary member of $\mathcal{G}^\perp$. Therefore we are able to formulate the following result.

Theorem 5. For arbitrary $U \in \mathcal{H}_0$ the coefficient function $\tilde{U} = (WFT)_{\Phi_p}(U) \in \mathcal{G}$ is the unique element in $\mathcal{H}_0(\mathcal{N} \times \overline{\mathcal{A}_{\mathrm{ext}}})$ which satisfies the minimum norm condition
\[
\|\tilde{U}\|_{\mathcal{H}_0(\mathcal{N}\times\overline{\mathcal{A}_{\mathrm{ext}}})} = \inf_{\substack{H\in\mathcal{H}_0(\mathcal{N}\times\overline{\mathcal{A}_{\mathrm{ext}}}) \\ (WFT)_{\Phi_p}^{-1}(H)=U}} \|H\|_{\mathcal{H}_0(\mathcal{N}\times\overline{\mathcal{A}_{\mathrm{ext}}})} .
\]

Proof. We know already that $H = \tilde{U} + \tilde{U}^\perp$. Thus we are able to deduce that
\[
\|H\|_{\mathcal{H}_0(\mathcal{N}\times\overline{\mathcal{A}_{\mathrm{ext}}})}^2 = \|\tilde{U}\|_{\mathcal{H}_0(\mathcal{N}\times\overline{\mathcal{A}_{\mathrm{ext}}})}^2 + \|\tilde{U}^\perp\|_{\mathcal{H}_0(\mathcal{N}\times\overline{\mathcal{A}_{\mathrm{ext}}})}^2 \geq \|\tilde{U}\|_{\mathcal{H}_0(\mathcal{N}\times\overline{\mathcal{A}_{\mathrm{ext}}})}^2,
\]
as required. □
As mentioned in our introduction, the windowed Fourier transform works by first dividing a 'signal' $U \in \mathcal{H}_0$ into short consecutive segments of fixed size by use of a 'cutoff kernel' (window function) $\Phi_p$ and then computing the Fourier coefficients of each segment. In other words, the windowed Fourier transform maps local changes of the function to local changes of the coefficients in the expansion and thereby also reduces the computational complexity. However, there is still a defect in reconstructing a function using a sole, fixed 'window parameter' $p \in (0,\infty)$: phenomena shorter than the window are poorly resolved, which leads to non-optimal computational costs in many circumstances. This can be remedied by kernels with decreasing window diameters (i.e. $p \to 0$) exhibiting the so-called 'zooming-in' property.

The meaning of Theorem 4 may be explained as follows (confer the arguments in Euclidean wavelet theory [16]): Suppose we are looking for a potential with certain specified properties in frequency (momentum) and in space. In other words, we are interested in a potential $U \in \mathcal{H}_0$ such that $(WFT)_{\Phi_p}(U)(n,j;x) = H(n,j;x)$, where $H \in \mathcal{H}_0(\mathcal{N} \times \overline{\mathcal{A}_{\mathrm{ext}}})$ is given. Lemma 8 informs us that no such potential can exist unless $H$ satisfies the consistency condition. The function $U_H$ introduced above is closest in the sense that the '$\mathcal{H}_0(\mathcal{N} \times \overline{\mathcal{A}_{\mathrm{ext}}})$-distance' of its windowed Fourier transform $\tilde{U}_H$ to $H$ is a minimum. $U_H$ is called the least-squares approximation to the desired potential $U \in \mathcal{H}_0$. In the case that $H \in \mathcal{G}$, Eq. (28) reduces to the reconstruction formula. The least-squares approximation may be used to process potentials simultaneously in frequency and in space. More explicitly, given a potential $U \in \mathcal{H}_0$, we may first compute $(WFT)_{\Phi_p}(U)(n,j;x)$ and then modify it in any desirable way (such as by suppressing some frequencies and amplifying others while simultaneously localizing in space). Of course, the modified expression $H(n,j;x)$ is generally no longer the windowed Fourier transform of any (space-dependent) potential $U \in \mathcal{H}_0$, but its least-squares approximation $U_H$ comes closest to being such a potential, in the above topology. Another essential aspect of the least-squares approximation is that even when we do not purposefully tamper with $(WFT)_{\Phi_p}(U)(n,j;x)$, 'noise' is introduced into it. Hence, by the time we are ready to reconstruct $U \in \mathcal{H}_0$, the resulting expression $H(n,j;x)$ will in general no longer belong to $\mathcal{G}$: any random change usually takes $H \in \mathcal{H}_0(\mathcal{N} \times \overline{\mathcal{A}_{\mathrm{ext}}})$ out of $\mathcal{G}$. The 'reconstruction formula' in the form (28) then automatically yields the least-squares approximation to the original signal, given the incomplete or erroneous information at hand. This is a kind of built-in stability of the windowed Fourier reconstruction related to oversampling.
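The least-squares mechanism of Theorems 4 and 5 has an elementary finite-dimensional analogue: the range of an injective analysis operator is a proper subspace, and orthogonal projection onto it maps perturbed coefficients to the nearest consistent ones. A sketch under that assumption, with a random tall matrix standing in for the windowed Fourier transform:

```python
import numpy as np

# Finite-dimensional analogue of Theorems 4/5: the range G of an injective
# "analysis" map A (an assumed toy stand-in for the windowed Fourier transform)
# is a proper subspace; projecting perturbed coefficients onto G yields the
# least-squares consistent coefficients.

rng = np.random.default_rng(0)
A = rng.standard_normal((8, 3))          # analysis operator: 3 dof -> 8 coefficients
P = A @ np.linalg.pinv(A)                # orthogonal projector onto G = range(A)

H = A @ rng.standard_normal(3)           # consistent coefficients
H_noisy = H + rng.standard_normal(8)     # "noise" pushes H out of G
H_ls = P @ H_noisy                       # least-squares consistent approximation

assert np.allclose(P @ P, P)             # P is a projection: P^2 = P
assert np.allclose(P @ H, H)             # consistent coefficients are fixed points
# H_ls is the nearest point of G, hence no farther than the original H:
assert np.linalg.norm(H_noisy - H_ls) <= np.linalg.norm(H_noisy - H) + 1e-12
```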
7 Continuous Wavelet Transform

With the definitions of Chapter 5 in mind, we are now interested in introducing the wavelet transform (WT). In a consistent setup scale continuous as well as
scale discrete wavelets are discussed. It turns out that the relation between scaling function and scale continuous wavelet is characterized by a differential equation. This assumes the piecewise differentiability of the scaling function under consideration.

Definition 6. Let $\varphi_1 : [0,\infty) \to \mathbb{R}$ be a piecewise differentiable $\mathcal{H}_0$-generator of a scaling function. Then the function $\psi_1 : [0,\infty) \to \mathbb{R}$ is said to be the $\mathcal{H}_0$-generator of the mother wavelet kernel $\Psi_1$ given by
\[
\Psi_1(x,y) = \sum_{n=0}^{\infty} \frac{2n+1}{4\pi\alpha^2}\,\psi_1(n)\left(\frac{\alpha^2}{|x|\,|y|}\right)^{n+1} P_n\!\left(\frac{x}{|x|}\cdot\frac{y}{|y|}\right), \quad (x,y) \in \overline{\mathcal{A}_{\mathrm{ext}}} \times \overline{\mathcal{A}_{\mathrm{ext}}},
\]
if $\psi_1$ satisfies the admissibility condition and, in addition, the differential equation
\[
\psi_1(t) = \left(-t\,\frac{d}{dt}(\varphi_1(t))^2\right)^{1/2} = \left(-2t\,\varphi_1(t)\,\varphi_1'(t)\right)^{1/2} .
\]
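The differential equation of Definition 6 can be checked numerically for a concrete generator. The sketch below assumes the exponential generator $\varphi_1(t) = e^{-Rt}$ of Section 9, for which $\psi_1(t) = (2Rt)^{1/2}e^{-Rt}$ in closed form:

```python
import numpy as np

# Check the defining differential equation psi_1(t) = (-t d/dt (phi_1(t))^2)^(1/2)
# for the assumed exponential generator phi_1(t) = exp(-R t) (Section 9), whose
# wavelet generator is psi_1(t) = sqrt(2 R t) * exp(-R t).

R = 1.5
t = np.linspace(0.01, 5.0, 500)
h = 1e-6

phi_sq = lambda u: np.exp(-2 * R * u)                  # (phi_1(u))^2
dphi_sq = (phi_sq(t + h) - phi_sq(t - h)) / (2 * h)    # central difference
psi_numeric = np.sqrt(-t * dphi_sq)
psi_closed = np.sqrt(2 * R * t) * np.exp(-R * t)

assert np.max(np.abs(psi_numeric - psi_closed)) < 1e-5
```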
It is not difficult to show that the generator $\psi_1$ and its dilates $\psi_p = D_p\psi_1$, i.e.,
\[
\psi_p(t) = \psi_1(pt) = \left(-2pt\,\varphi_1(pt)\,\varphi_1'(pt)\right)^{1/2} = \left(-p\,\frac{\partial}{\partial p}(\varphi_p(t))^2\right)^{1/2},
\]
satisfy the following properties:
- $\psi_p(0) = 0$, $p \in (0,\infty)$,
- $\displaystyle\lim_{\substack{p\to 0 \\ p>0}} \psi_p(t) = 0$, $t \in (0,\infty)$,
- $\displaystyle\sum_{n=1}^{\infty} \frac{2n+1}{4\pi\alpha^2}\,(\psi_p(n))^2 < \infty$, $p \in (0,\infty)$,
- $\displaystyle\int_0^\infty (\psi_p(t))^2\, \frac{dp}{p} = 1$, $t \in (0,\infty)$.

The first property, $\psi_p(0) = 0$, $p \in (0,\infty)$, justifies the name wavelet (i.e., 'small wave'). The last property is of later significance in that it essentially assures the reconstruction formula of our scale continuous wavelet theory. The intermediate properties are straightforward consequences of our definition of the mother wavelet kernel.

Definition 7. The family $\{\Psi_p\}$, $p \in (0,\infty)$, of $\mathcal{H}_0$-kernels corresponding to the mother wavelet $\Psi_1$, defined via $\Psi_p^\wedge(n) = \psi_p(n)$, $n = 0,1,\ldots$, is called a scale
continuous harmonic wavelet. Let $\Psi_{p;y}$ be defined as follows:
\[
\Psi_{p;y} : x \mapsto \Psi_{p;y}(x) = \Psi_p(x,y) = S_y D_p \Psi_1(x,\cdot), \quad x \in \overline{\mathcal{A}_{\mathrm{ext}}},
\]
where the '$y$-shift operator' $S_y$ and the '$p$-dilation operator' $D_p$ are given by
\[
S_y : \Psi_1(x,\cdot) \mapsto S_y\Psi_1(x,\cdot) = \Psi_1(x,y), \quad (x,y) \in \overline{\mathcal{A}_{\mathrm{ext}}} \times \overline{\mathcal{A}_{\mathrm{ext}}},
\]
\[
D_p : \Psi_1(x,\cdot) \mapsto D_p\Psi_1(x,\cdot) = \Psi_p(x,\cdot), \quad x \in \overline{\mathcal{A}_{\mathrm{ext}}},
\]
respectively.

Definition 8. Let $\{\Psi_p\}$, $p \in (0,\infty)$, be a scale continuous wavelet as defined above. Then the scale continuous harmonic wavelet transform (WT) of scale $p \in (0,\infty)$ and position $y \in \overline{\mathcal{A}_{\mathrm{ext}}}$ is defined by
\[
(WT)(U)(p;y) = (\Psi_p * U)(y) = (\Psi_{p;y}, U)_{\mathcal{H}_0}
\]
for all $U \in \mathcal{H}_0$. Consequently, as in the case of the windowed Fourier transform, the (continuous) wavelet transform converts a potential $U \in \mathcal{H}_0$ into an expression of two variables, namely scale and position.

7.1 Reconstruction Formula
The scale continuous wavelet transform admits an inverse on the space of functions $U \in \mathcal{H}_0$ satisfying $(U, H_{0,1}(\alpha;\cdot))_{\mathcal{H}_0} = 0$.

Theorem 6 (Reconstruction formula). Let $\{\Psi_p\}$, $p \in (0,\infty)$, be a wavelet. Suppose that $U \in \mathcal{H}_0$ satisfies $(U, H_{0,1}(\alpha;\cdot))_{\mathcal{H}_0} = 0$. Then
\[
\lim_{\substack{R\to 0 \\ R>0}} \left\| U - \int_R^\infty \int_A (WT)(U)(p;y)\,\Psi_{p;y}(\cdot)\, d\omega(y)\, \frac{dp}{p} \right\|_{\mathcal{H}_0} = 0 .
\]

Proof. Choose an arbitrary $R > 0$. Then we have
\[
\int_R^\infty \int_A (WT)(U)(p;y)\,\Psi_{p;y}(x)\, d\omega(y)\, \frac{dp}{p}
= \int_A U(z) \sum_{n=1}^{\infty} \frac{2n+1}{4\pi\alpha^2} \left( \int_R^\infty (\psi_p(n))^2\, \frac{dp}{p} \right) \left(\frac{\alpha^2}{|x|\,|z|}\right)^{n+1} P_n\!\left(\frac{x}{|x|}\cdot\frac{z}{|z|}\right) d\omega(z)
\]
for every $x \in \overline{\mathcal{A}_{\mathrm{ext}}}$. Here the interchange of the limit $M \to \infty$ of the partial sums $\sum_{n=1}^{M}$ with the integration over $p$ is permitted, because the convergence of the series is uniform in $p \in [R,M]$ and
\[
\sum_{n=1}^{\infty} \frac{2n+1}{4\pi\alpha^2} \int_R^\infty (\psi_p(n))^2\, \frac{dp}{p} < +\infty .
\]
Observing that $\int_R^\infty (\psi_p(n))^2\, dp/p = (\varphi_R(n))^2$, we therefore obtain
\[
\int_R^\infty \int_A (WT)(U)(p;y)\,\Psi_{p;y}(x)\, d\omega(y)\, \frac{dp}{p} = \int_A U(z)\,\Phi_R^{(2)}(z,x)\, d\omega(z)
\]
for every $x \in \overline{\mathcal{A}_{\mathrm{ext}}}$. Now we know that $\lim_{R\to 0,\,R>0} \Phi_R^{(2)} * U = U$ in the sense of $\|\cdot\|_{\mathcal{H}_0}$.
But this is the desired result. □

In connection with (6) we obtain the following result.

Corollary 1. Under the assumptions of Theorem 6,
\[
\lim_{\substack{R\to 0 \\ R>0}} \sup_{x\in\overline{\Sigma_{\mathrm{ext}}}} \left| U(x) - \int_R^\infty \int_A (WT)(U)(p;y)\,\Psi_{p;y}(x)\, d\omega(y)\, \frac{dp}{p} \right| = 0 .
\]
In other words, a constructive approximation by wavelets defined on $\overline{\mathcal{A}_{\mathrm{ext}}}$ is found which approximates the solution of the Dirichlet boundary-value problem for the Laplace equation on $\Sigma_{\mathrm{ext}}$.
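The reconstruction formula rests on the identity $\int_R^\infty (\psi_p(t))^2\, dp/p = (\varphi_R(t))^2$, which tends to $1$ as $R \to 0$ for $t > 0$. A numerical sketch for the assumed exponential generator of Section 9:

```python
import numpy as np

# The key identity behind the reconstruction formula (Theorem 6):
#   int_R^inf (psi_p(t))^2 dp/p = (phi_R(t))^2  ->  1  as R -> 0  (t > 0),
# checked for the assumed exponential generator phi_p(t) = exp(-c p t) with
# psi_p(t) = sqrt(2 c p t) exp(-c p t); c is the fixed rate, p the scale.

c, t, R0 = 1.0, 2.0, 1e-4
p = np.linspace(R0, 50.0, 500_001)           # dense grid for the scale integral

integrand = (2.0 * c * p * t * np.exp(-2.0 * c * p * t)) / p   # psi_p(t)^2 / p
integral = np.sum(0.5 * (integrand[1:] + integrand[:-1]) * np.diff(p))

assert abs(integral - np.exp(-2.0 * c * R0 * t)) < 1e-3   # equals phi_{R0}(t)^2
assert abs(integral - 1.0) < 1e-2                          # tends to 1 as R0 -> 0
```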
7.2 Least-squares Property
Denote by $\mathcal{H}_0((0,\infty) \times \overline{\mathcal{A}_{\mathrm{ext}}})$ the space of all functions $U : (0,\infty) \times \overline{\mathcal{A}_{\mathrm{ext}}} \to \mathbb{R}$ such that $U(p;\cdot) \in \mathcal{H}_0$ for every $p \in (0,\infty)$ and
\[
\int_0^\infty \|U(p;\cdot)\|_{\mathcal{H}_0}^2\, \frac{dp}{p} = \int_0^\infty \int_A |U(p;y)|^2\, d\omega(y)\, \frac{dp}{p} < \infty . \tag{29}
\]
On $\mathcal{H}_0((0,\infty) \times \overline{\mathcal{A}_{\mathrm{ext}}})$ we are able to impose an inner product $(\cdot,\cdot)_{\mathcal{H}_0((0,\infty)\times\overline{\mathcal{A}_{\mathrm{ext}}})}$ by letting
\[
(U,V)_{\mathcal{H}_0((0,\infty)\times\overline{\mathcal{A}_{\mathrm{ext}}})} = \int_0^\infty \int_A U(p;y)\,V(p;y)\, d\omega(y)\, \frac{dp}{p} = \int_0^\infty (U(p;\cdot), V(p;\cdot))_{\mathcal{H}_0}\, \frac{dp}{p}
\]
for $U,V \in \mathcal{H}_0((0,\infty) \times \overline{\mathcal{A}_{\mathrm{ext}}})$. Correspondingly, the norm reads
\[
\|U\|_{\mathcal{H}_0((0,\infty)\times\overline{\mathcal{A}_{\mathrm{ext}}})} = \left( \int_0^\infty \|U(p;\cdot)\|_{\mathcal{H}_0}^2\, \frac{dp}{p} \right)^{1/2} .
\]
From Theorem 6 we obtain the following result telling us that the wavelet transform does not change the total energy.

Lemma 9. Let $\{\Psi_p\}$, $p \in (0,\infty)$, be a wavelet. Suppose that $U, V$ are of class $\mathcal{H}_0$ with $(U, H_{0,1}(\alpha;\cdot))_{\mathcal{H}_0} = (V, H_{0,1}(\alpha;\cdot))_{\mathcal{H}_0} = 0$. Then
\[
\int_0^\infty \int_A (WT)(U)(p;y)\,(WT)(V)(p;y)\, d\omega(y)\, \frac{dp}{p} = (U,V)_{\mathcal{H}_0} .
\]

As we have seen, WT is a transform from the one-parameter space $\mathcal{H}_0$ into the two-parameter space $\mathcal{H}_0((0,\infty) \times \overline{\mathcal{A}_{\mathrm{ext}}})$. However, the transform WT is not surjective on $\mathcal{H}_0((0,\infty) \times \overline{\mathcal{A}_{\mathrm{ext}}})$ (note that $\mathcal{H}_0((0,\infty) \times \overline{\mathcal{A}_{\mathrm{ext}}})$ contains unbounded elements, whereas it is not hard to see that $(WT)(U)$ is bounded for all $U \in \mathcal{H}_0$). This means that
\[
\mathcal{W} = (WT)(\mathcal{H}_0) \tag{31}
\]
is a proper subspace of $\mathcal{H}_0((0,\infty) \times \overline{\mathcal{A}_{\mathrm{ext}}})$:
\[
\mathcal{W} \subsetneq \mathcal{H}_0((0,\infty) \times \overline{\mathcal{A}_{\mathrm{ext}}}) .
\]
Therefore, one may ask the question of how to characterize $\mathcal{W}$ within the framework of $\mathcal{H}_0((0,\infty) \times \overline{\mathcal{A}_{\mathrm{ext}}})$. For that purpose we consider the operator
\[
P : \mathcal{H}_0((0,\infty) \times \overline{\mathcal{A}_{\mathrm{ext}}}) \to \mathcal{W} \tag{32}
\]
defined by
\[
P(H)(p';y') = \int_0^\infty \int_A K(p';y' \mid p;y)\, H(p;y)\, d\omega(y)\, \frac{dp}{p}, \quad p' \in (0,\infty),\ y' \in \overline{\mathcal{A}_{\mathrm{ext}}}, \tag{33}
\]
where we have introduced the kernel
\[
K(p';y' \mid p;y) = \int_A \Psi_{p';y'}(x)\,\Psi_{p;y}(x)\, d\omega(x) = (\Psi_{p';y'}(\cdot), \Psi_{p;y}(\cdot))_{\mathcal{H}_0} .
\]
First our purpose is to verify the following lemma.
Lemma 10. The operator $P : \mathcal{H}_0((0,\infty) \times \overline{\mathcal{A}_{\mathrm{ext}}}) \to \mathcal{W}$ defined by (32), (33) is a projection operator.

Proof. Assume that $H = \tilde{U} = (WT)(U) \in \mathcal{W}$. Then it is not difficult to see that for $p \in (0,\infty)$ and $x \in \overline{\mathcal{A}_{\mathrm{ext}}}$
\[
P(H)(p;x) = \int_0^\infty \int_A K(p;x \mid \sigma;y)\,(WT)(U)(\sigma;y)\, d\omega(y)\, \frac{d\sigma}{\sigma} = \tilde{U}(p;x) = (WT)(U)(p;x) .
\]
Consequently, $P(H)(\cdot,\cdot) = H(\cdot,\cdot)$ for all $H(\cdot,\cdot) \in \mathcal{W}$. Next we want to show that for all $H^\perp(\cdot,\cdot) \in \mathcal{W}^\perp$ we have $P(H^\perp(\cdot,\cdot)) = 0$. For that purpose, consider an element $H^\perp(\cdot,\cdot)$ of $\mathcal{W}^\perp$. Then, for all $U \in \mathcal{H}_0$ we have
\[
\left(H^\perp(\cdot,\cdot), (WT)(U)(\cdot,\cdot)\right)_{\mathcal{H}_0((0,\infty)\times\overline{\mathcal{A}_{\mathrm{ext}}})} = 0 . \tag{34}
\]
Now, for $p \in (0,\infty)$ and $x \in \overline{\mathcal{A}_{\mathrm{ext}}}$, we obtain under the special choice $U = \Psi_{p;x}(\cdot)$
\[
0 = \left(H^\perp(\cdot,\cdot), (WT)(\Psi_{p;x})(\cdot,\cdot)\right)_{\mathcal{H}_0((0,\infty)\times\overline{\mathcal{A}_{\mathrm{ext}}})}
= \int_0^\infty \int_A H^\perp(\sigma;y)\, K(\sigma;y \mid p;x)\, d\omega(y)\, \frac{d\sigma}{\sigma}
= P(H^\perp)(p;x) .
\]
In other words, $P(H^\perp(\cdot,\cdot)) = 0$ for all $H^\perp(\cdot,\cdot) \in \mathcal{W}^\perp$. Therefore we find
\[
P\left(\mathcal{H}_0((0,\infty) \times \overline{\mathcal{A}_{\mathrm{ext}}})\right) = \mathcal{W}, \quad P\,\mathcal{W}^\perp = 0, \quad P^2 = P,
\]
as desired. □
$\mathcal{W} = (WT)(\mathcal{H}_0)$ is characterized as follows:

Lemma 11. $H \in \mathcal{W}$ if and only if the 'consistency condition'
\[
H(p';y') = \int_0^\infty \int_A K(p';y' \mid p;y)\, H(p;y)\, d\omega(y)\, \frac{dp}{p}
\]
is satisfied. Obviously,
\[
K(p';y' \mid \cdot;\cdot) \in \mathcal{W}, \quad p' \in (0,\infty),\ y' \in \overline{\mathcal{A}_{\mathrm{ext}}},
\]
\[
K(\cdot;\cdot \mid p;y) \in \mathcal{W}, \quad p \in (0,\infty),\ y \in \overline{\mathcal{A}_{\mathrm{ext}}},
\]
i.e.,
\[
(p';y' \mid p;y) \mapsto K(p';y' \mid p;y)
\]
is the (uniquely determined) reproducing kernel in $\mathcal{W}$. Summarizing our results we therefore obtain the following theorem.

Theorem 7. Let $H$ be an arbitrary element of $\mathcal{H}_0((0,\infty) \times \overline{\mathcal{A}_{\mathrm{ext}}})$. Then the unique function $U_H \in \mathcal{H}_0$ satisfying the property
\[
\left\|H - \tilde{U}_H\right\|_{\mathcal{H}_0((0,\infty)\times\overline{\mathcal{A}_{\mathrm{ext}}})} = \inf_{U\in\mathcal{H}_0} \left\|H - \tilde{U}\right\|_{\mathcal{H}_0((0,\infty)\times\overline{\mathcal{A}_{\mathrm{ext}}})}
\]
(with $\tilde{U} = (WT)(U)$) is given by
\[
U_H = \int_0^\infty \int_A H(p;y)\,\Psi_{p;y}(\cdot)\, d\omega(y)\, \frac{dp}{p} . \tag{35}
\]
Theorem 7 means that $U_H$ defined above comes closest in the sense that the '$\mathcal{H}_0((0,\infty)\times\overline{\mathcal{A}_{\mathrm{ext}}})$-distance' of its wavelet transform $\tilde{U}_H$ to $H$ assumes a minimum. In analogy to the windowed Fourier theory we call $U_H$ the least-squares approximation to the desired potential $U \in \mathcal{H}_0$. Of course, for $H \in \mathcal{W}$, Eq. (35) reduces to the reconstruction formula. All aspects of least-squares approximation discussed earlier for the windowed Fourier transform remain valid in the same way. The coefficients in $\mathcal{H}_0((0,\infty)\times\overline{\mathcal{A}_{\mathrm{ext}}})$ for reconstructing a potential $U \in \mathcal{H}_0$ are not unique. This can be readily deduced from the following identity:
\[
U = \int_0^\infty \int_A \left(\tilde{U}(p;y) + \tilde{U}^\perp(p;y)\right) \Psi_{p;y}(\cdot)\, d\omega(y)\, \frac{dp}{p},
\]
where $\tilde{U} = (WT)(U)$ is a member of $\mathcal{W}$ and $\tilde{U}^\perp$ is an arbitrary member of $\mathcal{W}^\perp$. Our considerations enable us to formulate the following minimum norm representation:

Theorem 8. For arbitrary $U \in \mathcal{H}_0$ the function $\tilde{U} = (WT)(U) \in \mathcal{W}$ is the unique element in $\mathcal{H}_0((0,\infty)\times\overline{\mathcal{A}_{\mathrm{ext}}})$ satisfying
\[
\|\tilde{U}\|_{\mathcal{H}_0((0,\infty)\times\overline{\mathcal{A}_{\mathrm{ext}}})} = \inf_{\substack{H\in\mathcal{H}_0((0,\infty)\times\overline{\mathcal{A}_{\mathrm{ext}}}) \\ (WT)^{-1}(H)=U}} \|H\|_{\mathcal{H}_0((0,\infty)\times\overline{\mathcal{A}_{\mathrm{ext}}})} .
\]
8 Scale Discrete Wavelet Transform
Until now emphasis has been put on the whole scale interval. In what follows, however, scale discrete wavelets will be discussed. We start from a strictly decreasing sequence $\{p_j\}$, $j \in \mathbb{Z}$, such that $\lim_{j\to\infty} p_j = 0$ and $\lim_{j\to-\infty} p_j = \infty$. For reasons of simplicity, we choose $p_j = 2^{-j}$, $j \in \mathbb{Z}$, throughout this paper.

Definition 9. Let $\varphi_0^D = \varphi_{p_0}$ be the generator of a scaling function (as defined above). Then the piecewise continuous function $\psi_0^D : [0,\infty) \to \mathbb{R}$ is said to be the $\mathcal{H}_0$-generator of the mother wavelet kernel $\Psi_0^D$ (of a scale discrete harmonic wavelet) if it satisfies the admissibility condition and satisfies, in addition, the difference equation
\[
(\psi_0^D(t))^2 = (\varphi_0^D(t/2))^2 - (\varphi_0^D(t))^2, \quad t \in [0,\infty) .
\]
For $\varphi_0^D$ and $\psi_0^D$, respectively, we introduce functions $\varphi_j^D$ and $\psi_j^D$, respectively, in the canonical way:
\[
\varphi_j^D(t) = D_j^D\varphi_0^D(t) = \varphi_0^D(2^{-j}t), \quad t \in [0,\infty),
\]
\[
\psi_j^D(t) = D_j^D\psi_0^D(t) = \psi_0^D(2^{-j}t), \quad t \in [0,\infty) .
\]
Then each function $\varphi_j^D$ and $\psi_j^D$, respectively, $j \in \mathbb{Z}$, satisfies the admissibility condition. This enables us to write $\psi_j^D = D_1^D\psi_{j-1}^D$, $j \in \mathbb{Z}$, whenever $\psi_0^D$ satisfies the admissibility condition. Correspondingly, for the $\mathcal{H}_0$-kernel $\Psi_j^D$, $j \in \mathbb{Z}$, generated by $\psi_j^D$ via $(\Psi_j^D)^\wedge(n) = \psi_j^D(n)$, $n \in \mathbb{N}_0$, we let
\[
\Psi_j^D = D_1^D\Psi_{j-1}^D, \quad j \in \mathbb{Z} .
\]

Definition 10. The subfamily $\{\Psi_j^D\}$, $j \in \mathbb{Z}$, of the space $\mathcal{H}_0$ generated by $\psi_j^D$ via $(\Psi_j^D)^\wedge(n) = \psi_j^D(n)$, $n = 0,1,\ldots$, is called a scale discrete harmonic wavelet.

The generator $\psi_0^D : [0,\infty) \to \mathbb{R}$ and its dilates $\psi_j^D = D_j^D\psi_0^D$ satisfy the following properties:
\[
\psi_j^D(0) = 0, \quad j \in \mathbb{Z},
\]
\[
(\psi_j^D(t))^2 = (\varphi_{j+1}^D(t))^2 - (\varphi_j^D(t))^2, \quad j \in \mathbb{Z},\ t \in [0,\infty),
\]
\[
(\varphi_0^D(t))^2 + \sum_{j=0}^{J} (\psi_j^D(t))^2 = (\varphi_{J+1}^D(t))^2, \quad J \in \mathbb{N}_0,\ t \in [0,\infty) . \tag{36}
\]
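The telescoping property (36) is immediate from the difference equation and easy to verify numerically. A sketch assuming the exponential generator of Section 9:

```python
import numpy as np

# Telescoping property (36) for scale discrete wavelets:
#   (phi_0(t))^2 + sum_{j=0}^{J} (psi_j(t))^2 = (phi_{J+1}(t))^2,
# with phi_j(t) = phi_0(2^{-j} t) and (psi_j(t))^2 = (phi_{j+1}(t))^2 - (phi_j(t))^2.
# The exponential generator below is an assumed example from Section 9.

R = 1.0
phi0 = lambda t: np.exp(-R * t)
phi = lambda j, t: phi0(2.0 ** (-j) * t)
psi_sq = lambda j, t: phi(j + 1, t) ** 2 - phi(j, t) ** 2

t = np.linspace(0.0, 10.0, 101)
J = 6
lhs = phi(0, t) ** 2 + sum(psi_sq(j, t) for j in range(J + 1))
rhs = phi(J + 1, t) ** 2

assert np.allclose(lhs, rhs)
```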
It is natural to apply the operator $D_j^D$ directly to the mother wavelet. In connection with the 'shifting operator' $S_y$, this will lead us to the definition of the kernel $\Psi_{j;y}^D$. More explicitly, we have
\[
\Psi_j^D = D_j^D\Psi_0^D, \quad j \in \mathbb{Z}, \tag{37}
\]
and
\[
(S_y\Psi_j^D)(x) = \Psi_{j;y}^D(x) = \Psi_j^D(x,y), \quad (x,y) \in \overline{\mathcal{A}_{\mathrm{ext}}} \times \overline{\mathcal{A}_{\mathrm{ext}}} . \tag{38}
\]
Putting together (37) and (38) we therefore obtain for $(x,y) \in \overline{\mathcal{A}_{\mathrm{ext}}} \times \overline{\mathcal{A}_{\mathrm{ext}}}$
\[
\Psi_{j;y}^D(x) = (S_y D_j^D\Psi_0^D)(x) = \sum_{n=0}^{\infty} \frac{2n+1}{4\pi\alpha^2}\,\psi_0^D(2^{-j}n)\left(\frac{\alpha^2}{|x|\,|y|}\right)^{n+1} P_n\!\left(\frac{x}{|x|}\cdot\frac{y}{|y|}\right) .
\]
Definition 11. Let the $\mathcal{H}_0$-kernel $\Psi_0^D$ be a mother wavelet kernel corresponding to a scaling function $\varphi_0^D = \varphi_{p_0}$. Then the scale discrete wavelet transform at scale $j \in \mathbb{Z}$ and position $y \in \overline{\mathcal{A}_{\mathrm{ext}}}$ is defined by
\[
(WT)^D(U)(j;y) = (\Psi_{j;y}^D, U)_{\mathcal{H}_0}, \quad U \in \mathcal{H}_0 .
\]
It should be mentioned that each scale continuous wavelet $\{\Psi_p\}$, $p \in (0,\infty)$, implies a scale discrete wavelet $\{\Psi_j^D\}$, $j \in \mathbb{Z}$, by letting
\[
\left((\Psi_j^D)^\wedge(n)\right)^2 = \left((\Phi_{j+1}^D)^\wedge(n)\right)^2 - \left((\Phi_j^D)^\wedge(n)\right)^2, \quad n \in \mathbb{N}_0,
\]
where
\[
\left((\Phi_j^D)^\wedge(n)\right)^2 = \int_{p_j}^\infty (\psi_p(n))^2\, \frac{dp}{p},
\]
i.e.,
\[
(\Psi_j^D)^\wedge(n) = \left( \int_{p_{j+1}}^{p_j} (\psi_p(n))^2\, \frac{dp}{p} \right)^{1/2} .
\]
Note that this construction leads to a partition of unity in the following sense:
\[
\left((\Phi_0^D)^\wedge(n)\right)^2 + \sum_{j=0}^{\infty} \left((\Psi_j^D)^\wedge(n)\right)^2 = \sum_{j=-\infty}^{\infty} \left((\Psi_j^D)^\wedge(n)\right)^2 = 1
\]
for $n \in \mathbb{N}$. Our investigations now enable us to reconstruct a potential $U \in \mathcal{H}_0$ from its discrete wavelet transform as follows.
Theorem 9. Any potential $U \in \mathcal{H}_0$ can be approximated by its $J$-level scale discrete wavelet approximation
\[
U_J = \int_A (U, \Phi_{0;y}^D)_{\mathcal{H}_0}\,\Phi_{0;y}^D(\cdot)\, d\omega(y) + \sum_{j=0}^{J} \int_A (WT)^D(U)(j;y)\,\Psi_{j;y}^D(\cdot)\, d\omega(y) \tag{39}
\]
in the sense that
\[
\lim_{J\to\infty} \|U - U_J\|_{\mathcal{H}_0} = 0 .
\]

Proof. Let $U$ be a member of class $\mathcal{H}_0$. From (36) it follows that
\[
\int_A (U, \Phi_{0;y}^D)_{\mathcal{H}_0}\,\Phi_{0;y}^D(\cdot)\, d\omega(y) + \sum_{j=0}^{J} \int_A (WT)^D(U)(j;y)\,\Psi_{j;y}^D(\cdot)\, d\omega(y) = \int_A (U, \Phi_{J+1;y}^D)_{\mathcal{H}_0}\,\Phi_{J+1;y}^D(\cdot)\, d\omega(y) .
\]
Letting $J$ tend to infinity, the result follows easily from Theorem 2. □
As an immediate consequence we obtain

Corollary 2. Let $\Sigma$ be a regular surface. Under the assumptions of Theorem 9,
\[
\lim_{J\to\infty} \sup_{x\in\overline{\Sigma_{\mathrm{ext}}}} |U(x) - U_J(x)| = 0 .
\]
8.1 Multiresolution

Next we come to the concept of multiresolution by means of scale discrete harmonic wavelets. For $U \in \mathcal{H}_0$ denote by $R_j^D$ (band-pass filters) and $P_j^D$ (low-pass filters) the convolution operators given by
\[
R_j^D(U) = \Psi_j^D * \Psi_j^D * U, \quad U \in \mathcal{H}_0,
\]
\[
P_j^D(U) = \Phi_j^D * \Phi_j^D * U, \quad U \in \mathcal{H}_0,
\]
respectively. The scale spaces $\mathcal{V}_j^D$ and the detail spaces $\mathcal{W}_j^D$ are defined by
\[
\mathcal{V}_j^D = P_j^D(\mathcal{H}_0), \quad \mathcal{W}_j^D = R_j^D(\mathcal{H}_0),
\]
respectively. The collection $\{\mathcal{V}_j^D\}$ of all spaces $\mathcal{V}_j^D$, $j \in \mathbb{Z}$, is called a multiresolution analysis of $\mathcal{H}_0$. Loosely spoken, $\mathcal{V}_j^D$ contains all $j$-scale smooth functions of $\mathcal{H}_0$. The notion 'detail space' means that $\mathcal{W}_j^D$ contains the 'detail' information needed to go from an approximation at resolution $j$ to an approximation at resolution $j+1$.
To be more concrete, $\mathcal{W}_j^D$ denotes the space complementary to $\mathcal{V}_j^D$ in $\mathcal{V}_{j+1}^D$ in the sense that
\[
\mathcal{V}_{j+1}^D = \mathcal{V}_j^D + \mathcal{W}_j^D .
\]
Note that
\[
\mathcal{V}_{J+1}^D = \mathcal{V}_0^D + \sum_{j=0}^{J} \mathcal{W}_j^D .
\]
Any potential $U \in \mathcal{H}_0$ can be decomposed in the following way: Starting from $P_0^D(U)$ we have
\[
P_{J+1}^D(U) = P_0^D(U) + \sum_{j=0}^{J} R_j^D(U) .
\]
The partial reconstruction $R_j^D(U)$ is nothing else than the difference of two 'smoothings' $P_{j+1}^D(U)$ and $P_j^D(U)$ at consecutive scales:
\[
R_j^D(U) = P_{j+1}^D(U) - P_j^D(U) .
\]
Moreover, in spectral language we have
\[
(P_j^D(U), H_{n,l}(\alpha;\cdot))_{\mathcal{H}_0} = (U, H_{n,l}(\alpha;\cdot))_{\mathcal{H}_0} \left(\varphi_j^D(n)\right)^2, \quad (n,l) \in \mathcal{N},
\]
\[
(R_j^D(U), H_{n,l}(\alpha;\cdot))_{\mathcal{H}_0} = (U, H_{n,l}(\alpha;\cdot))_{\mathcal{H}_0} \left(\psi_j^D(n)\right)^2, \quad (n,l) \in \mathcal{N} . \tag{40}
\]
The formulae (40) give the (scale discrete) wavelet decomposition an interpretation in terms of Fourier analysis by means of outer harmonics, by explaining how the frequency spectrum of a potential $U \in \mathcal{V}_j^D$ is divided up between the spaces $\mathcal{V}_{j-1}^D$ and $\mathcal{W}_{j-1}^D$. The multiresolution can be illustrated by the following scheme:
\[
\mathcal{V}_0^D \subset \mathcal{V}_1^D \subset \cdots \subset \mathcal{V}_j^D \subset \mathcal{V}_{j+1}^D \subset \cdots \subset \mathcal{H}_0,
\]
\[
P_0^D(U) + R_0^D(U) + \cdots + R_j^D(U) = P_{j+1}^D(U) \longrightarrow U \quad (j \to \infty),
\]
where at each step the detail $R_j^D(U) \in \mathcal{W}_j^D$ lifts the smoothing $P_j^D(U) \in \mathcal{V}_j^D$ to $P_{j+1}^D(U) \in \mathcal{V}_{j+1}^D$.
8.2 Least-squares Approximation

The reconstruction formula (Theorem 9) may be rewritten as follows:
\[
\lim_{J\to\infty} \|U - U_J\|_{\mathcal{H}_0} = 0, \quad U \in \mathcal{H}_0,
\]
where the $J$-level scale discrete wavelet approximation now reads in shorthand notation as follows:
\[
U_J = \sum_{j=-\infty}^{J} \int_A (WT)^D(U)(j;y)\,\Psi_{j;y}^D(\cdot)\, d\omega(y) .
\]
As in the continuous case we can make use of the projection property in the scale discrete case. We know already that $(WT)^D : \mathcal{H}_0 \to \mathcal{H}_0(\mathbb{Z} \times \overline{\mathcal{A}_{\mathrm{ext}}})$, where $\mathcal{H}_0(\mathbb{Z} \times \overline{\mathcal{A}_{\mathrm{ext}}})$ is the space of all functions $U : (j;x) \mapsto U(j;x)$ with $U(j;\cdot) \in \mathcal{H}_0$ for every $j \in \mathbb{Z}$ and
\[
\sum_{j=-\infty}^{\infty} \|U(j;\cdot)\|_{\mathcal{H}_0}^2 = \sum_{j=-\infty}^{\infty} \int_A |U(j;x)|^2\, d\omega(x) < \infty .
\]
It is not hard to see that
\[
\mathcal{W}^D = (WT)^D(\mathcal{H}_0) \subsetneq \mathcal{H}_0(\mathbb{Z} \times \overline{\mathcal{A}_{\mathrm{ext}}}) .
\]
Hence, we are able to define the projection operator $P^D : \mathcal{H}_0(\mathbb{Z} \times \overline{\mathcal{A}_{\mathrm{ext}}}) \to \mathcal{W}^D$ by
\[
P^D(U)(j';y') = \sum_{j=-\infty}^{\infty} \int_A K^D(j';y' \mid j;y)\, U(j;y)\, d\omega(y),
\]
where
\[
K^D(j';y' \mid j;y) = \int_A \Psi_{j';y'}^D(x)\,\Psi_{j;y}^D(x)\, d\omega(x) = (\Psi_{j';y'}^D(\cdot), \Psi_{j;y}^D(\cdot))_{\mathcal{H}_0} . \tag{41}
\]
In similarity to the results of the scale continuous case it can be deduced that $P^D$ is a projection operator. Therefore we have the following characterization of $\mathcal{W}^D$:

Lemma 12. $U(\cdot,\cdot) \in \mathcal{W}^D$ if and only if the 'consistency condition'
\[
U(j';y') = \sum_{j=-\infty}^{\infty} \int_A K^D(j';y' \mid j;y)\, U(j;y)\, d\omega(y) = \sum_{j=-\infty}^{\infty} \left(K^D(j';y' \mid j;\cdot), U(j;\cdot)\right)_{\mathcal{H}_0}
\]
is satisfied. Summarizing our results we obtain the following theorem.

Theorem 10. Let $H$ be an arbitrary element of $\mathcal{H}_0(\mathbb{Z} \times \overline{\mathcal{A}_{\mathrm{ext}}})$. Then the unique function $U_H^D \in \mathcal{H}_0$ satisfying
\[
\left\|H(\cdot,\cdot) - \tilde{U}_H^D(\cdot,\cdot)\right\|_{\mathcal{H}_0(\mathbb{Z}\times\overline{\mathcal{A}_{\mathrm{ext}}})} = \inf_{U\in\mathcal{H}_0} \left\|H(\cdot,\cdot) - \tilde{U}^D(\cdot,\cdot)\right\|_{\mathcal{H}_0(\mathbb{Z}\times\overline{\mathcal{A}_{\mathrm{ext}}})}
\]
(with $\tilde{U}^D = (WT)^D(U)$) is given by
\[
U_H^D(x) = \sum_{j=-\infty}^{\infty} \int_A H(j;y)\,\Psi_{j;y}^D(x)\, d\omega(y) .
\]
Moreover, we have

Theorem 11. For arbitrary $U \in \mathcal{H}_0$ the function $\tilde{U} = (WT)^D(U) \in \mathcal{W}^D$ is the unique element in $\mathcal{H}_0(\mathbb{Z} \times \overline{\mathcal{A}_{\mathrm{ext}}})$ satisfying
\[
\|\tilde{U}\|_{\mathcal{H}_0(\mathbb{Z}\times\overline{\mathcal{A}_{\mathrm{ext}}})} = \inf_{\substack{H\in\mathcal{H}_0(\mathbb{Z}\times\overline{\mathcal{A}_{\mathrm{ext}}}) \\ ((WT)^D)^{-1}(H)=U}} \|H\|_{\mathcal{H}_0(\mathbb{Z}\times\overline{\mathcal{A}_{\mathrm{ext}}})} .
\]
9 Examples

Now we are prepared to introduce some important examples of scaling functions and corresponding wavelets (cf. [6]). We distinguish two types of wavelets, namely non-band-limited and band-limited wavelets. Since there are only a few conditions for a function to be a generator of a scaling function, a large number of wavelet examples may be listed. For the sake of brevity, however, we have to concentrate on a few examples.

9.1 Non-band-limited Wavelets

All wavelets discussed in this subsection share the fact that their generators have global support.

Rational Wavelets. Rational wavelets are realized by the function $\varphi_1 : [0,\infty) \to \mathbb{R}$ given by
\[
\varphi_1(t) = (1+t)^{-s}, \quad t \in [0,\infty),\ s > 1 .
\]
Indeed, $\varphi_1(0) = 1$, $\varphi_1$ is monotonically decreasing, it is continuously differentiable on the interval $[0,\infty)$, and we have $\varphi_1(t) = O(|t|^{-1-\varepsilon})$, $t \to \infty$, for $s = 1+\varepsilon$, $\varepsilon > 0$. The (scale continuous) scaling function $\{\Phi_p\}$, $p \in (0,\infty)$, is given by
\[
\Phi_p(x,y) = \sum_{n=0}^{\infty} \frac{2n+1}{4\pi\alpha^2}\,(1+pn)^{-s}\left(\frac{\alpha^2}{|x|\,|y|}\right)^{n+1} P_n\!\left(\frac{x}{|x|}\cdot\frac{y}{|y|}\right), \quad (x,y) \in \overline{\mathcal{A}_{\mathrm{ext}}} \times \overline{\mathcal{A}_{\mathrm{ext}}} .
\]
It is easy to see that
\[
\psi_1(t) = (2st)^{1/2}\,(1+t)^{-s-1/2},
\]
so that the scale continuous harmonic wavelets $\{\Psi_p\}$, $p \in (0,\infty)$, are obtained from
\[
\psi_p(n) = (2spn)^{1/2}\,(1+pn)^{-s-1/2}, \quad s > 1,\ n \in \mathbb{N}_0,
\]
whereas the scale discrete wavelets $\{\Psi_j^D\}$, $j \in \mathbb{Z}$, are generated by
\[
\psi_j^D(n) = \left((1+2^{-j-1}n)^{-2s} - (1+2^{-j}n)^{-2s}\right)^{1/2}, \quad j \in \mathbb{Z},\ n \in \mathbb{N}_0 .
\]
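The closed form of the rational wavelet generator can be cross-checked against the differential relation $(\psi_p(t))^2 = -p\,\partial_p(\varphi_p(t))^2$. A small numerical sketch:

```python
import numpy as np

# Consistency check for the rational wavelets: the closed form
#   psi_p(t) = (2 s p t)^(1/2) (1 + p t)^(-s - 1/2)
# should satisfy (psi_p(t))^2 = -p d/dp (phi_p(t))^2 with phi_p(t) = (1 + p t)^(-s).

s, t = 2.0, 3.0
p = np.linspace(0.05, 4.0, 200)
h = 1e-6

phi_sq = lambda q: (1.0 + q * t) ** (-2 * s)                 # (phi_q(t))^2
dphi_sq_dp = (phi_sq(p + h) - phi_sq(p - h)) / (2 * h)       # central difference
psi_sq_numeric = -p * dphi_sq_dp
psi_closed = np.sqrt(2 * s * p * t) * (1.0 + p * t) ** (-s - 0.5)

assert np.max(np.abs(psi_sq_numeric - psi_closed ** 2)) < 1e-8
```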
Exponential Wavelets. We choose $\varphi_1(t) = e^{-Rt}$, $R > 0$, $t \in [0,\infty)$. Then it follows that
\[
\varphi_p(t) = e^{-Rpt}, \quad p \in (0,\infty),
\]
and
\[
\psi_p(t) = (2Rpt)^{1/2}\, e^{-Rpt}, \quad p \in (0,\infty) .
\]
Moreover,
\[
\psi_j^D(n) = \left(e^{-2^{-j}Rn} - e^{-2^{-j+1}Rn}\right)^{1/2}, \quad j \in \mathbb{Z},\ n \in \mathbb{N}_0 .
\]
9.2 Band-limited Wavelets

All wavelets discussed in this subsection are chosen in such a way that the support of their generators is compact. As a consequence the resulting wavelets are band-limited. A particular result is that the Shannon wavelets provide us with an orthogonal multiresolution.

Shannon Wavelet. The generator of the Shannon scaling function is defined by
\[
\varphi_1(t) = \begin{cases} 1 & \text{for } t \in [0,1) \\ 0 & \text{for } t \in [1,\infty) . \end{cases}
\]
The scale continuous harmonic scaling function $\{\Phi_p\}$, $p \in (0,\infty)$, is given via
\[
\varphi_p(n) = \begin{cases} 1 & \text{for } n \in [0,p^{-1}) \\ 0 & \text{for } n \in [p^{-1},\infty) . \end{cases}
\]
A scale continuous wavelet does not make sense. However, a scale discrete wavelet $\{\Psi_j^D\}$, $j \in \mathbb{Z}$, is available. More precisely,
\[
\psi_j^D(n) = \begin{cases} 1 & \text{for } n \in [2^j, 2^{j+1}) \\ 0 & \text{elsewhere.} \end{cases}
\]
But this means that the scale discrete multiresolution is orthogonal (i.e., $\mathcal{V}_{j+1}^D = \mathcal{V}_j^D \oplus \mathcal{W}_j^D$ is an orthogonal sum for all $j$).

Modified Shannon Wavelet. The generator of the modified Shannon scaling function reads as follows:
\[
\varphi_1(t) = \begin{cases} 1 & \text{for } t \in [0, \tfrac{1}{2}) \\ (-\ln t)^{1/2} & \text{for } t \in [\tfrac{1}{2}, 1) \\ 0 & \text{for } t \in [1,\infty) . \end{cases}
\]
The scale continuous harmonic wavelets $\{\Psi_p\}$, $p \in (0,\infty)$, are given by
\[
\psi_p(n) = \begin{cases} 0 & \text{for } n \in [0, \tfrac{1}{2}p^{-1}) \\ 1 & \text{for } n \in [\tfrac{1}{2}p^{-1}, p^{-1}) \\ 0 & \text{for } n \in [p^{-1},\infty) . \end{cases}
\]
The scale discrete harmonic wavelets $\{\Psi_j^D\}$, $j \in \mathbb{Z}$, are obtainable via
\[
\psi_j^D(n) = \begin{cases} 0 & \text{for } n \in [0, 2^{j-1}) \\ \left(1+\ln(2^{-j}n)\right)^{1/2} & \text{for } n \in [2^{j-1}, 2^j) \\ \left(-\ln(2^{-j-1}n)\right)^{1/2} & \text{for } n \in [2^j, 2^{j+1}) \\ 0 & \text{for } n \in [2^{j+1},\infty) . \end{cases}
\]
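The band structure of both examples can be inspected directly on the symbols: the Shannon bands are disjoint (hence the orthogonal multiresolution), while the modified Shannon bands overlap but still form a partition of unity. A sketch (the modified Shannon branch values follow the formulas given above):

```python
import numpy as np

# Band structure of the Shannon and modified Shannon scale discrete wavelets.
# Shannon: psi_j(n) = 1 on [2^j, 2^{j+1}) -> disjoint dyadic bands (orthogonality).
# Modified Shannon: overlapping bands, but still a partition of unity.

n = np.arange(0, 1024)

def shannon_psi(j, n):
    return ((n >= 2 ** j) & (n < 2 ** (j + 1))).astype(float)

def mod_shannon_psi_sq(j, n):
    nf = n.astype(float)
    out = np.zeros_like(nf)
    lo = (nf >= 2.0 ** (j - 1)) & (nf < 2.0 ** j)
    hi = (nf >= 2.0 ** j) & (nf < 2.0 ** (j + 1))
    out[lo] = 1.0 + np.log(2.0 ** (-j) * nf[lo])
    out[hi] = -np.log(2.0 ** (-j - 1) * nf[hi])
    return out

# Disjoint supports: Shannon detail bands never overlap.
for j in range(9):
    for k in range(j + 1, 10):
        assert np.all(shannon_psi(j, n) * shannon_psi(k, n) == 0.0)

# Partition of unity for the modified Shannon wavelet (phi_0(n)^2 = 1 only at n = 0):
phi0_sq = (n == 0).astype(float)
total = phi0_sq + sum(mod_shannon_psi_sq(j, n) for j in range(12))
assert np.allclose(total, 1.0)
```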
C(ubic) P(olynomial) Wavelet (CP-Wavelet). In order to have a higher intensity of the smoothing effect than in the case of the modified Shannon wavelets, we introduce a function $\varphi_1 : [0,\infty) \to \mathbb{R}$ in such a way that $\varphi_1|_{[0,1]}$ coincides with the uniquely determined cubic polynomial $p : [0,1] \to [0,1]$ with the properties:
\[
p(0) = 1, \quad p(1) = 0, \quad p'(0) = 0, \quad p'(1) = 0 .
\]
It is easy to see that these properties are fulfilled by
\[
p(t) = (1-t)^2(1+2t), \quad t \in [0,1] .
\]
This leads us to a function $\varphi_1 : [0,\infty) \to \mathbb{R}$ given by
\[
\varphi_1(t) = \begin{cases} (1-t)^2(1+2t) & \text{for } t \in [0,1) \\ 0 & \text{for } t \in [1,\infty) . \end{cases}
\]
It is clear that $\varphi_1$ is a monotonically decreasing function. The (scale continuous) scaling function $\{\Phi_p\}$, $p \in (0,\infty)$, is given by
\[
\varphi_p(n) = \varphi_1(pn) = \begin{cases} (1-pn)^2(1+2pn) & \text{for } n \in [0,p^{-1}) \\ 0 & \text{for } n \in [p^{-1},\infty) . \end{cases}
\]
Scale continuous and discrete wavelets are obtainable by obvious manipulations.
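The four conditions on $p$ form a standard cubic Hermite interpolation problem with a unique solution; solving the corresponding linear system recovers $(1-t)^2(1+2t)$:

```python
import numpy as np

# The CP generator: the cubic p with p(0)=1, p(1)=0, p'(0)=0, p'(1)=0 is unique;
# solving the 4x4 Hermite interpolation system recovers (1-t)^2 (1+2t).

# p(t) = a0 + a1 t + a2 t^2 + a3 t^3; rows encode p(0), p(1), p'(0), p'(1).
A = np.array([[1.0, 0.0, 0.0, 0.0],
              [1.0, 1.0, 1.0, 1.0],
              [0.0, 1.0, 0.0, 0.0],
              [0.0, 1.0, 2.0, 3.0]])
b = np.array([1.0, 0.0, 0.0, 0.0])
coef = np.linalg.solve(A, b)               # coefficients of 1 - 3 t^2 + 2 t^3

t = np.linspace(0.0, 1.0, 101)
p_solved = coef[0] + coef[1] * t + coef[2] * t ** 2 + coef[3] * t ** 3
p_closed = (1.0 - t) ** 2 * (1.0 + 2.0 * t)

assert np.allclose(coef, [1.0, 0.0, -3.0, 2.0])
assert np.allclose(p_solved, p_closed)
assert np.all(np.diff(p_closed) <= 1e-12)  # monotonically decreasing on [0, 1]
```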
10 Fully Scale Discrete Wavelets

For $j = 0,1,\ldots$ let $b_i^{N_j}$, $i = 1,\ldots,N_j$, be the generating coefficients of the approximate integration rules (cf. [1])
\[
\int_A F(y)\, d\omega(y) = \sum_{i=1}^{N_j} b_i^{N_j}\, F\!\left(y_i^{N_j}\right) + \varepsilon_j(F), \quad F \in \mathcal{V}_j^D,
\]
with corresponding error terms satisfying
\[
\lim_{j\to\infty} |\varepsilon_j(F)| = 0 \quad \text{for all } F \in \mathcal{V}_j^D
\]
(i.e. the coefficients $b_i^{N_j}$, $i = 1,\ldots,N_j$, are supposed to be determined by an a priori calculation using approximate integration, interpolation, etc.). Note that the coefficients $b_i^{N_j}$ and the nodes $y_i^{N_j}$, $i = 1,\ldots,N_j$, are independent of the choice of $F \in \mathcal{V}_j^D$. Assume that $U$ is a potential of class $\mathrm{Pot}^{(0)}(\overline{\mathcal{A}_{\mathrm{ext}}}) \subset \mathcal{H}_0$. Then the $J$-level scale discrete wavelet approximation can be represented in fully discrete form as follows:
\[
U_J^D = \sum_{i=1}^{N_0} b_i^{N_0}\,\left(U, \Phi^D_{0;y_i^{N_0}}\right)_{\mathcal{H}_0}\,\Phi^D_{0;y_i^{N_0}} + \sum_{j=0}^{J}\sum_{i=1}^{N_j} b_i^{N_j}\,(WT)^D(U)\!\left(j;y_i^{N_j}\right)\Psi^D_{j;y_i^{N_j}} .
\]
This leads us to the following result.

Theorem 12. Any potential $U \in \mathrm{Pot}^{(0)}(\overline{\mathcal{A}_{\mathrm{ext}}})$ can be approximated in the form
\[
\lim_{J\to\infty} \int_A |U(y) - U_J^D(y)|^2\, d\omega(y) = 0 .
\]
Moreover,
\[
\lim_{J\to\infty} \sup_{x\in\overline{\Sigma_{\mathrm{ext}}}} |U(x) - U_J^D(x)| = 0
\]
for all (geodetically relevant) regular surfaces $\Sigma$ satisfying (5).

Fast evaluation methods (i.e., fast summation techniques, tree algorithms, and pyramid schemes) for numerical computation have been presented in [6]. The details will be omitted here. In what follows we illustrate the multiresolution analysis of the band-limited NASA, GSFC, and NIMA Earth Gravitational Model (EGM96) using exact outer harmonic approximate integration rules (cf. [6, 8]). Further graphical illustrations, including software for Windows 95/98/NT, are available for download at ftp://www.mathematik.uni-kl.de/pub/geodata . The data themselves are also available on the internet at http://www.mathematik.uni-kl.de/~wwwgeo/research/geodata . The numerical calculations of the representations of EGM96 presented in this work have been done by Dipl.-Math. Gunnar Schug, Geomathematics Group, University of Kaiserslautern.
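The fully discrete approximation stands or falls with integration rules that are (nearly) exact on the relevant scale spaces. As an illustration only, the sketch below uses a Gauss-Legendre grid in the polar distance combined with a uniform longitude grid, a classical rule for band-limited functions on the sphere; the exact rules referred to in the text are those of [1], [6]:

```python
import numpy as np

# Sketch of an approximate integration rule on the unit sphere of the kind needed
# in Section 10: Gauss-Legendre nodes in t = cos(theta) combined with a uniform
# longitude grid (a classical choice; an assumed stand-in for the rules of [1], [6]).

L = 8                                           # target bandwidth
t_nodes, t_weights = np.polynomial.legendre.leggauss(L + 1)
lam = 2.0 * np.pi * np.arange(2 * L + 2) / (2 * L + 2)
w_lam = 2.0 * np.pi / (2 * L + 2)

def integrate(f):
    """Approximate integral of f(t, lambda) over the unit sphere."""
    tt, ll = np.meshgrid(t_nodes, lam, indexing="ij")
    return np.sum(t_weights[:, None] * w_lam * f(tt, ll))

# The rule integrates low-degree spherical harmonics exactly:
assert abs(integrate(lambda t, l: np.ones_like(t)) - 4.0 * np.pi) < 1e-12
p2 = lambda t, l: 0.5 * (3.0 * t ** 2 - 1.0)            # Legendre P_2(t)
assert abs(integrate(p2)) < 1e-12                        # zero mean over sphere
assert abs(integrate(lambda t, l: p2(t, l) * np.cos(l))) < 1e-12
```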
References 1. Driscolt J.R., Healy D.M.: Computing Fourier Transforms and Conv01utions on the 2-Sphere. Adv. Appl. Math., 15, (1996), 202-250. 2. Freeden, W.: 0ber eine Klasse yon Integralformeln der Mathematischen Geod~sie. VerSff. Geod. Inst. RWTH Aachen, Heft 27, 1979. 3. Preeden, W.: On the Approximation of External Gravitational Potential With Closed Systems of (Trial) Functions., Bull. Geod., 54, (1980), 1-20. 4. Freeden, W.: Least Squares Approximation by Linear Combinations of (Multi-) Poles. Dept. Geod. Sci., 344, The Ohio State University, Columbus, 1983 5. Freeden, W.: A Spline Interpolation Method for Solving Boundary-value Problems of Potential Theory from Discretely Given Data. Numer. Meth. Part. Diff. Eqs. , 3, (1987), 375-398. 6. Freeden, W.: MultiscaIe Modelling of Spaceborne Geodata, Teubner-Verlag, Stuttgart Leipzig, 1999. 7. F~eeden, W., Gervens, T., Schreiner, M.: Constructive Approximation on the Sphere (with Applications to Geomathematics), Oxford Science Publications, Clarendon Press, Oxford, 1998. 8. Freeden, W , Glockner, O, Schreiner, M.: Spherical Panel Clustering and Its Numerical Aspects. J. of Geod., 72, (1998), 586-599. 9. Freeden, W., Kersten, H.: A Constructive Approximation Theorem for the Oblique Derivative Problem in Potential Theory. Math. Meth. in the Appl. Sci., 3, (1981), 104-114. 10. Freeden, W., Michel, V.: Constructive Approximation and Numerical Methods in Geodetic Research Today - An Attempt at a Categorization Based on an Uncertainty Principle., J. of Geod. (accepted for publication). 11. Freeden, W., Schneider, F.: An Integrated Wavelet Concept of Physical Geodesy. J. of Geod., 72, (1998), 259-281. 12. Freeden, W., Schneider, F.: Wavelet Approximation on Closed Surfaces and Their Application to Bounday-value Problems of Potential Theory. Math. Meth. Appl. Sci., 21, (t998), 129-165. 13. Freeden, W., Windheuser, U.: Spherical Wavelet Transform and its Discretization. Adv. Comp. Math., 5, (1996), 51-94. 14. 
Gabor, D.: Theory of Communications. J. Inst. Elec. Eng. (London), 93, (1946), 429-457. 15. Heil, C.E., Walnut~ D.F.: Continuous and Discrete Wavelet Transforms. SIAM Review, 31, (1989), 628-666. 16. Kaiser, G.: A Friendly Guide to Wavelets. Birkh~user-Verlag, Boston, 1994. 17. Kellogg, O.D.: Foundations of Potential Theory. Frederick Ungar Publishing Company, 1929. 18. Lemoine, F.G., Smith, D.E., Kunz, L., Smith, R., Pavlis, E.C., Pavlis, N.K., Klosko, S.M., Chinn, D.S., Torrence, M.H., Wilhamson, R.G., Cox, C.M., Rachlin, K.E., Wang, Y.M., Kenyon, S.C., Salman, R.., Trimmer, R., Rapp, R.H., Nerem, I~.S.: The Development of the NASA, GSFC, and NIMA Joint Geopotential Model. International Symposium on Gravity, Geoid, and Marine Geodesy, The University of Tokyo, Japan, Springer, IAG Symposia, 117, 1996, 461-469. 19. Michel, V.: A Multiscale Method for the Gravimetry Problem - Theoretical and Numerical Aspects of Harmonic and Anharmonic Modelling, PhD University of Kaiserslautern, Geomathematics Croup, Shaker-Verlag, Aachen, 1999.
LS Geopotential Approximation
Willi Freeden and Volker Michel
Fig. 2. Representation of EGM96 in CP-scale spaces (left) and CP-detail spaces (right) at height 0 km; scales 3 (top) to 5 (bottom). Values in [100 Gal m].
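The scale spaces and detail spaces plotted side by side in Figs. 2–7 obey the usual multiresolution relation: the approximation at scale j+1 is the approximation at scale j plus the detail at scale j. The following sketch illustrates that relation with a simple 1-D Haar-type averaging/differencing scheme in NumPy. It is only a schematic analogue of the spherical CP wavelet decomposition applied to EGM96 in the figures, and the function names are illustrative, not from any cited implementation.

```python
import numpy as np

def multiscale_decompose(signal, num_scales):
    """Split a 1-D signal (length divisible by 2**num_scales) into a coarse
    approximation and one detail array per scale, via pairwise averaging."""
    approx = np.asarray(signal, dtype=float)
    details = []
    for _ in range(num_scales):
        coarse = approx.reshape(-1, 2).mean(axis=1)   # scale-space content
        detail = approx - np.repeat(coarse, 2)        # detail-space content
        details.append(detail)
        approx = coarse
    return approx, details

def multiscale_reconstruct(approx, details):
    """Invert the decomposition: coarsest approximation plus all details."""
    signal = approx
    for detail in reversed(details):
        signal = np.repeat(signal, 2) + detail        # approx(j+1) = approx(j) + detail(j)
    return signal
```

Summing the left-hand panel (scale space) and the right-hand panel (detail space) at scale j reproduces the scale-space representation at scale j+1; the code's reconstruction loop performs exactly this accumulation.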
Fig. 3. Representation of EGM96 in CP-scale spaces (left) and CP-detail spaces (right) at height 0 km; scales 6 (top) to 8 (bottom). Values in [100 Gal m].
Fig. 4. Representation of EGM96 in CP-scale spaces (left) and CP-detail spaces (right) at height 200 km; scales 3 (top) to 5 (bottom). Values in [100 Gal m].
Fig. 5. Representation of EGM96 in CP-scale spaces (left) and CP-detail spaces (right) at height 200 km; scales 6 (top) to 8 (bottom). Values in [100 Gal m].
Fig. 6. Representation of EGM96 in CP-scale spaces (left) and CP-detail spaces (right) at height 400 km; scales 3 (top) to 5 (bottom). Values in [100 Gal m].
Fig. 7. Representation of EGM96 in CP-scale spaces (left) and CP-detail spaces (right) at height 400 km; scales 6 (top) to 8 (bottom). Values in [100 Gal m].