ADVANCES IN IMAGING AND ELECTRON PHYSICS VOLUME 94
EDITOR-IN-CHIEF
PETER W. HAWKES CEMES/Laboratoire d’Optique Electronique du Centre National de la Recherche Scientifique Toulouse, France
ASSOCIATE EDITORS
BENJAMIN KAZAN Xerox Corporation
Palo Alto Research Center Palo Alto, California
TOM MULVEY Department of Electronic Engineering and Applied Physics Aston University Birmingham, United Kingdom
Advances in
Imaging and Electron Physics EDITED BY PETER W. HAWKES CEMES/Laboratoire d’Optique Electronique du Centre National de la Recherche Scientifique Toulouse, France
VOLUME 94
ACADEMIC PRESS San Diego New York Boston London Sydney Tokyo Toronto
This book is printed on acid-free paper. Copyright © 1995 by ACADEMIC PRESS, INC. All Rights Reserved. No part of this publication may be reproduced or transmitted in any form or by any means, electronic or mechanical, including photocopy, recording, or any information storage and retrieval system, without permission in writing from the publisher.
Academic Press, Inc. A Division of Harcourt Brace & Company 525 B Street, Suite 1900, San Diego, California 92101-4495 United Kingdom Edition published by Academic Press Limited 24-28 Oval Road, London NW1 7DX
International Standard Serial Number: 1076-5670
International Standard Book Number: 0-12-014736-X
PRINTED IN THE UNITED STATES OF AMERICA
95 96 97 98 99 00 BB 9 8 7 6 5 4 3 2 1
CONTENTS

CONTRIBUTORS . . . . ix
PREFACE . . . . xi

Group Algebras in Signal and Image Processing
DENNIS WENZEL AND DAVID EBERLY

I. Introduction . . . . 2
II. Convolution by Sections . . . . 4
III. Group Algebras . . . . 8
IV. Direct Product Group Algebras . . . . 14
V. Semidirect Product Group Algebras . . . . 19
VI. Inverses of Group Algebra Elements . . . . 22
VII. Matrix Transforms . . . . 27
VIII. Feature Detection Using Group Algebras . . . . 44
IX. The GRASP Class Library . . . . 55
X. GRASP Class Library Source Code . . . . 63
References . . . . 78
Mirror Electron Microscopy
REINHOLD GODEHARDT

I. Introduction . . . . 81
II. Principles of Mirror Electron Microscopy . . . . 83
III. Applications of the Mirror Electron Microscope . . . . 95
IV. Fundamentals of Quantitative Mirror Electron Microscopy . . . . 98
V. Quantitative Contrast Interpretation of One-Dimensional Electrical Distortion Fields . . . . 109
VI. Quantitative Contrast Interpretation of One-Dimensional Magnetic Distortion Fields . . . . 114
VII. Quantitative Mirror Electron Microscopy, Example 1: Contrast Simulation of Magnetic Fringing Fields of Recorded Periodic Flux Reversals . . . . 119
VIII. Quantitative Mirror Electron Microscopy, Example 2: Investigation of Electrical Microfields of Rotational Symmetry and Evaluation of Current Filaments in Metal-Insulator-Semiconductor Samples . . . . 124
IX. Concluding Remarks . . . . 144
References . . . . 145
Rough Sets
JERZY W. GRZYMALA-BUSSE

I. Fundamentals of Rough Set Theory . . . . 152
II. Rough Set Theory and Evidence Theory . . . . 171
III. Some Applications of Rough Sets . . . . 182
IV. Conclusions . . . . 194
References . . . . 194
Theoretical Concepts of Electron Holography
FRANK KAHL AND HARALD ROSE

I. Introduction . . . . 197
II. The Scattering Amplitude . . . . 200
III. Image Formation in Transmission Electron Microscopy . . . . 203
IV. Off-Axis Holography in the Transmission Electron Microscope . . . . 213
V. Image Formation in Scanning Transmission Electron Microscopy . . . . 221
VI. Off-Axis Scanning Transmission Electron Microscope Holography . . . . 232
VII. Conclusion . . . . 252
Appendix: Fresnel Diffraction in Off-Axis Transmission Electron Microscope Holography . . . . 253
References . . . . 256
Quantum Neural Computing
SUBHASH C. KAK

I. Introduction . . . . 260
II. Turing Test Revisited . . . . 266
III. Neural Network Models . . . . 272
IV. The Binding Problem and Learning . . . . 284
V. On Indivisible Phenomena . . . . 293
VI. On Quantum and Neural Computation . . . . 298
VII. Structure and Information . . . . 305
VIII. Concluding Remarks . . . . 309
References . . . . 310
Signal Description: New Approaches, New Results
ALEXANDER M. ZAYEZDNY AND ILAN DRUCKMANN

I. Introduction . . . . 315
II. Theoretical Basis of the Structural Properties Method . . . . 327
III. Generalized Phase Planes . . . . 347
IV. Applications . . . . 365
References . . . . 394

INDEX . . . . 397
CONTRIBUTORS

Numbers in parentheses indicate the pages on which the authors’ contributions begin.
ILAN DRUCKMANN (315), Lasers Group, Nuclear Research Center Negev, Beer Sheva 84190, Israel
DAVID EBERLY (1), Department of Computer Science and Neurosurgery, University of North Carolina, Chapel Hill, North Carolina 27599
REINHOLD GODEHARDT (81), Department of Materials Science, Martin Luther University Halle-Wittenberg, D-06217 Merseburg, Germany
JERZY W. GRZYMALA-BUSSE (151), Department of Electrical Engineering and Computer Science, University of Kansas, Lawrence, Kansas 66045
FRANK KAHL (197), Institute of Applied Physics, Technical University of Darmstadt, D-64289 Darmstadt, Germany
SUBHASH C. KAK (259), Department of Electrical and Computer Engineering, Louisiana State University, Baton Rouge, Louisiana 70803
HARALD ROSE (197), Institute of Applied Physics, Technical University of Darmstadt, D-64289 Darmstadt, Germany
DENNIS WENZEL (1), Phototelesis Corporation, San Antonio, Texas 78230
ALEXANDER M. ZAYEZDNY (315), Department of Electrical Engineering and Computers, Ben Gurion University of the Negev, Beer Sheva 84105, Israel
PREFACE
The six chapters that make up this volume are concerned with themes of current interest in image processing, electron microscopy and holography, signal handling, and set theory. We begin with a study by Dennis Wenzel and David Eberly of the role of group algebras in image processing, which is inspired by the heavy use of convolutional filters in image enhancement. The opening part of this chapter is inevitably rather abstruse but the authors have taken considerable trouble to provide examples of each stage of development to assist the reader. It emerges that there is a much wider class of convolutions than the familiar cyclic ones and that the members of this class may well be valuable for image analysis; once again examples are provided. The chapter ends with a formal description of a C++ class library called Group Algebras for Signal Processing (GRASP). The second contribution, by Reinhold Godehardt, contains a full account, the first such survey for several years, of a type of electron microscope in which the beam is reflected from the surface of the specimen, the mirror electron microscope. Here the beam is incident approximately normal to the specimen plane and hence a specially designed instrument is needed, unlike the reflection modes of the everyday transmission electron microscope, for which only the specimen holder is different. The author first discusses the history and instrumental aspects of these microscopes, which go back to the first decade of the electron microscope; he then presents in detail quantitative mirror electron microscopy of several kinds of objects. A short section is devoted to the principles, after which one-dimensional electric and magnetic distortion fields are examined. Several concrete examples are investigated in great detail, notably metal-insulator-semiconductor structures. 
The chapter is dedicated to Johannes Heydenreich, who has contributed to our understanding of these relatively unfamiliar instruments, on the occasion of his 65th birthday and we associate our good wishes with those of the author. In the third chapter, by Jerzy W. Grzymala-Busse, we are introduced to the notions of rough sets. I came across these in a volume of conference proceedings and, suspecting that I was not alone in not knowing what they were, I was delighted that the acknowledged expert agreed to prepare an account of them and their practical uses for these Advances. They go back to the work of Z. Pawlak in 1982 and are proving to be invaluable for
handling granularity of data, inconsistent data, and other related questions. Jerzy W. Grzymala-Busse first sets out the theory of these sets (which are not to be confused with fuzzy sets) and then discusses several practical examples. I have no doubt that this account of the subject will be welcomed and heavily used.

Holography was invented by D. Gabor in the late 1940s in the hope of circumventing the limited resolution of the electron microscope imposed by the aberrations of the objective lens. With the invention of the laser, photons took over, but, thanks to the introduction of the electron biprism by Möllenstedt and Düker and to the development of field-emission electron sources, it was revived in electron microscopy in the late 1960s. For many years the most impressive results came from A. Tonomura and colleagues in the Hitachi Research Laboratory in Japan, but they were soon joined by H. Lichte’s team in Tübingen and another group in Bologna around G. Pozzi. Very impressive reconstructions are now being made from electron holograms, especially since numerical reconstruction (rather than optical reconstruction) has become current. A study of the statistical properties of these techniques is thus urgently needed so that we can understand how far such reconstructions can be expected to go and how reliable they will be. This is one of the main preoccupations of Frank Kahl and Harald Rose, from the Technical University of Darmstadt, who have contributed a careful study of the theoretical aspects of electron holography. Both the conventional and the scanning transmission instruments are considered and the account of the latter contains much new material not available in such detail elsewhere.

In order to introduce the fifth chapter, by Subhash C. Kak, I cannot do better than quote the opening sentences: “This chapter reviews the limitations of the standard computing paradigm and sketches the concept of quantum neural computing. . . .
A quantum neural computer is a single machine that reorganizes itself, in response to a stimulus, to perform a useful computation.” The author leads us through the Turing test, neural network models, the binding problem and learning, uncertainty and complementarity, quantum and neural computation, and structure and information. I have to say that this is a challenging chapter and not all Kak’s ideas will find acceptance, but I believe that it is important to examine, in a series such as this, the difficulties that arise when the intrinsically imprecise cognitive sciences and the exact sciences are brought into contact or even collision.

The volume concludes with a long contribution by Alexander M. Zayezdny and Ilan Druckmann of Beer Sheva on new approaches and results in the field of signal description. They are principally concerned with the method based on the structural properties of signals, which are expressed in terms of the signal, its derivatives, and various transforms of it. The first author
of this chapter has been deeply concerned with this method since its introduction and the presentation here is thus authoritative. I wish to stress once again that the authors have taken care to illustrate their material with concrete examples to help the newcomer to grasp the essentials of the subject. In conclusion, I thank all the authors most warmly for the trouble they have taken with the preparation of their contributions and conclude with a list of forthcoming chapters. We have two special volumes to look forward to in the near future, one of which is wholly given over to an account of map methods for the design of particle accelerators by M. Berz and colleagues. The other is in a sense a sequel to “The Beginnings of Electron Optics” (Supplement 16), entitled “The Growth of Electron Optics”; this contains a collection of essays on the history of the International Federation of Societies of Electron Microscopy (IFSEM), contributed by many of the member societies of that Federation, and gives an extremely vivid picture of the way the subject has developed.
Peter W. Hawkes
FORTHCOMING ARTICLES

Nanofabrication . . . . H. Ahmed
Use of the hypermatrix . . . . D. Antzoulatos
Image processing with signal-dependent noise . . . . H. H. Arsenault
The Wigner distribution . . . . M. J. Bastiaans
Parallel detection . . . . P. E. Batson
Discontinuities and image restoration . . . . L. Bedini, E. Salerno, and A. Tonazzini
Hexagon-based image processing . . . . S. B. M. Bell
Microscopic imaging with mass-selected secondary ions . . . . M. T. Bernius
Modern map methods for particle optics . . . . M. Berz and colleagues (Vol. 97)
Nanoemission . . . . Vu Thien Binh (Vol. 95)
Cadmium selenide field-effect transistors and display . . . . T. P. Brody, A. van Calster, and J. F. Farrell
ODE methods . . . . J. C. Butcher
Electron microscopy in mineralogy and geology . . . . P. E. Champness
Projection methods for image processing . . . . P. L. Combettes (Vol. 95)
Space-time algebra and electron physics . . . . C. Doran and colleagues (Vol. 95)
Fuzzy morphology . . . . E. R. Dougherty and D. Sinha
The study of dynamic phenomena in solids using field emission . . . . M. Drechsler
Gabor filters and texture analysis . . . . J. M. H. Du Buf
Miniaturization in electron optics . . . . A. Feinerman
Liquid metal ion sources . . . . R. G. Forbes
The critical-voltage effect . . . . A. Fox
Stack filtering . . . . M. Gabbouj
Median filters . . . . N. C. Gallagher and E. Coyle
RF tubes in space . . . . A. S. Gilmour
Relativistic microwave electronics . . . . V. L. Granatstein
The quantum flux parametron . . . . W. Hioe and M. Hosoya
The de Broglie-Bohm theory . . . . P. Holland
Contrast transfer and crystal images . . . . K. Ishizuka
Seismic and electrical tomographic imaging . . . . P. D. Jackson, D. M. McCann, and S. L. Shedlock
Morphological scale-space operations . . . . P. Jackway
Algebraic approach to the quantum theory of electron optics . . . . R. Jagannathan and S. Khan
Surface relief . . . . J. J. Koenderink and A. J. van Doorn
Spin-polarized SEM . . . . K. Koike
Sideband imaging . . . . W. Krakow
The recursive dyadic Green’s function for ferrite circulators . . . . C. M. Krowne
Ernst Ruska, a memoir . . . . L. Lambert and T. Mulvey (Vol. 95)
Regularization . . . . A. Lannes
Near-field optical imaging . . . . A. Lewis
Vector transformation . . . . W. Li
SEM image processing . . . . N. C. MacDonald
Electronic tools in parapsychology . . . . R. L. Morris
The Growth of Electron Microscopy . . . . T. Mulvey, ed. (Vol. 96)
The Gaussian wavelet transform . . . . R. Navarro, A. Taberno, and G. Cristobal
Phase-space treatment of photon beams . . . . G. Nemes
Image plate . . . . T. Oikawa and N. Mori
Z-contrast in materials science . . . . S. J. Pennycook
HDTV . . . . E. Petajan
The wave-particle dualism . . . . H. Rauch
Scientific work of Reinhold Rüdenberg . . . . H. G. Rudenberg
Electron holography . . . . D. Saldin
X-ray microscopy . . . . G. Schmahl
Accelerator mass spectroscopy . . . . J. P. F. Sellschop
Applications of mathematical morphology . . . . J. Serra
Set-theoretic methods in image processing . . . . M. I. Sezan
Texture analysis . . . . H. C. Shen (Vol. 95)
Wavelet vector transforms . . . . E. A. B. da Silva and D. G. Sampson
Focus-deflection systems and their applications . . . . T. Soma
New developments in ferroelectrics . . . . J. Toulouse
Electron gun optics . . . . Y. Uchikawa
Very high resolution electron microscopy . . . . D. van Dyck
Morphology on graphs . . . . L. Vincent
ADVANCES IN IMAGING AND ELECTRON PHYSICS, VOL. 94
Group Algebras in Signal and Image Processing

DENNIS WENZEL
Phototelesis Corporation, San Antonio, Texas

and

DAVID EBERLY
Departments of Computer Science and Neurosurgery, University of North Carolina, Chapel Hill, North Carolina
I. Introduction . . . . 2
II. Convolution by Sections . . . . 4
III. Group Algebras . . . . 8
   A. Finite Groups and Index Notation . . . . 8
   B. Group Algebras and Group Convolution . . . . 10
   C. Matrix Representation of Group Elements . . . . 12
IV. Direct Product Group Algebras . . . . 14
   A. Direct Product Groups and Index Notation . . . . 14
   B. Tensor Products of Group Algebras . . . . 16
   C. Fast Convolution . . . . 17
   D. Examples . . . . 19
V. Semidirect Product Group Algebras . . . . 19
   A. Semidirect Product Groups and Index Notation . . . . 20
   B. Semidirect Tensor Products of Group Algebras . . . . 21
   C. Example . . . . 22
VI. Inverses of Group Algebra Elements . . . . 22
   A. Inverses . . . . 23
   B. Ideals and Quotient Rings . . . . 24
VII. Matrix Transforms . . . . 27
   A. Vector Spaces of Group Algebra Elements . . . . 28
   B. Matrices with the Convolution Property . . . . 29
   C. Examples . . . . 32
   D. Direct Tensor Product Matrices . . . . 39
   E. Semidirect Tensor Product Matrices . . . . 40
   F. Inverses of Matrices with Group Algebra Elements . . . . 42
   G. Fast Computation of the Matrix Transform . . . . 43
VIII. Feature Detection Using Group Algebras . . . . 44
   A. Value Functions . . . . 45
   B. Feature Detection . . . . 45
   C. Application to Signals . . . . 46
   D. Application to Images . . . . 46
Copyright © 1995 by Academic Press, Inc. All rights of reproduction in any form reserved.
IX. The GRASP Class Library . . . . 55
   A. Group Classes . . . . 55
   B. Group Algebra Class . . . . 58
   C. Group Algebra Matrices Class . . . . 60
   D. Example Usage . . . . 62
X. GRASP Class Library Source Code . . . . 63
   A. group.h . . . . 63
   B. group.cpp . . . . 65
   C. grpalg.h . . . . 68
   D. grpalg.cpp . . . . 69
   E. gmatrix.h . . . . 73
   F. gmatrix.cpp . . . . 74
References . . . . 78
I. INTRODUCTION

Filtering is an important operation in digital signal and image processing, used to enhance or deemphasize certain features. Although a variety of filtering methods exist, such as using look-up tables (e.g., thresholding), morphological operators (e.g., dilations and erosions), and neighborhood rank-order statistics (e.g., min, max, or median filtering), the primary filtering method used is convolution. For execution of the convolution operation, the coefficients of the filter function are organized into a template. In a pattern-matching application, the template of coefficients represents the pattern to be matched and the correlation operation for measuring the quality of a match is mathematically equivalent to a convolution. In other applications, the coefficients in a convolution template are organized to generate output in which low spatial frequencies are removed (e.g., Laplacian filter), high spatial frequencies are removed (e.g., Gaussian filter), or spatial discontinuities are identified (e.g., edge operators). A good discussion of these linear techniques can be found in Rosenfeld and Kak (1982). Nonlinear processes which generalize the linear convolution are based on nonlinear diffusion equations and can be found in ter Haar Romeny (1994).

Two standard convolutions used are linear convolution and cyclic convolution, both of which are based on polynomial arithmetic. Treating the input data and the filter weights as coefficients of polynomials, linear convolution is computed directly as a product of these polynomials. Cyclic convolution is used as an efficient means of computing the linear convolution. In this case, the input is organized into sections, each section being treated as the coefficients of a polynomial. The output data are generated by selecting the last coefficients of the products of the section polynomials with the filter polynomial. We review this process in Section II as motivation for the later ideas in
the chapter. We develop a more general methodology for filtering signals and images based on two ideas.

Group algebras. The linear and cyclic convolutions are computed as polynomial products. Group algebras are generalizations of polynomials and polynomial arithmetic based on algebraic groups. The multiplication in a group algebra is called group convolution. Loui (1976) and Willsky (1978) used group algebras based on a finite cyclic group in the context of random Markov processes. In particular, they constructed generalized Fourier transforms which preserve group convolution. A similar application of group algebras to signal processing is described by Eberly and Hartung (1987). Eberly and Wenzel (1991) and Wenzel (1988) extended these results to image processing. Applications of group algebras to interconnection networks are found in Gader (1986) and are formulated using the image algebra framework (Ritter et al., 1990).

Value functions. The selection of coefficients from the cyclic convolution to generate output for the linear convolution can be extended to other functions, called value functions, to allow for a wider variety of outputs.
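Anticipating the formal definitions of Section III, the product in a group algebra can be sketched for any finite group presented by a multiplication table; with the table of a cyclic group it reduces to the cyclic convolution reviewed in Section II. The sketch below is illustrative only (the function names are ours, not part of the GRASP library described later):

```cpp
#include <cstddef>
#include <vector>

// Group convolution: (a * b)[k] = sum of a[g] * b[h] over all pairs (g, h)
// whose group product g.h equals k, where mul[g][h] gives the index of g.h.
// All three arguments are assumed to have the group's order n.
std::vector<double> group_convolve(const std::vector<double>& a,
                                   const std::vector<double>& b,
                                   const std::vector<std::vector<std::size_t>>& mul)
{
    std::vector<double> r(a.size(), 0.0);
    for (std::size_t g = 0; g < a.size(); ++g)
        for (std::size_t h = 0; h < b.size(); ++h)
            r[mul[g][h]] += a[g] * b[h];
    return r;
}

// Multiplication table of the cyclic group Z_n, where g.h = (g + h) mod n;
// with this table, group_convolve is exactly the cyclic convolution.
std::vector<std::vector<std::size_t>> cyclic_table(std::size_t n)
{
    std::vector<std::vector<std::size_t>> mul(n, std::vector<std::size_t>(n));
    for (std::size_t g = 0; g < n; ++g)
        for (std::size_t h = 0; h < n; ++h)
            mul[g][h] = (g + h) % n;
    return mul;
}
```

Swapping in the table of a dihedral or quaternion group changes the convolution without changing the driver code, which is the point of the group algebra framework.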
Section III contains the formal definitions for group algebras. Background material on groups and group convolution is also provided. Group convolution of a signal requires selecting a single base group to work with. For images, group convolution is naturally based on the direct product of groups. Matrix representation of group algebra elements is also discussed.

Section IV provides a discussion of direct product group algebras. The ideas extend to convolution of data sets whose dimension is larger than 2. Section V is a discussion of semidirect product group algebras, which are used mainly in studying group algebras based on dihedral groups. Section VI discusses inverting group algebra elements. Two methods for inverting elements are given. In applications where nonzero and noninvertible group algebra elements may be a problem, the section also provides a way to eliminate these elements via ideals and quotient rings.

Section VII discusses matrices whose elements are from a group algebra. In particular, we are interested in those matrices which have the convolution property, that is, those matrices for which the transform of a convolution is the product of the individual transforms. We also look at fast algorithms for group convolutions. Section VIII provides examples of the application of group algebras to edge detection in signals and images. The examples illustrate the use of direct product cyclic groups (the standard convolution case), direct product dihedral groups, and direct product quaternions. Section IX provides details of how to implement group algebras in an object-oriented programming language. Source code for a sample implementation is provided in the Appendix.
II. CONVOLUTION BY SECTIONS

To motivate the ideas of group algebra and value function, we consider first the filtering of a sequence of numbers using the standard convolution by sections. Normally the input sequence is very long, appearing to be infinite, and it is assumed that the numbers in the sequence are sampled at equal intervals (temporally or spatially). A commonly used filter is the nonrecursive finite-impulse-response (FIR) filter. The output sequence from an FIR filter is a linear convolution of the input sequence and the sequence described by the filter tap weights.

Let {s_i}_{i=0}^{p} be the input sequence, where p is large. Let {f_j}_{j=0}^{N-1} represent the sequence of filter tap weights, where N << p indicates the number of stages in the filter. Both input and filter sequences can be represented as coefficients for polynomials s(x) and f(x), respectively, as

    s(x) = \sum_{i=0}^{p} s_i x^i   and   f(x) = \sum_{j=0}^{N-1} f_j x^j,

where x is an indeterminate. Let Z be the set of integers. The linear convolution of the two sequences is defined by the polynomial product

    r(x) = s(x) f(x) = \sum_{i \in Z} s_i x^i \sum_{j \in Z} f_j x^j
         = \sum_{k \in Z} \Big( \sum_{i+j=k} s_i f_j \Big) x^k = \sum_{k \in Z} r_k x^k,

where s_i = 0 for i < 0 and i > p, and f_j = 0 for j < 0 and j > N - 1. The output sequence is {r_k}_{k=0}^{p+N-1}.

Another form of convolution is the cyclic convolution, which is closely related to the linear convolution. The input sequence {s_i}_{i=0}^{N-1} and the filter sequence {f_j}_{j=0}^{N-1} have the same length. The cyclic convolution of the two sequences is defined by the polynomial product

    r(x) = s(x) f(x) mod (x^N - 1)
         = \sum_{k=0}^{N-1} \Big( \sum_{(i+j) mod N = k} s_i f_j \Big) x^k = \sum_{k=0}^{N-1} r_k x^k.
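The two polynomial products just defined can be computed directly; the following naive O(N^2) sketch uses illustrative names of our own (a production implementation would instead use the fast algorithms cited below):

```cpp
#include <cstddef>
#include <vector>

// Linear convolution: for coefficient sequences of lengths m and n the
// product s(x) f(x) has m + n - 1 coefficients, with
// r[k] = sum over i + j = k of s[i] * f[j].
std::vector<double> linear_convolve(const std::vector<double>& s,
                                    const std::vector<double>& f)
{
    std::vector<double> r(s.size() + f.size() - 1, 0.0);
    for (std::size_t i = 0; i < s.size(); ++i)
        for (std::size_t j = 0; j < f.size(); ++j)
            r[i + j] += s[i] * f[j];
    return r;
}

// Cyclic convolution of two length-N sequences: the product is reduced
// mod (x^N - 1), so exponents are taken mod N.
std::vector<double> cyclic_convolve(const std::vector<double>& s,
                                    const std::vector<double>& f)
{
    const std::size_t n = s.size();
    std::vector<double> r(n, 0.0);
    for (std::size_t i = 0; i < n; ++i)
        for (std::size_t j = 0; j < n; ++j)
            r[(i + j) % n] += s[i] * f[j];
    return r;
}
```

With the edge filter f(x) = x^2 - 1 stored as {-1, 0, 1}, `cyclic_convolve({s0, s1, s2}, {-1, 0, 1})` returns the three coefficients {s1 - s0, s2 - s1, s0 - s2} that appear in the example below.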
The output sequence is {r_k}_{k=0}^{N-1}.

The linear convolution can be computed from cyclic convolutions by sectioning the input sequence. Each short section is individually processed as a cyclic convolution and properly merged with the other sections to produce the linear convolution. Such an approach is advantageous because many fast algorithms exist for computing a short cyclic convolution (Blahut, 1988; Elliott and Rao, 1982). A common technique for performing this convolution by sections is the overlap-save method, which is briefly described below.

Assume that the cyclic convolution will be performed on sequences of length N. Let {s_i}_{i=0}^{p} be the input sequence and let {f_j}_{j=0}^{q} be the filter sequence, where q < N and p >> N. Construct the set of polynomials {s^{(0)}(x), s^{(1)}(x), ...}, where s_i^{(j)} = s_{i+j(N-q)} for 0 <= i < N and for some j < p, such that all coefficients of s(x) are assigned. Note that there is an overlap of q coefficients. Let r(x) = s(x) f(x) represent the linear convolution and let r^{(k)}(x) = s^{(k)}(x) f(x) mod (x^N - 1) represent the kth cyclic convolution segment. Except for the first q coefficients, the coefficients of r(x) are found among the coefficients of r^{(k)}(x), 0 <= k <= j, given by r_{i+k(N-q)} = r_i^{(k)} for q <= i < N. Each cyclic convolution except for the last one produces N - q coefficients of the linear convolution, with the remaining q coefficients discarded.

Example. Let f(x) = x^2 - 1, which represents a one-dimensional edge detection filter. Let s(x) represent the signal with p = deg(s) large. The linear convolution of f(x) with s(x) is given by

    r(x) = -s_0 - s_1 x + (s_0 - s_2) x^2 + (s_1 - s_3) x^3 + ... + (s_{p-2} - s_p) x^p + s_{p-1} x^{p+1} + s_p x^{p+2}.
Using the overlap-save method with cyclic convolutions of length 3, the input sequence is sectioned into blocks of three values, with each block treated as the coefficients for a polynomial s^{(k)}(x) = s_k + s_{k+1} x + s_{k+2} x^2, k = 0, ..., p - 2. The output sequence values (starting with the x^2 term) are obtained by selecting the x^2 coefficient of each product,

    r^{(k)}(x) = s^{(k)}(x) f(x) mod (x^3 - 1)
              = (s_k + s_{k+1} x + s_{k+2} x^2)(-1 + x^2) mod (x^3 - 1)
              = (s_{k+1} - s_k) + (s_{k+2} - s_{k+1}) x + (s_k - s_{k+2}) x^2,

for k = 0, ..., p - 2; that is, the output sequence is r_k = s_k - s_{k+2} for k = 0, ..., p - 2.

The linear convolution of the input sequence and the filter sequence as a convolution by sections can be viewed pictorially as shown in Fig. 1, where the filter is superimposed over part of the input sequence, the necessary
FIGURE 1. Illustration of convolution for signals.
arithmetic is performed, and then the filter is moved to the right, where the process is repeated again. Note that the coefficients of the filter polynomial appear in reverse order.

We consider next the filtering of an image {s_{i_1,i_2}}_{i_1=0,i_2=0}^{p_1,p_2}. The filter sequence is {f_{j_1,j_2}}_{j_1=0,j_2=0}^{N_1-1,N_2-1}, where N_1 << p_1 and N_2 << p_2. The input and filter sequences are thought of as coefficients for polynomials s(x, y) and f(x, y), respectively,

    s(x, y) = \sum_{i_1=0}^{p_1} \sum_{i_2=0}^{p_2} s_{i_1,i_2} x^{i_1} y^{i_2}   and
    f(x, y) = \sum_{j_1=0}^{N_1-1} \sum_{j_2=0}^{N_2-1} f_{j_1,j_2} x^{j_1} y^{j_2},
where x and y are indeterminates. The linear convolution of the filter with the image is defined by the polynomial product

    r(x, y) = s(x, y) f(x, y)
            = \sum_{k_1 \in Z} \sum_{k_2 \in Z} \Big( \sum_{i_1+j_1=k_1} \sum_{i_2+j_2=k_2} s_{i_1,i_2} f_{j_1,j_2} \Big) x^{k_1} y^{k_2}
            = \sum_{k_1 \in Z} \sum_{k_2 \in Z} r_{k_1,k_2} x^{k_1} y^{k_2},

where s_{i_1,i_2} = 0 for i_d < 0 or i_d > p_d (d = 1, 2) and f_{j_1,j_2} = 0 for j_d < 0 or j_d > N_d - 1 (d = 1, 2). The output sequence is {r_{k_1,k_2}}_{k_1=0,k_2=0}^{p_1+N_1-1,p_2+N_2-1}.

If the input polynomial s(x, y) has degrees p_1 = N_1 - 1 and p_2 = N_2 - 1, then the cyclic convolution is defined by the polynomial product

    r(x, y) = s(x, y) f(x, y) mod (x^{N_1} - 1) mod (y^{N_2} - 1)
            = \sum_{k_1=0}^{N_1-1} \sum_{k_2=0}^{N_2-1} \Big( \sum_{(i_1+j_1) mod N_1 = k_1} \sum_{(i_2+j_2) mod N_2 = k_2} s_{i_1,i_2} f_{j_1,j_2} \Big) x^{k_1} y^{k_2}.

The output sequence is {r_{k_1,k_2}}_{k_1=0,k_2=0}^{N_1-1,N_2-1}. Similar to the case of signal processing, the linear convolution for images can be computed by sectioning the image into N_1 x N_2 blocks, treating the entries of the block as coefficients of a polynomial, and computing the cyclic convolutions of these polynomials with the filter polynomial.
Example. Let f(x, y) = (x^2 + x + 1)(y^2 - 1), which represents a two-dimensional vertical-edge emphasis filter. Section the image into 3 x 3 blocks and let (k_1, k_2) be the upper left coordinates of the input block. The block polynomial is

    s^{(k_1,k_2)}(x, y) = \sum_{i=0}^{2} \sum_{j=0}^{2} s_{k_1+i,k_2+j} x^i y^j,

and the cyclic convolution of input with filter is

    r^{(k_1,k_2)}(x, y) = s^{(k_1,k_2)}(x, y) f(x, y) mod (x^3 - 1) mod (y^3 - 1)
        = (1 + x + x^2) \Big[ \sum_{i=0}^{2} (s_{k_1+i,k_2+1} - s_{k_1+i,k_2})
          + \sum_{i=0}^{2} (s_{k_1+i,k_2+2} - s_{k_1+i,k_2+1}) y
          + \sum_{i=0}^{2} (s_{k_1+i,k_2} - s_{k_1+i,k_2+2}) y^2 \Big];

each power of x carries the same coefficient because the factor x^2 + x + 1 sums over all cyclic shifts in x. The output sequence is given by the last coefficient (the x^2 y^2 term) of r^{(k_1,k_2)},

    r_{k_1,k_2} = s_{k_1,k_2} - s_{k_1,k_2+2} + s_{k_1+1,k_2} - s_{k_1+1,k_2+2} + s_{k_1+2,k_2} - s_{k_1+2,k_2+2},

for k_1 = 0, ..., p_1 - 2 and k_2 = 0, ..., p_2 - 2. Viewing convolution by sections pictorially, the filter and data block are represented in Fig. 2. Superimposing the filter over the block, multiplying entries, and summing yields the output r_{k_1,k_2}.
Although the goal of convolution by sections is to reconstruct the linear convolution, it is not necessary to do so from an applications point of view. What is important is the construction of the output sequence from the coefficients of the convolution. Selecting the last coefficient of r^{(k)}(x) (in the one-dimensional case) or r^{(k1,k2)}(x, y) (in the two-dimensional case) is appropriate for the application, since each coefficient is a finite difference which measures, in some sense, the presence of an edge in the signal. Instead of retaining the last coefficient, it is possible to select any other coefficient, or more generally, to use as output a function of all the coefficients. For signals,
FIGURE 2. Illustration of convolution for images.
one could choose as output

r_k = √[(s_{k+1} − s_k)² + (s_{k+2} − s_{k+1})² + (s_k − s_{k+2})²].

Of course, such a choice of function makes the filtering a nonlinear problem.
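A minimal sketch of this nonlinear output, assuming a 3-point block starting at index k (the name `edge_magnitude` is hypothetical):

```python
import math

def edge_magnitude(s, k):
    # Combine all three cyclic difference coefficients of the block at k
    # into a single nonnegative edge measure.
    d0 = s[k + 1] - s[k]
    d1 = s[k + 2] - s[k + 1]
    d2 = s[k] - s[k + 2]
    return math.sqrt(d0 * d0 + d1 * d1 + d2 * d2)
```

A flat signal yields zero; any variation within the block yields a positive response, but the square root makes the filter nonlinear.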
III. GROUP ALGEBRAS

The material in this section assumes that the reader is familiar with the basic concepts of groups, rings, and fields. Some standard algebra texts which cover these topics are Fraleigh (1982), Herstein (1975), and Hillman and Alexanderson (1983). We provide some simple examples of groups, namely, the cyclic, dihedral, and quaternion groups, which are used throughout this article. We introduce the concepts of group algebras and group convolution, which are based on groups, rings, and fields.

A. Finite Groups and Index Notation
Define the index set I_N = {0, 1, ..., N − 1}. Let G = {g_i : i ∈ I_N} be a multiplicative group of order N with identity element g_0. Let i, j, k ∈ I_N and g_i, g_j, g_k ∈ G. The multiplication of group elements induces an operation of index summation on the index set I_N, denoted ⊕, and defined by

g_i g_j = g_k ⟺ i ⊕ j = k.

Note that i ⊕ j = j ⊕ i for all i and j if and only if G is a commutative group. Define the mapping φ: G → I_N by φ(g_i) = i for g_i ∈ G, i ∈ I_N. The mapping φ is bijective and satisfies

φ(g_i) ⊕ φ(g_j) = i ⊕ j = φ(g_{i⊕j}) = φ(g_i g_j),

which implies that φ is a homomorphism. Thus, φ is an isomorphism between (G, ·) and (I_N, ⊕), say (G, ·) ≅ (I_N, ⊕), where · denotes the operation for G and ⊕ denotes the operation for I_N. Since (I_N, ⊕) is a group, it has an identity element 0 with the property i ⊕ 0 = i for all i ∈ I_N. Also, ⊕ must be an associative operation, i ⊕ (j ⊕ k) = (i ⊕ j) ⊕ k. Finally, each index i must have an inverse, denoted i⁻¹, such that i ⊕ i⁻¹ = 0 = i⁻¹ ⊕ i.

1. Cyclic Group
Let G = {x^i : i ∈ I_N} be a multiplicative group of order N, where the group multiplication is defined by

x^i x^j = x^{(i+j) mod N}  for i, j ∈ I_N, x^i, x^j ∈ G;

then G is isomorphic to the set of integers I_N under addition modulo N. The group G is called a cyclic group of order N, where the element x is the group generator. If g_i = x^i, then index summation for I_N corresponding to the cyclic group is

i ⊕ j = (i + j) mod N.

Geometrically, the cyclic group can be associated with rotations in the plane. The group element x corresponds to a rotation by 2π/N radians.
2. Dihedral Group

The dihedral group of order 2N is D_N = {τ^i σ^j : i ∈ I_2, j ∈ I_N}, where τ² = 1, σ^N = 1, and τσ^p = σ^{N−p}τ. If we define g_k = σ^{k/2} for k even and g_k = τσ^{(k−1)/2} for k odd, then index summation for I_{2N} corresponding to the dihedral group is

i ⊕ j = (j + (−1)^j i) mod 2N.

Geometrically, the dihedral group D_N can be associated with rotations and reflections in the plane. Element σ corresponds to a rotation by 2π/N radians, and element τ corresponds to reflection through the x axis. It is intuitive that a rotation followed by a reflection does not necessarily yield the same effect as a reflection followed by a rotation. In the dihedral groups D_N for N > 2, it can be verified that τσ ≠ στ, which implies that such groups are noncommutative. Note, however, that D_2 is commutative.
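The dihedral index summation is easy to exercise directly; a small sketch (the name `dihedral_index_sum` is ours), assuming the even/odd encoding above:

```python
def dihedral_index_sum(i, j, N):
    # g_i g_j = g_{i (+) j}, with g_k = sigma^(k/2) for k even and
    # g_k = tau * sigma^((k-1)/2) for k odd; here (+) is
    # i (+) j = (j + (-1)^j * i) mod 2N.
    return (j + (-1) ** j * i) % (2 * N)
```

For N = 3, index 1 encodes τ and index 2 encodes σ, and the sketch reproduces τσ ≠ στ as well as the group axioms.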
3. Quaternion Group

The quaternion group of order 8 is Q = {1, −1, i, −i, j, −j, k, −k}, where i² = j² = k² = −1 and ij = k. With the indexing g_0 = 1, g_1 = −1, g_2 = i, g_3 = −i, g_4 = j, g_5 = −j, g_6 = k, g_7 = −k, the index summation i ⊕ j for the quaternion group is given by the table, where index i corresponds to the ith row and index j corresponds to the jth column:

        0  1  2  3  4  5  6  7
    0   0  1  2  3  4  5  6  7
    1   1  0  3  2  5  4  7  6
    2   2  3  1  0  6  7  5  4
    3   3  2  0  1  7  6  4  5
    4   4  5  7  6  1  0  2  3
    5   5  4  6  7  0  1  3  2
    6   6  7  4  5  3  2  1  0
    7   7  6  5  4  2  3  0  1
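The table can be regenerated mechanically from the Hamilton product; the sketch below (our own encoding of the eight elements as 4-vectors of quaternion coordinates, not part of the original development) rebuilds the 8 × 8 index table:

```python
# Quaternion group Q = {1,-1,i,-i,j,-j,k,-k} encoded as indices 0..7.
# Each element is stored as its coordinate 4-tuple (w, x, y, z).
ELEMS = [
    (1, 0, 0, 0), (-1, 0, 0, 0),   # 1, -1
    (0, 1, 0, 0), (0, -1, 0, 0),   # i, -i
    (0, 0, 1, 0), (0, 0, -1, 0),   # j, -j
    (0, 0, 0, 1), (0, 0, 0, -1),   # k, -k
]

def qmul(a, b):
    # Hamilton product of two quaternions (w, x, y, z).
    w1, x1, y1, z1 = a
    w2, x2, y2, z2 = b
    return (w1*w2 - x1*x2 - y1*y2 - z1*z2,
            w1*x2 + x1*w2 + y1*z2 - z1*y2,
            w1*y2 - x1*z2 + y1*w2 + z1*x2,
            w1*z2 + x1*y2 - y1*x2 + z1*w2)

def index_table():
    # 8 x 8 table of i (+) j for the quaternion group.
    return [[ELEMS.index(qmul(a, b)) for b in ELEMS] for a in ELEMS]
```

The generated table agrees with the one printed above, including the noncommutative entries ij = k and ji = −k.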
Geometrically, Q can be associated with rotations in a three-dimensional space (Shoemake, 1983). The elements i, j, and k correspond to three-dimensional coordinate vectors defining the axis of rotation, and the 1 element corresponds to the angle of rotation. The −i, −j, and −k elements correspond to negated vectors, and the −1 element corresponds to a negated angle of rotation. Multiplication of quaternion group elements corresponds to a three-dimensional cross product. Just as three-dimensional translations are more easily described in rectangular coordinates than in spherical coordinates, three-dimensional rotations are more easily described by using quaternion coordinates.

B. Group Algebras and Group Convolution
The construction of an algebra over a finite group involves the application of group theory and algebraic properties. Although extensive material on the subject of group algebras can be found in Curtis and Reiner (1962) and Serre (1977), we present the structure, operations, and properties of a group algebra needed for later developments.

Let 𝔽 be a field and G = {g_i : i ∈ I_N} be a finite multiplicative group of order N with identity element g_0 and index summation operation ⊕. Define the group algebra of G over 𝔽 to be the set of formal sums

𝔽[G] = {Σ_{i∈I_N} a_i g_i : a_i ∈ 𝔽}.

Let c ∈ 𝔽 and let α, β ∈ 𝔽[G] be given by

α = Σ_{i∈I_N} a_i g_i  and  β = Σ_{j∈I_N} b_j g_j.
Addition in the group algebra is defined by

α + β = Σ_{i∈I_N} (a_i + b_i) g_i,

and scalar multiplication of a group algebra element is defined by

cα = c(Σ_{i∈I_N} a_i g_i) = Σ_{i∈I_N} c a_i g_i.

Multiplication in the group algebra is defined by

αβ = (Σ_{i∈I_N} a_i g_i)(Σ_{j∈I_N} b_j g_j) = Σ_{k∈I_N} (Σ_{i⊕j=k} a_i b_j) g_k,
where ⊕ is the index summation corresponding to the group G. The multiplication in the group algebra is called group convolution. Note the resemblance of the operation to that of linear and cyclic convolution shown in the last section.

Some observations are in order. The center of the group algebra is defined by C(𝔽[G]) = {α ∈ 𝔽[G] : αβ = βα, ∀β ∈ 𝔽[G]}. The centralizer of an element α in the group algebra is defined by C_α(𝔽[G]) = {β ∈ 𝔽[G] : αβ = βα}. The set {a g_0 : a ∈ 𝔽} is isomorphic to 𝔽, so 𝔽 ⊆ C(𝔽[G]). Also, if α ∈ C(𝔽[G]), then C_α(𝔽[G]) = 𝔽[G]. Under the operations of addition and multiplication, the group algebra 𝔽[G] is a ring. Under the operations of addition and scalar multiplication, the group algebra 𝔽[G] is isomorphic to the N-dimensional vector space 𝔽^N over the field 𝔽. The group convolution induces a multiplication on the vector space 𝔽^N. Given elements u, v ∈ 𝔽^N, where u = (u_0, u_1, ..., u_{N−1}) and v = (v_0, v_1, ..., v_{N−1}) for u_i, v_j ∈ 𝔽 and i, j ∈ I_N, vector addition is defined by

u + v = (u_0, u_1, ..., u_{N−1}) + (v_0, v_1, ..., v_{N−1}) = (u_0 + v_0, u_1 + v_1, ..., u_{N−1} + v_{N−1}),

and scalar multiplication is defined by

cu = c(u_0, u_1, ..., u_{N−1}) = (cu_0, cu_1, ..., cu_{N−1}),
where c ∈ 𝔽. Let e_i ∈ 𝔽^N be the vector which has all zero components except for the ith component, which is 1. The vectors can be rewritten as

u = Σ_{i∈I_N} u_i e_i  and  v = Σ_{j∈I_N} v_j e_j.

The group convolution imposes a product on u and v, denoted u * v, by

u * v = Σ_{k∈I_N} (Σ_{i⊕j=k} u_i v_j) e_k,

which we refer to as vector convolution. Consequently, the vector convolution of e_m, e_n ∈ 𝔽^N for m, n ∈ I_N is given by

e_m * e_n = Σ_{k∈I_N} (Σ_{i⊕j=k} δ_{mi} δ_{nj}) e_k,

where δ is the Kronecker delta. Thus, e_m * e_n = e_{m⊕n}. Note that if G is a noncommutative group, then in general, u * v ≠ v * u.

Example. The cyclic convolution of order N arises by choosing the group G = {x^i : i ∈ I_N} to be isomorphic to the cyclic group Z_N, where the index summation operation is addition of integers modulo N, and the field 𝔽 = ℝ, for the group algebra 𝔽[G].
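Vector convolution depends on the group only through its index summation, so a generic sketch (the name `group_convolve` is ours) can take ⊕ as a function:

```python
def group_convolve(u, v, index_sum):
    # (u * v)_k = sum of u_i * v_j over all pairs with i (+) j = k.
    N = len(u)
    r = [0] * N
    for i in range(N):
        for j in range(N):
            r[index_sum(i, j)] += u[i] * v[j]
    return r
```

With `index_sum = lambda i, j: (i + j) % N` this is cyclic convolution; with a dihedral index summation the product is noncommutative, as noted above.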
Example. The linear convolution arises by choosing G = {x^i : i ∈ I} to be a multiplicative group of infinite order which is isomorphic to the set of integers I = ℤ, where the index summation operation is simply the addition of integers: x^i x^j = x^{i+j} for i, j ∈ I. Typically, the sequences used in the linear convolution, {u_i}_{i=−∞}^{∞} and {v_j}_{j=−∞}^{∞}, have the property that u_i = v_j = 0 for i < 0, j < 0, i > p, and j > q (for some large positive integers p and q). For such sequences, say {u_i}_{i=0}^{p} and {v_j}_{j=0}^{q}, the linear convolution is given by

u * v = Σ_{i=0}^{p} Σ_{j=0}^{q} u_i v_j e_{i+j}.
C. Matrix Representation of Group Elements
We present a way to represent group elements by matrices which is useful for converting matrices whose elements are from a group algebra into equivalent (but larger) matrices whose elements are in the given field.

Let 𝔽^n be an n-dimensional vector space over the field 𝔽 and let GL(n, 𝔽) be the group of isomorphisms which map 𝔽^n onto itself. An element L ∈ GL(n, 𝔽) is, by definition, a linear mapping, and with the set {e_0, e_1, ..., e_{n−1}} of basis vectors for the vector space 𝔽^n, L: 𝔽^n → 𝔽^n is defined by a square matrix [m_{ij}] of order n, where the coefficients m_{ij}, for i, j ∈ I_n, are elements in the field 𝔽. Since L is an isomorphism, it must have an inverse map L⁻¹ which is also linear, which implies that the determinant of L, denoted det(L), is not zero. The group GL(n, 𝔽) is thus identified with the group of invertible square matrices of order n.

Let G = {g_i : i ∈ I_N} be a multiplicative group of order N with identity element g_0 and index summation operation ⊕. The matrix representation of the group G in the vector space 𝔽^n is given by a homomorphism ψ: G → GL(n, 𝔽). An element g_i ∈ G, i ∈ I_N, is represented as ψ(g_i) such that ψ(g_{i⊕j}) = ψ(g_i g_j) = ψ(g_i)ψ(g_j) for g_i, g_j ∈ G and i, j ∈ I_N. This formula also implies that ψ(g_0) = I_n and ψ(g_i⁻¹) = ψ⁻¹(g_i), where I_n is the n × n identity matrix whose columns are the basis vectors e_0, e_1, ..., e_{n−1} of the vector space 𝔽^n. Note that the dimension n of the vector space 𝔽^n does not necessarily equal the order N of the group G.

Let 𝔽[G] be a group algebra over the field 𝔽 and the group G. Let 𝔽[GL(n, 𝔽)] also be a group algebra, or a ring of the square matrices of order n. Through a linear extension of the ψ homomorphism, the group algebra 𝔽[G] is homomorphic to 𝔽[GL(n, 𝔽)]. Let α = Σ_{i∈I_N} a_i g_i ∈ 𝔽[G]. It has the matrix representation

ψ(α) = Σ_{i∈I_N} a_i ψ(g_i).
Example. Let D_N = {1, τ, σ, τσ, ..., σ^{N−1}, τσ^{N−1}} be the dihedral group of order 2N. Since the elements in D_N can be identified with operations on vectors in the plane 𝔽², let S and T be the matrices which represent the elements σ and τ, respectively. The matrices are given by

ψ(σ) = S = | cos(2π/N)  −sin(2π/N) |
           | sin(2π/N)   cos(2π/N) |

and

ψ(τ) = T = | 1   0 |
           | 0  −1 |.

Consequently, ψ(τ^p σ^q) = T^p S^q.

Example. Let Z_N = {1, ζ, ζ², ..., ζ^{N−1}} be the cyclic group of order N. Similar to the dihedral group, the elements in Z_N can also be identified with operations on vectors in the plane 𝔽². The matrix C which represents the element ζ is

ψ(ζ) = C = | cos(2π/N)  −sin(2π/N) |
           | sin(2π/N)   cos(2π/N) |.

Consequently, ψ(ζ^q) = C^q.
Example. Let Q = {±1, ±i, ±j, ±k} be the quaternion group of order 8. Elements in the quaternion group can be represented by 4 × 4 matrices, where the group identity element 1 is represented by I_4, the 4 × 4 identity matrix. The group element −1 is represented by the matrix −I_4. Thus, ψ(1) = I_4 and ψ(−1) = −I_4. Define the matrix representations for i and j by

ψ(i) = |  0  1  0  0 |        ψ(j) = |  0  0  1  0 |
       | −1  0  0  0 |  and          |  0  0  0  1 |
       |  0  0  0 −1 |               | −1  0  0  0 |
       |  0  0  1  0 |               |  0 −1  0  0 |.

Since ij = k in the quaternion group and since ψ is a homomorphism, ψ(k) = ψ(ij) = ψ(i)ψ(j), and the matrix representation for the group element k is forced to be

ψ(k) = |  0  0  0  1 |
       |  0  0 −1  0 |
       |  0  1  0  0 |
       | −1  0  0  0 |.

The matrix representations for the remaining group elements are determined by −ψ(i) = ψ(−i) = ψ(kj), −ψ(j) = ψ(−j) = ψ(ik), and −ψ(k) = ψ(−k) = ψ(ji).
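These matrix claims are straightforward to verify numerically; the sketch below multiplies the 4 × 4 representations given above and checks the quaternion relations:

```python
PSI_I = [[0, 1, 0, 0],
         [-1, 0, 0, 0],
         [0, 0, 0, -1],
         [0, 0, 1, 0]]

PSI_J = [[0, 0, 1, 0],
         [0, 0, 0, 1],
         [-1, 0, 0, 0],
         [0, -1, 0, 0]]

def matmul(A, B):
    # Plain 4 x 4 integer matrix product.
    return [[sum(A[r][t] * B[t][c] for t in range(4)) for c in range(4)]
            for r in range(4)]

NEG_I4 = [[-1 if r == c else 0 for c in range(4)] for r in range(4)]
PSI_K = matmul(PSI_I, PSI_J)   # psi(k) = psi(i) psi(j)
```

The checks confirm ψ(i)² = ψ(j)² = ψ(k)² = ψ(−1) and ψ(j)ψ(i) = ψ(−k), so the representation respects the noncommutativity of Q.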
IV. DIRECT PRODUCT GROUP ALGEBRAS

For the applications of group algebras to signal processing, we will choose a single group G and a field 𝔽 and work with the group algebra 𝔽[G]. For processing of images and higher-dimensional data sets, one could also choose a single group for the group algebra. However, it is more natural to choose a group for each dimension of the data and work with the algebraic equivalent of a Cartesian product, called a direct product group. By working with group algebras based on direct products, fast algorithms can be produced for group algebra matrix transforms. These algorithms generalize the idea of fast Fourier transforms. The background for direct product group algebras is presented in this section. Fast convolution is discussed here, but the fast algorithms for matrix transforms will be discussed in a later section.

A. Direct Product Groups and Index Notation
Let G and H be two finite multiplicative groups of order M and N, respectively, and let I_M and I_N be their respective index sets. Define the direct product of groups G and H by

G ⊗ H = {g_i ⊗ h_j : g_i ∈ G, h_j ∈ H}

with the following definition for multiplication. For g_{i1}, g_{i2} ∈ G and h_{j1}, h_{j2} ∈ H, define

(g_{i1} ⊗ h_{j1})(g_{i2} ⊗ h_{j2}) = (g_{i1⊕i2} ⊗ h_{j1⊕′j2}),

where ⊕ is the index summation for I_M and ⊕′ is the index summation for I_N. If it is clear from context which index summation is being used, then the prime will be omitted. The set G ⊗ H with the operation defined above is a multiplicative group with identity g_0 ⊗ h_0, where g_0 is the identity of G and h_0 is the identity of H. The inverse of g_i ⊗ h_j is given by (g_i ⊗ h_j)⁻¹ = g_i⁻¹ ⊗ h_j⁻¹. The multiplication of group elements in G ⊗ H induces an index summation on the Cartesian product of the index sets I_M × I_N by

(g_{i1} ⊗ h_{j1})(g_{i2} ⊗ h_{j2}) = (g_{i1⊕i2} ⊗ h_{j1⊕j2}) ⟺ (i1, j1) ⊕ (i2, j2) = (i1 ⊕ i2, j1 ⊕ j2)

for (i1, j1), (i2, j2) ∈ I_M × I_N. The Cartesian product I_M × I_N with index summation ⊕ defined above will be denoted by I_M ⊗ I_N.
The direct product group G ⊗ H has order MN and is isomorphic to a group F = {f_k : k ∈ I_MN} via an isomorphism ψ: G ⊗ H → F, which itself induces an isomorphism θ: I_M ⊗ I_N → (I_MN, ⊕) by ψ(g_i ⊗ h_j) = f_{θ(i,j)}. If φ_G(g_i) = i, φ_H(h_j) = j, φ_{G⊗H}(g_i ⊗ h_j) = (i, j), and φ_F(f_k) = k for i ∈ I_M, j ∈ I_N, and k ∈ I_MN, then the following diagram is commutative:

      (G ⊗ H, ·)  ──ψ──>  (F, ·)
          │                  │
       φ_{G⊗H}             φ_F
          │                  │
          v                  v
   (I_M ⊗ I_N, ⊕)  ──θ──> (I_MN, ⊕)

so θ = φ_F ∘ ψ ∘ φ_{G⊗H}⁻¹. Let (i1, j1), (i2, j2) ∈ I_M ⊗ I_N; then

θ((i1, j1) ⊕ (i2, j2)) = φ_F(ψ(φ_{G⊗H}⁻¹((i1, j1) ⊕ (i2, j2))))
= φ_F(ψ(φ_{G⊗H}⁻¹(i1 ⊕ i2, j1 ⊕ j2)))
= φ_F(ψ(g_{i1⊕i2} ⊗ h_{j1⊕j2}))
= φ_F(ψ((g_{i1} ⊗ h_{j1})(g_{i2} ⊗ h_{j2})))
= φ_F(ψ(g_{i1} ⊗ h_{j1}) ψ(g_{i2} ⊗ h_{j2}))
= φ_F(f_{θ(i1,j1)} f_{θ(i2,j2)})
= φ_F(f_{θ(i1,j1)⊕θ(i2,j2)})
= θ(i1, j1) ⊕ θ(i2, j2).
An obvious choice for the isomorphism θ is lexicographical ordering, θ(i, j) = Ni + j, where i ∈ I_M and j ∈ I_N. Select the elements f_{k1}, f_{k2} ∈ F, where k1, k2 ∈ I_MN are given by k1 = Ni1 + j1 and k2 = Ni2 + j2 for the appropriate i1, i2 ∈ I_M and j1, j2 ∈ I_N. Inverting yields i1 = k1 div N, j1 = k1 mod N, i2 = k2 div N, and j2 = k2 mod N. Multiplication of elements f_{k1} and f_{k2} in the direct product group F ≅ G ⊗ H, denoted f_k = f_{k1} f_{k2} for f_k ∈ F and k ∈ I_MN, corresponds to

k = k1 ⊕ k2
= (Ni1 + j1) ⊕ (Ni2 + j2)
= N(i1 ⊕ i2) + (j1 ⊕ j2)
= N[(k1 div N) ⊕ (k2 div N)] + [(k1 mod N) ⊕ (k2 mod N)].
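The div/mod reassembly in the last line translates directly into code; a sketch (our own names, with the component index summations passed in as functions):

```python
def direct_product_index_sum(k1, k2, N, sum_G, sum_H):
    # k1 (+) k2 under theta(i, j) = N*i + j: split each index by div/mod,
    # combine componentwise with each group's index summation, reassemble.
    i = sum_G(k1 // N, k2 // N)
    j = sum_H(k1 % N, k2 % N)
    return N * i + j
```

For example, with G = Z_3 and H = Z_4, θ(1, 2) = 6 and θ(2, 3) = 11 combine to θ((1+2) mod 3, (2+3) mod 4) = θ(0, 1) = 1.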
The definition of the direct product of finite groups can be extended easily to handle the direct product of more than two finite groups. Let G_1, G_2, ..., G_p be finite groups of order N_1, N_2, ..., N_p, respectively. Define the direct product of these groups by

G_1 ⊗ G_2 ⊗ ··· ⊗ G_p = {g_{i1} ⊗ g_{i2} ⊗ ··· ⊗ g_{ip} : g_{ij} ∈ G_j, i_j ∈ I_{N_j}, j = 1, 2, ..., p}
= {d_k : k = θ(i_1, i_2, ..., i_p) ∈ I_{N_1 N_2···N_p}}
= D,

where θ: I_{N_1} ⊗ I_{N_2} ⊗ ··· ⊗ I_{N_p} → I_{N_1 N_2···N_p} is an isomorphism. Let i_1, ī_1 ∈ I_{N_1}, i_2, ī_2 ∈ I_{N_2}, ..., i_p, ī_p ∈ I_{N_p}. The index summation operation for G_1 ⊗ ··· ⊗ G_p is defined by

(i_1, i_2, ..., i_p) ⊕ (ī_1, ī_2, ..., ī_p) = (i_1 ⊕ ī_1, i_2 ⊕ ī_2, ..., i_p ⊕ ī_p).

In a manner similar to that for the direct product of two groups, the isomorphism θ: I_{N_1} ⊗ I_{N_2} ⊗ ··· ⊗ I_{N_p} → I_{N_1 N_2···N_p} can be defined by the lexicographical ordering

θ(i_1, i_2, ..., i_p) = i_1 + (N_1)i_2 + (N_1 N_2)i_3 + ··· + (N_1···N_{p−1})i_p.

Given an element d_k ∈ D, where k ∈ I_{N_1 N_2···N_p}, the indexes i_1 ∈ I_{N_1}, i_2 ∈ I_{N_2}, ..., i_p ∈ I_{N_p} can be found by the inverse isomorphism,

i_j = (k div ∏_{t=1}^{j−1} N_t) mod N_j ∈ I_{N_j}.
B. Tensor Products of Group Algebras

Let G = {g_i : i ∈ I_M} and H = {h_j : j ∈ I_N} be two finite multiplicative groups and let G ⊗ H be their direct product. Let 𝔽[G] and 𝔽[H] be the respective group algebras. Define α = Σ_{i∈I_M} a_i g_i ∈ 𝔽[G] and β = Σ_{j∈I_N} b_j h_j ∈ 𝔽[H]. The direct tensor product of the elements α and β is defined by

α ⊗ β = Σ_{i∈I_M} Σ_{j∈I_N} a_i b_j (g_i ⊗ h_j).

The direct tensor product operator is distributive over group algebra addition and scalar multiplication: α ⊗ (β + δ) = (α ⊗ β) + (α ⊗ δ), (α + γ) ⊗ β = (α ⊗ β) + (γ ⊗ β), and (cα) ⊗ β = α ⊗ (cβ) for c ∈ 𝔽.

Define the direct tensor product of the group algebras 𝔽[G] and 𝔽[H] as the set of all formal sums

𝔽[G] ⊗ 𝔽[H] = {Σ_k α_k ⊗ β_k : α_k ∈ 𝔽[G], β_k ∈ 𝔽[H]}.
The set 𝔽[G] ⊗ 𝔽[H] is homomorphic to the group algebra 𝔽[G ⊗ H] through an extension of the isomorphism ψ: G ⊗ H → F to a linear homomorphism Φ: 𝔽[G] ⊗ 𝔽[H] → 𝔽[G ⊗ H], where the group F = {f_k : k ∈ I_MN} is a finite multiplicative group of order MN, and where

Φ(Σ_k α_k ⊗ β_k) = Σ_k Φ(α_k ⊗ β_k)
= Σ_k Φ(Σ_{i∈I_M} Σ_{j∈I_N} a_{ki} b_{kj} (g_i ⊗ h_j))
= Σ_k Σ_{i∈I_M} Σ_{j∈I_N} a_{ki} b_{kj} ψ(g_i ⊗ h_j).
Let α ⊗ β, γ ⊗ δ ∈ 𝔽[G] ⊗ 𝔽[H], and define their product based on the homomorphism Φ by

(α ⊗ β)(γ ⊗ δ) = (Σ_{i∈I_M} Σ_{j∈I_N} a_i b_j (g_i ⊗ h_j))(Σ_{k∈I_M} Σ_{l∈I_N} c_k d_l (g_k ⊗ h_l))
= Σ_{i,k∈I_M} Σ_{j,l∈I_N} a_i b_j c_k d_l (g_{i⊕k} ⊗ h_{j⊕l})
= (Σ_{i,k∈I_M} a_i c_k g_{i⊕k}) ⊗ (Σ_{j,l∈I_N} b_j d_l h_{j⊕l})
= αγ ⊗ βδ.
Let θ: I_M ⊗ I_N → I_MN be the isomorphism on the index sets induced by the ψ isomorphism and defined by θ(i, j) = k for i ∈ I_M, j ∈ I_N, k ∈ I_MN. Let a_k ∈ 𝔽, f_k ∈ F, g_i ∈ G, and h_j ∈ H. An element a ∈ 𝔽[G ⊗ H] is given by

a = Σ_{k∈I_MN} a_k f_k = Σ_{i∈I_M} Σ_{j∈I_N} a_{θ(i,j)} (g_i ⊗ h_j).

Similarly, an element b ∈ 𝔽[G ⊗ H] is given by

b = Σ_{k∈I_M} Σ_{l∈I_N} b_{θ(k,l)} (g_k ⊗ h_l),

where b_{θ(k,l)} ∈ 𝔽, g_k ∈ G, and h_l ∈ H. The product of a and b in 𝔽[G ⊗ H] can be computed in 𝔽[G] ⊗ 𝔽[H] by

ab = {Σ_{i∈I_M} [g_i ⊗ (Σ_{j∈I_N} a_{θ(i,j)} h_j)]}{Σ_{k∈I_M} [g_k ⊗ (Σ_{l∈I_N} b_{θ(k,l)} h_l)]}
= Σ_{i∈I_M} Σ_{k∈I_M} [g_{i⊕k} ⊗ (Σ_{j∈I_N} Σ_{l∈I_N} a_{θ(i,j)} b_{θ(k,l)} h_{j⊕l})],

which requires M²N² multiplications in the field 𝔽.

C. Fast Convolution
In the case when H is the cyclic group Z_N, the inner summation from above is a cyclic convolution given by

Σ_{j∈I_N} Σ_{l∈I_N} a_{θ(i,j)} b_{θ(k,l)} h_{(j+l) mod N}.

The Cook–Toom algorithm and the Winograd algorithm are two types of fast algorithms for computing cyclic convolutions by reducing the number of multiplications and/or additions (Blahut, 1988). Assume that the best of such fast algorithms requires N_fast < N² multiplications to compute the cyclic convolution. The product ab in 𝔽[G ⊗ H] can then be computed by M² N_fast multiplications in 𝔽[G] ⊗ 𝔽[H]. If the group G is cyclic and the group H is not, reorder the summations to obtain

ab = Σ_{j∈I_N} Σ_{l∈I_N} [(Σ_{i∈I_M} Σ_{k∈I_M} a_{θ(i,j)} b_{θ(k,l)} g_{i⊕k}) ⊗ h_{j⊕l}],

which requires M_fast N² multiplications. Another approach to reducing the number of multiplications used in computing the product ab in 𝔽[G] ⊗ 𝔽[H] involves reducing the representations of the elements a and b as elements in 𝔽[G] ⊗ 𝔽[H]. For example, the representation of a ∈ 𝔽[G ⊗ H] as an element in 𝔽[G] ⊗ 𝔽[H] was given by

a = Σ_{i∈I_M} [g_i ⊗ (Σ_{j∈I_N} a_{θ(i,j)} h_j)],
which has M linear combinations of elements u_i ∈ 𝔽[H], i ∈ I_M, defined by u_i = Σ_{j∈I_N} a_{θ(i,j)} h_j. The group algebra 𝔽[H], where H is a group of order N, is isomorphic to the vector space 𝔽^N over the field 𝔽. The group algebra element u_i ∈ 𝔽[H] can be considered as the vector v_i ∈ 𝔽^N, where v_i = (a_{θ(i,0)}, a_{θ(i,1)}, ..., a_{θ(i,N−1)}). Let the span of the set of vectors {v_i}, i ∈ I_M, have a basis of linearly independent vectors {w_0, w_1, ..., w_{D−1}}, where D ≤ M. The vector v_i, i ∈ I_M, can be rewritten as v_i = Σ_{j∈I_D} d_{ij} w_j for d_{ij} ∈ 𝔽. The basis vector w_j corresponds to the group algebra element ū_j ∈ 𝔽[H], given by ū_j = Σ_{k∈I_N} β_{jk} h_k, where β_{jk} ∈ 𝔽 and h_k ∈ H. The group algebra element u_i ∈ 𝔽[H], i ∈ I_M, is then given by

u_i = Σ_{j∈I_D} d_{ij} ū_j.

The representation in 𝔽[G] ⊗ 𝔽[H] for the element a ∈ 𝔽[G ⊗ H] consists of D terms of the form α_i ⊗ β_i, where α_i ∈ 𝔽[G], β_i ∈ 𝔽[H], and i ∈ I_D. A similar representation in 𝔽[G] ⊗ 𝔽[H] for the element b ∈ 𝔽[G ⊗ H] may consist of E terms of the form γ_j ⊗ δ_j, where γ_j ∈ 𝔽[G], δ_j ∈ 𝔽[H], and j ∈ I_E, where E is the rank of the set of vectors v̄_j ∈ 𝔽^N, j ∈ I_M, given by v̄_j = (b_{θ(j,0)}, b_{θ(j,1)}, ..., b_{θ(j,N−1)}). The number of multiplications in 𝔽[G] ⊗ 𝔽[H] to compute the product ab in 𝔽[G ⊗ H] is D·E·N². If there is no linear dependence between any of the combinations, then D = E = M and the number of multiplications becomes M²N², which is the same number obtained by the direct application of the definition for group algebra multiplication.
D. Examples

Linear convolution. Let G = {x^i : i ∈ ℤ} and H = {y^j : j ∈ ℤ}, both isomorphic to the integers ℤ. The direct product group algebra ℝ[ℤ ⊗ ℤ] consists of all finite formal sums Σ_{(i,j)∈ℤ×ℤ} a_{ij} x^i y^j, of which the polynomials in two variables are a subset. The two-dimensional linear convolution corresponds to products of polynomials and is the multiplication of ℝ[ℤ ⊗ ℤ].

Cyclic convolution. Let G = {x^i : i ∈ I_M} and H = {y^j : j ∈ I_N}. The group multiplication for G ⊗ H is the product of polynomials of two variables, but modulo both x^M − 1 and y^N − 1. Index summation is (i1, j1) ⊕ (i2, j2) = ((i1 + i2) mod M, (j1 + j2) mod N). The group algebra ℝ[G ⊗ H] consists of all polynomials in two variables of degrees at most M − 1 in x and N − 1 in y, and two-dimensional cyclic convolution is the product of these polynomials modulo x^M − 1 and y^N − 1. The template size used in cyclically convolving an image is N × M.

Walsh convolution. The direct product group Z_2^d naturally corresponds to Walsh convolution (Crittenden, 1984; Gibbs and Mallard, 1969a, 1969b; Gibbs, 1969c, 1970; Walsh, 1923), despite the fact that the development of Walsh functions was in the context of orthogonal functions (and has relations to wavelets). If the indices i = (i_1, ..., i_d) and j = (j_1, ..., j_d) are treated as unsigned binary numbers i_1···i_d and j_1···j_d, then the index summation is i ⊕ j = (i_1···i_d) XOR (j_1···j_d).
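For the Walsh case the index summation is just bitwise XOR, so the dyadic (Walsh) convolution can be sketched directly:

```python
def walsh_convolve(u, v):
    # Group convolution for Z_2^d: (u * v)_k = sum of u_i * v_j
    # over all pairs with i XOR j = k; len(u) must be a power of two.
    N = len(u)
    r = [0] * N
    for i in range(N):
        for j in range(N):
            r[i ^ j] += u[i] * v[j]
    return r
```

Every basis element is its own inverse (i XOR i = 0), so e_m * e_m = e_0, and the convolution is commutative because Z_2^d is.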
V. SEMIDIRECT PRODUCT GROUP ALGEBRAS
The dihedral groups D_N (for N > 2), although having two elements τ and σ such that τ² = 1 and σ^N = 1, clearly cannot be factored into Z_2 ⊗ Z_N, since this direct product group is commutative and the dihedral groups are not commutative. However, the dihedral groups can be factored into what is called a semidirect product group, a more general concept than direct product groups. Such factorization may be useful in developing fast algorithms for convolution in group algebras based on dihedral groups.

A. Semidirect Product Groups and Index Notation
Let G and H be two finite multiplicative groups of order M and N, respectively, and let I_M and I_N be their respective index sets. Denote the group of automorphisms of H by Auto(H). With each g ∈ G we can associate an automorphism on H, call it ĝ. The map ^: G → Auto(H) is an antihomomorphism: (g_{i1} g_{i2})^ = ĝ_{i2} ∘ ĝ_{i1} for elements g_{i1}, g_{i2} ∈ G. Note that ĝ_0 must be the identity automorphism, ĝ_0(h_j) = h_j for any j ∈ I_N, where g_0 is the identity of G. Define the semidirect product of groups G and H by

G ⊚ H = {g_i ⊚ h_j : g_i ∈ G, h_j ∈ H}

with the following definition of multiplication. For g_{i1}, g_{i2} ∈ G and h_{j1}, h_{j2} ∈ H, define

(g_{i1} ⊚ h_{j1})(g_{i2} ⊚ h_{j2}) = (g_{i1} g_{i2} ⊚ ĝ_{i2}(h_{j1}) h_{j2}).

The set G ⊚ H with the operation defined above is a multiplicative group with identity g_0 ⊚ h_0. The inverse of g_i ⊚ h_j is given by (g_i ⊚ h_j)⁻¹ = g_i⁻¹ ⊚ ĝ_{i⁻¹}(h_j⁻¹). The multiplication of group elements in G ⊚ H induces an index summation ⊕ on the Cartesian product of index sets I_M × I_N by

(i1, j1) ⊕ (i2, j2) = (i1 ⊕ i2, î_2(j1) ⊕ j2)

for (i1, j1), (i2, j2) ∈ I_M × I_N, where î_2 denotes the automorphism induced on I_N by ĝ_{i2}. The Cartesian product I_M × I_N with index summation ⊕ defined above will be denoted by I_M ⊚ I_N.

The semidirect product group G ⊚ H has order MN and is isomorphic to a group F = {f_k : k ∈ I_MN} with isomorphism ψ: G ⊚ H → F. The isomorphism ψ induces the isomorphism θ: (I_M ⊚ I_N, ⊕) → (I_MN, ⊕) by ψ(g_i ⊚ h_j) = f_{θ(i,j)}. If φ_G(g_i) = i, φ_H(h_j) = j, φ_{G⊚H}(g_i ⊚ h_j) = (i, j), and φ_F(f_k) = k, for i ∈ I_M, j ∈ I_N, and k ∈ I_MN, then the following diagram is commutative:

      (G ⊚ H, ·)  ──ψ──>  (F, ·)
          │                  │
       φ_{G⊚H}             φ_F
          │                  │
          v                  v
   (I_M ⊚ I_N, ⊕)  ──θ──> (I_MN, ⊕)
B. Semidirect Tensor Products of Group Algebras
Let G = {g_i : i ∈ I_M} and H = {h_j : j ∈ I_N} be two finite multiplicative groups and let G ⊚ H be their semidirect product. Let 𝔽[G] and 𝔽[H] be the respective group algebras. Define α = Σ_{i∈I_M} a_i g_i ∈ 𝔽[G] and β = Σ_{j∈I_N} b_j h_j ∈ 𝔽[H]. The semidirect tensor product of α and β is defined by

α ⊚ β = Σ_{i∈I_M} Σ_{j∈I_N} a_i b_j (g_i ⊚ h_j).

The semidirect tensor product operator has the properties g_i ⊚ (h_{j1} + h_{j2}) = (g_i ⊚ h_{j1}) + (g_i ⊚ h_{j2}), (g_{i1} + g_{i2}) ⊚ h_j = (g_{i1} ⊚ h_j) + (g_{i2} ⊚ h_j), and (cg_i) ⊚ h_j = g_i ⊚ (ch_j) for c ∈ 𝔽.

Define the semidirect tensor product of the group algebras 𝔽[G] and 𝔽[H] as the set of all formal sums

𝔽[G] ⊚ 𝔽[H] = {Σ_i α_i ⊚ β_i : α_i ∈ 𝔽[G], β_i ∈ 𝔽[H]}.
The set 𝔽[G] ⊚ 𝔽[H] is homomorphic to the group algebra 𝔽[G ⊚ H] through an extension of the isomorphism ψ: G ⊚ H → F to a linear homomorphism Φ: 𝔽[G] ⊚ 𝔽[H] → 𝔽[G ⊚ H], where the group F = {f_k : k ∈ I_MN} is a finite multiplicative group of order MN. Let (α ⊚ β), (γ ⊚ δ) ∈ 𝔽[G] ⊚ 𝔽[H], and define their product based on the homomorphism Φ by

(α ⊚ β)(γ ⊚ δ) = (Σ_{i∈I_M} Σ_{j∈I_N} a_i b_j (g_i ⊚ h_j))(Σ_{k∈I_M} Σ_{l∈I_N} c_k d_l (g_k ⊚ h_l))
= Σ_{i,k∈I_M} Σ_{j,l∈I_N} a_i b_j c_k d_l (g_i g_k ⊚ ĝ_k(h_j) h_l)
= Σ_{i,k∈I_M} Σ_{j,l∈I_N} [(a_i c_k g_{i⊕k}) ⊚ (b_j d_l ĝ_k(h_j) h_l)].

In general, (α ⊚ β)(γ ⊚ δ) ≠ (αγ) ⊚ (βδ).

Consider the elements (g_i ⊚ β), (g_k ⊚ δ) ∈ 𝔽[G] ⊚ 𝔽[H], where g_i, g_k ∈ G. The multiplication of these two elements is given by

(g_i ⊚ β)(g_k ⊚ δ) = (Σ_{j∈I_N} b_j (g_i ⊚ h_j))(Σ_{l∈I_N} d_l (g_k ⊚ h_l))
= Σ_{j,l∈I_N} b_j d_l (g_i g_k ⊚ ĝ_k(h_j) h_l)
= g_i g_k ⊚ [Σ_{j,l∈I_N} b_j d_l ĝ_k(h_j) h_l]
= g_i g_k ⊚ [ĝ_k(Σ_{j∈I_N} b_j h_j)(Σ_{l∈I_N} d_l h_l)]
= (g_i g_k) ⊚ (ĝ_k(β) δ).

Alternatively, consider the elements (α ⊚ β), (c_k g_k ⊚ δ) ∈ 𝔽[G] ⊚ 𝔽[H], where c_k ∈ 𝔽 and g_k ∈ G. The multiplication of these two elements is given by

(α ⊚ β)(c_k g_k ⊚ δ) = Σ_{i∈I_M} a_i c_k [(g_i ⊚ β)(g_k ⊚ δ)] = (Σ_{i∈I_M} a_i c_k g_{i⊕k}) ⊚ (ĝ_k(β) δ) = (α(c_k g_k)) ⊚ (ĝ_k(β) δ).
C. Example
The motivating examples for semidirect product groups are the dihedral groups, D_N = {τ^p σ^q : p ∈ I_2, q ∈ I_N}, where τ² = 1, σ^N = 1, and τσ^q = σ^{N−q}τ. The dihedral group can be written as D_N = Z_2 ⊚ Z_N. The groups are G = {τ^p : p ∈ I_2} ≅ Z_2 and H = {σ^q : q ∈ I_N} ≅ Z_N. The antihomomorphism ^: G → Auto(H) is defined by τ̂^p(σ^q) = σ^{(−1)^p q} for all q ∈ I_N. The semidirect group product is therefore given by

(τ^{p1} ⊚ σ^{q1})(τ^{p2} ⊚ σ^{q2}) = (τ^{(p1+p2) mod 2} ⊚ τ̂^{p2}(σ^{q1}) σ^{q2})
= (τ^{(p1+p2) mod 2} ⊚ σ^{[(−1)^{p2} q1 + q2] mod N}).

The index summation corresponding to this product is

(p1, q1) ⊕ (p2, q2) = ((p1 + p2) mod 2, τ̂^{p2}(q1) ⊕ q2)
= ((p1 + p2) mod 2, [(−1)^{p2} q1 + q2] mod N).
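A sketch of this pairwise index summation for Z_2 ⊚ Z_N, with checks that it really is a group operation and that it is noncommutative:

```python
def dihedral_pair_sum(p1, q1, p2, q2, N):
    # (p1, q1) (+) (p2, q2) = ((p1 + p2) mod 2, ((-1)^p2 * q1 + q2) mod N)
    return ((p1 + p2) % 2, ((-1) ** p2 * q1 + q2) % N)
```

For N = 5, the pair (1, 0) encodes τ and (0, 1) encodes σ, and the sketch reproduces στ = τσ^{N−1}.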
VI. INVERSES OF GROUP ALGEBRA ELEMENTS
In constructing matrices whose elements are from a group algebra and which have what we call the convolution property, we will need to compute inverses of group algebra elements. This section describes how to do that. Since group algebras generally have nonzero, noninvertible elements, it may be desirable in applications to eliminate these elements. The elimination is possible by constructing quotient rings from the group algebra and ideals containing noninvertible elements. The section includes a discussion of these topics.
A. Inverses

Let 𝔽[G] be a group algebra over the field 𝔽 with respect to the multiplicative group G = {g_i : i ∈ I_N} whose identity element is g_0 and whose index summation operation is ⊕. As shorthand notation, we will identify the field value 1 with the group algebra element 1g_0 + Σ_{i≠0} 0g_i. Let α = Σ_{i∈I_N} a_i g_i ∈ 𝔽[G] and β = Σ_{j∈I_N} b_j g_j ∈ 𝔽[G]. If αβ = 1, then β is said to be a right inverse of α and α is said to be a left inverse of β. If αβ = 1 = βα, then β is both a right and a left inverse of α and is simply called the inverse of α, denoted β = α⁻¹. For example, in ℝ[D_3],

(a1 + bτ)⁻¹ = [a/(a² − b²)]1 + [−b/(a² − b²)]τ  (provided a² ≠ b²),

but 1 + σ + σ² has no inverse. To construct a right inverse for α, note that the product of the two elements is
αβ = Σ_{k∈I_N} (Σ_{i⊕j=k} a_i b_j) g_k = Σ_{i∈I_N} (Σ_{j∈I_N} a_{i⊕j⁻¹} b_j) g_i.

Thus, group convolution can be computed as a matrix product Ab, where A = [a_{i⊕j⁻¹}] is an N × N matrix with row index i and column index j, and b = [b_j] is an N × 1 column vector. For α to have a right inverse, the system Ab = e_0, where e_0 has a 1 in its first component but 0s in all other components, must have a solution. If A is invertible, then α has a unique right inverse. However, if A is not invertible, it is possible that e_0 is in the range of A, in which case there are infinitely many right inverses. Similarly, to construct a left inverse for β, note that

αβ = Σ_{k∈I_N} (Σ_{i∈I_N} a_i b_{i⁻¹⊕k}) g_k.
The problem reduces to solving aᵀB = e_0ᵀ, where B = [b_{i⁻¹⊕j}] is an N × N matrix with row index i and column index j, and a = [a_i] is an N × 1 column vector.

An alternative method for constructing inverses uses the matrix representations discussed in Section III.C. Let α ∈ 𝔽[G] be an invertible element with inverse α⁻¹. Let ψ(α) be the square matrix which represents α and let I be the identity matrix of the same size. It is true that
I = ψ(1g_0) = ψ(αα⁻¹) = ψ(α)ψ(α⁻¹),

so ψ(α⁻¹) = ψ(α)⁻¹. Given α, construct its matrix representation ψ(α), invert this matrix, and determine β such that ψ(β) = ψ(α)⁻¹. This is a system
of equations in the unknown coefficients for β. The solution produces β = α⁻¹.

For example, we mentioned earlier that in ℝ[D_3], (a1 + bτ)⁻¹ = [a/(a² − b²)]1 + [−b/(a² − b²)]τ. Using the matrix representations for dihedral groups developed in Section III.C, we have

ψ(a1 + bτ) = | a + b    0   |
             |   0    a − b |

and

ψ(a1 + bτ)⁻¹ = | 1/(a + b)      0     |
               |     0      1/(a − b) |

as long as a² − b² ≠ 0. Setting this system equal to c_0 I + c_1 ψ(τ) + c_2 ψ(σ) + c_3 ψ(τσ) + c_4 ψ(σ²) + c_5 ψ(τσ²) yields a solution of c_0 = a/(a² − b²), c_1 = −b/(a² − b²), and all other c_i = 0. For the element 1 + σ + σ², note that its matrix representation is the zero matrix, so it cannot have an inverse within the group algebra.

The following function will be useful later. The net of α is defined by
net(α) = Σ_{i∈I_N} a_i

and is a ring homomorphism over group algebra addition and multiplication, since

net(α + β) = Σ_{i∈I_N} (a_i + b_i) = net(α) + net(β)

and

net(αβ) = Σ_{k∈I_N} Σ_{i⊕j=k} a_i b_j = (Σ_{i∈I_N} a_i)(Σ_{j∈I_N} b_j) = net(α) net(β).

If α is invertible, then net(α) net(α⁻¹) = net(αα⁻¹) = net(1g_0) = 1, which implies that neither net(α) nor net(α⁻¹) can be zero. Thus, if α is invertible, net(α) ≠ 0 and net(α⁻¹) = (net(α))⁻¹. Note, however, that net(α) ≠ 0 does not necessarily imply that α is invertible. For example, the element (g_0 + g_1) ∈ 𝔽[Z_2], where g_0, g_1 ∈ Z_2, satisfies net(g_0 + g_1) = 2, but g_0 + g_1 is not invertible.
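The right-inverse construction described above (solve Ab = e_0 with A = [a_{i⊕j⁻¹}]) amounts to one linear solve; a sketch using exact rational arithmetic (the function `right_inverse` and its Gauss–Jordan loop are our own illustration, not from the text):

```python
from fractions import Fraction

def right_inverse(a, index_sum):
    # Build A[i][j] = a[i (+) j^(-1)] and solve A b = e0 by Gauss-Jordan
    # elimination; assumes A is invertible (a has a unique right inverse).
    N = len(a)
    # inverse index: j^(-1) is the m with j (+) m = 0
    inv = [next(m for m in range(N) if index_sum(j, m) == 0) for j in range(N)]
    # augmented matrix [A | e0]
    A = [[Fraction(a[index_sum(i, inv[j])]) for j in range(N)]
         + [Fraction(1 if i == 0 else 0)] for i in range(N)]
    for c in range(N):
        p = next(r for r in range(c, N) if A[r][c] != 0)   # pivot row
        A[c], A[p] = A[p], A[c]
        A[c] = [x / A[c][c] for x in A[c]]
        for r in range(N):
            if r != c and A[r][c] != 0:
                A[r] = [x - A[r][c] * y for x, y in zip(A[r], A[c])]
    return [A[i][N] for i in range(N)]
```

For the cyclic group Z_3 and α = 2g_0 + g_1 (the polynomial 2 + x), the solve yields the inverse (4 − 2x + x²)/9 modulo x³ − 1, and convolving back gives 1g_0.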
B. Ideals and Quotient Rings
It is possible that nonzero elements of a group algebra are not invertible. To avoid dealing with such elements, it is possible to work with the group algebra modulo the noninvertible elements. The algebraic way to do this is via ideals and quotient rings.
GROUP ALGEBRAS IN SIGNAL AND IMAGE PROCESSING
25
Let $\mathbb{F}[G]$ be a group algebra over the field $\mathbb{F}$, where $G = \{g_i : i \in I_N\}$ is a multiplicative group of order $N$ with identity element $g_0$ and index summation operation $\oplus$. Under the operations of addition and multiplication, the group algebra $\mathbb{F}[G]$ is a ring. Let $I$ be a nonempty subset of the group algebra ring $\mathbb{F}[G]$ consisting of the elements $\alpha\beta$ and $\beta\alpha$ for every element $\alpha \in \mathbb{F}[G]$ and $\beta \in I$, where the subset $I$ forms a subgroup of $\mathbb{F}[G]$ under addition. This set $I$ is called an ideal in the group algebra ring $\mathbb{F}[G]$.

Let $I$ be an ideal in the group algebra ring $\mathbb{F}[G]$ and let $\alpha$ be any element in $\mathbb{F}[G]$. The ideal coset for the element $\alpha$ of the ideal $I$ in $\mathbb{F}[G]$ is the same subset of $\mathbb{F}[G]$ as the coset $\alpha + I$ when $I$ is considered as an additive subgroup of $\mathbb{F}[G]$. Since the group algebra ring $\mathbb{F}[G]$ is a commutative group under addition, the left and right ideal cosets are the same. Let $\mathbb{F}[G]/I$ be the collection of ideal cosets $\alpha + I$, $\forall \alpha \in \mathbb{F}[G]$. This set forms a ring under the operations of addition and multiplication given by
$$(\alpha + I) + (\beta + I) = (\alpha + \beta) + I,$$
$$(\alpha + I)(\beta + I) = (\alpha\beta) + I,$$
for the element $\beta \in \mathbb{F}[G]$. The ring $\mathbb{F}[G]/I$ is called a quotient ring.

Let $\alpha$ be an element from the center of the group algebra ring $\mathbb{F}[G]$; that is, $\alpha \in C(\mathbb{F}[G])$. Since the element $\alpha$ commutes with every other element in the group algebra ring, the centralizer of the element $\alpha$, denoted $C_\alpha(\mathbb{F}[G])$, is the entire group algebra ring $\mathbb{F}[G]$. Furthermore, let $\alpha$ be an element such that its matrix representation, defined in Section III.C, is the zero matrix (it follows that the element $\alpha$ must also be a noninvertible element in $\mathbb{F}[G]$). Let $I$ be the subset of the group algebra ring $\mathbb{F}[G]$ consisting of all multiples $\beta\alpha$ of the element $\alpha$ by elements $\beta \in \mathbb{F}[G]$. Then it can be shown that this set $I$ is an ideal in the group algebra ring $\mathbb{F}[G]$. Let $\langle\alpha\rangle$ denote this ideal generated by all multiples of the element $\alpha$.

Example. Let $\mathcal{D}_3 = \{1, \tau, \sigma, \tau\sigma, \sigma^2, \tau\sigma^2\}$ be the dihedral group of order 6. In the group algebra ring $\mathbb{F}[\mathcal{D}_3]$, it can be verified that the element $1 + \sigma + \sigma^2$ is not invertible and is in the center of $\mathbb{F}[\mathcal{D}_3]$. The element $\sigma$ has matrix representation
$$\phi(\sigma) = \begin{bmatrix} \cos(2\pi/3) & -\sin(2\pi/3) \\ \sin(2\pi/3) & \cos(2\pi/3) \end{bmatrix}.$$
The matrix representation of $1 + \sigma + \sigma^2$ is
$$\phi(1 + \sigma + \sigma^2) = \phi(1) + \phi(\sigma) + \phi(\sigma^2) = \begin{bmatrix} 0 & 0 \\ 0 & 0 \end{bmatrix}.$$
Thus $\langle 1 + \sigma + \sigma^2 \rangle$ is the ideal generated by the element $1 + \sigma + \sigma^2$.
Example. Let $\mathcal{Q} = \{\pm 1, \pm i, \pm j, \pm k\}$ be the quaternion group of order 8. In the group algebra ring $\mathbb{F}[\mathcal{Q}]$, it can be verified that the element $1 + (-1)$ is not invertible and is in the center of $\mathbb{F}[\mathcal{Q}]$. The matrix representation of $-1$ is defined to be the negative of the matrix representation of the element 1. Then the matrix representation of the element $1 + (-1)$ is the zero matrix and $\langle 1 + (-1) \rangle$ is the ideal generated by the element $1 + (-1)$.
Example. Let $Z_N = \{1, \zeta, \ldots, \zeta^{N-1}\}$ be the cyclic group of order $N$. In the group algebra ring $\mathbb{F}[Z_N]$, the element $\zeta^N - 1$ can be considered as a polynomial which can be factored into the product of prime polynomials $\zeta^N - 1 = \prod_{k\mid N} Q_k(\zeta)$, $k = 1, 2, \ldots, N$, where $k \mid N$ means $k$ divides $N$ with no remainder and where $Q_k(\zeta) = \prod_{\gcd(i,k)=1} (\zeta - \omega_k^i)$, $i = 1, 2, \ldots, k$, is a cyclotomic polynomial over any field $\mathbb{F}$, $\gcd(i,k)$ is the greatest common divisor of $i$ and $k$, and $\omega_k$ is a $k$th root of unity in some extension field. It can be verified that the element given by the cyclotomic polynomial $Q_N(\zeta)$ is not invertible, is in the center of $\mathbb{F}[Z_N]$, and has a zero matrix representation, so $\langle Q_N(\zeta)\rangle$ is the ideal generated by the element $Q_N(\zeta)$.

Now, let $\mathbb{F}[G]$ be a group algebra ring over the field $\mathbb{F}$, where $G = \{g_i : i \in I_M\}$ is a multiplicative group of order $M$, and let $\mathbb{F}[H]$ be a group algebra ring over the field $\mathbb{F}$, where $H = \{h_j : j \in I_N\}$ is a multiplicative group of order $N$. Let $\mathbb{F}[G] \otimes \mathbb{F}[H]$ be the set of all finite sums of the direct tensor products of elements from the group algebras $\mathbb{F}[G]$ and $\mathbb{F}[H]$. Let $I$ be an ideal in $\mathbb{F}[G]$, $J$ an ideal in $\mathbb{F}[H]$, and define the set
$$K = \Big\{\sum_{i=0}^{n} u_i \otimes v_i : u_i \in I \text{ or } v_i \in J\Big\},$$
where $n > 0$ is a finite positive integer. Let $\alpha \in \mathbb{F}[G]$ and let $\beta \in \mathbb{F}[H]$. Consider the direct tensor product
$$(\alpha + I) \otimes (\beta + J) = (\alpha \otimes \beta) + (\alpha \otimes J) + (I \otimes \beta) + (I \otimes J).$$
If the set $K$ is an ideal in $\mathbb{F}[G] \otimes \mathbb{F}[H]$, then this can be reduced to
$$(\alpha + I) \otimes (\beta + J) = (\alpha \otimes \beta) + K.$$
First, let $p$ be an element from the set $K$, given by $p = \sum_{i=0}^{n} u_i \otimes v_i$, where $n > 0$ is a finite positive integer. By its definition, the set $K$ is closed under addition, and the additive inverse of the element $p \in \mathbb{F}[G] \otimes \mathbb{F}[H]$ is given by $-p = \sum_{i=0}^{n} (-u_i) \otimes v_i$, which is also in the set $K$. These conditions imply that the set $K$ is a subgroup of $\mathbb{F}[G] \otimes \mathbb{F}[H]$ under addition. Second, let $q$ be an element in $\mathbb{F}[G] \otimes \mathbb{F}[H]$ given by $q = \sum_{j=0}^{m} \alpha_j \otimes \beta_j$, where $\alpha_j \in \mathbb{F}[G]$, $\beta_j \in \mathbb{F}[H]$, and $m > 0$ is a finite positive integer. Consider the product
$$pq = \sum_{i=0}^{n}\sum_{j=0}^{m} (u_i\alpha_j) \otimes (v_i\beta_j).$$
If $u_i$ is in the ideal set $I$, then $u_i\alpha_j$ is also in $I$. If $u_i$ is not in the ideal set $I$, then $v_i$ is in the ideal set $J$ and $v_i\beta_j$ is also in $J$. Thus $pq$ is in the set $K$, which completes the proof that the set $K$ is indeed an ideal in $\mathbb{F}[G] \otimes \mathbb{F}[H]$.
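As a concrete check of the cyclic-group example, the multiples of $q = 1 + \zeta + \cdots + \zeta^{N-1}$ (the cyclotomic element $Q_N(\zeta)$ when $N$ is prime) can be computed directly: every product $\alpha q$ collapses to $\mathrm{net}(\alpha)\,q$, so the multiples of $q$ form a one-dimensional ideal. A hedged sketch (helper names are ours, not from the text):

```python
# Multiples of q = 1 + zeta + ... + zeta^{N-1} in R[Z_N] form an ideal:
# alpha * q = net(alpha) * q for every alpha.
def conv_zn(a, b):
    n = len(a)
    c = [0.0] * n
    for i in range(n):
        for j in range(n):
            c[(i + j) % n] += a[i] * b[j]
    return c

N = 5
q = [1.0] * N                       # coefficients of 1 + zeta + ... + zeta^4
alpha = [3.0, -2.0, 0.0, 1.0, 4.0]
prod = conv_zn(alpha, q)
# every coefficient of alpha*q equals net(alpha) = 6, so alpha*q is a
# scalar multiple of q and <q> is closed under multiplication
assert all(abs(p - sum(alpha)) < 1e-9 for p in prod)
```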
VII. MATRIX TRANSFORMS
Earlier achievements have been made in the construction of discrete matrix transforms based on elements from a group algebra. Loui (1976) developed a generalized Fourier transform based on semisimple algebras, where group algebras are considered as a special case. Based on this general transform, Willsky (1978) defined a class of finite-state Markov processes for nonlinear estimation evolving on finite cyclic groups. The underlying structure of nonlinear filtering was obtained by viewing probability distributions as elements of a group algebra and by taking the transforms of such elements. Matrix transforms with the convolution property were constructed from orthogonality relations, and an efficient method for computing group convolutions was obtained from a generalization of the fast Fourier transform to metacyclic groups. A separate and similar development was also presented by Eberly and Hartung (1987). This development included the use of group algebras as a basis for group convolutions and for the construction of matrices with the convolution property. The discrete Fourier transform, the discrete Walsh
transform, and a transform based on the dihedral group were used as examples. Higher-dimensional transforms were constructed where the underlying group is the direct product of other groups. This section begins with a review of this earlier work and a development of the properties of matrices that have the convolution property. The sections that follow present several methods for constructing such matrices with the convolution property: by direct tensor products of matrices, by semidirect tensor products of matrices, and by exhaustive determination of all possible combinations for the matrix entries. The section concludes with a discussion of the application of these matrices with the convolution property to the construction of fast algorithms for computing group convolutions.

A. Vector Spaces of Group Algebra Elements
Let $\mathbb{F}[G]$ be a group algebra over the field $\mathbb{F}$, where $G = \{g_i : i \in I_N\}$ is a multiplicative group of order $N$ with identity element $g_0$ and index summation operation $\oplus$. Define
$$\mathbb{F}[G]^K = \mathbb{F}^K[G] = \mathbb{F}[G] \times \mathbb{F}[G] \times \cdots \times \mathbb{F}[G]$$
to be the Cartesian product of $K$ copies of the group algebra $\mathbb{F}[G]$. Under the usual operations of component-wise addition and multiplication by a scalar, $\mathbb{F}^K[G]$ is a $K$-dimensional vector space over the group algebra $\mathbb{F}[G]$.

Let $\mathbb{F}^R[G]$ and $\mathbb{F}^S[G]$ be $R$- and $S$-dimensional vector spaces over the group algebra $\mathbb{F}[G]$, respectively. Define
$$\mathcal{M}_{R\times S} = \{A = [a_{ij}] : a_{ij} \in \mathbb{F}[G],\; i \in I_R,\; j \in I_S\}$$
to be the set of all $R \times S$ matrices with entries from the group algebra $\mathbb{F}[G]$. Let $\mathbf{x} = [x_i] \in \mathbb{F}^R[G]$ and $\mathbf{y} = [y_j] \in \mathbb{F}^S[G]$, for $i \in I_R$, $j \in I_S$. For every $A = [a_{ij}] \in \mathcal{M}_{R\times S}$, define the mapping $A: \mathbb{F}^S[G] \to \mathbb{F}^R[G]$ by
$$A\mathbf{y} = \Big[\sum_{j\in I_S} a_{ij}y_j\Big].$$
Let $\mathbb{F}^N[G]$ be an $N$-dimensional vector space and let $\mathcal{M}_{N\times N}$ be the set of all $N \times N$ square matrices with entries from the group algebra $\mathbb{F}[G]$. Let $A = [a_{ij}]$ and $B = [b_{ij}]$ be two matrices in $\mathcal{M}_{N\times N}$, where $a_{ij}, b_{ij} \in \mathbb{F}[G]$ and $i, j \in I_N$. Then the set of matrices $\mathcal{M}_{N\times N}$ is a ring with addition given by
$$[a_{ij}] + [b_{ij}] = [a_{ij} + b_{ij}]$$
and multiplication given by
$$[a_{ij}][b_{ij}] = \Big[\sum_{k\in I_N} a_{ik}b_{kj}\Big].$$
The set of matrices $\mathcal{M}_{N\times N}$ is also an associative algebra over the field $\mathbb{F}$, where scalar multiplication is given by $c[a_{ij}] = [ca_{ij}]$, $c \in \mathbb{F}$.

Let $\mathbf{x} = [x_i]$ and $\mathbf{y} = [y_i]$ be two vectors in the $\mathbb{F}^N[G]$ vector space, where $x_i, y_i \in \mathbb{F}[G]$ and $i \in I_N$. The component-wise multiplication of vectors $\mathbf{x}$ and $\mathbf{y}$, denoted $\mathbf{x} \circ \mathbf{y}$, is defined by
$$[x_i] \circ [y_i] = [x_i y_i].$$
The convolution of vectors $\mathbf{x}$ and $\mathbf{y}$, denoted $\mathbf{x} * \mathbf{y}$, is defined by
$$\mathbf{x} * \mathbf{y} = \sum_{k\in I_N}\Big(\sum_{i\oplus j = k} x_i y_j\Big)\mathbf{e}_k,$$
where $\mathbf{e}_k$ is the vector with a 1 in the $k$th component and 0s elsewhere. Let $C(\mathbb{F}[G])$ be the center of the group algebra $\mathbb{F}[G]$. The center of the vector space $\mathbb{F}^N[G]$ is the set of vectors that commute, under component-wise multiplication, with every vector in $\mathbb{F}^N[G]$, and is given by $C(\mathbb{F}^N[G]) = C(\mathbb{F}[G])^N$.

B. Matrices with the Convolution Property
Let $\mathbb{F}^N[G]$ be an $N$-dimensional vector space over the group algebra $\mathbb{F}[G]$, where $G = \{g_i : i \in I_N\}$ is a multiplicative group of order $N$ with identity element $g_0$ and index summation operation $\oplus$. Let $\mathcal{M}_{N\times N}$ be the set of all $N \times N$ square matrices whose entries are from the group algebra $\mathbb{F}[G]$. Let $\mathbf{u}$ and $\mathbf{v}$ be two vectors in $C(\mathbb{F}^N[G])$, and let $A$ be a matrix from the set $\mathcal{M}_{N\times N}$. The matrix $A$ has the convolution property with respect to the group $G$ if
$$A(\mathbf{u} * \mathbf{v}) = (A\mathbf{u}) \circ (A\mathbf{v}).$$

Theorem 1. For the matrix $A = [a_{ij}]$ to have the convolution property, it is necessary and sufficient that $a_{im}a_{in} = a_{i,m\oplus n}$, $\forall i, m, n \in I_N$.

Proof. Let $a_{im}a_{in} = a_{i,m\oplus n}$, $\forall i, m, n \in I_N$; then
$$(A\mathbf{u}) \circ (A\mathbf{v}) = \Big[\sum_{m\in I_N}\sum_{n\in I_N} a_{im}a_{in}u_m v_n\Big] = \Big[\sum_{m\in I_N}\sum_{n\in I_N} a_{i,m\oplus n}u_m v_n\Big] = \Big[\sum_{k\in I_N} a_{ik}\sum_{m\oplus n = k} u_m v_n\Big] = A(\mathbf{u}*\mathbf{v}).$$
This proves sufficiency. Let the matrix $A$ have the convolution property. Then $A(\mathbf{u}*\mathbf{v}) = (A\mathbf{u}) \circ (A\mathbf{v})$ gives
$$\sum_{m\in I_N}\sum_{n\in I_N} a_{i,m\oplus n}u_m v_n = \sum_{m\in I_N}\sum_{n\in I_N} a_{im}a_{in}u_m v_n$$
for all $\mathbf{u}, \mathbf{v} \in C(\mathbb{F}^N[G])$; choosing $\mathbf{u} = \mathbf{e}_m$ and $\mathbf{v} = \mathbf{e}_n$ yields
$$a_{i,m\oplus n} = a_{im}a_{in}.$$
This proves necessity.
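For the cyclic group $Z_N$, one family of matrices satisfying Theorem 1 has entries $a_{im} = g_{(im \bmod N)}$. Encoding each group element by its index, the row condition $a_{im}a_{in} = a_{i,m\oplus n}$ can be checked exhaustively (a sketch of ours, not the authors' code):

```python
# Entries of A are group elements of Z_N, encoded by their index:
# a_{im} = g_{(i*m) mod N}, with group law g_p g_q = g_{(p+q) mod N}.
N = 6
a = [[(i * m) % N for m in range(N)] for i in range(N)]
for i in range(N):
    for m in range(N):
        for n in range(N):
            # Theorem 1: a_{im} a_{in} = a_{i, m (+) n}
            assert (a[i][m] + a[i][n]) % N == a[i][(m + n) % N]
```

Pushing this matrix through the one-dimensional representation $g \mapsto e^{-2\pi\sqrt{-1}/N}$ produces the discrete Fourier transform matrix, a connection made explicit in Section VII.G.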
Theorem 2. Let $A = [a_{ij}] \in \mathcal{M}_{N\times N}$ be a matrix with the convolution property, where $i, j \in I_N$. Then:
1. $a_{i0} = (1)g_0$.
2. $(a_{ij})^{-1} = a_{i,j^{-1}}$.
3. $[\mathrm{net}(a_{ij})]^N = 1$.
4. For any invertible element $\gamma \in \mathbb{F}[G]$, the matrix $B = \gamma^{-1}A\gamma$ also has the convolution property.

Proof.
1. Let $a_{i0}$ be the $i$th entry in the first column of the matrix $A$ with the convolution property. From Theorem 1, $a_{im}a_{in} = a_{i,m\oplus n}$, $\forall i, m, n \in I_N$, implies that
$$a_{im}a_{i0} = a_{i,m\oplus 0} = a_{im} \quad\text{and}\quad a_{i0}a_{in} = a_{i,0\oplus n} = a_{in}.$$
Since $a_{im}a_{i0} = a_{im}$ and $a_{i0}a_{in} = a_{in}$, $a_{i0}$ must be the group algebra multiplicative identity element $(1)g_0$.
2. Let $j$ be an element in the index set $I_N$ and let $j^{-1}$ also be in the index set $I_N$ such that $j \oplus j^{-1} = 0$. Let $a_{ij}$ be an entry in the matrix $A$ with the convolution property. From Theorem 1,
$$a_{ij}a_{i,j^{-1}} = a_{i,j\oplus j^{-1}} = a_{i0} \quad\text{and}\quad a_{i,j^{-1}}a_{ij} = a_{i,j^{-1}\oplus j} = a_{i0}.$$
Since $a_{i0}$ is the group algebra multiplicative identity element, this implies that every entry in the matrix $A$ with the convolution property must have a multiplicative inverse in the group algebra, which happens to be given by $(a_{ij})^{-1} = a_{i,j^{-1}}$.
3. Let $a_{ij}$ be an entry in the matrix $A$ with the convolution property and consider $[\mathrm{net}(a_{ij})]^N$. From the multiplicative property of the net function, $[\mathrm{net}(a_{ij})]^N = \mathrm{net}(a_{ij}^N)$. From Theorem 1, $a_{ij}^N = a_{i,Nj}$, where $Nj = j \oplus j \oplus \cdots \oplus j$ ($N$ terms). Since the group $G$ is of order $N$, for any element $g_k \in G$, $k \in I_N$, the group multiplicative identity element $g_0$ is obtained from $g_k^N$. This implies that $\mathrm{net}(a_{i,Nj}) = \mathrm{net}(a_{i0})$ and, from Theorem 1, $\mathrm{net}(a_{i0}) = \mathrm{net}((1)g_0) = 1$. Thus $[\mathrm{net}(a_{ij})]^N = 1$.
4. Let $\gamma$ be an invertible element in the group algebra $\mathbb{F}[G]$. Let the matrix $A = [a_{ij}] \in \mathcal{M}_{N\times N}$ be a matrix with the convolution property, and let $B = [b_{ij}] \in \mathcal{M}_{N\times N}$ be the matrix given by $B = \gamma^{-1}A\gamma$, so that $b_{ij} = \gamma^{-1}a_{ij}\gamma$. Consider the group algebra elements $b_{im}$ and $b_{in}$ from the same row $i$ of the matrix $B$. Then
$$b_{im}b_{in} = (\gamma^{-1}a_{im}\gamma)(\gamma^{-1}a_{in}\gamma) = \gamma^{-1}a_{im}a_{in}\gamma = \gamma^{-1}a_{i,m\oplus n}\gamma = b_{i,m\oplus n}.$$
From Theorem 1, the matrix $B$ has the convolution property. This completes the proof of Theorem 2.

C. Examples

1. Matrix Transforms over Dihedral Groups
Let $\mathbb{F}^6[\mathcal{D}_3]$ be a six-dimensional vector space over the group algebra $\mathbb{F}[\mathcal{D}_3]$, where $\mathcal{D}_3 = \{1, \tau, \sigma, \tau\sigma, \sigma^2, \tau\sigma^2\}$ is the dihedral group of order 6 with identity element 1 and index summation operation $\oplus$ given by
$$i \oplus j = (j + (-1)^j i) \bmod 6, \quad \forall i, j \in I_6.$$
Let $\mathcal{M}_{6\times 6}$ be the set of all $6 \times 6$ matrices whose entries are from the group algebra $\mathbb{F}[\mathcal{D}_3]$. Let $A = [a_{ij}] \in \mathcal{M}_{6\times 6}$ be a matrix with the convolution property. From Theorem 1,
$$a_{im}a_{in} = a_{i,m\oplus n}, \quad \forall i, m, n \in I_6.$$
This formula involves only the elements in any given row of the matrix $A$, so we drop the row index. Consider any row given by
$$[\,a_0 \;\; a_1 \;\; a_2 \;\; a_3 \;\; a_4 \;\; a_5\,], \quad a_i \in \mathbb{F}[\mathcal{D}_3],\; i \in I_6.$$
The formula from Theorem 1 reduces to
$$a_m a_n = a_{m\oplus n}, \quad \forall m, n \in I_6.$$
It is possible to construct a matrix with the convolution property from elements in the $\mathbb{F}[\mathcal{D}_3]$ group algebra by determining all possible combinations for the entries in a given row of the matrix. For simplicity, consider the entries in the matrix to be only from the set $\{\pm(1)g_k : g_k \in \mathcal{D}_3,\; k \in I_6\}$. From Theorem 1, every entry in the first column of a matrix with the convolution property must be the group algebra multiplicative identity element 1. Thus for every row, $a_0 = 1$. Consider all relationships between the row entries $a_2$ and $a_4$. From the formula of Theorem 1,
$$a_2 a_2 = a_{2\oplus 2} = a_4, \qquad a_4 a_4 = a_{4\oplus 4} = a_2,$$
and
$$a_2 a_2 a_2 = a_2 a_4 = a_0, \qquad a_4 a_4 a_4 = a_4 a_2 = a_0.$$
Also, consider the relationships between the row entries $a_1, a_3, a_5$. From the formula of Theorem 1,
$$a_1 a_2 = a_{1\oplus 2} = a_3 = a_{4\oplus 1} = a_4 a_1,$$
$$a_1 a_4 = a_{1\oplus 4} = a_5 = a_{2\oplus 1} = a_2 a_1,$$
$$a_3 a_2 = a_{3\oplus 2} = a_5 = a_{4\oplus 3} = a_4 a_3,$$
$$a_3 a_4 = a_{3\oplus 4} = a_1 = a_{2\oplus 3} = a_2 a_3,$$
$$a_5 a_2 = a_{5\oplus 2} = a_1 = a_{4\oplus 5} = a_4 a_5,$$
$$a_5 a_4 = a_{5\oplus 4} = a_3 = a_{2\oplus 5} = a_2 a_5,$$
and
$$a_1 a_1 = a_{1\oplus 1} = a_0, \qquad a_3 a_3 = a_{3\oplus 3} = a_0, \qquad a_5 a_5 = a_{5\oplus 5} = a_0.$$
The only possible combinations for the row entries $a_2$ and $a_4$ are given by
$$(a_2, a_4) \in \{(1, 1),\; (\sigma, \sigma^2),\; (\sigma^2, \sigma)\}$$
and are considered separately.

Case 1: $(a_2, a_4) = (1, 1)$. Consider
$$a_3 = a_1 a_2 = a_4 a_1 = a_1, \qquad a_5 = a_1 a_4 = a_2 a_1 = a_1.$$
Thus $a_1 = a_3 = a_5$, and since $a_1 a_1 = a_0 = 1$,
$$a_1 \in \{\pm 1, \pm\tau, \pm\tau\sigma, \pm\tau\sigma^2\}.$$
This leads to eight possibilities for a given row of a matrix with the convolution property:
$$[\,1 \;\; {\pm}1 \;\; 1 \;\; {\pm}1 \;\; 1 \;\; {\pm}1\,], \qquad [\,1 \;\; {\pm}\tau \;\; 1 \;\; {\pm}\tau \;\; 1 \;\; {\pm}\tau\,],$$
$$[\,1 \;\; {\pm}\tau\sigma \;\; 1 \;\; {\pm}\tau\sigma \;\; 1 \;\; {\pm}\tau\sigma\,], \qquad [\,1 \;\; {\pm}\tau\sigma^2 \;\; 1 \;\; {\pm}\tau\sigma^2 \;\; 1 \;\; {\pm}\tau\sigma^2\,].$$
Case 2: $(a_2, a_4) = (\sigma, \sigma^2)$. Consider
$$a_1\sigma = a_1 a_2 = a_3 = a_4 a_1 = \sigma^2 a_1, \qquad a_1\sigma^2 = a_1 a_4 = a_5 = a_2 a_1 = \sigma a_1.$$
Thus $a_3 = a_1\sigma$ and $a_5 = a_1\sigma^2$, and since $a_1\sigma = \sigma^2 a_1$,
$$a_1 \in \{\pm\tau, \pm\tau\sigma, \pm\tau\sigma^2\}.$$
This leads to six possibilities for a given row of a matrix with the convolution property:
$$[\,1 \;\; {\pm}\tau \;\; \sigma \;\; {\pm}\tau\sigma \;\; \sigma^2 \;\; {\pm}\tau\sigma^2\,],$$
$$[\,1 \;\; {\pm}\tau\sigma \;\; \sigma \;\; {\pm}\tau\sigma^2 \;\; \sigma^2 \;\; {\pm}\tau\,],$$
$$[\,1 \;\; {\pm}\tau\sigma^2 \;\; \sigma \;\; {\pm}\tau \;\; \sigma^2 \;\; {\pm}\tau\sigma\,].$$
Case 3: $(a_2, a_4) = (\sigma^2, \sigma)$. Consider
$$a_1\sigma^2 = a_1 a_2 = a_3 = a_4 a_1 = \sigma a_1, \qquad a_1\sigma = a_1 a_4 = a_5 = a_2 a_1 = \sigma^2 a_1.$$
Thus $a_3 = a_1\sigma^2$ and $a_5 = a_1\sigma$, and since $a_1\sigma^2 = \sigma a_1$,
$$a_1 \in \{\pm\tau, \pm\tau\sigma, \pm\tau\sigma^2\}.$$
This leads to six possibilities for a given row of a matrix with the convolution property:
$$[\,1 \;\; {\pm}\tau \;\; \sigma^2 \;\; {\pm}\tau\sigma^2 \;\; \sigma \;\; {\pm}\tau\sigma\,],$$
$$[\,1 \;\; {\pm}\tau\sigma \;\; \sigma^2 \;\; {\pm}\tau \;\; \sigma \;\; {\pm}\tau\sigma^2\,],$$
$$[\,1 \;\; {\pm}\tau\sigma^2 \;\; \sigma^2 \;\; {\pm}\tau\sigma \;\; \sigma \;\; {\pm}\tau\,].$$
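The case analysis can be confirmed by brute force. The sketch below (our own encoding of signed $\mathcal{D}_3$ elements; not from the text) represents $\pm\tau^s\sigma^r$ as a triple and searches all rows with $a_0 = 1$, recovering the $8 + 6 + 6 = 20$ possibilities:

```python
from itertools import product

def mul(x, y):
    # (sign, s, r) encodes sign * tau^s * sigma^r in D3;
    # sigma^r1 * tau^s2 = tau^s2 * sigma^{(-1)^{s2} r1}
    (e1, s1, r1), (e2, s2, r2) = x, y
    return (e1 * e2, (s1 + s2) % 2, (((-1) ** s2) * r1 + r2) % 3)

def op(i, j):
    # index summation: i (+) j = (j + (-1)^j i) mod 6
    return (j + ((-1) ** j) * i) % 6

elem = lambda m: (1, m % 2, m // 2)   # index -> unsigned group element
signed = [(e, s, r) for e in (1, -1) for s in (0, 1) for r in (0, 1, 2)]

count = 0
for tail in product(signed, repeat=5):
    row = (elem(0),) + tail           # a_0 = 1 is forced
    if all(mul(row[m], row[n]) == row[op(m, n)]
           for m in range(6) for n in range(6)):
        count += 1
assert count == 20                    # Cases 1-3: 8 + 6 + 6 valid rows
```

Each valid row is exactly a homomorphism from the index group into the signed group elements, which is why the count is finite and small.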
This approach can easily be extended to construct matrices with the convolution property and with elements from a group algebra based on the dihedral group $\mathcal{D}_M$. Let $\mathbb{F}^N[\mathcal{D}_M]$ be an $N$-dimensional vector space over the group algebra $\mathbb{F}[\mathcal{D}_M]$, where
$$\mathcal{D}_M = \{1, \tau, \sigma, \tau\sigma, \ldots, \sigma^{M-1}, \tau\sigma^{M-1}\}$$
is the dihedral group of order $N = 2M$ with identity element 1 and index summation operation given by
$$i \oplus j = (j + (-1)^j i + N) \bmod N, \quad \forall i, j \in I_N.$$
Let $\mathcal{M}_{N\times N}$ be the set of all $N \times N$ matrices whose entries are from the group algebra $\mathbb{F}[\mathcal{D}_M]$. Any row of a matrix $A \in \mathcal{M}_{N\times N}$ is given by
$$[\,a_0 \;\; a_1 \;\; a_2 \;\; \cdots \;\; a_{N-1}\,], \quad a_i \in \mathbb{F}[\mathcal{D}_M],\; i \in I_N.$$
Again from Theorem 1, every entry in the first column of a matrix with the convolution property must be the group algebra multiplicative identity element. Thus for every row, $a_0 = 1$. Again for simplicity, the remaining row entries of the matrix are to be selected from the set
$$\{\pm(1)g_k : g_k \in \mathcal{D}_M,\; k \in I_N\}.$$
Consider all relationships between the even-numbered row entries $a_{2k}$, $k \in I_M$. It can be verified that
$$(a_{2k})^M = a_0$$
and
$$a_{2i}a_{2j} = a_{2i\oplus 2j} = a_{2((i+j)\bmod M)}, \quad \forall i, j \in I_M.$$
Then the only possible combinations for the even-numbered row entries, excluding the row entry $a_0$, are given by
$$\{a_{2k} = \sigma^{pk \bmod M} : p \in I_M\}.$$
Furthermore, consider the relationships between the odd-numbered row entries $a_{2k+1}$, $k \in I_M$. Then
$$a_{2k+1}a_2 = a_{(2k+1)\oplus 2} = a_{2k+3} = a_{(-2)\oplus(2k+1)} = a_{2M-2}\,a_{2k+1},$$
$$a_{2k+1}a_4 = a_{(2k+1)\oplus 4} = a_{2k+5} = a_{(-4)\oplus(2k+1)} = a_{2M-4}\,a_{2k+1},$$
$$\vdots$$
$$a_{2k+1}a_{2M-2} = a_{(2k+1)\oplus(2M-2)} = a_{2k-1} = a_{(-(2M-2))\oplus(2k+1)} = a_2\,a_{2k+1},$$
and
$$a_1 a_1 = a_{1\oplus 1} = a_0, \qquad a_3 a_3 = a_{3\oplus 3} = a_0, \qquad \ldots, \qquad a_{2M-1}a_{2M-1} = a_{(2M-1)\oplus(2M-1)} = a_0.$$
It can be verified that for every possible combination of the even-numbered row entries, the $a_1$ row entry can always be selected from the set
$$\{\pm\tau, \pm\tau\sigma, \pm\tau\sigma^2, \ldots, \pm\tau\sigma^{M-1}\}.$$
The remaining odd-numbered entries are always determined by the particular selection of the even-numbered row entries and the $a_1$ row entry. In the special case that all even-numbered row entries are chosen to be 1, the $a_1$ row entry additionally can assume the values $\pm 1$.
2. Matrix Transforms over a Quaternion Group

Let $\mathbb{F}^8[\mathcal{Q}]$ be an eight-dimensional vector space over the group algebra $\mathbb{F}[\mathcal{Q}]$, where $\mathcal{Q} = \{\pm 1, \pm i, \pm j, \pm k\}$ is the quaternion group of order 8 with identity element 1 and index summation operation $\oplus$ given by the table in Section III.A. Let $\mathcal{M}_{8\times 8}$ be the set of all $8 \times 8$ matrices whose entries are from the group algebra $\mathbb{F}[\mathcal{Q}]$. Let $A = [a_{ij}] \in \mathcal{M}_{8\times 8}$ be a matrix with the convolution property. The elements of any row of the matrix are given by
$$[\,a_0 \;\; a_1 \;\; a_2 \;\; a_3 \;\; a_4 \;\; a_5 \;\; a_6 \;\; a_7\,], \quad a_i \in \mathbb{F}[\mathcal{Q}],\; i \in I_8.$$
From Theorem 1,
$$a_m a_n = a_{m\oplus n}, \quad \forall m, n \in I_8.$$
It is possible to construct a matrix with the convolution property from elements in the $\mathbb{F}[\mathcal{Q}]$ group algebra by determining all possible combinations for the entries in a given row of the matrix. For simplicity, consider the entries in the matrix to be only from the set
$$\{g_k : g_k \in \mathcal{Q},\; k \in I_8\}.$$
From Theorem 1, every entry in the first column of a matrix with the convolution property must be the group algebra multiplicative identity element 1. Thus for every row, $a_0 = 1$. Consider the relationship of row entry $a_1$ with the remaining row entries:
$$a_1 a_2 = a_{1\oplus 2} = a_3 = a_{2\oplus 1} = a_2 a_1,$$
$$a_1 a_3 = a_{1\oplus 3} = a_2 = a_{3\oplus 1} = a_3 a_1,$$
$$a_1 a_4 = a_{1\oplus 4} = a_5 = a_{4\oplus 1} = a_4 a_1,$$
$$a_1 a_5 = a_{1\oplus 5} = a_4 = a_{5\oplus 1} = a_5 a_1,$$
$$a_1 a_6 = a_{1\oplus 6} = a_7 = a_{6\oplus 1} = a_6 a_1,$$
$$a_1 a_7 = a_{1\oplus 7} = a_6 = a_{7\oplus 1} = a_7 a_1.$$
This implies that $a_1 \in C(\mathbb{F}[\mathcal{Q}])$. Furthermore,
$$a_1 a_1 = a_{1\oplus 1} = a_0$$
implies that the row entry $a_1 = \pm 1$, where both cases are considered separately.
Case 1: $a_1 = 1$. Consider
$$a_2 a_2 = a_{2\oplus 2} = a_1 = 1, \qquad a_4 a_4 = a_{4\oplus 4} = a_1 = 1, \qquad a_6 a_6 = a_{6\oplus 6} = a_1 = 1,$$
which imply $a_2 = \pm 1$, $a_4 = \pm 1$, and $a_6 = \pm 1$. The relationship
$$a_2 a_4 = a_{2\oplus 4} = a_6$$
and the facts that
$$a_3 = a_1 a_2 = a_2, \qquad a_5 = a_1 a_4 = a_4, \qquad a_7 = a_1 a_6 = a_6$$
imply that selecting the row entries $a_2$ and $a_4$ determines the remaining row entries $a_3, a_5, a_6, a_7$. This leads to four possibilities for a given row of a matrix with the convolution property:
$$[\,1 \;\; 1 \;\; 1 \;\; 1 \;\; 1 \;\; 1 \;\; 1 \;\; 1\,],$$
$$[\,1 \;\; 1 \;\; {-}1 \;\; {-}1 \;\; {-}1 \;\; {-}1 \;\; 1 \;\; 1\,],$$
$$[\,1 \;\; 1 \;\; 1 \;\; 1 \;\; {-}1 \;\; {-}1 \;\; {-}1 \;\; {-}1\,],$$
$$[\,1 \;\; 1 \;\; {-}1 \;\; {-}1 \;\; 1 \;\; 1 \;\; {-}1 \;\; {-}1\,].$$
The first two and last two combinations can be rewritten as
$$[\,1 \;\; 1 \;\; {\pm}1 \;\; {\pm}1 \;\; {\pm}1 \;\; {\pm}1 \;\; 1 \;\; 1\,], \qquad [\,1 \;\; 1 \;\; {\pm}1 \;\; {\pm}1 \;\; {\mp}1 \;\; {\mp}1 \;\; {-}1 \;\; {-}1\,].$$

Case 2: $a_1 = -1$. Consider
$$a_2 a_2 = a_{2\oplus 2} = a_1 = -1, \qquad a_4 a_4 = a_{4\oplus 4} = a_1 = -1, \qquad a_6 a_6 = a_{6\oplus 6} = a_1 = -1,$$
which imply that $a_2$, $a_4$, $a_6$ can be chosen from the set $\{\pm i, \pm j, \pm k\}$. The relationship
$$a_2 a_4 = a_{2\oplus 4} = a_6$$
and the facts that
$$a_3 = a_1 a_2 = -a_2, \qquad a_5 = a_1 a_4 = -a_4, \qquad a_7 = a_1 a_6 = -a_6$$
imply that selecting the row entries $a_2$ and $a_4$ determines the remaining row entries $a_3, a_5, a_6, a_7$. Consider the six possibilities for selecting $a_2$ as three separate cases.
Case a: $a_2 = \pm i$. This implies that the row entry $a_3 = \mp i$ and that the row entry $a_4$ must be selected from $\pm j, \mp j, \pm k, \mp k$ ($a_4$ cannot be $\pm i$ or $\mp i$, since $a_2 a_4 = a_6$ would imply that $a_6 = \pm 1$). This leads to eight possibilities for a given row of a matrix with the convolution property:
$$[\,1 \;\; {-}1 \;\; {\pm}i \;\; {\mp}i \;\; {\pm}j \;\; {\mp}j \;\; k \;\; {-}k\,],$$
$$[\,1 \;\; {-}1 \;\; {\pm}i \;\; {\mp}i \;\; {\mp}j \;\; {\pm}j \;\; {-}k \;\; k\,],$$
$$[\,1 \;\; {-}1 \;\; {\pm}i \;\; {\mp}i \;\; {\pm}k \;\; {\mp}k \;\; {-}j \;\; j\,],$$
$$[\,1 \;\; {-}1 \;\; {\pm}i \;\; {\mp}i \;\; {\mp}k \;\; {\pm}k \;\; j \;\; {-}j\,].$$
Case b: $a_2 = \pm j$. This implies that the row entry $a_3 = \mp j$ and that the row entry $a_4$ must be selected from $\pm k, \mp k, \pm i, \mp i$ ($a_4$ cannot be $\pm j$ or $\mp j$, since $a_2 a_4 = a_6$ would imply that $a_6 = \pm 1$). This leads to eight possibilities for a given row of a matrix with the convolution property:
$$[\,1 \;\; {-}1 \;\; {\pm}j \;\; {\mp}j \;\; {\pm}k \;\; {\mp}k \;\; i \;\; {-}i\,],$$
$$[\,1 \;\; {-}1 \;\; {\pm}j \;\; {\mp}j \;\; {\mp}k \;\; {\pm}k \;\; {-}i \;\; i\,],$$
$$[\,1 \;\; {-}1 \;\; {\pm}j \;\; {\mp}j \;\; {\pm}i \;\; {\mp}i \;\; {-}k \;\; k\,],$$
$$[\,1 \;\; {-}1 \;\; {\pm}j \;\; {\mp}j \;\; {\mp}i \;\; {\pm}i \;\; k \;\; {-}k\,].$$
Case c: $a_2 = \pm k$. This implies that the row entry $a_3 = \mp k$ and that the row entry $a_4$ must be selected from $\pm i, \mp i, \pm j, \mp j$ ($a_4$ cannot be $\pm k$ or $\mp k$, since $a_2 a_4 = a_6$ would imply that $a_6 = \pm 1$). This leads to eight possibilities for a given row of a matrix with the convolution property:
$$[\,1 \;\; {-}1 \;\; {\pm}k \;\; {\mp}k \;\; {\pm}i \;\; {\mp}i \;\; j \;\; {-}j\,],$$
$$[\,1 \;\; {-}1 \;\; {\pm}k \;\; {\mp}k \;\; {\mp}i \;\; {\pm}i \;\; {-}j \;\; j\,],$$
$$[\,1 \;\; {-}1 \;\; {\pm}k \;\; {\mp}k \;\; {\pm}j \;\; {\mp}j \;\; {-}i \;\; i\,],$$
$$[\,1 \;\; {-}1 \;\; {\pm}k \;\; {\mp}k \;\; {\mp}j \;\; {\pm}j \;\; i \;\; {-}i\,].$$
D. Direct Tensor Product Matrices

Let $\mathbb{F}^M[G]$ be an $M$-dimensional vector space over the group algebra $\mathbb{F}[G]$, where $G = \{g_i : i \in I_M\}$ is a multiplicative group of order $M$ with identity element $g_0$. Let $\mathbb{F}^N[H]$ be an $N$-dimensional vector space over the group algebra $\mathbb{F}[H]$, where $H = \{h_j : j \in I_N\}$ is a multiplicative group of order $N$ with identity element $h_0$. Let $\mathcal{M}_{M\times M}$ be the set of all $M \times M$ matrices whose entries are from the group algebra $\mathbb{F}[G]$, and let $\mathcal{M}_{N\times N}$ be the set of all $N \times N$ matrices whose entries are from the group algebra $\mathbb{F}[H]$. Let $\mathbb{F}^{MN}[G \odot H]$ be an $MN$-dimensional vector space over the group algebra $\mathbb{F}[G \odot H]$, where $G \odot H$ is a direct product group. From Section IV, $G \odot H$ is isomorphic to the group $F = \{f_k : k \in I_{MN}\}$ of order $MN$, and $\theta: I_M \odot I_N \to I_{MN}$ is an induced isomorphism on the index sets given by $k = \theta(i,j)$ for $i \in I_M$, $j \in I_N$. Let $\mathcal{M}_{MN\times MN}$ be the set of all $MN \times MN$ matrices whose entries are from the direct tensor product group algebra $\mathbb{F}[G] \otimes \mathbb{F}[H]$.

Let $A = [a_{i_1 i_2}]$ be a matrix from the set $\mathcal{M}_{M\times M}$, and let $B = [b_{j_1 j_2}]$ be a matrix from the set $\mathcal{M}_{N\times N}$. Define $C = A \otimes B$ to be a matrix from the set $\mathcal{M}_{MN\times MN}$, composed from the direct tensor product of matrices $A$ and $B$, and given by
$$C = [c_{\theta(i_1,j_1)\theta(i_2,j_2)}] = [a_{i_1 i_2} \otimes b_{j_1 j_2}].$$
Let matrices $A$ and $B$ both have the convolution property. Let $c_{rs}$ and $c_{rt}$ be two elements in the same row of the matrix $C = A \otimes B$, given by
$$c_{rs} = c_{\theta(i_r,j_r)\theta(i_s,j_s)}, \qquad c_{rt} = c_{\theta(i_r,j_r)\theta(i_t,j_t)}.$$
Consider the product
$$c_{rs}c_{rt} = c_{\theta(i_r,j_r)\theta(i_s,j_s)}\,c_{\theta(i_r,j_r)\theta(i_t,j_t)}.$$
From the definition of the elements in the matrix $C = A \otimes B$,
$$c_{\theta(i_r,j_r)\theta(i_s,j_s)}\,c_{\theta(i_r,j_r)\theta(i_t,j_t)} = (a_{i_r i_s} \otimes b_{j_r j_s})(a_{i_r i_t} \otimes b_{j_r j_t}).$$
From Section IV, this product can be reduced to
$$(a_{i_r i_s} \otimes b_{j_r j_s})(a_{i_r i_t} \otimes b_{j_r j_t}) = (a_{i_r i_s}a_{i_r i_t} \otimes b_{j_r j_s}b_{j_r j_t}).$$
Since the matrices $A$ and $B$ both have the convolution property, from Theorem 1,
$$(a_{i_r i_s}a_{i_r i_t} \otimes b_{j_r j_s}b_{j_r j_t}) = (a_{i_r,\, i_s\oplus i_t} \otimes b_{j_r,\, j_s\oplus j_t}).$$
Again from the definition of the elements in matrix $C = A \otimes B$,
$$(a_{i_r,\, i_s\oplus i_t} \otimes b_{j_r,\, j_s\oplus j_t}) = c_{\theta(i_r,j_r)\theta(i_s\oplus i_t,\, j_s\oplus j_t)}.$$
Since $\theta$ is an isomorphism over the index summation operation $\oplus$,
$$c_{\theta(i_r,j_r)\theta(i_s\oplus i_t,\, j_s\oplus j_t)} = c_{\theta(i_r,j_r),\,\theta(i_s,j_s)\oplus\theta(i_t,j_t)}.$$
By the way the entries $c_{rs}$ and $c_{rt}$ were defined above using the $\theta$ isomorphism,
$$c_{\theta(i_r,j_r),\,\theta(i_s,j_s)\oplus\theta(i_t,j_t)} = c_{r,\,s\oplus t}.$$
Thus it has been shown that $c_{rs}c_{rt} = c_{r,\,s\oplus t}$ and, from Theorem 1, this implies that the matrix $C = A \otimes B$ has the convolution property.

E. Semidirect Tensor Product Matrices
Let $\mathbb{F}^M[G]$ be an $M$-dimensional vector space over the group algebra $\mathbb{F}[G]$, where $G = \{g_i : i \in I_M\}$ is a multiplicative group of order $M$ with identity element $g_0$. Let $\mathbb{F}^N[H]$ be an $N$-dimensional vector space over the group algebra $\mathbb{F}[H]$, where $H = \{h_j : j \in I_N\}$ is a multiplicative group of order $N$ with identity element $h_0$. Let $\mathcal{M}_{M\times M}$ be the set of all $M \times M$ matrices whose entries are from the group algebra $\mathbb{F}[G]$, and let $\mathcal{M}_{N\times N}$ be the set of all $N \times N$ matrices whose entries are from the group algebra $\mathbb{F}[H]$. Let $\mathbb{F}^{MN}[G \,\bar{\odot}\, H]$ be an $MN$-dimensional vector space over the group algebra $\mathbb{F}[G \,\bar{\odot}\, H]$, where $G \,\bar{\odot}\, H$ is a semidirect product group. From Section II.C, $G \,\bar{\odot}\, H$ is isomorphic to the group $F = \{f_k : k \in I_{MN}\}$ of order $MN$, and $\theta: I_M \,\bar{\odot}\, I_N \to I_{MN}$ is an induced isomorphism on the index sets given by $k = \theta(i,j)$ for $i \in I_M$, $j \in I_N$. Let $\mathcal{M}_{MN\times MN}$ be the set of all $MN \times MN$ matrices whose entries are from the semidirect tensor product group algebra $\mathbb{F}[G] \,\bar{\otimes}\, \mathbb{F}[H]$.

Let $A = [a_{i_1 i_2}]$ be a matrix from the set $\mathcal{M}_{M\times M}$, and let $B = [b_{j_1 j_2}]$ be a matrix from the set $\mathcal{M}_{N\times N}$. Define $C = A \,\bar{\otimes}\, B$ to be a matrix from the set $\mathcal{M}_{MN\times MN}$, composed from the semidirect tensor product of matrices $A$ and $B$, and given by
$$C = [c_{\theta(i_1,j_1)\theta(i_2,j_2)}] = [a_{i_1 i_2} \,\bar{\otimes}\, b_{j_1 j_2}].$$
Let matrices $A$ and $B$ both have the convolution property. Restrict the group algebra element entries in the matrix $A$ to be from the set $\{(1)g_i : g_i \in G,\; i \in I_M\}$, so that the matrix $A$ is given by
$$A = [a_{i_1 i_2}] = [g_{S(i_1, i_2)}], \quad i_1, i_2 \in I_M,\; g_{S(i_1,i_2)} \in G,$$
where $S: I_M \,\bar{\odot}\, I_M \to I_M$ is a homomorphism. In a similar manner, restrict the group algebra element entries in the matrix $B$ to be from the set $\{(1)h_j : h_j \in H,\; j \in I_N\}$, so that the matrix $B$ is given by
$$B = [b_{j_1 j_2}] = [h_{T(j_1, j_2)}], \quad j_1, j_2 \in I_N,\; h_{T(j_1,j_2)} \in H,$$
where $T: I_N \,\bar{\odot}\, I_N \to I_N$ is a homomorphism. Let $c_{rs}$ and $c_{rt}$ be two elements in the same row of the matrix $C = A \,\bar{\otimes}\, B$, given by
$$c_{rs} = c_{\theta(i_r,j_r)\theta(i_s,j_s)}, \qquad c_{rt} = c_{\theta(i_r,j_r)\theta(i_t,j_t)}.$$
From the definition of the elements in the matrix $C = A \,\bar{\otimes}\, B$, the elements $c_{rs}$ and $c_{rt}$ can be rewritten as
$$c_{rs} = g_{S(i_r,i_s)} \,\bar{\otimes}\, h_{T(j_r,j_s)}, \qquad c_{rt} = g_{S(i_r,i_t)} \,\bar{\otimes}\, h_{T(j_r,j_t)}.$$
Then the product of these two elements is given by
$$c_{rs}c_{rt} = c_{r,\,s\oplus t},$$
which from Theorem 1 would imply that the matrix $C = A \,\bar{\otimes}\, B$ has the convolution property. Further analysis reveals that the assumption
F. Inverses of Matrices with Group Algebra Elements

Let $\mathbb{F}^N[G]$ be an $N$-dimensional vector space over the group algebra $\mathbb{F}[G]$, where $G = \{g_i : i \in I_N\}$ is a multiplicative group of order $N$ with identity element $g_0$. Let $\mathcal{M}_{N\times N}$ be the set of all $N \times N$ matrices whose entries are from the group algebra $\mathbb{F}[G]$. Let $A = [a_{ij}] \in \mathcal{M}_{N\times N}$ be a matrix with the convolution property, where each entry $a_{ij}$ is an invertible group algebra element (from Theorem 2). Let $\bar{A} = [a_{ij}^{-1}]$ be the matrix in which each entry $a_{ij}$ from the matrix $A$ is inverted. The transpose of the matrix $\bar{A}$, denoted $\bar{A}^T$, is defined by $\bar{A}^T = [a_{ji}^{-1}]$. Then the product of the matrices $\bar{A}^T$ and $A$ is given by
$$\bar{A}^T A = \Big[\sum_{k\in I_N} a_{ki}^{-1} a_{kj}\Big].$$
Let the matrix $B = [b_{ij}] \in \mathcal{M}_{N\times N}$ be the matrix which results from the matrix product $\bar{A}^T A$, such that
$$b_{ij} = \sum_{k\in I_N} a_{ki}^{-1} a_{kj} = \sum_{k\in I_N} a_{k,\,i^{-1}\oplus j}.$$
The entries along the diagonal of the matrix $B = \bar{A}^T A$ are given by
$$b_{ii} = \sum_{k\in I_N} a_{k,\,i^{-1}\oplus i} = \sum_{k\in I_N} a_{k0} = \sum_{k\in I_N} 1 = N.$$
From this, the matrix $B = \bar{A}^T A$ can be rewritten as
$$B = \bar{A}^T A = N\mathcal{I} + C,$$
where $\mathcal{I} \in \mathcal{M}_{N\times N}$ is the $N \times N$ identity matrix and $C \in \mathcal{M}_{N\times N}$ is an $N \times N$ matrix with 0s along the diagonal. If the matrix $C$ is the zero matrix, i.e.,
$$\sum_{k\in I_N} a_{k,\,i^{-1}\oplus j} = 0, \quad i \neq j,\; i, j \in I_N,$$
then the inverse of the matrix $A$, denoted $A^{-1}$, can be given by
$$A^{-1} = \frac{1}{N}\bar{A}^T.$$
One direct way to compute $\bar{A}$ is to invert the group algebra elements of $A$, $a_{ij}$, one at a time. Another method is to replace $A$ by a larger matrix with field entries using the matrix representations discussed in Section III.C. The idea is illustrated using the dihedral group algebra $\mathbb{F}[\mathcal{D}_3]$, where $\tau$ and $\sigma$ are the generators for the dihedral group and have matrix representations
$$\phi(\tau) = \begin{bmatrix} 1 & 0 \\ 0 & -1 \end{bmatrix}, \qquad \phi(\sigma) = \begin{bmatrix} \cos(2\pi/3) & -\sin(2\pi/3) \\ \sin(2\pi/3) & \cos(2\pi/3) \end{bmatrix}.$$
The matrix $A$ defined below has the convolution property,
$$A = \begin{bmatrix}
1 & 1 & 1 & 1 & 1 & 1 \\
1 & -\tau & 1 & -\tau & 1 & -\tau \\
1 & -\tau & \sigma & -\tau\sigma & \sigma^2 & -\tau\sigma^2 \\
1 & -\tau & \sigma^2 & -\tau\sigma^2 & \sigma & -\tau\sigma \\
1 & \tau & \sigma & \tau\sigma & \sigma^2 & \tau\sigma^2 \\
1 & \tau & \sigma^2 & \tau\sigma^2 & \sigma & \tau\sigma
\end{bmatrix}.$$
The matrix representation of $A$ is $\phi(A) = [\phi(a_{ij})]$, the matrix of matrix representations of the entries of $A$. For our example, $\phi(A)$ is the $12 \times 12$ matrix whose $2 \times 2$ blocks are built from the entries $a = \cos(2\pi/3)$ and $b = \sin(2\pi/3)$. This matrix can be inverted and used for vector convolution computed in the realm of the matrix representations.

G. Fast Computation of the Matrix Transform

Let $\mathbf{u} = [u_i]$ and $\mathbf{v} = [v_j]$ be two vectors in $C(\mathbb{F}^N[G])$, for $u_i, v_j \in \mathbb{F}$ and $i, j \in I_N$. From Section III.B, the vector convolution of $\mathbf{u}$ and $\mathbf{v}$ is defined by
$$\mathbf{u} * \mathbf{v} = \sum_{k\in I_N}\Big(\sum_{i\oplus j = k} u_i v_j\Big)\mathbf{e}_k,$$
where $\mathbf{e}_k$ is a vector with a 1 in the $k$th component and 0s in the remaining components. By this definition, the computation of the vector convolution requires $N^2$ multiplications in the field $\mathbb{F}$. However, since the matrix $A$ has the convolution property, by definition
$$A(\mathbf{u} * \mathbf{v}) = (A\mathbf{u}) \circ (A\mathbf{v}).$$
If the inverse of the matrix $A$ can be found, then the vector convolution of the two vectors $\mathbf{u}$ and $\mathbf{v}$ can be written as
$$\mathbf{u} * \mathbf{v} = A^{-1}((A\mathbf{u}) \circ (A\mathbf{v})).$$
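For $G = Z_N$ and $A$ the discrete Fourier transform matrix, this formula is the familiar transform route to cyclic convolution. A NumPy sketch (our illustration; here $A$ acts over $\mathbb{C}$ rather than over a group algebra):

```python
import numpy as np

N = 8
rng = np.random.default_rng(0)
u = rng.standard_normal(N)
v = rng.standard_normal(N)

# direct cyclic convolution: (u*v)_k = sum over i (+) j = k of u_i v_j
direct = np.array([sum(u[i] * v[(k - i) % N] for i in range(N))
                   for k in range(N)])
# u*v = A^{-1}((Au) o (Av)) with A the DFT matrix
via_dft = np.fft.ifft(np.fft.fft(u) * np.fft.fft(v)).real
assert np.allclose(direct, via_dft)
```

The direct sum costs $N^2$ multiplications, while the transform route costs $O(N \log N)$, which is the point developed in the remainder of this section.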
The application of the matrix transform $A$ to the vector $\mathbf{u}$ yields the vector $A\mathbf{u} = \mathbf{U} = [U_m]$, $m \in I_N$, in the vector space $\mathbb{F}^N[G]$, which is given by
$$U_m = \sum_{i\in I_N} a_{mi}u_i.$$
Similarly, the vector $A\mathbf{v} = \mathbf{V} = [V_n]$, $n \in I_N$, in the vector space $\mathbb{F}^N[G]$ is given by
$$V_n = \sum_{j\in I_N} a_{nj}v_j.$$
By this definition, the computation of the vectors $\mathbf{U} = A\mathbf{u}$ and $\mathbf{V} = A\mathbf{v}$ each requires $N^2$ multiplications in the field $\mathbb{F}$. The component-wise multiplication of the vectors $\mathbf{U}$ and $\mathbf{V}$ requires $N$ multiplications in the field $\mathbb{F}$, and the application of the matrix transform $A^{-1}$ to the vector $\mathbf{U} \circ \mathbf{V}$ requires an additional $N^2$ multiplications in the field $\mathbb{F}$. The computation of the vector convolution of the vectors $\mathbf{u}$ and $\mathbf{v}$ by the formula
$$\mathbf{u} * \mathbf{v} = A^{-1}((A\mathbf{u}) \circ (A\mathbf{v}))$$
requires $3N^2 + N$, or $O(N^2)$, multiplications in the field $\mathbb{F}$. However, if a "fast" algorithm can be found for computing the matrix transform $A$ and its inverse transform $A^{-1}$, which requires fewer than $N^2$ multiplications in the field $\mathbb{F}$, then the group convolution of the vectors $\mathbf{u}$ and $\mathbf{v}$ can be computed more efficiently by this formula on the basis that the matrix $A$ has the convolution property. For example, the cyclic convolution of order $N$ for sequences of complex numbers can be efficiently computed using the formula based on the convolution property of the discrete Fourier transform. The fast Fourier transform (FFT) provides for the computation of the discrete Fourier transform using $O(N \log N)$ multiplications in the field of complex numbers. Thus the computation of the cyclic convolution using the fast Fourier transform, component-wise multiplication, and the inverse fast Fourier transform requires $O(N \log N)$ multiplications in the field of complex numbers.

VIII. FEATURE DETECTION USING GROUP ALGEBRAS
We have shown that there is a much larger class of convolutions than cyclic ones from which one can choose when analyzing signals or images. The effectiveness of an algorithm may depend not only on the type of input data, but also on the particular convolution and value function selected.
A. Value Functions

A value function is any function $v: \mathbb{F}[G] \to \mathbb{F}$. The group convolution of two elements in $\mathbb{F}[G]$ produces another element in the group algebra. For signal and image processing we need to produce an $\mathbb{F}$-valued response from the convolution. Value functions provide this response. The simplest such function, mentioned in Section II where standard convolution by sections was discussed, is the coefficient selection function. Given $\alpha = \sum_{i\in I_N} \alpha_i g_i \in \mathbb{F}[G]$, the selection function is defined by
$$\mathrm{select}_i(\alpha) = \alpha_i.$$
For a direct product group algebra $\mathbb{F}[G \odot H]$ and $\alpha = \sum_{i\in I_N}\sum_{j\in I_M} \alpha_{ij}(g_i, g_j)$, the selection function is
$$\mathrm{select}_{i,j}(\alpha) = \alpha_{ij}.$$
Other value functions defined on $\mathbb{F}[G]$ are $\mathrm{net}(\alpha) = \sum_{i\in I_N} \alpha_i$ (see Section VI) and $\mathrm{norm}(\alpha) = \sqrt{\sum_{i\in I_N} \alpha_i\bar{\alpha}_i}$, where $\bar{\alpha}_i$ is the conjugate of $\alpha_i$ if the field is the set of complex numbers, but just $\alpha_i$ otherwise.
B. Feature Detection
Many applications exist where one is interested in the acquisition and analysis of signals and images for the detection of certain features. This type of processing can be achieved by segmentation of the data into a description of the various features located in the scene. Surveys of techniques for detecting such features as edges, lines, curves, and spots can be found in Rosenfeld and Kak (1982). We discuss the ideas of feature detection using group algebras.

Typically, edge detectors are based on finite-difference schemes. The presence of an edge is determined by comparing the differences of pixel values for neighboring points. To illustrate, consider the edge detector (1, 0, −1) and a two-valued signal (A, ..., A, B, ..., B). After convolution, the filtered signal is (0, ..., 0, A − B, 0, ..., 0). Since A ≠ B, the filter (1, 0, −1) has located the edge between the values A and B. Similarly, consider the edge enhancer (−1, 2, −1) applied to the same signal. The filtered signal is (0, ..., 0, A − B, B − A, 0, ..., 0), so the edge has been detected and enhanced.

The characteristic that is common to the standard edge detectors and enhancers is that the sum of the filter weights is 0. For group algebras F[G], we define the filter f = Σ_{i∈I_N} f_i g_i to be an edge detector or edge enhancer if net(f) = Σ_{i∈I_N} f_i = 0. Note that the set I = {f ∈ F[G] : net(f) = 0} is a two-sided ideal of F[G]. The same type of definition can be constructed for higher-dimensional data sets where the underlying group is a direct product group. For example, an edge detector f = Σ_{i∈I_N} Σ_{j∈I_M} f_{ij}(g_i, h_j) ∈ F[G ⊗ H] satisfies the condition net(f) = Σ_{i∈I_N} Σ_{j∈I_M} f_{ij} = 0.
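The zero-net criterion is easy to observe numerically. In the standalone sketch below (illustrative only; filter3 is a hypothetical helper, not a GRASP routine), a 3-tap filter is slid across a two-valued signal; the detector (1, 0, −1) and the enhancer (−1, 2, −1) respond only near the step from A to B:

```cpp
#include <cstddef>
#include <vector>

// Slide a 3-tap filter across a signal (no wraparound); the output has
// signal.size() - 2 entries.  For a filter whose weights sum to 0, the
// response is zero on constant runs and nonzero only near an edge.
std::vector<int> filter3(const std::vector<int>& s, const int f[3]) {
    std::vector<int> r;
    for (std::size_t k = 0; k + 2 < s.size(); ++k)
        r.push_back(f[0]*s[k] + f[1]*s[k+1] + f[2]*s[k+2]);
    return r;
}
```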
C. Application to Signals
Denote the input signal as {s_i}_{i=0}^∞ and the filter as {f_j}_{j=0}^{J−1}. To compute a group convolution by sections, we need to choose the underlying group for R[G] and a value function u: R[G] → R to apply to the group convolution.

Example. Consider R[Z_3], where i ⊕ j = (i + j) mod 3 and the value function is select_0. The filter is f = 1 − x² ∈ R[Z_3]. The input is sectioned into blocks of three consecutive values which are treated as coefficients of elements in R[Z_3], say s^(k) = s_k + s_{k+1}x + s_{k+2}x² ∈ R[Z_3]. The output sequence {r_k}_{k=0}^∞ is defined by r_k = s_k − s_{k+1} = select_0(s^(k)∗f).
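This example can be checked with a standalone sketch (illustrative, not GRASP code): each section of three samples is convolved with f = 1 − x² using the index summation (i + j) mod 3, and coefficient 0 of the product is selected, which reproduces r_k = s_k − s_{k+1}:

```cpp
#include <array>
#include <cstddef>
#include <vector>

// Group convolution in R[Z_3]: (s*f)_m = sum of s_i f_j over i (+) j = m,
// where i (+) j = (i + j) mod 3.
std::array<double,3> convZ3(const std::array<double,3>& s,
                            const std::array<double,3>& f) {
    std::array<double,3> r{0.0, 0.0, 0.0};
    for (int i = 0; i < 3; ++i)
        for (int j = 0; j < 3; ++j)
            r[(i + j) % 3] += s[i] * f[j];
    return r;
}

// Convolution by sections: convolve each 3-sample block with f = 1 - x^2
// and select coefficient 0, giving r_k = s_k - s_{k+1}.
std::vector<double> sectionFilter(const std::vector<double>& s) {
    const std::array<double,3> f{1.0, 0.0, -1.0};   // 1 - x^2
    std::vector<double> r;
    for (std::size_t k = 0; k + 2 < s.size(); ++k)
        r.push_back(convZ3({s[k], s[k+1], s[k+2]}, f)[0]);
    return r;
}
```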
Example. Let 𝒟_3 be the dihedral group with six elements. Let the filter be f = 1 − τ ∈ R[𝒟_3]. The sections of input treated as elements of R[𝒟_3] are

s^(k) = s_k + s_{k+1}τ + s_{k+2}σ + s_{k+3}τσ + s_{k+4}σ² + s_{k+5}τσ²,

so that

s^(k)∗f = (s_k − s_{k+1}) + (s_{k+1} − s_k)τ + (s_{k+2} − s_{k+5})σ + (s_{k+3} − s_{k+4})τσ + (s_{k+4} − s_{k+3})σ² + (s_{k+5} − s_{k+2})τσ².

If we select the fifth coefficient of each group convolution to generate the output sequence {r_k}_{k=0}^∞, then r_k = select_5(s^(k)∗f) = s_{k+5} − s_{k+2}. In fact, the output sequence can be generated by choosing any function u: R[𝒟_3] → R and defining r_k = u(s^(k)∗f).
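The dihedral computation can be verified the same way. The standalone sketch below encodes the 𝒟_3 index summation (an odd right operand reverses the sign of the left index, the same rule used by the DihedralGroup class in Section X) and checks the claimed coefficients of s^(k)∗f:

```cpp
#include <array>

// Index summation for the dihedral group D_3 (six elements): for j even
// the result is (i + j) mod 6, for j odd it is (j - i) mod 6.
unsigned d3sum(unsigned i, unsigned j) {
    const int N = 6;
    int k = (j % 2 == 1) ? (int)j - (int)i : (int)j + (int)i;
    return (unsigned)(((k % N) + N) % N);
}

// Group convolution in R[D_3]: (s*f)_m = sum of s_i f_j over i (+) j = m.
std::array<double,6> convD3(const std::array<double,6>& s,
                            const std::array<double,6>& f) {
    std::array<double,6> r{};
    for (unsigned i = 0; i < 6; ++i)
        for (unsigned j = 0; j < 6; ++j)
            r[d3sum(i, j)] += s[i] * f[j];
    return r;
}
```

With f = 1 − τ (coefficients 1, −1, 0, 0, 0, 0), coefficient 5 of the product is s_{k+5} − s_{k+2}, as selected above.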
D. Application to Images

To illustrate the application to images, we analyze edge detection and enhancement of the standard images of Lena and Mandrill using a 7 × 7 Laplacian of a Gaussian filter which is embedded in an 8 × 8 template by padding the last row and last column with 0s. The images are filtered based on the group convolutions for R[Z_8 ⊗ Z_8], R[𝒟_4 ⊗ 𝒟_4], and R[𝒬 ⊗ 𝒬], where Z_8 is the cyclic group of order 8, 𝒟_4 is the dihedral group of order 8, and 𝒬 is the quaternion group of order 8. Each underlying group is of the form G ⊗ G = {(g_i, g_j) : i, j ∈ I_8}, so we will reference direct product group elements by (g_i, g_j). The digital image is represented as a two-dimensional sequence {s_{i1,i2}}_{i1=0,i2=0}^{N1−1,N2−1} of gray levels in the range 0 ≤ s_{i1,i2} ≤ 255. Similarly, the filter sequence is {f_{j1,j2}}_{j1=0,j2=0}^{7,7}. For all the examples, the same assignment of group elements to the 8 × 8 template is used.
The group element (g_i, g_j) is assigned to the template location in row i and column j. Given α = Σ_{i=0}^{7} Σ_{j=0}^{7} α_{ij}(g_i, g_j) ∈ R[G ⊗ G], it is represented by an 8 × 8 template whose (i, j) entry is the coefficient α_{ij}. The 8 × 8 filter for the Laplacian of a Gaussian, given as the group algebra element f = Σ_{j1=0}^{7} Σ_{j2=0}^{7} f_{j1,j2}(g_{j1}, g_{j2}), is represented by the corresponding template of weights. To perform a group convolution on a block of input data, we must also interpret the 8 × 8 subimages as group algebra elements. The subimage {s_{i1+j1, i2+j2} : j1, j2 ∈ I_8} is interpreted as

s^{(i1,i2)} = Σ_{j1=0}^{7} Σ_{j2=0}^{7} s_{i1+j1, i2+j2}(g_{j1}, g_{j2}).

The group convolution is s^{(i1,i2)} ∗ f.
The target point of the convolution of the subimage is chosen to be row 4, column 4. The value function u: R[G ⊗ G] → R is chosen to be select_{6,6}(α) = α_{6,6}.

A word of caution is needed. When performing the standard cyclic convolution, one visualizes the output as being obtained by superimposing the filter over the subimage, multiplying corresponding entries, and then adding up the results for a single output. This method works only when the group algebra is R[Z_{N1} ⊗ Z_{N2}] and the value function is select_{N1−1,N2−1}. For other groups or value functions, the same idea works only after a reordering of the filter values; such reordering depends on the underlying group and the selection function used.

Consider the general case where the group algebra is F[G ⊗ H], G has order N_1 and index summation ⊕_G, and H has order N_2 and index summation ⊕_H. The direct product index summation is denoted ⊕. Selection of the (m_1, m_2) coefficient of the group convolution is

select_{m1,m2}(s ∗ f) = Σ_{(i1,i2) ⊕ (j1,j2) = (m1,m2)} s_{i1,i2} f_{j1,j2}.

There are N_1 N_2 index pairs (i_1, i_2) and (j_1, j_2) such that

(i_1, i_2) ⊕ (j_1, j_2) = (i_1 ⊕_G j_1, i_2 ⊕_H j_2) = (m_1, m_2).

For each choice of (i_1, i_2), the value of (j_1, j_2) which satisfies this equation is

(j_1, j_2) = (i_1^{−1} ⊕_G m_1, i_2^{−1} ⊕_H m_2) = (i_1^{−1}, i_2^{−1}) ⊕ (m_1, m_2).

Thus, to pair up subimage template values with filter template values in the "usual" way, the filter weights need to be reordered according to the formula for (j_1, j_2). We are using select_{6,6}. The reordering for Z_8 ⊗ Z_8 is given by (j_1, j_2) = ((6 − i_1) mod 8, (6 − i_2) mod 8); in this special case of a filter that is symmetric about its center, the reordering does nothing, so the filter weights are unchanged.
[8 × 8 templates in the original: the Z_8 ⊗ Z_8 reordering of group-element pairs (g_i, g_j) and the corresponding filter weights.]
The reordering for 𝒬 ⊗ 𝒬 is computed the same way, with (j_1, j_2) = (i_1^{−1} ⊕ 6, i_2^{−1} ⊕ 6) evaluated with the quaternion index summation, and the reordered filter weights are shown as an 8 × 8 template in the original. [The reordered weights take the values 128 (×1), 40 (×4), 0 (×23), −1 (×8), −3 (×8), −6 (×4), −8 (×4), −16 (×8), and −18 (×4); as required for an edge detector, they sum to 0.]
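The quaternion reordering can be reproduced mechanically from the index-summation table for 𝒬 (the same table listed in group.cpp in Section X): for select_{6,6}, template position i is paired with filter index j = i⁻¹ ⊕ 6. A standalone sketch:

```cpp
// Quaternion group index-summation table (g0..g7 = +1, -1, +i, -i,
// +j, -j, +k, -k), as in the GRASP source of Section X.
static const unsigned Q[8][8] = {
    {0,1,2,3,4,5,6,7}, {1,0,3,2,5,4,7,6},
    {2,3,1,0,6,7,5,4}, {3,2,0,1,7,6,4,5},
    {4,5,7,6,1,0,2,3}, {5,4,6,7,0,1,3,2},
    {6,7,4,5,3,2,1,0}, {7,6,5,4,2,3,0,1},
};

// Inverse of element i: the unique j with i (+) j = 0.
unsigned qinv(unsigned i) {
    for (unsigned j = 0; j < 8; ++j)
        if (Q[i][j] == 0) return j;
    return 0;   // not reached for a valid group table
}

// Filter index paired with template position i when coefficient m of the
// group convolution is selected: j = i^{-1} (+) m.
unsigned reorder(unsigned i, unsigned m) {
    return Q[qinv(i)][m];
}
```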
Figures 3 through 6 show the Lena image and its corresponding filtered images, and Figures 7 through 10 show the Mandrill image and its corresponding filtered images. An open question is whether or not one can determine, in some quantitative sense, what group convolution and value function should be used to extract certain features in an image. The question is important even when the standard cyclic convolution is chosen: What value functions best detect the attributes of interest in an image? Moreover, given that two groups produce the same qualitative results, can one develop faster convolution algorithms for one group versus the other? We do not address these issues here, but leave them as research problems.
FIGURE 3. Original Lena image.

FIGURE 4. Z_8 ⊗ Z_8 filtered image.
FIGURE 5. 𝒟_4 ⊗ 𝒟_4 filtered image.

FIGURE 6. 𝒬 ⊗ 𝒬 filtered image.
FIGURE 7. Original Mandrill image.

FIGURE 8. Z_8 ⊗ Z_8 filtered image.
FIGURE 9. 𝒟_4 ⊗ 𝒟_4 filtered image.

FIGURE 10. 𝒬 ⊗ 𝒬 filtered image.
IX. THE GRASP CLASS LIBRARY

A C++ class library called GRoup Algebras for Signal Processing (GRASP) has been developed as a practical implementation of the arithmetic operations in a group algebra presented in this article. The GRASP class library has object definitions for the theoretical concepts of finite groups and the direct products of groups, group algebra rings, and matrices consisting of group algebra elements. This library has replaced the low-level group algebra routines of the complete GRASP system described by Wenzel (1988), which was developed to explore image transformations using convolutions based on a group algebra and to experiment with fast convolutions and matrix transforms based on various group algebras. Now, with the GRASP class library, these concepts can be easily integrated into any application program.

The GRASP class library is organized into three main types of object classes: groups, group algebra elements, and matrices of group algebra elements. All group algebra elements are constructed using elements from the field of real numbers and elements from the cyclic group Z_N, the dihedral group 𝒟_N, the quaternion group 𝒬, or any direct products of these groups. This section briefly describes each of these three types of object classes, including operations which can be performed on object instances.

A. Group Classes
The concept of a finite group is implemented through the abstract Group class. The basic operations are outlined as follows:

Constructor
Group (const enum Type type, const unsigned size)
    Constructor for the abstract Group class, which can be called only by a derived class. The enum Type can be one of the following: Group::ProductType for direct product groups, Group::CyclicType for cyclic groups, Group::DihedralType for dihedral groups, and Group::QuaternionType for quaternion groups. The finite group is defined to have the number of elements given by size, which must be larger than 1.

operator( )
virtual const unsigned operator()(const unsigned i, const unsigned j)
    Returns the result of the index summation i ⊕ j performed in the group. This is a pure virtual function, which must be supplied by derived classes.
operator[ ]
virtual const unsigned operator[](const unsigned i)
    Returns the index of the inverse of the specified group element, such that i ⊕ i⁻¹ = 0. Derived classes may supply more efficient computations of the inverse operator.

operator= =
const int operator==(const Group &group)
    Returns a Boolean status indicating whether the specified group is of the same enum Type and has the same number of elements.

size
const unsigned size(void)
    Returns the number of elements in the group.

The concept of a finite cyclic group is implemented through the CyclicGroup class, a finite dihedral group is implemented through the DihedralGroup class, and a quaternion group is implemented through the QuaternionGroup class. Each of these three classes is derived from the abstract Group class. The constructors for these three group classes are outlined as follows:

Constructors
CyclicGroup (const unsigned N)
    Derives a Group class of Group::CyclicType having N elements.
DihedralGroup (const unsigned N)
    Derives a Group class of Group::DihedralType having 2N elements.
QuaternionGroup (void)
    Derives a Group class of Group::QuaternionType having 8 elements.

Each of the CyclicGroup, DihedralGroup, and QuaternionGroup classes has its own implementation of the pure virtual index summation operator (operator( )) from the abstract Group class. The index summation operators for each of these three groups were given in Section III.A.

The concept of a direct product of groups is implemented through the ProductGroup class, which is also derived from the abstract Group class. The ProductGroup class can consist of any combination of the CyclicGroup, DihedralGroup, and QuaternionGroup classes, including any combination of sizes for the CyclicGroup and DihedralGroup classes. The operations of the ProductGroup class are outlined as follows:

Constructor
ProductGroup (Group *group0, ...)
    Constructs a direct product of groups, where each group is specified in the constructor by a pointer to an instance of the group. This constructor uses variable arguments, with the last group pointer being NULL. The direct product is taken in the order in which the group pointers are specified. At least one group must be specified; it is possible to construct a direct product group which consists of only a single group, and any of the pointers may point to a group which is itself a direct product group.

decompose
void decompose(unsigned &x, unsigned xindex[])
    The direct product group index x is decomposed into an index for each of the component groups. These component group indexes are stored in the xindex[] array, which must be allocated to store one index for each component group which comprises the direct product group. The formula for direct product group index decomposition is given in Section IV.A.

dim
const unsigned dim(void)
    Returns the number of component groups which comprise the direct product group.

operator( )
virtual const unsigned operator()(const unsigned i, const unsigned j)
    Returns the result of the index summation i ⊕ j, where i and j are both first decomposed into indexes of the component groups (using decompose( )), the index summation is performed on each corresponding decomposed index using the index summation operation of the associated group, and then the return index is reconstructed from the component group index summations (using reconstruct( )).

operator= =
const int operator==(const ProductGroup &pgroup)
    Returns a Boolean status indicating whether the specified pgroup is comprised of the same ordering of the same component groups.

reconstruct
const unsigned reconstruct(unsigned xindex[])
    A direct product group index is returned based on the reconstruction from the indexes in the xindex[] array for each of the component groups. The formula for direct product group index reconstruction is given in Section IV.A.
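The decompose and reconstruct operations are ordinary mixed-radix index arithmetic. The standalone sketch below illustrates the idea (it mirrors, but is not, the GRASP implementation; sizes[] plays the role of the component group orders, and x is passed by value rather than by reference):

```cpp
#include <cstddef>
#include <vector>

// Decompose a direct product group index into component group indexes.
std::vector<unsigned> decompose(unsigned x, const std::vector<unsigned>& sizes) {
    std::vector<unsigned> idx(sizes.size());
    for (std::size_t i = sizes.size(); i-- > 1; ) {
        idx[i] = x % sizes[i];
        x /= sizes[i];
    }
    idx[0] = x;
    return idx;
}

// Reconstruct the direct product group index from component indexes.
unsigned reconstruct(const std::vector<unsigned>& idx,
                     const std::vector<unsigned>& sizes) {
    unsigned x = idx[0];
    for (std::size_t i = 1; i < sizes.size(); ++i)
        x = x * sizes[i] + idx[i];
    return x;
}
```

For Z_4 ⊗ Z_5, index 13 decomposes to (2, 3) and reconstructs back to 13.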
B. Group Algebra Class
The concept of a group algebra is implemented through the GrpAlg class. Unlike the Group class, in which an instance was the entire group, an instance in the case of the GrpAlg class is simply a group algebra element. A group algebra element is comprised of an array of coefficients from the field of real numbers, where each coefficient is associated with an element from the underlying group. The basic operations in the GrpAlg class are outlined as follows:

Constructors
GrpAlg (ProductGroup *pgroup)
    Construct a group algebra element over the specified direct product group pointed to by pgroup. All the coefficients for the element are initialized to zero.
GrpAlg (const GrpAlg &x)
    Constructor by making a copy of the group algebra element x. The constructed group algebra element is over the same direct product group and also has the same coefficients as the element x.

inv
const int inv(GrpAlg *x)
    Attempts to compute the right inverse of the group algebra element. If the group algebra element is right-invertible, then the inverse element value is stored in x and the function returns a nonzero (true) value. Otherwise, if the group algebra element is not right-invertible, the contents of x are unchanged and the function returns a zero (false) value. The group algebra element and x must pass the GrpAlg::sametype( ) test. The method implemented for computing a right inverse of a group algebra element is given in Section VI.A.

net
const float net(void)
    Returns the net value of the group algebra element using the definition given in Section VI.A.

norm
const float norm(void)
    Returns the length value which results from viewing the coefficients of the group algebra element as a vector.

operator( )
float & operator()(const unsigned igroup0, ...)
    Accesses the coefficient of the particular group algebra element as referenced by the indexes for the component groups in the direct product. This routine uses variable arguments and expects the total number of arguments to equal the number of groups comprising the direct product group of the element.

operator[ ]
float & operator[](const unsigned i)
    Accesses the coefficient of the particular group algebra element as referenced by the index into the direct product group.

operator=
const GrpAlg operator=(const GrpAlg &x)
    Assignment of the coefficients from x to the group algebra element. The group algebra element and x must pass the GrpAlg::sametype( ) test.

operator= =
const int operator==(const GrpAlg &x)
    Returns a Boolean status indicating whether the coefficients of the group algebra element and those of x match (within some small tolerance). The group algebra element and x must pass the GrpAlg::sametype( ) test.

operator+
const GrpAlg operator+(const GrpAlg &x)
    Returns a group algebra element which is the sum of the group algebra element and x using the definition given in Section III.B. The group algebra element and x must pass the GrpAlg::sametype( ) test.

operator-
const GrpAlg operator-(void)
    Returns the additive inverse of the group algebra element.

operator*
const GrpAlg operator*(const GrpAlg &x)
    Returns a group algebra element which is the product (or group convolution) of the group algebra element and x. This computation is based on the definition given in Section III.B, where α corresponds to the group algebra element and β corresponds to x; the ordering matters. The group algebra element and x must pass the GrpAlg::sametype( ) test.
const GrpAlg operator*(const float s)
    Returns a group algebra element which is the result of multiplying the group algebra element by the scalar s.

operator/
const GrpAlg operator/(const float s)
    Returns a group algebra element which is the result of dividing the group algebra element by the scalar s.
operator+ =
const GrpAlg operator+=(const GrpAlg &x)
    Returns the group algebra element to which the group algebra element x has been added. The group algebra element and x must pass the GrpAlg::sametype( ) test.

sametype
const int sametype(const GrpAlg &x)
    Returns a Boolean status indicating whether the group algebra element and x are constructed over the same direct product group.

size
const unsigned size(void)
    Returns the number of elements in the direct product group over which the group algebra element is defined.

There are two simple classes which are derived from the GrpAlg class: the GrpAlg0 class and the GrpAlg1 class. Both of these derived classes have a single constructor which has the same format as the constructor for the GrpAlg class, i.e., a single pointer to a direct product group, and neither class adds any new operations. The GrpAlg0 class constructor simply initializes a specific group algebra element which is the additive identity, and the GrpAlg1 class constructor simply initializes a specific group algebra element which is the multiplicative identity.

C. Group Algebra Matrices Class
The concept of a matrix comprised of group algebra elements is implemented through the GrpAlgMat class. Only a limited set of matrix operations have been implemented, and these are outlined as follows:

Constructor
GrpAlgMat (unsigned nrows, unsigned ncols, ProductGroup *pgroup)
    Construct a matrix of dimensions nrows by ncols consisting of elements from the group algebra over the specified direct product group pointed to by pgroup. Each group algebra element in the matrix is constructed using the constructor function for the GrpAlg class.

convprop
const int convprop(void)
    Returns a Boolean status indicating whether the matrix of group algebra elements has the convolution property as defined by Theorem 1 in Section VII.B. The matrix must have the same number of rows and columns as the number of elements in the direct product group for the group algebra elements.

operator( )
GrpAlg & operator()(unsigned r, unsigned c)
    Accesses the group algebra element in the matrix at row r and column c.

operator=
const GrpAlgMat operator=(const GrpAlgMat &x)
    Assigns the group algebra elements from the matrix x to the matrix of group algebra elements. If the matrix of group algebra elements and the matrix x fail both the GrpAlgMat::sametype( ) and GrpAlgMat::sameshape( ) tests, then the matrix of group algebra elements is converted into one having the same type and shape as the matrix x.

operator+
const GrpAlgMat operator+(const GrpAlgMat &x)
    Returns a matrix of group algebra elements which is the sum of the matrix of group algebra elements and the matrix x. The two matrices must pass both the GrpAlgMat::sametype( ) and GrpAlgMat::sameshape( ) tests.

operator-
const GrpAlgMat operator-(void)
    Returns the additive inverse of the matrix of group algebra elements.

operator*
const GrpAlgMat operator*(const GrpAlgMat &x)
    Returns a matrix of group algebra elements which is the result of multiplying the matrix of group algebra elements by the matrix x. The number of columns in the matrix of group algebra elements must equal the number of rows in the matrix x, and the two matrices must pass both the GrpAlgMat::sametype( ) and GrpAlgMat::sameshape( ) tests.
const GrpAlgMat operator*(const GrpAlg &x)
    Returns a matrix of group algebra elements which is the result of multiplying each group algebra element in the matrix by the group algebra element x. The group algebra elements in the matrix and x must pass the GrpAlg::sametype( ) test.
const GrpAlgMat operator*(const float s)
    Returns a matrix of group algebra elements which is the result of multiplying each group algebra element in the matrix by the scalar s.
sameshape
const int sameshape(const GrpAlgMat &x)
    Returns a Boolean status indicating whether the matrix of group algebra elements and the matrix x have the same number of rows and columns.

sametype
const int sametype(const GrpAlgMat &x)
    Returns a Boolean status indicating whether the matrix of group algebra elements and the matrix x are constructed of group algebra elements over the same direct product group.
D. Example Usage

The following example program demonstrates the usage of the object classes from the GRASP class library, including some of their operations. An explanation of the code is included in the embedded comments.

#include <stdio.h>
#include <stdlib.h>
#include <stdarg.h>
#include "group.h"
#include "grpalg.h"
#include "gmatrix.h"

main (int argc, char **argv)
{
    // Construct a direct product of groups Z4 x Z5.
    ProductGroup Z4xZ5(&CyclicGroup(4), &CyclicGroup(5), NULL);

    // Create two group algebra elements over the group Z4 x Z5,
    // which has 20 elements.  Each element is initialized to
    // the additive identity element.
    GrpAlg a(&Z4xZ5);
    GrpAlg b(&Z4xZ5);

    // Set individual coefficients within each group algebra element.
    a(2,3) = 2;
    b(3,2) = 3;

    // A complex group algebra expression.
    a = a*-(b*4);

    // Attempt to compute inverse of group algebra element a and
    // store the result in group algebra element b.
    if (a.inv(&b)) {
        // Construct a group algebra element to store the product
        // of group algebra elements a and b.
        GrpAlg c(&Z4xZ5);
        c = a*b;

        // Compare the product result to the multiplicative
        // identity element for the same group algebra.  Print
        // a '1' if they are equal; a '0' otherwise.
        printf ("%d\n", (c == GrpAlg1(&Z4xZ5)));
    }

    // Construct a square matrix having number of rows and columns
    // equal to the number of elements in the group Z4 x Z5.
    GrpAlgMat m(Z4xZ5.size(), Z4xZ5.size(), &Z4xZ5);

    // Initialize the matrix to consist of the multiplicative
    // identity group algebra element along the diagonal and
    // the additive identity group algebra element elsewhere.
    for (unsigned r=0; r < Z4xZ5.size(); ++r) {
        for (unsigned c=0; c < Z4xZ5.size(); ++c) {
            if (r == c) m(r,c) = GrpAlg1(&Z4xZ5);
            else        m(r,c) = GrpAlg0(&Z4xZ5);
        }
    }

    // If the matrix has the convolution property, print a '1';
    // otherwise, print a '0'.
    printf ("%d\n", m.convprop());
}
X. GRASP CLASS LIBRARY SOURCE CODE

A. group.h

#ifndef __GROUP_H__
#define __GROUP_H__

class Group {        // abstract class
public:
    ~Group (void) {}        // destructor

    // Return number of elements in the group
    const unsigned size(void) {return N;}

    // Index summation operator - pure virtual function
    virtual const unsigned operator()(const unsigned i, const unsigned j) = 0;

    // Return the index of the inverse group element
    virtual const unsigned operator[](const unsigned i);

    // Are the groups identical?
    virtual const int operator==(const Group &group)
        {return (type==group.type) && (N==group.N);}

protected:
    // Distinguish between group types
    enum Type {ProductType, CyclicType, DihedralType, QuaternionType} type;

    unsigned N;        // number of group elements

    // Constructor
    Group (const enum Type settype, const unsigned size);
};

class CyclicGroup : public Group {
public:
    // Constructor and destructor
    CyclicGroup (const unsigned N) : Group(Group::CyclicType,N) {}
    ~CyclicGroup (void) {}

    // Index summation operator
    const unsigned operator()(const unsigned i, const unsigned j);

    // Return the index of the inverse group element
    const unsigned operator[](const unsigned i) {return (N-i)%N;}
};

class DihedralGroup : public Group {
public:
    // Constructor and destructor
    DihedralGroup (const unsigned N) : Group(Group::DihedralType,2*N) {}
    ~DihedralGroup (void) {}

    // Index summation operator
    const unsigned operator()(const unsigned i, const unsigned j);
};

class QuaternionGroup : public Group {
public:
    // Constructor and destructor
    QuaternionGroup (void) : Group(Group::QuaternionType,8) {}
    ~QuaternionGroup (void) {}

    // Index summation operator
    const unsigned operator()(const unsigned i, const unsigned j);
};

class ProductGroup : public Group {
public:
    // Constructor and destructor
    ProductGroup (Group *g0, ...);        // last must be 0
    ~ProductGroup (void);

    // Index summation operator
    const unsigned operator()(const unsigned i, const unsigned j);

    // Are the groups identical?
    const int operator==(const ProductGroup &group);

    // Return number of groups in the direct product
    const unsigned dim(void) {return ngroups;}

    // Decompose index of product group into index of component groups
    void decompose(unsigned &x, unsigned xindex[]);

    // Reconstruct product group index from that of component group indexes
    const unsigned reconstruct(unsigned xindex[]);

private:
    unsigned ngroups;        // number of groups
    Group    **group;        // array of ptrs to groups in product
};

#endif
B. group.cpp

#include <stdlib.h>
#include "group.h"
#include "lib.h"

// Constructor for a group
Group::Group (const enum Type settype, const unsigned size)
{
    // Make sure the size is larger than 1
    if (size < 2) {
        error ("Group constructor: size must be larger than 1");
    }

    // fill in these elements
    type = settype;
    N = size;
}

// Index of the inverse of the group element
const unsigned Group::operator[](const unsigned i)
{
    for (unsigned j=0; j < N; ++j) {
        if (0 == (*this)(i,j)) return j;
    }
}

// Index summation operator for Cyclic group
const unsigned CyclicGroup::operator()(const unsigned i, const unsigned j)
{
    return (i+j)%N;
}

// Index summation operator for Dihedral group
const unsigned DihedralGroup::operator()(const unsigned i, const unsigned j)
{
    return (N + j + ((j&1) ? -i : i)) % N;
}

static unsigned QuaternionIndexSum[8][8] = {
    {0, 1, 2, 3, 4, 5, 6, 7},    // g0 = +1
    {1, 0, 3, 2, 5, 4, 7, 6},    // g1 = -1
    {2, 3, 1, 0, 6, 7, 5, 4},    // g2 = +i
    {3, 2, 0, 1, 7, 6, 4, 5},    // g3 = -i
    {4, 5, 7, 6, 1, 0, 2, 3},    // g4 = +j
    {5, 4, 6, 7, 0, 1, 3, 2},    // g5 = -j
    {6, 7, 4, 5, 3, 2, 1, 0},    // g6 = +k
    {7, 6, 5, 4, 2, 3, 0, 1},    // g7 = -k
};

// Index summation operator for Quaternion group
const unsigned QuaternionGroup::operator()(const unsigned i, const unsigned j)
{
    return QuaternionIndexSum[i][j];
}

static unsigned GroupListSize (Group **GroupList)
{
    // Initialize for computing the total number of elements in the
    // direct product of the groups.
    unsigned N = 1;

    // Multiply the size of each group in the list
    for (unsigned i=0; NULL != GroupList[i]; ++i) {
        N *= GroupList[i]->size();
    }

    return N;
}

// Constructor for the direct product of groups
ProductGroup::ProductGroup (Group *g0, ...)
    : Group(Group::ProductType,GroupListSize(&g0))
{
    Group **GroupList = &g0;

    // Count the number of groups in the list
    for (ngroups=0; NULL != GroupList[ngroups]; ++ngroups);

    group = new Group*[ngroups];
    for (unsigned i=0; i < ngroups; ++i) group[i] = GroupList[i];
}

// Destructor for the direct product of groups
ProductGroup::~ProductGroup (void)
{
    delete group;
}

// Are the direct product groups identical?
const int ProductGroup::operator==(const ProductGroup &pgroup)
{
    // First of all, are the number of groups the same?
    if (ngroups != pgroup.ngroups) return 0;

    // Verify that each of the component groups in the direct product
    // match - in order
    for (unsigned i=0; i < ngroups; ++i) {
        if (!(*group[i] == *pgroup.group[i])) return 0;
    }

    return 1;    // direct product of groups is identical
}

// Index summation operator for direct product of groups
const unsigned ProductGroup::operator()(const unsigned i, const unsigned j)
{
    // Allocate buffers for the decomposition of the product group
    // indexes into indexes of component groups
    unsigned *xindex = new unsigned[ngroups];
    unsigned *yindex = new unsigned[ngroups];

    // Decompose each product group index into indexes of component groups
    decompose (i, xindex);
    decompose (j, yindex);

    // Compute the index summation in the component groups
    for (unsigned k=0; k < ngroups; ++k) {
        xindex[k] = (*group[k])(xindex[k],yindex[k]);
    }

    // Construct product group index from a list of component group indexes
    unsigned r = reconstruct (xindex);

    delete xindex;    // done with temporary arrays
    delete yindex;

    return r;
}

// Decompose each product group index into indexes of component groups
void ProductGroup::decompose(unsigned &x, unsigned *xindex)
{
    for (unsigned i=ngroups-1; i > 0; --i) {
        xindex[i] = x % group[i]->size();
        x /= group[i]->size();
    }
    xindex[0] = x;
}

// Construct product group index from a list of component group indexes
const unsigned ProductGroup::reconstruct(unsigned *xindex)
{
    unsigned x = xindex[0];
    for (unsigned i=1; i < ngroups; ++i) {
        x = x * group[i]->size() + xindex[i];
    }
    return x;
}
C. grpalg.h

#ifndef __GROUP_ALGEBRA_H__
#define __GROUP_ALGEBRA_H__

#include "group.h"

class GrpAlg {
public:
    // constructors and destructors
    GrpAlg (ProductGroup *pgroup);                // main constructor
    GrpAlg (const GrpAlg &x);                     // copy
    ~GrpAlg (void);                               // destructor

    // Access routines
    GrpAlg & operator=(const GrpAlg &x);                  // assign
    float & operator[](const unsigned i) {return c[i];}   // index
    float & operator()(const unsigned ig0, ...);          // DG index
    const unsigned size(void) {return N;}                 // groups size
    const int sametype(const GrpAlg &x);                  // same GrpAlg?
    const int operator==(const GrpAlg &x);                // equal element

    // Addition, convolution, scalar multiply operations on elements
    const GrpAlg operator+(const GrpAlg &x);      // add
    const GrpAlg operator*(const GrpAlg &x);      // convolve
    const GrpAlg operator*(const float s);        // scalar mult
    const GrpAlg operator/(const float s);        // scalar divide
    const GrpAlg operator-(void);                 // unary negate
    const GrpAlg operator+=(const GrpAlg &x);     // accumulate

    // Other operations on group algebra elements
    const float net(void);                        // sum coeffs
    const float norm(void);                       // vector length
    const int inv(GrpAlg *x);                     // mult inverse?

protected:
    unsigned     N;        // number of elements in group algebra
    float        *c;       // array of coefficients
    ProductGroup *pgroup;  // ptr to the direct product of groups
};

class GrpAlg0 : public GrpAlg {    // additive identity element
public:
    GrpAlg0 (ProductGroup *pgroup) : GrpAlg(pgroup) {}
    ~GrpAlg0 (void) {}
};

class GrpAlg1 : public GrpAlg {    // multiplicative identity element
public:
    GrpAlg1 (ProductGroup *pgroup) : GrpAlg(pgroup) {c[0]=1;}
    ~GrpAlg1 (void) {}
};

#endif
D. grpalg.cpp

#include <stdlib.h>
#include <stdarg.h>
#include <math.h>
#include <float.h>
#include "grpalg.h"
#include "lib.h"

// Constructor for a group algebra over a direct product of groups
GrpAlg::GrpAlg (ProductGroup *set_pgroup)
{
    // Make sure the product group is valid
    if (NULL == set_pgroup) {
        error ("GrpAlg constructor: invalid product group");
    }
    pgroup = set_pgroup;
    N = pgroup->size();
    c = new float[N];               // array of coefficients

    // initialize to the zero element
    for (unsigned i=0; i < N; ++i) c[i] = 0.0;
}

// Copy constructor
GrpAlg::GrpAlg (const GrpAlg & x)
{
    pgroup = x.pgroup;
    N = x.N;
    c = new float[N];
    for (unsigned j=0; j < N; ++j) c[j] = x.c[j];
}

// Destructor
GrpAlg::~GrpAlg (void)
{
    if (c) delete [] c;
}

// Given the index values of the groups in the direct product, compute
// the index value in the isomorphic direct product group
float & GrpAlg::operator()(const unsigned ig0, ...)
{
    va_list gindexes;               // variable group index arguments
    va_start (gindexes,ig0);        // initialize variable argument processing

    // allocate buffer for indexes in component groups
    unsigned *index = new unsigned[pgroup->dim()];
    index[0] = ig0;                 // start off with the first index

    // leading index-major ordering
    for (int i=1; i < pgroup->dim(); ++i) {
        index[i] = va_arg(gindexes,unsigned);
    }
    va_end(gindexes);               // done with variable arguments

    // convert indexes in component groups to index in product group
    unsigned dindex = pgroup->reconstruct(index);
    delete [] index;                // done with temporary buffer
    return c[dindex];               // return element from direct product group
}

// Are the two elements from the same group algebra?
const int GrpAlg::sametype(const GrpAlg & x)
{
    return (*pgroup == *(x.pgroup));
}

// Assignment operator
GrpAlg & GrpAlg::operator=(const GrpAlg & x)
{
    // Error if the types do not match
    if (!sametype(x)) {
        error ("GrpAlg::operator=: types do not match");
    }
    // Then just copy over the new coefficients
    for (unsigned j=0; j < N; ++j) c[j] = x.c[j];
    return *this;
}

// Do the two elements have the same values from the same group algebra?
const int GrpAlg::operator==(const GrpAlg & x)
{
    // Error if the types do not match
    if (!sametype(x)) {
        error ("GrpAlg::operator==: types do not match");
    }
    // Stop when a value is different
    for (unsigned i=0; i < N; ++i) {
        if (fabs(c[i]-x.c[i]) > FLT_EPSILON) return 0;
    }
    return 1;                       // element values are the same
}

// Group algebra addition of elements
const GrpAlg GrpAlg::operator+(const GrpAlg & x)
{
    // Error if the types do not match
    if (!sametype(x)) {
        error ("GrpAlg::operator+: types do not match");
    }
    GrpAlg r(x);                    // make a copy of x
    for (unsigned i=0; i < N; ++i) r.c[i] += c[i];
    return r;
}

// Group algebra additive accumulation
const GrpAlg GrpAlg::operator+=(const GrpAlg & x)
{
    // Error if the types do not match
    if (!sametype(x)) {
        error ("GrpAlg::operator+=: types do not match");
    }
    for (unsigned i=0; i < N; ++i) c[i] += x.c[i];
    return *this;
}

// Group algebra multiplication of an element by a scalar
const GrpAlg GrpAlg::operator*(const float s)
{
    GrpAlg r(*this);                // make a copy of this
    for (unsigned i=0; i < N; ++i) r.c[i] *= s;
    return r;
}

// Group algebra division of an element by a scalar
const GrpAlg GrpAlg::operator/(const float s)
{
    GrpAlg r(*this);                // make a copy of this
    for (unsigned i=0; i < N; ++i) r.c[i] /= s;
    return r;
}

// Additive inverse of a group algebra element
const GrpAlg GrpAlg::operator-(void)
{
    GrpAlg r(*this);                // make a copy of this
    for (unsigned i=0; i < N; ++i) r.c[i] = -r.c[i];
    return r;
}

// Group algebra multiplication (convolution) of elements
const GrpAlg GrpAlg::operator*(const GrpAlg & x)
{
    // Error if the types do not match
    if (!sametype(x)) {
        error ("GrpAlg::operator*: types do not match");
    }
    GrpAlg r(pgroup);               // zero element for computing convolution

    for (unsigned i=0; i < N; ++i) {
        for (unsigned j=0; j < N; ++j) {
            unsigned k = (*pgroup)(i,j);
            r[k] += c[i] * x.c[j];
        }
    }
    return r;
}

// Compute the multiplicative inverse of a group algebra element
const int GrpAlg::inv(GrpAlg *x)
{
    // Error if the types do not match
    if (!sametype(*x)) {
        error ("GrpAlg::inv: types do not match");
    }
    unsigned i, j;                  // loop indexes

    // Get the group algebra multiplicative identity element
    GrpAlg1 one(pgroup);

    // Establish a two-dimensional array for Gauss-Jordan elimination
    float **a = new float*[N];
    for (i=0; i < N; ++i) a[i] = new float[N+1];

    // Fill in the left NxN matrix with the group algebra element
    // represented as a linear transform.
    for (i=0; i < N; ++i) {
        for (j=0; j < N; ++j) {
            // This works out to be i + j^(-1), where
            // '+' is the index sum operation
            a[i][j] = c[(*pgroup)(i,(*pgroup)[j])];
        }
    }

    // Fill in the last column vector with the elements from
    // the group algebra multiplicative identity element
    for (i=0; i < N; ++i) a[i][N] = one.c[i];

    // Perform the Gauss-Jordan elimination, which returns the rank
    // of the matrix; the rank is less than N if the left NxN matrix
    // is linearly dependent.
    int is_inverse = (N == GaussJordan(a,N,N+1));

    // If the element is invertible, then copy the coefficients
    // of the inverse element.
    if (is_inverse) {
        for (i=0; i < N; ++i) x->c[i] = a[i][N];
    }

    // Release dynamically allocated buffer memory
    for (i=0; i < N; ++i) delete [] a[i];
    delete [] a;

    return is_inverse;              // return boolean status
}

// Compute the net of a group algebra element
const float GrpAlg::net(void)
{
    float sum=0.0;                  // for computing sum of coefficients
    for (unsigned i=0; i < N; ++i) sum += c[i];
    return sum;
}

// Compute the norm of a group algebra element
const float GrpAlg::norm(void)
{
    float sum=0.0;                  // for computing sum of squares of coefficients
    for (unsigned i=0; i < N; ++i) sum += c[i]*c[i];
    return sqrt(sum);
}
E. gmatrix.h

#ifndef __GROUP_ALGEBRA_MATRIX_H__
#define __GROUP_ALGEBRA_MATRIX_H__

#include "grpalg.h"

class GrpAlgMat
{
public:
    // constructors and destructors
    GrpAlgMat (unsigned nrows, unsigned ncols, ProductGroup *pgroup);
    ~GrpAlgMat (void);                                   // destructor

    // Access routines
    const int sametype(const GrpAlgMat & x);             // matrix of same type?
    const int sameshape(const GrpAlgMat & x);            // same matrix shape?
    GrpAlgMat & operator=(const GrpAlgMat & x);          // assign
    GrpAlg & operator()(unsigned r, unsigned c) {return *m[r][c];}

    // Matrix addition, matrix multiplication, group algebra-scalar
    // multiplication, scalar multiplication, unary negate
    const GrpAlgMat operator+(const GrpAlgMat & x);      // addition
    const GrpAlgMat operator*(const GrpAlgMat & x);      // mult by matrix
    const GrpAlgMat operator*(const GrpAlg & x);         // mult by GrpAlg
    const GrpAlgMat operator*(const float s);            // mult by scalar
    const GrpAlgMat operator-(void);                     // unary negate

    // Does matrix have the convolution property?
    const int convprop(void);

protected:
    unsigned     nrows;   // number of rows in matrix
    unsigned     ncols;   // number of columns in matrix
    GrpAlg       ***m;    // group algebra elements in the matrix
    ProductGroup *pgroup; // ptr to the direct product of groups
};

#endif
F. gmatrix.cpp

#include <stdlib.h>
#include <stdarg.h>
#include <math.h>
#include "gmatrix.h"
#include "lib.h"

// Constructor for a matrix of group algebra elements
GrpAlgMat::GrpAlgMat (unsigned set_nrows, unsigned set_ncols,
                      ProductGroup *set_pgroup)
{
    // Make sure the number of rows and columns are not zero
    if (0 == set_nrows || 0 == set_ncols) {
        error ("GrpAlgMat constructor: nrows/ncols must be positive");
    // Make sure the product group pointer is not NULL
    } else if (NULL == set_pgroup) {
        error ("GrpAlgMat constructor: invalid product group");
    }
    nrows = set_nrows;
    ncols = set_ncols;
    pgroup = set_pgroup;

    unsigned r,c;                   // loop indexes

    // Allocate array of row pointers
    m = new GrpAlg**[nrows];
    for (r=0; r < nrows; ++r) {
        // For each row, allocate array of GrpAlg pointers
        m[r] = new GrpAlg*[ncols];

        // Associate the proper group with the group algebra
        for (c=0; c < ncols; ++c) m[r][c] = new GrpAlg(pgroup);
    }
}

// Destructor
GrpAlgMat::~GrpAlgMat (void)
{
    for (unsigned r=0; r < nrows; ++r) {
        for (unsigned c=0; c < ncols; ++c) delete m[r][c];
        delete [] m[r];
    }
    delete [] m;
}

// Are the two matrices of the same type (product groups must be the same)?
const int GrpAlgMat::sametype(const GrpAlgMat & x)
{
    return (*pgroup == *(x.pgroup));
}

// Are the two matrices of the same shape?
const int GrpAlgMat::sameshape(const GrpAlgMat & x)
{
    return (nrows == x.nrows) && (ncols == x.ncols);
}

// Assignment operator
GrpAlgMat & GrpAlgMat::operator=(const GrpAlgMat & x)
{
    // If the matrices are not of the same type or shape, then destroy
    // the previous elements and reallocate to match x
    if (!sametype(x) || !sameshape(x)) {
        unsigned r,c;               // loop indexes

        for (r=0; r < nrows; ++r) {
            for (c=0; c < ncols; ++c) delete m[r][c];
            delete [] m[r];
        }
        delete [] m;

        nrows = x.nrows;
        ncols = x.ncols;
        pgroup = x.pgroup;

        // Allocate array of row pointers
        m = new GrpAlg**[nrows];
        for (r=0; r < nrows; ++r) {
            // For each row, allocate array of GrpAlg pointers
            m[r] = new GrpAlg*[ncols];

            // Associate the proper group with the group algebra
            for (c=0; c < ncols; ++c) m[r][c] = new GrpAlg(pgroup);
        }
    }

    // Then just copy over the new coefficients
    for (unsigned r=0; r < nrows; ++r) {
        for (unsigned c=0; c < ncols; ++c) {
            *m[r][c] = x(r,c);
        }
    }
    return *this;
}

// Addition of matrices of group algebra elements
const GrpAlgMat GrpAlgMat::operator+(const GrpAlgMat & x)
{
    // Error if the types or shapes do not match
    if (!sametype(x) || !sameshape(x)) {
        error ("GrpAlgMat::operator+: types do not match");
    }
    GrpAlgMat y(nrows,ncols,pgroup);
    for (unsigned r=0; r < nrows; ++r) {
        for (unsigned c=0; c < ncols; ++c) {
            y(r,c) = *m[r][c] + x(r,c);
        }
    }
    return y;
}

// Multiplication of a matrix of group algebra elements by another
// group algebra element
const GrpAlgMat GrpAlgMat::operator*(const GrpAlg & x)
{
    // Group algebra elements must be of the same type
    if (!m[0][0]->sametype(x)) {
        error ("GrpAlgMat::operator*: groups of different types");
    }
    GrpAlgMat y(nrows,ncols,pgroup);
    for (unsigned r=0; r < nrows; ++r) {
        for (unsigned c=0; c < ncols; ++c) y(r,c) = *m[r][c] * x;
    }
    return y;
}

// Multiplication of a matrix of group algebra elements by a scalar
const GrpAlgMat GrpAlgMat::operator*(const float s)
{
    GrpAlgMat y(nrows,ncols,pgroup);
    for (unsigned r=0; r < nrows; ++r) {
        for (unsigned c=0; c < ncols; ++c) y(r,c) = *m[r][c] * s;
    }
    return y;
}

// Unary negation of a matrix of group algebra elements
const GrpAlgMat GrpAlgMat::operator-(void)
{
    GrpAlgMat y(nrows,ncols,pgroup);
    for (unsigned r=0; r < nrows; ++r) {
        for (unsigned c=0; c < ncols; ++c) y(r,c) = -*m[r][c];
    }
    return y;
}

// Compute the matrix multiplication
const GrpAlgMat GrpAlgMat::operator*(const GrpAlgMat & x)
{
    // Make sure the matrices are of the same type
    if (!sametype(x)) {
        error ("GrpAlgMat::operator*: matrices of different types");
    // Make sure the dimensions align
    } else if (ncols != x.nrows) {
        error ("GrpAlgMat::operator*: matrices do not align");
    }

    // Create a matrix for the resulting multiplication
    GrpAlgMat y(nrows,x.ncols,pgroup);

    for (unsigned r=0; r < y.nrows; ++r) {
        for (unsigned c=0; c < y.ncols; ++c) {
            // Initialize to the additive identity
            y(r,c) = GrpAlg0(pgroup);

            // ncols is the same as x.nrows
            for (unsigned k=0; k < ncols; ++k) {
                y(r,c) += *m[r][k] * x(k,c);
            }
        }
    }
    return y;
}

// Does the matrix have the convolution property?
const int GrpAlgMat::convprop(void)
{
    // First of all the matrix must be square
    if (nrows != ncols) return 0;

    // And the matrix must have as many elements as the group
    if (pgroup->size() != nrows) return 0;

    // Check that the condition a(i,m) * a(i,n) == a(i,m+n) is true
    // for all elements of the group algebra element matrix, where
    // a(i,j) is a group algebra element and '+' is the index
    // summation operation on the associated group.
    for (unsigned r=0; r < nrows; ++r) {
        for (unsigned c=0; c < ncols; ++c) {
            for (unsigned k=0; k < ncols; ++k) {
                unsigned index = (*pgroup)(c,k);
                if (!((*m[r][c] * *m[r][k]) == *m[r][index])) return 0;
            }
        }
    }
    return 1;                       // matrix does have the convolution property
}
ADVANCES IN IMAGING AND ELECTRON PHYSICS, VOL. 94
Mirror Electron Microscopy*

REINHOLD GODEHARDT
Department of Materials Science, Martin Luther University Halle-Wittenberg, Merseburg, Germany
I. Introduction
II. Principles of Mirror Electron Microscopy
    A. Constructional Principles
    B. A Brief Instrument Manufacturing History
    C. Principles of Image Formation
III. Applications of the Mirror Electron Microscope
IV. Fundamentals of Quantitative Mirror Electron Microscopy
    A. Influence of the Electron Mirror System
    B. Contrast-Producing Interaction with Distortion Fields
V. Quantitative Contrast Interpretation of One-Dimensional Electrical Distortion Fields
    A. Contrast Estimation of One-Dimensional Electrical Distortion Fields
    B. Reconstruction of One-Dimensional Electrical Distortion Fields
VI. Quantitative Contrast Interpretation of One-Dimensional Magnetic Distortion Fields
    A. Contrast Estimation of One-Dimensional Magnetic Distortion Fields
    B. Reconstruction of One-Dimensional Magnetic Distortion Fields
VII. Quantitative Mirror Electron Microscopy, Example 1: Contrast Simulation of Magnetic Fringing Fields of Recorded Periodic Flux Reversals
VIII. Quantitative Mirror Electron Microscopy, Example 2: Investigation of Electrical Microfields of Rotational Symmetry and Evaluation of Current Filaments in Metal-Insulator-Semiconductor Samples
    A. Motivation and Aim of the Investigation
    B. Relations Between Current Flow and Surface Potential
    C. Potential Distribution of the Electrical Microfield Above the Specimen Surface
    D. Computer Simulation of Observed Image Structures
    E. Applications and Results
IX. Concluding Remarks
References
I. INTRODUCTION

* Dedicated to Professor Dr. rer. nat. habil. Johannes Heydenreich on the occasion of his 65th birthday.

Mirror electron microscopy (mirror EM) provides a means of examining the surface topography and electrical and magnetic microfields on the surface of bulk specimens, without direct interaction between the imaging electron
beam and the specimen. In general, the mirror specimen is not affected by the electron beam because the specimen, acting as an electron optical mirror, is biased slightly negatively with respect to the source of electrons, i.e., with respect to the filament of the electron gun. The electrons approaching the specimen surface are slowed down to zero axial velocity shortly before reaching it and then recede without affecting the specimen under investigation. Any field inhomogeneity originating from the specimen, which can deflect the approaching and receding electrons, will modify the returning electron beam and produce corresponding image contrasts when the electron beam strikes the luminescent screen. The low electron velocity near the point of reversal results in a high sensitivity of the mirror electron microscope (MEM). The high sensitivity and the possibility of investigating the specimen surface without any appreciable electron bombardment are the main advantages of the MEM. Owing to the application of the MEM as a research tool for more than five decades, there are a lot of publications discussing the possibilities and limitations of various investigation techniques. In his review, Oman (1969) summarized results of the first period of mirror EM use. Relying on these results and taking into account the usually applied shadow imaging technique, Oman also tried to present a unified theory of mirror EM including the observation of topography, voltage profiles, and magnetic fields at surfaces. Bok et al. (1971) concentrated their attention on the other branch of mirror EM, presenting both theoretical and experimental results of mirror EM with focused images, including the detailed description of the design of their rather complicated MEM. The basic theory of image contrast formation valid for weak electrical and magnetic microfields was discussed in detail in a review by Lukjanov et al. (1973). 
The authors derived analytical expressions describing the relations among the position-dependent electrical field strength at the specimen surface, the shifts of the electron trajectories in the viewing plane, and the resulting image contrast. The purpose of this article is twofold. On the one hand, reviews with many references will recall the different applications and investigation techniques of the rather old method of mirror EM. On the other hand, we present variants of quantitative MEM investigations which are of particular interest. Thus, in order to demonstrate the possibilities of quantitative mirror EM, in Section VIII the solution to a special task is comprehensively described. This is the investigation of current filaments in metal-insulator-semiconductor (MIS) sandwich systems. Local variations of the surface potential and electrical microfields of nearly rotational symmetry are caused by the filamentary conduction. The filament current can be evaluated by quantitative contrast interpretation of the corresponding image structures observed by means of MEM.
II. PRINCIPLES OF MIRROR ELECTRON MICROSCOPY
The electron mirror system (EMS), with the specimen itself acting as the mirror electrode, is the characteristic part of MEM. The imaging electrons entering the EMS in the form of a parallel beam or a focused beam are forced to return just in front of the specimen surface under investigation. The specimen information can be transferred onto the imaging electron beam indirectly via the shifts of the electron trajectories caused by field disturbances originating from the specimen. The distances of the points of electron reversal from the specimen surface can be varied by changing the negative bias voltage of the specimen with respect to the cathode of the electron gun. Of course, according to the attenuation of the field disturbances caused by the specimen, the variation of the bias voltage strongly influences the resulting electron deflections, and finally the image contrasts. The field disturbances may be caused by the geometric relief of the specimen surface (the surface itself is an equipotential plane) as well as by electrical field inhomogeneities at the surface. Magnetic surface fields can also produce image contrast, but usually only in marginal regions of the image, as only a sufficiently large radial component of the electron velocity results in net deflections of the electrons.

A. Constructional Principles
Two types of instruments are predominantly used in mirror EM, as the schematic representation after Bethge and Heydenreich (1987) in Fig. 1 shows. On the left the figure displays the principle of the so-called straightforward MEM. In this kind of MEM the axis of the illumination electron beam coincides with that of the electron beam reflected in front of the specimen surface. Therefore, a small hole in the center of the luminescent screen is necessary to illuminate the mirror specimen by a narrow electron beam coming from the electron gun and optionally focused by a beam-forming system. Of course, the tilted optical mirror, too, used for observation of the screen, is passed by the electron beam through a central hole. The diverging lens of the EMS, consisting of the negatively biased specimen and a grounded aperture diaphragm in front of it, effects a beam broadening, which determines the magnification of the image on the screen. The luminescent screen is uniformly irradiated by electrons reflected in front of a plane mirror electrode without any disturbances, whereas there will be an electron redistribution giving rise to the image contrast on the viewing screen when the imaging electrons are deflected by field inhomogeneities originating from the specimen. On the right Fig. 1 illustrates a MEM with a separated beam system. The illumination beam and the beam carrying the specimen information are separated by a magnetic prism, so that they can be handled individually. This type of MEM is advantageous because of the possibility of a postmagnification by a projector lens, and the omission of the central hole in the luminescent screen.

FIGURE 1. Principles of the MEM: (a) "straightforward" arrangement; (b) equipped with a magnetic prism (deflection field). (From Bethge and Heydenreich, 1987; courtesy of J. Heydenreich.)

As an example of a MEM with a magnetic prism, a device with a deflection angle of 20°, described in detail by Heydenreich (1970) and Bethge and Heydenreich (1971a, 1971b), is shown in a sectional view in Fig. 2. Apart from some slight changes and attachments, this device was also used for carrying out the MEM investigations described in Sections VII and VIII. The main parts of the MEM (magnetic prism, illumination system with electron source, and specimen chamber) are mounted on top of a commercial recording chamber. The part with the illumination system is fixed at the side of the magnetic prism column. Starting from the electron gun (G), the electron beam is aligned by means of an adjustment unit (A) and is optionally focused by a condenser lens (C). The removable screen (S1), the auxiliary screen (S2), and the optical mirror (M), which are used only as adjustment aids, can be observed through a viewing window (V). The form of the pole pieces of the magnetic prism, designed after a proposal of Archard and Mulvey (1958) to reduce astigmatism, is shown in the sectional view. The specimen is on an insulated specimen holder (S), which is mounted on a rod (R) so that it can be moved in the plane perpendicular to the axis of the incident electron beam. The specimen is transferred through the front opening (F) of the specimen chamber.

FIGURE 2. Sectional view of a MEM with a magnetic prism; the labeled main parts are the specimen chamber, the magnetic prism, the electron source, and the recording chamber. (From Bethge and Heydenreich, 1971a; courtesy of J. Heydenreich.)

The intermediate part, between the specimen chamber and the magnetic prism column, is interchangeable. Usually, if the MEM is working in the shadow imaging mode, it contains only the grounded aperture diaphragm (H) of the selected aperture diameter. The distance between the aperture and the specimen surface can be varied continuously up to about 10 mm. Focused mirror imaging is possible with an additional objective lens (L), as shown in Fig. 2. With the aid of the projector lens (P), the magnification determined by the electron optical parameters of the EMS and by the tube length of the MEM can also be increased. The microscope column is connected with a high-vacuum pump through two openings (O1, O2). The high voltage of the MEM can be varied over a wide range, but values between 6 and 12 kV have preferably been used. The maximum magnification attainable lies in the range of 3000×.

B. A Brief Instrument Manufacturing History
Historical surveys of mirror EM are given in reviews by Oman (1969), Lukjanov et al. (1973), and Bethge and Heydenreich (1987). Recently, Griffith and Engel (1991) presented a very interesting and informative tracing of the development of instruments utilizing a cathode lens (immersion lens), including the MEM as well as the emission electron microscope (EEM) and the low-energy electron microscope (LEEM). They noted that the origins of mirror EM can be traced back to the early days of electron optics in Germany. Calculations suggesting the feasibility of operating a cathode lens in a mirror mode were reported by Henneberg and Recknagel (1935) and Recknagel (1936). The first MEMs (Hottenroth, 1937; Orthuber, 1948; Bartz et al., 1952) were made as devices with a simple magnetic beam separator. In order to avoid the additional astigmatism caused by the magnetic deflection field, later on "straightforward" devices were constructed (Mayer, 1955; Bethge et al., 1960; Spivak et al., 1961a). A few of these instruments were equipped with an additional magnetic objective lens, as reported by Bacquet and Santouil (1962), Barnett and Nixon (1967a), and Someya and Watanabe (1968). Owing to the simultaneous progress in constructing magnetic prisms, MEMs with improved magnetic deflection systems were developed (Mayer et al., 1962; Schwartze, 1966; Bethge and Heydenreich, 1971a, 1971b). Some of the instruments were especially designed to test the possibilities of a MEM in the focused imaging mode (Schwartze, 1967; Bok, 1968). Also, scanning MEMs (Bok, 1968; Garrood and Nixon, 1968; Wittry and Sullivan, 1970; Lukjanov et al., 1974) and mirror attachments to commercial scanning electron microscopes (Witzani and Hörl, 1976; Wilfing and Hörl, 1978; Hamarat et al., 1984) were built. The development of this class of electron mirror instruments was described in reviews by Spivak et al. (1978) and Witzani and Hörl (1981). An electron mirror interference microscope, which can be regarded as the electron-optical analog of the Tolansky interference microscope of light optics, was presented by Lichte and Möllenstedt (1979) and by Lichte (1980). It was applied to measure topographical surface structures and roughness of the order of a few tenths of nanometers as well as potential steps of less than 1 mV.

Combinations of a MEM with other surface analysis techniques are of particular interest to the study of surface physics and surface chemistry. The first examples were instruments that could be run either as a MEM or an EEM (Delong and Drahos, 1964; Barnett and Nixon, 1964, 1967a; Ivanov and Abalmasova, 1966). The MEM-LEED developed by Delong and Drahos (1971) was another combined MEM instrument. It provides the possibility of imaging surfaces and observing low-energy electron diffraction (LEED) patterns of the same area of a specimen. Further MEM-LEED devices were built by Berger et al. (1977), Forster et al. (1985), and Shern and Unertl (1987). There are a few advantages of the MEM-LEED over conventional LEED, viz., the lower sensitivity to magnetic fields, the independence of the LEED pattern of the energy of the incident electrons, and the use of a flat viewing screen. Thus, Shern and Unertl (1987) pointed out that even a simple MEM-LEED can outperform conventional LEED systems. Telieps and Bauer (1985) succeeded in developing a modern instrument for surface investigations, combining LEEM, LEED, MEM, and photoelectron emission microscopy (PEEM). A further improved instrument of this type was reported by Veneklasen (1991). It should be mentioned that, unlike most of the conventional MEMs, these particular MEMs work in the focused imaging mode.

C. Principles of Image Formation
Generally, two different modes of operation of a MEM can be distinguished. Global differences in respective electron optics and ray paths are illustrated in Fig. 3. On the left Fig. 3 shows the principle of the shadow imaging technique. In this case the EMS, which in the simplest version consists of a mirror specimen and a grounded aperture diaphragm in front of it, forms an electron-optical diverging mirror. Typically, an electron beam coming from a distant source enters the EMS in the form of a nearly parallel beam, but the electron current density can also be increased by using a primary electron beam slightly focused by a condenser lens. Both the diverging lens action and the influence of the retarding field result in a beam broadening. Thus, the final movement of reflected electrons can be explained by a central projection from a virtual source located on the optical axis behind the specimen surface. The distance between the virtual source and the specimen surface is about three-fifths of that between the specimen surface and the aperture diaphragm. Electron deflections caused by the specimen result in spatial modulations of the electron current density of the reflected electron beam,
FIGURE 3. Principles of the shadow imaging technique (a) and the focused imaging mode (b) of a MEM.
REINHOLD GODEHARDT
which a fluorescent screen placed in an arbitrarily fixed image plane presents as the contrast of an unfocused image. As the image contrast produced resembles that of an optical shadow image, this imaging mode is known as the shadow imaging technique of mirror EM. The principle of producing focused images with a MEM is shown on the right of Fig. 3. In this case the retarding field of the mirror system in front of the specimen surface is combined with a convex electron lens. This objective lens can be a magnetic one, or an electrostatic arrangement of aperture electrodes such as the cathode lens of the Johannson type, the electron-optical properties of which were discussed in detail by, e.g., Soa (1959). The primary electron beam is focused by the illumination system in such a way that the crossover of the beam coincides with the center of the contrast aperture placed in the focal plane of the combined EMS. Thus, an axially parallel beam of electrons enters the uniform electrostatic retarding field in front of the specimen surface, with all electrons returning on the same trajectories after the reflection. If, however, electrons are deflected by local field disturbances caused by the specimen, then these electrons follow different trajectories after reversal, intersecting the aperture plane at a distance from the electron-optical axis that depends on the perturbation present. If this distance exceeds the radius of the contrast-forming aperture, the electrons impinge on the aperture diaphragm and are thus removed from the reflected electron beam. Image contrast thus appears if each point of the specimen is transformed to the conjugate point of the screen by means of projection lenses, as, e.g., in an EEM. The focused imaging mode can be realized only if the illumination electron beam is separated from the contrast-forming reflected beam, since in a straight MEM column (Barnett and Nixon, 1967a) stigmatic (point-to-point) imaging results in an almost zero viewing field.
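The aperture filtering that produces the focused-mode contrast can be illustrated with a toy calculation. The function name and all numbers below are illustrative assumptions, not instrument parameters from the text:

```python
import math

def reaches_screen(deflection_mrad, focal_length_mm, aperture_radius_mm):
    """Toy model of the contrast aperture in the focused imaging mode.

    An undeflected electron retraces its incoming trajectory and passes
    through the crossover at the aperture center.  An electron deflected
    by a small angle at the specimen crosses the focal (aperture) plane
    at a distance of roughly f * tan(angle) from the axis and is blocked
    when that distance exceeds the aperture radius.
    """
    displacement = focal_length_mm * math.tan(deflection_mrad * 1e-3)
    return abs(displacement) <= aperture_radius_mm

# Undisturbed area: no deflection, so the electron passes the aperture.
print(reaches_screen(0.0, 20.0, 0.05))   # True
# A 5 mrad deflection displaces the ray by 0.1 mm: blocked, dark contrast.
print(reaches_screen(5.0, 20.0, 0.05))   # False
```

The point of the sketch is only the thresholding: deflections strong enough to carry the ray past the aperture edge are removed from the image-forming beam.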
The first experimental results of focused images taken with a MEM were presented by Schwartze (1967), and by Bok (1968) and Bok et al. (1971). They showed that high image brightness and improved lateral resolution are advantages of the focused imaging mode, whereas the shadow imaging mode provides a substantially higher sensitivity to both topographical and electrical surface inhomogeneities. Bok (1968) and Bok et al. (1971) additionally made a comparison on the basis of contrast calculations. According to their results, the conclusion that the contrast produced by a MEM in the shadow imaging mode exceeds the contrast of focused images is rather trivial, as mirror EM deals with phase specimens, so that defocusing leads to an increase of contrast at the expense of resolving power. Despite the differences in imaging and contrast formation, the transfer of specimen information and the features of the image structures produced by both modes are nearly the same. Therefore, in the following the shadow
FIGURE 4. Topographical image of a rock salt cleavage surface evaporated with a thin silver layer. (Courtesy of J. Heydenreich.)
imaging technique will be emphasized, as most of the applications of the MEM make use of the high sensitivity of this mode. An example of typical topographic image structures is given in Fig. 4, showing a cleaved surface of rock salt, which was evaporated with a thin silver layer to produce a sufficiently high surface conductivity. The predominant image structures are the bright-dark double lines caused by surface steps. At the lower left of the image, extended plateau regions of a fine greasy structure are separated by relatively high steps; whereas at the upper right, above the diagonal border line, there is a rather inhomogeneous cleavage face with both large and fine steps imaged. The formation of the bright-dark double-line contrast of a steplike topographic surface structure is explained by the schematic representation of electron trajectories in Fig. 5a. The example shown on its right is like that of Fig. 4. Of course, a corresponding explanation can be given for image contrasts caused by a voltage step at the specimen surface. Figure 5b represents schematically the deflection of the imaging electrons for a voltage step with a finite transition region. An example of image contrasts produced by such a surface potential profile is shown
FIGURE 5. Steps of both geometric (a) and electrical origin (b). (Courtesy of J. Heydenreich.)
on the right of the figure. A high-ohmic slit region prepared by special evaporations onto a polished glass substrate (Bethge and Heydenreich, 1960) separates two regions of different constant potential. Corresponding to the finite transition region with a linear potential gradient, the bright-dark double line, too, appears separated. Thus, the image structures observed can also be regarded as examples of a more general contrast explanation, which was presented by Wiskott (1956) and tested experimentally by Brand and Schwartze (1963). As a conclusion of his theoretical considerations, Wiskott (1956) stated that concave surface irregularities correspond to a bright image contrast (increase of the electron current density), whereas convex surface irregularities are connected with a dark image contrast (decrease of the electron current density). As in general a surface step may be regarded as consisting of a concave and a convex surface irregularity lying closely adjacent, it produces a bright-dark double line, the bright edge of which marks the lower plateau adjoining the step. It is also worth mentioning that the bright-dark double line does not follow the image points corresponding to specimen points, but is shifted to the lower plateau (Heydenreich, 1966). Of course, Wiskott's general contrast interpretation is applicable also to two-dimensional surface irregularities. A bump, e.g., representing a perturbation of rotational symmetry with a convex center and a concave fringe, forms a round dark image structure surrounded by a bright fringe. A valley, on the other hand, appears bright. The image of a polished surface of a eutectic
FIGURE 6. Topographical image of the polished surface of a eutectic InSb-NiSb sample with fine NiSb needles embedded in the InSb matrix. (Courtesy of J. Heydenreich.)
InSb-NiSb sample in Fig. 6 shows several structures of this kind. They result from a surface roughness caused by fine NiSb needles (about 1 µm in diameter) which are embedded in the InSb matrix. The image was used to test the lateral resolution of the MEM described above (Heydenreich, 1969). Fine structures separated by distances of about 400 nm can be detected. However, one should be very careful in using common terms such as “resolving power” and “magnification” in connection with mirror EM, since the actual size of a topographical feature is not related directly to the size of an image structure (extent of the contrast) via magnification. Therefore, the term “magnification” refers only to distances or separations of topographical features and not to actual size. Another difficulty in image contrast interpretation may result from a strong perturbation originating from the specimen. While a bright image structure, for instance, is produced by a shallow surface region, a caustic figure with an inversion of contrast is caused by a deep valley. The resulting image structure of a dark circular area surrounded by a bright fringe resembles that of a bump. Of course, a strong perturbation may be reduced by increasing the negative bias of the specimen, but only at the expense of the contrast of weak irregularities. Therefore, the simultaneous
MEM observation of weak and strong perturbations is problematic, particularly in the shadow imaging mode, and often not possible at all. Another problem of contrast interpretation may arise if contrasts of both geometric and electrical origins are mixed but a discrimination has to be made. In such a case, varying the acceleration voltage of the electron gun may solve the problem (Barnett and Nixon, 1967b; Lukjanov et al., 1973). Contrast of geometric origin is improved by an increased high voltage, as both the retarding field strength and the field inhomogeneities caused by the surface roughness are also increased. On the other hand, a perturbation of electrical origin superimposed on a retarding field of higher field strength produces a weaker image contrast. While in practice the discrimination between topographical contrast and contrast of electrical origin is difficult, it is easy to distinguish between topographical contrast and contrast of magnetic origin, as there are important differences in the contrast formation. In general, the contrast formation results from the action of deflection forces on the imaging electrons. Electrical deflection forces caused by the surface roughness or by electrical microfields at the specimen surface are independent of the electron velocity. The force of the electrical field providing the deflection therefore retains the same direction for the electrons both approaching and receding from the mirror specimen. This means that the resulting contrast-forming deflection is about twice that of each single part of the electron trajectory. However, the result is quite different if one considers the influence of a magnetic fringing field on the electron motion. The force on an electron caused by a magnetic field, i.e., the Lorentz force, is more complex and velocity-dependent.
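This sign argument can be checked with a small numerical sketch: in a one-dimensional toy mirror, the transverse impulse of a velocity-independent electrostatic force accumulates over both branches of the trajectory, while that of a Lorentz-type force proportional to the normal velocity component cancels at the reversal. Units and field strengths are arbitrary illustrative assumptions:

```python
import numpy as np

# Toy 1-D mirror: the electron enters with v_z = -1, is uniformly
# decelerated, turns around at t = 1, and leaves again at t = 2.
t = np.linspace(0.0, 2.0, 200001)
v_z = -1.0 + t                   # uniform deceleration, reversal at t = 1

F_electric = np.ones_like(t)     # electrostatic force: same sign on both branches
F_magnetic = v_z                 # Lorentz-type force ~ v_z: reverses its sign

impulse_electric = np.trapz(F_electric, t)
impulse_magnetic = np.trapz(F_magnetic, t)

print(round(impulse_electric, 6))        # 2.0 -> the two branches add up
print(round(abs(impulse_magnetic), 6))   # 0.0 -> the two branches cancel
```

The first impulse is twice that of a single trajectory branch, the second vanishes, mirroring the doubling and the cancellation described in the text.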
Hence, there is a deflection canceling in the center of the electron beam, as for the electron velocity component normal to the plane of the specimen surface the direction of the force on electrons approaching the specimen is opposite to that of the force exerted on the receding electrons. This reversal of the sign of electron deflection led to a misunderstanding, noted by Wiskott (1956), that mirror EM is not suited for the investigation of magnetic specimens. Just at that time, however, Spivak et al. (1955), followed by Mayer (1957a, 1957b, 1958, 1959a, 1959b), succeeded in visualizing magnetic microfields by MEM, as fortunately there remains the possibility of utilizing the radial component of the electron velocity to establish contrast in the magnetic case. Though the radial component of the electron velocity is very small along the electron path, it becomes predominant immediately in front of the specimen surface. And as it does not reverse its sign, the corresponding Lorentz force does not change its direction either, so that a contrast-forming azimuthal deflection of the electron occurs. The radial component of the electron velocity increases with increasing distance from the electron-optical beam center. Therefore, under conventional imaging conditions the magnetic contrast in
the fringe areas of the viewing screen increases, whereas it vanishes in the center of the beam. Thus, in his experimentally supported conclusions, Mayer (1957a, 1957b) noted that the shifting of an image structure into the electron-optical center is one possible criterion for discriminating magnetic contrast from other types. If the structure under investigation is of magnetic origin, it disappears in the center, whereas image structures of another origin will remain the same or in many cases may become even richer in contrast and more detailed. Weak stray fields, in particular, require an enhanced magnetic contrast. Therefore, an increased radial electron velocity component is desired, which can be attained by shifting the electron-optical center of the imaging electron beam to the border of the viewing screen, or even beyond.¹ In a MEM with a magnetic prism, the electron beam center can easily be shifted by varying the deflection angle. In addition, the negative bias of the specimen has to be decreased or even changed to a positive bias in order to permit a sufficiently deep electron penetration into the magnetic field under investigation. A change of the specimen bias to positive values also implies the possibility of practically determining the electron-optical center of the electron beam with respect to the specimen. As the radial velocity of the electrons diminishes in the beam center, the location of the latter is marked by the site where the electrons first impinge. Owing to the generation of secondary electrons, the area where the electrons strike the specimen surface looks like a diffuse bright spot with a dark fringe, easily detectable in the image. The specific contrast formation of magnetic origin described above is demonstrated in Fig. 7, where the position of the beam center is marked by black arrows. It shows images of a sample² consisting of a 400-nm-thick Co-Cr magnetic layer on a flexible support covered with a thin amorphous C-H film. Magnetic properties of the sample are the coercivity H_c = 45 kA/m, the effective anisotropy K_eff = 29 kJ/m³, and the spread in c-axis orientation of Δθ = 3.6°. Magnetic flux reversals with a period of 6 µm were recorded in the x direction within 100-µm-wide tracks. Micrographs (a) and (b) correspond to normal incidence of the electron beam. The specimen bias of V_b = +1 V results in electrons beginning to impinge on the specimen surface, which produces the diffuse bright image structure in the center of the image. The same specimen region imaged at oblique incidence of the electron beam in the y direction is shown in micrographs (c) and (d). The displacement of the electron beam center becomes obvious from the shift of the diffuse bright image structure, marked by an arrow, to the left fringe area of image (c), taken at a bias voltage of V_b = +2 V.

¹ In Figs. 12 and 13 of Section IV.A, this is demonstrated by computed electron trajectories.
² The sample was kindly supplied by Jan P. C. Bernards, Philips Research Laboratories, Eindhoven. A detailed description of the sample preparation, of the properties of the sample, and also of the results of investigating the same sample by means of Kerr microscopy, Lorentz microscopy, and magnetic force microscopy was published by Bernards and Schrauwen (1990).

FIGURE 7. Enhanced contrast of periodic magnetic flux reversals recorded in a Co-Cr magnetic layer at oblique incidence of the electron beam [(c) and (d)] relative to the conventional normal incidence [(a) and (b)]. The site of the corresponding electron beam center is marked by black arrows.

Changing the bias to V_b = 0 V best reveals the periodic contrast of magnetic origin in the right fringe area of micrograph (d). As the electron penetration into the magnetic stray field in this region is exactly the same as in micrograph (b), the enhanced magnetic contrast is due only to the electron velocity component increased in the y direction. The flux reversals with a period of 1 µm recorded in 25-µm-wide tracks of the same specimen are shown in Fig. 8. This demonstrates the limit of the magnetic sensitivity of the MEM described above. With an aperture radius r_d = 0.55 mm and a distance z_d = 2.2 mm from the aperture diaphragm to the specimen surface, the parameters of the EMS used are the same as those used for computing the electron trajectories presented later in Figs. 12 and 13. The accelerating voltage chosen was 12 kV, and the recording system with photo plates was replaced by an additional transmission viewing screen and a CCD camera for image recording.

FIGURE 8. Contrast of periodic flux reversals with a period of 1 µm, showing the detection limit of the magnetic sensitivity of the MEM used.

III. APPLICATIONS OF THE MIRROR ELECTRON MICROSCOPE
Many different applications of the MEM have been reported, particularly in the 1960s and 1970s. Specimens investigated consisted of metals, semiconductors, insulators, and even organic materials. Most of the MEM applications have not been focused on the depiction of topographical features, but rather on the observation of electrical or magnetic phenomena.

1. Metals

Orthuber (1948) used the MEM to investigate the local differences in the contact potential of nickel cathodes covered with a barium layer. Spivak
et al. (1960) reported the observation of “patch fields” on the surface of emitters. Metal surfaces were also examined by Bartz et al. (1954).

2. Semiconductors

MEM examinations of p-n junctions in semiconductors were carried out, e.g., by Bartz and Weissenberg (1957), Sedov et al. (1962), Spivak and Ivanov (1963), Spivak and Lukjanov (1966a), Igras and Warminski (1968a), Becherer et al. (1973), and Dupuy et al. (1984). A number of articles dealing with the application of MEM to investigate integrated circuits have been published (Igras and Warminski, 1968b; Cline et al., 1969; Kuhlmann and Knick, 1977; Wilfing and Horl, 1978). Thin film capacitors were examined by Ivanov and Abalmazova (1967). A number of experiments concerning the mapping of the electrical structure of semiconductor surfaces, the visualization of dislocations, impurity segregations, and surface diffusion phenomena, and the removal of acceptors in ion-bombarded semiconductors were made by Igras (1961, 1962, 1970), Igras and Warminski (1965, 1966, 1967a, 1967b, 1968a, 1968b), and Igras et al. (1969, 1974, 1975). Warminski (1970) also investigated the influence of sample polymerization on the contrast formation. Employing stroboscopic MEM, Lukjanov and Spivak (1966) succeeded in observing switching processes in semiconductor diodes, and Gvosdover et al. (1971) were successful in evaluating recombination waves in germanium.

3. Insulators

One group of insulating materials studied extensively by MEM is ferroelectrics. Charging of the specimens under investigation was avoided by covering the surface with a thin metallic or semiconducting film. Ferroelectric materials examined by MEM are barium titanate (Spivak et al., 1959a, 1959b; English, 1968a, 1968b), triglycine sulfate (Spivak et al., 1963a), lead zirconate-titanate (Bates and England, 1969), Ca₂Sr(C₂H₅CO₂)₆ (Someya and Kobayashi, 1970, 1971), and Gd₂(MoO₄)₃ (Kobayashi et al., 1972).
The contrast caused by ferroelectric samples was discussed by Mafit (1968). English (1968b) observed the transition from the ferroelectric state to the paraelectric one, the repolarization, and the nucleation and growth of domains. Someya and Kobayashi (1970) succeeded in visualizing the 180°-domain structure, which was beyond the capabilities of a polarizing light-optical microscope. Gvosdover et al. (1968, 1970), Gvosdover (1972a), and Spivak et al. (1972) successfully used a MEM to observe piezoelectric fields on the surface of crystals when acoustic (either bulk or surface) waves propagate through the crystal. This observation was preferably made in a stroboscopic mode
(Gvosdover et al., 1968), with the image contrast corresponding to a definite phase of a periodic process, while in the conventional nonstroboscopic mode only a time-averaged image contrast is produced. MEM was also used to detect electrical defects and inhomogeneities in insulating layers, which are important for semiconductor devices and integrated circuits. The direct observation of the charge distributions after charging the specimen surface by a defined electron bombardment was described by Heydenreich (1972), and by Heydenreich and Vester (1973, 1974). Defects and imaging phenomena observed under steady-state conditions were reported by Weidner et al. (1975a, 1975b). A technique in which the insulating film under investigation is embedded between a metallic base layer and a semiconducting top layer is more advantageous than the direct observation of the insulating surface. In MEM, weak spots of the insulating film can be detected by the corresponding electrical microfields caused by current filaments when a voltage is applied to the MIS sandwich system (Heydenreich and Barthel, 1974; Godehardt et al., 1975, 1978; Heydenreich et al., 1978; Godehardt and Heydenreich, 1991, 1992). Quantitative investigations of MIS sandwich systems are described in detail in Section VIII. McLeod and Oman (1968) applied MEM to study biological specimens. The sample used was the giant unicellular alga Acetabularia crenulata, on which the dynamic electrical effects associated with charge centers in the chloroplast were to be observed. From the results of the discharging characteristics of the centers in the chloroplast, the authors concluded that there were active centers in the cell which could conserve positive or negative charge for several seconds.
4. Magnetic Materials

The capability of the MEM to image magnetic fringing fields of the specimen was used to visualize domain structures in ferromagnetics (Spivak et al., 1955, 1961b, 1964, 1966; Mayer, 1957a, 1957b, 1959a, 1959b, 1961; Kranz and Bialas, 1961; Kachniarz et al., 1983), patterns recorded on magnetic tapes (Mayer, 1957b, 1958, 1959b; Kuehler, 1960; Guittard et al., 1967), and on magnetic disks (Godehardt, 1988; Godehardt et al., 1992), and fields over magnetic recording heads (Mayer, 1957b, 1961; Spivak et al., 1963b). Further interesting applications reported by Mayer (1959b) are the observation of domain pattern movements in silicon-iron and permalloy films, the imaging of the magnetic stray fields at grain boundaries, and the visualization of magnetic structures by Curie-point writing on magnetic films and by magnetic writing with an electron beam. Time-dependent periodic magnetic fields above recording heads were investigated by Spivak and Lukjanov (1966b), Rau et al. (1970), and Spivak et al. (1971).
Wang et al. (1966) studied the Abrikosov vortex structure of superconductors by means of MEM, whereas Bostanjoglo and Seigel (1967) observed superconductors in the intermediate state. Not only have image structures been observed visually, but the magnetic microfields under investigation have also been evaluated on the basis of a quantitative contrast interpretation, by Sedov (1968) for microfields above recording heads and by Rau et al. (1970) for magnetic fields on tapes recorded at audio and video frequencies. Corresponding quantitative investigations of magnetic fringing fields of periodic bit patterns of a high storage density recorded on a magnetic disk are described in Section VII.

5. Measuring Surface Potential Differences

As a special application of mirror EM, Guittard et al. (1968) worked out a
method of measuring local changes in the surface potential by using the current back-scattered from a small area of the sample. This method was developed for measuring work-function variations; it was used in chemisorption and surface studies (Dupuy et al., 1976, 1977, 1984; Pantel et al., 1977; Babout et al., 1980). This measurement process was described in detail by Guittard et al. (1981). It starts with taking an image of the sample surface in order to choose an area under measurement without any disturbances at the surface. Then, a slightly convergent beam is used to enable a localized electron impingement at the beam center when the specimen bias becomes positive with respect to the cathode potential. The back-scattered current of electrons passing a small hole in the center of the viewing screen is measured by an electron detector, allowing imaging and measurement simultaneously. The measurement process consists of the continuous tracking of the variations of the back-scattered current versus the energy biasing. On the assumption that the electron reflectance remains constant while the surface potential varies, the shifts in the curve of the back-scattered current are a direct measure of the potential changes. Using a feedback system with a tuned ac detecting method, Guittard et al. (1981) continuously measured the surface potential shifts with an accuracy of 5 mV. Babout et al. (1980) introduced an improved measurement method with a controlled accuracy, which was successfully applied even to contaminated microsurfaces with vast changes in reflectance.

IV. FUNDAMENTALS OF QUANTITATIVE MIRROR ELECTRON MICROSCOPY
Besides the special surface potential measuring technique described at the end of Section III, quantitative mirror EM rests on a quantitative contrast interpretation. While Section IV.A describes the influence of the EMS on the final
deflections of the imaging electrons, Section IV.B presents the fundamental relations between the electron deflections and the electron current density distribution in the viewing plane according to Wiskott (1956). Based on the results of this section and on fundamental relations published by Sedov et al. (1968a, 1968b), equations useful for the quantitative contrast interpretation of one-dimensional distortion fields are presented in Sections V and VI; Section VII deals with their application to the simulation of observed magnetic contrasts of periodic flux reversals recorded on a magnetic disk. A further example of quantitative mirror EM is presented in Section VIII, which is concerned with the evaluation of electrical microfields of rotational symmetry. Unlike the previous example, the corresponding contrast estimation is based directly on computed electron trajectories, going beyond the limiting assumption of a weak perturbation. In particular, the results of simulating image structures that are rich in contrast, consisting of round, nearly totally black areas with fine, very bright fringes, are used to evaluate the corresponding electrical microfields.

A. Influence of the Electron Mirror System
The estimation of electron trajectories not influenced by a distortion field is the starting point of a contrast calculation. Usually, a separation of the EMS into two parts is assumed in publications dealing with the subject. One part, with curved equipotential planes, acts as an electron lens and is described by corresponding electron-optical parameters. The other part, located between the lens region and the specimen surface, is assumed to be a uniform retarding field. According to the applications described below, only the simple EMS used is taken into consideration. It consists of a mirror specimen located in the plane z = 0 and a grounded aperture diaphragm at z = z_d with an aperture radius r_d. It is easy to derive equations that describe the electron motion if, in accordance with Lukjanov et al. (1968), the action of the lens of focal length f is expressed by the corresponding refraction of the electron trajectories in the plane of the aperture, as shown in Fig. 9. An electron, accelerated by voltage V_0 of the electron gun to an initial velocity v_0 and moving parallel to the electron-optical axis at a distance r_0, is deflected by the angle α_1 in the refraction plane z = z_d of the lens according to the equation

tan α_1 = r_0/f   (1)

The following motion within the uniform retarding field of field strength E_z = V_0/z_d in the z direction is described by a parabolic trajectory with the point of reversal at
FIGURE 9. Electron trajectories determined according to an EMS with a lens refraction plane at z = z_d and with a uniform retarding field between z = 0 and z = z_d.
r_r = r_0 + 2z_d tan α_1 / (1 + tan² α_1) ≈ r_0 + 2z_d tan α_1 = (1 + 2z_d/f) r_0   (2)

when taking into account that tan² α_1 << 1 and replacing tan α_1 according to Eq. (1). Owing to the symmetrical branches of the parabolic trajectory, the reflected electron meets the refraction plane of the lens at a distance

r_1 = r_0 + 4z_d tan α_1 = (1 + 4z_d/f) r_0   (3)

from the electron-optical axis. According to the oblique incidence of the electron on the refraction plane, the resulting deflection expressed by the angle α_2 is described by the equation

tan α_2 = tan α_1 + r_1/f   (4)
Having passed the refraction plane, the electron moves on a straight line, as the space behind this plane is assumed to be free of electric fields. If, for simplicity, the deflection caused by a magnetic prism and the action of an additional projector lens are omitted, the electron does not change its direction until it strikes the viewing screen at a distance r_s from the electron-optical axis. With the tube length of the MEM expressed by the distance z_t between the aperture plane and the viewing screen, the coordinate r_s of the impact point is given by

r_s = r_1 + z_t tan α_2   (5)

and by

r_s = [(1 + 4z_d/f)(1 + z_t/f) + z_t/f] r_0   (6)

after replacing r_1 and tan α_2 according to Eqs. (3), (4), and (1). Considering two different electron trajectories marked by the additional index numbers I and II, respectively, the magnification M caused by the EMS is defined by the quotient

M = (r_s,II − r_s,I) / (r_r,II − r_r,I)   (7)

which, using Eqs. (2) and (6), finally results in the expression

M = [(1 + 4z_d/f)(1 + z_t/f) + z_t/f] / (1 + 2z_d/f)   (8)
The result is the same if the electron trajectories do not have the same meridian plane. Thus,
r_s = M r_r   or   x_s = M x_r,   y_s = M y_r   (9)

express the relation between image plane and object plane in the shadow imaging mode if one takes into account that a point P(x, y) in the surface plane of the specimen correlates with that electron trajectory which is characterized by a point of reversal having the same coordinates x_r and y_r. Taking into account the ratio of the radial electron velocity component v_r and the initial electron velocity v_0 given by

v_r/v_0 = tan α_1 = r_r/(f + 2z_d)   (10)

the coordinate z_r of the point of reversal, which coincides with its distance from the corresponding object point at the specimen surface, is obtained from energy considerations and results in

z_r = [r_r²/(f + 2z_d)²] z_d   (11)
An additional specimen bias V_b, usually negative, changes Eq. (11) to

z_r = [r_r²/(f + 2z_d)² − V_b/V_0] z_d   (12)

where V_0, as mentioned above, is the positive value of the high voltage applied to the electron gun.
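The small-angle relations of Eqs. (1)-(12) can be collected into a short numerical sketch. The focal length f = 21.4 mm and the aperture distance z_d = 7 mm appear elsewhere in this section; the entrance radius, tube length, and voltages used here are assumed example values, not instrument parameters from the text:

```python
def mirror_geometry(r0, f, z_d, z_t, V0, Vb=0.0):
    """Paraxial quantities of the shadow imaging mode per Eqs. (1)-(12)."""
    tan_a1 = r0 / f                                      # Eq. (1)
    r_r = (1 + 2 * z_d / f) * r0                         # Eq. (2), reversal radius
    r_1 = (1 + 4 * z_d / f) * r0                         # Eq. (3)
    tan_a2 = tan_a1 + r_1 / f                            # Eq. (4)
    r_s = r_1 + z_t * tan_a2                             # Eq. (5), screen radius
    M = r_s / r_r                                        # magnification, Eqs. (7)-(9)
    z_r = (r_r**2 / (f + 2 * z_d)**2 - Vb / V0) * z_d    # Eq. (12), reversal height
    return r_s, M, z_r

# f and z_d from the text; r0, z_t, V0, and Vb are illustrative assumptions.
r_s, M, z_r = mirror_geometry(r0=0.05, f=21.4, z_d=7.0, z_t=300.0, V0=12e3, Vb=-1.0)
print(f"screen radius {r_s:.3f} mm, magnification {M:.1f}, "
      f"reversal height {z_r * 1e3:.2f} um")
```

For these numbers the computed magnification agrees with the approximation M ≈ 2z_t/f used below to within about 5%, and the small negative bias lifts the point of reversal well clear of the surface.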
Now a slight radial deflection of the electron will be included in the consideration. Following Wiskott (1956) in assuming that the electron interaction with the disturbing field can be described by a small additional velocity Δv_r transferred to the electron at the point of reversal, the electron trajectory remains unchanged until the electron has passed this point. The trajectory branch of the reflected electron, however, is changed, resulting also in changed quantities ᾱ_1, r̄_1, ᾱ_2, and r̄_s. The change of α_1 to ᾱ_1 is described by

tan ᾱ_1 = (v_r + Δv_r)/v_0 = tan α_1 + Δv_r/v_0   (13)

and with Eq. (13) the changed coordinate r̄_1 is determined by

r̄_1 = r_1 + 2z_d Δv_r/v_0   (14)

The equation

tan ᾱ_2 = tan α_2 + (1 + 2z_d/f) Δv_r/v_0   (15)

is obtained by replacing α_1, r_1, and α_2 by ᾱ_1, r̄_1, and ᾱ_2, respectively, in Eq. (4). Applying the same to Eq. (5) results in

r̄_s = r_s + [z_t(1 + 2z_d/f) + 2z_d] Δv_r/v_0   (16)

Omitting the last item in the brackets, as it can be neglected, and using M = 2z_t/f, the displacement Δr_s = r̄_s − r_s in the image plane is finally expressed by

Δr_s = (1/2) M (f + 2z_d) Δv_r/v_0   (17)
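A minimal sketch comparing the full displacement of Eq. (16) with the simplified Eq. (17); the focal length and aperture distance reuse values from this section, while the tube length and velocity ratio are illustrative assumptions:

```python
def image_displacement(dvr_over_v0, f, z_d, z_t):
    """Image-plane displacement caused by a small additional radial
    velocity dv_r (relative to v_0) transferred at the point of reversal."""
    full = (z_t * (1 + 2 * z_d / f) + 2 * z_d) * dvr_over_v0    # Eq. (16)
    M = 2 * z_t / f                                             # approximation used in the text
    simplified = 0.5 * M * (f + 2 * z_d) * dvr_over_v0          # Eq. (17)
    return full, simplified

# f = 21.4 mm and z_d = 7 mm from the text; z_t and dv_r/v_0 are assumed.
full, simplified = image_displacement(1e-3, f=21.4, z_d=7.0, z_t=300.0)
print(full - simplified)   # equals the neglected term 2 * z_d * dv_r/v_0
```

The printed difference is exactly the dropped 2z_d term, which is small against the z_t-proportional part for any realistic tube length.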
Before further expressions for a contrast estimation are derived on the basis of this equation, some information concerning the EMS will be given. In the publication of Lukjanov et al. (1968), and also in most of the other publications dealing with the subject, the focal length f used in the previous equations is assumed to be that of a thin diverging electron lens, given by

f = 4z_d   (18)
However, this value is valid only for r_d << z_d; thus, Eq. (18) often causes great errors when applied to typical EMSs in practice. The computation of electron trajectories is more accurate if it considers the actual potential distribution within the EMS. Owing to his experience that approximations conventionally used in electron optics cannot be applied to computations of electron trajectories within an EMS, Recknagel (1937) suggested a method which starts from a known potential distribution V(r = 0, z) along the electron-optical axis. Cylindrical coordinates r, φ, and z are preferably used according to the rotational symmetry of the EMS. The z axis has to be divided into sections so that in each section the potential V(r = 0, z) can be replaced with sufficient accuracy by a parabolic approximation, where at the transition sites of adjacent sections both the approximating parabolic functions and their derivatives have to be continuous. Using a special variable transformation allows the electron trajectory within each section to be described by an analytical parametric solution of the equations of motion. The whole electron trajectory can be estimated if the computed boundary values of the variables at the end of a section are used as boundary values of the corresponding variables at the beginning of the subsequent section. Applications and details of this procedure were described by Heydenreich (1961) and by Godehardt (1989). Figures 10 and 11 present results concerning an EMS with r_d = 3.2 mm and z_d = 7 mm. The normalized curve of Fig. 10 results from an analytic estimation of the potential distribution within the EMS. Despite the normalization and the potential reference of V(r = 0, z = 0) = 0, the expression used for computing the axial potential corresponds with that given below in Eqs. (19)-(21). Dividing the z axis between z = 0 and z = 20 mm into 20 equidistant sections, the normalized potential distribution is approximated by 20 adjacent parabolic parts with an accuracy better than 0.5%. Based on the described procedure applied
FIGURE 10. Normalized potential distribution along the electron-optical axis for an EMS with parameters rd = 3.2 mm and zd = 7 mm (z = 0: specimen surface; z = 7 mm, dashed: aperture diaphragm).
REINHOLD GODEHARDT

FIGURE 11. Computed electron trajectories for an EMS with parameters rd = 3.2 mm and zd = 7 mm.
to the approximation functions, two trajectories of electrons entering the mirror system parallel to the electron-optical axis at distances of 0.05 mm and 0.1 mm from it were computed, as shown by the graphical representation in Fig. 11. The focal length, estimated to be f = 21.4 mm, differs by more than 30% from the value determined by using Eq. (18). Of course, the quality of the results obtained by applying the above procedure is determined mainly by the validity of the potential distribution estimated analytically and used as the starting point. Heydenreich (1961) compared the potential distribution along the electron-optical axis which, on the one hand, was calculated on the basis of analytical expressions published by Glaser and Henneberg (1935) and, on the other hand, was measured in an electrolytic trough. While the results showed considerable discrepancies for the ratio rd/zd = 2, there was a good correspondence for rd/zd = 2/3. Therefore, the analytical estimation of the potential distribution within EMSs according to the potential function

V(r, z) = (V0 − Vb) [ (z − zd)/(2 zs) − (|z − zd|/(π zs)) ( tan⁻¹(p/rd) + rd/p ) ]    (19)

with

p = (1/√2) { (z − zd)² + r² − rd² + [ ((z − zd)² + r² − rd²)² + 4 rd² (z − zd)² ]^(1/2) }^(1/2)    (20)

and, on the axis (where p = |z − zd|),

V(r = 0, z) = (V0 − Vb) [ (z − zd)/(2 zs) − (1/(π zs)) ( |z − zd| tan⁻¹(|z − zd|/rd) + rd ) ]    (21)
should be of good relevance for systems with rd/zd < 0.5. With the parameter zs determined according to

zs = (zd/2) [ 1 + (2/π) ( rd/zd + tan⁻¹(zd/rd) ) ]    (22)

the condition for the surface potential

V(r = 0, z = 0) = −(V0 − Vb)    (23)
is fulfilled. The given potential function V(r, z) not only describes the axial potential V(r = 0, z) necessary as the starting point of the method of Recknagel (1937); it also allows one to compute the potential directly at an arbitrary site in the region above the specimen surface. Apart from special sites where the derivatives are not defined and have to be replaced by corresponding limits, the partial derivatives

Er(r, z) = −∂V(r, z)/∂r    Ez(r, z) = −∂V(r, z)/∂z    (24)

can be expressed analytically and computed directly at an arbitrary site. This allows a computer-aided method of estimating electron trajectories step by step with great accuracy, which was applied particularly to the computation of electron trajectories having an oblique incidence on the EMS (Godehardt and Heydenreich, 1993). The procedure carried out to estimate a small segment of the electron trajectory between the starting point with coordinates
r = ri    φ = φi    z = zi    (25)

and the final point with coordinates

r = ri+1    φ = φi+1    z = zi+1    (26)

will be roughly described. At the starting point, where the electron motion is characterized by the electron velocity components

vr,i = ṙ(r = ri, φ = φi, z = zi)
vφ,i = ri φ̇(r = ri, φ = φi, z = zi)
vz,i = ż(r = ri, φ = φi, z = zi)    (27)

resulting from the computations of the previous trajectory segment, the
potential

Vi = V(r = ri, φ = φi, z = zi)    (28)

the components of the electric field, and the derivative of Ez,

Er,i = Er(r = ri, φ = φi, z = zi)
Eφ,i = 0
Ez,i = Ez(r = ri, φ = φi, z = zi)
Ezz,i = (∂Ez/∂z)(r = ri, φ = φi, z = zi)    (29)
are estimated. Using these values, it is assumed that, with great accuracy, the actual electric field E(r, φ, z) in the neighborhood of the starting point can be replaced by the approximated field Ea(r, φ, z) with the components

Ea,r = Er,i    Ea,φ = 0    Ea,z = Ez,i + Ezz,i (z − zi)    (30)

This field approximation takes into account that the field variation within the EMS takes place mainly in the z direction, and it allows an analytical solution of the equations of motion. On the basis of analytical expressions which can be derived for the quantities mentioned above, the trajectory segment is estimated, with the final point of the segment determined, e.g., in such a way that the deviations of the approximated field components Ea,r(ri+1, φi+1, zi+1) and Ea,z(ri+1, φi+1, zi+1) from the corresponding actual ones Er(ri+1, φi+1, zi+1) and Ez(ri+1, φi+1, zi+1) are smaller than the accuracy parameters chosen. Estimating the trajectory itself with sufficient accuracy also requires additional corrections of the velocity components vr(ri+1, φi+1, zi+1) and vz(ri+1, φi+1, zi+1) computed on the basis of the field approximation. This corresponds to an electron energy correction, which is important in order to compute really adequate electron trajectories, especially near the point of reversal. Based on the approximation expressed by Eq. (30), the potential Va,i+1 at the final point of the trajectory segment is estimated by

Va,i+1 = Vi − Ez,i (zi+1 − zi) − (1/2) Ezz,i (zi+1 − zi)² − Er,i (ri+1 − ri)    (31)
This value will differ slightly from the corresponding actual value,

Vi+1 = V(r = ri+1, φ = φi+1, z = zi+1)    (32)

computed according to Eq. (19). To eliminate this discrepancy, the electron having reached the final point of the trajectory segment should meet a voltage step

ΔVi+1 = Vi+1 − Va,i+1    (33)
FIGURE 12. Electron trajectories at normal incidence of the electron beam computed for an EMS with parameters rd = 0.55 mm and zd = 2.2 mm.
-
P L A N E OF 1 H E APEll I URE DlAPl IRAGM A I 2 21111n
z[inin]
100
200
YlWI
FIGURE 13. Electron trajectories at oblique incidence α = 0.5° of the electron beam computed for an EMS with parameters rd = 0.55 mm and zd = 2.2 mm.
in the plane perpendicular to the (actual) field gradient at this site. Though this voltage step is passed without any change of position, a slight refraction takes place, correspondingly changing the velocity components vr,i+1 and vz,i+1.
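The segment-by-segment procedure can be illustrated in one dimension. In the sketch below the retarding field is a stand-in exponential model in dimensionless units (e/m = 1), not the EMS field of Eq. (19); within each segment the field is linearized as in Eq. (30), which admits a closed-form (trigonometric or hyperbolic) solution, and the result is checked against a fine-step Runge-Kutta reference.

```python
import math

def accel(z):               # stand-in retarding field, units e/m = 1
    return math.exp(-z)     # pushes the electron back toward large z

def daccel(z):
    return -math.exp(-z)

def analytic_segment(z, v, dt):
    """One segment with the field linearized about z, cf. Eq. (30):
    z'' = a0 + a1*(zeta - z) has a closed-form solution."""
    a0, a1 = accel(z), daccel(z)
    if abs(a1) < 1e-14:                          # essentially uniform field
        return z + v*dt + 0.5*a0*dt*dt, v + a0*dt
    zc = z - a0/a1                               # zero of the linearized field
    u, w = z - zc, math.sqrt(abs(a1))
    if a1 < 0:                                   # z'' = -w^2 (zeta - zc): oscillatory
        un = u*math.cos(w*dt) + (v/w)*math.sin(w*dt)
        vn = -u*w*math.sin(w*dt) + v*math.cos(w*dt)
    else:                                        # z'' = +w^2 (zeta - zc): hyperbolic
        un = u*math.cosh(w*dt) + (v/w)*math.sinh(w*dt)
        vn = u*w*math.sinh(w*dt) + v*math.cosh(w*dt)
    return zc + un, vn

def rk4_step(z, v, dt):
    f = lambda z, v: (v, accel(z))
    k1 = f(z, v)
    k2 = f(z + 0.5*dt*k1[0], v + 0.5*dt*k1[1])
    k3 = f(z + 0.5*dt*k2[0], v + 0.5*dt*k2[1])
    k4 = f(z + dt*k3[0], v + dt*k3[1])
    return (z + dt*(k1[0] + 2*k2[0] + 2*k3[0] + k4[0])/6,
            v + dt*(k1[1] + 2*k2[1] + 2*k3[1] + k4[1])/6)

z1, v1 = 3.0, -1.0          # electron approaching the "specimen" at z = 0
z2, v2 = 3.0, -1.0
for _ in range(300):        # coarse analytic segments, dt = 0.02
    z1, v1 = analytic_segment(z1, v1, 0.02)
for _ in range(6000):       # fine RK4 reference, dt = 0.001
    z2, v2 = rk4_step(z2, v2, 0.001)
print(z1, z2)               # the electron is reflected and travels back up
```

Even with segments twenty times coarser than the reference steps, the analytic propagation tracks the reflected trajectory closely, which is the point of solving each linearized segment exactly.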
Results of such a computer-aided estimation of electron trajectories in an EMS with rd = 0.55 mm and zd = 2.2 mm are shown in Figs. 12 and 13. The normal incidence of the electron beam shown in Fig. 12 reflects the conventional imaging conditions of a MEM. Some changes caused by an oblique incidence of the electron beam are pointed out in Fig. 13. Comparing, e.g., electron trajectories with the same coordinates y of the points of reversal in the region of the positive y axis in Figs. 12 and 13 clearly reveals, by the enlarged opening of the parabolic parts of the trajectories, the increased electron velocity component in the y direction, which is useful for enhancing magnetic contrasts. Finally, the results of interpreting the estimated electron trajectories with an oblique incidence in the y direction, expressed by a small angle of incidence α
with respect to the electron-optical axis of the EMS (z axis) can be summarized by the relations

xs = M xr    ys = M yr + zr tan α    (34)

zr = [ xr² + (yr + f tan α)² ] / (f + 2 zd)    (35)
where, in contrast to the corresponding Eqs. (9), (12), (10), and (17), valid for normal incidence of the electron beam, Cartesian coordinates and related quantities are used. Though these results can also be derived on the basis of the simple model of the EMS described above, the computer-aided estimation of electron trajectories not only results in a more adequate focal length of the EMS but also makes it possible to check the validity of these relations, which is restricted to small values of the angle of incidence α.

B. Contrast-Producing Interaction with Distortion Fields
On the assumption of a real mirror specimen, the electron motion is not only determined by the electric field EEMS of the EMS; it is additionally influenced by an electrical distortion field E(x, y, z) originating from the specimen. With xs and ys denoting the coordinates of the electrons in the recording plane without perturbations, and with Δxs and Δys denoting the deflections caused by the interaction with the distortion field, the changed positions expressed by the coordinates x̄s and ȳs are given by

x̄s = xs + Δxs    ȳs = ys + Δys    (38)

The image contrast caused by the distortion field is expressed by the distribution of the electron current density je(x̄s, ȳs) in the viewing plane. According to Wiskott (1956), it is calculated on the basis of the law of conservation of the current flow,

je(x̄s, ȳs) dx̄s dȳs = je0 dxs dys    (39)

with je0 denoting the locally invariant electron current density without perturbations. Consequently, je(x̄s, ȳs) is expressed by

je(x̄s, ȳs) = je0 / [ (1 + ∂Δxs/∂xs)(1 + ∂Δys/∂ys) − (∂Δxs/∂ys)(∂Δys/∂xs) ]    (40)
In the one-dimensional case with, e.g., Δys = 0, Eq. (40) may be reduced to

je(x̄s) = je0 / (1 + dΔxs/dxs)    (41)
and for a distortion field of rotational symmetry, i.e., with the field E = E(r, z), where the Cartesian coordinates are replaced by the cylindrical coordinates r, φ, and z, the following expression is obtained:

je(r̄s) = je0 / [ (1 + dΔrs/drs)(1 + Δrs/rs) ]    (42)
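How the current density of Eq. (41) behaves as the deflection derivative approaches −1 can be seen with a toy one-dimensional deflection profile; the Gaussian shape below is purely illustrative and not taken from any of the distortion fields discussed in this chapter.

```python
import numpy as np

x = np.linspace(-5, 5, 20001)

def density_ratio(amplitude):
    # toy deflection profile Delta x_s(x_s); purely illustrative
    dx = -amplitude * x * np.exp(-x**2)
    slope = np.gradient(dx, x)           # d(Delta x_s)/d x_s
    return 1.0 / np.abs(1.0 + slope)     # one-dimensional relation, Eq. (41)

weak = density_ratio(0.3)    # slope stays above -1: moderate contrast
strong = density_ratio(3.0)  # slope crosses -1: near-caustic spikes
print(weak.max(), strong.max())
```

For the weak profile the contrast remains moderate; for the strong one the derivative crosses −1 and the computed density shows the caustic-like divergence described below, limited in practice only by the finite grid.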
Though these equations are very useful for contrast calculations of small perturbations, they also show directly the limit of their application: at points in the viewing plane where the derivative of Δxs in Eq. (41) or the derivative of Δrs in Eq. (42) reaches −1, the electron current density tends to infinity. In reality, of course, the electron current density is limited even at such caustic surfaces, which appear with very strong contrast.

V. QUANTITATIVE CONTRAST INTERPRETATION OF ONE-DIMENSIONAL ELECTRICAL DISTORTION FIELDS

As almost all quantitative investigations carried out with the MEM are
based on relations derived for the one-dimensional case, in the following we will concentrate on a distortion field E = E(x, z). The reader interested in the more general relations derived for the two-dimensional case, which are not used in practice, is referred to the publication by Sedov et al. (1968a) and the review by Lukjanov et al. (1973).

A. Contrast Estimation of One-Dimensional Electrical Distortion Fields
In order to estimate the contrast by calculating the electron current density according to Eq. (41), it is necessary to find an appropriate expression for computing the electron deflections Δxs(xr). The results derived above show that this corresponds to a calculation of the electron velocity changes Δvx caused by the interaction of the imaging electrons with the distortion field E(x, z). With the equation of motion given by

m r̈ = −e (EEMS + E(x, z))    (43)

with m and e denoting the electron mass and the absolute value of its charge, respectively, the change Δvx in the electron velocity with respect to the undisturbed electron motion is found by corresponding integration of Eq. (43)
along the electron trajectory with the time t as the integration variable:

Δvx = −(e/m) [ ∫ from t−∞ to tr Ex dt + ∫ from tr to t+∞ Ex dt ]    (44)
Ex(x, z) is the component of the distortion field in the x direction, and the two terms in Eq. (44) correspond to the trajectory branches of the electrons both approaching (from t = t−∞ to t = tr) and receding (from t = tr to t = t+∞). Next, according to Wiskott (1956) and Sedov et al. (1968a), a restriction to small perturbations follows by assuming that

|E(x, z)| << |EEMS| = V0/zd    (45)
taking into account that, owing to a strong attenuation of E(x, z) with increasing z, only a superposition of the distortion field with that part of the EMS field has to be considered which is uniform and expressed by the field strength V0/zd. On this assumption the electron motion in the z direction is determined only by the uniform field of the EMS. Thus, from the law of conservation of energy,

(m/2) ż² − e (V0/zd)(zr − z) = 0    (46)

the relation

dt = ∓ √(m zd/(2 e V0)) dz/√(zr − z)    (47)

can be derived, where the negative and the positive signs correspond to the approaching and the receding electron, respectively.
can be derived, where the negative and the positive signs correspond to the approaching and the receding electron, respectively. In conformity with Wiskott (1956) and Sedov et al. (1968a), relation (47) is used to replace the time dependence of Eq. (44) by a dependence of the coordinate z, resulting in
when, additionally, according to E ( x , z ) = -grad V ( x , z), the field component Ex(x,z ) is replaced by the corresponding derivative of the distortion potential V ( x , z). Choosing the upper integration limit as infinite is a mathematical simplification supported by the usually strong attenuation of Ex(x, z ) with increasing distance z from the specimen surface. Equation (48) implies a further approximation as the derivative of V ( x , z ) is calculated at the fixed coordinate x = x , of the reversal points of the electrons, thus considering an original electron motion only in the z direction expressed by
x = xr    y = yr    z = zr + (e V0/(2 m zd)) t²    (49)
and ignoring the actual parabolic form of the electron trajectories. Of course, Eq. (48) holds as a good approximation if the specimen region of interest is located in the neighborhood of the electron-optical axis, where the trajectory branches of the approaching and the receding electrons are nearly the same. Using Eq. (48) and the above-derived Eqs. (17) and (37), the resulting electron deflections Δxs in the viewing plane can be described by

Δxs(xr) = M (f + 2 zd) Δvx/v0    (50)

Using the relation

(d/dxs) Δxs(xs) = (1/M) (d/dxr) Δxs(xr)    (51)
allows the calculation of the electron density distribution according to Eq. (41). The expressions derived were first applied by Wiskott (1956). Introducing dimensionless quantities defined by

X = x/a    Z = z/a    Φ = V/(V0 − Vb)    (52)

where the parameter a is a characteristic length, he used a distortion potential Φ(X, Z) given by

Φ(X, Z) = Z − Z0 + Z/(X² + Z²)    (53)

in order to simulate a scratchlike topographic surface structure described by Φ(X, Z) = 0. Function (53) allows the analytical calculation of the integral of Eq. (48), yielding

ΔX(Xr) = −(π/√2) sin[(3/2) tan⁻¹(Xr/Z0)] / ( Z0^(3/2) [1 + (Xr/Z0)²]^(3/4) )    (54)
Thus both the electron trajectories and the electron current density distribution can be computed directly. Wiskott (1956) also reported a more general approach by expressing the surface field in terms of a Fourier integral. On the same basis, Schwartze (1964), Artamonov and Komolov (1966), and Barnett and Nixon (1967b) investigated the spatial frequency response of the shadow imaging mode of
the MEM. Results of these investigations are also reported in the review by Lukjanov et al. (1973). A further general approach to deriving expressions useful for estimating the electron deflections caused by an arbitrary one-dimensional distortion potential Vss(x) on the specimen surface was presented by Sedov et al. (1968a). On the assumptions that the potential V(x, z) satisfies Laplace's equation in the space above the specimen, vanishes sufficiently fast for z → ∞, and satisfies the boundary condition V(x, z = 0) = Vss(x) on the specimen surface, the solution of this Dirichlet boundary problem can be given by

V(x, z) = (z/π) ∫ from −∞ to +∞ Vss(ξ) / ((x − ξ)² + z²) dξ    (55)
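Equation (55) can be checked numerically: for the boundary value Vss(x) = tan⁻¹(x/a) the bounded harmonic extension into z > 0 is known in closed form, tan⁻¹(x/(z + a)). The parameter a and the grid below are arbitrary test choices.

```python
import numpy as np

def poisson_halfplane(vss, x, z, xi_max=2000.0, n=400001):
    """V(x, z) = (z/pi) * integral of vss(xi)/((x - xi)**2 + z**2) dxi, Eq. (55)."""
    xi = np.linspace(-xi_max, xi_max, n)
    y = vss(xi) / ((x - xi)**2 + z**2)
    return (z / np.pi) * float(np.sum(y[1:] + y[:-1]) * (xi[1] - xi[0]) / 2.0)

a = 1.0
vss = lambda xi: np.arctan(xi / a)      # surface potential, test profile

pairs = []
for x, z in [(0.5, 0.3), (-1.0, 2.0), (2.0, 1.0)]:
    pairs.append((poisson_halfplane(vss, x, z), np.arctan(x / (z + a))))
print(pairs)
```

The wide integration window is needed because the test boundary value does not decay; for surface potentials that vanish at infinity a much shorter window suffices.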
Inserting this expression for V(x, z) into Eq. (50) finally yields the integral relation (56) necessary to calculate the electron deflections Δxs(xr). Sedov et al. (1968a), e.g., applied Eq. (56) to calculate electron deflections and the image contrast of a p-n junction with a surface potential Vss(x) given by Eq. (57). Using f = 4zd and introducing magnification-independent deflections Δx defined by Δx = Δxs/M, they obtained the closed expressions (58) and (59) with the abbreviations (60).
B. Reconstruction of One-Dimensional Electrical Distortion Fields

Though the preceding discussion concerned the simulation of image contrast resulting from the electron interaction with the distortion field, the direct reconstruction of the distortion field by a quantitative interpretation of the measured contrast or electron deflections is also of interest. Considering Eq. (56) as an integral equation of the Fredholm type, which has to be solved for the derivative of Vss(x), for zr = 0 Sedov et al. (1968a) found a solution on the basis of Fourier transformations, given by

dVss(x)/dx = [ V0 / (√π (f + 2 zd) M √zd) ] ∫ from −∞ to +∞ [ Δxs(x) − Δxs(x − ξ) ] / |ξ|^(3/2) dξ    (61)
Using a further Fourier transformation and the convolution theorem, Kachniarz and Zarembinski (1980a) transformed Eq. (61) into

dVss(x)/dx = [ V0 / (√π (f + 2 zd) M √zd) ] ∫ from −∞ to +∞ [ dΔxs(ξ)/dξ ] sign(x − ξ) / √|x − ξ| dξ    (62)
According to Eq. (41), the derivative of Δxs can be expressed by

dΔxs/dxs = (je0 − je)/je    (63)
Kachniarz and Zarembinski (1981a) presented a method of quantitatively evaluating dVss/dx according to Eq. (62), with the derivative of the electron deflection being determined by simultaneously measuring the local total electron current density je and the local change of electron current density Δje. In order to estimate the field of a surface p-n junction, they improved their previously described measurement method (Kachniarz and Zarembinski, 1979) by a two-channel measurement system employing pulsed specimen polarization (channel 1) and pulsed beam modulation (channel 2). The specimen was continuously displaced perpendicular to the electron-optical axis during the measurement at a fixed point in the center of the viewing screen, where the electrons to be detected could pass the screen through a small hole. As the measurement was restricted to a finite measurement interval, the resulting asymptotic parts of the derivative of Δxs have to be described by appropriate expressions. Taking into account that the attenuation of these asymptotic parts by the denominator of the integrand is much stronger in Eq. (61) than in Eq. (62), it is obvious that errors introduced in these parts will be of far greater influence if dVss/dx is determined by applying Eq. (62). Thus Barthel (1983), who presented a far more general approach to estimating electrical and magnetic distortion fields directly, covering both the one-dimensional case and the case of rotational symmetry, noted that the advantage of applying Eq. (62) is often compensated by this disadvantage. Moreover, the results of the computer-aided quantitative contrast interpretation of numerous practical examples given by Barthel (1983) revealed the difficulties of a direct estimation of distortion fields. As reported by Sedov (1968) and also noted in the review by Lukjanov et al. (1973) for the case where the dependence on zr in Eq. (56) is not ignored, Barthel (1983) pointed out that the main problem of this estimation consists in the "mathematically incorrect" (ill-posed) task, which results in generally unstable solutions. In order to overcome this problem, methods of regularization, published, e.g., by Phillips (1962) and Tichonov (1963a, 1963b), have been applied.
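The instability and its regularization can be reproduced with a small model problem: a first-kind integral equation with a Gaussian smoothing kernel, inverted naively and with a Tikhonov penalty in the spirit of Phillips (1962) and Tichonov (1963a, 1963b). All numbers below (kernel width, noise level, regularization parameter) are illustrative choices, not values from the MEM literature.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100
x = np.linspace(0.0, 1.0, n)

# forward operator: smoothing by a narrow Gaussian kernel (ill-posed to invert)
K = np.exp(-((x[:, None] - x[None, :])**2) / (2 * 0.05**2))
K /= K.sum(axis=1, keepdims=True)

f_true = np.sin(2 * np.pi * x)
g = K @ f_true + 1e-3 * rng.standard_normal(n)   # data with small noise

f_naive = np.linalg.solve(K, g)                  # noise is amplified enormously
lam = 1e-3                                       # Tikhonov parameter (illustrative)
f_tik = np.linalg.solve(K.T @ K + lam * np.eye(n), K.T @ g)

err_naive = np.linalg.norm(f_naive - f_true) / np.linalg.norm(f_true)
err_tik = np.linalg.norm(f_tik - f_true) / np.linalg.norm(f_true)
print(err_naive, err_tik)
```

A noise level of 0.1% is enough to make the naive inversion useless, while the regularized solution recovers the smooth source to within a few percent, which is exactly the behavior reported for the deconvolution formulas above.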
VI. QUANTITATIVE CONTRAST INTERPRETATION OF ONE-DIMENSIONAL MAGNETIC DISTORTION FIELDS

A first quantitative estimation of magnetic contrast was reported by Kranz and Bialas (1961). They derived an equation by which the image contrast is expressed as a function of the gradient of the magnetic induction over the specimen surface and the radius vector in the image plane. This equation made it possible to explain experimental results in a rather "semiquantitative" way. The problem of image contrast formation of magnetic microfields was discussed in more detail by Sedov et al. (1968b), followed by Petrov et al. (1970a), Gvosdover (1972b), Lukjanov et al. (1973), and Gvosdover and Zeldovich (1973). The following consideration will again be restricted to the one-dimensional case because of its special importance for practical applications.
A. Contrast Estimation of One-Dimensional Magnetic Distortion Fields
Omitting the dependence of the magnetic field on the coordinate y, the components of the fringing field are expressed by

Hx = Hx(x, z)    Hy = 0    Hz = Hz(x, z)    (64)

Thus, with the contrast-producing Lorentz force F = −e μ0 v × H, the equations of motion are given by
ẍ = −(e μ0/m) ẏ Hz
ÿ = −(e μ0/m) (ż Hx − ẋ Hz)    (65)

where μ0 denotes the vacuum permeability. The changes Δvx and Δvy of the electron velocities are calculated by time integration of ẍ and ÿ of Eq. (65), resulting in

Δvx = −(e μ0/m) ∫ from −∞ to +∞ ẏ Hz dt    (66)

and

Δvy = −(e μ0/m) ∫ from −∞ to +∞ (ż Hx − ẋ Hz) dt    (67)
If it is further assumed that, according to

|μ0 ż Hx| << |EEMS| = V0/zd    (68)

the perturbation is small enough for the integration to be carried out along the undisturbed parabolic electron trajectory expressed by

x = xr + vx t    y = yr + vy t    z = zr + (e V0/(2 m zd)) t²    (69)
Eqs. (66) and (67) can be changed into

Δvx = −(e μ0/m) vy ∫ from −∞ to +∞ Hz(x = xr + vx t, z = zr + (e V0/(2 m zd)) t²) dt    (70)

and

Δvy = −(e μ0/m) { ∫ from −∞ to +∞ ż Hx(x = xr + vx t, z = zr + (e V0/(2 m zd)) t²) dt
− vx ∫ from −∞ to +∞ Hz(x = xr + vx t, z = zr + (e V0/(2 m zd)) t²) dt }    (71)

It is important that, as pointed out by Gvosdover (1972b), the first integral expression in Eq. (71), with the transversal fringing field component, must not
be neglected if correct results for Δvy are to be found. This will be explained by the example of a periodic magnetic fringing field with field components given by

Hx = H0 exp(−2kπ z/λ) sin(2kπ x/λ)    Hz = H0 exp(−2kπ z/λ) cos(2kπ x/λ)    (72)

with k and H0 as an integer and a constant, respectively, and λ denoting the periodic length. The first integral of Eq. (71), in the form

∫ from −∞ to +∞ ż Hx dt = ∫ from −∞ to +∞ (e V0/(m zd)) t H0 exp[−(2kπ/λ)(zr + (e V0/(2 m zd)) t²)] sin[(2kπ/λ)(xr + vx t)] dt    (73)

may, by partial integration, easily be changed into

∫ from −∞ to +∞ ż Hx dt = [ −(λ/(2kπ)) H0 exp[−(2kπ/λ)(zr + (e V0/(2 m zd)) t²)] sin[(2kπ/λ)(xr + vx t)] ] from t = −∞ to t = +∞
+ vx ∫ from −∞ to +∞ H0 exp[−(2kπ/λ)(zr + (e V0/(2 m zd)) t²)] cos[(2kπ/λ)(xr + vx t)] dt    (74)
As the term in large brackets vanishes and the integrand of Eq. (74) is the very expression of the normal field component Hz,

∫ from −∞ to +∞ ż Hx dt = vx ∫ from −∞ to +∞ Hz dt    (75)
is finally obtained, which, applied to Eq. (71), shows the validity of Δvy = 0, i.e., that there is no velocity change in the y direction. Thus, the total interaction of the electrons with the fringing field is described by Eq. (70). It is worth noticing that this again is a more general result, valid for all periodic fringing field functions that can be approximated by an expansion into a Fourier series. In order to calculate the integral expression of Eq. (70), it is advantageous to replace the variable of integration t by the dimensionless quantity τ according to the relation

τ = (v0/(2 zd)) t    (76)

Taking now Eqs. (34)-(37) and Eq. (51) into account, the displacement Δxs in the viewing plane and the normalized electron current density je/je0 are calculated by

Δxs(xr) = −(μ0 vy zd M (f + 2 zd)/V0) ∫ from −∞ to +∞ Hz(x(τ), z(τ)) dτ    (77)

and

je/je0 = [ 1 + (1/M) dΔxs(xr)/dxr ]⁻¹    (78)

respectively, with x(τ) and z(τ) being expressed by

x(τ) = xr + (2 zd vx/v0) τ    (79)

and

z(τ) = zr + zd τ²    (80)
respectively. On the basis of corresponding results for a magnetic microfield with k = 1 and λ as the lateral period, and for the field components given by Eq. (72), Petrov et al. (1970b) and Gvosdover and Petrov (1972) found the explicit equation (81) for the caustic lines in the image plane, with A given by Eq. (82) and n an integer. This equation was used to explain the wedge-shaped image structures observed by Spivak et al. (1963c). Further helpful results can also be gained if a general magnetic fringing field of one-dimensional periodicity is taken into consideration. Approximating the field by an expansion into a Fourier series (83), with H0 as a constant and λ = 2d describing the periodic length, enables one to solve the integral expressions in Eqs. (77) and (78) analytically (see, e.g., Ryshik and Gradstein, 1963), which finally results in the closed expressions (84) and (85) for Δxs and je/je0, with the abbreviations (86) and (87).
While Eqs. (77) and (78) describe the electron interaction with the magnetic fringing field on the basis of a time dependence of the expressions derived, it is also possible to find equations with a dependence on the coordinate z. According to Sedov et al. (1968b), who also considered two-dimensional magnetic distortion fields, the expressions are obtained if, in the corresponding Eq. (56) valid for an electrical distortion field, the electrical field component is replaced by the magnetic one, which, based on Eqs. (47), (36), and (37), finally results in Eq. (88).
With Eq. (88) the electron current density distribution can be determined by means of Eq. (41) if additionally relation (51) is taken into account.
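The cancellation expressed by Eqs. (73)-(75) is easy to confirm numerically for one Fourier component of the fringing field evaluated along the undisturbed parabolic trajectory; all numbers below are arbitrary test values in dimensionless units.

```python
import numpy as np

q = 2 * np.pi / 4.0          # 2*k*pi/lambda for k = 1, lambda = 4
c = 1.0                      # z(t) = zr + c*t**2, parabolic trajectory
vx, xr, zr = 0.3, 0.7, 0.5   # arbitrary test values

t = np.linspace(-8.0, 8.0, 400001)
x = xr + vx * t
z = zr + c * t**2
zdot = 2 * c * t

Hx = np.exp(-q * z) * np.sin(q * x)   # periodic field components, H0 = 1
Hz = np.exp(-q * z) * np.cos(q * x)

trap = lambda y: float(np.sum(y[1:] + y[:-1]) * (t[1] - t[0]) / 2.0)
lhs = trap(zdot * Hx)        # integral of zdot*Hx dt  (cf. Eq. (73))
rhs = vx * trap(Hz)          # vx times integral of Hz dt  (cf. Eq. (75))
print(lhs, rhs)
```

The two values agree to machine-level quadrature accuracy, confirming that the transversal field term in Eq. (71) exactly cancels the second integral and Δvy vanishes.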
B. Reconstruction of One-Dimensional Magnetic Distortion Fields

As described for an electrical distortion field in Section V.B, expressions can be derived for reconstructing the distortion field directly, by a quantitative interpretation of the measured contrast or electron deflections, if Eq. (88) is interpreted as an integral equation of the Fredholm type to find the function Hsx(x) = Hx(x, z = 0). According to Sedov et al. (1968b), under the condition zr = 0 the solution has the form

Hsx(x) ∝ ∫ from −∞ to +∞ [ Δxs(x) − Δxs(x − ξ) ] / ξ² dξ    (89)

Kachniarz and Zarembinski (1980b) and Kachniarz et al. (1982) preferred an expression, Eq. (90),
which was derived from Eq. (89) by applying Fourier transforms and the convolution theorem. As a proper estimation of the prefactor in Eq. (90), Kachniarz and Zarembinski (1981b) suggested expressing it in terms of the results of a corresponding estimation of an electrical distortion field. If Eq. (89) or (90) is applied, of course, the difficulties in directly estimating distortion fields resulting from the ill-posed ("mathematically incorrect") task should be taken into account, as was pointed out in Section V.B. Nevertheless, Rau et al. (1970), e.g., made quantitative investigations of the fringing field over a magnetic head, estimating the magnetic field distribution with an accuracy on the order of 15%.
VII. QUANTITATIVE MIRROR ELECTRON MICROSCOPY, EXAMPLE 1: CONTRAST SIMULATION OF MAGNETIC FRINGING FIELDS OF RECORDED PERIODIC FLUX REVERSALS

This example will demonstrate the application of Eqs. (83)-(87) derived in the previous section. It concerns the simulation of the observed contrast caused by the fringing field of periodic magnetic flux reversals recorded with a periodic length λ = 2d in 8-μm-wide tracks on a hard disk³ with a CoPtCr

³ The sample with tracks of different storage densities is part of an MFM (CG # 58-31) disk kindly supplied by H. Poppa and S. Lambert of the IBM Almaden Research Center, San Jose, California.
FIGURE 14. Transition geometry and stray field configuration of periodic flux reversals recorded longitudinally in magnetic media.
magnetic layer of a thickness of s = 25 nm. According to Rugar et al. (1990), who investigated corresponding samples by means of magnetic force microscopy, the fringing field can be expressed by

Hx(x, z) = 4 Mr Σ from n = 1−n0 to n0 of (−1)ⁿ tan⁻¹[ (x − nd) s / ((x − nd)² + (z + a)(z + s + a)) ]

Hz(x, z) = 2 Mr Σ from n = 1−n0 to n0 of (−1)ⁿ ln[ ((x − nd)² + (z + s + a)²) / ((x − nd)² + (z + a)²) ]    (91)
where Mr denotes the remanent magnetization of the magnetic layer and the parameter a describes the transition width between the flux reversals. The number of flux reversals taken into account is 2n0. The fringing field distribution is illustrated in Fig. 14, with arrows expressing both the direction and the magnitude of the local field strength. Results of the investigation by MEM are presented in micrographs (a) and (b) of Fig. 15, showing a specimen region at different biases Vb with three horizontal tracks of flux reversals marked by the capitals A, B, and C. The storage densities used, of 1000 fr (flux reversals)/mm (A), 500 fr/mm (B), and 200 fr/mm (C), correspond to a length d per bit of 1 μm, 2 μm, and 5 μm, respectively. The micrographs, recorded by means of a CCD camera, were corrected by image processing so that in the marked region of track B the mean electron current density jec corresponds to 1, whereas 0 corresponds to a vanishing electron current density (electron beam switched off). Line profiles representing the distribution of the corrected electron current density jec of this region are shown in the lower right of the micrographs. Under the imaging conditions of α = 1.2°, V0 = 10 kV, yr = 10 μm, and with EMS parameters rd = 0.55 mm, zd = 2.2 mm, and
FIGURE 15. Comparison of results obtained experimentally [micrographs (a) and (b) with line profiles of marked regions] and by contrast simulation [(c) and (d)], and representation of the magnetic fringing field components evaluated [(e) and (f)].
f = 8 mm, the contrast simulations shown in (c) and (d) of Fig. 15 yielded Mr = 50 emu/cm³, a = 0.7 μm, and n0 = 5. The corresponding fringing field components Hx(x, z) and Hz(x, z), estimated by Eq. (91) and thereafter expressed by an expansion into a Fourier series of 16 terms, are each illustrated by plots for z = 0 and z = 1 μm in (e) and (f), respectively. Based on micrograph (b) of Fig. 15, with d = 2 μm replaced by d = 5 μm, the electron current density distribution and the fringing field components have accordingly been estimated to simulate the image contrast of track C. The results are shown in Figs. 16b-16d, whereas the line profile of the region marked in Fig. 16a presents the results found experimentally. In order to estimate the electron current density distributions in a more accurate way, the energy distribution of the imaging electrons has been taken into account. This is worth mentioning, as it has not been usual in quantitative mirror electron microscopy because
FIGURE 16. Comparison of results obtained experimentally [micrograph (a) with a line profile of the region marked] and by contrast simulation (b), and representation of the magnetic fringing field components evaluated [(c) and (d)].
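Any implementation of the fringing field model of Eq. (91) can be safeguarded by checking that the computed field is divergence- and curl-free in the source-free region above the medium. The sketch below uses arbitrary parameter values (Mr = 1, d = 2, a = 0.7, s = 0.025 in consistent length units) and the combined-arctangent form of the transition field as written in the code, which should be read as one plausible implementation, not as the authoritative formula.

```python
import numpy as np

Mr, d, a, s, n0 = 1.0, 2.0, 0.7, 0.025, 5   # arbitrary test values
ns = np.arange(1 - n0, n0 + 1)              # transition indices 1-n0 .. n0

def Hx(x, z):
    u = x - ns * d
    return 4 * Mr * float(np.sum((-1.0)**ns *
        np.arctan(u * s / (u**2 + (z + a) * (z + s + a)))))

def Hz(x, z):
    u = x - ns * d
    return 2 * Mr * float(np.sum((-1.0)**ns *
        np.log((u**2 + (z + s + a)**2) / (u**2 + (z + a)**2))))

# finite-difference check: div H = 0 and curl H = 0 above the medium
h = 1e-5
checks = []
for x, z in [(0.3, 0.1), (1.1, 0.5), (-0.7, 1.0)]:
    div = (Hx(x+h, z) - Hx(x-h, z)) / (2*h) + (Hz(x, z+h) - Hz(x, z-h)) / (2*h)
    curl = (Hx(x, z+h) - Hx(x, z-h)) / (2*h) - (Hz(x+h, z) - Hz(x-h, z)) / (2*h)
    checks.append((div, curl))
print(max(abs(v) for pair in checks for v in pair))
```

Both residuals vanish to finite-difference accuracy, which is a strong consistency check: the two components must be harmonic conjugates above the specimen, whatever the transition parameters.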
of additional complications, although the importance of the electron energy distribution in mirror electron microscopy, especially with respect to resolution, has been pointed out in many publications, implying also experimental attempts at improved mirror electron microscopy employing monochromatic electrons (Kuhlmann et al., 1978). An indirect measurement procedure has been applied to determine the energy distribution of the electrons and to understand and describe the contrast changes observed on varying the specimen bias. A CCD camera was used to record changes in the electron current density with varied specimen bias, on the one hand at a flat specimen site without disturbances, and on the other within the central region of the image structure caused, e.g., by a hillock. The electrical analog of the latter will be described in detail in the next section. In the mirror regime of the imaging electrons, where these image structures appear as round black areas with bright fringes, the electron current density recorded in their center is zero. If, however, because of an electron impact, secondary electrons are emitted from the specimen surface, they will also brighten this dark image region. Thus, comparing the electron current densities measured enables a discrimination between the contribution of secondary electrons je0,se and that of electrons of the mirror regime je0,me to the total electron current density je0 measured at the undisturbed specimen site. The results are shown by the normalized plots in Fig. 17b, where V̄b = 0 corresponds to that specimen bias Vb selected at the MEM at which all electrons impinge on the specimen surface. Applying the normalized distribution function g(Ve) of thermionic electron emission (see, e.g., Hawkes and Kasper, 1989) in the form

g(Ve) = (Ve/VT²) exp(−Ve/VT)    (92)
FIGURE 17. Plots of the electron energy distribution (a) and of the total electron current density je0 and its contributions je0,me and je0,se with varying specimen bias (b).
where V , denotes the electron emission energy and V, is not directly determined by the filament temperature but is used only as a parameter describing the width of the electron energy distribution, the plots of Fig. 17b can be calculated according to
r:
j e o C E ) = jeo,meCE:a>+ j e o , A % )
(95)
for VT = 0.5 V. The corresponding electron energy distribution is shown in Fig. 17a. The results generally agree with those of Hopp (1960), who measured by MEM the energy distribution in the transition region from the mirror regime to secondary electron emission. Equations (93)-(95) can also be used to explain the features of the image structure caused by an electron impact which, e.g., is shown in Fig. 7, consisting of a bright spot with a diffuse dark fringe. Figure 17b reveals that the darkest region corresponds to V̄b = 0. Furthermore, this correspondence enables one to find experimentally the shift between V̄b and the specimen bias Vb selected. Thus, e.g., in the region of the track marked by B in the micrographs of Fig. 15, V̄b = 0 corresponds to Vb = 5.5 V. For the energy distribution of the electrons according to Eqs. (93)-(95), the normalized electron current density has to be calculated by Eq. (96),
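Although Eq. (96) itself is not reproduced here, the bookkeeping behind it can be sketched: assuming for g(Ve) the normalized Maxwellian form g(Ve) = (Ve/VT²) exp(−Ve/VT) of thermionic emission, the fraction of electrons reflected in front of a retarding barrier V̄b follows by integration, and the numerical quadrature can be checked against the closed-form integral. VT = 0.5 V is the illustrative value used above.

```python
import numpy as np

VT = 0.5   # width parameter of the energy distribution, in volts

def g(ve):                    # normalized thermionic emission distribution
    return (ve / VT**2) * np.exp(-ve / VT)

def mirrored_fraction(vb, n=20001):
    """Fraction of emitted electrons whose emission energy stays below
    the retarding barrier vb; these are reflected in the mirror regime."""
    ve = np.linspace(0.0, vb, n)
    y = g(ve)
    return float(np.sum(y[1:] + y[:-1]) * (ve[1] - ve[0]) / 2.0)

results = []
for vb in (0.25, 0.5, 1.0, 2.0):
    closed = 1.0 - (1.0 + vb / VT) * np.exp(-vb / VT)   # analytic integral
    results.append((mirrored_fraction(vb), closed))
print(results)
```

The mirrored fraction rises smoothly from 0 toward 1 with increasing barrier height, which is the qualitative behavior of the je0,me curve in Fig. 17b.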
with the quantities and abbreviations used in the integrand of the numerator being the same as in Eq. (85).

VIII. QUANTITATIVE MIRROR ELECTRON MICROSCOPY, EXAMPLE 2: INVESTIGATION OF ELECTRICAL MICROFIELDS OF ROTATIONAL SYMMETRY AND EVALUATION OF CURRENT FILAMENTS IN METAL-INSULATOR-SEMICONDUCTOR SAMPLES

A. Motivation and Aim of the Investigation
Metal-insulator-metal (MIM) sandwich systems consisting of a thin film arrangement on a bulk substrate as illustrated on the left of Fig. 18 show
MIRROR ELECTRON MICROSCOPY
FIGURE 18. General arrangement of MIM and MIS sandwich systems.
interesting conduction phenomena if the applied voltage exceeds a threshold value and the thin film devices undergo electroforming processes, as reviewed, e.g., by Pagnia and Sotnik (1988). A general explanation of the observed phenomena, including switching and memory effects, non-Ohmic current-voltage characteristics with a voltage-controlled negative resistance branch, electron emission, and electroluminescence, failed as long as the interpretation was restricted to integral conduction mechanisms only. Based on a polyfilamentary conduction, Dearnaley et al. (1970) presented a widely accepted global model, which was later modified, e.g., by Sutherland (1971), Emmer (1974), Gould (1983), and Ray and Hogarth (1985, 1986). Some experimental results supporting a filamentary conduction were achieved by electron microscope investigations. Local changes of the top electrode of electroformed samples, partly acting as centers of electron emission and electroluminescence, were described by Emmer (1974), and field-induced metal diffusion in the region of observed surface defects was detected by microanalytical investigations (Rahman, 1982). In contrast to these results, which are only indicative of a filamentary conduction, a direct electron microscope investigation of current filaments is possible if the metallic top electrode of the sandwich system is replaced by a semiconducting one, as shown on the right of Fig. 18. Local electrical surface microfields are caused by the current filaments in such a metal-insulator-semiconductor (MIS) sandwich system. These surface microfields can be detected by the electrical field contrast of a MEM. Figure 19 shows the typical image structures observed.
Compared with the topographical image contrast observed at a constant surface potential in micrograph (a), applying voltage V_a to the specimen between the metal contacts of the MIS sandwich system results in an additional image structure, which appears as a diffuse bright spot if the metal electrode is positively biased (b), and as a round black area with a fine bright fringe if the polarity of the voltage applied is changed (c). Even a contrast interpretation in a “semiquantitative” way (Godehardt et al., 1975; Heydenreich et al., 1978) showed that these additional image structures result from the interaction of
FIGURE 19. Micrographs of a MIS sample region with a conducting channel before (a) and after applying voltages of V_a = +10 V (b) and V_a = −10 V (c). (Godehardt, R., and Heydenreich, J. (1992). Int. J. Electronics 73, 1093. Courtesy of the publisher, Taylor & Francis.)
the imaging electrons with an electrical microfield of nearly rotational symmetry. The microfield and the respective local variations of the surface potential arise according to the distribution of a longitudinal current flow within the semiconducting top electrode, originating from a pointlike current source and supplied by a current filament along a conduction channel at a weak point of the insulating layer. Both the “semiquantitative” contrast interpretation mentioned and the direct field reconstruction carried out on the basis of a measured electron density distribution in the viewing plane (Heydenreich and Barthel, 1974; Barthel, 1983) were very useful for comprehensively interpreting the image structures detected, but they failed in the evaluation of the current filament. The phenomena observed are described more advantageously in the following sections. Starting from a model characterized by an analytically expressed current density distribution within the current filament, both the surface potential and the potential distribution of the contrast-forming electrical microfield above the specimen surface are calculated and used for computer simulation of the image contrast employing a large number of electron trajectories. Exploiting the results of the quantitative contrast interpretation on the basis of computer simulations of the round black image structures with bright fringes enables the evaluation of the filament current (Godehardt, 1989; Godehardt and Heydenreich, 1991). Moreover, scanning electron microscope (SEM) investigations applying the voltage contrast mode to detect the action of a current filament showed that the extension of the defect regions typically is of the order of 1 μm (Godehardt and Füting, 1990; Godehardt and Heydenreich, 1992).
B. Relations Between Current Flow and Surface Potential

1. General Relations
Figure 20 shows the configuration of samples prepared by vacuum evaporation of thin film systems onto glass substrates of 16 mm × 26 mm. Rectangular black areas in the top view on the left of Fig. 20, denoted by C1, C2, C3, and C4, mark Cr-Ag contact regions of good adhesivity used for applying voltages, while the black stripe denoted by ME is the metal electrode of the MIS sandwich system. The region between the metal contacts C3 and C4 of the sample is covered with the evaporated semiconducting material and can be used to measure the sheet resistance R_s of the semiconducting top electrode. The cross section along line AA' is drawn schematically on the right of Fig. 20. It shows that the area between the metal contacts C2 and C3 is the area of interest for MEM investigations, where the MIS sandwich system itself is restricted to the range of the metal electrode ME. While the insulating layer IL should only entirely cover the metal electrode ME, the semiconducting top electrode SE also has to overlap its metal contacts C2 and C3. The potential of these contacts, which are externally connected during the investigation, is used as the reference potential V = 0 of the MIS sandwich sample. Applying a specimen voltage V_a, which corresponds to a potential V = V_a of the metal electrode ME, results in a small current flow through the MIS sandwich system.

FIGURE 20. Configuration of the samples investigated. (Godehardt, R., and Heydenreich, J. (1991). With permission.)

Relations between the current flow and the surface potential are revealed by the current balance of a cuboid volume element in the semiconducting top electrode at an arbitrary point P(x, y, z = 0) at the specimen surface. The extensions of the volume element considered in the x and y directions are Δx and Δy, respectively, and its height h corresponds to the thickness of the semiconducting layer. With j_i(x, y) denoting the absolute value of the local current density of the current, which normally penetrates the interface between the insulating layer and the top electrode, and with j_se,x(x, y) and j_se,y(x, y) marking the components of the current density j_se(x, y) of the longitudinal current flow within the semiconducting layer, the current balance of the volume element is expressed by

j_i(x, y) Δx Δy = [j_se,x(x + Δx, y) − j_se,x(x, y)] h Δy + [j_se,y(x, y + Δy) − j_se,y(x, y)] h Δx    (97)
Dividing Eq. (97) by Δx and Δy and replacing the resulting difference quotients by the corresponding derivatives with both Δx and Δy tending to zero yields

j_i(x, y) = h div j_se(x, y)    (98)
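The passage from the discrete current balance of Eq. (97) to the divergence form of Eq. (98) can be checked numerically. The sketch below is not from the original; the test field j_se, the thickness h, and the evaluation point are arbitrary choices. It compares the net lateral outflow of a small cuboid, per unit footprint, with h·div j_se:

```python
import math

h = 2.0e-7  # film thickness (arbitrary value for the check)

def jse(x, y):
    # hypothetical smooth longitudinal current density field (jx, jy)
    return math.sin(x) * math.cos(y), x * y * y

def balance(x, y, d=1e-5):
    # Eq. (97) divided by dx*dy: net lateral outflow per unit footprint
    jx1, _ = jse(x + d, y)
    jx0, jy0 = jse(x, y)
    _, jy1 = jse(x, y + d)
    return ((jx1 - jx0) / d + (jy1 - jy0) / d) * h

def h_div_jse(x, y):
    # Eq. (98): h * div j_se, evaluated analytically for the test field
    return h * (math.cos(x) * math.cos(y) + 2.0 * x * y)

print(balance(0.3, 0.7), h_div_jse(0.3, 0.7))
```

As Δx and Δy shrink, the difference-quotient form converges to h·div j_se, which is exactly the content of Eq. (98).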
with div denoting the two-dimensional divergence. Taking into account the experimentally proved Ohmic relation of the longitudinal resistivity ρ of the semiconducting layer,

j_se(x, y) = (1/ρ) E_se(x, y)    (99)
where the longitudinal electrical field strength E_se(x, y) is equal to the corresponding field strength E_ss(x, y) at the specimen surface, so that it can be expressed by the gradient of the surface potential V_ss(x, y) according to

E_se(x, y) = E_ss(x, y) = −grad V_ss(x, y)    (100)
Eq. (98) changes to

j_i(x, y) = −(h/ρ) Δ V_ss(x, y)    (101)
with Δ denoting the two-dimensional Laplace operator. For thin films the ratio ρ/h is exactly expressed by the sheet resistance R_s of the layer, with the additional advantage that only the true effective film thickness responsible for the longitudinal electrical conduction is taken into account. Thus, finally
Eq. (98) becomes

j_i(x, y) = −(1/R_s) Δ V_ss(x, y)    (102)

In the one-dimensional case with V_ss = V_ss(x), Eq. (102) is reduced to

j_i(x) = −(1/R_s) d²V_ss(x)/dx²    (103)

whereas in the other interesting special case, where in the polar coordinates r, φ the surface potential depends solely on r, i.e., V_ss = V_ss(r), the following differential relation is valid:

j_i(r) = −(1/R_s) [d²V_ss(r)/dr² + (1/r) dV_ss(r)/dr]    (104)
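The radial operator of Eq. (104) has the convenient property of turning a J_0-shaped surface potential into a current density of the same Bessel shape, which is exactly what the filament model of the next subsection exploits. A quick numerical check (a sketch, not part of the original; R_s and the trial profile are arbitrary, with r in micrometers):

```python
import math

def j0(x, steps=2000):
    # J0 via its integral representation, (1/pi) * int_0^pi cos(x sin t) dt
    h = math.pi / steps
    s = 0.5 * (1.0 + math.cos(x * math.sin(math.pi)))
    for k in range(1, steps):
        s += math.cos(x * math.sin(k * h))
    return s * h / math.pi

b1 = 2.404826   # first zero of J0
rf = 0.5        # filament radius in micrometers (value used in the text)
Rs = 1.0        # sheet resistance, arbitrary units

def v_ss(r):
    # trial surface potential proportional to J0(b1*r/rf)
    return j0(b1 * r / rf)

def j_i(r, dr=1e-3):
    # Eq. (104): j_i(r) = -(1/Rs) * [V'' + V'/r], by central differences
    v2 = (v_ss(r + dr) - 2.0 * v_ss(r) + v_ss(r - dr)) / dr**2
    v1 = (v_ss(r + dr) - v_ss(r - dr)) / (2.0 * dr)
    return -(v2 + v1 / r) / Rs

r = 0.2
print(j_i(r), (b1 / rf) ** 2 * v_ss(r) / Rs)  # the two should agree
```

The agreement follows from the Bessel equation: the radial Laplacian of J_0(kr) is −k² J_0(kr), so a J_0 potential profile implies a J_0 current density profile and vice versa.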
2. Relations Based on a Model of Filamentary Conduction
In order to derive relations between the current supplied by a current filament and the corresponding variations of the surface potential, it is advantageous to neglect the electrical conduction through the homogeneous part of the insulating layer, and for simplicity to assume that only one of the conduction channels formed at the weak points of the insulating layer is active; thus all the current flow within the semiconducting top electrode originates from the current filament considered. The calculation of the surface potential implies two different mathematical tasks. The first refers to the small surface region R1, which corresponds to the cross section of the conduction channel at the interface between the insulating layer and the semiconducting one. R1 is assumed to be a circular area of radius r_f, of the order of 0.5 μm. Furthermore, it is assumed that the current density j_i(x, y) will depend on r only if the polar coordinates r, φ with their origin in the center of R1 are used. It is most advantageous if the current density distribution of rotational symmetry is expressed by

j_i(r) = j_fc J_0(b_1 r/r_f)    0 ≤ r ≤ r_f    (105)

where j_fc is the current density in the center of region R1 at r = 0, J_0(b_1 r/r_f) denotes the Bessel function of zero order of the real argument (b_1 r/r_f), and where b_1 is the first zero of J_0,

J_0(b_1) = 0,  J_0(0) = 1,  b_1 ≈ 2.4    (106)

Equation (105) expresses a current filament model with a hot zone of maximum current density j_fc in the center for r = 0, followed by a continuous
FIGURE 21. Normalized distributions of the current density j_i(r) within the current filament and the surface potential V_ss(r) in the vicinity of the conducting channel. (Godehardt, R., and Heydenreich, J. (1991). With permission.)
decrease of the current density down to zero at the margin of the filament for r = r_f (see also the normalized curve in Fig. 21). With the expression of Eq. (105), the solution of the differential equation (104) results in the function V_ss(r) of the surface potential given by

V_ss(r) = V_sm + (j_fc R_s r_f²/b_1²) J_0(b_1 r/r_f)    0 ≤ r ≤ r_f    (107)

with V_sm denoting the surface potential at the margin of region R1, i.e., for r = r_f. The second task is concerned with the rectangular region R2, which besides R1 includes the area of the semiconducting top electrode between contacts C2 and C3. In this region j_i(x, y) is zero; thus the surface potential distribution has to be a solution of the Laplace equation

Δ V_ss(x, y) = 0    (108)

Besides, it has to fulfil the boundary conditions that the potential tends to V_sm at the boundary to region R1 and vanishes at the transitions to the metal contacts C2 and C3, and that the electrical field component in the y direction has to be zero at the margins of R2 in the y direction. As shown by Godehardt (1989), the exact solution to this boundary problem can be approximated with sufficient accuracy, especially for a region of interest in the vicinity of the conduction channel, by the potential function

V_ss(r) = V_sm ln(r/r_c)/ln(r_f/r_c)    r_f ≤ r ≤ r_c    (109)
where r_c is equal to half the distance between contacts C2 and C3. At the transition from region R1 to region R2, not only the surface potential has to be continuous but also the current density of the longitudinal current j_se(r). In order to fulfil this boundary condition, additionally the derivatives of V_ss(r) in Eqs. (107) and (109) have to be equal for r = r_f. This reduces the free parameters characterizing the current flow and the corresponding surface potential distribution from two to one by the relation

V_sm = j_fc R_s (r_f²/b_1) J_1(b_1) ln(r_c/r_f)    (110)

between V_sm and j_fc, where J_1(b_1) denotes the first-order Bessel function of the real argument b_1, with a tabulated value of J_1(b_1) ≈ 0.519 taken, e.g., from Olver (1972). Not only V_sm is proportional to j_fc. Inserting Eq. (110) in Eq. (107) and choosing r = 0 yields the relation

V_fc = j_fc R_s r_f² [1/b_1² + (J_1(b_1)/b_1) ln(r_c/r_f)]    (111)
between the surface potential V_fc above the current filament center at r = 0 and the current density j_fc at the same site. Furthermore, the total current I_f supplied by the current filament can be calculated by integrating j_i(r) from Eq. (105) over the circular area R1 of radius r_f,

I_f = ∫_0^{r_f} j_i(r) 2πr dr    (112)

resulting in the relation

I_f = 2π j_fc (r_f²/b_1) J_1(b_1)    (113)

between I_f and j_fc. The results derived on the basis of the model show that an arbitrary one of the parameters I_f, j_fc, V_fc, and V_sm can be used as the free parameter to be determined in order to evaluate the current filament and the corresponding surface potential variation, whereas the other parameters are calculated by applying Eqs. (110)-(113). The normalized distributions of both the current density j_i(r) and the corresponding surface potential V_ss(r) are shown in Fig. 21 for a current filament of a typical extension expressed by radius r_f = 0.5 μm.
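The numbers entering these filament relations can be reproduced with a few lines of stdlib Python (a sketch; j_fc = 1 and micrometer units are arbitrary choices): the first zero b_1 of J_0 is found by bisection, the tabulated J_1(b_1) ≈ 0.519 is recovered, and numerical integration of I_f = ∫_0^{r_f} j_i(r) 2πr dr reproduces the closed form 2π j_fc r_f² J_1(b_1)/b_1, which follows from the standard identity ∫_0^a J_0(kr) r dr = (a/k) J_1(ka):

```python
import math

def bessel_j(n, x, steps=3000):
    # J_n(x) = (1/pi) * int_0^pi cos(n*t - x*sin(t)) dt, trapezoidal rule
    h = math.pi / steps
    s = 0.5 * (1.0 + math.cos(n * math.pi))
    for k in range(1, steps):
        t = k * h
        s += math.cos(n * t - x * math.sin(t))
    return s * h / math.pi

# bisection for the first zero of J0, bracketed in (2, 3)
lo, hi = 2.0, 3.0
for _ in range(60):
    mid = 0.5 * (lo + hi)
    if bessel_j(0, lo) * bessel_j(0, mid) <= 0.0:
        hi = mid
    else:
        lo = mid
b1 = 0.5 * (lo + hi)
j1b1 = bessel_j(1, b1)

# filament current by Simpson's rule, with jfc = 1 and rf = 0.5 (micrometers)
jfc, rf = 1.0, 0.5
n = 200
h = rf / n
f = lambda r: jfc * bessel_j(0, b1 * r / rf) * 2.0 * math.pi * r
I_num = (h / 3.0) * (f(0.0) + f(rf)
                     + 4.0 * sum(f((2 * k - 1) * h) for k in range(1, n // 2 + 1))
                     + 2.0 * sum(f(2 * k * h) for k in range(1, n // 2)))
I_closed = 2.0 * math.pi * jfc * rf**2 * j1b1 / b1
print(b1, j1b1, I_num, I_closed)
```

The bisection value b_1 ≈ 2.4048 refines the b_1 ≈ 2.4 quoted above, and the numerical integral and the closed form agree to high accuracy.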
132
REINHOLD GODEHARDT
C. Potential Distribution of the Electrical Microfield Above the Specimen Surface
The potential distribution V(x, y, z) above the specimen surface, which is of special interest to mirror EM, is that solution of the three-dimensional Laplace equation

Δ V(x, y, z) = 0    (114)

which on the one hand coincides with the surface potential for z = 0, and on the other hand vanishes in the plane z = z_d of the aperture diaphragm. Because of the rotational symmetry of the surface potential expressed by Eq. (107), potential V will not depend on φ if the cylindrical coordinates r, φ, and z are used. Thus the boundary problem is given by

ΔV(r, z) = ∂²V(r, z)/∂r² + (1/r) ∂V(r, z)/∂r + ∂²V(r, z)/∂z² = 0    (115)
with the boundary conditions

V(r, z = z_d) = 0
V(r, z = 0) = V_ss(r),  with V_ss(r) = 0 for r_c ≤ r
V(r = n r_f, z) = 0,  with r_c ≤ n r_f    (116)
The last boundary condition is introduced in order to restrict the space of interest to a finite cylinder so that the solution of the boundary problem can be expressed by an expansion into a Bessel function series (e.g., Dreszer, 1975),

V(r, z) = Σ_m a_m J_0(b_m r/(n r_f)) sinh[b_m (z_d − z)/(n r_f)] / sinh[b_m z_d/(n r_f)]    (117)

where b_m denotes the zeros of the Bessel function J_0, i.e., J_0(b_m) = 0. Coefficients a_m have to be calculated according to

a_m = {2/[(n r_f)² J_1²(b_m)]} ∫_0^{n r_f} V(r, z = 0) J_0(b_m r/(n r_f)) r dr    (118)
Using the surface potential V(r, z = 0) of Eq. (116), the integral of Eq. (118) can be calculated analytically (Godehardt, 1989), finally resulting in the coefficients of Eq. (119).
Though Eqs. (117) and (119) provide the exact solution of the boundary problem in an explicitly analytical form, because of the insufficient convergence of the Bessel function series the computation of actual potential values is impossible even if powerful computers are used. This practical problem can be overcome by transforming the infinite series into an infinite integral expression. Though this is not trivial, only its result will be presented, and the reader who is interested in the details of the procedure, carried out according to hints by Kratzer and Franz (1963) on deriving the Bessel-Fourier integral of the Hankel transformation, is referred to the description by Godehardt (1989). The transformation is combined with n → ∞, resulting in the integral expression (120) for V(r, z),
where undefined expressions of the integrand at k = b_1 and k = 0 have to be replaced by the corresponding limits according to Eqs. (121) and (122).
Taking into account the oscillations of the integrand and the slope of its graph in the vicinity of the extremum near k = 3r_f/r_c, an appropriate splitting of the integration interval enables the calculation of potential values with the help of numerical integration methods. Results are shown in Fig. 22 for the parameters chosen, i.e., r_f = 0.5 μm, r_c = 1.8 mm, and V_fc = 1 V. The potential distribution at different distances z from the specimen surface shows the strong attenuation of surface potential profiles with increasing z.
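The specific integrand of Eq. (120) is not reproduced in this text, but the interval-splitting strategy itself is easy to illustrate (a generic sketch with an assumed integrand, not the original one): an oscillatory Bessel-type integral is evaluated piecewise with Simpson's rule on each subinterval, and the pieces are summed. For the test integrand J_0(k) e^{−k}, the result can be compared with the known Laplace-transform value ∫_0^∞ J_0(k) e^{−k} dk = 1/√2:

```python
import math

def j0(x, steps=1500):
    # J0 via its integral representation, (1/pi) * int_0^pi cos(x sin t) dt
    h = math.pi / steps
    s = 0.5 * (1.0 + math.cos(x * math.sin(math.pi)))
    for k in range(1, steps):
        s += math.cos(x * math.sin(k * h))
    return s * h / math.pi

def simpson(f, a, b, n=40):
    # composite Simpson's rule on [a, b], n even
    h = (b - a) / n
    return (h / 3.0) * (f(a) + f(b)
                        + 4.0 * sum(f(a + (2 * k - 1) * h) for k in range(1, n // 2 + 1))
                        + 2.0 * sum(f(a + 2 * k * h) for k in range(1, n // 2)))

f = lambda k: j0(k) * math.exp(-k)
# split [0, 25] into unit subintervals; the exponential makes the tail negligible
total = sum(simpson(f, float(a), float(a) + 1.0) for a in range(25))
print(total, 1.0 / math.sqrt(2.0))
```

Splitting keeps each subinterval smooth enough for a low-order rule, which is the point of the "appropriate splitting of the integration interval" mentioned above.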
FIGURE 22. Representation of the potential distribution at different distances z from the specimen surface.
Though using Eq. (120) enables exact potential values to be calculated with high accuracy, and in the same way integral expressions can be derived to compute the electrical field components E_r(r, z) and E_z(r, z), their application to the estimation of a large number of electron trajectories involves considerable effort. Thus it is more advantageous to use an approximation function V[s(r, z)], Eq. (123), defined for r_f ≤ s ≤ s_1, with the limit s_1 of quantity s chosen such that both z < 1 mm and r < r_c are valid. This approximation function not only equals the exact values of the surface potential for z = 0, but apart from a small central region it also
fulfills the Laplace equation in the vicinity of the specimen surface. The accuracy of the approximation potential is also sufficient along the z axis, where, e.g., the deviations from the exact values are smaller than 1% in the interval r_f < z < 0.5 mm. Of course, Eq. (123) also allows one to derive directly the partial derivatives of V[s(r, z)] and the components E_r(r, z) and E_z(r, z) of the electrical microfield.

D. Computer Simulation of Observed Image Structures

Investigations based on an EMS characterized by r_d = 3.2 mm and z_d = 7 mm comprise computer simulations of observed image structures using a large number of computed electron trajectories, where in each case both the contrast-producing deflected electron trajectory and the corresponding undisturbed electron trajectory have been determined. As described in Section IV.A, the method suggested by Recknagel (1937) has been used to calculate electron trajectories not deflected by a distortion field. For these calculations the z axis between z = 0 and z = 20 mm is divided into 20 equidistant intervals and the potential distribution along the electron-optical axis shown in Fig. 10 is approximated by 20 adjacent parabolic parts. Examples of undisturbed electron trajectories computed in this manner are presented in Fig. 11. If, owing to an appropriate specimen shift, the microfield investigated is in a symmetric position to the electron-optical axis, the superimposed potential V_Σ(r, z) can be calculated according to

V_Σ(r, z) = V_EMS(r, z) + V(r, z)    (124)
where V_EMS(r, z), expressed by Eqs. (19)-(23), corresponds to the potential contributions of the EMS and the specimen bias V_b, and V(r, z) is the potential contribution of the electrical microfield given by Eq. (123). Because of the strong attenuation of V(r, z) with increasing z, the actual superposition takes place only within the region where the retarding field of the EMS is uniform. Thus, within the interval 0 ≤ z ≤ 1 mm, the first summand of Eq. (124) can be replaced by V_EMS(r, z) ≈ [V_0 + (−V_b)](z − z_d)/z_d, resulting in a superimposed potential of

V_Σ(r, z) ≈ [V_0 + (−V_b)](z − z_d)/z_d + V(r, z)    (125)

which can be used there. Therefore, for z > 1 mm also the electron trajectories that had been affected by the electrical microfield have been determined by the
Recknagel method, while within the interval 0 ≤ z < 1 mm in the presence of a distortion field the electron trajectories have been calculated step by step according to the procedure described in Section IV.A by Eqs. (25)-(33). Additional expressions needed for the electrical field components [compare Eq. (24)] can easily be derived by the corresponding partial derivatives of expression V_Σ(r, z) in Eq. (125). The final result of calculating each electron trajectory of an electron beam at normal incidence is a triple (r_0, r_s, r̄_s), with distance r_0 of the approaching electron outside the EMS from the electron-optical axis being used as the input value, whereas within the viewing plane the radius coordinates r_s and r̄_s of the undisturbed and the deflected electrons, respectively, are the output values of the computation. In order to reveal the diversity of electron deflections possible, a large number of such triples have been determined. Marking the triples by an additional index number, which increases with increasing r_0, enabled the number of triples computed to be selected in such a way that in each interval between successive triples (r_{0,n}, r_{s,n}, r̄_{s,n}) and (r_{0,n+1}, r_{s,n+1}, r̄_{s,n+1}), a linear correlation between r_{0,x} and r̄_{s,x} may be assumed. This means that for

r_{0,x} = r_{0,n} + c_x (r_{0,n+1} − r_{0,n})  with  0 ≤ c_x ≤ 1    (127)

the equation

r̄_{s,x} = r̄_{s,n} + c_x (r̄_{s,n+1} − r̄_{s,n})    (128)

is valid. On this assumption the contribution j_{e,n} of electrons arising from the interval between r_{0,n} and r_{0,n+1} to the electron current density in the viewing plane between r̄_{s,n} and r̄_{s,n+1} is expressed by

j_{e,n} = j_{e0} |r_{0,n+1}² − r_{0,n}²| / |r̄_{s,n+1}² − r̄_{s,n}²|    (129)
To collect all the electron current contributions in a proper way, the viewing screen is divided into adjacent concentric rings of the same width, thus forming a multichannel collector for the electrons striking it. As the width of the concentric rings limits the lateral resolution of the electron current density distribution to be determined, it has to be chosen small enough. Multiplying the electron current density given by Eq. (129) by that part of the area of a channel which is struck by electrons of the r_0 interval considered allows one to determine the corresponding electron current contribution to be collected by the channel. Summing up all the electron current contributions in a channel and dividing the sum by the area of the concentric ring results in the electron current density recorded for the region of the channel considered.
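The multichannel collector just described can be sketched in a few lines (an illustrative toy, not the author's code: the deflection law, ring width, and electron count are invented). Electrons start uniformly over a source disc, are deflected radially by an assumed bump-like law, and are binned into concentric rings of equal width; dividing the collected current of each ring by its area gives the recorded current density:

```python
import math
import random

random.seed(0)

def deflect(r0):
    # hypothetical radial deflection law: a localized outward push near r0 = 1
    return r0 + 0.3 * math.exp(-((r0 - 1.0) ** 2) / 0.1)

n_rings, width = 30, 0.1      # the "multichannel collector"
counts = [0.0] * n_rings
n_e = 200_000                 # number of simulated electron trajectories
for _ in range(n_e):
    r0 = 2.0 * math.sqrt(random.random())   # uniform current density over a disc of radius 2
    ch = int(deflect(r0) / width)           # ring (channel) struck in the viewing plane
    if ch < n_rings:
        counts[ch] += 1.0

# current density per ring = collected current / ring area, normalized by the
# undisturbed density j_e0 = n_e / (pi * 2^2)
je0 = n_e / (math.pi * 4.0)
norm = [counts[i] / (math.pi * width**2 * (2 * i + 1)) / je0 for i in range(n_rings)]
print([round(v, 2) for v in norm[:16]])
```

Rings fed by a compressed bundle of trajectories show values above 1 (a bright fringe), rings drained by the deflection fall below 1 (a dark area), and unaffected rings stay near 1 — the same bookkeeping that produces the j_e/j_e0 curves of Fig. 25.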
FIGURE 23. Influence of the electrical microfield on the electron coordinate r̄_s in the viewing plane at different specimen biases V_b (a-e) relative to an undisturbed electron beam (f).
Finally, the results of all the channels represent the electron current density distribution in the viewing plane. It is an important advantage of this procedure that, in contrast to the analytical estimation of j_e(r) according to Eq. (42), there is no condition limiting the determination of the electron current density distribution, thus also allowing the application to strong contrasts and caustic figures. Results of computer simulations concerning an electrical microfield of V_fc = −1 V and r_f = 0.5 μm at various specimen biases⁴ V_b (+0.3 V, 0 V, −5 V, −10 V, and −20 V) are presented in Figs. 23-25, where, e.g., 700 electron trajectories were computed for V_b = +0.3 V as the most positive bias, with none of the electrons reaching the specimen surface. Figure 23 shows coordinate r̄_s as a function of r_0, i.e., the direct result of the electron trajectories computed for different biases. The additional straight line reflects the corresponding relation between r_s and r_0 of undisturbed electron trajectories. The electron deflections in the viewing plane Δr_s = r̄_s − r_s in dependence on the original coordinates r_0 of the electrons are shown in Fig. 24. The slope of the curve for V_b = +0.3 V is sometimes smaller than −1, so it is not possible to calculate the electron density distribution by Eq. (42). But exactly this very close approach of the imaging electrons to the specimen surface

⁴ The bias values used for the computations differ by a shift from those given in the figure captions. To eliminate this discrepancy, the biases of the first electron impact on the sample surface have to be correlated.
FIGURE 24. Electron deflections Δr_s in the viewing plane caused by the electrical microfield at different specimen biases V_b.

FIGURE 25. Normalized electron current density distribution j_e/j_e0 (maximum = 7.6) in the viewing plane caused by the electrical microfield (V_fc = −1 V, r_f = 0.5 μm) at different specimen biases: (a) V_b = +0.3 V, (b) V_b = 0 V, (c) V_b = −5 V, (d) V_b = −10 V, (e) V_b = −20 V.
results in the largest information transfer about the microfield under investigation. The image contrast observed is explained by the normalized electron current density j_e/j_e0 represented in Fig. 25 in dependence on coordinate r_s in the viewing plane. Corresponding to V_b = +0.3 V, curve (a), starting from the origin of the coordinate system, practically coincides with the abscissa before it rapidly changes to its maximum at r_s = r_bf, followed by a rapid decrease down to its final value of 1. This representation actually corresponds to the image structures observed of round dark-black areas with fine, very bright fringes. This typical feature also remains if the bias is changed to V_b = 0 V [curve (b)], whereas further shifting the bias to negative values results in diffuse dark image structures of decreased contrast. Taking into account that contrast simulations are mainly to provide information about the parameters of the electrical microfield investigated, a large number of contrast simulations have to be carried out to reveal a corresponding correlation. However, with particular emphasis on the round black image structures with bright fringes, it is much easier to find the desired relations as, fortunately enough, the radius of the black area r_bf is correlated directly to the microfield parameter V_fc. Moreover, the results of computed electron trajectories have shown that it is not necessary to carry out the whole procedure including the estimation of the electron current density distribution in the viewing plane to find r_bf, as it can also be determined from a representation as shown in Fig. 23, where the number of electron trajectories to be computed can be reduced considerably. Calculations have revealed the
FIGURE 26. Radius r_bf of the round black image structure with a bright fringe as a function of the microfield parameter V_fc. (Godehardt, R., and Heydenreich, J. (1991). With permission.)
correlation between r_bf and V_fc shown by the curve in Fig. 26. It is worth pointing out that varying parameter r_f (radius of the current filament) between 0.5 μm < r_f < 4 μm as the range of interest will not influence the curve of Fig. 26. Therefore, while on the one hand using the relation expressed by the curve of Fig. 26 enables the microfield parameter V_fc to be determined directly from the measured r_bf, MEM investigations, on the other, do not provide information about r_f. Additionally, based on the relation between the filament current I_f and parameter V_fc according to Eqs. (111) and (113),

I_f = 2π b_1 J_1(b_1) V_fc / {R_s [1 + b_1 J_1(b_1) ln(r_c/r_f)]}    (130)

with V_fc determined, the filament current I_f can also be roughly estimated, as owing to the logarithmic expression the influence of the unknown parameter r_f is not very important.
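The weak influence of r_f claimed here comes from the fact that r_f enters only through ln(r_c/r_f). A two-line check, with the values quoted in the text (r_c = 1.8 mm, and the stated range 0.5 μm < r_f < 4 μm):

```python
import math

rc = 1.8e-3                       # half distance between contacts C2 and C3 [m]
ln_small = math.log(rc / 4.0e-6)  # logarithm at the upper end of the r_f range
ln_large = math.log(rc / 0.5e-6)  # logarithm at the lower end of the r_f range
print(ln_small, ln_large, ln_large / ln_small)
```

Although r_f varies by a factor of 8, the logarithm changes by only about a third, so an estimate of I_f from a measured V_fc is correspondingly insensitive to the unknown filament radius.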
E. Applications and Results

Applying the relation between r_bf and V_fc expressed by the graph of Fig. 26 to the round dark image structure with bright fringes recorded in micrograph (c) of Fig. 19 results in V_fc = −1.6 V for the parameter characterizing the electrical microfield detected. With the measured sheet resistance R_s = 144 MΩ of the sample, with a MIS sandwich system consisting of a Ag(40-nm)/Au(30-nm) metal double layer, an 80-nm-thick SiO_x insulating film, and a 36-nm-thick Ge top electrode, the current of the conduction channel I_f = −7.8 nA is determined according to Eq. (130) for a filament radius of 0.5 μm. Figure 27 shows a further series of micrographs of the same specimen region at voltages V_a = ±2 V [micrographs (a) and (b)] and V_a = ±15 V [micrographs (c) and (d)]. The quantitative contrast interpretation of the image structure detected in micrograph (d) shows that the change of the specimen voltage from V_a = −10 V to V_a = −15 V results in an increased microfield of parameter V_fc = −2.25 V and a corresponding current source of I_f = −10.9 nA. Micrographs (a) and (b), however, demonstrate the high sensitivity of the MEM to electrical microfields, where the interpretation of the corresponding image contrast in micrograph (b) results in V_fc = −0.2 V for the microfield and in I_f = −0.95 nA for the current source. Investigations of a sample where a 60-nm-thick topographical reference structure of a periodicity of 33 μm is additionally produced by SiO_x evaporation onto the glass substrate through a grid mask in the region between contacts C2 and C3 (compare Fig. 19) show further interesting results. The MIS sandwich system of this sample is formed by a metal double layer of
FIGURE 27. Micrographs of the same sample region as imaged in Fig. 19 at applied specimen voltages of V_a = +2 V (a), V_a = −2 V (b), V_a = +15 V (c), and V_a = −15 V (d). (Godehardt, R., and Heydenreich, J. (1991). With permission.)
Ag(155-nm)/Cu(105-nm), followed by a 120-nm-thick SiO_x insulating layer and a 190-nm-thick Ge top electrode having a measured sheet resistance of 5.5 MΩ. The central region of the MIS sandwich system is imaged in Fig. 28 before applying a specimen voltage [micrograph (a)] and at applied voltages of V_a = +20 V [micrograph (b)] and V_a = −20 V [micrographs (c) and (d)].

FIGURE 28. Micrographs of a MIS sample region with additional topographical structures prepared by evaporation through a grid. The micrographs show the sample before (a) and after applying voltages of V_a = +20 V (b) and V_a = −20 V [(c) and (d)], with two current filaments being detected in (b) and (c), but only one in (d).

Respective image structures of micrographs (b) and (c) demonstrate the action of two electrical microfields. Suddenly, without a change of the specimen voltage, the image structure in the lower left disappeared, as micrograph (d) shows. This phenomenon, caused by a rupture of the conduction channel, unequivocally accounts for the contrast fluctuations often occurring. Quantitative investigations carried out with respect to micrograph (d) on the one hand yield V_fc = −2.5 V for the microfield and I_f = −0.3 μA for the current source. The values have been corrected according to topographical electron deflections detected in micrograph (a). On the other hand, in addition the surface potential between contacts C2 and C3 along a line crossing the center of the image structure caused by the current filament has been determined experimentally in the following way. With the topographical grid pattern used as an orientation aid, a specimen point where the surface potential has to be measured is shifted into the electron-optical center. Then, with the specimen voltage applied and also switched off, a specimen bias is selected so that the very first electrons impinge on the sample surface, as micrographs (a) and (d) demonstrate. As the specimen potential is constant for the time the specimen voltage is switched off, the difference between the two bias values corresponds to the surface potential to be measured at the sample site considered. Figure 29 presents the respective results by black circular areas with diameters expressing roughly the measurement errors. The surface potential distribution experimentally determined is approximated by curve (c), which itself is a superposition of curves (a) and (b). Curve (a) corresponds to the surface potential contribution of the electrical microfield and is calculated by means of Eq. (123) for z = 0, V_fc = −2.5 V, and r_f = 1 μm. Curve (b) is computed according to the empirically found equation
FIGURE 29. Surface potential profile experimentally determined and marked by black dots, and its approximation by curve c, obtained by superimposing curves a and b.
V_ss = −6.65(x + 3)    (left interval)
V_ss = −9.86 + 1.34 cosh(2.28 x)    (central interval)
V_ss = +6.65(x − 3)    (right interval)    (131)

with x in millimeters and V_ss in volts, representing the surface potential contribution caused by the current flow through the homogeneous parts of the MIS sandwich system. Contrary to the assumption made hitherto, this contribution to the surface potential must not be ignored, as it mainly determines the surface potential distribution, as demonstrated by Fig. 29. On the other hand, this result again offers the possibility of separately treating the contributions to the surface potential in such a way that the influence of a current filament can be considered as a small modulation of the global surface potential distribution caused by the integral current flow of the MIS sandwich system. The global surface potential itself is considered as an additional contribution to the specimen bias. Applying Eq. (103) to the surface potential expressed by Eq. (131) results in absolute values of the current density in the insulating layer between 1.3 μA/mm² and 3 μA/mm². Compared with these values, the absolute value of the maximum current density in the center of the current filament, |j_fc| = 0.54 A/mm², determined according to Eq. (111), is higher by about five orders of magnitude. Nevertheless, the current I_f = −0.3 μA supplied by the current filament is small if compared with the
determined integral current of −12.1 μA of the sandwich system; it is therefore impossible to investigate its contribution by means of integral methods.

IX. CONCLUDING REMARKS

Though the roots of mirror EM coincide with the beginning of electron microscopy, and applications of mirror EM were reported mainly in the 1960s and 1970s, mirror EM is still a method of interest to materials and surface sciences. A bibliography on emission microscopy, mirror EM, low-energy electron microscopy, and related techniques by Griffith et al. (1991) shows a trend toward combinations of a MEM with other surface analysis techniques, preferably in combined instruments under ultra-high-vacuum conditions. The MEM mode of these instruments is used particularly to image the specimen surface topography and the surface potential distribution in a simple manner, as described, e.g., by Swiech et al. (1993) in conjunction with in-situ studies of surface reactions. Furthermore, variants of quantitative mirror EM have been a matter of interest, particularly for evaluating electrical microfields (Godehardt and Heydenreich, 1991, 1992) and magnetic fringing fields (Godehardt et al., 1992). Besides a review of mirror EM, both qualitative image formation and quantitative MEM investigations are described in this article and demonstrated by examples. The subject is presented mainly from a practical point of view; general electron-optical considerations concerning the resolution of MEM, as well as quantum effects in mirror EM, are beyond the scope of this article, and readers interested in these topics are referred to Rempfer and Griffith (1992) and Gvosdover and Zeldovich (1973), respectively.

ACKNOWLEDGMENT

All the author's experience in the field of mirror EM results either from his activities at the Institute of Solid State Physics and Electron Microscopy in Halle (Saale) until the end of 1991, or from his studies at the Max Planck Institute of Microstructure Physics in Halle (Saale) until 1993.
Therefore, the author wishes to thank Professor H. Bethge as the director of the former institute, as well as the leadership of the Max Planck Institute for their support. The author is particularly grateful to Professor J. Heydenreich for his permanent encouragement, support, and interest in this work. Also, the collaboration with J. Vester and Dr. J. Barthel, and the fruitful discussions
and private communications with the latter concerning the work of scientists from the Electron Optics Laboratory of Moscow State University, are gratefully appreciated. Furthermore, the author thanks Mrs. Chr. Hatcher, Mrs. V. Krause, Mrs. H. Otto, and Mrs. M. Wille for their technical assistance in preparing the manuscript, and Mrs. H. Messerschmidt for her critical revision of its English.

REFERENCES

Archard, G. D., and Mulvey, T. (1958). J. Sci. Instr. 35, 279.
Artamonov, O. M., and Komolov, S. A. (1966). Radiotechnika i Elektronika 11, 2186.
Babout, M., Guivarch, M., Pantel, R., Bujor, M., and Guittard, C. (1980). J. Phys. D: Appl. Phys. 13, 1161.
Bacquet, G., and Santouil, A. (1962). Compt. Rend. 255, 1263.
Barnett, M. E., and Nixon, W. C. (1964). In "Proc. 3rd Eur. Reg. Conf. on Electron Microscopy, Vol. A" (M. Titlbach, ed.), p. 37, Prague.
Barnett, M. E., and Nixon, W. C. (1967a). J. Sci. Instr. 44, 893.
Barnett, M. E., and Nixon, W. C. (1967b). Optik 26, 310.
Barthel, J. (1983). Thesis, Dresden.
Bartz, G., and Weissenberg, G. (1957). Naturwissenschaften 44, 229.
Bartz, G., Weissenberg, G., and Wiskott, D. (1952). Phys. Verh. 3, 108.
Bartz, G., Weissenberg, G., and Wiskott, D. (1954). In "Proc. 4th Int. Conf. on Electron Microscopy, Vol. 2" (R. Ross, ed.), p. 395, London.
Bates, C. W., and England, L. (1969). Appl. Phys. Lett. 14, 390.
Becherer, G., Gradewald, R., Kuhlmann, F., and Liechtfeldt, P. (1973). Kristall Tech. 8, 287.
Berger, C., Dupuy, C., Layderant, L., and Bernard, R. (1977). J. Appl. Phys. 48, 5027.
Bernards, J. P. C., and Schrauwen, C. P. G. (1990). Thesis, Enschede.
Bethge, H., and Heydenreich, J. (1960). Exp. Tech. Phys. 8, 61.
Bethge, H., and Heydenreich, J. (1971a). In "Advances in Optical and Electron Microscopy, Vol. 4," p. 237, Academic Press, London and New York.
Bethge, H., and Heydenreich, J. (1971b). Exp. Tech. Phys. 19, 375.
Bethge, H., and Heydenreich, J. (1987). In "Electron Microscopy in Solid State Physics" (H. Bethge and J.
Heydenreich, eds.), p. 229, Elsevier, Amsterdam, Oxford, New York, and Tokyo.
Bethge, H., Hellgardt, J., and Heydenreich, J. (1960). Exp. Tech. Phys. 8, 49.
Bok, A. B. (1968). Thesis, Delft.
Bok, A. B., Le Poole, J. B., Roos, J., and De Lang, H. (1971). In "Advances in Optical and Electron Microscopy, Vol. 4," p. 161, Academic Press, London and New York.
Bostanjoglo, O., and Seigel, G. (1967). Cryogenics 7, 157.
Brand, U., and Schwartze, W. (1963). Exp. Tech. Phys. 11, 18.
Cline, J. E., Morris, J. M., and Schwartz, S. (1969). IEEE Trans. Electron Dev. ED-16, 371.
Dearnaley, G., Morgan, D. V., and Stoneham, A. M. (1970). J. Non-Cryst. Solids 4, 593.
Delong, A., and Drahos, V. (1964). In "Proc. 3rd Eur. Reg. Conf. on Electron Microscopy, Vol. A" (M. Titlbach, ed.), p. 25, Prague.
Delong, A., and Drahos, V. (1971). Nature Phys. Sci. 230, 196.
Dreszer, J. (1975). "Mathematik Handbuch für Technik und Naturwissenschaft," p. 752, Fachbuchverlag Leipzig, Warschau.
Dupuy, J. C., Laydevant, L., and Etienne, S. (1976). J. Phys. E: Sci. Instrum. 9, 176.
Dupuy, J. C., Laydevant, L., Pantel, R., and Bujor, M. (1977). C. R. Acad. Sci. 284B, 377.
Dupuy, J. C., Sibai, A., and Vilotitch, B. (1984). Surface Sci. 147, 191.
Emmer, I. (1974). Thin Solid Films 20, 43.
English, F. L. (1968a). J. Appl. Phys. 39, 128.
English, F. L. (1968b). J. Appl. Phys. 39, 2302.
Forster, M. S., Campuzano, J. C., Willis, R. F., and Dupuy, J. C. (1985). J. Microsc. 140, 395.
Garrood, J. R., and Nixon, W. C. (1968). In "Proc. 4th Eur. Reg. Conf. on Electron Microscopy, Vol. 1" (D. S. Bocciarelli, ed.), p. 95, Rome.
Glaser, A., and Henneberg, W. (1935). Z. Tech. Phys. 16, 222.
Godehardt, R. (1988). In "Proc. 9th Tagung Elektronenmikroskopie" (J. Heydenreich, H. Luppa, and D. Stiller, eds.), p. 507, Dresden.
Godehardt, R. (1989). Thesis, Halle/S.
Godehardt, R., and Füting, M. (1990). In "Beiträge zur elektronenmikroskopischen Direktabbildung von Oberflächen, Vol. 23," p. 67, Münster.
Godehardt, R., and Heydenreich, J. (1991). In "Beiträge zur elektronenmikroskopischen Direktabbildung von Oberflächen, Vol. 24/1," p. 65, Münster.
Godehardt, R., and Heydenreich, J. (1992). Int. J. Electron. 73, 1093.
Godehardt, R., and Heydenreich, J. (1993). Optik 94, Suppl. 5, 52.
Godehardt, R., Heydenreich, J., and Barthel, J. (1992). In "Proc. 10th Eur. Conf. on Electron Microscopy, Vol. 2" (A. Lopez-Galindo and M. I. Rodriguez-Garcia, eds.), p. 255, Granada.
Godehardt, R., Heydenreich, J., and Vester, J. (1975). In "Proc. 8th Arbeitstagung Elektronenmikroskopie," p. 321, Berlin.
Godehardt, R., Heydenreich, J., and Vester, J. (1978). In "Proc. 9th Tagung Elektronenmikroskopie" (J. Heydenreich and H. Luppa, eds.), p. 92, Dresden.
Gould, R. D. (1983). J. Non-Cryst. Solids 55, 363.
Griffith, O. H., and Engel, W. (1991). Ultramicroscopy 36, 262.
Griffith, O. H., Habliston, P. A., and Birrell, G. B. (1991). Ultramicroscopy 36, 1.
Guittard, C., Babout, M., Guittard, S., and Bujor, M. (1981). Scanning Electron Microsc. I, 199.
Guittard, C., Babout, M., and Pernoux, E. (1968). In "Proc. 4th Eur. Reg. Conf. on Electron Microscopy, Vol. 1" (D. S. Bocciarelli, ed.), p. 107, Rome.
Guittard, C., Vassoille, R., and Pernoux, E. (1967). C. R. Acad. Sci. B264, 924.
Gvosdover, R. S. (1972a). In "Proc. 5th Eur. Reg. Conf. on Electron Microscopy, Vol. 1," p. 504, Manchester.
Gvosdover, R. S. (1972b). Radiotechnika i Elektronika 17, 1697.
Gvosdover, R. S., and Petrov, V. I. (1972). Radiotechnika i Elektronika 17, 433.
Gvosdover, R. S., and Zeldovich, B. Ya. (1973). J. Microsc. 17, 107.
Gvosdover, R. S., Karpova, I. V., Kalashnikov, S. G., Lukjanov, A. E., Rau, E. I., and Spivak, G. V. (1971). Phys. Stat. Sol. (a) 5, 65.
Gvosdover, R. S., Lukjanov, A. E., Spivak, G. V., Ljamov, V. E., Rau, E. I., and Butilkin, A. I. (1968). Radiotechnika i Elektronika 13, 2276.
Gvosdover, R. S., Lukjanov, A. E., Spivak, G. V., and Rau, E. I. (1970). In "Proc. 7th Int. Conf. on Electron Microscopy, Vol. 1" (P. Favard, ed.), p. 199, Grenoble.
Hamarat, R. T., Witzani, J., and Hörl, E. M. (1984). Scanning 6, 75.
Hawkes, P. W., and Kasper, E. (1989). "Principles of Electron Optics," Vol. 2, p. 923, Academic Press, London, San Diego, New York, and Berkeley.
Henneberg, W., and Recknagel, A. (1935). Z. Tech. Phys. 16, 621.
Heydenreich, J. (1961). Thesis, Potsdam.
Heydenreich, J. (1966). In "Proc. 6th Int. Conf. on Electron Microscopy, Vol. 1" (R. Uyeda, ed.), p. 233, Kyoto.
Heydenreich, J. (1969). Habilitationsschrift, Halle (Saale).
Heydenreich, J. (1970). In "Proc. 7th Int. Conf. on Electron Microscopy, Vol. 2" (P. Favard, ed.), p. 31, Grenoble.
Heydenreich, J. (1972). In "Proc. 5th Eur. Reg. Conf. on Electron Microscopy, Vol. 1," p. 490, Manchester.
Heydenreich, J., and Barthel, J. (1974). In "Proc. 8th Int. Conf. on Electron Microscopy, Vol. 1" (J. V. Sanders and D. J. Goodchild, eds.), p. 654, Canberra.
Heydenreich, J., and Vester, J. (1973). In "Proc. 7th Arbeitstagung Elektronenmikroskopie," p. 172, Berlin.
Heydenreich, J., and Vester, J. (1974). Kristall Tech. 9, 107.
Heydenreich, J., Godehardt, R., and Vester, J. (1978). Kristall Tech. 13, 355.
Hopp, H. (1960). Thesis, Berlin.
Hottenroth, G. (1937). Ann. Phys. 30, 689.
Igras, E. (1961). Bull. Acad. Polon. 9, 403.
Igras, E. (1962). In "Proc. Int. Conf. on Semiconductors," p. 832, Exeter.
Igras, E. (1970). Electronic Equipment News 12, 18.
Igras, E., and Warminski, T. (1965). Phys. Stat. Sol. 9, 79.
Igras, E., and Warminski, T. (1966). Phys. Stat. Sol. 13, 169.
Igras, E., and Warminski, T. (1967a). Phys. Stat. Sol. 19, K67.
Igras, E., and Warminski, T. (1967b). Phys. Stat. Sol. 20, K5.
Igras, E., and Warminski, T. (1968a). Phys. Stat. Sol. 27, 69.
Igras, E., and Warminski, T. (1968b). In "Proc. 4th Eur. Reg. Conf. on Electron Microscopy, Vol. 1" (D. S. Bocciarelli, ed.), p. 109, Rome.
Igras, E., Krylow, J., and Rossinski, W. (1974). Physics and Technics of Semiconductors 8, 748.
Igras, E., Przyborsky, W., and Warminski, T. (1969). Phys. Stat. Sol. 35, K107.
Igras, E., Krylow, J., and Rossinski, W. (1975). Phys. Stat. Sol. (a) 28, 321.
Ivanov, R. D., and Abalmazova, M. G. (1966). Izv. Akad. Nauk SSSR, Ser. Fiz. 30, 784.
Ivanov, R. D., and Abalmazova, M. G. (1967). Zh. Tekhnich. Fiz. 37, 1351.
Kachniarz, J., and Zarembinski, S. (1979). J. Phys. E: Sci. Instrum. 12, 328.
Kachniarz, J., and Zarembinski, S. (1980a). Optik 55, 403.
Kachniarz, J., and Zarembinski, S. (1980b). Optik 57, 287.
Kachniarz, J., and Zarembinski, S. (1981a).
Optik 60, 81.
Kachniarz, J., and Zarembinski, S. (1981b). Optik 59, 431.
Kachniarz, J., Jaworski, M., and Zarembinski, S. (1982). Optik 61, 157.
Kachniarz, J., Zarembinski, S., Dabkowski, A., and Kopalko, K. (1983). Optik 64, 73.
Kobayashi, J., Someya, T., and Furuhata, Y. (1972). Phys. Lett. A38, 309.
Kranz, J., and Bialas, H. (1961). Optik 18, 178.
Kratzer, A., and Franz, W. (1963). "Transzendente Funktionen," p. 359, Akademische Verlagsgesellschaft Geest & Portig, Leipzig.
Kuehler, J. D. (1960). IBM J. Res. Develop. 4, 202.
Kuhlmann, F., and Knick, I. (1977). Exp. Tech. Phys. 25, 81.
Kuhlmann, F., Hein, U., Burka, C., Knick, K.-D., and Schneidereit, A. (1978). Exp. Tech. Phys. 26, 285.
Lichte, H. (1980). Optik 56, 35.
Lichte, H., and Möllenstedt, G. (1979). J. Phys. E 12, 941.
Lukjanov, A. E., and Spivak, G. V. (1966). In "Proc. 6th Int. Conf. on Electron Microscopy, Vol. 1" (R. Uyeda, ed.), p. 611, Kyoto.
Lukjanov, A. E., Rau, E. I., and Spivak, G. V. (1974). Izv. Akad. Nauk SSSR, Ser. Fiz. 38, 1406.
Lukjanov, A. E., Spivak, G. V., and Gvosdover, R. S. (1973). Uspekhi Fiz. Nauk 110, 623.
Lukjanov, A. E., Spivak, G. V., Sedov, N. N., and Petrov, V. I. (1968). Izv. Akad. Nauk SSSR, Ser. Fiz. 32, 987.
Mafft, K. N. (1968). J. Appl. Phys. 39, 3878.
Mayer, L. (1955). J. Appl. Phys. 26, 1228.
Mayer, L. (1957a). J. Appl. Phys. 28, 975.
Mayer, L. (1957b). WADC Technical Report 57-585.
Mayer, L. (1958). J. Appl. Phys. 29, 658.
Mayer, L. (1959a). J. Appl. Phys. 30, 1101.
Mayer, L. (1959b). WADC Technical Report 59-640.
Mayer, L. (1961). In "Encyclopedia of Microscopy" (G. L. Clark, ed.), p. 316, Reinhold, New York.
Mayer, L., Rickett, R., and Stenemann, H. (1962). In "Proc. 5th Int. Congr. on Electron Microscopy, Vol. 1" (S. S. Breese, Jr., ed.), p. D-10, Philadelphia.
McLeod, G. C., and Oman, R. M. (1968). J. Appl. Phys. 39, 2756.
Olver, F. W. J. (1972). In "Handbook of Mathematical Functions with Formulas, Graphs, and Mathematical Tables" (M. Abramowitz and I. A. Stegun, eds.), National Bureau of Standards, Washington, DC.
Oman, R. M. (1969). Adv. Electronics Electron Phys. 26, 217.
Orthuber, R. (1948). Z. Angew. Phys. 1, 79.
Pagnia, H., and Sotnik, N. (1988). Phys. Stat. Sol. (a) 111, 387.
Pantel, R., Bujor, M., and Bardolle, J. (1977). Surface Sci. 62, 589.
Petrov, V. I., Lukjanov, A. E., and Gvosdover, R. S. (1970a). Vestnik Mosk. Univers., Ser. Fiz. Astr. 4, 441.
Petrov, V. I., Lukjanov, A. E., Gvosdover, R. S., and Spivak, G. V. (1970b). In "Proc. 7th Int. Conf. on Electron Microscopy, Vol. 2" (P. Favard, ed.), p. 25, Grenoble.
Phillips, D. L. (1962). J. Assoc. Comput. Mach. 9, 84.
Rahman, A. (1982). Thin Solid Films 94, 199.
Rau, E. I., Lukjanov, A. E., Spivak, G. V., and Gvosdover, R. S. (1970). Izv. Akad. Nauk SSSR, Ser. Fiz. 34, 1539.
Ray, A. K., and Hogarth, C. A. (1985). Thin Solid Films 127, 69.
Ray, A. K., and Hogarth, C. A. (1986). Thin Solid Films 144, L111.
Recknagel, A. (1936). Z. Tech. Phys. 17, 643.
Recknagel, A. (1937). Z. Phys. 104, 381.
Rempfer, G. F., and Griffith, O. H. (1992). Ultramicroscopy 47, 35.
Rugar, D., Mamin, H. J., Guethner, P., Lambert, S. E., Stern, J. E., McFadyen, I., and Yogi, T. (1990). J. Appl. Phys. 68, 1169.
Ryshik, I.
M., and Gradstein, I. S. (1963). "Tables of Series, Products, and Integrals," p. 178, VEB Deutscher Verlag der Wissenschaften, Berlin.
Schwartze, W. (1964). In "Proc. 3rd Eur. Reg. Conf. on Electron Microscopy, Vol. A" (M. Titlbach, ed.), p. 15, Prague.
Schwartze, W. (1966). Exp. Tech. Phys. 14, 293.
Schwartze, W. (1967). Optik 25, 260.
Sedov, N. N. (1968). Izv. Akad. Nauk SSSR, Ser. Fiz. 32, 1175.
Sedov, N. N., Lukjanov, A. E., and Spivak, G. V. (1968a). Izv. Akad. Nauk SSSR, Ser. Fiz. 32, 996.
Sedov, N. N., Spivak, G. V., and Ivanov, R. D. (1962). Izv. Akad. Nauk SSSR, Ser. Fiz. 26, 1332.
Sedov, N. N., Spivak, G. V., Petrov, V. I., Lukjanov, A. E., and Rau, E. I. (1968b). Izv. Akad. Nauk SSSR, Ser. Fiz. 32, 1005.
Shern, C. S., and Unertl, W. N. (1987). J. Vac. Sci. Technol. A5, 1266.
Soa, E.-A. (1959). Jenaer Jahrbuch, Vol. I, p. 115.
Someya, T., and Kobayashi, J. (1970). In "Proc. 7th Int. Conf. on Electron Microscopy, Vol. 2" (P. Favard, ed.), p. 627, Grenoble.
Someya, T., and Kobayashi, J. (1971). Phys. Stat. Sol. 4, K161.
Someya, T., and Watanabe, M. (1968). In "Proc. 4th Eur. Reg. Conf. on Electron Microscopy, Vol. 1" (D. S. Bocciarelli, ed.), p. 97, Rome.
Spivak, G. V., and Ivanov, R. D. (1963). Izv. Akad. Nauk SSSR, Ser. Fiz. 27, 1203.
Spivak, G. V., and Lukjanov, A. E. (1966a). Izv. Akad. Nauk SSSR, Ser. Fiz. 30, 781.
Spivak, G. V., and Lukjanov, A. E. (1966b). Izv. Akad. Nauk SSSR, Ser. Fiz. 30, 803.
Spivak, G. V., Antoshin, M. C., Lukjanov, A. E., Rau, E. I., Ushakov, O. A., Akishin, A. I., Tokarev, G. A., Lyamov, V. E., and Barthel, J. (1972). Izv. Akad. Nauk SSSR, Ser. Fiz. 36, 1954.
Spivak, G. V., Igras, E., Pryamkova, I. A., and Zheludev, I. S. (1959a). Kristallografiya 4, 123.
Spivak, G. V., Ivanov, R. D., Pavlyuchenko, O. P., Sedov, N. N., and Shvetz, V. F. (1963b). Izv. Akad. Nauk SSSR, Ser. Fiz. 27, 1210.
Spivak, G. V., Ivanov, R. D., Pavlyuchenko, O. P., and Sedov, N. N. (1963c). Izv. Akad. Nauk SSSR, Ser. Fiz. 27, 1139.
Spivak, G. V., Ivannikov, V. P., Lukjanov, A. E., and Rau, E. I. (1978). J. Microsc. Spectrosc. Electron. 3, 89.
Spivak, G. V., Kirenski, L. V., Ivanov, R. D., and Sedov, N. N. (1961b). Izv. Akad. Nauk SSSR, Ser. Fiz. 25, 1465.
Spivak, G. V., Lukjanov, A. E., Toshev, S. D., and Koptsik, V. A. (1963a). Izv. Akad. Nauk SSSR, Ser. Fiz. 27, 1199.
Spivak, G. V., Pavlyuchenko, O. P., Ivanov, R. D., and Netischenskaja, G. P. (1964). In "Proc. 3rd Eur. Reg. Conf. on Electron Microscopy, Vol. A" (M. Titlbach, ed.), p. 293, Prague.
Spivak, G. V., Pavlyuchenko, O. P., and Lukjanov, A. E. (1966). Izv. Akad. Nauk SSSR, Ser. Fiz. 30, 813.
Spivak, G. V., Prilezhaeva, I. N., and Azovtsev, V. K. (1955). Dokl. Akad. Nauk SSSR 100, 965.
Spivak, G. V., Pryamkova, I. A., Fetisov, D. V., Kabanov, A. N., Lazareva, L. V., and Shilina, L. I. (1961a). Izv. Akad. Nauk SSSR, Ser. Fiz. 25, 683.
Spivak, G. V., Pryamkova, I. A., and Igras, E. (1959b). Izv. Akad. Nauk SSSR, Ser. Fiz. 23, 129.
Spivak, G. V., Pryamkova, I.
A., and Sedov, N. N. (1960). Izv. Akad. Nauk SSSR, Ser. Fiz. 24, 640.
Spivak, G. V., Rau, E. I., Lukjanov, A. E., and Gvosdover, R. S. (1971). Intermag-71. IEEE Trans. Magnet. MAG-7, 684.
Sutherland, R. R. (1971). J. Phys. D 4, 468.
Swiech, W., Rausenberger, B., Engel, W., Bradshaw, A. M., and Zeitler, E. (1993). Surface Sci. 294, 297.
Telieps, W., and Bauer, E. (1985). Ultramicroscopy 17, 57.
Tichonov, A. N. (1963a). Dokl. Akad. Nauk SSSR 151, 501.
Tichonov, A. N. (1963b). Dokl. Akad. Nauk SSSR 153, 49.
Veneklasen, L. H. (1991). Ultramicroscopy 36, 76.
Wang, S. T., Challis, L. J., and Little, W. A. (1966). In "Proc. 10th Int. Conf. Low Temperature Physics, Vol. A," p. 364, Moscow.
Warminski, T. (1970). Phys. Stat. Sol. (a) 3, K159.
Weidner, G., Weissmantel, C., Herberger, J., and Heydenreich, J. (1975a). Kristall Tech. 10, 285.
Weidner, G., Weissmantel, C., Herberger, J., Heydenreich, J., and Blüthner, K. (1975b). Kristall Tech. 10, 295.
Willing, E., and Hörl, E. M. (1978). In "Beiträge zur elektronenmikroskopischen Direktabbildung von Oberflächen, Vol. 11," p. 205, Münster.
Wiskott, D. (1956). Optik 13, 463.
Wittry, D. B., and Sullivan, P. A. (1970). In "Proc. 7th Int. Conf. on Electron Microscopy, Vol. 1" (P. Favard, ed.), p. 205, Grenoble.
Witzani, J., and Hörl, E. M. (1976). In "Proc. 6th Eur. Reg. Conf. on Electron Microscopy, Vol. 1" (D. G. Brandon, ed.), p. 325, Jerusalem.
Witzani, J., and Hörl, E. M. (1981). Scanning 4, 53.
ADVANCES IN IMAGING AND ELECTRON PHYSICS, VOL. 94
Rough Sets

JERZY W. GRZYMALA-BUSSE
Department of Electrical Engineering and Computer Science, University of Kansas, Lawrence, Kansas
I. Fundamentals of Rough Set Theory . . . . . . . . . . . . . . . . . . . . 152
   A. Indiscernibility . . . . . . . . . . . . . . . . . . . . . . . . . . 152
   B. Lower and Upper Approximations of a Set . . . . . . . . . . . . . . . 157
   C. Further Properties of Lower and Upper Approximations of a Set . . . . 161
   D. Definition of a Rough Set . . . . . . . . . . . . . . . . . . . . . . 165
   E. Certainty and Possibility . . . . . . . . . . . . . . . . . . . . . . 166
   F. Rough Definability of a Set . . . . . . . . . . . . . . . . . . . . . 166
   G. Rough Definability of a Partition . . . . . . . . . . . . . . . . . . 167
II. Rough Set Theory and Evidence Theory . . . . . . . . . . . . . . . . . . 171
   A. Basic Properties . . . . . . . . . . . . . . . . . . . . . . . . . . 171
   B. Dempster Rule of Combination . . . . . . . . . . . . . . . . . . . . 176
   C. Numerical Measures of Rough Sets . . . . . . . . . . . . . . . . . . 181
III. Some Applications of Rough Sets . . . . . . . . . . . . . . . . . . . . 182
   A. Attribute Significance . . . . . . . . . . . . . . . . . . . . . . . 182
   B. LERS - A Rule Induction System . . . . . . . . . . . . . . . . . . . 184
   C. Real-World Applications of LERS . . . . . . . . . . . . . . . . . . . 193
IV. Conclusions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 194
References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 194
Rough set theory was introduced by Z. Pawlak (1982); see also Pawlak (1991). Rough set theory is an excellent tool to handle granularity of data. It may be used to describe dependencies between attributes, to evaluate the significance of attributes, and to deal with inconsistent data, to name just a few possible uses out of many ways of applying rough set theory to real-world problems. Most important, it is an approach to handling imperfect data (uncertainty and vagueness). As such, it complements other theories that deal with imperfect data, such as probability theory, evidence theory, fuzzy set theory, etc. Rough set theory is frequently confused with fuzzy set theory. As Lotfi Zadeh, the creator of fuzzy set theory, said in his inaugural talk at the Third International Workshop on Rough Sets and Soft Computing (San Jose, CA, November 10-12, 1994): "Rough sets are not fuzzy sets and fuzzy sets are not rough sets."
Copyright © 1995 by Academic Press, Inc. All rights of reproduction in any form reserved.
I. FUNDAMENTALS OF ROUGH SET THEORY
The calculus of rough sets is based on an objective viewpoint of the world; i.e., all computations of the calculus are based on existing data characteristics. Many other ways of handling imperfect data are subjective, i.e., based on human perception and on the evaluation of real-world situations by experts.

A. Indiscernibility
The notion of indiscernibility, the main idea of rough set theory, is closely related to data granularity. The notion of indiscernibility may be introduced in two different ways. First, it may be discussed in the more general but less intuitive form as the pair of the universe U and an equivalence relation R (called an indiscernibility relation) defined on U. Second, it may also be presented in the form of a table, called an information table or an information system. We will use the latter approach because it is more application oriented.

1. Information Tables

An example of an information table is presented in Table I, where eight cars, c1, c2, ..., c8, are characterized by two attributes, Interior and Noise. Thus, the universe is the set U = {c1, c2, c3, c4, c5, c6, c7, c8} of all entities c1, c2, ..., c8. Another example of an information table is presented in Table II, where the universe U is identical with that of Table I. In Table II, however, entities are characterized by four attributes: Capacity, Interior, Noise, and Vibration. Since Table II gives more information about the entities, we may distinguish (discern) entities better than in Table I. For example, in Table I there is no

TABLE I
INFORMATION TABLE

          Attributes
     Interior    Noise
c1   fair        medium
c2   fair        medium
c3   good        medium
c4   fair        low
c5   fair        medium
c6   excellent   low
c7   good        low
c8   good        low

TABLE II
INFORMATION TABLE

               Attributes
     Capacity  Interior   Noise    Vibration
c1   5         fair       medium   medium
c2   4         fair       medium   medium
c3   5         good       medium   medium
c4   5         fair       low      low
c5   2         fair       medium   low
c6   4         excellent  low      medium
c7   4         good       low      low
c8   2         good       low      low
way to tell the difference between cars c1 and c2, because both c1 and c2 are described by identical values of all attributes: fair for Interior and medium for Noise. We say that c1 and c2 are indiscernible. In Table II we may tell the difference between cars c1 and c2: For c1 the value of the attribute Capacity is 5, for c2 it is 4. Let A be the set of all attributes of an information table and let P be a nonempty subset of A. Let V denote the set of all attribute values, for all attributes of the information table. An information table represents a function, called an information function, denoted p, which maps U × A into V. For example, for Table I, p(c1, Interior) = fair. With a set P we may associate an indiscernibility relation on U, denoted IND(P) and defined as follows: (x, y) ∈ IND(P) if and only if p(x, a) = p(y, a) for all a ∈ P.
For example, in Table II,

IND({Capacity}) = {(c1, c1), (c1, c3), (c1, c4), (c3, c1), (c3, c3), (c3, c4), (c4, c1), (c4, c3), (c4, c4), (c2, c2), (c2, c6), (c2, c7), (c6, c2), (c6, c6), (c6, c7), (c7, c2), (c7, c6), (c7, c7), (c5, c5), (c5, c8), (c8, c5), (c8, c8)},

IND({Capacity, Interior}) = {(c1, c1), (c1, c4), (c4, c1), (c4, c4), (c2, c2), (c3, c3), (c5, c5), (c6, c6), (c7, c7), (c8, c8)}.
We may easily observe that IND({Capacity, Interior}) ⊆ IND({Capacity}).
Obviously, IND(P) is an equivalence relation. Hence it may be represented by the partition on U induced by IND(P), i.e., the quotient set U/IND(P). In our example,

U/IND({Capacity}) = {{c1, c3, c4}, {c2, c6, c7}, {c5, c8}}

and

U/IND({Capacity, Interior}) = {{c1, c4}, {c2}, {c3}, {c5}, {c6}, {c7}, {c8}}.
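The quotient sets above, and the reduct test discussed next, are mechanical to compute. Below is a minimal Python sketch; the dict encoding of Table II and the helper names are my own illustrative assumptions, not notation from the text:

```python
# Table II encoded as a dict: entity -> {attribute: value}.
TABLE = {
    "c1": {"Capacity": 5, "Interior": "fair",      "Noise": "medium", "Vibration": "medium"},
    "c2": {"Capacity": 4, "Interior": "fair",      "Noise": "medium", "Vibration": "medium"},
    "c3": {"Capacity": 5, "Interior": "good",      "Noise": "medium", "Vibration": "medium"},
    "c4": {"Capacity": 5, "Interior": "fair",      "Noise": "low",    "Vibration": "low"},
    "c5": {"Capacity": 2, "Interior": "fair",      "Noise": "medium", "Vibration": "low"},
    "c6": {"Capacity": 4, "Interior": "excellent", "Noise": "low",    "Vibration": "medium"},
    "c7": {"Capacity": 4, "Interior": "good",      "Noise": "low",    "Vibration": "low"},
    "c8": {"Capacity": 2, "Interior": "good",      "Noise": "low",    "Vibration": "low"},
}

def quotient(table, attrs):
    """U/IND(P): two entities share a block iff they agree on every attribute in attrs."""
    blocks = {}
    for entity, row in table.items():
        blocks.setdefault(tuple(row[a] for a in attrs), set()).add(entity)
    return {frozenset(b) for b in blocks.values()}

A = ["Capacity", "Interior", "Noise", "Vibration"]
P = ["Capacity", "Interior", "Noise"]

# P is a reduct of A: it induces the same partition as A, and no attribute
# can be dropped from P without changing that partition.
is_reduct = quotient(TABLE, P) == quotient(TABLE, A) and all(
    quotient(TABLE, [a for a in P if a != dropped]) != quotient(TABLE, A)
    for dropped in P
)
```

With this encoding, quotient(TABLE, ["Capacity"]) yields the three blocks {c1, c3, c4}, {c2, c6, c7}, and {c5, c8} listed above, and is_reduct evaluates to True.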
Blocks of a partition U/IND(P) are also called P-elementary sets. Using the idea of the indiscernibility relation, we may reduce the attribute set to some essential subset. Intuitively, we look for a subset P of the attribute set A whose discernibility power is the same as that of the entire set A. Such a subset is called a reduct. Thus, a subset P of the attribute set A is said to be a reduct if and only if IND(P) = IND(A) and P is minimal with this property, i.e., IND(A) ≠ IND(P − {a}) for all a ∈ P. For the information table presented in Table II there exist precisely two reducts: {Capacity, Interior, Noise} and {Capacity, Interior, Vibration}. We will show only that {Capacity, Interior, Noise} is a reduct of A = {Capacity, Interior, Noise, Vibration}. First,

U/IND({Capacity, Interior, Noise}) = {{c1}, {c2}, {c3}, {c4}, {c5}, {c6}, {c7}, {c8}} = U/IND({Capacity, Interior, Noise, Vibration}).
Second,

U/IND({Capacity, Interior}) = {{c1, c4}, {c2}, {c3}, {c5}, {c6}, {c7}, {c8}} ≠ U/IND({Capacity, Interior, Noise}),

U/IND({Interior, Noise}) = {{c1, c2, c5}, {c3}, {c4}, {c6}, {c7, c8}} ≠ U/IND({Capacity, Interior, Noise}),
U/IND({Capacity, Noise}) = {{c1, c3}, {c2}, {c4}, {c5}, {c6, c7}, {c8}} ≠ U/IND({Capacity, Interior, Noise}).
Thus, IND(A) ≠ IND({Capacity, Interior, Noise} − {a}) for all a ∈ {Capacity, Interior, Noise}, so {Capacity, Interior, Noise} is a reduct of A.

2. Decision Tables-Global Coverings
A decision table is a special case of the information table, where one attribute is selected and called a decision, usually denoted d. For simplicity, let us denote the remaining set of attributes by A. An example of a decision table is presented in Table III, where the decision is called Quality.
TABLE III
DECISION TABLE

               Attributes                       Decision
     Capacity  Interior   Noise    Vibration    Quality
c1   5         fair       medium   medium       low
c2   4         fair       medium   medium       low
c3   5         good       medium   medium       low
c4   5         fair       low      low          medium
c5   2         fair       medium   low          medium
c6   4         excellent  low      medium       high
c7   4         good       low      low          high
c8   2         good       low      low          high
We may interpret Table III in the following way: Cars c1, c2, ..., c8, characterized by attributes Capacity, Interior, Noise, and Vibration, were classified according to their quality by an expert. Any block of the partition U/IND({d}) is called a concept. In Table III, three concepts are defined: {c1, c2, c3} of cars of low quality, {c4, c5} of cars of medium quality, and {c6, c7, c8} of cars of high quality. The question is what subset P of the set A describes well all the concepts, i.e., the partition U/IND({d}). For such a set P we say that {d} depends on P. The necessary and sufficient condition for {d} depending on P is

U/IND(P) ≤ U/IND({d}).

The sign "≤" in the above condition concerns partitions. For partitions π and τ on U, π ≤ τ if and only if for each block B of π there exists a block B′ of τ such that B ⊆ B′. In other words, {d} depends on P if and only if every concept defined by {d} is the union of some P-elementary sets (or blocks of U/IND(P)). In our example of Table III, {Quality} depends on {Capacity, Noise}. Indeed,
U/IND({Quality}) = {{c1, c2, c3}, {c4, c5}, {c6, c7, c8}},

U/IND({Capacity, Noise}) = {{c1, c3}, {c2}, {c4}, {c5}, {c6, c7}, {c8}},

and U/IND({Capacity, Noise}) ≤ U/IND({Quality}). Obviously, the first concept, set {c1, c2, c3}, is the union of {c1, c3} and {c2}; the second concept, set {c4, c5}, is the union of {c4} and {c5}; and the third concept, set {c6, c7, c8}, is the union of {c6, c7} and {c8}.
A minimal subset P such that {d} depends on P is called a global covering of {d}. In the example of Table III, there exist precisely two global coverings of {Quality}: the sets {Capacity, Noise} and {Interior, Vibration}. Again, we will show only that the first set, {Capacity, Noise}, is a global covering of {Quality}. All that remains to show is that the set {Capacity, Noise} is minimal. We may prove this by showing that {Quality} does not depend on any proper subset of {Capacity, Noise}. This is true because

U/IND({Capacity}) = {{c1, c3, c4}, {c2, c6, c7}, {c5, c8}}
≰ U/IND({Quality}) = {{c1, c2, c3}, {c4, c5}, {c6, c7, c8}}

and

U/IND({Noise}) = {{c1, c2, c3, c5}, {c4, c6, c7, c8}} ≰ U/IND({Quality}) = {{c1, c2, c3}, {c4, c5}, {c6, c7, c8}}.
3. Decision Tables-Local Coverings
So far we have discussed dependence of the decision on a subset of the attribute set. All involved variables, the attributes and the decision, are considered globally, i.e., for all elements of the universe. A similar analysis can be done locally, i.e., not for whole variables but for variable-value pairs, or for a subset of the universe that is described by a given value of a variable. Let x be a variable (an attribute or a decision) and let u be a value of x. The block of a variable-value pair (x, u), denoted [(x, u)], is the set of all elements of the universe U that have value u for variable x. Thus, a concept is the block [(d, w)] for some value w of the decision d. For example, in Table III, [(Quality, low)] = {c1, c2, c3}. The block of (Capacity, 5) is the set {c1, c3, c4}. Let B be a subset of the universe U. Let T be a nonempty set of attribute-value pairs, where all involved attributes are different. The block of T, denoted [T], is defined as the intersection

[T] = ⋂ [(x, u)], taken over all (x, u) ∈ T.
In our example from Table III, let B be the block of (Quality, low), i.e., the set {c1, c2, c3}. Let X be the set {(Capacity, 5), (Interior, fair)}. The block [X] of X is equal to

[(Capacity, 5)] ∩ [(Interior, fair)] = {c1, c3, c4} ∩ {c1, c2, c4, c5} = {c1, c4} ⊈ {c1, c2, c3}.
Thus B = {c1, c2, c3} does not depend on X. On the other hand, for Y equal to {(Capacity, 5), (Noise, medium)}, the block [Y] is equal to

[(Capacity, 5)] ∩ [(Noise, medium)] = {c1, c3, c4} ∩ {c1, c2, c3, c5} = {c1, c3} ⊆ {c1, c2, c3} = B,

so B = {c1, c2, c3} depends on {(Capacity, 5), (Noise, medium)}. We say that B depends on a set T of attribute-value pairs if and only if [T] ⊆ B. Set T is a minimal complex of B if and only if B depends on T and no proper subset T′ of T exists such that B depends on T′. The set Y = {(Capacity, 5), (Noise, medium)} is a minimal complex of B = {c1, c2, c3}, since B does not depend on any proper subset of Y, because

[(Capacity, 5)] ⊈ B

and

[(Noise, medium)] ⊈ B.
However, there exist more minimal complexes of B. For example, Z = {(Interior, fair), (Vibration, medium)} is another minimal complex of B. Let 𝒯 be a nonempty family of nonempty sets of attribute-value pairs. The family 𝒯 is called a local covering of B if and only if the following three conditions are satisfied:

1. Each member T of 𝒯 is a minimal complex of B;
2. ∪_{T∈𝒯} [T] = B;
3. 𝒯 is minimal, i.e., no proper subset of 𝒯 satisfies conditions 1 and 2.

For the example in Table III, the set {Y, Z}, where Y = {(Capacity, 5), (Noise, medium)} and Z = {(Interior, fair), (Vibration, medium)}, is a local covering of B = {c1, c2, c3}, because

[Y] = {c1, c3},

[Z] = [(Interior, fair)] ∩ [(Vibration, medium)] = {c1, c2, c4, c5} ∩ {c1, c2, c3, c6} = {c1, c2},

and

[Y] ∪ [Z] = {c1, c3} ∪ {c1, c2} = {c1, c2, c3} = B.

All three conditions for the set {Y, Z} to be a local covering are satisfied.

B. Lower and Upper Approximations of a Set
The notions of lower and upper approximations of a set, a subset of the universe U , are the most important ideas of rough set theory. Usually, the
158
JERZY W. GRZYMALA-BUSSE
common assumption is that the universe is presented in the form of a decision table. Also, the set for which lower and upper approximations are computed is a concept, i.e., the set of all examples for which the value of the decision is fixed. Finally, lower and upper approximations are useful when the decision table is inconsistent, i.e., there exist at least two members x and y of the universe U such that ρ(x, a) = ρ(y, a) for all attributes a ∈ A but ρ(x, d) ≠ ρ(y, d); that is, the entities x and y are described by the same values of all attributes, yet the values of the decision d for the two entities are different. An example of an inconsistent decision table is presented in Table IV. It is clear that entities c1 and c3 are described identically by both attributes, Capacity and Vibration. The value of Capacity is 5 and the value of Vibration is medium for both c1 and c3. Thus these two entities are just duplicates. However, entities c2 and c6 are also described by identical values (4 and medium) of both attributes, yet their decision values are different (low and high). Thus Table IV is inconsistent. Yet another reason for the inconsistency of Table IV is examples c5 and c8. Attribute values are identical for both c5 and c8, but the decision values are different. Figure 1 depicts Table IV. Both attributes from Table IV, Capacity and Vibration, are completely presented in Fig. 1. Only one concept, [(Quality, low)] = {c1, c2, c3}, denoted by B, is presented in Fig. 1. It is clear that Table IV is inconsistent because the concept B cannot be defined using both attributes, Capacity and Vibration: entities c2 and c6 are described in the same way by both attributes, yet c2 belongs to B and c6 does not. In general, for a subset P of the set A of all attributes, a subset X of U that is the union of elements of U/IND(P) is called definable.

TABLE IV
DECISION TABLE

        Attributes            Decision
        Capacity  Vibration   Quality
c1      5         medium      low
c2      4         medium      low
c3      5         medium      low
c4      5         low         medium
c5      2         low         medium
c6      4         medium      high
c7      4         low         high
c8      2         low         high
FIGURE 1. Concept B (axes: Capacity and Vibration).
In the case of Table IV and P = A = {Capacity, Vibration},

U/IND(P) = {{c1, c3}, {c2, c6}, {c4}, {c5, c8}, {c7}}.

Any subset X of U that is the union of some of the following sets, {c1, c3}, {c2, c6}, {c4}, {c5, c8}, {c7}, is definable. Thus it is clear that the concept B = {c1, c2, c3} is not definable. For an indefinable set, two important definable sets may be brought into the picture. The first set is the greatest definable set contained in B; the second is the least definable set containing B. The former, called the lower approximation of B and denoted P̲B, is the union of all elements X of U/IND(P) that are contained in B, i.e., X ⊆ B. The latter, called the upper approximation of B and denoted P̄B, is the union of all elements X of U/IND(P) such that their intersection with B is nonempty, i.e., X ∩ B ≠ ∅. In our example A̲B = {c1, c3} and ĀB = {c1, c2, c3, c6}. Sets A̲B and ĀB are depicted in Fig. 2 and Fig. 3, respectively. A more general example, with attributes Attribute-1 and Attribute-2, is presented in Fig. 4. Only one
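The lower and upper approximations of B in Table IV can be computed directly from the definitions. A minimal sketch (the table values are those of Table IV above):

```python
# Table IV: each case mapped to its (Capacity, Vibration) values.
table = {
    "c1": (5, "medium"), "c2": (4, "medium"), "c3": (5, "medium"),
    "c4": (5, "low"), "c5": (2, "low"), "c6": (4, "medium"),
    "c7": (4, "low"), "c8": (2, "low"),
}

def ind_classes(table):
    """Blocks of U/IND(A): cases grouped by identical attribute values."""
    classes = {}
    for case, values in table.items():
        classes.setdefault(values, set()).add(case)
    return list(classes.values())

def lower(classes, b):
    """Union of all indiscernibility classes contained in B."""
    return set().union(*(c for c in classes if c <= b))

def upper(classes, b):
    """Union of all indiscernibility classes that intersect B."""
    return set().union(*(c for c in classes if c & b))

B = {"c1", "c2", "c3"}          # the concept [(Quality, low)]
cls = ind_classes(table)
print(sorted(lower(cls, B)))    # ['c1', 'c3']
print(sorted(upper(cls, B)))    # ['c1', 'c2', 'c3', 'c6']
```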
FIGURE 2. Lower approximation A̲B of set B (axes: Capacity and Vibration).
concept, denoted by C, is presented in Fig. 4. The lower approximation A̲C is depicted in Fig. 5 and the upper approximation ĀC is depicted in Fig. 6, where A = {Attribute-1, Attribute-2}. Lower and upper approximations have the following properties. Let X and Y be subsets of the universe U. Let P be a subset of the set A of all attributes. Then

P̲∅ = ∅ = P̄∅,
P̲X ⊆ X ⊆ P̄X,
P̲U = U = P̄U,
P̲(X ∪ Y) ⊇ P̲X ∪ P̲Y,
P̄(X ∪ Y) = P̄X ∪ P̄Y,
P̲(X ∩ Y) = P̲X ∩ P̲Y,
P̄(X ∩ Y) ⊆ P̄X ∩ P̄Y,
FIGURE 3. Upper approximation ĀB of set B (axes: Capacity and Vibration).
P̲(X − Y) ⊆ P̲X − P̲Y,
P̄(X − Y) ⊇ P̄X − P̄Y,
P̲(U − X) = U − P̄X,
P̄(U − X) = U − P̲X,
P̄(P̄X) = P̲(P̄X) = P̄X,
P̲(P̲X) = P̄(P̲X) = P̲X.

C. Further Properties of Lower and Upper Approximations of a Set
Properties of lower and upper approximations have been studied in many works. Here we will quote a sample of results from Chan and Grzymala-Busse (1989). First of all, we need to extend the definition of dependence from the singleton set consisting of the decision d to arbitrary subsets of the set A of all attributes.
FIGURE 4. Concept C (axes: Attribute-1 and Attribute-2).
Let P and Q be subsets of A. We say that set Q depends on set P, denoted P → Q, if and only if U/IND(P) ≤ U/IND(Q). Note that from this viewpoint the decision d is a variable like any of the attributes. We could include d in the set A, and all results that follow would still remain valid. The decision d is not included only for the sake of simplicity. For example, in Table III, {Capacity} does not depend on {Interior, Noise, Vibration}, because

U/IND({Capacity}) = {{c1, c3, c4}, {c2, c6, c7}, {c5, c8}}

and

U/IND({Interior, Noise, Vibration}) = {{c1, c2}, {c3}, {c4}, {c5}, {c6}, {c7}, {c8}}.

On the other hand, {Vibration} does depend on {Capacity, Interior, Noise}, because

U/IND({Vibration}) = {{c1, c2, c3, c6}, {c4, c5, c7, c8}}
FIGURE 5. Lower approximation A̲C of set C (axes: Attribute-1 and Attribute-2).
and

U/IND({Capacity, Interior, Noise}) = {{c1}, {c2}, {c3}, {c4}, {c5}, {c6}, {c7}, {c8}}.

Theorem 1. Let P and Q be subsets of A. The following conditions are equivalent:

1. P → Q.
2. For each X ⊆ U, Q̲X ⊆ P̲X.
3. For each X ⊆ U, P̄X ⊆ Q̄X.

From Theorem 1 it follows that P̲X ∪ Q̲X ⊆ (P ∪ Q)̲X and (P ∪ Q)̄X ⊆ P̄X ∩ Q̄X.
One of the crucial questions, important for many applications, is: What is a subset P of the set A of all attributes such that a concept X is definable by P? In other words, we would like to describe the set X using only attributes from P. That can be done by testing whether U/IND(P) ≤ {X, U − X}. If the
FIGURE 6. Upper approximation ĀC of set C (axes: Attribute-1 and Attribute-2).
answer is yes, then X can be defined by attributes from P. Yet another way to answer the same question is presented below. Let X ⊆ U and P ⊆ A. The set X − P̲X, denoted b̲P(X), will be called a lower boundary of X. By analogy, the set P̄X − X, denoted b̄P(X), will be called an upper boundary of X. The lower boundary b̲P(X) contains those elements of X that cannot be classified, using attributes from P, as certainly belonging to the set X. Similarly, the upper boundary b̄P(X) contains those elements of U that, using attributes from P, possibly belong to the set X but do not belong to the set X itself. An empty lower boundary b̲P(X) means that X is definable by attributes from P. Obviously, b̲P(X) = ∅ if and only if b̄P(X) = ∅. For Table III, say that X = {c1, c2, c3} and P = {Capacity}. Then

b̲P(X) = {c1, c2, c3} − ∅ = {c1, c2, c3},
b̄P(X) = {c1, c2, c3, c4, c6, c7} − {c1, c2, c3} = {c4, c6, c7}.
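The two boundaries above follow directly from the lower and upper approximations. A minimal sketch, restricting Table III to the single attribute Capacity (the Capacity values are taken from the partition quoted earlier in the text):

```python
# Table III restricted to the attribute Capacity.
capacity = {"c1": 5, "c2": 4, "c3": 5, "c4": 5, "c5": 2,
            "c6": 4, "c7": 4, "c8": 2}

def classes(assign):
    """Indiscernibility classes under a single attribute."""
    out = {}
    for case, v in assign.items():
        out.setdefault(v, set()).add(case)
    return list(out.values())

def lower(cls, x):
    return set().union(*(c for c in cls if c <= x))

def upper(cls, x):
    return set().union(*(c for c in cls if c & x))

X = {"c1", "c2", "c3"}
cls = classes(capacity)
lower_boundary = X - lower(cls, X)      # X minus the lower approximation
upper_boundary = upper(cls, X) - X      # the upper approximation minus X
print(sorted(lower_boundary))   # ['c1', 'c2', 'c3']
print(sorted(upper_boundary))   # ['c4', 'c6', 'c7']
```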
Say that X is not definable by a subset P of the set A of all attributes. We want to add another subset Q of the set A to the set P so that the union of P and Q will be sufficient to define X. The question is whether we may test definability by P ∪ Q using the boundaries of X with respect to P and Q. The following result answers this question.

Theorem 2. Let X ⊆ U and let P, Q ⊆ A.

1. If b̲P(X) ∩ b̲Q(X) = ∅, then X is (P ∪ Q)-definable.
2. If b̄P(X) ∩ b̄Q(X) = ∅, then X is (P ∪ Q)-definable.

The converse of (1) and the converse of (2) are both false. Let X = {c1, c2, c3}, P = {Capacity}, and Q = {Noise}. Then

b̲Q(X) = {c1, c2, c3} − ∅ = {c1, c2, c3},
b̄Q(X) = {c1, c2, c3, c5} − {c1, c2, c3} = {c5},
b̲P(X) ∩ b̲Q(X) = {c1, c2, c3} ≠ ∅,

but

b̄P(X) ∩ b̄Q(X) = {c4, c6, c7} ∩ {c5} = ∅,

so X is (P ∪ Q)-definable.

D. Definition of a Rough Set
Strangely enough, in spite of a few hundred papers published about rough set theory, there is some ambiguity about the definition of a rough set. It must be stressed that the issue is not so controversial and even not so important, because the main tools of rough set theory, used for any theoretical or practical purpose, are lower and upper approximations of a set anyway. There are three approaches used in defining the idea of the rough set. According to the first approach, a rough set is the family of all subsets of U having the same lower and upper approximations; see, e.g., Grzymala-Busse (1991). In the example from Table IV, the rough set associated with the concept B = {c1, c2, c3} and the set A of attributes Capacity and Vibration is

{{c1, c2, c3}, {c1, c3, c6}},

because

A̲({c1, c2, c3}) = A̲({c1, c3, c6}) = {c1, c3}

and

Ā({c1, c2, c3}) = Ā({c1, c3, c6}) = {c1, c2, c3, c6}.
In the second approach it is assumed that a rough set is the ordered pair (P̲B, P̄B); see, e.g., Dubois and Prade (1992). Using this approach, in the example from Table IV, the rough set associated with the concept B = {c1, c2, c3} and the set A of attributes Capacity and Vibration is ({c1, c3}, {c1, c2, c3, c6}). The third approach is based on the idea of the definable set; see, e.g., Pawlak (1991): any set that is not definable is rough. Obviously, in all three approaches we assume that the set P of attributes and the concept B are given (P is a subset of the set A of all attributes).
Lower and upper approximations of a set allow for a simple interpretation of elements belonging to the set. Say that the concept B is not definable with a given subset P of the set A of all attributes. An element x ∈ U is said to be certainly in B if and only if x ∈ P̲B. An element x ∈ U is said to be possibly in B if and only if x ∈ P̄B. Both definitions have a straightforward interpretation. We have to decide whether x is or is not in B on the basis of accessible information, i.e., attributes from P. This means that we may describe the membership of x in terms of members of U/IND(P). On the other hand, both sets P̲B and P̄B are constructed from U/IND(P). Moreover,

P̲B ⊆ B ⊆ P̄B,

so if x ∈ P̲B, then x is certainly in B. However, if x ∈ P̄B, then x does not need to be in B. If B is not definable, P̄B − B ≠ ∅, and x may be a member of P̄B − B and not a member of B. Thus x is only possibly in B. In our example, c1 ∈ A̲B, so c1 ∈ B. Also, c2 ∈ ĀB and c2 ∈ B (but c6 ∈ ĀB and c6 ∉ B).

F. Rough Definability of a Set
Following Pawlak (1982), we will discuss finer forms of definable and undefinable sets. First, it is not difficult to recognize that the set B, a subset of U, is not definable using a given subset P of the set A of all attributes if and only if P̲B ≠ B. The last condition is equivalent to another condition: P̄B ≠ B. We may further distinguish four cases:

1. Set B is roughly definable if and only if P̲B ≠ ∅ and P̄B ≠ U.
2. Set B is internally undefinable if and only if P̲B = ∅ and P̄B ≠ U.
3. Set B is externally undefinable if and only if P̲B ≠ ∅ and P̄B = U.
4. Set B is totally undefinable if and only if P̲B = ∅ and P̄B = U.
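The four cases can be checked mechanically once the approximations are known. A minimal sketch (the helper takes precomputed lower and upper approximations as inputs):

```python
# Classify a set B given its lower and upper approximations with
# respect to some attribute set P.
def classify(lower_b, upper_b, b, universe):
    if lower_b == b:                        # equivalently upper_b == b
        return "definable"
    if lower_b and upper_b != universe:
        return "roughly definable"
    if not lower_b and upper_b != universe:
        return "internally undefinable"
    if lower_b and upper_b == universe:
        return "externally undefinable"
    return "totally undefinable"

U = {f"c{i}" for i in range(1, 9)}
B = {"c1", "c2", "c3"}
# Approximations of B in Table IV, as computed in the text.
print(classify({"c1", "c3"}, {"c1", "c2", "c3", "c6"}, B, U))
# roughly definable
```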
TABLE V
DECISION TABLE

        Attributes            Decision
        Capacity  Vibration   Quality
c1      5         medium      low
c3      5         medium      medium
c9      4         low         medium
c10     2         low         medium

TABLE VI
DECISION TABLE

        Attributes            Decision
        Capacity  Vibration   Quality
c1      5         medium      low
c3      5         medium      medium
c9      4         low         medium
c11     4         low         high
For Table IV and P = A, where A is the set of both attributes, Capacity and Vibration, the set B = {c1, c2, c3} is roughly definable because P̲B = {c1, c3} ≠ ∅ and P̄B = {c1, c2, c3, c6} ≠ U = {c1, c2, ..., c8}. Table V illustrates cases (2) and (3). For B = {c1} and P = A, P̲B = ∅ and P̄B = {c1, c3} ≠ {c1, c3, c9, c10} = U. Thus B is internally undefinable. On the other hand, for B = {c3, c9, c10}, P̲B = {c9, c10} and P̄B = {c1, c3, c9, c10} = U, i.e., B is externally undefinable. Similarly, Table VI presents case (4) for B = {c3, c9}, since P̲B = ∅ and P̄B = {c1, c3, c9, c11} = U. Thus set B is totally undefinable. According to the above definitions, every nonempty subset of U that is definable and different from the universe U is roughly definable. The sets ∅ and U are definable but not roughly definable. The empty set ∅ is internally undefinable, while the universe U is externally undefinable.

G. Rough Definability of a Partition
A partition on the universe U that we are discussing here corresponds to a classification assigned by an expert. For example, in Table IV we may assume that the values of the decision Quality were given by an expert. The corresponding partition π, or classification, is U/IND({Quality}), i.e., π = {{c1, c2, c3}, {c4, c5}, {c6, c7, c8}}.
TABLE VII
DECISION TABLE

        Attributes            Decision
        Capacity  Vibration   Quality
c1      5         medium      low
c2      4         medium      low
c3      5         medium      low
c4      5         low         medium
c5      2         low         medium
c6      4         medium      high
c7      4         low         high
We are interested in how to categorize partitions π, or, how to extend the four definitions of rough definability of a set to corresponding definitions for partitions. First, the question is: Do we want all blocks of π or only some blocks of π to satisfy a definition? For example, in Table IV and the partition π = U/IND({Quality}), all three blocks {c1, c2, c3}, {c4, c5}, and {c6, c7, c8} of π are undefinable. On the other hand, in Table VII, only the sets {c1, c2, c3} and {c6, c7} are undefinable, while the set {c4, c5} is definable. Hence, for a partition π it may happen that all blocks satisfy some property or that only some blocks satisfy this property. In the former case we will call such a property strong; in the latter case, the property will be called weak. Therefore, a partition π = {X1, X2, ..., Xn} on U is said to be strongly definable if and only if for each block Xi of π we have P̲Xi = Xi (or, equivalently, P̄Xi = Xi). For Table III, the partition π = {{c1, c2, c3}, {c4, c5}, {c6, c7, c8}} is strongly definable, because

P̲({c1, c2, c3}) = {c1, c2, c3},
P̲({c4, c5}) = {c4, c5},

and

P̲({c6, c7, c8}) = {c6, c7, c8}.

A partition π = {X1, X2, ..., Xn} on U is said to be weakly definable if and only if there exists a block Xi of π for which we have P̲Xi = Xi (or, equivalently, P̄Xi = Xi). For Table VII, the partition π = {{c1, c2, c3}, {c4, c5}, {c6, c7}} is weakly definable, since there exists a block {c4, c5} of π such that P̲({c4, c5}) = {c4, c5}. Since there exist four different types of undefinable sets (roughly definable, internally undefinable, externally undefinable, and totally undefinable), we may expect to have eight different types of partitions (each of the four listed types can be further categorized as strong or weak). Surprisingly enough, this is not true
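The strong/weak distinction is easy to check in code. A minimal sketch on Table VII (the `lower` helper repeats the earlier sketches so this snippet is self-contained):

```python
def lower(classes, b):
    """Union of all indiscernibility classes contained in B."""
    return set().union(*(c for c in classes if c <= b))

# U/IND(A) for Table VII, and the expert partition U/IND({Quality}).
classes = [{"c1", "c3"}, {"c2", "c6"}, {"c4"}, {"c5"}, {"c7"}]
partition = [{"c1", "c2", "c3"}, {"c4", "c5"}, {"c6", "c7"}]

definable = [lower(classes, block) == block for block in partition]
print(all(definable))   # False: the partition is not strongly definable
print(any(definable))   # True: weakly definable ({c4, c5} is definable)
```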
- there are only five corresponding types of partitions; see Grzymala-Busse (1988). We will quote the following definitions and results from Grzymala-Busse (1988). We will assume that P is a nonempty subset of the set A of all attributes and that π is an expert's classification, i.e., some partition {X1, X2, ..., Xn} on U, where for any block Xi ∈ π, i = 1, 2, ..., n, any two elements x, y are elements of Xi if and only if ρ(x, d) = ρ(y, d), where d is the decision. A partition π on U is said to be roughly definable, weak if and only if there exists a number i ∈ {1, 2, ..., n} such that P̲Xi ≠ ∅. For Table V and P = A = {Capacity, Vibration}, the partition π = {{c1}, {c3, c9, c10}} is roughly definable, weak, since

P̲({c1}) = ∅,   P̄({c1}) = {c1, c3},
P̲({c3, c9, c10}) = {c9, c10},   P̄({c3, c9, c10}) = U.
In our example of a roughly definable, weak partition there exists a block Xi of π such that P̄Xi ≠ U. In general, this is true for any roughly definable, weak partition, as follows from the following result (Grzymala-Busse, 1988):

Theorem 1. Let n > 1 and let there exist i ∈ {1, 2, ..., n} such that P̲Xi ≠ ∅. Then there exists j ∈ {1, 2, ..., n} such that P̄Xj ≠ U.

The following result has also been shown by Grzymala-Busse (1988):

Theorem 2. Let n > 1 and let P̲Xi ≠ ∅ for each i ∈ {1, 2, ..., n}. Then P̄Xj ≠ U for each j ∈ {1, 2, ..., n}.
We define a partition π = {X1, X2, ..., Xn} on U as roughly definable, strong if and only if P̲Xi ≠ ∅ for each i ∈ {1, 2, ..., n}. Due to Theorem 2, for each block Xi of a roughly definable, strong partition π on U we have that P̄Xi ≠ U. Table IV illustrates this case: the partition π = {{c1, c2, c3}, {c4, c5}, {c6, c7, c8}} is roughly definable, strong for P = A. The partition π = {X1, X2, ..., Xn} on U will be called internally undefinable, weak if and only if P̲Xi = ∅ for each i ∈ {1, 2, ..., n} and there exists j ∈ {1, 2, ..., n} such that P̄Xj ≠ U. For Table VI, the partition π = {{c1}, {c3, c9}, {c11}} with P = A satisfies

P̲({c1}) = ∅,   P̄({c1}) = {c1, c3},
P̲({c3, c9}) = ∅,   P̄({c3, c9}) = U,
P̲({c11}) = ∅,   P̄({c11}) = {c9, c11},

so π is internally undefinable, weak. A partition π = {X1, X2, ..., Xn} on U will be called internally undefinable, strong if and only if P̲Xi = ∅ and P̄Xi ≠ U for each i ∈ {1, 2, ..., n}. Table VIII illustrates this case for the partition π = {{c1, c12}, {c3, c5}, {c7, c8}}
TABLE VIII
DECISION TABLE

        Attributes            Decision
        Capacity  Vibration   Quality
c1      5         medium      low
c3      5         medium      medium
c5      2         low         medium
c7      4         low         high
c8      2         low         high
c12     4         low         low
and P = A, since

P̲({c1, c12}) = ∅,   P̄({c1, c12}) = {c1, c3, c7, c12},
P̲({c3, c5}) = ∅,   P̄({c3, c5}) = {c1, c3, c5, c8},
P̲({c7, c8}) = ∅,   P̄({c7, c8}) = {c5, c7, c8, c12},

so π is internally undefinable, strong. Theorem 1 may be given in another form (Grzymala-Busse, 1988):

Theorem 1'. Let n > 1 and let P̄Xi = U for each i ∈ {1, 2, ..., n}. Then P̲Xj = ∅ for each j ∈ {1, 2, ..., n}.

TABLE IX
DECISION TABLE

        Attributes            Decision
        Capacity  Vibration   Quality
c1      5         medium      low
c3      5         medium      medium
c9      4         low         medium
c13     4         low         low
Due to Theorem 1', the condition that P̄Xi = U for each i ∈ {1, 2, ..., n} is so strong that analogs of externally undefinable sets do not exist for partitions on U. Moreover, there exists only one kind of partition π = {X1, X2, ..., Xn} on U for which P̄Xi = U for each i ∈ {1, 2, ..., n}, given by the following definition. The partition π = {X1, X2, ..., Xn} on U, n > 1, is called totally undefinable if and only if P̄Xi = U for each i ∈ {1, 2, ..., n}. In Table IX, a partition
π = {{c1, c13}, {c3, c9}} is totally undefinable for P = A, because

P̲({c1, c13}) = ∅,   P̄({c1, c13}) = U,
P̲({c3, c9}) = ∅,   P̄({c3, c9}) = U.
II. ROUGH SET THEORY AND EVIDENCE THEORY
Evidence theory, also called Dempster-Shafer theory, was introduced by A. P. Dempster (1967, 1968) and developed further by G. Shafer (1976). It is a generalization of probability theory. A. Basic Properties
In probability theory, probabilities are assigned to elements of the sample space. The probability of any event, a subset of the sample space, may then be computed by Kolmogorov's addition axiom. In evidence theory this is not so. Let Θ be a frame of discernment (the notion that corresponds to the sample space of probability theory). The basic idea of evidence theory is a basic probability assignment, i.e., a function m: 2^Θ → [0, 1] such that

1. m(∅) = 0,
2. Σ_{X⊆Θ} m(X) = 1.

The number m(X) is called the basic probability number of X. Thus, basic probability numbers are assigned to subsets of the frame of discernment, as opposed to probability theory, where probabilities are assigned to singleton sets. In evidence theory it may happen that m({x}) ≠ 0 and m(X) = 0, where x ∈ X. In probability theory that is impossible: if the probability p(x) ≠ 0, then P(X) ≠ 0. Two additional functions, a belief function and a plausibility function, are commonly used in evidence theory. Both functions are defined from the basic probability assignment. A function Bel: 2^Θ → [0, 1] is called a belief function over Θ if and only if

Bel(A) = Σ_{B⊆A} m(B).

A function Pl: 2^Θ → [0, 1] is called a plausibility function over Θ if and only if Pl(A) = 1 − Bel(Θ − A). A subset X of the frame of discernment is called a focal element if and only if m(X) > 0.
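Belief and plausibility follow directly from the basic probability assignment. A minimal sketch, using for concreteness the m values that the text later derives from Table X (3/12, 1/12, 5/12, 3/12):

```python
from fractions import Fraction

THETA = frozenset({"low", "medium", "high"})
# Basic probability assignment, taken from the Table X example below.
m = {
    frozenset({"low"}): Fraction(3, 12),
    frozenset({"medium"}): Fraction(1, 12),
    frozenset({"low", "high"}): Fraction(5, 12),
    THETA: Fraction(3, 12),
}
assert sum(m.values()) == 1          # condition 2 of the definition

def bel(a):
    """Bel(A): sum of m(B) over all focal elements B contained in A."""
    return sum((v for b, v in m.items() if b <= a), Fraction(0))

def pl(a):
    """Pl(A) = 1 - Bel(complement of A)."""
    return 1 - bel(THETA - a)

print(bel(frozenset({"low", "high"})))   # 3/12 + 5/12 = 2/3
print(pl(frozenset({"medium"})))         # 1 - Bel({low, high}) = 1/3
```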
In evidence theory, two independent pieces of evidence may be combined using Dempster's rule of combination, the so-called orthogonal sum. Say that the two pieces of evidence are described by their basic probability assignments m and n. The focal elements of m are A1, A2, ..., Ak, and the focal elements of n are B1, B2, ..., Bl. Then, for a focal element X = Ai ∩ Bj, where i ∈ {1, 2, ..., k} and j ∈ {1, 2, ..., l},

m ⊕ n(X) = [ Σ_{i,j: Ai∩Bj=X} m(Ai)·n(Bj) ] / [ Σ_{i,j: Ai∩Bj≠∅} m(Ai)·n(Bj) ]
         = [ Σ_{i,j: Ai∩Bj=X} m(Ai)·n(Bj) ] / [ 1 − Σ_{i,j: Ai∩Bj=∅} m(Ai)·n(Bj) ],
where m ⊕ n denotes the orthogonal sum of m and n. The main difference between evidence theory and rough set theory is philosophical: rough set theory presents an objective approach to the real world, while evidence theory uses a subjective approach. Also, in rough set theory the basic tools are sets (lower and upper approximations of a concept, computed directly from input data), while evidence theory relies on functions, e.g., a basic probability assignment, given by experts. However, there exists a simple correspondence between rough set theory and evidence theory (Skowron and Grzymala-Busse, 1994). Say that a real-world problem, categorizing motorcycles m1, m2, ..., m12, is presented in Table X. The frame of discernment Θ is the set of all values of the decision d = Quality. For the rest of this section we will assume that we are interested only in the entire set A of attributes and not in any subset P of the set A. Let [x] denote the block of the partition U/IND(A) containing x, where x ∈ U. In the example from Table X, such blocks are {m1, m8, m10}, {m2, m6, m11}, {m3, m12}, {m4, m7},

TABLE X
DECISION TABLE

        Attributes                  Decision
        Attribute-1  Attribute-2    Quality
m1      medium       high           low
m2      low          medium         low
m3      medium       medium         low
m4      low          low            low
m5      low          high           low
m6      low          medium         low
m7      low          low            low
m8      medium       high           medium
m9      medium       low            medium
m10     medium       high           high
m11     low          medium         high
m12     medium       medium         high
{m5}, {m9}. We may define two functions, f: U → 2^Θ and g: U → 2^U, as follows:

f(x) = {ρ(y, d) | y ∈ [x]},
g(x) = {y ∈ U | f(x) = f(y)}.

In our example,

f(m1) = {low, medium, high} = f(m8) = f(m10),
f(m2) = {low, high} = f(m3) = f(m6) = f(m11) = f(m12),
f(m4) = {low} = f(m5) = f(m7),
f(m9) = {medium},

and

g(m1) = {m1, m8, m10} = g(m8) = g(m10),
g(m2) = {m2, m3, m6, m11, m12} = g(m3) = g(m6) = g(m11) = g(m12),
g(m4) = {m4, m5, m7} = g(m5) = g(m7),
g(m9) = {m9}.
Any decision table determines a basic probability assignment on Θ with {f(x) | x ∈ U} as focal elements. In the example from Table X, the focal elements are {low}, {medium}, {low, high}, and {low, medium, high}. We may define a function m: 2^Θ → [0, 1], associated with the given decision table, as follows. First, for any θ ∈ 2^Θ, we test whether θ is equal to f(x) for some x ∈ U. If so,

m(θ) = |g(x)| / |U|.

If not, m(θ) = 0. It is not difficult to recognize that the above function m is a basic probability assignment. In our example,

m({low}) = 3/12, m({medium}) = 1/12, m({low, high}) = 5/12, m({low, medium, high}) = 3/12.
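The table-to-assignment correspondence can be sketched directly. In the snippet below the attribute values are those of Table X; the column names are immaterial, since only the partition they induce is used.

```python
from fractions import Fraction

rows = {  # case: ((attribute values), Quality), from Table X
    "m1": (("medium", "high"), "low"),   "m2": (("low", "medium"), "low"),
    "m3": (("medium", "medium"), "low"), "m4": (("low", "low"), "low"),
    "m5": (("low", "high"), "low"),      "m6": (("low", "medium"), "low"),
    "m7": (("low", "low"), "low"),       "m8": (("medium", "high"), "medium"),
    "m9": (("medium", "low"), "medium"), "m10": (("medium", "high"), "high"),
    "m11": (("low", "medium"), "high"),  "m12": (("medium", "medium"), "high"),
}

# [x]: block of U/IND(A) containing x.
blocks = {}
for case, (attrs, _) in rows.items():
    blocks.setdefault(attrs, set()).add(case)

# f(x): set of decision values occurring in [x].
f = {case: frozenset(rows[y][1] for y in blocks[attrs])
     for case, (attrs, _) in rows.items()}

# m(theta) = |{x : f(x) = theta}| / |U|, i.e. |g(x)| / |U|.
m = {}
for theta in set(f.values()):
    m[theta] = Fraction(sum(1 for x in f if f[x] == theta), len(rows))

print(m[frozenset({"low", "high"})])            # 5/12
print(m[frozenset({"low", "medium", "high"})])  # 1/4
```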
Thus we have established a relation between rough set theory and evidence theory: for every decision table we may create a basic probability assignment. There exists a converse relation, i.e., we may create a decision table for a basic probability assignment with rational numbers as values. We will show how such a decision table is created in two parts. In the first part of the construction, let us create a decision table for a given set of focal elements. Say that our frame of discernment is the set {low, medium, high} and that the focal elements are {low}, {medium, high}, and {low, medium, high}. We will create a decision table such that the set {f(x) | x ∈ U} will be identical with the set {{low}, {medium, high}, {low, medium, high}}. We will use a generic attribute, denoted a, with positive integers as generic values. The first step of the construction of the corresponding decision table is presented in Table XI. All values of all focal elements are listed in the column denoted by the decision d. Also, the examples are numbered 1, 2, ..., 6. The second step of the construction is presented in Table XII, where the values of the attribute a are inserted. The values are consecutive positive integers, constant for each focal element. The obtained decision table satisfies the requirements, as we may show by the following computations. The blocks of the partition U/IND({a}) are {1}, {2, 3}, and {4, 5, 6}. Function f is given by f(1) = {low}, f(2) = f(3) = {medium, high}, f(4) = f(5) = f(6) = {low, medium, high}. Thus the focal elements are {low}, {medium, high}, and {low, medium, high}.

TABLE XI
DECISION TABLE - FIRST STEP OF THE CONSTRUCTION

        Attribute a    Decision d
1                      low
2                      medium
3                      high
4                      low
5                      medium
6                      high
TABLE XII
DECISION TABLE - SECOND STEP OF THE CONSTRUCTION

        Attribute a    Decision d
1       1              low
2       2              medium
3       2              high
4       3              low
5       3              medium
6       3              high
Function g is as follows:

g(1) = {1},
g(2) = g(3) = {2, 3},
g(4) = g(5) = g(6) = {4, 5, 6}.

Moreover, the basic probability assignment m is defined by

m({low}) = 1/6, m({medium, high}) = 1/3, m({low, medium, high}) = 1/2.
Now, in the second part of the construction, let us assume that we want to create a decision table with the same focal elements but with another basic probability assignment. This may be accomplished by adding duplicates to the decision table constructed in the first part. In this particular example, let us assume that we want to change the current basic probability assignment into a new one, denoted n, such that n({low}) = 0.2, n({medium, high}) = 0.4, and n({low, medium, high}) = 0.4. The ith row of Table XII should be duplicated xi times, where i = 1, 2, ..., 6. It is not difficult to recognize that

n({low}) = x1 / (x1 + x2 + ··· + x6).

Hence we should solve the following system of equations:

x1 = 0.2 (x1 + x2 + ··· + x6),
x2 + x3 = 0.4 (x1 + x2 + ··· + x6),
x4 + x5 + x6 = 0.4 (x1 + x2 + ··· + x6).

The simplest solution of the above system of equations is x1 = x2 = x3 = 3 and x4 = x5 = x6 = 2. The decision table obtained as a result of this construction is presented in Table XIII.

TABLE XIII
DECISION TABLE

        Attribute a    Decision d
1       1              low
2       1              low
3       1              low
4       2              medium
5       2              medium
6       2              medium
7       2              high
8       2              high
9       2              high
10      3              low
11      3              low
12      3              medium
13      3              medium
14      3              high
15      3              high
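The duplication construction can be sketched as follows. The total row count N = 15 is the text's choice; any N for which every n(θ)·N divides evenly among the rows of the corresponding class would do.

```python
from fractions import Fraction

# Rows of Table XII: (attribute a, decision d).
table_xii = [(1, "low"), (2, "medium"), (2, "high"),
             (3, "low"), (3, "medium"), (3, "high")]
# Target assignment n, keyed by the attribute value a whose
# indiscernibility class carries the corresponding focal element.
n = {1: Fraction(1, 5), 2: Fraction(2, 5), 3: Fraction(2, 5)}

N = 15  # total number of rows after duplication (the text's choice)
dup_rows = []
for a, d in table_xii:
    rows_in_class = sum(1 for b, _ in table_xii if b == a)
    x_i = int(n[a] * N / rows_in_class)   # duplication count for this row
    dup_rows.extend([(a, d)] * x_i)

counts = [sum(1 for a, _ in dup_rows if a == v) for v in (1, 2, 3)]
print(len(dup_rows), counts)   # 15 [3, 6, 6]
```

With x1 = x2 = x3 = 3 and x4 = x5 = x6 = 2 recovered automatically, the resulting rows reproduce Table XIII.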
B. Dempster Rule of Combination

We may combine two decision tables into a single decision table in the same way as two basic probability assignments are combined into a single one using the Dempster rule of combination. Moreover, if the two basic probability assignments computed from the two tables are m and n, respectively, then the basic probability assignment for their combination is m ⊕ n. The construction is straightforward and will be explained by analysis of the following example. Say that two decision tables are given, Tables XIV and XV. First we need to compute the values of the function f for both tables. Then we construct new tables with decision values replaced by values of the function f. The corresponding tables are presented as Tables XVI and XVII. The
TABLE XIV
DECISION TABLE

        Attributes          Decision
        Weight   Power      Quality
1       medium   high       low
2       low      medium     low
8       medium   high       medium
10      medium   high       high
11      low      medium     high

TABLE XV
DECISION TABLE

        Attributes          Decision
        Size     Color      Quality
13      big      red        medium
14      big      blue       medium
15      medium   red        high
16      big      red        high

TABLE XVI

        Attributes          f(x)
        Weight   Power
1       medium   high       {low, medium, high}
2       low      medium     {low, high}
8       medium   high       {low, medium, high}
10      medium   high       {low, medium, high}
11      low      medium     {low, high}

TABLE XVII

        Attributes          f(x)
        Size     Color
13      big      red        {medium, high}
14      big      blue       {medium}
15      medium   red        {high}
16      big      red        {medium, high}
TABLE XVIII
COMBINATION OF TABLES XVI AND XVII

        Attributes                              f(x) ∩ f(y)
        Weight   Power    Size     Color
        medium   high     big      red          {medium, high}
        medium   high     big      blue         {medium}
        medium   high     medium   red          {high}
        medium   high     big      red          {medium, high}
        low      medium   big      red          {high}
        low      medium   medium   red          {high}
        low      medium   big      red          {high}
        medium   high     big      red          {medium, high}
        medium   high     big      blue         {medium}
        medium   high     medium   red          {high}
        medium   high     big      red          {medium, high}
        medium   high     big      red          {medium, high}
        medium   high     big      blue         {medium}
        medium   high     medium   red          {high}
        medium   high     big      red          {medium, high}
        low      medium   big      red          {high}
        low      medium   medium   red          {high}
        low      medium   big      red          {high}
combination of Tables XVI and XVII is presented as Table XVIII. Every row of Table XVIII is computed partly from Table XVI and partly from Table XVII. For every row x of Table XVI and for every row y of Table XVII such that the intersection of f(x) and f(y) is a nonempty set, we create a new row of Table XVIII; such a row will be denoted by (x, y). The attributes of Table XVIII are all attributes of the first table followed by all attributes of the second table. Attribute values for row (x, y) are copied from row x of the first table and row y of the second table. Table XVIII is not quite a decision table: there is no column designated as a decision. Instead of a decision there is a column denoted f(x) ∩ f(y). However, we may transform this table into a decision table by replacing the column name "f(x) ∩ f(y)" by the name of a new decision and the values of f(x) ∩ f(y) by single decision values, evenly split between rows. The resulting decision table corresponding to Table XVIII is presented as Table XIX. Note that the basic probability assignment m for Table XVI may be computed as follows:
179
ROUGH SETS
Attributes
1
14 16 17
I
Weight
Power
Size
medium medium medium medium low low low medium medium medium medium medium medium medium medium low low low
high high high high medium medium medium high high high high high high high high medium medium medium
big big medium big big medium big big big medium big big big medium big big medium big
Color red blue red red red red red red blue red red red blue red red red red red
Quality
I
medium medium high medium high high high medium medium high high high medium high high high high high
g(1) = d 8 ) = g(10) = { 4 8 , lo}, 9(2) = 9(11) = (2, 1I},
The focal elements are {low, high} and {low, medium, high}, and
m({low,high}) = 3, m( { low, medium, high}) = 3. On the other hand, the basic probability assignment n for Table XVII is
f(13) = f(16) = {medium, high}, f(14)
=
{medium},
fU5) = {high}, g(13) = g(16) = (13, 16},
d14) = { 14}, g(15) = {15},
the focal elements are {medium}, {high},and {medium, high}, and
180
JERZY W. GRZYMALA-BUSSE
n( {medium}) = $, n( {high}) = $, n( {medium, high}) =
4.
Focal elements o the orthogonal sum m 0 n are {mediurr ,, {high}, and {medium, high}. The orthogonal sum m 0 n is described as follows: m 8 n({high})
m((low, high}).n((high})i m((low, high}).n({medium,high)) + m((low, medium, h$h)).n((high) 1 - m({/ow,high}).n({medium}) - 0.4.0.25 + 0.4.0.5 + 0.6.0.25 - 0.45 - 0.5, 1 - 0.4.0.25 0.9 m Q n({medium})
0.6.0.25 0.15 - m({low,medium, high}).n({medium})= _ _=-= 1 - m({low, high)).n((medium))
0.9
0.167,
0.9
m @ n({medium,high})
0.6.0.5 - m({low, medium, high}),n({medium,high}) 1 - m({/ow,high}).n({medium})
0.9
- -0.3 _ - 0.333. 0.9
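The computation above is an instance of Dempster's rule of combination. A minimal Python sketch (not part of the original text), with the masses of m and n taken directly from the numbers above:

```python
# Dempster's rule of combination (the orthogonal sum used above).
# A basic probability assignment maps focal elements (frozensets) to masses.

def orthogonal_sum(m, n):
    """(m (+) n)(C) = sum of m(A)*n(B) over pairs with A & B == C,
    normalized by 1 - K, where K is the mass on empty intersections."""
    combined, conflict = {}, 0.0
    for a, ma in m.items():
        for b, nb in n.items():
            c = a & b
            if c:
                combined[c] = combined.get(c, 0.0) + ma * nb
            else:
                conflict += ma * nb
    return {c: v / (1.0 - conflict) for c, v in combined.items()}

# m (from Table XVI) and n (from Table XVII), values as in the text:
m = {frozenset({'low', 'high'}): 0.4,
     frozenset({'low', 'medium', 'high'}): 0.6}
n = {frozenset({'medium'}): 0.25,
     frozenset({'high'}): 0.25,
     frozenset({'medium', 'high'}): 0.5}

result = orthogonal_sum(m, n)
for focal in sorted(result, key=sorted):
    print(sorted(focal), round(result[focal], 3))
```

This reproduces the three masses derived above: 0.5 for {high}, 0.167 for {medium}, and 0.333 for {medium, high}.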
On the other hand, we may compute the same basic probability assignment directly from Table XIX:

f(1) = f(4) = f(8) = f(11) = f(12) = f(15) = {medium, high},
f(2) = f(9) = f(13) = {medium},
f(3) = f(5) = f(6) = f(7) = f(10) = f(14) = f(16) = f(17) = f(18) = {high},
g(1) = g(4) = g(8) = g(11) = g(12) = g(15) = {1, 4, 8, 11, 12, 15},
g(2) = g(9) = g(13) = {2, 9, 13},
g(3) = g(5) = g(6) = g(7) = g(10) = g(14) = g(16) = g(17) = g(18) = {3, 5, 6, 7, 10, 14, 16, 17, 18},
the focal elements are {medium}, {high}, and {medium, high}, and the basic probability assignment is

m({medium}) = 3/18 = 0.167,
m({high}) = 9/18 = 0.5,
m({medium, high}) = 6/18 = 0.333.
Thus, the basic probability assignment computed from Table XIX is identical to the basic probability assignment computed as the orthogonal sum of m and n.
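This identity can be checked mechanically. A minimal Python sketch (not from the original text), using only the f values quoted above for Tables XVI and XVII; the row numbers are those used in the text:

```python
# Combining the two tables as described: a combined row (x, y) is created
# for every pair of rows whose f-value sets intersect; the new decision
# set of row (x, y) is f(x) & f(y).
from collections import Counter

f_xvi = {1: {'low', 'medium', 'high'}, 2: {'low', 'high'},
         8: {'low', 'medium', 'high'}, 10: {'low', 'medium', 'high'},
         11: {'low', 'high'}}
f_xvii = {13: {'medium', 'high'}, 14: {'medium'},
          15: {'high'}, 16: {'medium', 'high'}}

combined = {(x, y): f_xvi[x] & f_xvii[y]
            for x in f_xvi for y in f_xvii
            if f_xvi[x] & f_xvii[y]}

print(len(combined))   # 18 combined rows, as in Table XIX

dist = Counter(frozenset(s) for s in combined.values())
for s in sorted(dist, key=sorted):
    print(sorted(s), dist[s], '->', round(dist[s] / len(combined), 3))
```

The relative frequencies 3/18, 9/18, and 6/18 agree with the orthogonal sum m ⊕ n computed earlier in this section.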
C. Numerical Measures of Rough Sets
A few numbers characterizing a degree of uncertainty may be associated with a rough set. Though such numbers are sometimes useful for evaluation, they have limited value, because the most important tools of rough set theory are sets: the lower and upper approximations of a given concept X, a subset of the universe U. There are two universal measures of uncertainty for rough sets, introduced in Grzymala-Busse (1987); see also Grzymala-Busse (1991). We will assume that P is a subset of the set A of all attributes. The first measure is called a quality of lower approximation of X by P, denoted γ̲(X), and equal to

γ̲(X) = |P̲X| / |U|.

In Pawlak (1984), γ̲(X) was called simply quality of approximation of X by P. The quality of lower approximation of X by P is the ratio of the number of all elements of U certainly classified by attributes from P as being in X to the number of elements of U. It is a kind of relative frequency. There exists a nice interpretation of γ̲(X) in terms of evidence theory. To introduce this interpretation, we need a definition of the frame of discernment Θ different from that in Section II.A; namely, we will consider Θ to be the set of all concepts, i.e.,
Θ = U/IND({d}),

where d is a decision. Thus focal elements are blocks of U/IND({d}), and the basic probability assignment m is defined by

m(X) = |X| / |U|,
where X is a concept, i.e., a block of U/IND({d}). With this interpretation, γ̲(X) becomes a belief function, as was first observed by Grzymala-Busse (1987). The second measure of uncertainty for rough sets, a quality of upper approximation of X by P, denoted γ̄(X), is equal to

γ̄(X) = |P̄X| / |U|.

The quality of upper approximation of X by P is the ratio of the number of all elements of U possibly classified by attributes from P as being in X to the number of elements of U. Like the quality of lower approximation, the quality of upper approximation is also a kind of relative frequency. It is a plausibility function of evidence theory if we accept the above interpretation of the basic probability assignment (Grzymala-Busse, 1987).
Sometimes yet another measure of uncertainty of rough sets is used, called an accuracy of approximation of X by P, and defined as

|P̲X| / |P̄X|.

The above measure has secondary value because it may be easily computed from γ̲(X) and γ̄(X). Yet another measure of uncertainty for rough sets, called the rough measure, is explained in Section III.B.1.
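All three measures follow directly from the indiscernibility blocks. A Python sketch (not part of the original text; the helper names are mine), with the cases of Table IV re-entered from the chapter:

```python
# Lower and upper approximations of a concept X by a set P of attributes,
# quality of lower approximation |lower(P,X)|/|U|, quality of upper
# approximation |upper(P,X)|/|U|, and accuracy |lower|/|upper|.
# Cases re-entered from the chapter's Table IV (Capacity, Vibration).
from collections import defaultdict

data = {
    'c1': ({'Capacity': 5, 'Vibration': 'medium'}, 'low'),
    'c2': ({'Capacity': 4, 'Vibration': 'medium'}, 'low'),
    'c3': ({'Capacity': 5, 'Vibration': 'medium'}, 'low'),
    'c4': ({'Capacity': 5, 'Vibration': 'low'}, 'medium'),
    'c5': ({'Capacity': 2, 'Vibration': 'low'}, 'medium'),
    'c6': ({'Capacity': 4, 'Vibration': 'medium'}, 'high'),
    'c7': ({'Capacity': 4, 'Vibration': 'low'}, 'high'),
    'c8': ({'Capacity': 2, 'Vibration': 'low'}, 'high'),
}

def blocks(P):
    """Blocks of the indiscernibility relation IND(P)."""
    out = defaultdict(set)
    for case, (attrs, _) in data.items():
        out[tuple(attrs[a] for a in sorted(P))].add(case)
    return list(out.values())

def lower(P, X):
    return {c for b in blocks(P) if b <= X for c in b}

def upper(P, X):
    return {c for b in blocks(P) if b & X for c in b}

P = {'Capacity', 'Vibration'}
X = {c for c, (_, d) in data.items() if d == 'low'}   # concept Quality = low

lo, up = lower(P, X), upper(P, X)
print('quality of lower approximation:', len(lo) / len(data))   # 0.25
print('quality of upper approximation:', len(up) / len(data))   # 0.5
print('accuracy:', len(lo) / len(up))                           # 0.5
```

For the concept Quality = low, the lower approximation is {c1, c3} and the upper approximation is {c1, c2, c3, c6}, consistent with the approximations used later in the chapter.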
III. SOME APPLICATIONS OF ROUGH SETS
Rough set theory has been broadly applied in many real-world domains. A detailed record of applications can be found in many reports published in journals and conference proceedings, especially in Slowinski (1992), Ziarko (1994), and Lin (1995). Roughly speaking, the most useful areas of application are data analysis (or attribute analysis, e.g., seeking the most essential attributes, elimination of redundant or weak attributes, finding attribute dependencies, estimation of attribute significance) and induction of rules (regularities) from raw data (given as decision tables). Most methods of attribute analysis using rough set theory are quite straightforward, with the exception of estimating attribute significance. This topic is explained below. Rule induction is covered in the following section.

A. Attribute Significance
Rough set theory may be used for analysis of the significance of attributes. In this approach, an extension of the notion of the quality of lower approximation of a set X, associated with a subset P of the set A of all attributes, is used. The set X, usually a concept, is replaced by the set of all concepts U/IND({d}), where d is a decision. A quality of lower approximation of U/IND({d}), associated with P, is defined as follows:

( |P̲X₁| + |P̲X₂| + ... + |P̲Xₙ| ) / |U|,

where U/IND({d}) = {X₁, X₂, ..., Xₙ}. A significance of the attribute a is computed as the difference between the quality of lower approximation of U/IND({d}) associated with the set A of
all attributes and the quality of lower approximation of U/IND({d}) associated with the set A - {a}. The more important the attribute a is, the more the quality of U/IND({d}) is reduced by deleting a, and thus the bigger the difference. Let us illustrate the notion of significance of attributes by the following analysis of Table IV. In Table IV, A = {Capacity, Vibration}, d = Quality, and U/IND({d}) = {{c1, c2, c3}, {c4, c5}, {c6, c7, c8}}. The quality of U/IND({d}) associated with A is

( |A̲({c1, c2, c3})| + |A̲({c4, c5})| + |A̲({c6, c7, c8})| ) / |U|
  = ( |{c1, c3}| + |{c4}| + |{c7}| ) / |U| = (2 + 1 + 1) / 8 = 0.5.

The quality of U/IND({d}) associated with A - {a}, where a = Capacity, is

( |B̲({c1, c2, c3})| + |B̲({c4, c5})| + |B̲({c6, c7, c8})| ) / |U| = (0 + 0 + 0) / 8 = 0,

where B = A - {a} = {Vibration}.
Thus the significance of attribute Capacity is 0.5 - 0 = 0.5.

Gunn and Grzymala-Busse (1994) reported results of an analysis of the attributes solar energy, transmission, El Niño, CO₂ trend, and CO₂ residual, influencing the earth's global temperature. For data covering the years 1958 to 1990, the ranking of the attributes was computed first using statistical methods (ANOVA) and then using rough set theory. The results, cited in Table XX, show surprising similarity in the final ranking of the attributes. Similar analyses of attribute significance were done by Pawlak et al. (1986) and by Slowinski and Slowinski (1990).

TABLE XX
COMPARISON OF ATTRIBUTE RANKING
(from the most important attribute to the least important)

Statistics (ANOVA):  CO₂ residual (0.001), El Niño (0.002), Solar energy (0.119), Transmission (0.119), CO₂ trend (0.943)
Rough set theory:    CO₂ residual (0.545), Solar energy (0.424), El Niño (0.364), CO₂ trend (0.212), Transmission (0.152)
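The significance computation above can be sketched as follows (a Python illustration, not part of the original text; function names are mine, and the cases of Table IV are re-entered from the chapter):

```python
# Significance of an attribute a: the drop in the quality of lower
# approximation of U/IND({d}) when a is deleted from the attribute set A.
# Reproduces the value 0.5 computed for Capacity above.
from collections import defaultdict

data = {
    'c1': ({'Capacity': 5, 'Vibration': 'medium'}, 'low'),
    'c2': ({'Capacity': 4, 'Vibration': 'medium'}, 'low'),
    'c3': ({'Capacity': 5, 'Vibration': 'medium'}, 'low'),
    'c4': ({'Capacity': 5, 'Vibration': 'low'}, 'medium'),
    'c5': ({'Capacity': 2, 'Vibration': 'low'}, 'medium'),
    'c6': ({'Capacity': 4, 'Vibration': 'medium'}, 'high'),
    'c7': ({'Capacity': 4, 'Vibration': 'low'}, 'high'),
    'c8': ({'Capacity': 2, 'Vibration': 'low'}, 'high'),
}

def blocks(P):
    out = defaultdict(set)
    for case, (attrs, _) in data.items():
        out[tuple(attrs[a] for a in sorted(P))].add(case)
    return list(out.values())

def quality(P):
    """Fraction of cases certainly classified by P into their concepts."""
    concepts = defaultdict(set)
    for case, (_, d) in data.items():
        concepts[d].add(case)
    certain = {c for X in concepts.values()
               for b in blocks(P) if b <= X for c in b}
    return len(certain) / len(data)

A = {'Capacity', 'Vibration'}
for a in sorted(A):
    print('significance of', a, '=', quality(A) - quality(A - {a}))
```

Here each of the two attributes is indispensable: deleting either one drops the quality from 0.5 to 0, so both have significance 0.5.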
B. LERS: A Rule Induction System

Rough set theory is used in a few systems of knowledge acquisition and machine learning from examples. The main purpose of all of these systems is rule induction, i.e., searching for regularities, in the form of rules, in the original raw data. Induced rules represent knowledge: information on a higher level than the original raw data. Rules are more general, since more new, unseen cases (examples) can be classified by rules. Also, rules may be inserted into the knowledge base of an expert system.

In this section we will study one of the rule induction systems, called LERS (Learning from Examples based on Rough Sets); see Grzymala-Busse (1992). A prototype of LERS was first implemented at the University of Kansas in 1988. The prototype was coded in Franz Lisp under UNIX. The current version of LERS is coded in C and may run under UNIX or VMS. Other systems of rule induction using rough set theory include RoughDAS and Roughclass (Slowinski, 1992; Slowinski and Stefanowski, 1992), created at the Poznan University of Technology in Poland, and Datalogic (Szladow and Ziarko, 1993; Ziarko, 1992), developed at the University of Regina in Canada. Some applications of these systems are listed by Pawlak et al. (1995).

The user of the system LERS may use four main methods of rule induction: two methods of machine learning from examples (called LEM1 and LEM2, where LEM stands for Learning from Examples Module), and two methods of knowledge acquisition (called All Global Coverings and All Rules). Two of these methods are global (LEM1 and All Global Coverings); the remaining two are local (LEM2 and All Rules). In general, the input data are preprocessed before rule induction. For example, numerical values are replaced by intervals, in a process called discretization. The most important reason for discretization is that rules induced directly from numerical data are too detailed.
New cases cannot be classified using rules with such detailed conditions. Also, during preprocessing, one of the methods of dealing with missing attribute values should be used when necessary. The input data file is presented to LERS in the form illustrated in Table XXI. In this table, the first line (in the sequel, a line need not be one physical line; it may span a few physical lines) contains the declarations of variables: a stands for an attribute, d for a decision, and x means "ignore this variable." The list of these symbols starts with "<" and ends with ">". The second line contains the declarations of the names of all variables, attributes and decisions, surrounded by brackets. The following lines contain the values of all variables.
TABLE XXI
LERS INPUT DATA FILE

< a x x a d >
[ Capacity Interior Noise Vibration Quality ]
5 fair      medium medium low
4 fair      medium medium low
5 good      medium medium low
5 fair      low    low    medium
2 fair      medium low    medium
4 excellent low    medium high
4 good      low    low    high
2 good      low    low    high
TABLE XXII
DECISION TABLE

      Attributes            Decision
      Capacity  Vibration   Quality
c1    5         medium      low
c2    4         medium      *
c3    5         medium      low
c4    5         low         *
c5    2         low         *
c6    4         medium      *
c7    4         low         *
c8    2         low         *
Before running any of the four methods of rule induction, LERS checks input data for consistency. If the input data are inconsistent, lower and upper approximations for each concept are computed. In local methods, rules are induced directly from lower and upper approximations of the concept. In global methods, the original decision table is substituted by 2k new decision tables, where k is the number of concepts (i.e., the number of different decision values). In each new decision table, the original decision is replaced by a new variable, called either lower substitutional decision or upper substitutional decision, and with values equal to the lower or upper approximation of a concept, respectively. Rules are induced only for the lower or upper approximations of the concept; all remaining examples in a table are considered to be negative and-as such-may be all marked by a special symbol, say "*." The three tables with lower substitutional decisions are presented as Tables XXII-XXIV, and the three tables with upper substitu-
TABLE XXIII
DECISION TABLE

      Attributes            Decision
      Capacity  Vibration   Quality
c1    5         medium      *
c2    4         medium      *
c3    5         medium      *
c4    5         low         medium
c5    2         low         *
c6    4         medium      *
c7    4         low         *
c8    2         low         *
TABLE XXIV
DECISION TABLE

      Attributes            Decision
      Capacity  Vibration   Quality
c1    5         medium      *
c2    4         medium      *
c3    5         medium      *
c4    5         low         *
c5    2         low         *
c6    4         medium      *
c7    4         low         high
c8    2         low         *
TABLE XXV
DECISION TABLE

      Attributes            Decision
      Capacity  Vibration   Quality
c1    5         medium      low
c2    4         medium      low
c3    5         medium      low
c4    5         low         *
c5    2         low         *
c6    4         medium      low
c7    4         low         *
c8    2         low         *
TABLE XXVI
DECISION TABLE

      Attributes            Decision
      Capacity  Vibration   Quality
c1    5         medium      *
c2    4         medium      *
c3    5         medium      *
c4    5         low         medium
c5    2         low         medium
c6    4         medium      *
c7    4         low         *
c8    2         low         medium

TABLE XXVII
DECISION TABLE

      Attributes            Decision
      Capacity  Vibration   Quality
c1    5         medium      *
c2    4         medium      high
c3    5         medium      *
c4    5         low         *
c5    2         low         high
c6    4         medium      high
c7    4         low         high
c8    2         low         high
tional decisions are presented as Tables XXV-XXVII; all of these tables are computed from Table IV. Note that all six tables (Tables XXII-XXVII) are consistent. In general, rules induced from lower approximation of the concept, for reasons explained in Section I.E, are called certain. Rules induced from the upper approximations of the concept are called possible. Rules induced from training data may be used for classification of new, unseen cases. Usually certain and possible rules are used independently, together with additional information such as the number of training cases that a rule describes, the number of training cases for which the rule was successful (i.e., the rule correctly classified these cases), etc. (Grzymala-Busse and Woolery, 1994).
1. LEM1 - Single Global Covering
In the LEM1 option, LERS may or may not take into account priorities, associated with attributes and provided by the user. For example, in the medical domain, some tests (attributes) may be dangerous for a patient. Also, some tests may be very expensive, while at the same time the same decision may be made using less expensive tests. LERS computes a single global covering using the following algorithm:

Algorithm SINGLE GLOBAL COVERING
Input: a decision table with set A of all attributes and decision d;
Output: a single global covering R;
begin
  compute partition U/IND(A);
  P := A;
  R := ∅;
  if U/IND(A) ≤ U/IND({d})
  then begin
    for each attribute a in A do
      begin
        Q := P - {a};
        compute partition U/IND(Q);
        if U/IND(Q) ≤ U/IND({d}) then P := Q
      end {for}
    R := P
  end {then}
end {algorithm}.
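A runnable Python sketch of this algorithm (not part of the original text), under the assumption that attributes are scanned in the order given; a different scan order may return a different, equally valid covering. The eight cases are re-entered from Table XXI, which carries the data of Table III:

```python
# SINGLE GLOBAL COVERING sketch: starting from all attributes, try to drop
# each one in turn, keeping the drop whenever the remaining attributes
# still determine the decision (U/IND(P) refines U/IND({d})).

rows = [
    # Capacity, Interior,    Noise,    Vibration, Quality
    (5, 'fair',      'medium', 'medium', 'low'),
    (4, 'fair',      'medium', 'medium', 'low'),
    (5, 'good',      'medium', 'medium', 'low'),
    (5, 'fair',      'low',    'low',    'medium'),
    (2, 'fair',      'medium', 'low',    'medium'),
    (4, 'excellent', 'low',    'medium', 'high'),
    (4, 'good',      'low',    'low',    'high'),
    (2, 'good',      'low',    'low',    'high'),
]
ATTRS = ('Capacity', 'Interior', 'Noise', 'Vibration')

def refines(P):
    """True iff U/IND(P) <= U/IND({d}): every P-block has one decision."""
    seen = {}
    for r in rows:
        key = tuple(r[ATTRS.index(a)] for a in P)
        if seen.setdefault(key, r[-1]) != r[-1]:
            return False
    return True

def single_global_covering():
    P = list(ATTRS)
    if not refines(P):
        return []            # inconsistent table: no covering exists
    for a in ATTRS:
        Q = [x for x in P if x != a]
        if refines(Q):
            P = Q
    return P

print(single_global_covering())   # ['Interior', 'Vibration']
```

For this scan order, Capacity and Noise can be dropped, and the covering {Interior, Vibration} results, the same covering the text attributes to the priority-driven run below.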
The above algorithm is heuristic, and there is no guarantee that the computed global covering is the simplest. However, the problem of finding all global coverings in order to select the best one is of exponential complexity in the worst case. LERS computes all global coverings in another option, described below. The option of computing all global coverings is feasible for small input data files and when it is required to induce as many rules as possible. After computing a single global covering, LEM1 computes rules directly from the covering using a technique called dropping conditions (i.e., simplifying rules). In the case of Table III (this table is consistent), the following rules are induced by LEM1:
(Noise, medium) & (Vibration, medium) → (Quality, low),
(Interior, fair) & (Vibration, low) → (Quality, medium),
(Interior, excellent) → (Quality, high),
(Interior, good) & (Vibration, low) → (Quality, high).
On the other hand, with the following attribute priorities: Capacity: 0, Interior: 1, Noise: 0, Vibration: 1 (the larger the number, the higher the priority), the following rules are induced by LEM1 from the same Table III:

(Interior, fair) & (Vibration, medium) → (Quality, low),
(Interior, good) & (Vibration, medium) → (Quality, low),
(Interior, fair) & (Vibration, low) → (Quality, medium),
(Interior, excellent) → (Quality, high),
(Interior, good) & (Vibration, low) → (Quality, high).
It is obvious that these rules were induced from the global covering {Interior, Vibration}. LEM1 applied to the data from Table IV (this table is inconsistent) will induce the following rules (all priority numbers are set to zero). Certain rules:

(Capacity, 5) & (Vibration, medium) → (Quality, low),
(Capacity, 5) & (Vibration, low) → (Quality, medium),
(Capacity, 4) & (Vibration, low) → (Quality, high).
Possible rules:

(Vibration, medium) → (Quality, low) with rough measure 0.75,
(Capacity, 5) & (Vibration, low) → (Quality, medium) with rough measure 1,
(Capacity, 2) → (Quality, medium) with rough measure 0.5,
(Capacity, 4) → (Quality, high) with rough measure 0.667,
(Capacity, 2) → (Quality, high) with rough measure 0.5.
Note that possible rules induced by LERS are additionally equipped with numbers, called rough measures. In general, say that X denotes the original concept, described by a rule, and that the same rule actually describes examples from a set Y, a subset of the universe U. If the rule is not certain, set Y does not need to be a subset of X. A rough measure, associated with the rule, is defined as follows:

|X ∩ Y| / |Y|.
The interpretation of the rough measure is that it is a kind of relative frequency, which may be interpreted as a conditional probability P(X | Y). For example, for the possible rule

(Vibration, medium) → (Quality, low)

induced by the LEM1 algorithm from Table IV, the concept X is {c1, c2, c3} and the set of all examples described by the rule is {c1, c2, c3, c6}, so the rough measure is

|{c1, c2, c3} ∩ {c1, c2, c3, c6}| / |{c1, c2, c3, c6}| = |{c1, c2, c3}| / |{c1, c2, c3, c6}| = 3/4 = 0.75.
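The rough measure computation is a one-liner; a small sketch (not part of the original text; case labels are those of Table IV):

```python
# Rough measure of a rule: |X & Y| / |Y|, where X is the concept the rule
# points to and Y is the set of cases matching the rule's conditions.

def rough_measure(concept, described):
    return len(concept & described) / len(described)

X = {'c1', 'c2', 'c3'}            # concept: (Quality, low)
Y = {'c1', 'c2', 'c3', 'c6'}      # cases with (Vibration, medium)
print(rough_measure(X, Y))        # 0.75
```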
Obviously, all rough measures associated with certain rules are equal to 1. The rule's rough measure is helpful in classification of new cases, because it contains information about the quality of the rule. The larger the rough measure, the better the rule.

2. LEM2 - Single Local Covering
As in LEM1 (the single global covering option of LERS), the LEM2 option (single local covering) may take into account attribute priorities assigned by the user. In the LEM2 option, LERS uses the following algorithm for computing a single local covering:

Algorithm SINGLE LOCAL COVERING
Input: a decision table with a set A of all attributes and a concept B;
Output: a single local covering 𝒯 of the concept B;
begin
  G := B;
  𝒯 := ∅;
  while G ≠ ∅
    begin
      T := ∅;
      T(G) := {(a, v) | a ∈ A, v is a value of a, [(a, v)] ∩ G ≠ ∅};
      while T = ∅ or [T] ⊄ B
        begin
          select a pair (a, v) ∈ T(G) with the highest attribute priority;
          if a tie occurs, select a pair (a, v) ∈ T(G) such that |[(a, v)] ∩ G| is maximum;
          if another tie occurs, select a pair (a, v) ∈ T(G) with the smallest cardinality of [(a, v)];
          if a further tie occurs, select the first pair;
          T := T ∪ {(a, v)};
          G := [(a, v)] ∩ G;
          T(G) := {(a, v) | [(a, v)] ∩ G ≠ ∅};
          T(G) := T(G) - T;
        end {while}
      for each (a, v) in T do
        if [T - {(a, v)}] ⊆ B then T := T - {(a, v)};
      𝒯 := 𝒯 ∪ {T};
      G := B - ∪{[T'] : T' ∈ 𝒯};
    end {while};
  for each T in 𝒯 do
    if ∪{[S] : S ∈ 𝒯 - {T}} = B then 𝒯 := 𝒯 - {T};
end {algorithm}.
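A runnable Python sketch of this algorithm (not part of the original text), without attribute priorities and with an assumed deterministic final tie-break (lexicographic on the pair) where the text says "select the first pair"; the cases are re-entered from Table III:

```python
# SINGLE LOCAL COVERING (LEM2) sketch. [(a, v)] is the set of cases where
# attribute a has value v; covered(T) is the intersection over pairs in T.

cases = {
    'c1': {'Capacity': 5, 'Interior': 'fair', 'Noise': 'medium',
           'Vibration': 'medium', 'Quality': 'low'},
    'c2': {'Capacity': 4, 'Interior': 'fair', 'Noise': 'medium',
           'Vibration': 'medium', 'Quality': 'low'},
    'c3': {'Capacity': 5, 'Interior': 'good', 'Noise': 'medium',
           'Vibration': 'medium', 'Quality': 'low'},
    'c4': {'Capacity': 5, 'Interior': 'fair', 'Noise': 'low',
           'Vibration': 'low', 'Quality': 'medium'},
    'c5': {'Capacity': 2, 'Interior': 'fair', 'Noise': 'medium',
           'Vibration': 'low', 'Quality': 'medium'},
    'c6': {'Capacity': 4, 'Interior': 'excellent', 'Noise': 'low',
           'Vibration': 'medium', 'Quality': 'high'},
    'c7': {'Capacity': 4, 'Interior': 'good', 'Noise': 'low',
           'Vibration': 'low', 'Quality': 'high'},
    'c8': {'Capacity': 2, 'Interior': 'good', 'Noise': 'low',
           'Vibration': 'low', 'Quality': 'high'},
}
ATTRS = ('Capacity', 'Interior', 'Noise', 'Vibration')

def block(pair):
    a, v = pair
    return {c for c, row in cases.items() if row[a] == v}

def covered(T):
    out = set(cases)
    for pair in T:
        out &= block(pair)
    return out

def lem2(B):
    G, cover = set(B), []
    while G:
        T = []
        while not T or not covered(T) <= B:
            cands = {(a, cases[c][a]) for c in G for a in ATTRS} - set(T)
            # max |[(a,v)] & G|, tie: min |[(a,v)]|, tie: lexicographic
            best = min(cands, key=lambda p: (-len(block(p) & G),
                                             len(block(p)), str(p)))
            T.append(best)
            G &= block(best)
        for pair in list(T):                 # drop redundant conditions
            rest = [p for p in T if p != pair]
            if rest and covered(rest) <= B:
                T.remove(pair)
        cover.append(T)
        G = set(B) - set().union(*(covered(t) for t in cover))
    for T in list(cover):                    # drop redundant rules
        others = [t for t in cover if t is not T]
        if others and set(B) <= set().union(*(covered(t) for t in others)):
            cover.remove(T)
    return cover

concept = {c for c, r in cases.items() if r['Quality'] == 'high'}
for T in lem2(concept):
    print(' & '.join('(%s, %s)' % p for p in T), '-> (Quality, high)')
```

For the concept (Quality, high), this sketch yields the two high-quality rules that the text lists for LEM2 on Table III.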
Like the LEM1 option, the above algorithm is heuristic. The choice of criteria used was found as a result of many experiments (Grzymala-Busse and Werbrouc, 1991). Rules are eventually induced directly from the local covering; no further simplification (as in LEM1) is required. For Table III, LEM2 induces the following rules:

(Noise, medium) & (Vibration, medium) → (Quality, low),
(Interior, fair) & (Vibration, low) → (Quality, medium),
(Noise, low) & (Capacity, 4) → (Quality, high),
(Capacity, 2) & (Interior, good) → (Quality, high).

3. All Global Coverings
The All Global Coverings option of LERS represents a knowledge acquisition approach to rule induction. LERS attempts to compute all global coverings, and then all minimal rules are induced by dropping conditions. The following algorithm is used:

Algorithm ALL GLOBAL COVERINGS
Input: a decision table with set A of all attributes and decision d;
Output: the set R of all global coverings of {d};
begin
  R := ∅;
  for each attribute a in A do
    compute partition U/IND({a});
  k := 1;
  while k ≤ |A| do
    begin
      for each subset P of the set A with |P| = k do
        if (P is not a superset of any member of R) and (∩{U/IND({a}) : a ∈ P} ≤ U/IND({d}))
        then add P to R;
      k := k + 1
    end {while}
end {algorithm}.
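A runnable Python sketch of this search (not part of the original text). Applied to the consistent Table III (cases re-entered from Table XXI), it finds two global coverings:

```python
# ALL GLOBAL COVERINGS sketch: enumerate attribute subsets by increasing
# size; keep P if no already-found covering is a subset of P and U/IND(P)
# refines U/IND({d}).
from itertools import combinations

rows = [
    (5, 'fair',      'medium', 'medium', 'low'),
    (4, 'fair',      'medium', 'medium', 'low'),
    (5, 'good',      'medium', 'medium', 'low'),
    (5, 'fair',      'low',    'low',    'medium'),
    (2, 'fair',      'medium', 'low',    'medium'),
    (4, 'excellent', 'low',    'medium', 'high'),
    (4, 'good',      'low',    'low',    'high'),
    (2, 'good',      'low',    'low',    'high'),
]
ATTRS = ('Capacity', 'Interior', 'Noise', 'Vibration')

def refines(P):
    seen = {}
    for r in rows:
        key = tuple(r[ATTRS.index(a)] for a in P)
        if seen.setdefault(key, r[-1]) != r[-1]:
            return False
    return True

def all_global_coverings():
    found = []
    for k in range(1, len(ATTRS) + 1):
        for P in combinations(ATTRS, k):
            if any(set(Q) <= set(P) for Q in found):
                continue            # superset of a covering: not minimal
            if refines(P):
                found.append(P)
    return found

print(all_global_coverings())   # [('Capacity', 'Noise'), ('Interior', 'Vibration')]
```

The worst case enumerates all 2^|A| subsets, which is the exponential complexity the text mentions; the superset pruning only helps when small coverings exist.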
As we observed before, the time complexity of the problem of determining all global coverings is, in the worst case, exponential. However, there are cases when it is necessary to induce rules from all global coverings (Grzymala-Busse and Grzymala-Busse, 1994). Hence the user has an option to fix the value of a LERS parameter, reducing in this fashion the size of global coverings (due to this restriction, only global coverings of size not exceeding this number are computed). For example, from Table IV, the following rules are induced by the All Global Coverings option of LERS:

(Capacity, 5) & (Noise, medium) → (Quality, low),
(Capacity, 4) & (Noise, medium) → (Quality, low),
(Interior, fair) & (Vibration, medium) → (Quality, low),
(Interior, good) & (Vibration, medium) → (Quality, low),
(Noise, medium) & (Vibration, medium) → (Quality, low),
(Capacity, 5) & (Noise, low) → (Quality, medium),
(Capacity, 2) & (Noise, medium) → (Quality, medium),
(Interior, fair) & (Vibration, low) → (Quality, medium),
(Interior, excellent) → (Quality, high),
(Capacity, 4) & (Interior, good) → (Quality, high),
(Capacity, 2) & (Interior, good) → (Quality, high),
(Capacity, 4) & (Noise, low) → (Quality, high),
(Capacity, 2) & (Noise, low) → (Quality, high),
(Interior, good) & (Noise, low) → (Quality, high),
(Interior, good) & (Vibration, low) → (Quality, high).
4. All Rules
Yet another knowledge acquisition option of LERS is called All Rules. The name is justified by the way LERS works when this option is invoked: All rules that can be induced from the decision table are induced, all in their simplest form. The algorithm takes each example from the decision table and induces all potential rules (in their simplest form) that will cover the example and not cover any example from other concepts. Duplicate rules are deleted. For Table IV, this option will induce the following rules:
(Capacity, 4) & (Interior, fair) → (Quality, low),
(Capacity, 5) & (Interior, good) → (Quality, low),
(Capacity, 5) & (Noise, medium) → (Quality, low),
(Capacity, 4) & (Noise, medium) → (Quality, low),
(Capacity, 5) & (Vibration, medium) → (Quality, low),
(Interior, good) & (Noise, medium) → (Quality, low),
(Interior, fair) & (Vibration, medium) → (Quality, low),
(Interior, good) & (Vibration, medium) → (Quality, low),
(Noise, medium) & (Vibration, medium) → (Quality, low),
(Capacity, 2) & (Interior, fair) → (Quality, medium),
(Capacity, 5) & (Noise, low) → (Quality, medium),
(Capacity, 2) & (Noise, medium) → (Quality, medium),
(Capacity, 5) & (Vibration, low) → (Quality, medium),
(Interior, fair) & (Noise, low) → (Quality, medium),
(Interior, fair) & (Vibration, low) → (Quality, medium),
(Noise, medium) & (Vibration, low) → (Quality, medium),
(Interior, excellent) → (Quality, high),
(Capacity, 4) & (Interior, good) → (Quality, high),
(Capacity, 2) & (Interior, good) → (Quality, high),
(Capacity, 4) & (Noise, low) → (Quality, high),
(Capacity, 2) & (Noise, low) → (Quality, high),
(Capacity, 4) & (Vibration, low) → (Quality, high),
(Interior, good) & (Noise, low) → (Quality, high),
(Interior, good) & (Vibration, low) → (Quality, high),
(Noise, low) & (Vibration, medium) → (Quality, high).
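The All Rules idea can be sketched by enumerating, for each example, its condition subsets by increasing size and keeping every minimal condition set that covers no example from another concept. This Python illustration (not part of the original text) is applied to the eight cases re-entered from Table XXI, with all four attributes; under these values it reproduces the 25 rules listed above:

```python
# ALL RULES sketch: all minimal rules consistent with the decision table.
from itertools import combinations

cases = {
    'c1': {'Capacity': 5, 'Interior': 'fair', 'Noise': 'medium',
           'Vibration': 'medium', 'Quality': 'low'},
    'c2': {'Capacity': 4, 'Interior': 'fair', 'Noise': 'medium',
           'Vibration': 'medium', 'Quality': 'low'},
    'c3': {'Capacity': 5, 'Interior': 'good', 'Noise': 'medium',
           'Vibration': 'medium', 'Quality': 'low'},
    'c4': {'Capacity': 5, 'Interior': 'fair', 'Noise': 'low',
           'Vibration': 'low', 'Quality': 'medium'},
    'c5': {'Capacity': 2, 'Interior': 'fair', 'Noise': 'medium',
           'Vibration': 'low', 'Quality': 'medium'},
    'c6': {'Capacity': 4, 'Interior': 'excellent', 'Noise': 'low',
           'Vibration': 'medium', 'Quality': 'high'},
    'c7': {'Capacity': 4, 'Interior': 'good', 'Noise': 'low',
           'Vibration': 'low', 'Quality': 'high'},
    'c8': {'Capacity': 2, 'Interior': 'good', 'Noise': 'low',
           'Vibration': 'low', 'Quality': 'high'},
}
ATTRS = ('Capacity', 'Interior', 'Noise', 'Vibration')

def matching(cond):
    return {c for c, row in cases.items()
            if all(row[a] == v for a, v in cond)}

def all_rules():
    rules = set()
    for case, row in cases.items():
        for k in range(1, len(ATTRS) + 1):
            for attrs in combinations(ATTRS, k):
                cond = frozenset((a, row[a]) for a in attrs)
                if any(r <= cond for r, _ in rules):
                    continue        # a simpler rule already covers this
                if all(cases[c]['Quality'] == row['Quality']
                       for c in matching(cond)):
                    rules.add((cond, row['Quality']))
    return rules

rules = all_rules()
print(len(rules))   # 25
for cond, decision in sorted(rules, key=lambda r: (r[1], sorted(map(str, r[0])))):
    print(' & '.join('(%s, %s)' % p for p in sorted(cond, key=str)),
          '-> (Quality, %s)' % decision)
```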
C. Real-World Applications of LERS

The LERS approach to rule induction was used at NASA Johnson Space Center for developing an expert system that may be used in the medical area on board the space station Freedom. Also, LERS was used to induce rules useful for enforcing Sections 311, 312, and 313 of Title III, the Emergency Planning and Community Right to Know, as a part of a project funded by the U.S. Environmental Protection Agency. Another application area of LERS was medicine, the traditional area of application for machine learning. First, the system was used for the choice of warming devices for patients recovering after surgeries (Budihardjo et al., 1991). Second, the LERS system induced rules to assess preterm labor risk for pregnant women (Grzymala-Busse and Woolery, 1994). The error rate of existing diagnostic methods is high (62-83%), since the risk is very difficult
to diagnose. The system LERS induced rules from training data (half of a file) and then classified unseen cases (the other half of a file) using these rules for three files. The resulting classification error for unseen cases was much smaller (10-32%). The LERS system was also successful in rule induction in such diverse areas as linguistics (Grzymala-Busse and Than, 1993) and global warming (Gunn and Grzymala-Busse, 1994).
IV. CONCLUSIONS

Rough set theory offers an alternative approach to handling imperfect data. The closest other known approach is evidence theory. The main difference between rough set theory and evidence theory is that the former is objective (lower and upper approximations of a set are computed directly from input data), while the latter is subjective (the basic probability assignment is given by an expert). Though the number of papers and reports about rough sets is close to a thousand, and the scope of rough set theory is dynamically expanding, there is room for much more research, as is clear from the activities in workshops devoted to rough sets (Ziarko, 1994; Lin, 1995). Logical fundamentals of rough set theory (close to modal logics), further developments of rough set theory itself, and real-world applications, to name a few, are being explored extensively.
REFERENCES

Budihardjo, A., Grzymala-Busse, J. W., and Woolery, L. (1991). In "Proc. IV Int. Conf. on Industrial and Engineering Applications of Artificial Intelligence and Expert Systems" (Koloa, Kauai, Hawaii), p. 735.
Chan, C.-C., and Grzymala-Busse, J. W. (1989). In "Proc. of the ISMIS-89, IV International Symposium on Methodologies for Intelligent Systems" (Charlotte, NC), p. 281, North Holland, Amsterdam, New York, Oxford, Tokyo.
Dempster, A. P. (1967). Ann. Math. Stat. 38, 325.
Dempster, A. P. (1968). J. Roy. Stat. Soc., Ser. B 30, 205.
Dubois, D., and Prade, H. (1992). In "Intelligent Decision Support. Handbook of Applications and Advances of the Rough Set Theory" (R. Slowinski, ed.), p. 203, Kluwer, Dordrecht, Boston, London.
Grzymala-Busse, D. M., and Grzymala-Busse, J. W. (1994). In "Proc. of the XIV Int. Avignon Conf." (Paris), p. 183.
Grzymala-Busse, J. W. (1987). "Rough-Set and Dempster-Shafer Approaches to Knowledge Acquisition Under Uncertainty - A Comparison." Manuscript.
Grzymala-Busse, J. W. (1988). J. Intelligent & Robotic Systems 1, 3.
Grzymala-Busse, J. W. (1991). "Managing Uncertainty in Expert Systems." Kluwer, Dordrecht, Boston, London.
Grzymala-Busse, J. W. (1992). In "Intelligent Decision Support. Handbook of Applications and Advances of the Rough Set Theory" (R. Slowinski, ed.), p. 3, Kluwer, Dordrecht, Boston, London.
Grzymala-Busse, J. W., and Than, S. (1993). Behav. Res. Meth., Instr., & Comput. 25, 318.
Grzymala-Busse, J. W., and Werbrouc, P. (1991). "On the Best Search Method in the LEM1 and LEM2 Algorithms." Department of Computer Science, University of Kansas, TR-91-16.
Grzymala-Busse, J. W., and Woolery, L. (1994). J. Am. Med. Inform. Assoc. 1, 439.
Gunn, J. D., and Grzymala-Busse, J. W. (1994). Human Ecol. 22, 59.
Lin, T. Y., ed. (1995). "Proc. of RSSC'95 Workshop" (San Jose, CA), Springer-Verlag, Berlin, New York.
Pawlak, Z. (1982). Int. J. Comput. Inform. Sci. 11, 341.
Pawlak, Z. (1984). Int. J. Man-Machine Stud. 20, 469.
Pawlak, Z. (1991). "Rough Sets. Theoretical Aspects of Reasoning about Data." Kluwer, Dordrecht, Boston, London.
Pawlak, Z., Grzymala-Busse, J. W., Slowinski, R., and Ziarko, W. (1995). "Rough Sets." Accepted for publication in Commun. ACM.
Pawlak, Z., Slowinski, K., and Slowinski, R. (1986). Int. J. Man-Machine Stud. 24, 413.
Shafer, G. (1976). "A Mathematical Theory of Evidence." Princeton University Press, Princeton, NJ.
Skowron, A., and Grzymala-Busse, J. W. (1994). In "Advances in the Dempster-Shafer Theory of Evidence" (R. R. Yager, M. Fedrizzi, and J. Kacprzyk, eds.), p. 193, John Wiley, New York.
Slowinski, K., and Slowinski, R. (1990). Int. J. Man-Machine Stud. 32, 693.
Slowinski, R., ed. (1992). "Intelligent Decision Support. Handbook of Applications and Advances of the Rough Set Theory." Kluwer, Dordrecht, Boston, London.
Slowinski, R., and Stefanowski, J. (1992). In "Intelligent Decision Support. Handbook of Applications and Advances of the Rough Set Theory" (R. Slowinski, ed.), p. 445, Kluwer, Dordrecht, Boston, London.
Szladow, A., and Ziarko, W. (1993). AI Expert 7, 36.
Ziarko, W. (1992). In "Intelligent Decision Support. Handbook of Applications and Advances of the Rough Set Theory" (R. Slowinski, ed.), p. 61, Kluwer, Dordrecht, Boston, London.
Ziarko, W., ed. (1994). "Rough Sets, Fuzzy Sets and Knowledge Discovery. Proc. of RSKD'94 Workshop" (Banff, Canada), Springer-Verlag, Berlin, New York.
ADVANCES IN IMAGING AND ELECTRON PHYSICS, VOL. 94
Theoretical Concepts of Electron Holography

FRANK KAHL AND HARALD ROSE
Institute of Applied Physics, Technical University of Darmstadt, Darmstadt, Germany
I. Introduction
II. The Scattering Amplitude
III. Image Formation in Transmission Electron Microscopy
  A. Image Intensity and Contrast Transfer
  B. Low-Dose Imaging
IV. Off-Axis Holography in the Transmission Electron Microscope
  A. Formation of the Off-Axis Hologram
  B. Spectral Signal-to-Noise Ratio in Off-Axis Transmission Electron Microscope Holography
V. Image Formation in Scanning Transmission Electron Microscopy
  A. Bright-Field Imaging
  B. Spectral Signal-to-Noise Ratio
VI. Off-Axis Scanning Transmission Electron Microscope Holography
  A. Basic Principles
  B. Formation of the Scanning Transmission Electron Microscopy Hologram
  C. Fourier Transform of the Hologram
  D. Spectral Signal-to-Noise Ratio
VII. Conclusion
Appendix: Fresnel Diffraction in Off-Axis Transmission Electron Microscope Holography
References
I. INTRODUCTION
Electron microscopy is well suited for obtaining structural information on an atomic scale. The maximum information that can be obtained by electron microscopy is the complete knowledge of the elastic and all inelastic complex scattering amplitudes of the specimen. The complete reconstruction of the amplitude and the phase of each scattering amplitude is possible only if the information about these quantities is contained in the recorded image. Since only the absolute value of the electron wave function can be measured, the phase information of a single wave or a package of incoherent waves is irretrievably lost in the image plane of a conventional transmission electron microscope (TEM) or the detector plane of a scanning transmission electron microscope (STEM). Holography preserves the phase by superposing a coherent reference wave onto the elastically scattered wave (Gabor, 1949, 1951; Hawkes and Kasper,
1994). As a result, information about both the amplitude and the phase of the scattered wave is contained in the interference pattern. Under certain conditions the phase and the amplitude of the scattered wave can be reconstructed from such interference patterns. Since electrons are fermions, two electrons cannot occupy the same quantum state (Sakurai, 1985). Therefore, two different electrons cannot interfere with each other. Owing to this behavior, the reference wave must be a coherent partial wave of the wave package which describes a particular electron. All holographic techniques are restricted to the analysis of elastic scattering processes because the reference wave can only interfere with the elastically scattered part of the wave. Hence we disregard in the following considerations all inelastic scattering processes. In-line holography is realized in conventional bright-field imaging in the TEM and the STEM (Hanssen, 1971,1982; Hawkes and Kasper, 1994) in the case of coherent illumination. The coherent part of the wave at the exit plane, located directly beneath the specimen, can be written as a superposition of the undisturbed plane wave t,bo and the elastically scattered wave rc/,. Both of them pass through the imaging system of the TEM and interfere with each other at the image plane, where they form a Fraunhofer hologram. The current density jiat the image plane has the form
$$j_i \propto |\psi|^2 = |\psi_0|^2 + 2\,\mathrm{Re}\{\bar{\psi}_0\psi_s\} + |\psi_s|^2, \qquad (1)$$

where the bar indicates the complex conjugate value. Here $\psi_0$ and $\psi_s$ describe the unscattered part and the scattered part of the wave, respectively, at the recording plane; $k = 2\pi/\lambda$, $\lambda$, and $m_0$ are the wave number, the wavelength, and the mass of the electron, respectively. In Fourier space we obtain

$$\mathrm{FT}\{j_i\} \propto \mathrm{FT}|\psi_0|^2 + \mathrm{FT}\{\bar{\psi}_0\psi_s + \psi_0\bar{\psi}_s\} + \mathrm{FT}|\psi_s|^2. \qquad (2)$$
The first term on the right-hand side of this equation represents the zero peak, the middle term represents the twin images and the third term is the quadratic or “spurious” term. The two twin images contain the elastic scattering amplitude linearly, while the spurious term comprises the quadratic terms of the elastic scattering amplitude and of the inelastic scattering amplitudes. The zero peak is produced by the unscattered part of the wave. The linear twin image can be used to reconstruct the elastic scattering amplitude. Unfortunately, this is possible only if the twin images can be separated in Fourier space from the zero peak and from the quadratic terms, which are called spurious terms in accordance with Gabor (1949). The main disadvantage of all in-line holographic procedures is the overlap of the twin images and the quadratic term (Fig. 1). As a consequence, these terms cannot be separated from each other. However, for a weak phase object, the quadratic term becomes negligibly small owing to the generalized optic theorem
THEORETICAL CONCEPTS OF ELECTRON HOLOGRAPHY

FIGURE 1. Locations of the central peak, the twin images, and the quadratic term in the Fourier image of an in-line hologram.

FIGURE 2. Locations of the central peak, the twin images, and the quadratic term in the Fourier image of an off-axis hologram.
(Glauber and Schomaker, 1953; Glauber, 1958). In this case the elastic scattering amplitude can be reconstructed, as demonstrated by Gabor (1949). To overcome the disadvantages of in-line holography, Leith and Upatnieks (1962) proposed off-axis holography. Subsequently this method was successfully employed in electron holography by H. Lichte (1991a) and A. Tonomura (1986, 1993). Off-axis holography uses a tilted reference wave which separates the twin images from each other and from the quadratic term, as shown in Fig. 2. Owing to the successful application of off-axis holography in the TEM, T. Leuthner and H. Lichte (1989) suggested that this technique could also be
employed in the STEM. Unfortunately, they assumed an unrealistic point probe, thus neglecting the aberrations and the finite size of the aperture. However, at atomic resolution these effects must be taken into account. In the following sections we first outline the theory of image formation in TEM and of TEM holography, as realized by Lichte and Tonomura. In the second step we briefly restate the theory of bright-field imaging in STEM and subsequently develop the theory of off-axis holography in the STEM. Finally, we compare the performance of TEM off-axis holography with that of the STEM and with that of STEM in-line holography.

II. THE SCATTERING AMPLITUDE

For determining the intensity distribution of a hologram, it is sufficient to know the electron wave far from the object. Neglecting inelastic scattering processes, the wave function adopts the asymptotic form (Sakurai, 1985)

$$\psi(\mathbf{r}) \approx \psi_{as} = e^{i\mathbf{k}_i\cdot\mathbf{r}} + f(\mathbf{k}_i, \mathbf{k}_f)\,\frac{e^{ikr}}{r} \qquad (3)$$
at a distance $r$ which is large compared with the diameter $D_0$ of the object. The complex elastic scattering amplitude $f(\mathbf{k}_i, \mathbf{k}_f)$ depends on the wave vectors $\mathbf{k}_i$ and $\mathbf{k}_f$ of the incident and the scattered electron, respectively (see Fig. 3). Accordingly, the amplitude of the scattered spherical wave is a function of both the direction of incidence and the direction of scattering.

FIGURE 3. Illustration of the asymptotic behavior of the elastically scattered electron wave (3).

FIGURE 4. Definition of the angles determining the propagation of the electron in front of and behind the specimen.

In the case of elastic scattering the absolute value of the wave vector is preserved:

$$|\mathbf{k}_i| = |\mathbf{k}_f| = k. \qquad (4)$$

We assume that the specimen is located in a plane perpendicular to the optic axis, as shown in Fig. 4. The optic axis is chosen as the $z$ axis of the $x, y, z$ coordinate system. For electron energies larger than about 100 keV, most electrons are deflected by small angles so that backscattering can be neglected. In this case the small-angle approximation
$$\mathbf{k}_i \approx k\,\mathbf{e}_z + k\,\boldsymbol{\Theta}, \qquad \mathbf{k}_f \approx k\,\mathbf{e}_z + k\,\boldsymbol{\theta} \qquad (5)$$

can be applied. The angles

$$\boldsymbol{\Theta} = \Theta(\cos\phi_i\,\mathbf{e}_x + \sin\phi_i\,\mathbf{e}_y), \qquad \boldsymbol{\theta} = \theta(\cos\phi_f\,\mathbf{e}_x + \sin\phi_f\,\mathbf{e}_y), \qquad (6)$$

together with $\phi_i$ and $\phi_f$, are defined in Fig. 4. For the
following considerations we assume a tilted incident plane wave of the form

$$\psi_i(\mathbf{r}) = e^{i\mathbf{k}_i\cdot\mathbf{r}} = e^{ikz}\,e^{ik\boldsymbol{\rho}\cdot\boldsymbol{\Theta}}. \qquad (7)$$

Behind the object plane $z = z_o$, the incident plane wave is modulated owing to the interaction with the specimen. For a thin object this modulation can be described by a generalized complex transmission function

$$T(\boldsymbol{\Theta}, \boldsymbol{\rho}_o), \qquad (8)$$

which generally depends on both the angle of incidence $\boldsymbol{\Theta}$ and the position vector $\boldsymbol{\rho}_o = x_o\mathbf{e}_x + y_o\mathbf{e}_y$ at the object plane. For the wave function just behind the object plane we obtain

$$\psi(\boldsymbol{\rho}_o, \boldsymbol{\Theta}) = T(\boldsymbol{\Theta}, \boldsymbol{\rho}_o)\,e^{ik\boldsymbol{\rho}_o\cdot\boldsymbol{\Theta}}\,e^{ikz_o}. \qquad (9)$$

It is advantageous to separate this wave function into an undisturbed plane wave and a wave resulting from the interaction of the incident wave with the specimen:

$$\psi(\boldsymbol{\rho}_o, \boldsymbol{\Theta}) = e^{ik\boldsymbol{\rho}_o\cdot\boldsymbol{\Theta}}\,e^{ikz_o} + \{T(\boldsymbol{\Theta}, \boldsymbol{\rho}_o) - 1\}\,e^{ik\boldsymbol{\rho}_o\cdot\boldsymbol{\Theta}}\,e^{ikz_o}. \qquad (10)$$
The term $T - 1$ is zero outside the area of the specimen with the diameter $D_0$. Hence at a distance $d = z - z_o$ which fulfills the condition

$$d \gg \frac{D_0^2}{\lambda}, \qquad (11)$$

the Fraunhofer approximation (Hawkes and Kasper, 1994)
$$\psi(\boldsymbol{\rho}, z) = e^{ikd}\,\frac{e^{ik\rho^2/2d}}{i\lambda d}\int \psi(\boldsymbol{\rho}_o, z_o)\,e^{-ik\boldsymbol{\rho}\cdot\boldsymbol{\rho}_o/d}\,d^2\rho_o \qquad (12)$$

can be applied for the partial wave containing $T - 1$. Here $\boldsymbol{\rho}$ denotes the position in the plane $z$. Since the unscattered wave extends over the entire object plane, the Fraunhofer condition (11) is not satisfied for this wave. Therefore, the Fresnel approximation

$$\psi(\boldsymbol{\rho}, z) = e^{ikd}\int \psi(\boldsymbol{\rho}_o, z_o)\,\frac{e^{ik(\boldsymbol{\rho}-\boldsymbol{\rho}_o)^2/2d}}{i\lambda d}\,d^2\rho_o \qquad (13)$$
must be used to propagate the undisturbed plane wave to the plane $z$. As a result we obtain

$$\psi(\boldsymbol{\rho}, z) = e^{ikz_o}\,e^{ikd(1-\Theta^2/2)}\,e^{ik\boldsymbol{\Theta}\cdot\boldsymbol{\rho}} + e^{ikz_o}\,\frac{e^{ikd(1+\theta^2/2)}}{d}\,F(\boldsymbol{\Theta}, \boldsymbol{\theta}), \qquad (14)$$

where

$$F(\boldsymbol{\Theta}, \boldsymbol{\theta}) = \frac{k}{2\pi i}\int \{T(\boldsymbol{\Theta}, \boldsymbol{\rho}_o) - 1\}\,e^{-ik(\boldsymbol{\theta}-\boldsymbol{\Theta})\cdot\boldsymbol{\rho}_o}\,d^2\rho_o \qquad (15)$$
represents the scattering amplitude and

$$\boldsymbol{\theta} = \frac{\boldsymbol{\rho}}{d} \qquad (16)$$
the angular position at the observation plane $z$. The first term on the right-hand side of (14) is the small-angle approximation of the undisturbed wave $\exp(i\mathbf{k}_i\cdot\mathbf{r})$ at this plane. Since the factor

$$\frac{e^{ikd(1+\theta^2/2)}}{d} \qquad (17)$$

is the small-angle approximation of the spherical wave

$$\frac{e^{ikr}}{r} = \frac{e^{ik\sqrt{d^2+\rho^2}}}{\sqrt{d^2+\rho^2}}, \qquad (18)$$

it follows from (3) that the function $F(\boldsymbol{\Theta}, \boldsymbol{\theta})$ represents the small-angle approximation of the elastic scattering amplitude $f(\mathbf{k}_i, \mathbf{k}_f)$. The transmission function of a very thin phase object does not depend on the angle of incidence $\boldsymbol{\Theta}$. In this case the scattering amplitude

$$F(\boldsymbol{\Theta}, \boldsymbol{\theta}) = F(\boldsymbol{\theta} - \boldsymbol{\Theta}) \qquad (19)$$

is a function of the scattering angle $\boldsymbol{\theta} - \boldsymbol{\Theta}$. It should be noted that for $T(\boldsymbol{\Theta}, \boldsymbol{\rho}_o) = T(\boldsymbol{\rho}_o)$, the approximation (15) coincides with the Glauber high-energy approximation (Molière, 1947; Glauber, 1958).
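The distinction between the regimes in which conditions (11) and (13) apply can be made concrete with a short numerical sketch. The specimen diameter chosen below is an illustrative assumption, not a value from the text; the wavelength routine is the standard relativistically corrected de Broglie formula.

```python
import math

def electron_wavelength(e_kv: float) -> float:
    """Relativistically corrected de Broglie wavelength (meters)
    for an acceleration voltage given in kilovolts."""
    h = 6.62607015e-34       # Planck constant, J s
    m0 = 9.1093837015e-31    # electron rest mass, kg
    e = 1.602176634e-19      # elementary charge, C
    c = 2.99792458e8         # speed of light, m/s
    eU = e * e_kv * 1e3      # kinetic energy, J
    p = math.sqrt(2.0 * m0 * eU * (1.0 + eU / (2.0 * m0 * c**2)))
    return h / p

lam = electron_wavelength(100.0)     # ~3.7 pm at 100 keV
D0 = 10e-9                           # assumed specimen diameter: 10 nm
d_fraunhofer = D0**2 / lam           # distance scale entering condition (11)

print(f"lambda = {lam*1e12:.2f} pm")
print(f"D0^2/lambda = {d_fraunhofer*1e6:.1f} um")
```

For a nanometer-scale object the far-field condition is thus already met a few tens of micrometers beyond the specimen, whereas the unscattered wave, which extends over the whole object plane, never satisfies it; this is why the Fresnel propagator (13) is needed for the undisturbed wave.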
III. IMAGE FORMATION IN TRANSMISSION ELECTRON MICROSCOPY
A. Image Intensity and Contrast Transfer

Within the frame of validity of Gaussian dioptrics, the propagation of an electron wave from the plane $z_o$ to the plane $z$ is given by the Fresnel transformation (13), provided that the space $d = z - z_o$ between the two planes is free of electromagnetic fields. In this paraxial approximation the action of a thin lens located at the plane $z_l$ is described by the lens transmission function (Goodman, 1968)

$$T(\boldsymbol{\rho}_l) = e^{-ik\rho_l^2/2f}, \qquad (20)$$

where $f$ denotes the focal length of the lens. To simplify the following considerations we assume a pointlike field emission gun. This ideal point source furnishes spatially coherent illumination. Moreover, we consider only parallel axial illumination:

$$\boldsymbol{\Theta} = 0. \qquad (21)$$
In this case the wave function $\psi_e$ at the exit plane directly behind the specimen has the form

$$\psi_e(\boldsymbol{\rho}_o) = T(0, \boldsymbol{\rho}_o)\,e^{ikz_o}. \qquad (22)$$

The specimen plane is located at a distance $g = z_l - z_o$ in front of the lens plane $z_l$. The objective aperture is placed in the diffraction plane $z_a$, which coincides with the back focal plane of the objective lens, as shown in Fig. 5. The distance between this plane and the image plane is denoted by $l$. The action of the aperture can be described by an aperture function $A = A(\boldsymbol{\rho}_a)$.

FIGURE 5. Image formation in the TEM.
Considering the effect of this aperture and that of the lens (20), the wave function at the image plane $z_i$ can be written as a sequence of three Fresnel propagations between the planes $z_o \to z_l \to z_a \to z_i$:

$$\psi_i(\boldsymbol{\rho}_i, z_i) = e^{ik(z_i-z_o)}\iiint T(0,\boldsymbol{\rho}_o)\,\frac{e^{ik(\boldsymbol{\rho}_l-\boldsymbol{\rho}_o)^2/2g}}{i\lambda g}\;e^{-ik\rho_l^2/2f}\;\frac{e^{ik(\boldsymbol{\rho}_a-\boldsymbol{\rho}_l)^2/2f}}{i\lambda f}\;A(\boldsymbol{\rho}_a)\;\frac{e^{ik(\boldsymbol{\rho}_i-\boldsymbol{\rho}_a)^2/2l}}{i\lambda l}\,d^2\rho_o\,d^2\rho_l\,d^2\rho_a. \qquad (23)$$
The integration over $\boldsymbol{\rho}_l$ can be performed analytically. The magnification of the image is defined as

$$M = (-1)^{n+1}\,\frac{l}{f}, \qquad (24)$$

where $n$ denotes the number of intermediate images located between the object plane $z_o$ and the final image plane $z_i$. For mathematical simplicity it is advantageous to replace the position vector $\boldsymbol{\rho}_a$ by the aperture angle

$$\boldsymbol{\theta} = \frac{\boldsymbol{\rho}_a}{f} \approx \frac{|M|}{l}\,\boldsymbol{\rho}_a. \qquad (25)$$

The second relation holds true in the case $|M| \gg 1$. For the arrangement shown in Fig. 5 we have $n = 0$. Taking further into account the thin-lens equation

$$\frac{1}{g} + \frac{1}{f+l} = \frac{1}{f}, \qquad (26)$$
we obtain

$$\psi_i(\boldsymbol{\rho}_i, z_i) = \frac{e^{ik(z_i-z_o)}}{M}\,\frac{k^2}{4\pi^2}\iint T(0,\boldsymbol{\rho}_o)\,A(\boldsymbol{\theta})\,e^{ik\boldsymbol{\theta}\cdot(\boldsymbol{\rho}_i/M - \boldsymbol{\rho}_o)}\,d^2\rho_o\,d^2\theta. \qquad (27)$$

This result shows that the three Fresnel transforms of the expression (23) are equivalent to a sequence of two Fourier transforms. In the following we assume a circular aperture with an aperture function

$$A(\boldsymbol{\theta}) = A(\theta) = \begin{cases} 1 & \text{for } \theta \le \theta_0, \\ 0 & \text{else}, \end{cases} \qquad (29)$$
where $\theta_0$ denotes the limiting aperture angle. By inserting the relation (10) into the integrand of (27) and considering the expression (15), the wave function (27) at the image plane takes the well-known form (Rose, 1977)

$$\psi_i(\boldsymbol{\rho}_i, z_i) = \frac{e^{ik(z_i-z_o)}}{M}\left\{1 + \frac{ik}{2\pi}\int A(\theta)\,F(0,\boldsymbol{\theta})\,e^{ik\boldsymbol{\theta}\cdot\boldsymbol{\rho}_i/M}\,d^2\theta\right\}. \qquad (30)$$
Unfortunately, the assumption that the source is monochromatic and the lens system ideal does not yield a realistic description of the image intensity. To be more accurate, we must consider the energy spread of the source, the defocus $\Delta f$ of the specimen, and the aberrations of the objective lens. Assuming isoplanatic conditions, we can neglect all off-axial aberrations. The defocus

$$\Delta f = z_g - z_o \qquad (31)$$

of the objective lens is positive if the specimen plane $z_o$ is located between the lens and the Gaussian object plane $z_g$. Owing to the energy spread of the source, the actual energy $E$ of an electron differs by the amount

$$\Delta E = E - E_0 \qquad (32)$$

from the mean energy $E_0$ of the beam. The influence of the defocus, of the energy deviation, and of the spherical aberration can be taken into account by replacing the aperture function $A(\theta)$ by the generalized complex aperture function

$$\mathscr{A}(\boldsymbol{\theta}, \Delta E) = A(\theta)\,e^{-i\gamma(\boldsymbol{\theta}, \Delta E)}, \qquad (33)$$

where

$$\gamma(\boldsymbol{\theta}, \Delta E) = k\left[\frac{C_s\,\theta^4}{4} - \left(\Delta f + C_c\,\frac{\Delta E}{E_0}\right)\frac{\theta^2}{2}\right] \qquad (34)$$

is the phase shift produced by these perturbations (Scherzer, 1949; Goodman, 1968; Hawkes and Kasper, 1994); $C_s$ and $C_c$ denote the constants of the spherical and the axial chromatic aberration, respectively. Substituting (33) for $A(\theta)$ in relation (30), the wave function at the image plane $z_i$ is found as

$$\psi_i\!\left(\frac{\boldsymbol{\rho}_i}{M}, \Delta E\right) = \frac{e^{ik(z_i-z_o)}}{M}\left\{1 + \frac{ik}{2\pi}\int A(\theta)\,e^{-i\gamma(\boldsymbol{\theta},\Delta E)}\,F(0,\boldsymbol{\theta})\,e^{ik\boldsymbol{\theta}\cdot\boldsymbol{\rho}_i/M}\,d^2\theta\right\}. \qquad (35)$$
The corresponding current density is given by

$$j_i\!\left(\frac{\boldsymbol{\rho}_i}{M}, \Delta E\right) = j_0\left|\psi_i\!\left(\frac{\boldsymbol{\rho}_i}{M}, \Delta E\right)\right|^2. \qquad (36)$$
Here $j_0$ denotes the constant current density at the object plane. The actual current density is the average over all possible energy deviations $\Delta E$ of the electrons:

$$\left\langle j_i\!\left(\frac{\boldsymbol{\rho}_i}{M}\right)\right\rangle = \int j_i\!\left(\frac{\boldsymbol{\rho}_i}{M}, \Delta E\right) p(\Delta E)\,d\Delta E, \qquad (37)$$
where $p(\Delta E)$ is the normalized energy distribution of the source.

B. Low-Dose Imaging
In the case of low-dose imaging, the noise resulting from the CCD camera or the photographic plate is negligibly small compared with the quantum noise. To account for the effect of quantum noise, we subdivide the image into $N \times N$ square pixels of equal area $(Ma)^2$, where $a$ is the lateral length of the corresponding object elements. The signal $N_{lm}$ recorded by the pixel $(l, m)$, $l, m = 1, 2, \ldots, N$, is assumed to be proportional to the number of electrons that have hit the pixel area during the illumination time $t$.

1. Average Image Intensity
The mean value $\langle N_{lm}\rangle$ of the signal $N_{lm}$ is defined as

$$\langle N_{lm}\rangle = \frac{q}{e}\int B(\boldsymbol{\rho}_{lm} - \boldsymbol{\rho})\,j_i(\boldsymbol{\rho})\,d^2\rho. \qquad (38)$$

The proportionality factor $q$ represents the quantum efficiency. The vector $M\boldsymbol{\rho}_{lm}$ points to the center of the pixel $(l, m)$ at the image plane. The box function $B(\boldsymbol{\rho})$ is defined as

$$B(\boldsymbol{\rho}) = \begin{cases} 1 & \text{for } |x| \le a/2 \text{ and } |y| \le a/2, \\ 0 & \text{else.} \end{cases} \qquad (39)$$

For reasons of simplicity, we refer the image coordinates back to the object plane by means of the relation

$$\boldsymbol{\rho} = \frac{\boldsymbol{\rho}_i}{M}. \qquad (40)$$
Employing relation (38), the mean value of the discrete Fourier transform

$$\tilde{I}(\boldsymbol{\Omega}) = \frac{a^2}{\lambda^2}\sum_{l,m=0}^{N-1} N_{lm}\,e^{-ik\boldsymbol{\Omega}\cdot\boldsymbol{\rho}_{lm}} \qquad (41)$$

of the digitized image intensity is obtained as

$$\langle\tilde{I}(\boldsymbol{\Omega})\rangle = \frac{a^2}{\lambda^2}\sum_{l,m=0}^{N-1} \langle N_{lm}\rangle\,e^{-ik\boldsymbol{\Omega}\cdot\boldsymbol{\rho}_{lm}}. \qquad (42)$$
With the aid of the convolution theorem we derive the expression
$$\langle\tilde{I}(\boldsymbol{\Omega})\rangle \approx \frac{M^2}{e}\,N(\boldsymbol{\Omega})\,J(\boldsymbol{\Omega}), \qquad (43)$$

provided that the number of pixels is sufficiently large. The functions $N$ and $J$ are given by

$$N(\boldsymbol{\Omega}) = q\,\frac{\sin(ka\Omega_x/2)}{ka\Omega_x/2}\,\frac{\sin(ka\Omega_y/2)}{ka\Omega_y/2} \qquad (44)$$

and

$$J(\boldsymbol{\Omega}) = \frac{1}{\lambda^2}\int j_i(\boldsymbol{\rho})\,e^{-ik\boldsymbol{\Omega}\cdot\boldsymbol{\rho}}\,d^2\rho, \qquad (45)$$

respectively. The range $0 \le |\boldsymbol{\Omega}|/\lambda \le \Omega_{\max}/\lambda$ of the transferred spatial frequencies is limited by the aperture at the diffraction plane. If the size $a$ of the pixels satisfies the sampling condition

$$a \le \frac{\lambda}{2\,\Omega_{\max}}, \qquad (46)$$
we obtain

$$\langle\tilde{I}(\boldsymbol{\Omega})\rangle = \frac{M^2}{e}\,N(\boldsymbol{\Omega})\,J(\boldsymbol{\Omega}) = \frac{M^2}{e}\,\frac{N(\boldsymbol{\Omega})}{\lambda^2}\iint j_i(\boldsymbol{\rho}, \Delta E)\,p(\Delta E)\,e^{-ik\boldsymbol{\Omega}\cdot\boldsymbol{\rho}}\,d^2\rho\,d\Delta E \qquad (47)$$

for the significant region $0 \le \Omega \le \Omega_{\max}$. This result is in agreement with the sampling theorem proposed by Whittaker (1915) and Shannon (1949). Considering the expression (36) for $j_i(\boldsymbol{\rho}, \Delta E)$, we can perform the integration over $\boldsymbol{\rho}$ analytically:
$$\langle\tilde{I}(\boldsymbol{\Omega})\rangle = \frac{M^2}{e}\,j_0\,N(\boldsymbol{\Omega})\left\{\delta(\boldsymbol{\Omega}) + \frac{i}{\lambda}\,F(0,\boldsymbol{\Omega})\,A(\Omega)\int e^{-i\gamma(\boldsymbol{\Omega},\Delta E)}\,p(\Delta E)\,d\Delta E - \frac{i}{\lambda}\,\bar{F}(0,-\boldsymbol{\Omega})\,A(\Omega)\int e^{i\gamma(\boldsymbol{\Omega},\Delta E)}\,p(\Delta E)\,d\Delta E + \tilde{Q}(\boldsymbol{\Omega})\right\}. \qquad (48)$$
Here $\delta(\boldsymbol{\Omega})$ is the two-dimensional delta function which accounts for the central peak, the next two terms represent the twin images, while the last term represents the quadratic contribution. Although the elastic scattering amplitude $F$ is separated from the transfer characteristic of the microscope in the expressions for the two twin images, a reconstruction of the scattering amplitude is not possible for an arbitrary specimen because the twin images overlap with the quadratic term and with each other. In the special case of a weak phase object, the quadratic term is negligibly small. The scattering amplitude $F$ of a weak phase object (19) depends only on the scattering angle $\boldsymbol{\theta} - \boldsymbol{\Theta}$ and satisfies the Friedel law:

$$F(\boldsymbol{\theta} - \boldsymbol{\Theta}) = \bar{F}(\boldsymbol{\Theta} - \boldsymbol{\theta}). \qquad (49)$$
In this case the second and the third terms in expression (48) can be combined into a single term. As a result we obtain the weak phase-object approximation

$$\langle\tilde{I}(\boldsymbol{\Omega})\rangle = \frac{M^2}{e}\,j_0\,N(\boldsymbol{\Omega})\left\{\delta(\boldsymbol{\Omega}) - \frac{2}{\lambda}\,F(\boldsymbol{\Omega})\,\mathscr{P}(\boldsymbol{\Omega})\right\}, \qquad (50)$$

where

$$\mathscr{P}(\boldsymbol{\Omega}) = \mathscr{P}(\Omega) = -A(\Omega)\int \sin\gamma(\boldsymbol{\Omega}, \Delta E)\,p(\Delta E)\,d\Delta E \qquad (51)$$
is the so-called phase-contrast transfer function (PCTF). To incorporate the effect of the energy spread, a Gaussian energy distribution is assumed:

$$p(\Delta E) = \frac{1}{\sqrt{2\pi}\,\sigma_E}\,e^{-(1/2)(\Delta E/\sigma_E)^2}. \qquad (52)$$

For this distribution the integral in expression (51) can be evaluated analytically, yielding

$$\mathscr{P}(\boldsymbol{\Omega}) = -A(\Omega)\,e^{-(1/2)\left[k C_c \sigma_E \Omega^2/(2E_0)\right]^2}\,\sin\gamma_0(\Omega), \qquad (53)$$

with

$$\gamma_0(\boldsymbol{\Omega}) = \gamma(\boldsymbol{\Omega}, \Delta E = 0). \qquad (54)$$
If we employ the normalized units listed in Table I, the results are valid for
TABLE I
NOTATION OF VARIABLES AND IMAGING PARAMETERS IN CONVENTIONAL AND IN NORMALIZED FORM, RESPECTIVELY

Parameter/Function    Specific    Normalized                 Normalization factor
Defocus               Δf          Δ := Δf/Δ_N                Δ_N = (C_s λ)^{1/2}
Aperture angle        θ_0         ϑ_0 := θ_0/θ_N             θ_N = (λ/C_s)^{1/4}
Detector angle        θ_d         ϑ_d := θ_d/θ_N
Pixel size            a           ā := a/d_N                 d_N = λ/θ_N = (C_s λ^3)^{1/4}
Biprism angle         δ           δ̄ := δ/θ_N
Detector coordinate   θ           ϑ := θ/θ_N
Fourier coordinate    Ω           ω := Ω/θ_N
Spatial frequency     Ω/λ         ω := (Ω/λ) d_N
Source parameter      σ_E         ε := π C_c σ_E /(E_0 (C_s λ)^{1/2})
any microscope. In particular, the PCTF takes the well-known form

$$\mathscr{P}(\omega) = -A(\omega)\,e^{-(1/2)\varepsilon^2\omega^4}\,\sin\gamma_0(\omega), \qquad (55)$$

where

$$\gamma_0(\omega) = \frac{\pi}{2}\left(\omega^4 - 2\Delta\omega^2\right). \qquad (56)$$
For a weak phase object and parallel illumination, the maximum transferred spatial frequency (Born and Wolf, 1975) is

$$\frac{\Omega_{\max}}{\lambda} = \frac{\theta_0}{\lambda}. \qquad (57)$$
The function $\sin\gamma_0$ oscillates rapidly beyond the first zero $\omega_1 = \sqrt{2\Delta}$ of its argument (56). Since a reconstruction of the scattering amplitude becomes very difficult for spatial frequencies higher than $\omega_1$, we take $\omega_1$ as the maximum normalized aperture angle $\vartheta_0$. If the microscope operates at the Scherzer focus $\Delta = 1.21$ (Scherzer, 1949), the first zero of $\gamma_0$ occurs at $\omega_1 = 1.56$. The maximum normalized pixel size $\bar{a}$ is therefore given by $\bar{a} = 1/(2\omega_1) = 0.32$. For a specific microscope with the imaging parameters

$$C_s = 1.4\ \text{mm}, \quad C_c = 1.2\ \text{mm}, \quad E_0 = 100\ \text{keV}, \quad \sigma_E = 0.5\ \text{eV}, \quad \Delta = 1.21, \quad \vartheta_0 = \omega_1 = 1.56, \quad \bar{a} = 0.32, \qquad (58)$$
FIGURE 6. Effective PCTF of a TEM with imaging parameters listed in (58).

The effective PCTF is plotted in Fig. 6 for the section $\omega_y = 0$. The scattering amplitude of weak phase objects can be reconstructed from a single micrograph up to the information limit, apart from the transfer gaps centered at the zeros of the PCTF. The information limit is defined as the maximum spatial frequency $\omega_I$ that can be distinguished from the noise. This limit results from the incoherent aberrations which determine the envelope of the PCTF. The PCTF can be derived experimentally from the diffractogram of a micrograph of an amorphous carbon foil (Thon, 1971). Unfortunately, the rapidly decreasing distance between the zeros of the PCTF causes an increasing density of the transfer gaps. However, by using defocus series, reconstruction of the scattering amplitude over the whole range up to the information limit is possible (Schiske, 1968; Coene et al., 1992; Saxton, 1994). It should be noted that the restriction to weak phase objects is necessary only to suppress the overlapping quadratic term in the diffractogram. Off-axis holography separates the twin images from each other and from the quadratic term. In this case the restriction to weak phase objects is no longer mandatory.
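The normalized quantities in (55) and (56) make the Scherzer-focus numbers easy to retrace. The sketch below evaluates $\gamma_0$ and the PCTF envelope; the damping parameter $\varepsilon = 0.1$ used at the end is an illustrative assumption, not a value from the text.

```python
import math

def gamma0(omega: float, delta: float) -> float:
    """Normalized wave aberration (56): gamma0 = (pi/2)(omega^4 - 2*Delta*omega^2)."""
    return 0.5 * math.pi * (omega**4 - 2.0 * delta * omega**2)

def pctf(omega: float, delta: float, eps: float, theta0: float) -> float:
    """PCTF (55): -A(omega) * exp(-eps^2 omega^4 / 2) * sin(gamma0)."""
    aperture = 1.0 if omega <= theta0 else 0.0
    return -aperture * math.exp(-0.5 * eps**2 * omega**4) * math.sin(gamma0(omega, delta))

delta_scherzer = 1.21
omega1 = math.sqrt(2.0 * delta_scherzer)      # first zero of gamma0, ~1.56
pixel = 1.0 / (2.0 * omega1)                  # maximum normalized pixel size, ~0.32

print(f"omega_1 = {omega1:.2f}, a_bar = {pixel:.2f}")
# gamma0 vanishes at omega_1, so the PCTF has a zero there as well:
print(f"PCTF at omega_1: {pctf(omega1, delta_scherzer, 0.1, omega1):.3f}")
```

The run reproduces the quoted values $\omega_1 = 1.56$ and $\bar{a} = 0.32$ for $\Delta = 1.21$, and confirms that the PCTF vanishes at the first zero of its argument.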
2. Spectral Signal-to-Noise Ratio

In order to obtain a statistically significant signal, a certain number of scattered electrons per object element must be collected. The signal-to-noise ratio (SNR) of the image depends on the dose

$$D = \frac{j_0\,t}{e}, \qquad (59)$$
which represents the number of electrons incident per unit area of the specimen during the illumination time $t$. The spectral SNR (Berger, 1993; Berger and Kohl, 1993) is defined as

$$\mathrm{SNR}(\boldsymbol{\Omega}) = \frac{|\langle\tilde{I}(\boldsymbol{\Omega})\rangle|}{\sqrt{\mathrm{var}\{\tilde{I}(\boldsymbol{\Omega})\}}}. \qquad (61)$$
If $K$ quantities $N_i \in \mathbb{R}$ are statistically independent, the variance of any linear combination

$$N_{\mathrm{gen}} = \sum_{i=1}^{K} a_i N_i \qquad (62)$$

is given by

$$\mathrm{var}\{N_{\mathrm{gen}}\} = \sum_{i=1}^{K} |a_i|^2\,\mathrm{var}\{N_i\}. \qquad (63)$$

Since the signals of the individual pixels are statistically independent, we obtain for the variance of the Fourier-transformed image (41) the expression

$$\mathrm{var}\{\tilde{I}(\boldsymbol{\Omega})\} = \frac{a^4}{\lambda^4}\sum_{l,m=0}^{N-1}\mathrm{var}\{N_{lm}\}. \qquad (64)$$
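Relation (63) can be checked by brute-force enumeration over two small independent discrete distributions; the distributions and coefficients below are arbitrary assumptions made only for the check.

```python
from itertools import product

# two independent discrete "pixel signals": value -> probability
N1 = {0: 0.25, 1: 0.5, 2: 0.25}
N2 = {1: 0.5, 3: 0.5}
a1, a2 = 2.0, -3.0   # arbitrary real coefficients of the linear combination

def var(dist):
    """Variance of a discrete distribution given as {value: probability}."""
    mean = sum(v * p for v, p in dist.items())
    return sum(p * (v - mean)**2 for v, p in dist.items())

# exact variance of a1*N1 + a2*N2 by enumerating the joint distribution:
joint_mean = sum(p1 * p2 * (a1*v1 + a2*v2)
                 for (v1, p1), (v2, p2) in product(N1.items(), N2.items()))
joint_var = sum(p1 * p2 * (a1*v1 + a2*v2 - joint_mean)**2
                for (v1, p1), (v2, p2) in product(N1.items(), N2.items()))

# relation (63): var{a1 N1 + a2 N2} = a1^2 var{N1} + a2^2 var{N2}
formula = a1**2 * var(N1) + a2**2 * var(N2)
print(joint_var, formula)   # both 11.0
```

Both routes give the same value, which is exactly the linearity of variance used in (64) for the independent pixel signals.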
For moderate doses the number $N_{lm}$ of electrons detected by the pixel $(l, m)$ is approximately given by a Poisson distribution with

$$\mathrm{var}\{N_{lm}\} = \langle N_{lm}\rangle. \qquad (65)$$

Taking this result and relation (38) into account, the variance (64) for a quadratic object field with a length $L = Na$ can be written as

$$\mathrm{var}\{\tilde{I}(\boldsymbol{\Omega})\} = \frac{a^4}{\lambda^4}\sum_{l,m=0}^{N-1}\langle N_{lm}\rangle \approx \frac{a^4}{\lambda^4}\,\frac{q\,j_0\,t}{e}\,L^2 = \frac{a^4}{\lambda^4}\,qDL^2. \qquad (66)$$
The third relation presupposes that most electrons pass through the objective aperture. This assumption is always fulfilled for weak phase objects. Inserting relations (50), (59), and (66) into expression (61), we obtain for the spectral SNR

$$\mathrm{SNR}(\boldsymbol{\Omega}) = \frac{2\lambda}{L}\,\sqrt{qD}\;|F(0,\boldsymbol{\Omega})|\;|\mathscr{P}_{\mathrm{eff}}(\boldsymbol{\Omega})|, \qquad (67)$$

with the effective PCTF $\mathscr{P}_{\mathrm{eff}}(\boldsymbol{\Omega}) = N(\boldsymbol{\Omega})\,\mathscr{P}(\boldsymbol{\Omega})/q$ shown in Fig. 6.
Accordingly, the spectral SNR is proportional to the square root of the dose $D$ and to the absolute value of the effective PCTF.

IV. OFF-AXIS HOLOGRAPHY IN THE TRANSMISSION ELECTRON MICROSCOPE

A. Formation of the Off-Axis Hologram

In off-axis holography, a tilted reference wave is used for spatially separating the twin images from the quadratic term in the diffractogram. The arrangement used in practice is shown schematically in Fig. 7. The specimen is located entirely in the half-plane $x_o \le 0$. The partial wave traversing the other half-plane $x_o > 0$ therefore remains unscattered. An electrostatic biprism (Möllenstedt and Düker, 1956; Möllenstedt, 1992) is inserted at a distance $l_b = \nu l$ in front of the intermediate image plane. In the case $\nu \ll 1$, the biprism is located just above the intermediate image plane. Hence the wave function in the region $x_b < 0$ of the biprism plane $z_b$ is determined largely by the image of the undisturbed wave passing through the half-plane $x_o > 0$. Accordingly, the wave function for $x_b \ge 0$ contains the whole information about the object. The biprism tilts each of the two partial waves through an angle $\delta$ toward the optic axis. Assuming a wire with diameter $d_b$ for the biprism, the tilted wave fronts overlap at the intermediate image plane within a stripe of width $2\delta l_b - d_b$. This overlap region contains the off-axis hologram. To record a hologram of a specimen area $L \times L$, the angle $\delta$ and the position parameter $\nu$ of the biprism must satisfy the condition
$$|M_i|\,L \le 2\delta l_b - d_b = 2\delta\nu l - d_b. \qquad (68)$$

Here $M_i$ is the magnification at the intermediate image plane. If we replace the image distance $l$ by means of the equation

$$M_i = -\frac{l}{f}, \qquad (69)$$

condition (68) can be written as

$$L \le 2\delta f\nu - \frac{d_b}{|M_i|}. \qquad (70)$$
FIGURE 7. The formation of an off-axis hologram in the TEM.
To determine the condition for the spatial separation of the twin images and the quadratic term, we first calculate the current density in the hologram in the absence of any specimen. The angular tilt vector

$$\boldsymbol{\delta} = \delta\,\mathbf{e}_x \qquad (71)$$

of the biprism points in the $x$ direction if we assume that the wire extends along the $y$ axis. In this case the reference wave and the plane object wave form a simple hologram consisting of interference fringes in the $y$ direction. In the limiting case $\nu \to 0$, we find

$$j^{(0)}(\boldsymbol{\rho}_i) = \frac{j_0}{M_i^2}\left|e^{ik\delta x_i} + e^{-ik\delta x_i}\right|^2 = \frac{4j_0}{M_i^2}\cos^2(k\delta x_i). \qquad (72)$$
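The fringe geometry implied by (72) can be retraced numerically: $\cos^2(k\delta x_i)$ has the spatial period $\pi/(k\delta) = \lambda/(2\delta)$, and the corresponding carrier places the sidebands at $\Omega_x = \pm 2\delta$. The wavelength and deflection angle below are illustrative assumptions, not values from the text.

```python
import math

lam = 3.7e-12          # electron wavelength at 100 keV, m (approximate)
delta = 1.0e-3         # assumed biprism deflection angle: 1 mrad
k = 2.0 * math.pi / lam

# cos^2(k*delta*x) has period pi/(k*delta) = lambda/(2*delta):
fringe_spacing = lam / (2.0 * delta)
print(f"fringe spacing = {fringe_spacing*1e9:.2f} nm")

# the carrier frequency puts the sideband peaks at Omega_x = +/- 2*delta:
sideband = 2.0 * delta
print(f"sideband position Omega_x = +/- {sideband*1e3:.1f} mrad")
```

For these sample numbers the hologram fringes are spaced by roughly 1.85 nm, which sets the detector sampling that a real experiment would have to resolve.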
FIGURE 8. Principle of off-axis holography.
The Fourier transform of this expression,

$$\tilde{I}(\boldsymbol{\Omega}) = \frac{1}{\lambda^2}\int j^{(0)}(\boldsymbol{\rho}_i)\,e^{-ik\boldsymbol{\Omega}\cdot\boldsymbol{\rho}_i}\,d^2\rho_i, \qquad (73)$$

demonstrates that two sideband peaks are located symmetrically about the central peak ($\Omega_x = 0$, $\Omega_y = 0$) at the positions $\Omega_y = 0$, $\Omega_x = 2\delta$ and $\Omega_x = -2\delta$, respectively, as shown in Fig. 8. When the specimen is present,
the corresponding twin images (sidebands) are centered about these peaks. The sidebands do not overlap with the central term if the maximum angular radii $\Omega_{c,\max}$ and $\Omega_{s,\max}$ of the central image and the sidebands, respectively, satisfy the condition

$$2\delta \ge \Omega_{c,\max} + \Omega_{s,\max}. \qquad (74)$$
We restrict our considerations to Fraunhofer holograms, which are formed in the vicinity of the image plane conjugate to the object. The effect of the biprism on the wave function can be taken into account by the transmission function

$$T_b(\boldsymbol{\rho}_b) = \begin{cases} e^{-ik\delta|x_b|} & \text{for } |x_b| > d_b/2, \\ 0 & \text{for } |x_b| \le d_b/2, \end{cases} \qquad (75)$$

acting at the plane $z_b$. Considering parallel coherent illumination (21), the wave function at the image plane $z_i$ is found as

$$\psi_i(\boldsymbol{\rho}_i, z_i) = e^{ik(z_i-z_o)}\iiiint T(0,\boldsymbol{\rho}_o)\,\frac{e^{ik(\boldsymbol{\rho}_l-\boldsymbol{\rho}_o)^2/2g}}{i\lambda g}\;e^{-ik\rho_l^2/2f}\;\frac{e^{ik(\boldsymbol{\rho}_a-\boldsymbol{\rho}_l)^2/2f}}{i\lambda f}\;A(\boldsymbol{\rho}_a)\;\frac{e^{ik(\boldsymbol{\rho}_b-\boldsymbol{\rho}_a)^2/2(1-\nu)l}}{i\lambda(1-\nu)l}\;T_b(\boldsymbol{\rho}_b)\;\frac{e^{ik(\boldsymbol{\rho}_i-\boldsymbol{\rho}_b)^2/2\nu l}}{i\lambda\nu l}\,d^2\rho_o\,d^2\rho_l\,d^2\rho_a\,d^2\rho_b. \qquad (76)$$
Since the transmission function (75) is discontinuous at $x_b = -d_b/2$ and $x_b = d_b/2$, we must separate the integration over the biprism plane into two integrations, one over the half-plane $x_b > d_b/2$, the other over the half-plane $x_b < -d_b/2$. The wave at the half-plane $x_b < -d_b/2$ is determined by the undisturbed plane reference wave. The transmission function $T$ can therefore be replaced by unity for the integration over this half-plane. Moreover, we assume that the biprism is located at a short distance above the image plane $z_i$. In this case we have $\nu \ll 1$ and $1 - \nu \approx 1$. If we consider in addition the equation (15) and the lens equation (26), we obtain for the wave function at the image plane the relation

$$\psi_i(\boldsymbol{\rho}_i, z_i) = \psi^{(+)}(\boldsymbol{\rho}_i) + \psi^{(-)}(\boldsymbol{\rho}_i), \qquad (77)$$

where

$$\Theta(x) = \begin{cases} 1 & \text{for } x \ge 0, \\ 0 & \text{for } x < 0, \end{cases} \qquad (78)$$

denotes the step function. The functions $\psi^{(+)}$ and $\psi^{(-)}$ are found to be

$$\psi^{(+)}(\boldsymbol{\rho}_i) = \Theta(x_i + \delta\nu l)\,\frac{e^{ik(z_i-z_o)}}{M_i}\,e^{-ik\delta x_i}\left\{1 + \frac{ik}{2\pi}\int A(\theta)\,e^{-i\gamma(\boldsymbol{\theta},\Delta E)}\,F(0,\boldsymbol{\theta})\,e^{ik\boldsymbol{\theta}\cdot\boldsymbol{\rho}_i/M_i}\,d^2\theta\right\}, \qquad \psi^{(-)}(\boldsymbol{\rho}_i) = \Theta(-x_i + \delta\nu l)\,\frac{e^{ik(z_i-z_o)}}{M_i}\,e^{ik\delta x_i}. \qquad (79)$$
Near the edges of the hologram this approximation becomes invalid owing to Fresnel diffraction originating at the wire of the biprism. This disturbance can be neglected in the case $\nu \ll 1$ because the circumferential region over which the Fresnel fringes extend is then small compared with the total area of the hologram. It follows from relation (70) that the biprism angle must satisfy the condition

$$\delta \ge \frac{1}{2\nu f}\left(L + \frac{d_b}{|M_i|}\right) \qquad (80)$$

for recording a hologram from an area $L \times L$ of the specimen. A detailed analysis of the influence of the parameters $\nu$, $|M_i|$, $\delta$, and $f$ on the Fresnel diffraction at the edges of the hologram is outlined in the appendix. The mean value $\langle j_i\rangle$ of the current density and the mean value $\langle\tilde{I}(\boldsymbol{\Omega})\rangle$ of its Fourier transform are calculated by employing the methods outlined in Section III. Using expressions (36), (37), and (47), we obtain

$$\langle\tilde{I}(\boldsymbol{\Omega})\rangle = qDN(\boldsymbol{\Omega})\left\{\tilde{C}(\boldsymbol{\Omega}) + \tilde{S}\big({-}(\boldsymbol{\Omega} + 2|M_i|\boldsymbol{\delta})\big) + \tilde{S}(\boldsymbol{\Omega} - 2|M_i|\boldsymbol{\delta})\right\}, \qquad (82)$$
where

$$\tilde{C}(\boldsymbol{\Omega}) = \delta(\boldsymbol{\Omega}) + \frac{i}{\lambda}\,F(0,\boldsymbol{\Omega})\,A(\Omega)\int e^{-i\gamma(\boldsymbol{\Omega},\Delta E)}\,p(\Delta E)\,d\Delta E - \frac{i}{\lambda}\,\bar{F}(0,-\boldsymbol{\Omega})\,A(\Omega)\int e^{i\gamma(\boldsymbol{\Omega},\Delta E)}\,p(\Delta E)\,d\Delta E + \tilde{Q}(\boldsymbol{\Omega}) \qquad (83)$$

describes the intensity of the central term. This part consists of two linear terms and a quadratic term $\tilde{Q}(\boldsymbol{\Omega})$. The terms

$$\tilde{S}(\boldsymbol{\alpha}) = \delta(\boldsymbol{\alpha}) - \frac{2}{\lambda}\,F(0,\boldsymbol{\alpha})\,e^{-ik\nu f\delta\alpha_x}\,\mathscr{P}_s(\boldsymbol{\alpha}), \qquad (84)$$
with $\boldsymbol{\alpha} = \boldsymbol{\Omega} \mp 2|M_i|\boldsymbol{\delta}$, represent the sidebands;

$$\mathscr{P}_s(\boldsymbol{\alpha}) = \mathscr{P}_s(\Omega) = -\frac{A(\Omega)}{2i}\int e^{-i\gamma(\boldsymbol{\Omega},\Delta E)}\,p(\Delta E)\,d\Delta E \qquad (85)$$
is the complex sideband transfer function. Assuming axial illumination, the maximum radius $\Omega_{c,\max}$ of the central term and the radius $\Omega_{s,\max}$ of each sideband are given by

$$\Omega_{c,\max} = 2\theta_0, \qquad \Omega_{s,\max} = \theta_0. \qquad (86)$$

In this case the sidebands do not overlap with the central term if the condition

$$2|M_i|\,\delta \ge 3\theta_0 \qquad (87)$$

is fulfilled. For an arbitrary object the maximum angular radius

$$\Omega_{\max} = \Omega_{c,\max} + 2\Omega_{s,\max} = 4\theta_0 \qquad (88)$$
must be at least four times smaller than that for a weak phase object. The normalized quantities are defined in Table I. If the separation condition (87) is satisfied, the elastic scattering amplitude can be reconstructed for arbitrary specimens up to the maximum scattering angle O0, provided that C, and Af are known with a sufficient degree of accuracy. The phase term exp[-ikvf6RX] in Eq. (84) can be disregarded because it merely describes a lateral shift of the specimen. It should be noted, however, that these statements are valid only as long as the parasitic secondorder axial aberrations (axial coma and threefold astigmatism) remain negligibly small. B. Spectral Signal-to-Noise Ratio in Off-Axis Transmission Electron Microscope Holography
In analogy to relation (66), the variance of
r(Q)is given by
THEORETICAL CONCEPTS OF ELECTRON HOLOGRAPHY
where ji(Pi) = j,
s
I@i(Pi,
WI’P(AE) dAE
219
(91)
represents the mean current density at the intermediate image plane. Assuming that most electrons pass through the hole of the aperture and considering that the specimen is irradiated by only one-half of the incident electrons, we obtain 2 var(f(R)) x ,qDL2.
(92)
1
Using relations (61) and (84), the spectral SNR of the centered sideband is found as
where
is the effective sideband transfer function. The comparison with the spectral SNR (67) for the in-line mode reveals that the formulas are the same for both cases, apart from the different effective contrast transfer functions. The effective sideband transfer function adopts the simple form
i 2s,eff(co)= - - - - ~ ( c o 2Jz
1
+ 2 1 ~ ~ l ~ ) ~ ( o ) e - ~ ~ 0 ( ~ )(95) e - ~ ~ * ~ ~
for the energy distribution (52). This complex function has no zeros, unlike the PCTF (55) for standard phase-contrast imaging. Therefore, the reconstruction of the scattering amplitude is theoretically possible up to the inforThe information limit does not depend on the normalized mation limit oI. defocus A if the size of the source is negligibly small. This condition is generally fulfilled for field emission guns but not for thermionic cathodes. In the latter case the finite size of the source causes additional damping of the transfer function. The corresponding attenuation can be described by a second envelope function which depends on the normalized defocus A (Reimer, 1984). Optimum transfer of the spatial frequencies up to the information limit o,is obtained for the so-called Lichte focus (Lichte, 1991b), AL = 0.750;.
(96)
220
F. KAHL AND H.ROSE
0.3 0.2
-
0.1
=
o
9 u 3x q :
E - 0.1
-0.2
-0.3 FIGURE9. Imaginary part of the sideband transfer function for the imaging parameters (97) and wy = 0.
For this focus the damping effect resulting from the finite size of the source is minimized in the range 0 < w < 0:. Simultaneously this defocus minimizes the oscillations of exp( - iy,) within this range. The sections w,, = 0 of the imaginary part and of the absolute value of the complex sideband transfer function are plotted in Fig. 9 for the .parameters
C, = 1.4 mm, E
1.96,
C, = 1.2 mm,
E , = 100 keV,
AL = 0 . 7 5 ~ ;= 2.89,
d=
21kfi16 = 39, = 30,,
1 ~
80:
0,
= 6.36
= 0.5 eV,
x lo-*,
(9.7)
assuming a pointlike field emission gun and the Lichte focus. The resulting information limit aI= 1.96 is derived from the defining equation l%eff(~I)l
= 0.1-
(98)
Comparison of Fig. 6 and Fig. 9 shows that the maximum value of the spectral SNR for conventional bright-field imaging is about three times larger than the corresponding value in the case of off-axis holography. In order to reconstruct the scattering amplitude with a sufficient degree of accuracy, the fluctuations of the phase yo must be smaller than about n/9 rad. Employing this condition, we find that the normalized defocus must be known with an uncertainty smaller than
THEORETICAL CONCEPTS OF ELECTRON HOLOGRAPHY
221
Assuming a spherical aberration constant C, = 1.2 mm and a voltage of 100 kV, the defocus must be known with an accuracy better than
(~3Af),,,~~ sz 19 A. (100) It seems very questionable whether this accuracy can ever be achieved in practice!
v. IMAGE FORMATION IN SCANNING TRANSMISSION ELECTRON MICROSCOPY A . Bright-Field Imaging
In contrast to the TEM, the image in the STEM is obtained sequentially. The scanning probe at the object plane is a demagnified image of the effective source. This probe, which has a diameter of a few angstroms, is scanned across the specimen by means of a double-deflection element located in front of the objective aperture plane. In a dedicated STEM, such as the VG HB501,the demagnification lies in the range 10-’-10-’. For such low demagnifications the diameter of the probe is limited not only by diffraction but also by the geometric size of the effective source at the object plane (Scheinfein and Isaacson, 1986; Batson, 1988). Accordingly, we must take the finite diameter of the effective source into account. The convergent incident electron wave is transformed by the interaction with the object into a wave consisting of an unscattered and a scattered partial wave. If we place the detector at an image of the aperture plane, the influence of the Grigson coils located behind the objective lens can be neglected. In this situation, the intersection of the bright-field cone with the detector plane is spatially fixed, regardless of the position of the probe at the object plane. To achieve a high magnification Md of the image of the aperture at the detector plane, the aperture must be placed just in front of the back focal plane of the objective lens. In the absence of any specimen, the current density in the detector plane vanishes outside the bright-field area formed by the cross section of the illumination cone at the detector plane; the current density within the brightfield region is then constant. The detector is generally not located at an image of the aperture plane. A double-deflection element (Grigson coils) must then be placed beneath the object to ensure that the bright-field area at the detector plane does not
222
F. KAHL AND H.ROSE
depend on the position of the probe. Since these coils are not necessary if the detector is situated at the image of the aperture, we assume in the following this special location of the detector. A simplified illustration of the image formation in the STEM is depicted in Fig. 10. We consider a configured detector, which enables one to combine the signals recorded by the individual detector elements arbitrarily. The image is formed by displaying the resulting signal Z(pp) on a viewing screen. The vector pp = xpex+ ype, defines the center of the probe at the object plane. By varying the combination of the constituent signals, different images of the same object can be obtained. The possibility of extracting optic axis
I
I
tilted incident p la n e wave aperture plane
lens plane
gaussian object plane
detector plane = conjugate aperture plane I
+bright-fie[d
area d
FIGURE10. Image formation in the STEM.
223
THEORETICAL CONCEPTS OF ELECTRON HOLOGRAPHY
specific information about the object by weighing the detector signals appropriately is a major advantage of the STEM. Because each point of the effective source represents an incoherent point source, waves originating from different points of this source can be considered as statistically independent (incoherent). Since the condenser lens forms an image of the effective source at infinity, each of these spherical waves is transformed into a plane wave which is tilted with respect to the optic axis. The tilt angle fl is proportional to the off-axial distance of the corresponding point source. The deflection system tilts the plane wave by an additional angle u, as shown in Fig. 10. Hence each partial wave of the wave package at the aperture plane has the form $a(pa) = Ceik(a+P)p..
(101)
The corresponding wave function immediately in front of the specimen is obtained by employing the procedure outlined in Section III; as a result we obtain the wave function (103). Here θ_o denotes the objective aperture angle, and M_d, given by (105), represents the magnification of the image of the objective aperture at the detector plane. The position

$\rho_p = f\,\alpha$   (106)

of the center of the scanning spot at the object plane is proportional to the deflection angle α. The lateral shift
224
F. KAHL AND H.ROSE
$a = f\,\beta$   (107)

accounts for the additional displacement for an off-axial point source. The detector is placed at a large distance d = |M_d| f >> D_o²/λ beneath the object plane, such that the object is located entirely within the first Fresnel zone. Accordingly, the phase

$k\rho_o^2/(2 M_d f)$   (108)

of the exponential factor in front of the integral in Eq. (103) can be neglected if ρ_o < D_o/2. Hence the wave function (103) can be described approximately as a packet of plane coherent waves with different angles of incidence. Considering the concept of a generalized transmission function (8) for each constituent plane wave, we obtain the wave function behind the object.
The first term of the second equation represents the unscattered wave. Since this wave covers the entire object plane, the phase (108) cannot be neglected for this partial wave. Fresnel propagation must therefore be used to obtain the unscattered part of the wave at the detector plane. The second term, which represents the scattered wave, satisfies the Fraunhofer condition (11) because T - 1 is zero for ρ_o > D_o/2. Accordingly, the scattered part of the wave at the detector plane can be derived by using the Fraunhofer approximation. Considering relation (105), we obtain for the wave function at the detector plane the corresponding expression.
Here the position vector ρ_d = x_d e_x + y_d e_y at the detector plane z_d has been replaced by the angular vector Θ = ρ_d/(M_d f).
The constant C must be chosen in such a way that the "angular" current density adopts the form (112), where I_p is the current of the probe at the object plane. Since we have neglected backscattering, the condition

$\int j_d(\Theta, \rho_p)\, d^2\Theta = I_p$   (113)

holds true, regardless of the position ρ_p of the scanning spot. The normalized function q(α) describes the distribution of the statistically independent point sources. Employing a configured detector with a detector function D(Θ) and a quantum efficiency η, we obtain for the mean image signal per pixel the expression (114), where

$\tau_p = \tau/N^2$   (115)

is the illumination time for each pixel. Here we have assumed that the image consists of N × N square pixels. The Fourier transform of the digitized STEM image can be written, in accordance with that of the TEM (Section III), in the form (116),
where the length a of each pixel is identical with the sampling distance of the probe in the x and y directions, respectively. Hence a square specimen area L × L with L = Na is scanned. The value N_lm represents the detector
signal obtained when the probe is positioned at the center ρ_p = ρ_lm of the pixel (l, m). The multiplication with the pixel area a² is equivalent to the integration over the pixel area in the image plane in the case of the TEM (38). The mean value of the Fourier transform (116) is given by the expression (117).
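As a small numerical sketch of the relations above, one might compute the dwell time per pixel (115) and the pixel-area-weighted discrete Fourier transform of a digitized N × N image, which approximates the continuous Fourier integral over the scanned field L × L with L = Na. The frame time, the small N, and the toy image content are assumed example values, not taken from the text:

```python
import numpy as np

# Dwell time per pixel, Eq. (115), and the discrete Fourier transform of a
# digitized N x N STEM image.  The pixel-area factor a^2 makes the sum
# approximate the continuous Fourier integral over the scanned field
# L x L with L = N*a.  Frame time and image are assumed example values.
N   = 64          # pixels per row (example)
a   = 0.7         # sampling distance (angstrom), as in the text's example
tau = 10.0        # total frame time (s), assumed

tau_p = tau / N**2                      # Eq. (115): illumination time per pixel

img = np.ones((N, N))                   # toy image: uniform detector signal
ft = a**2 * np.fft.fft2(img)            # pixel-area-weighted transform

print(f"tau_p = {tau_p:.2e} s")
print(abs(ft[0, 0]))                    # = (N*a)^2, the scanned area L^2
```

For a uniform image the transform is a single zero-frequency peak of magnitude (Na)², the scanned area, which is a quick consistency check of the a² weighting.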
If N is sufficiently large and if the sampling condition (46) is satisfied, the expression (117) adopts the simple form (118). Here

$J_d(\Theta, \Omega) = \int j_d(\Theta, \rho_p)\, e^{-ik\Omega\cdot\rho_p}\, d^2\rho_p$   (119)
is the Fourier transform of the effective current per solid angle at the detector plane. Inserting relation (112) into expression (119), we obtain the result (120), which consists of a specimen-independent central term, two sideband (twin-image) terms linear in the scattering amplitude F, and a term quadratic in F.
The envelope functions

$Q(\Omega) = \int q(\alpha)\, e^{ik\Omega\cdot\alpha}\, d^2\alpha$,   (121)

$P(\Theta, \Omega) = \int e^{-i\{\gamma(|\Theta-\Omega|,\,\Delta E) - \gamma(\Theta,\,\Delta E)\}}\, p(\Delta E)\, d\Delta E$   (122)
represent the imperfect lateral coherence and the imperfect temporal coherence of the incident wave. If we center a bright-field detector with a maximum detector angle θ_d ≤ θ_o within the illumination cone, the maximum spatial frequency contained in the hologram is

$q_{max} = \frac{\theta_d + \theta_o}{\lambda}$.   (123)

The maximum pixel size a_max is obtained from the sampling theorem (46) as

$a_{max} = \frac{1}{2 q_{max}} = \frac{\lambda}{2(\theta_d + \theta_o)}$.   (124)
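The quantities (123) and (124) can be evaluated numerically. The sketch below assumes the Nyquist form a_max = 1/(2 q_max) for (124) and uses the 100-kV, 13-mrad values quoted in the examples of this chapter; the result of about 0.7 Å is consistent with the pixel size used later for Fig. 11:

```python
import math

# Maximum spatial frequency (123) and maximum pixel size (124) for
# bright-field STEM.  The Nyquist relation a_max = 1/(2*q_max) is assumed;
# angles follow the text's 100-kV example.
H = 6.626e-34; M_E = 9.109e-31; E_CH = 1.602e-19; C = 2.998e8

def electron_wavelength(U):
    """Relativistic de Broglie wavelength (m) for acceleration voltage U (V)."""
    eU = E_CH * U
    return H / math.sqrt(2 * M_E * eU * (1 + eU / (2 * M_E * C**2)))

lam = electron_wavelength(100e3)           # ~3.70 pm at 100 kV
theta_o = theta_d = 13e-3                  # aperture and detector angle (rad)
q_max = (theta_d + theta_o) / lam          # Eq. (123)
a_max = 1 / (2 * q_max)                    # Eq. (124), Nyquist sampling
print(f"lambda = {lam*1e12:.2f} pm, a_max = {a_max*1e10:.2f} angstrom")
```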
Owing to the integration over the detector angle, the scattering amplitude can generally not be separated from the imaging characteristics of the microscope. Such a separation is possible for the twin images in (120) (the second and the third term) only if the scattering amplitude of the specimen depends solely on the scattering angle, as is the case for the first-order Born approximation. Thin phase objects also satisfy this condition because their transmission function T does not depend on the direction Θ of the incident wave (125). Owing to this restriction, STEM holography is limited to thin phase objects. The standard STEM bright-field image can be considered as an in-line Fraunhofer hologram. In the Fourier transform of such holograms, the quadratic term overlaps with the twin images. For weak phase objects the intensity of the quadratic term becomes negligibly small (19). In the following we consider a centrosymmetric detector function
$D(-\Theta) = D(\Theta)$   (126)

and a source distribution for which

$q(-\alpha) = q(\alpha)$.   (127)

Assuming in addition the validity of the first-order Born approximation (weak phase objects), we obtain the mean image signal (128). Here

$\mathcal{T}(\Omega) = \frac{Q(\Omega)}{\pi\theta_o^2}\int D(\Theta)\,A(\Theta)\,A(|\Theta-\Omega|)\,\mathrm{Im}\{P(\Theta,\Omega)\}\, d^2\Theta$
$\qquad\;\; = \frac{Q(\Omega)}{\pi\theta_o^2}\iint D(\Theta)\,A(\Theta)\,A(|\Theta-\Omega|)\,\sin[\gamma(\Theta,\Delta E) - \gamma(|\Theta-\Omega|,\Delta E)]\, p(\Delta E)\, d\Delta E\, d^2\Theta$   (129)

is the PCTF of the STEM. In the limiting case θ_d << θ_o, with D(Θ) = 1 for θ < θ_d, this function becomes proportional to that of the TEM (130).
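The structure of (129) can be illustrated with a minimal numerical sketch for the monochromatic limit (Q = 1, no energy averaging), an idealized disc detector with D(Θ) = 1 for θ < θ_d, and the usual aberration phase γ(θ) = (2π/λ)(C_s θ⁴/4 − Δf θ²/2). The defocus value and the simple grid quadrature are assumptions of the sketch, not the text's optimized three-ring configuration:

```python
import numpy as np

# Numerical sketch of the monochromatic STEM PCTF, Eq. (129), for a simple
# disc detector D(theta) = 1 for theta < theta_d.  All coefficients are
# example values; the defocus is an assumed illustrative setting.
lam  = 3.7e-12          # wavelength at 100 kV (m)
Cs   = 1.4e-3           # spherical aberration (m)
df   = 60e-9            # defocus (m), assumed example value
th_o = 13e-3            # objective aperture angle (rad)
th_d = 13e-3            # detector cutoff angle (rad)

def gamma(th):
    """Wave aberration gamma(theta) for spherical aberration and defocus."""
    return 2 * np.pi / lam * (0.25 * Cs * th**4 - 0.5 * df * th**2)

def pctf(om, n=400):
    """PCTF at spatial-frequency angle om (rad), by 2-D grid quadrature."""
    t = np.linspace(-th_d, th_d, n)
    tx, ty = np.meshgrid(t, t)
    th = np.hypot(tx, ty)                      # |theta|
    thm = np.hypot(tx - om, ty)                # |theta - Omega|
    mask = (th < th_d) & (thm < th_o)          # detector and aperture discs
    integrand = np.where(mask, np.sin(gamma(th) - gamma(thm)), 0.0)
    dA = (t[1] - t[0])**2
    return integrand.sum() * dA / (np.pi * th_o**2)

print(pctf(0.0))        # vanishes at zero spatial frequency
print(pctf(10e-3))      # transfer at Omega = 10 mrad
```

The antisymmetry of the sine term makes the transfer vanish at Ω = 0, as a phase-contrast transfer function must.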
B. Spectral Signal-to-Noise Ratio

We consider a STEM image composed of N × N statistically independent pixels. The number of electrons recorded by an angular detector element d²Θ during the illumination time τ_p per pixel is given by
$d\bar N_{lm}(\Theta) = \frac{\tau_p}{e}\, j_d(\Theta, \rho_{lm})\, d^2\Theta$,   (131)
when the probe is positioned at the center ρ_p = ρ_lm of the pixel (l, m). In the case of a constant collection efficiency η, the Fourier transform of the image intensity is given by
$I(\Omega) = a^2 \sum_{l,m=0}^{N-1} \int D(\Theta)\, dN_{lm}(\Theta)\, e^{-ik\Omega\cdot\rho_{lm}}$.   (132)
Considering Eqs. (62), (63), and (65), we obtain for the variance of (132) the relation (133).
For weak scatterers, the fraction of electrons that are scattered out of the illumination cone is negligibly small. In addition, the modulation of the current density resulting from constructive and destructive interference of the elastically scattered wave with the unscattered wave is small compared with the average current density I_p/(πθ_o²) within the bright-field cone. Accordingly, the variance (133) can be written approximately in the form (134). An ideal detector satisfies the relation

$D^2(\Theta) = |D(\Theta)| = 1$.   (135)

Therefore, the variance (134) for an ideal detector covering the entire detector plane is given approximately by (136).
If we assume that the probe current does not vary in time, the dose (137) is the same for each object element (l, m). Inserting relations (128) and (134) into expression (61) for the spectral SNR, we find the result (138). In analogy to our considerations in Sections III.B.2 and IV.B, the last factor (139) on the right-hand side of (138) can be considered as the effective PCTF. Assuming a Gaussian distribution for both the energy spread (52) and the distribution function (140) of the source, we can evaluate the integrals in (129) analytically. The parameter a_s defines the extension of the source, while M_s denotes the magnification of the source at the object plane. Employing normalized units (Table I) and considering the definition (139), we obtain the effective PCTF (141) in normalized form; apart from damping factors, its integrand contains the aberration term sin[γ_0(|ϑ − ω|) − γ_0(ϑ)].
The PCTF becomes rotationally symmetric for a rotationally symmetric detector function. The detector function, the objective aperture angle θ_o, and the defocus Δf which yield an optimum SNR and an optimum PCTF have been determined by Rose (1974, 1975). The optimum detector is a rotationally symmetric three-ring detector which extends over the entire bright-field area. The normalized limiting angles of the corresponding detector function (142) are listed in Table II. In Fig. 11 we have depicted the corresponding effective PCTF and the effective PCTF (59) of the TEM for parallel illumination, assuming in both cases a pixel size a = 0.7 Å and a microscope with

C_s = 1.4 mm,  C_c = 1.2 mm,  a_s = 43 Å,  E_0 = 100 keV,  |M_s| = 1/80,  ΔE = 0.5 eV.   (143)
It should be noted that a detector with more than three rings increases neither the information limit nor the spectral SNR significantly (Rose, 1974, 1975; Eusemann and Rose, 1982; Hammel, 1992; Hammel and Rose, 1995).

TABLE II. OPTIMUM PARAMETERS FOR A THREE-RING DETECTOR

FIGURE 11. Effective PCTF (59) of the TEM (-----) for the section ω_y = 0 and effective PCTF (141) of the STEM (-) employing an optimum three-ring detector.

Unlike the effective PCTF (59) of the TEM, the effective PCTF (141) for the
STEM is rotationally symmetric because we have assumed a round effective source. The reciprocity theorem (Helmholtz, 1886; Zeitler and Thomson, 1970) shows that the effective source in the STEM is equivalent to a pixel in the TEM. Since the maximally accepted scattering angle in bright-field STEM is twice as large as that in the TEM for axial illumination, the maximum spatial frequency in the STEM image is twice as high as that in the TEM, provided that the same aperture is used in both cases. Another advantage of the STEM is the absence of zeros in the PCTF. The triangular shape of this PCTF suppresses the speckles in the phase-contrast image (Rose, 1974, 1975; Hammel, 1992; Hammel and Rose, 1995). Hence, in contrast to the TEM, all spatial frequencies up to the information limit can be reconstructed from a single micrograph if Δf and C_s are known with a sufficient degree of accuracy. It should be noted, however, that a similar transfer function can be achieved in the TEM with hollow-cone illumination (Hanssen and Trepte, 1971; Eusemann and Rose, 1982; Kunath et al., 1985; Dinges et al., 1994). Fortunately, the PCTF of the STEM is not very sensitive with respect to small changes of the imaging parameters C_s and Δf. On the other hand, the maximum spectral SNR of the STEM is about three times smaller than the corresponding SNR of the TEM: the STEM is therefore not the optimum instrument for bright-field imaging of radiation-sensitive objects.
VI. OFF-AXIS SCANNING TRANSMISSION ELECTRON MICROSCOPE HOLOGRAPHY

A. Basic Principles
Owing to the encouraging results in off-axis TEM holography, Leuthner and Lichte (1989) proposed an equivalent technique for the STEM. These authors suggested that two spatially separated probes should be formed at the object plane by inserting a biprism in the illumination wave. The two probes are scanned simultaneously in such a way that their relative distance is kept fixed. The specimen is placed in the half-plane x_o < 0. Only one of the two probes, the so-called specimen probe, is moved across the specimen, as shown in Figs. 12 and 13. The reference probe does not hit the specimen.

A real electron source has a finite brightness and emits electrons into a small solid angle after they have passed through the hole in the anode. To guarantee that the central ray intersects the center of the beam-defining aperture, a double-deflection element must be incorporated above the objective lens, as depicted in Fig. 14. The first coil tilts the beam by an angle ξ. The second coil deflects the beam back toward the axis by an angle −ξ + α. For a chosen angle α, we can adjust the primary deflection ξ in such a way that the central ray always intersects the center of the aperture plane. In this case the wave front in this plane can be approximated by a plane wave whose direction of propagation is tilted by the angle α with respect to the optic axis, as demonstrated in Fig. 14b. Since the radius r_b of the beam at the aperture plane is significantly larger than the radius r_a of the opening, the central ray need not necessarily intersect the optic axis at the aperture plane. Indeed, it may intersect the optic axis at any distance ζ from the aperture plane as long as the off-axial distance d_c of the central ray at this plane satisfies the condition

$d_c = |\zeta\alpha| \le r_b - r_a$.   (144)

The intersection of the central ray with the optic axis is called the pivot point. The requirement (144) guarantees that the disturbances of the outer zones of the illuminating wave do not pass through the opening in the aperture. In conventional STEM imaging the parameter d_c has no significance, but in the case of off-axis STEM holography the central ray represents an axis of mutual phase of the object wave and the reference wave, as shown in Fig. 15. This axis is therefore called the iso-phase axis. If d_c satisfies the condition

$d_c < r_b - r_a - s\delta$,   (145)
the wave front in the aperture plane can be approximated by the superposition of two plane waves with tilt angles α + δ and α − δ, respectively. Here s denotes the distance between the biprism plane and the aperture plane. At this plane two wave surfaces with equal phase intersect each other along the line

$\rho(c) = -\zeta\alpha + c\,(e_z \times \alpha)$.   (146)

FIGURE 12. Formation of the reference probe and the object probe in off-axis STEM holography.

FIGURE 13. Definition of the positions of the specimen probe and the reference probe.

FIGURE 14. Schematic arrangement of the illumination system demonstrating the effect of a single (a) and a double (b) deflection element.

FIGURE 15. Diagram elucidating the properties of the iso-phase axis in off-axis STEM holography.
The point ρ(c = 0) = −ζα on this line represents the point of intersection of the "iso-phase axis" with the aperture plane. Immediately behind the beam-limiting aperture the entire wave has the form (147).
One of the two plane waves is transformed by the objective lens into the specimen probe, the other into the reference probe. These probes are centered about the positions

$\rho_p = -f\delta + R$  and  $\rho_r = f\delta + R$,   (148)

respectively, as illustrated in Fig. 13. The vector

$R = f\,\alpha$   (149)

describes the displacement of the probes relative to their positions of rest. If the voltage at the biprism is turned off (δ = 0), the objective lens forms a single probe centered about the point R = fα at the object plane. The objective lens also forms an image of the wave function (147) with magnification (105) at the detector plane. Assuming an ideal lens, the current density (150) at the detector plane without a specimen oscillates in the direction e_d = e_x within the area of the illumination cone. If the pivot point is placed at the center of the aperture plane (ζ = 0), j_d does not depend on the displacement R of the probes. In this case the image intensity is constant and the sideband peaks do not show up in the Fourier transform of j_d. As a consequence, off-axis STEM holography is feasible only if the pivot point is not placed at the center of the aperture plane. In the case ζ ≠ 0, the current density at the detector plane depends on the displacement R of the probes from their position of rest. The location of the interference pattern does not change if the probes are simultaneously shifted by a multiple of the periodicity distance

$\Delta R = \frac{\lambda f}{2\zeta\delta}$.   (151)
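For orientation, the periodicity (151) can be evaluated with example values used elsewhere in this section (they are assumptions of this sketch, taken from the later estimates): f = 2 mm, d_δ = 100 Å, and a pivot distance ζ of about 20.8 m. The result is of the same order as the maximum pixel size:

```python
# Probe-shift periodicity (151), Delta_R = lambda*f/(2*zeta*delta),
# evaluated with example values assumed from this section's estimates.
lam   = 3.7e-12      # electron wavelength at 100 kV (m)
f     = 2e-3         # focal length (m)
d_del = 100e-10      # probe distance (m)
delta = d_del / (2 * f)          # biprism deflection angle (rad)
zeta  = 20.8                     # pivot-point distance (m)

dR = lam * f / (2 * zeta * delta)
print(f"Delta_R = {dR*1e10:.2f} angstrom")   # of the order of one pixel
```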
The moving interference pattern in the detector plane produces a stripe pattern in the image with the same periodicity. As a result, the Fourier transform Ī(Ω) of the image intensity exhibits two eccentric sideband peaks at the positions

$\Omega_s = \pm\, 2\frac{\zeta}{f}\,\delta$.   (152)

The maximum radius of each twin image in the diffractogram is θ_o + θ_d, while that of the quadratic term is 2θ_o. Accordingly, the twin images are separated from each other and from the quadratic term if the separation criterion

$2\frac{\zeta}{f}\,\delta > 3\theta_o + \theta_d$   (153)

is fulfilled. For reliable reconstruction of the complex scattering amplitude, the contrast of the stripes in the image must be as high as possible. The magnitude of the contrast depends on the configuration of the detector. For example, a circular detector whose diameter is large compared with the periodicity of the current density at the detector plane largely cancels out the oscillations of the image intensity. To maximize the contrast of the stripes in the image, the detector must be subdivided into stripes which run parallel to the stripes of the current density. The width of each stripe must be one-half of the periodicity interval (154) of the current density (150) at the detector plane. The sign of the stripe detector function (155) must alternate with the number n of the stripes, as shown in Fig. 16. The angular width of each stripe,

$\Delta\theta_s = \frac{\lambda}{2 d_\delta}$,   (156)

represents half the diffraction angle of a line grid whose periodicity coincides with the distance

$d_\delta = 2 f \delta$   (157)
FIGURE 16. Principle of the stripe detector. The periodicity of the stripes matches the periodicity of the current density. The sign of the detector function alternates with the number of stripes. Each stripe covers either a region of constructive interference or a region of destructive interference.
between the two probes at the object plane. For probe positions R_n = nΔR, the maxima of the current density at the detector plane match the stripes of the detector which are weighted by +1. In this case a strong image signal of positive sign is obtained, as shown in Fig. 16a. Probe positions with R_n = (n + 1/2)ΔR yield a strongly negative image signal (Fig. 16b). In order to discuss the actual arrangement of the stripe detector, we assume a STEM with a voltage of 100 kV, a limiting aperture angle θ_o ≈ 13 mrad, and a distance d_δ = 100 Å between the probes. For these values we obtain from (156) an angle Δθ_s ≈ 1.85 × 10⁻⁴ rad. Moreover, the bright-field area must be subdivided into

$n_s = 2\theta_o/\Delta\theta_s \approx 140$   (158)

stripes. Hence if we use a CCD array with 512² pixels, each stripe of the detector should extend over three pixel rows. To satisfy this condition for a pixel size of 20 μm and a focal length f = 2 mm, we need a magnification

$|M_d| = \frac{3 \times 140 \times 20\ \mu\text{m}}{2 f \theta_o} \approx 162.$   (159)
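The arithmetic of this example can be checked directly (values as quoted in the text; the electron wavelength at 100 kV is taken as 3.7 pm):

```python
# Stripe-detector geometry for the 100-kV example: stripe half-period (156),
# stripe count across the bright-field disc (158), and the detector
# magnification (159) for a 512^2 CCD with 20-um pixels.
lam     = 3.7e-12     # electron wavelength at 100 kV (m)
theta_o = 13e-3       # limiting aperture angle (rad)
d_delta = 100e-10     # probe distance (m)
f       = 2e-3        # focal length (m)
px      = 20e-6       # CCD pixel size (m)

dth_s = lam / (2 * d_delta)              # Eq. (156): ~1.85e-4 rad
n_s   = 2 * theta_o / dth_s              # ~140 stripes across 2*theta_o
M_d   = 3 * 140 * px / (2 * f * theta_o) # Eq. (159): ~162
print(f"dth_s = {dth_s:.3e} rad, n_s = {n_s:.0f}, |M_d| = {M_d:.0f}")
```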
Any enlargement of the probe distance d_δ results in smaller stripes, making the construction of the detector even more difficult. The maximum number of stripes is limited by the pixel number N_c of the CCD array. Hence we obtain an upper bound (160) for the maximally allowed probe distance. For probe distances larger than a few hundred angstroms, the effect of the off-axial coma must be considered. The coma prevents the unambiguous reconstruction of the scattering amplitude. Therefore, the maximum distance d_δ between the two probes is limited by both the minimum size of the detector pixel and the coma. The probe distance must be chosen in such a way that (a) each interference stripe at the detector plane remains broader than the size of the pixel and (b) the coma does not exceed a distinct limit. To establish the appropriate location of the pivot point, we assume a microscope with f = 2 mm, θ_o = θ_d = 13 mrad, and a probe distance d_δ = 100 Å. Considering further the relation δ = d_δ/(2f), we find from (153) that the pivot point must be located at a distance ζ > 20.8 m beneath the objective aperture plane. It should be mentioned that in this case the pivot point is located outside the microscope.
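The pivot-point estimate follows from (153) by direct arithmetic; a short sketch with the text's values:

```python
# Minimum pivot-point distance zeta from the separation criterion (153),
# 2*(zeta/f)*delta > 3*theta_o + theta_d, with delta = d_delta/(2f).
f       = 2e-3        # focal length (m)
theta_o = 13e-3       # aperture angle (rad)
theta_d = 13e-3       # detector angle (rad)
d_delta = 100e-10     # probe distance (m)

delta    = d_delta / (2 * f)                       # biprism deflection angle
zeta_min = (3 * theta_o + theta_d) * f / (2 * delta)
print(f"delta = {delta:.2e} rad, zeta_min = {zeta_min:.1f} m")   # ~20.8 m
```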
B. Formation of the Scanning Transmission Electron Microscope Hologram
So far we have assumed an ideal point source yielding an illumination wave which propagates in the direction of the optic axis. However, any real source has a finite size. Electrons that are emitted from a nonaxial point of the source enter the biprism plane with an additional tilt angle β. Accordingly, the "iso-phase axis" of the two plane waves behind the biprism is tilted by this angle. The total tilt angle of the object wave is

$-\delta + \alpha + \beta$,   (161)

while the corresponding angle for the reference wave is given by

$+\delta + \alpha + \beta$.   (162)
These plane waves are transformed into two convergent waves by the objective lens. The centers of the resulting probes at the object plane are displaced by the distance

$a = f\,\beta$   (163)

from the centers of the spots formed by the axial plane wave. Simultaneously, the intersection point of the "iso-phase axis" with the aperture plane is shifted by sβ from the nominal position −ζα, as shown in Fig. 17. Here s is the distance between the biprism plane and the objective aperture plane. Considering these displacements, we obtain for the wave function at a plane just behind the aperture the expression
$\psi_a(\rho_a) \propto e^{ik(-\delta+\alpha+\beta)\cdot\rho_a}\, e^{ik(-\delta+\alpha+\beta)\cdot(\zeta\alpha - s\beta)} + e^{ik(+\delta+\alpha+\beta)\cdot\rho_a}\, e^{ik(+\delta+\alpha+\beta)\cdot(\zeta\alpha - s\beta)}$.   (164)
To incorporate the phase shift caused by the aberrations of the objective lens we must multiply the expression (164) by the phase term

$e^{-i\chi(\theta,\,\rho_o,\,\Delta E)}$.   (165)

The phase shift

$\chi(\theta, \rho_o, \Delta E) = \kappa(\theta, \rho_o) - \gamma(\theta, \Delta E)$   (166)

introduced by the objective lens consists of the familiar term γ given in (34) and the term (Rose, 1989)

$\kappa(\theta, \rho_o) = c_K\,\theta^3 \rho_o \cos(\alpha_\theta + \alpha_K - \alpha_\rho)$,   (167)

which accounts for the effect of the third-order coma. The coma of magnetic
FIGURE 17. Course of the iso-phase axis for producing an off-axis STEM hologram in the cases of an axial (-) and a tilted (-----) incident plane wave. The iso-phase axis coincides with the (virtual) ray which intersects the center of the biprism.
lenses contains a radial coefficient and an azimuthal coefficient which accounts for the effect of the Larmor rotation. The angles α_θ and α_ρ are the polar angles of the vectors θ and ρ_o, respectively. In the case α_K = 0, the axis of the coma figure points in the radial direction. In order to obtain the wave function at the detector plane, we must propagate each of the two partial waves (164) from the objective aperture plane to the detector plane by employing the procedure outlined in Section V.A. Since the diameter of the probes is of the order of a few angstroms, we can replace the vector ρ_o in the phase term (167) of the coma by the positions ρ_p = R − fδ and ρ_r = R + fδ of the centers of the specimen probe and the reference probe, respectively. Employing this approximation, we derive for the wave function at the detector plane the expression (168), where the abbreviations (170)–(172) hold
for the object wave. Assuming a single point source and using relations (170), (171),(172), and (148), we obtain for the angular current density at the detector plane the relation
(173), where I_p is the current of the specimen probe at the object plane. In the absence of a specimen (F = 0), the angular current density (174) forms stripes which are distorted by the coma in the outer region of the detector. This distortion complicates significantly the reconstruction of the scattering amplitude from the hologram. It may therefore be necessary to limit the size of the specimen area to such an extent that the coma can be neglected. The maximum phase shift κ_max resulting from the coma is found from (167) to be

$\kappa_{max} \approx k\, c_K\, \theta_o^3\, d_\delta$.   (175)

In order to estimate the influence of the coma we assume an objective aperture angle θ_o = 13 mrad, a typical value (176) of |c_K| for the coma coefficient, a probe distance d_δ = 20 nm, and an acceleration voltage of 100 kV. The resulting maximum phase shift

$\kappa_{max} \approx 0.08$   (177)

is negligibly small. Since this phase shift increases only linearly with the probe distance d_δ, the influence of the coma can still be tolerated if the probe distance is increased to about 30 nm. In the following we consider only object fields whose diameters are smaller than 30 nm. In this case we can neglect the phase shift κ in expression (173) and obtain the simplified form (178).
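The order of magnitude of (175)–(177) can be reproduced as follows; since the coma coefficient c_K enters only as a prefactor, it is taken here as an assumed order-of-unity value:

```python
import math

# Order-of-magnitude check of the coma phase shift (175),
# kappa_max ~ k * c_K * theta_o^3 * d_delta, for the 100-kV example.
lam     = 3.7e-12        # wavelength at 100 kV (m)
k       = 2 * math.pi / lam
c_K     = 1.0            # coma coefficient, assumed of order unity
theta_o = 13e-3          # aperture angle (rad)
d_delta = 20e-9          # probe distance (m)

kappa_max = k * c_K * theta_o**3 * d_delta
print(f"kappa_max ~ {kappa_max:.2f} rad")     # ~0.07-0.08, negligibly small
```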
For an extended polychromatic source, the current density per solid angle at the detector plane is given by

$\bar j_d(\Theta, R) = \iint j_d(\Theta, R, \alpha, \Delta E)\, p(\Delta E)\, q(\alpha)\, d^2\alpha\, d\Delta E$.   (179)
Here q(α) and p(ΔE) determine the spatial and the energy distribution of the effective source formed by the statistically independent constituent point sources.

C. Fourier Transform of the Hologram
The considerations concerning the Fourier transform of the off-axis hologram are outlined in Section V.A. We again assume that the number of pixels N is sufficiently large and that condition (46) of the sampling theorem is satisfied. In this case the mean value of the Fourier transform of the image intensity

$\bar I(R) = \frac{\eta\tau_p}{e}\int \bar j_d(\Theta, R)\, D(\Theta)\, d^2\Theta$,   (180)

recorded during the illumination time τ = N²τ_p, is found to be the expression (181). The first term,

$Q(\Omega)\, e^{-ikf\delta\cdot\Omega}$,   (182)

denotes the amplitude of the central intensity, while the term

$\int D(\Theta)\, A(|\Theta - \tilde\Omega|)\, A(\Theta)\, F(\Theta - \tilde\Omega,\, \Theta)\, P(\Theta, \tilde\Omega)\, e^{2iks\delta\cdot\Theta}\, d^2\Theta$,   (183)

taken up to a constant prefactor and with Ω̃ = (2ζ/f)δ + Ω, describes the amplitude of each of the two sidebands. The angular radius θ_0 of the central term and the radius θ_s of each sideband term are given by (184) and (185),
respectively. Hence if the imaging parameters satisfy the separation condition (153),
the sidebands are spatially separated and do not overlap with the central term. To reconstruct the scattering amplitude, we need to consider only the amplitude (186) of the first sideband, which is centered about the position

$\Omega_s = -\,2\frac{\zeta}{f}\,\delta$   (187)
in the diffractogram. Expression (183) shows that the scattering amplitude can only be separated from the properties of the microscope for thin phase objects. Since in this case the scattering amplitude F(Θ − Ω, Θ) = F(Ω) does not depend on the detector angle Θ, the sideband term can be written in the form (188). Here the function

$\int A(|\Theta - \tilde\Omega|)\, A(\Theta)\, D(\Theta)\, P(\Theta, \tilde\Omega)\, e^{2iks\delta\cdot\Theta}\, d^2\Theta$,   (189)

taken up to a constant prefactor,
is the sideband transfer function (SBTF). The phase factor exp(−ikfδ·Ω) need not be corrected, for reasons outlined in Section IV.A. It should be noted that the envelope function Q(Ω + (2s/f)δ) is rotationally symmetric with respect to Ω_Q = −(2s/f)δ. To estimate the effect of this shift, we assume a 100-kV STEM with

d_δ = 20 nm,  f = 2 mm,  s = 100 mm,  C_c = 1.2 mm.   (190)

For these values the center of the envelope function is displaced by the angle

$|\Omega_Q| = 2\,\frac{s}{f}\,\delta = \frac{s}{f^2}\, d_\delta \approx 0.5\ \text{mrad}$   (191)

from the origin Ω = 0 of the diffractogram. Assuming an optimized three-ring detector (142) with a cutoff angle

$\theta_d + \theta_o = 2\theta_o \approx 26\ \text{mrad}$,   (192)
we find that the relative shift of the center of the damping envelope with respect to the radius of the sideband amounts to only 1.9%. Since this shift is very small, the effect of the envelope function is quite similar to that in conventional STEM. The factor P(Θ, Ω̃) is identical with the term (122) for conventional STEM imaging. Therefore, the energy spread affects the off-axis STEM hologram in the same way as it does the phase-contrast image.

D. Spectral Signal-to-Noise Ratio
The noise reduces the information which can be extracted from the STEM hologram. For reasons of simplicity we restrict our considerations to weak scatterers. In this case the number of electrons scattered out of the illumination cone is negligibly small. Hence the current through the bright-field area of the detector is approximately equal to the current I_p of the specimen probe. Without an object, the angular current density in the detector plane is given by the expression (193); it consists of the uniform bright-field intensity I_p/(πθ_o²) A(Θ) modulated by cosine and sine fringe terms whose amplitudes are the real and imaginary parts of the envelope function Q.
For a bright-field detector with

$D^2(\Theta) = 1$,   (194)

the contributions of the cosine and the sine term to the variance (133) cancel out, as do the terms that are linear in the scattering amplitude, owing to the validity of the generalized optical theorem (Glauber and Schomaker, 1953; Rose, 1977). Neglecting the quadratic terms which account for the electrons scattered out of the bright-field region, we obtain for the variance of the sideband intensity of the diffractogram the expression (195).
Using this expression, the spectral SNR of the sideband intensity for Ω ≠ 0 is found to be the expression (196). Comparison of this relation with the corresponding expressions (67), (93), and (138) suggests that the effective SBTF should be defined by (197). This function can be used to compare the spectral SNR of off-axis holography in the STEM with that in the TEM and with those of TEM and STEM bright-field imaging. Assuming a Gaussian function for both the energy distribution (52) and the radiation characteristic of the effective source (140), we obtain for the effective SBTF in normalized units (Table I) the expression (198). Here Δϑ_s denotes the normalized stripe width (156) of a stripe detector for maximum stripe contrast. Apart from the factor exp(iπϑ_x/Δϑ_s),
FIGURE 18. Combination of a stripe detector and an annular detector forming an optimum detector for off-axis STEM holography.
the real part of the integrand in (198) coincides with the integrand of the effective PCTF (141) for standard STEM bright-field imaging. The strong oscillations caused by the phase term (199) cancel out the whole integral if an annular phase-contrast detector with only a few rings is used. To avoid this disastrous behavior we choose a detector configuration with a detector function

$$ D(\vec\vartheta) = D_s(\vartheta_x)\,D_a(\vartheta). \tag{200} $$

Such a detector consists of rings which are subdivided into stripes, as shown in Fig. 18. The factor D_s(ϑ_x), which matches the interference pattern of the intensity at the object plane in the absence of an object, is obtained from (155) by replacing θ_x by ϑ_x θ_N. The factor D_a(ϑ) accounts for the annular subdivision of the detector.

To elucidate the effect of the factor D_s(ϑ_x) on the effective SBTF, we have plotted in Fig. 19 the real part and the imaginary part of the product D_s(ϑ_x) exp(iπϑ_x/Δϑ_s). The curves demonstrate that the real part is always positive, while the imaginary part has the form of a sawtooth function. Since all other factors of the integrand vary slowly over a region of many oscillations, the imaginary part of the integral in (198) is approximately zero. The mean value 2/π ≈ 0.63 of the real part demonstrates that the stripe detector (155) counterbalances to a large extent the damping effect of the oscillating phase factor (199). Considering the effect of the stripe detector, we can substitute the factor 2/π for the product D_s(ϑ_x) exp(iπϑ_x/Δϑ_s) in the integrand of the expression (198). The resulting approximation,

$$ D_{s,\mathrm{eff}}(w) \approx \frac{2}{\pi}\,\frac{e^{-2\bar\varkappa_s^2 (w + 2s/f)^2}}{i\sqrt{2}\,\pi^2} \int A(|\vec w - \vec\vartheta|)\,A(\vartheta)\,D_a(\vartheta)\, e^{-i[\chi(|\vec w - \vec\vartheta|) - \chi(\vartheta)]}\, e^{-\bar\varkappa_e^2 (|\vec w - \vec\vartheta|^2 - \vartheta^2)^2}\, d^2\vartheta, \tag{201} $$
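The 2/π mean value used in this substitution can be checked numerically. The sketch below (illustrative only, not part of the original text) models the stripe detector as an ideal square wave sign(cos(πϑ_x/Δϑ_s)) matched to the fringe period, as described above:

```python
import numpy as np

D = 1.0                                                  # normalized stripe width
x = np.linspace(0.0, 2.0 * D, 200000, endpoint=False)    # one full fringe period
product = np.sign(np.cos(np.pi * x / D)) * np.exp(1j * np.pi * x / D)

# Real part equals |cos|, whose period average is 2/pi; the sawtooth
# imaginary part averages to zero over a full period.
mean_real = product.real.mean()
mean_imag = product.imag.mean()
print(mean_real)   # close to 2/pi = 0.6366
print(mean_imag)   # close to 0
```

This confirms that replacing the detector-fringe product by the constant 2/π is justified whenever the remaining factors of the integrand vary slowly over many stripe periods.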
THEORETICAL CONCEPTS OF ELECTRON HOLOGRAPHY
FIGURE 19. Real part (-) and imaginary part (-----) of the product D_s(ϑ_x) exp(iπϑ_x/Δϑ_s).
for the effective SBTF is rotationally symmetric if we disregard the displacement −2s/f of the envelope function. In this case the real part of D_{s,eff} coincides with the effective PCTF of the standard bright-field STEM imaging (139), apart from the factor 1/(√2 π) ≈ 0.23. The optimum transfer function (201) will be obtained when the factor D_a(ϑ) of the detector function (200) coincides with the detector function (142) for optimum phase contrast. The optimum hologram STEM detector therefore combines the features of both the stripe and the ring detector, as shown in Fig. 18. In this case the real part of the SBTF has the same shape as the PCTF (139), apart from the constant factor 0.23. To illustrate this behavior, we have plotted in Fig. 20 the real part of the SBTF and the PCTF for a 100-kV STEM with C_s = 1.2 mm, C_c = 1.4 mm, f = 2 mm, and a monoenergetic point source. For comparison, the corresponding curves for a realistic source with source parameters

$$ \sigma_\varepsilon = 0.5\ \text{eV}, \qquad \sigma_s = 43\ \text{Å}, \qquad 1/|M_c| = 80 \tag{202} $$

are depicted in Fig. 21. To survey the complex SBTF for a monoenergetic source completely, we have plotted in Fig. 22 the absolute value, the real part, and the imaginary part of the SBTF, and in Fig. 23 the corresponding phase. The SBTF for off-axis STEM holography and the PCTF of standard STEM bright-field imaging are less sensitive with respect to the energy spread of the source than the SBTF or the PCTF of the TEM. On the other hand,
FIGURE 20. PCTF of the optimized three-ring detector (-) and the real part of the SBTF of the three-ring stripe detector (-----) as functions of the normalized spatial frequency w. In both cases a monoenergetic point source has been assumed.
FIGURE 21. PCTF of the optimized three-ring detector (-) and of the real part of the SBTF for the three-ring stripe detector (-----) for a Gaussian source with an energy spread (variance) σ_ε = 0.5 eV and a source radius (variance) σ_s = 43 Å.
the performance of the STEM is strongly affected by the finite extent of the effective source. Accordingly, coherent bright-field imaging in STEM can only be achieved with a field-emission gun. Another critical point in off-axis STEM holography concerns the extreme accuracy required for the stripes of
FIGURE 22. Absolute value (-), real part (---), and imaginary part (----) of the SBTF in the case of a three-ring stripe detector and a monoenergetic point source.
FIGURE 23. Phase of the SBTF for the three-ring stripe detector.
the detector. To demonstrate this behavior, we have plotted in Fig. 24 the absolute value of the SBTF for a relative deviation of the stripe width of 0%, 0.1%, and 1%. The curves reveal that a relative accuracy of at least 0.1% is necessary in order to prevent an appreciable reduction of the SBTF. This accuracy should be attainable in practice without difficulties. Finally,
FIGURE 24. Effect of an inaccurate stripe width on the SBTF in the case of a combined three-ring stripe detector and a monochromatic point source for a relative deviation of 0% (-), 0.1% (---), and 1% (----) of the stripes.
it should be noted that a shift of the interference pattern by the angle Δθ_s/2 maximizes the imaginary part of the SBTF, while it suppresses the real part.
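The sensitivity to stripe-width errors can be made plausible with a small simulation (an illustrative sketch, not from the original text; the stripe count `n_stripes` is an assumed parameter, not a value taken from the chapter). A relative width error causes a phase slip between detector and fringe pattern that accumulates across the detector and attenuates the mean detector-fringe product:

```python
import numpy as np

def attenuation(rel_err, n_stripes=100, pts_per_stripe=2000):
    """Mean detector-fringe product for stripe width D*(1 + rel_err),
    normalized so that a perfectly matched detector gives 1."""
    D = 1.0
    x = np.linspace(0.0, n_stripes * D, n_stripes * pts_per_stripe,
                    endpoint=False)
    detector = np.sign(np.cos(np.pi * x / (D * (1.0 + rel_err))))
    fringes = np.exp(1j * np.pi * x / D)
    return abs(np.mean(detector * fringes)) / (2.0 / np.pi)

for err in (0.0, 1e-3, 1e-2):
    # The 0.1% case is barely damped; the 1% case is strongly damped.
    print(err, attenuation(err))
```

With the assumed 100 stripes, a 0.1% deviation leaves the transfer nearly unchanged while a 1% deviation attenuates it markedly, in qualitative agreement with Fig. 24.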
VII. CONCLUSION

In the case of conventional bright-field imaging in TEM and STEM (in-line holography), the reconstruction of the scattering amplitude is restricted to weak phase objects. When no focus series is available, the reconstruction of the scattering amplitude up to the information limit is hampered in the TEM by the gaps of the PCTF.

An important advantage of the STEM equipped with a configured bright-field detector is the possibility of extracting specific information about the object from a single exposure by subsequent combination of the signals from the detector elements in various ways. The PCTF obtained by an optimum three-ring detector has a triangular shape without any transfer gaps up to the maximum transferred spatial frequency. Due to this triangular shape, the speckles are largely suppressed in the bright-field image of the STEM. The PCTF of the STEM is less sensitive with respect to deviations of C_s and Δf than the PCTF of the TEM for parallel illumination. On the other hand, the
spatial coherence of the source must be higher for the STEM than for the TEM. Accordingly, a field-emission gun must be used in the STEM. Since only half of the elastically scattered electrons contribute to the modulation of the bright-field intensity, the spectral SNR is about three times lower than that in TEM employing parallel illumination.

Off-axis holography in the TEM is not restricted to weak phase objects. However, an accurate knowledge of both the defocus Δf and the coefficient C_s is mandatory for reconstructing the complex scattering amplitude of arbitrary thin objects up to the information limit. Moreover, off-axis holography has a spectral SNR about three times lower than that of conventional TEM bright-field imaging.

The optimum detector configuration for off-axis STEM holography is obtained by subdividing the three-ring detector for conventional phase-contrast imaging into stripes. The absolute value of the resulting SBTF has the same shape as the PCTF in the conventional STEM. Hence the resolution will be the same in both cases. Unfortunately, for a given dose the achievable maximum spectral SNR is about 10 times lower than that of the conventional TEM. Therefore, this technique seems to be applicable only for radiation-resistant objects. STEM holography is restricted to thin phase objects. Since this fundamental disadvantage cannot be overcome, the field of application of STEM holography is strongly limited.

APPENDIX: FRESNEL DIFFRACTION IN OFF-AXIS TRANSMISSION ELECTRON MICROSCOPE HOLOGRAPHY

Within the frame of validity of the Fresnel approximation, we obtain for the wave function (76) at the intermediate image plane the expression (203),
where the functions

$$ \psi^{(+)}(\rho_i) = \mathscr{F}\!\left[\sqrt{\frac{|M_i|}{f\lambda v}}\left(\delta_v l - \frac{d_b}{2(1-v)} - x_i\right)\right] \tag{204} $$

and

$$ \psi^{(-)}(\rho_i) = \mathscr{F}\!\left[\sqrt{\frac{|M_i|}{f\lambda v}}\left(\delta_v l - \frac{d_b}{2(1-v)} + x_i\right)\right] \tag{205} $$
with

$$ \mathscr{F}(x) = \frac{1}{2}\left\{1 + (1-i)\left[\mathscr{C}(x) + i\,\mathscr{S}(x)\right]\right\} \tag{206} $$

are expressed in terms of the Fresnel integrals (Abramowitz, 1970),

$$ \mathscr{C}(x) = \int_0^x \cos\!\left(\frac{\pi}{2}t^2\right) dt, \qquad \mathscr{S}(x) = \int_0^x \sin\!\left(\frac{\pi}{2}t^2\right) dt. \tag{207} $$

Considering the asymptotic behavior

$$ \mathscr{C}(x) \approx \frac{1}{2} + \frac{1}{\pi x}\sin\!\left(\frac{\pi}{2}x^2\right), \qquad \mathscr{S}(x) \approx \frac{1}{2} - \frac{1}{\pi x}\cos\!\left(\frac{\pi}{2}x^2\right) \tag{208} $$

of these functions for large arguments x ≫ 1, we derive the relations

$$ \mathscr{F}(x) \approx 1 - \frac{e^{i\pi/4}}{\sqrt{2}\,\pi x}\,e^{i(\pi/2)x^2} \quad \text{for } x \gg 1, \qquad \lim_{x\to-\infty} \mathscr{F}(x) = 0. \tag{209} $$
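These limits can be verified numerically. The sketch below (illustrative only, not part of the original text) evaluates the Fresnel integrals by simple trapezoidal quadrature rather than a library routine:

```python
import numpy as np

def fresnel_CS(x, n=500001):
    """Fresnel integrals C(x), S(x) by trapezoidal quadrature."""
    t = np.linspace(0.0, x, n)
    fc = np.cos(np.pi * t**2 / 2.0)
    fs = np.sin(np.pi * t**2 / 2.0)
    dt = np.diff(t)
    C = np.sum(0.5 * (fc[1:] + fc[:-1]) * dt)
    S = np.sum(0.5 * (fs[1:] + fs[:-1]) * dt)
    return C, S

def F(x):
    """The function (206) built from the Fresnel integrals."""
    C, S = fresnel_CS(x)
    return 0.5 * (1.0 + (1.0 - 1j) * (C + 1j * S))

print(F(50.0))    # approaches 1 for large positive x
print(F(-50.0))   # approaches 0 for large negative x
```

The residual deviation of F(50) from unity is of order 1/(√2 π x) ≈ 0.005, consistent with the asymptotic relation above.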
To simplify the considerations, we analyze the behavior of ψ^{(+)} and ψ^{(−)} in the absence of a specimen (F = 0). The argument of 𝓕 in (204) is positive for

$$ x_i < \delta_v l - \frac{d_b}{2(1-v)}, \tag{210} $$

and that of 𝓕 in (205) is positive for

$$ x_i > -\delta_v l + \frac{d_b}{2(1-v)}. \tag{211} $$

Since the extension of the Fresnel fringes surrounding the edges x_i = δ_v l − [d_b/2(1−v)] and x_i = −δ_v l + [d_b/2(1−v)] shrinks with decreasing v, we restrict our considerations to v ≪ 1. In this case we can replace 1 − v by 1 in all equations. To obtain an estimate for the limit v, up to which the approximation

$$ \psi^{(\pm)} \approx 1 \tag{212} $$

is valid within the region

$$ -\delta_v l + \frac{d_b}{2} < x_i < \delta_v l - \frac{d_b}{2} \tag{213} $$
of the hologram, it is advantageous to introduce the parameters

$$ \mu^{(+)} = \frac{1}{2} - (2\delta_v l - d_b)^{-1} x_i \tag{214} $$

and

$$ \mu^{(-)} = \frac{1}{2} + (2\delta_v l - d_b)^{-1} x_i. \tag{215} $$

In both cases the range 0 < μ^{(±)} < 1 corresponds to the hologram zone (213). In order to record a hologram of a specimen with the area L × L, condition (68) must hold. Choosing the equality sign in this requirement, we derive the equation

$$ 2\delta_v l - d_b = |M_i| L. \tag{216} $$
If we insert this relation into (214) and (215) and replace x_i in (204) and (205) by one of the resulting expressions for x_i, we find

$$ \psi^{(\pm)}(\mu^{(\pm)}) = \mathscr{F}\!\left[\mu^{(\pm)} L\,\sqrt{\frac{|M_i|^3}{f\lambda v}}\right]. \tag{217} $$

To guarantee that the relation ψ^{(±)} = 1 holds within the area of the hologram, we must keep the absolute value of the difference ψ^{(±)}(ρ_i) − 1 below a given threshold ε_max:

$$ |\psi^{(\pm)}(\mu^{(\pm)}) - 1| \le \varepsilon_{\max}. \tag{218} $$

Considering the asymptotic behavior (209) of the function (206) for large arguments x ≫ 1, we derive from (218) and (209) the condition

$$ v < 2\pi^2\left(\mu^{(\pm)} L\,\varepsilon_{\max}\right)^2 \frac{|M_i|^3}{f\lambda}. \tag{219} $$

In the following we require that 90% of the area of the hologram should not be affected by the edge fringes. This inner zone corresponds to the region defined by 0.95 ≥ μ^{(±)} ≥ 0.05. Assuming L = 200 Å, ε_max = 0.03, |M_i| = 50, E_0 = 100 keV, and f = 2 mm, we obtain the numerical bound (220).
Equivalent considerations show that in this case ψ^{(+)}(μ^{(+)}) and ψ^{(−)}(μ^{(−)}) are negligibly small in the region μ^{(±)} < −0.05 outside the hologram. For this example, the wave function in the intermediate image plane can be approximated by (77), apart from the stripes −0.55L/|M_i| < x_i < −0.45L/|M_i| and 0.45L/|M_i| < x_i < 0.55L/|M_i| centered about the edges of the hologram. Assuming a typical diameter d_w = 1 μm for the wire of the biprism, it follows from condition (70) that the biprism angle must be chosen as given by (221).
REFERENCES

Abramowitz, M., and Stegun, I. A. (1970). "Handbook of Mathematical Functions," 9th ed., p. 295. Dover, New York.
Batson, P. E. (1988). In "Treatise on Materials Science and Technology, Vol. 27," p. 337. Academic Press, New York.
Berger, A. (1993). Ph.D. thesis, TH Darmstadt, Germany.
Berger, A., and Kohl, H. (1993). Optik 92 (4), 175.
Born, M., and Wolf, E. (1975). "Principles of Optics," 5th ed., p. 483. Pergamon Press, New York.
Coene, W., Janssen, A., de Beeck, O., and Van Dyck, D. (1992). Phys. Rev. Lett. 69, 3743.
Dinges, C., Kohl, H., and Rose, H. (1994). Ultramicroscopy 55, 91.
Eusemann, R., and Rose, H. (1982). Ultramicroscopy 9, 85.
Gabor, D. (1949). Proc. Roy. Soc. A 197, 454.
Gabor, D. (1951). Proc. Phys. Soc. B 64, 449.
Glauber, R. J. (1958). In "Lectures in Theoretical Physics, Boulder, Vol. 1," p. 315. Interscience, New York.
Glauber, R. J., and Schomaker, V. (1953). Phys. Rev. 89, 667.
Goodman, J. W. (1968). "Introduction to Fourier Optics." McGraw-Hill, San Francisco.
Hammel, M. (1992). Ph.D. thesis, TH Darmstadt, Germany.
Hammel, M., and Rose, H. (1995). Submitted for publication.
Hanssen, K.-J. (1971). In "Advances in Optics and Electron Microscopy, Vol. 4" (V. E. Cosslett, ed.), p. 1. Academic Press, New York.
Hanssen, K.-J. (1982). In "Advances in Electronics and Electron Physics, Vol. 59," p. 1. Academic Press, New York.
Hanssen, K.-J., and Trepte, L. (1971). Optik 33, 166.
Hawkes, P. W., and Kasper, E. (1994). "Principles of Electron Optics, Vol. 3, Wave Optics." Academic Press, New York.
Helmholtz, H. v. (1886). Crelles J. 100, 213.
Kunath, W., Zemlin, F., and Weiss, K. (1985). Ultramicroscopy 16, 123.
Leith, E. N., and Upatnieks, J. (1962). J. Opt. Soc. Am. 52, 1123.
Leuthner, Th., and Lichte, H. (1989). Phys. Stat. Sol. (a) 116, 113.
Lichte, H. (1991a). In "Advances in Optics and Electron Microscopy, Vol. 12" (T. Mulvey, ed.), p. 25. Academic Press, New York.
Lichte, H. (1991b). Ultramicroscopy 38, 13.
Molière, G. (1947). Z. Naturforschung 2a, 133.
Möllenstedt, G. (1992). In "Advances in Optics and Electron Microscopy, Vol. 12" (T. Mulvey, ed.), p. 1. Academic Press, New York.
Möllenstedt, G., and Düker, H. (1956). Z. Physik 145, 377.
Reimer, L. (1984). "Transmission Electron Microscopy," Springer Series in Optical Sciences. Springer-Verlag, Berlin.
Rose, H. (1974). Optik 39, 416.
Rose, H. (1975). Optik 42, 217.
Rose, H. (1977). Ultramicroscopy 2, 251.
Rose, H. (1989). "Image Formation in Electron Microscopy," Lecture notes, TH Darmstadt, Germany.
Sakurai, J. J. (1985). "Modern Quantum Mechanics." Addison-Wesley, New York.
Saxton, W. O. (1994). Ultramicroscopy 55, 171.
Scheinfein, M., and Isaacson, M. (1986). J. Vac. Sci. Technol. B 4, 326.
Scherzer, O. (1949). J. Appl. Phys. 20, 20.
Schiske, P. "Proc. 4th Eur. Conf. on Electron Microscopy" (Rome), Vol. 1, p. 145.
Shannon, C. E. (1949). Proc. IRE 37, 10.
Thon, F. (1971). In "Electron Microscopy in Material Science," p. 570. Academic Press, New York.
Tonomura, A. (1986). In "Progress in Optics, Vol. XXIII" (E. Wolf, ed.), p. 183. Elsevier, New York.
Tonomura, A. (1993). "Electron Holography," Springer Series in Optical Sciences. Springer-Verlag, Berlin.
Whittaker, E. T. (1915). Proc. Roy. Soc. Edinburgh, Sect. A 35, 181.
Zeitler, E., and Thomson, M. G. R. (1970). Optik 31, 258.
ADVANCES IN IMAGING AND ELECTRON PHYSICS VOL. 94
Quantum Neural Computing

SUBHASH C. KAK

Department of Electrical and Computer Engineering, Louisiana State University, Baton Rouge, Louisiana
I. Introduction  260
   A. The Mind-Body Problem  261
   B. Emergent Behavior  262
   C. Indivisibility and Quantum Models  262
   D. A Historical Note  264
   E. Overview of the Chapter  265
II. Turing Test Revisited  266
   A. The Test  266
   B. On Animal Intelligence  267
   C. Gradations of Intelligence  269
   D. Animal and Machine Behavior  270
III. Neural Network Models  272
   A. The Brain  272
   B. Neurons and Synapses  273
   C. Feedback and Feedforward Networks  276
   D. Iterative Transformations  278
IV. The Binding Problem and Learning  284
   A. The Binding Problem  284
   B. Learning  285
   C. Chaotic Dynamics  288
   D. Attention, Awareness, Consciousness  289
   E. Cytoskeletal Networks  291
   F. Invariants and Wholeness  292
V. On Indivisible Phenomena  293
   A. Uncertainty  294
   B. Complementarity  295
   C. What Is a Quantum Model?  296
VI. On Quantum and Neural Computation  298
   A. Parallels with a Neural System  298
   B. Measurements  301
   C. More on Information  302
   D. A Generalized Correspondence Principle  304
VII. Structure and Information  305
   A. A Subneuron Field  305
   B. Uncertainty Relations  307
VIII. Concluding Remarks  309
References  310
Copyright © 1995 by Academic Press, Inc. All rights of reproduction in any form reserved.
I. INTRODUCTION
This chapter reviews the limitations of the standard computing paradigm and sketches the concept of quantum neural computing. Implications of this idea for the understanding of biological information processing and the design of new kinds of computing machines are described. Arguments are presented in support of the thesis that brains are to be viewed as quantum systems, with their neural structures representing the classical measurement hardware. From a performance point of view, a quantum neural computer may be viewed as a collection of many conventional computers that are designed to solve different problems. A quantum neural computer is a single machine that reorganizes itself, in response to a stimulus, to perform a useful computation. Selectivity offered by such a reorganization appears to be at the basis of the gestalt style of biological information processing. Clearly, a quantum neural computer is more versatile than the conventional computing machine.

Paradigms of science and technology draw on each other. Thus Newton's conception of the universe was based on the clockworks of the day; thermodynamics followed the heat engines of the nineteenth century; and computers followed the development of the telegraph and telephone. From another point of view, modern computers are based on classical physics. Since classical physics has been superseded by quantum mechanics in the microworld and animal behavior is being seen in terms of information processing by neural networks, one might ask if a new paradigm of computing based on quantum mechanics and neural networks can be constructed.

In recent years proposals have been made by Benioff (1982), Deutsch (1985, 1989), Feynman (1986), and others for the development of computers based on quantum mechanics. In these schemes the quantum mechanical basis states are the logic states of the computer and the computation is a unitary mapping of these states into each other.
Hermitian Hamiltonians are specified that define the interactions. From another perspective, these computers let the computation of several problems proceed simultaneously, as in the evolution of a superimposition of states. By itself that is no better than several computers running in parallel. However, if one were to imagine a problem where the partially evolved computations of some of these superimposed states are used to find the solution, then it might be that such a machine offers improved speed in the sense of complexity theory. The idea of a quantum computer has not yet been shown to be practical (Landauer, 1991; Gramss, 1994). Furthermore, these proposals deal only with the question of the physics underlying basic computation; they do not consider the question of how a computation process leads to intelligence.
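The unitary-evolution picture described above can be illustrated with a toy state-vector simulation (a minimal sketch, not taken from any of the cited proposals; the gate matrices are standard textbook definitions):

```python
import numpy as np

H = np.array([[1.0, 1.0], [1.0, -1.0]]) / np.sqrt(2.0)  # Hadamard gate
X = np.array([[0.0, 1.0], [1.0, 0.0]])                   # logical NOT gate

state = np.array([1.0 + 0j, 0.0])  # basis (logic) state |0>
state = H @ state                  # unitary map: superposition of |0> and |1>
state = X @ state                  # NOT acts on both superposed branches at once

probs = np.abs(state) ** 2         # measurement statistics
print(probs)                       # [0.5, 0.5]: both logic states present
```

Because the gates are unitary, the total probability is conserved, and a single application of a gate processes every superposed logic state simultaneously, which is the sense in which several computations "proceed at once."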
The proposal for a quantum neural computer is based on an explicit representation of the unity of the computation process; it defines computation as a behavior in relation to other systems. In other words, we introduce computation in relative terms, a perspective that appears to be appropriate for computation related to artificial intelligence (AI) problems. This shift in perspective is like the shift urged by Mach in his criticism of the notion of absolute space that had been assumed by Newton. According to Mach, space should be eliminated as an active cause in a system of mechanics; a particle moves not relative to space but rather to the center of all the other masses in the universe. Likewise, one says that intelligent computation can be defined only in terms of the computational behavior related to other such systems.

A. The Mind-Body Problem
In order to be able to design machines that are equal to the capabilities of brains, it is essential to understand the nature of brain behavior. Our first hurdle is the mind-brain problem. The brain is defined by its physical and chemical properties, and it is assumed to function in ways that can be predicted by physics and chemistry. In contrast, the mind is an abstract entity that consists of sensations, beliefs, desires, and intentions. How is the physiochemical body related to a nonphysical mind?

In identity theories, the mind is a part of the physical properties of the brain, as is true of the electrical recordings from the brain. Mental states then should be seen as brain states, and statements about cognition can be reduced to statements about behavior or a disposition to behave. However, identity theory does not explain how the activity in the brain assumes a unity that constitutes an awareness of the self. If stable brain states are identified as memories, as is done in many current models, can the problem of self be reduced to the problem of a bundling of these memories? Experiments with split-brain patients have shown that memory does not require consciousness. The converse, that consciousness does not require memory, has been known for a longer time. The split-brain subject appears to experience consciousness via the dominant cerebral hemisphere. By contrast, the minor cerebral hemisphere does not enable the subject to have conscious experience, although it subserves memory, nonverbal intelligence, and concepts of spatial relations. We know ourselves as unique indivisible minds, and not as a unique brain.

In dualist theory, the mind has a reality independent of the body. Dualists cite introspection, by which knowledge of things such as pleasure, pain, happiness, emotions, and so on, is gained, as proof of the existence of mind. The argument against dualism is that the recognition of the color green can be reduced to photons of a certain frequency striking the retina. Another objection is that a nonphysical entity cannot influence a material object without violating conservation laws.

B. Emergent Behavior
Mind, with its concomitant intelligence, is often taken to be an emergent property of the complexity and the organization of the interconnections between the neurons. The concept of emergent property comes from chemistry, where the properties of water are taken to be "emerging" from the properties of hydrogen and oxygen. The properties of water could not originally be predicted from the properties of hydrogen and oxygen; it is assumed that in the future, given the properties of atoms and the rules of their combinations, the properties of water will be completely explainable from the properties of its constituents.

The notion of emergent behavior is a consequence of a reductionist approach to phenomena. Such an approach is a reasonable foundation for scientific theories. However, the workings of the mind appear to be so different from the substratum of theories about the brain that doubts have been expressed about its being an emergent phenomenon. Neuroscience is also based on a reductionist agenda. It is believed that once the physiochemical basis of the activity and the interactions of the nerve cells are understood, mental events will be explained in terms of physiochemical events taking place in the nerve cells. But how awareness can emerge from principles of physics or chemistry is not obvious at this point. Neither does current theory explain how awareness could arise from complexity alone. The idea of emergent behavior does not rule out the need to add new laws at the higher level. Does the mind have unique properties that are not reducible to physical laws? In going to higher levels related to brain behavior, we confront the tyranny of complexity: The explosion of combinatorial possibilities may make it impossible to answer whether the higher-level behavior is reducible to that of lower levels.

C. Indivisibility and Quantum Models
Reductionist approaches to brain function do not capture the richness of biological information processing. According to the complementarity interpretation of quantum theory, any indivisible phenomenon must be described
by a wavefunction. This is why "elementary" particles, which in turn have "subparticles" as constituents, are described by wavefunctions. One can even speak of a wavefunction for a macro object and indeed for the entire universe. If consciousness is taken to be indivisible, one has no choice but to model it in a quantum mechanical fashion.

Evidence for the unity of self-awareness or consciousness is provided by many neuropsychological experiments. These include split-brain research as well as experiments on dissociation of behavior from its awareness, as in prosopagnosia, amnesia, and blindsight (Weiskrantz, 1986). Schrödinger (1965), Penrose (1989), and other scientists have argued that, as a unity, consciousness should be described by a quantum mechanical type of wavefunction. No representation in terms of networking of classical objects, such as threshold neurons, can model a wavefunction. Therefore, current computing machines, which are based on mechanistic laws of information processing, are unlikely to lead to machines that would match human intelligence.

It is sometimes argued that since quantum mechanics is not needed to describe neural processes, it should not enter into any higher-level descriptions of cognitive processes or of consciousness. The noisy environment characterizing nerve impulse flow should drown out any quantum mechanical behavior. On the other hand, it has been experimentally determined (see Baylor et al., 1979; Schnapf and Baylor, 1987) that the retina does respond to single photons, although the process that leads to such a response is not understood. This was determined by directing dim flashes of light into one eye of a subject sitting in total darkness. The subject perceived a flash when only seven photons were absorbed. Since a population of about 500 rods in the eye absorbed the photons in a random spatial pattern, there was no likelihood that any single rod had absorbed more than one photon.
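The statistical claim in this account is easy to verify (a back-of-the-envelope sketch, not from the original text, assuming each absorbed photon lands on one of the ~500 rods uniformly at random):

```python
from math import prod

rods, photons = 500, 7
# Probability that all 7 photons are absorbed by distinct rods,
# i.e., that no single rod absorbs more than one photon.
p_distinct = prod((rods - k) / rods for k in range(photons))
print(p_distinct)   # about 0.96, so multiple hits on one rod are unlikely
```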
The response itself is a macroscopic nerve signal.

Now consider social computing (Kak, 1988), that is, processing that takes place in a society of individuals without a conscious notion of a collective self. Current machines are not capable of matching such performance. Social computing is clearly less powerful than computing that is accompanied by awareness, so current computing models fall considerably short of the capabilities of biological systems.

There also exist speculations regarding consciousness in quantum physics. It has been argued that the collapse of the wavefunction occurs owing to its interaction with consciousness. Physicists such as Wigner have argued that science, as it stands now, is incomplete since it does not include consciousness. In a variant of the Schrödinger cat experiment, Wigner (1967) visualized two conscious agents, one inside the box and another outside. If the inside agent makes an observation that leads to the collapse of the wavefunction, then how is the linear superposition of the states for the outside observer to be viewed? Wigner argued that in such a situation, with a conscious observer as a part of the system, linear superposition must not apply.

Clearly, these questions are a restatement of the ancient debate regarding the notion of free will in a deterministic or causal universe. Regardless of its origin, consciousness appears to bring entirely new, and paradoxical, aspects into the nature of physical reality. Thus, in Section V.C.1 we describe a scenario dealing with quantum mechanics, due to John Archibald Wheeler, that admits the possibility of current actions influencing the past if commonsensical, mechanistic interpretations are sought. Wheeler has likened the physical universe to a gigantic information system whose unfolding depends on how we question it.
D. A Historical Note

The ancient Vedic science of cognition, based primarily on consciousness examining itself, which has been traced in India to at least 2000 B.C. (Kak, 1994c), led to a rich conceptual structure for the mind. The individual was seen as a system, with the body, the energy field, mind, intellect, and emotion as different hierarchical levels. The mind itself was seen as a system consisting of the sense organs, an emergent characteristic of I-ness, associative memory, logic, and an underlying universal principle of consciousness. The whole conception was sometimes viewed as a dualistic system of matter and consciousness, but more often as a unitary (monistic) system where a universal consciousness is the groundstuff of reality, with individual awareness as an emergent characteristic related to brain organization and behavioral associations. Other, more advanced structural models spoke of consciousness centers and agents. Animal and human minds were taken to be different in the sense that humans possess a language that is richer than that of animals; animals were also taken to be sentient, conscious beings. Details of this fascinating tradition, which is not widely known in the West, may be found in Sinha (1958), Aurobindo (1970), Dyczkowski (1987), and Kak (1993c, 1994b).

We do not know whether the categories of the Vedic cognitive science can be related to current neurophysiological discoveries. It is significant that the tradition claims to define a science of consciousness, and the description is in terms of a holistic framework that is reminiscent of that of quantum mechanics. It is therefore not surprising that, at least for Schrödinger, one of the creators of quantum mechanics, Vedic ideas were a direct source of inspiration (Schrödinger, 1961; Moore, 1989, pp. 170-173). Schrödinger (1965) also believed that the Vedic conception provided the resolution to the paradox of consciousness. Walter Moore, Schrödinger's biographer, summarizes:
The unity and continuity of Vedanta are reflected in the unity and continuity of wave mechanics. In 1925, the world view of physics was a model of a great machine composed of separable interacting material particles. During the next few years, Schrödinger and Heisenberg and their followers created a universe based on superimposed inseparable waves of probability amplitudes. This new view would be entirely consistent with the Vedantic concept of All in One (Moore, 1989, p. 173).
E. Overview of the Chapter
This chapter presents several perspectives on the problem of machine and animal intelligence so as to set the framework for introducing the notion of a quantum neural computer. Our emphasis is on concepts and not on practical considerations. We do not speak about a design for a quantum neural computer, but the examination of this concept is very useful in understanding the limitations of current techniques of machine intelligence.

We begin by revisiting the Turing test, and suggest that a new test that measures gradations of intelligence should be devised. Recent research on the cognitive capabilities of some animals, which validates the notion of levels of intelligence, is reviewed. This research establishes that certain cognitive abilities of animals, such as abstract generalization, are beyond the capabilities of conventional computers. This is followed by a survey of the state of current neural network research. We consider new neuron and network models, including one where, in analogy with quantum mechanics, the neuron outputs are taken to be complex. We also review rapid training of feedforward networks using prescriptive learning, chaotic dynamics in neuron assemblies, models of attention and awareness, and cytoskeletal microtubule information processing.

Recent discoveries in neuroscience that cannot be placed in the reductionist models of biological information processing are examined. We learn from these studies that not all biological systems can be viewed as connectionist circuits of components or systems; there exist dynamic structures whose definition is, in part, related to the environment and interaction with other, similar systems. Although there are biological structures that are well modeled by artificial neural networks, in general biological systems define a concept of interdependence that is much stronger than the notion of connectionism that has been used in artificial neural networks.
This is based on a recursive relationship between the organism and the environment. We call such interdependence connectionism in its strong sense. One may postulate systems that are equivalent in their connectionist complexity to biological systems.
We define a quantum neural computer as a strongly connectionist system that is nevertheless characterized by a wavefunction. In contrast to a quantum computer, which consists of quantum gates as components, a quantum neural computer consists of a neural network where quantum processes are supported. The neural network is a self-organizing type that becomes a different measuring system based on associations triggered by an external or an internally generated stimulus. We consider some characteristics of a quantum neural computer and show that information is not a locally additive variable in such a computer. Models for a subneuron field that could explain the unity of awareness are reviewed. Uncertainty relations connecting the programs for structure and environment are described. Issues related to the design of intelligent systems, different in their programmatic structure from those found in nature, are discussed.

The chapter argues that quantum neural computing has parallels with brain behavior. Owing to this perspective, a considerable part of this chapter is devoted to pointing out the limitations of the conventional computing paradigm and the characteristics of quantum systems. We adopt the philosophical point of view that quantum reality reveals itself through questions, as embodied by different experimental arrangements. It is natural, then, first to explore the information theoretic issues related to the concept of a quantum neural computer.

II. TURING TEST REVISITED

If current computers are so fundamentally deficient in comparison with biological organisms at certain kinds of tasks, one might ask why the discipline of computer science has not generally recognized it. It appears that a preoccupation with comparisons with human thought distracted researchers from considering a rich set of possibilities regarding gradations of intelligence.
The prestige of the famed test for machine intelligence proposed by Alan Turing (1950), which now bears his name, was partly responsible for such a focus.

A. The Test
The Turing test proposes the following protocol to check if a computer can think: (1) the computer and a human subject are to communicate, in an impersonal fashion, from a remote location with an interrogator; (2) the human subject answers truthfully, while the computer is allowed to lie to try to convince the interrogator that it is the human subject. If in the
QUANTUM NEURAL COMPUTING
course of a series of such tests the interrogator is unable to identify the real human subject in any consistent fashion, then the computer is deemed to have passed the test of being able to think. It is assumed that the computer is so programmed that it is mimicking the abilities of humans. In other words, it is responding in a manner that does not give away the computer's superior performance at repetitive tasks and numerical calculations.

The trouble with the popular interpretation of the Turing test is that it focused attention exclusively on the cognitive abilities of humans. So researchers could always claim to be making progress with respect to the ultimate goal of the program, but there was no means to check if the research was on the right track. In other words, the absence of intermediate signposts made it impossible to determine whether the techniques and philosophy used would eventually allow the Turing test to be passed.

In 1950, when Turing's essay appeared in print, his test embodied the goal for machine intelligence. But a statement of a goal, without a definition of the path that might be taken to reach it, detracts from its usefulness. Had specific tasks, which would have constituted levels of intelligence or thinking below that of a human, been defined, then one would have had a more realistic approach to assessing the progress of AI.

Perhaps this happened because the dominant scientific paradigm in 1950, following old Cartesian ideas, took only humans to be capable of thought. The rich tradition of cognitive philosophy that emerged in India about 4000 years ago (Kak, 1993b), where finer distinctions in the capacity to think were argued, was at that time generally unknown to AI researchers. The dominant intellectual ideas ran counter to the folk wisdom of all traditions regarding animal intelligence.
That Cartesian ideas on thinking and intelligence were wrong has been amply established by the research on subhuman intelligence of the past few decades. Another reason why Turing’s test found a resonance in the intellectual climate of his times was the then prestige of operationalism as a scientific philosophy. According to the operationalist view, a machine is to be seen as having thoughts if its behavior is indistinguishable from that of a human. The ascendancy of operationalism came about from the standard interpretation of quantum mechanics, according to which one could only speak in terms of the observations by different experimental arrangements, and not ask questions about what the underlying reality was.
B. On Animal Intelligence

According to Descartes, animal behavior is a series of unthinking mechanical responses. Such behavior is an automatic response to stimuli that originate
in the animal's internal or external environments. Complex behavior can always be reduced to a configuration of reflexes where thought plays no role. Only humans are capable of thought, since only they have the capacity to learn language.

Recent investigations of subhuman animal intelligence not only contradict Cartesian ideas but also present fascinating riddles. It had long been thought that the cognitive capacities of humans were to be credited in part to the mediating role of the inner linguistic discourse. Terrace (1985) claims that animals do think but cannot master language, so the question arises as to how thinking can be done without language:

Recent attempts to teach apes rudimentary grammatical skills have produced negative results. The basic obstacle appears to be at the level of the individual symbol which, for apes, functions only as a demand. Evidence is lacking that apes can use symbols as names, that is, as a means of simply transmitting information. Even though non-human animals lack linguistic competence, much evidence has recently accumulated that a variety of animals can represent particular features of their environment. What then is the non-verbal nature of animal representations? ... [For example] learning to produce a particular sequence of four elements (colours), pigeons also acquire knowledge about a relation between non-adjacent elements and about the ordinal position of a particular element (Terrace, 1985, p. 113).
Clearly, the performance of animals points to representation of whole patterns that involves discrimination at a variety of levels. In an ingenious series of experiments, Herrnstein and Loveland (1964) were able to elicit responses about concept learning from pigeons. In another experiment, Herrnstein (1985) presented 80 photographic slides of natural scenes to pigeons that were accustomed to pecking at a switch for brief access to feed. The scenes were comparable, but half contained trees and the rest did not. The tree photographs had full views of single and multiple trees as well as obscure and distant views of a variety of types. The slides were shown in no particular order, and the pigeons were rewarded with food if they pecked at the switch in response to a tree slide; otherwise nothing was done. Even before all the slides had been shown, the pigeons were able to discriminate between the tree and the non-tree slides. To show that this ability, impossible for any machine to match, was not somehow learned through the long process of evolution and hardwired into the brain of the pigeons, another experiment was designed to check the discriminating ability of pigeons with respect to fish and non-fish scenes, and once again the birds had no problem doing so. Over the years it has been shown that pigeons can also distinguish: (1) oak leaves from leaves of other trees, (2) scenes with or without bodies of
water, (3) pictures showing a particular person from others with no people or different individuals. Herrnstein (1985) summarizes the evidence thus:

Pigeons and other animals can categorize photographs or drawings as complex as those encountered in ordinary human experience. The fundamental riddle posed by natural categorization is how organisms devoid of language, and presumably also of the associated higher cognitive capacities, can rapidly extract abstract invariances for some (but not all) stimulus classes containing instances so variable that we cannot physically describe either the class rule or the instances, let alone account for the underlying capacity (p. 129).
Among other examples of animal intelligence are mynah birds that can recognize trees or people in pictures, and signal their identification by vocal utterances (words) instead of pecking at buttons (Turney, 1982), and a parrot that can answer, vocally, questions about shapes and colors of objects, even those not seen before (Pepperberg, 1983).

The question of the relationship between intelligence and consciousness may be asked. Griffin infers animal consciousness from a variety of evidence:

1. Versatile adaptability of behavior to novel challenges;
2. Physiological signals from the brain that may be correlated with conscious thinking;
3. Most promising of all, data concerning communicative behavior by which animals sometimes appear to convey to others at least some of their thoughts (Griffin, 1992, p. 27).
Consciousness implies using internal images and reconstructions of the world. Purposive behavior is contingent on these internal representations. These representations may be based on the stimulus from the environment, memories, or anticipation of future events.

C. Gradations of Intelligence
The insight from experiments of animal intelligence, that one can attempt to define different gradations of cognitive function, is a useful one. The theory of evolution not only posits the evolution of human brains but also that of behavior. In this theory one would expect to see different levels of intelligence. It is obvious that animals are not as intelligent as humans; likewise, certain animals appear to be more intelligent than others. For example, pigeons did poorly at picking a pattern against two other identical ones, as in picking an A against two Bs. This is a very simple task for humans. Herrnstein (1985) describes how they seemed to do badly at certain tasks:
1. Pigeons did not do well at the categorization of certain man-made and three-dimensional objects.
2. Pigeons seem to require more information than humans for constructing a three-dimensional image from a plane representation.
3. Pigeons seem to have difficulty with dealing with problems involving classes of classes. Thus they do not do very well with the isolation of a relationship among variables, as against a representation of a set of exemplars.
In a later experiment, Herrnstein et al. (1989) trained pigeons to follow an abstract relational rule by pecking at patterns in which one object was inside rather than outside a closed linear figure. It is to be noted that pigeons and other animals are made to respond in extremely unnatural conditions in Skinner boxes of various kinds. The abilities elicited in research must be taken to be merely suggestive of the intelligence of the animal, and not the limit of it.

Animal intelligence experiments suggest that one can speak of different styles of solving AI problems. Are the cognitive capabilities of pigeons limited because their style has fundamental limitations? Or is the cognitive style of all animals similar, with the differences in their cognitive capabilities arising from the differences in the sizes of their mental hardware? And since current machines do not, and cannot, use inner representations, is it right to conclude that their performance can never match that of animals?

Can one define a hierarchy of computational tasks that could be quantified as varying levels of intelligence? These tasks could be the goals defined in a sequence that could be set for AI research. If the simplest of these tasks proved intractable for the most powerful of computers, then the verdict would be clear that computers are designed based on principles that are deficient compared to the style at the basis of animal intelligence.
D. Animal and Machine Behavior

1. Linguistic Behavior
A classification of computer languages in the 1950s provided an impetus to examine the nature of animal communication. The focus of these studies on primates was not to decode this communication in their natural setting, but rather to see if animals could be taught to communicate with humans. Since the development of language, as part of behavior, is critically a component of social interaction, studies in unnatural laboratory settings are inherently limited.
Many projects have examined the grammatical competence of apes (e.g., Savage-Rumbaugh et al., 1985; Griffin, 1992). This research has clarified questions related to what we understand by language. Responding to signals in a consistent way is not sufficient to demonstrate the linguistic competence of the subject. It is essential for us to know that the subject uses different symbols for naming. Savage-Rumbaugh (1986) argues that, to qualify as a word, a signal must satisfy the following attributes: (1) it must be an arbitrary symbol that stands for some object, activity, or relationship; (2) it must be used intentionally to convey knowledge; and (3) the recipient must be able to decode and respond appropriately to the symbols. Many early ape-language projects did not satisfy all these attributes.

Savage-Rumbaugh et al. (1985) report striking success in the acquisition of symbolic skills on a lexigram keyboard by an ape named Kanzi. How to grade competence in symbolic manipulation remains an interesting problem. If this were possible, one would have the picture of different levels of symbol manipulation leading finally to the development of language. Grammatical competence could likewise be classified in different categories. The relationship of these levels to the ability to solve AI problems could be explored. A hierarchy of intelligence levels will be useful in the classification of animal behavior.
2. Recursive Behavior

Animal behavior is recursive. Life can be seen at various levels, but consciousness is a characteristic of the individual alone. Considering this from the bottom up, animal societies may be viewed as "superorganisms." For example, the ants in an ant colony may be compared to cells, their castes to tissues and organs, the queen and her drones to the generative system, and the exchange of liquid food among the colony members to the circulation of blood and lymph. Furthermore, corresponding to morphogenesis in organisms, the ant colony has sociogenesis, which consists of the processes by which the individuals undergo changes in caste and behavior. Such recursion has been viewed all the way up to the earth itself, seen as a living entity. Parenthetically, it may be asked whether the earth itself, as a living but unconscious organism, may not be viewed like the unconscious brain.

Paralleling this recursion is the individual, who can be viewed as a collection of several "agents," where these agents have subagents, which are the sensory mechanisms, and so on. These agents are bound together, and this binding defines consciousness.
Can an organism as well as a superorganism be simultaneously conscious? It appears that such nested consciousness will be impossible. In multiple-personality disorder, the person's consciousness remains a unity. Such a person might claim that "yesterday I was Alice and today I am Mary," but each time the assertion would be in terms of an indivisible consciousness. In split-brain patients, the minor hemisphere does not have a conscious awareness. In other words, there is no nested consciousness.
3. Logical Tasks

Logical tasks are easy for machines, whereas AI tasks are hard. We have two perspectives. The first is: Why build machines in a hierarchy that mimics nature? After all, wheeled machines do better than animals at locomotion. The second is that there is something fundamental to be gained in building machines that have recursively defined behavior in the manner of life, and the same facility for solving AI problems that subhumans possess. If the basis for animal competence at AI tasks were understood, then new kinds of AI machines could be designed.

Logical tasks are difficult for animals to complete. Human language is more than reflexive behavior, and it has a logical component. A logical structure underpins human language. On the other hand, when apes have been taught associations of symbols with objects, they have found it hard to string these together into rule-based combinations that could be termed language. Although communication among animals seems to be very rich, it is clear that animal communication lacks many features that are present in human language. To see these differences in terms of less developed neural structures would connect neural hardware with specific function.
III. NEURAL NETWORK MODELS
A. The Brain
The central nervous system of vertebrates is organized in an ascending hierarchy of structures: spinal cord, brain stem, and brain. A human brain (Fig. 1) contains about 10^11 neurons. The structure, composition, and functioning of central nervous systems in all vertebrates have general similarity. The peripheral nervous system consists of peripheral nerves and the ganglia of the autonomic nervous system.

At the gross level, the brain is bilaterally symmetric and the two hemispheres are connected together by a large tract of about half a billion axons
FIGURE 1. A midsagittal section of the human brain.
called the corpus callosum. At the base are structures such as the medulla, which regulates autonomic functions, and the cerebellum, which coordinates movement. Within lies the limbic system, a collection of structures involved in emotional behavior, long-term memory, and other functions. The convoluted surface of the cerebral hemispheres is called the cerebral cortex.

The cerebral cortex is a part of the brain that is more developed in humans than in other species. Approximately 70% of all the neurons in the central nervous systems of primates are found in the cortex. This is where the most complex processing and coding of sensory information occurs. The most evolutionarily ancient part of the cortex is part of the limbic system. The larger, younger neocortex is divided into frontal, temporal, parietal, and occipital lobes, which are separated by deep folds. Each of these lobes contains yet further subdivisions.

The parietal, temporal, and occipital lobes receive sensory information such as hearing and vision. These sensory-receiving areas occupy a small portion of each lobe, the remainder being termed the association cortex. Each sensory area of the neocortex receives projections primarily from a single sensory organ. Thus, the visual-receiving area in the occipital lobe processes sensations received by the retina; the auditory-receiving area in the temporal lobe processes sensations received by the cochlea; and the body sense-receiving area in the parietal lobe processes sensations received by the body surface. Within each sensory-receiving area, the sense-organ projections form a map of the sensory organ on the neocortical surface.
B. Neurons and Synapses

The nervous tissue is made up of two types of cells, neurons and satellite cells. In the central nervous system the satellite cells are called neuroglia, and
FIGURE 2. An idealized neuron.
in the periphery they are called Schwann cells. The satellite cells provide insulating myelin sheathing around the neurons. This insulation increases the propagation velocity of the electrical signals.

A neuron consists of treelike networks of nerve fiber called dendrites, the cell body (or soma), and a single long fiber called the axon, which branches into strands and substrands (Fig. 2). At the end of these substrands are the axon terminals, the transmitting ends of the synaptic junctions, or synapses (Fig. 3). The receiving ends of these junctions can be found both on the dendrites and on the cell bodies themselves. The axon of a typical neuron makes about a thousand synapses with other neurons.

FIGURE 3. The synapse.

It may be argued that brain computations must really be seen to take place in the synapses rather than in neurons. This change in point of view has important philosophical implications. Not only are there more synapses than neurons (about 10^15), but this changed perspective can be taken to mean a view of the brain as a chemical computer rather than an electrical digital system. This is an issue that will not be explored in this chapter.

Two major types of synapse are electrical and chemical. In electrical synapses the channels formed by proteins in the presynaptic membrane are physically continuous with the similar channels in the postsynaptic membrane, or the synaptic gap is only about 2 nm, allowing the two cells to share some cytoplasm constituents and electrical currents. At chemical synapses, the gap of about 20-30 nm between the presynaptic and postsynaptic membranes prevents a direct flow of current. The signal is now transmitted to the next neuron through an intermediary chemical process in which neurotransmitter substances are released from spherical vesicles in the synaptic bouton. This raises or lowers the electrical potential inside the body of the receiving cell. When this potential reaches a threshold, a pulse or action potential of fixed strength and duration is sent down the axon. This constitutes the firing of the cell. After firing, the cell cannot fire again until the end of the refractory period.

Ionic currents flowing across surface membranes of nerve cells lead to electrical potential changes. Ions such as sodium, potassium, calcium, and chloride participate in this process. The nerve maintains an electrical polarization across its membrane by actively pumping sodium ions out and potassium ions in. When ion channels are opened by voltage changes, neurotransmitters, drugs, and so on, sodium and potassium flow rapidly through the channel, creating a depolarization. Action potentials occur on an all-or-none basis from integration of dendritic input at the cell body region of the neuron. The frequency of firing is related to the intensity of the stimulus. Action potential velocity is fixed for given axons depending on axon diameter, degree of myelinization, and distribution of ion channels. A typical value of action potential velocity is 100 m/s.
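The firing behavior described above (threshold, all-or-none spike, refractory period, and rate coding of stimulus intensity) can be sketched with a toy integrate-and-fire model; all constants here are illustrative, not physiological values:

```python
# Toy integrate-and-fire neuron: inputs raise the membrane potential;
# crossing the threshold emits an all-or-none "spike" of fixed strength,
# and the cell then ignores input for a refractory period.
# All constants are illustrative, not physiological measurements.

def run_neuron(inputs, threshold=1.0, leak=0.9, refractory=2):
    potential = 0.0
    cooldown = 0          # time steps left in the refractory period
    spikes = []
    for t, current in enumerate(inputs):
        if cooldown > 0:  # cell cannot fire again until recovery
            cooldown -= 1
            continue
        potential = leak * potential + current
        if potential >= threshold:
            spikes.append(t)       # all-or-none spike
            potential = 0.0        # reset after firing
            cooldown = refractory
    return spikes

# A steady stimulus produces a regular spike train; a stronger stimulus
# produces a higher firing rate, i.e., rate codes stimulus intensity.
weak = run_neuron([0.3] * 30)
strong = run_neuron([0.6] * 30)
print(len(weak), len(strong))
```

A stronger input drives the potential to threshold in fewer steps, so the spike count over the same window is higher, mirroring the statement that firing frequency is related to stimulus intensity.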
Neurons come in a great variety of structures. The diversity is even greater if molecular differences are considered. Although all cells contain the same set of genes, individual cells exhibit, or activate, only a small subset of these genes. Selective gene expression has been found even in seemingly homogeneous populations of neurons. The evidence does not support the thesis that each neuron is unique, but certainly the brain cannot be viewed as a collection of identical elements.

An order is imposed on the enormous diversity of the neurons and their connection patterns by the nested arrangement of certain circuits and subcircuits (Mountcastle, 1978; Sutton et al., 1988). For example, neurons of similar functions are grouped together in columns that extend through the thickness of the cortex. A typical module in the visual cortex could contain more than 100,000 cells, the great majority of which form local circuits devoted to a particular function.

1. The McCulloch-Pitts Model
McCulloch and Pitts (1943) proposed a binary threshold model for the neuron. The model neuron computes a weighted sum of its inputs from other neurons and outputs a 1 or a 0 according to whether this sum is above or below a certain threshold. The updating of the states of a network of such neurons was done according to a clock.

Although the McCulloch-Pitts (MP) neurons do not model the biological reality well, networks built up of such neurons can perform useful computations. Variants of the MP neurons have graded response; this is achieved if the neuron performs a sigmoidal transformation on the sum of the input data. However, networks of such graded neurons do not provide any fundamental computing advantage over binary neurons. Networks of MP-like neurons have been used extensively to model brain circuits, in the hope that these networks will somehow capture the essentials of the processing done in the brain. Various kinds of networks have been used in these models.

C. Feedback and Feedforward Networks
A broad distinction may be made between feedback and feedforward neural networks. In feedback networks the computation is taken to be over when the network has reached a stable state. The updating for the network is represented by the rule

X_new = f(X_old)

where f represents the concatenation of the transformation by the synaptic
interconnection weight matrix and the sign or the sigmoidal function. This updating occurs either synchronously or asynchronously. The operation of such a network in the asynchronous mode may be viewed as a descent to a minimum energy value in the state space. Such minima can thus represent stored memories in a feedback model. Learning shapes attraction areas around exemplars. The notion of the attraction basin associated with each energy minimum provides the model for generalization. The major limitation of the feedback model is that, along with the useful memories, a very large number of spurious memories are automatically generated.

Feedforward networks perform mapping from an input to an output. Once the training input-output pairs have been learned to define certain classes, this network also has the capacity to generalize. This means that neighbors of the training inputs are automatically classified. Although feedforward networks are quite popular as far as signal-processing applications are concerned, they suffer from inherent limitations. This is owing to the fact that such networks perform no more than mapping transformations or signal classification. Feedback networks, on the other hand, offer greater parallels with biological memory, and biological structures, such as the vision system, have aspects that include feedback. It is for this reason that recurrent networks, which are feedforward networks with global feedback, are being increasingly studied.

The neuron model that we examine in this section is the on-off or the bipolar MP model. This model and its variant, where the neuron output can be any analog value over a range, have generally been used for signal-processing artificial neural networks. However, biological neurons exhibit extremely complex behavior, and on-off-type neurons are a gross simplification of the biological reality (Libet, 1986).
For example, neuron output is in terms of spikes; this output cannot be taken to be held to any fixed value. There is, furthermore, the question of the refractory phase of a neuron's response. If one considers waves of neural activity, then a continuum model may be called for (Milton et al., 1994). Neuron outputs may also be considered to arise from the partial response of other neurons. In addition, there exist neurons that remain quiescent in a computation. We look at some of these issues and investigate how the bipolar model can be made more realistic.

In many artificial intelligence problems, such as those of speech or image understanding, local operators can be properly designed only if some understanding of the global information is available. Networks of MP neurons do not provide any choice as far as resolution of the data is concerned. A generalization of the MP model of neurons appears essential in order to account for certain aspects of distributed information processing in biological systems. One particular generalization allows one to deal with some recent findings of Optican and Richmond (1987), which indicate that in neuron activity the spike rate as well as the spike distribution carry information. This may be represented by complex-valued neurons with a relationship between the real and imaginary activations; this can form the first step in the storage of patterns together with their corresponding context clues.

D. Iterative Transformations
Consider a fully connected neural network, composed of n neurons, with a symmetric synaptic interconnection matrix T_ij, where T_ii = 0. The matrix T is obtained by Hebbian learning, which is an outer-product learning rule, or by its generalization called the delta rule. In Hebbian learning,

T_ij = Σ_{k=1}^{l} x_i^k x_j^k

where l is the number of memories, and x^k represents the kth memory. Let x_i be the output of neuron i. The updated value of this neuron is

x_i^new = f(Σ_j T_ij x_j)
where f is a sigmoid or a step function. Without loss of generality, we will now confine ourselves to the sign function (sgn) for f, and we will use the convention that sgn(0) = 1. Also note that the updating of the neurons may be done in any random order, in other words, asynchronously.

The characteristics of such a feedback network have parallels with those of one-variable iterative systems. Consider the equation g(x) = 0, the solutions for x defining the roots of the equation. Let x_0 be a point near a root. Then one may expand g(x) in a Taylor series so that

g(x) = g(x_0) + (x - x_0) g'(x_0) + ((x - x_0)^2/2!) g''(x_0) + ...
For values of x near x_0, (x - x_0) will be small. Assuming that g'(x_0) is large compared to (x - x_0), and that g''(x_0) and the higher derivatives are not unduly large, the above equation may be rewritten as a first approximation:

g(x) = g(x_0) + (x - x_0) g'(x_0)
Since we seek x such that g(x) = 0, we can simplify this equation as

x = x_0 - g(x_0)/g'(x_0)
The new x will be closer to the root than x_0, and this forms the basis of the Newton-Raphson iteration formula:

x_{n+1} = x_n - g(x_n)/g'(x_n)
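A minimal sketch of this iteration, using the illustrative polynomial g(x) = x^2 - 4 (my own example, not from the text), shows that starting points on either side of zero converge to different roots, that is, they lie in different attraction basins:

```python
# Newton-Raphson iteration x_{n+1} = x_n - g(x_n)/g'(x_n), sketched for
# the illustrative polynomial g(x) = x^2 - 4, with roots at +2 and -2.
# Starting points on either side of 0 fall into different attraction basins.

def newton(g, dg, x0, steps=50):
    x = x0
    for _ in range(steps):
        x = x - g(x) / dg(x)
    return x

g = lambda x: x * x - 4.0
dg = lambda x: 2.0 * x

root_a = newton(g, dg, 3.0)    # starts in the basin of +2
root_b = newton(g, dg, -0.5)   # starts in the basin of -2
print(round(root_a, 6), round(root_b, 6))
```

The quadratic convergence of the iteration means a few dozen steps suffice for machine precision from any starting point that avoids g'(x) = 0.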
If the equation g(x) = 0 has several solutions, then the variable x will be divided into the same number of ranges. Starting in a specific range would take one to the corresponding solution. Each range can therefore be seen as an attraction basin. This may be generalized by the consideration of the transformation on the complex plane. For an analytic function one can use the Taylor series to give

z_{n+1} = z_n - g(z_n)/g'(z_n)
For g(z), an nth-degree polynomial, one would see n attraction basins. A good example is g(z) = z^3 - 1 = 0, which leads to the iterative mapping

z_{n+1} = (2 z_n^3 + 1)/(3 z_n^2)
This has three attraction basins. The boundaries of the basins are fractals. Das (1994) has examined how neural networks that are a generalization of the above idea can be designed. We see that for an iterative transformation the concepts of attractors and attraction basins are defined similarly to those for the feedback neural network. However, we do not obtain a regular structure of connected points defining an attraction basin. Moreover, the nature of the attraction basins is dependent on the iterative transformation, although the underlying polynomial may be the same.

1. Neuron Models and Complex Neurons
Biological neurons are characterized by extremely complex behavior expressed in a distribution of voltage and time dependence of current. Different types of membrane currents may be involved. Perhaps the simplest model is the classic Hodgkin-Huxley model of the squid giant axon. It has three main characteristics: (1) an action potential, (2) a refractory period after an action potential during which the slow recovery of the potassium and sodium conductance has a great impact on electrical properties, and (3) the ability to capacitively integrate incoming current pulses. The model is described by four first-order differential equations in terms of four time-dependent dynamical variables. It has been suggested that a further approximation in terms of two dimensions is reasonably good. This description shows that the MP binary or bipolar neurons do not capture the intricacies of even the simple Hodgkin-Huxley model.

Considering the problem at a gross level, we mentioned that neurons carry information in the distribution as well as the number of spikes in a response (Optican and Richmond, 1987). The nature of the relationship between these two types of information is not clear. In the most general case one may take these two channels of information to provide us with independent information, although they may actually be connected through some transformation. On the other hand, from an analytical perspective related to information processing, it is useful to see local/global information linkage whereby local information features are linked into global concepts. This may occur through stages of transformation so that information is delivered to neurons that deal directly with global concepts.

One may use a straightforward generalization of the standard neuron model to include complex sequences where the real and the imaginary activations are related to each other. It may be assumed that the usual observations, being based on time-averaged behavior, ignore the information in the distribution of the spikes. One might further assume that the real activation is related to the spike rate and the imaginary activation expresses the information in the spike distribution. In the general case the real and the imaginary quantities are connected to each other through a transformation.
Note that each neuron in the standard network model does receive information from all other neurons, and therefore the potential exists for the delivery of global information at each neuron. However, this information is presented in a mixed fashion, from which it may not be retrievable. It is for this reason that it is useful to postulate an explicit transformation. Let the stored memories be represented by complex vectors z_j = x_j + iy_j, where the real and the imaginary components are related through a transformation g. The transformation g would in general be nonlinear. As a starting point, this transformation may be taken to be a Fourier transform followed by a saturation function. Transformations such as Fourier decorrelate redundant patterns. This means that far fewer components of the pattern in the transformed domain are necessary to represent most of the pattern information.
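The decorrelating property of the Fourier transform can be checked numerically. The sketch below is an added illustration (the test pattern is an arbitrary smooth signal of my own choosing): it measures how much of a redundant pattern's energy is captured by its few largest Fourier coefficients.

```python
import numpy as np

n = 256
t = np.arange(n)
# a redundant (smooth, correlated) pattern: a few slow oscillations plus a trend
pattern = np.sin(2 * np.pi * 3 * t / n) + 0.5 * np.cos(2 * np.pi * 7 * t / n) + t / n

coeffs = np.fft.fft(pattern)
energy = np.abs(coeffs) ** 2

top8 = np.sort(energy)[::-1][:8].sum()   # energy in the 8 largest coefficients
fraction = top8 / energy.sum()           # close to 1 for redundant patterns
```

For a pattern like this, a handful of transform-domain components carries well over 90% of the energy, which is the sense in which "far fewer components" suffice after the transformation.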
QUANTUM NEURAL COMPUTING
SUBHASH C. KAK
For ease of illustration, one may consider bipolar neurons. The Hebbian learning rule for the interconnection weight matrix T can be stated to have the form

T = Σ_j z_j (z_j*)^T

where the sum runs over the stored memories z_j, the asterisk denotes complex conjugation, and the superscript T denotes transposition. The neural network operates in the usual mode, Z_new = sgn{T Z_old}. The only difference from the standard Hebbian rule is that the column memory vector is multiplied by its complex conjugate transpose. Also, if the staggering is done correctly for the patterns in a set, Hebbian learning will automatically associate the imaginary components of these patterns in a concatenated form.

2. Chaotic Maps on the Complex Plane

The dynamics of a synchronous feedback network are represented by X_new = f(X_old), where f is a linear map followed by a sigmoidal function and X is an n-component vector. In a synchronous update one can encounter oscillatory behavior. When the components of the X vector are continuous, the behavior can be chaotic. This will be seen in the context of a specific nonlinear transformation below. Consider z_{n+1} = f(z_n), an iterative transformation on the complex plane. Such an iterative transformation is best considered for points on the unit circle, because for |z| < 1 the iterates would end up at 0, whereas for |z| > 1 they would end up at ∞. The set of points on the unit circle is the Julia set of the map. Consider now the map
z_{n+1} = z_n^2

for the unit circle. This corresponds to

z_{n+1} = x_{n+1} + iy_{n+1} = x_n^2 − y_n^2 + i2x_n y_n

Since x_n^2 + y_n^2 = 1, we obtain a pair of transformations for the x and y variables:

x_{n+1} = 2x_n^2 − 1,   −1 ≤ x_n ≤ 1

and

y_{n+1}^2 = 4y_n^2(1 − y_n^2)

Let us define another variable r_n = y_n^2; then the above equation can be rewritten as

r_{n+1} = 4r_n(1 − r_n),   0 ≤ r_n ≤ 1
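This reduction is easy to verify numerically. The following sketch (an added check, with an arbitrarily chosen starting angle) iterates z → z^2 on the unit circle and confirms that r_n = y_n^2 follows the logistic iteration r → 4r(1 − r):

```python
import cmath

theta0 = 0.3                      # arbitrary, illustrative starting angle
z = cmath.exp(1j * theta0)        # a point on the unit circle
r = z.imag ** 2                   # r_0 = y_0^2

for n in range(20):
    z = z * z                     # the map z_{n+1} = z_n^2
    r = 4 * r * (1 - r)           # the induced logistic map r_{n+1} = 4 r_n (1 - r_n)
    assert abs(abs(z) - 1) < 1e-9         # iterates stay on the unit circle
    assert abs(z.imag ** 2 - r) < 1e-6    # y_n^2 tracks the logistic iterate
```

The two sequences agree to high precision over the first iterations; beyond a few dozen steps, floating-point error is itself amplified by the chaotic dynamics.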
The map obtained above is the well-known logistic map. If r_0 is an irrational point, then the iterative transformation will never return to its starting point. In other words, the logistic map will generate an infinite-period sequence, that is, a chaotic sequence. The same is also true of the other maps listed above. To see this, write z_n = |z_n| exp(iθ_n). It is clear that the map multiplies the angle θ by 2 at each iteration. Now, if the starting angle θ_0 is represented in binary form, then each step shifts the expansion one bit to the left. Therefore, if the original angle is represented by an infinite expansion (as for an irrational number), the iterative transformation will lead to chaotic behavior. It is also clear from the above that the two-dimensional structure of the Julia sets merely expresses the randomness properties, at different scales, of the underlying irrational number. One may consider different generalizations of the map on the unit circle. Thus one may consider
a pair of transformations for the x and y variables with parameters α, β, and λ. When α = β = 1 and λ = 4, the logistic map is recovered, and when only α = β = 1, one obtains

r_{n+1} = λr_n(1 − r_n)

To determine the stability of a fixed point, one looks at the slope |f′(r)| evaluated at the fixed point; the fixed point is unstable if |f′| > 1. For this logistic function, when 1 < λ < 3 there are two fixed points, r = 0 and r = (λ − 1)/λ, of which r = 0 is unstable and the other point is stable. For λ > 3 the slope at r = (λ − 1)/λ, which is f′ = 2 − λ, exceeds 1 in magnitude, and both equilibrium points become unstable. At λ = 3 the steady solution becomes unstable and a two-cycle orbit becomes stable. For 3 < λ < 4 the map shows many multiple-period and chaotic motions. The first chaotic motion is seen for λ = 3.56994.... Before the onset of chaos at this value, if the nth period doubling occurs at λ_n, we get the well-known result that in the limit

(λ_n − λ_{n−1})/(λ_{n+1} − λ_n) → 4.6692016...

which is the Feigenbaum number (Feigenbaum, 1978). This represents an example of the period-doubling route to chaos. There are other routes to chaos, such as quasi-periodicity and intermittency, that do not concern us in the present discussion.
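The stability claims above can be checked directly; the values of λ below are standard illustrative choices.

```python
def logistic(lam, r, n):
    """Iterate r -> lam * r * (1 - r) n times."""
    for _ in range(n):
        r = lam * r * (1 - r)
    return r

# 1 < lam < 3: iterates converge to the stable fixed point (lam - 1) / lam
lam = 2.5
r_star = (lam - 1) / lam                      # = 0.6, slope f' = 2 - lam = -0.5
assert abs(logistic(lam, 0.3, 500) - r_star) < 1e-9

# lam = 3.2 > 3: the fixed point is unstable and a two-cycle is stable
lam = 3.2
r = logistic(lam, 0.3, 1000)                  # settle onto the attractor
assert abs(logistic(lam, r, 2) - r) < 1e-6    # period 2 ...
assert abs(logistic(lam, r, 1) - r) > 0.1     # ... but not period 1
```

For λ = 2.5 the orbit converges to r = 0.6 as predicted, and just past λ = 3 it alternates between two values, the first step of the period-doubling cascade.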
Considering that chaos in the brain arises from a similar mechanism, one might ask whether the universality of the route to chaos provides a certain normative basis.

3. Implication

The study of complex-valued neurons and iterative transformations shows that these alone are not capable of defining a rich enough framework in which cognitive functions can be understood. Thus a complex network may be replaced by four real ones, and so nothing fundamental is gained by such a generalization. Likewise, the transition from classical to quantum mechanics is much more than a generalization from scalar to complex representations.
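The observation that a complex network may be replaced by real ones can be made concrete: a complex weight matrix W = A + iB acting on z = x + iy is equivalent to a real block matrix built from the four real matrices A, −B, B, A acting on the stacked vector (x, y). A small added check (with arbitrary random matrices):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 4
A, B = rng.standard_normal((n, n)), rng.standard_normal((n, n))
x, y = rng.standard_normal(n), rng.standard_normal(n)

W = A + 1j * B                    # complex weight matrix
z = x + 1j * y                    # complex activation vector

# equivalent real system: (A + iB)(x + iy) = (Ax - By) + i(Bx + Ay)
block = np.block([[A, -B], [B, A]])
real_out = block @ np.concatenate([x, y])

complex_out = W @ z
assert np.allclose(real_out[:n], complex_out.real)
assert np.allclose(real_out[n:], complex_out.imag)
```

Since the real block system reproduces the complex one exactly, complexification by itself adds no computational power, which is the point made above.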
4. Performance of Interacting Units

We now consider the question of a computation being carried out by a neural system of which a feedback network is a part. A network that deals with nontrivial computing tasks will be composed of several decision structures. If the neural network computation can be expressed in terms of a straightforward sequential algorithm, then it is inappropriate to call the computation style neural. We may call neurons that perform decision computations cognitive neurons. These cognitive neurons may take advantage of information stored in specialized networks, some of which perform logical processing. There would be other neurons that simply compute without cognitive function. Such cognitive neurons could lie at the end of the brain's many convergence regions. It is essential to assume that many cognitive neurons compute in parallel for a computation to be neural. It was shown by Kak (1993b) that independent cognitive centers will lead to competing behavior. In the cortex, certain cells seem to be responsive to specific inputs. In other words, certain neurons process global information. Thus Hubel and Wiesel (1962) have shown that certain cells in the visual cortex of a cat are responsive to lines in the visual field which have a particular angle of slope. Other cells respond to specific colors, and others respond to the differences between what each eye has received, leading to depth perception. One might assume that neurons are organized hierarchically, with the higher-level neurons dealing with more abstract information. Can it be assumed that the two different kinds of neurons do interact with each other? Does the information processing of such a network have special characteristics or limitations? Another interesting phenomenon is where damage to parts of the visual cortex makes a person blind in the corresponding part of the visual field. However, experiments have shown that often such subjects are able to
“guess” objects placed in this region with accuracy, even though they are blind in that region. The information from the retina is also processed by obscure regions lying in the lower temporal lobe. Clearly, these independent cognitive centers supply information to the subject's consciousness, but this is done without direct awareness.
Consider two nonlinearly interacting cognitive centers A and B, each with three courses of action labeled 1, 2, and 3, with energies (−1, 2, 1) and (3, 5, −1), respectively. If the nonlinear interaction is represented by multiplication at the systems level, then the system states have the energies shown in the following table:

Energy    A1     A2     A3
B1        −3      6      3
B2        −5     10      5
B3         1     −2     −1

The optimal system performance is represented by A1B2, which corresponds to the minimum energy of −5. However, considering their individual energy distributions, the two centers would choose A1 and B3. This is an indication that an energy function cannot be associated with a neural network that performs logical processing. If there is no energy function, how do the neurons operate cooperatively? Neural network models do not deal with or describe inner representations, so from our experience with animal intelligence (Section II.B), it is clear their behavior cannot be purposive. Current theory does not show how such inner representations could be developed.
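The table, and the conflict between joint and individual optimization, can be reproduced in a few lines (an added illustration of the multiplicative interaction):

```python
import numpy as np

e_a = np.array([-1, 2, 1])   # energies of A's three courses of action
e_b = np.array([3, 5, -1])   # energies of B's three courses of action

# multiplicative interaction at the systems level: rows B1..B3, columns A1..A3
joint = np.outer(e_b, e_a)

# globally optimal joint choice: minimum of the joint energy table
b_opt, a_opt = np.unravel_index(joint.argmin(), joint.shape)

# greedy individual choices: each center minimizes its own energy
a_greedy, b_greedy = e_a.argmin(), e_b.argmin()
```

The joint minimum is −5 at (A1, B2), whereas the centers acting individually pick A1 and B3, landing the system at energy 1: the individually rational choices miss the global optimum.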
IV. THE BINDING PROBLEM AND LEARNING

A. The Binding Problem

How is a stimulus recognized if it belongs to a class that has been learned before? If the stimulus triggers the firing of certain neurons, how is the firing information “gathered” together to lead to the particular perception? Considering visual perception, there exists an unlimited number of objects that one is capable of seeing. Early theories postulated “grandmother neurons,” one for each image; this concept leads to a homunculus, the person's representation inside the brain which observes and controls, and also represents the self. The postulation of grandmother neurons leads to logical problems:
How can we have a number that would exhaust all possible objects, and how does the homunculus organize all the sensory input that is received? Thus grandmother neurons cannot exist, although there might exist some specialized structures for certain features. If a large number of neurons fire in response to an image, the problem of how this set fires in unison, defining the perception of the image, is called the binding problem. No satisfactory solution to the binding problem is known at this time. In reality, the problem is much worse than the binding problem as stated above. The bound sets are in turn bound at a higher level of abstraction to define deeper relationships. The reductionist approach seeks explanations of brain behavior in terms of a sum of the behaviors of lower-level elements. Brain behavior has traditionally been examined at three different levels of hierarchy: (1) the synaptic level, in terms of biochemical changes; (2) the neuron level, as in maps of the retina in the visual cortex; and (3) that of neural networks, where the information is distributed over entire neuron cell assemblies. However, the explanation of brain function in terms of behavior at these levels has proved to be inadequate. Skarda and Freeman (1990) have argued that while reductionist models have an important function, they suffer from severe limitations. They claim that “perceptual processing is not a passive process of reaction, like a reflex, in which whatever hits the receptors is registered inside the brain. Perception does not begin with causal impact on receptors; it begins within the organism with internally generated (self-organized) neural activity that, by reafference, lays the ground for processing of future receptor input.... Perception is a self-organized dynamic process of interchange inaugurated by the brain in which the brain fails to respond to irrelevant input, opens itself to the input it accepts, reorganizes itself, and then reaches out to change its input” (p. 279). In other words, neural networks perform a perceptual processing function within a holistic framework that defines the self-organizing principle.

B. Learning

1. The Biological Basis
Two types of learning or memories have been described by researchers. Memories that require a conscious record have been termed declarative or explicit. Memories where conscious participation is not needed are called nondeclarative or implicit. It is not known whether other types of memories exist.
Explicit learning is fast and may take place after only one training trial. It involves associations of simultaneous stimuli and defines memory of single events. In contrast, implicit learning is slow and requires repetition over many trials. It often connects sequential stimuli in a temporal sequence. Implicit learning is expressed by improved performance on certain tasks without the subject being able to describe what he or she has learned. Experiments indicate that the storage of both these memory types proceeds in stages. “Storage of the initial information, a type of short-term memory, lasts minutes to hours and involves changes in the strength of existing synaptic connections. The long-term changes (those that persist for weeks and months) are stored at the same site, but they require something entirely new: the activation of genes, the expression of new proteins and the growth of new connections” (Kandel and Hawkins, 1992, p. 86). Since gene activation at neuron sites, and the reorganization of the brain, are two of the consequences of learning, the question arises: Do we perceive as we do as a consequence of the genetic potential within us via its embodiment in the organization of the brain? In other words, is learning a part of the response to the universe? If the ape, or the pigeon, does not have the appropriate genes to activate specific development and reorganization of the brain, it will not respond to certain patterns in its environment. Hebb’s rule that coincident activity in the presynaptic and postsynaptic neurons strengthens the connection between them is one learning mechanism. According to another rule, called the premodulatory associative mechanism, the synaptic connection can be strengthened without activity of the postsynaptic cell when a third neuron, called a modulatory neuron, acts on the presynaptic neuron. 
The associations are formed when action potentials in the presynaptic cell are coincident with action potentials in the modulatory neuron.

2. Prescriptive Learning

Although neural networks and other AI techniques provide good generalization in many signal and image recognition applications, these methods are nowhere close to matching the performance of biological neural systems. As explained in Section II, pigeons, parrots, and other animals can quite easily perform abstract generalization, recognizing abstract classes such as those of a tree, a human, and so on, which lies completely beyond the capability of any AI machine. Are the learning strategies currently employed fundamentally deficient? One would expect that biological systems, in their evolution through millions of years, would have evolved learning mechanisms that are optimal in some sense. This section examines biological learning and shows that artificial neural networks do not learn in the same manner
as animals. We claim that a two-step learning scheme, where the first step is a prescriptive design, offers a more versatile model for machine learning. Our starting point is developmental neurobiology. During embryogenesis, several kinds of cell death occur. One is histogenetic cell death, the death of neurons that have ceased proliferation and have begun to differentiate; this is what is of relevance for our discussion. One may also speak of phylogenetic cell death, responsible for the regression of vestigial organs or of larval organs during metamorphosis (such as the tadpole tail), and morphogenetic cell death, which involves degeneration of cells during massive changes in shape, as in the bending and folding of tissues and organs during early embryogenesis. According to Oppenheim (1981): “The neurons that exhibit cell death include such a wide variety of cells...that it is impossible at this time to argue that any particular feature is characteristic of all neurons exhibiting cell death. Although it has been suggested that cell death may only occur among neurons that have greatly restricted or limited numbers of synaptic targets, the variety of cell types [that exhibit cell death] would appear to contradict such a suggestion” (p. 114). Soon after the publication of Darwin's The Origin of Species, it was suggested by T. H. Huxley, G. S. Lewes, and Wilhelm Roux that Darwinian ideas of natural selection might apply to the cells and tissues of the developing organism. The Spanish neuroanatomist Ramón y Cajal also suggested in 1919 that “during neurogenesis there is a kind of competitive struggle among the outgrowth (and perhaps even among the nerve cells) for space and nutrition. The victorious neurons... would have the privilege of attaining the adult phase and of developing stable dynamic connections” (Ramón y Cajal, 1929, p. 400). None of these early pioneers, however, anticipated the possibility of massive neuronal death during normal development, which was first reported by Hamburger and Levi-Montalcini (1949). We suggest that cell death is a result of many neurons being rendered redundant during a stage of reorganization that follows the first phase of development of the system. During the first phase all the neurons fulfil a specific function; this is followed by a structural reorganization. This two-step strategy can be used to devise a new approach to machine learning.

3. Reorganization
The two-step learning scheme allows us to view the developmental process in a different light. The first step consists of prescriptive learning, which defines the layout and the synaptic weights of the interconnections. This may be called the biologically programmed component of learning.
The nature of the prescriptive learning requires many more neurons than are necessary in a more efficiently organized network. During the second stage of learning a reorganization of the network takes place that renders a large number of neurons redundant. These are the neurons that suffer death, as they do not receive appropriate signals from target neurons (Purves, 1988). In a more general setting this new learning strategy may be termed a structured approach. One may begin with a general kind of architecture for the intelligent machine, which is refined during a subsequent phase of training.

4. The Example of Feedforward Neural Networks

Backpropagation is the commonly used method for training feedforward networks, but it does not explicitly use any specific architecture related to the nature of the mapping. It has recently been shown that a prescriptive learning scheme can be devised (Kak, 1993a, 1994a). In its basic form this does not require any training whatsoever, but this form does not generalize. If the weights are then changed slightly by adding random noise, the network is able to generalize. Characteristics of this model are summarized in the work of Madineni (1994) and Raina (1994). This variant scheme provides significant learning generalization and a speed-up advantage of 100 over the fastest versions of backpropagation. The penalty to be paid for this speed-up is the much larger number of hidden neurons that are required. The second step in this training would be to reorganize the architecture obtained using prescriptive learning and reduce the number of hidden neurons. The speed of learning, and the simplicity of this procedure, make the new model a plausible mechanism for biological learning in certain structures. Interesting issues that remain to be investigated include further development of this approach so that one can obtain continuous-valued outputs, and the development of competitive learning.
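The flavor of prescriptive learning can be conveyed with a toy scheme. This is not Kak's actual algorithm: it is my own simplified sketch in the same spirit, in which each bipolar training pattern is assigned, with no iterative training at all, a hidden unit whose weights equal the pattern itself; classification then returns the label of the most strongly activated hidden unit (nearest-pattern matching stands in here for the noise-based generalization described above).

```python
import numpy as np

def prescribe(patterns, labels):
    """'Training' is a prescription: one hidden unit per pattern, weights = pattern."""
    return np.array(patterns), np.array(labels)

def classify(weights, labels, x):
    """For bipolar vectors, a unit's activation w.x equals n - 2*(Hamming distance),
    so the most active hidden unit is the nearest stored pattern."""
    return labels[np.argmax(weights @ x)]

# bipolar (+1/-1) training patterns with class labels (illustrative data)
patterns = [[1, 1, 1, -1, -1, -1],
            [-1, -1, -1, 1, 1, 1]]
labels = [0, 1]
W, L = prescribe(patterns, labels)

assert classify(W, L, np.array(patterns[0])) == 0   # exact recall, no training loop
noisy = np.array([1, 1, -1, -1, -1, -1])            # pattern 0 with one bit flipped
assert classify(W, L, noisy) == 0                   # generalizes to nearby inputs
```

As in the prescriptive scheme discussed above, the price of instant learning is one hidden unit per training pattern; a second, reorganization step would prune this excess.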
However, we cannot expect such models to provide any insight regarding holistic behavior.

C. Chaotic Dynamics

Freeman and his associates (e.g., Freeman, 1992, 1993; Freeman and Jakubith, 1993) have argued that neuron populations are predisposed to instability and bifurcations that depend on external input and internal parameters. Freeman (1992) claims that chaotic dynamics makes it “possible
for microscopic sensory input that is received by the cortex to control the macroscopic activity that constitutes cortical output, owing to the selective sensitivity of chaotic systems to small fluctuations and their capacity for rapid state transitions” (p. 452). The significance of this work is to point out the gulf that separates simple input-output mapping networks used in engineering research from the complexity of biological reality. But the claim that chaotic dynamics in themselves somehow carry the potential to explain macroscopic cortical activity relating to the binding problem seems to be without foundation. The nonlinearities underlying chaos have been seen as the basis of a self-organizing principle. Considering these ideas for olfaction in the rabbit, Freeman (1993) says: “The olfactory system maintains a global chaotic attractor with multiple wings or side lobes, one for each odor that a subject has learned to discriminate. Each lobe is formed by a bifurcation during learning, which changes the entire structure of the attractor, including the pre-existing lobes and their modes of access through basins of attraction. During an act of perception the act of sampling a stimulus destabilizes the olfactory bulbar mechanism, drives it from the core of its basal chaotic attractor by a state transition, and constrains it into a lobe that is selected by the stimulus for as long as the stimulus lasts, on the order of a tenth of a second.... In this way the cortical response to a stimulus is ‘newly constructed’... rather than retrieved from a dead store” (p. 507). The above theory is very attractive for the olfactory system, but it remains very limited in its scope. It is hard to see how it could be generalized to more complex information processing.
D. Attention, Awareness, Consciousness

Milner (1974) speculated that neurons responding to a “figure” fire synchronously in time, whereas neurons responding to the background fire randomly. More recently, von der Malsburg and Schneider (1986) have proposed a correlation theory to explain the temporal segregation of patterns. Crick and Koch (1990) have considered the problem of visual awareness. They distinguish between two kinds of memory: “very-short-term” or “iconic,” and a slower “short-term” or working memory. Iconic memory appears to involve visual primitives, such as orientation and movement, and it appears to last half a second or less. On the other hand, short-term or working memory lasts for a few seconds, and it deals with more abstract representations. This memory also has a limited capacity, which has been claimed to be about seven items. For the “short-term” memory, Crick and Koch postulate an attentional mechanism that transiently binds together all those neurons whose activity
relates to the different features of the visual object. They argue that semisynchronous coherent oscillations in the 40- to 70-Hz range are an expression of this mechanism. These oscillations are sensory-evoked, in response to auditory or visual stimuli. Postulating such a mechanism, however, merely trades one problem for another: how the neurons that participate in these oscillations are bound is still not explained. This theory is an extension of the earlier “searchlight” hypothesis relating to awareness. It shifts the basis of awareness to a more abstract mechanism of attention (Koch, 1993). This work has led to a reexamination of the question of what may be taken to define the loss of consciousness. In the standard view, anesthesia leads to four distinct effects: motionlessness in the face of surgery; attenuation or abolition of the autonomic responses, such as tachycardia, hypertension, and so on, that would normally accompany surgery; lack of pain; and lack of recall. Use of electroencephalograms (EEGs) shows little difference between natural sleep and the anesthetized state. On the other hand, Kulli and Koch (1991) argue that the loss of consciousness, as defined by these four effects, is best represented by the loss of the 40-Hz sensory-evoked oscillations. Deepening anesthesia has also been seen as a progressive loss of complexity of the EEG phase plot (Hameroff, 1987). Hameroff claims that anesthetics retard the mobility of electrons, and in doing so may disrupt hydrophobic links among the proteins that interconnect microtubules, the filamentous substructures of the neuron (see next section). From a functional point of view, one may see the prevention of memory consolidation from input to long-term memory storage as one of the consequences of anesthesia; other cognitive and motor functions may be similarly inhibited.

1. Scripts

Yarbus (1967) recorded the eye movements made when a picture is viewed.
He found that there is continual scanning of the scene, with a predominance of fixation points on parts that carry important and complex features. From this one may infer that the brain constructs the regular image from fragments, obtained through the scanning process, that may be termed scripts. Such scripts may be defined with respect to space as well as time. The structuring of events in dreams gives us important clues regarding time scripts. My father explained to me in 1955 that certain dreams run as scripts. Thus a foot slipping off another in sleep may be accompanied by a dream about falling off a precipice. That such a script includes an appropriate sequence of events preceding the climax indicates that the mind rearranges the events so that the dream appears to have started before the slipping of the foot. Similar scripts should be a part of normal awake cognition. For example, in delirious speech, ideas and words are strung together in an illogical manner. It is as if different brief scripts have been jumbled together without comparing the stream to the inner model of reality. Scripts might be seen as chunks of visualization or linguistic behavior, just as neural subsystems might be viewed as chunks of structure.

2. Time Effects

Experiments focusing on time delays and the rearrangement of events by consciousness have been performed by H. H. Kornhuber and his associates and by Benjamin Libet. Kornhuber and his associates (Deecke et al., 1976) found that the readiness potential, the averaged EEG trace from the precentral and parietal cortex, of a subject who was asked to flex his finger built up gradually for a second to a second and a half before the finger was actually flexed. This slow buildup of the readiness potential may be viewed as a response of the unconscious mind before it passes into the conscious mind and is expressed as a voluntary movement. In Libet's work (Libet et al., 1979), the subjects were undergoing brain surgery for some reason unconnected to the experiment, and they agreed to electrodes being placed at points in the brain, in the somatosensory cortex. It was found that when a stimulus was applied to the skin, the subject became aware of this half a second later, although the brain would have received the signal of the stimulus in barely a hundredth of a second. Furthermore, the subjects themselves believed that no delay had taken place in their becoming aware of the stimulus! In further experimentation the somatosensory cortex was electrically stimulated within half a second of the skin stimulus. A backward masking phenomenon occurred, and the subject did not become aware of the earlier skin sensation. Libet then initiated a persistent cortical stimulation first, and within half a second he also touched the skin.
The subject was now aware of both stimulations, but he believed that the skin stimulation preceded that of the cortex. In other words, the subject extrapolated the skin-touching sensation backward in time by about half a second. These experiments demonstrate how the brain works as an active agent, reorganizing itself as well as the information.

E. Cytoskeletal Networks
Recursive definition is one of the fascinating characteristics of life. For example, natural selection works not only at the level of the species but also at the level of the individual and that of the nerve cells of the developing organism (Cowan, 1981). Purposive behavior characterizes human societies as well as the societies of other animals, such as ants. Recursion may be seen regarding
information processing as well, from animal societies to the neural structures of the individual, or perhaps to the cytoskeleton of the cell. C. S. Sherrington (1951) argued that even single cells possess what might be called minds: “Many forms of motile single cells lead their own independent lives. They swim and crawl, they secure food, they conjugate, they multiply. The observer at once says ‘they are alive’; the amoeba, paramaecium, vorticella, and so on. They have specialized parts for movement, hair-like, whip-like, spiral, and spring-like.... [Of] sense organs...and nerves there is no trace. But the cell framework, the cyto-skeleton, might serve” (p. 208). Hameroff and his associates have argued that microtubules, hollow cylinders 25 nm across, which are the cytoskeletal filamentous polymers found in most cells, perform information processing. Hameroff (1987), Hameroff et al. (1992), and Rasmussen et al. (1990) propose that the cytoskeleton may be viewed as the cell's nervous system, and that it may be involved in molecular-level information processing that subserves higher, collective neuronal functions ultimately relating to cognition. They further propose that microtubule automata may have a function in the guidance and movement of cells and cell processes during the morphogenesis of the brain's network architecture, and that in learning they could regulate synaptic plasticity. In Hameroff (1994), interactions between the electric dipole field of water molecules confined within the hollow core of microtubules and the quantized electromagnetic radiation field were considered. These, and Bose-Einstein condensates in hydrophobic pockets of microtubule subunits, were taken to be responsible for microtubule quantum coherence. It was suggested that optical signaling in microtubules would be free from both thermal noise and loss, and that this may provide a basis for biomolecular cognition and a substrate for consciousness.
Regardless of the precise function of microtubule information transmission, it could carry additional features that may be useful in biological information processing. The notion of a connectionist network within the neuron, which is in turn an element of the connectionist neural network, defines a recursive relationship that has many desirable features from the point of view of the ability to model biological behavior.
F. Invariants and Wholeness

The brain's task can also be viewed as one that involves the extraction of invariants. For example, an image cannot be recognized simply by comparing the visual stimulus to a stored exemplar. The reason is that the details of the visual stimulus depend on illumination, the distance and orientation of the object, motion, and many other factors. In order to extract the invariant features of the image, the brain must, from the constantly changing cascade of photonic data, construct an internal visual world. Zeki (1992) has shown that color, form, motion, and possibly other attributes are processed separately in specialized parts of the visual cortex. These parts function as parallel processing modules, although there exist considerable connections among them. How the processing of these parallel modules is put together remains a puzzle. Zeki summarizes: “The entire network of connections within the visual cortex must function healthily for the brain to gain complete knowledge of the external world. Yet as patients with blindsight have shown, knowledge cannot be acquired without consciousness, which seems to be a crucial feature of a properly functioning visual apparatus. Consequently, no one will be able to understand the visual brain in any profound sense without tackling the problem of consciousness as well” (p. 76). It is the notion that awareness possesses a unity and that many aspects of consciousness are distributed over wide areas of the brain that has driven the search for quantum neural models. These issues are summarized in Kak (1993c, 1995) and Penrose (1989), but the presence of noise and dissipation inside the brain makes the development of such models a daunting task. Artificial neural network research has stressed pattern recognition and input-output maps. The development of machines that can match biological information processing requires much more than just pattern recognition. Corresponding to an input, not only are many subpatterns bound together, but other relevant information defining the background and the context is also generated. This is the analog of the binding problem for biological neurons, and therefore further progress in the design of artificial neural networks appears to depend on advances in understanding brain behavior.
From the perspective of wholeness, it appears that a drastic change of perspective is necessary to make further advances. Consciousness is a recursive phenomenon: Not only is the subject aware, he or she is also aware of this awareness. If one were to postulate a certain region inside the brain from where the searchlight is shone on the rest of the brain, and which provides the unity and wholeness of human experience, then the question arises of what would happen if this searchlight were turned on itself.

V. ON INDIVISIBLE PHENOMENA
To understand the marvelous information processing ability of animals, researchers have investigated the neural structure of the brain and related it to a hypothesized nature of the mind. If brain structure is neuronal, then cognitive capabilities should be found for networks of neurons. Note, however, the argument (Sacks, 1990) that neuropsychology itself is flawed since it
does not take into account the notion of self, which is why it is hard put to explain phenomena such as that of phantom limbs (Melzack, 1989).

The study of neural computers was inspired by possible parallels with the information processing of the brain. It was proposed that artificial neural networks constituted a new computing paradigm. However, our experience with these networks has shown that such a characterization is incorrect. When they are simulated, they represent sequential computing. In hardware, they may be viewed as a particular style of parallel processing, but as we know, parallel computing is not a departure from the basic Turing model. Neither does the use of continuous values provide any real advantage, because such continuous values can always be quantized and processed on a digital machine. Artificial neural networks were assumed to offer a real advantage in the solution of optimization problems. However, the energy minimization technique, while sound in theory, fails in practice since the network gets stuck in local minima. Neural networks have provided useful insights, but they have failed to explain the unity that characterizes biological information processing.

Cognitive abilities may be seen to arise from a continuing reflection on the perceived world. This question of reflection is central to the brain-mind problem and the problem of determinism and free will (see, e.g., Kak, 1986; Penrose, 1989). A dualist hypothesis (e.g., Eccles, 1986) to explain brain-mind interaction, or the process of reflection, meets with the criticism that it violates the conservation laws of physics. On the other hand, a brain-mind identity hypothesis, with a mechanistic or electronic representation of the brain processes, does not explain how self-awareness could arise. At the level of ordinary perception there exists a duality and complementarity between an autonomous (and reflexive) brain and a mind with intentionality.
The notion of self seems to hinge on an indivisibility akin to that found in quantum mechanics. This was argued most forcefully by Bohr, Heisenberg, and Schrödinger (e.g., Moore, 1989).

A. Uncertainty
Uncertainty is a fundamental limitation in a quantum description arising out of indivisibility. Since a quantum system is characterized by the wavefunction ψ, which is a superposition of states, and the measurement leads to the collapse of the wavefunction, one can never completely infer the state before the measurement. The wavefunction evolves according to the Schrödinger equation:

$$i\hbar \frac{\partial \psi}{\partial t} = \hat{H}\psi$$

Here Ĥ is the Hamiltonian. According to the probability interpretation of the wavefunction, the probability of finding a quantum object in the volume element dτ is given by

$$dP = \psi\psi^* \, d\tau$$

Thus ψψ* is a probability density. For a particle, Heisenberg's uncertainty relation,

$$\delta x \, \delta p \geq \frac{\hbar}{2}$$

limits the simultaneous measurement of the coordinate x and momentum p, where δx and δp are the uncertainties in the values of the coordinate and momentum. This relation also implies that one cannot speak of the trajectory of a particle. Since the initial values of x and p are inherently uncertain, the future trajectory of the particle is likewise undetermined. One consequence of this picture is that we cannot have the concept of the velocity of a particle in the classical sense of the word. There are many scholars [see Jammer (1974) for a bibliography] who champion a statistical interpretation of the uncertainty relations, according to which the product of the standard deviations of two canonically conjugate variables has a lower bound given by ℏ/2. Such an interpretation is based on the premise that quantum mechanics is a theory of ensembles rather than of individual particles. But there is considerable evidence, including that related to the EPR experiment described in the next section, that goes against the ensemble view. It appears, therefore, that the correct interpretation is the nonstatistical one, according to which one cannot simultaneously specify the precise values of conjugate variables that describe the behavior of a single object or system.
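As a rough numerical illustration of the relation (not from the chapter; the angstrom-scale confinement below is an assumed value chosen only because it is a typical atomic dimension), the minimum momentum spread for a localized electron follows directly:

```python
import math

HBAR = 1.054_571_8e-34  # reduced Planck constant, J*s
M_E = 9.109_383_7e-31   # electron rest mass, kg

def min_momentum_uncertainty(dx):
    """Smallest dp compatible with dx * dp >= hbar / 2."""
    return HBAR / (2.0 * dx)

# Assumed value: an electron localized to about one angstrom.
dx = 1e-10  # meters
dp = min_momentum_uncertainty(dx)
dv = dp / M_E  # corresponding minimum velocity uncertainty

print(f"dp >= {dp:.3e} kg*m/s")
print(f"dv >= {dv:.3e} m/s")
```

The resulting velocity uncertainty, on the order of 10⁵-10⁶ m/s, shows why a classical trajectory cannot be assigned to a particle confined to atomic dimensions.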
B. Complementarity

The principle of complementarity, as a commonly used approach to the study of the individuality of quantum phenomena, goes beyond wave-particle duality. In the words of Bohr:

The crucial point [is] the impossibility of any sharp separation between the behavior of atomic objects and the interaction with the measuring instruments which serve to define the conditions under which the phenomena appear. In fact, the individuality of the typical quantum effects finds its proper expression in the circumstance that any attempt of subdividing the phenomena will demand a change in the experimental arrangement introducing new possibilities of interaction between objects and measuring instruments which in principle cannot be controlled. Consequently, evidence obtained under different experimental conditions cannot be comprehended within a single picture, but must be regarded as complementary in the sense that only the totality of the phenomena exhausts the possible information about the objects (Bohr, 1951, p. 39).
Observe that complementarity is required at different levels of description. Just as one might use a probabilistic interpretation instead of complementarity for atomic descriptions, a probabilistic description may also be used for cognitive behavior. However, such a probabilistic behavior is inadequate to describe the behavior of individual agents, just as notions of probability break down for individual objects. According to complementarity, one can only speak of observations in relation to different experimental arrangements, and not an underlying reality. If such an underlying reality is sought, then it is seen that the framework of quantum mechanics suffers from paradoxical characteristics. One of these is nonlocal correlations that appear in the manner of action at a distance (Bell, 1987). However, quantum mechanics remains a very successful theory in its predictive power. Consider again the similarity between the thought process and the classical limit of the quantum theory. The logical process corresponds to the most general type of thought process as the classical limit corresponds to the most general quantum process. In the logical process, we deal with classifications. These classifications are conceived as being completely separate but related by the rules of logic, which may be regarded as the analog of the causal laws of classical physics. In any thought process, the component ideas are not separate but flow steadily and indivisibly. An attempt to analyze them into separate parts destroys or changes their meanings, yet there are certain types of concepts, among which are those involving the classification of objects, in which we can, without producing any essential changes, neglect the indivisible and incompletely controllable connection with other ideas. C. What Is a Quantum Model?
Wave-particle duality is not a characteristic only of light. Interference experiments have been conducted with neutrons. In one experiment, the interference pattern formed by neutrons, diffracted along two paths by a silicon crystal, could be altered by changing the orientation of the interferometer with respect to the earth's gravitational field. This demonstrated that the Schrödinger equation holds true under gravity. Interference experiments are also being designed for whole atoms. If we use complementarity in its widest sense as applying to all "indivisible" phenomena, reality is seen to be such that it cannot be captured by a single view.
This indicates that one should be able to define complementary variables in terms of higher-level attributes for "complex," indivisible objects. From another perspective, a quantum system is characterized by probabilistic behavior. In Born's probability interpretation, the wavefunction is not a material wave but rather a probability wave. Nevertheless, in relativistic quantum theory, the wavefunction cannot be used for the definition of a probability of a single particle. The orthodox Copenhagen interpretation of quantum mechanics is based on the fundamental notions of uncertainty and complementarity. According to this interpretation, we cannot speak of a reality without relating it to the nature of measurement. If one seeks a unified picture, one is compelled to accept the existence of effects propagating instantaneously. One example of paradoxical results arising from an insistence on a description that is independent of the nature of observation is the delayed-choice variant of the double-slit experiment due to the well-known physicist John Archibald Wheeler.

1. Delayed Choice
In the delayed-choice experiment, considering a specific reality (picture) before observations leads to the inference that we can influence the past. In the words of Wheeler (1980): A choice made in the here and now has irretrievable consequences for what one has the right to say about what has already happened in the very earliest days of the universe, long before there was any life on earth.
Wheeler considers a source that emits light of extremely low intensity, one photon at a time, with a long time interval between the photons. The light is incident on a semitransparent mirror M1 and divides into two parts, r and t; after reflections by the totally reflecting mirrors A and B, it reaches the photomultipliers P1 and P2 (Fig. 4). Since a photon will travel one of the two trajectories rAr or tBt, it will be detected either by P1 or by P2. This experiment shows the corpuscular nature of photons. Now a semitransparent mirror M2 is inserted at DCR (the delayed-choice region). The thickness of the mirror is so chosen that if light is considered wavelike, then the superposed waves toward P1 combine in a destructive manner and the waves toward P2 interfere in a constructive manner. With the insertion of M2, only the detector P2 will register any reading, and it should be assumed that each photon follows both trajectories. Wheeler now wishes us to imagine a situation where the mirror M2 is inserted at the last moment, when the photon has already passed through M1 and is on its way along either rAr or tBt. If M2 is inserted,
FIGURE 4. Wheeler's delayed-choice experiment; DCR is the delayed-choice region.
the photon will behave as a wave, as if it had followed both paths. On the other hand, if M2 is not inserted, then the photon chooses only one of the two paths. Since the insertion of M2 is done at the last moment, it means that this choice modifies the past with regard to the behavior of that photon. Wheeler points out that astronomers could perform such a delayed-choice experiment on light from quasars that has passed through a galaxy or other massive object that acts as a gravitational lens. Such a lens can split the light from the quasar and refocus it in the direction of the distant observer. The astronomer's choice of how to observe photons from the quasar appears to determine whether each photon took both paths or just one path around the gravitational lens in a journey commenced billions of years ago. When they approached the galactic beam splitter, did the photons make a choice that would satisfy the conditions of an experiment to be performed by unborn beings on a still-nonexistent planet? Clearly, this fallacy arises out of the view that a photon has a physical form before the astronomer's observation.
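The two outcomes can be checked with a standard two-mode beam-splitter calculation (a sketch under textbook assumptions, not from the chapter: the symmetric beam-splitter matrix below is a conventional choice, and the common phase added by the mirrors A and B is ignored):

```python
import numpy as np

# Photon amplitude over the two paths (r, t); the photon enters path r.
psi0 = np.array([1.0 + 0j, 0.0 + 0j])

# Symmetric lossless beam splitter (a conventional textbook choice).
B = np.array([[1.0, 1.0j],
              [1.0j, 1.0]]) / np.sqrt(2.0)

def detection_probs(psi):
    """Probabilities registered at the two photomultipliers."""
    return np.abs(psi) ** 2

# Only M1 present: each detector fires half the time (particle-like).
no_M2 = detection_probs(B @ psi0)

# M2 inserted at DCR: the amplitudes interfere, one output port is
# extinguished, and the other receives every photon (wave-like).
with_M2 = detection_probs(B @ (B @ psi0))

print("without M2:", no_M2.round(3))   # [0.5 0.5]
print("with M2:   ", with_M2.round(3)) # [0. 1.]
```

The calculation reproduces the text: with one beam splitter the counts split evenly between the detectors, while with both beam splitters all counts appear at a single detector.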
VI. ON QUANTUM AND NEURAL COMPUTATION

A. Parallels with a Neural System
By a quantum neural computer we mean a (strongly) connectionist network which is nevertheless characterized by a wavefunction. Let us assume that we are interested in considering only the computational potential of such a supposition. In parallel to the operational workings of a quantum model, we can sketch the basic elements of the working of such a computer. Such a computer will start out with a wavefunction, reflecting the state of the self-organization of the connectionist structure, that is a sum of several different problem functions. After the evolution of the wavefunction, the measurement operator will force the wavefunction to reduce to the correct eigenfunction, with the corresponding measurement representing the computation. States of quantum systems are associated with unit vectors in an abstract vector space V, and observables are associated with self-adjoint linear operators on V. Consider the self-adjoint operator f̂. We can write

$$\hat{f}\psi_i = f_i \psi_i$$

where the ψi are the eigenvectors and the fi are the eigenvalues. The ψi are the wavefunctions and the fi represent the corresponding measurements. When the ψi are a complete orthonormal set of vectors in V, we have ψi†ψj = δij. Also, any wavefunction ψ in V can be written as a linear combination of the ψi. Thus

$$\psi = \sum_i c_i \psi_i$$

where the complex coefficients are given by

$$c_i = \int \psi \psi_i^* \, dq$$

and q represents the configuration space. The sum of the probabilities of all possible values fi equals unity:

$$\sum_i |c_i|^2 = 1$$

When an observation is made on an object characterized by a wavefunction ψ, the measurement process causes the wavefunction to collapse to the eigenfunction ψi that corresponds to the measurement fi. This measurement is obtained with probability |ci|². Since the wavefunction evolves regardless of whether any observations are being made, this endows the system with the ability to compute several problems simultaneously. Analogously, animals can simultaneously perform several tasks, although such performance has traditionally been explained as being reflexive. From a functional point of view, this has parallels with the workings of a feedback neural computer. In such a computer the final measurement is one of the stored "eigenvectors" Xi, where the neural computer is itself characterized by the synaptic interconnection weight matrix T, so that

$$\mathrm{sgm}(T X_i) = X_i$$

where sgm is a nonlinear sigmoid function, so as to yield an X with discrete component values.
One difference between the quantum mechanical framework and the above equation is the nonlinearity introduced by the sigmoidal function. This nonlinearity may be seen to be at the end of a chain of transformations, where the first step is a linear transformation as in quantum mechanics. On the other hand, the matrix T is like one of the many measurement operators of a quantum system. In other words, a neural network is associated with a single measurement operator, whereas a quantum computer is associated with a variety of measurement operators. Equivalently, we may view a quantum computer as a collection of many neural computers that are "bound" together into a unity. The view of the cognitive process representing self-organized neural activity implies that each such process sets up what may be considered a different neural computer. The set of the self-organized states may then be viewed as representing a collection of neural structures that are bound together. The search for a local mechanism for this binding is as difficult as for the binding problem mentioned in Section IV. If, on the other hand, we postulate a global function defining such a binding, then we speak of a quantum neural computer. If the wavefunction is associated with several operators, then the neural hardware for a specific problem will secure its solution. The measurement process will appear instantaneous after the decision to choose a specific measurement has been made. How this choice might be made constitutes another problem. Some of the characteristics of such a model are:

1. It explains intuition, or the spontaneous computation of the kind performed in a creative moment, as has been reported by Poincaré, Hadamard, and Penrose (1989).
2. A wavefunction that is a sum of several component functions explains why the free-running mind is a succession of unconnected images or episodes. The classical neural model does not explain this behavior.
3. One can admit the possibility of tunneling through potential barriers. Such a computer can then compute global minima, which cannot be done by a classical neural computer, except by the artifice of simulated annealing. (Simulated annealing algorithms may not converge.)
4. Being a linear sum of a large (or infinite) number of terms, the individual can shift the focus to any desired context by the application of an appropriate measurement hardware that has been designed through previous exposure (reinforcement) or through inheritance. Such a shifting focus is necessary in speech or image understanding.
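The point about local minima in item 3 can be illustrated with a toy comparison (the energy function, cooling schedule, and all parameters below are arbitrary illustrative choices, not from the chapter): plain downhill search stays trapped in the basin where it starts, while simulated annealing, by occasionally accepting uphill moves, can cross the barrier, though, as noted above, its convergence is not guaranteed.

```python
import math
import random

random.seed(1)

# Toy 1-D energy landscape: a shallow local minimum near x = 1.49
# and the global minimum near x = -0.50.
def energy(x):
    return 0.5 * x * x + 2.0 * math.sin(3.0 * x)

def greedy(x, step=0.05, iters=2000):
    """Accept only downhill moves: trapped in the starting basin."""
    for _ in range(iters):
        cand = x + random.uniform(-step, step)
        if energy(cand) < energy(x):
            x = cand
    return x

def anneal(x, iters=20000, t0=2.0):
    """Occasionally accept uphill moves, so barriers can be crossed."""
    for k in range(iters):
        t = t0 * (1.0 - k / iters) + 1e-9   # linear cooling schedule
        cand = x + random.uniform(-0.5, 0.5)
        d = energy(cand) - energy(x)
        if d < 0 or random.random() < math.exp(-d / t):
            x = cand
    return x

start = 1.5  # inside the basin of the *local* minimum
best = min((anneal(start) for _ in range(8)), key=energy)
print("greedy ends near x =", round(greedy(start), 2))
print("best annealed x =", round(best, 2), "energy =", round(energy(best), 2))
```

Because the annealing runs are stochastic, the sketch keeps the best of several restarts; the greedy search never leaves the shallow basin it starts in.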
B. Measurements
Let us consider a pure state ψ to be a superposition of the eigenfunctions, or

$$\psi = \sum_i c_i \psi_i$$
The overlapping realities collapse into one of the alternative worlds of a specific eigenfunction when a measurement by a nonquantum device is made. However, not all measurement devices are completely nonquantum; thus superconductivity represents a quantum phenomenon at the macroscopic level. If the measuring system is also a quantum system, it too should be described by a wavefunction. The measurement state would then arise out of the interference of the two wavefunctions and, in itself, would represent a superposition of several states. As shown by von Neumann, this requires that further measurements be made on the measuring apparatus to resolve the wavefunction, and so on (Schommers, 1989).

1. The Example of Vision
It was argued by Gibson (1979) and Schumacher (1986) that it is not necessary to view vision as resulting from the passage of signals through the optic nerve, but rather as a reorganization of the brain as a response to an environment. According to Gibson: It is not necessary to assume that anything whatever is transmitted along the optic nerve in the activity of perception. We can think of vision as a perceptual system, the brain being simply part of the system. The eye is also part of the system, since retinal inputs lead to ocular adjustments and then to altered retinal inputs, and so on. The process is circular, not a one-way transmission. The eye-head-brain-body system registers the invariants in the structure of ambient light. The eye is not a camera that forms and delivers an image, nor is the retina simply a keyboard that can be struck by fingers of light (p. 61).
Consider the parallel between the eye and the tracks in the photosensitive emulsion used to detect deflected particles. Bohm (1980) argued that the track in the emulsion is to be viewed as resolving the relevant wavefunction; likewise, the track in the eye may be viewed as resolving the wavefunction associated with the incident beam. However, we have seen in Section I.A that the eye cannot be regarded as entirely classical. It appears, then, that the vision system itself should be decomposed into a concatenation of subsystems through which this resolution proceeds. From this perspective the vision system may be viewed as quantum and classical measuring instruments associated with a quantum process.
C. More on Information
That a quantum neural computer will have characteristics different from those of a collection of classical computers is seen from an examination of information. In contrast to classical systems, information in a quantum mechanical situation is affected by the process of measurement. If a linearly polarized photon strikes a polarizer oriented at 90° to its plane of polarization, the probability is zero that the photon will pass to the other side. If another polarizer tilted at 45° is placed before the first one, there is a 25% chance that a photon will pass both polarizers. By the process of measurement, the 45° polarizer transforms the photons so that half of the initial photons can pass through it. The collapse of the wavefunction also causes nonlocal effects. These characteristics can be looked for in determining whether biological information systems should be considered quantum.

1. The EPR Experiment
We consider the thought experiment described by Einstein, Podolsky, and Rosen (1935), now known as the EPR experiment (see Bell, 1987). Einstein et al. assume the following condition for an element of a mathematical model to represent an element of reality:

If, without in any way disturbing a system, we can predict with certainty (i.e., with probability equal to unity) the value of a physical quantity, then there exists an element of physical reality corresponding to this physical quantity (p. 777).
They also assert that "every element of the physical reality must have a counterpart in the physical theory." They argue that if a pair of particles has strongly interacted in the past, then, after they have separated, a measurement on one will yield the corresponding value for the other. Since the position and momentum values are supposed to be undefined before the measurement is made and, nevertheless, the value is revealed after the measurement on the remote particle is made, Einstein et al. argue that the measurements should correspond to aspects of reality. They conclude that "the quantum-mechanical description of physical reality given by wave functions is not complete." Since after separation each particle is to be considered as physically independent of the other (the locality condition), this can also be taken to imply that effects propagate instantaneously. Bell (1987) has shown that the EPR reality criterion is incompatible with the predictions of quantum mechanics. Experiments have confirmed the predictions of the latter. We now consider the EPR experiment from an information-theoretic viewpoint. In its Bohm variant, a pair of spin-½ particles, A and B, have
formed somehow in the singlet spin state and are moving freely in opposite directions. The wavefunction of this pair may be represented by

$$\frac{1}{\sqrt{2}}\left(V^+ W^- - V^- W^+\right)$$

where V⁺ and V⁻ represent the measurements of spin +½ and −½ for particle A, and W⁺ and W⁻ represent the measurements of spin +½ and −½ for particle B. The EPR argument considers the particles A and B to have separated. Now if a measurement is made on A along a certain direction, it guarantees that a measurement made on B along the same direction will give the opposite spin. The important point here is that the spin is determined as soon as, but not before, one of the particles is measured. This has been interpreted to mean that knowledge about A somehow reduces the wavefunction for B in a specific sense. In other words, the EPR correlation has been taken to imply a nonlocal character for quantum mechanics, or instantaneous action at a distance. To reexamine these questions in an information-theoretic perspective, it is essential to determine the extent of information obtained in each measurement. Given a spin-½ particle, an observation on it produces log₂ 2 = 1 bit of information. A further measurement made along a direction at an angle θ to the previous measurement leads to a probability cos²(θ/2) that the measurement will give the same sign of spin, and a probability sin²(θ/2) that it will give spin of the opposite sign. The information associated with the second measurement is

$$H(\theta) = -\cos^2\frac{\theta}{2}\,\log_2 \cos^2\frac{\theta}{2} - \sin^2\frac{\theta}{2}\,\log_2 \sin^2\frac{\theta}{2}$$
The average information, considering all angles, is

$$H_{av} = \frac{1}{2\pi}\int_0^{\pi} H(\theta)\, d\theta = 1 - \log_2\sqrt{e} \text{ bits} = 0.27865 \text{ bits}$$
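The printed figure of 0.27865 bits can be verified numerically; the sketch below assumes the average is taken as (1/(2π)) ∫₀^π H(θ) dθ, the normalization consistent with the printed value:

```python
import math

def H(theta):
    """Information (in bits) associated with the second measurement."""
    total = 0.0
    for p in (math.cos(theta / 2.0) ** 2, math.sin(theta / 2.0) ** 2):
        if p > 0.0:
            total -= p * math.log2(p)
    return total

# Midpoint-rule average with the assumed 1/(2*pi) normalization
# over theta in [0, pi].
N = 100_000
width = math.pi / N
avg = sum(H((k + 0.5) * width) for k in range(N)) * width / (2.0 * math.pi)

print(round(avg, 5))                                 # 0.27865
print(round(1.0 - math.log2(math.sqrt(math.e)), 5))  # 0.27865
```

The quadrature agrees with the closed form 1 − log₂√e = 1 − 1/(2 ln 2) to the printed precision.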
In other words, the information obtained from the second measurement is somewhat more than a quarter of a bit. These sequential experiments are correlated. It is also important to consider that all further measurements provide exactly 0.27865 bit of information. This indicates that information in a quantum description is not a locally additive variable.

2. Information in the Experimental Setup

Now consider the information to be obtained from the measurement of the spin of two spin-½ particles that are correlated in the EPR sense. If information is
additive, then the first measurement provides 1 bit of information, and after the end of the second measurement we have a total of 1.27865 bits. Since the EPR correlation reveals the spin of particle B as soon as the measurement of A has been made, one might infer that the information that the two arms of the experimental setup provide equals 0.27865 bit. In reality we cannot do so, as the calculus for information in a quantum description is unknown. Not being locally additive, the information in the experimental setup cannot be used in subsequent repetitions of the experiment.

D. A Generalized Correspondence Principle
In a study several years ago (Kak, 1976, 1984) it was argued that the fundamental Heisenberg uncertainty was compensated by information in terms of new symmetries of the quantum description. In other words, one can generalize the correspondence principle to include uncertainty:
$$I_{classical} = I_{qm} + \text{uncertainty}$$

A calculation of this information yields plausible results regarding the number of such symmetries. The analysis given here provides further elaboration of that idea. Although we appear to be unable to use the information in the various measurements, it is clear that the experimental setup itself plays a fundamental role in our knowledge. From another perspective, information can be associated with the angular (as well as spatial) position of an object. Nevertheless, this information cannot be considered in a local fashion. In other words, this information cannot be considered separately for the components of the system. In the context of a quantum neural computer, this means that the behavioral characteristics of such a machine can be defined only in terms of the manner in which measurements on it are made. If such measurements are stored in the machine, then this experience can be regarded only as an interaction between the machine and the environment. Berger (1977, p. xiv) argues that one needs to use a similar relativism in the study of biological systems: "A necessary condition of experience is interaction between the organism and the environment, so that the structure of experience reflects the structure of both the organism and the environment with which it interacts. It follows that an organism cannot experience its own structure or that of the environment independently of the other." Assume that an emergent phenomenon arises from interactions that are noncomputable in terms of the descriptions at the lower level. Assume further that the higher-level description is associated with a fundamental uncertainty. Will the generalized correspondence principle apply to this situation?
And if it does, will the emergent phenomenon be associated with new attributes that provide information equal in magnitude to this uncertainty? If information in biological information processing exhibits other nonlocal characteristics as well, such behavior may represent further parallels between the quantum mechanical paradigm and biological reality.

VII. STRUCTURE AND INFORMATION

A. A Subneuron Field
In earlier sections we critiqued materialist models, where an identity is assumed between mental events and neural events in the higher centers of the brain. Our main criticism was that these models did not deal with the binding problem. We presented evidence in support of the view that biological systems perform quantum computing. We now raise the question of the location of the quantum field that associates experience with its unity. Two proposals are reviewed in this section.

1. A Dualist Hypothesis
Popper and Eccles (1977) presented the view in which the world of mental events (they call it World 2) is sharply separated from the world of brain processes (World 1). Both these worlds are supposed to have autonomous existence. These ideas were further developed in Eccles (1986, 1990). In his theory, Eccles (1990) assumes that the entire set of the inner and outer senses is composed of elementary mental events, which he calls psychons. He further proposes that each psychon is linked to a corresponding functional aggregate of dendrites, which he names a dendron. A dendron is the basic receptive unit of the cerebral cortex, and it is a bundle or cluster of apical dendrites of certain pyramidal cells. Each dendron could involve about 200 neurons and as many as 100,000 synapses. There are about 40 million dendrons in the human cerebral cortex. He assumes that the mind exerts a minimal additional excitation on the dendronic vesicles, which are at a state just below the threshold for excitation. In order to meet the objection that immaterial mental events such as thoughts cannot act on material structures, such as the neurons in the cerebral cortex, Eccles has recourse to quantum mechanics. He says that "a calculation on the basis of the Heisenberg uncertainty principle shows that a vesicle of the presynaptic vesicular grid could conceivably be selected for exocytosis by a psychon acting analogously to a quantal probability field. The energy required to initiate the exocytosis by a particle displacement
could be paid back at the same time and place by the escaping transmitter molecules moving from a high to a low concentration. In quantum physics, at microsites energy can be borrowed provided it is paid back at once. So the transaction of exocytosis need involve no violation of the conservation laws of physics" (Eccles, 1990, p. 447).

Although the central idea of this theory is the linkage between the psychons and the microsite dendrons, Eccles' theory is a field theory of psychons. A dendron represents the measurement hardware with which each unitary or elementary mental event is associated. Psychons are autonomous entities that may exist apart from dendrons and be related among themselves. In the brain-to-mind transaction, Eccles proposes that each time a psychon selects a vesicle for exocytosis, this "microsuccess" is registered in the psychon for transmission through the mental world. This signal carries into the mental world the special experiential character of that psychon. Eccles has only sketched the outline of his theory. He sees the psychons and the brain processes as interlinked entities, which is why he labels his theory dualistic. He does not explain what the physical support of psychons is. Seen from this perspective, psychons are nothing but an artifice to circumvent the brain-mind problem. If they need the dendrons to emerge as mental events, then we are again confronted with the problem of the processes that lead to such an emergent phenomenon. Eccles does see the need for a quantum theoretic basis to explain the psychon-dendron interaction of mental events. It appears, therefore, that the psychon field ought to be a quantum field. He does not explain how the world of mental events, as a separate, autonomous reality, functions.

2. Microtubule Field

As a field, one cannot take awareness to be localized. Considering the parallel of the wavefunction, it is a unity, and it cannot be taken to be identical to the physical body although it is associated with it.
Hameroff and his collaborators (Hameroff, 1994; Jibu et al., 1994) have suggested that the subneural structures called microtubules provide coherent information transmission that leads to the development of a quantized field, which solves the "binding problem" and explains the unitary sense of self. These cytoskeletal microtubules, which provide structural support to cells, are hollow cylinders 25 nm in diameter whose walls are made of subunit proteins known as tubulin. Each tubulin subunit is an 8-nm dimer made up of two slightly different 4-nm monomers known as α and β tubulin. Hameroff has argued that the quantum dynamical system of water molecules and the quantized electromagnetic field confined inside the hollow microtubule core manifests a collective dynamics by which coherent photons are created inside
QUANTUM NEURAL COMPUTING
the microtubule. The question of why consciousness characterizes only certain cells, although microtubules are found in all cells, is not clearly answered. Hameroff (1994) suggests that consciousness might be an attribute of all quantum phenomena. Commenting on the results of the EPR experiment, he says that "quantum entities are 'aware' of the states of their spatially separated relatives!" It is also possible to see the microtubule information as providing the "imaginary" component in the complex neural information that was described in an earlier section. If this is the mechanism that provides the simultaneous global and local information which is essential for artificial intelligence tasks, then it is clearly not feasible to build such quantum neural computers at this time, because microtubule information processing is still imperfectly understood.
3. A Universal Field

If one does not wish for a reductionist explanation such as is inherent in the cytoskeletal model, one might postulate a different origin for the quantum field. Just as the unified theories explain the emergence of the electromagnetic and weak forces from a mechanism of symmetry breaking, one might postulate a unified field of consciousness from which the individual fields emerge.

The notion of a universal field still requires one to admit the emergence of the individual's I-ness at specialized areas of the brain. This I-ness is intimately related to memories, both short term and long term. The recall of these memories may be seen to result from operations by neural networks. Lesions to different brain centers affect the ability to recall or store memories. For example, lesions to area V1 of the primary visual cortex lead to blindsight (Weiskrantz, 1986). Such patients can "see," but they are unaware that they have seen. Although such visual information is processed, and it can be recalled through a guessing-game protocol, it is not passed to the conscious self.

B. Uncertainty Relations
By structure we mean a stable organization of the neural system. The notion of stability may be understood from the perspective of the energy of the neural system: each stable state is an energy minimum. For example, the following energy expression (Hopfield, 1982) is suitable for a feedback neural network:

E = −(1/2) Σ_{i≠j} w_{ij} x_i x_j,

where w_{ij} are the connection weights and x_i the neuron states.
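The Hopfield (1982) energy, E = −(1/2) Σ_{i≠j} w_ij x_i x_j for weights w_ij and bipolar states x_i, can be checked numerically. The sketch below is illustrative only: the outer-product (Hebbian) storage rule and the test pattern are our assumptions, not taken from the chapter.

```python
import numpy as np

def hopfield_energy(w, s):
    # E = -1/2 * sum over i != j of w_ij * s_i * s_j (zero diagonal assumed)
    return -0.5 * s @ w @ s

# Store one bipolar pattern with the Hebbian outer-product rule
p = np.array([1.0, -1.0, 1.0, -1.0])
w = np.outer(p, p)
np.fill_diagonal(w, 0.0)

e_stored = hopfield_energy(w, p)          # the stored pattern sits at a minimum
flipped = p.copy()
flipped[0] = -flipped[0]
e_flipped = hopfield_energy(w, flipped)   # perturbing one unit raises the energy
```

Perturbing the stored pattern by one unit raises the energy, consistent with the statement that each stable state is an energy minimum.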
Any structure may be represented by a number, or a binary sequence. Thus, in one dimension, the sequences abc, ba, cab represent three structures that can be coded into numbers by a binary code. Assume that a neural structure has been represented by a sequence. Since this representation can be done in a variety of ways, the question of a unique representation becomes relevant.

Definition 1. Let the shortest binary program that generates the sequence representing the structure be called p.
The idea of the shortest program gives us a measure for the structure that is independent of the coding scheme used for the representation. The length of this program may be taken to be a measure of the information to be associated with the organization of the system. This length will depend on the class of sequences being generated by the program; in other words, it reflects the properties of the class of structures being considered.

Evidence from biology requires that the brain be viewed as an active system which reorganizes itself in response to external stimuli. This means that the structure p is a variable with respect to time. Assuming, by generalized complementarity, that the structure itself is not defined prior to measurement, then for each state of an energy value E we may, in analogy with Heisenberg's uncertainty principle, say that

δE δp ≥ k₁,

where k₁ is a constant based on the nature of the organizational principle of the neural system. The external environment changes when the neural system is observed, due to the interference of the observer. This means that as the measurement is made, the structure of the system changes. It also means that at such a fundamental level, a system cannot be associated with a single structure, but rather with a superposition of several structures. Might this be a reason behind pleomorphism, the multiplicity of forms of microbes?

The representation described above may also be employed for the external environment.

Definition 2. Let the shortest binary program that generates the external environment be called x.
If the external environment is an eigenstate of the system, then the system organization will not change; otherwise, it will.
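The shortest-program length of Definitions 1 and 2 is, strictly speaking, uncomputable (it is the Kolmogorov complexity). Purely as an illustration, a general-purpose compressor gives a computable upper bound; the use of zlib and the particular test sequences below are our assumptions, not the chapter's:

```python
import random
import zlib

def structure_measure(seq: str) -> int:
    # Length of the compressed sequence: a computable upper bound on the
    # length of the shortest program p that generates the sequence.
    return len(zlib.compress(seq.encode()))

regular = "abc" * 400                       # highly organized structure
random.seed(0)
disordered = "".join(random.choice("abc") for _ in range(1200))

m_reg = structure_measure(regular)
m_rnd = structure_measure(disordered)       # needs a much longer description
```

The organized sequence compresses to far fewer bytes than the disordered one of the same length, mirroring the idea that program length measures the information in a system's organization.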
We now propose an uncertainty principle for neural system structure:

δx δp ≥ k₂.
This relation says that the environment and the structure cannot be simultaneously fixed. If one of the variables is precisely defined, the other becomes uncontrollably large. Either of these two conditions implies the death of the system. In other words, such a system will operate only within a narrow range of values of the environment and structure. We conjecture that k₁ = k₂ = k.

One may pose the following questions: Are all living systems characterized by the same value of k? Can one devise stable self-organizing systems that are characterized by a different value of k? Would artificial life have a value of k different from that of natural life? What is the minimum energy required to change the value of p by one unit? Does a Schrödinger-type equation define the evolution of structure? It is also clear that, before a measurement is made, one cannot speak of a definite state of the machine, nor of a definite state of the environment.

VIII. CONCLUDING REMARKS

In the previous sections we have argued that humans and other animals perform what might be called quantum neural computing. If we accept the reverse of this claim, then a quantum neural computer would be characterized by life and consequently consciousness. On the other hand, ordinary machines cannot be conscious, since they do not come with the set of potentialities that consciousness provides. Machines are therefore like the neural hardware that provides extension. A machine so designed that it has an infinite set of potentialities would be alive, but such an alive machine need not be based on the organic molecules of normal life.

Our review has highlighted the following points. Conventional computing has been unable to devise schemes for holistic processing. This is why computers cannot recognize faces or understand text. Conventional computing is based on a reductionist algorithmic approach, but such an approach presupposes that the particles, or objects, of the computation have been selected.
The selection of these objects is normally left to a human in the solution of any real-world problem. Conventional computing has failed at deriving methods for such a selection.
* Biological computing may be viewed as a potentially infinite collection of conventional computing devices. This versatility arises from the brain reorganizing its structure in response to an association, presented by the environment or by a self-generated state, to become a special-purpose computing machine suited to the solution of that problem.
* We postulate a wavefunction associated with brain behavior that allows the biological system to get into a definite structural state corresponding to the stimulus.
* We can simulate a quantum neural computer by a collection of a large number of conventional computers. To build a true quantum neural computer one will have to develop a framework for organization. In other words, one would then know how the different conventional computers that together constitute an approximation to the quantum neural computer could have a plasticity built into them so that the same machine would be able to reorganize itself.
* We speculate that self-awareness is associated with the wavefunction that allows for the selectivity in the biological organism. If this is true, then a quantum neural computer will be self-aware.
This chapter presents arguments for a holistic computing paradigm that has parallels with biological information processing and with quantum computing. In this paradigm, inputs trigger an internal reorganization of the connectionist computer that makes it selective to the input. Such a holistic paradigm suffers from several paradoxical aspects. No wonder the determination of the phenomenological correlates of the holistic function, and therefore the design of a computer that operates in this paradigm, remain perplexing problems.

A quantum neural computer represents an underlying quantum system that interacts with classical measurement structures composed of neural networks. This parallels the basic framework of quantum theory, where measurement is performed using macroscopic apparatus. Learning by a biological system then represents the development of the measuring instruments, which are the neural structures in the brain. This picture still needs to address other questions, such as how, in response to a stimulus, the brain reorganizes itself so as to pick the appropriate neural network for measurement, and how the unity of the wavefunction leads to self-awareness. Unless we consider all matter to be conscious, we must take consciousness to be an emergent property of a quantum system that requires a certain structural basis supporting specific neuronal activity.
REFERENCES

Aurobindo, S. (1970). "The Synthesis of Yoga." Aurobindo Ashram, Pondicherry, India.
Baylor, D. A., Lamb, T. D., and Yau, K.-W. (1979). J. Physiol. 288, 613.
Bell, J. S. (1987). "Speakable and Unspeakable in Quantum Mechanics." Cambridge University Press, Cambridge.
Benioff, P. (1982). J. Stat. Phys. 29, 515.
Berger, R. (1977). "Psyclosis: The Circularity of Experience." Freeman, San Francisco.
Bohm, D. (1980). "Wholeness and the Implicate Order." Routledge & Kegan Paul, London.
Bohr, N. (1951). "Atomic Physics and Human Knowledge." Science Editions, New York.
Cowan, W. M., ed. (1981). "Studies in Developmental Neurobiology." Oxford University Press, New York.
Crick, F., and Koch, C. (1990). Sem. Neurosci. 2, 263.
Das, S. (1994). Second order neural networks. Ph.D. dissertation, Louisiana State University, Baton Rouge.
Deecke, L., Grözinger, B., and Kornhuber, H. H. (1976). Biol. Cybernet. 23, 99.
Deutsch, D. (1985). Proc. R. Soc. Lond. A 400, 97.
Deutsch, D. (1989). Proc. R. Soc. Lond. A 425, 73.
Dyczkowski, M. S. G. (1987). "The Doctrine of Vibration." State University of New York Press, Albany.
Eccles, J. C. (1986). Proc. R. Soc. Lond. B 227, 411.
Eccles, J. C. (1990). Proc. R. Soc. Lond. B 240, 433.
Einstein, A., Podolsky, B., and Rosen, N. (1935). Phys. Rev. 47, 777.
Feigenbaum, M. J. (1978). J. Stat. Phys. 19, 25.
Feynman, R. P. (1986). Found. Phys. 16, 507.
Freeman, W. J. (1992). Int. J. Bifurcation and Chaos 2, 451.
Freeman, W. J. (1993). In "Rethinking Neural Networks: Quantum Fields and Biological Data" (Karl H. Pribram, ed.). Lawrence Erlbaum Associates, Hillsdale, NJ.
Freeman, W. J., and Jakubith, S. (1993). In "Brain Theory" (A. Aertsen, ed.). Elsevier, New York.
Gibson, J. J. (1979). "The Ecological Approach to Visual Perception." Houghton Mifflin, Boston.
Gramss, T. (1994). On the speed of quantum computation. Tech. Rep. 94-04-017, Santa Fe Institute, Santa Fe, NM.
Griffin, D. R. (1992). "Animal Minds." University of Chicago Press, Chicago.
Hamburger, V., and Levi-Montalcini, R. (1949). J. Exp. Zool. 111, 457.
Hameroff, S. R. (1987). "Ultimate Computing: Biomolecular Consciousness and NanoTechnology." North-Holland, Amsterdam.
Hameroff, S. R. (1994). J. Consciousness Stud. 1, 91.
Hameroff, S. R., Dayhoff, J. E., Lahoz-Beltra, R., Samsonovich, A. V., and Rasmussen, S. (1992). IEEE Comput. 25(11), 33.
Herrnstein, R. J. (1985). Phil. Trans. R. Soc. Lond. B 308, 129.
Herrnstein, R. J., and Loveland, D. H. (1964). Science 146, 549.
Herrnstein, R. J., Vaughan, Jr., W., Mumford, D. B., and Kosslyn, S. M. (1989). Perception and Psychophysics 46, 56.
Hopfield, J. J. (1982). Proc. Natl. Acad. Sci. USA 79, 2554.
Hubel, D. H., and Wiesel, T. N. (1962). J. Physiology (London) 160, 106.
Jammer, M. (1974). "The Philosophy of Quantum Mechanics." Wiley, New York.
Jibu, M., Hagan, S., Hameroff, S. R., Pribram, K. H., and Yasue, K. (1994). BioSystems 32, 195.
Kak, S. C. (1976). Nuovo Cimento 33B, 530.
Kak, S. C. (1984). Proc. Indian Natl. Sci. Acad. 50, 386.
Kak, S. C. (1986). "The Nature of Physical Reality." Peter Lang, New York.
Kak, S. C. (1988). Proc. 22nd Annu. Conf. on Information Science and Systems (Princeton, NJ), p. 131.
Kak, S. C. (1993a). Pramana J. Phys. 40, 35.
Kak, S. C. (1993b). Circuits, Systems, and Signal Processing 12, 263.
Kak, S. C. (1993c). Reflections in clouded mirrors: Selfhood in animals and machines. Symposium on Aliens, Apes, and AI: Who Is a Person in the Postmodern World. Southern Humanities Council Annual Conference, Huntsville, AL, February 1993.
Kak, S. C. (1994a). Pattern Recognition Lett. 15, 295.
Kak, S. C. (1994b). From Vedic science to Vedanta. Fifth Int. Congress of Vedanta, Miami University, Oxford, Ohio, August 1994.
Kak, S. C. (1994c). "The Astronomical Code of the Rgveda." Aditya, New Delhi.
Kak, S. C. (1995). Inform. Sci. 83, 143.
Kandel, E. R., and Hawkins, R. D. (1992). Sci. Am. 267(3), 79.
Koch, C. (1993). Curr. Opin. Neurobiol. 3, 203.
Kulli, J., and Koch, C. (1991). Trends Neurosci. 14, 6.
Landauer, R. (1991). Phys. Today 44, 23.
Libet, B. (1986). Fed. Proc. 45, 2678.
Libet, B., Wright, E. W., Feinstein, B., and Pearl, D. K. (1979). Brain 102, 193.
Madineni, K. B. (1994). Inform. Sci. 81, 229.
McCulloch, W. S., and Pitts, W. (1943). Bull. Math. Biophys. 5, 115.
Melzack, R. (1989). Can. Psychol. 30, 1.
Milner, P. M. (1974). Psych. Rev. 81, 521.
Milton, J., Mundel, T., an der Heiden, U., Spire, J.-P., and Cowan, J. (1994). Neural activity waves and the relative refractory states of neurons. Tech. Rep. 94-09-052, Santa Fe Institute, Santa Fe, NM.
Moore, W. (1989). "Schrödinger: Life and Thought." Cambridge University Press, Cambridge.
Mountcastle, V. B. (1989). In "The Mindful Brain" (G. M. Edelman and V. B. Mountcastle, eds.), p. 7. MIT Press, Cambridge.
Oppenheim, R. W. (1981). In "Studies in Developmental Neurobiology" (W. M. Cowan, ed.). Oxford University Press, New York.
Optican, L. M., and Richmond, B. J. (1987). J. Neurophys. 57, 162.
Penrose, R. (1989). "The Emperor's New Mind: Concerning Computers, Minds, and the Laws of Physics." Oxford University Press, Oxford.
Pepperberg, I. M. (1983). Anim. Learn. Behav. 11, 179.
Popper, K. R., and Eccles, J. C. (1977). "The Self and Its Brain." Springer International, Berlin.
Purves, D. (1988). "Body and Brain: A Trophic Theory of Neural Connections." Harvard University Press, Cambridge, MA.
Raina, P. (1994). Inform. Sci. 81, 261.
Ramon y Cajal, S. (1929). "Studies on Vertebrate Neurogenesis." Thomas, Springfield.
Rasmussen, S., Karampurwala, H., Vaidyanath, R., Jensen, K. S., and Hameroff, S. (1990). Physica D 42, 428.
Sacks, O. (1990). "Awakenings, A Leg to Stand On, The Man Who Mistook His Wife for a Hat." Book of the Month Club, New York.
Savage-Rumbaugh, E. S. (1986). "Ape Language: From Conditioned Response to Symbol." Columbia University Press, New York.
Savage-Rumbaugh, E. S., Sevcik, R. A., Rumbaugh, D. M., and Rubert, E. (1985). Phil. Trans. R. Soc. Lond. B 308, 177.
Schnapf, J. L., and Baylor, D. A. (1987). Sci. Am. 256(4), 40.
Schommers, W. (1989). "Quantum Theory and Pictures of Reality." Springer-Verlag, Berlin.
Schrödinger, E. (1961). "Meine Weltansicht." Paul Zsolnay, Vienna.
Schrödinger, E. (1965). "What Is Life?" Macmillan, New York.
Schumacher, J. A. (1986). In "Fundamental Questions in Quantum Mechanics" (L. M. Roth and A. Inomata, eds.), p. 93. Gordon & Breach, New York.
Sherrington, C. S. (1951). "Man on His Nature." Cambridge University Press, Cambridge.
Sinha, J. (1958). "Indian Psychology." University of Calcutta Press, Calcutta.
Skarda, C. A., and Freeman, W. J. (1990). Concepts in Neurosci. 1, 275.
Sutton, J. P., Beis, J. S., and Trainor, L. E. H. (1988). J. Phys. A: Math. Gen. 21, 4443.
Terrace, H. S. (1985). Phil. Trans. R. Soc. Lond. B 308, 113.
Turing, A. M. (1950). Mind 59, 433.
Turney, T. H. (1982). Bull. Psychon. Soc. 19, 56.
von der Malsburg, C., and Schneider, W. (1986). Biol. Cybernet. 54, 29.
Weiskrantz, L. (1986). "Blindsight." Oxford University Press, London.
Wheeler, J. A. (1980). Delayed choice experiments and the Bohr-Einstein dialog. Paper presented at the Joint Meeting of the American Philosophical Society and the Royal Society, London, June 5.
Wigner, E. (1967). "Symmetries and Reflections." Indiana University Press, Bloomington.
Yarbus, A. L. (1967). "Eye Movements and Vision." Plenum Press, New York.
Zeki, S. (1992). Sci. Am. 267(3), 69.
ADVANCES IN IMAGING AND ELECTRON PHYSICS, VOL. 94
Signal Description: New Approaches, New Results

ALEXANDER M. ZAYEZDNY
Department of Electrical Engineering and Computers, Ben Gurion University of the Negev, Beer Sheva, Israel

ILAN DRUCKMANN
Lasers Group, Nuclear Research Center Negev, Beer Sheva, Israel
I. Introduction
   A. On the Meaning of Signal Description
   B. Review of Signal Description Methods
   C. Criteria for Comparing Different Methods of Signal Description
II. Theoretical Basis of the Structural Properties Method
   A. Basic Definitions and Terminology
   B. Elementary State Functions
   C. Nonelementary State Functions
   D. Generating Differential Equations
   E. A Second-Order Linear Generating Differential Equation Based on the Amplitude-Phase (A-Psi) Representation of a Signal
   F. The Linear Second-Order Generating Differential Equation with Three Coefficients, and Some Generalizations
III. Generalized Phase Planes
   A. Definitions
   B. Construction of Generalized Phase Planes for Given Signals
   C. Reconstruction of the Original Signal from Given Phase Trajectories
   D. Signal Processing Using Phase Trajectories
IV. Applications
   A. Parameter Estimation
   B. Signal Separation
   C. Filtering and Rejection of Signals
   D. Signal Identification
   E. Data Compression
   F. Extrapolation of Time Series and Real-Time Processes
   G. Malfunction Diagnosis
References
I. INTRODUCTION

Signal theory has developed during the last decades as an independent branch of science which incorporates the general mathematical theory and tools for dealing with signals originating from a variety of fields. The main
Copyright © 1995 by Academic Press, Inc. All rights of reproduction in any form reserved.
part of signal theory is dedicated to mathematical methods of signal representation and classification, i.e., signal description methods. There is an abundant literature dedicated to this subject, e.g., Franks (1969), Trachtman (1972), Papoulis (1977), Urkowitz (1983), and Marple (1987). The primary attention in this literature is given to the analysis of signal properties, to their relationships with systems, and also to the synthesis of signals and systems.

There are many methods of signal description; some of them can be viewed as classical (Fourier analysis and its many branches and related transforms, envelope-phase representation, series expansions, and linear system models) and others as new methods which have gained much popularity recently (wavelet analysis, higher-order spectra). We shall review these methods briefly, with emphasis on the newer methods. However, this chapter is dedicated mainly to the new approach called the method of "structural properties" of signals. The main idea of this approach is to make use of the relationships between the signal and its derivatives, and possibly also other transforms (e.g., Hilbert). These relationships are in general nonlinear, and are expressed by combinations of derivatives called state functions. Three tools are used in this method: the state function, the generating differential equation (GDE), and the generalized phase plane.

In our view, the structural-properties approach has two main advantages: It makes use of more information (information about the relationships between the derivatives is information about the nature of the signal), and the description by this method is transparent to system synthesis.

The chapter is organized as follows. In Sections I.A and I.C we present some basic ideas about the meaning of signal description and some criteria for comparing description methods. In Section I.B we briefly review the methods of signal description.
Section II is dedicated to two of the tools of the structural properties method: the state function (Sections II.A-II.C) and the GDE (Sections II.D-II.F). In Section III we present the third tool, the generalized phase plane. Finally, Section IV is dedicated to the applications of the method to signal processing. Some subsections of Section IV include brief reviews of existing methods.

We cite here a few early works on the method of structural properties: Zayezdny (1967), Zayezdny and Zaytsev (1971), Zayezdny et al. (1971), Plotkin and Zayezdny (1979).

A. On the Meaning of Signal Description
First we shall define some basic notions. A signal is a function of time (or space) which contains information. Usually a signal implies a combination of
a "pure" signal, containing the information, and a disturbance. The disturbance may be a noise or an interference, i.e., an unwanted signal. We shall consider in this chapter only one-dimensional real functions of time. Of course, the generalization is readily available.

Signal description is defined as an operation of imaging, in which the signal is represented as a convenient analytical expression. This can be processed for the extraction of the information, and also for the synthesis of systems. A general representation of unidimensional signals is

x(t) = p(t) s(t; a, b) + n(t),    (1)

where all functions are time-dependent. However, in the following, we shall drop the argument t except when we want to emphasize it. The vector a contains the informative parameters; the vector b contains the noninformative ones. p(t) denotes a multiplicative disturbance, and n(t) an additive one. n(t) can also include other signals, unrelated by nature to the functions s(t) and p(t). We shall use the term "signal" to denote both s(t) itself and the mixture s(t) + disturbance.

Signal theory deals with different problems: measurement of the parameters a; suppressing the disturbances p(t) and n(t) and diminishing the influence of the noninformative parameters b on the parameters a; separation of signals (by their parameters or by their different functional form); problems of transforming the signals in order to increase the efficiency of processing systems; etc. From the point of view of practical applications, all these problems can be divided into the following groups:

1. Estimation or measuring of signal parameters: in communication, radar, etc.
2. Signal identification and signal classification: pattern recognition, speech identification, malfunction diagnosis, spectrum monitoring, medical diagnosis, etc.
3. Signal separation, filtering, and signal rejection
4. Data compression
5. Additional problems: extrapolation, etc.

The purpose of signal description methods is to facilitate the solution of one or several problems. Solving the problem means obtaining a mathematical algorithm which can be implemented as a real device or system. From the great diversity of signal processing problems arises a diversity of description methods, and a method which is adequate for a certain problem may not be satisfactory for another type of problem. In the following sections we shall briefly review existing methods of signal description, and then we shall propose and discuss some criteria for the comparison of their efficiency.
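Purely as a concrete illustration of Eq. (1), the sketch below builds x(t) from a sinusoidal s(t; a, b) whose frequency plays the role of the informative vector a and whose phase the noninformative b, with a slow multiplicative fade p(t) and additive white noise n(t); every specific choice here is ours, not the chapter's:

```python
import numpy as np

rng = np.random.default_rng(0)
t = np.linspace(0, 1, 1000, endpoint=False)

# x(t) = p(t) * s(t; a, b) + n(t), Eq. (1) -- all numeric choices illustrative
a_freq = 40.0                                   # informative parameter (vector a)
b_phase = 0.3                                   # noninformative parameter (vector b)
s = np.cos(2 * np.pi * a_freq * t + b_phase)    # the "pure" signal s(t; a, b)
p = 1 + 0.2 * np.sin(2 * np.pi * 2 * t)         # multiplicative disturbance p(t)
n = 0.1 * rng.standard_normal(t.size)           # additive disturbance n(t)
x = p * s + n
```

Estimating a_freq from x while suppressing p and n is exactly the kind of problem listed under groups 1 and 3 above.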
B. Review of Signal Description Methods
All signal description methods can be classified into two main groups: description by contour and description by structural properties. Contour properties are those properties which can be revealed by using instantaneous values of the signal, and also of its derivatives and integrals. By structural properties we mean those properties by which we can reveal the relationships between the signal and its derivatives, or between the derivatives themselves, and which in some sense can generate the signal. In the following we give a brief review of the methods of signal description.

1. The Direct (Immediate) Description
The signal is described by means of an analytical expression, as in (1). This method is used for dealing with only the most simple problems. For the solution of other problems it is used as a preliminary step for the transition to some other description.

2. Representation by Envelope and Phase
The signal is represented by means of the instantaneous values of two functions: the envelope A(t) and the phase φ(t):

x(t) = A(t) cos φ(t),    (2)

where A(t), φ(t) are defined as

A(t) = [x²(t) + x̂²(t)]^{1/2},   φ(t) = arctan[x̂(t)/x(t)],    (3)

and x̂(t) is the Hilbert integral transform:

x̂(t) = (1/π) ∫_{−∞}^{∞} x(τ)/(t − τ) dτ.    (4)

It is convenient to express the phase in the form

φ(t) = ω₀t + ψ(t),    (5)

where ω₀t is the linear term of the expansion of φ(t) about the frequency ω₀. The theory of envelope-phase representation of signals is widely treated in the literature and is used in applications connected with oscillatory processes (Franks, 1969; Rice, 1982; Urkowitz, 1983; Picinbono and Martin, 1983; Papoulis, 1984).
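Eqs. (2)-(4) can be evaluated numerically by forming the analytic signal x(t) + j x̂(t) through its one-sided spectrum (a standard FFT construction; the AM test signal below is our own choice):

```python
import numpy as np

def analytic_signal(x):
    # Build x + j * Hilbert{x} by zeroing negative frequencies (FFT method)
    n = len(x)
    X = np.fft.fft(x)
    h = np.zeros(n)
    h[0] = 1.0
    if n % 2 == 0:
        h[n // 2] = 1.0
        h[1:n // 2] = 2.0
    else:
        h[1:(n + 1) // 2] = 2.0
    return np.fft.ifft(X * h)

t = np.linspace(0, 1, 1000, endpoint=False)
x = (1 + 0.5 * np.cos(2 * np.pi * 5 * t)) * np.cos(2 * np.pi * 50 * t)

z = analytic_signal(x)
A = np.abs(z)                   # envelope A(t), Eq. (3)
phi = np.unwrap(np.angle(z))    # phase phi(t); its slope gives omega_0, Eq. (5)
```

For this amplitude-modulated test signal the recovered envelope matches 1 + 0.5 cos(2π5t) and the phase slope matches the 50 Hz carrier, i.e., ω₀ = 2π·50.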
3. Representation by Expansion in a Series of Orthonormal Functions

The signal is expanded in a series of basis functions,

x(t) = Σₙ cₙ φₙ(t),    (6a)

where, in most cases, the basis functions are orthonormal on some interval (T may be finite or infinite),

∫_T φₙ(t) φₘ*(t) dt = δₙₘ,    (6b)

and the coefficients cₙ are given by

cₙ = ∫_T x(t) φₙ*(t) dt.    (6c)

The coefficients serve as a representation of x(t) in the signal vector space spanned by the family of basis functions. The equality in (6a) becomes an approximation if only N terms are used. In general, the coefficients are computed according to some criterion of minimum error for N-term approximations, and also of independence of the set {cₙ} from N. For example, the specific form of the coefficients (6c) results from the minimization of the mean-square error, for given basis functions and given N. An example of such an expansion is the Fourier series expansion in the interval (−T/2, T/2), T = 2π/ω₁, with φₙ = exp(jnω₁t). Other popular basis functions are families of orthogonal polynomials, the sinc functions, Walsh functions, etc.

A related topic is the expansion of random processes in a series of orthonormal functions. In this case the coefficients are random variables. For a given family of basis functions, the optimal coefficients under the same minimum error criterion are as in (6c). An additional problem appears: What is the optimal family of functions, in the sense of decorrelating the coefficients? The solution to this problem is the Karhunen-Loève expansion, in which the basis functions are the eigenfunctions of the integral equation involving the autocorrelation of x(t) (Van Trees, 1968; Proakis, 1983).

The theory of representation of a signal x(t) by a linear combination of a family of basis functions φₙ(t) is called "the generalized spectral theory" (Trachtman, 1972). This topic is well developed and finds numerous applications, for example, in signal classification and signal detection (Van Trees, 1968; Franks, 1969; Proakis, 1983; Proakis and Manolakis, 1989).
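A numerical sketch of Eqs. (6a)-(6c) for the Fourier case on (−T/2, T/2); here the basis is normalized by 1/√T so that the orthonormality of (6b) holds exactly, and the square-wave test signal is an illustrative choice:

```python
import numpy as np

# Fourier case of Eq. (6): phi_n(t) = exp(j n w1 t) / sqrt(T) on (-T/2, T/2)
T = 2 * np.pi
w1 = 2 * np.pi / T
t = np.linspace(-T / 2, T / 2, 4096, endpoint=False)
dt = t[1] - t[0]
x = np.sign(np.sin(t))        # square-wave test signal

def phi(n):
    return np.exp(1j * n * w1 * t) / np.sqrt(T)

def coeff(n):
    # c_n = integral of x(t) * conj(phi_n(t)) dt, Eq. (6c)
    return np.sum(x * np.conj(phi(n))) * dt

# N-term approximation per Eq. (6a)
N = 25
x_hat = sum(coeff(n) * phi(n) for n in range(-N, N + 1))
mse = float(np.mean(np.abs(x_hat - x) ** 2))
```

As expected for the square wave, the even-harmonic coefficients vanish and the truncated sum reproduces the signal up to a small mean-square error (plus Gibbs ripple near the discontinuities).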
4. Description by Means of Integral Transforms

The signal is represented in another domain, of the variable s, by a new function X(s), related to the original function by

X(s) = ∫ₐᵇ x(t) Θ(s, t) dt,    (7a)

x(t) = ∫ₐᵇ X(s) Ψ(t, s) ds,    (7b)

where Θ(s, t), Ψ(t, s) are the kernels of the direct and inverse transforms, respectively. These transforms can be considered as continuous analogs of the discrete representations (6). The result of the direct transform is a function X(s) of a new variable s. This function is the image of the original signal x(t), and this new image is the object of processing in this method. The integral transforms which are most commonly used in signal processing are the Laplace and Fourier transforms (Proakis and Manolakis, 1989; Bracewell, 1986; Elliot and Rao, 1982).
5. Mixed Time-Frequency Transforms

The Fourier transform and its relatives are global, i.e., are based on integrating over the entire time axis, and as such are inadequate for describing nonstationary signals. For this kind of signal one uses, instead of global transforms, various types of localized transforms, called mixed time-frequency transforms, such as the short-time Fourier transform (STFT), the Gabor transform, and the Wigner-Ville distribution. All these transforms map the signal into a time-frequency plane, where certain properties of the signal are made visible. The STFT or windowed Fourier transform is defined as

X(ω, t) = ∫_{t−τ/2}^{t+τ/2} x(α) e^{−jω(α−t)} dα.    (8)

This transform adequately describes the characteristics of a varying-spectrum signal. In this definition the signal has been windowed by a rectangular window. In a more general definition, the window shape is arbitrary:

X(ω, t) = ∫_{−∞}^{∞} x(α) w(α − t) e^{−jω(α−t)} dα.    (9)
An important member of this class is the Gabor transform, in which the window is Gaussian-shaped (Bastiaans, 1980; Wexler and Raz, 1990), and which has applications in image analysis and compression (Porat and Zeevi, 1988).
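Eq. (9) can be sketched directly as a numerical quadrature; with the Gaussian window below it becomes a Gabor transform. The frequency-hopping test signal and all parameter values are our assumptions:

```python
import numpy as np

def windowed_ft(x, t_grid, window, t0, omega):
    # One sample X(omega, t0) of Eq. (9):
    # integral of x(a) * w(a - t0) * exp(-j*omega*(a - t0)) da
    da = t_grid[1] - t_grid[0]
    u = t_grid - t0
    return np.sum(x * window(u) * np.exp(-1j * omega * u)) * da

# Gaussian window turns Eq. (9) into the Gabor transform
sigma = 0.05
gauss = lambda u: np.exp(-u ** 2 / (2 * sigma ** 2))

# Test signal: 20 Hz in the first half, 80 Hz in the second
t = np.linspace(0, 1, 8000, endpoint=False)
x = np.where(t < 0.5, np.cos(2 * np.pi * 20 * t), np.cos(2 * np.pi * 80 * t))

early_20 = abs(windowed_ft(x, t, gauss, 0.25, 2 * np.pi * 20))
early_80 = abs(windowed_ft(x, t, gauss, 0.25, 2 * np.pi * 80))
late_80 = abs(windowed_ft(x, t, gauss, 0.75, 2 * np.pi * 80))
late_20 = abs(windowed_ft(x, t, gauss, 0.75, 2 * np.pi * 20))
```

At t₀ = 0.25 the transform responds strongly at 20 Hz and negligibly at 80 Hz, and the reverse at t₀ = 0.75: exactly the time localization that the global Fourier transform cannot provide.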
The Wigner-Ville distribution (WD) is defined as

WDₓ(ω, t) = ∫_{−∞}^{∞} x(t + τ/2) x*(t − τ/2) e^{−jωτ} dτ.    (10)
Its interest is connected with some geometric properties in the time-frequency plane, which enable localization of various signals, particularly of linearly frequency-modulated signals (Mecklenbrauker, 1993). There also exist generalizations of the WD. One of these is the Cohen class of distributions (Cohen, 1989). These distributions can be used to classify signals from the point of view of complexity and measure of information (Orr, 1991; Baraniuk et al., 1993).

6. Wavelet Transforms

Another localizing analysis, which has gained much attention recently, is wavelet analysis. This is basically a generalization of the Fourier mixed time-frequency analysis described above. The main differences are replacing the frequency by a scale variable, and the basic sinusoid by another suitably chosen signal g(t), called a wavelet. The signal is projected on a family of functions that are derived from g(t) by time scaling and shifting. The wavelet transform (WT) is defined as

WTₓ(a, b) = (1/√a) ∫_{−∞}^{∞} x(t) g*((t − b)/a) dt.    (11)
Wavelet analysis was first introduced by Grossmann and Morlet (1984), and developed by Daubechies et al. (1986), Daubechies (1988, 1990), and Mallat (1989a, 1989b). It is now recognized as a powerful tool in signal processing and has a rapidly growing literature, from among which we mention Combes et al. (1990); Meyer (1991); Special issue (1992); Schumaker and Webb (1993). In the STFT, Gabor, and similar transforms there is a fundamental problem: the fact that the window length is constant makes it impossible to represent both low- and high-frequency components equally well. We have an uncertainty-principle issue. Short windows contain enough high-frequency cycles to localize their spectrum well, but the spectra of low-frequency components are not accurately determined. Larger windows can do this, but then the high-frequency components are not well localized in time. What we need is a window whose size adapts itself to the frequency contents of the signal. This is precisely what the wavelet transform does: the scaling variable allows dilating or shrinking the effective width according to the frequency component being analyzed.
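To illustrate how the WD of Eq. (10) localizes a linearly frequency-modulated signal, here is a minimal discrete pseudo-Wigner sketch (windowed in lag to a finite ±K samples), assuming NumPy; the function name and parameters are ours, not from the text.

```python
# Discrete pseudo-Wigner-Ville sketch: FFT over the lag product of Eq. (10).
import numpy as np

def pseudo_wigner(x, n, K=64, nfft=256):
    """|WD_x| at time index n, from the lag product x[n+k] * conj(x[n-k])."""
    k = np.arange(-K, K + 1)
    prod = x[n + k] * np.conj(x[n - k])
    # FFT in the lag variable k; because the lag enters as t +/- tau/2,
    # each FFT bin corresponds to HALF the usual digital frequency.
    return np.abs(np.fft.fft(prod, nfft))

# Linear FM (chirp): instantaneous frequency f0 + c*t, the classic case
# that the WD localizes well.
fs = 1000.0
t = np.arange(2048) / fs
f0, c = 50.0, 100.0
x = np.exp(2j * np.pi * (f0 * t + 0.5 * c * t**2))  # analytic chirp
n = 1024
W = pseudo_wigner(x, n)
f_peak = W[:128].argmax() * fs / (2 * 256)   # peak bin -> Hz (halved axis)
f_inst = f0 + c * t[n]                       # true instantaneous frequency
```

For the chirp, the lag product is a pure complex exponential at twice the instantaneous frequency, so the WD concentrates along the signal's frequency law in the time-frequency plane.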
ALEXANDER M. ZAYEZDNY AND ILAN DRUCKMANN
7. Description by Equivalent Linear Time-Invariant Discrete Systems
A very fruitful concept in signal theory is the description of a signal by an equivalent system (mostly linear). The signal is viewed as the output of a linear time-invariant system, and therefore, instead of the signal, we describe the system and the input signal. This method is useful if the system and the input are simpler to characterize than the output signal. The linear system can be described by its impulse response, or by the corresponding transfer function. The most widely used models of this kind are discrete linear systems of the forms all-poles, called autoregressive (AR); all-zeros, called moving-average (MA); and poles-zeros, called ARMA. The input to these systems is in general considered to be a white stochastic process or a delta sequence. Both these inputs have a flat power spectrum. In fact, the procedures for determining the parameters of these models are based on flattening (whitening) the spectrum of the assumed input, and minimizing its energy. These models, especially AR and ARMA, have been very popular for 20 years, and have applications in many areas: speech processing, recognition, and synthesis; seismic data processing; biomedical signals; spectral estimation; data compression; etc. They have been thoroughly investigated and have become part of many textbooks on signal processing. We shall emphasize two conceptual aspects of the models. (a) They may reflect an actual physical mechanism generating the signal, as in the case of speech signals. (b) They offer a possibility of separating the signal into two parts: a known, deterministic part, described as the system, whose parameters can be determined; and an unknown part, described as a stochastic process, the input to the system. The interaction between the two parts forms an approximation to the real signal. These remarks can be applied to any model consisting of a deterministic system and a stochastic input.
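The whitening idea behind AR modeling can be sketched concretely: the classical Yule-Walker normal equations fit the all-poles coefficients from the autocorrelation. A minimal illustration assuming NumPy; the function name and test coefficients are ours.

```python
# Fitting an all-poles (AR) model by the Yule-Walker equations; the fitted
# coefficients are the ones that whiten the assumed input, as described above.
import numpy as np

def ar_yule_walker(x, order):
    """Solve the Yule-Walker normal equations for a[1..p] in
    x[n] = a1*x[n-1] + ... + ap*x[n-p] + e[n]."""
    x = x - x.mean()
    N = len(x)
    r = np.array([np.dot(x[:N - k], x[k:]) / N for k in range(order + 1)])
    R = np.array([[r[abs(i - j)] for j in range(order)] for i in range(order)])
    return np.linalg.solve(R, r[1:order + 1])

# Synthesize an AR(2) process with known coefficients and recover them.
rng = np.random.default_rng(0)
a_true = np.array([1.3, -0.6])
e = rng.standard_normal(20000)          # white input with flat spectrum
x = np.zeros_like(e)
for n in range(2, len(x)):
    x[n] = a_true[0] * x[n - 1] + a_true[1] * x[n - 2] + e[n]
a_hat = ar_yule_walker(x, 2)
```

The recovered coefficients describe the deterministic "system" part; the residual after inverse filtering approximates the white "input" part, which is exactly the separation discussed in point (b).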
There are cases where the linear time-invariant system is not a good enough approximation. Two directions of generalization exist: time-varying AR models and nonlinear systems.

8. Higher-Order Spectra
One of the most-used tools in signal description is the power spectral density (PSD) or power spectrum (Papoulis, 1977; Kay, 1988). It is defined as the Fourier transform (FT) of the autocorrelation of the signal (instead of the FT of the signal), and in practice is used especially for discrete signals. The motivation for this definition and also for the popularity of the tool is the
fact that it can be defined for both deterministic and random processes, and therefore Fourier analysis can be extended to random processes. Given a stationary random process x and a deterministic signal y, we define their autocorrelations, respectively, as

r_x[k] = E{x[n] x[n+k]},   r_y[k] = lim_{N→∞} (1/(2N+1)) Σ_{n=−N}^{N} y[n] y[n+k],   (12)

and the power spectrum, for both cases, as

P(ω) = Σ_{k=−∞}^{∞} r[k] exp(−jωk),   |ω| ≤ π.   (13)
The description of signals and their processing using these tools (autocorrelation, PSD) have some limitations (e.g., loss of information about phase, or about harmonic relationships between spectral components). These have caused the evolution, in recent years, of more powerful extensions to the PSD: the higher-order spectra (HOS) or polyspectra (Brillinger and Rosenblatt, 1967; Nikias and Raghuveer, 1987; Mendel, 1991). These are based on the cumulants of the process, in the same manner that the PSD is based on the autocorrelation. The joint cumulants of a set of random variables (r.v.'s) are defined in terms of derivatives of the logarithmic characteristic function of the r.v.'s (Papoulis, 1984). They can be expressed in terms of the moments. In the case of stationary processes, a cumulant is a function of several delays between the samples. For example, the cumulants of order 2 and 3 are identical, respectively, with the moments of the same order. Given a stationary random process with zero mean, the second-order cumulant is the autocorrelation, and the third-order cumulant is

c₃[k₁, k₂] = m₃[k₁, k₂] = E{x[n] x[n+k₁] x[n+k₂]}.   (14)
The fourth-order cumulant is not identical with the fourth moment m₄, but is given by

c₄[k₁, k₂, k₃] = m₄[k₁, k₂, k₃] − r[k₁] r[k₂ − k₃] − r[k₂] r[k₃ − k₁] − r[k₃] r[k₁ − k₂].   (15)

The spectrum of order N is defined (Nikias and Raghuveer, 1987) as

C_N(ω₁, ω₂, ..., ω_{N−1}) = Σ_{k₁=−∞}^{∞} ··· Σ_{k_{N−1}=−∞}^{∞} c_N[k₁, ..., k_{N−1}] exp{−j(ω₁k₁ + ··· + ω_{N−1}k_{N−1})}.   (16)
For example, the third-order cumulant spectrum or bispectrum is defined as

C₃(ω₁, ω₂) = Σ_{k₁=−∞}^{∞} Σ_{k₂=−∞}^{∞} c₃[k₁, k₂] exp{−j(ω₁k₁ + ω₂k₂)}.   (17)
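A brute-force estimate of the third-order cumulant of Eq. (14) over a small lag window, and of the bispectrum of Eq. (17) as its 2-D DFT, can be sketched as follows (assuming NumPy; names and test processes are ours). It also checks the Gaussianity property discussed below: the third cumulant of a Gaussian process is near zero, while that of a skewed process is not.

```python
# Sample estimate of c3[k1, k2] (Eq. (14)) and bispectrum samples (Eq. (17)).
import numpy as np

def third_cumulant(x, K):
    """c3[k1, k2] for |k1|, |k2| <= K, estimated by sample averages."""
    x = x - x.mean()
    N = len(x)
    c3 = np.zeros((2 * K + 1, 2 * K + 1))
    for i, k1 in enumerate(range(-K, K + 1)):
        for j, k2 in enumerate(range(-K, K + 1)):
            lo = max(0, -k1, -k2)          # keep all three indices in range
            hi = min(N, N - k1, N - k2)
            n = np.arange(lo, hi)
            c3[i, j] = np.mean(x[n] * x[n + k1] * x[n + k2])
    return c3

rng = np.random.default_rng(1)
g = rng.standard_normal(50000)             # Gaussian: third cumulant ~ 0
s = g**2 - 1.0                             # skewed, non-Gaussian: E[s^3] = 8
c3_gauss = third_cumulant(g, 3)
c3_skew = third_cumulant(s, 3)
B = np.fft.fft2(c3_skew)                   # bispectrum samples, Eq. (17)
```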
HOS have several properties that make them useful: the higher-order cumulants (HOC, N > 2) are zero for Gaussian processes, therefore higher-order spectra can serve as a measure of gaussianity; if a group of random variables can be divided into mutually independent subgroups, the HOC are zero, therefore HOS provide a criterion of statistical independence; if a component frequency of an output process is harmonically related to other frequencies, HOS can distinguish whether this is due to nonlinear coupling or whether these are independent components; etc. HOS techniques have been used for identification of nonlinear and non-minimum-phase processes, speech and biomedical signal processing, passive sonar, source separation, blind deconvolution, etc. (Matsuoka and Ulrych, 1984; Raghuveer and Nikias, 1986; Nikias and Raghuveer, 1987; Giannakis and Mendel, 1989; Mendel, 1991; Lacoume, 1991; Lacoume et al., 1992). The following two methods are structural properties methods.
9. Representation by State Functions and Generating Differential Equations

In this method the given signal x(t) is transformed into a generating differential equation (GDE), which has one partial solution identically equal to x(t). It is important to emphasize that for all twice-differentiable signals there exists a GDE (see Section II.C):

ẍ + a₁(t)ẋ + a₂(t)x = 0,   (18)
which can be termed the canonical linear form GDE. Equation (18), together with appropriate initial conditions, is the image of x(t). For the method of differential transforms we created a special mathematical apparatus by using nonlinear relationships between derivatives in the form of new variables (state functions). The basis of this method, which is called the method of structural properties (Zayezdny and Druckmann, 1991), is presented in Section II.

10. Description by Generalized Phase Planes
According to this method, the signal x(t) is represented by the image v(u), where u and v are any two of the above-mentioned state functions, viewed as operators applied on the signal x. As a special case, when the operators are u = 1, v = d/dt, the image is ẋ(x), the Poincaré phase plane. In the general case, the curves v(u), where u = u[x(t)], v = v[x(t)], are called generating phase trajectories. Signal processing is performed on these curves. The theory
and practice of using generating phase trajectories are presented in Section III.
Now we shall turn to criteria for comparing different methods of signal description, from the point of view of their efficiency for signal processing.

C. Criteria for Comparing Different Methods of Signal Description
The problem of signal description is an essentially complex one, and the advantages of different methods can be estimated only by reference to a number of qualitative criteria, or characteristics. Therefore, we cannot give an exhaustive and definite answer to the question of which method is the better one in the context of a real problem of analysis and, what is more important, of synthesis. At the same time, however, we present below a number of criteria which are helpful for selecting the most preferable method from those listed in Section I.B. We discuss here only the main criteria; other criteria, such as physical size of devices, computational complexity, and cost, are not addressed here. We propose the following criteria for comparing signal description methods.

1. Constructability Criterion
The meaning of the constructability criterion is the possibility of realizing (constructing) a device directly from the formulas of the description. For example, a description by infinite series or polynomials is less preferable than a description by closed-form formulas, which are accessible for realization. By accessibility for realization we mean the possibility of obtaining a working algorithm or block diagram.

2. Technologicality and Universality of Realization

By technologicality we mean the possibility of decomposing the block diagram of the algorithm or system into elementary blocks of as small a number of types as possible. Universality means that these elementary blocks or elements can be used in constructing systems for a wide spectrum of applications. For example, the malfunction diagnosis system described in Section IV.G is made up of a great number of only one type of universal element: dissipantors (see Section II.C, list 3).

3. "Information per Symbol" as a Characteristic of Description
The most powerful methods of description are those which make use of complex concepts, i.e., concepts which combine simpler ones. For example, the “vector” concept is a more powerful tool than a “scalar,” and the
“matrix” concept, as a generalization of “vector,” is still more powerful. We say that the matrix and vector symbols are more informative. Another example is complex variables used as the “phasor” symbol to represent sinusoidal signals.
4. Transparency of Description

High transparency of description means that the parameters associated with the description have an evident physical meaning and give insight into the essence of the process. For example, according to this criterion, description by polynomials is less transparent than description by closed-form formulas, which reflect a clear physical meaning.
5. Universality of Description

Universality of description tools means that these tools can be easily adapted to general classes of signals or systems. For example, autoregressive (AR) models are suited for multisinusoidal-multiexponential signals, or equivalently, for linear systems. Related to these is the large class of Prony methods for estimating parameters of multiple sinusoids. Superior to these from the point of view of universality is the method of wavelet analysis, which is suited to describe and analyze arbitrary signals. Also more universal is the method of structural properties proposed in this chapter, which provides the means (generating differential equations) to describe arbitrary signals.

6. Additional Characteristics of Description Methods
Different description methods or different models have various influences on the characteristics of the real systems or devices that are implemented on the basis of these methods. There are even situations when models which are mathematically equivalent produce real systems that are widely different from the point of view of stability or immunity to noise, for example. Therefore, the choice of the description method may have a decisive importance for the characteristics of the real, synthesized system. We give some examples of these real-system characteristics, which may serve as criteria for comparison among description methods.

1. Computational precision of the signal processing operations. Related to this is the sensitivity to the quantization of system parameters.

2. Sensitivity of output parameters with respect to stability of system elements, external conditions, or irrelevant system parameters. Is it possible to minimize the influence of some parameters by creating a system which is invariant to them? See Section II.D.
3. Resolution of measurement (of parameters): Does the description method offer the possibility of increasing the resolution? See Section IV.B.2.

4. The frequency range and the dynamic range of the real system.
It is not possible to assess the relative importance of each criterion exactly, and moreover, this depends on the specific application. However, the list of criteria we have presented may serve as a guideline for comparing the description methods and for selecting the most advantageous.
II. THEORETICAL BASIS OF THE STRUCTURAL PROPERTIES METHOD

A. Basic Definitions and Terminology
When in Section I.B we reviewed the signal description methods, we mentioned the "method of structural properties." The main idea of this method is that, for describing a signal, we use a transformation from the original x(t) to a generating differential equation (GDE). This GDE is the object on which signal processing is performed, in accordance with the requirements of the specific problem. We mentioned there that, to this end, a special mathematical apparatus was created; the basis of this apparatus is described in this section. The main feature of this special apparatus consists of changing the variable x into a related variable which is more informative. This approach enables us, by essentially increasing the information per symbol, to modify the canonical form of differential equations in the interests of signal processing. For this description it is necessary to introduce some new definitions and concepts, to which this section is dedicated (Zayezdny, 1967; Zayezdny and Druckmann, 1991). In addition to the function x(t), we shall consider the current variation of x(t),

Δx(t) = x(t + Δt) − x(t),

and the relative current variation,

χ(t) = Δx(t)/x(t).   (19)

This function is called the relative function or the chi function. By means of this function we obtain an increase of information per symbol, relative to the signal x(t).
Further, we introduce the velocity of the chi function, defined as follows:

δ_x(t) = lim_{Δt→0} χ(t)/Δt,

or

δ_x(t) = (dx/dt)/x = ẋ/x   [s⁻¹].   (20)
We shall call this function the dissipant of the function (signal) x(t). The reason for this terminology will be given below. The new function δ_x(t) can be used to express a linear homogeneous differential equation of first order with variable coefficient in a more concise form:

dx/dt − δ_x(t)x = 0,   x(0) = x₀.   (21)
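The defining relation can be checked numerically: for x(t) = e^{at} the dissipant is the constant a, independent of t and of the initial value. A minimal sketch assuming NumPy, with finite differences standing in for the derivative; the function name dissipant is ours.

```python
# Numerical check of the dissipant, Eq. (20): for x(t) = e^{at},
# delta_x(t) = x'/x = a everywhere.
import numpy as np

def dissipant(x, dt):
    """delta_x = x'/x via central differences (endpoints dropped)."""
    xdot = (x[2:] - x[:-2]) / (2 * dt)
    return xdot / x[1:-1]

dt = 1e-3
t = np.arange(0, 1, dt)
a = -2.5
x = np.exp(a * t)
d = dissipant(x, dt)
# d should sit at a = -2.5 for all t, whatever the amplitude of x.
```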
Equation (21) represents a differential image of x(t) and can be used as an object of signal processing instead of the signal x itself. With respect to the velocity of the function, y(t) = ẋ(t), we can introduce the dissipant of velocity,

δ_y(t) = ẏ/y = ẍ/ẋ   [s⁻¹],   (22)

which corresponds to a homogeneous linear differential equation of second order:

ẍ − δ_y(t)ẋ = 0,   x(0) = x₀,   ẋ(0) = ẋ₀.   (23)

We introduce also the product of the dissipants, δ_xδ_y = κ_x, termed the conservant of x, and their ratio, δ_y/δ_x = λ_x, termed the condiant of x:

κ_x(t) = δ_xδ_y = (ẋ/x)(ẍ/ẋ) = ẍ/x   [s⁻²],   (24)

ẍ − κ_x(t)x = 0,   (25)

λ_x(t) = δ_y/δ_x = (ẍ/ẋ)/(ẋ/x) = ẍx/ẋ²   [dimensionless],   (26)

ẍ − λ_x(t)(ẋ²/x) = 0.   (27)
Each of the new functions χ, δ_x, κ_x, λ_x contains in one symbol not only information about one point, as in the signal x(t), but also information about
the current variation of x, its relationship to the value of the signal, and its velocity and acceleration, i.e., information about the state of a point. For this reason we call these functions state functions. These four state functions are called elementary state functions because the solutions of their corresponding differential equations with constant coefficients are functions which are commonly called "elementary functions": the exponential function e^{at}, trigonometric and hyperbolic functions sin at, sinh at, ..., power functions (t + a)^q. We once more draw attention to the fact that, in the structural properties method, differential equations are modified by using new variables, nonlinear combinations of derivatives, which are themselves functions of time. In the following sections we shall develop this subject.

B. Elementary State Functions

Definition.
A state function (SF) of a signal x is an expression of the form

u(t) = u(L₁x, L₂x, ..., L_n x),   (28)
where L₁, L₂, ... are arbitrary operators. The operators commonly used in the structural properties method are the differentiator and integrator operators. Other operators may also be used occasionally, e.g., the Hilbert transform, raising to a power, the logarithm, etc. In the last section we introduced four elementary state functions, formulas (20), (22), (24), and (26); these functions play a central role in the structural properties method. Each elementary function is characterized by its physical meaning and some properties. Nonelementary state functions will be considered in the next section.

1. The Dissipant of Function x(t)

For the interpretation of the physical meaning of δ_x, and the reason for its name, we must turn to the canonical GDE (18), rewriting it after dividing by x in the following form:

κ_x + a₁(t)δ_x + a₂(t) = 0.   (29)
The second term of this expression is connected with energy dissipation, and in the theory of differential equations is called the dissipative term [or resistance term (e.g., Boyce and DiPrima, 1969, p. 417)]. Consequently, δ_x will be called the dissipant. Another way to see this is the following. Let us define x²(t) as the instantaneous power of the signal x, and let us compute the relative variation rate of this
power, P(t) = x²(t):

Ṗ/P = (x²)˙/x² = 2ẋ/x = 2δ_x.

We see that δ_x expresses the relative rate of variation of P. Namely, if δ_x < 0, then P decreases, i.e., energy dissipation occurs; if δ_x = 0, then there is no instantaneous change of P; and if δ_x > 0, then P increases. We note that when x > 0, the dissipant equals the logarithmic derivative,

δ_x = ẋ/x = (d/dt) ln x = ln′ x,   x > 0.
Clearly, the dissipant generalizes the definition of the logarithmic derivative, because the dissipant is not subject to the limitation x > 0. The physical meaning of the dissipant follows directly from its definition:

δ_x(t) = (dx/x)/dt   [s⁻¹],

and is read as: the dissipant equals the velocity of the relative variations of the function x(t). Another interpretation follows from the fact that ẋ, the velocity of the current variations of x(t), equals the product of x itself and the dissipant (the velocity of relative variations),

ẋ = xδ_x,   (30)

and, by analogy,

ẍ = xδ_xδ_y.   (31)

In the Poincaré phase plane ẋ = f(x), the dissipant equals the slope of the tangent to the phase trajectory. In the conventional methods of signal description we use primary notions: the point of the function x(t) and its velocity ẋ(t). In the method of structural properties we introduce new notions: the point of the relative function (Δx/x)(t) and the velocity of its relative variations, δ_x(t). This approach requires a new form of thinking, because the velocity of relative variations is a notion of deeper insight, containing more information, than the velocity of the process itself. This reorientation of thought needs some training, similar to that required for the transition from the time representation to the spectral representation of signals.
2. The Dissipant of the Velocity of Function x(t)

The physical meaning of the dissipant of velocity δ_y, y = ẋ, is identical to the meaning of δ_x, but with x replaced by y. Here we must warn against confusion with the concept of acceleration when considering relative variations. The dissipant δ_y is not an acceleration of relative variations, and it has the dimension [s⁻¹]. It expresses the velocity of variations of ẋ, but not relative variations. The velocity of the velocity of relative variations, i.e., the velocity of δ_x, equals δ̇_x:

δ̇_x = κ_x − δ_x²,   (32)

or, by Eq. (24),

δ̇_x = δ_x(δ_y − δ_x)   [s⁻²].   (33)

We can see that the acceleration of relative variations is defined by (32) and has dimension [s⁻²].

3. The Conservant of Function x(t)
The values of the conservant can serve as a measure of the closeness of a given process to a conservative process. According to (32), the following relationship exists between κ_x and δ_x:

κ_x = δ_x² + δ̇_x.   (34)

This equation gives the possibility of reducing the order of some differential equations. Another interpretation of the conservant is as a basic operator for the solution of a general linear differential equation of order ≤ 2; that is, using this operator and its inverse (in analogy to the differentiator and integrator operators) enables us to express the solution of such an equation analytically. Let us consider the conservant as an operator K, and define its inverse K⁻¹, by analogy with differentiation and its inverse, the integration operator:

K[x(t)] = ẍ/x = κ_x(t),   (35a)

K⁻¹: κ(t) → x(t).   (35b)

By introducing these definitions, it is possible to express the solution of a general second-order linear differential equation, as explained in the following theorem.

Theorem. Given a linear homogeneous DE of order 2,

ẍ + p(t)ẋ + q(t)x = 0,   (36)

one of its general solutions is (one solution is sufficient to find both general solutions; the substitution x = u·exp[−(1/2)∫p dt] reduces (36) to ü = [p²/4 + ṗ/2 − q]u):

x(t) = exp[−(1/2)∫p(t) dt] · K⁻¹[p²(t)/4 + ṗ(t)/2 − q(t)].   (37)

To solve a linear DE of order 2 we need a table of anti-kappa (K⁻¹), in addition to a table of integrals (anti-derivatives).
4. The Condiant of Function x(t)

The condiant has the physical meaning of expressing in a succinct way the behavior of the power function. By definition, the condiant is the ratio of the velocity of the relative variations of the function's derivative to the velocity of the relative variation of the function itself. When λ_x is a constant q, the corresponding process is a power function: x = (t + a)^{1/(1−q)}, q ≠ 1.
5. Inversion Formulas

We close this section by giving the formulas for computing the original signal x(t) when one of the elementary state functions is known. The derivation of these formulas follows from solving the appropriate differential equation associated with each elementary state function, (21), (23), (25), (27).

Dissipant δ_x:

ẋ = δ_x(t)x,   dx/x = δ_x(t) dt,   ln x = ∫δ_x(t) dt + c,
x = c·exp[∫δ_x(t) dt].   (38)

Dissipant δ_y:

ẏ = δ_y(t)y,   dy/y = δ_y(t) dt,   y = c₁·exp[∫δ_y(t) dt],   x = ∫y(τ) dτ,
x(t) = c₁ ∫ exp[∫^τ δ_y(θ) dθ] dτ + c₂.   (39)

Conservant κ_x. Given ẍ = κ_x(t)x, we make the transition from κ_x to δ_x by (20) and obtain the Riccati equation:

δ̇_x + δ_x² − κ_x(t) = 0.   (40)

Solving this equation numerically gives δ_x(t), and from this we can obtain x(t) by (38).

Condiant λ_x. Given ẍ = λ_x(t)ẋ²/x, Eq. (33) gives δ̇_x = (λ_x(t) − 1)δ_x², whose solution is

1/δ_x(t) = c − ∫(λ_x(t) − 1) dt,   (41)

and from δ_x(t) we again obtain x(t) by (38).
C. Nonelementary State Functions

In addition to the elementary state functions, which were described in detail in the preceding subsection, nonelementary state functions are also widely used in the method of structural properties. A great variety of nonelementary state functions may be defined, but in the following we shall present only the most widespread, expressed in the form of basic combinations of elementary state functions (SF) of the signal x and some other functions (e.g., ẋ). The symbols of SFs can be viewed as operators applied on signals. We give below formulas for nonelementary SFs, ordered in groups according to the operation involved. The derivation of these formulas is straightforward and consequently is not given here. These formulas are useful for the design of block diagrams and for transformations between state functions. In the following, the symbols u, v, w designate arbitrary functions, and x designates a combination of u, v, ..., e.g., x(t) = u(t)v(t). The notation δ_x(u, v) is read as follows: the dissipant of the signal (process) x, which is the product of the functions u and v.

1. Separation of a product (or quotient) of functions under the application of the elementary SF operators:

δ_x(uv) = (uv)′/(uv) = δ(u) + δ(v) = δ_u + δ_v,   δ_x(u/v) = δ_u − δ_v,   (42)

κ_x(uv) = (uv)″/(uv) = κ_u + κ_v + 2δ_uδ_v.   (43)
3. Formulas for expressing different state functions by means of only the dissipant δ_x and the original function x (using ẋ = xδ_x):

δ_y = δ(ẋ) = δ(xδ_x),   (44a)

κ_x = δ_xδ_y = δ_x·δ(xδ_x),   (44b)

δ_ẏ = δ(ẍ) = δ(xδ_x) + δ[δ(xδ_x)],   (44c)

κ_y = δ_yδ_ẏ = [δ(xδ_x)]² + δ(xδ_x)·δ[δ(xδ_x)],   (44d)

λ_x = δ_y/δ_x = δ(xδ_x)/δ_x.   (44e)

The formulas of this group give the possibility of generating all state functions by using only one element, the dissipantor, whose output is the dissipant of its input (see also Section I.C.2).
4. Formulas for expressing the dissipant of elementary state functions and of their derivatives (here δ_z = δ(ẍ)):

δ(δ_x) = δ(ẋ/x) = δ_y − δ_x = δ(xδ_x) − δ_x,   (45a)

δ(δ_y) = δ(ẍ/ẋ) = δ_z − δ_y,   (45b)

δ(κ_x) = δ(δ_xδ_y) = δ(δ_x) + δ(δ_y) = δ_z − δ_x,   (45c)

δ(λ_x) = δ(δ_y/δ_x) = δ(δ_y) − δ(δ_x) = δ_z + δ_x − 2δ_y,   (45d)

δ(δ̇_x) = δ(δ_x) + δ[δ(δ_x)],   (45e)

δ(δ̇_y) = δ(δ_y) + δ[δ(δ_y)],   (45f)

δ(κ̇_x) = δ(κ_x) + δ[δ(κ_x)],   (45g)

δ(λ̇_x) = δ(λ_x) + δ[δ(λ_x)].   (45h)
5. Formulas for expressing differences of elementary state functions and dissipants of their differences:
We have given above the most common nonelementary state functions; there are many ways to generate others. For example, we can add to the list 1/u, where u is an elementary function, or different functional transformations. We mention here that a similar apparatus of state functions for the representation of signals has been developed by Maizus, who calls them differential operators or S(D) operators (Maizus, 1989, 1992, 1993; Grinberg et al., 1989).
D. Generating Differential Equations

We have already seen that one of the basic tools of the method of signal description by structural properties is the state functions, which are nonlinear combinations of the signal and its derivatives. These state functions serve as images of the original signal for the purpose of signal processing. Another basic tool of the method of structural properties is the generating differential equation (GDE).

Definition 1: Generating Differential Equation (GDE). Let x(t) be a signal. A differential equation of which x is the solution (with given appropriate initial conditions) is called a generating differential equation (GDE) of x.

A general form of a GDE is

GDE:   F(x, ẋ, ẍ, ...; a, b, c, ...; t) = 0,   (47)

where a, b, c, ... are parameters contained in the signal x(t). In addition to this usual form, we can rewrite any GDE in a new form, using state functions instead of the variable x. Such equations are called differential equations of states.

Definition 2: Differential Equation of States (DES). A differential equation of states (DES) is an equation of the form

DES:   F[d₁(t), d₂(t), ...] = 0,   (48)

where d₁(t), d₂(t), ... are state functions.
Examples. Differential equations (DE) versus DES:

DE: ẍ + a₁(t)ẋ + a₂(t)x = 0   —   DES: κ_x + a₁(t)δ_x + a₂(t) = 0

DE: x⃛/ẋ + 2ẋ²/x² + ẍ/x = 0   —   DES: κ_y + 2δ_x² + κ_x = 0
We should pay attention to the difference between the use of differential equations (DE) here and in the classical context. In the theory of differential equations the problem consists of finding the solution of a given DE, i.e., an analysis problem. In the signal processing context the signal is given, and the problem consists of constructing a DE corresponding to this signal; i.e., our problem is the synthesis of a DE. Methods for the synthesis of GDEs will be considered below. A GDE of the form (47) contains parameters a, b, .... There are instances when it is useful to consider GDEs containing one parameter only, or no parameters at all. Therefore we define the following.
Definition 3: Ensemble GDE (EGDE). Let {x(t; a, b, c, ...)} be a family of signals dependent on the parameters a, b, c, .... If there exists a GDE of the form

GDE:   F(x, ẋ, ẍ, ...; t) = 0,   (49)

i.e., independent of a, b, c, ..., then it is called an ensemble GDE of the signal family (signal ensemble) x.

A GDE which is not an ensemble GDE is called a partial GDE. A partial GDE may contain all the parameters a, b, c, ..., or some of them, e.g., only the parameter a:

GDE:   F_a(x, ẋ, ẍ, ...; a; t) = 0.   (50)

Now we shall present a procedure by which we can decrease the number of parameters on which a partial GDE depends until, ultimately, an ensemble GDE is obtained. Let us assume a GDE of the form (47). By differentiating with respect to t, we obtain a new equation,

GDE:   G(x, ẋ, ẍ, ...; a, b, c, ...; t) = 0.   (51)

Then we can eliminate one of the parameters between the two equations. Let us assume (without loss of generality) that the eliminated parameter is a. The result will be a GDE independent of the parameter a,

GDE:   F₁(x, ẋ, ẍ, ...; b, c, ...; t) = 0.   (52)

By performing this step a sufficient number of times and eliminating all the parameters one by one, we shall ultimately reach a stage in which the resulting equation will be an ensemble GDE.
Example. For a sinusoidal signal x = b cos(ωt + θ), we have

GDE(x, ẍ; ω):   ẍ + ω²x = 0,   (53)

GDE(x, ẋ, ẍ; b):   ẍ(b² − x²) + ẋ²x = 0,   (54)

EGDE(x, ẋ, ẍ, x⃛):   x⃛·x − ẍ·ẋ = 0.   (55)
Obtaining a GDE which contains only one parameter can be of interest in the following situation. Suppose we want to use a GDE to measure a certain parameter a. This is done by solving the GDE expression (viewed as an algebraic equation) for a. The measuring algorithm is then

a = F_a(x, ẋ, ẍ, ...; b, c, ...; t).   (56)

This expression depends in general on other parameters b, c, .... If we make use of a GDE which contains only a, we obtain a measuring algorithm which depends only on x and its derivatives, i.e., is invariant with respect to the other parameters.
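The measuring-algorithm idea of Eq. (56) can be sketched concretely for the sinusoid: its one-parameter GDE ẍ + ω²x = 0 solves algebraically to ω = √(−ẍ/x), an estimate invariant to the amplitude b and phase θ. A finite-difference sketch assuming NumPy; all numeric values are arbitrary test choices of ours.

```python
# Measuring w from x'' + w^2 x = 0, invariant to amplitude b and phase theta.
import numpy as np

dt = 1e-4
t = np.arange(0, 0.1, dt)
b, w_true, theta = 7.3, 40.0, 1.1        # b, theta are nuisance parameters
x = b * np.cos(w_true * t + theta)

xdd = (x[2:] - 2 * x[1:-1] + x[:-2]) / dt**2   # second difference
xm = x[1:-1]
mask = np.abs(xm) > 0.1 * b              # avoid dividing near zeros of x
w_est = np.sqrt(-xdd[mask] / xm[mask]).mean()
```

Changing b or theta leaves w_est unchanged, which is exactly the invariance the one-parameter GDE provides.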
In the procedure for parameter elimination that we presented above, we used differentiation of the original GDE in order to generate new GDEs. In general, we can also use other transformations, e.g., Hilbert transforms (HT), to produce new GDEs. In this case, the resulting GDE will include x, its HT, and their derivatives. In the following we give a list of examples of signals, their GDEs, and DESs, including ensemble equations. Note that the formulation by means of state functions, i.e., in the form of a DES, is more concise.

Example 1. x = e^{at}

Partial:   GDE: ẋ − ax = 0;   DES: δ_x = a
Ensemble:  GDE: ẍx − ẋ² = 0;   DES: δ̇_x = 0, δ(δ_x) = 0

Example 2. x = cos(ωt + φ)

Partial:   GDE: ẍ + ω²x = 0;   DES: κ_x = −ω²
Ensemble:  GDE: x⃛·x − ẍ·ẋ = 0;   DES: κ_y − κ_x = 0, κ̇_x = 0, δ(κ_x) = 0

Example 3. x = t^p, p ≠ 0, 1

Partial:   GDE: tẋ − px = 0;   DES: λ_x = (p − 1)/p
Ensemble:  GDE: x⃛ − 2ẍ²/ẋ + ẍẋ/x = 0;   DES: λ̇_x = 0, δ(δ_y) − δ(δ_x) = 0

Example 4. x = exp(−pt²/2)

Partial:   GDE: ẋ + ptx = 0;   DES: δ_x = −pt
Ensemble:  GDE: x⃛ − 3ẍẋ/x + 2ẋ³/x² = 0;   DES: δ̈_x = 0, (κ_x − δ_x²)˙ = 0

Example 5. x = e^{−at} cos(ωt + φ)

Partial:   GDE: ẍ + 2aẋ + (ω² + a²)x = 0;   DES: κ_x + 2aδ_x + ω² + a² = 0
Ensemble:  DES: δ(κ_y − κ_x) − δ(δ_y − δ_x) = 0
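Any of these GDEs can be checked by finite differences. For instance, the damped sinusoid x = e^{−at}cos(ωt + φ) of Example 5 satisfies ẍ + 2aẋ + (ω² + a²)x = 0; a sketch assuming NumPy, with arbitrary test parameters of ours:

```python
# Numerical check of Example 5's partial GDE for a damped sinusoid.
import numpy as np

a, w, phi = 0.8, 12.0, 0.4
dt = 1e-4
t = np.arange(0, 1, dt)
x = np.exp(-a * t) * np.cos(w * t + phi)

xd = (x[2:] - x[:-2]) / (2 * dt)                 # first derivative
xdd = (x[2:] - 2 * x[1:-1] + x[:-2]) / dt**2     # second derivative
residual = xdd + 2 * a * xd + (w**2 + a**2) * x[1:-1]
# residual should vanish (up to discretization error) for every t.
```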
Given a signal x, there exists an infinite number of corresponding GDEs. Of course, most of them are more complicated in structure than x itself, and hence of no use to us. Our aim is to construct GDEs that have simple coefficients (constant, polynomial, etc.), and in any case, coefficients that have a simpler structure than x itself. These GDEs may be linear or nonlinear. We shall present in the following sections some methods to construct GDEs.

E. A Second-Order Linear Generating Differential Equation Based on the Amplitude-Phase (A-Psi) Representation of a Signal

1. The A-Psi Type of GDE
It is well known that any analytical signal can be represented in the following form (Papoulis, 1984):

x(t) = A(t) cos ψ(t),   (57)

where A(t) is the envelope (or instantaneous amplitude) and ψ(t) is the phase, given by

A(t) = [x²(t) + x̂²(t)]^{1/2},   (58)

ψ(t) = arctan[x̂(t)/x(t)],   (59)

where x̂(t) denotes the Hilbert transform (or conjugate process) of x(t). We also define the instantaneous angular frequency ω(t):

ω(t) = ψ̇(t).   (60)
We shall set out from this representation to synthesize a second-order linear GDE and a corresponding DES.
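The quantities A(t), ψ(t), and ω(t) are readily computed from sampled data with a discrete Hilbert transform. A minimal sketch (ours; `scipy.signal.hilbert` returns the analytic signal x + j x̂):

```python
import numpy as np
from scipy.signal import hilbert

fs = 1000.0                      # sampling rate, Hz (assumed for this example)
t = np.arange(0.0, 2.0, 1.0 / fs)
f0 = 5.0                         # test tone frequency, Hz
x = np.cos(2 * np.pi * f0 * t)   # x(t) = A(t) cos psi(t) with A = 1

z = hilbert(x)                   # analytic signal x + j*x_hat
A = np.abs(z)                    # envelope, Eq. (58)
psi = np.unwrap(np.angle(z))     # phase, Eq. (59)
omega = np.gradient(psi, t)      # instantaneous angular frequency, Eq. (60)

# Away from the record edges the estimates match the known values.
mid = slice(200, -200)
assert np.allclose(A[mid], 1.0, atol=1e-3)
assert np.allclose(omega[mid], 2 * np.pi * f0, rtol=1e-3)
print("envelope and instantaneous frequency recovered")
```

For a pure tone the envelope is constant and ω(t) equals 2πf0; the edge samples are excluded because the FFT-based Hilbert transform is least accurate there.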
Theorem 1. Given a twice-differentiable signal x(t) for which an amplitude-phase representation (57)-(59) exists, it has a GDE of the form

ẍ − (2δ_A + δ_ω)ẋ + [ω² + δ_A(2δ_A + δ_ω) − κ_A]x = 0   (61)

or

κ_x − (2δ_A + δ_ω)δ_x + [ω² + δ_A(2δ_A + δ_ω) − κ_A] = 0   (DES),   (62)

where δ_A = Ȧ/A is the dissipant of the amplitude, δ_ω = ω̇/ω is the dissipant of the frequency, and κ_A = Ä/A is the conservant of the amplitude.
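The identity asserted by Theorem 1, namely that ẍ − (2δ_A + δ_ω)ẋ + [ω² + δ_A(2δ_A + δ_ω) − κ_A]x vanishes for any x = A cos ψ, can be checked numerically. A sketch (ours), with arbitrarily chosen concrete modulation laws:

```python
import sympy as sp

t = sp.symbols('t')
A = 2 + sp.cos(t)                # amplitude modulation (assumed example)
psi = 5 * t + sp.sin(t)          # phase; omega = psi'
x = A * sp.cos(psi)

omega = sp.diff(psi, t)
dA = sp.diff(A, t) / A           # delta_A
dw = sp.diff(omega, t) / omega   # delta_omega
kA = sp.diff(A, t, 2) / A        # kappa_A

# Left-hand side of the A-Psi GDE:
expr = (sp.diff(x, t, 2)
        - (2 * dA + dw) * sp.diff(x, t)
        + (omega**2 + dA * (2 * dA + dw) - kA) * x)

# The expression vanishes identically; check at several sample instants.
for tv in (0.3, 1.1, 2.7):
    assert abs(expr.subs(t, tv).evalf()) < 1e-9
print("A-Psi GDE satisfied")
```

The same check with other choices of A(t) and ψ(t) confirms that the coefficients depend only on the modulation, not on the carrier.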
Proof: We differentiate x twice, then eliminate cos ψ and sin ψ between the three equations for x, ẋ, ẍ. We obtain (61) (Zayezdny and Druckmann, 1991). □

This type of GDE is a special case of the canonical form

ẍ + a₁(t)ẋ + a₀(t)x = 0.   (63)

We shall call it the A-Psi equation type. This DE is a generating equation for any oscillatory signal modulated in both amplitude and frequency. It is possible to simplify the A-Psi GDE (61) by using, instead of the variable x, its normalized counterpart x/A; after performing this substitution we get the following equivalent equation:

κ_{x/A} − δ_ω δ_{x/A} + ω² = 0.   (64)

The equivalence between (64) and (61) is easily proven by expanding the expressions κ(x/A), δ(x/A) by means of Eq. (42), Section II.B. Equation (64) can be written in yet another equivalent form (65) by using the identity κ_ω = δ_ω̇ δ_ω.
2. Special Cases of the A-Psi GDE Type: AM and FM Signals

We shall now derive from (61) special forms of the GDE for two cases: an amplitude-modulated (AM) signal and a frequency-modulated (FM) one. For an AM signal we have ω = ω_c = const., hence δ_ω = 0, and therefore (61) becomes

κ_x − 2δ_A δ_x + (ω_c² + 2δ_A² − κ_A) = 0   (AM).   (66)

For example, if A = A₀e^{−at}, then δ_A = −a, κ_A = a², and we obtain the well-known equation of a dissipative oscillatory circuit:

κ_x + 2aδ_x + (ω_c² + a²) = 0.
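Multiplying the state-function form through by x turns it into the familiar time-domain equation of the damped oscillator; a quick symbolic check (ours):

```python
import sympy as sp

t = sp.symbols('t')
a, wc, A0 = sp.Rational(1, 2), 3, 2      # sample damping, carrier frequency, amplitude
x = A0 * sp.exp(-a * t) * sp.cos(wc * t)

# kappa_x + 2a*delta_x + wc**2 + a**2 = 0, multiplied through by x:
residual = sp.diff(x, t, 2) + 2 * a * sp.diff(x, t) + (wc**2 + a**2) * x
assert sp.simplify(residual) == 0
print("dissipative-circuit equation verified")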
In the case of an FM signal, we have A = const., hence δ_A = 0, κ_A = 0, and (61) takes the simple form

κ_x − δ_ω δ_x + ω² = 0   (FM).   (67)

This equation, if written in the usual form (not with δ, κ), has been used by Hess (1966).

3. Separate Equations for A(t) and ω(t)
In the following we shall derive GDEs for A(t) only, and GDEs for ψ(t) (or ω) only, under the condition of conservativeness, and also in the general case. The resulting equations are encountered in some areas of circuit synthesis.

The Case of a Conservative Process. For a conservative process, the dissipative coefficient a₁ in (61) is zero,

a₁(t) = 2δ_A + δ_ω = 0,   (68)

and Eq. (61) therefore becomes

ẍ + a₀(t)x = 0,   κ_x + a₀(t) = 0.   (69)

Equation (68) is called the condition of adiabatic invariance.
Theorem 2. Given a twice-differentiable, conservative process x(t), its instantaneous amplitude A(t) satisfies the GDE

Ä − c²/A³ + a₀(t)A = 0,   (70)

and its instantaneous frequency satisfies the GDE

(1/2)κ_ω − (3/4)δ_ω² + ω² = a₀(t).   (71)

In Eq. (70), c is a constant which equals

c = const. = xẏ − ẋy,   y = x̂.   (72)
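For any signal written as x = A cos ψ with conjugate y = x̂ = A sin ψ, the combination xẏ − ẋy equals A²ω identically, so for a conservative process it is the constant c. A numeric sketch (ours), with an assumed slowly increasing frequency law:

```python
import numpy as np

t = np.linspace(0.0, 10.0, 4001)
c = 4.0                          # assumed value of the invariant A**2 * omega
omega = 2.0 + 0.1 * t            # slowly varying instantaneous frequency
psi = 2.0 * t + 0.05 * t**2      # psi = integral of omega
A = np.sqrt(c / omega)           # conservative case: A**2 * omega = c, Eq. (75)

dA = -0.5 * np.sqrt(c) * omega**-1.5 * 0.1   # dA/dt (here omega' = 0.1)
x,  y  = A * np.cos(psi),            A * np.sin(psi)   # y plays the role of x_hat
dx = dA * np.cos(psi) - A * omega * np.sin(psi)
dy = dA * np.sin(psi) + A * omega * np.cos(psi)

invariant = x * dy - dx * y      # = A**2 * omega, cf. Eq. (72)
assert np.allclose(invariant, c)
print("adiabatic invariant constant at", c)
```

The cross terms in Ȧ cancel exactly, which is why xẏ − ẋy isolates A²ω without any smoothing.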
Proof: Substituting the condition (68) in (62), we get

κ_x − κ_A + ω² = 0.   (73)

In this equation we identify the coefficient a₀(t):

a₀ = −κ_A + ω².   (74)

Now we shall express ω in terms of A, by using the relationship provided by (68):

2δ_A + δ_ω = 0,   2 dA/A + dω/ω = 0,   ln A² + ln ω = const.,   A²ω = c,   ω = c/A².   (75)

Substituting (75) in (74) gives the desired Eq. (70). Now we shall prove (71), by expressing (74) in terms of ω. First we make the following substitution in (74):

κ_A = δ̇_A + δ_A².   (76)

Then we express δ_A in terms of δ_ω [from (68)]:

δ_A = −(1/2)δ_ω.   (77)

By performing these substitutions in (74), we obtain (71). Finally, to find the constant c in (75), we express ω by differentiating the phase in (59), and we have (with y = x̂)

ω = ψ̇ = (xẏ − ẋy)/(x² + y²) = (xẏ − ẋy)/A²,

so that c = A²ω = xẏ − ẋy, which is (72). □
The Case of a General (Nonconservative) Process

Theorem 3. Given a general process satisfying the second-order linear GDE (63), its instantaneous amplitude A(t) satisfies the GDE

κ_A + a₁δ_A − c exp[−2∫(a₁ + 2δ_A) dt] = −a₀,   (78)

and its instantaneous frequency satisfies the GDE

(1/2)κ_ω − (3/4)δ_ω² + ω² = a₀ − (1/4)a₁² − (1/2)ȧ₁.   (79)

Proof: This follows the same lines as the preceding proof. We have two equations, in which a₁(t) and a₀(t) are given input functions, and A, ω are unknown variables:

a₁ = −(2δ_A + δ_ω),   a₀ = ω² + δ_A(2δ_A + δ_ω) − κ_A.   (80)

First we use the equation for a₁(t) to express ω in terms of A. We have

δ_ω = −2δ_A − a₁,   dω/dt = (−2δ_A − a₁)ω,   ω² = c exp[−2∫(a₁ + 2δ_A) dt],

and then, substituting this in the equation for a₀(t), we obtain (78). Alternatively, we can use the equation for a₁(t) to express A in terms of ω, and then substitute this in the equation for a₀(t) to obtain (79). □
4. The A-Psi GDE with Coefficients Expressed in Terms of x, x̂, ẋ, ...

Until now we have expressed the coefficients of the A-Psi GDE and of all related equations in terms of A, ω. We shall give here the same equations expressed in terms of x, x̂, and their derivatives. Note that these are not merely equivalent equations, but exactly the same ones, only with coefficients expressed differently. The new forms are derived from the old ones simply by making use of Eqs. (58) and (59), which enable us to express A, ω, and their derivatives in terms of x, x̂, and their derivatives. Carrying out the calculations, we obtain the A-Psi equation (61) expressed as (Zayezdny and Druckmann, 1991), writing y for x̂,

a₁ = (ẍy − xÿ)/(xẏ − ẋy),   a₀ = (ẋÿ − ẍẏ)/(xẏ − ẋy).   (81)

Similarly, the condition of adiabatic invariance, i.e., the condition for a process being conservative, is expressed as [see (68)]

ẍy − xÿ = 0,   or   κ_x − κ_y = 0.   (82)
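These coefficient formulas, a₁ = (ẍy − xÿ)/(xẏ − ẋy) and a₀ = (ẋÿ − ẍẏ)/(xẏ − ẋy) with y = x̂, can be exercised on a pure sinusoid, for which a₁ must vanish (the process is conservative) and a₀ must equal ω₀². A sketch (ours), using exact derivatives rather than a numerical Hilbert transform:

```python
import numpy as np

w0 = 3.0
t = np.linspace(0.1, 6.0, 500)
x,   y   = np.cos(w0 * t),            np.sin(w0 * t)   # x and its Hilbert transform
dx,  dy  = -w0 * np.sin(w0 * t),      w0 * np.cos(w0 * t)
ddx, ddy = -w0**2 * np.cos(w0 * t),  -w0**2 * np.sin(w0 * t)

den = x * dy - dx * y               # = A**2 * omega
a1 = (ddx * y - x * ddy) / den      # coefficients per Eq. (81)
a0 = (dx * ddy - ddx * dy) / den

assert np.allclose(a1, 0.0, atol=1e-12)   # adiabatic condition (82) holds
assert np.allclose(a0, w0**2)
print("a1 ~ 0, a0 =", a0[0])
```

With sampled data one would replace the analytic derivatives by finite differences and the exact conjugate by `scipy.signal.hilbert`, at the cost of some numerical noise.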
F. The Linear Second-Order Generating Differential Equation with Three Coefficients, and Some Generalizations

In the preceding section we discussed a special kind of linear GDE, the A-Psi GDE, which is one type of the more general class (repeated here for convenience)

ẍ + a₁(t)ẋ + a₀(t)x = 0.   (63)

It is indeed customary in the field of differential equations to describe second-order linear homogeneous DEs in this form (the canonical form), which includes two coefficients. Let us consider a more general form, containing three coefficients:

b₂(t)ẍ + b₁(t)ẋ + b₀(t)x = 0.   (83)

This equation can be readily converted to the canonical (two-coefficient) form by dividing by the leading coefficient:

a₁ = b₁/b₂,   a₀ = b₀/b₂.   (84)

It is customary to view (63) and (83) as the same equation. By this is meant that both equations have the same set of solutions. And indeed, in the field of DEs, the question of the solutions is the only matter of interest. In contrast, we have here other interests. In fact, we may say that the solution of the DE
is not a problem at all for us, because we know the solution a priori: it is our original signal x(t). Instead, we are concerned with the form of the GDE or, more explicitly, with the form of the coefficients. We are interested in certain properties of these coefficients, for example their bandwidth. We shall show in this section that the coefficients of (63) and (83) have in general different bandwidths, although those equations are equivalent from the point of view of their solutions. Therefore, in the following, we shall view (63) and (83) as distinct GDEs.

1. The A-Psi GDE in the Three-Coefficients Form

A three-coefficients GDE equivalent to the form (61) is

A²ω ẍ − A(2Ȧω + Aω̇)ẋ + [A²ω³ + Ȧ(2Ȧω + Aω̇) − AÄω]x = 0.   (85)
We can rewrite the equation by expressing its coefficients in terms of x and y = x̂, that is,

b₂(t)ẍ + b₁(t)ẋ + b₀(t)x = 0,
b₂ = xẏ − ẋy,   b₁ = ẍy − xÿ,   b₀ = ẋÿ − ẍẏ.   (86)

This GDE is the counterpart of (81). We emphasize that (85) and (86) are the same equation, that is, their respective coefficients are exactly the same functions, only expressed differently. We shall call this form the general (ternary) A-Psi GDE, to distinguish it from the equivalent two-coefficients form (61) and (81), which will be termed the reduced A-Psi GDE. It is easily seen that the coefficients b₂, b₁ of (86) [identical to those of (85)] are related by the following relationship:

b₁ = −ḃ₂.   (87)

The significance of this relationship is that the GDE (86) can be entirely specified by only the two coefficients (b₂, b₀); the remaining coefficient b₁(t) can easily be reconstructed by (87).

2. Generalization of the A-Psi Type of GDE

Theorem 4. Given a signal x(t) and an arbitrary transform y(t) of x(t), the following is a GDE of x(t), for which the relationship b₁ = −ḃ₂ holds:

b₂(t)ẍ + b₁(t)ẋ + b₀(t)x = 0,
b₂(t) = xẏ − ẋy,   b₁(t) = ẍy − xÿ,   b₀(t) = ẋÿ − ẍẏ.   (88)
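Both identities of Theorem 4, b₂ẍ + b₁ẋ + b₀x = 0 and b₁ = −ḃ₂, hold for any pair x, y, which makes them easy to confirm symbolically. A sketch (ours), with arbitrary concrete choices of the signal and the transform:

```python
import sympy as sp

t = sp.symbols('t')
x = sp.sin(t)               # any signal (assumed example)
y = t * sp.exp(t)           # any transform of it (assumed example)

dx, ddx = sp.diff(x, t), sp.diff(x, t, 2)
dy, ddy = sp.diff(y, t), sp.diff(y, t, 2)

b2 = x * dy - dx * y        # coefficients of Eq. (88)
b1 = ddx * y - x * ddy
b0 = dx * ddy - ddx * dy

assert sp.simplify(b2 * ddx + b1 * dx + b0 * x) == 0   # x satisfies the GDE
assert sp.simplify(b1 + sp.diff(b2, t)) == 0           # b1 = -b2'
print("Theorem 4 identities verified")
```

The first identity is the row-expansion of the determinant in the proof that follows; the second is a one-line differentiation of b₂.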
Proof: We construct the following determinant:

| ẍ  ẋ  x |
| ẍ  ẋ  x |  = 0.   (89)
| ÿ  ẏ  y |

The determinant is identically zero because it has two equal rows. Expanding the determinant with respect to the first row, we obtain the GDE (88). The relationship b₁ = −ḃ₂ is proven simply by carrying out the differentiation of b₂, as given by (88). □

For the special case y = x̂, we get (86). As an additional example, we may take y = ẋ; we have then the GDE (88) with

b₂ = xẍ − ẋ²,   b₁ = ẋẍ − x x‴,   b₀ = ẋ x‴ − ẍ².   (90)
The Condition for Conservativeness. From the condition b₁ = 0 and from (87) we have

b₂ = const.   (conservativeness).   (91)

This is true in general, for any GDE obtained by the procedure described in the preceding theorem. When y is the Hilbert transform of x, we get (75) as a special case.

3. Equivalent GDEs May Have Coefficients with Different Bandwidths
Now we shall show that the coefficients of (63) and (83) may have very different bandwidths. In fact, we shall show that there are cases when we have two equivalent GDEs, one with two coefficients and one with three, and the three coefficients have a total bandwidth that is less than the total bandwidth of the two coefficients.

Theorem 5. Given two independent low-pass processes A(t), ψ(t), having bandwidths W_A, W_ψ, respectively, we form the modulated process x(t) given by (57). The three coefficients b₂, b₁, b₀ of the GDE (of x) given by (86) are low-pass, and they have bandwidths W₂ = W₁ = 2W_A + W_ψ, W₀ = 2W_A + 3W_ψ.

Proof: We shall consider the coefficients {bᵢ} in the form given by (85) [as already mentioned, these are the same coefficients as in (86)]. First we note that if A is low-pass with bandwidth W_A, so are Ȧ, Ä; similarly, if ψ(t) is low-pass with bandwidth W_ψ, so are ω, ω̇. Then we recall that the Fourier spectrum of a product of functions is the convolution of their spectra, and if the functions have bounded spectra, their product also has a bounded
spectrum of bandwidth equal to the sum of their individual bandwidths (Bracewell, 1986). We therefore have

b₂ = A²ω,   with bandwidth W₂ = 2W_A + W_ψ;
b₁ = −ḃ₂,   W₁ = W₂;
b₀ = A²ω³ + Ȧ(2Ȧω + Aω̇) − AÄω,   W₀ = max(2W_A + 3W_ψ, 2W_A + W_ψ) = 2W_A + 3W_ψ.   (92) □
There is no such bandwidth-restricting theorem for the coefficients a₁, a₀ of (63). Moreover, if A has zeros, then b₂ has zeros too, and hence a₁ and a₀, having b₂ in the denominator according to (84), are unbounded. Therefore the bandwidth of a₁ and a₀ can even be infinite. As another simple example, consider the following differential equations, which are both GDEs of the Bessel function of order zero, J₀(ωt):

t ẍ + ẋ + ω²t x = 0,   (93a)

ẍ + (1/t)ẋ + ω²x = 0.   (93b)
tj;. f
The bandwidths of the coefficients of (93a) are zero (being polynomials). In contrast, the bandwidth of the coefficient al(t) of (93b) is infinite. 3. Other Classes of Linear Homogeneous Second-Order GDEs Equation (88) represents a quite general class of linear, homogeneous, second-order GDEs, of which the A-Psi GDE is a special case. However, there are many other classes of linear GDEs for the same signal x(t), and even more classes of nonlinear GDEs. We shall present now a new type of GDE, which illustrates two points: (a) This is a GDE which is not included in the class of Eq. (88). (b) The coefficients of different GDEs have different spectra, and therefore one kind of GDE is useful for a certain class of signals, while another kind of GDE is useful for another class of signals. Assume two low-pass processes O(t), $(t) of bandwidth W, W,, respectively, and define a(t) = 4,
w ( t ) = I).
(94)
Ce-O(')cos $ ( t )
(95)
Then the signal x(t) =
has the following GDE:
wx
+ (2ao - h ) i + (w3+ 20 - a h + iw)x = 0.
(96)
Note that this equation is not of the class of (88). For example, the relationship b₁ = −ḃ₂, which was a property of the GDEs of class (88), does not hold here. Of course, the signal has also a GDE of the A-Psi type, whose coefficients may be computed according to (85), but these coefficients are less simple (in the sense that they have larger bandwidth) than the coefficients of Eq. (96). As an illustration, let us compare the leading coefficients of both equations. In Eq. (96) the leading coefficient is ω and has bandwidth W_ψ. On the other hand, the leading coefficient of the A-Psi GDE is

b₂(t) = A²ω = C²e^{−2θ(t)}ω,   (97)

and it has in general an infinite bandwidth, because of the exponential term. In conclusion, there are signals for which the GDE (96) is more efficient, and others for which the A-Psi GDE is more efficient; "efficient" means here "having simpler coefficients." The signals of the first class are essentially of low-pass nature, and those of the second class are essentially bandpass.

III. GENERALIZED PHASE PLANES

A. Definitions
In Section II we discussed two kinds of tools for signal description: state functions and GDEs. These description tools have in common that they use instantaneous values of the signals, of their derivatives, and of combinations thereof. In addition to this approach, it is possible to introduce new functions which express relationships between two state functions, so that the time t does not appear in explicit form. These functional relationships are represented as phase trajectories in the plane defined by the two state functions, which is called a phase plane. The simplest example of a phase plane is the well-known Poincaré phase plane x, ẋ, in which differential equations are represented as trajectories of the form ẋ(x) (Andronov et al., 1966), and which has been in use for a century in the qualitative theory of differential equations. Generalizing, we define the following.

Definition. Given a signal x(t), let u(t), v(t) be two state functions (as defined in Section II.B). Then the curve described by the relationship v = F(u) is called a trajectory in the generalized phase plane u, v.

In general, in order to obtain the expression of the trajectory, we must find the inverse function t(u) and substitute it in the expression for v(t). That is,
the trajectory is constructed as a composite function

v[t(u)] = v(u).   (98)

The classical phase plane x, ẋ is widely used in the general theory of oscillations and the theory of nonlinear differential equations, in control theory, and in other applications (Andronov et al., 1966), and we shall not discuss it in this chapter. Instead, we shall focus on the method of the generalized phase plane (GPP). We shall consider phase trajectories that correspond, for instance, to the following state functions of the signal x(t):

1. with argument x: for example δ_x(x), κ_x(x);
2. with argument δ_x: for example κ_x(δ_x);
3. more complicated GPPs, for example δ(κ_y − κ_x), etc.
We can expand this list of GPPs; there are therefore many different GPPs. We shall explain now why this is useful for signal processing. Consider the following problem, which we call the linearization of functions: given a function x(t), defined on a bounded time interval, how can we transform it into a new function which is a straight line, without loss of information? The meaning of this is that if the given function has N parameters, then, since a straight line has only two parameters, the remaining N − 2 parameters must be expressed by initial or boundary conditions of the straight line. In other words, the variables which are related by the linear function (straight line) contain in their expressions some derivatives of the original function, and hence initial conditions. To date we do not have a closed-form solution to this problem. Therefore the only way toward a practical solution is to compose a dictionary (list) of the transformations which transform different functions to straight lines by using generalized phase coordinates. In Section III.B we shall present some examples of such transformations.

Having found a phase plane in which the trajectory of our signal has a suitable form (linear or other), we can perform the signal processing operations on this trajectory. We shall show in the following sections that this method has its own advantages and properties, and therefore will receive progressive attention in the course of time.
Now we state the problems that appear when processing signals by means of phase trajectories. Let x(t, a, b) be a signal. Our task is to estimate its parameters or to use some other properties for signal identification. We must address the following problems:

1. We must choose the phase coordinates u(t), v(t) in accordance with the conditions of the specific problem. In general, several possibilities are available, and we must choose the optimal one.
2. For the given x(t), we must compose the expression for the phase trajectory v(u). This can be done analytically or numerically.
3. The inverse problem: given the phase trajectory v(u), we must recover the original signal x(t).
4. We must create metrics for phase-trajectory processing and measuring, that is, define measures and criteria appropriate for the specific problem. For example, we may define a decision law or an identification algorithm that is based on the staying of the trajectory in a definite region of the plane.

We shall consider these matters in the following sections.

B. Construction of Generalized Phase Planes for Given Signals
The first problem of signal processing using generalized phase planes is to choose the phase plane according to the signals we must process and the kind of problem we want to solve. We are interested in a plane in which the trajectory will be simple, will emphasize features that are important for our problem, and will be, if possible, invariant to parameters that are not relevant for the problem. There is no general method for finding such a phase plane. We can get help from illustrated tables of phase planes and trajectories, computed for various families of signals. We shall show now some examples of how choosing an appropriate phase plane can simplify the trajectory and make it invariant to some parameters. Additional examples can be found in Zayezdny and Shcholkunov (1976).

Example 1. We are given the signal x = a sin(ωt + θ) and want to construct a phase plane in which the signal has a simple trajectory and in which it is convenient to measure the frequency ω. It is well known that, in the classical phase plane x, ẋ, the trajectory of this signal is an ellipse (Andronov et al., 1966):

x²/a² + ẋ²/(a²ω²) = 1,
and the frequency can be computed by the algorithm ω = (semiaxis on ẋ)/(semiaxis on x). However, by choosing another phase plane, we can obtain a simpler image. Consider the plane with coordinates

u = x,   v = ẍ.

The trajectory of the signal in this plane is a straight line,

v = −ω²u,   (99)

from whose slope the frequency can easily be computed. Note that this trajectory is invariant to a and θ.

Example 2. Given the signal x = be^{−at} sin(ωt + θ), find a phase plane in which the trajectory is a straight line, for measuring the parameters a, ω. As is well known, in the classical phase plane the trajectory is a spiral (Figs. 1a and 1b). Now let us consider the state equation for this signal:

κ_x + 2aδ_x + ω² + a² = 0.

We therefore choose the phase plane

u = δ_x,   v = κ_x,

and in this plane the trajectory of the signal is a straight line, namely,

v = −2au − ω² − a².   (100)
The slope and the intercept with the u axis provide us with the desired means of measuring the parameters (Fig. 1c).

Example 3. Given the signal x = b exp(−at²/2), find a phase plane in which the trajectory is a straight line, for measuring the parameter a. This signal has the state equation κ_x − δ_x² + a = 0. We therefore choose the phase plane

u = δ_x²,   v = κ_x,

and in this plane the trajectory of the signal is a straight line, namely,

v = u − a.

The parameter is measured by means of the intercept with the v axis.

Example 4. Consider the family of signals

x = a sin ω₁t + b sin(ω₂t + θ).
Figures 2a-2d show the signals and the trajectories in the Poincaré phase plane for ω₂ = (7/3)ω₁ and for two values of the phase difference θ. We see that the trajectories are quite complicated. Moreover, if the ratio between the frequencies is not a simple fraction, then the image is even more complicated.
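A bisinusoid satisfies x⁗ + (ω₁² + ω₂²)ẍ + ω₁²ω₂²x = 0, so in the plane u = ẍ/x, v = x⁗/x its trajectory is a straight line whose slope and intercept determine both frequencies. A sketch of the resulting measuring algorithm (ours), using exact analytic derivatives:

```python
import numpy as np

w1, w2, a, b, th = 3.0, 7.0, 1.0, 0.7, 0.4
t = np.linspace(0.05, 12.0, 3000)

def deriv(k):
    # k-th derivative of a*sin(w1*t) + b*sin(w2*t + th), computed analytically
    return (a * w1**k * np.sin(w1 * t + k * np.pi / 2)
            + b * w2**k * np.sin(w2 * t + th + k * np.pi / 2))

x = deriv(0)
ok = np.abs(x) > 0.1                 # stay away from zeros of x
u = deriv(2)[ok] / x[ok]             # kappa_x = x''/x
v = deriv(4)[ok] / x[ok]             # fourth-derivative ratio

slope, intercept = np.polyfit(u, v, 1)
# v = -(w1**2 + w2**2) u - w1**2 w2**2, so the squared frequencies are the
# roots of z**2 - (w1**2 + w2**2) z + w1**2 w2**2 = z**2 + slope*z - intercept.
roots = np.sort(np.roots([1.0, slope, -intercept]))
freqs = np.sqrt(roots)
assert np.allclose(freqs, [w1, w2], rtol=1e-3)
print("recovered frequencies:", freqs)
```

The trajectory, and hence the fit, is independent of the amplitudes a, b and of the phase θ, which is precisely the invariance exploited in Example 4.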
[Figure 1 appears here.] FIGURE 1. The signal x = be^{−at} sin(ωt + θ): (a) in the time domain; (b) in the Poincaré phase plane x, ẋ; (c) in the generalized phase plane δ_x, κ_x.

[Figure 2 appears here.] FIGURE 2. (a) A bisinusoidal signal in the time domain, ω₂/ω₁ = 7/3. (b) The same signal in the Poincaré phase plane x, ẋ. (c) A signal as in (a), but with another phase difference. (d) The same signal in the Poincaré plane. (e) The two preceding signals in the generalized phase plane κ_x, x⁗/x.
Let us now consider the phase plane

u = κ_x = ẍ/x,   v = x⁗/x.   (101)

In this plane the trajectory is the straight line (see Fig. 2e)

v + (ω₁² + ω₂²)u + ω₁²ω₂² = 0.   (102)
It remains a straight line for all signals of the family (but not the same straight line!). The position of the trajectory in the plane will change when the frequencies vary. Information about the frequencies can be recovered from the intersections with the axes. The trajectory is invariant to the phase and the amplitudes (however, knowledge of the time difference between two points on the trajectory allows the calculation of the amplitudes and phase). This image can serve for signal identification, or for malfunction diagnosis (for instance, the appearance of higher frequencies would destroy the linearity of the trajectory).

C. Reconstruction of the Original Signal from Given Phase Trajectories
During signal processing using phase trajectories v(u), we may encounter the problem of reconstructing the original signal x(t) when the current phase coordinates u(t), v(t) are known, but the implicit time variable is unknown. In the particular case when u(t) = t and v(t) is one of the four elementary state functions (δ_x, κ_x, etc.), this problem has already been considered in Section II.B, where inversion formulas for the original signal x(t) were obtained. Now we shall consider methods for solving this problem in the general case, and shall also give specific formulas for the case when the phase coordinates are the simplest nonelementary state functions. The solution of these problems is based on numerical methods, which involve piecewise linear approximation of a phase trajectory (Zayezdny, 1973, p. 434), and on the method of isoclines. The latter is widely used in the theory of nonlinear oscillations (Andronov et al., 1966, p. 8). Given a segment of a phase trajectory v = f(u), we want to find the function x(t). We do this in two main steps:

1. First we compute the inverse function t(u) [or t(v)] and, consequently, u(t) [or v(t)].
2. Then we compute x(t) from u(t).

The first problem is special to the subject of this section, generalized phase planes. The second is the inverse problem of state functions, which has already been discussed in Section II. In this section we shall focus on the first, and more difficult, problem.
In the first problem we can distinguish two cases: (1) the function f(u) is known analytically, and (2) only the coordinates uₙ, vₙ are known. The second case is more realistic, so it is this case that will be treated here. In this case we approximate the trajectory piecewise by N linear segments:

v ≈ aₙ + bₙu,   n = 1, ..., N,   tₙ₋₁ ≤ t ≤ tₙ,   t₀ = 0,   (103)

each one connecting a point (uₙ₋₁, vₙ₋₁) to a point (uₙ, vₙ). Each point corresponds to a time instant tₙ, not known. We have therefore to determine N time points (except t₀ = 0). We shall do this by computing the interval Δtₙ on each linear segment, and then

tₙ = tₙ₋₁ + Δtₙ,   n = 1, ..., N.

We also have

aₙ = (uₙvₙ₋₁ − uₙ₋₁vₙ)/(uₙ − uₙ₋₁),   bₙ = Δvₙ/Δuₙ = (vₙ − vₙ₋₁)/(uₙ − uₙ₋₁).   (104)

In the following we shall consider specific phase trajectories.

1. The Phase Trajectory y = ẋ = f(x) (Poincaré Plane)
Given a trajectory y = f(x), we approximate it piecewise by linear segments:

y = dx/dt ≈ aₙ + bₙx,   tₙ₋₁ ≤ t ≤ tₙ,   t₀ = 0.   (105)

For each segment we integrate the equation and obtain (the index of a, b is dropped for clarity)

Δtₙ = (1/b) ln[(a + bxₙ)/(a + bxₙ₋₁)].   (106)

Since from Eq. (104) we have b = Δyₙ/Δxₙ, then

Δtₙ = (Δxₙ/Δyₙ) ln(yₙ/yₙ₋₁).   (107)

We can simplify the algorithm as follows:

Δtₙ = (Δxₙ/yₙ₋₁) F(yₙ/yₙ₋₁),   where   F(z) = (ln z)/(z − 1).   (108)
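The recursion tₙ = tₙ₋₁ + Δtₙ with the increment (108) is straightforward to implement. A sketch (ours), tested on x(t) = e^t, whose Poincaré trajectory y = x is exactly linear, so the reconstruction is exact at the sample points:

```python
import numpy as np

def reconstruct_times(x, y):
    """Recover t_n from Poincare-plane samples (x_n, y_n = dx/dt), Eq. (108)."""
    t = [0.0]
    for n in range(1, len(x)):
        z = y[n] / y[n - 1]
        # F(z) = ln(z)/(z - 1), with the limit F(1) = 1 handled explicitly
        F = 1.0 if abs(z - 1.0) < 1e-12 else np.log(z) / (z - 1.0)
        t.append(t[-1] + (x[n] - x[n - 1]) / y[n - 1] * F)
    return np.array(t)

# Test signal x = exp(t): trajectory y = x is a single straight line.
t_true = np.linspace(0.0, 2.0, 21)
x = np.exp(t_true)
y = x.copy()                      # dx/dt = x
t_rec = reconstruct_times(x, y)
assert np.allclose(t_rec, t_true)
print("time axis reconstructed")
```

For a curved trajectory the same code gives a piecewise approximation whose accuracy improves as the segments are refined.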
In this way we have obtained the desired function x(t) in the form of a set of pieces of the inverse function t = g(x).

2. The Phase Trajectory δ_x = f(x)

Here

δ_x = (1/x) dx/dt ≈ aₙ + bₙx,   tₙ₋₁ ≤ t ≤ tₙ.   (109)

After using (109) and (104), we obtain

Δtₙ = (1/aₙ) ln[(xₙ δ_{x,n−1})/(xₙ₋₁ δ_{x,n})].   (110)
3. The Phase Trajectory δ̇_x = f(δ_x)

The sequence of operations for deriving the algorithm in this case is similar to that of Subsection 1, except for the final stage:

dδ_x/dt ≈ aₙ + bₙδ_x,   ...

We obtain a sequence of tₙ corresponding to δₙ, which enables us to find an approximation to δ_x(t). Since

δ_x = (1/x) dx/dt,

then

x(t) = c exp[∫ δ_x(τ) dτ].
4. The Phase Trajectory κ_x = f(x)

Since

κ_x x = d²x/dt² = dy/dt,   y = dx/dt,

and

dy/dt = (dy/dx)(dx/dt) = y (dy/dx),

a linear differential equation of second order,

ẍ + a₁ẋ + a₀x = 0,   (111)

can always be rewritten in a form which does not contain the time t explicitly:

y (dy/dx) + a₁y + a₀x = 0,   or   dy/dx + a₁ + a₀(x/y) = 0.   (112)

The solution of this equation gives the phase trajectory y = f(x), from which we can return to x = f(t) according to Subsection 1. We recommend solving this equation by the method of isoclines (Andronov et al., 1966, p. 8). (An isocline is the locus of points at which the tangent to all integral curves has the same slope.) Denoting dy/dx = k, we obtain the equation of the isocline:

k + a₁ + a₀(x/y) = 0,   y = −[a₀/(a₁ + k)] x   (equation of the isocline).   (113)

As a rule, for approximate calculations it is enough to use only four isoclines, which are called the main isoclines (Zayezdny, 1973, p. 382). By marking on the isoclines the points corresponding to the given a₁, a₀, we obtain a field of vectors describing the phase trajectory ẋ = f(x), and from this curve we can make the transition to x(t) according to Subsection 1.
357
5 . The Phase Trajectory K , = f(6,) Using the equation
K, =
f(a,) we have
8,
+ 8,” and the piecewise linear approximation for 8,
+ 62 z a, + b,6,
and we get the differential equation with initial condition, in the variable 6,:
8,
+ 62 - b.6,
= a,,
6,(0)
(114)
=
Integrating this equation numerically, we obtain (z is the shifted time t - t,-l) 6,(t) = g(t.),
0 I5 IAt,.
We know the value of g(z) at the end of the segment: g(At,) = a,, and from this we calculate At,. Doing this for all the segments, we get the sequence { t , } corresponding to {d,), that is, we have an approximation of the function &.(t), and from this we can calculate x(t).
6. The Phase Trajectory λ_x = f(x)

We have

dy/y = f(x) dx,   ln|y| = ∫ f(x) dx + C.

The last expression is the equation of the trajectory in the plane x, y; that is, we have obtained a function y = ẋ = g(x), and now we can make the transition to x(t) according to Subsection 1.

7. The Phase Trajectory δ_ẋ = f(δ_x)
For a linear segment approximation, we have δ_ẋ ≈ aₙ + bₙδ_x; using κ_x = δ_ẋ δ_x and δ̇_x = κ_x − δ_x², we obtain the differential equation with initial condition

δ̇_x = aₙδ_x + (bₙ − 1)δ_x²,   δ_x(0) = δₙ₋₁.   (116)

Solving numerically, we obtain the function δ_x(τ) = g(τ), 0 ≤ τ ≤ Δtₙ. We know the value of g(τ) at the end of the segment, g(Δtₙ) = δₙ,
and from this we calculate Δtₙ. Doing this for all the segments, we get the sequence {tₙ} corresponding to {δₙ}, that is, we have an approximation of the function δ_x(t), and from this we can calculate x(t).

8. The Phase Trajectory λ_x = f(δ_x)

For a linear segment approximation, we obtain the differential equation with initial condition

δ̇_x = (aₙ − 1)δ_x² + bₙδ_x,   δ_x(0) = δₙ₋₁.   (117)

Solving numerically, we obtain the function δ_x(τ) = g(τ), 0 ≤ τ ≤ Δtₙ. We know the value of g(τ) at the end of the segment, g(Δtₙ) = δₙ,
and from this we calculate Δtₙ. Doing this for all the segments, we get the sequence {tₙ} corresponding to {δₙ}, that is, we have an approximation of the function δ_x(t), and from this we can calculate x(t).

9. The Phase Trajectory δ̇_x = f(x)

Using the equation δ̇_x = κ_x − δ_x², we have

κ_x − δ_x² = f(x).

Since κ_x = δ_x δ_y = (y/x)(dy/dx) with y = ẋ, after some manipulations we obtain

κ_x = f(x) + (y/x)²,

and, denoting dy/dx = k, the equation of the isocline:

(y/x)k = f(x) + (y/x)²,   i.e.,   y² − kxy + f(x)x² = 0.   (118)

From this, the transition to x(t) is made according to Subsection 1.

D. Signal Processing Using Phase Trajectories
Signal processing is usually performed on functions of time, the signals themselves. We can readdress all the operations involved in signal processing to phase trajectories, which are images of signals. According to this statement of the problem, its solution consists of two parts: (1) to construct the
phase trajectory corresponding to the given signal; and (2) to apply to the phase trajectory special processing, according to the conditions of the concrete problem, for example signal separation, measurement of signal parameters, identification, pattern recognition, etc. We intend to consider here the second part of the problem, and to give some examples of phase-trajectory processing for these applications. A phase trajectory is a composite function v(u), where u(t) and v(t) are implicit functions of time. Since there are always definite relationships between the parameters of the composite function and the parameters of x(t), we can estimate the properties of the original signal by using the coordinates of the trajectory in the u, v plane. We propose two methods for the characterization of phase trajectories, based on the following parameters:

1. The number of intersections of the representative point with certain straight lines defined on the phase plane and, as a particular case, with the axes.
2. The time during which the representative point remains in a certain region of the phase plane. We shall call this the staying time.

We give some illustrative examples of applying the two methods. In these examples we use only very simple trajectories, for the sake of clarity. However, the recommended procedures are valid for any phase trajectories.

Example 1. Let us consider a well-known problem in detection theory, that of the reception of a binary set of signals s₁, s₂. The received signal is
+
x = s ~ , W ,~
0 < t < T,
where w is an interference (noise), and we must decide which one of the two signals was transmitted. The classical algorithm is
and is based on the following decision rule: if the upper inequality holds, then we infer that it was s₁ that was transmitted; if the lower inequality holds, then s₂ was transmitted. To be specific, we shall assume that s₁, s₂ are the two antipodal signals

$$s_1 = a\sin\omega t, \qquad s_2 = -a\sin\omega t, \qquad 0 < t < T. \qquad (120)$$
Now we shall propose a new algorithm, using a phase plane. In this algorithm, we replace the comparison of integrals as in (119) by a comparison of the "staying time," which is the amount of time that the representative point remains in a certain region of the plane. To see this, let us consider the case without interference, x = s₁,₂. The integrand functions are shown in Figs. 3a
ALEXANDER M. ZAYEZDNY AND ILAN DRUCKMANN
FIGURE 3. Detection of two signals using phase planes. The received signal is multiplied by s₁. (a) The output when receiving s₁ (no noise). (b) The output when receiving s₂. (c, d) The outputs of (a) and (b) in the Poincaré phase plane.
and 3b, and their corresponding phase trajectories on the Poincaré phase plane ẋ(x) in Figs. 3c and 3d. We introduce four unit functions corresponding to the four quadrants of the phase plane:
$$\eta_1(t) = \begin{cases} 1, & x > 0,\ \dot{x} > 0, \\ 0, & \text{otherwise,} \end{cases} \qquad
\eta_2(t) = \begin{cases} 1, & x < 0,\ \dot{x} > 0, \\ 0, & \text{otherwise,} \end{cases}$$
$$\eta_3(t) = \begin{cases} 1, & x < 0,\ \dot{x} < 0, \\ 0, & \text{otherwise,} \end{cases} \qquad
\eta_4(t) = \begin{cases} 1, & x > 0,\ \dot{x} < 0, \\ 0, & \text{otherwise.} \end{cases} \qquad (121)$$
We calculate the vector of staying-time parameters:

$$\theta_i = \int_0^T \eta_i(t)\,dt, \qquad i = 1, \ldots, 4. \qquad (122)$$
This vector is a characteristic that can fully discriminate between the two
original signals. The algorithm can be written in the following form:

$$\theta_1 + \theta_4 \;\underset{s_2}{\overset{s_1}{\gtrless}}\; \theta_2 + \theta_3. \qquad (123)$$
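As an illustration (our own sketch, not from the original), the staying-time rule can be applied to sampled data, with the derivative approximated by finite differences; the function names and the NumPy-based implementation are ours:

```python
import numpy as np

def staying_times(x, dt=1.0):
    """Approximate the staying times theta_1..theta_4 of (x, xdot) in the
    four quadrants of the phase plane, from samples of x."""
    xdot = np.gradient(x, dt)                    # numerical derivative
    theta1 = dt * np.sum((x > 0) & (xdot > 0))
    theta2 = dt * np.sum((x < 0) & (xdot > 0))
    theta3 = dt * np.sum((x < 0) & (xdot < 0))
    theta4 = dt * np.sum((x > 0) & (xdot < 0))
    return theta1, theta2, theta3, theta4

def detect(x, s1, dt=1.0):
    """Decide s1 vs s2 = -s1 by the staying time of x*s1 in the half
    plane x > 0 (quadrants 1 and 4), cf. Eq. (123)."""
    t1, t2, t3, t4 = staying_times(x * s1, dt)
    return "s1" if t1 + t4 >= t2 + t3 else "s2"
```

For the antipodal pair of (120), the product x·s₁ stays in the half plane x > 0 when s₁ was sent and in x < 0 when s₂ was sent, so comparing staying times reproduces the decision rule.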
In the real case, when noise is present, the signals must be smoothed before applying the algorithm.

Example 2. Besides the method of staying time, we can use the method of number of crossings: in this case, the crossings of the two semiaxes which form the boundaries between quadrants 1-4 and 3-2, respectively (Figs. 3c and 3d). The algorithm for this method is

$$N_{1\text{-}4} \;\underset{s_2}{\overset{s_1}{\gtrless}}\; N_{3\text{-}2}. \qquad (124)$$
Note that the method of staying time is to be preferred to the method of number of crossings if there is noise. This is because the number of crossings due to noise may be much larger than the number due to the signal, so that the number of crossings loses significance.

Example 3. Let us consider the problem of discriminating between the two antipodal signals shown in Figs. 4a and 4b (together with their phase trajectories in Figs. 4c and 4d). We will use the method of number of crossings. We designate by N_r (right) the number of crossings of the semiaxis 1-4, and by N_l (left) the number of crossings of the semiaxis 3-2. The two kinds of crossings take place when

$$N_r:\ s > 0,\ \dot{s} = 0; \qquad N_l:\ s < 0,\ \dot{s} = 0.$$

In our example, for the first signal N_r = 3, N_l = 1, and for the second signal N_r = 1, N_l = 3. The decision law is

$$N_r \;\underset{s_2}{\overset{s_1}{\gtrless}}\; N_l. \qquad (125)$$
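The crossing-count rule of (125) can be sketched the same way (our own minimal implementation): a crossing of the positive (negative) x semiaxis is a sign change of the numerical derivative at a point where the signal is positive (negative).

```python
import numpy as np

def crossing_counts(x, dt=1.0):
    """Count crossings of the positive (N_r) and negative (N_l) x
    semiaxes by the trajectory (x, xdot): samples where xdot changes
    sign, split according to the sign of x there."""
    xdot = np.gradient(x, dt)
    sign_change = np.diff(np.sign(xdot)) != 0    # xdot crosses zero
    n_r = int(np.sum(sign_change & (x[:-1] > 0)))
    n_l = int(np.sum(sign_change & (x[:-1] < 0)))
    return n_r, n_l
```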
Example 4. Consider the four signals (see Figs. 5a-5d)

$$s_1 = \frac{a}{b}\sinh\frac{\pi t}{T}, \qquad s_2 = -a\sin\frac{\pi t}{T}, \qquad s_3 = -s_1, \qquad s_4 = -s_2, \qquad 0 \le t \le T,$$

where $b = \sqrt{\sinh(2\pi)/(2\pi) - 1}$.
FIGURE 4. Detection of two signals using number of crossings in a phase plane (Example 3). (a, b) The signals in the time domain. (c, d) The signals in the Poincaré plane. The number of crossings of the positive semiaxis x is different for each signal.
This specific value of b has been chosen in order to have equienergetic signals. We must receive them and decide which one has been transmitted: a quaternary decision problem. We shall make use of the following plane uv:

$$u = \omega^2 x, \qquad v = \ddot{x}, \qquad \omega = \pi/T. \qquad (126)$$

The trajectories of the four signals are shown in Fig. 5e; these are the four bisectrices of the quadrants. The decision procedure is: (1) for the received signal, compute the four staying times θᵢ in each quadrant i, i = 1, 2, 3, 4; (2) decide according to the law:

$$\text{if } \max_i \theta_i = \theta_j, \text{ then decide } s_j. \qquad (127)$$
Example 5. This is another example of a quaternary decision problem. Consider the following set of four equienergetic signals, defined in the interval 0 ≤ t ≤ T (Figs. 6a-6d):
FIGURE 5. Detection of the four signals of Example 4. (a-d) The signals in the time domain. (e) The signals in the generalized phase plane uv, u = ω²x, v = ẍ. Each signal stays in a different quadrant.
FIGURE 6. Detection of the four signals of Example 5. (a-d) The signals in the time domain. (e) The signals in the generalized phase plane uv, u = δ(δₓ), v = δ(δ_y). Each signal stays in a different quadrant.
$$s_1 = b(t - T)^2, \qquad s_2 = a\sin\omega t, \qquad s_3 = bt^2, \qquad s_4 = a\cos\omega t, \qquad (128)$$

where $b = a\sqrt{5}/(\sqrt{2}\,T^2)$ and $\omega = \pi/(2T)$.
We choose the following phase plane:

$$u = \delta(\delta_x), \qquad v = \delta(\delta_y),$$

where y = ẋ. The four trajectories are along the bisectrices of the quadrants, and are shown in Fig. 6e. The decision law is as in the preceding example. Note that if we replace s₂ by −s₂ and s₄ by −s₄, the trajectories do not change; this replacement is recommended if we want to obtain a signal set with zero dc component.
IV. APPLICATIONS

A. Parameter Estimation

(Note: In this section only, x̃ denotes the Hilbert transform of x, and x̂ denotes the estimate of x.)

We have already seen some particular cases of parameter estimation by means of GDEs in Section II.D. In this section we shall consider this problem in more detail. The problem of parameter estimation in the general case is formulated as follows. Assume a signal, a function of time t and of a vector of parameters a. We are given N samples of this signal, corrupted by noise w:

$$x_n = s(t_n; \mathbf{a}) + w_n, \qquad t_n = nT, \qquad n = 0, \ldots, N - 1, \qquad (129)$$
where T is the sampling interval. The problem consists of estimating the vector of parameters a from these samples. The most widely treated problem is the case of the multiple sinusoid, and the closely related case of the multiple exponential signal; for these problems a large literature exists. In contrast, the literature on nonsinusoidal signals is very scarce. In Subsection 1 we present the general approach of applying structural properties to parameter estimation, and then deal with a test case of nonsinusoidal signals. We emphasize that it is especially in this field that the structural properties method has the greatest potential advantages, because of its generality. In the remaining subsections we focus on multiple sinusoidal signals. We
shall give a brief survey of the existing methods for the multisinusoidal case in Subsection 2. In Subsection 3 we discuss the Cramér-Rao lower bounds (CRB) for the one-sinusoid signal. This bound sets a theoretical limit on how small (good) the variance of an estimated parameter can be. We shall show that by making use of the second derivative of the signal we obtain a CRB that is smaller than the classical bound; thus we may expect better estimates by using derivatives, i.e., structural properties methods. In Subsection 4 we present structural properties-based algorithms for estimating the parameters of multiple sinusoids, and an illustrative example of their performance.

1. Parameter Estimation by Structural Properties: General Approach and a Test Case for Nonsinusoidal Signals
The general approach to parameter estimation by structural properties consists of using generating differential equations (GDEs), and especially GDEs dependent on only the parameter of interest (and invariant to the others). This topic was developed in Section II.D; we shall only briefly review it here. Given a signal x dependent on parameters a, b, c, for example, we can construct GDEs of several forms:

$$F(x, \dot{x}, \ddot{x}, \ldots; a, b, c; t) = 0,$$
$$F_{ab}(x, \dot{x}, \ddot{x}, \ldots; a, b; t) = 0, \qquad (130)$$
$$F_a(x, \dot{x}, \ddot{x}, \ldots; a; t) = 0,$$

and similarly for b, c, bc, etc. In practice, these equations must be viewed as algebraic (not differential) equations holding at each discrete sample tₙ, xₙ, etc. If we want to measure parameter a, we can use the GDE F_a, which is invariant to the other two parameters b, c. From F_a we can derive an equation for a, of the form a = G_a(x, ẋ, ẍ, …; t). Moreover, we can eliminate the time t in the above equations and obtain time-independent algorithms for measuring the parameter. If the signal is corrupted by noise, we have, instead of the aforementioned equations, equations of the form (e₁ₙ, e₂ₙ are errors):

$$F(x_n, \dot{x}_n, \ddot{x}_n, \ldots; a, b, c; t_n) = e_{1,n}, \qquad F_a(x_n, \dot{x}_n, \ddot{x}_n, \ldots; a; t_n) = e_{2,n}, \quad \text{etc.} \qquad (131)$$
The parameter a is computed by minimizing the total mean squared error Σe₁ₙ², or Σe₂ₙ², etc. In general, we obtain an algebraic equation of some degree K, of the form

$$\sum_{k=0}^{K} A_k u^k = 0, \qquad (132)$$

with coefficients of the form

$$A_k = \langle G_k(x_n, \dot{x}_n, \ldots; t_n) \rangle, \qquad (133)$$

where ⟨·⟩ denotes discrete averaging over N samples. One of the solutions u of the equation is the desired parameter a. The selection of the correct solution can be made in one of two ways: (1) by carefully checking the derivation of (132), in order to take into consideration various conditions which the solution must satisfy; (2) by substituting all the solutions back into (131) and choosing the one producing the minimum error. As a test case for nonsinusoidal signals, let us estimate the "frequency" ω of a sinc signal in the presence of noise:

$$x_n = b\,\frac{\sin \omega T(n - N/2)}{\omega T(n - N/2)} + w_n, \qquad n = 0, \ldots, N - 1.$$
The following results are taken from Druckmann (1991, 1994). It is shown that, when there is no noise, ω satisfies the following nonlinear GDE (for each sample xₙ):

$$\omega^4 x^2 + 2\omega^2(2x\ddot{x} - \dot{x}^2) + 3\ddot{x}^2 - 2\dot{x}\dddot{x} = 0. \qquad (134)$$

We view this equation as an algebraic equation in the unknown ω. When there is noise, the zero on the right side is replaced by an error e. Minimizing the total squared error, we develop an algorithm which involves solving the following cubic equation, whose solution is ω²:

$$au^3 + bu^2 + cu + d = 0, \qquad u = \omega^2, \qquad (135)$$

with

$$a = \langle x^4 \rangle, \qquad b = 3\langle x^2(2x\ddot{x} - \dot{x}^2) \rangle,$$
$$c = \langle 11x^2\ddot{x}^2 + 2\dot{x}^4 - 2x^2\dot{x}\dddot{x} - 8x\dot{x}^2\ddot{x} \rangle, \qquad d = \langle (3\ddot{x}^2 - 2\dot{x}\dddot{x})(2x\ddot{x} - \dot{x}^2) \rangle.$$
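A minimal numerical sketch of this algorithm (ours, using plain central differences instead of the 7-point or FFT differentiators of the text) forms the coefficients a, b, c, d, roots the cubic, and selects the root by the minimum-residual check suggested for equations of type (132):

```python
import numpy as np

def estimate_sinc_freq(x, dt):
    """Solve the cubic (135) for u = omega^2 and pick the positive real
    root that minimizes the residual of the GDE (134)."""
    x1 = np.gradient(x, dt)
    x2 = np.gradient(x1, dt)
    x3 = np.gradient(x2, dt)
    x, x1, x2, x3 = (q[10:-10] for q in (x, x1, x2, x3))  # drop edges
    m = np.mean
    a = m(x**4)
    b = 3.0 * m(x**2 * (2*x*x2 - x1**2))
    c = m(11*x**2*x2**2 + 2*x1**4 - 2*x**2*x1*x3 - 8*x*x1**2*x2)
    d = m((3*x2**2 - 2*x1*x3) * (2*x*x2 - x1**2))

    def resid(u):   # mean squared residual of (134) at candidate u
        return m((u**2*x**2 + 2*u*(2*x*x2 - x1**2) + 3*x2**2 - 2*x1*x3)**2)

    roots = [r.real for r in np.roots([a, b, c, d])
             if abs(r.imag) < 1e-8 and r.real > 0]
    return np.sqrt(min(roots, key=resid))
```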
This is an example of an equation of the type (132). The algorithm requires the computation of the first three derivatives of the signal. Two numerical differentiation algorithms were used: one based on a seven-point interpolating polynomial, the other on fast Fourier transforms (FFTs). Both methods were tested at low and high signal-to-noise ratios, and for various sampling rates. For each case, a set of 400 noisy records was used for statistical averaging. Some results are shown in Fig. 7: the root-mean-square error as a function of the sampling rate, for the two differentiation methods, and at two SNR levels, −5 dB and 40 dB.

2. Parameter Estimation for Multiple Sinusoids: Statement of the Problem and Existing Methods

The problem consists of estimating the frequencies, amplitudes, and phases of an M-tuple sinusoid embedded in noise:
FIGURE 7. Frequency estimation of a sinc signal in noise: the rms error of the estimate vs. the sampling frequency. Frequencies are normalized to the true frequency. The results are for two differentiation methods (one using a 7-point interpolation polynomial, the other based on FFT), and for two SNRs (−5 dB, 40 dB). (After Druckmann, 1994.)

$$x_n = \sum_{k=1}^{M} b_k \cos(\omega_k T n + \theta_k) + w_n, \qquad n = 0, \ldots, N - 1, \qquad (136)$$
from the knowledge of N samples. The interference wₙ is discrete Gaussian white noise, with zero mean and variance σ². It is usual to consider, instead of (136), the complex sinusoid, or cissoid:

$$\tilde{x}_n = \sum_{k=1}^{M} b_k e^{j(\omega_k T n + \theta_k)} + \tilde{w}_n, \qquad (137)$$

where w̃ₙ is complex Gaussian white noise, σ²_{w̃} = 2σ². Note that having N samples of the complex signal (137) is equivalent to having N samples of the real signal xₙ (136) and, in addition, N samples of the Hilbert transform of xₙ. The three kinds of parameters are not equal from the point of view of difficulty of estimation. If the frequencies ωₖ are known, then finding bₖ, θₖ is a simple linear problem; i.e., they can be computed by simply inverting a matrix. The difficult problem is computing the frequencies. Suppose we want the optimal solution for the frequencies, amplitudes, and phases, i.e., the set which minimizes the total squared error Σₙ w̃ₙw̃ₙ*. Because the parameters ωₖ appear inside cos or sin functions, this is a nonlinear problem. Therefore it is customary to divide the problem of parameter estimation into two parts. First, a suboptimal algorithm is used to find the frequencies. Then, with the frequencies known, the
optimal bₖ, θₖ are computed (a linear problem). The central issue in all algorithms for multisinusoidal signals is therefore the estimation of frequencies, and this is also the issue by which the various algorithms differ. Most algorithms are based on a difference equation for finding the frequencies. We give here a brief presentation of three of these methods.

The Extended Prony Method. The extended Prony method is a family of methods suited to exponential, decaying-sinusoid, and sinusoidal signals. The variant adapted to the multiple-sinusoid case is known as line-spectrum Prony (Kay and Marple, 1981; Kay, 1988). The method is based on the fact that a multiple sinusoid (complex or real) satisfies the difference equation

$$\sum_{k=0}^{2M} a_k x_{n-k} = e_n, \qquad a_k = a_{2M-k}, \qquad (138)$$
where the vector (aₖ) is symmetric and is called the prediction vector. This vector is found by minimizing the total squared error Σeₙ². The resulting procedure for frequency estimation is as follows. Compute the matrix and the vector

$$\Gamma_{k,i} = \sum_{n=M+1}^{N-M} (x_{n+k} + x_{n-k})(x_{n+i} + x_{n-i}), \qquad k, i = 0, \ldots, M - 1, \qquad (139)$$

$$\gamma_k = \sum_{n=M+1}^{N-M} (x_{n+k} + x_{n-k})(x_{n+M} + x_{n-M}), \qquad k = 0, \ldots, M - 1, \qquad (140)$$
and then the coefficient vector

$$c = \Gamma^{-1}\gamma. \qquad (141)$$

Form the p-degree polynomial, p = 2M,

$$H(z) = \sum_{k=0}^{M} c_k (z^{M+k} + z^{M-k}), \qquad (142)$$
The Pisarenko Method. The Pisarenko method (Pisarenko, 1973) is based on a difference equation of the ARMA (p, p) type. The procedure is as follows. Compute the autocorrelation matrix:
Compute the minimum eigenvalue λ_min and its associated eigenvector v_min. Then σ̂² = λ_min and the prediction vector a is v_min. As in the Prony method, construct the p-degree polynomial, compute its roots, and obtain the frequencies according to (143). In the original version of the method, all the eigenvalues are computed and the minimal one is selected. More recent algorithms compute the minimum eigenvalue directly and iteratively (Hayes and Clements, 1986; Pitarque et al., 1991), and are thus more computationally efficient.
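A compact sketch of the Pisarenko procedure for M real sinusoids (our own; the biased autocorrelation estimate and the root selection are implementation choices, not from the text):

```python
import numpy as np

def pisarenko(x, M):
    """Pisarenko estimate of M real sinusoid frequencies (rad/sample):
    the eigenvector of the smallest eigenvalue of the (2M+1)x(2M+1)
    autocorrelation matrix gives the prediction polynomial; its root
    angles are the frequencies."""
    p = 2 * M
    N = len(x)
    r = np.array([x[:N - k] @ x[k:] / N for k in range(p + 1)])
    R = np.array([[r[abs(i - j)] for j in range(p + 1)]
                  for i in range(p + 1)])
    _, V = np.linalg.eigh(R)        # eigenvalues in ascending order
    a = V[:, 0]                     # eigenvector of lambda_min
    ang = np.angle(np.roots(a))     # root angles, in (-pi, pi]
    return np.sort(ang[ang > 0])[:M]
```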
Forward-Backward Linear Prediction Method. The forward-backward linear prediction method (FBLP) is more accurate than the other linear prediction methods, but more computationally complex. It uses a prediction vector of length L greater than M, and therefore obtains a number of eigenvectors and roots greater than M, the number of sinusoids. It was originally proposed by Ulrych and Clayton (1976) and by Nuttall (1976). We shall present here an improved version of the method, proposed by Tufts and Kumaresan (1982) and called the modified FBLP. The procedure is as follows. Compute the correlation matrix R according to (H denotes complex conjugate transpose):

$$A = \begin{bmatrix}
x_{L-1} & x_{L-2} & \cdots & x_0 \\
x_L & x_{L-1} & \cdots & x_1 \\
\vdots & \vdots & & \vdots \\
x_{N-2} & x_{N-3} & \cdots & x_{N-L-1} \\
x_1^* & x_2^* & \cdots & x_L^* \\
\vdots & \vdots & & \vdots \\
x_{N-L}^* & x_{N-L+1}^* & \cdots & x_{N-1}^*
\end{bmatrix}, \qquad R = A^H A. \qquad (145)$$
Compute the L eigenvalues and eigenvectors vᵢ of R. From these, choose the M greatest eigenvalues and their associated eigenvectors, compute the prediction vector g from them, and then compute the spectral estimator based on g.
This spectrum has L maxima, attained at L frequencies. From among these, we choose the M frequencies whose corresponding poles are nearest the unit circle, and these are the best estimates.

3. Cramér-Rao Bounds for One Sinusoid When Using the Second Derivative of the Signal

A good estimate of a parameter means (1) small bias and (2) small variance. A very small bias is attainable for high enough SNRs. For unbiased estimators there is a theoretical bound on how small a variance can be obtained: the Cramér-Rao lower bound (CRB) (Van Trees, 1968). In the case of a complex sinusoid, the CRBs for the three parameters are (Rife and Boorstyn, 1974):

$$\mathrm{var}\{\hat{\omega}\} \ge \frac{12\sigma^2}{b^2 T^2 N(N^2 - 1)} = \mathrm{CRB}_\omega, \qquad (146a)$$

$$\mathrm{var}\{\hat{b}\} \ge \frac{\sigma^2}{N} = \mathrm{CRB}_b, \qquad (146b)$$

$$\mathrm{var}\{\hat{\theta}\} \ge \frac{2\sigma^2(2N - 1)}{b^2 N(N + 1)} = \mathrm{CRB}_\theta. \qquad (146c)$$
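The classical bounds (146a)-(146c) are straightforward to evaluate; the following helper is our own sketch (the function name and argument order are ours):

```python
def crb_classical(sigma2, b, T, N):
    """Classical Cramer-Rao lower bounds (146a)-(146c) for the frequency,
    amplitude, and phase of a single complex sinusoid in white noise."""
    crb_freq = 12 * sigma2 / (b**2 * T**2 * N * (N**2 - 1))
    crb_amp = sigma2 / N
    crb_phase = 2 * sigma2 * (2 * N - 1) / (b**2 * N * (N + 1))
    return crb_freq, crb_amp, crb_phase
```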
We recall that complex sinusoid means that we assume having at our disposal not only N samples of the original real signal, but also N samples of its Hilbert transform. In analogy to this, let us suppose that we also have N samples of the second derivative of the complex signal. What is the CRB under these conditions? The answer is contained in the following theorem (Druckmann, 1994; Druckmann, in prep.).

Theorem. Given N samples of each of the complex signals (T = sampling interval)

$$\tilde{z}_n = x_n + jy_n, \qquad \ddot{\tilde{z}}_n = s_n + jz_n,$$

where

$$x_n = b\cos(\omega T n + \theta) + w_n, \qquad y_n = \tilde{x}_n = b\sin(\omega T n + \theta) + \tilde{w}_n,$$
$$s_n = \ddot{x}_n = -b\omega^2\cos(\omega T n + \theta) + \ddot{w}_n, \qquad z_n = \ddot{\tilde{x}}_n = -b\omega^2\sin(\omega T n + \theta) + \ddot{\tilde{w}}_n,$$

then the Cramér-Rao bounds for the frequency, amplitude, and phase are

$$\mathrm{var}\{\hat{\omega}\} \ge \frac{16\sigma^2 R}{b^2 T^2 N[3R^2(N^2 - 1) + (1280/\pi^4)(f/f_s)^4]} = \mathrm{CRB}_{2,\omega}, \qquad (147a)$$
$$\mathrm{var}\{\hat{b}\} \ge \frac{2\sigma^2[R^2(N - 1)(2N - 1) + (640/3\pi^4)(f/f_s)^4]}{b^2 N \cdot 3R[R^2(N^2 - 1) + (1280/3\pi^4)(f/f_s)^4]} = \mathrm{CRB}_{2,b}. \qquad (147c)$$
The proof can be found in the references and will not be reproduced here; however, we shall mention some basic facts. The samples of x, x̃, ẍ, ẍ̃, which we designate by xₙ, yₙ, sₙ, zₙ respectively, together form a 4N-dimensional Gaussian random vector with the following parameters:

$$E\{x_n^2\} = E\{y_n^2\} = \sigma^2, \qquad E\{s_n^2\} = E\{z_n^2\} = \frac{\sigma^2\pi^4 f_s^4}{5},$$
$$\rho = \frac{\mathrm{Cov}\{x_n s_n\}}{\sigma_x\sigma_s} = \frac{\mathrm{Cov}\{y_n z_n\}}{\sigma_y\sigma_z} = -\frac{\sqrt{5}}{3},$$

and all other correlations are zero. Expressing the logarithmic probability density function of this vector and then computing from it the corresponding Fisher information matrix produces, after some lengthy derivation, the results of the theorem. We define the gain of the new bounds with respect to the classical bounds as

$$G_\omega = \frac{\mathrm{CRB}_\omega,\ \text{classical}}{\mathrm{CRB}_\omega,\ \text{with 2nd derivative}},$$

and similarly for the amplitude and phase. These gains are functions of the relative frequency ω/ωₛ and of N. The gain for the frequency CRB, as a function of relative frequency, is shown in Fig. 8 (N = 64 samples). It is always greater than 1, except at one point, where it equals 1. Maximum gain is obtained at the extremal frequencies, 0 and 0.5fₛ. The other two gains, of the amplitude and phase, behave in the same way (not shown). In conclusion, making use of the second derivative of a complex sinusoid allows us to expect estimators with lower variance.

4. Algorithms
One-Sinusoid Case. For the one-sinusoid case, when the noise is very small, the conservant can be used to measure the frequency (Zayezdny et al., 1992a):

$$\hat{\omega}^2 = -\frac{\ddot{x}_n}{x_n}. \qquad (148)$$

The derivative is computed by a numerical algorithm. This simple estimation algorithm can be used to track the instantaneous frequency, if the frequency variations are small compared to 2KT, where T is the sampling interval and 2K is the number of samples around n that are necessary to compute the
FIGURE 8. The gain of the Cramér-Rao bound using second-derivative knowledge, with respect to the classical CRB, in the case of sinusoid frequency estimation (N = 64 samples). The gain is plotted vs. the frequency (normalized to the sampling rate). The gain is never less than 0 dB, i.e., the CRB with derivative is smaller (better) than the classical one. (After Druckmann, 1994.)
second derivative at n. Smoothing before applying the algorithm can improve the result, if the noise is small. For the case of larger noise, a better algorithm is (Zayezdny et al., 1992a)

$$\hat{\omega}^2 = -\frac{\langle \ddot{x}x \rangle}{\langle x^2 \rangle}, \qquad (149)$$

where ⟨·⟩ denotes discrete averaging over N samples. This estimator is obtained by minimizing the total squared error for the differential equation

$$\ddot{x}_n + \omega^2 x_n = e_n. \qquad (150)$$
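A minimal sketch of estimator (149) (ours; np.gradient stands in for the numerical differentiation algorithm of the text):

```python
import numpy as np

def freq_least_squares(x, dt):
    """Frequency estimate (149): omega^2 = -<x'' x> / <x^2>, obtained by
    least squares on the GDE x'' + omega^2 x = e."""
    x2 = np.gradient(np.gradient(x, dt), dt)   # numerical 2nd derivative
    x2, xc = x2[5:-5], x[5:-5]                 # drop edge samples
    return np.sqrt(-np.mean(x2 * xc) / np.mean(xc**2))
```

The averaging in (149) already provides some robustness to noise; smoothing the samples first, as suggested above, improves it further.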
This algorithm, with suitable modifications, has also been used successfully to estimate waveforms of the form A cos(ωt + θ) + C from samples spanning less than half a period (Druckmann et al., 1994).

Multiple-Sinusoid Case. Assume we are given an M-multiple sinusoid with M frequencies, and its derivatives of even order, up to order 2M. Then the samples of these signals satisfy the equation

$$x_n^{(2M)} + \sum_{i=0}^{M-1} a_i x_n^{(2i)} = 0, \qquad n = 0, \ldots, N - 1, \qquad (151)$$

where x^{(k)} = dᵏx/dtᵏ. From this equation we obtain, by using the Fourier
transform, the following algebraic equation, whose roots are the frequencies of the sinusoids:

$$(j\omega)^{2M} + \sum_{i=0}^{M-1} a_i (j\omega)^{2i} = 0. \qquad (152)$$

We can obtain an equivalent equation of reduced degree by setting u = s², s = jω:

$$u^M + \sum_{i=0}^{M-1} a_i u^i = 0. \qquad (153)$$

This equation has M negative real roots. The frequencies are given by

$$\omega_k = \sqrt{-u_k}, \qquad k = 1, \ldots, M. \qquad (154)$$
If the signal is corrupted by noise, then Eq. (151) becomes

$$x_n^{(2M)} + \sum_{i=0}^{M-1} a_i x_n^{(2i)} = e_n, \qquad (155)$$

where eₙ is an error due to noise. We want to find those coefficients which minimize the total squared error. Differentiating the error with respect to the coefficients gives the following linear equation in the coefficient vector:

$$\begin{bmatrix}
r_{0,0} & r_{1,0} & \cdots & r_{M-1,0} \\
r_{1,0} & r_{1,1} & \cdots & r_{M-1,1} \\
\vdots & \vdots & & \vdots \\
r_{M-1,0} & r_{M-1,1} & \cdots & r_{M-1,M-1}
\end{bmatrix}
\begin{bmatrix} a_0 \\ a_1 \\ \vdots \\ a_{M-1} \end{bmatrix}
= -\begin{bmatrix} r_{M,0} \\ r_{M,1} \\ \vdots \\ r_{M,M-1} \end{bmatrix}, \qquad (156)$$

where we have introduced the notation

$$r_{k,i} = \langle x^{(2k)} x^{(2i)} \rangle.$$

Note that the matrix is symmetric. The algorithm for computing the frequencies is summarized as follows:

1. Compute the M derivatives of even order, x^{(2k)}, k = 1, …, M.
2. Compute the M(M + 3)/2 terms r_{k,i} = ⟨x^{(2k)}x^{(2i)}⟩, k = 0, …, M − 1; i = k, …, M.
3. Compute the coefficients a₀, …, a_{M−1} according to: matrix R = (r_{k,i}), vector y with yᵢ = r_{M,i}, i = 0, …, M − 1, and a = −R⁻¹y.
4. With the coefficient vector, construct the following equation and solve it for the M negative solutions uₖ:
$$u^M + a_{M-1}u^{M-1} + \cdots + a_1 u + a_0 = 0.$$
5. Compute the frequencies according to ωₖ = √(−uₖ).
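Steps 1-5 can be sketched as follows (our own implementation; iterated central differences replace the differentiation algorithms discussed in the accompanying notes, and the edge trimming corresponds to the N − 2L usable samples):

```python
import numpy as np

def freqs_from_derivatives(x, dt, M):
    """Steps 1-5: even-order derivatives x^(2k), the moments
    r_{k,i} = <x^(2k) x^(2i)>, the linear solve a = -R^{-1} y, and the
    roots of the reduced polynomial u^M + ... + a_0 = 0."""
    d = [x]                                     # x^(0), x^(2), ..., x^(2M)
    for _ in range(M):
        d.append(np.gradient(np.gradient(d[-1], dt), dt))
    trim = 4 * M                                # discard edge-contaminated samples
    d = [q[trim:-trim] for q in d]
    r = lambda k, i: np.mean(d[k] * d[i])       # r_{k,i}
    R = np.array([[r(k, i) for i in range(M)] for k in range(M)])
    y = np.array([r(M, i) for i in range(M)])
    a = -np.linalg.solve(R, y)                  # a_0, ..., a_{M-1}
    u = np.roots(np.concatenate(([1.0], a[::-1])))
    u = u[abs(u.imag) < 1e-6].real
    return np.sort(np.sqrt(-u[u < 0]))          # omega_k = sqrt(-u_k)
```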
Note 1. Step 1 requires a numerical differentiation algorithm. The general frequency-computation algorithm given above is therefore in fact a family of algorithms, each member differing from the others in the specific differentiation algorithm used.

Note 2. The differentiation algorithm computes the highest derivative at point k on the basis of 2L + 1 points around point k (L depends on the algorithm). Therefore the derivatives are available at only N − 2L of the signal's original N data points, and the averaging required at step 2 is performed on N − 2L samples.

Simulations. The performance of the presented methods has been tested by computer simulations: variance for the one-sinusoid case (Zayezdny et al., 1992a), and variance and root-mean-square (rms) error for the one- and two-sinusoid cases (Druckmann, 1994). We give two results from the last reference. Figure 9 shows the "performance" (defined as the inverse of the variance) of the frequency estimates by the structural properties method and by the
FIGURE 9. Frequency estimation of a sinusoid: the performance (= inverse of variance) as a function of sampling rate, for the Prony method and two variants of the structural properties method (differentiation based on a 7-point interpolating polynomial, with and without smoothing). SNR = −5 dB. The C-R bounds (classical and with derivative) are also plotted. (After Druckmann, 1994.)
FIGURE 10. Frequency estimation of a sinusoid, as in Fig. 9, but here the performance is plotted vs. SNR, at a constant sampling rate (10 samples/period). (After Druckmann, 1994.)
Prony method, as a function of sampling rate, for the one-sinusoid case. The performance should be large to be good. The signal-to-noise ratio is low: −5 dB. The computation was performed on a set of 400 signals with Gaussian white noise, each of length 4 periods. Figure 9 also shows the Cramér-Rao bounds, both the classical one and the one using second-derivative knowledge (both for the real, not complex, sinusoid). Figure 10 shows the performance as a function of signal-to-noise ratio, for a sampling rate of 10 samples/period. The signals have a length of 6 periods, and the number of trials is 200.
B. Signal Separation

By signal separation we mean two different, but related, problems: (1) two or more signals arrive simultaneously at the detector, that is, combined in an additive mixture, and we must reconstruct each one of them separately; (2) signals arrive consecutively, during time intervals of duration T, and we must decide which one of the signals, out of a small set, has arrived. The first problem is encountered, for example, when a signal is corrupted by a pulse-type interference. The second is the well-known problem of optimal detection or demodulation in communication (Proakis, 1983). In this section we treat this problem briefly, from the point of view of structural properties, emphasizing important properties connected with this method.
1. Separation of Additively Mixed Signals
The problem can be formulated as follows. We have a mixture of two signals, and possibly noise:

$$x(t) = s_1(t) + s_2(t) + w,$$

where a, b, … and ā, b̄, … are the parameters of s₁ and s₂, and we want to reconstruct s₁(t), s₂(t) from x(t). We have assumed only two signals for the sake of simplicity, but the problem can easily be extended to an arbitrary number of signals. The two-signal case is, however, quite characteristic; for example, s₁(t) may be a useful signal and s₂(t) an interference. The assumption is that we know the general form of both s₁(t) and s₂(t), but not their specific parameters. In this case we can derive a GDE for the signal x by means of the techniques of Section II.C, and our problem becomes a parameter-estimation problem, which can be solved by the procedure described in Section IV.A. After we have estimated the parameters, the reconstruction of s₁(t) and s₂(t) is immediate. We emphasize that the structural properties approach gives us the possibility of separating signals even if they occupy the same region of the spectrum; in this case classical filtering techniques, based on spectrum-band concepts, do not work. As an illustration, we present the following simple example. Consider the mixed signal, including a Gaussian pulse and an exponential one (Figs. 11a and 11b):

$$x(t) = b\exp\left[-\frac{a(t - t_0)^2}{2}\right] + d\,e^{-ct}. \qquad (157)$$
We can interpret the Gaussian pulse as a useful signal, and the exponential as an interference. This mixed signal has the time-varying, linear GDE

$$[c - a(t - t_0)]\ddot{x} + [c^2 + a - a^2(t - t_0)^2]\dot{x} + ac[1 + c(t - t_0) - a(t - t_0)^2]x = 0. \qquad (158a)$$

The derivation of this equation follows the procedure in Section II.D. Since we are interested in estimating the parameters, it is more convenient to rewrite the GDE in a form in which the terms are arranged according to the degree of the parameters a, c, t₀ (the degree of a term being the sum of the degrees of each parameter), in increasing order. The new form is

$$c\ddot{x} + a(\dot{x} - t\ddot{x}) + c^2\dot{x} - a^2t^2\dot{x} + acx + at_0\ddot{x} + ac^2tx + 2a^2t_0t\dot{x} - a^2ct^2x - ac^2t_0x + 2a^2ct_0tx - a^2t_0^2\dot{x} - a^2ct_0^2x = 0. \qquad (158b)$$
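As a sketch of how the GDE can be used numerically (ours, with hypothetical parameter values), the residual of (158a) can be evaluated from numerical derivatives for candidate parameters (a, c, t₀); minimizing it over a grid, or with any optimizer, recovers the parameters of the mixture:

```python
import numpy as np

def gde_residual(x, t, a, c, t0):
    """Mean squared residual of the GDE (158a) for the Gaussian +
    exponential mixture, from numerical derivatives of x."""
    dt = t[1] - t[0]
    x1 = np.gradient(x, dt)
    x2 = np.gradient(x1, dt)
    tau = t - t0
    e = ((c - a * tau) * x2
         + (c**2 + a - a**2 * tau**2) * x1
         + a * c * (1 + c * tau - a * tau**2) * x)
    return np.mean(e[5:-5] ** 2)    # drop edge samples
```

Carrying this minimization out analytically, instead of numerically, leads to the system of quartic equations discussed in the text.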
If there is no additional noise, then this GDE, and the signals x, ẋ, ẍ computed at three points, can serve as a basis for computing the parameters
FIGURE 11. Example of two signals overlapping both in the time and the frequency domains. (a) The signals in the time domain. (b) Their sum. (c) The spectra of the two signals. (d) Spectrum of the sum signal.
(a system of three equations with three unknowns). In a more general case, allowing an error (or noise) on the right side of the equation, we can minimize the total squared error at N points with respect to the parameters a, c, t₀. The result of this minimization procedure is a system of three quartic equations, with the three parameters as unknowns; this system is analogous to Eq. (132) or (135). As we already said, this is an example of two signals that occupy the same region of the spectrum; indeed, we can see that in Figs. 11c and 11d. These signals cannot be separated by conventional methods.

2. Detection of Signals

This classical problem of communication is formulated as follows. We send one of a set of two signals s₁(t), s₂(t). The received signal is corrupted by
noise:

$$x = s_{1,2} + w.$$
We must decide, based on the knowledge of the received signal x, which one of s₁(t), s₂(t) was sent. The classical solution to this problem is based on calculating the maximum a posteriori probability (MAP) of sending s₁(t) or s₂(t), i.e., the conditional probabilities Pr{s₁|x} and Pr{s₂|x}, and choosing the signal with the larger probability. The outcome of the development of this idea is two equivalent implementations: the correlator and the linear optimal filter (Proakis, 1983). Both implementations result (assuming Gaussian distributed noise) in comparing the two decision variables

$$\int_0^T x(t)s_1(t)\,dt \qquad \text{and} \qquad \int_0^T x(t)s_2(t)\,dt.$$

We shall briefly discuss some ideas about how to use concepts of the structural properties approach in order to solve this problem.

Nonlinear Transformations. We can use state functions as nonlinear transformations to increase the distance between s₁(t) and s₂(t), that is, u[s₁,₂] = u₁,₂(t), such that D(u₁, u₂) increases.

Example. Given the set of signals s₁(t) = sin ωt, s₂(t) = cos ωt, we apply to the received signal the operator corresponding to the dissipant S[x]. Instead of the signals s₁(t) and s₂(t), we obtain the transformed signals S[s₁] and S[s₂] (Fig. 12). It is easy to see that the distance (distinguishability) between S[s₁] and S[s₂] is larger than that between s₁(t) and s₂(t). Another possibility is to use the operator 1/S[x]. Of course, development of this idea must take the noise into account and include it in the formulation of distance.

Generalized Phase Planes. Trajectories in GPPs can be used to solve the signal detection problem. This has been discussed in Section III; see Examples 1 and 2 in Section III.D (a two-signal decision problem), and Examples 4 and 5 (a four-signal decision problem).

C. Filtering and Rejection of Signals
Let x(t) be a signal described as an image in a transform space or as a trajectory in a phase plane. The problem of filtering and rejection can be visualized as building a "tube" (or toroid) in the description space around the image of the signal. If the region inside the "tube" corresponds to a class
of useful signals, we have a filtering problem; if, on the other hand, they correspond to unwanted signals, we have a rejection problem. These problems are usually solved in the spectral domain, where filtering is performed according to frequency bands, and this theory is well known.

FIGURE 12. Using nonlinear transformations to increase the distance between signals. (a) Distance (difference) between a sinusoid and a cosinusoid. (b) The difference between the dissipants of the signals.
SIGNAL DESCRIPTION: NEW APPROACHES, NEW RESULTS
However, by using structural properties, we can obtain new filters and rejectors which have special properties and advantages.

1. Rejection Filters and Function Elimination Filters

We can formulate the procedure of synthesis of rejection filters by means of generating differential equations (GDEs); see Sections II.D and II.E. Given the signal $x(t; \mathbf{a})$, consider a GDE of this signal: $F(x, \dot{x}, \ddot{x}, \ldots; \mathbf{a}) = 0$.
We can view this GDE as an operator $L_{\mathbf{a}}[\,\cdot\,]$ acting on $x(t)$:
$$L_{\mathbf{a}}[x] = F(x, \dot{x}, \ddot{x}, \ldots; \mathbf{a}) = 0. \qquad (159)$$
It is clear that this operator is an ideal rejector for the given signal. A simple example is a rejector for a sinusoid of frequency $\omega_0$:
$$L_{\omega_0}[x] = \ddot{x} + \omega_0^2 x.$$
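Numerically, such a rejector can be approximated with finite differences; the sampling rate, frequency, and test signals in this sketch are illustrative choices:

```python
import numpy as np

# Finite-difference sketch of the rejector L[x] = x'' + w0^2 * x.
# The frequency, sampling rate, and test signals are illustrative choices.
w0 = 2 * np.pi * 5.0            # rejected frequency (rad/s)
fs = 1000.0                     # sampling rate (Hz)
t = np.arange(0.0, 2.0, 1.0 / fs)

def reject(x, dt, w0):
    """Estimate x'' + w0^2 * x on the interior samples."""
    xdd = (x[2:] - 2.0 * x[1:-1] + x[:-2]) / dt**2
    return xdd + w0**2 * x[1:-1]

s_in  = 3.0 * np.sin(w0 * t + 0.7)     # sinusoid at w0: rejected
s_out = np.sin(1.7 * w0 * t)           # another frequency: passes

r_in  = np.max(np.abs(reject(s_in, 1.0 / fs, w0)))
r_out = np.max(np.abs(reject(s_out, 1.0 / fs, w0)))
print(r_in, r_out)   # r_in is near zero, r_out is far from zero
```

The residual on the sinusoid at $\omega_0$ is only finite-difference error, while any other frequency produces a large output.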
A filter like the one in this example (for one frequency) can also be designed by other means. But by using GDEs we can obtain filters of new kinds, which cannot be obtained by classical methods. For example, we can base our design on an ensemble GDE and obtain a filter that is invariant to the parameter vector $\mathbf{a}$, and that will therefore reject any signal belonging to the class of functions generated by this ensemble GDE. Such a filter is in general nonlinear and is called a function elimination filter (FEF). These filters have been proposed by Plotkin and are treated in several papers (Plotkin, 1968, 1978, 1979; Plotkin and Wulich, 1982). The last two papers deal with eliminating impulsive sinusoidal interferences. Continuing our simple example, we shall use ensemble GDEs of a sinusoid to obtain FEFs for this class of signals. An ensemble GDE for a sinusoid and its corresponding FEF are
$$x\dddot{x} - \dot{x}\ddot{x} = 0, \qquad L_{e,\omega}[x] = x\dddot{x} - \dot{x}\ddot{x}. \qquad (161)$$
Another ensemble GDE for a sinusoid, which uses only first derivatives, but also Hilbert transforms (denoted $\hat{x}$), and its corresponding FEF are (Zayezdny and Druckmann, 1991)
$$x\dot{x} + \hat{x}\dot{\hat{x}} = 0, \qquad L_{e,H}[x] = x\dot{x} + \hat{x}\dot{\hat{x}}. \qquad (162)$$
Both these filters reject sinusoidal signals independently of their parameters; that is, we have
$$L_e[b\sin(\omega t + \theta)] = 0$$
for any $b$, $\omega$, $\theta$, and they are not obtainable by other methods.
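A numerical sketch of this invariance, under the reading $L[x] = x\dddot{x} - \dot{x}\ddot{x}$ of Eq. (161) given above (an assumed reconstruction); derivatives are taken analytically so the check is exact:

```python
import numpy as np

# Sketch of the parameter-free FEF, under the reading L[x] = x*x''' - x'*x''
# of Eq. (161); derivatives are taken analytically to keep the check exact.
t = np.linspace(0.0, 1.0, 500)

def fef(x, xd, xdd, xddd):
    return x * xddd - xd * xdd

# Any amplitude b, frequency w, and phase th are annihilated:
b, w, th = 2.5, 2 * np.pi * 3.0, 0.4
x    =  b * np.sin(w * t + th)
xd   =  b * w * np.cos(w * t + th)
xdd  = -b * w**2 * np.sin(w * t + th)
xddd = -b * w**3 * np.cos(w * t + th)
r_sin = np.max(np.abs(fef(x, xd, xdd, xddd)))

# A bisinusoid is not annihilated:
w2   = 2 * np.pi * 5.0
y    = x + np.sin(w2 * t)
yd   = xd + w2 * np.cos(w2 * t)
ydd  = xdd - w2**2 * np.sin(w2 * t)
yddd = xddd - w2**3 * np.cos(w2 * t)
r_two = np.max(np.abs(fef(y, yd, ydd, yddd)))
print(r_sin, r_two)   # r_sin is roundoff-small, r_two is not
```

The filter output vanishes for every member of the sinusoid class, whatever the parameter values, but not for a signal outside the class.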
We want to emphasize the following important properties, which are not to be found in other types of filters. (a) The possibility of designing FEFs, that is, filters that operate on an entire class of signals, independently of their parameters. (b) The possibility of filtering signals that occupy the same frequency band. Indeed, let $x, y$ be two signals having spectral components in the range $\omega_1 < \omega < \omega_2$, and let $L_x[\,\cdot\,]$ be an FEF corresponding to $x$. Then we have
$$L_x[x] = 0, \qquad L_x[y] \neq 0.$$
This cannot be achieved by conventional nonparametric filters. We shall close this subsection by giving some examples of rejection filters for various signals, expressed both in the conventional form and in the form of state functions.

(a) The signal $x = a\sin(\omega_0 t + \theta)$:
$$L_{\omega_0}[x] = \ddot{x} + \omega_0^2 x; \qquad L_e[x] = x\dddot{x} - \dot{x}\ddot{x}.$$

(b) The signal $x = a\exp(-pt^2)$:
$$L_p[x] = \dot{x} + 2ptx; \qquad L_e[x] = \dddot{x}x^2 - 3x\dot{x}\ddot{x} + 2\dot{x}^3.$$

(c) The signal $x = \dfrac{a}{t-b}$, $t > b$:
$$L_a[x] = \dot{x} + \frac{1}{a}x^2; \qquad L_e[x] = \ddot{x}x - 2\dot{x}^2.$$

(d) The signal $x = a_1\sin(\omega_1 t + \theta_1) + a_2\sin(\omega_2 t + \theta_2)$:
$$L_{\omega_1,\omega_2}[x] = \ddddot{x} + (\omega_1^2 + \omega_2^2)\ddot{x} + \omega_1^2\omega_2^2 x;$$
in state-function form, with $\kappa_n = x^{(n)}/x$,
$$L_{\omega_1,\omega_2}[x] = \kappa_4 + (\omega_1^2 + \omega_2^2)\kappa_2 + \omega_1^2\omega_2^2.$$

(e) The signal $x = a\,\dfrac{\sin(\omega_0 t + \theta)}{\omega_0 t + \theta}$. The filter is derived from Eq. (134):
$$L_{\omega_0}[x] = 2\dot{x}\dddot{x} - 3\ddot{x}^2 - 4\omega_0^2 x\ddot{x} + 2\omega_0^2\dot{x}^2 - \omega_0^4 x^2.$$
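Filters of this kind are easy to check numerically. A sketch for the Gaussian-pulse class $x = a\exp(-pt^2)$, whose ensemble filter $\dddot{x}x^2 - 3x\dot{x}\ddot{x} + 2\dot{x}^3$ must vanish regardless of $a$ and $p$ (the test values below are illustrative):

```python
import numpy as np

# Numerical check: the ensemble filter L[x] = x'''*x^2 - 3*x*x'*x'' + 2*x'^3
# annihilates every Gaussian pulse a*exp(-p*t^2), regardless of a and p
# (test values are illustrative).
t = np.linspace(-2.0, 2.0, 401)
a, p = 1.7, 0.9
x    = a * np.exp(-p * t**2)
xd   = -2.0 * p * t * x
xdd  = (4.0 * p**2 * t**2 - 2.0 * p) * x
xddd = (12.0 * p**2 * t - 8.0 * p**3 * t**3) * x
r_gauss = np.max(np.abs(xddd * x**2 - 3.0 * x * xd * xdd + 2.0 * xd**3))

# A different pulse shape, y = a/(1 + t^2), is not annihilated:
y    = a / (1.0 + t**2)
yd   = -2.0 * a * t / (1.0 + t**2)**2
ydd  = a * (6.0 * t**2 - 2.0) / (1.0 + t**2)**3
yddd = a * (24.0 * t - 24.0 * t**3) / (1.0 + t**2)**4
r_other = np.max(np.abs(yddd * y**2 - 3.0 * y * yd * ydd + 2.0 * yd**3))
print(r_gauss, r_other)   # r_gauss ~ roundoff, r_other clearly nonzero
```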
2. Pass Filters
We can synthesize filters that allow a given class of signals to pass, and reject another class of signals. This problem is closely related to the separation problem of the preceding section. The procedure is as follows. We synthesize a GDE that corresponds to both classes of signals, dependent on the parameters of both classes, or dependent only on the parameters of the first class (the passing class). Then we derive the equations that estimate (calculate) the parameters of the passing signal. Finally, from these calculated parameters we reconstruct the passing signal. Note that we have introduced here a new type of GDE: one that depends on the parameters of one class of signals, and is independent of the parameters of the other class.
D. Signal Identification

Signal identification is closely related both to parameter estimation and to signal separation. Let $s(t; a, b)$ be a signal including parameters $a, b$, and let us construct the following state functions and GDEs. The state functions that produce constant output for input $s$ are (we have assumed that the output is the parameter itself, e.g., $a$, but it may be $a^2$, $ab$, or some other combination of parameters)
$$a = F_{s,a}(s, \dot{s}, \ddot{s}, \ldots), \qquad b = F_{s,b}(s, \dot{s}, \ddot{s}, \ldots).$$
The GDEs, including an ensemble GDE, are of the form
$$L_s[x; a, b] = \mathrm{GDE}(x, \dot{x}, \ddot{x}, \ldots; a, b), \qquad L_s[s; a, b] = 0, \qquad (164)$$
$$L_{e,s}[x] = \mathrm{EGDE}(x, \dot{x}, \ddot{x}, \ldots), \qquad L_{e,s}[s] = 0. \qquad (165)$$
The first GDE includes in its expression the parameters $a, b$; the second is independent of the parameters. Taking into account the presence of noise, we shall also construct estimators for the parameters, according to the procedure of (131)-(132). The estimator for $a$ is of the form
$$\hat{a} = F_{\mathrm{est}}\big(\langle G_1(x, \dot{x}, \ldots)\rangle, \langle G_2(x, \dot{x}, \ldots)\rangle, \ldots\big), \qquad (166)$$
where $F_{\mathrm{est}}$ contains averages of the functions $G_1, G_2, \ldots$ over a specified running time interval. All these functions, viewed as filters acting on a signal $x$, can be used as identification systems for the general signal $s(t; \cdot, \cdot)$ or for the specific signal $s(t; a, b)$ with specified parameters $a, b$. In the first case we have a signal $x$,
and must decide whether it is $s(t; \cdot, \cdot)$ or not. We can use some of the above-defined functions as filters and apply them to the input $x$. We have at the output:
$$F_{s,a}[x = s] = a = \text{const.}, \qquad F_{s,a}[x \neq s] \neq \text{const.},$$
$$F_{s,b}[x = s] = b = \text{const.}, \qquad F_{s,b}[x \neq s] \neq \text{const.},$$
$$L_{e,s}[x = s] = 0 = \text{const.}, \qquad L_{e,s}[x \neq s] \neq 0.$$
For all these systems we check the outputs. If the output is essentially constant (or zero), then we conclude that the input is the signal $s(t; \cdot, \cdot)$. When the output departs from the constant, we conclude that the input is no longer $s$. Of the three systems above, the first two not only identify $s$, but also produce some more information about it: they measure one of its parameters. The third system performs identification only. We shall call all these three systems or filters exact identification filters, because they are derived under the assumption that the signals are "clean," without noise (Kazovsky et al., 1983). If the signal is corrupted by noise, then the output of the three identification systems is no longer constant (note that in general the systems are not linear, and therefore the noise at the output is not simply related to the input noise). If we allow the presence of a certain amount of noise and we still want to recognize the corrupted signal as "$s(t)$," then we can apply smoothing at the output, and check the smoothed output for "constantness." The length of the smoothing time window is to be determined according to the amount of noise allowed: if we perform too much smoothing (large time window), then we can obtain a constant output even for a signal $s_2$ other than $s$, and hence we shall wrongly identify it as $s$; on the other hand, too small a smoothing window will produce a noisy (not constant) output, and hence $s$ will not be identified as such. Another possibility, in the case of noisy signals, is not to use an exact filter and smooth its output, but to use a filter that is synthesized according to a procedure which takes the presence of noise into account from the beginning. Such a filter is, for example, one designed as a parameter estimator under the condition of minimizing the output error, such as (166). The identification is carried out according to
$$F_{\mathrm{est},s,a}[x = s] \approx a = \text{const.}, \qquad F_{\mathrm{est},s,a}[x \neq s] = \text{variable} > \text{threshold}.$$
The criterion for constantness can be the variance of the output, computed by time averaging. If $y_n$ denotes the output (as discrete samples), then the time variance is defined as
$$\mathrm{var} = \langle y_n^2 \rangle - \langle y_n \rangle^2, \qquad (167)$$
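The constancy test of Eq. (167) might be sketched as follows; the two example outputs and the threshold value are illustrative:

```python
import numpy as np

# Sketch of the constancy criterion (167): a time variance
# var = <y_n^2> - <y_n>^2 compared against a threshold.
# The two example filter outputs and the threshold are illustrative.
def time_variance(y):
    return np.mean(y**2) - np.mean(y)**2

rng = np.random.default_rng(0)
t = np.linspace(0.0, 1.0, 1000)

out_on_s     = 2.0 + 0.001 * rng.standard_normal(t.size)  # near-constant
out_on_other = np.sin(2 * np.pi * 4.0 * t)                # varying output

threshold = 1e-4
identified_s     = time_variance(out_on_s) < threshold
identified_other = time_variance(out_on_other) < threshold
print(identified_s, identified_other)   # True False
```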
and the "constantness" is defined according to $\mathrm{var} < \text{threshold}$. A filter of this type will be called an averaging identification filter. It includes averaging (smoothing) as an integral part of its design, and is optimal in the sense of minimizing the power of the output noise. This filter was designed not only for estimating $a$, but also $b$ and all other parameters, simultaneously. Therefore a better identification system is one which tests simultaneously the constantness of the outputs of $F_{\mathrm{est},s,a}[x]$, $F_{\mathrm{est},s,b}[x]$, etc. We define a multioutput (or vector) filter,
$$F_{\mathrm{est},s}[x] = \begin{bmatrix} F_{\mathrm{est},s,a}[x] \\ F_{\mathrm{est},s,b}[x] \\ \vdots \end{bmatrix}, \qquad (168)$$
and the identity test will be for a constant vector at the output. Finally, we also define an exact filter that is a vector, i.e., a multiple-output exact filter. This is based on a new notion, the vector state function, which is a generalization of a state function. There are signals that are not fully identified by a state function which produces a constant, but are fully identified by a vector of state functions which produces a constant vector for this signal. For a two-parameter signal we have the vector state function
$$\mathbf{F}_s(s, \dot{s}, \ddot{s}, \ldots) = \begin{bmatrix} F_{s,a}(s, \dot{s}, \ddot{s}, \ldots) \\ F_{s,b}(s, \dot{s}, \ddot{s}, \ldots) \end{bmatrix}.$$
Example 1. Given a bisinusoidal signal s, we define the vector state function
Applying this vector filter on s, we obtain a constant output:
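As a concrete sketch, one possible construction (an illustrative choice, not necessarily the pair used in the original) solves pointwise for the two symmetric combinations $\omega_1^2 + \omega_2^2$ and $\omega_1^2\omega_2^2$ from the fourth-order GDE of the bisinusoid; both components of the output vector are constant on this class:

```python
import numpy as np

# Illustrative vector state function for a bisinusoid (an assumed
# construction, not necessarily the one used in the text). From
# x'''' + u1*x'' + u2*x = 0 and its time derivative, solve pointwise for
# u1 = w1^2 + w2^2 and u2 = w1^2 * w2^2; both are constant on this class.
t = np.linspace(0.0, 2.0, 2001)
w1, w2 = 2 * np.pi * 1.5, 2 * np.pi * 2.5
a1, a2, th1, th2 = 1.0, 0.7, 0.3, 1.1

def deriv(n):
    """nth derivative of the bisinusoid, via sin(w t + th + n*pi/2)."""
    return (a1 * w1**n * np.sin(w1 * t + th1 + n * np.pi / 2)
            + a2 * w2**n * np.sin(w2 * t + th2 + n * np.pi / 2))

x, x1, x2, x3, x4, x5 = (deriv(n) for n in range(6))

D = x2 * x1 - x * x3                       # determinant of the 2x2 system
ok = np.abs(D) > 0.1 * np.max(np.abs(D))   # keep well-conditioned points
u1 = (x * x5 - x1 * x4)[ok] / D[ok]
u2 = (x3 * x4 - x2 * x5)[ok] / D[ok]
print(np.ptp(u1), np.ptp(u2))              # both ~0: a constant vector output
```

The points where the determinant is small are skipped, anticipating the division-by-zero spikes discussed later for malfunction diagnosis.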
In a certain sense, this is a bidimensional conservant.

Example 2: Identification of a Gaussian-Shaped Pulse. We return to the example of the mixed signal of Section IV.B. We showed there how to separate between a Gaussian pulse and an exponential one. But suppose that we have a broad Gaussian pulse, and the exponential pulse appears somewhere during the Gaussian pulse. We must use an identifier for the Gaussian pulse: as long as its output is constant, we know there is no exponential interference. As soon as its output departs from constantness, we know the exponential pulse has appeared, and we must apply the separation algorithm. Here are three examples of identification filters for Gaussian pulses, of the types discussed above:
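One filter of this kind can be sketched as follows; the particular state function used here, $F[x] = (x\ddot{x} - \dot{x}^2)/x^2 = -2p$ on the Gaussian class, is an illustrative choice, not necessarily one of the three filters of the original, and all parameter values are illustrative:

```python
import numpy as np

# One possible identification filter for the class x = a*exp(-p*t^2): the
# state function F[x] = (x*x'' - x'^2)/x^2 equals the constant -2p on the
# class. This is an illustrative choice of filter, with illustrative
# parameter values.
t = np.linspace(-1.0, 1.0, 801)
a, p = 1.0, 2.0
g   = a * np.exp(-p * t**2)
gd  = -2.0 * p * t * g
gdd = (4.0 * p**2 * t**2 - 2.0 * p) * g
F_clean = (g * gdd - gd**2) / g**2          # identically -2p

# With an exponential interference added, the output departs from constancy:
c, lam = 0.5, 3.0
h = c * np.exp(-lam * t)
s, sd, sdd = g + h, gd - lam * h, gdd + lam**2 * h
F_mixed = (s * sdd - sd**2) / s**2
print(np.ptp(F_clean), np.ptp(F_mixed))     # ~0 versus clearly nonzero
```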
E. Data Compression

1. Definitions and Brief Survey of Existing Methods
Data compression is connected with the representation of an analog signal by a string of digital characters (e.g., binary digits). Given a band-limited signal of bandwidth $W$, the simplest and crudest way to do this is (1) to sample it at a rate $f$ greater than the Nyquist rate $2W$, and then (2) to quantize each sample to $L$ uniform levels, representing it by $M$ bits, $L = 2^M$. The signal is thus encoded by a sequence of bits transmitted at a rate of
$$R = fM \quad \text{(bits/s)}.$$
This is called the canonical representation. There are ways of decreasing both $f$ and $M$, and thus obtaining an average rate $R^*$ smaller than $R$. If this is the case, we say that we have achieved data compression. Other names are bandwidth reduction and waveform coding. Following the above description, we see that the transition from the analog signal (continuous both in time and in value) to the digital representation (discrete in both) is done in two steps: discretization in time (sampling) and then discretization in value (quantization). Consequently, a data compression system consists of two main blocks: a generalized sampler and a generalized quantizer. A generalized sampler obtains from the samples a set of $P$ parameters (parameters/s), from which the signal can be reconstructed. These parameters are quantized. The generalized quantizer transforms the sequence of bits into another, more economical sequence, containing on average $C$ bits/parameter. This is done by exploiting statistical properties of the original sequence (lossless or entropy encoding). For most practical signals, $P < f$ and $C < M$, and therefore data compression can be achieved. In this chapter we are concerned only with the first block, that is, parametrization or generalized sampling. There are many methods to do this, but
most of them can be classified into three groups: linear predictive methods, transform methods, and syntactic methods. In the methods of the first group, the signal is viewed as the output of a linear discrete filter (AR or ARMA), which can be represented by one of several sets of parameters: coefficients of the difference equation, coefficients of the impulse response, reflection coefficients, poles and zeros, etc. According to the second group, the signal is transformed by a linear transform, and the most significant of its coefficients are retained as the representation. Several transforms are used: cosine, Hadamard, Karhunen-Loève, Walsh, Fourier, wavelet, etc. In the third kind of method, the signal is segmented into components which have special significance for the signals involved, and these are compressed separately. Accordingly, the methods of this group are strongly dependent on the nature of the signals involved, and differ from each other. For example, picture signals are decomposed into contour and texture signals, and ECG signals are segmented into a small number of special-type complexes (Q, R, S, etc.). Some basic references are Jayant and Noll (1984), Lynch (1985), and Kunt et al. (1985).

2. Statement of the Problem for Use of Generating Differential Equations

For our aims, the problem of data compression can be formulated as follows (Druckmann, 1987; Zayezdny and Druckmann, 1991). Let $x(t)$ be a signal, and $T[\,\cdot\,]$ a transformation that generates from $x$ a set of $K$ new signals:
$$\{s_k\} = T[x], \qquad k = 1, \ldots, K.$$
Data compression is achieved if the following conditions are satisfied.
(a) The set $\{s_k\}$ contains full information about $x$; that is, there exists an algorithm which can reconstruct $x$ from this set:
$$x = T^{-1}[\{s_k\}].$$
(b) The set $\{s_k\}$ can be represented by $R_s$ bits/s, which is less than $R_x$, the number of bits per second required to represent $x$ with the same reconstruction error:
$$R_s = \sum_{k=1}^{K} f_k M_k < R_x, \qquad (171)$$
where $M_k$ is the number of bits per sample of signal $s_k$, and $f_k$ is the sampling rate of $s_k$. If the spectra of $s_k$ and $x$ have bandwidths $W_k$, $W_x$, respectively, then a condition that may cause (171) to hold is that the
total bandwidth of $\{s_k\}$ be less than the bandwidth of $x$:
$$\sum_{k=1}^{K} W_k < W_x.$$
We deal here with a particular case of this general formulation: for us, $\{s_k\}$ are the time-varying coefficients of a GDE, and $T^{-1}[\,\cdot\,]$ is the algorithm of solving the differential equation. In the following we shall confine ourselves to second-order linear GDEs. Two types of GDEs have been used: the A-Psi type with three coefficients (see Section II.F), and an equivalent set of two first-order equations, in which $A$ and $\omega$ are transmitted directly. Some work on a fourth-order GDE has also been done (Druckmann and Zayezdny, 1991).

3. Use of a GDE of A-Psi Type with Three Coefficients for Data Compression
Consider a band-limited signal $x$ whose instantaneous amplitude and frequency have bandwidths $W_A$, $W_\omega$, respectively, and let us construct its GDE of type A-Psi. As a consequence of Theorem 5, Section II.F, if the bandwidths $W_A$, $W_\omega$ satisfy
$$W_{a_2} + W_{a_0} = 4(W_A + W_\omega) < W_x, \qquad (172)$$
then we have a possibility of data compression. We say only "have a possibility," because (172) is not sufficient: we must check the reconstruction error after quantization of the coefficients. Now we shall give a brief description of the compression system. This consists of a transmitter and a receiver. At the transmitter the following operations are performed.

(a) The coefficients $a_2(t)$, $a_0(t)$ are computed:
$$a_2(t) = x\ddot{x} - \dot{x}^2, \qquad a_0(t) = \dot{x}\dddot{x} - \ddot{x}^2.$$
Actually, they are computed as discrete samples, not as continuous signals:
$$a_2[n] = x_n\ddot{x}_n - \dot{x}_n^2, \qquad a_0[n] = \dot{x}_n\dddot{x}_n - \ddot{x}_n^2.$$
(b) The coefficients are sampled at a lower rate than the original signal, because they have a smaller bandwidth than it has. (c) The samples of the coefficients and of the initial conditions are quantized and transmitted.

At the receiver, the reconstruction of $x$ is performed according to the following steps:
(a) Interpolation is performed on the coefficient samples, in order to restore them to a sampling rate greater than the Nyquist rate of the original signal. (b) The coefficient $a_1(t)$ is computed by numerical differentiation: $a_1[n] = -\dot{a}_2[n]$.
(c) The three coefficients and the initial conditions are passed to a differential equation solver, which solves according to
$$\dot{x} = y, \qquad \dot{y} = -\frac{a_1(t)y + a_0(t)x}{a_2(t)}. \qquad (173)$$
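The transmitter algebra can be checked numerically. In this sketch the coefficients are formed as $a_2 = x\ddot{x} - \dot{x}^2$ and $a_0 = \dot{x}\dddot{x} - \ddot{x}^2$, with $a_1 = -(x\dddot{x} - \dot{x}\ddot{x}) = -\dot{a}_2$, as reconstructed above; the test FM tone is an illustrative signal:

```python
import numpy as np

# Check of the transmitter algebra: with a2 = x*x'' - x'^2 and
# a0 = x'*x''' - x''^2, and a1 = -(x*x''' - x'*x'') = -da2/dt,
# the combination a2*x'' + a1*x' + a0*x vanishes identically, and a2 is a
# low-pass quantity. The test FM tone below is an illustrative signal.
t = np.linspace(0.0, 1.0, 4001)
w0, m, Om = 2 * np.pi * 50.0, 0.1, 2 * np.pi * 2.0
w   = w0 * (1.0 + m * np.sin(Om * t))          # instantaneous frequency
wd  = w0 * m * Om * np.cos(Om * t)
wdd = -w0 * m * Om**2 * np.sin(Om * t)
psi = w0 * (t - (m / Om) * np.cos(Om * t))     # phase, with psi' = w

x  = np.cos(psi)
x1 = -w * np.sin(psi)
x2 = -wd * np.sin(psi) - w**2 * np.cos(psi)
x3 = -wdd * np.sin(psi) - 3.0 * w * wd * np.cos(psi) + w**3 * np.sin(psi)

a2 = x * x2 - x1**2
a1 = -(x * x3 - x1 * x2)
a0 = x1 * x3 - x2**2

resid = np.max(np.abs(a2 * x2 + a1 * x1 + a0 * x))
print(resid)                                   # ~0 up to roundoff
# a2 tracks -(instantaneous frequency)^2, i.e., it is low-pass:
print(np.max(np.abs(a2 + w**2)) / w0**2)       # small relative ripple
```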
The solver was of the Runge-Kutta type of order 4, with constant step. This compression system suffers from some drawbacks. The total bandwidth of its two transmitted coefficients is greater than the total bandwidth of the amplitude and phase representation [see Eq. (172) and Theorem 5 in Section II.F]. Besides, while solving the differential equation, large errors occur at those points where the leading coefficient $a_2(t)$ is close to zero.

4. Data Compression by a Set of First-Order GDEs Using A and ω Directly

In order to overcome these drawbacks, we used another system (Druckmann, 1994), in which we transmit $A$ and $\omega = \dot{\Psi}$ directly, and we use at the receiver a GDE in the new variables
$$\xi(t) = \cos\Psi(t), \qquad \eta(t) = \sin\Psi(t).$$
The GDE to be solved, with initial conditions, is
$$\dot{\xi} = -\omega(t)\eta, \qquad \dot{\eta} = \omega(t)\xi; \qquad \xi(0) = \xi_0, \quad \eta(0) = \eta_0.$$
After the DE solver, we reconstruct the original signal as $x_n = A_n\xi_n$.
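A minimal sketch of this receiver, with a classical fourth-order Runge-Kutta stepper mirroring the solver mentioned for the previous system (the transmitted $A(t)$ and $\omega(t)$ below are illustrative):

```python
import numpy as np

# Sketch of the receiver: propagate xi' = -w(t)*eta, eta' = w(t)*xi with a
# classical 4th-order Runge-Kutta stepper and reconstruct x_n = A_n * xi_n.
# The transmitted A(t) and w(t) below are illustrative.
dt = 1e-3
t = np.arange(0.0, 1.0, dt)
w = 2 * np.pi * (5.0 + 2.0 * np.sin(2 * np.pi * t))   # w(t), rad/s
A = 1.0 + 0.3 * np.cos(2 * np.pi * t)                 # A(t)

def f(ti, z):
    wi = np.interp(ti, t, w)
    return np.array([-wi * z[1], wi * z[0]])

z = np.array([1.0, 0.0])            # xi(0) = 1, eta(0) = 0
xi = np.empty_like(t)
for n, tn in enumerate(t):
    xi[n] = z[0]
    k1 = f(tn, z)
    k2 = f(tn + dt / 2, z + dt / 2 * k1)
    k3 = f(tn + dt / 2, z + dt / 2 * k2)
    k4 = f(tn + dt, z + dt * k3)
    z = z + dt / 6 * (k1 + 2 * k2 + 2 * k3 + k4)

x_rec = A * xi                      # reconstructed signal
phi = np.concatenate(([0.0], np.cumsum((w[1:] + w[:-1]) / 2) * dt))
x_ref = A * np.cos(phi)             # reference A(t)*cos(integral of w)
err = np.max(np.abs(x_rec - x_ref))
print(err)                          # small solver/quadrature error
```

Note that the pair $(\xi, \eta)$ stays on the unit circle, so the envelope information is carried entirely by the transmitted $A(t)$.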
This compression system has been tested for signals of the form
$$x(t) = a(t)\cos\left[\omega_0 t + \int_0^t \omega(\tau)\,d\tau\right], \qquad (177)$$
where $a(t)$ and $\omega(t)$ are Gaussian, low-pass random processes with zero mean, which were generated by passing two independent white processes
through a low-pass filter of bandwidth LP. For example, consider such a signal with $\mathrm{LP} = 0.2 f_0$, where frequencies have been normalized to $f_0$, the central frequency in (177), and time has been normalized to $1/f_0$. The bandwidth of $x$ is $W = 1.9$, and it has been sampled at four times the Nyquist rate. We have taken 20 records of length $N = 1024$ samples of this signal family, the records being independent [independent $a(t)$ and $\omega(t)$]. The compression and reconstruction system has been applied to this set of signals. The rms reconstruction error has been calculated for each signal, and then the average of the rms error over the entire set. The compression has been tested for various quantization resolutions (uniform quantization). The average "bits per sample" has been computed taking into account the quantization of each coefficient and the bits allocated to initial conditions. The result is the graph of the average error (distortion) versus the "bits per sample" normalized to the original sampling rate, that is, the distortion-rate curve (Fig. 13). This method has been compared to the canonical representation (PCM) and some other methods: DPCM and I-Q quadrature components. In the latter, the signal is represented by two low-pass components:
$$x_i(t) = a(t)\cos\left[\int_0^t \omega(\tau)\,d\tau\right], \qquad x_q(t) = a(t)\sin\left[\int_0^t \omega(\tau)\,d\tau\right],$$
FIGURE 13. Distortion-rate curves (distortion versus bits per sample at the original sampling rate) for some methods of data compression applied to signals of the form Eq. (177). Methods: PCM, DPCM, quadrature components (I-Q), and two variants of compression using the coefficients of a GDE. First variant: the DE solver was reset (new initial conditions) after generating a block of 1024 samples of signal; second variant: blocks of 512 samples. (After Druckmann, 1994.)
and reconstructed as
$$x(t) = x_i(t)\cos\omega_0 t - x_q(t)\sin\omega_0 t.$$
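The I-Q baseline amounts to an exact trigonometric round trip, which can be sketched directly (all signal parameters here are illustrative):

```python
import numpy as np

# Round trip through the I-Q (quadrature) baseline: two low-pass components
# and the carrier reconstruct the signal exactly (parameters illustrative).
fs, f0 = 2000.0, 50.0
t = np.arange(0.0, 1.0, 1.0 / fs)
a  = 1.0 + 0.5 * np.cos(2 * np.pi * 1.0 * t)             # slow amplitude
dw = 2 * np.pi * 3.0 * np.sin(2 * np.pi * 0.7 * t)       # slow freq. offset
phi = np.concatenate(([0.0], np.cumsum((dw[1:] + dw[:-1]) / 2) / fs))
x = a * np.cos(2 * np.pi * f0 * t + phi)                  # Eq. (177)-type

x_i = a * np.cos(phi)               # in-phase component, low-pass
x_q = a * np.sin(phi)               # quadrature component, low-pass
x_rec = x_i * np.cos(2 * np.pi * f0 * t) - x_q * np.sin(2 * np.pi * f0 * t)
err_iq = np.max(np.abs(x - x_rec))
print(err_iq)                       # exact up to roundoff
```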
It is seen that the GDE method compresses better by a factor of 1.6 than the best classical method tested (I-Q).

F. Extrapolation of Time Series and Real-Time Processes
Application of the structural properties method to extrapolation has been developed in a number of papers: extrapolation of time series in Tabak et al. (1983), Tiunov et al. (1983), and Zayezdny and Tiunov (1993); and extrapolation of real-time processes in Tiunov et al. (1994). The last paper also presents results for real-time signals obtained from a target tracking system, and includes a comparison with a classical system based on a Kalman-Bucy filter (Kalman and Bucy, 1961). We present in this section the new method of extrapolation based on structural properties, as it is reported in these references. The new method is compared in these references to conventional methods (Box and Jenkins, 1970). The problem of extrapolation is formulated as follows: given a random process $x(t)$, $0 \le t \le T$, we first perform a preliminary processing of smoothing. This is done by conventional methods (Marble and Zayezdny, 1989; Zayezdny et al., 1992b). All operations for extrapolation are performed on the smoothed version of the signal $x(t) = s(t) + w(t)$. Our aim consists of finding the function $\hat{s}(t)$, $0 \le t \le T + \tau$, $\tau > 0$, under the condition that $\hat{s}(t)$ approximates $s(t)$ in some sense in the interval $T \le t \le T + \tau$. Widely accepted methods for doing this are given in Box and Jenkins (1970). The described method based on structural properties performs the extrapolation according to the following steps.
1. In the interval $0 \le t \le T$ the input function is transformed into a new image, by using state functions (Section II.B) or phase trajectories (Section III.B). This image must be a simple one, preferably a straight line. The search for the new image is carried out on a large but finite set of state functions.
2. After the suitable state function has been chosen, the new image (straight line) is extrapolated to the interval $T \le t \le T + \tau$ (the extrapolation interval).
3.
The inverse transformation is performed on the extrapolated image, over the extrapolation interval $T \le t \le T + \tau$, in order to return to the original function. We thus obtain the extrapolated approximation $\hat{s}(t)$, which is the solution of our problem.
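The three steps can be sketched for one concrete signal class; the exponential class, the logarithm as linearizing state function, and all parameter values are illustrative choices:

```python
import numpy as np

# The three extrapolation steps, sketched for one concrete class: for an
# exponential process the state function u = ln(x) is a straight line. The
# class, the transform, and all parameters are illustrative.
a, b = 2.0, 1.3
T, tau = 1.0, 0.5
t_obs = np.linspace(0.0, T, 101)
x_obs = a * np.exp(-b * t_obs)             # observed (noise-free) record

u = np.log(x_obs)                          # 1) transform to a straight line
c1, c0 = np.polyfit(t_obs, u, 1)           # 2) extrapolate the line ...
t_ext = np.linspace(T, T + tau, 51)        #    ... over T <= t <= T + tau
u_ext = c1 * t_ext + c0
x_ext = np.exp(u_ext)                      # 3) inverse transformation

err_ext = np.max(np.abs(x_ext - a * np.exp(-b * t_ext)))
print(err_ext)                             # ~0 for noise-free data
```

With noisy data, the same scheme is applied to the smoothed record, as described above.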
This approach involves two issues. The first is the transformation of the original process $s(t)$ into a simple image $u(t)$ or $\dot{u}(u)$. The simplification of the functional structure by using a suitable image can be explained intuitively by the fact that a combination of derivatives of the function contains more information than the function alone. Moreover, the transformation can be piecewise modified; that is, a different transformation is selected in different time intervals. The second issue is the automation of the above-described procedure. Such a procedure is considered in Zayezdny and Tiunov (1993). The key to a successful selection of suitable transformations is the availability of large tables or libraries of transformations. Such tables can be found in the cited references, and also in Zayezdny and Druckmann (1991) and in the papers of Maizus and others (Maizus, 1989, 1992; Grinberg et al., 1989; Lototsky et al., 1991). Maizus deals with the application of structural properties methods to control theory, and uses a somewhat different terminology. Examples given in Zayezdny and Tiunov (1993) and in Tiunov et al. (1994) show that the structural properties method achieves smaller extrapolation error and, therefore, larger prediction intervals than the Box-Jenkins method.

G. Malfunction Diagnosis
In this section we briefly present the application of structural properties methods to malfunction diagnosis as developed by Keynan (1987). The problem is formulated as follows. We consider a signal that gives an indication of the proper operation of a system. We are given two forms of the signal: the standard (reference) signal, which corresponds to the normal operating state of the system, and a measured signal, called the test signal. We must quantitatively compare the two signals in order to identify malfunction of the system. The method described in the reference can be divided into two parts. The first part consists of transforming the original signals (standard and test) into convenient state functions (Section II.B) or phase trajectories (Section III.B). "Convenient" here means "of simple functional structure." The reference signals are known, and therefore state functions can be found (one for each reference signal) which transform them into a linear function. This is the problem of linearization of functions. The second part is dedicated to the quantitative estimation of the malfunction, that is, how much the state function of the test signal departs from the state function of the reference signal. Several methods are possible. In the reference work, a special universal method is developed, based on expressing the test signal as a composite function of the reference signal.
Malfunction estimation means the estimation of the difference between two functions, the test signal, denoted by $x(t)$, and the reference signal, denoted by $s(t)$. The signals may be normalized in value and in time, and hence, without loss of generality, we may assume $|s| \le 1$, $|x| \le 1$, $0 \le t \le 1$. We express $x$ by means of $s$ as a composite function
$$x(t) = f[s(t)], \qquad 0 \le t \le 1,$$
where the external function $f(s)$ is approximated by a polynomial:
$$f(s) \approx \sum_{n=0}^{N} a_n s^n. \qquad (178)$$
If $a_1 = 1$ and all other coefficients are zero, then $x = s$, and there is no malfunction. The coefficients $\{a_n\}$ are called malfunction coefficients, and can be determined, for example, by means of a system of equations that minimizes the mean squared error:
$$\frac{\partial e^2}{\partial a_n} = \frac{\partial}{\partial a_n}\int_0^1 \left[\sum_{i=0}^{N} a_i s^i(t) - x(t)\right]^2 dt = 0, \qquad n = 0, \ldots, N.$$
In the reference another method is proposed, based on the following system of equations:
$$\begin{aligned} x(t) &= a_0 + a_1 s(t) + a_2 s^2(t) + \cdots + a_N s^N(t), \\ \dot{x}(t) &= a_1\dot{s}(t) + 2a_2 s(t)\dot{s}(t) + \cdots + N a_N s^{N-1}(t)\dot{s}(t), \\ &\;\;\vdots \\ x^{(N)}(t) &= a_1 s^{(N)}(t) + [a_2 s^2(t)]^{(N)} + \cdots + [a_N s^N(t)]^{(N)}. \end{aligned} \qquad (179)$$
The solution of this system gives the coefficients $\{a_n\}$, which are functions of time; this is because the polynomial form is only an approximation. If the approximation is good enough, the variation of the coefficients is small relative to the variation of $s(t)$. In order to smooth out the time dependence of the coefficients, the following average value is used:
$$a_{n,\mathrm{rms}} = \sqrt{\int_0^1 a_n^2(t)\,dt}. \qquad (180)$$
For the special case of a second-degree polynomial approximation, we have
$$\begin{aligned} x(t) &= a_0 + a_1 s(t) + a_2 s^2(t), \\ \dot{x}(t) &= a_1\dot{s}(t) + 2a_2 s(t)\dot{s}(t), \\ \ddot{x}(t) &= a_1\ddot{s}(t) + 2a_2[\dot{s}^2(t) + s(t)\ddot{s}(t)], \end{aligned}$$
and after solving for the coefficients, we obtain
$$a_2(t) = \frac{\dot{s}\ddot{x} - \ddot{s}\dot{x}}{2\dot{s}^3}, \qquad a_1(t) = \frac{\dot{x}}{\dot{s}} - 2sa_2, \qquad a_0(t) = x - s^2 a_2 - s a_1.$$
Before computing the average values of the coefficients according to (180), we must eliminate the spikes that result from division by zero. This is done by means of a nonlinear filter that recognizes the occurrence of spikes and replaces them by a straight line computed by linear regression. This method has been tested in the reference work on many examples, including examples where the coefficients can be computed analytically.
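The closed-form coefficients can be verified numerically; the reference signal, the true coefficient values, and the interval (chosen away from the zeros of $\dot{s}$) in this sketch are illustrative:

```python
import numpy as np

# Numerical check of the closed-form second-degree malfunction coefficients.
# The reference signal s, the true coefficients, and the interval (chosen
# away from the zeros of s') are illustrative.
t = np.linspace(0.2, 1.2, 501)
s, sd, sdd = np.sin(t), np.cos(t), -np.sin(t)

a0_true, a1_true, a2_true = 0.1, 0.9, 0.3
x   = a0_true + a1_true * s + a2_true * s**2
xd  = a1_true * sd + 2.0 * a2_true * s * sd
xdd = a1_true * sdd + 2.0 * a2_true * (sd**2 + s * sdd)

a2 = (sd * xdd - sdd * xd) / (2.0 * sd**3)
a1 = xd / sd - 2.0 * s * a2
a0 = x - s**2 * a2 - s * a1
print(a2[0], a1[0], a0[0])     # recovers 0.3, 0.9, 0.1 at every t
```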
REFERENCES

Andronov, A. A., Vitt, A. A., and Khaykin, S. E. (1966). "Theory of Oscillators." Pergamon Press, Oxford.
Baraniuk, R., Flandrin, P., and Michel, O. (1993). In "Proc. 14th Colloque GRETSI (Juan-les-Pins, France)" (B. Picinbono, P. Tournois, and G. Bienvenu, eds.), p. 359.
Bastiaans, M. J. (1980). Proc. IEEE 68, 538.
Box, G. E. P., and Jenkins, G. M. (1970). "Time-Series Analysis: Forecasting and Control." Prentice-Hall, Englewood Cliffs, NJ.
Boyce, W. E., and DiPrima, R. C. (1969). "Elementary Differential Equations and Boundary Value Problems," 2nd ed. Wiley, New York.
Bracewell, R. N. (1986). "The Fourier Transform and Its Applications," 2nd ed. McGraw-Hill, New York.
Brillinger, D. R., and Rosenblatt, M. (1967). In "Spectral Analysis of Time Series" (B. Harris, ed.), p. 189. Wiley, New York.
Cohen, L. (1989). Proc. IEEE 77, 941.
Combes, J.-M., Grossmann, A., and Tchamitchian, P., eds. (1989). "Wavelets: Time-Frequency Methods and Phase Space, Proc. Marseille 1987." Springer-Verlag, Berlin.
Daubechies, I. (1988). Commun. Pure Appl. Math. 41, 909.
Daubechies, I. (1990). IEEE Trans. Inform. Theory 36, 961.
Daubechies, I., Grossmann, A., and Meyer, Y. (1986). J. Math. Phys. 27, 1271.
Druckmann, I. (1987). Data compression using the coefficients of a 2nd-order homogeneous linear generating differential equation. M.Sc. thesis, Ben Gurion University, Beer Sheva, Israel (in Hebrew).
Druckmann, I. (1991). In "Proc. 13th Colloque GRETSI (Juan-les-Pins, France)," p. 301.
Druckmann, I. (1994). Application of structural properties method to signal processing (parameter estimation, data compression, modulation type identification). Ph.D. thesis, Ben Gurion University, Beer Sheva, Israel (in Hebrew).
Druckmann, I. (in prep.). The Cramer-Rao bound for sinusoidal signals when the second derivative is known.
Druckmann, I., and Zayezdny, A. M. (1991). In "Proc. 13th Colloque GRETSI (Juan-les-Pins, France)," p. 461.
Druckmann, I., Gabay, S., and Yifrah, Y. (1994). In "21st Power Modulator Symp. Record (Costa Mesa, CA)." IEEE Press.
Elliott, D. F., and Rao, K. R. (1982). "Fast Transforms: Algorithms, Analyses and Applications." Academic Press, New York.
Franks, L. E. (1969). "Signal Theory." Prentice-Hall, Englewood Cliffs, NJ.
Giannakis, G. B., and Mendel, J. M. (1989). IEEE Trans. ASSP 37, 360.
Grinberg, A. S., Lototsky, V. A., Maizus, R. I., and Yaralov, A. A. (1989). In "Low Cost Automation: Preprints IFAC Symp. (Milano)" (A. DeCarli, ed.), p. F53. La Sapienza, Rome.
Grossmann, A., and Morlet, J. (1984). SIAM J. Math. Anal. 15, 723.
Hayes, M. H., and Clements, M. A. (1986). IEEE Trans. ASSP 34, 485.
Hess, D. T. (1966). Proc. IEEE 54, 1089.
Jayant, N. S., and Noll, P. (1984). "Digital Coding of Waveforms." Prentice-Hall, Englewood Cliffs, NJ.
Kay, S. M. (1988). "Modern Spectral Estimation: Theory and Applications." Prentice-Hall, Englewood Cliffs, NJ.
Kay, S. M., and Marple, S. L. (1981). Proc. IEEE 69, 1380.
Kazovsky, L., Zayezdny, A. M., and Plotkin, E. (1983). J. Franklin Inst. 315, 37.
Keynan, G. (1987). Universal method and system for malfunction diagnosis. M.Sc. thesis, Ben Gurion University, Beer Sheva, Israel (in Hebrew).
Kunt, M., Ikonomopoulos, A., and Kocher, M. (1985). Proc. IEEE 73, 549.
Lacoume, J. L., ed. (1991). "Higher Order Statistics." Elsevier, Amsterdam.
Lacoume, J. L., Gaeta, M., and Amblard, P. O. (1992). In "Signal Processing VI, Eusipco Conf. (Brussels)" (J. Vandewalle et al., eds.), p. 91. Elsevier, Amsterdam.
Lototsky, V. A., Yaralov, A. A., Grinberg, A. S., and Maizus, R. I. (1991). In "Computer-Aided Design in Control Syst.: Preprints 5th IFAC-IMACS Symp. (Swansea, UK)."
Lynch, T. J. (1985). "Data Compression: Techniques and Applications." Lifetime Learning Pub., Belmont, CA.
Maizus, R. I. (1989). Representation of experimental data by differential discriminant operators in information systems. Ph.D. thesis, Kiev, USSR (in Russian).
Maizus, R. I. (1992). Electronic Modeling 14(6), 35.
Maizus, R. I. (1993). Eng. Simulation 10, 1002.
Mallat, S. G. (1989a). IEEE Trans. PAMI 11, 674.
Mallat, S. G. (1989b). IEEE Trans. ASSP 37, 2091.
Marble, A. E., and Zayezdny, A. M. (1989). Med. Biol. Eng. Comput. 27, 171.
Marple, S. L. (1987). "Digital Spectral Analysis with Applications." Prentice-Hall, Englewood Cliffs, NJ.
Matsuoka, T., and Ulrych, T. J. (1984). Proc. IEEE 72, 1403.
Mecklenbrauker, W. F., ed. (1993). "The Wigner Distribution: Theory and Applications in Signal Processing." Elsevier, Amsterdam.
Mendel, J. M. (1991). Proc. IEEE 79, 278.
Meyer, Y., ed. (1991). "Wavelets and Their Applications, Proc. Marseille 1989." Springer-Verlag, Berlin, and Masson, Paris.
Nikias, C. L., and Raghuveer, M. R. (1987). Proc. IEEE 75, 869.
Nuttall, A. H. (1976). In: Naval Underwater Systems Center Sci. Eng. Stud., Spectral Estimation (New London, CT), Rep. 5501.
Orr, R. (1991). Proc. SPIE, 1565.
Papoulis, A. (1977). "Signal Analysis." McGraw-Hill, New York.
Papoulis, A. (1984). "Probability, Random Variables and Stochastic Processes," 2nd ed. McGraw-Hill, New York.
Picinbono, B., and Martin, W. (1983). Ann. Telecommun. 38 (in French).
Pisarenko, V. F. (1973). Geophys. J. Roy. Astron. Soc. 33, 347.
396
ALEXANDER M. ZAYEZDNY AND ILAN DRUCKMANN
Pitarque, T., Allengrin, G., Ferrari, A., and Menez, J. (1991). Signal Processing 22, 231.
Plotkin, E. (1968). Method of filtering and rejection of deterministic signals with unknown parameters. In "Proc. Conf. Leningrad Inst. Commun.," p. 81 (in Russian).
Plotkin, E. (1978). In "European Conf. Circuit Theory and Design (Switzerland)," p. 422.
Plotkin, E. (1979). IEEE Trans. ASSP 27, 501.
Plotkin, E., and Wulich, D. (1982). Signal Processing 4, 35.
Plotkin, E., and Zayezdny, A. M. (1979). Signal representation using relative state variables. In "Proc. 11th Convention Elec. Eng. Israel," p. D2-2.
Porat, M., and Zeevi, Y. (1988). IEEE Trans. PAMI 10, 452.
Proakis, J. G. (1983). "Digital Communications." McGraw-Hill, New York.
Proakis, J. G., and Manolakis, D. (1989). "Introduction to Digital Signal Processing." Macmillan, New York.
Raghuveer, M. R., and Nikias, C. L. (1986). Signal Processing 9, 35.
Rice, S. O. (1982). Proc. IEEE 70, 692.
Rife, D. C., and Boorstyn, R. R. (1974). IEEE Trans. Inform. Theory 20, 591.
Schumaker, L. L., and Webb, G., eds. (1993). "Recent Advances in Wavelet Analysis." Academic Press, New York.
Special issue on wavelet analysis (1992). IEEE Trans. Inform. Theory 38.
Tabak, D., Turgeman, E., and Zayezdny, A. M. (1983). In "Melecon 83 Mediterranean Electr. Conf. Proc. (Athens, Greece)," Vol. 2, p. CS-07.
Tiunov, S., Bronstein, A., and Zayezdny, A. M. (1994). Signal Processing 38, 231.
Tiunov, S., Palacheva, Z., Soldatov, A., and Kirichenka, G. (1983). Design and implementation of a statistic data processing system. In "Trans. Scient. Symp.: Automation Information Technological Processes (Kuibyshev, USSR)" (in Russian).
Trachtman, A. M. (1972). "Introduction to Generalized Spectral Theory of Signals." Soviet Radio Press, Moscow (in Russian).
Tufts, D. W., and Kumaresan, R. (1982). Proc. IEEE 70, 975.
Ulrych, T. J., and Clayton, R. W. (1976). Phys. Earth Planet. Interiors 12, 188.
Urkowitz, H. (1983). "Signal Theory and Random Processes." Artech House, Dedham, MA.
Van Trees, H. L. (1968). "Detection, Estimation and Modulation Theory, Part I." Wiley, New York.
Wexler, J., and Raz, S. (1990). Signal Processing 21, 207.
Zayezdny, A. M. (1967). Structural relationships and their use in oscillation theory. Trans. Electr. Inst. Commun. Leningrad 57 (in Russian).
Zayezdny, A. M. (1973). "Basic Circuit Theory: Nonlinear and Parametrical Circuits." Sviaz Press, Moscow (in Russian).
Zayezdny, A. M., and Druckmann, I. (1991). Signal Processing 22, 153.
Zayezdny, A. M., and Shcholkunov, K. (1976). Generalization of phase plane method and its appl. to signal theory. Trans. Electr. Inst. Commun. USSR 36 (in Russian).
Zayezdny, A. M., and Tiunov, S. (1993). Signal Processing 32, 285.
Zayezdny, A. M., and Zaytsev, V. A. (1971). Telecom. & Radio Eng. 26, 86 (transl. from Russian).
Zayezdny, A. M., Adler, Y., and Druckmann, I. (1992a). IEEE Trans. Instr. Measurement 41, 397.
Zayezdny, A. M., Lyandres, V., and Briskin, S. (1992b). In "Proc. URSI Intl. Symp. Signals, Systems and Electronics ISSSE 92 (Paris)" (P. Bauer et al., eds.), p. 500.
Zayezdny, A. M., Plotkin, E. I., and Cherkassky, Y. A. (1971). Telecom. & Radio Eng. 26, 61 (transl. from Russian).
Zayezdny, A. M., Tabak, D., and Wullich, D. (1989). "Engineering Applications of Stochastic Processes." Research Studies Press (Wiley), Taunton, Somerset.
Index
A
Accuracy of approximation, 182 Algebra, group algebra, 2-78 Algorithms All Global Coverings, 191-192 All Rules, 192-193 Cook-Tom algorithm, 18 group algebra matrix transforms, 14, 44 parameter estimation, 372-376 Single Global Covering, 188-190, 191 Single Local Covering, 190-191 Winograd algorithm, 18 All Global Coverings algorithm, 191-192 All Rules algorithm, 192-193 Amplitude-modulated (AM) signal, A-Psi GDE, 340 Anesthesia, effects, 290 Animal intelligence, 264, 267-272, 286-287 ARMA systems, signal description, 322 Artificial intelligence, 277 Turing test, 266-272 Astronomy, delayed-choice experiment, 298 Attention, 289-291 Attribute significance, rough set theory, 182-183 Autoregressive (AR) systems, signal description, 322, 325 Averaging identification filters, 385 Awareness mind, 262-263 neural network model, 289-291, 293
B
Backpropagation, 288 Basic probability assignment, 171, 175 Basic probability number, 171 Belief function, 171 Bessel function series, 132-133 Binary threshold model, neurons, 276 Binding problem, 284-285 Biological specimen, electrical effects, 97 learning, see Animal intelligence Bispectrum, 324 Block, rough set theory, 156 Brain, see also Mind invariants, 292-293 mind-body problem, 261-262 neural network model, 272-273 quantum neural computing, 266-310 as quantum system, 260 reductionist approach, 262-263, 285 scripts, 290-291 somatosensory cortex, 291 split-brain research, 261, 263 vision, 273, 283-284, 302 Bright-field imaging, 220, 252 C
Calculus, rough set theory, 151-194 Cell death, neuronal, 287 Certainty, rough set theory, 166 Chaotic dynamics, 288-289
Chi function, 327 Classical phase plane, 348 Coefficient selection function, 45 Cognitive function animal intelligence, 264, 267-269 scripts, 290-291 Turing test, 266-272 Cognitive function reflection, 294 Complementarity, 262-263, 295-296 Computation, paradigm, 261, 294 Computers quantum neural computing, 260-310 Turing test for intelligence, 272 Computer simulation, EMS image structures, 135-140 Computing models, 260-264, 283-284 Concept, rough set theory, 155 Condiant, 332 Connectionism, 298-299 Consciousness anesthesia, effects, 290 mind, 261, 263 neural network model, 289-291, 293 as recursive phenomenon, 293 Vedic cognitive science, 264-265, 267 Conservant, 328, 331 Constructability, 325 Contour properties, signal description, 318 Contrast calculation, electron mirror system, 108-144 Contrast simulations, electrical microfield, 135-140 Convolution filtering signals, 2-3, 4-8 by sections, 4-8 Convolution property, matrices, 29-35 Convolutions cyclic convolutions, 2, 4-7 fast convolutions, 12-19 group convolutions, 3, 11-12 linear convolutions, 2, 4-7, 12, 19 vector convolutions, 11 Walsh convolutions, 19 Cook-Tom algorithm, 18 Correctionism, 265, 292 Cramer-Rao bounds, 366, 371-372, 376 Crystals, mirror electron microscopy, 96-97 Cumulants, 323, 324 Cyclic convolution, 4-7, 19 fast convolution, 12-19
Cyclic groups index notation, 8-9 matrix representation, 13 D
Data, granularity, rough set theory, 151-194 Data compression, 386-387 generating differential equations, 387-391 Decision tables, 154-157, 158, 185-187 Defects, mirror electron microscopy, 97 Definable partition, 167-171 Definable set, 158-159, 166 Degree of uncertainty, rough sets, 181 Delayed-choice experiment, 297-298 Dempster rule of combination, 176 Dempster-Shafer theory, see Evidence theory Differential equation of states (DES), 336 Differential operators, 335 Dihedral groups index notation, 9 matrix representation, 13 matrix transforms, 32-35 Direct product group algebra, 14-19 Direct tensor product, group algebra, 16 Discrete linear systems, signal description, 322 Discrete matrix transform, group algebra, 27-44 Dissipant, 328, 329-330 Distortion fields, mirror electron microscopy, 109-124 Domain structure, mirror electron microscopy, 97 Dreams, script, 290 Dropping conditions, 188 Dualist theory, mind-body problem, 261-262, 294, 305-306
E Edge detection, group algebra, 45 EGDE, see Ensemble GDE Einstein-Podolsky-Rosen (EPR) experiment, 302-304 Electron distortion field, 109-114 Electronics industry, mirror electron microscopy, 96-98
Electron microscopy, 197 low-energy electron microscope, 86 mirror electron microscopy applications, 95-98 history, 85-87, 144 image formation, 87-95 instrument, 83-85 quantitative contrast technique, 98-144 photoelectron emission microscopy, 87 scanning transmission electron microscopy, 197 image formation, 221-231 off-axis holography, 232-253 transmission electron microscopy, 197 image formation, 203-213 low-dose imaging, 207-213 off-axis holography, 213-221, 252-256 Electron mirror system (EMS) contrast mirror electron microscopy, 108-109 mirror electron microscopy, 87-88, 99-108 Elementary state functions, 329-333 Emergent behavior, mind, 262 Emergent property, 262 EMS, see Electron mirror system Ensemble GDE (EGDE), 337-339 Envelope, signal description, 318 EPR experiment, see Einstein-Podolsky-Rosen experiment Evidence theory, and rough set theory, 171-182 Exact identification filters, 384 Explicit learning, 286 Extended Prony method, 369 Extrapolation, structural properties method, 391-392 F
Fast convolution, 17-19 FBLP, see Forward-backward linear prediction method Feedback networks, 265, 276-278 Feedforward networks, 265, 276-278, 288 FEF, see Function elimination filters Feigenbaum number, 282 Ferroelectric materials, mirror electron microscopy, 96 Ferromagnetic materials, mirror electron microscopy, 97-98
Filamentary conduction, mirror electron microscopy, 129-131 Filtering (signals) convolution, 2-3, 4-8 structural properties method, 379-383, 385 Finite groups cyclic group, 8-9 dihedral group, 9 index notation, 8-11, 55-57 quaternion group, 9-10 Finite-impulse-response (FIR) filter, 4 Focal elements, 171, 173-180 Forward-backward linear prediction (FBLP) method, 370 Fourier transforms off-axis hologram, 215-218, 244-246 short-time Fourier transforms, 320 signal description, 320 Frame of discernment, 171 Fraunhofer approximation, 202 Fraunhofer holograms, 216, 227 Free will, 264 Frequency-modulated (FM) signal, A-Psi GDE, 341 Fresnel approximation, 202, 203 Fresnel diffraction, TEM off-axis holography, 253-256 Friedel law, 209 Function elimination filters (FEF), generating differential equations, 381-382 Fuzzy set theory, 151
G GDE, see Generating differential equations Generalized phase planes (GPP) signal description, 324-325, 347-365 signal detection, 379 Generalized quantizer, 386 Generalized sampling, 386 Generalized spectral theory, 319 Generating differential equations (GDE) A-Psi type, 339-347, 388 data compression, 387-391 ensemble GDE, 337-339 parameter estimation, 366 signal description, 324, 327, 336-347 three coefficients, 343-347 Generating phase trajectories, 324
Global covering, 156, 185 LERS LEM1 option, 188-190, 191 Global warming, LERS, 194 GPP, see Generalized phase planes Grandmother neurons, 284 Granularity of data, rough set theory, 151-194 GRASP library, 63-78 Grigson coils, 221 Group algebra, 10 convolution by sections, 4-8 direct product group algebra, 14-19 finite groups and index notation, 8-10 GRASP class library, 55-63 source code, 63-78 group convolution, 3, 11-12 history, 3 inverses of group algebra elements, 22-27 matrix representation, 12-14 matrix transforms, 27-44 semidirect product group algebra, 19-22 signal and image processing, 44-54 Group algebra class, 58-60 Group algebra matrices class, 60-62 Group class, 55-57 Group convolution, filtering signals, 3, 11-12 Group elements, matrix representation, 12-14, 24
H Hebb’s rule, 278, 281, 286 Higher-order spectra, signal description, 322-324 Hodgkin-Huxley neuronal model, 279-280 Holography, 197-198 Fraunhofer holograms, 216, 227 in-line holography, 198-199, 252 off-axis holography, 199 scattering amplitude, 200-203
I
Iconic memory, 289 Ideal, group algebra, 25 Identity theory, mind-body problem, 261 Image formation mirror electron microscopy, 87-95 scanning transmission electron microscopy, 221-231 transmission electron microscopy, 203-213 Image processing, group algebra, 46-50
Image structures, computer simulation, 135-140 Imperfect data, methods of handling, 151, 194 Indefinable set, 159, 166 Indiscernibility, 152-157 Indiscernibility relation, 152 Information function, 153 Information processing biological, 260, 262-263, 264, 292 chaotic dynamics, 289 Information tables, 152-154 Information theory, EPR experiment, 302-304 In-line holography, 198-199, 252 Insulators, mirror electron microscopy, 96-97 Integrated transforms, signal processing, 320 Intelligence; see also Animal intelligence biological, 264, 267-269 gradation, 269-270 learning, 285-293 Turing test, 266-272 Intentionality, 294 Interference experiments, 296-297 Internally undefinable partition, 169-170 Invariants, brain function, 292-293 Inverses group algebra elements, 22-27 matrices with group algebra elements, 42-43 Inversion formulas, 332-333 Iso-phase axis, 232, 235, 236, 240, 241 Iterative transformations, neural networks, 278-284
K Karhunen-Loeve expansion, 319 Knowledge acquisition, rough set theory, 184, 191 Kolmogorov’s addition axiom, 171
L
Learning quantum neural computing, 285-293 rough set theory, 184, 191 Learning from Examples Based on Rough Sets, see LERS LEEM, see Low-energy electron microscope LERS, 184-193 All Global Coverings, 191-192 All Rules, 192-193 LEM1 option, 188-190, 191 LEM2 option, 190-191 real-world applications, 193-194 Single Global Covering, 188-190, 191 Single Local Covering, 190-191 Lichte focus, 219-220 Linear convolution, 2, 4-7, 12, 19 Linear time-invariant system, signal description, 322 Linguistics animal and machine behavior, 270-271 LERS used, 194 Local covering, 156, 185 LERS LEM2 option, 190-191 Logical tasks, animal and machine behavior, 272 Low-dose imaging, transmission electron microscopy, 207-213 Low-energy electron diffraction, with mirror electron microscopy, 86-87 Low-energy electron microscope (LEEM), 86 Lower approximation, of a set, 157-165 Lower boundary, 164 Lower substitutional decision, 185
M Machine intelligence linguistics, 270-271 logical tasks, 272 recursive behavior, 272 Turing test, 266-272 Machine learning, rough set theory, 184 Magnetic distortion field, 114-119 Magnetic fringing fields, mirror electron microscopy, 97, 119-124 Magnetic materials, mirror electron microscopy, 97-98
Malfunction coefficients, 393 Malfunction estimation, structural properties method, 325, 392-394 Mathematics group algebra, 2-78 rough set theory, 151-194 Matrices convolution property, 29-35 direct property matrices, 39-40 group elements, 11-12 semidirect tensor product matrices, 40-41 Matrix transforms, group algebra, 27-44 McCulloch-Pitts model, neurons, 276 Medicine, LERS used, 193-194 MEM, see Mirror electron microscopy Memory, 289-290; see also Learning Metal-insulator-metal (MIM) system, 124 Metal-insulator-semiconductor (MIS) system, mirror electron microscopy, 124-144 Metals, mirror electron microscopy, 95-96 Microscopy, see Electron microscopy Microtubule information transfer, 292, 306-307 MIM system, see Metal-insulator-metal system Mind, see also Brain consciousness, 261-264 emergent behavior, 262 mind-body problem, 261-262 Vedic cognitive science, 264-265, 267 Mirror electron microscopy applications, 95-98 crystals, 96-97 history, 85-87, 144 image formation, 87-95 instrument, 83-85 quantitative contrast technique, 98-108 electron distortion field, 109-114 magnetic distortion field, 114-119 magnetic fringing field, 119-124 metal-insulator-conductor samples, 124-144 superconductors, 98 MIS system, see Metal-insulator-semiconductor system Mixed time-frequency transforms, signal description, 320-321
Modified forward-backward linear prediction method, 370 Moving average (MA) systems, signal description, 322 Multioutput filter, 385 Mutual phase axis, 232
N
Nervous tissue, 273-276 Neural networks, 265 artificial, 294 feedback and feedforward networks, 265, 276-278 iterative transformations, 278-284 models, 272-284 quantum neural computing, 260-310 Neural process, quantum mechanics, 263 Neurons, 273-275, 276 cell death, 287 complex neurons, 279-281 subneuron field, 305-309 Neuropsychology binding problem, 284-285 consciousness, 261, 263 flawed, 294 learning, 285-293 Neuroscience developmental, 287 neural network model, 272-276 reductionist approach, 262 Newton-Raphson iteration formula, 279 Nonelementary state functions, 333-335 Nonlinear transformations, 379 Number of crossings, 361
O
Off-axis holography, 199 scanning transmission electron microscopy, 232-244, 252-253 transmission electron microscopy, 253-256 Operationalism, 267 Orthogonal functions, signal description, 319 Orthogonal sum, 172
P
Parameter estimation, structural properties method, 365-376 Partition, rough definability, 167-171 Pass filters, generating differential equations, 383 PEEM, see Photoelectron emission microscopy P-elementary sets, 154 Perception, binding problem, 285 Phase, signal description, 318 Phase-contrast transfer function (PCTF), 209-211, 230, 231, 252 Phase plane, 347 generalized phase plane, 324-325, 347-365 Phase trajectories, signal processing, 324, 353-365 Photoelectron emission microscopy (PEEM), 87 Pisarenko method, 369-370 Pivot point, 232, 239 Plausibility function, 171 Poincaré phase plane, 347, 350-352, 354-355 Possibility (rough set theory), 166 Potential distribution, electrical microfield, by MEM, 132-135 Power spectral density (PSD), signal description, 322-324 Prediction vector, 369 Prescriptive learning, 286-288 Probability theory, evidence theory, 171-182 Proportionality factor, 207 PSD, see Power spectral density Psychons, 306
Q
Quality of lower/upper approximation, rough sets, 181 Quantitative mirror electron microscopy, 98-144 electron distortion field, 109-114 magnetic distortion field, 114-119 magnetic fringing field, 119-124 metal-insulator-semiconductor samples, 124-144 Quantum computation, neural computation, 298-305 Quantum mechanics Copenhagen interpretation, 297 neural processes, 263 operationalism, 267 quantum neural computing, 260-310 Vedic cognitive science, 264-265, 267 Quantum model, 296-298 Quantum neural computer, 260, 266, 300, 302, 310 Quantum neural computing, 296-298, 309-310 binding problem, 284-285 complementarity, 295-296 consciousness, 261, 263 definition, 298 learning, 285-293 mind-body problem, 261-262 neural network models, 272-284 quantum and neural computation, 298-305 structure and information, 305-309 Turing test, 265, 266-272 uncertainty, 294-295 Quantum physics complementarity, 262-263, 295-296 consciousness, 263 Quaternion groups index notation, 9-10 matrix representation, 13 matrix transforms, 36-39 Quotient ring, group algebra, 25
R Real-time processes, extrapolation, 391-392 Recursive behavior, 291-292 animal and machine behavior, 271-272 Reduct, 154 Reductionist approach, brain, 262-263, 285 Reflection, neuroscience, 294 Rejection filters, generating differential equations, 381 Relative function, 327 Roughclass, 184 RoughDAS, 184 Rough definability, 166-171 Rough measure, 182, 189 Rough set, defined, 165-166 Rough set theory, 151-194 applications, 182-194 attribute significance, 182-183 LERS, 184-193 real-world applications, 193-194
concepts and definitions certainty and possibility, 166 indiscernibility, 152-157 lower and upper approximations of a set, 157-165 rough definability, 166-171 rough set, 165-166 and evidence theory, 171-182 basic properties, 171-176 Dempster rule of combination, 176-180 numerical measure of rough sets, 181-182 Rule induction, LERS, 184-194 S
SBTF, see Sideband transfer function Scanning transmission electron microscopy (STEM), 197 image formation, 221-231 off-axis holography, 232-244, 252-253 Fourier transform, 244-246 spectral signal-to-noise ratio, 246-252 Schrödinger, E. cat experiment, 263-264 Vedic cognitive science, 264-265, 267 Schrödinger equation, 294-295 Scripts, neuroscience, 290 S(D) operators, 335 Self, philosophy, 294 Semiconductors mirror electron microscopy, 96 product group, 20 tensor products, 20-21 Semidirect product group algebra, 19-22 Separation criterion, 237 Sets definability, 166-171 definable, 158-159 lower and upper approximations, 157-165 Shadow imaging, mirror electron microscopy, 87-95 Short-term memory, 289-290 Short-time Fourier transform (STFT), signal description, 320, 321 Sideband transfer function (SBTF), 219-220, 246-252 Signal, definition, 316, 317
Signal description criteria for comparing methods, 325-327 defined, 317 direct description, 318 equivalent linear time-invariant discrete systems, 322 expansion in a series of orthogonal functions, 319 generalized phase planes, 324-325, 347-365 higher order spectra, 322-324 integrated transforms, 320 mixed time-frequency transforms, 320-321 phase and envelope, 318 structural properties method, 327-347 applications, 365-394 generalized phase planes, 324-325 generating differential equations, 324, 327, 336-347 state function, 324, 327, 329-335 wavelet transforms, 321 Signal detection, structural properties method, 378-379 Signal identification, structural properties method, 383-386 Signal processing feedforward networks, 277 group algebra, 46 nervous system, 274-275, 279 phase trajectories, 353-365 separation, 376-379 Signal theory, 315-317 Signal-to-noise ratio (SNR) spectral SNR bright field imaging, 220 STEM off-axis holography, 246-252, 253 TEM off-axis holography, 218-221 transmission electron microscopy, 211-213 Simulations electrical microfield contrast simulation, 135-140 EMS image structures, 135-140 parameter estimation, 375-376 Single Global Covering algorithm, 188-190, 191 Single Local Covering algorithm, 190-191
Social computing, 263 Somatosensory cortex, experiments, 291 Source code, GRASP library, 63-78 Space, paradigm, 261 Spectral signal-to-noise ratio (SNR) bright-field imaging, 220 off-axis holography STEM hologram, 246-252, 253 TEM hologram, 218-221 transmission electron microscopy, 211-213 Split-brain research, 261, 263 State functions elementary state functions, 329-333 nonelementary state functions, 333-335 signal description, 324, 329 Staying time, 399-400 STEM, see Scanning transmission electron microscopy STFT, see Short-time Fourier transform Stripe detector, 237-239, 248 Strongly definable partition, 168 Structural properties method, 316 applications data compression, 386-391 extrapolation of time series and real-time processes, 391-392 filtering and rejection of signals, 379-383 malfunction diagnosis, 325, 392-394 parameter estimation, 365-376 signal identification, 383-386 signal separation, 376-379 definition and terminology, 327-329 elementary state functions, 329-333 generating differential equations, 324, 327, 336-341 nonelementary state functions, 333-335 Subneuron field, 305-309 Superconductors, mirror electron microscopy, 98 Surface potential, mirror electron microscopy, 98
T Technologicality, 325 TEM, see Transmission electron microscopy Tensor products, group algebra, 16-17
Test signal, malfunction diagnosis by structural properties method, 325, 392-394 Time scripts, 290, 291 Time series, extrapolation, 391-392 Totally undefinable partition, 170-171 Transforms, group algebra, 27 Transmission electron microscopy (TEM), 197 image formation, 203-213 low-dose imaging, 207-213 off-axis holography, 213-221, 252-253 Fresnel diffraction, 253-256 Transparency, of description tool, 326 Turing test, machine intelligence, 266-272
U
Uncertainty, 294-295 rough sets, 181 Uncertainty principle, quantum neural computer, 304, 305 Uncertainty relations, 307-309 Universal field, 307 Universality, of description tool, 326 Upper approximation, of a set, 157-165 Upper boundary, 164 Upper substitutional decision, 185
V
Value functions, 3, 45 Vector convolution, 11 Vector filter, 385 Vector spaces, group algebra elements, 28-29 Vector state function, 385 Vedic cognitive science, 264-265, 267 Velocity, chi function, 328 Very short-term memory, 289 Vision quantum and neural computing, 273, 283-284, 302 scripts, 290-291
W
Walsh convolution, 19 Wavefunction, neural processes, 263, 298, 301 Wavelet transforms, signal description, 321 Wave/particle duality, 295, 296 Weakly definable partition, 168 Wigner-Ville description, signal description, 320, 321 Windowed Fourier transform, see Short-time Fourier transform Winograd algorithm, 18