ADVANCES IN ELECTRONICS AND ELECTRON PHYSICS VOLUME 67
EDITOR-IN-CHIEF
PETER W. HAWKES
Laboratoire d'Optique Electronique du Centre National de la Recherche Scientifique
Toulouse, France
ASSOCIATE EDITOR, IMAGE PICK-UP AND DISPLAY
BENJAMIN KAZAN Xerox Corporation Palo Alto Research Center Palo Alto, California
Advances in
Electronics and Electron Physics

EDITED BY PETER W. HAWKES
Laboratoire d'Optique Electronique du Centre National de la Recherche Scientifique
Toulouse, France
VOLUME 67 1986
ACADEMIC PRESS, INC. Harcourt Brace Jovanovich, Publishers Orlando San Diego New York Austin Boston London Sydney Tokyo Toronto
COPYRIGHT © 1986 BY ACADEMIC PRESS, INC. ALL RIGHTS RESERVED. NO PART OF THIS PUBLICATION MAY BE REPRODUCED OR TRANSMITTED IN ANY FORM OR BY ANY MEANS, ELECTRONIC OR MECHANICAL, INCLUDING PHOTOCOPY, RECORDING, OR ANY INFORMATION STORAGE AND RETRIEVAL SYSTEM, WITHOUT PERMISSION IN WRITING FROM THE PUBLISHER.
ACADEMIC PRESS, INC. Orlando, Florida 32887
United Kingdom Edition published by
ACADEMIC PRESS INC. (LONDON) LTD.
24-28 Oval Road, London NW1 7DX

LIBRARY OF CONGRESS CATALOG CARD NUMBER: 49-7504

ISBN 0-12-014667-3 (alk. paper)

PRINTED IN THE UNITED STATES OF AMERICA
86 87 88 89    9 8 7 6 5 4 3 2 1
CONTENTS

PREFACE  vii

The Status of Practical Fourier Phase Retrieval
R. H. T. BATES AND D. MNYAMA
I. Introduction  1
II. Theoretical Background  5
III. Phase Problems  14
IV. Iterative Phase Refinement  21
V. Canterbury Algorithm  40
References  60

Lie Algebraic Theory of Charged-Particle Optics and Electron Microscopes
ALEX J. DRAGT AND ETIENNE FOREST
I. Introduction  65
II. Lie Algebraic Tools  68
III. Simple Applications  70
IV. Lagrangians and Hamiltonians  75
V. Lie Calculus  81
VI. Paraxial Map for Solenoidal Lens  86
VII. Second- and Third-Order Effects for Solenoidal Lens  88
VIII. Spherical Aberration for a Simple Imaging System  92
IX. Correction of Spherical Aberration  96
X. Concluding Discussion  105
Appendix  115
References and Footnotes  117

The Universal Imaging Algebra
CHARLES R. GIARDINA
I. Introduction  121
II. Mathematical Morphology  127
III. Arithmetic Operations  144
IV. Topological Operations  150
V. Gray-Valued Mathematical Morphology  167
VI. The Universal Imaging Algebra  175
References  181

Cathode Ray Tubes for Industrial and Military Applications
ANDRE MARTIN
I. Introduction  183
II. Electron Guns  186
III. Electrostatic Deflection and Deflection Amplification  209
IV. Electromagnetic Deflection  222
V. Phosphors and Screens  254
VI. Measurement Techniques  276
VII. Contrast Enhancement Techniques  288
VIII. Characteristics Achieved with Present CRTs  299
IX. Conclusion  322
References  323

Open-Circuit Voltage Decay in Solar Cells
V. K. TEWARY AND S. C. JAIN
I. Introduction  329
II. Basic Formulation  333
III. OCVD in Base-Dominated Devices  346
IV. Effect of Emitter on OCVD  370
V. Quasi-Static Emitter Approximation and Its Applications in Solving Coupled Diffusion Equations for OCVD  383
VI. High Injection Effects on OCVD  395
Appendix: List of Symbols and Notations  411
References  413

INDEX  415
PREFACE

The present volume contains representatives of the topics traditionally covered by these Advances together with three from a field that I am eager to encourage, namely, imaging theory and image processing.

The traditional articles are devoted to the physics and applications of devices of two very different kinds. The account of cathode-ray tubes, originally destined for the Advances in Image Pick-up and Display, is an ambitious attempt to present the basic elements of CRT design together with the most recent developments and their many uses. It is hardly necessary to comment on the interest of solar cells. The open-circuit voltage decay technique is commonly used to determine excess carrier lifetime in p-n junction diodes in general, and in their article V. K. Tewary and S. C. Jain describe in detail the application of this technique to such diodes used as solar cells.

In the world of image processing, the so-called phase problem has occupied a privileged place since the early 1970s and has an even longer history in crystallography, where it takes a slightly different form. Considerable progress has been made with this problem during the past few years, and R. H. T. Bates and D. Mnyama have made major contributions to this. The full account they have prepared for these Advances enables us to assess what has been achieved and what still needs to be done. Staying with image processing and image analysis, C. R. Giardina gives an account of the imaging algebra that he has developed to give a formal structure to the many operations that are routinely performed on images in the fields of recognition and analysis. This is related to the subject that has come to be known as mathematical morphology, and I may mention that a critical account of the latter by J. Serra will appear in a later volume.

Finally, I welcome the review by A. J. Dragt and E. Forest of the methods based on Lie algebra that they have developed for charged-particle optics and indeed for optics in general. These methods have caused a considerable stir among the charged-particle optics community and I hope that this account will help to clarify the situation by showing clearly what aspects of the older subject can benefit from this new formalism.

It only remains for me to thank all the authors for their contributions and to list those received for forthcoming volumes.

Einstein-Podolsky-Rosen Paradox and Bell's Inequalities (W. De Baere)
Distance Measuring Equipment in Aviation (R. J. Kelly and D. R. Cusick)
Monte Carlo Methods and Microlithography (K. Murata)
Invariance in Pattern Recognition (H. Wechsler)
Mirror Systems and Cathode Lenses (E. M. Yakushev and L. M. Sekunova)

Peter W. Hawkes
ADVANCES IN ELECTRONICS AND ELECTRON PHYSICS, VOL. 67
The Status of Practical Fourier Phase Retrieval

R. H. T. BATES AND D. MNYAMA
Electrical and Electronic Engineering Department
University of Canterbury
Christchurch 1, New Zealand
I. INTRODUCTION

A. Preamble
Our purpose is to bring the reader up to date with the results of recent work on the retrieval of the phase of a function (of two independent variables) from its magnitude (or from its intensity, which is the square of the magnitude). It is understood that this function is the Fourier transform of a second function which only has significant value within a finite domain of the two-dimensional space wherein it is defined. We think of the second function as a source distribution or an image, and the first function as a diffraction pattern, radiation pattern, or visibility (invoking the terminology of interferometry). From an image-processing viewpoint, the advances made during the past 5 years have been spectacular.

Crystallographers routinely recover phases from intensities of three-dimensional diffraction patterns. So, why should there be anything remarkable about retrieving two-dimensional phase distributions? The point is that crystallographers are always in possession of an exceptional piece of a priori information: the images that they wish to reconstruct comprise assemblages of atoms, whose individual scattering characteristics are well understood. Even 5 years ago, few people working in image processing would have taken seriously any claim that an arbitrary image can be conjured up from merely the intensity of its Fourier transform. That many such claims can now be fully substantiated is largely due to the imaginative perseverance of Jim Fienup of the Environmental Research Institute of Michigan.

Our interest here is in two-dimensional images whose amplitude (or brightness) can assume any form whatsoever within finite regions of image space. We do not need to know beforehand the area of image space occupied by any particular image, but we can only reconstruct it if this area is finite.
This does not constitute any sort of practical restriction, of course, because all real-world images are of finite size. The lack of specific a priori information concerning the image implies, of course, that we must be able to gather more data than are available to crystallographers. The latter can only observe what are known as structure factors, because the structures they are interested in are (by definition, since they are crystals) spatially periodic. To be able to apply the techniques reviewed herein, we need to know the intensities of what we call the in-between structure factors, which can be thought of as the (complex) amplitudes of the diffraction pattern at all points in Fourier space (which is called reciprocal space by crystallographers) lying half-way between adjacent structure factors. When presented with the intensities of the ordinary and in-between structure factors we say we have a Fourier phase problem, in contradistinction to a crystallographic phase problem which arises when only the ordinary structure factor intensities are given.

Microscopists are aware of a third phase problem, for which the data are records of the intensities of both the image and its Fourier transform. Although this third problem is but a particular manifestation of the approach to image enhancement which makes use of micrographs recorded for varying degrees of defocus, it is highly relevant to this review. It was popularized by Owen Saxton and Ralph Gerchberg, whose iterative refinement algorithms provided the inspiration for Jim Fienup.

While all three phase problems are discussed in this review, the major emphasis is on the Fourier phase problem. We present the underlying theory and demonstrate, by example, that phase retrieval has reached the stage where complicated images containing intricate detail can be successfully reconstructed.

B. Important Techniques Not Discussed in This Review
There are two classes of techniques which, although they are effective and show promise of further useful development, are neglected in this review. Neither their underlying theory nor their implementation fit conveniently into the following exposition, and there are several good recent reviews of both. We merely refer the reader to what we consider to be the most significant parts of the literature. The first class, which has already been mentioned in passing in Section I.A, encompasses every approach to image enhancement relying on micrographs recorded for varying degree of defocus (cf. Misell, 1978; Chanzy et al., 1976; Chapman, 1975a,b; Gerchberg and Saxton, 1972; Hanszen, 1970; Hui et al., 1974; Unwin and Henderson, 1975; Taylor and Glaeser, 1976; Klug and Crowther, 1972; Erickson and Klug, 1975; Klug et al., 1970).
The second class of techniques includes all those based on entropy or likelihood principles (cf. Ables, 1974; Gull and Daniell, 1978; Kazuyoshi and Yoshihiro, 1983; Narayan and Nityananda, 1982; Nityananda and Narayan, 1982; Wernecke, 1977; Wernecke and D’Addario, 1977; Willingale, 1981; Skilling and Bryan, 1984; Bricogne, 1974, 1984; Gull and Skilling, 1984).
C. How and Where Phase Problems Arise

Any image can be reconstructed completely from the inverse Fourier transform of the visibility, provided both the magnitude and phase of the visibility are known. A phase problem is said to exist if little or no phase information is available. There are two broad subdivisions of the Fourier phase problem:

(i) A pure phase problem arises when it is required to reconstruct the true image given only the magnitude, or (equivalently) the intensity, of the visibility. As is explained in detail in Section III, uniqueness of the solution can only be guaranteed for what we call the image-form (Bates, 1982b; Bates and Fright, 1984; Won, 1984). In many applications, however, the image-form is all that is required (Bates, 1984).

(ii) A partial phase problem arises when one is provided with complete visibility magnitude information and partial information regarding either the visibility phase or image.

Various causes and types of phase problems are summarized in Table I.

D. Conclusions
It seems safe to claim that the form of virtually any positive (i.e., real and nonnegative) image can (eventually, after enough computation) be recovered from the intensity of its visibility (i.e., Fourier transform), provided the latter is sampled finely enough to permit the autocorrelation of the image to be immediately computed by Fourier transformation. The reconstruction algorithms are comparatively insensitive (to an encouraging degree) to noise on the data. Practical phase retrieval is based upon the iterative Fienup algorithms, which need initial phase estimates in order to begin. The “crude phase estimate” appears to be the most useful of the known starting phases. Noisy data need to be preprocessed to give the best obtainable results, which in fact are achieved only by combining the most useful existing techniques into a composite image reconstruction algorithm.
TABLE I
PHYSICAL APPLICATIONS IN WHICH THE DIFFERENT TYPES OF PHASE PROBLEM ARISE, AND THE REASONS WHY PHASE CANNOT BE ACCURATELY MEASURED

Pure
  Applications: Speckle interferometry; inferring antenna aperture distribution from distant (e.g., satellite, aircraft) measurement of radiation pattern; astronomical aperture synthesis when (as is likely for future giant optical telescopes) closure phase effectively unavailable.
  Reasons: Phase is either discarded (cf. Bates, 1982b) or is effectively unmeasurable, because it either fluctuates too rapidly or involves too much technical difficulty/expense, or because it is impossible to generate an adequate "reference" (Morris, 1985).

Partial
  Applications: Speckle imaging; inferring antenna aperture distribution when only partial phase information available; ultrasonic diffraction tomography; astronomical aperture synthesis when closure phase available; electron microscopy (crystallography) when both image and diffraction micrographs available.
  Reasons: Propagation medium (between object and measurement apparatus) introduces severe distortion when reasonable approximation to distribution magnitude available (cf. Bates, 1982b); or only closure phase available (Bates, 1978a,b, 1982b); or phase unmeasurable in any plane (Saxton, 1978).

Pure (but not Fourier)
  Applications: X-ray and neutron crystallography.
  Reasons: Phase unmeasurable (Cowley, 1981).
The most pressing problems relate to the requirements on data sample spacing placed by the level of the noise on the data, improved methods of reconstructing "foggy" images (characterized by faint detail on smooth, bright backgrounds), and means of generating better initial phase estimates.

E. Outline of the Review
Section II contains necessary preliminaries. Basic notation and terminology are established in Section II.A, with needed theorems collected in Section II.B. The material introduced in Section II.C is central because it underlines what makes Fourier phase retrieval possible. The effects of the inescapable measurement noise are discussed in Section II.D, and Section II.E outlines the major practical difference between the Fourier and crystallographic phase problems.

Section III introduces the various types of phase problem which are relevant to this review.
Section III.A sets down what in fact is recoverable when the Fourier phase is lost, and it introduces two particular images, with which the remainder of the review is illustrated (by displaying the versions which are reconstructed under relevant conditions by the various algorithms detailed in Sections IV and V). The three relevant problems are defined in Sections III.B-III.D. The experimentally established fact that Fourier phase dominates Fourier intensity is illustrated in Section III.E, while in Sections III.F and III.G it is explained why phase retrieval from intensity can nevertheless be successful.

Section IV details the iterative algorithms which form the core of practical phase retrieval procedures. Since these algorithms are iterative, they require starting phases, which are discussed in Section IV.A. The inspirational (as far as Fourier phase retrieval is concerned) Gerchberg-Saxton algorithm is described in Section IV.B, and the central retrieval algorithms due to Fienup are explained in Section IV.C. Critical aspects of computational practice are discussed in Sections IV.D-IV.F.

Section V describes the composite image reconstruction algorithm developed in our laboratory. Data, which are unavoidably contaminated by noise, need to be preprocessed (as indicated in Section V.A) in order to make the most of them. In Sections V.B and V.C we show how improved (over those discussed in Section IV.A) starting phases can be generated. The way the Fienup algorithms are incorporated is explained in Section V.D. Important computational points are covered in Sections V.E and V.F, while in Section V.G we detail those problem areas we feel are most urgent and which we anticipate are most likely to prove fruitful in the near future.
II. THEORETICAL BACKGROUND

A. Images and Visibilities
We introduce image space and visibility space, arbitrary points within which are identified by Cartesian coordinates (x, y) and (u, v), respectively. The image f(x, y) and its visibility F(u, v) constitute a Fourier transform pair, that is,

F(u, v) = ∫_{-∞}^{∞} ∫_{-∞}^{∞} f(x, y) exp[i2π(ux + vy)] dx dy    (1)

f(x, y) = ∫_{-∞}^{∞} ∫_{-∞}^{∞} F(u, v) exp[-i2π(ux + vy)] du dv    (2)
The infinite limits on the integrals merely emphasize that the whole of the image, or respectively the visibility, is to be included. In fact, we only consider images whose supports (i.e., the regions of image space within which image amplitudes are nonzero) are finite. Because the units of length in the x and y directions can be normalized differently and arbitrarily, no generality is lost by requiring the support of f(x, y) to be just enclosed by the square (centered at the coordinate origin) whose sides are of length unity. We call this square the image box B(x, y). It is now convenient to introduce the notation B(x, y | N) to characterize a square region of image space (the length of each of whose sides is the positive integer N) centered on the coordinate origin. We endow B(x, y | N) with two distinct meanings which we feel need never conflict nor cause confusion. First,

B(x, y | N) = 0    for |x| > N/2 or |y| > N/2
            = 1    for |x| < N/2 and |y| < N/2
FIG.1. Image space.
Second, we say B(x, y | N) is the region of image space within which B(x, y | N) = 1. In particular, therefore,

B(x, y) = B(x, y | 1)    (3)
Figures 1 and 2 illustrate several of the geometrical aspects of image space and visibility space, respectively, which are discussed in this subsection. It is appropriate here, for the purposes of later analysis, to introduce the periodic image p(x, y) which consists of the image repeated throughout image space in contiguous square cells, each congruent to B(x, y | N):

p(x, y) = Σ_{m,n=-∞}^{∞} f(x - mN, y - nN)    (4)
Because of the periodicity of p(x, y), its visibility only exists at discrete points, that is,

F(u, v) [ Σ_{m,n=-∞}^{∞} δ(u - m/N) δ(v - n/N) ] = P(u, v) ↔ p(x, y)    (5)

where δ(·) denotes the Dirac delta function.
FIG. 2. Visibility space.
So, knowledge of P(u, v) implies knowledge of F(u, v) only at discrete points (m/N, n/N) in visibility space, where m and n are arbitrary integers.

We further require the extended image e(x, y), also called the quasi-localized image (Bates, 1984), which is defined by

e(x, y) = p(x, y) w(x/N) w(y/N)    (6)

where the window function w(·) is itself defined by

w(t) = 0                   for |t| > 1
     = [1 + cos(πt)]/2     for |t| < 1    (7)
It transpires that (Bates, 1984)

E(m/N, n/N) = F(m/N, n/N)    (8)

for all integers m and n, where E(u, v) ↔ e(x, y). In crystallographic terminology (cf. Ramachandran and Srinivasan, 1970), p(x, y) is the (two-dimensional) structure, image space is real space, f(x, y) represents the contents of the (two-dimensional) unit cell B(x, y | N), visibility space is reciprocal space, and the F(m/N, n/N) are the structure factors. Note that N = 1 in virtually all practical crystallographic contexts. It is made clear later why it makes sense to entertain values of N greater than unity in the context of the Fourier phase problem.

We need a terminology for particular points (called sample points) in visibility space. For all integers m and n, the points (m/N, n/N), ((2m + 1)/2N, n/N), (m/N, (2n + 1)/2N), and ((2m + 1)/2N, (2n + 1)/2N) are called, respectively, ordinary sample points, in-line in-between (in-row) sample points, in-line in-between (in-column) sample points, and diagonal in-between sample points. The complex amplitudes of the visibility at these points are called structure factors. It is convenient to introduce the notation

G_{m+μ, n+ν} = G((m + μ)/N, (n + ν)/N)    (9)

where G stands for either E or F and both μ and ν can be either 0 or 1/2. Individual structure factors are identified by the same prefixes as their corresponding sample points (Table II).

TABLE II
NOTATION AND TERMINOLOGY FOR SAMPLE POINTS AND STRUCTURE FACTORS

Prefix for sample point and structure factor | Position of sample point | Notation for structure factor
Ordinary | (m/N, n/N) | G_{m,n}
In-line in-between (in-row) | ((2m + 1)/2N, n/N) | G_{m+1/2, n}
In-line in-between (in-column) | (m/N, (2n + 1)/2N) | G_{m, n+1/2}
Diagonal in-between | ((2m + 1)/2N, (2n + 1)/2N) | G_{m+1/2, n+1/2}

Note: the term "actual" has been used on occasion instead of "ordinary" (Bates, 1984; Bates and Fright, 1984).

B. Some Theorems

The convolution theorem (cf. Bracewell, 1978) states that

Λ(u, v) Ω(u, v) ↔ λ(x, y) ⊙ ω(x, y)    (10)
where Λ(u, v) ↔ λ(x, y), Ω(u, v) ↔ ω(x, y), S_λ(x, y) is the support of λ(x, y), and ⊙ is the two-dimensional convolution operator.

The autocorrelation theorem (cf. Bracewell, 1978) states that

|F(u, v)|² ↔ f_s(x, y) = ∫∫_{B(x',y'|1)} f*(x', y') f(x' + x, y' + y) dx' dy'    (11)

where f_s(x, y) is called the autocorrelation of the image and the asterisk denotes complex conjugation. It is worth noting, from Eq. (10), that f_s(x, y) is the convolution of the image with its conjugate reflection in the coordinate origin. We give the name autocorrelation box, denoted by B_B(x, y), to the square (of side L) which just encloses the support of f_s(x, y). Inspection of the integral in Eq. (11) reveals that

L ≤ 2    (12)
Consequently, given the image box, the size of the autocorrelation box is constrained by Eq. (12). Given the autocorrelation box, however, no limit can be placed on the size of the image box, in the absence of other a priori information. It is worth outlining why this is so. We point out first that {F(u, v) exp[iΘ(u, v)]} has the same magnitude as F(u, v), for any Θ(u, v) that is a real function of u and v. The convolution theorem [Eq. (10)] then indicates that

F(u, v) exp[iΘ(u, v)] ↔ f(x, y) ⊙ h(x, y) = j(x, y)    (13)

where exp[iΘ(u, v)] ↔ h(x, y). Since, in general, there is no constraint whatsoever on the support of h(x, y), the support of j(x, y) can be arbitrarily large, even though j_s(x, y) = f_s(x, y) from the autocorrelation theorem [Eq. (11)].
This is very difficult to visualize, but the oscillations exhibited by j(x, y) must be such that the integral in Eq. (11), with f replaced by j, vanishes identically for points (x, y) lying outside the support of f(x, y).

The most compact image (i.e., the one which occupies the smallest area of image space) compatible with the given autocorrelation must be just enclosed by a box whose sides are of length L/2, as is obvious from the form of the integral which defines the autocorrelation. We call this the minimum extent of autocorrelation theorem (Bates and McDonnell, 1986).

Furthermore, when the image is positive (i.e., real and nonnegative), the sides of the image box are necessarily of length L/2, since there cannot be any oscillations (of the kind mentioned in the final sentence of the previous paragraph) of the integrand in Eq. (11). This is significant because many of the images which feature in important scientific and technical applications are necessarily positive. So,

L = 2    when f(x, y) is positive    (14)

which we refer to as the extent-constraint theorem for positive images (Bates and McDonnell, 1986). It follows that

B_B(x, y) = B(x, y | 2)    when f(x, y) is positive    (15)
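The autocorrelation theorem [Eq. (11)] and the extent constraint [Eq. (15)] are easy to verify numerically. The following sketch is our illustration, not part of the original text; it assumes NumPy and uses an arbitrary positive test image. The image is embedded in a box twice its size, |F|² is formed on the oversampled grid, and an inverse transform yields the autocorrelation, whose support is confined to a square of about twice the image's side.

```python
import numpy as np

# Hypothetical positive test image, 32 x 32, embedded in a 64 x 64 box
# (oversampling the visibility by a factor of 2, as Eq. (15) suggests).
rng = np.random.default_rng(0)
img = rng.random((32, 32))          # positive image f(x, y)
box = np.zeros((64, 64))
box[16:48, 16:48] = img             # zero packing around the image

F = np.fft.fft2(box)                # sampled visibility F(u, v)
intensity = np.abs(F) ** 2          # |F(u, v)|^2
autocorr = np.fft.ifft2(intensity).real   # Eq. (11): inverse FFT of |F|^2
autocorr = np.fft.fftshift(autocorr)

# The autocorrelation is confined to a square about twice the image's side,
# consistent with Eqs. (12) and (15).
support = np.abs(autocorr) > 1e-9 * np.abs(autocorr).max()
rows = np.any(support, axis=1).nonzero()[0]
print("autocorrelation support rows:", rows.min(), "to", rows.max())
```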
C. Sampling and Computing
While one-dimensional information can be gathered continuously, as with a chart recorder for example, most multidimensional data consist of discrete samples. Even when a continuous data record exists (e.g., on film) it must be sampled before it can be passed to a computer for further processing. The sampling theorem (cf. Bracewell, 1978) states that the whole of F(u, v), and therefore the whole of f(x, y), can be reconstructed from samples spaced (in both the u and v directions) by no more than the reciprocal of the length of the sides of the image box. So, when the phase as well as the magnitude of F(u, v) is given, all of F(u, v) can be recovered merely from its ordinary structure factors (see Table II), even when N has its smallest allowed value of unity, for example,

F(u, v) = Σ_{m,n=-M}^{M} F_{m,n} sinc(u - m) sinc(v - n)    when N = 1    (16)

where the notation introduced in Eq. (9) has been invoked, and where

sinc(t) = 1    when t = 0
        = 0    when t is any nonzero integer    (17)

We emphasize that the positive integer M appearing in Eq. (16) is taken to be finite for reasons given in Section II.D.
A quantity must be quantized before it can be computed. There are two sorts of quantization. First, there is amplitude quantization, so that for any point (x, y) in image space, for instance, the number representing the real (for example) part of the image (stored in computer memory) must be chosen from a discrete set. There must also be spatial quantization, in that the image amplitude can only be specified at a discrete set of points. The latter set is usually chosen to form a rectangular grid, so that image space can conveniently be subdivided into smaller rectangles each centered on one of the points. These rectangles are called picture elements or pixels (cf. Rosenfeld and Kak, 1982; Andrews et al., 1970; Andrews and Hunt, 1977). The term pixel is also used to denote the quantized image amplitude of the pixel.

Quantities which are computed must consequently be sampled in both image and visibility space. It is worth keeping this in mind, because it is a mathematical theorem (cf. Hilderbrand, 1948; Paley and Wiener, 1934; Boas, 1954) that the Fourier transform of a quantity having finite support is itself entire (it exists and is well behaved throughout all of the finite part of the space on which it is defined). It is therefore necessary to take care that the quantization steps and sample spacings are close enough, and the number of samples is sufficiently large, to prevent computationally generated artifacts exceeding whatever level is deemed unacceptable.

If the phase of F(u, v) is lost, so that only its magnitude is given, we need samples of |F(u, v)| spaced by no more than 1/2 in the u and v directions in order to be able to reconstruct the whole of it, for example,
|F(u, v)|² = Σ_{m,n=-M'}^{M'} |F_{m/2, n/2}|² sinc(2u - m) sinc(2v - n)    when N = 1    (18)

where the positive integer M' need not be a multiple of M, though in virtually all practical computational algorithms it is understood that
M' = 2M    (19)

which is connected with the idiosyncrasies of the fast Fourier transform (FFT) algorithm (cf. Brigham, 1974; Cooley and Tukey, 1965; Institute of Electrical and Electronics Engineers, 1969; Pease, 1968). Since the iterative procedures (see Section IV) discussed in this review are so computationally intensive, the use of the FFT is mandatory (any other Fourier transformation algorithm would be hopelessly inefficient). The assumption [Eq. (19)] causes no difficulties in practice provided M is taken large enough to prevent the generation of aliasing artifacts.

When attempting to assess or interpret algorithms which rely on the FFT, it is important to remember that the number of samples is necessarily the same in both image space and visibility space.
It is therefore essential to ensure that this number does not fall below whatever limit is necessary in whichever space seems to require the more samples.

When N is taken to be greater than unity, the visibility is here said to be oversampled in the following sense. Consider that cell, congruent to B(x, y | N), centered at the coordinate origin in Fig. 1. The central unit square, which is of course congruent to B(x, y | 1), contains the image. The image amplitude is set to zero within the parts of B(x, y | N) outside B(x, y | 1). Computing the FFT of the contents of B(x, y | N) generates the ordinary structure factors (as defined in Table II), which are spaced by 1/N in both the u and v directions. We say that F(u, v) is oversampled by the factor N, because the sampling theorem only requires that the samples of F(u, v) be spaced by unity in the two directions.

The trick of filling the parts of B(x, y | N) outside B(x, y | 1) with zeros is sometimes called zero packing. It is a particularly convenient means for interpolating any (multidimensional) quantity which is given in the form of regularly spaced samples. The FFT of the quantity is computed and the output is packed with zeros. The reverse FFT then regenerates the original data but with extra samples in-between the given ones. In the following sections we make no distinction between forward and reverse FFTs because they are formally so similar and are (in effect) algorithmically identical.

Inspection of Eqs. (11) and (18) confirms that |F(u, v)|² needs to be oversampled by 2, as compared with what the sampling theorem requires for recovering f(x, y) from the samples of F(u, v), in order for f_s(x, y) to be generated directly by Fourier transformation. This is of course merely a restatement of Eq. (15). If the part of B(x, y | 2N) outside B(x, y | 2) is packed with zeros, with the contents of B(x, y | 2) being set equal to f(x, y), the FFT of the contents of B(x, y | 2N) consists of samples of |F(u, v)|² spaced N times more closely, in both the u and v directions, than the samples |F_{m/2, n/2}|² included in the summation in Eq. (18).

We summarize the gist of the preceding four paragraphs by noting that F(u, v) need not be oversampled (i.e., N = 1) to enable the image-form to be recovered when both |F(u, v)| and phase{F(u, v)} are given. However, |F(u, v)| must be oversampled by at least 2 (i.e., N ≥ 2) if f_s(x, y) is to be directly computed from the visibility magnitude.
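Zero packing is straightforward to express with an FFT library. The sketch below is our illustration (not the authors' code); it assumes NumPy, and the array sizes are arbitrary. It oversamples a visibility by packing the image box with zeros before transforming, exactly as described above.

```python
import numpy as np

def oversampled_visibility(image: np.ndarray, factor: int) -> np.ndarray:
    """Embed `image` in a box `factor` times larger (zero packing) and FFT it.

    The result samples F(u, v) `factor` times more finely than the ordinary
    structure factors obtained from the unpadded image.
    """
    ny, nx = image.shape
    box = np.zeros((factor * ny, factor * nx), dtype=complex)
    box[:ny, :nx] = image              # placing the image in a corner is only
    return np.fft.fft2(box)            # a shift: it changes phase, not |F|

# Example: a hypothetical 32 x 32 positive image, oversampled by N = 2.
rng = np.random.default_rng(1)
f = rng.random((32, 32))
F_ordinary = np.fft.fft2(f)                      # samples spaced by 1 (N = 1)
F_oversampled = oversampled_visibility(f, 2)     # samples spaced by 1/2 (N = 2)

# Every second oversampled sample coincides with an ordinary structure factor.
print(np.allclose(F_oversampled[::2, ::2], F_ordinary))   # True
```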
D. Consequences of Measurement Noise

All real-world data are corrupted by some sort of noise. In this review we are interested in particular methods for processing data gathered in visibility space. The observable visibility is not F(u, v) then, but rather

F_o(u, v) = F(u, v) + F_n(u, v)    (20)
where subscripts o and n stand for "observable" and "noise," respectively. It is convenient to define F_n(u, v) ↔ f_n(x, y) and F_o(u, v) ↔ f_o(x, y). Since F_n(u, v) is generally some kind of (quasi-)random process, there is no reason to expect f_n(x, y) to be confined to the image box B(x, y | 1). Consequently, f_o(x, y) must usually occupy a larger region of image space than B(x, y | 1). The sampling theorem (see Section II.C) thus demands that F_o(u, v) be sampled more finely than would be necessary if F_n(u, v) were negligible.

We ask the reader to keep the ramifications of the previous paragraph in mind when assessing the significance of the results presented in Sections IV.F and V.A. These ramifications are, in fact, among the more challenging of existing image-processing research problems. We do not mention this any further in this review, but we think it worth speculating in passing, though, that, if F_o(u, v) is sampled appreciably more finely than 1/2 in both the u and v directions, and if an a priori estimate of the size of the true image box is available, it should be possible to compensate for at least part of F_n(u, v). Furthermore, Sanz and Huang's (1985) recent studies may be particularly pertinent to any attempt at improving the preprocessing described in Section V.A.

In theory, the positive integer M introduced in Eq. (16) should be infinite. For |m| and |n| both large enough, however, |F_{m,n}| must fall below the prevailing noise level. In practice, therefore, it is vital to take M to be small enough that we only make use of those F_{m,n} which represent useful information.
E. Patterson Compared with Autocorrelation
Because crystallographers are concerned (by definition) with periodic structures, the visibilities they encounter are typified by P(u, v), as opposed to F(u, v). So, they are more interested in the autocorrelation of p(x, y) than in f_s(x, y). The autocorrelation of p(x, y) is called the Patterson (cf. Ramachandran and Srinivasan, 1970), which we denote by p_p(x, y). It can be defined by

|P(u, v)|² ↔ p_p(x, y)    (21)

from which it follows that p_p(x, y) has the same periodicity as p(x, y), because P(u, v) only exists at ordinary sample points (refer to Eq. (5) and Table II). So, the Patterson consists of overlapping versions of the autocorrelation of the image, as inspection of Eq. (4) confirms:

p_p(x, y) = Σ_{m,n=-∞}^{∞} f_s(x - mN, y - nN)    (22)
Since N = 1, effectively, in the great majority of crystallographic contexts, it is usually as difficult to unravel f_s(x, y) from p_p(x, y) as it is to reconstruct f(x, y) from the structure factor magnitudes. Consequently, although we have found means of applying our phase retrieval techniques to a very special class of crystallographic problems (cf. Millane et al., 1986), we do not discuss periodic images any further in this review (except merely for completeness in Sections III.D and III.F). There are several excellent reviews on modern crystallographic phase recovery methods (cf. Baritz and Zwick, 1974; Cowley, 1981; Bricogne, 1974, 1984; Burgi and Dunitz, 1971; Colella, 1974; Davies and Rollett, 1979; Eyck, 1973; Harrison, 1970; Hosoya and Tokonami, 1967; Ino, 1979; Krabbendam and Kroon, 1971; Piro, 1983; Post, 1979, 1983; Rae, 1974; Sayre, 1952; Thiessen and Busing, 1974).
III. PHASE PROBLEMS

This section is concerned with various intricacies of the general problem of retrieving an acceptable phase function, here denoted by Ψ(u, v), to go with a given visibility intensity I(u, v), where

I(u, v) = |F(u, v)|²    with    F(u, v) = |F(u, v)| exp[iΨ(u, v)]    (23)

Note that an alternative statement of the autocorrelation theorem (Section II.B) is

I(u, v) ↔ f_s(x, y)    (24)
A. Image-Forms

When the phase of F(u, v) is lost, certain information is irrecoverable, so that it is appropriate to introduce the concept of image-form (cf. Bates, 1984). If x_1, x_2, y_1, y_2, ψ_1, and ψ_2 are arbitrary real constants, then we say that exp(iψ_1) f(x - x_1, y - y_1) and exp(iψ_2) f*(-x + x_2, -y + y_2) have the same image-form as f(x, y). The motivation for this is that an image "looks the same" when it is shifted or is reflected in the origin of coordinates, or if a constant is added to its phase. Notice that the image's appearance is not altered by reversing its phase. When presented with a phase problem, one's goal is to reconstruct the image-form, rather than the actual image, because the latter is irretrievable. In many important practical applications, however, it is the image-form which is of interest to us.
Fourier phase retrieval algorithms are now capable of reconstructing the forms of complicated images containing intricate detail. We use the two images shown in Fig. 3 to illustrate the performance of the algorithms. The data presented to the algorithms are generated by Fourier transforming the images, which we refer to henceforth as original images. Note that Fig. 3a contains easily recognizable structure, while Fig. 3b is rather “nondescript.” They represent two extremes of the kinds of images which tend to be of interest in practical applications.
FIG. 3. Original images. (a) Puffin, referred to as f_1(x, y); its visibility is denoted F_1(u, v). (b) Starfield, referred to as f_2(x, y); its visibility is denoted F_2(u, v).
B. Microscopal Phase Problem
The problem posed here is only characteristic of a subclass of the problems with which microscopists are concerned, as already mentioned in Sections I.A and I.B. However, it fits nicely into our exposition. Following Gerchberg and Saxton (1972) we pose the problem as: Given |f(x, y)| and |F(u, v)|, recover the image-form of f(x, y).

C. Fourier Phase Problem
We pose this problem as: Given I(u, v), at all ordinary and in-between sample points where it has significant value, recover the image-form of f(x, y).

We point out that, in principle, the structure factor magnitudes for any N can be generated from those for which N = 2. This is because the autocorrelation (Section II.B) and sampling (Section II.C) theorems indicate that I(u, v) can be computed, given N = 2, at any point (u, v) in visibility space, from the square of the ordinary and in-between structure factor magnitudes. It is worth remembering, however, that the inevitable measurement noise (Section II.D) causes any such computation to be erroneous. It is therefore preferable, if possible, to sample |F(u, v)| at all points (m/2N, n/2N) in visibility space where it has significant value, for whatever particular N is required.
D. Crystallographic Phase Problem

We pose this problem as: Given I(u, v) at all ordinary sample points where it has significant value, recover the image-form of f(x, y), under the constraint N = 1.

As already pointed out in Section II.E, this problem cannot be solved in general by the techniques that are the main concern of this review.

E. Phase Dominance
Consider a set {F_j(u, v) ↔ f_j(x, y); j = 1, 2, ..., J} of visibility/image pairs. Define the jkth mixed image f_{jk}(x, y) by

|F_j(u, v)| exp[i phase{F_k(u, v)}] = F_{jk}(u, v) ↔ f_{jk}(x, y)    (25)
It is well known (cf. Ramachandran and Srinivasan, 1970; Oppenheim and Lim, 1981; Fright, 1984; Bates and Fright, 1985) that f_{jk}(x, y) usually "looks more like" f_k(x, y) than f_j(x, y). In general, therefore, the phase of the visibility dominates the magnitude. Figure 4 illustrates this for the pair of original images shown in Fig. 3. We denote the images shown in Figs. 3a and 3b by f_1(x, y) and f_2(x, y), respectively. Despite the completely different character of the two original images, f_{12}(x, y) looks very similar to f_2(x, y) and f_{21}(x, y) bears a close resemblance to f_1(x, y).

It is this phase dominance which, more than anything else, makes it difficult to believe that phase retrieval can be successful in the absence of highly restrictive a priori constraints. It is all the more remarkable, therefore, that Fourier phase problems (Section III.C) can so often be solved effectively uniquely in practice. Sections III.F and III.G reveal the clues as to why this apparently unlikely state of affairs has arisen.
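The phase-dominance effect of Eq. (25) is easy to reproduce. The sketch below is our illustration, not part of the original text; it assumes NumPy, and the two test arrays are arbitrary stand-ins for the puffin and starfield of Fig. 3. It forms the mixed image f_{jk} by combining the magnitude of one visibility with the phase of the other.

```python
import numpy as np

def mixed_image(f_j: np.ndarray, f_k: np.ndarray) -> np.ndarray:
    """Return f_jk: magnitude of F_j combined with the phase of F_k, Eq. (25)."""
    F_j = np.fft.fft2(f_j)
    F_k = np.fft.fft2(f_k)
    F_jk = np.abs(F_j) * np.exp(1j * np.angle(F_k))
    return np.fft.ifft2(F_jk).real      # real inputs give (nearly) real output

# Hypothetical stand-ins for the two original images.
rng = np.random.default_rng(2)
f1 = rng.random((64, 64))                               # "puffin"-like, dense
f2 = np.zeros((64, 64))                                 # "starfield"-like, sparse
f2[rng.integers(0, 64, 40), rng.integers(0, 64, 40)] = 1.0

f12 = mixed_image(f1, f2)   # tends to resemble f2, the phase donor
f21 = mixed_image(f2, f1)   # tends to resemble f1
```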
F. Uniqueness

Given the intensity I(u) of a one-dimensional visibility, corresponding to a one-dimensional image f(x) having finite support, it is well known (cf. Taylor, 1981) that a unique image-form cannot in general be recovered. The visibility can always be written as (cf. Bates, 1978b)

F(u) = Λ(u) Ω(u)    (26)

where Λ(u) and Ω(u) are polynomials of infinite and finite orders, respectively, with all zeros of Λ(u) being necessarily real. The zeros (of which, for instance, there are K) of Ω(u) are all complex in general. It follows that F(u) can be represented as a product of a denumerably infinite number of factors, (u - ξ_k) being a typical example. For an infinite number of values of the integer index k, the constant ξ_k is real, while ξ_k is generally complex for each of a further K values of k (Bates, 1971, 1978c, 1982a,b). It is clear that |F(u)| is unchanged when any particular (u - ξ_k) is replaced by (u - ξ_k*). It transpires that this exchange of factors does not alter the support of the image (cf. Bates, 1978b). If ξ_k is real, the visibility phase is unaltered by this exchange. The phase is altered, however, if ξ_k is complex. It follows that there are 2^(K-1) different image-forms compatible with the given |F(u)| (cf. Garden and Bates, 1982). This exchanging of factors is referred to colloquially as zero flipping (cf. Bates, 1982b). Examples of ambiguous one-dimensional images, each of which is compatible with a given visibility, are presented by Hofstetter (1964), Bates (1969), Bates and Napier (1972), Napier (1972), and Garden and Bates (1982).
FIG. 4. Mixed images. (a) f_{12}(x, y); (b) f_{21}(x, y).
A two-dimensional visibility can still be written as in Eq. (26), but with u replaced by u, v. The polynomial Λ(u, v) is always expressible as the product of one-dimensional polynomials, each of whose zeros are necessarily real. So, Λ(u, v) cannot contribute to any phase ambiguity. The polynomial Ω(u, v) is generally prime, as Bruck and Sodin (1979) seem to have been the first to recognize in the context of the Fourier phase problem. Since prime polynomials are unfactorizable, zero flipping is impossible, so that one has reason to believe that two-dimensional phase problems should have unique solutions. Nonuniqueness can occur in contrived situations (Van Toorn et al., 1984; Byrne et al., 1983; Huiser and Van Toorn, 1980; Fienup, 1984; Fright, 1984; Hayes, 1982), but it is very doubtful if contrived situations have any practical relevance, since they depend critically upon special symmetries being maintained. The inevitable measurement noise must upset such special conditions. Bruck and Sodin's initial insight into the uniqueness question has been usefully supplemented recently (cf. Sanz and Huang, 1985; Byrne et al., 1983; Foley and Russell Butts, 1981; Fienup, 1982; Fright, 1984; Hayes, 1982).

While all this may seem to explain the encouraging results of the somewhat ad hoc algorithms that have been devised for generating image-forms from given samples of their intensities, one should not forget the implications of Eq. (13). The point is that

|F(u, v) exp[iΘ(u, v)]| = |F(u, v)|    (27)
So, how can it make sense to try and choose a special phase function Ψ(u, v) from among the infinity of possible phase distributions Θ(u, v)? The clue is provided by the minimum extent of autocorrelation theorem (Section II.B). We retrieve that Ψ(u, v) which characterizes the most compact image compatible with a given I(u, v). While there seems to be every reason to believe that the most compact image is usually unique, it is extremely difficult, according to our present computational experience (Bates and Tan, 1985a,b), to recover the image-form of a complex image from its visibility intensity. Numerical solutions to the microscopal phase problem (Section III.B) are readily generated for complex images. When strong extra a priori information is available (cf. Bates and Tan, 1985b), complex image-forms can sometimes be recovered. Virtually all published accounts of numerical phase retrieval suggest, however, that the Fourier phase problem (as posed in Section III.C) can only be solved successfully for positive images. This is discussed further in Section III.G.

Since the data for the microscopal phase problem (Section III.B) encompass those for the Fourier phase problem (Section III.C), if solutions to the latter are unique then so must be solutions to the former. In fact, solutions to the microscopal phase problem can be expected to be unique in one dimension as well as two (Saxton, 1978, 1980).
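One-dimensional ambiguity by zero flipping can be demonstrated directly. The sketch below is our illustration of the idea in the discrete setting (it flips one zero of a short sequence's z-transform across the unit circle); it assumes NumPy only, and the sequence is arbitrary. The flipped sequence differs from the original yet has exactly the same Fourier magnitude.

```python
import numpy as np

# A short complex sequence standing in for a one-dimensional image.
a = np.array([1.0, 2.0 - 1.0j, 0.5 + 0.5j, -1.0, 0.25])

roots = np.roots(a)                      # zeros of its z-transform polynomial
flipped = roots.copy()
flipped[0] = 1.0 / np.conj(flipped[0])   # "flip" one zero across the unit circle

b = np.poly(flipped) * a[0]              # sequence rebuilt from the new zeros
b = b * np.abs(roots[0])                 # rescale so the magnitudes agree exactly

A = np.fft.fft(a, 64)                    # both spectra on a fine frequency grid
B = np.fft.fft(b, 64)

print(np.allclose(np.abs(A), np.abs(B)))   # True: identical Fourier magnitude
print(np.allclose(a, b))                   # False: a different sequence
```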
G. Significance of Positivity
We assert that the extent-constraint theorem for positive images (Section II.B) highlights the reasons for the practical success of those phase retrieval algorithms which are, in fact, effective. When it is known a priori that the image is positive, the constraints implicit in this theorem are so strong that they overcome phase dominance (Section III.E). Phase dominance is less critical for the microscopal (Section III.B) than for the Fourier (Section III.C) phase problem, because there are so much more data for the former. Computational experience suggests very strongly that useful numerical solutions to the Fourier phase problem are virtually unattainable unless the image is positive (Bates and Tan, 1985a,b), although the approach recently introduced by Deighton et al. (1985) may help to change this.
Comparison of Figs. 3 and 4 emphasizes why exceedingly strong constraints are necessary. For instance, the only possible hint of Fig. 3b apparent in Fig. 4b is found in the fuzzy background. We must somehow contrive to transform this background into Fig. 3b, simultaneously obliterating the version of Fig. 3a which dominates Fig. 4b. The algorithms described in Section V, and in Section IV.C, have the power to effect such transformations. It is all very remarkable and is certainly not completely understood as yet.

Since positivity is clearly so important for practical phase retrieval, we summarize in Table III those physical situations in which positive images occur.

TABLE III
APPLICATIONS IN WHICH IMPORTANT PHYSICAL QUANTITIES ARE EACH OTHER'S FOURIER TRANSFORMS, AND WHETHER THE IMAGE IS COMPLEX, REAL, OR POSITIVE

Application | Object | Visibility | Image
X-ray crystallography | Electron density | Diffraction pattern | Positive
Neutron crystallography | Nucleon density | Diffraction pattern | Positive
Microscopy | Transmissivity or reflectivity of specimen | Back focal (diffraction) plane field | Complex
Interferometry; spectroscopy; radio astronomical aperture synthesis | Spatially incoherent distribution of sources | Visibility or spatial frequency spectrum | Positive
Radio engineering; ultrasonic/acoustic/optical measurements | Aperture distribution or pupil field | Far-field (Fraunhofer) radiation pattern | Complex
Acoustic/electromagnetic scattering | Spatially coherent distribution of sources | Far-field (Fraunhofer) scattering pattern | Complex
Communication; speech processing | Signal | Temporal spectrum | Real
Image processing | Image | Spatial frequency spectrum | Complex, real, or positive depending upon detailed application

IV. ITERATIVE PHASE REFINEMENT

Having dealt with the preliminaries, we are now in a position to illustrate the power of existing phase retrieval algorithms. This section is concerned with the iterative procedures that form the core of all presently devised schemes for generating numerical solutions to Fourier phase problems.

Both the image and visibility are represented as arrays of pixels (refer to Section II.C). The ordinary and in-between samples (refer to Table II) of the visibility constitute its pixels. The pixels of the image are arranged in a rectangular array, with their ordering in the x and y directions identified by the integers m and n, respectively. We denote the m,nth image pixel by

f_{m,n} = f(mΔ_1, nΔ_2)    (28)

where Δ_1 and Δ_2 are the spacings, in the x and y directions, respectively, of adjacent samples. In general, each f_{m,n} can be a complex number, although in Section IV.C, and in all of Section V, it is required to be positive (refer to Section III.G and Table III). The two iterative procedures which computational practice has shown to be particularly effective are introduced in Sections IV.B and IV.C. Means of starting and stopping the iterations are discussed in Sections IV.A and IV.D, respectively. While Sections IV.B and IV.C are illustrated with particular image-forms obtained after various numbers and types of iterations, Section IV.D presents histories of the numerical convergence of several reconstructed image-forms. We feel that the results collected in Section IV.D should give the reader a clear indication of the computational implications of the iterative algorithms. The effects of noise on the data are discussed in Section IV.F.
FIG. 5. Images formed from zero-phase visibilities. The actual images are the Fourier transforms of (a) |F_1(u, v)|; (b) |F_2(u, v)|; (c) |F_1(u, v)|²; (d) |F_2(u, v)|². Note that images (c) and (d) are the autocorrelations of the puffin and starfield, respectively.
A. Initial Phase Estimates
All three of the phase problems introduced in Section III are predicated upon being able to relate an initially unknown image-form to given visibility data. The connection between visibility space and image space is by Fourier transformation. Since only the intensity of the visibility is given, we must first construct a phase function Ψ(u, v) which can be combined with the (positive square root of the) intensity to allow a first estimate f̂(x, y) of the image-form to be computed, that is,

|F(u, v)| exp[iΨ(u, v)] = F̂(u, v) ↔ f̂(x, y)    (29)

In Sections V.B and V.C we discuss versions of Ψ(u, v) which are computed from the given visibility intensity. Here, we discuss only the most rudimentary initial phase estimates.

The simplest is undoubtedly a constant, i.e., Ψ(u, v) has the same value (zero being most convenient) for all u and v. Setting Ψ(u, v) = 0 is often unsatisfactory, however, for the following reason. Note, first, that the phase of I(u, v) is zero and, second, that its Fourier transform is the autocorrelation of the image, as follows from Eqs. (23) and (11). Even though |F(u, v)| is the square root of I(u, v), the Fourier transform of |F(u, v)| nevertheless tends to be somewhat similar to f_s(x, y) (compare Figs. 5a and 5b with 5c and 5d, respectively). This often retards the numerical convergence of the iterative procedures described in Sections IV.B and IV.C.

Less simple, but much more effective initial phase estimates can be computed (without undue computational expense) with the aid of a pseudo-random number routine. The latter is a computer program that generates real, effectively random numbers within a specified range according to a specified (pseudo) probability density function (pdf) (cf. Bury, 1975; Burr, 1970; Bethea et al., 1975; Bhattacharyya and Johnson, 1977). The simplest kind of pseudo-random Ψ(u, v) is obtained by allotting a simple pseudo-random number, having a uniform pdf within the range -π and +π, to each ordinary and in-between structure factor. We call this the direct pseudo-random initial phase estimate.

Another relatively simple kind of pseudo-random Ψ(u, v) is constructed in the following way. A first estimate f̃(x, y) of the "boxed image" (this terminology is explained in Section IV.C) is constructed by taking, for each pair of values of m and n corresponding to pixels within B(x, y),

f̃_{m,n} = a_{m,n} exp(iθ_{m,n})    (30)

where a_{m,n} and θ_{m,n} are independent pseudo-random numbers, the former being positive and the latter real. Their ranges and pdf are chosen to accord with whatever a priori information is available. For instance, if the image is known to be positive, each θ_{m,n} is set to zero.
In the absence of such information, it is often adequate to take each pdf to be uniform within its range (i.e., zero to the maximum permitted image magnitude, for a_{m,n}; and -π to +π, for θ_{m,n}). This boxed image is defined either only in B(x, y) = B(x, y | 1), or in B(x, y | 2) with pixels set to zero in the part of B(x, y | 2) not spanned by B(x, y | 1). On computing Ê(u, v), which is defined by

Ê(u, v) ↔ f̃(x, y)    (31)

we choose

Ψ(u, v) = phase{Ê(u, v)}    (32)

which we call the indirect pseudo-random initial phase estimate.

In our computational experience, the direct and indirect pseudo-random initial phase estimates are more or less equally effective. Figure 6 shows the two versions of f̂(x, y), computed from the magnitude of the visibility of Fig. 3a and the two kinds of pseudo-random Ψ(u, v). Neither version closely resembles Fig. 3a, but both versions can serve to initiate the iterative procedures introduced in Sections IV.B and IV.C. Note that both of these f̂(x, y) spread throughout most of B(x, y | 2), whereas f(x, y) itself is, of course, confined to B(x, y | 1). We emphasize that the initial phase estimates described in Sections V.B and V.C tend to be significantly more effective than either of the pseudo-random estimates.
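Both kinds of starting phase are one-liners in practice. The sketch below is our illustration (not the authors' code); it assumes NumPy, and the grid size, box position, and random seed are arbitrary. It builds the direct and indirect pseudo-random initial phase estimates described above.

```python
import numpy as np

rng = np.random.default_rng(3)
shape = (64, 64)          # visibility grid: ordinary and in-between samples

# Direct estimate: an independent uniform phase in (-pi, pi) for every sample.
psi_direct = rng.uniform(-np.pi, np.pi, shape)

# Indirect estimate: transform a pseudo-random boxed image (Eq. (30)) and
# take the phase of the result (Eqs. (31) and (32)).
boxed = np.zeros(shape)
boxed[16:48, 16:48] = rng.random((32, 32))   # positive pixels inside B(x, y)
psi_indirect = np.angle(np.fft.fft2(boxed))

# Either phase map can be combined with the measured magnitude |F(u, v)|
# to form the first estimate of Eq. (29):  F_hat = mag_vis * exp(1j * psi).
```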
B. Gerchberg-Saxton Algorithm

The phase problem in, for example, electron microscopy involves the determination of the phase corresponding to either an image plane or a diffraction plane micrograph (Misell, 1978; Gerchberg, 1972). Solutions to such phase problems are of importance to various aspects of imaging of microscopic structures. In a limited number of cases relating to biological materials, either occurring naturally in regular form or prepared in crystalline form, electron micrographs can be recorded in both the image plane and the back focal plane (i.e., diffraction plane, or Fourier plane), as has been demonstrated by Unwin and Henderson (1975) working with purple membrane and catalase, Hui et al. (1974) who have studied hydrated and frozen specimens, and Taylor and Glaeser (1976) who were concerned with catalase. Instrumental limitations, caused, for instance, by stray magnetic fields, cannot be overlooked in practice. The resolution in the image plane is usually poorer than in the diffraction plane. For example, in the work related to purple membrane, the resolution in the image plane was of the order of 0.6 nm, as compared with 0.3 nm in the diffraction plane (Misell, 1978).

Gerchberg and Saxton (1971, 1972) put forward the following numerical solution to the microscopal phase problem (refer to Section III.B).
FIG. 6. Images obtained from the puffin's visibility magnitude, |F1(u, v)|, combined with (a) direct pseudo-random phase, (b) indirect pseudo-random phase.
constructed an initial phase estimate Ψ(u, v), in one of the ways described in Section IV.A, and having computed f̂(x, y) as prescribed by Eq. (29), the image phase function φ̂(x, y) is defined by

φ̂(x, y) = phase{f̂(x, y)}    (33)
This is then combined with the given image magnitude |f(x, y)| to give

f̃(x, y) = |f(x, y)| exp[iφ̂(x, y)]    (34)

On computing F̃(u, v), as prescribed by Eq. (31), and taking Ψ(u, v) to be given by Eq. (32), we return to Eq. (29), thereby setting up an iterative loop which generates successive versions of φ̂(x, y) and Ψ(u, v). This algorithm can function satisfactorily without oversampling (N = 1; refer to Section II.C), as shown in Figs. 7a and 7b. But note that the starfield image would obviously benefit from more iterations (the black spots in Fig. 7b represent negative parts in the image). The algorithm's numerical convergence can be accelerated (refer to Section IV.E) by oversampling by a factor of 2 (N = 2), as is illustrated in Figs. 7c and 7d.
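A minimal sketch of the resulting loop, assuming the NumPy conventions of the previous sketch (names are ours), alternates between imposing the given image magnitude |f(x, y)| and the given visibility magnitude |F(u, v)|:

    import numpy as np

    def gerchberg_saxton(image_magnitude, vis_magnitude, psi0, n_iter=5):
        # psi0: initial visibility phase estimate Psi(u, v) (Section IV.A).
        psi = psi0
        for _ in range(n_iter):
            f_hat = np.fft.ifft2(vis_magnitude * np.exp(1j * psi))     # Eq. (29)
            phi_hat = np.angle(f_hat)                                  # Eq. (33)
            f_tilde = image_magnitude * np.exp(1j * phi_hat)           # Eq. (34)
            psi = np.angle(np.fft.fft2(f_tilde))                       # Eqs. (31), (32)
        return np.fft.ifft2(vis_magnitude * np.exp(1j * psi))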
C. Fienup Algorithms

In this subsection the image-form is assumed to be positive for reasons given in Sections III.F and III.G, so that the visibility phase must be antisymmetric through the origin of visibility space: f(x, y) is real and

Ψ(−u, −v) = −Ψ(u, v)    (35)

The visibility magnitude must be oversampled by at least 2 (N ≥ 2; refer to Section II.C). Note that the constraint on the phase function imposed in Eq. (35) only causes f̂(x, y), as defined by Eq. (29), to be real. For it to be positive, as well as real, Ψ(u, v) must be the actual phase function that we are seeking. So, it is necessary to find some iterative procedure for generating this phase function. To this end, we introduce what we call the boxed image f̃(x, y), which is defined by
"kY ) = P ( X 3 Y ) + 4fp+ (x, Y ) + j ; (x, Y ) ) - P(P- (x, Y ) + 7-(x, Y ) ) -
(36)
where the subscript p denotes the previous version (i.e., the version obtained during the iteration completed just before the one presently being considered). The superscripts + and −, respectively, identify those positive and negative parts of an image which are within B(x, y). The superscript ~ identifies all parts of the image within B̃(x, y | N), which is the complement of B(x, y) in B(x, y | N), i.e., it comprises all pixels of B(x, y | N) not within B(x, y). The
FIG. 7. Image-forms reconstructed by five Gerchberg-Saxton iterations, starting with direct pseudo-random initial phase estimates. (a) Puffin with N = 1, (b) starfield with N = 1, (c) puffin with N = 2, (d) starfield with N = 2.
FIG. 8. Image-forms reconstructed by 20 Fienup error-reduction iterations, for N = 2, starting with direct pseudo-random initial phase estimates. (a) Puffin, (b) starfield.
factors α and β are both positive constants in the range 0 to 1. We say that f̃(x, y) is "boxed" because we are trying to confine it to the image box. We set up an iterative loop by successively invoking Eqs. (31), (32), (29), (36), (31), etc. The simplest Fienup algorithm, called error correction or error reduction, is defined by α = β = 0. Fienup (1982) introduces and compares several algorithms. Besides his error-reduction algorithm, we find his hybrid input-output algorithm (α = 1.0, β = 0.5) most useful. The error-reduction algorithm is a species of the Gerchberg (1974) algorithms, and as such necessarily converges. It tends to stagnate relatively rapidly, however (see Section IV.E). We find it essential to invoke the hybrid input-output algorithm after completing a few error-reduction iterations (refer to Section V). Figures 8a and 8b show the f̂⁺(x, y) corresponding, respectively, to Figs. 3a and 3b after 20 error-reduction iterations, starting with direct pseudo-random initial phase estimates. Neither image can be said to be faithful. However, Fig. 9 shows that improved versions of f̂⁺(x, y) can be obtained after a further 30 iterations, but employing the hybrid input-output algorithm. Figure 10 shows that the versions of f̂⁺(x, y) obtained after another 50 hybrid input-output iterations are very similar to the original images shown in Fig. 3. Inspection of Figs. 8-10 reveals that the image-form of Fig. 3a is easier to reconstruct than that of Fig. 3b. This is because the pronounced detail present in the former gives rise to more closely spaced fringes in visibility space, as is emphasized by Fig. 11, which depicts |F(u, 0)| and |F(0, v)| for the original images shown in Fig. 3. Note that the Fienup algorithms have parameters α and β which are kept constant when computing estimates of the visibility phase. Their optimal values are found by numerical experimentation; several specimen images are chosen and their visibility magnitudes are subjected to a particular number of iterations. The values of α and β which produce the most faithful images are deemed to be the optimal values (Fright, 1984). This explains our choice, for the hybrid input-output algorithm, of α = 1.0 and β = 0.5.
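The following sketch follows Fienup's (1982) standard statements of error reduction and hybrid input-output with the support-plus-positivity constraint used here, rather than the general α, β parameterization of Eq. (36); function and variable names are ours:

    import numpy as np

    def fienup_iteration(f_boxed, vis_magnitude, box_mask, beta=0.5, hybrid=True):
        # One Fienup iteration: impose the given |F(u, v)|, transform back,
        # then re-box the image estimate (support plus positivity).
        F = np.fft.fft2(f_boxed)
        f_hat = np.real(np.fft.ifft2(vis_magnitude * np.exp(1j * np.angle(F))))
        violating = (f_hat < 0) | (box_mask == 0)   # negative inside B, or outside B
        if hybrid:                                  # hybrid input-output
            return np.where(violating, f_boxed - beta * f_hat, f_hat)
        return np.where(violating, 0.0, f_hat)      # error reduction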
D. Error and Stopping Criteria

As the iterations proceed, we need to know how close we are getting to the true image-form. The fraction of f̂(x, y) existing within B̃(x, y | N) is a realistic measure of the "error" between f̂(x, y) and the true image-form, because the latter is confined to B(x, y) by definition. So, we introduce the outside error of order ν, denoted
FIG. 9. Improved image-forms obtained from an additional 30 Fienup hybrid input-output iterations, starting from Fig. 8. (a) Puffin, (b) starfield.
FIG. 10. Further improvement of image-forms obtained after an additional 50 Fienup hybrid input-output iterations, starting from Fig. 9. (a) Puffin, (b) starfield.
FIG. 11. Profiles (plotted logarithmically) through magnitudes of visibilities of puffin and starfield. (a) |F1(u, 0)|; (b) |F1(0, v)|; (c) |F2(u, 0)|; (d) |F2(0, v)|. Note that the fringe frequency in (a) and (b) is greater than in (c) and (d).
by ε̃(ν) and defined by

ε̃(ν) = ∫∫_{B̃(x, y | N)} |f̂(x, y)|^ν dx dy    (37)
where ν can be any positive number, but is usually set equal to 1 (which defines the magnitude error) or 2 (which defines the intensity error). In the microscopal phase problem (Section III.A) we are given the true |f(x, y)|. When implementing the Gerchberg-Saxton algorithm (Section IV.B), therefore, it makes sense to introduce the Gerchberg-Saxton inside
error, denoted by ε̂_GS(ν) and defined by

ε̂_GS(ν) = ∫∫_{B(x, y)} (|f(x, y)| − |f̂(x, y)|)^ν dx dy    (38)
We have no a priori knowledge of the image-form in the Fourier phase problem (Section III.B). So, ε̃(ν) represents the only error criterion for the Fienup algorithms, unless we know the image-form is positive. In the latter case it makes sense to introduce the Fienup positivity inside error, denoted by ε̂₊(ν) and defined by

ε̂₊(ν) = ∫∫_{B(x, y)} |f̂⁻(x, y)|^ν dx dy    (39)
The total errors for the Gerchberg-Saxton and Fienup algorithms are respectively defined by

ε_GS(ν) = ε̃(ν) + ε̂_GS(ν)   and   ε_F(ν) = ε̃(ν) + ε̂₊(ν)    (40)
When ν is set equal to 2, the error measures are exactly equivalent to measures which could be correspondingly defined in visibility space, because of Parseval's theorem (cf. Bracewell, 1978).
The total error never vanishes in practice, of course. In fact, it tends to oscillate about some lower limit, set by various uncertainties (e.g., noise on the data, numerical approximations) which are by no means fully understood as yet (cf. Sanz and Huang, 1985). We therefore need to gauge objectively when to cease iterating. A suitable stopping criterion is to assume that f̂(x, y) is as close as can be expected to the true image-form when

ε(ν) < E    (42)

where ε(ν) represents either ε_GS(ν) or ε_F(ν), and E is the available a priori estimate of the minimum practicable error. We would like to be able to derive E from whatever estimate is available of the level of contamination of the data. Unfortunately, very little is known about how to do this. In practice, E is either chosen on the basis of computational experience or it is set a posteriori as that value below which ε(ν) seems incapable of falling.
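A minimal sketch of these criteria, under the same assumed conventions as the earlier sketches (unnormalized integrals approximated by pixel sums; names are ours):

    import numpy as np

    def outside_error(f_hat, box_mask, nu=2):
        # Eq. (37): energy of the estimate lying outside the image box B(x, y).
        return np.sum(np.abs(f_hat[box_mask == 0]) ** nu)

    def positivity_inside_error(f_hat, box_mask, nu=2):
        # Eq. (39): negative excursions of the estimate inside B(x, y).
        inside = np.real(f_hat) * box_mask
        return np.sum(np.abs(np.minimum(inside, 0.0)) ** nu)

    def should_stop(f_hat, box_mask, E, nu=2):
        # Eq. (42): stop when the total (Fienup) error falls below the level E.
        total = outside_error(f_hat, box_mask, nu) + positivity_inside_error(f_hat, box_mask, nu)
        return total < E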
E. Numerical Convergence

The error criteria introduced in Section IV.D are illustrated in this section by showing a few examples of the variation of the errors with the number J of iterations. These examples relate to reconstructions of the image-form of the starfield. The Gerchberg-Saxton and Fienup inside errors are given by Eqs. (38) and (39), respectively. Figure 12a shows plots of these errors, for ν = 1, obtained when the starfield is reconstructed from the Gerchberg-Saxton and Fienup algorithms. Convergence is faster for the Gerchberg-Saxton algorithm because of the extra available information (i.e., the image space magnitude is given as well as the visibility space magnitude). The Gerchberg-Saxton algorithm applies only, of course, to the microscopal phase problem. The Fienup algorithm must be invoked for the general Fourier phase problem. The rate of numerical convergence of the Fienup algorithms often depends markedly on the initial phase estimates (see Sections IV.A, V.B, and V.C, but note our reference to Sault (1984) made in the introductory comments to Section V), as is evinced by the computational examples of Won et al. (1985), for instance. Figure 12b shows error plots for ν = 2, obtained when the starfield is reconstructed with the aid of Fienup's algorithms, starting with three different kinds of initial phase estimate. Note that crude phase estimation and its improved version, based on the idea of the extended image (E-image), are discussed in Sections V.B and V.C, respectively.

F. Consequences of Noisy Data
Figures 13a, 13b, and 13c show the image-forms obtained, after the same number and types of iterations used to generate Fig. 9a, when the data are the ordinary and in-between structure factors of the noise-contaminated visibility magnitude, as defined by Eq. (20), with the rms value of |F_n(u, v)|/⟨|F(u, v)|⟩ being 0.05, 0.10, and 0.25, respectively, where the angular brackets denote an average over visibility space. The noise was independent between structure factors and was uniformly distributed between its extreme negative and positive values. Since the outer structure factors (i.e., those far from the origin of visibility space) of F(u, v) are comparatively small, their signal-to-noise ratios are effectively lower. In fact, many of these signal-to-noise ratios are significantly less than unity, so that the visibility intensities corresponding to Figs. 13b and 13c in particular are palpably noisy by practical image-processing standards. A most encouraging aspect of Figs. 13a, 13b, and 13c is that the image-forms become less faithful only gradually as the noise level increases. There is
FIG. 12. Plots of errors versus number J of iterations for the starfield with N = 2. (a) Dotted line ε̂_GS(1) and solid line ε̂₊(1), all iterations error-reduction; (b) ε̂₊(2), first iteration error-reduction, following iterations hybrid input-output, starting with: direct pseudo-random initial phase (solid line), crude phase estimation (dashed line), improved crude phase estimation (dotted line). Vertical scales are arbitrary (i.e., they have been normalized for convenience of display).
FIG. 13. Image-forms of starfield reconstructed, starting with direct pseudo-random initial phases followed by 20 error-reduction and 80 hybrid input-output iterations, from noisy visibilities. The rms values (defined as in Section IV.F) were (a) 0.05; (b) 0.10; (c) 0.25; (d) 0.10, but the noisy visibility magnitude was preprocessed as described in Section V.A.
no sudden, catastrophic deterioration such as usually occurs, for instance, when one attempts extrapolation in the presence of noise. The "gentle deterioration," which has been previously remarked upon both by Fienup (1979, 1982, 1984) and ourselves (Bates and Fright, 1984; Bates et al., 1984), does of course greatly enhance the practical significance of Fourier phase retrieval algorithms. We show in Section V.A how the effects of noise can be ameliorated in part by preprocessing, thereby permitting the reconstruction of image-forms more faithful than those shown in Figs. 13a, 13b, and 13c. Figure 13d shows the improvement on Fig. 13b obtained when the techniques described in Section V.A are invoked. Significantly more of the fainter detail exhibited by Fig. 3b is apparent in Fig. 13d than in Fig. 13b.
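For readers wishing to repeat this kind of experiment, a sketch of the noise model described above follows; the exact convention relating the rms fraction to the mean visibility magnitude, and the clamping of the corrupted magnitudes to non-negative values, are our assumptions:

    import numpy as np

    def add_uniform_noise(vis_magnitude, rms_fraction, rng):
        # Corrupt each structure factor magnitude with independent, zero-mean,
        # uniformly distributed noise whose rms value is a stated fraction of
        # the mean visibility magnitude (cf. Section IV.F).
        rms = rms_fraction * np.mean(vis_magnitude)
        half_width = rms * np.sqrt(3.0)        # uniform pdf on (-a, a): rms = a / sqrt(3)
        noise = rng.uniform(-half_width, half_width, size=vis_magnitude.shape)
        return np.abs(vis_magnitude + noise)   # keep the magnitudes non-negative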
V. CANTERBURY ALGORITHM

It is demonstrated in this section that a composite algorithm, incorporating several features ancillary to the iterative algorithms discussed in Section IV.C, can perform significantly better than the iterative algorithms on their own. We do not wish to seem to imply that Fienup's algorithms always benefit from being augmented in the manner discussed in this section. As Sault's (1984) careful comparison of the Fienup algorithms and what he calls the Canterbury algorithm (this is actually what we call crude phase estimation, which is described in Section V.B) confirms, the former often perform better unaided. It nevertheless appears to be true that the aforesaid ancillary procedures do, in fact, markedly improve computational efficiency when the data are "noisy" (see Section V.A) and/or the image is foggy (see Section V.E), and they can make all the difference between success and failure under certain circumstances. Throughout this section we impose the constraint that the image f(x, y) is positive. The necessity for this restriction is explained in Sections III.F and III.G. Because the number of figures needed to illustrate this section is large, we have decided to present reconstructions of only one of the image-forms shown in Fig. 3. We have chosen Fig. 3b as being more representative of images of interest in scientific and technical applications. After describing in Sections V.A-V.E the details of the various steps of the composite algorithm (which we call the Canterbury algorithm), we show by example in Section V.F that image-forms can be successfully reconstructed when the data are oversampled by less than 2 (but of course more than 1). Finally, in Section V.G we list those problems which, if they can be successfully solved, will make Fourier phase retrieval both routine and globally applicable.
A. Preprocessing

When the data are appreciably corrupted by noise, they correspond to a significantly different image-form than that to which |F(u, v)| itself relates. This is one reason for the difference between corresponding images in Figs. 8b, 9b, and 10b on the one hand and Figs. 13a, 13b, and 13c on the other. It also helps to explain the absence (noted in Section IV.F) of catastrophic deterioration in the presence of noise, because the "true" image-form corresponding to noisy data presumably only differs from the image-form corresponding to the ideal data in proportion to the noise-to-signal ratio. As noted in Section II.D, there is one particular case in which a noisy image-form can be expected to differ markedly from its noiseless counterpart. The former's image box is likely to be appreciably larger. We show below how noisy data can be massaged to make them compatible with a particular image box. The spacing of the structure factors, whose magnitudes are given (or measured), must be less than that required to permit the autocorrelation to be computed directly (by Fourier transformation) from the intensities of the structure factors. When presented with a set of given structure factor magnitudes, the first step is to estimate the size of the image box. Since the linear dimensions of the autocorrelation box are necessarily twice those of the image box (see Section II.C), we begin by applying the FFT to the intensities of the given structure factors. Provided an estimate is available of the level of the noise contaminating the data, the image computed by the aforementioned FFT can be thresholded with a rectangular box, outside which exists merely what we deem to be noise. The rectangular box is the best estimate of the autocorrelation box which can be obtained from the data. To illustrate this, we show in Fig. 14b the magnitude of the visibility of Fig. 3b, oversampled by 4, with each sample corrupted by pseudo-random noise (refer to Section IV.A) drawn from a distribution having a mean level of zero. The rms level (as defined in Section IV.F) of the noise was 0.10. Figure 14a shows the noise-free visibility intensity of the original image shown in Fig. 3b. The Fourier transform of the intensity of the noisy visibility is shown in Fig. 14c. The bright square in this figure represents the perimeter of the autocorrelation box BB(x, y). The image detail outside BB(x, y) has an apparently random character. Since the autocorrelation must be both positive and confined to the autocorrelation box, we obtain an improved version of I(u, v), i.e., one partially compensating for the effects of noise, by Fourier transforming the positive parts of the contents of the autocorrelation box. Figure 14d shows the improved I(u, v) obtained from Fig. 14c. Note that the image-form shown in Fig. 13d was generated from this preprocessed intensity. We have previously argued (Bates and Fright, 1984, Section 12) that further improvement is
FIG. 14. Preprocessing of noisy magnitude of starfield's visibility. (a) I(u, v) = |F2(u, v)|²; (b) the noisy |F2(u, v)|, which is that from which Fig. 13b was reconstructed; (c) Fourier transform of the noisy |F2(u, v)|², the bright square being the estimate of the autocorrelation box; (d) finally preprocessed version of I(u, v); compare with (a).
possible by “masking” the contents of the autocorrelation box, but no objective way of choosing the mask has yet been devised (we comment further on this in Section V.G).
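A sketch of the preprocessing step just described might read as follows; the centred-box thresholding rule and all names are assumptions of ours, standing in for the procedure of this subsection:

    import numpy as np

    def preprocess_intensity(noisy_intensity, noise_level):
        # Fourier transform the noisy visibility intensity to image space;
        # the result is an estimate of the image autocorrelation.
        acf = np.real(np.fft.ifft2(noisy_intensity))

        # Threshold with a centred rectangular box: grow the box until all the
        # detail outside it lies below the assumed noise level.
        acf_c = np.fft.fftshift(acf)
        ny, nx = acf_c.shape
        cy, cx = ny // 2, nx // 2
        half = 1
        while half < min(cy, cx):
            outside = acf_c.copy()
            outside[cy - half:cy + half, cx - half:cx + half] = 0.0
            if np.max(np.abs(outside)) < noise_level:
                break
            half += 1

        # Keep only the positive contents of the estimated autocorrelation box,
        # then transform back to obtain the improved (preprocessed) intensity.
        box = np.zeros_like(acf_c)
        box[cy - half:cy + half, cx - half:cx + half] = np.maximum(
            acf_c[cy - half:cy + half, cx - half:cx + half], 0.0)
        return np.real(np.fft.fft2(np.fft.ifftshift(box)))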
B. Crude Phase Estimate
It follows from the definitions introduced in Section II.A and Table II that

E_{m+1,n} + E_{m,n} = 2E_{m+1/2,n}    (43)

and

E_{m,n+1} + E_{m,n} = 2E_{m,n+1/2}    (44)

On writing E_{m+μ,n+ν} = A_{m+μ,n+ν} exp(iθ_{m+μ,n+ν}), where A_{m+μ,n+ν} is positive and θ_{m+μ,n+ν} is real, we deduce from Eqs. (43) and (44), respectively, that

cos(θ_{m+1,n} − θ_{m,n}) = (4A²_{m+1/2,n} − A²_{m+1,n} − A²_{m,n}) / (2A_{m+1,n} A_{m,n})    (45)

and

cos(θ_{m,n+1} − θ_{m,n}) = (4A²_{m,n+1/2} − A²_{m,n+1} − A²_{m,n}) / (2A_{m,n+1} A_{m,n})    (46)

When the image is positive, F_{0,0} is necessarily positive from Eq. (1), implying from Eq. (8) that E_{0,0} is positive, so that

θ_{0,0} = 0    (47)
Note that A_{m+μ,n+ν} is a structure factor magnitude. Consequently, the cosines on the left-hand sides of Eqs. (45) and (46) are expressed by those two equations entirely in terms of structure factor magnitudes. Furthermore, because E(u, v) and E*(u, v) are by definition (Section III.A) both visibilities of the same image-form, we are permitted to choose θ_{1,0} to be the positive arccosine of the right-hand side of Eq. (45) when m = n = 0:

θ_{1,0} = cos⁻¹[(4A²_{1/2,0} − A²_{1,0} − A²_{0,0}) / (2A_{1,0} A_{0,0})]    (48)

Figure 15 is a phasor diagram on which both θ_{0,0} and θ_{1,0} are shown (for a particular value of the latter). Setting m = n = 0 in Eq. (46) we obtain, with the aid of Eq. (47),

θ_{0,1} = cos⁻¹[(4A²_{0,1/2} − A²_{0,1} − A²_{0,0}) / (2A_{0,1} A_{0,0})]    (49)

The two possible values of θ_{0,1} are represented (for particular structure factor magnitudes) by the cross-hatched lines on Fig. 15. Setting m = 0 and n = 1 in Eq. (45) gives (again assuming particular structure factor magnitudes) two values of θ_{1,1} for each value of θ_{0,1}. The four possible values of θ_{1,1} are represented by the dashed lines on Fig. 15. Finally, setting m = 1, n = 0 in
FIG. 15. Phasor diagram illustrating evaluation of θ_{0,1} and θ_{1,1} from given values of θ_{0,0} and θ_{1,0}.
Eq. (46) gives the two quantities ±(θ_{1,1} − θ_{1,0}). The two possible values of θ_{1,1} so obtained are represented by the dotted lines on Fig. 15. Note that one dashed and one dotted line coincide. Note further that, in the absence of a special symmetry, there can only be one such coincidence. So, the value of θ_{1,1} is fixed. A similar construction fixes the value of θ_{0,1}. It is convenient to say that the eight structure factors at the sample points (m/N, n/N), ((m + 1/2)/N, n/N), ((m + 1)/N, n/N), ((m + 1)/N, (n + 1/2)/N), ((m + 1)/N, (n + 1)/N), ((m + 1/2)/N, (n + 1)/N), (m/N, (n + 1)/N), and (m/N, (n + 1/2)/N) constitute the m,nth unit cell in visibility space. Since the 0,0th unit cell is similar to all the other unit cells, the previous paragraph shows that, given all the magnitudes of the structure factors belonging to any unit cell, and given the phases of two adjacent ordinary structure factors, the phases of the other two ordinary structure factors can be inferred immediately. The latter two phases are unique, except when special symmetries pertain (note that, if the given structure factor magnitudes are derived from observations, the presence of the inevitable measurement noise prevents the occurrence of such symmetries). It follows that the phases of all the ordinary structure factors can be evaluated in pairs, successively applying the above construction to contiguous unit cells.
We see that the Fourier phase problem for |E(u, v)| has an elementary solution, in that phase{E(u, v)} is obtained from a single recursive pass across visibility space. Unfortunately, we are generally presented with samples of |F(u, v)| rather than |E(u, v)|. However, note from Eq. (8) that the ordinary structure factor magnitudes of E(u, v) are equal to those of F(u, v), implying that only the right-hand sides of Eqs. (43) and (44) are in error when F substitutes for E. Furthermore, if E is replaced by F in Eqs. (43) and (44), the accuracy of these formulas can be expected to increase as the spacing of the sample points decreases (i.e., as N increases). Conversely, since the above-derived phase retrieval procedure is recursive, there is a tendency for the accumulation of phase errors to increase with increasing N. In our computational experience, the optimum value for N is 2. We have not been able to find any theoretical justification for this (but, of course, many of the more interesting facets of technical science are beyond the reach of theoreticians!). Anyway, when the phases of the ordinary structure factors of F(u, v) are estimated by the recursive construction (described above) which is exact for E(u, v), we say that the phases so obtained constitute the crude phase estimate (CPE). There is a curious aspect of the CPE which is computationally significant. Since Eqs. (45) and (46) are only approximate when A_{m+μ,n+ν} = |F_{m+μ,n+ν}|, the magnitudes of their right-hand sides can sometimes exceed unity, thereby implying complex phases. We call this condition cosine overflow. Since the θ_{m,n} must be real, each cosine overflow must be appropriately countered. We set the argument of the cosine equal to 0 or π, respectively, whenever the right-hand side of these two formulas is greater than 1 or less than −1. We define the fraction of the cosine overflows to be the number of times the magnitudes of the right-hand sides of Eqs. (45) and (46) exceed unity divided by the total number of invocations of these two formulas. On Fourier transforming the I(u, v) shown in Fig. 14a, zero packing the result so that it exists within B(x, y | 4), and Fourier transforming back to visibility space, we obtain samples of the visibility intensity at the ordinary and in-between sample points defined in Table II with N = 2. After constructing the CPE and combining it with the ordinary structure factors, we obtain (by Fourier transformation) the image-form shown in Fig. 16. While this image-form bears little resemblance to Fig. 3b, it is significantly more structured than the image-form obtained from either the direct or indirect pseudo-random initial phase estimate (refer to Section IV.A). It represents an appreciably better first version of f̂(x, y). Figures 17a and b show image-forms obtained after 30 hybrid input-output iterations starting, respectively, with the direct pseudo-random initial phase estimate and the CPE. The accelerated numerical convergence manifested by the latter is obvious.
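A simplified sketch of the recursive CPE construction follows. It assumes that the magnitudes are supplied on a half-integer grid (even indices ordinary, odd indices in-between, N = 2); it takes the positive arccosine branch throughout and therefore omits the unit-cell coincidence construction that resolves the sign ambiguities, but it does include the cosine-overflow clamping described above:

    import numpy as np

    def crude_phase_estimate(A):
        # A[i, j] holds |F| sampled at (i/2, j/2): even indices are the ordinary
        # structure factors, odd indices the in-between ones.
        M, N2 = A.shape
        theta = np.zeros(((M + 1) // 2, (N2 + 1) // 2))   # phases of ordinary factors

        def clamped_dtheta(a_half, a1, a0):
            # Eqs. (45)/(46) with cosine-overflow clamping to 0 or pi.
            c = (4.0 * a_half**2 - a1**2 - a0**2) / (2.0 * a1 * a0)
            return np.arccos(np.clip(c, -1.0, 1.0))

        # Eq. (47) fixes theta[0, 0] = 0; Eq. (48) fixes theta[1, 0] as the
        # positive arccosine.  March along the first column, then along each row.
        for m in range(1, theta.shape[0]):
            theta[m, 0] = theta[m - 1, 0] + clamped_dtheta(
                A[2*m - 1, 0], A[2*m, 0], A[2*m - 2, 0])
        for m in range(theta.shape[0]):
            for n in range(1, theta.shape[1]):
                theta[m, n] = theta[m, n - 1] + clamped_dtheta(
                    A[2*m, 2*n - 1], A[2*m, 2*n], A[2*m, 2*n - 2])
        return theta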
FIG. 16. Image-form of starfield given by the Fourier transform of |F2(u, v)| combined with its crude phase estimate.
C. Improving Crude Phase Estimate
Since the CPE is exact for E(u, v), as we have shown in Section V.B, it makes sense to try and reconstruct the |E_{m+μ,n+ν}| from the given |F_{m+μ,n+ν}|; refer to the definition [Eq. (9)]. Having obtained the CPE from the given magnitudes of the structure factors of F(u, v), and having noted the cosine overflows, we then generate f̂(x, y) by Fourier transformation and use it to form, according to Eqs. (4) and (16) with f replaced by f̂, a first estimate ẽ(x, y) of e(x, y). By Fourier transforming back to visibility space we obtain estimates of the in-line in-between structure factors of E(u, v). Combining the magnitudes of the latter with the given ordinary structure factor magnitudes of F(u, v), which equal the magnitudes of the ordinary structure factors of E(u, v), as Eq. (8) shows, a new CPE is generated (having a hopefully lower fraction of cosine overflows). We thus set up an iterative loop which is continued until the fraction of the cosine overflows reaches a level (0.01 in our computational experience) below which experience suggests it is pointless to proceed. Figure 18a shows the part of the version of ẽ(x, y) within the image box obtained from the f̂(x, y) shown in Fig. 17b, after three iterations (when the fraction of cosine overflows was 0.025). The fraction of cosine overflows
FIG.17. Image-forms of starfield obtained from one error reduction followed by 29 hybrid input-output iterations, starting with (a) direct pseudo-random initial phases, (b) CPE.
FIG. 18. Image-forms of starfield. (a) The part of the third version of ẽ(x, y) within B(x, y), obtained after starting with Fig. 17b (the fraction of cosine overflows was 0.025); (b) f̂(x, y) obtained from the same number and type of Fienup iterations as invoked for Fig. 17, but starting with Fig. 18a (although the background is "noisier" than in Fig. 17b, it shows more of the true detail displayed in Fig. 3b).
corresponding to the first two versions of ẽ(x, y) were 0.060 and 0.037, respectively. Figure 18b shows the image-form obtained after the same number and types of iterations used to generate the image-form shown in Fig. 17b. The further acceleration in numerical convergence manifested by Fig. 18b over Fig. 17b is obvious. The above iterative scheme suffers from the defect that it makes no use of the given in-between structure factor magnitudes of F(u, v) once the initial CPE is generated. There is then nothing to prevent the f̂(x, y) from which ẽ(x, y) is formed from departing more and more from the true image-form at each iteration. This is probably why we find it best not to reduce the fraction of cosine overflows below 0.01. We comment on this further in Section V.G. Won et al. (1985) suggest other related means of improving the CPE and demonstrate them by example. Since their performance seems to be inferior to that of the approach described here, we neglect them in this review.

D. Fienup Cycling
Having generated a particular initial phase estimate, it is profitable to combine Fienup's error-reduction and hybrid input-output algorithms into cycles. Each of the latter starts with a few (ν1, for example) error-reduction iterations, is followed by an appreciable number (ν2) of hybrid input-output iterations, and finishes with a small number (ν3) of error-reduction iterations (cf. Bates and Fright, 1984; Fright, 1984). The image-form shown in Fig. 8b was reconstructed from 20 error reductions (ν1 = 20, ν2 = 0, ν3 = 0) and Fig. 10b was obtained from a further 80 hybrid input-output iterations (ν1 = 20, ν2 = 80, ν3 = 0). Figure 19 shows the image-form obtained after a further 10 error reductions (ν1 = 20, ν2 = 80, ν3 = 10). Using error reduction to finish the Fienup cycling tends to reduce artifacts, as comparison of Figs. 10b and 19 confirms. Error reduction is guaranteed to converge (Fienup, 1982), and this convergence can be accelerated by using hybrid input-output. Hybrid input-output is known to approach a limit cycle after a while. This is why it is important to revert to error reduction to complete Fienup cycling.
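The cycle structure just described can be sketched directly in terms of the fienup_iteration function introduced (by us) in Section IV.C:

    def fienup_cycle(f_boxed, vis_magnitude, box_mask, n1=20, n2=80, n3=10):
        # One Canterbury-style Fienup cycle: n1 error-reduction iterations,
        # n2 hybrid input-output iterations, then n3 error-reduction iterations.
        for _ in range(n1):
            f_boxed = fienup_iteration(f_boxed, vis_magnitude, box_mask, hybrid=False)
        for _ in range(n2):
            f_boxed = fienup_iteration(f_boxed, vis_magnitude, box_mask, hybrid=True)
        for _ in range(n3):
            f_boxed = fienup_iteration(f_boxed, vis_magnitude, box_mask, hybrid=False)
        return f_boxed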
E. Defogging and Refogging

It often happens that the given I(u, v) is dominated by a large central lobe, with the outer fringes being minuscule by comparison. Since it is these fringes which contain the information characterizing the image-form, the Fienup algorithms tend to converge very slowly (if at all) unless some ameliorative action is taken.
FIG. 19. Image-form of starfield obtained after a single Fienup cycle (ν1 = 20, ν2 = 80, ν3 = 10), starting with CPE.
The presence of a dominating central lobe in I(u, v) often implies that the image detail is superimposed upon a smooth background or fog. If one thinks of the detail as gleaming only fitfully through this fog, it is obvious why the iterative algorithms make heavy weather of reconstructing the image-form. This kind of thinking led us to introduce our defogging procedure (Bates and Fright, 1983), which we find is best applied before the preprocessing step described in Section V.A (Bates and Fright, 1984). We first of all multiply |F(u, v)| by a positive window W(u, v) which is small over the central lobe and smoothly approaches unity outside it. This is the defogging step, which experience shows to be optimized when the defogged central lobe is about 1.5 times as large as the largest of the outer fringes. Figure 20 shows the starfield (see Fig. 3b) with a smooth fog over all of it. Figure 21a is a profile through the visibility magnitude of this foggy starfield, along the u axis. The same profile, after defogging (using the window function depicted in Fig. 21b), is shown in Fig. 21c. Figure 22a is the image-form obtained from the defogged visibility magnitude after CPE and one Fienup cycle (ν1 = 20, ν2 = 80, ν3 = 10). Figure 22b is the image-form obtained from the CPE and an identical Fienup cycle operating on the original (undefogged) visibility magnitude. The superiority of Fig. 22a is obvious.
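A sketch of one possible defogging window follows; the raised-cosine profile, the floor value, and the lobe radius are illustrative choices of ours, not the authors' prescription, which tunes the window so that the defogged central lobe is about 1.5 times the largest outer fringe:

    import numpy as np

    def defogging_window(shape, lobe_radius):
        # A smooth positive window W(u, v): small over the central lobe of the
        # visibility, rising smoothly to unity outside it.
        ny, nx = shape
        u = np.fft.fftfreq(ny) * ny            # frequency indices, DC at [0, 0]
        v = np.fft.fftfreq(nx) * nx
        r = np.hypot(u[:, None], v[None, :])
        w = 0.05 + 0.95 * 0.5 * (1.0 - np.cos(
            np.pi * np.minimum(r, lobe_radius) / lobe_radius))
        return w

    # Usage: defogged = defogging_window(vis_magnitude.shape, lobe_radius=4) * vis_magnitude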
FIG. 20. Starfield with a smooth fog superimposed (this is called the foggy starfield).
FIG. 21. Profiles through quantities in visibility space, along the positive u axis. (a) Magnitude of visibility of the foggy starfield; (b) defogging function W(u, 0); (c) defogged visibility magnitude. Subscripts F and DF stand for foggy and defogged, respectively.
FIG. 22. Image-forms of foggy starfield obtained after a single Fienup cycle (ν1 = 20, ν2 = 80, ν3 = 10), starting with CPE and using (a) defogged visibility magnitude (see Fig. 21c), (b) original visibility magnitude (see Fig. 21a).
Another commonly encountered difficulty is the "partial fog" problem, illustrated by Fig. 23, which displays a fog over only part of the starfield. The fog is denser than that in Fig. 20, but its integrated amplitude is the same, so that it is appropriate to employ the same defogging window as that shown in Fig. 21b. Figures 24a and b show |F(u, 0)| before and after defogging, respectively. Figures 25a and b show the image-forms obtained, after CPE followed by a single Fienup cycle (ν1 = 20, ν2 = 80, ν3 = 10), with and without defogging, respectively. Figure 25b is somewhat superior to Fig. 25a, which might suggest that the partial fog problem is not too serious. Such a conclusion would be misleading because any "fogginess" impedes reconstruction of an image-form if it significantly reduces the depth of the fringes exhibited by the visibility magnitude. We have included Fig. 23, even though its visibility does not suffer all that much from such reduced fringe depth, in order to emphasize the major difficulty associated with the partial fog problem. This is that the image-form corresponding to the defogged visibility of a partially fogged image cannot be wholly positive. We do not yet understand how to handle this difficulty, which we mention further in item (iii) of Section V.G. When the error (see Section IV.D) either shows no sign of decreasing, or falls below some level set a priori, the estimate of the visibility phase so far obtained is combined with the given |F(u, v)| (this is the refogging step) and further Fienup cycling is carried out. Figures 26 and 27 show the image-forms
FIG. 23. Starfield with partial fog superimposed (this is called the partially foggy starfield).
FIG. 24. Profiles through magnitudes of visibilities of the partially foggy starfield, along the positive u axis. (a) Visibility of image shown in Fig. 23, (b) defogged visibility. Subscripts F and DF as for Fig. 21.
obtained after further error-reduction iterations, using the original visibility magnitudes, starting with the visibility phases corresponding to Figs. 22a and 25a, respectively. In Fig. 26, the detail in the original image (Fig. 20) is apparent. The fog in Fig. 26 can be reduced further by employing several hybrid input-output iterations. The refogging tends to ensure that the reconstructed image-forms satisfy the positivity constraint.
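A minimal sketch of the refogging step, assuming the fienup_cycle sketch introduced above (names are ours):

    import numpy as np

    def refog_and_continue(f_boxed, original_vis_magnitude, box_mask):
        # Refogging: keep the phase retrieved from the defogged data, restore
        # the original (foggy) visibility magnitude, and resume Fienup cycling.
        retained_phase = np.angle(np.fft.fft2(f_boxed))
        f_refogged = np.real(np.fft.ifft2(
            original_vis_magnitude * np.exp(1j * retained_phase)))
        return fienup_cycle(f_refogged, original_vis_magnitude, box_mask,
                            n1=10, n2=0, n3=0)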
F. Consequences of Noninteger Oversampling When Image Box Is Known a Priori

While the positivity constraint is essential for realistic phase retrieval in practice, some oversampling (as categorized by the symbol N introduced in Section II.C) of the visibility intensity is also usually necessary. The reason why N must be greater than 2 in general is because we have to deduce the size of the image box from the data. However, if this size is given a priori, the visibility phase can often be successfully retrieved when the amount of oversampling is less than 2, along each direction in visibility space, but is greater than unity (i.e., 1 < N < 2, no longer requiring N to be an integer). Figures 28a and b are image-forms of the starfield obtained from CPE and one Fienup cycle (ν1 = 20, ν2 = 80, ν3 = 10) using oversampling factors of N = 2 and N = 1.28, respectively. While the image-form shown in Fig. 28b is appreciably less faithful than that shown in Fig. 28a, it nevertheless reveals a significant amount of the detail exhibited by the true image (Fig. 3b). As N is gradually raised from 1.28 to 2, the same number of iterations generate an increasing amount of true detail.
FIG. 25. Image-forms of partially foggy starfield obtained after a single Fienup cycle (ν1 = 20, ν2 = 80, ν3 = 10), starting with CPE and using (a) defogged visibility magnitude (see Fig. 24b), (b) original visibility magnitude (see Fig. 24a).
FIG. 26. Image-form of foggy starfield obtained after 10 error-reduction iterations, starting with original visibility magnitude and phase corresponding to Fig. 22a.
FIG. 27. Image-form of partially foggy starfield obtained after 20 error-reduction iterations, starting with original visibility magnitude and phase corresponding to Fig. 25a.
FIG. 28. Image-forms of starfield obtained after a single Fienup cycle (ν1 = 20, ν2 = 80, ν3 = 10), starting with CPE when the size of the image box is known a priori. (a) N = 2; (b) N = 1.28.
G. Outstanding Questions

In this final subsection we list a number of problems whose successful resolution would permit Fourier phase retrieval to become entirely routine.
(i) Since all measurements are inevitably corrupted with a variety of contaminants, it is vital to understand the connection (mentioned in Section II) between noise level and the increased amount of oversampling needed to accommodate it. It is our opinion that this is the most important unresolved aspect of the phase retrieval problem. We also feel that its resolution will permit image-forms reconstructed from noisy data to be significantly more faithful than they presently are (see Section IV.F).
(ii) Connected with (i) above is the question of optimum preprocessing (see Section V.A) of noisy data to generate the "best" estimate of the autocorrelation derivable from those data. Since the latter are likely to be more reliable the nearer they are to the origin of visibility space (because I(u, v) is usually larger there), we feel that it should be possible to adapt Gerchberg's (1974) algorithm with advantage to this problem. Having decided on the size of the autocorrelation box (in the manner described in Section V.A), one might iteratively modify the data (such that the allowable modifications increase with the radial coordinates of each sample) until the Fourier transform of the modified intensity is as small as it can be forced to be on the perimeter, and outside, of the autocorrelation box. Possible ways of implementing this notion are being actively studied in our laboratory, but our present schemes are largely ad hoc. It is interesting to note, in passing, that Gerchberg's (1974) algorithm is conceptually a precursor of the Gerchberg-Saxton algorithm (see Section IV.B) although it seems to have been thought of later.
(iii) The defogging procedure (see Section V.E) would benefit from becoming an integral part of the preprocessing step (see Section V.A) of the Canterbury algorithm. Before this can be done really usefully, it will be necessary to develop a systematic approach to the partial fog problem (see Section V.E). There is another question which should ideally be studied simultaneously. This is what we have previously referred to as the problem of “fringe magnification” (Bates and Fright, 1985). The latter, which we have so far only been able to tackle in an ad hoc manner, arises when small secondary fringes are superimposed upon the main outer fringes of the visibility (in general, of course, the data exhibit fringes having a wide range of relative amplitudes). We are convinced that the idea of defogging needs to be systematically extended in some way which will permit all fringes to be amplified during the preprocessing step. A major question is how to keep the modified image-form positive while doing this.
(iv) Even though they are effective, the iterative algorithms are regrettably "brute force." The improvement to the CPE discussed in Section V.C represents a significant step toward a less stochastic approach to phase retrieval. However, the technique for estimating e(x, y) described in Section V.C needs to be extended such that, at successive iterations, f̂(x, y) is forced to become increasingly compatible with the given in-between structure factor magnitudes.
(v) Finally, there is the "Holy Grail" of direct phase retrieval. Although our hope may be "pious," we feel that the quest for a realistic approach should concentrate on the "complex zeros." Deighton et al. (1985) have systematized the algorithm foreshadowed by Napier and Bates (1974), but it is so computationally intensive that it could never be incorporated into computational practice. However, various vaguely promising ideas seem to be in the wind (Scivier et al., 1984; Scivier and Fiddy, 1985; Arsenault and Chalasinska-Macukow, 1983), so that it may not be overly optimistic to expect a "bright idea" to come from somewhere during the next few years.

ACKNOWLEDGMENTS

We are especially grateful to our colleague Dr. W. R. Fright for many detailed discussions and for the trouble he has taken to explain to us the fine points of the many phase retrieval algorithms that he has developed. One of us (D. M.) acknowledges the award of a Commonwealth scholarship under the bilateral aid program between New Zealand and Zimbabwe.
REFERENCES Ables, J. G. (1974). Maximum entropy spectral analysis. Astron. Astrophys., Suppl. Ser. 15, 383393. Andrews, H. C., and Hunt, B. R. (1977). “Digital Image Restoration.” Prentice-Hall, Englewood Cliffs, New Jersey. Andrews, H. C., Pratt, W. K., and Caspari, K. (1970). “Computer Techniques in Image Processing.” Academic Press, New York. Arsenault, H. A., and Chalasinska-Macukow, K. (1983). The solution to the phase problem using the sampling theorem. Opt. Commun.47 (6),380-386. Baritz, D. A,, and Zwick, M. (1974).The use of symmetry with the fast Fourier algorithm. Acta Crystallogr., Sect. A A30,257. Bates, R. H. T. (1969)Contributions to the theory of intensity interferometry. Mon. Not. R. Astron. SOC. 142,413-428. Bates, R. H. T. (1971). Holographic approach to radiation pattern measurement. I. General theory. Int. J . Eng. Sci.,9, 1193-1208. Bates, R. H. T. (1978a). On the Phase Problem. I. Optik 51 (2), 161-170.
Bates, R. H. T. (1978b). On the Phase Problem. 11. Optik 51 (3), 223-234. Bates, R. H. T. ( I 9784. Fringe visibility intensities may uniquely define brightness distributions. Astran. Astrophys. 70, L27-L29. Bates R. H. T. (1982a).Fourier phase problems are uniquely solvable more than onedimension. I. Underlying theory. Optik 61,247-262. Bates, R. H. T. (1982b).Astronomical speckle imaging. Phys. Rep. 90 (4), 263-297. Bates, R. H. T. (1984). Uniqueness of solution to two-dimensional Fourier Phase Problem for localised and positive images. Comput. Graphics, Vision Image Process. 25,205-217. Bates, R. H. T., and Fright, W. R. (1983). Composite two-dimensional phase restoration procedure. J. Opt. SOC.Am. 73 (3), 358-365. Bates, R. H. T., and Fright, W. R. (1984). Reconstructing images from their Fourier intensities. Chapter 5 . Adv. Comput. Vision Image Process. I, 227-264. Bates, R. H. T., and Fright, W. R. (1985). Two dimensional phase restoration. In “Fourier Techniques and Applications.” pp. 121-148. Plenum, New York. Bates, R. H. T., and McDonnell, M. J. (1986).“Image Restoration and Reconstruction.” Oxford Univ. Press, London and New York. Bates, R. H. T., and Napier, P. J. (1972). Identification and removal of phase errors in interferometry. Mon. Not. R. Astron. Sac. 158,405-424. Bates, R. H. T., and Tan, D. G. H. (1985a).Towards reconstructing phases of inverse scattering signals, J. Opt. SOC.Am. 2n (1 I), 2013-2017. Bates, R.H. T., and Tan, D. G. H.(1985b).Fourier phase retrieval when the imageis complex. Proc. Sac. Photo-Opt. Instrum. Eng. 558, 54-59. Bates, R. H. T., Fright, W. R.,and Norton, W. A. (1984). Phase restoration is successful in the optical as well as the computational laboratory. In “Indirect Imaging” (J. A. Roberts, ed.), pp. 119-124. Cambridge Univ. Press, London and New York. Bethea, R. M., Duran, S., and Boullion, T. L. (1975). “Statistical Methods for Engineers and Scientists,” Vol. 15. Dekker, New York. Bhattacharyya, G. K., and Johnson, R. A. (1977).“Statistical Concepts and Methods.” Wiley, New York. Boas, R. P. (1954). “Entire Functions.” Academic Press, New York. Bracewell, R. N. (1978). “The Fourier Transform and its Applications,” 2nd ed. McGraw-Hill, New York. Bricogne, G. (1974). Geometric sources of redundancy in intensity data and their use for phase determination. Acta Crystallogr., Sect. A A30,395-400. Bricogne, G. (1984).Maximum entropy and the foundations of direct methods. Acta CrystaNogr., Sect. A A40,410-445. Brigham, E. D. (1974). “The Fast Fourier Transform.” Prentice-Hall, Englewood Cliffs, New Jersey. Bruck, Y. M., and Sodin, L. G. (1979).On the ambiguity of the image reconstruction problem. Opr. Commun. 30,304-308. Burgi, H. B., and Dunitz, J. D. (1971).Multiple solutions of crystal structures by direct methods. Acta Crystallogr., Sect. A A27, 117-1 19. Burr, 1. W. (1970).“Applied Statistical Methods.” Wiley, New York. Bury, K. V. (1975).“Statistical Models in Applied Science.” Wiley, New York. Byrne, C. L., Fitzgerald, R. M., Fiddy, M. A., Hall, T. J., and Darling, A. M. (1983). Image restoration incorporating prior knowledge. Opt. Lett. 73,1466. Chanzy, H., Franc, J. M., and Herbage, D. (1976). High angle electron diffraction of frozen hydrated collagen. Biochem. J. 153, 131-140. Chapman, J. N. (1975a). The application of iterative techniques to the investigation of strong phase objects in electron microscopy. I. Test calculations. Philos. Mag. [8] 32,541-552.
Chapman, J. N. (1975b). The application of iterative techniques to the investigation of strong phase objects in electron microscopy. 11. Experimental results. Philos. Mag. [S] 32,541 -552. Colella, R. (1974). Multiple diffraction of X-rays and the phase problem: Computational procedures and comparison with experiment. Acta Crystallogr., Sect. A AM, 413-423. Cooley, J. W., and Tukey, J. W. (1965). An algorithm for the machine calculation of complex Fourier series. Math. Comput. 19 (90), 297-301. Cowley, J. M. (1981).“Diffraction Physics,” 2nd ed., North-Holland Publ., Amsterdam. Davies, A. R., and Rollett, J. S. (1979). Phase extension by means of square root of the electron density. Acta Crystallogr.,Sect. A A32, 17. Deighton, H. V., Scivier, M. S., and Fiddy, M. A. (1985). Solution of the two-dimensional phase retrieval problem. Opt. Lett. 10 (6), 250-251. Erickson, H. P., and Klug, A. (1975). Measurement and compensation of defocussing and abberrations by Fourier processing of electron micrographs. Philos. Trans. R. Soc. London, Ser. B261, 105-118. Eyck, L. F. T. (1973). Crystallographic fast Fourier Transforms. Acta Crystalfogr.,Sect. A A29, 183- 191. Fienup, J. R. (1978).Reconstruction of an object from the modulus of its Fourier transform. Opt. Lett. 3, 27-29. Fienup, J. R. (1979).Space object imaging through the turbulent atmosphere. Opt. Eng. 18,529534. Fienup, J. R. (1982). Phase retrieval algorithms: A comparison. Appl. Opt. 21 (15), 2758-2769. Fienup, J. R. (1984). Experimental evidence of the uniqueness of phase retrieval from intensity data. In “Indirect Imaging” (J. A. Roberts, ed.), pp. 99- 109. Cambridge Univ. Press, London and New York. Fienup, J. R., Crimmins, T. R., and Holsztynski, W. (1982).Reconstruction of object having latent reference points. J . Opt. Soc. Am. 73, 1421-1433. Foley, J. T., and Russell Butts, R. (1981). Uniqueness of phase retrieval from intensity measurements. J . Opt. SOC.Am. 71, 1008-1014. Fright, W. R. (1984). The Fourier phase problem. Ph. D. Thesis, University of Canterbury, Christchurch, New Zealand. Garden, K. L., and Bates, R. H. T. (1982). Fourier phase problems are uniquely solvable in more than one dimension. I1.One-dimensional consideration. Optik 62,131-142. Gerchberg, R. W. (1972). Holography without fringes in electron microscopy. Nature (London) 240,404-46. Gerchberg, R. W .(1974). Super-resolution through error energy reduction. Opt. Acta 21 (9), 709720. Gerchberg, R. W., and Saxton, W. 0. (1971). Phase determination from image and diffraction plane pictures in the electron microscopy. Optik 34,275-288. Gerchberg, R. W., and Saxton, W. 0.(1972).A practical algorithm for the determination of phase from image and diffraction plane pictures. Optik 35 (2), 237-246. Gull, S. F., and Daniell, G. J. (1978).Image reconstruction from incomplete and noisy data. Nature (London) 272,686-690. Gull, S. F., and Skilling, J. (1984). Maximum entropy method in image processing. Proc. Inst. Electr. Eng. 131 (6), 646-659. Hanszen, K. J. (1970). The optical transfer theory of the electron microscope fundamental principles and applications. In “Advances in Optical and Electronic Microscopy” (V. E. Cosslett and R. Barer, eds.), Vol. 4, pp. 1-84. Academic Press, New York. Harrison, H. R. (1970). Application of Patterson methods to the solution of molecular crystal structures containing a known rigid group. Acta Crystallogr. Sect. A A26,692-694.
Hayes, M. H. (1982). The reconstruction of a multidimensional sequence from the phase or magnitude of its Fourier transform. IEEE Trans. Acoust. Speech, Signal Process. ASSP-30 (21, 140-154. Hilderbrand, F. B. (1948). “Advanced Calculus for Engineers.” Prentice-Hall, Englewood Cliffs, New Jersey. Hofstetter, E. M. (1964). Construction of time-limited functions with specified autocorrelation function. IEEE Trans. Inf. Theory ITlO, 119-126. Hosoya, S., and Tokonami, M. (1967).Crystal structure analysis from a viewpoint of information theory. Acta Crystaffogr.,Sect. A A23, 18-25. Hui, S. W., Parson, D. F., and Cowden, M. (1974). Electron diffraction of wet phospholipid bilayer. Proc. Natl. Acad. Sci. U.S.A.71, 5068-5072. Institute of Electrical and Electronics Engineers (1969). lEEE Trans. Audio Electroacoust., Spec. Issue Fast Fourier Transforms Aa37. Ino, T. (1979). X-ray diffraction by small crystals. Acta Crystallogr., Sect. A A35, 163-170. Kazuyoshi, I., and Yoshihiro, 0. (1983). Phase estimation based on the maximum likelihood criterion. Appl. Opt. 22 (191, 3054-3057. Klug, A., and Crowther, R. A. (1972).Three-dimensional image reconstruction from the viewpoint of information theory. Nature (London) 238 (25), 435-440. Klug, A., Crowther, R. A., and DeRosier, D. J. 1970. The reconstruction of a threedimensional structure from projections and its application to electron microscopy. Proc. R. SOC. London. Ser. A 317,319-340. Krabbendam, H., and Kroon, J. (1971). A step-by-step algorithm for phase refinement with the aid of Sayre’s equations. Acta Crystailogr., Sect. A A27,48-53. Millane, R. P, Norton, W. A., Fright, W. R., and Bates, R. H. T. (1986). Towards direct phase retrieval in macromolecular crystallography. Biophys. J. 49,60-62. Misell, D. L. (1978). The phase problem in electron microscopy. In “Advances in Optical and Electron Microscopy” (V. E. Cosslett and R. Barer, eds.), Vol. 7, pp. 185-279. Academic Press, New York. Morris, D. (1985). Phase retrieval in radio holography of reflector antennas and radio telescope. IEEE Trans. Antennas Propag. AP-33 (7), 749-755. Napier, P. J. (1972). Reconstruction of radiating sources. Ph. D. Thesis, University of Canterbury, Christchurch, New Zealand. Napier, P. J., and Bates, R. H. T. (1974).Inferring phase information from modulus information in two-dimensional aperture synthesis. Astron. Astrophys., Suppl. Ser. 15,427-430. Narayan, R., and Nityananda, R. (1982). The maximum determinant method and the maximum entropy method. Acta Crystallogr., Sect. A A38, 122-128. Nityananda, R., and Narayan, R. (1982). Maximum entropy image reconstruction-a practical non-information-theoretical approach. J . Astrophys. Astron. 3,419-450. Oppenheim, A. V., and Lim, J. S. (1981). The importance of phase in signals. Proc. IEEE 69,529541. Paley, R. E. A. C., and Wiener, N. (1934).“Fourier Transforms in Complex Domain.” Am. Math. SOC.,Providence, Rhode Island. Pease, M. C. (1968). An adaption of the Fast Fourier Transform for parallel processing. J. Assoc. Comput. Mach. 15 (2), 252-264. Piro, 0. E. (1983). Information theory and the ‘Phase problem’ in crystallography. Acta Crystallogr., Sect. A A39.61-68. Post, B. (1979). A solution of the X-ray phase problem. Acta Crystallogr., Sect. A A39, 17-21. Post, B. (1983). The experimental determination of the phases of X-ray reflections. Acta Crystallogr., Sect. A A39, 711-718.
Rae, A. D. (1974).The phase problem and its implication in the least squares refinement of crystal structures. Acta Crystallogr. Sect. A A30, 761-765. Ramachandran, G. N., and Srinivasan, R. (1970). “Fourier Methods in Crystallography.” Wiley (Interscience), New York. Rosenfeld, A,, and Kak, A. C. (1982).“Digital Picture Processing,” 2nd ed. Academic Press, New York. Sanz, J. L., and Huang, T. S . (1985). On the stability and sensitivity of multidimensional signal reconstruction from Fourier Transform magnitude. Proc. IEEE Int. Con$ ASSP, pp. 10651068. Sault, R. J. (1984).Two procedures for phase estimation from visibility magnitudes. Aust. J. Phys. 37,209-229. Saxton, W. 0. (1978). Compter techniques for image processing in electron microscopy. Adu. Electron. Electron Phys. Suppl. 10. Saxton, W. 0.(1980).Recovery of specimen information for strongly scattering objects. Top. Curr. Phys. 13,35-87. Sayre, D. (1952).The implications of a theorem due to Shannon. Acta Crystallogr. Sect. A AS, 843. Scivier, M. S., and Fiddy, M. A. (1985). Phase ambiguities and zeros of multidimensional bandlimited functions. J. Opt. SOC.Am. 2A ( S ) , 693-697. Scivier, M. S., Hall, T. J., and Fiddy, M. A. (1984). Phase unwrapping using the complex zeros of a band-limited function and the presence of ambiguities in two dimensions. Opt. Acta 31 (6), 619-623. Skilling,J., and Bryan, R. K. (1984). Maximum entropy image reconstruction: General algorithm. Mon. Not. R. Astron. Soc. 21 1, 1 11-124. Taylor, C. A., and Glaeser, R. M. (1976). Electron microscopy of frozen hydrated biological specimens. J. Ultrastruct. Res. 55,448-456. Taylor, L. S. (1981).The phase retrieval problem. IEEE Trans. Antennas Propag. AP-29 (2), 386391. Thiessen, W. E., and Busing, W. R. (1974). Identification of systematically aberrant phase relationships arising from structural regularity. Acta Crystallogr., Sect. A AM, 814-821. Unwin, P. V. T., and Henderson, R. (1975). Molecular structure determination by electron microscopy of unstrained crystalline specimen. J. Mol. Biol. 94,424-440. Van Toorn, P., Greenaway, A. H., and Huiser, A. M. J. (1984). Phaseless object reconstruction. Opt. Acta 31 (7), 767-774. Wernecker, S. J. (1977).Two-dimensional maximum entropy reconstruction of radio brightness. Radio Sci. 12 (5). 831-844. Wernecker, S. J., and DAddario, L. R. (1977). Maximum entropy image reconstruction. I E E E Trans. Comput. C-26 (4). 351-364. Willingale, R. (1981).Use of maximum entropy method in x-ray astronomy. Mon. Not. R. Astron. SOC. 194,359-364. Won, M. C. (1984).Towards a solution of two dimensional Fourier phase problem. M. E. Thesis, University of Canterbury, Christchurch, New Zealand. Won, M. C., Mnyama, D., and Bates, R. H. T. (1985).Improving initial phaseestimates for phase retrieval algorithms. Opt. Acta 32 (4), 377-396.
Lie Algebraic Theory of Charged-Particle Optics and Electron Microscopes

ALEX J. DRAGT
Center for Theoretical Physics, Department of Physics and Astronomy, University of Maryland, College Park, Maryland 20742

ETIENNE FOREST
Lawrence Berkeley Laboratory, University of California, Berkeley, California 94720
I. INTRODUCTION

A. New Methods
The purpose of this paper is to present new methods, employing Lie algebraic tools, for characterizing charged-particle optical systems and computing aberrations. These new methods represent the action of each separate element of a compound optical system, including all departures from paraxial optics, by a certain operator. The operators for the various elements can then be concatenated, following well-defined rules, to obtain a resultant operator that characterizes the entire system. The use of Lie algebraic tools offers several advantages. First, the calculation of high-order aberrations is facilitated. As an example, this paper treats solenoids and sextupoles through fifth order. Also, a description is given of MICROLIE, a fifth-order code intended for the design, evaluation, and operation of electron microscopes and other optical systems. Second, new insight into the origin and possible correction of aberrations is provided. For example, we will discuss a "supercorrected" solenoid and sextupole system that is free of both third-order and fourth-order "spherical type" aberrations. A preliminary numerical study of this example indicates that, by simultaneously removing both third- and fourth-order
"spherical type" aberrations, it may be possible to build electron microscopes having subangstrom resolution.

B. Old Methods
Consider the charged-particle optical system illustrated schematically in Fig. 1. A charged particle originates at the initial point P^i at t^i. After moving through some optical device, typically containing various electric and magnetic fields, the particle arrives at the final point P^f at time t^f. Given the initial time, coordinates, and momenta (t^i, r^i, p^i), the fundamental problem of charged-particle optics is to determine the final quantities (t^f, r^f, p^f) and to design an optical device in such a way that the relation between the initial and final quantities has various desired properties. The relation between the initial and final quantities can always in principle be obtained by integrating numerically the equations of motion. However, this process may be rather slow if performed for a large number of rays. Moreover, it does not necessarily give much insight into the behavior of an optical system or what could be done to change it. A common procedure is to seek an expansion of the final quantities in the form of a power series in the initial quantities. In lowest order, this approach yields the linear matrix approximation of paraxial optics. The higher-order (nonlinear) terms in such an expansion provide a description in terms of aberration coefficients. The treatment of the higher-order terms can be facilitated by the use of various characteristic functions. This approach exploits the fact that the equations of motion for charged-particle motion in electromagnetic fields can be written in Hamiltonian (or Lagrangian) form. Despite their general theoretical importance and frequent applicability, there are certain awkward features associated with the use of characteristic functions. For example, their use gives relations which are implicit in that they contain an admixture of initial and final quantities. These relations must be made explicit, by solving for the final quantities in terms of the initial
FIG.1. A charged-particle optical system consisting of an optical device preceded and followed by simple transit. A particle originates at 4"' at time ri with location ri and momentum pi, and terminates at 4"' at time t' with location rf and momentum pf.
LIE ALGEBRAIC THEORY OF OPTICS
67
quantities, before they can be applied. This same circumstance makes it difficult to find explicitly the characteristic function for a compound optical system even when the characteristic function for each of its component parts is already known. It may also be observed that, when characteristic functions are used, no one kind of characteristic function is applicable to all kinds of optical systems. For example, a point characteristic function is not applicable to an imaging system, and an angle characteristic function is not applicable to a telescopic system. C. Promises As indicated earlier, the purpose of this paper is to give a preliminary description of an alternative Lie algebraic approach to the problem of characterizing charged particle optical systems ( 1 ) . This approach, like that employing characteristic functions, exploits the Hamiltonian nature of the equations of motion. Moreover, it has the feature that the relation between initial and final quantities is always given in explicit form. In particular, it is possible to represent the action of each separate element of a compound optical system, including all departures from linear matrix optics, by a certain operator. These operators can then be concatenated by following well-defined rules to obtain a resultant operator that characterizes the entire system. That is, the use of Lie algebraic methods provides an operator extension of the linear matrix methods to the general case. Additionally, their use, as is the case with characteristic functions, facilitates the treatment of symmetries in an optical system and the classification of aberrations. Finally, it is believed that the Lie algebraic approach will greatly simplify the calculation of high-order aberrations. Thus, it should be of great utility in the design of electron microscopes, microprobes, and other optical systems. The remaining sections of this paper are devoted to an exposition of Lie algebraic methods and examples of their use. Section I1 provides a brief summary of Lie algebraic tools. Section I11 treats simple examples. Both of these sections are meant to be introductory and presume no previous acquaintance with Lie algebraic methods. Subsequent sections are more technical in nature and presume some familiarity with Lie algebraic methods or a willingness to acquire familiarity. Section IV is devoted to a suitable formulation of the equations of motion, with particular application to the case of a solenoid. Section V summarizes various rules for Lie algebraic calculation. Sections VI and VII treat the solenoidal lens, first in paraxial approximation and then in second and third order. Sections VIII and IX discuss spherical aberration in solenoidal systems, analyze a simple sextupole
corrector, and describe how it is possible to remove both third-order and fourth-order “spherical type” aberrations from a spot-forming system. A final section outlines possible future developments, gives some numerical examples, and describes MICROLIE 5.0, a Lie algebraic fifth-order code intended for the design, evaluation, and operation of electron microscopes and other optical systems. One numerical example indicates that, by simultaneously removing both third- and fourth-order “spherical type” aberrations, it may be possible to build electron microscopes having subangstrom resolution.
II. LIE ALGEBRAIC TOOLS

The purpose of this section is to present a brief summary of the Lie algebraic tools and concepts required for our purpose. A more complete discussion may be found elsewhere (2).
A. Lie Operators

To begin, let f(r, p) be a specified function on phase space, and let g(r, p) be any function. Associated with each f is a Lie operator that acts on general functions g. The Lie operator associated with the function f will be denoted by the symbols :f: and is defined by the rule

:f: g = [f, g]   (1)

Here the bracket [ , ] denotes the familiar Poisson bracket operation of classical mechanics (3). In an analogous way, powers of :f: are defined by taking repeated Poisson brackets. For example, :f:^2 is defined by the relation

:f:^2 g = [f, [f, g]]   (2)

Finally, :f: to the zero power is defined to be the identity operator:

:f:^0 g = g   (3)
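As an illustration (not part of the original text), the Lie operator :f: of Eqs. (1)-(3) can be realized symbolically. The sketch below is a minimal one-degree-of-freedom version; the helper names poisson_bracket and lie_operator are ours, and sympy is assumed to be available.

```python
# A minimal sketch (illustrative, not the paper's code) of the Lie operator :f:
# acting on a function g of one phase-space pair (q, p), using the Poisson bracket
# [f, g] = df/dq dg/dp - df/dp dg/dq.
import sympy as sp

q, p = sp.symbols('q p')

def poisson_bracket(f, g):
    """Poisson bracket [f, g] for one degree of freedom."""
    return sp.diff(f, q) * sp.diff(g, p) - sp.diff(f, p) * sp.diff(g, q)

def lie_operator(f, g, power=1):
    """Apply :f:^power to g by taking repeated Poisson brackets, Eqs. (1)-(2)."""
    for _ in range(power):
        g = sp.simplify(poisson_bracket(f, g))
    return g

# Example: with f = -p**2/2 one finds :f: q = p and :f:^2 q = 0
# (compare the simple transit of Section III).
print(lie_operator(-p**2 / 2, q))        # -> p
print(lie_operator(-p**2 / 2, q, 2))     # -> 0
```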
B. Lie Transformations

Since powers of :f: have been defined, it is also possible to deal with power series in :f:. Of special importance is the power series exp(:f:). This object is called the Lie transformation associated with :f: or f. The Lie transformation is also an operator and is formally defined by the exponential series

exp(:f:) = Σ_{n=0}^{∞} :f:^n / n!   (4)

In particular, the action of exp(:f:) on any function g is given by the rule

exp(:f:) g = g + [f, g] + [f, [f, g]]/2! + ···   (5)
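A small sketch (ours, not from the paper) shows how the series (5) can be summed to a chosen order; for many of the f₂ used below the series terminates after a few terms. The helper names and the example polynomial are assumptions for illustration only.

```python
# A minimal sketch of the Lie transformation exp(:f:) g of Eq. (5), truncated at a
# finite order; one degree of freedom, sympy assumed available.
import sympy as sp

q, p, l = sp.symbols('q p l')

def poisson_bracket(f, g):
    return sp.diff(f, q) * sp.diff(g, p) - sp.diff(f, p) * sp.diff(g, q)

def lie_transform(f, g, order=6):
    """Return exp(:f:) g summed through :f:^order / order!."""
    result, term = g, g
    for n in range(1, order + 1):
        term = poisson_bracket(f, term)          # this is :f:^n g
        result += term / sp.factorial(n)
    return sp.expand(result)

# For f = -l*p**2/2 the series terminates: exp(:f:) q = q + l*p, which is the
# paraxial transit of Eq. (14b) below.
print(lie_transform(-l * p**2 / 2, q))   # -> l*p + q
```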
C. Transfer Map and Factorization Theorem

As indicated in the introduction, we wish to use the fact that the equations of motion are governed by some Hamiltonian H. Its exact form will be determined in later sections. Now consider all trajectories generated by H. Suppose we denote the collection of phase-space variables by the shorthand symbol w. Then, given a set of initial conditions w^i at P^i, we may follow the trajectory with these initial conditions to find the final conditions w^f at P^f. Evidently, this following of trajectories assigns to each w^i a unique w^f. Equivalently, we say that the Hamiltonian H gives rise to a transfer map M (in general nonlinear) with the property

w^f = M w^i   (6)

Suppose further that a set of phase-space coordinates can be found so that w = 0 is a trajectory. Then M maps the origin of phase space into itself. In this circumstance, there is a factorization theorem which shows that any map M arising from following Hamiltonian trajectories can be written as an infinite product of Lie transformations in the form

M = exp(:f₂:) exp(:f₃:) exp(:f₄:) ···   (7)

Here each function f_m is a homogeneous polynomial of degree m in the variables w^i. Evidently, a knowledge of M is equivalent to a knowledge of the trajectories generated by the Hamiltonian H. And, according to Eq. (7), a knowledge of M amounts to determining certain homogeneous polynomials f₂, f₃, f₄, etc.

When applied to charged-particle optics, the factorization theorem indicates that the effect of any collection of elements can be characterized by a set of homogeneous polynomials. It will be shown in Section III that the polynomial f₂ reproduces paraxial (linear matrix) optics and the higher-order polynomials, f₃, f₄, etc., describe departures from paraxial optics and are related to aberrations in the case of an imaging system. Thus, from a Lie algebraic perspective, the fundamental problem of charged-particle optics is to study which polynomials correspond to various optical elements, to study
which polynomials result from concatenating various optical elements, and to study which polynomials correspond to various desired optical properties.
III. SIMPLE APPLICATIONS

A. Preliminaries

In this section some simple Lie transformations will be studied, and a correspondence will be made between them and various idealized optical elements. A complete analysis of the general optical system requires the consideration of particles having a range of energies and passing through elements with no particular optical axis. However, for simplicity, our present discussion will be restricted to monoenergetic particles passing through systems having an optical axis along the z direction. Consequently, we shall only be concerned for the moment with coordinates transverse to the z axis. (See Fig. 1.)

Let p₀ be the momentum of some "design orbit" lying on the optical axis. Then it is convenient to introduce a transverse scaled momentum P defined by the relations

P_x = p_x/p₀,   P_y = p_y/p₀   (8)

We will also introduce a transverse coordinate vector Q defined by writing

Q_x = x,   Q_y = y   (9)

Finally, we will define a Poisson bracket [ , ] in such a way that the quantities Q_α, P_β satisfy the usual relations (4):

[Q_α, Q_β] = [P_α, P_β] = 0   (10a)

[Q_α, P_β] = δ_αβ   (10b)
Note that the quantities P and Q are zero along the design orbit. (The design orbit lies on the optical axis.) Consequently, they will be small for nearby orbits. Also, the quantities P are dimensionless and in lowest approximation are related to angles between the design orbit and the orbit in question.
B. Simple Transit

We are now ready to study some simple Lie transformations. To begin, consider the case in which

M = exp(:f₂:)   (11)

f₂ = -(l/2)(P^i)²   (12)

(Here, as before, the superscripts "i" and "f" denote initial and final, respectively.) The effects of :f₂: and its powers on the components of Q^i and P^i are easily worked out using definitions (1) and (10). We find the results

:f₂: P^i = -(l/2)[(P^i)², P^i] = 0   (13a)

:f₂: Q^i = -(l/2)[(P^i)², Q^i] = l P^i   (13b)

:f₂:² Q^i = l :f₂: P^i = 0, etc.   (13c)

Now substitute these results into Eq. (5). In view of Eqs. (13a) and (13c), the required infinite series terminate. Consequently, using a notation similar to Eq. (6), we find the relation

P^f = M P^i = exp(:f₂:) P^i = P^i   (14a)

Q^f = M Q^i = exp(:f₂:) Q^i = Q^i + l P^i   (14b)

Comparison of the far left-hand and right-hand sides of Eqs. (14a,b) shows that the relation described by M in this case is just that predicted by paraxial optics for transit by a distance l. Thus, the following correspondence has been established:

M = exp{-(l/2):(P^i)²:}  ↔  transit by distance l in the paraxial approximation   (15)
C. Simple Lens

Next consider the case in which M is again given by Eq. (11) with f₂ now having the form

f₂ = -(k/2)(Q^i)²   (16)

Again work out the required Poisson brackets to find the result

:f₂: Q^i = -(k/2)[(Q^i)², Q^i] = 0   (17a)

:f₂: P^i = -(k/2)[(Q^i)², P^i] = -kQ^i   (17b)

:f₂:² P^i = -k :f₂: Q^i = 0, etc.   (17c)

Proceeding as before, we find the net result

Q^f = M Q^i = exp(:f₂:) Q^i = Q^i   (18a)

P^f = M P^i = exp(:f₂:) P^i = P^i - kQ^i   (18b)

Comparison of the far left-hand and right-hand sides of Eqs. (18a,b) in this case shows that the relation described by M is that predicted by paraxial optics for passage through a thin lens of power k. Consequently, there is also the correspondence

M = exp{-(k/2):(Q^i)²:}  ↔  passage through a thin lens of power k in the paraxial approximation   (19)
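The thin-lens correspondence (19) is easy to confirm with the same symbolic machinery used above; the sketch below (ours, illustrative only, one transverse plane) again finds a terminating series.

```python
# A minimal sketch checking Eqs. (16)-(19): with f2 = -(k/2)*Q**2 the Lie series
# terminates, Q is unchanged, and P receives the thin-lens kick -k*Q.
import sympy as sp

Q, P, k = sp.symbols('Q P k')

def poisson_bracket(f, g):
    return sp.diff(f, Q) * sp.diff(g, P) - sp.diff(f, P) * sp.diff(g, Q)

def lie_transform(f, g, order=4):
    result, term = g, g
    for n in range(1, order + 1):
        term = poisson_bracket(f, term)
        result += term / sp.factorial(n)
    return sp.expand(result)

f2 = -(k / 2) * Q**2
print(lie_transform(f2, Q))   # -> Q          (Eq. 18a)
print(lie_transform(f2, P))   # -> -Q*k + P   (Eq. 18b)
```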
D. Axial Rotation

As a somewhat more complicated example, consider the Lie transformation associated with f₂ when f₂ has the form

f₂ = φ(Q^i × P^i)·ẑ = φ(Q^i_x P^i_y - Q^i_y P^i_x)   (20)

Then we find the Poisson bracket results

:f₂: Q^i_x = φ Q^i_y   (21a)

:f₂: Q^i_y = -φ Q^i_x   (21b)

:f₂: P^i_x = φ P^i_y   (21c)

:f₂: P^i_y = -φ P^i_x   (21d)

In this case, it is necessary to sum an infinite series. For example, for the effect of M on Q^i_x, we find from Eqs. (21a,b) the result

Q^f_x = M Q^i_x = exp(:f₂:) Q^i_x = (1 + :f₂: + :f₂:²/2! + :f₂:³/3! + ···) Q^i_x
      = (1 - φ²/2! + ···) Q^i_x + (φ - φ³/3! + ···) Q^i_y = Q^i_x cos φ + Q^i_y sin φ   (22a)

Similarly, we find for the remaining final quantities the results

Q^f_y = M Q^i_y = exp(:f₂:) Q^i_y = Q^i_y cos φ - Q^i_x sin φ   (22b)

P^f_x = M P^i_x = exp(:f₂:) P^i_x = P^i_x cos φ + P^i_y sin φ   (22c)

P^f_y = M P^i_y = exp(:f₂:) P^i_y = P^i_y cos φ - P^i_x sin φ   (22d)

Thus, we have established the correspondence

M = exp{φ:(Q^i × P^i)·ẑ:}  ↔  rotation by angle φ about the z axis   (23)
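Unlike the previous two examples, the rotation series does not terminate. The sketch below (ours, illustrative; two transverse degrees of freedom, sympy assumed) truncates the exponential series and recovers the partial sums of the cosine and sine series appearing in Eq. (22a).

```python
# A minimal sketch checking Eqs. (20)-(23): truncating exp(:f2:) for
# f2 = phi*(Qx*Py - Qy*Px) reproduces the cos/sin rotation series term by term.
import sympy as sp

Qx, Px, Qy, Py, phi = sp.symbols('Qx Px Qy Py phi')
pairs = [(Qx, Px), (Qy, Py)]

def poisson_bracket(f, g):
    return sum(sp.diff(f, q) * sp.diff(g, p) - sp.diff(f, p) * sp.diff(g, q)
               for q, p in pairs)

def lie_transform(f, g, order=8):
    result, term = g, g
    for n in range(1, order + 1):
        term = sp.expand(poisson_bracket(f, term))
        result += term / sp.factorial(n)
    return sp.expand(result)

f2 = phi * (Qx * Py - Qy * Px)
print(sp.collect(lie_transform(f2, Qx), [Qx, Qy]))
# -> Qx*(1 - phi**2/2 + phi**4/24 - ...) + Qy*(phi - phi**3/6 + ...),
#    i.e. the truncated series for Qx*cos(phi) + Qy*sin(phi), Eq. (22a).
```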
E. General Paraxial Optics

At this point it should be evident that all of paraxial optics can be represented by maps or products of maps of the form M = exp(:f₂:). That is, maps of this form always produce linear transformations and produce all of the linear transformations required for paraxial optics. As an exercise, the reader is invited to work out the effect of M when f₂ is given by the expression

f₂ = -θ(P^i · Q^i)   (24)

where θ is some parameter. She or he should find the result that

Q^f = e^θ Q^i,   P^f = e^{-θ} P^i   (25)
These are the equations for a system that is both imaging and telescopic.
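For readers who want to check the exercise, the sketch below (ours, illustrative; sympy assumed) compares the truncated Lie series for f₂ = -θ(P·Q) with the Taylor series of e^θ and e^{-θ} to the same order.

```python
# A minimal sketch checking Eqs. (24)-(25): f2 = -theta*(P.Q) scales Q by e^theta
# and P by e^(-theta); the truncated Lie series matches the exponential series.
import sympy as sp

Qx, Px, Qy, Py, theta = sp.symbols('Qx Px Qy Py theta')
pairs = [(Qx, Px), (Qy, Py)]

def poisson_bracket(f, g):
    return sum(sp.diff(f, q) * sp.diff(g, p) - sp.diff(f, p) * sp.diff(g, q)
               for q, p in pairs)

def lie_transform(f, g, order=10):
    result, term = g, g
    for n in range(1, order + 1):
        term = sp.expand(poisson_bracket(f, term))
        result += term / sp.factorial(n)
    return sp.expand(result)

f2 = -theta * (Px * Qx + Py * Qy)
exp_plus = sp.exp(theta).series(theta, 0, 11).removeO()
exp_minus = sp.exp(-theta).series(theta, 0, 11).removeO()
print(sp.simplify(lie_transform(f2, Qx) - Qx * exp_plus))    # -> 0
print(sp.simplify(lie_transform(f2, Px) - Px * exp_minus))   # -> 0
```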
F. Departures from Paraxial Optics

The next question to address is that of departures from paraxial optics. Suppose, as is often the case, that the optical system under consideration is axially symmetric (rotationally invariant) about the z axis. Then this symmetry requires that the various f_n be functions only of the variables P², Q², P·Q, and Q × P. It follows that all f_n with odd n must vanish, since it is impossible to construct an odd-order homogeneous polynomial by using only the variables P², Q², P·Q, and Q × P. Consequently, in any case having the assumed symmetry, the map M must be of the general form

M = exp(:f₂:) exp(:f₄:) exp(:f₆:) ···   (26)

Of course, in the general case of no particular symmetry, the odd-degree polynomials f₃, f₅, etc. can also occur, and all the polynomials f_n can, in principle, depend on the components of the vectors P and Q in an unrestricted fashion.
G. Third-Order Aberrations

The burden of the remainder of this section is to show that the polynomials f₄, f₆, etc., in the case of an imaging system, are related to third-order and fifth-order aberrations, etc., respectively (5). In particular, specific attention will be devoted to third-order aberrations. They are completely described by f₄. That is, attention will be restricted to maps of the form

M = exp(:f₂:) exp(:f₄:)   (27)

The higher-order cases can be treated in a similar fashion.
Suppose that the paraxial portion of a map, the portion described by the factor exp(:f₂:), has been selected to produce imaging. Also, let w = (Q, P) denote the location and direction of a ray at the image plane as predicted by the paraxial approximation. Then it can be shown that the actual ray four-vector, which will be denoted by w̄, is affected by the term exp(:f₄:) and is given by the expression

w̄ = exp(:f₄:) w = w + [f₄, w] + [f₄, [f₄, w]]/2! + ···   (28)

Next observe that the operation of Poisson bracketing involves multiplication and two differentiations. Thus the term [f₄, w] is a polynomial of order 3, and [f₄, [f₄, w]] is a polynomial of order 5, etc. Consequently, since only third-order effects are of interest at this point, attention will be devoted only to the third-order terms [f₄, w]. According to the previous discussion of symmetry, f₄, in the cases of present interest, can depend only on the variables P², Q², P·Q, and Q × P. Consequently, f₄ must be of the general form

f₄ = A(P²)² + BP²(P·Q) + CP²{(Q × P)·ẑ} + D(P·Q)² + E(P·Q){(Q × P)·ẑ} + FP²Q² + GQ²(P·Q) + HQ²{(Q × P)·ẑ} + I(Q²)²   (29)

Here the quantities A-I are arbitrary coefficients whose values depend on the particular optical system under consideration.

The significance for an imaging system of the various terms in f₄, as given by the expansion (29), can be evaluated by using Eq. (28). For example, suppose that an image is to be produced in the image plane and that the effect of the term A(P²)² on this image is to be studied. Then use of Eq. (28) through third order gives the result that

Q̄_α = Q_α + [A(P²)², Q_α]   (30)

Explicit evaluation of the Poisson bracket gives the result that

ΔQ_α = Q̄_α - Q_α = -A ∂(P²)²/∂P_α = -4AP²P_α   (31)

Evidently, the effect of the term A(P²)² is that the deviations in the actual arrival point of a ray at the image plane depend on its paraxial arrival direction P in the manner indicated in Eq. (31) and do not depend on its paraxial arrival point Q. Those familiar with aberration effects will recognize that this effect is exactly what is usually referred to as spherical aberration. Similarly, the other terms involving the coefficients B-H can be shown to describe the other well-known third-order aberrations (6). Finally, the last term I(Q²)² has no effect on the quality of an image, since [I(Q²)², Q_α] = 0. (It does, however, affect the arrival direction P of a ray at the image plane and therefore may be important if the optical system under study is to be used for some other purpose as part of a larger optical system.) Specifically, one finds in the imaging case the following one-to-one correspondence between the terms in the expansion (29) and the classical third-order monochromatic (monoenergetic) aberrations:

Term                        Aberration
A(P²)²                      Spherical aberration
BP²(P·Q)                    Coma
CP²{(Q × P)·ẑ}              Anisotropic coma
D(P·Q)²                     Astigmatism
E(P·Q){(Q × P)·ẑ}           Anisotropic astigmatism
FP²Q²                       Curvature of field
GQ²(P·Q)                    Distortion
HQ²{(Q × P)·ẑ}              Anisotropic distortion
Iv. LAGRANGIANS AND HAMILTONIANS A . Lagrangians
The preceding sections have illustrated that various properties of an optical system can be conveniently described in terms of Lie transformations and their associated polynomials. The purpose of this and subsequent sections is to study certain optical systems with the aim of using Lie algebraic methods to analyze their performance in detail. Our plan is to find a suitable Hamiltonian H for the equations of motion and then to compute the transfer map M and its associated polynomials in terms of H. We begin by finding a suitable formulation for the equations of motion.
The relativistic Lagrangian for the motion of a particle of rest mass m and charge q in an electromagnetic field is given by the expression

L(r, v, t) = -mc²(1 - v²/c²)^{1/2} - qψ(r, t) + qv · A(r, t)   (32)

Here ψ and A are the scalar and vector potentials defined in such a way that the electromagnetic fields E and B are given by the standard relations

E = -∇ψ - ∂A/∂t,   B = ∇ × A   (33)

The momentum p_i canonically conjugate to the generalized coordinate q_i is defined by the relation

p_i = ∂L/∂q̇_i   (34)

Specifically, in the case of Cartesian coordinates, one finds the result

p = mv/(1 - v²/c²)^{1/2} + qA   (35)
B. Hamiltonians
The Hamiltonian H associated with a Lagrangian L is defined by the Legendre transformation

H(q, p, t) = Σ_i p_i q̇_i - L(q, q̇, t)   (36)

Consequently, again in Cartesian coordinates, the Hamiltonian associated with the Lagrangian (32) is given by the expression

H = [m²c⁴ + c²(p - qA)²]^{1/2} + qψ   (37)
C. Change of Independent Variable
In the usual Hamiltonian formulation (as in the usual Lagrangian formulation) the time t plays the distinguished role of an independent variable, and all the q's and p's are dependent variables. That is, the canonical variables are viewed as functions q(t), p(t) of the independent variable t. For our purposes, it is more convenient to employ distance along some particular "design orbit" as the independent variable. Suppose, as assumed earlier, that the system to be studied has an optical axis, and that this axis lies along the z direction. Our desire is to find trajectories as functions of z in the form x(z), y(z), p_x(z), and p_y(z).
This can be done while remaining within the Hamiltonian framework. Introduce a quantity p_t by the rule

p_t = -H(q, p, t)   (38)

Now solve Eq. (38) for the quantity p_z to find a relation of the form

p_z = -K(t, x, y; p_t, p_x, p_y; z)   (39)

Specifically, in our case, K is given by the expression

K = -[(p_t + qψ)²/c² - m²c² - (p_x - qA_x)² - (p_y - qA_y)²]^{1/2} - qA_z   (40)

Then it can be shown that K is the desired Hamiltonian, and that one has the equations of motion (8)

t′ = ∂K/∂p_t   (41a)

p_t′ = -∂K/∂t   (41b)

x′ = ∂K/∂p_x   (42a)

p_x′ = -∂K/∂x,   etc.   (42b)

Here a prime denotes the differentiation d/dz. In Eq. (41a) the quantity t(z) is viewed as the time of flight, and, according to Eq. (41b), the quantity p_t(z) may be viewed as a momentum canonically conjugate to the time. Reference to the definition (38) shows that the quantity p_t is just the negative of the total energy. In the case that H is time independent, K will be also, and then Eq. (41b) shows that the energy is conserved as expected. The use of p_t as a canonical variable will enable us to treat both chromatic effects and time of flight in a natural way (9).
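The change of independent variable is easy to exercise numerically. The sketch below (ours, not from the paper; all parameter values are illustrative) integrates Eqs. (41)-(42) in z for a field-free region, where K reduces to -[p_t²/c² - m²c² - p_x² - p_y²]^{1/2} and the rays are straight lines.

```python
# A minimal numerical sketch of using z as the independent variable: for A = 0 and
# psi = 0 one has x' = dK/dp_x = p_x/p_z and p_x' = -dK/dx = 0, i.e. straight rays.
# numpy/scipy assumed; the numbers below are illustrative, not from the paper.
import numpy as np
from scipy.integrate import solve_ivp

c = 299792458.0          # m/s
m = 9.109e-31            # kg (electron mass, illustrative)
p0 = 1.0e-22             # kg m/s, design momentum (illustrative)
pt = -c * np.sqrt(m**2 * c**2 + p0**2)      # p_t = -(total energy), Eq. (38)

def rhs(z, y):
    x, px = y                               # one transverse plane suffices here
    pz = np.sqrt(pt**2 / c**2 - m**2 * c**2 - px**2)
    return [px / pz, 0.0]                   # x' = dK/dp_x, p_x' = -dK/dx = 0

sol = solve_ivp(rhs, (0.0, 0.1), [0.0, 1.0e-25], rtol=1e-10)
print(sol.y[0, -1], 0.1 * 1.0e-25 / p0)     # both approximately l * p_x / p_z
```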
D. Deviation Variables

For the purposes of this paper, we will restrict our attention to the time-independent case so that p_t is indeed constant along each trajectory. Furthermore, we will only examine the case of no electric field so that ψ may be assumed to be zero (10). Finally, we will assume that the vector potential vanishes on the optical (z) axis,

A(0, 0, z) = 0   (43)

Under these assumptions there is a trajectory of the form

x = 0,   y = 0   (44a,b)

p_x = 0,   p_y = 0   (45a,b)

p_z = p₀   (46)

This trajectory evidently lies along the optical axis. We will regard it as the "design trajectory," and will refer to p₀ as the design momentum. Let us now introduce new scaled variables (P, Q, P_T, and T) by the rules

P_x = p_x/p₀,   P_y = p_y/p₀   (47)

P_T = p_t/(p₀c)   (48)

Q_x = x,   Q_y = y   (49)

T = ct   (50)

As the reader can easily check, we may regard these new variables as canonical with equations of motion governed by a Hamiltonian K̄, provided K̄ is defined by the relation

K̄ = -[P_T² - m²c²/p₀² - (P_x - qA_x/p₀)² - (P_y - qA_y/p₀)²]^{1/2} - qA_z/p₀   (51)

According to Eqs. (44), (45), (47), and (49), the quantities P and Q vanish on the design trajectory. However, P_T and T do not. We find that on the design trajectory they obey the relations

P_T = P_T^0 = -[1 + m²c²/p₀²]^{1/2}   (52)

T′ = ∂K̄/∂P_T = -P_T^0[(P_T^0)² - m²c²/p₀²]^{-1/2} = 1/β₀   (53)

Here β₀ is the quantity v/c on the design trajectory. Its value is given by the relation

β₀ = [1 + m²c²/p₀²]^{-1/2}   (54)

As indicated earlier, in order to use the factorization theorem it is convenient to employ phase-space variables which have the property that they all vanish on the design trajectory. Evidently, this can be done by leaving Q and P as they stand. However, T and P_T must be replaced by new variables τ and P_τ defined by the relations

T = z/β₀ + τ   (55a)

P_T = P_T^0 + P_τ   (55b)
E . Transformed Hamiltonian The last task of this section is to determine the Hamiltonian which governs the evolution of the phase-space variables (z, Q, P,, P) on a general trajectory.
This will be done by finding a suitable transformation function that produces the desired transformation to these new variables and, by the usual rules, also produces a new Hamiltonian. Let (τ̄, Q̄, P̄_τ, P̄) be a collection of new variables, and consider the transformation function defined by the equation

F₂(T, Q; P̄_τ, P̄; z) = Q · P̄ + (T - z/β₀)(P_T^0 + P̄_τ)   (56)

where P_T^0 and β₀ are to be regarded as constant numbers (11). Then, by the standard procedure for obtaining a canonical transformation from a transformation function, we find the results

τ̄ = ∂F₂/∂P̄_τ = T - z/β₀   (57a)

P_T = ∂F₂/∂T = P_T^0 + P̄_τ   (57b)

Equations (55) and (57) agree, and consequently the desired transformation is indeed canonical. Further, one has the relations

P_x = ∂F₂/∂Q_x = P̄_x,   etc.   (58a)

Q̄_x = ∂F₂/∂P̄_x = Q_x,   etc.   (58b)

Thus, the variables Q and P are unchanged, and for convenience we shall drop the use of the bar. Finally, let us now use the symbol H to denote the new Hamiltonian for the new variables. Again, following the standard procedure, one has the relation

H = K̄ + ∂F₂/∂z   (59)

Upon evaluating Eq. (59) explicitly, we find the result

H = -[(P_τ + P_T^0)² - m²c²/p₀² - (P_x - qA_x/p₀)² - (P_y - qA_y/p₀)²]^{1/2} - qA_z/p₀ - (P_τ + P_T^0)/β₀   (60)

F. Application to Magnetic Solenoid

As an example, and for use in subsequent sections, let us apply these results to the case of a magnetic solenoid with axial symmetry. Then A is determined completely by B_z(0, 0, z), the longitudinal component of the magnetic field on the optic axis. One finds the relations (12, 13)

A_x = -yU(z, ρ²),   A_y = xU(z, ρ²),   A_z = 0   (61)
Here U is given by the expression

U = Σ_{n=0}^{∞} (-1)^n (ρ²)^n B_z^{[2n]} / {2^{2n+1} n! (n+1)!}   (62)

where B_z^{[2n]} and ρ² denote the quantities

B_z^{[2n]} = (d^{2n}/dz^{2n}) B_z(0, 0, z)   (63)

ρ² = x² + y²   (64)

For future use, let us now substitute into Eq. (60) the expression given by Eq. (61) for A and expand the resulting H in a power series in the variables (τ, Q, P_τ, P). We find a general relation of the form

H = Σ_{l=0}^{∞} H_l   (65)

where each quantity H_l is a homogeneous polynomial of degree l in the phase-space variables. Specifically, we find for the polynomials H₀ through H₄, which together govern the evolution of the phase-space variables through third order, the results

H₀ = (β₀γ₀)^{-2}   (66a)

H₁ = 0   (66b)

H₂ = P²/2 + α²Q²/2 - α(Q × P)·ẑ + P_τ²/(2β₀²γ₀²)   (66c)

H₃ = (P_τ/β₀)H₂   (66d)

H₄ = (1/8)(P²)² - (1/2)αP²{(Q × P)·ẑ} - (α²/2)(P·Q)² + (3/4)α²P²Q² + (α″/8 - α³/2)Q²{(Q × P)·ẑ} + (1/8)(α⁴ - αα″)(Q²)² - (1/4)(1 - 3/β₀²)P_τ²P² - (1/4)(1 - 3/β₀²)α²P_τ²Q² + (1/2)(1 - 3/β₀²)αP_τ²{(Q × P)·ẑ} - (1 - 5/β₀²)(8β₀²γ₀²)^{-1}P_τ⁴   (66e)

Here γ₀ is the usual relativistic factor given by the relation

γ₀ = (1 - β₀²)^{-1/2}   (67)

Also, the symbol α denotes the combination of factors

α(z) = qB_z(0, 0, z)/(2p₀)   (68)

The H_l of still higher degree can be found if needed. Note that H₁ vanishes. This is to be expected in general because, as a result of our construction, we have employed phase-space variables with the property that they all vanish on the design trajectory. If H₁ were not zero, the
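The quadratic polynomial (66c) can be checked directly by series expansion of Eq. (60). The sketch below (ours, illustrative only) keeps just the leading term of U in Eq. (61), uses arbitrary dimensionless values for m c/p₀ and qB/p₀, and verifies that the second-order part of H reproduces H₂; sympy is assumed.

```python
# A minimal sympy check that expanding the solenoid Hamiltonian of Eq. (60), with
# A_x = -y*B/2, A_y = x*B/2, A_z = 0 (leading term of Eq. (61)), through second
# order reproduces H2 of Eq. (66c). Parameter ratios are illustrative.
import sympy as sp

Qx, Qy, Px, Py, Pt, eps = sp.symbols('Qx Qy Px Py Pt epsilon')
mc, qB = sp.Rational(1, 2), sp.Rational(1, 10)   # m*c/p0 and q*B/p0, illustrative
PT0 = -sp.sqrt(1 + mc**2)                        # Eq. (52)
beta0 = 1 / sp.sqrt(1 + mc**2)                   # Eq. (54)
gamma0 = 1 / sp.sqrt(1 - beta0**2)               # Eq. (67)
alpha = qB / 2                                   # Eq. (68)

# Scale all phase-space variables by eps and expand H in powers of eps.
qAx, qAy = -eps * Qy * qB / 2, eps * Qx * qB / 2            # q*A_x/p0, q*A_y/p0
H = (-sp.sqrt((eps*Pt + PT0)**2 - mc**2 - (eps*Px - qAx)**2 - (eps*Py - qAy)**2)
     - (eps*Pt + PT0) / beta0)

H2 = sp.expand(H.series(eps, 0, 3).removeO()).coeff(eps, 2)
H2_expected = ((Px**2 + Py**2)/2 + alpha**2*(Qx**2 + Qy**2)/2
               - alpha*(Qx*Py - Qy*Px) + Pt**2/(2*beta0**2*gamma0**2))
print(sp.simplify(H2 - H2_expected))             # -> 0
```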
condition of all phase-space coordinates vanishing would not be a solution of the equations of motion generated by H. Observe also that H₀ has no effect on the equations of motion, since it is independent of all phase-space variables. Thus, the sum Eq. (65) could just as well be taken to begin at l = 2. According to Eq. (66d), H₃ is proportional to P_τ. Its effect is therefore entirely chromatic. Finally, we note that the various H_l depend only on axially symmetric quantities as expected.

G. Summary
Let us summarize what has been accomplished in this section. We started with the general Lagrangian for relativistic charged-particle motion. Then, for the time-independent and no-electric-field case, we found a new Hamiltonian for a convenient set of new phase-space variables that vanish on the design trajectory. Finally, for later use, we found and expanded this Hamiltonian for the case of a solenoidal magnetic field. In the next section we will see how the transfer map M can be computed in terms of H. In particular, we will see how the f_n of Eq. (7) can be computed in terms of the H_l.
V. LIE CALCULUS

A. Preliminaries
The purpose of this section is to summarize various rules for the computation of the transfer map M in terms of the Hamiltonian H and for the manipulation of Lie operators and Lie transformations. These rules will be used extensively in subsequent sections. Their derivation may be found elsewhere (2, 14).

For the computation of M in terms of H, it is easier to employ a factorization in an order opposite to that given in Eq. (7). That is, we write

M = ··· exp(:g₄:) exp(:g₃:) M₂   (69)

Here M₂ denotes the linear map exp(:f₂:). Once the polynomials g_n have been determined, one can pass from the factorization (69) to the factorization (7) using standard Lie algebraic manipulations to be described shortly. Just as the polynomials f_n in the factorization (7) described (when applied to an imaging system) aberrations in terms of final quantities (quantities in the image plane),
it can be shown that the g_n describe aberrations in terms of initial quantities in the object plane. Also, in order to have a compact notation, it is convenient to introduce a six-component vector ζ having the quantities Q, P, τ, and P_τ as its entries:

ζ = (Q_x, P_x, Q_y, P_y, τ, P_τ)   (70)
B. Calculation of Transfer Map
Now let M(z ← z^i) denote the map relating the phase-space coordinates ζ(z) at the general point z to their initial values ζ^i at the point z^i. We take M(z ← z^i) to have the factorization Eq. (69), where each polynomial g_n(ζ^i, z) is a homogeneous polynomial of degree n in the components of ζ^i with z-dependent coefficients, and we will denote the linear part of this map by M₂(z ← z^i). Then it can be shown that M₂(z ← z^i) obeys the differential equation

M₂′ = M₂ :-H₂(ζ^i, z):   (71)

That is, the linear (paraxial) part of M is determined by H₂, the quadratic part of H, as might be expected. If the Lie operators :H₂(ζ^i, z): for different values of z all commute, then Eq. (71) can be integrated immediately. However, in general the solution of Eq. (71) requires the numerical integration of a certain set of linear (but z-dependent) ordinary differential equations. Once M₂ has been determined, the polynomials g_n can be gotten by quadrature. For example, one has the formulas

g₃ = -∫_{z^i}^{z^f} dz H₃^{int}(ζ^i, z)   (72)

g₄ = -∫_{z^i}^{z^f} dz H₄^{int}(ζ^i, z) + (1/2)∫_{z^i}^{z^f} dz₂ ∫_{z^i}^{z₂} dz₁ [-H₃^{int}(ζ^i, z₂), -H₃^{int}(ζ^i, z₁)]   (73)

Explicit formulas for all the g_n through g₆ are available, and results for n still larger can be derived easily. Here the quantities H_n^{int} denote the result of transforming the H_n to the "interaction picture" with the aid of M₂. Specifically, the quantities H_n^{int} are defined by the equations

H_n^{int}(ζ^i, z) = H_n(M₂(z ← z^i)ζ^i, z)   (74)
C. Evaluation of Commutators
Reference has already been made to the commutator of two Lie operators in the context of solving Eq. (71). The appearance of commutators is a general feature of many Lie algebraic manipulations. It can be shown that the commutator of two Lie operators is again a Lie operator. Specifically, suppose that :f: and :g: are two Lie operators. Then, one finds for their commutator the result

:f: :g: - :g: :f: = :[f, g]:   (75)

Thus, the evaluation of commutators is equivalent to the computation of Poisson brackets. Equation (75) allows us to immediately infer a result about the degree of polynomials associated with various Lie operators and their commutators. Suppose g_m and g_n are homogeneous polynomials of degrees m and n, respectively. Then, since Poisson bracketing involves the operations of multiplication and two differentiations, we conclude that the degree of their Poisson bracket is given by the relation

degree[g_m, g_n] = m + n - 2   (76)
D. Combining Transfer Maps

The next topic to be discussed in this section is that of combining transfer maps. Suppose that M_f and M_g are two maps for two optical elements. For concreteness, we assume that both are represented in the factored product form Eq. (7),

M_f = exp(:f₂:) exp(:f₃:) exp(:f₄:) ···   (77a)

M_g = exp(:g₂:) exp(:g₃:) exp(:g₄:) ···   (77b)

[We could equally well assume a reverse factorization of the form of Eq. (69), or even a mixture of factorizations.] Then, if we combine these two optical elements to make a compound optical system, we will want to compute the product M_f M_g. Call this product M_h,

M_h = M_f M_g   (78)

According to the factorization theorem, M_h will also have a representation of the form

M_h = exp(:h₂:) exp(:h₃:) exp(:h₄:) ···   (79)
Consequently, our problem is to compute the various polynomials h_n given the polynomials f_n and g_n. We proceed by substituting the representations (77a,b) into Eq. (78) to obtain a relation of the form

M_h = exp(:f₂:) exp(:f₃:) exp(:f₄:) ··· × exp(:g₂:) exp(:g₃:) exp(:g₄:) ···   (80)

It can be verified from the properties of the exponential series that

exp(:g₂:) exp(-:g₂:) = I   (81)

where I denotes the identity map. Upon inserting I as given by Eq. (81) between various factors in Eq. (80), we obtain the equivalent relation

M_h = exp(:f₂:) exp(:g₂:) exp(-:g₂:) exp(:f₃:) exp(:g₂:) exp(-:g₂:) exp(:f₄:) exp(:g₂:) exp(-:g₂:) ··· exp(:g₂:) exp(-:g₂:) exp(:g₂:) exp(:g₃:) exp(:g₄:) ···   (82)

The first two factors in Eq. (82), namely, exp(:f₂:) and exp(:g₂:), correspond to linear maps. Consequently their product is also a linear map, and we conclude upon comparing Eqs. (78), (79), and (82) that we must have the relation

exp(:h₂:) = exp(:f₂:) exp(:g₂:)   (83)

Next, from properties of Lie operators and the exponential function, it can be shown that one has the relation

exp(-:g₂:) exp(:f_n:) exp(:g₂:) = exp(:f_n^tr:)   (84)

where f_n^tr is a "transformed" homogeneous polynomial of degree n given by the formula

f_n^tr(ζ^i) = f_n[exp(-:g₂:)ζ^i]   (85)

With this simplification, the expression (82) for M_h can be written in the more compact form

M_h = exp(:f₂:) exp(:g₂:) exp(:f₃^tr:) exp(:f₄^tr:) ··· × exp(:g₃:) exp(:g₄:) ···   (86)

Also, in view of Eqs. (79) and (83), we have the relation

exp(:h₃:) exp(:h₄:) ··· = exp(:f₃^tr:) exp(:f₄^tr:) ··· × exp(:g₃:) exp(:g₄:) ···   (87)
Long ago the mathematicians Baker, Campbell, and Hausdorff made the remarkable discovery that the product of two exponentials of noncommuting operators can again be written as the exponential of some operator, and moreover this operator consists of the sum of the original two operators plus terms formed solely out of their commutators and multiple commutators (2). To see how this result applies to Lie transformations, suppose :f: and :g: are two Lie operators, and let {:f:, :g:} denote their commutator,

{:f:, :g:} = :f: :g: - :g: :f:   (88)

Then the Baker-Campbell-Hausdorff formula states that one has the result

exp(:f:) exp(:g:) = exp[:f: + :g: + {:f:, :g:}/2 + {:f:, {:f:, :g:}}/12 + {:g:, {:g:, :f:}}/12 + ···]   (89)

The series appearing in the exponent on the right-hand side of Eq. (89) is called the Baker-Campbell-Hausdorff series. In general it has an infinite number of terms. However, each of these terms consists entirely of commutators, or multiple commutators. The Baker-Campbell-Hausdorff formula, when combined with Eq. (75), in principle makes it possible to manipulate exponents at will. For example, one can transform between the factorizations (7) and (69). Moreover, cases which involve Lie operators corresponding to polynomials of degree 3 or higher are particularly simple because then, in view of Eq. (76), only a finite number of terms need be retained in the Baker-Campbell-Hausdorff series if one is only interested in obtaining results up to some given degree. For example, the Poisson bracket of two third-degree polynomials is a polynomial of degree four; double Poisson brackets involving third-degree polynomials are of degree five, etc.

Let us use the Baker-Campbell-Hausdorff formula to manipulate the exponents appearing in the right-hand side of Eq. (87). We find the result

exp(:f₃^tr:) exp(:f₄^tr:) ··· × exp(:g₃:) exp(:g₄:) ··· = exp(:f₃^tr: + :g₃:) exp(:f₄^tr: + :g₄: + {:f₃^tr:, :g₃:}/2) ···   (90)

Now compare this result with the left-hand side of Eq. (87). Evidently, we must have the formulas

h₃ = f₃^tr + g₃   (91a)

h₄ = f₄^tr + g₄ + [f₃^tr, g₃]/2   (91b)

Similar formulas can be derived for the h_n of still higher degree. Note that in obtaining Eq. (91b), use was made of the relations (75) and (88).
ALEX J. DRAGT AND ETIENNE FOREST
E . Properties of Lie Transformations
We close this section by stating three properties concerning the action on functions of general maps & which are Lie transformations or products of Lie transformations. Suppose g is any function on phase space. Then it can be shown that J@g(C') = g(ACi)
(92)
Moreover, as a corollary to Eq. (92), suppose g and h are any two functions on phase space. Then we have the relation A{g(CiMi)1= {JwCi))
vwi)l
= g(&[i)h(A[i)
(93)
Finally, using the notation above, Eqs. (84) and (85) have the generalization &exp(:g:)A-'
= exp(:g":)
(944
where gfr(Ci)= &g(C') = g(&Ci)
(94b)
VI. PARAXIAL MAPFOR SOLENOIDAL LENS
The paraxial approximation consists of omitting all terms in the Hamiltonian beyond H,. The transfer map A in this case will consist only of A2. The purpose of this section is to work out some of the properties of A2for a solenoidal lens. A. Decomposition into Factors
We have found it useful to decompose H 2 as given by Eq. (66c) into three terms: H2
= Ho - uJ,
+ H,
where
H, = P2/2 + a 2 Q 2 / 2 J, = (Q x P) 6 , 2 2 2 2 Hr = Pr/( P a ~ a )
(95)
The term Ho is the major term governing optical behavior. It has identical effects on the dynamical pairs Q,,P, and Qy,Py.According to Eq. (23), the term J, simply generates rotations about the optical axis. Finally, the term H, only affects the time of flight. The reason for making such a decomposition is that the Lie operators :Ho:, :a&, and :HT: all mutually commute. It follows that d2can be written as a product of commuting factors in the form %&z = % & R d 7 d 0
(97)
The rotational and time of Pight factors, dR and .at,,are generated by :aJ,: and is generated by :Ho:. :Hr:,respectively. Similarly, the "optical" factor B. Computation of Time of Flight and Rotational Factors
The determination of 4,is straightforward since H , is independent of z. The time of flight factor relating initial conditions at z' to final conditions at z f is given simply by
4 = exp(: -
JI
HTd z
= exp{ - ( z f -
:)
Z')(~/?,Z~,Z)-~:(P~)~:}
(98)
The determination of the rotational factor dR is not much more difficult since the Lie operators :a(z)J,: commute for different values of z. We find the relation
AR= exp(: = exp{(
:J
a(z)J, dz
Jr
:)
a(z)d z ) : ( Q i
(99)
C . Computation of Optical Factor
The computation of the optical factor dorequires considerably more work because the Lie operators :Ho(P,Q ;z): do not commute for different values of z. Briefly, we proceed as follows. According to the general theory of the last section, as described by Eq. (71), doobeys the equation of motion A&= A0:- Ho(P',Q';z):
(100)
Since we know that doproduces a linear transformation and that this transformation is the same for the pairs Q,, P, and Q y , P,, it follows that we
must have a relation of the form
From the equation of motion (100) it can be shown that the functions a(z) through d(z) obey the differential equations a' = c,
c' = -a2(z)a
(102a,b)
b'
d' = -a2(z)b
(103a,b)
= d,
Finally, the requirement Ao(z'
4-
z') = 9
(104)
gives the initial conditions
a(z') = 1,
c(z') = 0
(105a,b)
b(z') = 0,
d(z') = 1
(106a,b)
Except in special cases, the solution of Eqs. (102) and (103) with the initial conditions (105) and (106) requires numerical integration. If desired, A. can be written in exponential form once this solution is found. However, there is no particular need to do so since we already have the explicit representation (101) for the action of doon Q,P space, and it is evident that doacts as the identity map on the variables z, 4.
VII. SECONDAND THIRD-ORDER EFFECTS FOR SOLENOIDAL LENS A. Computation of Interaction Picture Hamiltonian
The purpose of this section is to compute, for the solenoidal lens, the functions g 3 and g4 which describe second- and third-order effects. According to Eqs. (72) and (73), we must first find the quantities H Y t and Zft'as given by Eq. (74)using the A2of Eq. (97). But, as noted earlier, for the case of an axially symmetric solenoid the various H, are invariant under rotations about the z axis. Also, they are evidently independent of z. Thus, in this case the AKand factors in A2have no effect, and we may equally well write H?([', z ) = H"[A0(Z 4- z i p , z ]
(107)
89
LIE ALGEBRAIC THEORY OF OPTICS
Finally, because of Eq. (92), we may also write
HP([i, z) = Ao(z 4- Zi)H"([i, z)
(108)
Since the H,are composed of rotationally invariant functions, and in view of Eqs. (107),(log), and (93),our next task is to compute the effect of A. on each of the rotational invariants. We find, with the aid of Eq. (101), the results d0(z
+ (b'),(P')'
+ z')(P')' = (a'),(Q')'
+ 2a'b'(Qi- Pi) Ao(z zi)(Qi)' = a2(Qi)' + b2(Pi)' + 2ab(Qi Pi) + z')(Q' Pi) = UU'(Q')'+ bb'(P')' + (ab' + a'b)(Q'- Pi)
(109a)
t
*
(l09b)
~ O ( Z
A ~ (+zz')(Qi x Pi)* 6, = (Q' x Pi) * 6,
( 109c) ( 109d)
B. Computation of g3
We will now use the results obtained s6 far to compute g3.In view of the relations (108)and (93), and the fact that P, is left unchanged by A,,,we find by using Eq. (66d), the result:
H y
= (p,/Po)H,[A0(z + zi)[i, z]
(110)
Next, making use of the explicit form for H , as given by Eq. (66c),and using the results [Eq. (10911, we find the relation
H2[A0(zt z')[~,z]= (1/2)[(b'), + ~r'b~](P')~
+ (1/2)[(~')~ + a2a2](Q')' + (a'b' + a2ab)(Qi Pi) *
- a(Q' x Pi) * 6,
+ (P:)2/(2$/9t)
(1 11)
Finally, upon substituting Eq. (1 10)into Eq. (72) and making use of Eq. (11 l), we obtain the final result:
where
cg
=
-
J
dz = (zi - zf) zI
Note that all terms in g 3 are proportional to P i . Therefore, the effect of g 3 on Qf and P‘ is entirely chromatic, and vanishes when Pf = 0. Of course, g3 also gives second-order corrections to the time of flight rf, and these are generally nonzero even when Pi = 0. C. Computation of g 4
We now turn to the somewhat more difficult task of computing g 4 . We begin by observing that H4 can be decomposed into a sum of chromatic and purely geometric (nonchromatic) terms,
H4 = H$
+ H:
(114)
where H$ = -(1/4)(1 - 3/P:)PfP2 - (1/4)(1 - 3/Pf)a’P;Q2
+ (1/2)(1
P) * &>- (1/8)(Po~o)-2(1- 5/P:)P; (1 15) H f = (1/8)(P2)2- (1/2)aP2{(Qx P) * CZ} - (d2/2)(Q P)’ - 3/P:)uP?{(Q x
-
+ (3/4)a’P’Q2 + (1/8)(a”- a3/2)QZ{(Qx P ) . Cz} + (1/8)(a4- aa”)(Q2)2
(1 16) We next observe that g4 as given by Eq. (73)is a sum of two terms, and that the second term, which involves a Poisson bracket, is purely chromatic since H 3 is purely chromatic. It follows that g4 can also be written as a sum of chromatic and geometric terms, 94 =
sE + g f
(1 17)
Jzr
Specifically, we find the relations g:=
-
+
dz (H:)'"'
:1 1:'
gf= -
dz,
dz, [ - HF1(c,zZ), -Hy(ci,zl)]
dz(H,G)'"'
(1 18) (119)
In this paper we shall content ourselves with the computation of g: since it is often the quantity of greater interest. The computation of g$ will be presented elsewhere (IS). With the tools at hand, the calculation of gf poses no new problems beyond those of algebraic complexity. We make use of Eqs. (116),(107),(108), (93), and (109), and then substitute the results into Eq. (119) to obtain an expression of the form
+
g,G = A*{(P')2}2 B*(P')2(Pi.Q')
+ C*(Pi)2{(Qix Pi) C z } + D*(P' Q')' + E*(P' Qi){(Q'x Pi) C z } + F*(P')2(Qi)2 + G*(Q')'(P' - Q') + H*(Qi)2((Qix P i ) .2,) *
*
*
+
I*{(Qi)2}2 (120) Eq. (120) has the same form as Eq. (29), as must be the case because Note that of axial symmetry. We find for the coefficients A*-I* the integral expressions
iJzl Zf
A*
=
[aa"b4 - {a2b2+ (b')2}2]d z
(121a) (121b) (121c)
:Iz, Zf
D* = E*
[aa"a2b2+ a2 - (a'b'
= j;[a(a'b'
+ a2ub) -
+ ~t'ab)~]dz
(121d) (121e)
F* = tJI:"aa"u2b2 - 2a2 - {a2u2+ (a')'}{a2b2+ ( ! ~ ' ) ~ } ] d(121f) z
1
G*
=d
[aa”a3b - {a’a’
+ (a’)’}{a2ab+ a’b’}]dz
21
H* = j ~ [ { ~ } { a ’ a+ ’ (a‘)’} - {;}a2]dz
(121h) (121i)
VIII. SPHERICAL ABERRATION FOR A SIMPLE IMAGINGSYSTEM Consider the simple imaging system shown in Fig. 2. Let us refer to transit in a field-free region as a “drift.” Then the imaging system consists of a drift followed by a solenoidal lens followed by a second drift. In this section we will illustrate how to compute some of the optical properties of this system using the results obtained so far. A. Transfer Map for Solenoid
Suppose two points z1 and z2 are selected so that the magnetic field of the solenoid is essentially zero outside the interval z1 Iz Iz2. Let Asdenote the transfer map for the solenoid alone. That is, the map relates final conditions in the plane z = z2 to initial conditions in the plane z = zl. Suppose we also ignore chromatic effects. This means that we assume P, = 0, and are uninterested in the time of flight. Then, with these assumptions, we may write As= ... exp(:&:)AOAE
( 122)
I
I
/
I
I
FIG.2. A simple imaging system consisting of a solenoid preceded and followed by drifts. The action of the solenoid is depicted schematically as that of a thin focusing lens.
Here the quantities g4, &lo, and ARare to be computed as described in the previous section using zi = z1 and zf = z2. Note that, for a given magnetic field strength, the differential equations (102a,b) and (103a,b),and the integrals (95) and (121) need only be solved and evaluated once and for all. B. Transfer Map for Drifts
Next, we need to find maps for the two drifts. We have already seen that, in paraxial approximation and neglecting chromatic effects and time of flight, the map for a drift of length 1 is given by Eq. (15). Let A&)denote the map for a drift. Then, again neglecting chromatic effects and time of flight, it can easily be shown that &lD(l) is given beyond paraxial approximation by the expression
&lD(l) = *-.e~p{(-1/8):[(P')~]~:} ex~{(-1/2):(P')~:}
(123)
C. Total Transfer Map
With this background, we are ready to compute A, the map for the full optical system. Evidently, it is given by the product = d#D(ll)ASALd12)
( 124)
Our task now, in order to compute aberrations, is to rewrite Eq. (124) in the form Eq. (7). In principle, the desired transformation of Eq. (124) can be carried out using the calculus of Section V. However, for our purposes it is much easier to argue as follows. Suppose the whole region from the object plane z = z1 - lI to the image plane z = z2 + l2 were to be viewed as part of the solenoid. This is certainly possible. We only need be sure that the quantity a(z) as given by Eq. (68) is taken to be zero outside the region z1 5 z 5 z2. If we do this, then A can be written in the form
-
A = . . exp(:gf:)kokR
(125)
Here we have introduced the notation to indicate that the various quantities g4, do, and &lRshould be computed as before except we must use zi = z1 - l1 and zf = z2 + 12. Our task now is simply to bring Eq. (125) to the form of Eq. (7). This is easily done. We first rewrite Eq. (125) in the form ''-I'
A = &,2R{k;'.i,1 ...exp(:gf:)i0iR)
(126)
However, using Eq. (94),the term between braces in Eq. (126) can be rewritten
in the form (26)
A-;'A--l ...exp(:g"y:)ioJR= exp(:f,":). .
(127)
with f,"given by the equation f?(r;i)
=
"&'i<'S"y(r;')
Moreover, since 5," is rotationally symmetric, the factor Consequently, we may also write f,"(Ci)
(128)
= {f(&-;'&yi)
= g,"(i;yi)
2;'
has no effect. (129)
In summary, we have obtained the factorization A
=
i0iR exp(:f 2:)
* *
with f," given by Eq. (129). D. Spherical Aberration
Suppose we wish to determine the spherical aberration of our simple imaging system. To do this, we need to find the coefficient of [(Pi)2]2in f,". Sinceour system is assumed to be imaging, the action of io must be given by a relation of the form
where m is the magnification of the system and c is some constant. It follows from Eq. (131) that we also have the relation
Consequently, the relation (129) can be written more explicitly in the form ff(Q',P') = {,"(m-'Q',mP' - cQ')
(133)
Now look at the form of g"," as given by Eq. (120). Let A denote the coefficient of [(Pi)'I2 in f,".Then it is clear from inspection of Eqs. (133) and (120) that we have the simple relation A = m4J*
(134)
Here we have used the symbol A* to denote the integral (t21a) evaluated between zi = z , - I, and zf = z2 + 1 2 . To see that all this is consistent, suppose we use both Eqs. (125)and (130) to compute Q' for the initial conditions Q' = 0 and Pi variable. In both cases we
95
LIE ALGEBRAIC THEORY OF OPTICS
will ignore the factor .,$fR since this merely produces a rotation. Employing Eq. (125), we find the result
(135)
QL = exp(:gf:)doQ; But, from Eq. (131) we have the relation
d o Q L= mQt
(136)
Consequently, we find the result Qi = m exp(:$z:)Qi
+ :g,“: + **.}Q: = m{Q; + J*[{(P‘)2}2,Qi] +
= m{Y
= mQi - 4mJ*(Pi)’Pi
. . a }
+
(137)
Moreover, according to Eq. (136), the quantity (mQ’) is Q‘ in the paraxial approximation. Thus, we may rewrite Eq. (137) in the final form
Qiclual = Qfparaxia, - 4mJ*(Pi)’Pi
+
(138)
Note that this formula expresses aberrations in terms of quantities in the object (initial) plane. A common procedure in the literature is to express spherical aberration in the form ( 1 7) Qictual = Qfparaxial + mCdP’)’P’ + * * (1 39) and the quantity C, is then called the spherical aberration of the system. Comparison of Eqs. (138) and (1 39) gives the relation
c, = - 4 J *
(1 40)
By contrast, using the factorization (130) gives the result
Q i = Joexp(:fz:)Qt
+
+ ***}Qb
= .,$fo{.9 :ff:
+ A[((P’)’)’,Q~] + ..*} = J,{Qh- 4A(P’)’P: + -..} = mQi - 44P‘)’P; + . . . =
.,$fo{QL
(141)
Here, we have used (92) and (93). Consequently, we also have the relation QLual
= Qfparaxial
- 4A(P‘)’pf + . * .
(142)
As advertised, this formula expresses aberrations in terms of quantities in the image (final) plane.
Moreover, we also have from Eq. (131) the paraxial relation p'
= m-'P'
+ cQ'
( 143)
If we insert this result into Eq. (142) for the case Qi = 0, we find the result f 3 pi 2pi (144) Qactual = Qfparaxial - 4A(m)- ( 1 +.** Evidently, the results (1 38) and (144) agree, thanks to Eq. (134). E . Scherzer's Theorem
At present, spherical aberration is a major factor affecting the optical performance of electron microscopes. We see from Eqs. (30) and (31), for example, that spherical aberration makes it impossible to form a point image of a point object. It is therefore of interest to examine the expression (121a)for A * in more detail. As it stands, Eq. (121a) is applicable to solenoids in the general case. However, in the special case of an imaging system composed only of solenoids and drifts, the integral for A* can be rearranged to show that A* is always negative, and, correspondingly, the spherical aberration is always positive. Indeed, as shown in the appendix, one finds, after several partial integrations, the result A* = - ij:[2a4b4
+ b2(ba' + b'a)' + a2b2(b')Z]dz
(145)
Observe that all terms in the integrand are positive. Therefore, in view of the overall minus sign, the total A* is always negative for an imaging system. This fundamental result is due to Scherzer. It shows the impossibility of making an imaging system, using only solenoids and drifts, that is free of spherical aberration (18-20).
IX. CORRECTION OF SPHERICAL ABERRATION A. Simple Crewe-Kopf System
As has been seen in the last section, an imaging optical system composed entirely of solenoids and drifts can never be free of spherical aberration. The purpose of this section is to present a Lie algebraic analysis of a simple imaging system devised by Crewe and Kopf in which a magnetic sextupole is used to remove spherical aberration (21). Our aim is both to present an
alternate derivation of their result and to illustrate the power of Lie algebraic methods. The system of Crewe and Kopf is illustrated in Fig. 3. It consists of two solenoids, various drifts, and a sextupole. The solenoids and drifts are arranged in such a way that, in the paraxial approximation, a crossover (focus) occurs in the center of the sextupole. We will see that for a suitable choice of sextupole strength, this total system has no third-order spherical aberration. B. Transfer Map for Sextupole
A sextupole magnetic field may be derived from a vector potential A of the form A, = O A,, = O
A,
= s(x3 - 3 x y 2 )
Here s is some constant, assumed to be independent of z, which measures the sextupole strength (22). Let Y denote the transfer map for a sextupole of length 1. This map relates final conditions at the back face of the sextupole to initial conditions at the front. Then, one finds by means of the formulas of Sections IV and V that 9’ has the factored product representation (23)
Y = 9(1)exp(:h,:)exp(:h,:)...
(147)
Here 9(l)is the map for a drift of length 1 in the paraxial approximation and is
FIG.3. A sextupole-correctedimaging system consisting of two solenoids, various drifts, and a sextupole. Rays leaving the object plane are brought to a crossover at the center of the sextupole and then to a final focus in the image plane.
given, according to Eq. (1 5), by the formula
Q(1) = exp{ -(l/2):(Pi)’:} The polynomials h3 and h4 are given by the expressions (24) h3
(Q,P) = 4Q: - 3QXQ:) - ~~l’(3/2)(Q?, - Q:Px - 2Q,QYPJ + d3(QXP:- QxP: - 2Q,PxPy)- al4(1/4)(P; - 3P’P:)
Q [(3/56)0217 ~)~ - 1/8](P2)2 h4(Q,P) = 0 ~ 1 ~ ( 3 / 4 ) ( + - a2j4(3/2)(Q P)Q2
+ 0~1~(3/40)Q’P~
+ a~15(21/20)(~. p)2
-
Here the constant
LT
.
0 2 i 6 ( 3 / 8 ) ~ 2 ( ~P)
(149)
(150)
is related to the sextupole strength by the equation CT
= 9S/P,
(151)
C. Total Transfer Map
Let Asand .rrj be the transfer maps for the two solenoids and, as in Section VIIZ, let AD(L) be the map for a drift of length 1.Then the complete transfer map for the system of Fig. 3 is given by the product = ~D(rl)ASAD(12)9AD(1,)J1TSAD(14)
(152)
Our task is to determine the optical behavior of dh? by making appropriate Lie algebraic manipulations. In so doing, we will neglect a11 Lie operators corresponding to polynomials of degree 5 and higher. That is, we will ignore all aberrations of fourth and higher orders. Suppose we begin by inserting on both sides of 9an identity map N of the form 9 = AD(1/2)46’(1/2) = AL1(1/2)AD(1/2)
(153)
Then the expression for A takes the form A
=~
D(li)~~.ltO(~2)~D(~/2)~Di(~/2)~~~l(1/2)
AD(1/2)dD(13
)NSAD(14)
(154)
Now we use the relation
JG@~W’D(~) = AD(^ + 4)
(155) which evidently must hold for any drift. Applying this relation to Eq. (154),we
99
LIE ALGEBRAIC THEORY OF OPTICS
find the result
+
+
Jll = AD(~1)JllSdD(l~~ / 2 ) d ; l ( 1 / 2 ) ~ ~ l l 0 ~ ( 1 / 2 )l/2)NSdD(l4) d~(l~ (156)
Finally, let us define maps .& and 2s be the equations I
Jlls = J W i ) & 4 ( 4
+ 1/21
“& = d d 1 3 + l/2)4dD(14)
(157a) (157b)
Then the expression for d takes the more compact form
d =~sJl101(1/2)~Jll~1(1/2)~s
(158)
Moreover, as in Section VIII, we recognize the maps Asand Jsas being the complete transfer maps associated with the solenoids and their respective leading and trailing drifts.
D. Dejnition of Trouble Map
To continue our computation, supgose we employ the representations (130) and (125) for the maps is and Ns,respectively. Then Eq. (158) can be rewritten in the form Jll = Jo2uF202u
( 159)
where F is the “trouble” map
5 = e~p(:li:).k;~(1/2)9k&~(1/2)exp(:g$) (160) Here fi is the aberration polynomial for the first complete solenoidal imaging system on the left of Fig. 3 when j S is factored in the form (130), and g: is the aberration polynamial for the second complete solenoidal imaging system when JSis factored in the form (125). Looking at Eq. (159), we see that all aberrations in Jll have been lumped into the factor F, and all remaining factors are simply paraxial linear maps. Consequently, our task has been reduced to the problem analyzing X
E. Analysis of Trouble Map Examination of Eqs. (123) and (148) shows that the factor Mi1(l/2) appearing in 5 can be written in the form A0’(1/2)
= &D(-1/2)
= g(- 1/2)e~p{(1/16):[(P‘)~]~:) (161)
Consequently, employing Eqs. (147) and (161), we find for the inner factors in 9- the result A; 1(1/2)9A;1(1/2) = A, '(1/2)9(1)exp(:h,:) exp(:h,:)~; l(1/2) = 9(1/2) e~p{(l/16):[(P')~]~:} exp(:h,:)exp(:h,:)
x exp ((I/ 16):[(Pi)2] :)9( -l/2) = 9(1/2) exp(:h,:) exp(:h4:)g(- 1/21
Here the polynomial
( 162)
h4 is given by the relation h4 = h4 + (1/8)[(Pi)2]2
(163)
Note that in computing Eq. (162)it has been necessary to use the commutation properties of various Lie operators. In particular, it has been necessary to commute Lie operators corresponding to polynomials of degrees 3 and 4. However, according to Eqs. (75) and (76), such commutation produces polynomials of degree 5 and higher, corresponding to aberrations of order 4 and higher. As indicated earlier, such aberrations are beyond the scope of the present calculation. Finally, it should be observed that the term (l/8)[(Pi)2] appearing in Eq. (163)just cancels a corresponding term of opposite sign in h,. (See Eq. ( 1 50).) The expression appearing on the right-hand side of Eq. (162) can be simplified further using Eq. (74). We find the result 9(1/2) exp(:h,:) exp(:h,:)~(- 1/21 = 9((1/2)exp(:h3:)9(-
1/2)g(1/2)exp(:~~:)9(-1/2)
= exp(:h:':)exp(:&:)
(164)
where
Finally, upon combining the results obtained so far, we see that Y has the form
9- = exp(:f::) exp(:h:':) exp(:hz:) exp(:g::) = exp(:h:':)exp(:Pi
+ f: + g:)
(167) Again, in arriving at Eq. (167),it has been necessary to commute Lie operators corresponding to polynomials of degrees 3 and 4, and the resulting polynomials of degree 5 and higher have been ignored.
All that remains to be done is the computation of h: and &. We begin with h‘;. According to Eqs. (92) and (14),we have the general result 9(n)h3(Q,
P) = h3(Q
+ AP, P)
(168)
If this operation is carried out for h3 as given by Eq. (149),we find the relation h3(Q+ IzP,P) = -(o/4)[(1-
- A4](Pl
- 3P’PS)
+ terms proportional to Q
( 169)
Observe that in the special case A = 1/2, which is the case of interest, one has
(1 - 4 4 - L4 = 0
(170)
As a result, h’; has no terms which depend only on P. In fact, we find the result (171) h‘; = ol(Ql- 3Q,QS) + (al3/4)(Q,P? - QxPyZ - 2Q,P,P,) The computation of @ proceeds in similar fashion. We find the relation
@%Q,P) = g(l/2)&(Q, P) = k4{Q
+ (1/2)P,P}
(172) Explicit evaluation of the right-hand side of Eq. (172) using Eqs. (163) and (150) gives the result
i 2 = 30217[(1/56)- (1/64)](P2)’ - (3~’l5/1O)Q2P2
+ (30~1~/lO)(Q - P)’ + (30’1~/4)(Q’)~
(173)
F. Cancellation of Spherical Aberration Observe that the coefficient of (P2)2in Eq. (173)is positive.This means that for a suitable choice of the quantity o’l’, we can arrange to have complete cancellation among the coefficients of in the quantity (6; + fi + g:), and consequently 9-can be made to be entirely free of spherical aberration. Indeed, the condition for this cancellation is simply 3o2I7[(1/56)- (1/64)] + Al
+ A”:
=0
(174a)
where A, and A”: are the coefficients of [(Pi)’I2 in f: and g:, respectively. Equivalently, using Eq. (134),we may rewrite Eq. (174a) in the form 3o2l7[(1/56) - (1/64)] + m4A”:
+ A”;
=0
(1 74b)
Here m is the magnification of the paraxial transfer map i, corresponding , to the first complete solenoidal lens system on the left of Fig. 3,A“: is its spherical aberration coefficient, and At is the corresponding spherical aberration coefficient for the second solenoidal lens system on the right (25).
102
ALEX J. DRAGT AND ETIENNE FOREST
In order to compare the result (174) with that of Crewe and Kopf, it is necessary to compute the total spherical aberration of the system when the sextupole is turned off. In that case, h:' and h i vanish, and 9takes the form
Y = exp(:f::)exp(:gi:) It follows that A now takes the form A = .koiR exp(:f::) exp(:gi:)202K
(175) (176)
Crewe and Kopf have chosen to characterize spherical aberration in terms of image quantities as in Eq. (142). Consequently, in order to make an equivalent comparison, let us bring all the aberration factors in Eq. (176) over to the far right. This gives the factorization A = . k o . k R & & J ~ ~ exp(:f::) exp(:gi:)20
+ fi:}
= d&~R&&exp(:(f:)"
(177)
Here, j-2 is the aberration polynomial for the second complete solenoidal is factored in a form analogous to Eq. (130), and (f:)Ir imaging system when 2s is given by the relation
(fy= 2;y:
(178) Now if A, is the coefficient of [(Pi)'I2 in f:, then it is easily verified that n4A, is thecoefficient of [(Pi)']' in (f:)". Here, in analogy to Eq. (131), n is the magnification of the second imaging system as described by the paraxial map No.Consequently, the coefficient of [(Pi)']' in the combination {(f:)'' fi} is the sum (n4A1+ A 2 ) .Therefore, in analogy to the computation producing Eq. (142), we conclude that the total (uncorrected) spherical aberration of both solenoidal imaging systems combined is described by the factor (-4A'"') given by the relation
+
- 4 ~ ' " '= -4(n4A1
+ A,)
( 179)
Finally, comparison of Eqs. (174a) and (179), along with use of a relation for the second imaging system analogous to Eq. (134), gives the cancellation condition 3a217[(1/56)- (1/64)] = ( -4A'"')/(4n4)
(180)
This result agrees with that of Crewe and Kopf (26). G . Further Discussion
The discussion so far has shown that the coefficient of in the aberration polynomial for Y can be made to vanish if Eq. (174), or equi-
LIE ALGEBRAIC THEORY OF OPTICS
103
valently Eq. (180), is satisfied. The careful reader may still wonder if this is really the same as saying that the spherical aberration of the complete system is zero since spherical aberration has so far been defined in terms of factorizations in which all aberration terms appear either on the far right as in Eq. (7) or on the far left as in Eq. (69). However, inspection of Eq. (159) shows that 5 stands between two paraxial maps, both of which are separately imaging. It therefore follows, from calculations similar to those employed in Section VIII, that the coefficient of [(Pi)2]2 in the aberration polynomial appearing in either factorization (7) or (69) will also vanish if it vanishes in the aberration polynomial for 5 Consequently, spherical aberration has indeed been eliminated. It is also important to observe that while the sextupole can be used to remove spherical aberration (which is a third-order aberration), it also produces second-order aberrations since h: as given by Eq. (171) is nonzero. However we shall see that, due to the special form of h g , it in fact has no effect on the ability of the complete optical system to form a point image of an onaxis point object. That is, if the complete optical system is used only to form a spot that is the image of an on-axis point object, then its performance is unaffected by h:. To see that this is so, suppose that A? as given by Eq. (159) is refactorized by writing it in the form A? = dkodkR202RP
(181)
where Ytris defined by the expression
Examination of Eqs. (94) and (167) shows that P , like K consists entirely of aberration factors, and has a factorization of the form
P = exp{:(h;)Ir:} exp{:(i;k' + f: + g:,":}
(183)
Here, in particular, the third-degree term (Qtr has the form (h;)tr
= 2;12;1h;
(184)
In a manner similar to the calculation made involving Eqs. (129), (132), and (133), we find the relation
2L1h: = h','(n-'Q',np' - dQ')
(185) where the quantities nand d play the roles for goanalogous to those played by m and c for dko. Observe that, according to Eq. (171), h: has no terms that contain only components of P. It follows from Eq. (185) that the same is true of 2 ; ' h : . That is, all terms in Jt';'h; contain at least one factor of a
104
ALEX J. DRAGT AND ETIENNE FOREST
component of Q'. Finally, since JK is simply a rotation, Eq. (184) shows that all terms of (h;)**contain at least one factor of a component of Q'. Now let us consider the effect of exp{:(h;)*r:}. Following a procedure analogous to that employed in studying spherical aberration as in Eq. (30),and employing a similar notation, we find the relation
Q,= Q.
+ [(h:')", Q,l+ - ..
(186)
Since each term in (h:)" contains at least one factor of a component of Q, it follows that all terms on the right-hand side of Eq. (186) contain at least one factor of a component of Q. But, in Eq. (186) Q is the paraxial arrival point, and for the use envisioned for the complete optical system, Q vanishes for all points of the paraxial image. Consequently, by the structure of Eq. (186), the actual arrival point, also vanishes for all rays. That is, the actual image of an on-axis point object is indeed a point.
a,
H . Supercorrection The discussion so far has been restricted to the effect of third- and fourthdegree aberration polynomials, and we have seen that the corrected system of Fig. 3 forms a perfect point image even when the effect of these polynomials is included. What can be said about the effect of the aberration polynomials of still higher degree, which have so far been ignored? A complete answer to this question is beyond the scope of this paper. However, a few comments can be made about fifth-degree aberration polynomials. According to the general symmetry arguments made in arriving at Eq. (26), no fifth-degree polynomials appear in the representations of maps for drifts and (rotationally invariant) solenoids. But fifth-degree aberration polynomials do appear in the representation of the map (147)for a sextupole. Moreover, in arriving at the representation (167) for Y, it was necessary to commute Lie operators for polynomials of degrees 3 and 4. According to Eqs. (75) and (76), this operation may be expected to produce fifth-degree polynomials. Thus, in general, we expect that an optical system consisting of drifts, solenoids, and sextupoles will be described by a map which contains fifth-degree aberration polynomials among its factors. As was the case with the aberration polynomials of third and fourth degree, for the purposes of the optical system under consideration, the only offensive terms in the fifth-degree aberration polynomial are those which depend solely on the components of Pi,and have no dependence on Q'. For, as can be seen from a calculation analogous to those of (30) and (31), only such terms (which will be called "sperical-type" aberrations) affect the ability of the system to form a point image of an on-axis point object.
105
LIE ALGEBRAIC THEORY OF OPTICS
The appearance of such offensive terms in the fifth-degree aberration polynomial for the system of Fig. 3 can be analyzed quite simply using Lie algebraic methods. Indeed, we have found the surprising result that the offensive fifth-degree terms due to the sextupole can be made to completely cancel the offensive fifth-degree terms resulting from the required commutation of the Lie operators associated with the polynomials of degree 3 and 4! This cancellation occurs provided the third-order spherical aberration coefficient of the first solenoidal imaging system and the strength of the sextupole corrector are related by the condition (3021’/2)[(1/56) - (1/64)]
+ A1 = 0
.
(187)
Moreover, examination of Eqs. (174a) and (187) shows that both conditions can be made to hold simultaneously, providing the spherical aberration coefficients of the first and second solenoidal imaging systems are related by the condition *
A , = A:
(188)
Thus, for a suitable choice of third-order spherical aberrations, there are optical systems of the form shown in Fig. 3 whose ability to form a point image of an on-axis point object is limited only by spherical-type aberrations of order 5 and higher (27,28). Or equivalently, in Lie algebraic language, offensiveterms remain only in aberration polynomials of degree 6 and higher ( 5 ) . We will use the adjective “supercorrected” to refer to optical systems in which two aberrations have been simultaneously corrected. Thus, with a suitable choice of parameter values, the system of Fig. 3 is supercorrected in that it is free of both third-order and fourth-order spherical-type aberrations. Whether or not the above result is of use for practical optical design, and also the relative merits of other possible correction schemes, are still under study (29,30).We expect to present the derivation of Eq. (187), as well as our findings concerning other correction schemes, in future papers.
X. CONCLUDING DISCUSSION A . Recapitulation
A new method has been presented for the treatment of charged-particle optical systems. In Sections I1 and I11 we showed how transfer maps can be represented as products of Lie transformations, applications were made to some simple cases, and a brief discussion was given of various aberrations in the case of an imaging system. Section IV provided a suitable formulation of
106
ALEX J. DRAGT AND ETIENNE FOREST
the equations of motion, and made special application to the case of a solenoid. After a mathematical interlude provided by Section V, the transfer map for a solenoidal lens was tFeated in some detail in Seotions VI and VII. Finally, Sections VIII and IX gave a preliminary description of spherical aberration and its correction as an illustration of the power of the Lie algebraic approach. In brief, the use of Lie algebraic methods provides an operator extension of the matrix methods of paraxial optics to the general case. These operators can be used to characterize departures from paraxial behavior and to examine what can be done to control and compensate undesired effects. B. The Program MARYLIE
The preceding sections have been devoted primarily to the use of Lie algebraic methods for analytical calculations. It is worth remarking, at this point, that Lie algebraic methods are also of great utility for numerical calculations. Indeed, a charged-particle beam transport computer code, based on Lie algebraic methods, has been developed at the University of Maryland. This code, called MARYLIE, is intended to be used in the design of particle accelerators and storage rings (31). One feature of MARYLIE is the capacity to compute transfer maps for various beam-line elements. In its present version, MARYLIE 3.0, the code represents the map for each individual beam-line element in the form A = exp(:f,:)exp(:f,:)exp(:f,:)
Thus, MARYLIE 3.0 is correct through terms of degree 3. Currently, the following idealized beam-line elements are described by MARYLIE: drifts normal-entry and parallel-faced dipole bends hard-edged fringe fields for dipoles magnetic quadrupoles hard-edged fringe fields for quadrupoles
magnetic sextupoles magnetic octupoles electrostatic octupoles axial rotations radio frequency cavities
In addition, there are provisions for user-specified transfer maps (through nonlinear terms of degree 3) that make it possible to handle real fringe fields, nonideal magnets, and solenoids. MARYLIE also has the capacity to combine a collection of maps, corresponding to a collection of beam-line elements, into a single grand map. That is, MARYLIE can carry out the operation (78), and refactorize the result in the form (79) using the Baker-Campbell-Hausdorff formula.
LIE ALGEBRAIC THEORY OF OPTICS
107
Finally, MARYLIE has the capacity to trace rays. That is, it can carry out the operation (6) repeatedly for any collection of initial conditions. Thus, MARYLIE may be used as a substitute for numerical integration. MARYLIE has proved to be remarkably compact, efficient, fast, and accurate. It computes individual maps and combines them at a high rate. When used for ray tracing, it is orders of magnitude faster than numerical integration and, in typical accelerator applications, has an accuracy as high as seven significant figures.
C. The Program MICROLIE Currently, we are developing a code similar to MARYLIE, tentatively called MICROLIE 5.0, for use in the design of electron microscopes. MICROLIE will represent the map for individual elements in the form A = exp(:f,:) exp(:f,:).
*
exp(:f,:)
(190)
Consequently, MICROLIE will give results correct through terms of degree 5. We expect that MICROLIE will treat drifts, solenoids, sextupoles, and perhaps other elements (32). The polynomials fm for drifts and ideal sextupoles can be obtained, through the case m = 6, with the aid of symbolic manipulation programs such as MACSYMA (33). Indeed, this procedure was already essential in the cases m = 3 and m = 4 for the construction of MARYLIE. For. solenoids, formulas can be written for fs and f 6 in terms of integrals and multiple integrals involving terms similar to those appearing in Eq. (121). Alternatively, since these integrals must generally be evaluated numerically, one can instead directly integrate, again numerically, certain differential equations for the f, or the g,. For example, g z , the polynomial describing fifth-order geometric effects for the factorization (69), obeys in the case of solenoids the differential equation ( d / d z ) ( g z )=
- [(H:)i"',gf]
(191)
These equations can be handled quite easily once a computer has been properly programmed to treat polynomial arrays and evaluate their Poisson brackets (14,34). Based on our experience with MARYLIE, we expect that MICROLIE will be able to trace rays much faster than can be done by direct numerical integration, and with great accuracy. Consequently, because of its very high speed, a real-time version of MICROLIE may be useful as part of a computercontrolled operating system for an ultra-high-resolution electron microscope.
108
ALEX J. DRAGT AND ETIENNE FOREST
D . Numerical Example of Simple Crewe-Kopf System As an application of MARYLIE and of the results of Section IX on the correction of spherical aberration, consider the sextupole-corrected spotforming system shown in Fig. 4.It, like the system of Fig. 3, consists of two solenoids, a sextupole, and various drifts. However, the object plane is now at infinity. Two sets of parameter values for this system are listed in Table I. For simplicity of presentation, the solenoid field strengths are taken to have Glaser profiles of the form (35) E,(O,O, Z ) = B y [ 1
+ (z - Pax 1r2I-l
(192)
Here B y is the maximum field strength at the center of the solenoid, z = zmar, and r controls the fall-off of the magnetic field outside the solenoid. More realistic profiles could be used, and more nearly optimal dimensions and parameter values could certainly be found, The system is meant only to be illustrative, and not indicative of any actual design. Figure 5 shows the final spot patterns produced by this system, in the paraxial image plane, for two sets of incoming rays when the parameter values from the first line of Table I are employed. However, the correction sextupole is turned off. The incoming rays are taken to be parallel to the optical axis, lie on two cylinders having radii of 9 x lop3cm and 4.5 x lop3cm respectively, and enter the system in a field-free region well outside the first solenoid. The fact that these rays are not brought to a point focus, but instead form two circles, is a consequence of third-order spherical aberration. Note that the
FIG.4. A sextupole-corrected spot-forming system. Parallel incoming rays are brought to a crossoverat the center of the sextupoleand then to a final focus in the image plane. For purposes of computation, initial rays are launched on cylinders having radii of 9 x lo-’ cm and 4.5 x cm, respectively.The lengths I,, I,, I,, 14, and I are as follows: I , >> r; I, = variable; I, = 0.022 m; Id = 0.043235 m; I = 0.20 m. See Table I for two sets of remaining parameter values.
TABLE I
Electron beam YO
1.3914
Same
Right solenoid system
Left solenoid system
B Y
r
12
A1
B Y
r
n
Combined system
6:
A,
+ 2:
(TI
(m)
(m)
(m)
(TI
(m)
(m)
(m)
0.13 0.14
0.0029 0.0025
0.0431
-22.20 -29.71
0.24
0.004
0.3555
-29.68
Same
Same
Same
Same
-51.88 -59.39
0.0429
A , - 6:
-
Sextupole strength S
(4
(TI (m12
7.48 -0.03
40.571 43.408
110
ALEX J. DRAGT AND ETIENNE FOREST 0.3r x IO-~ 0.2 0.1
-
E“
0.0
-
a
-0.1 -
1
I
I
L
3
.
0
-
I
.... ....
t 01
,
I
-0.3 -0.2 -0.1
0.0
x , meter
0.1
0.2 C 3 x lo-’
FIG.5. Final spot patterns produced by the system of Fig. 4 using parameter values from the first line of Table I, except that the sextupole is turned off. The two circles are formed by the cm and two sets of parallel incoming rays originating on the cylinders having radii 9 x 4.5 x cm. The rays are not brought to a point focus, but instead form two circles, due to third-order spherical aberration.
ratio of the radii of the two circles is 23 = 8, as expected for a system dominated by third-order aberrations. Figure 6 shows the spot patterns formed by the same incoming rays when the sextupole is set to the strength specified by Eq. (174a) (36). The parameter values are again those of the first line of Table I. Comparison of Figs. 5 and 6 shows that the spot patterns have been considerably reduced in size. Figure 7 illustrates various spot patterns formed for the 9 x cm initial radius case when the sextupole strength deviates from its ideal value. Evidently, the spot pattern is indeed “smallest” when Eq. (174a) is obeyed. Although the spot patterns in Fig. 6 are much smaller than those of Fig. 5, they are still of finite extent. This is primarily because, as inspection of the first line of Table I shows, the spherical aberration coefficients of the two solenoids violate condition (1 88). Thus, the system should now be dominated by fourthorder aberrations. Indeed, the ratio of the “radii“ of the two patterns is now approximately 24 = 16. There is also, of course, the effect of spherical-type aberrations of order 5 and higher (37). E. Numerical Example of Supercorrected System
Suppose the spherical aberration coefficients of the two solenoids are adjusted to satisfy Eq. (188). The second line of Table I gives a list of parameter values which accomplish this goal. This second system has paraxial properties
111
LIE ALGEBRAIC THEORY O F OPTICS
0.2
I 0, t
1
b
E
0.0
2:
-0.1 O'Il
: ......
-0.21
*. . .
'
-0.3
-0.2 -0.1
-0.3
0.0
.
. . .
0.1
0.2
0.3
x IO-~
x , meter FIG.6. Final spot atterns produced by the system of Fig. 4, using th parameter values from the first line of Tal I, with the sextupole turned on. Third-order sphe al aberration has
now been removed by the sextupole corrector. Note that the outer spot pattern has a complex double-valued structure.
similar to the first system, and somewhat larger total third-order spherical aberration. It differs from the first system only in the nature and location of the first solenoid. Figure 8 shows the spot patterns produced by this second system, for the same sets of incoming rays, when the correction sextupole is turned off. Comparison of Figs. 5 and 8 shows that the spot patterns are slightly larger 0.15x 10-8 0.10 -
0.05
I
-
I
0,
E
t
I
S.39.743
I
I
I
Tesla /(meter)'
\...........
-
f. p.. . ..... .. ::. ...
-
0.00-
.............
s: -0.05-0.10 -0.15
S141.062 Tesla/(rneter)' I
I
I
I
I
FIG.7. Final spot patterns formed for the 9 x cm initial radius case when the sextupole has its ideal strength and values slightly above and below its ideal strength.
112
ALEX J. DRAGT AND ETIENNE FOREST
I I - 0.3I -0.3 -0.2 -0.1
I
I
I
I
0.0 0.1
0.2
0.3
x IO-~
x , meter
FIG. 8. Final spot patterns produced by the system of Fig. 4 using parameter values from the second line of Table I, except that the sextupole is turned off.
for the second system than the first. This is to be expected because of the somewhat larger third-order spherical aberration of the second system. By contrast, Fig. 9 shows the spot patterns when the sextupole is set to the strength specified by Eq. (174a). The effect of the sextupole is now even more dramatic because condition (187) is also satisfied! That is, the system is supercorrected. Comparison of Figs. 6 and 9 shows that the spot pattern 0.75
I
1
I
x lO+O 0.50 -
I
I
I
. . .
0.25 -
-
0,
t
E
ZI
0.00
-
-0.25
-
-0.50
-
0
,
-
. *
.
-
-0.75lllliil - 0.50 0.00 0.50
x , meter
x 16'0
FIG.9. Final spot patterns produced by the system of Fig. 4, using the parameter values from the second line of Table I, with the sextupole turned on. The system is now supercorrected in that it is free of both third-order and fourth-order spherical-type aberrations. Note that the spot patterns are now less than an angstrom in radius.
113
LIE ALGEBRAIC THEORY OF OPTICS
corresponding to the rays launched on the 9 x cm cylinder is about a factor of four smaller in “radius” than that for the first system, and more nearly cm cylinder, the correspondcircular. For the rays launched on the 4.5 x ing improvement is about a factor of 8. Figure 10 shows an enlarged view of the central spot pattern of Fig. 9. As explained earlier, the remaining offensive spherical-type aberrations are now of order 5 and higher. It is they that account for the fact that the spot patterns still have a finite, albeit much reduced, size. Indeed, the ratio of the radii of the two patterns in Fig. 9 is about 25
= 32.
As an additional check on the validity of what is being claimed, Fig. 11 shows the performance of the second system, as computed by a combination of numerical integration and MARY LIE, when the f5and still higher-degree f’s of the sextupole are artificially removed. That is, the effect of the sextupole is computed using only its polynomials fi, f3, and f4 (38). Evidently, although somewhat larger, the spot patterns of this figure are similar to those of Fig. 6. The ratio of the radii of the two patterns in Fig. 11 is about z4 = 16, thereby indicating that the behavior is now dominated by fourth-order aberrations. Thus, the fourth-order compensating effects of the sextupole due to fsare indeed essential to obtaining the reduced spot sizes of Fig. 9. In the case of systems whose performance is dominated by third-order spherical aberration, it is well known and easy to show that the spot size can be reduced by displacing the actual image plane slightly away from the paraxial image plane (39). We have found that the same is generally true of systems whose performance is dominated by fifth-order spherical-type aberrations 0.2I
I
.
I
x , meter
I
I
x 16“
FIG. 10. Enlarged view of the central spot pattern of Fig. 9.
114
ALEX J. DRAGT AND ETIENNE FOREST
0.2
I
I
x 10-8 0.1 7
e, t
E
0.0 -
. ..... * . . ... .. .. . . *
*.
L
I
* ...
-
.
....
.’
Q
-
s -0.1
-
-
......... ..’ I
-0.2
I
I
FIG.1 1 . Final spot patterns produced by the system of Fig. 4, using the parameter values from the second line of Table I and the sextupole turned on, except that the fs and still higherdegree f ’ s of the sextupole have been artificially removed. The behavior is now dominated by fourth-order aberrations.
c
-0.2
..
.
..
0.0
0.1
:*
*
.
-0.3 -0.3 -0.2 -0.1
x , meter
0.2 0.3 x 10-10
FIG. 12. Final spot patterns produced by the system of Fig. 4, using the supercorrected parameter values from the second line of Table I, with the image plane slightly displaced to minimize spot size. Note that the entire spot size is less than 0.5 A.
LIE ALGEBRAIC THEORY OF OPTICS
115
(40).However, in this case the underlying theory and required calculations are considerably more complex. For the supercorrected system described by the second line of Table I, we have calculated that, for the rays of interest, a minimum spot size should be achieved when the image plane is moved forward (toward the source) from the paraxial image plane by approximately 230 A. Figure 12 shows the spot patterns formed in this optimal image plane. We note that the entire spot is less than 0.5 A in diameter. Thus, by simultaneously removing both third- and fourth-order spherical-type aberrations and assuming perfect construction, it appears to be possible to build electron microscopes having subangstrom resolution.
APPENDIX The purpose of this appendix is to demonstrate that in the case of an imaging system, the expression (121a) for A* can be brought to the form (145). To do so requires some judicious integrations by parts and use of the equations of motion (103).If expression (121a) is expanded out, we find the expression I
**
rzf
=d
[tltl”b4 - u4b4 - 2 ~ ~ b ’ ( b-’ (b’)4] ) ~ dz
(All
I’
We now will outline briefly what steps are required for various terms. The (b’)4term can be integrated by parts to give the relation
lz,
I zf
= b(b’)3
rzf
- J 3b(b’)’b”dz I’
However, according to Eq. (106a),b(z’)= 0 by construction. Also, comparison of Eqs. (101) and (131) shows that b(z‘) = 0 in the imaging case. Finally Eq. (103)shows that b obeys the equation of motion
b” = -a2b Consequently, we find the relation
(A31
116
ALEX J. DRAGT AND ETIENNE FOREST
The term aa”b* can also be integrated by parts. We find the relation
:1
aarrb4dz =
s: I
~ t ~ ~ ( adzb * ) Zf
= a’(ab4)
a’(a‘b4 + 4ab3b‘)dz
-
(A51
The end-point contributions vanish for the same reasons as before (41). Therefore, we also have the result
Izr
aa”b4dz
=
- f:[(a’)2b4
+ 4aa’b3b‘Jdz
( 4
Finally, consider the result of integrating the quantity ua‘b3brby parts. We find the relation
Again, the end-point contributions vanish (41). Upon carrying out the indicated differentiation and using the equation of motion (A3), we find the result r zf
J
r zf
aarb3b’dz= J [-aa’b3b’
z1
+ a4b4]dz
- 3a2b2(b’)2
(A8)
21
which can be written in the equivalent form [-3a2b2(b‘)’
+ a4b4]dz
(A9)
The stage is set! The first and last terms in the integrand of (Al) can be replaced using (A6) and (A4) respectively. Finally, use (A9) to replace half of the resulting aa’b3b’ terms. When this is done, one obtains the desired result A* = 5{:[Za4b4 8
+ b2(ba’ +
+ ~ ~ b ~ ( dzb ’ ) ~ (A10) ]
ACKNOWLEDGMENTS We are grateful to John A. Farrell and Richard K. Cooper of the Los Alamos National Laboratory, Albert V. Crewe of the University of Chicago, and Daniel Welsh of the University of
LIE ALGEBRAIC THEORY O F OPTICS
117
Maryland for many helpful comments. We are also grateful to the Computer ScienceCenter of the University of Maryland for supplying computer time. Finally, we are grateful to the United States Department of Energy (Contract AS05-80ER10666) and the National Sciences and Engineering Research Council of Canada for their generous support.
REFERENCES AND FOOTNOTES 1. There are many references describing the customary approach to electron microscopes and
2.
3. 4.
5. 6.
ion optics. In the area of electron microscopes they include A. B. El-Kareh and J. C. El-Kareh, “Electron Beams, Lenses, and Optics,” 2 vols. Academic Press, New York, 1970; W. Glaser, “Grundlagen der Elektronenoptik.” Springer-Verlag, Berlin and New York, 1952; P. Grivet and A. Septier, “Electron Optics,” 2nd ed. Pergamon, Oxford, 1972;P. W. Hawkes, “Electron Optics and Electron Microscopy.” Taylor Francis, London, 1972; in “Quadrupole Optics” (G. Hohler, ed.), Springer Tracts Mod. Phys., Vol. 42. Springer-Verlag, Berlin and New York, 1966; “Quadrupoles in Electron Lens Design.” Academic Press, New York, 1969; P. W. Hawkes, ed., “Magnetic Electron Lenses.” Springer-Verlag. Berlin and New York, 1982; “Image Processing and Computer-Aided Design in Electron Optics.” Academic Press, New York, 1973; 0. Klemperer and M. E. Barnett, “Electron Optics,” 3rd ed. Cambridge Univ. Press, London and New York, 1971;T. Mulvey and M. J. Wallington, Rep. Prog. Phys. 36,347 (1973); H. Rose and U. Petri, Opt& 33, 151 (1971); J. C. H. Spence, “Experimental High Resolution Electron Microscopy.” Oxford Univ. Press (Clarendon), London and New York, 1981; P. A. Sturrock, Proc. R. SOC. London, Ser. A 210, 269 (1951); “Static and Dynamic Electron Optics.” Cambridge Univ. Press, London and New York, 1955. In the area of electron microscopes, microprobes, and other optical systems, references include V. E. Cosslett, “Introduction to Electron Optics.” Oxford Univ. Press (Clarendon), London and New York, 1950; G. W. Grime and F. Watt, “Beam Optics of Quadrupole Probe-Forming Systems.” Adam Hilger Ltd., Bristol, 1984;E. Harting and F. H. Read, “Electrostatic Lenses.” Elsevier, Amsterdam, 1976; A. Septier, ed., “Focusing of Charged Particles,” 2 vols. Academic Press, New York, 1967; “Applied Charged Particle Optics,” Parts A and B. Academic Press, New York, 1980; H. Wollnik, ed., “Charged Particle Optics.” North-Holland Publ., Amsterdam, 1981;Ji-ye Ximen,“Principles of Electron and Ion Optics and an Introduction to Aberration Theory” (revised by A. V. Crewe). Enrico Fermi Institute, University of Chicago, Chicago, Illinois, 1984(book in preparation); V. K. Zworykin, G. A. Morton, E. G. Ramberg, E. G. Hillier, and A. W. Vance, “Electron Optics and the Electron Microscope.” J. Wiley. New York, 1945. A. J. Dragt, Lectures on nonlinear orbit dynamics. In “Physics of High Energy Particle Accelerators”(R. A. Carrigan et al., eds.), Am. Inst. Phys. Conf. Proc. No. 87. Am. Inst. Phys., New York, 1982; A. J. Dragt and J. M. Finn, J . Math. Phys. 17,2215 (1976). H. Goldstein, “Classical Mechanics,” 1st ed. Addison-Wesley, Reading, Massachusetts, 1950. Note that this is indeed a “definition.” Had we employed the usual Poisson bracket relations for the quantities on the right-hand side of Eq. (8), then we would have found that the Poisson brackets of the quantities on the left-hand side of Eq. (8) would involve factors of l/p,,. Note that the order of an aberration is one unit lower than the degree of its associated polynomial. A discussion of the various third-order aberrations is given, for example, in Section (1.4) of P. W. Hawkes’ article on Magnetic Lens Theory in “Magnetic Electron Lenses.” SpringerVerlag, Berlin and New York, 1982.
118
ALEX J. DRAGT AND ETIENNE FOREST
7. For a similar treatment of the case of light optics, see A. J. Dragt, J . Opt. Sci. Am. 72, 372 (1982);see also E. Forest, Lie algebraic methods for charged particle beams and light optics.
Unpublished Ph.D. Dissertation, Department ofphysics and Astronomy, University of Maryland (1984). 8. A. J. Dragt, A l P Conj. Proc. 87, 151 (1982). 9. Knowledge of time of flight may not necessarily be useful in the case of electron microscopes. It may, however, be of interest in the design of spectrometers. 10. The case of electrostatic lenses can also be treated by Lie algebraic methods, and will be a subject for future work. 11. H. Goldstein, Classical Mechanics,” 1st ed.,p. 241. Addison-Wesley, Reading, Massachusetts, 1950. 12. A. B. El-Kareh and J. C. El-Kareh, “Electron Beams, Lenses, and Optics,” Vol. 2, p. 20. Academic Press, New York, 1970; see also H. Bateman, “Partial Differential Equations of Mathematical Physics,” p. 406. Cambridge Univ. Press, London and New York, 1959. 13. Note that A vanishes in regions where B = 0. Consequently, according to Eq. (35), canonical and mechanical momenta are equal in field-free regions. 14. A. J. Dragt and E. Forest, J . Math. Phys. 24, 2734 (1983); see also E. Forest, Lie algebraic methods for charged particle beams and light optics. Unpublished Ph.D. Dissertation, Department of Physics and Astronomy, University of Maryland (1984). 15. The formula for g: is available upon request. 16. Note that in writing Eq. (127) we have changed the order of factoring. 17. See, for example, Eq. (1.126) of the article by P. W. Hawkes on Magnetic Lens Theory in “Magnetic Electron Lenses.” Springer-Verlag, Berlin and New York, 1982. 18. See, for example, Section (1.4.1) of P. W. Hawkes’ article on Magnetic Lens Theory in “Magnetic Electron Lenses.” Springer-Verlag, Berlin and New York, 1982. 19. The original reference to this result, now often called Scherzer’s theorem, is 0. Scherzer, Z. Phys. 101, 593 (1936). 20. Scherzer has also shown, in our language, that the total A* is always negative for an imaging system consisting of solenoids, drifts, and axially symmetric electrostatic lenses. See, for example, A.B. El-Kareh and J. C. El-Kareh, “Electron Beams, Lenses and Optics,” Vol. 2, Sect. (10.1).Academic Press, New York, 1970, or Scherzer’s paper (19). 21. A.V.CreweandD.Kopf.Optik55,I (1980);56,391(1980);57,313(1980);60,271(1982);A.V. Crewe and D. B. Salzman, Uftrumicroscopy9,373 (1982);A. V. Crewe, Optik 69,24 (1984);see also V. D. Beck, ibid. 53,241 (1979);Ji-ye Ximen, ibid. 65, 295 (1983). 22. That is, for simplicity, we assume that s has a constant value over some interval corresponding to the length of the sextupole, and is zero elsewhere.Also, fringe fields are neglected. Using the methods of Ref. 14, it is possible to treat the case where s is not constant and fringe fields are included. 23. Here we shall ignore chromatic and time-of-flight terms. They are available upon request. 24. D. R. Douglas, Lie algebraic methods for particle accelerator theory. Unpublished Ph.D. Dissertation, Department of Physics and Astronomy, University of Maryland (1982). 25. Strictly speaking, Eqs. (174a.b)are correct in the idealization that the sextupole has constant s and no fringe fields. However, we have verified that a sextupole can still be used to cancel spherical aberration even when end effects are included. 26. See A. V. Crewe and D. Kopf, Optik 55, 10 (1980).We have verified that our Eq. (180) can be transformed, with suitable algebraic manipulation, into the last equation in their paper. In their derivation, Crewe and Kopf neglect the spherical aberration of the first solenoidal lens system. 
However, their final result is stated in a form that is correct even when this aberration is included. 27. This same result has been found, using the method of characteristic functions, in a remarkable
LIE ALGEBRAIC THEORY O F OPTICS
119
paper by H. Rose [Nucl. Instrum. Methods 187, 187 (1981)l. This paper is also reprinted in “Charged Particle Optics” (H. Wollnik, ed.). North-Holland Publ., Amsterdam, 1981. For a more detailed presentation of the Lie algebraic derivation of this result, see E. Forest, Lie algebraic methods for charged particle beams and light optics. Unpublished Ph.D. Dissertation, Department of Physics and Astronomy, University of Maryland (1984). 28. We have verified that this statement remains true even when sextupole end effects are included. 29. The effect of construction errors for the system of Fig. 3 is discussed in Ref. 27. 30. More complicated correction schemes involving multiple sextupoles are described in Ref. 27 and the last five papers of Crewe et al. listed in Ref. 21; see also 0.Scherzer, Optik2,114 (1947); G. D. Archard, Proposed system for magnetic electron probe in “X-Ray Microscopy and XRay Microanalysis,” Proceedings of the Second International Symposium, Stockholm, 1959, (A. Engstrom, V. Coslett, and H. Pattee, eds.), p. 337. Elsevier, Amsterdam, (1960);articles by A. Septier, and R. Barer and V. E. Coslett, in “Advances in Optical and Electron Microscopy” (A. Septier, ed.). Academic Press, New York, 1966; W. Bernhard, Optik 57,73 (1980);H. Hely, ibid. 60,307,353 (1982); H. Wollnik, Nucl. Instrum. Methods 103,479 (1972);S . Y. Yavor, L. P. Ovsyannikova, and L. A. Baranova, ibid. 99, 103 (1972);Y. G. Lyubchik and T. Y. Fishkova, Sou. Phys.-Tech. Phys. (Engl. Transl.) 19, 1403(1974);H. Koops, Proc. Int. Congr. Electron Microsc. 9th, 1978, Vol. 3, p. 185 (1978). These references all describe octupole correction schemes. 31. See Douglas (24);see also A. J. Dragt, IEEE Trans. Nucl. Sci.NS26,3601(1979); A. J. Dragt and D. R. Douglas, ibid. NS-28,2522 (1981); Lie algebraic method for charged particle beam transport. Brookhaven Nat. Lab. [Rep.] BNL BNL 31761 (1982); IEEE Trans. Nucl. Sci. NS-30, 2442 (1983); Lie algebraic methods for particle tracking calculations. In “Proceedings of the 12th International Conference on High Energy Accelerators” (F. T. Cole and R. Donaldson, eds.), Fermi Nat. Lab., Batavia, Illinois, 1983; Particle tracking using lie algebraic methods. In “Computing in Accelerator Design and Operation” (W. Busse and R. Zelazny, eds.), Lect. Notes Phys. No. 215. Springer-Verlag, Berlin and New York, 1984; A. J. Dragt, L. M. Healy, F. Neri, and R. D. Ryne, MARYLIE 3.0-A program for nonlinear analysis of accelerator and beamline lattices. IEEE ‘Trans. Nucl. Sci. NS-32,2311 (1985). 32. Quadrupoles and octupoles, etc. could be included for use in the design of microprobes. It is also expected that MICROLIE will treat magnet misplacements and misalignments and the general question of tolerances. 33. The geometric (nonchromatic) portions of fs and fs for a sextupole are available upon request. 34. Prelimary versions of such a program exist. 35. For simplicity,we have also set B, to zero once the Glaser profile has reduced the on-axis field to 1/26 of its maximum value. This amounts to setting the field to zero beyond a distance 5 from the center of the solenoid. 36. Note that Eq. (174a) is more general than Eq. (174b) because it does not assume that the first solenoidal system is imaging. 37. The spot patterns of Figs. 5 through 10 and Fig. 12 were computed using numerical integration and therefore aberration effects of all orders have been included. Although not shown, we found that the use of MICROLIE produced almost identical patterns. 38. 
These spot patterns were obtained by integrating numerically through the first solenoid, then using MARYLIE (or, equivalently, MICROLIE) with only f2,f3, and f4to map through the sextupole, and then finally going through the second solenoid again by numerical integration. 39. For a discussion of the so-called disk of minimum confusion in the case of third-order spherical aberration see, for example, A. B. El-Kareh and J. C. El-Kareh, “Electron Beams, Lenses and Optics,” Vol. 2, Sect. (10.3). Academic Press, New York, 1970.
120
ALEX J. DRAGT AND ETIENNE FOREST
40. We have also found that this is not true of optical systems whose performance is dominated by
fourth-order spherical-type aberrations. Thus, the spot pattern of Fig. 6 cannot be improved by moving the image plane. However we have verified that the spot pattern can be improved in this case by moving thesextupole. See E. Forest, Lie algebraic methods for charged particle beams and light optics. Unpublished Ph.D. Dissertation, Department of Physics and Astronomy, University of Maryland (1984); also see Rose (27). Finally, it may be remarked that it should be possible to reduce the spot size in Fig. 12 even further by reintroducing some third-order spherical aberration while keeping the fourth-order spherical aberration zero. This could be done by maintaining Eq. (187) while violating Eq. (174a). For a discussion of such “aberration balancing” in the context of light optics, see M. Born and E. Wolf, “Principles of Optics,” 6th ed., pp. 467,468. Pergamon, Oxford, 1980. 41. Note that in this case the end point contributions also vanish, independent of other arguments, if z’ and zf are in field-free regions.
ADVANCES IN ELECTRONICS AND ELECTRON PHYSICS, VOL. 67
The Universal Imaging Algebra CHARLES R. GIARDINA Stevens Institute of Technology Hoboken, New Jersey 07030
I. INTRODUCTION Several algebraic approaches have recently been given for describing useful operations in image processing. Crimmins et al. (1985) investigated some aspects of a morphological-type imaging algebra for automatic shape recognition. The basic reference to algebraic properties as well as topological and stochastic properties involving mathematical morphology is Matheron (1975). A survey article on the mathematical structure for an image algebra was given in Ankeney and Urquhart (1984). Using relational type structures, Ritter (1984) developed an imaging algebra describing various operations in image processing. Some algebraic aspects of the binary mathematical morphology were extended to gray-value images in Serra (1982) and Sternberg (1980). This was done using a relation called the umbra. A similar extension of the binary mathematical morphology to gray-value images was specified in Giardina (1982,1984) using fuzzy sets. In the latter reference the concept of the Universal Imaging Algebra was also given. Instead of defining gray values as integers between 1 and 2", n integer > 1, the reciprocal of these values are used in the Universal Imaging Algebra. More specifically,values between 0 and 1, inclusive, denote gray values in this algebra. By doing this, the rich structure of fuzzy set theory given by Zadeh (1965) and fuzzy set theory introduced by Goguen (1967) can be utilized in the Universal Imaging Algebra. This algebra is a many-sorted algebra similar to that of Birkhoff and Lipson (1970) and Goguen and Thatcher (1974). As in any algebra, the Universal Imaging Algebra consists of three items: sets of various sorts, operators mapping elements in some of these sets into an element in another set of some sort, and finally, equational constraints between the operators like the commutative or associative laws. One sort or type of set employed in the imaging algebra described herein is the set of images; another set is the set of binary images. These two sets are 121 Copyright @ 1986 by Academic Press, Inc. All rights of reproduction in any form reserved.
122
CHARLES R. GIARDINA
defined in the next two sections. Other types of sets needed in the theory will be defined in ensuing sections. The use of more than one sort of set in this algebra provides a natural setting for image processing. A vector space is elegantly described utilizing two sorts of sets: sets of vectors and sets of scalars. Algebras involving more than one type of set are called many-sorted algebras and are generalizations of universal algebras. This concept is defined in Section VII. Standard morphological operators are in the Universal Imaging Algebra and are described along with examples and various properties in Section 11; here the more general theory of Matheron (1975) is applied to binary images as described in Section I.C. Various aspects of the binary mathematical morphology are generalized to gray-value images in Section V. Furthermore, numerous properties and relationships of operators in the gray-valued mathematical morphology are also provided in the referenced section. In Sections I11 and IV arithmetic and topological operators in the Universal Imaging Algebra are discussed along with useful properties. A . Images
The principal definition of an image presented in this document is the same as the one given in Giardina (1982,1984) and will be repeated below. Before an image is defined, an important example (Herman 1980) will be presented which should motivate the forthcoming definition. Consider a television set with a fixed, nonchanging square picture. A “digital” or sampled image representing this picture is desired. Furthermore, a light meter is the sensor which wiIl help determine this image. The meter has readings from 0 to 1, with 0 denoting white and 1 denoting black. Numbers closer to 1 in the meter denote darker values of gray. By placing this meter near the picture a useful measurement is found. To obtain the digital image, first partition the television screen into 16 equal smaller squares. This could be done by drawing equidistant vertical and horizontal lines on the screen. Next, on a large piece of opaque material cut a hole that is the size of one small square. Finally, use the opaque material and the light meter at a fixed distance from the screen. Take 16 readings from the light meter by moving the opaque material 16 times: once for each time a small square area in the picture and the hole coincide. Each square in the above example is called a pixel and a mapping was performed. Each pixel has attached to it (by way of a light-meter reading) a number between 0 and 1 indicative of the (“average”) gray value in that pixel. The set of all pixels, each with a number between 0 and 1 attached, is therefore
THE UNIVERSAL IMAGING ALGEBRA
123
a fuzzy set (numbers between 0 and 1 indicate “how much gray belongs in each pixel”). Images will now be defined similar to the TV example. Partition the x-y plane into squares one unit on a side. Do this such that each lattice point is the center of a square and the square contains its left and bottom boundary. Denote the resulting structure by Z x Z. It follows that where Z denotes the integers and [a, b) = { x : a 5 x c b}. Also, let (i, j) represent the (square) pixel [i - 3,i + 4) x [ j - f,j + 3). An image f is a function from Z x Z into [0,1]. This is denoted by f : Z x Z + [0,1] or by f E [O, 1IzXz.Consequently, f is a fuzzy set: each pixel has a number between 0 and 1 attached to it. As before, let 0 denote white and 1 denote black, and numbers closer to 1 denote darker gray. Standard fuzzy notation is utilized in this document. An image f is specified as a set of elements of the form (i, j) I uf((i, j)). In this notation uf((i, j)) [= uf(i, j) for convenience] is the value of gray attached to the (i, j) pixel for the image f.The value u,(i, j) indicates the extent to which pixel (i, j) belongs to f. Whenever uf(i, j) = 0 this pixel (and associated gray value) need not be explicitly mentioned in the set representation for f, or in an illustration.
B. Subimage of an Image For any image f in [0, 1Jzxzand a fixed subset D of Z x Z the image g = {(x, y) uf(x,y), x , y in D} is called a subimage of f. Here g is equal to the original image f on the restricted part D of the x, y plane, and is 0 elsewhere.
I
Subimages are useful in segmentation and image isolation. Another concept closely related to subimage is that of fuzzy subimage. In this case h = {(x, y) I u,(x, y), x, y in D} is the fuzzy subimage of f provided u,(x, y) Iuf(x,y). It is seen that a fuzzy subimage has gray levels pixel by pixel equal to or less than those of the original image. Further, the fuzzy subimage is 0 on D‘, the complement of D.This concept is of importance in the gray value mathematical morphology. When the set D is not specifically mentioned D = 2 x Z is to be understood. When h is a subimage of f in the fuzzy or standard sense, this is denoted h c f. The context of the discussion should make it clear which type of subset is involved. Example 1
If f = {(1,2)I 3/4, (233)I 1/49 (131) I 1/21 then 9 = ((1,2)I3/4,(2,3) I 1/41 is a subimage of f and h = {(I, 2) I 1/2} is a fuzzy subimage of f (and 9). As mentioned above, D = Z x Z is implied in this example, although D =
124
CHARLES R. GIARDINA
{(1,2),(2,3)} is the smallest set (in cardinality) for which the total example is valid. Operators S and P could be defined that have the aforementioned prox [0, 11'"' + [0, l l z X zwhere S(0,f) = perties. Specifically, let s:2"' g and where g is a subimage of f,and then S is called a subimage operator. where P(0,f ) = h and where h is Also, if 9:2' " x [O,l]' " [O,l]' a fuzzy subimage of f,then Pis called a fuzzy subimage operator.
'
'
C . Binary Images A binary image B can be viewed as a subset of Z x Z, or equivalently it is a function in (0, 1)'"'. The latter representation is analogous to the fuzzy situation. Here the pixel (i, j) belongs to B with gray value udi,j), which in this case is 1. When it doesn't belong, uB(i,j) is 0.
Example 2
The binary image B illustrated in Fig. 1 can be represented using "usual" set notation: B = {(1,1),(2,1),(1,2),(1,3),(1,4)}. A fuzzy set representation could also be made: B = ((1,l) 1 1, (2,l)1 1, (1,2)1 1, (1,3)I 1, (1,4)I l}. When binary images are employed in this article the former notation will be utilized. The principal reason for this is that this notation is less cumbersome, and that the formulas in the (black-white) mathematical morphology employ standard set representations.
FIG. 1. Binary image B.
THE UNIVERSAL IMAGING ALGEBRA
125
D . Standard Set Theoretic Operations on Binary Images Let A and B be binary images, that is, A, B c Z x Z and the union of A and B, denoted by A u B, is the set of all pixels which either have gray value 1 in A or have gray value 1 in B. Thus, A u B = {(i,j) where uA(i,j) = 1 or uB(i,j) = l}. The intersection of A and B, denoted by A n B, is the set of all pixels which have value 1 in A and also have value 1 in B. Hence, A n B = {(i,j) such that (denoted by 3) uA(i,j) = 1 and u& j) = 1). The complement of A is represented by A’ and is the image which has zeros where A has ones and has ones where A has zeros. Therefore, A’ = {(i, j) 3 uA(i,j) = O}. Example 3 If the binary image A is given in Fig. 2 and B is given in Fig. 1 of Example 2, then A u B, A n B, and A’ are illustrated in Figs. 3,4, and 5, respectively. The all-white image will be denoted by cp and the all-black image will be denoted by 1. For any binary image A it follows that cpcAcl
Several other properties of complement, union, and intersection are (A’)‘ = A cp’ = 1
AuA’= 1 AnA’=cp A c B if and only if B’ c A’.
FIG.2. Binary image A .
126
CHARLES R. GIARDINA
AUE
FIG.3. Binary image A u E.
FIG.4. Binary image A n B.
A'
FIG.5. Binary image A'. In A' all nonlabeled pixels have gray value one.
THE UNIVERSAL IMAGING ALGEBRA
127
The next two relations are called De Morgan’s theorems: ( A u B)’ = A’ n B’
and ( A n B)’ = A‘ u 8‘
Associative laws hold: A u( B u C ) = ( A u B ) u C A n ( B n C ) = ( A n B) n C
Commutative laws hold: AuB=BuA AnB=BnA
Distribution laws also hold: A u ( B n C ) = ( A u B) n ( A u C ) A n( B u C ) = ( A n B) u ( A n C )
Union and intersection can be extended to arbitrarily large index sets; if { A , } is a set of images with a E K (a is an element of K) with K an indexing set, then
u
A, = {x 3 x E A,
for at least one a E K}
UEK
and
n A,
= (x3
x E A,
for every a E K}
UEK
Numerous properties hold for arbitrary union and intersection, in particular De Morgan’s theorems hold:
11. MATHEMATICAL MORPHOLOGY In this article the mathematical morphology developed by Matheron (1975) is applied to binary images. In subsequent sections this theory is
128
CHARLES R. GIARDINA
generalized to images using fuzzy set theory. Before the morphological operators are introduced, some fundamental concepts will be given. An attempt has been made to keep the notation and definitions similar to that in the reference above. The definition of a translate of a binary image will be introduced in the next section. A . Translate of a Binary Image Given a binary image X c Z x Z, and a pixel b = (i, j) E Z x Z, the translate of X by b is another binary image denoted by x b and is defined as
~,={(x+~,Y+~)~(x,Y)EX} =
{z+ ~ E J Z E X }
So, x b is the set of all pixels in X moved i units to the right a n d j units up. Example 4
Let A be given in Fig. 2 of Example 3. Then A(2,1bis illustrated in Fig. 6.
An obvious property of translation is commutativity, for example, = { z + a + b 3 z E X}. The translate x b of X can be obtained by using the translation operation defined in Section V.A. Another important concept in the mathematical morphology, as well as in signal processing and image processing in general, is the 180"-rotated image. (x,,)b = (Xb),, = x , , + b
B. The 180"-Rotated Image
Given X in Z x Z, the 180" rotation of X is denoted by 3 and is defined as 3 z E X } . Thus, X is found by rotating X 180" about the origin, therefore, 3 is found by replacing each pixel (i, j) in X by (-i, j). An obvious property of 2 is 2 = X , which is called idempotence.
2 = { -z
-
Example 5
Again let A be given in Fig. 2 of Example 3, then 2 is illustrated in Fig. 7. It is constructive to use a see-through plastic model of the image A, and actually rotate it 180" to obtain A'. The image 3 is also obtained from X by
129
THE UNIVERSAL IMAGING ALGEBRA
utilizing a rotation-type operator not defined in this document [see Giardina ( 1985)].
The rotation and translation are often used together in various image(2,) in general. processing operations and it is important to notice that (%)b This can be seen from the following sequence of binary images given in the example below.
+
FIG.6. Binary image A(!,,,.
FIG.7. Binary image 2.
130
CHARLES R. GIARDINA
Example 6
C . Minkowski Subtraction and Addition
Let X and B be subsets of Z x 2, then Minkowski subtraction is defined as X o B = nXb baB
and Minkowski addition is given by the formula x @ B = u X b boB
As an illustration of how X
0 B and X
@
B are found, consider Example 7.
Example 7
Let X and B be given in Figs. 8 and 9. X
FIG.8. Binary image X.
FIG.9. Binary image B.
THE UNIVERSAL IMAGING ALGEBRA
131
In this example, only the one pixel up, one pixel to the right translate of X, (X(l,lJmust be found, since X(o,o)= X. Therefore X 0 B = X n X(I ,l )= ((2, l)}and X 0 B = X u X , , , , , is illustrated in Fig. 10.
FIG. 10. Binary image X 0 B.
D. Erosion and Dilation These set theoretic operations described in the previous section motivate the introduction of the two-image processing operations of erosion and dilation. The erosion of X by B is
b ( X , B) = { y
3 4c
X}
and the dilation of X by B is
g ( X , B ) = { y 3 B y n X f cp) Example 8 If X and B are given as in Example 7, then b ( X , B ) = {(1,0)} because the only translate of B which is a subset of X is B(l,o).The dilation 9 ( X , B) is illustrated below in Fig. 11. Notice that the Minkowski subtraction X 0 B operator and erosion operator when applied to the sets in the above example did not yield the same result, nor did Minkowski addition and dilation. However, there is a relation between these operations.
132
CHARLES R. GIARDINA D(X.B)
FIG. 11. Binary image D(X,B).
Specifically, b ( X , B )= X 8
fi
B ( X , B ) = X Q3
B
and In the previous examples B = {(O,O), (- 1, - l)};a simple calculation could be conducted illustrating the above relationship. For the general case, b ( X , B ) = n b E S X bcan be shown in two steps: First show &‘(X,B) c n b E d X b This . is true since if any fixed y in b ( X , B ) is chosen; then Byc X , i.e., for every b in B, b + y E X , or for every b in B, y E X + and so Y E n b , B X - b or equivalently Y E n b E S X b .When b ( X , B ) is empty there is nothing to do since the empty set is a subset of every set; furthermore, the vacuous situation will not be mentioned anywhere else in this document. To conclude the proof, the second property to be shown is that n b G S X bc b(X,B).So let y be an element of n b , , X b . For every b in B, y is in X + , that is, for every b in B, y + b is in X, or equivalently Byc X and therefore y E 4 x ,B). The identity involving dilation could be shown in exactly the same way. For any fixed y in 9 ( X ,B), Byn X # cp and so there is a pixel in common. There are some b in B and x in X such that b + y = x or equivalently y E X - b and so y E u b e B x - b = U b p i X b . Conversely let y be any fixed pixel in u b , S x b . For this y there has to be a b in B such that y E x-b;thus, there has to be an x in X such that y = x - b, or y + b = x, and consequently By n X cannot be empty, that is, y E 9 ( X , B).
THE UNIVERSAL IMAGING ALGEBRA
133
Theerosion and dilation operations could be found directly or by utilizing Minkowski subtraction and addition, respectively. Furthermore, the dilation and erosion operations are themselves related under complementation. Specifically, 9 ( X , B ) = [ S ( X ’ ,B)]’ or equivalently
(X ⊕ B̂) = (X′ ⊖ B̂)′

An instance of this identity is easily made by considering the previous examples. The identity itself follows from what was previously shown and by applying De Morgan's theorem, and so

D(X, B) = X ⊕ B̂ = ⋃_{b∈B̂} X_b = ( ⋂_{b∈B̂} (X_b)′ )′ = ( ⋂_{b∈B̂} (X′)_b )′ = (X′ ⊖ B̂)′ = [E(X′, B)]′
E. Some Properties of Erosion and Dilation
In this section some well-known properties of erosion and dilation shall be reviewed. Verification of only a few of these properties will be given because of the commonality in the method. The properties are described using Minkowski addition and subtraction due to the relations previously derived and the simplicity of infix notation.

1. Properties of Dilation

D1. Translation Invariance:  A ⊕ B_y = (A ⊕ B)_y
D2. Commutativity:  A ⊕ B = B ⊕ A
D3. Associativity:  (A ⊕ B) ⊕ C = A ⊕ (B ⊕ C)
D4. Identity:  A ⊕ {(0, 0)} = {(0, 0)} ⊕ A = A
D5. Distribution with Union:  A ⊕ (B ∪ C) = (A ⊕ B) ∪ (A ⊕ C)
D6. Distribution Inclusion with Intersection:  A ⊕ (B ∩ C) ⊂ (A ⊕ B) ∩ (A ⊕ C)
2. Properties of Erosion

E1. Translation Invariance:  A ⊖ B_y = (A ⊖ B)_y
E2. Noncommutativity:  A ⊖ B ≠ B ⊖ A, in general
E3. Nonassociativity:  (A ⊖ B) ⊖ C ≠ A ⊖ (B ⊖ C), in general
E4. Commutative Associativity:  (A ⊖ B) ⊖ C = (A ⊖ C) ⊖ B
E5. Inverted Left Distribution with Union:  A ⊖ (B ∪ C) = (A ⊖ B) ∩ (A ⊖ C)
E6. Right Distribution with Intersection:  (B ∩ C) ⊖ A = (B ⊖ A) ∩ (C ⊖ A)
E7. Partial Inverted Left Distribution with Intersection:  A ⊖ (B ∩ C) ⊃ (A ⊖ B) ∪ (A ⊖ C)
E8. Partial Inverted Right Distribution with Union:  (A ∪ B) ⊖ C ⊃ (A ⊖ C) ∪ (B ⊖ C)
E9. Right Identity:  A ⊖ {(0, 0)} = A
E10. Nonleft Identity:  {(0, 0)} ⊖ A ≠ A, in general
Besides the relations previously described involving complementation, dilation and erosion have the property

DE1. Inverted Associativity:  A ⊖ (B ⊕ C) = (A ⊖ B) ⊖ C

Property D1 can be proved by writing

(A ⊕ B)_y = {x + y : B̂_x ∩ A ≠ φ} = {x + y : there is a b in B and an a in A such that x − b = a}

and

A ⊕ B_y = {z : B̂_{z−y} ∩ A ≠ φ} = {z : there is a b in B and an a in A such that z − y − b = a}

Setting z equal to x + y in the top expression shows that

(A ⊕ B)_y = A ⊕ B_y

concluding the proof.
concluding the proof. Property 0 2 follows immediately by noticing that A @B=
A, = bEB
U {a + b 3
a ~ A =) { a
+bSaEA,bEB}
bEB
= u ( a + b , b ~ B } = B @ A aeA
Property D3 holds since

(A ⊕ B) ⊕ C = ⋃_{c∈C} (A ⊕ B)_c = ⋃_{c∈C} (A ⊕ B_c) = ⋃_{c∈C} ⋃_{z∈B_c} A_z = ⋃_{c∈C} {a + z : a ∈ A, z ∈ B_c}
 = {a + (b + c) : a ∈ A, b ∈ B, c ∈ C} = ⋃_{a∈A} {a + b + c : b ∈ B, c ∈ C}
 = {b + c : b ∈ B, c ∈ C} ⊕ A = A ⊕ {b + c : b ∈ B, c ∈ C} = A ⊕ (B ⊕ C)
Property D4 holds since

A ⊕ {(0, 0)} = ⋃_{y∈{(0,0)}} A_y = {a + (0, 0) : a ∈ A} = A
Property D5 follows since

A ⊕ (B ∪ C) = ⋃_{z∈B∪C} A_z = ⋃_{w∈A} (B ∪ C)_w = ⋃_{w∈A} (B_w ∪ C_w) = ( ⋃_{w∈A} B_w ) ∪ ( ⋃_{w∈A} C_w )
 = (B ⊕ A) ∪ (C ⊕ A) = (A ⊕ B) ∪ (A ⊕ C)

Property D6 is true since

A ⊕ (B ∩ C) = (B ∩ C) ⊕ A = ⋃_{a∈A} (B ∩ C)_a = ⋃_{a∈A} (B_a ∩ C_a)
 = ⋃_{a∈A} [ {b + a : b ∈ B} ∩ {c + a : c ∈ C} ] ⊂ ⋃_{a∈A} {b + a : b ∈ B} = B ⊕ A = A ⊕ B
Similarly, A ⊕ (B ∩ C) ⊂ A ⊕ C, and so A ⊕ (B ∩ C) ⊂ (A ⊕ B) ∩ (A ⊕ C). The converse of Property D6 is not true, as can be seen by the following counterexample.

Example 9
Consider the images A, B, and C depicted in Figs. 12-14.
FIG. 12. Binary image A .
FIG.13. Binary image B.
FIG. 14. Binary image C.
FIG. 15. Binary image A ⊕ B.
A ⊕ B and A ⊕ C are illustrated in Figs. 15 and 16, respectively. Furthermore, (A ⊕ B) ∩ (A ⊕ C) ≠ A ⊕ (B ∩ C) = A ⊕ {(0, 0)} = A.
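A quick numeric check of properties D5 and D6 is easy with the set-of-pixels convention used in the earlier sketches; the images A, B, and C below are hypothetical test data, not those of Figs. 12-14.

```python
# Checking D5 (equality for union) and D6 (inclusion only for intersection).

def translate(X, b):
    return {(x + b[0], y + b[1]) for (x, y) in X}

def madd(X, B):
    out = set()
    for b in B:
        out |= translate(X, b)
    return out

A = {(0, 0), (1, 0)}
B = {(0, 0), (0, 1)}
C = {(0, 0), (1, 1)}

assert madd(A, B | C) == madd(A, B) | madd(A, C)      # D5
assert madd(A, B & C) <= (madd(A, B) & madd(A, C))    # D6: inclusion only
```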
F. Opening and Closing Operations
A very useful operation in mathematical morphology is the opening operation O(X, B) of X by B, both subsets of Z × Z. This operation is defined in terms of erosion and dilation, and the result is often denoted by X_B. Thus,

O(X, B) = X_B = (X ⊖ B̂) ⊕ B
FIG. 16. Binary image A ⊕ C.
A related operation is the closing operation C(X, B) = X^B, involving binary images X and B, and

C(X, B) = X^B = (X ⊕ B̂) ⊖ B
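Following the two formulas above, the opening and closing can be composed from the Minkowski operations already sketched; the representation and helper names are again assumptions of the sketch.

```python
# Binary opening X_B = (X (-) B-hat) (+) B and closing X^B = (X (+) B-hat) (-) B.

def translate(X, b):
    return {(x + b[0], y + b[1]) for (x, y) in X}

def reflect(B):
    return {(-x, -y) for (x, y) in B}

def msub(X, B):
    out = None
    for b in B:
        Xb = translate(X, b)
        out = Xb if out is None else out & Xb
    return out if out is not None else set()

def madd(X, B):
    out = set()
    for b in B:
        out |= translate(X, b)
    return out

def opening(X, B):
    return madd(msub(X, reflect(B)), B)

def closing(X, B):
    return msub(madd(X, reflect(B)), B)
```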
As an illustration consider the example below.

Example 10

Let X and B be given in Figs. 17 and 18. The reflected image B̂ is given in Fig. 19. Applying the erosion and dilation operations gives the results depicted in Figs. 20 and 21, respectively. As a consequence, the opening and closing are given in Figs. 22 and 23, respectively.
FIG.17. Binary image X.
FIG. 18. Binary image B.
FIG. 20. Binary image E(X, B).
FIG. 21. Binary image D(X, B).
FIG. 22. Binary image X_B.
FIG. 23. Binary image X^B.
As previously mentioned, the operations of opening and closing are related; in particular it is true that

(X^B)′ = (X′)_B
This follows from the results proved earlier involving erosion, dilation, and complementation. The details are

(X^B)′ = [(X ⊕ B̂) ⊖ B]′ = (X ⊕ B̂)′ ⊕ B = (X′ ⊖ B̂) ⊕ B = O(X′, B) = (X′)_B
A more useful relationship involving the opening is

X_B = ⋃_{B_z ⊂ X} B_z

This identity provides a practical and quick method for obtaining X_B. The opening of X by B in the previous example could again be found, this time using the above relationship; instead consider the next example.
Example 11

Consider the image X from Example 10 and the images A and C illustrated in Figs. 24 and 25. The opening of X by A is X, i.e., X_A = X, and the opening of X by C is illustrated in Fig. 26. Another interesting property of the opening is that y ∈ X_B when and only when B̂_y ∩ (X ⊖ B̂) ≠ φ. This could be shown by using properties of erosion and dilation previously derived. Specifically, y ∈ X_B when and only when y ∈ ⋃_{z∈B} (X ⊖ B̂)_z, and this occurs if and only if there is a b in B such
FIG.24. Binary image A .
FIG.25. Binary image C.
FIG.26. Binary image X,.
that y ∈ (X ⊖ B̂)_b, that is, y = a + b for some a in X ⊖ B̂, equivalently y − b = a, but this means that a ∈ B̂_y, or (X ⊖ B̂) ∩ B̂_y ≠ φ. By using the above property the principal identity in this section, X_B = ⋃_{B_z⊂X} B_z, can be proved. First, to show that X_B ⊂ ⋃_{B_z⊂X} B_z, choose a fixed point y ∈ X_B. Using the above property it follows that B̂_y ∩ (X ⊖ B̂) ≠ φ, and therefore there is a point a ∈ B̂_y with a ∈ X ⊖ B̂, which implies that B_a ⊂ X. It follows that a = y − b for some b in B, and so y = a + b, or y ∈ B_a, and B_a ⊂ X, which implies that y ∈ ⋃_{B_z⊂X} B_z. Going the other way, i.e., to show that

⋃_{B_z⊂X} B_z ⊂ X_B
pick a point y in ⋃_{B_z⊂X} B_z, and so there exists a point a such that y ∈ B_a ⊂ X. As a consequence, there is a b in B such that y = b + a, and so a = y − b. From this it follows that a ∈ B̂_y; if it is also shown that a ∈ X ⊖ B̂, the proof would be concluded, since then it would be shown that B̂_y ∩ (X ⊖ B̂) ≠ φ. However, since B_a ⊂ X, this shows that a ∈ X ⊖ B̂ and the proof ends. An analogous property for the closing is
X^B = ⋂_{B_z ⊂ X′} (B_z)′
This relation follows easily from previous results derived herein and by using
De Morgan's theorem. Specifically,

X^B = [(X′)_B]′ = [ ⋃_{B_z⊂X′} B_z ]′ = ⋂_{B_z⊂X′} (B_z)′
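The "fit and union" identity for the opening is easy to implement directly; the sketch below keeps the set-of-pixels convention, and the finite search window is an assumption made for illustration. On small test images it agrees with the erosion-then-dilation form of the opening given earlier.

```python
# Opening via the principal identity: X_B is the union of the translates of B
# that fit entirely inside X.

def translate(X, b):
    return {(x + b[0], y + b[1]) for (x, y) in X}

def opening_by_fitting(X, B, bound=8):
    out = set()
    for i in range(-bound, bound + 1):
        for j in range(-bound, bound + 1):
            Bz = translate(B, (i, j))
            if Bz <= X:          # B_z fits inside X
                out |= Bz        # keep every pixel this translate covers
    return out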
G. Some Properties of Opening and Closing Operations

The opening and closing operations defined in Section II.F are openings and closings (see Burris and Sankappanavar, 1981) in the strict universal algebra sense. The opening operation is

O1. Antiextensive:  X_B ⊂ X
O2. Increasing:  X ⊂ Y ⇒ X_B ⊂ Y_B
O3. Idempotent:  (X_B)_B = X_B

and the closing operation satisfies
C1. Extensive:  X ⊂ X^B
C2. Increasing:  X ⊂ Y ⇒ X^B ⊂ Y^B
C3. Idempotent:  (X^B)^B = X^B
Property O1 follows from using X_B = ⋃_{B_y⊂X} B_y, since if x ∈ X_B then x is in some B_y ⊂ X and so x ∈ X. Property C1 is true by using O1 and the fact that (X^B)′ = (X′)_B. Use O1 with (X′)_B ⊂ X′; thus (X^B)′ ⊂ X′ and so X ⊂ X^B. Property O2 holds since from X ⊂ Y it follows that

X_B = ⋃_{B_z⊂X} B_z ⊂ ⋃_{B_z⊂Y} B_z = Y_B

Property C2 follows from O2 and properties of complementation. X ⊂ Y implies Y′ ⊂ X′ and so (Y′)_B ⊂ (X′)_B; taking complements gives ((X′)_B)′ ⊂ ((Y′)_B)′ and so X^B ⊂ Y^B. Property O3 is proved in two steps: (X_B)_B ⊂ X_B follows from O1 and O2. Next, X_B ⊂ (X_B)_B follows by using X_B = ⋃_{B_y⊂X} B_y; so let z ∈ X_B, then there is a B_y ⊂ X such that z ∈ B_y. It must be shown that z ∈ (X_B)_B = ⋃_{B_z⊂X_B} B_z, but this follows by using the same B_y as before, since B_y ⊂ X_B; thus z ∈ B_y ⊂ (X_B)_B. Finally, Property C3 is proved by using O3 along with properties of complementation, namely,

X^B = ((X′)_B)′ = (((X′)_B)_B)′ = (((X^B)′)_B)′ = (X^B)^B
as was to be shown.
To say that an image X is open with respect to the image B means that

X_B = X

An example of an image open with respect to another image was given in Example 11, where

X_A = X

Every image is open with respect to itself, that is, X_X = X. This follows from the results given at the end of Section II.F. In a similar fashion, to say that an image X is closed with respect to the image B means

X^B = X

As could be expected there is a duality type theorem: X is closed with respect to B when and only when X′ is open with respect to B. The proof of this assertion follows easily from properties of complementation. The image X is closed with respect to B means X^B = X; this holds if and only if (X^B)′ = X′, which holds when and only when (X′)_B = X′, that is, when X′ is open with respect to B.
III. ARITHMETIC OPERATORS
Among the most basic operations employed in image processing are the arithmetic operators defined in this section. These operations appear in various signal processing and statistical approaches to imaging.

A. Addition Operator
The addition operator A is used to determine an image g whose gray level at each fixed pixel is the sum of the gray levels at the corresponding pixel of two prespecified images f and h. Whenever the sum of the gray levels exceeds 1, 1 is employed for the resulting image. It follows that

A: [0, 1]^{Z×Z} × [0, 1]^{Z×Z} → [0, 1]^{Z×Z}

and

A(f, h) = g = {(x, y) | u_g(x, y)}

where

u_g(x, y) = u_f(x, y) + u_h(x, y)   when u_f(x, y) + u_h(x, y) ≤ 1

and u_g(x, y) = 1 otherwise.
Example 12
Let f and h be given in Figs. 27 and 28. Then A(f,h) = g is illustrated in Fig. 29.
FIG.27. Imagef.
FIG.28. Image h.
FIG. 29. Image A(f, h).
B. Subtraction Operator
The subtraction operator S is defined somewhat similarly to the addition operator. The subtraction of two images results in a third image with pixel gray values given by the algebraic difference of the original two images. Since gray values cannot be denoted by a negative number, 0 is employed whenever the difference is negative. So

S: [0, 1]^{Z×Z} × [0, 1]^{Z×Z} → [0, 1]^{Z×Z}

where S(f, h) = g = {(x, y) | u_g(x, y)} with

u_g(x, y) = u_f(x, y) − u_h(x, y)   when u_f(x, y) ≥ u_h(x, y)

u_g(x, y) = 0   otherwise
An immediate application for this operator is in "manufacturing" a negative image. Letting u_f(x, y) = 1 for all (x, y) and setting g = S(f, h), it follows that u_g(x, y) will be light whenever u_h(x, y) is dark, and conversely.
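A minimal sketch of the addition and subtraction operators follows, assuming a gray image is a dict mapping (x, y) to a value in [0, 1] with missing pixels treated as white (0); this representation, and restricting the negative to the support of h, are assumptions of the sketch.

```python
# Addition clipped at 1 (all black) and subtraction clipped at 0 (white).

def gray(img, p):
    return img.get(p, 0.0)

def add(f, h):
    keys = set(f) | set(h)
    return {p: min(1.0, gray(f, p) + gray(h, p)) for p in keys}

def sub(f, h):
    keys = set(f) | set(h)
    g = {p: max(0.0, gray(f, p) - gray(h, p)) for p in keys}
    return {p: v for p, v in g.items() if v > 0}

def negative(h):
    """Subtract h from an all-black image over h's support (a simplification)."""
    all_black = {p: 1.0 for p in h}
    return sub(all_black, h)
```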
Example 13
Let f and h be given in Example 12. Then S(f, h) is illustrated in Fig. 30.
FIG.30. Image S( f,h).
C. Multiplication Operator

The multiplication operator M provides an image whose gray levels are the product of the gray levels of two prespecified images:

M: [0, 1]^{Z×Z} × [0, 1]^{Z×Z} → [0, 1]^{Z×Z}

M(f, h) = g = {(x, y) | u_g(x, y)}
where

u_g(x, y) = u_f(x, y) · u_h(x, y)
Example 14
Let f and h be given in Example 12. Then M(f, h) = g is given in Fig. 31. This operator is useful in masking and segmentation. If a certain portion of an image f is to be unchanged, and it is desired to attenuate the rest, then let h = {(x, y) | u_h(x, y)}
be such that u_h(x, y) = 1 for (x, y) in D (the region to be unchanged) and u_h(x, y) = 0 otherwise.
FIG. 31. Image M(f, h).
Performing the multiplication yields M(f, h) = g with

u_g(x, y) = u_f(x, y)   for (x, y) in D

and

u_g(x, y) = 0   otherwise

as desired.

D. Division Operator

The division operator D, when used, gives an image with gray level equal to the quotient of the gray levels of two prespecified images. Whenever the quotient exceeds 1, the value 1 is utilized in the resulting image. So

D: [0, 1]^{Z×Z} × [0, 1]^{Z×Z} → [0, 1]^{Z×Z}
D(f, h) = g = {(x, y) | u_g(x, y)}, where
u_g(x, y) = u_f(x, y)/u_h(x, y)   when u_h(x, y) ≠ 0 and u_f(x, y) ≤ u_h(x, y)

u_g(x, y) = 0   when u_f(x, y) = 0 and u_h(x, y) = 0

u_g(x, y) = 1   otherwise

Example 15
Again let f and h be given in Example 12. Then D(f,h) = g is given in Fig. 32.
FIG.32. Image D ( f , h ) .
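The multiplication and division operators can be sketched in the same dict-of-gray-values style; the 0/0 convention and clipping at 1 follow the definitions above, and the representation is an assumption of the sketch.

```python
# Pixelwise multiplication and clipped division of gray images.

def gray(img, p):
    return img.get(p, 0.0)

def mul(f, h):
    keys = set(f) | set(h)
    g = {p: gray(f, p) * gray(h, p) for p in keys}
    return {p: v for p, v in g.items() if v > 0}

def div(f, h):
    g = {}
    for p in set(f) | set(h):
        a, b = gray(f, p), gray(h, p)
        if a == 0 and b == 0:
            g[p] = 0.0            # 0/0 convention
        elif b != 0 and a <= b:
            g[p] = a / b          # quotient at most 1
        else:
            g[p] = 1.0            # clipped at 1 (also covers a > 0, b = 0)
    return {p: v for p, v in g.items() if v > 0}
```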
E. Some Properties of Arithmetic Operators

Several relationships involving the arithmetic operators will be given below. Addition satisfies the following properties:

A1. Associativity:  A(h, A(f, g)) = A(A(h, f), g)
A2. Identity:  A(f, φ) = A(φ, f) = f, where φ is the all-white image
A3. Commutativity:  A(f, g) = A(g, f)
Addition and subtraction are related by the properties:

AS1. No Left Inverse:  A(f, S(φ, f)) ≠ φ, in general
AS2. Right Inverse:  S(f, A(φ, f)) = φ = S(f, f)
Multiplication obeys the properties:
M1. Associativity:  M(h, M(f, g)) = M(M(h, f), g)
M2. Identity:  M(f, 1) = M(1, f) = f, where 1 denotes the all-black image
M3. Commutativity:  M(f, g) = M(g, f)
Multiplication and division are related by the properties:
MD1. No Left Inverse:  M(f, D(1, f)) ≠ 1, in general
MD2. No Right Inverse:  D(f, M(1, f)) = D(f, f) ≠ 1, in general

Property AS1 follows since S(φ, f) = φ and so A(f, S(φ, f)) = f. Therefore, for f ≠ φ, A(f, S(φ, f)) ≠ φ. Property MD1 holds since D(1, f) = 1 and so M(f, D(1, f)) = f, which equals 1 if and only if f = 1. Property MD2 is true by noting that M(1, f) = f, and therefore D(f, f) = 1 when and only when no pixel in f is 0.
IV. TOPOLOGICAL OPERATIONS

In this section several topological concepts are introduced that involve images. Topological and geometrical operators are defined along with relationships between these operations.
A. Neighborhood Concept

A K neighborhood of a pixel p = (x, y) in an image f is an image, and is denoted by η_K(f, p). When K = 0 this image equals 0 for all pixels except at p and equals u_f(p) at p. When K = 1 this image is 0 for all pixels (except p) which are not adjacent or diagonally adjacent to p. In this nine-pixel neighborhood the gray values of f are utilized. When the image f is given, η_K(f, p) will often be denoted more simply by η_K(p), and it is defined for p = (i, j) by
η_K(p) = {(i + x, j + y) | u_f(i + x, j + y), where max(|x|, |y|) ≤ K}

K = 0, 1, 2, .... So η_K(f, p) will be zero outside a square centered at p, with sides of length 2K + 1, and will have gray values identical to f for pixels inside the square.

B. Bounded Image
An image f in [0, 1]^{Z×Z} is said to be bounded, meaning there are only a finite number of nonwhite pixels in f. This is equivalent to the statement that f is bounded when and only when for any nonwhite pixel in f there is a neighborhood about it containing all other nonwhite pixels of f. The latter statement could be used to generalize the definition of bounded images to "continuous images" as defined in Section VI.A. In any case, an image that is not bounded is said to be unbounded.

Example 16

Consider the image f in Fig. 33. This image is bounded since the neighborhood η_2((2, 3)) contains all of the nonwhite pixels of f.
FIG.33. Image f.
Let f = {(x, y) | 1 when x + y is odd}. This "checkerboard" image is unbounded.
C. Connection Operator
Stating that the image f = {(x, y) | u_f(x, y)} is connected means that for every two nonwhite pixels (x_0, y_0), (x_n, y_n) in f, i.e., u_f(x_0, y_0) ≠ 0 and u_f(x_n, y_n) ≠ 0, there is a nonwhite path between them (made up of adjacent or diagonal pixels). So there must exist u_f(x_1, y_1), u_f(x_2, y_2), ..., u_f(x_{n−1}, y_{n−1}), all nonzero, such that x_{i+1} = x_i or x_i − 1 or x_i + 1 and y_{i+1} = y_i or y_i − 1 or y_i + 1 for i = 0, 1, 2, ..., n − 1. When the above is not true, the image is said to be a disconnected image.
Example 17

A disconnected image f is depicted in Fig. 34. Here u_f(1, 4) = 1/2 and u_f(2, 2) = 3/4, but u_f(1, 3) and u_f(2, 3) are both 0. The connection operator C is used to find a maximally connected subimage of a given image f; that is, it is used to find all nonwhite pixels connected to a given pixel for a given image f. Given the pixel p_i = (x_i, y_i), let

C(p_i) = φ   when u_f(x_i, y_i) = 0

Otherwise, C(p_i) = g, where g is connected, g ⊂ f, and u_g(x_i, y_i) ≠ 0, and if h is connected, h ⊂ f, and u_h(x_i, y_i) ≠ 0, then h ⊂ g.
In other words, given a nonwhite pixel pi in f the connection operator yields “the largest” (sub)image in f containing pi which is connected.
FIG. 34. Disconnected image f.
Example 18
Consider the image f in Fig. 35. Then
C(6, 4) = {(6, 4) | 3/4}
C(8, 1) = {(8, 1) | 1/4, (8, 2) | 1/2}
C(3, 2) = {(3, 2) | 1/2, (2, 1) | 3/4, (2, 2) | 1/2, (4, 3) | 1/4}

In order to use the connection operator C, three things must first be specified; the first is the image f and the other two give the location of the pixel p. Hence,
C: [0, 1]^{Z×Z} × Z × Z → [0, 1]^{Z×Z}
FIG. 35. Image f.
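The connection operator amounts to collecting all nonwhite pixels reachable from the starting pixel through 8-connected nonwhite neighbors. The sketch below assumes the dict-of-gray-values representation used earlier; the data shown are a few of the pixels from Example 18.

```python
# C(f, p): the maximal connected nonwhite subimage of f containing p.

def connection(f, p):
    if f.get(p, 0.0) == 0:
        return {}                  # all-white result
    seen = {p}
    stack = [p]
    while stack:
        x, y = stack.pop()
        for i in (-1, 0, 1):
            for j in (-1, 0, 1):
                q = (x + i, y + j)
                if q not in seen and f.get(q, 0.0) != 0:
                    seen.add(q)
                    stack.append(q)
    return {q: f[q] for q in seen}

f = {(3, 2): 0.5, (2, 1): 0.75, (2, 2): 0.5, (4, 3): 0.25, (6, 4): 0.75}
print(connection(f, (3, 2)))       # four connected pixels; (6, 4) is not reached
```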
D. Boundary Finding Operator
The boundary of an image f is an image g (g is often denoted as ∂f) made up of those nonwhite pixels of f which have an adjacent or diagonally adjacent white pixel. If f = {(x, y) | u_f(x, y)}, then g = {(x, y) | u_g(x, y)}
such that

u_g(x, y) = u_f(x, y)

when and only when u_f(x, y) ≠ 0 and there exist i and j, i = −1, 0, 1 and j = −1, 0, 1, such that

u_f(x + i, y + j) = 0

All other u_g(x, y) are 0. Let us illustrate the concept of boundary.
Example 19

Let f be given in Fig. 36. Then the boundary g of this image is provided in Fig. 37. Notice that u_g(3, 3) = 3/4 since u_f(3 + i, 3 + j) = 0 with i = −1 and j = −1; however, u_g(3, 4) = 0.
Let B denote the boundary operator which maps an image into its boundary as defined above. This operator is a unary operator and is such that

B: [0, 1]^{Z×Z} → [0, 1]^{Z×Z}
FIG.36. Imagef.
FIG. 37. Boundary image of f.
E. Interior Finding (or Boundary Peeling) Operator
The interior of a given image f is another image g (sometimes g is denoted as f°) found by (deleting or) subtracting from f the boundary image B(f). The interior g = f° is given by the formula
g = {(x, y) | u_g(x, y)}   where   u_g(x, y) = u_f(x, y) − u_{B(f)}(x, y)
The gray level in the resulting image is the pixel-by-pixel arithmetic difference of the original gray level and the gray level of the boundary image. The interior finding operator I could be defined such that I(f) = f° = S(f, B(f)), where S is the subtraction operator and B is the boundary finding operator. This operator takes an image into a "new" image by removing (whitening out) the boundary and leaving everything else unchanged, and it is a unary operator. Thus

I: [0, 1]^{Z×Z} → [0, 1]^{Z×Z}
Example 20

Consider f in Fig. 38. Since the boundary image consists of all nonwhite pixels which are adjacent or diagonally touching a white pixel, and since I(f) consists of f minus its boundary, the image in Fig. 39 is obtained for I(f).
FIG.38. Image f.
FIG. 39. Interior image of f.
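A minimal sketch of the boundary and interior operators follows, under the same dict-of-gray-values assumption; iterating the subtraction peels successive boundary layers, which is the thinning recursion discussed just below.

```python
# Boundary finding, interior finding, and iterated boundary peeling.

def neighbors(p):
    x, y = p
    return [(x + i, y + j) for i in (-1, 0, 1) for j in (-1, 0, 1) if (i, j) != (0, 0)]

def boundary(f):
    """Nonwhite pixels of f having an adjacent or diagonally adjacent white pixel."""
    return {p: v for p, v in f.items()
            if v != 0 and any(f.get(q, 0.0) == 0 for q in neighbors(p))}

def interior(f):
    """I(f) = S(f, B(f)): whiten the boundary, keep everything else."""
    b = boundary(f)
    return {p: v for p, v in f.items() if p not in b}

def peel(f, k):
    """Apply the interior operator k times."""
    for _ in range(k):
        f = interior(f)
    return f
```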
Numerous layers of boundaries are often "peeled off," and a recursion could be performed, for instance,

I_1(f) = f − ∂f
I_{k+1}(f) = I_k(f) − ∂I_k(f)

which is sometimes called thinning.

F. Expansion Operator
It was seen that the interior operator when applied to an image f renders an image f° "smaller than" f, that is, f° is a subset of f. The expansion operator X does somewhat the opposite: if X(f) = g then the original image f will be a subset of g. The new image is obtained from f by making every white pixel in f that touches a nonwhite pixel the darkest value of gray. Specifically,

X: [0, 1]^{Z×Z} → [0, 1]^{Z×Z}

and X(f) = g, where

u_g(x, y) = u_f(x, y)   when u_f(x, y) ≠ 0

u_g(x + i, y + j) = max u_f,   i = −1, 0, 1 and j = −1, 0, 1,

whenever u_f(x, y) ≠ 0 and u_f(x + i, y + j) = 0.
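A short sketch of the expansion operator under the same representation follows; treating max u_f as the maximum gray value occurring in f is how the definition above is read here.

```python
# Expansion: white neighbors of nonwhite pixels receive the darkest value in f.

def expansion(f):
    if not f:
        return {}
    darkest = max(f.values())
    g = dict(f)                               # nonwhite pixels keep their values
    for (x, y), v in f.items():
        if v == 0:
            continue
        for i in (-1, 0, 1):
            for j in (-1, 0, 1):
                q = (x + i, y + j)
                if f.get(q, 0.0) == 0:
                    g[q] = darkest            # darken the white neighbor
    return g
```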
An illustration of this operator follows.

Example 21

Consider the image f given in Fig. 40. A simple application of the expansion operator X(f) = g gives the results depicted in Fig. 41.

G. Accumulation and Isolated Pixels
A pixel p = (x_i, y_i) is said to be an accumulation pixel for an image f when u_f(x_i, y_i) > 0 and every neighborhood η_K of p, K = 1, 2, ..., contains at least one other nonwhite pixel of f. Whenever η_1(p) contains at least one other nonwhite pixel of f, then so does η_K(p), K = 2, 3, ...; hence the definition could be modified accordingly.
FIG.40. Image f.
FIG. 41. Image X(f).
Example 22
Let f be given in Fig. 42. All nonwhite pixels in this image are accumulation pixels, except (8, 2). The latter pixel is called an isolated pixel, and the definition is introduced in the next paragraph. A pixel (x, y) which has the property that u_f(x, y) > 0 and is not an accumulation pixel is said to be an isolated pixel. Hence, for an isolated pixel a neighborhood η_1((x, y)) can be found such that the only nonwhite pixel of the neighborhood is the center pixel (x, y).
FIG. 42. Image f.
H. Interior Pixel and the Maximal Interior Neighborhood

For an image f, the pixel p = (x, y) with u_f(x, y) > 0 is said to be interior if there exists a neighborhood η_K(p) about p for some K ≥ 1 such that all points in this neighborhood are nonwhite. The neighborhood with the "largest" K for which this is true is called the maximal interior neighborhood at p. If no largest exists (for a necessarily all-nonwhite (unbounded) image), η_∞(p) = f should be used for the maximal interior neighborhood of p. An operator M, called the maximal interior neighborhood operator, will be defined. If p = (x, y) and u_f(x, y) = 0, then M(p) = φ. When u_f(x, y) ≠ 0 and there is a white pixel in f, then let M(p) = {(x + i, y + j) | u_f(x + i, y + j)}, using the largest K such that u_f(x + i, y + j) > 0 for i = −K, ..., −2, −1, 0, 1, 2, ..., K,
FIG. 43. Image f.
FIG.44. Maximal interior neighborhood of image f at (3,4).
and j = −K, ..., −1, 0, 1, ..., K}. Finally, if u_f(x, y) ≠ 0 and there is no white pixel in f, then let M(p) = f.

Example 23
Consider the image f given in Fig. 43. Then the only interior pixels in this image are (3, 4), (3, 5), and (3, 6). Letting p = (3, 4), then M(p) is given in Fig. 44. If we let p = (7, 6), then M(p) = {(7, 6) | 1/4}. In order to use the maximal interior neighborhood operator three things must be specified; that is, this operator is ternary. The image f with which we are dealing must be specified along with the x and y locations of the pixel p used, and so

M: [0, 1]^{Z×Z} × Z × Z → [0, 1]^{Z×Z}
I. Some Properties of Topological Operators

In this section a few properties will be given of some of the topological operators. The boundary finding operator satisfies the properties:
B1. Antiextensive:  B(f) ⊂ f
B2. Nonmonotonic:  f ⊂ g ⇏ B(f) ⊂ B(g), in general
B3. Idempotent:  B(B(f)) = B(f)

The interior operator satisfies the properties:
I1. Antiextensive:  I(f) ⊂ f
I2. Increasing:  f ⊂ g ⇒ I(f) ⊂ I(g)
I3. Nonidempotent:  I²(f) = I(I(f)) ≠ I(f), in general
I4. Bounded Nilpotency:  f bounded ⇒ I^k(f) = φ for some k ≥ 1

The expansion operator satisfies the properties:
X1. Extensive:  f ⊂ X(f)
X2. Nonmonotonic:  f ⊂ g ⇏ X(f) ⊂ X(g), in general
X3. Nonidempotent:  X(X(f)) ≠ X(f), in general
The interior operator is related to the expansion operator by
IX1. Left Identity Partial Inclusion:  f ⊂ I(X(f))
IX2. No Left Identity:  f ≠ I(X(f)), in general
IX3. No Right Identity Partial Inclusions:  f ⊄ X(I(f)) and X(I(f)) ⊄ f, in general
Property B1 is proved by considering a pixel p such that u_{B(f)}(p) = a ≠ 0. Then, from the definition of B(f), it must be that u_f(p) = a and a neighboring pixel of p is white in f. In any case u_{B(f)}(p) ≤ u_f(p), and this implies that B(f) ⊂ f.
Property B2 is shown by considering the counterexample given below.
Example 24
Let f and g be given in Figs. 45 and 46. Then B(f)= f and B(g) is given in Fig. 47.
FIG.45. Image f,
FIG.46. Image g.
FIG.47. Image B(g).
Property B3 can be shown in a two-step procedure: first, show that B(f) ⊂ B(B(f)) = B²(f), and second, show that B²(f) ⊂ B(f). Again let p = (x, y) and let S denote the set of eight adjacent or diagonally adjacent neighboring pixels of p. Then, say that u_{B(f)}(p) = a ≠ 0; then it must be that u_f(p) = a and there is a q in S such that u_f(q) = 0. Since u_f(q) = 0, it follows that u_{B(f)}(q) = 0, and therefore by applying B to the image B(f) it follows that u_{B²(f)}(p) = a also. Consequently, B(f) ⊂ B²(f). The reverse inclusion follows from Property B1. Hence, B(f) = B²(f).

Property I1 is proved by first choosing any pixel p = (i, j). From the definition of f°, if u_f(p) = 0 then u_{f°}(p) = 0. Also, if u_f(p) ≠ 0 then u_{f°}(p) = 0 when p is noninterior and u_{f°}(p) = u_f(p) when p is interior. In any case u_{f°}(p) ≤ u_f(p); thus f° ⊂ f.

Property I2 can be shown by letting p be a pixel such that u_f(p) = a ≠ 0. Then, since f ⊂ g, it follows that u_g(p) ≥ a, so p might be an interior pixel for both f and g. If p is not an interior pixel for f then u_{I(f)}(p) = 0, and there is nothing to do. So let p be an interior pixel for f. Consequently, all eight neighboring pixels of p have nonzero gray value in f. But f ⊂ g, and therefore the neighbors of p in g also have nonzero gray value. Thus, p is an interior pixel for g, and so u_{I(g)}(p) = u_g(p) ≥ u_{I(f)}(p), and the proof is concluded.

Property I3 can be shown by applying the operator I to the image g in Fig. 46 of Example 24. In that same example, however, I²(f) = I(f).

Property I4 follows by finding the rightmost, uppermost pixel p = (x, y) in f such that u_f(p) ≠ 0. The existence of such a pixel p is guaranteed since the cardinality of f is finite, and the vacuous situation is ignored in this document. In any case, u_f(x, y + 1) = 0 by the definition of p; therefore p is a boundary pixel and the cardinality of B(f) must be at least 1. Next, observe that I(f) = S(f, B(f)); that is, the interior is obtained by taking away the boundary pixels. Since the cardinality of f is finite, the same argument applied to I^i(f), i = 1, 2, ..., implies that I^k(f) must equal φ for some positive integer k. For an unbounded image this property is not true; use the all-black image.

Property X1 is true since for any pixel p, if u_f(p) = a ≠ 0 then u_{X(f)}(p) = a, and so f ⊂ X(f) holds. Property X2 follows by the next counterexample.

Example 25
Let f = {(1, 1) | 1} and g = {(1, 1) | 1, (2, 1) | 3/4}; then X(f) and X(g) are depicted in Figs. 48 and 49. It should be pointed out that if X(f) were defined slightly differently, that is, if the value max u_f used in "making white pixels in f become nonwhite in X(f)" were replaced by 1, then the function X would be increasing. Property X3 trivially holds. For instance, using f given in the previous example shows that X²(f) ≠ X(f). Property IX1 can be seen by first choosing any pixel p = (x, y), and letting S denote the set of all eight adjacent and diagonally adjacent neighbors of p. Then, if u_f(p) = a ≠ 0, then u_{X(f)}(p) = a and u_{X(f)}(q) ≠ 0 for all q ∈ S. The last condition involving pixel q is true since either u_f(q) = b ≠ 0, and then u_{X(f)}(q) = b, or u_f(q) = 0, and then it follows that u_{X(f)}(q) = max u_f. In any case the pixel p in X(f) is an interior pixel and so u_{I(X(f))}(p) = a ≠ 0. When
FIG.48. Image X ( f ) .
FIG.49. Image X ( g ) .
u_f(p) = 0, there is nothing to do since u_{I(X(f))}(p) ≥ 0. So, in any case, u_f(p) ≤ u_{I(X(f))}(p) and so f ⊂ I(X(f)). Property IX2 can be seen by the counterexample given in Example 26.
Example 26

Let f be given in Fig. 50; then I(X(f)) is illustrated in Fig. 51. It is seen that I(X(f)) ≠ f.
FIG.50. Imagef.
FIG. 51. Image I ( X ( f ) ) .
Property IX3 can be seen by the counterexample provided below.
Example 27
Let f be depicted in Fig. 52. Then X(I(f)) is given in Fig. 53, and it is seen that f ⊄ X(I(f)) and X(I(f)) ⊄ f.
FIG.52. Imagef.
FIG. 53. Image X(I(f)).
V. GRAY-VALUED MATHEMATICAL MORPHOLOGY

As mentioned in the introduction, the use of fuzzy set theory allows a natural generalization from black and white morphology into gray-level morphology. Some aspects of this generalization are described in the present article. All results involve replacing one complete lattice ({0, 1}) by another one ([0, 1]). In the next section the translation operation for images is given, and this is followed in subsequent sections by an introduction to fuzzy operators. The fuzzy opening and fuzzy closing operations, along with some of their properties, are also presented.

A. Translation Operation
The translation operation T is a ternary operator. It has as inputs an image f and two integers i and j. The output of this operator is also an image: "this image g is identical to f, but moved over i pixels to the right and j pixels up." Therefore, T: [0, 1]^{Z×Z} × Z × Z → [0, 1]^{Z×Z}, where T(f, i, j) = g = {(x + i, y + j) | u_f(x, y)}. When T is applied to a binary image X then T(X, i, j) = X_{(i,j)}, using the notation of Section II.A. From here on, whenever T is applied to any image f (binary or not), the result T(f, i, j) will be denoted by f_{ij}.
FIG.55. Image T(f,1,2).
Example 28
If f is given in Fig. 54, then T(f, 1, 2) = f_{12} is depicted in Fig. 55.

B. Maximum or Fuzzy Union Operator

The fuzzy union operator ∪ is used in determining an image from two given images. The gray value of the resulting image at a specified pixel is the maximum of the gray values at the same pixel of the given two images. Thus, this operator is often called the maximum operator. Specifically,
∪: [0, 1]^{Z×Z} × [0, 1]^{Z×Z} → [0, 1]^{Z×Z}
and f ∪ h = g, where u_g(x, y) = max[u_f(x, y), u_h(x, y)], and max stands for maximum. A simple illustration of the use of this operator will be given in the following example.

Example 29
Refer to Example 12 where, in Figs. 27 and 28, f and h are defined; then f ∪ h = g is given in Fig. 56. The fuzzy union of an arbitrary number of images is found in a similar manner. Here g = ⋃_{α∈A} f_α, where A is some indexing set, and ⋃_{α∈A} f_α is defined by u_g(x, y) = sup_α[u_{f_α}(x, y)], where sup stands for the supremum or least upper bound.
FIG. 56. Image f ∪ h.
C. Minimum or Fuzzy Intersection Operator
The fuzzy intersection operator ∩ is a binary type operator on images which yields a third image; thus

∩: [0, 1]^{Z×Z} × [0, 1]^{Z×Z} → [0, 1]^{Z×Z}
This operator provides gray values equal to the minimum of the gray values of two given images pixelwise; therefore f ∩ h = g, where u_g(x, y) = min[u_f(x, y), u_h(x, y)], with min standing for minimum.
FIG. 57. Image f n h.
Example 30
Using f and h, again from Figs. 27 and 28 of Example 12, f ∩ h = g is illustrated in Fig. 57. The fuzzy intersection of an arbitrary number of images is found analogously to the union. In this case, ⋂_{α∈A} f_α = g, where u_g(x, y) = inf_α[u_{f_α}(x, y)] and inf is the infimum or greatest lower bound.
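The two pointwise operators are one-liners in the dict-of-gray-values convention assumed in the earlier sketches; missing pixels are white (0), so they drop out of the intersection.

```python
# Fuzzy union (pointwise maximum) and fuzzy intersection (pointwise minimum).

def fuzzy_union(f, h):
    keys = set(f) | set(h)
    return {p: max(f.get(p, 0.0), h.get(p, 0.0)) for p in keys}

def fuzzy_intersection(f, h):
    g = {p: min(f.get(p, 0.0), h.get(p, 0.0)) for p in set(f) | set(h)}
    return {p: v for p, v in g.items() if v > 0}
```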
FIG. 58. Image C(f) = f′.
D. Fuzzy Complementation Operator
The fuzzy complementation operator C is a unary operator which takes an image into an image, therefore,
C: [0, 1]^{Z×Z} → [0, 1]^{Z×Z}

The resulting image has gray values equal to 1 minus the original gray value pixelwise, and so C(f) = g, where u_g(x, y) = 1 − u_f(x, y). The complement of f will often be written as f′.

Example 31
Referring to the image f given in Fig. 27 of Example 12, C(f) = g is illustrated in Fig. 58.

E. Restricting Fuzzy Operators to Binary Images
When the fuzzy operators union, intersection, and complementation are used on binary images, they provide results identical to the usual union, intersection, and complementation. A similar consequence is true for the fuzzy subimage as defined in Section I.B.

F. Fuzzy Opening Operation

The fuzzy opening operator O is a binary operator which takes images into images. It is defined in terms of numerous operators previously given. Specifically, O: [0, 1]^{Z×Z} × [0, 1]^{Z×Z} → [0, 1]^{Z×Z}, where O(f, h) = g and g = ⋃_{(i,j)∈K} h_{ij}, where the index set K is defined by K = {(i, j) : h_{ij} ⊂ f}. The union is a fuzzy union and the subset is also fuzzy; h_{ij} is the translate of h, i units to the right and j units up, so h_{ij} = T(h, i, j). The symbol O was previously employed for the opening operator for binary images. The use of this symbol for this operation presents no difficulty. If f and h are binary and this operator is applied to these images, f_h would result as in the binary case. An example illustrating the mechanics of the operation follows.
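A sketch of the fuzzy opening under the same assumptions follows: take every translate h_ij that is a fuzzy subimage of f (pointwise less than or equal) and accumulate their fuzzy union. The finite search window is an assumption made for illustration.

```python
# Fuzzy opening O(f, h) as the fuzzy union of the fitting translates of h.

def translate_img(h, i, j):
    return {(x + i, y + j): v for (x, y), v in h.items()}

def fuzzy_subimage(a, f):
    return all(v <= f.get(p, 0.0) for p, v in a.items())

def fuzzy_opening(f, h, bound=6):
    g = {}
    for i in range(-bound, bound + 1):
        for j in range(-bound, bound + 1):
            hij = translate_img(h, i, j)
            if fuzzy_subimage(hij, f):            # (i, j) belongs to the index set K
                for p, v in hij.items():
                    g[p] = max(g.get(p, 0.0), v)  # fuzzy union keeps the maximum
    return g
```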
Example 32
Assume that f and h are given in Figs. 59 and 60. The first step in applying this operation is determining the index set K; (0, 0) ∉ K since h_{00} = h ⊄ f; however, h_{11} ⊂ f, h_{12} ⊂ f, h_{13} ⊂ f, and h_{21} ⊂ f; thus K = {(1, 1), (1, 2), (1, 3), (2, 1)}. The opening of f by h is O(f, h) = h_{11} ∪ h_{12} ∪ h_{13} ∪ h_{21} and is illustrated in
FIG. 59. Imagef.
FIG.60. Image h.
FIG. 61. Image O(f, h).
Fig. 61. Notice that u_g(2, 1) = 3/4 since the translates of h contributing at (2, 1) have gray values 1/2 and 3/4, and the larger of these two values is used in g.

G. Fuzzy Closing Operation
The fuzzy closing operator C takes two images into a third, and so C: [0, 1]^{Z×Z} × [0, 1]^{Z×Z} → [0, 1]^{Z×Z}. Like the fuzzy opening operator, the fuzzy closing operator is defined using several previously given operations. In this case, C(f, h) = ⋂_{(i,j)∈K} h′_{ij}, where the index set K is defined by K = {(i, j) : f ⊂ h′_{ij}}. The intersection and subset should be taken in the fuzzy sense.
Example 33
Let f and h be given in Figs. 62 and 63. To find C(f, h), the complement of h must be found (Fig. 64), and then the images should be translated. On those translates of h′ which contain f the fuzzy intersection operator is applied, and the result is given in Fig. 65. In this example K = Z × Z − {(−2, −1), (−2, 0), (−2, −2)}. As a consequence, the 0 gray value in h′ will appear in the h′_{ij} translate of h′ at location (i + 2, j + 2), for all i and j except i = −2 and j = −1, i = −2 and j = 0, and i = −2 and j = −2. This means that at all the pixels in Z × Z except (0, 0), (0, 1), and (0, 2) the resulting image C(f, h) will be 0, since the intersection of
FIG.62. Image f.
FIG.63. Image h.
FIG. 64. Image h′.
FIG. 65. Image C(.f,h).
all h′_{ij} over K defines C(f, h). Furthermore, the value 3/4 in h′ appears in the h′_{ij} translate of h′ at location (i + 2, j + 1) for all i, j except (0, 0), (0, 1), and (0, −1). Since the intersection of h′_{ij} over K defines C(f, h), there must be a value of 3/4 at pixel (0, 2) and value 1 at (0, 0) and (0, 1). It is helpful to construct a template of h′ using see-through plastic material and overlay h′ on f to obtain C(f, h). Analogously to the fuzzy opening operator, the fuzzy closing operator defined in this section generalizes the binary closing operation defined in Section II.F. That is, when binary images are employed, both operators yield
the same results. For this reason the same symbol C is employed for the fuzzy closing operator as well as the binary image closing operator.
H. Properties of the Fuzzy Opening and Fuzzy Closing Operations

Some of the many properties of the fuzzy opening and fuzzy closing operations will be given below. The fuzzy opening is an algebraic opening since the following properties hold:

FO1. Antiextensive:  O(f, h) ⊂ f
FO2. Increasing:  f ⊂ g ⇒ O(f, h) ⊂ O(g, h)
FO3. Idempotent:  O(O(f, h), h) = O(f, h)

In an analogous fashion, the fuzzy closing is an algebraic closing from universal algebra since the following properties hold:
FC1. Extensive:  f ⊂ C(f, h)
FC2. Increasing:  f ⊂ g ⇒ C(f, h) ⊂ C(g, h)
FC3. Idempotent:  C(C(f, h), h) = C(f, h)

Property FC1 holds since, by definition of C(f, h), f ⊂ h′_{ij} for all (i, j) ∈ K; thus f ⊂ ⋂_K h′_{ij} = C(f, h). Property FC2 follows by noticing that since f ⊂ g, if g ⊂ h′_{ij} then f ⊂ h′_{ij}. Consequently, if K_g = {(i, j) : g ⊂ h′_{ij}} and K_f = {(i, j) : f ⊂ h′_{ij}}, then K_g ⊂ K_f, and ⋂_{K_f} h′_{ij} ⊂ ⋂_{K_g} h′_{ij} because the first intersection is over a larger indexing set. Hence, C(f, h) ⊂ C(g, h), as was to be shown. Property FC3 is proved in two steps. First, using Property FC1, f ⊂ C(f, h), and then applying this to Property FC2 shows that C(f, h) ⊂ C(C(f, h), h). In the second step it must be shown that g = C(C(f, h), h) ⊂ C(f, h). Let p = (x, y) be a pixel such that u_g(p) = a ≠ 0; then for all (i, j) in K_q = {(i, j) : C(f, h) ⊂ h′_{ij}} it follows that u_{h′_{ij}}(p) ≥ a. Let q = C(f, h) = ⋂_{K_f} h′_{ij} with K_f = {(i, j) : f ⊂ h′_{ij}}. Then, with a change of notation, since q ⊂ h′_{ij} for every (i, j) in K_f, it follows that K_f ⊂ K_q. This shows that u_q(p) ≥ a, thereby concluding the proof.
VI. THE UNIVERSAL IMAGING ALGEBRA

All operations introduced in previous sections of this article involved images or binary images as defined in Sections I.A and I.C, respectively. In the first subsection of Section VI a new type of image is defined, called a
continuous image; this is done to illustrate how an image might be obtained mathematically. It also provides a further illustration that the Universal Imaging Algebra is a very rich structure, for it includes continuous images among its sorts of sets. The next subsection contains a discussion of polyadic graphs (see also Goguen and Thatcher, 1974). These graphs are used in representing syntax-type information for the image operations described in the previous sections. In the final subsection the Universal Imaging Algebra is introduced as a many-sorted algebra.

A. Creating a Sampled Image
A "continuous image" f is defined as a function from the Euclidean plane into [0, 1], viz., f: R × R → [0, 1]. Again, 0 can be thought of as representing white, 1 represents black, and numbers close to 1 denote darker values of gray.
Example 34
Consider
elsewhere. This continuous image has finite support; it is 0 outside of a square four units on a side and has gray value 1 at the center of the square, i.e., at x = y = 3/2. A method for converting continuous images into sampled images follows. To do this, draw vertical and horizontal lines one unit apart on the x-y plane R × R. Then the structure is like graph paper and is Z × Z (discussed earlier), and the scenario is similar to the TV example given previously. The gray value induced on a given pixel (i, j) by the continuous image f is found by integrating, thus

u(i, j) = ∫_{i−1/2}^{i+1/2} ∫_{j−1/2}^{j+1/2} f(x, y) dx dy
For arbitrary-sized pixels the double integral should be divided by the area of a pixel; in the above this area is 1. As an illustration consider Example 35.

Example 35
If f is given as in the previous example then the sampled version g is depicted in Fig. 66.
FIG.66. Sampled image g.
The procedure for converting a “continuous image” into a sampled image can also be employed for converting images with a certain pixel structure into an image with a different pixel structure. In this case the original pixel structure is ignored and the original image is treated as being continuous while the resulting image is formed as before: integrate to find the gray values associated with each “new pixel.”
Example 36

Refer to the previous example, and assume that the sampled image g is given. A new image h is desired whose pixels are also square, but twice as long
FIG. 67. Large and small pixel systems (pixels of g: solid lines; pixels of h: dotted lines).
FIG. 68. Sampled image h (gray values expressed as fractions with denominator 2304).
on a side compared to the pixels of g. See Fig. 67. If the integration is performed to obtain the pixels in h, the result obtained is illustrated in Fig. 68. An operator called the digital image creator will be defined which converts a "continuous image" into an image as defined in Section I.A.
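A sketch of such a digital image creator follows: the gray value of pixel (i, j) is the average of the continuous image over the unit square centered at (i, j), approximated here by a simple subgrid sum. The subdivision count n, the pixel ranges, and the function names are assumptions of the sketch.

```python
# Sampling a continuous image f: R x R -> [0, 1] onto unit pixels by integration.

def digital_image_creator(f, i_range, j_range, n=16):
    g = {}
    for i in i_range:
        for j in j_range:
            total = 0.0
            for a in range(n):
                for b in range(n):
                    x = i - 0.5 + (a + 0.5) / n
                    y = j - 0.5 + (b + 0.5) / n
                    total += f(x, y)
            value = total / (n * n)      # integral divided by the pixel area (1 here)
            if value > 0:
                g[(i, j)] = value
    return g

# Usage with a hypothetical continuous image:
# g = digital_image_creator(lambda x, y: 1.0 if 0 <= x <= 1 and 0 <= y <= 1 else 0.0,
#                           range(-2, 4), range(-2, 4))
```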
B. Polyadic Graphs for the Imaging Operators

Polyadic graphs consist of a set of edges and nodes, which are usually drawn as circles or as ovals. Edges are labeled with the name of the operation to be performed. An edge will have one or more sources (tails) and a single target (head). Each tail leaves from some node and the head also points to a node. The ovals indicate the type or sort of entities employed and the many-tailed arrows indicate names of operators or functions. The head of the arrow indicates the sort of output or sort of values given by the designated operator, and the many tails indicate the sorts of entities needed as inputs to the operation. That is, each of the many tails determines exactly what sort of entity is needed before the specified operator can be employed. It should be noted that the polyadic graph provides the same information as block diagrams. However, polyadic graphs more simply allow numerous operators to be represented in a single diagram. See Figs. 69 and 70.

C. The Universal Imaging Algebra
The concept of a many-sorted algebra will now be introduced. Intuitively, a many-sorted algebra consists of different "sorts" of sets and operators which map elements from some of these sets into others. A many-sorted algebra (S, σ, Σ, β, Φ, γ, G) is a 7-tuple with components defined as follows.
FIG. 69. An example of a polyadic graph (nodes for sorts such as image, integers, and continuous image; edges for operators such as addition, the neighborhood operator, and the boundary finding operator).
The first component S is a set of sorts, which in this application is always finite and nonempty. The set contains the sorts of things that are dealt with, and images is one of them. To get an idea of the other types of entities in S, refer to the previous section where various different sets were introduced. Before the next component in a many-sorted algebra is described, the set S* of finite strings of elements from S will be introduced. An element of S* consists of the concatenation (or finite sequence, without commas) of possibly repeated elements from S, as well as the empty string ε. Rigorously, S* could be considered to consist of all functions in S^{Z_n}, where Z_n = {1, 2, ..., n} for all nonnegative integers n, with ε arising from S⁰ = S^{Z_0}. The second component σ is a function σ: S* × S → Σ. Specifically, for any 2-tuple u = (s_1 s_2 ··· s_n, s) in S* × S, σ(u) = Σ_{s_1 s_2 ··· s_n, s}, where Σ_{s_1 s_2 ··· s_n, s} is itself a
FIG. 70. An example of block diagrams (blocks for the maximal interior neighborhood operation, Minkowski subtraction, addition, the boundary finding operator, and the digital image creator, with inputs and outputs of sorts such as binary image, image, integer, and continuous image).
set. It consists of all operator names which have domains indicated by the string of sorts s_1 s_2 ··· s_n and range of the sort s. This set is often called a signature set. Consequently, the third component Σ is a family of signature sets; that is, the sets Σ_{s_1 s_2 ··· s_n, s} and Σ_{ε, s} are all elements of Σ. The fourth component is a function β: S → Φ with β(s) = A_s for every s in S, where A_s is itself a set, called the carrier set. Therefore, the fifth component Φ is a family of carrier sets; that is, the elements of Φ are themselves sets, namely A_s for each s in S.
The sixth component is a function γ, γ: ⋃_{(s_1 s_2 ··· s_n, s) ∈ S*×S} Σ_{s_1 s_2 ··· s_n, s} → G; that is, γ maps the union of all signature sets into G. For any element a in the union, γ(a) = a_A, where if a is in Σ_{s_1 s_2 ··· s_n, s}, then a_A: A_{s_1} × A_{s_2} × ··· × A_{s_n} → A_s; that is, a is the name of the function and a_A is the n-ary function itself. If a is in Σ_{ε, s}, then a_A is in A_s. In other words, a null-ary function is an element (special element) in the carrier set of the sort s. The last ingredient G is the set of all 0-ary, 1-ary, binary, ternary, etc., operators in the algebra. The Universal Imaging Algebra is a many-sorted algebra. Among the sorts of sets in the algebra are included the inputs to the operators defined herein; some of them appear in the ovals of the polyadic graph in Fig. 69. The operators specified in this article are included in the set G, and the names of these operators are in appropriate signature sets. For instance, the "translation operator" given in Section V.A is in the signature set

Σ_{Image Integer Integer, Image}

The maximal interior neighborhood operator is also in this signature set. Furthermore, γ(maximal interior neighborhood operator) = M, where M: A_Image × A_Integer × A_Integer → A_Image. In this document A_Image = [0, 1]^{Z×Z} and A_Integer = Z, as in Section IV.H. Different definitions of image could have been employed, such as [0, 1]^{R×R}. For certain operations identical (isomorphic) results occur if this different definition is used. For instance, the mathematical morphological operations render "the same results" using either definition. However, if "statistical" moment methods had been discussed in this article, different results would have occurred for images in [0, 1]^{Z×Z} versus images in [0, 1]^{R×R}. In this document only a brief look at the many sorts of sets, operators, and relationships between the operators in the Universal Imaging Algebra was given. A basis for the algebra of imaging has been developed and will be reported in a future contribution.
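The bookkeeping of sorts, signatures, carriers, and the map γ can be mirrored in a few lines of code. The sketch below is purely illustrative; all concrete names in it are hypothetical and only meant to echo the 7-tuple described above.

```python
# A toy many-sorted-algebra record: sorts, signature sets, carrier descriptions,
# and gamma assigning each operator name its actual function.

sorts = {"Image", "BinaryImage", "Integer", "ContinuousImage"}

carriers = {                        # beta: sort -> carrier set (described, not enumerated)
    "Image": "[0,1]^(Z x Z)",
    "Integer": "Z",
}

signatures = {                      # sigma: (domain-sort string, range sort) -> operator names
    (("Image", "Image"), "Image"): {"addition", "Minkowski subtraction"},
    (("Image", "Integer", "Integer"), "Image"): {"translation",
                                                 "maximal interior neighborhood"},
    (("Image",), "Image"): {"boundary finding"},
}

def translation(f, i, j):
    return {(x + i, y + j): v for (x, y), v in f.items()}

gamma = {"translation": translation}   # gamma maps a name in a signature set to its function
```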
REFERENCES AND BIBLIOGRAPHY

Ankeney, L. A., and Urquhart, H. N. (1984). On a mathematical structure for an image algebra. Proc. Conf. Intell. Syst. Mach., 1984, p. 330.
Arbib, M. A., and Manes, E. G. (1975). "Arrows, Structures, and Functors: The Categorical Imperative." Academic Press, New York.
Birkhoff, G., and Lipson, J. D. (1970). Heterogeneous algebras. J. Combinatorial Theory 8, 115-133.
Burris, S., and Sankappanavar, H. P. (1981). "A Course in Universal Algebra." Springer-Verlag, Berlin and New York.
Crimmins, T. R., and Brown, W. M. (1985). Image algebra and automatic shape recognition. IEEE Trans. Aerosp. Electron. Syst. AES-21 (1).
Giardina, C. R. (1982). Advanced imaging techniques for hierarchies of intelligent PGM's. Proc. NATO AGARD Guidance & Control Conf., 1980.
Giardina, C. R. (1984). The universal imaging algebra. Pattern Recognition Lett. 2 (3).
Giardina, C. R. (1985). A many sorted algebra based signal processing data flow language. Int. Conf. Math. Model., 5th, 1985.
Goetcherian, V. (1980). From binary to gray tone image processing using fuzzy logic concepts. Pattern Recognition 12.
Goguen, J. A. (1967). L-fuzzy sets. J. Math. Anal. Appl. No. 18.
Goguen, J. A., and Thatcher, J. W. (1974). Initial algebra semantics. 15th Annu. Switch. Automata Symp., 1974.
Herman, G. T. (1980). "Image Reconstruction from Projections." Academic Press, New York.
Matheron, G. (1975). "Random Sets and Integral Geometry." Wiley, New York.
Ritter, G. X. (1984). An algebra of images. Conf. Intell. Syst. Mach., 1984, p. 334.
Rodenacker, K., Gais, P., Jutting, U., and Burger, G. (1983). "Mathematical Morphology in Grey-Images, Signal Processing. II. Theories and Applications."
Serra, J. (1982). "Image Analysis and Mathematical Morphology." Academic Press, New York.
Sternberg, S. (1980). Cellular computers and biomedical image processing. Proc. U.S.-Fr. Semin. Biomed. Image Process., 1980.
Zadeh, L. A. (1965). Fuzzy sets. Inf. Control 8.
ADVANCES IN ELECTRONICS AND ELECTRON PHYSICS, VOL. 67
Cathode Ray Tubes for Industrial and Military Applications

ANDRE MARTIN
Thomson-CSF
Dover, New Jersey 07801
I. INTRODUCTION
The extensive growth of electronic communication has been one of the main features of the past few decades. Such information, in the case of industrial and military applications, is very often presented on a cathode ray tube (CRT). This information ranges from the alphanumeric symbols displayed on a computer console to the sophisticated multicolor displays presented on a radar console or to the high-quality texts printed from a high-resolution CRT in a phototypesetting machine. The aim of the present chapter is to provide a comprehensive review of the large amount of technical progress that has occurred in the area of industrial and military CRTs over the past two decades. Excluded from this discussion are direct-view storage tubes and television tubes. It should be noted, however, that the progress in industrial and military CRTs has often profited from the huge development effort in color television tubes (see, for example, Morrell et al., 1974). This effort has been an important factor in maintaining the CRT in a strong position relative to competing devices, such as plasma panels or electroluminescent devices, emerging from the new technologies. Evidence for the existence of "cathode rays" was first found by Plucker (1858) and Hittorf (1869) using tubes in which an electron beam was generated by a high-voltage discharge in a gas (usually air) at a pressure of the order of 0.05 Torr. A major step forward was due to Crookes (1879), who showed that the rays impinging on the glass wall of the tube consisted of negatively charged particles that could be deflected by an electromagnetic field. This deflection technique was later used by Thomson (1897) when he made his famous measurements of the ratio e/m. In the same year, Braun (1897) devised the first cathode-ray tube for use as an oscillograph. This also consisted of a gas discharge tube containing a cold cathode. In operation, high voltage (e.g., 10 kV or more) was applied to an accelerating anode to maintain a discharge. Beyond this anode, an aperture plate was mounted to limit the width of the electron beam emerging from the anode, which landed on a fluorescent screen.
External coils were initially provided for electromagnetic deflection, but later models incorporated additional electrodes for electrostatic deflection. A few years later, Wehnelt (1904) invented the triode gun with negative control grid, the same basic design still used today. During the period between 1904 and 1940, the foundations of modern electron optics were formulated by Busch (1926, 1927), Knoll and Ruska (1932), Davisson and Calbick (1932), Bruche and Scherzer (1934), and von Ardenne (1935, 1940). Paralleling the work on television tubes, the first commercial oscilloscopes were introduced by DuMont in the United States in the early 1930s. These oscilloscopes used tubes with a hot cathode as an electron emitter in a well-evacuated envelope and no longer depended on a gas discharge. Also included was a negative control grid of the Wehnelt type between the cathode and anode, a pair of deflection plates driven by a sawtooth generator to give a horizontal sweep, and a second pair for vertically deflecting the beam in accordance with the incoming signal. During the late 1930s radar CRTs were developed, with the first use of long-persistence screens for panoramic radars occurring in England about 1940. During the same period, oscilloscope CRTs progressed considerably with the introduction of brighter tubes (made possible by better electron optics). At the same time, Langmuir (1937) showed on theoretical grounds that there was a maximum current density which could be obtained in a focused spot because of the thermal velocity spread of the electrons emitted from a hot cathode. In the 1950s, these results, together with the progress made in phosphor materials, made possible the development of new and improved CRTs covering a wide range of applications from image projection to photographic recording of electrical transients. As already mentioned, much of this effort was supported by parallel developments in black-and-white and color television tubes. The period from 1960 to the present is characterized mainly by continuous evolutionary improvements and design optimization rather than by major innovation. Improvements during this period occurred in four major areas: reliability and life, ruggedization, electron optics, and phosphors. Improvements in reliability and life were stimulated by the introduction of semiconductors, whose average working life is very long. Since the CRT was often the only vacuum tube remaining in the equipment, it was in danger of becoming the major source of system failure. As a result of intensive developmental work, the average life of the CRT is now well in excess of 10,000 hours. Improvements in ruggedization were also stimulated in part because of the introduction of semiconductors. By reducing the bulk and power dissipation of the system, such components made possible the installation of sophisticated electronic displays using CRTs in aircraft, submarines, space capsules, and other equipment subjected to extreme environmental conditions of shock, vibration, temperature, and pressure.
In the field of electron optics, a better mathematical and analytical approach to the determination of the exact trajectories of the electrons leaving the cathode to form the beam and the increased availability of computers to calculate the trajectories have permitted the development of low-aberration electron optical structures. These allow smaller, high-current-density spots to be obtained and permit the beam to be deflected with minimal defocusing and reduced power. Improvements in phosphor materials and screen deposition techniques have also partly resulted from work in color television. Effort in this area has led to a better understanding of the physics of crystalline materials, making use of the extensive work on silicon and other semiconductor materials. This has resulted, over the last 20 years, in substantial improvements in phosphors with respect to efficiency and life, as well as more desirable color characteristics.

TABLE I
CRT CLASSIFICATION AND GENERAL DATA
(Characteristics tabulated by CRT category and application: picture diagonal or diameter (mm); deflection mode; edge-to-edge deflection angle (degrees); typical refresh (cycles/picture); typical luminance (cd/m²); electron beam power at screen (watts), from min. 0.01-1 to max. 0.5-140; peak line brightness, up to >100,000; area brightness, up to 50,000.)
The present chapter has been divided into eight sections. In Sections II, III, and IV, the methods of producing an electron beam and deflecting it are discussed. In Section V the state of the art in cathodoluminescent materials is reviewed, with emphasis on CRT-related problems such as flicker, time-response characteristics, and phosphor life. Sections VI and VII are concerned with the measurement methods required to evaluate CRT performance and include a proposed method for evaluation of the visual quality of a CRT display. In Section VIII the present state of the art of the principal categories of industrial and military CRTs is briefly discussed. Since it is found impractical to include references to all the published work in a field as broad as this and extending back so many years, emphasis has been placed primarily on references most directly related to the material discussed. Industrial and military monochrome CRTs presently manufactured can be separated into five groups, in accordance with their applications, as follows (see Table I for their general characteristics):

1. High-resolution recording on a photographic medium
2. Display of fast transients and study of electrical signals (oscilloscopes)
3. Display of data using TV raster scanning
4. Display of data using stroke writing
5. Generation of high-brightness and/or high-contrast displays (for projection and aircraft use)

Industrial and military color tubes are discussed in Section VIII.C.
II. ELECTRON GUNS

A. Basic Elements of the Electron Gun
The electron gun is that part of the CRT which produces the electron beam. After emerging from the beam-formation part of the gun, the beam is usually focused to a small spot by electrostatic lenses which are part of the gun structure or by electromagnetic lenses generally placed external to the gun structure. After the electron beam has passed through the focusing region, it is deflected by electrostatic or electromagnetic means, as reviewed in Sections 111 and IV. A cross section of a typical electron gun is shown in Fig. 1. In this structure the electrons, emitted from a heated cathode, are accelerated toward an apertured cylindrical anode which is positively biased (anode 1). The magnitude of the electron current reaching the anode is controlled by the
FIG. 1. A typical electron gun.
negative bias applied to grid No. 1, sometimes called the control grid or Wehnelt electrode, from the name of its inventor Wehnelt (1904). The structure comprising the cathode, grid No. 1, and the anode No. 1 is generally referred to as the triode structure. As shown, the beam converges in front of the cathode to a minimum cross section referred to as the crossover (Klemperer and Barnett, 1971). This crossover is then imaged onto the phosphor screen by a focusing-lens system to form a small spot. Figure 1 shows a typical focusing lens, made up of three electrodes: anode 1, the focusing electrode, and anode 2. Although not shown in Fig. 1, the electron gun may be protected from stray magnetic fields by an appropriate shield, to ensure that the low-energy beam remains centered on the gun axis. Typical operating voltages for such a gun are cathode, 0 V (ground); grid no. 1, 0 to -200 V (depending on beam current, which is typically between 0 and 1 mA); anode no. 1, 15 kV; focus electrode, adjustable from 0 to +1 kV; anode no. 2, 15 kV; screen, 15 kV. In practice, different gun designs have been developed for cathode-ray tubes in accordance with their intended use, as discussed below.

B. Electron Gun Designs for Different Applications
I . High-Resolution Recording This category includes CRTs whose resolution is in excess of about 1000 cycles per scan line and which have a spot about 10-20 pm in diameter. Such tubes almost always use electromagnetic deflection and incorporate phosphors whose spectral characteristics match those of the photographic
188
ANDRE MARTIN
recording medium. In practice a beam current of 0.1 to 15 pA is used with a final acceleration voltage of 10 to 20 kV, producing a high current density in the spot, from 0.3 to 4.5 A/cm2. As shown in Fig. 2, a typical gun of this type incorporates 1. A cathode, a grid No. 1, and an anode maintained at a potential V ,equal
to that of the screen; 2. A precentering correction system (generally an electromagnetic deflection coil, used to compensate for mechanical alignment defects of the triode structure) to slightly deflect the beam and recenter it on the electron optical axis; 3. A diaphragm D, at the end of the anode, that limits the beam to its central part so that aberrations are reduced as much as possible in the focusing lens; 4,A magnetic focus lens, preferred in practice to an electrostatic because of its reduced aberration; and 5. An astigmatism correction system that uses a quadrupolar field to reshape the beam before it enters the focusing lens, so that residual astigmatism is corrected.A useful discussion of this correction is given by Petrov (1976).
To obtain the smallest spot size possible for a given tube length, the ratio d , / d , is made as small as possible (where d , is the distance from the magnetic focus lens center to the crossover and d2 is the distance from the lens center to
k Precenteringcorrection system
1
i
L
Astigmatisrncorrection system
&
i
i
I
\ \
4 I
Screen at I
Crossover
si% ::/
potential V,
CATHODE RAY TUBES
189
FIG.3. High-resolution double-focus-structureelectron gun.
the viewing screen). Making d 2 / d l small reduces the spot size by making its magnification smaller, as in geometrical optics, and also increases the convergence angle 20 of the beam approaching the screen. The latter minimizes space-charge repulsion in the beam, one of the effects known to increase the spot size. Calculation of the effects of space charge on spot size have been made by Schwartz (1957). Another type of high-resolution gun makes use of a double-crossover structure, as shown in Fig. 3. To further reduce the spot magnification and to increase the beam angle 28 for a given tube length, the crossover C created by the triode structure is first strongly demagnified by the electrostatic lens composed of the first anode (at a potential V,) and the focus electrode (at screen potential 6).This gives a second crossover C' which is then refocused onto the screen by means of an electromagnetic focus lens. The total magnification is given by the product d;/d', x d z / d , . However, because of spherical and chromatic aberration in the first strong electrostatic lens, a relatively narrow beam must be used, thus limiting the available screen current. With this technique, spot sizes of 10 pm have been demonstrated.
2. Display of Fast Transients and Study of Electrical Signals Such guns are typically used in oscilloscope tubes, where the input signal is used to deflect the beam. Since very little power is generally available in the input signal and since large bandwidths are required, an electrostatic deflection system is used because of its rapid response and low power requirements. Figure 4 shows the elements of such a gun structure together with the two sets of deflection plates, P, and P,. Since electrostatic deflection
190
ANDRE MARTIN Cathode
gl
92
93
94
pv
px
Screen
FIG.4. Electrostaticfocus and deflection electron gun.
causes severe deflection astigmatism (see Section 111),the beam diameter must be kept as small as possible. To limit the beam diameter, the triode structure is designed to operate at the highest voltage that is compatible with the required deflection plate sensitivity, and a very small aperture is used in anode No. 1 (g2 in Fig. 4). An additional aperture is used in electrode g4 to further trim the beam before it enters the deflection zone. As in the case of Fig. 1, the focus lens is formed by g3,g2, and 84. With such a structure the beam current is reduced from 0.1 to 0.01 of the cathode emission current. Since the cathode current is generally limited to about 1 mA, the beam current reaching the screen will rarely be over 50 pA, thus limiting the brightness capability of such tubes. 3. Data Displays Using a TV Raster
Since the TV raster is the most convenient and cheapest scanning technique to implement, monitors for this type of application are usually adapted from consumer monochrome or color TV designs, thus providing an economical approach. In most cases, average beam currents around 100 pA are used and a resolution corresponding to that of standard 525-line 60-Hz television is achieved. For these applications a gun structure providing adequate resolution without requiring critical potential adjustments is generally desired.
CATHODE RAY TUBES
Grid No. 2
191
Focus electrode
FIG.5. Unipotential lens focus structure.
Such a gun, incorporating a unipotential focus lens structure, is shown in Fig. 5. In order to obtain current modulation by means of a moderate voltage swing on G, (grid No. l), an electrode known as G, (grid No. 2) is placed between G I and the anode No. 2. This electrode is generally maintained at a potential of 50 to 1OOOV. One of its functions is to separate the beam formation zone from the beam acceleration zone, making the cutoff voltage of grid No. 1 virtually independent of small variations in anode voltage, thus permitting the use of low-cost nonregulated power supplies for the interconnected anodes No. 2 and No. 3 and the tube screen. By proper adjustment of the grid No. 2 voltage (see Section II.E.6) it also enables the desired modulation characteristics of grid No. 1 to be obtained. In addition, grid No. 2 acts as an electrostatic shield, protecting the cathode from the effects of arcing when the tube is operated at high voltage. After emerging from anode No. 2, the beam is focused by a three-electrode lens (usually called a unipotential lens) consisting of anodes No. 2 and No. 3 (maintained at the same screen potential V,) and a focus electrode whose voltage is generally 0.01 to 0.04 of V,. Since the focus electrode draws negligible current because of its low potential, an inexpensive focus power supply can be used. The main drawback of this unipotential lens is its highly curved electrostatic field, which results in strong spherical aberration, limiting the spot size.
4 . Data Displays Using Stroke Writing In the case of graphic displays, lines at an angle to the TV scanning lines have a dotted or stepped appearance. For high-resolution applications, therefore, a stroke-writing mode is used in which the electron beam is directly deflected to trace curves and vectors on the screen. In operation, the beam is deflected at speeds in the range of 0.5 to 5 cm/psec.
192
ANDRE MARTIN A3
Cathode
--'-+--7
A4
I
A3 =focus electrode
c
Limiting diaphragm
I
I
FIG.6. Bipotential lens focus structure.
Since beam currents of several hundred microamperes are used, electromagnetic deflection is necessary to minimize deflection defocusing. However, since better resolution than for simple TV raster displays is needed, a special gun structure is used. Instead of the unipotential lens described above, focusing is accomplished by the bipotential lens formed between the anodes A3 and A4 as shown in Fig. 6, A3 being held at a potential between 0.1 and 0.3 of the A4 voltage. It has been shown by Klemperer (1953) that such a lens structure in which the electron beam is successively accelerated gives less spherical aberration and a smaller spot than a lens structure in which the beam is first decelerated and then reaccelerated. However, to reduce spherical aberration to an acceptable level, the bipotential gun must also incorporate a limiting aperture, generally in anode A3 as shown. This requires a wellregulated focus power supply be used since the relatively large current intercepted by this electrode varies with modulation of the electron beam. 5 . High-Brightness andlor High-Contrast Displays
For special applications, tubes are required with very high brightness such as for projection onto large screens or for direct viewing in very high ambient light. Operation with high beam current (e.g., up to several milliamperes) and high screen potentials (up to 70 kV) is thus involved. Operation at high voltage is limited by breakdown problems between tube electrodes. To avoid this, the gun is specially designed with polished cylindrical anodes whose edges are curled back to limit arcing effects. Since such tubes also create X rays by bombardment of the screen, they must be properly shielded (a rule of thumb is that X-ray precautions must be taken with all tubes operating at screen voltages above 20 kV). 6 . Color Tubes
Comments regarding the electron gun structure of these tubes are given in Section V1II.C.
CATHODE RAY TUBES
193
C . Other Beam-Formation Structures
All of the gun designs described above use the three-electrode (or Wehnelttype) structure (an example of which is shown in greater detail in Fig. 7) to produce the beam. As mentioned, because of the negatively biased grid No. 1 between the anode and cathode, a crossover C of diameter d, is formed near the cathode. Another structure that, under specific operation conditions, gives a laminar flow of electrons without a crossover has been designed by Pierce (1954) and is widely used for microwave tubes. This design, shown in Fig. 8, however, provides a laminar flow of electrons for a particular ratio of operating voltages. If, for example, the control grid is biased too negatively, the gun behaves as a conventional structure and the beam converges to a crossover. Computer-generated plots of trajectories indicating these operation modes are shown in Section E below. In the past few years such guns have also been used in special cathode-ray tubes (Bates et al., 1974; Lehrer, 1974).Although improved operation is claimed, the advantages of such structures remain uncertain. D. Cathode Structures 1. Introduction
Most CRTs use an oxide cathode which consists typically of a heated nickel substrate coated with barium and strontium oxides. This cathode works Note : Cathode gl potential. 0 potential. V1
Anode or 92 potential. v2
d = D, = t = f = b = dc =
Grid 1 diameter of aperture Grid 2 diameter of aperture Grid 1 thickness Grid 1 to Grid 2 (or anode) spacing Cathode to grid spacing crossover diameter
FIG.7. Triode design of the Wehnelt type.
194
ANDRE MARTIN Control grid potential. V1
I
Anode or 92 potential. v2
I
Cathode potential. 0
FIG.8. Triode design of the Pierce type.
at a temperature of 800"C,providing a dc emission current density (averaged over the cathode area) in the range of 0.1 to 0.2 A/cm2 (Herrmann and Wagener, 1951).In addition to their good performance, such cathodes provide a reasonable compromise with respect to price, convenience of processing, and operating life. For very-high-performance CRTs, where current densities of 4 to 5 A/cm2 are required, the tungsten-impregnated cathode, consisting of a heated porous tungsten matrix impregnated with barium and calcium aluminates, is used. In an actual gun, the electron emission varies across the cathode as shown in Fig. 9, resulting from the fact that the electrostatic field produced by the anode 1 is not constant across the cathode. The actual variation of this field is controlled by the potential applied to grid No. 1. As a general rule, the current density at the center C of the cathode is assumed to be about three times the mean current density. For high brightness and projection tubes, the highest beam current compatible with spot size is required and the gun is designed so that emission current is drawn from the whole useful area of the cathode. In high-writing-speed oscilloscope tubes, on the other hand, to obtain good deflection sensitivity, a very narrow beam is required, capable of passing between deflection plates with a small spacing (e.g., 0.7 mm). To obtain such a
-
195
CATHODE RAY TUBES Current density
j rnax.
k I
__Mean
p
T
Anode1
current load j
///A/// J ',
Grid No. 1 Cathode surface
I
i
Cathode Center
FIG.9. Current density distribution across the cathode.
narrow beam at a low accelerating voltage of 1 or 2 kV requires most of the outside of the beam to be cut off by apertures, resulting in a typical beam transmission of 2 to 5%. The useful portion of the beam passing through the apertures therefore must come from the very center of the cathode, which is heavily loaded. In general-purpose tubes (which are between the two extremes indicated above) the electron beam is generally trimmed to some degree to partially limit its diameter in order to prevent loss of resolution by deflection defocusing at the edge of the picture. The beam transmission in this case is still good (typically 50 to 80%)and the cathode loading is kept to an acceptable level. Aside from the high current densities required, the cathode emission must be stable with time, so that the beam modulation characteristics do not change. As indicated on Fig. 10,a drop in emission requires adjustment of the drive voltage by A V to obtain a given cathode current I,. Depending on the required emission current density, a tube design will incorporate an oxide cathode or an impregnated tungsten cathode. Since the operating temperatures of these cathodes are different (about 800°C for oxide cathodes and 1000°C for impregnated cathodes), this must be taken into account. In particular, if the same filament power is to be used in both cases, the heat losses must be reduced for the higher temperature impregnated cathodes.
196
ANDRl?. MARTIN Cathode current
No. 1
I
i
FIG. 10. Variation of cathode current versus drive characteristicsfor an unstableemission.
2. The Oxide-Coated Cathode As shown in Fig. 1la, structurally, this type of cathode consists of a nickel or chromium nickel sleeve containing a tungsten heater. One end is covered with a nickel cap made of an alloy containing a closely controlled amount of a suitable reducing agent, such as magnesium, silicon, or tungsten. On the surface of the cap is a coating of barium-strontium carbonates, or bariumImpregnated aluminates
J
Porous
-- tungsten matrix
a
b
FIG. 1 1 . The oxide-coated cathode (a) and the tungsten-impregnatedcathode (b).
CATHODE RAY TUBES
197
calcium-strontium carbonates mixed in a suitable binder. During the tube processing, the cathode is heated under vacuum so that the cap temperature reaches 1O5O0C,causing the carbonates to thermally decompose into oxides and the binder to be eliminated. According to Hensley (1955), Haas (1961), and Jenkins (1969) the oxide cathode is believed to behave as an n-type semiconductor, having a band gap of about 4.4 eV, at least two donor levels at 1.4 and 2 eV, and an electron affinity of about 0.6eV. According to a recent hypothesis by Shafer and Turnbull (1981) free barium and barium oxide migrate toward the coating surface through a porous structure of strontium and calcium oxides. This free barium, acting as a donor, is believed to be responsible for the cathode emission (Haas, 1961). (It has been noted that oxygen vacancies and hydroxide ions can also act as donors.) The donors are created by partial reduction of the oxides at the metal-oxide interface when the cathode is heated under vacuum, after the transformation of the carbonates into oxides. This process is quite efficient when the base nickel contains reducing agents such as magnesium or silicon. However, when the base contains tungsten or zirconium, a small positive dc voltage (3 to 8 volts) must be applied to the grid No. 1 and anodes to draw an electron current to cause additional electrolytic dissociation. In this process, alkaline earth metal ions are drawn toward the nickel substrate and oxygen is liberated at the surface. These ions are then neutralized and migrate back into the oxide coating, resulting in the so-called electrolytic activation. Among the many factors that contribute to the decrease of electron emission of oxide cathodes, the most accepted one is that the overall donor (e.g., barium) concentration decreases (Shafer and Turnbull, 1981). This decrease may be caused by the thermal loss of donors, depletion of the reducing agents from the base nickel, and residual gas buildup in the tube. Other factors that may shorten the cathode life are, for example, buildup of an interface resistance between the oxide coating and the base nickel, impinging ions which locally destroy the oxide layer and eventually the nickel base (arcing may cause similar destruction), or flaking off of the oxide layer from the nickel base. Overheating may also cause partial evaporation or sintering of the oxides and may cause barium to evaporate too rapidly. The life is also affected by the choice of the nickel base alloy, whose proportion of reducing agents has to be chosen on a compromise basis between such factors as activation time, operation without current drawn (at spot cutoff ), average current density required, peak pulse current density, and working life. If activation time and peak pulse current density are to be considered first, an “active” nickel containing from 0.01 to 0.06% of Mg or Si is used to take advantage of the reducing action of these materials. If life is to be considered first, a passive nickel, containing from 0 to 4% W, but few reducing elements ( cO.Ol%), is used. For long and stable operation a good compromise
198
ANDRE MARTIN
is obtained for the oxide cathode when it is operated at a temperature of 800"C, where an acceptable equilibrium is reached between excessive evaporation of barium, good emissivity, and contamination from residual gases which outgas from different parts of the tube. To avoid such contamination, great care must be given to the cleanliness and the processes used in manufacturing. 3. The Impregnated-Tungsten-Matrix Cathode
Oxide-coated cathodes, used at temperatures not exceeding 900°C can be operated for a period of about a few thousand hours at an average dc current density under 0.2 A/cm2. Compared with oxide cathodes, impregnatedtungsten-matrix cathodes, operated from 900 to 1100°C (according to their type) can produce emission current densities of 1 to 10 A/cm2 with an operating life of several tens of thousands of hours. They can also withstand more mechanical or electrical damage than the oxide cathode (Rittner et al., 1957; Haas, 1961). As shown in Fig. l l b this cathode is composed of a metallic sleeve (generally molybdenum) with an internal filament heater, often potted to reduce heat transfer losses, a machined porous tungsten pellet crimped or brazed to the sleeve, and a compound of barium calcium aluminate impregnated into the porous pellet. When heated under vacuum by the filament, the aluminates react with the tungsten and free barium migrates toward the surface, which then becomes electron emitting. Two types of cathodes, designated as B and S types, are currently manufactured. B-Type cathodes are impregnated with the molar composition 5 BaO, 3 CaO, 2 A120,, while S-type cathodes are impregnated with the molar composition 4 BaO, 1 CaO, 1 A1,0,. Both types deliver typically 1 A/cm2 at 1000°C.To get more emission, B- or S-type cathodes can be coated with a thin film of osmium or of iridium (Zalm and Van Stratum, 1966) that reduces the work function of the cathodes from 2.1 to 1.85 eV. These coated cathodes, designated as M-type, deliver 2 A/cmZ at about 1000 "C and 1 A/cm2 at 920°C.
4. Comparison of Impregnated and Oxide-Coated Cathodes The impregnated cathode, having a large reserve of emitting material, is capable of a life 10 to 50 times greater than that of the oxide-coated cathode. High cathode currents can be sustained thanks to their better emitting capability, even under the poisoning conditions of residual gases, because it is possible to maintain the emission level by an elevation of the cathode temperature. However, operating temperatures in the 1OOO"C range make it difficult to use the same heater power as oxide cathodes. Since special designs must be used to conserve power and limit heat losses, and since the cathode
CATHODE RAY TUBES
199
materials such as porous tungsten and molybdenum are difficult to machine and to weld, the resultant manufacturing cost is more than 10 times greater than for an oxide cathode. For most cathode-ray tubes, oxide-coated cathodes are quite satisfactory and have a much lower cost. Nevertheless, as stated by Shroff and Palluel (1982), impregnated cathodes have now reached a performance level which makes them of importance for special-purpose tubes where unusually high current density or very long life is required. E. Basic Equations for Cathode Loading, Cutoff Voltage, and Beam Modulation
The user of a cathode-ray tube is generally concerned with factors such as spot size and brightness under varying scanning conditions, as well as the voltage excursion of the control grid needed to modulate the beam current. These factors are in turn directly related to the ability of the cathode to supply a specified beam current and to withstand operation at a given peak current density. 1. Space-Charge-Limited Current Density at the Cathode
The maximum current density J,, which can be drawn from the cathode is restricted by space-charge considerations in accordance with the LangmuirChild law (Child, 191 1; Langmuir and Compton, 1931). In the case of a planar diode, J,, is given by
or ~ 3 / 2
J,,
= (2.33 x
L2
A/m2
(2)
where e and m are, respectively, the charge and mass of the electron, V is the accelerating anode potential in volts, L is the cathode-to-anode distance in meters, and E~ is the permittivity of free space (c0 = 8.854 x F/m). In a CRT, the maximum current is obtained when grid No. 1 is at cathode potential. (To avoid drawing current to the grid No. 1, positive grid No. L voltages are never used.) Since grid No. 1 is very close to the cathode, the electron gun can be considered in this particular case to operate as a diode. As an example, for a practical gun with a distance L = 0.002 m and V = 5000 V, the calculated value of space-charge-limited current density is 2 x lo4 A/m2 (or 2 A/cm2).This density may exceed the temperature-limited current density discussed below.
200
ANDRl? MARTIN
2. Temperature-Limited Current Density Neglecting the space-charge limitation, the maximum emission current at a given temperature T available from the cathode is given by the DushmanRichardson law: 4nme k 2 T 2exp( J, = h3
J,
= (1.2 x
$)
(3)
3
(
106)T2exp - 11600-
A/m2
(4)
where e and m are, respectively, the charge and the mass of the electron, k is Boltzmann’s constant, h is Planck’s constant, T is the cathode temperature in Kelvin, and 4 is the work function of the emitting surface at T in electronvolts. For any cathode the practical maximum current density becomes J = J,(1 - r), where r is a reflection coefficient depending on the cathode base material and on the coating nature as stated by Champeix (1960). Taking data from Champeix (1946) for oxide cathodes, operating at 910 to 1070 K, the practical maximum current density J is about 2 x lo4 to 5 x lo4 A/m2, so that the current density is in most cases space-charge limited rather than temperature limited.
3. Practical Formulae for Current Density at the Cathode of a Triode C R T Gun
In the triode gun, the maximum space-charge current density for a given grid bias is limited by the penetration of the electrostatic field from the anode through the aperture of the grid No. 1. Making use of the Langmuir-Child law and the work of Ploke (1951, 1952) and Moss (1968), Oess (1979) has arrived at the following simplified formulae: For V, = Vk = 0, the maximum current density at the cathode center is given by
For a given grid drive V, (the difference between the voltage applied to the control grid and the cutoff voltage), the maximum current density at the cathode center is given by
J~ = (1.1 x 10-5)-
Vi’2
d2
A/cm2
where d is the grid diameter expressed in centimeters.
20 1
CATHODE RAY TUBES
4. Maximum Current Density at the Center of the Focused Spot
Assuming that the velocity of electrons leaving the cathode is Maxwellian and that space-charge repulsion effects are neglected, Langmuir (1937) showed on theoretical grounds that the maximum current density obtained at the center of the focused spot on the screen is J,
= Jco(
1+
2)
sin2 B = Jco( 1 +
''7')
sin2 B
(71
where V,is the screen potential expressed in volts, T is the cathode temperature expressed in Kelvin, J, is the current density at the spot center, J,. is the spacecharge-limited current density at the cathode, k is the Boltzmann constant, e is the charge of the electron, and 6 is the half-angle of the converging conical beam at the screen. As an example, assuming J,, = 1 A/cmZ, 6 = 0.3", and V, = 5000 V, the maximum theoretical current density in the spot is 1.5 A/cm2 for a cathode temperature of 1070 K. Because of the Maxwellian velocity distribution of the electrons, the focused spot has a Gaussian current distribution, J, = Jcoe-c'2,where J, is the current density at a distance r from the spot center, J,, is the current density at the spot center, and c is a constant indicative of spot width. From the knowledge of the total current in the spot and of the value of J,. it is thus possible to determine c. 5 . Spot Cutojf Voltage
The spot cutoff voltage V,, is defined as the grid-cathode voltage difference required to reduce the beam current to essentially zero. In practice, one uses the visual extinction of a stationary beam on the CRT screen, which corresponds to a current below 0.05 PA. A useful formula, valid for most classical CRT configurations, relating the spot cutoff and the dimensions of the beam formation structure of the gun is given by Pommier (1977):
v,, = (4.5 x
d2 10-2)--kV2
bf
where
):
t if 0.15 < - < 0.1875 d
k = 2.5( 1 2t k = 1-d
if
t 0.1875 < - < 0.24 d
In the above, V, is the potential of the accelerating anode, d is the diameter of grid No. 1, t is the thickness of grid No. 1, b is the grid No. 1-to-cathode
202
ANDRfi MARTIN
separation, and f is the grid No. l-to-anode separation. The quantities d, t, b, f are indicated in Fig. 7, as well as the crossover diameter c. Electron-optic calculations show that the crossover diameter c for a given cutoff voltage is reduced if the smallest possible separation b between grid No. 1 and cathode is used. Currently, distances of 0.06 mm are employed, making it very difficult to maintain the cutoff voltage within very close limits from tube to tube because of thermal expansion (the cathode is operated at 800°C and the distances are measured on cold parts) and insufficient mechanical stability of the triode assembly. Any cutoff measurement should thus be made after at least 10 min of tube preheating to achieve thermal stabilization of the gun structure. 6. Modulation Characteristics
Assuming that space charge is the only factor limiting the cathode current, the beam current can be expressed as a function of the grid-cathode voltage and of the cutoff voltage by the following formulae, developed by Moss (1968) and revised by Oess (1976). For grid drive, the beam current is IbG = kVgD/VEoG
(9)
For cathode drive, the beam current is
where I/coG is the cutoff voltage measured under grid drive conditions, I/c& is the cutoff voltage measured under cathode drive conditions, the amplification factor p is V2/I/coG, with V2 being the anode voltage referred to the cathode. k is a modulation constant that varies for most designs from 2.5 to 3.5 pA/V3I2. When the grid drive VGD is given the value I/coG, IbG = kVgbe. Since the beam current also obeys the Child-Langmuir law [Eq. (l)],the value of t,b - E must be 1.5. J/ must also be related to the gamma of a CRT, that is, the exponent y of the power law linking the CRT area luminance L to the modulation VGD: L = k,V&
(11)
Since the area luminance (see Section V.B.3) is proportional to the screen current, an acceptable approximation is to assume that L = k,lbG = k,kVg,/VEoG if the electron beam is not excessively trimmed by apertures. Hence t,b N y, which permits a simple measurement of y by measuring beam current (instead of luminance) versus modulation VGD.
CATHODE RAY TUBES
203
7. Limitations of Basic Equations Although the above equations are fundamental when designing a CRT, they neglect several limiting factors that are frequently encountered. One factor neglected in the Langmuir approximations is the mutual repulsion of electrons leaving the cathode, particularly in the crossover region where the electron beam density is very high. This space-charge repulsion effect is also neglected in the field-freeregion of the beam between the focusing lens and the screen. Other sources of error are the spherical aberration of the electron lenses due to the nature of the field curvature in the electron lenses and chromatic aberration which arises because the electrons are emitted from the cathode with a distribution of velocities. In early guns, after making a simple design using the basic formulae above, further improvements were made experimentally. At the present time, the approach is to reduce the design errors by computer calculation of the electric fields and of the electron trajectories in the electron gun as discussed in Section F below.
F . Using Computer Calculations to Optimize CRT Design Many authors have used a detailed mathematical approach to calculate the beam characteristics of a CRT and to calculate the spot size on the screen. Examples of this are found in the publications of Pierce (1954), Klemperer (1953), Klemperer and Barnett (1971), Spangenberg (1948), Gewartowski and Watson (1965), Moss (1968), Hawkes (1973),and El-Kareh (1970).In order to enable calculations to be made on the basis of the mathematical formulae without a computer, a certain number of simplifications are always made. For example, only paraxial rays close to the axis are used and the distribution of current density in the spot is assumed to be Gaussian. When using a computer, the space to be studied can be divided into small rectangular or trapezoidal elements where the electrostatic field is determined with precision. Poisson’s equation can then be applied to each element and appropriate matching made between adjacent elements at their boundaries so that trajectories, as well as space-charge and current densities, can be calculated. The precision of such a computer program depends to a large extent upon suitable subdivision of the space into small meshes, especially where high beam current densities or strongly varying fields are likely. This precision depends also upon the assumption made regarding the emission velocities of electrons leaving the cathode. A typical electron gun program structure is shown in Fig. 12 and two typical programs used to calculate electron trajectories in an electron gun are presented briefly below.
204
ANDRE MARTIN Initial field conditions based on model
NO
Digital solution of Poisson's equation for the mesh structure
Calculation ot traectories
END
FIG.12. Typical program structure for an electron gun.
1. Program for Pierce Structures (The Albaut Model)
In this model developed by Albaut (1970), the current density at the cathode plane xy (Fig. f3) is calculated using the Langmuir-Child law, with the electronsassumed to have negligible initial velocities along the beam axis z. Electrons leaving the xy plane from any point A hence remain in the Az plane. The system has a rotational symmetry around the beam axis z, allowing
X
FIG.13. The trajectories in the Albaut model.
205
CATHODE RAY TUBES
calculation to be made in two dimensions only. The space charge is then calculated assuming that around a trajectory such as AME the current density 3 is constant within a small distance dr. The space charge p = f / u is then calculated, u being the mean velocity of the electrons crossing a surface of radius dr around any point M of the trajectory. To make the calculation, the surface is divided in small rectangles of constant size. Such a program gives results which correlate well with experience for Pierce-type structures, as shown by the examples of Section F.4. However, it cannot be used for triode beam formation structures because of the rapid variations of current density in some zones, such as the crossover where the approximations of a constant current density around a trajectory and constant pitch of the calculation mesh are not valid. 2. Program for Triode Structure- The Weber-Hasker Model
To take into account the thermal velocities of the electrons leaving the cathode and also the variations of space charge near the cathode and along the beam (much more important in the triode than in the Pierce structure), Weber (1967)has proposed a model in which the electron beam leaving the cathode is decomposed into adjacent curvilinear beamlets (Fig. 14). At the cathode, the electrons of each beamlet are assumed to have only thermal transverse velocities (emission velocities along Oz axis assumed to be zero) so that chromatic aberration is neglected. Weber has demonstrated that if the current density in one of these beamlets is Gaussian, it remains Gaussian at each point of the trajectory. These calculations provide reasonable precision compared
I' Electron trajectories perpendicular to the surface of minimum potential - 2
ox
FIG.14. The beamlets of the Weber-Hasker model.
206
ANDRE MARTIN
to experimental results, assuming that a variable mesh size is used for the calculation to overcome the difficulties mentioned in the Albaut model. However, it was shown by Hasker (1965) that is was necessary to take into account the longitudinal velocities of the electrons leaving the cathode to obtain better precision. This was accomplished by calculating the space charge in front of the cathode. For that purpose the initial part of the triode was replaced by a system of concentric diodes whose cathode was the triode cathode, and whose fictive anodes were at continuously varying distances from the cathode. The Langmuir-Child equation and tables could then be used to determine the space charge and the geometrical position of the surface of minimum potential, or potential well, in front of the cathode. All the electron trajectories from the curvilinear beamlets leaving the cathode are assumed to be perpendicular to this surface of minimum potential (Fig. 14). Although the precision of the Weber-Hasker model proved to be adequate to determine trajectories and current density along the beam, it was not precise enough to enable accurate determination of the cutoff voltage, whose precise value is necessary to calculate the other characteristics of the electron gun structure. 3. Improved Computer Programs
The present computer programs are generally derived from the WeberHasker model. Better precision on cutoff calculation has been obtained by introducing correction factors for the emitting surface of the cathode, drawing partially on Haine (1957)and Lauer (1982)and partially on experimental data. The capacity of modern computers has also made it possible to reduce the mesh size used in calculation, mainly in the high-current-density regions of the beam, hence allowing a more precise calculation of the space charge and of the trajectories. 4 . Some Examples of Computer Calculations As an example, using the Albaut model, we have calculated the electron trajectories, first for a Pierce structure (Fig. 15) operating at two different beam currents, 240 and 1.25 pA, but whose two anodes have the same voltages, 480 and 1700 V, respectively. Figure 15 shows that at 240 pA of beam current, corresponding to a - 10 V potential of the control grid, the beam is perfectly laminar without a crossover. Figure 16,corresponding to a -40 V potential of the control grid and to a beam current of 1.25 PA, no longer shows a laminar beam and exhibits a crossover. Our second example using the Weber-Hasker model is a representation of trajectories for a bipotential focus lens electron gun structure, as described in Section II.B.4. Figure 17 shows some equipotentials and trajectories in the triode section composed of cathode k, grid
CATHODE R A Y TUBES
FIG.15. Laminar flow in a correctly biased gun of the Pierce type.
FIG.16. Nonlaminar flow in an incorrectly biased gun of the Pierce type.
FIG.17. Triode section of a bipotential structure.
207
208
ANDRE MARTIN
FIG. 18. The focus region of a bipotential lens structure.
No. 1 (gl), and grid No. 2 (g2), while Fig. 18 shows the trajectories and some equipotentials in the bipotential lens focusing region (83 and g4). Trajectories are also shown in the drift space before the focusing region (toward the triode section) and after it (toward the screen). For improved clarity, the triode section has been represented in Fig. 18 scaled down from Fig. 17. Figure 19 shows the current distribution in the spot at the tube
Current density (%of peak)
C = spot Center FIG.
19. Measured and calculated current density distribution in the spot for a bipotential
lens structure.
CATHODE RAY TUBES
209
screen, calculated and measured for the same beam current on a tube using the structures of Figs. 17 and 18. As indicated, except for the double hump at the peak, the calculation gives results which are in close agreement with experiments. The spot diameter at 50% of peak is 0.42 mm in both cases, while some differences are found near the peak and at the lower part of the current density distribution curve.
5. Limitations of Computer Programs Although electron trajectories and current densities can be calculated with acceptable accuracy by means of such programs, they assume a gun structure with perfect symmetry of revolution, a situation which exists only in an ideal case. In practice, mechanical defects, such as out-of-round parts and electrode misalignment, which very often limit the performance of a given structure, cannot be taken into account with the two-dimensional computer programs. A first approach to solving these problems has been proposed by Say et al. (1975), who used the simulation of an eccentric lens on a two-dimensional program to calculate the aberrations for small misalignments of electrodes. Three-dimensional programs have more recently been designed for the treatment of trajectories (Alig-Hughes, 1983; Franzen, 1983) for color TV tubes and oscilloscope tubes. These programs do not take into account the space charge, not because this is not possible, but because the cost of running such programs with the necessary iterations becomes prohibitive. In practice, as suggested by De Wolf (1982), a combined approach using simple analytical expressions for a first low-cost evaluation and then a refined analysis on a more precise computer program in connection with experimental verification and refinement has proved to be relatively inexpensive and fast.
111. ELECTROSTATIC DEFLECTION AND DEFLECTION AMPLIFICATION
A. General
At present, electrostatic deflection is mainly restricted to use in oscilloscope CRTs, where high bandwidth is required for analyzing very-highfrequency signals. (However, since less deflection defocusing is produced by electromagnetic deflection systems, such systems are preferred where deflection signals are below about 3 MHz.) Fundamentally, deflection defocusing in an electrostatic deflection system is caused by the difference in potential between the two deflection plates, which causes the electrons on opposite sides
210
ANDRE MARTIN
of the beam to have a difference in kinetic energy (and also a difference in transit time through the deflection region). This results in differences of deflection angle, observed as deflection defocusing, a situation which does not occur in electromagnetic deflection. An electrostatic deflectionsystem can be described by two parameters: its equivalent bandwidth and its deflection sensitivity. The equivalent bandwidth B is defined as B = 0.35/zby Loty (1966), z being the 10 to 90% rise time of the deflection system subjected to a single-step-function input voltage of negligible rise time. The deflection sensitivity can be expressed in centimeters per volt, deflection angle per volt, or number of spot diameters per volt. The deflection factor (volts/centimeter), which is the inverse of the deflection sensitivity, is also often used. B. The Basic Electrostatic Dejection System
1. Sensitivity and Contour of the Dejection Plates An electrostatic deflection system for one direction of scan usually consists of two metal plates symmetrically located with respect to the tube axis Oz (which is also the electron beam axis). Figure 20 shows such a set of parallel plates whose spacing is d. The dc potential of the plates is Vo (equal to the potential of the last electrode before the plates), and symmetrical deflection voltages & V, are added so that resultant potentials Vo + VD, Vo - VD are applied to the plates. The deflection angle at the exit of the plates is a, and 1 is the total length over which deflection occurs. From the basic laws of electron
Anode at Vo Plate at V,
0
Electron beam
+ vD ---z
Plate at V, -VD FIG.20. The basic electrostatic deflection system.
21 1
CATHODE RAY TUBES
dynamics (see Soller et al., 1948) the deflection angle a is given by
tan a = (1/2)(VD/VO)(l/d)
(12)
To increase tan a for constant voltages VD and Vo,it is necessary to increase I or to reduce d. However, to prevent the deflected beam from striking the plates, they must be flared as shown in Fig. 21. The best shape is obviously obtained when, at maximum deflection, the beam follows exactly the plate contour as shown in Fig. 21. If we consider the length l of the deflection region, and a small deflection length 81,the incremental deflection angle Aa for this length at the plates can be expressed from Eq. (12) by
tan Aa = (1/2)(VD/Vo)(Al/2y)
(13)
The contour can be calculated by using an equation given by Cooper (1961). For the set of plates shown in Fig. 21 with spacing 2y0 at the entrance and plate potentials of Vo + V,, the optimum contour is given by z = zo
(2)lo
+ 2y0
ll2
(t”Y/Y0)”2
exp u2 du
Mathematical tables give the solutions of the integral st exp u 2 du and graphical solutions are obtained by plotting y / y o against (z - zo)/ 2Yo(Vo/vD)1’2as indicated in Fig. 22. When the plates have been correctly contoured, the deflection sensitivity can be obtained by integrating along the profile of the electrostatic field using Eq. (13).
FIG.21. Contoured deflection plates.
212
ANDRk MARTIN V YO
4
3.E
3
2.f
1.E
1
FIG.22. Variation of y / y o with(Z - Z0)/2y0(V0/VD)1’*
2. Deflection Astigmatism In the above [Eqs. (12) and (13)] it has been assumed that the electrons travel in a space with potential V,. Actually, as mentioned above, because of the finite beam width, different parts of the beam see a different potential, depending on their position with respect to the deflection plates. The resulting deflection defocusing causes the radius r, of the focused spot at the screen to be increased by the factor 6(rs). For a set of parallel plates, 6(r,) has been calculated by Moss (1968):
where E is the deflection field, 1 is the length of the deflection region, L is the distance from the phosphor screen to the center of the deflection region, ri is the radius of the beam entering the deflection region, and V, is the initial beam potential. Eq. (14) shows that 6(r,)varies with the square of the deflection field
CATHODE RAY TUBES
213
E. The deflection angle should therefore be limited to keep the deflection astigmatism at an acceptable level. In most industrial tubes with electrostatic deflection, the deflection angle a does not exceed 15". 3. High-Frequency Deflection Systems When the frequency of the deflection signals exceeds about 150 MHz, the signal may reverse polarity during the transit time of the electrons through the deflection region. If we calculate for a set of parallel plates the deflection amplitude as a function of the signal frequency, and if we compare it to the deflection amplitude obtained for a dc signal, we obtain (Spangenberg, 1948)
where f is the frequency of the input signal, y j is the deflection amplitude at frequency f, yfo is the deflection amplitude for dc or very-low-frequency signals, and zo is the transit time through the deflection plates. To alleviate this problem, the signal can be applied to a pair of constant impedance delay lines, consisting in its simplest form of successive pairs of deflection plates (capacitance C) interconnected by inductive elements as shown in Fig. 23. These lines are connected to a symmetrical amplifier whose final power transistors are shown, and which deliver deflecting voltages Vo k V, to the lines terminated by appropriate resistive loads R. The beam is assumed to enter the line with an energy eVo, so that the velocity of the electrons is matched to the wave speed in the line. Above 300 MHz, segmented plates cannot be used because of the small inductance required between the elements. Double helix lines as shown in
FIG.23. Segmented deflection plates.
214
ANDRE MARTIN R
t
Beam axis
Capacitance correction shields
P\ Beam axis
Top view
FIG.24. Double helix deflection line.
Fig. 24 are then used with each turn of the helix behaving as a deflection element. A capacitance correction shield compensates for the variable spacing between the two lines and permits a constant impedance. The lines are terminated by a resistance R, as in the previous case. Typical lines of this sort have been described by Albertin (1976)and Silzars (1978).The rise time t of the electron beam to reach its full deflection is given by 7 = (O.8/nk)zofor a line of n elements or n turns, where z,, is the transit time for one element and k (a function of the cross-coupling between elements of the line) is a parameter between 0.5 and 1 . Analytical techniques to design these traveling-wave deflection systems have been developed by Loty (1966)and Silzars and Knight (1972).
C . Post-Deflection- Acceleration (P D A ) If the voltage of the phosphor screen is made higher than the average voltage of the deflection plates, increased brightness is obtained. This acceleration of the beam after deflection is called post-deflection-acceleration (PDA). Unfortunately a converging lens, formed by the difference of voltage
CATHODE RAY TUBES
215
between the deflection region and screen zone, causes a reduction in deflection sensitivity. This new sensitivity is nevertheless higher than would be the case if the dc voltage on the deflection plates were raised as high as the screen potential. In the early postacceleration tubes the converging lens was rather strong, resulting in a large loss in deflection sensitivity. To avoid this, an arrangement, still widely used, permitting the use of a weaker lens was developed. This consists of a spiral resistive coating deposited on the inner surface of the tube cone (Fig. 25), one end of the spiral being held at the average potential Vo of the deflection plates and the other end at the screen potential V, . For such a spiral PDA system, the demagnification factor DM, the ratio between the deflection amplitude y, obtained with a screen voltage V, and the deflection amplitude y, which would be obtained with a screen voltage V, , is given by Yz 2 DM=-= 1 (1 + Vl/Vo)”2 y, Since the thick lens formed by the PDA field also converges the beam, the spot size is reduced, increasing the brightness. However, because of the convergingeffect of the PDA thick lens, if a ratio Vl/Vo exceeding 5 or 6 is used, this causes excessive barrel distortion of the raster pattern. This cannot be compensated by proper shaping of the deflection plates. Furthermore, a high ratio V,/Vo requires compensation of the deflection demagnification DM, requiring an increase in the deflection angle at the output of the deflection plates. This results in more deflection defocusing and nonlinear deflection
+
r
I
L
i i I
+
/
I t
t Electron gun
Zone at potential Vo
coating
FIG.25. Postacceleration with a resistive spiral.
V1
216
ANDRE MARTIN
sensitivity. To overcome these problems two types of postacceleration schemes, both making use of deflection amplification, were developed. As discussed below, these generally allow the use of ratios V,/V, of well over 10.
D. Defection AmpliJication I . Mesh-Type Postdeflection Amplification
One technique of deflection amplification widely used in oscilloscope CRT's is shown in Fig. 26. Here a metallic cylinder g5 with a dome-shaped metallic mesh g6, having an optical transparency of 60 to 80%, is mounted just beyond the deflection region and maintained at a potential V, close to the average potential of the deflection plates. Since the final anode and bulb coating are held at a potential V,, such that V,/V, N 10, a diverging lens is created by the field adjacent to the mesh on the anode side. The electron beam will thus be deflected a distance y z from tube axis, instead of y l , which would result in the absence of the lens and a final screen potential of V,. As shown by the lower part of Fig. 26, representing the electron beam, the divergent lens formed in the mesh region images the crossover of diameter 4, on the screen with a diameter &,where & is greater than the spot diameter 4; without the mesh lens. This can be explained on the basis of the work of Albertin (1976) and Miller (1963). Considering the CRT shown in Fig. 26 and assuming that the paraxial ray conditions are fulfilled, we can use the following formulae of the thin lenses to compare the deflection
Cathode
--Zone
1 1
at potential V,
1
~
gf
i
Y,
I
.
t4
-v2 > 1 --
'0 2
FIG.26
CRT using mesh postdeflection acceleration.
CATHODE RAY TUBES
217
amplification factor M = y 2 / y 1and the spot magnification factor G = &/&:
distance from the mesh to the y deflection center distance from the mesh to the apparent deflection center after postacceleration distance from mesh to screen deflection amplitude on the screen without postacceleration deflection amplitude on the screen with postacceleration potential of the screen and of the wall anode g7 potential in the deflection region (also applied to g4) distance between the crossover of diameter 41and the center of the main focusing lens distance between the crossover image of diameter 41 (image in lens L , ) and center of the main focusing lens L , distance between the crossover image and the center of the diverging lens L , whose focal lengths are, respectively, f and f ’ distance between the screen and the center of the diverging lens L2 diameter of the crossover image without any postdeflection amplification (lens L2 removed) diameter of the crossover imaged by both lenses L1 and L 2 , which are separated by a distance L Combining Eqs. (1 7) and (18) with the relations
f
f ’ -1,
f ’- -
p2
p;
d2
we obtain
G
=
f
f +(M f‘
=
1, P ; = D
dl
-
-)f
1 + Dldl f’ 1 D/L
+
(19)
In practice, dl is always smaller than L since the main focus lens is necessarily placed before the deflection plates. Therefore, from Eq. (19) we obtain
218
ANDRE MARTIN
Deflection amplification thus increases the spot size more than the magnitude of the deflection. If we express the deflection factor in volts/trace width, the tube with postdeflection amplification will have poorer performance than one without amplification. However, since V, can be about 10 times V,, relatively high brightness is achieved. Also the tube length can be substantially reduced for a given deflection sensitivity. It should be noted that some spot degradation and spot halo also results because of scattering and secondary emission of the electron beam impinging on the mesh. These effects, including local bending of the beam by the microlenses formed by the mesh holes, are described by Hasker (1966a,b) and by Hawes and Nelson (1978). The spot radius is also affected by the mesh structure because of spacecharge effects. The space-charge calculations of Schwartz (1957) for an electron beam have been extended to the mesh PDA tubes by Janko (1974), who has calculated the ratio rsc/rias a function of ZZ 'l2/ri V3/*,where rscis the radius of the virtual crossover image after the diverging mesh as a lens (Fig. 27), ri is the exit aperture radius, I is the beam current Z is the distance from the beam exit aperture of the mesh, and V is the average potential of the space between the exit aperture and the mesh. The set of curves calculated by Janko for different ratios of D / Z , D being the distance from the mesh to the virtual image plane (Fig. 28), demonstrates that the space-charge effects along the beam are greater for a mesh type tube where D / Z > 0 than for a tube without mesh postdeflection amplification where D / Z = 0.
-V
= const
I ,-Postdeflection
region7
. z axis
FIG.27. Schematic of the beam profile along the CRT axis. The beam is focused to produce the smallest possible virtual image. The convergence of actual rays is reduced in the strong PDA field, and they form the real image on the screen. The ratio ros/rican be calculated from the curves of Fig. 28. From Janko (1974).
CATHODE RAY TUBES
219
0.003 0.004 0.005 0.006 0.007 0.008 2I"Z -
ri v'"
FIG.28. Calculatedspace-charge-determined radius of a beam in the virtual image plane of a mesh expansion PDA tube. For D / Z = 0,the virtual image plane coincides with the mesh and the model reduces to the non-PDA space-chargemodel. Portions of curves for D / z = 0.1 and 0.3 are left out for clarity. From Janko (1974).
2. Dejection Amplijication with Nonsymmetrical Field Lenses
To avoid the shortcomings of mesh systems, meshless deflection amplification systems have been developed. One such arrangement, described by Martin and Deschamps (197 l), uses hyperbolic quadrupolar lenses consisting of two pairs of electrodes having a hyperbolic cross section perpendicular to the electron beam axis (Fig. 29c). Here eV, is the energy of the electron beam entering the quadrupolar lenses while V, - Vand Vl + Vare the potentials applied to opposite pairs of electrodes. The electrostatic field in the xz plane (Fig. 29b) is Ex = -2Vx/a2 and in the yz plane is Ey = 2Vy/a2 (Fig. 29a), with 2a being the distance between opposite electrodes. The corresponding forces acting on the beam are 2e V F =-x a2
FY = - -
2e V a2 y
Fxis a diverging force, while Fyis a converging force. It can be shown that if the
220
ANDRE MARTIN
r - - -__I b
\
I'
-
/
v,-v
__--(bl
XIZPtane (divergent)
+
I
FIG.29. Deflection amplification by means of a quadrupolar lens. See text for details.
beam enters the quadrupolar lens at angles x; and y; (Figs. 29a,b), it will exit at angles x; and y ; with the following amplification of the deflection angle:
0,
= x;/x; = cosh k b
- kd, sinh kb
0, = y;/y; = cos k b - kd, sin k b
(22) (23)
where b is the length of the quadrupolar lens, d , and d, are the distances of deflection centers to the quadrupole, and k = (l/u)(V/Vl)1'2. Because of the finite width of the beam, the forces acting on the electrons cause widening of the beam in the xz plane and compression in the yz plane, resulting in a strong astigmatism. To compensate for this astigmatism, a second quadrupolar lens, with electrostatic fields oriented perpendicular to the first lens, is used. A typical tube structure using such a set of quadrupolar lenses is shown in Fig. 30 (Albertin, 1976). Figure 30a is a cross section of the tube showing the successive gun electrodes gl, g2, g3, and g4, followed by the y deflection plates, a first quadrupolar lens J,, a set of x deflection plates, and a second quadrupolar lens J, enclosed in a slot lens g5. The final electrode g6 is at screen potential. Figure 30b shows the effect of the successive lenses L,, L, , L, and quadrupolar lenses L,, L, on the undeflected electron beam in the xz and yz planes. L, is the main focusing lens, L, is an astigmatic lens formed between the final aperture of anode g4 and the entrance of the y deflection plates, and L5 is a prismatic lens which separates the deflection and amplification zone from the postacceleration region. This prismatic lens is
CATHODE RAY TUBES
22 1
Cathode
_ _ _ _ _ _ _------_--
FIG.30. Structure of a tube using quadrupolar-lens deflection amplification.
made from a slot in g5 whose axis is in the xz plane. In the y z plane, the electron beam leaving the crossover 'pl will first enter the converging lenses L,, L,, then the diverging lens L,, and finally the converging lenses L, and L,. In the xz plane, the electron beam leaving the crossover 'pl will first enter the converging lens L,, then the divergent lens L,, the converging lens L,, the divergent lens L,, and finally the prismatic lens L, ,whose effect is negligible in the xlz plane. Proper adjustment of the different lens' potentials allows overall compensation of the beam astigmatism so that a round spot is obtained at the screen. With this structure, and thanks to the prismatic lens L,, Martin and Deschamps (1971) have shown that the deflection amplification takes place in L3 and in L, for the y z plane, making possible not only good astigmatism correction but also a deflection sensitivity more than 3.5 times greater than that without amplification. Other structures using a boxlike electrode structure to form quadrupolar lenses have also been reported by Odenthal (1977). More recently, Franzen (1983) has described a quadrupolar scan expansion system for oscilloscopes derived from a basic quadrupolar lens design by Klemperer (1953) and greatly improved through computer electrostatic field and electron trajectory calculations. Most of the oscilloscope tubes presently manufactured use either a domeshaped mesh lens or a quadrupolar lens system for deflection amplification, although some flat meshes or parallel wire grids are still used for postacceleration amplification in tubes where reduced performance can be accepted. The
222
ANDRE MARTIN
spiral postacceleration system (Section C, above) is still widely used for oscilloscope tubes, generally operating below 30 MHz, where high deflection sensitivity is not ncessary. In some tubes (see Section VIII.2.1), in order to achieve the highest brightness (and writing speed) as well as high deflection sensitivity, microchannel plates have been placed adjacent to the screen to increase the beam current by electron multiplication.
IV. ELECTROMAGNETIC DEFLECTION As already indicated in Section 111, electromagnetic deflection is often used because it can provide large deflection angles at high beam currents with reduced deflection defocusing. This deflection method, used universally in color and black-and-white television sets and in industrial and military tubes employing television-type scanning, is well described by Vonk (1972) and Hutter (1974). However, when stroke writing (random access) is used, the dynamic characteristics of the deflection yoke (rise time, settling time), although of much less importance for repetitive television scanning, become critical. For black-and-white television the deflection field is purposely made nonuniform to prevent raster distortion. This causes spot distortion and a loss of resolution that are not acceptable in industrial and military displays. In such displays, geometrical accuracy may be sacrificed when designing the yoke but this can be compensated for by using drive circuits that supply dynamic correction signals to the deflection yoke (Pieri, 1978). With any system, some form of compromise between geometrical distortion and spot aberrations must be made when designing the deflection coil. This section outlines the main design parameters of deflection coils and discusses their relative importance. Also the different deflection yoke structures are related to their application.
A. Basic Electromagnetic Dejection Equations
Figure 31 shows a flat-screen cathode-ray tube with its deflection yoke. The electron beam enters the magnetic deflection field along the z axis and leaves it at an angle 8 which is a function of the magnetic field generated by the current in the horizontal winding of the deflection yoke. To simplify calculations we assume a deflection coil whose ferrite core has the shape of a cylinder, as shown in Fig. 32. All turns of the horizontal winding are wound in a saddle-type manner around this cylindrical core. (The vertical
CATHODE RAY TUBES
Electron gun
1
H
/Horizontal winding
223
224
ANDRE MARTIN
winding, wound perpendicular to the horizontal one, is not drawn on Fig. 32 to clarify the sketch.) Although the magnitude of the field actually varies along the z axis, as indicated in Fig. 33a, to establish elementary relations it is assumed to change abruptly as shown in Fig. 33b since experience has shown such an approximation to be useful. With this assumption, a sinusoidal distribution of the turns along a section of the cylindrical core gives a uniform field distribution inside this cylinder of diameter D and length 1. When combining both horizontal and vertical deflection fields, the resultant magnetic field is also uniform and its direction results from the geometric sum of both fields. The permeability of the magnetic core is assumed to be sufficiently high (over 1000) so that the reluctance of the core is negligible compared to that of the air path. (In the discussion below all units are assumed to be in the MKSA system.)
b
FIG.33. Variation of the field strength along the z axis of the deflection coil: (a) experimental and (b) approximation with abrupt transitions used for calculations.
225
CATHODE RAY TUBES
1. Electron Trajectories and Dejection Angle Figure 34 shows an electron beam, accelerated to a potential V’ and entering a uniform deflection field H perpendicular to its direction of motion. This electron beam describes in the magnetic field a circular trajectory of radius r such that
f
= 0.373-
= &po-&
H
(24)
Jv,
where elm is the charge-to-mass ratio of the electron and p o is the permeability of the deflection space. We can write for the deflecting field created by a cylindrical winding of inside diameter D and length I,
H
(25)
=NI/D
where N is the number of turns of the winding and I is the current. From geometrical considerations in Fig. 34, the deflection angle 8 is given by sin0 = l / r
(26)
Substituting Eqs. (26) and (25)into Eq. (24)we obtain sin 8 =
NI1 2.680
Equation (27) shows that the sine of the angle of deflection is (1) inversely proportional to the square root of the accelerating voltage V’, (2) inversely
t”
Screen plane
L
Deflection amplitude
at V b /
Deflection region of diameter 0
FIG. 34. Principle of electromagneticdeflection.
226
ANDRk MARTIN
proportional to the inside diameter of the core, and (3) proportional to the ampere-turns and to the length of the core.
2. Magnetic Energy for a Given Deflection Angle According to electromagnetic theory, the magnetic energy associated with the deflection coil of Fig. - 32 is
E=
/fd.
where B is the magnetic flux density and v is the volume inside the ferrite core where the field exists. From Eq. (28) we obtain, after integration, IL
E = -poD21H2 8
(29)
The energy is also given by the familiar expression
E = LI '/2
(30)
where L is the inductance of the deflection coil. Combining Eqs. (29) and (27) we can write, with Eq. (30),
E
LIZ 2
= -= 2.82 x
D 2 sin20 VB 1
Finally, from Eqs. (3 1) and (27) we can draw two important conclusions. The energy in the deflection coil is proportional to the square of the inside diameter D of the ferrite core and the sine of the deflection angle is inversely proportional to D. It is thus commonly of interest to reduce D, which in effect requires reducing the diameter of the tube neck and of the electron lenses. A compromise must thus be found between power reduction and deflection sensitivity on the one hand and the electron-optical characteristics of the electron gun on the other hand. From Eq. (31) we can write
As indicated, the deflection, expressed as sin 6, is directly proportional to the current in the winding. We can also note that the sensitivity is proportional to the square root of the core length. This length, however, can only be increased up to the point where the deflected beam is intercepted by the tube neck, In modern design, this is partially avoided by flaring both the neck and deflection coil, as indicated in Fig. 31 and discussed in Section IV.E.1.
227
CATHODE RAY TUBES
B. Magnetic-Field Distribution in the Dejection Coil
Figure 35 shows a section in the x y plane of the deflection coil sketched in Fig. 32 with its horizontal winding which varies with angle a from Ox with a distribution function t ( a )such that the magnetic field is uniform. The ampereis then given by g(a) = kt(a), where the constant k is ir turn density s(@) function of the current level. We can then write
(33) where g(a) is a periodic function that can be expanded into the following Fourier series: g(a) = dNZ(a)/da
g(a)=h, +h,cosa+h~sina+h2cos2a+
Since the windings are symmetrical g(a) = g ( -a), and g(a) = - g ( n This requires that the coefficients of the series have the values
h,'
=0
and
- a).
h, = h2 = hZm = 0
We can then write g(a) = h,cosa
+ h3cos3a + h,cos5a +
(34)
V
Distribution
NI amp. turns 2
NI 2
-amp.
section Fic. 35. Section in xy plane of the deflection coil represented in Fig. 32.
turns
228
ANDRE MARTIN
where the ratios h 3 / h , , h 5 / h , , etc. are characteristic of the coil turn density. If we consider an actual yoke where the field drops gradually at the ends of the coil, the h1+2mcoefficients of Eq. (34) may become a function of z. The calculation of the magnetic field is thus a three-dimensional problem that requires long and costly computer operations. If we assume that the length I of the yoke (Fig. 32) is at least two times the diameter D, the relative contribution of the fringe fields at the end of the coil is reduced and a simplified approach can be used in which the h, + 2m coefficients are assumed to be independent of z. The problem is then reduced to two dimensions, p and a, in the x y plane (Fig. 35). With this approximation, the magnetic field H ( p , a) in the deflection volume contained by the core can be derived from the scalar magnetic potential @ ( p ,a): H(p,a ) = -grad W p , a )
Since div H = 0, @ ( p ,a ) is a solution of A@(p, a ) = 0. Within the planes of symmetry of the deflection coil, a general solution is possible in the form of a Fourier series in a. The series coefficients are a function of p and are related to the h, + 2 m coefficients of g(a) of Eq. 34 by the limit conditions at the surface of the windings. With these conditions, we can write
+
$ ( p , a ) = h,-sina P R
(35)
Consequently, the magnetic field in polar coordinates is given by
H, = L h cosa R
h3 > O
hl >O h, > O
+h3 R3’
h, > O
2
h3 > O ‘h, = 0
+
cos 301
(37)
* * *
h, >O
h3 = h, = O
FIG.36. Field distribution in the deflection coil for three sets of values of h , , hf, and h,.
229
CATHODE RAY TUBES
where Hp and H, are, respectively, the radial and tangential field components (Fig. 35). Figure 36 shows the field distribution in the deflection yoke for different values of h , , h 3 , and h , . The field distortion is obviously negligible when h3 and hS are equal to 0. The distribution of the winding is then such that the ampere-turn density is sinusoidal. C . Geometric Distortion
Up to now, we have considered the horizontal winding only, for which the ampere-turn density coefficients [Eq. (34)] are h l , h z , and hl + 2m. Similarly, the ampere-turn density coefficients of the vertical deflection winding are u l , u z , and u 1 + 2 m * The expressions for the vertical deflection fields HZ and H,V are obtained by replacing hl + 2 m by u1 + Z m and a by a - 4 2 in Eqs. (34) and (35) because of the symmetry of the deflection coil. The coefficients hl + 2m and u1 + 2m that express the ampere-turns are proportional to the combination of currents I, and I,,. The shape of the coil and the field distribution can hence be expressed independently of the currents I, and I,, by using the ratios p = h3/h1 and q = u 3 / v 1 for the third order. The total field at a given point ( p , a, z) is now defined by Hp=
+ HZ
(38)
HI = HP
+ H,V
(39)
and The currents I, and I, in the two windings define a deflection angle a. This direction a and the axis Oz determine the deflection plane containing the electron trajectory.
1. Case of a Uniform Field-Electronic of Pincushion Distortion
Correction
To obtain a uniform field, the h and u coefficients of Eqs. (34) and (35) must be such that h, > 0; u1 > 0; h3 = u3 = 0. The angle cp of the trajectory projection on the screen with respect to the horizontal x axis is expressed by tan = v , / h , . The total field at a given point (p,a, z) is thus
h, H p =,sina
+ -sin O1 R
( - 3) a
-
230
ANDRE MARTIN
In the deflection plane determined by cp and
0 2 ,a =
cp and we can write
H,, = 0
Substituting H , in Eq. (27) we find the deflection angle 8 (Fig. 31) expressed by sin8 =
4
(42)
2.68f i
By substituting for HI, we obtain
From Eq. (33) we have
and similarly NvI, = 2111 The expression for sin 0 can thus be written as
When both windings distributions are identical, NH reduces to
= Nv,
sin8 = K I
and Eq. (43) (44)
where
I
=
J
~
If the distance from the center of deflection to the screen is S (Fig. 34), and if the spot coordinates on the screen are x and y, we can write SKI,
Jm SKI, y=Jm
x=
(45)
CATHODE RAY TUBES
23 1
FIG.37. Typical pincushioncorrection.The arrows show the directions in which correction is applied.
Since the deflection is not linear with current, a raster pattern will have a pincushion shape on the screen as shown in Fig. 37. However, such pincushion distortion can be avoided by properly modifying the deflection currents. From Eqs. (45) and (46) we can write I,
=
I, = By setting
we obtain
X
K ,Isz + x Z + y 2 Y
KJSZ
+ x z + yz
232
ANDRE MARTIN
The necessary coil current correction can now be calculated by means of Eqs. (47) and (48), using the first few terms of the series. 2. Case of Nonuniform Field Used for Pincushion Correction
As an alternative, pincushion distortion can be corrected by using a nonuniform deflection field, obtained by changing the ampere-turn distribution in the coil. This is known as the third-order h3u3 correction. From Eqs. (36),(37),(38),and (39) we can combine a vertical deflection field (ul,uj) and horizontal deflection field ( h , , h,) to produce a total deflection field: 1 sin a - u, cos a) H p = -(h, R 1 HI = -(/I, R
PZ + -(h3 R3
sin 3a
+ u3 cos 3a)
PZ cos a + u1 sin a) + +h3 R
cos 3a
- u3 sin 3a)
These expressions can be broken down into a uniform field given by h,, u, resulting in a deflection angle 0 in the cp direction, plus perturbations q,(H,,) and cp3(Hl)due to the h3 and u3 terms. Therefore tanrp = u , / h , and the magnitude of the nonperturbated deflection is M = If we consider the plane cpz, we can write
,/m.
~ p ( c p )= HI(rp)
=
0 + 63(Hp) M/R
+
and for any point ( p , a )
+ 6,(Hp) H,(cp) = uniform field + d3(Hl)
Hp(cp)= uniform field
When the x and y windings have identical turn distributions, the third-order terms are the same, and we can write h u p = 2 = 2 h , u1 PZ 6,Hp = F(pul
with u1 = M sin cp, cos 3a
+ p h , sin 3a)
- PPZM(sincp cos 3a + cos cp sin 3a) -R3
h,
=
M cos rp
233
CATHODE RAY TUBES
Similarly
From Eqs. (49)and (50)we can deduce the three following results for positive values of p: on the x and y axes where cp = nn/2,
6,(Hp) = 0
&(HI) has a positive maximum
and
+ l)n/4,
on the diagonals where cp = (2n
6 , ( H p )= 0
&(HI) has a negative maximum
and
along directions where cp = (2n
+ l)n/8,
&(Ifp) is maximum if
and minimum if
cp = (4n
cp = (4n
+ 3)@3
+ l)n/8,
and
6,(H1) = 0
The effect of the third-order correction, as indicated in Fig. 37 by the arrows, is to increase the deflection amplitude along the x and y axes and reduce it along the diagonals.
3. Correction of Dejection Nonlinearity Although the pincushion distortion may be reduced to an acceptable level by an appropriate winding distribution, this increases the deflection nonlinearity which has to be corrected along both the x and y axes by modification of the deflection currents, which can be expressed by:
I,
= Ax
+ Bx3 + cx5 +
* * *
(51)
The exact currents must be determined for each particular yoke-tube configuration. For stroke writing, the correction signals may be obtained from diode function generators. For television rasters, diode function generators can also be used, although simpler RC circuits may also be acceptable. For instance, for a television raster horizontal-deflection linearity correction, a suitable capacitor may be inserted in series with the winding and the sinusoidal current response so obtained gives a good approximation of Eq. (51).
4 . Correction of Dejection Distortion Using Auxiliary Magnetic Fields Instead of using the methods described in Subsections 1 and 2 above, geometric distortion of the scanning pattern can be corrected by means of a static magnetic deflection field acting on the beam after the deflection coil field. The static field may be produced by an %pole arrangement
234
ANDRE MARTIN
using dc windings or permanent magnets. Correction by this method is limited to small deflection angles up to 40 or 50” since it tends to produce beam astigmatism, giving distorted spots on the screen. 5. Other Defection Distorsion Problems
In addition to pincushion distortion, which is inherent to the operation of the deflection yoke, many other pattern distortions may occur. For example, a square may appear as a trapezoid, a straight line as a parabola. Such defects are generally caused either by physical defects in the deflection yoke (such as incorrect and nonsymmetrical positioning of the windings or insufficient precision in machining of the magnetic core), or by misalignment of the electron beam with respect to the deflection coil axes, which is the classic cause of parabolic lines occurring along the x or y axes, usually accompanied by trapezium distortion. In the latter case, fringe fields at the entrance of the deflection coil, which normally do not interact with the beam, add their effects unsymmetrically to the normal deflection. These distortions may be avoided by careful centering of the yoke around the beam axis, aligning of the beam with respect to the yoke axis by means of auxiliary magnets or coils between the electron gun and deflection coil, adding further correction signals to the geometric correction signals, or using geometric correction magnets or coils placed after the deflection zone as indicated in subsection 4 above. D. Aberrations Aflecting Resolution Uniformity
At the screen center (zero deflection fields), the electron beam characteristics (spot size and astigmatism) are determined solely by the electron optics of the gun system. Deflection of the beam by the magnetic field, however, introduces aberrations such as field curvature, astigmatism, and coma, similar to those encountered in prismatic deflection of a light beam in conventional optics. 1 . Image-Field Curvature
Figure 38 shows a deflected astigmatic beam with its circle of least confusion. As in prismatic deflection of a light beam we obtain a tangential focus and a sagittal focus. The circle of least confusion is found to occur on a “best” focus surface of paraboloid shape tangent to the screen center and curved toward the deflection coil. The spot on the screen is thus defocused everywhere except at the screen center. One way of correcting for this effect is to use dynamic focusing in which a correcting signal, whose amplitude is a function of spot position, is applied to
235
CATHODE RAY TUBES 1 - Tangential focus 2 - Circle of least confusion 3 . Sagittal focus 4 - Image surface ..
\ FIG.38. Image field and deflection astigmatism.
the focusing electrode or coil. For electrostatic lenses, the signal V,, is such that V,,, = K ( x 2 + y 2 ) . Another approach, in the case of small tubes, is to deposit the phosphor screen on a fiber-optic faceplate having an internal concave surface which matches the best focus paraboloid. 2. Astigmatism
Figure 39a shows the variations in cross section of an astigmatic deflected beam as the focus voltage (or current) is adjusted, the deflection voltage or current being kept constant. The astigmatism occurring in a small volume of the deflectionfield can be analyzed by means of Eqs. (49) and (50), which are used to calculate the field difference between the center and edge of the beam. For this calculation a uv reference system, centered on the beam center 0 is used (Fig. 40), where u is in the direction of the deflection (angle (o) and v is perpendicular to u. Applying Taylor's method to obtain the coefficient 6, from Eqs. (49) and (50), we obtain along p or u (between rays 4 and 2 of Fig. 40)
M 6,(H") = p 7 c o s 4 q ( p 2 + 2pu + u 2 ) R M
6,(Hu) = p$sin4(o(p2
+ 2pu + u 2 )
(52)
(53)
236
ANDRE MARTIN
(cJ
(UJ
/ - -
II
\
FIG.39. Astigmatism on a CRT screen. (a) Successive cross sections of an astigmatic beam varying with voltage, (b) distortion on a CRT screen caused by harmonic astigmatism, (c) distortion on a CRT screen caused by axisymmetric astigmatism, (d) combined effects of harmonic and axisymmetric astigmatism on a CRT screen.
Along the tangent u (between 3 and 1 of Fig. 40)since c1 N cp
d,(H,)
M
= p-(
R3
M
= p3(p2
+ u / p we can write
cos 441- 2up sin 441- 2u2 cos 4cp - . * .) sin 4q
+ 2up cos 4 q -
2u2 sin 441- - ..)
(54) (55)
The field differences between the center and the edge of the beam are proportional to 2up [Eqs. (52) and (53)] or to 2up [Eqs. (54) and ( 5 5 ) ] . They give field components that are zero at the center of symmetry, and that vary linearly with u and u. This represents a quadrupolar field which is the fundamental cause of astigmatism. For the small field volume that is analyzed, the degree of astigmatism is proportional top (that is, to H3 and u3 as discussed in Section C.2), which explains why the astigmatism that is introduced is
CATHODE RAY TUBES
237
FIG.40. Field differences between the center and the edge of a beam.
proportional to the degree of geometric correction obtained by coil shape design. This astigmatism is also proportional to M p , and thus to the square of the deflection. It varies linearly with u and u, and thus is also proportional to beam diameter. We call the astigmatism described up to now “harmonic astigmatism” because it is caused by the third- and fifth-order components of the Fourier series of the electromagnetic field. There is another astigmatism, called “axisymmetrical astigmatism,” which is symmetric with respect to Oz (Fig. 31). Its direction and intensity are linked to the second derivative of the electromagnetic field with respect to Oz. Figure 39b shows the spot astigmatism caused on a CRT screen by harmonic astigmatism, Fig. 39c shows the same spot astigmatism caused by axisymmetric astigmatism, and Fig. 39d shows the combined effects of both astigmatisms. The axisymmetrical astigmatism is very difficult to correct by design of the deflection coil. Therefore, when all design techniques have failed, because of the impossibility of meeting all the specified performance figures, the only way of compensating for deflection astigmatism is to use dynamic correction. This is produced by two quadrupolar electromagnetic lenses, one rotated 45” with respect to the other, placed between the electron gun and the deflection coil.
238
ANDRE MARTIN
Typical currents in terms of the x and y deflections are I , = kxy for the diagonals (off the x and y axes) and 1: = k’(x2 - y 2 )for the x and y axes. More sophisticated correction is possible if terms higher than the third order are to be corrected.
3. Coma Equations (52) through (55) contain center-to-edge field difference terms in u2 and 2u2. They are the cause of coma aberration, so called because it
produces a comet-tail spot. Successive cross sections of an electron beam presenting coma aberration are represented in Fig. 41. The degree of coma aberration is proportional to u2 and u2. It is therefore proportional to the square of the beam diameter and to the modulus of the ampere-turns (M = Good dynamic correction for coma, using 6-pole coils, is obtained on some very-high-resolution tubes.
,/m).
E . The Modern Dejection Coil The two-dimensional mathematical model of the deflection coil discussed in Sections 1V.B.C and D assumes a uniform distribution of field along the z axis (Fig. 33). In practice, as shown in Fig. 31, modern deflection coils are lengthened and flared to reduce beam deflection energy. In this case the designer may take advantage of the possibility of appropriately changing the third- and fifth-order terms of the deflection field (decomposed into Fourier series) to compensate for some geometric or astigmatic defects. The modern wide-angle industrial or radar CRT is comparable to the self-converging color shadow mask tube in terms of the difficulty of deflection coil design, since it must handle wide electron beams (3 to 4 mm diameter in the deflection zone) and must have a high deflection sensitivity to reduce the power required from the deflection amplifiers (sometimes several hundreds of watts in the stroke writing and some tens of watts in the TV raster mode).
~~
~
~
FIG.41. Successivecross sections of an electron beam showing coma.
CATHODE RAY TUBES
239
1. The Flared Dejection Coil for Sensitivity Improvement
The magnetic energy for a given deflection angle has been calculated in Section IV.A.2 [Eq. (3111. To reduce this energy, one can reduce the core diameter (and also the tube neck diameter), but “neck shadow” due to the beam striking the tube inner wall very soon limits this approach. A second idea, described by Brilliantov (1972),and now widely employed, is to flare the deflection coil as shown on Fig. 42. Starting from a cylindrical core (1) of length c, whose deflection angle limit caused by neck shadow is d 2 , one can increase the core length to a, while adding a conical part of length b and of half-angle /3 to obtain the flared core (2). The cylindrical part alone of length a has a deflection angle limit of < 02. However, the added conical part of the yoke increases the angle from el to #, thus matching the deflection angle of the previous cylindrical coil, but enabling reduction in magnetic energy. Iterative calculations derived from a mathematical model show that the magnetic energy E decreases with the overall core length t = a + b as shown in Fig. 43. The weight, volume, and cost then become the limiting factors of the sensitivity improvement.
e
FIG.42. The flared deflection coil opposed to the cylindrical deflection coil.
240 E,, (PJ)
-t
a
ANDRE MARTIN
0 = Limit for a cylindrical coil
FIG.43. Magnetic energy E ~ for H full deflection of the beam against the coil core length / and for three coil deflection angles.
2. Three-Dimensional Model of the Magnetic Field in a Deflection Coil
In Section 1V.B we have derived the magnetic field from the scalar magnetic potential, using a two-dimensional representation assuming axisymmetric conditions with respect to z. Similarly, we can represent a threedimensional field by a Fourier series and write for the horizontal deflection field:
@b, a, 4 = 1 n=1.3.5,
...
$!k p ) * sin(na)
(56)
As A@(p, a, z ) = 0, we obtain a set of n differential equations whose n solutions are the two-dimensional coefficients $ f ( z ,p ) . In order to obtain a representation which correlates with field measurements on actual coils, we can expand $ y ( z , p ) in a series:
with a, = f ( z )and uim)being the derivatives of the mth order of f ( z ) . From experimental considerations, we have established that taking into account up to the fifth order correlates well with actual measurements for most cases (the seventh order being only necessary for 110" color shadow mask
CATHODE RAY TUBES
24 1
deflection coils). We can then write
$%P)
=
4
The two-dimensional model described in Section 1V.B can be written as $l(P)
= hl(P/R)
$3(P)
= (h3/3)(P/R)3
$5(P) = ( h S / 5 ) ( P / W
The first terms of the three-dimensional (3D) model (Eq. 58) a l p , a 3 p 3 , a 5 p 5define a magnetic field which compares well with the two-
dimensional (2D)model, with the exception that the coefficients are a function of z. The 2D model analysis of Sections 1V.B and 1V.C hence remains valid for the first-order terms of the 3D model. The difference between 2D and 3D models lies in the derivative terms of the above equations. To analyze the effect of these terms, we calculate the corresponding magnetic fields along p, t, z (Fig. 44). From experimental considerations the term -aS2)p5/16 can be
FIG.44. u--0 coordinates to study the field around a point M ( p , cp,z). Projections of the tangential fields H, and radial field H, along UIJ is Huand H,.
242
ANDRE MARTIN
neglected, as well as the fifth-order term of the field HF on the z axis. We can then write
+ 3a3p2sin 3a + 5asp4sin 5a 1 P2 + c ) c o s a a(4) 4
(59)
192
+ 3a3p2cos 3a + 5asp4cos 5a sin a
+ a\')p3 sin 3a
The term al(z) is the expression of the magnetic field along the z axis, generally called Ho(z) or V,(z).The terms of the 3D analysis are the derivatives with respect to z of the field along the z axis. 3. Combining the Horizontal and Vertical Fields As in Section IV.D, we calculate the magnetic field resulting from the addition of the horizontal and vertical fields. The result (Fig. 44)is a radial deflection in the direction cp and, for paraxial conditions, the trajectory is in the zcp plane. We can therefore write:
H, = H,
+ HV= K[D(p,a,z)coscp + D(p,a - ~/2,z)sincp]
where D is the maximum deflection field (corresponding to full beam deflection) while K is the deflection amplitude coefficient varying from 0 to 1. We have also assumed that D is identical in both the zx and zy planes and that it is defined by Eq. 59. The projections of Hcp on the p t z reference system are
%
I
Z P
+ '3Ka3p2sin(3a + cp) + 5Ka5p4sin(5a + cp,' 9,
+hHb"p4) 3K
cos(a - cp)'
+ ' 3 K a 3 p 2cos(3a + cp) + 5Ka,p4cos(5a - cp)'
(60)
CATHODE RAY TUBES
243
These relations (Eq. 60) show-the structure of the deflection field. If we consider the field from the zcp plane, a = q k E and the terms of E q . 60 noted as ptz are independent of cp, showing the axial symmetry of 43 with respect to Oz. The other terms ptz show a cyclical variation with cp, hence a harmonic variation with cp. The 3D model of the deflection field is therefore characterized by two superimposed and independent modes (Fig. 45). These are (1) an axisymmetrical mode containing the field Ho(z) and its derivatives corresponding to the field spread along z (Fig. 4Sa), and (2) a harmonic mode containing the terms of the third order and above, which do not react with the axisymmetrical mode (Figs. 45b and c). As an example, Figs. 46a,b,d,e shows the horizontal and vertical field distribution measured, respectively, in the xz and yz planes of a selfconverging deflectioncoil for high-resolution color tubes. Fig. 46c and f show the terms H o , H, ,H4, Vo, V,,and V4 (for the same deflection coil) of the series expansion of the fields such as H,,(Z,X)= H,(Z) + H2(z)X2+ H4(Z)X4*** 4 . Aberrations in a Section b z of the Deflection Coil a. Aberrations Related to the 43 Mode. To calculate the quadrupolar and hexapolar field components around a point M defined by p , cp, z we use the uv coordinates as indicated in Fig. 44:
9?u = 9?pcos(~ - cp)
- B,sin(a
- cp)
Ye, = gPsin(cr- cp)
+ 43,cos(a - cp)
(61)
az= 9?z 1 8
9?,, = --KHb2'(2vp
+ ~ V +U
e..)
1 gZ= KHL'b - -KHL3'vp2 8
The series expansion is limited to the third order for the same reasons as in Section IV.E.2. In E q . (62) the terms with the coefficient -QKHLz) show an effecton the geometry because of p 2 , on the astigmatism because of 2up and 2pu, and on the coma because of u2, u2, 2uu. These aberrations are axially symmetric (see also Section IV.D.2)with respect to the z axis. The term KHL1)u
Y
t,l I
Ty
ty
X
(bl
I
(51
FIG.45. Lines of force of the magnetic field for (a) the fundamental mode, (b) the harmonic mode as, and (c) the harmonic mode a; in the cylindrical deflection coil.
A
Front part
Rear part
2 I
b
C
N
6
I
/\
HxV
Front part
Front
Rear part
d
e
f
FIG.46. Experimental magnetic fields in a high-resolution color television coil: (a), (b), (c) horizontal field distribution;(d),(e), (f) vertical field distribution.
246
ANDRk MARTIN
of Se, shows an action comparable to the effect of a cylindrical lens whose axis is in the 43 direction. Hence, astigmatism and field curvature are a function of the amplitude of the deflection (coefficient K ) . b. Aberrations Related to the X Mode. Using the same uu coordinates around a point M defined by p , 43, z, we can write
+
Xu=3 a 3 K ( p 2 2pu
+ u2- 2u2
+ * a * )
sin 4q +(2pu+ 2uu +...) cos 443
Xu= 3a3K(-2up -22vu+...) sin 4q + ( p 2 +2pu
Xz= aS1)K[(3up2
+
. a * )
+ +
+
cos 443 ( p 3 3p2u
a * . )
+ u 2- 2u2
+ . a * )
cos4q
(63)
sin 4q)l
The effect of Xuand Xuadds to the effect of B,,Ye,on geometry because of p 2 , on astigmatism because of 2pu and 2pu,and on coma because of u2,u2, and UU. Along the z axis the field variation is such that field curvature and astigmatism related to the 3up2 terms are introduced. Also, a field coaxial to the z axis can be found in some directions.
5 . Aberrations in the Complete Dejection Coil Integration of the various perturbations along the electron trajectories makes it necessary to take into account the angle of the trajectories with respect to the successive x y planes. This is only possible by numerical integration with a computer. For small deflection angles, a mathematical theory using the Fermat principle was first described by Wendt (1952), then developed by Kaashoek (1968). Although useful to the understanding of the variations of the deflection coil parameters, this theory requires computer calculation of the coefficients of the equations obtained, so that quantitative results can be achieved. 6 . Magnetic Field Variation along the z Axis
The terms aj(z) and a5(z)of Eq. (60)are related to the winding distribution (coefficientsh3,h5 of Section 1V.B) because of the boundary conditions of the scalar potential Q ( p , a, 2). With a proper winding distribution, h3 and h5 can be made a function of z and it is hence possible to modify a3(z)and a,(z). From previous analysis of the 3D model, we have found that, for a given trajectory in the deflection yoke, the magnetic field distribution coefficientsfor geometry, astigmatism, and coma vary respectively as p 2 , p l , po. At the screen, after integration of the trajectories, these coefficients vary as p 3 , p 2 , p i . Figure 47 shows the magnitude of these coefficients (from Eq. 63) with respect to the yoke z axis. The coefficient 3a3Kp2 reacts very strongly on geometry near the front part of the coil (CRT screen side). The coefficient 6a,Kpu acts on
CATHODE RAY TUBES
I
247
Parameters
,6a, Kpu-astigmatism
e FIG.47. Aberration intensities in different zones of the deflection coil.
astigmatism mainly in the median and front parts of the coil, while 3a,Ku2 acts on coma at the yoke entrance (electron gun side). The designer therefore has the possibility of compensating simultaneously for geometry, harmonic astigmatism, and coma by a differential action of the windings at the yoke entrance, median zone, and front part. This approach is presently widely used in most modern deflection coils. 7 . Summary of Action of the Main Parameters on Observations
In Table I1 the action of the three important aberration parameters
( p = h,/h,, beam diameter E, and deflection angle 0) are summarized.
248
ANDRE MARTIN
TABLE I1
EFFECT OF VARIOUS PARAMETES ON ABERRATIONS Error on screen propartional t o factors indicated for following defects
PARAMETERS p = !I2 hl
3d order astigmatism
P
3d order coma
P
Pincushion
p=o
Pincushion change
P
Image field curvature
-
Defocusing by field curvature
-
Beam diameter E
Deflection angle %
-
-
I
F. Behavior of the Dejection Yoke under Transient Operating Conditions Although the transient response of a yoke is not critical when using a TV raster (because of the periodicity of the signals), it becomes very important when using the stroke writing mode usually employed for high-quality industrial and military displays. With the stroke mode, a given spot position on the screen corresponds to a combination of two particular current values, I, and I,,, in the windings. Suitable variations in these currents create the desired image. The accuracy of the currents in the windings, which has a fundamental effect on the image quality, may be affected by the transient changes in yoke currents resulting from rapid changes in voltage. A typical deflection system is shown in Fig. 48 where a low output impedance amplifier supplies a current I, to the deflection coil, a feedback voltage being derived from the current flowing through the small-valued resistor R I . Dynamic-response errors have two causes. The current I, does not correspond to the input voltage because of the inductance of the coil. Also, the magnetic field may not correspond to the current I, (or I,,) because of the remanent field and eddy currents in the magnetic material.
1. The Dejection Yoke as a Circuit Element The limit to the rate of current variation (or scanning speed) is given by the classical expression
CATHODE RAY TUBES
249
FIG.48. A typical deflection amplifier with feedback loop.
where L is the coil inductance, V the voltage applied across it, and I the coil current. If t is the time required to deflect the spot across the full screen (corresponding to a deflection angle 20,) and I,,,the current corresponding to a deflection from center to edge of the screen, we can write L dlldt = 2LI,,,/t
If & V, is the voltage supplied to the deflection amplifier, we must have
v, 2 2LI,,,lt Since the corresponding energy in the deflection coil is E , = L1,32, we can combine it with the above expression and write L S V:t2/8E,
(64)
Equation (64) gives the maximum permissible coil inductance for a given power supply voltage k V, and a given time t. We have neglected the voltage drop in the amplifier (RJ,,,) and in the coil (RI,,,)which should be subtracted from V, in Eq. (64). Another factor limiting the performance of the deflection system is its bandwidth B, which is closely related to the coil resonant frequency f,.The gain in voltage (in dB) of the system as a function of the frequency is given by
250
ANDRfi MARTIN
G 50dB
Deflection amplifier
0 dB
0 dB
Deflection coil
-50 dB
-.
50 dB
Deflection coil (open) plus deflection amp1if ier
0 dB
FIG.49. Behavior of deflectioncoil and amplifier with frequency.
the curves of Fig. 49 for the amplifier connected to a resistive load (upper figure)and also the amplifier connected to the deflection coil (lower part). This gain G is related to the 6 dB/octave slope of the curve f’/Vf of the deflection coil. This slope is limited at the low-frequency end by R/2nL ( R being the sum of all resistances in series with the coil of inductance L). At the high-frequency
CATHODE RAY TUBES
25 1
end, the limit isf,. The gain G is hence close to G
11
(2nL/R)f,
This approach considers the deflection coil as a simple resonant circuit. Real coils behave in a more complex manner and their equivalent circuit resembles a T filter, as shown in Fig. 50, where the deflection coil is connected at one end to a low-impedance deflection amplifier and at the other end to ground through a low-value resistor R,. There is a strong mutual induction M between the two coil elements, each of inductance L / 2 . Using this equivalent circuit it is then possible to define a transfer function H(f) = I ( f ) / V ( f ) whose amplitude and phase vary with frequency. Figure 51 shows these amplitude and phase variations for three different yokes. Curves (1) and (2)are for single-ended deflection coils, while curve (3) is for a double-ended deflection coil energized by an amplifier with two symmetrical outputs. The characteristic frequency of a yoke is the frequency corresponding to a phase rotation of 180".The bandwidth of the overall system (yoke + amplifier) is generally about half of the characteristic frequency. [The curve (1) of Fig. 51 indicates 5-MHz yoke characteristic frequency and hence a system bandwidth of 2.5 MHz.] 2. Recovery Time, Settling Time, and Residual Magnetism
When d l / d t is small, the field varies linearly with current but when it is high the field does not change instantaneously with current, but is delayed according to an approximately logarithmic law. This delay, known as recovery time, is due to eddy currents in the core and in metallic parts surrounding the coil. The corresponding delay in CRT response, known as the settling time T, is the time between the instant that the current reaches its specified value and
FIG.50. Equivalent circuit of a deflection coil.
252
ANDRk MARTIN
FIG.51. Experimental curves of phase and amplitude of the transfer function H ( f ) (by courtesy of Videocolor).
the instant that the spot reaches its correct position within 1 or 0.1%. Typically 7' (1%) is less than 1 psec and T (0.1%) is a few microseconds. Due to hysteresis in the deflection coil core, residual magnetism results. It has a very noticeable relative displacement effect on displays where rapidly moving figures are mixed with slowly moving ones. This residual magnetic field is expressed as a percentage of the maximum deflection field. Typical values are lo-'% for a very good yoke and up to 1% for an ordinary one. Additional hysteresis of metallic parts surrounding the deflection coil can also produce similar effects. G. Field of Application of Various Types of Dejection Coils
In Table 111 the main characteristics of five types of yokes are compared. The saddle yoke, whose windings are disposed in a saddle shape inside the magnetic core is the reference standard of television raster scanning. The common toroidal yoke is also given as a reference example for television raster scanning. The precision system toroidal yokes uses an electric-motor type of stator core, accurately machined to precisely define the field, with the windings placed in the slots of the core. This type of yoke is well suited for industrial and
TABLE I11 FIELDS OF APPLICATION OF VARIOUS DEFLECTION YOKE TECHNOLOGIES Electro-optical characteristics
Yoke Technology
LjR fr
L
Poor
values > l mH
Average
Limited
Residual Recovery magnetism time
~ 1 -
2
2
(Q)
I
Common toroidal yoke
High-Q stator yoke
Low-Q stator yoke
I
TV field
I I I Average
1 to 2.5 ms
Very
TV line and field
Average
Random scan
Average
good
Very good
High
5OoW
2 MHz,
100 to 200 ps
High 4 MHz, 120pH
Limited to
lOmH
-
Poor
Average
Medium
Good
Very good
Very good
254
ANDRE MARTIN
military displays using vector scanning because of its low inductance and low aberration. Also, the highest resonant frequencies are found in stator yokes because of their low inductance.
V. PHOSPHORS AND SCREENS A. General Properties of Phosphors
The luminescent materials used for the cathode-ray tube screens consist of highly purified inorganic compounds containing very small, carefully controlled quantities of elements known as activators or dopants. It is common practice to call fluorescence the luminous emission during electron bombardment and phosphorescence the luminous emission after the electron excitation has ceased. For most phosphors, fluorescence begins within a short time At after excitation. According to Curie (1960), At varies from lo-* to sec. The duration of the phosphorescence varies with the phosphor type and may be in the range of sec to several hours. Details regarding such phosphor materials can be found in the books of Leverenz (1950)and Curie (1960). The phosphor materials can be separated into two families, the broadband-emitting phosphors, in which we find materials such as sulfides, oxides, and silicates, and the narrow-band-emitting phosphors represented by rare earth materials. I . Broad-Band-Emitting Phosphors In these materials, where the sulfides are the most widely found, electronbeam excitation transfers electrons from the luminescent centers (activator atoms)to the conduction band of the crystal. The electrons then return to their lower energy state through various intermediate levels that are characteristic of the crystal host, and of the dopants that have been introduced into it. At least one of these transitions is accompanied by the emission of light. The spectral distribution curve of the light emission is usually wide, over loo0 A; an example of such a phosphor is the green-emitting (Zn,Cd)S:Cu. 2. Narrow-Band-Emitting Phosphors
These phosphors consist of a rare earth host crystal, such as a vanadate or oxysulfide in which activator ions of another rare earth are incorporated. As stated by Tecotsky (1976), the unique luminescent properties of these materials come from electron transistions to the 4f orbitals of the activator. Because these electrons are shielded from the fields produced by the outer
255
CATHODE RAY TUBES
valence electrons, the spectral emission consists of narrow bands instead of the broad bands that are found with sulfides,oxides,and silicates. The first rare earth phosphor used for industrial applications was the europium-activated yttrium orthovanadate (Levine and palilla, 1965), a red-emitting phosphor initially developed for color television. Since their first introduction, a number of rare earth phosphors have been developed, covering a wide range of applications for industrial and military CRTs as well as for color television. Table IV lists some of these rare earth phosphors. E . Characteristics of Phosphors
The document “Optical Characteristics of Cathode Ray Tube Screens” published by the Electronic Industries Association (EIA) (1985) provides useful information on the characteristics of each of the 109 registered phosphors. These include the spectral distribution curve of emission, the CIE 1931 x - y color coordinates of this emission, the decay curves under some excitation conditions, and their chemical composition. However, for special CRT applications, additional information is frequently required by CRT designers and users, such as (1) energy conversion efficiency and luminous efficiency under various conditions of excitation, (2) linearity of the luminous efficiency as a function of absorbed energy (taking into account amplitude, duration, and frequency of excitation pulses), (3) variation of the luminous efficiency with operating time, (4) rise and decay times under pulse conditions, (5) flicker frequency limit under typical operating conditions. These factors are briefly reviewed below. 1. Energy Conversion Eficiency (qe)
The energy conversion efficiency qe of a phosphor is defined as the following ratio radiant power (W) ‘e
= input power
W,
=-
(w) wi
For most phosphors, qe lies in the range of 6 to 20%. Sulfide types have the highest efficiency, 20%. An example of this is the P4 phosphor mixture (ZnS:Ag + ZnCdS: Ag) used for black-and-white TV whose efficiency is 15%. Oxide based phosphors have efficiencieswhich do not exceed about 7%, while rare earth phosphors have an efficiency which varies from 6 to 20%. The energy conversion efficiencies given in Table IV should be used with caution since they are most probable values based on published experimental
TABLE IV CHARACTnusnCS OF COMMON h 0 S P Color coordinates composition
N m u i
I
p1
I
green
I
0.218
I
~~
0.712
I
1
Zn2Si0,:Mn
,
~-
P2
green
0.279
0.534
2nSCdS:Cu
P4
white
0.270
0.300
P5
blue
0.169
0.123
CaW04 :W
P11
blue
0.139
0.148
ZnSAg
IumendWr IE)
IumendWi (€1
520"'
2814' t o 3113'
712.3-41
465"'
32.4(3'to 3414'
15(2-31
285"
2.5"' t o 3.615'
85'11
2.1@1 to 3'8)
1 0 ' ~ to21I3' '
140'l'
10 to 2713" 3171
WdWi % (qe)
ZnSAa+ZnSCdSA
4.8'5'to 8.5i78'
'
to 30"'
~
7-
P16
violet
0.199
0.016
CaMgSiO, :Ce
5'31
245"'
P19 - P26
orange
0.572
0.422
kMgF, :Mn
2.314'
400'l'
P20
yellow .green
0.426
0.546
ZnCdSAg
14(24'to 1fji3'
480"'
65'4'
P22
blue
0.157
0.47
2nS:Ag
1 5'21
55'''
8.2'8'
P22
green
0.280
0.600
ZnSCdSAg
15(241
435'7'
60i4'
914)to 12(7)
P22
I
red
0.650
0.325
JEDEC ITT chart Fundamentals ofDisplay System Design ;p.69 ;Sol Sherr, Wiley Interscience T homson-CSF catalogue
qe = energy conversion efficiency E = luminous equivalent E = luminous efficiency For definition of these three see Section V - paragraph B
NOTE :
YU0,:Eu
I
7.11%) to
I
1d41
I
9.5161
(5) Lehman :Journal of Electrochem Society ;118-1164 11971) (6) TECOTSKY :SID Meeting N Y Mid-Atlantic Chapter April 1976 (7) Peter Seats :SID 76 Digest (8)Obtained from qe x E
I
258
ANDRE MARTIN
data that depend to a great extent on the excitation and measurement conditions which vary from worker to worker. 2. Luminous Equivalent ( E )
For phosphor materials, information is frequently desired on the lumens produced per watt of radiated power against the power. As the lumen unit is related to the sensivity curve S(1) of the average eye versus wavelength (Fig. 52), this equivalence varies with the spectral energy distribution I(12)of the emitted light. In the case of phosphor materials it should be noted that the spectral intensity distribution curve 1112) may vary with the input electron beam energy. The luminous equivalent E can be calculated from the expression
As indicated below in Section 6, Measurement Techniques, the value of E can be readily computed for wide-band phosphors, but is much more difficult to determine for rare earth phosphors with multiple narrow emission bands.
Relative response
t
wavelength
FIG.52. Spectral response of the average normal eye.
CATHODE RAY TUBES
259
3. Luminous EfJiciency ( E )
The luminous efficiency is defined by the following relation: E=
radiant flux (lumens) lm =input power (W) Wi
(67)
The value of E varies from a few lm/W to 70 lm/W for very efficient green phosphors. It is related to the energy conversion efficiency and luminous equivalent by the expression (68) Table IV gives the luminous efficiency E (lm/W,) and the luminous equivalent E (Im/W,) for common phosphors. As mentioned, these data are subject to variation in measurement and presently the most likely values from available data. Based on the above parameters, the luminance L of a screen can be expressed as follows: E
= q,E
where T is face plate transmission (fractional), E is luminous efficiency in lm/Wi, V, is screen voltage in volts, I is beam current in amperes, S is surface of the screen excited by the electron beam in m2, and L is luminance in cd/m2. This expression can also be written as
where d is the current density at the screen expressed in pA/cmZ. These formulae are valid within a limited range of voltages and of currents, since E is a function of V , and d , as discussed in Section V.C. 4. Color Characteristics
The color of the luminous emission is usually defined by means of either the CIE 1931 x-y color coordinate system, the CIE UCS (uniform chromaticity scale) u-u systems (see Judd and Wyszecki, 1975), or the equivalent wavelength (color measurement techniques will be discussed below in Section VI.B.4). The CIE 1931 x - y system is of universal use, because it simplifies the calculations of the color resulting from the addition of different lights. The CIE UCS systems, mainly those from the years 1960 and 1976, are transformations of the CIE 1931 system in which equal distances between any two points on the diagram correspond to nearly identical visual impressionsof
260
ANDRI? MARTIN
color difference, making these systems useful for measurement method of color contrast. The equivalent wavelength is the wavelength L, of monochromatic light producing the same color, but not necessarily the same saturation, as the phosphor output. Hence 1,givesjust an indication about the color of the phosphor output, not of its saturation. 5. Response to Digerent Excitation Modes, Rise and Decay Times, and Flicker
The time dependence of a phosphor’s behavior under electron excitation varies, depending on whether it is excited by a long pulse, a short pulse, a pulse train, or recurrent pulses. a. Response to Long-Pulse Excitation ( L P E ) (See Fig. 534. If the duration of the excitation is long enough, the emission becomes stabilized at the level Lo corresponding to the excitation current I,. Most phosphors fall into one or another of two categories in terms of their decay time. One category exhibits the following decay: L = Lo exp( - t / z )
(71)
where Lo is the stabilized luminance level at t = 0 and z is a decay constant characteristic of the phosphor. Silicates, phosphates, rare earth oxysulfides, some fluorides, aluminates, and tungstates (with corresponding EIA designations, P1, P5, P12, P25, P27, P38, P43, P44, P45, P46, P47, P48) are representative of this family. Figure 54 shows the decay curve of a P1 phosphor exemplifying such an exponential decay. The second category exhibits the followingpower law decay characteristic:
where a and n are constants characteristic of the material. Figure 54 shows the decay curve of a P14 phosphor as an example of a power law decay. This category includes alI sulfides and some fluorides. These phosphors generally have a longer aftergiow than those of the exponential decay type. The 10% point of the decay time curve is often given, and sometimes the 1% point is given to provide information on the weak afterglow. Table V gives the 10% point of the decay time curve and equivalent wavelength 1,for some currently used phosphors. It is important to note, however, the decay time is strongly dependent on the excitation energy. For most phosphors, the rise time follows the same law as the decay time, so that if the decay time law is L = Lo$(t), the rise time law is L =LOCI
- $403
(73)
CATHODE RAY TUBES
t
26 1
‘f
r 0
(a I Excitation
t
‘t 0
“b n pulses
r
Id I FIG.53. Phosphor responseto different excitation modes. (a) Long-pulse excitation(LPE), (b) short-pulse excitation (SPE),(c) pulse-train excitation (PTE),(d) recurrent-pulse excitation
(RPE).
b. Response to Short-Pulse Excitation (SPE) (See Fig. 53b). Phosphors capable of responding to short pulses are used for flying-spot, photorecording, and printing applications. However, in such excitation the phosphor may not reach an equilibrium light level during the exciting pulse. Although recording the energy radiated during the short-pulse duration At, is of primary importance for these applications, the measurement should be made over a
262
ANDRE MARTIN Relative brightness
100
10
-
Time after excitation
0
10
20
30
40
50 ms
FIG.54. Decay curves for two typical phosphors,one exponential and the other hyperbolic.
time At2 such that At2 > A t l since there may be a low-level afterglow extending for a long period. Table VI gives the decay time and the energy conversion efficiency, measured first in an LPE mode then in a SPE mode, for At2 = 3 p e c and A t l = 0.5 psec of two recent fast-decay phosphors P46 and P47 compared to the older P16. Looking at this table, we find 4% of long-pulse excitation energy conversion efficiency for the phosphor P46, and 2% for the same energy conversion efficiency measured in a short-pulse mode. This difference of efficiency results from the fact that the rise time and the decay time become more and more important and reduce the capacity of the phosphor to deliver energy when excited by short pulses. From Table VI, we can see that P47 is the best efficiency phosphor for SPE, and that this result correlates with the shorter decay time indicated for this phosphor in the table.
c. Response to Pulse-Train Excitation ( P T E ) (See Fig. 534. In the PTE mode, n repeated pulses are applied to the phosphor, n being insufficient to cause stabilization or saturation. There is thus partial integration over the total duration of the n pulses and the radiated energy is a function of n. This type of excitation typically occurs in radar operation where the pulsed spot moves slowly across the screen.
263
CATHODE RAY TUBES TABLE V CLASSIFICATION OF COMMON PHOSPHORS FOR DECAY TIMEVERSUS THEIRCOLOR"
4 -
1s
P26 I(
X P39
P38
*p7 X
XP27
XPl
-
10-3-
X
XPll
P2
X P20
x P31
P3
' !f P16
-
Equivalent
Wavelength
nm
d. Response to Recurrent-Pulse Excitation (RPE) and Antgicker Characteristics (See Fig. 53d). This excitation mode is similar to PTE, except that the number of pulses is sufficient to reach stabilization. It is the most common form of excitation occurring in stroke writing and TV raster scanning. Where the displayed picture is pulsed at a given refresh rate, the decay time response measured after the excitation has ceased is the one given in the EIA publication "Optical Characteristics of Cathode Ray Tube Screens" (EIA, 1985) determined under scanning conditions. Since the human eye can integrate TABLE VI RFSWNSE OF FASTPHOSPHORS TO SHORT-PULSE EXCITATION' Phosphor
Docay timo
Energy convorrion efficioncy
Qe
50 %
10 %
P46
4%
40 ns
160 ns
50 %
2%
P47
5%
25 ns
80 ns
100 %
5%
P16
4%
25 ns
40 %
1.5 %
"'emit
IW ,
150 ns ~~
SPE duration: 0.5 psec; measurement interval: 3 psec.
I3 P d
'la (3 PSI
number
~
ANDRB MARTIN
264
light over a limited period, for each phosphor there is a refresh frequency above which the pulsing or flicker of the picture disappears. This frequency is called the critical fusion frequency CFF. De Lange Dzn (1954,1978) and Kelly (1971) have shown that for a pulsed light source the CFF (1) depends on the retinal illuminance expressed in trolands (the troland being defined as the retinal illuminance produced by viewing a large area surface whose luminance is 1 cd/m2 through a pupil having an area of 1 mm2), (2) does not depend on light pulse shape, and (3) depends on the modulation index of the light source. The modulation index is defined for sinusoidal sources as the ratio of the modulation amplitude to the average source luminance. For nonsinusoidal sources, De Lange Dzn (1957) has shown that the first term of the Fourier series representing the modulation amplitude can be used with satisfactory results to calculate the modulation index. In Fig. 55 are shown the De Lange Dzn (1978) curves of CFF versus modulation index for different retinal illuminations expressed in trolands (& to T5). These curves show that the observed flicker becomes negligible above 70 Hz and underline the importance of the retinal illumination for the CFF.
Modulation index
t 100%
10%
1%
Criticel furan frequency
m:; 6
10
tH+r 60 100HZ
*
FIG. 55. De Lange curves for different retinal illumination levels from TI to T,.
CATHODE RAY TUBES
265
The complexity of the visual system under pulsed illumination conditions has also been well described by Rogowitz (1983). The development of a CRT for a given application requires the determination of the phosphor time response characteristics needed to avoid flicker, given the CRT operating conditions. Phosphors useful for this purpose are referred to as antiflicker phosphors. Long persistence or a long afterglow by itself does not mean that a phosphor has antiflicker characteristics, on the contrary, experiments have shown that only phosphors having an exponential decay time have best antiflicker capability. Application of De Lange work to CRTs was started by Turnage (1966) and confirmed by Krupka and Fukui (1973). Experiments to determine CFF are generally performed using a CRT whose image is scanned at a variable refresh rate. (Since all points of the image are not excited simultaneously a correction factor must be introduced in the measurements.) A small element ds of the screen thus emits a series of light pulses at the image refresh frequency with a very low duty cycle (in the range). The measured modulation index (MI) is an intrinsic characteristic of the phosphor and depends on the image refresh frequency f because of phosphor rise and decay times. Merloz (1978) measured the modulation index directly on a CRT with the apparatus shown in Fig. 56. Recurrent pulse excitation was used and the emitted light measured by a photomultiplier. The amplitude L , of the first term of the Fourier series was measured with a narrow-band amplifier tuned to have unity gain at the excitation frequency. The mean value L, was measured by using an integrating capacitor C. The modulation index MI = L J L , was measured at several excitation frequencies and for each phosphor the MI = & f ) curves were drawn.
FIG. 56. Measuring apparatus for the modulation index MI.
In Fig. 57 are shown curves of CFF versus MI. The MI = φ(f) curves obtained by Merloz (1978) for phosphors P4, P7, P12, and P38 are shown on a modified De Lange Dzn set of curves where retinal illumination has been replaced by the corresponding signal luminance (valid for a given picture configuration only). The intersections between the MI = φ(f) curves and the De Lange Dzn curves indicate the CFF values for the three luminances 7.6, 34.5, and 138 cd/m². For all refresh rates above the CFF, the experimental display gives flicker-free images. Another approach for the determination of the modulation index is to calculate it from a Fourier analysis of the decay curves. This approach has been taken by Krupka and Fukui (1973). Table VII gives CFF values obtained by three groups for an image size of 13 x 13 cm² and a mean luminance of 34 cd/m² for the seven phosphors P1, P4, P7, P12, P22r, P38, and P39. The results show good correlation between the different methods used. The above discussion of CFF determination is valid for stationary images only, for which it may be desirable to use a long-persistence phosphor to reduce flicker. However, for moving images, the phosphorescence (afterglow) results in a loss of contrast and of resolution (often called smearing). Because of this, power-law decay phosphors having a long afterglow must be avoided for
FIG. 57. MI = φ(f) curves for phosphors P4, P7, P12, and P38, overlaid on De Lange curves for luminances of 7.6, 34.5, and 138 cd/m².
TABLE VII
CRITICAL FLICKER FREQUENCIES (CFF) FOR SOME COMMON PHOSPHORS ACCORDING TO THREE DIFFERENT WORKERS
dynamic displays and are only used on radar plan position indicators. Table VIII indicates the preferred use of the most common phosphors, together with their color and 10% decay time.

TABLE VIII
THE PREFERRED USE OF SOME COMMON PHOSPHORS, RELATED TO THEIR DECAY TIME

Antiflicker display: P1 (green), P7 (blue-yellow), P12, P19, P25, P26, P28 (yellow-green), P33, P38, P39 (green), with emission colors from green to orange and 10% decay times ranging from about 20 ms to several seconds.
Standard display: P43 (green) and other short-persistence phosphors with white, green, or red-green-blue emission and 10% decay times of the order of a millisecond.

C. Action of Various Parameters on Phosphor Efficiency

1. Effect of Screen Voltage on Luminous Efficiency
A number of relations have been proposed for the variation of luminance with screen voltage. One example of this is

L = kεd(Vs − V0)^n

where L is the luminance of the screen, k is a constant that depends on the units
employed and on the screen manufacturing technique, ε is the luminous efficiency in lm/W, d is the screen current density, Vs is the screen voltage, V0 is the voltage corresponding to the onset of light output (also called the "dead voltage"), and n is an exponent lying between 1 and 2. As an alternative we recommend the following relation, in which ε is a function of Vs and d:

L = kεVsd    (74)

A typical experimental curve showing the variation of log ε with Vs is given in Fig. 58. The first part (1) of the curve, from V0 to A, follows approximately the relation ε = k(Vs − V0)^n with n < 1. For this part of the curve the luminous efficiency increases from zero at the dead voltage V0 to a value εmax, generally reached at Vs = 8 to 10 kV. The dead voltage V0 is caused by the nonemitting materials, such as the aluminum conductive coating on the screen, inorganic binders used in depositing the phosphor screen, or by surface defects on the phosphor grains which reduce the efficiency of the outer layers of the phosphor particles. For a typical CRT, V0 lies between 1 and 4 kV. For low-voltage operation these effects must be minimized in manufacturing. In some cases the aluminum backing may be replaced by a transparent conductive coating on the faceplate. The second part (2) of the curve, from A to B, shows a constant efficiency εmax, whose value is a function of the phosphor and of its deposition process. This is the normal range of screen operating voltages, and is a function of the thickness of the phosphor layer. Ozawa and Hersh (1974) have found that the phosphor screen thickness providing maximum brightness and maximum length of segment AB is 1.4 times the average phosphor particle diameter, and
FIG. 58. The variation of luminous efficiency with screen voltage (log ε versus working voltage).
that the optimum phosphor particle size equals the electron penetration depth in the phosphor material at the given screen voltage (for example, 10 µm at 25 kV for a zinc sulfide). In tube manufacturing, phosphor particle size and screen thickness are thus chosen accordingly. The third part (3) of the curve, beyond B, shows a drop of ε. In this voltage range the electron beam energy becomes so high that the electrons traverse the phosphor without losing all of their energy and strike the faceplate, which may eventually become degraded or darkened by this bombardment.
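A minimal sketch of the voltage dependence just described: no output below the dead voltage, a rise roughly as (Vs − V0)^n with n < 1, and a plateau at εmax, with the luminance then following Eq. (74). All numerical constants here are arbitrary illustrations, not measured values.

```python
# Illustrative sketch only; V0, n, eps_max, and v_sat are arbitrary example values.
def luminous_efficiency(v_screen, v_dead=2.0, n=0.8, eps_max=30.0, v_sat=9.0):
    """Luminous efficiency (lm/W) versus screen voltage (kV): zero below the
    dead voltage, ~k(Vs - V0)^n up to roughly 8-10 kV, then a constant plateau
    eps_max (the region-3 roll-off at very high voltage is ignored here)."""
    if v_screen <= v_dead:
        return 0.0
    eps = eps_max * ((v_screen - v_dead) / (v_sat - v_dead)) ** n
    return min(eps, eps_max)

def luminance(v_screen, current_density, k=1.0):
    """Eq. (74): L = k * eps * Vs * d (arbitrary units)."""
    return k * luminous_efficiency(v_screen) * v_screen * current_density

for v in (1.0, 4.0, 8.0, 12.0, 16.0):
    print(f"Vs = {v:4.1f} kV   eps = {luminous_efficiency(v):5.1f} lm/W   "
          f"L = {luminance(v, 10.0):7.1f}")
```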
2. Effect of Current Density on Luminous Efficiency

The linearity of the luminous efficiency with current density is extremely important for all high-brightness CRTs, such as projection tubes and avionic tubes, where area brightness in the former, and trace brightness in the latter, of up to 40,000 cd/m² are often required. The average current density dm is given by the product of the current density during the excitation pulse, the pulse duration τ, and the refresh frequency.
A phosphor is said to have a linear response when the luminance of an element is proportional to the time-averaged current density dm. Such linearity generally is valid for small values of dm, up to approximately 10 µA/cm². At higher average current densities, when an element is excited by current pulses, the degree of nonlinearity or saturation depends on whether the average current is obtained by an increase of the amplitude, of the pulse duration τ, or of the refresh rate, as shown in Fig. 59. Figure 60 shows experimental results of doubling either the refresh frequency or the pulse duration τ for a terbium-activated yttrium oxysulfide. These results indicate that the degree of phosphor saturation depends more on the integrated current of a single pulse than on an increase of refresh frequency.

3. Phosphors with Best Linearity
Among recently developed phosphors, P53 (terbium-activated yttrium aluminum garnet) and E86 (europium-activated barium strontium thiogallate) give by far the best linearity, as can be seen from the curves of Fig. 61 showing brightness versus time-averaged current density. On the same figure, curves are also plotted for P1 (Zn2SiO4:Mn, green), YVO4:Eu, and P44 (La2O2S:Tb). All measurements were made at Vs = 16 kV. These phosphors are all superior to the classical sulfides. The old P1 phosphor is still an excellent choice, and is widely used in avionic tubes, mainly for head-up displays where high current density is required.
FIG. 59. Different ways of doubling the average current density.

FIG. 60. Luminance of yttrium oxysulfide:terbium versus screen current density dm. a, refresh frequency f, pulse duration τ; b, refresh frequency 2f, pulse duration τ; c, refresh frequency f, pulse duration 2τ.
FIG. 61. Brightness of different phosphors versus current density, Vs = 16 kV.

4. Phosphor Aging
The luminous efficiency ε of a phosphor gradually drops with elapsed electron beam excitation time. Pfahnl (1961) demonstrated that this variation follows what is known as the Coulomb law:
ε = ε0/(1 + CQ)    (76)

where ε0 is the initial luminous efficiency, C is the burn parameter constant, and Q is the number of electrons received per unit surface of the screen, expressed in C/cm². An end-of-life criterion often assumed for a phosphor is a 50% drop in luminous efficiency, measured in terms of the number of C/cm² (called the coulomb rating Q0.5) needed to reach this level. We can hence write

ε = ε0/(1 + Q/Q0.5)    with    Q0.5 = 1/C    (77)
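Equations (76) and (77) can be applied directly, for example to estimate how the efficiency falls with accumulated charge and how long a screen can run before reaching its coulomb rating. The Q0.5 value and current density in the sketch below are hypothetical examples.

```python
def phosphor_efficiency(q_deposited, eps0, q_half):
    """Eq. (77): eps = eps0 / (1 + Q/Q0.5), with Q and Q0.5 in C/cm^2."""
    return eps0 / (1.0 + q_deposited / q_half)

def hours_to_half_efficiency(q_half, current_density_uA_cm2):
    """Operating hours needed to deposit Q0.5 at a given average current density."""
    amps_per_cm2 = current_density_uA_cm2 * 1e-6
    return q_half / amps_per_cm2 / 3600.0

# Hypothetical phosphor with Q0.5 = 100 C/cm^2 run at 10 uA/cm^2
q_half = 100.0
for q in (0.0, 25.0, 50.0, 100.0, 200.0):
    print(f"Q = {q:5.1f} C/cm2   eps/eps0 = {phosphor_efficiency(q, 1.0, q_half):.2f}")
print(f"Time to 50% drop: {hours_to_half_efficiency(q_half, 10.0):.0f} h")
```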
Table IX shows the effect of some parameters on aging as indicated by Pfahnl. Drawing on the same author, Table X lists the aging characteristics of various phosphors under 10 keV electron beam excitation. The results show that the best behavior is obtained for silicates such as P1 (with the exception of P16, activated by Ce³⁺), with much more rapid aging occurring in sulfides. Linear rare earth phosphors mentioned in Section V.C.3 above, such as P53
TABLE IX
THE INFLUENCE OF DIFFERENT PARAMETERS ON PHOSPHOR AGING

Parameters relating to the phosphor:
  1. Particle size: smaller particles are more sensitive
  2. Form of crystallization: variations of up to 1:10

Parameters relating to the excitation:
  1. Accelerating voltage: no influence
  2. Raster or dc excitation: no influence
  3. Temperature: increasing burn at increasing temperature
  4. Ion bombardment: ion burn about 10 times greater than electron burn
TABLE X
AGING OF VARIOUS PHOSPHORS UNDER EXCITATION BY 10 keV ELECTRONS
(Phosphor class, P number, typical use, chemical composition, and coulomb rating in C/cm²)

Ce³⁺-activated silicates: P16; flying spot scanners; (CaO)2·MgO·(SiO2)
Sulfur dominated, Ag and Cu activated: P2, P4, P7, P11; oscilloscopes, direct-view TV (blends), cascade screens for oscilloscopes, photography; ZnS·CdS, ZnS, ZnS·CdS/ZnS, ZnS; coulomb ratings of the order of 10 to 17
Oxygen and sulfur dominated, Mn and Ti activated: ZnS·CdS and ZnS compositions
Oxygen dominated: P1 and silicate forms of P4; scopes, radar, projection TV, black-and-white TV; Zn2SiO4, CaO·MgO·SiO2, ZnO·BeO·SiO2; coulomb ratings 104.0, 55.5, and 83.5
Oxygen dominated (oxides and tungstates): P15, P24, P5; flying spot scanners, oscilloscopes; ZnO, ZnO, CaWO4
and E86, and color penetration phosphors such as E20, have a coulomb rating comparable to that of P1. Although the mechanism of phosphor degradation is not very well understood, it is believed to be caused by dislocations produced in the crystal lattice and by displacement of the activator under electron excitation and the associated local electrostatic fields produced in the phosphor. Also, a darkening of the phosphor may occur due to the growth of a light-absorbing layer on the surface of the particles, which also filters the emitted light. This may be caused by chemical reduction to metal atoms of metallic cations (consisting either of the activator, a component of the host material, or one of the materials used in screen processing).

5. Figure of Merit of Phosphors

A figure of merit, FM, defined as the product of the lumen efficiency ε by the coulomb rating Q0.5, gives a measure of the total luminous energy that a phosphor can produce during its useful life. Table XI gives the value of FM for the P1, P43, P44, and P53 phosphors, widely used for high-brightness tubes, and for P31, used for computer terminal displays.

TABLE XI
CHARACTERISTICS AND FIGURES OF MERIT OF SOME PHOSPHORS USED IN AVIONIC DISPLAYS
D. State of the Art in Phosphor Technology

1. Recent Phosphors
The variety and quality of phosphors available for industrial and military applications have increased significantly because of the last 20 years of intensive work on developing brighter red, green, and blue phosphors for domestic color television, particularly ZnS:Ag used for the blue phosphor
and (Zn,Cd)S:Cu,Al used for the green phosphor of color television screens, in conjunction with rare earth red phosphors such as the efficient Y2O2S:Eu. For industrial and military applications, stress has been placed on the improvement of the linearity and burn resistance of phosphors, while keeping the luminous efficiency at the highest level possible. These requirements are frequently best satisfied by rare earth phosphors. As mentioned earlier, the first rare earth phosphor to be made commercially was YVO4:Eu, which replaced the red sulfides previously used in color television screens because of its linearity, improved spectral response, and better coulomb rating. Since then, more efficient red rare earth phosphors, but with a slightly shorter dominant wavelength, have been developed for television. Examples are Gd2O3:Eu, Y2O3:Eu, and Y2O2S:Eu. The latter's luminous efficiency is about twice that of red sulfides, and its coulomb rating is five times better. Other compounds such as phosphates, thiogallates, and garnets are also under study and have already given some interesting results. The "Optical Characteristics of Cathode Ray Tube Screens" (EIA, 1985) lists rare earth phosphors under the following P numbers: P43, P44, and P45 for terbium-activated oxysulfides, and P46 and P53 for garnets. (Note: A new phosphor designation system, using two letters, will soon replace the P numbers.)
FIG. 62. Spectral energy distribution of phosphor type P43 (relative energy versus wavelength in nanometers).
The phosphors P43, P44, and P45 are terbium-activated oxysulfides whose efficiency η varies from 10 to 20%, depending on the manufacturer. These three phosphors have a series of sharp emission peaks distributed over the visible spectrum. The relative heights of these peaks, and the resultant color, can be varied by changing the activator concentration or the cation of the host material. P43 (Gd2O2S:Tb) has been designed to maximize the energy in the 545-nm green peak (Fig. 62), producing a good saturated green when it is viewed through a narrow-band-pass green filter (see Section VII). Although P44 (La2O2S:Tb) has a different host crystal composition, its spectral characteristics are very close to those of P43. P45 (Y2O2S:Tb), making use of still another host crystal, has peaks that are the same as for P43 but with a different energy distribution (Fig. 63). A suitable terbium concentration gives enough red and blue emission to obtain, combined with the green emission, a white-emitting phosphor. The P45 phosphor has a coulomb rating of over 80 C/cm², making it better than the sulfide P4 for projection tubes. Its white body color also results in a more pleasant image than that of the high-cadmium P4 with its yellow body color. P53 is a medium-efficiency garnet-type phosphor whose coulomb rating of about 120 C/cm² is presently the highest of all known phosphors, and it is also very stable at high temperatures. Its efficiency is below that of P44 at low time-averaged current density, but is higher at levels above 12 µA/cm², as shown in Fig. 61. It is hence the best choice for tubes demanding a high current density on the screen when a good screen color saturation is not required. Another category of phosphors in which significant development has
FIG. 63. Spectral energy distribution of phosphor type P45.
occurred is the fast-decay phosphors. These include P46 (yttrium aluminum garnet:cerium), P47 (yttrium silicate:cerium), and P48 (a mixture of P46 + P47 made to cover more uniformly the whole visible spectrum of flying-spot scanners used for color television). As the aging characteristics of these phosphors are also improved over P16, they are recommended for flying-spot scanners and photographic applications. P46, having a good red energy conversion efficiency, is well suited for tubes operating with sensors of medium to low red sensitivity, such as photomultipliers.

2. Phosphors for Color Tubes
Although high-resolution shadow-mask tubes may use P22 domestic color television phosphors, many display applications require antiflicker phosphors. Since no blue antiflicker phosphor exists, a compromise is obtained by using P13 red (magnesium silicate:manganese) and P39 green (zinc silicate:manganese and arsenic) with a P22 blue (zinc sulfide:silver). Since P13 and P39 both have antiflicker characteristics, acceptable antiflicker behavior is obtained for most color images if the blue is used moderately. In some tubes for special applications, phosphor dots of red, green, and white materials are used, the white generally consisting of the P45 already described above. In avionic shadow-mask tubes, where a high current density is required, P43 or P1 phosphors may be used for green, P22r (yttrium oxide:europium) for red, and P22 blue (zinc sulfide:silver) for blue. For special color tubes, such as penetration tubes and current-sensitive tubes, a short description of the phosphors employed is given in Section VIII.B.6.
VI. MEASUREMENT TECHNIQUES

In accordance with improvements in tube characteristics and performance, a corresponding effort has been made to refine CRT measurement techniques. The most important methods and their main features are discussed below. In this section, all voltages are assumed relative to the cathode, unless otherwise stated.

A. Electrical Parameters
Aside from the measurement of operating voltages, such as anode voltage, some types of measurements are critical and require particular care:
1. The measurement of cutoff voltage, generally taken at visual extinction of a stationary spot at screen center, and of the grid (or cathode) modulation required to obtain a given beam or cathode current. Also important is the measurement of deflection sensitivity, generally expressed in centimeters of deflection per volt applied to the electrostatic deflection system or in centimeters per ampere for electromagnetic deflection systems.
2. The cathode-to-grid capacitance and the capacitance of the cathode to all other electrodes, both of which are important to determine the bandwidth of the video amplifier associated with the tube when it is cathode modulated.

For high-frequency oscilloscope tubes using helix deflection systems, the characteristic impedance and bandwidth of the deflection system must be measured using reflectometric and single-pulse response techniques, well described by Loty (1966). Writing speed and refresh rates can be accurately measured by means of calibrated ramp generators used to drive the vertical and horizontal deflection systems, together with a pulse generator which modulates the beam to produce markers on the screen for easy measurement of the writing speed. A block diagram of a typical electromagnetic deflection CRT test bench is given in Fig. 64.

B. Optical Parameters
The main optical parameters of a CRT are resolution, brightness, and color. For most of these parameters, a thorough description of the appropriate measurement techniques is given in the publications of the International Electrotechnical Commission (1975, 1978). A brief outline of the different techniques employed is given below.

1. Resolution
The resolution of a CRT can be expressed in several ways, all of which are interrelated as described below.
a. Micrometric Loupe Linewidth (Lloupe) (Fig. 65). In this method, the scan line (displayed on the screen at a given writing speed, refresh rate, and beam current) is examined directly through a micrometric loupe of 20x to 60x magnification. The average observer will in practice consider the linewidth corresponding to 4% of the peak line brightness level. A trained operator, however, tends to obtain average linewidths which correspond approximately to twice the 50% value of the peak brightness as described below.
b. Linewidth (L0.5) at 50% of Peak Brightness (Figs. 65 and 66). The line is projected through a microscope of given magnification onto a narrow slit
FIG. 64. Block diagram of a CRT test bench.
parallel to the line. The slit width is chosen to be at least 20 times smaller than the projected line image and the light output from the slit is then recorded by means of a suitable photodetector. The measurement can be carried out by moving the slit across the image of the line, or by moving the line across the screen with the microscope stationary. The recorded photodetector output curve representing the line profile (Fig. 66) can then be used to measure the linewidth in terms of any percentage of peak brightness, the most common one being the 50% point. This method is also used, with suitable calibration, to determine the peak value of the time-averaged brightness of the line. It is presently the most widely used precision method for determining linewidth and peak line brightness in CRTs.
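The slit-scan measurement just described can be simulated numerically: a Gaussian line profile is convolved with a slit of finite width and the 50% width of the recorded profile is read off. The sketch below, with arbitrary dimensions, illustrates why the slit should be many times narrower than the line; it is an illustration of the principle, not of any particular instrument.

```python
import numpy as np

def measured_width_50(line_width_50, slit_width, n=20001, span=5.0):
    """Scan a slit of the given width across a Gaussian line profile and
    return the 50%-of-peak width of the recorded (convolved) profile."""
    sigma = line_width_50 / 2.355                 # FWHM of a Gaussian = 2.355 sigma
    x = np.linspace(-span * line_width_50, span * line_width_50, n)
    dx = x[1] - x[0]
    profile = np.exp(-x**2 / (2 * sigma**2))      # true line profile
    slit = np.where(np.abs(x) <= slit_width / 2, 1.0, 0.0)
    recorded = np.convolve(profile, slit, mode="same") * dx
    recorded /= recorded.max()
    above = x[recorded >= 0.5]
    return above[-1] - above[0]

true_width = 20.0   # e.g. a 20 um wide line
for slit in (1.0, 5.0, 20.0):
    print(f"slit {slit:4.1f} um -> measured L0.5 = {measured_width_50(true_width, slit):.1f} um")
```

With a slit of 1/20 of the linewidth the recorded width is essentially the true one; a slit as wide as the line broadens the result noticeably.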
FIG. 65. Micrometric loupe linewidth Lloupe and 50% of peak brightness linewidth L0.5 (luminance distribution in a scanned line).
FIG. 66. Measurement principle of the 50% of peak brightness linewidth (also used for line brightness measurement).
FIG. 67. Resolution measurement: comparison of the 50% of total energy linewidth and of L0.5.
c. Linewidth (Lhalf-energy) at 50% of Total Energy (Fig. 67). In this method, identical equipment is used as in the previous case, except that the slit width is adjustable. A measurement is first made with the slit wide open and the line parallel to the slit, so that the luminous energy from the entire linewidth is captured by the photodetector, giving a 100% point. The slit width is then reduced symmetrically about the line, centered in the slit, until the luminous energy is reduced to 50%. The slit width obtained at that point gives the 50% total-energy linewidth. This method is most suitable for long-persistence phosphor screens, in place of the 50%-peak-brightness linewidth. In the latter case, the measurement may be hampered by energy emitted from undesired areas of the long-persistence phosphor when the line is moved across the slit.
d. Spatial Frequency (Fs) (Fig. 68). This frequency is generally expressed in cycles per millimeter. In this measurement method it is assumed that the spot energy distribution is Gaussian (as mentioned in Section II.A). The principle is to analyze a line scanned on the tube, through a microscope on whose image plane a grating having several different pitches has been placed (the bars of the grating being parallel to the line). The light output from the grating is then displayed on an oscilloscope as the line is moved across the tube screen. The greatest pitch of the grating is such that the full energy of the line can pass between the bars. The ratio A2/A1 between the measured amplitudes on the oscilloscope of the light from the scanned line as it is moved from the low-spatial-frequency to the high-spatial-frequency bands gives the modulation percentage at the chosen spatial frequency of the grating. The 60%-modulation depth level has been chosen to give the spatial frequency Fs.
e. Shrinking Raster Linewidth (LSR) (Fig. 69). With the tube operated under specified conditions, a raster of n horizontal lines is scanned on the CRT screen. The vertical size of the raster is then progressively reduced until the line structure disappears, producing a uniform luminance for the observer. This
FIG. 68. Measurement principle of spatial frequency Fs. A1: reference amplitude at low frequency; A2: mean amplitude at the test frequency.
occurs for the average observer when the residual modulation amplitude is about 2% of the total modulation, as indicated in Fig. 69. If n lines are scanned and if A is the final raster height, the linewidth is given by the ratio A/n. This method is fast and gives quite consistent results with trained operators, but it is not as precise as the L0.5, Lhalf-energy, or Fs techniques described above.

2. Relationship between Resolution Measurements

a. Relationship between Fs, L0.6, L0.5, and LSR. We assume that the energy distribution and the resulting brightness B across a scanned line are
FIG. 69. Resolution measurement: shrinking-raster linewidth.
Gaussian and can be expressed as follows:

B = Bmax exp(−x²/2σ²)    (79)

where σ is the standard deviation. The Fourier transform of exp(−x²/2σ²) is

σ√(2π) exp(−2π²σ²Fs²)

where Fs is the spatial frequency. The modulation transfer function MTF is therefore

MTF = exp(−2π²Fs²σ²)    (82)

Since 2σ is the linewidth L0.6 at 60% of the spot height,

MTF = exp[−(πFsL0.6)²/2]    (83)

If Fs0.6 is the spatial frequency for 60% modulation, we find

L0.6 ≈ 1/(πFs0.6)    (84)

The relation between L0.5 and L0.6 is calculated from the Gaussian distribution:

L0.5 = (2 ln 2)^(1/2) L0.6 ≈ 1.18 L0.6    (85)

Weiss (1969) has calculated the relation between LSR and L0.5 and found them approximately equal, that is,

L0.5 ≈ LSR    (86)

b. Relationship between L0.6 and Lhalf-energy. Assuming again a Gaussian energy distribution for the spot and drawing on Pommier (1974), we can write for the radiated power emitted by a line of length L, W = (πL/ε) ∫ B dx, ε being the luminous efficiency of the phosphor (Section V.B). Since B = Bmax exp(−x²/2σ²), we can write
W = K ∫_{-∞}^{+∞} exp(−x²/2σ²) dx

and, for the energy collected by a slit of half-width x centered on the line,

Wx = K ∫_{-x}^{+x} exp(−x²/2σ²) dx
From standard tables we find that for Wx/W = 0.5, x/σ = 0.674. Thus we have

Lhalf-energy = 1.348σ = 0.674 L0.6    (87)
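The relations (83)-(87) can be checked numerically for a unit-σ Gaussian. The short sketch below verifies the L0.5/L0.6 ratio, the approximate relation L0.6 ≈ 1/(πFs0.6), and the 0.674 factor of Eq. (87).

```python
import numpy as np
from math import erf, sqrt, log, pi

sigma = 1.0
L06 = 2 * sigma                              # full width at 60% of peak (exp(-0.5) ~ 0.607)
L05 = 2 * sigma * sqrt(2 * log(2))           # full width at 50% of peak
print("L0.5 / L0.6 =", round(L05 / L06, 3))  # ~1.18, Eq. (85)

# Eqs. (83)/(84): MTF = exp(-(pi Fs L0.6)^2 / 2); at 60% modulation, L0.6 ~ 1/(pi Fs0.6)
Fs06 = sqrt(-2 * log(0.60)) / (pi * L06)
print("Fs0.6 =", round(Fs06, 4), "  1/(pi*Fs0.6) =", round(1 / (pi * Fs06), 3))

# Eq. (87): fraction of the Gaussian energy within +/-x is erf(x/(sigma*sqrt(2)))
xs = np.linspace(0.0, 3.0 * sigma, 30001)
fractions = np.array([erf(x / (sigma * sqrt(2))) for x in xs])
x_half = xs[np.argmin(np.abs(fractions - 0.5))]
print("x/sigma at 50% energy =", round(x_half / sigma, 3))     # ~0.674
print("L_half-energy / L0.6  =", round(2 * x_half / L06, 3))   # ~0.674
```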
3. The Luminance
Even with a photometer calibrated against a tungsten white-light standard, important discrepancies may be found between measurements on CRTs taken with various photometers, for several reasons.
1. The CRT is a pulsed luminance source whose ratio between peak spot luminance and time-averaged peak line luminance may exceed 10⁶. However, the linearity of the photometer may not extend over such a wide range of luminance and should be checked, for example, by using calibrated neutral-density attenuators in front of a high-brightness standard.
2. Many rare earth phosphors concentrate the light emission in one or more very narrow bands. Unfortunately, the photometer filters intended to match the average observer eye response, as defined by the CIE, are sometimes affected by aging, temperature and humidity, storage conditions, etc., causing their spectral response to vary. Hence, recalibration of the photometer with a calibrated 2854 K tungsten source, combined with phosphor simulation color filters that have been calibrated against this particular source, is a fundamental precaution. To measure with accuracy the luminance of rare earth phosphors, spectroradiometry associated with computation of the luminance is the only satisfactory procedure, as filters matching the multiband rare earth phosphors are not presently feasible.
3. The photometer may not produce an average output which corresponds to the time averaging of the observer's eye. The photometer output should thus be checked with an oscilloscope to verify that good integration is obtained and that the photometer has a time constant identical to that of the CIE average observer.

Assuming that all the above precautions are taken, correct measurements of luminance are obtainable with an accuracy better than 5%. Failure to observe these rules, however, may lead to measurement errors of 100% or more. As an aid, the photometric and radiometric units used for these measurements are given in Table XII. The luminance of a CRT can be measured on a raster-scanned area of the screen, in accordance with the formula of Eq. (69). Although this formula is convenient for the evaluation of a television-type display and relates closely to screen efficiency, it does not give any information about the time-averaged peak line luminance, which is more representative of the optical performance of a CRT since it takes the spot size into account. When measuring the resolution as described in Section VI.B.1.b, one can replace the microscope
TABLE XII
COMPARISON BETWEEN RADIOMETRIC AND PHOTOMETRIC UNITS
and photodetector of Fig. 66 by a calibrated microphotometer, allowing the time-averaged peak line luminance of a scanned line to be accurately measured, together with the spot energy distribution and the 50% linewidth L0.5. When carrying out the microphotometer calibration, to obtain accurate results one must use a diaphragm-limited calibrated source whose dimensions are as close as possible to the dimensions of the line to be measured, to avoid the flare effect caused by internal reflections in the microscope body and optics. The time-averaged peak luminance is expressed by the formula
L = K Ib^α Vs^β fr u⁻¹ ε T    (88)

where K is a coefficient that is a function of the screen dispersion characteristics and of the units of luminance, Ib is the beam current, α is a coefficient that lies between 0.8 and 1, Vs is the screen voltage, β is a coefficient that lies between 1.2 and 2, fr is the refresh frequency, u is the writing speed, ε is the screen luminous efficiency, and T is the fractional transmission of the faceplate. The measurement of this luminance, together with that of the L0.5 linewidth, provides the most precise means for evaluating the optical characteristics of a CRT. The drawback of this method is its cost, since it requires sophisticated equipment and a well-trained operator to eliminate causes of error such as jitter, slow drift of the scanned line, incorrect focusing of the tube or of the microphotometer, and improper calibration.
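As a sketch, Eq. (88) as reconstructed above can be evaluated directly. The coefficient K, the exponents, and the operating values below are purely illustrative placeholders, and the result is in arbitrary units.

```python
def peak_line_luminance(K, i_beam, v_screen, f_refresh, writing_speed,
                        efficiency, transmission, alpha=0.9, beta=1.5):
    """Eq. (88): L = K * Ib^alpha * Vs^beta * fr * u^-1 * eps * T.
    alpha (0.8-1) and beta (1.2-2) are screen-dependent exponents."""
    return (K * i_beam ** alpha * v_screen ** beta * f_refresh
            / writing_speed * efficiency * transmission)

# Illustrative numbers only (K and the exponents depend on the screen):
print(peak_line_luminance(K=1.0, i_beam=100e-6, v_screen=16e3,
                          f_refresh=60.0, writing_speed=5e3,
                          efficiency=20.0, transmission=0.5))
```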
4. Color Measurements

a. Color Representation. The most commonly used color representation is the system adopted by the CIE in 1931, which defines color coordinates X, Y, and Z, each weighted by the three distribution functions x̄(λ), ȳ(λ), and z̄(λ) and defined by the following formulae:
X = 680 ∫_{λ1}^{λ2} I(λ) x̄(λ) dλ

Y = 680 ∫_{λ1}^{λ2} I(λ) ȳ(λ) dλ

Z = 680 ∫_{λ1}^{λ2} I(λ) z̄(λ) dλ
where I(λ) is the energy spectral distribution of the radiation and λ1 and λ2 are the limits of the visible spectrum (370 nm and 740 nm). Figure 70 shows the three functions x̄(λ), ȳ(λ), and z̄(λ). It should be noted that ȳ(λ) also represents the average observer photopic distribution curve S(λ). Y is thus proportional to the luminance of the source.
FIG. 70. Spectral tristimulus values x̄(λ), ȳ(λ), and z̄(λ) as defined by the CIE.
In order to use a planar representation, the CIE has defined coordinates x and y, where

x = X/(X + Y + Z),    y = Y/(X + Y + Z),    z = Z/(X + Y + Z)
Since x + y + z = 1, it is necessary to specify only two coordinates, e.g., x and y, to define a particular color point. Figure 71 shows a typical CIE 1931 diagram. Each color point located on the horseshoe curve represents a pure color defined by a single wavelength. All other existing colors are represented by points located within the area bounded by this curve and the line connecting its lower points. In Fig. 71, if we draw a line from the point C (representing the white equivalent of diffuse sunlight) to the point M
FIG. 71. CIE chromaticity diagram (1931).
(representing the light being studied), the point K where it intersects the horseshoe curve indicates the dominant wavelength of the light. Grassmann's addition law states that if we have two sources of light S1 (X1, Y1, Z1) and S2 (X2, Y2, Z2), the light S3 resulting from adding their effects will have the following tristimulus values:
X3 = X1 + X2
Y3 = Y1 + Y2
Z3 = Z1 + Z2
The CIE 1931 diagram thus permits rapid calculation of the color resulting from the addition of several sources. From Grassmann's law, any visible color can be obtained by adding a pure color located at some point K (Fig. 71) to white located at C (or at A, which is the 2854 K tungsten lamp white). The purity of a color represented by M is defined by the ratio of the line lengths CM to CK. Since the distance between two points in the CIE 1931 triangle is not proportional to the visual impression of color difference, the CIE proposed a uniform chromaticity scale diagram in 1960 which is still widely used. It is a
FIG. 72. CIE 1960 uniform chromaticity diagram (CIE 1931 x-y shown as a thin-line overlay).
homographic transform of the CIE 1931 diagram, using the formulae

u = 4x/(12y − 2x + 3),    v = 6y/(12y − 2x + 3)    (89)
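The chain of conversions given above (spectral integration to X, Y, Z, then to x, y, and to the 1960 u, v coordinates) is easily carried out numerically. In the sketch below the color-matching functions are crude Gaussian stand-ins used only to make the example self-contained; for real measurements the tabulated CIE functions must be used.

```python
import numpy as np

def xyz_from_spectrum(wl_nm, intensity, xbar, ybar, zbar):
    """X = 680 * integral of I(l) xbar(l) dl, and similarly for Y and Z."""
    X = 680.0 * np.trapz(intensity * xbar, wl_nm)
    Y = 680.0 * np.trapz(intensity * ybar, wl_nm)
    Z = 680.0 * np.trapz(intensity * zbar, wl_nm)
    return X, Y, Z

def chromaticity_xy(X, Y, Z):
    s = X + Y + Z
    return X / s, Y / s

def uv_1960(x, y):
    """CIE 1960 UCS coordinates from CIE 1931 x, y (Eq. 89)."""
    den = 12.0 * y - 2.0 * x + 3.0
    return 4.0 * x / den, 6.0 * y / den

# Toy demonstration only: Gaussian stand-ins for the CIE color-matching functions.
wl = np.arange(370.0, 741.0, 1.0)
g = lambda mu, s: np.exp(-(wl - mu) ** 2 / (2 * s ** 2))
xbar = g(600, 40) + 0.35 * g(445, 20)
ybar = g(555, 45)
zbar = 1.7 * g(450, 22)
spectrum = g(545, 5)                     # a narrow green emission line, P43-like

X, Y, Z = xyz_from_spectrum(wl, spectrum, xbar, ybar, zbar)
x, y = chromaticity_xy(X, Y, Z)
u, v = uv_1960(x, y)
print(f"x = {x:.3f}  y = {y:.3f}   u = {u:.3f}  v = {v:.3f}")
```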
In the resulting u-v diagram, shown in Fig. 72, equal distances represent equal color perceptual steps. Further information on color diagrams as well as on color measurements is given by Judd and Wyszecki (1975) and Grum and Bartleson (1980).
b. Color Measurements. The introduction of color CRTs in modern displays has put emphasis on precise color measurement when colors are used to discriminate between different types of data. Modern spectrophotometers measure the spectral energy distribution I(λ) of the source to be evaluated, and the integration is done by means of a computer program which introduces the weighting functions x̄(λ), ȳ(λ), and z̄(λ). However, there are still calibration and reproducibility problems, and we draw the attention of CRT users to the difficulty of getting reliable and accurate color measurements.

VII. CONTRAST ENHANCEMENT TECHNIQUES

A. Measurement for Contrast
1. Brightness Contrast and Contrast Ratio
The contrast ratio CR is defined as the ratio of the luminance L1 of the picture element to the luminance L2 of the background as follows:

CR = L1/L2    (90)

The brightness contrast C, of importance to the observer, is defined as the ratio of the luminance of the picture element plus background, L1 + L2, to the luminance L2 of the background:

C = (L1 + L2)/L2 = 1 + CR    (91)
The expression for C shows that the contrast ratio (and the brightness contrast) is also a function of the color. Aside from brightness contrast, two elements of equal brightness may have a color contrast because of color variations between them. To determine the visual equivalent color difference corresponding to a given brightness contrast, experiments were carried out by
Galves and Brun (1975) and the author, using the UCS 1960 diagram (Fig. 72). The basis of their experimental work was to determine the equivalent visual impression of two signals, of known color and of the same brightness, as compared to two signals of the same color, but of different brightness. For this purpose, an experimental color CRT display was prepared which could present simultaneously two adjacent bars of the same color, but of different brightness (the brightness of one signal being adjustable by the observer), and two adjacent bars of the same brightness, but of different color. The observer was requested to match (by means of a brightness control) the brightness difference between the two adjacent bars of the same color to the visual impression he had of the contrast between the bars of the same brightness but of different color. This series of tests showed that, at least for the range of luminance found in CRT displays, a linear ratio could be found between the logarithm of the contrast ratio CR and the separation of the color points in the UCS 1960 diagram. The test results were verified for reddish orange, orange, white, yellow, and green. Blue was not investigated since its use in a graphic or alphanumeric display, other than for background, is unlikely. As a result of these investigations, we have defined the following parameters, summarized in Table XIII, which we describe now. The luminous difference is defined as EL = log10 C. Since the threshold value for C has been shown to be at least 1.05, the threshold luminous difference ELS is defined as ELS = log10 1.05 = 0.021. The unitary luminance difference ELU, corresponding to a comfortable visual discrimination between two signals, is defined as the logarithm of the contrast C = √2 corresponding to one shade of gray. Therefore ELU = log10 √2 = 0.15 ≈ 7 ELS. The chrominance difference EC is the distance between the u, v coordinates of any two points in the 1960 CIE-UCS diagram and is given by

EC = (Δu² + Δv²)^(1/2)    (93)
The chrominance threshold ECS has been defined by Jones (1968) as being the smallest color difference discernible. Measured in the 1960 UCS diagram, it has a value of ECS = 0.00384. From the author's experiments, the unitary chrominance difference ECU, representing a comfortable discrimination between any two colors, was obtained when the distance between two points of the 1960 UCS diagram was ECU = 7 × ECS = 0.027. Using these experimental results, we have defined a photocolorimetric space (Fig. 73) where a point of color u, v and luminance L is represented by u/0.027, v/0.027, and (log10 L)/0.15. The visual discrimination between any two points in this photocolorimetric space is directly proportional to their separation. The discrimination indexes shown in Table XIII are defined as the chrominance or luminance difference divided by the
corresponding unitary difference, as follows:

Luminance discrimination index:    IDL = EL/ELU = EL/0.15    (94)

Chrominance discrimination index:    IDC = EC/ECU = EC/0.027    (95)

Discrimination index:    ID = (IDL² + IDC²)^(1/2)    (96)

TABLE XIII
DETECTION AND DISCRIMINATION INDEXES BETWEEN TWO LUMINOUS SOURCES OF LUMINANCES L1, L2 AND COLORS u1, v1 AND u2, v2

I. Luminance discrimination index IDL
  I-1. Luminous difference: EL = log C, with C = L1/L2
  I-2. Perception threshold: ELS = log 1.05 = 0.021 (C = 1.05 being the minimum discernible contrast ratio)
  I-3. Unitary luminance difference: ELU = log √2 = 0.15 (comfortable vision, CR = √2)
  I-4. Ratio ELU/ELS ≈ 7
  Luminance discrimination index: IDL = EL/ELU = (log C)/0.15

II. Chrominance discrimination index IDC
  II-1. Chrominance difference: EC = (Δu² + Δv²)^(1/2) (distance between two color points)
  II-2. Chrominance threshold: ECS = 0.00384 (smallest discernible color difference, determined by physiological tests)
  II-3. Unitary chrominance difference: ECU = 0.027
  II-4. Ratio ECU/ECS ≈ 7 (correlates with I-4)
  Chrominance discrimination index: IDC = EC/ECU = EC/0.027

III. Discrimination index ID
  Defined as the distance between two points in the photocolorimetric space previously defined:
  ID = (IDL² + IDC²)^(1/2)
FIG. 73. Photocolorimetric space (u, v, log L).
We now have a means of analyzing the viewability of any display, by modifying the usual concept of contrast to a color concept and by measuring the combined effects of brightness and color by means of the discrimination index. In order to determine, in a rapid but approximate way, the gain brought by color differences in addition to the brightness contrast, we can conveniently define an equivalent contrast Eqc that is derived from the discrimination index by the formula Eqc = antilog(0.15 × ID). The Eqc, which takes the discrimination index into account, can be compared to the brightness contrast C to give a contrast gain G = Eqc/C. This method of calculation may not be the most accurate, but it has the advantage of permitting the rapid evaluation of a display in any lighting environment, as demonstrated by Christiansen (1983), and limits more costly experiments to a minimum. As an example, let us assume that we have a CRT display generating, in dark surroundings, a red signal with a brightness of 44.5 cd/m². Assuming a white background lighting of 100,000 lux, equivalent to the effect of full sunlight on a clear day, and a neutral density filter of 10% transmission in front of the CRT, the light reflected from the tube faceplate will have a luminance Lb = 340 cd/m² and color coordinates assumed to be xb = 0.33, yb = 0.33.
Grassmann's addition law is then applied to combine the emitted signal light and the reflected background light to obtain the resultant color of the display data.
Transforming the x-y coordinates into u-v UCS 1960 coordinates by means of the conversion formula (Eq. (89) of Section VI), we find the following: Brightness contrast
C
Luminance discrimination index
IDL = 0.35
= 1.13
Chrominance discrimination index IDC = 1.2 Discrimination index
ID = 1.25
Equivalent contrast
Eqc = 1.54
Contrast gain
G
= 1.4
This contrast gain demonstrates clearly that the color difference, taken into account by the discrimination index is far from negligible.
B. Methods for Improving the Viewability of a Display Of the several methods to improve the viewability of a display, all are based on increasing the signal-to-background ratio by absorbing as much of the ambient light falling onto the faceplate of the CRT as possible, and allowing as much transmission of the CRT light through the faceplate as possible. As shown in Fig. 74, the background incident light, B(A), falling on a CRT faceplate with transmission T(A)results in reflectances:
Rcl(A) at the air-glass interface, RC2(A)at the glass-phosphor interface (which includes secondary effects due to multiple reflections), and &(A) at the phosphor layer (indicated as diffuse). The effect of each of these parameters is discussed below.
I . Reflection Rcl(A)at the Air-Glass Interface The value of Rcl(A) is (n - 1)2/(n+ l)’, where n is the refractive index of the faceplate. As is well known, Rcl(A)can be reduced by depositing multiple evaporated optical coatings. For a typical faceplate with a refractive index of 1.52, Rcl(A) = 0.042 without such a coating but with a coating Rcl(A) drops below 0.004 as shown on Fig. 75 for two angles of incident light. The resulting 10/1 gain is of great value since all CRTs have flat or convex faceplates that act
FIG. 74. Reflection of incident light B(λ) on a CRT faceplate of transmission T(λ).
as partially reflecting mirrors, giving virtual images which may make the observation of the display difficult. This gain is considered sufficient in most cases, except for specular reflections of intense light sources, such as bare light bulbs or direct sunlight. In such cases, a further improvement may be obtained by etching the faceplate, or by depositing a diffusing coating on it to slightly blur the image (the etched surface can also be optically coated to reduce reflection). Unfortunately, this diffusing surface, at a distance of 4 to 20 mm from the phosphor screen surface, reduces the spatial resolution of the displayed image in proportion to the thickness of the faceplate.
FIG. 75. Percentage reflectance versus wavelength for a typical antireflection coating, for two angles of incidence (courtesy of OCLI).
2. Reflection RC2(λ) at the Phosphor-Faceplate Interface

Since an optical coating is generally not feasible on the phosphor-faceplate interface, RC2(λ) is reduced by etching or stippling of this face, both of which give a satisfactory diffusing surface that limits glare effects. Since the diffusing surface is in contact with the phosphor layer, the effect on resolution is negligible, except for very-high-resolution tubes (over 50 cycles/mm).
3. Transmission T(λ) of the Faceplate

Very often, the faceplate consists of a sandwich of several filter layers laminated to the tube face by resins of the same refractive index. Internal reflections are hence avoided and the layers can be considered to be optically equivalent to a single faceplate.
a. Omnidirectional Filtering. Without a filter, the CRT luminance is L0 = ∫I(λ)S(λ) dλ and, viewed through a filter faceplate of transmission T(λ), the luminance is L = ∫I(λ)S(λ)T(λ) dλ. Without a filter, the background luminance is B0 = ∫B(λ)S(λ)R(λ) dλ, assuming the term RC1(λ) to be neglected, and after filtering the background luminance is B = ∫B(λ)S(λ)T²(λ)R(λ) dλ. Schmidt et al. (1977) have defined a contrast improvement factor CIF, based on the work of Partlow (1972), which is defined as the ratio of the absorption by the faceplate of emitted and background light. Using the same units as above, we write CIF = (L/L0)(B0/B). If a neutral density filter is used, T(λ) is independent of the spectral distribution of the emitted light as well as of the background light, so that CIF = 1/T. Partlow (1972) has defined a filter performance P for any type of filter compared to a neutral density filter as follows:
This relation provides a convenient way to measure the performance of a narrow bandpass filter with respect to that of a neutral density filter (for which P = 1). To produce such narrow bandpass filters, several layers of materials (glass filters, optical coatings, circular polarizers, dyed resins) may be laminated together. Since such filters are relatively expensive, they are usually used only when ambient lighting levels are very high. In most cases, inexpensive neutral density filters or wide-band colored filters (amber, green, etc.) are sufficient.
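A sketch of the CIF computation defined above, using spectral integration. The emission spectrum, eye-response stand-in, and filter curves below are hypothetical placeholders chosen only to make the example run; for a neutral filter the result reduces to 1/T, as stated in the text.

```python
import numpy as np

def cif(wl, emission, background, eye, filter_T, phosphor_R=None):
    """Contrast improvement factor CIF = (L/L0) * (B0/B): the emitted light
    passes the filter once (T), the reflected background light twice (T^2)."""
    if phosphor_R is None:
        phosphor_R = np.full_like(wl, 0.8)          # neutral phosphor reflectance
    L0 = np.trapz(emission * eye, wl)
    L = np.trapz(emission * eye * filter_T, wl)
    B0 = np.trapz(background * eye * phosphor_R, wl)
    B = np.trapz(background * eye * filter_T**2 * phosphor_R, wl)
    return (L / L0) * (B0 / B)

# Hypothetical example: narrow green emitter, flat background, flat eye response,
# and a band-pass filter centred on the emission line.
wl = np.arange(400.0, 701.0, 1.0)
emission = np.exp(-(wl - 545.0) ** 2 / (2 * 5.0 ** 2))
background = np.ones_like(wl)
eye = np.ones_like(wl)                               # stand-in for S(lambda)
bandpass = np.where(np.abs(wl - 545.0) < 15.0, 0.5, 0.02)
neutral = np.full_like(wl, 0.2)

print("band-pass CIF:", round(cif(wl, emission, background, eye, bandpass), 2))
print("neutral   CIF:", round(cif(wl, emission, background, eye, neutral), 2))   # ~1/T = 5
```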
b. Directional Filtering. Directional filters can also be used to improve the contrast (Fig. 76). They may consist, for example, of a black metal plate pierced with holes or a fiber optic faceplate. The solid angle 2i over which the observer can see the picture is limited by the filter structure and the
FIG. 76. Principle of a directional filter.
background light can only enter the filter within the same solid angle. The angle i is given by the formula

i = sin⁻¹[n(d/e)(1 + d²/e²)^(−1/2)]
n being the refractive index of the light-transmitting channels, e the thickness of a channel, and d its diameter. The contrast enhancement factor of such a filter may be very high, assuming that the observer remains within the viewing angle and that most of the background light is incident from outside the viewing angle (which is generally the case, since the observer tends to block incident light of small solid angles). In general, the smaller the viewing angle the better the results, provided that the observer's head movement can be restricted. It is desirable, however, that the pitch of the filter be sufficiently small (under the resolving power of the eye) so that it cannot be detected. An important application of this type of filter is in fighter cockpit displays, where the observer has very limited head movement.
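Using the acceptance-angle relation given above (as reconstructed here), the viewing half-angle can be evaluated for a few channel geometries; the refractive index and d/e ratios below are illustrative only.

```python
from math import asin, degrees, sqrt

def viewing_half_angle_deg(n, d, e):
    """Half-angle i of the viewing cone for a channel of diameter d, length
    (filter thickness) e, and index n: i = asin(n*(d/e)/sqrt(1 + (d/e)**2))."""
    r = d / e
    s = n * r / sqrt(1.0 + r * r)
    return degrees(asin(min(s, 1.0)))

# Illustrative geometries only:
for d_over_e in (0.1, 0.2, 0.3):
    print(f"d/e = {d_over_e:.1f}   i = {viewing_half_angle_deg(1.5, d_over_e, 1.0):.1f} deg")
```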
4. The Reflectance Rp(λ) at the Phosphor Layer
Reduction of reflectance from the phosphor layer Rp(λ) is commonly obtained in television shadow-mask color tubes by providing an inert black layer (black matrix) between the phosphor dots which absorbs incident light without reducing the phosphor emission. In the case of industrial and military CRTs, many attempts have been made to use a black coating between the phosphor and the faceplate. However, they have never been fruitful up to now, since the light emitted by the phosphor is generally reduced by the same amount as the background light. However, good results have been achieved by coating the phosphor with an absorbing dye whose optical transmission matches the emission spectrum of the phosphor, making possible a performance not too far removed from that obtained with a narrow bandpass filter. Another possible solution is to use a thin-film transparent phosphor coated on the beam side with an electron-transparent black layer which absorbs the ambient light but not the light emitted forward by the phosphor. The contrast improvement in this case may be very high. One limitation of this approach results from the fact that evaporated transparent thin-film phosphors have a low external efficiency, since most of the light they emit is trapped in the phosphor layer as a result of the large difference in refractive index between the phosphor layer (typically n = 2.6) and the glass (typically n = 1.5).

5. Combination of Techniques
In many cases, a surface antireflection coating (to reduce RC1) is used in conjunction with a colored or neutral-density filter faceplate. The transmission T(λ) cannot be reduced too much because, as shown in Fig. 77, the brightness reflected from the front surface of the faceplate will rapidly become predominant, and the contrast gain will not improve any further. In Fig. 77 we have drawn the RpT² curve for a neutral-density filter of transmission T, assuming that the phosphor reflectance Rp(λ) is neutral and has the usual value of 0.8. This line represents the fraction of incident light reflected from the filter-CRT faceplate combination. The intersection of this line with the 0.04 reflectance horizontal line corresponds to T = 0.22. The reflected brightness for a non-optically-coated faceplate is rather high, and the gain in contrast will not improve much if the transmission T of the filter is further reduced. If the faceplate is optically coated, the surface reflectance becomes 0.004, and a filter of transmission 0.07 could now be used (intersection of the RpT² curve with the 0.004 horizontal line). The same data apply to directional filters, and more calculation is required for color filters to take into account the color contrast separation. In any case, to obtain optimum filtering, a compromise must be found between the required contrast, the display luminance acceptable to the
FIG. 77. Relative importance of the front surface reflections of background light (RC1 for uncoated and for optically coated glass surfaces, diffused light) and of the light reflected from the screen surface, as a function of the transmission T of a neutral density filter.
observer (whose eye pupil may be stopped down heavily because of the ambient lighting), and the capability of the tube to supply the necessary light output with an acceptable life.
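The crossover transmissions quoted in the Fig. 77 discussion follow from equating the filtered screen return RpT² to the front-surface reflectance, i.e. T = (RC1/Rp)^(1/2); below is a one-line numerical check.

```python
from math import sqrt

def matching_transmission(surface_reflectance, phosphor_reflectance=0.8):
    """Filter transmission T at which the background light returned by the
    screen, Rp * T^2, equals the front-surface reflectance RC1; below this T,
    further attenuation gains little because RC1 dominates."""
    return sqrt(surface_reflectance / phosphor_reflectance)

print("uncoated faceplate (RC1 = 0.04): T =", round(matching_transmission(0.04), 2))   # ~0.22
print("coated faceplate  (RC1 = 0.004): T =", round(matching_transmission(0.004), 2))  # ~0.07
```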
C. Examples of Contrast Enhancement Filters

An example of a narrow bandpass filter used in connection with a P43 screen having a narrow-band emission at 545 nm is shown in Fig. 78. This filter is widely used for cockpit displays operating in full sunlight. Such a filter has a transmission of 8.4% for P43 phosphor light, a coefficient of reflection under white light of 0.29%, a peak transmission of 11.3%, and color coordinates when illuminated by white light (CIE, 1931) of x = 0.323, y = 0.666. For an ambient lighting of 100,000 lux and a tube luminance of 3000 cd/m²
FIG. 78. Spectral distribution of P43 phosphor and transmission of a narrow bandpass filter used with P43 phosphor (main peaks of P43 shown in dotted lines).
FIG. 79. The variation of transmission with viewing angle θ for a directional filter.
(measured prior to filtering) we obtain a contrast
C = 1 + L1/L2 = 3.54. Since narrow bandpass filters cannot be used for multicolor tubes, whose spectral emission curves are broad, directional neutral-density filters may be employed whenever possible, because the neutral tint does not modify the colors displayed by the tube. A typical filter of this type, designed for cockpit displays under full sunlight, has the following characteristics: (1) a maximum viewing angle of ±18° about the filter axis, (2) a peak transparency on axis of 40% (presented in Fig. 79), (3) a transmission of 0% outside the 20° viewing angle, and (4) color coordinates (CIE, 1931) of x = 0.330, y = 0.330. With such a filter, an ambient lighting of 100,000 lux and a tube luminance of 3000 cd/m² (measured prior to filtering) result in a contrast C of 12.7 for a viewing angle of 0° and a contrast of 4.2 for a viewing angle of 15°. A directional filter thus gives a better contrast than a narrow bandpass filter under the same operating conditions. It should be noted that this is true so long as no direct background light falls within the viewing angle and the observer views the tube within a ±15° angle.
VIII. CHARACTERISTICS ACHIEVED WITH PRESENT CRTs

A. Present Technology Status
An important trend over the past decade has been toward improved stability of the CRT characteristics and longer life. Current industrial and military tubes now used for graphic and radar displays have a useful life well in excess of 10,000 hr. For very-high-brightness displays used in aircraft cockpits under full sunlight, the life is in excess of 1000 hr. This longer useful life has been obtained by progress with respect to several factors, mainly the screen materials and the vacuum techniques. Screen deposition techniques have been developed so that the luminous efficiency of the phosphor material is not reduced as a result of processing in tube fabrication. This has been achieved primarily by reduction of the quantity of binder (nonluminescent chemicals) used to obtain adherence of the phosphor particles to the faceplate, by an optimization of the thickness of the phosphor layer (related to screen voltage), and by a higher reflectivity of the aluminum film (generally used as a reflecting conductive layer) on the back of the phosphor coating. Vacuum techniques have been improved thanks to a better knowledge of the influence of residual gases on the cathode emission and life. Authors such
as Barosi and Giorgi (1972), on the basis of residual gas analyses in various CRTs by means of mass spectrometry, have shown that the residual gases in a CRT are mainly hydrocarbons, carbon monoxide and dioxide, oxygen, hydrogen, and nitrogen. Oxygen, halogens (such as chlorine and fluorine), and hydrocarbons have been shown to be very active poisons of the cathode, so that the residual pressure of these gases must be kept to the lowest level possible. The pumping equipment usually employed for fabricating CRTs consists of an oil mechanical pump followed by a diffusion pump, whose limiting vacuum is reached with the help of a cold trap. The mineral oil of the backing pump and the silicone oil of the diffusion pump are the most important causes of the presence of hydrocarbons in the tube, since their vapors back-diffuse from the pumping equipment into the tube. Improved design of the mechanical and diffusion pumps, together with recently developed very-low-residual-pressure diffusion pump oils, now makes it possible to obtain low CRT pressures without sophisticated cold traps. For very-high-performance CRTs, where the peak loading reaches levels of 3.5 A/cm² for an oxide cathode (and 10 A/cm² for an impregnated cathode), it is preferable to use ionic pumps or turbomolecular pumps to obtain a hydrocarbon-free vacuum, the pumping being initiated by zeolite traps. This type of pumping is necessary because of the very high peak cathode loadings used, which approach the theoretical temperature-limited saturation currents; the smallest poisoning of the cathode due to residual gases will thus reduce performance. After the tube is sealed off the pumping equipment, it is necessary to maintain its vacuum and to absorb any gas subsequently released from the various electrodes and internal coatings of the CRT. For this purpose, manufacturers have always used pumps (called getters) inside the CRT which operate by chemisorption and surface absorption. Giorgi and Della Porta (1972) have been among the important contributors to the development of high-pumping-speed getters. These are capable of maintaining a very low residual pressure in the CRT in practically all present conditions of operation and outgassing. The combination of improved pumping techniques and better-performing getters has led to longer and more predictable tube life, as well as stable cathode emission characteristics throughout the useful life. Apart from these advances, new electron optical designs, elaborated with the help of computer programs, have permitted a reduction in lens aberrations. This in turn allows a better combination of lenses to be used, resulting in reduced spot size and a more uniform center-to-edge resolution. These improvements, however, can only be realized by precision mechanical alignment of all parts of the electron gun. A high-precision mechanical fixture, such as that shown in Fig. 80, limits the overall misalignment to less than 20 µm along
FIG. 80. High-precision mechanical fixture for aligning electron gun parts. An electron gun removed from the fixture is shown separately.
the total length of the gun, and to less than 10 µm in the tetrode part of the same gun. The precision involved in electron gun manufacturing is thus very comparable to that in optical lens manufacturing.

B. State of the Art of Typical CRTs
1. Tubes for High-Resolution Recording

For computer output recording, microfilming, and typesetting, tubes are available which can produce an optical image (by means of high-resolution flat-field optics) on a photographic film of dimensions up to approximately 250 x 250 mm. In this case the phosphor screen must be flat to an optical tolerance of a few micrometers so that the image focused onto the film maintains the 20-µm resolution generally required at a magnification ratio of 1/1. The faceplate must hence be thick enough to remain flat after pumping. In general, to reduce deflection defocusing, the deflection angle of such tubes is kept below ±20°. The electron gun is of the high-resolution type described in Section II.B.1, and the tube is generally operated with static and dynamic
astigmatism correction of the spot, electronic correction of pincushion distortion, as well as with dynamic focusing. The screen must also be of the very fine grain type (phosphor particles with about 1 µm average size). Generally, P11, P46, P47, or P52 phosphor screens are used, whose spectral emissions match the spectral sensitivities of various photographic emulsions. Because of the high resolution required, great care must be given to the mechanical assembly of all the components (such as deflection, focus, and astigmatism coils, shield, and optics), and very often all these parts are bonded together with a semi-rigid resin. The magnetic shielding of the tube is also critical (usually a double shield is necessary), as is the stability of the power supplies and deflection amplifiers. As an example, let us assume a tube operating at Vs = 10,000 V with a deflection angle of ±20°. Since the deflection angle is given by sin θ = K(2Vs)^(−1/2) [Eq. (27), Section IV.A.1], we can write dθ/tan θ = dVs/2Vs. If we assume a Vs ripple of ±5 V (0.05%), the variation dθ in deflection angle will be 1.8 × 10⁻⁴ rad. At a distance of 150 mm from the deflection center, this represents a spot movement of 27 µm, which is larger than the spot diameter of 20 µm. Similar variations in spot position occur for residual ripple in the deflection current. The high-resolution CRT with its power supplies and deflection means, unlike the ordinary display tube, is an expensive piece of high-precision mechanics and optics that exploits the latest state-of-the-art techniques. Such a CRT is shown in Fig. 81 and its characteristics are given in Table XIV (Column 1.1). Another type of high-resolution CRT, developed to eliminate the optics and to transfer the image directly from the phosphor screen to a photographic film, employs a fiber-optic faceplate with a magnification ratio of 1:1.
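The ripple example above can be retraced numerically from dθ = tan θ · dVs/(2Vs), with the ±5 V ripple treated as a 10 V peak-to-peak excursion; this is a sketch of the arithmetic only.

```python
from math import tan, radians

V_s = 10_000.0          # accelerating (screen) voltage, V
ripple = 10.0           # +/-5 V ripple taken as a 10 V peak-to-peak excursion
theta = radians(20.0)   # deflection angle
L = 150.0               # distance from the deflection centre, mm

d_theta = tan(theta) * ripple / (2.0 * V_s)      # from d(theta)/tan(theta) = dVs/2Vs
spot_movement_um = d_theta * L * 1000.0          # mm -> um

print(f"d_theta = {d_theta:.2e} rad   spot movement = {spot_movement_um:.0f} um")
```

This reproduces the 1.8 × 10⁻⁴ rad and 27 µm figures quoted in the example.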
FIG.81. A high-resolution CRT of 20 μm spot size (courtesy of Thomson-CSF).
The basic idea of direct printing tubes was first demonstrated by Olden (1957) who used a tube with a thin mica window (75 μm thickness) for contact exposure of electrofax paper. Some tubes of this kind are still used for special applications but most of the printing tubes presently in operation employ fiber-optic faceplates whose fibers are compressed under heating to make the faceplate vacuum-tight. The individual fibers (Fig. 82) consist of a glass core of index n₁ clad with a glass of lower refractive index n₂, so that light entering the fiber is reflected at the interface between the core and the cladding and can only escape at the end of the fiber. The light-gathering power of such a fiber is related to the limiting entrance angle θ of the light which will be transmitted through the fiber by successive reflections at an angle ≥ φ (see Fig. 82). This limiting angle is given by the numerical aperture NA, where NA = sin θ = n₁ cos φ = √(n₁² − n₂²).
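As a rough numerical illustration of the numerical-aperture relation just given, the sketch below evaluates NA and the limiting entrance angle for a hypothetical core/cladding index pair; the index values are assumptions chosen for illustration only and are not taken from the text.

    import math

    n1 = 1.62   # core refractive index (assumed, illustrative)
    n2 = 1.52   # cladding refractive index (assumed, illustrative)

    NA = math.sqrt(n1**2 - n2**2)             # NA = sin(theta) = n1*cos(phi)
    theta_max = math.degrees(math.asin(NA))   # limiting entrance angle

    print(f"NA = {NA:.2f}")                                   # about 0.56
    print(f"limiting entrance angle = {theta_max:.0f} deg")   # about 34 deg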
Figure 83 shows the cross section of a CRT fiber-optic faceplate with the photographic film pressed against the outside of the fiber bundle and the luminescent screen and aluminum-film backing deposited on the inside. Because of its high optical efficiency such a structure makes possible about 20 times more light on the film compared to a conventional optical system used with a flat-faced CRT. In addition to this gain there is no need for precise mechanical focusing of the optics and the high cost of the optics is also eliminated. Progress in fiber-optic manufacturing has resulted in major improvements in the optical quality (reduction in the number of dead fibers, cladding opacity), the range of numerical apertures available, and in the voltage breakdown limit between opposite faces of the fiber faceplate (which may be as high as 25 kV). Most fiber-optic tubes (an example of which is shown in Fig. 84) have a narrow printing window extending across a standard paper format width of 210 mm, with some tubes covering the computer format of 400 mm. Since the smallest fiber size presently is around 5 μm, the resolution is limited to about
FIG.82. A single optical fiber.
FIG.83. Section of CRT fiber-optic faceplate with photographic film.
20 μm, approximately 4 times this diameter. Typical characteristics of a fiber-optic tube are given in Table XIV, Column 1.2.

2. Oscilloscope Tubes for Display of Fast Transients

Most oscilloscope tubes presently in service, except for cheap indicator tubes, use one of the three postacceleration methods described in Section III: either a resistive spiral, a mesh with deflection magnification, or a quadrupolar-lens deflection magnification coupled with postacceleration. The tubes with the highest performance are those intended for high-frequency operation where the best possible writing speed and sensitivity are
FIG.84. A typical fiber-optic cathode-ray tube (courtesy of DuMont).
[Table XIV. Typical Characteristics of Cathode Ray Tubes. Columns: (1.1) high-resolution recording tubes; (1.2) fiber-optic printing tubes; (2) oscilloscope tubes; (3.1, 3.2) raster-scan display tubes (computer-terminal and high-resolution types); (4.1) graphic-console penetration color tubes; (4.2) radar tubes; (5.1) head-up display tubes; (5.2) head-down display tubes; (5.3) projection tubes. Rows: dimensions (diagonal or diameter, deflection angle, length); operating voltages (screen, focus); operating currents (peak screen, peak cathode); optoelectrical performance (peak line brightness, full-screen raster brightness, spot size, phosphor); mechanical performance (random and sinusoidal vibration, shock); operating temperature range.]
required. A good example of a high-writing-speed tube is described by Nishino and Maeda (1973). This tube combines a helical line deflection system to achieve a 4-GHz bandwidth, a resistive spiral for postacceleration, and a fiber-optic faceplate to improve the phosphor-to-film energy transfer. In order to achieve higher writing speeds, electron multiplication between the deflection zone and the screen was introduced by Loty and Dreano (1970) using microchannel plates placed adjacent to the screen. These plates consist of an array of small channels, about 10 μm in diameter, in a glass structure (Fig. 85). In operation a difference in potential V₂ − V₁ is applied between the metallic coatings on both faces of the plates, generating an electrostatic field along the length of each channel. Electrons entering each channel generate secondary electrons when they strike the walls, which have a high secondary emission ratio. These secondary electrons are successively multiplied, resulting in a large number of electrons at the output of the channel for each input electron. Typical electron gains of 10³ to 10⁴ are achieved depending upon the type of plate and the difference of potential applied between its faces (of the order of 1 kV). An example of a high-performance 7-GHz CRT using a microchannel plate together with a fiber-optic faceplate and a helical line deflection system is described by Loty (1983). This tube enables the recording of subnanosecond transients with a vertical sensitivity of 5 V full scale at a bandwidth of 7.4 GHz. Another tube, described by Janko (1979), combines helical deflection lines with quadrupolar lenses to magnify the deflection together with a large-area microchannel plate to intensify the electron beam on a 68 x 85 mm² screen. Its quadrupolar lens system, described by Odenthal (1977), derives from earlier work by Deschamps (1965). For the vertical deflection system they obtained a
FIG.85. Cross section of a microchannel plate with a difference of potential V₂ − V₁ applied between metal-coated faces.
bandwidth of 3.5 GHz, a rise time of 100 psec, and a sensitivity of 1.1 V/cm. For the horizontal deflection system they achieved a bandwidth of 1.2 GHz, a rise time of 290 psec, and a sensitivity of 1.5 V/cm. Using a spot size of 0.25 mm, a writing speed in excess of 3 x 10" cm/sec was obtained. A schematic of this tube is shown in Fig. 86. To ensure sufficient current density of the beam at the microchannel plate an electron gun was designed using a demagnifying lens to first image the crossover C to a smaller size C' before the main focusing lens. The helical deflection system is then followed by a quadrupolar lens system, in the shape of a rectangular box, which produces a deflection gain of about 4. Since the microchannel plate produces a current gain of 5000, single-shot transients with a writing speed of 3.106 cm/sec could be photographed. A photograph of the tube is shown in Fig. 87 and its general characteristics are listed in Table XIV (Column 2).
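As a cross-check not stated in the original description, the quoted bandwidth and rise-time pairs are consistent with the familiar rule of thumb t_r ≈ 0.35/BW for a Gaussian-like response; the fragment below simply reproduces them.

    # Rise time ~ 0.35 / bandwidth (assumed Gaussian-like response; a rule of thumb)
    for bw_ghz, quoted_ps in [(3.5, 100), (1.2, 290)]:
        t_r_ps = 0.35 / (bw_ghz * 1e9) * 1e12
        print(f"BW = {bw_ghz} GHz -> t_r ~ {t_r_ps:.0f} ps (quoted: {quoted_ps} ps)")
    # BW = 3.5 GHz -> ~100 ps; BW = 1.2 GHz -> ~292 ps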
3. Display Tubes Using a Scanning Raster

Such tubes are now in widespread use the world over in computer terminals and word processors. Frequently, a conventional interlaced raster (either 525 lines/60 Hz or 625 lines/50 Hz) is used with the information presented being principally alphanumerics with some graphics. A recent trend is to replace this TV raster with a noninterlaced raster having a refresh rate of 60 Hz or somewhat higher, thus eliminating the interline flicker due to interlacing as well as large-area flicker as shown by De Lange curves (see Fig. 55, Section V). Since raster scanning does not demand excessive power from the deflection system, a screen voltage of 20 kV or more can be used, allowing a relatively small beam current with a correspondingly small beam diameter. Such a small beam diameter is compatible with unipotential-type electron optics, a
FIG.86. Schematic of a high-performance oscilloscope tube incorporating an electron-multiplying microchannel plate (see also Fig. 87).
FIG.87. A high-performance oscilloscope tube having a bandwidth of 3.5 GHz (courtesy of Tektronix).
medium-high peak cathode loading, and allows deflection angles of 90° or more without the need for dynamic focusing. The main features of these tubes are their good spot size in comparison to that of monochrome TV tubes and their low price obtained through mass production. Typical characteristics of a computer terminal tube are given in Table XIV (Column 3.1). For special applications, such as the display of medical radiographic images, where the smallest detail contrast must be discernible, tubes with special electron optics are used, generally of the magnetic focus type, as well as high-resolution screens to prevent light scattering from degrading picture quality. Dynamic focusing and dynamic astigmatism correction are also often used to obtain a very uniform picture resolution. Typical characteristics of a tube of this type are given in Table XIV (Column 3.2).

4. Display Tubes Using Stroke Writing

a. Graphic Console Tubes. A typical application for tubes with stroke writing is in computer-aided design (CAD). Often a penetration screen is used to offer the versatility of color graphics. Typical tube designs use a high-voltage-focus electron gun (see Section II.B.4) and dynamic focusing. Since these tubes are usually operated in areas with moderate ambient lighting (about 200 lux) a neutral density filter, preferably anti-reflection coated, will usually result in sufficient contrast. Typical characteristics of such a color tube are given in Table XIV (Column 4.1) and Fig. 88 shows a 25-in. diagonal tube used in large graphic consoles. The main advantage of the stroke-writing mode is the quality of graphic curves and lines, which are not discontinuous as in raster scanning. Also, flicker can be avoided if a refresh rate in excess of 60 Hz is used. These displays cause relatively little operator fatigue when used intensively for long periods, provided that the electronic signal generators and deflection amplifiers are
FIG.88. A 25-in. diagonal color penetration tube used in large graphic consoles (courtesy of Thomson-CSF).
sufficiently stable. By comparison, a raster-scanned display is more likely to cause observer fatigue, especially if interlacing is used, which produces noticeable flicker on adjacent horizontal scan lines. Although a graphic console using the stroke-writing mode costs much more than one using television scanning, this may be justified by the reduced visual fatigue of the operator and the associated psychological problems.

b. Radar Tubes. Previously, tubes with long-persistence phosphor screens were always used for this purpose, on which raw radar information was displayed. The operator then had to interpret all incoming echoes and identify them against the noise background. The latest trend has been to process the raw radar information and then display it on a graphic console, using tubes such as those described above. In some cases it is desirable to have the possibility of also displaying the raw radar information on the screen as a backup safety measure. For this purpose, penetration phosphors can be employed since they permit the use of a combination of a short-decay-time phosphor for displaying computer-processed information and a long-persistence phosphor (operated at a higher screen voltage) for displaying the raw radar information. Research is still continuing, however, to obtain phosphors with a longer useful decay time and less low-level afterglow. Because many of these tubes are round (to present a polar-coordinate display) they often use a glass-metal envelope consisting of a nearly flat faceplate and a chromium-iron alloy conical section. Such tubes
have a reduced weight and an increased mechanical strength compared to all-glass tubes. The electron optics used in radar tubes are generally of the high-voltage focus type. Characteristics of a typical radar tube are given in Table XIV (Column 4.2).

5. High-Brightness and/or High-Contrast Displays
a. Head-Up Display Tubes (HUD). The head-up display is used in aircraft, whereby the output of the CRT is projected by means of collimating optics and a semi-transparent mirror (see Fig. 89) to produce a virtual image that the pilot sees through the windshield at infinity. The pilot can thus keep his head up (hence the name HUD) while viewing the displayed information without looking down at the instrument panel, which would require repeatedly refocusing his eyes from infinity to the nearby instrument panel. To ensure that the center of the image (used as a target finder) always remains on the airplane axis, the optical axis of the HUD must be rigidly bound to the airframe. Since dampers cannot be used to absorb the shock and vibration transmitted to the HUD by the airframe, the CRT must be extremely rugged and capable of operating properly without breakdown in this environment. Since hazy clouds give an ambient background lighting of 100,000 lux, the CRT must have, in addition to good resolution, a high brightness of, for example, 40,000 cd/m², to produce images of sufficient contrast.
FIG.89. Schematic of a head-up display (HUD) system.
FIG.90. A typical head-up display tube (courtesy of DuMont).
Characteristics of a typical HUD CRT are given in Table XIV (Column 5.1), while Fig. 90 shows an example of such a tube. The main difficulties encountered in designing these tubes are similar to those encountered with projection tubes, namely the high power dissipation required on the phosphor screen to obtain the necessary brightness. This presently limits the choice of phosphors to P1 and P53. As an alternative, P43 and P44 phosphors can also be used by employing dichroic semi-transparent mirrors having a high reflectivity in the 545-nm band emitted by these phosphors, thus reducing the luminance required from the tube to 25,000 cd/m². Present trends are to use HUD displays for presenting infrared television pictures as well as general flight information. The use of such displays for presenting increasing amounts of information is in turn resulting in increased demands for higher resolution and brightness, associated with low power consumption.

b. Head-Down Display Tubes (HDD). As opposed to a head-up display, a head-down display is mounted on the aircraft instrument panel and is viewed directly, requiring the pilot to look down. The basic characteristics of tubes used in such displays are high brightness and good resolution, together with the low power consumption and ruggedization that are essential for all avionic equipment. A head-down tube has a small neck diameter to reduce yoke dimensions and increase deflection sensitivity so that the driving power is reduced to a minimum. For the same reason, the acceleration voltage is kept to the minimum value compatible with the necessary brightness of about
FIG.91. A typical head-down display tube (courtesy of Thomson-CSF).
6000 cd/m² for a full-screen raster. For monochrome displays, P53 or P43 screens coupled to a narrow-band contrast-enhancement filter are generally used, while for color displays, whose brightness is lower, a more efficient directional contrast filter may be employed. Figure 91 shows a typical head-down CRT low-weight assembly in an aluminum casting which incorporates the tube, its deflection coil, and the associated contrast filter. Typical characteristics of such a display are given in Table XIV (Column 5.2).

c. Projection Tubes. Projection tubes are used in applications such as air traffic control for the display of data and in flight simulators. For these applications high resolution combined with a luminous flux output of at least 50 lm (ranging sometimes up to 1400 lm for special applications) is necessary. For color displays, a combination of three tubes, each producing one of the primary colors, green, red, and blue, is used. High resolution and high luminous flux require screen voltages over 30 kV (sometimes up to 75 kV) and beam currents in the range of one to several milliamperes, which may cause rapid degradation of the screen efficiency. A compromise must be found between the projection optics, whose size is limited, and the screen surface, which must be increased to reduce its current loading. Different configurations using either refractive optics or Schmidt optics have been discussed by Good (1976). Todd and Starkey (1977) have described tubes which incorporate internal Schmidt optics using an arrangement as
shown in Fig. 92. Here a metallic convex mirror coated with a phosphor is bombarded by the electrons from the gun mounted on the mirror axis. The concave mirror at the rear of the vacuum envelope projects the image of the phosphor through a transparent faceplate and a Schmidt aspherical correction lens onto a screen. The light collection efficiency of such a Schmidt optical system is usually considerably better than that of refractive optics since their f-number may be smaller than 1, compared to 1.5 to 3 for refractive optics. On the other hand, the rather small surface of the phosphor results in a high current density which reduces the tube life when a high luminous output is required. The present trend is to use air-cooled or liquid-cooled projection tubes as described by Kikuchi et al. (1981) together with external refractive optics to allow the use of a screen area of at least 120 cm², resulting in reduced current density. Recent progress in refractive optics, thanks to the use of plastic moulded aspherical lenses, as described by Howe and Welham (1980), allows numerical apertures approaching 1 and permits a high luminous output on the projection screen while permitting sufficient resolution. Better contrast is also achieved by optically coupling the tube and the lens, which reduces internal reflections. Because of the possibility of arcing between the electron gun electrodes resulting from the high anode voltage, the circuit design must be such that the video amplifier and the deflection circuits are well protected. Typical characteristics of a projection tube are given in Table XIV (Column 5.3).
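As a rough, paraxial illustration of the f-number argument above (the relative flux scaling 1/(4N²) neglects vignetting and obliquity, and the particular focal ratios are assumptions, not data from a specific design):

    # Relative light collection of projection optics, assuming flux ~ 1/(4*N**2)
    def relative_flux(N):
        return 1.0 / (4.0 * N**2)

    schmidt_N, refractive_N = 0.9, 2.0   # assumed focal ratios for comparison
    gain = relative_flux(schmidt_N) / relative_flux(refractive_N)
    print(f"A Schmidt system at f/{schmidt_N} collects roughly {gain:.1f}x "
          f"the light of a refractive system at f/{refractive_N}")
    # about 4.9x for these assumed values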
FIG.92. A CRT incorporating mirror optics for display projection.
6. Color Tubes
Since domestic shadow-mask color TV tubes are generally unsatisfactory for industrial and military applications, a large research effort has been directed toward the development of color tubes having the required characteristics of resolution, color stability with time, and ruggedness. This work has led to the penetration tube, the current-sensitive multicolor tube, the high-resolution shadow-mask tube, and the beam-index tube described below.

a. The Penetration Tube. The principle of operation of penetration tubes is based on the variation in depth of penetration of an electron beam in successive layers of different phosphor materials when the accelerating voltage is varied. A typical penetration screen in which a red and a green phosphor layer are successively deposited on a glass substrate is shown in Fig. 93. At electron energies below eV₀, the threshold energy, incident electrons will only excite the red layer. Above eV₀, the green layer will be more and more excited as the energy is increased. The mixture of red and green colors will hence give successively orange, yellow, and green colors. Figure 94 shows the red and green luminances measured on such a screen at varying anode voltages, at constant beam current density. The total luminance curve results from the addition of red and green luminances above the threshold V₀. In operation, single-gun penetration color tubes are operated in a sequential manner, the high voltage being switched at a frequency above the flicker perception threshold, so that the desired color information produced at each voltage level combines sequentially in the observer's eye to produce the final picture. Morrell et al. (1974) have published a good mathematical model of the operation of a penetration tube, described earlier by Pritchard (1965). Galves (1979) and Martin and Galves
FIG.93. Principle of operation of a penetration screen.
FIG.94. Typical luminance curves of a penetration color screen.
(1982) have also reviewed the theory, operation, and design of multifunction penetration screens.

i. Classification of penetration phosphor screens. Penetration phosphor materials and screens are manufactured by several different techniques. However, these techniques can be separated into two main categories, namely, multilayered and composite crystal phosphor screens.
Multilayered phosphor screens. One approach is to use thin-film technology, where successive layers of phosphors are evaporated sequentially on a substrate. Initial research on single-layer thin-film phosphors was done by Feldman (1957) and subsequently by Buchanan et al. (1973), who used multilayer films of different materials. A very elegant method to produce such a film was to evaporate a film of a rare earth oxysulfide such as La₂O₂S, which was successively doped during its deposition with terbium and europium giving, respectively, green and red emitting layers (Fig. 95a). However, no information on the luminous efficiency and color characteristics of these screens has been published.
[Fig. 95 panels: (a) thin-film technology, with La₂O₂S:Eu and La₂O₂S:Tb layers on a substrate; (b) polycrystalline technology, with a red phosphor layer, an inert layer, and a green phosphor layer on a substrate.]
FIG.95. Multilayered penetration screen technologies.
Thin-film evaporated screens have a number of advantages over particulate screens. They are capable of higher resolution because light scattering within the film itself may be quite small; they have a higher contrast capability because ambient light is not backscattered toward the viewer; and they provide better thermal contact between the luminescent material and the substrate, improving cooling and allowing the use of higher beam power. In spite of these potential advantages, evaporated screens are not commonly used because of the large difference in index of refraction between the screen and the substrate, causing about 90% of the light emitted by the phosphor to be trapped within the thin film. Also, the high-temperature process needed to prepare thin-film evaporated screens necessitates the use of high-temperature substrates, which in turn leads to difficult and expensive tube manufacturing processes. Another type of multilayered phosphor screen, referred to as the superimposed layers screen (SLS), consists of two layers of polycrystalline phosphor powder separated by an evaporated barrier layer (Fig. 95b). Chevalier and Galves (1982) have developed such screens for cockpit display tubes. This new screen, called E23, has about twice the P49 efficiency.

Composite-crystal phosphor screens. The above multilayer screens require very good thickness uniformity of the successive screen layers to insure color uniformity over the entire screen surface. This is difficult to achieve on screens with a diagonal greater than 160 mm. In such cases, a different type of screen, consisting of a uniform mixture of phosphor particles, is used. Figure 96 shows four types of composite-crystal phosphor screens. The mixed-phosphor screen (Fig. 96a) consists of a mixture of two phosphors, one red and one green, in
[Fig. 96 panels: (a) mixed-phosphor screen; (b) double-layer mixed-phosphor screen; (c) onion-skin phosphor; (d) double-layer phosphor.]
FIG.96. Composite crystal penetration phosphor screens.
which the green particles are coated with a nonluminous barrier that prevents their excitation by a low-voltage beam. (A typical phosphor representative of this technology is registered under the number P50.) The double-layer mixed-phosphor screen (Fig. 96b) has been designed by Burilla et al. (1979) to reduce the red phosphor excitation in the high-voltage mode, and therefore increase the color variation with change in voltage. Here, the red material is coated on a nonluminescent core material. The onion-skin phosphor (Fig. 96c) is composed of a green phosphor core first coated with a nonluminescent barrier layer, then with a layer of small grains of red phosphor. (A representative phosphor of this category is registered under the number P49.) The double-layer phosphor (Fig. 96d) is composed of a single host material, such as a rare earth oxysulfide, doped at different depths by two different activators, such as terbium and europium, to give a core and a concentric shell that emit different colors. Such a material has been described by Tecotzky and Mattis (1971) and compares quite well in performance with the thin-film screens of similar nature described by Buchanan and Maple (1979). In practically all of the techniques described, a dead barrier layer is used to shift the green brightness towards a higher voltage. However, the energy absorption of this layer reduces the efficiency of penetration phosphor screens to about 20-30% of that of equivalent single-layer phosphor screens. (In this connection it should be noted that in shadow-mask tubes the mask absorbs
about 80% of the beam energy.) To show the difference in performance of the various penetration screens, we present three sets of curves, each giving the proportion of beam energy absorbed in the four main components of a penetration screen, namely, the aluminum layer, red phosphor, barrier layer, and green phosphor. Figure 97 corresponds to the mixed-phosphor screen of Fig. 96a, Fig. 98 to the onion-skin phosphor of Fig. 96c, and Fig. 99 to the superimposed layer screen of Fig. 95b. These curves may be compared with one another since the phosphors shown have the same color characteristics at 10 and 18 kV. It can be easily seen from the curves that the superimposed layer screen has the best efficiency because of the smallest absorption in the barrier layer. It should be noted that by a suitable choice of phosphors the persistence, as well as the color, may be varied, making such penetration screens useful in a variety of applications. Penetration tube screens have a limiting resolution of 10 to 15 cycles/mm for a powder screen structure, and up to 100 cycles/mm for a thin-film multilayered structure. The tube resolution is thus limited in practice by the gun characteristics, as in the case of monochrome tubes. Resolutions well over 2000 cycles per screen diameter are currently obtained, and the picture quality compares very well with that of monochrome tubes. Penetration tubes may also employ more than one gun, with each gun operated at a given
FIG.97. Relative absorption of penetration screen components: The mixed-phosphor screen.
FIG.98. Relative absorption of penetration screen components: The onion-skin phosphor.
FIG.99. Relative absorption of penetration screen components: The superimposed layer screen.
voltage and deflected independently to generate one color only. Since high-voltage-switching power supplies are no longer necessary with this type of tube and sequential operation is avoided, such tubes can be operated in accordance with television standards. However, it is difficult to obtain exact registration of the two electron beams and there is an increase in deflection power due to the separate deflection systems. For most applications, penetration tubes with a single gun are used to avoid problems of registration. However, the light output of the display is limited because of the required blanking, during which time the high voltage is switched, generally of the order of 200 μsec. The high-voltage switch requires a special power supply whose cost is high compared to the cost of television display power supplies.

b. The Current-Sensitive Multicolor Tube. Another approach to color CRTs is to exploit the nonlinear brightness that many phosphors exhibit with beam current density, as discussed in Section V.C.2. If a red sublinear phosphor is mixed with a green superlinear phosphor in a CRT screen as shown in Fig. 100, a change of color results as the beam current is increased. Sisneros (1967) reported early experimental results with such a phosphor screen, producing a color change from red to amber-green with variation of beam current density. Progress made in phosphors has since made possible a larger color variation, as reported by Ohkoshi et al. (1983) using a sublinear red phosphor of yttrium oxysulfide (doped with europium and minute amounts of terbium and praseodymium) and a superlinear green phosphor of zinc cadmium sulfide (silver and nickel coactivated). With these phosphors, a color shift from red to green was obtained with good saturation characteristics. To avoid brightness modulation with changes in beam current, Ohkoshi et al. (1983) employed a beam chopping method to compensate for the changes in current density. Although showing promising results, current-sensitive multicolor tubes are still in the laboratory stage and will probably require further phosphor development before they are widely used.

c. The High-Resolution Color Shadow-Mask Tube. For the display of alphanumeric and graphic information, shadow-mask tube structures having a finer mask pitch than home TV tubes have been developed, as described by Takano (1983). The small mask pitch (0.2 mm for recent tubes) requires special screen processing of fine-grain phosphors and special electron optics to produce a sufficiently small spot size and proper beam landing angle. In addition, proper registration must be provided by the deflection system across the entire screen area. The electron guns of high-resolution shadow-mask tubes make use of electrostatic focus produced by a bipotential lens structure (see Section II.B.4). The three guns may have a triangular (delta) configuration, or they may be mounted in a plane (in-line) parallel to the horizontal deflection axis. In either
FIG.100. Luminance of sublinear and superlinear phosphors versus current density (arbitrary units).
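A minimal sketch of the current-sensitivity idea illustrated by Fig. 100, assuming simple power-law luminance characteristics L proportional to J^γ with γ < 1 for the sublinear red phosphor and γ > 1 for the superlinear green one; the exponents and scale factors are invented for illustration and do not describe any particular phosphor pair.

    # Assumed power-law luminance characteristics (illustrative exponents only)
    def red_luminance(J):      # sublinear phosphor
        return 1.0 * J**0.6

    def green_luminance(J):    # superlinear phosphor
        return 0.05 * J**1.8

    for J in (1.0, 10.0, 100.0):   # relative beam current density
        ratio = green_luminance(J) / red_luminance(J)
        print(f"J = {J:6.1f}: green/red luminance ratio = {ratio:6.2f}")
    # The ratio rises with J, i.e. the perceived color shifts from red toward green.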
case the three deflected electron beams must be dynamically converged over the whole screen area. The in-line guns, located in a plane of symmetry of the deflection system, require less sophisticated convergence circuitry than the delta guns. On the other hand, for a given deflection coil, neck diameter, and deflection angle, delta guns allow larger-diameter electrodes to be used than in-line guns, resulting in less spherical aberration and a better spot size, or the ability to handle higher beam currents for the same spot size. When high brightness and resolution are required, the delta gun configuration is preferred, although this is at the expense of more complex convergence circuitry and more difficult adjustment of the display. Presently, 0.2-mm-pitch shadow-mask tubes are available up to 20-in. diagonal size with conventional dome-shaped mask structures mounted in a
rectangular envelope, and 0.3-mm-pitch tubes have been demonstrated up to 31 in. diagonal (Sudo, 1986). An interesting development has been reported by Robinder et al. (1983) for cockpit display tubes, in which a flat-face tube is used with a flat taut mask made of Invar material having a fine 0.2-mm pitch. Because of the low thermal expansion of the Invar mask, high current densities can be used, and a 500-fL area brightness in green has been successfully achieved without loss of color purity. Flat tension mask CRTs up to 14 in. diagonal have also been recently presented by Dietch (1986). High-resolution shadow-mask tubes presently have a resolution which approaches that of color penetration tubes, making them increasingly useful in color display terminals. However, such tubes have still to be improved for industrial and military applications, where the environmental conditions of vibration and shock may disturb the complex equilibrium of the structure (mask, screen, electron gun, deflection system), making it difficult to continuously maintain the quality requirements.
d. The Beam-Index Tube. The principle of operation of the beam-index tube, well described by Morrell et al. (1974), is based on determining the position of a single electron beam on a striped color screen whose back is coated with indexing phosphor stripes invisible to the observer. The light radiated from these stripes when excited by the scanning beam is detected by a fast photosensor at the side or rear of the tube, producing indexing pulses. Proper electronic processing of these pulses makes it possible to determine the beam position with respect to the vertical color stripes and allows the beam to be switched on at the correct moment to produce the desired color. The tube itself is structurally simple compared to a shadow-mask tube and can be made very rugged. The difficulties lie in the electron beam size, which limits the brightness, and in the electronic processing of the signals, which requires a very large bandwidth and hence makes the video amplifiers difficult to design. Typical designs have been discussed by Turner (1979) and Ohkoshi et al. (1981). Recently, Inoue et al. (1983) described a small beam-index tube for camera viewfinder applications. At present it seems that small and medium size beam-index tubes may find some applications in the industrial and military field because of their inherent ruggedness.
IX. CONCLUSION

During the 1970s there was a general feeling that some form of flat panel would soon replace the CRT, at least for most graphic and alphanumeric applications. However, the flat panels have so far failed to perform as well as the CRT in many respects. No presently envisioned flat-panel technology has
either the size, the brightness, gray-scale capability, speed, color capability, or convenience of addressing of CRTs, let alone a combination of these characteristics. Aside from this, the quality and performance of CRTs are still improving. Three-dimensional computer programs in electron optics can now help in calculating precise electron trajectories, taking into account mechanical misalignment defects. With respect to phosphor screens, although major improvements in zinc sulfide phosphors are not expected, rare earth compounds are presently the subject of extensive research, leading every year to new phosphors with improved characteristics and better life. If little improvement can be expected from conventional oxide cathodes, used for the last 50 years, the increasing use of impregnated cathodes allows the design of electron guns using the 10-times higher current density supplied by these cathodes. General design and tube process improvements, as well as the associated developments in stable drive and deflection circuits, are leading to better picture quality, display reliability, and ruggedness, while reducing the price-to-performance ratio of the CRT display. Computer-aided design using three-dimensional programs may also make it possible to seriously consider some form of flat CRT, a subject abandoned for many years and recently revived by Emberson (1986). The flat CRT may hence be the flat panel of the future.
ACKNOWLEDGMENTS

This chapter has developed over the course of several years, and I am indebted to many people who have helped to generate the material presented herein. For their major participation, I would particularly like to thank A. Albertin for the electron guns and electrostatic deflection sections, J. P. Galves for the phosphors and screens section, R. Pieri for the electromagnetic deflection section, and A. M. Shroff for the oxide and impregnated cathode portion. The author would also like to mention the infinite patience of Dr. B. Kazan, who has taken many hours from his busy schedule to read, correct, and propose improvements and clarifications, and without whom this publication would never have been possible. We also wish to thank F. Bernier, O. Salomon, and their teams for their help in preparing the figures, typing, and correcting the manuscript.
REFERENCES

Albaut, R. (1970). Programme canon. Thomson-Varian Tech. Rep. No. 038. Albertin, A. (1976). CRT's for 1 GHz oscilloscope. Onde Electr. 56 (5), 247-255. Alig, R. C., and Hughes, R. H. (1983). Expanded field lens design for in-line picture tubes. SID Dig. 14, 70-71.
Barosi, A., and Giorgi, T. A. (1972). SAES Tech. Rep. TR2S. Bates, D., Silzars, A., and Roberts, L. (1974). US. Patent 535,098. Braun, F. (1897). Uber ein Verfahren zur demonstration und zum Studium des zeitlichen Verlaufes variabler strome. Ann. Phys. (Leipzig)[3] 60,552-559. Brilliantov, D. P. (1972). Efficient deflection system for a miniature kinescope, Telecommun. Radio Eng. (Engl. Trans/.)26,96- 100. Briiche, E., and Scherzer, 0.(1934).“Geometrische Elektronenoptik.” Berlin. Buchanan, R. A., and Maple, T. G. (1979). Seminar Lecture Note at the SID Conference of University of Illinois, Urbana. Buchanan, R. A., Maple, T. G., Bailey, H. N., and Alves, R. V. (1973). “On Luminescent Applications of Rare Earth Materials,” Final Report, No. LMSC-D315254, Contract DAHC 15-71-C-0138. Lockheed Palo Alto Research Lab., Palo Alto, California. Burilla, C. T., Rafuse, M. J., and Kestigian, M. (1979). Penetration and cathodoluminescent properties of a La,O,S penetration phosphor. SID Dig. Tech. Pap., pp. 70-71. Busch, H. (1926). Berechnung der Bahn von Kathodenstrahlen in axial symmetrischen Elektromagnetischen felde. Ann. Phys. (Leipzig)[4] 8,974-993. Busch, H. (1927). Uber die wirkungweise der konzentrierungs spule bei der Braunschen rohre. Arch. Elektrotech. (Berlin) 18,583-594. Champeix, R. (1946).La mesure des diffhrencesde potentiel de contact et du courant de saturation dans les tubes a vide utilisant des cathodes a oxydes. Ann. RadioPlecrr. 3,208-324. Champeix, R. (1960). “Physique et Technique des Tubes Electroniques,” Vol. 11, pp. 46-50. Dunod, Pans. Chevalier, J., and Galves, J. P. (1982). CRT’s with phosphor and impregnated cathodes for avionic displays. SID Dig. Tech. Pap., pp. 60-61. Child, D. C. (1911). Phys. Rev. 32, 312. Christiansen, P. (1983). Proc. SID 24/1,29-41. Cooper, H. G. (1961). High resolution electrostatic CRT. Bell Syst. Tech. J . DD, 723-739. Crookes, W. (1879). On the illumination of lines of molecular pressure and the trajectory of molecules. Philos. Trans. R. SOC.London 170, 135-164. Curie, D. (1960). “Luminescence Cristalline.” Dunod, Pans. Davisson, C. J., and Calbick, C. J. (1932). Phys. Rev. 42, 580. De Lange Dzn, H. (1954). Relationship between critical frequency and a set of low frequency characteristics of the eye. J. Opt. SOC.Am. 44,380-389. De Lange Dzn, H. (1957). Attenuation characteristics and Phase-shift characteristics of the human fovea cortex systems in relation to flicker-fusion phenomena. Thesis at the Technical University of Delft, the Netherlands. De Lange Dzn, H. (1978). Research into the dynamic nature of the human fovea-cortex systems with intermittent and modulated light. J . Opt. SOC.Am. 48,777-785. Deschamps, J. (1965). Perfectionnement aux tubes a rayons cathodiques comportant une lentille tlectronique quadrupolaire et un dispositif de post-acchltration. French Patent 1,445,405. De Wolf, D. A. (1982). Simplified analysis of kinescope electron guns. R C A Rev. 43, 339-362. Dietch, L., Palac, K., and Chiodi, W. (1986). Performance of high-resolution flat tension mask color CRT’s. SID Dig. 17,322-323. Electronic Industries Association (EIA) (1985). ”Optical Characteristics of Cathode Ray Tube Screens,” Formulated by EIA Tube Engineering Advisary Council, Publ. No. 116A. EIA, Washington, D.C. El-Kareh, A. B., and El-Kareh, J. C. J. (1970). “Electron Beams, Lenses and Optics,” Vols. 1 and 2. Academic Press, New York. Emberson, D. L., Caple, A,, Field, R. L., Jervis, M. M., and Smith, J. (1986). A thin flat high resolution CRT for datagraphics. 
SID Dig. 17, 288-231. Feldman, C. (1957). Bilayer bichromatic cathode screen. J. Opt. Soc. Am. 47, 790-794.
Franzen, N. (1983). A quadrupole scan expansion accelerating lens system for oscilloscopes. SID Dig. 14, 120-121. Calves, J. P. (1979). Multicolor and multipersistence penetration screens. Proc. SID 20 (2). Calves, J. P., and Brun, J. (1975). Color and brightness requirements for cockpit displays, a proposal to evaluate their characteristics. AGARD Con$ Proc. AGARD-CP-167, Lect. No. 2. Gewartowski, J., and Watson, H. (1965).“Principles of Electron Tubes.” Van Nostrand-Reinhold, Princeton, New Jersey. Giorgi, T. A., and Della Porta, P. (1972). “Residual Gases in Electron Tubes.” Academic Press, London. Good, W. (1976),Projection television. Proc. S I D 17 (3). Grum, F. C., and Bartleson, C. J., eds. (1980). “Optical Radiation Measurements,” Vol. 2. Academic Press, New York. Haas, G. A. (1961). “Thermionic Electron Sources,” NRL Rep. No. 5657. U S . Nav. Res. Lab., Washington, D.C. Haine, M. E. (1957). A contribution on the triode system of the cathode ray tube electron gun. J . Br. lnst. Radio Eng. 4, 211-216. Hasker, J. (1965). Transverse velocity selection affects the Langmuir equation. Philips Res. Rep. 20,34-47. Hasker, J. (1966a). The influence of initial velocities on the beam current characteristics of electron guns. Philips Res. Rep. 21, 122-150. Hasker, J. (1966b). Calculation of lens action and lens defects of the post-acceleration region in oscilloscope tubes. Philips Res. Rep. 21, 196-209. Hawes, J. D., and Nelson, G. (1978). Mesh PDA lenses in cathode ray tubes. Proc. SID 19 (I), 17-21. Hawkes, P. W., ed. (1973).“Image Processing and Computer-Aided Design in Electron Optics.” Academic Press, New York. Hensley, E. B. (1955). Annu. Conj. Phys. Electron., Top. Con$ Am. Phys. SOC.,ISth, 1955, p. 18. Hernnann, G., and Wagener, S. (1951).“The Oxide Coated Cathode,” Vols. 1 and 2. Chapman Hall, London. Hittorf, W. (1869). Uber die Elektricitatsleitungder Base. Ann. Phys. (Leipzig)12) 136, 1-31. Howe, R. L., and Welham, B. H. (1979). Developments in plastic optics for projection television systems. Pap., IEEE Chicago Fall Con$ Consum. Electron October 1979. Hutter, R. G. E. (1974). The deflection of electron beams. “In Advances in Image Pick-up and Display” ( B . Kazan ed.), Vol. 1, pp. 163-224. Academic Press, New York. Inoue, F., et al. (1983).A new beam index color electronic viewfinder. Jpn. Display Dig., pp. 128131. International Electrochemical Commission (1975). 2nd ed., Publ. No. 151-14. IEC 1 rue de Varernbe, Geneva, Switzerland. International Electrochemical Commission (1978).Proposal No. 39 1 rue de Varembe, Geneva, Switzerland. Janko, M. (1974). Space charge effects in a CRT with post deflection acceleration. IEEE Trans. Electron Deoices ED-21 (3), 184-189. Janko, B. (1979). A new high speed CRT. Proc. SID 20 (2). Jenkins,R. 0.(1969). A review of thermionic cathodes. Vacuum 19 (8), 353-389. Jones, A. H. (1968).J . SMPTE 77, 108. Judd, D. B., and Wyszecki, G. (1975).“Color in Business Science and Industry,” 4th ed. Wiley, New York. Kaashoek, J. (1968). A study of magnetic deflection errors. Thesis, Technical University, Eindhoven. Kelly, D. H. (1971). Theory of flicker and transient responses. J . Opt. SOC.Am. 61 (4).
Kikuchi, M., Kobayashi, K., Chiba, T., and Fujii, Y. (1981). A new coolant sealed CRT for projection color TV. IEEE Trans. Consum. Electron. CE27 (3). Klemperer, 0.(1953).“Electron Optics,” 2nd ed. Cambridge Univ. Press, London and New York. Klemperer, O., and Barnett, M. E. (1971). “Electron Optics,” 3rd ed., pp. 331-339. Cambridge Univ. Press, London and New York. Knoll, M. I., and Ruska, E. (1932). Bietrag zur geometrischen elecktronenoptik. Ann. Phys. (Leipzig) 12 (5), 607-640. Krupka, D. C., and Fukui, H. (1973). The determination of relative flicker frequency of raster scanned CRT displays by analysis of phosphor persistence characteristics. Proc. S I D 14 (3). Langmuir, B. (1937). Theoretical limitations of cathode ray tubes. Proc. IRE 25 (8). Langmuir, B., and Compton, K.T. (1931). Rev. Mod. Phys. 3,300-344. Lauer, R. (1982). Characteristics of triode electron guns. Adv. Opt. Electron Microsc. 8, 137-206. Lehmann, M. (1971).On the optimum efficiency of cathodoluminescence of inorganic phosphors. J . Electrochem. SOC. 1-18, 1164. Lehrer, N. (1974). Application of the laminar flow gun to cathode ray tubes. S I D J.,Mar./Apr., pp. 7-12. Leverenz, H. W. (1950). “Luminescence of Solids.” Wiley, New York. Levine, A. K., and Palilla, F. C. (1965). YVO,:EU, a new red emitting phosphor for color television. Trans N.Y. Acad. Sci. [2] 27 (5), 517-529. Loty, C. (1966). Large bandwidth CRT’s. Acta Electron. 10, 351-361, 399-409. Loty, C. (1983). A 7 GHz for real time digital oscilloscopy. S l D Dig. Tech. Pap. Loty, C., and Dreano, J. P. (1970). Cathode ray tube with microchannel plate amplifier and quadrupolar lens deflection amplification. French Patent 7,037,647. Marie, G. (1982).Television projection systems, present situation and future prospects. Onde Elect. 62 (3), 88-98. Martin, A., and Deschamps, J. (1971). A short length rectangular oscilloscope tube with high deflection sensitivity by using an original technique. Proc. S I D 12 (1). Martin, A., and Galves, J. P. (1982). Ecrans multifonction pour tubes a rayons cathodiques. Rev. Thomson-CSF 14 (4), 957-988. Merloz, P. (1978). Thomson-CSF Tech. Rep. 155. Miller, V. (1963). The application of divergent lenses in oscilloscope cathode ray tubes. Radio Electron. Phys. 8 (5), 883-886. Morrell, A. M., Law, H. B., Ramberg, E. G., and Herold, E. W. (1974).“Color Television Picture Tubes.” Academic Press, New York. Moss, H. (1968).“Narrow Angle Electron Guns and Cathode Ray Tubes.” Academic Press, New York. Nishino, T., and Maeda, H. (1973). Fiber optics traveling-wave CRT’s for recording fast single subnanosecond transients. Electron. Commun. Jpn. 56B (5). Odenthal, C. (1977).A box shaped scan expansion lens for an oscilloscope CRT. S I D Dig. Tech. Pap., pp. 134-135. Oess, F. G . (1976). Notes on evaluation of cathode emission quality. Clinton Electron. Tech. Rep. Oess, F. G. (1979). CRT considerations for raster dot alphanumeric presentations. Proc. SID 20 (2). Ohkoshi, A., et al. (1981). A new 3 0 V beam index color cathode ray tube. IEEE Trans. Commun. Electron. CE-27,433-442. Ohkoshi, A., et al. (1983). Current-sensitive multicolor CRT display. Proc. S I D 24 (2), 144-149. Olden, R. G. (1957). A thin-window cathode ray tube for high-speed printing with “Electrofax.” RCA Rev. 18. Ozawa, L., and Hersh, H. (1974). Optimum arrangement of phosphor particles in cathode ray picture tube screens. J . Electrochem. Soc. 121 (7), 894-904.
Partlow, W. D. (1972). Selectively absorbing filters for contrast enhancement of light sources. Appl. Opt. 11 (7), 1491-1494. Petrov, I. A. (1976). First order electron optics of a single crossed lens. Soviet Phys. Tech. Phys. (Engl. Transl.) 21 (S), 640-642. Pfahnl, A. (1961).In “Advances in Electron Tube Technique” (Slater ed.), Vol. 1, Conf. no. 5, pp. 204-208. Pergamon, Oxford. Pierce, J. (1954). “Theory and Design of Electron Beams,” 2nd ed. Van Nostrand-Reinhold, Princeton, New Jersey. Pieri, R. (1978).“DCviateurs pour applications professionnelles,” Tech. Doc. Orega-Videocolor 1978, Sect. 111, pp. 7-15. Ploke, M. (1951). Elementare, Theorie der Elektronenstrahlerzeugung mit Triodensystem-Part I. Zeitschr. Angew. Phys. 12,28-31. Ploke, M. (1952). Elementare theorie der Elektronenstrahlerzeugung mit Triodensystem-Part 11. Zeitschr. Angew. Phys. 1, 1-4. Plucker, J. (1858). fllber die Einwirkung der Magneten auf die elektrischen entladungen inverduunten Gasen. Ann. Phys. (Leipzig) [2] 103,88-106. Pommier, C. (1974). Thomson-CSF Technical Specification, No. 5oooO Electron Tube Division, 38 rue Vauthier, Boulogne-Billancourt, France. Pommier, C. (1977). Thomson-CSF Technical Specification, No. 50053 Electron Tube Division, 38 rue Vauthier, Boulogne-Billancourt, France. Pritchard, D. H. (1965). Penetration color screen-color tube- and color television receiver. US. Patent 3,204,143. Rittner, E. S., Ahlert, R. H., and Rutledge, W. C. (1957). On the mechanism of operation of the barium aluminate impregnated cathode. J . Appl. Phys. 28 (12), 1468-1473. Robinder, R., Bates, D. J., Green, P. J., Lewen, G. D., and Rath, D. R. (1983). A high-brightness shadow-mask color CRT for cockpit displays. SID Dig. 14.72-73. Rogowitz, B. E. (1983).The human visual system: A guide for the display technologist. Proc. S I D 24 (3), 235-251. Say, D. L., Jaeger, R. B., and Ryan, J. D. (1975). Electron optics for CRT’s computer modeling. IEEE Trans. Electron Devices ED-21 (I), 0057-0063. Schmidt, K., Temple, M., and Seddon, I. (1977). Viewability improvement of CRTdisplays. OCLI Eng. Rep. No. 77003, pp. 1-5. Schwartz, J. W. (1957). Space charge limitation on the focus of electron beams. RCA Rev. 18, 1-11. Seats, P. (1976).Fundamentals ofcathode ray tubes. SID Dig. 7 , 172-173. Shafer, D., and Turnbull, J. (1981). The emission carbonate crystallite and oxide cathode performance in electron tube. Appl. Surf Sci. 8,225-230. Sherr, S. (1970).“Fundamentals of Display System Design,” p. 69, Wiley (Interscience),New York. Shroff, A. M., and Palluel, P. (1982). Les cathodes impregnbes. Rev. Tech. Thomson-CSF 14 (3), 583-655. Silzars, A. (1978).Traveling-wave deflectors for high speed electron beam modulation. Proc. SID 19 (l), 9-16. Silzars, A,, and Knight, R. (1972). The deflection sensitivity of traveling-wave electron beam deflection structures. IEEE Trans. Electron Deoices ED-19, 1222-1224. Sisneros, T. E. (1967). Appl. Optics 6,417. Soller, T., Starr, M. A., and Valley, G. E. (1948).“Cathode Ray Tube Displays,” MIT Radiat. Lab. Ser., pp. 63-64. McGraw-Hill, New York. Spangenberg, K. R. (1948).“Vacuum Tubes.” McGraw-Hill, New York. Sudo, M., Ashiya, R., Vba, T., Murata, A., and Amano, Y. (1986). High resolution 20V” x 20V” Trinitron and monochrome CRT. SID Dig. 17,338-341.
Takano Y. (1983). In “Advances in Image Pick-Up and Display” (B. Kazan, ed.), Vol. 6, pp. 5468. Academic Press, New York. Tecotzky, M. (1976). General trends in phosphor development, Tech. Con$ SID Mid Atlantic Chapter April 1976, New Yvrk. Tecotzky, M., and Mattis, J. J. (1971). A new phosphor approach for multicolor display. J . Electrochem. Soc. 118, C22. Thomson, J. J. (1897). Cathode rays. Proc. R. Inst. 15,419-432. Todd, L., and Starkey, C. (1977). High brightness, high resolution projection CRT. Int. Electron Devices Conf Tech. Dig., pp. 80A-80D. Turnage, R. E. (1966).The perception of flicker in cathode ray tube displays. InJ Disp. 313,38-52. Turner, J. A. (1979).An electron beam indexing color display system. Displays 1, 155-158. Von Ardenne, M. (1935). Negative lonenstrahlen bei der Formierung von Hochvakuum Elektronenstrahlrohren. Arch. Elektrotech. (Berlin)29,73 1-732. Von Ardenne, M. (1940).“Elektron Obermikroskopie.” Springer-Verlag, Berlin and New York. Vonk, R. (1972).Deviation magnetique dans les tubes image de television. Philips Tech. Rev. 32 (3), 35-48. Weber, C. (1967). Analog and digital methods for investigating electron optical systems. Philips Res. Rep. Suppl. No. 6, pp. 1-84. Wehnelt, A. (1904). Ober der austritt negative ionen aus gliihenden Metallverbindungen und damit zusammenhangende Erscheinenungen. Ann. Phys. (Leipzig) [4] 14,425-468. Weiss, H. (1969).Optimum spot size of a scanned raster display. Inf: Disp. 6,51-52. Wendt, G. (1952). “SystBmes de deviation ilectronique et leurs aberrations,” Commun. SOC. Radioilectr. Zalm, P., and Van Stratum, A. J. A. (1966). Osmium dispenser cathodes. Philips Tech. Rev. 27, 69-75.
Open-Circuit Voltage Decay in Solar Cells

V. K. TEWARY*
Birla Institute of Technology and Science, Pilani-333031, India
S. C. JAIN†
Solid State Physics Laboratory, Delhi-110007, India
I. INTRODUCTION

Starting from the classic paper of Gossick (1955), open-circuit voltage decay (OCVD) has been a popular technique for the measurement of excess carrier lifetime in p-n-junction diodes. In this technique, first a diode is forward biased for a long enough duration so that it attains a steady state. The forward current is then switched off. Keeping the diode in open circuit, the voltage decay is observed on a CRO. The decay curve in most cases contains a linear region. The slope of this region is, as shown by Gossick (1955), (kT/q)(1/τ_b) (for notations, see the Appendix), from which τ_b can be determined. The physical principle of OCVD can be understood as follows. The effect of a forward current in a diode is to inject excess carriers. The excess carrier concentration (ECC) at any point is determined by the balance of the following contributions: (1) excess carrier generation, (2) recombination of carriers, and (3) diffusive motion of carriers. For the moment we neglect the contributions of the drift field and the end surfaces of the diode. In steady state the space distribution of the ECC, i.e., the excess carrier profile (ECP), does not change with time. When the injection of excess carriers is switched off, the ECC at each point will decay to zero. Eventually the carrier concentration would assume its thermal equilibrium value. Initially, when the decay starts, the ECC as a function of time is determined by the diffusive motion of the carriers and their
* Present address: Institute for Materials Science and Engineering, United States Department of Commerce, National Bureau of Standards, Gaithersburg, Maryland 20899.
† Present address: Department of Electrical and Computer Engineering, University of Arizona, Tucson, Arizona 85721.
recombination. The diffusive motion depends upon the ECC gradient and therefore on the steady-state ECP. After a while the ECP will even out and the diffusive motion becomes negligible. Then the carriers decay by recombination only. The recombination rate is equal to ECC/τ_b (see, for example, Shockley, 1950). Thus the ECC can be shown to have an exponential dependence on time, viz. exp(−t/τ_b). The junction voltage, according to the Shockley relation, is proportional to the log of the ECC at the junction. Hence the voltage will decay linearly with time and the slope of the linear region will be proportional to 1/τ_b. Obviously, in this region, the decay rate will be independent of the starting ECP, which mainly determines the diffusive motion of carriers. A p-n-junction solar cell is also a diode. In a solar cell the excess carriers can be injected electrically by a forward current or optically by illuminating it. The ECP in the steady state will obviously be different in the two cases (see, for example, McKelvey, 1966). The decay will start when the injection is stopped by switching off the forward current or the illumination, as the case may be. From what has been described in the preceding paragraph, initially the diffusive contribution will be significant, which depends upon the shape of the ECP. The initial shape of the decay curve will therefore depend upon whether the excess carriers were generated electrically or optically. The later part of the curve, which signifies decay mainly by recombination, will be independent of the manner in which the excess carriers were injected in the steady state. Thus OCVD can also be induced optically in a solar cell, from which τ_b can be measured. The excess carrier lifetime τ_b is an important material parameter of a solar cell. It determines the energy conversion efficiency of solar cells. This explains the strong topical interest in the OCVD technique. In addition, the OCVD technique is used in studying the switching behavior of diodes and solar cells and in the design of electronic pulse circuits, charge storage devices, and varactor diodes for frequency multiplication. In this article our interest will be confined to the theory of the OCVD technique for the determination of excess carrier lifetime in p-n-junction single-crystal solar cells. We shall discuss OCVD obtained by electrical as well as optical injection of excess carriers. The former will be referred to as forward-current-induced voltage decay (FCVD) and the latter as photovoltage decay (PVD). Henceforth, OCVD will refer jointly to FCVD and PVD. Obviously in an ordinary diode only FCVD is possible, whereas both FCVD and PVD can be observed in solar cells. In the present article we shall use the general word "device" to refer only to diodes and solar cells and no other devices. Although our main interest is in solar cells, we shall discuss some recent work on FCVD in diodes in so far as it is relevant to the determination of τ_b in solar cells. Several review articles and monographs on FCVD in diodes are already available in the literature (see, for
example, Nosov, 1969). A glance at the table of contents will indicate the scope of the present article.
A. Structure of a Solar Cell
A solar cell has a diffused layer of p type or n type on top of an n- or p-type (respectively) base. Light enters from the top of the solar cell, through the diffused layer, to the base (see Fig. 1). The excess carriers (both p and n type) are created throughout the solar cell, in the diffused layer as well as the base. The diffused layer is also called an emitter. There is a space-charge layer between the emitter and the base. However, in this article we have neglected the effect of the space-charge layer and assumed that the solar cell has a planar abrupt p-n junction (see Section II.A). The top of the emitter, viz. the front surface of the solar cell, has a grid electrical contact to allow light to enter the solar cell. The back surface of the base has a full ohmic contact. Modern solar cells also have a low-high junction (n-p-p⁺ or p-n-n⁺) in the base, which gives the so-called back surface field (BSF). In such cases the base of the solar cell may be taken to extend from the p-n junction to the low-high junction. The structure of an ordinary diode is similar to that of a solar cell except that there is no need for a grid contact at the top of the emitter. Usually both the front and the back surfaces of a diode have full ohmic contacts. Usually, the emitter in a modern device is heavily doped. The doping concentration may be of the order of 10¹⁹ to 10²⁰ cm⁻³. The base, on the other hand, is lightly doped, the doping concentration being about 10¹⁵ to 10¹⁶ cm⁻³. In a forward-biased diode or an illuminated solar cell in steady state, excess carriers are stored in the emitter, the space-charge layer, and the base. When the steady state is interrupted by switching off the forward bias or the illumination, the excess carriers in all the three regions of the device, emitter,
[Fig. 1: schematic cross section of the device; the labels legible in the original figure are LIGHT (incident from the top), EMITTER (heavily doped), and BASE.]
space-charge layer, and base will decay. The effect of the space-charge layer is usually negligible except at low voltages (Sah et al., 1957). In a typical modern silicon diode or solar cell the effect of the space-charge layer is negligible unless the junction voltage is lower than 0.4 V. In this article we shall not be interested in such low levels of injection. We shall therefore not discuss the effect of the space-charge layer. Unless otherwise obvious by context, in this article when we refer to low levels of injection it would mean that the level of injection is low enough for high injection effects to be neglected but not low enough for space charge effects to be significant. Because of high doping in the emitter, the saturation current of the emitter is expected to be much less than that of the base. In such cases the injection efficiency of the emitter is close to unity and the characteristics of the device are determined from the base alone. Such devices are called base-dominated devices (solar cell or diodes). However, it has been found that the heavy nonuniform doping in the emitter causes band-gap narrowing which increases the saturation current of the emitter. The saturation current is further increased because carrier lifetime in the emitter is reduced due to processes like Auger recombination. Consequently, the effect of emitter cannot be ignored. In such cases OCVD becomes quite sensitive to the coupling of the ECP in the emitter and the base. This so-called p-n coupling is discussed in Sections IV and V at low injections and in Section VI at high injections.
B. Experimental Technique
The main objective of this article is to review the theory of OCVD in solar cells and the application of this theory to the interpretation of experimental data with a view to determining τ_b. However, in order to give an idea of the nature of the OCVD experiments, a brief description of the experimental technique is given below. For details, reference may be made to the original papers quoted in the text (also see Nosov, 1969, for FCVD). The schematic circuit diagrams for FCVD and PVD are shown in Figs. 2 and 3, respectively. In FCVD experiments, a pulse generator is used to provide
[Figure 2 shows a power supply with series resistance R_s connected through a relay to the solar cell, whose terminal voltage is monitored via a unity-gain buffer on an oscilloscope.]
FIG. 2. Schematic circuit diagram for the forward-current-induced voltage decay in a diode/solar cell.
[Figure 3 shows a stroboscope illuminating the solar cell, whose terminal voltage is monitored via a unity-gain buffer on an oscilloscope.]
FIG. 3. Schematic circuit diagram for photovoltage decay in a solar cell.
a rectangular pulse. The length of the pulse should be long enough so that the device attains a steady state of forward bias before the injection is switched off. In PVD, the light pulses can be generated by electronic stroboscopes or lasers. Both FCVD and PVD patterns are monitored on a CRO. In PVD experiments using lasers, which give a single pulse, the PVD pattern has to be observed on a storage oscilloscope.
II. BASIC FORMULATION
In this section we shall describe the model of a p-n-junction device on which the subsequent discussion is based. We shall also quote the ambipolar diffusion equation and steady-state solutions in some specific cases which will be used in the calculation of OCVD in later sections. Excellent reviews and monographs are available in the literature in which the steady-state characteristics of diodes and solar cells have been described in detail (see, for example, Moll, 1964; Shockley, 1950; McKelvey, 1966; Sze, 1981; Hovel, 1975; Neville, 1980; Fahrenbuch and Bube, 1983).
A. Model of a p-n-Junction Device
We consider a planar abrupt junction model of a p-n-junction device as shown in Fig. 1. In this model the space-charge effects are neglected. The space-charge effects are relatively insignificant except at low levels of injection (see, for example, Shockley, 1950). We shall also assume that the quasi-neutrality approximation (Shockley, 1950) is valid. According to this approximation the excess carrier concentrations of the two types, viz. n and p, are exactly equal everywhere and at all times. This is a useful and certainly conventional approximation, though not rigorously valid (see, for example, Shockley, 1950; McKelvey, 1966). We further assume that the injection is uniform on any plane parallel to the junction. We thus treat the device as essentially a one-dimensional structure as
shown in Fig. 1. This figure also shows the coordinate axes. The origin is taken at the junction. The x axis is taken to be normal to the junction. The front surface is at x = −d_e, whereas the back surface is at x = d_b. The subscripts e and b refer to the emitter and the base, respectively. We shall model the two end surfaces by defining a surface recombination velocity (SRV). An ohmic contact will be represented by SRV = ∞. The effect of the BSF at the low-high junction can be modelled by defining an effective SRV at the back surface of the base, i.e., the low-high junction. In the ideal case the SRV at this junction can be assumed to be zero. We shall refer to such a device as an ideal BSF device, in which the SRV at the back surface of the base will be taken to be zero. A device with an ohmic contact at the back surface of the base (SRV = ∞) will be referred to as a conventional device. In some cases d_e/L_e and d_b/L_b are much larger than unity. In such cases the thickness of the emitter and the base may be assumed to be infinite. Such a diode will be referred to as a thick diode. It may be remarked that in a solar cell, even if d_e/L_e ≫ 1, it is not possible to approximate the thickness of the emitter to infinity. This is because the absorption of light in the solar cell is generally exponential, and hence an infinitely thick emitter would imply that no light can reach the junction and the base. In a solar cell, therefore, we shall always assume the emitter to have a finite thickness. The base, however, can be approximated to be infinitely thick if d_b/L_b ≫ 1. Such a solar cell will be referred to as a "thick" solar cell.
B. Ambipolar Diffusion Equation
In this section we shall present the ambipolar diffusion equation for excess carriers. The ambipolar equation is derived by combining the diffusion equations for the two types of carriers and using the quasi-neutrality approximation. The ambipolar equation does not refer to either of the two types of carriers in isolation but refers to both. For the model described in the preceding section, the ambipolar equation in the emitter is of the following form:
D_e* ∂²q_e/∂x² − μ_e* E_e ∂q_e/∂x − q_e/τ_e* + g_e = ∂q_e/∂t     (1)
where the ambipolar diffusion coefficient D_e* and the ambipolar mobility μ_e* are given by Eqs. (2) and (3),
and the excess carrier lifetime τ_e* is defined by
q_e/τ_e* = q_e (q_e0 + q_em + q_e) / [τ_e0 (q_em + q_e) + τ_em0 (q_e0 + q_e)]     (4)
The ambipolar equation for the base is similar to Eq. (1) with the subscript e replaced by b. The ambipolar equation is a nonlinear equation. However, as we shall see below, it becomes a linear equation in the low and high injection limits.
1. Low Injection Limit
In this limit q_e ≪ q_em. For a strongly extrinsic semiconductor, q_e0 ≪ q_em. In this case, we obtain from Eqs. (2) and (3)
D_e* = D_e     (5)
and
μ_e* = μ_e     (6)
Further, in the low injection approximation τ_e is nearly equal to τ_e0 and therefore, from Eq. (4),
τ_e* = τ_e     (7)
Thus we see that, in the low injection approximation in a strongly extrinsic semiconductor, the ambipolar diffusion equation reduces to the linear diffusion equation for the minority carriers.
2. High Injection Limit
The high injection effects become significant when the excess carrier concentration becomes comparable to the thermal equilibrium value of the majority carrier concentration. The high injection limit is defined by q_e ≫ q_em. In this limit, Eqs. (2) and (3) reduce to their intrinsic-semiconductor values: D_e* becomes the ambipolar diffusion coefficient of an intrinsic sample [Eq. (8)] and
μ_e* = 0     (9)
The excess carrier lifetime τ_e* has a rather complicated dependence on the level of injection (see, for example, Baliga, 1981). Moreover, at high injections certain other processes such as the Auger process make significant contributions to the excess carrier lifetime (see, for example, Gandhi, 1977). We shall not discuss the dependence of τ_e* on q_e in this article. We only define τ_e∞ as the high injection limit of τ_e* so that in this limit
τ_e* = τ_e∞     (10)
From Eqs. (8) and (9) we see that the ambipolar quantities reduce to their values for intrinsic semiconductors (i.e., for q_e0 = q_em) in the high injection limit. This is of course physically expected because when the excess carrier concentration is much larger than the doping concentration of the semiconductor, the physical quantities cannot be sensitive to the nature and the level of doping. The semiconductor will therefore behave as an undoped, i.e., an intrinsic, semiconductor. It may also be noted that in the high injection limit, Eq. (1) becomes a linear diffusion equation again.
C. Boundary Conditions
The ambipolar equation, i.e., Eq. (1), is a second-order differential equation which gives the excess carrier profile, i.e., q_e(x, t), in the emitter. A similar equation would give the excess carrier profile q_b(x, t) in the base. However, in order to specify the solution, i.e., the ECPs, we have to prescribe certain boundary conditions. These boundary conditions may specify the boundary value of the function (Dirichlet type) or its derivative (Neumann type) or their combination. In the present abrupt junction model as shown in Fig. 1, the space-charge layer has been neglected. The ECPs q_e(x, t) and q_b(x, t) are therefore directly coupled at the junction x = 0. The coupling, as we shall see below, arises from the boundary conditions. In order to fix our ideas, first we consider the steady state. However, the boundary conditions discussed below will, in general, also apply to the transient state. In the steady state, by definition, the time derivative of q_e, i.e., the right-hand side of Eq. (1), is zero. The total number of boundary conditions required in the steady state is obviously four; two for q_e(x) in the emitter and two for q_b(x) in the base. We can specify one boundary condition at the front surface x = −d_e, one at the back surface x = d_b, and two at the junction x = 0. These are described below.
1. Boundary Condition at the Front Surface (x = −d_e)
dq_e/dx = S_e q_e − ηN_0     (11)
This boundary condition was derived by Tewary and Jain (1980). The second term on the right-hand side of Eq. (11) accounts for generation of carriers at the front surface. This term contributes only in the case of an illuminated solar cell. In the case of electrical injection of carriers in a diode or a solar cell with no incident light, N_0 = 0. In this case, as well as when η = 0, Eq. (11) reduces to the conventional boundary condition (see, for example, Shockley, 1950; McKelvey, 1966; Sze, 1981). In the case of an ohmic contact, as in a diode, S_e = ∞ and Eq. (11) reduces to
q_e(−d_e) = 0     (ohmic contact)     (12)
2. Boundary Condition at the Back Surface (x = d_b)
dq_b(d_b)/dx = −S_b q_b(d_b)     (13)
The generation of carriers at the back surface, i.e., the term corresponding to the second term in Eq. (11), can be neglected in this case. For a conventional solar cell or a diode with no BSF and an ohmic contact at the base, S_b = ∞. In this case, as in Eq. (12),
q_b(d_b) = 0     (conventional device)     (14)
In a solar cell or a diode containing a BSF, the back surface is modeled by assigning an effective value to S_b. An ideal BSF device is defined by S_b = 0, so that in this case
dq_b(d_b)/dx = 0     (ideal BSF device)     (15)
3. Boundary Conditions at the p-n Junction (x_e = x_b = 0)
We need to specify two boundary conditions at the p-n junction. One of them, called the current condition, relates the gradients of the excess carrier profiles at the junction to the total current through the junction. The other boundary condition is called the voltage condition and relates the heights of the excess carrier profiles at the junction to the voltage at the junction or the terminals of the device. In practice, it is convenient to formally prescribe the following boundary conditions:
q_e(x = 0) = q_ej     (16)
and
q_b(x = 0) = q_bj     (17)
Thus the solutions of the diffusion equations in the emitter and the base regions are specified, respectively, in terms of q_ej and q_bj. These quantities are then determined using the voltage condition, which is given later in this section. The current-voltage relation can then be determined by using the current condition, which is given below.
4. Current Condition
The total current through the junction, J_j, is the sum of the emitter and the base currents at the junction. Thus
J_j = J_ej − J_bj     (18)
where the appropriate signs of J_ej and J_bj are to be taken depending upon the type of the carriers (p or n). The emitter and the base currents at the junction are given by the usual sums of diffusion and drift contributions [Eqs. (19) and (20)],
where E_ej and E_bj denote the values of the drift field in the emitter and the base at the junction x = 0. The appropriate signs of μ_e and μ_b are to be chosen depending upon the type of carriers (p or n). The current J_j is a measurable quantity and is determined from the experimental conditions. Thus Eq. (18) along with Eqs. (19) and (20) provides one of the boundary conditions at the junction. We shall be particularly interested in the open-circuit case, i.e., when J_j = 0. In this case the required boundary condition is given by
J_ej = J_bj     (open-circuit condition)     (21)
5. Voltage Condition
The voltage condition relates the excess carrier concentration at the junction on its either side (emitter and the base) to the voltage in the device. Here, one needs to distinguish between the voltage at the junction and the terminals of the device. The terminal and the junction voltage are, in general, different because of the voltage drops in the quasi-neutral regions of the device. This voltage drop arises primarily due to the ohmic resistance of the quasi-neutral regions when a current is flowing and the Dember effect which arises due to the difference in the mobilities of the two types of carriers. For a detailed discussion of these contribution, see, for example, McKelvey (1966). There has been considerable confusion and controversy in the literature regarding the voltage condition to be used (see, for example, Hauser, 1971;
Heasell, 1979; Nussbaum, 1969, 1975, 1979; Guckell et al., 1977; Scharfetter et al., 1963; Van der Ziel, 1966; Van Vliet, 1966). We shall not attempt to resolve this controversy here but only report that it is now generally believed that the junction voltage is described by the Fletcher conditions (Fletcher, 1957; see also Dhariwal et al., 1976), whereas the terminal voltage is described by the Misawa conditions (Misawa, 1956; Hauser, 1971). These are quoted below.
6. Fletcher Conditions
7. Misawa Conditions
These conditions are quite complicated in the case when a current is flowing through the device. However, in the open-circuit case and using the charge balance approximation, the Misawa conditions can be written in the following form:
q_ej² + q_em q_ej = q_bj² + q_bm q_bj = q_i²[exp(qV_j/kT) − 1]     (24)
In most cases of practical interest the unity in the square brackets on the extreme right of the above equation is negligible. We thus obtain
q_ej = [q_i²{exp(qV_j/kT) − 1} + q_em²/4]^(1/2) − q_em/2     (25)
and
q_bj = [q_i²{exp(qV_j/kT) − 1} + q_bm²/4]^(1/2) − q_bm/2     (26)
In the extreme high injection limit, the Misawa conditions reduce to the following simple form, which has been used frequently in the literature:
q_ej = q_bj = q_i exp(qV_j/2kT)     (high injection limit)     (27)
8. Low Injection Limit
Fortunately, in the limit of low injections, the situation is quite clear. Both the Fletcher and the Misawa conditions reduce to the well-known Shockley boundary condition as given below:
q_ej = q_e0[exp(qV_j/kT) − 1]     (28)
and
q_bj = q_b0[exp(qV_j/kT) − 1]     (29)
where q_e0 and q_b0 denote the equilibrium minority carrier concentrations in the emitter and the base, respectively.
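If the Shockley condition of Eqs. (28) and (29) is accepted, the junction excess concentration and the junction voltage are interchangeable. The following sketch evaluates the relation in both directions; all numerical values (temperature, equilibrium concentration) are assumed only for illustration.

import math

kT_over_q = 0.02585          # volts at 300 K (assumed)
q_b0      = 1.0e4            # assumed equilibrium minority concentration in the base, cm^-3

def q_bj(V_j):
    """Excess carrier concentration at the junction, as in Eq. (29)."""
    return q_b0 * (math.exp(V_j / kT_over_q) - 1.0)

def V_junction(q):
    """Inverse relation: junction voltage for a given excess concentration."""
    return kT_over_q * math.log(1.0 + q / q_b0)

print(f"q_bj(0.5 V) = {q_bj(0.5):.3e} cm^-3")
print(f"V_j back    = {V_junction(q_bj(0.5)):.3f} V")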
In the low injection limit the Dember voltage is negligible. The difference between the terminal and the junction voltage arises mainly due to the ohmic drop in the quasi-neutral regions, so that
V_t = V_j ± J_j R_s     (30)
In the open-circuit case J_j = 0, so that in the low injection limit
V_j = V_t     (31)
Even at fairly high injections, the Dember voltage is only a small percentage of V_j, so that Eq. (31) may be taken as a reasonable approximation (McKelvey, 1966). In the time-dependent case the boundary conditions given above at the two surfaces as well as at the junction have to be obeyed at all times. In addition, the excess carrier profiles q_e(x, t) and q_b(x, t) have to be specified for all values of x at a certain instant of time. In later sections, we shall be interested in transients starting at t = 0 when the steady state persisted until t = 0. We therefore prescribe initial conditions as follows:
q_e(x, t = 0) = q_e(x)     (32)
and
q_b(x, t = 0) = q_b(x)     (33)
where q_e(x) and q_b(x) on the right-hand sides denote the steady-state excess carrier profiles.
D. Solution of Steady-State Diffusion Equation in a Diode
In this section we shall briefly describe the solution of the linear diffusion equation for a diode in the steady state of electrical injection. In what follows we shall assume the drift field to be constant with respect to x. It implies that the doping profile is exponential. In practice doping profiles are rarely such. However, for practical purposes a constant drift field is a reasonable and certainly convenient approximation. In many devices the drift field can also be assumed to be zero. We write the diffusion equations in the emitter and the base in terms of the dimensionless variables X_e and X_b as follows:
Emitter (−W_e < X_e ≤ 0):
d²q_e/dX_e² + 2F_e dq_e/dX_e − q_e + g_e* = 0     (34)
Base (0 ≤ X_b ≤ W_b):
d²q_b/dX_b² + 2F_b dq_b/dX_b − q_b + g_b* = 0     (35)
Equations (34) and (35) can be obtained from the ambipolar equations in the emitter and the base regions [for example, Eq. (1) for the emitter] by taking their low injection limits and transforming to the dimensionless variables X_e and X_b, respectively. In the case of electrical injection in a diode, the excess carriers are injected at the junction. The generation terms g_e* and g_b* are taken to be zero everywhere. The injection of excess carriers at the junction is accounted for by using the voltage condition. The corresponding equations in the high injection limit will have the same form as Eqs. (34) and (35) but with redefined variables X_e and X_b and no drift field term. We shall first give the solutions of Eqs. (34) and (35) for F_e = F_b = 0. The solutions for nonzero values of F_e and/or F_b can be obtained from these solutions by using a simple transformation, which is given at the end of this section.
1. Thick Diodes (W_e,b = ∞)
Using the boundary conditions as given by Eqs. (12), (14), (16), and (17) we obtain
q_e(X_e) = q_ej exp(X_e)     (−∞ ≤ X_e ≤ 0)     (36)
and
q_b(X_b) = q_bj exp(−X_b)     (0 ≤ X_b ≤ ∞)     (37)
The current-voltage (I-V) relation can be easily obtained by using Eqs. (18), (28), and (29) and is
J_j = J_0[exp(qV_j/kT) − 1]     (38)
where J_0 = J_e0 + J_b0 is the total saturation current, the emitter and base saturation currents J_e0 and J_b0 being given by Eqs. (39) and (40). Equation (38) is the well-known Shockley relation for a diode. In fact, this relation is obeyed individually by the emitter and the base currents, which can be verified by using Eqs. (19), (20), (28), and (29) for zero drift field, which gives
J_ej/J_e0 = J_bj/J_b0 = exp(qV_j/kT) − 1     (41)
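The thick-diode results (36)-(38) and (41) are straightforward to evaluate numerically. The sketch below does so for assumed values of the equilibrium minority concentrations and the junction voltage; the numbers have no significance beyond illustration.

import numpy as np

kT_over_q = 0.02585                      # volts (assumed 300 K)
V_j = 0.45                               # assumed junction voltage
q_e0, q_b0 = 2.0e2, 1.0e4                # assumed equilibrium minority concentrations, cm^-3

q_ej = q_e0 * (np.exp(V_j / kT_over_q) - 1.0)    # Eq. (28)
q_bj = q_b0 * (np.exp(V_j / kT_over_q) - 1.0)    # Eq. (29)

X = np.linspace(0.0, 5.0, 6)             # dimensionless distance from the junction
q_e_profile = q_ej * np.exp(-X)          # Eq. (36), emitter side (distance measured into the emitter)
q_b_profile = q_bj * np.exp(-X)          # Eq. (37), base side

# Both normalized junction densities obey the same Shockley factor, Eq. (41).
print(q_ej / q_e0, q_bj / q_b0, np.exp(V_j / kT_over_q) - 1.0)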
2. Thin Diodes (W_e,b finite)
a. Ohmic Contacts at Both Ends. In this case also the relevant boundary conditions are given by Eqs. (12), (14), (16), and (17). The solutions are as
follows:
q_e(X_e) = q_ej sinh(X_e + W_e)/sinh W_e     (−W_e ≤ X_e ≤ 0)     (42)
and
q_b(X_b) = q_bj sinh(W_b − X_b)/sinh W_b     (0 ≤ X_b ≤ W_b)     (43)
The I-V relation can be obtained by using Eqs. (18), (28), and (29) as in the preceding case for thick diodes. It can be verified that the form of the I-V relation is the same as in Eq. (38). However, the expressions for the saturation currents are modified [Eqs. (44) and (45)]. Thus, the effect of the finite thickness of the emitter and the base of a diode is to increase their saturation currents each by a factor equal to the hyperbolic cotangent of their dimensionless thickness (coth W_e and coth W_b, respectively).
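A one-line numerical check of this statement (the W values are assumed for illustration): the enhancement factor coth W tends to unity for W much larger than 1, recovering the thick-diode saturation current.

import numpy as np

# Ratio of the thin-diode saturation current to its thick-diode value,
# i.e. the hyperbolic-cotangent enhancement described above (W = d/L).
for W in (0.5, 1.0, 2.0, 5.0):
    print(f"W = {W}: J0(thin)/J0(thick) = coth(W) = {1.0 / np.tanh(W):.3f}")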
b. Ohmic Contact at the Front Surface and BSF in the Base. As described in Section II.C, we model the BSF in the base by defining an effective value of S_b. In this case the base is assumed to extend from the p-n junction to the low-high junction (i.e., the p-p⁺ junction in the case of an n-p-p⁺ diode). The relevant boundary conditions in this case are given by Eqs. (12), (13), (16), and (17). The solution in the emitter is the same as given by Eq. (42). The solution in the base is
q_b(X_b) = q_bj [cosh(W_b − X_b) + S_b* sinh(W_b − X_b)] / [cosh W_b + S_b* sinh W_b]     (0 ≤ X_b ≤ W_b)     (46)
where the dimensionless SRV S_b* is defined as
S_b* = S_b τ_b/L_b     (47)
In this case also the I-V relation has the same form as the Shockley relation given by Eq. (38). The effect of S_b* is to further modify the base saturation current in Eq. (45). As mentioned earlier, for an ideal BSF diode S_b = 0, whereas for a conventional diode with an ohmic contact at the back S_b = ∞. In the latter case Eq. (46) reduces to Eq. (43).
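Taking Eq. (46) in the form written above, the following sketch (with an assumed value of W_b) checks the two limits quoted in the text: for very large S_b* the profile approaches the sinh form of Eq. (43), while for S_b* = 0 it has zero slope at the back surface.

import numpy as np

def q_base(X, W_b, S_star):
    """Base excess carrier profile of Eq. (46), normalized to q_bj (sketch)."""
    num = np.cosh(W_b - X) + S_star * np.sinh(W_b - X)
    den = np.cosh(W_b) + S_star * np.sinh(W_b)
    return num / den

X = np.linspace(0.0, 2.0, 5)
W_b = 2.0                                  # assumed dimensionless base thickness
print(q_base(X, W_b, 1e6))                 # S* very large: approaches the sinh form of Eq. (43)
print(np.sinh(W_b - X) / np.sinh(W_b))     # Eq. (43) for comparison
print(q_base(X, W_b, 0.0))                 # ideal BSF, S* = 0: cosh form, zero slope at X = W_b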
3. Solution for a Constant Drift Field
We shall now indicate a method by which the solution of the diffusion equation containing a constant drift field term can be obtained. Consider, for example, Eq. (35). We introduce the transformation
q_b(X_b) = y(X_b) exp(−F_b X_b)     (48)
which reduces Eq. (35) to
d²y/dX_b² − (1 + F_b²) y = 0     (49)
This equation has the same form as the diffusion equation without a drift field. Its solution can be obtained as in Sections II.D.1 and II.D.2. The effect of the drift field is to modify the excess carrier profile by the exponential factor as given by Eq. (48) and the diffusion length L_b to an effective diffusion length L_b* defined as follows:
1/L_b* = (1 + F_b²)^(1/2)/L_b     (50)
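A small numerical illustration of Eq. (50) (the diffusion length value is assumed): even a moderate dimensionless field F_b shortens the effective diffusion length appreciably.

import math

L_b = 50e-4                                  # assumed diffusion length, cm (50 micrometers)
for F_b in (0.0, 0.5, 1.0, 2.0):
    L_eff = L_b / math.sqrt(1.0 + F_b**2)    # Eq. (50)
    print(f"F_b = {F_b}: L_b* = {L_eff * 1e4:.1f} um")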
E. Solution of Steady-State Diffusion Equation for an Illuminated Solar Cell
As in Section II.D, we write the steady-state diffusion equations for the emitter and the base of an illuminated solar cell in terms of the dimensionless variables X_e and X_b as follows:
Emitter (−W_e ≤ X_e ≤ 0):
d²q_e/dX_e² − q_e + g_e*(X_e) = 0     (51)
Base (0 ≤ X_b ≤ W_b):
d²q_b/dX_b² − q_b + g_b*(X_b) = 0     (52)
where the dimensionless generation rates g_e* and g_b* are defined by Eqs. (53) and (54).
For the generation rates, we take the usual exponential form, e.g., for the emitter,
g_e*(X_e) = C_e (N_0/L_e) exp[−C_e(X_e + W_e)]     (55)
where C_e is the dimensionless absorption coefficient (the absorption coefficient multiplied by the diffusion length),
and N_0 and N_0j denote the number of photons incident per unit time per unit area at the front surface (i.e., the emitter) and at the junction, respectively. We have neglected the drift field, which can be included by using the transformation defined by Eq. (48).
1. Thick Solar Cell (W_b = ∞)
First we consider a "thick" solar cell and take W_b = ∞. This case has been discussed by Sharma et al. (1981). They have derived an expression for the excess carrier profile in the base [Eq. (57), with the auxiliary quantities appearing in it defined by Eqs. (58) and (59)] using the boundary conditions as given by Eqs. (14) and (17).
The junction potential is related to q_bj at low injections through the Shockley condition given by Eq. (29). The solution for the emitter can be easily obtained by using the boundary conditions given by Eqs. (11) and (16) and will not be quoted here. The total saturation current in this model can be shown to be
J_0 = J_b0 + J_e0 (S_e* cosh W_e + sinh W_e)/(cosh W_e + S_e* sinh W_e)     (60)
The open-circuit voltage V_j is related to the light-generated current J_g through
V_j = (kT/q) ln(1 + J_g/J_0)     (61)
The light-generated current J,, which is equal in magnitude to the short circuit current in the solar cell, is proportional to the intensity of incident light and is
given by Jg
= Jeg
+ Jbg
(62)
where Jegand &,denote, respectively, the contributions of the emitter and the base to Jg. The expression for Jegis somewhat complicated (Sharma et al., 1981) and is not needed here. It may be mentioned, however, that it is proportional to N o . The expression for J b g is Jbg
=
+
cbNOj/(l
cb)
(63)
We shall obtain the excess carrier profile in the specific case of a base-dominated solar cell in the open-circuit configuration, which will be useful in Section III.B. In a base-dominated solar cell the effect of the emitter is, by definition, negligible. In this case the zero-current condition given by Eq. (21) requires only the derivative of q_b(X_b) to be zero at X_b = 0. By applying this condition, Eq. (57) gives
q_b(X_b) = q_bj [C_b exp(−X_b) − exp(−C_b X_b)]/(C_b − 1)     (64)
It can be easily verified that in this case N_1, as defined by Eq. (59), is equal to q_bj.
2. Thin Solar Cell (W_b finite)
We now consider a solar cell in which the base thickness W_b is finite. We shall assume that the effect of the emitter is negligible so that the solar cell is a base-dominated solar cell. It may have an ohmic contact at the back or a BSF which, as in Section II.D.2, will be modeled by assigning an effective value to the SRV at the back surface. We shall give below the excess carrier profiles in the base for two limiting cases: one for S_b = ∞, corresponding to an ohmic contact at the base, and the other for S_b = 0, which is referred to as an ideal BSF cell. For the more general case of finite S_b and a constant drift field in the base, see Sharma and Tewary (1982), Kennedy (1962), and Wolf (1963).
a. Conventional Solar Cell (S_b = ∞). The relevant boundary conditions in this case are given by Eqs. (14) and (17). The solution of the diffusion equation for the base region is then given by
q_b(X_b)/q_bj = D_1/D_2     (65)
where
D_1 = exp(−C_b X_b) + C_b exp(X_b) − D_3[exp(X_b) + exp(−X_b)]     (66)
D_2 = 1 + C_b − 2D_3     (67)
346
V. K. TEWARY AND S. C. JAIN
and
b. Ideal BSF Solar Cell (sb = 0). In this case the relevant boundary conditions are given by Eqs. (15) and (17). The required solution is as follows
and
III. OCVD IN BASE-DOMINATED DEVICES
In this section we shall discuss OCVD in base-dominated devices, i.e., solar cells and diodes in which the effect of the emitter is negligible and the decay characteristics are determined by the base alone. Both FCVD and PVD will be discussed. For PVD we shall consider the case of a long as well as a short pulse. Some experimental results on PVD will also be discussed. The high injection effects will not be considered in this section.
A. FCVD in Diodes
We consider a forward biased diode in steady state. As given by Eq. (30)the terminal potential is related to the junction potential as follows:
V,,
=
vj, + I F R, (73) denotes their steady-state values and IFis
where the subscript zero on V, and the forward current. We now assume that the forward current is instantaneously switched off at time t = 0. Now the diode is in open-circuit configuration while the voltage
OPEN-CIRCUIT VOLTAGE DECAY
347
across the diode will decay to zero. The ohmic part of the terminal voltage will decay instantaneously. Hence the terminal voltage which is the one measured experimentally, will show an instantaneous drop. This voltage drop will be equal to the ohmic contribution to V,, and is given by AVO = A E = IFR,
(74)
This part of the voltage decay curve will appear as a vertical line at t = 0 on the CRO screen and can be easily measured. Since IFis known from the steadystate measurements, R, can be determined from the measured value of A V,(see, however, Section IV). Our main interest lies in the decay of y, which is the remaining part of V, after separating out the ohmic contribution. Since the diode is open circuited after t = 0, Eq. (31)will be valid. The decay of vj is determined by the decay of qbj in accordance with Eq. (29). The time dependence of qbj can be calculated theoretically by solving the ambipolar diffusion equation for the base [see Eq. (111. In the low injection limit in which we are presently interested, the ambipolar equation reduces to the diffusion equation for minority carriers (see Section I1.B. 1). This equation written in terms of the dimensionlessvariables is (for z > 0) (0 IX I03)
where X = x/Lb and z = t/q,. The subscript b has been dropped from X for notational brevity. In this section we shall assume Fb = 0, i.e., there is no drift field. In the steady state the right-hand side of Eq. (75) is zero and this equation reduces to Eq. (34) for zero-drift field. The transient starts at z = 0 when the forward bias is switched off. Equation (75) has to be solved, therefore, after z = 0. The boundary conditions are prescribed as given by Eqs. (14) and (17) and the initial condition as given by Eq. (33) where the initial steady-state profile is given by Eq. (37). After the forward bias is switched off,i.e., for z > 0, the diode is in open circuit. We have to therefore also impose the zero-current condition as given by Eq. (21).This equation, in the present approximation of zero-drift field and no emitter contribution, reduces to the following
The solution of Eq. (75) subject to the above boundary and initial conditions can be obtained easily (see, for example, Nosov, 1969) and is
as(z > 0,OI X I03)
With the help of Eq. (29), we obtain the following expression for voltage decay from Eq. (75):
where VI(4
= qF(z)/kT
As long as the following inequality is satisfied
exPC~l(z)l>> 1 Eq. (78) can be written in the following simple form: AVl(z) 3 Vl(z) - Vl(0) = ln[erfc&]
(79)
(80)
Equation (80) gives the well-known expression for FCVD that was first derived by Gossick (1955). For large z since erfc& behaves like exp( - z), we see that AVI(Z)x
-Z
(Z >>
1)
(81)
We see from Eq. (82) that, for large t, the decaying voltage has a linear dependence on t. The slope of the linear region is kT/qτ_b. Since the slope of the linear region can be easily measured, the excess carrier lifetime can be determined with reasonable accuracy. As we have already seen from Eq. (74), the vertical drop at t = 0 in the FCVD curve gives R_s. Thus FCVD provides a convenient method for determining R_s as well as τ_b. The main features of the FCVD curve in a thick diode are
1. Vertical drop at t = 0 due to the ohmic contribution
2. Some curvature in the initial portion of the FCVD curve, as predicted by Eq. (79) for low values of z
3. A linear region in the curve with slope unity [in the V_1(z) versus z curve]
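For readers who wish to reproduce the thick-diode FCVD curve, the sketch below evaluates Eq. (80) and its local slope. The specific z values are chosen only for illustration; the slope is seen to approach the ideal value of unity (in magnitude) only at the larger times, as discussed above.

import numpy as np
from scipy.special import erfc

# Gossick's thick-diode FCVD curve, Eq. (80): dV1(z) = ln[erfc(sqrt(z))],
# with z = t/tau_b and V1 = qV_j/kT.  The late-time slope approaches -1,
# which is how tau_b is read off a measured trace.
z = np.linspace(0.05, 6.0, 600)
dV1 = np.log(erfc(np.sqrt(z)))

slope = np.gradient(dV1, z)
print(f"slope at z = 0.25 : {slope[np.searchsorted(z, 0.25)]:.2f}")   # noticeably steeper than -1
print(f"slope at z = 2.25 : {slope[np.searchsorted(z, 2.25)]:.2f}")   # much closer to -1
print(f"slope at z = 5.0  : {slope[np.searchsorted(z, 5.0)]:.2f}")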
For even larger z, the junction voltage becomes too small, the space-charge effects become significant, and the theory as given above is not valid. The
OPEN-CIRCUIT VOLTAGE DECAY
349
behavior of this part of the FCVD curve is quite complicated and is outside the scope of this article. It is also not of any interest as far as the determination of excess carrier lifetime is concerned. B. PVD in Thick Solar Cells
In this section we shall discuss PVD in a thick solar cell, i.e., assuming that the thickness of the base is infinite. We shall consider only monochromatic illumination. We first consider PVD induced by a long pulse. In this case the solar cell is assumed to have been illuminated in the open-circuit configuration for a long enough duration so that it has reached a steady state. The illumination is switched off at t = 0 when the decay starts. After t = 0 (z = 0), the excess carrier concentration is given by the solution of Eq. (75). As in Section III.A, we assume that Fb = 0. All the boundary and the initial conditions as prescribed in Section 1II.A are also valid in the present case except that the initial excess carrier profile is given by Eq. (64) and not Eq. (37). This case has been studied by Jain (1981), whose results will be reviewed here. By using the Laplace transform method, Jain (1981) has obtained the following expression for the excess carrier profile for z > 0:
where, again for notational brevity, we have dropped the subscript b from X, z, and C, and
Equation (83) gives the following expression for the excess carrier concentration at the junction:
where
350
V. K.TEWARY AND S. C. JAIN
By using the Shockley voltage condition given by Eq. (29), we obtain the following expression for PVD: AV,(z) = - z
+ In
[
CS(Z) - S ( C l z ) ]
c-1
For large z, the function S(z) has the following asymptotic expansion (see, for example, Abramowitz and Stegun, 1965):
Jnz
L...]
[,I+
S(z) = -
22
42
Thus we see that for large z, S(z)tends to unity and hence the contribution of the second term on the right-hand side of Eq. (87) vanishes. In this limit, as in the case of FCVD in diodes,
AV,(Z) x
-Z
(89)
This equation shows that at large times, in principle, PVD curve also has a linear dependence on time with exactly the same slope as the FCVD curve from which q,can be determined. However, the location and the extent of the linear region will depend upon the value of C in Eq. (87). This is because, depending upon the value of C , the S(z) = 1 limit may not reach while the inequality (79) is still satisfied. Figure 4 taken from Jain (1981) shows the dependence of V,(z) on z for different values of C as calculated from Eq. (87). This figure also gives FCVD curve as calculated from Eq. (80). We see from this figure that for larger values of C PVD curves come closer to the FCVD curve. Mathematically this behavior can be expected from Eq. (87) by noting that for C + 00 S(C2z) << S(z)
(90)
and in this limit Eq. (85) becomes identical to Eq. (80). This could also be anticipated from the steady-state excess carrier profile for an illuminated solar cell as given by Eq. (64). It may be verified that in the limit C --f 00, Eq. (64) becomes identical to Eq. (37) which gives the excess carrier profile in a forward-biased diode. In general, the difference between PVD and FCVD arises only from the difference between their initial excess carrier profiles in the steady state. Physically the similarity between FCVD and PVD for large values of C can be understood as follows. A large C, i.e., a large LY implies a smaller absorption length (the range in which most of the photons are absorbed). In all the light will be absorbed the limit when the absorption length is 0 (C a), within an infinitesimal range close to thejunction. In this limit, there will be no generation of excess carriers anywhere in the base of the solar cell except at the --f
351
OPEN-CIRCUIT VOLTAGE DECAY
-
0
1
2.5
0
Z-
50
0
1 75
FIG.4. The voltage drop V,(z) - V,(O)is plotted as a function of z. Curve 1 is for the FCVD method. Curves 2, 3, and 4 are for the PVD method with C = 2,0.S, and 0, respectively.
junction. This situation is identical to a forward-biased diode in which the excess carriers are injected only at the junction. From a practical point of view, the main interest in the OCVD experiments is in the determination of q,from the slope of the linear region of the OCVD (PVD or FCVD) curve by using Eq. (82). The value of z b thus obtained will be correct provided the slope of V,(z)versus z curve as given by Eq. (81)or Eq. (89) is unity. The slope is unity only in a certain mathematical limit as shown above. We therefore examine how much the slope of V,(z)versus z curve differs from unity in case of PVD and FCVD. Figure 5 shows the variation of slope d'V,(z)/dzwith C for different values of z as calculated from Eq. (87).This figure also shows the slope for the FCVD curve. The main features of this figure, as observed by Jain (1981), are given below. 1. Slopes for FCVD as well as PVD are always larger than unity. The resulting values of lifetime will therefore be smaller than Tb. This error reduces progressively with increasing z , for example, from 180% at z = 0.25 to 17% at z = 2.25 in case of FCVD. For a reasonable estimate of T,,therefore, the slope should be measured for z > 2.25, provided, of course, the voltage has not decayed to such a low value that the space-charge and junction capacitance efforts become important and the inequality (79) is violated.
V. K.TEWARY AND S. C. JAIN
352
I
/ ,
4
z=I'U
I
cFIG.5 . The values of slope S are plotted as a function of the parameter C for four different values of time z. Assuming a value of 50 pm for L,, the corresponding values of iare also shown in the figure.
2. For small values of C, the slope of the PVD curve is relatively closer to unity. For example, at C = 0.2, the slope differs from unity by 10% at z = 0.5 and 6% at z = 2.25. A small value of C corresponds to a low absorption coefficient and therefore a large wavelength of light (see, for example, Agarwala et al., 1980, for a discussion of data on absorption coefficient). For a silicon solar cell, Jain (1981) has recommended that light of wavelength 0.9-1.1 pm will be most useful for determination of zb. In Jain's (1981) analysis as reviewed above, it has been assumed that the steady state had been reached before t = 0. In actual experiments light flashes are obtained by using stroboscopes and duration of flashes may not be long enough for the excess carriers to attain their steady-state profile before the decay starts. The analysis given above applies to the case when the duration of the light flash is effectively semiinfinite and light intensity falls to zero instantaneously at t = 0. We now consider the other extreme case when the duration of the flash is zero but its intensity is infinite. Such a pulse can be mathematically represented by a 6 function. This case has been treated by Dhariwal and Vasu (1981) and Muralidharan et al. (1982). The mathematical treatment of the 6 function pulse is quite simple and will not be described here (see Dhariwal and Vasu, 1981; Muralidharan et al., 1982).One main result reported in these two papers is that the linear term in the PVD curve is identical to that obtained in the long-pulse case and therefore does not depend upon the duration of the pulse.
OPEN-CIRCUIT VOLTAGE DECAY
353
Muralidharan et al. (1982) have found that for small values of C, the behavior of the short-pulse (6 function)-induced PVD and the long-puiseinduced PVD is quite similar. As C increases the initial part of PVD curve in both the long- and short-pulse cases becomes curved. The increase in curvature is much larger for the short-pulse-induced PVD than that for the long-pulse case. Lifetime q,can be determined from the slope of the linear region in either case. However, in the short-pulse case, the initial curvature of the PVD curve will become quite large at the cost of the linear region. The large C values or light of short wavelengths are therefore less useful for the short-pulse-induced PVD. In general, for small values of C and for a thick solar cell, it would be quite advantageous to use short-pulse excitation for inducing PVD. The practical utility of this result lies in the fact that GaAs laser diodes can be used for PVD experiments in silicon solar cells. The pulse width in these lasers is about 0.1 psec. This pulse can therefore be regarded as a S function pulse since its width is much smaller than q, in silicon, which is of the order of a few microseconds. Before closing this section we shall mention a rather important difference between FCVD and PVD. We consider the time derivative of the voltage decay curve at z close to zero. Using the McLaurin's expansion of e r f c h (Abramowitz and Stegun, 1965), we obtain the following results for FCVD by using Eq. (80):
and
Similarly, by using Eq. (85) we obtain the following for PVD:
AVI(Z)= -(C
+ l ) +~
(93)
We observe from Eqs. (91) and (93) that near z=O, FCVD curve has a & dependence whereas PVD has a linear dependence on z. In fact the &terms in expansions for S(z) and S(C2z)on the right-hand side of Eq. (87) cancel out exactly. The first derivative of the FCVD curve has a singularity at z = 0, which is absent in the PVD curve. The FCVDcurve therefore, unlike the PVD, shows a kink at z = 0. This kink exaggerates the ohmic drop in the FCVD curve at z = 0, which was mentioned at the beginning of Section 1II.A. Physically this difference between FCVD and PVD arises because of the following. In an FCVD experiment a finite forward current flows through the junction until the decay starts at t = 0. The gradient of the excess carrier
3 54
V. K.TEWARY AND S. C. JAIN
profile is determined from this magnitude of the current which is nonzero. After t = 0 when the decay starts, the diode is open circuited. Hence the gradient of the excess carrier profile at the junction is zero. Thus the gradient of the excess carrier profile at the junction has a discontinuity across t = 0 in a FCVD experiment. In contrast, the solar cell in a PVD experiment is in open-circuit configuration before as well as after t = 0. The gradient of the excess carrier profile at the junction therefore remains continuous across t = 0. C . FCVD in Thin Diodes
FCVD in thin diodes has been discussed in several papers (Muralidharan et al., 1982; Sharma and Tewary, 1982; Joshi and Singhal, 1982; for a review and earlier references, see Nosov, 1969). In this section we shall mainly review the work of Muralidharan et al. (1982). The time-dependent diffusion equation in this case is the same as Eq. (75) with Fb = 0. The boundary conditions are given by Eq. (13) and (17) and the initial condition is as given by Eq. (33) with the initial steady-state profile as given by Eq. (46). The solution of the diffusion equation in this case can be obtained by expansion in terms of an appropriate set of orthogonal functions (see, for example, Carslaw and Jaeger, 1959). Taking Fb = 0 (no drift field), and using the zero-current condition given by Eq. (76), along with the Shockley condition given by Eq. (29) and the inequality (79) we obtain the following result for FCVD: 4 (4 exp[AV,(z)] = b j= qbj(O)
where &;
=
C B,exp( -$z) r
1 + p;/w;
(94) (95)
and pr are the real roots of the following equation: p, tan p, = S,* W,
(96) The coefficients Br have to be determined from the steady-state profile. We shall give below the expressions for FCVD in the two limiting cases, i.e., S,* = 0 and 00. I . Ideal BSF Diode (S,* = 0) a. For Small
wb
and Large z.
exp[AVl(z)] = e-'
(97)
355
OPEN-CIRCUIT VOLTAGE DECAY
where pr = rn/wh
The series in Eq. (94) is rapidly convergent for small w b and large times. An alternative series which is more rapidly convergent for large W, and short times has also been obtained by Muralidharan et al. (1982) and is given below. b. For Large
wb
where the function
and Small z.
is defined as
2. Conventional Diode (S,* = 00)
a. For Small w b and Large 2.
where Pr =
(2r
+ 1)n wb
b. For Large wb and Small z. - coth W,
erf&
-2
1
1 $,(2(r + 1)wb,z} m
exp[AV,(z)] = 1
r=O
where $1has been defined by Eq. (99). Figure 6 shows A V,(z) versus z curves for different values of w b and for an ideal and conventional diode. For the purpose of comparison, this figure also gives the FCVD curve for a thick diode as calculated from Eq. (80). The main features of Fig. 6 are given below. 1. In general the decay curves have linear region. For smaller w b , the linear region in the decay curve sets in earlier. This behavior could be mathematically expected from Eqs. (97) and (100).These equations show that after a sufficiently long time (large value of z), the first term in the series on the right-hand side of Eqs. (97) or (100) will dominate. Then AVl (z) will be linear in z provided of course the inequality (79) is not violated. For low values of w b , the first term will start dominating for smaller value of z.
356
V. K. TEWARY AND S. C. JAIN
zFIG.6. Plots of V,(z) - V,(O) versus z for the FCVD method. Curve 1, W = m, S = 0; curve 2, W = 2.0, S = 0; curve 3, W = 0.5, S = 0; curve 4, W = 2.0, S = co;curve 5, W = 0.5, s=m.
2. The slope of the linear region is unity for SE = 0, whereas it is larger than unity for S,* = 00. The usual practice is to determine q, from the slope of the linear region in the FCVD curve by using Eq. (82).For thin diodes, as we have seen above, this relation is not always valid. However, the lifetime determined from the slope by using Eq. (82) can be identified as an effective lifetime Zeff. This can be related to the actual lifetime zb by using the slope of the Al/,(z) versus z curve corresponding to the leading terms in the series in Eqs. (97) and (100) as follows: ( i ) Ideal BSF Diode (SE = 0) 7eff = Tb
(ii) Conventional Diode (S,* = 00)
Equation (104) has also been derived by Nosov (1969). This equation shows that the discrepancy between zeff and q, progressively increases as wb decreases, although the extent of linearity in the FCVD increases as wb decreases. In general for intermediate values of S,* between 0 and infinity, the nature of the FCVD curves would depend upon both S,* and zb. It is possible in
OPEN-CIRCUIT VOLTAGE DECAY
357
principle to obtain S,* and Tb by fitting experimental FCVD curves with theory. It is not quite apparent that such a fitting procedure would yield unique and therefore reliable values of S,*as well as Tb even if the thickness of the diode is accurately known. However, an approximate formula, as given below, can be derived for the effective lifetime in the limiting case when &,k& is small but S,*/W, is large:
(105) The usefulness of the above formula is limited to the case of a very thin diode containing a strong BSF. Zeff
=
wbTb/sb*
D. PVD in T h i n Solar Cells
In this section we shall discuss the effects of cell thickness, surface recombination velocity at the back surface (BSF), wavelength of light used for inducing PVD, and drift field on the PVD rates in a thin solar cell. First we shall consider a long-pulse-induced PVD in which the solar cell reaches a steady state before the illumination is switched off at t = 0. The case of a shortpulse-induced PVD will be considered later. The time-dependent diffusion equation for z > 0 in the present case is same as Eq. (75) since the illumination is assumed to have been switched off at z = 0. The boundary and the initial conditions are as prescribed by Eqs. (13), (17), and (33) and the steady-state ECP as given by Eq. (65) or (69) with appropriate modifications for a constant drift field. The effect of drift field is included by using the transformation given by Eq. (48). For a nonzero drift field, the boundary condition at the back surface as given by Eq. (13) is modified as follows:
The solution of the diffusion equation is obtained by using the transformation given by Eq. (48) and then using the method of expansion in orthogonal functions as in the preceeding section. The required expression for FCVD can then be obtained by using the zero-current condition, i.e., Eq. (21) (ignoring the emitter contribution), the Shockley voltage condition, i.e., Eq. (29), and, for practical convenience, the inequality (79). Following these steps, the result obtained by Sharma and Tewary (1982) is expCAvl(z)l= where &:
=1
1Ar(&/Wb)exp(-E:z) r
+ FE -
(/&/Wb)'
(107) (108)
358
V. K. TEWARY AND S. C. JAlN
and p r are the solutions of the following equation:
The coefficients A, are obtained in terms of the excess carrier profile in the steady state. As in the preceeding sections, the linear region in the AV, ( z ) versus z curve will be obtained when the first term in the series on the right-hand side of Eq. (107) dominates. The slope of the linear region, corresponding to the leading root p0, is given by =1
+ Fz -
(pO/Wb)’
(1 10)
which gives the following expression for the effective lifetime
We shall now discuss the effect of various parameters on PVD. 1. EfSect Of
s:, c, and w b for F b = 0
If S,* = 0 then all the roots of Equation(l09) are imaginary. They are given by
(r = 0,1,2,. ..)
,ur = irz and Teff
= zb
On the other hand, if S,* = 03 then pr = (2r + l)ni/2
and
(r = 0, 1,2,. ..)
-=ql+”]4w2 1
Teff
Tb
The PVD curves as calculated from Eq. (107) by Muralidharan et al. (1982) for different values of C, w b , S,* = 0, and co have been shown in Fig. 7. The FCVD curves have also been given in this figure for the sake of comparison. The curves for w b = 03 have also been included to estimate the validity of the infinite base approximation in the calculation of decay rates. It may be observed that for S,* = 0, the PVD curve becomes identical to the FCVD curve for large C and w b . As C and/or w b decreases, the initial slope of PVD curves for S,* = Odecreases.In this case, the difference between PVD and
OPEN-CIRCUIT VOLTAGE DECAY
359
Z-
FIG.7. Plots of V , ( z )- V,(O)versus z: Line lP, PVD plot for C = 0, W = co;curve lF, FCVD plot for W = m; line 2P, PVD plot for C = 0, W = 2.0, S = 00; curve 2F, FCVD plot for W = 2.0, S = m; line 3P, FCVD and PVD plot for all values of C, W = 0.5, S = 00.
FCVD is maximum when w b is large and C is small, corresponding to the wavelength of light being large. In such conditions, PVD is more convenient for a measurement of q, as compared to FCVD because of the more pronounced linear region in PVD curves. For a conventional solar cell with S,* = 00, Fig. 7 shows that the difference between FCVD and PVD is smaller for w b = 2 as compared to that for w b = cg. For larger values of C, the difference between PVD and FCVD would decrease even further. For wb = 0.5, the difference between FCVD and PVD is negligible. 2. Efect of Drift Field on PVD The effect of drift field on PVD has been calculated by Sharma and Tewary (1982). They find that in general PVD is sensitive to the magnitude as well as the sign of the drift field Fb.A positive Fbrefers to the retarding field whereas a negative Fbrefers to the accelerating field. A retarding field drives the carriers toward the junction and thus retards the decay of the voltage. On the other hand, an accelerating field drives the carriers away from the junction and thus accelerates the voltage decay. Mathematically this behavior can be seen from Eq. (109). We will consider a conventional solar cell with S,* = co. For this case, Eq. (108) reduces to
Fbwb tanh p, (1 14) For negative Fb, Eq. (114) will have no real root except po = 0. This will give the smallest decay constant c0 in Eq. (108) since the imaginary values /l, =
360
V. K. TEWARY AND S . C. JAIN
of p, will make a positive contribution to 8 ; . Hence r = 0 will give the leading decay mode. The value of toas obtained from Eq. (108) for po = 0 is 8;
=1
+ Fi
(1 15)
For a retarding field (& > 0),Eq. (1 14)can have real roots. The largest real root will give the leading mode of decay. We consider the limit of a very strong retarding field such that Fbwb +
In this limit the real root of Eq. (114) corresponding to the leading mode po is given by PO
The corresponding value of
8:
Fbpb
(1 16)
--t
as obtained from Eq. (108) is 8;
(117)
=1
which gives, from Eq. (107), AVI(Z)= - Z
+ In
AoFb
(1 18)
and from Eq. (1 11) Oeff
= zb
(119)
Thus, we get an interesting result as derived by Sharma and Tewary (1982) that, in the limit of a very strong retarding field, the slope of the linear region of the PVD curve becomes independent of the field as well as the thickness of the base. Physically this result can be understood as follows. A retarding field opposes the diffusive loss of excess carriers at the junction. It can not create or destroy the carriers. After the field is large enough to totally counteract the diffusion process, a further increase in the field would have no effect on the excess carrier concentration at the junction. In this limit the excess carrier concentration at the junction will decay by recombination only, which gives unity slope in the AVl(z) versus z curve and Oeff = Ob. Figures 8 and 9 show PVD curves for different values of Fb and W, as calculated by Sharma and Tewary (1982). These curves show that the effect of accelerating field is relatively larger for large values of w b . Physically, it happens because, for large values of w b , the gradient of the excess carrier profile at the junction is small. Hence the diffusive loss is reduced and the effect of the accelerating drift field becomes relatively more pronounced. For small w b the excess carrier profile for Sf = co becomes more steep so that the diffusion process becomes stronger and the relative effect of the field is less significant. For Sf = 0, a smaller w b will not increase the “steepness” of the excess carrier profile and this effect will not be there.
361
OPEN-CIRCUIT VOLTAGE DECAY Time tin units of 7b’
FIG. 8. Effect of drift field on PVD (W = 0.1, C = 1). For curves A, B, and C, 2F = - 5 , 0, and 5, respectively.
The main conclusion of this analysis is that for small w b (X0.5), PVD and FCVD are almost identical both for solar cells with S,* = 0 (ideal BSF) and S,* = 03 (conventional solar cell). The slope of the linear region would depend strongly on S,*. For S,* = 0 the slope is unity for FCVD as well as PVD, irrespective of the values of C and w b as long as wb is not too large. For S,* = 03 the slope is 1 + 7c2/4Wi as given by Eq. (1 13), which increases b decreases. as w In general, the infinite base approximation is better for low values of time. This is physically obvious because the carriers, after diffusing away from the junction, will take a finite time to “feel” the presence of the other end. The validity of the infinite base approximation for S,* = 03 also depends upon the relative magnitudes of C and wb.For large C the infinite base approximation would become reasonable for smaller wb.This can be physically understood by recalling that a larger C implies a smaller absorption length. Sharma and Time (in unihof TB) I 3 0 40 50 I
I
I
I
I
I
I
60 I
FIG.9. Effect of drift field on PVD (C = 1). For curves A, B, and C, 2F = -5, 0, and 5, respectively, and W = 1. For curves D, E, and F, 2F = - $0,and 5, respectively, and W = 3.
362
V. K. TEWARY AND S. C. JAIN
Tewary (1982) find that, in general, for reasonable values (C > 0.1) the infinite base approximation becomes reasonable for w b > 3. We shall now briefly describe PVD induced by a short light pulse in which the duration of the illumination is not long for the solar cell to reach a steady state. Before proceeding further we note from Eq. (94) and Eq. (107) that the decay rates (coefficients of z in the exponents) are determined from the geometry and the characteristics of the device. They are same for FCVD as well as PVD and would not depend upon the nature or duration of the illumination. The coefficients A,, i.e., the expansion coefficients in the series in Eq. (94) and Eq. (107) are determined from the excess carrier profile in the steady state, which of course depends upon the nature as well as the duration of illumination. These coefficients essentially determine the initial curvature of the decay curve and the onset of the linear region. The slope of the linear region from which the lifetime is determined does not depend upon the characteristics of the steady state and therefore neither on the nature nor the duration of the illumination. We infer therefore that the slope of the linear region in the PVD curve will remain the same irrespective of whether a short or a long pulse has been used for inducing the PVD. A detailed calculation of PVD induced in a thin solar cell by a 6 function pulse has been carried out by Muralidharan et al. (1982). Both ideal BSF type (S,* = 0) and the conventional solar cells with ohmic contact at the back (S,* = co)have been considered. These results are shown in Fig. 10. Fig. 10 also gives PVD curves for W, = co. It may be observed that for W, = co the short-pulse-induced PVD curve lies above the FCVD curve for C c 1 and below it for C > 1. At the other extreme, the PVD curve for a short pulse does not depend upon the value of C for w b = 0.5 and S,* = 00 and is identical to the FCVD curve.
E. Limitations of the OCVD Method for Lifetime Determination and Small Signal OCVD In the preceding section we have obtained various expressions for voltage decay in diodes and solar cells. These expressions can be used for determination of the excess carrier lifetime in the base of a device. There are, however, some limitations which are summarized below. 1 . Variation of Lifetime
During the time of a typical OCVD experiment the excess carrier concentration near the junction varies by a large amount. Thejunction voltage
OPEN-CIRCUIT VOLTAGE DECAY
363
ZFIG. 10. Plots of V,(z) - V,(O) as a function of z. Curve lP, PVD after short-pulse excitation plot for W = 0.5, S = 0, C = 0.5; curve 2P, PVD after short-pulse excitation plot for W = 0.5, S = 0, C = 2.0; curve 3P, PVD after short-pulse excitation plot for W = 2.0, S = m, C = 0.5;curve SF, FCVD plot for W = 2.0, S = co;curve 5P, PVD after short-pulse excitation plot for W = 2.0, S = co, C = 2.0; curve 6, FCVD and PVD after short-pulse excitation plot for all values of C, W = 0.5, S = co.
varies say from 0.6 to about 0.3 V. The excess carrier concentration which is proportional to exp(qV/kT) will therefore vary by 3-4 orders of magnitude at room temperature. The excess carrier concentration also has a strong spatial dependence and will vary from a high value near the junction to zero at the back ohmic contact. Except when the excess carrier concentration remains sufficiently high or sufficientlylow during the course of an experiment, the excess carrier lifetime will also vary (see Section I1.B). The theory developed in the preceeding section does not account for this variation. An attempt to account for the variation of excess carrier lifetime in space and time will necessarily require solving a nonlinear differential equation. 2. Diode Factor and Its Variation
In the Shockley boundary condition given by Eqs. (28) and (29) the exponent is q q / k T . In general, in a real case, the exponent can be written as q F / A k T where A is called the diode factor. A is equal to unity in the ideal case of a Shockley diode but otherwise varies between 1 and 2 or even more. There is a considerable uncertainty in the actual value of A. If A is taken to be a constant then the derivations given in Section 1II.B would yield A / T , as the slope of the linear region in the OCVD curve. Since the lifetime is determined from the slope of the linear region, any uncertainty in A directly affects the
364
V. K. TEWARY AND S. C. JAIN
determination of the lifetime. The situation is worse if A is taken to be a function of time, which again makes the diffusion equation nonlinear.
3. High Injection Egeects The effect of high injection is to modify the excess carrier lifetime as well as the diode factor. In addition, it causes a saturation of the junction voltage (Dhariwal et al., 1976; Fletcher, 1957) which is described by the nonlinear boundary conditions as given in Section 1I.C. Consequently, the expressions given in this article, i.e., Sections 1II.A-D for OCVD are not valid. Moreover, as we shall see in Section VI the effect of p-n coupling becomes quite important at high injections but this has been neglected in the present formulae given in the this section. In general, in a typical silicon diode or solar cell, the high injection effects may become important when the junction potential exceeds 0.6 V. 4. Low Injection Effects
At low junction voltages, the space-charge effects and the junction capacitance leakage effects become significant. These effects have not been included in the present theory. Moreover, the linear region in the OCVD curve would exist only if the inequality (79) is satisfied. This inequality is of course violated at much lower voltages than that when the space-charge effects set in. In a typical silicon diode or a solar cell, the space-charge effects become significant when the junction voltage drops to lower than 0.4V. The factors given above cause severe limitations on the determination of excess carrier lifetime by the OCVD method. The starting voltage should not be too high to avoid high injection effects. The linear region in the decay curve from which the lifetime is determined occurs only after some time but during this time the voltage should not drop to too low a value to avoid space-charge effects.The voltage-decay range should not be too large to avoid the variation of the excess carrier lifetime. Yet it should be large enough for the linear region to be identified. Moreover, the diode factor in this region should be accurately known. Some of the aforementioned difficulties can be avoided by using what is called the small-signal OCVD method in solar cells. This very useful and elegant method was suggested by Moore (1980)and is briefly described below. We consider a base-dominated solar cell illuminated by stationary light so that it is in steady state with open-circuit voltage Kj. In this state a pulse of light is applied to the solar cell. It may be a long pulse so that the junction voltage reaches a new steady-state value Kj + Voj which eventually decays to Kj. It may also be a short 6 function type pulse so that the junction voltage reaches its peak value Kj Voj and then decays to Kj.
+
OPEN-CIRCUIT VOLTAGE DECAY
365
Let qsj denote the excess carrier concentration at the junction in the initial steady state corresponding to the voltage Kj.The additional excess carrier concentration at the junction corresponding to the additional voltage Voj is denoted by qoj. We now make the small-signal approximation so that
kT
VOj<< 4
With this approximation, using the mathematical methods given earlier in this chapter, the following formulae for the voltage decay can be easily derived (see Moore, 1980): Aqoj AVlW = (121) 4sj where A is the diode factor and ~ ( zis) a function which represented qj(z) in the earlier sections of this article. For example, for a thick-base (W, = 00) solar cell, ~ ( zis) the function given by the right-hand side of Eq. (85). Similarly, for a thin solar cell, it is given by the right-hand side of Eq. (107) with cr and B, = A,pL,/Wb as given in Section 1II.D for different cases. Except in the ) be expressed in the followearliest stages of the decay, the function ~ ( z can ing form:
---m
~ ( z= ) B, exp( - ~ ; z )
(122) As discussed in the earlier sections of this article, E, is known quite accurately. In a thick solar cell as well as in a thin solar cell with an effective BSF (S,*= 0) and no drift field, E, = 1. For other cases the values of E, have been given in Section 1II.D. Any uncertainties in the starting excess carrier’s profile and the duration of the pulse would affect only B, and not E,. The quantity E, is also independent of the diode factor. Combining Eqs. ( 1 21) and (1 22) we see In AV,(z) = const. - E
(123) We see from Eq. (123)that if the log of the voltage drop is plotted against time, we expect a straight line with slope E;/z,, (z = z/zb).Since E; is accurately known, zb can be determined from the slope of this straight line. The slope is obviously independent of the diode factor, duration of the pulse, as well as any uncertainties in the starting steady-state excess carrier profile. The method also has the advantage that the decay range is small, i.e., Kj + Voj to Kj. The lifetime can be easily assumed to be constant within this range. The measurements are effectively made at the chosen value of Kj. Thus by taking different values of Ej corresponding to different intensities of the stationary light, we can measure the dependence of the excess carrier lifetime on the intensity of the illumination, i.e., the level of injection. It ~ Z
366
V. K. TEWARY AND S. C. JAIN
may be remarked that Kj should be chosen such that it is neither too high nor too low so that high injection effects as well as space-charge effects are not significant.
F . Interpretation of Experimental Results on PVD FCVD in diodes has been a popular technique for measurement of excess carrier lifetime as well as series resistance of the diode for several years (see,for 1969). In contrast PVD is a new technique and very few example, NOSOV, experimental papers on PVD are available in the literature. Perhaps the first experimental results on PVD in solar cells were published by Mahan et al. (1979). Subsequently, more experimental work has been reported by Ray et al. (1982), Madan and Tewary (1983), and Moore (1980) on small-signal PVD. The usual electrical circuit for PVD measurements is quite similar to that for FCVD. For illuminating the solar cell, Mahan et al. (1979) used an electronic stroboscope. They used a composite light source but the spectral composition of the light used was not reported. Hence it has not been possible to make a quantitative comparison between the theory and their experimental results. The main qualitative features of the experimental results of Mahan et al. are given below. 1. The voltage decay curves are more linear in case of PVD as compared to those for FCVD. 2. The photo induced voltage persists for a longer duration. 3. The values of lifetimes as determined by the PVD method are considerably larger than those obtained by the FCVD method. 4. The values of lifetime obtained by the PVD method seem to be more accurate than those obtained by the FCVD method. The above qualitative features are consistent with those predicted by Jain’s (1981) theory. This can be seen from Fig. 4.As mentioned in Section III.B, the infinite value of C in PVD corresponds to FCVD. We see from Fig. 4 and as already mentioned in Section 1II.B as C decreases, the PVD curves become more linear and the slope of the linear region comes closer to unity. Thus we see that compared to infinite value of C, i.e., the FCVD method, a finite value of C, i.e., the PVD method will give more pronounced linear region (observation 2), smaller slope (observation 3), and more accurate value of the lifetime (observation 4).Figure 4 also shows that PVD curves are above the FCVD curve, which is consistent with observation 2. A more quantitative verification of the theory is possible by using the experimental results of Madan and Tewary (1983). They have obtained PVD
367
OPEN-CIRCUIT VOLTAGE DECAY
curves for monochromatic light of wavelengths of 690,590,560,500,410 nm, and also for composite light. Their results are shown in Fig. 11. The monochromatic beams of light were obtained by using interference filters. The initial open-circuit voltage for each wavelength is a measure of intensity of steady-state illumination. By adetailed analysis, Madan and Tewary (1983) infer that their curves can be compared with the theories of Jain (1981) or Dhariwal and Vasu (1981) until about 0.4 V. Below this voltage the low injection effects,e.g., space-charge effects, become significant. Since curve 6, i.e., the one corresponding to wavelength 410 nm starts at just about 0.4 V, it cannot be analyzed by using the existing theories. The remaining curves (1-5) in Fig. 1 1 all become linear and parallel to each other after about 20 psec and remain so until the voltage drops to about 0.4 V. The slopes of the linear region in all these curves do not differ by more than 20%. These features are consistent with the theories of Jain (1981) as well as those of Dhariwal and Vasu (1981). The lifetime determined from this slope comes out to be about 23 psec, which seems to be a reasonable value. However, a more detailed analysis shows that in the initial region of high curvature, the experimental PVD curves as obtained by Madan and Tewary (1983) differ considerably with those predicted theoretically by Jain (1981) and Dhariwal and Vasu (1981). Madan and Tewary (1983) attribute this discrepancy to the following two factors: 1. Jain's (1981) theory assumes a long pulse so that a steady state is assumed to have reached before the decay starts. On the other hand, Dhariwal
"
01 0
I
50
I
100
I
I50
I
200
TIME (ps)
FIG. 1 1. Spectral dependence of the PVD in a silicon solar cell for composite light (curve 1) and for wavelengths of 690 nm (curve 2), 590 nm (curve 3). 560 nm (curve 4), 500 nm (curve 5), and 410 nm (curve 6).
368
V. K. TEWARY AND S. C. JAIN
and Vasu (1981) assume a S function pulse. Neither of these assumptions are valid for the pulse used in the experiment. 2. The thick-base and zero-drift-field approximations may not be valid in the solar cell used in the experiment. Sharma and Tewary (1982) have also analyzed the experimental results of Madan and Tewary (1983) on the basis of their theory of PVD in a thin solar cell. This theory has been described in Section 1II.D. They have fitted the experimental results with Eq. (107) by using a least square fitting procedure and thus obtained the best values of t b , L b , d b , and Fb.Two different sets were obtained. In one set the drift field Fbwas forced to be zero whereas in the other set, F b was assumed to be finite and constant. These values are given below: Set 1:
zb
= 47 psec,
Set 2:
zb =
Lb
23 pet,
= 458 pm,
Lb =
db/Lb
128 Pm,
= 1.57,
Fb
=0
1.55,
Fb
= 3.85
db/Lb =
It can be easily verified that both these sets are consistent with the effective lifetime as defined by Eq. (109). The value of teffin the present case, as obtained from the slope of the region and as quoted earlier, is 23 psec. The fit between the theoretical and experimental results using the parameters of Set 1 is shown in Fig. 12. A similar fit is obtained by using Set 2. Although the theoretical results fit well with the experimental results they obviously do not yield a unique set of values for the parameters. It is therefore
0
I
l
1 0
20
l
I
30 40
I
I
I
SO 60 70 80
Time ( p s )
FIG.12. Fitting of theory with the experimental data on PVD assuming zero-drift field. Curves A, B, C, and D correspond to light wavelengths (in nm) of 690, 590, 560, and 500, respectively. The corresponding experimental points are shown by x , 0, A , and 0, respectively.
369
OPEN-CIRCUIT VOLTAGE DECAY
essential to have independently measured values of some of the parameters to obtain a reliable value of zb. The experimental results obtained by Moore (1980) on small-signal PVD are shown in Fig. 13. Two small-signal PVD curves, one with and the other without stationary light bias, are shown in the figure. It may be seen that the PVD curve without the stationary light bias is not exponential, presumably due to space-charge effects. The small-signal PVD curve with a stationary light bias is indeed exponential as predicted theoretically (see Section 1II.E). The slope of the linear curve plotted on a log graph gives r b = 7 psec by using Eq. (123) taking e0 = 1. Using this technique, Moore (1980) has experimentally obtained the dependence of rb on the level of injection, Moore (1980) finds that in a particular solar cell, zb increases from 7.5 psec at a light intensity of 3.8 x lop3 Sun (equivalent) to 26 psec at light intensity of 0.4 Sun (equivalent). It then decreases to 8.2 psec as the light intensity further increases to 40 Sun (equivalent). Moore (1980) has also given an explanation of the observed dependence of z b on the injection level in terms of the Shockley-Reed-Hall two-level recombination mechanism.
-
FIG.13. Open-circuit voltage in the tail of the decay. The upper curve is with no light bias, the lower curve is with a small light bias equivalent to less than 0.1 mA short-circuit current. The normal operating range of these cells is near 1 A. x , No light bias; 0, with light bias.
370
V. K. TEWARY AND S. C. JAIN
I v . EFFECTOF EMITTER ON OCVD In the previous section we discussed OCVD in base-dominated devices. In such devices the effect of the emitter is negligible. This approximation is valid in a device in which the injection efficiency of the emitter is close to unity, i.e., the saturation current of the emitter is much less than the saturation current in the base. Most of the early work on characteristics of the devices was done on basedominated devices in steady state (see, for example, Shockley, 1950, and various papers in Backus, 1976) as well as transient state (Gossick, 1955; Lederhandler and Giacoletto, 1955; Kingston, 1954; also the work reviewed by Nosov, 1969). The effect of stored charges in the emitter was discussed by Lederhandler and Giacoletto (1955) but this work was not pursued further. This was presumably because the injection efficiency of the emitter was indeed close to unity in the germanium diodes that were mostly used in 1950s. In a typical modern silicon device the emitter is heavily doped. As mentioned in Section I.B, the heavy and nonuniform doping of the emitter causes band-gap shrinkage which increases the saturation current of the emitter. In such devices the effect of the emitter becomes quite important. This was recognized by Lindholm and Sah (1976) for transient characteristics and Fossum et al. (1979; also see Sharma et al., 1981) for steady-state characteristics of devices. The experimental results of Mahan et al. (1979) also seem to support this. Subsequently, the effect of emitter on OCVD has been the subject of many papers. Some of these will be reviewed in this and the following sections. The effect of the emitter on OCVD arises from what is called p-n coupling. The nature and effect of p-n coupling is described in Section 1V.A. The effect of p-n coupling on OCVD is likely to be significant in a device in which heavy doping in the emitter results into band-gap shrinkage. Even in a device in which there is no band-gap shrinkage in the emitter and in which the saturation current of the emitter is much less than that of the base, the effect of p-n coupling becomes important at very high injections. This will be discussed in Section VI.
A. p-n Coupling Consider a p-n junction device in which some excess carriers have been injected. The excess carrier concentrations at the junction on both of its sides are related through the voltage boundary conditions given in Section II.C.3. The excess carrier concentrations in the emitter and the base therefore cannot decay independently of each other. The ECC at the junction, i.e., the heights of
OPEN-CIRCUIT VOLTAGE DECAY
371
ECP at the junction on both of its sides are coupled with each other through the voltage boundary condition. Further, the total current through the junction consists of two parts, the emitter current and the base current. The diffusion current is proportional to the gradient of ECP at the junction, whereas the drift current depends upon the height of ECP at the junction. Any physical condition on the current will apply to the total current through the junction and not to the emitter and the base currents individually. In the open-circuit case, for example, the total current has to be zero as given by Eq. (21). Therefore we see that the gradients of the ECP at the junction in the emitter and the base are also coupled. Thus we see that, during decay, the heights as well as slopes of ECP at the junction in the emitter and the base of the device are coupled together and therefore can not decay independently. This coupling is called p-n coupling since one of the two-emitter and base-would be p type and the other n type. We shall now show qualitatively that the effect of p - n coupling is, in general, to accelerate the voltage decay. In a typical n + - p device, for example, the excess carrier lifetime in the emitter may be a few nanoseconds whereas in the base it may be a few microseconds. If the excess carrier concentration in the emitter and the base were to decay independently of each other, decay would be much faster in the emitter than in the base. However, because of the p-n coupling they can not decay independently of each other. The base has to supply enough carriers to the emitter to keep the relative ECC on the two sides of the junction at a value demanded by the voltage condition. The net rate of decay is therefore higher than that from the base alone in the absence of p-n coupling. In the followingsubsectionsof this section we shall discuss the effect of p-n coupling on FCVD and PVD at low injections. Some experimental results will also be discussed.The effect of p-n coupling at high injection will be discussed in Section VI. B. FCVD in Thick Diodes with p-n Coupling
The theory given in this section is due to Tewary and Jain (1983). We consider the model described in Section 1I.A (see Fig. 1). We assume a thick diode (or a solar cell used as a diode),i.e., the thickness of the emitter as well as the base is infinite in the sense given in Section 1I.A. The steady state is assumed to persist until t = 0 when the forward current is switched off and the decay starts. The ambipolar diffusion equations for the emitter and the base in the low injection approximation in terms of the dimensionless variables X , , Xb,and z
372
V. K. TEWARY AND S. C. JAIN
reduce to the following (see Section 1I.B):
I. Emitter (-aI Xe I 0; z > 0) 1 aqe
a24e
ax,^ - q e = PZ
11. Base (0 I x b I 00; z > 0) a2qb --
ax;
qb
=
aqb
where we have assumed the drift field to be zero and
5
(126) The boundary and initial conditions, as given in Section 1I.Care as follows: = zb/ze
1. At the ohmic contacts for all z = qb(a,z) =0
qe(-a,z)
2. At the junction (Xe = x
b
(127)
= 0) for all z
q e ( 0 , Z) = qej(z)
( 128)
z, = 4bj(z)
(129)
0)= q e A X e )
(130)
and qb(O,
3. At z
= 0 (initial condition) qe(Xe
9
and O) = qbs(Xb) (131) In addition we shall use the following voltage and current conditions as given by Eqs. (28),(29), and (21): qb(xb,
qej(z)/qeO
= qbj(Z)/qbO = exp[qy(z)/kTI -
and
[,
Le
aqe(xe, axe
z,
I-<[
Lb
aqb(xb, axb
z, ]Xb=O
=o
1
(132)
(133)
In order to obtain the expression for voltage decay i.e., Y ( z )as a junction of z, we solve Eqs. (124) and (125) subject to the boundary and the initial
conditions as given above. The solution can be easily obtained by using the standard Laplace transform method (see, for example, Carslaw and Jaeger,
OPEN-CIRCUIT VOLTAGE DECAY
373
1959). Use of Eq. (133) then gives a relation between qej(z)and qbj(z).Finally, use of Eq. (132) gives the desired expression for y ( z ) as a function of z. The initial ECP in steady state for the emitter and the base have been given in Eqs. (36) and (37). Using the inequality (79) the following result can be easily obtained:
1 exp[AV,(z)] = -[b(S(z) b-a
- aS(a2z))exp( -2)
where
and the function S(z) is as defined by Eq. (86). The right-hand side of Eq. (134) is known as North's expression and was first quoted by Lederhandler and Giacoletto (1955) without giving its derivation or the model and assumptions on which the expression was derived. North's expression as derived above includes the effect of p-n coupling in the sense described at the beginning of this section. We shall now briefly mention the qualitative features of FCVD as predicted by the North's expression. 1. If j >> 1 and
>> 1, then Eq. (135) reduces to
A V,(z) = - z
+ In S(z)
(138) which is the usual expression for FCVD in base-dominated diodes as given earlier by Eq. (80). In this limit therefore the effect of coupling is negligible and FCVD is not affected by the emitter. 2. We consider the case when j s l
(139)
and Using the above inequalities in Eqs. (134)-(136) we obtain a = l/j,
b=1
(141)
374
V. K.TEWARY AND S.C. JAIN
and e-*
exp[AVl(z)] = -[S(z) 1-a
- aS(a2z)]
It may be remarked that in devices with heavily doped emitters, inequality (1 39)may be satisfied for two reasons: (i) an increase in qeo[see Eq. (137)] due to band-gap shrinkage and (ii) a decrease in L, because of decrease in excess carrier lifetime due to Auger recombination, etc., operative at high doping levels. We note from Eq. (142)that for a = 0, i.e., if j >> 1 so that inequality (140)is satisfied, this case reduces to case 1, i.e., Eq. (138).Thus we infer that the terms containing a in Eq. (142)arise due to the p-n coupling. 3. We now consider the large time (z >> 1)behavior of FCVD as given by Eq. (134)assuming that >> 1. In this limit we obtain the following:
For large z the contribution of the second term on the right-hand side of Eq. (143)will be small. Thus we see that, as in the case of base-dominated diodes discussed in Section 111, FCVD curve becomes linear in z with slope unity. Obviously the linear region is not affected by p-n coupling. The same result is of course predicted by Eq. (142),which is an approximate form of Eq. (1 34). 4. Finally we consider the low time (z + 0) behavior of FCVD. By using the low z expansion of S(z) (Abramowitz and Stegun, 1965),we obtain the following result for z + 0:
The effect of p-n coupling will be negligible in the limit of large j,as can be verified by comparing the large j limit of Eq. (144)with the low z limit of Eq. (80). The dependence of the right-hand side of Eq. (144)on for finite values of j therefore arises due to the p-n coupling. A particularly interesting effect of the p-n coupling can be seen from Eq. (144)by considering the limit j << 1 but j& >> 1. In this limit, which represents a strong p-n coupling,
Obviously for j << 1, AVl (z) will be large. Thus a strong p-n coupling results in a large voltage drop in the FCVD curve near z = 0, i.e., in the initial stages of
OPEN-CIRCUIT VOLTAGE DECAY
375
the decay. This drop is quite sharp because the derivative of AV,(z) with respect to z is singular at z = 0. The sharp drop predicted by Eq. (145) is called the coupling drop. This drop will exist in a device in which the emitter is heavily doped so that { >> 1 and because of band-gap shrinkage j<< 1. The coupling drop will occur within a time which is of the order of re. In most cases 7, is of the order of nanoseconds whereas z, is of the order of a few microseconds. The FCVD curve is mostly observed on a CRO set to a time scale in microseconds. The coupling drop therefore would appear to be almost vertical and will get added up to the ohmic drop caused by the series resistance of the device [see Eq. (7411. It is therefore important to separate out the coupling drop from the initial vertical drop for the purpose of determining the series resistance of the device. The qualitative features of FCVD as discussed above will be apparent from Fig. 14 taken from Tewary and Jain (1983).To summarize, the effect of p n coupling on FCVD is as follows: 1. The initial stages of the decay are sensitive to p-n coupling. 2. The effect of p-n coupling is to make voltage decay faster, i.e., to make a FCVD curve more steep. This effect is more for lower values of j . 3. In certain cases ( jc< 1, j 2 { >> 1) the effect 2 can be quite pronounced. It results in a very rapid drop in the voltage right at the beginning of the decay. This drop, referred to as the coupling drop, will be added to the ohmic drop due to the series resistance of the device.
z-
FIG.14. Reduced voltage V,(z) - V,(O)as a function of z. Curve 1 is the ideal linear law, curve 2 is from Eq. (138), curves 3 and 4 are from Eq. (142) for a=0.5 and 10, respectively, and curve 5 is from Eq. (134) for a = 0.1 and b = 0.7.
376
V. K. TEWARY AND S.C. JAIN
4, The large time behavior of FCVD is not sensitive to the p-n coupling. The slope of the linear region in the FCVD curve at large times is determined only by the excess carrier lifetime in the base of the device. C. Experimental Results on FCVD Showing the Efect of p-n Coupling Madan and Tewary (1985) have carried out a simple experiment which provides at least a qualitative verification of the basic physical ideas underlying the theory described in the preceeding section. Their results clearly show the existence of the coupling drop in the FCVD curve. As mentioned in Section IV.B, the coupling drop A K is added to the ohmic drop given by Eq. (74).The total vertical drop AVOin the FCVD curve at z = 0 is therefore given by AVO = A%
+ IFR,
(146)
We also define an apparent resistance R* as follows: R*
= AVo/ZF
(147)
so that, from Eq. (146)
R*
= R,
+ AK/IF
(148) The significance of the apparent resistance R* should be obvious. It denotes the value of the resistance which would be obtained from the initial vertical drop in the FCVD curve without separating the contribution of the coupling drop. In a device in which A K is almost 0 (small p-n coupling) R* will be almost equal to R,. From the theory in Section 1V.B and also in Section 111, it is clear that A K is independent of I,. We can also assume that R , is independent of IF within the limits that are relevant to our present discussion. Then, Eq. (146) shows that AVOhas a linear dependence on I F . The slope of this straight line will be as R , and its intercept with the y axis (AVOaxis) will be equal to A K . This is obviously in agreement with the experimental results of Madan and Tewary (1985) as shown in Fig. 15. From the slope and the y intercept of the linear region they obtain the following values: R , = 0.24 R
and AV,=27 mV Figure 15 also gives the measured values of the apparent resistance R*. We see from the figure that the apparent resistance has a hyperbolic dependence
377
OPEN-CIRCUIT VOLTAGE DECAY
E
iii
- 63 V
c
9
0
25 50 CURRENT (mA)
75
too
2
-
’p
FIG.15. Initial voltage drop (curve a) and apparent series resistance (curve b) as a function of injection current. Dotted portion of curve a indicates linear extrapolation to zero current.
on I, as predicted by Eq. (148). For large values of I,, R* becomes close to R, as theoretically expected from Eq. (148). However, if I, is too large, the high injection effects that have not been included in the theory so far become significant. Also, at very low values of I,, the low injection effects that have been neglected in the present theory become important. The departure from linearity in the AVO versus I, curve at low values of I, may be due to low injection effects. For a further verification of the theory, Madan and Tewary (1985) added known additional resistance R, in series to the diode and measured AVO for each added resistance. The effect of R, is obviously to increase the series resistance of the device from R, to R, + R,. Using these results, Madan and Tewary (1985) verified that, as expected from Eq. (146), the slope of the AVO versus IF line changed from R, to R, + R, but the intercept A K remained unchanged. A more direct experimental analysis of the coupling drop can be done by using a high-resolution CRO and adjusting it to the time scale of 7, (nanoseconds). Such an experiment should be quite interesting and should yield useful information about 7,. Madan and Tewary (1985) have also analyzed their FCVD results using Eq. (142), which is a reasonable approximation of the North’s expression given by Eq. (135). The experimental and the “best” fit experimental FCVD curves have been shown in Fig. 16. The theoretical curve in this figure was obtained by using a least square fitting procedure which yielded zb = 11 psec and a = 1.1. It may be remarked that the “measured” value of a as reported above and ,-, may be used to estimate the band-gap shrinkage in the emitter. Let .I:
V. K.TEWARY AND S. C . JAIN
378
- 0.4
0
10 20 3 0 4 0 50 60 70 80 TIME(ps)
FIG. 16. Calculated open-circuit voltage decay curve in a p-n diode. Points indicate experimental results after subtracting series resistance contribution.
Jbo represent the saturation currents in the emitter and the base of the device, respectively, in the absence of any band-gap shrinkage. J:, and JLo can be calculated from the thermal equilibrium values of the carrier concentration (doping levels). If the band-gap shrinkage in the emitter is AE,, then the effective saturation current in the emitter will be given by Jeo = J:,exp(AE,/kT)
(149)
We assume that the base is not heavily doped so that there is no band-gap shrinkage in the base. Then we get from Eqs. (141) and (149). a = a’exp(AE,/kT) or AE, = kTln(a/a’) ( 150) where a = J,o/Jbo is the value obtained from the FCVD data and a‘ = Jeo/Jbo is the value of a obtained without accounting for band-gap shrinkage. Eq. (150) gives a reasonable estimate of AEDconsistent with the FCVD data.
D. ESfect of p-n Coupling on OVD in Thick Solar Cells The effect of p-n coupling on PVD in thick solar cells ( w b + 00) has been discussed by Sharma et al. (1981) and Sharma and Tewary (1983). They have used the same model as shown in Fig. 1 and described in Section 1I.A. The
OPEN-CIRCUIT VOLTAGE DECAY
379
time-dependent coupled diffusion equations to be solved in the present case are the same as Eqs. (124) and (125) because the illumination is assumed to have been switched off at z = 0. The boundary conditions at the back of the base (Wb = 00) and at the junction are the same as in the case of FCVD [Eqs. (127)-( 129)]. Formally, the initial conditions in the present case are also the same as those given by Eqs. (130) and (131). The ECP in the base for the present case is given by Eq. (57). The ECP in the emitter is somewhat complicated and is not quoted here (see Sharma et al., 1981). The boundary condition at the front of the emitter is as given by Eq. (11) without the second term on the right-hand side. The second term qNo will contribute only in the steady state and not in the transient state after the illumination has been switched off. The formal solution of Eqs. (124) and (125),subject to the above boundary and initial conditions, can be easily written in terms of the Laplace transform. However, it is not possible to obtain the inverse Laplace transform in a closed analytical form in the present case. Sharma and Tewary (1983) have used the Tricomi method (see, for example, Sneddon, 1972) to obtain the solution in a semi-analytic form. Sharma et al. (1981) have given an expression for PVD which is valid at small times. In this limit (z + 0), the PVD is given by AVI(z) = -Az(z << 115' << 1)
(151)
where
!l
= Le/Lb
and JQis given by Eq. (62). Equation (151) is convenientfor a qualitative understanding of the effect of p-n coupling on PVD. First, it may be remarked that the decay rate A given by Eq. (152) does not really depend upon No since JBis proportional to No [see Eq. (6211. The decay rate A in practical units is shown in Fig. 17 as a function of a for different values of q and S,. The calculations refer to a n+-p solar cell used by Fossum et al. (1979). The material parameters of this cell are as follows: emitter, d, = 0.1 pm, z, = 1 nsec, L, = 0.3 pm and Jeo= 18.6 x A/cm2; base,z, = 4.1psec,Lb = 80pmandJb, = 6.3 x 10-14A/cm2.1tmaybenoted that in this cell j << 1 which shows a substantial band-gap shrinkage. The surface parameters of the emitter, namely q and S,, affect the decay rate only through Jgand Jeo.In fact the dependence of PVD on q comes only through the steady-state ECP. We observe from Fig. 17 that for low values of a, A is quite sensitive to S, as well as q. This is because JB decreases as a
380
V. K. TEWARY AND S. C. JAIN
31
I
16~
I I
I
10
I I IO* lo3 oc (crnl)
I
I
I
lo4 lo5 lo6
FIG. 17. Variation of decay rate A of open-circuitphotovoltage at t = 0 as a function of the absorption coefficient a of light. For curves A, B, and C, S = lo4 cm/sec and 4 = 0.1,0.01,and 0, respectively. For curves D, E, and F, S = lo7 cm/sec and q = 0.1, 0.01, and 0, respectively. Add 13.2 to all numbers on the y axis.
decreases but the absorption at the surface characterized by q does not change. Hence the relative contribution of surface absorption increases. For very large values of a (>> l/de) the front surface and the emitter again makes a significant contribution because more light is absorbed in the emitter. In this case the surface recombination dominates over surface absorption and therefore A is not sensitive to q for large a. Another interesting feature of Fig. 17 is the existence of a maximum which occurs at approximately a = l/de.This arises because of the presence of the a exp( -ad,) term on the right-hand side of Eq. (1 5 1) (since C, = aL,).The maximum is not exactly at a = l/d, because of the dependence of Jg on a. More detailed FCVD curves as calculated by Sharma and Tewary (1983) have been shown in Figs. 18-21. Figure 18 shows, as in the case of FCVD discussed in Section IV.C, that the effect of p-n coupling is more pronounced for smaller j (larger a) and larger 5. Figure 19 gives the effect of q and a on PVD. This figure also gives the corresponding result obtained by using Jain’s (198 I) theory, i.e., Eq. (87). The effect of q is more significant in the initial stages of the decay, which has been discussed earlier in this section. The main features of the spectral dependence of PVD, i.e., the dependence on a as observed from Fig. 19, are as follows: 1. The effect of p-n coupling is more significant at low values of z. The slope of the linear region in the PVD curve for large z is not affected by p-n coupling. This is similar to FCVD discussed in Section 1V.C. 2. The decay rate is faster for large a in the initial stages (low z). This is similar to the case of PVD without p-n coupling as discussed in Section 1II.B. It is physically expected because the slope of ECP at the junction in the base in
38 1
OPEN-CIRCUIT VOLTAGE DECAY 0
0.25 I
>
l
Timolin unitr of Tbl 0.5 0.75 1.0 J
1
1
1
1
1
1.25 1
1
1
2A IA/28
IB
1.81-
IC FIG.18. Effect of relative dark saturation current j (curve 1) and relative minority-carrier lifetime 5 = rb/rc(curve 2) on PVD. For curve 1,5 = loo0 and j = 0.1,5, and 10 for A, 9,and C, respectively. For curve 2, j = 0.1 and 5 = 100 and 1000 for A and B, respectively. For all these curves a = lo3 cm-I, q = 0.01, S = lo3cm/sec, and d = 0.1 pm.
Time (in units of Tb) 025 0.5 0.75 1.0
0 I
I
(
I
l
I
I
I
1.5
1.25 I
1
1
I
280
c
3
3.6-
FIG. 19. Effect of the absorption coefficient of light (a)and surface generation coefficient (9) on PVD for j = 30, = 0.003,d = 0.1 pm, and S = lo4 cmjsec. Curves 1 and 2 are for q = 0 and 0.01, respectively; A, 9,and C denote the curves for OL = 10, lo2,and 10’ cm-I, respectively. Curves labeled J are obtained by using Jain’s theory (in which p-n coupling is neglected) with A, 9,and C referring to values of a as above.
382
V. K. TEWARY AND S. C. JAIN Time (in units of Tbt 0.25 0.5 075 1.0 1
1
1
1
1
1
1
1.5
125 1
1
1
~
1
36
FIG.20. Effect of surface recombination velocity ( S ) on PVD. Curves 1 and 2 are for A, B, and C denote the curves for S = 10 cm/sec, 10’ cmjsec and lo3 cm/sec, respectively. For all these curves q = 0, z,/t, = 0.003,j = 30, and d = 0.1 pm. a = 10’ cm-’ and lo’ cm-’, respectively;
steady state increases as a increases. This results in a faster diffusive loss of excess carriers at the junction when the decay starts. It may also be noted that, as in the case of PVD without p-n coupling (Jain, 1981), the PVD rate becomes insensitive to a for very large and very small a. 3. The effect of p-n coupling becomes more significant for larger values of a. This, as discussed earlier in this section for low values of z , is due to the fact that more light is absorbed in the emitter. The effect of S, on PVD is shown in Fig. 20. It is qualitatively similar to that at low values of z which has been discussed earlier. The effect of the thickness of the emitter d, on PVD is shown in Fig. 21. We see from this figure that this effect is more for larger values of d,, which is physically expected since a thicker emitter would absorb more light and hence would store more excess carriers. This effect is more significant for smaller values of j since, as discussed earlier, the effect of p-n coupling itself increases as j decreases. Sharma and Tewary (1983) have also tried to fit their theoretical PVD curve with the experimental results of Madan and Tewary (1983). However, because of a large number of parameters involved in the theory, a unique set of values of these parameters could not be obtained. It would be interesting to have more detailed experimental results for PVD in a solar cell. If some of the parameters are known by independent measurements it should be possible to estimate z, and q, by fitting the theoretical and experimental results.
383
OPEN-CIRCUIT VOLTAGE DECAY Timotinunits of Tb) 0.5 0.75 1
1
I
I
1.0
I
1.25
.
I
I
I
1.5 l
l
2 ‘8
1
2
c
FIG.21. Effect of diffused layer thickness ( d )on PVD. Curves 1 and 2 are for j = 0.1 and 10, respectively; A, B, and C denote the curves for d = 0.1 pm and 0.3 pm, respectively. For all these curves S = lo3cm/sec, q = 0.01, s,/T, = 0.001, and a = lo3 cm-’.
V. QUASI-STATIC EMITTER APPROXIMATION AND ITS APPLICATIONS IN SOLVING COUPLED DIFFUSION EQUATIONS FOR OCVD In the previous chapter we saw that the effect of p-n coupling on OCVD can be quite significant in certain cases. In general the mathematical problem of solving the coupled diffusion equations is quite complicated. An analytical solution, i.e., the North’s expression could indeed be obtained for FCVD in thick diodes but in a general case of practical interest, one has to resort to numerical or semianalytical treatment as in the case of PVD in Section 1V.D. In this section, we shall describe the so-called quasi-static emitter (QSE) approximation which was proposed by Jain and Muralidharan (1981) and which can be used for solving the coupled diffusion equations. The QSE approximation effectively decouples the two diffusion equations by modifying the boundary condition at the junction. With the help of QSE approximation, it would suffice to solve the diffusion equation in the base while the effect of emitter is approximately included in terms of a modified boundary condition. The QSE approximation has been shown to be quite accurate for t >> 2,. The QSE approximation has been found to be quite accurate except for very small times (of the order of re).For the purpose of determining z b and the band-gap shrinkage the region of time z, << t 5 z b is of considerable practical interest. In this region the QSE approximation has been found to be extremely
384
V. K. TEWARY AND S. C . JAIN
useful. The QSE approximation is of course valid even for larger times t >> 7 b but then the effect of p-n coupling itself becomes insignificant. In this section we shall first illustrate the QSE approximation by applying it to calculate FCVD in thick diodes for which the exact result, i.e., the North's expression, is available as described in Section 1V.B. We shall then use the QSE approximation to calculate FCVD in a thick diode containing a constant drift field in the base and PVD in a solar cell with arbitrary thickness with and without a constant drift field in the base. In all these cases the effect of p-n coupling is included through the QSE approximation while still neglecting the high injection effects. A . Q S E Approximation
The main assumption in the quasi-static approximation is that the shape of the ECP does not change during decay. We make this assumption for the emitter only and not for the base. Thus we write the following for the timedependent ECP in the emitter: where qe,(Xe)is the ECP in the steady state and the function fl(z) is to be determined. As we shall see in this and the following subsections, use of Eq. (153) in the zero-current condition at the junction decouples the diffusion equations, resulting into a considerable simplification. It is tempting to try a similar quasi-static approximation in the base as well. Such an attempt was made by Lindholm and Sah (1976). However, detailed calculations of Jain and Muralidharan (1981) show that, except for a short duration of time, the ECP in the emitter retains its steady-state shape. On the other hand, the shape of ECP in the base deteriorates very fast after the decay starts. The quasi-static approximation is therefore not valid in the base. We now illustrate the use of QSE approximation for calculating FCVD in a thick diode. The exact solution of this problem has already been given in Section 1V.B. Using Eq. (153) in the zero-current condition given by Eq. (133) we obtain
The function fi(z)in fact gives the voltage decay. This can be seen by using the voltage condition as given by Eq. (132) in Eq. (153) which gives the following result:
fi(4= exPCA~l(z)l where we have used the inequality (79).
(155)
OPEN-CIRCUIT VOLTAGE DECAY
385
The coefficient of fl(z)on the right-hand side of Eq. (154) is the emitter current at the junction in the steady state as given by Eq. (19)for Eej = 0. Using Eqs. (41) and (29),we obtain from Eq. (154)
or
where a = l/j as given by Eq. (141). We see that the QSE approximation used along with the current and the voltage conditions has yielded a relation between the derivative and the height of the ECP in the base at thejunction. The effect of emitter or the p-n coupling is represented by the parameter a. Equation (156) can be prescribed as the boundary condition at the junction. It would thus suffice to solve the diffusion equation in the base subject to the boundary condition at the junction as given by Eq. (156)and the appropriate boundary condition at the back surface of the base. The solution of Eq. 125, i.e., the diffusion equation in the base of the diode can now be easily obtained by using the LapIace transform method. The boundary conditions are as given by Eq. (127) for q b ( x b , z) and Eq. (1 56) with the initial condition specified by Eqs. (131) and (37). The result is e-* exp[AV1(z)] = - [ S ( Z )
- as(a2z)]
1-a
(157)
We note that Eq. (157) is identical to Eq. (142), which was obtained by approximating Eq. (134). As we have already discussed in Section 1V.B (see also Madan and Tewary, 1985), this approximation is very good for t >> t,. Unless our interest is particularly in the determination of z, and other characteristics of the emitter, the QSE approximation should prove to be adequate. As shown by Jain and Muralidharan (1981) and Jain and Ray (1983a), the QSE approximation is valid, except at short times of the order of z,, provided the following two conditions are satisfied:
< >> 1
1.
2.
>> z,
(1 58)
(159) where zeff is the effectivelifetime of excess carriers as determined from the slope of the linear region in the OCVD curve [see Section III.C, Eq. (10311. Z,ff
386
V. K. TEWARY AND S. C. JAIN
B. E#ect of Drift Field on FCVD in a Thick Diode This problem has been solved by Jain and Ray (1983a) using the QSE approximation which gives the results in a closed analytical form. The timedependent diffusion equation for the excess carriers in the base is the same as Eq. (75) which is rewritten below:
The explicit dependence of q b on xb and z has not been shown for notational brevity. The boundary condition at the back of the base is given by Eq. (127). We need to specify the current and the voltage conditions at the junction as given by Eqs. (21) and (29), respectively. Using the QSE approximation (Jain and Ray, 1983a) and following the steps leading to Eq. (156) we obtain the following boundary condition at Xb = 0:
+ 2Fbqb = a q b
(161) In addition, we have to specify the initial condition as given by Eq. (131) and Eq. (48). Equation (160) can now be easily solved by using the Laplace transform method. The final result is aqb/axb
where = (a - F
b
)
/
m
(163)
and
z1 = (1
+ F,?)z
( 164)
Equation (162) is similar to Eq. (157) with a and z replaced by a, and z,, respectively. This similarity is not surprising because, as shown in Section 11, the solution of Eq. (160) is related to the solution of Eq. (153) through a simple transformation. The effect of the emitter on FCVD as given by Eq. (162) is represented by the coupling parameter al . This parameter depends upon the drift field in the emitter F,, T,, and band-gap shrinkage AEs. In general all these quantities, i.e., F,, T,, and A& are not constant throughout the emitter. Jain and Ray (1983a) assumed them to be constant, having appropriate effective values which determine Jeo and hence the coupling parameter. In terms of the effective values of F,, T,, and AEg, Jeo is given by J ~= o J:OCFlecoth(F1eW,) - Fel ex~(AFg/kT)
(165)
OPEN-CIRCUIT VOLTAGE DECAY
387
where Jeo is the saturation current for F, = 0, AE, = 0, and We = 00 and
+ F e2 ) 112
(166) As given in Section IV.C, the value of a and hence Jeo can be estimated from the FCVD data, Equation (165) can then be used to estimate AE, since F, can be estimated from the known doping profile in the emitter. We shall now discuss the effect of drift field in the emitter on FCVD. For simplicity we assume that AE, = 0 and We = 00. This gives Fie
a = a0CF1e - Fel
(167) where a, is the value of a for F, = 0. We see from Eq. (167) that for retarding fields (F, > 0), a will decrease as Fe increases. The effect of p-n coupling will therefore decreases as F, increases. In the limit F, = 00, a = 0, the diode will behave like a base-dominated diode. This is physically expected because a retarding field drives the carrier toward the junction. Hence the injection efficiency of the emitter increases with increasing F,. It becomes unity for Fe = a, i.e., the diode becomes a base-dominated diode. On the other hand, an accelerating field (F' 0) drives the carriers away from thejunction. We expect physically therefore that an acceleratingfield will accelerate the decay. Due to a faster depletion of carriers at the function in the emitter, the base has to provide more carriers to the junction to satisfy the voltage condition. The strength of p-n coupling will therefore increase as the accelerating field increases. This is predicted mathematically by Eq. (167) which shows that, for F, < 0, a increases as the magnitude of F, increases. The effect of drift field in the base is to replace a and z by a, and z1 in accordance with Eqs. (163) and (164), respectively. From the similarity between Eqs. (163) and (142) we expect that Vl(z)will become linear in z for large z and the slope of the linear region will be (1 + F t ) . The effective lifetime in the base, as determined from this slope, is therefore given by
-=
1 - -(1 1 _ Teff
+Fi)
Tb
We infer therefore, as physically expected, that acceleration drift fields will accelerate the decay. In the limit l&l = co,reffas given by Eq. (168) becomes zero. However, in this limit the inequality (158) will be violated and hence the QSE approximation will not be valid. Further, we see from Eq. (163) that a, will increase as the magnitude of Fb increases so that the strength of p-n coupling will also increase for acceleration drift fields. The increased p-n coupling will accelerate the decay in the initial stages. Figures 22 and 23 show the calculated FCVD curves for different magnitudes of accelerating as well as retarding drift fields. The effect of acceleratingdrift fields has already been discussed. The effect of retarding drift
388
V. K. TEWARY AND S. C. JAIN
-15
1 7
3
4
8
5
FIG.22. FCVD plots for several values of accelerating (negative Fb)and retarding (positive Fb)drift fields in the base. The parameter a is equal to 2 for all the plots.
fields, as can be seen from Fig. 22, is also to accelerate the decay. This apparently inconsistent effect can be understood as follows. We notice from Eq. (163)that for large retarding drift fields (Fb> 0), a,, the effective coupling parameter, becomes negative. It can be shown (Jain and Ray, 1983a) that, even for negative values of a , , the function on the right-hand side of Eq. (162) has a linear dependence on z , for large zl.The FCVD in this region of time is given by the following expression which is valid for
0
1
2
3
4 5 6 7 8 2 FIG.23. FCVD plots for several values of retarding drift fields in the emitter for a diode in which the emitter is not heavily doped and there is no drift field in the base. The parameter a, [see Eq. (167)] has been taken as 2 for all the plots.
OPEN-CIRCUIT VOLTAGE DECAY
AVl(z) = -(1 - a:)(l
+ F i ) z t In (1 ~
389
Yi!,,)
The effective lifetime as defined in terms of the slope of this linear region is then given by
We see from Eq. (170)that, since Fb > a > 0, by hypothesis Teff < z b . This explains how the voltage decay becomes faster because of a strong retarding field in the base. In fact this case is particularly interesting because here the p-n coupling has made a qualitative difference in FCVD. That this result arises from p-n coupling is clear from Eq. (170) since if we put a = 0 in this equation, T~~~ becomes equal to zb. This is consistent with Eq. (1 19), which shows that in the case of a strong retarding drift field in a base-dominated diode, 7eff = zb and is independent of F b as well as wb. In the case of weak retarding drift fields (0 < Fb < a),we see from Eq. (163) that as Fb increases from zero, a, decreases. This reduces the strength of the p-n coupling. When F b becomes equal to a, a, = 0 and the p-n coupling becomes zero. C . FCVD in Thin Base Diodes with Constant Drift Field
In this section we shall mainly report the results of Ray et al. (1982). The diffusion equation for this case is the same as Eq. (160). The QSE boundary condition at the junction that accounts for the p-n coupling is as given by Eq. (161). The boundary condition at the back of the base is as given by Eq. (106). The steady-state profile required for the initial condition, i.e., Eq. (33) is obtained from Eq. (46) using the transformation given by Eq. (48). The diffusion equation in the base, subject to the boundary conditions mentioned above, is solved by the usual method of expansion in terms of orthogonal functions. As in Sections 1II.C and D, the following expression for FCVD can be easily derived (Ray et al., 1982): exp[AV,(z)]
= zArexp( -$z) r
(171)
where
&,2 = 1 + F ;
+ p;
(172)
and p r are obtained as the roots of the following equation:
(s,*+ a)pr - [k + ( F b - a)(s,* + Fb)l
tan(/-bWb)=
(173)
390
V. K. TEWARY A N D S . C. JAIN
As in Section 1II.C or D, the effective lifetime is obtained in terms of the slope of the linear region of the FCVD curve. In the present case it is given by -=
1
1 -(I
Teff
Tb
+ F: + p i )
(174)
where po is the smallest real or imaginary root of Eq. (173). As mentioned in Sections 1II.C and D, p, do not depend upon the steady-state ECP. The coefficients A, are obtained by the steady-state ECP. The FCVD curves for different positive and negative values of Fb have been shown in Fig. 24. For all these curves the value of a is taken to be unity and S,* = co. We see from this figure that the voltage decay becomes faster for retarding as well as accelerating fields. This is similar to the result obtained in the preceding section. As mentioned in that section, the retarding field makes the decay faster because of the p-n coupling. The effect of SRV at the back surface of the base (i.e., at the low-high junction) on FCVD is shown in Fig. 25. For these calculations, the values of wb,a, and Fb are taken to be unity. It is found that FCVD is not sensitive to S,* for S : > 100 for w b = 1. This S,* = 100 almost corresponds to the limiting case of an ohmic contact. In general, as found in Section 1II.C without p-n coupling, the voltage decay becomes faster as S,* increases. This is physically b would result in a expected because an increase in S,* for small values of w faster removal of carriers from the junction.
0,
I .o
Z--
2.0
3.0
FIG.24. The reduced voltage V,(z)- V,(O)versus reduced time for various values of the base drift fields.
39 1
OPEN-CIRCUIT VOLTAGE DECAY 2
I
z-
4
6
-5-
G 7I -10 -
-
N-
2
-15
-
FIG.25. The reduced voltage V,(z)- V,(O)versus reduced time z = t/zb for different values of reduced surface recombination velocity at the base S t = SbLb/Db.
Ray et al. (1982)have also reported their experimental results on FCVD in the n+-p-p+ silicon varactor diode. Their observed FCVD curve is shown here in Fig. 26. We see from this figure that the linear region starts very soon and extends up to about 1 psec. The curvature after 1 psec is attributed to the space-charge layer. The diode used in the experimentswas quite thin (W, was small).Hence the series in Eq. (171) is very rapidly convergent. Within a short time, the first term
>-
> 15 0
I
2 t in Nsec-
3
4
FIG.26. The observed voltage decay curve for a diode.
392
V. K. TEWARY AND S . C. JAIN
in this series starts dominating over higher terms. This explains the early onset of the linear region in the FCVD curve. The effective lifetime, as determined from the slope of this linear region, comes out to be 0.16 psec. From the known doping profile in the base, Ray et al. (1982) estimated S,* = 0 (showing a very “good” BSF diode) and drift field & to be 36 V/cm. They estimated the value of a to be approximately unity by using the dark I-V characteristics of the diode. Finally, the value of ‘tb, as determined from the FCVD data, is found to be 0.5 psec with &, = 3. D. PVD in Solar Cells
In this section we shall use the QSE approximation to calculate PVD in thick as well as thin solar cells including the effect of p-n coupling. For thick solar cells, the effect of p-n coupling on PVD has already been discussed in Section 1V.D without using the QSE approximation. The advantage of QSE approximation is that it yields the result in a closed analytical form as obtained by Jain and Ray (1983b). The effect of drift field is neglected. First we consider a thick base solar cell. The diffusion equation for the base is same as Eq. (75) for F b = 0 [or Eq. (125)] and only monochromatic illumination is considered. The boundary condition at the back is as given by Eq. (127).The initial condition is prescribed by Eq. (131)with the steady-state ECP given by Eq. (64).It may be remarked that this expression for steady-state ECP does not include the contribution of the emitter. However, the nature of the steady-state ECP affects PVD only in the early stages of decay for which the QSE approximation is not valid any way. The effect of p-n coupling on PVD at low times has already been discussed in Section 1V.D. The boundary condition at the junction is derived by using the QSE approximation (Jain and Ray, 1983b) and is given below:
The solution of the diffusion equation subject to the aforementioned boundary and initial conditions can be obtained by the Laplace transform method. The resulting expression for PVD (Jain and Ray, 1983b) is as follows:
e-‘
-
cb)(a
[C,S(Ctz) - aS(a2z)] - cb)
(176)
It has been verified by actual numerical calculations that, as mentioned earlier in this section, the final result for t >> τ_e (or z >> 1/ξ) is not sensitive to whether or not the steady-state ECP includes the effect of the emitter. The PVD curves as calculated from Eq. (176) (Jain and Ray, 1983b; also see Mallenson and Landsberg, 1977) are shown in Fig. 27 for a = 0 and 2 and for C_b = 0.5. The figure also shows the FCVD curve for a = 2. The qualitative nature of the effect of p-n coupling on PVD as indicated by Fig. 27 is of course the same as discussed in Section IV.D. An expression for PVD which includes the effect of p-n coupling in thin solar cells can also be derived quite easily by using the QSE approximation. We consider a solar cell having a BSF in the base but no drift field. The boundary condition at the back of the base is given by Eq. (13). The steady-state ECP is taken to be that given by Eq. (65) or Eq. (69), depending upon the value of S_b*. The boundary condition at the junction is the same as prescribed by Eq. (175), which was obtained by using the QSE approximation. The expression for PVD as obtained by the standard method of expansion in terms of orthogonal functions is given below:

\exp[\Delta V_j(z)] = \sum_r A_r \exp(-\psi_r^2 z)    (177)
where, as in the earlier cases, the A_r are obtained by using the steady-state ECP,

\psi_r^2 = 1 + p_r^2    (178)
FIG. 27. PVD for a solar cell with base thickness W_b. Curve 1 corresponds to a base-dominated cell (a = 0) and curve 2 is for the coupling parameter a = 2. The value of C_b (= αL_b) for curves 1 and 2 is 0.5. Curve 3 is the FCVD plot for a = 2.
and the p_r are the roots of the following equation:

(S_b^* + a) p_r - (p_r^2 - a S_b^*) \tan(p_r w_b) = 0    (179)
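Since Eq. (179) is transcendental, the p_r entering Eq. (177) must be found numerically. The sketch below is one minimal way of doing this in Python, bracketing one root in each interval between a zero and the next singularity of tan(p w_b); it takes Eq. (179) in the reconstructed form shown above, and the parameter values a = S_b* = w_b = 1 are illustrative assumptions only.

```python
import numpy as np
from scipy.optimize import brentq

def roots_of_eq_179(a, Sb_star, wb, n_roots=4):
    """First few positive roots p_r of the reconstructed Eq. (179):
       (Sb* + a) p - (p**2 - a*Sb*) tan(p*wb) = 0,
    one root sought in each interval (k*pi/wb, (2k+1)*pi/(2*wb))."""
    f = lambda p: (Sb_star + a) * p - (p**2 - a * Sb_star) * np.tan(p * wb)
    roots, eps = [], 1e-8
    for k in range(n_roots):
        lo = k * np.pi / wb + eps                   # just past a zero of tan(p*wb)
        hi = (2 * k + 1) * np.pi / (2 * wb) - eps   # just below its next singularity
        if np.sign(f(lo)) != np.sign(f(hi)):        # skip intervals without a sign change
            roots.append(brentq(f, lo, hi))
    return roots

print(roots_of_eq_179(a=1.0, Sb_star=1.0, wb=1.0))  # illustrative parameters only
```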
The series on the right-hand side of Eq. (177) is rapidly convergent for large z and small w_b. The effective lifetime, determined as in Eq. (103) or Eq. (104), is written in the following form:

\frac{1}{\tau_{\rm eff}} = \frac{1}{\tau_b} + \frac{1}{\tau_1}    (180)

where

\tau_1 = \tau_b / p_0^2    (181)
and p_0 is the leading root of Eq. (179). The parameter τ_1 in fact does not explicitly depend on τ_b (Jain and Ray, 1983b). This is because p_0 is proportional to 1/w_b and therefore to L_b, where L_b, the diffusion length, is (D_b τ_b)^{1/2}. A weak implicit dependence of τ_1 on τ_b comes through the dependence of p_0 on the parameter a (since the saturation currents depend upon the diffusion lengths). Thus, for all practical purposes, τ_1 can be treated as independent of τ_b. The effects of p-n coupling as well as of the finite thickness of the base of the solar cell are included in τ_1. The calculated values of τ_1 in a real solar cell (Jain and Ray, 1983b) are shown in Fig. 28 as a function of the SRV S_b for different values of J_e0 and w_b. In these calculations D_b = 13 cm²/sec and q_b0 = 0.82 × 10^-14 C/cm³. These values correspond to an n-type base of resistivity 1 Ω-cm.
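Once the leading root p_0 is available (for instance from the root-finding sketch above), Eqs. (180) and (181) give the effective lifetime directly; the few lines below merely restate that arithmetic with placeholder numbers.

```python
def effective_lifetime(tau_b, p0):
    """Eqs. (180)-(181): tau_1 = tau_b / p0**2 and 1/tau_eff = 1/tau_b + 1/tau_1."""
    tau_1 = tau_b / p0**2
    return 1.0 / (1.0 / tau_b + 1.0 / tau_1), tau_1

# placeholder inputs: tau_b = 105 usec and an assumed leading root p0 = 0.9
tau_eff, tau_1 = effective_lifetime(105e-6, 0.9)
print(tau_eff, tau_1)
```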
FIG. 28. Characteristic decay time constant τ_1 for a 1 Ω-cm n-type-base solar cell as a function of S_b (cm/sec) for different values of J_e0. The solid curves correspond to d_b = 300 μm and the dotted ones are for d_b = 100 μm. The y-axis scales for the solid and dotted curves are given, respectively, by the left- and right-hand sides of the figure.
For an ideal BSF solar cell (S_b* = 0), it can be easily verified that if a = 0 (no p-n coupling), p_0 = 0 and τ_1 = ∞, so that τ_eff = τ_b. This is the same case as given by Eq. (112). We see that in the absence of p-n coupling, the PVD rate is independent of the thickness of the base. In a solar cell in which a is quite large due to band-gap shrinkage in the emitter, the effect of p-n coupling will be significant. It will make the voltage decay faster. We see from Fig. 28 that for small values of S_b*, τ_1 decreases rapidly as J_e0 (and hence a) increases. In a typical n+-p-p+ solar cell where J_e0 = 2 × 10^-12 A/cm² (including the effect of band-gap shrinkage), d_b = 300 μm, and S_b* = 0, we find that τ_1 = 138 μsec (Rose, 1981; Weaver, 1981). For this solar cell τ_b = 105 μsec. The value of τ_eff comes out to be 63 μsec. Thus we see that in such cases the slope of the linear region in the PVD curve will provide a poor estimate of τ_b. It is interesting to see that the p-n coupling makes the PVD rate quite sensitive to the thickness of the base. This effect is even stronger for larger values of ξ (= τ_b/τ_e). Jain and Ray (1983b) find that in certain cases p-n coupling can reduce the effective lifetime by as much as 70%. In a conventional solar cell (S_b* = ∞), Jain and Ray find that for small d_b, τ_eff is not very sensitive to J_e0. This and the above result can be physically understood as follows. During decay, the ECC near the junction in the base decays by the following three processes: (1) recombination, (2) reverse flow into the emitter due to p-n coupling, and (3) diffusion away from the junction toward the back contact. In a conventional solar cell (S_b* = ∞), the ECC at the back is always zero. If the thickness of the base is small, the ECP becomes very steep. In this case the diffusive loss of excess carriers becomes very strong and process (3) dominates. The effect of p-n coupling is relatively insignificant, i.e., τ_eff is not sensitive to J_e0. In an ideal BSF cell, S_b* = 0. The proximity of the back contact to the junction (i.e., small d_b) makes the ECP quite flat, which reduces the contribution of process (3). In such cases process (2) dominates, i.e., the effect of p-n coupling becomes very significant.
VI. HIGH INJECTION EFFECTS ON FCVD
In this section we shall discuss the effect of high injection on FCVD. Although we shall mostly refer to FCVD, the basic ideas and the physical processes which we shall discuss will also be relevant to PVD. We define a semiconductor to be in a high injection state when the concentration of injected carriers, i.e., the ECC is of the order of, or exceeds,
the concentration of majority carriers at thermal equilibrium in the semiconductor. In a p-n diode or a solar cell the emitter is much more heavily doped than the base. The base therefore reaches the high injection state at a much lower level of injection than the emitter. In a diode or a solar cell when the emitter also reaches the high injection state, then the junction potential becomes comparable to the diffusion potential of the device. The maximum value of the junction potential in a device is of course equal to its diffusion potential as can be seen from Fletcher voltage conditions given by Eqs. (22) and (23). The proximity of the junction potential to the diffusion potential of the device is, therefore, a measure of the level of injection in the device. In the past few years a considerable amount of experimental work has been done on high injection effects on OCVD (see, for example, Bassett et al., 1973; Derdouri et al., 1980; Cooper, 1983; also the review by Green, 1984, which also gives other references). Some of these results are very fascinating. A suitable theoretical explanation of these effects is not yet available in the literature. In this section we shall attempt to identify the physical processes which might be responsible for the observed high injection effects on OCVD. Based upon these processes, we shall present a theoretical analysis of the experimental results on high injection FCVD. A theory of FCVD at high injection would be mathematically rather complicated because of the nonlinearity of high injection effects. In this section therefore, we shall use reasonable approximations including the QSE approximation to analyze the high injection effects on FCVD. A . Nature of High Injection Effects on OCVD
First we summarize the nature of the experimentally observed high injection effects on PVD and FCVD curves.

1. In some devices the slope of the FCVD as well as the PVD curves is small in the beginning and increases after some time (Bassett et al., 1973; Green, 1984). An example of this behavior is given in Fig. 29. This figure shows some PVD curves for a 10 Ω-cm base-resistivity n+-p solar cell which have been observed at the Solid State Physics Laboratory (SSPL), Delhi, by S. C. Jain, U. C. Ray, and their colleagues (private communication).

2. In some cases of FCVD it is found that the voltage drops rapidly in the beginning, then remains constant for a while, and then finally decays to zero. The region of the FCVD curve in which the voltage remains constant (almost zero slope) will be referred to as the "plateau" in the FCVD curve. In some cases, after the initial drop, the voltage rises for a while and then decays to 0.
FIG. 29. PVD curves for the 10 Ω-cm base-resistivity n+-p solar cell described in the text (time axis in μsec).
The rise and the fall in the voltage decay curve is referred to as the "hump" in the FCVD curve, in contrast to the plateau mentioned earlier. The hump/plateau can be clearly seen in the experimental FCVD curves published by Derdouri et al. (1980) and Cooper (1983). The existence of the hump and the plateau in FCVD curves has been further confirmed by the detailed experimental investigation of FCVD carried out at SSPL, Delhi by S. C. Jain, U. C. Ray, and their colleagues (private communication). Some of their results are shown in Fig. 30. This figure shows FCVD curves in a p-n diode with a 1000 Ω-cm resistivity base at different levels of injection. The fast drop in the beginning of the FCVD curves followed by a hump or a plateau can be clearly seen in Fig. 30. If the injection level is not high enough, the plateau in the FCVD curve is replaced by a region of small slope. As the injection level rises this region becomes a plateau and then a hump. It may be remarked that a plateau in a FCVD curve can be taken to be a special case of a hump, i.e., a region in which the voltage could not rise enough to become a hump. It appears from the very detailed and extensive work at SSPL, Delhi that the hump/plateau in FCVD curves is observed only in p-i-n diodes or n+-p-p+ (or p+-n-n+) devices, i.e., those containing a BSF. It seems therefore that the existence of two junctions, as in p-i-n diodes or devices with BSF, is essential for the occurrence of the hump/plateau in the FCVD curves. Moreover, this behavior has been observed only in high-resistivity devices.
FIG. 30. High injection FCVD in a 1000 Ω-cm resistivity base p-i-n diode; curves (1), (2), and (3) correspond to V(t=0) = 800, 660, and 630 mV and I(t=0) = 93, 3.40, and 0.54 mA, respectively. These experimental results were obtained by S. C. Jain, R. Muralidharan, and U. C. Ray at SSPL, Delhi (private communication). Note the formation of the plateau and the hump as the injection level increases.
This is presumably because the high injection effects would set in at a relatively low level of injection in a high-resistivity device as compared to a low-resistivity device, since the thermal equilibrium value of the carrier concentration is relatively low in a high-resistivity device. The observations (1) and (2) are inconsistent with each other. In (1) there is a slow decay region in the beginning followed by a faster decay of the voltage. In (2) the voltage decays rapidly in the beginning and then either shows a plateau or a hump. Moreover, the nature of the high injection effects seems to be quite sensitive to the method of fabricating the devices. In view of the various uncertainties and discrepancies in the experimental results, it would be interesting to carry out more detailed and careful measurements on high injection FCVD and PVD. In FCVD, it would also be necessary to separate out the ohmic drop from any coupling drop by using, for example, the method suggested in Section IV.C. Theoretically the following three factors would affect the high injection OCVD.

1. Change in Ambipolar Coefficients
As described in Section II.B, the coefficients of the ambipolar diffusion equation, i.e., Eq. (1), are functions of the ECC and therefore of the level of injection. The dependence of the various ambipolar coefficients on the injection
level has been studied by several authors (see, for example, Agarwala and Tewary, 1980). As shown in Section II.B, all the ambipolar coefficients become constants in the high injection limit. The high injection limit of the ambipolar diffusion coefficient is equal to the harmonic mean of the diffusion coefficients for the two types of carriers [see Eq. (8)]. In general the variation of the ambipolar diffusion coefficient between its low and high injection limits is not large unless the diffusion coefficients for the two types of carriers are drastically different. The ambipolar mobility decreases as the level of injection increases. It becomes zero in the limit of extreme high injection [see Eq. (9)]. The most important change with the level of injection occurs in the ambipolar lifetime. As the injection level increases, it increases in general, but the actual dependence on the level of injection is quite complicated, as mentioned in Section II.B. According to the calculations of Dhariwal et al. (1981) the lifetime increases quite substantially as the injection level increases due to saturation of traps. In extreme cases the lifetime may even change by a factor of 5. However, at extreme high injections, the Auger recombination process can decrease the carrier lifetime very substantially. The Auger process makes a significant contribution when the carrier concentration is of the order of 10^18 cm^-3 or more. At such high concentrations the diffusion coefficient will also decrease due to carrier-carrier scattering. These processes have obviously not been accounted for in the ambipolar diffusion equation. It is tempting to explain the initial region of low slope, i.e., observation (1) in the FCVD curve, in terms of an increase in the ambipolar lifetime of the excess carriers. However, the aforementioned slope is so small at very high injections that it would require an extremely large increase in the lifetime, viz., by a factor of about 50. The ECC required for such a large increase is not attained in normal experiments. Moreover, at such high concentrations the Auger process becomes significant, which would reduce the excess carrier lifetime.
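The chapter's Eq. (8) is not reproduced here, but the behavior described in this paragraph follows from the standard textbook expression for the ambipolar diffusion coefficient, D_a = (n + p) D_n D_p / (n D_n + p D_p). The sketch below evaluates that expression across injection levels for assumed silicon-like parameters, simply to show the two limits.

```python
def ambipolar_D(n, p, Dn, Dp):
    """Standard ambipolar diffusion coefficient (textbook form):
       D_a = (n + p) * Dn * Dp / (n * Dn + p * Dp)."""
    return (n + p) * Dn * Dp / (n * Dn + p * Dp)

Dn, Dp = 35.0, 12.0          # cm^2/sec, assumed silicon-like values
p0 = 1e16                    # assumed p-type base doping, cm^-3
for dn in (1e12, 1e15, 1e17, 1e19):      # excess carrier concentration, cm^-3
    n, p = dn, p0 + dn                   # quasi-neutral excess carriers
    print(dn, ambipolar_D(n, p, Dn, Dp))
# low injection  -> Dn (the minority carrier value)
# high injection -> 2*Dn*Dp/(Dn + Dp), the harmonic mean quoted above
```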
2. Change in the Strength of p-n Coupling

At high injections the strength of the p-n coupling increases. This can be easily seen by estimating the value of the coupling parameter a. For this purpose, we use the Fletcher voltage conditions as given by Eqs. (22) and (23). It may be remarked that in this context we should use the Fletcher conditions, which give the junction voltage, and not the Misawa conditions, which give the terminal voltage. Using Eq. (141) for a and the Fletcher conditions, we formally obtain the following expression for a_eff:
(182)
where
As mentioned earlier, the proximity of the junction potential V_j to the diffusion potential V_D is indicative of the state of high injection. The parameter ζ in Eq. (182) can thus be treated as a measure of the dependence of the coupling parameter on the level of injection. For low injections V_j << V_D, so that a_eff = a, as defined by Eq. (141). We assume for the time being that (D_e τ_b / D_b τ_e) is independent of the level of injection. We first consider the case when there is no band-gap shrinkage in the emitter and q_e0 << q_b0. Then, from Eqs. (182) and (183),

a_{\rm eff} = a (q_{b0}/q_{e0}) \exp[q(V_j - V_D)/kT]    (184)
We see that in the high injection limit, when V_j ≈ V_D, the coupling parameter increases by a factor q_b0/q_e0, which is very large. If we include the injection level dependence of D_e, D_b, τ_b, and τ_e there would be no qualitative change in this conclusion. This is because (D_e τ_b / D_b τ_e) will have a very weak dependence on the level of injection; it will in fact increase with injection level until the Auger process drastically reduces τ_b without affecting τ_e. Even with a substantial band-gap shrinkage, in many cases of practical interest, q_e0 is less than q_b0 and the above conclusion is still valid. If the gap shrinkage is so large that q_e0 ≈ q_b0, then, as indicated by Eq. (183), ζ and therefore a_eff become independent of the level of injection. The extreme case of q_e0 >> q_b0 is not of practical interest. We conclude therefore that, in general, the strength of the p-n coupling increases at high injections. The effect of p-n coupling, therefore, has to be included in OCVD calculations at high injections, even in base-dominated devices in which p-n coupling is negligible at low injections.
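The injection-level dependence implied by Eq. (184) is easy to tabulate. The sketch below evaluates the enhancement factor a_eff/a = (q_b0/q_e0) exp[q(V_j - V_D)/kT] for an assumed ratio q_b0/q_e0 and an assumed diffusion potential; as noted above, this is the limiting form for no band-gap shrinkage and is most meaningful as V_j approaches V_D, where the factor tends to q_b0/q_e0.

```python
import numpy as np

kT_over_q = 0.02585                        # thermal voltage at 300 K (V)

def coupling_enhancement(qb0_over_qe0, Vj, VD):
    """a_eff / a from Eq. (184), no band-gap shrinkage and qe0 << qb0 assumed."""
    return qb0_over_qe0 * np.exp((Vj - VD) / kT_over_q)

VD = 0.9                                   # assumed diffusion potential (V)
for Vj in (0.60, 0.70, 0.80, 0.88):
    print(Vj, coupling_enhancement(1.0e4, Vj, VD))
# as Vj -> VD the factor tends to qb0/qe0 (here the assumed value 1e4)
```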
3. Voltage Boundary Condition at the Junction
At high injections the Shockley boundary conditions given by Eqs. (28) and (29) are not valid. We have to use the Fletcher or Misawa boundary conditions as given by Eqs. (22) and (23) or (25) and (26), respectively. The Shockley boundary conditions are linear in the sense that q_ej/q_bj is independent of V_j or time. The Fletcher and Misawa conditions are nonlinear. It is normally not possible to obtain a solution of the diffusion equation in a closed analytical form using the nonlinear boundary conditions. In this section we shall discuss only qualitatively the effect of the Fletcher and Misawa boundary conditions on FCVD.
First we consider the Fletcher conditions as given by Eqs. (22) and (23). We see from these equations that q_ej = q_bj = ∞ in the limit V_j = V_D, which is theoretically the maximum value of V_j. In the extreme high injection case defined by
We obtain the following expression for vj from Eqs. (22) and (23)
Equation (186) shows that for large q_ej, V_j is not sensitive to q_ej and remains close to V_D. Thus we infer that as q_ej decays from an almost infinite value to lower but still large values, V_j will not change significantly and will remain close to V_D. This would seem to explain observation 1 given earlier in this section, i.e., the existence of a low slope region in the initial stages of the FCVD curve. More detailed calculations on FCVD using the Fletcher conditions have been reported by Tewary and Jain (1982). Their theory is valid when V_j is so close to V_D that the inequality (185) is satisfied. As is apparent from Eq. (186), the theory of Tewary and Jain (1982) does explain observation 1. However, in reality, observation 1, i.e., the low slope region in the FCVD curve, is observed at a much lower level of injection than required by the inequality (185). As pointed out by Green (1984), this inequality is not satisfied in an ordinary diode even at an enormous current level of 10^4 A/cm². On the other hand, if the Misawa voltage condition is used at the junction, it would predict that the voltage decay is faster in the beginning. This can be easily seen from Eq. (27), which is the limiting form of the Misawa condition at extreme high injections. It has been a usual practice in the literature to use Eq. (27) as the voltage condition at high injections (see, for example, Green, 1984). However, it may be remarked that Eq. (27) is also valid only in the limit of extreme high injections, which would not normally be attained in practice. In fact, if the Misawa and Fletcher conditions are to represent the same physical state, the limiting form of the Misawa conditions as given by Eq. (27) corresponds to the limiting form of the Fletcher conditions for which the inequality (185) is satisfied. Actually, there is an inconsistency here if we try to relate the Misawa and Fletcher conditions in terms of the terminal and junction voltage, respectively. In the open-circuit case the terminal and the junction potential should only differ by the Dember potential. The contribution of the Dember potential is relatively small even at high injections, which can be easily verified (see, for example, Dhariwal et al., 1976).
On the other hand, as we see from the inequality (185) and Eq. (27), the two voltages, i.e., V_j and V_a, have entirely different behaviors. The discrepancy perhaps lies in the fact that the quasi-neutrality approximation, upon which the whole formalism is based, may not be valid at high injections. A relation between the Misawa terminal potential and the Fletcher junction potential has been given by Hauser (1971), but in that paper the quasi-neutrality approximation has also been used. To summarize the above discussion: (1) the Fletcher conditions predict a region of low slope in the FCVD curve, but only at extremely high injections, i.e., when V_j ≈ V_D; (2) at similar high injections the Misawa conditions would give a region of large slope, viz., double that at low injections.

B. QSE Approximation for p-n Coupling at High Injections
We have seen in the previous section that p-n coupling becomes very important at high injections. In this section we shall derive the boundary condition at the junction for the diffusion equation in the base of a device that accounts for the p-n coupling in the QSE approximation. We shall assume a moderate level of high injection such that only the base is in high injection but not the emitter. As mentioned at the beginning of this section, the base reaches the high injection state at a relatively lower level of injection. The emitter is very heavily doped and would reach the state of high injection only at very high levels of injection. By using the mass action law (see, for example, Sze, 1981) at the junction in the emitter and the base we obtain

(q_{e0} + q_{ej})(q_{em} + q_{ej}) = (q_{bm} + q_{bj})(q_{b0} + q_{bj})    (187)
We assume that the emitter is in a state of low injection so that

q_{ej} \ll q_{em}    (188)

Also, at ordinary temperatures

q_{b0} \ll q_{bm}    (189)

Using the inequalities (188) and (189), the following relation can be easily derived:

where

and
Now we invoke the QSE approximation, that the slope of the ECP in the emitter does not change during decay. First, we see by putting X_e = 0 in Eq. (153) that

\bar{q}_{ej}(z) = q_{ej}(z)/q_{ej}(z = 0)    (193)
Hence Eq. (153) can be written in the following form:
Using Eq. (194) for q_e(X_e, z) in the zero-current condition given by Eq. (154) and then Eq. (190) for q_ej(z), we obtain the following relation:
where
and f_e(X_e) is the normalized steady-state ECP in the emitter, that is,

f_e(X_e) = q_e(X_e, 0)/q_{ej}(z = 0)    (197)
Equation (195) gives the desired boundary condition at the junction that accounts for p-n coupling in the QSE approximation. The parameter a_2 and the quadratic term on the right-hand side of Eq. (195) arise from the high injection effects. For q_bm = ∞, the base cannot be in high injection by definition, so that a_2 = 0. In this case, if we take the steady-state ECP to be exponential as given by Eq. (36) for a thick diode, Eq. (195) reduces to the low injection boundary condition as given by Eq. (156).

C. FCVD at High Injections
In this section we shall discuss the theory of FCVD at moderately high injections using the QSE approximation. These calculations have been carried out by Jain et al. (1986). We assume, as in Section VI.B, that the emitter is in a state of low injection. First we consider a thick diode, i.e., infinite w_b and w_e. The diffusion equation for the base is as given by Eq. (125). The boundary condition at the junction is taken to be that given by Eq. (195) in the QSE approximation. For a thick emitter the steady-state ECP is given by Eq. (36) so
that, as mentioned at the end of the preceding section,

a_1 = a    (198)
The other boundary and initial conditions are the same as taken for the base in Section IV.B, viz., Eq. (127) for q_b(X_b, z) and Eq. (131) as well as Eq. (37). The solution of the diffusion equation can now be obtained by the Laplace transform method. Following the steps leading to Eq. (157), the following nonlinear Volterra integral equation for FCVD can be obtained after some algebraic manipulation:
where
and

K(z) = \frac{e^{-z}}{\sqrt{\pi z}} - a e^{-z} S(a^2 z)
The ECC at the junction qbj(Z) can be related to the junction potential by using the Fletcher voltage condition given by Eq. (23). We restrict ourselves to the case when the level of injection is not extremely high so that
|V_j - V_D| > kT/q    (203)

and

\exp[2q(V_j - V_D)/kT] \ll 1    (204)
This is a reasonable assumption since, as mentioned in Section VI.A, in most cases of practical interest V_j does not reach close enough to V_D to violate the inequality (203). Further, even with a substantial band narrowing, q_b0 > q_e0.
Then, Eq. (23) reduces to the usual Shockley relation given by Eq. (29). Thus we find that, although the base is assumed to be in high injection, q_bj can be assumed to obey the Shockley voltage condition. On the other hand, even if the emitter is in low injection, the Shockley condition would be a poor approximation for q_ej unless V_j is so small that the product q_b0 exp[q(V_j - V_D)/kT] is negligible, which is in fact the low injection limit.
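For later use it is convenient to have a small helper for moving between the junction ECC and the junction voltage. Equation (29) is not reproduced here, so the standard Shockley form q_bj = q_b0[exp(qV_j/kT) - 1] is assumed, and the numerical values are placeholders.

```python
import numpy as np

kT_over_q = 0.02585          # thermal voltage at 300 K (V)

def qbj_from_Vj(qb0, Vj):
    """Shockley junction condition (standard form, assumed for Eq. (29))."""
    return qb0 * (np.exp(Vj / kT_over_q) - 1.0)

def Vj_from_qbj(qb0, qbj):
    """Inverse relation, used when converting a computed qbj(z) into FCVD."""
    return kT_over_q * np.log(1.0 + qbj / qb0)

qb0 = 2.0e4                  # assumed equilibrium minority concentration (cm^-3)
print(Vj_from_qbj(qb0, qbj_from_Vj(qb0, 0.60)))   # round trip returns 0.60
```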
Jain et al. (1986) have solved Eq. (199) numerically. Their calculated FCVD curves for an n+-p diode are shown in Fig. 31 for different values of the steady-state junction voltage V_j(0), which is a measure of the level of injection. For this diode q_bm = 10^13 cm^-3, q_em = 10^20 cm^-3, J_e0 = 10^-11 A/cm², τ_b = 20 μsec, and a = 0.04. The value of a was deliberately chosen to be low, which neglects the gap shrinkage in the emitter, in order to underline the importance of p-n coupling at high injections. For such a low value of a, as discussed in Sections IV and V, the p-n coupling will be negligible at low injections. In Fig. 31, the curve for V_j(0) = 0.5 is well within the low injection limit. It has the expected behavior of a low injection FCVD in a base-dominated diode. As V_j(0), i.e., the injection level, increases, the voltage decay in the initial stages becomes faster. This is obviously due to the increased p-n coupling. The FCVD curves in Fig. 31 are shown only for z ≥ 0.1 because the QSE approximation itself is not valid for smaller values of z. For z >> 1, the nonlinear term on the right-hand side of Eq. (199) becomes negligible and FCVD is determined by the low injection term W_a(z). As we have already seen in Section IV.B, it would give a linear region in the FCVD curve with unit slope. This slope, however, will only give τ_b for low injections and not the excess carrier lifetime at high injection. Jain et al. (1986) have also calculated FCVD in an ideal BSF diode (SRV = 0) with finite w_b. The boundary condition at X_b = w_b is given by Eq. (13) and the initial steady-state ECP is given by Eq. (46) for S_b* = 0. The
FIG. 31. V_j(z) - V_j(0) is plotted as a function of z for various values of the junction voltage in a thick base diode. Curves 1 to 4 are for V_j(0) = 0.50, 0.62, 0.64, and 0.66 V, respectively. The value of a for all the plots is taken as 0.04.
boundary condition at the junction is the same as that given by Eq. (195). The diffusion equation is solved by using the Laplace transform method. This leads to the following nonlinear Volterra integral equation for q_bj(z):

\Phi_{bj}(z) = K_0(z) - a \int_0^z \Phi_{bj}(y) K_0(z - y)\, dy    (205)
where
a_1 has been defined in Eq. (202) and ψ_1 in Eq. (99). Equation (207) is slightly more general than that derived by Jain et al. (1986), who have approximated the steady-state ECP in the base to be constant. This is of course a reasonable approximation for small w_b with S_b* = 0. The calculated FCVD curves, obtained from a numerical solution of Eq. (205), for different values of w_b are shown in Fig. 32.
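The numerical procedure used by Jain et al. (1986) is not spelled out here, but a convolution Volterra equation of the form of Eq. (205) can be marched forward in z by simple quadrature. The sketch below is such a time-stepping scheme under two stated assumptions: the equation is taken in the linear form reconstructed above (an additional nonlinear term would enter the same loop), and a smooth placeholder kernel is used, because the actual K_0 is not reproduced here and a kernel with the 1/sqrt(pi z) singularity of K(z) would call for a product-integration rule rather than the trapezoidal rule.

```python
import numpy as np

def solve_volterra(K0, a, z_max=1.0, n=400):
    """Time-march phi(z) = K0(z) - a * int_0^z phi(y) K0(z - y) dy
    with the trapezoidal rule, keeping the newest value phi[i] implicit."""
    h = z_max / n
    z = np.linspace(0.0, z_max, n + 1)
    phi = np.empty(n + 1)
    phi[0] = K0(0.0)                        # the memory integral vanishes at z = 0
    for i in range(1, n + 1):
        kern = K0(z[i] - z[:i + 1])         # kern[j] = K0(z_i - z_j); kern[i] = K0(0)
        known = h * (0.5 * phi[0] * kern[0] + np.dot(phi[1:i], kern[1:i]))
        phi[i] = (K0(z[i]) - a * known) / (1.0 + 0.5 * a * h * kern[i])
    return z, phi

# purely illustrative smooth kernel, not the paper's K0
z, phi = solve_volterra(lambda s: np.exp(-s), a=0.05)
print(phi[::100])
```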
FIG. 32. V_j(z) - V_j(0) is plotted as a function of z for various values of the base width in a BSF solar cell. Curves 1 to 3 are for w_b = 0.5, 0.75, and 1.0, respectively. The value of a is 0.01 and the initial voltage is 0.66 V for all the curves.
The curves are almost linear, as expected for small w_b. The slope of these curves gives τ_eff, an effective value of the excess carrier lifetime as defined in Section III.C. It may be remarked that there is a slight change in the slope of the FCVD curves with z, but it is too small to be apparent in the figure. We may note that the theory as given here is valid only at a moderate level of injection. It cannot explain the two characteristic high injection effects, i.e., observations 1 and 2 described at the beginning of Section VI.A. Jain et al. (1986) have also carried out some experimental measurements on FCVD in a BSF diode in an attempt at a qualitative verification of their theory. Instead of directly comparing the theoretical and experimental FCVD curves, Jain et al. (1986) have compared the theoretical and experimental curves of τ_b/τ_eff as a function of the initial voltage V_j(0). These curves are shown in Fig. 33 for different values of the coupling parameter a. The experimental results are shown for two different devices with different values of a which were known from independent measurements. We see from Fig. 33 that the agreement between the calculated and the experimental curves of τ_b/τ_eff is quite reasonable.
D. FCVD in p-i-n Diodes

In this section we shall briefly consider FCVD in p-i-n diodes. The base of a p-i-n diode, being intrinsic by definition, will be in high injection at any level of injection. In practice, the base of a p-i-n diode is not exactly intrinsic.
FIG. 33. The ratio τ_b/τ_eff is plotted as a function of the initial voltage V(0) (in volts). Curves 1 to 3 are the theoretical plots for a = 0.01, 0.02, and 0.03, respectively. The points marked by circles and triangles are the experimental results for cells A and B, respectively.
However, the doping concentration in the base is quite small and therefore it reaches the high injection state at fairly low levels of injection. It may be remarked that a BSF diode like n+-p-p+ would also behave like a p-i-n (or n-i-p) diode at a very high level of injection. This is because, as explained in Section II.B.2, in the limit of high injections, when the ECC is much larger than the doping concentration, a doped semiconductor behaves like an intrinsic one. The characteristic feature of a p-i-n diode is that it has two junctions, one at each of the two ends of the base, i.e., p-i and i-n. The p and n regions are referred to as emitters and the i region is referred to as the base. The coupling between the emitter and the base at the two junctions will still be referred to as p-n coupling for the purpose of uniformity of nomenclature. We assume that the injection is not too high, so that both the emitters can be assumed to be in a low injection state. The boundary conditions at the two junctions, which include the effect of p-n coupling, can be obtained by using the QSE approximation as in deriving Eq. (195). These are given below.
and
where

\bar{q}_b(X_b, z) = q_b(X_b, z)/q_{j1}(z = 0)    (210)
a and a_1 are coupling parameters at the junction at X_b = 0, which are the same as defined in Section VI.C. The corresponding coupling parameters at the junction X_b = w_b are given by b and b_1, respectively. The junctions at X_b = 0 and X_b = w_b have been denoted by the subscripts j1 and j2, respectively. The total junction voltage in a p-i-n diode is given by the sum of the two junction voltages, viz.
V_j = V_{j1} + V_{j2}

where V_{j1} and V_{j2} are obtained from q_{j1} and q_{j2}, respectively, by using the Shockley voltage condition given by Eq. (29). As mentioned in Section VI.C, even when the base is in high injection, the Shockley voltage condition is a reasonable approximation unless V_{j1} and/or V_{j2} are too close to the corresponding diffusion potential. The diffusion equation for the base is the same as that used in the earlier sections, i.e., Eq. (125).
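Combining the two junction ECCs into the total voltage V_j = V_{j1} + V_{j2} is then a short exercise; the standard Shockley form is again assumed for Eq. (29), and all concentrations below are placeholders.

```python
import numpy as np

kT_over_q = 0.02585                        # thermal voltage at 300 K (V)

def total_pin_voltage(qj1, qj2, q10, q20):
    """V_j = V_j1 + V_j2, each junction voltage taken from the
    (assumed standard) Shockley relation V = (kT/q) ln(1 + q_j / q_0)."""
    Vj1 = kT_over_q * np.log(1.0 + qj1 / q10)
    Vj2 = kT_over_q * np.log(1.0 + qj2 / q20)
    return Vj1 + Vj2

# placeholder carrier concentrations (cm^-3)
print(total_pin_voltage(qj1=5e15, qj2=3e15, q10=1e4, q20=1e4))
```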
Its solution, subject to the boundary conditions and the appropriate initial condition taking the steady-state ECP in a p-i-n diode, can be formally obtained either by the Laplace transform method or by expansion in terms of eigenfunctions. However, because of the nonlinear boundary conditions, one has to resort to numerical methods to obtain the final solution. Schlangenotto and Gerlach (1972) have discussed the theory of FCVD in p-i-n diodes. In their calculations, they have taken only the quadratic terms on the right-hand sides of Eqs. (208) and (209), taking a = b = 0, but a_1 and b_1 are taken to be nonzero. For a symmetric p-i-n diode in which q_{j1}(z) = q_{j2}(z) and a_1 = b_1, Schlangenotto and Gerlach (1972) have derived the following nonlinear equation in terms of the variable t (not z):
where a_SG, the coupling parameter, is related to a_1 = b_1 as follows:

a_{SG} = a_1 L_b/\tau_b    (213)
\hat{q}_{j1}(t) is the value of q_{j1}(t) in the absence of p-n coupling, that is, when a_SG = 0,
and θ_3 is the Jacobian theta function, defined by

for small z and by the following for large z:

\theta_3(0, iz) = 1 + 2 \sum_{l=1}^{\infty} \exp(-l^2 \pi z)    (216)
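For large z a handful of terms of the series (216) already suffice. The truncation below assumes the series in the reconstructed form above and is only meant to show how θ_3(0, iz) is evaluated in practice.

```python
import numpy as np

def theta3_large_z(z, n_terms=20):
    """Jacobi theta function theta_3(0, iz) from the large-z series of Eq. (216):
       theta_3(0, iz) = 1 + 2 * sum_{l>=1} exp(-l**2 * pi * z)."""
    l = np.arange(1, n_terms + 1)
    return 1.0 + 2.0 * np.sum(np.exp(-l**2 * np.pi * z))

for z in (0.1, 0.5, 1.0, 2.0):
    print(z, theta3_large_z(z))   # convergence is fastest for the larger z values
```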
Equation (212) has to be solved numerically. However, Schlangenotto and Gerlach (1972) have given the following expression which can serve as an approximate representation of the solution of Eq. (212):
where \bar{q}_j(t') is normalized according to Eq. (210), and
The numerical solution of Eq. (212) as obtained by Schlangenotto and Gerlach (1972) agrees quite well with the approximate result obtained from Eq. (217). The main feature of the result is the rapid initial decay, which is attributed to a strong p-n coupling. For example, taking q_{j1}(t = 0) = 5 × 10^17 cm^-3, a_SG = 2 × 10^-13 cm^4/sec, and D_b (= L_b²/τ_b) = 6.5 cm²/sec, we obtain τ_SG = 0.2 μsec. For this value of τ_SG, q_{j1} falls to half its value in only 0.2 μsec. Equation (217) gives the decay of the ECC at the junction. The result for FCVD can be simply obtained by using the Shockley voltage condition given by Eq. (29). Qualitatively, the nature of the FCVD curve as obtained from these results is similar to that obtained by Jain et al. (1986) for a BSF diode.

E. Possible Explanation for the Occurrence of the Hump in the FCVD Curve
We see that neither the theory of Jain et al. (1986) nor that of Schlangenotto and Gerlach (1972) explains the existence of the hump or plateau in the FCVD curve (observation 2 in Section VI.A). Both these theories give a rapid decay in the initial stages, but the rise or saturation in the voltage curve is not predicted by these theories. Of course, as mentioned in Section VI.A, the experimental situation is also not quite clear. What seems to be reasonably certain experimentally is that a hump/plateau in the FCVD curve occurs only in a two-junction device, i.e., a p-i-n or a BSF diode. The physical processes which might be responsible for causing the hump/plateau are not yet understood. We can only offer the following suggestions:

1. Dember effect. The Dember voltage depends upon the ratio of the ECC at the two junctions (see, for example, Gandhi, 1977), viz.
The voltage can be assumed to be added to the junction voltage. It is tempting to try to explain the hump in the FCVD curves in terms of a change of sign of V_Dem. As is apparent from Eq. (220), V_Dem can change sign if q_{j1} and q_{j2} decay at different rates, which is possible in nonsymmetric p-i-n diodes. However, a detailed examination shows that the Dember term is not large enough to compensate for the decay of the junction voltage and show a rise in the FCVD curve. Of course, Eq. (220) is based upon the quasi-neutrality approximation, which itself may not be valid at high injections.
2. Change in the strength of the p-n coupling. This would seem to be a more promising line of thought. At high injections, as we have shown in Section VI.A, the p-n coupling is quite strong. It results in a rapid decay of the ECC at the junction, as shown in Sections VI.C and VI.D and, of course, as physically expected. In a p-i-n or an ideal BSF diode, if the thickness of the base is much smaller than the diffusion length in the base, the ECP will remain almost flat in the middle region of the device. Since the ECC at the junction has substantially decayed due to p-n coupling but not in the middle region of the base, the gradient of the ECP near the junction will become quite large. The junction will gain carriers by diffusion from the middle region of the device. Moreover, since the ECC at the junction has decreased, the p-n coupling will become weak and therefore further loss of carriers to the emitter will become negligible. For a suitable choice of parameters it is possible, in principle, that the diffusive gain of carriers at the junction may temporarily exceed the total loss of carriers from the junction due to recombination and the p-n coupling. It is thus possible that the ECC at the junction may build up after the initial decrease. After some time the ECP will become flat near the junction. The diffusion will then become negligible and the carriers will decay again (mainly by recombination). This process does explain, in principle, the occurrence of the hump in the FCVD curve. The plateau is a special case of the hump when the gain of carriers at the junction just equals the losses. It would be worthwhile to mathematically model this process in a theory of high injection FCVD and to attempt to explain the rather fascinating occurrence of the hump/plateau in the FCVD curve.
APPENDIX: LIST OF SYMBOLS AND NOTATIONS

A list of symbols/notations is given below in alphabetical order. In addition to these, some other symbols/notations have been defined locally in the text. In general the subscripts e and b refer, respectively, to the emitter and the base regions of a device (diode or solar cell).

α            Absorption coefficient of light in a solar cell
a            Coupling parameter (approximately equal to 1/j)
C_{e,b}      αL_{e,b}
D_{e,b}      Diffusion coefficient for minority carriers
D_{em,bm}    Diffusion coefficient for majority carriers
d_{e,b}      Thickness of emitter (e)/base (b)
E_{e,b}      Drift field
E_{ej,bj}    Drift field at the junction
ECC          Excess carrier concentration
ECP          Excess carrier profile
η            Surface generation coefficient
F_{e,b}      Drift field in dimensionless units (= qE_{e,b}L_{e,b}/2kT)
g_{e,b}      Generation rate of excess carriers
J_{ej,bj}    Current at the junction
J_j          Total junction current (J_{bj} + J_{ej})
J_{e0,b0}    Thermal equilibrium saturation current
J_0          Total thermal equilibrium saturation current (J_{e0} + J_{b0})
j            Ratio of saturation currents (= J_{b0}/J_{e0})
kT/q         Boltzmann voltage factor
L_{e,b}      Diffusion length for minority carriers
μ_{e,b}      Mobility of minority carriers
N_0          Number of photons incident per second per unit area at the front surface in a solar cell
N_{0j}       Number of photons incident per second per unit area at the junction in a solar cell
q_{e,b}      ECC
q_{ej,bj}    ECC at the junction
q_{e0,b0}    Minority carrier concentration at thermal equilibrium
q_{es}       ECC in the steady state
q_{em,bm}    Majority carrier concentration at thermal equilibrium (doping concentration)
q_i          Intrinsic carrier concentration
R_s          Series resistance of diode/solar cell
SRV          Surface recombination velocity
S_{e,b}      SRV
S*_{e,b}     Dimensionless SRV (= S_{e,b}τ_{e,b}/L_{e,b})
ξ            Ratio of lifetimes (= τ_b/τ_e)
t            Time variable
τ*_{e,b}     Excess carrier lifetime (ambipolar)
τ_{e,b}      Minority carrier lifetime
τ_{e0,b0}    Minority carrier lifetime at thermal equilibrium
τ_{em,bm}    Majority carrier lifetime
τ_{em0,bm0}  Majority carrier lifetime at thermal equilibrium
V_a          Terminal potential across diode/solar cell
V_D          Diffusion (built-in) potential
V_j          Junction potential
v_j          Dimensionless junction potential (= qV_j/kT)
x            Distance variable
X_{e,b}      Dimensionless distance variable (= x/L_{e,b})
z            Dimensionless time variable (t/τ_b)
REFERENCES

Abramowitz, M., and Stegun, I. A., eds. (1965). "Handbook of Mathematical Functions." Dover, New York.
Agarwala, A., and Tewary, V. K. (1980). J. Phys. D 13, 1885.
Agarwala, A., Tewary, V. K., Agarwal, S. K., and Jain, S. C. (1980). Solid-State Electron. 23, 1021.
Backus, C. E., ed. (1976). "Solar Cells." Inst. Electr. Electron. Eng., New York.
Baliga, B. (1981). In "Silicon Integrated Circuits" (D. Kahng, ed.), Appl. Solid State Sci. Suppl. 2, Part B, p. 109. Academic Press, New York.
Bassett, R. J., Fulop, W., and Hogarth, C. H. (1973). Int. J. Electron. 35, 177.
Carslaw, H. S., and Jaeger, J. C. (1959). "Conduction of Heat in Solids," 2nd ed. Oxford Univ. Press, London and New York.
Cooper, R. W. (1983). Solid-State Electron. 26, 217.
Derdouri, M., Leturcq, P., and Munoz-Yague, A. (1980). IEEE Trans. Electron Devices ED-27, 2097.
Dhariwal, S. R., and Vasu, N. K. (1981). IEEE Electron Device Lett. EDL-2, 53.
Dhariwal, S. R., Kothari, L. S., and Jain, S. C. (1976). IEEE Trans. Electron Devices ED-23, 504.
Dhariwal, S. R., Kothari, L. S., and Jain, S. C. (1981). Solid-State Electron. 24, 749.
Fahrenbuch, A. L., and Bube, R. H. (1983). "Fundamentals of Solar Cells." Academic Press, New York.
Fletcher, N. H. (1957). Int. J. Electron. 2, 609.
Fossum, J. G., Lindholm, F. A., and Shibib, M. A. (1979). IEEE Trans. Electron Devices ED-26, 1294.
Gandhi, S. K. (1977). "Semiconductor Power Devices." Wiley, New York.
Gossick, B. R. (1955). J. Appl. Phys. 26, 1356.
Green, M. A. (1984). Sol. Cells 11, 147.
Guckel, H., Thomas, B. C., Iyengar, S. V., and Demirkol, A. (1977). Solid-State Electron. 20, 647.
Hauser, J. R. (1971). Solid-State Electron. 14, 133.
Heasell, E. L. (1979). Solid-State Electron. 22, 853.
Hovel, H. J. (1975). In "Semiconductors and Semimetals" (R. K. Willardson and A. C. Beer, eds.), Vol. 11. Academic Press, New York.
Jain, S. C. (1981). Solid-State Electron. 24, 179.
Jain, S. C., and Muralidharan, R. (1981). Solid-State Electron. 24, 1147.
Jain, S. C., and Ray, U. C. (1983a). Solid-State Electron. 26, 515.
Jain, S. C., and Ray, U. C. (1983b). J. Appl. Phys. 54, 2079.
Jain, S. C., Ray, U. C., Muralidharan, R., and Tewary, V. K. (1986). Solid-State Electron. 29, 561.
Joshi, S. C., and Singhal, C. M. (1982). Int. J. Electron. 52, 381.
Kennedy, D. P. (1962). IRE Trans. Electron Devices ED-9, 174.
Kingston, R. H. (1954). Proc. IRE 42, 829.
Lederhandler, S. R., and Giacoletto, L. J. (1955). Proc. IRE 43, 477.
Lindholm, F. A., and Sah, C. T. (1976). J. Appl. Phys. 47, 4208.
McKelvey, J. P. (1966). "Solid State and Semiconductor Physics." Harper & Row, New York.
Madan, M. K., and Tewary, V. K. (1983). Sol. Cells 9, 289.
Madan, M. K., and Tewary, V. K. (1985). Ind. J. Pure Appl. Phys. (in press).
Mahan, J. E., Ekstedt, T. W., Frank, R. I., and Kaplow, R. (1979). IEEE Trans. Electron Devices ED-26, 733.
Mallenson, J. R., and Landsberg, P. T. (1977). Proc. R. Soc. London, Ser. A 355, 115.
Misawa, T. (1956). J. Phys. Soc. Jpn. 11, 728.
Moll, J. L. (1964). "Physics of Semiconductors." McGraw-Hill, New York.
Moore, A. R. (1980). RCA Rev. 40, 549.
Muralidharan, R., Jain, S. C., and Jain, V. (1982). Sol. Cells 6, 157.
Neville, R. C. (1980). "Solar Energy Conversion: The Solar Cells." Am. Elsevier, New York.
Nosov, Y. R. (1969). "Switching in Semiconductor Diodes." Plenum, New York.
Nussbaum, A. (1969). Solid-State Electron. 12, 177.
Nussbaum, A. (1975). Solid-State Electron. 18, 107.
Nussbaum, A. (1979). Solid-State Electron. 21, 1178.
Ray, U. C., Agarwal, S. K., and Jain, S. C. (1982). J. Appl. Phys. 53, 9122.
Rose, B. H. (1981). IEEE Int. Electron Devices Meet. 1981.
Sah, C. T., Noyce, R. N., and Shockley, W. (1957). Proc. IRE 45, 1228.
Scharfetter, D. L., Lade, R. W., and Jordon, A. G. (1963). IEEE Trans. Electron Devices ED-10, 35.
Schlangenotto, H., and Gerlach, W. (1972). Solid-State Electron. 15, 393.
Sharma, S. K., and Tewary, V. K. (1982). J. Phys. D 15, 1077.
Sharma, S. K., and Tewary, V. K. (1983). J. Phys. D 16, 1741.
Sharma, S. K., Agarwala, A., and Tewary, V. K. (1981). J. Phys. D 14, 1115.
Shockley, W. (1950). "Electrons and Holes in Semiconductors." Van Nostrand-Reinhold, Princeton, New Jersey.
Sneddon, I. N. (1972). "The Use of Integral Transforms." McGraw-Hill, New York.
Sze, S. M. (1981). "Physics of Semiconductor Devices," 2nd ed. Wiley, New York.
Tewary, V. K., and Jain, S. C. (1980). J. Phys. D 13, 835.
Tewary, V. K., and Jain, S. C. (1982). Solid-State Electron. 25, 903.
Tewary, V. K., and Jain, S. C. (1983). Ind. J. Pure Appl. Phys. 21, 36.
Van der Ziel, A. (1966). Solid-State Electron. 16, 1509.
Van Vliet, K. M. (1966). Solid-State Electron. 9, 185.
Weaver, H. T. (1981). Int. Workshop Phys. Solid State Devices, 1981.
Wolf, M. (1963). Proc. IEEE 51, 674.
Index A
Aberration coefficients, 66 Accumulation and isolated pixels, 157- 159 Albaut model, 204-205, 206 trajectories in, 204 Aliasing artifacts, 1 1 Ambipolar diffusion equation, 334-336, 37 I boundary conditions, 336-340 current condition, 338 Fletcher conditions, 339 low injection limit, 339-340 Misawa conditions, 339 voltage condition, 338-339 Ambipolar diffusion equation, 334-336 high injection limit, 335-336 low injection limit, 335 Ambipolar life time, 399 Ambipolar mobility, 399 Apparent resistance, 376 Arithmetic operators, 144- 150 addition operator, 144- 146 division operator, 148- 149 multiplication operator, 147- 148 subtraction operator, 146- 147 Astigmatism, in deflection, 212-213, 235238, 302 ”axisymmetrical astigmatism,” 237 deflected astigmatic beam, 234-235 “harmonic astigmatism,” 237 Auger recombination, 332, 374, 399 Autocorrelation, 10, 13-14 and Patterson, 13-14 Autocorrelation box, 9, 41 Autocorrelation theorem, 9 alternative statement of. 14
B Baker-Campbell-Hausdorff formula, 85, 106 Baker-Campbell-Hausdorff series, 85 Band-gap shrinkage, 370, 379, 386 Binary images, 124-127 standard set theoretic operations on, 125I27 Bipotentiai lens, 192. 208, 320
Block diagrams, I80 Boundary finding operator, 153- 155 Boundary peeling operator, see Interior finding operator Bounded image, 151 “Boxed image,” 24, 25, 27 definition of, 27 Brightness contrast C , 288-292 definition of, 288 BSF diode, ideal, 342, 346, 354, 356
C CAD, see Computer-aided design Canterbury algorithm. 40-60 preprocessing, 4 1-44 Carrier set, 180 Catalase, 25 Cathode drive, 202 Cathode loading, cutoff voltage, and beam modulation, 199-203 current density, formulae for, 200 maximum current density at center of focused spot, 201 modulation characteristics, 202 space-charge considerations, 199 spot cutoff voltage, 201-202 definition of, 201 temperature-limited current density, 200 Cathode ray tubes, 183-328, see also CRTs classification, 185 present technology, 299-301 table of characteristics, 305 Cathode structures, 193-199 comparison of impregnated and oxide-coated cathodes, 198- 199 current density distribution, 195 impregnated-tungsten-matrix, 198 B, S, and M types, 198 oxide-coated, 196- I98 factors shortening cathode life, 197 triode design Pierce type, 194 Wehnelt type, 193 CFF, see Critical flicker frequency 415
416
INDEX
CFF, see Critical fusion frequency Characteristic function, 67 Charged-particle optics, 65- 120 system diagram, 66 Chromaticity diagram, 286 uniform chromaticity diagram, 287 Chromatic terms, 90 Chrominance difference, 289 Chrominance discrimination index, 290 Chrominance threshold, 289 CIF, see Contrast improvement factor Color shadow mask deflection coils, 240-24 1 Coma, 238 coma aberration, 238 Composite algorithm, 40, see ulso Canterbury algorithm Computer-aided design, 308 in graphic console CRT, 308 Computer calculations to optimize CRT design, 203-209 limitations of, 209 program for Pierce structures, 204-205 program for triode structure, the WeberHasker model, 205-206 Connection operator, 152- 153 “Continuous images,” 151, 176-177 Contrast improvement factor, 294 Contrast ratio CR. 288-292 definition of, 288 Convolution theorem equation for, 8 Cosine overflow, 46, 50 Coulomb law, 271 Coulomb rating for phosphors, 273 Coupling drop, 375-377 CPE, see Crude phase estimate Crewe-Kopf system, 96-97 diagram of, 97 numerical example of, 108-1 10 Critical flicker frequency, 267 Critical fusion frequency, 264 CRT, see also Cathode ray tubes CRT contrast enhancement techniques, 288299 brightness contrast and contrast ratio, 288292 contrast enhancement filters, 297-299 detection and discrimination index, table, 290 improving viewability of display, 292-297 reflectance at phosphor layer, 296
reflection at air-glass interface, 292-293 reflection at phosphor-faceplate interface, 294 transmission of the faceplate, 294 CRT measurement techniques, 276-288 electrical parameters, 276-277 optical parameters, 277-288 color measurements, 285-288 linewidth at 50% of peak brightness, 277-279 linewidth at 50% of total energy, 280, 282 luminance, 283-284 micrometric loupe linewidth, 277 shrinking raster linewidth, 280-281 spatial frequency, 280, 281 CRTs, state of the art, 301-322 color tubes, 314-322 beam-index tube, 322 current-sensitive multicolor tube, 320 high-resolution color shadow-mask tube, 320-322 penetration tube, 314-320 high-brightness andlor high-contrast displays, 310-313 HDD, 31 1-312 HUD, 310-311 projection tubes, 3 12-3 13 oscilloscope tubes for display of fast transients, 304-307 tubes for high-resolution recording, 301-304 fiber-optic tubes, 303-304 magnetic shielding, 302 tubes using a scanning raster, 307-308 tubes using stroke writing, 308-310 graphic console tubes, 308-309 radar tubes, 309-310 Crude phase estimate, 3, 44-47, 55 improving of, 47-50 Crystallographic phase problem, 2, 16 recovery methods, reviews on, 14
D “Dead voltage,” 268 Deflection amplification and electrostatic deflection, 209-222 astigmatism, 212-213 basic electostatic deflection system, 2 10214
417
INDEX
deflection plates, sensitivity and contour of, 210-211 high-frequency deflection, 2 13-2 I 4 double helix deflection line, 214 high-frequency deflection, 2 13-2 14 segmented deflection plates, 21 3 mesh-type postdeflection amplification, 216-2 I9 diagram of, 216 with nonsymmetrical field lenses, 219-222 with quadrupolar lens, 220 post-deflection-acceleration, 2 14-2 16 Deflection amplification factor, 2 16-2 17 equation for, 2 I7 Deflection coil, 227 Deflection coils, application of, 252-254 Deflection demagnification, 215 Deflection, electromagnetic, see Electromagnetic deflection Deflection yoke, 222, 248-253 table of, 253 Defogging and refogging. 50-57, 59 foggy starfield, 52-54 DeLange curves, 264, 307 6 function pulse, 352 Demagnification factor, 215 equation for, 215 Dember effect, 338 Dember vol tage, 340, 40 I , 4 I 0 De Morgan’s theorems, 127, 133 Design momentum, 78 “Design orbit.” 70 “Design trajectory,” 78 Digital image creator, 178 Dilation, 131-133 properties of, 133- 136 Diode observed voltage decay curve, 391 Diode factor, 364-365 Diode, steady-state diffusion equation in, 340343 with constant drift field, 343 thick diodes, 341 thin diodes, 341-342 Directional filtering, 294-295 Direct pseudo-random initial phase estimate, 24, 26 Dirichlet type function, 336 Disconnected image, 152 DM, see Demagnification factor Doping, 332, 336, 378 doping levels, 378
Drift field, 343, 359, 386-389 acceleration drift field, 387 retarding drift field, 388 Dushman-Richardson law. 200
E ECC, see Excess carrier concentration ECP, see Excess carrier profile Electroluminescent devices, 183 Electromagnetic deflection, 222-254 aberrations affecting resolution uniformity, 234-238 astigmatism, 235-238 coma, 238 image-field curvature, 234-235 basic equations of, 222-226 deflection yoke under transient operating conditions, 248-252 diagram of, 225 electron trajectories and deflection angle, 225-226 geometric distortion, 229-234 correction of deflection nonlinearity, 233 correction using auxiliary magnetic fields, 233-234 nonuniform field for pincushion correction, 232-233 uniform field-electronic correction of pincushion distortion, 229-232 magnetic energy for given deflection angle, 226 magnetic-field distribution in the deflection coil, 227-229 the modem deflection coil, 238-248 flared deflection coil for sensitivity improvement, 239-240 3-dimensional model of magnetic field in, 240-242 Electron guns, 186-209 bipotential lens focus structure, 192 data displays using stroke writing, 191-192 data displays using a TV raster, 190-191 diagram of, 187 fixture for aligning parts, 301 high-resolution recording, 187- 189 Electronic Industries Association, 255 Electron microscopes, 65, 66, 96, 115 ultra-high resolution, 107 Electrostatic deflection and deflection amplification, 209-222 Entropy, 3
418
INDEX
Environmental Research Institute of Michigan, 1
Erosion, 131-133 properties of, 133- I36 Error, 31-35 magnitude error, 34 Excess carrier concentration, 329, 330, 365, 370 Excess carrier lifetime, 330, 335. 365, 366, 399 definition of, 335 Excess carrier profile, 329, 330, 336, 349, 371, 373, 379, 384, 392 steady-state, 365 Excess carriers, 33 I , 334 Expansion operator, 157 Extended image, 8 Extent-constraint theorem for positive images, 10, 20
F Factorization theorem, 69-70 FCVD, see Forward-current-induced voltage decay FFT, see Fourier transform Fiber-optic CRTs, 303-304 Fienup algorithms, 27-3 I , 35 Fienup cycling, 50, 51, 53, 55, 58 Fienup error-reduction iterations, 30 Fienup hybrid input-output iterations, 32, 33 Fienup positivity inside error, 35-36 definition of, 35 Fifth-degree aberration polynomials, 104- 105 Figure of merit, 273 of phosphors, 273 Fletcher conditions, 339 Fletcher junction potential, 402 Fletcher voltage conditions, 396, 399-401 Flicker, 265, 267 Fluorescence definition of, 254 FM, see Figure of merit Focused spot, 201 “Foggy” images, 4, 54-57 “partial fog” problem, 54, Forward-current-induced voltage decay, 33041 1, passim in diodes, 246-249 experimental results showing effect of g-n coupling on, 376-378 high injection effects, 377
explanation for hump in FCVD curve, 41041 1 high injection effects on, 395-41 I humpiplateau in curves, 397 at high injections, 403-407 in p-i-n diodes, 407-410 in thick diodes with p-n coupling, 371-376 in thin diodes, 354-357 Fourier phase problem, 2, 3, 16, 17 Fourier phase retrieval, 1-64 Fourier space, 2 Fourier transform algorithm, I I “Fringe magnification,” 59 Fuzzy closing operation. 172- 175 properties of, 175 Fuzzy Complementation operator, 171 Fuzzy intersection operator, 169- 170 Fuzzy opening operation, 171-172 properties of, 175 Fuzzy set representation, 124, 128 Fuzzy set theory, 121 Fuzzy subimage. 123 fuzzy subimage operator, I24 Fuzzy union operator, 168-169
G Geometric terms, 90 Gerchberg’s algorithm, 59 Gerchberg-Saxton algorithm, 5, 25-27, 34, 59 Gerchberg-Saxton iterations, 28 Gerchberg-Saxton inside error, 34-36 definition of, 35 Germanium diodes, 370 Getters, 300 Glaser profiles, 108 Grassmann’s law, 287 Gray-valued mathematical morphology. 167I75 fuzzy closing operation, 172-175 fuzzy complementation operator, 171 fuzzy opening operation, 171-172 maximum or fuzzy union operator, 168- I69 minimum or fuzzy intersection operator, 169-170 restricting fuzzy operators to binary images, 171 translation operation, 167- 168 Grid drive, 202
419
INDEX
H HDD, see Head-down display tube Head-down display tube, 31 1-312 Head-up display tube, 310-31 I High injection effects on FCVD, 395-41 1 QSE approximation for p-n coupling in, 402-403 High-writing-speed oscilloscope, 194, 306 HUD, see Head-up display tube I Image box, 6, 41, 55, 58 Image enhancement, 2 Image-forms, 14- 15 Image-processing, 1 Image space, 5 geometric aspects of, 6 Imaging algebra, 121-182 Indirect pseudo-random initial phase estimate, 25 Initial phase estimates, 24-25 In-line in-between structure factors, 47 Input-output algorithm, 3 I Interaction picture Hamiltonian, 88-89 Interior finding operator, 155-157 Interior pixel and maximal interior neighborhood, 159-161 International Electrochemical Commission, 277 Invar mask, 322 Iterative Fienup algorithms, 3 Iterative phase refinement, 2 1-40 Iterative refinement algorithms, 2 I-V relation. 342
J Jain's theory, 366, 367, 380
L Langmuir approximations, 203 Langmuir-Child law, 199-200, 202, 204, 206 Lie algebraic theory of optics, 65-120 application to magnetic solenoid, 79-81 deviation variables, 77-78 Hamiltonians, 76 Lagrangians, 75-76 relativistic Lagrangian, 76 transformed Hamiltonian, 78-79 Lie algebraic tools, 65, 68-70 Lie calculus, 81-86
calculation of transfer map, 82 combining transfer maps, 83 evaluation of commutators, 83 properties of Lie transformations, 86 Lie operators, 68, 82-83 definition of, 68 Lie transformations, 68-75, 86 axial rotation, 72 definition of, 69 departures from paraxial optics, 73 general paraxial optics, 73 simple lens, 7 1-72 third-order aberrations, 73-75 Likelihood principles, 3 Long-pulse excitation, 260-261 LPE, see Long-pulse excitation Luminance, 259, 266, 270, 283-284, 290, 32 1 luminance discrimination index, 290 Luminous difference, 289
M MACSYMA, 107 Magnetic solenoid, 79-8 I Magnetic shielding, 302 Magnitude error, 34 definition of, 34 Many-sorted algebra, 178 MARYLIE, 106-108, I13 Mathematical morphology, 127- 144 applied to binary images, 127 erosion and dilation, 131-133 Minkowski subtraction and addition, 130 180"-rotated image, 128- 130 opening and closing operations, 143- 144 translate of a binary image, 128 Mathematical morphology, gray-valued, see Gray-valued mathematical morphology Maximal interior neighborhood operator, 159 definition, 159 Maximum current density at cathode center, 200 Maximum operator, 168- 169 Measurement noise, consequences of, 12- I3 data from visibility space, 12 MI, see Modulation index Microchannel plates, 306-307 MICROLIE, 65, 68, 107 Microscopal phase problem, 16, 34 Microwave tubes, 193
Minimum extent of autocorrelation theorem, 10, 19
Minimum operator, 169-170
Minkowski subtraction and addition, 130-133
   definition of, 130
Minkowski subtraction operator, 131
Misawa conditions, 339, 399-401
Misawa terminal potential, 402
Modulation index, 265-266
   measuring apparatus for, 265
Monoenergetic aberrations, 75
Morphological-type imaging algebra, 121

N
NA, see Numerical aperture
"Neck shadow," 239
Neighborhood concept, 150-151
Neumann type function, 336
Noisy data, consequences of, 36-40
Nonchromatic terms, see Geometric terms
Noninteger oversampling, consequences of, 55-58
North's expression, 373
Numerical aperture, 303
Numerical convergence, 36

O
OCVD, see Open-circuit voltage decay
Ohmic drop, 375-376
Open-circuit voltage decay, in solar cells, 329-414
   in base-dominated devices, 346-369
   effect of emitter on, 370-383
   high injection effects on, 396-402
      ambipolar coefficients and, 398-399
      and strength of p-n coupling, 399-400
      voltage boundary condition at the junction, 400-402
   limitations for lifetime determination, 362-366
      diode factor, 363-364
      high injection effects, 364
      low injection effects, 364-366
      small-signal OCVD method in solar cells, 364
Opening and closing operations, 136-146
   effect of p-n coupling on, in thick solar cells, 378-383

P
Paraxial map for solenoid lens, 86-88
   computation of optical factor, 87-88
   computation of time of flight and rotational factors, 87
   decomposition into factors, 87
Parseval's theorem, 35
   equation for, 35
Patterson, 13-14
   compared with autocorrelation, 13-14
   definition of, 13
PDA, see Post-deflection-acceleration
Penetration tube screens, see Phosphor screens, penetration
Perception threshold, 290
Periodic image, 7
Phase dominance, 16-17
Phase problem, 3, 14-21
   definition of, 3
   table of types of, 3
Phasor diagram, 45
Phosphorescence
   definition of, 254
Phosphors and screens, 254-276
   aging, 271-273
   antiflicker phosphors, 265
   best linearity, 269-270
      europium-activated barium strontium thiogallate, 269
      terbium-activated yttrium aluminum garnet, 269
   broad-band-emitting, 254
   color characteristics, 259-260
   energy conversion efficiency, 255
   luminous efficiency, 259
      effect of current density on, 269
      effect of screen voltage on, 267-269
   luminous equivalent, 258
   narrow-band-emitting, 254-255
   new technology, 273-276
      fast-decay phosphors, 276
      phosphors for color tubes, 276
      recent phosphors, 273-276
   rare earth, table of, 256-257
   response to different excitation modes, rise and decay times, and flicker, 260-267
Phosphor screens, penetration, 314-320
   composite-crystal, 316-318
   multilayered, 315-316
Photocolorimetric space, 289, 291
Photovoltage decay, 330, 346
Picture elements, 11, see also Pixels
Pierce structures, 204-205, 207
Pincushion correction, 231
Pincushion distortion, 229, 302
p-i-n diodes, 407-410
   total junction voltage in, 408
Pixels, 11, 121-182, passim
   isolated pixel, definition of, 159
Plasma panels, 183
p-n coupling, 364, 370, 383
p-n junction
   boundary conditions at, 337
p-n-junction diodes, 329
   model of, 331, 333-334
Polyadic graphs, 176
   example of, 179
   for the imaging operators, 178
Positive images
   table of physical situations for, 20
Positivity, significance of, 20-21
Postacceleration, 215-216
Post-deflection-acceleration, 214-216
Preprocessing, 41-44
Pseudo-random number routine, 24
PTE, see Pulse-train excitation
Pulse-train excitation, 261-262
Purple membrane, 25
PVD, see Photovoltage decay
   interpretation of experimental results on, 366-369
      Jain's theory, 367
      thick-base and zero-drift-field approximations, 368
   in thick solar cells, 349-354
   in thin solar cells, 357-362
      effect of drift field, 359-362

Q
QSE, see Quasi-static emitter
Quantization, 11
   amplitude, 11
   spatial, 11
Quasi-localized image, 8, see also Extended image
Quasi-static emitter approximation
   and diffusion equations for OCVD, 383-395
   for p-n coupling at high injections, 402-403
   effect of drift field on FCVD in thick diode, 386-389
   FCVD in thin base diodes with constant drift field, 389-392
   PVD in solar cells, 392-395

R
Radar CRT, 309-310
Rare earth compounds, as phosphors, 256-257
Reciprocal space, 2
Rectangular box, 41
Recurrent-pulse excitation, 261, 263
RPE, see Recurrent-pulse excitation

S
Saddle yoke, 252
Sampled image, 176-178
Sample points, 8
   diagonal in-between sample points, 8
   in-line in-between sample points, 8
   ordinary sample points, 8
   table of, 9
Sampling theorem, 10
Scherzer's Theorem, 96
Schmidt aspherical correction lens, 313
Schmidt optics, 312
Settling time, 251
Sextupole-corrected spot-forming system, 108
Shockley boundary condition, 339, 354, 400
Shockley-Reed-Hall two-level recombination mechanism, 369
Shockley relation, 330, 404
   for a diode, 341
Shockley voltage condition, 350, 357, 404, 408, 410
Short-pulse excitation, 261-262
   response of fast phosphors to, 263
Solar cell
   steady-state diffusion equation for, 343-346
      thick solar cell, 344-345
      thin solar cell, 345
   structure of, 331-332
Solenoid lens, 86-88
   second- and third-order effects for, 88-92
Solid State Physics Laboratory, Delhi, 396, 397
SPE, see Short-pulse excitation
Spherical aberration, 74, 92-105
   cancellation of, 101-102
   correction of, 96-105
   for a simple imaging system, 92-96
"Spherical-type" aberrations, 104
Spot degradation, 218
Spot halo, 218
Spot magnification factor G, 217
   equation for, 217
SRV, see Surface recombination velocity
SSPL, see Solid State Physics Laboratory, Delhi
Stopping criteria, 31-35
Stroke writing, 191-192, 263, 308-310
Subimage of an image, 123
"Supercorrected" solenoid, 65
Supercorrection, 104-105
   numerical example of, 110-115
Surface recombination velocity, 345, 357, 382

T
Topological operations, 150-167
   accumulation and isolated pixels, 157-159
   boundary finding operator, 153-155
   bounded image, 151
   connection operator, 152-153
   expansion operator, 157
   interior finding (or boundary peeling) operator, 155-157
   interior pixel and the maximal interior neighborhood, 159-161
   neighborhood concept, 150-151
   properties of, 161-167
Toroidal yoke, 252
Transfer function, 252
Transfer map, 69-70, 82, 93-99
   for drifts, 93
   for sextupole, 97
   for solenoid, 92-93
   total, 93, 98-99
Translation operation, 167-168
Tricomi method, 379
Trolands, 264
   definition of, 264
Trouble map
   analysis of, 99-101
   definition of, 99
True image box, 13
Two-dimensional phase problems, 1-64

U
UCS, see Uniform chromaticity scale
Umbra, 121
Unary operator, 155
Uniform chromaticity scale, 259
   UCS 1960 diagram, 289
Uniqueness, 17-19
Unitary chrominance difference, 289
Unitary luminance difference, 289
Universal imaging algebra, 121-182

V
Vacuum techniques, 299
VGD, 202
Visibility, 3, 5, 7, 12
   observable, equation for, 12
Visibility space, 5, 12
   geometric aspects of, 7
Volterra integral, 406

W
Weber-Hasker model, 205-206
   beamlets of, 205
Wehnelt electrode, 187

Z
Zero flipping, 17
Zero packing, 12, 46
Zero-phase visibilities, images formed from, 22-23