David Hestenes
Space-Time Algebra Second Edition
David Hestenes
Space-Time Algebra Second Edition Foreword by Anthony Lasenby
David Hestenes Department of Physics Arizona State University Tempe, AZ USA
Originally published by Gordon and Breach Science Publishers, New York, 1966

ISBN 978-3-319-18412-8
ISBN 978-3-319-18413-5 (eBook)
DOI 10.1007/978-3-319-18413-5

Library of Congress Control Number: 2015937947
Mathematics Subject Classification (2010): 53-01, 83-01, 53C27, 81R25, 53B30, 83C60

Springer Cham Heidelberg New York Dordrecht London

© Springer International Publishing Switzerland 2015

This work is subject to copyright. All rights are reserved by the Publisher, whether the whole or part of the material is concerned, specifically the rights of translation, reprinting, reuse of illustrations, recitation, broadcasting, reproduction on microfilms or in any other physical way, and transmission or information storage and retrieval, electronic adaptation, computer software, or by similar or dissimilar methodology now known or hereafter developed.

The use of general descriptive names, registered names, trademarks, service marks, etc. in this publication does not imply, even in the absence of a specific statement, that such names are exempt from the relevant protective laws and regulations and therefore free for general use.

The publisher, the authors and the editors are safe to assume that the advice and information in this book are believed to be true and accurate at the date of publication. Neither the publisher nor the authors or the editors give a warranty, express or implied, with respect to the material contained herein or for any errors or omissions that may have been made.

Cover design: deblik, Berlin
Printed on acid-free paper

Springer International Publishing AG Switzerland is part of Springer Science+Business Media (www.birkhauser-science.com)
Foreword

It is a pleasure and honour to write a Foreword for this new edition of David Hestenes' Space-Time Algebra. This small book started a profound revolution in the development of mathematical physics, one which has reached many working physicists already, and which stands poised to bring about far-reaching change in the future.

At its heart is the use of Clifford algebra to unify otherwise disparate mathematical languages, particularly those of spinors, quaternions, tensors and differential forms. It provides a unified approach covering all these areas and thus leads to a very efficient 'toolkit' for use in physical problems including quantum mechanics, classical mechanics, electromagnetism and relativity (both special and general) – only one mathematical system needs to be learned and understood, and one can use it at levels which extend right through to current research topics in each of these areas. Moreover, these same techniques, in the form of the 'Geometric Algebra', can be applied in many areas of engineering, robotics and computer science, with no changes necessary – it is the same underlying mathematics, and enables physicists to understand topics in engineering, and engineers to understand topics in physics (including aspects in frontier areas), in a way which no other single mathematical system could hope to make possible.

As well as this, however, there is another aspect to Geometric Algebra which is less tangible, and goes beyond questions of mathematical power and range. This is the remarkable insight it gives to physical problems, and the way it constantly suggests new features of the physics itself, not just the mathematics. Examples of this are peppered throughout Space-Time Algebra, despite its short length, and some of them are effectively still research topics for the future. As an example, what is the real role of the unit imaginary in quantum
mechanics? In Space-Time Algebra two possibilities were looked at, and in the intervening years David has settled on right multiplication by the bivector iσ3 = σ1σ2 as the correct answer between these two, thus encoding rotation in the 'internal' xy plane as key to understanding the role of complex numbers here. This has stimulated much of his subsequent work in quantum mechanics, since it bears directly on the zitterbewegung interpretation of electron physics [1]. Even if we do not follow all the way down this road, there are still profound questions to be understood about the role of the imaginary, such as the generalization to multiparticle systems, and how a single correlated 'i' is picked out when there is more than one particle [2].

As another related example, in Space-Time Algebra David had already picked out generalizations of these internal transformations, but still just using geometric entities in spacetime, as candidates for describing the then-known particle interactions. Over the years since, it has become clear that this does seem to work for electroweak theory, and provides a good representation for it [3, 4, 5]. However, it is not clear that we can yet claim a comparable spacetime 'underpinning' for QCD or the Standard Model itself, and it may well be that some new feature, not yet understood, but perhaps still living within the algebra of spacetime, needs to be brought in to accomplish this.

The ideas in Space-Time Algebra have not met universal acclaim. I well remember discussing with a particle physicist at Manchester University many years ago the idea that the Dirac matrices really represented vectors in 4d spacetime. He thought this was just 'mad', and was vehemently against any contemplation of such heresy.
For someone such as myself, however, coming from a background of cosmology and astrophysics, this realization, which I gathered from this short book and David’s subsequent papers, was a revelation, and showed me that one could cut through pages of very unintuitive spin calculations in Dirac theory, which only experts in particle theory would be comfortable with, and replace them with a few lines of intuitively appealing calculations with rotors, of a type that for example an engineer, also using Geometric Algebra, would be able to understand and relate to immediately in the context of rigid body rotations. A similar transformation and revelation also occurred for me with respect to gravitational theory. Stimulated by Space-Time Algebra and
particularly by further papers of David's such as [6] and [7], I realized, along with Chris Doran and Stephen Gull, that general relativity, another area traditionally thought very difficult mathematically, could be replaced by a coordinate-free gauge theory on flat spacetime, of a type similar to electroweak and other Yang-Mills theories [8]. The conceptual advantages of being able to deal with gauge fields in a flat space, and to be able to replace tensor calculus manipulations with the same unified mathematical language as works for electromagnetism and Dirac spinors, are great. This enabled us quickly to reach interesting research topics in gravitational theory, such as the Dirac equation around black holes, where we found solutions for electron energies in a spherically symmetric gravitational potential analogous to the Balmer series for electrons around nuclei, which had not been obtained before [9].

Of course there are very clever people in all fields, and in both relativistic quantum mechanics and gravity there have been those who have been able to forge an intuitive understanding of the physics despite the difficulties of e.g. spin sums and Dirac matrices on the one hand, and tensor calculus index manipulations on the other. However, the clarity and insight which Geometric Algebra brings to these areas and many others is such that, for the rest of us who find the alternative languages difficult, and for all those who are interested in spanning different fields using the same language, the ideas first put forward in Space-Time Algebra and the subsequent work by David have been pivotal, and we are all extremely grateful for his work and profound insight.

Anthony Lasenby
Astrophysics Group, Cavendish Laboratory
and Kavli Institute for Cosmology
Cambridge, UK
References

[1] D. Hestenes, Zitterbewegung in Quantum Mechanics, Foundations of Physics 40, 1 (2010).
[2] C. Doran, A. Lasenby, S. Gull, S. Somaroo and A. Challinor, Spacetime Algebra and Electron Physics. In: P.W. Hawkes (ed.), Advances in Imaging and Electron Physics, Vol. 95, p. 271 (Academic Press) (1996).
[3] D. Hestenes, Space-Time Structure of Weak and Electromagnetic Interactions, Found. Phys. 12, 153 (1982).
[4] D. Hestenes, Gauge Gravity and Electroweak Theory. In: H. Kleinert, R. T. Jantzen and R. Ruffini (eds.), Proceedings of the Eleventh Marcel Grossmann Meeting on General Relativity, p. 629 (World Scientific) (2008).
[5] C. Doran and A. Lasenby, Geometric Algebra for Physicists (Cambridge University Press) (2003).
[6] D. Hestenes, Curvature Calculations with Spacetime Algebra, International Journal of Theoretical Physics 25, 581 (1986).
[7] D. Hestenes, Spinor Approach to Gravitational Motion and Precession, International Journal of Theoretical Physics 25, 589 (1986).
[8] A. Lasenby, C. Doran and S. Gull, Gravity, gauge theories and geometric algebra, Phil. Trans. R. Soc. Lond. A 356, 487 (1998).
[9] A. Lasenby, C. Doran, J. Pritchard, A. Caceres and S. Dolan, Bound states and decay times of fermions in a Schwarzschild black hole background, Phys. Rev. D 72, 105014 (2005).
Preface after Fifty Years

This book launched my career as a theoretical physicist fifty years ago. I am most fortunate to have this opportunity for reflecting on its influence and status today.

Let me begin with the title Space-Time Algebra. John Wheeler's first comment on the manuscript about to go to press was "Why don't you call it Spacetime Algebra?" I have followed his advice, and Spacetime Algebra (STA) is now the standard term for the mathematical system that the book introduces.

I am pleased to report that STA is as relevant today as it was when first published. I regard nothing in the book as out of date or in need of revision. Indeed, it may still be the best quick introduction to the subject. It retains that first blush of compact explanation from someone who does not know too much. From many years of teaching I know it is accessible to graduate physics students, but, because it challenges standard mathematical formalisms, it can present difficulties even to experienced physicists.

One lesson I learned in my career is to be bold and explicit in making claims for innovations in science or mathematics. Otherwise, they will be too easily overlooked. Modestly presenting evidence and arguing a case is seldom sufficient. Accordingly, with confidence that comes from decades of hindsight, I make the following Claims for STA as formulated in this book:

(1) STA enables a unified, coordinate-free formulation for all of relativistic physics, including the Dirac equation, Maxwell's equation and General Relativity.

(2) Pauli and Dirac matrices are represented in STA as basis vectors in space and spacetime respectively, with no necessary connection to spin.
(3) STA reveals that the unit imaginary in quantum mechanics has its origin in spacetime geometry.

(4) STA reduces the mathematical divide between classical, quantum and relativistic physics, especially in the use of rotors for rotational dynamics and gauge transformations.

Comments on these claims and their implications necessarily refer to contents of the book and ensuing publications, so the reader may wish to reserve them for later reference when questions arise.

Claim (2) expresses the crucial secret to the power of STA. It implies that the physical significance of the Dirac and Pauli algebras stems entirely from the fact that they provide an algebraic representation of geometric structure. Their representations as matrices are irrelevant to physics—perhaps inimical, because they introduce spurious complex numbers without physical significance. The fact that they are spurious is established by claim (3). The crucial geometric relation between the Dirac and Pauli algebras is specified by equations (7.9) to (7.11) in the text. It is so important that I dubbed it the space-time split in later publications. Note that the symbol i is used to denote the pseudoscalar in (6.3) and (7.9). That symbol is appropriate because i² = −1, but it should not be confused with the imaginary unit in the matrix algebra. Readers familiar with Dirac matrices will note that if the gammas on the right side of (7.9) are interpreted as Dirac's γ-matrices, then the objects on the left side must be Dirac's α-matrices, which, as is seldom recognized, are 4 × 4 matrix representations of the 2 × 2 Pauli matrices. That is a distinction without physical or geometric significance, which only causes unnecessary complications and obscurity in quantum mechanics. STA eliminates it completely. It may be helpful to refer to STA as the "Real Dirac Algebra", because it is isomorphic to the algebra generated by the Dirac γ-matrices over the field of real numbers instead of the complex numbers of standard Dirac theory.
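The split can be sketched compactly; the following uses the notation of Hestenes' later papers rather than quoting equations (7.9)–(7.11) themselves:

```latex
% Relative vectors of the frame of gamma_0, and the unit pseudoscalar:
\sigma_k = \gamma_k\gamma_0 \quad (k = 1,2,3),
\qquad
i = \gamma_0\gamma_1\gamma_2\gamma_3 = \sigma_1\sigma_2\sigma_3,
\qquad
i^2 = -1 .
% In particular, the bivector singled out in the Foreword:
i\sigma_3 = \sigma_1\sigma_2 = \gamma_2\gamma_1,
\qquad
(i\sigma_3)^2 = -\sigma_1^2\sigma_2^2 = -1 .
```

The σk generate the even subalgebra of the STA, which is isomorphic to the real Pauli algebra; that is the sense in which a space-time split recovers the algebra of a chosen inertial frame without ever introducing a scalar imaginary.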
Claim (3) declares that only the real numbers are needed, so standard Dirac theory has a superfluous degree of freedom. That claim is backed up in Section 13 of the text where Dirac's equation is given two different but equivalent formulations within STA. In equation (13.1) the role of the unit imaginary is played by the pseudoscalar i, while in equation (13.13) it is played by a spacelike bivector.
Thus, the mere reformulation of the Dirac equation in terms of STA automatically assigns a geometric meaning to the unit imaginary in quantum mechanics! I was stunned by this revelation, and I set out immediately to ascertain what its physical implications might be. Before the ink was dry on the STA book, I had established in [1] that (13.13) was the most significant form for the Dirac equation, with the bivector unit imaginary related to spin in an intriguing way. This insight has been a guiding light for my research into geometric foundations for quantum mechanics ever since. The current state of this so-called "Real Dirac Theory" is reviewed in [2, 3].

Concerning Claim (4): Rotors are mathematically defined by (16.7) and (16.8), but I failed to mention there what has since become a standard name for this important concept. Rotors are used for an efficient coordinate-free treatment of Lorentz transformations in Sections 16, 17 and 18. That provides the foundation for the Principle of Local Relativity formulated in Section 23. It is an essential gauge principle for incorporating Dirac spinors into General Relativity. That fact is demonstrated in a more general treatment of gauge transformations in Section 24. The most general gauge invariant derivative for a spinor field in STA is given by equations (24.6) and (24.12). The "C connection" is the coupling to the gravitational field, while the "D connection" in (24.16) was tentatively identified with strong interactions. I was very suspicious of that tentative identification at the time. Later, when the electroweak gauge group became well established, I reinterpreted the D connection as electroweak [4]. That has the great virtue of grounding electroweak interactions in the spacetime geometry of STA. The issue is most thoroughly addressed in [5]. However, a definitive argument or experimental test linking electroweak interactions to geometry in this way remains to be found.
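For orientation, the bivector form that [1] singled out can be sketched as follows; this is the shape it takes in the later literature, with ħ = c = 1, and is a paraphrase rather than the book's equation (13.13) verbatim:

```latex
% psi is an even multivector field; gamma_2 gamma_1 = i sigma_3 is a
% spacelike bivector that takes over the role of the scalar imaginary.
\nabla\psi\,\gamma_2\gamma_1 = m\,\psi\,\gamma_0 ,
\qquad
\nabla = \gamma^\mu \partial_\mu .
```

Because the imaginary here acts by right multiplication, it commutes with every operation applied on the left, which is what lets a purely real equation reproduce the structure of complex Dirac theory.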
Finally, let me return to Claim (1) touting STA as a unified mathematical language for all spacetime physics. The whole book makes the case for that Claim. However, though STA provides coordinate-free formulations for the most fundamental equations of physics, solving those equations with standard coordinate-based methods required taking them apart and thereby losing the advantage of invariant formulation. To address that problem, soon after this book was published,
I set forth on the huge task of reformulating standard mathematical methods. That was a purely mathematical enterprise (but with one eye on physics). I generalized STA to create a coordinate-free Geometric Algebra (GA) and Geometric Calculus (GC) for all dimensions and signatures. There were already plenty of clues in STA on how to do it. In particular, Section 22 showed how to define vector derivatives and integrals that generalize the concept of a differential form. Another basic task was to reformulate linear algebra to enable coordinate-free calculations with GA. The outcome of this initiative, after two decades, is the book [6]. Also during this period I demonstrated the efficiency of GA in introductory physics and Classical Mechanics, as presented in my book [7].

By fulfilling the four Claims just discussed, STA initiated developments of GA into a comprehensive mathematical system with a vast range of applications that is still expanding today. These developments fall quite neatly into three Phases. Phase I covers the first two decades, when I worked alone with the assistance of my students, principally Garret Sobczyk, Richard Gurtler and Robert Hecht-Nielsen. This work attracted little notice in the literature, with the exception of mathematician Roger Boudet, who promoted it enthusiastically in France. Phase I was capped by my two talks [8, 9] at a NATO conference on Clifford Algebras organized by Roy Chisholm. That conference also initiated Phase II and a steady stream of similar conferences still flowing today. Let me take this opportunity to applaud the contribution of Chisholm and the other conference organizers who selflessly devote time and energy to promoting the flow of scientific ideas. This essential social service to science gets too little recognition. The high point of Phase II was the publication of Gauge Theory Gravity by Anthony Lasenby, Chris Doran and Steve Gull [10].
I see this as a fundamental advance in spacetime physics and a capstone of STA [11]. Phase II was capped with the comprehensive treatise [12] by Doran and Lasenby. Phase III was launched by my presentation of Conformal Geometric Algebra (CGA) in July 1999 at a conference in Ixtapa, Mexico [13]. After stimulating discussions with Hongbo Li, Alan Rockwood and Leo Dorst, diverse applications of GA published during Phase II
had congealed quite quickly in my mind into a sharp formulation of CGA. Response to my presentation was immediate and strong, initiating a steady stream of CGA applications to computer science [14] and engineering (especially robotics). In physics, CGA greatly simplifies the treatment of the crystallographic space groups [15], and applications are facilitated by the powerful Space Group Visualizer created by Eckhard Hitzer and Christian Perwass [16].

I count myself as much mathematician as physicist, and I see the development of GA from STA to GC and CGA as reinvigorating the mathematics of Hermann Grassmann and William Kingdon Clifford with an infusion of twentieth-century physics. The history of this development and the present status of GA have been reviewed in [17]. I am sorry to say that few mathematicians are prepared to recognize the central role of GA in their discipline, because it has become increasingly insular and divorced from physics during the last century.

Personally, I dedicate my work with GA to the memory of my mathematician father, Magnus Rudolph Hestenes, who was always generous with his love but careful with his praise [9].
References

[1] D. Hestenes, Real Spinor Fields, J. Math. Phys. 8, 798–808 (1967).
[2] D. Hestenes, Oersted Medal Lecture 2002: Reforming the Mathematical Language of Physics, Am. J. Phys. 71, 104–121 (2003).
[3] D. Hestenes, Spacetime Physics with Geometric Algebra, Am. J. Phys. 71, 691–714 (2003).
[4] D. Hestenes, Space-Time Structure of Weak and Electromagnetic Interactions, Found. Phys. 12, 153–168 (1982).
[5] D. Hestenes, Gauge Gravity and Electroweak Theory. In: H. Kleinert, R.T. Jantzen and R. Ruffini (Eds.), Proceedings of the Eleventh Marcel Grossmann Meeting on General Relativity (World Scientific: Singapore, 2008), pp. 629–647.
[6] D. Hestenes and G. Sobczyk, CLIFFORD ALGEBRA to GEOMETRIC CALCULUS, A Unified Language for Mathematics and Physics (Kluwer: Dordrecht/Boston, 1984).
[7] D. Hestenes, New Foundations for Classical Mechanics (Kluwer: Dordrecht/Boston, 1986).
[8] D. Hestenes, A Unified Language for Mathematics and Physics. In: J.S.R. Chisholm and A.K. Common (Eds.), Clifford Algebras and their Applications in Mathematical Physics (Reidel: Dordrecht/Boston, 1986), pp. 1–23.
[9] D. Hestenes, Clifford Algebra and the Interpretation of Quantum Mechanics. In: J.S.R. Chisholm and A.K. Common (Eds.), Clifford Algebras and their Applications in Mathematical Physics (Reidel: Dordrecht/Boston, 1986), pp. 321–346.
[10] A. Lasenby, C. Doran and S. Gull, Gravity, gauge theories and geometric algebra, Phil. Trans. R. Soc. Lond. A 356, 487–582 (1998).
[11] D. Hestenes, Gauge Theory Gravity with Geometric Calculus, Foundations of Physics 36, 903–970 (2005).
[12] C. Doran and A. Lasenby, Geometric Algebra for Physicists (Cambridge: Cambridge University Press, 2003).
[13] D. Hestenes, Old Wine in New Bottles: A new algebraic framework for computational geometry. In: E. Bayro-Corrochano and G. Sobczyk (Eds.), Advances in Geometric Algebra with Applications in Science and Engineering (Birkhäuser: Boston, 2001), pp. 1–14.
[14] L. Dorst, D. Fontijne and S. Mann, Geometric Algebra for Computer Science (Morgan Kaufmann: San Francisco, 2007).
[15] D. Hestenes and J. Holt, The Crystallographic Space Groups in Geometric Algebra, Journal of Mathematical Physics 48, 023514 (2007).
[16] E. Hitzer and C. Perwass: http://www.spacegroup.info
[17] D. Hestenes, Grassmann's Legacy. In: H.-J. Petsche, A. Lewis, J. Liesen and S. Russ (Eds.), From Past to Future: Grassmann's Work in Context (Birkhäuser: Berlin, 2011).
Preface

This book is an attempt to simplify and clarify the language we use to express ideas about space and time. It was motivated by the belief that this is an essential step in the endeavor to simplify and unify our theories of physical phenomena. The object has been to produce a "space-time calculus" which is ready for physicists to use. Particular attention has been given to the development of notation and theorems which enhance the geometric meaning and algebraic efficiency of the calculus; conventions have been chosen to differ as little as possible from those in formalisms with which physicists are already familiar.

Physical concepts and equations which have an intimate connection with our notions of space-time are formulated and discussed. The reader will find that the "space-time algebra" introduces novelty of expression and interpretation into every topic. This naturally suggests certain modifications of current physical theory; some are pointed out in the text, but they are not pursued. The principal objective here has been to formulate important physical ideas, not to modify or apply them.

The mathematics in this book is relatively simple. Anyone who knows what a vector space is should be able to understand the algebra and geometry presented in chapter I. On the other hand, appreciation of the physics involved in the remainder of the book will depend a great deal on the reader's prior acquaintance with the topics discussed.

This work was completed in 1964 while I was at the University of California at Los Angeles. It was supported by a grant from the National Science Foundation—for which I am grateful.

I wish to make special mention of my debt to Marcel Riesz, whose trenchant discussion of Clifford numbers and spinors [8, 9] provided an initial stimulus and some of the key ideas for this work. I would
also like to express thanks to Professors V. Bargmann, J. A. Wheeler and A. S. Wightman for suggesting some improvements in the final manuscript and to Richard Shore and James Sock for correcting some embarrassing errors.

David Hestenes
Palmer Laboratory
Princeton University
Contents

Introduction                                            xix

I   Geometric Algebra                                     1
    1  Interpretation of Clifford Algebra                 1
    2  Definition of Clifford Algebra                     4
    3  Inner and Outer Products                           5
    4  Structure of Clifford Algebra                     10
    5  Reversion, Scalar Product                         13
    6  The Algebra of Space                              16
    7  The Algebra of Space-Time                         20

II  Electrodynamics                                      25
    8  Maxwell's Equation                                25
    9  Stress-Energy Vectors                             27
    10 Invariants                                        29
    11 Free Fields                                       31

III Dirac Fields                                         35
    12 Spinors                                           35
    13 Dirac's Equation                                  39
    14 Conserved Currents                                42
    15 C, P, T                                           43

IV  Lorentz Transformations                              47
    16 Reflections and Rotations                         47
    17 Coordinate Transformations                        51
    18 Timelike Rotations                                55
    19 Scalar Product                                    59

V   Geometric Calculus                                   63
    20 Differentiation                                   63
    21 Riemannian Space-Time                             68
    22 Integration                                       72
    23 Global and Local Relativity                       76
    24 Gauge Transformation and Spinor Derivatives       82

Conclusion                                               87

Appendixes                                               89
    A  Bases and Pseudoscalars                           89
    B  Some Theorems                                     92
    C  Composition of Spatial Rotations                  97
    D  Matrix Representation of the Pauli Algebra        99

Bibliography                                            101
Introduction

A geometric algebra is an algebraic representation of geometric concepts. Of particular importance to physicists is an algebra which can efficiently describe the properties of objects in space-time and even the properties of space-time itself, for it is only in terms of some such algebra that a physical theory can be expressed and worked out.

Many different geometric algebras have been developed by mathematicians, and from time to time some of these systems have been adapted to meet the needs of physicists. Thus, during the latter part of the nineteenth century J. Willard Gibbs put together a vector algebra for three dimensions based on ideas gleaned from Grassmann's extensive algebra and Hamilton's quaternions [1, 2, 3]. By the time this system was in relatively wide use, a new geometric algebra was needed for the four-dimensional space-time continuum required by Einstein's theory of relativity. For the time being, this need was filled by tensor algebra. But it was not long before Pauli found it necessary to introduce a new algebra to describe the electron spin. Subsequently, Dirac was led to still another algebra which accommodates both spin and special relativity. Each of these systems, vector and tensor algebra and the algebras of Pauli and Dirac, is a geometric algebra with special advantages for the physicist. Because of this, each system is widely used today. The development of a single geometric algebra which combines advantages of all these systems is a principal object of this paper.

One of the oldest of the above-mentioned geometric algebras is the one invented by Grassmann (∼1840) [4]. His approach went something like this: He represented the geometric statement that two points determine a straight line by the product of two algebraic entities representing points to form a new algebraic entity representing a line. He represented a plane by the product of two lines determining the
plane. In a similar fashion he represented geometric objects of higher dimension. But more than that, he represented the symmetries and relative orientations of these objects in the very rules of combination of his algebra. At about the same time, Hamilton invented his quaternion algebra to represent the properties of rotations in three dimensions. Evidently Grassmann and Hamilton each paid little attention to the other's work. It was not until 1878 that Clifford [5] united these systems into a single geometric algebra, and it was not until just before 1930 that Clifford algebra had important applications in physics. At that time Pauli and, subsequently, Dirac introduced matrix representations of Clifford algebra for physical but not for geometrical reasons.

Clifford constructed his algebra as a direct product of quaternion algebras. I rather think that Hamilton would have made a similar construction earlier had he been troubled about the relation of his quaternions to Grassmann's algebra. But this is a formal algebraic method. So is the now common procedure of arriving at the Pauli and Dirac algebras from a study of representations of the Lorentz group. Had Grassmann considered this problem, I think he would have been led to Clifford algebra in quite a different way. He would have insisted on beginning with algebraic objects directly representing simple geometric objects and then introducing algebraic operations with geometrical interpretations. In this paper, ideas of Grassmann are used to motivate the construction of Clifford algebra and to provide a geometric interpretation of Clifford numbers. This is to be contrasted with other treatments of Clifford algebra which are for the most part formal algebra. By insisting on "Grassmann's" geometric viewpoint, we are led to look upon the Dirac algebra with new eyes. It appears simply as a vector algebra for space-time.
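In modern Geometric Algebra notation (my gloss; neither Grassmann nor Clifford wrote it this way), the two constructions combine for vectors a and b as:

```latex
% Grassmann's outer product: an oriented plane segment, with
% orientation encoded in the antisymmetry of the product.
a \wedge b = -\,b \wedge a .
% Clifford's geometric product unites it with the metric inner product:
ab = a \cdot b + a \wedge b ,
\qquad
a \cdot b = \tfrac{1}{2}(ab + ba),
\quad
a \wedge b = \tfrac{1}{2}(ab - ba) .
```

The antisymmetry of a ∧ b is precisely the encoding of "symmetries and relative orientations ... in the very rules of combination" described above: swapping the factors reverses the orientation of the plane segment.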
The familiar γµ are seen as four linearly independent vectors, rather than as a single “4-vector” with matrices for components. The Pauli algebra appears as a subalgebra of the Dirac algebra and is simply the vector algebra for the 3-space of some inertial frame. The vector algebra of Gibbs is seen not as a separate algebraic system, but as a subalgebra of the Pauli algebra. Most important, there is no geometrical reason for complex numbers to appear. Rather, the unit pseudoscalar (denoted by γ5 in matrix representations) plays the role
of the unit imaginary i. All these features are foreign to the Dirac algebra when seen from the usual viewpoint of matrix representations. Yet, as will be demonstrated in the text, emphasis on the geometric aspect broadens the range of applicability of the Dirac algebra, simplifies manipulations and imbues algebraic expressions with meaning. The main body of this paper consists of five chapters. In chapter I we review the properties of the Clifford algebra for a finite dimensional vector space. We strive to develop notation and theorems which clarify the geometrical meaning of Clifford algebra and make it an effective practical tool. Then we discuss the application of our general viewpoint to space-time and the Dirac algebra. We deal exclusively with the Dirac algebra in the rest of the paper. Nevertheless, there are good reasons for first giving a more general discussion of Clifford algebra. For one thing, it enables us to distinguish clearly between those properties of the Dirac algebra which are peculiar to space-time and those which are not. A better reason is that Clifford algebra has many important applications other than the one we work out in this paper. Indeed, it is important wherever a vector space with an inner product arises. For example, physicists will be interested in the fact that quantum field operators form a Clifford algebra.1 Many readers will be familiar with the matrix algebras of Pauli and Dirac and will be anxious to see how these algebras can be viewed differently. They can take a short cut, first reading the intuitive discussion of section 1 and then proceeding directly to sections 6 and 7. The intervening sections can be referred to as the occasion arises. To get a full appreciation of the efficacy of the Dirac algebra as a vector algebra quite apart from any connection it has with spinors, the discussion of the electromagnetic field in chapter II should be studied closely. 
It should particularly be noted how identification of the Pauli algebra as a subalgebra of the Dirac algebra greatly facilitates the transformation from 4-dimensional to 3-dimensional descriptions. Spinors are discussed in chapter II before any mention is made of Lorentz transformations to emphasize the little-known fact that spinors can be defined without reference to representations of the Lorentz group, even without using matrices. The definition of a spinor as an element of a minimal ideal in a Clifford algebra is due to Marcel Riesz [8].

1 See reference [6]. An account of the Clifford algebra of fermion field operators is given in any text on quantum field theory; see, for instance, reference [7], especially sections (3–10). There is no clear physical connection between the Clifford algebra of space-time discussed in this book and the algebra of quantum field operators, although one might suspect that there should be because of the well-known theorem relating “spin and statistics”.

It can hardly be overemphasized that by our geometrical construction we obtain the Dirac algebra without complex numbers. Nevertheless, it is still possible to write down the Dirac equation. This shows that complex numbers are superfluous in the Dirac theory. One wonders if this circumstance has physical significance. To answer this question, a fuller analysis of the Dirac theory will be carried out elsewhere. In section 13 we will be content with writing the Dirac equation in several different forms and raising some questions. The main purpose of chapter III is to illustrate the most direct formulation of the Dirac theory in the real Dirac algebra. We also indulge in speculating about a connection between isospin and the Dirac algebra. Lorentz transformations are discussed in chapter IV. So many treatments of Lorentz transformations have been given in the past that publication of still another needs a thorough justification. The approach presented here is unique in many details, but two features deserve special mention. First, Lorentz transformations are introduced as automorphisms of directions at a generic point of space-time. This approach is so general that it applies to curved space-time, as can be seen specifically when it is applied in section 23. Automorphisms induced by coordinate transformations are discussed in detail as a very special case. The discussion of coordinate transformations can be further compressed if points in space-time are represented by vectors, but such a procedure risks an insidious confusion of transformations of points in space-time with transformations of tangent vectors.
The most distinctive feature in our approach to Lorentz transformations arises from information about space-time which was built into the Dirac algebra when we specified its relation to the Pauli algebra. The respective merits of representing Lorentz rotations in the Pauli algebra and in the Dirac algebra have been discussed on many occasions. We are able to combine all the advantages of both approaches, because both approaches become one when the Pauli algebra is identified as the even subalgebra of the Dirac algebra. The resultant gain in perspicuity and ease in manipulation of Lorentz transformations is
considerable. Since the Dirac algebra has physical significance, it is important to be thoroughly acquainted with its special properties. For this reason some space is devoted in section 19 to a discussion of isometries of the Dirac algebra. The technique of using isometries to generate “possible” physical interactions is illustrated in section 24. Having demonstrated that the Dirac algebra provides an efficient description of the electromagnetic field as well as the Dirac equation, we must adapt it to curved space-time if it is to provide the basis for a versatile geometric calculus. This task is undertaken in chapter V and proves to be relatively straightforward because of the geometric meaning which has already been supplied to the Dirac algebra. The main problem is to define a suitable differential operator. It appears that all the local properties of space-time can be construed as properties of this remarkable operator. We express the gravitational field equations in terms of it, and we show how it is related to two different generalizations of Einstein’s special principle of relativity. We discuss the application of this operator and Clifford algebra to integration theory. Finally, we illustrate a way to generalize it to generate theories with non-gravitational as well as gravitational interactions. The space-time calculus is sufficiently well-developed to be used as a practical tool in the study of the basic equations of physics formulated here. But it can be developed further. A tensor algebra can be constructed over the entire Dirac algebra. This is most elegantly done by emulating the treatment of dyadics by Gibbs [1, 2, 3]. In this way equivalents of the higher rank tensors and spinors can be constructed.
Above and beyond algebraic details, it is important to realize that by insisting on a direct and universal geometric meaning of the Dirac algebra we are compelled to adopt a philosophy about the construction of physical theories which differs somewhat from the customary one. Ordinarily the Lorentz group is taken to describe the primitive geometrical properties of space-time. These properties are then imbedded in a physical theory by requiring that all mathematical quantities in the theory be covariant under Lorentz transformations. In contrast, we construct the Dirac algebra as an “algebra of directions” which embodies the local geometrical properties of space-time. Physical quantities are then limited to objects which can be constructed from this
algebra. If there is more than a formal difference between these approaches, it must arise because the second is more restricted than the first. This can only occur if the assumptions about space-time which go into the real Dirac algebra are more detailed than those which go into the Lorentz group. That such may actually be the case is suggested by the fact that the first philosophy permits physical theories which contain complex numbers without direct geometrical meaning, whereas the second does not. It should be possible to resolve this problem by a study of the Dirac equation because it contains complex numbers explicitly. But we shall not attempt to do so here.
Chapter I
Geometric Algebra

1 Interpretation of Clifford Algebra

To every n-dimensional vector space Vn with a scalar product there corresponds a unique Clifford algebra Cn . In this section we give an intuitive discussion of how Cn arises as an algebra of directions in Vn . In the next section we proceed with a formal algebraic definition of Cn . From two vectors a and b we can construct new geometric objects endowed with geometric properties which arise when a and b are considered together. For one thing, we can form the projection of one vector on the other. Since it is determined by the mutual properties of a and b, we can express it as a product a · b. That a · b must be a number (scalar) and also symmetric in a and b is an algebraic expression of the geometric concept of projection. We can form another kind of product by noting that the vectors a and b, if they are linearly independent, determine a parallelogram with sides a and b. We can therefore construct from a and b a new vectorlike entity with magnitude equal to the magnitude of the parallelogram and direction determined by the plane containing the parallelogram. Since it is completely determined by a and b, we can write it as a product: a ∧ b. We call a ∧ b the outer product of a and b. Loosely, we can think of a ∧ b as an algebraic representation of the parallelogram since it retains the information as to the area and plane of the parallelogram. However a ∧ b does not determine the shape of the parallelogram and could
equally well be interpreted as a circular disc in the plane of a and b with area equal to that of the parallelogram. In its shape the parallelogram contains information about the vectors a and b individually, whereas a ∧ b retains only properties which arise when a and b are considered together—it has only magnitude and direction. Actually, since we can distinguish two sides of the plane containing a and b, we can construct two different products which we write as a ∧ b and b ∧ a. Intuitively speaking, if a ∧ b is represented by a parallelogram facing up on the plane, then b ∧ a can be represented by a similar parallelogram facing down in the plane. a ∧ b and b ∧ a have opposite orientation, just as do the vectors a and −a. We can express this by an equation: a ∧ b = −b ∧ a.
(1.1)
Thus the outer product of vectors is antisymmetric. The reader is cautioned not to confuse a∧b with the familiar product a×b introduced by Gibbs. For the vectors of V3 , a × b is the dual of a ∧ b and the same interpretation can be given to both. However, the definition of a × b depends on the dimensionality of the vector space in which a and b are imbedded, whereas a ∧ b pertains only to the plane containing a and b. The reader is now directed to observe that since a · b and a ∧ b have opposite symmetry we can form a new kind of product ab with a · b and a ∧ b as its symmetric and antisymmetric parts. We write ab = a · b + a ∧ b.
(1.2)
Let us call this the vector product of a and b. It is the fundamental kind of multiplication in Clifford algebra. In the subtle way we have described, it unites the two geometrically significant kinds of multiplication of vectors. In our attempt to construct a geometrically meaningful multiplication for vectors we were led to define other geometric objects which, like vectors, can be characterized by magnitude and direction. This suggests a generalization of the concept of vector. Every r-dimensional subspace of Vn determines an oriented r-dimensional unit volume element which we call an r-direction. By associating a magnitude with an r-direction we arrive at the concept of an r-vector. A few examples
will help make this concept clear. The vectors of Vn are 1-vectors. The unit vectors of Vn are 1-directions. Because they have magnitude but no direction, we can interpret the scalars as 0-vectors. The outer product a ∧ b is a 2-vector. If it has unit magnitude it is a 2-direction. Let us call r the degree of an r-vector. Evidently n is the largest degree possible for an r-vector of Vn . Only two unit n-vectors can be associated with Vn . They differ only in sign. They represent the two possible orientations which can be given to the unit volume element of Vn . Let us return to our definition of vector product. In (1.2) we introduced the sum of a 0-vector and a 2-vector. In view of the interpretation of a · b and a ∧ b as “vectors”, it is natural that they should be added according to the usual rules of vector addition. The 0-vectors and the 2-vectors are then linearly independent. Now that we understand the product of two vectors, what is more natural than to study the product of three? We find that it is consistent with our geometrical interpretation of (1.2) to allow multiplication which is associative and distributive with respect to addition. As we will show in the text, it follows that the product of any number of vectors can be expressed as a sum of r-vectors of different degree. In this way we generate the Clifford algebra Cn of Vn . An element of Cn will be called a Clifford number. Every r-vector of Vn is an element of Cn , and, conversely, every Clifford number can be expressed as a linear combination of r-directions. Thus, we can interpret Cn as an algebra of directions in Vn , or, equivalently, an algebra of subspaces of Vn . In the paper which introduced the algebra that goes by his name, Clifford remarked on the dual role played by the real numbers. For instance, the number 2 can be thought of as a magnitude, but it can also be thought of as the operation of doubling, as in the expression 2 × 3 = 6. 
Clifford numbers exhibit the same dual role, but in a more significant way since directions are involved. An r-vector can be thought of as an r-direction with a magnitude, but it can also be thought of as an operator which by multiplication changes one Clifford number into another. This dual role enables us to discuss directions and operations on directions without going outside of the Clifford algebra. Combining our insights into the interpretation of the
elements and operations in Cn we can characterize Clifford algebra as a generalization of the real numbers which unites the geometrical notion of direction with the notion of magnitude.
2 Definition of Clifford Algebra

To exploit our familiarity with vectors, we introduce Clifford algebra as a multiplication defined on a vector space. We begin with an axiomatic approach which does not make explicit mention of a basis, because it keeps the rules of combination clearly before us. In later sections we again emphasize geometrical interpretation. We develop Clifford algebra over the field of the real numbers. Development over the complex field gives only a spurious kind of generality since it adds nothing of geometric significance. Indeed, it tends to obscure the fascinating variety of “hypercomplex numbers” which already appear in the Clifford algebra over the reals. Our analysis will suggest that the complex field arises in physics for geometric reasons. In any case, it is a trivial matter to generalize our work to apply to Clifford algebra over the complex field. Let Vn be an n-dimensional vector space over the real numbers with elements a, b, c, . . . called vectors. Represent multiplication of vectors by juxtaposition, and denote the product of an indeterminate number of vectors by capital letters A, B, C, . . . . Assume these “monomials” may be “added” according to the usual rules,

A + B = B + A (commutative law),   (2.1)
(A + B) + C = A + (B + C) (associative law).   (2.2)
Denote the resulting “polynomials” by capital letters also. Multiplication of polynomials satisfies:

A(BC) = (AB)C (associative law),   (2.3)
a0 = 0   (2.4)

for every a in Vn and 0 the null vector,

(A + B)C = AC + BC, C(A + B) = CA + CB (distributive law).   (2.5)
Assume scalars commute with vectors, i.e., αa = aα for scalar α and vector a.
(2.6)
Equations (2.3) and (2.6) imply that the usual rules for scalar multiplication hold also for vector polynomials. Scalars and vectors are related by assuming: For a, b in Vn , ab is a scalar if and only if a and b are collinear.
(2.7)
We identify |a²|^{1/2} with the length of the vector a. The algebra defined by the above rules is called the Clifford algebra Cn of the vector space Vn .1 Conversely, Vn is said to be the vector space associated with Cn . The elements of Cn are called Clifford numbers, or simply c-numbers. We shall call AB the vector product of c-numbers A and B. From the mathematical point of view it is interesting to note that there is a great deal of redundancy in our axioms. It is obvious, for instance, that the rules of vector and scalar multiplication in Vn with which we began are special cases of the operations on arbitrary c-numbers which we defined later. A minimal set of axioms for Clifford algebra over the reals is very nearly identical with the axioms for the real numbers themselves.
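The axioms above can be modeled concretely on a computer. The sketch below is an editorial illustration (not part of the text): it represents the basis blades of Cn as bitmasks over an orthonormal Euclidean basis e1 , . . . , en and implements the vector product; the function names are hypothetical.

```python
def reorder_sign(a, b):
    """Sign picked up when the generators of blade `a` are permuted past
    those of blade `b` into canonical order (each transposition gives -1)."""
    a >>= 1
    swaps = 0
    while a:
        swaps += bin(a & b).count("1")
        a >>= 1
    return -1 if swaps & 1 else 1

def gp(x, y):
    """Geometric (vector) product of multivectors stored as
    {blade_bitmask: coefficient} dicts, Euclidean metric (e_i^2 = +1)."""
    out = {}
    for ba, ca in x.items():
        for bb, cb in y.items():
            blade = ba ^ bb                       # repeated generators cancel
            coeff = reorder_sign(ba, bb) * ca * cb
            out[blade] = out.get(blade, 0) + coeff
    return {b: c for b, c in out.items() if c != 0}

e1, e2 = {0b01: 1}, {0b10: 1}

# a vector times itself is a scalar, as in axiom (2.7)
assert gp(e1, e1) == {0b00: 1}
# orthogonal vectors anticommute: e1 e2 = e12 = -e2 e1
assert gp(e1, e2) == {0b11: 1} and gp(e2, e1) == {0b11: -1}
```

In this model a blade's bitmask records which basis vectors it contains, so the 2ⁿ subsets of the basis reproduce the 2ⁿ-dimensional structure derived in section 4.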
3 Inner and Outer Products
Consider the quadratic form a² for a vector a. Observe that for vectors a and b, (a + b)² = a² + ab + ba + b², ab + ba = (a + b)² − a² − b², which is a scalar. Thus a² has an associated symmetric bilinear form which we denote by a · b ≡ ½(ab + ba).
(3.1)
1 The special effects of various kinds of singular metric on Vn are discussed by Riesz [9]. Most of our work is indifferent to these features, so we will take them into account only as the occasion arises.
Call it the inner product of a and b. Obviously a · a = a². Now decompose the product of two vectors into symmetric and antisymmetric parts, ab = ½(ab + ba) + ½(ab − ba). Anticipating a special significance for the antisymmetric part, call it the outer product of a and b and write a ∧ b ≡ ½(ab − ba).
(3.2)
So the product of 2 vectors can be written in the form ab = a · b + a ∧ b.
(3.3)
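The decomposition (3.3) can be checked numerically by using the familiar 2 × 2 Pauli matrices (introduced in section 6) as a concrete matrix representation of the Clifford algebra of Euclidean 3-space. This is an editorial sketch, not part of the original text.

```python
import numpy as np

s1 = np.array([[0, 1], [1, 0]], dtype=complex)
s2 = np.array([[0, -1j], [1j, 0]], dtype=complex)
s3 = np.array([[1, 0], [0, -1]], dtype=complex)
basis = [s1, s2, s3]

def vec(a):
    """Represent the vector a = a1 s1 + a2 s2 + a3 s3 as a matrix."""
    return sum(ai * si for ai, si in zip(a, basis))

a, b = [1.0, 2.0, 3.0], [-1.0, 0.5, 2.0]
A, B = vec(a), vec(b)

inner = 0.5 * (A @ B + B @ A)   # symmetric part, eq. (3.1)
outer = 0.5 * (A @ B - B @ A)   # antisymmetric part, eq. (3.2)

# the symmetric part is the scalar a.b times the identity matrix
assert np.allclose(inner, np.dot(a, b) * np.eye(2))
# eq. (3.3): ab = a.b + a^b
assert np.allclose(A @ B, inner + outer)
```

The symmetric part comes out proportional to the identity, which is how a scalar appears in a matrix representation; the antisymmetric part is the bivector a ∧ b.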
We proceed to examine the outer product in some detail. Obviously a ∧ b = −b ∧ a and a ∧ a = 0. From (2.7) it is apparent that a ∧ b = 0 means that a and b are collinear—a statement which is complementary to the statement that a · b = 0 means that a and b are perpendicular. Geometrically, a ∧ b can be interpreted as an oriented area. b ∧ a has opposite orientation. a ∧ b ∧ c can be interpreted as an oriented volume, i.e., the parallelepiped with sides a, b, c. c ∧ b ∧ a is the volume with opposite orientation. The outer product of r vectors, Ar = a1 ∧ a2 ∧ . . . ∧ ar
(3.4)
is called a simple r-vector. It may be defined by recursion: Ar+1 = a ∧ Ar ≡ ½(aAr + (−1)^r Ar a) = a ∧ a1 ∧ . . . ∧ ar .
(3.5)
Ar may be interpreted as an oriented r-dimensional volume. A particular simple r-vector may be thought of as representing the r-dimensional subspace of Vn which contains the vectors of “its product” and their linear combinations. A linear combination of simple r-vectors (e.g., a1 ∧ a2 + a3 ∧ a4 ) will be called simply an r-vector. As an alternate terminology to 0-vector, 1-vector, 2-vector, 3-vector, . . . we shall often use the names scalar, vector, bivector, trivector, . . . respectively. An n-vector in Cn will be called a pseudoscalar, an (n − 1)-vector a pseudovector, etc. We shall call r the degree of an r-vector. The geometric term “dimension” is perhaps more appropriate, but we shall be using it to
signify the number of linearly independent vectors in a vector space. By multivector we shall understand an r-vector of unspecified degree. Generalizing (3.5), we define the outer product of a simple r-vector with a simple s-vector:

(a1 ∧ a2 ∧ . . . ∧ ar ) ∧ (b1 ∧ b2 ∧ . . . ∧ bs ) = (a1 ∧ . . . ∧ ar−1 ) ∧ (ar ∧ b1 ∧ . . . ∧ bs ).
(3.6)
This is clearly just the associative law for outer products. From the associative law for outer products and (3.5) it follows easily that the outer product of r vectors is antisymmetric with respect to interchange of any two vectors. Therefore, with proper attention to sign, we may permute the vectors “in” a simple r-vector at will. Since a ∧ b = 0 if a and b are collinear, an r-vector is equal to zero if it “contains” two linearly dependent vectors. This fact gives us a simple test for linear dependence: r vectors are linearly dependent if and only if their associated r-vector is zero. Thus, we could have defined the dimension of Vn as the degree of the largest r-vector in the Clifford algebra of Vn . The commutation rule for the outer product of an r-vector Ar with an s-vector Bs follows easily from antisymmetry and associativity,

Ar ∧ Bs = (−1)^{rs} Bs ∧ Ar .
(3.7)
We turn now to a generalization of the inner product which corresponds to (3.5). The inner product has symmetry opposite to that of the outer product. For a vector a and an r-vector Ar = a1 ∧ a2 ∧ . . . ∧ ar write a · Ar ≡ ½(aAr − (−1)^r Ar a).
(3.8)
The result is an (r − 1)-vector. The right side of (3.8) can be evaluated to give the useful expansion

a · (a1 ∧ . . . ∧ ar ) = Σ_{k=1}^{r} (−1)^{k+1} (a · ak ) a1 ∧ . . . ∧ ak−1 ∧ ak+1 ∧ . . . ∧ ar .
(3.9)
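For r = 2, formula (3.9) reads a · (a1 ∧ a2 ) = (a · a1 )a2 − (a · a2 )a1 , which in Euclidean 3-space can be checked against the Gibbs “bac-cab” identity, since a · (b ∧ c) = −a × (b × c). The following numerical check is an editorial sketch with arbitrary test vectors.

```python
import numpy as np

rng = np.random.default_rng(0)
a, b, c = rng.standard_normal((3, 3))   # three random vectors in 3-space

lhs = np.dot(a, b) * c - np.dot(a, c) * b   # eq. (3.9) with r = 2
rhs = -np.cross(a, np.cross(b, c))          # Gibbs identity for -a x (b x c)

assert np.allclose(lhs, rhs)
```

This illustrates the remark in section 1 that the Gibbs algebra is a special, dimension-dependent disguise of operations that Clifford algebra expresses directly.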
This expansion is most easily obtained from a more general formula which we prove later. We define the inner product of simple r- and s-vectors in analogy to (3.6). (a1 ∧ . . . ∧ ar ) · (b1 ∧ . . . ∧ bs ) = (a1 ∧ . . . ∧ ar−1 ) · [ar · (b1 ∧ . . . ∧ bs )].
(3.10)
By (3.8), this is well defined for r ≤ s. The expansion (3.9) shows that if one of the a’s is orthogonal to all the b’s, the inner product (3.10) is zero. Successive applications of (3.10) and (3.9) show that (3.10) is an |r − s|-vector. Two other forms of (3.10) are often convenient:

(a1 ∧ . . . ∧ ar ) · (b1 ∧ . . . ∧ bs ) = (a1 ∧ . . . ∧ ak−1 ) · [(ak ∧ . . . ∧ ar ) · (b1 ∧ . . . ∧ bs )],   (3.11)

(a1 ∧ . . . ∧ ar ) · (b1 ∧ . . . ∧ bs ) = (a1 ∧ . . . ∧ ar ) · (b1 ∧ . . . ∧ br ) br+1 ∧ br+2 ∧ . . . ∧ bs
  − (a1 ∧ . . . ∧ ar ) · (b2 ∧ . . . ∧ br+1 ) b1 ∧ br+2 ∧ . . . ∧ bs
  + . . .
  + (−1)^{s−r} (a1 ∧ . . . ∧ ar ) · (bs−r+1 ∧ . . . ∧ bs ) b1 ∧ b2 ∧ . . . ∧ bs−r .   (3.12)
(3.12) gives the explicit form of the multivector which is obtained from the inner product of two simple multivectors. By (3.8), a · (b1 ∧ . . . ∧ bs ) = (−1)^{s+1} (b1 ∧ . . . ∧ bs ) · a. So from (3.10) we find that for any r-vector and s-vector with s ≥ r,
(3.13)
This corresponds to (3.7) for outer products. To get some understanding of the geometric significance of (3.10) we look at a couple of simple cases. By (3.10) and (3.9),

(a ∧ b) · (b ∧ a) = a · [b · (b ∧ a)] = a · [(b · b)a − (b · a)b]

                  = a²b² − (a · b)² = | b · b   b · a |
                                      | a · b   a · a | .   (3.14)
This is the square of the area of a ∧ b. The projection of a ∧ b on u ∧ v is expanded the same way as (3.14),

(b ∧ a) · (u ∧ v) = (a · u)(b · v) − (a · v)(b · u)

                  = | a · u   a · v |
                    | b · u   b · v | .
(3.15)
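Formula (3.15) is easy to verify numerically for vectors in ordinary Euclidean 3-space; the following is an editorial sketch with arbitrary test vectors, not part of the original text.

```python
import numpy as np

rng = np.random.default_rng(1)
a, b, u, v = rng.standard_normal((4, 3))   # four random vectors

# left-hand side of (3.15), written out
lhs = np.dot(a, u) * np.dot(b, v) - np.dot(a, v) * np.dot(b, u)

# the same quantity as a 2x2 determinant of inner products
det = np.linalg.det(np.array([[np.dot(a, u), np.dot(a, v)],
                              [np.dot(b, u), np.dot(b, v)]]))

assert np.isclose(lhs, det)
```

Taking u = b and v = a recovers (3.14), the squared area of the parallelogram a ∧ b.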
The sign measures the relative orientations of a ∧ b and u ∧ v. Another example shows how (3.9) and (3.10) are related to the expansion of a determinant by cofactors,

(a ∧ b ∧ c) · (u ∧ v ∧ w) = (a ∧ b) · [c · (u ∧ v ∧ w)]
  = (a ∧ b) · [(c · u) v ∧ w − (c · v) u ∧ w + (c · w) u ∧ v]

  = (c · u) | b · v   b · w | − (c · v) | b · u   b · w | + (c · w) | b · u   b · v |
            | a · v   a · w |           | a · u   a · w |           | a · u   a · v |

    | c · u   c · v   c · w |
  = | b · u   b · v   b · w | .   (3.16)
    | a · u   a · v   a · w |

The generalization of this formula is evidently

                                          | a1 · b1   . . .   a1 · br |
(ar ∧ . . . ∧ a1 ) · (b1 ∧ . . . ∧ br ) = |   .                  .    |
                                          | ar · b1   . . .   ar · br | .   (3.17)
This is the projection of a1 ∧ . . . ∧ ar on b1 ∧ . . . ∧ br . We have shown how the Clifford algebra Cn of Vn generates a vector algebra on Vn . The basic rules for vector multiplication are the associative and anti-commutative rules for outer multiplication and formulas (3.9) and (3.10) for inner multiplication. All other formulas of vector analysis follow easily, as do all theorems on determinants.2 In fact, (3.17) can be taken as the definition of determinant. We have seen that our original rule of multiplication acts as a kind of “neutral product” from which other rules of multiplication can be obtained which have geometric significance. The inner and outer products are not the only examples of such rules, but they are the only ones to which we shall assign a special notation. Our inner and outer products were invented independently of Clifford algebra by Grassmann [4] about 1844. Because of their geometric significance they occur ubiquitously and have no doubt been reinvented many times. The outer product is particularly important in multiple integration theory. We will discuss this again in section 22. Our “neutral algebra” was invented by Clifford [5] about 1878. He appears to have arrived at it as a kind of union of Grassmann’s exterior (outer) product with Hamilton’s quaternions. Grassmann’s original ideas for algebraic representation of geometric notions are brought to perfection by Clifford algebra, as is Hamilton’s idea that the generators of rotations are the proper generalizations of pure imaginary numbers. The latter idea is realized by the fact that the bivectors of Cn generate the rotations in Vn ; it will be amply illustrated in our treatment of Lorentz transformations (chapter IV).

2 Note, for instance, that the antisymmetry of rows (or columns) under interchange is obvious from (3.17) by the antisymmetry of the outer product.
4 Structure of Clifford Algebra
In this section we discuss some important properties of Cn which for the most part do not depend on the metric of Vn . We learned in the last section that the product of a vector a with an r-vector Ar gives, in general, an (r−1)-vector plus an (r+1)-vector: aAr = a · Ar + a ∧ Ar .
(4.1)
It follows that the result of any combination of multiplication and addition of vectors can be expressed as a polynomial of multivectors. In fact, since any vector can be expressed as a linear combination of n linearly independent vectors e1 , e2 , . . . , en , any c-number can be written as a polynomial of multivectors formed from the ei :

A = a + a^i ei + (1/2!) a^{i1 i2} ei1 ∧ ei2 + . . . + (1/n!) a^{i1 ...in} ei1 ∧ ei2 ∧ . . . ∧ ein .   (4.2)
Here the a’s are scalar coefficients and summation is taken from 1 to n over repeated indices.
Let us pause to introduce some helpful notations. We will often write eij...k ≡ ei ∧ ej ∧ . . . ∧ ek .
(4.3)
Since eij...k is antisymmetric under interchange of indices, it is usually convenient to write the indices in natural order (i.e. i < j < . . . < k). This eliminates redundancy in the sums of (4.2). As a further abbreviation, we may represent a whole set of indices by a capital letter. Let J = (j1 , . . . , jn )
(4.4)
where jk = k or 0. By deleting elements for which jk = 0, we may write any one of the base multivectors in the form eJ = ej1 ...jn = ej1 ∧ ej2 ∧ . . . ∧ ejn .
(4.5)
For the case when all jk vanish, we take eJ = 1. The expansion (4.2) of any c-number can now be written simply as

A = a^J eJ .   (4.6)

Let Cn^(r) be the space of all r-vectors; thus, Cn^(0) is the scalar field; Cn^(1) = Vn ; Cn^(n) is the space of all pseudoscalars. The set of all eJ of degree r forms a basis for Cn^(r) . It is a simple matter of counting to show that the dimension of the linear space Cn^(r) is given by the binomial coefficient (n choose r). The whole Clifford algebra Cn is itself a linear space, and the set of all eJ forms what we shall call a tensor basis for Cn . The dimension of Cn is given by

dim Cn = Σ_{r=0}^{n} dim Cn^(r) = Σ_{r=0}^{n} (n choose r) = 2^n .   (4.7)

It is a fact of great geometrical significance that Cn can be decomposed into two subspaces composed respectively of even and odd multivectors. We write

Cn = Cn+ + Cn−
(4.8)
where Cn+ is the space of all r-vectors for r even, and Cn− is the space of all r-vectors for r odd. The relation of Cn+ to Cn− by multiplication is expressed by the formulas

Cn+ Cn+ = Cn+ ,   Cn+ Cn− = Cn− ,
Cn− Cn+ = Cn− ,   Cn− Cn− = Cn+ .
(4.9)
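The dimension count (4.7) and the even–odd decomposition (4.8) can be checked by enumerating the basis blades eJ as subsets of {e1 , . . . , en }; the enumeration below is an editorial sketch, not part of the text.

```python
from math import comb

n = 4                                  # e.g. the Dirac algebra has n = 4
blades = range(2 ** n)                 # each bitmask selects a subset of basis vectors
grade = [bin(J).count("1") for J in blades]   # degree r of each basis blade eJ

# dimension of C_n^(r) is the binomial coefficient (n choose r), eq. (4.7)
assert all(grade.count(r) == comb(n, r) for r in range(n + 1))
# total dimension of C_n is 2^n
assert len(blades) == 2 ** n

# the even and odd multivectors of (4.8) each span half the algebra
even = sum(1 for g in grade if g % 2 == 0)
assert even == 2 ** (n - 1)
```

For n = 4 this gives the grade dimensions 1, 4, 6, 4, 1 summing to 16, with the even subalgebra (the Pauli algebra, as shown below) of dimension 8.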
These formulas are implied by the more specific relations

Cn^(r) Cn^(s) ⊂ Σ_{k=0}^{m} Cn^(r+s−2k)   (4.10)

where m = min(r, s). It is clear from (4.9) that Cn+ always forms a subalgebra of Cn whereas Cn− never does. We call Cn+ the even subalgebra of Cn . Cn+ is itself a Clifford algebra, and by a proper identification of vectors we may associate it with a vector space Vn−1 . Let us express this by

Cn+ = Cn−1 .
(4.11)
This relation will be of great significance to us in section 7 when we discuss the relation of the Pauli and Dirac algebras. There we will discuss a geometrically significant rule which can be used to determine the vectors of Vn−1 from Cn . Here we will merely mention a curious fact, namely, that the most important Clifford algebras are related in a series by (4.11): The Pauli algebra is the even subalgebra of the Dirac algebra. The quaternion algebra is the even subalgebra of the Pauli algebra. The complex numbers compose the even subalgebra of the quaternions, and, of course, the real numbers compose the even subalgebra of the complex numbers. The decomposition (4.8) of Cn into even and odd c-numbers is as significant as the decomposition of complex numbers into real and imaginary parts. In fact, for that Clifford algebra C1 which can be identified with the complex numbers, the two decompositions are one and the same. This leads us to an involution of Cn which may be looked upon as a generalization of complex conjugation. We give the name inversion3 to the mapping which takes every r-vector Ar into its conjugate A∗r defined by

A∗r = (−1)^r Ar .   (4.12)

3 Also called the main involution of Cn .
Inversion distinguishes even and odd subspaces of Cn , Cn∗ = (Cn+ + Cn− )∗ = Cn+ − Cn− .
(4.13)
Inversion may be thought of as an “invariant” kind of complex conjugation because it does not favor a particular tensor basis in the algebra. It is worth mentioning that if n is odd inversion is an outer automorphism of Cn , i.e. it cannot be induced by operations defined in Cn (as can easily be verified for the complex numbers and for the Pauli algebra). However, inversion is important enough to make it worthwhile to extend the definition of Cn to include it. In the one-dimensional Clifford algebra of the complex numbers the vectors are also the pseudoscalars. We customarily represent the unit positive pseudoscalar by the symbol i. A rotation in the complex plane is a mapping of scalars into pseudoscalars by the exponential operator eiφ . We call it a duality rotation. There is an analog of this transformation in any non-singular Clifford algebra Cn . (A Clifford algebra will be said to be non-singular if it contains pseudoscalars with non-zero square.) Let us again use the symbol i to denote the unit pseudoscalar in Cn . Again we will call the transformation Cn → eiφ Cn
(4.14)
a duality rotation. In later sections we will see that duality rotations in the Dirac algebra have physical significance.
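In the Pauli algebra the unit pseudoscalar i = σ1 σ2 σ3 is represented by i times the identity matrix (section 6), so i² = −1 and the duality rotation (4.14) reduces to e^{iφ} = cos φ + i sin φ, exactly as for complex numbers. The check below is an editorial sketch using the standard Pauli matrices.

```python
import numpy as np

s1 = np.array([[0, 1], [1, 0]], dtype=complex)
s2 = np.array([[0, -1j], [1j, 0]], dtype=complex)
s3 = np.array([[1, 0], [0, -1]], dtype=complex)

i = s1 @ s2 @ s3                      # unit pseudoscalar of the Pauli algebra
assert np.allclose(i, 1j * np.eye(2))
assert np.allclose(i @ i, -np.eye(2))     # i^2 = -1: the algebra is non-singular

phi = 0.7
rot = np.cos(phi) * np.eye(2) + np.sin(phi) * i   # e^{i phi}, since i^2 = -1

# a duality rotation preserves the (Frobenius) norm of any c-number
A = s1 + 2 * (s2 @ s3)
assert np.isclose(np.linalg.norm(rot @ A), np.linalg.norm(A))
```

Because i commutes with every element here, the duality rotation acts as a global phase in the matrix representation, mirroring the rotation of scalars into pseudoscalars in the complex plane.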
5 Reversion, Scalar Product
To obtain the reverse A† of a c-number A, decompose A into a sum of products of vectors and reverse the order of all products. We call this operation reversion4 or, alternatively, hermitian conjugation, because it is equivalent to the operation by that name in a matrix representation of Clifford algebra which represents 1-vectors by hermitian matrices. Since the inner product of vectors is symmetric, it does not matter whether the term “product” in the above definition is interpreted as “vector product” or as “outer product”.

4 Also called the main antiautomorphism of Cn .

Let us define the scalar product (A, B) of two c-numbers A and B as the scalar part of A† B,

(A, B) ≡ (A† B)S .
(5.1)
This is not to be confused with our “inner product” which might not even have a scalar part. We may say that A is orthogonal to B if (A, B) = 0. It is easy to show that we can find a basis {eJ } for Cn consisting of multivectors which are orthogonal in this sense. If the metric of Vn is Euclidean the eJ can be chosen orthonormal, i.e. (eJ , eK ) = δJK .
(5.2)
The scalar product (5.1) determines a norm for each c-number A:

‖A‖ = (A† A)S .
(5.3)
This norm is positive definite if and only if the metric of Vn is Euclidean, for only in the Euclidean case can we find an orthonormal basis. Thus, using (5.2) and A = A^J eJ ≠ 0, we find

‖A‖ = Σ_J (A^J )² > 0.   (5.4)

‖A‖ may be interpreted as the square of the length of a “vector” in the vector space Cn . To see that this is a generalization of the square of the length of a vector, note that for a vector a,

a² = (a²)S = (a† a)S = ‖a‖.
(5.5)
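In a 2 × 2 matrix representation the scalar part of a c-number is half the real part of its trace, and reversion corresponds to hermitian conjugation (as noted above), so (5.5) can be checked directly. This is an editorial sketch, not part of the text.

```python
import numpy as np

s1 = np.array([[0, 1], [1, 0]], dtype=complex)
s2 = np.array([[0, -1j], [1j, 0]], dtype=complex)
s3 = np.array([[1, 0], [0, -1]], dtype=complex)

def scalar_part(M):
    """Scalar part of a c-number in the 2x2 Pauli representation."""
    return np.trace(M).real / 2

a = np.array([1.0, -2.0, 0.5])
A = a[0] * s1 + a[1] * s2 + a[2] * s3   # the vector a as a matrix

# eq. (5.5): a^2 = (a† a)_S = ||a||, the squared length of a
assert np.isclose(scalar_part(A.conj().T @ A), np.dot(a, a))
# indeed a^2 is already a scalar: a multiple of the identity matrix
assert np.allclose(A @ A, np.dot(a, a) * np.eye(2))
```

Since the metric here is Euclidean, the norm is positive definite, in agreement with (5.4).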
Since (5.3) satisfies (5.4) only if the metric on Vn is Euclidean, any singularity it may have in other cases will be exclusively due to a singular metric on Vn . Therefore we conclude that (5.1) is a natural generalization of the inner product on Vn . We should mention that the scalar product (5.1) is symmetric, (A, B) = (B, A).
(5.6)
This just follows from the fact that the scalar part of a product of c-numbers is invariant under cyclic permutation of its factors, as can be seen by using a tensor basis of orthogonal multivectors. Thus

(AB)S = Σ_J A^J B^J eJ² = Σ_J B^J A^J eJ² = (BA)S .   (5.7)
The problems of evaluating transition probabilities that occur in quantum electrodynamics always involve finding the scalar part of some c-numbers which may, for example, represent the scalar product of c-numbers as in (5.1). This can always be reduced to evaluation of the scalar part of a product of vectors. (When matrix representations of the Clifford algebra are used, this is a problem in evaluating traces.) We derive a fundamental formula for such calculations. Let a, b1 , b2 , . . . , bk be vectors. We prove

a · (b1 b2 . . . bk ) = Σ_{r=1}^{k} (−1)^{r+1} (a · br )(b1 . . . br−1 br+1 . . . bk )   (5.8)

by adding up both sides of the equalities:

½(ab1 b2 . . . bk + b1 ab2 b3 . . . bk ) = (a · b1 ) b2 b3 . . . bk
−½(b1 ab2 . . . bk + b1 b2 ab3 . . . bk ) = −(a · b2 ) b1 b3 b4 . . . bk
. . .
(−1)^{k+1} ½(b1 . . . bk−1 abk + b1 . . . bk a) = (−1)^{k+1} (a · bk ) b1 . . . bk−1 .

Multivectors of the same degree on both sides of (5.8) can be equated. The largest multivector gives equation (3.9), so this is the promised proof of that equation. By (5.7),

[a1 · (a2 a3 . . . ak )]S = ½[(a1 . . . ak )S + (a2 . . . ak a1 )S ] = (a1 a2 . . . ak )S .

So by (5.8)

(a1 a2 . . . ak )S = Σ_{r=2}^{k} (−1)^r (a1 · ar )(a2 . . . ar−1 ar+1 . . . ak )S .
(5.9)
Chapter I. Geometric Algebra
(a₁ ... a_k)_S can be written entirely in terms of inner products of vectors by successive applications of (5.9). This and other formulas can be found in standard works on quantum field theory.⁵ But (5.8) is the most useful formula. Note, in passing, that (a₁ ... a_k)_S = 0 if k is odd.
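The k = 4 case of (5.9) can be checked numerically. The sketch below is an addition (not in the original text) that anticipates section 6 by representing Euclidean 3-vectors with the familiar 2×2 Pauli matrices, where the scalar part of a multivector is half the real part of the trace of its matrix; NumPy is assumed.

```python
import numpy as np

# Pauli matrices: a Euclidean 3-vector a is represented by the
# hermitian matrix a . sigma; the scalar part of any element of
# the algebra is half the real part of the trace.
sigma = [np.array([[0, 1], [1, 0]], dtype=complex),
         np.array([[0, -1j], [1j, 0]], dtype=complex),
         np.array([[1, 0], [0, -1]], dtype=complex)]

def vec(a):
    """Matrix representative of the vector a = (a1, a2, a3)."""
    return sum(ai * s for ai, s in zip(a, sigma))

def scalar_part(M):
    """Scalar part of the multivector with matrix M."""
    return 0.5 * np.trace(M).real

rng = np.random.default_rng(0)
a1, a2, a3, a4 = (rng.standard_normal(3) for _ in range(4))

# Left side of (5.9) for k = 4: (a1 a2 a3 a4)_S.
lhs = scalar_part(vec(a1) @ vec(a2) @ vec(a3) @ vec(a4))

# Right side: sum over r of (-1)^r (a1 . ar)(remaining pair)_S,
# where the scalar part of a product of two vectors is their dot product.
rhs = (a1 @ a2) * (a3 @ a4) - (a1 @ a3) * (a2 @ a4) + (a1 @ a4) * (a2 @ a3)

assert abs(lhs - rhs) < 1e-12
```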
6 The Algebra of Space

The set of all vectors tangent to a generic point x in Euclidean 3-space E₃ forms a 3-dimensional vector space V₃ = V₃(x) with an inner product which expresses the metric on E₃. The Clifford algebra of V₃ is so important we will give it a special name.⁶ We call it the Pauli algebra P, to agree with the name given its matrix representation. To exploit the familiarity of physicists with the matrix representation of P, we will begin our study of P with a basis in V₃. Select a set of orthonormal vectors σ₁, σ₂, σ₃ tangent to the point x.⁷ The vectors σ_k satisfy the same multiplication rules as the familiar Pauli matrices:

σ_j · σ_k = ½(σ_jσ_k + σ_kσ_j) = δ_jk.  (6.1)
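These multiplication rules are easy to verify in the matrix representation itself. The following sketch is an addition (NumPy assumed): it checks (6.1) and also that the product σ₁σ₂σ₃ is represented by i times the identity, which is why it can play the role of the unit imaginary below.

```python
import numpy as np

# Pauli matrices as the standard matrix representation of the
# orthonormal vectors sigma_1, sigma_2, sigma_3.
s1 = np.array([[0, 1], [1, 0]], dtype=complex)
s2 = np.array([[0, -1j], [1j, 0]], dtype=complex)
s3 = np.array([[1, 0], [0, -1]], dtype=complex)
sigma = [s1, s2, s3]
I2 = np.eye(2)

# (6.1): the symmetrized product of two basis vectors is delta_jk.
for j in range(3):
    for k in range(3):
        sym = 0.5 * (sigma[j] @ sigma[k] + sigma[k] @ sigma[j])
        assert np.allclose(sym, (1.0 if j == k else 0.0) * I2)

# The unit volume element sigma_1 sigma_2 sigma_3 is i times the
# identity matrix, and therefore commutes with everything.
assert np.allclose(s1 @ s2 @ s3, 1j * I2)
```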
By taking all products of the σ_k we generate a tensor basis for P. P is a linear space of dimension 2³ = 8. The elements of the tensor basis in P can be given geometric interpretation. The bivector

σ_jk = σ_j ∧ σ_k = ½(σ_jσ_k − σ_kσ_j)  (6.2)

can be interpreted as a unit element of area oriented in the jk-plane. The unit element of volume is so important that we represent it by the special symbol i,

i = σ₁ ∧ σ₂ ∧ σ₃ = σ₁σ₂σ₃.  (6.3)

⁵ See, for example, page 437 of reference [7].
⁶ In the same way that we extend the concept of a vector to a vector field on a manifold, we can extend the concept of a c-number to a c-number field. And just as it is often convenient to suppress the distinction between vector and vector field, it is often convenient to suppress the distinction between c-number and c-number field. In the rest of this paper we will frequently do such things with little or no comment.
⁷ Throughout this paper we use boldface type only for points in Euclidean 3-space and 1-vectors in the Pauli algebra. Some such convention as this is necessary to distinguish vectors in space from vectors in space-time, as will become clear in section 7.
We choose the sign of i to be positive when the σ_k form a righthanded set of vectors. To remind us of this, we call i the unit righthanded pseudoscalar. The symbol i is appropriate because i commutes with all elements of P, and i² = −1. Of course, in this context, i also has a geometrical meaning, and this enables us to give a geometrical interpretation to complex conjugation. Let us give the name space conjugation to the operation which reverses the direction of all vectors at the point x.⁸ It can be represented as an operation on the base vectors:

σ_k → −σ_k.  (6.4)

This induces a change in orientation of the unit volume element:

i → −i.  (6.5)

We will see later that, when applied to equations of physics, space conjugation changes particles into antiparticles and so can be given a physical interpretation as well. It will be convenient to call the elements of the Pauli algebra p-numbers. Any p-number A can be written as a sum of scalar, vector, bivector, and pseudoscalar parts:

A = A_S + A_V + A_B + A_P.  (6.6)

Or, in more explicit form,

A = α + iβ + a + ib  (6.7)

where α and β are scalars and a and b are vectors. This shows that we can look upon a p-number as composed of a complex scalar plus a complex vector. This algebraic interpretation is very convenient; however, it is important not to lose sight of the geometrical interpretation of the various terms. We will denote by A* the element obtained from A by space conjugation. In terms of the decomposition (6.6),

A* = A_S − A_V + A_B − A_P.  (6.8)
Another invariant kind of conjugation is obtained by reversing the order of all products of vectors in P. We call this operation hermitian conjugation, because it is equivalent to an operation with the same name in the usual matrix representation of P. The element obtained from A by hermitian conjugation will be denoted by A†,

A† = A_S + A_V − A_B − A_P.  (6.9)

We can define an invariant kind of transpose by⁹

Ã ≡ A*†.  (6.10)

Obviously,

Ã = A_S − A_V − A_B + A_P.  (6.11)

As in (3.3), we can decompose the product of two vectors a and b into symmetric and antisymmetric parts,¹⁰

ab = a · b + a ∧ b.  (6.12)

The complete vector algebra developed in section 3 now applies to P. It includes the familiar vector algebra developed by Gibbs [2, 3]. This will become clear when we take account of some special properties of P. The product iA of the unit pseudoscalar i with a p-number A is called the dual of A. In the Pauli algebra the dual of a bivector is a vector, and vice-versa. This special property of P enables us to define the familiar cross product of two vectors as follows:

a × b ≡ −ia ∧ b.  (6.13)

⁸ Space conjugation is an example of the operation inversion introduced in section 4.
The sign of this definition has been chosen so that from (6.3) we obtain σ₁ × σ₂ = σ₃, which means that the σ_k form a righthanded frame. a × b is commonly called an “axial vector”. It is often said that the behavior of a × b under “space reflection” differs from that of “true” vectors such as a and b. Now all this depends critically on what is meant by “space reflection”. Here we understand “space reflection” to mean “reverse the direction of all vectors in space”. It is the operation of space conjugation defined above. Vectors a and b change sign under inversion, but the bivector a ∧ b does not. We recorded in (6.5) that, for consistency in multiplication, the orientation of the pseudoscalars must change. Hence, according to our definition (6.13), a × b changes sign under space conjugation and is in our sense a “true” vector. a ∧ b is vectorlike except under inversion; it is appropriate to call it a pseudovector because it is the dual of a vector, but the appellation “axial vector” is not appropriate to a ∧ b because the “axial” property is an accident of three dimensions. By using the definition (6.13) for the cross product, all the usual formulas of Gibbs’ vector analysis can be obtained easily from the formulas of section 3. Before passing on to some examples, the reader is cautioned to note that, although i commutes with all elements of P, it does not commute with the inner and outer products. This can be seen from the definitions (3.5) and (3.8). Thus,

a × b = −i(a ∧ b) = −a · (ib) = (ib) · a.  (6.14)

Now observe the decomposition of the double cross product with the help of (3.9):

a × (b × c) = −i[a ∧ (−ib ∧ c)] = i²a · (b ∧ c) = −(a · b)c + (a · c)b.  (6.15)

Another example:

a ∧ b ∧ c = −i²(a ∧ b ∧ c) = −ia · (ib ∧ c) = i[a · (b × c)].  (6.16)

So |a · (b × c)| is the magnitude of the oriented volume a ∧ b ∧ c. The characteristic property of the Pauli algebra is that the outer product of any three linearly independent vectors is a pseudoscalar. From this it follows that the outer product of any four vectors in P is zero:

a ∧ b ∧ c ∧ d = 0.  (6.17)

⁹ The reader familiar with quaternions will know that the Pauli algebra is equivalent to the algebra of complex quaternions. He will recognize Ã as the quaternion conjugate of A. A discussion in quaternion language which is similar to ours is given by Gürsey [15]. The quaternion viewpoint suffers from two defects: It does not direct sufficient attention to geometric interpretation of P, and it gives no hint as to the relation of P to the Dirac algebra.
¹⁰ Any vector a in P can be written as a linear combination of the σ_k: a = a^kσ_k. It is a common practice in the literature of physics to write a^kσ_k = a · σ. This would be a flagrant violation both of the notion of vector used in this paper and the definition of inner product (3.1), for the a^k are scalars and the σ_k are vectors, so a^kσ_k is not the inner product of two vectors and should not be written a · σ.
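Both (6.15) and the scalar-triple-product interpretation of (6.16) can be verified numerically. In the sketch below (an addition, NumPy assumed) the trivector a ∧ b ∧ c is computed as the totally antisymmetrized matrix product of the three vectors.

```python
import numpy as np

s1 = np.array([[0, 1], [1, 0]], dtype=complex)
s2 = np.array([[0, -1j], [1j, 0]], dtype=complex)
s3 = np.array([[1, 0], [0, -1]], dtype=complex)

def vec(a):
    return a[0] * s1 + a[1] * s2 + a[2] * s3

rng = np.random.default_rng(1)
a, b, c = rng.standard_normal((3, 3))

# (6.15): a x (b x c) = -(a.b) c + (a.c) b.
lhs = np.cross(a, np.cross(b, c))
rhs = -(a @ b) * c + (a @ c) * b
assert np.allclose(lhs, rhs)

# (6.16): the trivector a ^ b ^ c equals i [a . (b x c)],
# computed as the totally antisymmetric part of the product abc.
perms = [((0, 1, 2), 1), ((1, 2, 0), 1), ((2, 0, 1), 1),
         ((0, 2, 1), -1), ((2, 1, 0), -1), ((1, 0, 2), -1)]
v = [vec(a), vec(b), vec(c)]
wedge = sum(sgn * v[p[0]] @ v[p[1]] @ v[p[2]] for p, sgn in perms) / 6
assert np.allclose(wedge, 1j * (a @ np.cross(b, c)) * np.eye(2))
```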
We have so far constructed a complete vector algebra for the tangent vectors of Euclidean 3-space. The development of our vector calculus is completed by defining the following differential operator:

∇ ≡ σ_k∂_k,  ∂_k = ∂/∂x^k,  (6.18)

where σ_k is the tangent vector for the coordinate x^k. We call ∇ the gradient operator.¹¹ It has the multiplication properties of a vector superimposed on the Leibnitz rule characteristic of a differential operator. The gradient of a vector can be decomposed into two parts,

∇a = ∇ · a + ∇ ∧ a = ∇ · a + i∇ × a.  (6.19)

We call ∇ · a the divergence of a; it is a scalar. We give the name “curl of a” to ∇ ∧ a, rather than to ∇ × a as is customary, because the outer product is more general than the cross product. Of course ∇ ∧ a is a bivector.

As a point of historical interest, we mention that in the latter part of the nineteenth century there was a lively controversy as to whether quaternion or vector methods were more suitable for the work of theoretical physicists.¹² The votaries of vectors were victorious. However, soon after the invention of quantum mechanics, quaternions reappeared in a new guise, for Clifford algebra (in matrix form) was found necessary to represent the properties of particles with spin. Now the root of the vector-quaternion controversy is to be found in the failure of both sides to distinguish between what we have called vector and bivector. It is amusing that vectors and quaternions are united in the Pauli algebra, so that the whole basis for controversy is eliminated. Neither side was wrong except in its opposition to the other.
7 The Algebra of Space-Time

The vector space of vectors tangent to a generic point x in Minkowski space-time¹³ determines a real Clifford algebra which we call the Dirac algebra. We begin our study of the Dirac algebra D by selecting a set of orthonormal vectors γ₀, γ₁, γ₂, γ₃ tangent to the point x. The vectors γ_μ satisfy the same multiplication rules as the familiar Dirac matrices. The γ_k (k = 1, 2, 3) are spacelike vectors. The timelike vector γ₀ has positive square:

γ₀² = 1,  γ_k² = −1,  γ_μ · γ_ν = 0  (μ ≠ ν).  (7.1)

These relations can be abbreviated,

γ_μ · γ_ν = ½(γ_μγ_ν + γ_νγ_μ) = g_μν.  (7.2)

¹¹ The gradient operator is discussed in more detail in chapter V.
¹² A short history of geometric algebra and the vector-quaternion controversy is given by Wills [3]. Letters on the subject by Gibbs can be found in [1]. We mention that the significant comments on notation made by Gibbs apply also to the calculus developed in this paper. These topics are also wryly discussed by Heaviside [11].
¹³ The adjective “Minkowski” signifies the flat space-time of special relativity.
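In a matrix representation these rules read {γ_μ, γ_ν} = 2g_μν. A minimal check follows (an addition; the standard Dirac representation and NumPy are assumptions here, though any faithful representation would do).

```python
import numpy as np

# Dirac matrices in the standard (Dirac) representation;
# metric g = diag(1, -1, -1, -1).
I2 = np.eye(2)
Z2 = np.zeros((2, 2))
s = [np.array([[0, 1], [1, 0]], dtype=complex),
     np.array([[0, -1j], [1j, 0]], dtype=complex),
     np.array([[1, 0], [0, -1]], dtype=complex)]

g0 = np.block([[I2, Z2], [Z2, -I2]])
gamma = [g0] + [np.block([[Z2, sk], [-sk, Z2]]) for sk in s]
g = np.diag([1.0, -1.0, -1.0, -1.0])

# (7.2): symmetrized products of the gammas reproduce the metric.
for mu in range(4):
    for nu in range(4):
        sym = 0.5 * (gamma[mu] @ gamma[nu] + gamma[nu] @ gamma[mu])
        assert np.allclose(sym, g[mu, nu] * np.eye(4))
```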
By taking all products of the γ_μ we generate a tensor basis for D. D is a linear space of dimension 2⁴ = 16. The elements of the tensor basis in D can be given geometric interpretation. The bivector

γ_μν = γ_μ ∧ γ_ν = ½(γ_μγ_ν − γ_νγ_μ)  (7.3)

can be interpreted as a unit element of area in the μν-plane. The trivector (pseudovector)

γ_μνσ = γ_μ ∧ γ_ν ∧ γ_σ  (7.4)

can be interpreted as an oriented unit 3-dimensional volume element. The pseudoscalar

γ₅ ≡ γ₀ ∧ γ₁ ∧ γ₂ ∧ γ₃ = γ₀₁₂₃  (7.5)

can be interpreted as an oriented unit 4-dimensional volume element. It will be convenient to call the elements of D d-numbers. Any d-number A can be written as a sum of scalar, vector, bivector, trivector, and pseudoscalar parts:

A = A_S + A_V + A_B + A_T + A_P.  (7.6)

We give the name space-time conjugation to the operation which reverses the directions of all vectors in D, and we denote by Ā the element obtained from A by this operation.¹⁴ Evidently,

Ā = A_S − A_V + A_B − A_T + A_P.  (7.7)
¹⁴ This is another example of the inversion operation defined in section 4. We use Ā instead of A* for the space-time conjugate to distinguish it from the analogous space conjugate in P.
If Ā = A, A will be called even. If Ā = −A, A will be called odd. The other invariant kind of conjugation in D reverses the order of all the vector products in D and will be called reversion. We denote by Ã the element obtained from A by this operation. Evidently

Ã = A_S + A_V − A_B − A_T + A_P.  (7.8)
The Dirac algebra embodies the local geometric properties of space-time, whereas the Pauli algebra represents the properties of space. These two algebras must be related in a definite way, a way which depends on the choice of a specific timelike direction. We fix this relation by requiring that P be a subalgebra of D. To see how this partition depends on the selection of a timelike direction, let us relate the basis σ_k in P to the basis γ_μ in D. We choose σ_k and γ_k to refer to the same spacelike directions. Therefore, since they are elements of the same algebra, σ_k must be proportional to γ_k. It is then easy to see that, except for a sign, the only way we can satisfy (6.1) and (7.1) is to have¹⁵

σ_k = γ_kγ₀.  (7.9)

From this it follows that

σ_jk = −γ_jk  (7.10)

and

i = γ₅.  (7.11)
Thus we find that P is the subalgebra of all even d-numbers. The scalars of P are scalars of D. The pseudoscalars of P are pseudoscalars of D. Only the partition of the bivectors of D into vectors and bivectors of P (as in (7.9) and (7.10)) depends on the singling out of a particular timelike vector.¹⁶

¹⁵ The uniqueness of our solution depends on the choice of metric we made in (7.1). If instead we had chosen γ_i² = 1 and γ₀² = −1, we could entertain the solution σ_i = γ_i, which may seem more natural, because then vectors in P would be vectors in D. But this identification is subject to two serious objections. First, “Dirac factorization” would be impossible with this metric. By “Dirac factorization” we mean that the equation p² − m² = (p − m)(p + m) = 0 has a solution for a scalar m and timelike vector p. This is important in the study of the Dirac equation. Second, we would have i = γ₁γ₂γ₃ = −γ₀γ₅, so there would be more than one i, one for each γ₀. Consequently, the interpretation of complex numbers and the connection between space and space-time would be more complicated than in the text.
¹⁶ It may be recalled that this point was discussed in section 4.
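The identifications (7.9) and (7.11) can likewise be checked in a matrix representation. The sketch below (an addition, assuming NumPy and the standard Dirac representation) verifies that the products γ_kγ₀ obey the Pauli rules (6.1), and that σ₁σ₂σ₃ = γ₅ squares to −1, commutes with the σ_k, and anticommutes with the γ_μ.

```python
import numpy as np

I2 = np.eye(2); Z2 = np.zeros((2, 2))
s = [np.array([[0, 1], [1, 0]], dtype=complex),
     np.array([[0, -1j], [1j, 0]], dtype=complex),
     np.array([[1, 0], [0, -1]], dtype=complex)]
g0 = np.block([[I2, Z2], [Z2, -I2]])
gk = [np.block([[Z2, sk], [-sk, Z2]]) for sk in s]

# (7.9): sigma_k = gamma_k gamma_0 satisfy the Pauli rules (6.1).
sigma = [gkk @ g0 for gkk in gk]
for j in range(3):
    for k in range(3):
        sym = 0.5 * (sigma[j] @ sigma[k] + sigma[k] @ sigma[j])
        assert np.allclose(sym, (1.0 if j == k else 0.0) * np.eye(4))

# (7.11): i = sigma1 sigma2 sigma3 equals gamma5 = gamma0123,
# squares to -1, commutes with the sigma_k, anticommutes with gamma_mu.
i = sigma[0] @ sigma[1] @ sigma[2]
g5 = g0 @ gk[0] @ gk[1] @ gk[2]
assert np.allclose(i, g5)
assert np.allclose(i @ i, -np.eye(4))
for sk4 in sigma:
    assert np.allclose(i @ sk4, sk4 @ i)
for gm in [g0] + gk:
    assert np.allclose(i @ gm, -gm @ i)
```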
The Pauli algebra P determined by a timelike vector γ₀ will be called the Pauli algebra of γ₀. We have just seen that the even d-numbers are elements of the subalgebra P. The odd d-numbers can also be represented as elements of P simply by multiplying them by γ₀. Thus, a vector p in D can be represented by a scalar p₀ and a vector p in P, for

pγ₀ = p₀ + p  (7.12a)

where

p₀ ≡ p · γ₀,  p = p ∧ γ₀.  (7.12b)
We choose the sign in (7.12a) to agree with (7.9). We will follow this convention throughout the rest of this paper. We will hereafter always use the symbol i to represent the unit righthanded pseudoscalar in D. The symbol γ₅ may be retained to represent the pseudoscalar of an arbitrary basis, as in appendix A. i will play the role of unit imaginary. Besides the property i² = −1, we note that i commutes with the σ_k but anticommutes with the γ_μ. Now that we have established the connection between P and D, the operations of space conjugation and hermitian conjugation defined for P can be extended to D. The definitions

A* ≡ γ₀Aγ₀  (7.13)

and

A† ≡ γ₀Ãγ₀ = Ã*  (7.14)

apply to any element of D and agree with the definitions given in section 6. We will always remember that the symbols A* and A† tacitly assume that a particular timelike direction has been singled out, but we will usually not need to know which direction it is. Note, for example, that i* = −i holds for any choice of γ₀. The vector algebra of section 3 works in D just as it did in P. We remark again that no confusion should arise from our use of the same symbols for inner and outer products in both P and D, because we distinguish vectors in P by boldface type. To complete our development of a geometric calculus for space-time, we define the gradient operator □:¹⁷

□ ≡ γ^μ∂_μ.  (7.15)

¹⁷ In chapter V a more careful definition of the gradient operator is given.
If A is a differentiable d-number field, then

□A = □ · A + □ ∧ A.  (7.16)

We call □ · A the divergence of A and □ ∧ A the curl of A. In agreement with common parlance, we call □² the d’Alembertian. Our use of different symbols for □ and ∇ will preclude any confusion that might follow from giving them the same name. Using (7.9) we find that

∇ = γ₀ ∧ □  (7.17)

and

∂₀ = γ₀ · □.  (7.18)

Thus

γ₀□ = ∂₀ + ∇.  (7.19)

We make this one violation of the convention established by (7.12) in order that our definition of ∇ agree with the usual one.
Chapter II

Electrodynamics

8 Maxwell’s Equation

The electromagnetic field can be written as a single bivector field F in D. Representing the charged current density by a vector J in D, Maxwell’s equation can be written¹

□F = J.  (8.1)
Using (7.16), we can separate (8.1) into vector and pseudovector parts:

□ · F = J,  (8.2a)
□ ∧ F = 0.  (8.2b)

If these equations are expressed in terms of a tensor basis, it is found that the coefficients give Maxwell’s equation in familiar tensor form. Alternatively, we can re-express Maxwell’s equation as a set of four equations in the Pauli algebra by singling out a particular timelike direction γ₀. Using (7.12) we can write

F = E + iB  (8.3a)

where

E = ½(F − F*),  (8.3b)
iB = ½(F + F*),  (8.3c)

¹ This form of Maxwell’s equation is due to Marcel Riesz [9].
and we can write J in the form

J = (Jγ₀)γ₀ = (J · γ₀ + J ∧ γ₀)γ₀ ≡ (ρ + J)γ₀.  (8.4)
Now, multiplying (8.1) by γ₀ and recalling (7.19), we have²

(∂₀ + ∇)(E + iB) = ρ − J  (8.5)

or

∂₀E + ∇E + i(∂₀B + ∇B) = ρ − J.

Equating separately the scalar, vector, bivector and pseudoscalar parts in P, we get the four equations

∇ · E = ρ,  (8.6a)
∂₀E + i∇ ∧ B = −J,  (8.6b)
i∂₀B + ∇ ∧ E = 0,  (8.6c)
i∇ · B = 0.  (8.6d)
Using the definition (6.13) of the cross product, we easily convert (8.6) into a form of Maxwell’s equation which is too familiar to require comment. Equation (8.1) implies that J is a conserved current, for

□²F = □J = □ · J + □ ∧ J.  (8.7)

Since □² is a scalar operator, the left side of (8.7) is a bivector, so

□²F = □ ∧ J  (8.8)

and

□ · J = 0 = ∂₀ρ + ∇ · J.  (8.9)

We can, if we wish, express F as the gradient of a vector potential,

F = □A = □ · A + □ ∧ A.  (8.10a)

But, since F is a bivector, (8.10a) says that

F = □ ∧ A  (8.10b)

and

□ · A = 0.  (8.10c)

² Equation (8.5) is equivalent to the old quaternion form for Maxwell’s equation. I do not know who first invented it. A convenient early reference is [12].
Substituting (8.10a) into (8.1), we arrive at the wave equation for A:

□²A = J.  (8.11)

The vector potential is not unique, for we can add to it the gradient of any scalar χ for which □²χ = 0. Thus, if

A′ = A + □χ,  (8.12)

then □A′ = □A + □²χ = □A. If we wish, we can solve for E and B in terms of potentials. Let

Aγ₀ = A₀ + A.  (8.13)
Then

F = E + iB = □A = (□γ₀)(γ₀A) = (∂₀ − ∇)(A₀ − A) = −(∂₀A + ∇A₀) + ∇ ∧ A.

So

E = −∂₀A − ∇A₀,  B = ∇ × A.  (8.14)

9 Stress-Energy Vectors
From the source-free Maxwell equation we can find four conserved vector currents. From

□F = 0,  F̃□̃ = 0  (9.1)

we get³

F̃□̃F + F̃□F = ∂_μ(F̃γ^μF) = 0.  (9.2)

We have used the symbol □̃ to mean differentiating to the left instead of to the right. Define S^μ by

S^μ = ½F̃γ^μF.  (9.3)

³ In (9.2) we have assumed inertial coordinates. Generalization to arbitrary coordinates is a simple matter by the techniques of chapter V.
Since S^μ = S̃^μ = −S̄^μ, the S^μ are vectors. Let us call them the stress-energy vectors of the electromagnetic field. S^μ gives the flux density of electromagnetic stress-energy through the hypersurface orthogonal to γ^μ. The components of the S^μ are

S^μν = S^μ · γ^ν = (S^μγ^ν)_S = ½(F̃γ^μFγ^ν)_S.  (9.4)

S^μν is the so-called energy-momentum tensor.⁴ By writing F in terms of a bivector basis, the righthand side of (9.4) can be reduced to familiar tensor form. But (9.4) is already more felicitous than the tensor form. The following properties of S^μν are easily verified directly from (9.4):

S^μν = S^νμ,  S^μ_μ = 0,  S^00 ≥ 0.  (9.5)
We will find it more convenient to deal with the vectors S^μ rather than the scalars S^μν. To increase our familiarity with the S^μ, let us re-express S⁰ by writing F in the Pauli algebra of γ₀:

S⁰ = S₀ = ½F̃γ₀F = −½Fγ₀F = −½FF*γ₀,  (9.6)

FF* = (E + iB)(−E + iB) = −(E² + B²) + 2iE ∧ B.  (9.7)

Hence

S₀ = [½(E² + B²) + E × B]γ₀.  (9.8)

This shows that S₀ is the space-time generalization of the Poynting vector. We can write (9.2) in the form

∂_μS^μ = 0.  (9.9)

Since the S^μ are vectors, (9.9) gives four conserved quantities. The reader may show that (9.9) can also be written in the form

□ · S^μ = 0.  (9.10)

It is easy to find the modification of (9.9) produced by sources. Since

½(F̃J + JF) = −½(FJ − JF) = −F · J,  (9.11)

we get

∂_μS^μ = J · F.  (9.12)

K = F · J is the space-time generalization of the Lorentz force. This is easily seen by writing it as two equations in the Pauli algebra of γ₀:

Kγ₀ = K₀ + K,  (9.13)

F · Jγ₀ = ½(FJ − JF)γ₀ = ½(FJγ₀ − Jγ₀F*) = ½[(E + iB)(ρ + J) − (ρ + J)(−E + iB)] = ρE + J · E + J × B.  (9.14)

By separately equating the scalar and vector parts of (9.13) and (9.14), we arrive at

K₀ = J · E,  (9.15)
K = ρE + J × B.  (9.16)

⁴ The form (F̃γ^μFγ^ν)_S for the energy-momentum tensor of the electromagnetic field was first given by Marcel Riesz [8].

10 Invariants
The invariants of the electromagnetic field are given by F². The square of a bivector has only scalar and pseudoscalar parts,

F² = F · F + F ∧ F.  (10.1)

To see the invariants in tensor form we must express F in terms of a bivector basis: F = ½F^μνγ_μν. Using (3.15), we find for the scalar part

F · F = −½F^μνF_μν.  (10.2)

Using (A.30), we find for the pseudoscalar part

F ∧ F = −F^αβF^με_αβμν i.  (10.3)

Alternatively, we may express the invariants in terms of E and B,

F² = (E + iB)² = E² − B² + 2iE · B.  (10.4)
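(10.4) is easy to check in the Pauli-matrix representation, where F² comes out as a complex multiple of the identity (a numerical sketch, not part of the original text; NumPy assumed):

```python
import numpy as np

# With F = E + iB represented as (E.sigma) + i(B.sigma), the
# square F^2 is the multiple (E^2 - B^2 + 2i E.B) of the identity.
s1 = np.array([[0, 1], [1, 0]], dtype=complex)
s2 = np.array([[0, -1j], [1j, 0]], dtype=complex)
s3 = np.array([[1, 0], [0, -1]], dtype=complex)

def vec(a):
    return a[0] * s1 + a[1] * s2 + a[2] * s3

rng = np.random.default_rng(2)
E, B = rng.standard_normal((2, 3))
F = vec(E) + 1j * vec(B)

assert np.allclose(F @ F, (E @ E - B @ B + 2j * (E @ B)) * np.eye(2))
```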
The decomposition F = E + iB depends on a particular timelike vector. It would seem better to decompose F using invariants of the field. We accomplish this here for the case F² ≠ 0. First we note that F can be written

F = fe^{iφ}  (10.5)

where φ is a scalar and

f = e + ib  (10.6a)

with

e · b = 0.  (10.6b)

Now we observe that when F² ≠ 0 both φ and f are determined by F², for

F² = f²e^{2iφ} = (e² − b²)e^{2iφ}.  (10.7)
Here F² is represented as a scalar f² which is taken through an angle 2φ by a duality rotation to give the scalar and pseudoscalar parts of F². Let us express the perhaps more significant invariants φ and f² in terms of E² − B² and E · B. The invariant of F* = −E + iB is

(F*)² = (F²)* = f²e^{−2iφ} = E² − B² − 2iE · B.  (10.8)

Hence

(F*)²F² = f⁴ = (E² − B²)² + 4(E · B)².  (10.9)

So

f² = ±[(E² − B²)² + 4(E · B)²]^{1/2}.  (10.10)
We can find another interesting expression for f² by noting that

F*F = −(E² + B²) + 2E × B.  (10.11)

Since E × B anticommutes with both E and B,

f⁴ = F*F*FF = (F*F)(F*F)* = (E² + B²)² − 4(E × B)².

So

f² = ±[(E² + B²)² − 4(E × B)²]^{1/2}.  (10.12)
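That (10.10) and (10.12) determine the same f² is an instance of Lagrange’s identity, E²B² − (E × B)² = (E · B)². A quick numerical check (an addition, NumPy assumed):

```python
import numpy as np

# Compare the two expressions (10.10) and (10.12) for f^2 (squared):
# (E^2 - B^2)^2 + 4(E.B)^2  ==  (E^2 + B^2)^2 - 4(E x B)^2.
rng = np.random.default_rng(3)
E, B = rng.standard_normal((2, 3))

lhs = (E @ E - B @ B) ** 2 + 4 * (E @ B) ** 2
ExB = np.cross(E, B)
rhs = (E @ E + B @ B) ** 2 - 4 * (ExB @ ExB)
assert np.isclose(lhs, rhs)
```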
The expression for φ is found to be

tan 2φ = 2E · B / (E² − B²).  (10.13)
Evidently φ is determined by E² − B² and E · B only when F² ≠ 0, and then only to an additive multiple of π. It is natural to interpret φ as the physical phase of the electromagnetic field. In the next section we will do this for a free field. Other authors [13] have called φ the complexion of the Maxwell field. It seems that no one has pinned down its physical manifestations. We note here that it does not contribute to the stress-energy vectors, for, since i anticommutes with γ^μ,

S^μ = ½F̃γ^μF = ½f̃γ^μf.  (10.14)

11 Free Fields
The reader has probably noticed that the source-free Maxwell equation has the same form as the Dirac equation for a free neutrino field,

□F = 0.  (11.1)

It is therefore not surprising that we can describe the polarization of a photon in the same way that we describe the polarization of a neutrino. Just the same, it is worthwhile to discuss photon polarization briefly, in order to see what special insights are provided by the Dirac algebra. Consider the plane wave solutions of (11.1):

F(x) = fe^{ik·x}.  (11.2)

Here x = x^μγ_μ is the position vector in Minkowski space-time, k = k^μγ_μ is a constant vector, and f is a constant bivector. Multiply (11.1) by γ₀ to obtain an equation involving only p-numbers:

(∂₀ + ∇)F = 0.  (11.3)

Let kγ₀ = k₀ + k, and substitute (11.2) into (11.3) to get the equation

kf = k₀f.  (11.4)

We can multiply (11.4) by k₀ + k to find k₀ = ±|k| (i.e. k² = 0). These two solutions correspond to photons which are right or left circularly polarized, or, in different words, photons with positive or negative helicity. We can interpret the solution with k₀ = |k| as a particle, and the solution with k₀ = −|k| as an antiparticle. This allows us to identify the operation of space conjugation (defined in section 6) with antiparticle conjugation, for, applying (7.13) to (11.4), we get

−kf* = k₀f*.  (11.5)

In section 13 we will see that the same interpretation holds for the Dirac equation with mass. When the neutrino is described by the Dirac equation, a side condition is necessary to limit the number of solutions. The corresponding condition on the electromagnetic field is

F̄ = F.  (11.6)

This just follows from the fact that F is a bivector. The condition that F be a bivector can be written

F̄ = F = −F̃.  (11.7)
We can use the bivector property of f to get more detailed information from (11.4). As in (8.3), we write

f = e + ib.  (11.8)

Equation (11.4) can be split into the two equations

k₀e = ik ∧ b,  (11.9a)
k₀b = −ik ∧ e.  (11.9b)

It follows that

k = k₀ê × b̂,  (11.10a)
k · e = k · b = 0,  (11.10b)
e · b = 0,  (11.10c)
e² = b².  (11.10d)

The last two conditions can be written f² = 0. This implies

F² = 0,  (11.11)

an invariant condition that F represents a field of circularly polarized radiation.
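A concrete instance of these conditions (an addition, assuming NumPy): for a wave along σ₃ with e = σ₁, b = σ₂ and k₀ = |k| = 1, the Pauli-matrix representatives satisfy (11.4) and f² = 0 directly.

```python
import numpy as np

# Right-circularly polarized plane wave: k along sigma_3,
# e = sigma_1, b = sigma_2, k0 = |k| = 1.
s1 = np.array([[0, 1], [1, 0]], dtype=complex)
s2 = np.array([[0, -1j], [1j, 0]], dtype=complex)
s3 = np.array([[1, 0], [0, -1]], dtype=complex)

k0 = 1.0
k = s3                  # the vector k
f = s1 + 1j * s2        # f = e + i b

assert np.allclose(k @ f, k0 * f)            # (11.4)
assert np.allclose(f @ f, np.zeros((2, 2)))  # f^2 = 0, hence (11.11)
```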
Let us close this section with a geometrical interpretation of the circularly polarized plane wave solutions (11.2). Operating on f, e^{ik·x} is a duality rotation which rotates the vector e into a bivector and rotates the bivector ib into a vector. Thus we get the picture of the electric and magnetic vectors spinning about the momentum vector k as the plane wave progresses, but this “spinning” is due to a duality transformation rather than the usual kind of spatial rotation.
Chapter III

Dirac Fields

12 Spinors
Let I be a subspace of an algebra A with the property that the sum of elements in I is also in I. I is called a two-sided ideal if it is invariant under multiplication on both the left and the right by an arbitrary element of A. I is called a left (right) ideal if it is invariant under multiplication from the left (right) only. In both the Pauli and Dirac algebras the only two-sided ideals are the zero element and the whole algebra. For brevity, we will apply the appellation ideal only to left ideals. An ideal is called minimal if it contains only itself and the zero element as ideals. An element of a minimal left ideal I will be called a left-spinor, or again for brevity, simply a spinor. The minimal right ideal obtained from I by reversing products may be called the ideal conjugate to I. An element of a minimal right ideal may be called a conjugate spinor. A definition of spinor in terms of transformations will be given in section 19. We will investigate the spinors of the Pauli algebra in detail. Any p-number φ can be written in the form

φ = φ⁰ + φ^iσ_i  (12.1)
where the σ_i are orthonormal vectors and φ⁰ and the φ^i are formally “complex numbers”. We can take any unit vector, say σ₃, and decompose φ as follows:

φ = φ[½(1 + σ₃) + ½(1 − σ₃)] = φ₊ + φ₋.  (12.2)
The set of all φ₊ (φ₋) is an ideal I₊ (I₋),

I₊ ≡ {φ₊},  I₋ ≡ {φ₋}.  (12.3)
φ₊ and φ₋ are spinors. I₊ and I₋ are independent minimal left ideals. These facts become obvious as we take the ideals apart. For a basis in the ideal I₊ we can choose u₁ and u₂ defined by

u₁ ≡ (1/√2)(1 + σ₃),  u₂ ≡ (1/√2)(1 − σ₃)σ₁.  (12.4)

This basis satisfies the orthonormality and completeness relations (a, b = 1, 2):

(u_a†u_b)_S = δ_ab,  (12.5)

Σ_a u_au_a† = 2.  (12.6)

The u_a also satisfy

u₁σ₃ = u₁,  u₂σ₃ = u₂,
σ₃u₁ = u₁,  σ₃u₂ = −u₂,
σ₁u₁ = u₂,  σ₁u₂ = u₁.  (12.7)
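The relations (12.4)-(12.7) can be checked in the matrix representation (a sketch added here, not in the original; NumPy assumed), with hermitian conjugation realized as the conjugate transpose and the scalar part as half the real trace.

```python
import numpy as np

s1 = np.array([[0, 1], [1, 0]], dtype=complex)
s3 = np.array([[1, 0], [0, -1]], dtype=complex)
I2 = np.eye(2)

# The spinor basis (12.4).
u1 = (I2 + s3) / np.sqrt(2)
u2 = (I2 - s3) @ s1 / np.sqrt(2)

def scalar_part(M):
    return 0.5 * np.trace(M).real

# Orthonormality (12.5) and completeness (12.6).
for a, ua in enumerate((u1, u2)):
    for b, ub in enumerate((u1, u2)):
        want = 1.0 if a == b else 0.0
        assert np.isclose(scalar_part(ua.conj().T @ ub), want)
assert np.allclose(u1 @ u1.conj().T + u2 @ u2.conj().T, 2 * I2)

# The eigenvalue relations (12.7).
assert np.allclose(u1 @ s3, u1) and np.allclose(u2 @ s3, u2)
assert np.allclose(s3 @ u1, u1) and np.allclose(s3 @ u2, -u2)
assert np.allclose(s1 @ u1, u2) and np.allclose(s1 @ u2, u1)
```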
The conditions (12.5) and (12.7) determine the u_a uniquely except for a common phase factor. With the help of (12.7) we can write φ₊ in the form

φ₊ = (1/√2)(φ⁰ + φ³)u₁ + (1/√2)(φ¹ + iφ²)u₂.  (12.8)

The basis in I₋ which corresponds to the u_a in I₊ is

v₁ ≡ (1/√2)(1 + σ₃)σ₁,  v₂ ≡ (1/√2)(1 − σ₃).  (12.9)
The relations for the v_a which correspond to (12.5), (12.6), (12.7) are readily found. We also have the orthogonality relation

(u_av_b†)_S = (v_b†u_a)_S = 0.  (12.10)

Now we can expand φ in terms of the “spinor basis” we have constructed for the Pauli algebra:

φ = φ₁₊u₁ + φ₁₋v₁ + φ₂₊u₂ + φ₂₋v₂  (12.11)
where

φ₁₊ = (1/√2)(φ⁰ + φ³),  φ₁₋ = (1/√2)(φ¹ − iφ²),
φ₂₊ = (1/√2)(φ¹ + iφ²),  φ₂₋ = (1/√2)(φ⁰ − φ³).  (12.12)

The effect of space conjugation on the spinor basis is readily found from the definitions. From (12.11) we get

φ* = φ*₂₋u₁ − φ*₂₊v₁ − φ*₁₋u₂ + φ*₁₊v₂.  (12.13)

By writing the coefficients of (12.8) in a column, we obtain a representation of φ₊ as a column matrix Φ₊:

Φ₊ = [ φ₁₊ ]
     [ φ₂₊ ].  (12.14)

Similarly, we can obtain a matrix representation Φ of φ by writing the coefficients of (12.11) in an array:

Φ = [ φ₁₊  φ₁₋ ]
    [ φ₂₊  φ₂₋ ].  (12.15)

The correspondence between operations in the Pauli algebra and operations in its matrix representation is given in appendix D. We have chosen u_a and v_b to be “eigenvectors” of σ₃ operating on both the left and the right. It is worth pointing out that this is an unnecessarily restrictive choice of spinor basis. Multiplication of a p-number on the left is completely independent of multiplication on the right, and they may be considered as operations performed in independent spaces. Accordingly, we may choose a spinor basis which “diagonalizes” σ₃ operating on the left and some other vector on the right. Indeed, in the usual physical applications of spinors, only the matrix representation Φ₊ given by (12.14) and multiplication on the left are used. In this case it does not matter what p-numbers are diagonalized on the right; it only matters that Φ₊ represents an element of a minimal ideal. The base elements u_a are not distinguishable from the v_a by left multiplication.
Our knowledge about spinors in the Pauli algebra will be helpful in our study of spinors in the Dirac algebra. To make use of this knowledge, we must distinguish some timelike vector γ0. Now any d-number ψ can be written as the sum of an even part φ and an odd part χγ0,

ψ = φ + χγ0,   (12.16)

where φ and χ are p-numbers. As in the Pauli algebra, ψ can be written as the sum of elements ψ+ and ψ− of independent minimal ideals,

ψ = ψ+ + ψ−   (12.17)

where

ψ± = ψ ½(1 ± σ3) = φ± + χ∓γ0.   (12.18)

It is obvious that the set I+ (I−) of all ψ+ (ψ−) is a minimal ideal. We can write ψ+ as a linear combination of orthonormal base elements wa in I+,

ψ+ = Σ(a=1..4) ψa+ wa.   (12.19)
The ψa+ are “complex” coefficients and the wa are defined by

w1 = u1,   w3 = v1γ0 = −γ0u2,
w2 = u2,   w4 = v2γ0 = γ0u1,   (12.20)

and the ua and va are defined by (12.4) and (12.9). A basis for I− is given simply by wa γ0,

ψ− = Σ(a=1..4) ψa− wa γ0.   (12.21)
It is now clear that the Dirac algebra can be written as a sum of two independent minimal ideals, each of which is a four-dimensional vector space over the “complex” numbers.¹

¹ When the Dirac algebra is developed over the complex numbers instead of the reals there are four independent minimal ideals.
13 Dirac’s Equation
In the last section, we observed that the usual “four-component spinor” can be looked upon as an element of a minimal ideal in the Dirac algebra. Consider a spinor ψ which is an eigenstate of σ3 on the right,

ψσ3 = ψ.   (13.1)

The Dirac equation for ψ interacting with an electromagnetic potential A can be written

□ψ = (m + eA)ψi.   (13.2)
The most important point to be noted here is that multiplication of ψ by i on the right leaves (13.1) invariant and commutes with any multiplication of ψ on the left.² Therefore i multiplying ψ on the right plays the role of √−1 in the usual Dirac theory, whereas i multiplying ψ on the left plays the role of γ5. As is usual, (13.2) is invariant under gauge transformations,

ψ → ψe−ieχ,   (13.3a)
A → A + □χ.   (13.3b)

To leave Maxwell’s equation invariant, we must also have

□²χ = 0.   (13.3c)
We can write ψ in the form

ψ = ½(1 + γ0)ψ1 + ½(1 − γ0)ψ2   (13.4)

where ψ1 and ψ2 are spinors in the Pauli algebra of γ0. For positive energy solutions in the non-relativistic limit, ψ1 is the so-called large component and ψ2 is the small component. Two coupled p-number equations in ψ1 and ψ2 can be obtained by separately factoring out ½(1 + γ0) and ½(1 − γ0) from (13.2) on the left. Rather than (13.4), it is perhaps more meaningful to separate ψ into even and odd parts:

ψ = φ+ + χ−γ0.   (13.5)
² In (13.2), we cannot take ψ to be an eigenstate of γ0 on the right, because i anticommutes with γ0 and so right multiplication by i does not leave left ideals of γ0 invariant.
Again φ+ and χ− are spinors in the Pauli algebra of γ0,

φ+σ3 = φ+,   χ−σ3 = χ−.   (13.6)

We can get separate uncoupled equations for the even and odd parts of ψ after properly “squaring” the operator in (13.2),

(□² − e²A² + m²)ψ + e(2A·□ + F)ψi = 0.   (13.7)

Here we have used F = □∧A and □·A = 0. We can separate (13.7) into the two equations

(□² + 2ieA·□ − e²A²)φ+ + (ieF + m²)φ+ = 0,   (13.8)
(□² − 2ieA·□ − e²A²)χ− + (−ieF + m²)χ− = 0.   (13.9)
Evidently the even and odd fields are coupled to the electromagnetic field with opposite charge.³ We can get a different form of the Dirac equation by separately equating the even and odd parts of (13.2) and factoring out γ0. Using the definitions (7.19) and (8.13), we get

(∂0 + ∇)φ+ = e(A0 − A)φ+i + mχ−∗i,   (13.10a)
(∂0 + ∇)χ− = −e(A0 − A)χ−i − mφ+∗i.   (13.10b)

These can be recombined into a single p-number equation:

(∂0 + ∇)Ψ = [e(A0 − A)Ψ + mΨ∗]iσ3   (13.11)

where

Ψ = φ+ + χ−.   (13.12)

Equation (13.11) was first obtained by Gürsey [19]. We can also write it in the form

□Ψ = (eAΨ + mΨγ0)iσ3.   (13.13)
Here Ψ̄ = Ψ; we could equally well have arranged to have Ψ̄ = −Ψ. Equation (13.2) depends implicitly on a timelike bivector by virtue of (13.1). The equivalent equation (13.13) has the merit that this dependence is explicitly displayed. Evidently we will not have a

³ It has been shown that the results of quantum electrodynamics can be obtained by using (13.8) rather than the Dirac equation (13.2). A review of this approach is given by Brown [14].
complete understanding of the Dirac equation until this circumstance is interpreted physically. The question arises as to why nature distinguishes one ideal from another. Spinors in independent ideals are independent solutions of the Dirac equation. Perhaps they should be identified with physically independent fermion fields. Perhaps we should generalize the Dirac equation by relaxing the condition that ψ be an element of a minimal ideal. An arbitrary d-number ψ can be written

ψ = ψ+ + ψ−   (13.14a)

where

ψ± = ψ½(1 ± σ3).   (13.14b)

Therefore the free particle equation

□ψ = ψim   (13.15)

is equivalent to separate equations for two independent particles with the same mass:

□ψ+ = ψ+im,   □ψ− = ψ−im.   (13.16)

This circumstance at once brings to mind the nucleon. If we identify ψ+ with the proton and ψ− with the neutron, then the equation for a nucleon interacting with the electromagnetic and pion fields can be written

□ψ = ψim + eAψ½(1 + σ3)i + gψπ.   (13.17)
Here g is the pion coupling constant, and π can of course be written

π = πiσi   (13.18)

where the πi are scalars. It will be noted that ψ̄ = −γ5ψγ5 expresses the pseudoscalar coupling of the π-nucleon interaction. Excluding the electromagnetic coupling, (13.17) is invariant under the constant gauge transformation

ψ → ψe½ia,   (13.19a)
π → e−½ia π e½ia.   (13.19b)
It is clear that (13.19) must be identified as an isospin transformation. Following the procedure which led to (13.11), equation (13.17) can be expressed as two equations of the Gürsey type. These equations were found by Gürsey when he arrived at transformations equivalent to (13.19) as a generalization of the Pauli group [16]. We will not attempt here to generalize (13.17) further. Our discussion only suggests that it may be possible to account for isospin within the Dirac algebra and so give isospin a geometrical basis grounded in space-time. To substantiate this suggestion it would be necessary to derive some consequence peculiar to this representation. In any case, whether it has any special consequences or not, the representation of the nucleon field given here is conveniently succinct, because it is given entirely in terms of the Dirac algebra and does not require any additional “isospin algebra”.
14 Conserved Currents
In sections 9 and 10 we constructed invariants and conserved currents from the electromagnetic field. Analogous constructions can be made from a Dirac field. Let us consider a particular example. From equation (13.2) we can construct the following equations:

γμψ̃(□ψ) = γμψ̃(m + eA)ψi,   (14.1a)
(□ψ)̃ψγμ = iψ̃(m + eA)ψγμ.   (14.1b)
The scalar part of the sum of these equations is

(γμψ̃(□ψ) + (□ψ)̃ψγμ)S = 0.   (14.2)

This is equivalent to

∂ν(γνψγμψ̃)S = □·Tμ = 0   (14.3)

where

Tμ = (ψγμψ̃)V.   (14.4)

The Tμ are evidently four independent conserved vector currents. However, the condition (13.1) implies that T1 and T2 vanish identically and
that T0 and T3 differ only by a sign. T0 is proportional to the usual expression for the electromagnetic current vector J,

Jμ ∼ γμ·T0 = (γμψγ0ψ̃)S = (γμψψ†γ0)S = (ψ†γ0γμψ)S.   (14.5)
If the above procedure is applied to the nucleon equation (13.17), it is found that T0 is again conserved, but the other three vector currents, which can be identified as isospin currents, are not conserved.
15 C, P, T

In this section we use our space-time algebra to formulate antiparticle conjugation, parity conjugation and time reversal. We also suggest appropriate geometrical interpretations of these transformations. We introduce them as transformations of equation (13.2) which either leave the equation invariant or simply change the sign of the coupling to the electromagnetic field. In our discussion we will always assume that □ and A transform in the same way. This is natural if D, defined by Dψ = □ψ − eAψi, is viewed as a differential operator which is covariant under the gauge transformation (13.3a). We define antiparticle conjugation C by

C : ψ → ψC = ψ̄γ0.   (15.1)
The conjugate field ψC satisfies the equation ψC = (m − eA)ψC i.
(15.2)
C interchanges even and odd parts of ψ and changes their relative sign. A double application of (15.1) shows ψCC = −ψ.
(15.3)
The usual space parity conjugation P is given by P : ψ(x, t) → ψP (x, t) = γ0 ψ(−x, t).
(15.4)
This leaves equation (13.2) invariant. P interchanges even and odd parts of ψ. We define time reversal T by T : ψ(x, t) → ψT (x, t) = ψ ∗ (x, −t).
(15.5)
44
Chapter III. Dirac Fields
The time reversal field ψT satisfies equation (15.2). In our definitions of ψC , ψP and ψT we could have included quite general phase factors on the right, even bivector as well as scalar phase factors. Our choice of phase factors makes possible a geometric interpretation of CP conjugation, CP : ψ(x, t) → ψCP (x, t) = φ∗ (−x, t).
(15.6)
In chapter I we saw how to represent “surface” elements of any dimension by Clifford numbers. We can, for instance, represent a twodimensional oriented surface by the field of bivectors tangent to the surface. The tangent bivector gives the orientation of the surface at each point. (The magnitude of the bivector represents a density of the surface.) We discussed, in chapter I, certain important transformations which change the orientation of “surface” elements, namely, space conjugation and space-time conjugation. These transformations can also be applied to physical fields represented by Clifford numbers, and the corresponding geometrical interpretation can be carried over as well. Applied to p-number fields in space, the transformation (15.6) reverses the direction of all tangent vectors in space and reflects space points through the origin. It literally turns finite volumes inside out. This provides geometric meaning for CP conjugation as seen in the inertial frame of γ0 . The transformation (15.6) also changes the relative sign of even and odd d-numbers (space-time conjugation), but this has no effect on the Pauli algebra of γ0 . To summarize: From the point of view of the Pauli algebra, (15.6) inverts volumes. But from the point of view of the Dirac algebra, (15.6) reflects points in space and reverses the time direction of tangent vectors. The CP transformation (15.6) can be applied to the electromagnetic as well as the Dirac field; it is just the antiparticle conjugation which maps equation (11.4) into (11.5). The Dirac equation is invariant under the product of C, P and T , CP T : ψ(x) → ψCP T (x) = ψ(−x).
(15.7)
This transformation reflects all space-time points through the origin and reverses the direction of all tangent vectors. The operation C is more interesting when applied to the nucleon equation (13.17). If we are to obtain the proper equation for antinu-
15. C, P , T
45
cleons, the transformation of the pion field must be C : π → π ∗ = π ∗ = −π.
(15.8)
The reader will be left to satisfy himself that C is precisely the operation of G-conjugation first introduced by Lee and Yang [17].
Chapter IV
Lorentz Transformations 16
Reflections and Rotations
Let p be a vector tangent to a point x in space-time. The scalar p2 is a natural norm for p. We call p timelike if p2 > 0, spacelike if p2 < 0, or lightlike if p2 = 0. The set V (x) of all vectors tangent to the point x will be called the tangent space at x. An automorphism of V (x) which leaves the square of every vector unchanged is called a Lorentz transformation.1, 2 The set of all Lorentz transformations is called the Lorentz group. In the Dirac algebra, transformation of a d-number p into a dnumber p0 can be written3 p → p0 = RpS.
(16.1)
This will be a Lorentz transformation if (16.1) takes any vector p into a vector p0 so that (p0 )2 = p2 .
(16.2)
Substituting (16.1) into (16.2) we get p2 = RpSRpS.
(16.3)
automorphism of a set V is a one-to-one mapping of V onto itself. this point, it is worth mentioning that rotations and reflections in Vn can be handled in Cn in the same way that we here handle Lorentz transformations in D, with only minor adjustments for metric and dimension. 3 We will suppress reference to the space-time point whenever it is consistent with clarity. 1 An 2 At
Ó Springer International Publishing Switzerland 2015 D. Hestenes, Space-Time Algebra, DOI 10.1007/978-3-319-18413-5_4
47
48
Chapter IV. Lorentz Transformations
This will hold for every vector p if SR commutes with every p; this can occur only if RS is a scalar;4 therefore we write S = ±R−1
(16.4a)
R−1 = R−1 R = 1.
(16.4b)
where We lose no generality by dealing only with the solution (16.4a) with positive sign. Only a 1-vector p in D will satisfy the two conditions pe = p
(16.5)
p = −p.
(16.6)
and Applying (16.6) to (16.1), we find that with proper normalization e R−1 = ±R.
(16.7)
Applying (16.6) to (16.1), we find that R must be either even or odd, i.e. R=R (16.8) or R = −R.
(16.9)
p → p0 = RpR−1
(16.10)
The Lorentz transformation
will be called a rotation if (16.8) holds, and a reflection if (16.9) holds. One of the most important kinds of reflection is the operation of space conjugation defined by (7.13). This transformation reverses the direction of every vector p⊥ orthogonal to some unit timelike vector γ0 while leaving unchanged every vector p|| collinear with γ0 . Thus, space conjugation transforms any vector p = p|| + p⊥
(16.11)
4 The most general solution to (16.3) has RS = eiϕ , but such “phase factors” add nothing to our discussion.
16. Reflections and Rotations
49
into p∗ = γ0 pγ0 = p|| − p⊥ .
(16.12) 0
Now observe that if R satisfies (16.9), then R = γ0 R satisfies (16.8). It follows that any reflection of spacelike vectors can be produced by a rotation followed by a space conjugation, or vice-versa. Another basic transformation is the space-time conjugation (7.7). This reverses the direction of every vector p, p = ipi−1 = −ipi = i2 p = −p.
(16.13)
A time inversion, which reverses the direction of all vectors collinear with γ0 , is produced by the combination of (16.12) and (16.13), p∗ = −p|| + p⊥ .
(16.14)
It follows that any reflection can be produced by some combination of space conjugation, space-time conjugation and a rotation. Therefore we may confine the rest of our discussion to rotations. According to theorem 1 of appendix B, any d-number R which e = 1 can be written satisfies R = R and RR 1
R = ±e− 2 B
(16.15)
where B is a bivector. Hence, for a Lorentz rotation, (16.10) can be written 1
1
p → p0 = e− 2 B pe 2 B .
(16.16)
We say that B generates the rotation (16.16). A rotation can be characterized by the properties of the generator.5 The square of any bivector B in the Dirac algebra has a scalar part B · B and a pseudoscalar part B ∧ B, B 2 = B · B + B ∧ B.
(16.17)
If B ∧ B = 0, B will be called simple. If B ∧ B 6= 0, B will be called complex. A simple bivector B will be called timelike, lightlike, spacelike if B 2 > 0, B 2 = 0, B 2 < 0 respectively. The corresponding Lorentz 5 Relative to some bivector basis, B = 1 B µν γ µν to deterµν . Since it takes six parameters B 2 mine B, the group of Lorentz rotations (16.16) is a six parameter group.
50
Chapter IV. Lorentz Transformations
rotations (16.16) can be distinguished by the same names. This agrees with the analogous classification of vectors given in section 7. A complex bivector B can always be written as the sum of two orthogonal simple bivectors B1 , and B2 , B = B1 + B2 ,
B1 B2 = B2 B1 .
(16.18)
If B1 is timelike, then B2 is spacelike. The Lorentz rotation generated by B becomes 1
1
p → p0 = e− 2 (B1 +B2 ) pe 2 (B1 +B2 ) =
1 1 1 1 e− 2 B1 e− 2 B2 pe 2 B2 e 2 B1 .
(16.19)
Therefore, any Lorentz rotation generated by a complex bivector can be expressed as the commuting product of a timelike with a spacelike Lorentz rotation. For this reason we can confine our study to Lorentz rotations generated by simple bivectors. Any d-number D can be expressed as some combination of sums and products of vectors. If these vectors are subjected to the Lorentz transformation (16.10), then D → D0 = RDR−1 .
(16.20)
We say that (16.20) is induced by (16.10). A d-number which transforms in this way will be called a rotor. The electromagnetic bivector is an important example of a rotor. The reader will note that the transformation (16.20) leaves invariant the scalar part of D2 , so (D2 )S is a e S possible norm for D. However, in section 19 we suggest that (DD) is a more natural norm. When some timelike direction is singled out as specially significant there is a decomposition of a Lorentz rotation which is more convenient than (16.19). Theorem 4 of appendix B enables us to write the Lorentz rotation (16.16) in the form 1
1
1
1
p → p0 = e− 2 ib e− 2 a p e 2 a e 2 ib
(16.21)
where a and b are timelike bivectors proportional to some selected timelike unit vector γ0 . All Lorentz rotations which leave γ0 invariant are of the form 1
1
p → p0 = e− 2 ib p e 2 ib .
(16.22)
17. Coordinate Transformations
51
We call (16.22) a spatial rotation. A Lorentz rotation of the form 1
1
p → p0 = e − 2 a p e 2 a
(16.23)
will be called a special timelike rotation. We can now describe (16.21) qualitatively by the statement that every Lorentz rotation can be expressed as a special timelike rotation followed by a spatial rotation.
17
Coordinate Transformations
The Lorentz transformations which we discussed in section 16 are rigid rotations and inversions of the tangent space V (x) at a point x of space-time. Of course, x can be a variable, so these transformations map vector fields into vector fields while preserving the length of vectors at each point. In this section we will show how a subclass of such transformations can be related to coordinate transformations in Minkowski space-time. In section 23 we will discuss these problems in the context of Riemannian space-time. By taking the gradient of coordinate functions xµ we form the tangent vectors γ µ = γ µ (x) = xµ .
(17.1)
Minkowski space-time is distinguished by the existence of coordinates xµ for which the corresponding tangent vectors γ µ are constant and orthonormal. Such a set of functions is commonly called an inertial system (of coordinates). We will call the corresponding field of tangent vectors an inertial frame. From one inertial system every other inertial system can be obtained by a Poincar´e transformation:6 0
xµ → xµ = aµν xν + aµ .
(17.2)
The aµν and the aµ are constants. The coordinate transformation (17.2) induces a Lorentz transformation of the tangent vectors, which gives, by (17.1), 0
0
0
γ µ = xµ = 6 Also
∂xµ ν γ = aµν γ ν . ∂xv
called an inhomogeneous Lorentz transformation.
(17.3)
52
Chapter IV. Lorentz Transformations
The inverse of this transformation gives γν =
∂xν µ0 0 = bνµ γ µ . 0γ µ ∂x
(17.4)
Now, in a way we have already discussed, the timelike vector field γ0 of an inertial frame S uniquely determines a Pauli algebra at every space-time point. This will be called the Pauli algebra of the inertial system S, or, as before, the Pauli algebra of γ0 . When we are interested in the interpretation of physical processes as seen in some inertial system S, it is useful to express the pertinent equations in the Pauli algebra of S. We have already done this for some important physical equations in parts II and III. To see how the Pauli algebra of one inertial system is related to the Pauli algebra of another, we 0 express γ o above in terms of the Pauli algebra of γ0 . From (17.3), 0
γ 0 γ0 = a00 + a0k γ k γ0 .
(17.5)
To understand the physical meaning of the terms on the right of this 0 equation, let xµ be the coordinates of a point at rest in the primed inertial system. Then the velocity of this point in the unprimed inertial system will be just the velocity of the primed system as observed in the unprimed system. The components v k of this velocity are dxk ∂xk = dx0 ∂x0
vk =
(17.6)
Now 0
β ≡ a00 = γ 0 · γ0 = γ 0 · γ00 = b00
(17.7)
and 0
a0k = γ 0 · γk = −γ k · γ0 = −bk0 = −βv k .
(17.8)
Therefore, using (7.9), we find γ00 γ0 = β(1 + v).
(17.9)
By squaring this equation, we get an expression for β: 1
β = (1 − v2 )− 2 .
(17.10)
17. Coordinate Transformations
53
We emphasize that (17.9) holds for any Lorentz transformation. More generally, for any timelike vector field p(x) we can write p(x)γ0 = |p(x)|
1 + v(x) (1 −
(17.11)
1 v2 (x)) 2
and interpret v(x) as the velocity of the vector p(x) at the point x as observed in the inertial frame of γ0 . As a physical example, consider a particle with energy-momentum vector p and mass m, pγ0 = E + p = mβ(1 + v).
(17.12)
Hence, in the inertial frame of γ0 the energy of the particle is E = mβ
(17.13a)
p = mβv = Ev.
(17.13b)
and the momentum is
Using (16.10), we can write (17.3) in the form 0
γ µ = aµν γ v = R−1 γ µ R
(17.14)
where R is a constant d-number field. A vector field p which represents some property of a physical system will be independent of any inertial system used for observational purposes. If p has components pµ in the unprimed system, then it has components p0µ = bσµ pσ in the primed system, 0
p = pµ γ µ = p0µ γ µ .
(17.15)
If we say that the components pµ transform covariantly under a change of basis, then the base vectors γ µ are said to transform contravariantly. We can simulate a change of basis by applying the inverse of (17.14) to the vector p: p → p0 = RpR−1 = p0µ γ µ .
(17.16)
To an observer in the {γ µ } frame the vector p0 appears the same 0 as the vector p appears to an observer in the {γ µ } frame. Therefore,
54
Chapter IV. Lorentz Transformations
we can interpret (17.16) as a transformation from one inertial frame to another which preserves a basis in the Dirac algebra. It is sometimes of interest to know the aµν as a function of R, or vice-versa. To solve (17.14) for the aµν is a simple matter. 0
aµν = γν · γ µ = (γν R−1 γ µ R)S .
(17.17)
aµν
To solve for R in terms of the is somewhat more difficult. We do it here for the case of rotations. Define A ≡ aµν γµ γ ν = aµµ + aµν γµ ∧ γ ν .
(17.18)
From (17.14) we get 0
e µ R. A = γµ γ µ = γµ Rγ
(17.19)
Since we are discussing rotations, R has the property R = R; this implies that R can be written R = RI + RB
(17.20)
e is a bivector and RI = 1 (R + R) e has only scalar where RB = 12 (R − R) 2 and pseudoscalar parts. Using (A.20), (A.21) and (A.25) of appendix A, we find e µ = γµ Rγ µ = 4RI∗ . γµ Rγ
(17.21)
Therefore (17.19) becomes A = 4RI∗ R.
(17.22)
e = 16(R∗ )2 . AA I
(17.23)
e = 1, Since RR Hence R=±
A 1
.
(17.24)
e 2 (AA) When (17.18) is substituted this gives R explicitly as a function of the aµν . The indeterminate sign in (17.24) clearly occurs because R appears twice in (17.14) whereas aµν appears once. Formula (17.24) is not of great practical value, because, as our discussion of Lorentz transformations is designed to show, it is usually easier to construct R than it is to construct aµν from physical or geometrical data.
18. Timelike Rotations
18
55
Timelike Rotations
A timelike rotation is of special physical interest when it is interpreted as a transformation from one inertial frame to another. In this section we study timelike rotations from this point of view. Recall from (16.23) that a timelike rotation relates a vector p to a vector p by the formula 1
1
e = e− 2 a pe 2 a . p0 = RpR
(18.1)
We learned in the last section that p and p0 can be interpreted as the same vector as seen in different inertial frames {γµ } and {γµ0 } respectively. We also learned that, under this interpretation, the frames are related by the inverse of (18.1); in particular, 1
1
e 0 R = e 2 a γ0 e− 2 a . γ00 = Rγ
(18.2)
For the purpose of physical interpretation, we must express R in terms of the velocity v of the primed inertial system relative to the unprimed system. First we remark that if (18.2) is to be interpreted as a simple timelike rotation between inertial frames, then a must be a vector in the Pauli algebra of γ0 . In this case γ0 anticommutes with a, so e R∗ = γ0 Rγ0 = R.
(18.3)
From (18.2) and (17.9), we find R2 = γ0 γ00 = e−a = β(1 − v).
(18.4)
In passing, we note that since e−a = cosh a − sinh a, we can write
(18.5)
1
β = (1 − v2 )− 2 = cosh a,
(18.6)
v = tanh a.
(18.7)
The half angle exponential can be written in the same form as (18.4). 1
R = e− 2 a =
1−u 1
(1 − u2 ) 2
.
(18.8)
56
Chapter IV. Lorentz Transformations
To take the square root of (18.4) we must find u in terms of v. Thus 2 1 − u = R2 = β(1 − v). (18.9) 1 2 2 (1 − u ) The scalar part of this expression is 1 + u2 =β 1 − u2
(18.10)
from which it follows that |u| =
β−1 β+1
1
|v|
2
=
1
.
(18.11)
1 + (1 − v2 ) 2
The ratio of vector to scalar part in (18.9) gives v=
2u . 1 + u2
(18.12)
Since v is collinear with u, (18.11) yields v
u=
1 + (1 −
1 v2 ) 2
=
βv . 1+β
(18.13)
At last we can express R as a function of v: R=
1 + β − βv 1
[2(1 + β)] 2
1
1
ˆ [ 12 (β − 1)] 2 . = [ 12 (β + 1)] 2 − v
(18.14)
Physicists are frequently interested in a Lorentz transformation from the laboratory system to the rest frame of a physical system characterized by energy-momentum vector p and mass m. If E and p are the energy and momentum of the system as observed in the lab., then, according to (17.13), R=
m+E−p 1
[2m(m + E)] 2
=
E+m 2m
1 2
ˆ −p
E−m 2m
1 2
.
(18.15)
18. Timelike Rotations
57
Because of the complexity of R relative to R2 (when expressed in physical variables), it is advantageous to express Lorentz transformations in terms of R2 whenever possible. We can always do this when spinors are not involved. Now let us reconsider the timelike rotation (18.1) of a vector p. We express p and p0 in the Pauli algebra of γ0 by multiplying (18.1) on the right by γ0 , to get p00 + p0 = R(p0 + p)R.
(18.16)
Now p has a part p|| collinear with v and a part p⊥ orthogonal to v, so p00 + p0 = R2 (p0 + p|| ) + p⊥ = β(1 − v)(p0 + p|| ) + p⊥ .
(18.17)
From the scalar and vector parts of this equation we get the usual expression for the Lorentz transformation of a space-time vector: p00 = β(p0 − v · p), p0 = β(p|| − p0 v) + p⊥
(18.18) (18.19)
Equation (18.19) can be written in a variety of different ways depending on the representation of p|| and p⊥ : for example, ˆ · pˆ ˆ × (p × v ˆ ), p|| = v v = p − p⊥ = p − v ˆ · (ˆ ˆ × (p × v ˆ ) = p − p|| = p − v ˆ · pˆ p⊥ = v v ∧ p) = v v.
(18.20a) (18.20b)
If p is the energy-momentum vector of some physical system, then (18.18) shows how the energy of p as measured in one inertial system is related to the energy measured in another inertial system moving with velocity v relative to the first. Equation (18.19) is the corresponding relation for momentum. As we saw in (16.20), the Lorentz transformation of any rotor can be accomplished in the same way as that of a vector. As an example, we consider a timelike rotation of the electromagnetic bivector F : e F → F 0 = RF R.
(18.21)
According to (8.13), we can write F = E + iB. By separating E and B into parts collinear and parts orthogonal to v, we can express (18.21) in terms of R2 : E0 + iB0 = (E|| + iB|| ) + R2 (E⊥ + iB⊥ ).
58
Chapter IV. Lorentz Transformations
Using the expression (18.4) for R2 , and noting that vE⊥ = v ∧ E = iv × E we can write (18.21) in the explicit form E0 + iB0 = E|| + β(E⊥ + v × B) + i[B|| + β(B⊥ − v × E)]. (18.22) The vector and bivector parts of this equation give familiar expressions for E0 and B0 . We will now study the composition of special timelike rotations, because it occurs frequently in problems of physical interest. We will be concerned with three different inertial systems. Let R2 be the operator which takes vectors in system 1 to system 2, and let R3 take system 1 to system 3. Let R1 take system 2 to system 3.7 According to (18.4), we can write Ri2 = βi (1 − vi ).
(18.23)
First we show that 1
R = R1 R2 = e− 2 ib R3 .
(18.24)
In words, (18.24) tells us that a Lorentz transformation from system 1 to system 2 followed by a transformation from system 2 to system 3 is equivalent to a transformation from system 1 to system 3 followed by a spatial rotation. According to (16.21), we can always write R in the form (18.24). Still, we need to. verify that, given the interpretation of R1 and R2 , R3 has the interpretation we have already assigned. Observe that RR† = R1 R22 R1 = R32 , or R1 β2 (1 − v2 )R1 = β3 (1 − v3 ).
(18.25)
Equation (18.25) is precisely of the form (18.16). It establishes an equivalence of the velocity vector of system 1 as seen in system 2 with 7 It is difficult to find a system of labels for this problem which is both simple and suggestive. The reader may find it useful to construct a simple triangle diagram to help keep the labels straight.
19. Scalar Product
59
the velocity vector of system 1 as seen in system 3. It shows we have the proper interpretation of R3 . We can solve immediately for β3 and v3 in terms of v1 and v2 , β3 = β1 β2 (1 + v1 · v2 ), v3 =
v1 + v2 + (β1−1 − 1)ˆ v1 × (ˆ v 1 × v2 ) . (1 + v1 · v2 )
(18.26) (18.27)
From (18.25), it is easy to see that in (18.27) v2 and v3 can be interchanged if also the sign of v1 is changed. Our understanding of (18.24) will be complete when we have an expression for b in terms of v1 and v2 . To this end, recall that, according to (18.14), we can write the Ri in the form Ri = Ni (1 + βi − βi vi ).
(18.28)
By substituting these expressions for Ri into (18.24) and equating the ratio of bivector to scalar part on each side of the resulting equation, we find v 2 × v1 tan 12 b = . (18.29) −1 (1 + β1 )(1 + β2−1 ) + v1 · v2 An expression for v3 can also be found from (18.24), but, besides being unduly complicated, it is superfluous, because we already have (18.27).
19
Scalar Product
A scalar product (A, B) of d-numbers A and B is defined by e S. (A, B) ≡ (AB)
(19.1)
From our discussion in section 5, where the notion of scalar product was introduced, we know that the indefinite nature of (19.1) is entirely due to the underlying Lorentz metric of space-time. This means that (19.1) is a geometrically significant scalar product for the whole Dirac algebra. It is interesting to examine the inner automorphisms of D which leave (19.1) invariant.8 We will call them isometries of D. 8 An inner automorphism of an algebra A is an automorphism which can be expressed in terms of operations defined in A .
60
Chapter IV. Lorentz Transformations
Any isometry which takes A into A0 can be written A → A0 = RAS.
(19.2)
The scalar product of any two d-numbers is invariant under (19.2) if and only if e = 1, RR S Se = 1, (19.3a) or e = −1, RR
S Se = −1.
(19.3b)
e and R = ±R, then (19.2) becomes If also S = R−1 = ±R A → A0 = RAR−1 .
(19.4)
According to (16.20) this is a Lorentz transformation of a rotor. A rotor A can be written as the product of two d-numbers Ψ and e Φ, e A = Ψ Φ.
(19.5)
If we require that the Lorentz transformation on vectors (16.10) induce e the following transformations on Ψ and Φ, Ψ → Ψ 0 = RΨ, e→Φ e0 = ΦR e −1 , Φ
(19.6a) (19.6b)
e is a rotor. On the other hand, ΦΨ e is invariant then it is clear that Ψ Φ under Lorentz transformations. D-numbers which transform according to (19.6) are commonly called spinors. Since the transformation (19.6) leaves minimal ideals invariant, this definition of spinor does not conflict with section 12, where we defined a spinor as an element of a minimal ideal. The two definitions are related by the fact that (19.6) can be interpreted as a change of basis in a minimal ideal. The set of all Lorentz transformations on a rotor is a group of isometries of the Dirac algebra; so is the set of all spinor transformations. Closely related is the group of isometries which leave the even and odd subspaces of the Dirac algebra separately invariant. We will call it the group of complex Lorentz transformations. An element of the subgroup for which R and S are even will be called a complex Lorentz
19. Scalar Product
61
rotation. Justification for the label “complex” comes from the fact that these transformations are isometries among the odd d-numbers, and any odd d-number k can be interpreted as a complex vector, as is clear when it is written k = a + ib,
(19.7)
where a and b are vectors. Furthermore, since it takes six parameters to determine R and six more to determine S, the group of complex Lorentz rotations is determined by twelve real parameters, or, equivalently, by six “complex” parameters. The complete group of complex Lorentz transformations can be obtained by combining the continuous group of complex rotations with the discrete operation of space conjugation. The complex rotations are distinguished by leaving invariant both the scalar and pseudoscalar parts of a product of d-numbers. Thus, the complex rotations leave invariant the following norm for k: k k˜ = (a + ib)(a + bi) = a2 − b2 + 2ia · b.
(19.8)
Space conjugation complex conjugates (19.8):
k*k̃* = (kk̃)* = a² − b² − 2ia·b.
(19.9)
A canonical form for the d-numbers which produce the most general isometry of D is given in theorem 7 of appendix B. There it is shown that the isometries continuously connected to unity are related to the group of rotations in a five-dimensional space with metric (+ − − − −). We can entertain definitions other than (19.1) for a scalar product, for instance, (AB)_S or (A†B)_S. Any other definition of scalar product must single out some space-time direction as special. The considerations of section 14 give us good reason to choose
(φ̃γ_0ψ)_S = (ψ†γ_0φ)_S
(19.10)
as the scalar product of Dirac fields φ and ψ. In section 24 we discuss the possibility of a physical interpretation for the inner automorphisms of the Dirac algebra which leave (19.10) invariant.
Chapter V

Geometric Calculus

20  Differentiation
To complete the space-time calculus developed in chapter I, we must define the gradient operator □ for curved space-time. Evidently □ is an invariant differential operator, but we will not attempt to define it without reference to local coordinate systems. This approach simplifies comparison with tensor analysis. However, once □ is defined, manipulations can be carried out in a coordinate-independent manner. To simplify our discussion, we will ignore all questions about differentiability. Such questions can be answered in the same way as in tensor analysis. We wish to emphasize the special algebraic features of our geometric calculus. A set of d-number fields on a region R of space-time will be called linearly independent if they are linearly independent at each point of R. A set of four linearly independent 1-vector fields {γ^i ; i = 0, 1, 2, 3} on R will be called a frame field on R. The metric tensor g^{ij} of a frame field is determined by the inner products of the γ^i,
g^{ij} = γ^i · γ^j = ½(γ^i γ^j + γ^j γ^i).
(20.1)
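As a concrete check of (20.1), an orthonormal frame can be modeled by the Dirac matrices, whose symmetrized products reproduce the Minkowski metric. The following sketch uses the standard matrix representation; the matrix model is our illustration, not part of the text:

```python
import numpy as np

# Dirac matrices in the standard representation: a concrete matrix model of an
# orthonormal frame, in which the inner product (20.1) can be checked directly.
I2 = np.eye(2, dtype=complex)
Z2 = np.zeros((2, 2), dtype=complex)
sigma = [np.array([[0, 1], [1, 0]], dtype=complex),
         np.array([[0, -1j], [1j, 0]], dtype=complex),
         np.array([[1, 0], [0, -1]], dtype=complex)]
gamma = [np.block([[I2, Z2], [Z2, -I2]])]                 # gamma_0
gamma += [np.block([[Z2, s], [-s, Z2]]) for s in sigma]   # gamma_1..gamma_3

eta = np.diag([1.0, -1.0, -1.0, -1.0])                    # signature (+ - - -)
for i in range(4):
    for j in range(4):
        # g_ij = (1/2)(gamma_i gamma_j + gamma_j gamma_i), eq. (20.1)
        sym = 0.5 * (gamma[i] @ gamma[j] + gamma[j] @ gamma[i])
        assert np.allclose(sym, eta[i, j] * np.eye(4))
print("metric recovered from the symmetrized products")
```

The anticommutator of two distinct matrices vanishes, and each square gives ±1 according to the signature.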
It will be algebraically convenient to construct a frame field {γ_i} of vectors reciprocal to the γ^i. The γ_j are uniquely determined by the condition
γ^i · γ_j = δ^i_j.
(20.2)
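Numerically, the reciprocal frame of (20.2) can be obtained by inverting the metric of the frame, γ_j = g_{ji} γ^i. A small sketch; the particular frame below is a hypothetical example:

```python
import numpy as np

eta = np.diag([1.0, -1.0, -1.0, -1.0])   # signature (+ - - -)

def dot(u, v):
    # Minkowski inner product of vectors given by components in an orthonormal basis.
    return u @ eta @ v

# A hypothetical non-orthonormal frame: row i holds the components of gamma^i.
gamma = np.array([[1.0, 0.2, 0.0, 0.0],
                  [0.0, 1.0, 0.3, 0.0],
                  [0.0, 0.0, 1.0, 0.0],
                  [0.1, 0.0, 0.0, 1.0]])

g = gamma @ eta @ gamma.T                # g^{ij} = gamma^i . gamma^j, eq. (20.1)
gamma_rec = np.linalg.inv(g) @ gamma     # gamma_j = g_{ji} gamma^i

# Check eq. (20.2): gamma^i . gamma_j = delta^i_j.
delta = np.array([[dot(gamma[i], gamma_rec[j]) for j in range(4)] for i in range(4)])
print(np.allclose(delta, np.eye(4)))     # True
```

Lowering with the inverse of g^{ij} works for any frame whose metric is nondegenerate.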
For further discussion of the reciprocal frame {γ_i} we refer to appendix A. Let {x^µ : µ = 0, 1, 2, 3} be a set of coordinate functions on R. Through each point x of R the µth coordinate function x^µ determines a coordinate curve with tangent vector γ^µ. The set {γ^µ} of linearly independent vector fields so determined will be called a coordinate frame field on R. Throughout part V we will use Greek indices exclusively for coordinate frame fields and Latin indices for more general frame fields. It will also often be convenient to refer to a frame field simply as a frame, and, since we will be dealing only with local properties of space-time, to ignore the possibility that we may be able to cover only a limited region of space-time by a single coordinate system. Let {γ^i} be a frame field. Corresponding to each vector field γ^i, we define a differential operator □_i by the following postulates:

(a) □_i maps scalars φ into scalars,
□_i φ = ∂_i φ.
(20.3)
If {γ^µ} is a coordinate frame, then ∂_µ is an ordinary partial derivative, i.e.,
□_µ φ = ∂_µ φ = ∂φ/∂x^µ.   (20.3)
For other frame fields, ∂_i can be expressed as a linear combination of partial derivatives by postulate (e) below.
(b) □_i maps vectors into vectors.¹ In particular, □_i maps γ_j into a vector, which can, of course, be expressed as a linear combination of the γ_k. Therefore we can write
□_i γ_j = −L^k_{ij} γ_k.
(20.4)
This defines the so-called coefficients of connection Lkij for the frame {γk }. (c) i obeys the Leibnitz rule. Thus, for any two d-number fields A and B, i (AB) = (i A)B + A(i B).
(20.5)
¹ Green [18] studies a □_i which maps a vector into an arbitrary d-number; however, he restricts his treatment to a space-time with absolute parallelism. In section 23, we discuss a different generalization.
(d) Differentiation is distributive with respect to addition,
□_i(A + B) = □_i A + □_i B.
(20.6)
(e) If a frame field {γ^i} is related to a different frame field {e^m} by
γ^i = h^i_m e^m,
(20.7a)
then the operators □_i corresponding to the γ^i are related to the operators □_m corresponding to the e^m by
□_m = h^i_m □_i.
(20.7b)
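For scalar fields, where □_i reduces to a directional derivative, postulate (e) is just linearity of the directional derivative in the direction vector: if e_m = h^i_m γ_i, then (e_m·□)φ = h^i_m (γ_i·□)φ. A finite-difference sketch; the field φ and both frames below are hypothetical examples:

```python
import numpy as np

def phi(x):
    # A hypothetical scalar field.
    return np.sin(x[0]) * x[1] + x[2] ** 2

def dirderiv(f, x, v, h=1e-6):
    # Central-difference directional derivative (v . grad)f at x.
    return (f(x + h * v) - f(x - h * v)) / (2 * h)

rng = np.random.default_rng(0)
gamma = rng.normal(size=(3, 3))           # rows: frame vectors gamma_i (hypothetical)
hmat = rng.normal(size=(3, 3))            # coefficients h^i_m (hypothetical)
e = np.einsum('im,ik->mk', hmat, gamma)   # e_m = h^i_m gamma_i

x = np.array([0.3, -1.2, 0.8])
lhs = np.array([dirderiv(phi, x, e[m]) for m in range(3)])
rhs = hmat.T @ np.array([dirderiv(phi, x, gamma[i]) for i in range(3)])
print(np.allclose(lhs, rhs, atol=1e-6))   # True: the scalar case of (20.7b)
```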
This postulate insures that differentiation has the same scale in all frames. It is important to remember that throughout this paper we use the terms “scalar” and “vector” in the algebraic sense of section 2 and not in the sense of tensor analysis. The derivative of a d-number is not affected by the indices which label the d-number. For the sake of brevity, we call the operator □_i simply the derivative. We now proceed to examine some of its properties. First note that the derivative preserves the degree of any r-vector. This property will be lost in section 24 when we modify the derivative to apply to spinors. The coefficients of connection for a frame are not completely independent of the metric for the frame. To see this, apply the Leibnitz rule to the metric tensor (20.1):
□_k g_{ij} = (□_k γ_i)·γ_j + γ_i·(□_k γ_j).
By (20.3), □_k g_{ij} = ∂_k g_{ij}, and by (20.4), (□_k γ_i)·γ_j = −g_{jm} L^m_{ki} ≡ −L_{kij}. So
∂_k g_{ij} = −L_{kij} − L_{kji}.
(20.8)
This shows that only ½·4²(4 − 1) = 24 components of L_{kij} are independent. Tensor analysis requires that the covariant derivative of the metric tensor vanish in metric manifolds. (20.8) shows that this condition is equivalent to the Leibnitz rule in our language. Crudely speaking, the Leibnitz rule says that when we differentiate a · b we differentiate a and b but we do not differentiate the dot. The coefficients of connection for {γ^k} are completely determined by those for {γ_j}:
□_i γ_j = □_i(g_{jk} γ^k) = (□_i g_{jk})γ^k + g_{jk} □_i γ^k.
By (20.4) and (20.8),
−L_{ijk} γ^k = (−L_{ijk} − L_{ikj})γ^k + g_{jk} □_i γ^k,
g^{jm} g_{mk} □_i γ^k = δ^j_k □_i γ^k = g^{jm} L_{ikm} γ^k.
So
□_i γ^k = L^k_{ij} γ^j.
(20.9)
We defined the minus sign in (20.4) in order to get a plus sign in (20.9). We shall give reason soon for considering the γ^k more basic than the γ_j. Let us use our rules to differentiate an arbitrary vector field a = a_j γ^j:
□_i a = (□_i a_j)γ^j + a_j(□_i γ^j) = (∂_i a_j + a_k L^k_{ij})γ^j.
(20.10)
(20.10) shows us that when operating on vectors □_i is equivalent to the “covariant derivative” of tensor analysis. According to postulate (a), derivatives of scalars with respect to a coordinate frame always commute. For example,
□_α □_β a_µ = ∂_α ∂_β a_µ = ∂_β ∂_α a_µ = □_β □_α a_µ.
(20.11)
But the derivatives of vectors in general do not commute:
□_α □_β γ^µ = □_α(L^µ_{βσ} γ^σ) = (□_α L^µ_{βσ})γ^σ + L^µ_{βσ} □_α γ^σ = (∂_α L^µ_{βσ} + L^µ_{βρ} L^ρ_{ασ})γ^σ.
So
[□_α, □_β]γ^µ = L^µ_{αβσ} γ^σ,
(20.12)
where
[□_α, □_β] ≡ □_α □_β − □_β □_α,   (20.13)
L^µ_{αβσ} = ∂_α L^µ_{βσ} − ∂_β L^µ_{ασ} + L^µ_{βρ} L^ρ_{ασ} − L^µ_{αρ} L^ρ_{βσ}.   (20.14)
L^µ_{αβσ} is the usual curvature tensor for a metric manifold with a linear connection. Combining (20.12) and (20.11) we obtain
[□_α, □_β]a = a_µ L^µ_{αβσ} γ^σ.
(20.15)
From (20.15) it is clear that the vanishing of the curvature tensor is a necessary and sufficient condition for the derivatives of a vector to commute. With respect to arbitrary frames, the derivatives of scalars do not commute, but they do form a Lie algebra with the commutation rule
[∂_i, ∂_j] = Ω_{ij}^k ∂_k.
(20.16)
Thus, for arbitrary frames (20.15) becomes
[□_i, □_j]a = (a_m L^m_{ijk} + Ω_{ij}^m ∂_m a_k)γ^k.
(20.17)
The derivatives □_k suffer from the mild defect of depending on particular vector fields. We can remove this defect by defining a more fundamental derivative □ by
□ ≡ γ^k □_k;
(20.18)
as before, we call this vector differential operator the gradient. □ is an invariant differential operator: it is independent of any particular frame field. Algebraically, it is characterized by the superposition of vector multiplication onto the Leibnitz rule for differentiating products. We could have expressed these algebraic properties in “vector form” without any reference to a frame field. Our procedure, however, achieves a certain clarity by keeping the Leibnitz rule and the vector multiplication separate. Nevertheless, it is important to note that this separation is not invariant, but depends on the selected frame field. From □ we can obtain the directional derivative with respect to any vector field a:
□_a ≡ a · □.
(20.19)
We get back the □_k by using (20.2) and (20.19):
□_k = γ_k · □.
(20.20)
From (20.19) one finds that the gradient of the coordinate function is the tangent vector γ^µ:
□x^µ = γ^µ.
(20.21)
Because of this direct relation, we consider the frame {γ^µ} more fundamental than the frame {γ_µ}. Indeed, γ_µ depends not only on x^µ, but on all the other coordinates as well. Likewise,
□^µ = γ^µ · □ = g^{µν} □_ν
(20.22)
is the directional derivative determined by the coordinate function x^µ, whereas □_µ depends on all the other coordinate functions as well. Using our generally defined gradient operator and the vector algebra of chapter I, we have a complete coordinate-independent vector calculus. Except for some obvious modifications, our earlier formulas now hold in curved space-time. For instance, the separation (7.16) of the gradient into divergence and curl still holds, as does Maxwell’s equation (8.1). As new examples, let us write down a couple of formulas that hold for the invariant vector calculus. If a and b are vectors, then
□(a · b) = (a · □)b + (b · □)a − a · (□ ∧ b) − b · (□ ∧ a).
(20.23)
This is a coordinate-free form of (20.8). It is an expression of the Leibnitz rule. The reader may recognize it as a generalization of a familiar formula in the vector analysis of Gibbs. For an r-vector A_r and an s-vector B_s,
□ ∧ (A_r ∧ B_s) = (□ ∧ A_r) ∧ B_s + (−1)^r A_r ∧ (□ ∧ B_s) = (−1)^{s(r+1)} B_s ∧ (□ ∧ A_r) + (−1)^r A_r ∧ (□ ∧ B_s).
(20.24)
This can easily be proved by combining the Leibnitz rule with the associative and anticommutative rules for outer products.
21  Riemannian Space-Time
The objective of this section is to express Einstein’s gravitational field equations in terms of our space-time calculus. Therefore we direct
our attention to Riemannian space-time. A Riemannian manifold is distinguished by the simple requirement that the curl of the gradient of every scalar vanish;² that is, for every scalar function φ,
□ ∧ □φ = 0.
(21.1)
This implies that for a coordinate frame
□ ∧ γ^µ = □ ∧ □x^µ = 0.
(21.2)
But □ ∧ γ^µ = ½(L^µ_{αβ} − L^µ_{βα})γ^{αβ}. So
L^µ_{αβ} = L^µ_{βα}.
(21.3)
By combining (21.3) with three copies of (20.8) with permuted indices, we find
L_{αβµ} = ½(∂_µ g_{αβ} − ∂_α g_{µβ} − ∂_β g_{µα}).
(21.4)
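Equation (21.4) is easy to check numerically: compute L_{αβµ} from a metric by finite differences, then verify the symmetry (21.3) and the metric condition (20.8). A sketch for a hypothetical two-dimensional metric (the metric itself is our example, not from the text):

```python
import numpy as np

def g(x):
    # Hypothetical 2-d metric g_{ij}(x) = diag(1, (x^0)^2), for illustration only.
    return np.diag([1.0, x[0] ** 2])

def dg(x, k, h=1e-6):
    # Central-difference partial derivative of the metric along coordinate k.
    e = np.zeros(2); e[k] = h
    return (g(x + e) - g(x - e)) / (2 * h)

def L(x):
    # L_{abm} = (1/2)(d_m g_{ab} - d_a g_{mb} - d_b g_{ma}), eq. (21.4).
    d = [dg(x, k) for k in range(2)]
    out = np.zeros((2, 2, 2))
    for a in range(2):
        for b in range(2):
            for m in range(2):
                out[a, b, m] = 0.5 * (d[m][a, b] - d[a][m, b] - d[b][m, a])
    return out

x = np.array([2.0, 0.7])
Lx = L(x)

# Symmetry (21.3): L_{abm} = L_{bam}.
print(np.allclose(Lx, Lx.transpose(1, 0, 2)))   # True
# Metric condition (20.8): d_k g_{ij} = -L_{kij} - L_{kji}.
lhs = np.stack([dg(x, k) for k in range(2)])
rhs = -(Lx + Lx.transpose(0, 2, 1))
print(np.allclose(lhs, rhs))                    # True
```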
So the coefficients of connection are completely determined by the metric tensor. Let us now examine some properties of the curvature tensor. From (20.12) we can get an equation containing only invariant derivatives,
□ ∧ □γ^µ = ½R^µ_{αβσ} γ^{αβ} γ^σ,
(21.5)
where R^µ_{αβσ} is now the Riemann curvature tensor. We can analyze (21.5) further in the following way:
□²γ^µ = (□·□ + □∧□)γ^µ = □(□·γ^µ + □∧γ^µ). But because of (21.2),
□∧□γ^µ = □(□·γ^µ) − (□·□)γ^µ.
(21.6)
The right-hand side of (21.6) has only a vector part. Hence the trivector part of (21.6) vanishes,
□∧□∧γ^µ = ½R^µ_{αβσ} γ^{αβσ} = 0.   (21.7)

² Remarks on the significance of this condition are made in section 22.
This is equivalent to the well known identity
R^µ_{αβσ} + R^µ_{βσα} + R^µ_{σαβ} = 0.
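Both the curvature formula (20.14) and this cyclic identity can be checked numerically for a metric-derived connection. The sketch below uses the two-dimensional sphere metric diag(1, sin²(x⁰)) as a hypothetical example; only the connection (21.4) and the curvature formula (20.14) are taken from the text:

```python
import numpy as np

def g(x):
    # Unit-sphere metric g_{ij} = diag(1, sin^2(x^0)); a hypothetical curved example.
    return np.diag([1.0, np.sin(x[0]) ** 2])

def dg(x, k, h=1e-5):
    e = np.zeros(2); e[k] = h
    return (g(x + e) - g(x - e)) / (2 * h)

def L_up(x):
    # L_{abm} from (21.4), then L^m_{ab} = g^{mj} L_{abj} (last index raised).
    d = [dg(x, k) for k in range(2)]
    Llow = np.zeros((2, 2, 2))
    for a in range(2):
        for b in range(2):
            for m in range(2):
                Llow[a, b, m] = 0.5 * (d[m][a, b] - d[a][m, b] - d[b][m, a])
    return np.einsum('mj,abj->mab', np.linalg.inv(g(x)), Llow)

def dL_up(x, k, h=1e-4):
    e = np.zeros(2); e[k] = h
    return (L_up(x + e) - L_up(x - e)) / (2 * h)

x = np.array([0.7, 0.3])
Lu = L_up(x)
dLu = np.stack([dL_up(x, k) for k in range(2)])  # dLu[a, m, b, s] = d_a L^m_{bs}

# Eq. (20.14): R^m_{abs} = d_a L^m_{bs} - d_b L^m_{as} + L^m_{br}L^r_{as} - L^m_{ar}L^r_{bs}.
R = (dLu.transpose(1, 0, 2, 3) - dLu.transpose(1, 2, 0, 3)
     + np.einsum('mbr,ras->mabs', Lu, Lu) - np.einsum('mar,rbs->mabs', Lu, Lu))

print(np.abs(R).max() > 0.1)              # True: the sphere is curved
# Cyclic identity: R^m_{abs} + R^m_{bsa} + R^m_{sab} = 0.
cyc = R + R.transpose(0, 2, 3, 1) + R.transpose(0, 3, 1, 2)
print(np.allclose(cyc, 0, atol=1e-4))     # True
```

The cyclic sum vanishes because the connection of (21.4) is symmetric in its first two indices, exactly as the derivation in the text requires.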
It is a simple matter to prove a more general version of (21.7), namely, that for any differentiable d-number field A,
□∧□∧A = 0.
(21.8)
To interpret the vector part of the right side of (21.5) we use equation (3.9),
γ^{αβ} · γ^σ = γ^α(γ^β · γ^σ) − γ^β(γ^α · γ^σ).
So
R^µ_{αβσ} γ^{αβ} · γ^σ = 2R^µ_{αβσ} g^{βσ} γ^α = 2R_α^µ γ^α.   (21.9)
R_α^µ is the Ricci tensor. Thus (21.5) and (21.6) give
R^µ ≡ R_α^µ γ^α = □(□·γ^µ) − □·□ γ^µ.
(21.10)
We see that the Ricci tensor involves only the gradient operator, whereas the total curvature tensor involves directional derivatives in its definition. In space-time the free-field gravitational equations are ordinarily written R_α^µ = 0. By (21.10) these equations become
□·□ γ^µ = 0
(21.11)
if the γ^µ are subject to the subsidiary condition
□·γ^µ = □·□x^µ = 0.
(21.12)
Coordinates which satisfy (21.12) will be called harmonic. Equations (21.11) and (21.12) look quite like the wave equation for the electromagnetic potential with the Lorentz side condition. But the appearance is deceiving. Observe that the operator □·□ takes on its simplest form in harmonic coordinates:
□·□ = γ^µ·γ^ν □_µ □_ν + (□·γ^µ)□_µ = □^µ □_µ.
(21.13)
Now it is true that when operating on a scalar, □^µ□_µ = ∂^µ∂_µ, the d’Alembertian. Therefore (21.12) is the wave equation for the x^µ. But when operating on the vectors γ^µ in (21.11), □^µ□_µ produces a nonlinear equation in the coefficients of connection, as can be seen by using (20.10). Because the gravitational field equations take on their simplest form (21.11) for a harmonic frame, it is tempting to attribute some special physical significance to harmonic coordinates. Fock [19] has suggested that harmonic coordinates are the natural generalization of the inertial coordinates of special relativity. We will return to this suggestion in section 23. Here we will be content with the remark that whether or not harmonic coordinates can be interpreted as inertial, they retain a redoubtable significance because (21.12) is the simplest coordinate condition involving only the gradient operator. Einstein’s equations for the gravitational field in the presence of matter are usually written
R_ν^µ − ½δ_ν^µ R = T_ν^µ.
(21.14)
To complete our discussion, we must reformulate them in the geometric calculus. The scalar curvature is simply
R = R^µ · γ_µ = R^{µν} g_{µν}.
(21.15)
From the matter field we can construct the stress-energy vectors T^µ = T_ν^µ γ^ν, as we did for the electromagnetic field in section 9. The gravitational field equations can now be written
γ(R^µ − ½γ^µ R) = γT^µ,   (21.16)
where γ = |γ_5| = √−g is the magnitude of the pseudoscalar for the frame {γ_µ}.³ The factor γ in (21.16) has been introduced so that the right side satisfies the conservation law
□_µ(γT^µ) = 0,
(21.17)
and the left side satisfies the identity
□_µ[γ(R^µ − ½γ^µ R)] = 0.
(21.18)
The operator □_µ differs somewhat from the covariant derivative of tensor analysis, and the factor γ compensates for this difference in (21.17) and (21.18).

³ The pseudoscalar of a frame is discussed in appendix A.
We can write (21.16) in a more suggestive form. By contracting the tensors of (21.14) we find
T ≡ T_µ^µ = −R.
(21.19)
Dropping the pseudoscalar factor γ from (21.16), the gravitational field equations for a harmonic frame can be written
□·□ γ^µ = 2T^µ − γ^µ T.
(21.20)
Thus the gravitational field can be represented by four vector potentials, each of which satisfies a “wave-like” equation with a source determined by the stress-energy tensor of the matter field. In spite of the apparent simplicity of (21.20), it is not clear that it can be solved without being rewritten as an equation for the coefficients of connection of the harmonic frame, in which case the methods of solution are the same as in tensor analysis.
22  Integration
Grassmann (or multilinear) algebra is equivalent to Clifford algebra using only the outer multiplication defined in section 3.⁴ In modern mathematical treatments of integration theory and differential geometry, Grassmann algebra is widely used in the so-called algebra of differential forms. Introductions to this subject which are designed for physicists are given by Misner and Wheeler [13] and by Flanders [20]. Here we wish to point out that our geometric calculus based on Clifford algebra has all the advantages of the algebra of differential forms, and more. All the results given by Flanders can be easily transformed into the language of Clifford algebra. We emphasize that Clifford algebra is a superior language because, as we saw in chapter I, it integrates inner and outer multiplication into a single operation; this has the great advantage of making all multiplication associative. By contrast, in treatments of differential forms, outer multiplication is as simple as in Clifford algebra, but duality transformations are more difficult, and the inner product is introduced as a combination of outer multiplication and duality transformation; beyond these unnecessary complications, the powerful associative law of Clifford algebra is never found and exploited. The superiority of Clifford algebra is amply illustrated by comparing our formulation of Maxwell’s equation with that given in the above references. To make our geometric calculus more useful, we show how Clifford algebra enables us to express the fundamental theorem of integral calculus in strikingly simple form. We call this theorem the boundary theorem, and we formulate it first for flat space-time.

The Boundary Theorem. In Minkowski space-time, let Ω be a closed (r + 1)-dimensional surface bounded by an r-dimensional surface Σ. Represent the surface element of Ω by the (r + 1)-vector dω, and the surface element of Σ by the r-vector dσ. If A is a differentiable d-number field on Ω, then
∫_Ω dω · □A = ∫_Σ dσ A.   (22.1)

⁴ Or if you wish, the Grassmann algebra of a V_n is the Clifford algebra of a V_n for which the inner products of all vectors in V_n are zero.
Actually, (22.1) holds for flat manifolds of any dimension as long as the appropriate Clifford algebra is used. For Euclidean 3-space we write
∫_Ω dω · ∇A = ∫_Σ dσ A.   (22.2)
Of course for space-time 0 ≤ r ≤ 3, and for Euclidean 3-space 0 ≤ r ≤ 2. Our formulation of the boundary theorem contains all special cases, variously named as Stokes’ theorem, Green’s theorem, Gauss’s theorem, the divergence theorem, etc. The proof of the boundary theorem involves only the much studied analytical problems which arise in the usual treatment of Stokes’ theorem, so we do not attempt it here. Our purpose is rather to demonstrate the great algebraic convenience which accrues from the use of Clifford algebra. However, before we discuss applications, some remarks on the boundary theorem for curved manifolds are in order. In flat manifolds addition of vectors, hence addition of any c-numbers, at different points can be uniquely defined. This means that the integral of any c-number field has a definite meaning. But in curved manifolds only the sum of scalars and pseudoscalars at different points
is well-defined. Therefore for curved manifolds only the scalar and pseudoscalar parts of (22.1) are meaningful. For curved Riemannian space-time (22.1) yields
∫_Ω (dω · □A)_I = ∫_Σ (dσ A)_I,   (22.3)
where the subscript I means invariant part = scalar part + pseudoscalar part. Let us examine (22.3) in more detail. Using equation (3.10), we note that
(dω · □A)_S = (dω · □) · A_r = dω · (□ ∧ A_r),
(22.4)
where A_r is the r-vector part of A. Therefore the scalar part of (22.3) can be written
∫_Ω dω · (□ ∧ A_r) = ∫_Σ dσ · A_r.   (22.5a)
Similarly, the pseudoscalar part of (22.3) can be written
i ∫_Ω dω · (□ ∧ B_r) = i ∫_Σ dσ · B_r,   (22.5b)
where B = Ai is the dual of A. With the help of (3.17), it is easy to see that (22.5) is the same as the formulation of Stokes’ theorem given in references [17] and [24]. We may take equation (22.3) as the definition of the boundary operator □. For a manifold with a general linear connection the boundary operator so defined will not be equivalent with the gradient operator defined in section 20. However, by virtue of equation (21.1), it can easily be shown that in Riemannian geometry the gradient and the boundary operator may be taken as one and the same. This is the beautiful property that sets Riemannian geometry apart. It also explains why the basic differential operator should be a 1-vector, for, as boundary operator, □ mediates between manifolds differing by one dimension. Now return to flat space-time and equation (22.1). If r = 3, in space-time dω is pseudoscalar, so dω ∧ □ = 0. Therefore for this special case we can write (22.1) as
∫_Ω dω □A = ∫_Σ dσ A.   (22.6)
This circumstance enables us to give a definition of □A without explicit reference to coordinates:
□A = lim_{dω→0} dω⁻¹ ∫_Σ dσ A.   (22.7)
The limiting procedure clearly involves the shrinking of the volume Ω to the point where □A is defined. As further illustration, let us see how the integral theorems for Euclidean 3-space are special cases of (22.2). For r = 0, Ω is a curve with tangent 1-vector dω = ds and end points x_1 and x_2. Σ is the 0-dimensional surface consisting of the points x_1 and x_2. Integration over Σ consists merely in multiplication by the 0-vectors 1 or −1 depending respectively on whether the curve Ω is arriving at or leaving Σ. Thus (22.2) takes on the familiar form
∫_{x_1}^{x_2} ds · ∇A = A(x_2) − A(x_1).   (22.8)
For r = 1, Ω is a surface with tangent 2-vector dω = −in dS, where n is the unit normal to Ω and dS = |dω| is the magnitude of dω. The surface Ω is bounded by a curve Σ with tangent 1-vector dσ = ds. For this case (22.2) becomes
∫_Ω dS n × ∇A = ∫_Σ ds A.   (22.9)
Using inner and outer products we can separate (22.9) into two parts. For example, if A is a vector a, the scalar and bivector parts of (22.9) are, after multiplying the bivector part by −i,
∫_Ω dS (n × ∇) · a = ∫_Σ ds · a,   (22.10)
∫_Ω dS (n × ∇) × a = ∫_Σ ds × a.   (22.11)
If A is a scalar φ, (22.9) has only one part:
∫_Ω dS n × ∇φ = ∫_Σ ds φ.   (22.12)
Formulas for bivector or pseudoscalar A are similar to the above, because in Euclidean 3-space they can be expressed as duals of a vector or scalar. For r = 2, Ω is a volume with volume element dω = i dV and surface Σ with area element dσ = in dS, where n is the unit outer normal to Ω. For this case we can write (22.2) in the form
∫_Ω dV ∇A = ∫_Σ dS nA.   (22.13)
If A is a vector a, (22.13) is equivalent to the two equations
∫_Ω dV ∇ · a = ∫_Σ dS n · a,   (22.14)
∫_Ω dV ∇ × a = ∫_Σ dS n × a.   (22.15)
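As a numerical illustration of the divergence theorem (22.14), take the hypothetical field a = (xy, yz, zx) on the unit cube; midpoint quadrature gives the same value for both sides:

```python
import numpy as np

# Midpoint-rule check of (22.14) on the unit cube for a = (xy, yz, zx),
# with div a = x + y + z. The field is a hypothetical example.
N = 100
h = 1.0 / N
c = (np.arange(N) + 0.5) * h                  # cell midpoints
X, Y, Z = np.meshgrid(c, c, c, indexing='ij')

volume_integral = (X + Y + Z).sum() * h**3    # left side of (22.14)

U, V = np.meshgrid(c, c, indexing='ij')       # parameters on each face
dS = h**2
flux = 0.0
flux += (1.0 * U).sum() * dS   # face x = 1, n = +e_x: n.a = a_x = x*y = U
flux += 0.0                    # face x = 0: a_x = 0 there
flux += (1.0 * V).sum() * dS   # face y = 1, n = +e_y: n.a = a_y = y*z = V
flux += 0.0                    # face y = 0: a_y = 0
flux += (1.0 * U).sum() * dS   # face z = 1, n = +e_z: n.a = a_z = z*x = U
flux += 0.0                    # face z = 0: a_z = 0

print(volume_integral, flux)   # both approximately 1.5
```

The midpoint rule is exact for this linear integrand, so the two sides agree to rounding error.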
If A is a scalar φ, (22.13) becomes
∫_Ω dV ∇φ = ∫_Σ dS nφ.   (22.16)

23  Global and Local Relativity
In chapter IV we introduced the Lorentz transformation as an isometry of the tangent space V(x) at a point x of space-time. We found that it could be written in the form
V(x) → V′(x) = R(x)V(x)R⁻¹(x).
(23.1)
Now in the theory of special relativity Lorentz transformations are interpreted as coordinate transformations among inertial systems, and they have a priori nothing to do with (23.1), although in section 17 we related them to isometries with R(x) constant. This brings us to a very important question: How is the notion of Lorentz invariance, which has proved so pregnant in special relativity, properly generalized to nonuniform space-time? Is the physical significance of a Lorentz transformation to be found in its interpretation as a coordinate transformation? Or as a change in direction of tangent vectors? Perhaps
each interpretation relates to a different physical fact, and perhaps they are connected in some physically significant way. Einstein [21] generalized the idea of coordinate transformation. In the special theory of relativity, Einstein had introduced covariance under Lorentz transformations as a statement of the equivalence of observations made in different inertial systems. In the general theory of relativity, he extended this to the principle that the equations of physics must be covariant under any coordinate transformation. However, although of heuristic value, the principle of general covariance proved to be without physical content, as Einstein himself admitted, for any equation can be cast in covariant form.⁵ Evidently the success of his theory of gravitation has some basis other than this principle, and the physical significance of Lorentz transformations is to be found somewhere else. Fock [19] rejects Einstein’s idea that in nonuniform space-time no class of coordinate systems can have objective physical significance. He generalizes the definition of inertial coordinates by identifying them with harmonic coordinates in Riemannian space-time. The Riemann and the harmonic conditions can be combined in a natural way into a single definition of inertial systems: Coordinate functions x^µ will be called inertial if they everywhere satisfy the wave equation
□²x^µ = 0.
(23.2)
The geometry of space-time can now be largely determined by a postulate which we call the Principle of Global Relativity (PGR): Inertial coordinates (systems) exist in space-time. Let us mention some of the physical implications of PGR. First, note that the bivector part of (23.2) is equivalent to (21.2), and we saw that this implies that space-time is Riemannian. Second, observe that PGR leads to a relativity group which characterizes space-time; namely, the group of all coordinate transformations which leaves the metric of inertial systems invariant. This is a subgroup of the group of all transformations which leave (23.2) invariant. In flat space-time, for instance, the larger group permits arbitrary scale transformations for each coordinate in addition to the usual Lorentz transformation.

⁵ Indeed, equation (8.1) is an invariant formulation of Maxwell’s equation—it makes no reference whatever to coordinates. What can general covariance mean here except that coordinate systems are irrelevant?
We can eliminate these scale transformations by suitable boundary conditions. Therefore, we can alternatively define the relativity group as a group of transformations, subject to certain boundary conditions, which leaves (23.2) invariant. Perhaps it is not necessary to construe PGR as asserting that all of space-time can be covered by a single coordinate system. We must be able to cover certain subregions of space-time with inertial coordinates, but we leave the exact nature of such regions to be determined by some future physical principle. The structure of the relativity group is not solely determined by PGR but depends on boundary conditions for space-time, and so involves a fundamental question of cosmology. This leads to the problem of determining the relativity group for various models of space-time. If space-time is flat, the relativity group is just the Poincaré group, and our theory reduces to the special theory of relativity. Fock [19] has further argued that for any Riemannian manifold which is uniform at infinity harmonic coordinates are unique to within a Poincaré transformation. Beyond this, the problem of determining “suitable boundary conditions” is wide open. The original special theory of relativity suffers from the lack of a clear definition of inertial coordinates. Einstein tacitly assumed that space-time is flat by assuming that rigid rods exist in each inertial system and can be used to lay out coordinates. Definition (23.2) is designed to remedy this defect in a way which unites both of Einstein’s great theories. It is fitting that this definition involves a generalization of the wave equation, for it was Einstein’s brilliant analysis of electrodynamics which first led to the formulation of the relativity principle. PGR leads to a uniqueness in interpretation of physical phenomena which is absent in the viewpoint of general relativity.
Assuming suitable boundary conditions in space-time, a straight line is uniquely defined as a straight line in an inertial system, and the relativity group can be looked upon as the group of congruences of all straight lines in space-time. Acceleration is the same in all inertial systems, and this gives absolute significance to gravitational forces and the stress-energy tensor. Unfortunately, these so-called advantages of PGR can be construed as mere conveniences, for no decisive physical consequence of identifying harmonic and inertial coordinates has yet been discovered.
In our formulation of global relativity we assumed that the nature of the gradient operator was understood. Thus, the d’Alembertian in (23.2) makes implicit use of the metric of space-time; it incorporates Einstein’s original assumption of the constancy of the speed of light. The gradient operator is, moreover, a “local” operator. We can formulate this property in terms of a “relativity principle” which has nothing to do with coordinates. We will show that it agrees with the postulates for differentiation given in section 20 and so could be used to construct an alternative definition for the gradient operator. The insight we gain leads us to the more general notion of differentiation discussed in the next section. Let us examine what may be called the Principle of Local Relativity (PLR): The gradient operator □ is covariant under local Lorentz transformations. The transformation (23.1) will be called a local Lorentz transformation at x if R(x) is differentiable. Supposing that □ is not yet defined, let us consider how to give meaning to the word “differentiable”. We cover a neighborhood of the point x with coordinates x^µ and their tangent vectors γ^µ. We take the partial derivatives of the γ^µ with respect to x^µ to be zero by definition. Now, by expressing R(x) in terms of the tensor basis γ^J determined by the γ^µ (defined in section 4), we have
∂_µ R(x) = ∂_µ(R_J γ^J) = (∂_µ R_J)γ^J,
(23.3)
i.e., the partial derivative operates only on the scalar coefficients. The main point to be made is that differentiability implies that a local Lorentz transformation at x entails a local Lorentz transformation at neighboring points of x. A d-number which transforms according to (23.1) will be called a local rotor. Let A be a local rotor. The principle of local relativity says that the local Lorentz transformation
A → A′ = RAR⁻¹   (23.4)
induces
□A → (□A)′ = R(□A)R⁻¹.   (23.5)
In order to see that this □ can be identified with the gradient operator defined in section 20, we cover a neighborhood of the point
x under consideration by a frame field of orthonormal vectors γ^k, which we call a vierbein. We can, of course, express the γ^k as a linear combination of the vectors γ^µ of a coordinate frame,
γ^k = h^k_µ γ^µ,   γ^µ = h^µ_k γ^k,
h^i_µ h^ν_i = δ^ν_µ,   h^i_µ h^µ_j = δ^i_j.   (23.6)
Just as we fixed the partial derivatives ∂_µ by requiring ∂_µ γ^ν = 0, we can define operators ∂_k corresponding to the γ^k by the condition ∂_k γ^j = 0. The ∂_k can be expressed in terms of the ∂_µ by
∂_k = h^µ_k ∂_µ
(23.7)
and differentiation carried out in the usual way, but it must be remembered that only the components of a d-number taken relative to the γ^k are to be differentiated. Now, in a well-known way [22, 23], we can construct a covariant derivative □_k which is associated with the γ^k. We limit our discussion to local Lorentz rotations. First we introduce a d-number field C_k for which (23.4) induces the transformation
C_k → C_k′ = RC_k R̃ + R∂_k R̃.
(23.8)
By differentiating RR̃ = 1, we find that
R∂_k R̃ = −(∂_k R)R̃,
(23.9)
which shows that R∂_k R̃ is a bivector. Therefore, (23.8) can be satisfied if C_k is chosen to be a bivector field. The operation of □_k on a local rotor A can now be written explicitly:
□_k A = ∂_k A + [C_k, A].
(23.10)
It is easily verified that (23.4) induces
□_k A → (□_k A)′ = R(□_k A)R̃.
(23.11)
Since also
γ^k → Rγ^k R̃,   (23.12)
□ = γ^k □_k satisfies (23.5).
We can verify that □_k is the covariant derivative of section 20 by applying it to the γ^k:
□_i γ^k = [C_i, γ^k] = 2C_i · γ^k.
(23.13)
Now, we can write
C_i = ¼C_{imn} γ^{mn},   (23.14)
so, using (3.9),
C_i · γ^k = ¼C_{imn} γ^{mn} · γ^k = ½C^k_{im} γ^m,
□_i γ^k = C^k_{im} γ^m.
(23.15)
This agrees with (20.9). Of course, the coefficients of connection for a vierbein satisfy
C_{ijk} = −C_{ikj},
(23.16)
which agrees with (20.8), since for a vierbein ∂_k g_{ij} = 0. The coefficients of connection for a coordinate frame field can be found from those for a vierbein by substitution:
□_µ γ^ν = □_µ(h^ν_i γ^i) = (□_µ h^ν_i)γ^i + h^ν_i h^k_µ(□_k γ^i) = (∂_µ h^ν_j + h^ν_i h^k_µ C^i_{kj})h^j_σ γ^σ.
But □_µ γ^ν = L^ν_{µσ} γ^σ. So
L^ν_{µσ} = A^ν_{µσ} + C^ν_{µσ},
(23.17a)
where Aνµσ ≡ hjσ (∂µ hνj ), ν i Cµσ ≡ hνi hkµ hjσ Ckj .
(23.17b)
This completes our demonstration that the gradient operator of section 20 is covariant under local rotations. The covariance of the gradient operator expresses the local character of differentiation. Differentiation involves comparison only of d-numbers at neighboring points. A local rotation can change the relative directions of tangent vectors at points separated by a finite distance, but it leaves the relative directions of vectors at infinitesimally
separated points unchanged. For this reason the algebraic properties of the gradient operator are preserved by local rotations. The connection Ck in (23.10) includes the gravitational force on the rotor A. The separation of the right side of (23.10) into two parts is not invariant, but depends on the choice of a frame. A unique separation of the directional derivative into a part which is due to the gravitational field and a part which is not can only be accomplished by some global postulate such as PGR. Just the same, since the gradient operator includes the gravitational force, PLR can be interpreted as an expression of the local character of the gravitational interaction.
24  Gauge Transformation and Spinor Derivatives
In the last section, we saw that the gradient operator could be described, at least in part, by the condition of covariance under local Lorentz transformations. Now, in section 19 we found isometries of the Dirac algebra which are more general than Lorentz transformations. This leads us immediately to a more general kind of derivative. Let us define a gauge transformation as any differentiable isometry of the Dirac algebra at a generic point x of space-time. The term “differentiable” is to be understood in the same sense as in the last section, and again differentiability implies that a gauge transformation at x entails a gauge transformation at points neighboring x. The set of all gauge transformations forms what we call a gauge group. The structure of the gauge group is determined by the scalar product it leaves invariant. Appropriate definitions of scalar product were discussed in section 19. Any gauge transformation has the form
$\psi \to \psi' = R\psi S.$  (24.1)
To illustrate the general method, we will study the gauge group which leaves invariant the norm
$(\tilde\psi \gamma_0 \psi)_S = (\psi^\dagger \gamma_0 \psi)_S,$  (24.2)
where $\gamma_0$ is a field of timelike unit vectors. In order that (24.1) leave (24.2) invariant, we must have
$R\tilde R = \pm 1$  (24.3)
and
$SS^\dagger = \pm 1,$  (24.4)
where both (24.3) and (24.4) must have the same sign. Our purpose will be served if we allow only the positive signs in (24.3) and (24.4), and, further, if we restrict our discussion to gauge transformations continuously connected to the identity. We consider the following generalization of the principle of local relativity: The gradient operator is covariant under gauge transformations. This means that on the derivative $\square_k\psi$ of the d-number field $\psi$ with respect to the vierbein $\{\gamma_k\}$, the gauge transformation (24.1) induces the transformation
$\square_k\psi \to (\square_k\psi)' = R(\square_k\psi)S.$  (24.5)
The covariance of $\square_k\psi$ can be assured by requiring that it have the form
$\square_k\psi = (\partial_k + C_k)\psi + \psi D_k,$  (24.6)
where $\partial_k$ is defined as in section 23, and (24.1) induces the following transformations on $C_k$ and $D_k$:
$C_k \to C_k' = RC_k\tilde R + R\,\partial_k\tilde R,$  (24.7)
$D_k \to D_k' = S^\dagger D_k S + (\partial_k S^\dagger)S.$  (24.8)
$C_k$ and $D_k$ must be proportional to the generators of R and S. Define the gradient operator as before:
$\square \equiv \gamma^k \square_k.$  (24.9)
Now the $\gamma^k$ were introduced as vectors, and they must remain so after a gauge transformation. Therefore, if the gradient is to be covariant, i.e., if (24.1) induces the transformation
$\square\psi \to (\square\psi)' = R(\square\psi)S,$  (24.10)
then the induced transformation
$\gamma^k \to \gamma^{k\prime} = R\gamma^k\tilde R$  (24.11)
must be a local Lorentz rotation; that is, the $\gamma^k$ must be local rotors, and the $C_k$ in (24.6) must be the same as the $C_k$ defined in section 23.
We can now write
$\square\psi = (\partial + C)\psi + \gamma^k\psi D_k,$  (24.12)
where
$\partial \equiv \gamma^k\partial_k$  (24.13)
and
$C \equiv \gamma^k C_k.$  (24.14)
It is very important to remember that the separation of $\partial$ and $C$ depends on the selected vierbein $\{\gamma_k\}$, so the comments we made about (23.6) apply here. Also note that the connection $C$ can be expressed in terms of the connection for a coordinate frame field by (23.13) and (23.16),
$C = \tfrac14 C_{ij}{}^{k}\,\gamma^i\gamma^j\gamma_k = \tfrac14 C_{\alpha\beta}{}^{\sigma}\,\gamma^\alpha\gamma^\beta\gamma_\sigma = \tfrac14\,(L_{\alpha\beta}{}^{\sigma} - A_{\alpha\beta}{}^{\sigma})\,\gamma^\alpha\gamma^\beta\gamma_\sigma.$  (24.15)
The term $C\psi$ in (24.12) is the usual expression for the interaction of the gravitational field with a spinor field. The term $\gamma^k\psi D_k$ must be interpreted as the coupling of $\psi$ to nongravitational fields $D_k$. From theorem 5 of appendix B, we find that $D_k$ is proportional to the generators of the group of rotations in a 5-dimensional Euclidean space. Therefore $D_k$ represents 10 independent vector fields interacting with $\psi$. We can write
$D_k = i\mathbf{a}_k + \gamma_0(\mathbf{b}_k + i\mathbf{c}_k) + i d_k,$  (24.16)
where, as usual, the boldface letters represent vectors in the Pauli algebra of $\gamma_0$. If, as in section 13, we interpret $\psi$ as a nucleon field, then certain physical interpretations are possible for the terms on the right of (24.16). The first term can be identified with the ρ meson and the last term with the ω meson.⁶ It is not clear how the second and third terms might be associated with any known vector mesons; certainly no identification can be made until the factor $\gamma_0$ is interpreted. At this point we terminate the discussion before it leads too far along speculative lines. Equation (24.12) stands only as an example of interactions invariant under local isometries of the Dirac algebra.
⁶ Cf. reference [24].
At the very least, it has the attractive feature that both gravitational and electromagnetic interactions are generated in the same way—by covariance under gauge transformations. Inasmuch as the interaction terms arise naturally from the Dirac algebra, they may be said to have a geometric origin.
Conclusion

We have constructed a single algebraic system which combines many of the advantages of matrix algebra, Grassmann algebra and vector and tensor analysis with some peculiar advantages of its own. The simple form of equations and the ease of manipulation provided by our space-time algebra have been amply illustrated. Much of this simplicity is due to specific assumptions about space-time which we incorporated into the algebra.

All the elements and operations in our space-time algebra can be interpreted, either directly or indirectly, as representing geometrical features of space-time. This is the great epistemological virtue of our system. We have seen how it forces us to identify the unit pseudoscalar in both space and space-time as the complex imaginary i. This, in turn, connects the indefinite property of the space-time metric to the equation $i^2 = -1$. Further, it implies that space-time is something more intricate than three dimensions of space plus one dimension of time, as illustrated by our conclusion that a vector in space corresponds to a timelike bivector in space-time. Space-time is related to space as an algebra is related to one of its subalgebras.

Since our space-time algebra is more than a mathematical system, but something of a physical theory as well, it is natural to ask for its experimental consequences. Unfortunately, such consequences can be obtained only in the context of a more detailed physical theory. Just the same, the striking successes of the Dirac electron theory furnish strong evidence for the fundamental role of the Dirac algebra. Although spin was discovered before the Dirac algebra, the Dirac algebra provides a theoretical justification for the existence of spin. It should not be too surprising if the Dirac algebra also provides a justification for isospin after the manner suggested in section 13. The ultimate justification of our space-time calculus as a “theory” awaits a physical prediction which depends on our identification of i as a pseudoscalar.

© Springer International Publishing Switzerland 2015
D. Hestenes, Space-Time Algebra, DOI 10.1007/978-3-319-18413-5
Appendixes

A  Bases and Pseudoscalars
In this appendix we give some miscellaneous conventions and formulas which are useful for work with arbitrary bases in $V_n$, together with some special formulas in the Dirac and Pauli algebras. A set of linearly independent vectors $\{e_i : i = 1, 2, \ldots, n\}$ will be called a frame in $V_n$. The metric tensor $g_{ij}$ of $V_n$ is defined by
$g_{ij} = e_i \cdot e_j = \tfrac12(e_i e_j + e_j e_i).$  (A.1)
A vector a can be written as a linear combination of the $e_i$: $a = a^i e_i$. The inner product of vectors a and b is
$a \cdot b = a^i b^j (e_i \cdot e_j) = a^i b_i = a_i b^i,$  (A.2)
where
$a_i = a^j g_{ij}.$  (A.3)
where Consider the simple r-vector ei1 ∧. . .∧eir . The number of transposition necessary to obtain eir ∧. . .∧ei1 is 1+2+3 . . .+(r−1) = 21 r(r−1). It follows that the reverse (defined in section 5) of any r-vector Ar is 1
A†r = (−1) 2 r(r−1) Ar .
(A.4)
The pseudoscalar of a frame $\{e_i\}$ is an n-vector e defined by
$e = e_1 \wedge e_2 \wedge \cdots \wedge e_n.$  (A.5)
The determinant g of the metric tensor is related to the pseudoscalar e by (3.17):
$e^\dagger e = e^\dagger \cdot e = (e_n \wedge \cdots \wedge e_1)\cdot(e_1 \wedge \cdots \wedge e_n) = \det g_{ij} = g.$  (A.6)
By (A.4),
$e^2 = (-1)^{\frac12 n(n-1)+s}\,|g|,$  (A.7)
where the signature s is the number of vectors in $V_n$ with negative square. In order to simplify algebraic manipulation with base vectors, it is convenient to define another set of base vectors $e^j$ related to the $e_i$ by the conditions
$e_i \cdot e^j = \delta_i^j,$  (A.8)
where $\delta_i^j$ is a Kronecker delta. The raised index on $e^j$ is strictly a mnemonic device to help us remember the relations (A.8) between the two bases, and is not meant to indicate that the $e^j$ are in any sense vectors in a “space” different from that containing the $e_i$. Each $e^j$ can be written as a linear combination of the $e_i$:
$e^j = g^{ji} e_i,$  (A.9)
from which it follows immediately that
$g^{ij} g_{jk} = \delta^i_k$  (A.10)
and
$g^{ij} = e^i \cdot e^j = \tfrac12(e^i e^j + e^j e^i).$  (A.11)
and We can construct the ej explicitly in terms of the ei in the following way: Define the pseudovector E j by E j = (−1)j+1 ei ∧ . . . ∧ ˆej ∧ . . . ∧ en ,
(A.12)
where the circumflex means omit ej . Observe that ei ∧ E j = δij e.
(A.13)
Also define the reciprocal pseudoscalar e−1 by e−1 = (−1)s
1 e. e2
(A.14)
Now we can define ej by ej ≡ E j e−1 .
(A.15)
The frame $\{e^j\}$ is often called dual to the frame $\{e_i\}$ because of the duality operation used in constructing it. With (A.15) we can verify (A.8):
$e_i \cdot e^j = (e_i \wedge E^j)e^{-1} = \delta_i^j\, ee^{-1} = \delta_i^j.$
We can also write
$e^{-1} = (-1)^s\, e^n \wedge \cdots \wedge e^2 \wedge e^1.$  (A.16)
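The reciprocal-frame construction is easy to check numerically. The sketch below (my own illustration, with an arbitrary sample frame in the Euclidean plane, n = 2, s = 0) builds the $e^j$ from the inverse metric tensor as in (A.9) and verifies (A.8):

```python
# Arbitrary sample frame vectors in the Euclidean plane.
e1 = (2.0, 0.0)
e2 = (1.0, 1.0)

def dot(a, b):
    return a[0] * b[0] + a[1] * b[1]

# Metric tensor g_ij = e_i . e_j and its inverse g^ij (2x2 case, by hand).
g11, g12, g22 = dot(e1, e1), dot(e1, e2), dot(e2, e2)
det = g11 * g22 - g12 * g12                        # = det g_ij, as in (A.6)
G11, G12, G22 = g22 / det, -g12 / det, g11 / det   # inverse components g^ij

# Reciprocal vectors e^j = g^ji e_i, as in (A.9).
eu1 = (G11 * e1[0] + G12 * e2[0], G11 * e1[1] + G12 * e2[1])
eu2 = (G12 * e1[0] + G22 * e2[0], G12 * e1[1] + G22 * e2[1])

# Verify e_i . e^j = delta_i^j, equation (A.8).
for i, ei in enumerate((e1, e2)):
    for j, ej in enumerate((eu1, eu2)):
        expect = 1.0 if i == j else 0.0
        assert abs(dot(ei, ej) - expect) < 1e-12
print("reciprocal frame verified, det g =", det)
```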
We now restrict ourselves to the Dirac algebra. The pseudoscalar for the frame $\{\gamma^\mu : \mu = 0, 1, 2, 3\}$ will be denoted by $\gamma^5$,
$\gamma^5 \equiv \gamma^{0123} = \gamma^0 \wedge \gamma^1 \wedge \gamma^2 \wedge \gamma^3.$  (A.17)
Similarly, the pseudoscalar for the dual frame is
$\gamma_5 \equiv \gamma_{0123}.$  (A.18)
All of the following formulas can be proved with the aid of formulas in section 3:
$\gamma^5 \gamma_5 = 1, \qquad \gamma^5 = \frac{\gamma_5}{(\gamma_5)^2},$  (A.19)
$\gamma^\mu \gamma_\mu = 4,$  (A.20)
$\gamma^\mu \gamma_5 \gamma_\mu = -4\gamma_5.$  (A.21)
For any vectors a, b, c,
$\gamma^\mu a \gamma_\mu = -2a,$  (A.22)
$\gamma^\mu ab \gamma_\mu = 4\,a \cdot b,$  (A.23)
$\gamma^\mu abc \gamma_\mu = -2\,cba.$  (A.24)
The last two formulas admit the following special cases:
$\gamma^\mu\, a \wedge b\, \gamma_\mu = 0,$  (A.25)
$\gamma^\mu\, a \wedge b \wedge c\, \gamma_\mu = 2\, a \wedge b \wedge c.$  (A.26)
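These contraction identities can be spot-checked in any faithful matrix representation of the Dirac algebra. The sketch below uses the standard Dirac matrices (a representation choice of mine, not anything assumed in the text) and arbitrary sample vector components:

```python
# Spot-check of the contraction identities (A.22), (A.23) and (A.25)
# using the standard (Dirac representation) gamma matrices.

def mul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(4)) for j in range(4)]
            for i in range(4)]

def add(A, B):
    return [[A[i][j] + B[i][j] for j in range(4)] for i in range(4)]

def smul(c, A):
    return [[c * A[i][j] for j in range(4)] for i in range(4)]

def zero():
    return [[0j] * 4 for _ in range(4)]

def close(A, B, eps=1e-9):
    return all(abs(A[i][j] - B[i][j]) < eps for i in range(4) for j in range(4))

# gamma_0 = diag(1,1,-1,-1); gamma_1..gamma_3 built from the Pauli matrices.
pauli = [[[0, 1], [1, 0]], [[0, -1j], [1j, 0]], [[1, 0], [0, -1]]]
gdn = [[[1, 0, 0, 0], [0, 1, 0, 0], [0, 0, -1, 0], [0, 0, 0, -1]]]
for sk in pauli:
    m = zero()
    for i in range(2):
        for j in range(2):
            m[i][2 + j] = sk[i][j]
            m[2 + i][j] = -sk[i][j]
    gdn.append(m)

eta = [1, -1, -1, -1]
gup = [smul(eta[mu], gdn[mu]) for mu in range(4)]   # gamma^mu

def vec(c):        # matrix representing the vector a = c^nu gamma_nu
    out = zero()
    for nu in range(4):
        out = add(out, smul(c[nu], gdn[nu]))
    return out

def contract(M):   # gamma^mu M gamma_mu, summed over mu
    out = zero()
    for mu in range(4):
        out = add(out, mul(gup[mu], mul(M, gdn[mu])))
    return out

# Arbitrary sample vector components.
a_c, b_c = [0.3, 1.2, -0.7, 0.5], [1.1, -0.4, 0.2, 0.9]
A, B = vec(a_c), vec(b_c)
ident = [[1 if i == j else 0 for j in range(4)] for i in range(4)]

assert close(contract(A), smul(-2, A))                      # (A.22)
adotb = sum(eta[nu] * a_c[nu] * b_c[nu] for nu in range(4))
assert close(contract(mul(A, B)), smul(4 * adotb, ident))   # (A.23)
wedge = smul(0.5, add(mul(A, B), smul(-1, mul(B, A))))      # a ^ b
assert close(contract(wedge), zero())                       # (A.25)
print("identities (A.22), (A.23), (A.25) verified")
```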
The completely antisymmetric unit $\varepsilon_{\mu\nu\alpha\beta}$ can be defined by
$\varepsilon_{\mu\nu\alpha\beta} \equiv \gamma^5 \gamma_{\mu\nu\alpha\beta}.$  (A.27)
Similarly,
$\varepsilon^{\mu\nu\alpha\beta} = \gamma_5 \gamma^{\mu\nu\alpha\beta}.$  (A.28)
It follows that
$\varepsilon_{0123} = 1,$  (A.29)
$\gamma_{\mu\nu\alpha\beta} = \gamma_5\, \varepsilon_{\mu\nu\alpha\beta},$  (A.30)
$\gamma_{\mu\nu\alpha\beta}\gamma^\beta = \gamma_{\mu\nu\alpha},$  (A.31)
$\gamma_{\mu\nu\alpha\beta}\gamma^{\alpha\beta} = 2\gamma_{\mu\nu},$  (A.32)
$\gamma_{\mu\nu\alpha\beta}\gamma^{\nu\alpha\beta} = 6\gamma_\mu.$  (A.33)
The magnitude of $\gamma_5$ can be expressed in terms of the metric tensor:
$g \equiv (\gamma_5)^2 = g_{0\mu} g_{1\nu} g_{2\alpha} g_{3\beta}\, \gamma^{\mu\nu\alpha\beta}\gamma_5 = g_{0\mu} g_{1\nu} g_{2\alpha} g_{3\beta}\, \varepsilon^{\mu\nu\alpha\beta} = \det g_{\mu\nu}.$  (A.34)
Thus
$\gamma_5 = \pm i\sqrt{-g}.$  (A.35)
The reader is invited to find formulas analogous to those above in the Pauli algebra. Bases in D and P can be related by
$\sigma_\mu \equiv \gamma_\mu \gamma_0.$  (A.36)
The pseudoscalar in space for the frame $\{\sigma_k\}$ is
$\sigma_{ijk} = \gamma_5\, \varepsilon_{ijk},$  (A.37)
where
$\varepsilon_{ijk} \equiv \varepsilon_{ijk0}.$  (A.38)
Of course, for a righthanded orthonormal frame,
$\sigma_{ijk} = i\varepsilon_{ijk}.$  (A.39)

B  Some Theorems
Theorem 1. If $\bar R = R$ and $R\tilde R = 1$, then
$R = \pm e^B,$  (B.1)
where B is a bivector. B can be chosen so that the sign in (B.1) is positive except in the special case when $B^2 = 0$ and $R = -e^B$.
Proof. Since $R = \bar R$, R is the sum of a scalar S, a bivector T and a pseudoscalar P:
$R = S + T + P, \qquad \tilde R = S - T + P.$
If $T^2 \neq 0$, we can express T as a linear combination of orthogonal simple bivectors $T_1$ and $T_2$:
$T = S_1 T_1 + S_2 T_2, \qquad T_1 \cdot T_2 = 0, \qquad T_1^2 > 0, \qquad T_2^2 < 0.$
The condition $R\tilde R = 1$ gives us two equations:
$S^2 - S_1^2 T_1^2 - S_2^2 T_2^2 + P^2 = 1, \qquad S_1 S_2 T_1 T_2 = SP.$
We can adjust the magnitudes of $T_1$ and $T_2$ so that $T_1 T_2 = P$; then $S_1 S_2 = S$. This enables us to factor R:
$R = (S_1 + T_2)(S_2 + T_1).$
So the condition $R\tilde R = 1$ becomes
$(S_1^2 - T_2^2)(S_2^2 - T_1^2) = 1.$
We can take $S_1^2 - T_2^2 = 1$, so, since $T_2^2 < 0$,
$S_1 + T_2 = e^{B_2} = \cos B_2 + \sin B_2.$
Similarly, since $T_1^2 > 0$,
$S_2 + T_1 = e^{B_1} = \cosh B_1 + \sinh B_1.$
Since $T_1 \cdot T_2 = 0$, $T_1$ commutes with $T_2$, and $B_1$ commutes with $B_2$. Therefore
$R = e^{B_1} e^{B_2} = e^{B_2} e^{B_1} = e^{B_1 + B_2} = e^B.$  (B.2)
This equation shows slightly more than (B.1), because B is expressed as the sum of orthogonal simple bivectors. To complete our proof, we must treat the special case $T^2 = 0$. The condition $R\tilde R = 1$ gives
$S^2 + P^2 = 1, \qquad SP = 0.$
Since $P^2 \leq 0$, we must have $P = 0$ and $S = \pm 1$. Consider the case $S = +1$. Then, since $T^2 = 0$,
$R = 1 + T = \sum_{n=0}^{\infty} \frac{1}{n!}\,T^n = e^T.$
This is in the form (B.1). If $S = -1$, then
$R = -e^{-T}.$  (B.3)
Theorem 2. If $U\tilde U = 1$ and $\bar U = U = U^*$, then
$U = e^{ib},$  (B.4)
where b is a simple timelike bivector.
Proof. From theorem 1 we have $U = \pm e^B$, so
$U^* = \gamma_0 U \gamma_0 = \pm e^{\gamma_0 B \gamma_0} = \pm e^{B^*} = U$
implies $B = B^* = \gamma_0 B \gamma_0$. Therefore $\gamma_0 \cdot B = \tfrac12(\gamma_0 B - B\gamma_0) = 0$, which means that B is orthogonal to $\gamma_0$ and so is simple and spacelike. We can write B as the dual of a simple timelike bivector b: $B = ib$. Since $-e^B = e^{\pi\hat B} e^B = e^{\pi\hat B + B} = e^{B'}$, where $\hat B = B/|B|$ and $\hat B^2 = -1$, we can always choose B so that $U = e^B$.
Theorem 3. If $H\tilde H = 1$ and $\bar H = H = H^\dagger$, then
$H = \pm e^a,$  (B.5)
where a is a simple timelike bivector.
Proof. From theorem 1 we have $H = \pm e^B$, so
$H^\dagger = \gamma_0 \tilde H \gamma_0 = \pm e^{-\gamma_0 B \gamma_0} = H$
implies $B = -B^* = -\gamma_0 B \gamma_0$. Therefore $\gamma_0 \wedge B = \tfrac12(\gamma_0 B + B\gamma_0) = 0$, which means that B is proportional to $\gamma_0$ and so is simple and timelike. Hence we can write $B = a$.
Theorem 4. If $R\tilde R = 1$ and $\bar R = R$, then a timelike unit vector $\gamma_0$ uniquely determines the decomposition
$R = HU,$  (B.6a)
where
$U = U^* = e^{ib}$  (B.6b)
and
$H = H^\dagger = \pm e^a.$  (B.6c)
Proof. Note that $A \equiv RR^\dagger$ satisfies $A\tilde A = 1$ and $\bar A = A = A^\dagger$, so by theorem 3 we can write $RR^\dagger = e^{2a}$. Define H by
$H = (RR^\dagger)^{\frac12} = \pm e^a.$
Note that $R^\dagger R^* = 1$, so $R = RR^\dagger R^* = H(HR^*)$. Therefore define U by $U = HR^*$. We verify that
$U\tilde U = HR^* R^\dagger \tilde H = H\tilde H = 1$
and
$U^* = H^* R = H^* H H R^* = HR^* = U.$
So, by theorem 2, we can write U in the form (B.6b).
Theorem 5. If $SS^\dagger = 1$, then
$S = e^B$  (B.7)
or
$S = ve^B,$  (B.8)
where v is a unit vector and B is a bivector for the five-dimensional Euclidean space $E_5$ spanned by vectors $e_m$, defined by
$e_i = \sigma_i = \gamma_i\gamma_0 \quad (i = 1, 2, 3), \qquad e_4 = \gamma_0, \qquad e_5 = e_{1234} = i\gamma_0,$  (B.9)
where $\{\gamma_\mu : \mu = 0, 1, 2, 3\}$ is a righthanded orthonormal basis of vectors in the Dirac algebra. Relative to the basis (B.9),
$B = B^{mn}\,\tfrac12[e_m, e_n], \qquad v = v^m e_m$  (B.10)
$(m, n = 1, 2, 3, 4, 5)$. The Clifford algebra of a 4-dimensional Euclidean space $E_4$ is isomorphic to the Clifford algebra of a 4-dimensional Minkowski space (we call the latter the Dirac algebra). The two algebras differ only in geometric interpretation, i.e., in what elements are called vectors, bivectors, etc. An isomorphism between the two algebras is given by (B.9). If we take $e_1, e_2, e_3, e_4$ as an orthonormal basis for $E_4$, then $e_5$ can be identified as the unit pseudoscalar. We can identify the transformation
$u \to u' = SuS^\dagger$  (B.11)
as a rotation of a vector u in $E_5$ if S is given by (B.7) and as a reflection if S is given by (B.8). The reflections can be distinguished from rotations in $E_5$ by the method used in section 7. We can prove that for rotations S has the form (B.7) in the same way that we proved theorem 1 of this appendix. In fact, the proof here is somewhat simpler because the scalar part of $B^2$ is negative definite. We also find, just as in theorem 1, that B can be written as the sum of two orthogonal simple bivectors. This represents the fact that the rotation group in five dimensions is a group of rank 2.
Theorem 6. If a and b are timelike vectors and $a \cdot b > 0$, then, for any d-number A,
$(aAb\tilde A)_S \geq 0,$  (B.12)
and equality holds only if A vanishes.
Proof. We lose no generality by taking a and b to be unit vectors. Write $a = \gamma_0$. Since, as we saw in section 18, b can be obtained from $\gamma_0$ by a timelike rotation, we can write $b = R\gamma_0\tilde R$. Now define a new d-number $B = AR$, so
$(aAb\tilde A)_S = (\gamma_0 B \gamma_0 \tilde B)_S = (BB^\dagger)_S.$  (B.13)
By continuing our discussion following theorem 5, we identify (B.13) as a Euclidean norm, and we recall that we proved this was positive definite in section 5.
Theorem 7. If $S\tilde S = 1$, then
$S = \pm e^{B+A}$  (B.14)
or
$S = (V + P)e^{B+A},$  (B.15)
where B is a bivector, A is a pseudovector, V is a vector, P is a pseudoscalar and $(V + P)^2 = 1$. If u is an element of the 5-dimensional linear space $V_5$ consisting of the vectors and pseudoscalars, we can identify the transformation
$u \to u' = Su\tilde S$  (B.16)
as a rotation if S is given by (B.14) and as a reflection if S is given by (B.15). Theorem 7 can be proved in the same way as theorem 5, with due account taken of the metric $(+,-,-,-,-)$ on $V_5$.
C  Composition of Spatial Rotations
Recall from section 16 that a spatial rotation of a vector $\mathbf{p}$ can be written
$\mathbf{p} \to \mathbf{p}' = e^{-\frac12 i\mathbf{b}}\,\mathbf{p}\,e^{\frac12 i\mathbf{b}}.$  (C.1)
This is a righthanded rotation through an angle $b = |\mathbf{b}|$ about the vector $\mathbf{b}$. The exponential can be written as a sum of scalar and bivector parts,
$e^{\frac12 i\mathbf{b}} = \cos\tfrac12\mathbf{b} + i\sin\tfrac12\mathbf{b}.$  (C.2)
The series expansion for the exponential clearly shows
$\cos\tfrac12\mathbf{b} = \cos\tfrac12 b,$  (C.3a)
$\sin\tfrac12\mathbf{b} = \hat{\mathbf{b}}\sin\tfrac12 b,$  (C.3b)
where $\hat{\mathbf{b}}$ is the unit vector in the direction of $\mathbf{b}$. The composition of rotations in Euclidean 3-space is reduced by equation (C.1) to a problem in composition of exponentials, which is fairly simple. Suppose the rotation (C.1) is followed by the rotation
$\mathbf{p}' \to \mathbf{p}'' = e^{-\frac12 i\mathbf{a}}\,\mathbf{p}'\,e^{\frac12 i\mathbf{a}}.$  (C.4)
Then the rotation of $\mathbf{p}$ into $\mathbf{p}''$ is given by
$\mathbf{p} \to \mathbf{p}'' = e^{-\frac12 i\mathbf{c}}\,\mathbf{p}\,e^{\frac12 i\mathbf{c}} = e^{-\frac12 i\mathbf{a}} e^{-\frac12 i\mathbf{b}}\,\mathbf{p}\,e^{\frac12 i\mathbf{b}} e^{\frac12 i\mathbf{a}}.$  (C.5)
From
$e^{\frac12 i\mathbf{c}} = e^{\frac12 i\mathbf{b}} e^{\frac12 i\mathbf{a}}$  (C.6)
we can find $\mathbf{c}$ in terms of $\mathbf{b}$ and $\mathbf{a}$. The scalar part of (C.6) is
$\cos\tfrac12\mathbf{c} = \cos\tfrac12\mathbf{a}\cos\tfrac12\mathbf{b} - (\sin\tfrac12\mathbf{a})\cdot(\sin\tfrac12\mathbf{b}).$  (C.7)
From the bivector part of (C.6) we get
$\sin\tfrac12\mathbf{c} = \cos\tfrac12\mathbf{a}\sin\tfrac12\mathbf{b} + \cos\tfrac12\mathbf{b}\sin\tfrac12\mathbf{a} + (\sin\tfrac12\mathbf{a})\times(\sin\tfrac12\mathbf{b}).$  (C.8)
Or, in more explicit form, (C.7) and (C.8) become
$\cos\tfrac12 c = \cos\tfrac12 a\cos\tfrac12 b - \hat{\mathbf{a}}\cdot\hat{\mathbf{b}}\,\sin\tfrac12 a\sin\tfrac12 b,$  (C.7′)
$\hat{\mathbf{c}}\sin\tfrac12 c = \hat{\mathbf{b}}\cos\tfrac12 a\sin\tfrac12 b + \hat{\mathbf{a}}\cos\tfrac12 b\sin\tfrac12 a + \hat{\mathbf{a}}\times\hat{\mathbf{b}}\,\sin\tfrac12 a\sin\tfrac12 b.$  (C.8′)
The ratio of (C.8) to (C.7) gives the law of tangents
$\tan\tfrac12\mathbf{c} = \frac{\tan\tfrac12\mathbf{a} + \tan\tfrac12\mathbf{b} + (\tan\tfrac12\mathbf{a})\times(\tan\tfrac12\mathbf{b})}{1 - (\tan\tfrac12\mathbf{a})\cdot(\tan\tfrac12\mathbf{b})}.$  (C.9)
When $\mathbf{a}$ and $\mathbf{b}$ are collinear these formulas reduce to the familiar trigonometric formulas for the addition of angles in a plane. From these the full angle formulas can be obtained, but the half-angle formulas are simpler and adequate in most problems.
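The half-angle composition law is easy to verify numerically. In the sketch below (my own illustration) a rotor is modelled as the pair $(\cos\tfrac12 b,\ \hat{\mathbf{b}}\sin\tfrac12 b)$, so that (C.7) and (C.8) give the composite rotor and the law of tangents (C.9) can be checked against it; the axes and angles are arbitrary sample choices:

```python
import math

def rotor(axis, angle):
    # The pair (cos(b/2), b_hat sin(b/2)) standing for exp(i b / 2).
    n = math.sqrt(sum(x * x for x in axis))
    half = 0.5 * angle
    return (math.cos(half), tuple(math.sin(half) * x / n for x in axis))

def cross(u, v):
    return (u[1]*v[2] - u[2]*v[1], u[2]*v[0] - u[0]*v[2], u[0]*v[1] - u[1]*v[0])

def dot(u, v):
    return sum(x * y for x, y in zip(u, v))

def compose(Rb, Ra):
    # exp(ic/2) = exp(ib/2) exp(ia/2): scalar part (C.7), bivector part (C.8).
    (cb, sb), (ca, sa) = Rb, Ra
    c = cb * ca - dot(sa, sb)
    x = cross(sa, sb)
    s = tuple(ca * sb[k] + cb * sa[k] + x[k] for k in range(3))
    return (c, s)

# Two sample 90-degree rotations about z and then x.
Ra = rotor((0.0, 0.0, 1.0), math.pi / 2)
Rb = rotor((1.0, 0.0, 0.0), math.pi / 2)
Rc = compose(Rb, Ra)

# The composite is still a unit rotor: cos^2 + |sin|^2 = 1.
assert abs(Rc[0]**2 + dot(Rc[1], Rc[1]) - 1.0) < 1e-12

# The law of tangents (C.9) must give the same composite rotation.
ta = tuple(x / Ra[0] for x in Ra[1])          # tan(a/2) a_hat
tb = tuple(x / Rb[0] for x in Rb[1])          # tan(b/2) b_hat
x = cross(ta, tb)
num = tuple(ta[k] + tb[k] + x[k] for k in range(3))
den = 1.0 - dot(ta, tb)
tc = tuple(x / Rc[0] for x in Rc[1])          # tan(c/2) c_hat from (C.7), (C.8)
for k in range(3):
    assert abs(tc[k] - num[k] / den) < 1e-12
print("law of tangents (C.9) verified")
```

For these two sample rotations the composite turns out to be a rotation through 120 degrees, the familiar result for two perpendicular quarter-turns.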
D  Matrix Representation of the Pauli Algebra
In this appendix we give the correspondence between operations in the Pauli algebra and operations in its matrix representation. As we saw in section 12, a p-number φ can be represented by a two-by-two matrix Φ over the field of the complex numbers,
$\phi = \phi^0 + \phi^i\sigma_i = \phi_{11} u_1 + \phi_{12} v_1 + \phi_{21} u_2 + \phi_{22} v_2,$  (D.1)
$\Phi = \begin{pmatrix} \phi_{11} & \phi_{12} \\ \phi_{21} & \phi_{22} \end{pmatrix}.$  (D.2)
The matrix elements of Φ can be looked upon as matrix elements of φ,
$\phi_{ab} = (u_b^\dagger \phi u_a)_I = (v_b^\dagger \phi v_a)_I,$  (D.3)
where the subscript I means invariant part, i.e., scalar plus pseudoscalar part. Using (12.7) and (12.5) we can find the matrix representations of the base vectors $\sigma_i$:
$\sigma_1 = \begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix}, \qquad \sigma_2 = \begin{pmatrix} 0 & -i \\ i & 0 \end{pmatrix}, \qquad \sigma_3 = \begin{pmatrix} 1 & 0 \\ 0 & -1 \end{pmatrix}.$  (D.4)
Let $\Phi^T$ be the transpose of Φ, let $\Phi^\times$ be the complex conjugate of Φ, and let $\Phi^\dagger$ be the hermitian conjugate of Φ. The following correspondences are easily established:
$\phi^* = \phi_0^* - \phi_i^*\sigma^i \sim \sigma_2\Phi^\times\sigma_2,$  (D.5)
$\phi^\dagger = \phi_0^* + \phi_i^*\sigma^i \sim \Phi^\dagger,$  (D.6)
$\tilde\phi = \phi_0 - \phi_i\sigma^i \sim \sigma_2\Phi^T\sigma_2,$  (D.7)
$\phi\tilde\phi = \tilde\phi\phi \sim \det\Phi,$  (D.8)
$\phi_I = \phi_0 \sim \tfrac12\,\mathrm{Tr}\,\Phi.$  (D.9)
For the two-component spinors we have the correspondences:
$u_1 \text{ or } v_1 \sim \begin{pmatrix} 1 \\ 0 \end{pmatrix},$  (D.10)
$u_2 \text{ or } v_2 \sim \begin{pmatrix} 0 \\ 1 \end{pmatrix}.$  (D.11)
The operations of complex conjugation and transpose, while of interest in matrix algebra, are only of minor interest in Clifford algebra, for these operations are defined and meaningful only relative to a particular basis in the algebra. On the other hand, the operations of inversion and hermitian conjugation are independent of basis; for this reason they may be considered to be more fundamental than the operations of complex conjugation and transpose.
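The correspondences (D.7) and (D.8) can be verified directly with two-by-two complex matrices. The sketch below (my own illustration, with arbitrary sample component values) checks that $\sigma_2\Phi^T\sigma_2$ represents the reverse $\tilde\phi = \phi_0 - \phi_i\sigma^i$ and that $\det\Phi$ represents $\phi\tilde\phi$:

```python
def mul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def transpose(A):
    return [[A[j][i] for j in range(2)] for i in range(2)]

# Pauli matrices (D.4) and the identity.
s1 = [[0, 1], [1, 0]]
s2 = [[0, -1j], [1j, 0]]
s3 = [[1, 0], [0, -1]]
I  = [[1, 0], [0, 1]]

# Sample p-number phi = phi0 + phi_i sigma_i (arbitrary real components).
phi0, phi = 0.8, (0.5, -1.2, 0.3)
Phi = [[phi0 * I[i][j] + phi[0] * s1[i][j] + phi[1] * s2[i][j]
        + phi[2] * s3[i][j] for j in range(2)] for i in range(2)]
Rev = [[phi0 * I[i][j] - phi[0] * s1[i][j] - phi[1] * s2[i][j]
        - phi[2] * s3[i][j] for j in range(2)] for i in range(2)]

# (D.7): the reverse phi~ corresponds to sigma2 Phi^T sigma2.
lhs = mul(s2, mul(transpose(Phi), s2))
assert all(abs(lhs[i][j] - Rev[i][j]) < 1e-12
           for i in range(2) for j in range(2))

# (D.8): phi phi~ corresponds to det Phi, the scalar phi0^2 - phi.phi.
det = Phi[0][0] * Phi[1][1] - Phi[0][1] * Phi[1][0]
assert abs(det - (phi0**2 - sum(x * x for x in phi))) < 1e-12
print("correspondences (D.7) and (D.8) verified")
```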
Bibliography

[1] J. W. Gibbs, The Scientific Papers of J. Willard Gibbs, Dover, New York (1961).
[2] E. B. Wilson, Vector Analysis, Dover, New York (1961).
[3] A. Wills, Vector Analysis, Dover, New York (1958).
[4] H. Grassmann, Die Ausdehnungslehre von 1862.
[5] W. K. Clifford, Amer. Jour. of Math. 1, 350 (1878).
[6] E. R. Caianiello, Nuovo Cimento 14 (Suppl.), 177 (1959).
[7] J. M. Jauch and F. Rohrlich, The Theory of Photons and Electrons, Addison-Wesley, Cambridge (1955).
[8] M. Riesz, Dixième Congrès des Mathématiciens Scandinaves, Copenhague, 1946, p. 123.
[9] M. Riesz, Clifford Numbers and Spinors, Lecture Series No. 38, The Institute for Fluid Dynamics and Applied Mathematics, University of Maryland (1958).
[10] F. Gürsey, Rev. Fac. Sci. Univ. Istanbul A20, 149 (1956).
[11] O. Heaviside, Electromagnetic Theory, Dover, New York (1950).
[12] L. Silberstein, Theory of Relativity, Macmillan, London (1914).
[13] C. Misner and J. Wheeler, Ann. Phys. 2, 525 (1957).
[14] L. M. Brown, Lectures in Theoretical Physics, Interscience, New York, IV, 324 (1962).
[15] F. Gürsey, Rev. Fac. Sci. Univ. Istanbul A21, 33 (1956).
[16] F. Gürsey, Nuovo Cimento 7, 411 (1958).
[17] T. D. Lee and C. N. Yang, Nuovo Cimento 3, 749 (1956).
[18] H. Green, Nuclear Phys. 7, 373 (1958).
[19] V. Fock, Space, Time and Gravitation, Pergamon Press, New York (1958).
[20] H. Flanders, Differential Forms, Academic Press, New York (1963).
[21] A. Einstein and others, The Principle of Relativity, Dover, New York (1923), p. 115.
[22] C. N. Yang and R. L. Mills, Phys. Rev. 96, 191 (1954).
[23] R. Utiyama, Phys. Rev. 101, 1597 (1956).
[24] J. J. Sakurai, Annals of Physics 11, 1 (1960).