Tomography
Edited by Pierre Grangeat
First published in France in 2002 by Hermes Science/Lavoisier, entitled La tomographie and La tomographie médicale © LAVOISIER, 2002
First published in Great Britain and the United States in 2009 by ISTE Ltd and John Wiley & Sons, Inc.

Apart from any fair dealing for the purposes of research or private study, or criticism or review, as permitted under the Copyright, Designs and Patents Act 1988, this publication may only be reproduced, stored or transmitted, in any form or by any means, with the prior permission in writing of the publishers, or in the case of reprographic reproduction in accordance with the terms and licenses issued by the CLA. Enquiries concerning reproduction outside these terms should be sent to the publishers at the undermentioned address:

ISTE Ltd
27-37 St George’s Road
London SW19 4EU
UK
John Wiley & Sons, Inc.
111 River Street
Hoboken, NJ 07030
USA
www.iste.co.uk
www.wiley.com
© ISTE Ltd, 2009

The rights of Pierre Grangeat to be identified as the author of this work have been asserted by him in accordance with the Copyright, Designs and Patents Act 1988.

Library of Congress Cataloging-in-Publication Data
Tomographie. English.
Tomography / edited by Pierre Grangeat.
p. cm.
Includes bibliographical references and index.
Translation of: La tomographie and La tomographie médicale, originally published by Hermes Science/Lavoisier, 2002.
ISBN 978-1-84821-099-8
1. Tomography. I. Grangeat, Pierre. II. Tomographie médicale. English. III. Title.
[DNLM: 1. Tomography. WN 206 T661 2009a]
RC78.7.T6T65513 2009
616.07'57--dc22
2009017099

British Library Cataloguing-in-Publication Data
A CIP record for this book is available from the British Library
ISBN: 978-1-84821-099-8

Printed and bound in Great Britain by CPI Antony Rowe, Chippenham and Eastbourne.
Table of Contents
Preface
Notation

Chapter 1. Introduction to Tomography (Pierre GRANGEAT)
  1.1. Introduction
  1.2. Observing contrasts
  1.3. Localization in space and time
  1.4. Image reconstruction
  1.5. Application domains
  1.6. Bibliography

PART 1. IMAGE RECONSTRUCTION

Chapter 2. Analytical Methods (Michel DEFRISE and Pierre GRANGEAT)
  2.1. Introduction
  2.2. 2D Radon transform in parallel-beam geometry
    2.2.1. Definition and concept of sinogram
    2.2.2. Fourier slice theorem and data sufficiency condition
    2.2.3. Inversion by filtered backprojection
    2.2.4. Choice of filter
    2.2.5. Frequency–distance principle
  2.3. 2D Radon transform in fan-beam geometry
    2.3.1. Definition
    2.3.2. Rebinning to parallel data
    2.3.3. Reconstruction by filtered backprojection
    2.3.4. Fast acquisitions
    2.3.5. 3D helical tomography in fan-beam geometry with a single line detector
  2.4. 3D X-ray transform in parallel-beam geometry
    2.4.1. Definition
    2.4.2. Fourier slice theorem and data sufficiency conditions
    2.4.3. Inversion by filtered backprojection
  2.5. 3D Radon transform
    2.5.1. Definition
    2.5.2. Fourier slice theorem
    2.5.3. Inversion by filtered backprojection
  2.6. 3D positron emission tomography
    2.6.1. Definitions
    2.6.2. Approximate reconstruction by rebinning to transverse slices
    2.6.3. Direct reconstruction by filtered backprojection
  2.7. X-ray tomography in cone-beam geometry
    2.7.1. Definition
    2.7.2. Connection to the derivative of the 3D Radon transform and data sufficiency condition
    2.7.3. Approximate inversion by rebinning to transverse slices
    2.7.4. Approximate inversion by filtered backprojection
    2.7.5. Inversion by rebinning in Radon space
    2.7.6. Katsevich algorithm for helical cone-beam reconstruction
  2.8. Dynamic tomography
    2.8.1. Definition
    2.8.2. 2D dynamic Radon transform
    2.8.3. Dynamic X-ray transform in divergent geometry
    2.8.4. Inversion
  2.9. Bibliography

Chapter 3. Sampling Conditions in Tomography (Laurent DESBAT and Catherine MENNESSIER)
  3.1. Sampling of functions in ℝⁿ
    3.1.1. Periodic functions, integrable functions, Fourier transforms
    3.1.2. Poisson summation formula and sampling of bandlimited functions
    3.1.3. Sampling of essentially bandlimited functions
    3.1.4. Efficient sampling
    3.1.5. Generalization to periodic functions in their first variables
  3.2. Sampling of the 2D Radon transform
    3.2.1. Essential support of the 2D Radon transform
    3.2.2. Sampling conditions and efficient sampling
    3.2.3. Generalizations
      3.2.3.1. Vector tomography
      3.2.3.2. Generalized, rotation invariant Radon transform
      3.2.3.3. Exponential and attenuated Radon transform
  3.3. Sampling in 3D tomography
    3.3.1. Introduction
    3.3.2. Sampling of the X-ray transform
    3.3.3. Numerical results on the sampling of the cone-beam transform
  3.4. Bibliography

Chapter 4. Discrete Methods (Habib BENALI and Françoise PEYRIN)
  4.1. Introduction
  4.2. Discrete models
  4.3. Algebraic methods
    4.3.1. Case of overdetermination by the data
      4.3.1.1. Algebraic methods based on quadratic minimization
      4.3.1.2. Algorithms
    4.3.2. Case of underdetermination by the data
      4.3.2.1. Algebraic methods based on constraint optimization
      4.3.2.2. Algorithms
  4.4. Statistical methods
    4.4.1. Case of overdetermination by the data
      4.4.1.1. Bayesian statistical methods
      4.4.1.2. Regularization
      4.4.1.3. Markov fields and Gibbs distributions
      4.4.1.4. Potential function
      4.4.1.5. Choice of hyperparameters
    4.4.2. MAP reconstruction algorithms
      4.4.2.1. ML-EM algorithm
      4.4.2.2. MAP-EM algorithm
      4.4.2.3. Semi-quadratic regularization
      4.4.2.4. Regularization algorithm ARTUR
      4.4.2.5. Regularization algorithm MOISE
    4.4.3. Case of underdetermination by the data
      4.4.3.1. MEM method
      4.4.3.2. A priori distributions
  4.5. Example of tomographic reconstruction
  4.6. Discussion and conclusion
  4.7. Bibliography

PART 2. MICROTOMOGRAPHY

Chapter 5. Tomographic Microscopy (Yves USSON and Catherine SOUCHIER)
  5.1. Introduction
  5.2. Projection tomography in electron microscopy
  5.3. Tomography by optical sectioning
    5.3.1. Confocal laser scanning microscopy (CLSM)
      5.3.1.1. Principle of confocal microscopy
      5.3.1.2. Image formation
      5.3.1.3. Optical sectioning
      5.3.1.4. Fluorochromes employed in confocal microscopy
  5.4. 3D data processing, reconstruction and analysis
    5.4.1. Fluorograms
    5.4.2. Restoration
      5.4.2.1. Denoising
      5.4.2.2. Absorption compensation
      5.4.2.3. Numerical deconvolution
    5.4.3. Microscopy by multiphoton absorption
  5.5. Bibliography

Chapter 6. Optical Tomography (Christian DEPEURSINGE)
  6.1. Introduction
  6.2. Interaction of light with matter
    6.2.1. Absorption
    6.2.2. Fluorescence
    6.2.3. Scattering
      6.2.3.1. Inelastic scattering
      6.2.3.2. Elastic scattering
  6.3. Propagation of photons in diffuse media
    6.3.1. Coherent propagation
    6.3.2. Mixed coherent/incoherent propagation
    6.3.3. Incoherent propagation
    6.3.4. Radiative transfer theory
  6.4. Optical tomography methods
    6.4.1. Optical coherence tomography
      6.4.1.1. Diffraction tomography
      6.4.1.2. Born approximation
      6.4.1.3. Principle of the reconstruction of the object in diffraction tomography
      6.4.1.4. Data acquisition
      6.4.1.5. Optical coherence tomography
      6.4.1.6. Role of the coherence length
      6.4.1.7. The principle of coherent detection
      6.4.1.8. Lateral scanning
      6.4.1.9. Coherence tomography in the temporal domain
      6.4.1.10. Coherence tomography in the spectral domain
  6.5. Optical tomography in highly diffuse media
    6.5.1. Direct model, inverse model
    6.5.2. Direct model
      6.5.2.1. Fluence
      6.5.2.2. Radiance
    6.5.3. Inverse model
    6.5.4. Iterative methods
    6.5.5. Perturbation method
    6.5.6. Reconstruction by inverse projection
    6.5.7. Temporal domain
    6.5.8. Frequency domain
  6.6. Bibliography

Chapter 7. Synchrotron Tomography (Anne-Marie CHARVET and Françoise PEYRIN)
  7.1. Introduction
  7.2. Synchrotron radiation
    7.2.1. Physical principles
    7.2.2. Advantages of synchrotron radiation for tomography
  7.3. Quantitative tomography
    7.3.1. State of the art
    7.3.2. ESRF system and applications
  7.4. Microtomography using synchrotron radiation
    7.4.1. State of the art
    7.4.2. ESRF system and applications
  7.5. Extensions
    7.5.1. Phase contrast and holographic tomography
    7.5.2. Tomography by refraction, diffraction, diffusion and fluorescence
  7.6. Conclusion
  7.7. Bibliography

PART 3. INDUSTRIAL TOMOGRAPHY

Chapter 8. X-ray Tomography in Industrial Non-destructive Testing (Gilles PEIX, Philippe DUVAUCHELLE and Jean-Michel LETANG)
  8.1. Introduction
  8.2. Physics of the measurement
  8.3. Sources of radiation
  8.4. Detection
  8.5. Reconstruction algorithms and artifacts
    8.5.1. Analytical methods
    8.5.2. Algebraic methods
    8.5.3. Reconstruction artifacts
  8.6. Applications
    8.6.1. Tomodensitometry
    8.6.2. Expert’s report
    8.6.3. High-energy tomography
    8.6.4. CAD models in tomography: reverse engineering and simulation
      8.6.4.1. Reverse engineering
      8.6.4.2. Simulation
    8.6.5. Microtomography
    8.6.6. Process tomography
    8.6.7. Dual-energy tomography
    8.6.8. Tomosynthesis
    8.6.9. Scattering tomography
  8.7. Conclusion
  8.8. Bibliography

Chapter 9. Industrial Applications of Emission Tomography for Flow Visualization (Samuel LEGOUPIL and Ghislain PASCAL)
  9.1. Industrial applications of emission tomography
    9.1.1. Context and objectives
    9.1.2. Non-nuclear techniques
    9.1.3. Nuclear techniques
  9.2. Examples of applications
    9.2.1. Two-dimensional flow
    9.2.2. Flow in a pipe – analysis of a junction
  9.3. Physical model of data acquisition
    9.3.1. Photon transport
    9.3.2. Principle of the Monte Carlo simulation
    9.3.3. Calculation of projection matrices
      9.3.3.1. Estimation of projection profiles
      9.3.3.2. Estimation of the projection matrix
  9.4. Definition and characterization of a system
    9.4.1. Characteristic system parameters
    9.4.2. Characterization of images reconstructed with the EM algorithm
      9.4.2.1. Underdetermined systems
  9.5. Conclusion
  9.6. Bibliography

PART 4. MORPHOLOGICAL MEDICAL TOMOGRAPHY

Chapter 10. Computed Tomography (Jean-Louis AMANS and Gilbert FERRETTI)
  10.1. Introduction
    10.1.1. Definition
    10.1.2. Evolution of CT
    10.1.3. Scanners with continuous rotation
    10.1.4. Multislice scanners
    10.1.5. Medical applications of CT
  10.2. Physics of helical tomography
    10.2.1. Projection acquisition system
      10.2.1.1. Mechanics for continuous rotation
      10.2.1.2. X-ray tubes with large thermal capacity
      10.2.1.3. X-ray detectors with high dynamics, efficiency, and speed
      10.2.1.4. Helical acquisitions
    10.2.2. Reconstruction algorithms
      10.2.2.1. Hypotheses
      10.2.2.2. Interpolation algorithms
      10.2.2.3. Slice spacing
    10.2.3. Image quality
      10.2.3.1. Axial spatial resolution
      10.2.3.2. Longitudinal spatial resolution
      10.2.3.3. Image noise
      10.2.3.4. Artifacts
    10.2.4. Dose
  10.3. Applications of volume CT
    10.3.1. Role of visualization of axial slices
    10.3.2. Role of 2D and 3D postprocessing
    10.3.3. Abdominal applications
    10.3.4. Thoracic applications
    10.3.5. Vascular applications
    10.3.6. Cardiac applications
    10.3.7. Polytrauma
    10.3.8. Pediatric applications
  10.4. Conclusion
  10.5. Bibliography

Chapter 11. Interventional X-ray Volume Tomography (Michael GRASS, Régis GUILLEMAUD and Volker RASCHE)
  11.1. Introduction
    11.1.1. Definition
    11.1.2. Acquisition systems
    11.1.3. Positioning with respect to computed tomography
  11.2. Example of 3D angiography
    11.2.1. Principles of projection acquisition
    11.2.2. Data processing
      11.2.2.1. Calibration
      11.2.2.2. Reconstruction algorithm
    11.2.3. Other reconstruction methods in 3D radiology
    11.2.4. Visualization
  11.3. Clinical examples
    11.3.1. High contrast applications
    11.3.2. Soft tissue contrast applications
    11.3.3. Cardiac applications
  11.4. Conclusion
  11.5. Bibliography

Chapter 12. Magnetic Resonance Imaging (André BRIGUET and Didier REVEL)
  12.1. Introduction
  12.2. Nuclear paramagnetism and its measurement
    12.2.1. Nuclear magnetization
    12.2.2. Nuclear magnetic relaxation and Larmor precession
    12.2.3. NMR signal formation
    12.2.4. Instrumentation
  12.3. Spatial encoding of the signal and image reconstruction
    12.3.1. Information content of the signal’s phase
    12.3.2. Signal sampling along k-space trajectories and use of a 2D model
      12.3.2.1. Cartesian k-space sampling
      12.3.2.2. Non-Cartesian k-space sampling
  12.4. Contrast factors and examples of applications
    12.4.1. Density of nuclei and magnetization
    12.4.2. Relaxation times and discrimination between soft tissues
    12.4.3. Contrast agents in MRI
    12.4.4. Relevance of magnetization transfer techniques
    12.4.5. Flow of matter
    12.4.6. Diffusion and perfusion effects
  12.5. Tomography or volumetry?
  12.6. Bibliography

PART 5. FUNCTIONAL MEDICAL TOMOGRAPHY

Chapter 13. Single Photon Emission Computed Tomography (Irène BUVAT, Jacques DARCOURT and Philippe FRANKEN)
  13.1. Introduction
    13.1.1. Definition
    13.1.2. Functional versus anatomical imaging
  13.2. Radiopharmaceuticals
    13.2.1. Vectors
    13.2.2. Radioactive gamma markers
  13.3. Detector
    13.3.1. General principle of the gamma camera
    13.3.2. Special features of single photon detection: collimator and spectrometry
    13.3.3. Principle characteristics of the gamma camera
    13.3.4. Principle of projection acquisition
    13.3.5. Transmission measurement system
  13.4. Image reconstruction
    13.4.1. Hypotheses
    13.4.2. Reconstruction algorithms
      13.4.2.1. Tomographic reconstruction problem in SPECT
      13.4.2.2. Analytical reconstruction in SPECT
      13.4.2.3. Algebraic reconstruction in SPECT
    13.4.3. Specific problems of single photon detection
      13.4.3.1. Counting noise
      13.4.3.2. Attenuation
      13.4.3.3. Scatter
      13.4.3.4. Variation of the spatial resolution with depth
  13.5. Example of myocardial SPECT
    13.5.1. Indications
    13.5.2. Radiopharmaceuticals, injection and acquisition protocols
    13.5.3. Reconstruction and interpretation criteria
    13.5.4. Importance of the accuracy of the projection model
    13.5.5. Examples
  13.6. Conclusion
  13.7. Bibliography

Chapter 14. Positron Emission Tomography (Michel DEFRISE and Régine TRÉBOSSEN)
  14.1. Introduction
    14.1.1. Definition
    14.1.2. PET versus other functional imaging techniques
    14.1.3. Functional versus anatomical imaging
  14.2. Data acquisition
    14.2.1. Radiopharmaceuticals
      14.2.1.1. Vectors
      14.2.1.2. Positron emitting markers
    14.2.2. Physical principle of PET
      14.2.2.1. Positron annihilation
      14.2.2.2. Principle of coincidence detection
      14.2.2.3. Type of detected coincidences
    14.2.3. Detection systems employed in PET
      14.2.3.1. Detectors
      14.2.3.2. Detector arrangement
    14.2.4. Physical characteristics of scanners
    14.2.5. Acquisition modes
      14.2.5.1. Two-dimensional (2D) acquisition
      14.2.5.2. Three-dimensional (3D) acquisition
      14.2.5.3. 3D data organization for a multiline scanner
      14.2.5.4. LOR and list mode acquisition
      14.2.5.5. Acquisition with time-of-flight measurement
      14.2.5.6. Transmission scan acquisition
  14.3. Data processing
    14.3.1. Data correction
    14.3.2. Reconstruction of corrected data
    14.3.3. Dynamic studies
      14.3.3.1. Measurement of glucose metabolism
      14.3.3.2. Model
      14.3.3.3. Acquisition protocol
  14.4. Research and clinical applications of PET
    14.4.1. Clinical applications of PET: whole-body measurements of glucose metabolism in oncology
      14.4.1.1. Physiological basis
      14.4.1.2. Acquisition and reconstruction protocol
      14.4.1.3. Image interpretation: sensitivity and specificity of this medical imaging modality
    14.4.2. Clinical research applications: study of dopaminergic transmission
      14.4.2.1. Dopaminergic transmission system
      14.4.2.2. Dopamine synthesis
      14.4.2.3. Tracers
      14.4.2.4. Example of use: study of neurodegenerative diseases that affect the basal ganglia
  14.5. Conclusion
  14.6. Bibliography

Chapter 15. Functional Magnetic Resonance Imaging (Christoph SEGEBARTH and Michel DÉCORPS)
  15.1. Introduction
  15.2. Functional MRI of cerebrovascular responses
  15.3. fMRI of BOLD contrasts
    15.3.1. Biophysical model
    15.3.2. “Static” conditions
    15.3.3. “Intermediate” conditions
    15.3.4. “Motional narrowing” conditions
    15.3.5. In practice
  15.4. Different protocols
    15.4.1. Block paradigms
    15.4.2. Event-related paradigms
    15.4.3. Fourier paradigms
  15.5. Bibliography

Chapter 16. Tomography of Electrical Cerebral Activity in Magneto- and Electro-encephalography (Line GARNERO)
  16.1. Introduction
  16.2. Principles of MEG and EEG
    16.2.1. Sources of MEG and EEG
    16.2.2. Evolution of EEG
    16.2.3. MEG instrumentation
    16.2.4. Applications
  16.3. Imaging of electrical activity of the brain based on MEG and EEG signals
    16.3.1. Difficulties of reconstruction
    16.3.2. Direct problem and different field calculation methods
    16.3.3. Inverse problem
      16.3.3.1. Parametric or “dipolar” methods
      16.3.3.2. Tomographic or “distributed” methods
  16.4. Conclusion
  16.5. Bibliography

List of Authors
Index
Preface
Since visible light is reflected by most of the objects around us, our perception of the environment is mainly determined by the properties of their surfaces. To lift this restriction and explore their interior, we have to develop dedicated instruments that rely on penetrating radiation, such as X- and γ-rays, and certain electromagnetic and acoustic waves. Tomography constitutes the culmination of this endeavor. By combining a set of measurements and performing a reconstruction, it provides a map of a characteristic parameter of the employed radiation for one or more cross sections. It thus enables us to see the interiors of objects on a screen, whereas this was previously only possible either by imagination, based on the measurements, or by direct observation, based on a physical sectioning of the objects. In the case of medical imaging, the latter involved surgical intervention.

Tomography is a remarkable invention, which allows us to discover the interiors of the world and the body, their organization in space and time, without destroying them. It is the favored tool for analyzing and characterizing matter, be it dead or alive, static or dynamic, of microscopic or of astronomical scale. By giving access to its structure and the form of its components, it enables us to understand the complexity of the studied object.

Computer aided tomography is a digital image acquisition technique. It produces an encoding, i.e. a digital representation on a computer, of a region of interest inside a patient, a structure, or an object. It thus provides a virtual representation of reality. The digital representation also facilitates subsequent exploitation, exchange, and storage of the associated information. By suitable processing, it then becomes possible to detect the presence of defects, to identify internal structures and to study their form and position, to quantify density variations, to model the components, the body, or the organs, and to guide interventional devices. Moreover, the user may benefit from the assistance of digital image processing, analysis, and visualization software.
A tomographic imaging system comprises several components and technologies. It requires the participation of the final users, such as physicians, physicists, and biologists, for its specification, of researchers and engineers for its development, and of industrial manufacturing and marketing experts for its production and commercialization. These participants are usually educated in medical and engineering schools, as well as at universities. Throughout this book, we wish to provide help equally to students interested in the scientific and technological background of tomography and to the above-mentioned group of people directly involved in the conception and application of tomographic systems.

First of all, we focus on explaining the different fundamentals and principles of the formation of tomographic images and on illustrating their aim. Since it is the subject of the series IC2 and of the corresponding English books published by ISTE Ltd. and John Wiley & Sons Inc., we emphasize signal processing and only touch upon the components of the acquisition systems, such as the radiation sources, the detectors, the processing platforms, and the mechanics. Signal processing in tomography forms the intersection between physics for modeling the acquisition systems, mathematics for solving the measurement equations, and computer science for efficiently implementing and executing the image reconstruction. The analysis, visualization, and transmission of these images are addressed in other French books in the series IC2 and the corresponding English books.

This book is compiled from two French books. La tomographie1 corresponds to the first three parts of this book, and La tomographie médicale2 corresponds to the last two parts. For the translation of these two books, several chapters have been updated to reflect advances in the respective domains since their publication.

This book is the result of collective work. Therefore, I would like to sincerely thank all authors for their contribution. Tomographic imaging is the “heart” of our work and our research. Each one of us committed himself or herself to introducing the reader to tomography, in such a way that the origin of the images and the information in these images are comprehensible. By gathering engineers, physicists, mathematicians, and physicians, I formed a multidisciplinary editing team, which allows the reader to benefit from explanations by experts in the respective fields.
1 GRANGEAT P. (Ed.), La Tomographie: Fondements Mathématiques, Imagerie Microscopique et Imagerie Industrielle, Hermes, 2002.
2 GRANGEAT P. (Ed.), La Tomographie Médicale: Imagerie Morphologique et Imagerie Fonctionnelle, Hermes, 2002.
The translation of this book was carried out by Holger Eggers. I would like to express my gratitude for his work, which demonstrates not only perfect knowledge of the covered technical aspects but also a good command of both French and English.

In this book, we have compiled in five distinct parts the mathematical foundations associated with image reconstruction, the applications linked to microscopic and industrial imaging, and the applications of medical tomography, separated into morphological and functional imaging.

The book begins with an introduction to tomography, a summary of the domain. This chapter describes the large variety of tomographic systems across the range of accessible contrasts, the choice of acquisition strategies to localize information in space and time, the different approaches to define reconstruction algorithms, and the variety of application domains.

Since the series IC2 and the corresponding English books address signal processing, we have compiled in the first part of the book the mathematical foundations, which serve the development of reconstruction algorithms. Analytical approaches, data sampling, and discrete approaches are discussed successively.

Attempting to cover the applications of tomography exhaustively in a limited number of pages is unrealistic. Therefore, we have selected a set of contributions that illustrate the domain with examples, primarily from French-speaking experts who are actively involved in research in the respective fields. For all chapters devoted to the applications of tomographic systems, we have been committed to describing the physical, physiological, and technological principles that underlie data acquisition and contrast creation. These acquisition strategies lead to the direct problem, which describes the relation between the image to be reconstructed and the performed measurements. The reconstruction algorithms, which attempt to solve the inverse problem, are only mentioned in these chapters, and references are given to the first part of the book for the mathematical derivation. However, the specific problems of each modality, such as pre- and post-processing of data for the correction of parasitic physical effects, are covered in more detail. Finally, several chapters contain sections in which one or several typical applications of the covered imaging modality are described.

The exploration of matter naturally leads to the investigation of smaller and smaller structures, the enhancement of spatial resolution, and the reduction of the scale of the images. Thus, we leave the dimensions of the human body and look at samples, cells, proteins, or genes. This is the domain of microtomography. Certain instruments applied in this domain are simply a miniaturization of tomographic systems employed in medical imaging, such as micro CT, MRI, SPECT, and PET scanners.
In this book, however, we are more interested in the instruments that are unique to this domain. The first chapter of the second part of this book is devoted to microscopic tomography and describes in particular the confocal scanning microscope. The second chapter deals with optical imaging in diffuse media. The last chapter covers tomography with synchrotrons, which are very intense and spatially coherent X-ray sources.

The third part of this book addresses industrial applications of tomography. These must respond to the increasing demands on quality control in manufacturing, on security, and on design. Tomography thus assists design, control, and maintenance engineers. In analogy to medical imaging, we have selected a chapter on X-ray tomography for the imaging of containers, which may be associated with morphological medical imaging. This chapter notably describes several examples to illustrate different uses. The second chapter covers emission tomography applied to the visualization of industrial flow, i.e. the imaging of contents, like functional medical imaging.

Medical imaging constitutes the domain in which tomographic systems are developed the furthest. Two parts of this book are devoted to the modalities that are applicable to humans, either for clinical purposes or for cognitive studies. Tomography assists physicians in diagnosis, planning, and intervention. The fourth part of this book deals with morphological medical imaging and covers successively computed tomography, X-ray volume tomography, and magnetic resonance imaging. Since ultrasound imaging is mostly applied to observation of the surface of internal organs, this modality is not addressed in this book. Another book in the series IC2 is devoted to depth imaging, to which ultrasound imaging naturally belongs. The fifth part of this book covers functional medical imaging in its different forms, namely single photon emission computed tomography, positron emission tomography, functional cerebral tomography by magnetic resonance imaging, and tomography of electrical activity by magneto- and electro-encephalography.

By enabling us to see the invisible, to look inside matter, tomographic systems have a magical, mysterious aspect. They are routinely used tools, which open up the possibility for physicians, researchers, and engineers to answer fundamental questions on the organisms or objects that they examine. Throughout this book, we invite the reader to understand the magic of these tools and thus to discover the exciting world of tomography.

Pierre GRANGEAT
Notation

ℝ: set of real numbers
ℝⁿ: set of real vectors of dimension n
ℝ₊: set of non-negative real numbers
ℕ, ℤ: set of natural numbers, set of integers
Ω²: unit disk
Sⁿ⁻¹: n-dimensional unit sphere
Ω: subset of the unit sphere S²; equatorial band
θ^⊥ = {x ∈ ℝⁿ, ⟨x, θ⟩ = 0}: orthogonal subspace to θ, for θ ∈ Sⁿ⁻¹
Zⁿ = Sⁿ⁻¹ × ℝ: unit cylinder in ℝⁿ⁺¹
i: root of (–1)
x*: conjugate complex
ν: frequency
P_e, h: sampling interval
ν_c: cut-off frequency
ln: natural logarithm
log: logarithm to base 10
e: base of the natural logarithm
Coordinates

S: point on X-ray source
M: point in image space
A: point on detector
r = (x, y, z): Cartesian coordinates in image space
ψ, φ: azimuthal angles of projections in fan-beam, cone-beam, and parallel-beam geometry
θ: colatitude angle of oblique projections
(p, q): Cartesian coordinates on detector
D_{S,A}: line through points S and A
D(φ, p): line defined by normalized polar coordinates (φ, p)
D(n, s), D(a, S): line defined by the unit vector n and the point to which the vector s points from the origin; line defined by the unit vector a and the point S
Specific signals

χ_A(x): indicator function of the set A; χ_A(x) = 1 if x ∈ A, = 0 otherwise
δ_x: Dirac distribution
sign: sign function
sinc x = (sin x)/x: sine cardinal function
J_n: Bessel function of order n
Transformations

f: function describing the image to be reconstructed
Ff: Fourier transform of the function f
F_p Rf, F_ρ Rf: Fourier transforms of Rf with respect to the variables p and ρ
F⁻¹m: inverse Fourier transform of the function m
Hf: Hilbert transform of the function f
Xf: X-ray, or line integral, transform of the function f
Yf: weighted X-ray transform of the function f
H_C, H_Ω: 2D filter of the 3D X-ray transform, Colsher filter
FRf: 2D Fourier transform of Rf with respect to the variables ψ and p
FXf: 2D Fourier transform of Xf with respect to s
D: line in space
P: plane in space
P_X: detection plane in cone-beam geometry
X_p f, X_f f, X_c f: X-ray transform in parallel-beam, fan-beam, and cone-beam geometry
X_μ f: attenuated line integral transform of the function f associated with the attenuation map μ
Rf: Radon transform of the function f
f^(k)(x) = D^k f(x) = ∂^{k1+…+kn} f / (∂x1^{k1} … ∂xn^{kn}) (x), k ∈ ℕⁿ: partial derivative
D_ρ Rf, D²_ρ Rf: first and second derivative of the Radon transform with respect to the variable ρ
Bm: backprojection operator applied to measurements m
W: apodization window
H_D m, H_DW m: measurements m, filtered by the ramp filter, either non-apodized or apodized with the operator W
h, h_HD, h_HDW: ramp filter, non-apodized ramp filter, ramp filter apodized with the operator W
h_p, h_f: ramp filter in parallel-beam and fan-beam geometry
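For orientation, the two central transforms of this list can be written out; these are their standard definitions, the exact conventions and normalizations being fixed in Chapter 2:

    Xf(a, S) = ∫_ℝ f(S + t a) dt,  a ∈ Sⁿ⁻¹,

the integral of f along the line D(a, S), and

    Rf(θ, ρ) = ∫_{⟨x,θ⟩ = ρ} f(x) dx,  θ ∈ Sⁿ⁻¹, ρ ∈ ℝ,

the integral of f over the hyperplane with unit normal θ at signed distance ρ from the origin. In 2D both transforms carry the same information; in 3D the Radon transform integrates over planes, whereas the X-ray transform integrates over lines.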
Sets of functions

L¹(ℝⁿ): space of integrable functions
L²(ℝⁿ): space of square-integrable functions
L¹([0, a]ⁿ), a > 0: space of periodic functions integrable over [0, a]ⁿ
C₀^∞(Ω): space of infinitely differentiable functions whose support is contained in Ω
𝒮(ℝⁿ): Schwartz space
𝒮′(ℝⁿ): space of tempered distributions
ℰ′(ℝⁿ): space of distributions with compact support
Characteristics of a variable, a vector, or a random signal

X: random variable
E: expectation
σ: standard deviation of a random variable
var x: variance
ρ: correlation coefficient
m_x: mean of vector x
Σ: covariance matrix
e: noise in measurements m
Probability, uncertainty
pdf Pr p f , pf pm / f
probability density function probability for discrete variables probability distribution for a continuous variable or vector conditional probability distribution
Statistics

φ, Φ: potential functions of Markov fields
μ: a priori distribution in the MEM approach
F_μ: criterion associated with the distribution μ in the MEM approach
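These notations come together in the Bayesian formulation of reconstruction developed in Chapter 4. As a schematic reminder, using only the standard relations:

    p(f/m) ∝ p(m/f) p(f),

and the maximum a posteriori (MAP) estimate maximizes this posterior,

    f̂ = arg max p(f/m) = arg min [ −ln p(m/f) − ln p(f) ],

where the first term measures fidelity to the measurements m and the second encodes the a priori model of the image f, for instance through a Gibbs distribution with potential function φ.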
Operations, operators

f ∗ h: convolution of the function f with the kernel h
g ∗_s h(θ, s): convolution of the function g with the kernel h with respect to the variable s
x ∧ y: vector product of two vectors
⟨x, y⟩: scalar product of two vectors
‖x‖: norm of a vector
|x|: absolute value
Sup(f): support of a function
Matrix operations

R = (r_ij): matrix
[R]_ij: element of a matrix
r_i., r_.j: i-th row and j-th column of a matrix
f: column vector representing an image or measurements
f_k: element of a column vector of samples (pixel, dexel, ...)
dim f: dimension of a vector
f^n: n-th iteration during the calculation of a vector
B⁻¹: inverse of a matrix B
det B: determinant of a matrix B
B⁺: pseudo-inverse of a matrix B
xᵀ: transpose of a vector x
I: identity matrix
Optimization

J: minimization criterion
grad, ∇: gradient operator
∇_x J: gradient of J with respect to x
∇²: Laplace operator
div: divergence operator
Electromagnetic and optical notation

ω: pulsation
k = (k_x, k_y, k_z): wave vector, whose components are the spatial frequencies in the reciprocal space or k-space
r(x, y, z), R(x, y, z): spatial vectors pointing to coordinate (x, y, z)
e^{i(⟨k,r⟩ − ωt)}: plane wave
E, E: electrical field vector, electrical field scalar
V: electrical potential
Q = (Q_x, Q_y, Q_z): equivalent current dipole
H: gain matrix
Γ(r₁ − r₂, t₁ − t₂): cross-correlation function of the fields E(r₁), E(r₂)
G(r₁ − r₂, t₁ − t₂): Green function
W_i(r, t): Wigner distribution
B, B: magnetic field vector, magnetic field scalar
μ = μ_r + iμ_i: permeability
μ₀: vacuum permeability
ε = ε_r + iε_i: dielectric constant
ε₀: vacuum dielectric constant
j, j: current density vector, current density scalar
[σ]: conductivity tensor
M: vector of magnetic or electric data
c: speed of light
ħ: Planck’s constant
λ: wavelength
NA: numerical aperture of an optical system
n: refractive index
μ_a: absorption coefficient
μ_s: diffusion coefficient
μ_s′: reduced diffusion coefficient
U(r, t): photon density wave
D: scalar diffusion coefficient of photon density waves
M_z: longitudinal magnetization
M_⊥: transverse magnetization
ρ(x, y, z): nuclear magnetization density at location (x, y, z)
Φ: phase of NMR signal
T₁: longitudinal relaxation time
T₂: transverse relaxation time
T₂*: effective transverse relaxation time
D_l: free diffusion coefficient of water
Θ_b: flip angle
d: mean free path
D_a: apparent diffusion coefficient
T_E: echo time
T_R: repetition time
T_I: inversion time
G = (G_x, G_y, G_z): gradient field with x, y, and z components
(x, y, z): laboratory frame of reference
(x′, y′, z′): rotating frame of reference
γ: gyromagnetic ratio
B₀: static magnetic field
ω₀: Larmor pulsation associated with the field B₀
B₁: rotating magnetic field
ω₁: Larmor pulsation associated with the field B₁
d_v: diameter of vessels
χ_m: magnetic susceptibility
M: magnetization
k_B: Boltzmann constant
Θ_a: absolute temperature
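Two of these quantities are tied together by the Larmor relation, recalled here as a standard fact of NMR: the precession pulsation is proportional to the applied magnetic field,

    ω₀ = γ B₀,  ω₁ = γ B₁,

with the gyromagnetic ratio γ as the constant of proportionality.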
Ionizing radiation notation

E_p: photon energy
N_p0, N_p: number of emitted and transmitted photons, or pairs of photons in positron tomography
φ_p0, φ_p: flux of emitted and transmitted photons
μ, μ_T: linear attenuation coefficient
μ_E: linear attenuation coefficient at energy E
μ_C: linear attenuation coefficient for Compton effect
μ_PE: linear attenuation coefficient for photoelectric effect
μ_R: linear attenuation coefficient for Rayleigh effect
μ_CP: linear attenuation coefficient for pair production
μ_AE: linear energy absorption coefficient
α: diffusion angle
m₀: rest mass of the electron
ρ_v: density
ρ_Z: electron density
Z: atomic number
Z_eff: effective atomic number
g_g: geometric magnification
d_X: diameter or size of the focal spot of the X-ray source
T_r: half-life of a radioactive element
A_r: activity of a radioactive element
D_a: absorbed dose
D_e: equivalent dose
K-edge: edges in absorption spectrum due to atomic transitions to the first free energy levels
ε_c: critical energy of the electromagnetic radiation spectrum of a synchrotron
K: deflection parameter for electromagnetic structures on a synchrotron
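Several of these symbols are linked by the attenuation law that underlies transmission tomography. As a standard reminder, for a monoenergetic beam of energy E crossing an object along a line L,

    N_p = N_p0 exp( −∫_L μ_E(x) dx ),

so that the logarithm of the measured transmission, −ln(N_p / N_p0), yields the line integral of the attenuation map along L, which is the quantity reconstructed in computed tomography.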
Acronyms and abbreviations

3DRP: 3D reprojection algorithm
3D-RA: 3D rotational angiography
ADC: analog-to-digital converter
APD: avalanche photodiode
ART: algebraic reconstruction technique
AVM: arteriovenous malformation
BaF2: barium fluoride
BGO: bismuth germanate
BOLD: blood oxygenation level dependent
CAD: computer-aided design
CCD: charge-coupled device
CDET: coincidence detection emission tomography
CdTe: cadmium telluride
CG: conjugate gradient
CLSM: confocal laser scanning microscopy
CT: computed tomography
CV: coefficient of variation
CZT: cadmium zinc telluride
dexel: detector element
DNA: deoxyribonucleic acid
DQE: detection quantum efficiency
ECG: electrocardiogram
EEG: electro-encephalography
EM: expectation maximization
EPI: echo-planar imaging
ESRF: European Synchrotron Radiation Facility
FBP: filtered backprojection
FDG: fluorodeoxyglucose
FISH: fluorescence in situ hybridization
FLASH: fluorescein arsenical helix binder
fMRI: functional magnetic resonance imaging
FORE: Fourier rebinning
FRELON: fast readout low noise
FWHM: full width at half maximum
FWTM: full width at tenth maximum
Gd2O2S:Tb: terbium-doped gadolinium oxysulfide
GE: gradient echo
GeHP: high purity germanium
GFP: green fluorescent protein
ICM: iterated conditional mode
IR: infrared
L-DOPA: precursor of dopamine
LOR: line of response
LORETA: low resolution electromagnetic tomography
LSO: lutetium orthosilicate
LUT: look-up table
MAP: maximum a posteriori
MART: multiplicative algebraic reconstruction technique
MCMC: Markov chain Monte Carlo
MEG: magneto-encephalography
MEM: maximum entropy on the mean
MIP: maximum intensity projection
ML: maximum likelihood
MPR: multiplanar reformat
MRI: magnetic resonance imaging
MTF: modulation transfer function
NA: numerical aperture
NaI: sodium iodide
NDT: non-destructive testing
NEC: noise equivalent count rate
NEQ: noise equivalent quantum
NMR: nuclear magnetic resonance
OCT: optical coherence tomography
OS: ordered subsets
OSL: one-step-late
OTF: optical transfer function
PET: positron emission tomography
pixel: picture element
PM: photomultiplier
PMMA: polymethyl methacrylate
POCS: projection onto convex sets
PSF: point-spread function
RF: radiofrequency
RGB: red green blue
RMS: root mean square
RTD: residence time distribution
SAR: synthetic aperture radar
SE: spin echo
SI: international system of units
SIRT: simultaneous iterative reconstruction technique
SNR: signal-to-noise ratio
SPECT: single photon emission computed tomography
SPM: statistical parametric mapping
SQUID: superconducting quantum interference device
SSP: section sensitivity profile
SSRB: single-slice rebinning
STEM: scanning transmission electron microscopy
TDM: tomodensitometry
TEM: transmission electron microscopy
TFT: thin film transistor
TOF: time of flight
UV: ultraviolet
voxel: volume element
X-FEL: X-ray free electron laser
Chapter 1
Introduction to Tomography
Chapter written by Pierre GRANGEAT.

1.1. Introduction

Tomographic imaging systems are designed to analyze the structure and composition of objects by examining them with waves or radiation and by calculating virtual cross-sections through them. They cover all imaging techniques that permit the mapping of one or more physical parameters across one or more planes. In this book, we are mainly interested in calculated, or computer-aided, tomography, in which the final image of the spatial distribution of a parameter is calculated from measurements of the radiation that is emitted, transmitted, or reflected by the object. In combination with the electronic measurement system, the processing of the collected information thus plays a crucial role in the production of the final image.

Tomography complements the range of imaging instruments dedicated to observation, such as radar, sonar, lidar, echograph, and seismograph. Currently, these instruments are mostly used to detect or localize an object, for instance an airplane by its echo on a radar screen, or to measure heights and thicknesses, for instance of the earth’s surface or of a geological layer. They mainly rely on depth imaging techniques, which are described in another book in the French version of this series [GAL 02]. By contrast, tomographic systems calculate the value of the respective physical parameter at all vertices of the grid that serves the spatial encoding of the image.

An important part of imaging systems such as cameras, camcorders, or microscopes is the sensor that directly delivers the observed image. In tomography, the sensor performs indirect measurements of the image by detecting the radiation with which the object is examined. These measurements are described by the radiation transport equations, which lead to what
mathematicians call the direct problem, i.e. the measurement or signal equation. To obtain the final image, appropriate algorithms are applied to solve this equation and to reconstruct the virtual cross-sections. The reconstruction thus solves the inverse problem [OFT 99]. Tomography therefore yields the desired image only indirectly by calculation.

1.2. Observing contrasts

A broad range of physical phenomena may be exploited to examine objects. Electromagnetic waves, acoustic waves, or photonic radiation are used to carry the information to the sensor. The choice of the exploited physical phenomenon and of the associated imaging instrument depends on the desired contrast, which must assure a good discrimination between the different structures present in the objects. This differentiation is characterized by its specificity, i.e. by its ability to discriminate between inconsequential normal structures and abnormal structures, and by its sensitivity, i.e. its capacity for measuring the weakest possible intensity level of relevant abnormal structures. In medical imaging, for example, tumors are characterized by a metabolic hyperactivity, which leads to a marked increase in glucose consumption. A radioactive marker such as fluorodeoxyglucose (FDG) enables detection of this increase in the metabolism, but it is not absolutely specific, since other phenomena, such as inflammation, also entail a local increase in glucose consumption. Therefore, the physician must interpret the physical measurement in the context of the results of other clinical examinations.

It is preferable to use coherent radiation whenever possible, which is the case for ultrasound, microwaves, laser radiation, and optical waves. Coherent radiation enables measurement not only of its attenuation but also of its dephasing. The latter enables association of a depth with the measured information, because the propagation time difference results in dephasing. Each material is characterized by its attenuation coefficient and its refractive index, which describe the speed of propagation of waves in the material. In diffraction tomography, we essentially aim to reconstruct the surfaces of the interfaces between materials with different indices. In materials with complex structures, however, the multiple interferences rapidly render the phase information unusable. Moreover, sources of coherent radiation, such as lasers, are often more expensive. With X-rays, only phase contrast phenomena that are linked to the spatial coherence of photons are currently observable, using microfocus sources or synchrotrons. This concept of spatial coherence reflects the fact that an interference phenomenon may only be observed behind two slits if they are separated by less than the coherence length. In such a situation, a photon interferes solely with itself. Truly coherent X-ray sources, such as the X-FEL (X-ray free electron laser), are only emerging.
In the case of non-coherent radiation, like γ-rays, and in most cases of X-ray imaging and optical imaging with conventional light sources, each material is mainly characterized by its attenuation and diffusion coefficients. Elastic or Rayleigh diffusion is distinguished from Compton diffusion, which results from inelastic photon–electron collisions. In γ- and X-ray imaging, we try to keep attenuated direct radiation only, since, in contrast to diffused radiation, it propagates along a straight line and thus enables us to obtain a very high spatial resolution. The attenuation coefficients reflect, in the first approximation, the density of the traversed material. Diffusion tomography is, for instance, used in infrared imaging to study blood and its degree of oxygenation. Since diffusion spreads light in all directions, the spatial resolution of such systems is limited.

Having looked at the interaction between radiation and matter, we now consider the principle of generating radiation. We distinguish between active and passive systems. In the former, the source of radiation is controlled by an external generator that is activated at the moment of measurement. The object is explored by direct interrogation if the generated incident radiation and the emerging measured radiation are of the same type. This is the case in, for example, X-ray, optical, microwave, and ultrasound imaging. The obtained contrast consequently corresponds to propagation parameters of the radiation. For these active systems with direct interrogation, we distinguish measurement systems based on reflection or backscattering, where the emerging radiation is measured on the side of the object on which the generator is placed, and measurement systems based on transmission, where the radiation is measured on the opposite side. The choice depends on the type of radiation employed and on constraints linked to the overall dimensions of the object. Magnetic resonance imaging (MRI) is a system with direct interrogation, since the incident radiation and the emerging radiation are both radiofrequency waves. However, the temporal variations of the excitation and received signals are very different. The object may also be explored by indirect interrogation, which means that the generated incident radiation and the emerging measured radiation are of different types. This is the case in fluorescence imaging, where the incident wave produces fluorescent light with a different wavelength (see Chapter 6). The observed contrast corresponds in this case to the concentration of the emitter.

In contrast to active systems, passive systems rely on internal sources of radiation, which are inside the analyzed matter or patient. This is the case in magneto- and electro-encephalography (MEG and EEG), where the current dipoles linked to synaptic activity are the sources. This is also the case in nuclear emission or photoluminescence tomography, where the tracer concentration in tissues or materials is measured. The observed contrast again corresponds to the concentration of the emitter. Factors such as attenuation and diffusion of the emitted radiation consequently lead to errors in the measurements.
In these different systems, the contrasts may be natural or artificial. Artificial contrasts correspond to the injection of contrast agents, such as agents based on iodine in X-ray imaging and gadolinium in MRI, and of specific tracers dedicated to passive systems, such as tracers based on radioisotopes and fluorescence. Within the new field of nanomedicine, new activatable markers are investigated, such as luminescent or fluorescent markers, which become active when the molecule is in a given chemical environment, exploiting for instance the quenching effect, or when the molecule is activated or dissociated by an external signal. Finally, studying the kinetics of contrasts provides complementary information, such as uptake and redistribution time constants of radioactive compounds. Moreover, multispectral studies enable characterization of flow velocity by the Doppler effect, and dual-energy imaging enables decomposition of matter into two reference materials such as water and bone, thus enhancing the contrast.

In Table 1.1, the principal tomographic imaging modalities are listed, including in particular those addressed in this book. In the second column, entitled “contrast”, the physical parameter visualized by each modality is stated. This parameter enables in certain cases, as in functional imaging, the calculation of physiological or biological parameters via a model which describes the interaction of the tracer with the organism. In the third column, we propose a classification of these systems according to their accuracy. We differentiate between very accurate systems, which are marked by + and may reach a reproducibility of the basic measurement of 0.1%, thus permitting quantitative imaging; accurate systems, which are marked by = and attain a reproducibility of the order of 1%; and less accurate systems, which are marked by – and provide a reproducibility in the order of 10%, thus allowing qualitative imaging only. Finally, in the fourth and fifth columns, typical orders of magnitude are given for the spatial and temporal resolution delivered by these systems. In view of the considerable diversity of existing systems, they primarily allow a distinction between systems with a high spatial resolution and systems with a high temporal resolution. They also illustrate the discussion in section 1.3.

The spatial and temporal resolutions are often conflicting parameters that depend on how the data acquisition and the image reconstruction are configured. For example, increasing the time of exposure improves the signal-to-noise ratio and thus accuracy at the expense of temporal resolution. Likewise, spatial smoothing alleviates the effect of noise and improves statistical reproducibility at the expense of spatial resolution. Another important parameter is the sensitivity of the imaging system. For instance, nuclear imaging systems are very sensitive and are able to detect traces of radioisotopes, but need very long acquisition times to reach a good signal-to-noise ratio. Therefore, the accuracy is often limited. The existence of disturbing effects often corrupts the measurement in tomographic imaging and makes it difficult to attain an absolute quantification. We are then satisfied with a relative quantification, according to the accuracy of the measurement.
Tomographic imaging modality | Contrast | Accuracy | Spatial resolution | Temporal resolution
CT | attenuation coefficient, density of matter | + | axial ≈ 0.4 mm; transaxial ≈ 0.5–0.6 mm | 0.1–2.0 s per set of slices associated with a 180° rotation
X-ray tomography in radiology | attenuation coefficient, density of matter | = | ≈ 0.25 mm | 3–6 s per volumetric acquisition (180° rotation)
MRI | amplitude of transverse magnetization | = | ≈ 0.5 mm | 0.1–10.0 s
SPECT | tracer concentration, associated physiological or biological parameters | – | 10–20 mm | 600–3,600 s per acquisition
PET | tracer concentration, associated physiological or biological parameters | = | 3–10 mm | 300–600 s per acquisition
fMRI | amplitude of transverse magnetization, concentration of oxy/deoxyhemoglobin | = | ≈ 0.5 mm | 0.1 s
MEG | current density | – | a few mm | 10⁻³ s
EEG | current density | – | 10–30 mm | 10⁻³ s
TEM | attenuation coefficient | – | 10⁻⁶ mm | some 100 s (function of image size)
CLSM | concentration of fluorophore | – | 2 × 10⁻⁴ mm | some 100 s (function of image size)
Coherent optical tomography | refractive index | – | 10⁻² mm | 10⁻¹ s
Non-coherent optical tomography | attenuation and diffusion coefficients | – | 10 mm | some 100 s
Synchrotron tomography | attenuation coefficient, density of matter | + | 10⁻³–10⁻² mm | 800–3,600 s per acquisition (180° rotation)
X-ray tomography in NDT | attenuation coefficient, density of matter | + | 10⁻²–10² mm (dependent on system) | 1–1,000 s per acquisition (dependent on system)
Emission tomography in industrial flow visualization | concentration of tracer | – | some 10% of the diameter of the object (some 10 mm) | 10⁻¹ s
Impedance tomography | electric impedance (resistance, capacitance, inductance) | – | 3–10% of the diameter of the object (1–100 mm) | 10⁻³–10⁻² s
Microwave tomography | refractive index | – | λ–λ/10 with wavelength λ (1–10 mm) | 10⁻²–10 s
Industrial ultrasound tomography | ultrasonic refractive index, reflective index, speed of propagation, attenuation coefficient | – | ≈ λ with wavelength λ (0.4–10.0 mm) | 10⁻²–10⁻¹ s

+ : very accurate systems, reproducibility of the basic measurement in the order of 0.1%; quantitative imaging.
= : accurate systems, reproducibility of the basic measurement in the order of 1%.
– : less accurate systems, reproducibility of the basic measurement in the order of 10%; qualitative imaging.

Table 1.1. Comparison of principal tomographic imaging modalities
In most cases, the interaction between radiation and matter is accompanied by energy deposition, which may be associated with diverse phenomena, such as a local increase in thermal agitation, a change of state, ionization of atoms, and breaking of chemical bonds. Improving image quality in terms of contrast- or signal-to-noise ratio unavoidably leads to an increase in the applied dose. In medical imaging, a compromise has to be found to assure that image quality is compatible with the demands of the physicians and the dose tolerated by the patient. After stopping irradiation, stored energy may be dissipated by returning to equilibrium, by thermal dissipation, or by biological mechanisms that try to repair or replace the defective elements.
1.3. Localization in space and time

Tomographic systems provide images, i.e. a set of samples of a quantity on a spatial grid. When this grid is two-dimensional (2D) and associated with the plane of a cross-section, we speak of 2D imaging, where each sample represents a pixel. When the grid is three-dimensional (3D) and associated with a volume, we speak of 3D imaging, where each element of the volume is called a voxel. When the measurement system provides a single image at a given moment in time, the imaging system is static. We may draw an analogy with photography, in which an image is instantaneously captured. When the measurement system acquires an image sequence over a period of time, the imaging system is dynamic. We may draw an analogy with cinematography, in which image sequences of animated scenes are recorded. In each of these cases, the basic information associated with each sample of the image is the local value in space and time of a physical parameter characteristic of the employed radiation. The spatial grid serves as support of the mapping of the physical parameter. The spatial sampling step is defined by the spatial resolution of the imaging system and its ability to separate two elementary objects. The acquisition time, which is equivalent to the time of exposure in photography, determines the temporal resolution. Each tomographic imaging modality possesses its own characteristic spatial and temporal resolution.

When exploring an object non-destructively, it is impossible to access the desired, local information directly. Therefore, we have to use penetrating radiation to measure the characteristic state of the matter. The major difficulty is that the emerging radiation will provide a global, integral measure of contributions from all traversed regions. The emerging radiation thus delivers a projection of the desired image, in which the contribution of each point of the explored object is weighted by an integration kernel, whose support geometrically characterizes the traversed regions. To illustrate these projective measurements, let us consider X-ray imaging as an example, where all observed structures superimpose on a planar image. In an X-ray image of the thorax seen from the front, the anterior and posterior parts superimpose, and the diaphragm is combined with the vertebrae. On these projection images, only objects with high contrast are observable. In addition, only their profile perpendicular to the direction of observation may be identified. Their 3D form may not be recognized.

To obtain a tomographic image, a set of projections is acquired under varying measurement conditions. These variations change the integration kernel, in particular the position of its support, to completely describe a projective transformation that is characteristic of the physical means of exploration employed. In confocal microscopy, for example, a Fredholm integral transform characterizes the impulse response of the microscope. In MRI, it is a Fourier transform, and in X-ray tomography, it is a Radon transform. In the last case, describing the Radon
transform implies acquiring projections, i.e. radiographs, with repeated measurements under angles that are regularly distributed around the patient or the explored object. Once the transformation of the image is described, the image may be reconstructed from the measurements by applying the inverse transformation, which corresponds to solving the signal equation.

By these reconstruction operations, global projection measurements are transformed into local values of the examined quantity. This localization has the effect of increasing the contrast. It thus becomes possible to discern organs, or defects, with low contrast. This is typically the case in X-ray imaging, where we basically observe bones in the radiographs, while we can identify soft organs in the tomographs. This increase in contrast allows more sensitivity in the search for anomalies characterized by either hyper- or hypo-attenuation, or by hyper- or hypo-uptake of a tracer. Thanks to the localization, it also becomes possible to separate organs from their environment and thus to gain characteristic information on them, such as the form of the contour, the volume, the mass, and the number, and to position them with respect to other structures in the image. This localization of information is of primary interest and justifies the success of tomographic techniques. A presentation of the principles of computerized tomographic imaging can be found in [KAK 01, HSI 03, KAL 06] for X-rays and [BEN 98, BAI 05] for positron emission tomography (PET).

The inherent cost of tomographic procedures is linked to the necessity of completely describing the space of projection measurements to reconstruct the image. Tomographic systems are thus “scanners”, i.e. systems that measure, or scan, each point in the transformed domain. The number of basic measurements is of the same order of magnitude as the number of vertices of the grid on which the image is calculated, typically between 64² and 1024² in 2D and between 64³ and 1024³ in 3D. The technique used for data acquisition thus greatly influences the temporal resolution and the cost of the tomographic system. The higher the number of detectors and generators placed around the object, the better the temporal resolution will be, but the price will be higher. A major evolution in the detection of nuclear radiation is the replacement of one-dimensional (1D) by 2D, either multiline or surface, detectors. In this way, the measurements may be performed in parallel. In PET, we observe a linear increase in the number of detectors available on high-end systems with the year of the product launch.

However, scanning technology must also be taken into account for changing the conditions of the acquisition. The most rapid techniques are those of electronic scanning. This is the case in electromagnetic measurements and in MRI [HAA 99]. In the latter, only a single detector, a radiofrequency antenna, is used and the information is gathered by modifying the ambient magnetic fields in such a way that the whole measurement domain is covered. The obtained information is intrinsically 3D. The temporal resolution achieved with these tomographic systems is very good, permitting dynamic imaging at video rates, i.e. at about 10 images/s. When electronic scanning
is impossible, mechanical scanning is employed instead. This is the case for rotating gantries in X-ray tomography, which rotate the source–detector combination around the patient. To improve the temporal resolution, the rotation time must be decreased – the most powerful systems currently complete a rotation in about 0.3 s – and the rotation must be combined with an axial translation of the bed to efficiently cover the whole volume of interest. Today, it is possible to acquire the whole thoracic area in less than 10 s, i.e. in a time for which patients can hold their breath.

The principle of acquiring data by scanning, adopted by tomographic systems, poses problems of motion compensation when objects that evolve over time are observed. This is the case in medical imaging when the patient moves or when organs which are animated by continuous physiological motion, like the heart or the lungs, are studied. This is also the case in non-destructive testing on assembly or luggage inspection lines, or in process monitoring, where chemical or physical reactions are followed. The problem must be considered as a reconstruction problem in space and time, and appropriate reconstruction techniques for compensating the motion or the temporal variation of the measured quantities must be introduced.

Evolution in tomographic techniques manifests itself mainly in localization in space and time. Growing acquisition rates permit increasing the dimensions of the explored regions. Thus, the transition from 2D to 3D imaging has been accomplished. In medical imaging, certain applications like X-ray and MR angiography and oncological imaging by PET require a reconstruction of the whole body. Accelerating the acquisition also makes dynamic imaging possible, for instance to study the kinetics of organs and metabolism or to guide interventions based on images. Interactive tomographic imaging is thus within reach. Finally, the general trend towards miniaturization leads to growing interest in microtomography, where an improved spatial resolution of the systems is the primary goal. In this way, we may test microsystems in non-destructive testing (NDT), study animal models such as mice in biotechnology, or analyze biopsies in medicine. Nanotomography of molecules such as proteins, molecular assemblies, or atomic layers within integrated microelectronic devices is also investigated using either electron microscopy or synchrotron X-ray sources.

1.4. Image reconstruction

Tomographic systems combine the examination of matter by radiation with the calculation of characteristic parameters, which are either linked to the generation of this radiation or the interaction between this radiation and matter. Generally, each measurement of the emerging radiation provides an analysis of the examined matter, like a sensor. Tomographic systems thus acquire a set of partial measurements by scanning. The image reconstruction combines these measurements to assign a local
value, which is characteristic of the employed radiation, to each vertex of the grid that supports the spatial encoding of the image. For each measurement, the emerging radiation is the sum of the contributions of the quantity to be reconstructed along the traversed path, each elementary contribution being either an elementary source of radiation or an elementary interaction between radiation and matter. It is thus an integral measurement which defines the signal equation. The set of measurements constitutes the direct problem, linking the unknown quantities to the measurements. The image reconstruction amounts to solving this system of equations, i.e. the inverse problem [OFT 99]. The direct problem is deduced from the fundamental equations of the underlying physics, such as the Boltzmann equation for the propagation of photons in matter, the Maxwell equation for the propagation of electromagnetic waves [DEH 95], and the elastodynamic equation for the propagation of acoustic waves [DEH 95]. The measurements provide integral equations, often entailing numerically unstable differentiations in the image reconstruction. In this case, the reconstruction corresponds to an inverse problem that is ill-posed in the sense of Hadamard [HAD 32].

The acquisition is classified as complete when it provides all measurements that are necessary to calculate the object, which is in particular the case when the measurements cover the whole support of the transformation modeling the system. The inverse problem is considered weakly ill-posed if the principal causes of errors are instabilities associated with noise or the bias introduced by an approximate direct model. This is the case in X-ray tomography, for instance. If additionally different solutions may satisfy the signal equation, even in the absence of noise, the inverse problem is considered to be strongly ill-posed. This is the case in magneto-encephalography and in X-ray tomography using either a small angular range or a small number of positions of the X-ray source, for example. The acquisition is classified as incomplete when some of the necessary measurements are missing, in particular when technological constraints reduce the number of measurements that can be made. This situation may be linked to a subsampling of measurements or an absence of data caused by missing or truncated acquisitions. In this case, the inverse problem is also strongly ill-posed, and the reconstruction algorithm must choose between several possible solutions.

To resolve these ill-posed inverse problems (in the sense of Hadamard), the reconstruction must be regularized by imposing constraints on the solution. The principal techniques of regularization either reduce the number or the range of the unknown parameters, for instance by using a coarser spatial grid to encode the image, by parametrizing the unknown image, or by imposing models on the dynamics; introduce regularity constraints on the image to be reconstructed; or
eliminate from the inverse operator the smallest spectral or singular values of the direct operator. All regularization techniques require that a compromise is made between fidelity to the given data and fidelity to the constraints on the solution. This compromise reflects the uncertainty relations between localization and quantification in images, applied to tomographic systems.

Several approaches may be pursued to define algorithms for image reconstruction, for which we refer the reader to [HER 79, NAT 86, NAT 01]. As for numerical methods in signal processing, deterministic and statistical methods, and continuous and discrete methods are distinguished. For a review of recent advances in image reconstruction, we refer the reader to [CEN 08].

The methods most widely used are continuous approaches, also called analytical methods. The direct problem is described by an operator over a set of functions. In the case of systems of linear measurements, the unknown image is linked to the measurements by a Fredholm transform of the first kind. In numerous cases, this equation simplifies either to a convolution equation, a Fourier transform, or a Radon transform. In these cases, the inverse problem admits an explicit solution, either in the form of an inversion formula, or in the form of a concatenation of explicit transformations. A direct method of calculating the unknown image is obtained in this way. These methods are thus more rapid than the iterative methods employed for discrete approaches. In addition, these methods are of general use, because the applied regularization techniques, which perform a smoothing, do not rely on specific a priori knowledge about the solution. The discretization is only introduced for numerical implementation.

However, when the direct operator becomes more complex to better describe the interaction between radiation and matter, or when the incorporation of a priori information is necessary to choose a solution in case of incomplete data, an explicit formula for calculating the solution does not exist anymore. In these specific cases, discrete approaches are preferable. They describe the measurements and the unknown image as two finite, discrete sets of sampled values, represented by vectors. When the measurement system is linear or can be linearized by a simple transformation of the data, the relation between the image and the measurements may be expressed by a matrix–vector product. The problem of image reconstruction then leads to the problem of solving a large system of linear equations. In the case of slightly underdetermined, complete acquisitions, regularization is introduced by imposing smoothness constraints. The solution is formulated as the argument of a composite criterion that combines fidelity to the given data and regularity of the function. In the case of highly underdetermined, incomplete acquisitions, regularization is introduced by choosing an optimal solution among a set of possible solutions, which are, for example, associated with a discrete subspace that depends on a finite number of parameters. The solution is thus defined as the solution of a linear system under constraints. In both cases, the solution is calculated with iterative algorithms.
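As a simple numerical illustration of this discrete formulation, the following sketch minimizes a composite criterion of the type ||Af − m||² + λ||Lf||², which combines fidelity to the data with a smoothness constraint, by a plain gradient (Landweber-type) iteration. It is only a toy example: the random matrix A stands in for a real projection matrix, and all dimensions and parameter values are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)
n_unknowns, n_meas = 64, 48                        # hypothetical, underdetermined sizes
A = rng.standard_normal((n_meas, n_unknowns))      # stand-in for a projection matrix
f_true = np.sin(np.linspace(0.0, np.pi, n_unknowns))   # smooth test object
m = A @ f_true + 0.01 * rng.standard_normal(n_meas)    # noisy measurements

L = np.eye(n_unknowns) - np.eye(n_unknowns, k=1)   # first-order finite differences
lam = 1.0                                          # regularization weight

def grad(f):
    # Gradient of the composite criterion ||A f - m||^2 + lam ||L f||^2
    return 2.0 * (A.T @ (A @ f - m) + lam * (L.T @ (L @ f)))

# Safe constant step: the Hessian norm is bounded by 2 (||A||^2 + 4 lam),
# since the spectral norm of the finite-difference operator L is at most 2
step = 0.5 / (np.linalg.norm(A, 2) ** 2 + 4.0 * lam)

f = np.zeros(n_unknowns)
for _ in range(2000):                              # iterative solution
    f -= step * grad(f)

print("relative error:", np.linalg.norm(f - f_true) / np.linalg.norm(f_true))
```

The same structure carries over to realistic problems; only the operator A, the regularity operator L, and the optimization method change.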
When the statistical fluctuation of the measurements becomes important, as in emission tomography, it is preferable to explicitly describe the images and measurements statistically, following the principle of statistical methods. The solution is expressed as a statistical estimation problem. In the case of weakly ill-posed problems, Bayesian estimators corresponding to the minimization of a composite criterion have been proposed. They choose the solution with the highest probability for the processed measurement, or on average for all measurements, taking the measurements and the a priori information about the object into account. In the case of strongly ill-posed problems, constrained estimators, for example linked to Kullback distances, have been envisioned. In these cases, too, we resort to iterative reconstruction algorithms.

When the ill-posed character becomes too severe, for example in the case of very limited numbers of measurement angles, constraints imposed at the low level of individual image samples are no longer sufficient. We must then employ a geometric description of the object to be reconstructed, in the form of a set of elementary objects, specified by a graphical model or a set of primitives. The image reconstruction amounts to matching the calculated and the measured projections. In this work, these approaches, which do not directly concern the problem of mapping a characteristic physical parameter, but rather problems in computer vision and pattern recognition, are not considered any further. We refer the reader to [GAR 06].

The image reconstruction algorithms require considerable processing performance. The elementary operation is the backprojection, or backpropagation, in which a basic measurement is propagated through a volume by following the path traversed in the acquisition in the reverse direction. For this reason, the use of more and more sophisticated algorithms, and the reconstruction of larger and larger images, with higher spatial and temporal resolution, are only made possible by advances in the processing performance and memory size of the employed computers. Often, the use of dedicated processors and of parallelization and vectorization techniques is necessary. In view of these demands, dynamic tomographic imaging in real time constitutes a major technological challenge today.
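To make the elementary backprojection operation concrete, the sketch below (an illustration only, not the algorithm of any particular system) smears each parallel projection value back along its acquisition direction and accumulates the contributions on the pixel grid; without the ramp filtering discussed in Chapter 2, the result is a blurred version of the object.

```python
import numpy as np

def backproject(sinogram, angles_deg, size):
    """Unfiltered backprojection of parallel projections onto a size x size grid.

    sinogram has shape (n_angles, n_bins); each row is one parallel projection.
    Each measured value is propagated back along its integration line, i.e.
    added to every pixel whose radial coordinate p = x cos(psi) + y sin(psi)
    falls into the corresponding detector bin.
    """
    recon = np.zeros((size, size))
    c = np.arange(size) - (size - 1) / 2.0       # pixel coordinates, centered
    x, y = np.meshgrid(c, c)
    n_bins = sinogram.shape[1]
    for proj, psi in zip(sinogram, np.deg2rad(angles_deg)):
        p = x * np.cos(psi) + y * np.sin(psi)    # radial position of each pixel
        bins = np.round(p + (n_bins - 1) / 2.0).astype(int)
        inside = (bins >= 0) & (bins < n_bins)
        recon[inside] += proj[bins[inside]]
    return recon * np.pi / len(angles_deg)       # weight of the angular sum
```

Because every measurement touches every pixel along its path, the cost grows with the product of the number of projections and the image size, which is why dedicated processors and parallelization are so valuable here.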
1.5. Application domains

By using radiation to explore matter and calculations to compute the map of the physical parameter characteristic of the employed radiation, tomographic imaging systems provide virtual cross-sections through objects without resorting to destructive means. These virtual cross-sections may easily be interpreted and enable
the identification and localization of internal structures. Computed tomography techniques directly provide a digital image, thus offering the possibility of applying the richness of image processing algorithms. The examination of these images notably permits characterization of each image element for differentiation, detection of defects for control and testing, quantification for measurement, and modeling for description and understanding. The imaging becomes 3D when the tomographic system provides a set of parallel cross-sections that cover the region of interest. Thus, cross-sections in arbitrary planes may be calculated, and surfaces, shapes, and the architecture of internal structures may be represented. In applications associated with patients or materials with solid structures in which fluids circulate, a distinction is made between imaging systems that primarily map the solid architecture, i.e. the structures (we then speak of morphological imaging), and imaging systems that mainly follow the exchange of fluids, i.e. the contents (we then speak of functional imaging).

In optical imaging, the development of the first, simple microscopes, which were equipped with a single lens, dates back to the 17th century. From a physical point of view, the use of X-rays for the examination of matter started with their discovery by Röntgen in 1895. In the course of experiments conducted to ascertain the origin of this radiation, Becquerel found the radiation that uranium spontaneously emits in 1896 [BIM 99]. After the discovery of polonium and radium in 1898, Pierre and Marie Curie characterized the phenomenon that produces this radiation and called it “radioactivity”. For these discoveries, Henri Becquerel, Marie Curie and Pierre Curie received the Nobel prize for physics in 1903. The use of radioactive tracers was initiated in 1913 by Hevesy (Nobel prize for chemistry in 1943). The discovery of artificial radioactivity by Irène and Frédéric Joliot-Curie in 1934 opened the way for the use of numerous tracers to study the functioning of living systems, from cells to entire organisms. Important research has been carried out on the production and the detection of these radiations. The first microscope using electrons to study the structure of an object was developed by Ernst Ruska in 1929.

From a mathematical point of view, the Fourier transform constitutes an important tool in the study of the analytical methods used in tomography. Several scholars, such as Maxwell, Helmholtz, and Born, have studied the equation of wave propagation. The name of Fredholm remains linked to the theory of integral equations. The problem of reconstructing an image from line integrals goes back to the original publication by Radon in 1917 [RAD 17], reissued in 1986 [RAD 86], and the studies of John published in 1934 [JOH 34].

During the second half of the 20th century, a rapid expansion of tomographic techniques took place. In X-ray imaging, the first tomographic systems were mechanical devices which combined movements of the source and of the radiological film to eliminate information off the plane of the cross-section by a blurring effect. This is called longitudinal tomography, since the plane of the cross-
section is parallel to the patient axis. The first digitally reconstructed images were obtained by radio astronomers at the end of the 1950s [BRA 79]. The first studies by Cormack at the end of the 1950s and the beginning of the 1960s showed that it is possible to precisely reconstruct the image of a transverse cross-section of a patient from its radiographic projections [COR 63, COR 64]. In the 1960s, several experiments with medical applications were carried out [COR 63, COR 64, KUH 63, OLD 61]. The problem of reconstructing images from projections in electron microscopy was also tackled at this time [DER 68, DER 71]. In 1971, Hounsfield built the first prototype of an X-ray tomograph dedicated to cerebral imaging [HOU 73]. Hounsfield and Cormack received the Nobel prize for physiology and medicine in 1979 [COR 80, HOU 80]. The origins of MRI go back to the first publication by Lauterbur in 1973 [LAU 73]. In nuclear medicine, the pioneering work in single photon emission computed tomography (SPECT) dates back to Kuhl and Edwards in 1963 [KUH 63], followed by Anger in 1967 [ANG 67], Muehllehner in 1971 [MUE 71], and Keyes et al. [BOW 73, KEY 73]. The first PET scanner was built at the beginning of the 1960s by Rankowitz et al. [RAN 62], followed in 1975 by Ter-Pogossian and Phelps [PHE 75, TER 75]. For more references covering the beginning of tomography, we refer the interested reader to the works of Herman [HER 79], Robb [ROB 85], and Webb [WEB 90], and to the following papers summarizing the developments achieved during the past 50 years in X-ray computed tomography [KAL 06], MRI [MAL 06], SPECT [JAS 06], PET [MUE 06], and image reconstruction [DEF 06].

In France, the pioneering work from this period includes that of Robert Di Paola’s group at the Institute Gustave Roussy (IGR) on γ-tomography and of the groups at the Laboratory of Electronics, Technology, and Instrumentation (LETI) at the Atomic Energy Commission (CEA). In 1976, under the direction of Edmond Tournier and Robert Allemand, the first French prototype of an X-ray tomograph was installed at Grenoble hospital, in collaboration with the company CGR [PLA 98]. In 1981, the LETI installed at the Service Hospitalier Frédéric Joliot (SHFJ) the CEA’s first prototype of a time-of-flight PET scanner (TTV01), followed by three systems of the next generation (TTV03). The SHFJ participated in the specification of these systems and carried out their experimental evaluation. The SHFJ was the first center in France having a cyclotron for medical use and a radiochemistry laboratory for synthesizing the tracers used in PET. In 1990, two prototypes of 3D X-ray scanners called morphometers were developed by the LETI, one of which was installed at the hospital in Rennes and the other at the neurocardiological hospital in Lyon, in collaboration with the company General Electric-MSE and groups working in these hospitals. In parallel, several prototypes of X-ray scanners were built for applications in non-destructive testing.

The 1980s saw the development of 3D imaging and digital detection technology, as well as tremendous advances in the processing performance of computers. One
pioneering project was the DSR project (dynamic spatial reconstructor) at the Mayo Clinic in Rochester [ROB 85], which aimed at imaging the beating heart and the breathing lung using several pairs of X-ray sources and detectors. Developments continued in the 1990s, focusing notably on increasing sensitivity and acquisition rates of the systems, the latter in particular linked with the development of 2D multiline or surface radiation detectors. Several references describe the advances achieved in this period [BEN 98, GOL 95, KAL 00, LIA 00, ROB 85, SHA 89].

Medical imaging is an important application domain for tomographic techniques. Currently, X-, γ-, and β-ray tomography, and MRI together make up a third of the medical imaging market. Tomographic systems are heavy and expensive devices that often require specific infrastructure, like protection against ionizing radiation or magnetic fields, or facilities for preparing radiotracers. Nevertheless, tomography plays a crucial role in diagnostic imaging today, and it has a major impact on public health. Medical imaging and the life sciences thus constitute the primary application domains of tomographic systems.

In the life sciences, the scanning transmission electron microscope enables the study of fine structures, such as those of proteins or viruses, and, at a larger scale, the confocal laser scanning microscope (CLSM) allows visualization of the elements of cells. In neuroscience, PET, fMRI (functional MRI), MEG, and EEG have contributed to the identification and the localization of cortical areas and thus to the understanding of the functioning of the brain. By enabling tracking of physiological parameters, PET contributes to the study of the primary and secondary effects of drugs. Finally, the technique of gene reporters opens up the possibility of genetic studies in vivo. For these studies on drugs or in biotechnology, animal models are necessary, which explains the current development of platforms for tomographic imaging on animals. For small animal studies, optical tomography associated either with luminescence or laser-induced fluorescence is also progressing. In particular, it allows linking in vitro studies on cell cultures and in vivo studies on small animals using the same markers. Generally speaking, through MRI, nuclear and optical imaging, molecular imaging is an active field of research nowadays.

In medicine, X-ray tomography and MRI are anatomic, morphological imaging modalities, while SPECT, PET, fMRI, and magneto- and electro-encephalography (MEG and EEG) are employed in functional studies. X-ray tomography is well suited to examining dense structures, like bones or vasculature after injection of a contrast agent. MRI is better suited for visualizing contrast based on the density of hydrogen atoms, such as between the white and gray matter in the brain [HAA 99]. SPECT is used a lot to study blood perfusion of the myocardium, and PET has established itself for staging of cancer in oncology. For medical applications, optical tomography [MÜL 93] remains for the moment restricted to a few dedicated
applications, such as the imaging of the cornea or the skin by optical coherence tomography. Other applications, like the exploration of the breast or prostate using laser-induced fluorescence tomography, are still under investigation. These aspects are elaborated in the following chapters, which are dedicated to the individual modalities.

Tomographic systems are used in all stages of patient care, including diagnosis, therapy or surgical planning, the image-based guidance of interventions for minimally invasive surgery, and therapy follow-up. Digital assistants may help with the image analysis. Moreover, robots use these images for guidance. These digital images are easily transmitted over imaging networks and stored to establish a reference database, to be shared among operators or students. In the future, virtual patients and digital anatomic atlases will be used in the training of certain medical operations, or even surgical procedures. Such digital models also facilitate the design of prostheses.

The other application domains of tomographic imaging systems are astronomy and materials sciences. They often involve large instruments, such as telescopes, synchrotrons, and plasma chambers, and nuclear simulation devices, such as the installations Mégajoule and Airix in France. Geophysics is another important domain in which, on a large scale, oceans or geological layers are examined and, on a smaller scale, rocks and their properties, such as their porosity. In astrophysics, telescopes operating at different ranges of wavelengths are involved, which exploit the natural rotation of the stars for tomographic acquisitions [BRA 79, FRA 05].

The last application domains are industry and services. Several industries are interested, such as the nuclear, aeronautic, automobile, agriculture, food, pharmaceutical, petroleum, and chemical industries [BAR 00, BEC 97, MAY 94]. Here, a broad range of materials in different phases is concerned. Tomography is first of all applied in design and development departments, for the development of fabrication processes, the characterization and modeling of the behavior of materials, reverse engineering, and the adjustment of graphical models in computer-aided design (CAD). Secondly, tomography is also applied in process control [BEC 97], for instance in mixers, thermal exchangers, multiphase transport, fluidized bed columns, motors, and reactors. It concerns in particular the identification of functioning regimes, the determination of resting times and of the kinetics of products, and the validation of thermo-hydraulic models. However, in non-destructive testing, once the types of defects or malfunction are identified on a prototype, we often try to reduce the number of measurements and thus the cost, and to simplify the systems. Therefore, the use of tomographic systems remains limited in this application domain. Lastly, tomography is applied to security and quality control. This includes, for instance, production control, such as on-line testing after assembly to check dimensions or to study defects, or the inspection of nuclear waste
drums. Security control is also becoming a major market. This concerns in particular luggage inspection with identification of materials to detect drugs or explosives.

In conclusion, from the mid-20th century up to the present, tomography has become a major technology with a large range of applications.

1.6. Bibliography

[ANG 67] ANGER H., PRICE D. C., YOST P. E., “Transverse section tomography with the gamma camera”, J. Nucl. Med., vol. 8, pp. 314–315, 1967.

[BAI 05] BAILEY D. L., TOWNSEND D. W., VALK P. E., MAISEY M. N., Positron Emission Tomography: Basic Science and Clinical Practice, Springer, 2005.

[BAR 00] BARUCHEL J., BUFFIÈRE J. Y., MAIRE E., MERLE P., PEIX G. (Eds.), X-Ray Tomography in Material Science, Hermes, 2000.

[BEC 97] BECK M. S., DYAKOWSKI T., WILLIAMS R. A., “Process tomography – the state of the art”, Frontiers in Industrial Process Tomography II, Proc. Engineering Foundation Conference, pp. 357–362, 1997.

[BEN 98] BENDRIEM B., TOWNSEND D. (Eds.), The Theory and Practice of 3D PET, Kluwer Academic Publishers, Developments in Nuclear Medicine Series, vol. 32, 1998.

[BIM 99] BIMBOT R., BONNIN A., DELOCHE R., LAPEYRE C., Cent Ans après, la Radioactivité, le Rayonnement d’une Découverte, EDP Sciences, 1999.

[BOW 73] BOWLEY A. R., TAYLOR C. G., CAUSER D. A., BARBER D. C., KEYES W. I., UNDRILL P. E., CORFIELD J. R., MALLARD J. R., “A radioisotope scanner for rectilinear arc, transverse section, and longitudinal section scanning (ASS, the Aberdeen Section Scanner)”, Br. J. Radiol., vol. 46, pp. 262–271, 1973.

[BRA 79] BRACEWELL R. N., “Image reconstruction in radio astronomy”, in HERMAN G. T. (Ed.), Image Reconstruction from Projections, Springer, pp. 81–104, 1979.

[CEN 08] CENSOR Y., JIANG M., LOUIS A. K. (Eds.), Mathematical Methods in Biomedical Imaging and Intensity-Modulated Radiation Therapy (IMRT), Edizioni Della Normale, 2008.

[COR 63] CORMACK A. M., “Representation of a function by its line integrals, with some radiological applications”, J. Appl. Phys., vol. 34, pp. 2722–2727, 1963.

[COR 64] CORMACK A. M., “Representation of a function by its line integrals, with some radiological applications II”, J. Appl. Phys., vol. 35, pp. 195–207, 1964.

[COR 80] CORMACK A. M., “Early two-dimensional reconstruction (CT scanning) and recent topics stemming from it, Nobel Lecture, December 8, 1979”, J. Comput. Assist. Tomogr., vol. 4, n° 5, p. 658, 1980.
[DEF 06] DEFRISE M., GULLBERG G. T., “Image reconstruction”, Phys. Med. Biol., vol. 51, n° 13, pp. R139–R154, 2006.

[DEH 95] DE HOOP A. T., Handbook of Radiation and Scattering of Waves, Academic Press, 1995.

[DER 68] DE ROSIER D. J., KLUG A., “Reconstruction of three-dimensional structures from electron micrographs”, Nature, vol. 217, pp. 130–134, 1968.

[DER 71] DE ROSIER D. J., “The reconstruction of three-dimensional images from electron micrographs”, Contemp. Phys., vol. 12, pp. 437–452, 1971.

[FRA 05] FRAZIN R. A., KAMALABADI F., “Rotational tomography for 3D reconstruction of the white-light and EUV corona in the post-SOHO era”, Solar Physics, vol. 228, pp. 219–237, 2005.

[GAL 02] GALLICE J. (Ed.), Images de Profondeur, Hermes, 2002.

[GAR 06] GARDNER R. J., “Geometric Tomography”, Encyclopedia of Mathematics and its Applications, Cambridge University Press, 2006.

[GOL 95] GOLDMAN L. W., FOWLKES J. B. (Eds.), Medical CT and Ultrasound: Current Technology and Applications, Advanced Medical Publishing, 1995.

[HAA 99] HAACKE E., BROWN R. W., THOMPSON M. R., VENKATESAN R., Magnetic Resonance Imaging. Physical Principles and Sequence Design, John Wiley and Sons, 1999.

[HAD 32] HADAMARD J., Le Problème de Cauchy et les Equations aux Dérivées Partielles Linéaires Hyperboliques, Hermann, 1932.

[HER 79] HERMAN G. T. (Ed.), Image Reconstruction from Projections: Implementation and Applications, Springer, 1979.

[HOU 73] HOUNSFIELD G. N., “Computerized transverse axial scanning (tomography). I. Description of system”, Br. J. Radiol., vol. 46, pp. 1016–1022, 1973.

[HOU 80] HOUNSFIELD G. N., “Computed medical imaging, Nobel Lecture, December 8, 1979”, J. Comput. Assist. Tomogr., vol. 4, n° 5, p. 665, 1980.

[HSI 03] HSIEH J., Computed Tomography: Principles, Design, Artifacts, and Recent Advances, SPIE Press, 2003.

[JAS 06] JASZCZAK R. J., “The early years of single photon emission computed tomography (SPECT): an anthology of selected reminiscences”, Phys. Med. Biol., vol. 51, n° 13, pp. R99–R115, 2006.

[JOH 34] JOHN F., “Bestimmung einer Funktion aus ihren Integralen über gewisse Mannigfaltigkeiten”, Mathematische Annalen, vol. 109, pp. 488–520, 1934.

[KAL 00] KALENDER W. A., Computed Tomography, Wiley, New York, 2000.

[KAL 06] KALENDER W. A., Computed Tomography: Fundamentals, System Technology, Image Quality, Applications, Wiley-VCH, 2006.
[KAK 01] KAK A. C., SLANEY M., Principles of Computerized Tomographic Imaging, SIAM, 2001.

[KEY 73] KEYES J. W., KAY D. B., SIMON W., “Digital reconstruction of three-dimensional radionuclide images”, J. Nucl. Med., vol. 14, pp. 628–629, 1973.

[KUH 63] KUHL D. E., EDWARDS R. Q., “Image separation radioisotope scanning”, Radiology, vol. 80, pp. 653–661, 1963.

[LAU 73] LAUTERBUR P. C., “Image formation by induced local interactions: examples employing nuclear magnetic resonance”, Nature, vol. 242, pp. 190–191, 1973.

[LIA 00] LIANG Z. P., LAUTERBUR P. C., Principles of Magnetic Resonance Imaging. A Signal Processing Perspective, IEEE Press, Series in Biomedical Imaging, 2000.

[MAL 06] MALLARD J. R., “Magnetic resonance imaging – the Aberdeen perspective on developments in the early years”, Phys. Med. Biol., vol. 51, n° 13, pp. R45–R60, 2006.

[MAY 94] MAYINGER F. (Ed.), Optical Measurements: Techniques and Applications, Springer, 1994.

[MUE 71] MUEHLLEHNER G., WETZEL R., “Section imaging by computer calculation”, J. Nucl. Med., vol. 12, pp. 76–84, 1971.

[MUE 06] MUEHLLEHNER G., KARP J. S., “Positron emission tomography”, Phys. Med. Biol., vol. 51, n° 13, pp. R117–R137, 2006.

[MÜL 93] MÜLLER G., CHANCE B., ALFANO R. et al., Medical Optical Tomography: Functional Imaging and Monitoring, SPIE Optical Engineering Press, vol. IS11, 1993.

[NAT 86] NATTERER F., The Mathematics of Computerized Tomography, John Wiley and Sons, 1986.

[NAT 01] NATTERER F., WUBBELING F., Mathematical Methods in Image Reconstruction, SIAM, 2001.

[OFT 99] OBSERVATOIRE FRANÇAIS DES TECHNIQUES AVANCEES, Problèmes Inverses: de l’Expérimentation à la Modélisation, Tec & Doc, Série ARAGO, n° 22, 1999.

[OLD 61] OLDENDORF W. H., “Isolated flying spot detection of radiodensity discontinuities displaying the internal structural pattern of a complex object”, IRE Trans. BME, vol. 8, pp. 68–72, 1961.

[PHE 75] PHELPS M. E., HOFFMAN E. J., MULLANI N. A., TER-POGOSSIAN M. M., “Application of annihilation coincidence detection to transaxial reconstruction tomography”, J. Nucl. Med., vol. 16, pp. 210–224, 1975.

[PLA 98] PLAYOUST B., De l’Atome à la Puce – Le LETI: Trente Ans de Collaborations Recherche Industrie, LIBRIS, 1998.

[RAD 17] RADON J., “Über die Bestimmung von Funktionen durch ihre Integralwerte längs gewisser Mannigfaltigkeiten”, Math.-Phys. Kl. Berichte d. Sächsischen Akademie der Wissenschaften, vol. 69, pp. 262–267, 1917.
[RAD 86] RADON J., “On the determination of functions from their integral values along certain manifolds”, IEEE Trans. Med. Imaging, pp. 170–176, 1986.

[RAN 62] RANKOWITZ S., “Positron scanner for locating brain tumors”, IEEE Trans. Nucl. Sci., vol. 9, pp. 45–49, 1962.

[ROB 85] ROBB R. A. (Ed.), Three-dimensional Biomedical Imaging, vol. I and II, CRC Press, 1985.

[SHA 89] SHARP P. F., GEMMEL H. G., SMITH F. W. (Eds.), Practical Nuclear Medicine, IRL Press at Oxford University Press, Oxford, 1989.

[TER 75] TER-POGOSSIAN M. M., PHELPS M. E., HOFFMAN E. J., MULLANI N. A., “A positron emission transaxial tomograph for nuclear imaging (PETT)”, Radiology, vol. 114, pp. 89–98, 1975.

[WEB 90] WEBB S., From the Watching of the Shadows: the Origins of Radiological Tomography, Adam Hilger, 1990.

[WIL 95] WILLIAMS R. A., BECK M. S. (Eds.), Process Tomography: Principles, Techniques and Applications, Butterworth Heinemann, 1995.
Part 1

Image Reconstruction
Chapter 2
Analytical Methods
Chapter written by Michel DEFRISE and Pierre GRANGEAT.

2.1. Introduction

Tomography is a technique for studying matter based on measurements of emitted or transmitted radiation. These measurements are described by equations that link the radiation sources, the interaction of the radiation with matter, and the detectors. The reconstruction of images from these indirect measurements involves calculating maps of a characteristic parameter by inverting the measurement equations. This characteristic parameter is linked to the radiation sources in emission tomography and to the coefficients describing the interaction of the radiation with matter in transmission tomography.

Image reconstruction is an inverse problem, in which first the direct problem, linking the measurements to the characteristic map, is formulated based on physical laws and then the characteristic map is calculated by solving the associated equations. The analytical methods rely on a description of the images and measurements by continuous functions and on a modeling of the physical laws by functional operators. They are of particular interest whenever the inverse operator can be expressed in the form of an explicit inversion formula or a concatenation of such formulas. Their numerical implementation leads to algorithms in which the images are directly calculated from the measurements in a single step, without resorting to more time-consuming, iterative methods.

The production and transport of photons in matter are described by partial differential equations, such as the Boltzmann equation. In image reconstruction, the simplified integral form using projection operators is preferably employed. These
integral projections reflect the fact that the contributions of the whole of the traversed matter are accumulated. In general, these equations have the form of Fredholm integral equations of the first kind. The main analytical operators used in image reconstruction are the Radon transform, the X-ray transform, and the Fourier transform, which are all described in this chapter.

The algorithmic implementation of these analytical methods relies on a discretization of the inversion formula. The numerical analysis basically employs filtering, backprojection, and summation operations, as well as discrete Fourier transforms. The need to accelerate the calculation leads to adaptations of the algorithms to the architecture of the computing hardware, use of efficient numerical operators, like the fast Fourier transform, and simplifications of the calculations, for instance by rebinning to other geometries.

Image reconstruction is an ill-posed inverse problem in the sense of Hadamard. In particular, the reconstruction filters amplify the noise that is present in the measurements. To reduce the influence of these statistical fluctuations, regularization techniques are used, which rely on a smoothing by low-pass filters, as discussed in section 2.2.4. These analytical inversion formulas also enable study of the data sampling, as described in Chapter 3. In addition, they provide explicit formulas for the calculation of expressions that characterize the acquisition systems, such as the signal-to-noise ratio in the reconstructed images or the transfer function of the tomographic imaging systems.

The analytical methods are very general because they do not rely on characteristic properties of the studied object. However, explicit inversion formulas exist only for simple operators. When complex physical phenomena are introduced into the direct operator, such as multiple scattering in single photon emission computed tomography, an explicit inversion formula no longer exists. Similarly, if a priori information, constraints, or selection criteria concerning the images to be reconstructed are to be introduced, the solution is often defined by the minimum of a cost function, for which an explicit formula rarely exists. In these cases, the discrete methods described in Chapter 4 are preferably used.

In this chapter, we introduce the principal operators that serve the description of different acquisition geometries encountered in tomographic systems, most notably the X-ray and the Radon transform, associated with integral projections along lines or over hyperplanes, respectively. We discuss the two-dimensional (2D) and the three-dimensional (3D) case and distinguish between parallel and divergent geometries, where the latter include the fan-beam geometry in 2D and the cone-beam geometry in 3D. We summarize the
Analytical Methods
25
main properties of these operators and describe the inversion approaches on which the analytical reconstruction algorithms that are used on tomographic imaging systems rely. Then we address 3D positron emission tomography in section 2.6 and X-ray tomography in cone-beam geometry in section 2.7. Finally, we discuss in section 2.8 dynamic tomography where the reconstructed image is changing over time. For a more detailed description and for application examples of certain formulas introduced in this chapter, the reader is referred to [BAR 81, DEA 83, HER 80, KAK 88, NAT 86, NAT 01, BAR 04], especially for sections 2.2 and 2.3, which deal with the 2D Radon transform. References to the original articles, in which the results presented below were introduced, can be found there as well. The reader may also refer to the following reviews [GRA 01, HIR 97, WAN 00]. 2.2. 2D Radon transform in parallel-beam geometry 2.2.1. Definition and concept of sinogram The function f(M) represents the value of the studied physical quantity at a point M. In the 2D case, M is in the plane of the measured slice. The Cartesian coordinates (x,y) of M are used in this plane. Depending on the context, f(M) or f(x,y) is written. The function f is assumed to be sufficiently regular and to be zero outside a centered disk with radius R. p Rf(\,p) y
M n p
D(\,p) \ x
0 f(M)
Figure 2.1. 2D Radon transform in parallel-beam geometry (equation [2.1])
The 2D Radon transform, or 2D X-ray transform, associates with the function f(M) the set of its integrals along lines D in the plane of the slice. The lines D are
26
Tomography
defined by the angle <between the axis x and the unit vector n perpendicular to D, and by the algebraic distance p between the line and the origin O (see Figure 2.1). The X-ray transform then reads: Rf \, p =
f f M dM= f³ pcos\ - tsin\, psin\ + tcos\ dt M D\, p -f
³
>2.1@
where < [0,S), |p| d R, and t denotes the abscissa along the line D. For a fixed angle <, Rf(<,.) denotes a 1D parallel projection of the function f, i.e. the set of integrals of f along lines parallel to the direction (– sin<, cos<). For this reason, we speak of an acquisition in parallel-beam geometry in this case. The reconstruction problem studied in this chapter thus involves estimating the function f(M) from its X-ray transform defined by equation [2.1]. A consequence of the linearity is that the X-ray transform of a combination of point objects of the type f0(M) = G(M í M0) equals the sum of their X-ray transforms, where Gdenotes a 2D Dirac function. For a point object at M0 = (x0, y0), we obtain:
Rf0\, p = G x0 cos\ + y0 sin\ - p
>2.2@
where Gdenotes a 1D Dirac function. Therefore, the X-ray transform of a point object is non-zero only along a sinusoidal curve in the plane (<, p). For this reason, the function Rf(<, p) is called the sinogram of f.
2.2.2. Fourier slice theorem and data sufficiency condition The transformation that associates Rf, as defined by equation [2.1], with f is not a convolution equation, but is still invariant to translations. We can verify that a translation of the function f(M) by a vector v = (vx,vy) corresponds to a translation of each parallel projection Rf(<,.) by 'p = = vx cos\ + vy sin\. This property of translation invariance plays an important role in the derivation of reconstruction algorithms. Considering the Fourier transforms of the sinogram and the object, it may also be expressed as: FpR f \ ,Q = Ff Qcos\ ,Qsin\ = Ff Qn
>2.3@
Analytical Methods
27
where FpRf is the 1D Fourier transform of Rf with respect to the radial variable p: f FpRf \,Q = ³exp- 2SipQ Rf \, p dp -f
>2.4@
and Ff is the 2D Fourier transform of f: Ff Q x ,Q y =
³³ exp - 2 S ix Q x +
y Q y f x , y dx dy
>2.5@
2
Equation [2.3] is called the Fourier slice theorem. A consequence of this theorem is that f(M) is fully determined if its X-ray transform is known for all orientations < [0,S) and for |p| d R. In fact, for each <, the Fourier transform of the corresponding parallel projection determines the Fourier transform of f along a line with orientation < that crosses the origin in the frequency domain. When < covers the interval [0, S), this line entirely scans the plane (Qx,Qy), and the Fourier transform Ff is thus known everywhere. 2.2.3. Inversion by filtered backprojection A consequence of the translation invariance and of the Fourier slice theorem is the convolution theorem for the Radon transform, which can be expressed as follows. For all pairs of object functions f and sinograms h that are sufficiently regular, we obtain: B(h *1 Rf) = (Bh) *2 f
>2.6@
where *1 on the left-hand side of equation [2.6] denotes a one-dimensional (1D) convolution along the radial variable p, and *2 on the right-hand side a 2D convolution in 2. B is the adjoint operator to R, which associates with a sinogram function g an object function Bg, whose value at each point M is the mean of g for all the lines in the plane crossing M: S
S
0
0
BgM = ³ g\, p = OM,n! d\ =³g\, xcos\ + ysin\ d\
>2.7@
This operator is called a backprojection operator. The convolution theorem permits inverting the Radon transform if a sinogram function h is known whose backprojection Bh is equal to the 2D Dirac distribution centered at O, i.e. Bh(M) = G(M). In this case, we obtain:
28
Tomography
(Bh)*f = G*f = f We can show that a solution of Bh(M) = G(M) is given by the following convolution kernel, defined by its Fourier transform Fh:
FhQ = Q
>2.8@
The Fourier transform is thus a ramp function. For this reason, h is called a ramp filter in tomography jargon. The operator corresponding to the ramp filter is the combination of the derivative transform D and the Hilbert transform H, multiplied by a factor 1/2S. By taking equations [2.6] – [2.8] and expressing the convolution h*Rf by its Fourier transform, we can now derive an inversion formula that involves two steps: 1) Filtering of the measured projection Rf(<, p) with the ramp filter for each angle <: f
f
-f
-f
HDRf \ , p = ³ Q FRf \,Q e2iSQp d Q = ³ h p' Rf \ , p - p' dp'
>2.9@
2) Backprojection of the filtered projection HDRf(<, p): S
f M = ³ HDRf \, p = OM,n ! d\
>2.10@
0
Equations [2.9] and [2.10] define inversion by filtered backprojection (FBP), whose discretization leads to the most commonly used reconstruction algorithm. If the scanner sequentially measures a set of 1D projections with different orientations <, this decomposition enables a complete processing of each 1D projection as soon as it becomes available, while the following projections are still being measured. 2.2.4. Choice of filter The ramp filter has an unlimited support and, therefore, cannot be used for two reasons: the sampling of the radial variable p in intervals of 'p limits the maximal frequency to the Nyquist frequency QNyquist = 1/(2'p), and the filter |Q| amplifies high frequencies, for which the signal-to-noise ratio is the poorest. In practice, the ramp filter is multiplied by a low-pass window w(Q) (also called an apodization window), characterized by its cut-off frequency Qmax d QNyquist and its
Analytical Methods
29
regularity. Its cut-off frequency Qmax determines the spatial resolution in the reconstructed image and its choice requires finding a compromise between statistical error (stability with respect to noise) and systematic error (resolution). The apodized convolution kernel is given by:
hHDWp =
Q max
³Q Q w Q e SQ
2i p
>2.11@
dQ
max
Most often, an ideal low-pass filter or a Hamming, Hann, or Butterworth window is chosen as the apodization window. According to the convolution theorem, this is equivalent to filtering the reconstructed image with a rotationally symmetric 2D kernel, whose transfer function corresponds to w(|Q|). The impulse responses hHDW(p) of the ramp filter with cut-off frequency 0.5 and the ramp filter with a Hamming window and cut-off frequency 0.5 are shown in Figure 2.2.
p
p - 10
-5
5
Ramp filter with cut-off frequency 0.5 and Hamming window
10
- 10
-5
5
10
Ramp filter with cut-off frequency 0.5 and without Hamming window
Figure 2.2. The ramp filter (equation [2.11])
Whatever the choice of w(Q), the convolution kernel hHDW(p) cannot have a limited support, because its Fourier transform |Q|w(Q) is not continuously differentiable at Q = 0. This implies that the reconstruction of f at a point M requires the measurement of its Radon transform Rf for all lines in the plane and not only for the lines that pass through the neighborhood of M. Therefore, the 2D tomographic reconstruction with the filtered backprojection algorithm is called non-local. For many medical and industrial applications, it is impossible to measure the whole Radon transform, since either the whole range of projection angles is not
30
Tomography
accessible (the limited angle problem), or the projections are not complete (the truncated projections problem). In the second case, the use of a reconstruction kernel with limited support (pseudo-local tomography) or even a support restricted to p = 0 (local tomography) is mandatory [RAM 96]. The simplest way to construct a pseudo-local kernel is to multiply the kernel hHDW(p) with a spatial window of limited support, for example a rectangular window with a width of 2a. f(M) may then be reconstructed from the integrals of f along the lines passing through a neighborhood of M with radius a. This approximate reconstruction sacrifices quantitative precision, but reproduces high frequencies well, which permit localization of the borders between different homogenous regions of f(M). Going even further, a local reconstruction is obtained by using the second derivative as convolution kernel, i.e. by replacing the ramp filter HD in equation [2.9] by the second derivative filter –D2: – D2Rf (\, p) =
f
2
³ 2S v FRf \ , v e f
2 iS vp
dv
>2.12@
In this case, an approximate reconstruction of f(M), which only reproduces the discontinuities, is obtained solely from the lines passing through an arbitrarily small neighborhood of M. An exact reconstruction of the region of interest is possible for certain incomplete data sets. These recent methods are based on the backprojection of the derivative of the projections:
b( M )
ʌ
³ DRf (Ȍ , p OM, n !) dȌ
0
It can be shown [NOO 04, ZHU 04] that the function obtained in this way is the Hilbert transform of f along the lines parallel to the axis y, n(0)=(0,1): b( M )
f 2 dy PV ³ f (M y n(0)) ʌ OM , n (0 ) ! y f
where PV denotes the principal value of the Cauchy integral. The function f(M) may then be reconstructed along a line L parallel to the axis y by applying the inverse Hilbert transform. The calculation of the function b(M), and thus the reconstruction of f(M), along this line L does not require knowledge of the line integrals of f except for the lines passing through a neighborhood of L. This result permits an exact reconstruction from incomplete data.
Analytical Methods
31
Finally, the Radon transform allows numerous generalizations to better describe the conditions of the propagation of radiation in matter. An example is the attenuated Radon transform for single photon emission computed tomography with correction of the attenuation of radiation in matter. An inversion formula has been proposed to solve the general case for a non-uniform attenuation map [NAT 01a, NOV 00]. 2.2.5. Frequency–distance principle Each sample (<, p) of the Radon transform (equation [2.1]) is a line integral of the unknown function f(M) (see section 2.2.1). The value of this sample thus provides no information on the localization along the line. The frequency–distance principle is an approximate property of the Radon transform which enables partial recovery of such information [EDH 86]. This property is a consequence of the stationary phase principle and is given here without proof. The 2D Fourier transform of the sinogram is defined as: FRf (k ,Q )
2S
f
0
f
³ exp(ik\ ) ³ exp(2iSpQ ) Rf (\ , p)dpd\
>2.13@
where k Z and Q . It has to be noted that each sample of this Fourier transform is a linear combination of the whole set of line integrals. The frequency– distance principle can then be formulated as follows: at high frequencies (_Q_-> f), the value of FRf(k,Q) depends in the first approximation on the values of the function f(M) at the points M located at a distance t = – k/Q along the lines of integration, with the variable t being defined as in equation [2.1]. This result enables determination of the support of the 2D Fourier transform of the sinogram of an image whose support is contained in a centered disk with radius R (see Chapter 3). The frequency–distance principle implies that FRf(k,Q) | 0 when |k|>R|Q|. This constraint may be exploited when the Radon transform of a function cannot be measured for all lines D in the slice, for example due to dead angles of the detection system [KAR 88]. The frequency–distance principle then enables extrapolation of the measurements in order to complete the sinogram, provided that the regions that have not been sampled are not too large. In section 2.6.2, another application of the frequency–distance principle in 3D PET reconstruction is discussed.
32
Tomography
2.3. 2D Radon transform in fan-beam geometry 2.3.1. Definition The parametrization of the Radon transform of a function f requires describing the set of lines in space that pass through the support of the object. On CT scanners, each line is defined by the point S of the anode of the X-ray tube where the X-ray photons are created and the center A of the detector element. On CT scanners with 1D detectors, these points A are distributed over a circle segment centered at the point S. These lines cross at S and form a fan beam. Therefore, we speak of fan-beam geometry. A line D is defined by the angle \ between the line (S, O) and the axis (O, y), and by the angle J between the acquisition line (S, A) and the line (S, O) passing through the origin O and the source point S (see Figure 2.3). The source point S moves on a circle with radius R.
y Od
A(M) M J
O
J x
D(\,J)
< S
Figure 2.3. 2D Radon transform in fan-beam geometry (equation [2.14])
Analytical Methods
33
The 2D Radon transform in fan-beam geometry associates with each function f and each line D(\,J) its integral projection:
X f f \, A = X f f \,J =
³f M dM
>2.14@
M D\ ,J
2.3.2. Rebinning to parallel data In fan-beam geometry, each line D in a space with coordinates (\,J) is associated with its coordinates (M, p) in parallel-beam geometry as defined in section 2.2.1. As each line crosses the circle at two points, this coordinate transformation is described by the following two relations:
\ = M - J ° M = \ + J ® p - R sinJ ®J = arcsinª p º ¯ « R» °¯ ¬ ¼ \ = M - J - S M = \ + J + S ° ® p R sinJ ®J = arcsinª p º ¯ «¬ R »¼ °¯
>2.15@
The principle of reconstruction algorithms, which rely on a rebinning of the data to parallel-beam geometry, involves calculating the equivalent parallel-beam sinogram, following the relation Rf(M, p) = Xf f(\, J), by first applying these formulas for the coordinate transformation (equation [2.15]), and then reconstructing the associated image using the filtered backprojection algorithm described in section 2.2.3 [BES 99]. This change in geometry involves, for each pair (M, p), a bilinear interpolation between the measured pairs (\, J) that surround the required values.
2.3.3. Reconstruction by filtered backprojection Starting from the inversion formula for the Radon transform in parallel-beam geometry defined in section 2.2.3 (equations [2.9] and [2.10]) and applying the coordinate transformation (equation [2.15]), we obtain the direct inversion formula in fan-beam geometry [HER 77]. The inversion process involves three steps: 1) Weighting of the measured projections Xf f (\, A):
34
Tomography
Yf \ , A = X f f \ , A .
SO d ,SA !
>2.16@
SO d . SA 2) 1D filtering along the variable J, for each angle \: HDYf (\, J) = hf * Yf
>2.17@
where: 2
ª J º h f J = R.« .h p >J.R @ ¬ sinJ »¼
>2.18@
and hp is the ramp filter (equation [2.8]) used in parallel-beam geometry. 3) Weighted backprojection: 2
f M where 1 = SOd r SO
SA M 2 1 2S HDYf \ , A M . .r d\ 2 ³ 2 \ 0 SM
[2.19]
is the magnification factor and A(M) is the point of inter-
section between the line (S, M) and the detector. This formula is defined for an acquisition over an entire rotation, in which \ varies between 0 and 2S. In contrast to the reconstruction algorithm using rebinning, this direct algorithm processes each acquisition sequentially as soon as it is finished. Moreover, it avoids the interpolation step and the intermediate storage of the rebinned data. However, the calculations are more complex due to the weighting and the geometry, and the numerical implementation of the backprojection in fan-beam geometry may produce subsampling or aliasing artifacts on the edges of objects.
2.3.4. Fast acquisitions The formula above supposes an entire rotation over an angular range of 360° of the source–detector pair. Since all lines cross the circular trajectory of the source S at two points, the fast acquisition mode, also called short-scan acquisition, may select only one of them. This entails the source S covering an angular range of 180°
Analytical Methods
35
plus the fan angle. Certain lines in the first and last acquisitions are, however, sampled twice. Therefore, a weighting coefficient is associated with each measurement to guarantee a uniform weight for each line in space (Parker algorithm) [PAR 82]. The exact reconstruction of a region of interest from projections in fan-beam geometry is possible even if the angular range is insufficient to measure all the integrals along lines that cross the support of the object. A sufficiency condition for an exact reconstruction is that each line passing through a neighborhood of the region of interest has at least one intersection with the segment of the trajectory described by the source S during the acquisition [NOO 02]. This result is based on the following relation between the filtered projections in parallel and fan-beam geometry:
HRf (I , p OS, n(I ) !)
2ʌ
SA
0
ʌ SA, n A (I ) !
PV ³ dJ
Xf (S, A)
where n(I) = (cos I, sin I). For example, an acquisition with a circular trajectory and an angular range of 180° permits the exact reconstruction of an object in the half-plane defined by the diameter that connects the two extremities of the halfcircle trajectory of the source S. Other applications are found in [NOO 02, CLA 04].
2.3.5. 3D helical tomography in fan-beam geometry with a single line detector A 3D image consists of a set of transverse 2D slices. For CT scanners with a single line detector, the acquisition of single slices is concatenated by combining the rotation of the scanner with a translation of the patient bed such that the source advances by exactly the slice thickness during each rotation [CRA 90, KAL 95, VAN 96]. The source thus moves on a helical trajectory with respect to the patient. The function f is parametrized in a 3D coordinate system (O, x, y, z), where (O, z) corresponds to the rotation axis. The circular rotation of the source S, parametrized by the radius R and the angular position \, is associated with a translation along z according to:
z=h
\ -\ o 2S
>2.20@
where h is the pitch of the helix and \o is the angular position of the source S in the reference plane (O, x, y) (see Figure 2.4).
36
Tomography
We denote by Xf f(\, J, z) the 3D X-ray transform, which is defined by the integral projection along the line D(\, J, z), contained in the transverse plane through z and parallel projected along the axis (O,z) onto the line D(\, J) of the plane (O, x, y). P2ʌ
P
P2
1
y x
O
Z
Z
Z
S
1
A
1
J S
<
S
1
A
1
2
z A
2
S
2
1
SS 2
W
1
W
2
Figure 2.4. 3D helical tomography in fan-beam geometry with a single line detector (equation [2.20])
If we project the acquisition geometry onto the transverse plane P2S of height z2S, we obtain a set of acquisitions in fan-beam geometry. The principle of the reconstruction algorithm is to apply the filtered backprojection algorithm, which is described in section 2.3.3, choosing as angular origin the position \2S , for which the source is contained in the transverse plane P2S. For each line D(\, J, z2S) in this transverse plane, we search for the two source positions S1 and S2 at z1 and z2, respectively, which are located on opposite sides of the plane P2S, such that the acquisition lines D(\, J, z1) and D(\, J, z2) are parallel to the line D(\, J, z). We then calculate the desired projection Xf(\, J, z) by linear interpolation between the two measured projections Xf f(\, J, z1) and Xf f(\, J, z2):
Xf f (ȥ,Ȗ,z2ʌ) = w2.Xf f (ȥ,Ȗ,z1) + w1.Xf f (ȥ,Ȗ,z2) where the interpolation coefficients w1 and w2 are given by:
>2.21@
Analytical Methods
w
z 1
2ʌ
-z
z -z 2
1
1
w
z -z 2
2ʌ
z -z
2
2
37
1
Following this principle, acquisitions for two entire rotations on both sides of the plane P2S to be reconstructed must be available to apply equation [2.19], which is defined for one entire rotation. By using the short-scan reconstruction algorithm introduced in section 2.3.4, the angular range may be constrained to twice 180° plus the fan angle. 2.4. 3D X-ray transform in parallel-beam geometry 2.4.1. Definition
The 2D Radon transform may be generalized in two ways to the 3D case: the 3D Radon transform associates with a function f(M) = f(x, y, z) the set of its integrals over planes in 3, while the 3D X-ray transform, which is studied in this section, associates with f(M) the set of its integrals along lines in 3 [NAT 86]. As before, we assume that the function f is sufficiently regular and is zero outside a centered sphere with radius R. We define a line D by its direction, given by a unit vector n, and by a point on it, to which a vector s points from the origin. To avoid redundant parameters, we constrain the vector s to the points of the plane nA perpendicular to n and through the origin O, nA = {M 3 | = 0}. The X-ray transform then reads : Xf n, s =
³
MD n ,s
f M dM =
f
³ f s + tn dt
n S2 ,
s nA
>2.22@
-f
where t denotes the abscissa along the line D. For each direction n, the set Xf(n,s), s nA, of integrals along the lines parallel to n defines a 2D parallel projection of f(M), as illustrated in Figure 2.5. We parametrize the direction by the coplanar angle E between D and the transaxial plane z = 0, and by the azimuthal angle < , with n = (– cosE sin<, cosE cos<, sinE). The position of D is defined by the two Cartesian coordinates (p, q) in nA, associated with the two orthonormal vectors 1p = (cos<, sin<, 0) and 1q = (sinE sin<, – sinE cos<, cosE), thus s = (p 1p + q 1q). We use the two notations Xf(n,s) and Xf(E,<, p, q). It is worth noting that Xf(E= ,<, p, q) is the 2D Radon transform of the slice z = q, where the variables \ and p have the same meaning as in section 2.2.1.
38
Tomography
2.4.2. Fourier slice theorem and data sufficiency conditions
The Fourier slice theorem for the 2D Radon transform (equation [2.3]) is easily generalized to the case of the 3D X-ray transform and may be expressed as: Xf(E,\,p,q) 1q
1p
f(M) z
1q
n y
E \
x
0 1p
Figure 2.5. 3D X-ray transform in parallel-beam geometry (equation [2.22])
FXf n, Ȟ = Ff Ȟ
n S2 ,
Ȟ nA
>2.23@
where FXf is the 2D Fourier transform of Xf with respect to s:
FXf n, Ȟ
³³ exp 2iʌ s, Ȟ ! Xf n, s ds
Ȟ n
A
>2.24@
nA
and Ff is the 3D Fourier transform of f:
Ff Ȟ = ³³ exp- 2iʌ OM, Ȟ ! f M dM 3
Ȟ 3
>2.25@
We assume that parallel projections are measured for a set of directions n :, where : is a subset of the unit sphere S2, which depends on the scanner. Moreover, we assume that all the lines D parallel to n are measured for each direction n :(the 2D
Analytical Methods
39
projection is untruncated). Under these conditions, the Fourier slice theorem enables calculation of the 3D Fourier transform of f(M) for a frequency Q R3 from a 2D projection in any direction n that satisfies n.Q = 0. We obtain the sufficiency condition that the data allow the stable reconstruction of a unique image f(M) if and only if all great circles on the unit sphere have a non-empty intersection with the set : of measured directions. This property is called the Orlov condition [ORL 75]. This result underlines data redundancy, an essential property of the 3D X-ray transform. In fact, if the great circle orthogonal at a certain frequency Q has more than one intersection with :, several independent, redundant estimates of Ff(Q) are available. This redundancy may also be understood intuitively by noting that f is a function of three variables (x, y, z), while Xf is a function of four (E, <, p, q). 2.4.3. Inversion by filtered backprojection
Inversion of the 3D X-ray transform may be achieved by filtered backprojection as in the 2D case. We assume that the set of measured directions : satisfies the Orlov condition and that the 2D projections are untruncated. 1) 2D filtering of the measured projection Xf(n, s) for each direction n :: H c Xf n, s = ³³ H : n,Q FXf n,X exp2iʌ s,X ! dQ n
s nA
>2.26@
A
2) Backprojection: f M = ³³ H c Xf n, s = OM - OM.n ! n dn
>2.27@
:
As in 2D, the backprojection corresponds to a sum over all lines crossing the point M. Due to data redundancy, the filter H:(n, Q) does not have a unique form [DEF 93]. The filter that is most used is the Colsher filter: § · H : n,Q = ¨¨ ³³ G n.Q dn ¸¸ ©: ¹
-1
Q n
A
>2.28@
whose explicit form may be found in [COL 80] for the case where : is an equatorial band on the unit sphere. Under fairly general hypotheses, the Colsher filter minimizes the variance of the reconstructed image.
40
Tomography
2.5. 3D Radon transform 2.5.1. Definition In an n-dimensional space, the Radon transform associates with the function f(M) the set of its integrals over the hyperplanes of the space, i.e. over the affine subspaces of dimension n–1. In three dimensions, these are conventional planes. A plane P is defined by the azimuthal angle M and the colatitude angle T of its unit normal vector n, and by the algebraic distance U between this plane and the origin O (see Figure 2.6). The 3D Radon transform of the function f then reads:
Rf n, U =
³ f M dM
MP n, U
³³ f(u. cos M. cos T v. sin M U. cos M. sin T,u. sin M. sin T
>2.29@
v. cos M U. sin M. sin T, u. sin ș U. cos T)dudv We denote by DU Rf and DU2Rf the first and second partial derivatives of Rf with respect to the algebraic distance U, and we call them first and second derivatives of the 3D Radon transform, respectively.
z
U
n
C
T
f(x,y,z)
P(n,U) 0
M
x
Figure 2.6. 3D Radon transform (equation [2.29])
y
Analytical Methods
41
2.5.2. Fourier slice theorem Let FURf(n,Q) be the Fourier transform of Rf(n,U) with respect to the variable U and let Ff(Qn) be the radial variations of the 3D Fourier transform of f along the line through the origin defined by the unit vector n. The Fourier slice theorem then reads: FURf(n,Q) = Ff(Qn)
>2.30@
According to this, the Fourier transform Ff(Qn) is known for the whole space if the Radon transform is known for all vectors n in one half of the unit sphere. When this condition is satisfied the function f can be reconstructed. This is in particular the case if (M, T) is in >0, S@ u >0, S@. 2.5.3. Inversion by filtered backprojection Starting from the inversion formula for the Fourier transform, the inversion formula for the 3D Radon transform and its derivatives is obtained with the Fourier slice theorem (equation [2.30]) [CHI 80] : f(M) =
1 4ʌ 2
ʌ ʌ 2 ³M = 0 ³T = 0 D ȡ Rf n, OM, n ! sinTdT dM
>2.31@
This formula corresponds to a filtering of the projections Rf(n,U) along U by a second derivative filter, and a backprojection in each point M along the planes crossing the point M. The filter to be applied being a second derivative, the size of its support is virtually zero. Thus, it is only necessary to know Rf for the planes passing through a neighborhood of the point M to reconstruct f(M). The inversion of the 3D Radon transform is consequently local, in contrast to the inversion of the 2D Radon transform. From a numerical point of view, it is preferable to invert the Radon transform in two backprojection steps to reduce the number of calculations [MAR 80]:
HDX p f M , B = ʌ
1 4ʌ
ʌ D 2 Rf n, OB, n ! sinT dT 2 ³T 0 U
f M = ³M = 0 HDX P f M , BM , M dM
>2.32@ >2.33@
42
Tomography
where B(M,M) is the orthogonal projection of the point M onto the meridian plane with azimuthal angle M. Equation [2.32] corresponds to a concatenation of a filtering and a backprojection along lines on each meridian plane. It leads to the X-ray transform in parallel-beam geometry XP, filtered along the transaxial direction by the ramp filter HD. Equation [2.33] defines a backprojection along the lines on each transverse plane.
2.6. 3D positron emission tomography 2.6.1. Definitions We now consider the 3D reconstruction for a cylindrical multiline positron emission tomography scanner with radius Rc and length 2Lc, whose axis is aligned with the z axis. We assume that the data have been adequately corrected for the secondary effects described in Chapter 14 and that the data can be modeled as integrals of the distribution function f(M) of the radioactive tracer along lines (or lines of response (LORs)) that connect two points of the lateral surface of the cylindrical detector. Thus, the 3D X-ray transform of f(M) for this set of lines is available to reconstruct f(M). As described in Chapter 14, these data can be recorded in the form of either oblique sinograms or truncated parallel projections (the truncation is due to the fact that the scanner is in general not long enough to cover the whole object with its field of view, i.e. the support of f(M)). The following two sections introduce reconstruction algorithms that use these two types of parametrizations. It is assumed that the sampling is continuous. The problem of discretizing these algorithms is not discussed. The 3D reconstruction algorithms in PET must meet the following criteria: – permit a rapid implementation despite the large amount of data (more than 4 x 109 LORs are measured with the ECAT HRRT scanner); – optimally exploit data redundancy to minimize the variance of the reconstructed image.
2.6.2. Approximate reconstruction by rebinning to transverse slices We assume that the data are organized in oblique sinograms (see Chapter 14): each oblique sinogram comprises all LORs that connect two points of the cylinder with axial coordinates z1 and z2, where |zi| d Lc. As is shown in Figure 2.7, we define the lines D for a fixed oblique sinogram (z1, z2) by the variables < and p (see section 2.2.1), which define the orthogonal projection of D onto the transaxial plane. The data are then given by:
Analytical Methods
§ § z + z2 g ȥ,p,z 1 ,z 2 = Xf ¨ E ,\ , p, q = ¨¨ 1 ¨ 2 © ©
· · ¸ cos E ¸ ¸ ¸ ¹ ¹
43
>2.34@
where
§ z -z ¨ 1 2
· ¸ ¸, \ >0, ʌ , p d R, zi d Lc 2 ¨ 2 R2 - p ¸ c © ¹
E = arctan ¨
and the 3D X-ray transform Xf is defined by equation [2.22]. It is worth noting that when z1 = z2 = z, the sinogram coincides with the 2D Radon transform of the transaxial slice z. The slice z may be reconstructed from this single sinogram using the numerically efficient 2D filtered backprojection algorithm (see equations [2.9] and [2.10]). f(M) may thus be reconstructed very easily from the subset of direct sinograms {g(<, p, z1 = z, z2 = z), – Lc d z d Lc}. This approach is unsatisfactory because it uses only a part of the measured data and does not exploit the data redundancy. Rebinning algorithms use all available oblique sinograms to estimate the 2D Radon transform of each slice z. Each slice is then independently reconstructed by inverting its 2D Radon transform. The simplest rebinning algorithm (called single-slice rebinning (SSRB)) [DAU 87] exploits the fact that the maximum angle E= arctan(Lc/Rc) between the lines D and the transaxial plane z = 0 does not exceed approximately 10° on many scanners. In the first approximation, this angle may be neglected, and it may then be assumed that the oblique LOR (<, p, z1, z2) measured by the scanner is identical to the direct LOR (<, p, (z1 + z2)/2, (z1 + z2)/2) in the plane of the slice z = (z1 + z2)/2, which contains the median point of the LOR. To optimize the signal-to-noise ratio, all the oblique LORs that are identical are averaged, thus yielding the following rebinned sinograms:
Xf 0,\ , p, z |
1
Lc
³ g \ , p, z 1 , z 2 = 2 z - z1 dz1 2Lc - z 2 z-Lc
0 d z d Lc
>2.35@
A similar expression is obtained for the slices íLc d z d 0. In practice, the integral over z1 is replaced by a sum over the index of the detector ring.
44
Tomography
2
1
Figure 2.7. X-ray transform in PET and rebinning of transverse slices (equation [2.34])
Figure 2.7 shows that the SSRB approximation entails a degradation of the axial resolution (in z) proportional to the distance to the axis of the scanner. For certain scanners with large axial fields of view (2Lc = 25 cm), this degradation becomes unacceptable. As is evident from Figure 2.7, this problem can easily be solved if the distance of the source of activity along the LOR is known more precisely. If this distance is t, as defined in equation [2.2], the transaxial slice z = (z1 + z2)/2 + t tan E to which this LOR must be assigned in the rebinning may be calculated. The rebinning by Fourier transform (FORE: Fourier rebinning) is based on this approach and exploits the frequency–distance principle (see section 2.2.5) to approximately determine the value of t using the 2D Fourier transform of an oblique sinogram [DEF 97]. The basic relation of the FORE algorithm is:
§ § z + z2 FXf ¨ 0, k ,Q , z = ¨¨ 1 ¨ 2 © ©
· · §k· ¸ - ¨ ¸ tan E ¸ | Fg k ,Q , z , z 1 2 ¸ ©Q ¹ ¸ ¹ ¹
Q z0
>2.36@
where E is linked to z1 and z2 as in equation [2.34], and F denotes the 2D Fourier transform with respect to the variables \ and p, to which the frequencies k Z and Q respectively correspond. The set of oblique sinograms thus provides several redundant estimates of FXf(0, k, Q, z), which are averaged to optimize the signal-tonoise ratio. As equation [2.36] is not accurate for small frequencies Q, the SSRB algorithm is often used for them. The FORE algorithm is sufficiently precise for clinical applications up to angles E of about 30°. Exact rebinning algorithms have been proposed for larger values of E.
Analytical Methods
45
2.6.3. Direct reconstruction by filtered backprojection
We have seen in section 2.4.3 that a 3D image f(M) can be reconstructed by filtered backprojection if its 2D parallel projections are known for a set of directions n : which satisfy the Orlov condition. In the case of a multiring scanner, LORs for angles < [0,S) and E [– Emax, + Emax], where Emax = arctan(Lc/Rc), are measured. The set : is then an equatorial band with an opening angle of 2Emax on the unit sphere. It satisfies the Orlov condition because it has an intersection with all great circles. Applying the filtered backprojection is not always possible because the 2D parallel projections are in general truncated for E z 0 due to the finite length of the scanner. More specifically, if the support of f(M) is axially unlimited, the projection with orientation (E, <) is only measured for:
q d Lc.cos E sin E . R 2c p 2 while it has be known for:
q d Lc.cos E sin E . R 2c p 2 to avoid truncation. The 3DRP (3D reprojection) algorithm resolves this problem by exploiting the fact that the subset of direct projections (E= 0) is in fact sufficient to reconstruct f(M) [KIN 90]. The reconstruction then proceeds in four steps: – reconstruction of a first estimate f0(M) (unbiased but with low signal-to-noise ratio) by 2D reconstruction of each slice z from its 2D Radon transform Xf(0,\,p,z); – calculation of 2D parallel projections of f0(M) in the non-measured region given by: 2 2 2 2 Lc.cos E sin E . R c p d q d Lc.cos E sin E . R c p
– fusion of these estimates with the measured data to obtain untruncated 2D projections in the band : – reconstruction of these projections by filtered backprojection with the Colsher filter (equation [2.28]). The 3DRP algorithm is the analytical algorithm of reference in 3D PET.
46
Tomography
2.7. X-ray tomography in cone-beam geometry 2.7.1. Definition
As we have seen in section 2.3, X-ray transmission acquisitions enable measurement of integral projections of the function f along lines through the point S at the anode of the X-ray tube, where the photons are created. For each point S of the trajectory * described by the anode during the acquisition, the 3D X-ray transform in cone-beam geometry associates the set of integrals along lines D in 3 that cross S with the function f (see Figure 2.8). We denote by D(a,S) the line that crosses S and it has direction a. The 3D X-ray transform then reads: f
X c f a,S
³
f M dM
M D a ,S
2
³ f S ta dt , a S , S *
>2.37@
f
These lines form a cone with apex S. To parametrize these lines, we use in the following the detection plane PX (see Figure 2.8). In the case of a planar detector, the detection plane is defined as the plane parallel to the detector that contains the origin O of the coordinate system. In the case of a helical trajectory, this plane may also be defined as the plane containing the rotation axis and perpendicular to the transverse line that connects the source S to the trajectory axis. We denote by G the orthogonal projection of the point S onto the detection plane PX. For each position S of the source and each point A of the detection plane, the 3D X-ray transform in cone-beam geometry associates with the function f its integral projection along the line D(S,A):
X c f S, A
³
f M dM
>2.38@
M D S,A
2.7.2. Connection to the derivative of the 3D Radon transform and data sufficiency condition
In 3D, the set of lines forms a four-dimensional (4D) space, while the set of planes forms a 3D space. This is the reason why the 3D Radon transform, which defines integral projections over planes, plays a central role.
Analytical Methods
47
Figure 2.8. 3D X-ray transform in cone-beam geometry (equation [2.37])
Grangeat established the exact relation that links the 3D X-ray transform in cone-beam geometry and the first derivative of the 3D Radon transform [GRA 91]. We divide this formula into two steps: 1) Weighting of the measured projections:
Yf S, A
X c f S, A .
SG,SA SG
SA
>2.39@
2) Filtering and reprojection of weighted projections: Each plane P(S,n) through S and perpendicular to n intersects the detection plane PX in a line '(S,n). We define the summed integral projection SYf(S,n) as the integral of Yf along this line '(S,n):
SYf S, n
³
A' S,n
Yf S, A dA
>2.40@
48
Tomography
Let p’ denote the algebraic distance of this line of integration from the origin O. For a given source position S, the partial derivatives of SYf and Yf with respect to p’ satisfy:
D p ' SYf S, n
³
D p 'Yf S, A dA
>2.41@
A' S,n
The Grangeat formula then reads:
SG
2
SG n
2
. D p' SYf S,n = DU Rf n, OS , n !
>2.42@
Its numerical implementation relies on filtering operations to calculate the partial derivative Dp’Yf and on summation to compute the integration along the line '(S,n). Equation [2.42] suggests that the first derivative DURf of the Radon transform may be calculated from the X-ray transform of f for all planes that cross the trajectory *. However, according to equation [2.31], f can only be reconstructed if DURf is known for all planes passing through the support of the object. Thus, the Tuy condition is obtained: the X-ray transform enables reconstruction of a function f if all the planes passing through the support of f cross the trajectory * at at least one point [TUY 83]. Equation [2.42] also suggests that this condition may be restricted to the planes that cross the region of interest within the support of f, because the first derivative of the Radon transform may be inverted locally. An infinite line and a helical trajectory that encompasses the support of the object satisfy the Tuy condition. By contrast, a circular trajectory does not satisfy the Tuy condition for all points outside the plane of the trajectory. For these points, the planes that are almost parallel to the plane of the trajectory do not cross the latter. All these planes define a shadow zone in the Radon domain. Points in the shadow zone correspond to planes which cannot be assigned a value using the available circular acquisition. Tuy established an inversion formula that can be interpreted as a relation between the X-ray transform in cone-beam geometry and the second derivative of the 3D Radon transform [TUY 83]. Smith proposed a relation between the X-ray transform in cone-beam geometry and the Hilbert transform of the first derivative of the 3D Radon transform [SMI 85].
Analytical Methods
49
2.7.3. Approximate inversion by rebinning to transverse slices
We now consider the case of circular or helical trajectories, and we choose as the detection plane PX the plane that contains the axis and that is perpendicular to the transaxial line connecting the source with the axis. Each detection line which is contained in the detection plane and is perpendicular to the rotation axis defines with the point source S an oblique plane. In the algorithms based on rebinning to transverse slices, the measured oblique line integrals in such an oblique plane are assimilated to transverse line integrals in the transverse plane that intersects the detection plane in the same detection line and contains the orthogonal projection of the point S onto this transverse plane (see Figure 2.9). The measurements in cone-beam geometry are thereby rebinned into measurements in fan-beam geometry in a set of transverse planes. In the case of helical trajectories, we assign a certain thickness to the transverse slices which must be reconstructed. Each line integral in a slice is then calculated by averaging over the rebinned transverse planes that are located axially inside this slice thickness. The averaging is implemented as a filtering along z [BRU 00, HU 99, NOO 99, TAG 98]. In this way, a set of 2D projections in fan-beam geometry is obtained for each transverse slice of the object, which may be reconstructed by the algorithms described in section 2.3, either by rebinning to parallel data, or by filtered backprojection in fan-beam geometry. The geometric accuracy of this rebinning from oblique to transverse slices decreases with increasing distance to the rotation axis. To avoid blurring on the edges of the image, such an algorithm may only be used if the maximum copolar angle of the measured line integrals is very small. This is the case on multiline scanners with only a small number of lines, typically not more than four. A variant of this method [LAR 98, KAC 00a] involves rebinning the data acquired on a helical trajectory to a set of oblique instead of transverse slices. These oblique slices are chosen such that the distance between each slice and the points of the segment of the helix used for the reconstruction are minimized (this short-scan segment covers 180° plus the fan angle (see section 2.3.4)). After reconstruction, the reconstructed oblique slices are interpolated in z to obtain normal transverse slices for the visualization. This approach allows limiting of artifacts, but is nevertheless unsuited to current multiline scanners with more than 32 lines.
50
Tomography
2.7.4. Approximate inversion by filtered backprojection
To avoid these artifacts on the edges of objects, it is preferable to adapt the reconstruction to the acquisition geometry by performing the backprojection along the oblique planes (see Figure 2.9). By analogy with the 2D rebinning algorithm, a first type of algorithms involves transforming the cone-beam projections into a set of projections in “parallel fan-beam” geometry: the line integrals are grouped into planes parallel to the rotation axis and parametrized with the usual sinogram variables (see section 2.2.1). Within each plane, the line integrals are rebinned to a fan-beam geometry. A reconstruction algorithm in parallel-beam geometry along all the oblique planes crossing the same point is then applied [GRA 00, PRO 00].
Figure 2.9. The two backprojection geometries for the direct reconstruction algorithms in cone-beam geometry
A second type of algorithm was proposed by Feldkamp, Davis and Kress [FEL 84]. Its principle is to generalize the direct reconstruction algorithm for fanbeam geometry described in section 2.3.3 to cone-beam geometry. This reconstruction algorithm, called the Feldkamp algorithm, involves three steps:
Analytical Methods
51
1) Weighting the measured projections: Yf S, A = Xf S, A .
SO , SA ! SO SA
>2.43@
2) Filtering along the variable p of each transaxial line in the detection plane PX: HDYf(S,p,q) = hp *P Yf(S,p,q)
>2.44@
where hp is the ramp filter (equation [2.8]) used in parallel-beam geometry, and p and q are the Cartesian coordinates of the point A in the detection plane. 3) Weighted backprojection in cone-beam geometry: SA M ~ 1 f M = ³\2ʌ= 0 HDYf S, AM . 2 2 SM
2
>2.45@
d\
where A(M) is the point where the line (S,M) intersects the detection plane PX. When the source moves on a helical trajectory, this algorithm may be generalized. We follow here the approach proposed by Wang [WAN 93]. To reconstruct a point M, we define the source position SM with its angle \M contained in the same transverse plane PM, and we consider an angular range corresponding to a complete helical rotation centered at SM. This is equivalent to two 180° half-rotations above and below the transverse plane PM, respectively. We then apply the same formula as in the Feldkamp algorithm (equation [2.45]):
1 f M = 2
\ M S
³\
=\ M S
HDYf S, A M .
SA M SM
2
2
d\
>2.46@
where A(M) is again the point where the line (S,M) intersects the detection plane PX. This approach may be generalized by no longer considering one rotation, but n rotations centered at PM, and by averaging the values obtained in these n rotations. For a large fan angle, particular precautions must be taken when the detector has a limited axial height compared to that of the region of interest and when the helix trajectory is not long enough to cover the complete axial extent of the object.
52
Tomography
As for the 2D fan-beam algorithm for a complete rotation, the short-scan fan-beam algorithm can be extended to a 3D reconstruction in cone-beam geometry. In the case of helical trajectories, this enables helical pitches close to twice the height of the detector measured at the level of the detection plane PX to be reached. 2.7.5. Inversion by rebinning in Radon space The algorithms proposed previously rely on geometrical approximations, which neglect certain correction factors. The principle of indirect algorithms is to exploit exact relations, using the Radon domain as an intermediary [GRA 97]. Grangeat proposed calculating the first derivative of the 3D Radon transform DURf from the X-ray transform by first using an exact formula (equation [2.42]), and then inverting the first derivative of the 3D Radon transform (equation [2.31]). If the acquisition trajectory satisfies the Tuy condition, the Radon domain is entirely covered, and the reconstruction is thus mathematically exact. Equation [2.42] enables calculation of DURf(,n) in the acquisition coordinate system (see Figure 2.8). The inversion formula requires knowledge of DURf in the spherical coordinate system (U, T, \) used to parametrize the Radon domain (see Figure 2.6). The transformation from the acquisition to the spherical coordinate system constitutes the rebinning equation. In the case of a circular acquisition trajectory with radius R, the plane with coordinates (U, T, M) crosses the trajectory in two positions \ of the source S, which satisfy an equation of the type: R.sin (M í \).sin T = U
>2.47@
As for 2D fan-beam geometry, this rebinning may be interpreted as a transformation from cone-beam to parallel-beam geometry. This principle of indirect reconstruction via the first derivative of the Radon transform is very general and may be adapted to numerous cases. It notably permits averaging redundant information in the case where several source positions belong to the same plane. In the case of missing information, as for the circular trajectory, it is possible to fill the shadow zone by interpolation and to thus reduce the associated artifacts. This principle is well suited for area detectors that are sufficiently large to obtain untruncated cone-beam projections of the object, as in the example shown in Figure 2.8 [RIZ 91]. Indirect reconstruction via the Radon transform is, by contrast, of less interest for multiline detectors that are currently in use in CT scanners, which have only a limited number of lines.
Analytical Methods
53
2.7.6. Katsevich algorithm for helical cone-beam reconstruction When the detector has limited axial height and when the axial cone angle of the helix covers only part of the analyzed object, the cone-beam projections are truncated in the axial direction. In this case, which occurs in clinical applications on multiline scanners, an exact reconstruction of a region of interest is, nevertheless, possible if the axial height of the detector is sufficient to cover the Tam window. This window [TAM 98] is the region of the detector limited by the cone-beam projection onto the detector of two turns of the helix located before and after the considered source point S. It can be shown that each point M of the object belongs to one and only one line connecting two source points S1(M) and S2(M) of the helix separated by less than one rotation. e denote by <1(M) and <2(M) the angles of S1(M) and S2(M). Such a line is called a pi-line. The Katsevich algorithm [KAT 02] is a filtered backprojection method that permits the exact reconstruction of f(M) from data measured along the helix segment between S1(M) and S2(M), defined by the pi-line containing M. The steps involved in the Katsevich algorithm are: 1) Derivation of the measured projections with respect to the angular parameter of the helix, for a constant direction n:
Dȥ Xf S, n
w Xf S(\ ), n w\
>2.48@
2) Hilbert filtering along a set of lines in the detection plane FK Xf S, A
2ʌ
1
³ S sin J DȌ Xf (S, cosJ .ș + sinJ .(ș m(S, T ))) dJ 0
>2.49]
where T = SA/||SA|| and m(S,T) is a unit vector defined in [KAT 02]. 3) Weighted backprojection in cone-beam geometry on the segment of the helix defined by the pi-line: f M = -
1 Ȍ 2(M) 1 ³Ȍ 1( M ) FK Xf S, AM . 2ʌ SM
dȌ
>2.50@
where A(M) is the point where the line (S(<),M) intersects the detection plane PX. A detailed description of the algorithm and an efficient numerical implementation can be found in [NOO 03]. A limitation of this algorithm is that it does not exploit all of the measured data but only the part contained in the Tam window.
54
Tomography
2.8. Dynamic tomography 2.8.1. Definition In this section, we address a class of problems in which the object to be reconstructed changes over time. The function f that describes the analyzed physical quantity then depends on the point M in space and on the time t. The Radon and Xray transforms must be generalized to take these spatiotemporal variations into account. 2.8.2. 2D dynamic Radon transform Starting from the definition of the 2D Radon transform given in section 2.2.1, we define the 2D dynamic Radon transform as the transformation that associates with a function f(M, t) the set of its integrals along lines D in the plane of the slice at each instant t. For a line D defined by the angle < between the axis x and the unit vector n perpendicular to D, and by the algebraic distance p between the line and the origin O (see Figure 2.1), the dynamic Radon transform reads:
Rf \ , p, t
³ f M, t dM
M D \ , p f
³ f p cos\ l sin\ , p sin\ l cos\ , t dl
>2.51@
f
where < [0,S), |p| d R, and l denotes the abscissa along the line D. By extending the concept of the sinogram introduced in section 2.2.1, the dynamic Radon transform may also be called sino-timogram. The dynamic Radon transform corresponds, for example, to a 2D positron emission tomography acquisition in which all the lines are measured simultaneously at a given instant t. The inversion formula is immediately obtained from equations [2.9] and [2.10] by applying to each instant t the inversion formula of the associated 2D Radon transform: ʌ
f M, t ³ HDRf \ , p 0
OM, n , t d\
>2.52@
Analytical Methods
55
2.8.3. Dynamic X-ray transform in divergent geometry At a given time t, CT scanners can only acquire the projection of the function f(M, t) along lines that cross the position S(t) of the point source at this time. We parametrize the trajectory * of the source point by the time t during the acquisition. Again using the notation from section 2.7.1, the dynamic X-ray transform in divergent geometry associates, for each point A of the detection plane, the function f(M, t) to its integral projection along the line D(S(t), A): X c f St , A, t
³ f M, t dM
MD St , A
>2.53@
In the particular case where the trajectory *is a circle, as in section 2.3.1, the position S(t) is defined by the angle \(t) between the line (S(t),O) and the axis (O,y). We define in an equivalent way the dynamic X-ray transform in divergent geometry:
X c f \ t , A, t
³ f M, t dM
MD\ t , A
>2.54@
To describe continuous rotation, we define the angle \(t) by:
\ t \ 0 Zt
>2.55@
where Z represents the angular velocity of the scanner and ȥ0 denotes its initial angular position at t = 0. In the case of a 1D line detector and a circular trajectory, the dynamic X-ray transform in fan-beam geometry is equivalent to the helical X-ray transform in fan-beam geometry if the time variable t is substituted by the variable z / v, where z describes the position along the rotation axis and v the axial speed of displacement. 2.8.4. Inversion
An overview of reconstruction algorithms applied in dynamic X-ray tomography is given in [BON 03]. The dynamic X-ray transform only provides a single projection direction at a given instant. It is thus impossible to exactly reconstruct the unknown function. This problem is underdetermined. It is, therefore, necessary to introduce simplifying assumptions, three of which are discussed here:
56
Tomography
1. The quasi-stationary hypothesis, which makes the assumption that the time variation of f(M, t) is sufficiently slow to be neglected during the time needed to acquire the data for the reconstruction. The reference algorithm in this category relies on the sliding window principle [HSI 97]. An angular window, defined by an initial angle \ d t and a final angle \ f t , which changes over time, is associated with each instant t. The horizon of the window covers a view angle that corresponds to a multiple of 180° plus an angle that permits smoothing the data weighting at the beginning and end of the window. The weighting function w is defined such that the sum of weights for each line is equal to one:
f ( M, t )
\ \ 0 § \ (t ) ³\ f (t ) w\ \ d t HDRf ¨\ , A[M], Z d ©
· ¸dM ¹
>2.56@
where A>M @ is the projection of the point M onto the detection plane at instant \ \ 0 .
Z
2. The periodicity hypothesis, which makes the assumption that the image sequence f(M, t) is periodic over time and that the acquisition covers several periods. This is, for example, the case for cardiac and respiratory motion [HSI 00, KAC 00b, FLO 01, KAC 02, KEA 04]. We parametrize the periodic sequence with the help of a reference index that represents time. This index can be extracted from the images using, for example, the positions of landmarks such as the center of gravity of the image or the position of the diaphragm for the respiratory motion. Alternatively, the index can be provided by fiducial markers attached to the patient, or by complementary monitoring instruments, such as an electrocardiogram for cardiac motion or a respiratory monitoring device. This index then serves to synchronize the measurements. This synchronization can be applied either to the images after reconstruction or to the projections before reconstruction. The synchronization of the images is meaningful when the rotation period of the scanner is small compared to the period of the observed phenomenon, as for respiratory motion in CT, where the quasi-stationary hypothesis is valid. When the quasi-stationary approximation is not valid, as for cardiac motion, the projections must be synchronized before reconstruction. From the sequence of acquisitions, several subsets of projections are extracted, which correspond to the function f(M, t) at a given instant t and at times shifted by one or several periods, such that a complete angular range is approximately covered in the projection space after rebinning. However, these techniques have the drawback of not being very robust if the cycle is not regular. They also require selection of the period of rotation of the scanner as a function of the period of movement, so that an acquisition of projections for the same phase of movement in the same directions is avoided. The goal is to obtain synchronized projections
Analytical Methods
57
over several rotations for the same phase of movement which entirely cover the projection space after rebinning [ROU 93]. 3. The hypothesis of a regular evolution model, which makes the assumption that the dynamic evolution of the object can be estimated and compensated. The principle of these methods is to attempt to model the evolution of the function f(M, t) over time. Depending on the applications, we distinguish: – temporal evolution models, associated, for example, with the kinetics of the fixation and redistribution of a tracer in a fixed organ in nuclear medicine; – spatial deformation models, parametrized by the time, categorized into global parametric models, such as translations, rotations, dilatations, and affine deformations [CRA 96, ROU 04, DES 07], and local models, in which the movement of each pixel is described [RIT 96, GRA 02]. A generic local model is the particle model, in which a trajectory is associated with each point. The trajectory between the reference and the acquisition time point defines a displacement vector field at each point; – spatiotemporal models, which combine temporal evolution and spatial deformation models [GRA 02]. In the case of a particle model, the intensity of the function is allowed to change along the trajectory of each particle.
In the context of analytical methods of the filtered backprojection type, we distinguish two classes of methods: – ray-oriented approaches, which try to compensate for dynamic effects in the projection domain. The general hypothesis underlying these approaches is that the image deformations are such that they preserve the geometry of the projections. This is the case for affine deformations, in which the image of a line is a line. In this case, we derive a transformation linking the dynamic sequence of projections to the projections of the image at a fixed instant. The reconstruction may then be performed either by a rebinning algorithm that concatenates this transformation and the inversion by filtered backprojection, or by a direct inversion algorithm, provided that the translation of the transformation into the inversion formula by a change of variables is possible. This is the case for affine motions [ROU 04, DES 07]. – Pixel-based (2D) or voxel-based (3D) local approaches [RIT 96], which aim at compensating the dynamic effects in image space. The general hypothesis of these approaches is that the final reconstructed image is the sum of images obtained by filtered backprojection over a temporal window that corresponds to a complete angular range in the projection domain. To take into account motion and evolution, the algorithm calculates at each point a sequence of filtered and backprojected values that are compensated for both effects. The final reconstructed value is the sum of these compensated values. To be more efficient in terms of computation time, it is preferable to regroup the projections in blocks defined by a given view
58
Tomography
angle [GRA 02], typically a complete subdivision of a complete rotation. The compensation is then applied to the filtered and backprojected images associated with each block of projections. These methods assume that it is possible to identify accurate models of the motion, for example from a first sequence of images reconstructed without compensation. The models of evolution may use identification techniques studied in signal processing for different types of models (auto-regressive model, compartmental model, factorial analysis, etc.) The deformation models may be inspired by techniques studied in image processing, which are associated with the estimation of displacement fields in motion analysis or image restoration. 2.9. Bibliography [BAR 81] BARRETT H. H., SWINDELL W., Radiological Imaging, Academic Press, 1981. [BAR 04] BARRETT H. H., MYERS K., Foundations of Image Science, John Wiley & Sons, 2004. [BES 99] BESSON G., “CT image reconstruction from fan-parallel data”, Med. Phys., vol. 26, n° 3, pp. 415–426, 1999. [BON 03] BONNET S., KOENIG A., ROUX S., HUGONNARD P., GUILLEMAUD R., GRANGEAT P., “Dynamic X-ray computed tomography”, Proc. IEEE, vol. 91, n° 10, pp. 1574–1587, 2003. [BRU 00] BRUDER H., KACHELRIESS M., SCHALLER S., STIERSTORFER K., FLOHR T., “Singleslice rebinning reconstruction in spiral cone-beam computed tomography”, IEEE Trans. Med. Imag., vol. 19, n° 9, pp. 873–887, 2000. [CHI 80] CHIU M. Y., BARRETT H. H., SIMPSON R. G., “Three-dimensional image reconstruction from planar projections”, J. Opt. Soc. Am., vol. 70, pp. 755–762, 1980. [CLA 04] CLACKDOYLE R., NOO F., “A large class of inversion formulas for the 2D Radon transform of objects of compact support”, Inverse Problems, vol. 20, pp. 1281–1291, 2004. [COL 80] COLSHER J. G., “Fully three-dimensional positron emission tomography”, Phys. Med. Biol., vol. 25, pp. 103–115, 1980. [CRA 90] CRAWFORD C. R., KING K. F., “Computed tomography scanning with simultaneous patient translation”, Med. Phys., vol. 17, n° 6, pp. 967–982, 1990. [CRA 96] CRAWFORD C. R., KING K. F., RITCHIE C. J., GODWIN J. D., “Respiratory compensation in projection imaging using a magnification and displacement model”, IEEE Trans. Med. Imag., vol. 15, pp. 327–332, 1996. [DAU 87] DAUBE-WITHERSPOON M. E., MUEHLLEHNER G., “Treatment of axial data in threedimensional PET”, J. Nucl. Med., vol. 28, pp. 1717–1724, 1987.
Analytical Methods
59
[DEA 83] DEANS S. R., The Radon Transform and its Applications, John Wiley & Sons, 1983. [DEF 93] DEFRISE M., CLACK R., TOWNSEND D. W., “The solution to the 3D image reconstruction problem from 2D parallel projections”, J. Opt. Soc. Am. A, vol. 10, pp. 869–877, 1993. [DEF 97] DEFRISE M., KINAHAN P. E., TOWNSEND D. W., MICHEL C., SIBOMANA M., NEWPORT D. F., “Exact and approximate rebinning algorithms for 3D PET data”, IEEE Trans. Med. Imag., vol. 16, pp. 145–158, 1997. [DES 07] DESBAT L., ROUX S., GRANGEAT P., “Compensation of some time dependent deformations in tomography”, IEEE Trans. Med. Imag., vol. 26, n° 2, pp. 261–269, 2007. [EDH 86] EDHOLM P. R., LEWITT R. M., LINDHOLM B., “Novel properties of the Fourier decomposition of the sinogram”, Proc. SPIE, vol. 671, pp. 8–18, 1986. [FEL 84] FELDKAMP L. A., DAVIS L. C., KRESS J. W., “Practical cone-beam algorithm”, J. Opt. Soc. Am. A, vol. 6, pp. 612–619, 1984. [FLO 01] FLOHR T., OHNSESORGE B., “Heart rate adaptive optimization of spatial and temporal resolution for electrocardiogram-gated multi-slice spiral CT of the heart”, J. Comp. Assisted Tomography, vol. 25, n° 6, pp. 907–923, 2001. [GRA 91] GRANGEAT P., “Mathematical framework of cone-beam 3D reconstruction via the first derivative of the Radon transform”, in Herman G. T., Louis A. K., Natterer F. (Eds.), Mathematical Methods in Tomography, Spinger, Lecture Notes in Mathematics, vol. 1497, pp. 66–97, 1991. [GRA 97] GRANGEAT P., SIRE P., GUILLEMAUD R., LA V., “Indirect cone-beam three-dimensional image reconstruction”, in Roux C., Coatrieux J. L. (Eds.), Contemporary Perspectives in Three-dimensional Biomedical Imaging, IOS Press, Studies in Health Technology and Informatics, vol. 30, pp. 29–52 and pp. 343–350, 1997. [GRA 00] GRASS M., KÖHLER T., PROKSA R., “3D cone-beam CT reconstruction for circular trajectories”, Phys. Med. Biol., vol. 45, pp. 329–347, 2000. [GRA 01] GRANGEAT P., “Fully three-dimensional image reconstruction in radiology and nuclear medicine”, in Kent A., Williams J. G. (Eds.), Encyclopedia of Computer Science and Technology, Marcel Dekker, vol. 44, Supp. 29, pp. 167–201, 2001. [GRA 02] GRANGEAT P., KOENIG A., RODET T., BONNET S., “Theoretical framework for a dynamic cone-beam reconstruction algorithm based on a dynamic particle model”, Phys. Med. Biol., vol. 47, pp. 2611–2625, 2002. [HER 77] HERMAN G. T., NAPARSTEK A., “Fast image reconstruction based on a Radon inversion formula appropriate for rapidly collected data”, SIAM J. Appl. Math., vol. 33, n° 3, pp. 511–533, 1977. [HER 80] HERMAN G. T., Image Reconstruction from Projections: the Fundamentals of Computerized Tomography, Academic Press, 1980.
60
Tomography
[HIR 97] HIRIYANNAIAH H. P., “X-ray computed tomography for medical imaging”, IEEE Sig. Proc. Mag., pp. 42–59, March 1997. [HU 99] HU H., “Multi-slice helical CT: scan and reconstruction”, Med. Phys., vol. 26, n° 1, pp. 5–18, 1999. [HSI 97] HSIEH J., “Analysis of the temporal response of computed tomography fluoroscopy”, Med. Phys., vol. 24, n° 5, pp. 665–675, 1997. [HSI 00] HSIEH J., MAYO J., ACHARYA K., PAN T., “Adaptive phase-coded reconstruction for cardiac CT”, Proc. SPIE Medical Imaging, vol. 3978, pp. 501–508, 2000. [KAC 00a] KACHELRIESS M., SCHALLER S., KALENDER W. A., “Advanced single-slice rebinning in cone-beam spiral CT”, Med. Phys., vol. 27, pp. 754–772, 2000. [KAC 00b] KACHELRIESS M., ULZHEIMER S., KALENDER W. A., “ECG-correlated imaging of the heart with subsecond multislice spiral CT”, IEEE Trans. Med. Imag., vol. 19, n° 9, pp. 888–901, 2000. [KAC 02] KACHELRIESS M., SENNST D., MAXLMOSER W., KALENDER W. A., “Kymogram detection and kymogram-correlated image reconstruction from subsecond spiral computed tomography scans of the heart”, Med. Phys., vol. 29, n° 7, pp. 1489–1503, 2002. [KAK 88] KAK A. C., SLANEY M., Principles of Computerized Tomographic Imaging, IEEE Press, New York, 1988. [KAL 95] KALENDER W. A., “Principles and performance of spiral CT”, in Goodman L. W., Fowlkes J. B. (Eds.), Medical CT and Ultrasound: Current Technology and Applications, Advanced Medical, pp. 379–410, 1995. [KAR 88] KARP J. S., MUEHLLEHNER G., LEWITT R. M., “Constrained Fourier space method for compensation of missing data in emission computed tomography”, IEEE Trans. Med. Imag., vol. 7, pp. 21–25, 1988. [KAT 02] KATSEVICH A. I., “Analysis of an exact inversion algorithm for spiral cone-beam CT”, Phys. Med. Biol., vol. 47, pp. 2583–97, 2002.. [KEA 04] KEALL P. J., STARKSCHALL G., SHUKLA H. et al., “Acquiring 4D thoracic CT scans using a multislice helical method”, Phys. Med. Biol., vol. 49, pp. 2053-2067, 2004. [KIN 90] KINAHAN P. E., ROGERS J. G., “Analytic three-dimensional image reconstruction using all detected events”, IEEE Trans. Nucl. Sci., vol. 36, pp. 964–968, 1990. [KUD 00] KUDO H., NOO F., DEFRISE M., “Quasi-exact filtered backprojection algorithm for long-object problem in helical cone-beam tomography”, IEEE Trans. Med. Imag., vol. 19, n° 9, pp. 902–921, 2000. [LAR 98] LARSON G. L., RUTH C. C., CRAWFORD C. R., “Nutating slice CT image reconstruction”, Patent Application WO 98/44847, 1998. [MAR 80] MARR R., CHEN C., LAUTERBUR P. C., “On two approaches to 3D reconstruction in NMR zeugmatography”, in Herman G. T., Natterer F. (Eds.), Mathematical Aspects of Computerized Tomography, Springer, pp. 225–240, 1980.
Analytical Methods
61
[NAT 86] NATTERER F., The Mathematics of Computerized Tomography, John Wiley & Sons, 1986. [NAT 01a] NATTERER F., “Inversion of the attenuated Radon transform”, Inverse Problems, vol. 17, n° 1, pp. 113–119, 2001. [NAT 01b] NATTERER F., WUEBBELING F., Mathematical Methods in Image Reconstruction, SIAM, 2001. [NOO 97] NOO F., CLACK R., DEFRISE M., “Cone-beam reconstruction from general discrete vertex sets”, IEEE Trans. Nucl. Science, vol. 44, n° 3, pp. 1309–1316, 1997. [NOO 99] NOO F., DEFRISE M., CLACKDOYLE R., “Single-slice rebinning method for helical cone-beam CT”, Phys. Med. Biol., vol. 44, pp. 561–570, 1999. [NOO 02] NOO F., DEFRISE M., CLACKDOYLE R., KUDO H., “Image reconstruction from fanbeam projections on less than a short-scan”, Phys. Med. Biol., vol. 47, pp. 2525–2546, 2002. [NOO 03] NOO F., PACK J., HEUSCHER D., “Exact helical reconstruction using native conebeam geometries”, Phys. Med. Biol., vol. 48, pp. 3787–3818, 2003. [NOO 04] NOO F., CLACKDOYLE R., PACK J., “A two-step Hilbert transform method for 2D image reconstruction”, Phys. Med. Biol., vol. 49, pp. 3903–3923, 2004. [NOV 00] NOVIKOV R. G., An Inversion Formula for the Attenuated X-ray Transform, Preprint, Department of Mathematics, University of Nantes, April 2000. [ORL 75] ORLOV S. S., “Theory of three-dimensional reconstruction. 1. Conditions of a complete set of projections”, Sov. Phys. Crystallography, vol. 20, pp. 312–314, 1975. [PAR 82] PARKER D., “Optimal short scan convolution reconstruction for fan-beam CT”, Med. Phys., vol. 9, pp. 254–257, 1982. [PRO 00] PROKSA R., KÖHLER T., GRASS M., TIMMER J., “The n-pi method for helical conebeam CT”, IEEE Trans. Med. Imag., vol. 19, n° 9, pp. 848–863, 2000. [RAM 96] RAMM A. G., KATSEVITCH A. I., The Radon Transform and Local Tomography, CRC Press, 1996. [RIT 96] RITCHIE C. J., CRAWFORD C. R., GODWIN J. D., KING K. F., KIM L. Y., FLOHR T., “Correction of computed tomography motion artifacts using pixel-specific backprojection”, IEEE Trans. Med. Imag., vol. 15, n° 3, pp. 333–342, 1996. [RIZ 91] RIZO P., GRANGEAT P., SIRE P., LE MASSON P., MELENNEC P., “Comparison of two 3D X-ray cone beam reconstruction algorithms with circular source trajectory”, J. Opt. Soc. Am. A, vol. 8, n° 10, pp. 1639–1648, 1991. [ROU 03] ROUX S., DESBAT L., KOENIG A., GRANGEAT P., “Efficient acquisition for periodic dynamic CT”, IEEE Trans. Nucl. Science, vol. 50, n° 5, pp. 1672–1677, 2003. [ROU 04] ROUX S., DESBAT L., KOENIG A., GRANGEAT P., “Exact reconstruction in 2D dynamic CT: compensation of time-dependent affine deformations”, Phys. Med. Biol., vol. 49, pp. 2169–2182, 2004.
62
Tomography
[SCH 00] SCHALLER S., NOO F., SAUER F., TAM K. C., LAURITSCH G., FLOHR T., “Exact Radon rebinning algorithm for the long object problem in helical cone-beam CT”, IEEE Trans. Med. Imag., vol. 19, n° 5, pp. 361–375, 2000. [SMI 85] SMITH B. D., “Image reconstruction from cone-beam projections: necessary and sufficient conditions and reconstruction methods”, IEEE Trans. Med. Imag., vol. 4, pp. 14–25, 1985. [TAG 98] TAGUCHI K., ARADATE H., “Algorithm for image reconstruction in multi-slice helical CT”, Med. Phys., vol. 25, n° 4, pp. 550–561, 1998. [TAM 98] TAM K. C., SAMARASEKERA S., SAUER F., “Exact cone beam CT with a spiral scan”, Phys. Med. Biol., vol. 43, pp. 1015–1024, 1998. [TUY 83] TUY H. H., “An inversion formula for cone-beam reconstruction”, SIAM J. Appl. Math., vol. 43, n° 3, pp. 546–552, 1983. [VAN 96] VANNIER M. W., WANG G., “Principles of spiral CT”, in Remy-Jardin M., Remy J. (Eds.), Spiral CT of the Chest, Springer, pp. 1–32, 1996. [WAN 93] WANG G., LIN T. H., CHENG P., SHINOZAKI D. M., “A general cone-beam reconstruction algorithm”, IEEE Trans. Med. Imag., vol. 12, pp. 486–496, 1993. [WAN 00] WANG G., CRAWFORD C. R., KALENDER W. A., “Multirow detector and cone-beam spiral/helical CT”, IEEE Trans. Med. Imag., vol. 19, n° 9, pp. 817–821, 2000. [WU 96] WU C., ORDONEZ C. E., CHEN C. T., “FIPI: fast 3D PET reconstruction by Fourier inversion of rebinned plane integrals”, in Grangeat P., Amans J. L. (Eds.), ThreeDimensional Image Reconstruction in Radiology and Nuclear Medicine, Kluwer Academic Publishers, Computational Imaging and Vision Series, pp. 277–296, 1996. [ZHU 04] ZHUANG T., LENG S., NETT B. E., CHEN G. H., “Fan-beam and cone-beam image reconstruction via filtering the backprojection image of differentiated projection data”, Phys. Med. Biol., vol. 49, pp. 5489–5503, 2004.
Tomography Edited by Pierre Grangeat Copyright 02009, ISTE Ltd.
Chapter 3
Sampling Conditions in Tomography
3.1. Sampling of functions in
n
When a signal, or a function, is to be measured, the question arises regarding with which frequency it should be sampled. In this chapter we investigate the geometry of sampling schemes in multiple dimensions. The practical objective of this study is to sample sufficiently densely to obtain a very accurate global estimate, while minimizing the total number of samples taken, to achieve high efficiency. The choice of an adequate regular sampling scheme may be guided by Fourier analysis. We summarize the basic results of multidimensional Fourier analysis, which enable Shannon’s sampling conditions to be established, and then apply them to tomography. The reader is referred to [JER 77] for an introduction to Shannon’s sampling techniques. 3.1.1. Periodic functions, integrable functions, Fourier transforms We summarize here the basic definitions of periodic functions, square-integrable functions, and their Fourier transforms. For n 1 and a non-singular matrix W n , the function f from n to is called periodic with period W if
x n , k = n , f (x Wk )
f (x) . A set K n is called a fundamental
set associated with W if K is bounded and if K Wk, k = n is a tiling of n . In particular, the parallelepiped spanned by the columns of the matrix W is a fundamental set associated with W provided that the opposite sides are left open
^
and closed, respectively: ( y n , x >0,1>n , y
`
Wx
Chapter written by Laurent DESBAT and Catherine MENNESSIER.
is an example of a
64
Tomography
fundamental set associated with W). If K1 and K 2 are two fundamental sets associated with W and if f is a locally integrable periodic function with period W, it can easily be shown that:
³ f ( x ) dx
xK1
³ f ( x ) dx
xK 2
2iʌ W Tk ,x
, k = n , where W T is the transpose The trigonometric functions e of the inverse of the matrix W and x,y is the Euclidean scalar product of two vectors in n , are W-periodic functions. They form the basis of the Hilbert space
L2W K of periodic, square-integrable functions over a fundamental set K associated with W. f L2W K can be decomposed into a Fourier series, leading in
L2W K to: f x
¦ nc k f e
2iʌ W T k ,x
[3.1]
k=
with the Fourier coefficients ck f of f. 2i ʌ W T k , x 1 f x e dx ³ det W K
ck f
[3.2]
We conclude this section with the definition of the Fourier transform. For an integrable function f L1 n , we define its Fourier transform Ff (Ȟ ) and its
inverse Fourier transform F 1 f (x) by: Ff ( Ȟ )
2ʌ ³ f x e i x,Ȟ n 2
dx
n
F 1 f ( x )
2ʌ ³ f Ȟ e i x,Ȟ n 2
[3.3]
dȞ
n
When f is continuous and integrable, and its Fourier transform is also integrable, then F 1 Ff 2
L
n
FF 1 f
f . The Fourier transform is an isometry in S n for the
norm, where S
n
n
is the Schwartz space in , i.e. the space of
infinitely differentiable, rapidly decreasing functions. With the L2 n norm, it is
Sampling Conditions in Tomography
65
(space of square-integrable functions), since S n is a vector subspace of L which is dense in L . an isometry in L2 n 2
n
2
n
3.1.2. Poisson summation formula and sampling of bandlimited functions In Fourier analysis, the choice, and especially the validity, of a sampling scheme is based on the Poisson summation formula: §
¦ n Ff ¨ Ȟ ©
lZ
2ʌ · l¸ h ¹
2ʌ n / 2 h n ¦ f (hk ) e ih Ȟ,k
[3.4]
kZ n
where h ! 0 denotes the interval in which the function is regularly sampled. Equation [3.4] is satisfied in Sc n , the space of tempered distributions, for all
functions f whose Fourier transform Ff (Ȟ) is a distribution with compact support:
Ff İ c n . The reader is referred to [GAS 95], proposition 36.2.1, for a proof in 1D, i.e. n 1 . The proposition is generalized to n dimensions. Equation [3.4] simply expresses the equivalence between the periodic extension of the spectrum of f and its Fourier series in Sc n . The sampling conditions are generally given for a strictly bandlimited function, i.e. the support of the function’s Fourier transform satisfies, sup( Ff ) > b, b@n where b ! 0 . The function f is then necessarily in
C f n and even analytic. Unless it is zero, it cannot be of compact support. According to the Poisson summation formula, sampling f amounts to a periodic extension of its spectrum, as illustrated in Figure 3.1. Obviously, there is no spectral S overlap if the Nyquist–Shannon sampling condition h d is satisfied. The function b f is then correctly sampled, and Ff (Ȟ) can be isolated from its replicas by multiplying equation [3.4] with F n ( Ȟ ) , the indicator function of the central ª S Sº « h , h » ¬ ¼
parallelepiped of width
2S h
(more generally, F A (x) 1 if x A ; F A (x)
0
otherwise). We obtain in this way a formula for the interpolation of f by applying
the inverse Fourier transform. Assuming that f L2 n , Ff L2 n and n 2S · § Ff ¨ Ȟ l ¸ is in L2 § 0, 2S · , permitting a (unique) Fourier series expansion. ¦
lZ
©
h ¹
2S h
I ¨©
>
h
@
¸ ¹
By comparing this expansion with equation [3.4], we obtain in L2 n :
66
Tomography
Figure 3.1. Shannon’s sampling conditions satisfied (top) and violated (bottom)
Ff Ȟ
2ʌ n / 2 h n ¦
kZ
f (hk ) e
ih Ȟ ,k
n
F
ª ʌ ʌº « h , h » ¬ ¼
n
( Ȟ)
With the continuity of the Fourier transform in L2 n
[3.5]
and the assumed
such
continuity of f, we obtain Shannon’s interpolation formula: let f L2 n that sup(Ff) [íb,b] , and let h d
f ( x)
S h ( f )(x)
with x n , sinc(x)
def
S b
n j 1
§S ©h
·
¦ n f (hk )sinc¨ (x hk ) ¸
kZ
, then:
sin( x j ) x ji
¹
[3.6]
, where x j is the j-th element of x.
3.1.3. Sampling of essentially bandlimited functions An essential problem of the described approach is the hypothesis of compact support of the sampled function’s Fourier transform. In practice, the sampled functions are of compact support, and consequently their Fourier transform cannot be of compact support (in medical imaging, the cross-section of a human is of limited extent). The sampling theory must be generalized to cover functions that are not strictly bandlimited. If the function f is continuous and of compact support, it is integrable and square integrable, and its Fourier transform Ff is in C f . To evaluate the
interpolation
error
in
this
case,
we
Sampling Conditions in Tomography
67
S h ( f ) f (x)
as
may
rewrite
1
F F S h ( f ) f (x) , since f and S h f are continuous. We additionally assume
that Ff L1 n to apply the Poisson summation formula, which leads to: F ( S h f )Ȟ
2ʌ n / 2 h n ¦
kZ
§
¦ n Ff ¨ Ȟ
lZ
©
f (hk ) e
ih Ȟ ,k
n
F
ª ʌ ʌº « h , h » ¬ ¼
n
( Ȟ) [3.7]
2ʌ · l ¸F n ( Ȟ) h ¹ ª ʌ , ʌ º « h h» ¬ ¼
Thus, we may deduce:
S h ( f ) f (x) 2ʌ n / 2 ³ F ( S h f ) Ff ( Ȟ) e i x, Ȟ dȞ ª ʌ ʌº « , » ¬ h h¼
n
By inserting equation [3.7] in this equation and performing some algebraic manipulations, we obtain an upper bound for the error: S h ( f )(x) f (x) d 2(2S ) n / 2
[3.8]
n ³ Ff n( Ȟ ) dȞ, x
ª S Sº Ȟ« , » ¬ h h¼
This inequality indicates that the less significant the function is beyond the support hʌ , ʌh n , the smaller the error. This leads to the introduction of the concept
>
@
of the essential support of the Fourier transform of a function. The set K n is called the essential support of the function Ff (Ȟ) if ³ Ff ( Ȟ ) dȞ is negligible. In ȞK
> b, b@n
particular, if K the sets K
2S h
and Shannon’s sampling condition b d
l, l Z n do not overlap, the error of approximating
S h ( f )(x) is limited in the infinity norm by 2(2ʌ) n / 2
f ( x ) by
³ Ff ( Ȟ) dȞ according to
ȞK
equation [3.8].
ʌ is satisfied, or if h
68
Tomography
3.1.4. Efficient sampling The goal of efficient sampling is to minimize the number of samples taken while satisfying Shannon’s sampling condition. This may lead to the use of efficient acquisition geometries, or even the design of efficient detectors. In practice, this may also enable demonstration that a standard scheme (for instance a grid) is necessarily redundant. This redundancy may be exploited to correct defects in the detector (defective pixels) or to use certain pixels for other measurements than those needed for the reconstruction (for example the calibration of distortions of an image intensifier during measurements based on opaque markers). In Fourier analysis, the principle underlying efficient sampling is to adapt the geometry of the sampling scheme to the essential support of the Fourier transform of the sampled function. In particular, a function is called strictly b-bandlimited if the support of its Fourier transform is a centered sphere with radius b, i.e. H 0 ( f , b) is zero with H d ( f , b)
d ³ Ȟ Ff ( Ȟ) dȞ . It is essentially b-bandlimited if H 0 ( f , b) is
Ȟ !b
negligible. In two dimensions, for example, it is known that the sampling of a b-bandlimited or essentially b-bandlimited function is more efficient if performed on a hexagonal instead of a rectangular grid. In fact, the hexagonal scheme is, for instance, spanned by the vectors w1
hH
,0 2 3
T
and w 2
hH
1 3
T
,1 , with hH being the
spacing of the samples on the grid. In this way, the samples are taken at k1 w 1 k 2 w 2 , or, more precisely, at WH k , k Z n , where the column vectors of the sampling matrix
WH are w 1 and w 2 . The rectangular grid (standard scheme) is spanned by the sampling matrix hI, where I is the identity matrix in 2 . We see that the efficiency of a scheme depends on the basis we work on. Let us now consider the function f W (x) f ( Wx) , where W is a non-singular matrix (change of basis). Substituting variables with y Wx , it is very simple to show that: Ff W (Q )
2ʌ ³ f ( Wx ) e i x,Q n
2
n
dx
Ff ( W TQ ) detW
[3.9]
where W T is the transpose of the inverse of the matrix W. The Poisson summation formula (equation [3.4]) for the function f W (x) and h = 1 then reads:
Sampling Conditions in Tomography ih n / 2 ¦ nFf W Ȟ 2ʌl 2ʌ ¦ nf W (k ) e
lZ
Ȟ ,k
kZ
¦ nFf W
lZ
69
T
Ȟ 2ʌW
T
l
2ʌ n / 2 ¦ f (Wk ) e i Ȟ,k kZ n
T and at the point Ș W Q :
¦ nFf Ș 2ʌW T l
lZ
2ʌ n / 2 ¦ f ( Wk ) e i Ș,Wk
[3.10]
kZ n
As before, the sampling conditions are to be deduced from the terms on the lefthand side of the generalized Poisson summation formula (equation [3.10]). With K being the essential support of Ff, Shannon’s generalized sampling condition is satisfied if: the sets K 2SW T l, l Z n do not overlap
[3.11]
By an inverse Fourier transform of the terms on the right-hand side, multiplied by the indicator function of K, the Fourier approximation formula derived from equation [3.10] reads: S W ( f )(x)
def
2S
n
2
det W
¦ f ( Wk ) FF K (x Wk )
[3.12]
kZ n
Let f L2 ( n ) be continuous and of compact support such that Ff L1 ( n ) . We may then show that if W satisfies Shannon’s sampling condition for K:
S W ( f )(x) f (x) d 2(2ʌ) n / 2 ³ Ff (Q ) dȞ, x n
[3.13]
ȞK
In practice, we choose the sampling matrix W that satisfies Shannon’s sampling conditions as compactly as possible. In fact, if 2ʌW T l, l Z n is compact, then
det W T is small, and det W is large. The latter is simply the volume (the area in two dimensions) of the elementary sampling cell. Therefore, the larger this volume is, the fewer samples of the function necessary to cover a given volume. Equation [3.11] is thus of interest due to the possibility of exploiting the geometry of K to choose a W with minimal det W that satisfies equation [3.11].
70
Tomography
Figure 3.2. Shannon’s sampling conditions for a b-bandlimited function. Left: condition of no overlap in the Fourier domain with a standard grid. Right: condition of no overlap with a hexagonal grid. The hexagonal scheme is more efficient than the standard scheme because it is more compact in the Fourier domain (the area of the hexagon is smaller than the area of the square)
3.1.5. Generalization to periodic functions in their first variables Sampling theory may be generalized to functions that are periodic in their first variables and simply integrable in their following variables [FAR 90, FAR 94]. This generalization is of interest in tomography, because the Radon transform is only periodic in its first variable. Let f (ȥ, x) be a periodic function with period U in its
n1 first variables, T be a fundamental set associated with U , ȥ n1 , x nn1 , and (k , Ȟ ) = n1 u n n1 . We define the Fourier transform of f by: Ff (k , Ȟ )
2ʌ
n n1 2
1 det U
³ n³ nf ȥ , x e
T
i x, Ȟ
e
2 iʌ U T k , ȥ
dxdȥ
[3.14]
1
and the inverse Fourier transform of g L2 = n1 u n n1 by: F 1 g ȥ , x
2ʌ
If we assume that
¦
n n1 2
¦n
k= 1
³ g (k , Ȟ ) e nn
i x, Ȟ
dx e
2i ʌ U T k , ȥ
[3.15]
1
f L2U T u nn1
and Ff L1 ( = n1 u nn1 ) (i.e. that
³ Ff (k , Ȟ) dȞ is finite) and if equation [3.11], the Shannon sampling
k= n1 n n1
condition, is satisfied by K = n1 u n n1 , we obtain:
Sampling Conditions in Tomography
S W ( f )(x) f (x) d 2(2ʌ) ( n n1 ) / 2
¦ ³ Ff (k , Ȟ) dȞ, x n
71
[3.16]
k , Ȟ K
with a regular matrix W and
S W ( f )(x)
def
2S
( n n1 )
2
det W det U
¦ nf ( Wk ) F 1F K (x Wk )
[3.17]
kZ
3.2. Sampling of the 2D Radon transform 3.2.1. Essential support of the 2D Radon transform In 2D tomography, we wish to sample Rf (\ , p) , a 2S-periodic function in its first variable. Therefore, we have to study the essential support of the 2D Fourier transform of the sinogram:
FRf k (Q )
1 2S FRf (\ ,Q )e ik\ d\ ³ 2S 0
[3.18]
with
FRf (\ ,Q )
(2S )
1
f 2
ipQ dp ³ Rf (\ , p)e
[3.19]
f
Sampling in 2D tomography is based on the following essential result: let f be a function whose support is assumed to be contained in the unit disk (normalization) and to be essentially b-bandlimited, then the essential support of FRf k (X ) is the set
K 2 defined by: K2
° §Q §1 · ·½° ®(k , Q ) Z u , Q b, k max¨¨ , b¨¨ 1¸¸ ¸¸¾ °¯ ¹ ¹¿° © - ©-
[3.20]
More precisely, we may show that:
¦
³ FRf k (Q ) dQ d a ³ Ff ( Ȟ ) dȞ K (- , b) f
k ( k ,X ) K 2
Ȟ !b
L1
[3.21]
72
Tomography
where a > 0 and K (- , b) decreases exponentially to zero when b approaches infinity. The variable 0 - 1 controls the rate of decay: B (- ) ! 0, C (- ) ! 0 such that b ! B(- ),0 d K (b, - ) e (- )b The proof of this result is based on two inequalities which are obtained from the definition of FRf k (X ) (equation [3.18]) and the Fourier slice theorem (section 2.2.2). Taking our definition of the Fourier transform into account, it reads: 2S Ff Qș
FRf (\ ,Q )
Starting from equation [3.18], we obtain:
FRf k (Q )
1
2S
2S
0
ikI ³ Ff (Qș)e dI
[3.22]
This theorem leads, after some algebraic manipulations, to:
FRf k (Q )
(2ʌ)
1
2 ik
ik\ J k (Q x )dx ³ f ( x) e
[3.23]
:2
where J k (t ) is the k-th Bessel function of the first kind. Equation [3.22] enables us to obtain the first inequality: 2ʌ
1
¦ ³ FRf k (Q ) dQ d
2ʌ Q
k Q !b
1
³ ³ Ff (Qș) d\dQ
2ʌ
!b 0
H 1 f , b
[3.24]
while equation [3.23] enables us to establish the second inequality:
³ FRf k (Q ) dQ d
Q - k
1 2ʌ
K (- , k ) f
[3.25]
L1
In fact, with Debye’s formula [ABR 70] on the asymptotic behavior of J k (Q ) for Q - k we may show [NAT 86] that sup
-k
³ J k (rQ ) dQ is smaller than
r d1 -k 3
1 2 2 C (- )k 2 e (1- ) m / 3 . In practice, we may choose - close to, but strictly smaller
Sampling Conditions in Tomography
73
than, one (typically - 0.9 ). Natterer [NAT 86] showed that equations [3.24] and [3.25] are sufficient to establish equation [3.21]. From equation [3.21], we deduce that a sampling scheme exists that is effectively twice as efficient as the standard scheme in which the projections are all sampled at the same abscissa (see Figure 3.3). This more efficient scheme is interlaced. We will define it in the following section. Cormack [COR 78] was the first to show the efficiency of interlaced sampling. Rattey and Lindgren [RAT 81] established this result within the framework of Fourier analysis, which is presented here following Natterer [NAT 86]. The reader may also consult [ROU 80] for an introduction to the problem of sampling in tomography. 3.2.2. Sampling conditions and efficient sampling
In Figure 3.3, the essential support K2 of the Fourier transform of the Radon transform Rf of a b-bandlimited function f (with continuous support in the unit disk) and the non-overlapping schemes for a standard (equidistant in \ and s) and an interlaced sampling are shown. N\ denotes the number of projections over 180° (interval [0, S[) and N U the number of measurements per projection. The standard sampling generated by the matrix: § S / N\ ¨ ¨ 0 ©
WS
0 · ¸ and 2SWS t 2 / N U ¸¹
§ 2 N\ ¨ ¨ 0 ©
0 · ¸ SN U ¸¹
[3.26]
must satisfy Shannon’s conditions “ K 2 2ʌ WST = 2 without overlap”. Shannon’s conditions are thus satisfied if N\ t b / - and N U t 2b / ʌ , with the minimum being reached when N U
2b / ʌ, N\
b / - , i.e. when the number of projections over 180°
is slightly larger than ʌ / 2 times the number of samples per projection. We conclude from Figure 3.3 that Shannon’s conditions may be satisfied more efficiently by choosing:
2SWE t
with N\
ʌ NU
§ N\ ¨ ¨S N U © 2
0 · ¸ and WE SN U ¸ ¹
§ 2S ¨ N\ ¨ ¨ 0 ©
S
· N\ ¸ ¸ 2 N U ¸¹
[3.27]
2b - c and N U t ʌ . Each projection is thus acquired with twice as
few samples than in the standard scheme. We note that two consecutive projections
74
Tomography
are not sampled at the same abscissa p, as seen in Figure 3.3. Therefore, det WE | 2 det WS , and the interlaced sampling is about two times more efficient than the standard sampling.
K2
Q1
Q1
b
b
O
b/X
O
k
p
b/X'
k
p S/N<
S/N<
/NU /NU <
<
Figure 3.3. Shannon’s conditions in 2D tomography. Top left: non-overlapping conditions in the Fourier domain with a standard grid. Top right: non-overlapping conditions with an interlaced grid. The interlaced scheme (bottom right) is about two times better than the standard scheme (bottom left), because X c is only slightly smaller than X when X is close to one
3.2.3. Generalizations 3.2.3.1. Vector tomography In vector tomography, we try to reconstruct the velocity of a fluid v(x) from measurements of Radon transforms in the direction of Ȧ : RȦ v\ , p
f
³ v( pș tIJ ), Ȧ dt
_f
[3.28]
Sampling Conditions in Tomography
75
with a unitary vector Ȧ , possibly a function of \ and p (we remind the reader that ș cos\ , sin \ and IJ sin \ , cos\ ). Theoretical studies [PRI 96] show that two independent directions ȟ1 (\ , p) and ȟ 2 (\ , p ) , for all pairs (\ , p) , are necessary and sufficient to reconstruct v(x) . This result may be generalized to n dimensions for the reconstruction of a vector v(x) from n vector Radon transforms along independent directions. Techniques for measuring vector Radon transforms are rare. Nevertheless, the vector Radon transform along direction W plays a particular role. It appears naturally when the difference in the propagation time of ultrasound waves between two points A and B and B and A is measured, with these two points being at the boundary of a flowing fluid. Assuming that the velocity of the fluid (in a permanent regime) is negligible compared to the velocity of the sound, we can show [BRA 91, NOR 88, NOR 92] that these measurements are described by: R IJ v \ , p
f
f
_f
_f
³ v( sș tIJ ), IJ dt
³ v ( s ș t IJ ) dt , IJ
[3.29]
The Helmholtz decomposition of a vector field v(x) u w(x)e z q(x) , where q is the scalar and w the vector potential, plays a key role in solving this problem. In fact, the following Fourier slice theorem may be established:
FRIJ v\ , X
i 2S XFw(Xș)
[3.30]
Obviously, only the solenoidal component u w(x)e z may be reconstructed from the measurements R IJ v \ , p . What are the sampling conditions for
R IJ v \ , p ? As for the classical Radon transform, we define the Fourier transform of R IJ v \ , p by: FR IJ v k (Q )
1 2S FR IJ v (\ ,Q )e ik\ d\ ³ 2S 0
where:
FRIJ v (\ ,Q )
(2S )
1
f 2
isQ ³ RIJ v(\ , s)e ds
f
[3.31]
76
Tomography
By inserting the Fourier slice theorem (equation [3.30]) into equation [3.31], we obtain:
i 2S
FR IJ v k (Q )
³ QFw(Qș)e
2S
ik\
d\
[3.32]
0
After elementary algebraic manipulations, we obtain:
FRf k (Q )
(2S )
1
2 i k 1
³ w(x)e
ik\
QJ k (Q x )dx
[3.33]
:2
With
2S
³ ³ QFw(Qș)dQd\ H 1 (w, b) , equation [3.32] leads to:
0 Q !b
¦ ³ FRIJ v k (Q ) dQ d k Q !b
1 2S Q
2S
1
³ ³ QFw(Qș d\dQ
2S
!b 0
H 1 w, b
[3.34]
If v is essentially bandlimited, H 0 (w, b) and H 0 (q, b) must be negligible. In addition,
QJ k (Q )
behaves like
J k (Q )
when
Q -k .
More precisely,
multiplication with Q , or with any polynomial in Q , does not change the exponential decay of J k (Q ) with k . We thus conclude that the essential support of
FR IJ v k (Q ) is also the set K 2 . The vector Radon transform hence satisfies the same sampling conditions as the scalar Radon transform (see [DES 95]). 3.2.3.2. Generalized, rotation invariant Radon transform The generalized, rotation invariant Radon transform of weight w is defined by:
R w f \ , U
f
³ f ( Uș tIJ) w( U , t )dt
³ f (x) w( x, ș , x, IJ )dx
x,ș
f
[3.35]
U
It occurs naturally in Doppler imaging, a technique employed in astrophysics to map the temperature of the surface of stars [DES 99, MEN 97]. If w( U , t ) is a monomial
x, IJ
w( U , t )
U lt m ,
sinM \ and ( x
we
c j C
get T
x cos M , sin M ):
with
x, ș
cos(M \ ) ,
Sampling Conditions in Tomography
§ e i M \ e i M \ · ¨ ¸ ¨ ¸ 2 © ¹
l
§ e i M \ e i M \ · ¨ ¸ ¨ ¸ 2 i © ¹
m
l m
¦ c j e ij M \
77
[3.36]
j l m
Inserting this into equation [3.35] yields: l m
¦ c j e ij\
R s l t m f \ , U
j l m
l m ijM ³ f (x) x e dx
x ,ș
[3.37]
U
We may conclude that if for all j with j l m the function f (x) x
l m ijM
e
is essentially bandlimited, then the essential support of the Fourier transform of Rsl t m f \ , U , FR pl t m f k (X ) , is the set: K2
pl t m
° §X §1 · ·½° ®(k , X ) Z u , X b, k l m max¨¨ , b¨¨ 1¸¸ ¸¸¾ ¹ ¹°¿ °¯ © - ©-
[3.38]
In fact, multiplication by e ij\ simply introduces a shift of j of the essential support of FR§¨ f (x) x ©
l m ijM
e
·¸ X along the direction k. More generally, if the ¹k
weights of the function are a polynomial of degree d , i.e. w( p, t )
l m cl ,m p t ,
l md d
with l, m and d being positive integers, we get: K2
cl , m l mdd
pl t m
° §X §1 · ·½° ®(k , X ) Z u , X b, k d max¨¨ , b¨¨ 1¸¸ ¸¸¾ ¹ ¹¿° °¯ © - ©-
[3.39]
For more details on sampling the rotation invariant Radon transform with polynomial weights, the reader is referred to [DES 97b]. 3.2.3.3. Exponential and attenuated Radon transform The sampling conditions of the attenuated Radon transform were studied by Xia, Lewitt and Edholm in [XIA 95] based on the stationary phase principle applied to the Fourier transform in 2D SPECT. The authors show that the Fourier transform of the attenuated X-ray transform of a point source (Dirac distribution) is essentially negligible at points outside K 2 if the attenuation function shows only small local variations.
78
Tomography
We present here a stronger result in the far more limited context of the exponential X-ray transform. Let FF P0 f k (X ) be the Fourier transform of the exponential X-ray transform of a function of compact support: f
Pt ³ f ( UT tW )e 0 dt , with P 0 t 0
F P 0 f (\ , U )
[3.40]
f
Since the support of the function f is compact, its Fourier transform may extend over the entire complex plane. The Fourier slice theorem for this transform is simply:
1 f f
P t iUQ dtdU Ff (Qș iP 0 IJ ) [3.41] ³ ³ f ( UT tW ) e 0 e
FF P0 f (\ ,Q )
2ʌ f f
We obtain:
FF P0 f k (X )
1
2ʌ
2ʌ
0
ik\ d\ ³ Ff Xș iP 0 IJ e
[3.42]
which leads to:
³
Q !b
FF P0 f k (X ) dQ
2
³
2ʌ Ȟ !b
Ȟ
1
§ Ȟ A ·¸ dȞ Ff ¨ Ȟ iP 0 ¨ Ȟ ¸¹ ©
[3.43]
With a similar derivation [NAT 86], we obtain:
FF P0 f k (X )
1 2S
3
³2
§ S· ik ¨ M ¸ 2S f ( x )e © 2 ¹
ik\ iX x sin\ iP 0 x cos\ d\ dx ³e
[3.44]
0
In [ABR 70], the following expression for the Bessel function of the first kind of order k can be found: t§
J k (t )
1·
t§
1·
1 k 2S ik\ i 2 ¨© r r ¸¹ sin\ 2 ¨© r r ¸¹ cos\ r d\ ³e 2S 0
[3.45]
Sampling Conditions in Tomography
X P0 and t X P0
With r
FF P0 f k (X )
1 2S
³
79
X 2 P 2 x , we obtain from equation [3.44]:
§ S· ik ¨ M ¸ f ( x)e © 2 ¹
2
k
X P0 J k §¨ X 2 P 2 x ·¸dx X P0 © ¹
[3.46]
In this way, we easily understand that the asymptotic behavior of the Bessel functions may once more be exploited to establish an inequality of the type:
³
Q - k
FF P 0 f k (X ) dQ C (- )K - , k
[3.47]
In [DES 00], this inequality is proven for k ! aP 0 , - , a constant that only depends on P 0 and - . Supposing that:
§
ȞA ·
³ Ff ¨¨ Ȟ iP 0 Ȟ ¸¸ dȞ Ȟ !b ¹ ©
[3.48]
is negligible, we derive from equations [3.43] and [3.46] that the essential support of FF P0 f k (X ) is the set: K 2, P 0
° §X § §1 · ·½° · ®(k , X ) Z u , X b, k max¨¨ , max¨¨ b¨ 1¸, aP 0 , - ¸¸ ¸¸¾ ¹ °¯ © ©¹ ¹°¿ ©-
We easily conclude that for 0 - 1 and fixed P 0 , K 2,P0 becomes K 2 when b becomes large. Thus, the sampling conditions of the exponential Radon transform are essentially the same as those of the classical 2D Radon transform.
3.3. Sampling in 3D tomography 3.3.1. Introduction The sampling conditions in 3D tomography have not been studied much and remain mostly unknown globally. The apparently simple problem of sampling the cone-beam transform X c f (n, s) , with n S 2 and s ī , where ī is the trajectory of the source in space, is not solved for the general case. In fact, sampling
80
Tomography
conditions are perfectly known for the X-ray transform in parallel geometry X p f (n, s) , where s n A , only if n is restricted to a large circle (taking symmetry into account, a half-circle is sufficient from a theoretical point of view for the data to be complete, i.e. for Orlov’s conditions to be satisfied [ORL 76]). This geometry corresponds in 3D to the reconstruction of a volume as a series of independent slices. In this case, the sampling conditions may be established mathematically, as we will see in section 3.3.2. The result is an extension of the sampling conditions in two dimensions to three dimensions. All generalizations that we presented for two dimensions may be extended to three dimensions in parallel geometry when they have a meaning. By contrast, the sampling conditions in 3D cone-beam tomography are not precisely established from a theoretical point of view, including those for the most classical trajectories, the circle and the helix. They may, however, be tackled from a numerical point of view, as we will see in section 3.3.3.
3.3.2. Sampling of the X-ray transform We assume that f C 0f (:) , where :
: 2 u > 1,1@ is the plain unit cylinder in
3
. For the sake of simplicity, we define n
n\
sin \ , cos\ ,0 T
and
s( p1, p2 ) p1 cos \ , sin \ ,0 T p2e3 , with ( p1 , p2 ) 2 , \ [0,2ʌ ] . We parametrize the X-ray transform, constrained to the unit circle, in the following way: g (\ , p1 , p 2 )
X p f n(\ ), s( p1 , p 2 )
[3.49]
As in 2D tomography, we define the Fourier transform of g by:
Fg k (X1 , X 2 )
1 2S ik\ d\ ³ Fg (\ , X1 , X 2 )e 2S 0
[3.50]
with: Fg (\ , X1 , X 2 )
1 i ( p X p X ) ³ Fg (\ , p1 , p 2 )e 1 1 2 2 dp1dp 2 2S 2
[3.51]
We can show [DES 97a] that the essential support of Fg k (X1 , X 2 ) is contained in the set:
Sampling Conditions in Tomography
K3
81
° ½° § X1 § 1 ·· 2 , b¨¨ 1¸¸ ¸, X 2 cb, X1 ¾ ®(k , X1 , X 2 ) Z u , X1 b, k max¨¨ ¸ ©¹¹ °¯ °¿ © -
with: b if X1 d X- ,b max1, (1 - )b ° cb, X1 ® b 2 X12 if X- ,b X1 b ° 0 otherwise ¯
¦
More precisely, we may establish an upper bound to ³ Fg k (X1 , X 2 ) dX1dX 2 , which is a quantity that converges rapidly towards
k ( k ,X1 ,X 2 )K 3
zero when b approaches infinity (similar to what we obtained in 2D, see equation [3.21]). In Figure 3.4, a 3D visualization of K 3 is given. It is obtained from the discrete Fourier transform of the data X p f n(\ ), s( p1 , p 2 ) of a function f simulating a
c1 , c 2 , c3 0.3,0.4,0.1 .
relatively smooth peak centered at c
This peak is
constructed out of a sum of indicator functions of spheres denoted by F D(c, r ) , where c is the center and r is the radius of this sphere. We chose in this example six indicator functions of spheres with radii of r1 = 0.1; r2 = 0.07; r3 = 0.05; r4 = 0.03;
r5 = 0.02; and r6 = 0.01, i.e. f x
6
¦ F D(c,ri ) x . We show in Figure 3.4 the
i 1
isosurface of 1% of the maximum of Fg k X1 , X 2 , which, in practice, contains the largest values of Fg k X1 , X 2 . The reader may very simply verify that K 3 is contained in the intersection, denoted by K 3e , of two cylinders: one cylinder with axis k and a hexagon as its base, the other with axis X 2 and the set K 2e as its base, as seen in Figure 3.5 (the
reader may verify that X - ,b 2 3 b is necessary, i.e. - ! 3 1 , which is hardly restrictive since we are interested in the case of - being close to one (see [DES 97c])). We show in this figure the trace in the planes k , X1 and X1 , X 2 of the most compact regular packing of the sets K 3e in the Fourier domain k , X1 , X 2 , T (see equation [3.52]). It may be shown generated by the matrix 2ʌ WEH T 3 = are without overlap in = u 2 . algebraically that the sets K 3e 2ʌWEH
82
Tomography
Figure 3.4. Visualization of the set K3. The isosurface of 1% of the maximum of the magnitude of the Fourier transform of the X-ray transform in parallel geometry (restricted to the unit circle) of a sum of indicator functions of disks, simulating a peak off-center, is shown
Moreover, considering the standard sampling matrix WS- ,1,1 (see equation [3.53]), WS- ,1,1 is obviously the best standard sampling matrix associated with
K 3e , since 2ʌWST- ,1,1 is, among the diagonal matrices ' , the one that generates the most compact scheme without overlap of the sets K 3e '=3 (see Figure 3.6). The standard scheme generated by the diagonal matrix WS- ,1,1 , corresponding to an equidistant scheme independent in each of the directions \ , p1 , and p 2 , is
4- c 3-
times less efficient than the interlaced hexagonal scheme generated by the matrix 4- c WEH : in fact, det WS- ,1,1 det WEH (with 0 - c 1 , and - c close to one 3in practice).
T 2SWEH
WEH
ª1 0 « -c b« 1 2 « 0 0 ¬
ª c -c 2S «« 0 1 b« « 0 1 3 ¬
0º » 1 » 2SWST- ,1,1 3» ¼
º 0 » 0 » WS-,1,1 2 »» 3¼
ª1 0 0º « » 2b « 0 1 0» « 0 0 1» ¬ ¼
[3.52]
ª- 0 0º S« 0 1 0»» b« ¬« 0 0 1»¼
[3.53]
Sampling Conditions in Tomography
83
T 3 Figure 3.5. Tiling generated by K3e 2ʌ WEH = in the planes k ,X1 (left) and X1,X 2 (right)
In the standard scheme WS- ,1,1 = 3 , the grid of the detector is square with a spacing of
ʌ
. The angular spacing is -
ʌ , thus slightly smaller, since - is chosen b
b close to one ( 0 - 1 ). The number of projections over >0, ʌ must hence be slightly larger than
ʌ 2
times the number of samples in the direction cos\ , sin \ , 0 .
In fact, : is the unit cylinder, and we therefore sample along that direction in the interval > 1;1@ . The interlaced hexagonal scheme WEH = 3 may be described as the interlacing of two standard schemes:
WEH =3
WS 2- c,2, 2 =3 - c,1, 3
1
3
W
S 2- c, 2, 2
3
=3
[3.54]
In this way, with respect to the standard sampling WS- ,1,1 = 3 , the number of samples is two times less in the direction cos\ , sin \ ,0 and
2
times less in the direction e 3 . The grid of the detector is rectangular. It is translated by - c,1, 1 3 for 3
the odd projections (see Figure 3.7). As - c is close to - when it is close to one, the number of projections in the interlaced hexagonal scheme is very slightly larger than that in the standard scheme.
84
Tomography
Figure 3.6. Sampling on a standard grid: non-overlapping conditions in the planes k ,X1 and X1,X 2 of the sets K3e 2ʌ WST =3
Figure 3.7. Samples on the detector plane for the efficient interlaced hexagonal scheme. Shown are some samples for an even (right) and odd (left) projection angle. The sampling scheme is the same rectangular grid (even detector). It is simply translated between the even and odd projections. By superimposing even and odd projections, the hexagonal scheme is obtained
3.3.3. Numerical results on the sampling of the cone-beam transform Few theoretical results are known today as far as the sampling of the cone-beam transform is concerned. The sampling conditions in 2D tomography in fan-beam geometry were established by Natterer in 1993 [NAT 93]. In 3D tomography, these conditions are not rigorously established, even for the simplest trajectories. We present here a numerical approach to this problem for the circular trajectory (the simplest and the most common trajectory, even if it does not satisfy Tuy’s condition).
Sampling Conditions in Tomography
85
Figure 3.8. Visualization of the essential support of the Fourier transform of the cone-beam transform of a peak off-center for a circular trajectory when the radius of the source is 10 (left), 3 (center), and 1.5 (right), respectively, with the peak being contained in a sphere of radius 1. The sampling conditions obtained in parallel geometry seem to be approximately conserved down to r = 3. When the radius of the trajectory is close to the radius of the reconstructed domain, the essential support deforms and the parallel sampling conditions are no longer observed
Assuming that we sample F c f s(\ ), a( p1 , p 2 ) , with s(\) = – r(– sin\, cos\,0) =
– rW and a( p1 , p 2 ) s(\ ) p1T p 2 e 3 , where TS1, we define g c (\ , p1 , p 2 ) F c f s(\ ), a( p1 , p 2 ) . Intuitively, when the radius of the circular trajectory approaches infinity, this transform behaves like the parallel transform whose sampling conditions we just studied. In Figure 3.8, we show the isosurface of 1% of the maximum of Fg kc X1 , X 2 for different values of r, the radius of the trajectory of the source around the unit cylinder containing the function f. We chose in this example
f x
6
¦ F D(c,ri ) x , with ri as in the previous section, but c
i 1
0.7;0.5;0.3 . We see
that the sampling conditions remain similar to those obtained in parallel geometry if r is sufficiently large (in practice for r greater than 3 when the support of f is contained in the unit cylinder).
3.4. Bibliography [ABR 70] ABRAMOWITZ M., STEGUN I. A., Handbook of Mathematical Functions, Dover, 1970. [BRA 91] BRAUN H. S., HAUCK A., “Tomographic reconstruction of vector fields”, IEEE Trans. Signal Proc., vol. 39, n° 2, pp. 464–472, 1991. [COR 78] CORMACK A. M., “Sampling the Radon transform with beams of finite width”, Phys. Med. Biol., vol. 23, n° 6, pp. 1141–1148, 1978.
86
Tomography
[DES 95] DESBAT L., “Efficient parallel sampling in vector field tomography”, Inverse Problems, vol. 11, pp. 995–1003, 1995. [DES 97a] DESBAT L., “Echantillonnage parallèle efficace en tomographie 3D”, C.R. Acad. Sci. Paris, vol. 324, pp. 1193–1199, 1997. [DES 97b] DESBAT L., MENNESSIER C., “Echantillonnage efficace en imagerie Doppler”, Proc. GRETSI, pp. 563–566, 1997. [DES 97c] DESBAT L., Echantillonnage efficace en tomographie, Habilitation, Joseph Fourier University, Grenoble, 1997. [DES 99] DESBAT L., MENNESSIER C., “On the invertibility of Doppler imaging: an approach based on generalized tomography”, Inverse Problems, vol. 15, pp. 193–213, 1999. [FAR 90] FARIDANI A., “An application of a multidimensional sampling theorem to computed tomography”, Proc. AMS-IMS-SIAM Conference on Integral Geometry and Tomography, Contemporary Mathematics, vol. 113, pp. 65–80, 1990. [FAR 94] FARIDANI A., “A generalized sampling theorem for locally compact abelian groups”, Math. Comp., vol. 63, n° 207, pp. 307–327, 1994. [GAS 95] GASQUET C., WITOMSKI P., Analyse de Fourier et Applications, Masson, 1995. [JER 77] JERRI A. J., “The Shannon sampling theorem – its various extensions and applications: a tutorial review”, Proc. IEEE, vol. 65, n° 11, pp. 1565–1596, 1977. [MEN 97] MENNESSIER, C., Identification en Imagerie Doppler: Liens avec la Transformée de Radon Généralisée, PhD thesis, Joseph Fourier University, Grenoble, 1997. [NAT 86] NATTERER F., The Mathematics of Computerized Tomography, John Wiley & Sons, 1986. [NAT 93] NATTERER F., “Sampling in fan-beam tomography”, SIAM J. Appl. Math., vol. 45, n° 5, pp. 1201–1212, 1993. [NOR 88] NORTON S. J., “Tomographic reconstruction of 2-D vector fields: application to flow imaging”, Geophysical J., vol. 97, pp. 161–168, 1988. [NOR 92] NORTON S. J., “Unique tomographic reconstruction using boundary data”, IEEE Trans. Imag. Proc., vol. 3, pp. 406–412, 1992. [ORL 76] ORLOV S. S., “Theory of three-dimensional reconstruction. Conditions of a complete set of projections”, Sov. Phys. Crystallogr., vol. 20, p. 312, 1976. [PRI 96] PRINCE J. L., “Convolution backprojection formulas for 3D vector tomography with application to MRI”, IEEE Trans. Imag. Proc., vol. 5, pp. 1462–1472, 1996. [RAT 81] RATTEY P. A., LINDGREN A. G., “Sampling the 2-D Radon transform”, IEEE Trans. ASSP, vol. 29, pp. 994–1022, 1981. [ROU 80] ROUX C., Contribution à l’Etude d’un Système d’Imagerie Cardiaque en Tomographie Axiale Transverse par Rayons X, PhD thesis, Grenoble Institute of Technology, 1980.
Sampling Conditions in Tomography
87
[WAT 66] WATSON G. N., A Treatise on the Theory of Bessel Functions, Cambridge University Press, 1966. [XIA 95] XIA W., LEWITT R. M., EDHOLM P., “Fourier correction for spatially variant collimator blurring in SPECT”, IEEE Trans. Med. Imag., pp. 100–115, 1998.
Tomography Edited by Pierre Grangeat Copyright 02009, ISTE Ltd.
Chapter 4
Discrete Methods
4.1. Introduction In this chapter, discrete methods of tomographic reconstruction are described, which are, in contrast to analytical methods, by definition based on discrete modeling of the image to be reconstructed and the measured data. These methods, which are also referred to as series expansion methods, encompass several large classes of techniques. They have been developed in different contexts to better examine certain physical and statistical phenomena that are inadequately described by the Radon transform. These methods permit integration of prior information on the acquisition process and on the images to be reconstructed, and they allow the flexible use of diverse optimization criteria. We distinguish two large classes of techniques. First, we describe the algebraic methods, which are generalized inverse methods that are adapted to a particular form of the Radon operator. The most commonly used ones are those derived from subspace projection techniques. Then, we present the statistical methods, which have been developed in the framework of Bayesian estimation or estimation by functional optimization. They have been elaborated by considering the context of the data acquisition determining the nature of the problem, which can be over- or under-determined. In the first case, the solution is unique, provided that consistency problems do not arise from different types of noise (measurement noise, discrepancy between model and data, etc.). By contrast, in the second case, only a limited number of measurements are available, which does not allow determination of a unique solution. The inverse problem may thus be weakly or strongly ill-conditioned. In the following descriptions of the individual methods they are classified into four categories resulting from the distinction between algebraic and statistical, and over- and under-determined methods. Chapter written by Habib BENALI and Françoise PEYRIN.
90
Tomography
In each section, we outline the main algorithms in use, which are, in view of the size of the problem, generally iterative. 4.2. Discrete models The discrete methods rely on a discrete representation of both the image to be reconstructed and the measured data. The image f(x) is represented by the vector f with coordinates fj in a finite basis of N square summable functions hj(x): N -1
f(x) = ¦ f j h j (x)
[4.1]
j 0
The most natural decomposition is the choice of the indicator functions of pixels or voxels for the functions hj. Other choices, such as “natural pixels” [BUO 81], “blobs” [LEW 92], B-splines, or wavelets [GUE 90], are possible as well. Generally, a projection measurement mi is expressed as a line integral along a path si(x): mi = ³ f(x) s i (x) dx
[4.2]
By introducing the decomposition of f(x) and exploiting the linearity of the continuous and discrete summation operators, the measurement is described by: N 1
>
@
mi = ¦ ³ h j ( x ) s i ( x ) d x f j j 0
[4.3]
Regrouping the set of measurements in the vector m leads to the discrete model in matrix–vector form: m=Rf
[4.4]
where R it a matrix of size M × N, whose elements rij represent the i-th projection measurement of the basis function hj(x). This formulation is generic because the matrix R, which represents the projection operator, may be defined in parallel or divergent geometry, in 2D or 3D, and for X-ray or emission tomography.
Discrete Methods
91
In X-ray tomography, modeled by the 2D Radon transform in parallel geometry (see Chapter 2), the coefficients rij represent the length of the intersection of the i-th ray with the pixel j, assuming that the basis functions are the indicator functions of pixels. For integration along a line parametrized by \i and pi, i.e. given by si(x,y) =
G(x cos \i + y sin \i í pi), we obtain: rij = ³ hj (pi cos \i í s sin \i, pi sin \i + s cos \i) ds
[4.5]
The calculation of these coefficients is complex and may lead to numerical instabilities. It must be as fast as possible, because the projection matrix R is generally not stored and hence has to be evaluated in each iteration. An efficient calculation technique is to sum the contributions of each pixel in the same way as in backprojection algorithms [JOS 82, PET 81]. In emission tomography, the coefficients r_ij are often interpreted as the probability that a photon emitted in the pixel or voxel j of the object in the direction of the detector reaches the detector “bin” i.

The choice of h_j(x) and s_i(x) permits modeling of the discretization of the data. It is also possible to consider the physics of the acquisition more accurately (size of the source, response of the detector, attenuation effects, etc.) without changing the nature of the problem, as long as the modeled phenomena remain linear. All choices concerning the model are implicitly contained in the matrix R and affect its inversion. Equation [4.4] may be extended by adding an explicit global noise term, which includes different types of system noise (photon noise, conversion noise, detection noise, etc.):

$$m = R f + e \qquad [4.6]$$
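As an illustration of equations [4.4]–[4.6], the following minimal Python/NumPy sketch (all names are hypothetical, and the sampling-based length computation is only a crude approximation of equation [4.5], not the exact techniques of [JOS 82, PET 81]) assembles a sparse projection matrix R for a 2D parallel geometry:

```python
import numpy as np
from scipy.sparse import lil_matrix

def projection_matrix(n, angles_deg, n_bins, step=0.25):
    """Approximate r_ij of eq. [4.5]: intersection length of ray i with pixel j
    of an n x n image, sampled every `step` pixel units along each ray."""
    R = lil_matrix((len(angles_deg) * n_bins, n * n))
    c = (n - 1) / 2.0                      # image center
    t = np.arange(-n, n, step)             # sample positions along the ray
    for a, ang in enumerate(np.radians(angles_deg)):
        cs, sn = np.cos(ang), np.sin(ang)
        for b in range(n_bins):
            p = b - (n_bins - 1) / 2.0     # signed detector-bin offset
            # points of the line delta(x cos(psi) + y sin(psi) - p)
            x = p * cs - t * sn + c
            y = p * sn + t * cs + c
            inside = (x >= 0) & (x < n) & (y >= 0) & (y < n)
            j = np.floor(x[inside]).astype(int) * n + np.floor(y[inside]).astype(int)
            for jj in j:                   # each sample contributes `step` length
                R[a * n_bins + b, jj] += step
    return R.tocsr()
```

With such a matrix, m = R @ f.ravel() simulates the acquisition of equation [4.4], and noisy data of the form [4.6] are obtained by adding a noise vector e.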
The inverse problem in tomography thus involves solving this linear system of equations, i.e. determining the image f given the data m, the projection matrix R, and optionally information on the noise e and the image f. This problem belongs to the class of image restoration problems, but has some specific characteristics [GRA 99]. The matrix R is very large, with its dimensions typically being the number of pixels or voxels of the image N and the number of measurements M. It is very sparse by design, but its structure is not exploitable mathematically. Depending on the conditions of the acquisition, the inverse problem in tomography may be over- or under-determined, or it may be inconsistent due to noise. Its “well- or ill-conditioned” nature (in the sense of Hadamard) has
theoretically been studied for certain particular cases. It appears that the ill-conditioned nature of the problem deteriorates very rapidly with the number of missing views [DAV 83].

4.3. Algebraic methods

4.3.1. Case of overdetermination by the data

4.3.1.1. Algebraic methods based on quadratic minimization

In the case of overdetermination, the solution is often found by minimization of a quadratic function J(f).
Least squares method

The classical approach is to minimize the quadratic error between the measured data and the projection of the image, i.e. to choose the function:

$$J(f) = \| m - R f \|^2 \qquad [4.7]$$
The minimization of this function leads to the least squares (LS) solution given by the solution of the system of normal equations:

$$R^T R\, \hat{f} = R^T m \qquad [4.8]$$
where R^T represents the transpose of the matrix R. Generally, the solution may be expressed as a function of the generalized inverse of R, denoted by R⁺, which reduces to (R^T R)⁻¹ R^T when R^T R is invertible. It reads:

$$\hat{f} = R^{+} m \qquad [4.9]$$
When the matrix R^T R is invertible, the solution of equation [4.9] corresponds to the least squares solution with minimal norm. It is often unstable in the presence of noise, because the singular values of R may be small compared to the noise. For this reason, regularization is preferably employed.
Regularized methods

The principle of regularization is to reformulate the original problem as a well-posed problem. Different approaches to regularization exist. In this chapter, we confine ourselves to those based on the incorporation of prior information.
In this framework, regularization in the sense of Tikhonov [TIK 63] involves imposing a smoothness constraint on the solution, described by a new functional Φ(f). This leads to a constrained minimization problem, which may be reformulated as an unconstrained minimization of:

$$J(f) = \| m - R f \|^2 + \lambda\, \Phi(f) \qquad [4.10]$$
The parameter λ enables control of the relative weight attached to the data, i.e. ‖m − R f‖², and to the a priori, i.e. Φ(f).
A classical solution is to choose Φ(f) in the form of ‖C f‖², where C is a differential operator. The solution f̂ is then determined by the solution of the new system of equations:

$$\left( R^T R + \lambda\, C^T C \right) \hat{f} = R^T m \qquad [4.11]$$
The introduction of the operator C enables improvement of the conditioning of the matrix: the obtained solution is both more stable and “smoother”. The regularization parameter λ may be estimated by cross-validation techniques. The operator R^T, which appears explicitly in equation [4.11], may be interpreted as the discrete version of the backprojection operator widely used in analytical methods.

An approach that permits a generalization of the previous cases and that is consistent with Bayesian estimation was presented in [HER 80]. It involves minimizing a function of the form:

$$J(f) = (m - R f)^T W_1 (m - R f) + (f - f_0)^T W_2 (f - f_0) \qquad [4.12]$$
where W_1 and W_2 are positive definite matrices. A common choice, which allows integrating information about the noise, is to select W_1 equal to the inverse of the (centered) noise covariance matrix.

4.3.1.2. Algorithms

The solution of equation [4.11] requires the inversion of a large matrix. Consequently, iterative techniques are suitable for calculating an approximate solution. We present here mainly minimization algorithms that are based on gradient descent methods. For small problems, equation [4.11] may also be solved directly, as in the sketch below.
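A minimal sketch of such a direct solution of the regularized normal equations [4.11] for a small dense problem (hypothetical names; C is chosen here as a first-order finite-difference operator):

```python
import numpy as np

def tikhonov_solve(R, m, lam, n):
    """Solve the regularized normal equations [4.11],
    (R^T R + lambda C^T C) f = R^T m, for an n x n image,
    with C a finite-difference gradient operator."""
    D = np.eye(n - 1, n, 1) - np.eye(n - 1, n)   # 1D forward differences
    C = np.vstack([np.kron(np.eye(n), D),        # differences along x
                   np.kron(D, np.eye(n))])       # differences along y
    A = R.T @ R + lam * (C.T @ C)
    return np.linalg.solve(A, R.T @ m)
```

For realistic image sizes the dense solve is of course prohibitive, which is precisely why the iterative schemes below are preferred.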
SIRT method

The SIRT algorithm (simultaneous iterative reconstruction technique) is one of the iterative algorithms specifically developed for tomography [GIL 72, LAK 79]. Each pixel j is iteratively corrected by taking into account all measured data to which the pixel contributes. The iterative procedure may be expressed in the form [HER 76]:

$$f_j^{(n+1)} = f_j^{(n)} + \lambda \left[ \sum_{i=1}^{M} \left( m_i - \langle r_{i\cdot}, f^{(n)} \rangle \right) r_{ij} \right] \bigg/ \left[ \sum_{i=1}^{M} \sum_{q=1}^{N} r_{iq}\, r_{ij} \right] \qquad [4.13]$$
where λ is a relaxation factor and r_{i·} denotes the i-th row of the matrix R, corresponding to the equation of one measurement. The iterative procedure may be rewritten in the form of a descent algorithm as:

$$f^{(n+1)} = f^{(n)} + \lambda\, B\, (m - R f^{(n)}) \qquad [4.14]$$
by choosing B = S R^T, where S is a diagonal matrix whose elements s_j are the inverse of the denominator in equation [4.13]. The algorithm thus admits the following interpretation: each pixel j is corrected by the backprojection, normalized by the coefficient s_j, of the deviation between the measured data and the calculated projections. Generally, the algorithm defined by equation [4.14] converges if and only if:

$$\rho(I - \lambda B R) = \max_i |1 - \lambda \gamma_i| < 1 \qquad [4.15]$$
where ρ(A) denotes the spectral radius of the matrix A and the coefficients γ_i are the eigenvalues of B R.

Landweber method

The Landweber method [LAN 51] corresponds to a descent algorithm following equation [4.14] by choosing B = R^T and λ_n = λ. It converges to the minimum norm solution when the algorithm is initialized with zero. It is a gradient method with constant step-size, whose convergence is quite slow. Generalizations of this method have been developed to improve the speed of convergence.
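A minimal sketch of the Landweber iteration [4.14] (hypothetical names; B = R^T, constant relaxation). Replacing B by S R^T with the diagonal normalization of equation [4.13] gives a SIRT-type scheme:

```python
import numpy as np

def landweber(R, m, lam, n_iter, f0=None):
    """Landweber iteration f <- f + lam * R^T (m - R f), eq. [4.14] with B = R^T.
    Converges for 0 < lam < 2 / rho(R^T R); initialized at zero, it converges
    to the minimum norm least squares solution."""
    f = np.zeros(R.shape[1]) if f0 is None else f0.copy()
    for _ in range(n_iter):
        f += lam * (R.T @ (m - R @ f))   # backproject the residual
    return f
```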
Optimal step-size gradient method

The gradient method with optimal step-size is defined by the iterative procedure:

$$f^{(n+1)} = f^{(n)} + \lambda_n q_n, \quad \text{with} \quad \lambda_n = \frac{q_n^T q_n}{(R q_n)^T (R q_n)} \quad \text{and} \quad q_n = R^T (m - R f^{(n)}) \qquad [4.16]$$
The relaxation coefficient λ_n is chosen such that the quadratic error ‖m − R f^{(n+1)}‖² is minimized. This method permits following the direction of steepest descent of the function to be minimized. In each iteration, the residuum is reduced, and successive residua are orthogonal.

Conjugate gradient method

The “conjugate gradient” (CG) method involves choosing descent directions that are no longer orthogonal with respect to the Euclidean norm, but conjugate with respect to R^T R. It is described by:
$$f^{(n+1)} = f^{(n)} + \lambda_n p_n, \qquad r_n = R^T (m - R f^{(n)}), \qquad p_1 = r_0 \;\text{ and, if } n > 1,\; p_{n+1} = r_n + \beta_n p_n \qquad [4.17]$$

where the directions satisfy the conjugacy condition $\langle p_{n+1}, R^T R\, p_n \rangle = 0$. It follows that the coefficients λ_n and β_n are given by:

$$\lambda_n = \frac{p_n^T p_n}{(R p_n)^T (R p_n)}, \qquad \beta_n = -\frac{(R r_n)^T (R p_n)}{(R p_n)^T (R p_n)} \qquad [4.18]$$
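A sketch of the standard CGLS formulation of this method (hypothetical names), algebraically equivalent in exact arithmetic to equations [4.17]–[4.18]:

```python
import numpy as np

def cgls(R, m, n_iter):
    """Conjugate gradient on the normal equations (CGLS): minimizes
    ||m - R f||^2 with directions conjugate with respect to R^T R."""
    f = np.zeros(R.shape[1])
    r = R.T @ (m - R @ f)            # residual of the normal equations
    p = r.copy()
    rho = r @ r
    for _ in range(n_iter):
        q = R @ p
        lam = rho / (q @ q)          # optimal step along p
        f += lam * p
        r -= lam * (R.T @ q)         # update normal-equation residual
        rho_new = r @ r
        p = r + (rho_new / rho) * p  # new R^T R-conjugate direction
        rho = rho_new
    return f
```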
It is worth noting that this approach is difficult to implement for large volumes, since it requires storing the descent direction p_n of the previous step.

Minimum residual method

The solution of equation [4.8] may be interpreted as a preconditioning of the system with the operator R^T. The choice of the preconditioning operator, which is not limited to R^T, is of particular interest in the framework of applications of emission tomography. In this case, the aim is to solve:

$$B R\, \hat{f} = B m \qquad [4.19]$$
For solving such a system, a generalization of the CG method for asymmetric operators exists. A specific variant of this method, called the minimum residual (MR) method, was proposed in [AXE 80] and was applied to tomography in [LA 97]. The iterative procedure may be expressed by:
$$f^{(n+1)} = f^{(n)} + \lambda_n p_n, \qquad r_n = B (m - R f^{(n)}) \;\; (\text{hence } r_{n+1} = r_n - \lambda_n B R p_n), \qquad p_{n+1} = r_{n+1} + \beta_n p_n \qquad [4.20]$$

where $\langle r_{n+1}, B R p_n \rangle = 0$ and $\langle B R p_{n+1}, B R p_n \rangle = 0$.
4.3.2. Case of underdetermination by the data

4.3.2.1. Algebraic methods based on constrained optimization

The case of underdetermination by the data is encountered when the number of measurements is limited. It may occur when the acquisition has to be very fast (cardiac applications, industrial testing, etc.) or when data for certain angles of incidence are not available or exploitable. It is, therefore, interesting to note that the general solution of the reconstruction problem described by equation [4.4] may be decomposed into:

$$f = \hat{f} + f^{\perp} \qquad [4.21]$$
where f̂ denotes the least squares solution with minimum norm and f^⊥ an element of the kernel of R. The first term corresponds to the solution given by the generalized inverse, while the second term corresponds to a component of the “non-observable” image. These non-observable components correspond to images for which the projections under all available angles of incidence are zero [HAN 87]. Considering the Fourier slice theorem (see Chapter 2), it is possible to show that the least squares solution with minimum norm is zero for all missing angles of incidence in Fourier space [PAY 96], which is often inconsistent with the physical reality of the expected images. To refine this solution, it is, therefore, necessary to introduce constraints on it, which permit choosing an acceptable solution among an infinite number of possible solutions. The employed constraints may be either quite general, such as positiveness or support constraints, or specific to the application, such as binary constraints. We limit ourselves here to deterministic prior information, while statistical prior information is addressed in section 4.4. Considering that each constraint defines a set, the image to be reconstructed must be an element of the intersection of these sets. When the constraints define
convex sets, the solution may be found by successive projections onto the different sets, a method known as POCS (projection onto convex sets). By noting that each equation itself may be interpreted as a convex constraint, this method has been applied in tomography, notably in [SEZ 82, YOU 82]. In general, the problem may be tackled by minimizing a function C(f) under the constraints imposed by the measurements [BRE 67]:
$$\min_f C(f) \quad \text{under constraint} \quad R f = m \qquad [4.22]$$
4.3.2.2. Algorithms

The ART algorithm (algebraic reconstruction technique) is very frequently employed in tomography, as is its multiplicative variant called MART. These algorithms may be interpreted as specific cases of the algorithm proposed by Bregman [BRE 67] for solving a constrained minimization problem with a quadratic (ART) or an entropy (MART) criterion. The ART algorithm was proposed in the original paper by Hounsfield [HOU 72]. It is based on the intuitive idea of correcting in each iteration all the pixels contributing to a measurement to render the solution consistent with this measurement. It may be expressed in the form:

$$f^{(n+1)} = f^{(n)} + \lambda_n \frac{m_i - \langle r_{i\cdot}, f^{(n)} \rangle}{\| r_{i\cdot} \|^2}\, r_{i\cdot}^T \qquad [4.23]$$
This algorithm is equivalent to the Kaczmarz method [TAN 71], in which the solution is, in each iteration, orthogonally projected onto the hyperplane corresponding to the equation of the considered measurement. Its convergence towards the least squares solution with minimum norm is guaranteed if the value of the parameter λ is strictly between 0 and 2. In each iteration, only a single row of the matrix R is used. Therefore, the algorithm belongs to the row action methods. Its popularity in tomography is linked to the ease of its implementation and its rapid convergence. If a cycle is defined as M iterations (M being the number of measurements), ART typically converges in five to six cycles. However, it shows instability in the presence of noise, since it converges towards a least squares solution. This instability may be avoided by stopping the iteration early enough, i.e. before the solution deteriorates. Methods that permit choosing the number of iterations optimally were proposed, notably in [DEF 87].
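A minimal sketch of the ART/Kaczmarz sweep [4.23] (hypothetical names):

```python
import numpy as np

def art(R, m, n_cycles=5, lam=1.0):
    """Kaczmarz/ART, eq. [4.23]: one row of R per iteration, relaxed projection
    of the current estimate onto the hyperplane <r_i, f> = m_i (0 < lam < 2)."""
    R = np.asarray(R)
    f = np.zeros(R.shape[1])
    row_norm2 = (R ** 2).sum(axis=1)
    for _ in range(n_cycles):                  # one cycle = M row updates
        for i in range(R.shape[0]):
            if row_norm2[i] > 0:
                f += lam * (m[i] - R[i] @ f) / row_norm2[i] * R[i]
    return f
```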
The multiplicative variant of ART (MART), which converges towards a solution of maximum entropy type, is given by:

$$f_j^{(n+1)} = f_j^{(n)} \left( \frac{m_i}{\langle r_{i\cdot}, f^{(n)} \rangle} \right) \qquad [4.24]$$
These basic algorithms may be generalized to operate on blocks. The correction is then no longer limited to a single projection measurement but to a set of such measurements. Considering a partition of the matrix R into submatrices R_{i_n}, the procedure of block-iterative ART is described by:

$$f^{(n+1)} = f^{(n)} + \lambda_n \frac{R_{i_n}^T \left( m_{i_n} - R_{i_n} f^{(n)} \right)}{\left\| R_{i_n}^T R_{i_n} \right\|^2} \qquad [4.25]$$
where m_{i_n} is the block of measurements corresponding to the block R_{i_n}. It is quite natural to use blocks that correspond to an entire projection. In this way, we obtain the SART (simultaneous ART) algorithm. In 3D, this type of method has been exploited for cone-beam reconstruction by using blocks that correspond to 2D projections [EGG 81, PEY 90]. The limiting case of a single block corresponds to an algorithm of SIRT type that converges very slowly. It is believed that the convergence of block-iterative ART algorithms is accelerated with respect to the SIRT algorithm by a factor roughly equal to the number of blocks. In addition, the speed of convergence may be optimized further if successive blocks are chosen to be as orthogonal as possible [HER 93].

The use of an ART with constraints, including, for example, positiveness or boundary constraints, enables improved robustness. The ART4 method allows the introduction of regularization with quadratic constraint to ensure the convergence of the algorithm towards a solution with minimum norm [HER 80, p. 192]. The constraints may be introduced by employing the POCS method. In this case, they may also be incorporated into iterative gradient descent methods [MED 87]. A CG method with support constraint was described in [KAW 85]. A generalization of the notion of support leads to a minimization of a semi-quadratic criterion, which was proposed in [PAY 96].
4.4. Statistical methods

4.4.1. Case of overdetermination by the data
4.4.1.1. Bayesian statistical methods

We consider here the case where the projections m correspond to the realization of a random process M. The object f is also the realization of a random process F. We introduce the knowledge that we have about the object f by means of a prior probability distribution p(f). The reconstruction of the object f from the projections m may be formulated in the framework of Bayesian theory. By applying Bayes’ theorem, the probability of obtaining the object f given the projections m (i.e. the posterior probability distribution p(f|m) as a function of the conditional probability distribution p(m|f), the prior probability distributions of the object p(f), and the data p(m)) may be expressed as:
$$p(m)\, p(f \mid m) = p(m \mid f)\, p(f) \qquad [4.26]$$
In emission tomography, for example, p(f) is the probability distribution of the number of photons emitted during a given duration, and p(m) is the probability distribution of the number of photons detected. The object f may be estimated by minimizing an integral cost function c(f,g) that penalizes deviations between the exact solution g and the estimated solution f. In the Bayesian formalism, the solution is given by:

$$f_{\text{solution}} = \arg\min_f \int c(f, g)\, p(g \mid m)\, dg \qquad [4.27]$$
Most of the estimators that are encountered in the literature may be obtained as specific cases of equation [4.27].

Maximum a posteriori (MAP) criterion

This criterion is derived from equation [4.27] by choosing c(f,g) = 1 − δ(f,g) as the cost function, where δ is the Dirac distribution at the point f. It corresponds to favoring the most likely solution given the measured data, which is:

$$f_{\text{solution}} = \arg\max_f p(f \mid m) \qquad [4.28]$$
Mean a posteriori criterion
For this criterion, the quadratic cost function c(f,g) = (f − g)^T Q (f − g) is selected, where Q is a metric on the space of processes F. It corresponds to choosing the solution with minimum mean error:

$$f_{\text{solution}} = \int f\, p(f \mid m)\, df \qquad [4.29]$$
Another specific case of the MAP criterion, which is very often used in practice, is the case of a uniform (i.e. an uninformative) prior probability distribution of the object. The conditional probability distribution p(m|f) is then equivalent to the posterior probability distribution p(f|m). Thus, the solution that maximizes the MAP criterion is that of maximum conditional probability, which corresponds to the solution that maximizes the probability of obtaining the measured values.

4.4.1.2. Regularization

Several methods classified under the term regularization aim to transform an ill-posed inverse problem into a better conditioned inverse problem by imposing a constraint on the object f. For this purpose, a penalty term is introduced into the optimization criterion. We reformulate the problem of regularization in the Bayesian framework. The estimator we choose is the MAP estimator. Thus, an object f that solves equation [4.28] is sought. The criterion reads:
$$U(f) = -\log[p(m \mid f)] - \log[p(f)] + \text{const} \qquad [4.30]$$
U(f) is composed of a term of adequacy to the data U(m,f) (i.e. −log[p(m|f)]), a regularization term V(f) (i.e. −log[p(f)]), and a constant. The regularization of the ill-posed problem by the MAP criterion requires the definition of a probability p(f). Several choices exist for this a priori. Often, the probability distributions p(f) are introduced by means of Markov fields and Gibbs distributions, for instance to respond to questions on the conditions the a priori V(f) has to satisfy to preserve the discontinuities of the object f.
4.4.1.3. Markov fields and Gibbs distributions

For the MAP estimation, we present here the prior probability distributions that are most frequently used in tomographic reconstruction. First of all, we summarize some of the theory of Markov fields, in which the object f is considered as a random field, defined over a set of sites i, where i ∈ {1,…,n}. On this set, a neighborhood system Q (i.e. Q_i is the neighborhood of the pixel i) is defined. The prior probability distribution is defined by:
$$p(f_i) \geq 0, \qquad p(f_i \mid f_j, j \neq i) = p(f_i \mid f_j, j \in Q_i) \qquad [4.31]$$
The Hammersley–Clifford theorem [GEM 84] establishes the link between the local probabilities given by equation [4.31] and the global probability of the a priori p(f). The object f is then a Markov field with the neighborhood system Q and the probability p(f) defined by:
$$p(f) = \frac{1}{Z} \exp\left( -\beta \sum_{i=1}^{n} \varphi_i(f) \right) \qquad [4.32]$$
where Z is a normalization called the partition function, β a regularization parameter, and φ_i(f) the potential function of local interaction that defines the local constraints on the object f. The MAP criterion reads in this case:

$$f_{\text{MAP}} = \arg\max_f \left[ \ln p(m \mid f) + \ln p(f) \right] = \arg\max_f \left[ \ln p(m \mid f) - \beta \sum_{i=1}^{n} \varphi_i(f) \right] \qquad [4.33]$$
In emission tomography, the estimated object f corresponds to the mean number of photons emitted per voxel during a given time. Theoretically, it may be considered to be the realization of a Poisson process with a mean that is proportional to f. The MAP criterion then reads:
$$f_{\text{MAP}} = \arg\min_f \sum_l \left\{ \sum_k r_{lk} f_k - m_l \ln\left( \sum_k r_{lk} f_k \right) \right\} + \beta \sum_k \varphi_k(f) \qquad [4.34]$$
with the potential function being chosen such that the discontinuities of the object are preserved. We refer to Gaussian likelihood if f is the realization of a Gaussian process. The MAP criterion is then given by:

$$f_{\text{MAP}} = \arg\min_f \left[ \| m - R f \|^2 + \beta \sum_k \varphi_k(f) \right] \qquad [4.35]$$
In most cases, it is non-convex due to the a priori V(f). Only local minima of the criterion U(f) may thus be found with deterministic algorithms for the calculation of fMAP. The simple descent algorithms of CG or iterated conditional mode (ICM) [BES 86] type may in this way lead to a bad estimation of the object f. Non-convex optimization techniques exist which rely on stochastic optimization algorithms, such
as simulated annealing [KIR 83], graduated non-convexity [BLA 87], and mean field annealing [GEI 91]. They require considerable computation time and are, therefore, little used in practice.

The choice of the potential function φ and the regularization parameter β determines the object f. Therefore, it is important that it is guided by the pursued objective. A goal such as the detection of tumors in single photon emission computed tomography (SPECT) and positron emission tomography (PET) images is very different from a goal such as the quantitative estimation of radioactive concentration from these images. Information about the support of the object is essential in the first case, while the most accurate measurement of the fixation of radioactive isotopes is important in the second case. An overview of iterative tomographic reconstruction techniques employed in SPECT and PET can be found in [LEA 00].

4.4.1.4. Potential function

The choice of the potential function is intimately linked to the objective to be achieved with the reconstructed image. For technical reasons, an even function is selected to be independent of the sign of the gradient of the image. For example, the MAP criterion for the quadratic potential function φ(x) = x² reads:

$$U(f) = U(m, f) + \beta \left( \| \partial_x f \|^2 + \| \partial_y f \|^2 \right) \qquad [4.36]$$
where ∂_x f and ∂_y f are the partial derivatives along x and y. It corresponds to the regularization criterion of Tikhonov [TIK 63] in the case of a Gaussian situation, where the resulting image f is smoothed and thus less sensitive to errors in the projections m. This criterion seems to be better suited for problems involving quantification. One of its drawbacks is that it does not preserve the discontinuities of the object f, for instance along the contours of the elements of the image.

To solve the problem of discontinuities, Geman proposed introducing the notion of “line process” in the model of the object f [GEM 84]. This process employs a Boolean variable l = (l_x, l_y) (i.e. a field of Boolean variables representing the support of f). The processes l_x and l_y locate the discontinuities of the object f along the rows and columns, respectively. The MAP criterion thus integrates estimation of the object f and of the non-observable process l. It reads:
$$f_{\text{MAP}} = \arg\min_f U(m, f) + V(f, l)$$

with:

$$V(f, l) = \lambda \sum_{ij} \left\{ (l_x)_{ij} (\partial_x f_{ij})^2 + (l_y)_{ij} (\partial_y f_{ij})^2 \right\} + \alpha \sum_{ij} \left( \left[ 1 - (l_x)_{ij} \right]^2 + \left[ 1 - (l_y)_{ij} \right]^2 \right) \qquad [4.37]$$
The first term defines the energy of the interaction between f and l, and the second defines the energy of the regularization of the line process, which corresponds to the cost of creating a discontinuity. The parameter λ is the elasticity constant, and α is the penalty associated with creating a discontinuity. This model favors the reconstruction of piecewise constant objects f [LEE 95] and shows good performance in the reconstruction of objects while respecting their discontinuities [GIN 93]. However, estimation of the object f requires the implementation of annealing algorithms, which render the use of such a criterion difficult. Moreover, because of the smoothing parameters and the choice of the a priori, this model seems to be better adapted to problems involving quantification. If the objective is to preserve discontinuities of the object, another implicit representation of the line process by the potential function exists. It leads to the a priori:
$$V(f, l_x, l_y) = \beta \left[ \sum_k \varphi(\partial_x f_k) + \varphi(\partial_y f_k) \right] \qquad [4.38]$$
The line process is implicitly included in the potential function in this case and given by:

$$(l_x)_k = \frac{\varphi'(\partial_x f_k)}{2\, \partial_x f_k}, \qquad (l_y)_k = \frac{\varphi'(\partial_y f_k)}{2\, \partial_y f_k} \qquad [4.39]$$
Several potential functions that fit into the general framework of Markov fields have been proposed to regularize the reconstruction [BLA 87, GEM 95] (see Table 4.1).

4.4.1.5. Choice of hyperparameters

The parameter β in the MAP criterion weighs the importance of the a priori distribution of the object V(f) relative to the adequacy to the data U(m,f). It plays a crucial role in the structure of the reconstructed object. Several approaches have been employed to estimate β, such as generalized cross-validation techniques [CRA 79], L-curves [HAN 92], and χ² tests [HEB 92], as well as to estimate the maximum conditional probability p(m|β) [DJA 93].
Potential | φ(x) | Properties
Green | φ_GR(x) = 2 log[cosh(x)] | convex
Hebert and Leahy | φ_HL(x) = ln(1 + x²) | non-convex
Geman and McClure | φ_GM(x) = x² / (1 + x²) | non-convex
Hypersurface | φ_HS(x) = 2√(1 + x²) − 2 | convex
Bouman | φ_B(x) = |x|^d, 1 ≤ d ≤ 2 | convex
Muncuoglu | φ_M(x) = x²/a if |x| < a/2, |x| − a/4 otherwise | convex
Lipinski | φ_L(x) = log[cosh(x)] if x ∈ Ω, 0 otherwise | non-convex

Table 4.1. Examples of continuous potential functions
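A short sketch (hypothetical names) of two potentials from Table 4.1 together with the implicit line-process weights φ′(t)/(2t) of equation [4.39]; note how the weight of the non-convex Geman and McClure potential vanishes for large gradients, so that discontinuities are no longer penalized:

```python
import numpy as np

def phi_green(t):
    return 2.0 * np.log(np.cosh(t))

def line_green(t):
    # phi'(t) = 2 tanh(t); weight phi'(t)/(2t), t != 0 assumed (limit 1 at 0)
    return np.tanh(t) / t

def phi_geman_mcclure(t):
    return t ** 2 / (1.0 + t ** 2)

def line_geman_mcclure(t):
    # phi'(t) = 2t / (1 + t^2)^2; weight phi'(t)/(2t) -> 0 at large |t|
    return 1.0 / (1.0 + t ** 2) ** 2
```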
The estimation of the parameter β from the joint density of f and m using the generalized maximum likelihood method has been proposed in [DJA 93]. The estimators are given by:

$$f^{(n)} = \arg\max_f p\left( m \mid f, \beta^{(n)} \right), \qquad \beta^{(n+1)} = \arg\max_{\beta} p\left( f^{(n)} \mid \beta \right) \qquad [4.40]$$
The object f and the parameter β are alternately determined while optimizing the MAP criterion. The optimization of the criterion given by equation [4.35] relies on Markov chain Monte Carlo methods [BES 95].
4.4.2. MAP reconstruction algorithms

4.4.2.1. ML-EM algorithm

The ML-EM (maximum likelihood expectation maximization) algorithm is probably one of the iterative algorithms employed the most in emission tomography. It is obtained in the Bayesian framework without a priori on the object f (i.e. p(f) is a uniform distribution). It iteratively performs a step E to estimate the object (“expectation”), followed by a step M of maximization (“maximization”). This last step involves solving the equations of conditional probability given the current estimate of the object. When f corresponds to the realization of a Poisson distribution, the algorithm is given by:
$$f_k^{(n+1)} = \frac{f_k^{(n)}}{\sum_l r_{lk}} \sum_l \left( \frac{m_l}{\sum_s r_{ls} f_s^{(n)}} \right) r_{lk} \qquad [4.41]$$
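A minimal NumPy sketch of the update [4.41] (hypothetical names; a small constant eps guards the divisions); the interpretation of the E and M steps is detailed below:

```python
import numpy as np

def ml_em(R, m, n_iter, eps=1e-12):
    """ML-EM update of eq. [4.41] for Poisson emission data:
    f_k <- (f_k / sum_l r_lk) * sum_l r_lk * m_l / (R f)_l."""
    R = np.asarray(R)
    f = np.ones(R.shape[1])             # strictly positive initialization
    sens = R.sum(axis=0) + eps          # sensitivity image sum_l r_lk
    for _ in range(n_iter):
        proj = R @ f + eps              # forward projection of current estimate
        f *= (R.T @ (m / proj)) / sens  # multiplicative correction keeps f >= 0
    return f
```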
In each iteration, the step E calculates for each pixel the number of photons emitted in a direction, conditional on the mean number of photons obtained as a preceding solution and on the measured data. The step M updates this mean number by averaging the contributions over all directions. This algorithm first reconstructs the low, then the high frequencies of the object f. The final reconstructed object f is positive, a property that follows from the multiplicative character of successive iterations. Another simple approach to regularizing the EM algorithm is to introduce filtering as postprocessing after the iteration. A Gaussian filtering is often employed for this purpose [SNY 87].

4.4.2.2. MAP-EM algorithm

This algorithm is the regularized version of the ML-EM algorithm. In the case of a Poisson conditional probability and modeling of the a priori p(f) by a Gibbs distribution, the criterion U(f) to be optimized by solving the normal equations of the conditional probability is not linear in f. In [GRE 90], the proposed solution involves evaluating in each iteration the normal equations with the help of the estimate obtained in the preceding iteration. In this case, the MAP-EM algorithm has the form:
$$f_k^{(n+1)} = \frac{f_k^{(n)}}{\displaystyle \sum_l r_{lk} + \beta \left. \frac{\partial V(f)}{\partial f_k} \right|_{f^{(n)}}} \sum_l \left( \frac{m_l}{\sum_s r_{ls} f_s^{(n)}} \right) r_{lk} \qquad [4.42]$$
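A sketch of the corresponding one-step-late update (hypothetical names; the function grad_V, supplying the gradient of the prior energy, is an assumption to be chosen, e.g. from the potentials of Table 4.1):

```python
import numpy as np

def map_em_osl(R, m, beta, n_iter, grad_V, eps=1e-12):
    """One-step-late MAP-EM, eq. [4.42]: the prior gradient is evaluated at the
    previous estimate and added to the sensitivity denominator."""
    R = np.asarray(R)
    f = np.ones(R.shape[1])
    sens = R.sum(axis=0)
    for _ in range(n_iter):
        proj = R @ f + eps
        denom = sens + beta * grad_V(f)        # one-step-late prior term
        f *= (R.T @ (m / proj)) / np.maximum(denom, eps)
    return f

# example of a (hypothetical) quadratic prior energy V(f) = ||f - mean(f)||^2:
# grad_V = lambda f: 2.0 * (f - f.mean())
```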
4.4.2.3. Semi-quadratic regularization

Semi-quadratic regularization was introduced in [GEM 85]. The criterion is quadratic in f using the line process l [GEM 92]. The principle of semi-quadratic regularization resides in an alternate minimization of the criterion U over the two variables l and f. With the criterion being quadratic in f for a given l, the iterative procedure of the minimization is given by:
$$l^{(n+1)} = \arg\min_l U\left( f^{(n)}, l \right), \qquad f^{(n+1)} = \arg\min_f U\left( f, l^{(n+1)} \right) \qquad [4.43]$$
In each step n, the value of the line process l is calculated from the estimate of the object f^(n) (equation [4.43]). This defines a new quadratic criterion U, a function of f, which is minimized. Semi-quadratic regularization ensures that certain characteristics of the reconstructed objects f are preserved, such as their discontinuities. Two algorithms that implement semi-quadratic regularization in the framework of a Gaussian and a Poisson conditional probability are presented here: the ARTUR algorithm for the Gaussian case and the MOISE algorithm for the Poisson case.

4.4.2.4. Regularization algorithm ARTUR

The algorithm ARTUR One-Step-Late (OSL) was developed to solve the regularized MAP criterion in the case of Gaussian data [CHA 94]. The iterative procedure that solves equation [4.43] has in this case the form:
$$f_k^{(n+1)} = \frac{B_k^{(n)}}{A_k + 2\beta\, C_k^{(n)}} \qquad [4.44]$$

with:

$$A_k = \sum_l r_{lk}, \qquad B_k^{(n)} = \sum_l m_l \frac{r_{lk} f_k^{(n)}}{\sum_{k'} r_{lk'} f_{k'}^{(n)}}, \qquad C_k^{(n)} = \Lambda_k f_k^{(n)} \qquad [4.45]$$
Λ_k is the matrix with the elements $\varphi'(\partial f_k) / (2\, \partial f_k)$, obtained in the directions x and y of the considered pixel. The OSL method estimates the variations of the a priori in iteration n + 1 from those in iteration n. This presupposes that the a priori varies
little from one iteration to the next, which is the reason for its name One-Step-Late. In [CHA 94], it is shown that this algorithm correctly restores high frequencies, for example the contours of the reconstructed objects f.

4.4.2.5. Regularization algorithm MOISE

The algorithm Modified-One-Step-Implicit-Scheme-EM (MOISE) was developed to solve the regularized MAP criterion in the case of Poisson tomographic data [KOU 96]. The iterative procedure given by equation [4.43] has the form:
$$f_k^{(n+1)} = \frac{2\beta C_k^{(n)} - A_k + \sqrt{\left( A_k - 2\beta C_k^{(n)} \right)^2 + 8\beta \Lambda_k B_k^{(n)}}}{4\beta \Lambda_k} \qquad [4.46]$$
This algorithm was applied to cerebral SPECT and PET data [KOU 96]. The results obtained with the MOISE algorithm are overall comparable to those obtained with the ARTUR algorithm. Even if the convergence of these two algorithms remains dependent on the choice of the potential function and the parameter β, the iterative procedures do not produce artifacts due to the reconstruction of high frequencies. This confirms the interest in regularization using the MAP criterion in the reconstruction of irregular objects f.
4.4.3. Case of underdetermination by the data

4.4.3.1. MEM method

We present here the method of maximum entropy on the mean (MEM) [GAM 97]. It enables selection of an object from a convex set, for example the set of positive objects. An a priori μ (i.e. a prior probability distribution) is imposed on the object to be reconstructed (the Poisson distribution for SPECT, for example). The MEM proceeds in two steps: first, it estimates the probability distribution p_MEM that is closest to the probability distribution μ in the sense of Kullback’s pseudo-distance K(p,μ) [KUL 59] while respecting a constraint on the data m; then, it estimates the object f by the mean of this probability distribution p_MEM. The modeling of the inverse problem using the MEM formalism is given by:
$$p_{\text{MEM}} = \arg\min_p K(p, \mu) \quad \text{under constraints: } R\, E_p[f] = m, \;\; \int dp(f) = 1; \qquad f_{\text{MEM}} = E_{p_{\text{MEM}}}[f] \qquad [4.47]$$
This technique has several methodologically interesting aspects. It permits introducing a priori information on the object f, for example combinations of probability distributions. The solution p_MEM is restricted to the space of exponential probability distributions. In this sense, the solution f is well conditioned. Moreover, the MEM method permits relaxing the adequacy of the data by introducing a supplementary variable of noise b, and this determines a unique extended solution (f,b) to the tomographic reconstruction problem. The interpretation of this object is, however, not always simple. In fact, the reconstructed density of noise has no physical sense in SPECT, and thus it may not be considered as a density of activity. Bayesian techniques introduce a priori knowledge about the noise in the projection data m in a simple way, but they pose the problem of uniqueness of the solution, i.e. of the reconstructed object.

Solving the inverse problem with the MEM method requires introduction of the Lagrangian associated with the criterion given by equation [4.47]. The new criterion depends on the Lagrange multipliers h, with optimal value h_r, and is given by:
$$h_r = \arg\max_h D(h), \qquad D(h) = h^T m - \log \int \exp(h^T R f)\, d\mu(f), \qquad f_{\text{MEM}} = \left. \frac{d \log \int \exp(h^T R f)\, d\mu(f)}{d(R^T h)} \right|_{h_r} \qquad [4.48]$$
This criterion is strictly concave; it may thus be solved with conventional optimization methods. In [AMB 01], the optimization is done using a gradient ascent algorithm with optimal step-size α. In each iteration n, we obtain:

$$h^{(n+1)} = h^{(n)} + \alpha\, \nabla D(h^{(n)}), \quad \text{with} \quad \nabla D(h^{(n)}) = m - R\, \nabla \log \int \exp\left( h^{(n)T} R f \right) d\mu(f) \qquad [4.49]$$
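For the product Poisson reference measure of Table 4.2, the log-partition function log ∫ exp(h^T R f) dμ(f) takes the closed form Σ_k f_μ,k (e^{(R^T h)_k} − 1), which makes [4.49] directly implementable; a minimal sketch with a fixed step-size α instead of the optimal one (hypothetical names):

```python
import numpy as np

def mem_poisson(R, m, f_mu, alpha, n_iter):
    """Gradient ascent on the dual criterion D(h) of eqs. [4.48]-[4.49] for a
    product Poisson reference measure with mean f_mu, for which
    grad D(h) = m - R (f_mu * exp(R^T h)) and f_MEM = f_mu * exp(R^T h_r)."""
    R = np.asarray(R)
    h = np.zeros(R.shape[0])
    for _ in range(n_iter):
        f = f_mu * np.exp(R.T @ h)   # current primal estimate E_p[f]
        h += alpha * (m - R @ f)     # ascend the concave dual
    return f_mu * np.exp(R.T @ h)
```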
In certain cases, a deterministic representation of the MEM criterion given by equation [4.48] exists. This representation enables a link between the a priori information μ and the mathematical expression of the optimization criterion to be established. In this way, the a priori μ may be expressed explicitly for certain criteria given in the literature, without reference to a particular statistical model [HEI 97]. In [GAM 97], it is shown that equation [4.48] is equivalent to the criterion:

$$f_{\text{MEM}} = \arg\min_{R f = m} F_\mu(f) \qquad [4.50]$$

with:

$$F_\mu(f) = \sup_s \left( s^T f - \log \int \exp(s^T g)\, d\mu(g) \right)$$
4.4.3.2. A priori distributions

The criterion F_μ may be expressed explicitly for different reference probability distributions μ. We limit ourselves here to expressions for criteria used in tomographic reconstruction [HEI 97] (Table 4.2).
A priori distribution μ | Criterion F_μ
Gaussian: μ(g) = (1/K) exp( −½ (g − f_μ)^T Γ⁻¹ (g − f_μ) ), with K = √( (2π)^N det Γ ) | Least squares: F_μ(g) = ½ (g − f_μ)^T Γ⁻¹ (g − f_μ)
Poisson: μ(g) = (f_μ^g / g!) exp(−f_μ) | Shannon entropy: F_μ(f) = f log(f / f_μ) − f + f_μ
Uniform over ℝ⁺: μ = 1_{g>0} dg | Burg entropy: F_μ(f) = f − 1 − log f
Gaussian mixture | Non-explicit criterion

Table 4.2. Some a priori distributions μ and associated deterministic criteria F_μ
4.5. Example of tomographic reconstruction

We present an example within the framework of reconstructing point sources in PET [AMB 01]. The presence of focal lesions leads in this application to hyperfixation of the injected tracer, localized at the level of the tumors. We aim at maximizing the probability of detecting small, localized lesions against a background, rather than at restoring the signal intensity (Figure 4.1a). Thus, we do not consider a quantification problem. We illustrate the problem with two tomographic reconstruction methods: the conventional ML-EM method and the MEM method. The MEM method was applied using the object f as the average of a random emission process F. The probability distribution μ_f thus corresponds to the product of the Poisson probability distributions (Table 4.2). In the specific case of the object of interest f being composed of two classes, hyperfixation (f_hyper) and background (f_fond), the probability distribution μ_f is expressed as:

$$\mu_f(f_i) = \alpha_i \exp(-f_{\text{fond}}) \frac{f_{\text{fond}}^{f_i}}{f_i!} + (1 - \alpha_i) \exp(-f_{\text{hyper}}) \frac{f_{\text{hyper}}^{f_i}}{f_i!}$$
where f_i! represents the factorial of f_i. The parameters f_hyper and f_fond, and the probability α_i that a point i belongs to the background, have been fixed based on a first reconstruction of the object f. The criteria for ML-EM (equation [4.41]) and MEM (equation [4.48]) permit estimating the objects f shown in Figure 4.1. The points with hyperfixation have a higher contrast using the MEM method (Figure 4.1c). These results show the potential of such an approach and, more precisely, the interest in well specifying the objective of the tomographic reconstruction. In fact, this aim is directly linked to the choice of the a priori μ_f in the case of the MEM method and to the choice of the potential functions in the case of regularized methods.
4.6. Discussion and conclusion

It is often difficult to compare the performance of a tomographic reconstruction method with that of another without defining beforehand the objective to be achieved (quantification, detection, etc.). Therefore, no method hierarchy has been established.
[Figure: (a) Reference, (b) ML-EM, (c) MEM]
Figure 4.1. Cross-section of a simulated phantom consisting of a “background” cylinder of 200 mm length and 120 mm diameter, and of 50 “hyperfixation” spheres of 5 mm radius with twice the activity of the background
We have presented discrete tomographic reconstruction methods that aim at solving linear systems of equations. In contrast to analytical methods, they have the advantage of taking into account physical and statistical phenomena linked to data acquisition. We have described two large classes of methods, the algebraic and the statistical techniques, for the cases of over- and under-determination by the data. The presented range of methods makes use of several mathematical principles: optimization of symmetric functions (CG) or asymmetric functions (MR), projection onto convex sets (ART), contraction principle (SIRT), and Bayesian estimation (MAP-EM). The algorithms proposed in different mathematical frameworks often show links, or sometimes even lead, to identical criteria. Thus, the EM algorithm with Gaussian probability distributions leads to the same iterative procedure as the SIRT algorithm.

In the framework of statistical approaches, we have favored solving linear systems by optimizing the maximum a posteriori criterion. Due to the ill-conditioned character of the projection matrix R and the presence of noise in the measured data, it is often necessary to introduce regularization. This principle has been discussed in the context of Gibbs distributions, where the discontinuities of the object are considered through potential functions. In the case of a convex potential, the MAP criterion has a global minimum. It is solved with the help of semi-quadratic regularization, and the discontinuities are modeled by the implicit or explicit line process. This formalism led in the case of a Gaussian conditional probability to the ARTUR algorithm and in the case of a Poisson conditional probability to the MOISE algorithm. These algorithms perform very similarly in terms of quantification.
The principle of maximum entropy on the mean (MEM) is introduced to solve the tomographic reconstruction problem in the case of underdetermination by the data. The MEM method considers the object as the mean of a random variable, of which the probability distribution is to be estimated from a priori information on the object and from the adequacy of the data. This technique leads to a strictly convex criterion, no matter what the form of the considered a priori is. The derived primary criterion corresponds in certain cases to numerous deterministic criteria employed in regularization without a statistical model of the object. From a theoretical point of view, this technique has numerous advantages, and it will be interesting to position it with respect to the conventional Bayesian formalism.

The methods described in this chapter are in principle equally applicable to the reconstruction of 2D and 3D images. However, in the case of 3D reconstruction, the required resources in terms of memory size and computation time have to be taken into account in the choice of the algorithm, because they may rapidly become prohibitive. The classical ART algorithm and the descent algorithms that rely on a short recurrence relation, like that of minimum residual, for example, are suitable.

More generally, several techniques exist for accelerating the speed of convergence of algorithms. The most used are the preconditioning techniques for the matrix A [LAS 94], which increase the speed of convergence of the algebraic algorithms in general and of the descent methods in particular, and the ordered subsets techniques [HUD 94]. The techniques of matrix conditioning are combined with algebraic algorithms, which aim at optimizing the step-size and the direction of descent. The convergence of these algorithms depends on the condition number of A, cond(A), which corresponds to the ratio of the largest to the smallest eigenvalue of A. The closer cond(A) is to 1, the quicker the convergence of the algorithm [DEM 01].

The ordered subsets technique is the uncontested method of choice to accelerate algorithms of the statistical type. It combines two concepts: the rearrangement (“ordered”) of the projections m and their partitioning (“subsets”). The concept “ordered” involves providing in the reconstruction process successive projections that are as orthogonal as possible (i.e. which provide non-redundant information to the reconstruction process). The concept of “subsets” involves partitioning the ordered projections into subsets. A compromise must be found between the number of projections in a subset and the number of considered subsets. The “ordered subsets” technique, applied to an algorithm such as the EM (OS-EM), enables its speed of convergence to be increased by a factor equal to the number of subsets; a minimal sketch is given below.
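A minimal OS-EM sketch (hypothetical names; interleaved subsets are used here as a crude stand-in for an orthogonality-driven ordering of the projections):

```python
import numpy as np

def os_em(R, m, n_subsets, n_iter, eps=1e-12):
    """OS-EM [HUD 94]: one ML-EM-type update per subset of projections."""
    R = np.asarray(R)
    f = np.ones(R.shape[1])
    subsets = [np.arange(s, R.shape[0], n_subsets) for s in range(n_subsets)]
    for _ in range(n_iter):
        for idx in subsets:
            Rs = R[idx]                     # rows of the current subset
            proj = Rs @ f + eps
            f *= (Rs.T @ (m[idx] / proj)) / (Rs.sum(axis=0) + eps)
    return f
```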
4.7. Bibliography

[AMB 01] AMBLARD C., BENALI H., BUVAT I., COMTAT C., GRANGEAT P., “Application de la méthode du Maximum d’Entropie sur la Moyenne à la reconstruction de foyers d’hyperfixation en tomographie d’émission de positons”, Traitement du Signal, vol. 18, pp. 35–46, 2001.
[AXE 80] AXELSSON O., “Conjugate gradient type methods for unsymmetric and inconsistent systems of linear equations”, Lin. Alg. Appl., vol. 29, pp. 1–16, 1980.
[BES 86] BESAG J., “On the statistical analysis of dirty pictures”, J. Roy. Statist. Soc., vol. 3, pp. 259–302, 1986.
[BES 95] BESAG J., GREEN P., HIGDON D., MENGERSEN K., “Bayesian computation and stochastic systems”, Statist. Sci., vol. 10, pp. 3–66, 1995.
[BLA 87] BLAKE A., ZISSERMAN A., Visual Reconstruction, MIT Press, Series in Artificial Intelligence, 1987.
[BRE 67] BREGMAN L. M., “The relaxation method of finding the common point of convex sets and its application to the solution of problems in convex programming”, USSR Comput. Math. and Math. Phys., vol. 7, pp. 200–217, 1967.
[BUO 81] BUONOCORE M. H., BRADY W. R., MACOVSKI A., “A natural pixel decomposition for two-dimensional image reconstruction”, IEEE Trans. Biomed. Eng., vol. 28, pp. 69–78, 1981.
[CEN 97] CENSOR Y., ZENIOS S. A., Parallel Optimization: Theory, Algorithms and Applications, Oxford University Press, 1997.
[CHA 94] CHARBONNIER P., Reconstruction d’Image : Régularisation avec Prise en Compte des Discontinuités, PhD thesis, University of Nice-Sophia Antipolis, 1994.
[CRA 79] CRAVEN P., WAHBA G., “Smoothing noisy data with spline function”, Numer. Math., vol. 31, pp. 377–403, 1979.
[DAV 83] DAVISON M. E., “The ill-conditioned nature of the limited angle tomography problem”, SIAM J. Applied Mathematics, vol. 42, n° 3, pp. 428–448, 1983.
[DEF 87] DEFRISE M., “Possible criteria for choosing the number of iterations in some iterative reconstruction methods”, in VIERGEVER M. A., TODD-POKROPEK A. E. (Eds.), Mathematics and Computer Science in Medical Imaging, Springer, pp. 293–303, 1987.
[DEM 01] DEMOMENT G., IDIER J., GIOVANNELLI J. F., DJAFARI A. M., “Problèmes inverses en traitement du signal et de l’image”, Techniques de l’Ingénieur, Traité Télécoms, vol. TE 5235, pp. 1–25, 2001.
[DJA 93] DJAFARI A. M., “On the estimation of hyperparameters in Bayesian approach of solving inverse problems”, Proc. ICASSP, pp. 495–498, 1993.
[EGG 81] EGGERMONT P. P. B., HERMAN G. T., “Iterative algorithms for large partitioned linear systems with applications to image reconstruction”, Linear Algebra and its Applications, vol. 40, pp. 37–67, 1981.
[GAM 97] GAMBOA F., GASSIAT E., “Bayesian methods and maximum entropy method for ill-posed problems”, The Annals of Statistics, vol. 25, n° 1, pp. 328–350, 1997.
[GEI 91] GEIGER D., GIROSI F., “Parallel and deterministic algorithms from MRFs: surface reconstruction and integration”, IEEE Trans. PAMI, vol. 13, pp. 401–412, 1991.
[GEM 84] GEMAN S., GEMAN D., “Stochastic relaxation, Gibbs distribution, and the Bayesian restoration of images”, IEEE Trans. PAMI, vol. 6, n° 6, pp. 721–741, 1984.
[GEM 92] GEMAN D., REYNOLDS G., “Constrained restoration and the recovery of discontinuities”, IEEE Trans. PAMI, vol. 14, n° 3, pp. 367–383, 1992.
[GEM 95] GEMAN D., YANG C., “Nonlinear image recovery with half-quadratic regularization”, IEEE Trans. Imag. Proc., vol. 4, pp. 932–946, 1995.
[GIL 72] GILBERT P., “Iterative methods for the three-dimensional reconstruction of an object from projections”, J. Theor. Biol., vol. 36, pp. 105–117, 1972.
[GIN 93] GINDI G., RANGARAJAN A., LEE M., HONG P. J., ZUBAL I. G., “Bayesian reconstruction for emission tomography via deterministic annealing”, in BARRETT H., GMITRO A. (Eds.), Information Processing in Medical Imaging, Springer, pp. 322–338, 1993.
[GRA 99] GRANGEAT P., “Les problèmes inverses de reconstruction dans les systèmes d’imagerie tomographique par rayonnements ionisants X, gamma ou positon”, in Problèmes Inverses : de l’Expérimentation à la Modélisation, OFTA, pp. 105–116, 1999.
[GRE 90] GREEN P., “Bayesian reconstruction from emission tomography data using a modified EM algorithm”, IEEE Trans. Med. Imaging, vol. 9, n° 1, pp. 84–93, 1990.
[GUE 90] GUEDON J. P., Les Problèmes d’Echantillonnage dans la Reconstruction d’Images à Partir de Projections, PhD thesis, University of Nantes, 1990.
[HAN 87] HANSON K. M., “Bayesian and related method in image reconstruction from incomplete data”, in STARK H. (Ed.), Image Recovery: Theory and Applications, Academic Press, pp. 79–125, 1987.
[HAN 92] HANSEN P., “Analysis of discrete ill-posed problems by means of the L-curve”, SIAM Rev., vol. 34, n° 4, pp. 561–580, 1992.
[HEB 92] HEBERT T., LEAHY R., “The GEM-MAP algorithm with 3-D SPECT system response”, IEEE Trans. Med. Imag., vol. 11, n° 1, pp. 81–90, 1992.
[HEI 97] HEINRICH C., Distances Entropiques et Informationnelles en Traitement des Données, PhD thesis, University of Orsay, 1997.
[HER 76] HERMAN G. T., LENT A., “Quadratic optimization for image reconstruction”, Computer Graphics and Image Processing, vol. 5, pp. 319–332, 1976.
[HER 80] HERMAN G. T., Image Reconstruction from Projections: the Fundamentals of Computerized Tomography, Academic Press, 1980.
[HER 93] HERMAN G. T., MEYER L. B., “Algebraic methods can be made computationally efficient”, IEEE Trans. Medical Imaging, vol. 11, pp. 600–609, 1993.
[HOU 72] HOUNSFIELD G. M., A method and apparatus for examination of a body by radiation such as X or Gamma, Patent Office, Pat. Spec. n° 1283915, 1972.
[HUD 94] HUDSON H. M., LARKIN R. S., “Accelerated image reconstruction using ordered subsets of projection data”, IEEE Trans. Med. Imaging, vol. 13, pp. 601–609, 1994.
[JOS 82] JOSEPH P. M., “An improved algorithm for reprojecting rays through pixel images”, IEEE Trans. Med. Imaging, vol. 1, n° 3, pp. 192–196, 1982.
[KAW 85] KAWATA S., NACIOGLU O., “Constrained iterative reconstruction method by the conjugate gradient method”, IEEE Trans. Med. Imaging, vol. 4, n° 2, pp. 65–71, 1985.
[KIR 83] KIRKPATRICK S., GELATT C., VECCHI M., “Optimization by simulated annealing”, Science, vol. 220, pp. 671–680, 1983.
[KOU 96] KOULIBALY P., Régularisation et Corrections Physiques en Tomographie d’Emission, PhD thesis, University of Nice-Sophia Antipolis, 1996.
[KUL 59] KULLBACK S., Information Theory and Statistics, John Wiley and Sons, 1959.
[LA 97] LA V., Correction d’Atténuation en Géométrie Conique avec Mesures de Transmission en Tomographie d’Emission Monophotonique, PhD thesis, INPG, Grenoble, 1997.
[LAK 79] LAKSHMINARAYANAN A. V., LENT A., “Methods of least squares and SIRT in reconstruction”, J. Theor. Biol., vol. 76, pp. 267–295, 1979.
[LAN 51] LANDWEBER L., “An iterative formula for Fredholm integral equations of the first kind”, Amer. J. Math., vol. 73, pp. 615–624, 1951.
[LAS 94] LASCAUX P., THEODOR R., Analyse Numérique Matricielle Appliquée à l’Art de l’Ingénieur, Masson, 1994.
[LEA 00] LEAHY R., BYRNE C., “Editorial: recent developments in iterative image reconstruction for PET and SPECT”, IEEE Trans. Med. Imaging, vol. 19, n° 4, pp. 257–260, 2000.
[LEE 95] LEE S. J., RANGARAJAN A., GINDI G., “Bayesian image reconstruction in SPECT using higher order mechanical models as priors”, IEEE Trans. Med. Imaging, vol. 14, pp. 669–680, 1995.
[LEW 92] LEWITT R. M., “Alternative to voxels for image representation in iterative reconstruction algorithms”, Phys. Med. Biol., vol. 37, pp. 705–716, 1992.
[MED 87] MEDOFF B. P., “Image reconstruction from limited data: theory and applications in computed tomography”, in STARK H. (Ed.), Image Recovery: Theory and Applications, Academic Press, pp. 321–368, 1987.
[PAY 96] PAYOT E., Reconstruction Vasculaire Tridimensionnelle en Imagerie par Rayons X, PhD thesis, Ecole Nationale Supérieure des Télécommunications, 1996.
[PET 81] PETERS T. M., “Algorithms for fast back- and re-projection in computed tomography”, IEEE Trans. Nuclear Sciences, vol. 28, n° 4, pp. 3641–3647, 1981.
[PEY 90] PEYRIN F., Méthodes de Reconstruction d’Images 3D à Partir de Projections Coniques de Rayons X, PhD thesis, INSA Lyon and UCB Lyon I, 1990.
[SEZ 82] SEZAN M. I., STARK H., “Image restoration by the method of convex projections”, IEEE Trans. Med. Imaging, vol. 1, pp. 95–101, 1982.
[SNY 87] SNYDER D. L., MILLER M. I., THOMAS L. J., POLITTE J. R., “Noise and edge artifacts in maximum likelihood reconstructions for emission tomography”, IEEE Trans. Med. Imaging, vol. 6, n° 3, pp. 228–238, 1987.
[TAN 71] TANABE K., “Projection method for solving a singular system of linear equations and its applications”, Numer. Math., vol. 17, pp. 203–214, 1971.
[TIK 63] TIKHONOV A. N., “Regularization of incorrectly posed problems”, Sov. Math. Dokl., vol. 4, pp. 1624–1627, 1963.
[YOU 82] YOULA D. C., WEBB H., “Image restoration by the method of convex projections: part I – theory”, IEEE Trans. Med. Imaging, vol. 1, pp. 82–94, 1982.
Part 2 Microtomography
Chapter 5
Tomographic Microscopy
5.1. Introduction

Microscopy remains the method of choice for biologists and pathologists for observing cells and biological tissues. The observation is, however, limited to more or less thin sections between 5 μm and 50 μm in conventional photon microscopy, and between 80 nm and 200 nm in electron microscopy. The obtained images represent the information contained in a section and do not allow understanding of the three-dimensional (3D) organization of tissues, cells, and their organelles without ambiguity. Only 10 years ago it was still necessary to physically make series of sections to obtain this 3D information. This extremely painstaking work was followed by the reading of the contours of objects and the 3D reconstruction with adapted software. Besides the delicate and tedious nature of the collection of the sections, the limitations of this technique reside in the impossibility of avoiding distortions of the tissue during microtomy and in the difficulty in defining reliable fiducial markers that permit realignment of a series of sections [BRO 90, DUX 89].

Chapter written by Yves USSON and Catherine SOUCHIER.

Since the beginning of the 1980s, more powerful methods, which preserve the integrity of the observed specimens, have been developed. They are based on either projection tomography techniques or optical sectioning techniques. The reconstruction methods for projection tomography have essentially been developed in high-energy electron microscopy (section 5.2). While MRI does not currently provide a spatial resolution that meets the requirements of microscopy (see Chapter 12), the recent development of a commercial system for X-ray microtomography now enables resolutions in the order of 10 μm to be achieved. The invention of X-ray microscopy
techniques that use synchrotron radiation will lead to accelerated development of projection methods in high-resolution microscopy, which promise the attainment of resolutions in the order of 25 nm (see Chapter 7). These techniques remain reserved to research laboratories with privileged access to synchrotron radiation.

The techniques of optical sectioning for microtomography [AGA 89] have seen an accelerated development during the last decade. Their success, and in particular that of confocal microscopy (section 5.3), is due to their ease of implementation. These methods directly produce a stack of images that are perfectly aligned and easily interpretable, and the availability of numerous commercial systems facilitates access to them at non-prohibitive costs. In addition, in the domains of biology and medicine, the development of immunofluorescence techniques, of fluorescence in situ hybridization (FISH), and of fusion proteins [GIE 06] has substantially contributed to promoting the use of photon confocal microscopy [PAW 06, CON 05]. Confocal microscopy may also be used in the analysis of surfaces. This type of application, which is receiving growing attention in materials science, is addressed in another book in the series IC2 [JOU 02].

5.2. Projection tomography in electron microscopy

Transmission electron microscopy (TEM) is indisputably the technique that enables exploration of tissues and cells with the best possible resolution. Sizes are expressed in tenths of nanometers, i.e. in angstroms. To be able to gather consistent 3D information, sections of 1 to 3 μm thickness have to be used. These thicknesses are far higher than those commonly employed in TEM (80 nm to 150 nm). As a matter of fact, the techniques of projection tomography in conventional TEM require microscopes with high accelerating voltages of the order of 1 MV or more. Below these voltages, the penetration capability of the electrons is insufficient for traversing sections of 1 μm and for obtaining images with adequate contrast. Very fortunately, the development of the scanning TEM (STEM) has enabled the use of lower voltages of the order of 300 kV.

The exploration of specimens in 3D with STEM is thus accomplished by acquiring a series of images, between which the observed object is rotated from –60° to 60° in regular angular steps (of 1 to 3°) around the y axis of the image plane. In practice, the rotation is carried out by placing the specimen on a eucentric goniometric stage. The volume reconstruction is simply done by applying backprojection algorithms such as ART (Chapter 4). However, the method has two peculiarities compared to the methods for macroscopic projection tomography. The first is the limited mechanical precision of the microscope’s goniometric stage. In fact, any minor eccentricity entails a notable shift of the projections. So, a systematic realignment of all projections must be performed.
Based on the hypothesis of parallel trajectories of the electrons, the registration may simply be carried out by selecting a reference point on a point-like structure that can easily be identified on all projections and that is located close to the axis of the goniometric stage. All the projections will thus be realigned with respect to this structure [HEL 97].

The second peculiarity is the existence of a hidden zone. In contrast to macroscopic methods, the specimen may only be tilted by a total angle of 120°. Hence, the final reconstruction is imperfect, and fine structures in particular are extended by “attached shades” in the direction of the cones of uncertainty [HEL 97]. This type of artifact may be reduced by an appropriate numerical deconvolution algorithm.
Figure 5.1. Projection tomography of a structure AgNOR with a STEM [HEL 97]. Left: some projections acquired every 10°. Right: 3D reconstructions of the complex structure, seen from the front and the side
5.3. Tomography by optical sectioning

The concept of tomography by optical sectioning in microscopy and the methods for confocal photon microscopy were invented in 1955. At that time, Marvin Minsky, who wanted to observe neurons in thick sections of neural tissue, developed a prototype of a microscope with a scanning stage and, in particular, with what was named a double focalization system at the time and soon came to be known as a confocal system [MIN 88]. However, this technique remained unused for a long time, since it required a point source of high light intensity. It was not until the production of low-cost laser sources at the beginning of the 1980s that this innovative microscopy technique took the place that it has held in the domain of biology ever since.

The ordinary techniques of microscopic imaging in transmission, reflection, and fluorescence may all be carried out in a confocal mode. However, no current commercial system supports the light transmission mode because of the great difficulty of aligning the illumination system and the image formation system. Confocal microscopy in reflection, well suited to the examination of surfaces, holds a privileged position, especially in the domain of microelectronics. Confocal microscopy in fluorescence is by far the
method of choice in biology and medicine, since it is an incomparable tool for “functional” cellular imaging. The introduction of methods for non-linear microscopy, such as two-photon microscopy [ZIP 03, HEL 05] and second-harmonic generation microscopy [DEB 06], and of systems for rapid high-resolution video microscopy [COA 04] and structured illumination [GUS 08], has broadened the range of tools for microscopy by optical sectioning. Moreover, the reintroduction of systems with a rotating Nipkow disk, enhanced by the insertion of microlenses into the perforations of the disk, has extended the possibilities of rapid imaging of living cells [COA 04, TAN 02]. More recently, techniques such as 4Pi microscopy or stimulated emission depletion microscopy (STED) [HEL 03] have enabled significant improvement in lateral and axial resolution, reaching resolutions in the order of 30 nm.

5.3.1. Confocal laser scanning microscopy (CLSM)

5.3.1.1. Principle of confocal microscopy

Conventional microscopy by epifluorescence, as it is widely used today in cellular biology, produces images that suffer from limited resolution. They appear as if they are being viewed through a diffuse fog. The field of the microscope is strongly illuminated by the light source and the fluorochromes are excited across the full depth of the preparation (see Figure 5.2a). The result is that the image formed by the microscope is contaminated by light from points located outside the focal plane and has low contrast when the specimen is thick.
Figure 5.2. Comparison of conventional fluorescence microscopy (a) and confocal microscopy (b)
The trick in confocal imaging is the elimination of this parasitic “out of focus” light. In the simplest case (see Figure 5.2b), this is achieved by illuminating the preparation with a point source whose diameter is limited by diffraction. In this way, at a given moment, only one tiny part of the specimen is strongly illuminated. The fluorochrome molecules absorb this light and then return the absorbed energy by emitting new photons with a longer wavelength (fluorescence). The photons emitted by the fluorochrome are detected point-by-point with a photomultiplier tube (PMT) while scanning the preparation. A tiny hole (often called a pinhole), placed in front of the photodetector and centered on the rear focus of the objective, rejects the parasitic light coming from points located outside the focal plane. The name confocal comes from the matching of three focal points: the point source (generally a laser source), the illuminated point located in the object focus of the objective, and the pinhole located in the image focus.

5.3.1.2. Image formation

The formation of an image in a microscope is the result of two functions of the distribution of light [WIL 90]:
– the point-spread function in the focal plane, determining the lateral resolution (in x and y);
– the defocalization function, determining the axial resolution (in z).
Figure 5.3. Image formation by a lens. Formation of the Airy disk (a) and defocalization (b). The light intensity is reproduced with negative contrast to better show the diffraction rings
The point-spread function (PSF) in the focal plane is the mathematical description of the image of the Airy disk (see Figure 5.3a), obtained by observing an ideal, infinitely small point source through an objective. This familiar image, showing a maximum intensity surrounded by progressively darker rings, results from the incapacity of the lens to collect the light dispersed beyond a given angle α (half the opening angle of the cone of light between the focus and the lens). The cone of light between the focus and the lens defines the numerical aperture NA of the objective, given by NA = n sin α, where α is the half-angle of the cone and n is the refractive index of the medium. For instance, for an oil immersion objective (α = 67.5°, n = 1.515), we obtain a numerical aperture of 1.40. The resolution achievable with an objective is limited by diffraction. It results from interference of the light waves diffracted along the trajectory of the rays in the image plane. The lateral resolution distance $d_x$ in CLSM is given by:

$d_x = \dfrac{0.4\, n\, \lambda}{\mathrm{NA}}$   [5.1]

where $\lambda$ denotes the wavelength. The PSF in conventional microscopy is:

$I(v) = \left[ \dfrac{2 J_1(v)}{v} \right]^2$   [5.2]

and in confocal microscopy:

$I(v) = \left[ \dfrac{2 J_1(v)}{v} \right]^4$   [5.3]

with $v = 2\pi \lambda^{-1} \sqrt{x^2 + y^2}\, \sin\alpha$ and $J_1$ denoting the Bessel function of the first order. The PSF in the focal plane (generally denoted by $h_i(x,y)$) describes the way in which the light, which ideally should remain concentrated in a single point, is more or less dispersed around this point. Applied to an infinitely thin and small object $o(x,y)$, the equation of the formation of an image $i(x',y')$ in an optical system without aberration is expressed by:

$i(x',y') = o(x,y) * h_i(x,y)$   [5.4]

where the symbol $*$ represents the convolution operator.
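As an illustration, the following Python sketch evaluates equation [5.1] and tabulates the lateral PSFs of equations [5.2] and [5.3]; the numerical values (oil immersion objective, 550 nm light) are those quoted in the text, while the variable names are of course arbitrary.

```python
import numpy as np
from scipy.special import j1  # Bessel function of the first kind, order 1

# Objective and illumination parameters (values quoted in the text)
n = 1.515                     # refractive index of the immersion oil
alpha = np.deg2rad(67.5)      # half-angle of the light cone
NA = n * np.sin(alpha)        # numerical aperture, ~1.40
wavelength = 550e-9           # wavelength in meters

# Lateral resolution distance in CLSM, equation [5.1]
dx = 0.4 * n * wavelength / NA
print(f"NA = {NA:.2f}, dx = {dx * 1e9:.0f} nm")

# Lateral PSFs, equations [5.2] and [5.3], along a radial coordinate r
r = np.linspace(1e-12, 1e-6, 500)           # start slightly off 0 to avoid 0/0
v = 2 * np.pi / wavelength * r * np.sin(alpha)
psf_conventional = (2 * j1(v) / v) ** 2
psf_confocal = (2 * j1(v) / v) ** 4         # narrower central peak, weaker rings
```

Raising the conventional PSF to the fourth power is what narrows the confocal central peak, as Figure 5.4 illustrates.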
The PSF of defocalization (generally denoted by $h_o(x,y)$) describes the way in which the Airy disk changes as the object moves away from the focus of the lens along the optical axis (z). The image thus becomes:

$i(x',y') = o(x,y) * h_i(x,y) * h_o(x,y,\Delta z)$   [5.5]

where $\Delta z$ denotes the distance from the focus of the lens (see Figure 5.3b).
Figure 5.4. Lateral (or radial) PSFs of the conventional microscope (gray) and the confocal microscope (black) for a wavelength λ of 550 nm and an objective having a numerical aperture of 1.4
Figure 5.5. Comparison of the lateral (PSF x,y) and axial (PSF x,z) diffraction patterns in (a) conventional microscopy and (b) confocal microscopy. The light intensity is reproduced with negative contrast to better show the diffraction rings
The two PSFs are combined into a single one that represents the 3D PSF of the objective (see Figure 5.5):

$i(x',y',z') = o(x,y,z) * h(x,y,z)$   [5.6]
5.3.1.3. Optical sectioning

The most common way of measuring the axial resolution in CLSM is to consider the decrease in intensity along the z axis for an ideal point object [WIL 90]. Thus, the depth of field (or axial resolution) may be defined as the full width at half maximum of the intensity profile (in z). Figure 5.6A shows that the depth of field in confocal microscopy is diminished by a factor of 1.4 compared to conventional microscopy [SHE 89].
Figure 5.6. Depth of field (A) and optical separation ability (B) of the conventional microscope (gray or dashed) and the confocal microscope (solid) for a wavelength of 550 nm and an objective having a numerical aperture NA of 1.4 (arbitrary intensity scales)
The intensity variation along the optical axis is defined in conventional microscopy by equation [5.7] and in confocal microscopy by equation [5.8]:

$I(u) = \left[ \dfrac{\sin(u/2)}{u/2} \right]^2$   [5.7]

$I(u) = \left[ \dfrac{\sin(u/2)}{u/2} \right]^4$   [5.8]

with $u = 8\pi \lambda^{-1} z \sin^2(\alpha/2)$. The axial resolution distance $d_z$ in CLSM is thus given by:
$d_z = \dfrac{1.4\, n^2 \lambda}{\mathrm{NA}^2}$   [5.9]
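A quick numerical check of equations [5.1] and [5.9], a minimal sketch using the same hypothetical oil immersion objective as above, gives a feel for the numbers discussed below:

```python
# Resolution anisotropy in CLSM, from equations [5.1] and [5.9]
n, NA, wavelength = 1.515, 1.40, 550e-9   # oil immersion objective, 550 nm

dx = 0.4 * n * wavelength / NA            # lateral resolution distance
dz = 1.4 * n**2 * wavelength / NA**2      # axial resolution distance
print(f"dx = {dx * 1e9:.0f} nm, dz = {dz * 1e9:.0f} nm, dz/dx = {dz / dx:.1f}")
# dz/dx is ~3.8 here, within the 2.5-4 range quoted in the text
```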
The improvement in the axial resolution is inversely proportional to the square of the numerical aperture of the objective. Consequently, it is imperative to employ objectives with large numerical apertures to obtain the best optical sectioning. At this point, an important remark has to be made: the spatial resolution achievable with the confocal microscope is anisotropic. In fact, the formulas that characterize the lateral resolution (equation [5.1]) and the axial resolution (equation [5.9]) provide different values for the same wavelength λ, the same refractive index n, and the same numerical aperture NA. In practice, and depending on the numerical aperture of the objective, the resolution is 2.5 to 4 times poorer in the axial direction than in the lateral direction (see Figure 5.5b). Optical sectioning is described by the decrease in the total amount of light present in the focal plane as a function of the distance to a point source [WIL 90]. In other words, its evaluation involves an integration of the PSF of defocalization for each position z:
$I_{\mathrm{int}}(u) = \displaystyle\int_0^{\infty} I(u,v)\, v\, \mathrm{d}v$   [5.10]
Figure 5.6b shows the graphs of optical sectioning obtained by applying equation [5.10]. In the case of conventional microscopy, the integrated light intensity remains constant regardless of the defocalization. By contrast, in the case of confocal microscopy, there is a rapid drop in the integrated intensity as a function of the distance to the source. The pinhole here perfectly fulfills its selective role by rejecting the “out of focus” light. This property of optical sectioning in CLSM is well illustrated by Figure 5.7a, which presents a series of optical sections through a living cell (HeLa cell line) in culture, whose mitochondria have been stained with a specific fluorochrome (rhodamine-123). The optical sections are obtained by raising the stage in steps of 2 μm after the acquisition of each plane. The upper row shows the sections acquired in confocal mode and the lower row presents the corresponding sections acquired in conventional mode. The difference between the two modes is very evident. In the confocal sections, the mitochondria are very well resolved; they cluster in the cytoplasm all around the nuclei, which are devoid of any labeling. Each of the sections provides information different from that of its neighbors. In the conventional mode, the mitochondria are more difficult to distinguish. They appear in a diffuse fog, which corresponds to the “phantom” of the fluorescence emitted in the adjacent planes.
Figure 5.7. Comparison between confocal microscopy (upper row) and conventional microscopy (lower row). (A) Four transversal (xy) optical sections separated by 2 μm through a cell in vitro, whose mitochondria have been colored with rhodamine-123. (B) Axial (xz) optical section, where the dashed line indicates the position of the glass support. The contrast has been inverted to better visualize the signals
Figure 5.7b shows another possibility for sectioning enabled by the confocal mode. In fact, it is possible to achieve axial sectioning, i.e. to make an optical section along a plane parallel to the optical axis of the microscope. Again, the confocal mode is shown to be far superior to the conventional mode. However, it has to be noted that the obtained image is blurred in the axial direction. This is an expected consequence of the anisotropy of the resolution.

5.3.1.4. Fluorochromes employed in confocal microscopy

Fluorochromes are employed as dyes for a specific cellular compartment, as sensors sensitive to the environment, or as direct or indirect markers of an antibody or a DNA or RNA probe used for immunolabeling or in situ hybridization techniques [SUZ 07]. Several cellular compartments may be recognized and visualized simultaneously by varying the excitation wavelength defined by the laser and the emission lines. New fluorochromes, such as the cyanines (Amersham) [WES 92] and Alexa (Molecular Probes) [PAN 99], appear regularly to improve the specificity, the intensity, and the stability of the labeling, or to extend and adapt in the best possible way the choice of excitation and emission wavelengths. The visualization of fluorochromes that are excitable at 360 nm is possible with an ultraviolet argon laser or in multiphoton microscopy (see section 5.4.3) at a two- or three-fold wavelength [ZIP 03, HEL 05]. This mode of microscopy has also been proposed for freeing an encapsulated probe that is photo-activatable by an ultraviolet pulse. More recently, nanocrystals have been proposed as an alternative to fluorochromes [BRU 98, BOU 07].
In recent years, the most significant progress in the fluorescent labeling of proteins has been due to the development of autofluorescent proteins, such as GFP (Green Fluorescent Protein) [CHA 94, TSI 98]. These proteins emit a fluorescent signal as soon as they are expressed in a cell. Some of them are photo-activatable or photo-convertible. All of them make it possible to follow, in living cells or even in entire organisms, the expression and localization of proteins fused to one of them. They are excited and emit at several wavelengths and thus allow double or triple labeling [SHA 05, GIE 06]. They are also well adapted to displaying molecular interactions by FRET [TRA 06] in living cells.
5.4. 3D data processing, reconstruction and analysis

5.4.1. Fluorograms

The study of the colocalization of two protein or nucleic acid structures is one of the major applications of CLSM. The structures are labeled with probes coupled to fluorochromes that have different spectral excitation and emission characteristics, so that specific images of each can be acquired. The superposition of the two images in color visualizes the degree of colocalization. For two fluorochromes visualized in red and green, respectively, the zones of colocalization appear yellow. This first step is not always sufficient to estimate the degree of colocalization. It may be efficiently complemented by preparation of a fluorogram (or biparametric histogram of fluorescence). A fluorogram is a cloud of points corresponding to the pixels or voxels of 2D or 3D images, whose coordinates are the values of the fluorescence intensity of the two analyzed fluorochromes, for instance in red and green (see Figure 5.8) [DEM 97, MAN 93, TAN 92]. It provides a graphical representation of the joint distribution of the two fluorescences. The cloud may correspond to the entire image, to a region, such as a cell in biology, or to the immediate environment of a voxel to produce a map of local correlations [DEM 97]. The points accumulate along a line in the case of a strong colocalization, or they deviate from a linear regression model when the degree of colocalization decreases. It is a familiar representation in flow cytometry for estimating the percentage of cells that are double-positive, the point then being a cell, not a voxel. Several methods for quantification are derived from the fluorogram. For instance, we evaluate the number of points located in the positive, negative, single- or double-labeled areas. We calculate the linear correlation coefficient or similar coefficients proposed to better take account of the differences in intensity or the proportion of the two labelings [MAN 93].
Figure 5.8. Fluorogram of HeLa cells transfected with a plasmid coding for EGFP-topoisomerase II. The red channel corresponds to a staining of the DNA and the green channel to that of the GFP. Of the four cells shown, only two express the topoisomerase and produce the clouds of points c1 and c2, corresponding to their nuclei, and v, corresponding to their cytoplasm. The cloud r corresponds to the nuclei of the cells that do not express the topoisomerase. The contrast has been inverted to better visualize the signals
Fluorograms enable us to estimate the colocalization of two structures and to control the conditions of the acquisition: correspondence of the images, bleed-through of fluorescence from one channel to the other, stability of the fluorescence, and improvement of the signal-to-noise ratio (SNR) [DEM 97]. They apply to different modes of fluorescence, including spectral and temporal ones.
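A fluorogram and the associated linear correlation coefficient are straightforward to compute. The sketch below is a minimal illustration, assuming two co-registered channel images `red` and `green` held as NumPy arrays (hypothetical names, not from the original text):

```python
import numpy as np

def fluorogram(red, green, bins=256):
    """Biparametric histogram of the red and green intensities (a fluorogram)."""
    hist, _, _ = np.histogram2d(red.ravel(), green.ravel(), bins=bins)
    return hist

def linear_colocalization(red, green):
    """Linear correlation coefficient between the two channels."""
    return np.corrcoef(red.ravel(), green.ravel())[0, 1]
```

A coefficient close to 1 corresponds to points accumulating along a line (strong colocalization); the coefficients proposed in [MAN 93] refine this measure when the two labelings differ in intensity or proportion.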
5.4.2. Restoration

5.4.2.1. Denoising

The images obtained in confocal microscopy have a comparatively poor SNR. This is due to the very low light intensity used and the quantum nature of light. The images are degraded by three types of noise: quantum Poisson noise linked to the emission of photons, thermal Gaussian noise linked to the electronics of the counter, and noise linked to interference between the transmission lines connecting electronic components. Quantum Poisson noise is linked to the discrete emission of photons by the light source. If we observe a point source of a given brilliance over a time t, the probability of collecting a given number of photons is distributed according to a Poisson law, i.e. the brilliance measured in a region has a variance equal to its mean value. The consequence is that the SNR will be better in bright regions than in dark regions. For instance, in a dark region, the mean number of photons
will be 100 with a variation in the order of 10, which corresponds to a coefficient of variation (CV) of 10%; in a bright region 100 times more intense, the mean will be 10,000 with a variation in the order of 100, which corresponds to a CV of 1%. Different approaches may be used to improve the SNR. The simplest is to prolong the acquisition time and thus to increase the mean number of photons. Another is to acquire k images (between 4 and 16 images) and to calculate a mean image from them. The improvement in the SNR is proportional to √k. However, these two approaches increase the exposure of the fluorochromes and thus the photobleaching.
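The √k behavior is easy to verify numerically; the following sketch simulates Poisson photon noise and frame averaging (the pixel counts and image size are arbitrary choices, not values from the text):

```python
import numpy as np

rng = np.random.default_rng(0)
mean_photons = 100.0                       # mean photon count per pixel

for k in (1, 8, 32):                       # number of averaged frames
    frames = rng.poisson(mean_photons, size=(k, 256, 256))
    averaged = frames.mean(axis=0)
    snr = averaged.mean() / averaged.std()
    print(f"k = {k:2d}: SNR ~ {snr:.1f}")  # grows roughly as sqrt(k): ~10, ~28, ~57
```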
Figure 5.9. Photon noise in confocal microscopy: a single image, an average of 8 images, and an average of 32 images of the same focal plane. The signal-to-noise ratio increases by averaging several images. The contrast has been inverted to better visualize the signals
Alternatively, digital image restoration techniques may be used. Different types of digital filters may be employed: linear convolution filters, such as Gaussian filters; rank filters, such as the median filter; or more sophisticated filters, combining a linear and a rank filter. These filters have different advantages and limitations. In general, the simplest ones (Gaussian or median) offer very efficient noise suppression. Unfortunately, they noticeably alter the spatial resolution of the images. In other words, they introduce even more blur into the images. Borders of structures and fine details are not preserved. To avoid this type of problem, it is preferable to employ filtering techniques that are called “adaptive” [BOL 92, MON 99]. As the name indicates, these methods adapt, for each pixel, the type and strength of the filter according to local information. Thus, these filters serve two functions:
– analysis of local information (intensity gradient, standard deviation, etc.);
– filtering in its true sense, where the strength and the type are modulated by the previous step.
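For the simple (non-adaptive) filters mentioned above, SciPy provides ready-made implementations. The sketch below illustrates them on synthetic data; it is not the adaptive schemes of [BOL 92, MON 99], whose local analysis step is more involved:

```python
import numpy as np
from scipy import ndimage

rng = np.random.default_rng(1)
clean = np.zeros((128, 128))
clean[40:80, 40:80] = 200.0                     # a bright square on a dark background
noisy = rng.poisson(clean + 10).astype(float)   # Poisson photon noise

gauss = ndimage.gaussian_filter(noisy, sigma=1.5)  # strong smoothing, blurs borders
median = ndimage.median_filter(noisy, size=3)      # better preserves edges
```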
Figure 5.10. Compensation of light absorption by the specimen. Upper row: reconstruction of a block of myocardial tissue, whose nuclei are Feulgen-stained. The block is shown from the side, such that the axial direction is vertical. The decrease in intensity is clearly visible. The graph of mean intensity as a function of depth in the tissue is presented on the right. Lower row: the same block reconstructed after gain compensation according to the compensation graph shown on the right (gray). The solid line represents the mean intensity as a function of depth after application of the compensation graph. The contrast has been inverted to better visualize the signals
5.4.2.2. Absorption compensation

Although biological specimens are generally transparent, they tend to slightly absorb and scatter light. In thick sections of tissue exceeding 5 μm, these absorption and scattering effects are far from negligible. In fluorescence, the excitation light is absorbed as it penetrates into the tissue, and the emitted light is absorbed on its way out. This results in a drop in intensity beyond a depth of 5 μm. Given that it is in principle an absorption, the graph of intensity attenuation as a function of depth is of a decaying exponential type. This attenuation may be modeled and quite simply compensated for [RIG 91], either immediately, during acquisition, or later, during reconstruction. In the first case, before the acquisition, the gain and voltage of the sensor (photomultiplier) must be tuned at the most superficial and at the deepest level of the specimen. The acquisition system then automatically adjusts them at each level of the section by interpolating between the extreme values. The other approach multiplies, after the acquisition of a series of sections, the intensity values in a given section by a coefficient that depends on the level of this section [ROE 93, VIS 91].

5.4.2.3. Numerical deconvolution

The image formed of a fluorescent object by the optical system suffers from alterations due to diffraction effects. It may be summarized by the convolution
operation of equation [5.6]. The restoration of the object may thus be reduced to an inverse linear filtering defined by:

$o = i * h^{-1}$   [5.11]
However, this approach assumes that the inverse PSF $h^{-1}$ may be measured or evaluated and that the image is not noisy. Unfortunately, it is in practice difficult to determine $h^{-1}$, and the image is never noiseless. In fact, the equation of the image formation must be rewritten to take the noise term e into account:

$i = o * h + e$   [5.12]
Constrained iterative deconvolution techniques have been developed to overcome these limitations [CAR 90, FAY 89, SHA 91]. These methods are based on a progressive correction of an estimate ô of the object. In each iteration k, the quality of the estimator ô(k−1) is evaluated by simulating its convolution with the PSF. If the estimator ô(k−1) is correct, its blurred image (ô(k−1) ∗ h) must be identical to the one delivered by the optical system. If this is the case, the procedure may be stopped. Otherwise, the resulting error is estimated and its value serves as the basis for calculating the new estimator ô(k). Different algorithms based on this general principle have been developed and are now sold as options for commercial CLSM systems. These algorithms differ in how the error of the estimation is evaluated and in the applied correction, as well as in how the noise is taken into account. The two basic algorithms that are most frequently used are based on an additive correction (equation [5.13]) and a multiplicative correction (equation [5.14]), respectively:

$\hat{o}^{(k)} \leftarrow \hat{o}^{(k-1)} + \left[ i - \hat{o}^{(k-1)} * h \right]^p$   [5.13]

$\hat{o}^{(k)} \leftarrow \hat{o}^{(k-1)} \left[ i \left( \hat{o}^{(k-1)} * h \right)^{-1} \right]^p$   [5.14]
where p is a constant that enables control of the convergence speed. In the most frequent case, p equals one and the first estimator ô(0) is the original image i. The quality of the restoration achieved by these algorithms depends on the measurement of the PSF h of the objective, on the judicious choice of the parameter p, and on the initialization of the estimator ô(0). The PSF is obtained, for instance, by making confocal sections through a fluorescent bead with a diameter of less than 0.2 μm. The choice of the first estimator ô(0) is more delicate. The most common practice is to take the image i provided by the microscope as
a robust initialization. Finally, a better tolerance to noise is obtained by inserting an intermediate step of Gaussian filtering between the iterations of the algorithm. The Richardson–Lucy algorithm (equation [5.15]), more generally known as the maximum likelihood (ML) algorithm, is commonly employed for the restoration of confocal data. This algorithm offers great tolerance to noise at the expense of convergence speed. A minimum of 50 iterations is necessary to obtain an acceptable result:

$\hat{o}^{(k)} \leftarrow \hat{o}^{(k-1)} \left\{ h * \left[ i \left( \hat{o}^{(k-1)} * h \right)^{-1} \right] \right\}$   [5.15]
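A compact implementation of equation [5.15] might look as follows; note that practical Richardson–Lucy codes apply the mirrored PSF in the second convolution (the correlation step), a detail the compact notation of [5.15] leaves implicit. This is a sketch under those assumptions, not the code shipped with any commercial CLSM system:

```python
import numpy as np
from scipy.signal import fftconvolve

def richardson_lucy(image, psf, n_iter=50):
    """Richardson-Lucy deconvolution following equation [5.15]."""
    psf = psf / psf.sum()                     # normalized PSF
    psf_mirror = psf[::-1, ::-1]              # flipped PSF for the correlation step
    estimate = image.astype(float).copy()     # o^(0) = i, as suggested in the text
    for _ in range(n_iter):
        blurred = fftconvolve(estimate, psf, mode="same")      # o^(k-1) * h
        ratio = image / np.maximum(blurred, 1e-12)             # i (o^(k-1) * h)^(-1)
        estimate *= fftconvolve(ratio, psf_mirror, mode="same")
    return estimate
```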
Figure 5.11. Numerical deconvolution of a confocal data set: chromosomes in the process of expression in an extract of Xenopus laevis eggs. Upper row: raw confocal sections xy and xz. Lower row: the same sections after deconvolution. Left: transversal section xy through the group of chromatin. Middle: axial section xz at the level indicated by arrow 1 in the section xy. Right: axial section xz at the level indicated by arrow 2 in the section xy. The contrast has been inverted to better visualize the signals
Figure 5.11 illustrates the application of the algorithm of equation [5.14] to a set of confocal data. After only five iterations, the axial resolution appears far better in the axial sections xz-1 and xz-2. This improvement similarly translates into an increased contrast of the chromatin filament, as can be observed in the section xy. However, it must be noted that it is almost impossible to attain a perfectly isotropic resolution. Solving the inverse problem in microscopy belongs to the class of ill-posed problems. The fact that the light is only collected in the cone defined by the numerical aperture of the objective entails a definite loss of the highest spatial frequencies, which cannot be restored by the proposed algorithm. In general, the maximum improvement in the axial resolution by numerical deconvolution that might be hoped for does not exceed 30%. Nevertheless, this constitutes a notable improvement with respect to the
resolution limit of photon microscopy. Numerical deconvolution is similarly of interest as a preprocessing step in 3D reconstruction. Figure 5.12 enables us to compare the 3D reconstructions of HeLa cells cultivated in vitro, whose mitochondria have been stained with rhodamine-123. After numerical deconvolution, the mitochondria appear better defined, and it is possible to appreciate their shapes and spatial distribution.
5μm
Figure 5.12. Application of numerical deconvolution to a confocal volume. Top: simple confocal microscopy; stereoscopic pair reconstructed for cross-eyed viewing. Bottom: confocal microscopy followed by 3D deconvolution. Left: left view of the stereoscopic pair. Right: right view of the pair
5.4.3. Microscopy by multiphoton absorption

Recent developments in non-linear optics and the progress made in the production of pulsed sources of coherent light have contributed to the emergence of a new technique for fluorescence microscopy that offers multiple advantages. This technique of fluorescence based on the non-linear absorption of light is referred to by the generic name multiphoton absorption microscopy [ZIP 03, HEL 05]. At the moment, the systems available essentially exploit biphotonic absorption, hence the common name “two-photon microscopy”. However, in certain laboratories, experiments with fluorescence microscopy using three-photon absorption have successfully been carried out.
Although the physical mechanisms involved are complex, the principle of non-linear absorption may be understood intuitively. In the case of fluorescence by linear (or monophotonic) absorption, the energy of a photon with a characteristic wavelength permits the transition of the fluorescent molecule from the ground state to the excited state. Very rapidly, the molecule returns to the ground state and releases the accumulated energy in different steps and in different forms. One of them involves the emission of a photon by the molecule. The emitted photon has a longer wavelength than the photon absorbed at the time of the excitation (Figure 5.13A).
Figure 5.13. Comparison between fluorescence by “one photon” and “two photons”. (A) Jablonski diagram of conventional fluorescence. (B) Fluorescence by biphotonic absorption. Δτ denotes the temporal window of absorption
In the case of non-linear absorption, we no longer consider one photon with the characteristic wavelength of the absorption, for instance 360 nm, but two photons of twice the wavelength, i.e. 720 nm. In this situation, taking into account the inverse proportionality of the energy and wavelength of light, it is clear that the total energy supplied by these two 720 nm photons is equivalent to that supplied by a single photon of 360 nm. However, for the absorption of two photons to occur, two conditions must be met. First, a very high spatial density of photons must be created. This condition is easily met in microscopy, because the light is very strongly focused on a very small volume (in the order of 0.012 μm³) by using an objective with a large numerical aperture (NA = 1.4). Second, a very high temporal density of photons must be created. In other words, the probability of two photons exciting the same molecule quasi-simultaneously must be increased. In fact, the two photons must not be separated by more than 1 fs (10⁻¹⁵ s). The first photon (Figure 5.13B) places the molecule in a semi-excited state that is very short lived. If the second photon arrives during the window Δτ, the molecule absorbs sufficient energy to reach the excited state. This temporal concentration is achieved with a laser source pulsed at high frequency (in the order of 100 MHz), with a pulse length in the range of 80 fs to 2 ps.
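The energy bookkeeping behind the two-photon picture is elementary and can be checked with E = hc/λ; the short sketch below uses standard physical constants (the function name is arbitrary):

```python
# Photon energy E = h*c/lambda: two 720 nm photons carry the
# same total energy as one 360 nm photon.
H = 6.626e-34    # Planck constant, J*s
C = 2.998e8      # speed of light, m/s
EV = 1.602e-19   # joules per electron-volt

def photon_energy_ev(wavelength_nm):
    return H * C / (wavelength_nm * 1e-9) / EV

print(photon_energy_ev(360))       # ~3.44 eV
print(2 * photon_energy_ev(720))   # ~3.44 eV as well
```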
Figure 5.14. Principle of optical sectioning by biphotonic absorption. (a) Linear fluorescence. The excitation light provokes the emission of fluorescence in all points in the interior of the double cone of illumination. (b) Fluorescence by absorption of “two photons”. Only the molecules located in the focus are illuminated in a way that permits biphotonic absorption. The emission occurs only at the level of the focus of the lens
It appears obvious that the implementation of biphotonic absorption in laser scanning microscopy enables optical sectioning. In confocal laser scanning microscopy, the preparation is illuminated at the absorption wavelength of the fluorochrome molecules (Figure 5.14). In consequence, all molecules located in the illuminated cone emit fluorescence, and optical sectioning is obtained at the moment of the image formation, through the pinhole. In the case of biphotonic absorption, sectioning is achieved by the illumination process and not by the image formation. Non-linear absorption may only occur at the level of the focus of the lens, where the optimal conditions for spatial and temporal concentration of photons are attained. Thus, the emission of fluorescence takes place only at the level of the focus of the objective. Confining biphotonic absorption to the focus of the lens has practical consequences that are additional advantages compared to “classical” confocal microscopy; in particular, the principal limitations of CLSM are avoided. Light of twice the wavelength, located in the near-infrared, penetrates better into thick and diffuse tissue. In addition, certain fluorochromes that normally require excitation in the near-ultraviolet, which is incompatible with the observation of living cells, may be used with biphotonic excitation. Living cells are thus only exposed to infrared light, which is less detrimental to their survival. Finally, during the acquisition of a series of sections, photodegradation only affects the planes already acquired; the others remain intact.
5.5. Bibliography

[AGA 89] AGARD D. A., HIRAOKA Y., SHAW P., SEDAT J. W., “Fluorescence microscopy in three-dimensions”, Methods in Cell Biology, vol. 30, pp. 353–377, 1989.
[BOL 92] BOLON P., “Filtrage d’ordre, vraisemblance et optimalité des prétraitements d’image”, Traitement du Signal, vol. 9, n° 3, pp. 225–249, 1992.
[BOU 07] BOUZIGUES C., MOREL M., TRILLER A., DAHAN M., “Asymmetric distribution of GABA receptors during GABA gradient sensing by nerve growth cones analyzed by single quantum dot imaging”, Proc. Nat. Acad. Sci., vol. 104, n° 27, pp. 11251–11257, 2007.
[BRO 90] BRON C., GREMILLET P., LAUNAY D., JOURLIN M., GAUTSCHI H. P., BÄCHI T., SCHÜPBACH J., “Three-dimensional electron microscopy of entire cells”, J. Microscopy, vol. 157, n° 1, pp. 115–126, 1990.
[BRU 98] BRUCHEZ M. Jr, MORONNE M., GIN P., WEISS S., ALIVISATOS A. P., “Semiconductor nanocrystals as fluorescent biological labels”, Science, vol. 281, n° 5385, pp. 2013–2016, 1998.
[CAR 90] CARRINGTON W. A., FOGARTY K. E., FAY F. S., “3D fluorescence imaging of single cells using image restoration”, in FOSKETT J. K., GRINSTEIN S. (Eds.), Noninvasive Techniques in Cell Biology, Wiley-Liss, pp. 53–72, 1990.
[CHA 94] CHALFIE M., TU Y., EUSKIRCHEN G., WARD W. W., PRASHER D. C., “Green fluorescent protein as a marker for gene expression”, Science, vol. 263, n° 5148, pp. 802–805, 1994.
[COA 04] COATES C. G., DENVIR D. J., MCHALE N. G., THORNBURY K. D., HOLLYWOOD M. A., “Optimizing low-light microscopy with back-illuminated electron multiplying charge-coupled device: enhanced sensitivity, speed, and resolution”, J. Biomed. Opt., vol. 9, pp. 1244–1252, 2004.
[CON 05] CONCHELLO J. A., LICHTMAN J., “Optical sectioning microscopy”, Nature Methods, vol. 2, n° 12, pp. 920–931, 2005.
[DEB 06] DEBARRE D., SUPATTO W., PENA A. M. et al., “Imaging lipid bodies in cells and tissues using third-harmonic generation microscopy”, Nature Methods, vol. 3, pp. 47–53, 2006.
[DEM 97] DEMANDOLX D., DAVOUST J., “Multicolour analysis and local image correlation in confocal microscopy”, J. Microscopy, vol. 185, n° 1, pp. 21–36, 1997.
[DUX 89] DUXSON M. J., USSON Y., HARRIS A. J., “The origin of secondary myotubes in mammalian skeletal muscles: ultrastructural studies”, Development, vol. 7, pp. 743–750, 1989.
[FAY 89] FAY F. S., CARRINGTON W., FOGARTY K. E., “Three-dimensional molecular distribution in single cells analysed using the digital imaging microscope”, J. Microscopy, vol. 153, n° 2, pp. 133–149, 1989.
[GIE 06] GIEPMANS B. N. G., ADAMS S. R., ELLISMAN M. H., TSIEN R. Y., “The fluorescent toolbox for assessing protein location and function”, Science, vol. 312, pp. 217–224, 2006.
[GUS 08] GUSTAFSSON M. G., SHAO L., CARLTON P. M. et al., “Three-dimensional resolution doubling in wide-field fluorescence microscopy by structured illumination”, Biophys. J., vol. 94, n° 12, pp. 4957–4970, 2008.
[HEL 97] HÉLIOT L., KAPLAN L., LUCAS L. et al., “Electron tomography of metaphase nucleolar organizer regions: evidence of twisted-loop organisation”, Molecular Biology of the Cell, vol. 8, pp. 2199–2216, 1997.
[HEL 03] HELL S., “Toward fluorescence nanoscopy”, Nature Biotechnology, vol. 21, n° 11, pp. 1347–1355, 2003.
[HEL 05] HELMCHEN F., DENK W., “Deep tissue two-photon microscopy”, Nature Methods, vol. 2, n° 12, pp. 932–940, 2005.
[JOU 02] JOURLIN M., MARTINEZ S., “Microscopie confocale”, in GALLICE J. (Ed.), Images de Profondeur, Hermès, 2002.
[MAN 93] MANDERS E., VERBEEK F., ATEN J., “Measurement of colocalization of objects in dual colour confocal images”, J. Microscopy, vol. 169, pp. 375–382, 1993.
[MIN 88] MINSKY M., “Memoir on inventing the confocal scanning microscope”, Scanning, vol. 10, pp. 128–138, 1988.
[MON 99] MONTEIL J., BEGHDADI A., “A new interpretation and improvement of the nonlinear anisotropic diffusion for image enhancement”, IEEE Trans. Patt. Anal. Mach. Intel., vol. 21, n° 9, pp. 940–944, 1999.
[PAN 99] PANCHUK-VOLOSHINA N., HAUGLAND R. P., BISHOP-STEWART J. et al., “Alexa dyes, a series of new fluorescent dyes that yield exceptionally bright, photostable conjugates”, J. Histochem. Cytochem., vol. 47, n° 9, pp. 1179–1188, 1999.
[PAW 06] PAWLEY J. B., Handbook of Biological Confocal Microscopy, Springer, 2006.
[RIG 91] RIGAUT J. P., VASSY J., “High resolution three-dimensional images from confocal scanning laser microscopy: quantitative study and mathematical correction of the effects from bleaching and fluorescence attenuation in depth”, Analyt. Quant. Cytol. Histol., vol. 13, n° 4, pp. 223–232, 1991.
[ROE 93] ROERDINK J. B. T. M., BAKKER M., “An FFT-based method for attenuation correction in fluorescence confocal microscopy”, J. Microscopy, vol. 169, n° 1, pp. 3–14, 1993.
[SHA 91] SHAW P. J., RAWLINS D. J., “The point-spread function of a confocal microscope: its measurement and use in deconvolution of 3D data”, J. Microscopy, vol. 163, n° 2, pp. 151–165, 1991.
[SHA 05] SHANER N. C., STEINBACH P. A., TSIEN R. Y., “A guide to choosing fluorescent proteins”, Nature Methods, vol. 2, n° 12, pp. 905–909, 2005.
[SHE 89] SHEPPARD C. J. R., “Axial resolution of confocal fluorescence microscopy”, J. Microscopy, vol. 154, n° 3, pp. 237–241, 1989.
[SUZ 07] SUZUKI T., MATSUZAKI T., HAGIWARA H., AOKI T., TAKATA K., “Recent advances in fluorescent labelling techniques for fluorescence microscopy”, Acta Histochem. Cytochem., vol. 40, n° 5, pp. 131–137, 2007.
[TAN 92] TANEJA K. L., LIFSHITZ L. M., FAY F. S., SINGER R. H., “Poly(A) RNA codistribution with microfilaments: evaluation by in situ hybridization and quantitative digital imaging microscopy”, J. Cell Biol., vol. 119, pp. 1245–1260, 1992.
[TAN 02] TANAAMI T., OTSUKI S., TOMOSADA N., KOSUGI Y., SHIMIZU M., ISHIDA H., “High-speed 1 frame/ms scanning confocal microscope with a microlens and Nipkow disks”, Appl. Opt., vol. 41, pp. 4704–4708, 2002.
[TRA 06] TRAMIER M., ZAHID M., MEVEL J. C., MASSE M. J., COPPEY-MOISAN M., “Sensitivity of CFP/YFP and GFP/mCherry pairs to donor photobleaching on FRET determination by fluorescence lifetime imaging microscopy in living cells”, Microsc. Res. Tech., vol. 69, pp. 933–939, 2006.
[TSI 98] TSIEN R. Y., “The Green Fluorescent Protein”, Annual Rev. Biochem., vol. 67, pp. 509–544, 1998.
[VIS 91] VISSER T. D., GROEN F. C. A., BRAKENHOFF G. J., “Absorption and scattering correction in fluorescence confocal microscopy”, J. Microscopy, vol. 163, n° 2, pp. 189–200, 1991.
[WES 92] WESSENDORF M. W., BRELJE T. C., “Which fluorophore is brightest? A comparison of the staining obtained using fluorescein, tetramethylrhodamine, lissamine rhodamine, Texas red, and cyanine”, Histochemistry, vol. 98, n° 2, pp. 81–85, 1992.
[WIL 90] WILSON T., Confocal Microscopy, Academic Press, London, 1990.
[ZIP 03] ZIPFEL W. R., WILLIAMS R. M., WEBB W. W., “Nonlinear magic: multiphoton microscopy in the biosciences”, Nature Biotechnology, vol. 21, n° 11, pp. 1369–1377, 2003.
Chapter 6
Optical Tomography
Chapter written by Christian DEPEURSINGE.

6.1. Introduction

The interaction of light with matter gives access to both the composition and the structure of matter. Optical spectroscopy, i.e. the study of how light is absorbed or emitted by matter, is an important source of information about the composition and the atomic and molecular structure of solid, liquid and gaseous media. In the case of solid, often heterogeneous matter, it is frequently necessary to determine the local composition of materials more precisely and, for this purpose, to produce images of them. In the case of opaque matter, such as metals and materials with high optical absorption, only the surface may be characterized by its interaction with light. Images of the surface may then be made, which, depending on the acquired data, allow more precise determination of its composition, its dielectric properties, its structure, or its texture. This gives insights into “surface states”. In the other cases, the light penetrates the material at least partially, and this enables a three-dimensional (3D) image of the composition or structure of the matter to be obtained by studying the dielectric properties. The representation of the latter in a single or multiple cross sections is the result of what are called “optical tomography” methods, which are outlined in this chapter. These methods rely on knowledge of the laws that govern the propagation of light in matter. These laws vary, depending on whether the matter is transparent or only translucent, i.e. whether the light penetrates the matter but loses the property of ordered or coherent propagation due to frequent encounters with heterogeneities along its trajectory. The laws that govern the propagation may in general be modeled mathematically or simply numerically using computers. This is the “direct” model, which is of general relevance to tomographic methods. However, there is also a problem specific to the domain of electromagnetic wave optics, in which the coherent
and incoherent propagation of waves may simultaneously be observed in varying ratios. Solving the “inverse” problem is far more complicated, and the precision reached depends on whether a perfectly transparent or only a translucent medium is analyzed, or a turbid medium, in which the scattering of light by the heterogeneities of the matter becomes a dominant factor. An application domain of growing importance for optical tomography is the observation of biological matter. The problem of “mixed” propagation of coherent and incoherent waves arises acutely in this case because of the particular characteristics of the structure of biological media. These are composed of comparatively well-structured, repeating units, among which notably the organelles in cells are identifiable by their size and dielectric properties. This property is very useful in the differentiation of tissues and serves as the basis for optical diagnostics exploiting the absorption and scattering properties of light. To supplement the reading of this chapter, a certain number of references are recommended, which deal with:
– the propagation of electromagnetic waves in a vacuum and in dielectric materials [JAC 95];
– the principles and theories concerning coherent optics [BOR 99];
– the tomographic techniques applied in optics [BOR 99, KAK 88];
– the interaction of light with living matter, optical absorption and scattering, and thermal balance [WEL 95];
– the physical properties of tissues [DUC 90].

6.2. Interaction of light with matter

Generally speaking, light interacts with matter mainly through the charged particles, such as electrons and protons, according to the well-known coupling between electromagnetic fields and electric charges (Maxwell equations, see [JAC 95]). Indirectly, it also interacts, via the charged particles, with the atoms, molecules, and crystal lattices, or more precisely with their vibrational modes. We will describe this interaction in more detail, notably in the framework of organic and living matter. In principle, two types of interaction are distinguished: absorption, often linked to the phenomenon of fluorescence, and elastic or inelastic optical scattering. Basic references on the interaction of light with matter are countless. We therefore recommend readers to refer to elementary references [BOR 99]. The following basic references are also very useful in understanding this section
[BOH 83, ISH 78, VAN 99]. Concerning the theory of scattering and radiative transfer in general, the reader may consult [CAS 67, CHA 60, DUD 79]. With regard to biological tissues, we recommend that the reader refers to the citations in the different chapters of the book [WEL 95]. The reader may also consult [ARN 92, BEV 98, BOL 87, CHA 99, CHE 90, FLO 87, MAR 89, PRA 88, VAN 92, WIL 86, WIL 87]. For instrumentation for the analysis of tissues and for optical tomography, the reader is referred to [CHA 98a, CHA 98b].

6.2.1. Absorption

Translucent media are, by definition, solid, liquid or gaseous media that optical photons may penetrate in different frequency ranges, which depend on the composition and the dielectric properties of the analyzed matter. Among solid media, gels, polymers, and biological materials, such as living tissues, obviously have to be considered. The penetration of translucent media is basically determined by the absorption of the analyzed matter. The absorption coefficient, defined as the inverse of the penetration depth in non-diffuse media, enables the degree of optical absorption to be quantified. In general, the considered spectral range extends from the infrared to the ultraviolet. For translucent media, maximum penetration is often reached in a range close to the visible spectrum, i.e. between the near-infrared and the near-ultraviolet. The mechanisms of interaction between photons and matter determine the penetration depth of photons in a material. In the mid- and far-infrared, the coupling of electromagnetic waves with the vibrational or rotational modes of the molecules and/or atoms of the material leads to a high absorption of light. The interaction between photons and phonons is effective if the energy of the photons is transmitted in quantum packages to the vibrational modes of the molecules or the atomic lattice. The absorption of the photons is then practically total. Depending on the optical cross section of the molecules or the crystal lattice, penetration is limited to several microns. In the near-infrared, the cross section is far smaller, since only the coupling of the electromagnetic radiation with the harmonics of vibrational modes (overtones) is relevant in this domain. The intensity of optical absorption, sometimes called “oscillator strength”, is thus usually several orders of magnitude smaller than for an excitation of the fundamental mode. The penetration of light is then considerably better than in the mid- or far-infrared. It may be very high in certain dielectrics without absorption, inclusion, or impurity centers, such as colored centers or “traps” that generate electronic states in the prohibited band. The penetration depths are then often as large as several kilometers in the case of glass or perfect crystals, not to mention optical fibers, in which the attenuation is so small that a transmission of signals over hundreds of kilometers becomes possible. In biological tissues, penetration depths up to several centimeters may be obtained.
The quantity that permits an intuitive quantification of how often absorption occurs is the mean free path of absorption la, which represents the distance covered by a photon propagating through a diffuse medium before being absorbed and disappearing, averaged over all scattering phenomena. The mean free path of absorption is linked to the absorption cross section or the absorption coefficient by:
$l_a = \dfrac{1}{\mu_a}$   [6.1]
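As a minimal illustration (a sketch under the usual Beer–Lambert assumption of a homogeneous, non-scattering medium; the coefficient value is hypothetical), the fraction of light transmitted to a depth z decreases exponentially:

```python
import numpy as np

mu_a = 0.5               # absorption coefficient, 1/cm (hypothetical tissue value)
l_a = 1.0 / mu_a         # mean free path of absorption, equation [6.1]

z = np.linspace(0.0, 10.0, 6)        # depth in cm
transmitted = np.exp(-mu_a * z)      # Beer-Lambert attenuation, absorption only
print(l_a, transmitted)
```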
When the energy of the photon is sufficiently high, the absorption of photons by the photoelectric effect dominates the other mechanisms that limit the propagation of light in matter. The photoelectric effect describes the total transfer of the energy of a photon to an electron belonging to the electron cloud of a molecule called a “chromophore”, of an atom, or of an atomic lattice of a semiconductor. The condition for such an energy transfer is that the energy difference between the initial and the final state of the electron equals the energy of the incident photon. The transition is called “direct” if the electrons in the initial and the final state have approximately the same impulse (corresponding to the wave vector k of the electron). It turns out that the participation of one or more phonons may be necessary when these electrons have different impulses. The transition is then called “indirect”, and the energy equation must include the energy of the phonons generated and absorbed in the process. In an undoped semiconductor without impurities, optical absorption is observable when the energy of the photons is at least as high as that of the prohibited band. The absorption is a function of the energy of the photons, and it varies with the densities of the electron states. The situation is similar in a medium containing chromophores or molecules with an electron cloud composed of electrons grouped in distinct energy bands. The so-called “Soret” bands are characterized by indirect transitions between bands with sufficient density of states to produce well-defined absorption peaks. Examples of chromophores are numerous among fluids (blood, bile) and biological tissues. They play an important role in the assessment of metabolic activity of living tissues. Heme is a chromophore located within the protein hemoglobin, whose role in transporting oxygen to tissues is well known. This chromophore shapes the absorption spectrum of blood characteristically. A Soret band is the source of optical transitions between 350 nm and 450 nm. Moreover, the red color of blood stems from an absorption peak at 550 nm (high absorption in the green) due to an interband electron transition. The absorption spectrum of heme depends on the degree of oxygenation of an iron atom at the center of a porphyrin. By decomposing the measured spectrum into its reduced and its oxidized components, it is possible to calculate the degree or the “rate” of saturation of the hemoglobin (SaO2). This quantity plays an important role in the indirect evaluation of the metabolic activity of tissues, and it is routinely used in the
monitoring of patients at risk in intensive care units and during anesthesia. Other chromophores are of interest in the monitoring of the metabolic activity of, in particular, nerve tissue: the cytochromes a-a3 are proteins that are directly involved in the cell energy cycle and that have a characteristic spectrum in the near-infrared (at around 820 nm), where the penetration of light into tissues is considerable and thus permits tomography. Among the chromophores allowing tomographic imaging of tissues, melanin in the epidermis and rhodopsin in the retina must also be mentioned. In certain cases, water, being present in tissues in variable concentrations, is a chromophore with several absorption peaks, notably at wavelengths of more than 950 nm, which enable a rough measurement of its concentration. The same holds for fat, with a spectrum that shows a specific contribution above 960 nm. Certain metabolites, such as glucose, may be measured in tissues based on an identification of their spectral characteristics. The importance of monitoring blood sugar in diabetes patients is well known, as is the possibility of measuring the glucose concentration non-invasively. Nevertheless, the task proves to be difficult because the characteristic absorption spectrum of glucose dissolved in blood plasma is only slightly different from that of water and because of the unpredictable contribution of the scattering of tissues, which has to be determined separately in each case. Certain characteristics of the spectrum at wavelengths close to 980 nm, 1.6 μm, and 2.2 μm may be exploited to enable quantification. In the ultraviolet, most of the compound semiconductors and organic molecules become chromophores and allow specific spectroscopic information to be obtained, including on amino acids, bases, and molecules that play a structural role in cells and tissues, such as actin and elastin, as well as a metabolic role, such as NADH and flavin.
supply to tissues. The pH also has an influence on the spectrum and the intensity of fluorescence. The fluorophores already mentioned exist naturally in living tissues, which explains the origin of fluorescence in tissues in their natural state, called “autofluorescence”. The spectral analysis of autofluorescence enables characterization of tissues and, notably, differentiation of pathological tissues, such as tumor tissues, whose composition is radically modified by the chaos of structures, the increase in nuclear mass, and the disappearance of the cytoarchitecture. Observation of these modifications is increasingly used for diagnostic purposes. The contrast in fluorescence images may be improved by introducing external fluorophores or “xenofluorophores”, which may be tissue markers, such as antibodies marked with fluorescent labels, or nanoparticles, such as quantum dots. In other cases, photodynamic agents may be used, which are in general fluorescent and reflect metabolic changes associated with the presence of tumors. For more details, the reader is referred to [ALF 89, LAK 83, WAG 98].

6.2.3. Scattering

When interacting with matter, photons are not always absorbed, but may, depending on their energy, be reemitted with or without loss of energy. This is called inelastic and elastic scattering of photons, respectively.

6.2.3.1. Inelastic scattering

Inelastic scattering occurs when the energy of the photon does not permit a transition of an electron or a pure and simple absorption by a vibrational or rotational state of a molecule or atom in the crystal lattice. The photon that traverses the dielectric medium may then experience a loss or a gain in energy, corresponding to the coupling of the electromagnetic wave with a vibrational mode of the molecule or lattice. This is called Raman scattering if the energy is transmitted to an optical mode of the molecule or lattice, and Brillouin scattering if the excited mode is a low-energy acoustic mode of the lattice. Fine spectroscopy of the light traversing the translucent medium enables identification of the spectrum of Raman scattering, and the analysis of vibrational modes leads to the identification of the molecules or the crystalline matrix of the medium. Applications in the domain of tissue diagnostics may again be found in medicine [MAH 98].

6.2.3.2. Elastic scattering

An elastic scattering process may be described as an elastic collision of the photon with a scattering center, in which only the direction of propagation of the photon changes, while its energy and its momentum remain constant in magnitude. In general, these scattering centers consist of dielectric singularities of the medium, such as defects, particles, dislocations and interfaces. In biological media, they have
only partially been identified so far, but their role in the propagation of light in tissues is nevertheless precisely known and quantified, thanks to mathematical theories and numerical models of the propagation of light in diffuse media. Particles whose size is comparable to the wavelength of the light and which possess a refractive index that is sufficiently different from that of the cytoplasm, such as the organelles (see Figure 6.1), including mitochondria, nuclei, and vacuoles, form scattering centers that collectively contribute to the elastic scattering of photons in tissues (see [BEA 94, BEA 95, LIU 93]). The interface between cytoplasm and the extracellular space, or the “extracellular matrix”, also frequently causes elastic scattering in biological media.
Figure 6.1. Schematic representation of a biological cell in a tissue. The extracellular and intracellular structures with the nucleus and the organelles, such as the mitochondria, the Golgi apparatus, and the vacuoles can be distinguished
The intuitive quantity that permits quantification of the frequency with which the phenomenon of scattering occurs is the scattering mean free path ls, which represents the distance covered by a photon that propagates in a diffuse medium before undergoing a scattering phenomenon that modifies its propagation direction, averaged over all scattering phenomena (see Figure 6.2). The scattering mean free path is linked to the scattering cross section by:
$l_s = \dfrac{1}{\mu_s}$   [6.2]
An important characteristic of elastic scattering is given by the “phase function” p(θ), which describes the probability of an incident photon being scattered by the angle θ (θ being the angle between $\hat{s}_0$ and $\hat{s}$, where $\hat{s}_0$ and $\hat{s}$ are the unit vectors denoting the
propagation direction of the incident and the scattered photon, respectively (see Figure 6.2b)). It is normalized to 1:
$\displaystyle\int_{4\pi} \mathrm{d}\Omega\; p(\theta) = 1$   [6.3]

where $\Omega$ is the solid angle in the propagation direction.
Figure 6.2. (a) Multiple paths of a photon scattered by the scattering centers A, B, C, and D of the diffuse medium and finally absorbed at E. ls is the mean free path between two scattering events, or scattering mean free path. (b) The phase function describes the probability of a photon being scattered by an angle θ
The phase function depends on the composition of the diffuse medium and results from statistics over the more or less homogeneous population of scattering centers. In general, the phase function is not constant over θ and reflects an “anisotropic” law of scattering. The degree of anisotropy of the medium is given by the mean of cos θ and is called the anisotropy factor g:

$g = \displaystyle\int_{4\pi} \mathrm{d}\Omega\; p(\theta) \cos\theta$   [6.4]
g equals zero if the scattering is isotropic (in all directions), and it equals one if the scattering occurs only in the propagation direction of the incident wave (limiting case). For a biological medium, g is often close to 0.98, which indicates a high probability of scattering in the initial propagation direction of the photon. An expression very often used as an approximation of the phase function is the so-called Henyey–Greenstein expression, proposed in 1941 [HEN 41] to describe the scattering of light in the interstellar medium:
$p_{HG}(\theta) = \dfrac{1}{4\pi}\, \dfrac{1 - g^2}{\left( 1 + g^2 - 2g\cos\theta \right)^{3/2}}$   [6.5]
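The following sketch evaluates the Henyey–Greenstein function and numerically verifies the normalization [6.3] and the anisotropy factor [6.4]; the value g = 0.98 is the typical tissue value quoted above:

```python
import numpy as np

def p_hg(theta, g):
    """Henyey-Greenstein phase function, equation [6.5]."""
    return (1 - g**2) / (4 * np.pi * (1 + g**2 - 2 * g * np.cos(theta)) ** 1.5)

g = 0.98
theta = np.linspace(0.0, np.pi, 20001)
d_omega = 2 * np.pi * np.sin(theta)       # solid-angle element (azimuthal symmetry)

norm = np.trapz(p_hg(theta, g) * d_omega, theta)                   # ~1, eq. [6.3]
g_num = np.trapz(p_hg(theta, g) * np.cos(theta) * d_omega, theta)  # ~0.98, eq. [6.4]
print(norm, g_num)
```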
A more “physical” case that allows an exact calculation of the phase function is that of a sphere consisting of a dielectric material whose refractive index is sufficiently different from that of the surrounding medium. The theory developed by Mie, which carries his name [MIE 08], gives an exact, analytical expression of the phase function, which may be used to model the diffuse medium. This relatively complex expression may be found in [JAC 95], for example. In all cases where the phase function may be derived from a theoretical simulation or an experimental measurement, it may usually be approximated by a Legendre series expansion of the form:
$p(\theta) = \dfrac{1}{4\pi} \displaystyle\sum_{n=0}^{\infty} (2n+1)\, g_n P_n(\cos\theta)$   [6.6]
where gn is the n-th order of the phase function p(T) and may be calculated by:
\[ g_n = 2\pi \int_0^{\pi} P_n(\cos\theta) \, p(\theta) \, \sin\theta \, d\theta \]  [6.7]
The zeroth order g0 is conventionally fixed to a value of one, while the first order g1 equals the previously defined anisotropy coefficient or asymmetry factor g. The higher orders g2, g3, etc. play a less and less important role for the phase function with increasing index n [BEV 98]. Biological media may notably be compared to heterogeneous media consisting of a mixture of dielectric objects with interfaces between media with different refractive indices, such as cytoplasm, intercellular space, and different portions of connective tissue. The intracellular space is also composed of small particles, such as the organelles, including the nucleus, the mitochondria, and the vesicles. These particles are likely to play the role of scattering centers and are in general modeled by micro-spheres or micro-ellipsoids in suspension in a gel. The Mie theory of scattering is perfectly applicable in this case, and it yields an exact expression for the phase function for a given size of the particles and given refractive indices for the interior and exterior. A model often used for the distribution of particle sizes is a fractal model [GEL 96, SCH 96, THU 01], according to which the particle sizes obey a distribution law given by:
\[ N(d) = N_0 \, d^{-D} \]  [6.8]
where d is the diameter of the microspheres (see Figure 6.3), D is the fractal power of the distribution, and N0 is a parameter whose value is linked to the total number of particles.
Figure 6.3. Representation of a diffuse medium in the form of a set of particles of diameter d, whose number obeys a fractal law
For certain tissues, it may occur that a population of particles of relatively homogeneous size dominates the distribution of particle sizes. This is notably the case in certain neoplastic tissues, where the nuclear mass occupies an increased part of the cell and tissue volume. For this reason, the nuclei scatter a major part of the incident light, and the spectrum of the scattered light is characteristically changed. Oscillations in the spectrum of the scattered light may thus be observed, corresponding to the characteristic variations of the phase function predicted by the Mie theory for a population of micro-spheres of homogeneous size comparable to the wavelength of the light [PER 98].

6.3. Propagation of photons in diffuse media

The different interaction mechanisms between light and matter described in the previous sections determine the propagation of photons in diffuse media. Quantum theory shows that the propagation of light may be described by a wave and a particle scenario, where each of the two aspects provides a part of the truth in
describing the experimental facts. More exactly, the wave character must be taken into account when the coherence of the wave associated with the photon is maintained during propagation in the medium. This is generally the case when the photon propagates in a medium that is homogeneous on the scale of the wavelength. Ordered media, such as crystals, and amorphous media, such as glasses and polymers, may obviously be cited as examples. The case of photonic crystals also falls into this category, in view of the fact that the propagation may only occur for certain wave vectors. In the case of heterogeneous media with random variations of the dielectric properties, the coherence of the incident beam is progressively lost as the photons hit scattering and absorption centers. While the light beam propagates in the diffuse medium, the "coherent" propagation is progressively replaced by a so-called "incoherent" propagation. Therefore, two types of propagation must be considered in optical tomography.

6.3.1. Coherent propagation

The propagation of light is called coherent when the value of the electric or photonic field E at a point r1 of the wave is correlated in space, but also in time (non-zero correlation function), with that of the field measured at a point r2 located at a certain distance from r1. For the basic theory, the reader is referred to [BOR 99, MAN 95]. Quantitatively, this correlation "in space" is given by the mutual coherence function Γ(r1, r2), also called mutual intensity I(r1, r2) in this case:
\[ I(\mathbf{r}_1, \mathbf{r}_2) = \left\langle E^*(\mathbf{r}_1) \, E(\mathbf{r}_2) \right\rangle_{set} \]  [6.9]

where the brackets ⟨ ⟩set represent the average over all possible dielectric configurations with the same statistical nature as the considered medium. For r1 = r2, the mutual intensity is identical to the local intensity of the photonic field I(r). In general, and in particular in a diffuse medium, this function decreases with the distance according to a law depending on the optical conditions of the problem (in particular the radiation source) and the dielectric characteristics of the traversed medium. Thus, this function is usually negligible, except for small and limited values of r2 − r1. It is therefore convenient to consider the mutual coherence function "locally" by introducing a function sometimes called a "first order coherence function". For this purpose, the variables are changed as follows:

\[ \mathbf{R} = \tfrac{1}{2}(\mathbf{r}_1 + \mathbf{r}_2) \]  [6.10]

\[ \mathbf{r} = \mathbf{r}_2 - \mathbf{r}_1 \]  [6.11]
The local distribution of the mutual coherence function may then be characterized at the point R by its Fourier transform along r. The wave vector k is introduced to fix the local propagation direction of the wave. The result of this operation is the Wigner distribution function of the optical field at point R for a wave propagating in the direction of the wave vector k. The terms “specific intensity” and “Wigner coherence function” are also used for:
\[ I(\mathbf{R}, \mathbf{k}) = \int d^3r \; \exp(-i \mathbf{k} \cdot \mathbf{r}) \left\langle E^*\!\left(\mathbf{R} - \frac{\mathbf{r}}{2}\right) E\!\left(\mathbf{R} + \frac{\mathbf{r}}{2}\right) \right\rangle \]  [6.12]
I(R, k) thus gives the probability density of the photon in phase space at point R for the wave vector k. It permits a local, quantitative characterization of the coherence of the propagating wave. In this way, the Wigner distribution corresponding to a planar wave with wave vector k0 is a Dirac distribution at point k = k0, for instance. The intensity of the photonic field is given by:
\[ I(\mathbf{R}) = \frac{1}{(2\pi)^3} \int d^3k \; I(\mathbf{R}, \mathbf{k}) \]  [6.13]
For a given geometry of the wave and with sufficient knowledge of the medium of propagation, it is possible to calculate how this Wigner coherence function is propagated in phase space:

\[ I(\mathbf{R}, \mathbf{k}) = \int d^3R' \, d^3k' \; G_W(\mathbf{R} - \mathbf{R}', \mathbf{k}, \mathbf{k}') \, I_0(\mathbf{R}', \mathbf{k}') \]  [6.14]
where GW(R − R′, k, k′) is the Green function associated with the propagation of the initial Wigner coherence function I0(R′, k′). The Green function GW itself is calculated from the Green functions G± describing the propagation of the electric field E = (Ei, i = x, y, z):

\[ \left\langle E_i^*(\mathbf{r}_1) \, E_i(\mathbf{r}_2) \right\rangle = \int_V d\mathbf{r}_1' \, d\mathbf{r}_2' \; G^-(\mathbf{r}_1, \mathbf{r}_1') \, G^+(\mathbf{r}_2, \mathbf{r}_2') \left\langle E_i^*(\mathbf{r}_1') \, E_i(\mathbf{r}_2') \right\rangle \]  [6.15]
G± are the "advanced" and "retarded" Green functions, according to the terms used for instance in [JAC 95], which describe the progressive and retrograde propagation of the electromagnetic wave with wave vector k = n(r, ω) ω/c (in a vacuum k0 = ω/c, where ω is the pulsation of the wave and c is the speed of light) in a medium of variable dielectric nature, described by the complex dielectric function ε(r, ω), with which the complex refractive index satisfies:
\[ n^2(\mathbf{r}, \omega) = \varepsilon(\mathbf{r}, \omega) \]  [6.16]
Let us now consider a wave traversing a dielectric medium that is sufficiently homogeneous on the scale of the wavelength, which satisfies the wave equation:

\[ \nabla^2 E_i(\mathbf{r}, \omega) + k^2(\mathbf{r}, \omega) \, E_i(\mathbf{r}, \omega) = 0 \]  [6.17]
with:
\[ k(\mathbf{r}, \omega) = k_0 \, n(\mathbf{r}, \omega) \]  [6.18]
The Green function describing the propagation of the electric field is then given by the solution of the propagation equation:
\[ \left[ \nabla^2 + k^2(\mathbf{r}, \omega) \right] G^{\pm}(\mathbf{r}, \mathbf{r}') = -4\pi \, \delta(\mathbf{r} - \mathbf{r}') \]  [6.19]
In many cases, the dielectric function is a constant ε(ω) in space and time. Its dependence on the pulsation ω is given by the dispersion law of the material. In a vacuum, we obtain:

\[ \varepsilon(\mathbf{r}, \omega) = \varepsilon_0 \]  [6.20]
In a homogeneous medium in which the dielectric function varies either very slowly or very rapidly in space (over distances shorter than the wavelength, for example comparable to atomic or molecular dimensions), the dielectric function may be considered as constant and equal to ε̄(ω), the mean dielectric constant in space and time. In the case where the dielectric constant fluctuates spatially over distances that are neither very long nor very short (i.e. comparable to or longer than the wavelength), we write:

\[ \varepsilon(\mathbf{r}, \omega) = \bar{\varepsilon}(\omega) + \Delta\varepsilon(\mathbf{r}, \omega) \]  [6.21]
where Δε(r, ω) is the spatial fluctuation of the dielectric constant. Similarly, we obtain for the refractive index:
\[ n^2(\mathbf{r}, \omega) = \overline{n^2}(\omega) + \Delta n^2(\mathbf{r}, \omega) \]  [6.22]
where n̄²(ω) is the mean squared refractive index and Δn²(r, ω) is the spatial fluctuation of the squared refractive index. In the case where the dielectric medium is reasonably homogeneous (Δε(r, ω) is negligible) and the imaginary component of the dielectric constant is not too high on the scale of the distances covered by the photon, the Green function is given by:
\[ G^{\pm}(\mathbf{r}, \mathbf{r}', \omega) = \frac{\exp\left( \pm i \, k(\mathbf{r}, \omega) \, |\mathbf{r} - \mathbf{r}'| \right)}{|\mathbf{r} - \mathbf{r}'|} \]  [6.23]
Inserted into equation [6.15], this makes it possible to find a general expression of the Green function GW(R − R′, k, k′) for the Wigner coherence function. In the particular case of the intensity, we obtain the law of decreasing intensity:
\[ G_W(\mathbf{R} - \mathbf{R}', \mathbf{k}, \mathbf{k}') = \frac{\exp\left( -2 k_i |\mathbf{R} - \mathbf{R}'| \right)}{|\mathbf{R} - \mathbf{R}'|^2} \, \delta(\mathbf{k} - \mathbf{k}') \]  [6.24]
where k = kr + i ki; ki is the imaginary component of the wave vector k (if it exists), which describes the attenuation of the intensity of the wave with distance from the source due to optical absorption. Let us now consider in more detail the case of random and diffuse media. The dielectric function ε(r, ω) is in general a random function of the point r and reflects the structure of the disordered medium. Statistically, it may be characterized by its autocorrelation function, averaged over the possible realizations of the medium:
\[ B(\mathbf{r}_1, \mathbf{r}_2) = \frac{\omega^4}{c^4} \left\langle \varepsilon^*(\mathbf{r}_1) \, \varepsilon(\mathbf{r}_2) \right\rangle \]  [6.25]
where the constants ω and c have been inserted into the expression for B to give it the unit of energy squared (a source of potential). In the case of a statistically homogeneous disordered medium, the autocorrelation function reduces to:

\[ B(\mathbf{r}_1, \mathbf{r}_2) = B(\mathbf{r}_2 - \mathbf{r}_1) = B(\mathbf{R}) \]  [6.26]
where B may advantageously be described by its 3D Fourier transform B̃(q), where q is the reciprocal variable to R. The autocorrelation function B(R), or its Fourier
transform B̃(q), plays an important role in the calculation of the propagators, which describe the migration of photons in the dielectric medium. By comparison with the problem of particle scattering in quantum mechanics, B(R) may be considered as a scattering potential explaining the existence of a quantization and of "propagating" eigenmodes of the photonic field. Let us consider the particular case where the dielectric medium results from a mixture of very large molecules, aggregates, or particles (for instance, colloidal particles). We speak more generally of "patterns" (index m), which are described by the dielectric function εm(r) and are randomly repeated and distributed over a lattice of points ri:

\[ \varepsilon(\mathbf{r}) = \sum_i \varepsilon_m(\mathbf{r} - \mathbf{r}_i) \]  [6.27]
The autocorrelation function reads in this case:
\[ B(\mathbf{r}_1, \mathbf{r}_2) = B(\mathbf{R}, \mathbf{r}) = \frac{\omega^4}{c^4} \sum_{i,j}^{\infty} \left\langle \varepsilon_m^*(\mathbf{r}_1 - \mathbf{r}_i) \, \varepsilon_m(\mathbf{r}_2 - \mathbf{r}_j) \right\rangle_r = \frac{\omega^4}{c^4} \sum_{i,j}^{\infty} \left\langle \varepsilon_m^*\!\left(\mathbf{R} + \frac{\mathbf{r}}{2} - \mathbf{r}_i\right) \varepsilon_m\!\left(\mathbf{R} - \frac{\mathbf{r}}{2} - \mathbf{r}_j\right) \right\rangle_r \]  [6.28]
and its Fourier transform along r:
\[ \tilde{B}(\mathbf{R}, \mathbf{q}) = \frac{\omega^4}{c^4} \sum_{i,j}^{\infty} \tilde{\varepsilon}_m^*(\mathbf{R}, \mathbf{q}) \, \tilde{\varepsilon}_m(\mathbf{R}, \mathbf{q}) \exp\left[ i \mathbf{q} \cdot (\mathbf{r}_i - \mathbf{r}_j) \right] = \frac{\omega^4}{c^4} \, \tilde{\varepsilon}_m^*(\mathbf{q}) \, \tilde{\varepsilon}_m(\mathbf{q}) \, S(\mathbf{q}) \]  [6.29]
with the structure factor S(q) defined by:

\[ S(\mathbf{q}) = \sum_{i,j}^{\infty} \exp\left[ i \mathbf{q} \cdot (\mathbf{r}_i - \mathbf{r}_j) \right] \]  [6.30]
and the form factor defined by:
\[ \left| \tilde{\varepsilon}_m(\mathbf{q}) \right|^2 = \tilde{\varepsilon}_m^*(\mathbf{q}) \, \tilde{\varepsilon}_m(\mathbf{q}) \]  [6.31]
The form factor is equal to the phase function defined in section 6.2.3.2 in the context of applying Monte Carlo simulations to radiative transport calculations. In equation [6.29], the fact that the medium is homogeneous and infinite, in the sense that ε̃m(q) does not depend on R, has been taken into account.
6.3.2. Mixed coherent/incoherent propagation

We follow in this section the approach developed by John [JOH 96] to describe the mixed propagation of coherent and incoherent waves in diffuse media. GW(R − R′, k, k′) is the Green function associated with the propagation of the initial Wigner coherence function I0(R′, k′). In the case of a medium with random dielectric properties, characterized by the characteristic function or autocorrelation function B(r) of the dielectric function ε(r), or by its Fourier spectrum B̃(q), the Green function, propagator of the Wigner distribution, may be calculated from its Fourier transform:

\[ \tilde{G}_W(\mathbf{Q}, \mathbf{k}, \mathbf{k}') = \frac{c^4}{\omega^4} \int d^3R \; G_W(\mathbf{R}, \mathbf{k}, \mathbf{k}') \exp(-i \mathbf{Q} \cdot \mathbf{R}) \]  [6.32]
John et al. have shown that G̃W(Q, k, k′) is in fact simply given by the elements ⟨k′| H⁻¹ |k⟩ of the matrix of an operator H⁻¹. The operator H may be compared to a Hamiltonian describing the movement of a quantum particle scattered by a source of potential B(R, r). The quantum electromagnetic field states |k⟩ and |k′⟩ are eigenstates of the operator H⁻¹. The following expression may then be derived:

\[ G_W(\mathbf{R} - \mathbf{R}', \mathbf{k}, \mathbf{k}') = \sum_{l,l'}^{\hat{l}} \sum_{m \leq (l,\,l')} \frac{\exp\!\left( -|\mathbf{R} - \mathbf{R}'| / \lambda_{ll'}^{[m]} \right)}{4\pi D_{ll'}^{[m]} \, |\mathbf{R} - \mathbf{R}'|} \, \Psi_{lm}(\mathbf{k}) \, \Psi_{l'm}^*(\mathbf{k}') \]  [6.33]
where D_ll′^[m] and λ_ll′^[m] are parameters derived from the calculation of GW, which correspond, respectively, to an intensity weighting factor and to a characteristic attenuation length of the considered term of the expansion. The indices l and m correspond to the quantum numbers called "azimuthal" and — a term ill-suited to the case of photons — "magnetic", which define the spherical harmonics blm(k̂). l̂ is the "maximum azimuthal index" considered in the expansion, l̂ = 1, 3, 5, … The so-called "spectral" functions Ψlm(k) are:
\[ \Psi_{lm}(\mathbf{k}) = R_l(k) \, b_{lm}(\hat{k}) \]  [6.34]
where Rl(k) is a "radial" function, which is concentrated around k = k0 n(ω), and k̂ is the unit wave vector and the argument of the spherical harmonics blm. The characteristic attenuation length is infinite for the numbers l = l′ = m = 0: λ00^[0] = ∞. For l, l′ ≥ m ≥ 1, λ_ll′^[m] is finite. The terms in equation [6.33] reflect the propagation of a significant portion of the wave for which the distribution over k is not reduced to zero, corresponding to a "coherent" propagation of the wave.

A very interesting aspect underlined by John's theory of propagation in diffuse media is that λ11^[0] may be larger, by a factor of the order of ten, than the transport mean free path in the diffuse medium, lt = 1/μt with μt = μa + μs, in the case of a model of the diffuse medium consisting of polycarbonate microspheres in suspension. This property, established for a particular diffuse medium, may certainly be generalized to a large class of diffuse media, including biological media. This very interesting result proves that if we select, among the photons, those propagating coherently in the diffuse medium, the penetration depth may be higher than predicted by the theory of incoherent propagation of photons. By comparison, the penetration depths or scattering mean free paths of so-called "ballistic" photons, i.e. photons that are not subject to any interaction with the scattering and absorption centers of the medium, are in general between a few hundredths and a few tenths of a millimeter, smaller than the characteristic attenuation depths of the coherent wave.

6.3.3. Incoherent propagation
In equation [6.33], for sufficiently large source–detector distances |R − R′|, the terms of the expansion for which l, l′ ≥ m ≥ 1 become negligible compared to the term for l = l′ = m = 0:
\[ G_W(\mathbf{R} - \mathbf{R}', \mathbf{k}, \mathbf{k}') = \frac{1}{4\pi D_{00}^{[0]} |\mathbf{R} - \mathbf{R}'|} \, \Psi_{00}(\mathbf{k}) \, \Psi_{00}(\mathbf{k}') \]  [6.35]
where Ψ00(k) is a function independent of the direction of the wave vector k̂, which peaks sharply for k = |k| ≈ k0 n(r, ω). A dependence on the direction of k no longer exists in this case, and the coherence of the wave is lost. GW(R − R′, k, k′) thus corresponds to an
isotropic scattering mode, and the weighting factor D00^[0] corresponds to the optical diffusion coefficient (see the next section, especially equation [6.57]). The distribution of the intensity is given by:
\[ I(\mathbf{R}) = \int d\mathbf{R}' \; \frac{I_0(\mathbf{R}')}{4\pi D_{00}^{[0]} |\mathbf{R} - \mathbf{R}'|} \]  [6.36]
showing that 1/(4π D00^[0] |R − R′|) is the Green function for the propagation of the intensity I(R) of the wave in a pure multiple scattering regime.

6.3.4. Radiative transfer theory
In the particular case where I(r1, r2) = ⟨E*(r1) E(r2)⟩set ≈ 0 for r1 ≠ r2, i.e. when the coherence of the photonic field is negligible, we obtain:
\[ I_{incoherent}(\mathbf{R}, \mathbf{k}) = \int d^3r \; \exp(-i \mathbf{k} \cdot \mathbf{r}) \left\langle E^*\!\left(\mathbf{R} - \frac{\mathbf{r}}{2}\right) E\!\left(\mathbf{R} + \frac{\mathbf{r}}{2}\right) \right\rangle \approx \left\langle E^*(\mathbf{R}) \, E(\mathbf{R}) \right\rangle = I(\mathbf{R}) \quad \forall \, \mathbf{k} \]  [6.37]
It is worth mentioning that the Wigner distribution gives the intensity of the photonic field at point R and that it is uniform on a sphere of radius k in the reciprocal space. In fact, a perfectly incoherent photonic field is described by the Wigner distribution (equation [6.37]) and constitutes a limiting case that results from extrapolation to an infinite spectral domain, which does not correspond to any physical reality. Moreover, the apparent decoupling of the spatial variable R and the wave vector k in the expression for Iincoherent (R, k) seems to suggest a property of perfect “locality” of the photonic field, in contradiction to the uncertainty relations (comparable to the Heisenberg relations in quantum mechanics):
\[ \Delta R_m \, \Delta k_m \geq 1, \qquad m = x, y, z \]  [6.38]
Following John's, and previously Cartwright's, proposition [CAR 76], these relations allow the introduction of a new distribution in phase space called the specific intensity IR(R, k), commonly considered in radiative transfer theory (the index R refers to this theory), which results from the averaging of the Wigner
distribution over an arbitrarily defined cell of phase space that satisfies the relation ΔRm Δkm = 1, m = x, y, z, and encloses the point (R, k). Mathematically, this averaging is obtained by convolving the Wigner distribution in the 6D phase space with a Gaussian function of variances (ΔRm)² and (Δkm)², where m = x, y, z:
\[ I_R(\mathbf{R}, \mathbf{k}) = \int d^3R' \, d^3k' \; U(\mathbf{R}, \mathbf{k}, \mathbf{R}', \mathbf{k}') \, I(\mathbf{R}', \mathbf{k}') \]  [6.39]
with:
\[ U(\mathbf{R}, \mathbf{k}, \mathbf{R}', \mathbf{k}') = \pi^{-3} \exp\left( - \frac{|\mathbf{R}' - \mathbf{R}|^2}{\Delta R^2} \right) \exp\left( - \frac{|\mathbf{k}' - \mathbf{k}|^2}{\Delta k^2} \right) \]  [6.40]
where ΔR and Δk are deviations from R and k with arbitrary values that satisfy the relation ΔR Δk = 1. We may show that this new distribution is positive definite, while the Wigner distribution is not. It physically corresponds to the intensity of the wave (in watts per square meter) at point R, propagating in the direction k with the pulsation ω = kc/n and the photon energy ħω. As the elastic scattering considered in this chapter does not modify the energy, only the propagation directions ŝ of the photons (ŝ is a unit vector) are changed by a scattering event. Consequently, the distribution of specific intensities in phase space may be decomposed into groups in which only the directions ŝ of the photons are considered. This distribution over R and ŝ is called the radiance and gives the number of photons at point R that move in the direction ŝ:
\[ L(\mathbf{R}, \hat{s}) = \int dk \; I_R(\mathbf{R}, \mathbf{k}), \quad \text{with } |\mathbf{k}| = k_0 \text{ and } \mathbf{k} = k \hat{s} \]  [6.41]
The radiance is the optical power of the photons at point R that propagate in the direction ŝ, per surface element dA and unit solid angle dΩ (see Figure 6.4). From the expression for the radiance, it is possible to derive the number of photons at point R per unit time that move in the direction ŝ by dividing it by ħω.
The radiance is given by the Boltzmann transport equation:
\[ \hat{s} \cdot \nabla L(\mathbf{R}, \hat{s}) + \mu_t(\mathbf{R}) \, L(\mathbf{R}, \hat{s}) = \mu_s \int_{4\pi} d\Omega' \; p(\hat{s}, \hat{s}') \, L(\mathbf{R}, \hat{s}') + S(\mathbf{R}, \hat{s}) \]  [6.42]
Figure 6.4. The radiance is the photonic power at point R in the direction ŝ, per unit surface dA and solid angle dΩ
The term on the left gives the rate of photons moving in the direction ŝ. The first term, ŝ·∇L(R, ŝ), corresponds to the variation of the radiance L(R, ŝ) in the direction ŝ, while the second term, μt(R) L(R, ŝ), corresponds to the loss of photons that were subject to an interaction with the medium, either a scattering with probability μs or an absorption with probability μa, i.e. with a total probability of interaction per unit length in the direction of ŝ of μt(R) = μs(R) + μa(R). The term on the right describes the contribution of photons in the direction ŝ. The first term corresponds to the expected contribution (probability given by μs per unit length) of photons propagating in the direction ŝ′ and scattered into the direction ŝ with the probability given by the phase function p(ŝ, ŝ′), which is a function of the angle θ between ŝ and ŝ′. The second term, S(R, ŝ), describes a possible source emitting photons in the direction ŝ, the "source term". The conservation of energy globally imposes equality between the two sides of the Boltzmann equation.

The source term may evidently include emissions of luminescence, in particular bioluminescence. It may be associated with contributions arising from inelastic scattering mechanisms of light, such as fluorescence, Raman scattering, or Brillouin scattering. In this case, it reflects the exchange between distributions of photon populations with different energies. In phase space, these populations are confined to different spherical shells in k-space. The source term may also reflect the exchange between two photon populations characterized by different distributions in phase space. For photon populations with the same energy, we may
distinguish between the primary radiation LP(R, ŝ) and the secondary or multiply scattered radiation LS(R, ŝ):
\[ L(\mathbf{R}, \hat{s}) = L_P(\mathbf{R}, \hat{s}) + L_S(\mathbf{R}, \hat{s}) \]  [6.43]
In general, the primary radiation corresponds to a collimated beam whose radiance is non-zero only in the propagation direction ŝ0. The source term may then be written as:
\[ S(\mathbf{R}, \hat{s}) = \mu_s(\mathbf{R}) \, p(\hat{s}, \hat{s}_0) \, L_P(\mathbf{R}, \hat{s}_0) \]  [6.44]
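Before the diffusion approximation is introduced below, the content of the transport equation [6.42] can be made concrete with a toy Monte Carlo random walk. The following Python sketch (an illustration with assumed optical constants, not a method from the text) propagates photons through a slab with exponentially distributed free paths, absorption with probability μa/μt, and Henyey–Greenstein scattering:

```python
import numpy as np

rng = np.random.default_rng(0)

mu_s, mu_a = 10.0, 0.1      # illustrative coefficients in mm^-1 (assumed values)
mu_t = mu_s + mu_a
g = 0.9                     # anisotropy factor

def sample_hg_cos(g, xi):
    """Sample cos(theta) from the Henyey-Greenstein phase function."""
    if g == 0.0:
        return 2.0 * xi - 1.0
    tmp = (1.0 - g * g) / (1.0 - g + 2.0 * g * xi)
    return (1.0 + g * g - tmp * tmp) / (2.0 * g)

def run_photon(max_depth=5.0):
    """Random walk of one photon launched at z=0 along +z; returns the
    absorption depth, or None if the photon escapes the slab."""
    pos = np.zeros(3)
    u = np.array([0.0, 0.0, 1.0])
    while True:
        step = -np.log(rng.random()) / mu_t     # free path, mean 1/mu_t
        pos = pos + step * u
        if pos[2] < 0.0 or pos[2] > max_depth:
            return None                          # photon escapes
        if rng.random() < mu_a / mu_t:
            return pos[2]                        # photon absorbed here
        # scatter: HG polar angle plus uniform azimuth, standard direction update
        cos_t = sample_hg_cos(g, rng.random())
        sin_t = np.sqrt(max(0.0, 1.0 - cos_t * cos_t))
        phi = 2.0 * np.pi * rng.random()
        if abs(u[2]) > 0.99999:
            u = np.array([sin_t * np.cos(phi), sin_t * np.sin(phi),
                          np.sign(u[2]) * cos_t])
        else:
            d = np.sqrt(1.0 - u[2] ** 2)
            u = np.array([
                sin_t * (u[0] * u[2] * np.cos(phi) - u[1] * np.sin(phi)) / d + u[0] * cos_t,
                sin_t * (u[1] * u[2] * np.cos(phi) + u[0] * np.sin(phi)) / d + u[1] * cos_t,
                -sin_t * np.cos(phi) * d + u[2] * cos_t,
            ])

depths = [z for z in (run_photon() for _ in range(20000)) if z is not None]
print(f"absorbed fraction: {len(depths)/20000:.3f}, mean absorption depth: {np.mean(depths):.2f} mm")
```

This is exactly the stochastic picture behind the Monte Carlo methods mentioned again in section 6.5.2.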
Henceforth, we focus on the calculation of the intensity of the scattered photonic field. Such a regime may be considered as established when the radiance depends only weakly on the propagation direction ŝ of the photons. This is an important, simplifying assumption which is applicable in many situations, in general when the covered distance is longer than about ten times the scattering mean free path for common phase functions in biology (g ≈ 0.9), i.e. on the order of millimeters. This hypothesis enables us to establish a simplified equation governing the distribution of the intensity or the fluence of the scattered wave. The dependence of the radiance on ŝ may be expanded into a sum of spherical harmonics, of which only the first two terms are taken into account:
\[ L_S(\mathbf{R}, \hat{s}) \approx \frac{1}{4\pi} \Phi_S(\mathbf{R}) + \frac{3}{4\pi} \mathbf{F}_S(\mathbf{R}) \cdot \hat{s} \]  [6.45]
where:
\[ \Phi_S(\mathbf{R}) = \int_{4\pi} d\Omega \; L_S(\mathbf{R}, \hat{s}) \]  [6.46]
is the photonic fluence, which is the number of photons at point R circulating in all directions ŝ. This quantity plays, in a "particle" picture, the same role as the local intensity of the photonic field I(R).
\[ \mathbf{F}_S(\mathbf{R}) = \int_{4\pi} d\Omega \; L_S(\mathbf{R}, \hat{s}) \, \hat{s} \]  [6.47]
is a vector, namely the net photonic flux at point R, pointing in the direction in which the photons mainly move.
We may integrate the two sides of the transport equation (equation [6.42]) over a solid angle of 4π to eliminate ŝ:
\[ \int_{4\pi} d\Omega \; \hat{s} \cdot \nabla L(\mathbf{R}, \hat{s}) + \int_{4\pi} d\Omega \left[ \mu_a(\mathbf{R}) + \mu_s(\mathbf{R}) \right] L(\mathbf{R}, \hat{s}) = \int_{4\pi} d\Omega \; \mu_s \int_{4\pi} d\Omega' \; p(\hat{s}, \hat{s}') \, L(\mathbf{R}, \hat{s}') + \int_{4\pi} d\Omega \; \mu_s(\mathbf{R}) \, p(\hat{s}, \hat{s}_0) \, L_P(\mathbf{R}, \hat{s}_0) \]  [6.48]
In the first term, we may move the ∇ operator outside the integral, because it acts on R while the integration is over the directions; we may thus introduce the net photonic flux FS(R) into the expression. Then, we may permute the two integrals in the first term of the right-hand side and take equation [6.3] into account to eliminate the phase function. Equally, the phase function may be eliminated in the last term of the right-hand side. Finally, the two terms depending on μs cancel out between the two sides, yielding:
\[ \nabla \cdot \mathbf{F}_S(\mathbf{R}) + \mu_a(\mathbf{R}) \, \Phi_S(\mathbf{R}) = \mu_s(\mathbf{R}) \, L_P(\mathbf{R}, \hat{s}_0) \]  [6.49]
Moreover, we may derive from the transport equation an expression for FS(R) by integrating anew the two sides of equation [6.42] over a solid angle of 4π, but this time multiplying them beforehand by ŝ. Without going into the details of the calculation, it has to be taken into account that:
\[ \int_{4\pi} d\Omega \; \hat{s} \, (\hat{s} \cdot \nabla) \, L_S(\mathbf{R}, \hat{s}) = \frac{1}{3} \nabla \Phi_S(\mathbf{R}) \]  [6.50]
\[ \int_{4\pi} d\Omega \; \hat{s} \left[ \mu_s \int_{4\pi} d\Omega' \; p(\hat{s}, \hat{s}') \, L(\mathbf{R}, \hat{s}') \right] = g \, \mu_s \, \mathbf{F}_S(\mathbf{R}) \]  [6.51]

\[ \int_{4\pi} d\Omega \; \hat{s} \, S(\mathbf{R}, \hat{s}) = \int_{4\pi} d\Omega \; \hat{s} \, \mu_s(\mathbf{R}) \, p(\hat{s}, \hat{s}_0) \, L_P(\mathbf{R}, \hat{s}_0) = g \, \mu_s(\mathbf{R}) \, L_P(\mathbf{R}, \hat{s}_0) \, \hat{s}_0 \]  [6.52]

Using these expressions leads to the following expression for FS(R):
\[ \mathbf{F}_S(\mathbf{R}) = -\frac{1}{3 \mu_{tr}(\mathbf{R})} \nabla \Phi_S(\mathbf{R}) + \frac{g \, \mu_s(\mathbf{R})}{\mu_{tr}(\mathbf{R})} \, L_P(\mathbf{R}, \hat{s}_0) \, \hat{s}_0 \]  [6.53]
where the attenuation coefficient for photonic transport in the turbid medium is defined as:
\[ \mu_{tr}(\mathbf{R}) = \mu_t(\mathbf{R}) - g \, \mu_s(\mathbf{R}) = \mu_a(\mathbf{R}) + (1 - g) \, \mu_s(\mathbf{R}) = \mu_a(\mathbf{R}) + \mu_s'(\mathbf{R}), \quad \text{with } \mu_s'(\mathbf{R}) = (1 - g) \, \mu_s(\mathbf{R}) \]  [6.54]
The scattering depth then reduces to:
\[ l_s' = \frac{1}{\mu_s'} \]  [6.55]
By analogy with one of Fick’s laws, equation [6.53] can be rewritten as:
\[ \mathbf{F}(\mathbf{R}) = -D \, \nabla \Phi(\mathbf{R}) + \text{source} \]  [6.56]
This allows us to conclude that the diffusion coefficient for the scattering of photons in turbid media is given by:
\[ D = \frac{1}{3 \mu_{tr}} \]  [6.57]
Finally, if we insert equation [6.53] into equation [6.49], a differential equation is obtained which governs the distribution of the fluence rate, called the diffusion equation:
\[ \nabla^2 \Phi_S(\mathbf{R}) - 3 \mu_a(\mathbf{R}) \, \mu_{tr}(\mathbf{R}) \, \Phi_S(\mathbf{R}) = 3 g \, \nabla \cdot \left[ \mu_s(\mathbf{R}) \, L_P(\mathbf{R}, \hat{s}_0) \, \hat{s}_0 \right] - 3 \mu_{tr}(\mathbf{R}) \, \mu_s(\mathbf{R}) \, L_P(\mathbf{R}, \hat{s}_0) \]  [6.58]
This is a diffusion-type equation, whose homogeneous part has the form:
\[ \nabla^2 \Phi_S(\mathbf{R}) - \mu_{eff}^2(\mathbf{R}) \, \Phi_S(\mathbf{R}) = 0 \]  [6.59]
with the effective attenuation coefficient defined by:
\[ \mu_{eff}(\mathbf{R}) = \sqrt{3 \mu_a(\mathbf{R}) \, \mu_{tr}(\mathbf{R})} \]  [6.60]
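To fix orders of magnitude, equations [6.54]–[6.60] can be evaluated for tissue-like optical constants; the numbers in the following Python sketch are illustrative assumptions, not values from the text:

```python
import math

mu_a = 0.01      # absorption coefficient, mm^-1 (assumed)
mu_s = 10.0      # scattering coefficient, mm^-1 (assumed)
g = 0.9          # anisotropy factor

mu_s_prime = (1.0 - g) * mu_s            # reduced scattering coefficient, eq. [6.54]
mu_tr = mu_a + mu_s_prime                # transport attenuation coefficient, eq. [6.54]
ls_prime = 1.0 / mu_s_prime              # scattering depth, eq. [6.55]
D = 1.0 / (3.0 * mu_tr)                  # photon diffusion coefficient, eq. [6.57]
mu_eff = math.sqrt(3.0 * mu_a * mu_tr)   # effective attenuation coefficient, eq. [6.60]

print(f"mu_s' = {mu_s_prime:.2f} mm^-1, l_s' = {ls_prime:.2f} mm")
print(f"D = {D:.3f} mm, mu_eff = {mu_eff:.3f} mm^-1 -> 1/e depth = {1.0/mu_eff:.1f} mm")
```

For these values the effective 1/e attenuation depth is of the order of a few millimeters, consistent with the penetration figures discussed below.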
This equation will be particularly useful in the elaboration of the optical tomography method based on the incoherent propagation of light in disordered
media. For a detailed study of the transport of photons in diffuse media, the reader may refer to the chapter by Starr in [WEL 95].

6.4. Optical tomography methods

The aim of optical tomographic methods is to provide a generally 3D image of a cross section through an object immersed in a translucent medium. The principal application examples are found in medical imaging:
– observation of tumor masses: breast cancer, for instance,
– functional imaging of the cortex in neurology and in the neurosciences,
– imaging of inflammations, notably articular ones,
– study of muscular metabolism,
– detection of defects in enamel, notably caries, in dentistry,
– imaging of the eye and its components: cornea, lens, retina, etc.,
– imaging of the mucous membrane, epithelium, etc.,
– imaging of the skin,
– etc.

Optical tomography methods are diverse, and they must be applied considering several criteria:
– the signal intensity coming from the analyzed object,
– the nature of the propagation of photons in the medium separating the analyzed object from the detector,
– the scale at which the tomographic method must be applied.

The problem of the nature and the intensity of the analyzed signal, which permits delineating the object and determining its form, is important, since it conditions the performance of the tomographic method, that is to say the detectability of objects and the resolution with which they are perceived. In general, the detectability depends on:
– the signal-to-noise ratio of the analyzed object,
– the optical transfer function linking the image of the cross section through the object to that captured by the detectors or the camera.
The signal in optical tomography is directly linked to the mechanisms of the interaction between light and matter described in section 6.2, i.e. to the phenomena of elastic and inelastic scattering. More precisely, the characteristics of the absorption and the scattering principally determine the propagation of the light in the object and enable the contrast necessary for the detection of the object to be obtained. They reflect the different characteristics of the object: absorption spectroscopy reflects the chromophore composition, while the scattering properties reflect the specificities of the forms and the dielectric structure of the material at a microscopic scale.

The noise competes with the signal and reduces the probability of a correct decision on the presence or absence of the object. There are two sources of noise: the thermal and electronic noise of the detectors or the camera, and the quantum noise associated with the signal itself. In the case of a coherent detection, which is possible in the case of a coherent propagation of the optical signal in the medium, the importance of the quantization noise and other types of detector noise is much reduced because of the higher level of the detected signal, resulting from the contribution of the reference signal. This aspect of coherent detection constitutes an advantage of coherent tomographic methods. A second source of noise may arise from the disordered medium itself: we may speak of "structured noise" when non-interpretable fluctuations of the signal come from non-interpretable structures of the medium surrounding the object or of the object itself, thus creating a variability that also reduces the probability of a correct decision concerning the presence of the object.

Finally, the optical transfer function describes mathematically the relation between the optical wave traversing the object and the optical wave detected either coherently or incoherently by the detectors. It is given by models of the propagation of the wave in the possibly disordered medium surrounding the object. These models are described in detail in section 6.3. Coherent and incoherent propagation are distinguished. This distinction leads to fundamentally different expressions for the optical transfer function. Knowledge of the transfer functions enables calculation of the image propagated to the detector, thus solving the "direct problem". In many cases, but unfortunately not in all, the reconstruction of the image of a cross section through the object by inverse propagation of the detected signal is possible by simple inversion of the optical transfer function. This approach partly answers the problem of optical tomography.

The problem of the scale at which the tomography is to be applied determines the choice of the employed tomographic methods. In fact, as shown in section 6.3.1, devoted to the coherent propagation of light in diffuse media, the penetration of the "coherent" wave, although superior to that predicted by the radiative transfer theory based on the incoherent propagation of light, generally does not exceed a few
millimeters in biological media, while it may be several centimeters in the case of incoherent propagation of light in tissues.

6.4.1. Optical coherence tomography

In this section, we focus on the optical tomography of dielectric media for which the coherent propagation of the incident optical wave is sufficiently strong to be used for the reconstruction of the object. This is generally the case over distances of several millimeters for biological tissues. The penetration depth obviously depends on the scattering characteristics of the medium (characterized by the scattering coefficient μs) and the employed wavelength. In fact, the scattering cross section increases fairly rapidly with the energy of the incident photon. Rayleigh's law of scattering predicts a growth of the scattering coefficient with the fourth power of the pulsation, i.e. a decrease with the fourth power of the wavelength (see [JAC 95]):

\[ \mu_s \propto \omega^4 \propto 1/\lambda^4 \]  [6.61]
where ω = 2πν = 2πc/λ is the pulsation of the incident wave. Therefore, we attempt to work at wavelengths as long as possible to minimize the role of Rayleigh scattering. However, we must avoid increasing the wavelengths such that the absorption (characterized by the absorption coefficient μa) of certain chromophores begins to have an influence. Water, fat and other organic compounds have a limiting absorption from a wavelength of 1.4 μm onwards. Thus, an optimum can be found in the range between 1.0 and 1.3 μm. The choice is also influenced by the availability of optical sources in the respective spectral interval. The first sources available at the time research on optical coherence tomography started were superluminescence diodes at about 820 nm. The first demonstrations of coherent imaging techniques were thus made in the near infrared, but at wavelengths too short for optimal optical penetration. The sources used later were diodes or pulsed lasers at around 1,200 nm and permitted a penetration reaching more than 2 mm.

6.4.1.1. Diffraction tomography

It is well known that the 3D physical structure of an object may be obtained by sectioning of the object (tomography), optionally followed by a reassembly of these cross sections to produce a complete 3D image. When X- or γ-rays are used, these cross sections are in general obtained by parallel or cone-beam projections. Usually, we aim to reconstruct the absorption coefficient function in the given cross section.
As described earlier in this book (see Chapter 2), the algorithm employed for the reconstruction of this function is based on the Radon transform and is called an inverse projection or backprojection reconstruction algorithm. The calculation of the values of this function can make use of the "Fourier slice theorem", which provides a technique for the "projection slice" reconstruction of the cross section of the object with the Fourier transform. Other decompositions into basis functions, like wavelets, may equally lead to a powerful calculation method. The Fourier slice theorem establishes that the distribution of the values of the function f(x) in the considered cross section may simply be calculated by an inverse Fourier transform of the 2D Fourier transform (if a planar cross section is considered) of this function. The values of the function in the Fourier domain, on an axis passing through the origin and spanning an angle θ with the x-axis, are obtained by calculating the Fourier transform of the projection of the image along an axis with angle θ. By changing the angle θ, the Fourier domain is completely scanned, and the 2D Fourier transform of the cross section can be determined entirely.

In the case where the wavelengths are longer and comparable to the size of the analyzed object, the scattering or diffraction of the incident wave by the analyzed object becomes very important, up to the point where the probability for a photon to traverse the medium in which the object is placed without being scattered is practically zero. The so-called "inverse projection" approach developed for X- and γ-ray tomography no longer works in the same way, because diffracted images instead of projections of the object are obtained. However, the Fourier slice theorem may be adapted in the following manner (see [BOR 99, CAR 70, DÄN 70, KAK 88, POR 89, WOL 69]). Let a volume V be given that limits in space a disordered medium containing objects that scatter the light (see Figure 6.5).
Figure 6.5. Volume V containing the diffuse medium and sphere S surrounding V
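Before turning to the diffracting case, the classical projection-slice statement recalled above is easy to verify numerically; the following minimal NumPy sketch (added for illustration, not part of the original text) compares the 1D FFT of a zero-degree projection with the corresponding central line of the 2D FFT:

```python
import numpy as np

n = 128
y, x = np.mgrid[-1:1:n*1j, -1:1:n*1j]
image = ((x**2 + y**2) < 0.5**2).astype(float)   # a simple disk phantom

projection = image.sum(axis=0)                   # parallel projection onto the x-axis
slice_from_proj = np.fft.fftshift(np.fft.fft(projection))
slice_from_2d = np.fft.fftshift(np.fft.fft2(image))[n // 2, :]   # k_y = 0 line

print(np.allclose(slice_from_proj, slice_from_2d))   # True
```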
Reasoning similarly to the treatment of the incoherent propagation of waves in section 6.3.4, consider an incident or primary wave P irradiating the volume V, in which a
scattered wave originates and propagates to the exterior of the volume. The resulting total wave, including this secondary or scattered radiation S, is:
\[ E_i(\mathbf{r}, \omega) = E_i^P(\mathbf{r}, \omega) + E_i^S(\mathbf{r}, \omega) \]  [6.62]
The two components P and S satisfy equation [6.17], the propagation equation of the wave. A simplifying hypothesis is made, which is in general satisfied at small scales in biological materials or materials containing heterogeneities whose refractive index fluctuates only a little: the propagation of the primary wave is supposed to be practically unaffected by the fluctuations of the refractive index of the diffracting object. Only the diffracted wave is assumed to satisfy equation [6.17] with a fluctuating refractive index given by equation [6.22]. By inserting equation [6.22] into equation [6.17] and by shifting the fluctuating part of the refractive index to the right-hand side, we obtain:
\[ \nabla^2 E_i^S(\mathbf{r}, \omega) + k^2(\mathbf{r}, \omega) \, E_i^S(\mathbf{r}, \omega) = -4\pi \, F(\mathbf{r}, \omega) \, E_i(\mathbf{r}, \omega) \]  [6.63]
with:
\[ F(\mathbf{r}, \omega) = \frac{1}{4\pi} \, k^2 \, \Delta n^2(\mathbf{r}, \omega) \]  [6.64]
where k² and Δn²(r, ω) are given by equations [6.18] and [6.22]. F(r, ω) is called the scattering potential and constitutes a source term for the propagation equation. An integral expression may be found for the scattered wave starting from equations [6.19] and [6.63]. By multiplying the first by −EiS(r, ω) and the second by G±(r, r′), and by subtracting the one from the other, we obtain:
\[ G^{\pm}(\mathbf{r}, \mathbf{r}') \, \nabla^2 E_i^S(\mathbf{r}, \omega) - E_i^S(\mathbf{r}, \omega) \, \nabla^2 G^{\pm}(\mathbf{r}, \mathbf{r}') = -4\pi \, F(\mathbf{r}, \omega) \, E_i(\mathbf{r}, \omega) \, G^{\pm}(\mathbf{r}, \mathbf{r}') + 4\pi \, E_i^S(\mathbf{r}, \omega) \, \delta^3(\mathbf{r} - \mathbf{r}') \]  [6.65]
We may then exchange r and r′ and exploit that G±(r, r′) = G±(r′, r). Finally, the integration of the two sides of the equation over the volume of a sphere S surrounding the volume V (see Figure 6.5), whose radius may be made as large as possible, yields an expression for the field EiS(r, ω):
\[ E_i^S(\mathbf{r}, \omega) = \int_{Sphere} d^3r' \; F(\mathbf{r}', \omega) \, E_i(\mathbf{r}', \omega) \, G^{\pm}(\mathbf{r}, \mathbf{r}', \omega) + \oint_{Surf} d^2s' \left[ E_i^S(\mathbf{r}', \omega) \frac{\partial G^{\pm}(\mathbf{r}, \mathbf{r}', \omega)}{\partial n'} - G^{\pm}(\mathbf{r}, \mathbf{r}', \omega) \frac{\partial E_i^S(\mathbf{r}', \omega)}{\partial n'} \right] \]  [6.66]
We may show that, by taking into account the effective expression of the Green function (equation [6.23]) and the asymptotic behavior of EiS(r, ω), the surface integral vanishes for R → ∞, leaving:
\[ E_i^S(\mathbf{r}, \omega) = \int_V d^3r' \; F(\mathbf{r}', \omega) \, E_i(\mathbf{r}', \omega) \, G^{\pm}(\mathbf{r}, \mathbf{r}', \omega) \]  [6.67]
where the fact that the integral is restricted to the volume V of scattering centers, where F(r′, ω) is not zero, has been taken into account. Finally, by inserting this expression into equation [6.62], we find an integral equation for the total field:
\[ E_i(\mathbf{r}, \omega) = E_i^P(\mathbf{r}, \omega) + \int_V d^3r' \; F(\mathbf{r}', \omega) \, E_i(\mathbf{r}', \omega) \, G^{\pm}(\mathbf{r}, \mathbf{r}', \omega) \]  [6.68]
6.4.1.2. Born approximation

The total field Ei(r, ω) may be derived from the implicit equation [6.68] by successive approximations. In the case where the fluctuations of the refractive index at the origin of the scattered wave are small, a perturbation approach may be taken. The non-perturbed field is:
\[ E_i^{(0)}(\mathbf{r}, \omega) = E_i^P(\mathbf{r}, \omega) \]  [6.69]
The calculation may be performed iteratively by calculating in step n the total field of order n from the total field of order n – 1:
\[ E_i^{(n)}(\mathbf{r}, \omega) = E_i^P(\mathbf{r}, \omega) + \int_V d^3r' \; F(\mathbf{r}', \omega) \, E_i^{(n-1)}(\mathbf{r}', \omega) \, G^{\pm}(\mathbf{r}, \mathbf{r}', \omega) \]  [6.70]
The series converges rapidly to a very good approximation of the total field. To form an intuitive picture, the volume V may be considered as a mosaic of independent scattering centers, each of which is able to scatter the incident photon. If we content ourselves with a single iteration, the total field Ei(1)(r, ω) contains the scattered field coming independently from each scattering center, without a second scattering taking place. This is called the first Born approximation (see Figure 6.6).
Figure 6.6. Wavefronts in the Born approximation. (A) incident wave; (B) large heterogeneities; (C) small heterogeneities (wavefront perturbed only a little)
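The iteration of equation [6.70] can be sketched numerically in one dimension, replacing the 3D kernel by the 1D Helmholtz Green function exp(ik|x − x′|)/(2ik); everything in the following Python sketch (grid, potential strength) is an illustrative assumption, not from the text:

```python
import numpy as np

# Schematic 1D Born series for equation [6.70]: E_n = E_P + G F E_{n-1}.
n, L, k = 400, 20.0, 2.0
x = np.linspace(0.0, L, n)
dx = x[1] - x[0]

F = 0.05 * np.exp(-((x - L / 2) ** 2))                  # weak scattering potential
G = np.exp(1j * k * np.abs(x[:, None] - x[None, :])) / (2j * k)
A = G * F[None, :] * dx                                 # discretized integral operator

E_P = np.exp(1j * k * x)                                # incident plane wave
E = E_P.copy()
for _ in range(10):                                     # Born iterations, eq. [6.70]
    E_new = E_P + A @ E
    delta = np.max(np.abs(E_new - E))
    E = E_new
print(f"residual after 10 iterations: {delta:.2e}")     # converges for weak F
```

For a weak potential the first iterate already dominates the series, which is precisely the first Born approximation discussed next.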
The first Born approximation is a good working hypothesis in cases where the density of scattering centers is sufficiently small or the tomography is performed at a small scale, i.e. on the order of millimeters for biological tissues. The total field reaching the detector comes in this case from single scattering, and the coherence of the scattered wave is maintained. It is thus possible to resort to a so-called "coherent" detection, which provides an increased sensitivity, down to a few photons, and a considerable dynamic range, reflecting a substantial depth of the image. In the case where the scattering coefficient is larger, the probability of a single scattering becomes too small to permit imaging based on the detection of coherently propagating waves. Mathematically, the field Ei(1)(r, ω) is no longer a sufficient approximation of the total field, and the number of iterations must be increased. This reflects the presence of multiple scattering of photons propagating in the dielectric medium.

6.4.1.3. Principle of the reconstruction of the object in diffraction tomography

Admitting the validity of the first Born approximation, equation [6.68] reads:
\[ E_i^{(1)}(\mathbf{r}, \omega) = E_i^P(\mathbf{r}, \omega) + \int_V d^3r' \; F(\mathbf{r}', \omega) \, E_i^P(\mathbf{r}', \omega) \, G^{\pm}(\mathbf{r}, \mathbf{r}', \omega) \]  [6.71]
As incident or primary wave, a plane wave propagating in the direction ŝ0 (unit vector) with the wave vector kŝ0 is chosen:
\[ E_i^P(\mathbf{r}, \omega) \approx \exp(i k \, \hat{s}_0 \cdot \mathbf{r}) \]  [6.72]
is chosen. In addition, it is assumed that the diffracted wave is captured at a sufficiently large distance from the diffracting medium:
\[ |\mathbf{r} - \mathbf{r}'| \approx r - \hat{s} \cdot \mathbf{r}' \]  [6.73]
where ŝ is the unit vector in the direction of the vector r (see Figure 6.7).
Figure 6.7. Approximation of |r − r′| at large distances
The Green function then reads:
\[ G^{\pm}(\mathbf{r}, \mathbf{r}', \omega) \approx \frac{\exp(\pm i k r)}{r} \exp(\mp i k \, \hat{s} \cdot \mathbf{r}') \]  [6.74]
and equation [6.71] becomes:
\[ E_i^{(1)}(\mathbf{r}, \omega) \approx \exp(i k \, \hat{s}_0 \cdot \mathbf{r}) + \left[ \int d^3r' \; F(\mathbf{r}') \exp\left( -i k (\hat{s} - \hat{s}_0) \cdot \mathbf{r}' \right) \right] \frac{\exp(\pm i k r)}{r} = \exp(i k \, \hat{s}_0 \cdot \mathbf{r}) + \tilde{F}\left( k (\hat{s} - \hat{s}_0) \right) \frac{\exp(\pm i k r)}{r} \]  [6.75]
This means that the coherent wave resulting from the interaction with the scattering center comprises, in addition to the incident wave, a scattered wave spherically emanating from the scattering center, whose amplitude and phase are given by the 3D Fourier transform of the scattering potential of the scattering center, evaluated at the point K of the reciprocal space given by:
\[ \mathbf{K} = k \, (\hat{s} - \hat{s}_0) \]  [6.76]
If the points covered by the vector K for variable directions ŝ of the diffracted wave are represented graphically (see Figure 6.8), we find that K describes a circle with radius k, passing through the origin O of the reciprocal space and centered at the head P of the vector −kŝ0.
Figure 6.8. Location of the head of the vector K when the direction ŝ of the scattered wave varies angularly (for a fixed incident wave ŝ0)
To scan the reciprocal space densely, we may vary ŝ0 in all directions, thereby shifting the center P on a sphere with center O and radius k (see Figure 6.9). We may also scan the reciprocal space by changing k, i.e. by changing the wavelength λ of the incident beam (see Figure 6.10). A combination of these two approaches, i.e. changing ŝ0 and k, is also possible.
Figure 6.9. Scanning of the reciprocal space when the direction of the incident wave ŝ0 varies
Figure 6.10. Scanning of the reciprocal space when the wavelength λ varies
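A few lines of NumPy make these scanning rules concrete: for a fixed illumination direction, the sampled K of equation [6.76] stay on the Ewald sphere and never exceed the limiting radius 2k of equation [6.77] below (an illustrative sketch, not from the text):

```python
import numpy as np

lam = 0.8e-6                        # illustrative wavelength, m
k = 2.0 * np.pi / lam

s0 = np.array([0.0, 0.0, 1.0])      # incident direction
theta = np.linspace(0.0, np.pi, 50)
phi = np.linspace(0.0, 2.0 * np.pi, 100)
T, P = np.meshgrid(theta, phi)
s = np.stack([np.sin(T) * np.cos(P), np.sin(T) * np.sin(P), np.cos(T)], axis=-1)

K = k * (s - s0)                    # eq. [6.76], one point per observation direction
K_norm = np.linalg.norm(K, axis=-1)
print(K_norm.max() <= 2.0 * k + 1e-6, K_norm.max() / k)   # True, ~2
```

Varying s0 or k in the same loop fills the interior of the limiting Ewald sphere.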
This description may obviously be generalized to the 3D case, where the vector K describes a sphere with radius k through the origin O of the reciprocal space. This sphere is called the Ewald sphere, with reference to the work of the crystallographer Ewald. By playing with combinations of ŝ, ŝ0 and k, it is thus possible to sample the value of F̃(K) at all points of the reciprocal space contained in a sphere with radius:
\[ |\mathbf{K}| \leq 2k = \frac{4\pi}{\lambda} \]  [6.77]

which is called the limiting Ewald sphere (shown in Figure 6.9).
By an inverse 3D Fourier transform of F̃(K), we obtain a "filtered" value of the scattering potential F(r, ω) = (1/4π) k² Δn²(r, ω), which reflects the distribution of the deviation of the refractive index Δn²(r, ω) produced by the spatial composition of the material. The resolution with which this distribution of the refractive index may be estimated is thus λ/2.

6.4.1.4. Data acquisition

Practically, the data of the object may be acquired by measuring the field Ei(1)(r, ω) on a plane placed in front of the object (see Figure 6.11) and by rotating the object around one rotation axis if it is cylindrically symmetric, as in the case of fibers, in particular optical fibers [WED 95], or around two rotation axes otherwise. This approach has long been used for the measurement of the diffraction of crystals with X-rays. A more complex approach, practically never used in optical tomography, is to rotate the whole optical system (see Figure 6.11).
Figure 6.11. Acquisition of the diffracted field on the exit surface Σ and rotation of the diffracting object
A partial solution to the problem posed by diffraction tomography may be found without performing a rotation of the object by collecting the diffracted field on two planes as large as possible, surrounding the object on both sides (see Figure 6.12). The surface of the acquisition is nevertheless limited by the numerical aperture of the optical system collecting the light on the two planes.
Figure 6.12. Acquisition of the diffracted field on an entry surface Σ′ and an exit surface Σ
A scanning of the incident wave vector may then be performed, and the values of the fields may be acquired on the two planes for the reconstruction of F̃(K). It is worth mentioning that the complex field Ei(1)(r, ω) must be measured in amplitude and phase, and not only in intensity. This constitutes an additional difficulty, because this information requires a more complex data acquisition technique. Most often, an interferometric technique must be employed to obtain the phase, although a non-interferometric approach has also been proposed. An approach based on the reconstruction of the wavefront by digital holography is being developed in microscopy (see [CUC 97, CUC 99a, CUC 99b, CUC 00a, CUC 00b, CUC 00c], for example). This permits a direct calculation of F̃(K) from the hologram collected with a camera by backpropagating the complex field to the plane of the object. For this purpose, use is made of the Huygens–Fresnel diffraction expression, which allows a solution to both the problem of direct propagation and the problem of inverse propagation of the field. This propagation is mathematically described by the factor:
\[ \frac{\exp(\pm i k r)}{r} \]  [6.78]
in the expression for the total field Ei(1)(r, ω) (equation [6.75]). The difficulties experienced in the collection of this phase information, under angles hard to obtain in practice in microscopy, are the reason why diffraction tomography is only poorly developed, although 3D information on the dielectric
properties of biological and non-biological materials is of considerable interest for numerous applications.

6.4.1.5. Optical coherence tomography

The optical tomography technique that has certainly found the most applications is optical coherence tomography. It is a particular case of the diffraction tomography technique presented in the previous section. Only the backscattered field is collected in this case, i.e. the field scattered once with a wave vector:
\[ \mathbf{k} = k\hat{s} = -k\hat{s}_0, \quad \text{i.e. } \hat{s} = -\hat{s}_0, \qquad \text{and} \qquad \mathbf{K} = k(\hat{s} - \hat{s}_0) = -2k\hat{s}_0 \]  [6.79]

F̃(K) is only evaluated on a segment of length:

\[ \mathbf{K}_{max} - \mathbf{K}_{min} = -2(k_{max} - k_{min})\,\hat{s}_0 = -2\Delta k \, \hat{s}_0, \quad \text{with } \Delta k = \Delta\omega/(nc) \approx 2\pi\Delta\lambda/(n\lambda^2) \]  [6.80]
This particular situation is illustrated in Figure 6.13. Obviously, the curve described in the reciprocal space when varying the wavelength λ is a segment V. The information is well resolved in the direction ŝ0, but badly resolved in the perpendicular direction. Therefore, we have to resort to a scanning technique. An overview of research on optical coherence tomography can be found in [BEA 98, BRE 98, DRE 99, DRE 01, FER 96, FUJ 98, HUA 91, PUL 97, SWA 93].
Figure 6.13. Location scanned in optical coherence tomography: a segment V is covered by K when varying the wavelength λ
6.4.1.6. Role of the coherence length

The longitudinal resolution of the instrument is very roughly given by Δr = 2π/(2Δk) = nλ²/(2Δλ). We may imagine using a source with variable wavelength and calculating the image by Fourier sums. A far simpler and commonly used approach is to resort to a source with a large spectral bandwidth [CLI 92]. In the limit, a white source offers the best resolution. The instrument then resembles a white-light interferometer. More exactly, its resolution is given by the coherence length of the source:
\[ l_c = \frac{4 \ln 2}{\pi} \, \frac{\lambda^2}{\Delta\lambda} \]  [6.81]
The graph in Figure 6.14 shows the autocorrelation function of the optical signal provided by a source with weak coherence and illustrates the coherence length.
Figure 6.14. Autocorrelation function of a source with weak coherence (τ: delay, lc: coherence length, I: intensity)
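Equation [6.81] is easily evaluated for the typical sources discussed next; the following short Python check (added for illustration) reproduces the superluminescent-diode value quoted below and the femtosecond-laser order of magnitude:

```python
import numpy as np

def coherence_length(lam, dlam):
    # Coherence length of equation [6.81], inputs in meters
    return (4.0 * np.log(2.0) / np.pi) * lam**2 / dlam

print(coherence_length(830e-9, 25e-9) * 1e6)    # superluminescence diode: ~24 um
print(coherence_length(800e-9, 350e-9) * 1e6)   # femtosecond Ti:sapphire: ~1.6 um
```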
The sources with weak temporal coherence used for so-called "weak coherence" optical tomography are very often superluminescence diodes, whose bandwidth is about Δλ = 25 nm for λ = 830 nm, resulting in a coherence length of about lc = 25 μm. More recently, other sources of weak coherence have been employed. Sources at 1,310 nm have been preferred because the scattering coefficient is smaller and thus the penetration better. Pulsed lasers (titanium-sapphire lasers with mode-locked oscillators) with extremely short pulse durations (of the order of tens of femtoseconds) have been used as sources with very weak coherence. The attained coherence lengths are of the order of micrometers. The equivalent bandwidth of the pulsed lasers reaches almost 350 nm at a wavelength of about 800 nm. These
super-resolutions permit us to catch a glimpse of the first cellular structures, for instance in preparations of cells of the African tadpole [DRE 99], but they still remain too low to observe human epithelial cells under conditions allowing a reliable diagnosis.

6.4.1.7. The principle of coherent detection

The instrument is basically an interferometer (in this case of the Michelson type, see Figure 6.15a), whose reference arm has a variable length. When the length of the measurement arm is equal to the length of the reference arm, interferences appear, since the mutual intensity is non-zero in this case only. If we eliminate the continuous component of the signal, the residual signal gives the coherent component of the backscattered signal, which represents only a fraction of the total optical signal.

The instruments have become particularly powerful since a so-called "heterodyne" detection technique has been applied. To obtain the scanning in depth, the reference mirror is moved at a basically constant or slowly varying speed, and the intensity of the detected signal representing the "echo" is synchronously recorded in the image, very much as in "radar". This movement of the reference mirror creates a "Doppler" effect, i.e. a slight shift in frequency or wavelength of the reference optical beam. When a signal coming from the measurement beam appears, a beating results from the interference between the measurement and the reference signal. This beating is a signal containing an alternating component, which is simple to filter and to distinguish from the continuous signal. This detection technique permits a high sensitivity and a large dynamic range (that may reach more than 120 dB, i.e. close to 12 decades of optical power).

The high sensitivity also results from specific advantages of the "coherent" detection. In fact, the part of the intensity measured by the optical detector which results from the interference of the measurement with the reference signal is proportional to the product of the amplitudes of both signals. In this way, if the amplitude of the reference signal is high, far higher than that of the measurement signal, an amplification results, which permits the detector to provide a signal higher than the thermal noise, either associated or not with the dark current, or even than the residual electronic noise. Detection of a few photons is thus possible.

The production of instruments generally requires optical fiber technology that is simple and robust for practical use. The principle of such an instrument is schematically shown in Figure 6.15b. The semi-transparent mirror or the beam separator block is replaced by an optical coupler, and a phase modulator may optionally be mounted in the measurement or reference fiber.

6.4.1.8. Lateral scanning

The instrumental technique used for optical coherence tomography does not itself provide any information about the structure of the diffracting object in the lateral dimension, i.e. perpendicular to the axis of the beam. To overcome this
limitation, it is convenient to laterally scan the diffracting object. For this purpose, the beam must be focused on the object to be visualized such that the diameter of the beam is minimized. An example of a set-up with an oscillating mirror is given in Figure 6.16.
Figure 6.15. (a) Optical system for the coherent detection of the backscattered beam in optical coherence tomography (OCT). (b) OCT system produced with optical fiber
Figure 6.16. Lateral scanning in OCT
In the case of an instrument with monomode optical fibers, a confocal system is realized in this way. It combines the advantage of an improved lateral resolution, due to the double transfer function (the transfer function convolved with itself), with the elimination of the light scattered away from the focal spot. The confocal system has, however, the disadvantage of a very small depth of field, associated with the quest for a high lateral resolution. A compromise therefore has to be found: the resolution must remain above a fixed minimum threshold along the whole trajectory traversed in depth by the measurement beam.

6.4.1.9. Coherence tomography in the temporal domain

Coherence tomography in the temporal domain is the most widespread technique. The image acquisition is performed sequentially by successively scanning the image in depth (s variable) by sliding the reference mirror, and by moving the beam perpendicularly to acquire the image in breadth. Numerous improvements have been made to accelerate the scanning in depth and breadth [BAL 97]. We will not go into the details of the realization here. Acquisition speeds of 1,000–2,000 lines per second are possible, so that the number of displayed images per second may reach 20, provided that we are content with about 100 lines per image. Figure 6.17 shows a typical image of the fundus of the eye. We recognize in the region of the fovea the surface of the retina with the pigmented epithelium and details down to the choroid.
Figure 6.17. Example of an OCT image of the fundus of the eye showing the fovea with the profile of the retina, the pigmented epithelium, and the choroid (512 × 1024 pixels) (courtesy of Carl Zeiss Ophthalmic Systems, Humphrey division)
6.4.1.10. Coherence tomography in the spectral domain

If we return to equation [6.75], which gives the backscattered photonic field Ei(1)(r, ω) in amplitude and phase, if we take into account that ŝ = −ŝ0, and if we consider the dependence of the second term on 1/r to be negligible, we see that Ei(1)(r, ω), evaluated in amplitude and phase at different wavelengths corresponding to values of k in a sufficiently large interval, directly yields the value of F̃(−2kŝ0), which is the Fourier transform of the profile of the scattering potential F(sŝ0) along the beam axis. We eliminate in this way the scanning in depth. In practice, a spectrograph must be placed at the exit of the Michelson interferometer, as shown in Figure 6.18. A numerical calculation of the Fourier transform provides a line of the image. We may also hope to evaluate the optical absorption coefficient of the medium using this approach.
Figure 6.18. “Spectral” OCT
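The principle can be sketched numerically: the spectral interferogram of a single reflector oscillates in k, and a Fourier transform over k yields the depth profile of one image line. The following Python sketch uses purely illustrative parameters:

```python
import numpy as np

n = 2048
lam0, dlam = 830e-9, 25e-9
k = np.linspace(2*np.pi/(lam0 + dlam/2), 2*np.pi/(lam0 - dlam/2), n)
source = np.exp(-((k - k.mean()) / (0.25 * (k.max() - k.min()))) ** 2)  # spectral envelope

z0 = 100e-6    # reflector at 100 um optical path difference
r = 0.1        # reflectivity amplitude
spectrum = source * (1.0 + r**2 + 2.0 * r * np.cos(2.0 * k * z0))       # interferogram in k

a_scan = np.abs(np.fft.fft(spectrum - spectrum.mean()))
z = np.pi * np.fft.fftfreq(n, d=(k[1] - k[0]))   # depth axis conjugate to 2k
peak = 5 + np.argmax(a_scan[5 : n // 2])         # skip residual DC near z = 0
print(f"recovered reflector depth: {z[peak]*1e6:.1f} um")               # ~100 um
```

The FFT over k replaces the mechanical depth scan of the time-domain instrument, which is exactly the point of the spectral approach.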
6.5. Optical tomography in highly diffuse media
In the case where the medium is strongly scattering or where the path traversed by the light in the medium is longer than a few millimeters, the contribution of coherently propagating waves becomes negligible. The so-called coherent tomographic methods are then of no use. There are, however, multiple techniques available in this case. Most of them are based on either the use of the radiative or the
Boltzmann transport equation (see equation [6.42]), or the scattering equation (see equation [6.58]). For a more in-depth understanding of this field, we recommend reading [ALF 95, ALF 97, ALF 99, ARR 97a, ARR 97b, ARR 93, BEN 95, HEB 97, HEB 99, MUE 93, ONO 00, PEI 01].

6.5.1. Direct model, inverse model
We refer to Figure 6.19 for an illustration of the concepts developed here. The positions of the sources and the detectors are arbitrary in this case. They may also be placed in planes located on either side of the analyzed object.
Figure 6.19. Arrangement of sources and detectors in tomography in highly diffuse media. The region of propagation of the scattered light between the source S1 and the detector D3 is outlined: the "banana" or "spindle"
The basic approach consists of developing a direct model describing the incoherent propagation of the light in diffuse media. This model links the optical characteristics of the diffuse medium, which are the input to the model, that is to say a set of parameters denoted by the generic term p(rk), a vector indexed by the set of sampled points rk with k = 1, 2, …, l, where l is the number of points of the image (pixels), including:
– the distribution of the scattering and absorption coefficients μs and μa,
– the moments of the phase function g1, g2, …, gn,
– the mean refractive index of the medium n(r, ω),
to quantities describing certain characteristics of the photonic field at the boundary of the diffuse medium, at the detectors, or possibly at distributed locations in the diffuse medium, which constitute the output of the model, that is to say the following quantities, denoted by the generic term U(ρiS, ρjD) and indexed by the respective positions of the m sources ρiS, with i = 1, 2, …, m, and the n detectors ρjD, with j = 1, 2, …, n:
– the distribution of the radiance L(r, ŝ) (see section 6.3.4), or
– the fluence Φ(r) measured by the detectors, generally at the boundary of the medium.

A certain amount of data also has to be taken into account in the direct model, which defines it completely, including the characteristics of the light source(s), the irradiance, the boundary conditions, the delineation of the medium, and the external optical properties (assumed to be known). The direct model may be formalized by introducing the functional relation F[·]:
U(ρ_i^S, ρ_j^D) = F[p(r_k)]        [6.82]
The inverse model works in the opposite direction, i.e. it links, if possible unambiguously, the output to the input, thus recovering the optical characteristics of the medium from the photonic field measured at the boundary of the diffuse medium. This inverse model may symbolically be represented by:
p(r_k) = F⁻¹[U(ρ_i^S, ρ_j^D)]        [6.83]
These characteristics provide a tomographic image whose information content is essentially the tissue composition and organization, as well as its metabolic activity, notably the degree of oxygenation. The existence of an inverse model is, however, not guaranteed. In many respects, the problem is ill-posed, and the uniqueness of the solution is often not verified. Nevertheless, a priori information about the tomographic images and the definition of certain additional, limiting conditions contribute efficiently to removing the indetermination. We may speculate that optical tomography of highly diffuse media will in most cases be a hybrid imaging technique, combined with another imaging technique, such as magnetic resonance imaging (MRI), ultrasound imaging, single photon emission computed tomography (SPECT) or positron emission tomography (PET). In contrast to optical coherence tomography, the aim of optical tomography in highly diffuse media appears more and more clearly not to be high-resolution imaging, but rather to provide information on the function of organs and tissues, i.e. to be a functional imaging technique.
6.5.2. Direct model
The radiative or Boltzmann transport equation and the diffusion equation are the basis for the development of the direct model. These equations reflect the laws governing the transport of photons in diffuse media.

6.5.2.1. Fluence

The diffusion equation (equation [6.59]) is used in most research on the direct model in tomography. The numerical approach is simpler, because the equation is solved for the scalar photonic fluence Φ(r), i.e. the distribution of accumulated photon intensities circulating in all directions at a given point. The equation links Φ(r) to the optical constants of the medium and may be solved in different ways:
– Analytical solutions for the Green functions exist for many simple geometries: spheres, ellipsoids, cylinders, slabs, half-spaces, etc.
– In other cases, i.e. more complex geometries or heterogeneities of the diffuse medium, numerical techniques are employed. Among them, the Monte Carlo method provides a general approach. It involves simulating the random trajectories of photons in the diffuse medium by randomly drawing the occurrence of scattering events. The disadvantage of this method is its rather long computation time.
– The diffusion equation may be solved more rapidly by resorting to numerical methods such as the finite element or finite difference method.

6.5.2.2. Radiance

We should recall that the diffusion equation results from simplifying assumptions that essentially rely on the continuity of the fluence Φ(r). This equation does not permit a prediction of the real fluence near discontinuities, such as light sources, boundaries, and rapid variations of the fluence and of the photon flux. Moreover, it assumes the absorption coefficient to be far smaller than the scattering coefficient. In such situations, we have to fall back on the radiative transfer equation itself (equation [6.42]). The calculation is then more complex, since a function of two variables, the radiance L(r, ŝ), which depends on the position r and the propagation direction ŝ, has to be evaluated numerically. This can again be done either by a Monte Carlo method or by a numerical method, such as the finite element and finite difference methods. The calculation, although taking longer, provides a better description of the distribution of the light in all situations where the diffuse medium contains cavities, liquid inclusions, or inclusions of non-diffuse materials [HIE 98]. An example of the latter is the brain, where the cerebrospinal fluid is generally a clear liquid and a non-diffuse medium, which is embedded in a diffuse medium, the gray
and white matter of the brain. The calculation of the radiance enables us to take into account in more detail the interfaces between the cerebral gyri and the cerebrospinal fluid, where the radiance is strongly anisotropic. This refinement is important for the combination of optical tomography with other imaging techniques, such as MRI, which measures the morphology of tissue with millimeter precision. This complementary information permits a considerable refinement of the local estimation of optical parameters and notably a fine spectroscopy of cerebral tissue. Another domain where the estimation of the radiance is preferred to that of the fluence is at small scales (millimeters), where the distribution and propagation of the photonic flux are influenced by the phase function, characterized by the moments g1, g2, …. It has been shown that the second moment plays a significant role in this case in the estimation of the fluence and the radiance [BEV 98, BEV 99a, BEV 99b, DEP 99]. This notably concerns “optical biopsies” performed in vivo, where tissues are examined at the scale of a cubic millimeter, i.e. at a scale where the diffusion equation cannot be applied due to the strong anisotropy of the incident beam.
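To make the Monte Carlo approach of section 6.5.2.1 concrete, here is a minimal sketch for a homogeneous slab with isotropic scattering (real codes sample the Henyey–Greenstein phase function with anisotropy g and track full 3D directions; all values are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)

def mc_transmission(mu_a, mu_s, thickness, n_photons=50_000):
    """Random walk of photon packets through a slab (mu_a, mu_s in 1/mm,
    thickness in mm). Returns the fraction of launched weight transmitted."""
    mu_t = mu_a + mu_s
    albedo = mu_s / mu_t
    transmitted = 0.0
    for _ in range(n_photons):
        z, uz, w = 0.0, 1.0, 1.0              # depth, z-direction cosine, packet weight
        while w > 1e-4:                       # terminate very weak packets
            z += uz * (-np.log(rng.random()) / mu_t)  # exponentially distributed free path
            if z >= thickness:
                transmitted += w
                break
            if z < 0:                         # escaped through the entry face
                break
            w *= albedo                       # deposit (1 - albedo) of the weight as absorption
            uz = 2.0 * rng.random() - 1.0     # isotropic scattering: new direction cosine
    return transmitted / n_photons

# Illustrative tissue-like values in the near infrared:
print(mc_transmission(mu_a=0.01, mu_s=1.0, thickness=10.0))
```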
6.5.3. Inverse model

The development of the inverse model aims to calculate the optical coefficients characterizing the diffuse medium, including the distribution of the scattering coefficient μs and the absorption coefficient μa, and the first and sometimes the second moment (in the particular case of small distances) of the phase function, g1 and g2. In most cases, the aim is not to determine the absolute values of the optical coefficients, but their deviation from the values of “normal” tissue (in the background) or, concerning temporal evolution, their variations. The latter reflect cardiac pulsation and/or variations associated with metabolic changes in tissues, such as cerebral and muscular activity or vasomotricity phenomena in cerebral and muscular tissues. The study of deviations or variations is in general based on a perturbation method: the deviations are quantified and inserted into a vectorial equation. Multiple ways exist to approach, and in certain cases to solve, the inverse problem. Principally, iterative methods and direct inversion methods are distinguished.
6.5.4. Iterative methods
The idea is simple: a “test” distribution of the optical coefficients is defined, typically on the basis of a priori information, such as the known type of tissue or the expected geometries of organs and tissues. These a priori data may, for example, come from other imaging techniques or from prior measurements, the latter especially when temporal evolution is to be studied. The distribution of photonic fluences or radiances is then calculated with the help of the direct models described in the previous section, and the results are compared with the experimental data, i.e. the optical intensities acquired by the detectors. The difference between prediction and measurement is quantified with an error function, defined by weighting the experimental measurement vector. A new distribution of optical parameters is then derived from the calculated error by a so-called “steepest descent” method, i.e. a method that seeks the maximum decrease in the error for the chosen deviation of the distribution. If the test distribution is sufficiently close to the real distribution and if the conditions of the problem, including the geometry of the source–detector ensemble and of the diffuse medium, are favorable (a “well-posed” problem), convergence is in general quite rapid. It may happen that the stability of the solution is not guaranteed and that large fluctuations and deviations from the real values are observed. An advantage of iterative methods is that they rely on practically no restrictive hypotheses. In particular, they account in all generality for the non-linear relation between the optical coefficients and the distribution of light intensity at the surface of the diffuse medium. The employed optimization methods thus permit a generally applicable reconstruction.
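A minimal sketch of such an iterative loop (the forward-model interface and all numerical values are assumed for illustration; practical codes compute the gradient with adjoint techniques rather than finite differences):

```python
import numpy as np

def reconstruct_iteratively(forward, p0, u_meas, n_iter=50, step=1e-2, h=1e-6):
    """forward: direct model mapping parameters p to boundary intensities U;
    p0: a priori 'test' distribution; u_meas: measured intensities.
    Steepest descent on the squared prediction-measurement error."""
    p = p0.astype(float).copy()
    for _ in range(n_iter):
        r = forward(p) - u_meas                  # residual: prediction minus measurement
        err0 = np.sum(r ** 2)
        grad = np.zeros_like(p)
        for k in range(p.size):                  # finite-difference gradient (slow but simple)
            dp = np.zeros_like(p); dp[k] = h
            grad[k] = (np.sum((forward(p + dp) - u_meas) ** 2) - err0) / h
        p -= step * grad                         # steepest-descent update
    return p
```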
6.5.5. Perturbation method

Let us consider the vector of “deviations” or “variations” of the optical parameters, Δp(r_k). In a direct model, these deviations may be linked, after linearization, to the deviations of the measured optical quantities ΔU(ρ_i^S, ρ_j^D) according to:
ΔU(ρ_i^S, ρ_j^D) = J(ρ_i^S, ρ_j^D, r_k) Δp(r_k)        [6.84]
where the Jacobian matrix J(ρ_i^S, ρ_j^D, r_k) has been introduced. In certain cases, this matrix is invertible:
Δp(r_k) = J⁻¹(ρ_i^S, ρ_j^D, r_k) ΔU(ρ_i^S, ρ_j^D)        [6.85]
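In practice J is rectangular and ill-conditioned, so the plain inverse in [6.85] is usually replaced by a regularized least-squares solve; a minimal sketch (the damping weight alpha is a hypothetical choice):

```python
import numpy as np

def delta_p(J, delta_u, alpha=1e-3):
    """J: Jacobian of shape (number of source-detector pairs, l voxels);
    delta_u: measured deviations; alpha: Tikhonov damping weight.
    Returns the regularized estimate of the parameter deviations."""
    l = J.shape[1]
    A = J.T @ J + alpha * np.eye(l)          # damped normal equations
    return np.linalg.solve(A, J.T @ delta_u)
```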
Stability problems are, however, often encountered. Notably, the initial estimates p(r_k) play a role in the reliability and stability of the estimation of the variations Δp(r_k).

6.5.6. Reconstruction by backprojection

The idea of pragmatically extending the backprojection algorithm employed in X- and γ-ray tomography to optical tomography was proposed very early for solving the inverse problem. Instead of integrating along the projected ray, an integration over a spindle linking source and detector (see Figure 6.19) has to be performed. It represents the probability or the intensity (fluence) along the trajectory of the photons from the source to the detector. In the case of cylindrical or spherical geometry, this spindle takes the form of a banana, a term from now on used to describe the likely trajectory from a source to a detector. The Fourier slice theorem is then applicable in a first approximation, and it provides a stable but rough solution to the inverse problem.

6.5.7. Temporal domain

The idea of using very short light pulses (of the order of picoseconds or tens of picoseconds) and a very narrow temporal detection window (of the same order) was proposed quite early in the development of optical tomography. In the picture presented in the preceding section, the introduction of temporal discrimination reduces the size of the spindle or banana, moving it closer to the narrow ray of projection tomography. Figure 6.20 illustrates the temporal evolution of the optical signal collected by the detectors at the boundary of the diffuse medium. In this impulse description of the propagation of photons in diffuse media, several categories of photons are distinguished:
– The ballistic photons are those which traverse the tissue without undergoing any interaction, i.e. elastic or inelastic scattering. These are therefore the photons that propagate in a straight line and in a minimum transit time. They exit the diffuse medium grouped, i.e. as a narrow pulse (neglecting chromatic dispersion, which broadens the pulse a little).
– The snake photons are those which undergo elastic scattering with small scattering angles. These photons are only slightly diverted, their trajectories remain tight, and their transit times are the shortest except for those of the ballistic photons.
– The rest of the photons arrive in a large group, with increasing transit times and ever more complex paths, deviating more and more from the rectilinear trajectory. It is in particular this category of photons that is at the origin of the blurring affecting the image of objects contained in the diffuse medium.
Figure 6.20. “Theoretical” temporal evolution of the optical intensity collected by a detector when a very short optical pulse is applied, showing the distinct propagation of ballistic and snake photons
This description is in fact equally valid for photons in the X- and γ-ray domain. The difference is that, in the optical domain, the ballistic photons, which are also called primary radiation in radiology, are practically non-existent after a path of a few millimeters. They cannot therefore be used for incoherent tomography in diffuse media. The “useful” photons are in this case the snake photons, which, owing to the narrowness of the pulse and of the temporal window, restore a so-called “projective” imaging modality that provides the image quality known in radiology. For more information on this topic, the reader is referred to [DEH 93, DEH 96, MIT 94, SOL 97].
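A minimal sketch of the time-gating idea (arrival times and weights as produced, for instance, by a time-resolved Monte Carlo run; the gate width is an assumed value):

```python
import numpy as np

def time_gated_signal(arrival_ps, weights, gate_ps=50.0):
    """Keep only the photon weight arriving within gate_ps picoseconds of the
    earliest (quasi-ballistic) arrival, i.e. the ballistic and snake photons
    of Figure 6.20; the late, strongly scattered photons are rejected."""
    t0 = arrival_ps.min()                        # ~ ballistic transit time
    return weights[arrival_ps <= t0 + gate_ps].sum()
```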
6.5.8. Frequency domain

The instrumentation used for tomography in the temporal domain is expensive and difficult to use: pico- or femtosecond lasers and ultrafast streak cameras. The idea of working in the frequency domain instead of the temporal domain therefore appeared attractive. In principle, a duality exists between the two domains, which are obtained from one another by a Fourier transform along the temporal dimension. We may thus expect them to yield the same results. The instrumentation in the frequency domain appears simpler to employ and also less expensive. It is sufficient to work in the range of a few hundred megahertz and to control phase shifts of the order of a degree, which seems possible with the photoelectronic components common today: laser diodes modulated in the radiofrequency range, rapid photodetectors, PIN diodes, and avalanche diodes. All these components have been developed to meet the demands of the telecommunications market. A phase resolution of the order of a degree or less corresponds in the temporal domain to temporal shifts as small as a few hundred femtoseconds. A lot of research aimed at exploiting the frequency domain for imaging has therefore been carried out. A notably attractive perspective is to restore a tomographic imaging modality close to diffraction tomography (see section 6.4.1.3) by generating photon density waves. This also revives the idea of using “pseudo-coherent” waves composed of incoherent photons, grouped in packets, to “structure” the real and the reciprocal space. Photon density waves are easy to produce by amplitude modulation of the laser diodes available in large quantities and at low price in the near infrared. These studies have shown a real improvement in the resolution of reconstructed images, but a dramatic reduction in the useful signal intensity for tissue thicknesses of several centimeters, which is reflected in a considerable degradation of the signal-to-noise ratio in the images. After deconvolution, it appears that, for tissues thicker than a centimeter, the resolution of reconstructed images is in fact worse when we detect and process the modulated light than when we use a continuous source. Thus, the approach based on photon density waves only seems to be of interest for obtaining tomographic images of small objects, such as teeth or blood vessels. Some references covering this approach are [LI 97, LI 00, MAT 97, MAT 99a, MAT 99b, MON 00].
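As an order-of-magnitude check of this phase–time equivalence (the modulation frequency and phase resolution are assumed values):

```python
f_mod = 500e6                     # assumed modulation frequency: 500 MHz
dphi_deg = 0.1                    # assumed phase resolution: a tenth of a degree
dt = (dphi_deg / 360.0) / f_mod   # equivalent temporal resolution
print(f"{dt * 1e15:.0f} fs")      # ~560 fs, i.e. "a few hundred femtoseconds"
```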
6.6. Bibliography

[ALF 89] ALFANO R. R., PRADHAN A., TANG G. C., WAHL S. J., “Optical spectroscopic diagnosis of cancer and normal breast tissues”, J. Opt. Soc. Am. B – Opt. Phys., vol. 6, n° 5, pp. 1015–1023, 1989.
[ALF 95] ALFANO R., CHANCE B., Optical Tomography, Photon Migration, and Spectroscopy of Tissue and Model Media: Theory, Human Studies, and Instrumentation, SPIE, vol. 2389, 1995.
[ALF 97] ALFANO R., CHANCE B., Optical Tomography and Spectroscopy of Tissue: Theory, Instrumentation, Model, and Human Studies II, SPIE, vol. 2979, 1997.
[ALF 99] ALFANO R., CHANCE B., Optical Tomography and Spectroscopy of Tissue III, SPIE, vol. 3597, 1999.
[ARN 92] ARNFIELD M. R., MATHEW R. P., TULIP J., MCPHEE M. S., “Analysis of tissue optical coefficients using an approximate equation valid for comparable absorption and scattering”, Phys. Med. Biol., vol. 37, n° 6, pp. 1219–1230, 1992.
[ARR 97a] ARRIDGE S. R., HEBDEN J. C., “Optical imaging in medicine 2. Modeling and reconstruction”, Phys. Med. Biol., vol. 42, n° 5, pp. 841–853, 1997.
[ARR 97b] ARRIDGE S. R., SCHWEIGER M., “Image reconstruction in optical tomography”, Philos. Trans. R. Soc. Lond. Ser. B – Biol. Sci., vol. 352, n° 1354, pp. 717–726, 1997.
[ARR 98] ARRIDGE S. R., LIONHEART W. R. B., “Nonuniqueness in diffusion-based optical tomography”, Opt. Lett., vol. 23, n° 11, pp. 882–884, 1998.
[BAL 97] BALLIF J., GIANOTTI R., CHAVANNE P., WALTI R., SALATHE R. P., “Rapid and scalable scans at 21 m/s in optical low-coherence reflectometry”, Opt. Lett., vol. 22, n° 11, pp. 757–759, 1997.
[BEA 94] BEAUVOIT B., KITAI T., CHANCE B., “Contribution of the mitochondrial compartment to the optical properties of the rat liver – a theoretical and practical approach”, Biophys. J., vol. 67, n° 6, pp. 2501–2510, 1994.
[BEA 95] BEAUVOIT B., EVANS S. M., JENKINS T. W., MILLER E. E., CHANCE B., “Correlation between the light scattering and the mitochondrial content of normal tissues and transplantable rodent tumors”, Anal. Biochem., vol. 226, n° 1, pp. 167–174, 1995.
[BEA 98] BEAUREPAIRE E., BOCCARA A. C., LEBEC M., BLANCHOT L., SAINT-JALMES H., “Full-field optical coherence microscopy”, Opt. Lett., vol. 23, n° 4, pp. 244–246, 1998.
[BEN 95] BENARON D. A., KURTH C. D., STEVEN J. M., DELIVORIAPAPADOPOULOS M., CHANCE B., “Transcranial optical path length in infants by near-infrared phase-shift spectroscopy”, J. Clin. Monit., vol. 11, n° 2, pp. 109–117, 1995.
[BEV 98] BEVILACQUA F., Local Optical Characterization of Biological Tissues In Vitro and In Vivo, PhD thesis, Swiss Federal Institute of Technology Lausanne, 1998.
[BEV 99a] BEVILACQUA F., DEPEURSINGE C., “Monte Carlo study of diffuse reflectance at source–detector separations close to one transport mean free path”, J. Opt. Soc. Am. A – Opt. Image Sci. Vis., vol. 16, n° 12, pp. 2935–2945, 1999.
[BEV 99b] BEVILACQUA F., PIGUET D., MARQUET P., GROSS J. D., TROMBERG B. J., DEPEURSINGE C., “In-vivo local determination of tissue optical properties: applications to human brain”, Appl. Optics, vol. 38, n° 22, pp. 4939–4950, 1999.
[BOH 83] BOHREN C., HUFFMANN D., Absorption and Scattering of Light by Small Particles, John Wiley & Sons, 1983.
[BOL 87] BOLIN F. P., PREUSS L. E., TAYLOR R. C., SANDU T. S., “A study of the three-dimensional distribution of light (632.8 nm) in tissue”, IEEE J. Quantum Electron., vol. 23, n° 10, pp. 1734–1738, 1987.
[BOR 99] BORN M., WOLF E., Principles of Optics, Cambridge University Press, 1999.
[BRE 98] BREZINSKI M. E., TEARNEY G. J., BOUMA B., BOPPART S. A., PITRIS C., SOUTHERN J. F., FUJIMOTO J. G., “Optical biopsy with optical coherence tomography”, Ann. NY Acad. Sci., vol. 838, pp. 68–74, 1998.
[CAR 70] CARTER W. H., “Computational reconstruction of scattering objects from holograms”, J. Opt. Soc. Am., vol. 60, pp. 306–314, 1970.
[CAR 76] CARTWRIGHT N., “A non-negative Wigner-type distribution”, Physica, vol. 83A, pp. 210–212, 1976.
[CAS 67] CASE K. M., ZWEIFEL P. F., Linear Transport Theory, Addison-Wesley, 1967.
[CHA 60] CHANDRASEKHAR S., Radiative Transfer, Dover, 1960.
[CHA 98a] CHANCE B., “Near-infrared images using continuous, phase-modulated, and pulsed light with quantification of blood and blood oxygenation”, Ann. NY Acad. Sci., vol. 838, pp. 29–45, 1998.
[CHA 98b] CHANCE B., COPE M., GRATTON E., RAMANUJAM N., TROMBERG B., “Phase measurement of light absorption and scatter in human tissue”, Rev. Sci. Instrum., vol. 69, n° 10, pp. 3457–3481, 1998.
[CHA 99] CHANCE B., “New optical methods in cell physiology”, J. Gen. Physiol., vol. 114, n° 1, p. 19, 1999.
[CHE 90] CHEONG W. F., PRAHL S. A., WELCH A. J., “A review of the optical properties of biological tissues”, IEEE J. Quantum Electron., vol. 26, n° 12, pp. 2166–2185, 1990.
[CLI 92] CLIVAZ X., MARQUI-WEIBLE F., SALATHE R. P., NOVAK R. P., GILBEN H. H., “High resolution reflectometry in biological tissues”, Opt. Lett., vol. 17, pp. 4–6, 1992.
[CUC 97] CUCHE E., POSCIO P., DEPEURSINGE C., “Optical tomography by means of a numerical low-coherence holographic technique”, J. Opt. – Nouv. Rev. Opt., vol. 28, n° 6, pp. 260–264, 1997.
[CUC 99a] CUCHE E., BEVILACQUA F., DEPEURSINGE C., “Digital holography for quantitative phase-contrast imaging”, Opt. Lett., vol. 24, n° 5, pp. 291–293, 1999.
[CUC 99b] CUCHE E., MARQUET P., DEPEURSINGE C., “Simultaneous amplitude-contrast and quantitative phase-contrast microscopy by numerical reconstruction of Fresnel off-axis holograms”, Appl. Optics, vol. 38, n° 34, pp. 6994–7001, 1999.
[CUC 00a] CUCHE E., Numerical Reconstruction of Digital Holograms: Application to Phase Contrast Imaging and Microscopy, PhD thesis, Swiss Federal Institute of Technology Lausanne, 2000.
[CUC 00b] CUCHE E., MARQUET P., DEPEURSINGE C., “Aperture apodization using cubic spline interpolation: application in digital holographic microscopy”, Opt. Comm., vol. 182, n° 1–3, pp. 59–69, 2000.
[CUC 00c] CUCHE E., MARQUET P., DEPEURSINGE C., “Spatial filtering for zero-order and twin-image elimination in digital off-axis holography”, Appl. Opt., vol. 39, n° 23, pp. 4070–4075, 2000.
[DÄN 70] DÄNDLIKER D., WEISS K., “Reconstruction of the three-dimensional refractive index from scattered waves”, Opt. Comm., vol. 1, pp. 323–328, 1970.
[DEH 93] DE HALLER E., Time-resolved Breast Transillumination: Analytical, Numerical and Experimental Study, PhD thesis, Swiss Federal Institute of Technology Lausanne, 1993.
[DEH 96] DE HALLER E., “Time-resolved transillumination and optical tomography”, J. Biomed. Opt., vol. 1, n° 1, pp. 7–17, 1996.
[DRE 99] DREXLER W., MORGNER U., KARTNER F. X. et al., “In vivo ultrahigh-resolution optical coherence tomography”, Opt. Lett., vol. 24, n° 17, pp. 1221–1223, 1999.
[DRE 01] DREXLER W., MORGNER U., GHANTA R. K., KARTNER F. X., SCHUMAN J. S., FUJIMOTO J. G., “Ultrahigh resolution ophthalmic optical coherence tomography”, Nat. Med., vol. 7, n° 4, pp. 502–507, 2001.
[DUC 90] DUCK F., Physical Properties of Tissues, Academic Press, 1990.
[DUD 79] DUDERSTADT J., MARTIN W., Transport Theory, John Wiley & Sons, 1979.
[FER 96] FERCHER A. F., “Optical coherence tomography”, J. Biomed. Opt., vol. 1, n° 2, pp. 157–173, 1996.
[FLO 87] FLOCK S. T., WILSON B. C., PATTERSON M. S., “Total attenuation coefficients and scattering phase functions of tissues and phantom materials at 633 nm”, Med. Phys., vol. 14, n° 5, pp. 835–841, 1987.
[FUJ 98] FUJIMOTO J. G., “Optical coherence tomography: a new view toward biomedical imaging”, Photon. Spect., vol. 32, n° 1, pp. 114–115, 1998.
[GEL 96] GÉLÉBART B., TINET E., TUALLE J. M., AVRILLER S., OLLIVIER J., “Phase function simulation in tissue phantoms: a fractal approach”, Pure and Appl. Opt., vol. 5, pp. 377–388, 1996.
[HEB 96] HEBDEN J. C., ARRIDGE S. R., “Imaging through scattering media by the use of an analytical model of perturbation amplitudes in the time domain”, Appl. Opt., vol. 35, n° 34, pp. 6788–6796, 1996.
[HEB 99] HEBDEN J. C., SCHMIDT F. E., FRY M. E., SCHWEIGER M., HILLMAN E. M. C., DELPY D. T., ARRIDGE S. R., “Simultaneous reconstruction of absorption and scattering images by multichannel measurement of purely temporal data”, Opt. Lett., vol. 24, n° 8, pp. 534–536, 1999.
[HEB 01] HEBDEN J. C., VEENSTRA H., DEHGHANI H., HILLMAN E. M. C., SCHWEIGER M., ARRIDGE S. R., DELPY D. T., “Three-dimensional time-resolved optical tomography of a conical breast phantom”, Appl. Opt., vol. 40, n° 19, pp. 3278–3287, 2001.
[HEN 41] HENYEY L., GREENSTEIN J., “Diffuse radiation of the galaxy”, Astrophys. J., vol. 93, pp. 70–83, 1941.
[HIE 98] HIELSCHER A. H., ALCOUTTE R. E., BARBOUR R. L., “Comparison of finite-difference transport and diffusion calculations for photon migration in homogeneous and heterogeneous tissues”, Phys. Med. Biol., vol. 43, pp. 1285–1302, 1998.
[HUA 91] HUANG D., SWANSON E. A., LIN C. P. et al., “Optical coherence tomography”, Science, vol. 254, n° 5035, pp. 1178–1181, 1991.
[ISH 78] ISHIMARU A., Wave Propagation and Scattering in Random Media, Academic Press, 1978.
[JAC 95] JACKSON J., Classical Electrodynamics, John Wiley & Sons, 1995.
[JOH 96] JOHN S., PANG G., YANG Y., “Optical coherence propagation and imaging in a multiple scattering medium”, J. Biomed. Opt., vol. 1, n° 2, pp. 180–191, 1996.
[KAK 88] KAK A., SLANEY M., Principles of Computerized Tomographic Imaging, IEEE, 1988.
[LAK 83] LAKOWICZ J., Principles of Fluorescence Spectroscopy, Plenum, 1983.
[LI 97] LI X. D., DURDURAN T., YODH A. G., CHANCE B., PATTANAYAK D. N., “Diffraction tomography for biomedical imaging with diffuse photon density waves”, Opt. Lett., vol. 22, n° 15, pp. 573–575, 1997.
[LI 00] LI X. D., PATTANAYAK D. N., DURDURAN T., CULVER J. P., CHANCE B., YODH A. G., “Near-field diffraction tomography with diffuse photon density waves”, Phys. Rev. E, vol. 61, n° 4, pp. 4295–4309, 2000.
[LIU 93] LIU H., MIWA M., BEAUVOIT B., WANG N. G., CHANCE B., “Characterization of absorption and scattering properties of small-volume biological samples using time-resolved spectroscopy”, Anal. Biochem., vol. 213, n° 2, pp. 378–385, 1993.
[MAH 98] MAHADEVAN-JANSEN A., MITCHELL W. F., RAMANUJAM N., UTZINGER U., RICHARDS-KORTUM R., “Development of a fiber optic probe to measure NIR Raman spectra of cervical tissue in vivo”, Photochem. Photobiol., vol. 68, n° 3, pp. 427–431, 1998.
[MAN 95] MANDEL L., WOLF E., Optical Coherence and Quantum Optics, Cambridge University Press, 1995.
[MAR 89] MARCHESINI R., BERTONI A., ANDREOLA S., MELLONI E., SICHIROLLO A. E., “Extinction and absorption coefficients and scattering phase functions of human tissues in vitro”, Appl. Opt., vol. 28, n° 12, pp. 2318–2324, 1989.
[MAT 97] MATSON C. L., CLARK N., MCMACKIN L., FENDER J. S., “Three-dimensional tumor localization in thick tissue with the use of diffuse photon-density waves”, Appl. Opt., vol. 36, n° 1, pp. 214–220, 1997.
[MAT 99a] MATSON C. L., LIU H. L., “Analysis of the forward problem with diffuse photon density waves in turbid media by use of a diffraction tomography model”, J. Opt. Soc. Am. A – Opt. Image Sci. Vis., vol. 16, n° 3, pp. 455–466, 1999.
[MAT 99b] MATSON C. L., LIU H. L., “Backpropagation in turbid media”, J. Opt. Soc. Am. A – Opt. Image Sci. Vis., vol. 16, n° 6, pp. 1254–1265, 1999.
[MIE 08] MIE G., “Beiträge zur Optik trüber Medien, speziell kolloidaler Metallösungen”, Annalen der Physik, vol. 25, pp. 377–445, 1908.
[MIT 94] MITIC G., KOLZER J., OTTO J., PLIES E., SOLKNER G., ZINTH W., “Time-gated transillumination of biological tissues and tissue-like phantoms”, Appl. Opt., vol. 33, n° 28, pp. 6699–6710, 1994.
[MON 00] MONTANDON-VARODA L., 3D Optical Diffraction Tomography in Thick Highly Scattering Media using Diffuse Photon Density Waves, PhD thesis, Swiss Federal Institute of Technology Lausanne, 2000.
[MUE 93] MUELLER G., CHANCE B., ALFANO R. et al. (Eds.), Medical Optical Tomography: Functional Imaging and Monitoring, SPIE, 1993.
[ONO 00] ONO M., KASHIO Y., SCHWEIGER M., DEHGHANI H., ARRIDGE S. R., FIRBANK M., OKADA E., “Topographic distribution of photon measurement density functions on the brain surface by hybrid radiosity-diffusion method”, Opt. Rev., vol. 7, n° 5, pp. 426–431, 2000.
[PEI 01] PEI Y. L., GRABER H. L., BARBOUR R. L., “Influence of systematic errors in reference states on image quality and on stability of derived information for DC optical imaging”, Appl. Opt., vol. 40, n° 31, pp. 5755–5769, 2001.
[PER 98] PERELMAN L. T., BACKMAN V., WALLACE M. et al., “Observation of periodic fine structure in reflectance from biological tissue: a new technique for measuring nuclear size distribution”, Phys. Rev. Lett., vol. 80, n° 3, pp. 627–630, 1998.
[POR 89] PORTER R. P., “Generalized holography with application to inverse scattering and inverse source problems”, Progress Opt., vol. 27, pp. 317–397, 1989.
[PRA 88] PRAHL S., Light Transport in Tissues, PhD thesis, University of Texas Austin, 1988.
[PUL 97] PULIAFITO C. A., “Optical coherence tomography (OCT): an example of successful retinal technology transfer”, Invest. Ophthalmol. Vis. Sci., vol. 38, n° 4, p. 2218, 1997.
[SCH 96] SCHMITT J., KUMAR G., “Turbulent nature of refractive index variations in biological tissues”, Opt. Lett., vol. 21, pp. 1310–1312, 1996.
[SOL 97] SOLKNER G., MITIC G., “Spatial resolution enhancement through time gated measurements”, Adv. Exp. Med. Biol., vol. 413, pp. 75–83, 1997.
[SWA 93] SWANSON E. A., IZATT J. A., HEE M. R. et al., “In-vivo retinal imaging by optical coherence tomography”, Opt. Lett., vol. 18, n° 21, pp. 1864–1866, 1993.
[THU 01] THUELER P., Optical Spectral Probing for Epithelial Tissue Characterisation, PhD thesis, Swiss Federal Institute of Technology Lausanne, 2001.
[VAN 92] VAN DER ZEE P., Measurement and Modelling of Optical Properties of Human Tissue in the Near Infrared, PhD thesis, University College London, 1992.
[VAN 99] VAN ROSSUM M. C. W., NIEUWENHUIZEN T. M., “Multiple scattering of classical waves: microscopy, mesoscopy, and diffusion”, Rev. Mod. Phys., vol. 71, n° 1, pp. 313–371, 1999.
[WAG 98] WAGNIERES G. A., STAR W. M., WILSON B. C., “In-vivo fluorescence spectroscopy and imaging for oncological applications”, Photochem. Photobiol., vol. 68, n° 5, pp. 603–632, 1998.
[WED 95] WEDBERG T., WEDBERG W., “Tomographic reconstruction of the cross-sectional refractive index distribution in semi-transparent, birefringent fibres”, J. Microscopy, vol. 177, n° 1, pp. 53–67, 1995.
[WEL 95] WELCH A., VAN GEMERT M., Optical-thermal Response of Laser-irradiated Tissue, Plenum, 1995.
[WIL 86] WILSON B. C., MULLER P. J., YANCH J. C., “Instrumentation and light dosimetry for intraoperative photodynamic therapy (PDT) of malignant brain tumors”, Phys. Med. Biol., vol. 31, n° 2, pp. 125–133, 1986.
[WIL 87] WILSON B. C., PATTERSON M. S., FLOCK S. T., “Indirect versus direct techniques for the measurement of the optical properties of tissues”, Photochem. Photobiol., vol. 46, n° 5, pp. 601–608, 1987.
[WOL 69] WOLF E., “Three-dimensional structure determination of semi-transparent objects from holographic data”, Opt. Comm., vol. 1, pp. 153–156, 1969.
Chapter 7
Synchrotron Tomography
Chapter written by Anne-Marie CHARVET and Françoise PEYRIN.

7.1. Introduction

Synchrotron radiation is a very bright source of X-rays with a very broad energy spectrum. It is produced by a relativistic electron beam deflected by strong magnetic fields. The photon beam may be made monochromatic, usually with crystals, while preserving a considerable photon intensity. Synchrotron radiation thus provides an intense, monochromatic X-ray beam with very small divergence and source size. It is well suited to serve as a source in quantitative tomography and microtomography, as well as in more exotic applications, such as fluorescence, diffusion and phase contrast tomography. In section 7.2 we describe the principle and advantages of synchrotron radiation. We discuss quantitative tomography in section 7.3 and microtomography in section 7.4. Finally, we briefly describe in section 7.5 the experimental efforts put into other applications: phase contrast, holographic, refraction, diffraction, diffusion, and fluorescence tomography. We restrict ourselves to biomedical applications. However, synchrotron radiation is particularly well suited for tomography on non-biological objects, where the constraints on dose and acquisition time are relaxed (see Chapter 8).

7.2. Synchrotron radiation

7.2.1. Physical principles

A relativistic electron beam deflected by a magnetic field emits bremsstrahlung, i.e. a deceleration radiation, called synchrotron radiation. This phenomenon was observed
for the first time in 1947 with a small synchrotron at the General Electric research laboratory (Schenectady, NY, USA). This radiation is very intense, directional, and generally covers a broad range of energies. It is therefore of interest as a source of ultraviolet (UV) light and X-rays. Its characteristics depend on both the magnetic field and the electron beam. When the magnetic field is uniform and static, the electrons describe a circular trajectory in a plane perpendicular to the magnetic field. The photons are emitted in a cone tangential to the trajectory of the electrons. For the user, the UV and X-rays appear as a continuous sheet in the horizontal plane with a very small vertical opening angle. The electromagnetic radiation emitted in such a magnetic field, called dipolar, has a very broad, continuous energy spectrum, which extends from radiofrequencies to UV and X-rays. This spectrum is characterized by an energy, called the critical energy, defined as the value below which half of the power is emitted. This critical energy εc (measured in keV) depends on the energy of the electrons E (in GeV) and on the strength of the magnetic field B0 (in T):
εc = 0.665 E² B0        [7.1]
At the ESRF (European Synchrotron Radiation Facility, Grenoble, France), the critical energy is 19.2 keV for the bending magnets. The vertical opening angle of the X-ray beam depends on the energy of the photons: the photons with the highest energy are emitted in the cones with the smallest opening. At the ESRF, this angle is 170 μrad at the critical energy, which corresponds to an X-ray beam 1.7 mm high at 10 m from the source. For a dipole magnet, the horizontal opening is defined with the help of slits. Owing to the very anisotropic emission of synchrotron radiation in the vertical direction, the appropriate quality factor is the, often vertically integrated, spectral intensity. It is measured in photons per second per milliradian (horizontal) and per 0.1% Δλ/λ; the last quantity means that the energy band used for the calculation at each wavelength λ is Δλ = 0.1% λ. The use of periodic magnetic structures allows the characteristics of synchrotron radiation to be changed, in particular its energy spectrum and its angular emission, through interference of the radiation emitted by successive magnetic periods. In the simplest case, these structures consist of magnets of alternating polarity that impose a sinusoidal trajectory on the electrons. They are often referred to as insertion devices and are characterized by a parameter K, called the deflection parameter:
K = 0.934 B λu        [7.2]
where λu is the period of the magnetic field in cm, and B is its intensity in T. We speak of an undulator if K is of the order of 1 and of a wiggler if K >> 1. In the horizontal plane, wigglers and undulators deflect the electrons by a maximum angle of K/γ, where γ is the normalized energy of the electrons, γ = E[keV]/511 keV. For wigglers (K >> 1), the angular deflection of the electrons is larger than the opening of the synchrotron radiation cone. The spectral intensity of a wiggler equals the spectral intensity of a dipole magnet multiplied by the number of poles of the wiggler. Consequently, it spreads over a very large, continuous range of energies. Figure 7.1 shows the spectrum of the wiggler employed on the medical beamline ID17 of the ESRF and, for comparison, the spectrum of an X-ray tube. The spectral intensity is plotted in photons/s/mm²/0.1% Δλ/λ along the ordinate and the energy of the photons in keV along the abscissa.
Figure 7.1. Comparison of the spectral intensity of the synchrotron beam produced by a wiggler and of the beam produced by an X-ray tube
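A quick numerical check of equations [7.1] and [7.2] (the bending field, wiggler period and peak field below are assumed, ESRF-like values):

```python
E_GeV = 6.0                          # ESRF electron energy
B0 = 0.8                             # bending-magnet field, T (assumed)
eps_c = 0.665 * E_GeV**2 * B0        # [7.1]: ~19.2 keV, the value quoted above

lam_u, B_w = 12.5, 1.6               # assumed wiggler period (cm) and peak field (T)
K = 0.934 * B_w * lam_u              # [7.2]: ~19 >> 1, hence a wiggler
gamma = 6.0e6 / 511.0                # normalized electron energy E[keV]/511
print(eps_c, K, K / gamma)           # K/gamma ~ 1.6 mrad horizontal deflection
```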
The vertical opening angle of the X-ray beam emitted by a wiggler is similar to that of a dipole magnet, while the horizontal opening angle depends on the energy of the photons, with a typical angle of K/γ at the critical energy. In the case of an undulator, the radiation emitted by the different periods interferes coherently, and the spectral intensity is proportional to N², with N being the number of periods of the undulator. The energy spectrum is a line spectrum, characterized by a so-called fundamental wavelength. It is softer, i.e. its mean energy is smaller, than
that of a wiggler for an electron beam of given energy. The horizontal opening angle of an undulator is of the order of 1/γ; the emitted beam is thus a very focused and very bright pencil beam. Moreover, synchrotron radiation is an extended, partially coherent source, which enables the study of interference phenomena, for instance by phase contrast imaging. Synchrotron radiation is produced in machines comprising three main elements, as illustrated in Figure 7.2. The electrons are first accelerated by a linear accelerator or linac (A) of low energy. Then they are brought to their nominal energy in a circular accelerator, the synchrotron (B). Finally, they are injected into a storage ring (C), where the generation of the UV or X-rays for the users takes place. In the storage ring, the electron beam is maintained at a constant energy. It follows a closed trajectory composed of circular and straight sections. Dipole magnets ensure the guidance and multipole magnets a stable focusing of the beam. Synchrotron radiation is produced partly in the arc sections, by the bending magnets, and partly in the insertion devices, which are installed in the straight sections of the trajectory. The energy that the electrons lose in the form of photons is restored by radiofrequency cavities. The electrons move in a vacuum chamber in the storage ring, in which a very high vacuum is sustained to avoid collisions with residual gas molecules. The electron beam can thus be stored for many hours.
Figure 7.2. Schematic illustration of the principle of a synchrotron: (A) linear accelerator, (B) synchrotron, (C) storage ring
The machines that produce synchrotron radiation are generally characterized by the energy of the electrons, which determines the hardness of the emitted photons; by the emittance of the electron beam, which determines the brightness of the produced photon beam; and by the type of magnetic structure that mainly produces the synchrotron radiation for the users: bending magnets or insertion devices (undulators and wigglers). Among the machines built recently are the ESRF (Europe), the APS (USA), and SPring-8 (Japan), which are optimized for the production of very bright X-ray beams at about 10 keV, and the Sincrotrone Trieste (Italy), the SLS (Switzerland), and the ALS (USA), which are optimized for about 1 keV.
7.2.2. Advantages of synchrotron radiation for tomography

The advantages of synchrotron radiation for tomography were first underlined by Grodzins [GRO 83]. They primarily stem from the fact that synchrotron sources offer a considerable photon intensity over a very broad spectrum of energies. A white synchrotron beam may be used to generate a quasi-monochromatic X-ray beam by employing a perfect crystal. This type of monochromator is based on Bragg’s law, which states that the wavelength λ selected from a polychromatic beam reflected by a crystal is given by the equation nλ = 2d sin θ, where θ is the angle of incidence of the beam on the crystal and d is the distance between two atomic layers in the crystal. The wavelength, or the energy of the beam, may thus be adjusted by changing the orientation of the crystal and be optimized for each type of sample. The use of a monochromatic beam in tomography avoids the phenomenon of beam hardening, which introduces non-linearities into the measurements [BRO 78]. Conditions are therefore ideal for employing a model that relies on the Radon transform (see Chapter 2). Moreover, the reconstructed image is a quantitative map of the linear attenuation coefficient at the selected energy. In addition, this energy may be optimized as a function of the composition and size of the sample [GRA 91]. In practice, harmonic wavelengths of low, decreasing intensity may be present in the beam after reflection by the crystal and may limit the quantification. They are minimized by an appropriate choice of the crystal plane for the reflection. The spectral intensity of the photon beam delivered by a synchrotron beamline (photons/s/mm² at 0.1% pass band) is about 10⁶ times higher than that of a classical X-ray tube (see Figure 7.1), which enables extraction of a monochromatic beam while conserving considerable intensity. The brightness of the beam plays a crucial role when images with very high spatial resolution are desired; we then speak of microtomography. In fact, for a fixed signal-to-noise ratio, the total number of incident photons needed to image a cross-section is inversely proportional to the third power of the size of the pixel, multiplied by the thickness of the cross-section [FLA 87]. Thus, when the size of the pixel is divided by a factor k, the number of photons must be multiplied by a factor k³. A higher number of photons may be obtained by increasing the flux or the acquisition time. The considerable increase in the latter is often a limiting factor in tomographic imaging with standard X-ray tubes when spatial resolutions of less than 10 μm are to be attained. Spatial resolutions of the order of 1 μm have already been obtained with synchrotrons within limited acquisition times. However, tomographic imaging with synchrotron radiation also has some limitations. The primary limitation is the availability of synchrotrons. In medical applications, tomography with very high spatial resolution is not applicable in vivo, since the dose absorbed by the patient becomes unacceptable. Moreover, the X-ray
source is fixed, so the object to be imaged has to be rotated, which complicates use in vivo. Finally, synchrotron sources generally have a relatively small vertical angular opening. It is possible to obtain a parallel beam of square cross-section, adapted to typical human dimensions, by optical means, but these cause a significant loss of intensity.
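As a small worked example of the Bragg selection rule above (the Si(111) d-spacing is a tabulated value; everything else follows from nλ = 2d sin θ):

```python
import numpy as np

d_si111 = 3.1356                       # Si(111) lattice spacing, Angstrom (tabulated)

def bragg_angle_deg(E_keV):
    """First-order incidence angle selecting energy E from the white beam."""
    lam = 12.398 / E_keV               # E[keV] x lambda[Angstrom] = 12.398
    return np.degrees(np.arcsin(lam / (2 * d_si111)))

print(bragg_angle_deg(33.17))          # ~3.4 deg tunes the beam to the iodine K-edge
# At this same angle, the n = 3 reflection passes 3 x 33.17 ~ 99.5 keV: this is
# the third-harmonic contamination discussed in section 7.3.2.
```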
7.3. Quantitative tomography

7.3.1. State of the art

Tomography with monochromatic radiation enables a quantitative map of the linear attenuation coefficient μ at one energy to be obtained. The coefficient μ depends on two physical parameters: the density ρ and the mean atomic number of the material in the voxel of the reconstructed image. Two energy subtraction techniques make it possible to approach the determination of the “nature” of the material in transmission tomography: subtraction around an absorption edge and the so-called dual-energy technique. Subtraction imaging around an absorption edge exploits the discontinuity of the linear attenuation coefficient at absorption edges due to atomic transitions to the first free energy levels. Two images are acquired at energies located on both sides of the edge and sufficiently close to each other that the attenuation varies only very little from other causes. The subtraction image thus provides a map of the density of a specific element. Figure 7.3 illustrates the variation of the mass attenuation coefficient μ/ρ as a function of energy for two elements employed as contrast agents, iodine (a) and gadolinium (b), and, for comparison, for tissue and bone. The absorption edges, or K-edges, are located at 33.17 keV for iodine and at 50.24 keV for gadolinium. For in vivo imaging, only the K-edges of heavy elements (roughly Z > 50, transitions from level 1s) correspond to photon energies that allow the dose of ionizing radiation to be limited to acceptable values. Since the elements naturally present in the human body are too light, appropriate contrast agents, such as iodine, barium, and gadolinium, must usually be administered to make subtraction images around an edge. The K-edge technique was proposed for radiography by Jacobson [JAC 53] and was evaluated for polychromatic tomography by Riederer and Mistretta [RIE 77]. Dual-energy imaging exploits the variation of the mass attenuation coefficient as a function of energy. An appropriate linear combination of two images acquired at different energies enables maps of the photoelectric and the Compton attenuation coefficients to be obtained [ALV 76]. The diagnostic relevance of these two maps is clearly stated by Alvarez and Macovski: “The photoelectric coefficient depends strongly on the atomic number and thus gives an indication of the composition of
the object. The Compton scattering coefficient depends on the electron density, which is for most materials proportional to the density.” Alternatively, this technique may be used to decompose an image according to two reference materials, such as lucite (to simulate soft tissue) and a mixture of copper and aluminum (to simulate bone), as in bone tomodensitometry to separate bone from soft tissue. It is also commonly used in industrial applications (see Chapter 8).
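A minimal sketch of such a two-material decomposition (the basis-material coefficients below are placeholders; in practice they are tabulated mass attenuation values at the two working energies):

```python
import numpy as np

mu_basis = np.array([[0.50, 0.25],     # assumed (mu/rho) of materials A, B at E1
                     [0.30, 0.20]])    # and at E2, in cm^2/g

def decompose(mu_E1, mu_E2):
    """mu_E1, mu_E2: reconstructed attenuation maps at the two energies.
    Solves the per-voxel 2x2 system to obtain the two basis-material
    density maps (g/cm^3)."""
    rhs = np.stack([mu_E1.ravel(), mu_E2.ravel()])   # 2 x Nvoxels
    dens = np.linalg.solve(mu_basis, rhs)
    return dens[0].reshape(mu_E1.shape), dens[1].reshape(mu_E1.shape)
```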
Figure 7.3. Mass attenuation coefficients as a function of energy: (a) iodine, (b) gadolinium, (c) bone, (d) tissues
Synchrotron radiation provides quasi-monochromatic radiation with a photon flux that is sufficient for medical subtraction imaging in vivo. As early as 1984, Thompson et al. [THO 84] imaged a pig’s heart containing iodine inserts ex vivo with subtraction tomography around the K-edge of iodine. For these experiments they used the synchrotron radiation coronary angiography system of the Stanford Synchrotron Radiation Laboratory (SSRL). In 1988, Dilmanian proposed a multi-energy monochromatic scanner, called MECT, designed for in vivo imaging in humans. This system was developed at beamline X17B of the National Synchrotron Light Source (NSLS) at Brookhaven National Laboratory [DIL 92]. The planned applications were cerebrovascular tomography using contrast agents (iodine and gadolinium), pharmacological studies with these contrast agents, and the application of dual-energy imaging to follow the evolution of atheromatous plaque in the carotid arteries. To date, the performance of this scanner has been characterized with phantoms (test objects), and small animals have been imaged. At the European Synchrotron Radiation Facility (ESRF) in Grenoble, a beamline is devoted to medical applications, which includes a monochromatic tomograph for in vivo medical imaging [ELL 99]. Its design was inspired by the scanner that Dilmanian had proposed. The tomograph enables cerebral imaging in humans, and
studies on tumors, ischemia, and cerebral contrast agents. Apart from the characterization of its performance on phantoms, cerebral tumors have been imaged in rats in vivo, using subtraction tomography around the K-edges of iodine and gadolinium, and kinetic studies have been carried out [LED 00]. In the framework of a program on bronchography, tomographic images were obtained in rabbits at the K-edge of xenon after xenon inhalation. At the KEK laboratory (Japan), a tomograph was developed on the accumulation ring Tristan for subtraction imaging around K-edges at energies between 30 keV and 52 keV. The planned applications include functional medical imaging with contrast agents (iodine, barium, gadolinium), pharmacological imaging, and studies of bone mineral distribution. Phantoms and ischemic rats have already been imaged with this system.
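The K-edge subtraction itself reduces to a per-voxel difference; a minimal sketch (the tabulated coefficients are inputs, e.g. from standard attenuation tables):

```python
def iodine_map(mu_below, mu_above, mu_rho_below, mu_rho_above):
    """mu_below, mu_above: attenuation images (1/cm) reconstructed just below
    and just above the iodine K-edge (33.17 keV); mu_rho_below, mu_rho_above:
    tabulated iodine mass attenuation coefficients (cm^2/g) at those energies.
    Tissue and bone cancel to first order; the result is an iodine density
    map in g/cm^3."""
    return (mu_above - mu_below) / (mu_rho_above - mu_rho_below)
```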
7.3.2. ESRF system and applications

The monochromatic tomograph developed at the medical beamline of the ESRF permits the acquisition of images in an axial and a spiral mode (see Chapter 5). The X-ray beams are stationary, and the subject turns in the beam with a simple rotational movement or a helical movement along a vertical rotation–translation axis. Special features of the system are the monochromators, which produce one or more monochromatic X-ray beams, and the patient positioning system, which meets the requirement of very small static and, in particular, dynamic mechanical tolerances. The acquisition parameters of the tomographic images and the reconstruction techniques are standard. Two configurations of X-ray beams are possible. In the first, two energies are acquired simultaneously at slightly different angles of incidence (a fraction of a milliradian), while in the second, the two energies are acquired sequentially. In the first configuration, called the angiography monochromator, two X-ray beams are produced simultaneously by a single, curved Laue crystal. The beams, which each cover an energy spectrum of 150 eV and which are separated by 300 eV, are placed on both sides of the chosen K-edge, which may vary between 17 keV and 51 keV. The two beams cross at the level of the object to be imaged. They measure 15 cm in length and 800 μm in height at the point of convergence. The intensity is of the order of 6 × 10¹⁰ photons/s/mm², and the third-harmonic contamination is 3.6%. In the second configuration, called the tomography monochromator, a single monochromatic X-ray beam with an energy between 18 keV and 90 keV and an energy resolution of 0.1% of the chosen energy is produced. The monochromatic beam currently has a usable size of 12 cm, which must ultimately be extended to 25 cm, and an adjustable height between half a millimeter and several millimeters. Its position is independent of the energy (a so-called fixed-exit monochromator). The available flux is of the order of 1.9 × 10⁹ photons/s/mm² at 25 keV and 4.0 × 10⁹ photons/s/mm² at 33 keV. The harmonic content is less than 4% under usual imaging conditions. In practice, only the
third harmonic has a sufficiently high intensity after detection to contribute to the reconstructed image. This monochromator can only be used to image almost immobile structures, such as the brain. The detector is a crystal of high-purity germanium, which is cooled by liquid nitrogen and has an active thickness of 2 mm. Two detector lines with 432 elements each are located on this crystal. The distance between the pixels is 350 μm, and the height of the pixels is 10 mm. The two lines are separated by 500 μm. This solid-state detector has a quantum efficiency of almost 100% at 33 keV and of 45% at 99 keV. The data acquisition electronics has a dynamic range of 16 bits at each of 8 gain settings. The total readout time of the two lines is adjustable between one and some tens of milliseconds and may be varied with the rotation angle of the patient positioning system. This system has seven degrees of freedom for choosing the angular orientation of the zone to be imaged and allows either a simple rotation (with a maximum speed of one rotation in 2 s) for an axial acquisition or a helical movement with a total span of 10 cm. The image acquisition parameters are adjustable. Their nominal values correspond to skin doses of the order of 5 cGy (one gray equals one joule per kilogram (J/kg) of absorbed energy) per image pair, which is comparable to those of medical scanners. The position of the rotation axis with respect to the detector is determined with an accuracy of one tenth of a pixel before each imaging session. The preprocessing of the acquired data is relatively simple. The corrections concern the decrease in the incident X-ray flux, which is due to the exponential decay of the electron current in the storage ring; the Gaussian intensity profile of the radiation produced by a wiggler; the fluctuations and the spatial variation of the transmission of the monochromator; and the response of the detector (dark current and gain variations from pixel to pixel). In practice, it is sufficient to measure, apart from the sinogram, the intensity profile without an object and the dark current of each pixel of the detector. The value of the dark current is then subtracted from all measurements, pixel by pixel. For each line of the sinogram and for the intensity profile without an object, the signal of each pixel is normalized by the signal of a so-called reference pixel (corresponding to a given channel of the detector that never sees the object). This step accounts for the temporal decrease in the incident flux and for all fluctuations affecting the whole beam transmitted by the monochromator. Finally, each value of the sinogram is divided by the intensity without an object at the corresponding channel of the detector. This last step takes into account both the spatial profile of the incident X-ray beam and the gain variations of the channels of the detector. Since the X-ray beam is parallel and monochromatic, a simple filtered backprojection is sufficient to reconstruct the normalized sinograms. The values of μ/ρ for contrast agents and reference materials may be measured on small samples using the same beam as for
the image acquisition, to improve the precision of the quantification of concentrations. The subtraction or linear combination of acquisitions should in principle be carried out before the reconstruction, but the order seems to have little effect. It should be mentioned that reconstruction artifacts are of course more visible in the difference images than in the original images. The tomograph of the medical beamline was designed for clinical studies on ischemia, cerebral tumors, and contrast agents, primarily employed in cerebral imaging for the quantification of cerebral blood volume and flow. Preclinical studies on rats are being performed as feasibility studies and in the framework of research projects on tumor angiogenesis. Figure 7.4 shows two examples of tumors in a rat in difference images obtained in vivo at the K-edge of iodine. The rat is positioned vertically. The height of the cross-section is 0.8 mm, and 1,440 projections were acquired over 360°. The difference images were acquired with a photon flux that results in a skin dose of 5 cGy. The quantity of injected contrast agent (Hexabrix®, Guerbet, France) was 0.4 ml/100 g, which is about four times the clinical dose. In the two images, the tumor is the clearly visible stain in the brain. The bright contour in the two cross-sections is due to peripheral vascularization. The white and black bands in the bone are a subtraction artifact that is probably due to the sequential acquisition of the two energies.
Figure 7.4. Tomographic images of cerebral tumors in a rat by subtraction around the K-edge of iodine
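The preprocessing chain described above amounts to a few array operations; a minimal sketch (array names assumed):

```python
import numpy as np

def normalize_sinogram(sino_raw, flat, dark, ref_channel):
    """sino_raw: raw sinogram (n_angles x n_channels); flat: intensity profile
    without object; dark: per-channel dark current; ref_channel: index of a
    detector channel that never sees the object."""
    sino = sino_raw - dark                   # remove dark current, pixel by pixel
    flat = flat - dark
    sino = sino / sino[:, [ref_channel]]     # per-line reference normalization:
    flat = flat / flat[ref_channel]          # accounts for ring-current decay
    sino = sino / flat                       # flat-field: beam profile + channel gains
    return -np.log(sino)                     # line integrals for filtered backprojection
```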
7.4. Microtomography using synchrotron radiation

7.4.1. State of the art

Different types of acquisition systems have been proposed for microtomography with synchrotron radiation. The image quality depends on the quality of both the beam and the detector. The first generation of systems relied on the principle of translation–rotation scanners and used a thin X-ray beam and a simple detector [SPA 87]. This technique enabled a spatial resolution of the order of a micrometer to be obtained, as the size of the pixel is determined by the size of the beam, but the acquisition time was
long and practically prohibitive for 3D imaging. The use of linear detectors, which avoid the translation, has also been proposed [ENG 89]. 3D images may then be obtained by stacking 2D cross-sections. The fastest technique for acquiring a 3D image collects the data for all cross-sections simultaneously, which is made possible by the use of a 2D detector. The whole volume may then be reconstructed from its 2D projections under different angles [BON 92, KIN 89]. In the case of a synchrotron beam, the divergence of the beam is very small and leads to no magnification. We are thus faced with a reconstruction problem from parallel projections, and not from cone-beam projections as in most laboratory 3D X-ray scanners. Consequently, the spatial resolution of the image is directly determined by the spatial resolution of the detector, and the tomographic reconstruction introduces no approximation. When the detection system relies on conversion of the X-rays into light, optical magnification may be used, although it may be limited by the spatial resolution of the scintillator screen. Another possibility for achieving magnification with X-rays is to use an asymmetric crystal [NAG 92].
7.4.2. ESRF system and applications

A 3D microtomography system using synchrotron radiation was built at beamline ID 19 of the ESRF (the high resolution diffraction and topography beamline) [SAL 99]. This beamline is located in a building outside the ring, at a distance of 145 m from the synchrotron source. Owing to the small divergence of the source, a beam of macroscopic dimensions (about 40 mm × 15 mm) is thus obtained. The beamline includes an optical hutch, which delivers an almost parallel synchrotron beam with an energy of 8 keV to 120 keV. It is equipped with a dual-crystal, fixed-exit Si(111) monochromator, which enables extraction from the white beam of a monoenergetic X-ray beam with an energy spread of about 0.01%. The acquisition system was developed on the principle of a 3D parallel geometry, which is schematically illustrated in Figure 7.5. It uses the parallel X-ray beam extracted from the synchrotron radiation and a two-dimensional detector. The sample is mounted on a table, which may perform different movements (translations, rotations) with high precision. The acquisition sequence involves a rotation around an axis perpendicular to the beam and parallel to the detector plane. For each rotation angle, the X-ray beam transmitted by the sample is recorded on the 2D detector. The detector is composed of a scintillator (Gd2O2S:Tb), which converts the X-rays into visible light; an optical system, which allows the magnification of the image to be chosen; and a 2D CCD camera, which performs the digitization. Typically, the CCD camera employed is the FRELON (Fast REad Low Out Noise) camera, which was developed by the detector group at the ESRF. The camera is cooled by the Peltier effect, has
1,024 × 1,024 elements (size 19 μm) with a dynamic range of 14 bits, and is characterized by a short readout time (220 ms). The optics consists of a system of two lenses, which enables the size of the pixel and the field of view to be adapted to the application. For example, by using lenses with focal lengths of 112 mm and 210 mm, respectively, an enlargement by a factor of 1.875 is obtained, which corresponds to a pixel size of 10.13 μm in the 2D projection images. Currently, different pixel sizes between 10.13 μm and 0.4 μm are available. The field of view thus varies approximately between 10 mm and 0.4 mm.
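The enlargement follows directly from the ratio of the two focal lengths. A short illustrative calculation (Python; the focal lengths and CCD element size are the values quoted above, and which lens plays which role in the optics is assumed here):

    # Optical enlargement and effective pixel size of the detector
    f_short, f_long = 112.0, 210.0        # lens focal lengths in mm (from the text)
    ccd_element = 19.0                    # CCD element size in micrometers

    magnification = f_long / f_short               # 210/112 = 1.875
    pixel_size = ccd_element / magnification       # ~10.13 micrometers
    field_of_view_mm = 1024 * pixel_size / 1000.0  # ~10.4 mm across 1,024 elements

    print(magnification, pixel_size, field_of_view_mm)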
Figure 7.5. Principle of the acquisition system at the ESRF (beamline ID 19)
Before the acquisition, an alignment procedure is carried out to ensure that the propagation direction, the rotation axis, and the detector lines are mutually orthogonal. The acquisition records a large number of 2D projections under different angles over a total angle of 180°, dark current images (without a beam), and other reference images (without a sample). These images serve to correct the inhomogeneity of the beam, the individual sensitivity of the pixels of the detector, and the decrease in flux during the acquisition. The latter is related to the acquisition time of each image, which is often less than 1 s for a pixel size of 10 μm, resulting in a total acquisition time of about 15 minutes. For pixel sizes on the order of micrometers, the use of a multilayer monochromator enables similar acquisition times to be obtained. The data typically acquired for a sample (900 projections of size 1,024 × 1,024 with 16 bits resolution) amount to about 2 Gbytes and allow reconstruction of a 3D image of size 1,024³. Taking into account the parallel acquisition geometry, the problem is, after rearranging the data in a sinogram, equivalent to a sequence of 2D reconstructions (see Chapter 2). To avoid duplication of data, a program that includes the preprocessing and direct reconstruction from the acquired radiographs has been developed. By exploiting the linearity of the problem, a parallelization of the algorithm was carried out to accelerate the calculation. It essentially relies on data parallelism, which can be achieved either at the level of the processing of each projection, or at the level of the reconstruction of different subvolumes. The use of a network of eight
Linux PCs currently enables reconstruction of a whole volume in less than two hours. Reconstruction times on the order of one hour have recently been obtained by tailoring the architecture of the machine and the structure of the program to the problem. The development of this system was motivated by the investigation of bone microarchitecture in the context of osteoporosis studies. This bone disease leads to spontaneous fractures and affects more than 30% of all women after the menopause. The mechanical properties of bone depend both on its mineral density and its microarchitecture, which describes the internal part consisting of a complex network of fine bone rods. Conventionally, it is examined by histomorphometry, i.e. by analysis of histological cross-sections from bone biopsies. Microtomography by synchrotron radiation allows study of the trabecular network non-destructively and three-dimensionally with very high spatial resolution [PEY 98]. The microtomography system at the ESRF has already enabled acquisition of different types of bone samples (adult and fetal vertebrae, femora of mice, femur heads, heel bones, biopsies of iliac crests, etc.). The obtained images show excellent contrast and little noise, which facilitates subsequent quantification. In addition, in contrast to many 3D medical imaging systems, the voxel is cubic, and the images do not need to be interpolated before examination or quantification. As an illustration, Figure 7.6 shows the 3D trabecular surface of a vertebra, taken from a 72-year-old woman (voxel size of 6.65 μm, sample size of 3.5 mm). A precise rendering of the bone surfaces is obtained, and the enlargement on the right demonstrates that even a perforation in the thickest rod may be distinguished.
Figure 7.6. 3D volume rendering of the bone rods of a human vertebra. (a) 3D image, voxel size of 6.65 μm; (b) enlargement of a detail of the 3D image
7.5. Extensions

7.5.1. Phase contrast and holographic tomography

Phase contrast tomography has been proposed by various authors, in particular for the analysis of cancerous tissue [BEC 97, MOM 95]. In fact, when an X-ray beam traverses an object, not only an attenuation but also a dephasing of the electromagnetic wave occurs. These phenomena are modeled by a complex refractive index, whose imaginary part is linked to the attenuation coefficient and whose real part is linked to the dephasing. The real part is written 1 − δ, because it is close to 1 for X-rays. Phase modulation provides in most cases vastly superior contrast compared to attenuation. Therefore, its exploitation is of particular interest for the study of light materials with low absorption. In general, phase contrast tomography relies on the use of interferometers, which allow the introduction of dephasing between the incident and the transmitted X-ray beam [BON 65]. The resulting interference patterns are combined before reconstructing the dephasing due to the examined object. The dephasing being proportional to the projection of δ, it is thus possible to reconstruct the spatial distribution of δ by repeating the interferometric measurements at different angles of incidence and by using a classical reconstruction algorithm. The spatial coherence of the synchrotron source at the ESRF permits another, simpler implementation of phase imaging by allowing the beam to propagate after traversing the object [CLO 97]. Experimentally, it is sufficient to record the transmitted images by placing the detector at a distance from the sample. When the distance between sample and detector varies, different regimes may be observed, ranging from a simple attenuation effect when this distance is zero to complex interference effects when this distance increases. The image formation may be modeled using Fresnel’s theory of diffraction. A restoration method has been developed to estimate the phase from a combination of images at different distances [CLO 99]. In the same way as before, the 2D phase images obtained for different angles of incidence may be processed with a tomographic reconstruction algorithm to calculate the 3D distribution of the real part of the refractive index. This is called holographic tomography, or holotomography. Under certain conditions, when the distance between sample and detector is small, we may be satisfied with a single image and use, as a first approximation, standard reconstruction algorithms. The reconstructed image then shows a contribution linked to the attenuation and a contribution linked to the Laplacian of δ [PEY 97]. This image enables interfaces to be revealed in materials with very similar or low attenuation.
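Under the near-field, weak-contrast conditions described above, the normalized intensity recorded at a distance D combines the attenuation projection and the Laplacian of the projection of δ. The following sketch of this forward model is illustrative only (it is not the restoration method of [CLO 99]); the array names and the first-order approximation are assumptions:

    import numpy as np

    def near_field_image(mu_proj, delta_proj, distance, pixel):
        """First-order near-field model: I/I0 ~ exp(-mu_proj) * (1 + D * lap(delta_proj)).

        mu_proj    -- projection (line integral) of the attenuation coefficient
        delta_proj -- projection of the refractive index decrement delta
        distance   -- sample-detector distance D, same length unit as pixel
        pixel      -- detector pixel size
        """
        # discrete Laplacian of the projection of delta (5-point stencil)
        lap = (np.roll(delta_proj, 1, axis=0) + np.roll(delta_proj, -1, axis=0)
               + np.roll(delta_proj, 1, axis=1) + np.roll(delta_proj, -1, axis=1)
               - 4.0 * delta_proj) / pixel ** 2
        return np.exp(-mu_proj) * (1.0 + distance * lap)

At zero distance only the attenuation term remains, which is the simple radiographic regime mentioned above.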
7.5.2. Tomography by refraction, diffraction, diffusion and fluorescence

On a microscopic scale, the interaction between an X-ray photon and an atomic electron leads to a change in the direction and/or the energy of the photon according to the photoelectric effect (photon absorption, energy transition of the electron, accompanied by the emission of a fluorescence photon), elastic Thomson scattering (change in the direction of the photon without change in the energy), or Compton scattering (change in the energy and the direction of the X-ray photon). From a macroscopic point of view, the interaction between an X-ray beam and a sample is characterized by a combination of these effects. This leads to phenomena such as the attenuation of the X-ray beam, its refraction, its diffraction, fluorescence emission, Rayleigh scattering, and small-angle scattering. All these phenomena give information on the physical and chemical nature of the sample and may be exploited by tomographic techniques to analyze inhomogeneous samples [CES 89, HAR 87]. Synchrotron radiation is the X-ray source of choice for these applications, since the differential cross-sections of these diverse processes are smaller than that of the attenuation. These techniques are, however, experimentally far more demanding than tomography by attenuation. The secondary radiations have a large angular spectrum and a large energy spectrum, which require the use of sophisticated multicollimators behind the sample and energy-resolved 2D detectors (see Chapter 8). This explains why, after initial experiments that demonstrated the principle [BOI 87, DIL 00, DUV 98], time-consuming instrumentation work is now carried out on all synchrotrons that produce X-rays. Beamlines especially equipped for microfluorescence tomography are about to be put into operation [TAK 99, WEI 99]. The reconstruction techniques are borrowed from emission tomography, since each pixel behaves like an emitter of photons that depends on the incident flux and the cross-section of the interaction. The considered applications in the biological and biomedical domains include the study of small organisms (insects, etc.), biological samples, and biopsies. The applicability of these techniques in vivo is still to be evaluated, since the limitation on the dose is a severe constraint [KLE 98, YUA 97].
7.6. Conclusion

Transmission X-ray tomography is a technique routinely used in hospitals. Employing synchrotron radiation as the X-ray source permits working with monochromatic beams and in parallel geometry with a small source size. These characteristics have numerous advantages for quantitative and dual-energy tomographic imaging, as well as microtomography. Moreover, other, new modalities of tomographic imaging are foreseen, such as phase contrast, holographic, diffraction, diffusion, and fluorescence tomography. Different types of tomographic imaging systems are already operational at certain synchrotrons. In vivo K-edge tomographic acquisitions have been performed on small animals. Microtomographic
imaging is very attractive for a precise analysis of bone architecture, but it also has many other applications in medicine, biology, and materials science.
7.7. Bibliography

[ALV 76] ALVAREZ R. E., MACOVSKI A., “Energy spectral techniques in computerized tomography”, Phys. Med. Biol., vol. 21, pp. 733–744, 1976.
[BEC 97] BECKMANN F., BONSE U., BUSCH F., GÜNNEWIG O., “X-ray microtomography (μCT) using phase contrast for the investigation of organic matter”, J. Comput. Assist. Tomogr., vol. 21, pp. 539–553, 1997.
[BOI 87] BOISSEAU P., GRODZINS L., “Fluorescence tomography using synchrotron radiation at the NSLS”, Hyperf. Interact., vol. 33, pp. 283–292, 1987.
[BON 65] BONSE U., HART M., “An X-ray interferometer”, Applied Physics Letters, vol. 6, n° 8, pp. 155–158, 1965.
[BON 92] BONSE U., NUSSHARDT R., BUSCH F., KINNEY J. H., SAROYAN R. A., NICHOLS M. C., “X-ray tomographic microscopy”, in Michette A., Morrison G., Buckley C. (Eds.), X-ray Microscopy III, Springer, 1992.
[BRO 78] BROOKS R. A., DI CHIRO G., “Beam hardening in X-ray reconstructive tomography”, Phys. Med. Biol., vol. 21, pp. 689–732, 1978.
[CES 89] CESAREO R., MASCARENHAS S., “A new tomographic device based on the detection of fluorescent X-rays”, Nucl. Instr. Meth., vol. A277, pp. 669–672, 1989.
[CLO 97] CLOETENS P., PATEYRON-SALOMÉ M., BUFFIÈRE J. Y., PEIX G., BARUCHEL J., PEYRIN F., SCHLENKER M., “Observation of microstructure and damage in materials by phase sensitive radiography and tomography”, J. Applied Physics, vol. 81, pp. 5878–5886, 1997.
[CLO 99] CLOETENS P., Contribution to Phase-contrast Imaging, Reconstruction and Tomography with Hard Synchrotron Radiation, Ph.D. thesis, Free University of Brussels, 1999.
[DIL 92] DILMANIAN F. A., “Computed tomography with monochromatic X-rays”, Am. J. Physiol. Im., vol. 3, pp. 175–193, 1992.
[DIL 00] DILMANIAN F. A., ZHONG Z., REN B., WU X. Y., CHAPMAN L. D., ORION I., THOMLINSON W. C., “Computed tomography of X-ray index of refraction using the diffraction enhanced imaging method”, Phys. Med. Biol., vol. 45, pp. 933–946, 2000.
[DUV 98] DUVAUCHELLE P., Tomographie par Diffusion Rayleigh et Compton avec un Rayonnement Synchrotron: Application à la Pathologie Cérébrale, Ph.D. thesis, University Joseph Fourier, Grenoble, 1998.
[ELL 99] ELLEAUME H., CHARVET A. M., BERKVENS P. et al., “Instrumentation at the ESRF medical imaging facility”, Nucl. Instrum. Method., vol. A 428, pp. 513–527, 1999.
[ENG 89] ENGELKE K., LOHMANN M., DIX W. R., GRAEFF W., “A system for dual energy microtomography of bones”, Nucl. Instrum. Methods Phys. Res., vol. A274, pp. 380–389, 1989.
[FLA 87] FLANNERY B. P., ROBERGE W. G., “Observational strategies for three-dimensional synchrotron microtomography”, J. Applied Physics, vol. 62, n° 12, pp. 4668–4674, 1987.
[GRA 91] GRAEFF W., ENGELKE K., “Microradiography and microtomography”, in Ebashi S., Koch M., Rubenstein E. (Eds.), Handbook on Synchrotron Radiation, vol. 4, pp. 361–405, North Holland, 1991.
[GRO 83] GRODZINS L., “Optimum energy for X-ray transmission tomography of small samples”, Nucl. Instrum. Methods, vol. 206, pp. 541–545, 1983.
[HAR 87] HARDING G., KOSANETZKY J., “Status and outlook of coherent X-ray scatter imaging”, J. Opt. Soc. Am. A., vol. 4, n° 5, pp. 933–944, 1987.
[JAC 53] JACOBSON B., “Dichromatic absorption radiography: dichromography”, Acta Radiol., vol. 39, p. 437, 1953.
[KIN 89] KINNEY J. H., JOHNSON Q. C., NICHOLS M. C., BONSE U., SAROYAN R. A., NUSSHARDT R. A., PAHL R., “X-ray microtomography on beamline X at SSRL”, Rev. Sci. Instrum., vol. 60, n° 7, pp. 2471–2474, 1989.
[KLE 98] KLEUKER U., SUORTTI P., WEYRICH W., SPANNE P., “Feasibility study of X-ray diffraction computed tomography for medical imaging”, Phys. Med. Biol., vol. 43, pp. 2911–2923, 1998.
[LED 00] LE DUC G., CORDE S., ELLEAUME H. et al., “Feasibility of synchrotron radiation computed tomography on rats bearing glioma after iodine or gadolinium injection”, European Radiol., vol. 10, n° 9, pp. 1487–1492, 2000.
[MOM 95] MOMOSE A., TAKEDA T., ITAI Y., “Phase-contrast X-ray computed tomography for observing biological specimens and organic materials”, Rev. Sci. Instrum., vol. 66, n° 2, pp. 1434–1436, 1995.
[NAG 92] NAGATA Y., YAMAJI Y., HAYASHI K., KAWASHIMA K., “High energy high resolution monochromatic X-ray computed tomography using the Photon Factory vertical wiggler beam”, Rev. Sci. Instrum., vol. 63, n° 1, pp. 615–618, 1992.
[PEY 97] PEYRIN F., CLOETENS P., SALOMÉ-PATEYRON M., BARUCHEL J., SPANNE P., “Reconstruction 3D en tomographie par rayonnement synchrotron cohérent”, Proc. GRETSI, Grenoble, vol. 1, pp. 423–426, 1997.
[PEY 98] PEYRIN F., SALOMÉ M., CLOETENS P., LAVAL-JEANTET A. M., RITMAN E., RÜEGSEGGER P., “Micro-CT examinations of trabecular bone samples at different resolutions: 14, 7 and 2 micron level”, Technology and Health Care, vol. 6, n° 5-6, pp. 391–401, 1998.
[RIE 77] RIEDERER S. J., MISTRETTA C. A., “Selective iodine imaging using K-edge energies in computerized X-ray tomography”, Med. Phys., vol. 4, pp. 474–481, 1977.
[SAL 99] SALOME-PATEYRON M., PEYRIN F., LAVAL-JEANTET A. M., CLOETENS P., BARUCHEL J., SPANNE P., “A synchrotron radiation microtomography system for the analysis of trabecular bone samples”, Med. Phys., vol. 26, n° 10, pp. 2194–2204, 1999.
[SPA 87] SPANNE P., RIVERS M. L., “Computerized microtomography using synchrotron radiation from the NSLS”, Nucl. Instrum. Methods Phys. Res., vol. B24/25, pp. 1063–1067, 1987.
[TAK 99] TAKEDA T., YU Q., YASHIRO T., YUASA T., HASEGAWA Y., ITAI Y., AKATSUKA T., “Human thyroid specimen imaging by fluorescent X-ray computed tomography with synchrotron radiation”, SPIE, vol. 3772, pp. 258–267, 1999.
[THO 84] THOMPSON A. C., LLACER J., CAMPBELL FINMAN L., HUGHES E. B., OTIS J. N., WILSON S., ZEMAN H. D., “Computed tomography using synchrotron radiation”, Nucl. Instrum. Method, vol. 222, pp. 319–323, 1984.
[WEI 99] WEITKAMP T., RAVEN C., SNIGIREV A., “An imaging and microtomography facility at the ESRF beamline ID 22”, SPIE, vol. 3772, pp. 311–317, 1999.
[YUA 97] YUASA T., AKIBA M., TAKEDA T. et al., “Incoherent-scatter computed tomography with monochromatic synchrotron X-ray: feasibility of multi-CT imaging system for simultaneous measurement of fluorescent and incoherent-scatter X-rays”, IEEE Trans. Nucl. Sci., vol. 44, pp. 1760–1769, 1997.
Part 3 Industrial Tomography
Chapter 8
X-ray Tomography in Industrial Non-destructive Testing
8.1. Introduction

The first images obtained with tomography in the framework of process monitoring (process tomography) [BAR 57] enabled mapping of the density of particles in suspension in a fluidized bed. Industrial applications diversified after Hounsfield developed the first medical scanner in 1972 [THI 98]. Today, tomography allows the detection of defects, dimensional control, characterization of materials (e.g. measurements of density and distribution of impurities), and approval of new manufacturing processes. With the evolution of computing and CCD cameras, three-dimensional (3D) tomography has become of primary importance for modeling the behavior of materials, for instance the damage within advanced composites or the fluid flow within rocks. The size of objects may vary from a millimeter to several meters and the size of voxels from less than a micrometer to several centimeters. It should be mentioned that all types of materials may be tested (including metals, composites, and ceramics). The term tomography is often used to designate transmission tomography, which is employed most and is based on the measurement of the attenuation of photons through an object. Yet the same term covers other types of imaging, notably emission, fluorescence, and scattering tomography. The last of these, which is based on the analysis of photons that are scattered by an object (by the Rayleigh and/or Compton effect), is discussed at the end of this chapter (see section 8.6.9).
Chapter written by Gilles PEIX, Philippe DUVAUCHELLE and Jean-Michel LETANG.
8.2. Physics of the measurement

Transmission tomography involves collecting, with a detector, the photons transmitted through the tested object at different angles. In medical applications, the X-ray generator and detector rotate around the patient, whereas in industrial applications both are usually static, and the projections are obtained by rotation of the inspected part. During a given integration time, each of the detector elements receives a number of transmitted photons Np, which follows the Beer–Lambert law. In the case of monochromatic radiation, we obtain:

$$N_p = N_{p0} \exp\left( - \int_{\text{trajectory}} \mu_T(x) \, dx \right) \qquad [8.1]$$

where μT(x) represents the linear attenuation coefficient of the material at position x along the trajectory of the incident photons, and Np0 represents their number. In this way, the experimental measurement of the quantities Np0 and Np enables determination of the integral of the quantity μT(x) dx along a line through the object:

$$\ln\left( N_{p0} / N_p \right) = \int_{\text{trajectory}} \mu_T(x) \, dx \qquad [8.2]$$
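In practice, equation [8.2] is evaluated pixel by pixel from the object and reference images. A minimal sketch (Python; the array names are illustrative, and the counts are assumed to be already corrected for dark current):

    import numpy as np

    def attenuation_projection(n_p, n_p0, eps=1e-12):
        """Line integral of mu_T from measured counts, equation [8.2]."""
        # eps guards against zero counts in strongly attenuating regions
        return np.log((n_p0 + eps) / (n_p + eps))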
The quantity Np0 corresponds to a measurement of the direct flux in the absence of the object (reference image). The detector should not be saturated at any point on its surface. Using equation [8.2], the reconstruction algorithm provides the spatial distribution of the coefficient μT within the considered cross-section (see Chapter 2). The linear attenuation coefficient is a function of the density ρv of the material, of its atomic number Z, and of the energy Ep of the radiation. The mass attenuation coefficient μT/ρv, which is independent of the density and the physical state of the material, is also very frequently used. Figure 8.1 shows the variation of μT/ρv as a function of the energy of the radiation for three materials: carbon, aluminum, and iron. Tomography is often employed with the simple aim of obtaining a qualitative image. In Figure 8.1, we note that the photoelectric domain (under 0.2 MeV) is favorable for this type of imaging, since μT/ρv varies strongly with Ep and Z. The contrast is thus higher, due both to higher values of μT/ρv and to substantial variations of this coefficient with Z at constant energy. Tomography also enables estimation of the density of the material. Therefore, we speak of tomodensitometry. Since the linear attenuation coefficient is a function of both ρv and Z, measurement of the density is only possible after calibration and under certain conditions by:
– using monochromatic radiation and materials with the same effective atomic number;
Figure 8.1. Variation of the mass attenuation coefficient as a function of the energy of the radiation, for carbon (Z = 6), aluminum (Z = 13), and iron (Z = 26). The graphs shown were obtained with the software XCOM [BER 87]
– using photons with energies in a domain where the Compton effect is predominant (approximately between 0.2 and 3 MeV, where the mass attenuation coefficient μT/ρv is almost independent of the atomic number Z);
– making several measurements at different energies (see section 8.6.7).

8.3. Sources of radiation

The sources used in industrial applications are of two types: X-ray generators (X-ray tubes or linear accelerators) and radioactive sources (called gamma sources), such as 137Cs and 60Co sources. The generators have the advantage of emitting a substantial flux of photons, but the radiation is strongly polychromatic. By contrast, the gamma sources are not very intense, but the emission spectrum is usually monochromatic. No matter which source is used, the flux of photons must exhibit a good temporal stability throughout the acquisition. This is the reason why the X-ray tubes used in tomographic systems must have a highly stabilized power supply. Otherwise, a complementary detector, called a monitor, may be placed outside the used beam, whose measurements serve to correct the potential variations in the flux of the tube. Figure 8.2 shows the spectrum of a gamma source (left), measured with a high purity germanium detector (HPGe), and the spectrum of an X-ray generator (right), obtained with a semi-empirical calculation method [BIR 79]. When the source of radiation is polychromatic, each of the energies in the spectrum is differently attenuated within the inspected material. The relative decrease in
the low energy components is particularly important. It leads to an increase in the mean energy of the flux. This phenomenon is called beam hardening (Figure 8.3). It entails the appearance of artifacts in the reconstructed image (see section 8.5). This effect may be reduced by filtering the beam with a thin metal plate (usually made of copper). The low energy components are thus eliminated before the beam enters the object [BRO 76]. If this filtering is insufficient, different numerical correction methods may be considered [MOR 89].
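The hardening of a polychromatic beam is easy to reproduce numerically: each energy bin of the spectrum is attenuated by its own factor exp(-μT(E)·t), and the mean energy of the transmitted spectrum rises with the traversed thickness t. A minimal sketch with an invented two-bin spectrum (all numerical values are illustrative, not those of Figure 8.3):

    import numpy as np

    energies = np.array([40.0, 80.0])   # keV (assumed bins)
    spectrum = np.array([0.7, 0.3])     # relative photon numbers (assumed)
    mu = np.array([0.15, 0.02])         # 1/mm; the low energy bin is attenuated more

    for thickness in (0.0, 20.0):       # mm of traversed material
        transmitted = spectrum * np.exp(-mu * thickness)
        mean_energy = (energies * transmitted).sum() / transmitted.sum()
        print(f"{thickness:4.0f} mm: mean energy {mean_energy:5.1f} keV")

With these values the mean energy rises from 52 keV to about 74 keV after 20 mm, the same qualitative behavior as in Figure 8.3.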
Figure 8.2. Comparison of spectra delivered by a 137Cs source (left) and an X-ray generator operating at 140 kV (right). The weak, continuous base appearing at energies of less than 0.5 MeV in the cesium spectrum corresponds to Compton scattering in the detector, therefore it is not representative of the source’s spectrum
The range of available radiation energies is very broad. X-ray tubes provide photons of low or medium energy (several tens to hundreds of keV), which permit imaging of the lightest and thinnest materials (composite materials, small metallic samples). By contrast, linear accelerators provide a flux of high energy (several MeV), intended to test materials with very high attenuation (high atomic numbers or large thicknesses).

8.4. Detection

The most important characteristics of a scanner (also called a tomograph) are the density resolution, the spatial resolution, and the acquisition and reconstruction time. Very diverse systems have been built, which prioritize one or the other of these characteristics. When the object is not very attenuating (at maximum 40 mm of aluminum or 5 mm of steel), the use of a medical scanner has the advantage of a very high acquisition and reconstruction speed (on the order of seconds), which, for example, enables certain dynamic phenomena to be followed. Taking the great diversity of detectors into account, a large number of configurations are possible. Nevertheless, three principal acquisition configurations may be distinguished.
Figure 8.3. Illustration of the phenomenon of beam hardening: the emission spectrum of an X-ray generator operating at 140 kV (solid line) and the transmission spectrum after 20 mm of aluminum (dashed line) (results obtained by simulation). The mean energy of the X-ray spectrum is 55 keV and 74 keV, respectively
The first configuration (Figure 8.4a) corresponds to the use of a detector with a single or a few sensitive elements. The size of the detector is, in any case, smaller than the diameter of the object to be imaged, and the acquisition of a slice requires combination of elementary displacements (rotations and translations). The second configuration (Figure 8.4b) corresponds to the use of a linear detector. The duration of the acquisition of a slice is reduced, because each measurement provides a complete projection. The last configuration (Figure 8.4c) corresponds to the use of a surface detector, and it permits the complete acquisition of an object with a simple rotation.
Figure 8.4. Three acquisition configurations: (a) first generation scanner, (b) fan-beam scanner, and (c) cone-beam scanner
An important constraint on the detection technology is the high dynamic range required to measure simultaneously the full flux and the most attenuated rays.
While the image areas next to the object, which receive the full flux, may be saturated in simple imaging, this should be avoided in tomography, where the linearity of the attenuation measurement is essential (see equation [8.2]). The detectors must also respond very rapidly to variations in the incident flux during rotation of the object. Usually, we distinguish between direct and indirect detection. Indirect detection, in which the X-rays are first converted into light, is currently used the most. It involves:
– Image intensifiers: This already old technology is used in standard equipment for real-time testing of small components. Their dynamic range is poor and their linearity is limited. In addition, the projections that they provide may show geometrical distortions, which should be corrected by software.
– Scintillation detectors: These have an excellent dynamic range and are built from a fluorescent screen, for example made of terbium-doped gadolinium oxysulfide (Gd2O2S:Tb). This screen is placed over a line of photodiodes or coupled to a matrix camera with a low noise level, which may be achieved either by optical fibers or an objective lens with a large aperture. Recently, flat panel detector technology, originally developed for medical applications, has become available, in which the fluorescent layer is directly placed on a matrix of amorphous silicon (a-Si), also called a TFT (thin film transistor) array. It provides good resolution (typically 130 μm) over large areas (typically 30 × 40 cm²).
With the appearance of semiconductor matrices, direct detection, which is historically the oldest (gas detectors), is again advancing. It involves:
– Gas detectors (xenon): Most medical scanners were equipped with these until recently. They have a high dynamic range but a mediocre efficiency. They are usually built in the form of linear detectors.
– Semiconductor detectors: Initially conceived for spectrometry, they are found in the form of monodetectors, often based on germanium. More recently, matrices (CdTe, HgI2) have been developed that enable direct imaging [RIC 01]. They may be used for a broad range of energies (including the energies produced by linear accelerators).
– Photoconducting layer detectors: These are usually made of amorphous selenium (a-Se) placed directly on a TFT matrix [GLA 97]. Such a technology shows less internal blurring than fluorescent screens, since it dispenses with light conversion and thus with light dispersion.
8.5. Reconstruction algorithms and artifacts

Reconstruction algorithms are traditionally grouped into two classes [KAK 88, PEY 96]. The analytical methods (see Chapter 2) are based on the inversion of the Radon transform, whereas the algebraic methods (see Chapter 4) use a discrete representation of the direct problem. Getting an exact reconstruction implies that each projection covers an area that is larger than the object. In non-destructive testing, this condition is sometimes deliberately violated in order to accelerate the acquisition and reconstruction (truncated projections). Moreover, in cone-beam geometry, another necessary condition for an exact reconstruction is that all planes intersecting the object must contain at least one source point.

8.5.1. Analytical methods

Analytical expressions exist for exact 2D and 3D reconstructions in parallel and cone-beam geometry. They rely on the Fourier slice theorem, which links the Fourier transforms of projections and of the image, and they are usually related to the inverse Radon transform [JAC 96]. The filtered backprojection method (FBP), generalized to cone-beam geometry by Feldkamp, is the most common reconstruction method, although it is mathematically only correct in the plane of the acquisition trajectory [FEL 84]. Different digital filters may be applied, depending on whether noise reduction or spatial resolution is to be favored [GRA 96]. A short sketch of the parallel-beam case is given after the list of artifacts below.

8.5.2. Algebraic methods

Algebraic methods demand the solution of a linear system of equations and minimize the reconstruction error iteratively. The most common implementation follows the basic ART technique, where each ray is considered in turn. The convergence is usually rapid, making a few iterations sufficient. The discretization of the direct problem often relies on a sampling of the volume in cubic voxels with no overlap, but more elaborate models may also be applied [MAT 96]. The algebraic methods are well suited when the number of projections is deliberately limited to accelerate the acquisition. They are also employed when the data are irregularly distributed or incomplete, or when a priori information is available.

8.5.3. Reconstruction artifacts

The principal forms of artifacts are described here along with their causes.
Ring artifacts (Figure 8.5): the elements of the detector are not correctly calibrated.
Streaks: aliasing due to angular subsampling; the number of projections must be increased.
Underestimated values in the center of the object (cupping effect): due to either the presence of scattered radiation or the use of polychromatic radiation; in the first case, the beam may be collimated, the radiation may be filtered after the object, or an anti-scatter grid may be used; in the second case, the incident radiation must be filtered, or a calibration must be performed [DOU 01]; iterative algorithms also permit correction of scattering and beam hardening effects [DON 94].
Tangential rays between the most attenuating areas (Figure 8.5): due to the use of polychromatic radiation (see previous section).
Isolated rays, shadows behind the most attenuating areas: certain pixels of the detector are saturated (over- or under-exposure) or defective; a dose better adapted to the dynamic range of the detector must be used, or the value of the defective pixels must be corrected by interpolation in each projection.
Deformation, aliasing: the geometry of the combination of source, rotation axis, and detector is inaccurately determined.
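As announced in section 8.5.1, a compact parallel-beam filtered backprojection can be written in a few lines. This is a sketch only (ramp filtering in Fourier space, linear interpolation at the detector, evenly spaced angles assumed):

    import numpy as np

    def fbp(sinogram, angles_deg):
        """Parallel-beam filtered backprojection.
        sinogram -- array of shape (n_angles, n_bins), one row per projection
        """
        n_angles, n_bins = sinogram.shape
        ramp = np.abs(np.fft.fftfreq(n_bins))          # ramp filter |f|
        filtered = np.fft.ifft(np.fft.fft(sinogram, axis=1) * ramp,
                               axis=1).real

        x = np.arange(n_bins) - n_bins / 2.0
        xx, yy = np.meshgrid(x, x)
        image = np.zeros((n_bins, n_bins))
        for proj, theta in zip(filtered, np.deg2rad(angles_deg)):
            # detector coordinate of every image pixel for this view
            t = xx * np.cos(theta) + yy * np.sin(theta) + n_bins / 2.0
            image += np.interp(t, np.arange(n_bins), proj,
                               left=0.0, right=0.0)
        return image * np.pi / n_angles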
Figure 8.5. Examples of ring artifacts (left) and tangential rays (right)

8.6. Applications

8.6.1. Tomodensitometry

To avoid an underestimation of the linear attenuation coefficient in the center of the object, it is necessary to correct the beam hardening effect (see section 8.5.3). For this purpose, an image of a step-wedge, manufactured from the same material as the object
to be tested (with the same atomic number Z), is acquired: the relation between the transmitted (measured) intensity and the traversed (actual) thickness makes it possible to replace the logarithm of Np0/Np in the projections by equivalent lengths of the traversed material. The reconstruction provides in this way a mapping of the equivalent thickness of the reference material for each pixel. In the domain of powder metallurgy, a mapping of the density enables detection of porosity, thus optimizing the manufacturing process [THI 98]. A precision of ±0.1 g·cm-3 may be attained in the range of 6 to 7 g·cm-3. Tomodensitometry is also employed to optimize carbon–carbon composite materials for space applications [MOR 89] or for aircraft brakes [DOU 01]. In the first case, the aim is to refine structures with a controlled density gradient, whereas in the second case, the aim is to detect areas oxidized by braking to reuse parts that remain sufficiently dense. In the tomodensitometry of carbon, a precision of ±0.02 g·cm-3 is easily reached for densities close to 1.6 g·cm-3. The step-wedge linearization is sketched below.

8.6.2. Expert's report

Tomography is employed for the assessment of complex, non-accessible mechanisms. With its help, the company TomoAdour has acquired an image of a cross-section through a regulation system of an aeronautical turbine without opening the structure. Another example of an expert's report, illustrated in Figure 8.6, shows the possibility of characterizing contact defects in a connector and of qualifying a batch by sampling.
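Returning to the calibration of section 8.6.1, the step-wedge linearization amounts to fitting the measured log-attenuation against the known traversed thickness and then inverting that relation in every projection. A minimal sketch (all measurement values are hypothetical):

    import numpy as np

    # Known step thicknesses and the log-attenuations measured through them
    steps_mm = np.array([0.0, 5.0, 10.0, 20.0, 40.0])
    log_att = np.array([0.0, 0.9, 1.7, 3.1, 5.4])   # ln(Np0/Np); non-linear
                                                    # because of beam hardening

    # Low-order polynomial mapping log-attenuation -> equivalent thickness
    coeffs = np.polyfit(log_att, steps_mm, deg=3)

    def equivalent_thickness(projection):
        """Replace ln(Np0/Np) values by equivalent lengths of the material."""
        return np.polyval(coeffs, projection)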
Figure 8.6. Comparison of a cross-section through two connectors manufactured differently (images courtesy of TomoAdour)
8.6.3. High-energy tomography

The CEA-LETI has built a scanner that enables examination of objects with high attenuation (equivalent to 1500 mm of aluminum or 300 mm of steel). This device is aimed at testing ballistic engines (study of density variations within the
propellant). A relative precision of 0.5% is attained with an elementary analysis volume (voxel) of some 100 mm³. The device consists of a linear accelerator of 8 MeV and a ring of 25 semiconductor detectors (CdTe). The acquisition of 1125 projections with 1024 pixels takes some minutes. The size of the tested object may reach 500 mm. The company Aérospatiale-Matra-Lanceurs possesses a scanner based on an X-ray tube of 420 kV, which is capable of traversing 340 mm of aluminum or 100 mm of steel. This device is dedicated to the testing of very large engines (up to 2400 mm in diameter). The goal is to test the structures of engines before filling them with propellant and to detect potential delamination within the inner, composite surface and spalling of the different layers (elastomer, composite, metallic inserts). The in-plane pixel size is fixed at 1 mm for the largest objects. The thickness of the cross-section may be changed between 2 and 20 mm. Figure 8.7 shows an engine installed in this device. The acquisition takes between 20 and 70 minutes, depending on the diameter of the object.
Figure 8.7. Testing of the body of an engine (image courtesy of Aérospatiale-Matra-Lanceurs)
8.6.4. CAD models in tomography: reverse engineering and simulation

The joint use of tomography and CAD models of tested parts is increasingly common. In reverse engineering, the reconstructed volume serves to generate a surface mesh of the object, typically by the extraction of an isosurface, to obtain its CAD model. By contrast, simulation enables a reconstructed volume from the CAD
model to be obtained with the goal of assessing the feasibility of testing by tomography and defining the optimal experimental conditions.

8.6.4.1. Reverse engineering

Reverse engineering, assisted by X-ray tomography, enables us to obtain non-invasively a CAD model of a product with complex geometry, usually a prototype. This model may be employed to validate a manufacturing process by 3D analysis. SNECMA has implemented a testing system dedicated to parts of engines of military aircraft; dimensional control of the turbine blades is achieved with a precision of ±0.03 mm [THI 98]. A second example, concerning a part for regulation in helicopter engines, is illustrated in Figure 8.8. Reverse engineering thus enables, even if no CAD data at all are available, an analysis of the technical characteristics of the product (calculation of the structure, simulation of flow, functional analysis in 3D) [DAS 99].
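The isosurface extraction step can be prototyped with a marching-cubes routine, for example the one in scikit-image. This is a sketch under assumed file and threshold values, not the workflow actually used by SNECMA or TomoAdour:

    import numpy as np
    from skimage import measure

    volume = np.load("reconstruction.npy")   # hypothetical reconstructed volume

    # Triangulated surface separating material from air at a chosen gray level
    verts, faces, normals, values = measure.marching_cubes(volume, level=0.5)
    # verts and faces define a mesh that can be exported towards a CAD package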
Figure 8.8. 3D analysis of a part for regulation in helicopter engines. Left: a photograph of the object. Right: the CAD model of the object, obtained by reverse engineering. This study was performed by the company TomoAdour with a 450 kV X-ray generator (resolution of 1% in density and of 100 μm in space)
8.6.4.2. Simulation

The primary interest in simulation is to assess the feasibility of testing during the conception phase of a product. The development of new test equipment generally requires the implementation of long and expensive test and measurement series. Without simulation tools, it is very difficult to study the actual influence of different parameters on image quality. Simulation thus enables us to predict and optimize the performance of a complete X-ray test system (Figure 8.9).
Two types of methods are usually employed for simulating interactions between photons and matter:
– Monte-Carlo methods are the closest to the physical model, but the computation times are long, and the geometry of the objects must necessarily remain simple.
– Ray tracing methods have the advantage of using 3D CAD models and of permitting simple manipulations of complex geometries [DUV 00a, GLI 98, GRA 89].
Figure 8.9. Simulation of a 3D tomography of a turbine blade. On the left, the CAD model used is presented in the form of a surface rendering. On the right, two cross-sections have been extracted from the reconstructed volume. The simulation has been performed with a polychromatic source and a matrix detector with 250 × 220 pixels. We may clearly see the tangential rays and the “cupping effect” due to beam hardening (images courtesy of INSA de Lyon–CNDRI)
8.6.5. Microtomography

New requirements encountered in the domain of fine characterization of materials have recently led to the development of scanners with high resolution (several micrometers). 3D tomography enables characterization of the whole sample and imaging of the different phases it consists of with the help of a surface representation. For example, it is possible to visualize the glass spheres serving to stabilize a poly(methyl methacrylate) (PMMA) model material (Figure 8.10), or the walls separating the cells of an aluminum foam (Figure 8.11). Such images allow modeling of the mechanical behavior of the material using finite element methods. The resolution of current detectors is usually insufficient to deliver images of an object with the required resolution. A common technique to improve the resolution is to use a microfocus X-ray tube and to enlarge the image by placing the sample close to the focal spot of the X-ray tube. Such an approach offers
Figure 8.10. Glass spheres (diameter 400 μm) within a PMMA model material. The size of the voxel is 42 μm (image courtesy of INSA-Lyon/GEMPPM-CNDRI)
Figure 8.11. Aluminum foam (density close to 0.06). The edge of the cube measures 3 cm. The size of the voxel is 150 μm (image courtesy of INSA-Lyon/GEMPPM-CNDRI)
the possibility of obtaining an enlargement that is adjustable to specific applications. Denoting by s the apparent size of the focal spot and by d the size of the detector pixel, the optimal object resolution, represented by r, corresponds to the case in which the geometrical unsharpness equals d. Under these conditions, r is given by

$$r = \frac{d \cdot s}{d + s} \qquad [8.3]$$
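A quick numerical check of equation [8.3] with illustrative values:

    def optimal_resolution(d, s):
        """Equation [8.3]: object resolution when the geometrical unsharpness equals d."""
        return d * s / (d + s)

    print(optimal_resolution(50.0, 50.0))   # equal sizes: 25.0, i.e. s/2
    print(optimal_resolution(50.0, 5.0))    # very different sizes: ~4.5, close to s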
When d and s are equal, the optimal resolution is s/2. When d and s are very different, r tends towards the smaller of these two values. A resolution of a few micrometers may thus be attained [HAL 92]. Since the power delivered by a microfocus X-ray tube is low, the exposure time needed is long. In certain cases, the acquisition of 900 projections may require several hours. Synchrotron radiation may be used instead, since its photon rate is high. However, the beam is not divergent and the enlargement of images is impossible. Therefore, screens with very high resolution must be employed [KOC 98], which implies that they are thin and thus of low efficiency. Yet the high flux of photons permits acquiring 900 projections (of size 1024 × 1024) on synchrotrons of the third generation in about 15 minutes, with the size of the pixel being less than 1 μm.

8.6.6. Process tomography

The University of Bergen and the company Norsk-Hydro have developed a static device dedicated to the monitoring of the flow of a multiphase fluid [JOH 96]. This process tomography works as follows. Five 241Am sources are placed around a pipe, and five rings, each comprising 17 semiconductor elements (CdZnTe), are placed diametrically opposed. The detectors and the associated readout electronics are capable of acquiring information corresponding to several hundred images per second. An iterative reconstruction algorithm produces up to 30 images per second. The monitoring of a catalytic reaction in a fluidized bed is of considerable interest in the petroleum industry. With the aim of mapping the density of a catalyst in the form of a fluidized powder within a cracking reactor that may reach a diameter of 1.20 m, Elf's research center in Solaize (France) uses a 137Cs source having an activity of 18 GBq and a single detector consisting of a scintillator (NaI) and a photomultiplier [BER 95]. The acquisition of a cross-section of the column takes about 3 hours, and an algebraic reconstruction method is used. The image of the steel and resistant concrete wall is eliminated by subtraction, involving a first acquisition while the reactor is halted (Figure 8.12). More recently, a scanner consisting of 32 detectors has been developed by the French Institute of Petroleum and the CEA [BOY 00]. The increase in the number of detectors enables reduction of the duration of the acquisition.

8.6.7. Dual-energy tomography

In view of the fact that a given value of the attenuation μT may correspond to an infinite number of pairs (ρv, Zeff), where Zeff represents the effective atomic number
of the material, knowledge of the distribution of μT(x, y) never permits a precise analysis of the nature of the material. Nevertheless, certain applications require a precise mapping of the different materials that are present within an object. Dual-energy tomography allows such a distinction between different materials [VIN 87]. The theory of dual-energy tomography is widely addressed in the literature [ALV 76, LEH 81, MAR 81]. Different models allow expression of μT as a function of ρv, Z and Ep. Let m be a constant (close to 3) and α and β be two energy functions. The attenuation is then modeled by:

$$\mu_T(\rho_v, Z, E_p) = \rho_v Z^m \alpha(E_p) + \rho_v \beta(E_p) \qquad [8.4]$$
Figure 8.12. Density of fluidized particles of a catalyst within a catalytic cracking reactor with a diameter of 0.82 m. The gray levels in the image correspond to the density of the catalyst in kg·l-1 (image courtesy of Elf)
The first step, the calibration, involves determining α(Ep) and β(Ep) for the two energies used. The first energy is in the range where the photoelectric effect is predominant (slope of the graph in Figure 8.1), whereas the second is on the plateau of the graph, where the Compton effect dominates. In this way, a system of two equations is available that permits calculation of ρv and Z. In the most common case of polychromatic radiation, however, estimation of the functions α and β is very delicate, even after filtering the beam. Several types of approaches are conceivable. One requires about ten standards whose density and atomic number are known [COE 94]. Another exploits the fact that, as far as the attenuation is concerned, any material may be replaced by a combination of two “base” materials [ROB 99].
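Once α and β have been calibrated at the two energies, model [8.4] is linear in the pair (ρvZ^m, ρv) and can be solved voxel by voxel. A minimal sketch (the calibration numbers are invented for illustration):

    import numpy as np

    m = 3.0                              # exponent of model [8.4]
    alpha = np.array([2.0e-3, 1.0e-4])   # alpha(Ep) at the two energies (assumed)
    beta = np.array([0.18, 0.15])        # beta(Ep) at the two energies (assumed)

    def density_and_z(mu_low, mu_high):
        """Solve [8.4] at two energies for the density and the atomic number."""
        A = np.array([[alpha[0], beta[0]],
                      [alpha[1], beta[1]]])
        u, rho = np.linalg.solve(A, np.array([mu_low, mu_high]))  # u = rho * Z**m
        return rho, (u / rho) ** (1.0 / m)

    rho_v, Z = density_and_z(0.55, 0.42)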
The last approach is used at CEA-LETI, which has developed a system for the monitoring of nuclear waste drums. The distribution of the activity of the different sources contained in the drums is determined by emission tomography. Before estimating the activity precisely, it is necessary to take into account the attenuation of the photons emitted by the diverse sources in the object. It is, therefore, indispensable to know the attenuation coefficient for each point in the drum, not only for a single energy, but for the whole range of energies of the present sources. For this reason, a dual-energy tomography by transmission is carried out beforehand for the whole drum. In this way, it is possible to determine for all points which fractions of the two base materials (steel and PMMA in this case) must be mixed to obtain the same attenuation properties as those of the examined object. With the properties of the two base materials perfectly known, the linear attenuation coefficient may thus be calculated within the drum for all energies. At the French Institute of Petroleum, the flow of fluids (gas, water, petroleum) within porous rocks from oilfields is visualized [THI 98]. Since these three fluids are intimately mixed, the spatial resolution in a cross-section does not permit their separation. Measuring the mean density within each voxel is also insufficient to determine the relative proportions of the three fluids. A scanner operating first at 80 kV, then at 140 kV, provides two images, which allow recovery of the relative concentrations of the three fluids within each voxel.

8.6.8. Tomosynthesis

The goal of tomosynthesis is to enhance, by a particular movement of the object, the mapping of a chosen cross-section, whose projection is invariant during the movement. Different cross-sections parallel to the detector plane may be obtained by superimposing the different projections after a calibration, which is performed either directly by a particular movement of the X-ray tube–detector combination or indirectly by postprocessing (backprojection). It comes down to gathering the information that corresponds to the plane of the chosen cross-section a priori, while the contributions of objects located out of the plane of the cross-section are attenuated by a blurring effect. Some examples are described in the following:
– LETI and SNPE Propulsion (National Society of Powders and Explosives) have developed a system for testing solid fuel motors for Ariane 5 to detect detachment (down to 0.3 mm) by tangential shooting [RIZ 96]. The motor, which measures 3 m in diameter, rotates with a speed of 0.75°/s. The source is a linear accelerator of 15 MeV, and the detection system consists of an amplified camera that is coupled to a Gd2O2S:Tb screen. The tomosynthesis is performed over some tens of successive images with a reconstruction rate of 6 images per second.
– This technique is also well suited for examining the welds on electronic cards. At the Fraunhofer Institute for Non-Destructive Testing (IZFP, Berlin), an inspection method by simple linear translation (computed laminography), without relative movement between the source and the detector, has been developed [GON 99].
– BAM (Federal Institute for Materials Research and Testing, Berlin) has developed testing of circumferential welds by diametrical shooting on tubes. The process relies on a particular tomosynthesis technique, coplanar laminography, which combines a translation of the tube with an orbital movement of the source–detector combination. The device consists of an X-ray generator of 225 kV and a ring of 2048 detectors spaced 50 μm apart [EWE 00].
Tomosynthesis may be considered as a special case of limited angle tomography. Therefore, algebraic methods are well suited to this type of problem, but the (linear or non-linear) averaging of images remains the most common method.

8.6.9. Scattering tomography

Scattering tomography is a 3D imaging and analysis technique, whose principle is illustrated in Figure 8.13a. Since the source and the detector are located on the same side of the examined part, this technique remains useful even if the object is large or access to its interior is impossible. This is one of the characteristics of scattering imaging techniques.
Figure 8.13. (a) Schematic illustration of the principle of the device. The measurement volume is located at the intersection of the incident and scattered beams, limited by the entrance and exit collimators, respectively. The detector, located behind the latter, counts and analyzes the scattered photons within the measurement volume; (b) set-up with parallel slits
When an object of arbitrary form and nature is irradiated, interactions between photons and matter take place along the incidence trajectory, in particular Rayleigh and Compton scattering. Compton scattering, in contrast to Rayleigh scattering, is accompanied by a loss of energy, whose extent depends on the energy of the incident photons and the scattering angle. The probability of the occurrence of these two phenomena depends not only on the energy of the radiation and the scattering angle, but also on the density and the atomic number of the inspected material [HUB 75]. Two collimators allow fixing of the size and the form of the measurement volume, the volume in which the Compton and Rayleigh photons acquired by the detector are scattered. Images are classically obtained by shifting this measurement volume within the tested object. More recently, a new acquisition technique has been developed [DUV 00b] (Figure 8.13b). Using an exit collimator with parallel slits, the scattered radiation is integrated along a line corresponding to the collimated incident beam. With a movement of the object inspired by scanners of the first generation, a standard reconstruction algorithm yields a cross-section whose spatial resolution is isotropic and depends only on the aperture of the entrance collimator. Among the scattering imaging techniques, two groups of applications may be distinguished:
– Defect search, density and thickness measurement: the devices implemented for this type of application carry out a counting of scattered photons, without distinction between their energies [BAB 91, HAR 88, REI 84]. The result is a function of the density and the atomic number of the material. Typically, the device comprises an X-ray generator and a scintillator (NaI or BGO). The cross-sections presented in Figure 8.14 are from a prototype of the leading edge of the spaceship Hermes, designed with carbon–carbon composite materials by the company Aérospatiale-Matra-Lanceurs. They clearly show numerous defects, such as decay, blistering, and delamination, due to the severe mechanical and thermal constraints imposed on this prototype. The same device may also be used to perform thickness measurements of walls with a precision of up to 0.01 mm [BAB 92]. In addition, the density of products from powder metallurgy for the automobile industry may also be determined with a precision of 0.5% [ZHU 95].
– Effective atomic number measurement by Rayleigh vs. Compton scattering: the ratio of the number of photons scattered by the Rayleigh and the Compton effect enables direct determination of the effective atomic number of the material [DUV 99], independent of the density. This technique requires the use of a monochromatic source and an energy-resolving detector to distinguish the two types of photons. The spectrum acquired for each measured point shows two peaks, which have to be separated [SEB 89].
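The Compton energy loss that makes the two peaks separable follows the standard scattering kinematics (electron rest energy 511 keV); a short check:

    import math

    def compton_energy(e_kev, theta_deg):
        """Photon energy after Compton scattering through angle theta."""
        theta = math.radians(theta_deg)
        return e_kev / (1.0 + (e_kev / 511.0) * (1.0 - math.cos(theta)))

    # A 100 keV photon scattered through 90 degrees keeps about 83.6 keV,
    # while the Rayleigh peak stays at 100 keV -- hence two separable peaks.
    print(compton_energy(100.0, 90.0))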
Figure 8.14. Left: a photograph of the leading edge of the spaceship Hermes. Right: three cross-sections obtained at different levels (images courtesy of CNDRI – Aérospatiale-Matra-Lanceurs)
Scattering tomography is well suited for light materials (composites, aluminum). The testing of denser materials is possible, but the maximum depth is further reduced in this case [BAB 91].

8.7. Conclusion

X-ray tomography enables testing of objects that are made of very diverse materials and which may have complex forms. Specific devices have been built to adapt equally well to large, industrial objects and to small samples. It is worth mentioning that fluorescence [TAK 95] and diffraction [GRA 94] phenomena may also be used to gather other kinds of information on the characteristics or the nature of materials. It is envisaged that the numerous possibilities offered by tomography in the domain of non-destructive testing will make this technique very common in the coming years. Advances in source and detector technology give rise to hopes for substantial future performance improvements. Finally, the continuous progress in computer technology will probably soon make it possible to perform tomography in real-time, thanks to the emergence of simplified acquisition geometries and the development of new reconstruction algorithms.

8.8. Bibliography

[ALV 76] ALVAREZ R. E., MACOVSKI A., “Energy selective reconstructions in X-ray computerized tomography”, Phys. Med. Biol., vol. 21, n° 5, pp. 733–744, 1976.
[BAB 91] BABOT D., BERODIAS G., PEIX G., “Detection and sizing by X-ray Compton scattering of near-surface cracks under weld deposited cladding”, NDT & E International, vol. 24, n° 5, pp. 247–251, 1991.
[BAB 92] BABOT D., LE FLOCH C., PEIX G., “Contrôle et caractérisation des matériaux composites par tomodensimétrie Compton”, Rev. Pratique de Contrôle Industriel, n° 73, pp. 64–67, 1992.
[BAR 57] BARTHOLOMEW R. N., CASAGRANDE R. M., “Measuring solids concentration in fluidized systems by gamma-ray absorption”, Industrial and Engineering Chemistry, vol. 49, n° 3, pp. 428–431, 1957.
[BER 87] BERGER M. J., HUBBELL J. H., XCOM: Photon Cross-Sections on a Personal Computer, National Bureau of Standards, 1987.
[BER 95] BERNARD J. R., “Frontiers in Industrial Process Tomography”, in Scott D. M., Williams R. A. (Eds.), Engineering Foundation, AIChE, pp. 197–206, 1995.
[BIR 79] BIRCH R., MARSHALL M., ARDRAN G. M., Catalogue of Spectral Data For Diagnostic X-rays, The Hospital Physicists Association, 1979.
[BOY 00] BOYER C., FANGET B., LEGOUPIL S., “Development of a new gamma-ray tomographic system to investigate two-phase gas/liquid flows in trickle bed reactor of large diameter”, Proc. CHISA, n° 588, pp. 28–31, 2000.
[BRO 76] BROOKS R. A., DI CHIRO G., “Beam hardening in X-ray reconstructive tomography”, Phys. Med. Biol., vol. 21, n° 3, pp. 390–398, 1976.
[COE 94] COENEN J. G. C., MAAS J. G., “Material classification by dual energy computerized X-ray tomography”, Proc. International Symposium on Computerized Tomography for Industrial Applications, pp. 120–127, 1994.
[DAS 99] DASTARAC D., “Industrial computed tomography: control and digitalization”, e-Journal on Nondestructive Testing & Ultrasonics, vol. 4, n° 9, 1999.
[DON 94] DONNER Q., Correction de l’atténuation et du rayonnement diffusé en tomographie d’émission à simples photons, Ph.D. thesis, Joseph Fourier University, Grenoble, 1994.
[DOU 01] DOUARCHE N., ROUBY D., PEIX G., JOUIN J. M., “Relation between X-ray tomography, specific gravity and mechanical properties in carbon-carbon composites braking applications”, Carbon, vol. 39, n° 10, pp. 1455–1465, 2001.
[DUV 00a] DUVAUCHELLE P., FREUD N., KAFTANDJIAN V., PEIX G., BABOT D., “Simulation tool for X-ray imaging techniques”, in MAIRE E., MERLE P., PEIX G., BARUCHEL J., BUFFIERE J. Y. (Eds.), X-Ray Tomography in Material Science, Hermès, pp. 127–137, 2000.
[DUV 00b] DUVAUCHELLE P., PEIX G., BABOT D., “Rayleigh to Compton ratio computed tomography using synchrotron radiation”, NDT & E International, vol. 33, n° 1, pp. 23–31, 2000.
[DUV 99] DUVAUCHELLE P., PEIX G., BABOT D., “Effective atomic number in the Rayleigh to Compton scattering ratio”, NIMB, vol. 155, pp. 221–228, 1999.
[EWE 00] EWERT U., MÜLLER J., “Laminographic X-ray testing of circumferential welds with a linear-array camera”, e-Journal of Nondestructive Testing & Ultrasonics, vol. 5, n° 4, 2000.

[FEL 84] FELDKAMP L. A., DAVIS L. C., KRESS J. W., “Practical cone-beam algorithm”, J. Opt. Soc. Am. A, vol. 1, n° 6, pp. 612–619, 1984.

[GLA 97] GLASSER F., MARTIN J. L., THEVENIN B. et al., “Recent developments on CdTe based X-ray detectors for digital radiography”, SPIE Medical Imaging, vol. 3032, pp. 513–519, 1997.

[GLI 98] GLIERE A., “Sindbad: from CAD model to synthetic radiographs”, in THOMPSON D. O., CHIMENTI D. E. (Eds.), Review of Progress in Quantitative Nondestructive Evaluation, Plenum, vol. 17, pp. 387–394, 1998.

[GON 99] GONDROM S., ZHOU J., MAISL M., REITER H., KRONING M., ARNOLD W., “X-ray computed laminography: an approach of computed tomography for applications with limited access”, Nuclear Engineering and Design, vol. 190, n° 1–2, pp. 141–147, 1999.

[GRA 89] GRAY J. N., INANC F., SCHULL B. E., “3D modeling of projection radiography”, in THOMPSON D. O., CHIMENTI D. E. (Eds.), Review of Progress in Quantitative Nondestructive Evaluation, Plenum, vol. 8, 1989.

[GRA 94] GRANT J. A., DAVIS J. R., WELLS P. et al., “X-ray diffraction tomography at the Australian National Beamline Facility”, Opt. Eng., vol. 33, n° 8, pp. 2803–2807, 1994.

[GRA 96] GRANGEAT P., LA V., SIRE P., “Tomographie d’émission photonique avec correction d’atténuation par mesure de transmission”, Revue de l’Acomen, vol. 2, pp. 182–195, 1996.

[HAL 92] HALMSHAW R., “The effect of focal spot size in industrial radiography”, British Journal of NDT, vol. 34, n° 8, pp. 389–394, 1992.

[HAR 88] HARDING G., KOZANETZKY J., “X-ray scatter imaging in non-destructive testing”, Tomography and Scatter Imaging, n° 19, pp. 1–15, 1988.

[HUB 75] HUBBELL J. H., VEIGELE W. M., BRIGGS E. A. et al., “Atomic form factor, incoherent scattering function and photon scattering cross-sections”, J. Phys. Chem. Ref. Data, vol. 4, n° 3, pp. 471–538, 1975.

[JAC 96] JACOBSON C., Fourier Methods in 3D-Reconstruction from Cone-Beam Data, PhD thesis, Linköping University, 1996.

[JOH 96] JOHANSEN G. A., FRØYSTEIN T., HJERTAKER B. T., OLSEN Ø., “A dual sensor flow imaging tomographic system”, Meas. Sci. Techn., vol. 7, n° 3, pp. 297–307, 1996.

[KAK 88] KAK A., SLANEY M., Principles of Computerized Tomographic Imaging, IEEE Press, 1988.

[KOC 98] KOCH A., RAVEN C., SPANNE P., SNIGIREV A., “X-ray imaging with submicrometer resolution employing transparent luminescent screens”, J. Opt. Soc. Am., vol. A15, pp. 1940–1951, 1998.

[LEH 81] LEHMAN L. A., ALVAREZ R. E., MACOVSKI A., BRODY W. R., “Generalized image combination in dual KVp digital radiography”, Med. Phys., vol. 8, n° 5, pp. 659–667, 1981.
[MAR 81] MARSHALL W. H., ALVAREZ R. E., MACOVSKI A., “Initial results with prereconstruction dual-energy computed tomography”, Radiology, vol. 140, pp. 421–430, 1981.

[MAT 96] MATEJ S., LEWITT R. M., “Practical considerations for 3D image reconstruction using spherically-symmetric volume elements”, IEEE Trans. Med. Imaging, vol. 15, n° 1, pp. 68–78, 1996.

[MOR 89] MORISSEAU P., CASAGRANDE J. M., HUET J., PAUTON M., “Caractérisation des matériaux par tomodensimétrie à rayons X”, Mémoires et Etudes Scientifiques – Revue de Métallurgie, vol. 86, pp. 269–274, 1989.

[PEY 96] PEYRIN F., GARNERO L., MAGNIN I., “Reconstruction tomographique d’images 2D et 3D”, Traitement du signal, vol. 13, n° 4, pp. 381–413, 1996.

[REI 84] REIMERS P., GILBOY W. B., GOEBBELS J., “Recent developments in the industrial application of computerized tomography with ionizing radiation”, NDT & E International, vol. 17, n° 4, pp. 197–207, 1984.

[RIC 01] RICQ S., GLASSER F., GARCIN M., “Study of CdTe and CdZnTe detectors for X-ray computed tomography”, Nucl. Instr. and Meth. in Phys. Res., vol. A458, pp. 534–543, 2001.

[RIZ 96] RIZO P., ANTONAKIOS M., LAMARQUE P., “Solid rocket motor nondestructive examination using tomosynthesis methods”, Proc. Conference on Non-Destructive Testing, pp. 453–456, 1996.

[ROB 99] ROBERT-COUTANT C., MOULIN V., SAUZE R., RIZO P., CASAGRANDE J. M., “Estimation of the matrix attenuation in heterogeneous radioactive waste drums using dual-energy computed tomography”, Nucl. Instr. and Meth. in Phys. Res., vol. A422, pp. 949–956, 1999.

[SEB 89] SEBER G. A. F., WILD C. J., Nonlinear Regression, John Wiley & Sons, 1989.

[TAK 95] TAKEDA T., TOSHIKAZU M., TETSUYA Y. et al., “Fluorescent scanning X-ray tomography with synchrotron radiation”, Rev. Sci. Instr., vol. 66, n° 2, pp. 1471–1473, 1995.

[THI 98] THIERY C., FRANCO P., “La tomographie à rayons X”, Contr. Indust., n° 214, pp. 26–57, 1998.

[VIN 87] VINEGAR H. J., WELLINGTON S. L., “Tomographic imaging of three-phase flow experiments”, Rev. Sci. Instrum., vol. 58, n° 1, pp. 96–107, 1987.

[ZHU 95] ZHU P., PEIX G., BABOT D., MULLER J., “In-line density measurement system using X-ray Compton scattering”, NDT & E International, vol. 28, n° 1, pp. 3–7, 1995.
Chapter 9
Industrial Applications of Emission Tomography for Flow Visualization
9.1. Industrial applications of emission tomography

9.1.1. Context and objectives

The understanding of flow phenomena in general, and multiphase flow phenomena in particular, is crucial for the development and control of processes in industry and process engineering. Flow systems with fluids in two or more non-miscible phases are found in numerous branches of industry, including the oil, food, pharmaceutical, and materials (foundry, plastics) industries.

To illustrate the subject of this chapter, we consider the problem of designing a stirred tank used as a chemical reactor. The objective of the designer is not only to optimize the system in economic terms, including its dimensions and its production, but also to develop a complete control system that assures efficient operation of the process. For this purpose, the designer resorts to mathematical models constructed from the mass and energy balance, the kinetics of the chemical reactions, and the properties of the materials. However, it is often impossible to make predictions from basic principles alone. Despite the variety of processes that use multiphase flows, the understanding of the hydrodynamic problems encountered remains incomplete. The flow patterns distinguished are annular, laminar, bubble, and turbulent. The designer of the reactor thus has to rely on experimental data, acquired in earlier experiments in the laboratory or in tests on a pilot installation at reduced scale.
Chapter written by Samuel LEGOUPIL and Ghislain PASCAL.
Process tomography may provide information on the distribution of phases anywhere in the reactor in real-time. A process tomography system is based on a non-invasive physical measuring technique that is adapted to the nature of the observed phenomenon in terms of physical properties, spatial resolution, sensitivity, and kinetics. The data acquired with the instrument are a set of values integrated by each of the detectors. The tomographic reconstruction method must also be adapted to the geometry of the measuring instrument. The nature of the materials, their properties, and the operating conditions of the system determine the choice of the physical measuring technique. It may be electromagnetic, electrical, acoustical, optical, or nuclear. Certain techniques may be permanently installed in the system; others, notably those based on tracers, are only used for a temporary analysis or diagnosis of the process.

9.1.2. Non-nuclear techniques

Electrical impedance tomography [DIC 93] is a non-invasive technique that provides an image of the distribution of the electrical properties of the medium by means of a set of electrodes mounted inside the conduit. The physical properties that may be exploited are the capacitance, the resistance, and the inductance. The choice depends on the contrast achievable with these properties between the materials. Capacitive tomography is the most developed of these techniques. The detectors are electrodes placed in pairs opposite each other around the circumference of the conduit to be analyzed. The advantages of this cheaper method are that it only requires a set of static detectors and that its very short acquisition times are suitable for rapid kinetics. Examples of applications are found in the monitoring of pneumatic transport systems for dry matter, of hydrocyclones, and of powder flow [WIL 95].

Techniques using light are sensitive to variations in the refractive index of the medium [DAR 95]. They are well adapted to the characterization of foam. Various studies based on interferometric techniques have demonstrated the measurement of the density and the composition of gaseous jets. The walls of the conduit must be transparent, and the material must be translucent.

Microwave tomography [BOL 95] is based on the diffraction of electromagnetic waves with wavelengths in the range from 1 m down to 1 mm. It depends on the dielectric and magnetic constants of the materials. It is mainly applied to solid foams and composite materials. Ultrasound tomography is based on variations in the refractive index of acoustic waves. This technique is applied to velocimetry and the analysis of bubble movements.

9.1.3. Nuclear techniques

The absorption of X- or γ-rays is the basic principle of most of the gauges known as nucleonic gauges. They enable the measurement of the density or mass of
matter that intercepts the beam of radiation. Dual-energy X- or γ-ray absorptiometry enables the measurement of the concentration of a product in a mixture. It is a technique of interest for the characterization of biphase flows, provided that the attenuation of the two products is sufficiently different. The interest in techniques based on ionizing radiation stems from their ability to be applied under real conditions in a factory. In fact, γ-rays may penetrate thick conduit walls, and contact-free measurements may be realized at real temperatures and pressures. The tomographic information may be obtained, as with medical scanners, by rotating the source–detector combination around the conduit. This technique is described in section 8.6.6.

Nuclear techniques involve marking a phase with a radioactive tracer. The circulating fluid itself then emits the signal, the γ-rays. The radioactive tracers employed have a sufficiently short lifetime to avoid the risk of contaminating the environment. They allow selective marking of one phase of a mixture, and the γ-rays from the tracer allow contact-free, remote sensing through the wall of a conduit. This technique is used for determining the residence time distribution in process engineering. The marked product, which may be solid, liquid, or gaseous, is injected at the inlet of the system, and its presence is detected by one or more radiation detectors placed at judiciously chosen points in the system. This analysis method provides only one-dimensional information on the evolution of the tracer’s concentration. Another approach is to perform a tomographic reconstruction of the tracer’s distribution. This is analogous to the applications of radioactive tracers in nuclear medicine described in Chapter 1.

Some authors [PAR 95] have used positron emission tomography. The particles resulting from the radioactive disintegration annihilate on encountering an electron and emit two γ-photons that propagate in exactly opposite directions. The coincidence detection of γ-photons defines annihilation pairs. The position of the emission is determined by a reconstruction algorithm. The drawback of positron emission tomography is the limited choice of radiotracers. In fact, the majority of them have a very short half-life (15O: 2 min, 11C: 20 min, 18F: 1.8 h), and their production requires a cyclotron and a radiochemistry laboratory close to the installation site. Moreover, the detectors are complex and expensive. Their size does not always permit their integration in factories.

Single photon emission computed tomography (SPECT) uses the same γ-tracers as are commonly employed for the determination of residence times. Among the radiotracers are 198Au with a half-life of 2.7 days and 140La with a half-life of 12.8 days for particles, 99mTc with a half-life of 6.0 h for liquids, and 133Xe with a half-life of 5.25 days for gases. The classical sensor in SPECT is the scintillation camera, which rotates around the object. Because of its size and its fragility, it is not very suitable for use in an industrial environment. Instead, the use of a static ensemble composed of classical scintillation detectors is preferred. The system is portable and adaptable to all sorts of situations in a factory. Applications are basically the characterization of flow and the analysis of systems in process engineering.
An important parameter to consider in the implementation of a SPECT system is the geometry of the flow circulation system. In most cases, it is a cylindrical pipe. The positioning of the detectors may be restricted by the presence of insulating material, a secondary pipe, or a lack of space around the pipe to be analyzed. Another crucial parameter to consider is the type of flow to be observed. A priori knowledge of the type of flow (Figure 9.1) may determine the geometrical parameters. If it is available with high confidence, it may also determine the distribution of the detectors. In the case of annular flow, it is preferable to increase the number of detectors in a projection at the expense of the number of projections around the pipe. In the remainder of this chapter, two examples of applications relying on the use of 36 independent detectors are presented. The estimation method for the transport matrix associated with this system is outlined, and the images obtained by a reconstruction with the EM algorithm are presented.
Figure 9.1. Examples of flow in a pipe (laminar, bubble, turbulent, annular)
9.2. Examples of applications

9.2.1. Two-dimensional flow

The studied object is a cylinder whose height is small compared to its diameter, so the flow may be considered two-dimensional. The fluid enters and exits through two diametrically opposed openings (Figure 9.2). The volume is filled with water under low pressure. The flow rate of the water is fixed at 0.4 l/min. A colorant is injected simultaneously with 3.5 mCi of 99mTc. In addition to the close correspondence between the reconstructed images and the video frames (Figure 9.3), the measured activity agrees well with the injected activity. A decomposition of the image into nine zones of interest shows the damped oscillatory nature of the activity (Figure 9.4). After travelling from the entrance to the exit of the mixer, part of the tracer is ejected, while the symmetrically separated remainder circulates along the wall before returning to the principal flux. This process is repeated while diffusion tends to homogenize the mixture.
Figure 9.2. Principle of the 2D mixer
Figure 9.3. Selected reconstructed images and corresponding video frames. The injection takes place at the right of the mixer
Figure 9.4. Reconstructed total activity and reconstructed activity in three regions of interest
9.2.2. Flow in a pipe – analysis of a junction

The objective of this application is to analyze a junction of a pipe. A flux Q2 is inserted into a flux Q1 through a 90° connection (Figure 9.5). The radioactive tracer is injected into the branch Q2 of the circuit. Measuring at Q2 enables control of the homogeneity of the tracer in the flux. The objective at Qt is to dynamically visualize the distribution of the tracer in this branch and to estimate its fluctuations. The images reconstructed at Q2 (Figure 9.6) demonstrate the penetration of flux 1 into flux 2 to a depth equal to twice the diameter of the pipe. This periodic phenomenon occurs in a section inclined at about 30°. The measurements at Qt show the asymmetry of the mixture with respect to a vertical plane and a periodic variation of the rate of flux with the same frequency as that found at Q2. Due to the inhomogeneity of the flow velocity in the mixer (ratio of 10 between the rates), the interpretation of the tracer concentrations at Qt is not straightforward, because the tomographic measurement is sensitive to the velocity of the tracer. The mean concentration measured by the tomographic system is inversely proportional to the velocity of the flow. A first-order correction model, applied to the obtained measurements, yields good agreement with the numerical simulation results obtained for this type of flow (Figure 9.7).
Figure 9.5. Prototype of a scanner consisting of 6 x 6 detectors and studied junction. The rate of flow is Qt = Q1+Q2
Figure 9.6. Left: reconstructed mean image of a cross section of the flux Q2 (top) and Qt (bottom). Right: representation of the same 3D images, with time as the third dimension (vertical axis)
Figure 9.7. Reconstructed mean image of the flux Qt and comparison to a numerical simulation of the flow
9.3. Physical model of data acquisition

9.3.1. Photon transport

The photons emitted by the tracer undergo scattering and attenuation in the media they traverse. If the cross sections of these effects and the attenuation maps are perfectly known, it is possible to model the probability of detection of the photons by random drawing of a multitude of events. The photons are detected in the scintillator when they lose all (photoelectric effect) or only some (Compton effect) of their energy in it. The probability that photons are detected via the Compton effect is larger the higher their energy and the smaller the volume of the scintillator. Given the energies of the tracers employed in industrial applications, suppressing the photons scattered in the medium would also suppress the photons scattered in the detector only. To increase the number of detected events, the photons scattered in the medium must equally be considered. Moreover, taking the scattering into account in the tomographic acquisition process is a major difficulty, due to the anisotropy of the scattering. Qualitatively, the scattering causes a loss of gradients in the projections (spreading of the profiles, see Figure 9.10), which results in a blurring of the image. Quantitatively, the scattering leads to incorrect estimates of the reconstructed activity. The contribution of the scattering is expressed by the build-up factor, defined as the ratio of the total number of detected photons to the total number of direct photons. In practice, this factor may reach infinite values at source points located outside the solid angle of a perfectly collimated detector. Despite the loss of resolution that it introduces, the scattered radiation contains information linked to the phenomenon that we wish to observe.

Two approaches are essentially used to take these events into account. In a global model, the transfer matrix integrates all phenomena that contribute to the detection of γ-photons. In particular, the elements of the transport matrix take into account the probability that a photon emitted from image element j is detected by detector i after one or more scatterings (Compton or pair creation). Compensating for scattering before or after reconstruction is a second approach, in which the photons scattered in the medium are explicitly separated from the directly collected photons. These methods require detection in two energy bands, and they are only efficient if all the energy of the photons is transferred to the detector. Since the numbers of scattered photons in the two energy bands are proportional, it is possible to subtract the scattered photons in the energy band that contains the photoelectric peak.
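As an illustration of this second approach, the sketch below (a minimal example, not taken from the chapter; the array contents and the calibration factor k are assumed, illustrative values) subtracts the scatter estimated from a lower energy window from the counts recorded in the window containing the photoelectric peak:

import numpy as np

# Minimal sketch of dual-energy-window scatter subtraction (illustrative,
# not the chapter's implementation). Counts are recorded per detector in
# two bands: a lower "scatter" window and the photopeak window. Assuming
# the scatter contamination of the peak window is proportional (factor k,
# calibrated beforehand, e.g. by Monte Carlo) to the scatter-window counts,
# the direct counts are estimated by subtraction.
def subtract_scatter(peak_counts, scatter_counts, k=0.5):
    direct = peak_counts - k * scatter_counts
    return np.clip(direct, 0.0, None)  # counts cannot be negative

rng = np.random.default_rng(0)
true_direct = rng.poisson(1000.0, size=36).astype(float)  # 36 detectors
scatter = rng.poisson(400.0, size=36).astype(float)
measured_peak = true_direct + 0.5 * scatter
print(subtract_scatter(measured_peak, scatter)[:5])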
9.3.2. Principle of the Monte Carlo simulation

The calculation relies on the fact that the history of the trajectory of a photon, from its creation onwards, is composed of a succession of interactions with matter. These processes are entirely described by the laws of probability and the interaction cross sections. From the calculation of a multitude of histories, we obtain, for a given geometry of the detector (Figure 9.8), the probability of detection, the total detected spectrum, the spectra of photons scattered in the medium and in the detector, the efficiency of the shielding and of the detector, and the build-up factor. The spectrum of the flux of photons incident on a detector may thus be estimated, as may the response of the detector to this flux. In particular, it is possible to estimate the number of incident photons with a certain energy that are detected with a smaller energy due to scattering in the detector. As with all statistical methods, these quantities are subject to an uncertainty, which is inversely proportional to the square root of the number of simulated histories.
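The following minimal sketch illustrates the principle under strongly simplifying assumptions (isotropic point source, detector reduced to a small angular acceptance, fixed attenuating path); it is not the dedicated code discussed below, but it shows how the estimated detection probability and its statistical uncertainty behave with the number of histories:

import numpy as np

# Monte Carlo sketch (assumed, simplified geometry): probability that a
# photon emitted isotropically reaches a small detector after attenuation
# over a path of length path_cm in a medium of attenuation coefficient mu.
def mc_detection_probability(n_histories, mu=0.12, path_cm=10.0,
                             cos_min=0.995, seed=1):
    rng = np.random.default_rng(seed)
    cos_theta = rng.uniform(-1.0, 1.0, n_histories)  # isotropic emission
    hits = cos_theta > cos_min                       # aimed at the detector
    survived = rng.random(n_histories) < np.exp(-mu * path_cm)
    detected = hits & survived
    p = detected.mean()
    sigma = detected.std(ddof=1) / np.sqrt(n_histories)  # ~ 1/sqrt(N)
    return p, sigma

for n in (10_000, 1_000_000):
    p, sigma = mc_detection_probability(n)
    print(f"N={n:>9}: p = {p:.5f} +/- {sigma:.5f}")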
Figure 9.8. Geometry of detection
To ensure a good sampling of the space observed by the detector, the response of the detector must be simulated for a large number of point source positions. In addition to the source points placed within the solid angle of the detector, those located outside of it must be considered. Although the contribution of a source located outside of the geometrical angle of the collimator is small, the global contribution of these points becomes important. As an example, we consider a surface source and a collimator of depth h and diameter d, placed at a distance l from the surface. The measured activity is proportional to (d(l + h)/(hl))². If h is small compared to l, the measured activity is quasi-independent of the distance to the source. The integration over a surface source rapidly compensates for the distance from this source.

To rapidly calculate a large number of source points, the Monte Carlo simulation code includes a variance reduction algorithm, which forces the photon, at
each interaction with the medium, towards the detector, where it is forced to interact. At the end of the calculation of each history, the probability of detection is computed as the sum of the probabilities calculated for each interaction. This method, dedicated to emission tomography, is particularly powerful, since it may reach an acceleration by a factor of 500 over general-purpose codes such as MCNP (OECD/Nuclear Energy Agency).
Figure 9.9. Spectra simulated for 82Br for different numbers of scatterings with the medium
9.3.3. Calculation of projection matrices

9.3.3.1. Estimation of projection profiles

The projection matrix is calculated by estimating the projection profiles of linear sources placed perpendicularly to the detector plane. These profiles are adjusted by empirical, analytical functions, which generally have a Gaussian form. These functions integrate all probabilities that contribute to the detection of the photons. Using Monte Carlo simulation to estimate the projection matrix enables separation of the photons according to the number of scatterings in the medium before their detection (Figure 9.10). This has two advantages: the adjustment of the detected profiles is simplified by separating the non-scattered from the once, twice, or more frequently scattered photons; and the analysis of the spectra of the photons incident on the detector enables optimization of the energy threshold for the selection, to mostly or entirely eliminate the photons that were scattered too often in the medium. In the example of spectra simulated for 82Br presented in Figure 9.9, the spectra are plotted separately for the different numbers of scatterings the photons experienced in the medium. The profiles obtained for this configuration show that the full width at half maximum (FWHM) evolves little with the number of interactions. By contrast, the full width at a tenth of the maximum (FWTM) is
multiplied by 2 when one or two interactions are considered (Table 9.1). Limiting the energy to 250 keV enables suppression of the majority of photons scattered more than twice in the medium. The FWHM is then 2.4 cm. However, 80% of the incident photons are then not detected.
Figure 9.10. Projection profiles for different numbers of scatterings
The probability that a photon reaches the detector without scattering may be estimated by a direct calculation. For this purpose, the solid angle, the transmission probabilities associated with the different traversed media, and the probability that the photon is absorbed in the detector, independent of its energy, are considered. A first design of an experiment may be obtained from the Monte Carlo simulation. Since the quantity of usable tracer limits the experimental conditions feasible at an industrial site, it is not necessary to accurately model the projection matrix in the
first phase of the production of a tomograph. The simulation is performed when the system parameters are defined.
Number of scatterings    FWHM (cm)    FWTM (cm)
0                        1.9          4.4
1                        2.1          6.4
2                        2.3          8.0
>2                       2.6          13.0
Table 9.1. Evolution of the FWHM and the FWTM as a function of the number of interactions
In the geometry shown in Figure 9.8, the profiles are decomposed into a direct and a scattered component. Each component is adjusted by an analytical function of the form:
p(x, y) = p_0(x) \exp\left( -\left| \frac{y - y_d}{\sigma(x)} \right|^{\eta(x)} \right)        [9.1]
For the direct component, p0(x) is directly calculated by:
p_0(x) = \frac{S_d}{4\pi (x_d - x)^2} \exp\left( -\sum_{\mathrm{screens}} \mu_i d_i \right)        [9.2]
The amplitude of the scattered component is calculated as a fraction εd(x) of the direct component. εd(x) and η(x) are adjusted by a second-order polynomial; σ(x) is in general adjusted by a linear function. In an industrial context, this modeling enables quantitative measurements to be obtained.

9.3.3.2. Estimation of the projection matrix

The examined media are in general cylindrical pipes. Rotational symmetry therefore means that not all incidences need to be simulated: if a projection consists of N detectors, the projection matrix is entirely defined by the simulation of the response of N/2 detectors if N is even ((N−1)/2 + 1 otherwise). An element Aij of the projection matrix is an estimate of the probability that a photon emitted from pixel j is detected by detector i. Aij is then calculated as the integral of the profile over the surface of this pixel:
A_{ij} = \int_{x_j - d/2}^{x_j + d/2} \int_{y_j - d/2}^{y_j + d/2} p(x, y)\, dx\, dy        [9.3]
where d is the size of the pixel. The estimation of Aij for an incidence whose angle is not a multiple of π/2 is calculated by linear interpolation over the four closest precalculated values (Figure 9.11). The size of the pixel is chosen such that the influence of its orientation with respect to the projection angle is negligible.
Figure 9.11. Calculation of the element Aij by linear interpolation over a neighborhood of four
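The sketch below assembles one element Aij from equations [9.1] and [9.3]; the profile parameters (p0, σ, η and the pixel size d) are illustrative constants standing in for the fitted functions p0(x), σ(x) and η(x) of the chapter:

import numpy as np

# Sketch: one projection matrix element, equations [9.1] and [9.3].
# The x-dependence of the fitted functions is dropped here for brevity.
def profile(x, y, y_d=0.0, p0=1.0e-4, sigma=2.0, eta=2.0):
    """Generalized-Gaussian detection profile, equation [9.1]."""
    return p0 * np.exp(-np.abs((y - y_d) / sigma) ** eta)

def element_Aij(x_j, y_j, d=0.5, n=32):
    """Equation [9.3]: integral of the profile over pixel j of size d."""
    xs = np.linspace(x_j - d / 2, x_j + d / 2, n)
    ys = np.linspace(y_j - d / 2, y_j + d / 2, n)
    X, Y = np.meshgrid(xs, ys)
    return profile(X, Y).mean() * d * d  # simple midpoint-style quadrature

print(element_Aij(x_j=10.0, y_j=1.0))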
9.4. Definition and characterization of a system

9.4.1. Characteristic system parameters

For a given problem, the general idea is to fix the number of detectors and to distribute them appropriately around the conduit in order to optimize the spatial resolution, the homogeneity over the reconstruction space (Figure 9.13), and the level of the measured signal. It is not expected that more than 100 detectors will be used at an industrial site, for reasons of cost and size. For a test image representing a stratified flow, the variation of the RMS error between the original distribution and the reconstructed image as a function of the number of detectors is plotted in Figure 9.12. These data were calculated without considering statistical fluctuations of the signal. The amount of tracer activity that may be used in industrial measurements largely defines the number and the collimation of the detectors employed. The limit is imposed by the type of tracer (element, toxicity, half-life) and the conditioning of the outflow. The signal-to-noise ratio of the measurements is proportional to the product of the injected activity and
the cross section of the collimation. For a given collimation, there is a number of detectors above which the spatial performance no longer improves. Thus, about 50 detectors are necessary for a collimation whose PSF at the center of the object has a FWHM of 25% of the object diameter. In practice, and for reasons of size, a prototype system was developed on the basis of 36 detectors distributed in a hexagon.
Figure 9.12. Representation of the RMS between an original image and the reconstructed image as a function of the number of detectors and incidences around the pipe for a stratified flow
Figure 9.13. FWHM of a tomographic system with 6 x 6 detectors as a function of the radial position
9.4.2. Characterization of images reconstructed with the EM algorithm

Since the detection process is a linear phenomenon, the linearity of the imaging system is linked to the properties of the reconstruction algorithm. The images are reconstructed here with the EM algorithm described in Chapter 4. Such an imaging system is characterized by its ability to distinguish two close sources (separation capability). A comparison of the theoretical and actual separation capabilities allows this description to be refined. For a linear system, the separation capability is directly associated with the modulation transfer function. For a non-linear system, it is necessary to measure the separation capability in more detail. This analysis relies on the reconstruction of two point sources, first taken together and then taken separately. The distance d that separates them translates into a spatial frequency. Let b be the reconstructed amplitude of a point source, a the amplitude midway between the sources, and a′ the sum of the activities reconstructed at the midpoint for the sources taken separately. This analysis enables the separation capability to be estimated, and the comparison between the theoretical and actual separation capabilities enables the linearity of the system to be estimated (Figure 9.14). The actual and theoretical separation capabilities (SC) are defined by:

SC_{\mathrm{actual}}(d) = \frac{b - a}{a}        [9.4]

SC_{\mathrm{theoretical}}(d) = \frac{b - a'}{a'}        [9.5]
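For illustration, the sketch below evaluates [9.4] and [9.5] on synthetic one-dimensional reconstructed profiles (Gaussian point-spread shapes are assumed); for a perfectly linear reconstruction the two values coincide:

import numpy as np

# Sketch: separation capabilities of equations [9.4] and [9.5] computed
# from 1D reconstructions of two point sources, together and separately.
def separation_capabilities(recon_both, recon_s1, recon_s2, i_src, i_mid):
    b = recon_both[i_src]                        # amplitude at a source
    a = recon_both[i_mid]                        # midway, sources together
    a_prime = recon_s1[i_mid] + recon_s2[i_mid]  # midway, sources separate
    return (b - a) / a, (b - a_prime) / a_prime

x = np.arange(64)
s1 = np.exp(-0.5 * ((x - 28) / 3.0) ** 2)  # synthetic reconstructions
s2 = np.exp(-0.5 * ((x - 36) / 3.0) ** 2)
sc_act, sc_th = separation_capabilities(s1 + s2, s1, s2, i_src=28, i_mid=32)
print(f"SC_actual = {sc_act:.2f}, SC_theoretical = {sc_th:.2f}")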
9.4.2.1. Underdetermined systems

For the EM algorithm, the number of measurements must in theory be larger than or equal to the number of pixels. When this constraint is not satisfied, the image introduced at iteration zero of the algorithm partially defines the solution. Under these conditions, the stability of the solution with respect to noise in the measurements deteriorates severely. For a given opening of the collimation, the stability of the solution depends in practice only to a small extent on the number of detectors around the object. On the other hand, pixels located at the boundary of the object often converge poorly when the number of detectors is reduced. These pixels correspond to small values of the sum of all Aij. The application of a median filter during the iterations improves the quality of the reconstructed image. The filter is applied when this type of pixel is detected. A blurring is applied to make the obtained solution comparable with an actual distribution of flow.
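A minimal version of the EM (MLEM) update used for these reconstructions is sketched below for a small, hypothetical system matrix; it omits the median filtering of poorly converging boundary pixels mentioned above:

import numpy as np

# MLEM sketch: A is the M x P projection matrix, g the measured counts,
# f the image; the uniform initial image partially defines the solution
# when M < P (the underdetermined case discussed above).
def mlem(A, g, n_iter=50, eps=1e-12):
    f = np.ones(A.shape[1])
    sens = A.sum(axis=0)                 # sensitivity: sum over detectors of Aij
    for _ in range(n_iter):
        ratio = g / np.maximum(A @ f, eps)
        f *= (A.T @ ratio) / np.maximum(sens, eps)
    return f

rng = np.random.default_rng(2)
A = rng.random((36, 25)) * 1e-3          # 36 detectors, 5 x 5 image
f_true = rng.random(25)
g = rng.poisson(A @ f_true * 1e5) / 1e5  # noisy measurements
print(mlem(A, g)[:5])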
Figure 9.14. Actual and theoretical separation capability of a configuration with 6 x 6 detectors
9.5. Conclusion

SPECT is a technique that permits the visualization of flows under real conditions (temperature, pressure, rate, etc.) through opaque conduits that render classical methods inoperable. Each situation is a particular case, which requires modeling of the physics of the detection process. Although the performance in terms of spatial resolution and linearity is far inferior to that obtained with medical systems or in NDT, this approach currently enables a user or a system designer in process engineering to estimate flow characteristics in situ.

9.6. Bibliography

[BOL 95] BOLOMEY J. C., “Microwave tomography: optimization of the operating frequency range”, in SCOTT D. M., WILLIAMS R. A. (Eds.), Frontiers in Industrial Process Tomography, Engineering Foundation, 1995.

[CHA 97] CHAOUKI J., LARACHI F., DUDUKOVIC P., “Noninvasive tomographic and velocimetric monitoring of multiphase flows”, Ind. Eng. Chem. Res., vol. 36, pp. 4476–4503, 1997.

[DIC 93] DICKIN F. J., WILLIAMS R. A., BECK M. S., “Determination of composition and motion of multicomponent mixtures in process vessels using electrical impedance tomography. I: Principles and process engineering applications”, Chem. Eng. Sci., vol. 48, pp. 1883–1897, 1993.

[DAR 95] DARTON R. C., THOMAS P. D., WHALLEY P. B., “Application of optical tomography in process engineering”, Process Tomography 1995 – Implementation for Industrial Processes, pp. 427–439, 1995.
[LEG 96] LEGOUPIL S., PASCAL G., CHAMBELLAN D., BLOYET D., “Determination of the detection process in an experimental tomograph for industrial fluid flow visualization using radioactive tracers”, IEEE Trans. Nuc. Science, vol. 43, n° 2, pp. 751–760, 1996.

[LEG 97] LEGOUPIL S., Tomographie d’Emission Gamma à Partir d’un Nombre Limité de Détecteurs Appliquée à la Visualisation d’Ecoulement, PhD thesis, University of Caen, 1997.

[LEG 99] LEGOUPIL S., TOLA F., CHAMBELLAN D., VITART X., “A SPECT device for industrial flows visualization”, Proc. Italian Association of Chemical Engineering Conference Series, 1999.

[PAR 95] PARKER D. J., DIJKASTRA A. E., MCNEIL P. A., “Positron emission particle tracking studies of granular flow in a rotating drum”, Process Tomography 1995 – Implementation for Industrial Processes, pp. 352–360, 1995.

[WIL 95] WILLIAMS R. A., DYAKOWSKI T., XIE C. G. et al., “Industrial measurement and control of particulate processes using electrical tomography”, Process Tomography 1995 – Implementation for Industrial Processes, pp. 3–15, 1995.
Part 4

Morphological Medical Tomography
Chapter 10
Computed Tomography
10.1. Introduction

10.1.1. Definition

Computed tomography (CT) is a morphological imaging modality that maps the density of human tissue in cross-sections. It employs X-rays, generated by an X-ray source, to collect the required information. The X-ray source and a linear detector, composed of a set of detector elements, are mounted opposite each other on a gantry. Both are rotated around the patient, which determines the examined cross-section [CAR 99, NEW 81, ROB 85]. The phenomenon that enables recovery of the distribution of the density is the attenuation of the X-rays as they pass through the absorbing human body. The examined cross-section is described by a function μ(x,y), which represents the linear attenuation coefficient of the tissue (for a given energy, μ is proportional to the density). On the assumptions that the radiation with energy E is monochromatic and that the beam with incident flux I0 is parallel and infinitely narrow, the transmitted flux of X-ray photons is given by the Lambert–Beer law:
I = I_0 \, e^{-\int_A^B \mu_E(x, y)\, dl}
Chapter written by Jean-Louis AMANS and Gilbert FERRETTI.
Thus,

\log \frac{I_0}{I} = \int_A^B \mu_E(x, y)\, dl

corresponds to the integral of the function μE
along the half-axis from the source through a detector element. Measuring these integrals enables reconstruction of the image μE(x,y). For the mathematical background on image reconstruction from line integrals, the reader is referred to Chapter 2. After reconstruction, the image is normalized to Hounsfield units:

H = 1000 \, \frac{\mu - \mu_{\mathrm{water}}}{\mu_{\mathrm{water}}}
where μ and μwater are the linear attenuation coefficients of the considered tissue and of water, respectively.
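As a small numerical illustration of the two relations above, the sketch below evaluates the Lambert–Beer law along a discretized ray and converts attenuation coefficients to Hounsfield units; the value used for μwater is an assumed, indicative figure:

import numpy as np

MU_WATER = 0.19  # 1/cm, assumed indicative value at an effective CT energy

def transmitted_flux(I0, mu_samples, dl):
    """Lambert-Beer law, with the line integral discretized in steps dl."""
    return I0 * np.exp(-np.sum(mu_samples) * dl)

def to_hounsfield(mu, mu_water=MU_WATER):
    """H = 1000 (mu - mu_water) / mu_water."""
    return 1000.0 * (mu - mu_water) / mu_water

mu_ray = np.array([0.0, 0.19, 0.21, 0.19, 0.0])    # air, water-like, tissue
print(transmitted_flux(1.0, mu_ray, dl=1.0))
print(to_hounsfield(np.array([0.0, 0.19, 0.21])))  # air ~ -1000, water = 0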
10.1.2. Evolution of CT

The impact of CT on diagnostic imaging practice was so profound that its inventors, Hounsfield and Cormack, received the Nobel prize in 1979. Since its introduction, CT has advanced considerably. Several generations of scanners have been developed to acquire the data required for image reconstruction more and more efficiently, i.e. to scan the required set of source–detector positions faster and faster. In the first and second generation scanners, the limited number of detector elements made several types of movements of the source and detector necessary for this purpose. In modern scanners, one of the two following geometries is employed (see Figure 10.1):

– R/R geometry: a divergent beam of radiation is emitted by the X-ray source and irradiates a curved multidetector (composed of on the order of 1,000 detector elements), whose size allows at least the whole examination volume to be covered. The source and the detector rotate together;

– R/S geometry: a multidetector completely surrounds the patient (ring-shaped detector), and only the source rotates around the patient. In particular, so-called electron beam tomography scanners, which completely integrate the X-ray source, rely on this geometry. The position of the radiation source is in this case determined by the direction of the electron beam that generates the X-rays when decelerating in the target. With this geometry, no mechanical movements are performed [MCC 95].
Figure 10.1. The two geometries of modern scanners: R/R geometry (mobile X-ray source and rotating multidetector) and R/S geometry (mobile X-ray source and fixed ring-shaped multidetector)
Such geometries are called fan-beam geometries. This development was primarily aimed at a reduction of the scan time for each slice to shorten the duration of the whole examination. In this way, errors and artifacts due to patient motion can be minimized. Today, all commercial medical CT scanners rely on R/R geometry.
10.1.3. Scanners with continuous rotation

At the beginning of the 1990s, technological developments (in electronics, mechanics, and computer science) led to the appearance of scanners with continuous rotation (helical or spiral CT), which enable uninterrupted data acquisition while the patient bed moves [KAL 95, NAP 98]. The major technological problems that had to be solved were the elimination of the electrical cables between the static and the mobile part of the mechanics and the development of X-ray tubes that can provide radiation continuously over several tens of seconds. When the patient bed is uniformly shifted while the source–detector combination continuously rotates, the source describes a spiral or helix with respect to the patient (see Figure 10.2). Thus, scanners with continuous rotation can acquire, in a given time, data sets that are 5–20 times larger than those obtained with conventional scanners from the 1980s (primarily due to the elimination of the delay between the acquisition of subsequent slices). For example, a helical acquisition of 30 s while the patient bed moves at 10 mm/s enables a volume of 300 mm in the
feet–head direction to be covered (one rotation per second, 10 mm slice thickness, pitch or “step” of one).
Figure 10.2. Helical acquisition geometry of a scanner with continuous rotation
The ability to rapidly acquire 3D data sets has led to a revival of CT, both by improving the performance in existing applications and by enabling new applications. An example of an existing application that substantially benefits from the introduction of scanners with continuous rotation is thoracic imaging. The increase in the number of slices that can be acquired within a single breath hold reduces misregistration problems due to respiration and decreases the amount of contrast agent injected for lesion detection. An example of a new, rapidly evolving application is angiography, which allows a region of interest of a vascular branch to be acquired during the first pass of contrast agents. More recently, cardiac imaging has largely taken advantage of the advances in rotation speed of the source–detector combination (which today permits reaching up to three rotations per second on high-end scanners). Helical acquisitions have several advantages over sequential acquisitions [BRI 95, HEI 93]: they cover a whole volume in a single breath hold, which leads to the elimination of respiratory and other motion artifacts and an improvement in the detection capabilities of CT; they allow optimization of tissue density studies by using overlapping axial slices that pass through the center of small lesions; they enable optimization of iodine opacification, thus making angiographic and organ perfusion studies possible; they offer the possibility of multiplanar and 3D imaging, which are essential for vascular opacification studies in angiography; and they reduce the radiation dose by using higher pitches without loss of coverage.
Drawbacks of helical acquisitions are decreased longitudinal spatial resolution, which is due to the enlargement of the slice profile [BRI 92]; increased noise in the images [POC 92]; a potentially higher irradiation of the patient, which is due to the swiftness and ease of acquisition of several frames at different points in time after contrast agent injection; and the longer time the physician needs to read the images.
10.1.4. Multislice scanners

The end of the 1990s saw the commercial introduction of multislice scanners (multidetector CT or MDCT), which allow the acquisition of multiple slices in a single rotation. The geometry of these systems is 3D, with an X-ray cone beam and a matrix detector (see Figure 10.3). The latter is the key element of these systems.
Figure 10.3. Multislice scanner
Initially, detectors had a few lines, but now detectors composed of several tens of lines are available (between 64 and 320 lines in high-end scanners). They enable a volume several centimeters thick to be covered (from 4 cm to 16 cm on high-end scanners). The most frequently cited advantages of multislice scanners in clinical diagnostics are:

– extension of the examined region for a given acquisition time;

– reduction in examination time for the same coverage;
– improvement in spatial resolution in the longitudinal direction;

– improvement in temporal resolution.

A drawback of multislice tomography is that more radiation scattered by the patient reaches the detector and disturbs the measurements. In fact, the extreme collimation of the detector in single-slice tomography enables elimination of almost all scattered radiation. This is no longer the case in multislice tomography, where the collimation between the lines of the detector is less efficient. This issue becomes crucial when the width of the detector is increased (today up to 16 cm). A description of the state of the art and a complete list of references on scanners with continuous rotation and on multislice scanners may be found in [WAN 00, BUS 00, SEE 01].
10.1.5. Medical applications of CT

CT is a morphological imaging technique that provides cross-sections through human anatomy. Its medical applications rely on two essential characteristics: the undistorted reconstruction of anatomy in axial slices and the study of the densities of structures, quantified on the Hounsfield scale. The examination of pathological processes, benign or malignant, is carried out in two stages: the recognition of a modification in the anatomy by comparison with cross-sections in anatomical atlases, and the measurement of variations in the density of abnormal structures using contrast agents for enhanced sensitivity, administered orally, intravenously, or directly into the examined structure.

Sequential slice-by-slice acquisitions allow satisfactory anatomical examinations, but their slowness is the source of several disadvantages: the frequent presence of respiratory, cardiac, and patient motion artifacts; the difficulty, if not impossibility, of performing satisfactory vascular examinations of whole organs; the absence of continuity of the slices due to the use of multiple breath holds; and the need to resort to overlapping acquisitions, and thus to higher radiation doses, for multiplanar imaging.

Helical acquisitions are made possible by the continuous rotation of the source–detector combination, the emission of radiation by the X-ray tube over the duration of several rotations, and the simultaneous displacement of the patient [CRA 90, KAL 90]. For helical acquisitions with a scanner equipped with a single-line detector, a compromise must be found between the three principal acquisition parameters: the total scan duration (dependent on the length of the breath hold), the nominal slice thickness, and the table speed. Thus, for large volumes such as the thorax or the abdomen, the use of thin slices of 1 mm thickness is impossible in
view of the scan time needed for their acquisition. Only slices of 5 mm thickness may be used for such large volumes, which degrades the longitudinal spatial resolution.

Multislice and helical tomography are conceptually similar. The primary interest in multislice tomography arises from the ability to examine a given region in less time while using thin slices. In combination with suitable acquisition parameters, this leads to isotropic voxels [DAL 07], which are particularly well suited for 2D and 3D postprocessing. Thus, the imaging of large volumes within times that allow tracking of contrast agents in the vascular network becomes possible.
10.2. Physics of helical tomography

10.2.1. Projection acquisition system

The external appearance of the mechanics (or gantry) of modern scanners has not changed much over the last 20 years. However, numerous components have undergone considerable evolution with the introduction of helical CT [NAP 98]. A block diagram of a helical CT scanner is shown in Figure 10.4.
Figure 10.4. Block diagram of a helical CT scanner
10.2.1.1. Mechanics for continuous rotation

Helical CT scanners require mechanics that allow acquisitions with continuous rotation. For this reason, the electrical cables connecting the static and the mobile part of the mechanics, which were used in conventional CT scanners (and which limit the acquisition to one or two rotations), were replaced by rotating contacts. The latter transfer the electrical energy to the mobile part (in particular to the X-ray generator, the filters, and the collimators) and receive the acquired data from the detector on the mobile part. Today, the most rapid scanners reach a speed of more than three rotations per second.

10.2.1.2. X-ray tubes with large thermal capacity

CT is the application of X-ray imaging that imposes the most severe constraints on the X-ray tubes. Conventional CT scanners use delays between the acquisition of adjacent slices to leave time for the tube to cool down. However, acquisition with helical CT scanners demands that the X-ray tube runs continuously for several tens of seconds. Thus, the main limitation of the X-ray tubes used in helical CT scanners is temperature. For example, a typical acquisition of 60 s duration at 120 kV and 300 mA produces more than 2 MJ of thermal energy in the rotating anode of the X-ray tube. Considerable effort has been made by the manufacturers of X-ray tubes over the last decade to improve the thermal capacity, the rate of thermal dissipation, and the lifetime (expressed in number of slices) of tubes. Moreover, the tubes have to be adapted to withstand the high rotation speeds used by current CT scanners.

10.2.1.3. X-ray detectors with high dynamics, efficiency, and speed

The main constraints that the detector of a CT scanner must meet are:

– good quantum detection efficiency (>70%);

– high dynamic range (2 × 10⁵);

– very rapid decay of the signal after switching off the X-ray beam (to 10⁻⁴ within 3 ms).

The detectors of all CT scanners currently in use are solid-state detectors (scintillators coupled to photodiodes). This technology is now adopted by all manufacturers, and it has completely replaced gas detectors. Its principal advantages over gas detectors are the higher absorption rate (>90% for a thickness of some millimeters) and the possibility of building matrix detectors with it. The scintillator materials are either monocrystals or ceramic polycrystals. Extensive research is still performed to develop the most powerful scintillator materials at reasonable cost, in particular in the context of large multislice detectors.
The development of multislice detectors also poses the problem of transferring considerable amounts of data from the detector to the acquisition and processing system. The increasing number of slices has led to the design of dedicated solutions.

10.2.1.4. Helical acquisitions

The combination of a continuous rotation of the gantry and a translation of the patient bed leads to helical acquisitions. Two parameters control the geometry of helical acquisitions [BLA 98]. The first is the size of the collimator w in mm (or slice thickness at the center of rotation). As for conventional CT scanners, w is the parameter that defines the slice profile, or section sensitivity profile (SSP), and thus has a decisive influence on the longitudinal spatial resolution. Consequently, an increase in w permits increasing the number of photons per slice, reducing the noise, and enhancing the contrast-to-noise ratio, at the expense of the longitudinal spatial resolution. The second parameter is the speed of the patient bed s in mm/s. For helical acquisitions, the longitudinal extent of the acquired volume is the product of the speed s and the time the X-ray tube runs continuously. Although w and s may be chosen independently, it is practical to define a dimensionless quantity p called pitch:
p = \frac{s \times T_{360}}{w}

where s is the table speed (mm/s), T_{360} the period of a 360° rotation (s), and w the collimation (mm).
The pitch thus corresponds to the ratio of the displacement of the patient bed during a 360° rotation to the slice thickness w. It is worth noting that a helical CT scanner with a pitch of zero mimics a conventional CT scanner (a single slice is examined). Typically, the value of the pitch is chosen between 0.5 and 3. Its relevance and influence are discussed below (see Figure 10.5).
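A trivial helper applying this definition, together with the coverage relation of section 10.1.3 (function and parameter names are ours), might read:

# Sketch: pitch and longitudinal coverage of a helical acquisition.
def pitch(table_speed_mm_s, rotation_period_s, collimation_mm):
    return table_speed_mm_s * rotation_period_s / collimation_mm

def coverage_mm(table_speed_mm_s, scan_time_s):
    return table_speed_mm_s * scan_time_s

print(pitch(10.0, 1.0, 10.0))    # pitch of one
print(coverage_mm(10.0, 30.0))   # 300 mm covered in 30 s, as in 10.1.3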
10.2.2. Reconstruction algorithms

10.2.2.1. Hypotheses

The mathematical theory that forms the basis of tomographic image reconstruction was published by Radon in 1917 [RAD 17]. An overview may be found in Chapters 1–3 and, in the context of medical imaging, also in [HER 80, KAK 88]. The reconstruction algorithms implemented on conventional CT scanners follow an analytical approach and essentially rely on filtered backprojection (FBP) of the data. They implicitly make the assumption that no physiological or patient motion occurs during the acquisition. Any such motion causes not only blurring but also artifacts that may render the reconstructed images unusable [JOS 81].
Figure 10.5. Position of the cross-section through a patient for four pitch values (w is the slice thickness, z is the position of the center of the slice as a function of the rotation angle θ, and z0 is this position for θ = 0°)
In practice, this assumption is never satisfied (involuntary patient motion and physiological motion), and considerable efforts have been made to reduce motion at its source, to decrease the probability of movements (more rapid acquisitions), and to correct the projections before reconstruction. Compared to sequential slice-by-slice CT, helical CT introduces movement of the patient bed during the acquisition. This movement, although slow and perfectly known, required the development of original reconstruction algorithms. They use a linear interpolation of the measured projections to calculate projections that correspond to axial cross-sections. These are then processed with standard FBP algorithms (see Chapter 2). Therefore, two new “concepts”, which are specific to helical CT, have to be considered: linear interpolation and slice spacing.

10.2.2.2. Interpolation algorithms

The conceptually simplest approach enables calculation of projections over 360° from projections measured over two rotations (or 720°) by linear interpolation [KAL 90]. An estimate of the projection p for a position z = z0 and an angle θ is obtained by a linear combination of the two closest projections measured at the same angle θ (see Figure 10.6).
Computed Tomography
269
reduces the noise in the image, but also the longitudinal spatial resolution, and it increases partial volume effects because of the larger attenuation profile of the crosssection [POL 92].
Cross section to be reconstructed
cos T +1
0
-1 d t
z’
z z’+ d
Distance Time
Figure 10.6. Linear interpolation over 360°. A linear interpolation between the points z’ and z’ + d enables calculation of projections that correspond to an arbitrary axial cross-section at z = z0
“Linear interpolation over 180°” algorithms were developed with the aim of avoiding the reduced longitudinal resolution and of improving the temporal resolution (and thus reducing motion artifacts). This approach was inspired by reconstruction algorithms employed in sequential tomography, which use projections over 180° plus the fan-beam angle. It relies on projections acquired over one rotation only [CRA 90]. For more details on these reconstruction algorithms, the reader is referred to Chapter 2.

10.2.2.3. Slice spacing

Since helical acquisitions are continuous, it is possible to reconstruct slices whose centers may be positioned anywhere along the longitudinal axis and whose spacing may be chosen with considerable overlap. In this way, a structure that is smaller than the slice thickness may be reconstructed in several overlapping slices. In one of these slices, it may be centered, leading to reduced partial volume effects, increased contrast between the structure and its environment, and fewer so-called stair-step artifacts in reformatted 2D images and rendered 3D images [KAL 94, WAN 94a].
10.2.3. Image quality

10.2.3.1. Axial spatial resolution

The spatial resolution in the axial plane depends on numerous parameters, among which are the size of the focal spot of the X-ray source and the geometry of the detector pixels. There is no significant difference between the spatial resolution in images acquired in slice-by-slice and in helical mode [BLU 81].

10.2.3.2. Longitudinal spatial resolution

The longitudinal spatial resolution is described by the SSP. The SSP of a helical CT scanner depends on the collimation, the pitch, and the reconstruction algorithm. Ideally, the SSP has a rectangular shape and a width equal to the collimation of the detector. This rectangular shape is approached by acquisitions in slice-by-slice mode. Helical acquisitions entail an enlargement of the SSP. For a fixed collimation, the extent of the SSP grows with the pitch and with the use of a linear interpolation over 360° rather than 180° [KAL 95, WAN 94b]. While the slices obtained with helical CT have, for a given collimation, a larger actual thickness than those obtained with sequential CT, it is possible to reconstruct them with overlap a posteriori, which reduces partial volume effects and enables improvement of the detectability of low-contrast structures. This advantage minimizes the drawbacks due to the enlargement of the SSP and entails no increase in the radiation dose [KAL 94].

10.2.3.3. Image noise

In helical acquisitions, the type of linear interpolation employed adds to the classical factors on which the noise in slice-by-slice tomographic images depends (including slice thickness, output dose and voltage of the X-ray tube, geometry and efficiency of the detector, and the reconstruction filter [HAN 81]). As indicated before, the use of a linear interpolation algorithm over 360° reduces image noise at the expense of the extent of the SSP (and thus of the longitudinal spatial resolution). At the center of an image acquired in helical mode and reconstructed with a linear interpolation algorithm over 180°, the noise is increased by a factor of 1.1 to 1.4 compared to an image acquired in slice-by-slice mode under the same conditions [WAN 93].
10.2.3.4. Artifacts

Most of the artifacts known from sequential tomography appear similarly in helical tomography [HSI 95, JOS 81]. For example, the effects of beam hardening and sampling have the same appearance. As mentioned previously, the enlargement of the SSP leads to an increase in partial volume effects in helical tomography. The artifacts arising from involuntary patient motion or physiological motion are reduced in helical tomography. While patient motion produces comparable in-plane effects in helical and sequential tomography, these effects are reduced in reformatted 2D and rendered 3D images in helical tomography. In fact, the helical mode enables the acquisition of continuous volumes in far shorter times than the slice-by-slice mode.

In conclusion, the majority of image quality factors are equivalent in helical and in slice-by-slice tomography, with the exception of the longitudinal resolution and noise. However, clinical studies comparing the two acquisition modes have not shown a significant difference in image quality that may be exploited for diagnostic purposes. Obtaining optimal image quality in helical tomography requires choosing a small slice thickness, a pitch close to one, and a large overlap between the reconstructed slices.
10.2.4. Dose

CT is an increasingly employed modality. Since it involves some of the highest doses in modern medical imaging, it is a substantial contributor to the total radiation to which the population is exposed. Thoracic CT, in fact, entails a very high radiation dose, 50 to 200 times higher than the dose of a conventional thoracic X-ray [TRI 99]. Helical CT potentially involves a lower dose than sequential CT. For a given acquisition volume, the dose of both is the same if a pitch of one and identical settings for the X-ray tube (mA, kV) are used. However, it has been shown that a pitch of more than one degrades image quality only a little, while reducing the dose significantly (for a pitch of two, the dose decreases by almost 50%, for example). This dose reduction only holds for single slice CT. For multislice CT, the mAs per slice remains constant whatever the value of the pitch.
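For single slice CT, this behavior can be summarized by a rough rule of thumb (assuming fixed tube settings, and ignoring overbeaming and other second-order effects):

$$ \mathrm{Dose} \;\propto\; \frac{\mathrm{mAs}}{\mathrm{pitch}}, \qquad \text{e.g. pitch } 1 \rightarrow 2 \;\Rightarrow\; \text{dose reduced by} \approx 50\%. $$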
10.3. Applications of volume CT

It is impossible to enumerate here all the medical applications of volume CT. We limit ourselves to particularly innovative applications made possible by the recent introduction of multidetector scanners, which provide acquisitions with excellent spatial resolution due to isotropic voxels, improved temporal resolution (<0.5 s), and large coverage.
10.3.1. Role of visualization of axial slices

Currently, CT acquisitions are displayed as axial slices, the indispensable basis of the medical interpretation of examinations. Reformatted 2D and rendered 3D images are, however, increasingly used for certain medical indications, when the axial plane is not the most favorable plane and comprehension of the pathology benefits from a different form of visualization.
10.3.2. Role of 2D and 3D postprocessing

Several types of 2D and 3D reformats are used in clinical practice [LUC 05, FER 01], either on the console of the scanner or on a separate workstation. 2D multiplanar reformats (MPR) enable visualization of the volume in planes other than the axial plane. Their advantages include instantaneous and interactive reconstruction that is simple to implement, and preservation of the grayscale on the Hounsfield scale. Maximum intensity projection (MIP) [NAP 92] and minimum intensity projection (minIP) [REM 96] reformats have the advantages of being fast and of facilitating the selection of the projected volume. MIP reformats retain the densest voxels in the selected data volume. They are widely used in vascular imaging after injection of iodine contrast agents (angiography) and in pulmonary imaging to support the detection of nodules and pulmonary cancer [JAN 07]. minIP reformats retain the least dense voxels in the selected data volume. 3D reformats are surface [CLI 91, MAG 91] or volume [FIS 87] renderings, using either external rendering by orthographic projection or internal rendering by perspective projection [FER 96].
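In terms of volume operations, MIP and minIP reduce to per-ray extrema; a minimal numpy sketch for orthographic projections along one volume axis (purely illustrative) is:

```python
import numpy as np

def mip(volume, axis=0):
    """Maximum intensity projection: keep the densest voxel along each ray."""
    return volume.max(axis=axis)

def minip(volume, axis=0):
    """Minimum intensity projection: keep the least dense voxel along each ray."""
    return volume.min(axis=axis)

# example on a random Hounsfield-like volume ordered (z, y, x)
vol = np.random.normal(0.0, 100.0, size=(64, 128, 128))
coronal_mip = mip(vol, axis=1)     # collapse the y axis
axial_minip = minip(vol, axis=0)   # collapse the z axis
```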
10.3.3. Abdominal applications

The essential contribution of volume CT in this domain is the combination of improved temporal resolution with iodine opacification, which enables the study of parenchymography at selective times (arterial, venous), before the contrast agent diffuses into the interstitial tissue [ASC 06].
Liver examinations in oncology clearly benefit from volume acquisitions. With these, it is possible to image the whole liver in a single breath hold at different points in time: in the early arterial phase, in the portal venous phase, and finally in the late equilibrium phase [KAM 05]. Such selective vascular studies, using thin, overlapping slices, provide better detection and characterization of focal hepatic lesions, at the expense of a significant increase in the radiation dose the patient is exposed to. The volume acquisition and the 2D and 3D reformats allow a precise mapping of lesions with respect to the vessels of the liver, which facilitates preparation for hepatic surgery [WIN 95].

Pancreatic examinations are facilitated by a volume acquisition with an iodine contrast agent injection, which enhances the contrast between tumors and healthy parenchyma and permits an assessment of the preoperative, locoregional vascular and ganglionary extent more precisely than before [SCH 07].

Virtual colonoscopy involves volume CT acquisitions of the colon after its preparation and inflation with air. The use of advanced visualization tools, such as virtual endoscopy and computer-assisted detection (CAD), enables small tumors in the digestive system to be found with excellent sensitivity and specificity [SOH 08] (see Figure 10.7).

Tomographic imaging of the kidneys facilitates the discovery, characterization, and assessment of the preoperative extent of small renal tumors [KOC 05].

The diagnosis of atypical abdominal and lumbar pains benefits from volume acquisition of the abdomen without iodine opacification to visualize radio-opaque lithiasis with low dose protocols [TAC 03]. CT in this application replaces intravenous urography, which requires more time and a higher dose, involves injection of an iodine contrast agent, and shows only the urinary system.
10.3.4. Thoracic applications

The impact of multidetector volume CT on thoracic imaging is fundamental because of the suppression of respiratory motion artifacts: the acquisition of an adult thorax with slices thinner than 1 mm currently requires less than 6 s, which makes this technique available for practically all patients, even the most dyspneic ones. Imaging of the different parts of the thorax benefits from this technique: parenchyma, mediastinum, tracheobronchial tree, and arterial and venous vessels. The examination of pulmonary nodules, sometimes corresponding to small, operable tumors [HEN 99], is improved, allowing detection of significantly more nodules than with sequential acquisitions and a better characterization of them by using thin slices and injecting a contrast agent [SWE 00]. The recent introduction of
CAD has enabled improvement in the detection of small anomalies, supplementing analysis by radiologists [GOL 08].
Figure 10.7. Virtual colonoscopy acquired with a 64-line scanner in a 65-year-old patient presenting with abdominal pains and temporary subocclusion. Virtual 3D reconstruction of the colonic lumen, inflated with air (A). Discovery of a tissue mass occupying the colonic lumen (B). Virtual endoscopy view showing this mass, a colonic cancer (C)
Figure 10.8. CT of the thorax, acquired in a 50-year-old patient presenting with severe dyspnea three weeks after tracheal intubation. The volume is acquired with a slice thickness of 3 mm. An axial cross-section (A) reveals a stenosis narrowing the trachea (arrow). The oblique frontal reconstruction (B) and the external 3D surface reconstruction (C) show the height (between the arrows) of the stenosis, an important parameter to determine before taking therapeutic decisions. The virtual endoscopy (D) shows the severity and the complexity of the stenosis non-invasively (arrow)
The non-invasive examination of the tracheobronchial tree is currently possible with axial slices and with 2D and 3D reformats, offering an unmatched sensitivity
for the detection of lesions, which is especially useful for the etiological diagnosis of hemoptysis, bronchial stenoses (see Figure 10.8), and postoperative tumors and pathology [FER 07].

The combination of the administration of an iodine contrast agent and volume acquisitions with thin slices (1 mm) enables CT angiography of the pulmonary arteries [GHA 08]. With a sensitivity and specificity of more than 95%, CT angiography has completely changed the treatment of pulmonary embolism. The most recent recommendations simplify the procedure for patients with suspected pulmonary embolism: volume CT is the only applied imaging modality, replacing sonography of the veins of the lower limbs, pulmonary scintigraphy, and pulmonary angiography.
10.3.5. Vascular applications

This is the domain in which volume CT, in combination with the administration of iodine contrast agents by an automatic injector that ensures opacification at a constant level, has brought the greatest progress compared to sequential CT [ZEM 95]. Such a non-invasive, rapid, reproducible, and, compared to angiography, cheap examination is of interest for the vasculature of the entire organism. However, CT angiography competes with MR angiography, which involves neither ionizing radiation nor nephrotoxic contrast agents. Postprocessing is indispensable for the visualization of these examinations and combines MIP, multiplanar reformats, and volume rendering.

The treatment of aortic, thoracic [WIN 04] and abdominal pathology, be it congenital or acquired, has been changed by CT. Currently, it is possible to image the whole aorta in a single breath hold with one intravenous injection of a contrast agent to diagnose a dissection (see Figure 10.9), examine a post-traumatic aortic rupture, and assess the extent of an aneurysm. Thus, diagnostic angiography is no longer used for examinations of the aorta and its principal branches.

CT angiography has replaced the arteriography of renal arteries in examinations of arterial stenosis in patients with hypertension and preserved renal function [PRO 99]. It is about to be used instead of diagnostic angiography for the visualization of the arteries of the lower limbs [ALB 07].

CT of the carotid arteries is a robust technique for examining carotid stenosis [CHA 04]. CT provides information on calcification and parietal ulceration, which is not available from conventional angiography [CUM 94]. CT angiography of the circle of Willis, using thin slices and 3D reconstruction, is about to replace angiography in the diagnosis of intracerebral hemorrhage due to intracranial aneurysms [CHA 04]. Moreover, it enables the preparation of therapeutic angiographic procedures. The qualitative and quantitative study of cerebral perfusion with CT is possible,
enabling a non-invasive diagnosis of cerebral ischemia, as well as of cerebral reserve before therapy and of the microvascularization of tumors [HOE 04].
Figure 10.9. CT of the thoracic aorta with venous injection of contrast agent in a 35-year-old patient presenting with a violent thoracic pain attributed to a dissection of the thoracic aorta. The axial cross-section (A) shows the aortic dissection involving the ascending (arrow) and the descending (bent arrow) aorta. The dissected channel is thrombosed, as it does not enhance after injection (star). An oblique sagittal reconstruction (B) shows the extent of the dissection at the aortic arch and the integrity of the origin of the brachiocephalic arteries. This examination has completely replaced arterial angiography
10.3.6. Cardiac applications

In recent years, this application has undoubtedly had the most profound impact: combining a continuous, rapid rotation (<0.5 s), sub-millimeter detectors, and synchronization with the cardiac cycle enables images of the heart and the coronaries to be obtained without motion artifacts in different cardiac phases. The clinical applications are still being validated, but are already very promising: study of the coronary anatomy, non-invasive coronary angiography for the detection of arterial stenosis [HAL 08], and examination of cardiac function [SAV 07], the cardiac valves, and myocardial perfusion (see Figure 10.10).
Figure 10.10. CT of the heart and the coronary arteries. Multiline (64 lines) CT with injection of an iodine contrast agent and synchronization with the electrocardiogram enables the reconstruction of images as a function of the cardiac phase. It enables the reconstruction of the coronary tree without motion artifacts and the application of volume rendering (A) or MIP (B) for visualization. CT coronary angiography allows non-invasive study of the principal coronary branches down to a diameter of 2 mm. The automated calculation of the ejection fraction of the left ventricle (C) is also possible, by measuring the volumes of the left ventricle in systole (D) and diastole (E), giving cardiac CT angiography a functional dimension
10.3.7. Polytrauma

The treatment of polytrauma patients, most often injured in car, sports, or work accidents, is optimized by volume CT, which enables acquisition of the cranial, cervical, thoracic, and abdominal–pelvic regions with thin slices, both with and without contrast agent injection, in a very short time [SCA 08]. This technique has completely replaced spinal and cranial radiographs, as well as aortic angiography.
10.3.8. Pediatric applications

The swiftness of the acquisition extends the applications of pediatric CT, because imaging during a brief breath hold reduces the need for sedation of children [FRU 98].
10.4. Conclusion

The introduction at the beginning of the 1990s of CT scanners with continuous, rapid rotation (down to 0.3 s per rotation at present), which allow helical acquisitions, represents a major advance in CT imaging. These scanners have improved the efficiency of CT in numerous existing protocols and have led to the development of new applications. Helical CT enables the examination of a volume in a far shorter time than conventional CT, while providing comparable image quality. This allows, in particular, optimization of the use of contrast agents, reduction of partial volume effects, and an increase in the number of patients that may be examined in a given time.

The major innovation in CT at the end of the 1990s, multiline detectors, has led to a new boom in this imaging modality by allowing a further step towards true isotropic volume imaging. These new detectors (currently acquiring up to several tens of slices simultaneously), together with increased rotation speeds of the gantry, open up new and innovative applications of CT, notably in the domains of vascular and cardiac imaging. The arrival of multiline detectors with high rotation speeds (0.3 s for 360°) has opened up perspectives in cardiac imaging (coronaries and perfusion) thanks to the use of reconstruction techniques over 180° (which improve the temporal resolution to 0.15 s) and to prospective or retrospective synchronization with the ECG.
Developments to multiply the number of lines on the detectors and to simultaneously acquire a larger number of slices are ongoing. They aim at covering complete organs, such as the heart or the brain, in a single rotation. Improved temporal resolution is another goal. Increases in the rotation speed of the source–detector combination face physical limits that are difficult to circumvent. One manufacturer (Siemens Medical Solutions) introduced at the end of 2005 a scanner that is equipped with two X-ray sources and two detectors (dual-source CT) and that is primarily dedicated to cardiac imaging.

Finally, all manufacturers are working on dual-energy CT. Dual-energy X-ray tomography requires two acquisitions at two different X-ray energies and enables measurement of the mean atomic number in each voxel (which allows differentiation of materials with very similar densities). Different technical solutions are being evaluated (two imaging chains, sandwich detectors, alternation of the energies of the X-ray source, photon counting detectors with energy selection). Several clinical protocols are currently under development for the only solution clinically available today, dual-source CT.
10.5. Bibliography

[ALB 07] ALBRECHT T., MEYER B. C., “MDCT angiography of peripheral arteries: technical considerations and impact on patient management”, Eur. Radiol., vol. 17, Suppl. 6, pp. 5–15, 2007.

[ASC 06] ASCHOFF A. J., “MDCT of the abdomen”, Eur. Radiol., vol. 16, Suppl. 7, pp. 54–57, 2006.

[BLA 98] BLANCK C. A., Understanding Helical Scanning, Williams & Wilkins, 1998.

[BLU 81] BLUMENFELD S. M., GLOVER G. H., “Spatial resolution in computed tomography”, in Newton T. H., Potts D. G. (Eds.), Radiology of the Skull and Brain: Technical Aspects of Computed Tomography, C. V. Mosby, pp. 3918–3940, 1981.

[BRI 92] BRINK J. A., HEIKEN J. P., BALFE D. M., SAGEL S. S., DICROCE J., VANNIER M. W., “Spiral CT decreases spatial resolution in vivo due to broadening of section-sensitivity profile”, Radiology, vol. 185, pp. 469–474, 1992.

[BRI 95] BRINK J. A., “Technical aspects of helical (spiral) CT”, Radiol. Clin. North Am., vol. 33, pp. 825–841, 1995.

[BUS 00] BUSHONG S. C., Computed Tomography, McGraw-Hill, 2000.

[CAR 99] CARLSON C. A., “Imaging modalities in X-ray computerized tomography and in selected volume tomography”, Phys. Med. Biol., vol. 44, pp. 23–56, 1999.
Computed Tomography
281
[CHA 04] CHAWLA S., “Advances in multidetector computed tomography: applications in neuroradiology”, J. Comput. Assist. Tomogr., vol. 28, Suppl. 1, pp. 12–16, 2004.

[CRA 90] CRAWFORD C. R., KING K., “Computed tomography scanning with simultaneous patient translation”, Med. Phys., vol. 17, n° 6, pp. 967–982, 1990.

[CUM 94] CUMMING M. J., MORROW I. M., “Carotid artery stenosis: a prospective comparison of CT angiography and conventional angiography”, AJR Am. J. Roentgenol., vol. 163, pp. 417–423, 1994.

[DAL 99] DALY B., TEMPLETON P. A., “Real-time CT fluoroscopy: evolution of an interventional tool”, Radiology, vol. 211, pp. 309–315, 1999.

[DAL 07] DALRYMPLE N. C., PRASAD S. R., EL-MERHI F. M., CHINTAPALLI K. N., “Price of isotropy in multidetector CT”, Radiographics, vol. 27, pp. 49–62, 2007.

[FER 96] FERRETTI G., VINING D. J., KNOPLIOCH J., COULOMB M., “Tracheobronchial tree: three-dimensional spiral CT with virtual bronchoscopy”, J. Comput. Assist. Tomogr., vol. 20, pp. 777–781, 1996.

[FER 01] FERRETTI G., BRICAULT I., COULOMB M., “Virtual tools for imaging the thorax”, Eur. Resp. J., vol. 18, pp. 1–12, 2001.

[FER 07] FERRETTI G. R., PISON C., RIGHINI C., “Tomodensitométrie volumique : avancées récentes en pathologie trachéale acquise”, Ann. Otolaryngol. Chir. Cervicofac., vol. 124, pp. 292–300, 2007.

[FIS 87] FISHMAN E. K., DREBIN B., MAGID D. et al., “Volumetric rendering techniques: applications for three-dimensional imaging of the hip”, Radiology, vol. 163, pp. 737–738, 1987.

[FRU 98] FRUSH D. P., DONNELLY L. F., “Helical CT in children: technical considerations and body applications”, Radiology, vol. 209, pp. 37–48, 1998.

[GHA 08] GHAYE B., DONDELINGER R. F., “When to perform CTA in patients suspected of PE?”, Eur. Radiol., vol. 18, pp. 500–509, 2008.

[GOL 08] GOLDIN J. G., BROWN M. S., PETKOVSKA I., “Computer-aided diagnosis in lung nodule assessment”, J. Thorac. Imaging, vol. 23, pp. 97–104, 2008.

[HAL 08] HALON D. A., RUBINSHTEIN R., GASPAR T., PELED N., LEWIS B. S., “Current status and clinical applications of cardiac multidetector computed tomography”, Cardiology, vol. 109, pp. 73–84, 2008.

[HAN 81] HANSON K. M., “Noise and contrast discrimination in computed tomography”, in Newton T. H., Potts D. G. (Eds.), Radiology of the Skull and Brain: Technical Aspects of Computed Tomography, C. V. Mosby, pp. 3941–3955, 1981.

[HEI 93] HEIKEN J. P., BRINK J. A., VANNIER M. W., “Spiral (helical) CT”, Radiology, vol. 189, pp. 647–656, 1993.
282
Tomography
[HEN 99] HENSCHKE C. I., MCCAULEY D. I., YANKELEVITZ D. F. et al., “Early lung cancer action project: overall design and findings from baseline screening”, Lancet, vol. 354, pp. 99–105, 1999.

[HER 80] HERMAN G. T., Image Reconstruction from Projections: The Fundamentals of Computerized Tomography, Academic Press, 1980.

[HOE 04] HOEFFNER E. G., CASE I., JAIN R., GUJAR S. K., SHAH G. V., DEVEIKIS J. P., CARLOS R. C., THOMPSON B. G., HARRIGAN M. R., MUKHERJI S. K., “Cerebral perfusion CT: technique and clinical applications”, Radiology, vol. 231, pp. 632–644, 2004.

[HSI 95] HSIEH J., “Image artefacts, causes, and correction”, in Goldman L. W., Fowlkes J. B. (Eds.), Medical CT and Ultrasound: Current Technology and Applications, Advanced Medical, pp. 487–518, 1995.

[HU 99] HU H., “Multi-slice helical CT: scan and reconstruction”, Med. Phys., vol. 26, n° 1, pp. 5–18, 1999.

[JAN 07] JANKOWSKI A., MARTINELLI T., TIMSIT J. F., BRAMBILLA C., THONY F., COULOMB M., FERRETTI G., “Pulmonary nodule detection on MDCT images: evaluation of diagnostic performance using thin axial images, maximum intensity projections, and computer-assisted detection”, Eur. Radiol., vol. 17, pp. 3148–3156, 2007.

[JOS 81] JOSEPH P., “Artifacts in computed tomography”, in Newton T. H., Potts D. G. (Eds.), Radiology of the Skull and Brain: Technical Aspects of Computed Tomography, C. V. Mosby, pp. 3956–3992, 1981.

[KAK 88] KAK A. C., SLANEY M., Principles of Computerized Tomographic Imaging, IEEE Press, 1988.

[KAL 90] KALENDER W. A., SEISSLER W., KLOTZ E., VOCK P., “Spiral volumetric CT with single-breath-hold technique, continuous transport, and continuous scanner rotation”, Radiology, vol. 176, pp. 181–183, 1990.

[KAL 94] KALENDER W. A., POLACIN A., SUSS C., “A comparison of conventional and spiral CT: an experimental study on the detection of spherical lesions”, J. Comput. Assist. Tomogr., vol. 18, pp. 167–176, 1994.

[KAL 95] KALENDER W. A., “Principles and performance of spiral CT”, in Goldman L. W., Fowlkes J. B. (Eds.), Medical CT and Ultrasound: Current Technology and Applications, Advanced Medical, pp. 379–410, 1995.

[KAM 05] KAMEL I. R., LIAPI E., FISHMAN E. K., “Liver and biliary system: evaluation by multidetector CT”, Radiol. Clin. North Am., vol. 43, pp. 977–997, 2005.

[KOC 05] KOCAKOC E., BHATT S., DOGRA V. S., “Renal multidetector row CT”, Radiol. Clin. North Am., vol. 43, pp. 1021–1047, 2005.

[KOP 99] KOPECKY K. K., BUCKWALTER K. A., SOKIRANSKI R., “Multi-slice CT spirals past single-slice CT in diagnostic efficacy”, Diagnostic Imaging, vol. 21, pp. 36–42, 1999.
Computed Tomography
283
[LOR 99] LORCY P., FAY A. F., VACHEY C., “Scanographie”, J. Radiol., vol. 80, n° 7, pp. 771–775, 1999.

[LUC 05] LUCCICHENTI G., CADEMARTIRI F., PEZZELLA F. R., RUNZA G., BELGRANO M., MIDIRI M., SABATINI U., BASTIANELLO S., KRESTIN G. P., “3D reconstruction techniques made easy: know-how and pictures”, Eur. Radiol., vol. 15, pp. 2146–2156, 2005.

[MAG 91] MAGNUSSON M., LENZ R., DANIELSON P. E., “Evaluation of methods for shaded surface display of CT volumes”, Comput. Med. Imag. Graph., vol. 15, pp. 247–256, 1991.

[MER 99] MERRAN S., “Impact du fluoroscanner sur la radiologie interventionnelle”, RBM, vol. 21, pp. 131–314, 1999.

[MCC 95] MCCOLLOUGH C. H., “Principles and performances of electron beam CT”, in Goldman L. W., Fowlkes J. B. (Eds.), Medical CT and Ultrasound: Current Technology and Applications, Advanced Medical, pp. 411–436, 1995.

[MCC 99] MCCOLLOUGH C. H., ZINK F. E., “Performance evaluation of a multi-slice CT system”, Med. Phys., vol. 26, n° 11, pp. 2223–2230, 1999.

[NAP 92] NAPEL S., MARKS M. P., RUBIN G. D. et al., “CT angiography with spiral CT and maximum intensity projection”, Radiology, vol. 185, pp. 607–610, 1992.

[NAP 98] NAPEL S., “Basic principles of spiral CT”, in Fishman E. K., Jeffrey R. B. (Eds.), Spiral CT: Principles, Techniques, and Clinical Applications, Lippincott-Raven, pp. 3–15, 1998.

[NEW 81] NEWTON T. H., POTTS D. G., Radiology of the Skull and Brain: Technical Aspects of Computed Tomography, C. V. Mosby, 1981.

[POL 92] POLACIN A., KALENDER W. A., MARCHAL G., “Evaluation of section sensitivity profiles and image noise in spiral CT”, Radiology, vol. 185, pp. 29–35, 1992.

[PRO 99] PROKOP M., “Protocols and future directions in imaging of renal artery stenosis: CT angiography”, J. Comput. Assist. Tomogr., vol. 23, pp. 101–110, 1999.

[RAD 17] RADON J., “Über die Bestimmung von Funktionen durch ihre Integralwerte längs gewisser Mannigfaltigkeiten”, in Math.-Phys. Kl. Berichte d. Sächsischen Akademie der Wissenschaften, Leipzig, vol. 69, pp. 262–277, 1917.

[REM 96] RÉMY-JARDIN M., RÉMY J., GOSSELIN B., COPIN M. C., WURTZ A., DUHAMEL A., “Sliding thin slab, minimum intensity projection technique in the diagnosis of emphysema: histopathologic-CT correlation”, Radiology, vol. 200, pp. 665–671, 1996.

[ROB 85] ROBB R. A., “X-ray computed tomography: implementation and applications”, in Robb R. A. (Ed.), Three-dimensional Biomedical Imaging, CRC Press, vol. 1, pp. 81–106, 1985.

[RUB 99] RUBIN G. D., SHIAU M. C., SCHMIDT A. J., FLEISCHMANN D., LOGAN L., LEUNG A. N., JEFFREY R. B., NAPEL S., “Computed tomographic angiography: historical perspective and new state-of-the-art using multi detector-row helical computed tomography”, J. Comput. Assist. Tomogr., vol. 23, pp. 83–90, 1999.
284
Tomography
[SAV 07] SAVINO G., ZWERNER P., HERZOG C., POLITI M., BONOMO L., COSTELLO P., SCHOEPF U. J., “CT of cardiac function”, J. Thorac. Imaging, vol. 22, pp. 86–100, 2007.

[SEE 01] SEERAM E., Computed Tomography: Physical Principles, Clinical Applications, and Quality Control, Elsevier, 2001.

[SCA 08] SCAGLIONE M., PINTO A., PEDROSA I., SPARANO A., ROMANO L., “Multi-detector row computed tomography and blunt chest trauma”, Eur. J. Radiol., vol. 65, pp. 377–388, 2008.

[SCH 07] SCHIMA W., BA-SSALAMAH A., KOELBLINGER C., KULINNA-COSENTINI C., PUESPOEK A., GOETZINGER P., “Pancreatic adenocarcinoma”, Eur. Radiol., vol. 17, pp. 638–649, 2007.

[SOH 08] SOHNS C., HEUSER M., SOSSALLA S., WOLFF H., OBENAUER S., “Current role and future potential of computed tomographic colonography for colorectal polyp detection and colon cancer screening-incidental findings”, Clin. Imaging, vol. 32, pp. 280–286, 2008.

[SOY 96] SOYER P., HEATH D., BLUEMKE D. A., CHOTI M. A., KUHLMAN J. E., REICHLE R., FISHMAN E. K., “Three-dimensional helical CT of intrahepatic venous structures: comparison of three rendering techniques”, J. Comput. Assist. Tomogr., vol. 20, pp. 122–127, 1996.

[SWE 00] SWENSEN S. J., VIGGIANO R. W., MIDTHUN D. E., MULLER N. L., SHERRICK A., YAMASHITA K., NAIDICH D. P., PATZ E. F., HARTMAN T. E., MUHM J. R., WEAVER A. L., “Lung nodule enhancement at CT: multicenter study”, Radiology, vol. 214, pp. 73–80, 2000.

[TAC 03] TACK D., SOURTZIS S., DELPIERRE I., DE MAERTELAER V., GEVENOIS P. A., “Low-dose unenhanced multidetector CT of patients with suspected renal colic”, Am. J. Roentgenol., vol. 180, pp. 305–311, 2003.

[TRI 99] TRIGAUX J. P., LACROSSE M., “Irradiation en tomodensitométrie thoracique”, Rév. Mal. Respir., vol. 16, pp. 127–136, 1999.

[WAN 93] WANG G., VANNIER M. W., “Helical CT image noise: analytical results”, Med. Phys., vol. 20, pp. 1635–1640, 1993.

[WAN 94a] WANG G., VANNIER M. W., “Stair-step artifacts in three-dimensional helical CT: an experimental study”, Radiology, vol. 191, pp. 79–83, 1994.

[WAN 94b] WANG G., BRINK J. A., VANNIER M. W., “Theoretical FWTM values in helical CT”, Med. Phys., vol. 21, pp. 753–754, 1994.

[WAN 00] WANG G., CRAWFORD C., KALENDER W. A. (guest eds.), “Special issue on multirow detector and cone-beam spiral/helical CT”, IEEE Trans. Med. Imaging, vol. 19, n° 9, 2000.

[WIN 95] WINTER T. C. 3rd, FREENY P. C., NGHIEM H. V. et al., “Hepatic arterial anatomy in transplantation candidates: evaluation with three-dimensional CT arteriography”, Radiology, vol. 195, pp. 363–370, 1995.
Computed Tomography
285
[WIN 04] WINTERSPERGER B. J., NIKOLAOU K., BECKER C. R., “Multidetector-row CT angiography of the aorta and visceral arteries”, Semin. Ultrasound CT MR, vol. 25, pp. 25–40, 2004.

[ZEM 95] ZEMAN R. K., SILVERMAN P. M., VIECO P. T., COSTELLO P., “CT angiography”, Am. J. Roentgenol., vol. 165, pp. 1079–1088, 1995.
Chapter 11
Interventional X-ray Volume Tomography
Chapter written by Michael GRASS, Régis GUILLEMAUD and Volker RASCHE.

11.1. Introduction

11.1.1. Definition

Three-dimensional (3D) X-ray tomography is a medical imaging modality providing information on the distribution of the density of human tissues within a given volume. The technique is based on the measurement of the attenuation of an X-ray beam passing through the subject. The principle of the acquisition is very similar to that in computed tomography (see Chapter 10). Differences result mainly from the system architecture. Instead of using a dedicated tomographic closed bore imaging system, 3D X-ray tomography data are acquired on a conventional X-ray system, in which the X-ray tube and the two-dimensional (2D) planar detector are mounted opposite each other on a C-shaped support (gantry). For tomographic imaging, the source–detector combination rotates around the patient and determines the volume to be imaged. Reconstruction of the 3D attenuation map $\mu(x, y, z)$ is performed similarly to X-ray computed tomography, using the set of measured line integrals of $\mu$ acquired in cone-beam projection geometry. The mathematical background of the reconstruction of volumes from measured integrals is described in Chapter 2.

The tomographic imaging approach described in this chapter is mainly focused on interventional imaging. The increasing complexity of minimally invasive procedures requires the availability of high resolution 3D image information for intervention
planning, guidance and outcome control. In this context, recent image data are a key component in allowing accurate guidance during the intervention.

11.1.2. Acquisition systems

The 2D X-ray detector integrated in an interventional system is usually a flat detector device. It is based on an amorphous silicon plate covered with a scintillation material [SCH 94, BUS 02]. Recent detectors cover a planar region of up to 30 × 40 cm at high spatial resolution, with pixel sizes in the order of 180 × 180 μm². The maximum frame rate is 60 frames per second.

Similar to classical computed tomography, an early system for generating volume information for interventional purposes used a ring-shaped gantry to obtain high geometrical stability. At that time, image intensifier tubes were used to acquire digital images at high frame rates. This combination was realized in the morphometer [SAI 93, HEA 95], which has remained a prototype (see Figure 11.1).
Figure 11.1. 3D scanner: the morphometer (reprinted with kind permission from Springer Science + Business Media, Figure 1 in [HEA 95])
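Whatever the detector technology, the raw intensity readings must be converted to line integrals of the attenuation coefficient μ before any reconstruction, by means of the Beer–Lambert law. A minimal sketch (the flat-field reading I0 is assumed to be available from a gain calibration):

```python
import numpy as np

def intensities_to_line_integrals(I, I0):
    """Beer-Lambert law: I = I0 * exp(-integral of mu dl), so the measured
    line integral of mu is -log(I / I0)."""
    return -np.log(np.clip(I / I0, 1e-12, None))   # clip guards against zeros

# example: a flat field of 10^4 counts and two attenuated readings
p = intensities_to_line_integrals(np.array([5000.0, 100.0]), 1.0e4)
```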
Nowadays, X-ray tomography has been transferred to conventional X-ray C-arm systems, in which the source–detector combination is mounted on a C-shaped gantry, which allows motorized motion of the imaging system around the patient (see Figure 11.2). Besides availability, the major advantage of this solution results from the access to the patient offered by the openness of the C-arm, which is a paramount prerequisite in interventional imaging.
Figure 11.2. Biplane C-arm system (Philips Allura Xper FD20/20)
11.1.3. Positioning with respect to computed tomography

Interventional 3D X-ray tomography obviously shows strong similarities to classical X-ray computed tomography (CT), with some important differences. A detector with a large surface and high spatial resolution is used, which allows coverage of a larger field of view at a substantially better spatial resolution than with CT scanners. Furthermore, the projection images themselves can be used for diagnosis, treatment guidance, and outcome control during the intervention. The low contrast resolution is inferior to CT due to the smaller dynamic range of the measurements/detector and less efficient scatter radiation reduction. The latter results from limitations in accurately positioning an anti-scatter grid due to the small
pixel size. Focusing of the scatter grid is additionally compromised by the inherent capability of C-arm systems to change the distance between focal spot and detector. The effect of the lower dynamic range of the measurements/detector can partly be compensated by smaller detector pixel dimensions. Additionally, low contrast artifacts due to scatter can be reduced by software-based scatter correction [SIE 01, BER 05]. While image artifacts due to scatter are removed to a large extent, the noise fraction in the scattered photons cannot be compensated for, resulting in an intrinsic reduction of the signal-to-noise ratio.

Substantial improvements in detector and system calibration have been achieved in recent years, and the feasibility of low contrast volume imaging has been proven on interventional C-arm systems [ROS 03, RIT 07]. Nowadays, the application range of interventional volume imaging is moving from pure high contrast applications, such as angiography and bone imaging, towards low contrast applications, such as the guidance of minimally invasive abdominal and even cardiac procedures.

Due to the open design and the resulting mechanical instability, as well as the limited speed of the respective detectors, interventional C-arm systems rotate at a significantly lower speed than recent CT scanners. The deformations caused by mechanical bending of the gantry have to be taken into account during the reconstruction, requiring calibrations for an assessment of the precise system geometry.

Further major differences to CT are that C-arm systems are not limited to a circular movement (trajectory) of the source–detector system, as computed tomography systems are, and that cone-beam projection geometries can be chosen more flexibly, allowing different acquisition strategies for completely filling Radon space. Alternative trajectories have been suggested [SCH 03, DEN 05], but their clinical value has not been proven yet. Furthermore, the dimensions of the planar 2D detectors always cause axially truncated projections, and correction methods need to be implemented [LEW 79, SCH 03].

11.2. Example of 3D angiography

Since neuroradiologists were already experienced with rotational data acquisition and motion of the head could be avoided during acquisition, 3D cerebral angiography was the first clinical application of interventional 3D X-ray tomography. Measuring projections along a circular arc using a single contrast agent bolus had previously been introduced in this field for evaluating the spatial relationship of vascular structures and malformations in the cerebral vessel tree [VOI 75]. The straightforward reconstruction of 3D volume representations on an
interventional system was enabled by the introduction of suitable calibration, reconstruction and visualization methods in the clinical systems. The data acquisition and processing strategies will be described as examples of the application of this technique in neuroradiology. Modifications to this acquisition protocol, e.g. for soft tissue or cardiac imaging, will be described in section 11.3.

11.2.1. Principles of projection acquisition

A schematic illustration of the procedure for acquiring projections is presented in Figure 11.3. The projections are acquired along a circular trajectory with an angular coverage of at least 200°. The target area is positioned at the center of rotation (isocenter). An iodine-based contrast agent (e.g. 300 mg/ml) is injected at a flow rate of 4–5 ml/s for the duration of the acquisition, ensuring a complete filling of the arterial system [KEM 98] over the entire data acquisition. The dimension of the acquired projection images is normally 1024² pixels, at a frame rate of 30 frames per second. Depending on the system, in the order of 120 projections are acquired along the circular arc at angular velocities of up to 55°/s. Since different clinical applications require modified acquisition strategies, the protocols are optimized for each specific clinical application to optimally match user needs in terms of 3D image quality, freedom of motion, and X-ray dose.

11.2.2. Data processing

11.2.2.1. Calibration

For 3D reconstruction, a precise knowledge of the projection geometry is mandatory. Interventional X-ray systems were not initially designed for this purpose, and the mechanical instabilities of the gantry require specific calibration methods to obtain such a precise knowledge of the projection geometry for a certain system trajectory. Different calibration methods have been described in the literature [GUG 92, ROU 93, SAI 93, KOP 95, KOP 96, FAH 97]. The use of bead phantoms with known bead geometry has turned out to be a reliable tool for the assessment of the projection parameters. Since the dynamics of the system strongly depend on the trajectory, patient data have to be acquired with precisely the same trajectory as used during calibration. A necessary precondition is the reproducibility of the system dynamics in subsequent scans. Due to the excellent reproducibility of recent systems, calibration is usually only necessary at system installation or after major system maintenance.
Figure 11.3. Schematic illustration of the procedure for acquiring the projections
With the widely used flat detectors, which are not influenced by the Earth’s magnetic field, the calibration procedure is reduced to a single rotational run. The strategy used for determining the geometrical parameters of the projection is presented in Figure 11.4. The phantom (Figure 11.5 – left image) comprises 20 beads positioned in the corners of a regular dodecahedron. The positions of the beads are precisely known in a predefined coordinate system. Three additional markers are located along the z-axis of the phantom. The projected distribution of the points on the detector is almost isotropic (Figure 11.5 – right image). Calculation of the geometric parameters is performed automatically. The markers are detected in the projection and knowledge of the original phantom geometry enables the calculation of the deviation of the projection geometry from the ideal acquisition trajectory.
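The core of such a bead-based calibration can be phrased as estimating, for every projection, a 3 × 4 cone-beam projection matrix from the known 3D bead positions and their detected 2D images. The following direct linear transform (DLT) sketch illustrates the principle only; it is not the procedure of any particular vendor:

```python
import numpy as np

def estimate_projection_matrix(beads_3d, beads_2d):
    """Direct linear transform: solve for the 3x4 matrix P such that
    (u, v, 1)^T ~ P (X, Y, Z, 1)^T, from >= 6 bead correspondences."""
    rows = []
    for (X, Y, Z), (u, v) in zip(beads_3d, beads_2d):
        rows.append([X, Y, Z, 1, 0, 0, 0, 0, -u * X, -u * Y, -u * Z, -u])
        rows.append([0, 0, 0, 0, X, Y, Z, 1, -v * X, -v * Y, -v * Z, -v])
    # the solution is the right singular vector of the smallest singular value
    _, _, vt = np.linalg.svd(np.asarray(rows, dtype=float))
    return vt[-1].reshape(3, 4)

def project(P, point_3d):
    """Apply the estimated matrix: 3D bead position -> detector (u, v)."""
    h = P @ np.append(point_3d, 1.0)
    return h[:2] / h[2]
```

Repeating the estimation for every frame of the calibration run yields one projection matrix per view, from which the deviation from the ideal circular trajectory can be quantified.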
Figure 11.4. Principle of the measurement of the parameters of the projection (geometry calibration)
Figure 11.5. Left image: geometry phantom (dodecahedron). Right image: X-ray projection of the geometry phantom
The geometry of a specific rotational trajectory is parametrized by the set of acquired projections $\Lambda$. Each single projection $\lambda$ may be described by the vector $\vec{S}(\lambda)$ from the origin to the focal spot and the vector $\vec{D}(\lambda)$ from the origin to the point on the detector from which the detector normal points towards the source. In addition, $\hat{u}$, $\hat{v}$, and $\hat{d}$ define the normalized vectors along the detector rows and columns, and the unit vector pointing from the detector surface towards the source. They all depend on $\lambda$. In the case of detector tilt, the outer product of $\hat{u}$ and $\hat{v}$ may not be aligned with $\hat{d}$. A schematic view of the acquisition geometry is given in Figure 11.6. A cone-beam projection measured in this geometry is described by:
$$ Xf(u, v, \lambda) = \int_0^{\infty} f\left(\vec{S}(\lambda) + l\,\hat{e}(u, v, \lambda)\right) dl \qquad [11.1] $$

$Xf(u, v, \lambda)$ is defined as the X-ray transform of the object, where the measured line integral is parametrized by $l$.
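Numerically, equation [11.1] is simply a sampled line integral along the ray leaving the source. A minimal sketch, assuming a callable f_interp that returns the object density at arbitrary 3D points (an illustrative stand-in for a voxel interpolator):

```python
import numpy as np

def xray_transform(f_interp, source, e_hat, l_max, n_samples=512):
    """Approximate Xf for one detector pixel: integrate the object along
    the ray x(l) = source + l * e_hat, l in [0, l_max] (eq. [11.1])."""
    l = np.linspace(0.0, l_max, n_samples)
    points = source[None, :] + l[:, None] * e_hat[None, :]
    return np.trapz(f_interp(points), l)

# example: a unit ball of density 1 centered at the origin
f_ball = lambda pts: (np.linalg.norm(pts, axis=1) <= 1.0).astype(float)
p = xray_transform(f_ball, source=np.array([-5.0, 0.0, 0.0]),
                   e_hat=np.array([1.0, 0.0, 0.0]), l_max=10.0)
# p is close to 2.0, the chord length through the ball
```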
Figure 11.6. Illustration of the 3D cone-beam geometry
11.2.2.2. Reconstruction algorithm

For reconstruction of the projection data, a reconstruction algorithm has to be applied that considers the known acquisition geometry of the system. The majority of reconstruction methods applied to 3D cone-beam reconstruction on interventional X-ray systems are based on the Feldkamp algorithm [FEL 84]. The Feldkamp approach represents a filtered backprojection reconstruction and has been formulated for ideal circular trajectories. In interventional X-ray volume imaging, it is necessary to carefully consider deviations from the ideal trajectory (identified during the calibration) and the uneven angular sampling during the reconstruction. The described reconstruction method is known from the literature in the field of medical imaging and non-destructive testing [FEL 84, RIZ 91, CHO 95, JOH 98]. The algorithm is described by the following equation:

$$ f(\vec{x}) = \pi \int_{\Lambda} \frac{SD^2}{\left(SD - (\vec{x} - \vec{D}(\lambda)) \cdot \hat{d}(\lambda)\right)^2} \int_{-\infty}^{\infty} \frac{SD}{SE(\vec{x})}\, Xf\!\left(u', v(\vec{x}, \lambda), \lambda\right) h_R\!\left(u(\vec{x}, \lambda) - u'\right) du'\, d\lambda \qquad [11.2] $$

Note that $SD$ and $SE$ describe the distance from the source to the detector center and to the point at which the voxel at position $\vec{x}$ is projected onto the detector, respectively. The ramp filter used for the reconstruction is described as:
$$ h_R(\rho) = \int_{-\infty}^{\infty} |\nu|\, e^{j 2 \pi \nu \rho}\, d\nu \qquad [11.3] $$
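In practice, $h_R$ is applied along the detector rows in the Fourier domain; a minimal sketch of a zero-padded FFT implementation of the plain ramp of equation [11.3] (without the apodization window a product implementation would add):

```python
import numpy as np

def ramp_filter_rows(proj, du):
    """Apply the ramp filter of eq. [11.3] along the u axis (detector rows).
    proj: 2D projection (rows v, columns u); du: detector pixel pitch."""
    n = proj.shape[-1]
    pad = 2 * n                            # zero padding limits wrap-around
    freqs = np.fft.fftfreq(pad, d=du)      # spatial frequencies in 1/mm
    P = np.fft.fft(proj, n=pad, axis=-1)
    filtered = np.fft.ifft(P * np.abs(freqs), axis=-1).real
    return filtered[..., :n]
```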
The detector coordinates $u(\vec{x}, \lambda)$ and $v(\vec{x}, \lambda)$ onto which a voxel at position $\vec{x}$ is projected by a cone-beam projection from source position $\lambda$ are given by:

$$ u(\vec{x}, \lambda) = \frac{SD\, (\vec{x} - \vec{D}) \cdot \hat{u}}{SD - (\vec{x} - \vec{D}) \cdot \hat{d}}, \qquad v(\vec{x}, \lambda) = \frac{SD\, (\vec{x} - \vec{D}) \cdot \hat{v}}{SD - (\vec{x} - \vec{D}) \cdot \hat{d}} \qquad [11.4] $$
The 3D backprojection is performed according to equation [11.2] along the geometry of the rays from the source to the detector elements. In the case of truncated projections, projection extension is carried out prior to reconstruction [LEW 79, SCH 03]. Parker weighting is applied to $Xf(u, v, \lambda)$ for circular short scans covering an angular range of less than 360° [PAR 82]. The non-equiangular sampling due to acceleration and deceleration of the C-arm system can be compensated for by an angular weighting function. It takes into account the trajectory
length represented by each acquired projection [GRA 99]. The weights are defined by the normalized distances of the source position to its direct neighbors.

The presented method yields a simple and direct approach to 3D reconstruction from 2D projections acquired with a C-arm system. Several sources of error remain due to the inherent approximations of the method. These include the problem of the circular arc being an incomplete trajectory that does not satisfy the Tuy completeness condition [TUY 83, SMI 85], deviations from the ideal geometry that are not taken into account in the filtering, and the classical problem of truncated projections due to objects that extend beyond the cone covered by the projection. Nevertheless, the adapted Feldkamp method as described in this section has proven to be an excellent approach for interventional volume reconstruction with respect to image quality and computational effort.
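Combining equations [11.2] and [11.4], a voxel-driven backprojection loop may be sketched as follows. This is a bare-bones illustration: the geometry container, the sampling helper and the handling of the weights are simplifications of a production implementation, and the Parker and angular weights are assumed to have been applied to the filtered projections beforehand.

```python
import numpy as np

def bilinear_sample(img, u, v, du, dv):
    """Sample a filtered projection at detector coordinates (u, v) in mm,
    detector reference point at the image center; zero outside the detector."""
    x = u / du + (img.shape[1] - 1) / 2.0
    y = v / dv + (img.shape[0] - 1) / 2.0
    x0 = np.clip(np.floor(x).astype(int), 0, img.shape[1] - 2)
    y0 = np.clip(np.floor(y).astype(int), 0, img.shape[0] - 2)
    fx, fy = x - x0, y - y0
    val = (img[y0, x0] * (1 - fx) * (1 - fy) + img[y0, x0 + 1] * fx * (1 - fy)
           + img[y0 + 1, x0] * (1 - fx) * fy + img[y0 + 1, x0 + 1] * fx * fy)
    inside = (x >= 0) & (x <= img.shape[1] - 1) & (y >= 0) & (y <= img.shape[0] - 1)
    return np.where(inside, val, 0.0)

def fdk_backproject(filtered, geoms, voxels, du, dv):
    """Accumulate eq. [11.2]: ramp-filtered projections, per-view calibrated
    geometry (D, u_hat, v_hat, d_hat, SD, dlam), voxel positions (N, 3)."""
    vol = np.zeros(len(voxels))
    for img, g in zip(filtered, geoms):
        r = voxels - g["D"]                     # voxel relative to detector point
        denom = g["SD"] - r @ g["d_hat"]        # denominator of eq. [11.4]
        u = g["SD"] * (r @ g["u_hat"]) / denom  # detector column coordinate (mm)
        v = g["SD"] * (r @ g["v_hat"]) / denom  # detector row coordinate (mm)
        w = (g["SD"] / denom) ** 2              # distance weight of eq. [11.2]
        vol += w * bilinear_sample(img, u, v, du, dv) * g["dlam"]
    return vol
```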
11.2.3. Other reconstruction methods in 3D radiology

Alternative reconstruction methods are used in the interventional volume imaging domain. They focus especially on the reconstruction of data sets acquired with limited angular coverage or reduced angular sampling, or on the imaging of moving objects. In particular, algebraic iterative reconstruction methods (see Chapter 4) have been applied [SAI 93]. These methods permit a simple introduction of constraints or a priori information to facilitate the reconstruction and to cope with the lack of information [PAY 95, PAY 96]. Application examples have been presented in the area of interventional cardiology [BLO 06, HAN 08]. However, their application is still limited due to high computational requirements, which are not very compatible with clinical needs.

Methods closer to image analysis have also been proposed [LIU 92]. They may be divided into three phases: analysis and segmentation of 2D angiograms, establishment of the correspondences between the elementary segments and reconstruction of a 3D skeleton, and calculation of the vessel surface. All these methods rely on the analysis of projections and only allow the reconstruction of vessels that are easily identifiable in the angiograms. Moreover, identification of the correspondences is in practice not very robust and generally requires manual interaction by an expert to resolve certain ambiguities. These methods, which are most often called modeling rather than reconstruction methods, are mainly dedicated to cardiovascular imaging [COA 94, CHE 00]. Methods requiring less interaction and including more projections in the modeling process have been introduced [MOV 04, JAN 07] to achieve increased applicability and higher accuracy in clinical practice.
11.2.4. Visualization

The visualization of the reconstructed volumetric data plays a major role in the diagnostic procedure. Most often, the 3D volumes are displayed using volume rendering techniques (see Figure 11.7, left). Volume rendering techniques allow rapid interaction with the data, including operations like rotation, zoom, and the definition of the opacity function, and thus enable a quick overview of the 3D data set to be obtained. This permits perception of the morphology of the vessels and gives the operator full flexibility in choosing the view point for the extraction of structural information. It is particularly helpful in understanding vascular pathologies and thus in planning interventional procedures. The increasing image quality also enables the wider use of surface rendering visualization techniques for high contrast objects. When using interventional X-ray systems, image quality and resolution are independent of the spatial direction, thanks to the isotropic voxels (see Figure 11.7, right). Further visualization techniques include stereoscopic rendering, 2D slice-based visualization, and multiplanar reformatting.
Figure 11.7. Visualization using volume rendering. Left image: volume reconstruction and visualization with isotropic voxels. The phantom consists of three orthogonal lines (in spheres). Right image: volume rendering of a 3D reconstruction of a cerebral aneurysm after simultaneous injection of contrast agent into both carotid arteries
11.3. Clinical examples

Since its introduction in neuroradiology in the late 1990s, interventional X-ray volume imaging has broadened its application spectrum substantially. New applications have arisen in orthopedics, and soft tissue applications are currently being
approached, e.g. for abdominal interventions (see e.g. [WAL 08]). Further clinical applications, for instance in cardiology, will probably follow. The full application spectrum is summarized as three-dimensional rotational X-ray (3D-RX) imaging.
11.3.1. High contrast applications

In its infancy, 3D rotational angiography (3D-RA) was focused on neurological applications, including the endovascular treatment of aneurysms and arteriovenous malformations (AVMs). For these applications, 3D-RA enables the simplified guidance of catheters and interventional devices. The 3D reconstruction of the complete vascular morphology provides the anatomical information required for the planning of the procedure. Deeper insight into the lesion structure allows better prediction of the procedure outcome. Clinical examples of neurovascular and aortic cases are presented in Figure 11.8.
Figure 11.8. Examples of typical vascular applications of 3D rotational X-ray imaging. Examples show reconstruction of an AVM (upper left), stent planning for intra-cerebral stenting (upper middle), carotid stenosis with calcified plaque (upper right), abdominal aortic aneurysm (AAA) with calcified plaque (lower left), stent planning for AAA (lower middle) and stent visualization after deployment (lower right)
In addition, the same technique can be applied to high contrast bone imaging at unique isotropic spatial resolution. Next to the detection and analysis of fractures, the main application is in the area of planning minimally invasive interventions (see Figure 11.9) and the verification of the interventional procedure. For data acquisition and reconstruction, an approach similar to that for vascular applications is applied.
Figure 11.9. Volume renderings of 3D reconstructions of a vertebra (left image) and a pelvis region (right image). The lines in the image mark a planned needle intervention path
11.3.2. Soft tissue contrast applications

With the introduction of flat detectors providing a dynamic range of up to 14 bits, the application of 3D-RX in the field of soft tissue contrast imaging is being investigated. This requires careful control of acquisition parameters, such as tube current and voltage, across the complete rotational scan, as well as the data necessary to perform gain correction. Moreover, accurate sampling needs to be ensured, and careful detector calibration is a prerequisite. Typical acquisition parameters for soft tissue imaging are acquisitions along a circular arc trajectory with 220° angular coverage and 620 projections measured at 30 frames per second, with a typical scan duration of 22 seconds. The main applications of this imaging mode are in the domain of head and abdominal imaging (see Figure 11.10). The visualization and differentiation of brain tissue is a target in head imaging, where the assessment of hemorrhage, edema and ventricular size becomes feasible [SOE 08]. Abdominal soft tissue imaging is focused on guiding all kinds of
biopsies and drainages or embolization procedures in the abdominal region [WAL 08].
Figure 11.10. Two slice images out of 3D reconstructions of a soft tissue brain scan (left image) and an abdominal scan (right image). The lines in the images mark a planned intervention path
11.3.3. Cardiac applications

The generation of volume images of cardiac structures is a challenge for every tomographic imaging approach. Even if breath holding during data acquisition can be achieved, the continuous movement of the heart leads to inconsistent projection data and therefore to severe artifacts. In interventional X-ray tomography, the rotation speed of the system is limited due to the open design and the required equipment in the intervention suite. The rotation time necessary for the acquisition of a full trajectory is therefore in the order of 4 to 20 s and covers a multitude of cardiac cycles. With cardiac motion becoming apparent in the acquired data, two different techniques can be applied to tackle this problem: gated and motion-compensated volume reconstruction.

In the case of gated reconstruction, an electrocardiogram (ECG) is measured in parallel to the rotational acquisition of projections. The R-peaks detected from the ECG are used to select projections with consistent motion states. Either nearest neighbor gating [RAS 06] or gating with finite-sized gating windows can be performed [SCH 06, HAN 08]. Unfortunately, the ECG-based projection selection leads to an even more ill-posed reconstruction problem than that usually faced in 3D-RX, since effectively an interrupted acquisition path results.
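Nearest neighbor gating amounts to keeping, in each R–R interval, the single projection closest to the desired cardiac phase; a minimal sketch with illustrative names (all time stamps in seconds):

```python
import numpy as np

def nearest_neighbor_gate(proj_times, r_peaks, phase):
    """Select one projection index per cardiac cycle: the projection whose
    time stamp is nearest to the relative phase (0 = R-peak, ~0.75 = diastole)."""
    keep = []
    for t0, t1 in zip(r_peaks[:-1], r_peaks[1:]):
        target = t0 + phase * (t1 - t0)
        idx = np.where((proj_times >= t0) & (proj_times < t1))[0]
        if idx.size:
            keep.append(idx[np.argmin(np.abs(proj_times[idx] - target))])
    return np.asarray(keep)

# example: 4 s rotation at 30 frames/s, heart rate 75 bpm (R-R = 0.8 s)
times = np.arange(120) / 30.0
peaks = np.arange(0.0, 4.8, 0.8)
diastolic = nearest_neighbor_gate(times, peaks, phase=0.75)
```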
The reconstruction of gated data sets of the coronaries was the first attempt to visualize moving objects such as the heart [RAS 06]. The ECG was recorded simultaneously with the projection sequence, and the projections used for the reconstruction were selected according to their relative position with respect to the R-peaks. First reconstructions using filtered backprojection-based methods delivered volume data of the coronaries, which enable 3D analysis and the derivation of guidance information for interventions (Figure 11.11). The ill-posed reconstruction problem can be approached by iterative reconstruction methods, which incorporate a priori knowledge of the object [HAN 08] (Figure 11.12).

An alternative approach is to use motion-compensated reconstruction techniques. Here, the motion of the object is considered during the reconstruction using a 3D motion vector field. On the one hand, this approach allows incorporation of projections acquired in different motion states into the reconstruction; on the other hand, estimation of the motion vector field for a given patient data set poses a severe problem.
Figure 11.11. Three images of the reconstructed volume of the left coronary artery of a pig after selective contrast agent injection. Images have been reconstructed with nearest neighbor gating and are displayed as maximum intensity projections. Reprinted from [RAS 06] with permission of Academic Radiology
First attempts at motion-compensated reconstruction of high contrast objects have been made for interventional coronary artery imaging [BLO 06, SCH 06]. Significant improvements can be achieved in terms of signal-to-noise ratio at reduced motion blurring (see Figure 11.13). Motion-compensated volume reconstruction methods are still approximate, at least for arbitrary motion patterns such as those occurring in cardiac imaging. Exact solutions are only known for 2D reconstruction and selected object movements [ROU 04]. Next to motion-compensated reconstruction, a solution to the problem of cardiac reconstruction based on interventional X-ray systems may be found in the multiple acquisition of
circular arcs, leading to an improved data redundancy but also to prolonged acquisition time [LAU 06].
Figure 11.12. Reconstructions of a left and a right coronary artery from human data with an iterative reconstruction method with sparse data constraint and nearest neighbor gating, presented as a volume rendering [HAN 08]
Figure 11.13. Volume rendered views of a moving mathematical coronary artery phantom reconstructed from a circular arc trajectory. Left to right: nearest neighbor gated reconstruction, gated reconstruction with 20% gating window width, motion-compensated reconstruction, reference reconstruction of the static phantom from all projections [SCH 06]
11.4. Conclusion

The generation of tomographic volume images of 3D objects obtained with rotational X-ray imaging provides extremely useful images for diagnosis, treatment guidance, and outcome control in interventional radiology. In recent years, a broad application spectrum has evolved for this imaging modality, ranging from neuroradiology and orthopedics to interventional radiology, oncology and cardiology.
A large variety of different data acquisition protocols has been developed. The boundary conditions defining these acquisition geometries are the degrees of freedom the system allows for movement, and Tuy's completeness condition defining the optimal measurement space. These are constrained by space limitations in the interventional scenario as well as by patient parameters like breath-hold length or tolerance of contrast agents. All application-dependent acquisition protocols are based on a rotational acquisition of X-ray projections measured at discrete positions and on the careful calibration of the corresponding trajectory. Though a large range of existing clinical targets for minimally invasive interventions is already covered, further developments are expected, e.g. in the area of limited angle tomography or functional imaging on interventional systems.
11.5. Bibliography

[BER 05] BERTRAM M., WIEGERT J., ROSE G., AACH T., “Potential of software based scatter corrections in flat panel cone beam CT”, Proc. of the SPIE Conference: Physics of Medical Imaging, vol. 5745, n° 1, pp. 271–282, 2005.

[BLO 06] BLONDEL C., MALANDAIN G., VAILLANT R., AYACHE N., “Reconstruction of coronary arteries from a single rotational X-ray projection sequence”, IEEE Trans. Med. Img., vol. 25, n° 5, pp. 653–663, 2006.

[BUS 02] BUSSE F., RUETTEN W., SANDKAMP B., ALVING P., BASTIAENS R., DUCOURANT T., “Performance of a flat panel cardiac detector”, Proc. of the SPIE Conference: Physics of Medical Imaging, vol. 4682, pp. 819–827, 2002.

[CHE 00] CHEN S., CARROLL J., “3D reconstruction of coronary arterial tree to optimize angiographic visualization”, IEEE Trans. Med. Img., vol. 19, n° 4, pp. 318–336, 2000.

[CHO 95] CHO P. S., JOHNSON R. H., GRIFFIN T. W., “Cone-beam CT for radiotherapy applications”, Phys. Med. Biol., vol. 40, pp. 1863–1883, 1995.

[COA 94] COATRIEUX J. L., GARREAU M., COLLOREC R., ROUX C., “Computer vision approaches for three-dimensional reconstruction of coronary arteries: review and prospects”, Critical Reviews in Biomedical Engineering, vol. 22, n° 1, pp. 1–38, 1994.

[DEN 05] DENNERLEIN F., KATSEVICH A., LAURITSCH G., HORNEGGER J., “Exact and efficient cone beam reconstruction algorithm for a short scan circle combined with various lines”, Proc. of the SPIE Conference: Physics of Medical Imaging, vol. 5745, n° 1, pp. 388–399, 2005.

[FAH 97] FAHRIG R., MOREAU M., HOLDSWORTH D. W., “Three-dimensional computed tomography reconstruction using a C-arm mounted XRII: correction of image intensifier distortion”, Med. Phys., vol. 24, n° 7, pp. 1097–1106, 1997.

[FEL 84] FELDKAMP L. A., DAVIS L. C., KRESS J. W., “Practical cone-beam algorithm”, J. Opt. Soc. Am., vol. A1, n° 6, pp. 612–619, 1984.
[GRA 99] GRASS M., KOPPE R., KLOTZ E., PROKSA R., KUHN M. H., AERTS H., OP DE BEEK J., KEMKERS R., “Three-dimensional reconstruction of high contrast objects using C-arm image intensifier projection data”, Comp. Med. Img. Graph., vol. 23, pp. 311–321, 1999.

[GUG 92] GUGGENHEIM N., CHAPPUIS F., SUILEN C., DORIOT P. A., DORSAZ P. A., DESCOUTS P., RUTIHAUSER W., “3D reconstruction of coronary arteries for flow measurement”, Int. J. Card. Img., vol. 8, pp. 265–272, 1992.

[HAN 08] HANSIS E., SCHÄFER D., DÖSSEL O., GRASS M., “Evaluation of iterative sparse object reconstruction from few projections for rotational coronary angiography”, IEEE Trans. Med. Img., vol. 27, n° 11, pp. 1548–1555, 2008.

[HEA 95] HEAUTOT J., CHABERT E., GANDON Y., CROCI S., ROMEAS R., CAMPANGOLO R., CHEREUL B., SCARABIN J., CARSIN M., “Analysis of cerebrovascular diseases by a new 3-dimensional computerized X-ray angiography system”, Neurorad., vol. 40, pp. 203–209, 1995.

[JAN 07] JANDT U., SCHÄFER D., RASCHE V., GRASS M., “Automatic generation of 3D coronary artery centerlines using rotational X-ray angiography”, Proc. of the SPIE Conference: Physics of Medical Imaging, vol. 6510, pp. 65104Y, 2007.

[JOH 98] JOHNSON R., HU H., HAWORTH S., CHO P., DAWSON C., LINEHAN J., “Circular and circle-and-line orbits for cone-beam X-ray microtomography of vascular networks”, Phys. Med. Biol., vol. 43, pp. 929–940, 1998.

[KEM 98] KEMKERS R., OP DE BEEK J., AERTS H., KOPPE R., KLOTZ E., GRASS M., MORET J., “3D rotational angiography: First clinical application with use of a standard Philips C-arm system”, Proc. of the CARS Conference, vol. 1165, pp. 182–187, 1998.

[KOP 95] KOPPE R., KLOTZ E., OP DE BEEK J., AERTS H., “3D vessel reconstruction based on rotational angiography”, Proc. of the CARS Conference, pp. 101–107, 1995.

[KOP 96] KOPPE R., KLOTZ E., OP DE BEEK J., AERTS H., “Digital stereotaxy/stereotactic procedures with C-arm based rotation-angiography”, Proc. of the CARS Conference, pp. 17–22, 1996.

[LAU 06] LAURITSCH G., BÖSE J., WIGSTRÖM L., KEMETH H., FAHRIG R., “Towards cardiac C-arm computed tomography”, IEEE Trans. Med. Img., vol. 25, n° 7, pp. 922–934, 2006.

[LEW 79] LEWITT R., “Processing of incomplete measurement data in computed tomography”, Med. Phys., vol. 6, n° 5, pp. 412–417, 1979.

[LIU 92] LIU L., SUN Y., “Fully automated reconstruction of three dimensional vascular tree structures from two orthogonal views using computational algorithms and production rules”, Opt. Eng., vol. 31, pp. 2197–2207, 1992.

[MOV 04] MOVASSAGHI B., RASCHE V., GRASS M., VIERGEVER M., NIESSEN W., “A quantitative analysis of 3D coronary modeling from two or more projections”, IEEE Trans. Med. Img., vol. 23, pp. 1517–1531, 2004.

[PAR 82] PARKER D., “Optimal short scan convolution reconstruction for fan beam CT”, Med. Phys., vol. 9, pp. 254–257, 1982.
[PAY 95] PAYOT E., GUILLEMAUD R., TROUSSET Y., PRÊTEUX F., “An adaptive and constraint model for 3D X-ray vascular reconstruction”, in GRANGEAT P., AMANS J. L. (Eds.), Three-Dimensional Image Reconstruction in Radiology and Nuclear Medicine, Kluwer Academic Publishers, pp. 47–58, 1995.

[PAY 96] PAYOT E., PRÊTEUX F., TROUSSET Y., GUILLEMAUD R., “Three dimensional reconstruction from incomplete Fourier spectra: an extrapolation approach”, Proc. of the SPIE Conference: Statistical and Stochastic Methods for Image Processing, vol. 2823, pp. 160–173, 1996.

[RAS 06] RASCHE V., MOVASSAGHI B., GRASS M., SCHÄFER D., KÜHL H., GÜNTHER R., BÜCKER A., “Three dimensional X-ray coronary angiography in the porcine model: A feasibility study”, Acad. Rad., vol. 13, pp. 644–651, 2006.

[RIT 07] RITTER D., ORMAN J., SCHMIDTGUNST C., GRAUMANN R., “3D soft tissue imaging with a mobile C-arm”, Comput. Med. Imag. Graph., vol. 31, pp. 91–102, 2007.

[RIZ 91] RIZO P., GRANGEAT P., SIRE P., LEMASSON P., MELENNEC P., “Comparison of two three-dimensional X-ray cone-beam reconstruction algorithms with circular source trajectories”, J. Opt. Soc. Am., vol. A8, n° 10, pp. 1639–1648, 1991.

[ROS 03] ROSE G., WIEGERT J., SCHÄFER D., FIEDLER K., CONRADS N., TIMMER J., RASCHE V., NOORDHOEK N., KLOTZ E., KOPPE R., “Image quality of flat panel cone beam CT”, Proc. of the SPIE Conference: Physics of Medical Imaging, vol. 5030, pp. 677–683, 2003.

[ROU 93] ROUGEE A., PICARD C., TROUSSET Y., PONCHUT C., “Geometrical calibration for 3D X-ray imaging”, Proc. of the SPIE Conference: Image Capture, Formatting and Display, vol. 1897, pp. 161–169, 1993.

[ROU 04] ROUX S., DESBAT L., KÖNIG A., GRANGEAT P., “Exact reconstruction in 2D dynamic CT: compensation of time dependent affine transformations”, Phys. Med. Biol., vol. 49, pp. 2169–2182, 2004.

[SAI 93] SAINT-FELIX D., PICARD C., PONCHUT C., ROMEAS R., ROUGEE A., TROUSSET Y., “Three dimensional X-ray angiography: first in vivo results with a new system”, Proc. of the SPIE Conference: Image Capture, Formatting and Display, vol. 1897, pp. 90–98, 1993.

[SCH 94] SCHIEBEL U., CONRADS N., JUNG N., WEIBRECHT M., WIECZOREK H., ZAENGEL T., POWELL M., FRENCH I., GLASSE C., “Fluoroscopic X-ray imaging with amorphous silicon thin film arrays”, Proc. of the SPIE Conference: Physics of Medical Imaging, vol. 2163, pp. 129–140, 1994.

[SCH 03] SCHOMBERG H., “Complete source trajectory for C-arm systems and a method for coping with truncated cone beam projections”, Proc. of the International Meeting on Fully Three-Dimensional Image Reconstruction in Radiology and Nuclear Medicine, Brest, France, pp. 221–224, 2003.

[SCH 06] SCHÄFER D., BORGERT J., RASCHE V., GRASS M., “Motion compensated and gated cone beam filtered back projection for 3D rotational angiography”, IEEE Trans. Med. Img., vol. 25, n° 7, pp. 898–906, 2006.
[SIE 01] SIEWERDSEN J., JAFFRAY D., “Cone beam computed tomography with a flat panel imager: magnitude and effect of X-ray scatter”, Med. Phys., vol. 28, n° 2, pp. 220–231, 2001.

[SMI 85] SMITH B., “Image reconstruction from cone-beam projections: Necessary and sufficient conditions and reconstruction methods”, IEEE Trans. Med. Img., vol. 4, n° 1, pp. 14–25, 1985.

[SOE 08] SOEDERMAN M., BABIC D., HOLMIN S., ANDERSSON T., “Brain imaging with a flat panel detector C-arm”, Neurorad., vol. 50, pp. 863–868, 2008.

[TUY 83] TUY H., “An inversion formula for cone-beam reconstruction”, SIAM J. Appl. Math., vol. 43, n° 3, pp. 546–552, 1983.

[VOI 75] VOIGT K., STOETER P., PETERSEN D., “Rotational cerebral roentgenography. I. Evaluation of the technical procedure and diagnostic application with model studies”, Neurorad., vol. 10, n° 2, pp. 95–100, 1975.

[WAL 08] WALLACE M., KUO M., GLAIBERMAN C., BINKERT C., ORTH R., SOULEZ G., “Three-dimensional C-arm cone beam CT: Applications in the intervention suite”, J. Vasc. Interv. Radiol., vol. 19, pp. 799–813, 2008.
Chapter 12
Magnetic Resonance Imaging
12.1. Introduction

The importance that magnetic resonance imaging (MRI) has gained over the last 25 years, the extent and diversity of its applications, the peculiarity of the signal formation, as well as the variety of imaging sequences, make it difficult to summarize this modality in a few pages. Inevitably, countless questions have to remain unanswered. The interested reader will find more extensive information on the subject in the book by Haacke et al. [HAA 99]. The objective of this summary is to introduce three aspects that are very specific to MRI. The first aspect concerns the physical nature of the underlying phenomenon, which has its origin in the magnetism of nuclei. The resulting precession of the magnetization permits a simple coding of the spatial position by means of the generalized phase of the nuclear magnetic resonance (NMR) signal. The second aspect concerns the collection of the information needed for the image formation. MRI is a digital technique and thus allows direct acquisition of samples in the so-called k-space, the reciprocal space of the image. The reconstruction by discrete Fourier transform immediately enables assigning to each pixel a gray value that is proportional to the local density of the nuclear magnetization. The third, astonishing, aspect stems from the variety of possible contrasts. The measured magnetization may be modified by numerous physical factors. This is exploited by an appropriate choice of the imaging sequence. The presentation of these principles will be followed by a brief discussion of the volumetric nature of the provided information.
Chapter written by André BRIGUET and Didier REVEL.
12.2. Nuclear paramagnetism and its measurement

12.2.1. Nuclear magnetization

When a collection of nuclei with angular moments, commonly called spins, in thermal equilibrium is placed in a magnetic field, the existence of microscopic nuclear magnetic moments associated with the angular moments leads to the formation of a magnetization that is aligned with the field and strengthens it. This is a paramagnetic behavior, since the magnetization M0 is proportional to the magnitude of the vector B0, the applied magnetic field, and inversely proportional to Θ, the absolute temperature of the sample [ABR 61]:

\[ M_0 = \frac{N \gamma^2 s(s+1) \hbar^2}{3 k_B \Theta} \, B_0 \]   [12.1]
In this expression, N denotes the total number of nuclei of spin number s and gyromagnetic ratio γ. It also uses the Boltzmann constant k_B and ℏ, the Planck constant divided by 2π. It is worth noting that this result corresponds to thermodynamic polarization that may be described by Boltzmann statistics in the case of high temperatures. Therefore, it presupposes that the thermal agitation energies are far higher than the differences in magnetic energy between the 2s + 1 levels concerned. The expression above is consequently not valid for magnetizations in transient states, as is, for example, the case with the nuclei of 3He and 129Xe. In fact, the “hyperpolarization” of these nuclei, achieved by optical pumping techniques, may durably be maintained at a higher value in a magnetic field, because the lifetimes of the magnetic states of these rare gas isotopes are particularly long [FIT 67]. In the most frequent case of polarization in the presence of thermal agitation, nuclear magnetization cannot be measured by direct methods. It is so weak that it has to be measured with a resonance technique [FEY 63] that is tailored to the precession around the static magnetic field and the existence of the phenomenon of relaxation.
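To give a feeling for how weak this thermodynamic polarization is, the fractional population difference can be evaluated directly from equation [12.1]. The following is a minimal sketch, not part of the original text, that computes the equilibrium polarization P = γℏB₀/(2k_BΘ) obtained for s = 1/2; the field strengths and the body temperature of 310 K are illustrative choices.

```python
import math

# Physical constants (SI units)
hbar = 1.054571817e-34   # reduced Planck constant (J s)
k_B = 1.380649e-23       # Boltzmann constant (J/K)
gamma_p = 2.675221874e8  # proton gyromagnetic ratio (rad/s/T)

def proton_polarization(B0, temperature):
    """High-temperature (Boltzmann) polarization of spin-1/2 nuclei,
    P = gamma * hbar * B0 / (2 * k_B * Theta), the s = 1/2 case of
    equation [12.1] expressed per nuclear moment."""
    return gamma_p * hbar * B0 / (2.0 * k_B * temperature)

for B0 in (1.0, 1.5, 3.0, 23.47):   # field strengths in tesla
    P = proton_polarization(B0, 310.0)
    print(f"B0 = {B0:6.2f} T  ->  polarization ~ {P:.2e}")
```

At 1 T and body temperature, the result is of the order of a few parts per million, which illustrates why a resonance technique is indispensable.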
12.2.2. Nuclear magnetic relaxation and Larmor precession

Thermodynamic equilibrium may be disturbed, either by hyperpolarization, a case not considered here, or by energy received from outside the sample. The orientation of the magnetization may be changed with respect to the direction of the static magnetic field. Return to equilibrium expresses itself in the disappearance of the magnetization’s component that is oriented perpendicular to the applied static magnetic field, the so-called transverse component, and in the recovery of the longitudinal
component, which is aligned with the static magnetic field. The “spin–spin” relaxation corresponds to the reduction in the transverse magnetization, a process to be distinguished from the “spin–lattice” relaxation, which concerns the longitudinal component [BLO 48]. This return of the magnetization to its equilibrium must be described statistically, since the number of elementary moments contained in a macroscopic sample is very high. For this reason, the following models for representing the relaxation, and consequently for describing the evolution of the magnetization’s components once they change, may be justified [BLO 46]:

\[ \frac{dM_Z}{dt} = \frac{M_0 - M_Z}{T_1}, \]   [12.2]

\[ \frac{dM_\perp}{dt} = -\frac{M_\perp}{T_2}. \]   [12.3]
In these equations, M_Z and M_⊥ denote the longitudinal and transverse components, while the time constants T1 and T2 are referred to as “spin–lattice” and “spin–spin” relaxation times, respectively. For the protons in water, T1 ≈ T2, and both time constants equal about 2.5 s at fields of the order of 1 T. The relaxation does not take into account any changes of the magnetization caused by changes in the magnetic field. The analysis may equally be made statistically, as the samples form collections of nuclei. This shows that the magnetization vector performs a precession around B0. The angular velocity ω of this precession, or Larmor pulsation, is thus given by the following fundamental relation:

\[ \omega = \gamma B_0. \]   [12.4]
The gyromagnetic ratio γ is specific for each considered nucleus. In this way, in a field of 1 T, the transverse magnetization of a collection of protons performs 42 million rotations per second, while that of a collection of nuclei of 23Na performs only a little more than 10 million. The current magnet technology enables 23.47 T to be reached with superconductive magnets, which corresponds to a precession of protons with a frequency of 1 GHz.
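The relations [12.2] to [12.4] are straightforward to evaluate numerically. The sketch below, added for illustration, checks the precession frequencies quoted above and the exponential solutions of the relaxation equations; the reduced gyromagnetic ratios are standard reference values, and the relaxation times are the order-of-magnitude figures given for water.

```python
import math

# gamma / (2 pi) in MHz/T (standard reference values)
GAMMA_BAR = {"1H": 42.577, "23Na": 11.262}

def larmor_MHz(nucleus, B0):
    """Precession frequency f = (gamma / 2 pi) * B0, from equation [12.4]."""
    return GAMMA_BAR[nucleus] * B0

def mz(t, M0=1.0, Mz0=0.0, T1=2.5):
    """Longitudinal recovery, the solution of equation [12.2]."""
    return M0 + (Mz0 - M0) * math.exp(-t / T1)

def mperp(t, Mperp0=1.0, T2=2.5):
    """Transverse decay, the solution of equation [12.3]."""
    return Mperp0 * math.exp(-t / T2)

print(f"1H   at  1.00 T: {larmor_MHz('1H', 1.0):7.1f} MHz")    # ~42.6 MHz
print(f"23Na at  1.00 T: {larmor_MHz('23Na', 1.0):7.1f} MHz")  # ~11.3 MHz
print(f"1H   at 23.47 T: {larmor_MHz('1H', 23.47):7.1f} MHz")  # ~1 GHz
for t in (0.5, 2.5, 7.5):  # seconds
    print(f"t = {t:3.1f} s: Mz = {mz(t):.3f}, Mperp = {mperp(t):.3f}")
```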
12.2.3. NMR signal formation

During its precession with the Larmor frequency, the magnetization may generate an electromotive force at the terminals of a coil, whose axis is oriented perpendicular to the direction of the static magnetic field. By reciprocity, when a sinusoidal current with the Larmor frequency, or a frequency very close to it, runs through the coil, it creates an alternating, rectilinearly polarized radiofrequency (RF) magnetic field in the sample. Such an oscillating field may be considered as the superposition
of two circularly polarized fields, one of which rotates necessarily in the same sense as the precession of the magnetization and practically at the same angular velocity. A frame of reference may be attached to the component of the vector B1, the RF magnetic field, which rotates in the same sense as the precession. In this frame of reference rotating around the static field, the value of the latter is apparently reduced to a very small value or zero [SLI 78]. It is possible to force the magnetization to turn by precession around B1. Applying the RF field B1 for a duration τ leads in this way to a rotation by an angle θ = γB₁τ around B1. The same coil is thus able to first perturb the magnetization by creating an RF magnetic field that is oriented orthogonal to the static magnetic field and then to receive the signal resulting from this excitation.
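The relation θ = γB₁τ directly links the pulse duration to the flip angle. A minimal sketch, with an assumed, purely illustrative B₁ amplitude of 10 μT:

```python
import math

gamma_p = 2.675e8  # proton gyromagnetic ratio (rad/s/T)

def pulse_duration(theta_deg, B1):
    """Duration tau such that theta = gamma * B1 * tau."""
    return math.radians(theta_deg) / (gamma_p * B1)

B1 = 10e-6  # tesla; an illustrative RF field amplitude
for theta in (90.0, 180.0):
    tau = pulse_duration(theta, B1)
    print(f"{theta:5.1f} deg pulse at B1 = 10 uT: tau = {tau * 1e6:.0f} us")
```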
Figure 12.1. (a) Rotation of the magnetization in the rotating frame of reference Oxyz. (b) Collection of a signal s(t) from the free precession of the magnetization undergoing relaxation in the fixed laboratory frame of reference OXYZ (Oz and OZ are aligned with the static field)
This signal is induced during the free precession of the magnetization around the static magnetic field, and it is progressively attenuated by the relaxation of the transverse component. The most efficient way of capturing the signal is evident from Figure 12.1. At emission (generation of the RF field) as at reception (collection of the signal produced by the free precession of the magnetization), we use the coupling between the rotating magnetization and the circuit of the coil, in which decaying electric oscillations occur. This coil is tuned to the Larmor frequency with the help of a capacitor, placed in parallel to the terminals, to benefit from the selectivity of the electric resonator constructed in this way.
12.2.4. Instrumentation [CHE 89]

During the emission, the resonator must transmit, in the form of a pulse of the RF magnetic field, a quantity of energy sufficient to modify the state of the magnetization. Facing the sample to which it is coupled, the resonator plays the role of an antenna operating in the “near-field regime”, i.e. at a distance sufficiently small to neglect propagation effects. During the pulse, the antenna is connected to an RF power amplifier, which is fed with weak pulses. The pulse trains that form the excitation sequences are generated by a pulse modulator, which controls the amplitude and frequency of the excitation field. This pulse modulator is driven by a pulse sequence controller. After each RF pulse, the magnetic energy stored by the resonator in the space that it encompasses is dissipated very rapidly in the circuit forming the resonator, with a time constant of typically less than 1 μs. Yet the energy that corresponds to the coupling between the RF field and the nuclear magnetization is considerably smaller than this magnetic energy. It is dissipated with the time constant T2 of the transverse relaxation, which has a value distinctly larger than 1 ms in the case of liquid samples and living tissue. Fortunately, this difference between the two time constants leaves sufficient time to observe the signal induced by the precession of the magnetization. This signal is then detected using demodulation to obtain two low-frequency components in phase quadrature for a simple complex representation. The description of the experimental conditions would not be complete without giving some information on the magnet, which represents the most costly component of an MRI scanner and whose price increases steeply with the imaging volume and the field strength. By employing superconducting magnets, users have fields with excellent temporal stability at their disposal, whose strength does not change by more than one part per million in several hours. The attention then needs to be focused on the problem of field uniformity, since the volume of the “sample”, the human body, is significant. It is judicious to choose the magnet structure that is best adapted to either the form of the examined part of the body (head, thorax, limbs, etc.) or the type of examination: open magnets for so-called “interventional” imaging, short magnets for claustrophobic patients, and shielded magnets for a limited stray field and thus reduced space requirements for the installation. When the field strength has to be increased, it becomes extremely challenging to meet the specifications resulting from constraints on, for example, the field uniformity. A simple calculation immediately enables the severe effect of a lack of homogeneity of the static magnetic field to be appreciated. We consider two small elements of the volume of a sample, between which the field strength differs by one part per million. When we assume a field strength of 1.5 T and focus on hydrogen nuclei, the number of revolutions of the magnetization in these elements differs by about 64 rotations after one second, since the Larmor frequency is close to 64 MHz.
At a field strength of 4.7 T, for the same relative uniformity of the field, they differ by 200 rotations, since the frequency increases proportionally with the field strength. A higher field strength, sought for a higher dispersion of the resonances in spectroscopy and for a higher sensitivity in imaging, is thus not only bought at the price of the difficulty of generating a stronger magnetic field. The uniformity of the field must also be preserved in absolute, not in relative, value. This entails resorting to necessarily complex and costly compensation coils. Among the devices for such a compensation, the coils for creating linear variations of the field B0 along the three principal spatial directions are of highest priority. One coil acts in the direction OZ of the magnet, the direction of the magnetic field of the polarization, while other coils act in the directions OX and OY. The gradients of the static magnetic field enable, in addition to the correction, the spatial encoding of the information contained in the NMR signal to obtain images.
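Both figures quoted in this argument can be reproduced in a single line of arithmetic. A minimal sketch, assuming the stated one-part-per-million field difference and the standard proton value of 42.577 MHz/T:

```python
def dephasing_turns(B0, ppm=1.0, duration=1.0):
    """Difference in the number of magnetization revolutions between two
    volume elements whose fields differ by ppm parts per million."""
    gamma_bar = 42.577e6  # proton Larmor frequency per tesla (Hz/T)
    return gamma_bar * B0 * ppm * 1e-6 * duration

print(dephasing_turns(1.5))  # ~64 turns after 1 s at 1.5 T
print(dephasing_turns(4.7))  # ~200 turns after 1 s at 4.7 T
```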
Figure 12.2. Schematic diagram of an MRI scanner. The resonator is automatically connected to the RF transmit chain during emission and to the RF receiver chain during acquisition of the signal
12.3. Spatial encoding of the signal and image reconstruction [LIA 00]

12.3.1. Information content of the signal’s phase

As has been mentioned, all variations in the strength of the static magnetic field observed between two elements of the volume of the sample at different spatial locations lead to different numbers of rotations of the magnetization contained in these elements. This difference, which increases with the duration of the evolution, causes a shift in the phase of the signals emitted by these elements of the volume [KUM 75]. In this way, the sequential application of three gradients GX, GY, and GZ during the intervals tX, tY, and tZ, respectively, leads, for the signal emitted by an element of the volume of the sample located at (X, Y, Z), to the following value of the phase:
\[ \Phi(X, Y, Z) = \gamma \, [G_X t_X X + G_Y t_Y Y + G_Z t_Z Z]. \]   [12.5]
In this expression, we recognize the following scalar product:

\[ \Phi(X, Y, Z) = k_X X + k_Y Y + k_Z Z \]   [12.6]
where the spatial frequencies kX, kY, and kZ, which are easily derived from equation [12.5], have been introduced. These variables define the reciprocal space of the image, as introduced by Abbe for the formation of images by diffraction [MAR 61]. A remarkable property of NMR is that this phase information may be measured in numerous ways. For example, it is possible to apply three gradients simultaneously and to modify the phase from one acquisition to the next by altering step-by-step, not the duration of the application of the gradients, but the gradient strength. As a result, if we denote by ρ(X, Y, Z) the nuclear magnetization density responsible for the signal at location (X, Y, Z), the received signal reads, after subtraction of the carrier frequency γB₀/2π:
\[ s(k_X, k_Y, k_Z) = \int_X \int_Y \int_Z \rho(X, Y, Z) \, e^{i (k_X X + k_Y Y + k_Z Z)} \, dX \, dY \, dZ \]   [12.7]
This expression is an approximation, in which the effect of transverse relaxation has not been taken into account. The received three-dimensional (3D) signal is then the inverse Fourier transform of the local magnetization density ρ(X, Y, Z) with respect to the variables X, Y, and Z. A discrete Fourier transform of the signal s(kX, kY, kZ) with respect to the variables kX, kY, and kZ enables recovery of ρ(X, Y, Z) and thus reconstruction of the image. Due to its underlying principle, MRI appears in this way to be fundamentally 3D: first, because it is necessary to use a finite volume to produce sufficient signal, and second, because it leads directly to a 3D representation of the observed object. Based on this representation, it is possible to extract cross-sections in any orientation. Nevertheless, it is also possible to limit ourselves to a two-dimensional (2D) representation if only a thin volume of the sample is selected, thus permitting true tomography. Although the sensitivity is not the best in terms of signal-to-noise ratio in this case, the possibility of acquiring several cross-sections simultaneously [CRO 83] leads to an appreciable gain in acquisition time compared to a direct 3D encoding. To select a cross-section, an excitation with a narrow frequency band has to be applied during the application of a gradient oriented perpendicular to the plane of the cross-section. The technology of selecting cross-sections in NMR has been the subject of active development, and debate, since the mid-1970s [HOU 79]. This process is today well under control, enabling the description of a volume by adjacent cross-sections, each of which has a virtually uniform response across its thickness.
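Equation [12.7] implies that, once k-space has been sampled on a Cartesian grid, reconstruction amounts to a discrete Fourier transform. The following toy 2D sketch, which ignores relaxation, noise, and coil effects and uses an arbitrary synthetic phantom, simulates the acquisition with numpy's inverse FFT (matching the e^{+i k·r} convention of equation [12.7]) and recovers the object exactly with the forward FFT.

```python
import numpy as np

# Synthetic 2D magnetization density: a bright rectangle in a 64 x 64 field of view
rho = np.zeros((64, 64))
rho[24:40, 20:44] = 1.0

# Simulated acquisition: with the e^{+i k.r} convention of equation [12.7],
# fully sampled Cartesian k-space data are the inverse DFT of rho
# (up to normalization), which numpy's ifft2 provides.
s = np.fft.ifft2(rho)

# Reconstruction: the forward DFT brings us back to image space.
rho_rec = np.fft.fft2(s)

print(np.allclose(rho_rec.real, rho, atol=1e-10))  # True: exact recovery
```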
12.3.2. Signal sampling along k-space trajectories and use of a 2D model

By limiting ourselves from now on to a simple 2D model with the X- and Y-axis of the fixed laboratory frame of reference, it is possible to describe MRI in a unified manner. The signal’s generalized phase, of which each value corresponds to a pair (kX, kY), i.e. to a point in the reciprocal space, is of particular importance in the collection of the data. Consequently, the digital sampling of the signal is necessarily discrete in this space. Equation [12.5] shows that the way in which this space is traversed depends on the organization of the encoding gradients, the duration of their application, and the use of the RF pulses. In this way, a sampling trajectory in the reciprocal space, along which the signal samples are taken, corresponds to each imaging method [LJU 83]. The best way to return to image space is to use the discrete Fourier transform with respect to the k-space coordinates, starting from measured samples on a Cartesian grid. If the trajectory does not conform to a Cartesian sampling, it is possible to recreate a Cartesian grid by interpolation. The actual reconstruction thus becomes a secondary problem in MRI. What makes the difference between two MRI methods is not the way in which the image is reconstructed but the way in which the data are experimentally acquired, because the difference essentially results from the type of trajectory adopted to collect the signal samples by displacing the point of measurement in k-space.

12.3.2.1. Cartesian k-space sampling

During these sequences, k-space is scanned along parallel lines, much like a TV screen is scanned. By adjusting the gap between the lines to the gap between the samples along the lines, a Cartesian grid is obtained. When the grid is rectangular, the reconstructed image is deformed, but the process remains the same. During the experiment, an initial RF pulse needed to create a transverse magnetization places the starting point of the description at the center of the Fourier space. Immediately after the pulse, the phase of the magnetization, and thus of the captured signal, has an initial value, which may be considered to equal zero for the sake of simplicity. The head of the vector (kX, kY) moves away from the origin of k-space by the combined effect of the two gradients GX and GY (Figure 12.3). A line parallel to the kX-axis may thus be sampled by reversing the sign of the gradient GX and by cancelling the value of GY. In this way, a method coined gradient-echo is at our disposal. Fast versions of this, referred to as “turbo”, exist, in which the sampling is only briefly interrupted between two lines by spacing the RF pulses very closely [MOR 86]. The time that they need to acquire an image is in the order of several seconds. Another way of initializing the signal sampling is to apply an RF pulse, called a “180°” pulse, which has the property of reversing the sign of the phase Φ, and thus
replacing the vector (kX, kY) by the vector (−kX, −kY), which is symmetric with respect to the origin of k-space (Figure 12.4). This method, coined spin-echo, turns out to be slower than the gradient-echo method, because the repetition time, which separates the readout of two successive lines, is necessarily longer.
Figure 12.3. Scanning of k-space with the gradient-echo technique: (a) trajectory, (b) sequence. The sign reversal of the readout gradient, in this case GX, allows, once the head of the vector (kx, ky) arrives at M, description of a line parallel to the kx axis in the sense of decreasing kx
Figure 12.4. Scanning of k-space with the spin-echo technique. The reversal of the sign of the phase, achieved by a 180° RF pulse, exchanges the points M and M’, which are symmetric with respect to the origin of k-space. Then, the line M’M’’ is described in the sense of increasing kx by maintaining only the readout gradient during sampling
An efficient way of saving time consists of scanning several lines successively. The transition from one line to another is made at the end of a line by briefly applying the gradient perpendicular to the readout gradient, the so-called “blip” gradient (Figure 12.5). This method, coined echo-planar [SCH 98], thus uses only very few
RF excitations and can provide several images per second if the gradients have the capacity to switch rapidly.
Figure 12.5. In the echo-planar method, a single RF pulse enables, in principle, description of one half of k-space, as shown. The “jump” from line to line, resulting from the application of the “blip” gradient, is performed just before reversing the sign of the readout gradient
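Because the k-space coordinates are the time integral of the gradient waveforms, k(t) = γ∫₀ᵗ G(t′) dt′ (the continuous form of equation [12.5]), a trajectory such as the gradient-echo line of Figure 12.3 can be generated by numerical integration. A sketch with illustrative gradient amplitudes and timings, which are not taken from the text:

```python
import numpy as np

gamma = 2.675e8  # proton gyromagnetic ratio (rad/s/T)
dt = 4e-6        # integration time step (s)

# Illustrative waveform: dephasing lobe (-Gx together with Gy), then readout (+Gx)
t_pre, t_read = 1e-3, 2e-3
Gx = np.concatenate([np.full(int(t_pre / dt), -10e-3),    # -10 mT/m
                     np.full(int(t_read / dt), +10e-3)])  # +10 mT/m
Gy = np.concatenate([np.full(int(t_pre / dt), 5e-3),      # phase-encoding lobe
                     np.zeros(int(t_read / dt))])

# k(t) = gamma * integral of G(t'); a cumulative sum approximates the integral
kx = gamma * np.cumsum(Gx) * dt
ky = gamma * np.cumsum(Gy) * dt

print(kx.min(), kx[-1])  # moves out to -kx_max, then sweeps across to +kx_max
print(ky[-1])            # constant ky offset held during the readout
```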
12.3.2.2. Non-Cartesian k-space sampling

The gradients GX and GY may be chosen such that the head of the vector (kX, kY) moves along radial lines from the center of k-space (Figure 12.6a). This turns out to be very simple experimentally, since no manipulation of the gradient interferes with the actual measurement. The direction of the gradient that results from the application of the two gradients is reoriented with each new RF pulse. This technique may also be employed to successively scan several radial lines without transmitting a new RF excitation. Note that it is impossible to sample k-space on a Cartesian grid with such a method. To perform the image reconstruction by discrete Fourier transform, an interpolation is necessary, which provides estimated values of the signal on a Cartesian grid. We may equally reconstruct the image by filtered backprojection, a method described in Chapter 2. Historically, this approach, which is commonly referred to as “projection reconstruction”, was employed at the beginning of the 1970s when MRI was discovered, and it was applied first in 2D and then in 3D [LAI 81, LAU 73]. Since sampling is denser close to the center of k-space than at the border of the covered area (circle or sphere), the radial scanning of k-space often softens strong intensity variations within the image and consequently limits the ability to perceive details. We may obtain the same distribution of sampling positions as in the radial case with circular trajectories that are obtained by the application of two orthogonal, sinusoidal gradients with their phase in quadrature (Figure 12.6b). The initial RF pulse must be followed by a rapid rise in gradient, which enables us to quickly place the head of the vector (kX, kY) at the chosen circumference [MAT 86]. The transition
from one circle to another demands in principle the application of a new RF pulse. This transition may also be made during scanning, resulting in spiral trajectories [AHN 86], which are easier to control than circular trajectories and are clearly advantageous, in the sense that each spiral covers a larger part of k-space (Figure 12.6c). Such an approach enables an image to be obtained in very short times, similarly to the Cartesian echo-planar technique.
Figure 12.6. Radial scanning of k-space: (a) along the direction of rays in the case of projection reconstruction, (b) along concentric circles, and (c) along spiral trajectories permitting a high acquisition speed. As in the case of Cartesian scanning, it is the temporal variation of the gradients that determines the shape of the trajectory: (a) gradient reoriented and then maintained fixed, (b) sinusoidal gradients in quadrature resulting in circular scanning, (c) oscillating gradients of increasing amplitude resulting in spiral scanning
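For such non-Cartesian trajectories, a common reconstruction route is to interpolate the measured samples onto a Cartesian grid before applying the discrete Fourier transform. The sketch below illustrates the idea with a deliberately crude nearest-neighbor accumulation of radial samples; practical “gridding” uses convolution with a smooth kernel and sampling-density compensation, both omitted here, and the simulated signal values are arbitrary.

```python
import numpy as np

N = 64  # Cartesian grid size; k-space coordinates normalized to [-N/2, N/2)

# Illustrative radial sampling: n_spokes directions, n_read points per spoke
n_spokes, n_read = 96, 64
angles = np.linspace(0.0, np.pi, n_spokes, endpoint=False)
radii = np.linspace(-N / 2, N / 2 - 1, n_read)
kx = np.outer(np.cos(angles), radii).ravel()
ky = np.outer(np.sin(angles), radii).ravel()
samples = np.exp(-(kx**2 + ky**2) / 200.0)  # stand-in for measured signals

# Nearest-neighbor gridding: accumulate each sample into the closest cell
grid = np.zeros((N, N), dtype=complex)
hits = np.zeros((N, N))
ix = np.clip(np.round(kx + N / 2).astype(int), 0, N - 1)
iy = np.clip(np.round(ky + N / 2).astype(int), 0, N - 1)
np.add.at(grid, (iy, ix), samples)
np.add.at(hits, (iy, ix), 1.0)
grid[hits > 0] /= hits[hits > 0]  # average cells hit several times

image = np.fft.fftshift(np.fft.ifft2(np.fft.ifftshift(grid)))
print(image.shape, float(np.abs(image).max()))
```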
No matter what the nature of the trajectory is, Cartesian or radial, all these sequences have certain properties in common. They may all be rendered rapid if a very powerful gradient amplifier that enables very short switching times (for instance, a variation of 50 mT/m in 100 μs) is available. A second common property of a number of these rapid sequences is that they do not use the whole magnetization polarized along the direction of the static magnetic field. Since the recovery is characterized by a spin–lattice relaxation time far longer than the time reserved for capturing the image, we rely on RF excitation angles of small amplitude in the so-called FLASH (fast low angle shot) [HAA 89] technique, which may also be based on stimulated echoes [HAA 85], or on a large number of RF excitation pulses applied in a burst in the so-called BURST technique [HEN 93b]. These approaches, which enable us to avoid recourse to very powerful and frequent switching of the gradients, are also examples of rapid imaging. It is certain that the rhythm in which pulses are imposed on the nuclear magnetization has an influence on the intensity of the signal in the image and may consequently have repercussions on the contrast.
12.4. Contrast factors and examples of applications

12.4.1. Density of nuclei and magnetization

In MRI, the gray level is linked to the amplitude of the transverse magnetization at the moment of its measurement. All factors affecting this magnetization may thus be considered as elements of contrast. The nuclei density weighting may only be effective if the transverse magnetization is created by tilting a perfectly relaxed longitudinal magnetization, which entails sequences with a very long repetition time and a signal sampling very shortly after the initial tilting of the magnetization. Among living tissues, which have very similar water contents and hence proton contents, the nuclei density contrast is inevitably very disappointing, because it only permits very limited discrimination between organs, with the exception of bones, the lungs, and tendons, which appear dark against the adjacent tissues. It has to be remarked that the effects of magnetic susceptibility at interfaces between tissues also affect the intensity of images. This phenomenon, which interferes greatly in the case of the lungs [CUT 96] and which is exploited in functional imaging of cerebral activation (Chapter 15), will not be discussed here. We are primarily interested in differences in intensity linked to differences in transverse or longitudinal relaxation times, which provide an excellent contrast in images [WEH 84].
12.4.2. Relaxation times and discrimination between soft tissues

In the biological domain, the relaxation times of protons have values clearly higher than 10 ms, because the water and lipid molecules are mobile. These values are comparable to or greater than the duration of the sampling of the free precession signal. It is thus possible to play with the chronology of sequences, including the RF pulses, to modify the appearance of images. In sequences with a high flip angle (90°, for example) and a short repetition time, we do not allow the magnetization to recover its equilibrium value between two successive excitations. The weighting is performed according to the spin–lattice relaxation rate (the inverse of the spin–lattice relaxation time): the areas where the spin–lattice relaxation time T1 is higher appear less intense than those where this parameter is smaller. Weighting by the transverse relaxation interferes when the sampling of the signal is more or less strongly delayed with respect to the pulse that initializes the transverse magnetization at the beginning of the sequence. The spin-echo technique gives a weighting as a function of the spin–spin relaxation time T2: the areas where the spin–spin relaxation time T2 is higher appear more intense than those where this parameter is smaller. If the signal is formed by gradient-echo, the weighting is said to be T2*. This effective relaxation time of the transverse magnetization is the consequence of the lack of uniformity of the static magnetic field. In this way, by moving from a T1 to a T2 weighting, the contrast may invert.
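These weighting rules can be made quantitative with the usual simplified spin-echo signal model, S ∝ ρ(1 − e^{−TR/T1}) e^{−TE/T2}, where TR is the repetition time and TE the echo time. The tissue and sequence parameters in the sketch below are illustrative round numbers only; the point is that the ordering of the two signals reverses between a short-TR/short-TE (T1-weighted) choice and a long-TR/long-TE (T2-weighted) choice.

```python
import math

def spin_echo_signal(rho, T1, T2, TR, TE):
    """Standard simplified spin-echo signal model."""
    return rho * (1.0 - math.exp(-TR / T1)) * math.exp(-TE / T2)

# Illustrative parameters in ms: CSF relaxes much more slowly than tissue
tissues = {"CSF": (1.0, 4000.0, 2000.0),     # (rho, T1, T2)
           "white matter": (0.7, 600.0, 80.0)}
weightings = {"T1-weighted": (500.0, 15.0),  # (TR, TE)
              "T2-weighted": (3000.0, 120.0)}

for name, (TR, TE) in weightings.items():
    print(name)
    for tissue, (rho, T1, T2) in tissues.items():
        S = spin_echo_signal(rho, T1, T2, TR, TE)
        print(f"  {tissue:12s}: S = {S:.3f}")
```

With these numbers, white matter is brighter than cerebrospinal fluid in the T1-weighted case and darker in the T2-weighted case, reproducing the inversion of contrast discussed below.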
The example referred to most frequently is that of cerebrospinal fluid, whose T1 and T2 values for protons are distinctly higher than those for the protons of water in white and gray matter. It is thus possible, depending on the chosen weighting, to totally invert the contrast between the cerebrospinal fluid, gray matter, and white matter. Such a capacity to change the contrast, an inexhaustible source of strategies for differentiating tissues, deserves to be emphasized. An example of these possibilities is illustrated in Figure 12.7.
Figure 12.7. Images of the brain obtained at 1.5 T in a transversal orientation with T2 weighting on the left and T1 weighting on the right, leading to an apparent inversion of contrast (images courtesy of Olivier Beuf)
However, when the relaxation times of the protons contained in the tissues to be distinguished have very similar values, the discrimination becomes delicate. It may only be made if the difference in signal amplitude created between the two tissues is, on average, at least equal to the root mean square value of the noise [EDE 83]. Figure 12.8 is an example of a very efficient composite contrast.
Figure 12.8. Contrast obtained at 1.5 T with a gradient-echo sequence in sagittal orientation (image courtesy of Olivier Beuf)
12.4.3. Contrast agents in MRI

One way of properly strengthening the “contrast-to-noise” ratio is to accentuate the process of relaxation by use of contrast agents, which modify the local values of the relaxation times. Use of these substances, which in general enable a target to be reached by means of blood circulation, can only lead to a shortening of the relaxation times, because these substances are solutions of paramagnetic compounds built around heavy metal ions. The problem of toxicity of these ions is solved by a stable chelate, which surrounds the central ion (most often Gd3+), thus permitting the organism to dispose of it naturally [MUL 96]. These contrast agents essentially act on the spin–lattice relaxation times, as in the case presented in Figure 12.9. It must be noted that the efficiency of this relaxation decreases with higher strength of the static field. The spin–spin relaxation, a more complex phenomenon, is accelerated by the presence of substances whose magnetic properties perturb the local strength of the static magnetic field. This happens with iron oxide particles finely dispersed and maintained in suspension in a biocompatible solvent (ultra-small particles of iron oxide, USPIO). Besides these artificial means of modifying, improving, or inverting the contrast, or of making it locally selective, it should not be forgotten that the oxygen that everyone breathes is, due to its paramagnetism, a natural contrast agent. Equally, it must be noted that the oxyhemoglobin in blood is diamagnetic, whereas deoxyhemoglobin is paramagnetic. These properties lead to the BOLD (blood oxygen level dependent) effect [OGA 90], which is exploited in functional imaging of cerebral activation (Chapter 15). Finally, the release of contrast agents in the organism is also exploited to image the local expression of genes (molecular imaging) [LOU 00].
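The action of such an agent on the spin–lattice relaxation is commonly summarized by the linear relaxivity model 1/T1,obs = 1/T1,0 + r₁C, with r₁ the longitudinal relaxivity of the agent and C its local concentration. A sketch with illustrative values; a relaxivity of a few s⁻¹ per mM is typical of gadolinium chelates at clinical field strengths.

```python
def t1_with_agent(T1_0, r1, C):
    """Linear relaxivity model: 1/T1_obs = 1/T1_0 + r1 * C."""
    return 1.0 / (1.0 / T1_0 + r1 * C)

T1_0 = 0.6  # native tissue T1 (s), illustrative
r1 = 4.0    # longitudinal relaxivity (1/s per mM), illustrative
for C in (0.0, 0.1, 0.5, 1.0):  # agent concentration (mM)
    T1 = t1_with_agent(T1_0, r1, C)
    print(f"C = {C:4.1f} mM  ->  T1 = {T1 * 1e3:6.1f} ms")
```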
12.4.4. Relevance of magnetization transfer techniques

It is equally possible to change the contrast by indirectly acting on the magnetization, exploiting the populations of the different nuclear energy levels. Magnetization transfer [HEN 93a] may take place by coupling between nuclear energy levels, or between nuclear and electron energy levels in atomic or molecular compounds. One example is the hyperpolarization of 3He and 129Xe nuclei by optical pumping [HAP 84]. Another example of dynamic polarization is the transfer of the magnetization of the electron of a free radical to the proton of a solvent, which allows generation of a considerable signal. This technique, originally used in magnetometry, has been employed for imaging at weak fields for some years now [LUR 93]. It may also serve to reinforce the detection of rarer nuclei with a small gyromagnetic ratio, such as 13C [DAY 07]. Even simpler, the magnetization of nuclei that are coupled to other nuclei may be used to modify the magnetization of the latter. This technique is common in NMR spectroscopy for the analysis of exchange phenomena, such as that between the
protons of “free” interstitial water and the protons of bound water restricted to the environment of tissue macromolecules. The method employed in imaging involves “presaturating” the magnetization of the protons of bound water [BAL 92]. This magnetization, weakened by the saturation, is transferred to that of the free nuclei involved in the exchange, whose signal is thus reduced. Subtle and selective variants of this approach exist, since the exchange of water molecules may take place with several types of macromolecules in tissues, and it is thus judicious to know how to achieve this selectively [PAC 94].
Figure 12.9. (a) Before injection; (b) after injection. Vascular injection of an organic complex of the gadolinium ion (Gd3+) in the liver enables the signal originating from this organ to be increased with respect to adjacent tissues (images courtesy of Olivier Beuf)
12.4.5. Flow of matter

The phenomena of molecular dynamics thus interfere with contrast. The displacement of matter, its velocity, and its acceleration are also capable of modifying the appearance of the signal. The phenomenon takes place in the presence of circulating blood or cerebrospinal fluid. In the images, motion artifacts are observed along the direction of the phase encoding gradient. It is only by means of the signal’s phase that the existence of motion may easily be detected and that motion may be quantified. The signal from a volume of matter that moves with a velocity v in the direction of first a gradient of strength G over t seconds, and then a gradient of strength −G over another t seconds, accumulates a phase Φ = −γGvt² at time point 2t [MOR 82]. The described manipulation has no effect on the spatial encoding of stationary elements of the volume. A simple exchange of the order of the two gradients with opposite signs changes the sign of Φ and leads, for instance, to an angiographic technique by phase contrast [DUM 86].
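The phase accumulated under this bipolar gradient pair can be verified numerically against the closed form Φ = −γGvt². A sketch, with illustrative gradient strength and lobe duration, that integrates the instantaneous precession rate γG(t′)x(t′) for spins moving at constant velocity:

```python
import numpy as np

gamma = 2.675e8     # proton gyromagnetic ratio (rad/s/T)
G, t = 10e-3, 1e-3  # gradient strength (T/m) and lobe duration (s), illustrative

def bipolar_phase(v, n_steps=200000):
    """Accumulated phase of spins at x(t') = v * t' under a +G then -G
    bipolar gradient pair, each lobe lasting t seconds."""
    tp = np.linspace(0.0, 2.0 * t, n_steps, endpoint=False)
    grad = np.where(tp < t, G, -G)
    step = tp[1] - tp[0]
    return float(np.sum(gamma * grad * v * tp) * step)

for v in (0.0, 0.1, 0.5):  # velocity in m/s
    print(f"v = {v:4.1f} m/s: numeric {bipolar_phase(v):+.4f} rad, "
          f"formula {-gamma * G * v * t**2:+.4f} rad")
```

Stationary spins (v = 0) accumulate no net phase, which is why the manipulation leaves the spatial encoding untouched.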
In practice, the angiographic methods are a bit more complex. They enable us to visualize vessels whose diameters are in the order of millimeters. However, the distribution of the velocity of the liquid across the vascular lumen changes the appearance of the diameter, which seems to be smaller than in digital angiography with X-rays.
Figure 12.10. Visualization of the cerebral vascular system, obtained at 1.5 T with a phase contrast angiography technique that involves projecting the volumetric signal onto the imaging plane (image courtesy of Olivier Beuf)
12.4.6. Diffusion and perfusion effects

The already mentioned displacement of matter is of a collective nature and hence coherent in space. Yet the microscopic random movements linked to thermal agitation, which give rise to the phenomenon of diffusion, are equally capable of attenuating the signal in the presence of gradients [CAR 54]. The intensity of areas in which the water molecules diffuse most rapidly is attenuated. The contrast of tissues is thus modified, and the local measurement of the diffusion coefficient, as well as its mapping, are made possible [LEB 95]. The restriction of anisotropic movements of the water molecules in tissues leads to an apparent diffusion coefficient tensor, whose values are precisely measurable [MAT 94]. Perfusion should not be confused with diffusion, since it is provoked by the microcirculation of the blood. It is thus of a coherent nature from a temporal point of view. By contrast, the directions of the displacements are oriented very differently. It turns out to be delicate to separate, by choice of gradients, the effects of diffusion from those of perfusion. To measure perfusion, it is preferable to resort to techniques of labeling the magnetization of the protons contained in blood [DET 92]. These techniques, which are applicable to functional studies of tissues by the evaluation of regional blood flow, must be implemented in a cinematic mode with good temporal resolution and a collection of views spaced as closely as possible.
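Diffusion attenuation is commonly modeled as S(b) = S(0) e^{−bD}, so an apparent diffusion coefficient can be estimated from two acquisitions with different diffusion weightings (b-values). A minimal sketch with illustrative numbers; a D of roughly 3 × 10⁻³ mm²/s is the order of magnitude for free water at body temperature.

```python
import math

def adc_from_two_points(S0, S1, b0, b1):
    """Apparent diffusion coefficient from S(b) = S(0) * exp(-b * D)."""
    return math.log(S0 / S1) / (b1 - b0)

D_true = 3.0e-3        # mm^2/s, roughly free water
b0, b1 = 0.0, 1000.0   # s/mm^2, typical clinical b-values
S0 = 1.0
S1 = S0 * math.exp(-b1 * D_true)  # simulated noiseless measurement
print(f"estimated ADC = {adc_from_two_points(S0, S1, b0, b1):.2e} mm^2/s")
```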
12.5. Tomography or volumetry?

The last section made evident the diversity of MRI, which is due to the multi-parametric character of its contrast. As shown, this diversity does not prohibit a unified presentation of its general principle. An interesting point with which to conclude this overview of MRI concerns the tomographic aspect of the approach. The selection of the cross-section, as already mentioned, is a complex operation, since its implementation has the goal of exciting all the magnetization in a well-defined volumetric slice in the same way, using an RF field and a gradient of the static field in the direction perpendicular to the desired cross-section. The orientation of the slice may be chosen absolutely arbitrarily. If adjacent slices are not desired, it is possible to acquire cross-sections with different orientations quasi-simultaneously. The duration of the image acquisition directly corresponds to the number of lines sampled in k-space. As we saw, this may be very short. The sensitivity, in terms of signal-to-noise ratio, is proportional to the strength of the static field, which leads to the use of very high field strengths to obtain very thin cross-sections. However, the conditions are still not optimal for very thin cross-sections, because the noise comes permanently from the totality of the sample, whereas the signal comes only from the observed volume. Each cross-section is individually reconstructed, and its signal-to-noise ratio may not benefit from an averaging effect on the noise. From this point of view, and as suggested by equation [12.7], it is preferable to directly use a volumetric approach, which we call volumetry, to distinguish it from tomography. This approach leads to a better signal-to-noise ratio, because the signal comes from the whole sample each time. From the obtained 3D image, it is possible to produce a posteriori all kinds of cross-sections. Unfortunately, the duration of the acquisition is penalized, because it is proportional to the number of lines in each cross-section times the number of cross-sections. MRI enables multislice images to be obtained, and thus renderings of volumes that are practically available in real time. However, the signal-to-noise ratio is then not optimal, and it may be preferable to resort to direct volumetric measurements of the same volume, which give practically the same cross-sections in a time approximately multiplied by the number of desired cross-sections.
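The tradeoff described in this section can be put into back-of-the-envelope form: the acquisition time grows with the number of phase-encoding lines, multiplied by the number of slice encodings in the volumetric case, while the volumetric acquisition gains signal-to-noise roughly as the square root of the number of slice encodings, since the whole volume contributes signal to every sample. The sketch below is only a rough illustration under these simplifying assumptions, with made-up parameters.

```python
def acquisition(n_lines, n_slices, TR, volumetric):
    """Relative acquisition time and SNR for multislice 2D versus 3D
    ('volumetric') encoding, under the simple scaling arguments above."""
    if volumetric:
        time = n_lines * n_slices * TR  # one additional encoding loop
        snr = n_slices ** 0.5           # noise averaging over slice encodings
    else:
        time = n_lines * TR             # slices acquired in an interleaved way
        snr = 1.0
    return time, snr

for volumetric in (False, True):
    t, snr = acquisition(n_lines=256, n_slices=32, TR=0.01, volumetric=volumetric)
    label = "3D volumetric" if volumetric else "2D multislice"
    print(f"{label}: time = {t:6.2f} s, relative SNR = {snr:.1f}")
```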
12.6. Bibliography

[ABR 61] ABRAGAM A., Les Principes du Magnétisme Nucléaire, Presses Universitaires de France, 1961.

[AHN 86] AHN C. B., KIM J. H., CHO Z. H., “High speed spiral scan echo planar imaging”, IEEE Trans. Med. Imag., vol. 5, pp. 2–5, 1986.
[BAL 92] BALABAN R. S., CECKLER T. L., “Magnetization transfer contrast in magnetic resonance imaging”, Magn. Reson. Quarterly, vol. 8, pp. 116–137, 1992.

[BLO 46] BLOCH F., “Nuclear induction”, Phys. Rev., vol. 70, pp. 460–473, 1946.

[BLO 48] BLOEMBERGEN N., Nuclear Magnetic Relaxation, Leyden, 1948.

[CAR 54] CARR H. Y., PURCELL E. M., “Effects of diffusion on free precession in NMR experiments”, Phys. Rev., vol. 94, pp. 630–638, 1954.

[CHE 89] CHEN C. N., HOULT D. I., Biomedical Nuclear Magnetic Resonance Technology, IOP Publishing Ltd., 1989.

[CRO 83] CROOKS L. E., ORTENDAHL D. A., KAUFMANN L., “Clinical efficiency of nuclear magnetic resonance”, Radiology, vol. 146, pp. 123–128, 1983.

[CUT 96] CUTILLO A. G., Application of Magnetic Resonance to the Study of Lung, Futura Publishing Company Inc., 1996.

[DAY 07] DAY S. E., KETTUNEN M. I., GALLAGHER F. A., HU D. E., LERCHE M., WOLBER J., GOLMAN K., ARDENKJAER-LARSEN J. A., BRINDLE K. M., “Detecting tumor response to treatment using hyperpolarized 13C magnetic resonance imaging and spectroscopy”, Nature Medicine, vol. 13, pp. 1382–1387, 2007.

[DET 92] DETRE J. A., LEIGH J. S., WILLIAMS D. S., KORETSKY A. P., “Perfusion imaging”, Magn. Reson. Med., vol. 23, pp. 37–45, 1992.

[DUM 86] DUMOULIN C. L., HART H. R., “Magnetic resonance angiography”, Radiology, vol. 161, pp. 717–721, 1986.

[EDE 83] EDELSTEIN W. A., BOTTOMLEY P. A., HART H. R., SMITH L. S., “Signal, noise and contrast”, J. Comput. Ass. Tom., vol. 7, pp. 391–401, 1983.

[FEY 63] FEYNMAN R., LEIGHTON R., SANDS M., The Feynman Lectures on Physics, Addison Wesley, 1963.

[FIT 67] FITZSIMMONS W. A., WALTERS G. K., “Very long nuclear spin relaxation times in gaseous 3He by suppression of 3He-surface interaction”, Phys. Rev. Letters, vol. 19, pp. 943–945, 1967.

[HAA 85] HAASE A., FRAHM J., MATTHAEI D., MERBOLDT K. D., HÄNICKE W., “Rapid NMR imaging using stimulated echoes”, J. Magn. Reson., vol. 65, pp. 130–135, 1985.

[HAA 89] HAASE A., FRAHM J., MATTHAEI D., “FLASH imaging, rapid NMR imaging using low flip angle pulses”, J. Magn. Reson., vol. 67, pp. 388–397, 1989.

[HAA 99] HAACKE E. M., BROWN R. W., THOMPSON M. R., VENKATESAN R., Magnetic Resonance Imaging. Physical Principles and Sequence Design, John Wiley and Sons Inc., 1999.

[HAP 84] HAPPER W., MIRON E., SCHAEFFER S., VAN WIJNGAARDEN W. A., WENG X., “Polarization of nuclear spins of noble gas atoms by spin exchange with optically pumped alkali metal atoms”, Phys. Rev. A., vol. 29, pp. 3092–3110, 1984.
[HEN 93a] HENKELMAN R. M., HUANG X., XIANG Q. S., STANISZ G. J., SWANSON S. D., BRONSKILL M. J., “Quantitative interpretation of magnetization transfer”, Magn. Reson. Med., vol. 29, pp. 759–766, 1993.

[HEN 93b] HENNIG J., HODAPP M., “Burst imaging”, MAGMA, vol. 1, pp. 39–48, 1993.

[HOU 79] HOULT D. I., “The solution of the Bloch equations in the presence of a varying B1 field. An approach to selective pulse analysis”, J. Magn. Reson., vol. 35, pp. 69–86, 1979.

[KUM 75] KUMAR A., WELTI D., ERNST R. R., “NMR Fourier zeugmatography”, J. Magn. Reson., vol. 18, pp. 69–83, 1975.

[LAI 81] LAI C. M., LAUTERBUR P. C., “True three dimensional image reconstruction by nuclear magnetic resonance”, Phys. Med. Biol., vol. 26, pp. 851–856, 1981.

[LAU 73] LAUTERBUR P. C., “Image formation by induced local interactions, example using nuclear magnetic resonance”, Nature, vol. 242, pp. 190–191, 1973.

[LEB 95] LE BIHAN D., Diffusion and Perfusion Magnetic Resonance Imaging, Raven Press, 1995.

[LIA 00] LIANG Z. P., LAUTERBUR P. C., Principles of Magnetic Resonance Imaging. A Signal Processing Perspective, IEEE Press, Series in Biomedical Imaging, 2000.

[LJU 83] LJUNGGREN S., “A simple graphical representation of Fourier-based imaging methods”, J. Magn. Reson., vol. 54, pp. 338–343, 1983.

[LOU 00] LOUIE A. Y., HÜBER M. M., AHRENS E. T., ROTHBÄCHER U., MOATS R., FRASER S. E., MEADE T. J., “In vivo visualization of gene expression using magnetic resonance imaging”, Nat. Biotechnol., vol. 18, pp. 321–325, 2000.

[LUR 93] LURIE D. J., NICHOLSON I., “Proton-electron double resonance imaging of exogenous and endogenous free radicals in vivo”, Proc. Int. School Physics Enrico Fermi, Course CXXIII, Nuclear Magnetic Double Resonance, MARAVIGLIA B. (Ed.), North Holland, Amsterdam, pp. 405–503, 1993.

[MAR 61] MARECHAL A., FRANÇON M., Diffraction. Structure des Images, Editions de la Revue d’Optique Théorique et Instrumentale, 1961.

[MAT 86] MATSUI S., KOHNO H., “NMR imaging with a rotary field gradient”, Magn. Reson. Med., vol. 70, pp. 157–162, 1986.

[MAT 94] MATTIELLO J., BASSER P. J., LE BIHAN D., “Analytical expressions for the b matrix in NMR diffusion imaging and spectroscopy”, J. Magn. Reson., vol. 108A, pp. 131–141, 1994.

[MOR 82] MORAN P. R., “A flow velocity zeugmatographic interlace for NMR imaging in humans”, Magn. Reson. Imag., vol. 1, pp. 197–199, 1982.

[MOR 86] MORRIS P. G., Nuclear Magnetic Resonance Imaging in Medicine and Biology, Oxford University Press, 1986.

[MUL 96] MULLER R. N., Contrast Agents in Whole Body MR: Operating Mechanisms, John Wiley and Sons Inc., Encyclopedia of NMR, 1996.
[OGA 90] OGAWA S., LEE T. M., KAY A. R., TANK D. W., “Brain magnetic resonance imaging with contrast depending on blood oxygenation”, Proc. Natl. Acad. Sci., vol. 87, pp. 9868–9872, 1990.

[PAC 94] PACHOT-CLOUARD M., DARRASSE L., “Optimization of T2 selective binomial pulses for magnetization transfer”, Magn. Reson. Med., vol. 34, pp. 462–469, 1994.

[SCH 98] SCHMITT F., STEHLING M. K., TURNER R., Echo Planar Imaging. Theory, Technique and Application, Springer, 1998.

[SLI 78] SLICHTER C. P., Principles of Magnetic Resonance, Springer, 1978.

[WEH 84] WEHRLI F. W., MARCHAL J. R., SHUTTS D., BREGER R., HERFKENS R. J., “Mechanisms of contrast in NMR imaging”, J. Comp. Ass. Tom., vol. 8, pp. 369–380, 1984.
Part 5

Functional Medical Tomography
Chapter 13
Single Photon Emission Computed Tomography
13.1. Introduction

13.1.1. Definition

Single photon emission computed tomography (SPECT) is a functional medical imaging modality that enables non-invasive estimation of the three-dimensional (3D) distribution of a radiopharmaceutical in vivo. The tomographic images are reconstructed from multiple projections, or planar images, which are acquired with a gamma camera that rotates around the patient. They enable qualitative analysis (detection of lesions) as well as quantitative analysis (measurement of the concentration of radiopharmaceuticals in organs or estimation of functional parameters). The basic principles of SPECT were established by Kuhl in 1963, without using a gamma camera or a computer [KUH 63]. Anger, the inventor of the gamma camera, demonstrated in 1967 the feasibility of SPECT with a gamma camera [ANG 67]. The reconstruction of SPECT images with a computer was reported in 1971 [MUE 71], and commercial SPECT systems appeared in 1978. Today, many SPECT systems are combined with computed tomography (CT) systems and thus permit the joint acquisition of anatomical and functional information [SEO 08].

13.1.2. Functional versus anatomical imaging

SPECT images reflect the spatial distribution of a radiopharmaceutical whose uptake in the body certainly depends on the anatomy of the involved organs
(locations, volumes), but is mainly determined by their function, which motivates the use of the term “functional imaging”. The influence of the physiopathological state of examined organs on the obtained images is far weaker in other medical imaging modalities, such as CT, MRI, and ultrasound, which achieve a better spatial resolution and are classified as “anatomical imaging” modalities. The indications for a SPECT scan are different from those for anatomical imaging modalities and concern all pathologies (cardiac, neurological, oncological, osteoarticular, pulmonary, etc.). In this chapter, we study as an example a myocardial perfusion scan, which is currently a common application of SPECT. SPECT imaging relies on three components: the radiopharmaceutical, whose biodistribution is studied, the gamma camera, which enables detection of the signal emitted by the radiopharmaceutical, and the tomographic reconstruction, which allows estimation and visualization of the 3D distribution of the radiopharmaceutical in the body. These three components are described in more detail in the following.

13.2. Radiopharmaceuticals

The radiopharmaceutical, or radioactive tracer, is the basis of SPECT imaging, since the studied function depends on it and it is the origin of the measured signal. It combines a vector, which ensures functional selectivity, with a radioactive marker, which emits the photons that enable the localization. In certain cases, the vector and the marker are the same, since the radioactive element itself has an interesting specific biological behavior (201Tl, 123I, 131I).

13.2.1. Vectors

Different types of vectors allow the study of different physiopathological functions. For example, lung perfusion is studied with radioactively labeled albumin aggregates with a diameter of 10 to 40 μm. After intravenous injection, these are mechanically blocked in the lung capillaries, whose cross-section is smaller. Many tracers enable evaluation of the perfusion of other organs. They leave the vascular space, and they are taken up by the cells of the studied organ in amounts proportional to its perfusion (see section 13.5 on myocardial SPECT). The selectivity of other vectors results from their integration into a metabolic process. For example, polyphosphates are incorporated into the physiological process of bone renewal and thus allow imaging of the skeleton. Molecules that are hormonal precursors are also available, such as meta-iodobenzylguanidine, which specifically enables the study of tumors that produce catecholamine in excess.
Moreover, ligands of cellular receptors, such as somatostatin and dopamine, exist. This tracer concept opens up a broad range of applications to be explored. For example, labeled oligonucleotides may allow the in vivo visualization of the overexpression of certain genes in the future.

13.2.2. Radioactive gamma markers

The radioactive markers employed in SPECT are all emitters of gamma photons, which are characterized by their discrete emission spectrum. Their energies range in practice between 80 keV for 201Tl and 360 keV for 131I. The most common marker is 99mTc, which has several advantages. It emits only gamma photons with an energy of 140 keV, which is well suited to detection by gamma cameras, it has a short half-life of 6 h, and it may easily be produced using a 99Mo generator with a half-life of 2.7 days.

13.3. Detector

Knowledge of the detection principle of gamma photons in gamma cameras enables the direct detection problem in SPECT to be formalized.

13.3.1. General principle of the gamma camera

The detector employed in SPECT is a gamma camera, which is composed of one or several detection heads (Figure 13.1a) which are mounted on a gantry.
Figure 13.1. (a) Gamma camera with two detection heads; (b) main components of a detection head
Each detector head contains a scintillating crystal of sodium iodide doped with thallium – NaI(Tl) – of about 1 cm thickness, whose size determines the field of
view of the camera (typically 60 × 40 cm²), an array of several tens of photomultiplier tubes, and an electronic localizing system (Figure 13.1b). They are all shielded with lead to reduce the influence of environmental radiation. The gamma photons that leave the patient and arrive at the crystal interact with the latter. Due to the properties of the NaI(Tl) crystal, part of the energy that a gamma photon transfers to it is converted into photons with an energy of about 3 eV (410 nm wavelength), corresponding to blue light, to which the crystal is transparent and the photocathodes of the photomultipliers are sensitive. The electrical currents from the different photomultiplier tubes are processed by a digital positioning circuit, which determines by barycentric calculation the location (x, y) of the interaction of the gamma photon with the crystal and the energy Ep transferred from the gamma photon to the crystal, as a function of the distribution and the amplitude of the light received by the set of photomultiplier tubes. This information is stored in the form of a list (x, y, Ep) of detected photons (list mode acquisition) or used to increment (by one event) the pixel containing the coordinate (x, y) of the image matrix, in which the photons of the selected energy are accumulated (frame mode acquisition). Detection by scintillation is thus carried out in two steps. The gamma photons are first converted into visible light by the crystal, and the visible light is then converted into electrical signals by the photomultiplier. Direct conversion of the gamma photons into electrical signals is possible using semiconductor gamma cameras (employing CdTe and CdZnTe, for instance). Such cameras, which have a better energy resolution than scintillation cameras, are being developed and may eventually compete with Anger cameras. 13.3.2. Special features of single photon detection: collimator and spectrometry To associate the location of the interaction between a gamma photon and the crystal with the location of the emission of the gamma photon in the organism, each detection head of the gamma camera is equipped with a collimator (Figure 13.1b). The collimator consists of a set of holes, which measure about 4 cm in length and 1 to 2 mm in diameter. These holes are separated by thin walls (called septa) of lead or tungsten of 0.2 to 0.5 mm thickness, which restrict the passage of radiation to a very small solid angle around a given direction. The characteristics of the collimator (orientation and length of the holes, thickness of the septa) are chosen depending on the energy of the radiopharmaceutical. They determine the spatial resolution and the sensitivity (section 13.3.3). For example, the most common collimators with parallel holes (Figure 13.2a) select those gamma photons that hit the crystal approximately perpendicular to its surface. They enable scanning of organs up to the size of the crystal of the camera. Non-parallel collimators, such as fan-beam and cone-beam collimators (Figure 13.2), also exist. They represent a different tradeoff between spatial resolution, detection sensitivity (section 13.3.3), and field of view. Due to the direction of the holes, the
objects in the images obtained with these collimators are enlarged in one (fan-beam) or two (cone-beam) directions at the expense of a reduced field of view (possible truncation of data). The enlargement and truncation must be taken into account in the tomographic reconstruction. Fan-beam and cone-beam collimators are mainly used for scanning small organs (the thyroid gland, for example) or small organisms (small animals), for which they offer interesting compromises between spatial resolution and sensitivity.
Figure 13.2. (a) Parallel, (b) fan-beam, and (c) cone-beam collimators
Besides the selection of the direction of the incident photons, a spectral selection, i.e. a selection of the energy of the detected photons, is carried out. In general, only some of the detected photons are considered as a “useful” signal. Typically, they are required to have an energy in a window that is centered around the energy of the emitted photons and that has a width of 15 to 20% of this energy. The choice of the width is mainly influenced by the limited energy resolution of the gamma cameras. In this way, a considerable number of photons that were Compton scattered before interacting with the crystal may be excluded. These photons provide erroneous information about the location of the emission (see section 13.4.3.3). 13.3.3. Principal characteristics of the gamma camera Gamma cameras are essentially characterized by their spatial resolution, their sensitivity, and their energy resolution. The spatial resolution R is defined as the full width at half maximum of the response function observed when an image is formed from a radioactive quasi-point source. It is a function of the distance between the source object and the camera. It is linked to the intrinsic resolution Ri, which is independent of the source–detector
distance and is typically 3 mm at 140 keV, and the geometrical resolution of the collimator Rg, which depends on the source–detector distance and is usually larger than 6 mm:

R = (Ri² + Rg²)^{1/2}    [13.1]
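As a purely numerical illustration of equation [13.1], the short sketch below combines an intrinsic resolution of 3 mm with a geometrical resolution that is assumed, for simplicity, to grow linearly with the source–collimator distance; the coefficients of this linear model are invented and do not describe any particular collimator.

```python
import math

def system_resolution(distance_mm, ri_mm=3.0, rg0_mm=2.0, slope=0.05):
    """Global spatial resolution R from equation [13.1].

    The geometrical resolution Rg is modeled here as
    rg0 + slope * distance (an illustrative linear growth only).
    """
    rg_mm = rg0_mm + slope * distance_mm
    return math.sqrt(ri_mm ** 2 + rg_mm ** 2)

for d_mm in (50, 100, 150, 200):
    print(f"source at {d_mm:3d} mm -> R = {system_resolution(d_mm):.1f} mm")
```

With these illustrative values, R is quickly dominated by Rg as the source moves away from the collimator.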
Rg varies with the characteristics of the collimator, but it is always the dominant limitation of the global spatial resolution of the camera. The spatial resolution of clinical gamma cameras is generally close to 10 mm. The sensitivity of the camera corresponds to the number of detected events for a given activity, distance, and time. It is, for example, measured in counts per second and megabecquerel (cps/MBq), where one becquerel is equivalent to one radioactive decay per second. Apart from the solid angle, the collimator is the element of the gamma camera that reduces the sensitivity of the detection the most, since only about 1 photon out of 10,000 traverses it. The sensitivity of a clinical gamma camera varies with the characteristics of the collimator, but is typically in the order of 100 cps/MBq. For parallel collimators, the sensitivity depends only weakly on the distance between the source and the collimator, while for fan- and cone-beam collimators, it varies considerably with this distance. The energy resolution of a gamma camera is quantified by the full width at half maximum of the energy response function for a gamma source with given energy. It depends on the crystal’s nature, typically equals 10% at 140 keV, and improves for higher energies (it is approximately proportional to the inverse of the square root of the energy). The accuracy with which the Compton scattered and non-Compton scattered photons may be separated by analyzing the energy of the detected photons depends on this energy resolution. With CdTe cameras, the energy resolution may be less than 5% at 140 keV. 13.3.4. Principle of projection acquisition A SPECT acquisition involves rotating the detector heads around the subject on a usually circular orbit with respect to the feet–head axis of the patient to collect a set of two-dimensional images called projections. Each projection corresponds to the distribution of activity seen under a given angle (Figure 13.1a). The acquisitions are performed with a rotation over 360° or 180°. The 180° segment is chosen such that the detector head is as close as possible to the scanned region. Classical acquisition parameters are 128 projections over 360°, a 64 × 64 or 128 × 128 matrix of pixels per projection, a pixel size between 3 × 3 and 10 × 10 mm², and 30 s per projection. Due to the low sensitivity of SPECT, the acquisitions usually last between 15 and 60 min.
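To make this acquisition process concrete, here is a toy sketch of frame-mode accumulation of list-mode events (x, y, Ep) into one projection, with the spectral window of section 13.3.3 applied; the event list, matrix size, and field of view are invented for illustration.

```python
import numpy as np

def bin_projection(events, n=64, fov_mm=400.0, e0_kev=140.0, window=0.20):
    """Accumulate list-mode events (x_mm, y_mm, Ep_keV) into an n x n
    projection image, keeping only photons inside the energy window."""
    image = np.zeros((n, n), dtype=np.int32)
    e_min, e_max = e0_kev * (1 - window / 2), e0_kev * (1 + window / 2)
    for x, y, ep in events:
        if e_min <= ep <= e_max:                 # spectral selection
            i = int((x / fov_mm + 0.5) * n)      # map mm to pixel index
            j = int((y / fov_mm + 0.5) * n)
            if 0 <= i < n and 0 <= j < n:
                image[j, i] += 1                 # frame mode: increment pixel
    return image

events = [(12.0, -35.5, 139.2), (80.3, 10.1, 121.0), (-5.7, 60.2, 144.8)]
proj = bin_projection(events)
print(proj.sum(), "events accepted")  # the 121 keV event falls outside 126-154 keV
```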
13.3.5. Transmission measurement system The acquisition of projections provides information on the distribution of the radiopharmaceutical in the organism. To estimate this distribution from projections, it is necessary to take into account the attenuation of a certain proportion of the emitted radiation by the traversed tissues. Some of the emitted photons never leave the organism, because they are completely absorbed by the photoelectric effect in the tissues. The magnitude of the attenuation depends on the composition and the density of the traversed tissues. To take this into account, it is necessary to know the attenuation properties of the tissues, i.e. the local values of the attenuation coefficient μ. These can be measured by “transmission” acquisition systems (in contrast to classical “emission” acquisition systems), which are combined with the gamma cameras (Figure 13.3). The principle is the same as that of CT (see Chapter 10). It relies on either an X-ray generator in the hybrid SPECT–CT systems, which combine a gamma camera and a CT scanner and which are now commercially available, or an external gamma source in the older SPECT systems, in which the same gamma camera serves as detector for both emission and transmission measurements. The 3D map of the attenuation coefficient μ is estimated by tomographic reconstruction of the acquired transmission projections.
Figure 13.3. Transmission acquisition systems using (a) a planar source, (b) a scanning line source, moving in the direction indicated by the arrow, and (c) a fixed line source opposite a fan-beam collimator
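Since the transmission measurement follows the same principle as CT, each detected intensity obeys the Beer–Lambert law, I = I0 exp(−∫ μ dl), and the line integral of μ is recovered as ln(I0/I) before tomographic reconstruction of the μ map. The sketch below illustrates this preprocessing step with invented count values (a blank scan I0 and a transmission scan I).

```python
import numpy as np

def attenuation_sinogram(transmission, blank):
    """Convert transmission counts to line integrals of mu.

    Beer-Lambert: I = I0 * exp(-integral of mu along the ray), hence
    integral of mu = ln(I0 / I).
    """
    transmission = np.asarray(transmission, dtype=float)
    blank = np.asarray(blank, dtype=float)
    return np.log(blank / transmission)

blank = [1000.0, 1000.0, 1000.0]   # counts without the patient
trans = [900.0, 350.0, 120.0]      # counts through the patient
print(attenuation_sinogram(trans, blank))  # line integrals of mu, per ray
```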
In the case of transmission acquisitions with an external gamma source, the transmission acquisitions may be performed either before or during the emission acquisitions, using different systems that combine different external sources (planar, static line, scanning line, or point source) with different collimators (Figure 13.3). The external source may be an isotope used in emission imaging, such as 99mTc, or, more commonly, one with a far longer half-life, such as 153Gd (emission energy of 100 keV and half-life of 242 days), which enables the same source to be employed in transmission for a longer time. The values of the attenuation coefficient μ depend
on the energy of the gamma photons. Therefore, they must be estimated for the isotope used in emission by extrapolation from the values measured in transmission at a different energy. In the case of transmission acquisitions with CT, the acquisition is carried out immediately before or after the classical SPECT acquisition. The CT images are spatially resampled to obtain the same pixel size as in the SPECT images. The values of the pixels in CT images are measured in Hounsfield units (HU) and are converted to the attenuation coefficient μ, adapted to the energy of the employed radiopharmaceutical. This conversion is commonly done using a bilinear relation between the HU values and the values of μ. The use of a CT scan for the transmission measurement leads, in a comparatively short time, to images of μ with very low noise compared to those acquired with a gamma source. These images additionally provide anatomical information, which facilitates the interpretation of the SPECT images. However, there are two disadvantages. First, the use of a CT scan increases the radiation dose for the patient substantially, since the dose induced by a gamma transmission source is comparatively small. Second, depending on the acquisition mode employed for the CT scan, the CT images may not be perfectly spatially aligned with the SPECT images in the presence of respiratory motion in the field of view. The SPECT images are averaged over a large number of respiratory cycles, while the CT images may be considered as “instantaneously” acquired and thus as reflecting a precise phase of the respiratory cycle. This poor correspondence may cause artifacts at the interface of tissues with different attenuation properties (such as lung–soft tissue interfaces). 13.4. Image reconstruction 13.4.1. Hypotheses Tomographic reconstruction in SPECT involves estimating the 3D distribution of the radiopharmaceutical in the patient’s body from two-dimensional (2D) projections that are acquired by rotation of the detector heads of the gamma camera. The implemented reconstruction methods basically rely on two hypotheses: 1) the patient and the organs are perfectly static throughout the acquisition; 2) the spatial distribution of the radiopharmaceutical does not change during the acquisition. These hypotheses boil down to the assumption that all the projections correspond to the representation of a single distribution of activity, seen under different angles of incidence.
These hypotheses are only approximately fulfilled. Involuntary motion of the patient introduces inconsistencies in the data, which must be corrected to avoid artifacts in the reconstructed images. Even in the absence of involuntary motion, physiological motion, such as beating of the heart or respiration, still produces inconsistencies. The reconstructed images are then affected by “motion blur”. Periodic physiological motion, such as beating of the heart, may be partially controlled by gating the acquisition with the electrocardiogram. This type of acquisition involves the acquisition of 8 (or 16) images per cardiac cycle and the addition of all the information that corresponds to the first eighth of the cardiac cycle in the first image, of all the information that corresponds to the second eighth of the cardiac cycle in the second image, etc. Gating the acquisition thus results in 8 sets of projections, which permit the reconstruction of 8 distributions of activity, representing the distribution of the tracer over the cardiac cycle. The evolution of the distribution of the tracer over the cardiac cycle may thus be studied dynamically, providing information on wall motion. Finally, independently of motion, the distribution of the radiopharmaceutical is not necessarily static throughout the acquisition. In that case, dynamic tomography has to be applied to estimate the change in the 3D distribution of activity over time, using reconstruction techniques that take into account the time-varying activity distribution of the radiopharmaceutical [TSO 08]. 13.4.2. Reconstruction algorithms 13.4.2.1. Tomographic reconstruction problem in SPECT If scatter, attenuation, and the limited response function of the detector (which are covered in section 13.4.3) are neglected, the problem to be solved may be expressed in the following continuous form:
s(ψ, p) = Rf(ψ, p) = ∫_{−∞}^{+∞} f(M) dM    [13.2]
where s(ψ, p) denotes the number of photons detected at position p of the projection ψ, and f(M) = f(x, y) denotes the distribution of activity to be estimated, i.e. the num-
ber of photons emitted by the object at position M during the acquisition (see Chapter 2). In discrete form, the problem reads:

s = Rf    [13.3]
where s denotes the matrix of acquired projections (or sinograms), whose number of elements equals the number of pixels in each projection times the number of projections,
f denotes the distribution of activity to be estimated, and each element (i, j) of the projector or system matrix R is proportional to the probability of a photon emitted in voxel i being detected in detector element (dexel) j. When the data are acquired with a parallel or fan-beam collimator, this 3D problem may be separated into a set of 2D problems. The photons emitted in a cross-section (x, y) are projected onto a single transverse line in each of the projections. Thus, each cross-section f(x, y) may independently be reconstructed from the set of one-dimensional (1D) projections. When the data are acquired with a cone-beam collimator, the photons emanating from a cross-section (x, y) are projected onto several detector lines, and a true 3D tomographic reconstruction must be performed (see Chapter 2). Tomographic reconstruction in SPECT is carried out either by analytical reconstruction (Chapter 2) or by algebraic reconstruction (Chapter 4). 13.4.2.2. Analytical reconstruction in SPECT The most often used analytical method in clinical routines is the 2D filtered backprojection reconstruction (see section 2.2.3). It is fast and robust, and its application requires only the choice of the reconstruction filter. The most common filters in SPECT are the low-pass filters by Hann and Butterworth (section 2.2.4) and the restoration filters by Wiener and Metz. The choice of filter depends in general on the type of scan, i.e. on whether the distribution of the radiopharmaceutical is more focal or diffuse and on the noise level in the images. Default settings are usually available for different scanning protocols. The filtered backprojection has two major drawbacks. First, streak artifacts, characteristic of the backprojection (section 2.2.3), may considerably disturb the interpretation of images, notably in applications where uptake of the radiopharmaceutical is very focal, as in oncology. Second, the filtered backprojection algorithm is not well suited to take scatter, attenuation, or non-stationary spatial resolution into account. For these reasons, algebraic reconstruction methods are nowadays applied in the majority of cases in SPECT. 13.4.2.3. Algebraic reconstruction in SPECT Although the use of algebraic reconstruction methods had already been proposed very early on [KUH 63], only around 1995, when computational resources enabled algebraic reconstructions to be performed in reasonable times, were they actually applied in clinical routines. Two methods are mainly used in SPECT: the ML-EM algorithm (section 4.4.2.1) in its accelerated form OSEM [HUD 94], which is the most frequently applied method, and the conjugate gradient algorithm (section 4.3.1.2).
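The following minimal sketch shows one possible implementation of the ML-EM update for equation [13.3], with the system matrix stored as a dense array for readability (in practice R is very large and sparse); restricting each update to one ordered subset of the projection rows yields the OSEM variant discussed below.

```python
import numpy as np

def mlem(R, s, n_iter=50, eps=1e-12):
    """ML-EM reconstruction for s = R f with Poisson data.

    R: (n_dexels, n_voxels) system matrix; s: measured counts.
    Each iteration multiplies f by the backprojected ratio between the
    measured and estimated projections, so f remains non-negative.
    """
    sensitivity = R.sum(axis=0)        # sum over dexels of R_ij, per voxel
    f = np.ones(R.shape[1])            # positive initial estimate
    for _ in range(n_iter):
        estimate = R @ f               # forward projection R f
        ratio = s / np.maximum(estimate, eps)
        f *= (R.T @ ratio) / np.maximum(sensitivity, eps)
    return f

# OSEM sketch: apply the same update with R and s restricted to one
# ordered subset of the projection rows, cycling through the subsets
# within each iteration.
```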
The argument generally put forward to justify the use of the ML-EM algorithm in SPECT is that it models the statistical properties of the acquired projections based on the assumption that the counting rate in the dexels follows a Poisson law [LAN 84]. However, the ML-EM algorithm is also successfully applied – improperly from a theoretical point of view – to preprocessed data, which no longer satisfy this assumption. The interest in the ML-EM algorithm in SPECT is largely due to the fact that it constrains, unlike the filtered backprojection or the conjugate gradient approach, the distribution of activity to positive values (the algorithm being multiplicative). This is advantageous, since it conforms to the expected distribution of activity. Its multiplicative character also induces an implicit constraint on the support. Moreover, it conserves the number of events that are reconstructed at each iteration, which is perfectly consistent with the underlying physics of SPECT acquisition. The major drawback of the ML-EM algorithm is its slow convergence speed: it needs several tens of iterations. However, this drawback was resolved by the introduction of an acceleration method called the ordered subsets method [HUD 94]. The resulting algorithm was coined ordered subset expectation maximization (OSEM). Each iteration is decomposed into iterations on subsets, where each subset contains only a part of the projections. By choosing each subset such that the projections contained in it are as complementary as possible (i.e. differ by a sufficiently large angle), acceleration factors of the order of the number of subsets are attainable. An assumption made by this approach is that the probability of a photon emitted in a voxel being detected in a subset is the same for all subsets [HUD 94]. For this assumption to be approximately valid, it is necessary to include a sufficient number of projections in each subset, for example by dividing 64 projections into 8 subsets. The conjugate gradient method (section 4.3.1.2) is another algebraic approach used for tomographic reconstruction in SPECT. The additive character of this algorithm ensures a rapid convergence (few iterations), but it does not guarantee that the reconstructed distribution of activity is positive. The conjugate gradient algorithm is not amenable to the concept of ordered subsets, even if accelerated versions of gradient descent algorithms exist, which are based on minimum residual (see section 4.3.1.2) or block reconstruction methods and essentially apply ordered subset techniques (see section 4.6). 13.4.3. Specific problems of single photon detection Phenomena specific to SPECT acquisitions complicate the implementation of tomographic reconstruction algorithms and degrade the obtained image quality. The main problems are the counting noise that affects the projections, the attenuation, scatter, and the depth-dependent spatial resolution of the detector.
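Before these phenomena are detailed, the toy simulation below illustrates the Poisson counting statistics discussed in section 13.4.3.1; the mean of 30 counts per projection pixel is an invented value of the order quoted in the text.

```python
import numpy as np

rng = np.random.default_rng(0)
mean_counts = 30                      # typical counts per projection pixel
pixels = rng.poisson(mean_counts, size=10_000)

# For a Poisson law the variance equals the mean, so the relative
# fluctuation is 1/sqrt(mean): about 18% at 30 counts per pixel.
print(pixels.mean(), pixels.var())
print("relative noise:", pixels.std() / pixels.mean())
```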
13.4.3.1. Counting noise Statistical fluctuations are especially severe in SPECT. For safety reasons, the radioactive dose administered to the patient is small (typically some hundreds of MBq). A large part of the emitted radiation is attenuated by the tissues and does not contribute to the detected signal. Moreover, gamma cameras have low sensitivity, mainly due to the presence of the collimator. The number of detected events per pixel in the projections is only in the order of some tens. The counting noise of the detected photons follows a Poisson law (variance equal to mean). It is implicitly low-pass filtered in the filtered backprojection reconstruction. In algebraic reconstruction methods, the noise in the reconstructed images increases with the number of iterations, even with the ML-EM algorithm, which models the nature of the noise [WIL 93]. To avoid this amplification, the iteration must be stopped early (at the risk of not reaching convergence), regularization must be incorporated into the reconstruction, or the reconstructed images must be filtered. All these approaches are used in SPECT. Early termination of the iteration is done at the expense of the restoration of the spatial resolution and the image contrast. The regularization requires a priori information, for example the continuity of the distribution of activity to be reconstructed. Filtering of the reconstructed data is a simple solution to reduce the noise level, but it also reduces the spatial resolution. 13.4.3.2. Attenuation As mentioned in section 13.3.5, part of the photons emitted by the radiopharmaceutical is attenuated when passing through the body. Using the attenuated Radon transform (section 2.2.4), the reconstruction problem becomes:

s(ψ, p) = Rf(ψ, p) = ∫_{l1}^{l2} f(M) exp(−∫_{M}^{l1} μ(P) dP) dM    [13.4]
where l1 and l2 are the two points at the boundary of the attenuating medium along the projection line corresponding to the pair (ψ, p), with l1 being located closer to the detector. Attenuation increases with the density of the attenuating medium and with the atomic number Z characterizing the medium, and it increases as the energy of the gamma photons decreases. For example, it causes a loss of more than 70% of the emitted photons in cardiac imaging. Analytical inversion of the attenuated Radon transform has been achieved recently [NAT 01, NOV 02], but only after iterative tomographic reconstruction methods were widely adopted in routine SPECT imaging.
For this reason, the obvious approach to taking attenuation into account in the reconstruction is to model the attenuation when calculating the projector R [TSU 88] and to invert equation [13.3] using an iterative reconstruction method, typically the OSEM method. The attenuation model in R can be derived from a transmission scan (section 13.3.5). With this approach, it is sufficient to describe the exponential attenuation of the flux of photons emitted from each voxel as a function of the density of the tissues that it passes through before reaching a point on the detector. Algebraic reconstruction with a projector that includes the attenuation poses no specific problems and is currently the preferred approach for compensating for attenuation in SPECT imaging. The approach involves modeling the perturbation in the projector. It is a generic strategy that can also be used to compensate for scatter and the limited detector response function (see sections 13.4.3.3 and 13.4.3.4), which makes it particularly attractive.
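A highly simplified sketch of this strategy for a single 1D ray: the geometric contribution of each voxel is multiplied by the attenuation factor exp(−Σ μ Δl) accumulated between that voxel and the detector. The voxel ordering convention and the μ values are invented for illustration, and half-voxel path lengths are ignored for brevity.

```python
import numpy as np

def attenuated_row(mu, dl=1.0):
    """Attenuation weights of one projection ray through a 1D row of voxels.

    Voxel 0 is assumed closest to the detector; a photon emitted in
    voxel k is attenuated by all voxels it crosses on its way out,
    i.e. by exp(-sum(mu[0:k]) * dl).
    """
    mu = np.asarray(mu, dtype=float)
    path = np.concatenate(([0.0], np.cumsum(mu[:-1]) * dl))
    return np.exp(-path)                # attenuation factor per voxel

mu_row = [0.00, 0.15, 0.15, 0.04]       # cm^-1: air, tissue, tissue, lung
print(attenuated_row(mu_row))           # multiplies the geometric weights in R
```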
13.4.3.3. Scatter Ideally, the photons emitted by the radiopharmaceutical would leave the patient without interaction and would transfer all of their energy to the crystal of the gamma camera. In reality, however, certain photons never reach the camera, since they are attenuated by the photoelectric effect (sections 13.3.5 and 13.4.3.2). Other photons undergo one or more Compton interactions with electrons before they leave the body. Compton scatter has two consequences. First, the scattered photons deviate from their initial direction, and the location where they are detected corresponds to that of their last Compton interaction, instead of that of their emission. Thus, scattered photons are wrongly localized in the image, which leads to a loss of contrast and a quantitative bias. Second, the scattered photons lose part of their energy, depending on the number of interactions they undergo and on the scattering angle of each interaction. The integration of only those photons whose energy is within a spectral window of limited range (see section 13.3.3) enables reduction of the number of detected scattered photons, but without eliminating them, due to the limited energy resolution of the gamma cameras. For example, about 30% of the photons detected in a window from 126 keV to 154 keV are scattered photons in an acquisition with 99mTc (140 keV emission energy). Scatter may be taken into account by subtracting the scattered photons from the projections or by modeling it in the projector that is employed in the algebraic reconstruction [BUV 94]. These two approaches are still used today. Subtraction of the scattered photons from the projection is performed based on spectral considerations, for example by assuming that the image of the scattered photons detected within the spectral window is almost identical, except for a factor, to the image of the photons detected within a spectral window corresponding to a lower energy, in which only scattered photons are detected. The spatial distribution of the scattered photons may also be estimated by deconvolution, for example by assuming that the image of the scattered photons detected within the spectral window may be considered as the image of the non-scattered photons detected within the spectral window convolved with a kernel that depends on the characteristics of the attenuating medium. To avoid the noise amplification associated with deconvolution, the image of the scattered photons may also be estimated by convolving the image of the detected photons with an ad hoc kernel. The scatter correction then involves subtraction of this estimated image from the acquired image. Another approach involves modeling the scatter effect in the projector. However, unlike attenuation (see section 13.4.3.2), scatter is not amenable to a simple analytical modeling. It is a 3D non-stationary phenomenon, which depends on the characteristics of the scattering medium. Theoretically, photons scattered more than once also have to be considered. Models of scattering in non-uniform media have been proposed [ZAI 04], which exploit knowledge of the density of the tissues obtained from transmission acquisitions (see section 13.3.5). In these models, the photons emitted from a voxel contribute not only to one dexel, but also to adjacent dexels in each of the projections due to the non-zero probability of scatter (hence changing direction). Solving the inverse problem when scatter is modeled in R is performed with the ML-EM, OSEM, and conjugate gradient approaches. However, due to the 3D nature of the scatter effect, the matrix R corresponds to a projection operator that is far less sparse than without a scatter model. Solving the inverse problem is thus significantly more time consuming. 13.4.3.4. Variation of the spatial resolution with depth Due to the presence of the collimator, the point-spread function of the camera depends on the distance between the point source and the collimator. The 3D distribution of activity is, therefore, not seen with a constant spatial resolution. For example, the spatial resolution may be 6 mm at 5 cm from the detector and 12 mm at 20 cm from the detector with a parallel collimator. If this effect is neglected, it introduces geometric distortions and biases in reconstructed SPECT images. Taking the variation of the spatial resolution with the distance to the detector into account first requires determining the resolution of the detector as a function of the distance between the source and the collimator by means of acquisitions with quasi-point sources. Once this function is known, two processing strategies are distinguished. The first involves performing a non-stationary deconvolution of the projections, for instance in the Fourier domain by using the frequency–distance principle (section 2.2.5), which associates each point in the frequency domain with a distance between a point source and the detector [XIA 95]. The second, currently the most widely used, involves modeling the response function of the detector in the
projector R employed in the algebraic reconstruction [ZEN 98]. This modeling is analytically fairly simple, since the photons emitted from a voxel contribute not only to a dexel in each of the projections as expected from geometric considerations only, but are also distributed around this dexel according to a Gaussian response function. As for scatter, the main drawback of this approach is that it substantially increases the computation time and slows down the convergence of the iterative algorithm. It is important to note that the correction of some but not all perturbing phenomena (for instance only an attenuation correction, but no scatter or detector response correction) may amplify artifacts in the images due to other uncorrected phenomena (section 13.5.4). The most unified approach to considering attenuation, scatter, and detector response variations at the same time is to model them in the projector R employed in the algebraic reconstruction. While the modeling of attenuation and of the detector response function poses no theoretical problems, the modeling of scatter turns out to be more complex, but approximate solutions exist [ZAI 04]. Currently, the clinically available SPECT reconstruction algorithms enable in most cases compensation for attenuation, scatter, and spatial resolution variations, either by integrating the corresponding corrections into the reconstruction, or by employing hybrid approaches, such as first compensating the projections for scatter and then modeling attenuation and variations of the response function of the detector in the reconstruction. By compensating for attenuation, scatter, and detector response function, SPECT images can be interpreted quantitatively; for instance, the local concentration of a radiotracer can be estimated. Other phenomena, such as respiratory motion or partial volume effects (mixture of signals emanating from different structures within the same voxel), may also affect the quality of measurements made based on the images, notably in small structures or in regions largely affected by respiratory motion (section 13.5). 13.5. Example of myocardial SPECT 13.5.1. Indications To study myocardial perfusion, i.e. blood flow in the cardiac muscle, a radiopharmaceutical is intravenously injected, which is taken up by the cardiac muscle and which accumulates in it proportionally to the local blood flow. The resulting images can show ischemic regions with reversibly reduced vascular supply and infarcted or necrotic regions with irreversibly reduced vascular supply. This common scan in cardiology is used to evaluate the consequences for the myocardium of a narrowing of the coronary arteries by an atheroma [STR 98].
13.5.2. Radiopharmaceuticals, injection and acquisition protocols Ischemia is reflected by a local reduction in myocardial perfusion under stress compared to that at rest, while irreversible myocardial necrosis causes reduced perfusion both under stress and at rest. Consequently, two SPECT scans are usually performed: one after an injection of the radiopharmaceutical at the maximum of physical or pharmacological stress, and the other after an injection at rest. The employed radiopharmaceuticals are 201Tl or tracers labeled with 99mTc (sestamibi and tetrofosmin). They are taken up by the myocardium proportionally to the local blood flow. This uptake is sufficiently stable for the hemodynamic conditions at the moment of the injection, i.e. under maximum stress, to still be reflected in the images acquired after the end of the stress. The projections are acquired over 360° or 180°, and the patient is placed in a supine or prone position. The acquisition orbit is circular, elliptical, or irregular; its objective is to minimize the distance between the thorax and the detector for each projection. High-resolution, low-energy parallel or fan-beam collimators are mainly used. Synchronizing the acquisition with the electrocardiogram enables separate reconstruction of several phases of the cardiac cycle (gated SPECT; section 13.4.1). 13.5.3. Reconstruction and interpretation criteria A careful visual analysis of the projections before their reconstruction is recommended to detect possible movements of the patient between the projections, which must be corrected before the reconstruction. The reconstruction is performed using filtered backprojection or, more frequently, algebraic methods, which consider (commonly with recent cameras) or ignore (with older cameras) the physical phenomena that lead to artifacts (attenuation, scatter, response function of the detector). The nature of the low-pass filter and the number of iterations are chosen empirically. The reconstructed transaxial cross-sections are resliced along the long axis of the left ventricle (Figure 13.4). They are displayed in a standardized form to facilitate comparison between those obtained under stress and those obtained at rest. The nuclear medicine physician visually evaluates the size and severity of potential defects in the tracer uptake and the differences between the scans under stress and at rest. The regions in which the uptake is lower than normal both under stress and at rest correspond in general to irreversibly damaged, necrotic myocardial areas. The regions that are normal at rest, but not under stress, are classified as ischemic (Figure 13.4). Semi-quantitative analysis software may help in the diagnosis [GAR 85] by comparing the extent and severity of defects with stored reference data. A polar representation, a so-called bull’s eye plot (polar coordinates), may be used to display the whole myocardium. It shows in 2D the 3D myocardial uptake with the apex of the heart in the center, the anterior wall to the north, the inferior wall to the south, the lateral wall to the east, and the intra-ventricular septum to the west.
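A sketch of the geometry behind such a polar map: each myocardial sample, indexed by its slice position along the long axis (apex to base) and its circumferential angle, is assigned a position in the 2D map with the apex at the center and the base at the rim. The sampling convention below is an illustrative choice, not the normalization used by any particular software.

```python
import numpy as np

def bulls_eye_coordinates(slice_index, n_slices, angle_rad):
    """Map a myocardial sample to 2D polar-map coordinates.

    The radius grows from the apex (slice 0, center of the map) to the
    base (outer rim); the circumferential angle is kept unchanged.
    """
    r = (slice_index + 0.5) / n_slices          # normalized radius
    return r * np.cos(angle_rad), r * np.sin(angle_rad)

# Example: a sample at mid-ventricle (slice 5 of 10), lateral wall (east).
print(bulls_eye_coordinates(5, 10, 0.0))
```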
The cross-sections reconstructed from data that were acquired in synchronization with the electrocardiogram (gated SPECT) enable measurement of the left ventricular volume at end systole (ES) and at end diastole (ED) and calculation of the corresponding ejection fraction ((ED − ES)/ED). Dynamic analysis of the reconstructed images also allows appreciation of the myocardial wall motion and thickening. The fusion of SPECT data with CT data seems promising, as for most of the applications of SPECT imaging [BUC 08].
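The ejection fraction itself is a one-line computation once the end-diastolic and end-systolic volumes have been measured; the volumes below are invented.

```python
def ejection_fraction(ed_ml, es_ml):
    """Left ventricular ejection fraction (ED - ES) / ED."""
    return (ed_ml - es_ml) / ed_ml

print(f"EF = {ejection_fraction(120.0, 50.0):.0%}")  # 58% for these values
```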
13.5.5. Examples Figures 13.4 and 13.5 present the results of a myocardial SPECT scan, which illustrate the functional nature of emission tomography in cardiology. Pharmacological stress was applied, and sestamibi labeled with 99mTc was used. An X-ray transmission acquisition (CT) for the attenuation correction followed the emission acquisition. The cross-sections were reconstructed with OSEM, once without corrections and once with a modeling of the attenuation in the projector R (see section 13.4.3.2) based on the transmission acquisition. Three slices along three common planes are presented, acquired under stress and at rest and reconstructed with and without attenuation correction. Two regions show low uptake (hence low perfusion) under stress without attenuation correction: the inferior wall and the anterior-apical region. At rest, a better perfusion of the anterior-apical region is seen without correction, but low uptake remains in the inferior wall. In the slices reconstructed with attenuation correction, the perfusion of the inferior wall is normal under both conditions, underlining the fact that the abnormalities seen in the uncorrected slices are actually attenuation artifacts. Perfusion of the anterior-apical region remains similar to an ischemic myocardial region under stress due to a coronary pathology. The bull’s eye plots are a different representation of the same results. Again, the attenuation artifact vanishes using an attenuation correction in the reconstruction, and the anterior-apical ischemia persists. After correction, the apex of the heart seems to show reduced perfusion. This is a common observation, whose origin is currently not well understood, but partly explained by using an attenuation correction without simultaneous correction of partial volume effects and motions [BAI 99, PUR 08]. Synchronizing data acquisition with the ECG (gated SPECT) enables evaluation of the left ventricular contraction in the same patient. The slices separately reconstructed using OSEM without attenuation correction in the diastolic phase (dilated ventricle) and the systolic phase (contracted ventricle) allow appreciation of the thickening during the contraction. The volume measurements in diastole and in systole allow a calculation of the ejection fraction. 13.6. Conclusion SPECT enables a qualitative estimation of the 3D distribution of a radiopharmaceutical which is administered to the patient to detect physiopathological functional abnormalities in clinical routine. Increasingly sophisticated modeling of different physical phenomena interfering with the detection model (forward model) in SPECT in algebraic tomographic reconstruction enables extraction of increasingly accurate quantitative parameters from the resulting images (for example, the concentration of the radiopharmaceutical in a certain region). These parameters are of
major importance for better characterization of the studied physiological functions. This quantification is a major asset of SPECT and makes it a valuable diagnostic and prognostic tool as well as a preclinical and clinical research tool.
Figure 13.4. Example of results of myocardial SPECT with 99mTc sestamibi. Three slices are shown (from top to bottom: short axis, horizontal long axis, vertical long axis), corresponding to the planes classically used for the interpretation of images. The bull’s eye plots represent the sets of 3D data in 2D polar maps. The slices were reconstructed using OSEM without (A) and with (B) attenuation correction by X-ray transmission (CT). An anterior-apical perfusion defect is seen, which indicates an ischemia corresponding to a coronary lesion in this area. An attenuation artifact is seen at the level of the inferior wall in the data reconstructed without correction
Figure 13.5. Example of results of a gated myocardial SPECT. The underlying scan is the same as in Figure 13.4. The slices reconstructed in diastole and systole enable the systolic thickening to be appreciated. The 3D segmentation of the interior wall of the left ventricle provides access to the ventricular volumes and allows calculation of the ejection fraction (see text for details)
13.7. Bibliography [ANG 67] ANGER H., PRICE D. C., YOST P. E., “Transverse section tomography with the gamma camera”, J. Nucl. Med., vol. 8, pp. 314–315, 1967. [BAI 99] BAI C., ZENG G. L., KADRMAS D. J., GULLBERG G. T., “A study of apparent apical defects in attenuation corrected cardiac SPECT”, IEEE Trans. Nucl. Sci., vol. 46, pp. 2104–2110, 1999.
[BUC 08] BUCK A. K., NEKOLLA S., ZIEGLER S., BEER A., KRAUSE B. J., HERRMANN K., SCHEIDHAUER K., WESTER H. J., RUMMENY E. J., SCHWAIGER M., DRZEZGA A., “SPECT/CT”, J. Nucl. Med., vol. 49, pp. 1305–1319, 2008.
[BUV 94] BUVAT I., BENALI H., TODD-POKROPEK A., DI PAOLA R., “Scatter correction in scintigraphy: the state of the art”, Eur. J. Nucl. Med., vol. 21, pp. 675–694, 1994.
[CHA 79] CHANG L. T., “Attenuation correction and incomplete projection in single photon emission computed tomography”, IEEE Trans. Nucl. Sci., vol. 26, pp. 2780–2789, 1979.
[DEP 89] DEPUEY E. G., GARCIA E. V., “Optimal specificity of thallium-201 SPECT through recognition of imaging artifacts”, J. Nucl. Med., vol. 30, pp. 441–449, 1989.
[ELF 00] EL FAKHRI G., BUVAT I., BENALI H., TODD-POKROPEK A., DI PAOLA R., “Relative impact of scatter, collimator response, attenuation, and finite spatial resolution corrections in cardiac SPECT”, J. Nucl. Med., vol. 41, pp. 1400–1408, 2000.
[GAR 85] GARCIA E. V., VAN TRAIN K., MADDAHI J., PRIGENT F., FRIEDMAN J., AREEDA J., WAXMAN A., BERMAN D., “Quantification of rotational thallium-201 myocardial tomography”, J. Nucl. Med., vol. 26, pp. 17–26, 1985.
[HUD 94] HUDSON H. M., LARKIN R. S., “Accelerated image reconstruction using ordered subsets of projection data”, IEEE Trans. Med. Imaging, vol. 13, pp. 601–609, 1994.
[KUH 63] KUHL D. E., EDWARDS R. Q., “Image separation radioisotope scanning”, Radiology, vol. 80, pp. 653–662, 1963.
[LAN 84] LANGE K., CARSON R., “EM reconstruction algorithms for emission and transmission tomography”, J. Comput. Assist. Tomogr., vol. 8, pp. 306–316, 1984.
[MCQ 08] MCQUAID S. J., HUTTON B. F., “Sources of attenuation-correction artefacts in cardiac PET/CT and SPECT/CT”, Eur. J. Nucl. Med. Mol. Imaging, vol. 35, pp. 1117–1123, 2008.
[MUE 71] MUEHLLEHNER G., WETZEL R., “Section imaging by computer calculation”, J. Nucl. Med., vol. 12, pp. 76–84, 1971.
[NAT 01] NATTERER F., “Inversion of the attenuated Radon transform”, Inverse Problems, vol. 17, pp. 113–119, 2001.
[NOV 02] NOVIKOV R. G., “An inversion formula for the attenuated X-ray transform”, Ark. Mat., vol. 40, pp. 145–167, 2002.
[PUR 08] PURSER N. J., ARMSTRONG I. S., WILLIAMS H. A., TONGE C. M., LAWSON R. S., “Apical thinning: real or artefact?”, Nucl. Med. Comm., vol. 29, pp. 382–389, 2008.
[SEO 08] SEO Y., MARI C., HASEGAWA B. H., “Technological development and advances in single-photon emission computed tomography/computed tomography”, Semin. Nucl. Med., vol. 38, pp. 177–198, 2008.
[STR 98] STRAUSS H. W., MILLER D. D., WITTRY M. D., CERQUEIRA M. D., GARCIA E., ISKANDRIAN A. S., SCHELBERT H. R., WACKERS F. J., “Procedure guidelines for myocardial perfusion imaging”, J. Nucl. Med., vol. 39, pp. 918–923, 1998.
[TSO 08] TSOUMPAS C., TURKHEIMER F. E., THIELEMANS K., “A survey of approaches for direct parametric image reconstruction in emission tomography”, Med. Phys., vol. 35, pp. 3963–3971, 2008.
[TSU 88] TSUI B. M. W., HU H. B., GILLAND D. R., GULLBERG G. T., “Implementation of simultaneous attenuation and detector response correction in SPECT”, IEEE Trans. Nucl. Sci., vol. 35, pp. 778–783, 1988.
[TSU 94] TSUI B. M. W., FREY E. C., ZHAO X., LALUSH D. S., JOHNSTON R. E., MCCARTNEY W. H., “The importance of accurate 3D compensation methods for quantitative SPECT”, Phys. Med. Biol., vol. 39, pp. 509–530, 1994.
[VID 99] VIDAL R., BUVAT I., DARCOURT J., MIGNECO O., DESVIGNES P., BAUDOUY M., BUSSIÈRE F., “Impact of attenuation correction by simultaneous emission/transmission tomography on visual assessment of Tl-201 myocardial perfusion images”, J. Nucl. Med., vol. 40, pp. 1301–1309, 1999.
[WIL 93] WILSON D. W., TSUI B. M. W., “Noise properties of filtered-backprojection and ML-EM reconstructed emission tomographic images”, IEEE Trans. Nucl. Sci., vol. 40, pp. 1198–1203, 1993.
[XIA 95] XIA W., LEWITT R. M., EDHOLM P. R., “Fourier correction for spatially variant collimator blurring in SPECT”, IEEE Trans. Med. Imaging, vol. 14, pp. 100–115, 1995.
[ZAI 04] ZAIDI H., KORAL K. F., “Scatter modelling and compensation in emission tomography”, Eur. J. Nucl. Med. Mol. Imaging, vol. 31, pp. 761–782, 2004.
[ZEN 98] ZENG G. L., GULLBERG G. T., BAI C., CHRISTIAN P. E., TRISJONO F., DI BELLA E., TANNER J. W., MORGAN H. T., “Iterative reconstruction of fluorine-18 SPECT using geometric point response correction”, J. Nucl. Med., vol. 39, pp. 124–130, 1998.
Chapter 14
Positron Emission Tomography
Chapter written by Michel DEFRISE and Régine TRÉBOSSEN.

14.1. Introduction 14.1.1. Definition Positron emission tomography (PET) is a medical imaging technique that enables us to obtain a three-dimensional (3D) map of a physiological parameter, such as glucose metabolism, blood flow, or the receptor density of a neurotransmission system, in vivo in human organs. This 3D map is derived from a dynamic measurement of the volumetric distribution of a specific radiopharmaceutical that is injected into the subject. The first PET scanner was built at the beginning of the 1960s by Rankowitz et al. It consisted of a ring of sodium iodide detectors [RAN 62]. The first computer-assisted scanner was described in 1975 [TER 75, PHE 75]. 14.1.2. PET versus other functional imaging techniques The other functional imaging technique that uses radiopharmaceuticals is single photon emission computed tomography (SPECT). It differs from PET in how the molecule that is injected into the patient is marked and in the characteristics of the associated detection system. PET uses positron emitters as markers. These markers have a short half-life of between 2 and 109 min; this requires their production by a cyclotron and their
incorporation into the radiopharmaceutical at the scanner site for isotopes with the shortest half-lives. SPECT uses gamma emitters as markers. The shortest half-life of the isotopes used is 6 hours. Moreover, these isotopes are easily available using a generator. Thus, unlike PET, SPECT does not require a cyclotron in the vicinity of the imaging center. The sensitivity of PET is about 100 times higher than the sensitivity of SPECT, because the physical collimators are replaced by electronic collimation. In addition, elimination of the physical collimators enables a uniform spatial resolution over the entire field of view to be obtained, instead of a spatial resolution that varies with the distance to the collimator. Mainly due to the cost of tracer production, PET remained for a long time a dedicated tool for clinical research. Its success was based on the large variety of available radiopharmaceuticals, which results from the simple integration of markers into biological molecules in PET. This enabled the use of PET for validating new therapeutics, such as the transplantation of neurons that are able to supplement dopamine production by the dopaminergic system in Parkinson’s disease [COC 03]. In section 14.4.2 we briefly describe how PET allows in vivo imaging of the dopaminergic transmission system. PET has also become the tool of choice for the imaging of gene expression [TAV 98]. PET enables us to perform a functional mapping of the brain by comparing the measured blood flow in cerebral regions during a motor, sensory, or cognitive stimulation to that at rest [FOX 84]. Such activation studies are at present mostly performed by two other functional imaging techniques, functional magnetic resonance imaging (fMRI) and magneto-encephalography (MEG), which have a better temporal resolution than PET. Since the mid-1990s, PET imaging of glucose metabolism has been highly successful as a tool for the detection and localization of tumors and for the follow-up of patients after oncological treatment. The interest in PET in oncology is linked to the fact that cancer cells have a higher glucose metabolism than normal cells [PAU 98]. In this way PET provides complementary information to that obtained by radiological studies (CT and MRI). Today, there are in France alone more than 30 imaging centers equipped with PET scanners dedicated to the clinical investigation of glucose metabolism in oncology and more than 10 imaging centers equipped with a cyclotron. Most of them are used part-time by industrial distributors of radiopharmaceuticals that have marketing authorization.
14.1.3. Functional versus anatomical imaging Functional imaging using an injection of radioactive tracers suffers from a poor spatial resolution compared to anatomical imaging with CT or MRI. The spatial resolution of images is in the order of 4–5 mm for PET and 8–10 mm for SPECT in human studies and of less than 1 mm for both in small animal studies. In MRI, a spatial resolution of 1 mm is reached in humans and of some 10 μm in small animals. Anatomical and functional imaging thus do not have the same clinical indications, but provide complementary information, which may be optimally exploited by sophisticated techniques such as image fusion. 14.2. Data acquisition A 3D map of a physiological parameter is derived from measurements of the time evolution of the volumetric distribution of the tracer in the organs and from mathematical models that describe how the molecule transforms in the cells. The volumetric distribution of the tracer is obtained by 3D image reconstruction from the photon pairs recorded by a set of detectors placed around the patient. These photon pairs are produced by annihilation of the positrons emitted by the marked molecule. 14.2.1. Radiopharmaceuticals Radiopharmaceuticals are injected at low dose to avoid disturbing the biochemical process to be studied. This is why they are called tracers. They are composed of a biochemical vector and a radioactive marker. 14.2.1.1. Vectors The vector is a sensor for a physiological process. Its choice depends on the physiological parameter to be measured: – for glucose metabolism, the employed molecule is a glucose analog, deoxyglucose; – for regional blood flow, water or ammonia are used; – for the dopaminergic system, a large number of vectors are available, which provide access to complementary physiological parameters (see section 14.4.2). In general, the vectors employed in PET belong to different families: – labeled endogenous substances (water, 11C-methionine);
– analogs of endogenous substances (deoxyglucose as glucose analog);
– molecules with an affinity for a neurotransmission system (raclopride for the dopaminergic system, CGP for the adrenergic system);
– substrates for enzymes (deprenyl, an inhibitor of monoamine oxidase B, which is involved in the degradation of catecholamines such as dopamine);
– molecules for gene expression.
With the use of these vectors and the quantification of their distribution in an organ, PET is also a tool that permits quantifying various molecular processes at a nanomolar scale in vivo and non-invasively. 14.2.1.2. Positron emitting markers The positron emitters used in PET are isotopes of light elements, unlike the single photon emitters that are employed in SPECT. For this reason, they are easily integrated into biological molecules and hardly change their spatial physical structure. The characteristics of the most common isotopes used in PET are listed in Table 14.1.

Isotope | Half-life (min) | Conversion coefficient (%) | Maximum kinetic energy (keV) | Mean path length of positron in water (mm)
15O     | 2               | 100                        | 1,723                        | 2.22
13N     | 10              | 99                         | 1,190                        | 1.44
11C     | 20              | 99                         | 981                          | 1.12
18F     | 109.8           | 97                         | 635                          | 0.6
76Br    | 960             | 54                         | 3,440                        | 5.0
68Ga    | 68.2            | 89                         | 1,899                        | 2.4
82Rb    | 1.3             | 95                         | 3,350                        | 4.7

Table 14.1. The most important positron emitters employed in PET
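The half-lives in Table 14.1 directly constrain the logistics of PET. The sketch below computes the fraction of activity remaining after a given delay, A(t)/A0 = 2^(−t/T½), for three of the listed isotopes; the 30 min delay is an arbitrary example.

```python
def remaining_fraction(t_min, half_life_min):
    """Fraction of the initial activity left after t minutes."""
    return 0.5 ** (t_min / half_life_min)

for isotope, t_half in (("15O", 2.0), ("11C", 20.0), ("18F", 109.8)):
    print(isotope, f"{remaining_fraction(30.0, t_half):.3f}")
```

After 30 min, essentially no 15O is left, which is why the shortest-lived isotopes require a cyclotron at the scanner site, whereas more than 80% of an 18F batch survives.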
14.2.2. Physical principle of PET 14.2.2.1. Positron annihilation The nuclei that emit positrons have an excess of protons with respect to neutrons. They recover a stable state either by β+ emission or by electron capture, if the transition is permitted. During β+ emission, a proton disappears and a
neutron appears, while a positron and a neutrino are emitted. The main characteristic of β+ radioactivity is that the positron and the neutrino share the whole residual energy of the transition in the form of kinetic energy. Consequently, the positrons have a kinetic energy between zero and a maximum value (the maximum kinetic energy given in Table 14.1). This kinetic energy is dissipated by collisions with electrons, before the positron annihilates with an electron. Two photons with an energy of 511 keV are then emitted in opposite directions. These photons are the origin of the detected signal. The positions of the emission of the positron and of the annihilation do not coincide: the mean distance that separates them depends on the kinetic energy of the positrons at the moment of their emission (Table 14.1) and limits the spatial resolution that may be obtained. 14.2.2.2. Principle of coincidence detection About 40% of the photons with 511 keV leave the body without interaction and are recorded by the detectors placed around the patient. The simultaneous or coincidence detection of two photons indicates that they originate from the same annihilation, which occurred inside the volume defined by the set of all straight lines connecting the two detectors (Figure 14.1), the so-called lines of response (LOR). In this way, an electronic collimation is obtained, unlike SPECT, where the origin of a photon is determined by a physical collimator. The detection is called simultaneous if the two photons are recorded in an interval or coincidence window determined by the counting electronics. With current scanners, the coincidence window varies between 4 and 12 ns (it takes 3 ns to cover 90 cm at the speed of light). If the detection system enables measurement of the difference between the times of arrival of the two simultaneously emitted photons, the position of the annihilation may be localized inside the volume defined by the two detectors. The difference between the two times of arrival is also called the time-of-flight. 14.2.2.3. Types of detected coincidences Photons originating from the annihilation of two different positrons may be detected in coincidence, which is called a random (or accidental) coincidence (Figure 14.1). These coincidences, whose counting rate is proportional to the size of the coincidence window, are not related to a single radioactive disintegration, and their counting introduces a bias into the measurement. Moreover, the 511 keV photons may be scattered by the electrons of the material (Compton scattering). When at least one of the two annihilation photons undergoes such an interaction, a so-called scattered coincidence is recorded. For this type of coincidence, the annihilation of the positron is assigned to an incorrect position (Figure 14.1).
Figure 14.1. Principle of PET: the coincidence detection of two photons with 511 keV enables localizing a positron annihilation (star) along the line of response connecting two detectors. Schematic illustration of the types of coincidences detected in PET: AB: true coincidence between the detectors A and B; CD: so-called scattered coincidence; EF: accidental coincidence due to two independent annihilations. In the last two cases, an annihilation is falsely localized along the dotted lines
14.2.3. Detection systems employed in PET

14.2.3.1. Detectors

The detectors that are used most often in PET are composed of inorganic scintillation crystals coupled to photomultipliers [LEW 08]. The characteristics of these crystals are listed in Table 14.2. The ideal scintillating crystal, i.e. the crystal that enables both good spatial resolution and good detection efficiency to be obtained, must possess the following properties:
– a high density and a high linear attenuation coefficient, to maximize the probability that the 511 keV photons interact in the small volume of the crystal;
– a high photofraction, so that as many photons as possible transfer all their energy to the crystal in their first interaction, by the photoelectric effect;
– a rapid decay of the scintillation, to minimize the dead time of the crystal and to reject as many random coincidences as possible (this is an essential precondition for measuring the time-of-flight of the annihilation photons);
– a good light output, to facilitate the analysis of the energy of the incident photons.
Scintillator | Density (g/cm3) | Linear attenuation coefficient (cm-1) | Photofraction (%) | Decay of scintillation (ns) | Light output (% of NaI)
NaI(Tl) | 3.7 | 0.34 | 18 | 230     | 100
BGO     | 7.1 | 0.95 | 42 | 300     | 22
BaF2    | 4.9 | 0.45 | 19 | 0.8–630 | 5–21
LSO     | 7.4 | 0.86 | 33 | 40      | 75
LYSO    | 7.1 | 0.87 | 33 | 41      | 80
LaBr3   | 5.3 | 0.47 | 14 | 25      | 160

Table 14.2. The main scintillators employed in detectors for photons with 511 keV in PET
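The density and linear attenuation coefficient translate directly into stopping power. A minimal sketch (Python; the 2 cm crystal depth is an assumed value) of the single-photon interaction probability ε = 1 − exp(−μL), and of its square, which governs coincidence detection:

```python
import math

# Linear attenuation coefficients at 511 keV (cm^-1), from Table 14.2.
MU_511 = {"NaI(Tl)": 0.34, "BGO": 0.95, "LSO": 0.86, "LYSO": 0.87}

def interaction_probability(scintillator, thickness_cm):
    """Probability that a 511 keV photon interacts in a crystal of the
    given thickness: eps = 1 - exp(-mu * L)."""
    return 1.0 - math.exp(-MU_511[scintillator] * thickness_cm)

# Example: 2 cm deep crystals; a coincidence needs both photons stopped.
for name in MU_511:
    eps = interaction_probability(name, 2.0)
    print(f"{name}: single {eps:.2f}, coincidence {eps**2:.2f}")
```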
Currently, the most common scintillating crystals in commercial clinical scanners are lutetium oxyorthosilicate (LSO), its analog LYSO, and bismuth germanate (BGO). In particular, LSO possesses ideal properties for building scanners with high resolution and sensitivity. This scintillator is also used in dedicated scanners for small animal imaging. The decay of the scintillation of LSO and LYSO is considerably faster than that of BGO and NaI(Tl), thereby enabling the time-of-flight to be measured. For the measurement of the time-of-flight, the speed of the initial rise of the scintillation is even more important than its decay. NaI(Tl), which is less dense and slower, and moreover hygroscopic, is no longer used in commercial scanners.

14.2.3.2. Detector arrangement

The optimal detector arrangement depends on the properties of the crystals employed. Systems that use sodium iodide crystals have large planar or curved detectors [KAR 90], while systems that use LSO or LYSO have small detector blocks placed on a ring [CAS 86]. In both cases, the detectors are coupled to photomultipliers, and the position of the incident photons in the detector block or the planar detector is determined according to the Anger principle described in Chapter 13. The technology of detector blocks enables a good compromise between high resolution and high sensitivity to be achieved [CAS 86]. Each block is divided into small crystals (typically 8 by 8 crystals), which are coupled to a reduced number of photomultipliers (typically 4). To optimize the collection of the scintillation light and
the identification of the crystal, the depth of the notches between the crystals is variable, and a reflecting material isolates the crystals. In these systems, the spatial resolution depends to a large extent on the size of the crystals.

14.2.4. Physical characteristics of scanners

The detectors and their arrangement around the patient determine the characteristics of a PET scanner. The three essential characteristics of a scanner are its spatial resolution (the uncertainty about the location of a radioactive molecule), its sensitivity, and its maximum counting rate. The spatial resolution of a scanner is determined by several factors, among which are the size of the detectors and the nature of the coupling between the crystals and the photomultipliers. The spatial resolution is also limited by the characteristics of the emission and the annihilation of the positrons (the distance they travel before their annihilation, and the angle between the trajectories of the resulting pairs of photons). The sensitivity of a scanner is defined as the number of coincidences detected per unit time and per unit activity present in the field of view at low activity, i.e. in the absence of dead time. The sensitivity depends on the density of the detectors and the solid angle they subtend. The maximum counting rate is essentially determined by the temporal characteristics of the detectors and the associated electronics.

14.2.5. Acquisition modes

The volumetric distribution f(x,y,z) of the tracer must be reconstructed from the photon pairs detected in coincidence by each pair of detectors. This number provides, after appropriate corrections (see section 14.3), an estimate of the integral of the distribution function along the LOR that connects the two detectors. The nature of the image reconstruction problem, as well as the implemented algorithms, thus depends on the geometrical properties of the set of measured LORs. In this section, the main acquisition modes employed in PET are described [BEN 98]. The typical characteristics of current scanners are illustrated below using the ECAT EXACT HR+ (Siemens) scanner as an example.

14.2.5.1. Two-dimensional (2D) acquisition

2D acquisition, also called slice-by-slice acquisition, was used by most PET scanners up until 1990. Today, it is only exceptionally used for certain types of examinations. From a geometrical point of view, 2D acquisition is based on the segmentation of the volume to be reconstructed into a stack of parallel slices, also called transaxial slices, since they are orthogonal to the z-axis of the
scanner. The distribution function of the tracer in each slice z = z0, denoted by f(x,y,z0), is reconstructed independently from the other slices, using the coincidences measured by a ring of detectors located in the plane of the slice. This ring, circular or polygonal, typically comprises ND = 576 detectors and has a diameter RD of 82.7 cm. The scanner records the coincidences between each pair of detectors in the ring. A pair of detectors is defined by the straight line (LOR) connecting the centers of the two detectors, using the classical variables for parametrizing the 2D Radon transform, i.e. the signed distance s from the origin x = y = 0 and the angle φ between the perpendicular to the LOR and the x-axis (Figure 14.2). The 2D Radon transform parametrized with the variables (s, φ) is called the sinogram of the slice z0 and is denoted by g(s, φ, z0) (see Chapter 1). In practice, the sinogram is sampled on a rectangular grid g(s = kΔs, φ = lΔφ, z0), k = −K,…,K, l = 0,…,ND/2, with a radial sampling interval Δs = πRD/ND, an angular sampling interval Δφ = π/ND, and a sufficiently high number K of radial samples to cover the object, i.e. KΔs > RFOV.
Figure 14.2. The line of response connecting the detectors A and B of the ring of detectors is parametrized by the sinogram variables s and φ
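For the ring geometry just described, the sampling intervals follow directly. A minimal sketch (Python; RFOV is an assumed value, not a figure quoted in the text):

```python
import math

# Sinogram sampling for one detector ring (ECAT EXACT HR+ values from the
# text: N_D = 576 detectors, ring diameter R_D = 82.7 cm).
N_D = 576
R_D = 82.7     # cm
R_FOV = 28.0   # assumed radius of the field of view, cm

delta_s = math.pi * R_D / N_D    # radial sampling interval (cm)
delta_phi = math.pi / N_D        # angular sampling interval (rad)
K = math.ceil(R_FOV / delta_s)   # radial samples so that K * ds > R_FOV

print(f"ds = {delta_s*10:.1f} mm, dphi = {math.degrees(delta_phi):.3f} deg, K = {K}")
# ds ~ 4.5 mm, dphi ~ 0.313 deg, K = 63 (with the assumed R_FOV)
```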
Having measured a slice of the object in this way, a volumetric image of the distribution of the tracer is obtained with multiring scanners, built by stacking up a certain number (typically NR = 32) of identical rings, with an axial spacing ΔzR (typically 4.8 mm) between adjacent rings. The sinogram measured by each ring j = 0,…,NR−1 thus enables reconstruction of the so-called direct slice z = jΔzR, with an axial resolution in the order of ΔzR/2. The multiring scanners also record the coincidences between two detectors that belong to two adjacent rings. In this way, the rings j and j + 1 define the sinogram of a so-called crossed slice located in a middle plane between the two rings, at z = (j + 1/2)ΔzR. To achieve this, the angle θ ≈ ΔzR/2RD between the LOR connecting the two rings and the transaxial slices is neglected.
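A hedged sketch of the resulting slice indexing (Python; 0-based ring indices are our convention): direct and crossed slices interleave with a ΔzR/2 pitch, giving the 2NR − 1 slices discussed next.

```python
DZ_R = 4.8   # axial spacing between adjacent rings, mm (typical value above)

def slice_position_mm(r1, r2):
    """Axial position assigned to the coincidence between rings r1 and r2:
    a direct slice when r1 == r2, a crossed slice when |r1 - r2| == 1."""
    return (r1 + r2) * DZ_R / 2.0

print(f"{slice_position_mm(3, 3):.1f} mm")  # direct slice of ring 3 -> 14.4 mm
print(f"{slice_position_mm(3, 4):.1f} mm")  # crossed slice 3/4      -> 16.8 mm
```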
2D acquisition thus generates (2NR − 1) sinograms, each corresponding to a slice that is separately reconstructed. The segmentation into independent slices is only rigorous if the sinogram measured for a slice is not influenced by the tracer located in the other slices, a condition that is partially invalid due to contamination by random and scattered coincidences. To reduce this contamination, the scanners which enable 2D acquisitions are equipped with septa, i.e. annular collimators located between the rings (Figure 14.3), to absorb the photons whose trajectory traverses multiple slices.
Figure 14.3. Schematic representation of a longitudinal cross section of a scanner with eight rings. a) 2D acquisition with septa; ring 3 is only in coincidence with rings 2, 3, and 4. b) 3D acquisition without septa; ring 3 is in coincidence with all eight rings
14.2.5.2. Three-dimensional (3D) acquisition

The principle of 3D (or volumetric) acquisition in PET was proposed in the 1960s, and implemented in some early prototypes of PET scanners. The idea is that electronic collimation is sufficient to define the line-of-flight of the detected photons and that all mechanical collimation is superfluous. From this point of view, the septa of multiring scanners are only justified by the desire to segment the 3D problem to simplify the reconstruction, and their use contradicts the physical principles of PET. From these observations stems the idea of improving the sensitivity of scanners by removing the septa and by measuring coincidences between all pairs of detector rings. The increase in sensitivity obtained by passing from a 2D to a 3D acquisition is due to two factors: i) the septa cover part of the surface of the crystals; ii) the number of measured LORs is multiplied by a factor proportional to NR. For example, the absolute sensitivity (the fraction of the coincidences emitted by a point source in air at the center of the scanner that are detected) of the ECAT EXACT HR+ increases from 0.52% in 2D to 3.66% in 3D. The price paid for this
gain is twofold. First, the reconstruction is no longer separable, because LORs that traverse several slices are measured. Second, contamination by scattered coincidences increases substantially in 3D, and the same holds for contamination by random coincidences. The correction techniques described in section 14.3 satisfactorily overcome these drawbacks. Currently, all scanners offered by manufacturers exclusively use 3D acquisition without septa. Nevertheless, there are still operational scanners in service that have both 2D and 3D acquisition modes. Scanners may be classified according to their geometry: i) multiring scanners equipped with retractable septa (Siemens CTI: ECAT HR, HR+, …, GE Advance), which may be introduced into or removed from the field of view, allowing a choice between 2D and 3D acquisitions [WIE 94, LEW 95]; ii) most state-of-the-art multiring scanners, which are no longer equipped with septa (Siemens ECAT HRRT, Biograph, Philips Gemini) and are thus limited to 3D acquisitions (see for example [KAR 95]); iii) scanners based on a pair of gamma cameras of the same type as those used in SPECT (see Chapter 13). The two cameras are placed in parallel on both sides of the patient and are not equipped with collimators, to allow measuring coincidences for all LORs connecting the two cameras. The acquisition of sufficient 3D data for a precise reconstruction (see the Orlov condition in Chapter 1) requires a rotation of the detector pair around the patient. These scanners are far less sensitive than the other two types of scanners and are no longer used.

14.2.5.3. 3D data organization for a multiring scanner

For a multiring scanner in 3D mode (without septa), a LOR is defined by the indices (k, l) defining the position of the two detectors in their rings, but also by the indices r1, r2 ∈ [1,…,NR] of these rings. The data are thus parametrized by four indices (k, l, r1, r2), which corresponds to the fact that four parameters are necessary to identify a line (a LOR) in R3. The first two indices define the projection of the LOR onto the transaxial plane, while the last two define its axial position and the polar angle of the LOR with respect to the transaxial plane. A 3D scan comprises NR² sinograms gr1,r2(s = kΔs, φ = lΔφ), r1, r2 ∈ [1,…,NR] (the term sinogram is improper in this case, since the corresponding LORs are no longer contained in a transaxial plane; we therefore speak of oblique sinograms). Alternatively, these data may be rearranged to obtain a set of parallel 2D projections of the function f(x,y,z).

14.2.5.4. LOR and list mode acquisition

The parametrization of the data by 2D or 3D sinograms enables a simplified implementation of analytical reconstruction formulas (see Chapter 1), but it does not correspond to a “native” parametrization defined by the arrangement of the crystals.
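To relate this native crystal-pair indexing to the sinogram variables of Figure 14.2, here is a minimal sketch (Python; the sign and wrap-around conventions are deliberately simplified) for two detectors on the same ring:

```python
import math

N_D = 576        # detectors per ring
R = 82.7 / 2.0   # ring radius in cm (diameter quoted in the text)

def lor_to_sinogram(k1, k2):
    """Map a pair of detector indices along one ring to the sinogram
    coordinates (s, phi) of the LOR joining the detector centers."""
    a1 = 2.0 * math.pi * k1 / N_D
    a2 = 2.0 * math.pi * k2 / N_D
    phi = (a1 + a2) / 2.0 % math.pi    # normal angle of the LOR
    s = R * math.cos((a2 - a1) / 2.0)  # distance of the LOR to the origin
    return s, phi

s, phi = lor_to_sinogram(0, 288)  # two opposite detectors -> s ~ 0
```

Binning such (s, φ) values onto the regular (kΔs, lΔφ) grid requires the interpolation discussed next.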
The organization of measured coincidence events as sinograms requires an interpolation, which may compromise the spatial resolution of the reconstructed images. This loss of resolution may be avoided by collecting the data in a histogram comprising one entry for each pair of detectors in coincidence, so as to conserve in the tomographic reconstruction the most precise information about the spatial localization of each event. We then speak of an acquisition and reconstruction in LOR mode. This approach is, however, impractical, because the total number of pairs of detectors in coincidence in a scanner (typically close to or larger than 10^9) generally exceeds the number of coincidences detected during a clinical examination. Currently, scanners overcome this difficulty by recording the data in list mode, i.e. as a list of coincidences comprising, for each coincidence, the identification of the two crystals in coincidence, the energy of the two photons, and the time. Temporal references (“time tags”) may also be inserted into this list, enabling synchronization with respiratory and cardiac motion. The iterative reconstruction algorithms may be adapted to directly process the data in list mode [REA 02].

14.2.5.5. Acquisition with time-of-flight measurement

Crystals such as BaF2, LSO, LYSO, and LaBr3 (see Table 14.2) have an initial scintillation decay that is sufficiently rapid to permit measurement of the difference in the time-of-flight of the two annihilation photons with an uncertainty in the order of Δt ≈ 300 ps to 600 ps (FWHM). This information allows localizing the point of annihilation along the tube of coincidence connecting the two crystals with a precision of Δl = cΔt/2, where c is the speed of light. This precision, currently in the order of 4.5 cm to 9 cm, is much coarser than the spatial resolution of the scanner, and is thus insufficient to make image reconstruction unnecessary. Nevertheless, it enables an improvement in the signal-to-noise ratio when the patient is significantly larger than Δl. Currently, one commercial clinical scanner allows measuring the time-of-flight (Philips Gemini TF). Specific analytical reconstruction algorithms have been developed for time-of-flight PET [TOM 81, MAZ 90], but currently iterative algorithms are usually applied.

14.2.5.6. Transmission scan acquisition

The goal of the transmission scan is to measure for each LOR the attenuation of the photons in the body of the patient, to enable a correction of the data. The attenuation is important, since typically only 17% of the photon pairs emitted at the center of the brain emerge without having undergone an interaction. This percentage may even be less than 1% for the thorax. The effect of the attenuation on the data is a simple multiplicative factor:

gmeasured(LOR) = g0(LOR) · a(LOR)
with

a(LOR) = exp( − ∫LOR μ(x,y,z) dl )
where the line integral of the linear attenuation coefficient μ(x,y,z) of 511 keV photons is taken along the line of response. Most PET scanners are combined with a CT scanner (see Chapter 10), which enables a CT scan to be performed without taking the patient from the examination table [TOW 08]. In this way, the PET and CT data are automatically aligned, as long as the respiratory or other motion of the patient can be neglected. Besides its diagnostic value, the reconstructed CT image enables precise anatomical localization of the structures observed in the PET image. Finally, the CT image provides an accurate estimate of the attenuation factors a(LOR), though this requires taking into account the difference between the energy employed for the CT (less than 100 keV) and the energy of the annihilation photons (511 keV) [KIN 98]. Certain PET scanners are not combined with a CT scanner. In this case, the factors a(LOR) are estimated by measuring the attenuation of the flux emitted by transmission sources (linear or point sources, for example 137Cs), which rotate around the patient and enable reconstruction of an attenuation map μ(x,y,z) of non-diagnostic quality, but of sufficient quality for an attenuation correction.

14.3. Data processing

As for all inverse problems, a precise modeling of the relationship between the object to be reconstructed and the measured data is essential. In tomography, this modeling is exploited
– either to define the transition matrix in an iterative algebraic reconstruction algorithm; nevertheless, the data are usually corrected for non-linear effects (for example, the dead time of the detectors);
– or to “correct” the data for certain physical effects, so as to estimate as precisely as possible the line integrals of the object, i.e. its X-ray transform; the corrected data are then reconstructed by an analytical algorithm.
In both cases, the data processing chain comprises a calculation of the random coincidences, of the normalization, and of the scattered coincidences, as described in the following.
14.3.1. Data correction

i) The contamination due to random coincidences (LOR EF in Figure 14.1) is estimated either directly, by counting for each LOR the number of coincidences observed in a shifted time window, or indirectly, from the total number of photons (in coincidence or not) that are detected by each detector (the number of singles). In the latter case, the relation Fd1,d2 = 2τ Sd1 Sd2 is used, where Fd1,d2 is the rate of random coincidences in the LOR connecting the detectors d1 and d2, Sd is the rate of singles in detector d, and 2τ is the width of the coincidence window.

ii) To normalize the data, i.e. to correct for the non-uniform sensitivity of the detectors, a reference or “blank” acquisition is measured with a source with known characteristics. Certain scanners are equipped with linear positron sources of low activity that are placed parallel to the axis of the scanner and that rotate around that axis to simulate a uniform activity (these so-called transmission sources are also used for attenuation correction, as described below). Alternatively, the blank acquisition is measured with a uniform cylindrical or planar source.

iii) The coincidences for which at least one photon has undergone scattering define erroneous LORs (LOR CD in Figure 14.1). The scattering implies a loss of energy. Part of this contamination is avoided by only accepting photons whose energy, estimated by the detector, is compatible with 511 keV. Unfortunately, the energy resolution of the scintillation detectors only allows a poor discrimination between non-scattered and scattered photons. In 2D mode, about 10% of the detected coincidences involve at least one scattered photon. In 3D mode, this fraction may exceed 50%. The background noise due to the scattered photons mostly changes the low spatial frequency components of the Fourier transform of the reconstructed image. The scattered photons thus affect visual analysis of the image mostly by reducing the contrast. Several techniques have been employed to estimate this contamination. The most precise ones are based on a direct calculation from the Klein-Nishina cross section [OLL 96]. This calculation requires knowledge of the scattering medium μ(x,y,z), which is provided by the reconstruction of the CT or the transmission scan (see below), and also requires an estimate of the source f(x,y,z), provided by a first reconstruction without scattering correction. The calculation of the scattered photons is complex, but it may be performed in an acceptable time by exploiting the fact that the scattered photons mostly contribute to the low spatial frequencies, which allows the matrices employed to be strongly undersampled.

iv) When an analytical reconstruction algorithm is used (Chapter 2), the attenuation correction is applied directly by dividing the data by the attenuation factors a(LOR) measured for each LOR in the transmission scan:

g0(LOR) = gmeasured(LOR) / a(LOR)
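To make the corrections in i) and iv) concrete, here is a minimal sketch (Python; the function names and the ordering of operations are our own simplification — real correction chains handle dead time, normalization, and scatter estimation in scanner-specific ways):

```python
def randoms_rate(singles_d1, singles_d2, tau_ns):
    """Random coincidence rate for one LOR from the singles rates (cps)
    of its two detectors: F = 2 * tau * S1 * S2 (tau in seconds)."""
    return 2.0 * tau_ns * 1e-9 * singles_d1 * singles_d2

def correct_lor(g_measured, g_random, g_scatter, a_lor, norm):
    """Estimate of the true line integral for one LOR: subtract random
    and scattered coincidences, apply the normalization factor, then
    divide by the attenuation factor a(LOR)."""
    return (g_measured - g_random - g_scatter) * norm / a_lor

# Example: two detectors with 100 kcps singles each, a 6 ns window.
print(f"{randoms_rate(1e5, 1e5, 6.0):.0f} randoms/s")  # 120 randoms/s per LOR
```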
If the transmission measurement is very noisy (measurement with transmission sources rather than with CT), the impact of measurement errors on a(LOR) may be minimized by applying a low-pass filter. For the iterative reconstruction algorithms, the data are not corrected, but the coefficients a(LOR) are directly incorporated into the matrix that models the acquisition.

14.3.2. Reconstruction of corrected data

After applying the corrections described above, the number of coincidences recorded between two detectors represents the integral of the radioactive distribution f(x,y,z) along the LOR defined by these two detectors. By approximating this LOR as a line, line integrals of f(x,y,z) are obtained, and the problem is reduced to the inversion of the 2D X-ray transform (in the case of a 2D acquisition) or the 3D X-ray transform (in the case of a 3D acquisition). The reader is referred to Chapter 2 for a description of the algorithms that may be used. We simply describe the 2D case here. Each slice z = z0 is reconstructed by applying the filtered backprojection algorithm: each row (at fixed φ) of the sinogram, corresponding to a 1D parallel projection of the slice, is convolved with a ramp filter kernel h(s), and the filtered projection is then backprojected:

f(x, y, z0) = Δφ ∑_{l=0}^{Nφ−1} gF( x cos(lΔφ) + y sin(lΔφ), lΔφ, z0 )

gF(kΔs, lΔφ, z0) = Δs ∑_{k′=−K}^{K} g(k′Δs, lΔφ, z0) h((k − k′)Δs)
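A hedged NumPy sketch of these two formulas — not any scanner's implementation; the grid handling and the Hamming apodization (described in the next paragraph) are our choices:

```python
import numpy as np

def ramp_filter(n_s, ds):
    """Hamming-apodized ramp filter |nu| on the FFT frequency grid,
    with cut-off nu0 = 1/(2*ds) (the Nyquist frequency)."""
    nu = np.fft.fftfreq(n_s, d=ds)
    nu0 = 1.0 / (2.0 * ds)
    window = 0.54 + 0.46 * np.cos(np.pi * nu / nu0)
    return np.abs(nu) * window

def fbp_slice(sinogram, ds, dphi, x, y):
    """Filtered backprojection of one slice from a (n_phi, n_s) sinogram.
    x, y are 2D arrays holding the pixel coordinates of the output grid."""
    n_phi, n_s = sinogram.shape
    filt = ramp_filter(n_s, ds)
    s_axis = (np.arange(n_s) - n_s // 2) * ds
    image = np.zeros_like(x, dtype=float)
    for l in range(n_phi):
        phi = l * dphi
        # convolution with h(s) performed as a product in Fourier space
        row_f = np.real(np.fft.ifft(np.fft.fft(sinogram[l]) * filt))
        # backprojection: read the filtered row at s = x cos(phi) + y sin(phi)
        image += np.interp(x * np.cos(phi) + y * np.sin(phi), s_axis, row_f)
    return image * dphi

# Example grid: 128 x 128 pixels over an assumed 256 mm field of view.
x, y = np.meshgrid(np.linspace(-128, 128, 128), np.linspace(-128, 128, 128))
```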
h(s) is the inverse Fourier transform of the ramp filter |ν|, multiplied by a low-pass apodization window (for example a Hamming window) characterized by a cut-off frequency ν0 ≤ 1/2Δs. This cut-off frequency is chosen depending on the noise level, so as to reach a compromise between the spatial resolution (≈ 1/2ν0) and the variance of the reconstructed image. This choice is guided by the expression for the variance of the reconstructed image f(x,y,z) of a disk of radius r obtained from a total number of coincidences NT. At the center of this disk, we obtain:

relative variance{ f(x = 0, y = 0, z0) } ≈ 4π³ r³ ν0³ / (3NT)
Let us stress an important consequence of this equation. If the spatial resolution, i.e. ν0, increases by a factor of 2 in x and y, NT, and hence the acquisition time, must be multiplied by 8 to maintain a fixed variance in the reconstructed image. If the
axial resolution (z) also increases, by dividing the slice thickness by 2, NT must be multiplied by 16. If the scanner allows measurement of the time-of-flight with a precision Δl < r, the relative variance defined by the above equation is reduced by a factor of Δl/2r, equal to the ratio of the number of voxels that are potential sources of an annihilation along the LOR connecting the two crystals with and without measurement of the time-of-flight [TOM 81].
14.3.3. Dynamic studies

Each volumetric element of a PET image represents the distribution of the radioactivity, i.e. the quantity of injected tracer and labeled metabolites that are present in the corresponding tissue element during the acquisition. A dynamic acquisition enables images of the time evolution of the radioactive distribution to be obtained. Such a dynamic acquisition sequence is implemented by dividing the total acquisition time into typically about 30 intervals and by reconstructing the data acquired during each interval with the algorithms already described. It is also possible to consider the problem as a 4D inverse problem by reconstructing the complete dynamic data set as a whole [NIC 99]. This approach is more demanding in terms of computation time, but it allows inclusion of an a priori model of the evolution of the tracer’s distribution to improve the stability of the reconstruction when the count rate is small. This is usually the case when the number of coincidences acquired during each interval is far smaller than the number of events in a static study of the same duration. The most common a priori models describe the tracer’s evolution in each voxel as a linear combination of temporal basis functions. The reconstruction then involves estimating the weights of this linear combination for each voxel (a minimal sketch of such a fit is given below). If the basis functions are well chosen, the variance of the images obtained by 4D reconstruction is smaller than that obtained by separate reconstruction of the data from each interval. Several approaches have been successfully developed, but are still rarely implemented in commercial software: generalized basis functions having good approximation properties (for example splines), basis functions adapted to the specific type of dynamics expected for the used tracer (for example an exponential decay, or exponential-spline wavelets), and adaptive basis functions, which are estimated during the reconstruction instead of being selected beforehand. Finally, an optimal spatiotemporal resolution requires consideration of respiratory and/or cardiac motion in the reconstruction by synchronizing the acquisition with physiological signals.

The characteristic parameters of the biological process observed by injection of the chosen tracer (density of receptors, regional concentration of an endogenous ligand, etc.) are derived from the evolution of the radioactive concentration in the volumetric elements of the PET images. The kinetics of the tracer and of its derivatives may be described in a fairly simple way by a compartment model.
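As a hedged illustration of the temporal-basis-function representation mentioned above (Python; the exponential basis and rate constants are assumptions for illustration, and the weights are fitted to an already reconstructed curve rather than estimated inside the reconstruction):

```python
import numpy as np

# A voxel time-activity curve modeled as a linear combination of temporal
# basis functions (here: decaying exponentials with assumed rates).
t = np.linspace(0.0, 60.0, 30)                      # frame mid-times, minutes
rates = [0.05, 0.2, 1.0]                            # assumed basis rates, 1/min
B = np.stack([np.exp(-r * t) for r in rates], 1)    # (n_frames, n_basis)

def fit_voxel(tac):
    """Least-squares weights of the basis functions for one voxel's
    time-activity curve."""
    w, *_ = np.linalg.lstsq(B, tac, rcond=None)
    return w
```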
This type of model combines the information about a biological process in terms of compartments and of exchange rates between them. The compartments correspond either to a metabolic step of the tracer or to a step of fixation of the tracer to a molecule. The kinetics of the tracer and of its derivatives are described by first-order differential equations. In general, the exchange rates between the compartments are considered as constants, which are to be determined. Solving these equations and estimating these constants usually requires knowledge of the time evolution of the concentration of the tracer and its metabolites in the arterial plasma. To facilitate the determination of the constants and to allow a simplification of the injection and acquisition protocol as well as of the model, a good tracer must have the following properties:
– it has only a few metabolites and/or metabolites which appear sufficiently late after the injection;
– receptor ligands must have a high affinity for these receptors and a low affinity for all other systems.
It is also necessary for the injected tracer to cross the cell membrane (the blood–brain barrier for radiopharmaceuticals used in brain imaging).

14.3.3.1. Measurement of glucose metabolism

Cellular glucose metabolism is measured by injecting a glucose analog, namely deoxyglucose labeled with fluorine, or FDG. The model that enables us to derive the rate of use of glucose in each volumetric element of the image was initially described for deoxyglucose labeled with 14C [SOK 77]. Deoxyglucose labeled with either 14C or 18F uses the same membrane transporter as glucose. It is also phosphorylated by the enzyme hexokinase, but it is not metabolized further in glycolysis and thus accumulates in the cell. While an inverse reaction exists for glucose, the inverse reaction for the labeled deoxyglucose is so insignificant that certain authors neglect it. The model thus involves three compartments (Figure 14.4): an arterial plasma compartment, a precursor tissue compartment, in which the deoxyglucose is found, and a tissue compartment, in which the tracer is metabolized into deoxyglucose-6-P. This model has three (or four, depending on the authors) parameters: k1, which describes the transport of the deoxyglucose via the blood to the cells; k2, which describes the inverse transport; and k3, which characterizes the phosphorylation reaction of the deoxyglucose. These parameters are constants in a volumetric element of homogenous tissue. PET enables us to obtain the concentration of FDG and its metabolite, FDG-6-P, in the tissues, and the concentration of FDG in the arteries that feed these tissues, as a function of time, as indicated in Figure 14.4. Blood samples provide the concentration of FDG in the plasma as a function of time, and measurements of the blood sugar level of the subject during the examination.
14.3.3.2. Model

Using the notations in Figure 14.4, we define the total radioactive concentration in the tissue Ci*(t) as the sum of the concentrations of FDG and FDG-6-P in the tissue, denoted by Ce*(t) and Cm*(t), respectively:

Ci*(t) = Cm*(t) + Ce*(t)

and thus

dCi*(t)/dt = dCm*(t)/dt + dCe*(t)/dt

(Figure 14.4 shows the three compartments: the plasma compartment, with precursor glucose / [18F]-deoxyglucose at concentrations Cp and Cp*, separated by the cell barrier from the tissue precursor compartment, glucose / [18F]-deoxyglucose at concentrations Ce and Ce*, and the metabolite compartment, glucose-6-phosphate / [18F]-deoxyglucose-6-phosphate at concentrations Cm and Cm*; Ce* and Cm* are measured together in a voxel of the PET image.)

Figure 14.4. Model of FDG; concentrations of radioactive molecules are marked by *
The variation in the FDG concentration in the tissue Ce*(t) over time reads:

dCe*(t)/dt = k1 Cp*(t) − k2 Ce*(t) − k3 Ce*(t)

The variation in the FDG-6-P concentration in the tissue over time reads:

dCm*(t)/dt = k3 Ce*(t)
A simplification of the solution of the system of equations is based on the hypotheses that the FDG is injected at a tracer dose (Cp >> Cp*(t) for all t), that the glucose metabolism in the tissues is in equilibrium during the examination (dCe/dt = 0), and that the blood sugar level is constant during the examination. The FDG concentration in the tissues at time τ then reads:

Ce*(τ) = k1 e^{−(k2+k3)τ} ∫_0^τ Cp*(t) e^{(k2+k3)t} dt

The FDG-6-P concentration in the tissues at time τ reads:

Cm*(τ) = k1 k3 ∫_0^τ [ e^{−(k2+k3)θ} ∫_0^θ Cp*(t) e^{(k2+k3)t} dt ] dθ
The rate of glucose consumption by the cells in a homogeneous volumetric element may be expressed as: W
Ci * (W ) - k1³ Cp * (t) e (k2 + k3)(t -W ) dt R=
0
W Cp * (t)
LC ( ³
0
Cp
W Cp * (t)
dt ³
0
Cp
e (k2 + k3)( t -W ) dt)
where LC is a constant that takes into account the phosphorylation rates of the FDG and the glucose at equilibrium. The numerator represents the quantity of FDG-6-P accumulated in the tissues during the examination (t = 0 corresponds to the moment of injection). The denominator, without LC, represents the delay of FDG equilibrium between the tissues and the plasma.

14.3.3.3. Acquisition protocol

Several methods for deriving images of the rate of glucose metabolism from the PET measurements exist:
– A dynamic method, in which several PET images are acquired after the injection. For each volumetric element of the PET images, the constants k1, k2, and k3 are obtained by a non-linear fit of the equation yielding Ci*(τ) to the evolution of the radioactivity sampled with PET (a sketch of such a fit is given after this list). With these values for the constants, an image of the rate of glucose consumption is calculated using the equation yielding R.
– A simplified dynamic method with two compartments, which estimates a constant of the FDG uptake by a graphical method in each volumetric element.
– An autoradiographic or statistical method, in which a single PET image is acquired 40 to 60 min after injection of the FDG (the FDG is in equilibrium in the tissues with respect to the plasma). The rate of glucose metabolism is calculated for each volumetric element of a PET image with the equation yielding R by taking the means measured in a healthy population as values for the three constants [PHE 79].
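The voxel-wise fit of the dynamic method can be sketched as follows (Python with NumPy/SciPy; the mono-exponential plasma input, the rate constants, and the Euler integration are illustrative assumptions, not a validated kinetic tool):

```python
import numpy as np
from scipy.optimize import curve_fit

def ci_star(t_frames, k1, k2, k3):
    """Ci*(t) = Ce*(t) + Cm*(t) from the compartment equations above,
    integrated with a simple Euler scheme for an assumed plasma input."""
    t = np.linspace(0.0, t_frames.max(), 2000)
    cp = np.exp(-0.1 * t)              # assumed mono-exponential Cp*(t)
    dt = t[1] - t[0]
    ce = np.zeros_like(t)
    cm = np.zeros_like(t)
    for i in range(len(t) - 1):
        ce[i + 1] = ce[i] + dt * (k1 * cp[i] - (k2 + k3) * ce[i])
        cm[i + 1] = cm[i] + dt * k3 * ce[i]
    return np.interp(t_frames, t, ce + cm)

# Synthetic time-activity curve of one voxel, then the non-linear fit.
frames = np.linspace(1.0, 60.0, 30)              # frame times, minutes
tac = ci_star(frames, 0.10, 0.13, 0.06)          # "measured" data
(k1, k2, k3), _ = curve_fit(ci_star, frames, tac, p0=(0.1, 0.1, 0.05))
```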
14.4. Research and clinical applications of PET

Today, the primary application of PET is the staging of cancer and the longitudinal monitoring of patients during oncological treatment. The radiopharmaceutical that is used most is [18F]FDG. The other radiopharmaceutical that has market clearance, in this case for the diagnosis of neuroendocrine cancers and epilepsy, is [18F]-L-DOPA. In the near future, [18F]fluorothymidine, or [18F]FLT, a marker of cancer cell proliferation, will obtain market clearance for the early evaluation of oncological treatment.
14.4.1. Clinical applications of PET: whole-body measurements of glucose metabolism in oncology

14.4.1.1. Physiological basis

Cancer cells show modified glucose transport across the cell membrane and changed activity of hexokinase enzymes compared to normal cells, which is reflected in a higher glucose metabolism. Cancer cells also proliferate more rapidly than healthy cells. The proliferation of cells is a marker of the aggressiveness of the tumor, and the rate of glucose metabolism can be correlated with the grade of the tumor [PAU 98].

14.4.1.2. Acquisition and reconstruction protocol

Whole-body acquisitions are performed after injection of FDG (between 3.7 and 7 MBq/kg) by moving the patient step-by-step through the field of view of the detectors. The number of steps depends on the axial size of the field of view of the scanner and on the size of the examined body. The acquisition time per step varies between 5 and 10 min. For each step, the acquisition is static, and the FDG is in equilibrium in the tissues with respect to the plasma. The acquisition starts about one hour after the injection. The data are usually acquired in 2D mode (if this type of acquisition is possible). For scanners not equipped with a CT, the transmission measurement takes about 2 to 5 min, but it is not systematically performed. The acquired images are characterized by a low signal-to-noise ratio. Statistical reconstruction methods are the most widely used in oncology, because they take into account the statistical properties of the noise in the data.
14.4.1.3. Image interpretation: sensitivity and specificity of this medical imaging modality

The images allow a qualitative interpretation (detection and localization of tumors) and/or a quantitative interpretation, by calculating an index of glucose uptake, such as the ratio of the radioactive concentration measured in a tumor to the dose injected into the patient per unit mass. The sensitivity (rate of detected true positives) and the specificity (related to the rate of detected false positives) of this examination depend on the size of the tumors and on their contrast with respect to adjacent tissues. These two parameters must be optimized depending on:
– the reconstruction algorithm (iterative or analytical);
– the delay between the data acquisition and the injection of the tracer (this delay is the time needed to reach equilibrium of the FDG in the tissues with respect to the plasma; it is known for healthy tissues, but not for tissues with altered glucose metabolism).
Numerous studies are underway to optimize the acquisition and reconstruction parameters. Nevertheless, a consensus already exists on the detection threshold for PET: this imaging modality enables the detection of tumors with a diameter larger than 1 cm and a 2.5-fold higher uptake than cells of healthy tissue. Figure 14.5 provides an example of a whole-body examination in oncology.
Figure 14.5. Images of the radioactive distribution acquired 50 min after an injection of 3.7 MBq/kg of [18F]-fluorodeoxyglucose to a subject
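The uptake index mentioned above is commonly called the standardized uptake value (SUV). A minimal sketch (Python; decay correction is omitted and a tissue density of 1 g/ml is assumed):

```python
def suv(tissue_kbq_per_ml, injected_mbq, weight_kg):
    """Standardized uptake value: tissue concentration divided by the
    injected dose per unit body mass (1 g of tissue ~ 1 ml assumed)."""
    dose_per_g = injected_mbq * 1000.0 / (weight_kg * 1000.0)  # kBq/g
    return tissue_kbq_per_ml / dose_per_g

# Example: 5 kBq/ml in a lesion, 280 MBq injected into a 70 kg patient.
print(f"SUV = {suv(5.0, 280.0, 70.0):.1f}")  # SUV = 1.2
```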
14.4.2. Clinical research applications: study of dopaminergic transmission

14.4.2.1. Dopaminergic transmission system

Dopamine is a neurotransmitter that is synthesized in the dopaminergic fibers located in the basal ganglia of the brain (caudate, putamen, nucleus accumbens). The dopaminergic transmission system is affected in psychiatric diseases like depression or schizophrenia. The neuroleptics used in the treatment of psychiatric diseases influence the regulation of dopamine. Parkinson’s disease is characterized by a loss of dopaminergic neurons.

14.4.2.2. Dopamine synthesis

The precursor of dopamine, L-DOPA, is synthesized from tyrosine. Once released into the synaptic cleft, the dopamine binds to the receptors of the postsynaptic neurons of the striatum, ensuring the transmission of the nerve impulse. The dopamine which is not bound by the receptors is recaptured by the dopaminergic fibers via a transporter, or is degraded by two enzymes, monoamine oxidase B (MAO-B) and catechol-O-methyltransferase.

14.4.2.3. Tracers

By injection of the specific sensors described in Table 14.3, the dopaminergic system may be studied with PET at the presynaptic level (synthesis of dopamine and transporter to the locations where dopamine is recaptured), the synaptic level (MAO-B degrades dopamine in the synaptic cleft), and the postsynaptic level (dopaminergic receptors).

Sensor | Biological function
[18F]-L-DOPA, [18F]-dopamine | Synthesis of dopamine
[76Br]-CBT, [11C]-β-CIT, [11C]-WIN 35,428 | Transporter of dopamine
[11C]-deprenyl | Inhibitor of MAO-B
[11C]-methylspiperone, [11C]-raclopride, [76Br]-lisuride | Receptor D2
[11C]-SCH23390 | Receptor D1

Table 14.3. Biological sensors for the study of dopaminergic systems
14.4.2.4. Example of use: study of neurodegenerative diseases that affect the basal ganglia

These tracers are used to study neurodegenerative diseases in humans and to develop models of these diseases in animals, to better understand the mechanisms leading to the degeneration of dopaminergic fibers and to implement new treatments. The uptake of [18F]-L-DOPA by the dopaminergic cells is an indicator of the density of dopaminergic fibers and the concentration of endogenous dopamine [BRO 90]. In Parkinson’s disease, [18F]-L-DOPA enables the disappearance of dopaminergic neurons in the striatum to be shown with PET, which is correlated with the clinical motor signs [BRO 90]. It also allows evaluation of the rate of progression of neuronal damage during Parkinson’s disease. The use of [11C]-WIN 35,428 enables us to show that the clinical signs of Parkinson’s disease are correlated with a decrease in the dopamine transporter localized in the anterior part of the putamen [FRO 93]. The injection of [18F]-L-DOPA was also employed to evaluate the benefit of transplanting dopaminergic neurons in Parkinson’s disease: an increase in the uptake of [18F]-L-DOPA after transplantation could be demonstrated in this way (Figure 14.6). This increase is correlated with an improvement in the clinical signs [REM 95].
(Figure panels: healthy patient; de novo Parkinson’s disease)
14.5. Conclusion Positron emission tomography is a clinical in vivo imaging tool used for the detection and quantification of perturbations of cellular functions. Today, the most important clinical application is in oncology; new applications are emerging, such as the in vivo imaging of gene expression, thanks to decoding of the human genome.
The continuous improvement in the physical characteristics of scanners during the last 20 years has been enabled by the discovery of new dense and rapid crystals and by the replacement of photomultipliers by new photodetectors such as avalanche diodes. This progress has already enabled the building of multimodal systems combining PET with MRI [TOW 08]. The impact of these multimodal MR-PET systems on patient management will be studied in the coming years. It is worth mentioning that at present no commercial PET system dedicated to whole-body imaging is sold without an attached CT system. Data processing has evolved considerably over the last five years. In particular, the impact of new iterative reconstruction algorithms that incorporate an accurate model of the scanner and of the statistical nature of radioactive emission has been demonstrated [QI 06, PAN 06, SUR 08]. These algorithms enable a more precise quantification of physiological parameters in structures of small size, such as the basal ganglia [SUR 08], as well as an improvement in the detection efficiency of small, hyperfixating structures, such as tumors. These two advances are important for clinical diagnosis and treatment, for example of neurodegenerative diseases. In the future, temporal information will be exploited in image reconstruction in a way that further improves the quantification of molecular processes at the voxel scale. Moreover, MR-PET systems will permit an additional improvement in iterative reconstruction algorithms by incorporating anatomical information, complementary to the functional information.
14.6. Bibliography

[BEN 98] BENDRIEM B., TOWNSEND D. W. (Eds.), The theory and practice of 3D PET, Kluwer Academic, 1998. [BRO 90] BROOKS D. J., SALMON E. P., MATHIAS C. J., QUINN N., LEENDERS K. L., BANNISTER R., MARSDEN C. D., FRACKOWIAK R. S. J., “The relationship between locomotor disability, autonomic function, and the integrity of the striatal dopaminergic system in patients with multiple system atrophy, pure autonomic failure, and Parkinson’s disease, studied with PET”, Brain, vol. 113, pp. 1539–1552, 1990. [CAS 86] CASEY M. E., NUTT R., “A multicrystal two-dimensional BGO detector system for positron emission tomography”, IEEE Trans. Nucl. Sci., vol. 33, pp. 460–463, 1986. [COC 03] COCHEN V., RIBEIRO M. J., NGUYEN J. P., GURRUCHAGA J. M., VILLAFANE G., LOCH C., DEFER G., SAMSON Y., PESCHANSKI M., HANTRAYE P., CESARO P., REMY P., “Transplantation in Parkinson's disease: PET changes correlate with the amount of grafted tissue”, Mov. Disord., vol. 18, pp. 928–932, 2003. [FOX 84] FOX P., MINTUN M. et al., “A non-invasive approach to quantitative functional brain mapping with H215O and PET”, J. Cereb. Blood Flow Metab., vol. 4, pp. 329–333, 1984.
[FRO 93] FROST J. J., ROSIER A. J., REICH S. G., SMITH J. S., EHLERS M. D., SNYDER S. H., RAVERT H. T., DANNALS R. F., “Positron emission tomographic imaging of the dopamine transporter with 11C-WIN 35,428 reveals marked declines in mild Parkinson's disease”, Ann. Neurol., vol. 34, pp. 423–431, 1993. [KAR 90] KARP J., MUEHLLEHNER G., MANKOFF D. A., ORDONEZ C. E., OLLINGER J. M., DAUBE-WITHERSPOON M. E., HAIGH A. T., BEERBOHM D. J., “Continuous-slice PENNPET: a positron tomograph with volume imaging capability”, J. Nucl. Med., vol. 31, pp. 617–627, 1990. [KAR 95] KARP J. S., FREIFELDER R., KINAHAN P. E., GEAGAN M. J., MUEHLLEHNER G., SHAO L., LEWITT R. M., “Evaluation of volume imaging with the HEAD PENN-PET scanner”, Proc. IEEE Med. Imag. Symposium, pp. 1877–1881, 1994. [KIN 98] KINAHAN P. E., TOWNSEND D. W., BEYER T., SASHIN D., “Attenuation correction for a combined 3D PET/CT scanner”, Med. Phy., vol. 25, pp. 2046–2053, 1998. [LAV 83] LAVAL M., MOSZYNSKI M., ALLEMAND R., CORMORECHE E., GUINET P., ODRU R., VACHER J., “Barium fluoride – inorganic scintillator for sub-nanosecond timing”, Nucl. Inst. Meth., vol. 206, pp. 169–176, 1983. [LEW 95] LEWELLEN T. K., KOHLMEYER S. G., MIYAOKA R. S., SCHUBERT S., STEARNS C. W., “Investigation of the count rate performance of General Electric Advance Positron Emission Tomograph”, IEEE Trans. Nucl. Sc., vol. 42, pp. 1051–1057, 1995. [LEW 08] LEWELLEN T. K., “Recent developments in PET detector technology”, Phys. Med. Biol., vol. 53, pp. 287–317, 2008. [MAZ 90] MAZOYER B., TREBOSSEN R., SCHOUKROUN C., VERREY B., SYROTA A., VACHER J., LEMASSON P., MONNET O., BOUVIER A., LECOMTE J. L., “Physical characteristics of TTV03, a new high spatial resolution time-of-flight positron tomograph”, IEEE Trans. Nucl. Sci., vol. 37, pp. 778–782, 1990. [NIC 99] NICHOLS T. E., QI J., LEAHY R. M., “Continuous time dynamic PET imaging using list mode data”, Proc. Conf. Inform. Proc. Med. Imaging, LNCS 1613, pp. 98–111, Springer-Verlag, 1999. [OLL 96] OLLINGER J. M., “Model-based scatter correction for fully 3D PET”, Phys. Med. Biol., vol. 41, pp. 153–176, 1996. [PAN 06] PANIN V. Y., KEHREN F., MICHEL C., CASEY M., “Fully 3-D PET reconstruction with system matrix derived from point source measurements”, IEEE Trans. Med. Imag. vol. 25, pp. 907–921, 2006. [PAU 98] PAUWELS E. K. J., RIBEIRO M. J., STOOT J. H. M. B., MCCREADY V. R., BOURGUIGNON M., MAZIÈRE B., “FDG accumulation and tumor biology”, Nucl. Med. Bio., vol. 25, pp. 317–322, 1998. [PHE 75] PHELPS M. E., HOFFMAN E. J., MULLANI N. A., TER-POGOSSIAN M. M., “Application of annihilation coincidence detection to transaxial reconstruction tomography”, J. Nucl. Med., vol. 16, pp. 210–224, 1975.
[PHE 79] PHELPS M. E., HUANG S. E., HOFFMAN E. J., SELIN E. J., SOKOLOFF L., KUHL D. E., “Tomographic measurements of local cerebral glucose metabolic rate in human with [18F]2-fluoro-2-deoxy-D-glucose: validation and methods”, Ann. Neurol., vol. 6, pp. 371–388, 1979. [QI 06] QI J., LEAHY R., “Iterative reconstruction algorithms in emission tomography”, Phys. Med. Biol., vol. 51, pp. 541–578, 2006. [RAN 62] RANKOWITZ A., “Positron scanner for locating brain tumors”, IEEE Trans. Nucl. Sci., vol. 9, pp. 45–49, 1962. [REA 02] READER A. J., ALLY S., BAKATSELOS F., MANAVAKI R., WALLEDGE R. J., JEAVONS A. P., JULYAN P. J., ZHAO S., HASTINGS D. L., ZWEIT J., “One-pass list-mode EM algorithm for high resolution 3D PET image reconstruction into large arrays”, IEEE Trans. Nucl. Sci., vol. 49, pp. 693–699, 2002. [SUR 08] SUREAU F., READER A. J., COMTAT C., LEROY C., RIBEIRO M. J., BUVAT I., TREBOSSEN R., “How does resolution modeling impact PET images”, J. Nucl. Med., vol. 49, pp. 1000–1008, 2008. [SOK 77] SOKOLOFF L., REIVICH M., KENNEDY C., DES ROSIERS M. H., PATLAK C. S., PETTIGREW K. D., SAKURADA O., SHINOHARA M., “The [14C]deoxyglucose method for the measurement of local cerebral glucose utilization: theory, procedure and normal values in the conscious and anesthetized albino rat”, J. Neurochem., vol. 28, pp. 897–916, 1977. [TAV 98] TAVITIAN B., TERRAZZINO S., KUEHNAST B., MARZABAL S., STETTLER O., DOLLE F., DEVERRE J. R., JOBERT A., HINNEN F., BENDRIEM B., CROUZEL C., DI GIAMBERARDINO L., “In vivo imaging of oligonucleotides with positron emission tomography”, Nature Medicine, vol. 4, pp. 467–471, 1998. [TER 75] TER-POGOSSIAN M. M., PHELPS M. E., HOFFMAN E. J., MULLANI N. A., “A positron emission transaxial tomograph for nuclear imaging (PETT)”, Radiology, vol. 114, pp. 89–98, 1975. [TOM 81] TOMITANI T., “Image reconstruction and noise evaluation in photon time-of-flight assisted positron emission tomography”, IEEE Trans. Nucl. Sci., vol. 28, pp. 4582–4589, 1981. [TOW 08] TOWNSEND D. W., “Multimodality imaging of structure and function”, Phys. Med. Biol., vol. 53, pp. 1–39, 2008. [TRE 98] TREBOSSEN R., BENDRIEM B., RIBEIRO M. J., FONTAINE A., FROUIN V., REMY P., “Validation of the 3D acquisition mode in PET for the quantification of the F-18 fluoradopa uptake in the striatum”, J. Cereb. Blood Flow Metab., vol. 18, pp. 951–959, 1998. [WIE 94] WIENHARD K., DAHLBOM M., ERIKSSON L., MICHEL C., BRUCKBAUER T., PIETRZYK U., HEISS W. D., “The ECAT Exact HR: performance of a new high resolution positron scanner”, J. Comp. Assist. Tomog., vol. 18, pp. 110–118, 1994.
Chapter 15
Functional Magnetic Resonance Imaging
15.1. Introduction

In the past two decades, there have been impressive advances in functional cerebral imaging [ORR 95]. Positron emission cameras enable the entire brain to be explored with sufficient sensitivity and spatial resolution to allow a detailed mapping of various biochemical processes or of cerebral perfusion. In addition, the dynamics of cerebral activation may be followed with a temporal resolution of the order of a millisecond with electrical or magnetic source imaging. The advent of functional MRI (fMRI) in 1991 [BEL 91] and the ensuing developments in this field [MOO 99] have created an unprecedented interest in functional cerebral imaging within the neurological and cognitive science communities. The technique provides access to hemodynamic information of the same nature as that obtained with positron emission tomography (PET) using 15O-labeled water as the radioactive tracer of perfusion. Furthermore, it has several characteristics that form the basis for the interest in this technique. It allows exploration of the brain with excellent spatial and temporal resolutions [MOO 99]. It provides functional information that may easily be superimposed on anatomical images with high spatial resolution, which are obtained in the same MRI examination. The technique is strictly non-invasive, thus allowing examinations to be repeated at will on healthy subjects. Finally, it requires the use of a whole-body MRI scanner, a tool that is widely available at an operational cost far below that of a positron emission camera, which was previously considered to be the gold standard in functional cerebral imaging.
Chapter written by Christoph SEGEBARTH and Michel DÉCORPS.
These methods for functional imaging enable the study of the human brain at a systems level. In this sense, these methods are fundamentally different from the electrophysiological methods employed to examine brain function in non-human primates. In particular, they allow identification of the functional specialization of certain cerebral regions for particular cognitive functions which cannot be studied in non-human primates. An example is the involvement of the prefrontal cortex in different speech processes. In addition to contributing to our knowledge about healthy brain function, these methods are obviously also of clinical interest. This is the case for the presurgical determination of the hemispheric predominance of language in epileptic patients resistant to medical treatment, or for the presurgical assessment of patients with an intra-cerebral tumor.

15.2. Functional MRI of cerebrovascular responses

The existence of a coupling between cerebral activation and the cerebrovascular system has been known for more than a century [ROY 90]. However, the precise mechanisms at the basis of this coupling have still not been fully identified. Nevertheless, it has been established that the activation of a population of neurons generates a local increase in cerebral blood volume, cerebral blood flow, and blood oxygenation. It has also been established that the coupling between neural activation and these cerebrovascular parameters is relatively rapid (within some seconds). In functional cerebral imaging with PET, it is the coupling between neuronal activation and cerebral blood flow that is exploited. In fMRI, the coupling between neuronal activation and each of the three cerebrovascular parameters may contribute to the measured signals [OGA 98]. The first functional MRI experiments showed local increases in cerebral blood volume [BEL 91]. A bolus of an exogenous paramagnetic tracer (chelated molecules of gadolinium) is injected intravenously, which enables its first cerebral transit to be followed. The cerebral blood volume may be determined (up to a proportionality constant) from the signal intensity curves during this transit. Cerebral blood volume maps measured under different experimental conditions may thus be compared using this approach. However, the temporal resolution, with which the experimental conditions may be changed, is low (in the order of several minutes). It is similar to that obtained with techniques using radioactive tracers, by which this approach is directly inspired. In addition, the number of experimental conditions is limited by the small number of allowed contrast agent injections. Two developments in MRI then started a true revolution in the domain of functional cerebral imaging.
The first development concerns the technological advances in the fields of instrumentation and computation in MRI. In the field of instrumentation, progress enabled the routine use of “instantaneous” MRI techniques such as echo-planar imaging (EPI) (see Chapter 12) on clinical scanners at “high” magnetic fields (1.5 T). These techniques, whose principles were established early on [MAN 77], have benefited from progress in the domain of magnetic field gradient technology (constant gradients of magnetic field are temporarily applied in MRI to spatially encode the nuclear signal, see Chapter 12). In particular, the maximal strength and switching speed of gradients have reached the level (typically 30 mT/m and 70 mT/m/ms, respectively) required to acquire the set of samples in Fourier space (also called k-space) associated with a 2D image in times compatible with the transverse relaxation time constant T2* of the water molecules in cerebral tissues (some tens of ms). EPI enables exploration of the brain with typically 20 or 30 cross sections (resolution of some mm in the three spatial directions) in about two seconds. In parallel, the development of “actively” shielded gradients has allowed suppression of the induction of eddy currents in the cryostat of the superconducting magnet during the rapid switching of the gradients in EPI to a large extent [MAN 86]. These developments have enabled the cerebral transit of the bolus of a paramagnetic contrast agent to be followed with sufficient temporal resolution for the first time with MRI [BEL 91]. Thus, they also allowed a mapping of the functional increases in cerebral blood volume in the visual cortex. In the field of computation, progress has made possible the archiving and processing of the data acquired in fMRI. This is important since fMRI sessions may generate up to some Gbytes of data that must be handled. The second development resides in the identification of different endogenous contrast mechanisms with MRI that enable neuronal activation to be shown (through the detection of variations of certain cerebrovascular parameters). The first of these mechanisms is founded on the functional variations of the longitudinal magnetization MZ (see Chapter 12) of the protons of the water molecules in a cross section [DET 92], or, in other words, on the functional variations of the associated apparent longitudinal relaxation time constants (T1App). These parameters depend on the perfusion of the tissue. By combining the results of measurements carried out under different MRI conditions (for example, in the presence and absence of saturation of the arterial region upstream of the examined cross section), the contribution of the longitudinal magnetization that is only due to perfusion may be isolated, and cerebral perfusion maps may be generated. The MRI techniques that generate perfusion images in this way are classified as arterial spin labeling methods. They allow quantitative measurements of cerebral perfusion with excellent spatial and temporal resolution [ROB 94, WIL 92]. By repeating these measurements under different functional conditions, functional variations in cerebral perfusion may be mapped.
The second of these mechanisms is based on modifications of the transverse magnetization of the protons of water molecules. These variations are linked to functional changes in blood oxygenation [THU 82]. Strong functional increases in cerebral perfusion (in the order of several tens of percent) are in fact accompanied by a significant, but weaker, growth in oxygen consumption [FOX 86]. They result, downstream of the activated neuronal populations, in an increase in the concentration of oxyhemoglobin in the blood, at the expense of the concentration of deoxyhemoglobin. The magnetic properties of deoxyhemoglobin and oxyhemoglobin differ. Oxyhemoglobin is diamagnetic (like the majority of biological tissues), while deoxyhemoglobin is paramagnetic. The presence of deoxyhemoglobin in the blood consequently induces a difference Δχm between the magnetic susceptibilities of blood and tissues. This difference causes a perturbing magnetic field ΔB0 around the vessels, which shortens the transverse relaxation times T2 and T2* of the protons of the water molecules in the extravascular space (this space represents approximately 96% of the total cerebral volume). The functional increase in the concentration of oxyhemoglobin in the blood thus induces a prolongation of these relaxation times, and in consequence an increase in the intensity of the NMR signal measured with sequences that are T2- and/or T2*-weighted. This functional contrast mechanism is known by the acronym BOLD (blood oxygenation level dependent) [OGA 90]. The vast majority of fMRI experiments exploit this contrast mechanism.

15.3. fMRI of BOLD contrasts

15.3.1. Biophysical model

The functional responses of BOLD type, detected with diverse MRI sequences, may be predicted – or interpreted – with the help of numerical Monte-Carlo simulations [KEN 94, WEI 94]. In these simulations, the cortical tissue is modeled in a simplified way by a group of vessels embedded in a diamagnetic environment and schematically represented by cylinders of infinite length and a magnetic susceptibility of greater or lesser strength, according to the degree of neuronal activity. For this approximation, analytical expressions of the perturbing magnetic field are known outside of (and inside) the vessels [OGA 93]. Outside of the vessels, ΔB is proportional to the susceptibility difference Δχm and to the strength of the applied static magnetic field B0, and it decreases with the square of the “normalized distance” r with respect to the center of the vessel:
$\Delta B_0(r) \propto \Delta\chi_m \, B_0 \, \frac{1}{r^2}$   [15.1]
The “normalized distance” is the distance normalized by the diameter of the vessel. To a first approximation, the diameter of the vessels may thus be considered a “characteristic distance” over which the effects of the perturbing magnetic field, induced by the difference between the intra- and extravascular susceptibilities, extend. Hence, for the capillaries, these effects extend to a significant degree over distances of about 10 μm. By contrast, for the smaller and larger veins draining the activated regions, these effects extend over far larger distances.
15.3.2. “Static” conditions

The sensitivity of MRI to functional variations of the magnetic susceptibility of blood depends on the size of the vessels and on the particular MRI sequence employed. Let us initially assume that the water molecules of cerebral tissue are not subject to the random motion linked to thermal agitation. In this case, MRI sequences of “spin-echo” (SE) type are strictly insensitive to BOLD effects (i.e. to a perturbing magnetic field). It should be remembered that these sequences are composed of an RF excitation pulse (generally of 90°), which tilts the longitudinal magnetization into the transverse plane, and of a “180°” RF pulse, which refocuses (in the absence of diffusion) the different “isochromats” (i.e. the transverse magnetizations that are dephased with respect to each other because of magnetic field inhomogeneities). This refocusing leads to the appearance of a “spin-echo”, which constitutes the measured signal.

Presuming “static” water molecules, sequences of “gradient-echo” (GE) type exhibit, by contrast, a strong sensitivity to BOLD effects. It should be remembered that these sequences do not refocus the “isochromats”. They are composed of a single RF pulse, which tilts the longitudinal magnetization into the transverse plane. Applying, after the RF pulse, a readout gradient of opposite polarity to the one used afterwards during sampling leads to the appearance of a “gradient-echo”, which constitutes the measured signal for this sequence.

In reality, water molecules in tissues are exposed to random motion. To a first approximation, this motion is characterized by a free diffusion constant D_l. In a given time span (in the case of MRI measurements, the time span of interest is the echo time T_E), the mean distance d covered by the water molecules in a particular direction is:
$d = \sqrt{2 D_l T_E}$   [15.2]
In cerebral tissues, the free diffusion constant of water molecules is in the order of 1 μm²/ms [LEB 95]. Thus, for an echo time of 50 ms, the mean distance covered by the water molecules is 10 μm. This distance is relatively short compared to the characteristic distances over which the perturbing magnetic field extends in the case of smaller veins (diameters of several tens of micrometers) and especially of larger veins (diameters up to several millimeters). The random motion of water molecules in the neighborhood of these vascular structures thus has little effect. Under these conditions, which are called “static”, gradient-echo sequences exhibit a strong sensitivity to BOLD effects, while spin-echo sequences show a weak sensitivity to them.
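As a quick numerical check of equation [15.2], the following minimal sketch evaluates the mean diffusion distance for the values just quoted (the constants are those given in the text, not new data):

```python
import numpy as np

D_l = 1.0   # free diffusion constant in cerebral tissue, um^2/ms [LEB 95]
T_E = 50.0  # echo time, ms

d = np.sqrt(2 * D_l * T_E)           # equation [15.2]
print(f"mean distance covered: {d:.0f} um")  # -> 10 um
# 10 um is comparable to capillary diameters (the "intermediate" regime
# below) but small compared to venous diameters (the "static" regime).
```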
15.3.3. “Intermediate” conditions

If the mean distance covered by the water molecules is comparable to the characteristic distances of the perturbing magnetic field (conditions called “intermediate”), the sensitivity of gradient-echo sequences to BOLD effects is weaker. This results from the fact that during the relevant time span the water molecules traverse regions with different magnetic field strengths. In consequence, the dispersion of the mean values of the magnetic fields to which the different water molecules are exposed during that time span is significantly reduced compared to the “static” conditions, as is the dephasing between the transverse magnetizations of the different water molecules. By contrast, the displacements of the water molecules under “intermediate” conditions introduce a certain sensitivity of spin-echo sequences to BOLD effects: the random displacement of water molecules impedes a perfect refocusing of the transverse magnetization associated with the different water molecules. These “intermediate” conditions are encountered in the diffusion of water molecules in the extravascular space of the capillary network (the diameters of capillaries are in the order of 5 μm).
15.3.4. “Motional narrowing” conditions

Finally, the second extreme case is encountered when the mean distance covered by the water molecules during the relevant time span is large compared to the diameter of the vessels. In this case, the water molecules sample, in the limit, all present magnetic field strengths. This further reduces the dispersion of the mean values of the magnetic field strengths experienced by the water molecules during the experiment, rendering gradient-echo as well as spin-echo sequences insensitive to BOLD effects. Hence, we speak of conditions of “motional narrowing”, referring to the narrowing of the resonance line width of water molecules under these conditions. In practice, the relevant time spans in fMRI (fixed echo times based on the transverse relaxation times of cerebral tissues) are too short to encounter extreme conditions of this type.
15.3.5. In practice

In conclusion, gradient-echo sequences exhibit an optimal sensitivity to the BOLD effect in the neighborhood of small and large veins (“static” conditions); this sensitivity decreases monotonically with the diameter of the vessels. Spin-echo sequences show a selective sensitivity to BOLD effects at the level of the capillary network. These conclusions may suggest that spin-echo sequences should be preferred in fMRI. In reality, Monte Carlo simulations, as well as experimental results, indicate that gradient-echo sequences exhibit under all conditions a sensitivity for detecting BOLD contrasts that is clearly superior to that of spin-echo sequences. However, this sensitivity remains low. Thus, with a gradient-echo sequence optimized for 1.5 T, the relative variations of the MRI signal have a limited extent. The use of higher static magnetic field strengths improves the signal-to-noise ratio of NMR signals and simultaneously emphasizes the BOLD contrasts, since the strength of the perturbing magnetic field is proportional to that of the static magnetic field. In consequence, there is a strong interest in whole-body magnets with very high fields in the fMRI community. Numerous groups are already equipped with MRI scanners at 3 T or 4 T, while some scanners operate at 7 T, 8 T, and beyond.
15.4. Different protocols

In fMRI, the brain is explored repeatedly, generally over several minutes, with a temporal resolution of two or three seconds. The repetition of the measurements enables improvement of the signal-to-noise ratio, an approach often taken in NMR. The signal-to-noise ratio in the “raw” fMRI images is typically in the order of 100 (at 1.5 T), and the functional responses account for no more than a few percent of the signal intensities in these images in the best case, and often for only a few parts per thousand. The functional images are thus very “noisy”, through thermal noise (at the level of the receive chain and at the level of the examined tissues) as well as physiological noise. The identification of statistically significant functional responses consequently requires a rigorous statistical treatment. Numerous software packages have been developed with this aim. One of the most popular is SPM (Statistical Parametric Mapping), originally developed to process functional images obtained with PET (Wellcome Department of Cognitive Neurology, London).
15.4.1. Block paradigms

The majority of fMRI experiments are realized with protocols (“paradigms”) inspired by those used in PET. In these protocols – of block type – two experimental conditions are typically alternated (see Figure 15.1). The differences between conditions may reside in the presented stimuli and/or in the tasks assigned to the subject. The duration of the application of the two conditions is generally the same (in the order of a minute), and the alternation between the conditions is repeated a certain number of times. The latter allows the effects of artifacts linked to slow variations in the baseline of the measured signals to be removed during data processing.
Figure 15.1. Typical plan of an fMRI examination of “block” type. Two experimental conditions C1 and C2 are periodically alternated. Their duration is several tens of seconds (60 s in this example). During this time, the cerebral volume is repeatedly measured (every 3 s here)
Protocols of this type have been applied in countless fMRI studies of the motor and sensory systems and of certain aspects of cognitive processing. Although the temporal resolution of successive measurements of the cerebral volume is in the order of a second, the temporal resolution of the measurements associated with the different experimental conditions is only in the order of several tens of seconds. In protocols of block type, events of the same type (stimuli and/or actions of the subject) typically follow each other every 2 s to 3 s during each of the conditions of the paradigm. The detected activations ultimately reflect statistically significant differences between the mean responses measured during each of these conditions. With these paradigms, it is impossible to identify the responses associated with individual events that take place at a particular moment of the measurement.
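The statistical contrast underlying block paradigms can be sketched as a per-voxel comparison of the mean signals measured under the two conditions. The following toy example on synthetic data uses a simple two-sample t-test; the voxel count, response amplitude, and threshold are illustrative assumptions, and real packages such as SPM fit a full linear model with hemodynamic regressors instead:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_vox, n_scans = 1000, 200
labels = (np.arange(n_scans) // 20) % 2          # C1/C2 blocks of 20 scans
data = rng.normal(100.0, 1.0, (n_vox, n_scans))  # baseline with SNR ~ 100
data[:50, labels == 1] += 1.0                    # ~1% response in 50 voxels

t, p = stats.ttest_ind(data[:, labels == 1], data[:, labels == 0], axis=1)
active = p < 0.05 / n_vox                        # crude Bonferroni correction
print(f"{active.sum()} voxels detected as active")
```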
15.4.2. Event-related paradigms

More recent developments have made possible the measurement of responses to individual events and have thus largely extended the possibilities of the application
and the experimental design of fMRI [ROS 98]. The principle of the approach – event-related fMRI – consists of identifying the hemodynamic response to individual, brief events that are separated in time (see Figure 15.2).
Figure 15.2. Block versus event-related paradigms. The vertical lines indicate the successive events. a) For the block paradigms, the successive events take place at constant, short interevent intervals (2 s in this example). b)–d) For the event-related paradigms, the events are separated either by long, constant intervals (15 s in example b) or by intervals of variable duration with short mean duration (3 s in examples c and d). The event-related paradigms may contain events of a single type (c, d) or of different types (for instance, by interleaving the two paradigms c and d)
The hemodynamic response to these events may show considerable variability between different regions of the same brain, as well as between a given region examined in different subjects. This variability of the hemodynamic response constitutes one of the difficulties of (event-related) fMRI. The hemodynamic impulse response h_H(t) is often approximated by a gamma function [BOY 96]. Thus, in the primary visual cortex, the hemodynamic impulse response has the analytical expression:
$h_H(t) = 0 \quad \text{for } t < \delta$   [15.3]

$h_H(t) = \left(\frac{t-\delta}{\tau}\right)^2 \exp\left(-\frac{t-\delta}{\tau}\right) \quad \text{for } t \geq \delta$   [15.4]
where δ denotes the delay between the appearance of the response and the impulse event, and τ denotes a characteristic time linked to the return to the base value (zero in the present case). Typical parameters are δ = 2.5 s and τ = 1.25 s (see Figure 15.3). For these, the hemodynamic response increases after a latency of 2.5 s to reach its maximum about 10 s after the impulse stimulus. The response returns to its base value after about 20 s. Under certain experimental conditions, the amplitude of the hemodynamic response behaves approximately linearly [BOY 96]. The hypothesis of linearity of the hemodynamic response has often been adopted since then. Its interest stems from the ability to predict theoretical responses on the basis of a simple convolution between the paradigm (considered as a sequence of impulse events) and the impulse response of the system.
Figure 15.3. Response function describing the hemodynamic impulse response in the primary visual cortex
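The linear prediction just described can be sketched in a few lines: sample the impulse response of equations [15.3] and [15.4] (with the typical parameters δ = 2.5 s and τ = 1.25 s) and convolve it with the paradigm, represented as a train of impulse events. The time step and the paradigm below are illustrative choices:

```python
import numpy as np

def hrf(t, delta=2.5, tau=1.25):
    """Hemodynamic impulse response of equations [15.3]-[15.4]."""
    s = (t - delta) / tau
    return np.where(t < delta, 0.0, s**2 * np.exp(-s))

dt = 0.1                                 # time step, s
kernel = hrf(np.arange(0.0, 30.0, dt))   # response has returned to baseline

events = np.zeros(int(300 / dt))         # 300 s paradigm
events[:: int(15 / dt)] = 1.0            # one impulse event every 15 s

predicted = np.convolve(events, kernel)[: events.size]  # expected BOLD course
```

Under the linearity hypothesis, shorter or jittered inter-event intervals change only the event train, not the convolution, which is what makes the rapid designs discussed below analyzable.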
In many event-related fMRI paradigms, successive events are separated in time by constant intervals. The optimal spacing between the events is then in the order of 15 s. Under these conditions, the superposition of hemodynamic responses to successive events is avoided. The inevitably limited duration of an fMRI examination results in a reduced number of measurements. For an examination time of 20 min, for example, the hemodynamic responses of only 80 events can be collected. This leads to a relatively poor signal-to-noise ratio. In addition, protocols in which the stimuli are presented at regular intervals have disadvantages at a purely behavioral level. The subjects may anticipate the sequence of events. Long and regular inter-stimulus intervals also tend to induce an uncontrollable cognitive activity in the subjects. By using paradigms in which successive events are separated by intervals of pseudo-random duration, the variance of the signal increases monotonically with decreasing mean inter-event interval (assuming linear behavior of the hemodynamic response) [BUR 98]. The number of events per unit time may thus be increased significantly compared to paradigms with constant inter-stimulus intervals, leading to results with higher statistical power. At the behavioral level, however, the
event-related paradigms with inter-event intervals of pseudo-random duration clearly require more attention from the subjects. Thus, event-related fMRI paradigms with intervals of pseudo-random duration appear to have a double advantage over event-related paradigms with constant intervals. In event-related fMRI paradigms, different types of events may be interleaved. This enables isolation of the functional responses by a post hoc selection of events on the basis of the responses provided by the subjects during a particular cognitive task (according to, for example, whether the responses are right or wrong, or whether an item is recognized or not). Finally, the possibility of rapidly presenting the stimuli, at irregular inter-stimulus intervals, enables paradigms to be conceived that are very similar to those used in behavioral studies or in electrophysiology (evoked potentials).
15.4.3. Fourier paradigms

Paradigms of Fourier type [DEY 94, ENG 94] are almost exclusively applied in the framework of studies of the visual system (localized at the level of the occipital pole of the brain). They exploit the retinotopic properties of numerous visual areas (a visual area is organized in a retinotopic fashion if each region of the visual field is represented in a one-to-one fashion by a particular subregion of this area). On the basis of these properties, the borders between adjacent retinotopic visual areas can be precisely traced [SER 95]. Before the development of these techniques, only a few visual areas could be localized in humans. These localizations were obtained with the help of functional imaging paradigms of block type (in PET or in fMRI), which aimed at showing the cerebral areas that respond specifically to certain characteristics of stimuli. As an example, the visual area sensitive to motion (the area MT/V5) could be identified by alternating between the presentation of stationary and moving visual stimuli [WAT 93]. Fourier-type paradigms have enabled delineation of numerous areas in humans that are topographically equivalent to areas identified in macaques (the areas V1, V2, V3, V3A, V4, VP, etc.). In the quest to identify different visual areas, the topographic organization of the visual system into different functional entities in the ape serves as a guide (more than 30 functional areas have been identified by a group of methods, including electrophysiology, histology, and the injection of tracers). In Fourier-type paradigms, visual stimuli are used that periodically cover the visual field (see Figure 15.4). This is done either by a ring centered in the visual field, whose diameter varies periodically, or by a sector (typically with an angle of 20° to 30°), whose apex is centered on the point of fixation (center of the visual field) and whose
angle with respect to a reference axis increases linearly with time. At each moment, the stimuli of the first type activate, within the different retinotopic areas, the cortical regions whose receptive field lies within the presented ring, while the stimuli of the second type activate the cortical regions whose receptive field lies within the presented sector. The periodic, ring-shaped stimuli induce within the different retinotopic areas waves of responses, traveling from the cortical representations of the foveal region (localized posteriorly) to the cortical representations of more eccentric regions of the visual field (localized more anteriorly). For the sector-shaped stimuli, the cortical response successively covers, within the different retinotopic areas, the cortical representations of the upper vertical meridian, the right horizontal meridian, the lower vertical meridian, and the left horizontal meridian of the visual field (for a sector initially centered on the upper vertical meridian and rotating clockwise).
Figure 15.4. Examples of visual stimuli employed in Fourier imaging of the visual system. The cross represents the fixation point. The visual field is covered periodically, either by a ring with variable eccentricity (left) or by a sector turning around the point of fixation (right)
Data processing at the end of these experiments must first allow identification of the simultaneously activated cortical areas. These areas correspond to a given eccentricity in the visual field when a stimulus in the form of a ring is employed, and to a given polar angle in the visual field when a stimulus in the form of a sector is presented. To this end, the periodic signals collected for each pixel are analyzed by Fourier transformation. The simultaneity of responses leads to common phases in the frequency spectra. Second, data processing must enable delineation of the different functional areas. This operation is based on calculation of the “local visual field sign” [SER 95], which alternates between adjacent retinotopic areas. This calculation is largely simplified by a planar representation of the cortical surface. Figure 15.5 shows, as an example, the delineation of low-order visual areas on a planar representation of the occipital part of the left hemisphere.
Figure 15.5. Left: segmented cortical surface of the occipital part of the left cerebral hemisphere of a volunteer. View of the mesial part. Right: planar representation of the segmented surface presented in the image on the left. The phase encoded in gray values is that of the functional responses obtained for stimulation with a sector periodically covering the visual field. The different visual areas V1, V2v, VP, V2d, V3 and V3A are delineated on the basis of the properties of the phase of the functional responses. The images were provided by J. Warnking, A. Guérin-Dugué et al., INSERM U438 and INPG/LIS, Grenoble
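The first data processing step described above – identifying, pixel by pixel, the phase of the periodic response – reduces to reading off the Fourier coefficient at the stimulation frequency. Here is a minimal sketch on synthetic time courses; the scan count, repetition time, and stimulus period are assumed values, not ones taken from the text:

```python
import numpy as np

n_scans, tr = 128, 2.0       # number of scans and repetition time (assumed)
f_stim = 1.0 / 64.0          # ring/sector repeats every 64 s (assumed)
t = np.arange(n_scans) * tr

rng = np.random.default_rng(1)
true_phase = rng.uniform(0, 2 * np.pi, 500)            # one phase per pixel
data = np.cos(2 * np.pi * f_stim * t - true_phase[:, None])
data += rng.normal(0, 0.5, data.shape)                 # measurement noise

spectra = np.fft.rfft(data, axis=1)
freqs = np.fft.rfftfreq(n_scans, d=tr)
k = np.argmin(np.abs(freqs - f_stim))   # bin of the stimulation frequency
phase = np.angle(spectra[:, k])         # response phase per pixel (up to sign)
```

Pixels responding simultaneously share a common phase; mapped onto the flattened cortical surface, this phase encodes eccentricity for the ring stimulus and polar angle for the sector stimulus.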
15.5. Bibliography

[BEL 91] BELLIVEAU J. W., KENNEDY D. N., MCKINSTRY R. C. et al., “Functional mapping of the human visual cortex by magnetic resonance imaging”, Science, vol. 254, pp. 716–719, 1991.

[BOY 96] BOYNTON G. M., ENGEL S. A., GLOVER G. H., HEEGER D. J., “Linear systems analysis of functional magnetic resonance imaging in human V1”, J. Neurosc., vol. 16, pp. 4207–4221, 1996.

[BUR 98] BUROCK M. A., BUCKNER R. L., WOLDORFF M. G. et al., “Randomized event-related experimental designs allow for extremely rapid presentation rates using functional MRI”, NeuroReport, vol. 9, pp. 3735–3739, 1998.

[DET 92] DETRE J. A., LEIGH J. S., WILLIAMS D. S., KORETSKY A. P., “Perfusion imaging”, Magn. Res. Med., vol. 23, pp. 37–45, 1992.

[DEY 94] DEYOE E. A., BANDETTINI P., NEITZ J. et al., “Functional magnetic resonance imaging (fMRI) of the human brain”, J. Neurosc. Meth., vol. 54, pp. 171–187, 1994.
[ENG 94] ENGEL S. A., RUMELHART D. E., WANDELL B. A. et al., “fMRI of human visual cortex”, Nature, vol. 369, p. 525, 1994.

[FOX 86] FOX P. T., RAICHLE M. E., “Focal physiological uncoupling of cerebral blood flow and oxidative metabolism during somatosensory stimulation in human subjects”, Proc. Natl. Acad. Sci., vol. 83, pp. 1140–1144, 1986.

[KEN 94] KENNAN R. P., ZHONG J., GORE J. C., “Intravascular susceptibility contrast mechanisms in tissues”, Magn. Reson. Med., vol. 31, pp. 9–31, 1994.

[LEB 95] LE BIHAN D., TURNER R., PATRONAS N., “Diffusion MR imaging in normal brain and in brain tumors”, in Diffusion and Perfusion Magnetic Resonance Imaging. Applications to Functional MRI, LE BIHAN D. (Ed.), Raven Press, New York, pp. 134–139, 1995.

[MAN 77] MANSFIELD P., “Multi-planar image formation using NMR spin echoes”, J. Phys. C, vol. 10, pp. L55–L58, 1977.

[MAN 86] MANSFIELD P., CHAPMAN B., “Active magnetic screening of gradient coils in NMR imaging”, J. Phys. C, vol. 15, pp. 235–239, 1986.

[MOO 99] MOONEN C. T. W., BANDETTINI P. A., “Functional MRI”, in Medical Radiology. Diagnostic Imaging, BAERT A. L., HEUCK F. H. W., YOUKER J. E. (Eds.), Springer, Berlin, 1999.

[OGA 90] OGAWA S., LEE T. M., NAYAK A. S., “Oxygenation-sensitive contrast in magnetic resonance imaging of rodent brain at high magnetic fields”, Magn. Res. Med., vol. 14, pp. 468–478, 1990.

[OGA 93] OGAWA S., MENON R., TANK D. W. et al., “Functional brain mapping by blood oxygenation level-dependent contrast magnetic resonance imaging. A comparison of signal characteristics with a biophysical model”, Biophys. J., vol. 64, pp. 803–812, 1993.

[OGA 98] OGAWA S., MENON R. S., KIM S. G., UGURBIL K., “On the characteristics of functional magnetic resonance imaging of the brain”, Annu. Rev. Biophys. Biomol. Struct., vol. 27, pp. 447–474, 1998.

[ORR 95] ORRISON W. W., LEWINE J. D., SANDERS J. A., HARTSHORNE M. F., Functional Brain Imaging, Mosby, Saint Louis, 1995.

[ROB 94] ROBERTS D. A., DETRE J. A., BOLINGER L. et al., “Quantitative magnetic resonance imaging of human brain perfusion at 1.5 T using steady-state inversion of arterial water”, Proc. Natl. Acad. Sci., vol. 91, pp. 33–37, 1994.

[ROS 98] ROSEN B. R., BUCKNER R. L., DALE A. M., “Event-related functional MRI: past, present, and future”, Proc. Natl. Acad. Sci., vol. 95, pp. 773–780, 1998.

[ROY 90] ROY C. S., SHERRINGTON C. S., “On the regulation of the blood supply of the brain”, J. Physiol., vol. 11, pp. 85–108, 1890.

[SER 95] SERENO M. I., DALE A. M., REPPAS J. B. et al., “Borders of multiple visual areas in humans revealed by fMRI”, Science, vol. 268, pp. 889–893, 1995.
[THU 82] THULBORN K. R., WATERTON J. C., MATTHEWS P. M., RADDA G. K., “Oxygenation dependence of the transverse relaxation time of water protons in whole blood at high field”, Biochim. Biophys. Acta, vol. 714, pp. 263–270, 1982.

[WAT 93] WATSON J. D. G., MYERS R., FRACKOWIAK R. S. J. et al., “Area V5 of the human brain: evidence from a combined study using positron emission tomography and magnetic resonance imaging”, Cereb. Cortex, vol. 3, pp. 79–94, 1993.

[WEI 94] WEISSKOFF R. M., ZUO C. S., BOXERMAN J. L., ROSEN B. R., “Microscopic susceptibility variation and transverse relaxation: theory and experiment”, Magn. Res. Med., vol. 31, pp. 601–610, 1994.

[WIL 92] WILLIAMS D. S., DETRE J. A., LEIGH J. S., KORETSKY A. P., “Magnetic resonance imaging of perfusion using spin inversion of arterial water”, Proc. Natl. Acad. Sci., vol. 89, pp. 212–216, 1992.
Chapter 16
Tomography of Electrical Cerebral Activity in Magneto- and Electro-encephalography
16.1. Introduction

Functional cerebral imaging comprises the techniques that allow study of the working brain. Its goal is to answer two fundamental questions: where and when do the different information processing steps take place in the brain during cognitive tasks and sensorial stimulation? Among the functional cerebral imaging techniques, the metabolic imaging methods, like SPECT (see Chapter 13), PET (see Chapter 14), and fMRI (see Chapter 15), and the electrical imaging methods, like magneto-encephalography (MEG) and electro-encephalography (EEG), are noteworthy. The former have a high spatial resolution in the order of millimeters, but can observe cerebral phenomena only with a temporal resolution in the order of seconds, or even minutes in the case of SPECT and PET. By contrast, the latter have an excellent temporal resolution in the order of milliseconds, since they directly measure the electrical activity of the brain. Moreover, they are totally non-invasive. In recent years, the instrumentation in MEG and EEG has advanced considerably, and systems with large numbers of detectors, which enable measurement of the magnetic field around the whole head, are today offered commercially. However, the methods for reconstructing the electrical cerebral activity from the fields and potentials measured on the surface of the head face numerous difficulties. A fundamental limitation is that the reconstruction problem is underdetermined and therefore multiple solutions exist. The first methods that were developed, employed when only small numbers of detectors were available, localized a limited number of active regions.

Chapter written by Line GARNERO.
These methods, known by the generic term dipolar methods, are not real tomographic methods, since they do not allow estimation of the distribution of the electrical activity in the brain. With the introduction of systems with larger numbers of detectors and the use of new approaches to regularizing inverse problems, real tomographic methods have been applied, the most powerful of which exploit knowledge about the anatomy of the subject to reach a high spatial resolution.

In this chapter, we outline the principles of electrical cerebral imaging with MEG and EEG and describe their primary applications in basic and clinical research. We provide details on the methods for calculating fields and potentials that are applied to the direct problem and on the methods for source localization and tomographic reconstruction that are applied to the inverse problem.

16.2. Principles of MEG and EEG

16.2.1. Sources of MEG and EEG

The excitation of a neuron via a synapse involves the opening of ion channels in its membrane. Since the ion concentration differs inside and outside of cells, the opening of ion channels results in a movement of charged particles in the intra- and extracellular space. These currents, called postsynaptic currents, are at the origin of EEG and MEG measurements. The source or primary currents cause volume or secondary currents for charge conservation [GLO 85]. In MEG systems, the magnetic field produced by the source and volume currents is recorded. By contrast, the potential differences measured between two electrodes in EEG are essentially due to line currents circulating at the surface of the scalp, and thus due to volume currents. To be detectable on the surface of the head, these currents must result from the synchronous activity of a set of cells comprising about 10⁴ neurons in some cubic millimeters of the cortex. The superposition of the currents emitted by these neurons results in a macroscopic quantity only if it is additive. The activities recorded on the surface arise from cortical regions whose dendritic tree exhibits a columnar architecture. The currents resulting from the activity of a macrocolumn of neurons are modeled by a current dipole, whose direction is the same as the principal orientation of the dendrites, i.e. locally perpendicular to the cortical surface, and whose amplitude represents the integral current density in the considered column. The average amplitude of a dipole created by the synchronous activity of 10⁴ neurons is in the order of 10 nAm. An important characteristic of these currents is their low conduction speed. A postsynaptic potential that produces the source currents lasts for about 10 ms. As the MEG and EEG signals may be sampled at frequencies higher than one kHz, these two
modalities are the only ones that can follow the dynamics of cerebral activity in real time. Before explaining the principles of MEG and EEG imaging, we will make some remarks on the instrumentation for these two modalities.

16.2.2. Evolution of EEG

The first EEG trace was recorded by the German neurophysiologist Hans Berger in 1929 [BER 29]. The principle of the measurement has remained the same to the present, although the technical means have evolved. It involves determining potential differences between electrodes that are placed at the surface of the head. The electrical contact is ensured by use of a conductive gel. The number of electrodes employed may vary considerably, from 20 in the international 10-20 system, which was used for a long time in clinics, to 64, 128, or even 256. The potential difference may be determined either between two close electrodes (in a bipolar system) or between each electrode and a so-called reference electrode. In general, the latter is chosen to be in a region where the potential is low, such as on the ear lobe. Two approaches to the analysis of the potentials acquired on the surface are distinguished: analysis of the spontaneous EEG and analysis of evoked potentials. The first involves studying the different rhythms present in the signals recorded during sleep and while awake. Applications in cognitive neuroscience require observing the activity of the brain that is linked to the occurrence and perception of a stimulus, as well as to the information processing required to accomplish a task, rather than the spontaneous activity of the brain. In 1937, Dawson showed that it is possible to measure specific components, so-called evoked responses, by averaging the traces produced by repeating the same task. An evoked response (called an evoked potential in EEG) often comprises several components: the early ones, which occur between 50 and 150 ms after the stimulus and are associated with the sensorial perception of the stimulus, and the later ones, which occur about 300 ms after the stimulus and are associated with cognitive behavior.

16.2.3. MEG instrumentation

The measurement of the first cerebral magnetic field was achieved in 1972 by David Cohen at MIT [COH 72], thus giving birth to MEG. This late appearance of MEG is due to the weakness of the measured magnetic fields, whose strength is in the order of tens of femtoteslas (10⁻¹⁴ T), i.e. a billion times weaker than the Earth’s magnetic field. The development of MEG was enabled by very sensitive magnetic field
sensors based on superconductors. These sensors use receiver coils that are flux-coupled to a superconducting loop. They are called superconducting quantum interference devices (SQUIDs) and transform the magnetic flux into a voltage. They were invented by James Zimmerman in 1970 [ZIM 70]. The set of sensors is immersed in liquid helium, to ensure the cooling of the superconducting components, and is placed into a cryostat [HAM 93].

Since the magnetic fields of the brain are very weak, they must be isolated from external magnetic fields. This shielding may be achieved by different means. At the level of the sensor, the receiver coil is often a gradiometer, which is built of two or more coils of opposite phase and thus enables measurement of the first- or second-order gradient of the magnetic field in the radial direction or in tangential directions with respect to the surface of the head. Supplementary correction systems measure the external field with sensors at a distance from the head and eliminate it at the MEG sensors by a specific filtering [VRB 91]. However, the most efficient protection is to place the measurement system in a shielded room with mu-metal walls that attenuate external magnetic fields by a factor of 10³ to 10⁴. Under these conditions, the weakest noise measured with the detectors is in the order of 10 fT in a frequency band ranging from 1 Hz to 100 Hz; it is higher at lower frequencies. With the measured cerebral signals being in the order of 100 fT, the signal-to-noise ratio is about 10. The detectors are spaced 2 cm apart. We will see later that source localization may, nevertheless, reach a precision of some millimeters.

Due to the shielding and the low-temperature superconductors, MEG is a relatively expensive imaging technology. Nevertheless, recent years have brought spectacular advances in the instrumentation: while the first commercial systems did not have more than a few detectors (between 1 and 30) and did not cover more than a part of the brain, three manufacturers (BTI – San Diego, CTF – Vancouver, Neuromag – Helsinki) currently sell systems with a helmet equipped with more than 200 SQUID sensors, which cover the entire head. Moreover, these new systems permit simultaneous recording of MEG and EEG. Figure 16.1 shows the MEG/EEG system of the Canadian company CTF, which has 151 magnetic field sensors distributed all around the head and which simultaneously records 64 EEG signals with a sampling frequency of 2.5 kHz. This system has been installed at the Pitié-Salpêtrière hospital in Paris since 1998. Simultaneous MEG and EEG recordings obtained with this system during an auditory task are shown on the right-hand side of this figure, with the MEG and EEG signals plotted at the top and bottom, respectively.
Figure 16.1. Left: MEG/EEG system installed at the Pitié-Salpêtrière hospital (VSM, Vancouver). Right: MEG signals (top) and EEG signals (bottom) recorded simultaneously while listening
16.2.4. Applications

MEG and EEG enable the primary visual, auditory, and somatosensory projection areas to be localized with high precision, and their retinotopic, tonotopic, and somatotopic organization to be shown, i.e. localizing the different affected areas as functions of the properties of the stimulation (position of the target in the field of view, frequency of the sound, or stimulated part of the body). Paradoxically, it is their high temporal resolution that enables a reliable distinction between these areas: it is possible to consider for the localization only the early components of the evoked potential, which specifically concern these areas. This does not hold for metabolic functional imaging, in which the images result from integrating the activity of the whole brain over a temporal window in the order of seconds or minutes. These properties make electrical and magnetic imaging particularly well suited for studying the reorganization of neuronal networks after vascular accidents or surgery. Moreover, in neurosurgery, MEG, accompanied by an anatomical MRI examination, turns out to be a rapid and accurate tool for localizing the basic sensory and cognitive functions and for helping in presurgical decision making as well as in defining access paths for surgery [NAK 96].
MEG and EEG have also been used extensively to analyze the dynamics of activations in higher cognitive functions, such as language, memory, or attention [HAM 93]. The protocols developed for healthy subjects are equally applied to patients experiencing mental difficulties, in order to determine the neurobiological anomalies that accompany these difficulties. The applications mentioned above use the technique of evoked responses, but MEG and EEG are also particularly well suited for recording and analyzing the spontaneous activity of the brain. Rhythmic activities of the brain have been much studied in both awake and sleeping humans, as well as in healthy and ill subjects.

The most important clinical application of MEG and EEG is the study of electrical activity and the localization of excited areas in epileptic patients. Epilepsy is a pathology which involves the emission of sudden, intense, rapid discharges by a more or less extended population of neurons of the cortex. The appearance of these discharges may be accompanied by behavioral disturbances called a seizure. Surgery is often necessary in pharmacoresistant epilepsy. The procedure involves ablating the area of the brain that launches the critical activity, which may then propagate to other areas of the cortex. Currently, the localization of these areas, called foci, demands in numerous cases the implantation of intracranial electrodes to precisely detect the origin of the seizure, since the EEG recorded at the surface has an insufficient spatial resolution [SUT 88]. The development of methods for reconstructing the electrical activity at the moment of the seizure, or of discharges that arise between seizures, from MEG or EEG signals recorded at the surface would enable this step to be avoided.

16.3. Imaging of electrical activity of the brain based on MEG and EEG signals

16.3.1. Difficulties of reconstruction

It is necessary to be able to reconstruct a map of the macroscopic currents emitted by the brain from the maps of the field and the potential recorded on the surface of the head for MEG and EEG to become real imaging modalities like PET or fMRI. However, this problem still requires a lot of research, since numerous difficulties arise, both in the direct and the inverse problem. Solving the direct problem requires the ability to calculate the electrical and magnetic fields due to the neuronal sources, which implies modeling the electrical and geometrical properties of the different tissues encountered inside and outside of the skull. However, these tissues have a complex geometry that is difficult to model, and their conduction properties are not well known. Thus, there is a substantial bias in the transformation from the unknowns to the data. We will see in the following section that the importance of this bias varies, depending on how realistic the model of the structure of the cerebral tissue is.
The primary difficulty in solving the inverse problem is the ambiguity of the solution. From a fundamental point of view, a point source may be replaced in electromagnetism by a set of surface currents on a surface surrounding this point if only the fields outside of this surface are considered. Thus, several source configurations may yield the same measurements at the detectors. Moreover, even with the most recent MEG/EEG systems, the amount of data collected at any given moment in time is small compared to the amount of measured projection data in PET or the number of acquired samples in MRI. This number currently does not exceed some hundreds, which is insufficient to reconstruct the electrical activity in the whole cerebral volume. We will see in section 16.3.3 that constraints must be introduced in the inverse problem, either by reducing the number of variables or by regularizing the inverse operator.

16.3.2. Direct problem and different field calculation methods

Equations

Cerebral activity is modeled as a distribution of current densities, called source currents j_s(r). These sources create an electric field E, which induces currents called conduction currents j_c(r), given by Ohm’s law:

$j_c(r) = [\sigma]\, E(r)$   [16.1]
where [σ] is the conductivity tensor at the point r. In MEG and EEG, the measured currents have low frequency (below 100 Hz). It can be shown that the time-dependent terms in the Maxwell equations are then negligible compared to the static terms, and hence that the quasi-static approximation is justified [HAM 93]. Under these conditions, the Maxwell equations for the curl of E and B read:

$\nabla \times E = 0, \qquad \nabla \times B = \mu_0\, j \qquad \Longrightarrow \qquad E = -\nabla V, \qquad \nabla \cdot j = 0$   [16.2]
where V is the electrical potential and j is the total current density, i.e. the sum of the source currents j_s and the conduction currents j_c:

$j = j_s + j_c = j_s - [\sigma]\, \nabla V$   [16.3]
By replacing j in equation [16.2] by equation [16.3], the differential equation that links the electrical potential V to the source currents j_s is obtained:

$\nabla \cdot j_s = \nabla \cdot \left([\sigma]\, \nabla V\right)$   [16.4]
The magnetic field is given by the Biot–Savart law:
$B(r) = \frac{\mu_0}{4\pi} \int j(r') \times \frac{R}{R^3}\, d^3r' = \frac{\mu_0}{4\pi} \left\{ \int j_s(r') \times \frac{R}{R^3}\, d^3r' - \int \left([\sigma]\, \nabla V\right) \times \frac{R}{R^3}\, d^3r' \right\}$   [16.5]
where R = r − r′. A particular case often considered is a set of nested volumes D_i, separated by surfaces S_ij, each with a homogeneous scalar conductivity σ_i. The potential at any point r may in this case be expressed in the form [GES 67]:
$\sigma(r)\, V(r) = \sigma_0\, V_0(r) - \frac{1}{4\pi} \sum_{ij} (\sigma_i - \sigma_j) \int_{S_{ij}} V(r')\, \frac{R}{R^3} \cdot dS_{ij}$   [16.6]
where dS_ij is the differential vector perpendicular to the surface S_ij, and V_0(r) is the potential created by the sources in an infinite, homogeneous medium with a conductivity σ_0 equal to that of the medium containing the sources:
$V_0(r) = \frac{1}{4\pi\sigma_0} \int j_s(r') \cdot \frac{R}{R^3}\, d^3r'$
If the point r approaches one of the interfaces Sij, equation [16.6] becomes [SAR 87]:
$\frac{\sigma_i + \sigma_j}{2}\, V(r) = \sigma_0\, V_0(r) - \frac{1}{4\pi} \sum_{ij} (\sigma_i - \sigma_j) \int_{S_{ij}} V(r')\, \frac{R}{R^3} \cdot dS_{ij}$   [16.7]
This equation links the potential at any point of an interface to the potential on all surfaces, including the one containing this point. From equation [16.5], a similar expression may be derived for the magnetic field [SAR 87]:

$B(r) = B_0(r) + \frac{\mu_0}{4\pi} \sum_{ij} (\sigma_i - \sigma_j) \int_{S_{ij}} V(r')\, \frac{R}{R^3} \times dS_{ij}$   [16.8]

where B_0 is given by the Biot–Savart law when only the source currents are considered; B_0 equals the first term of the last part of equation [16.5].
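For a point current dipole Q at r_0 (the equivalent current dipole introduced in section 16.3.3.1 below), the infinite-medium quantities above take simple closed forms: V_0 follows from the expression just given, and B_0 from the first term of equation [16.5]. A minimal sketch, with an assumed brain conductivity and a source amplitude of the order quoted in section 16.2.1:

```python
import numpy as np

MU0 = 4e-7 * np.pi    # vacuum permeability, T.m/A
SIGMA0 = 0.33         # assumed conductivity of the source medium, S/m

def v0_b0(r, r0, q):
    """Infinite homogeneous medium: potential V_0 and primary field B_0."""
    R = r - r0
    R3 = np.linalg.norm(R) ** 3
    v0 = np.dot(q, R) / (4 * np.pi * SIGMA0 * R3)    # V_0(r)
    b0 = MU0 * np.cross(q, R) / (4 * np.pi * R3)     # B_0(r)
    return v0, b0

q = np.array([0.0, 0.0, 10e-9])    # 10 nAm dipole (section 16.2.1)
r0 = np.array([0.0, 0.0, 0.06])    # source position, m
r = np.array([0.0, 0.05, 0.10])    # observation point near the scalp, m
v0, b0 = v0_b0(r, r0, q)
print(v0, np.linalg.norm(b0))      # B_0 comes out in the 100 fT range
```

That the computed field magnitude falls in the 100 fT range is consistent with the signal levels quoted for MEG in section 16.2.3.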
Assuming for the sake of simplicity that the surfaces Sij are concentric spheres and considering that only the radial component of the field is measured, the last term is
zero, because r is collinear with the radial direction of the field. This component may thus be calculated considering the source currents only. Sarvas showed that the other field components may also be calculated analytically and that they are independent of the different conductivities of the cerebral tissues [SAR 87]. Moreover, he showed that radial sources yield no magnetic field at the detectors. Even in the case of realistic geometries, the magnetic field is only slightly perturbed by the variations in conductivity between the different media, in contrast to the electric potential, whose lines are strongly deformed in bone, which is a far less conducting medium than skin or cerebral tissue. De Munck showed that, in the case of a model assuming spherically concentric layers of homogeneous conductivity, the potential at the interfaces may be expressed in the form of a Legendre polynomial expansion [DEM 84].

Different head models

To solve the equations above, it is necessary to know the conductivity of each cerebral tissue. However, it is impossible to measure this parameter in vivo. A large range of values is found in the literature for these conductivities, which were mostly measured in vitro or in vivo in anesthetized animals. An overview of these publications is given in [HAU 97]. Despite the heterogeneity of these values, two essential properties emerge: the skull has a far lower conductivity than all other tissues, with a ratio of about 1:80 between this bone and the skin and the cortex, which are approximately equal; and the cerebrospinal fluid, which surrounds the brain and fills the ventricles at its center, has a threefold higher conductivity than the brain.

In a first approximation, it is common for the direct problem to consider three media in head models: the skin, the skull, and the intracerebral volume, which is assumed to have a homogeneous conductivity. The so-called spherical model with three layers, shown in Figure 16.2a, considers these media to be of spherical form. It is very useful, because it does not require knowledge of the subject’s anatomy, and the fields and potentials may easily and rapidly be calculated analytically. A major advance in MEG/EEG was the introduction of realistic models of the interfaces between the different media, each still assumed to be of homogeneous conductivity. These models are individually tailored to each subject using anatomical MRI images, to accommodate the great variability of the human brain’s form. Segmentation techniques are employed to isolate the different tissues and to reconstruct their contours. The different surfaces are then triangulated (see Figure 16.2b) [MAN 95]. The potential V may in this case be decomposed into triangular basis functions. By writing down equation [16.7] for each vertex of the mesh and by inverting the linear system thus obtained, whose right-hand side involves V_0, the potential may be calculated at all points of the mesh, and consequently at the electrodes by interpolation between the closest vertices. The magnetic field may be
derived from the values of the potential at the vertices of the surface by using equation [16.8]. This method employs the technique of boundary element integration [HAM 89].
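Structurally, this boundary element step turns equation [16.7], written at every vertex of the triangulated interfaces, into a dense linear system in the vertex potentials. The sketch below only shows that solve; the matrix C of conductivity-weighted solid-angle integrals is a random placeholder standing in for a geometry computation done elsewhere, and a practical solver must also deflate the system, since the potential is defined only up to a constant:

```python
import numpy as np

n = 300                                # total number of mesh vertices
rng = np.random.default_rng(2)
C = 0.1 * rng.normal(size=(n, n)) / n  # placeholder for the BEM matrix
a = 0.5 * np.ones(n)                   # (sigma_i + sigma_j)/2 at each vertex
b = rng.normal(size=n)                 # right-hand side, sigma_0 * V_0

# Discretized equation [16.7] at each vertex:  a * v + C @ v = b
v = np.linalg.solve(np.diag(a) + C, b)
```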
Figure 16.2. Examples of head models. (a) Spherical model with three layers. (b) Realistic surface model with three layers of homogeneous conductivity. (c) Realistic volume model
To take into account the presence of a larger number of tissues, and thus heterogeneities and anisotropies of the conductivity, it is necessary to use a volume mesh with cubes or tetrahedra as elements to model the different media of the head. Figure 16.2c shows an example of a mesh with tetrahedra as elements representing the skin, the bone, and the brain. The calculation methods used are finite element [MAR 98] or finite difference [LEM 96] methods, which directly solve equation [16.4]. A limitation of these methods is their computation time, in view of the large number of mesh elements needed to precisely describe the different media.

16.3.3. Inverse problem

As stated above, reconstruction of the electrical cerebral activity is an ill-posed inverse problem, which has more than one solution. Several approaches to its solution are described in the literature. The first appeared in the 1970s for EEG, when only a few electrodes were fixed to the scalp (about 20), and relied on a parametric estimation. More recently, with increasing numbers of detectors in MEG and EEG, tomographic methods have been developed, in which the amplitude of the neuronal currents is reconstructed at all points of a region of interest defined inside the skull. These methods are described in more detail below. Hybrid methods also exist, which involve scanning the cerebral volume and estimating at each point the probability of the presence of a source current. The best known of these approaches is a method based on the MUSIC algorithm [MOS 92].
16.3.3.1. Parametric or “dipolar” methods

Dipolar methods, related to source localization methods, are currently still the most common. They consider electrical cerebral activity to be concentrated in a small number of areas whose dimensions are small compared to their distance to the detectors. The current density, integrated over each of these areas, may thus be approximated by that of a single dipole Q positioned at a point r_0:

$j_s(r) = Q\, \delta(r - r_0), \qquad Q = \int j_s(r)\, dv$
Q is called the equivalent current dipole and is entirely determined by its position r_0 and its three components Q_x, Q_y, and Q_z, which define the orientation and the amplitude of the source. The values of the magnetic field or the potential (combined in a vector M of m measurements) created by this dipole may be written in the generic form:

$M = H(r_0)\, Q$   [16.9]
where H is a matrix, the so-called gain matrix. The columns of H represent the fields or potentials produced by the components of a dipole of unit amplitude and are calculated by different methods, the choice of which depends on the head model being considered. The elements of this matrix vary non-linearly with r_0; but, given the superposition theorem, for a fixed r_0 the field values vary linearly with the amplitudes of the current sources. In general, these approaches assume that there are few activated regions (less than about 10) and estimate the parameters of a predetermined number n of dipoles by minimizing the squared error between the measurements and the resulting fields:

$J(r, Q) = \left\| M_{mes} - H(r)\, Q \right\|^2$   [16.10]
where r and Q comprise the coordinates of the positions and the components of the n dipoles, respectively, and H is an m × 3n matrix. The minimization techniques commonly used are the Levenberg–Marquardt [MAR 63] and the simplex [NEL 65] methods. It is possible to separate the minimization over Q from that over r by noting that Q may be obtained by inverting equation [16.9] and substituting it as a function of M in equation [16.10]:

$Q = H^{+} M, \qquad J = \left\| M - H(r)\, H^{+}(r)\, M \right\|^2 = \left\| \left[ I - H(r)\, H^{+}(r) \right] M \right\|^2$   [16.11]
where H⁺ is the pseudo-inverse of H and I is the identity matrix. A first non-linear minimization is performed to determine r; then Q is estimated by a linear inversion. Methods in which the positions of the dipoles are estimated at each instant (M is then the vector of the measurements at this instant) are distinguished from methods in which the positions of the dipoles are considered fixed during a given temporal window (M in equation [16.11] is then the spatiotemporal matrix of the measurements in this window) [SCH 86].

Although cerebral activity often has a complex structure, which cannot always be described by dipolar models of the sources, these models are nevertheless well suited to the early components of the evoked responses, and the obtained solutions are in general compatible with knowledge of the anatomy and the physiology of the sensory and motor functions. Under these conditions, the precision of the localization in a spherical model was found to be in the order of 1 to 3 cm in the case of EEG and in the order of some millimeters in MEG. In general, the dipoles are projected onto the anatomical images after registering their coordinates with those of the MRI images. Figure 16.3 shows an example of such a projection of a dipole. The position of the source is indicated by the white points, while the white lines indicate the direction of the emitted current. The experiment involved the electrical stimulation of four fingers of the right hand, and each dipole represents the projection of one finger onto the somatosensory cortex.
Figure 16.3. Projection of the results of dipole localization onto the anatomical volume in an experiment of finger stimulation. Each dipole is associated with the stimulation of one finger of one hand (LENA UPR 640, MEG/EEG Center, Neurological Reeducation Center of the Pitié-Salpêtrière Hospital, Neurological Center of the Saint-Antoine Hospital)
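The two-level minimization of equation [16.11] can be sketched as follows: an outer non-linear search over the dipole position (here the simplex method mentioned above), with the moment Q eliminated at each step through the pseudo-inverse of the gain matrix. The toy gain function stands in for the H(r) of whatever head model is used; the sensor layout and source below are synthetic:

```python
import numpy as np
from scipy.optimize import minimize

def gain(r, sensors):
    """Toy m x 3 gain matrix: z-component of the infinite-medium field of
    unit dipoles Qx, Qy, Qz placed at r (stand-in for a real head model)."""
    R = sensors - r                                      # (m, 3)
    R3 = np.linalg.norm(R, axis=1, keepdims=True) ** 3
    cols = [np.cross(e, R)[:, 2:3] / R3 for e in np.eye(3)]
    return np.hstack(cols)

def cost(r, M, sensors):
    H = gain(r, sensors)
    res = M - H @ (np.linalg.pinv(H) @ M)                # equation [16.11]
    return np.sum(res ** 2)

rng = np.random.default_rng(3)
sensors = rng.normal(0, 0.1, (151, 3)) + [0.0, 0.0, 0.12]
r_true = np.array([0.01, 0.02, 0.05])
M = gain(r_true, sensors) @ np.array([0.0, 10e-9, 0.0])  # synthetic data

fit = minimize(cost, x0=[0, 0, 0.04], args=(M, sensors), method="Nelder-Mead")
Q_hat = np.linalg.pinv(gain(fit.x, sensors)) @ M         # linear step for Q
```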
16.3.3.2. Tomographic or “distributed” methods

The principle of these methods is to consider a large number of sources N with fixed positions, whose amplitudes in the three directions in space, i.e. the components of the vector Q, are to be determined. The reconstruction involves inverting equation [16.9], but this time the vector r_0 of the positions of all sources is fixed and Q comprises 3N elements, representing the components of the N dipoles. Thus, a linear system of equations has to be solved, in which the number of unknowns 3N is in general far higher than the number of measurements m. These approaches use the so-called distributed source model, and the tomographic problem is in this way solved by a discrete approach. The techniques differ in the employed cerebral space, or source space, and in the employed regularization methods, which are identical to those described in Chapter 4.

In the first publications on the distributed models [HAM 94], the source space was restricted to a plane parallel to the detector plane, and so-called minimum norm constraints were applied by calculating the pseudo-inverse of H based on a singular value decomposition. An extension to a 3D source model was proposed with the LORETA (Low Resolution Electrical Tomography) method [PAS 94]: source trihedra are distributed over a volume grid, and the obtained solution minimizes the Laplacian of the distribution of amplitudes of the sources. With these methods, which commonly employ quadratic regularizations, the obtained solutions are very smooth and show activity in areas that may not be compatible with physiological knowledge, such as areas in white matter or the ventricles.

A major advance was the introduction of cortical tomographic methods [DAL 93]. As shown in section 16.2.1, the MEG and EEG sources are produced by macrocolumns of cortical neurons that are arranged perpendicularly to the cortical surface. Thus, a very pertinent anatomical constraint is to restrict the source space to this surface, after its segmentation in the anatomical MRI images. Under these conditions, the orientation of the sources may also be forced to be perpendicular to the cortex, which further reduces the ambiguities of the inverse problem. The unknowns are then the amplitudes of the sources in this direction, and Q comprises only N components if N cortical sources are considered. Even with these restrictions on the source space, the problem remains underdetermined (tens of thousands of points are needed to sample the cortex with a resolution of several millimeters). Different regularization methods have been employed for electrical cortical tomography, ranging from quadratic constraints to non-quadratic a priori constraints, such as L1-norm [MAT 97] or Markovian a priori [PHI 97] constraints. In all these methods, the employed a priori is global, i.e. it is the same at all points of the cortical surface. However, according to physiological knowledge of the activation, the neuronal sources do not have the same properties in different cortical structures. In particular, the activations are not
correlated between various cortical areas, such as the sulci or the gyri, even if these areas are close. Baillet introduced a new method called ST-MAP, which uses a Bayesian formalism combined with Markov fields; it permits local constraints that vary over the cortex [BAI 97]. This formalism is described in more detail in Chapter 4. A significant constraint is the ability to favor discontinuities between two sulci of the cortical surface or between a sulcus and a gyrus. The inverse problem is in this case written in the form of a minimization of a cost function J, which comprises a least squares term identical to the one in equation [16.10] and an a priori term that uses non-quadratic functions of the gradient of the amplitudes of the sources:
$J = \left\| M - H Q \right\|^2 + \lambda \sum_{k=1}^{N} \Phi(\nabla Q_k)$   [16.12]
where λ is a regularization parameter and ∇Q is the gradient of the vector Q in a given neighborhood. The neighborhood here includes the six closest neighbors in Euclidean space. The function employed for Φ is the Lorentzian function:
$\Phi_\nu(u) = \frac{u^2}{1 + \left( u / K_\nu \right)^2}$
where K_ν is a coefficient that plays the role of a local detection threshold for discontinuities in the intensity of the distribution of sources. In this way, for small values of the gradients, the local cost is quadratic, while for larger values of the gradients, the associated cost saturates at about K_ν² and thus preserves the creation of discontinuities. These thresholds K_ν are adjusted depending on the orientation of a source with respect to its ν-th neighbor and on their Euclidean distance. Another contribution of this method was to add a temporal constraint term to equation [16.12]. This term introduces temporal smoothness by forcing the amplitudes at an instant t to be close to those reconstructed at the previously sampled time point. The energy minimization is done using a semi-quadratic method [BAI 97]. Figure 16.4 shows a reconstruction result obtained from EEG data recorded during intercritical epileptic activity. The electrical activity is represented in white or light gray, superimposed on the cortical surface extracted from MRI images, represented in darker gray.
Figure 16.4. Reconstruction of the electrical cortical activity during an intercritical event in an epileptic patient, obtained with the ST-MAP method. The segmentation of the cortical surface was based on MRI images and was performed by Jean François Mangin of the CEA/SHFJ
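A semi-quadratic minimization of a cost of the type of equation [16.12] can be sketched by iteratively reweighted least squares: each iteration solves a quadratic problem whose weights, derived from the Lorentzian potential, decrease where the current solution has strong gradients, so that discontinuities stop being penalized. The gain matrix, data, and 1D neighborhood graph below are synthetic stand-ins; the actual ST-MAP method works on the six-neighbor cortical graph and adds the temporal constraint described above:

```python
import numpy as np

rng = np.random.default_rng(4)
m, n = 60, 200                          # measurements, cortical sources
H = rng.normal(size=(m, n))
q_true = np.zeros(n)
q_true[80:100] = 1.0                    # one active cortical patch
M = H @ q_true + 0.05 * rng.normal(size=m)

D = (np.eye(n) - np.roll(np.eye(n), 1, axis=1))[:-1]  # 1D gradient operator
lam, K = 1.0, 0.5                       # lambda and threshold K_nu
Q = np.zeros(n)
for _ in range(30):
    g = D @ Q
    w = 1.0 / (1.0 + (g / K) ** 2) ** 2  # half-quadratic weights phi'(u)/(2u)
    A = H.T @ H + lam * D.T @ (w[:, None] * D)
    Q = np.linalg.solve(A, H.T @ M)      # quadratic subproblem
```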
16.4. Conclusion
The unique ability of MEG and EEG to record cerebral activity with a temporal resolution on the order of milliseconds makes these techniques indispensable for understanding the processing steps performed by the brain during a cognitive task and the dynamics of cerebral activity. Numerous applications in cognitive science and clinical research have already exploited these properties. However, the localization of active areas from MEG or EEG signals alone does not have a unique solution. Nevertheless, inverse methods have appeared in recent years that incorporate more and more anatomical and functional constraints to limit these ambiguities. A reliable reconstruction of electrical activity is only possible if all the temporal information contained in the signals is fully exploited and if constraints on the temporal variation of the sources are imposed. Moreover, the development of methods that combine the data provided by imaging techniques with high spatial resolution, such as fMRI, with the data provided by MEG and EEG at high temporal resolution should eventually achieve the spatial and temporal precision required for understanding how the brain functions.
16.5. Bibliography

[BAI 97] BAILLET S., GARNERO L., “A Bayesian approach to introducing anatomo-functional priors in the EEG/MEG inverse problem”, IEEE Trans. Biomed. Eng., vol. 44, pp. 374–385, 1997.

[BER 29] BERGER H., “Über das Elektroenkephalogramm des Menschen”, Archiv für Psychiatrie und Nervenkrankheiten, vol. 87, pp. 527–570, 1929.

[COH 72] COHEN D., “Magnetoencephalography: evidence of magnetic fields produced by alpha rhythm currents”, Science, vol. 161, pp. 664–666, 1972.

[DAL 93] DALE A., SERENO M., “Improved localization of cortical activity by combining EEG and MEG with MRI surface reconstruction: a linear approach”, J. Cogn. Neurosci., vol. 5, pp. 162–176, 1993.

[DEM 84] DE MUNCK J. C., “The potential distribution in a layered spheroidal volume conductor”, J. Appl. Phys., vol. 64, pp. 464–470, 1988.

[GES 67] GESELOWITZ D. B., “On bioelectric potentials in an inhomogeneous volume conductor”, Biophys. J., vol. 7, pp. 1–11, 1967.

[GLO 85] GLOOR P., “Neuronal generators and the problem of localization in electroencephalography: application of volume conductor theory to electroencephalography”, J. Clin. Neurophysiol., vol. 2, pp. 327–354, 1985.

[HAM 89] HÄMÄLÄINEN M. S., SARVAS J., “Realistic conductivity geometry model of the human head for interpretation of neuromagnetic data”, IEEE Trans. Biomed. Eng., vol. 36, pp. 165–171, 1989.

[HAM 93] HÄMÄLÄINEN M. S., HARI R., ILMONIEMI R. J. et al., “Magnetoencephalography: theory, instrumentation, and applications to noninvasive studies of the working human brain”, Rev. Mod. Phys., vol. 65, pp. 413–497, 1993.

[HAM 94] HÄMÄLÄINEN M. S., ILMONIEMI R. J., “Interpreting magnetic fields of the brain: minimum norm estimates”, Med. Biol. Eng. Comp., vol. 32, pp. 35–42, 1994.

[HAU 97] HAUEISEN J., RAMON C., EISELT M., BRAUER H., NOWAK H., “Influence of tissue resistivities on neuromagnetic fields and electric potentials studied with a finite element model of the head”, IEEE Trans. Biomed. Eng., vol. 44, pp. 727–735, 1997.

[LEM 96] LEMIEUX L., MCBRIDE A., HAND J. W., “Calculation of electric potentials on the surface of a realistic head model by finite differences”, Phys. Med. Biol., vol. 41, pp. 1079–1091, 1996.

[MAN 95] MANGIN J. F., FROUIN V., BLOCH I., RÉGIS J., LOPEZ-KRAHE J., “From 3D magnetic resonance images to structural representations of the cortex topography using topology preserving deformations”, J. Mathematical Imaging and Vision, vol. 5, pp. 297–318, 1995.

[MAR 98] MARIN G., GUERIN C., BAILLET S., MEUNIER G., GARNERO L., “Influence of skull anisotropy for the forward and inverse problem in EEG: simulation studies using FEM on realistic head models”, Human Brain Mapp., vol. 6, pp. 250–269, 1998.
[MAR 63] MARQUARDT D. W., “An algorithm for least squares estimation of nonlinear parameters”, J. Soc. Ind. Appl. Math., vol. 11, pp. 431–444, 1963.

[MAT 97] MATSUURA K., OKABE Y., “A robust reconstruction of sparse biomagnetic sources”, IEEE Trans. Biomed. Eng., vol. 44, pp. 720–726, 1997.

[MOS 92] MOSHER J., LEWIS P., LEAHY R., “Multiple dipole modeling and localization from spatio-temporal MEG data”, IEEE Trans. Biomed. Eng., vol. 39, pp. 541–557, 1992.

[NAK 96] NAKASATO N., SEKI K., KAWAMURA T. et al., “Cortical mapping using an MRI-linked whole head MEG system and presurgical decision making”, Electroenceph. Clin. Neurophysiol. Suppl., vol. 47, pp. 333–341, 1996.

[NEL 65] NELDER J. A., MEAD R. A., “A simplex method for function minimization”, Computer Journal, vol. 7, pp. 308–313, 1965.

[PAS 94] PASCUAL-MARQUI R. D., MICHEL C. M., LEHMANN D., “Low resolution electromagnetic tomography: a new method for localizing electrical activity of the brain”, Int. J. Psychophysiology, vol. 18, pp. 49–65, 1994.

[PHI 97] PHILIPS J. W., LEAHY R. M., MOSHER J. C., “MEG-based imaging of focal neuronal current sources”, IEEE Trans. Med. Imag., vol. 16, pp. 338–348, 1997.

[SAR 87] SARVAS J., “Basic mathematical and electromagnetic concepts of the biomagnetic inverse problem”, Phys. Med. Biol., vol. 32, pp. 11–22, 1987.

[SCH 86] SCHERG M., VON CRAMON D., “Evoked dipole source potentials of the human auditory cortex”, Electroenceph. Clin. Neurophysiol., vol. 65, pp. 344–360, 1986.

[SUT 88] SUTHERLING W. W., CRANDALL P. H., DARCEY T. M. et al., “The magnetic and electrical fields agree with intracranial localizations of somatosensory cortex”, Neurology, vol. 38, pp. 1705–1714, 1988.

[VRB 91] VRBA J., HAID G., LEE S. et al., “Biomagnetometers for unshielded and well shielded environments”, Clin. Phys. Physiol. Meas., vol. 12, suppl. B, pp. 81–86, 1991.

[ZIM 70] ZIMMERMAN J. E., THIENE P., HARDING J. T., “Design and operation of stable RF-biased superconducting point-contact quantum devices and a note on the properties of perfectly clean metal contacts”, J. Appl. Phys., vol. 41, pp. 1572–1580, 1970.
List of Authors
Jean-Louis AMANS
LETI, MINATEC
Commissariat à l’énergie atomique
Grenoble
France

Habib BENALI
INSERM
CHU Pitié-Salpêtrière
Paris
France

André BRIGUET
RMN
Claude Bernard University
Lyon
France

Irène BUVAT
CNRS
University of Paris 7 – University of Paris 11
Orsay
France

Anne-Marie CHARVET
Joseph Fourier University
European Synchrotron Radiation Facility
Grenoble
France
Jacques DARCOURT
Biophysique et traitement de l’image
Equipe TIRO
CEA (DSV/IBEB/SBTN/TIRO)/UNS/CAL
University of Nice-Sophia Antipolis
France

Michel DÉCORPS
RMN bioclinique
INSERM
Grenoble
France

Michel DEFRISE
Division of nuclear medicine
Hôpital universitaire AZ-VUB
Brussels
Belgium

Christian DEPEURSINGE
Institut d’optique appliquée
Ecole Polytechnique Fédérale de Lausanne
Switzerland

Laurent DESBAT
TIMC
Institut d’informatique et de mathématiques appliquées de Grenoble
Joseph Fourier University
Grenoble
France

Philippe DUVAUCHELLE
CNDRI
Institut national des sciences appliquées de Lyon
France

Gilbert FERRETTI
Clinique universitaire de radiologie et imagerie médicale
CHU de Grenoble
France
Philippe FRANKEN
Centre Antoine-Lacassagne
Nice
France

Line GARNERO
Neurosciences cognitives et imagerie cérébrale
CNRS
Paris
France

Pierre GRANGEAT
LETI, MINATEC
Commissariat à l’énergie atomique
Grenoble
France

Michael GRASS
Philips Technologie GmbH Forschungslaboratorien
Hamburg
Germany

Régis GUILLEMAUD
LETI, MINATEC
Commissariat à l’énergie atomique
Grenoble
France

Samuel LEGOUPIL
LIST
Commissariat à l’énergie atomique
Saclay
France

Jean-Michel LETANG
CNDRI
Institut national des sciences appliquées de Lyon
France

Catherine MENNESSIER
Ecole supérieure de chimie physique électronique de Lyon
France
Ghislain PASCAL
Commissariat à l’énergie atomique
Valduc
France

Gilles PEIX
CNDRI
Institut national des sciences appliquées de Lyon
France

Françoise PEYRIN
Institut national des sciences appliquées de Lyon
European Synchrotron Radiation Facility
Grenoble
France

Volker RASCHE
Department of Internal Medicine II
University of Ulm
Germany

Didier REVEL
Creatis-LRMN
Hôpital cardiologique Louis Pradel
Lyon
France

Christoph SEGEBARTH
Grenoble institut des neurosciences
INSERM
Grenoble
France

Catherine SOUCHIER
DyOGen
INSERM
Grenoble
France
Régine TRÉBOSSEN
CEA-SHFJ
Orsay
France

Yves USSON
TIMC
Institut d’informatique et de mathématiques appliquées de Grenoble
France
Index
A a priori, 93, 100, 101, 103, 105–110, 112 abdominal applications, 272 absorption, 132, 143 coefficient, 143 cross-section, 144 accuracy, 4 acoustic waves, 2 acquisition blank, 364 complete, 10 configurations, 220 geometry, 24 incomplete, 10 line, 32 time, 7 active system, 3 adaptive filter, 131 affine deformations, 57 motion, 57 algebraic methods, 89, 105, 111, 112 reconstruction, 338 algorithms, 133
analytical methods, 11, 23, 24 reconstruction, 338 anatomical imaging, 15, 329 Anger, 14 angiography, 203, 204, 321, 322 angular sampling, 359 anisotropy factor, 148 annihilation, 362, 375 apodization window, 28 ART, 97, 98, 111, 112 arterial spin labeling, 379 artifacts, 223, 271 ARTUR, 106 astronomy, 16 attenuated Radon transform, 31, 77 attenuation, 201–203, 210, 211, 340, 362 coefficient, 2, 3 autofluorescence, 146 autofluorescent protein, 129 axial resolution, 123, 126, 134
B backpropagation, 12 backscattering, 3 ballistic, 187
bandlimited functions, 65 barium fluoride, 357 barium, 202, 204 basal ganglia, 372 beam hardening, 201 Beer–Lambert law, 218 BGO, 357, 374 Biot–Savart law, 400, 401 blank, 364 block of projections, 58 paradigms, 384 blood, 380 flow, 353 oxygenation, 378, 380 BOLD, 380–383 Boltzmann equation, 10 Boltzmann transport equation, 159 bone microarchitecture, 209 Born approximation, 169 breast cancer, 164 bronchography, 204 build-up factor, 247 BURST, 317
C CAD models, 226 calibration, 291 cancerous tissue, 210 capillaries, 381 cardiac applications, 277 C-arm, 289 CCD camera, 207 CdTe, 332 CEA, 14 cerebral blood flow, 378 blood volume, 378 magnetic field, 395 perfusion, 380 cerebrovascular, 378 chromophore, 144
CLSM, 122, 126 cognitive functions, 398 coherence, 210 length, 177 coherent detection, 170, 178 propagation, 151 radiation, 2 coincidence, 355, 362 detection, 355 random, 355 collimator, 332, 334, 352 colocalization, 129 Colsher filter, 39, 45 completeness condition, 296 Compton, 202, 211, 212 scattering, 355 computed tomography, 13, 259, 289 angiography, 276 computer vision, 12 concentration, 3 conditioned, 89, 91, 100, 108, 111, 113 conductivity tensor, 399 cone-beam, 294, 295, 333 geometry, 24, 46, 48, 49, 50, 52, 53 transform, 84 confocal laser scanning microscopy, 122 confocal microscopy, 122, 124 conjugate gradient, 338 constraint, 93, 96, 97, 98, 100, 107 continuous method, 11 rotation, 264 contrast, 3, 4, 8, 307, 317–324, 326 agents, 4, 202, 203, 205, 206 mechanisms, 379 conventional microscopy, 124 convolution theorem, 27 Cormack, 14 cortex, 164
cortical tomographic methods, 405 cost, 8 cross sections, 264 cyclotron, 352 cytochromes, 145
D data redundancy, 39 deconvolution, 133 defocalization, 125 density, 3 deoxyglucose, 353 deoxyhemoglobin, 380 depth of field, 126 derivation of measured projections, 53 derivative transform, 28 design, 16 detection plane, 46, 49 detector blocks, 357 detectors, 205–208, 210, 220, 222 deterministic method, 11 dexels, 338 diagnosis, 16 diagnostic imaging, 260 diamagnetic, 380 dielectric constant, 153 medium, 152 diffraction, 240 tomography, 166 diffuse media, 150 diffusion, 322, 324, 325 coefficient, 3, 163 constant, 381 equation, 163 dipolar methods, 394
direct interrogation, 3 model, 182 problem, 1, 10, 23, 165, 394, 398, 399, 401 reconstruction, 50 sinograms, 43 discontinuity, 103 discrete, 89, 90, 93, 111, 114 methods, 11, 24 displacement vector field, 57 distributed source model, 405 distribution, 99, 100, 103, 105, 107, 109, 110, 112, 114 divergence, 197, 207 DNA, 130 [18F]L-DOPA, 373 dopamine, 372, 375 dopaminergic system, 352 dose, 197, 201, 202, 206, 211, 271 dual-energy, 4, 202, 203, 211 computed tomography, 280 tomography, 230 dynamic imaging system, 7 acquisition, 366 imaging, 9, 12 spatial reconstructor, 15 X-ray transform, 55
E echo time, 381 echo-planar, 315, 316 imaging, 379 Edwards, 14 effective attenuation coefficient, 163 efficient sampling, 68, 73 elastic scattering, 146 elastodynamic equation, 10
electrical cerebral activity, 393, 394, 402, 403 electrical cerebral imaging, 394 electrodes, 394 electro-encephalography, 393 electromagnetic waves, 2 electron beam, 197, 200 electronic scanning, 8 electrons, 198, 199, 200 emission tomography, 90, 91, 95, 99, 101, 102, 105, 114, 116 epilepsy, 398 equivalent current dipole, 403 ESRF, 198–200, 203, 204, 207– 210, 212, 214 essential support, 67, 71 estimation, 12 event-related paradigms, 384 evoked potentials, 395, 397 evolution model, 57 Ewald sphere, 173 expert’s report, 225 eye, 180
F
fan-beam, 261, 333 geometry, 24, 32–36 FDG, 368, 375 Feldkamp algorithm, 50, 51, 295 filtered backprojection, 28, 33, 36, 39, 45, 49, 267, 295, 316, 365 filtering, 41, 42, 47, 48, 51 finite differences, 402 elements, 402 FLASH, 317 flat detector, 288 flavin, 145 fluence, 161 fluorescence, 3, 121, 123, 145, 211 fluorochrome, 123, 128 fluorogram, 129
fluorophores, 145 fMRI, 377, 378, 382–385, 387 form factor, 155 Fourier coefficients, 64 paradigms, 387 rebinning, 44 series, 64 slice theorem, 27, 37, 41, 187 space, 314 transform, 7, 11, 24, 41, 63 fractal power, 149 frame mode, 332 Fredholm transform, 7, 11 frequency domain, 188 frequency–distance principle, 31, 44 FRET, 129 Φ-function, 406 functional cerebral imaging, 377, 393 imaging, 13, 164, 318, 320, 329, 351 fundamental set, 63
G, H, I
gadolinium, 202, 203, 213 gain matrix, 403 gamma cameras, 331, 361 gamma rays, 3, 241 gated reconstruction, 300 SPECT, 344 generalized inverse, 89, 92, 96 germanium, 205 GFP, 129, 130 glucose, 145 metabolism, 352, 367 gradient, 93–95, 98, 102, 108, 113, 115 gradient-echo, 314, 315, 318, 319, 381–383 gradiometer, 396
Grangeat formula, 48 great circle, 39 Green fluorescent protein, 129 Green function, 152 gyromagnetic ratio, 308, 309, 320 Hadamard, 24 Hamming window, 29 HeLa, 130, 134 helical acquisition, 261, 264, 267 tomography, 265 trajectory, 35, 49, 51, 52 heme, 144 hemodynamic response, 385, 386 hemoglobin, 144 Henyey–Greenstein expression, 148 high attenuation, 225 resolution, 228 highly diffuse media, 181 Hilbert filtering, 53 transform, 28, 30, 48 holotomography, 210 Hounsfield units, 260 Hounsfield, 14 hydrodynamic, 239 hyperpolarization, 308, 320 ill-posed inverse problem, 10, 24 image reconstruction, 23, 358 image-based guidance, 16 imaging sequences, 307 immunolabeling, 128 impedance, 240 in situ hybridization, 128 incoherent propagation, 157 indirect interrogation, 3 measurements, 1 reconstruction, 52 industry, 16 inelastic scattering, 146 inflammations, 164
infrared, 143 integral measurement, 10 projection, 24, 32 interaction of light with matter, 142 interferometric techniques, 240 interlaced, 73 hexagonal scheme, 82 sampling, 74 intermediate conditions, 382 interpolation, 268 interventional imaging, 287, 311 interventional X-ray tomography, 300 intracranial electrodes, 398 inverse Fourier transform, 64 Hilbert transform, 30 model, 182, 185 problem, 2, 10, 23, 394, 398, 399, 402, 405, 406 inversion formula, 28, 33, 41 iodine, 202, 203, 206, 213 ionizing radiation, 241 ischemia, 203, 206 iterative, 90, 93, 94, 96, 98, 102, 105–107, 111, 113, 115, 133 algorithm, 12 methods, 185 reconstruction, 296, 301
K, L Katsevich algorithm, 53 K-edge, 202–204, 206, 211, 213 kinetics, 4, 9 k-space, 307, 314–317, 323 Kuhl, 14 Landweber, 94, 115 Larmor frequency, 309, 310, 311 pulsation, 309
laser, 121 lateral resolution, 123, 127 Lauterbur, 14 LETI, 14 life science, 15 line of response, 42 process, 102, 103, 106, 111 linear accelerator, 200 linearity of the system, 254 list mode, 332, 362 local tomography, 30, 41 localization, 8 LOR, 359 LORETA, 405 LSO, 357, 405
M macrocolumn of neurons, 394 magnetic fields, 379, 393, 396, 400–403 susceptibility, 380 magnetism of nuclei, 307 magnetization transfer, 320 magneto-encephalography, 393 MAO-B, 372 MAP, 99, 100–107, 111, 114 MAP-EM, 105, 111 Markov field, 406 material science, 16 matrix-vector product, 11 maximum counting rate, 358 entropy, 133 Maxwell equations, 10, 399 MDCT, 263 mean free path of absorption, 144 mechanical scanning, 9 medical imaging, 15 MEM, 107, 108, 110–112 mesh with tetrahedra, 402 metabolism, 164
methyltransferase of catecholamines, 372 microtomography, 9, 120, 201, 206, 207, 209, 211–214 Mie, 149 minimal invasive surgery, 16 norm constraints, 405 mitochondria, 127, 134 mixer, 242 ML-EM, 105, 110, 111 model, 89–91, 102, 103, 108, 112, 296 MOISE, 106, 107, 111 molecular imaging, 15, 320 monochromatic, 197, 201–205, 211–214 monochromator, 201, 204, 205, 207, 208 Monte Carlo method, 248 morphological imaging, 13, 15, 264 morphometer, 14 motion, 57, 58 artifacts, 262, 264, 273 compensated reconstruction, 301 compensation, 9 motional narrowing, 382 MRI, 3, 377 sequences, 381 multi-energy, 203 multiphase flow, 239 multiphoton absorption microscopy, 135 multiple scattering, 170 multislice, 323 scanners, 263 tomography, 264 MUSIC algorithm, 403 mutual coherence function, 151 intensity, 151 myocardial SPECT, 343
N NA, 124, 127 NADH, 145 NaI(Tl), 331, 357 nanocrystal, 128 nanoparticles, 146 net photonic flux, 161 neuroradiology, 297 neurosciences, 164 neurotransmitter, 372 noise suppression, 131 noise, 89, 91, 92, 93, 97, 108, 111, 130, 270 non-coherent radiation, 3 non-linear absorption, 135 non-linear microscopy, 122 non-linear optics, 135 non-local, 29 normalization, 363 numerical aperture, 124 numerical simulation of flow, 246 Nyquist–Shannon, 65
O oblique sinograms, 42, 361 one-dimensional filtering, 28, 34 Fourier transform, 26 parallel projection, 26 operators, 24 optical biopsies, 185 coherence tomography, 166 sectioning, 120, 121 spectroscopy, 141 tomography, 141 transfer function, 165 ordered subset expectation maximization (OSEM), 339 Orlov condition, 39, 45 overtones, 143
oxygen consumption, 380 oxyhemoglobin, 380
P paradigms, 384 parallel fan-beam geometry, 50 parallel projection, 38 parallel-beam geometry, 26, 52 paramagnetic tracers, 378 paramagnetic, 308, 320 Parker algorithm, 34 weighting, 295 partial volume, 269 particle model, 57 passive system, 3 pattern recognition, 12 pediatric applications, 279 penetration depth, 143 perfusion, 322 periodic functions, 63 periodicity hypothesis, 56 perturbation method, 186 PET, 42, 351, 377, 378, 383, 384, 387 phase contrast, 200, 210, 211, 212 phase function, 147 Phelps, 14 phonons, 143 photo-activatable, 128 photobleaching, 131 photodetector, 123 photoelectric, 144, 211 photomultiplier tube, 123 photon density waves, 189 photonic radiation, 2 photons, 187 snake, 187 physical model of data acquisition, 247 physical properties of tissues, 142
pi-line, 53 pinhole, 123, 127 pitch, 267 pixel, 7, 201, 205, 206, 208, 211 pixel-based approach, 57 point-spread function, 124 Poisson law, 101, 105, 106, 107, 109, 110, 111, 130 Poisson summation formula, 65, 67, 69 polytrauma, 279 postsynaptic currents, 394 potential, 394 potentials, 101–104, 107, 110, 111, 394, 395, 398–401, 403 precession, 307–311, 318, 324 prior, 89, 92, 96, 99, 100, 107 process control, 16, 239 tomography, 230, 240 projections, 7, 290, 291, 334, 337 geometry, 291 matrix, 249 projector, 338 pseudo-local tomography, 30 PSF, 124, 133 public health, 15 pulsed source, 135
Q, R quality control, 16 quantum dots, 146 quasi-static approximation, 399 quasi-stationary hypothesis, 56 radial sampling, 359 radiance, 159 radiative transfer, 158 transitions, 145 radiopharmaceuticals, 330, 351 Radon, 13
domain, 48, 52 space, 290 transform, 7, 11, 24, 40, 166 transform, rotation invariant, 76 ramp filter, 28, 34, 42, 51, 365 support, 31 Rankowitz, 14 Rayleigh, 211, 212 ray-oriented approach, 57 rebinning, 33, 43, 49, 50, 52 reciprocal space, 173, 307, 313, 314 reconstruction algorithms, 267 reconstruction, 8, 9, 12, 26, 28, 33, 35, 36, 38, 42, 45, 53, 55, 132, 134, 295, 307, 312, 314, 316, 317, 375 reflection, 3 refractive index, 2, 153 regular evolution model, 57 regularization, 10, 11, 24, 405, 406 regularizing inverse problems, 394 reprojection, 47 residence time distribution in process engineering, 241 restoration, 131 retinotopic, 387, 388 reverse engineering, 227 RF, 309–312, 314–318, 323 rhodamine, 127, 134
S sampling, 63, 65 conditions, 65 in 2D tomography, 71 matrix, 68, 69 SaO2, 144 scanners, 260, 265, 272 with continuous rotation, 261 scatter, 289, 341 scattered coincidence, 355 radiation, 247
425
sources, 219 spatial deformation model, 57 encoding, 312, 321 grid, 7 resolution, 4, 7, 201, 206, 209, 270, 353 spatiotemporal model, 57 specific intensity, 158 specificity, 2 spectral intensity, 198, 199, 201 spectrum, 197, 198, 199, 201, 204, 207, 211 spherical model, 401, 404 spin-echo, 315, 318, 381–383 spin–lattice relaxation, 317, 318, 320 spin–spin relaxation, 318 spontaneous cerebral activity, 395 SQUIDs, 396 standard sampling, 74 standard scheme, 68, 73, 82 static conditions, 381, 382 static imaging system, 7 statistical methods, 11, 12, 89, 96, 99, 108, 111, 112, 113 parametric mapping, 383 stimulated echoes, 317 ST-MAP, 406 strongly ill-posed, 10 structure factor, 155 subtraction, 202, 203, 205, 206 sufficiency condition, 35, 38 summation, 48 surgery, 397 surgical planning, 16 synchronization, 56 synchrotron radiation, 197, 198, 199, 200, 201, 203, 206, 207, 209, 211, 212, 213, 214 system calibration, 290
T Tam window, 53 temporal constraint, 406 domain, 187 evolution model, 57 resolution, 4, 8 Ter-Pogossian, 14 therapy follow-up, 16 Thomson, 211 thoracic applications, 273 three-dimensional Fourier transform, 38, 41 images, 206 imaging, 7, 9, 14 Radon transform, 37, 40, 41 Radon transform, second derivative, 40, 48 reconstruction, 119, 206–210 reprojection, 45 tomography, 79 X-ray transform, 35, 37, 39, 42, 43, 46 time-of-flight, 355 Ti:Sapphire, 177 tomodensitometry, 218, 225 tomosynthesis, 232 trabecular, 209, 213, 214 trajectory, 290 transition matrix, 363 transmission scan, 362 transmission, 3, 335 trigonometric functions, 64 truncated parallel projection, 42 tumor masses, 164 tumors, 203, 206 Tuy condition, 48, 52 two-dimensional acquisition, 358 dynamic Radon transform, 54 filtering, 39 Fourier transform, 27, 31, 38 imaging, 7, 9
parallel projection, 37, 45 Radon transform, 25, 32, 43, 45, 359 Radon transform, first derivative, 40, 48, 52 X-ray transform, 25 two-photon microscopy, 135
U, V, W, X ultrasound tomography, 240 uncertainty relation, 11 underdetermined system, 254 undulator, 199 untruncated, 38 variation of spatial resolution with depth, 342 vascular accidents, 397 applications, 276 vector tomography, 74 veins, 381–383 vertebra, 209 vibrational modes, 143 visual cortex, 379, 385 system, 387 volume, 206–209 currents, 394 rendering, 297 voxel, 7 voxel-based approach, 57 weighted backprojection, 34, 51, 53 weighting, 33, 47, 51 whole body reconstruction, 9 wiggler, 199, 205, 213 Wigner coherence function, 151 distribution, 158 X-ray, 3, 7 detectors, 266 tomography, 287 transform, 80, 294 tubes, 199, 201, 266