Adaptive Optics for Vision Science: Principles, Practices, Design, and Applications

Edited by JASON PORTER, HOPE M. QUEENER, JULIANNA E. LIN, KAREN THORN, AND ABDUL AWWAL
A JOHN WILEY & SONS, INC., PUBLICATION
Front cover art: In an adaptive optics system, a lenslet array (left circle) is used to measure an aberrated wavefront (top circle) that is then corrected by a deformable mirror (right circle) to produce a flattened wavefront (bottom circle). Lenslet array and deformable mirror images are courtesy of Adaptive Optics Associates, Inc. and Boston Micromachines Corporation, respectively.

Copyright © 2006 by John Wiley & Sons, Inc., Hoboken, NJ. All rights reserved. Published simultaneously in Canada.

No part of this publication may be reproduced, stored in a retrieval system, or transmitted in any form or by any means, electronic, mechanical, photocopying, recording, scanning, or otherwise, except as permitted under Section 107 or 108 of the 1976 United States Copyright Act, without either the prior written permission of the Publisher, or authorization through payment of the appropriate per-copy fee to the Copyright Clearance Center, Inc., 222 Rosewood Drive, Danvers, MA 01923, (978) 750-8400, fax (978) 750-4470, or on the web at www.copyright.com. Requests to the Publisher for permission should be addressed to the Permissions Department, John Wiley & Sons, Inc., 111 River Street, Hoboken, NJ 07030, (201) 748-6011, fax (201) 748-6008, or online at http://www.wiley.com/go/permission.

Limit of Liability/Disclaimer of Warranty: While the publisher and author have used their best efforts in preparing this book, they make no representations or warranties with respect to the accuracy or completeness of the contents of this book and specifically disclaim any implied warranties of merchantability or fitness for a particular purpose. No warranty may be created or extended by sales representatives or written sales materials. The advice and strategies contained herein may not be suitable for your situation. You should consult with a professional where appropriate. Neither the publisher nor author shall be liable for any loss of profit or any other commercial damages, including but not limited to special, incidental, consequential, or other damages.

For general information on our other products and services or for technical support, please contact our Customer Care Department within the United States at (800) 762-2974, outside the United States at (317) 572-3993 or fax (317) 572-4002. Wiley also publishes its books in a variety of electronic formats. Some content that appears in print may not be available in electronic formats. For more information about Wiley products, visit our web site at www.wiley.com.

Library of Congress Cataloging-in-Publication Data:

Adaptive optics for vision science : principles, practices, design and applications / edited by Jason Porter . . . [et al.].
p. cm.
"A Wiley-Interscience publication."
Includes bibliographical references and index.
ISBN-13: 978-0-471-67941-7
ISBN-10: 0-471-67941-0
1. Optics, Adaptive. I. Porter, Jason.
TA1520.A34 2006
621.36′9–dc22
2005056953

Printed in the United States of America
10 9 8 7 6 5 4 3 2 1
Contents

FOREWORD  xvii
ACKNOWLEDGMENTS  xxi
CONTRIBUTORS  xxiii

PART ONE  INTRODUCTION  1

1  Development of Adaptive Optics in Vision Science and Ophthalmology  3
   David R. Williams and Jason Porter
   1.1  Brief History of Aberration Correction in the Human Eye  3
        1.1.1  Vision Correction  3
        1.1.2  Retinal Imaging  5
   1.2  Applications of Ocular Adaptive Optics  9
        1.2.1  Vision Correction  9
        1.2.2  Retinal Imaging  11

PART TWO  WAVEFRONT MEASUREMENT AND CORRECTION  31

2  Aberration Structure of the Human Eye  33
   Pablo Artal, Juan M. Bueno, Antonio Guirao, and Pedro M. Prieto
   2.1  Introduction  33
   2.2  Location of Monochromatic Aberrations Within the Eye  34
   2.3  Temporal Properties of Aberrations: Accommodation and Aging  40
        2.3.1  Effect of Accommodation on Aberrations and Their Correction  40
        2.3.2  Aging and Aberrations  42
   2.4  Chromatic Aberrations  43
        2.4.1  Longitudinal Chromatic Aberration  44
        2.4.2  Transverse Chromatic Aberration  45
        2.4.3  Interaction Between Monochromatic and Chromatic Aberrations  45
   2.5  Off-Axis Aberrations  46
        2.5.1  Peripheral Refraction  47
        2.5.2  Monochromatic and Chromatic Off-Axis Aberrations  48
        2.5.3  Monochromatic Image Quality and Correction of Off-Axis Aberrations  51
   2.6  Statistics of Aberrations in Normal Populations  52
   2.7  Effects of Polarization and Scatter  53
        2.7.1  Impact of Polarization on the Ocular Aberrations  53
        2.7.2  Intraocular Scatter  55

3  Wavefront Sensing and Diagnostic Uses  63
   Geunyoung Yoon
   3.1  Wavefront Sensors for the Eye  63
        3.1.1  Spatially Resolved Refractometer  65
        3.1.2  Laser Ray Tracing  65
        3.1.3  Shack–Hartmann Wavefront Sensor  66
   3.2  Optimizing a Shack–Hartmann Wavefront Sensor  68
        3.2.1  Number of Lenslets Versus Number of Zernike Coefficients  68
        3.2.2  Trade-off Between Dynamic Range and Measurement Sensitivity  71
        3.2.3  Focal Length of the Lenslet Array  73
        3.2.4  Increasing the Dynamic Range of a Wavefront Sensor Without Losing Measurement Sensitivity  74
   3.3  Calibration of a Wavefront Sensor  75
        3.3.1  Reconstruction Algorithm  76
        3.3.2  System Aberrations  77
   3.4  Summary  79

4  Wavefront Correctors for Vision Science  83
   Nathan Doble and Donald T. Miller
   4.1  Introduction  83
   4.2  Principal Components of an AO System  84
   4.3  Wavefront Correctors  86
   4.4  Wavefront Correctors Used in Vision Science  88
        4.4.1  Macroscopic Discrete Actuator Deformable Mirrors  89
        4.4.2  Liquid Crystal Spatial Light Modulators  90
        4.4.3  Bimorph Mirrors  91
        4.4.4  Microelectromechanical Systems  92
   4.5  Performance Predictions for Various Types of Wavefront Correctors  95
        4.5.1  Description of Two Large Populations  98
        4.5.2  Required Corrector Stroke  99
        4.5.3  Discrete Actuator Deformable Mirrors  101
        4.5.4  Piston-Only Segmented Mirrors  106
        4.5.5  Piston/Tip/Tilt Segmented Mirrors  107
        4.5.6  Membrane and Bimorph Mirrors  109
   4.6  Summary and Conclusion  111

5  Control Algorithms  119
   Li Chen
   5.1  Introduction  119
   5.2  Configuration of Lenslets and Actuators  119
   5.3  Influence Function Measurement  122
   5.4  Spatial Control Command of the Wavefront Corrector  124
        5.4.1  Control Matrix for the Direct Slope Algorithm  124
        5.4.2  Modal Wavefront Correction  127
        5.4.3  Wave Aberration Generator  127
   5.5  Temporal Control Command of the Wavefront Corrector  128
        5.5.1  Open-Loop Control  128
        5.5.2  Closed-Loop Control  129
        5.5.3  Transfer Function of an Adaptive Optics System  130

6  Adaptive Optics Software for Vision Research  139
   Ben Singer
   6.1  Introduction  139
   6.2  Image Acquisition  140
        6.2.1  Frame Rate  140
        6.2.2  Synchronization  140
        6.2.3  Pupil Imaging  141
   6.3  Measuring Wavefront Slope  142
        6.3.1  Setting Regions of Interest  142
        6.3.2  Issues Related to Image Coordinates  143
        6.3.3  Adjusting for Image Quality  143
        6.3.4  Measurement Pupils  143
        6.3.5  Preparing the Image  143
        6.3.6  Centroiding  144
   6.4  Aberration Recovery  144
        6.4.1  Principles  144
        6.4.2  Implementation  145
        6.4.3  Recording Aberration  147
        6.4.4  Displaying a Running History of RMS  147
        6.4.5  Displaying an Image of the Reconstructed Wavefront  148
   6.5  Correcting Aberrations  149
        6.5.1  Recording Influence Functions  149
        6.5.2  Applying Actuator Voltages  150
   6.6  Application-Dependent Considerations  150
        6.6.1  One-Shot Retinal Imaging  150
        6.6.2  Synchronizing to Display Stimuli  150
        6.6.3  Selective Correction  151
   6.7  Conclusion  151
        6.7.1  Making Programmers Happy  151
        6.7.2  Making Operators Happy  151
        6.7.3  Making Researchers Happy  152
        6.7.4  Making Subjects Happy  152
        6.7.5  Flexibility in the Middle  153

7  Adaptive Optics System Assembly and Integration  155
   Brian J. Bauman and Stephen K. Eisenbies
   7.1  Introduction  155
   7.2  First-Order Optics of the AO System  156
   7.3  Optical Alignment  157
        7.3.1  Understanding Penalties for Misalignments  158
        7.3.2  Optomechanics  159
        7.3.3  Common Alignment Practices  163
        7.3.4  Sample Procedure for Offline Alignment  170
   7.4  AO System Integration  174
        7.4.1  Overview  174
        7.4.2  Measure the Wavefront Error of Optical Components  175
        7.4.3  Qualify the DM  175
        7.4.4  Qualify the Wavefront Sensor  177
        7.4.5  Check Wavefront Reconstruction  180
        7.4.6  Assemble the AO System  181
        7.4.7  Boresight FOVs  182
        7.4.8  Perform DM-to-WS Registration  183
        7.4.9  Measure the Slope Influence Matrix and Generate Control Matrices  184
        7.4.10  Close the Loop and Check the System Gain  184
        7.4.11  Calibrate the Reference Centroids  185

8  System Performance Characterization  189
   Marcos A. van Dam
   8.1  Introduction  189
   8.2  Strehl Ratio  189
   8.3  Calibration Error  191
   8.4  Fitting Error  192
   8.5  Measurement and Bandwidth Error  194
        8.5.1  Modeling the Dynamic Behavior of the AO System  194
        8.5.2  Computing Temporal Power Spectra from the Diagnostics  196
        8.5.3  Measurement Noise Errors  198
        8.5.4  Bandwidth Error  199
        8.5.5  Discussion  200
   8.6  Addition of Wavefront Error Terms  200

PART THREE  RETINAL IMAGING APPLICATIONS  203

9  Fundamental Properties of the Retina  205
   Ann E. Elsner
   9.1  Shape of the Retina  206
   9.2  Two Blood Supplies  209
   9.3  Layers of the Fundus  210
   9.4  Spectra  218
   9.5  Light Scattering  220
   9.6  Polarization  225
   9.7  Contrast from Directly Backscattered or Multiply Scattered Light  228
   9.8  Summary  230

10  Strategies for High-Resolution Retinal Imaging  235
    Austin Roorda, Donald T. Miller, and Julian Christou
    10.1  Introduction  235
    10.2  Conventional Imaging  236
         10.2.1  Resolution Limits of Conventional Imaging Systems  237
         10.2.2  Basic System Design  237
         10.2.3  Optical Components  239
         10.2.4  Wavefront Sensing  240
         10.2.5  Imaging Light Source  242
         10.2.6  Field Size  244
         10.2.7  Science Camera  246
         10.2.8  System Operation  246
    10.3  Scanning Laser Imaging  247
         10.3.1  Resolution Limits of Confocal Scanning Laser Imaging Systems  249
         10.3.2  Basic Layout of an AOSLO  249
         10.3.3  Light Path  249
         10.3.4  Light Delivery  251
         10.3.5  Wavefront Sensing and Compensation  252
         10.3.6  Raster Scanning  253
         10.3.7  Light Detection  254
         10.3.8  Frame Grabbing  255
         10.3.9  SLO System Operation  255
    10.4  OCT Ophthalmoscope  256
         10.4.1  OCT Principle of Operation  257
         10.4.2  Resolution Limits of OCT  259
         10.4.3  Light Detection  262
         10.4.4  Basic Layout of AO-OCT Ophthalmoscopes  264
         10.4.5  Optical Components  266
         10.4.6  Wavefront Sensing  266
         10.4.7  Imaging Light Source  267
         10.4.8  Field Size  267
         10.4.9  Impact of Speckle and Chromatic Aberrations  268
    10.5  Common Issues for All AO Imaging Systems  271
         10.5.1  Light Budget  271
         10.5.2  Human Factors  272
         10.5.3  Refraction  272
         10.5.4  Imaging Time  276
    10.6  Image Postprocessing  276
         10.6.1  Introduction  276
         10.6.2  Convolution  276
         10.6.3  Linear Deconvolution  278
         10.6.4  Nonlinear Deconvolution  279
         10.6.5  Uses of Deconvolution  283
         10.6.6  Summary  283

PART FOUR  VISION CORRECTION APPLICATIONS  289

11  Customized Vision Correction Devices  291
    Ian Cox
    11.1  Contact Lenses  291
         11.1.1  Rigid or Soft Contact Lenses for Customized Correction?  293
         11.1.2  Design Considerations—More Than Just Optics  295
         11.1.3  Measurement—The Eye, the Lens, or the System?  297
         11.1.4  Customized Contact Lenses in a Disposable World  298
         11.1.5  Manufacturing Issues—Can the Correct Surfaces Be Made?  300
         11.1.6  Who Will Benefit?  301
         11.1.7  Summary  304
    11.2  Intraocular Lenses  304
         11.2.1  Which Aberrations—The Cornea, the Lens, or the Eye?  305
         11.2.2  Correcting Higher Order Aberrations—Individual Versus Population Average  306
         11.2.3  Summary  308

12  Customized Corneal Ablation  311
    Scott M. MacRae
    12.1  Introduction  311
    12.2  Basics of Laser Refractive Surgery  312
    12.3  Forms of Customization  317
         12.3.1  Functional Customization  317
         12.3.2  Anatomical Customization  319
         12.3.3  Optical Customization  320
    12.4  The Excimer Laser Treatment  321
    12.5  Biomechanics and Variable Ablation Rate  322
    12.6  Effect of the LASIK Flap  324
    12.7  Wavefront Technology and Higher Order Aberration Correction  325
    12.8  Clinical Results of Excimer Laser Ablation  325
    12.9  Summary  326

13  From Wavefronts to Refractions  331
    Larry N. Thibos
    13.1  Basic Terminology  331
         13.1.1  Refractive Error and Refractive Correction  331
         13.1.2  Lens Prescriptions  332
    13.2  Goal of Refraction  334
         13.2.1  Definition of the Far Point  334
         13.2.2  Refraction by Successive Elimination  335
         13.2.3  Using Depth of Focus to Expand the Range of Clear Vision  336
    13.3  Methods for Estimating the Monochromatic Refraction from an Aberration Map  337
         13.3.1  Refraction Based on Equivalent Quadratic  339
         13.3.2  Virtual Refraction Based on Maximizing Optical Quality  339
         13.3.3  Numerical Example  353
    13.4  Ocular Chromatic Aberration and the Polychromatic Refraction  354
         13.4.1  Polychromatic Wavefront Metrics  356
         13.4.2  Polychromatic Point Image Metrics  357
         13.4.3  Polychromatic Grating Image Metrics  357
    13.5  Experimental Evaluation of Proposed Refraction Methods  358
         13.5.1  Monochromatic Predictions  358
         13.5.2  Polychromatic Predictions  359
         13.5.3  Conclusions  360

14  Visual Psychophysics with Adaptive Optics  363
    Joseph L. Hardy, Peter B. Delahunt, and John S. Werner
    14.1  Psychophysical Functions  364
         14.1.1  Contrast Sensitivity Functions  364
         14.1.2  Spectral Efficiency Functions  368
    14.2  Psychophysical Methods  370
         14.2.1  Threshold  370
         14.2.2  Signal Detection Theory  371
         14.2.3  Detection, Discrimination, and Identification Thresholds  374
         14.2.4  Procedures for Estimating a Threshold  375
         14.2.5  Psychometric Functions  377
         14.2.6  Selecting Stimulus Values  378
    14.3  Generating the Visual Stimulus  380
         14.3.1  General Issues Concerning Computer-Controlled Displays  381
         14.3.2  Types of Computer-Controlled Displays  384
         14.3.3  Accurate Stimulus Generation  386
         14.3.4  Display Characterization  388
         14.3.5  Maxwellian-View Optical Systems  390
         14.3.6  Other Display Options  390
    14.4  Conclusions  391

PART FIVE  DESIGN EXAMPLES  395

15  Rochester Adaptive Optics Ophthalmoscope  397
    Heidi Hofer, Jason Porter, Geunyoung Yoon, Li Chen, Ben Singer, and David R. Williams
    15.1  Introduction  397
    15.2  Optical Layout  398
         15.2.1  Wavefront Measurement and Correction  398
         15.2.2  Retinal Imaging: Light Delivery and Image Acquisition  403
         15.2.3  Visual Psychophysics Stimulus Display  404
    15.3  Control Algorithm  405
    15.4  Wavefront Correction Performance  406
         15.4.1  Residual RMS Errors, Wavefronts, and Point Spread Functions  406
         15.4.2  Temporal Performance: RMS Wavefront Error  407
    15.5  Improvement in Retinal Image Quality  409
    15.6  Improvement in Visual Performance  410
    15.7  Current System Limitations  412
    15.8  Conclusion  414

16  Design of an Adaptive Optics Scanning Laser Ophthalmoscope  417
    Krishnakumar Venkateswaran, Fernando Romero-Borja, and Austin Roorda
    16.1  Introduction  417
    16.2  Light Delivery  419
    16.3  Raster Scanning  419
    16.4  Adaptive Optics in the SLO  420
         16.4.1  Wavefront Sensing  420
         16.4.2  Wavefront Compensation Using the Deformable Mirror  421
         16.4.3  Mirror Control Algorithm  421
         16.4.4  Nonnulling Operation for Axial Sectioning in a Closed-Loop AO System  423
    16.5  Optical Layout for the AOSLO  425
    16.6  Image Acquisition  426
    16.7  Software Interface for the AOSLO  429
    16.8  Calibration and Testing  431
         16.8.1  Defocus Calibration  431
         16.8.2  Linearity of the Detection Path  432
         16.8.3  Field Size Calibration  432
    16.9  AO Performance Results  432
         16.9.1  AO Compensation  432
         16.9.2  Axial Resolution of the Theoretically Modeled AOSLO and Experimental Results  434
    16.10  Imaging Results  438
         16.10.1  Hard Exudates and Microaneurysms in a Diabetic's Retina  438
         16.10.2  Blood Flow Measurements  439
         16.10.3  Solar Retinopathy  440
    16.11  Discussions on Improving Performance of the AOSLO  441
         16.11.1  Size of the Confocal Pinhole  441
         16.11.2  Pupil and Retinal Stabilization  443
         16.11.3  Improvements to Contrast  443

17  Indiana University AO-OCT System  447
    Yan Zhang, Jungtae Rha, Ravi S. Jonnal, and Donald T. Miller
    17.1  Introduction  447
    17.2  Description of the System  448
    17.3  Experimental Procedures  453
         17.3.1  Preparation of Subjects  453
         17.3.2  Collection of Retinal Images  454
    17.4  AO Performance  455
         17.4.1  Image Sharpening  457
         17.4.2  Temporal Power Spectra  458
         17.4.3  Power Rejection Curve of the Closed-Loop AO System  459
         17.4.4  Time Stamping of SHWS Measurements  460
         17.4.5  Extensive Logging Capabilities  461
         17.4.6  Improving Corrector Stability  461
    17.5  Example Results with AO Conventional Flood-Illuminated Imaging  461
    17.6  Example Results with AO Parallel SD-OCT Imaging  463
         17.6.1  Parallel SD-OCT Sensitivity and Axial Resolution  463
         17.6.2  AO Parallel SD-OCT Imaging  466
    17.7  Conclusion  474

18  Design and Testing of a Liquid Crystal Adaptive Optics Phoropter  477
    Abdul Awwal and Scot Olivier
    18.1  Introduction  477
    18.2  Wavefront Sensor Selection  478
         18.2.1  Wavefront Sensor: Shack–Hartmann Sensor  478
         18.2.2  Shack–Hartmann Noise  483
    18.3  Beacon Selection: Size and Power, SLD Versus Laser Diode  484
    18.4  Wavefront Corrector Selection  485
    18.5  Wavefront Reconstruction and Control  486
         18.5.1  Closed-Loop Algorithm  487
         18.5.2  Centroid Calculation  488
    18.6  Software Interface  489
    18.7  AO Assembly, Integration, and Troubleshooting  491
    18.8  System Performance, Testing Procedures, and Calibration  492
         18.8.1  Nonlinear Characterization of the Spatial Light Modulator (SLM) Response  493
         18.8.2  Phase Wrapping  493
         18.8.3  Biased Operation of SLM  495
         18.8.4  Wavefront Sensor Verification  495
         18.8.5  Registration  496
         18.8.6  Closed-Loop Operation  499
    18.9  Results from Human Subjects  502
    18.10  Discussion  506
    18.11  Summary  508

APPENDIX A: OPTICAL SOCIETY OF AMERICA'S STANDARDS FOR REPORTING OPTICAL ABERRATIONS  511

GLOSSARY  529

SYMBOL TABLE  553

INDEX  565
Foreword
The rationale for this handbook is to make adaptive optics technology for vision science and ophthalmology as broadly accessible as possible. While the scientific literature chronicles the dramatic recent achievements enabled by adaptive optics in vision correction and retinal imaging, it does less well at conveying the practical information required to apply wavefront technology to the eye. This handbook is intended to equip engineers, scientists, and clinicians with the basic concepts, engineering tools, and tricks of the trade required to master adaptive optics-related applications in vision science and ophthalmology.

During the past decade, there has been a remarkable expansion of the application of wavefront-related technologies to the human eye, as illustrated by the rapidly growing number of publications in this area (shown in Fig. F.1). The catalysts for this expansion have been the development of new wavefront sensors that can rapidly provide accurate and complete descriptions of the eye's aberrations, and the demonstration that adaptive optics can provide better correction of the eye's aberrations than has previously been possible. These new tools have generated an intensive effort to revise methods to correct vision, with the wavefront sensor providing a much needed yardstick for measuring the optical performance of spectacles, contact lenses, intraocular lenses, and refractive surgical procedures. Wavefront sensors offer the promise of a new generation of vision correction methods that can correct higher order aberrations beyond defocus and astigmatism in cases where these aberrations significantly blur the retinal image. The ability of adaptive optics to correct the monochromatic aberrations of the eye has also created exciting new opportunities to image the normal and diseased retina at unprecedented spatial resolution.
Adaptive optics has strong roots in astronomy, where it is used to overcome the blurring effects of atmospheric turbulence, the fundamental limitation on the resolution of ground-based telescopes. More recently, adaptive optics has found application in other areas, most notably vision science, where it is used to correct the eye's wave aberration. Despite the obvious difference in the scientific objectives of the astronomy and vision science communities, we share a technology that is remarkably similar across the two applications. Recognizing this, together with Jerry Nelson and other colleagues, we created a center focused on developing adaptive optics technology for both astronomy and vision science. The Center for Adaptive Optics, with headquarters at the University of California, Santa Cruz, was founded in 1999 as a National Science Foundation Science and Technology Center. Initially under the leadership of Jerry Nelson and more recently of Claire Max, the Center for Adaptive Optics is a consortium involving more than 30 affiliated universities, government laboratories, and corporations. The Center has fostered extensive new collaborations between vision scientists and astronomers (who very soon discovered they were interested in each other's science as well as their technology!). This handbook is a direct result of the Center's collaborative energy, with chapters contributed by astronomers and vision scientists alike.

FIGURE F.1 Number of publications listed in PubMed (National Library of Medicine) that describe work where wavefront sensors were used to measure the full wave aberration of the human eye. Types of wavefront sensors included in this graph: Shack–Hartmann, spatially resolved refractometer, crossed-cylinder aberroscope, laser ray tracing, scanning slit refractometer, video keratography, corneal topography, phase retrieval, curvature sensing, and grating-based techniques.
We wish to thank all of the contributors for generously sharing their expertise, and even their secrets, within the pages of this book. Especially, we congratulate Jason Porter, lead editor, and Hope Queener, Julianna Lin, Karen Thorn, and Abdul Awwal, coeditors, for their tireless dedication to this significant project. DAVID R. WILLIAMS University of Rochester, Rochester, New York Center for Adaptive Optics
CLAIRE MAX University of California, Santa Cruz Center for Adaptive Optics
Acknowledgments
I have been extremely privileged to have worked on this book and would like to thank everyone who contributed to its development, technical and scientific content, character, and completion. I am indebted to all of the authors and reviewers from the vision science, astronomical, and engineering communities who took the time and energy to write outstanding chapters in the midst of their busy research and personal lives. Thank you to George Telecki and Rachel Witmer at John Wiley & Sons, Inc. for sticking with us over the past two years, for believing in the importance of publishing a book on this topic, and for their patience and willingness to answer any and all questions that came their way. In addition, I could not have completed the project without the energy and efforts of my fellow co-editors, Hope Queener, Julianna Lin, Karen Thorn, and Abdul Awwal. I would particularly like to thank Hope and Julianna for their tremendous dedication to compiling a book with such a high level of scientific and technical competence and integrity (and for all of the many hours and late nights required to do so!). I am also grateful for the support, ideas, and encouragement I received from David Williams and the members of his lab (including Joe, Jess, Dan, Li, Sapna, and Alexis), and the Center for Visual Science and StrongVision administrative staff (including Michele, Debbie, Teresa, Sara, and Amy). A very special thanks goes to my family (Jen, Kevin, Debbie, Sarah, and Kyle) and friends (Mike, Lana, Frank, and others who are too numerous to mention) for their support, love, belief, encouragement, and prayers, and for helping to keep me refreshed and alive.

Scientifically, I will always be grateful to Claire Max, who first opened my eyes to the exciting field of adaptive optics during an internship at Lawrence Livermore National Lab that subsequently led me to find a path to David Williams' lab. In addition, I will always be indebted to my mentor, David Williams, for his guidance, instruction, support, encouragement, and confidence in me on so many levels in and outside of the office—it has been a pleasure to work for one of the pioneers in the fields of vision science and adaptive optics. Finally, I would like to thank the National Science Foundation and the Center for Adaptive Optics not only for supporting this project but also for supporting and continuing the long tradition of vision scientists and astronomers working together to better science, health, and technology.

JASON PORTER

This editorial work was made possible by the support of the National Science Foundation's Center for Adaptive Optics and the associated scientific community. The University of Houston College of Optometry provided time and computing resources. The University of Rochester's Center for Visual Science provided time, space, and computing resources. I wish to particularly acknowledge the tremendous efforts of co-editors Jason Porter and Julianna Lin. As the project neared completion, the helpful responses from Larry Thibos, Marcos van Dam, Jack Werner, and Joe Hardy were greatly appreciated.

HOPE M. QUEENER

I would like to extend a very heartfelt thank you to all of the authors, reviewers, collaborators, and supporters who dedicated so much of their time to making this book a reality. In particular, I would like to thank Jason Porter and Hope Queener for their staunch determination and perseverance, particularly toward the end of this project. I would also like to thank my husband, Gregory Brady, and my family (Y. S. Lin, G. Y. C. Lin, I. Lin, K. Su, S. Su, and little Stephen) for their love and support, even in the midst of the editing cycle. Financial and logistical support for this project was provided by the Center for Adaptive Optics. Additional support was provided by David Williams, the University of Rochester, and the Center for Visual Science.

JULIANNA E. LIN
Contributors

AUTHORS

Pablo Artal, Laboratorio de Optica (Departamento de Fisica), Universidad de Murcia, Murcia, Spain
Abdul Awwal, Lawrence Livermore National Laboratory, Livermore, California
Brian J. Bauman, Lawrence Livermore National Laboratory, Livermore, California
Juan M. Bueno, Laboratorio de Optica, Universidad de Murcia, Murcia, Spain
Li Chen, Center for Visual Science, University of Rochester, Rochester, New York
Julian Christou, Center for Adaptive Optics, University of California, Santa Cruz, Santa Cruz, California
Ian Cox, Bausch & Lomb, Rochester, New York
Peter B. Delahunt, Posit Science Corporation, San Francisco, California
Nathan Doble, Iris AO, Inc., Berkeley, California
Stephen K. Eisenbies, Sandia National Laboratories, Livermore, California
Ann E. Elsner, School of Optometry, Indiana University, Bloomington, Indiana
Antonio Guirao, Laboratorio de Optica, Universidad de Murcia, Murcia, Spain
Joseph L. Hardy, Posit Science Corporation, San Francisco, California
Heidi Hofer, College of Optometry, University of Houston, Houston, Texas
Ravi S. Jonnal, School of Optometry, Indiana University, Bloomington, Indiana
Scott M. MacRae, Department of Ophthalmology, University of Rochester, Rochester, New York
Donald T. Miller, School of Optometry, Indiana University, Bloomington, Indiana
Scot Olivier, Lawrence Livermore National Laboratory, Livermore, California
Jason Porter, Center for Visual Science, University of Rochester, Rochester, New York
Pedro M. Prieto, Laboratorio de Optica, Universidad de Murcia, Murcia, Spain
Jungtae Rha, School of Optometry, Indiana University, Bloomington, Indiana
Fernando Romero-Borja, Houston Community College Central, Houston, Texas
Austin Roorda, School of Optometry, University of California, Berkeley, Berkeley, California
Ben Singer, Center for the Study of Brain, Mind and Behavior, Princeton University, Princeton, New Jersey
Larry N. Thibos, School of Optometry, Indiana University, Bloomington, Indiana
Marcos A. van Dam, W. M. Keck Observatory, Kamuela, Hawaii
Krishna Venkateswaran, Alcon Research Ltd., Orlando, Florida
John S. Werner, Department of Ophthalmology, Section of Neurobiology, Physiology and Behavior, University of California, Davis Medical Center, Sacramento, California
David R. Williams, Center for Visual Science, University of Rochester, Rochester, New York
Geunyoung Yoon, Department of Ophthalmology, University of Rochester, Rochester, New York
Yan Zhang, School of Optometry, Indiana University, Bloomington, Indiana

EDITOR-IN-CHIEF

Jason Porter, Center for Visual Science, University of Rochester, Rochester, New York

CO-EDITORS

Abdul Awwal, Lawrence Livermore National Laboratory, Livermore, California
Julianna E. Lin, Center for Visual Science, University of Rochester, Rochester, New York
Hope M. Queener, College of Optometry, University of Houston, Houston, Texas
Karen Thorn, 20 Todman Street, Brooklyn, Wellington, New Zealand
FIGURE 1.7 Images of the cone mosaics of 10 subjects with normal color vision, obtained with the combined methods of adaptive optics imaging and retinal densitometry. The images are false colored so that blue, green, and red are used to represent the S, M, and L cones, respectively. (The true colors of these cones are yellow, purple, and bluish-purple). The mosaics illustrate the enormous variability in L/M cone ratio. The L/M cone ratios are (A) 0.37, (B) 1.11, (C) 1.14, (D) 1.24, (E) 1.77, (F) 1.88, (G) 2.32, (H) 2.36, (I) 2.46, (J) 3.67, (K) 3.90, and (L) 16.54. The proportion of S cones is relatively constant across eyes, ranging from 3.9 to 6.6% of the total population. Images were taken either 1° or 1.25° from the foveal center. For two of the 10 subjects, two different retinal locations are shown. Panels (D) and (E) show images from nasal and temporal retinas, respectively, for one subject; (J) and (K) show images from nasal and temporal retinas for another subject. Images (C), (J), and (K) are from Roorda and Williams [52]. All other images were made by Heidi Hofer. (See page 16 for text discussion.) (From Williams and Hofer [57]. Reprinted with permission from The MIT Press.)
FIGURE 9.9 Images centered on the human macula, acquired with laser illumination over a range of wavelengths. [Panel labels: 830 nm (Drusen); 633 nm (Some Nerve Fiber Layer); 543 nm (Retinal Vessels); 488 nm (Macular Pigment, Nerve Fiber Layer); 514 nm (Some Macular Pigment); 633, 543, and 488 nm Images Combined.] The bottom right panel is the combination of three colors: red (633 nm), green (543 nm), and blue (488 nm). (See page 219 for text discussion.)
FIGURE 9.10 A color fundus photograph of the patient in Figures 9.4 and 9.5, showing that the larger retinal vessels are seen, but that the choroidal ones (other than the largest ones that feed and drain the neovascular membrane) are obscured. (See page 219 for text discussion.)
FIGURE 13.9 Image formation for a polychromatic source in the presence of chromatic aberration. Top row is for an eye with longitudinal chromatic aberration only. Bottom row is for an eye with longitudinal and transverse chromatic aberration produced by 1 mm of horizontal pupil offset from the visual axis (or, equivalently, 15° of eccentricity). The point source emits three wavelengths of light (500, 575, and 600 nm) and the eye is assumed to be focused for 550 nm. Chromatic errors of focus and position indicated for each image are derived from an analysis of the Indiana Eye model of chromatic aberration. [Panel annotations: λ = 525 nm, τ = +0.62 min, Kλ = −0.18 D; λ = 575 nm, τ = −0.35 min, Kλ = +0.10 D; λ = 600 nm, τ = −0.74 min, Kλ = +0.22 D; τ = 0 at λfocus = 555 nm; 1-mm pupil offset; Composite; Luminance.] (See page 354 for text discussion.)
PART ONE
INTRODUCTION
CHAPTER ONE
Development of Adaptive Optics in Vision Science and Ophthalmology DAVID R. WILLIAMS and JASON PORTER University of Rochester, Rochester, New York
This chapter briefly reviews the history of ocular aberration measurement and correction that paved the way to the development of adaptive wavefront correction of the eye. While the focus of this book is on the engineering of adaptive optics systems for the eye, this chapter describes recent applications of adaptive optics and the scientific discoveries that adaptive optics has made possible, encouraging the future development of this technology.
1.1 BRIEF HISTORY OF ABERRATION CORRECTION IN THE HUMAN EYE

1.1.1 Vision Correction

The first use of a transparent stone as a crude magnifying glass is not known, though it has been suggested that this could have been as early as 5000 BC [1]. It is also unclear who first fixed simple lenses to the head. Though corrective spectacles rank among the most important medical inventions in history, their origins are obscure [2]. Most sources attribute spectacles to an unknown Italian near the end of the thirteenth century. In any case, the invention of spectacles seems to have been based on empirical observation of the effects
Adaptive Optics for Vision Science, Edited by Porter, Queener, Lin, Thorn, and Awwal Copyright © 2006 John Wiley & Sons, Inc.
DEVELOPMENT OF ADAPTIVE OPTICS IN VISION SCIENCE
of glass held before the eye rather than theoretical insight, as their invention preceded the first clear description of how the image was formed on the retina [3] by at least 300 years. Kepler, an astronomer perhaps best known for his laws of planetary motion, pointed out that the retinal image is inverted and also clearly described the benefits of concave lenses for myopic correction and convex lenses for hyperopic correction. This was a time of rapid advances in the field of optics. The telescope was invented around this same time, though again there is controversy about the inventor [4]. Most scholars attribute the invention to Hans Lipperhey, a Dutch spectacle maker who produced a telescope in 1608. Galileo would soon use one of the first telescopes to observe the moons of Jupiter and sunspots. The link between astronomy and the eye apparent in Kepler's scientific contributions and Lipperhey's telescope is a recurring theme in the history of vision science, culminating in the recent translation of adaptive optics from astronomy to vision science. Galileo had a competitor, Christoph Scheiner, who was also an astronomer with interests in physiological optics. Scheiner demonstrated empirically that the retinal image was inverted by cutting a hole in the back of an excised animal eye and viewing the retinal image directly [5]. Scheiner also constructed what was arguably the first wavefront sensor for the eye. Scheiner's wavefront sensor evaluated the fate of light passing through only two locations in the eye's entrance pupil. Modern ophthalmic wavefront sensors extend this concept by measuring the direction that light takes as it passes through hundreds of different locations in the eye's pupil. Scheiner made two holes in an opaque disk. When held close to the eye, the perceived image was doubled if the eye was defocused and single only if the eye was in focus, providing subjective information about the eye's most important aberration.
It would be nearly 200 years before a clear understanding developed of astigmatism, the eye's second most important monochromatic aberration. Thomas Young recognized the existence of astigmatism in his own eye and determined that his astigmatism was predominantly lenticular in origin by noting that it persisted even when he immersed his eye in water, largely neutralizing the cornea [6]. In 1827, Sir George Biddell Airy, yet another astronomer, fabricated the first spherocylindrical lenses to correct astigmatism. This ultimately led to the current ophthalmic practice of prescribing aberration corrections with 3 degrees of freedom corresponding to a defocus correction, the cylindrical power, and the cylinder axis. Helmholtz argued that the normal eye contained more monochromatic aberrations than just defocus and astigmatism, based in part on his own subjective observations of a bright point source viewed in the dark [7]. These monochromatic, higher order aberrations were often referred to as "irregular astigmatism" to distinguish them from the regular astigmatism that could be corrected with a cylindrical lens. Roughly one and a half centuries after Helmholtz's description of the higher order aberrations in human eyes, we are now equipped with the adaptive optics (AO) technology that can systematically measure and correct them.
1.1.2 Retinal Imaging

The main hurdle to obtaining the first view of the inside of the living eye was that such a small fraction of the light entering the pupil returns back out of it. The reflectance of the back of the eye is only 0.1 to 10%, depending on wavelength (400 to 700 nm) [8, 9], and the pupil restricts the amount of light that can exit the eye by another factor of about 100. Together, these two factors reduce the light returning through the pupil to a fraction of 10⁻³ to 10⁻⁵, depending on wavelength. Purkinje appreciated that under some conditions the pupil of the eye could be made to appear luminous instead of black [10, 11]. Brücke demonstrated the glow in the pupil that could be seen through a tube held in a candle flame and pointed at an eye [12]. In 1851, Helmholtz revolutionized the field of ophthalmology with the invention of the ophthalmoscope [13]. He called his invention the Augenspiegel, or "eye mirror." Jackman and Webster obtained the first photographs of the human retina in vivo by attaching the camera to the patient's head to reduce image motion during the 2.5-minute exposures that were required [14]. Subsequently, the fundus camera was improved by blocking the unwanted reflection from the corneal surface, by increasing film sensitivity, and by the electronic flash lamp, which allowed exposures brief enough to avoid eye movement blur. The scanning laser ophthalmoscope (SLO), invented by Robert Webb [15], allowed the use of detectors such as the avalanche photodiode or the photomultiplier tube, which increased the sensitivity of retinal imaging systems well beyond that which could be achieved with photographic film. The use of raster scanning in this instrument instead of flood illumination provided the advantage of real-time video imagery. Moreover, the instrument can be equipped with a confocal pinhole to reject light that originates from retinal planes other than the plane of interest, providing an optical sectioning capability.
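The double-pass light budget described above is simple arithmetic, sketched here with the figures quoted in the text (the loop values are illustrative endpoints, not measurements):

```python
# Double-pass light budget for retinal imaging: fundus reflectance of
# 0.1% to 10% (over 400-700 nm) combined with a further ~100-fold loss
# because the pupil intercepts only a small part of the reflected light.
pupil_collection = 0.01                      # ~factor-of-100 pupil loss
for reflectance in (0.001, 0.10):            # 0.1% and 10% reflectance
    returned = reflectance * pupil_collection
    print(f"reflectance {reflectance:.1%}: fraction returned {returned:.0e}")
```

Multiplying the two factors reproduces the 10⁻³ to 10⁻⁵ range given in the text.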
The application of optical coherence tomography (OCT) to the eye enabled even greater improvements in axial resolution [16, 17], with axial resolutions as high as 1 to 3 µm in vivo [18].

1.1.2.1 Microscopic Retinal Imaging In Vivo While conventional fundus cameras, SLOs, and OCT systems provide a macroscopic view of the living retina, they do not have the transverse resolution needed to reveal retinal features on the spatial scale of single cells. Photoreceptor cells had been observed in the living eyes of animals with good optics and large receptors, such as the cane toad and the snake [19, 20]. At about this time, my laboratory was characterizing the human photoreceptor mosaic in vivo, relying on the subjective observations of aliasing effects caused by imaging interference fringes on the retina [21]. Artal and Navarro wondered whether another form of interferometry, akin to stellar speckle interferometry [22], could be used to obtain objective information about the cone mosaic [23]. They collected images of the speckle patterns generated by illuminating small patches of living human retina with coherent light. The average power spectrum of
multiple images revealed a local maximum corresponding to the fundamental frequency of the cone mosaic, providing the first evidence that information about the granularity of the cone mosaic can be obtained from images of the living human eye. Marcos and Navarro improved this technique and were able to relate objective measures of cone spacing to visual acuity [24]. Unfortunately, this method does not allow for the direct observation of the photoreceptor mosaic because the photoreceptors are obscured by the interference of the coherent light reflected from multiple layers of the retina. To avoid this problem, Miller et al. constructed a high-magnification fundus camera capable of illuminating the retina with incoherent light [25]. By dilating the pupil to reduce the influence of diffraction and with careful correction of defocus and astigmatism, they obtained the first direct images of the human cone mosaic in vivo. This could be achieved in only a subset of young eyes with very good optical quality. Better images awaited methods to measure and correct not only defocus and astigmatism but also the higher order monochromatic aberrations of the eye.

1.1.2.2 Adaptive Optics Modern measurements of the wave aberration began with Smirnov, who employed a subjective vernier task to measure the retinal misalignment of rays entering through different parts of the pupil, providing a description of the third- and fourth-order aberrations [26]. Smirnov recognized that his method could, in principle, allow for the fabrication of contact lenses that corrected higher order aberrations in the eye, but thought that the lengthy calculations required to compute the wave aberration made this approach impractical. He could not foresee the rapid development of computer technology that would eventually make it possible to compute the eye's wave aberration in a matter of milliseconds.
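Computing a wave aberration from measured ray deviations of this kind is, at bottom, an integration of local wavefront slopes, the same computation later automated in Shack–Hartmann sensing. A minimal one-dimensional sketch with synthetic values (the lenslet focal length and sample geometry are assumed for illustration, not taken from any instrument described here):

```python
# One-dimensional sketch of wavefront reconstruction from ray deviations:
# each measurement gives the local slope dW/dx of the wavefront W at one
# pupil position (in a Shack-Hartmann sensor, slope = spot displacement
# divided by lenslet focal length); integrating the slopes recovers W up
# to an unknown constant. All numbers here are synthetic.
f = 5e-3                                   # lenslet focal length in m (assumed)
dx = 0.25                                  # sample spacing, normalized pupil units
xs = [i * dx for i in range(-4, 5)]        # pupil positions, -1 to +1
true_w = [0.5 * x ** 2 for x in xs]        # synthetic wavefront: pure defocus
spots = [f * x for x in xs]                # measured displacements, since dW/dx = x

slopes = [d / f for d in spots]            # recover slopes from spot shifts
w = [true_w[0]]                            # anchor the unknown constant at the edge
for i in range(1, len(xs)):
    w.append(w[-1] + 0.5 * (slopes[i - 1] + slopes[i]) * dx)  # trapezoid rule

print(max(abs(a - b) for a, b in zip(w, true_w)))  # ~0: exact for this wavefront
```

Real reconstructions fit hundreds of two-dimensional slope measurements to Zernike derivatives by least squares, but the principle is the same.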
Following Smirnov’s pioneering work, a number of investigators devised a variety of different methods to characterize the wave aberration [27]. Walsh, Charman, and Howland demonstrated an objective method that greatly increased our understanding of the properties of the eye’s wave aberration [28]. However, this technology was sophisticated for its time, and while its value was appreciated by some scientists, it was not ready for clinical adoption. This situation changed abruptly when Junzhong Liang, working as a graduate student in Josef Bille’s laboratory at the University of Heidelberg (see Fig. 1.1), demonstrated that it was possible to adapt the Shack–Hartmann wavefront sensor, typically used in optical metrology, to measure the eye’s wave aberration [29]. This proved to be the key development that paved the way to closed-loop adaptive optics systems for the eye. The simplicity of the Shack–Hartmann method and the fact that it is the wavefront sensor used in most astronomical adaptive optics systems made it easier to translate adaptive optics to the eye. The method was also amenable to automation, as our group at Rochester eventually demonstrated in collaboration with Pablo Artal’s group at the University of Murcia [30]. We were able to measure the wave aberration in real time, showing that the most significant temporal fluctuations in the eye’s wave aberration were
FIGURE 1.1 Josef Bille and Junzhong Liang, two of the inventors of the first Shack–Hartmann wavefront sensor for the eye, on the day Liang defended his Ph.D. thesis in 1992.
caused by focus changes associated with the microfluctuations of accommodation. Adaptive correction of the eye's wave aberration has its origins in astronomy, specifically in Horace Babcock's proposed solution to the problem of imaging stars through the turbulent atmosphere [31]. Babcock introduced the idea of an adaptive optical element that could correct the time-varying aberrations caused by atmospheric turbulence. Due to the technical complexity of measuring atmospheric aberrations and of fabricating and controlling a deformable mirror to correct them, the first successful demonstration of adaptive optics in astronomy was not made until 1977, by Hardy and his colleagues [32]. Many of the major ground-based telescopes around the world are now equipped with adaptive optics, which can sometimes achieve images with higher resolution than those obtained with the Hubble Space Telescope [33, 34]. In 1989, Andreas Dreher and colleagues, also working in Josef Bille's laboratory in Heidelberg, described the first attempt to use a deformable mirror to improve retinal images in a scanning laser ophthalmoscope [35]. They were able to use a deformable mirror to correct the astigmatism in one subject's eye based on a conventional spectacle prescription. In 1993, Junzhong Liang, who had previously demonstrated the first Shack–Hartmann wavefront sensor in Heidelberg, joined my laboratory as a postdoctoral fellow. We developed a high-resolution wavefront sensor that provided a more complete description of the eye's wave aberration, measuring up to 10 radial Zernike orders [36]. These measurements showed that higher order aberrations can be significant sources of retinal image blur in some eyes, especially
when the pupil is large. Liang, Don Miller, and I then constructed the first closed-loop adaptive optics system that could correct higher order aberrations in the eye (see Fig. 1.2) [37]. We might never have built this instrument were it not for the availability of the first deformable mirror made by Xinetics, Inc., a small startup that Mark Ealey had just launched. Liang and I had also received encouragement and advice from Bob Fugate, head of the Starfire Optical Range (a satellite-tracking telescope equipped with adaptive optics). This first system required about 15 min for each loop of measuring and correcting the wave aberration, with 4 or 5 loops required to complete the correction. Wavefront sensing was not yet automated for the eye, and each frame of Shack–Hartmann spots required tedious adjustment of the centroid estimates to correct errors made by the imperfect centroiding algorithm available at the time. Tedious though our first experiments with adaptive optics were, we were able to improve contrast sensitivity beyond what was possible with a conventional spectacle correction, and we obtained higher contrast images of the cone mosaic than Miller had previously obtained without adaptive optics. Real-time correction of the wave aberration [38, 39] was not possible until the development of automated wavefront sensing, which allowed the first real-time measurement of the eye's wave aberration [30]. Fortunately for vision science, an adaptive optics system operating with a closed-loop bandwidth of only a few hertz is adequate to capture the most important temporal changes in the fixating eye, with diminishing returns for higher bandwidths [40].
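The closed-loop behavior described here, repeated measure-and-correct cycles converging on a flat wavefront, can be sketched as a simple integrator controller (the gain and aberration values are illustrative, not those of the Rochester system):

```python
# Sketch of closed-loop AO correction as a pure integrator. Each cycle
# the wavefront sensor reads the residual error and a gain-scaled
# correction is added to the mirror command, so the residual shrinks
# geometrically: residual after n cycles = (1 - gain) ** n * initial.
aberration = 1.0        # static aberration to correct, arbitrary RMS units
mirror = 0.0            # accumulated mirror correction
gain = 0.5              # loop gain; kept below 1 for stability
residuals = []
for _ in range(5):
    residual = aberration - mirror       # wavefront sensor measurement
    residuals.append(residual)
    mirror += gain * residual            # integrator update
print(residuals)  # [1.0, 0.5, 0.25, 0.125, 0.0625]
```

For a static aberration even a slow loop converges; the few-hertz bandwidth quoted in the text matters only for tracking the eye's temporal fluctuations.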
FIGURE 1.2 The University of Rochester's adaptive optics ophthalmoscope, the first system capable of measuring and correcting higher order aberrations in the eye. The instrument used a 37-actuator deformable mirror made by Xinetics, Inc. From left to right, Don Miller, Junzhong Liang, and David Williams in 1996.
1.2 APPLICATIONS OF OCULAR ADAPTIVE OPTICS

1.2.1 Vision Correction
By compensating for the eye's higher order aberrations with adaptive optics, Liang, Williams, and Miller showed that the contrast sensitivity of the eye in monochromatic light could be improved over that obtained with the best spectacle correction [37]. This result spurred the rapid development of wavefront-guided refractive surgery, in which measurements of the eye's wave aberration control an excimer laser to correct the eye's static higher order aberrations as well as defocus and astigmatism (see also Chapter 12). The most enthusiastic proponents had hopes that this surgical procedure could improve essentially everyone's vision beyond 20/20. However, there are practical limitations on the visual benefit of correcting higher order aberrations. First of all, the benefit is limited to conditions under which the pupil is larger than about 4 mm, that is, dim illumination. Yoon and Williams subsequently confirmed the increases in both contrast sensitivity and acuity with higher order aberrations corrected with adaptive optics, but due to chromatic aberration in the eye, these increases were reduced in white light compared with the increases that can be obtained in monochromatic light (see Fig. 1.3) [41]. Changes in the wave aberration with accommodation reduce the visual benefit to some extent [42]. In addition, there is the possibility that the reduction in the depth of field caused by correcting higher order aberrations could actually decrease visual performance in some circumstances. Finally, the visual benefit is only as good as the technology available to correct the wave aberration, and factors such as errors in the alignment of the excimer laser with the eye and biomechanical changes in the eye following surgery reduce the accuracy of higher order correction.
Although wavefront correction will never provide everyone with supervision, it remains clear that there are eyes that can benefit from the correction of higher order aberrations. Indeed, in some eyes, the correction of higher order aberrations can produce truly dramatic improvements in visual performance by restoring poor vision to near normal levels. Examples include eyes with keratoconus and penetrating keratoplasty as well as normal eyes that happen to have large amounts of higher order aberrations. Chapter 11 addresses the prognosis for contact lenses that correct higher order aberrations, and Chapter 12 discusses current progress in customized vision correction with refractive surgery.

1.2.1.1 Automated Refraction with Adaptive Optics Adaptive optics may eventually play a role in the clinical refraction of the eye. A consortium led by Lawrence Livermore National Laboratory, including the University of Rochester, Bausch & Lomb, Wavefront Sciences, Boston Micromachines, and Sandia National Laboratories, has developed a phoropter equipped with adaptive optics for automatically refracting the eye. This device allows a
FIGURE 1.3 The measured contrast sensitivity (lower panels) and visual benefit (upper panels) for two subjects when correcting various aberrations across a 6-mm pupil: both monochromatic and chromatic aberrations (filled circles, dashed black line), monochromatic aberrations only (open circles, solid gray line), and defocus and astigmatism only (× symbols, solid black line). The visual benefit is the ratio of the contrast sensitivity with higher order aberrations corrected to the contrast sensitivity with only defocus and astigmatism corrected. Note that chromatic aberration, which is present in normal viewing, tempers the visual benefit of correcting higher order aberrations. These two observers show about a twofold increase in contrast sensitivity in white light, which is the modal benefit computed from population data of the wave aberration. (From Yoon and Williams [41]. Reprinted with permission of the Optical Society of America.)
patient to view a visual acuity chart through adaptive optics. The AO system automatically measures and corrects the aberrations of the patient’s eye and provides a prescription for glasses, contact lenses, or refractive surgery. The phoropter measures the eye’s wave aberration with a Shack–Hartmann wavefront sensor. Wavefront correction in the phoropter is achieved with a microelectromechanical systems (MEMS) deformable mirror.
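Deriving a spectacle prescription from a measured wave aberration rests, at second order, on the standard power-vector conversion of the Zernike defocus and astigmatism coefficients into sphere, cylinder, and axis. A sketch of that conversion (the coefficient values are invented for illustration; real devices also weigh higher order aberrations and neural factors):

```python
import math

# Standard power-vector conversion: second-order Zernike coefficients in
# micrometers over a pupil of radius r (in mm) map to the spherical
# equivalent M and astigmatic components J0, J45 in diopters, and thence
# to a spherocylindrical prescription. Coefficient values are invented.
c20, c22, c2m2 = -1.0, 0.3, -0.1    # defocus, 0/90 astig, 45/135 astig (um)
r = 3.0                              # pupil radius in mm (a 6-mm pupil)

M = -c20 * 4 * math.sqrt(3) / r ** 2       # spherical equivalent (D)
J0 = -c22 * 2 * math.sqrt(6) / r ** 2      # with/against-the-rule astigmatism
J45 = -c2m2 * 2 * math.sqrt(6) / r ** 2    # oblique astigmatism

cyl = -2 * math.hypot(J0, J45)             # minus-cylinder convention
sph = M - cyl / 2
axis = math.degrees(0.5 * math.atan2(J45, J0)) % 180
print(f"{sph:+.2f} {cyl:+.2f} x {axis:.0f}")
```

As the text notes in the next subsection, the defocus and astigmatism that optimize subjective image quality also depend on the higher order terms, so this second-order conversion is only the starting point.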
1.2.1.2 Image Quality Metrics Assessed with Adaptive Optics The effective use of wavefront sensing to refract the eye depends on the ability to transform the wave aberration, which is usually defined by several dozen Zernike coefficients, into the values of sphere, cylinder, and axis that generate the best subjective image quality. This transformation is not as trivial as it might first appear because the amounts of defocus and astigmatism required to optimize image quality depend on higher order aberrations as well as neural factors [43]. Adaptive optics has played a useful role in this effort because it can be used not only to correct aberrations but also to generate them (see also Section 5.4.3). This has allowed Li Chen at Rochester to measure the effect of aberrations on vision with psychophysical methods and helped to generate plausible metrics for image quality [44].

1.2.1.3 Blur Adaptation The use of adaptive optics to generate aberrations has also been used in clever experiments by Pablo Artal to reveal the eye's adaptation to its own point spread function (PSF) [45]. Artal measured the subjective blur when subjects viewed a scene through their normal wave aberration, as well as through rotated versions of their normal wave aberration. Li Chen and Ben Singer developed the method to rotate the wave aberration using the deformable mirror in the adaptive optics system. Despite the fact that the amount of aberration in all conditions was constant, the subjective blur varied significantly, with the least blur occurring when the subject was seeing the world through his or her own wave aberration. These experiments reveal the neural mechanisms that influence subjective image quality and show that the nervous system has learned to at least partially discount the blur produced by the particular pattern of aberrations through which it must view the world.
1.2.2 Retinal Imaging

The use of adaptive optics to increase the resolution of retinal imaging promises to greatly extend the information that can be obtained from the living retina. Adaptive optics now allows the routine examination of single cells in the eye, such as photoreceptors and leukocytes, providing a microscopic view of the retina that could previously only be obtained in excised tissue. The ability to see these structures in vivo provides the opportunity to noninvasively monitor normal retinal function, the progression of retinal disease, and the efficacy of therapies for disease at a microscopic spatial scale.

1.2.2.1 Photoreceptor Optics Revealed with Adaptive Optics Retinal Imaging The benefit of adaptive optics for photoreceptor imaging can be seen in Figure 15.7. Adaptive optics has also proved useful in studying the optical properties of single cones in vivo, properties that are difficult if not impossible to study in excised retina. Cone photoreceptors appear bright in high-resolution images because they act as waveguides, radiating the light
incident on them back toward the pupil in a relatively narrow beam with a roughly Gaussian profile. Images of the cone mosaic have high contrast over a wide range of wavelengths [46], as shown in Figure 1.4. The angular dependence of the light radiated from the cones is closely related to the Stiles–Crawford effect measured psychophysically. The Stiles–Crawford effect describes the loss in sensitivity of the eye to light incident on the mosaic with increasing obliquity from the optical axes of the receptors, which point roughly toward the pupil center [47]. This tuning function is measured with a relatively large number of cones and is therefore the combination of the waveguide properties of single photoreceptors and the disarray in individual cone pointing direction. Though psychophysical methods have suggested that the disarray is likely to be small [48], it has not been possible to disentangle these factors with direct measurements in the human eye. With adaptive optics we have succeeded in measuring the angular tuning properties of individual human cones for the first time, and the disarray in individual cone axes that contributes to the angular tuning properties of the retina as a whole [49]. Figure 1.5 shows images of the same patch of cones when they are illuminated with light entering different locations of the pupil. The image at
FIGURE 1.4 Images of the cone mosaic obtained at 1° eccentricity in the temporal retina with wavelengths of 550, 650, and 750 nm. The top row shows the registered raw images and the bottom row shows the same images deconvolved to remove any differential effects of the eye's PSF with wavelength. Note that the image contrast is relatively independent of wavelength. Scale bar is 10 µm. (From Choi et al. [46]. Reprinted with permission of the Optical Society of America.)
FIGURE 1.5 A series of 9 images obtained at the same retinal location (1° eccentricity) on the same retina, but with the entry point of light in the pupil shifted in 1-mm increments in the horizontal and vertical pupil meridians. The central image corresponds to a location near the pupil center, which returns the most light. Due to the directionality of the cone reflectance, the light returning when the retina is illuminated obliquely is reduced. Each image is the registered sum of 8 raw images. The pupil used to illuminate the retina had a diameter of 1.5 mm, while the pupil used to image the retina had a diameter of 6 mm. Scale bar is 5 arcmin. (From Pallikaris et al. [50]. Reprinted with permission of the Association for Research in Vision and Ophthalmology.)
the center corresponds to illumination at the pupil center, and the surrounding images correspond to oblique illumination in 1- and 2-mm increments from the pupil center. By comparing the intensities of each cone under the different illumination conditions, we can determine the directional sensitivity of each cone.
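Quantitatively, this per-cone comparison amounts to fitting the classic Stiles–Crawford form I(r) = I₀ · 10^(−ρ(r − r₀)²) to each cone's intensities across pupil entry positions r, yielding that cone's directionality ρ and pointing offset r₀. A one-dimensional sketch with synthetic, noise-free intensities (real data span a 2-D grid of entry points and require a least-squares fit):

```python
import math

# Fit the Stiles-Crawford form I(r) = I0 * 10**(-rho * (r - r0)**2) to a
# cone's intensity at three pupil entry positions r (in mm). Parameter
# values are synthetic. With noiseless samples, log10(I) is an exact
# parabola in r, so three points determine rho and r0.
rho_true, r0_true, I0 = 0.05, 0.3, 1.0
intensity = {r: I0 * 10 ** (-rho_true * (r - r0_true) ** 2) for r in (-1, 0, 1)}

y = {r: math.log10(v) for r, v in intensity.items()}
quad = (y[1] + y[-1]) / 2 - y[0]    # quadratic coefficient = -rho
lin = (y[1] - y[-1]) / 2            # linear coefficient = 2 * rho * r0

rho = -quad
r0 = lin / (2 * rho)
print(rho, r0)  # recovers 0.05 and 0.3, up to floating-point error
```

The spread of the recovered r₀ values across cones is precisely the pointing disarray plotted in Figure 1.6.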
Figure 1.6 shows the pointing direction of each cone relative to the center of the pupil for two subjects. For each subject, all the cones are tuned to approximately the same direction. The disarray in cone pointing direction is only about 0.11 times the width of the tuning function for a single cone, implying that the Stiles–Crawford effect is a good estimate of the angular tuning of single cones. Additional experiments using adaptive optics have revealed new optical properties of the cone photoreceptors. Pallikaris et al. observed large differences in the reflectance of different cones and found that the reflectance of the same cone sometimes changed several-fold over time [50]. These changes were found in all three cone classes and were not caused by changes in the directionality of individual cones. While the changes Pallikaris et al. observed occurred over time scales of minutes to days, Don Miller's group has recently demonstrated that there are also short-term fluctuations in cone reflectance [51]. They have also shown that these changes can be induced by photopigment bleaching. The cause or causes of these temporal variations remain a matter of investigation, but they may ultimately provide a valuable optical diagnostic of functional activity within each cell.

1.2.2.2 Imaging the Trichromatic Cone Mosaic One of the first demonstrations of the scientific value of retinal imaging with adaptive optics was its use
FIGURE 1.6 Pupils of two subjects with the origin corresponding to the geometric center of the pupil. Each dot represents the location where the optical axis of a single cone intersects the pupil plane. These locations are tightly clustered, with standard deviations of 180 and 160 µm, respectively, indicative of the small amount of disarray in the alignment of cones within the retina. (From Roorda and Williams [49]. Reprinted with permission of the Association for Research in Vision and Ophthalmology.)
in identifying the photopigment in single human cones in vivo [52, 53]. We had known for nearly 200 years that human color vision depends on three fundamental channels in the retina [54], now referred to as the short wavelength (S), middle wavelength (M), and long wavelength (L) sensitive cones. The packing arrangement and relative numbers of two of the three cone classes (the L and M cones) remained unclear. With adaptive optics, we succeeded in classifying large numbers of living human foveal cones by comparing images of the photoreceptor mosaic when the photopigment was fully bleached to those when the photopigment was selectively bleached with different wavelengths of light. We used 650- and 470-nm light to selectively bleach L cones and M cones, respectively. Heidi Hofer subsequently improved this method and increased the number of eyes that were characterized [55]. Julian Christou improved it still further by showing that deconvolution of the retinal images could decrease the error rate in classifying the pigment in each cone [56]. Figure 1.7 shows the combined results of the Hofer and Roorda studies, in which the S, M, and L cones have been pseudo-colored blue, green, and red, respectively. These results illustrate the random, or nearly random, packing arrangement of the cones as well as the large variation from eye to eye in the ratio of L to M cones in the normal retina. The retinas span a 45-fold range of L to M cone ratio. The ability to directly observe the ratio of L to M cones in the living eye allowed us to settle a controversy about the role of cone numerosity on color appearance. We established that color appearance does not vary with L to M ratio [58, 59]. Pokorny and Smith had previously suggested that experience with the chromatic environment rather than cone numerosity establishes the subjective boundaries between hues [60]. Neitz et al.
have reported some experimental support for this view, showing that the color boundary between red and green, termed unique yellow, can undergo modification over a period of many days of exposure to an altered chromatic environment [59]. Adaptive optics has also provided an opportunity to explore the color appearance produced by stimulating individual cones or small groups of cones with tiny, brief flashes of monochromatic light [61]. It has been known since Holmgren that the color appearance of such stimuli fluctuates from flash to flash, presumably depending on the specific photoreceptors that are excited by each flash [62]. Adaptive optics allows us to make much more compact light distributions on the retina, enhancing these color fluctuations and making them easier to study in the laboratory. Hofer et al. could explain the variation in color appearance with a model in which different cones containing the same photopigment produce different chromatic sensations when stimulated [61]. These experiments showed that the color sensation produced by stimulating a cone depends on the circuitry each cone feeds rather than simply on the photopigment the cone contains.

1.2.2.3 Tracking Eye Position with Adaptive Optics Eye movements, even when the eye is fixating, constantly translate the retina relative to the fixation
FIGURE 1.7 Images of the cone mosaics of 10 subjects with normal color vision, obtained with the combined methods of adaptive optics imaging and retinal densitometry. The images are false colored so that blue, green, and red are used to represent the S, M, and L cones, respectively. (The true colors of these cones are yellow, purple, and bluish-purple). The mosaics illustrate the enormous variability in L/M cone ratio. The L/M cone ratios are (A) 0.37, (B) 1.11, (C) 1.14, (D) 1.24, (E) 1.77, (F) 1.88, (G) 2.32, (H) 2.36, (I) 2.46, (J) 3.67, (K) 3.90, and (L) 16.54. The proportion of S cones is relatively constant across eyes, ranging from 3.9 to 6.6% of the total population. Images were taken either 1° or 1.25° from the foveal center. For two of the 10 subjects, two different retinal locations are shown. Panels (D) and (E) show images from nasal and temporal retinas, respectively, for one subject; (J) and (K) show images from nasal and temporal retinas for another subject. Images (C), (J), and (K) are from Roorda and Williams [52]. All other images were made by Heidi Hofer. (See insert for a color representation of this figure.) (From Williams and Hofer [57]. Reprinted with permission from The MIT Press.)
target. The high transverse resolution of retinal imaging systems equipped with adaptive optics provides a unique opportunity to record these eye movements with very high accuracy. Putnam et al. showed that it is possible to record the retinal location of a fixation target on discrete trials with an error at least 5 times smaller than the diameter of the smallest foveal cones [63]. We used this capability to measure the standard deviation of fixation positions
APPLICATIONS OF OCULAR ADAPTIVE OPTICS
across discrete fixation trials, obtaining values that ranged from 2.1 to 6.3 arcmin, with an average of 3.4 arcmin, in agreement with previous studies [63, 64]. Interestingly, the mean fixation location on the retina was displaced from the location of highest foveal cone density by an average of about 10 arcmin (as shown in Fig. 1.8), indicating that cone density alone does not drive the location on the retina selected for fixation. This method may have interesting future applications in studies that require the submicron registration of stimuli with respect to the retina or the delivery of light to retinal features as small as single cells. Whereas the method developed by our group can only record eye position on discrete trials, Scott Stevenson and Austin Roorda have shown that it is possible to extract continuous eye movement records from video-rate images obtained with an adaptive optics scanning laser ophthalmoscope (AOSLO) [66]. Eye movements cause local warping of the image within single video frames as well as translation between frames. The warping and translation information in the images can be used to recover a record of the eye movements that is probably as accurate as any method yet devised. This is illustrated in Figure 1.9, which compares the eye movement record from the AOSLO with that from a Dual Purkinje Eye Tracker. The noise in the AOSLO trace is on the order of a few arc seconds, compared to about a minute of arc for the Dual Purkinje Eye Tracker. Note also the greatly reduced overshoot following a saccade in the AOSLO trace. These overshoots are thought to be partly artifacts caused by lens wobble following the saccade and do not reflect the true position of the retinal image. The AOSLO is not susceptible to this artifact because it tracks the retinal position directly rather than relying on reflections from the anterior optics. Adaptive optics will no doubt prove to
FIGURE 1.8 Area of highest cone density is not always used for fixation. Shown are retinal montages of the foveal cone mosaic for three subjects. The black square represents the foveal center of each subject. The dashed black line is the isodensity contour line representing a 5% increase in cone spacing, and the solid black line is the isodensity contour line representing a 15% increase in cone spacing. Dots indicate individual fixation locations. Scale bar is 50 µm. (From Putnam et al. [63]. Reprinted with permission of the Association for Research in Vision and Ophthalmology.)
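The fixation statistics quoted above (a trial-to-trial scatter of a few arcmin plus a mean offset from the point of peak cone density) reduce to simple summary statistics over the landmark positions recovered from each trial's image. The sketch below is illustrative only; `fixation_statistics` and the simulated trial data are hypothetical, not code from the studies cited.

```python
import numpy as np

def fixation_statistics(positions_arcmin):
    """Summarize fixation scatter from per-trial retinal landmark positions.

    positions_arcmin: (N, 2) array of fixation locations (x, y) in arcmin,
    measured relative to the point of peak cone density.
    Returns the mean fixation offset and a pooled one-dimensional
    standard deviation of fixation across trials.
    """
    p = np.asarray(positions_arcmin, dtype=float)
    mean_offset = p.mean(axis=0)  # displacement from peak cone density
    # Pool x and y scatter about the mean into a single 1-D equivalent.
    sigma = np.sqrt(((p - mean_offset) ** 2).sum(axis=1).mean() / 2.0)
    return mean_offset, sigma

# Hypothetical trial data: fixation displaced ~10 arcmin from peak density,
# with a few arcmin of trial-to-trial scatter, as the text describes.
rng = np.random.default_rng(0)
trials = rng.normal(loc=[8.0, 6.0], scale=3.0, size=(30, 2))
offset, sigma = fixation_statistics(trials)
print(np.hypot(*offset), sigma)
```

With real data, `positions_arcmin` would come from locating the fixation target relative to an anatomical landmark in each trial's AO image.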
FIGURE 1.9 Comparison of an eye movement trace obtained from an AOSLO (black line) and that obtained from a Dual Purkinje Eye Tracker (gray line), operating simultaneously. Vertical eye position (arcmin) is plotted against time (s); the Dual Purkinje (dPi) signal shows a lens wobble artifact after the saccade, whereas the AOSLO video shows only a small retinal image shift. Note the reduced noise in the AOSLO trace and the dampened overshoot compared with the Dual Purkinje trace. (Courtesy of Scott Stevenson and Austin Roorda.)
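The idea of recovering an eye trace from the translation of image content can be sketched with FFT-based cross-correlation. This is a minimal illustration rather than the authors' algorithm: it registers horizontal strips of one frame against a reference frame, exploiting the fact that a scanning system acquires each strip at a different time; `strip_offsets` and all parameters are hypothetical.

```python
import numpy as np

def strip_offsets(frame, reference, n_strips=8):
    """Estimate eye motion within one frame by registering horizontal
    strips of `frame` against a full reference frame.

    Because the SLO builds each frame line by line, each strip is acquired
    at a different time; its translation relative to the reference samples
    the eye position at that moment. Returns an (n_strips, 2) array of
    (row, col) shifts in pixels.
    """
    H, W = reference.shape
    ref_f = np.fft.fft2(reference)
    offsets = []
    for rows in np.array_split(np.arange(H), n_strips):
        strip = np.zeros_like(frame)
        strip[rows] = frame[rows]  # zero-padded strip
        # Circular cross-correlation via the cross-power spectrum;
        # its peak sits at the strip's displacement.
        xcorr = np.fft.ifft2(np.fft.fft2(strip) * np.conj(ref_f))
        peak = np.unravel_index(np.argmax(np.abs(xcorr)), (H, W))
        # Wrap shifts into [-H/2, H/2) and [-W/2, W/2).
        dy = (peak[0] + H // 2) % H - H // 2
        dx = (peak[1] + W // 2) % W - W // 2
        offsets.append((dy, dx))
    return np.array(offsets)
```

Converting the per-strip shifts to arcmin using the image scale, and time-stamping each strip by its position in the raster, yields an eye trace sampled many times per frame.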
be a useful tool for new studies of eye movements that require high accuracy. Roorda’s group is already using adaptive optics to conduct psychophysical experiments in which the effect of small eye movements on a vernier acuity task can be measured to determine whether the brain compensates for these small movements. Moreover, the process of recovering the eye movement signal makes it possible to register successive video frames and remove the warping artifacts within single frames. Roorda’s group, in collaboration with David Arathorn and Curt Vogel at Montana State University, has demonstrated dewarped and stabilized video images of the retina obtained with adaptive optics. It may eventually be possible to perform these computations in real time, allowing real-time stabilization of the retinal image. An alternative approach, under development by Dan Ferguson and Dan Hammer at Physical Sciences Corporation, is to couple a separate eye tracker to an adaptive optics scanning laser ophthalmoscope for the purposes of image stabilization [67]. These approaches may herald a new generation of psychophysical experiments in which the location of a stimulus on the retinal cone mosaic can be controlled in real time with an error less than the diameter of a single cone. 1.2.2.4 Imaging Retinal Disease One of the most exciting applications of adaptive optics is the diagnosis and treatment of retinal disease. The advantage offered by AO is that the microscopic structure of the diseased
retina can be imaged in vivo and tracked in single eyes, monitoring the progression of the disease or the efficacy of therapy over time. Investigators are only just beginning to explore AO imaging of retinal disease, but there are already several studies that reveal its value. Photoreceptor Degeneration The first discovery about the abnormal eye made with adaptive optics arose from imaging the cone mosaics of color-blind eyes [68]. Dichromatic color vision results from the functional loss of one cone class. However, a central question has been whether red–green color-blind individuals have lost one population of cones (rendering a patchy cone mosaic) or whether they have normal numbers of cones filled with either of two instead of three pigments. Evidence has accumulated favoring the latter view, in which the photopigment in one class of cone is replaced, but the issue has not been resolved directly. The Rochester group obtained images from two dichromats, one of whom showed a remarkable loss of cones in his retina (while the other dichromat had a normal-appearing mosaic). Images from these two subjects are shown in Figure 1.10. The images are from the same retinal eccentricity, about 1° in the temporal direction. The image on the right is from a dichromat who has a novel mutation in one of his cone visual pigment genes, resulting in the loss of the corresponding cone class. The areas where normal cones do not appear have been shown to be functionally blind, using a method developed by Walt Makous. In this method, the subject’s psychometric functions for detecting microflashes delivered to the retina through adaptive optics are shallower than normal and fall well short of 100% detection at moderately high light levels. Both these features are quantitatively consistent with the fraction of the retinal area that is lacking normal cones [69]. The subject on the left is missing the gene for one of his cone visual pigments, and his mosaic is normal in appearance.
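The argument that microscotomas both flatten the psychometric function and cap it below 100% can be captured in a toy model: a flash is detected only if it lands on functional retina and exceeds threshold there. The function below is a hedged illustration; the Weibull form and all parameter names are assumptions for the sketch, not the actual model of Carroll et al. [69].

```python
import numpy as np

def detection_probability(intensity, functional_fraction,
                          threshold=1.0, slope=2.0, guess_rate=0.0):
    """Toy psychometric function for microflash detection when a fraction
    of the stimulated retinal area lacks functional cones.

    Detection requires the flash to land on functional retina (probability
    `functional_fraction`) AND to exceed threshold there (Weibull form).
    The upper asymptote therefore falls short of 100% by roughly the blind
    fraction, as described in the text.
    """
    i = np.asarray(intensity, dtype=float)
    weibull = 1.0 - np.exp(-(i / threshold) ** slope)
    p_seen = functional_fraction * weibull
    # Allow for guessing on trials where the flash was not seen.
    return p_seen + (1.0 - p_seen) * guess_rate

# A subject missing ~30% of his cones should asymptote near 70% detection.
levels = np.array([0.25, 0.5, 1.0, 2.0, 4.0, 8.0])
print(detection_probability(levels, functional_fraction=0.7))
```

Averaging over flashes that straddle blind and functional areas would also shallow the rising portion of the curve, consistent with the two features noted in the text.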
This finding suggested that previous models of dichromacy do not hold for all subjects. One
FIGURE 1.10 Images of the retinas of two dichromats. One dichromat is missing the gene for his L cones and his mosaic is complete, as shown on the left. The other has all three photopigment genes, but his M pigment gene has a mutation that results in the loss of normal cones of that class, as shown on the right. Scale bar is 20 µm.
of the intriguing aspects of the eye of the patient missing functional cones is that his visual acuity is 20/16, normal, despite the loss of 30% of his cones. This result highlights the insensitivity of conventional clinical tests and points to the role that adaptive optics may eventually play in the early detection of diseases that produce the dropout of photoreceptors or perhaps other neurons, such as ganglion cells. Mosaic in Cone–Rod Dystrophy Adaptive optics has also proven valuable in imaging retinal degeneration. Jessica Wolfing and her colleagues at Rochester and UC Berkeley have imaged a patient with cone–rod dystrophy, which is characterized by a bull’s-eye lesion in the macula [70]. Adaptive optics images were taken in the bull’s-eye lesion as well as the relatively spared central macula using Rochester’s adaptive optics ophthalmoscope and Austin Roorda’s AOSLO. Adaptive optics retinal images of the cone–rod dystrophy patient were compared to those of an age-matched normal subject (Fig. 1.11). In the center of the bull’s-eye lesion, which appeared relatively spared using conventional ophthalmoscopy, adaptive optics revealed a nearly continuous cone photoreceptor mosaic. In the central 1.25°, the cones were larger than normal and cone density was decreased (Fig. 1.12). At the fovea, the patient’s cone density was 30,100 cones/mm² instead of the normal average of 199,200 cones/mm² [71]. From 2.5° to 4°, the area corresponding to the atrophic bull’s-eye lesion, patches devoid of waveguiding cones were interspersed with highly reflective areas.
FIGURE 1.11 (Top) Adaptive optics retinal images of the right eye of the cone–rod dystrophy patient taken at (a) the fovea, (b) 1°, (c) 2.5°, and (d) 4° nasal to fixation, respectively (from left to right). (Bottom) Images of an age-matched normal subject at the same respective eccentricities temporal to fixation. Scale bar is 25 µm.
FIGURE 1.12 Cone density and cone diameter versus retinal eccentricity in normal subjects compared with the cone–rod dystrophy patient. Measurements of cone density were made only in areas with a complete photoreceptor mosaic (central 1.25° for the cone–rod dystrophy patient). Light gray shaded region: normal range of cone densities measured using microscopy [71]. Dark gray circles: mean and standard deviation of cone density of normal subjects measured from adaptive optics images using a direct counting procedure. Black diamonds: cone density for the cone–rod dystrophy patient. Dark gray bars (bottom panel): mean normal cone diameter. Black bars (bottom panel): mean cone–rod dystrophy cone diameter.
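The densities quoted above come from direct counts of cones in AO image patches, converted to cones/mm² through the retinal magnification of the eye. The helper below is a hypothetical sketch of that conversion; the function name, the counts, and the schematic-eye scale factor of ~0.291 mm/deg are assumptions, not values from the study.

```python
def cone_density_per_mm2(n_cones, patch_deg, mm_per_deg=0.291):
    """Convert a direct cone count in a square image patch to cones/mm^2.

    n_cones:     cones counted in the patch (e.g., by marking each bright
                 spot in the AO image exactly once)
    patch_deg:   side length of the square patch in degrees of visual angle
    mm_per_deg:  retinal magnification; ~0.291 mm/deg is a common
                 schematic-eye value for an emmetropic human eye
    """
    side_mm = patch_deg * mm_per_deg
    return n_cones / (side_mm * side_mm)

# Hypothetical counts chosen to roughly reproduce the foveal figures
# quoted in the text (~199,200 normal vs. ~30,100 cones/mm^2 patient).
normal = cone_density_per_mm2(n_cones=169, patch_deg=0.1)
patient = cone_density_per_mm2(n_cones=25, patch_deg=0.1)
print(round(normal), round(patient))
```

Note that the degrees-to-millimeters conversion depends on axial length, so per-eye biometry would be needed for clinical-grade density estimates.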
Longitudinal studies of disease progression or investigations of early-stage and presymptomatic cone–rod dystrophy patients would help determine the mechanism underlying the increased cone diameter and decreased cone density observed in this study, providing insight into developmental and degenerative processes in normal and diseased retina. This study demonstrates the viability of applying adaptive optics retinal imaging to provide quantitative cell counts in retinal disease, an important diagnostic for tracking the progression of the disease as well as for assessing the efficacy of therapy. 1.2.2.5 High-Resolution Imaging of Vascular Structure and Blood Flow Accurate measurements of blood vessel structure and function are critical for numerous retinal diseases. Current technologies for directly measuring blood flow in small vessels are limited by the axial and transverse resolutions of retinal imaging. Doppler OCT methods improve axial resolution but still suffer from poor lateral resolution. Also, direct methods to measure flow rely on injecting contrast agents such as fluorescein, which carries a risk of a reaction that can be fatal. In addition, the observations can be made only during
the brief dwell time of the dye in the retina. Joy Martin in Austin Roorda’s group has shown that adaptive optics allows for the noninvasive measurement of blood flow without the need for contrast agents [72]. This allows for imaging over much longer observational periods in individual patients. In vivo studies of blood flow in these smallest of vessels may provide valuable information about early vascular changes in diseases such as diabetic retinopathy. Figure 1.13 shows measurements of blood velocity in different capillaries surrounding the fovea. 1.2.2.6 AO-Assisted Vitreo-Retinal Surgery Some aspects of retinal microsurgery may benefit from the high resolution provided by adaptive optics, an application of adaptive optics that has yet to be explored. For example, an AO-equipped surgical microscope may improve the ability to target microaneurysms in diabetic retinopathy. Other procedures that might benefit include the removal of micron-thick structures from the surface of the retina (such as epiretinal membranes), the removal of tissue (such as choroidal neovascularization [CNV] lesions) from the subretinal space, and the treatment of retinal vein occlusions by cannulation or by incising the thin connective tissue sheath separating retinal veins and arteries. The capability to stabilize the image, described earlier, would be valuable in the delivery of therapeutic laser beams through an AO-equipped surgical microscope, since, for example, it would be possible to target the desired location and minimize damage to adjacent retinal locations.
FIGURE 1.13 Measurements of blood velocity in individual capillaries surrounding the foveal avascular zone in a normal patient. (From Carroll et al. [73]. Reprinted with permission of the Optical Society of America.)
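Velocity measurements like those in Figure 1.13 amount to tracking a blood cell along a capillary across frames and converting displacement per frame into physical units. The sketch below is a hypothetical illustration of that conversion; the function name and the numbers in the example are assumptions, not data from the cited study.

```python
def capillary_velocity_mm_per_s(displacement_px, frame_rate_hz, um_per_px):
    """Estimate blood cell velocity in a capillary from an image sequence.

    A leukocyte appears as a bright gap in the blood column; tracking its
    position along the capillary in successive frames gives a displacement
    per frame, which the frame rate and image scale convert to a velocity.
    """
    um_per_frame = displacement_px * um_per_px
    return um_per_frame * frame_rate_hz / 1000.0  # mm/s

# Hypothetical numbers: a cell moving 25 pixels/frame at 30 Hz with a
# 1.5 um/pixel image scale travels on the order of 1 mm/s.
v = capillary_velocity_mm_per_s(25, 30.0, 1.5)
print(v)
```

In practice the displacement would be measured along the (curved) capillary path rather than in straight-line pixels, and averaged over many cells to reduce noise.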
1.2.2.7 Future Directions for Improving Resolution and Contrast To date, high-resolution imaging with adaptive optics has been successful at imaging photoreceptors because they are waveguides, which glow brightly by preferentially sending incoming light back toward the pupil. Blood flow can be monitored in single capillaries because of the differences in absorptance between red and white blood cells. However, many of the retinal structures of greatest interest have low contrast as well as low reflectivity. For example, the ganglion cells, which play the critical role of conveying signals from the retina to the brain and are the cells devastated by glaucoma, reflect more than 60 times less light than the photoreceptors do [74]. They and the Müller cell matrix within which they are entangled are necessarily transparent to avoid compromising the quantum efficiency of the photoreceptors beneath them. There are additional hurdles to increasing retinal image contrast and resolution. The maximum permissible light exposure dictated by safety considerations sets an upper bound on the signal, and hence a lower bound on the contrast that can be detected in the presence of noise. Another constraint is speckle arising from the interference of light backscattered from different retinal structures. Speckle can be mitigated by decreasing the temporal coherence of the illumination, but the concomitant increase in spectral bandwidth requires the correction of the eye’s chromatic aberration to reap the full resolution potential of adaptive optics. Imaging inner retinal cells poses challenges that probably cannot be met by adaptive optics alone. The best ophthalmic adaptive optics systems today already approach diffraction-limited imaging using the fully dilated pupil. Modest improvements will no doubt accrue through improved wavefront sensing, better deformable mirror technology, more careful calibration, and optimized control algorithms.
Small resolution gains could be achieved by decreasing wavelength, but this approach is problematic because of the eye’s susceptibility to damage at short wavelengths [75]. The most significant gains are likely to arise by combining adaptive optics with new imaging technologies. Roorda et al. have demonstrated the value of marrying confocal scanning with adaptive optics (an AOSLO), which can optically section the retina with an axial resolution of about 100 µm, rejecting unwanted photons from layers of the retina deeper or shallower than the one of interest (see also Chapter 16) [76]. Adaptive optics not only increases resolution, it also increases the signal by coupling more light into the confocal pinhole. The combination of adaptive optics with optical coherence tomography (AO-OCT) may eventually provide even better axial resolution, as small as ~2 µm. A number of groups are actively pursuing this direction (see also Chapter 17) [77–79]. AO-OCT could produce a point spread function of less than 2 µm in all three spatial dimensions, which would be smaller than the cell bodies of the smallest retinal cells. Though the diffraction limit on resolution set by the relatively low numerical aperture of the eye is a formidable barrier, it may eventually be possible to surpass it using techniques such as structured illumination [80, 81], which has successfully exceeded the diffraction limit in microscopy.
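The diffraction limit invoked above follows directly from the pupil diameter and wavelength. The sketch below evaluates the standard Rayleigh expression θ = 1.22λ/D for a fully dilated pupil; the function names and the reduced-eye focal length (~16.7 mm) are our assumptions for illustration.

```python
import math

def airy_radius_arcmin(wavelength_nm, pupil_mm):
    """Angular radius of the Airy disk (Rayleigh criterion),
    theta = 1.22 * lambda / D, converted to arcmin."""
    theta_rad = 1.22 * (wavelength_nm * 1e-9) / (pupil_mm * 1e-3)
    return math.degrees(theta_rad) * 60.0

def airy_radius_um_on_retina(wavelength_nm, pupil_mm, eye_focal_mm=16.7):
    """Linear radius of the Airy disk on the retina, using a
    reduced-eye posterior nodal distance of ~16.7 mm."""
    theta_rad = 1.22 * (wavelength_nm * 1e-9) / (pupil_mm * 1e-3)
    return theta_rad * eye_focal_mm * 1e3  # micrometers

# Diffraction-limited imaging through an 8-mm dilated pupil at 550 nm:
# the Airy radius is well under an arcmin, comparable to a foveal cone.
print(airy_radius_arcmin(550, 8.0), airy_radius_um_on_retina(550, 8.0))
```

The same formula shows why a fully dilated pupil matters: at a 3-mm daytime pupil the Airy radius is nearly three times larger.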
FIGURE 1.14 Image on the left is a reflectance image of monkey cones obtained with 808-nm illumination in a high-magnification confocal scanning laser ophthalmoscope. Image on the right shows the same area but with fluorescence imaging of rhodamine dextran injected into the lateral geniculate nucleus and retrogradely transported to the ganglion cell bodies.
Additional technologies, such as polarization imaging [82, 83] and multiphoton imaging [84–86], may also prove useful. A promising approach is to use selective fluorescent markers to distinguish cells from the surrounding tissue. An exciting example of this approach is the recent demonstration of in vivo fluorescence imaging of apoptosis of single primate ganglion cells stained with annexin-5 [87]. Dan Gray and Bill Merigan have demonstrated a high-magnification SLO that can image individual ganglion cells in the living primate retina using rhodamine dextran retrogradely transported from the lateral geniculate nucleus into ganglion cell bodies (see Fig. 1.14) [88]. When fluorescence methods are combined with adaptive optics, it may be possible to quantify the extent of damage to particular classes of ganglion cells due to retinal disease, and perhaps even the rescue of these cells pursuant to pharmacological intervention. High-resolution imaging with adaptive optics will benefit greatly from the development of new and more selective molecular markers, especially those that signal specific biochemical events within individual cells. There is also a great need to identify intrinsic signals within the retina and new ways to transport extrinsic markers to their targets noninvasively. Adaptive optics retinal imaging is poised to capitalize on new tools for noninvasive, optical interrogation of the functional activity of single cells and the communication between them.
REFERENCES
1. Willoughby Cashell GT. A Short History of Spectacles. Proc. R. Soc. Med. 1971; 64: 1063–1064.
2. Rubin ML. Spectacles: Past, Present and Future. Surv. Ophthalmol. 1986; 30: 321–327. 3. Kepler J. Ad Vitellionem Paralipomena, quibus Astronomiae Pars Optica Traditur. Frankfurt: Claudius Marnius and the heirs of Jean Aubry, 1604. 4. Van Helden A. The Invention of the Telescope. Trans. Am. Phil. Soc. 1977; 67: no. 4. 5. Scheiner C. Oculus, hoc est: Fundamentum opticum. Innsbruck: Oeniponti, 1619. 6. Young T. On the Mechanism of the Eye. Phil. Trans. R. Soc. London. 1801; 91: 23–88. 7. Helmholtz H. In: Southall JP, ed. Physiological Optics. Rochester, NY: Optical Society of America, 1924. 8. van Norren D, Tiemeijer LF. Spectral Reflectance of the Human Eye. Vision Res. 1986; 26: 313–320. 9. Delori FC, Pflibsen KP. Spectral Reflectance of the Human Ocular Fundus. Appl. Opt. 1989; 28: 1061–1077. 10. Purkinje J. Beobachtungen und Versuche zur Physiologie der Sinne [Observations and Experiments Investigating the Physiology of Senses]. Erstes Bändchen. Beiträge zur Kenntniss des Sehens in subjectiver Hinsicht. Prague: Calve, 1823. 11. Kruta V. J.E. Purkyne (1787–1869) Physiologist. A Short Account of His Contributions to the Progress of Physiology with a Bibliography of His Works. Prague: Academia, Publishing House of the Czechoslovak Academy of Sciences, 1969. 12. Brücke EW. Anatomische Beschreibung des Menschlichen Augapfels. Berlin: G. Reimer, 1847. 13. Helmholtz HLF. Beschreibung eines Augen-Spiegels zur Untersuchung der Netzhaut im lebenden Auge [Description of an eye mirror for the investigation of the retina of the living eye]. Berlin: A Förstner’sche Verlagsbuchhandlung, 1851. 14. Jackman WT, Webster JD. On Photographing the Retina of the Living Human Eye. Philadelphia Photographer. 1886; 23: 340–341. 15. Webb RH, Hughes GW, Pomerantzeff O. Flying Spot TV Ophthalmoscope. Appl. Opt. 1980; 19: 2991–2997. 16. Fercher AF, Mengedoht K, Werner W. Eye Length Measurement by Interferometry with Partially Coherent Light. Opt. Lett. 1988; 13: 186–188. 17.
Huang D, Swanson EA, Lin CP, et al. Optical Coherence Tomography. Science. 1991; 254: 1178–1181. 18. Drexler W, Morgner U, Ghanta RK, Kärtner FX, Schuman JS, Fujimoto JG. Ultrahigh-Resolution Ophthalmic Optical Coherence Tomography. Nat. Med. 2001; 7: 502–507. 19. Land MF, Snyder AW. Cone Mosaic Observed Directly through Natural Pupil of Live Vertebrate. Vision Res. 1985; 25: 1519–1523. 20. Jagger WS. Visibility of Photoreceptors in the Intact Living Cane Toad Eye. Vision Res. 1985; 25: 729–731. 21. Williams DR. Aliasing in Human Foveal Vision. Vision Res. 1985; 25: 195–205. 22. Labeyrie A. Attainment of Diffraction-Limited Resolution in Large Telescopes by Fourier Analyzing Speckle Patterns in Star Images. Astron. Astrophys. 1970; 6: 85–87.
23. Artal P, Navarro R. High-Resolution Imaging of the Living Human Fovea: Measurement of the Intercenter Cone Distance by Speckle Interferometry. Opt. Lett. 1989; 14: 1098–1100. 24. Marcos S, Navarro R. Determination of the Foveal Cone Spacing by Ocular Speckle Interferometry: Limiting Factors and Acuity Predictions. J. Opt. Soc. Am. A. 1997; 14: 731–740. 25. Miller DT, Williams DR, Morris GM, Liang J. Images of Cone Photoreceptors in the Living Human Eye. Vision Res. 1996; 36: 1067–1079. 26. Smirnov MS. Measurement of the Wave Aberration of the Human Eye. Biophysics. 1961; 6: 687–703. 27. Howland HC. Ophthalmic Wavefront Sensing: History and Methods. In: Krueger RR, Applegate RA, MacRae SM, eds. Wavefront Customized Visual Correction: The Quest for Super Vision II. Thorofare, NJ: SLACK, 2004, pp. 77–84. 28. Walsh G, Charman WN, Howland HC. Objective Technique for the Determination of Monochromatic Aberrations of the Human Eye. J. Opt. Soc. Am. A. 1984; 1: 987–992. 29. Liang J, Grimm B, Goelz S, Bille JF. Objective Measurement of Wave Aberrations of the Human Eye with the Use of a Hartmann-Shack Wave-front Sensor. J. Opt. Soc. Am. A. 1994; 11: 1949–1957. 30. Hofer H, Artal P, Singer B, Aragon JL, Williams DR. Dynamics of the Eye’s Aberrations. J. Opt. Soc. Am. A. 2001; 18: 497–506. 31. Babcock HW. The Possibility of Compensating Astronomical Seeing. Pub. Astr. Soc. Pac. 1953; 65: 229–236. 32. Hardy JW, Lefebvre JE, Koliopoulos CL. Real-Time Atmospheric Compensation. J. Opt. Soc. Am. 1977; 67: 360–369. 33. Max CE, Canalizo G, Macintosh BA, et al. The Core of NGC 6240 from Keck Adaptive Optics and Hubble Space Telescope NICMOS Observations. Astrophys. J. 2005; 621: 738–749. 34. van Dam MA, Le Mignant D, Macintosh BA. Performance of the Keck Observatory Adaptive-Optics System. Appl. Opt. 2004; 43: 5458–5467. 35. Dreher AW, Bille JF, Weinreb RN. Active Optical Depth Resolution Improvement of the Laser Tomographic Scanner. Appl. Opt. 1989; 28: 804–808. 36. 
Liang J, Williams DR. Aberrations and Retinal Image Quality of the Normal Human Eye. J. Opt. Soc. Am. A. 1997; 14: 2873–2883. 37. Liang J, Williams DR, Miller D. Supernormal Vision and High-Resolution Retinal Imaging through Adaptive Optics. J. Opt. Soc. Am. A. 1997; 14: 2884–2892. 38. Hofer H, Chen L, Yoon GY, et al. Improvement in Retinal Image Quality with Dynamic Correction of the Eye’s Aberrations. Opt. Express. 2001; 8: 631–643. 39. Fernandez EJ, Iglesias I, Artal P. Closed-Loop Adaptive Optics in the Human Eye. Opt. Lett. 2001; 26: 746–748. 40. Diaz-Santana L, Torti C, Munro I, et al. Benefit of Higher Closed-Loop Bandwidths in Ocular Adaptive Optics. Opt. Express. 2003; 11: 2597–2605. 41. Yoon GY, Williams DR. Visual Performance after Correcting the Monochromatic and Chromatic Aberrations of the Eye. J. Opt. Soc. Am. A. 2002; 19: 266–275.
42. Cheng H, Barnett JK, Vilupuru AS, et al. A Population Study on Changes in Wave Aberrations with Accommodation. J. Vis. 2004; 4: 272–280. 43. Williams DR, Applegate RA, Thibos LN. Metrics to Predict the Subjective Impact of the Eye’s Wave Aberration. In: Krueger RR, Applegate RA, MacRae SM, eds. Wavefront Customized Visual Correction: The Quest for Super Vision II. Thorofare, NJ: SLACK, 2004, pp. 77–84. 44. Chen L, Singer B, Guirao A, et al. Image Metrics for Predicting Subjective Image Quality. Optom. Vis. Sci. 2005; 82: 358–369. 45. Artal P, Chen L, Fernandez EJ, et al. Neural Compensation for the Eye’s Optical Aberrations. J. Vis. 2004; 4: 281–287. 46. Choi S, Doble N, Lin J, et al. Effect of Wavelength on in Vivo Images of the Human Cone Mosaic. J. Opt. Soc. Am. A. 2005; 22: 2598–2605. 47. Enoch JM, Lakshminarayanan V. Retinal Fibre Optics. In: Cronly-Dillon J, ed. Vision and Visual Dysfunction, Vol. 1. Boca Raton, FL: CRC, 1991, pp. 280–309. 48. MacLeod DIA. Directionally Selective Light Adaptation: A Visual Consequence of Receptor Disarray? Vision Res. 1974; 14: 369–378. 49. Roorda A, Williams DR. Optical Fiber Properties of Individual Human Cones. J. Vis. 2002; 2: 404–412. 50. Pallikaris A, Williams DR, Hofer H. The Reflectance of Single Cones in the Living Human Eye. Invest. Ophthalmol. Vis. Sci. 2003; 44: 4580–4592. 51. Rha J, Jonnal RS, Zhang Y, Miller DT. Rapid Fluctuation in the Reflectance of Single Cones and Its Dependence on Photopigment Bleaching. Invest. Ophthalmol. Vis. Sci. 2005; 46: e-abstract 3546. 52. Roorda A, Williams DR. The Arrangement of the Three Cone Classes in the Living Human Eye. Nature. 1999; 397: 520–522. 53. Roorda A, Metha AB, Lennie P, Williams DR. Packing Arrangement of the Three Cone Classes in Primate Retina. Vision Res. 2001; 41: 1291–1306. 54. Young T. On the Theory of Light and Colours. Phil. Trans. Roy. Soc. London. 1802; 92: 12–48. 55. Hofer H, Carroll J, Neitz J, et al.
Organization of the Human Trichromatic Mosaic. J. Neurosci. 2005; 25: 9669–9679. 56. Christou JC, Roorda A, Williams DR. Deconvolution of Adaptive Optics Retinal Images. J. Opt. Soc. Am. A. 2004; 21: 1393–1401. 57. Williams DR, Hofer H. Formation and Acquisition of the Retinal Image. In: Chalupa LM, Werner JS, eds. The Visual Neurosciences. Cambridge, MA: MIT Press, 2003, pp. 795–810. 58. Brainard DH, Roorda A, Yamauchi Y, et al. Functional Consequences of the Relative Numbers of L and M Cones. J. Opt. Soc. Am. A. 2000; 17: 607–614. 59. Neitz J, Carroll J, Yamauchi Y, et al. Color Perception Is Mediated by a Plastic Neural Mechanism that Is Adjustable in Adults. Neuron. 2002; 35: 783–792. 60. Pokorny J, Smith VC. Evaluation of a Single Pigment Shift Model of Anomalous Trichromacy. J. Opt. Soc. Am. 1977; 67: 1196–1209. 61. Hofer H, Singer B, Williams DR. Different Sensations from Cones with the Same Photopigment. J. Vis. 2005; 5: 444–454.
62. Holmgren F. Über den Farbensinn. Compt rendu du congres periodique international des sciences medicales Copenhagen. 1884; 1: 80–98. 63. Putnam NM, Hofer HJ, Doble N, et al. The Locus of Fixation and the Foveal Cone Mosaic. J. Vis. 2005; 5: 632–639. 64. Ditchburn RW. Eye-Movements and Visual Perception. Oxford: Clarendon, 1973. 65. Steinman RM, Haddad GM, Skavenski AA, Wyman D. Miniature Eye Movement. Science. 1973; 181: 810–819. 66. Stevenson SB, Roorda A. Correcting for Miniature Eye Movements in High Resolution Scanning Laser Ophthalmoscopy. In: Manns F, Soederberg PG, Ho A, Stuck BE, Belkin M, eds. Ophthalmic Technologies XV. Proceedings of the SPIE. 2005; 5688: 145–151. 67. Hammer DX, Ferguson RD, Iftimia NV, et al. Tracking Adaptive Optics Scanning Laser Ophthalmoscope (TAOSLO). Invest. Ophthalmol. Vis. Sci. 2005; 46: e-abstract 3550. 68. Carroll J, Neitz M, Hofer H, et al. Functional Photoreceptor Loss Revealed with Adaptive Optics: An Alternate Cause of Color Blindness. Proc. Natl. Acad. Sci. 2004; 101: 8461–8466. 69. Carroll J, Lin J, Wolfing JI, et al. Retinal Microscotomas Revealed by Adaptive Optics Microflashes, and a Model. [Abstract] J. Vis. 2005; 5: http://journalofvision.org/5/12/52, doi: 10.1167/5.12.52. 70. Wolfing JI, Chung M, Carroll J, et al. High Resolution Imaging of Cone-Rod Dystrophy with Adaptive Optics. Invest. Ophthalmol. Vis. Sci. 2005; 46: e-abstract 2567. 71. Curcio CA, Sloan KR, Kalina RE, Hendrickson AE. Human Photoreceptor Topography. J. Comp. Neurol. 1990; 292: 497–523. 72. Martin JA, Roorda A. Direct and Noninvasive Assessment of Parafoveal Capillary Leukocyte Velocity. Ophthalmology. 2005; 112: 2219–2224. 73. Carroll J, Gray DC, Roorda A, Williams DR. Recent Advances in Retinal Imaging with Adaptive Optics. Opt. Photon. News. 2005; 16: 36–42. 74. Miller DT. Personal communication. 2005. 75. ANSI. American National Standard for the Safe Use of Lasers ANSI Z136.1-2000. Orlando: Laser Institute of America, 2000. 76.
Roorda A, Romero-Borja F, Donnelly WJ, et al. Adaptive Optics Laser Scanning Ophthalmoscopy. Opt. Express. 2002; 10: 405–412. 77. Hermann B, Fernandez EJ, Unterhuber A, et al. Adaptive-Optics Ultrahigh-Resolution Optical Coherence Tomography. Opt. Lett. 2004; 29: 2142–2144. 78. Zawadzki RJ, Laut S, Zhao M, et al. Retinal Imaging with Adaptive Optics High Speed and High Resolution Optical Coherence Tomography. Invest. Ophthalmol. Vis. Sci. 2005; 46: e-abstract 1053. 79. Zhang Y, Rha J, Jonnal RS, Miller DT. Adaptive Optics Parallel Spectral Domain Optical Coherence Tomography for Imaging the Living Retina. Opt. Express. 2005; 13: 4792–4811. 80. Gustafsson MGL. Surpassing the Lateral Resolution Limit by a Factor of Two Using Structured Illumination Microscopy. J. Microsc. 2000; 198: 82–87.
81. Heintzmann R, Cremer C. Laterally Modulated Excitation Microscopy: Improvement of Resolution by Using a Diffraction Grating. In: Bigio IJ, Schneckenburger H, Slavik J, Svanberg K, Viallet PM, eds. Optical Biopsies and Microscopic Techniques III. Proceedings of the SPIE. 1999; 3568: 185–196. 82. Burns SA, Elsner AE, Mellem-Kairala MB, Simmons RB. Improved Contrast of Subretinal Structures Using Polarization Analysis. Invest. Ophthalmol. Vis. Sci. 2003; 44: 4061–4068. 83. Mellem-Kairala MB, Elsner AE, Weber A, et al. Improved Contrast of Peripapillary Hyperpigmentation Using Polarization Analysis. Invest. Ophthalmol. Vis. Sci. 2005; 46: 1099–1106. 84. Denk W, Strickler JH, Webb WW. Two-Photon Laser Scanning Fluorescence Microscopy. Science. 1990; 248: 73–76. 85. Williams RM, Zipfel WR, Webb WW. Multiphoton Microscopy in Biological Research. Curr. Opin. Chem. Biol. 2001; 5: 603–608. 86. Marsh PN, Burns D, Girkin JM. Practical Implementation of Adaptive Optics in Multiphoton Microscopy. Opt. Express. 2003; 11: 1123–1130. 87. Cordeiro MF, Guo L, Luong V, et al. Real-Time Imaging of Single Nerve Cell Apoptosis in Retinal Degeneration. Proc. Natl. Acad. Sci. 2004; 101: 13352–13356. 88. Gray D, Merigan W, Gee B, et al. High-Resolution in Vivo Imaging of Primate Retinal Ganglion Cells. [Abstract] J. Vis. 2005; 5: http://journalofvision.org/5/12/64, doi: 10.1167/5.12.64.
PART TWO
WAVEFRONT MEASUREMENT AND CORRECTION
CHAPTER TWO
Aberration Structure of the Human Eye PABLO ARTAL, JUAN M. BUENO, ANTONIO GUIRAO, and PEDRO M. PRIETO Universidad de Murcia, Murcia, Spain
2.1
INTRODUCTION
The image-forming properties of any optical system, in particular the eye, can be described completely by the wave aberration. It is defined as the difference between the perfect (spherical) and the actual wavefronts for every point over the eye’s pupil. A perfect eye (without aberrations) forms a perfect retinal image of a point source (Airy disk). In reality, however, imperfections in the refracting ocular surfaces generate aberrations that produce a larger and, in general, asymmetric retinal image. The monochromatic aberrations of the complete eye, considered as one single imaging system, can be measured using a large variety of wavefront sensing techniques (see also Chapter 3). Every ocular surface contributes differently to the overall quality of the retinal image. The relative contribution to the eye’s aberrations of the main ocular components (the crystalline lens and the cornea) can be obtained by the combined use of ocular and corneal aberration data. The monochromatic aberrations of the eye depend on a variety of factors that will be reviewed in this chapter: in particular, accommodation, aging, and retinal eccentricity. Beyond monochromatic aberrations, in normal white-light illumination, chromatic aberrations also play an important role that will
Adaptive Optics for Vision Science, Edited by Porter, Queener, Lin, Thorn, and Awwal Copyright © 2006 John Wiley & Sons, Inc.
be discussed. The effect of polarization and scatter on the retinal image is also considered in the latter sections of this chapter. In adaptive optics (AO) applications, it may be useful to have a statistical description of the aberrations of the system to be corrected. In the case of the atmosphere in astronomical applications, this approach is widely used and very successful. Although the situation in the eye is rather different, we have considered a similar statistical approach for the aberrations in a population of normal eyes. If the aberrations of the eye are known, it is possible to correct them using a wavefront correcting device that compensates for the eye's aberrations in real time. This is a direct application of AO to the eye. In the ideal case, the system of corrector + eye becomes permanently aberration free, producing perfect retinal images. In different laboratories, AO in the eye has been demonstrated using deformable mirrors or liquid crystal spatial light modulators as corrector devices [1–5]. These systems are still laboratory prototypes that include a wavefront sensor and a corrector, allowing vision science investigators to perform visual psychophysics (see also Chapter 14) or to record high-resolution retinal images (see also Chapters 1 and 10) through nearly aberration-free ocular optics. However, for practical applications, aberration correction for the eye probably needs to be performed using simpler approaches than those already demonstrated in research laboratories. The most promising options are customized ablations in refractive surgery and customized contact lenses or intraocular lenses. These are examples of corrections that are either static, permanent, and fixed, such as customized corneal ablations (see also Chapter 12) or intraocular lenses (see also Chapter 11), or reversible and mobile, such as customized contact lenses (see also Chapter 11).
These technologies will be reviewed in other chapters of this book, but their correct implementation in the eye depends heavily on the understanding of the nature of ocular aberrations.
2.2 LOCATION OF MONOCHROMATIC ABERRATIONS WITHIN THE EYE

The optical aberrations in the normal eye depend on many factors and conditions. They vary from individual to individual [6], with pupil size [7, 8], the age of the subject [9–12], accommodation [13, 14], retinal eccentricity [15, 16], refractive state, and so forth. In normal young subjects at the fovea, the average root-mean-square (RMS) wavefront error of the higher order aberrations for a 5-mm pupil diameter is approximately 0.25 µm (or around λ/2). To get a rough intuitive idea of the relative importance of higher order aberrations in normal eyes: in a system affected only by defocus, 0.25 µm of aberration would be approximately equivalent to 0.25 diopters (D) for a 5-mm pupil. This value is very large in the context of precision optics but rather modest in ophthalmic optics. Beyond
defocus and astigmatism, spherical aberration, coma, and trefoil are the most significant aberrations in normal eyes. Why is the eye affected by these aberrations? Where are the sources of the aberrations in the eye? These questions can be answered by simultaneously measuring the aberrations induced by the anterior surface of the cornea and the total ocular aberrations in the same eye. Then, the aberrations of the internal ocular optics, that is, those produced by the posterior corneal surface and the lens, can be determined. This yields the relative contributions of the different optical elements of the eye to the total wavefront. Advanced applications of AO will need or will use such a detailed tomographic structure of the aberrations. The aberrations associated with the anterior surface of the cornea can be computed from its shape as measured with corneal topography instruments. The simplest approach to calculate the anterior corneal aberrations is to obtain a "remainder lens" by subtracting the best conic surface fit to the measured cornea, and then calculating the aberrations by multiplying the residual surface profile by the refractive index difference between air and the cornea. Another option is to trace rays through the corneal surface to compute the associated aberrations [17]. Figure 2.1 shows a schematic representation of the complete procedure. The corneal elevations, representing the distance (z_i) from each point of the corneal surface to a reference plane tangential to the vertex of the cornea, are represented with a Zernike polynomial expansion (see Thibos et al. for a description of these polynomials and standards for reporting [18]):

    z(r, θ) = Σ_{n=0}^{N} Σ_{m=−n}^{n} a_n^m Z_n^m(r, θ)    (2.1)
using a Gram–Schmidt orthogonalization method [19]. The wave aberration associated with the corneal surface (W) is obtained as the difference in optical path length between the principal ray that passes through the center of the pupil and a marginal ray:

    W = nz + n′l′ − n′l    (2.2)
where n and n′ are the refractive indices of air and the cornea, respectively, and z, l′, and l are the distances represented in Figure 2.1. By using the Zernike representation for the corneal surface [Eq. (2.1)], the corneal wave aberration is also obtained as another Zernike expansion:

    W(r, θ) = Σ_{n=0}^{N} Σ_{m=−n}^{n} c_n^m Z_n^m(r, θ)    (2.3)
where the coefficients c_n^m are linear combinations of the coefficients a_n^m [17]. On the other hand, the aberrations of the complete eye can be measured using a variety of different subjective and objective techniques. Although they are described more extensively in Chapter 3, some of them are the method
FIGURE 2.1 Schematic representation of the procedure to calculate the aberrations of the anterior surface of the cornea. Corneal elevations (z) provided by a videokeratoscope are fit to an expansion of Zernike polynomials. A ray tracing procedure is used to calculate the corneal wave aberration (W) as the differences in optical path between the marginal and principal rays, also expressed as a Zernike polynomial expansion (see the text for details).
of "vernier" alignment [20], the crossed-cylinder aberroscope [6], the Foucault knife-edge technique [21], calculations from double-pass retinal images [22, 23], the pyramid sensor [24], and, probably the most widely used method today, the Shack–Hartmann wavefront sensor [25–27]. Since we can now measure the wave aberrations of the complete eye and of the cornea, the relative contributions of the different ocular surfaces to retinal image quality can be evaluated. In particular, the wave aberration of the internal ocular optics, that is, the posterior surface of the cornea plus the crystalline lens, is estimated simply by directly subtracting the corneal from the ocular aberrations. Figure 2.2 shows a schematic representation of this procedure. In a simple model, the aberrations of the internal optics (c′_n^m) can be obtained by direct subtraction if the Zernike coefficients for both the cornea (c_n^m) and the eye (c″_n^m) are known. It is assumed that the changes in the wave aberration are small for different axial planes, that is, from the corneal vertex to the pupil plane.
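The subtraction c′_n^m = c″_n^m − c_n^m is straightforward to carry out numerically. A minimal sketch, assuming both measurements use the same pupil size and the same orthonormal Zernike ordering (for which the RMS wavefront error is the root sum of squares of the coefficients); the coefficient values are hypothetical, not data from the chapter:

```python
import numpy as np

def internal_aberrations(c_eye, c_cornea):
    """Estimate internal-optics Zernike coefficients by direct
    subtraction, c'_n^m = c''_n^m - c_n^m (coefficients in microns,
    same pupil size and ordering for both measurements)."""
    return np.asarray(c_eye) - np.asarray(c_cornea)

def rms_wavefront_error(coeffs, skip=3):
    """RMS wavefront error for an orthonormal Zernike expansion:
    root sum of squares of the coefficients, conventionally
    excluding piston and the two tilt terms."""
    c = np.asarray(coeffs)[skip:]
    return np.sqrt(np.sum(c ** 2))

# Hypothetical coefficients (microns) for one eye, single pupil size:
c_cornea = np.array([0.0, 0.0, 0.0, 0.10, -0.05, 0.20, 0.15])
c_eye    = np.array([0.0, 0.0, 0.0, 0.04, -0.02, 0.08, 0.05])

c_internal = internal_aberrations(c_eye, c_cornea)
# In this toy case the internal terms oppose the corneal ones,
# so the whole eye ends up with a smaller RMS than the cornea alone.
print(c_internal)
print(rms_wavefront_error(c_cornea), rms_wavefront_error(c_eye))
```

In this illustration every internal coefficient has the opposite sign of the corresponding corneal one, the compensation pattern discussed below for young eyes.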
FIGURE 2.2 Schematic representation of the combination of corneal and ocular wave aberrations to estimate the wave aberration of the internal optics.
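Throughout the chapter, the PSFs shown alongside the wave aberration maps are computed from the wavefronts themselves. A standard route is the squared modulus of the Fourier transform of the generalized pupil function; the sketch below assumes a wavefront sampled on a square grid, with illustrative grid size, wavelength, and defocus values (not taken from the chapter):

```python
import numpy as np

def psf_from_wavefront(W_um, pupil_mask, wavelength_um=0.55, pad=4):
    """PSF from a wave aberration map W_um (microns) on a square grid.
    pupil_mask is True inside the pupil.  The PSF is the squared
    modulus of the Fourier transform of the generalized pupil
    function P = A * exp(i * 2*pi/lambda * W)."""
    phase = 2.0 * np.pi * W_um / wavelength_um      # phase error (rad)
    pupil = pupil_mask * np.exp(1j * phase)         # generalized pupil
    n = pad * pupil.shape[0]                        # zero-pad: finer PSF sampling
    field = np.fft.fftshift(np.fft.fft2(pupil, s=(n, n)))
    psf = np.abs(field) ** 2
    return psf / psf.sum()                          # unit total energy

# Unit-radius circular pupil on a 128 x 128 grid:
N = 128
y, x = np.mgrid[-1:1:N * 1j, -1:1:N * 1j]
mask = (x ** 2 + y ** 2) <= 1.0

psf_perfect = psf_from_wavefront(np.zeros((N, N)), mask)     # Airy pattern
W_defocus = 0.2 * (2.0 * (x ** 2 + y ** 2) - 1.0) * mask     # 0.2 um of defocus
psf_aberrated = psf_from_wavefront(W_defocus, mask)

# Strehl ratio: aberrated peak relative to the diffraction-limited peak;
# it is 1 for an aberration-free pupil and below 1 otherwise.
strehl = psf_aberrated.max() / psf_perfect.max()
```

The same routine applied to corneal, internal, and whole-eye wavefronts produces PSF triplets of the kind shown in Figures 2.3 and 2.9.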
A few experiments have been performed to determine the precision of the data obtained from the combination of ocular and corneal aberrations. In addition to measuring ocular and corneal aberrations, Artal et al. also directly measured the wave aberration for the internal optics using a Shack–Hartmann wavefront sensor when the aberrations of the corneal surface were canceled by immersing the eye in saline water using swimming goggles [28]. This idea was similar to that used by Young in 1801 [29] and, more recently, by Millodot and Sivak [30], but here it was combined with current wavefront sensing technology. The comparison of the aberrations obtained from independent measurements is an indication of the validity of the combination approach. In particular, the aberrations of the cornea, measured both directly from its shape and by subtraction of the aberrations of the internal optics from those of the whole eye, were found to be similar within the experimental variability. This result provided strong proof of the consistency of these types of procedures for calculating corneal and internal aberrations despite the experimental and methodological difficulties involved. The relative contribution of the aberrations of the cornea and the internal optics in different eyes has been evaluated in several recent studies. Figure 2.3 shows an example of the wave aberrations and the associated point spread functions (PSFs) for the cornea, the internal optics, and the complete eye in a normal young eye. The magnitude of the aberrations is larger in both the cornea and the internal optics than in the complete eye. This indicates an active role of the lens in partially reducing the aberrations produced by the cornea. Figure 2.4 shows the Zernike terms for the aberrations of the cornea (solid symbols) and the internal optics (open symbols) for a number of young normal subjects [28]. It is remarkable that the magnitude of several aberration terms is similar for the two components, but they tend to have opposite signs.
This indicates that the internal optics may play a
FIGURE 2.3 Example of wave aberrations for the cornea, the internal optics, and the complete eye in one normal young subject. The associated point spread functions (PSFs) were calculated at the best image plane from the wave aberrations and subtend 20 min of arc of visual field. The aberrations of the internal optics partially compensate for the corneal aberrations.
FIGURE 2.4 Zernike terms for the cornea (solid symbols) and the internal optics (open symbols) for a number of normal young subjects.
significant role in compensating for the corneal aberrations in normal young eyes. This behavior may not be present in every young eye, depending on the amount of aberrations or the refractive error [31]. Determining the location of the aberrations in the eye has important implications for aberration correction in adaptive optics and also for current clinical procedures, such as wavefront-guided refractive surgery (see also Chapter 12). In normal young subjects, customized ablation should be performed based on the aberrations of the complete eye. If the ablation is based on only
FIGURE 2.5 Aberration maps for the eye before ideal surgery (middle), after an ideal surgery based on the aberration data of the complete eye (bottom), and after an ideal surgery based on corneal aberration data alone (top). The aberration maps are placed approximately at their corresponding value of total aberration, expressed using the RMS.
the corneal aberrations, the final aberrations of the eye could be larger than before the ablation. Figure 2.5 shows a schematic example: the aberration maps for the eye before an ideal surgery (middle map, "Before correction"), after an ideal surgery based on the aberration data of the complete eye (bottom map, "After correction, ocular based"), and after an ideal surgery based on corneal aberration data alone (top map, "After correction, cornea based"). The aberration maps are placed approximately at their corresponding value of total aberration (expressed using the RMS). If a perfect (ideal) ablation is performed, the eye becomes limited only by diffraction (i.e., without aberrations, represented by a flat aberration map). However, if the same perfect (ideal) ablation is performed using only corneal aberration data (i.e., correcting only the corneal aberrations), the remaining aberrations of the eye correspond to the internal surfaces and, in many cases, can be more severe than before the treatment. A similar example is cataract surgery with implantation of an intraocular lens (IOL). These lenses usually have good image quality when measured on an optical bench, but the final optical performance in the implanted eye is typically lower than expected [32]. The reason is that the ideal substitute for the natural lens is not a lens with the best optical performance in isolation, but one that is designed to compensate for the aberrations of the cornea [33]. This is shown schematically in Figure 2.6. Intraocular and contact lenses should ideally be
FIGURE 2.6 Schematic illustration of the coupling between aberrations of the cornea and the intraocular lens (see the text for details).
designed with an aberration profile matching that of the cornea or the lens to maximize the quality of the retinal image.
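The coupling illustrated in Figure 2.6 can be made concrete with a toy calculation. Under the simplifying assumption that the Zernike coefficients of the cornea and the implanted lens simply add, an IOL that is aberration free in isolation leaves the corneal aberrations untouched, whereas one whose coefficients mirror the cornea's with opposite sign minimizes the combined RMS. The coefficient values below are hypothetical:

```python
import numpy as np

# Hypothetical higher-order corneal Zernike coefficients (microns),
# e.g. coma, trefoil, and spherical aberration:
c_cornea = np.array([0.25, -0.10, 0.15])

def total_rms(c_cornea, c_iol):
    """RMS of the combined cornea + IOL system, assuming the
    Zernike coefficients of the two elements simply add."""
    return np.sqrt(np.sum((c_cornea + c_iol) ** 2))

iol_perfect_alone = np.zeros(3)   # best possible lens on the optical bench
iol_matched = -c_cornea           # lens designed to cancel this cornea

print(total_rms(c_cornea, iol_perfect_alone))  # corneal aberrations remain
print(total_rms(c_cornea, iol_matched))        # full compensation: 0
```

The bench-perfect lens leaves the eye with the cornea's full RMS, while the matched lens drives the combined aberration to zero, which is the point of the text above.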
2.3 TEMPORAL PROPERTIES OF ABERRATIONS: ACCOMMODATION AND AGING

Ocular aberrations change with time due to a variety of factors. Perhaps the most important ocular changes are those due to accommodation [14]. To focus on objects at different distances, the crystalline lens automatically changes both its shape and power and, consequently, the ocular aberrations. Eye movements and changes in the tear film and humors also produce rapid, although generally small, changes in the aberrations. Hofer et al. performed a very detailed study on the dynamics of the aberrations under normal conditions and their impact on adaptive optics correction [34]. In addition, on a different and much longer time scale, it has also been demonstrated that aberrations increase with normal aging. In this section, the effects of both accommodation and aging on aberrations are considered in some detail.

2.3.1 Effect of Accommodation on Aberrations and Their Correction
The dynamic changes of ocular aberrations during accommodation can be measured using a real-time wavefront sensor [3, 34]. Due to the continuous
changes of the aberrations over time, an ideal, perfectly static correction will not provide stable, aberration-free optics. For example, when an eye that is perfectly corrected for distance vision accommodates to near objects, the aberrations will change and this eye will no longer be aberration free. As an example, Figure 2.7 shows how selected aberration terms (spherical aberration and horizontal and vertical coma) change dynamically during accommodation in one subject for a pupil diameter of 5.5 mm. The actual wavefronts for 0 and 2 D are also included in the figure. The changes in spherical aberration are well correlated with accommodation in most subjects. Other aberration terms may remain relatively more stable, despite the overall changes in the wave aberration. For AO, one direct implication of the change of aberrations with accommodation is the need for dynamic corrections. Figure 2.8 shows an example: The wave aberrations are depicted for far (0 D), 1 D, and 2 D of accommodation for one normal subject (upper row), along with the residual aberrations after a perfect static correction for far vision at the same three vergences (bottom row). While the eye would become aberration free for far vision, as soon as the subject accommodates to near objects, the eye will become aberrated again. For a moderate vergence, as is the case for 2 D, the eye after correction has aberrations similar to, and for some subjects maybe even larger than, those present in the eye prior to any correction. In addition, it must be pointed out that in normal viewing, the precision of accommodation is not perfect, and there is a defocus term beyond the residual aberrations. This will also prevent the eye from producing diffraction-limited retinal images. These results clearly indicate that, due to the dynamic nature of the ocular optics, a static, perfect correction (as attempted in customized refractive surgery) would not remain perfect for every condition occurring during
FIGURE 2.7 Values of some aberration terms (spherical aberration and horizontal and vertical coma, in microns) as a function of accommodation (in diopters) for subject PA (5.5-mm pupil diameter).
FIGURE 2.8 Wave aberration maps in subject SM with (bottom) and without (top) a perfect correction for far vision, for different object vergences (far, 1 D, and 2 D).
normal accommodation. Of course, real-time adaptive optics systems may cope with this problem.

2.3.2 Aging and Aberrations
Normal aging affects different aspects of the ocular optics. Elderly eyes typically experience increased light absorption by the ocular media, smaller pupil diameters (senile miosis), and a nearly complete loss of accommodative capability. In addition, Artal et al. first showed that the mean ocular modulation transfer function (MTF) in a group of older subjects was lower than the average MTF for a group of younger subjects [9]. This result, although obtained in a rather small population, suggested that the ocular aberrations increase with age. More recent measurements in a larger population show a nearly linear decline of retinal image quality with age [10]. This result suggested a significant increase in the optical aberrations of the eye with age, in agreement with other studies in which aberrations were measured directly [35, 36]. In addition, intraocular scatter also increases noticeably in older eyes [37]. Different factors could contribute to the age-related increment in aberrations, such as changes in the aberrations of the cornea [38] and the lens, or changes in their relative contributions. The increment in the corneal aberrations is too small to account for the complete reduction of retinal image quality observed with age. This suggests that mechanisms other than changes in the cornea should be primarily responsible for the increase in the ocular aberrations with age. An obvious candidate could be an increase in the aberrations of the crystalline lens caused by the continuous changes in the lens with age. As the lens grows, its dimensions, surface curvatures, and refractive index change,
FIGURE 2.9 Example of wave aberrations (represented modulo-π) for the cornea, the internal optics, and the complete eye in one normal older subject. The associated PSFs were calculated at the best image plane from the wave aberrations and subtend 20 min of arc of visual field.
altering the lens aberrations. Glasser and Campbell found a large change in the spherical aberration of excised older lenses measured in vitro [39]. Another important factor to be considered is the nature of aberration coupling within the eye. The amount of aberrations for both the cornea and the internal optics was found to be larger than for the complete eye in young subjects, indicating a significant role of the internal ocular optics in compensating for the corneal aberrations to yield an improved retinal image [28, 40]. During normal aging, the relatively small corneal changes cannot account for the degradation in the retinal image quality. However, the lens dramatically changes both its shape and effective refractive index with age, leading to changes in its aberrations. In this context, it has been shown more recently that as the aberrations of the lens change with age, this compensation is partially or even completely lost [12]. This explains the overall increase in aberration and the reduction of retinal image quality throughout the life span. As an example, Figure 2.9 shows wave aberrations (and their associated PSFs) for the cornea, internal surfaces, and the complete eye for a typical older eye. This should be compared with the same type of results shown in Figure 2.3 for a young eye. In the young eye, the corneal and internal optics aberrations had similar magnitude and shape but were opposite in sign, producing an eye with lower overall aberrations. However, in the older eye, this finely tuned compensation is not present.
2.4 CHROMATIC ABERRATIONS

Beyond the monochromatic aberrations, chromatic aberrations in optical systems arise from chromatic dispersion, that is, the dependence of refractive index on wavelength. Chromatic aberrations are also present in the eye. Since the eye is composed mostly of water, its chromatic behavior is frequently modeled by
considering its dispersion curve, although other more elaborate models include dispersion curves based on actual measurements in real subjects. Chromatic aberrations are traditionally divided into longitudinal chromatic aberration (LCA) and transverse chromatic aberration (TCA). The former is the variation of axial power with wavelength, while the latter is the shift of the image across the image plane with wavelength. Both LCA and TCA have been widely studied in the eye. These two types of chromatic aberration can be understood as the wavelength dependence of the lower order terms of the wave aberration: LCA is the change in focus and TCA is the change in tip/tilt, or prism. Only recently have the fluctuations of higher order aberrations with wavelength been studied [41]. Chromatic aberrations limit the actual retinal image quality of the eye since the real world is usually polychromatic and, therefore, its image on the retina becomes distorted in a color-dependent fashion. Furthermore, since AO systems generally lack the capability for chromatic compensation, chromatic aberrations can reduce the expected benefit of this technology for improved vision. This problem should not be present in AO systems for eye examination because the light used for imaging is usually monochromatic. Another potential impact of the chromatic aberrations on AO systems comes from the current tendency to shift the sensing light toward the infrared (IR) range. If the sensing and imaging beam wavelengths are not the same, the chromatic differences between the respective wavefronts can reduce the correction efficiency. These differences, however, are typically small except in defocus, which is very predictable and easy to calibrate.

2.4.1 Longitudinal Chromatic Aberration
Since the first study published in 1947 by Wald and Griffin [42], LCA has been extensively measured and modeled. A review of measurements prior to 1992 can be found in Thibos et al. [43]. Initially, the studies consisted of measuring the subjective best focus [42, 44]. Charman and Jennings measured LCA objectively with retinoscopic methods [45]. More recently, Marcos et al. used a subjective wavefront sensor [41]. Consistently across studies and across subjects, LCA has been found to be around 2 D across the visual spectrum (see also Fig. 10.2). Furthermore, this value seems to be stable with age [46]. It is feasible to construct a lens to compensate for the eye's LCA, and several designs have been proposed [44]. However, the achromatization process is extremely sensitive to positioning errors [47]. Small displacements of the lens or the eye produce artificial TCA that degrades image quality. According to these authors, a lateral misalignment of just 0.4 mm is enough to eliminate all of the potential benefit of the achromatization. For larger displacements, the achromatizing lens actually worsens image quality. Also, since correction is on-axis, the field of view is very limited, even with perfect alignment. Although centering can be very precise in clinical instruments and achromatization could be feasible for eye examinations, this is not the case
for vision-correcting elements, such as spectacles or contact lenses. Therefore, achromatization for improved vision remains a difficult task.

2.4.2 Transverse Chromatic Aberration
The TCA measurement techniques typically involve vernier alignment tasks for two colors at the extremes of the visible spectrum (blue and red). Two types of TCA are usually defined for the eye [41]. The optical TCA is measured in Maxwellian view [48] or with a pinhole pupil [49] and is therefore related to the wavelength-dependent prism differences for the center of the pupil. The perceived TCA is measured in normal (Newtonian) view covering the whole pupil and represents the mean apparent prism difference across the pupil [50]. Differences are expected, and have been found experimentally, between these two types of TCA measurement, not only because they are defined for different-sized portions of the pupil but also because the perceived TCA is affected by the Stiles–Crawford effect. Unlike LCA, TCA varies widely in both amount and direction among studies and subjects. Since a centered system should not present TCA on-axis, it is usually assumed that ocular TCA arises from the off-axis position of the fovea and from natural pupil misalignments [51]. However, although these two factors probably produce part of the eye's TCA, a recent study has demonstrated that they cannot explain the TCA variability alone [51]. In terms of a Zernike expansion, TCA and LCA can be understood as the wavelength dependence of the tip/tilt and defocus terms, respectively. Marcos et al. used a psychophysical method to measure the wave aberration up to seventh order for a series of wavelengths across the visible spectrum [41]. They found a very small increase in astigmatism, coma, spherical aberration, and higher order aberrations (expressed in microns) with wavelength.

2.4.3 Interaction Between Monochromatic and Chromatic Aberrations
In most applications of retinal imaging, monochromatic or narrow-bandwidth light sources are used or can be used. In these cases, wave aberration correction through AO or any other means should not be negatively affected by chromatic aberrations. On the contrary, when the aim is improved vision, chromatic aberrations would limit the actual improvement in retinal image quality for a polychromatic scene [41]. In fact, it has been recently argued that higher order aberrations limit the impact of chromatic aberrations in white light [11]. Therefore, perfectly correcting the monochromatic wave aberration for a single wavelength would enhance the disparities between different color components of the scene, potentially allowing the effects of chromatic aberration to be observed by the subject. As a consequence, the potential impact of chromatic aberration should be considered when AO, or any other kind of wave aberration correction, is intended to improve vision.
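The magnitude of the chromatic defocus discussed above can be sketched with a simple reduced-eye fit, the "chromatic eye" of Thibos et al., which reproduces the roughly 2 D of LCA across the visible spectrum. The constants below are the values commonly quoted for that fit and should be verified against the original reference:

```python
def chromatic_defocus(wavelength_nm, p=1.68524, q=0.63346, c=0.21410):
    """Ocular defocus (diopters) versus wavelength for the reduced-eye
    'chromatic eye' model, D(lambda) = p - q/(lambda - c), with lambda
    in micrometers.  Constants as commonly quoted for the Thibos et al.
    fit (referenced near 589 nm); treat them as assumptions to verify."""
    lam = wavelength_nm / 1000.0   # nm -> um
    return p - q / (lam - c)

# Chromatic difference of focus across the visible spectrum:
lca = chromatic_defocus(700) - chromatic_defocus(400)
print(f"Predicted LCA, 400-700 nm: {lca:.2f} D")  # close to the ~2 D cited
```

A wavefront corrector driven at a single wavelength would leave this wavelength-dependent defocus untouched, which is the limitation described in the paragraph above.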
2.5 OFF-AXIS ABERRATIONS
The previous sections are related to on-axis aberrations for foveal vision. However, when a point object is located more than 5° off-axis (the outer extent of the foveal angular size), its image is formed on the peripheral retina. The plane containing the visual axis, the point object, and its peripheral image is called the tangential plane. The sagittal plane is normal to the tangential plane and contains the principal ray. From the point of view of the light coming from the off-axis point, the incident wavefront reaches a tilted optical system, and rays in the sagittal and tangential planes refract dissimilarly. Thus, the oblique incidence of light on the eye produces off-axis (oblique or peripheral) aberrations. These aberrations increase with the angle of eccentricity, so the off-axis optical performance of the eye deteriorates as one moves away from the foveal center. A reduced eye model may help to give a preliminary view of the eye's off-axis aberrations (although more sophisticated schematic models are necessary to describe the optics of real eyes). For a spherical refracting surface, the Seidel (or fourth-order) off-axis aberrations are distortion (tilt), field curvature, astigmatism, and coma. Tilt is a prismatic effect that produces either an image shift for a point object or a shape distortion in the case of the image of an extended object. Astigmatism appears because the emerging refracted wavefront has two principal curvatures that determine two focal image points: the sagittal focus, where the sagittal (or horizontal) ray fan converges, and the tangential focus, where the tangential (or vertical) ray fan converges. As a point object moves away from the optical axis, these foci map out surfaces known as the sagittal and tangential surfaces, which are parabolic in the Seidel approximation. The sagittal surface is behind (more hyperopic than) the tangential surface.
The Sturm interval is defined as the difference between the sagittal and the tangential focal lengths and defines the amount of oblique astigmatism, which is the difference between the sagittal and tangential dioptric powers. For a spherical refracting surface with radius of curvature R, separating media of indices n and n′, the sagittal and tangential focal lengths are, respectively,

    F_S = n′R / (n′ cos β_r − n cos β_i)   and   F_T = F_S cos² β_r    (2.4)

where β_i and β_r are the off-axis incident and refracted angles, respectively. Thus, the Sturm interval for the reduced eye is

    (n/n′)² F_S sin² β_r    (2.5)
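These expressions can be evaluated directly for a reduced eye. The sketch below uses the classic reduced-eye values (n′ = 4/3 and a surface radius of 5.55 mm) as illustrative parameters, obtaining the refracted angle from Snell's law:

```python
import numpy as np

def sagittal_focal_length(beta_i, n=1.0, n_prime=4.0 / 3.0, R=5.55e-3):
    """Sagittal focal length F_S of a single spherical refracting
    surface [Eq. (2.4)].  beta_i is the incidence angle in radians;
    n' = 4/3 and R = 5.55 mm are classic reduced-eye values, used
    here only as illustrative parameters."""
    beta_r = np.arcsin(n * np.sin(beta_i) / n_prime)   # Snell's law
    F_S = n_prime * R / (n_prime * np.cos(beta_r) - n * np.cos(beta_i))
    return F_S, beta_r

def tangential_focal_length(F_S, beta_r):
    """Tangential focal length, F_T = F_S cos^2(beta_r) [Eq. (2.4)]."""
    return F_S * np.cos(beta_r) ** 2

def sturm_interval(beta_i, n=1.0, n_prime=4.0 / 3.0, R=5.55e-3):
    """Sturm interval of the reduced eye, (n/n')^2 F_S sin^2(beta_r),
    as written in Eq. (2.5)."""
    F_S, beta_r = sagittal_focal_length(beta_i, n, n_prime, R)
    return (n / n_prime) ** 2 * F_S * np.sin(beta_r) ** 2

# On-axis the interval vanishes; it grows with eccentricity:
print(sturm_interval(0.0))               # 0
print(sturm_interval(np.radians(45.0)))  # positive, a few millimeters
```

As a sanity check, at normal incidence the sagittal focal length reduces to n′R/(n′ − n) = 22.2 mm, the familiar reduced-eye focal length, and the Sturm interval is zero.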
Field curvature is a defocus for off-axis objects and implies that the best image is not formed on the paraxial image plane but on a parabolic surface called the Petzval image surface. In real eyes, the retina (which may be approximated as a sphere with a radius between 11 and 13 mm) constitutes a curved image plane that, in most individuals, compensates for field curvature. Oblique coma in the reduced eye depends on sin β_i and increases linearly with off-axis position. With the exception of the attention paid to peripheral refractive errors, the off-axis optical performance of the eye has not been as well studied and characterized as the optical quality for foveal vision. Astigmatism and defocus have traditionally been assumed to be the main off-axis aberrations in the eye. The role of oblique astigmatism was recognized by Young back in 1801 [29] and has been studied since the last quarter of the nineteenth century.

2.5.1 Peripheral Refraction
A wide literature exists on peripheral refraction, including experimental results on oblique astigmatism and field curvature [16, 52–58], accommodation [59, 60], and the development of various eye models [61–66]. Measurements have revealed a systematic (between linear and parabolic) increase in astigmatism with the field angle. Figure 2.10(a) shows an example of the Sturm interval in a subject for off-axis vision at an eccentricity of 45°. The three images show the sagittal and tangential foci and the circle of least confusion. The dioptric value in each figure refers to the refractive position relative to the foveal refraction. The sagittal focus lies behind the retina, the tangential focus lies in front of the retina, and the circle of least confusion is close to the retina, which, in this subject, compensates quite well for field curvature. Figure 2.10(b) shows the sagittal focus in a subject for 15°, 30°, and 45° eccentricities from the fovea. The dioptric value indicates the oblique astigmatism at each angle. Figure 2.11(a) shows the oblique astigmatism of the eye across the visual field. Dashed lines indicate the range of experimental results obtained from a survey of the literature. The solid line represents the function

    P_A = 0.01 × β_v^1.5    (2.6)

where P_A is the value of oblique astigmatism (in diopters) and β_v is the off-axis angle (in degrees), according to a fit proposed by Lotmar and Lotmar [55]. Oblique astigmatism increases systematically with eccentricity in all subjects in the human population and may reach values of up to approximately 2 D at 20° and more than 6 D at 60°. However, the amount and type of astigmatism vary considerably between individuals. In typical schematic eyes, the sagittal surface lies behind the retina (equivalent to a hyperopic refractive error) and the tangential surface lies in front of the retina (equivalent to a myopic error). In real eyes, the relationship between the curvature
ABERRATION STRUCTURE OF THE HUMAN EYE

[Figure 2.10 panels: (a) sagittal focus (+2 D), circle of least confusion (−1 D), tangential focus (−4 D); (b) 15° (2 D), 30° (3.5 D), 45° (6 D).]
FIGURE 2.10 (a) Double-pass images in a −3-D myopic subject for a point object at a retinal eccentricity of 45°. Images were recorded for equal entrance and exit pupils of 1.5 mm and for three focus positions corresponding to the sagittal focus, the circle of least confusion, and the tangential focus. For a 1.5-mm pupil, higher order aberrations are small, so images show mainly the effect of oblique astigmatism. Each image subtends 80 arcmin. (b) Sagittal focus in a subject for 15°, 30°, and 45° retinal eccentricities, from double-pass images. Within parentheses is the amount of oblique astigmatism, in diopters, measured as the difference between the sagittal and the tangential powers.
of the retina, the axial length, and the refractive power of the ocular components produces large individual variations in the degree of peripheral defocus and astigmatism. The type of astigmatism may differ depending on the foveal refraction of the subject. The plots in Figure 2.11(b), called skiagrams, represent the variation of the sagittal and tangential refractions (relative to the foveal refraction) along the horizontal field meridian for subjects classified according to their foveal refraction [58]. On average, there is a trend for the sagittal focus to move toward the side opposite the foveal refraction, indicating a peripheral emmetropization of the least myopic meridian. This emmetropization does not occur, however, for the circle of least confusion, which tends to be mainly myopic in the periphery.

2.5.2 Monochromatic and Chromatic Off-Axis Aberrations
Some investigators have studied retinal image quality as a function of eccentricity by measuring line spread functions [67] or double-pass retinal images [15, 16, 68]. Off-axis optical quality has been characterized by means of the modulation transfer function [69].
OFF-AXIS ABERRATIONS
[Figure 2.11: (a) astigmatism (diopters) versus off-axis angle (deg); (b) sagittal and tangential refractive error (diopters) versus off-axis angle (deg) for myopic, emmetropic, and hyperopic groups.]
FIGURE 2.11 (a) Interval of distribution of oblique astigmatism across the visual field in the human population (dashed lines). The solid line represents the value of astigmatism obtained using the function 0.01 × βv^1.5, where βv is the off-axis angle (in degrees). (b) Refractive position of the sagittal and the tangential surfaces across the horizontal nasal meridian for three groups of myopic, emmetropic, and hyperopic subjects.
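The population-average trend plotted as the solid line in Figure 2.11(a), i.e., Eq. (2.6), is straightforward to evaluate numerically. A minimal sketch in Python (the sample angles are illustrative):

```python
def oblique_astigmatism(beta_v_deg: float) -> float:
    """Population-average oblique astigmatism (diopters) at field angle
    beta_v (degrees), after the Lotmar and Lotmar fit of Eq. (2.6)."""
    return 0.01 * beta_v_deg ** 1.5

# Evaluate the fit at a few illustrative eccentricities.
for angle in (20, 40, 60):
    print(f"{angle:2d} deg -> {oblique_astigmatism(angle):.2f} D")
```

Individual eyes can deviate substantially from this mean curve (the dashed lines in the figure), so the function describes the population trend rather than any single subject.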
From the first results, obtained with ophthalmoscopic methods for estimating optical performance in the periphery, it was initially impossible to extract the magnitude of the monochromatic higher order aberrations, other than astigmatism and defocus, and to identify their role in the decay of optical quality. To estimate third-order aberrations, Guirao and Artal [16] developed a procedure based on a configuration of unequal entrance and exit pupils in a double-pass apparatus [70] and a geometrical method of coma image analysis. After determining and correcting the amount of peripheral astigmatism and defocus for each field angle, double-pass images with a tiny entrance pupil and a large exit pupil (or vice versa) showed the effect of higher order
aberrations, revealing a significant amount of coma. Coma estimated in four subjects showed a nearly linear increase across the visual field. Figure 2.12(a) shows double-pass images in a subject for three eccentricities with the best correction of defocus and astigmatism, indicating how the effect of coma increases across the visual field. Navarro et al., using a laser ray-tracing technique, measured aberrations up to fifth order in four subjects across the horizontal meridian [71]. Individual variations were found to be due to a different coupling between the foveal and the peripheral aberrations in each subject. However, for large eccentricities, there was a linear increase in the higher order aberrations (mainly coma) that dominated the intersubject variability found in the aberration patterns. Atchison and Scott, using a Shack–Hartmann sensor to measure up to sixth-order aberrations in five subjects, found a slight increase in spherical aberration with angle and a large increase in coma, reaching values between two and three wavelengths at 40° from the fovea [72].

[Figure 2.12: (a) double-pass images at 15°, 30°, and 45°; (b) MTF versus spatial frequency (c/deg) for a 4-mm pupil (all aberrations, fovea and several eccentricities) and for a 3-mm pupil (fovea and 20°, with only defocus corrected or with defocus and astigmatism corrected).]
FIGURE 2.12 (a) Double-pass images recorded with astigmatism and defocus corrected for a 1.5-mm entrance and 4-mm exit pupil in a subject for different retinal eccentricities. (b) MTF for different eccentricities calculated analytically by using exponential functions after a least-squares fit of experimental data collected in typical young eyes. The left panel shows MTFs for a 4-mm pupil at the fovea and at 20°, 30°, and 40° eccentricities, for foveally fixed accommodation, when all aberrations are included. The right panel shows MTFs for a 3-mm pupil at the fovea (best-corrected) and at 20° for two conditions: when peripheral defocus is corrected, and when both defocus and astigmatism are corrected.
In polychromatic natural viewing conditions, chromatic aberrations may also have a significant detrimental effect on peripheral image quality. Because of the poor peripheral visual acuity of the human eye, the subjective determination of longitudinal chromatic aberration is only possible within a few degrees of the fovea; for large field angles, an objective approach is necessary. Rynders et al., by focusing a monochromatic point source at four different wavelengths and measuring the objective refraction, obtained a small gradual increase in longitudinal chromatic aberration with eccentricity, from near 1.0 D (between 632.8 and 458 nm) at the fovea to approximately 1.6 D at 40° [73]. Transverse chromatic aberration also increases with eccentricity [49]. This increase is approximately linear and occurs at a rate of about 0.25 min of arc per degree of field angle.

2.5.3 Monochromatic Image Quality and Correction of Off-Axis Aberrations

The pattern of decay of monochromatic retinal image quality with eccentricity is summarized in Figure 2.12(b) using the modulation transfer function. These curves were calculated with the equation

MTF(f) = (1 − C) exp(−Af) + C exp(−Bf)    (2.7)
where f is the spatial frequency and A, B, and C are a set of parameters fitted for each angle. These parameters were obtained by fitting experimental data from typical young eyes for natural viewing conditions while subjects kept a fixed foveal viewing distance (4-mm diameter pupil), and by applying a refractive correction for a 3-mm pupil. In the left panel of Figure 2.12(b), when computing the modulation transfer functions, all off-axis aberrations (including defocus and astigmatism) were left uncorrected. The right panel of Figure 2.12(b) compares the modulation at 20° when the oblique astigmatism is corrected and when it is not. A final question, relevant to vision science, is how much visual improvement can be achieved by correcting off-axis aberrations. There have been a few attempts to correct the off-axis aberrations in normal eyes, but, for the most part, little or no visual improvement was found. Peripheral grating resolution thresholds were not markedly improved after improving the optics [65]. An exception is the improvement in motion detection and orientation discrimination, which seems to be limited by the refractive errors in the peripheral optics of the eye [74]. Thus, the general consensus is that the optics of the eye play a relatively small role in peripheral vision, with the limits set mainly by neural factors. Moreover, it has been shown that optical blur in the periphery can reduce aliasing due to receptoral and post-receptoral sampling, so an improvement of the off-axis optical quality of the eye could potentially produce peripheral aliasing and be detrimental to visual quality [15].
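The double-exponential model of Eq. (2.7) can be coded directly. The sketch below uses illustrative values of A, B, and C (not the published fits for any particular eccentricity) and checks the two basic properties of the model: unit modulation at zero frequency and a monotonic falloff.

```python
import math

def mtf(f: float, A: float, B: float, C: float) -> float:
    """Double-exponential MTF model of Eq. (2.7); f in cycles/deg.
    A, B, and C are fitted per field angle and pupil size."""
    return (1.0 - C) * math.exp(-A * f) + C * math.exp(-B * f)

# Illustrative parameter values (NOT the published fits).
A, B, C = 0.4, 0.05, 0.6
freqs = [0, 5, 10, 20, 40]
print([round(mtf(f, A, B, C), 3) for f in freqs])
```

By construction MTF(0) = (1 − C) + C = 1, and for A, B > 0 and 0 ≤ C ≤ 1 both exponential terms decay, so the curve falls monotonically with spatial frequency, as in Figure 2.12(b).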
2.6 STATISTICS OF ABERRATIONS IN NORMAL POPULATIONS
Knowledge of the statistical properties of the eye's aberrations is a useful tool in the design of AO systems, or of any other wavefront corrector for visual optics applications. Toward this goal, several laboratories have in recent years embarked on the systematic measurement of ocular aberrations in relatively large healthy populations [75–77]. There is reasonable agreement between the results of these studies. All three showed good mirror symmetry between the aberrations of the left and right eyes of the same individual. The average magnitude (i.e., absolute value) of the aberrations was also found to decrease with increasing Zernike order, in accordance with earlier or smaller scale studies [6, 26]. When the Zernike coefficients were averaged (preserving sign) across the population, most of the mean values were approximately zero. Defocus and astigmatism do not follow this rule: as is well known from refraction studies, these aberrations are typically nonzero in the population. Among the higher order terms, the one exception to this general behavior was spherical aberration, which usually had a significantly positive mean value. Both Porter et al. [75] and Thibos et al. [77] performed a principal component analysis to investigate how efficiently Zernike polynomials represent the eye's aberrations. Principal components are statistically uncorrelated functions; as a basis, they are the most efficient way of representing a wavefront from a statistical point of view, so this kind of analysis is useful for judging the appropriateness of the Zernike polynomials for representing ocular aberrations. The former study showed that the Zernike basis is close to optimal in this sense, since little improvement in mode economy is found when principal components are compared with Zernikes (see Figs. 6 and 8 in Porter et al. [75]).
This result is corroborated by the sparseness of the correlation matrix found in the latter work (see Fig. 11 in Thibos et al. [77]). A different approach was taken by Cagigal et al. [78] using the aberration data from the Murcia optics lab study [76]. Following a course of action typical in astronomy, they calculated the phase variance, the power spectrum, and the structure function of the eye treated as a phase screen and compared them with the classical Kolmogorov model. The Kolmogorov model is widely used in astronomy to describe the wavefront disturbances produced by the atmosphere; it is based on the assumption that the wavefront perturbations are produced by local changes in the refractive index of the media traversed by the light. Although the phase variance was found not to be radially constant across the pupil, as it should be under Kolmogorov statistics, this behavior was attributed to the lack of knowledge of the piston and tip/tilt terms, which are not assessable in the eye with current wavefront sensing technology. In contrast, the behavior of the variance of the higher order terms, the power spectrum, and the structure function suggested that ocular aberrations follow a Kolmogorov distribution. According to this theory, the eye behaves as a statistically homogeneous medium with finite inner and outer
scales for the inhomogeneities of the refractive index. Knowing the transverse size of these two scales would allow a correct simulation of a population of phase screens (i.e., eyes) with the correct statistical behavior. Using the available experimental data, the outer scale can be estimated to be around 5 mm. The spatial resolution of the measured wavefronts is too low to estimate the inner scale, either because of an insufficient number of Zernike modes or a limited number of sampling points.
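As a point of reference for the comparison described above, the Kolmogorov model predicts a phase structure function D_φ(r) = 6.88 (r/r0)^(5/3), where r0 is Fried's parameter. A minimal sketch (the r0 value below is illustrative, not a measured ocular figure):

```python
def kolmogorov_structure_function(r: float, r0: float) -> float:
    """Kolmogorov phase structure function (rad^2) for a separation r
    across the pupil, given a Fried parameter r0 (same units as r)."""
    return 6.88 * (r / r0) ** (5.0 / 3.0)

# Illustrative Fried parameter; a real ocular value must be fitted to data.
r0 = 1.0  # mm
for r in (0.5, 1.0, 2.0):
    print(f"D_phi({r} mm) = {kolmogorov_structure_function(r, r0):.2f} rad^2")
```

At r = r0 the structure function equals 6.88 rad², which is the defining property of Fried's parameter; departures of measured data from the 5/3 power law at small and large separations are what reveal the finite inner and outer scales discussed above.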
2.7 EFFECTS OF POLARIZATION AND SCATTER

2.7.1 Impact of Polarization on the Ocular Aberrations
Ocular aberrations and retinal image quality may also be affected by an intrinsic property of light: its polarization. This may be especially important considering the birefringent nature of the ocular optics and the polarization-sensitive components that may be present in AO systems. In particular, the eye changes the polarization state of the light that passes through the ocular media and is reflected by the retina in a complex way [79, 80]. The most important ocular polarization properties are located in the cornea and the retina, the effect of the lens being smaller [81, 82]. The cornea is strongly birefringent, and the retardation it introduces increases away from the corneal center [83]. In addition, retinal dichroism could be responsible for differences in the intensity of the light reaching the retina as a function of the incident polarization state [84, 85]. Any technique that involves light passing through the ocular media (in single- or double-pass configurations) may be affected by these changes, and ocular imaging could also depend on the spatially resolved polarization properties of the eye. Although the polarization properties of the eye were recognized early, their interaction with the apparatus used to estimate retinal image quality has usually been assumed to be negligible: most techniques used to measure ocular aberrations and retinal image quality simply include a linear polarizer in the illumination pathway, without considering the effect of polarization on the measurements. Moreover, since polarized light is required when spatial light modulators are used as wavefront correctors in AO systems [2], the impact of polarization on the estimation of ocular aberrations is important to establish. Figure 2.13(a) shows an example of two double-pass (DP) retinal images registered with parallel and crossed linear polarizers in a young eye [86].
Bueno and Artal [87] used a DP imaging polarimeter to study the effect of polarization on estimates of retinal image quality [88]. They reported that the DP image was strongly affected by the combination of polarization states in the incoming and outgoing pathways. On the other hand, the ocular MTF was nearly independent of the polarization state of the incident light itself. Prieto et al. used a spatially resolved refractometer to measure the effect of
[Figure 2.13: (a) parallel and crossed polarizers; (b) wave aberration maps for polarization angles −π/4, 0, π/6, and π/3 in the registration pathway.]
FIGURE 2.13 (a) Examples of double-pass retinal images registered with parallel (left) and crossed (right) linear polarizers in the incoming and outgoing pathways. Each image subtends 59 min of arc of visual field and was taken in a young healthy eye. (b) Wave aberration maps (calculated for a 5-mm pupil in a normal young eye) for a fixed incoming polarization state and four independent polarization states in the registration pathway. RMS values ranged from 0.26 to 0.30 μm.
changes in the polarization state of the incoming light on the eye's wave aberration estimates obtained in a single pass [89]. Measurements carried out on four subjects revealed that the polarization state of the incident light had little influence on the measured wave aberration. Marcos et al. used two different wavefront sensors to study the effect of different polarization configurations on the aberration measurements of 71 eyes [90]. The distribution of light in the retinal spots depended on the incoming and outgoing polarization configurations, but the measured aberrations did not. This was also confirmed by Bueno et al. using an aberro-polariscope, which combines polarimetry and wavefront sensing [91]. As an example, Figure 2.13(b) shows the similarity in wave aberrations for a 5-mm pupil in a young normal eye for four independent polarization states placed in the registration pathway. In summary, different experiments have shown that polarization has little impact on ocular aberration measurements.

Concerning the depolarization of light in the ocular media, there is a large variety of results in the literature. Whereas early studies found a complete depolarization of the light reflected at the retina [92], other authors showed that polarization was substantially preserved [79]. Analysis of the degree of polarization (DOP) in DP images has shown that this parameter decreases toward the skirts of the image, following the averaged radial intensity profile [93]. This confirms that the skirts of the DP images contain depolarized light that has been scattered, and that depolarization increases with age. Moreover, the fact that the DOP of the light located at the center of the DP image is lower in older eyes indicates that guided light also undergoes scattering (retinal or ocular). At the pupil plane, the DOP also decreases toward the margins, but the maximum is not always located at the center of the pupil. The location of the maximum light intensity (the Stiles–Crawford peak [94]) and the area with higher DOP are close to each other, confirming that the DOP of the directional component returning from the photoreceptors is higher than the DOP of the diffuse component (i.e., the surrounding area that fills the entire pupil).

2.7.2 Intraocular Scatter

Beyond the effect of aberrations, scattered light reduces the imaging performance of any optical system, including the eye. This is an important issue, as wavefront correctors will not remove the effects of scattered light in the eye, and wavefront sensors will not capture the fine spatial details associated with it. In particular, intraocular scatter degrades retinal image quality and diminishes both visual acuity (VA) and the contrast sensitivity function (CSF). It is also generally related to glare, which forms a veil of luminance in the eye. Although, under normal conditions, the amount of intraocular scatter in a young healthy eye is low, it may become significant with aging [37] or after refractive surgery (corneal haze). The loss of transparency of the lens (ultimately resulting in cataract formation) is the main source of scatter in the older eye. Numerous techniques have been developed to measure ocular aberrations, but most methods used to measure light scatter in the human eye are subjective and require the collaboration of the subject. However, double-pass retinal images contain information on both aberrations and intraocular scatter.
Whereas the central part of the DP image is associated with aberrations, scattered light mainly contributes to the tails of the image; however, it is quite difficult to actually separate out the scattered contribution. A new objective method to measure changes in intraocular scattering is based on measuring the degree of polarization of the light emerging from the eye using a DP imaging polarimeter [95]. The ocular media and the retina are the two sources of scatter in the eye, but a technique to separate the two contributions has not yet been reported. Scatter in the ocular media is mainly due to diffusion and the loss of transparency in the cornea, the lens, and the humors. Apart from the wavelength used, scatter within the retina mainly depends on the fraction of light guided along the photoreceptors and on the retinal layer at which the light is reflected. Light passing through the ocular media and reflected at the retina has classically been described as having two components: a directional (or guided) component, which maintains polarization, and a diffuse (or scattered) component, which is depolarized [96].
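The degree of polarization used in these analyses is computed from the Stokes parameters of the light, as typically measured by an imaging polarimeter: DOP = sqrt(Q² + U² + V²)/I. A minimal per-pixel sketch (the sample Stokes vectors are made up for illustration):

```python
import math

def degree_of_polarization(I: float, Q: float, U: float, V: float) -> float:
    """DOP from the Stokes parameters of the light at one pixel:
    1 for fully polarized light, 0 for fully depolarized light."""
    if I <= 0:
        raise ValueError("total intensity must be positive")
    return math.sqrt(Q * Q + U * U + V * V) / I

# Hypothetical Stokes vectors: guided (directional) light keeps a high
# DOP, while diffuse, scattered light is largely depolarized.
guided = (1.0, 0.6, 0.5, 0.2)
diffuse = (1.0, 0.1, 0.05, 0.02)
print(f"guided DOP  = {degree_of_polarization(*guided):.2f}")
print(f"diffuse DOP = {degree_of_polarization(*diffuse):.2f}")
```

Mapping this quantity pixel by pixel over a DP image is what reveals the radial DOP decrease toward the skirts of the image described above.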
Acknowledgments

Parts of the research described in this chapter were supported by the Ministerio de Ciencia y Tecnología (MCyT), Spain, and by Pharmacia Groningen (The Netherlands) through grants to PA at the University of Murcia.

REFERENCES

1. Liang J, Williams DR, Miller DT. Supernormal Vision and High-Resolution Retinal Imaging through Adaptive Optics. J. Opt. Soc. Am. A. 1997; 14: 2884–2892.
2. Vargas-Martin F, Prieto P, Artal P. Correction of the Aberrations in the Human Eye with Liquid Crystal Spatial Light Modulators: Limits to the Performance. J. Opt. Soc. Am. A. 1998; 15: 2552–2562.
3. Fernández EJ, Iglesias I, Artal P. Closed-Loop Adaptive Optics in the Human Eye. Opt. Lett. 2001; 26: 746–749.
4. Hofer H, Chen L, Yoon GY, et al. Improvement in Retinal Image Quality with Dynamic Correction of the Eye's Aberrations. Opt. Express. 2001; 8: 631–643.
5. Fernández EJ, Artal P. Membrane Deformable Mirror for Adaptive Optics: Performance Limits in Visual Optics. Opt. Express. 2003; 11: 1056–1069.
6. Howland HC, Howland B. A Subjective Method for the Measurement of Monochromatic Aberrations of the Eye. J. Opt. Soc. Am. 1977; 67: 1508–1518.
7. Campbell FW, Gubisch RW. Optical Quality of the Human Eye. J. Physiol. 1966; 186: 558–578.
8. Artal P, Navarro R. Monochromatic Modulation Transfer Function of the Human Eye for Different Pupil Diameters: An Analytical Expression. J. Opt. Soc. Am. A. 1994; 11: 246–249.
9. Artal P, Ferro M, Miranda I, Navarro R. Effects of Aging in Retinal Image Quality. J. Opt. Soc. Am. A. 1993; 10: 1656–1662.
10. Guirao A, Gonzalez C, Redondo M, et al. Average Optical Performance of the Human Eye as a Function of Age in a Normal Population. Invest. Ophthalmol. Vis. Sci. 1999; 40: 203–213.
11. McLellan JS, Marcos S, Prieto PM, Burns SA. Imperfect Optics May Be the Eye's Defense Against Chromatic Blur. Nature. 2002; 417: 174–176.
12. Artal P, Berrio E, Guirao A, Piers P. Contribution of the Cornea and Internal Surfaces to the Change of Ocular Aberrations with Age. J. Opt. Soc. Am. A. 2002; 19: 137–143.
13. He JC, Burns SA, Marcos S. Monochromatic Aberrations in the Accommodated Human Eye. Vision Res. 2000; 40: 41–48.
14. Artal P, Fernández EJ, Manzanera S. Are Optical Aberrations during Accommodation a Significant Problem for Refractive Surgery? J. Refract. Surg. 2002; 18: S563–S566.
15. Williams DR, Artal P, Navarro R, et al. Off-Axis Optical Quality and Retinal Sampling in the Human Eye. Vision Res. 1996; 36: 1103–1114.
16. Guirao A, Artal P. Off-Axis Monochromatic Aberrations Estimated from Double-Pass Measurements in the Human Eye. Vision Res. 1999; 39: 207–217.
17. Guirao A, Artal P. Corneal Wave Aberration from Videokeratography: Accuracy and Limitations of the Procedure. J. Opt. Soc. Am. A. 2000; 17: 955–965.
18. Thibos LN, Applegate RA, Schwiegerling JT, et al. Standards for Reporting the Optical Aberrations of Eyes. In: Lakshminarayanan V, ed. OSA Trends in Optics and Photonics, Vision Science and Its Applications, Vol. 35. Washington, D.C.: Optical Society of America, 2000, pp. 232–244.
19. Schwiegerling J, Greivenkamp JE, Miller JM. Representation of Videokeratoscopic Height Data with Zernike Polynomials. J. Opt. Soc. Am. A. 1995; 12: 2105–2113.
20. Smirnov MS. Measurement of the Wave Aberration of the Human Eye. Biofizika. 1961; 6: 776–795.
21. Berny F, Slansky S. Wavefront Determination Resulting from Foucault Test as Applied to the Human Eye and Visual Instruments. In: Dickenson JH, ed. Optical Instruments & Techniques. Newcastle, UK: Oriel, 1969, pp. 375–386.
22. Artal P, Santamaría J, Bescós J. Retrieval of the Wave Aberration of Human Eyes from Actual Point-Spread Function Data. J. Opt. Soc. Am. A. 1988; 5: 1201–1206.
23. Iglesias I, Berrio E, Artal P. Estimates of the Ocular Wave Aberration from Pairs of Double-Pass Retinal Images. J. Opt. Soc. Am. A. 1998; 15: 2466–2476.
24. Iglesias I, Ragazzoni R, Julien Y, Artal P. Extended Source Pyramid Wave-front Sensor for the Human Eye. Opt. Express. 2002; 10: 419–428.
25. Liang J, Grimm B, Goelz S, Bille JF. Objective Measurement of Wave Aberrations of the Human Eye with the Use of a Hartmann-Shack Wave-front Sensor. J. Opt. Soc. Am. A. 1994; 11: 1949–1957.
26. Liang J, Williams DR. Aberrations and Retinal Image Quality of the Normal Human Eye. J. Opt. Soc. Am. A. 1997; 14: 2873–2883.
27. Prieto PM, Vargas-Martín F, Goelz S, Artal P. Analysis of the Performance of the Hartmann-Shack Sensor in the Human Eye. J. Opt. Soc. Am. A. 2000; 17: 1388–1398.
28. Artal P, Guirao A, Berrio E, Williams DR. Compensation of Corneal Aberrations by the Internal Optics in the Human Eye. J. Vis. 2001; 1: 1–8.
29. Young T. On the Mechanism of the Eye. Phil. Trans. Roy. Soc. London. 1801; 91: 23–28.
30. Millodot M, Sivak J. Contribution of the Cornea and the Lens to the Spherical Aberration of the Eye. Vision Res. 1979; 19: 685–687.
31. Salmon TO, Thibos LN. Videokeratoscope Line of Sight Misalignment and Its Effect on Measurements of Corneal and Internal Ocular Aberrations. J. Opt. Soc. Am. A. 2002; 19: 657–669.
32. Artal P, Marcos S, Navarro R, et al. Through Focus Image Quality of Eyes Implanted with Monofocal and Multifocal Intraocular Lenses. Opt. Eng. 1995; 34: 772–779.
33. Guirao A, Redondo M, Geraghty E, et al. Corneal Optical Aberrations and Retinal Image Quality in Patients in Whom Monofocal Intraocular Lenses Were Implanted. Arch. Ophthalmol. 2002; 120: 1143–1151.
34. Hofer HJ, Artal P, Singer B, et al. Dynamics of the Eye's Wave Aberration. J. Opt. Soc. Am. A. 2001; 18: 497–506.
35. Calver R, Cox MJ, Elliot DB. Effect of Aging on the Monochromatic Aberrations of the Human Eye. J. Opt. Soc. Am. A. 1999; 16: 2069–2078.
36. McLellan JS, Marcos S, Burns SA. Age-Related Changes in Monochromatic Wave Aberrations of the Human Eye. Invest. Ophthalmol. Vis. Sci. 2001; 42: 1390–1395.
37. Ijspeert JK, de Waard PW, van den Berg TJ, de Jong PT. The Intraocular Straylight Function in 129 Healthy Volunteers; Dependence on Angle, Age and Pigmentation. Vision Res. 1990; 30: 699–707.
38. Guirao A, Redondo M, Geraghty E, et al. Corneal Optical Aberrations and Retinal Image Quality in Patients in Whom Monofocal Intraocular Lenses Were Implanted. Arch. Ophthalmol. 2002; 120: 1143–1151.
39. Glasser A, Campbell MCW. Presbyopia and the Optical Changes in the Human Crystalline Lens with Age. Vision Res. 1998; 38: 209–229.
40. Artal P, Guirao A. Contribution of the Cornea and the Lens to the Aberrations of the Human Eye. Opt. Lett. 1998; 23: 1713–1715.
41. Marcos S, Burns SA, Moreno-Barriuso E, Navarro R. A New Approach to the Study of Ocular Chromatic Aberrations. Vision Res. 1999; 39: 4309–4323.
42. Wald G, Griffin DR. The Change in Refractive Power of the Human Eye in Dim and Bright Light. J. Opt. Soc. Am. 1947; 37: 321–366.
43. Thibos LN, Ye M, Zhang X, Bradley A. The Chromatic Eye: A New Reduced-Eye Model of Ocular Chromatic Aberration in Humans. Appl. Opt. 1992; 31: 3594–3600.
44. Bedford RE, Wyszecki G. Axial Chromatic Aberration of the Human Eye. J. Opt. Soc. Am. 1957; 47: 564–565.
45. Charman WN, Jennings JAN. Objective Measurements of Longitudinal Chromatic Aberration of Human Eye. Vision Res. 1976; 16: 999–1005.
46. Howarth PA, Zhang XX, Bradley A, et al. Does the Chromatic Aberration of the Eye Vary with Age? J. Opt. Soc. Am. A. 1988; 12: 2087–2092.
47. Zhang X, Bradley B, Thibos LN. Achromatizing the Human Eye: The Problem of Chromatic Parallax. J. Opt. Soc. Am. A. 1991; 8: 686–691.
48. Simonet P, Campbell MCW. The Optical Transverse Chromatic Aberration on the Fovea of the Human Eye. Vision Res. 1990; 30: 187–206.
49. Thibos LN, Bradley A, Still DL, et al. Theory and Measurement of Ocular Chromatic Aberration. Vision Res. 1990; 30: 33–49.
50. Ogboso YU, Bedell HE. Magnitude of Lateral Chromatic Aberration across the Retina of the Human Eye. J. Opt. Soc. Am. A. 1987; 4: 1666–1672.
51. Marcos S, Burns SA, Prieto PM, et al. Investigating the Sources of Variability of Monochromatic and Transverse Chromatic Aberrations Across Eyes. Vision Res. 2001; 41: 3861–3871.
52. Ferree CE, Rand G, Hardy C. Refraction for the Peripheral Field of Vision. Arch. Ophthalmol. 1931; 5: 717–731.
53. Ferree CE, Rand G. Interpretation of Refractive Conditions in the Peripheral Field of Vision: A Further Study. Arch. Ophthalmol. 1933; 9: 925–938.
54. Rempt F, Hoogerheide J, Hoogenboom WP. Peripheral Retinoscopy and the Skiagram. Ophthalmologica. 1971; 162: 1–10.
55. Lotmar W, Lotmar T. Peripheral Astigmatism in the Human Eye: Experimental Data and Theoretical Model Predictions. J. Opt. Soc. Am. 1974; 64: 510–513.
56. Millodot M, Lamont A. Refraction of the Periphery of the Eye. J. Opt. Soc. Am. 1974; 64: 110–111.
57. Smith G, Lu CW. Peripheral Power Errors and Astigmatism of Eyes Corrected with Intraocular Lenses. Optom. Vis. Sci. 1991; 68: 12–21.
58. Seidemann A, Schaeffel F, Guirao A, et al. Peripheral Refractive Errors in Myopic, Emmetropic, and Hyperopic Young Subjects. J. Opt. Soc. Am. A. 2002; 19: 2363–2373.
59. Semmlow JL, Tinor T. Accommodative Convergence Response to Off-Axis Retinal Images. J. Opt. Soc. Am. 1978; 68: 1497–1501.
60. Gu Y, Legge GE. Accommodation to Stimuli in Peripheral Vision. J. Opt. Soc. Am. A. 1987; 4: 1681–1687.
61. Lotmar W. Theoretical Eye Model with Aspherics. J. Opt. Soc. Am. 1971; 61: 1522–1529.
62. Kooijman AC. Light Distribution on the Retina of a Wide Angle Theoretical Eye. J. Opt. Soc. Am. 1983; 73: 1544–1550.
63. Wang G, Pomerantzeff O, Pankratov MM. Astigmatism of Oblique Incidence in the Human Model Eye. Vision Res. 1983; 23: 1079–1085.
64. Dunne MCM, Barnes DA. Schematic Modelling of Peripheral Astigmatism in Real Eyes. Ophthalmic Physiol. Opt. 1987; 7: 235–239.
65. Wang YZ, Thibos LN. Oblique (Off-Axis) Astigmatism of the Reduced Schematic Eye with Elliptical Refracting Surface. Optom. Vis. Sci. 1997; 74: 557–562.
66. Escudero-Sanz I, Navarro R. Off-Axis Aberrations of a Wide-Angle Schematic Eye Model. J. Opt. Soc. Am. A. 1999; 16: 1881–1891.
67. Jennings JAM, Charman WN. Off-Axis Image Quality in the Human Eye. Vision Res. 1981; 21: 445–455.
68. Navarro R, Artal P, Williams DR. Modulation Transfer of the Human Eye as a Function of Retinal Eccentricity. J. Opt. Soc. Am. A. 1993; 10: 201–212.
69. Jennings JAM, Charman WN. Analytic Approximation of the Off-Axis Modulation Transfer Function of the Eye. Vision Res. 1997; 37: 697–704.
70. Artal P, Iglesias I, Lopez-Gil N, Green DG. Double-Pass Measurements of the Retinal Image Quality with Unequal Entrance and Exit Pupil Sizes and the Reversibility of the Eye's Optical System. J. Opt. Soc. Am. A. 1995; 12: 2358–2366.
71. Navarro R, Moreno E, Dorronsoro C. Monochromatic Aberrations and Point-Spread Functions of the Human Eye across the Visual Field. J. Opt. Soc. Am. A. 1998; 15: 2522–2529.
72. Atchison DA, Scott DH. Monochromatic Aberrations of Human Eyes in the Horizontal Visual Field. J. Opt. Soc. Am. A. 2002; 19: 2180–2184.
73. Rynders MC, Navarro R, Losada MA. Objective Measurement of the Off-Axis Longitudinal Chromatic Aberration in the Human Eye. Vision Res. 1998; 38: 513–522.
74. Artal P, Derrington AM, Colombo E. Refraction, Aliasing, and the Absence of Motion Reversals in Peripheral Vision. Vision Res. 1995; 35: 939–947. 75. Porter J, Guirao A, Cox IG, Williams DR. Monochromatic Aberrations of the Human Eye in a Large Population. J. Opt. Soc. Am. A. 2001; 18: 1793–1803. 76. Castejón-Mochón JF, López-Gil N, Benito A, Artal P. Ocular Wave-Front Aberration Statistics in a Normal Young Population. Vision Res. 2002; 42: 1611–1617. 77. Thibos LN, Hong X, Bradley A, Cheng X. Statistical Variation of Aberration Structure and Image Quality in a Normal Population of Healthy Eyes. J. Opt. Soc. Am. A. 2002; 19: 2329–2348. 78. Cagigal MP, Canales VF, Castejón-Mochón JF, et al. Statistical Description of Wave-front Aberration in the Human Eye. Opt. Lett. 2002; 27: 37–39. 79. van Blokland GJ. Ellipsometry of the Human Retina in Vivo: Preservation of Polarization. J. Opt. Soc. Am. A. 1985; 2: 72–75. 80. Bour LJ. Polarized Light and the Eye. In: Charman WN, ed. Visual Optics and Instrumentation, Vol. 1. New York: Macmillan, 1991, pp. 310–325. 81. Weale RA. On the Birefringence of the Human Crystalline Lens. J. Physiol. 1978; 284: 112–113. 82. Bueno JM, Campbell MCW. Polarization Properties of the in Vivo Old Human Crystalline Lens. Ophthalmic Physiol. Opt. 2003; 23: 109–118. 83. van Blokland GJ, Verhelst SC. Corneal Polarization in the Living Human Eye Explained with a Biaxial Model. J. Opt. Soc. Am. A. 1987; 4: 82–90. 84. Bone RA. The Role of the Macular Pigment in the Detection of Polarized Light. Vision Res. 1980; 20: 213–220. 85. Dreher AW, Reiter K, Weinreb RN. Spatially Resolved Birefringence of the Retinal Never Fiber Layer Assessed with a Retinal Laser Ellipsometer. Appl. Opt. 1992; 31: 3730–3735. 86. Santamaría J, Artal P, Bescós J. Determination of the Point-Spread Function of the Human Eye Using a Hybrid Optical-Digital Method. J. Opt. Soc. Am. A. 1987; 6: 1109–1114. 87. Bueno JM, Artal P. Double-Pass Imaging Polarimetry in the Human Eye. 
Opt. Lett. 1999; 24: 64–66. 88. Bueno JM, Artal P. Polarization and Retinal Image Quality Estimates in the Human Eye. J. Opt. Soc. Am. A. 2001; 18: 489–496. 89. Prieto PM, Vargas-Martín F, McLellan JS, Burns SA. Effect of the Polarization on Ocular Wave Aberration Measurements. J. Opt. Soc. Am. A. 2002; 19: 809–814. 90. Marcos S, Díaz-Santana L, Llorente L, Dainty C. Ocular Aberrations with Ray Tracing and Shack-Hartmann Wave-front Sensors: Does Polarization Play a Role? J. Opt. Soc. Am. A. 2002; 19: 1063–1072. 91. Bueno JM, Berrio E, Artal P. Aberro-polariscope for the Human Eye. Opt. Lett. 2003; 28: 1209–1211. 92. Vos JJ, Munnik AA, Boogaard J. Absolute Spectral Reflectance of the Fundus Oculi. J. Opt. Soc. Am. 1965; 55: 573–574. 93. Bueno JM. Depolarization Effects in the Human Eye. Vision Res. 2001; 41: 2687–2696.
REFERENCES
61
94. Stiles WS, Crawford BH. The Luminous Efficiency of Rays Entering the Eye Pupil at Different Points. Proc. Roy. Soc. London B. 1933; 112: 428–450. 95. Bueno JM, Berrio E, Ozolinsh M, Artal P. Degree of Polarization as an Objective Method to Estimate Scattering. J. Opt. Soc. Am. A. 2004; 21: 1316–1321. 96. Burns SA, Wu S, Delori FC, Elsner AE. Direct Measurement of HumanCone-Photoreceptor Alignment. J. Opt. Soc. Am. A. 1995; 12: 2329–2338.
CHAPTER THREE
Wavefront Sensing and Diagnostic Uses GEUNYOUNG YOON University of Rochester, Rochester, New York
3.1 WAVEFRONT SENSORS FOR THE EYE
Various wavefront sensing techniques have been developed for the human eye [1–12]. Wavefront sensing is a key technique required to better understand the optical quality of the eye and to develop advanced vision correction methods, such as adaptive optics, customized contact lenses, and customized laser refractive surgery. It is also a necessary technique for high-resolution imaging of the retina. The most commonly used wavefront sensors are the spatially resolved refractometer, the laser ray tracing technique, and the Shack–Hartmann wavefront sensor. Wavefront sensors measure the aberrations of the entire eye generated by both corneal surfaces and the crystalline lens, whereas corneal topography can only measure the aberrations induced by the anterior or both anterior and posterior corneal surfaces. Since wavefront sensing light needs to pass through the cornea and crystalline lens, the eye’s pupil is the absolute limiting aperture of wavefront sensing, which may require pupil dilation. Wavefront sensing may be very difficult if parts of the eye have opacities, such as cataracts and corneal scars. Wavefront sensors can be categorized by whether the measurement is based on a subjective or objective method and whether the wavefront sensor measures the light going into the eye or coming out of the eye, as shown in
Adaptive Optics for Vision Science, Edited by Porter, Queener, Lin, Thorn, and Awwal Copyright © 2006 John Wiley & Sons, Inc.
Figure 3.1. However, all wavefront sensors developed for vision science and ophthalmology are based on the same principle: an indirect measurement of local wavefront slopes followed by reconstruction of the complete wavefront by integrating these slopes, as illustrated in Figure 3.2. The relationship between the wavefront slope (the first derivative of the wavefront) and the spot displacements, ∆x_S and ∆y_S, in the x and y directions can be expressed as:

∂W(x, y)/∂x = ∆x_S/F     (3.1)
∂W(x, y)/∂y = ∆y_S/F     (3.2)

FIGURE 3.1 Representative wavefront sensors developed to measure the eye's wave aberration.
FIGURE 3.2 Principle of measuring ocular aberrations with a wavefront sensor. A wavefront sensor measures local wavefront slopes and calculates the complete wavefront shape from the measured slopes.
where F is the focal length of the focusing optics. With the measured spot displacements in the x and y directions at each sampling point, the original wavefront can be calculated using different reconstruction algorithms. In this section, the principle of operation and the advantages and disadvantages of each wavefront sensor are summarized.

3.1.1 Spatially Resolved Refractometer
The spatially resolved refractometer [5, 6] has two light sources: a fixed source that serves as a reference (its light passes through the center of the pupil) and a movable source (its light can be directed through different locations in the pupil). For each location of the movable source, the subject's task is to adjust the position of the movable light source's spot on the retina until it is aligned with the reference spot formed by the fixed light source. The same task is repeated for different locations of the movable light source in the pupil plane. The change in the angle of incidence of the movable beam required to align the spots at each pupil location is a measure of the local wavefront slope. The main advantage of this type of wavefront sensor is its large dynamic range. However, the subjective measurement method has the disadvantage that performance depends on the subject's ability to complete the task precisely; measurement accuracy varies with the subject's degree of training and attention. More importantly, the measurement process is very time consuming, which makes this method inappropriate for use in a clinical environment and for real-time control, such as in an adaptive optics system.

3.1.2 Laser Ray Tracing
An objective version of the spatially resolved refractometer is the laser ray tracing [7–10] device. This technique was developed by Navarro and Losada for the living human eye [7] and consists of a sequentially delivered light pencil (i.e., a beam with a small diameter) that comes from a point object and passes through different locations in the eye’s pupil. A charge-coupled device (CCD) camera acquires images of each spot pattern focused on the retina, and the displacement of each spot from the location of the reference (or chief) ray is computed. These displacement data provide information about the local wavefront slopes. This type of wavefront sensor can have a larger dynamic range compared to that of the conventional Shack–Hartmann wavefront sensor because each spot is acquired sequentially and processed independently. In the Tscherning wavefront sensor, which uses the same measurement principle as the laser ray tracing, the entire spot array pattern is captured at once. Therefore, this Tscherning approach has a smaller dynamic range of measurement than the laser ray tracing technique.
A major disadvantage of the laser ray tracing system is the error caused by distortions in the spot intensity distribution when the beam falls on a patch of retina that is spatially nonuniform due to blood vessels, melanin distribution, or other factors that influence retinal reflectance. In the laser ray tracing system, the beam entering the pupil forms a spot on the retina, the location of which is determined by the slope of the wave aberration at the entry point in the pupil. Then, the light reflected off the retina passes back through the eye's optics and forms an image of the retinal spot on a light detector. It is the position of this image outside of the eye that is carefully measured to infer the wavefront slope for each point in the pupil. However, the measured position of this image could be influenced by nonuniformities in the retina, especially since each pupil entry point casts the retinal spot on a different retinal location, and each retinal location will have its own distinct spatial nonuniformities. This may produce an error in the aberration measurement, especially when measuring highly aberrated eyes that have much larger local refractive errors on the cornea.

3.1.3 Shack–Hartmann Wavefront Sensor
The Shack–Hartmann wavefront sensor [11, 12] contains a lenslet array, a two-dimensional array of a few hundred lenslets, all with the same diameter and the same focal length. Typical lenslet diameters range from about 100 to 600 µm; typical focal lengths range from a few millimeters to about 30 mm. The light reflected from a laser beacon projected on the retina is distorted by the wave aberration of the eye. The reflected light is then spatially sampled into many individual beams by the lenslet array and forms multiple spots in the focal plane of the lenslets, as shown in Figure 3.3. A CCD camera placed in the focal plane of the lenslet array records the spot array pattern for wavefront calculation. For a perfect eye (i.e., an aberration-free or diffraction-limited eye), light reflected from the retina emerges from the pupil as a collimated beam, and the Shack–Hartmann spots are formed along the optical axis of each lenslet, resulting in a regularly spaced grid of spots in the focal plane of the lenslet array. In contrast, individual spots formed by an aberrated eye, which distorts the wavefront of the light passing through the eye's optics, are displaced from the optical axes of their lenslets. The displacement of each spot is proportional to the wavefront slope at the location of that lenslet in the pupil and is used to reconstruct the wave aberration of the eye. This type of wavefront sensor shares advantages with laser ray tracing: both techniques are objective and can be operated in real time. These advantages allow for the automatic measurement of ocular aberrations, which is essential for routine clinical testing. A main difference between the Shack–Hartmann wavefront sensor and laser ray tracing is the method used to acquire the spot image. In contrast with laser ray tracing, where the incident beam is scanned sequentially over
FIGURE 3.3 Schematic diagram of the measurement principle of a Shack–Hartmann wavefront sensor. Two Shack–Hartmann images for perfect (left) and real (right) eyes are also shown.
the entrance pupil to measure light going into the eye, the Shack–Hartmann wavefront sensor measures light coming out of the eye, using a parallel process to acquire multiple spots over the exit pupil. By comparing the aberrations measured with laser ray tracing and other psychophysical methods to those from a Shack–Hartmann wavefront sensor in the same eyes, several investigators have demonstrated that there is no significant difference in the eye's measured aberrations between ingoing and outgoing light [13, 14]. Since only one spot is formed on the retina when using the Shack–Hartmann wavefront sensor, an intensity variation of that spot does not generate a measurement error, because the individual spots created by the lenslet array are all equally affected. If this retinal location has nonuniform reflectance, the distortion in the light distribution on the detector will be essentially the same for all the pupil locations sampled, and the wave aberration measurement will not be corrupted by the nonuniformity. If the wavefront shape varies significantly within one lenslet, the spot formed by that lenslet can be blurred, causing an error in estimating the centroid of the spot. This typically does not happen with lenslets that are a few hundred microns in diameter, because such a small aperture is almost diffraction-limited after passing through the eye's optics, provided the tear film thickness is uniform. However, the major disadvantage of the Shack–Hartmann device, which will be addressed in this chapter, is its relatively small dynamic range, which is limited by the lenslet spacing (or number of lenslets across the pupil) and the focal length of the lenslet array. Although this chapter will focus on optimization strategies for a Shack–Hartmann wavefront sensor, most of the design concepts can be applied to other types of wavefront sensors based on wavefront slope measurements.
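All of the sensors above reduce to the same computation: turn a measured spot displacement into a local wavefront slope via Eqs. (3.1) and (3.2). A minimal Python sketch of that step follows (illustrative only; the 2 × 2 lenslet grid, Gaussian spots, 5-µm pixels, and 7-mm focal length are assumed values, not from the text):

```python
import numpy as np

def centroid(sub):
    """Intensity-weighted center of mass of one subaperture image (pixels)."""
    ys, xs = np.mgrid[0:sub.shape[0], 0:sub.shape[1]]
    total = sub.sum()
    return (xs * sub).sum() / total, (ys * sub).sum() / total

def spots_to_slopes(image, n_sub, sub_px, pixel_size, focal_length):
    """Convert a Shack-Hartmann spot image into local wavefront slopes.

    Each spot's displacement from the center of its virtual subaperture
    is divided by the lenslet focal length (slope = displacement / F).
    """
    slopes = np.zeros((n_sub, n_sub, 2))
    ref = (sub_px - 1) / 2.0              # reference (unaberrated) spot position
    for j in range(n_sub):
        for i in range(n_sub):
            sub = image[j*sub_px:(j+1)*sub_px, i*sub_px:(i+1)*sub_px]
            cx, cy = centroid(sub)
            slopes[j, i, 0] = (cx - ref) * pixel_size / focal_length
            slopes[j, i, 1] = (cy - ref) * pixel_size / focal_length
    return slopes

# Simulate a 2 x 2 lenslet grid whose spots are all displaced by a known
# amount, then recover the corresponding (uniform) wavefront slope.
sub_px, dx, dy = 16, 2.0, -1.0            # subaperture size and shifts in pixels
ys, xs = np.mgrid[0:sub_px, 0:sub_px]
spot = np.exp(-(((xs - 7.5 - dx)**2 + (ys - 7.5 - dy)**2) / 2.0))
image = np.tile(spot, (2, 2))
slopes = spots_to_slopes(image, 2, sub_px, pixel_size=5e-6, focal_length=7e-3)
print(slopes[0, 0, 0])                    # ~ 2 px * 5 um / 7 mm = 1.43e-3 rad
```

For real Shack–Hartmann data the centroid step would also need background subtraction and thresholding, which are omitted here.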
3.2 OPTIMIZING A SHACK–HARTMANN WAVEFRONT SENSOR
There are a few design parameters that need to be properly determined when building a wavefront sensor, and their optimal values differ depending on the application in which the sensor will be used. What makes optimization difficult is that these parameters are typically not independent of one another; there are trade-offs between them. The four most important parameters—the number of lenslets (or lenslet diameter), dynamic range, measurement sensitivity, and the focal length of the lenslet array—are discussed in this section.

3.2.1 Number of Lenslets Versus Number of Zernike Coefficients

As will be discussed in Chapter 6, the conversion matrix used to reconstruct the wave aberration consists of the first derivatives of the individual Zernike polynomials with respect to the lateral directions, x and y, at each lenslet location. Singular value decomposition (SVD) is typically used to calculate its inverse matrix. However, this method becomes inaccurate if the reconstruction algorithm attempts to calculate too many Zernike polynomials for the available number of lenslets, causing unpredictable errors in the reconstructed wave aberration. Therefore, it is important to understand the relationship between the number of lenslets and the maximum number of Zernike coefficients that can be calculated reliably with the reconstruction algorithm. Figure 3.4 shows the maximum number of Zernike coefficients that can be accurately calculated as a function of the number of lenslets sampling the pupil. Wave aberrations originally measured through tenth-order Zernike coefficients with a large number of lenslets (217) were theoretically resampled with different numbers of lenslets. Different numbers of Zernike coefficients were then calculated using the algorithm described in Chapter 6, and the calculated coefficients were compared to the coefficients originally measured with the large number of lenslets.
The maximum number of Zernike coefficients that could be used to adequately represent the original wavefront was determined by minimizing the difference in root-mean-square (RMS) wavefront error between the resampled and recalculated wavefront and the originally measured wavefront. As Figure 3.4 illustrates, the maximum number of coefficients that the reconstruction algorithm can reliably calculate is approximately the same as the number of lenslets. For example, if Zernike coefficients up to the tenth order (corresponding to 63 total coefficients without piston, tip, and tilt, which cannot be measured accurately using Shack–Hartmann
FIGURE 3.4 Maximum number of Zernike modes that can be calculated reliably for a given number of sampling points. The dashed line represents a slope of 1.
wavefront sensors) need to be calculated, at least 63 lenslets (sampling points) are required for a reliable reconstruction. When selecting the number of lenslets used to sample the pupil, it is important to consider the total number of Zernike coefficients needed to effectively represent the true wave aberration. As shown in Figure 3.5, the true higher order aberration profile can be represented more precisely when the same area is sampled with a larger number of lenslets and, consequently, more Zernike coefficients are calculated. The required number of Zernike coefficients is related to the population of eyes to be measured with the wavefront sensor. For a population of normal eyes with no pathology (i.e., eyes without keratoconus or corneal transplants), the majority of the higher order aberrations are typically included in Zernike modes up to and including the eighth order, corresponding to 42 coefficients in total (excluding piston, tip, and tilt) [12]. The number of Zernike coefficients (excluding piston, tip, and tilt), J, through a given Zernike order, N, can be computed by the following equation:

J = (N + 1)(N + 2)/2 − 3     (3.3)
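Equation (3.3) is easy to encode; as a check (a small illustrative snippet, not part of the original text), the eighth and tenth orders give the 42 and 63 coefficients quoted in this section:

```python
def num_zernike_coeffs(order):
    """Number of Zernike coefficients through radial order N,
    excluding piston, tip, and tilt: J = (N + 1)(N + 2)/2 - 3  (Eq. 3.3)."""
    return (order + 1) * (order + 2) // 2 - 3

print(num_zernike_coeffs(8))   # 42 -> at least 42 lenslets needed
print(num_zernike_coeffs(10))  # 63
```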
This indicates that at least 42 lenslets are needed to reliably measure the higher order aberrations in these eyes. If a 6-mm pupil is considered, the
FIGURE 3.5 Relationship between the reliability of wavefront representation and the number of lenslets (or lenslet diameter). More lenslets provide a more precise representation of the wavefront. (From Yoon et al. [21]. Reprinted with permission from SLACK Inc.)
TABLE 3.1 Total Number of Zernike Coefficients for Different Lenslet Diameters (6-mm Pupil)

Lenslet Diameter (µm)    Maximum Number of Zernike Coefficients
200                      657 (~up to 34th order)
300                      277 (~up to 22nd order)
400                      145 (~up to 15th order)
500                       89 (~up to 12th order)
600                       61 (~up to 9th order)
maximum size of each lenslet would be approximately 0.65 mm in diameter (providing a total of 49 lenslets in the pupil). On the other hand, highly aberrated eyes, or eyes with abnormal and larger amounts of higher order aberrations, generally need more Zernike coefficients to adequately represent the wavefront. In other applications of wavefront sensing, such as an adaptive optics system, additional lenslets may be required for more effective deformable mirror actuator control, depending on the number of actuators and the control algorithm. Table 3.1 summarizes the maximum number of Zernike coefficients that can be represented for different lenslet diameters over a 6-mm pupil. The approximate highest Zernike order that can be calculated for a given number of lenslets is also included in the table.
The maximum number of Zernike coefficients shown in Table 3.1 applies to the ideal case in which no other effects are present, such as the various noise sources that exist in an actual wavefront sensor, temporal variations in the tear film surface profile, and partial occlusion of the peripheral lenslets by the pupil boundary. Therefore, it is often desirable to oversample the wavefront with more lenslets than the table specifies.

3.2.2 Trade-off Between Dynamic Range and Measurement Sensitivity
As mentioned in Section 3.1, a Shack–Hartmann wavefront sensor reconstructs the wave aberration by measuring the displacement of each focused Shack–Hartmann spot from its reference position, which corresponds to the local slope of the wavefront. The relationship between the wavefront slope, θ, and the spot displacement, ∆s, can be expressed as:

∆s = Fθ     (3.4)
where F is the focal length of the lenslet. If the focal length is constant, larger wavefront slopes will cause larger displacements of the spot. The measurement accuracy of the wavefront sensor is directly related to the precision of the centroid algorithm, that is, to the measurement precision of ∆s. A conventional centroid algorithm will fail to find the correct centers of the spots if the spots partially overlap or fall outside of the virtual subaperture (located directly behind each lenslet) on the photodetector array (see Fig. 3.6), unless a special algorithm is implemented. These factors limit the dynamic range of
FIGURE 3.6 Limitations in a Shack–Hartmann wavefront sensor: multiple spots, overlapped spots, and spot crossover. (From Yoon et al. [21]. Reprinted with permission from SLACK Inc.)
the wavefront sensor. As diagrammed in Figure 3.7, the dynamic range, θ_max, is the wavefront slope at which the Shack–Hartmann spot is displaced by the maximum distance, ∆s_max, within the subaperture, which is equal to one-half of the lenslet diameter for a given focal length lenslet array (when ignoring spot size). It is important to note that spot size needs to be considered in Eqs. (3.5) and (3.6) when the f-number of the lenslet (defined as the ratio of the focal length to the lenslet diameter) is relatively large. Equation (3.4) can be rewritten to define the dynamic range of the sensor as:

θ_max = ∆s_max/F     (3.5)
      = d/(2F)     (3.6)

where d is the lenslet diameter. To increase the dynamic range of the sensor, a larger lenslet diameter and/or a shorter focal length lenslet needs to be used. Assuming that the lenslet diameter is determined by the required number of Zernike coefficients (as discussed in the previous section), the only way to increase the dynamic range is to shorten the focal length of the lenslet.
FIGURE 3.7 Trade-off between the dynamic range and measurement sensitivity of a Shack–Hartmann wavefront sensor: Increasing the dynamic range results in a decrease in measurement sensitivity and vice versa.
However, if the focal length is too short, the measurement sensitivity decreases. Measurement sensitivity can be described as the minimum wavefront slope, θ_min, that the wavefront sensor can measure. Equation (3.4) can also be rewritten to define the measurement sensitivity as:

θ_min = ∆s_min/F     (3.7)

where ∆s_min is the minimum detectable spot displacement. Typically, ∆s_min is determined by the pixel size of the photodetector, the accuracy of the centroid algorithm, and the signal-to-noise ratio of the sensor. Therefore, a longer focal length lenslet is needed for better sensitivity if ∆s_min is constant. By combining Eqs. (3.6) and (3.7), the relationship between the dynamic range, θ_max, and the measurement sensitivity, θ_min, can be described as:

θ_min = 2∆s_min θ_max/d     (3.8)
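A short numerical sketch of Eqs. (3.5) to (3.8) follows (the 200-µm lenslet diameter, 7-mm focal length, and 1-µm minimum detectable displacement are illustrative assumptions, not values from the text):

```python
def dynamic_range(d, F):
    """theta_max = d / (2F)  (Eq. 3.6); d and F in meters, result in radians."""
    return d / (2.0 * F)

def sensitivity(ds_min, F):
    """theta_min = ds_min / F  (Eq. 3.7)."""
    return ds_min / F

d, F, ds_min = 200e-6, 7e-3, 1e-6        # assumed lenslet and detector values
t_max = dynamic_range(d, F)              # ~14.3 mrad
t_min = sensitivity(ds_min, F)           # ~0.143 mrad

# Eq. (3.8): the two quantities are tied together through d, so for a fixed
# lenslet diameter, halving F doubles t_max but also doubles t_min.
assert abs(t_min - 2 * ds_min * t_max / d) < 1e-15
print(t_max, t_min)
```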
This equation defines the trade-off between the dynamic range and sensitivity of a Shack–Hartmann wavefront sensor. The two quantities are inversely related: for a constant lenslet spacing, d, increasing the dynamic range of the wavefront sensor decreases its sensitivity (i.e., increases θ_min), and vice versa.

3.2.3 Focal Length of the Lenslet Array
As discussed in the previous section, understanding the trade-off between the dynamic range and measurement sensitivity of a Shack–Hartmann wavefront sensor is critical when selecting the focal length of the lenslet array. Ideally, one uses the longest focal length that meets both the dynamic range and measurement sensitivity requirements. To illustrate this, the fraction of subjects whose wave aberrations could be measured accurately using lenslet arrays of different focal lengths and different lenslet diameters was calculated for a large population of 333 normal preoperative laser refractive surgery eyes. The results for two different lenslet diameters (200 and 400 µm) are shown in Figure 3.8. The solid, dashed, and dotted lines represent the lenslet focal length required to measure a certain percentage of eyes with no aberration correction, a defocus correction, and a second-order (defocus and astigmatism) correction, respectively. A smaller lenslet diameter requires a shorter focal length to cover a certain fraction of subjects. For both lenslet sizes, precompensating for defocus (dashed line) allows for the use of longer focal length lenslets compared to no precompensation (solid line). Correcting astigmatism in addition to defocus (dotted line) slightly relaxes the focal length requirement further, although the improvement is very small, especially in the case of the 200-µm lenslet diameter.
[Figure 3.8 plot: measurable fraction of subjects (%) versus lenslet focal length (mm) for a 7-mm pupil, n = 333 normal eyes; left panel, 200-µm lenslet diameter; right panel, 400-µm lenslet diameter; curves for no correction, defocus corrected, and second-order aberrations corrected.]
FIGURE 3.8 Fraction of subjects whose wave aberrations can be measured adequately when the wave aberration is sampled with lenslet arrays of different focal lengths. Two lenslet diameters, 200 µm (left) and 400 µm (right), were assumed. Solid, dashed, and dotted lines represent cases with no aberration correction, a defocus-only correction, and a defocus and astigmatism correction, respectively.
An appropriate focal length for measuring most normal eyes can be chosen from these plots once the lenslet diameter is given.

3.2.4 Increasing the Dynamic Range of a Wavefront Sensor Without Losing Measurement Sensitivity

It is also important to choose a lenslet focal length that provides enough measurement sensitivity for the given application. A focal length short enough to provide adequate dynamic range may be too short to measure small amounts of wave aberration. For example, a much shorter focal length lenslet is required for eyes with abnormal corneal conditions (such as keratoconic and post-corneal-transplant eyes) than for normal eyes, simply because of the larger amounts of aberration inherent in abnormal eyes; appropriate measurement sensitivity may then be unobtainable. Therefore, the capability of increasing the wavefront sensor's dynamic range without sacrificing its measurement sensitivity becomes even more important. A few methods have been proposed to achieve this. One simple approach is to increase the magnification of the pupil at the lenslet array. Since the wavefront is expanded across the magnified pupil without changing its magnitude, magnifying the pupil
reduces the averaged wavefront slope, which produces smaller spot displacements. However, a larger CCD camera is then required to capture the spot array pattern over the magnified pupil, which increases the cost of the wavefront sensor. Ophthalmic trial lenses (spherical and cylindrical lenses) can be used to precompensate for the largest components of the eye's aberration, defocus and astigmatism. In this method, it is important to know the exact position and power of the trial lenses used for the precompensation if those lenses are not inserted in the pupil plane. This precompensation can also be performed with a software algorithm that allows the user to reposition individual centroid boxes irregularly according to an estimated amount of defocus and astigmatism. Computer algorithms, including the unwrapping [15] and iterative spline fitting [16, 17] methods, have also been suggested as ways of extending the dynamic range. These algorithms are capable of reassigning focused spots to their corresponding lenslets, although there is still a limit on the maximum measurable wavefront curvature when crossover between adjacent spots occurs. Other methods use hardware to resolve the problem described above. One method, by Lindlein et al., uses a spatial light modulator to selectively block and unblock different parts of the wavefront in an effort to resolve ambiguities relating to the presence of multiple spots within a virtual subaperture [18]. This method fails when overlapping spots exist. An "adaptive" Shack–Hartmann sensor has also been proposed, in which the static lenslet array is replaced with a liquid crystal display (LCD) device [19]. The LCD can be programmed to generate an array of Fresnel microlenses with different focal lengths. Due to the pixelated structure of the Fresnel lenses, however, light diffracted into higher diffraction orders may reduce the reliability of the determined centroid locations, affecting higher order aberration measurements.
Another method that has recently been proposed is to increase the size of the virtual subaperture by using a translatable plate with discrete clear apertures that are the same size as the lenslets [20]. For example, if the translatable plate selectively blocks every other lenslet in both the horizontal and vertical directions, the virtual subaperture grows by a factor of 2, resulting in a twofold increase in dynamic range. The plate must then be translated to capture the spots that were blocked in the previous frame; in two dimensions, two translations, one horizontal and one vertical, are required to acquire a complete set of Shack–Hartmann spots.
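The pupil-magnification approach mentioned above can be verified numerically. The sketch below (illustrative, not from the text) uses a defocus-like wavefront W = a·r² and shows that relaying the pupil with magnification m scales every local slope, and hence every spot displacement, by 1/m:

```python
import numpy as np

a, m = 2.0, 1.5                            # assumed defocus amplitude, magnification
r = np.linspace(-1.0, 1.0, 201)            # original pupil coordinate
W = a * r**2                               # wavefront values, identical in both planes

slope_orig = np.gradient(W, r)             # dW/dr at the original pupil
slope_mag = np.gradient(W, m * r)          # dW/dr' at the magnified pupil (r' = m*r)

# Magnifying the pupil leaves the wavefront magnitude unchanged but spreads
# it over a larger aperture, so all local slopes shrink by 1/m.
print(np.allclose(slope_mag, slope_orig / m))   # True
```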
3.3 CALIBRATION OF A WAVEFRONT SENSOR
In calibrating a wavefront sensor, there are two main parts: software calibration and hardware calibration. The software calibration should be done to confirm that the wavefront reconstruction algorithm correctly calculates the expected Zernike coefficients from the spot displacement data. This process also includes determining the accuracy with which the centroid algorithm detects the center of each spot. Once the software is calibrated, the hardware
needs to be calibrated by measuring a known wave aberration. This process calibrates systematic errors in the wavefront sensor (i.e., system aberrations generated by various optical components, such as lenses and beam splitters, misalignments, and any manufacturing errors in the lenslet array's focal length and spacing). Detailed calibration procedures are described below.

3.3.1 Reconstruction Algorithm
Figure 3.9 shows a flowchart illustrating how to calibrate the reconstruction algorithm. Although this calibration process is designed for calibrating a Shack–Hartmann type of wavefront sensor, it can easily be used for other types of wavefront sensors, with some minor modifications. The basic idea of this process is to run the reconstruction algorithm on simulated data and check if the Zernike coefficients output from the reconstruction algorithm are the same as the input coefficients that were used to generate the simulated data. Any combination of Zernike coefficients for lower and higher order aberrations that reliably represent the wave aberration in an optical system can be assumed as an input. Using these coefficients, a wave aberration, W(x, y), can be calculated as:
FIGURE 3.9 Flowchart illustrating the calibration process for the reconstruction algorithm using a simulated spot array pattern.
W(x, y) = ∑_{j=3}^{J} c_j Z_j(x, y)     (3.9)
where c_j and Z_j(x, y) are the Zernike coefficients and their corresponding Zernike polynomials, respectively, and J is the total number of Zernike coefficients included in calculating the wave aberration. This input wave aberration, which fills the entire pupil, can then be divided into subapertures that have the same diameter as each lenslet. The local wave aberration, W_k(x, y), for each subaperture is used to build the subpupil function, p_k(x, y), defined as:

\[ p_k(x, y) = \exp\!\left[ -i\,\frac{2\pi}{\lambda} W_k(x, y) \right] \tag{3.10} \]
where λ is the wavelength of the light source used by the wavefront sensor. A Fourier transformation is then applied to each subpupil function, p_k(x, y), to simulate the spot produced by each individual lenslet. This yields the ideal Shack–Hartmann spot array pattern simulated from the known set of Zernike coefficients, in the absence of noise. Noise sources that can be measured or statistically modeled can be added to the simulated spot array pattern to test the robustness of the centroiding algorithm to noise. Using this simulated image, the centroid and reconstruction algorithms are applied to detect the center of each spot and to calculate the output Zernike coefficients. Note that the total number of calculated Zernike coefficients cannot be larger than the maximum number of Zernike coefficients that can be calculated reliably with the given number of lenslets (as discussed in Section 3.2.1). The output coefficients are then compared with the ones used to generate the simulated spot pattern. If there is no error in the reconstruction process, both the signs and magnitudes of the output coefficients should be the same as those of the input. This calibration process is also useful for confirming that the coordinate system and sign convention of the reconstruction algorithm are correct.
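As a concrete check on this pipeline, the far-field step can be sketched for a single subaperture: apply a known wavefront, form the spot by Fourier transform as in Eq. (3.10), centroid it, and verify that the recovered spot shift matches the applied local slope. The parameters below (64 samples, a pure tilt of 3 waves, 8× zero padding) are illustrative assumptions, not values from the text; the wavefront is expressed in waves, so 2πW/λ becomes 2πW:

```python
import numpy as np

# Minimal single-lenslet sketch (assumed parameters) of the self-test
# described in the text: known wavefront in, centroided spot out.

N = 64              # samples across one subaperture (assumed)
tilt_waves = 3.0    # known wavefront tilt across the subaperture, in waves

x = np.linspace(-0.5, 0.5, N, endpoint=False)
X, Y = np.meshgrid(x, x)
W = tilt_waves * X                    # wave aberration in waves (tilt-only Eq. 3.9)
p = np.exp(-1j * 2 * np.pi * W)       # subpupil function (Eq. 3.10, W in waves)

# Far-field spot of this lenslet; zero padding gives finer spot sampling.
pad = 8
P = np.fft.fftshift(np.fft.fft2(p, s=(pad * N, pad * N)))
I = np.abs(P) ** 2

# Centroid of the spot, in cycles per subaperture.
fu = np.fft.fftshift(np.fft.fftfreq(pad * N)) * N
FX, FY = np.meshgrid(fu, fu)
cx = (I * FX).sum() / I.sum()

# A tilt of k waves across the aperture shifts the spot by k cycles per
# subaperture; the sign follows the exp(-i...) convention of Eq. 3.10.
print(cx)   # -> close to -3.0
```

The same machinery, applied per lenslet and followed by centroiding and reconstruction, reproduces the round-trip test of Figure 3.9.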
3.3.2 System Aberrations
Although the reconstruction algorithm may be well calibrated, there are other factors in an actual wavefront sensor that may yield inaccuracies in the measured wave aberration. Potential error sources include (1) a misalignment of optical components, such as relay lenses used in an optical system, (2) residual aberrations of the optics themselves, and (3) manufacturing errors in the lenslet array. These factors can significantly affect measurement performance if they are not correctly calibrated. Calibrating lenslet array parameters, such as its focal length and spacing, is especially important for achieving precise measurements of the aberrations.
A good technique for calibrating the aberrations inherent in a wavefront sensor is to measure an aberration profile with both an interferometer, one of the most reliable instruments available for measuring aberrations, and the developed wavefront sensor. Figure 3.10 shows a schematic diagram of an optical layout for measuring a wave aberration simultaneously with these two instruments. This setup consists of a set of relay optics that produces another plane conjugate with the pupil of the optics to be measured. The collimated beam from the illumination path of an interferometer passes through the relay optics and is reflected by either an aberration generator or a flat mirror. The reflected beam then returns to the two sensors, via a beamsplitter, after passing through the same relay optics. In practice, aberrations are present in all of the optical components of the system, including the lenslet array. Each sensor also measures slightly different aberrations because light is reflected off of the beamsplitter for the interferometer, as opposed to being transmitted through the beamsplitter for the wavefront sensor. Therefore, the aberrations of the entire system first need to be measured using a flat mirror. Following this measurement, the flat mirror is replaced with an aberration generator that has an irregular surface profile that aberrates the wavefront of the incident collimated beam. This aberration generator can be a custom optic or a deformable mirror. The distorted wavefront is then measured with the two sensors. However, the measured aberration includes both the aberrations from the aberration generator and the system aberrations previously measured with the flat mirror. Therefore, the true aberration of the aberration generator is the difference between the measured aberrations from the aberration generator (measured with each sensor) and the system aberrations (measured with the flat mirror).

FIGURE 3.10 Schematic diagram of an optical setup to calibrate the measurement performance of a Shack–Hartmann wavefront sensor by comparing it to an interferometric method. The system aberrations measured with a flat mirror in each sensor should be subtracted from the measured aberration of the aberration generator.

The true aberration from the interferometer measurement is used as the reference to evaluate the measurement results from the Shack–Hartmann wavefront sensor. When the CCD camera used to capture the spot array pattern is not precisely placed at the focus of the lenslet array, there is typically a discrepancy between the two sensors in the magnitude of the individual Zernike coefficients. The signs of each coefficient should be the same if the coordinate systems used for both instruments are the same and the calibration of the reconstruction algorithm (described in Section 3.3.1) was successful. If the magnitude of each Shack–Hartmann Zernike coefficient tends to be larger than the interferometric measurement, the focal length of the lenslet array used in the reconstruction algorithm needs to be lengthened by the ratio of the coefficient magnitude from the Shack–Hartmann sensor to that from the interferometer.
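A minimal numeric sketch of the two corrections described above. All names and coefficient values here are hypothetical, chosen only to make the bookkeeping visible:

```python
# Hypothetical illustration (all numbers assumed) of (1) subtracting the
# system aberration measured with the flat mirror, and (2) rescaling the
# assumed lenslet focal length by the SHWS/interferometer magnitude ratio.

def true_aberration(measured_generator, measured_flat):
    """Per-Zernike-coefficient subtraction of the system aberration."""
    return [g - f for g, f in zip(measured_generator, measured_flat)]

shws_meas = [0.55, -0.32, 0.11]   # generator + system, SHWS (um, assumed)
shws_flat = [0.05, -0.02, 0.01]   # system only, SHWS (assumed)
intf_meas = [0.52, -0.31, 0.10]   # generator + system, interferometer (assumed)
intf_flat = [0.02, -0.01, 0.00]   # system only, interferometer (assumed)

shws = true_aberration(shws_meas, shws_flat)   # ~[0.5, -0.3, 0.1]
intf = true_aberration(intf_meas, intf_flat)   # ~[0.5, -0.3, 0.1]

# If the SHWS magnitudes ran, say, 5% high relative to the interferometer,
# lengthen the focal length used in reconstruction by that same ratio
# (spot displacement = focal length * slope):
f_assumed = 7.0e-3        # meters, assumed
ratio = 1.05              # |c_SHWS| / |c_interferometer|, assumed
f_calibrated = f_assumed * ratio
print(f_calibrated)       # -> 7.35 mm, expressed in meters
```

In this toy example the two instruments agree after the flat-mirror subtraction; in practice the residual disagreement is what drives the focal-length rescaling.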
3.4 SUMMARY
In this chapter, we reviewed various wavefront sensing techniques and their advantages and disadvantages. When developing a wavefront sensor, it is very important to carefully consider parameters such as the required number of sampling points and dynamic range in order to develop the wavefront sensor that works best for each particular application. There are also practical issues that affect the reliability of the measured aberration. These include accounting for the eye's chromatic aberration, the effect of accommodation on the higher order aberrations, the temporal variability of the aberrations, and the potential shift of the pupil center between dilated and natural pupil conditions. Addressing these issues yields the most effective and reliable outcome when using wavefront sensing techniques. There has been a significant increase in the interest in, and application of, wavefront sensing techniques in the vision science and ophthalmological communities. Wavefront sensing is becoming a critical tool that allows us to correct higher order aberrations to improve visual performance using laser refractive surgery and customized optics (described in Chapters 11 and 12) and to obtain high-resolution images of the retina (described in Chapters 10, 15, 16, and 17).
REFERENCES

1. Howland HC, Howland B. A Subjective Method for the Measurement of Monochromatic Aberrations of the Eye. J. Opt. Soc. Am. 1977; 67: 1508–1518. 2. Iglesias I, Lopez-Gil N, Artal P. Reconstruction of the Point-Spread Function of the Human Eye from Two Double-Pass Retinal Images by Phase-Retrieval Algorithms. J. Opt. Soc. Am. A. 1998; 15: 326–339.
3. Smirnov MS. Measurement of the Wave Aberration of the Human Eye. Biophysics. 1961; 6: 687–703. 4. Walsh G, Charman WN, Howland HC. Objective Technique for the Determination of Monochromatic Aberrations of the Human Eye. J. Opt. Soc. Am. A. 1984; 1: 987–992. 5. Webb RH, Penney CM, Thompson KP. Measurement of Ocular Local Wave-front Distortion with a Spatially Resolved Refractometer. Appl. Opt. 1992; 31: 3678–3686. 6. He JC, Marcos S, Webb RH, Burns SA. Measurement of the Wave-front Aberration of the Eye by a Fast Psychophysical Procedure. J. Opt. Soc. Am. A. 1998; 15: 2449–2456. 7. Navarro R, Losada MA. Aberrations and Relative Efficiency of Light Pencils in the Living Human Eye. Optom. Vis. Sci. 1997; 74: 540–547. 8. Navarro R, Moreno E, Dorronsoro C. Monochromatic Aberrations and Point-Spread Functions of the Human Eye across the Visual Field. J. Opt. Soc. Am. A. 1998; 15: 2522–2529. 9. Pallikaris IG, Panagopoulou SI, Molebny VV. Evaluation of TRACEY Technology for Total Eye Refraction Mapping. Reproducibility Tests. Invest. Ophthalmol. Vis. Sci. 2000; 41: S301. 10. Pallikaris IG, Panagopoulou SI, Siganos CS, Molebny VV. Objective Measurement of Wavefront Aberrations with and without Accommodation. J. Refract. Surg. 2001; 17: S602–S607. 11. Liang J, Grimm B, Goelz S, Bille J. Objective Measurement of Wave Aberrations of the Human Eye with the Use of a Hartmann–Shack Wave-front Sensor. J. Opt. Soc. Am. A. 1994; 11: 1949–1957. 12. Liang J, Williams DR. Aberrations and Retinal Image Quality of the Normal Human Eye. J. Opt. Soc. Am. A. 1997; 14: 2873–2883. 13. Moreno-Barriuso E, Merayo-Lloves JM, Marcos S, Navarro R. Ocular Aberrations after Refractive Surgery Measured with a Laser Ray Tracing Technique. Invest. Ophthalmol. Vis. Sci. 2000; 41: S303. 14. Salmon TO, Thibos LN, Bradley A. Comparison of the Eye's Wave-front Aberration Measured Psychophysically and with the Shack–Hartmann Wave-front Sensor. J. Opt. Soc. Am. A. 1998; 15: 2457–2465. 15. Pfund J, Lindlein N, Schwider J. Dynamic Range Expansion of a Shack–Hartmann Sensor by Use of a Modified Unwrapping Algorithm. Opt. Lett. 1998; 23: 995–997. 16. Groening S, Sick B, Donner K, et al. Wave-front Reconstruction with a Shack–Hartmann Sensor with an Iterative Spline Fitting Method. Appl. Opt. 2000; 39: 561–567. 17. Unsbo P, Franzén LK, Gustafsson J. Increased Dynamic Range of a Hartmann–Shack Sensor by B-spline Extrapolation: Measurement of Large Aberrations in the Human Eye. Invest. Ophthalmol. Vis. Sci. 2002; 43: U465. 18. Lindlein N, Pfund J, Schwider J. Algorithm for Expanding the Dynamic Range of a Shack–Hartmann Sensor by Using a Spatial Light Modulator Array. Opt. Eng. 2001; 40: 837–840. 19. Seifert L, Liesener J, Tiziani H. The Adaptive Shack–Hartmann Sensor. Opt. Comm. 2003; 216: 313–319.
20. Pantanelli SM, Yoon G, Jeong T, MacRae S. Large Dynamic Range Shack– Hartmann Wavefront Sensor for Highly Aberrated Eyes. Invest. Ophthalmol. Vis. Sci. 2003; 44: U1. 21. Yoon G, Pantanelli S, MacRae SM. Optimizing the Shack–Hartmann wavefront sensor. In: Krueger RR, Applegate RA, MacRae SM, eds. Wavefront Customized Visual Correction: The Quest for Super Vision II. Thorofare, NJ: SLACK, 2004, pp. 131–136.
CHAPTER FOUR
Wavefront Correctors for Vision Science

NATHAN DOBLE, Iris AO, Inc., Berkeley, California
DONALD T. MILLER, Indiana University, Bloomington, Indiana
4.1 INTRODUCTION
Aberrations of the ocular media and diffraction generated by the finite size of the eye's pupil fundamentally limit our ability to resolve fine retinal structure when looking into the eye. Conversely, with the light path reversed, diffraction and aberrations limit visual acuity to well below the spatial bandwidth imposed by the neural visual system, such as that dictated by the sampling of the photoreceptor mosaic. Conventional corrective methods, such as spectacles, contact lenses, and refractive surgery, provide a static amelioration of prism, sphere, and cylinder, which correspond to the lower order Zernike aberrations of tilt, defocus, and astigmatism. Image quality in the eye, however, can be significantly increased by dilating the pupil to minimize diffraction and subsequently correcting the ocular aberrations across the large pupil using, for example, an adaptive optics (AO) system. In recent years, AO has been successfully applied to correct both the lower and higher order ocular aberrations in a variety of retinal camera architectures. These include conventional fundus cameras [1–3], confocal scanning laser ophthalmoscopes (cSLOs) [4], and optical coherence tomography
(OCT) [5–7]. The increase in contrast and resolution permits observation of retinal structure at the single-cell level, which could not otherwise be seen in the living eye. AO has also been used to improve vision by controlling the type and amount of aberrations to which the retina is exposed. Specifically, AO provides a means to directly assess the visual impact of individual types of aberration [8, 9] and allows patients to experience beforehand the predicted visual benefit of invasive surgical procedures, such as refractive surgery [10, 11]. In general, the ability of AO to improve resolution in the eye makes it a key enabling technology for probing the microscopic living retina and enhancing vision. The extent to which AO can effectively improve resolution, however, fundamentally depends on its ability to accurately measure, track, and correct the ocular aberrations. This chapter focuses on the last step, correction. While all of the steps are critical, the performance limiter of current AO systems for vision science appears to be the wavefront corrector. This limitation, coupled with the expense of wavefront correctors ($7000 to >$100,000), motivates the need for discussion of this device. This chapter attempts to bring together wavefront corrector information that is important for the design of AO systems for vision science. Much of this is not readily found in the adaptive optics literature, which is heavily centered on atmospheric applications. Section 4.2 introduces the principal components of an AO system. Section 4.3 presents the primary types of wavefront correctors, and Section 4.4 surveys versions that have been applied or are in the process of being applied to vision science AO systems. Section 4.5 contains theoretical performance predictions for the most common types of wavefront correctors. 
The section combines results already in the literature with new predictions for corrector types not yet evaluated, all within the mathematical framework of Fourier optics. Predictions are based on modeling the correctors' principal operation in conjunction with measured wave aberrations collected from two large populations. This theoretical analysis extends that already reported for segmented piston-only devices [12].
4.2 PRINCIPAL COMPONENTS OF AN AO SYSTEM
The late 1980s saw the first AO wavefront corrector, a membrane mirror, applied to the human eye for the correction of astigmatism, which was obtained by a conventional refraction [13]. This was followed in the early 1990s by the first Shack–Hartmann wavefront sensor (SHWS) [14] applied to the eye [15]. The first complete AO system, which successfully corrected the eye's most significant higher order aberrations, was built in the mid-1990s [1]. A conceptual schematic of AO as employed for retinal imaging and vision testing is shown in Figure 4.1. As illustrated, AO systems consist of three main components:
FIGURE 4.1 Basic schematic of an AO system for the eye. Light from a laser beacon (not shown) is focused onto the retina, some of which reflects and fills the dilated pupil of the eye. Light exiting the eye reflects off the wavefront corrector (deformable mirror) and is directed by a beamsplitter into the wavefront sensor. Centroid positions are obtained from the raw sensor data and are used in the wavefront reconstructor to determine the appropriate drive voltages for the wavefront corrector. The wavefront sensing and corrector control process repeats in a continuous fashion. The system is considered a real-time, closed-loop AO system if it operates sufficiently fast to track temporal changes in the ocular aberrations. Once a good wavefront correction is obtained, images of the retina are collected with the science camera, or vision experiments are conducted with the visual stimulus. The illumination source for the science camera is not shown. (Figure courtesy of M. Helmbrecht, Iris AO, Inc.)
• The wavefront sensor measures the optical aberrations in the pupil plane of the eye. While there are numerous types of wavefront sensors, the Shack–Hartmann wavefront sensor is almost universally used for the eye [15, 16]. More recently, other alternatives have been investigated, including pyramid sensing [17] and interferometry [18]. Chapter 3 provides a detailed discussion of wavefront sensing techniques. • The control computer converts the raw output of the wavefront sensor into voltage commands that are sent to the wavefront corrector. For the eye, the computational requirements to operate in a real-time, closed-loop fashion are modest and can be achieved with a desktop computer. Details of this stage can be found in Chapter 5. • The wavefront corrector compensates for the measured aberrations by generating a surface shape that is ideally conjugate to the aberration profile. (If a reflective wavefront corrector is used, the mirror surface
shape should be equal to, but with half of the amplitude of, the incoming wavefront.) The wavefront corrector is placed in a plane conjugate to both the pupil of the eye and the wavefront sensor. The most common wavefront corrector consists of a continuous reflective surface and an array of adjoining computer-controlled actuators or electrodes that physically or electrically push and pull on the surface, transforming it into a desired shape. Wavefront correctors have been commercially available for many years, though their construction is still a field of active research. Many different types of correctors exist, including those based on conventional piezoelectric, bimorph, liquid crystal, and microelectromechanical system (MEMS) technologies [19, 20].
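One pass through this sensing-reconstruction-correction loop can be sketched schematically. A random matrix stands in for the actuator-to-slope model, and the reconstructor, gain, and dimensions are placeholders, not values from any real system:

```python
import numpy as np

# Schematic AO closed-loop sketch (all quantities assumed): slopes from
# the wavefront sensor are mapped through a reconstructor matrix to
# corrector commands, accumulated with an integrator gain.

rng = np.random.default_rng(0)

n_slopes, n_actuators = 12, 6
A = rng.standard_normal((n_slopes, n_actuators))   # actuator -> slope model (placeholder)
R = np.linalg.pinv(A)                              # least-squares reconstructor
gain = 0.3                                         # integrator loop gain (assumed)

true_cmd = rng.standard_normal(n_actuators)        # commands that would null the aberration
cmd = np.zeros(n_actuators)

for _ in range(30):                                # closed-loop iterations
    slopes = A @ (true_cmd - cmd)                  # residual slopes measured each frame
    cmd += gain * (R @ slopes)                     # integrate the correction

residual = np.linalg.norm(true_cmd - cmd)
print(residual)   # residual shrinks toward zero as the loop converges
```

With a static aberration and a well-conditioned model, the residual decays geometrically as (1 − gain) per iteration; real systems must additionally track the temporal changes the figure caption mentions.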
4.3 WAVEFRONT CORRECTORS
Wavefront correctors alter the phase profile of the incident wavefront by changing the physical length over which the wavefront propagates or the refractive index of the medium through which the wavefront passes. Most wavefront correctors are based on mirror technology and impart phase changes by adjusting their surface shape (i.e., they change their physical length while keeping the refractive index constant). Other devices, such as those based on liquid crystal technologies, rely on localized changes in refractive index with the physical length being held constant. Using a nomenclature similar to that of Tyson [19] and Hardy [20], there are four broad categories of wavefront correctors that one is most likely to consider for an AO system designed for the eye. These categories are depicted in Figure 4.2 and are described below.

Discrete Actuator Deformable Mirrors  Deformable mirrors of this type have a continuous mirror surface whose profile is controlled by an underlying array of actuators [Fig. 4.2(a)]. Pushing one actuator produces a localized (also termed zonal) Gaussian-like deflection of the mirror surface, termed the influence function. The deflection extends to adjacent actuators, where it typically changes the surface height by 10 to 15% of the peak deflection. This 10 to 15% deflection is commonly referred to as the coupling coefficient, as it describes the degree of cross-coupling between actuators. The influence functions are neither identical nor independent, the extent of which depends on many factors, such as the thickness and material properties of the top facesheet and the characteristics of the underlying actuator, such as the modulus of elasticity and the type of surface/actuator junction. A detailed treatment of the impact of these additional parameters is given by Tyson [19] and Hardy [20].

Segmented Correctors  Mirrors of this type consist of an array of adjacent, planar mirror segments that are independently controlled [Fig. 4.2(b)].
FIGURE 4.2 Four main classes of wavefront correctors. (a) Discrete actuator deformable mirrors consist of a continuous, reflective surface and an array of actuators, each capable of producing a local deformation in the surface. (b) Piston-only segmented correctors consist of an array of small planar mirrors whose axial motion (piston) is independently controlled. Liquid crystal spatial light modulators modulate the wavefront in a similar fashion but rely on changes in refractive index rather than the physical displacement of a mirror surface. Piston/tip/tilt-segmented correctors add independent tip and tilt motion to the piston-only correctors. (c) Membrane mirrors consist of a grounded, flexible, reflective membrane sandwiched between a transparent top electrode and an underlying array of patterned electrodes, each of which is capable of producing a global deformation in the surface. (d) Bimorph mirrors consist of a layer of piezoelectric material sandwiched between a continuous top electrode and a bottom, patterned electrode array. A top mirrored layer is added to the top continuous electrode. An applied voltage causes a deformation of the top mirrored surface.
Piston-only segmented mirrors have one degree of freedom that corresponds to a pure, vertical piston mode. Piston/tip/tilt segmented mirrors have two additional degrees of freedom (tip and tilt) for slope control. This results in much better wavefront fitting and significantly reduces the number of segments needed to achieve the same correction as that for piston-only mirrors. Segmented mirrors are considered zonal correctors in that each segment induces a localized wavefront correction. Mirror segments are not coupled, producing a coupling coefficient of zero. The influence function of piston-only mirrors is described by a "top hat" function. Piston/tip/tilt mirrors are more complicated and have three influence functions per segment. Unlike discrete actuator deformable mirrors, segmented mirrors have gaps between the segments that can trap and scatter incident light, reducing the efficiency and quality of the correction. Gaps also necessitate co-phasing of the segments, that is, matching the reflected wavefront phase at the segment boundaries to ensure a continuous wavefront profile. These gaps are typically characterized by the fill factor, the ratio of the actual mirrored surface to the total corrector surface area. Fill factors vary considerably among devices, with some approaching 100% and others well below 50%.

Liquid crystal spatial light modulators (LC-SLMs) are another type of piston-only, segmented corrector. Instead of mirrored segments that physically move, LC-SLMs change the refractive index over a range that permits optical path changes of at least one wavelength. The resulting effect on the wavefront is essentially the same as for the segmented mirrors. Control of the refractive index is achieved electronically (via electrodes) [21] or optically (by imaging an intensity pattern directly onto the liquid crystal housing) [22].

Membrane Mirrors  Mirrors of this type consist of an edge-clamped, flexible, reflective membrane (analogous to a drumskin) sandwiched between a transparent top electrode and an underlying array of patterned electrodes [Fig. 4.2(c)]. When no voltage is applied, the membrane is in principle flat. Application of a voltage to one electrode induces an electrostatic attraction that deflects the entire membrane. Hence, membrane mirrors are often viewed as modal correctors, as opposed to discrete actuator deformable mirrors and segmented mirrors, which are quasi- or fully zonal. Because the edges of the membrane are clamped and cannot move, only the central two-thirds of the membrane is useful for correction. Most membrane mirrors rely on electrostatics, though other actuation mechanisms are possible, such as magnetic [23, 24], thermal [25], or voice-coil [26] methods.

Bimorph Mirrors  Bimorph mirrors are another form of modal corrector [19, 20] and at their most basic level consist of two dissimilar layers, with one or both being piezoelectric. These are sandwiched between a continuous top electrode and a bottom, patterned electrode array [Fig. 4.2(d)]. A mirrored layer of high optical quality is added to the top continuous electrode.
Application of a voltage across the top and bottom electrodes changes the underlying surface area of the two dissimilar layers and results in a bending of the entire mirror. The magnitude of the deformation is dependent on the electric field and the dielectric properties of the material. Bimorphs are particularly adept at providing large dynamic ranges at low spatial frequencies.
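For the zonal devices above, a deformable-mirror surface can be modeled as a superposition of influence functions. A common simplification, used here as an illustrative assumption rather than a physical model from the text, represents each influence function as a Gaussian whose width is chosen so that the surface at the neighboring actuator is 15% of the peak deflection (the coupling coefficient):

```python
import numpy as np

# Illustrative model (assumed Gaussian form): discrete-actuator mirror
# surface as a sum of influence functions with a 15% coupling coefficient.

coupling = 0.15                      # surface at the neighbor / peak deflection
pitch = 1.0                          # actuator spacing (normalized)
sigma = pitch / np.sqrt(-2 * np.log(coupling))   # width that yields the coupling

def surface(x, centers, strokes):
    """Mirror surface as a superposition of Gaussian influence functions."""
    x = np.asarray(x, dtype=float)
    return sum(s * np.exp(-((x - c) ** 2) / (2 * sigma ** 2))
               for c, s in zip(centers, strokes))

centers = np.arange(5, dtype=float)  # 5 actuators at unit pitch
strokes = [0, 0, 1.0, 0, 0]          # push only the center actuator

# Deflection at the pushed actuator and at its nearest neighbor:
print(surface([2.0, 3.0], centers, strokes))   # -> approximately [1.0, 0.15]
```

A piston-only segmented corrector would replace the Gaussian with a "top hat" over each segment, giving zero coupling between segments.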
4.4 WAVEFRONT CORRECTORS USED IN VISION SCIENCE
Several types of wavefront correctors have been used in vision AO systems for the correction of ocular aberrations. These include macroscopic discrete actuator deformable mirrors, LC-SLMs, bimorph mirrors, and a variety of MEMS-based mirrors. The LC-SLMs represent a form of segmented corrector; the MEMS-based mirrors typically represent a form of membrane mirror or discrete actuator deformable mirror. The use of each of these is surveyed below.
4.4.1 Macroscopic Discrete Actuator Deformable Mirrors
A discrete actuator deformable mirror was employed in the first vision science AO system that successfully corrected the most significant higher order aberrations of the eye and with which the University of Rochester authors performed high-resolution retinal imaging and vision testing [1]. This initial Rochester AO system consisted of a Shack–Hartmann wavefront sensor and a 37-channel discrete actuator deformable mirror manufactured by Xinetics, Inc. [27]. The mirror has a dynamic range of 4 µm (8 µm in reflection) for an applied voltage of ±100 V. A photograph of a larger version of the mirror is shown in Figure 4.3. The initial Rochester system took tens of seconds to converge and so more accurately depicted an active, rather than adaptive, optics system. An improved 37-channel Xinetics mirror with lead–magnesium–niobate (PMN) actuators was used in the next generation of this AO system, with which a closed-loop bandwidth of approximately 1 Hz was achieved [2]. The same mirror was also used in the first AO confocal scanning laser ophthalmoscope (see also Chapter 16) [4] and the first AO time-domain and spectral-domain OCT cameras (see also Chapter 17) [5, 6].
FIGURE 4.3 The discrete actuator deformable mirror manufactured by Xinetics, Inc. The device shown has 97 actuators that are arranged in a rectilinear pattern behind the 8-cm mirror. Maximum stroke of the actuators is ±2 µm, produced by an applied voltage of ±100 V. (Reprinted with permission of Xinetics, Inc.)
Discrete actuator deformable mirrors with substantially more actuators have yielded improved aberration correction in the eye. A 97-channel, electrostrictive PMN Xinetics mirror was successfully integrated into the Rochester AO system and routinely compensates the wave aberration across a 6.8-mm pupil diameter to a root-mean-square (RMS) wavefront error of <0.1 µm (see also Chapter 15) [28]. More recently, a 109-channel discrete actuator ITEK deformable mirror was employed in a conventional flood-illumination AO camera [29, 30]. While Xinetics deformable mirrors have proven effective in AO systems for the eye, their high cost (roughly $1000 per actuator, which does not include the electronic driver), physically large size (>5 cm in diameter), and limited actuator stroke (i.e., displacement of the mirror surface) are notable limitations. The large size leads to the long optical path lengths that are necessary to magnify the small pupil of the eye (<8 mm) onto the deformable mirror. Their 4-µm stroke can easily be consumed by the second-order aberrations of the eye. Conserving stroke for the higher order aberrations requires the meticulous use of trial lenses, which can be time consuming and cumbersome.

4.4.2 Liquid Crystal Spatial Light Modulators
Early commercial LC-SLMs for wavefront correction, such as those manufactured by Meadowlark Optics [21], operated in transmission and modulated the wavefront using a hexagonal pattern of electrodes (i.e., pixels), each being individually addressable. Thibos and Bradley [31] and Vargas-Martin et al. [32] provided early evaluations of the Meadowlark devices for correcting ocular errors, in both cases driving the devices in open loop. The 69 and 127 pixels employed in these devices limited their spatial resolution and effectiveness in correcting ocular aberrations. More recently, Awwal et al. [33], Prieto et al. [34], and Bessho et al. [35] independently evaluated an optically addressed LC-SLM [22] for the human eye, with the former using the device as part of an AO phoropter (see also Chapter 18). Prieto et al. reported an RMS wavefront error reduction from 1.7 to 0.1 µm for a 5.5-mm pupil [34]. The optically addressed LC-SLM was manufactured by Hamamatsu Corporation; a photograph of the device is shown in Figure 4.4. Optically addressed LC-SLMs have extremely high spatial resolution, coupled with low control voltages (~5 V). Both XGA (1024 × 768 piston-only pixels) and VGA resolution (effective area 480 × 480) models are available. A limitation is their narrow corrective range (or stroke), which is usually around one optical wavelength. A solution to this problem is modulo-2π phase wrapping, or an integer multiple thereof, which is discussed by Thibos and Bradley [31] and Miller et al. [12]. While this confines effective compensation to a restricted spectral range, as discussed by Prieto et al. [34], the intrinsic chromatic aberrations of the eye ultimately limit the spectral bandwidth over which one can correct [12].
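Modulo-2π phase wrapping can be sketched in a few lines. This is a simplified model (a real LC-SLM quantizes phase and works in device counts, and the equivalence below holds only at the design wavelength):

```python
import numpy as np

# Sketch (simplified, assumed) of modulo-2*pi phase wrapping: fold a
# wavefront much larger than the device's ~one-wavelength stroke into
# the phase range the modulator can actually produce.

def wrap_phase(phase_rad):
    """Fold an arbitrarily large phase into [0, 2*pi)."""
    return np.mod(phase_rad, 2 * np.pi)

phase = np.array([0.5, 7.0, -3.0, 20.0])   # desired phase in radians (assumed)
wrapped = wrap_phase(phase)

# For monochromatic light at the design wavelength, the wrapped and
# unwrapped profiles impose the same complex field:
print(np.allclose(np.exp(1j * phase), np.exp(1j * wrapped)))   # -> True
```

At other wavelengths the 2π steps no longer correspond to whole waves, which is the chromatic restriction discussed in the text.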
FIGURE 4.4 (Left) Schematic layout of the optically addressed LC-SLM marketed by Hamamatsu Corporation. Intensity changes of the addressing beam, which originate from a liquid crystal television (not shown), alter the applied voltage across the LC cell at 480 × 480 independent pixel locations. Voltage changes rotate the birefringent liquid crystal molecules and induce changes in the refractive index. The readout light, incident from the right, is phase modulated twice by the LC cell, the second time after reflecting from the dielectric mirror. (Right) Photograph of the Hamamatsu optically addressed LC-SLM. (Figures reprinted with permission of the Hamamatsu Corporation.)
All LC-SLMs applied to the eye to date have been based on nematic liquid crystals. These liquid crystals respond slowly (~4 to 10 Hz refresh rate) [34] and operate only on linearly polarized light. Specifically, light that is polarized along the axis of the liquid crystal molecules is phase modulated, while light polarized orthogonally experiences no modulation. The orthogonal component can easily be removed with a linear polarizing filter, but this reduces efficiency even with highly polarized light sources because an appreciable fraction of the light reflected from the retina is depolarized. Furthermore, the detection of both polarization components (i.e., polarization-sensitive imaging) provides additional information about the retinal reflection and is becoming increasingly important. Faster liquid crystals based on ferroelectric technology are also commercially available, such as the binary devices manufactured by Displaytech. They have been used in prototype AO systems (though not in vision science) to provide half- and quarter-wave correction [36].

4.4.3 Bimorph Mirrors
Bimorph mirrors have been employed in several conventional flood-illuminated retina cameras [37–39]. Some of these have produced images of the cone photoreceptors that are comparable in quality to those obtained using the macroscopic discrete actuator deformable mirrors described previously [39]. These devices have intrinsically high mirror stroke (typically >10 µm), most of which is available for lower-order aberration correction. This is particularly attractive for correcting the large second-order aberrations that are typically present in the human eye. The available stroke, however, decreases rapidly as the square of the radial Zernike order, reducing the effectiveness of the mirror for compensating higher spatial frequency aberrations. A detailed description of the optical characteristics of bimorph mirrors can be found in Horsley et al. [40] and Dalimier and Dainty [57]. Their performance is further discussed in Section 4.5.6.

4.4.4 Microelectromechanical Systems
Microelectromechanical systems (MEMS) mirrors are a group of correctors that offer enormous potential for aberration correction in the eye, especially from a commercialization standpoint. MEMS technology leverages the considerable investments made in the integrated circuit industry and has the potential to provide low-cost, compact devices. Fabricated predominantly from silicon, MEMS mirrors promise batch fabrication and a design space that allows for extremely high dynamic range, high temporal frequency operation, and large numbers of degrees of freedom (i.e., actuators or electrodes). Depending on their method of fabrication, MEMS mirrors can be subdivided into two main classes: bulk and surface micromachined devices [41]. Bulk micromachined mirrors are a form of membrane mirror [42] that consist of a grounded, flexible membrane lying between a continuous transparent electrode and an underlying array of patterned electrodes [Fig. 4.2(c)]. Applying a voltage between one of the patterned electrodes and the top electrode causes the entire membrane to deform, much like striking a drum skin. These modal correctors have a broad influence function similar to that of bimorph mirrors. Consequently, their available stroke decreases rapidly with spatial frequency. As an example, if 6 to 7 µm of mirror deflection is available for low-order (second-order) aberration correction, then less than 0.25 µm will be available for the correction of fifth-order terms in the Zernike expansion [43]. Figure 4.5 shows a 37-actuator device from OKO Technologies [42]. Agile Optics (formerly Intellite) also manufactures a range of MEMS-based membrane mirrors [44]. Interestingly, the first wavefront corrector applied to the eye was a six-electrode membrane mirror that was integrated into a custom scanning laser ophthalmoscope [13].
Since objective wavefront sensors, such as the SHWS, had not yet been developed for the eye, the authors were limited to correcting the astigmatism in one subject’s eye based on a subjective refraction. Bartsch et al. [45] and Fernandez et al. [3, 43] used a more powerful membrane mirror from OKO Technologies (37 electrodes) in conjunction with a SHWS to provide real-time correction of the aberrations in the human eye. The other class of MEMS mirrors are the surface micromachined devices, which are typically a form of discrete actuator deformable mirror. These mirrors are fabricated by depositing successive layers of material onto a substrate and then patterning each one using masks and sacrificial layers. The structure is then built up from a stack of such layers.

FIGURE 4.5 (Left) Enlarged view of a 37-electrode, 15-mm aperture OKO Technologies MEMS mirror [43]. (Reprinted with permission of Flexible Optical B.V.) A maximum deflection of 8 µm is possible for an applied voltage of 210 V. (Right) The µDM140 MEMS deformable mirror manufactured by Boston Micromachines Corporation [47, 48]. The device has a clear aperture of 3.3 to 4.4 mm, 140 actuators on a square grid, and greater than 3.5 µm of stroke. (Reprinted with permission of Boston Micromachines Corporation.)

With this fabrication process, it is possible to make several different types of deformable mirrors, all of which can be scaled to the desired number of actuators while keeping the overall physical size less than or equal to that of a dilated pupil (~8 mm). Doble et al. [46] evaluated a surface micromachined MEMS deformable mirror from Boston Micromachines Corporation (Fig. 4.5) [47, 48] and provided the first images of the human photoreceptor mosaic obtained with a wavefront corrector other than a discrete actuator Xinetics mirror. Actuation in the Boston device is realized with electrostatics, as opposed to the piezoelectrics in the Xinetics mirror, and provided 2 µm of stroke for an applied voltage of 220 V. Like the Xinetics mirror, actuator deformation is local and generates an influence function with a coupling coefficient of 15%. As with all electrostatic devices, the lack of hysteresis (which is present in piezoelectric devices) is a considerable advantage. Other industrial groups are also fabricating MEMS-based mirrors. Gehner et al. describe the fabrication and use of a 200 × 200 piston-only segmented MEMS device built directly onto complementary metal–oxide–semiconductor (CMOS) circuitry [49]. The device has one optical wavelength of stroke as it is intended to be used in conjunction with phase wrapping. Figure 4.6 shows a high-stroke, modal MEMS membrane mirror described by Kurczynski et al. [50]. These devices are made from low-stress silicon and
are operated by electrostatic attraction. The mirrors have a 10-mm diameter active area, actuated by 1024 electrodes, and have demonstrated ±20 µm of wavefront deformation for lower-order modes while operating at less than 20 V in a closed-loop system. Figure 4.7 shows a scanning electron micrograph of a 37-segment piston/tip/tilt MEMS-based mirror described by Doble et al. [51]. High-quality mirror surfaces are bonded to the actuator platforms shown on the left of Figure 4.7. Fill factors greater than 98% have been achieved, with over 7 µm of stroke available for an applied voltage of 60 V. The array shown in the figure is approximately 2.5 mm in diameter. Each 700-µm diameter segment requires three independent voltages that control the axial position and the slopes of the segment. Table 4.1 lists many of the vision AO systems currently in use. The list includes 20 AO systems: 9 employed in flood-illuminated retinal cameras, 6 in scanning laser ophthalmoscopes, 4 in optical coherence tomography systems, and 1 in a phoropter. Various types of mirrors are being used, with actuator counts ranging from 13 to 144. All employ Shack–Hartmann wavefront sensors to measure the wave aberration of the eye. Typical (approximate) performance parameters as reported by the authors are given for each system.
FIGURE 4.6 Closeup of the transparent electrode membrane device described by Kurczynski et al. [50]. Wire bonds to the ceramic package are visible on four sides of the device, with the single wire on the right being the connection to the top transparent electrode. The membrane is circular and is positioned beneath the transparent electrode. Holes in the transparent electrode are positioned outside the active area and mitigate squeeze film damping. The mirrors have a 10-mm diameter active area, actuated by 1024 electrodes, and have demonstrated ±20-µm wavefront deformation for lower-order modes, while operating at less than 20 V in a closed-loop system. The scale bar is indicated in centimeters. (Figure courtesy of P. Kurczynski.)
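The drum-like deformation of the membrane devices described above can be sketched numerically: a membrane under tension T subject to an electrostatic pressure P deflects according to the Poisson equation ∇²w = −P/T with a clamped edge. The sketch below (grid size and load are arbitrary illustration values, not parameters of any particular device) solves this by Jacobi relaxation for a single energized electrode and shows the resulting broad, modal influence function.

```python
import numpy as np

# Sketch: influence function of one electrode of a membrane mirror.
# The clamped membrane satisfies laplacian(w) = -P/T with w = 0 at the
# edge; the electrode footprint and load below are illustrative only.
n = 65                      # grid points across the membrane
w = np.zeros((n, n))        # membrane deflection
rhs = np.zeros((n, n))
rhs[28:37, 28:37] = 1.0     # uniform pressure under one central electrode

# Jacobi relaxation of the discrete Poisson equation (edges stay fixed).
for _ in range(4000):
    w[1:-1, 1:-1] = 0.25 * (w[:-2, 1:-1] + w[2:, 1:-1] +
                            w[1:-1, :-2] + w[1:-1, 2:] +
                            rhs[1:-1, 1:-1])

# The response is broad: deflection well outside the electrode footprint
# remains a sizable fraction of the peak ("modal" behavior).
peak = w.max()
off_electrode = w[n // 2, n // 4]   # a point far from the energized electrode
print(f"relative deflection away from electrode: {off_electrode / peak:.2f}")
```

The nonlocal response is exactly why such modal correctors spend most of their stroke on low-order shapes and have little left for high spatial frequency aberrations.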
FIGURE 4.7 (Left) Scanning electron micrograph of a piston/tip/tilt 37-segment MEMS mirror described by Doble et al. [51]. The actuators are shown without the high-quality mirror segments that are normally bonded to their top surfaces. (Right) Interferogram of a fully assembled subarray with random tips and tilts applied. Each mirror segment is larger than its underlying actuator. This design provides a fill factor greater than 98%. The segments are on the order of 700 µm in diameter. Strokes of 7 µm have been achieved for an applied voltage of 60 V. (Photographs courtesy of Iris AO, Inc.)
4.5 PERFORMANCE PREDICTIONS FOR VARIOUS TYPES OF WAVEFRONT CORRECTORS

As surveyed in the previous section, numerous types of wavefront correctors have been applied to the eye and have successfully reduced the degrading impact of the ocular aberrations, providing the eye with optical quality exceeding that with which it is endowed. However, none have been reported to provide sufficient correction to yield diffraction-limited imaging for large pupils (≥6 mm), where the aberrations are most severe and the benefit of AO is the largest. Typically reported residual RMS values are ~100 nm for a 6.8-mm pupil, corresponding to λ/6 in the visible (although some subjects correct better than this) [2]. Even at smaller pupil sizes, many of the devices have not reached diffraction-limited imaging. This raises a fundamental question: what characteristics of the correcting device, such as actuator number and stroke, are required to achieve diffraction-limited imaging, and how can corrector performance and cost be optimally matched to a particular imaging task in the eye? Wavefront correctors, including many of those described in the previous section, have largely been designed for imaging through atmospheric turbulence. Specifically, their actuator number, stroke, influence functions, and speed are weighted toward the spatial and temporal properties of the atmosphere [19, 20]. Ocular aberrations, on the other hand, have different properties that should lead to different optimal corrector designs. As an example,
TABLE 4.1 Location and Description of Many AO Systems in Use Today in the Vision Community. [The table was flattened during extraction and its row alignment is not recoverable; its entries are summarized here.] Groups: Rochester; Murcia; Murcia/Vienna; Imperial College/City, UK; Galway, Ireland; San Diego; UC Davis/LLNL; LLNL (to be operated at Doheny); Chengdu, China; Paris; Moscow/Kestrel Corp.; Schepens; Berkeley/Houston; and Indiana. System types: flood-illuminated retinal cameras, AO cSLOs, AO OCT systems, an AO phoropter, and a nonimaging system. Correctors: discrete actuator deformable mirrors (37, 97, or 109 actuators; Xinetics, ITEK), MEMS mirrors (144 actuators; Boston Micromachines Corporation), bimorph mirrors (13 to 37 actuators; AOptix, CILAS), membrane mirrors (19 or 37 electrodes; OKO Technologies), an LC-SLM (Hamamatsu), and several woofer–tweeter pairings of a MEMS (144) with a bimorph (35) mirror. All systems use a SHWS for measuring the aberrations of the eye; the number of wavefront corrector actuators is specified in parentheses. Reported residual RMS errors range from 0.06 µm (d = 4.8 mm) to >0.35 µm (d = 4 mm), with several systems listed as online or coming online [2–7, 28–30, 32, 37–39, 43, 45, 46, 52–54, 56].
the temporal bandwidth of ocular aberrations is only a few hertz [55, 56], which is roughly two orders of magnitude less than that of the atmosphere [19, 20]. An additional factor is cost. While wavefront correctors represent a small fraction of the total cost of the ground-based telescopes in which they are employed, they currently exceed the total cost of most commercial retinal cameras. Mirrors with diameters comparable to the dilated pupil size of the eye (4 to 8 mm) would also allow for more compact systems. Clearly, these devices must be made much less expensive and physically smaller for commercialization to move forward. While there has been substantial effort in recent years to fabricate a variety of different types of wavefront correctors for vision science, there has been almost no prior modeling to guide these efforts. Just in the last year, for instance, the performance of a few types of wavefront correctors (specifically piston-only segmented mirrors [12] and bimorph and bulk micromachined membrane mirrors [57]) was finally modeled in conjunction with measured wave aberration data of normal eyes. As no similar analysis has yet been reported for Gaussian discrete actuator deformable mirrors and other forms of segmented mirrors (e.g., piston/tip/tilt), we have opted in this section to extend the analysis used by Miller et al. [12] to these other devices, with additional results for piston-only segmented mirrors. It is somewhat unusual to include such detailed work in a chapter, but the only alternative would have been to ignore this critical void in our knowledge of the performance of wavefront correctors for the eye, which did not seem appealing. Our detailed modeling and results are followed by a brief summary of the performance results obtained by others for specific commercially available membrane and bimorph mirrors.
While the presentation of this body of work may appear uneven, it largely reflects the extent to which the various types of wavefront correctors have been modeled for vision science, and it clearly indicates that this type of work is in its infancy. Adaptive optics systems are being used on a wide range of patients and under different refractive conditions. As a first step toward capturing some of this variance in normal healthy eyes, our analysis of deformable and segmented mirrors incorporated two large population studies, one from Indiana University (described by Thibos et al. [58]) and a second data set from the University of Rochester and Bausch & Lomb (unpublished). Each population is investigated using three second-order aberration conditions. Collectively, these six scenarios (two populations, each with three second-order conditions) traverse a wide range of aberration strengths and should provide a rough guide to the correction performance of particular devices for specific imaging and vision experiments. The peak-to-valley (PV) wave aberration errors distributed across both populations are used to determine the required actuator stroke of the mirrors. Performance results for the discrete actuator deformable mirrors and the piston/tip/tilt segmented mirrors are given for array sizes ranging from 0 × 0
to 21 × 21 actuators and segments, respectively. Performance results for the piston-only segmented mirrors are given for arrays of 0 × 0 to 150 × 150 segments. For all three corrector types, the wavelength was 0.6 µm and the pupil diameter was 7.5 mm, selected to simulate a dilated pupil condition. For the deformable mirror, wavelengths of 0.4, 0.8, and 1.0 µm were also investigated. As the required actuator stroke is obtained from the PV errors, modeling corrector performance was simplified by assuming unlimited actuator stroke. For completeness, results from Miller et al. [12] will be referred to in the segmented-mirror discussion, and those from Dalimier and Dainty [57] for membrane and bimorph mirrors will be summarized at the end.

4.5.1 Description of Two Large Populations
The first population study, based at the University of Rochester and Bausch & Lomb, measured 70 right eyes of normal subjects using the Bausch & Lomb Zywave aberrometer. The aberrometer returned all the aberration coefficients (including defocus and astigmatism) up to fifth-order Zernike polynomials for a 7.5-mm pupil diameter. The overall wave aberration was measured without any form of refractive correction (e.g., trial lenses). Ages ranged from 20 to 59 years with a mean of 33.8 years and a standard deviation of 9.7 years. The range of spherical equivalent errors was −0.8 to −8.5 D with a mean of −3.5 D and a standard deviation of 1.5 D. Similarly for the cylinder, the range was 0 to −2.75 D with a mean of −0.8 D and a standard deviation of 0.6 D. All subjects were myopic candidates for laser refractive surgery. The subjects were dilated prior to measurement (2.5% phenylephrine and/or 1% tropicamide), and their heads were stabilized with a chin and forehead rest. Five wavefront measurements were collected on each subject and averaged. The second population study was based at Indiana University and involved 100 individuals drawn from the student body and faculty of the Indiana University School of Optometry. The mean age was 26.1 years with a standard deviation of 5.6 years. The range of spherical equivalent errors was +5.5 to −10 D with a mean of −3.1 D and a standard deviation of 3.0 D. The magnitude of cylinder ranged from 0 to 1.75 D with a mean of 0.3 D and a standard deviation of 0.38 D. All subjects were free of ocular disease. Of the 200 eyes, 140 had pupil sizes ≥7.5 mm, and the corresponding 70 right eyes were selected for our corrector analysis. Subjects were dilated and accommodation was paralyzed by administration of cyclopentolate (0.5%, 1 drop). Aberrometry was performed with a laboratory Shack–Hartmann wavefront sensor in conjunction with trial lenses that were determined by a subjective refraction.
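The refractive errors above are quoted in diopters, while the corrector analyses that follow work with Zernike coefficients in micrometers; it may help to relate the two. A commonly used conversion for an ANSI-normalized Zernike defocus coefficient c₂⁰ is M = −4√3·c/r², where r is the pupil radius. The helper below is a sketch under that convention; the function name and example values are ours, not taken from either study.

```python
import math

def defocus_diopters(c20_um, pupil_diameter_mm):
    """Spherical-equivalent power (D) corresponding to an ANSI-normalized
    Zernike defocus coefficient c_2^0.

    M = -4*sqrt(3)*c / r^2; with c in micrometers and the pupil radius r
    in millimeters, the units cancel to give M directly in diopters.
    """
    r_mm = pupil_diameter_mm / 2.0
    return -4.0 * math.sqrt(3.0) * c20_um / r_mm**2

# Example: a +1 um defocus coefficient over the 7.5-mm pupil used in
# this chapter's analysis corresponds to about -0.5 D.
print(round(defocus_diopters(1.0, 7.5), 3))
```

Note the strong pupil dependence: the same coefficient over a 4-mm pupil represents roughly 3.5 times more dioptric power than over a 7.5-mm pupil.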
A bite bar was used to stabilize the head. A minimum of three wavefront measurements were collected on each subject. Zernike modes up through 10th radial order were reconstructed by the method of least squares. Additional details can be found in Thibos et al. [58]. Figure 4.8 shows the wavefront variance decomposed by Zernike order for the Rochester and Indiana studies. The magnitude of the wave aberrations
FIGURE 4.8 Log10 of the wavefront variance plotted as a function of Zernike order for the two populations. Diamonds and corresponding dashed curves represent the mean and mean ± two times the standard deviation of the log10 (wavefront variance), respectively, for the 70 eyes measured in the Rochester (black) and Indiana (gray) studies.
for the Rochester population is significantly higher than that for Indiana at all Zernike orders. We suspect these differences stem from differences in the refraction protocol, the ages of the subjects, and possibly the population type. In the modeling that follows, different levels of second-order aberrations were considered, the motivation being that second-order aberrations almost always dominate the total wavefront, yet their magnitude varies considerably depending on the refractive state of the subject and the manner in which trial lenses or translating lenses are applied. Three specific second-order states were investigated. In the first condition, all three second-order modes took their measured values. In the second condition, only the defocus coefficient was zeroed. In the third condition, all three second-order Zernike coefficients in the measured wavefront were zeroed. Note that both measured population data sets represented essentially static wave aberrations and therefore did not capture the temporal behavior of the ocular media. As such, the temporal responses of the mirrors were not assessed in our modeling. This is not a fundamental limitation, however, as these devices (with the exception of LC-SLMs) have resonant frequencies well above that needed to track the temporal dynamics of the eye’s aberrations.

4.5.2 Required Corrector Stroke
Regardless of type, the dynamic range of the corrector must be at least equal to the peak-to-valley (PV) error of the aberrations for effective compensation. For discrete actuator deformable mirrors, membrane mirrors, and bimorph mirrors, this means the maximum physical excursion of their reflective surface must be at least one half of the PV error. For segmented devices, the stroke
can be effectively increased with phase wrapping, though limitations apply and are discussed at the end of this section. As a guide for assessing actuator stroke for ocular aberrations (assuming no phase wrapping), Figure 4.9 shows the cumulative percentage of the populations that would be corrected by a wavefront corrector providing a given PV error. Three curves are shown for each study and correspond to the three different second-order states as described earlier. Note that the second-order aberrations in the Rochester population included each subject’s entire refractive error, while that for the Indiana data included only the residual aberrations after a subjective refraction. This makes a direct comparison of the two data sets difficult as higher order aberrations influence the patient’s best subjective refraction and lead to nonzero residual values for defocus and astigmatism. Simply zeroing Zernike coefficients (as was done for four of the six curves in Fig. 4.9) is therefore not directly equivalent to an ideal conventional refraction and can lead to higher PV errors.
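The cumulative curves of Figure 4.9 are, in effect, percentiles of the per-eye PV errors, and for a reflective corrector the required mirror-surface stroke is half the wavefront PV (the wavefront error doubles on reflection). A minimal sketch, using synthetic PV values in place of the measured populations:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic per-eye PV wavefront errors (um) standing in for a measured
# population; the real curves in Fig. 4.9 come from the Rochester and
# Indiana data sets.
pv_errors_um = rng.gamma(shape=4.0, scale=1.5, size=70)

def required_stroke(pv_errors, coverage=0.95):
    """PV error covering the requested fraction of the population, and
    the corresponding mirror-surface stroke for a reflective corrector
    (half the wavefront PV)."""
    pv_needed = np.percentile(pv_errors, 100.0 * coverage)
    return pv_needed, pv_needed / 2.0

pv95, stroke95 = required_stroke(pv_errors_um)
print(f"PV covering 95% of eyes: {pv95:.1f} um -> "
      f"mirror stroke {stroke95:.1f} um")
```

Lowering the coverage criterion (e.g., to 80%) is exactly the "select subjects with reasonable optics" strategy discussed later in this section, and it can reduce the required stroke substantially.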
FIGURE 4.9 The PV wavefront error theoretically required to correct a given percentage of the population in the Rochester (black lines) and Indiana (gray lines) population studies. All results are presented for 70 right eyes in each population measured over a 7.5-mm pupil. For the Rochester data, three cases are presented: (i) all aberrations present (short dashed line), (ii) all aberrations present with zeroed Zernike defocus (long dashed line), and (iii) all aberrations present with zeroed defocus and astigmatism (solid line). For the Indiana data set, the three cases are: (i) residual aberrations after a conventional refraction using trial lenses (short dashed line), (ii) all aberrations present with zeroed Zernike defocus (long dashed line), and (iii) all aberrations present with zeroed defocus and astigmatism (solid line). The PV wavefront errors required to theoretically correct the aberrations in 95% of the Rochester population were 53, 18, and 11 µm, respectively, for conditions (i), (ii), and (iii), while those for the Indiana data were 11, 10, and 7 µm, respectively.
As shown in Figure 4.9, the PV error noticeably increased with the addition of the second-order terms and was consistently larger for the Rochester population in all three second-order states. As can be extracted from Figure 4.9, the PV error that encompassed 95% of the Rochester population was 53, 18, and 11 µm for the three second-order conditions. For the Indiana population, the corresponding 95% errors were 11, 10, and 7 µm, respectively. The largest errors of 53 µm (Rochester) and 11 µm (Indiana) represent the most demanding conditions for the corrector. For comparison, the Xinetics deformable mirrors employed in vision science cameras have a stroke of only 4 µm (8 µm in reflection), which approaches the 11 µm (Indiana data) required to correct the patient’s residual defocus and astigmatism and higher-order aberrations, but falls considerably short of the 53 µm (Rochester data) needed to correct all of the patient’s defocus, astigmatism, and higher-order aberrations. There are several strategies for reducing the PV error to a level that is correctable by the wavefront corrector. For retinal imaging applications, PV errors can be reduced by meticulously employing trial lenses to optimize retinal image quality in conjunction with continuously adjustable lenses that avoid quantization errors (as opposed to the subjective refraction employed for the Indiana data). This approach was chosen for the first successful correction of the higher-order aberrations of the eye using a Xinetics deformable mirror [1]. A second approach is to use one wavefront corrector to correct the large second-order aberrations and a second wavefront corrector to compensate for the eye’s higher-order aberrations. This cascade of lower- and higher-order correctors represents a woofer–tweeter combination and is being investigated by UC-Davis and Indiana (see also Table 4.1).
Third, some retinal imaging and vision applications have the flexibility to select subjects with reasonable optics (i.e., to relax the stringent 95% criterion) or to reduce the pupil size to a diameter <7.5 mm for wavefront correction and imaging. For example, reducing the pupil size to 4 and 6 mm in the Rochester population reduces the PV error for the three second-order conditions to 16, 5, and 2 µm and 30, 11, and 6 µm, respectively. For the Indiana population, the corresponding PV errors are reduced to 4, 3, and 2 µm and 9, 5, and 3 µm, respectively.

4.5.3 Discrete Actuator Deformable Mirrors
The surface shape of a deformable mirror can be predicted from its actuator influence functions and the interdependency of these influence functions. Finite-element analysis [59] is a classic numerical approach that can incorporate many of the mirror and actuator parameters (such as thickness, modulus of elasticity, and Poisson’s ratio) that determine the influence functions and ultimately the mirror shape [60–62]. Finite-element analysis, however, generally requires detailed calculations that are restricted to a very specific mirror. A simpler and more common approach, chosen here, is to assume identical and independent influence functions [63, 64]. The assumption of
independent influence functions permits modeling the mirror surface as a linear sum of the influence functions [Eq. (4.2)]. While this approach does not fully capture the performance of real correctors, it does reflect their main attributes and provides rough performance estimates for deformable mirrors one might consider for a vision AO system. Using these assumptions, the discrete actuator deformable mirrors were modeled with a square array of regularly spaced actuators, each deflecting the local mirror surface into a Gaussian shape (a reasonable approximation of the influence function for many deformable mirrors) specified by:

g_A(x, y) = K exp(−x²/2σ²) exp(−y²/2σ²)    (4.1)

where K is the maximum deflection, x and y are the mirror spatial coordinates, and σ is the spatial extent of the influence function; σ was set to give a coupling coefficient of 12%, an approximate value for many discrete actuator deformable mirrors, such as those manufactured by Xinetics and Boston Micromachines. The height profile of the mirror surface, m(x, y), is defined by the sum of the actuator influence functions:

m(x, y) = Σ_M g_A(x, y)    (4.2)
where M is the total number of actuators. Note that this linear model generates a slightly rippled surface when all of the actuators are pushed by the same amount. This rippling, called pinning error [19], does not occur in actual mirrors that have stiff faceplates, so the model slightly underestimates actual device performance. To correct a specific aberrated wavefront, ϕi, the mirror shape was determined by generating an influence function matrix, inverting the matrix, and then multiplying the inverted matrix with the aberrated wavefront. This matrix multiplication produced the amplitudes for each actuator. The deformable mirror surface, ϕc, was then reconstructed and subtracted from the incoming wavefront to give the residual aberration, ϕr. This approach utilized least-squares fitting, which minimized the RMS residual wavefront error. Figure 4.10 illustrates the sequence of steps in the simulations used to correct the wave aberration of each eye and compute the resulting point spread function and corresponding Strehl ratio for a discrete actuator deformable mirror, a piston-only segmented mirror, and a piston/tip/tilt segmented mirror (all with 11 actuators or segments across the pupil). The gray-scale image in Figure 4.10(a) depicts the measured wave aberration profile, ϕi, across a 7.5-mm pupil for one eye from the Rochester population. Figure 4.10(b) shows the desired (conjugate) phase profile of the deformable mirror, ϕc, for compensating the wave aberration in Figure 4.10(a). Figure 4.10(c) shows the residual aberrations, ϕr = (ϕi − ϕc), after correction of the wave aberration in
[Figure 4.10 panels. No correction: (a) RMS = 1.48 µm, PV = 10.31 µm, Strehl = 0.01. Discrete actuator deformable mirror: (b) mirror phase, (c) RMS = 0.07 µm, PV = 1.84 µm, (d) Strehl = 0.74. Piston-only segmented mirror: (e) mirror phase, (f) RMS = 0.33 µm, PV = 3.40 µm, (g) Strehl = 0.03. Piston/tip/tilt segmented mirror: (h) mirror phase, (i) RMS = 0.03 µm, PV = 0.35 µm, (j) Strehl = 0.88.]
FIGURE 4.10 Compensation of aberrations across a 7.5-mm pupil using a discrete actuator deformable mirror, a piston-only segmented mirror, and a piston/tip/tilt segmented mirror. Each of the mirrors has 11 actuators or segments across the pupil diameter. Wavefront phase is represented by a gray-scale image (black and white tones depict minimum and maximum phase, respectively). (a) Initial wavefront: the measured uncorrected wave aberration for one subject’s eye from the Rochester population with defocus zeroed. (b) The conjugate mirror surface that minimizes the RMS wavefront error for the subject’s wave aberration in (a) for the discrete actuator device. (c) The residual aberrations after correction of the wave aberration in (a) with the corrector phase profile in (b). The phase RMS and PV are specified at the bottom of each image. The corresponding corrected point spread function and Strehl ratio are given in (d), with the former computed using scalar diffraction theory that incorporated the residual wave aberration and a circular pupil (λ = 0.6 µm). (e) to (g) show the mirror phase profile, residual aberrations, and corrected point spread function for the piston-only case, respectively. (h) to (j) show the analogous figures for the segmented piston/tip/tilt mirror. Note that the segmented piston/tip/tilt device has three actuators per segment.
Figure 4.10(a) with the corresponding corrector phase in Figure 4.10(b). To compute the corrected point spread function, the corrected complex field was first represented at the pupil as Ψ exp[iϕr], with the amplitude, Ψ, defined as a circular pupil with 100% reflection. Next, applying a Fourier transform to this field and taking its squared modulus yielded the corresponding corrected point spread function and Strehl ratio, as shown in Figure 4.10(d). The point spread function generated in this manner included the impact of residual aberrations and scalar diffraction effects generated by the finite size of the pupil. In this example, the Strehl ratio increased from 0.01 to 0.74, a significant improvement in image quality. The corresponding RMS wavefront error was reduced from 1.48 to 0.07 µm, a 21-fold decrease. Figure 4.11 (top) shows the predicted corrected Strehl ratio for discrete actuator deformable mirrors as a function of the number of actuators across a 7.5-mm pupil (λ = 0.6 µm) for the two populations. Each set contains three different levels of second-order aberrations. All curves exhibit a similar shape that is monotonic and positively sloped. With zero facets, the corrected Strehl ratio reflects the image quality of the eye without the mirror. For five to nine actuators across the pupil, a significant improvement in image quality is predicted as the Strehl ratio rises sharply with increasing actuator number. Small changes in the number of actuators in this range lead to noticeable changes in corrected image quality. This increase is likely due to the effective correction of the lower-order aberrations, which contain the largest percentage of the variance in the eye’s wave aberration.
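The simulation steps above (Gaussian influence functions per Eq. (4.1), least-squares actuator amplitudes, and a Fourier-transform computation of the point spread function and Strehl ratio) can be sketched as follows. The test aberration, the 11 × 11 actuator grid, and the sampling are illustrative choices, not the chapter's measured data, and the Strehl ratio here uses the common peak-of-PSF approximation.

```python
import numpy as np

# Sketch of the correction simulation: Gaussian influence functions
# (Eq. 4.1), least-squares actuator amplitudes, and the Strehl ratio of
# the corrected point spread function.
n = 128                                   # pupil sampling
wavelength_um = 0.6
x = np.linspace(-1.0, 1.0, n)
X, Y = np.meshgrid(x, x)
pupil = (X**2 + Y**2) <= 1.0

# Test aberration (um): astigmatism-, coma- and spherical-like terms,
# standing in for a measured wave aberration phi_i.
phi_i = 0.8 * (X**2 - Y**2) + 0.5 * (3.0 * X**2 + 3.0 * Y**2 - 2.0) * X \
        + 0.3 * (X**2 + Y**2) ** 2

# Influence matrix: one Gaussian (Eq. 4.1) per actuator on an 11 x 11 grid.
n_act = 11
centers = np.linspace(-1.0, 1.0, n_act)
spacing = centers[1] - centers[0]
sigma = spacing / np.sqrt(-2.0 * np.log(0.12))   # 12% coupling at the neighbor
cols = []
for cx in centers:
    for cy in centers:
        g = np.exp(-((X - cx) ** 2 + (Y - cy) ** 2) / (2.0 * sigma ** 2))
        cols.append(g[pupil])
A = np.column_stack(cols)                 # influence function matrix

# Least-squares actuator amplitudes and the mirror surface phi_c.
amps, *_ = np.linalg.lstsq(A, phi_i[pupil], rcond=None)
phi_c = np.zeros_like(phi_i)
phi_c[pupil] = A @ amps
phi_r = np.where(pupil, phi_i - phi_c, 0.0)   # residual aberration

def strehl(phase_um):
    """Approximate Strehl ratio: peak of the aberrated PSF over the peak
    of the diffraction-limited PSF, both from the FFT of the pupil field."""
    field = pupil * np.exp(1j * 2.0 * np.pi * phase_um / wavelength_um)
    psf = np.abs(np.fft.fft2(field, s=(512, 512))) ** 2
    ideal = np.abs(np.fft.fft2(pupil.astype(complex), s=(512, 512))) ** 2
    return psf.max() / ideal.max()

print(f"Strehl before: {strehl(phi_i):.3f}  after: {strehl(phi_r):.3f}")
```

Repeating the fit while varying `n_act` reproduces the qualitative behavior of Figure 4.11: a steep rise in corrected Strehl ratio over roughly 5 to 9 actuators across the pupil, followed by diminishing returns.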
Interestingly, the Xinetics deformable mirrors that have been employed in AO systems for vision science contain a total of 37 (7 across) [2] and 97 (11 across) [28] actuators and fall within this range of high sensitivity, with the smaller of the two deformable mirrors increasing the Strehl to only 20 to 30%. For the 37-channel Xinetics deformable mirror, Hofer et al. reported dynamically corrected Strehl ratios of 34 ± 12% for a smaller (6-mm) pupil in 550-nm light after a trial lens refraction (average of 6 subjects) [2]. Partly for this reason, Xinetics mirrors and essentially all other wavefront correctors typically operate across smaller pupil sizes (usually <7 mm) to provide better correction than that predicted in Figure 4.11.

For more than 9 actuators across the pupil, the Strehl ratio gradually rises toward a value of one for large actuator numbers. The diminishing improvement in corrected image quality makes larger and more expensive correctors increasingly less attractive. For the Indiana population, a minimum of 14 actuators across the pupil diameter was required to achieve a Strehl ratio ≥0.8, with the actual number being sensitive to the magnitude of the second-order terms in the wave aberration. For the Rochester population, which had a larger average magnitude of aberrations, a minimum of 15 actuators across the pupil diameter was required to achieve a Strehl ratio ≥0.8, even after zeroing the second-order Zernike terms. In fact, in the worst-case scenario for the Rochester data set, in which all refractive errors were present, the largest number of
FIGURE 4.11 (Top) Corrected Strehl ratio for discrete actuator deformable mirrors as a function of actuator number for the Rochester (black) and Indiana (gray) populations. Pupil diameter and wavelength were set to 7.5 mm and 0.6 µm, respectively. Three curves are shown for each population and correspond to the presence of all aberrations (short dashed lines), all aberrations with zeroed Zernike defocus (long dashed lines), and all aberrations with zeroed second-order aberrations (solid lines). Note that the all-aberrations condition in the Rochester data set includes the subject's inherent amount of defocus and astigmatism, while that for the Indiana data set includes the residual defocus and astigmatism after a spherocylindrical correction with trial lenses. (Bottom) Strehl ratios at different wavelengths (0.4, 0.6, 0.8, and 1 µm) as a function of the number of actuators across the pupil for the case in which all aberrations were present with zeroed defocus in the Indiana population.
actuators considered here (21 across the pupil) only increased the Strehl to 0.18. Figure 4.11 (bottom) shows the predicted corrected Strehl ratio for discrete actuator deformable mirrors at wavelengths of 0.4, 0.6, 0.8, and 1.0 µm.
Wavefront correction at shorter wavelengths requires noticeably more actuators than at longer wavelengths to achieve the same imaging performance. Further discussion of the wavelength impact can be found in Miller et al. [12].

4.5.4 Piston-Only Segmented Mirrors
Piston-only segmented mirrors were modeled as a two-dimensional array of square mirror segments that completely covered the circular pupil of the imaging system. Each segment was restricted to modulating only the piston component of the local wavefront, and the array provided a 100% fill factor. Mirror performance with other fill factors can be found in Miller et al. [12]; in practice, fill factors approaching 98 to 99% are achievable. For correcting a specific aberrated wavefront, the phase profile of the mirror was determined by setting the phase of each mirror segment to the negative of the average wavefront phase incident on that segment. Each facet was assumed to have essentially infinite phase resolution. The performance of the piston-only segmented mirrors was predicted using the same simulation process as for the discrete actuator deformable mirrors described previously. The third row of Figure 4.10 illustrates the steps of the simulation to correct the wave aberration of the eye using a piston-only segmented mirror with 11 segments across the pupil diameter.

Figure 4.12 shows the predicted corrected Strehl ratio for the piston-only segmented mirrors as a function of the number of segments across a 7.5-mm pupil (λ = 0.6 µm). Two sets of three curves are shown whose conditions correspond to those for the six curves in Figure 4.11 (top). While performance follows a pattern similar to that predicted for the deformable mirrors, the number of required segments is substantially higher in order to achieve the same imaging performance. For example, 90 × 90 piston-only segments would be required to achieve the same Strehl ratio (0.8) as a 13 × 13 array of actuators for a discrete actuator device in the Indiana population when including all of the residual second-order aberrations in the wavefront.
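The segment-averaging rule described above (each facet takes the negative of the mean wavefront phase over its footprint) is easy to sketch numerically. This is our own illustrative code, not the chapter's simulation; the names and the test wavefront are invented.

```python
import numpy as np

def piston_only_correction(wavefront, mask, n_segments):
    """Piston-only segmented correction of a sampled wavefront.

    Each square segment is set to the negative of the mean wavefront
    phase over the in-pupil samples it covers (infinite piston
    resolution, 100% fill factor).  Returns the mirror phase map and
    the residual wavefront inside the pupil.
    """
    n = wavefront.shape[0]
    seg = n // n_segments                      # samples per segment side
    mirror = np.zeros_like(wavefront)
    for i in range(n_segments):
        for j in range(n_segments):
            sl = np.s_[i * seg:(i + 1) * seg, j * seg:(j + 1) * seg]
            inside = mask[sl]
            if inside.any():                   # average only over pupil samples
                mirror[sl] = -wavefront[sl][inside].mean()
    return mirror, (wavefront + mirror) * mask

# Example: a defocus-like wavefront corrected by a 16 x 16 segment array
yy, xx = np.mgrid[-32:32, -32:32]
pupil = xx ** 2 + yy ** 2 < 30 ** 2
defocus = (xx / 32.0) ** 2 + (yy / 32.0) ** 2   # arbitrary units
mirror, residual = piston_only_correction(defocus, pupil, 16)
```

Refining the segmentation lowers the residual RMS for a smooth wavefront, which is the behavior traced out by the curves of Figure 4.12.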
For the Indiana population, 50 to 90 segments across the pupil diameter were required to achieve a Strehl ratio of 0.8, with the actual number within this range being highly sensitive to the magnitude of the second-order aberrations that need to be corrected in the wave aberration. For the Rochester population, 100 to 115 segments were required to achieve a Strehl ratio of 0.8 if defocus and astigmatism (C3 to C5) or defocus alone (C4) were zeroed prior to correction. Even the largest segmented mirror considered in our study (150 × 150 segments) performed poorly when attempting to correct all of the second- and higher-order aberrations, just reaching a Strehl of 0.2.

With a segmented approach, the above PV errors can also be corrected by imparting a phase profile onto the mirror that is correct modulo 2π. Most liquid crystal spatial light modulators (such as the one shown in Fig. 4.4) are purposely designed for 2π phase correction and hence rely on phase wrapping
FIGURE 4.12 Corrected Strehl ratio for piston-only segmented mirrors as a function of the number of segments across the pupil for the Rochester (black lines) and Indiana (gray lines) populations. Pupil diameter and wavelength were set to 7.5 mm and 0.6 µm, respectively. Three curves are shown for each population and correspond to the presence of all aberrations (short dashed lines), all aberrations with zeroed Zernike defocus (long dashed lines), and all aberrations with zeroed second-order aberrations (solid lines). Note that the all-aberrations condition in the Rochester data set includes the subject's inherent amount of defocus and astigmatism, while that for the Indiana data set includes the residual defocus and astigmatism after a spherocylindrical correction with trial lenses.
to extend their dynamic range [31, 32]. A fundamental weakness of this approach, however, is that the system only perfectly corrects the wavefront at a single wavelength and its associated harmonics. Miller et al. modeled the effects of phase wrapping for correcting the aberrations of the eye in polychromatic light [12]. The model included the impact of the dispersion of the liquid crystal material (E-7) and the longitudinal chromatic aberration of the eye. Figure 4.13 shows the theoretical performance based on a population of 12 eyes for the four possible combinations of phase wrapping (modulo 2π) and material dispersion. The impact of the eye's own longitudinal chromatic aberration is also shown and is revealed to be significantly more degrading than either phase wrapping or material dispersion.
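A minimal numerical sketch of modulo-2π phase wrapping, assuming a 0.6-µm design wavelength (our choice, matching the figure): wrapping leaves the phase unchanged at the design wavelength but introduces residual phase errors at any other wavelength.

```python
import numpy as np

DESIGN_UM = 0.6   # assumed design wavelength in micrometers

def wrapped_opd(opd_um):
    """Wrap an optical path difference into [0, one wave) at the design
    wavelength -- i.e., modulo-2*pi phase wrapping."""
    return np.mod(opd_um, DESIGN_UM)

opd = np.linspace(0.0, 5.0, 101)          # OPD up to ~8 waves

# At the design wavelength the discarded whole waves are invisible:
# the residual phase error is an exact multiple of 2*pi.
err_design = np.angle(np.exp(1j * 2 * np.pi * (opd - wrapped_opd(opd)) / DESIGN_UM))

# At 0.5 um the discarded steps are no longer whole waves, so a
# genuine phase error appears.
err_other = np.angle(np.exp(1j * 2 * np.pi * (opd - wrapped_opd(opd)) / 0.5))
```

The polychromatic penalty seen in Figure 4.13 stems from this second effect, compounded by material dispersion and the eye's longitudinal chromatic aberration.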
4.5.5 Piston/Tip/Tilt Segmented Mirrors
Piston/tip/tilt segmented mirrors were modeled identically to the piston-only segmented mirrors, with the addition of tip and tilt functionality at the segment level. Specifically, each segment had three modes of orthogonal movement. The phase profile of the mirror was determined in the same manner as for the piston-only segmented mirrors, with the addition that the
FIGURE 4.13 As reported in Miller et al. [12], the average predicted performance of piston-only segmented devices in the presence of polychromatic light is plotted for 3-mm (left) and 6-mm (right) pupils for a population of 12 eyes. Error bars represent ±1 standard deviation across the 12 subjects and are shown for only one curve due to limited space. The top four curves represent the performance of four types of piston-only segmented devices, which cover the four possible combinations of dispersion and phase wrapping. Dispersion was taken to be that of the commonly used liquid crystal material E-7. Phase wrapping was modulo 2π at the design wavelength of 0.6 µm. The curves at the bottom represent the performance of the diffraction-limited eye corrupted only by the eye's naturally occurring longitudinal chromatic aberration. (From Miller et al. [12]. Reprinted with permission of the Optical Society of America.)
local slope of the aberrated wavefront was taken into account. The bottom row of Figure 4.10 illustrates the steps of the simulation to correct the wave aberration of the eye using a piston/tip/tilt segmented mirror with 11 segments across the pupil diameter.

Figure 4.14 shows the predicted performance of the piston/tip/tilt segmented mirrors as a function of the number of segments across the pupil diameter. For the Indiana population, a minimum of 8 to 9 segments across the pupil diameter was required to achieve a Strehl ratio ≥0.8, with the actual number being insensitive to the magnitude of the second-order aberrations. For the Rochester population, 12 segments were required to achieve a Strehl ratio of 0.8 if defocus (C4) alone was zeroed prior to correction. A Strehl ratio of 0.8 can be reached with just 19 segments across the pupil even when attempting to correct all of the second- and higher-order aberrations; for the piston-only segmented devices described in the previous section, even 150 segments across the pupil were not sufficient. The addition of tip and tilt control clearly
FIGURE 4.14 Corrected Strehl ratio for piston/tip/tilt segmented mirrors as a function of segment number for the Rochester (black) and Indiana (gray) populations. Pupil diameter and wavelength were set to 7.5 mm and 0.6 µm, respectively. Three curves are shown for each population and correspond to the presence of all aberrations (short dashed lines), all aberrations with zeroed Zernike defocus (long dashed lines), and all aberrations with zeroed second-order aberrations (solid lines). Note that the all-aberrations condition in the Rochester data set includes the subject's inherent amount of defocus and astigmatism, while that for the Indiana data set includes the residual defocus and astigmatism after a spherocylindrical subjective correction with trial lenses.
improves the mirror fit to the wavefront, especially at the edges of the pupil where the slopes can be large. Of course, this comes at the expense of additional complexity in mirror control: three movement controls are required for each segment rather than one.
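A hedged sketch of the per-segment fit with tip and tilt added: each facet now takes the negative of the least-squares plane over its footprint rather than just the mean. The plane-fit formulation is our own; the chapter does not specify its numerical method.

```python
import numpy as np

def ptt_segment_fit(wavefront, mask, n_segments):
    """Piston/tip/tilt segmented correction: fit and subtract the
    least-squares plane (piston + two slopes) over each square segment."""
    n = wavefront.shape[0]
    seg = n // n_segments
    yy, xx = np.mgrid[:seg, :seg].astype(float)
    A = np.column_stack([np.ones(seg * seg), xx.ravel(), yy.ravel()])
    mirror = np.zeros_like(wavefront)
    for i in range(n_segments):
        for j in range(n_segments):
            sl = np.s_[i * seg:(i + 1) * seg, j * seg:(j + 1) * seg]
            inside = mask[sl].ravel()
            if inside.sum() >= 3:              # enough samples for a plane
                c, *_ = np.linalg.lstsq(A[inside], wavefront[sl].ravel()[inside],
                                        rcond=None)
                mirror[sl] = -(A @ c).reshape(seg, seg)
            elif inside.any():                 # fall back to piston only
                mirror[sl] = -wavefront[sl].ravel()[inside].mean()
    return mirror, (wavefront + mirror) * mask

# A pure tilt is corrected exactly, since every facet can reproduce a plane
y, x = np.mgrid[:64, :64].astype(float)
square = np.ones((64, 64), dtype=bool)         # square pupil for illustration
_, res_tilt = ptt_segment_fit(0.01 * x + 0.02 * y, square, 8)
```

Because each facet can match the local wavefront slope, far fewer segments are needed than in the piston-only case, mirroring the gap between Figures 4.12 and 4.14.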
4.5.6 Membrane and Bimorph Mirrors
Several authors have examined the performance of both membrane [43] and bimorph mirrors [37–40] for the correction of ocular aberrations. The operation of both mirror types is governed by Poisson's equation. For the membrane mirror, the surface profile m(x, y) is described by Eq. (4.3) [20]:

    ∇²m(x, y) = −ε0[v(x, y)]² / (2Km agap²)    (4.3)
where v(x, y) is the applied voltage profile, Km is the membrane stress, agap is the gap between the bottom electrodes and the membrane, and ε0 is the permittivity.
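A membrane equation of this Poisson form can be explored numerically; the sketch below solves it on a square grid with a simple Jacobi iteration and clamped edges. The grid size, iteration count, and unit parameter values are our own illustrative choices, not values from the chapter.

```python
import numpy as np

def membrane_shape(v, k_m=1.0, a_gap=1.0, eps0=8.854e-12, h=1.0, n_iter=5000):
    """Solve the membrane Poisson equation
        laplacian(m) = -eps0 * v**2 / (2 * k_m * a_gap**2)
    by Jacobi iteration, with the membrane clamped (m = 0) at the edges.

    v : 2-D array of applied electrode voltages (one value per grid node).
    """
    rhs = -eps0 * v ** 2 / (2.0 * k_m * a_gap ** 2)
    m = np.zeros_like(rhs)
    for _ in range(n_iter):
        # Five-point Laplacian: new m = mean of neighbours - h^2/4 * rhs
        m[1:-1, 1:-1] = 0.25 * (m[2:, 1:-1] + m[:-2, 1:-1]
                                + m[1:-1, 2:] + m[1:-1, :-2]
                                - h * h * rhs[1:-1, 1:-1])
    return m

# Uniform voltage: the clamped membrane bows into a smooth, centered bump
m = membrane_shape(np.ones((33, 33)), eps0=1.0, n_iter=4000)
```

Because neighbouring values couple through the Laplacian, each electrode's influence extends well beyond its own footprint, which is one reason membrane-mirror influence functions are far from orthogonal.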
Similarly, for the bimorph mirror, the equation governing its surface profile [20] is given by:

    ∇²m(x, y) = −4Kp[v(x, y)] / abim²    (4.4)
where Kp is the piezoelectric constant and abim is the bimorph thickness.

The micromachined membrane mirror manufactured by OKO Technologies [42], shown in Figure 4.5, consists of a silicon nitride membrane coated with aluminum and has an effective mirror diameter of 15 mm. Voltages applied to the underlying 37 control electrodes electrostatically drive the membrane shape, as governed by Eq. (4.3). The performance of this device was tested extensively by Fernandez et al. [3, 43] and Paterson et al. [65], and evaluated further by Dalimier and Dainty [57]. The influence functions of these devices are far from orthogonal, making it necessary to use singular value decomposition to obtain a set of orthonormal wavefront functions that describe the mirror deformation. Due to edge constraints (the membrane is clamped at the edges), it is common to use only two-thirds of the available aperture. This device is prone to saturation and is typically used with small pupil sizes (<5 mm diameter) in order to minimize the PV error.

Retinal cameras employing bimorph mirrors have produced in vivo images of the human cone photoreceptor mosaic. Glanc et al. [39] used a 13-electrode bimorph mirror and reported corrected RMS wavefront errors of 0.15 µm for a 7-mm pupil. It was noted that the pupil required careful centration prior to any correction due to the limited number of actuators. The 35-channel device manufactured by AOptix has been characterized by Horsley et al. [40] and Dalimier and Dainty [57]; a total mirror stroke of 16 µm was reported. Dalimier and Dainty compared the performance of this device to that of the OKO device described in [42], as well as to a newer 19-element piezoelectric device (also by OKO Technologies). The performance of these devices was determined by theoretically fitting 100 eyes from the study by Thibos et al. [58], and the results are shown in Figure 4.15.
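The singular-value-decomposition step mentioned above can be sketched as follows; the influence functions here are hypothetical one-dimensional Gaussians, chosen only to show that SVD returns orthonormal modes from non-orthogonal inputs.

```python
import numpy as np

def mirror_modes(influence, threshold=1e-3):
    """Orthonormal mirror modes from measured influence functions.

    influence : (n_samples, n_actuators) matrix whose columns are the
                flattened influence-function wavefront maps.
    Returns the orthonormal modes, their singular values, and the
    actuator-space vectors generating each mode, discarding modes whose
    singular value is below `threshold` times the largest.
    """
    U, s, Vt = np.linalg.svd(influence, full_matrices=False)
    keep = s > threshold * s[0]
    return U[:, keep], s[keep], Vt[keep]

# Two overlapping (hence non-orthogonal) Gaussian influence functions
x = np.linspace(-1.0, 1.0, 200)
f1 = np.exp(-((x - 0.2) ** 2) / 0.1)
f2 = np.exp(-((x + 0.2) ** 2) / 0.1)
modes, s, V = mirror_modes(np.column_stack([f1, f2]))
```

In practice the columns would be measured wavefront maps, one per electrode, and poorly reproduced shapes (small singular values) would be excluded from the control loop.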
It should be noted that these corrections were performed after a conventional refraction of the subject's sphere and cylinder in order to minimize the mirror stroke required for these terms. As observed in our simulation of the other mirror types using the Rochester data set, this second-order correction has a large effect not only on the required mirror stroke, but also on the number of actuators required across the pupil. Figure 4.14, which shows the results for the piston/tip/tilt segmented mirrors, illustrates this point well. For devices such as the bimorph and membrane mirrors, where stroke falls off sharply with radial order, this is a serious consideration: inclusion of these lower-order terms can quickly lead to mirror saturation when higher-order corrections are required.
FIGURE 4.15 Residual aberration per eye for the three types of modal mirrors, namely the 37-channel membrane mirror by OKO Technologies, the 19-channel piezoelectric device, and the 35-channel AOptix bimorph mirror. (From Dalimier and Dainty [57]. Reprinted with permission of the Optical Society of America.)
4.6 SUMMARY AND CONCLUSION
The extent to which AO can effectively improve resolution and contrast in the eye fundamentally depends on its ability to accurately measure, track, and correct the ocular aberrations. This chapter focuses on the last step, correction. Wavefront correctors appear to be the limiting performance factor in essentially all AO systems built for vision science and are certainly the most expensive hardware component. As surveyed in this chapter, numerous types of wavefront correctors have been applied to the eye, yet none has been reported to yield diffraction-limited imaging for large pupils (≥6 mm), where the aberrations are most severe and the benefit of AO is largest. This raises two fundamental questions: what characteristics must a correcting device have to achieve diffraction-limited imaging, and how can corrector performance and cost be optimally matched to a particular imaging task in the eye? To this end, we considered the static performance of several types of mirrors (discrete actuator deformable mirrors, piston-only segmented mirrors, and piston/tip/tilt segmented mirrors), assuming identical and independent influence functions. Performance results obtained by others for specific commercially available bimorph and membrane mirrors were also reviewed. Table 4.2 summarizes these results in terms of wavefront corrector stroke and number of actuators for a 7.5-mm pupil. The table also lists corrector specifications for temporal bandwidth, reflectivity, and corrector size, which are also important for vision science.

TABLE 4.2 Summary of Predicted Specifications for Key Wavefront Corrector Parameters Considered Necessary to Achieve Diffraction-Limited Imaging (Strehl ≥ 80%) in the Human Eye(a)

Parameter — Value
Temporal bandwidth — 1–10 Hz (AO closed-loop frequency)
Reflectivity — >90% (400–950 nm)
Mirror diameter — 4–8 mm
Mirror stroke (7.5-mm pupil, 95% of population) — 10–53 µm (Rochester); 7–11 µm (Indiana)
Number of actuators or segments across the pupil diameter (for 80% Strehl) — >14 (Roch.), 11–14 (Ind.) discrete actuator; >>90 (Roch.), 45–85 (Ind.) piston-only segmented; 12–19 (Roch.), 9–10 (Ind.) piston/tip/tilt segmented

(a) Mirror stroke and the number of actuators or segments are specified for a 7.5-mm pupil, a 0.6-µm wavelength, and PV correction in 95% of the normal population. The predicted number of actuators (or segments) is shown for three general types of wavefront correctors (discrete actuator deformable mirror, piston-only segmented mirror, and piston/tip/tilt segmented mirror). (For performance parameters for the membrane and bimorph mirrors, the reader is directed to Table 4.1. Dalimier and Dainty predict close to diffraction-limited performance for a 6-mm pupil with a 35-actuator bimorph [57]. The required parameters for a membrane mirror are unknown.)

As shown in the table, the temporal bandwidth of 1 to 10 Hz follows from the studies of Hofer et al. [55] and Diaz-Santana et al. [56]. A mirror reflectivity >90% assures high throughput efficiency of the camera, which is particularly critical for retinal imaging applications in which a hard upper limit exists on the amount of light that can be safely directed into the human eye. A corrector size comparable to a dilated pupil (~8 mm) permits the use of short focal length lenses or mirrors and facilitates compact system designs. The required corrector stroke was observed to vary with the population as well as with the second-order aberration condition, ranging from 10 to 53 µm (Rochester data) and 7 to 11 µm (Indiana data). The required number of mirror actuators also varied depending on the population, the second-order aberration condition, and the mirror type. Bimorph and membrane mirrors (which are not listed in Table 4.2) have also been evaluated [57]. Indeed, these devices are well suited to correcting the large amounts of lower-order aberration present in the human eye; in simulations, the AOptix bimorph device was reported to achieve corrections close to the diffraction limit for <5-mm pupils.

While we have taken a first pass at exploring the most critical corrector parameters, a more detailed assessment that accounts for differences in the influence functions and their interdependency could use, for example, a finite-element analysis. Certainly other corrector parameters, such as dynamic behavior, will influence performance. For example, hysteresis is intrinsic to piezoelectric materials and provides further challenges to closed-loop control. Variations in pupil size and optical wavelength certainly impact mirror performance and will be discussed in a future publication.

In conclusion, correcting the wave aberration of the eye is a formidable challenge. Understanding the performance parameters that enable diffraction-limited imaging in the population at large will yield more effective correctors that are optimized for the human eye. This will ultimately expand the capabilities of research and commercial instruments for both retinal imaging and vision testing.

Acknowledgments

The authors would like to thank Dr. Ian Cox of Bausch & Lomb and Professor Larry Thibos at Indiana University for providing the subject data for the two populations. Nathan Doble acknowledges the assistance of Dr. Stacey Choi. Don Miller would like to thank Dr. Huawei Zhao for his early help with the wavefront corrector modeling. This work has been supported in part by the National Science Foundation Science and Technology Center for Adaptive Optics (CfAO), managed by the University of California at Santa Cruz under cooperative agreement No. AST-9876783. Financial support was also provided by National Eye Institute grant 5R01 EY014743.

REFERENCES

1. Liang J, Williams DR, Miller DT. Supernormal Vision and High-Resolution Retinal Imaging through Adaptive Optics. J. Opt. Soc. Am. A. 1997; 14: 2884–2892.
2. Hofer H, Chen L, Yoon GY, et al. Improvement in Retinal Image Quality with Dynamic Correction of the Eye's Aberrations. Opt. Express. 2001; 8: 631–643.
3. Fernandez EJ, Iglesias I, Artal P. Closed-Loop Adaptive Optics in the Human Eye. Opt. Lett. 2001; 26: 746–748.
4. Roorda A, Romero-Borja F, Donnelly WJ, et al. Adaptive Optics Scanning Laser Ophthalmoscopy. Opt. Express. 2002; 10: 405–412.
5. Zhang Y, Rha J, Jonnal RS, Miller DT. Adaptive Optics Parallel Spectral Domain Optical Coherence Tomography for Imaging the Living Retina. Opt. Express. 2005; 13: 4792–4811.
6. Miller DT, Qu J, Jonnal RS, Thorn K. Coherence Gating and Adaptive Optics in the Eye. In: Tuchin VV, Izatt JA, Fujimoto JG, eds. Coherence Domain Optical Methods and Optical Coherence Tomography in Biomedicine VII. Proceedings of the SPIE. 2003; 4956: 65–72.
7. Hermann B, Fernandez EJ, Unterhuber A, et al. Adaptive-Optics Ultrahigh-Resolution Optical Coherence Tomography. Opt. Lett. 2004; 29: 2142–2144.
8. Artal P, Chen L, Fernández EJ, et al. Adaptive Optics for Vision: The Eye's Adaptation to Point Spread Function. J. Refract. Surg. 2003; 19: S585–S587.
9. Artal P, Chen L, Fernández EJ, et al. Neural Compensation for the Eye's Optical Aberrations. J. Vis. 2004; 4: 281–287.
10. Piers PA, Fernandez EJ, Manzanera S, et al. Adaptive Optics Simulation of Intraocular Lenses with Modified Spherical Aberration. Invest. Ophthalmol. Vis. Sci. 2004; 45: 4601–4610.
11. Bille JF. Preoperative Simulation of Outcomes Using Adaptive Optics. J. Refract. Surg. 2000; 16: S608–S610.
12. Miller DT, Thibos LN, Hong X. Requirements for Segmented Correctors for Diffraction-Limited Performance in the Human Eye. Opt. Express. 2005; 13: 275–289.
13. Dreher W, Bille JF, Weinreb RN. Active Optical Depth Resolution Improvement of the Laser Tomographic Scanner. Appl. Opt. 1989; 28: 804–808.
14. Platt BC, Shack R. History and Principles of Shack-Hartmann Wavefront Sensing. J. Refract. Surg. 2001; 17: 573–577.
15. Liang J, Grimm B, Goelz S, Bille JF. Objective Measurement of Wave Aberrations of the Human Eye with the Use of a Hartmann-Shack Wave-front Sensor. J. Opt. Soc. Am. A. 1994; 11: 1949–1957.
16. Liang J, Williams DR. Aberrations and Retinal Image Quality of the Normal Human Eye. J. Opt. Soc. Am. A. 1997; 14: 2873–2883.
17. Iglesias I, Ragazzoni R, Julien Y, Artal P. Extended Source Pyramid Wave-front Sensor for the Human Eye. Opt. Express. 2002; 10: 419–428.
18. Shirai T. Liquid-Crystal Adaptive Optics Based on Feedback Interferometry for High-Resolution Retinal Imaging. Appl. Opt. 2002; 41: 4013–4023.
19. Tyson RK. Principles of Adaptive Optics, 2nd ed. Boston: Academic, 1998.
20. Hardy JW. Adaptive Optics for Astronomical Telescopes. Oxford: Oxford University Press, 1998.
21. Love GD. Wave-front Correction and Production of Zernike Modes with a Liquid-Crystal Spatial Light Modulator. Appl. Opt. 1997; 36: 1517–1520.
22. Li FH, Mukohzaka N, Yoshida N, et al. Phase Modulation Characteristics Analysis of Optically-Addressed Parallel-Aligned Nematic Liquid Crystal Phase-Only Spatial Light Modulator Combined with a Liquid Crystal Display. Opt. Rev. 1998; 5: 174–178.
23. Cugat O, Basrour S, Divoux C, et al. Deformable Magnetic Mirror for Adaptive Optics: Technological Aspects. Sensor Actuat. A-Phys. 2001; 89: 1–9.
24. Hamelinck R, Rosielle N, Kappelhof P, et al. A Large Adaptive Deformable Membrane Mirror with High Actuator Density. In: Calia DB, Ellerbroek BL, Ragazzoni R, eds. Advancements in Adaptive Optics. Proceedings of the SPIE. 2004; 5490: 1482–1492.
25. Vdovin G, Loktev M. Deformable Mirror with Thermal Actuators. Opt. Lett. 2002; 27: 677–679.
26. Miller DL, Brusa G, Kenworthy MA, et al. Status of the NGS Adaptive Optic System at the MMT Telescope. In: Calia DB, Ellerbroek BL, Ragazzoni R, eds. Advancements in Adaptive Optics. Proceedings of the SPIE. 2004; 5490: 207–215.
27. Oppenheimer BR, Palmer DL, Dekany RG, et al. Investigating a Xinetics Inc. Deformable Mirror. In: Tyson RK, Fugate RQ, eds. Adaptive Optics and Applications. Proceedings of the SPIE. 1997; 3126: 569–579.
28. Putnam NM, Hofer HJ, Doble N, et al. The Locus of Fixation and the Foveal Cone Mosaic. J. Vis. 2005; 5: 632–639.
29. Doble N. High Resolution, in Vivo Retinal Imaging Using Adaptive Optics and Its Future Role in Ophthalmology. Expert Rev. Med. Devices. 2005; 2: 205–216.
30. Choi SS, Alam S, Doble N, et al. Correlation between Functional Vision Tests and in Vivo Images of Retinal Diseases Obtained with the UC-Davis Adaptive Optics Ophthalmoscope. Invest. Ophthalmol. Vis. Sci. 2005; 46: 3549.
31. Thibos LN, Bradley A. Use of Liquid-Crystal Adaptive-Optics to Alter the Refractive State of the Eye. Optom. Vis. Sci. 1997; 74: 581–587.
32. Vargas-Martin F, Prieto PM, Artal P. Correction of the Aberrations in the Human Eye with a Liquid-Crystal Spatial Light Modulator: Limits to Performance. J. Opt. Soc. Am. A. 1998; 15: 2552–2562.
33. Awwal A, Bauman B, Gavel D, et al. Characterization and Operation of a Liquid Crystal Adaptive Optics Phoropter. In: Tyson RK, Lloyd-Hart M, eds. Astronomical Adaptive Optics Systems and Applications. Proceedings of the SPIE. 2003; 5169: 104–122.
34. Prieto PM, Fernández EJ, Manzanera S, Artal P. Adaptive Optics with a Programmable Phase Modulator: Applications in the Human Eye. Opt. Express. 2004; 12: 4059–4071.
35. Bessho K, Yamaguchi T, Nakazawa N, et al. Live Photoreceptor Imaging Using a Prototype Adaptive Optics Fundus Camera: A Preliminary Result. Invest. Ophthalmol. Vis. Sci. 2005; 46: 3547.
36. Love GD, Andrews N, Birch P, et al. Binary Adaptive Optics: Atmospheric Wavefront Correction with a Half Wave Phase Shifter. Appl. Opt. 1995; 34: 6058–6066; Addendum 1996; 35: 347–350.
37. Larichev AV, Ivanov PV, Iroshnikov NG, et al. Adaptive System for Eye-Fundus Imaging. Quantum Electron. 2002; 32: 902–908.
38. Ling N, Zhang Y, Rao X, et al. Small Table-Top Adaptive Optical Systems for Human Retinal Imaging. In: Gonglewski JD, Vorontsov MA, Gruneisen MT, Restaino SR, Tyson RK, eds. High-Resolution Wavefront Control: Methods, Devices, and Applications IV. Proceedings of the SPIE. 2002; 4825: 99–108.
39. Glanc M, Gendron E, Lacombe F, et al. Towards Wide-Field Retinal Imaging with Adaptive Optics. Opt. Commun. 2004; 230: 225–238.
40. Horsley DA, Park H, Laut SP, Werner JS. Characterization for Vision Science Applications of a Bimorph Deformable Mirror Using Phase-Shifting Interferometry. In: Manns F, Soederberg PG, Ho A, Stuck BE, Belkin M, eds. Ophthalmic Technologies XV. Proceedings of the SPIE. 2005; 5688: 133–144.
41. Doble N, Williams DR. The Application of MEMS Technology for Adaptive Optics in Vision Science. IEEE J. Sel. Top. Quant. Elec. 2004; 10: 629–635.
42. Vdovin GV, Sarro PM. Flexible Mirror Micromachined in Silicon. Appl. Opt. 1995; 34: 2968–2972.
43. Fernandez EJ, Artal P. Membrane Deformable Mirror for Adaptive Optics: Performance Limits in Visual Optics. Opt. Express. 2003; 11: 1056–1069.
44. Bush K, German D, Klemme B, et al. Electrostatic Membrane Deformable Mirror Wavefront Control Systems: Design and Analysis. In: Gonglewski JD, Gruneisen MT, Giles MK, Belkin M, eds. Advanced Wavefront Control: Methods, Devices, and Applications II. Proceedings of the SPIE. 2004; 5553: 28–38.
45. Bartsch D, Zhu L, Sun PC, et al. Retinal Imaging with a Low-Cost Micromachined Membrane Deformable Mirror. J. Biomed. Opt. 2002; 7: 451–456.
46. Doble N, Yoon G, Chen L, et al. The Use of a Microelectromechanical Mirror for Adaptive Optics in the Human Eye. Opt. Lett. 2002; 27: 1537–1539.
47. Bifano TG, Perreault J, Krishnamoorthy-Mali R, Horenstein MN. Microelectromechanical Deformable Mirrors. IEEE J. Sel. Top. Quant. Elec. 1999; 5: 83–90.
48. Perreault JA, Bifano TG, Levine BM, Horenstein MN. Adaptive Optic Correction Using Microelectromechanical Deformable Mirrors. Opt. Eng. 2002; 41: 561–566.
49. Gehner A, Doleschal W, Elgner A, et al. Active-Matrix Addressed Micromirror Array for Wavefront Correction in Adaptive Optics. In: Motamedi ME, Goering R, eds. MOEMS and Miniaturized Systems II. Proceedings of the SPIE. 2001; 4561: 265–275.
50. Kurczynski P, Dyson HM, Sadoulet B, et al. Fabrication and Measurement of Low-Stress Membrane Mirrors for Adaptive Optics. Appl. Opt. 2004; 43: 3573–3580.
51. Doble N, Helmbrecht M, Hart M, Juneau T. Advanced Wavefront Correction Technology for the Next Generation of Adaptive Optics Equipped Ophthalmic Instrumentation. In: Manns F, Soederberg PG, Ho A, Stuck BE, Belkin M, eds. Ophthalmic Technologies XV. Proceedings of the SPIE. 2005; 5688: 125–132.
52. Thorn KE, Qu J, Jonnal RJ, Miller DT. Adaptive Optics Flood-Illuminated Camera for High Speed Retinal Imaging. Invest. Ophthalmol. Vis. Sci. 2003; 44: 999.
53. Dalimier E, Hampson KM, Dainty JC. Effects of Adaptive Optics on Visual Performance. In: Murtagh FD, ed. Imaging and Vision. Proceedings of the SPIE. 2005; 5823: 20–28.
54. Tumbar R, Elsner AE, Weber A, Burns SA. Large Field of View, High Resolution Scanning Laser Ophthalmoscope Using Adaptive Optics. Invest. Ophthalmol. Vis. Sci. 2005; 46: 3548.
55. Hofer H, Artal P, Singer B, et al. Dynamics of the Eye's Aberrations. J. Opt. Soc. Am. A. 2001; 18: 497–506.
56. Diaz-Santana L, Torti C, Munro I, et al. Benefit of Higher Closed-Loop Bandwidths in Ocular Adaptive Optics. Opt. Express. 2003; 11: 2597–2605.
57. Dalimier E, Dainty C. Comparative Analysis of Deformable Mirrors for Ocular Adaptive Optics. Opt. Express. 2005; 13: 4275–4285.
58. Thibos LN, Hong X, Bradley A, Cheng X. Statistical Variation of Aberration Structure and Image Quality in a Normal Population of Healthy Eyes. J. Opt. Soc. Am. A. 2002; 19: 2329–2348.
59. Zienkiewicz OC. The Finite Element Method in Engineering Science, 2nd ed. London: McGraw-Hill, 1971.
60. Lee JH, Uhm T-K, Youn S-K. First-Order Analysis of Thin-Plate Deformable Mirrors. J. Korean Phys. Soc. 2004; 44: 1412–1416.
61. Arnold L. Influence Functions of a Thin Shallow Meniscus-Shaped Mirror. Appl. Opt. 1997; 36: 2019–2028.
REFERENCES
117
62. Menikoff A. Actuator Influence Functions of Active Mirrors. Appl. Opt. 1991; 30: 833–838. 63. Hudgin RH. Wave-front Compensation Error Due to Finite Element Corrector Size. J. Opt. Soc. Am. 1977: 67: 393–395. 64. Roggemann MC, Welsh B. Imaging through Turbulence, Boca Raton: FL: CRC, 1996. 65. Paterson C, Munro I, Dainty JC. A Low Cost Adaptive Optics System Using a Membrane Mirror. Opt. Express. 2000; 6: 175–185.
CHAPTER FIVE
Control Algorithms

LI CHEN
University of Rochester, Rochester, New York
5.1 INTRODUCTION
The control algorithm is the vital link between the wavefront sensor and the wavefront corrector in an adaptive optics (AO) system for vision science. Its function is to convert the wave aberration measurements made by the wavefront sensor into a set of actuator commands that are applied to the wavefront corrector to satisfy a suitable system performance criterion, either minimizing the residual wave aberrations or presenting other wave aberrations to the eye. The goal of this chapter is to describe the control algorithms for a vision science AO system in both the spatial and temporal domains. In Section 5.2, the configurations that determine the spatial relationship between the wavefront sensor lenslets and the wavefront corrector actuators are described. In Sections 5.3 and 5.4, the calibration and control of the wavefront corrector are described. In the last section, the AO system is represented as a set of transfer functions and its temporal behavior is analyzed.
5.2 CONFIGURATION OF LENSLETS AND ACTUATORS
Most current AO systems for vision science use a Shack–Hartmann wavefront sensor to measure the eye's wave aberration. A Shack–Hartmann wavefront sensor is a gradient sensor with a lenslet array placed conjugate to the pupil plane of the eye (see also Chapter 3). For a given wavefront corrector, the configuration of the lenslets and actuators can greatly affect the performance of an AO system. In astronomy, there are two different gradient sensor geometries for a Shack–Hartmann wavefront sensor with square lenslets: the Southwell configuration and the Fried configuration [1, 2].

In the Southwell configuration, shown in Figure 5.1(a), each lenslet of the wavefront sensor is centered on one actuator of the wavefront corrector. This configuration makes an AO system more difficult to calibrate because the movement of one actuator is not sensed by the lenslet centered on it; the gradient influence function of the actuator is measured mainly by its neighboring lenslets. Figure 5.1(b) shows the Southwell configuration for a 97-actuator deformable mirror (DM).

The Fried configuration, shown in Figure 5.2(a), is the geometry most commonly used with Shack–Hartmann wavefront sensors; each actuator is aligned with the common corner of four neighboring lenslets. Figure 5.2(b) shows the Fried configuration for a 97-actuator DM. The lenslets surrounding an actuator are quite sensitive to its displacement, making calibration easier. However, the gradient measurements can be resolved into 45° components connecting diagonal nodes, which leads to two separate, interlaced networks, shown in Figure 5.3(a). If a displacement exists between the two grids, a checkerboard wavefront pattern, or waffle, will be produced, to which the wavefront sensor is insensitive. Figure 5.3(b) shows an example of waffle in the residual wave aberration produced by the Fried configuration in Figure 5.2(b). Such a waffle shape cannot be sensed by the wavefront sensor, but it can be rejected in the controller if the controller is specifically programmed to filter it out [3].
An alternative is to displace the lenslet array slightly with respect to the actuators, which makes the waffle mode detectable.
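The insensitivity of the Fried geometry to waffle can be illustrated numerically. The sketch below is an illustrative model only (not code from any particular AO system): it uses the standard Fried gradient approximation, in which each lenslet reports the average slope implied by its four actuator corner values. A checkerboard (waffle) actuator pattern produces identically zero gradients, while a simple tilt is sensed everywhere:

```python
import numpy as np

def fried_gradients(phi):
    """Shack-Hartmann gradients in the Fried geometry: each lenslet
    averages the slopes implied by its four corner (actuator) values."""
    # x-slope: mean of the two forward differences along columns
    sx = 0.5 * ((phi[:-1, 1:] + phi[1:, 1:]) - (phi[:-1, :-1] + phi[1:, :-1]))
    # y-slope: mean of the two forward differences along rows
    sy = 0.5 * ((phi[1:, :-1] + phi[1:, 1:]) - (phi[:-1, :-1] + phi[:-1, 1:]))
    return sx, sy

n = 8
i, j = np.indices((n, n))

waffle = (-1.0) ** (i + j)   # checkerboard actuator pattern
tilt = j.astype(float)       # unit tilt along x

sx_w, sy_w = fried_gradients(waffle)
sx_t, sy_t = fried_gradients(tilt)

print(np.abs(sx_w).max(), np.abs(sy_w).max())  # waffle: both zero, invisible
print(sx_t.min(), sx_t.max())                  # tilt: unit slope at every lenslet
```

The zero gradients for the checkerboard are exact, not a numerical coincidence: the two corner sums in each difference cancel pairwise, which is precisely why the controller must reject waffle by some other means.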
FIGURE 5.1 Southwell configuration. Squares represent individual lenslets, while small circles represent individual actuators on the deformable mirror (DM).
FIGURE 5.2 Fried configuration. Squares represent individual lenslets, while small circles represent individual actuators on the deformable mirror (DM).
FIGURE 5.3 (a) Diagonal modes that can be created with a Fried configuration. (b) Waffle produced in the residual wave aberration with a Fried configuration.
Higher density lenslet configurations, with more lenslets than actuators, can avoid the waffle mode in the wavefront compensation at the expense of requiring more light for wavefront sensing. In current vision science AO systems [4, 5], there are enough photons for the Shack–Hartmann wavefront sensor to operate with a higher density lenslet array and still measure the eye's wave aberration accurately. Figure 5.4(a) shows the configuration of a 17 × 17 lenslet array for an 11 × 11 actuator array DM. Figure 5.4(b) shows the residual wavefront error after correction with this configuration.
FIGURE 5.4 High-density lenslet configuration, with more lenslets than actuators. Squares represent individual lenslets, while small circles represent individual actuators on the deformable mirror (DM).
5.3 INFLUENCE FUNCTION MEASUREMENT
Deformable mirrors are the most widely used devices for wavefront correction in vision science. A continuous faceplate deformable mirror consists of a reflective glass facesheet deformed by an array of discrete axial push–pull actuators mounted on a rigid support (see also Chapter 4). When a unit voltage is applied to one actuator on the deformable mirror, the surface deformation produced by this activated actuator is called the actuator's influence function. The use of the influence function in control theory assumes that the AO system is linear; the closed-loop nature of the controller provides some tolerance to departures from this assumption. The influence function measurement is a necessary part of the calibration process. In an AO system, the influence function can be either the slope response measured from the wavefront sensor or the peak wavefront response at the actuator's position, converted from the wavefront reconstruction measured by the wavefront sensor.

If the wavefront sensor has K lenslets and the wavefront corrector has M actuators, then adding a unit voltage to the mth actuator will cause the wavefront corrector to shape the wavefront profile, φ_m. The corresponding slope vector, s_m, is measured with the wavefront sensor:

$$ \mathbf{s}_m = [\,s_{1mx},\, s_{1my},\, s_{2mx},\, s_{2my},\, \ldots,\, s_{kmx},\, s_{kmy},\, \ldots,\, s_{Kmx},\, s_{Kmy}\,]^T \qquad (5.1) $$
where (s_kmx, s_kmy) is the local average slope of this wavefront error, φ_m, at the kth lenslet,

$$ s_{kmx} = \frac{\Delta x_{Sk}}{F} = \frac{\displaystyle\iint_k \frac{\partial \varphi_m(x,y)}{\partial x}\, dx\, dy}{\displaystyle\iint_k dx\, dy} \qquad (5.2) $$

$$ s_{kmy} = \frac{\Delta y_{Sk}}{F} = \frac{\displaystyle\iint_k \frac{\partial \varphi_m(x,y)}{\partial y}\, dx\, dy}{\displaystyle\iint_k dx\, dy} \qquad (5.3) $$
In the above expression, F is the focal length of the lenslet and ∆xSk is the displacement of the focal spot in the horizontal direction from lenslet k due to the wavefront error. The term ∆ySk is the displacement of the spot in the vertical direction. Figure 5.5 shows the x-slope influence function (solid line) and the y-slope influence function (dashed line) when the center actuator is pushed by a unit voltage from the configuration of Figure 5.4(a). The tip-tilt components are removed from the slope vector. The wavefront influence function of each actuator can be computed from its slope influence function using a Zernike mode reconstruction or by fitting Gaussian functions to be consistent with the slope data. Zernike mode reconstruction is commonly used in vision science [6].
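The subaperture averages in Eqs. (5.2) and (5.3) are straightforward to compute numerically. A minimal sketch, assuming the pupil is sampled on a regular square grid and each lenslet maps to one square block of samples (real pupils are circular and need a mask):

```python
import numpy as np

def average_slopes(phi, n_lenslets, dx):
    """Average d(phi)/dx and d(phi)/dy over each square lenslet subaperture,
    approximating the integrals of Eqs. (5.2)-(5.3) on a sampled grid."""
    gy, gx = np.gradient(phi, dx)       # numerical partial derivatives
    n = phi.shape[0] // n_lenslets      # samples per lenslet side
    skx = np.zeros((n_lenslets, n_lenslets))
    sky = np.zeros((n_lenslets, n_lenslets))
    for r in range(n_lenslets):
        for c in range(n_lenslets):
            block = (slice(r * n, (r + 1) * n), slice(c * n, (c + 1) * n))
            skx[r, c] = gx[block].mean()   # local average x-slope
            sky[r, c] = gy[block].mean()   # local average y-slope
    return skx, sky

# A pure-tilt wavefront has the same slope everywhere, so every lenslet
# should report the tilt coefficients (a, b) exactly.
dx = 0.05
x = np.arange(0, 3.2, dx)
X, Y = np.meshgrid(x, x)
a, b = 0.7, -0.3
phi = a * X + b * Y
skx, sky = average_slopes(phi, n_lenslets=8, dx=dx)
```

Dividing the average slopes by the lenslet focal length (as in the equations) converts them to spot displacements and back; the tilt check above is a convenient sanity test when calibrating such code.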
FIGURE 5.5 Slope influence function of the center actuator. Solid and dashed lines represent spot displacements in the horizontal and vertical directions, respectively.
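The modal reconstruction step, building a matrix of subaperture-averaged mode gradients and applying its pseudo-inverse, can be sketched as follows. For brevity this illustration uses simple polynomial modes with analytic gradients rather than a full Zernike basis; the algebra is identical:

```python
import numpy as np

# Stand-in modes with analytic gradients (x, y on the unit square);
# a real system would use Zernike polynomials here.
modes = [
    (lambda x, y: x,           lambda x, y: np.ones_like(x),  lambda x, y: np.zeros_like(x)),
    (lambda x, y: y,           lambda x, y: np.zeros_like(x), lambda x, y: np.ones_like(x)),
    (lambda x, y: x**2 - y**2, lambda x, y: 2 * x,            lambda x, y: -2 * y),
    (lambda x, y: 2 * x * y,   lambda x, y: 2 * y,            lambda x, y: 2 * x),
]

n_lens = 6                               # 6 x 6 lenslet grid
centers = (np.arange(n_lens) + 0.5) / n_lens
CX, CY = np.meshgrid(centers, centers)

# Build the 2K x J matrix of per-lenslet average mode gradients.
# For these low-order modes the gradients are linear, so the subaperture
# average equals the value at the lenslet center.
cols = []
for _, gx, gy in modes:
    col = np.empty(2 * n_lens * n_lens)
    col[0::2] = gx(CX, CY).ravel()       # x-slope rows
    col[1::2] = gy(CX, CY).ravel()       # y-slope rows
    cols.append(col)
Z = np.column_stack(cols)

# Simulate slopes from a known coefficient vector and recover it.
c_true = np.array([0.5, -0.2, 0.1, 0.05])
s = Z @ c_true
c_rec = np.linalg.pinv(Z) @ s
print(np.round(c_rec, 6))
```

Because the slopes are linear in the coefficients, the least-squares recovery is exact here; with measurement noise the pseudo-inverse returns the least-squares fit instead.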
The Zernike reconstruction matrix is given by

$$ \mathbf{Z} = \begin{bmatrix} Z_{11x} & \cdots & Z_{1jx} & \cdots & Z_{1Jx} \\ Z_{11y} & \cdots & Z_{1jy} & \cdots & Z_{1Jy} \\ \vdots & & \vdots & & \vdots \\ Z_{k1x} & \cdots & Z_{kjx} & \cdots & Z_{kJx} \\ Z_{k1y} & \cdots & Z_{kjy} & \cdots & Z_{kJy} \\ \vdots & & \vdots & & \vdots \\ Z_{K1x} & \cdots & Z_{Kjx} & \cdots & Z_{KJx} \\ Z_{K1y} & \cdots & Z_{Kjy} & \cdots & Z_{KJy} \end{bmatrix}_{2K \times J} \qquad (5.4) $$
where

$$ Z_{kjx} = \frac{\displaystyle\iint_k \frac{\partial Z_j(x,y)}{\partial x}\, dx\, dy}{\displaystyle\iint_k dx\, dy} \qquad \text{and} \qquad Z_{kjy} = \frac{\displaystyle\iint_k \frac{\partial Z_j(x,y)}{\partial y}\, dx\, dy}{\displaystyle\iint_k dx\, dy} $$

are the average slopes of the jth Zernike mode at the kth lenslet (k = 1, 2, . . . , K and j = 1, 2, . . . , J). The wavefront reconstructed from the corresponding slope vector, s_m, is

$$ \mathbf{c}_m = \mathbf{Z}^{\dagger} \mathbf{s}_m \qquad (5.5) $$
where Z† is the pseudo-inverse of the reconstruction matrix, Z, and c_m = [c_m1, c_m2, . . . , c_mJ]^T is the reconstructed Zernike coefficient vector (with J modes) for the influence function wavefront of the mth actuator. Figure 5.6 shows the wavefront influence function, recovered using the Zernike mode reconstruction method, of the center actuator when a voltage is applied to it in the configuration of Figure 5.4(a).

5.4 SPATIAL CONTROL COMMAND OF THE WAVEFRONT CORRECTOR

5.4.1 Control Matrix for the Direct Slope Algorithm
Repeating the slope influence function measurement for all actuators, one can construct the slope influence matrix A [7, 8]:
FIGURE 5.6 Wavefront influence function of the center actuator.
$$ \mathbf{A} = [\,\mathbf{s}_1, \ldots, \mathbf{s}_m, \ldots, \mathbf{s}_M\,] = \begin{bmatrix} s_{11x} & \cdots & s_{1mx} & \cdots & s_{1Mx} \\ s_{11y} & \cdots & s_{1my} & \cdots & s_{1My} \\ \vdots & & \vdots & & \vdots \\ s_{k1x} & \cdots & s_{kmx} & \cdots & s_{kMx} \\ s_{k1y} & \cdots & s_{kmy} & \cdots & s_{kMy} \\ \vdots & & \vdots & & \vdots \\ s_{K1x} & \cdots & s_{Kmx} & \cdots & s_{KMx} \\ s_{K1y} & \cdots & s_{Kmy} & \cdots & s_{KMy} \end{bmatrix}_{2K \times M} \qquad (5.6) $$

This influence matrix is a 2K × M matrix, where M is the number of actuators and K is the number of wavefront sensor lenslets. Its mth column is the measurement vector corresponding to a unit voltage applied to the mth deformable mirror actuator, that is, the mth actuator influence function. The influence matrix, A, defines the sensitivity of the wavefront sensor to the deformable mirror and can be used to compute the actuator command vector, v, from the measurement vector, s, of the wavefront sensor:

$$ \mathbf{v} = \mathbf{A}^{\dagger} \mathbf{s} \qquad (5.7) $$
where A† is the pseudo-inverse of the influence matrix, and s is a column vector of the aberrated wavefront measured from the same wavefront sensor:

$$ \mathbf{s} = [\,s_{1x},\, s_{1y},\, s_{2x},\, s_{2y},\, \ldots,\, s_{kx},\, s_{ky},\, \ldots,\, s_{Kx},\, s_{Ky}\,]^T \qquad (5.8) $$
Using singular value decomposition, the pseudo-inverse of the influence matrix, A†, can be deduced from the influence function matrix. In order to avoid instabilities in closed-loop control, certain modes must be filtered out before the voltage commands are sent to the deformable mirror. These filtered modes are ones that the wavefront sensor typically does not detect, such as piston and/or waffle. Since tip-tilt does not affect the quality of the retinal image, the wavefront corrector does not correct for tip-tilt errors either. The tip-tilt component can be filtered from the slope vector by subtracting the global tip-tilt from the slope vectors:

$$ \bar{s}_x = \frac{1}{K} \sum_{k=1}^{K} s_{kx} \qquad (5.9) $$

$$ \bar{s}_y = \frac{1}{K} \sum_{k=1}^{K} s_{ky} \qquad (5.10) $$

The new slope vector is

$$ \mathbf{s}' = [\,(s_{1x}-\bar{s}_x),\, (s_{1y}-\bar{s}_y),\, (s_{2x}-\bar{s}_x),\, (s_{2y}-\bar{s}_y),\, \ldots,\, (s_{Kx}-\bar{s}_x),\, (s_{Ky}-\bar{s}_y)\,]^T \qquad (5.11) $$
The piston component

$$ \bar{v} = \frac{1}{M} \sum_{m=1}^{M} v_m \qquad (5.12) $$

can be removed from the command vector. The new command vector for the deformable mirror is

$$ \mathbf{v}' = [\,(v_1-\bar{v}),\, (v_2-\bar{v}),\, \ldots,\, (v_M-\bar{v})\,]^T \qquad (5.13) $$
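Putting Eqs. (5.6) through (5.13) together, the direct slope command computation can be sketched as below. The matrix sizes and the influence matrix are synthetic placeholders (a real system would measure A by poking each actuator in turn), and the singular-value threshold is an illustrative choice:

```python
import numpy as np

rng = np.random.default_rng(0)
K, M = 100, 40                       # lenslets, actuators (illustrative sizes)
A = rng.standard_normal((2 * K, M))  # stand-in for the measured slope influence matrix

# Pseudo-inverse via SVD, discarding weak singular values that would
# amplify noise into poorly sensed (piston/waffle-like) mirror shapes.
U, w, Vt = np.linalg.svd(A, full_matrices=False)
keep = w > 1e-3 * w.max()
A_pinv = (Vt.T[:, keep] / w[keep]) @ U[:, keep].T

# Remove global tip-tilt from the measured slopes, cf. Eqs. (5.9)-(5.11)
# (x- and y-slopes interleaved as in Eq. 5.8).
s = rng.standard_normal(2 * K)
s_prime = s.copy()
s_prime[0::2] -= s[0::2].mean()      # subtract mean x-slope
s_prime[1::2] -= s[1::2].mean()      # subtract mean y-slope

# Actuator commands (Eq. 5.7), then remove piston, cf. Eqs. (5.12)-(5.13).
v = A_pinv @ s_prime
v_prime = v - v.mean()
```

The truncated SVD is the standard way to implement the mode filtering described above: modes with small singular values are exactly those the sensor barely sees, so their reciprocals would otherwise dominate the commands.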
The direct slope control algorithm is the most direct and flexible method of wavefront correction in AO. It allows the flexibility of modifying the control law to accommodate variations in operating conditions and hardware status. Since the control objective of the direct slope control algorithm is to zero the local slope vector, as opposed to driving the corrector toward a desired wavefront, it has the advantage of avoiding mode aliasing or cross-coupling in wavefront reconstruction. This method also tolerates small misalignments between the lenslets and the actuators, because measuring the influence function matrix is a self-calibrating process that does not rely on a model.

5.4.2 Modal Wavefront Correction
The control objective of the modal wavefront control algorithm is to drive the deformable mirror to a desired wavefront. If the Shack–Hartmann wavefront sensor has K lenslets to sample the wavefront error in the pupil, the wavefront error measurement can be denoted as

$$ \mathbf{s} = [\,s_{1x},\, s_{1y},\, s_{2x},\, s_{2y},\, \ldots,\, s_{kx},\, s_{ky},\, \ldots,\, s_{Kx},\, s_{Ky}\,]^T \qquad (5.14) $$

and the reconstructed modal coefficient vector is

$$ \mathbf{c} = \mathbf{Z}^{\dagger} \mathbf{s} \qquad (5.15) $$

where Z† is the pseudo-inverse of a reconstruction matrix, Z. If Z is the Zernike reconstruction matrix, then c = [c_1, c_2, . . . , c_J]^T is the reconstructed coefficient vector of the Zernike polynomials expanded to J modes. Then, for a known slope vector, s, and with a least-squares algorithm, the actuator control vector is

$$ \mathbf{v} = \mathbf{A}^{\dagger} \mathbf{Z} \mathbf{c} \qquad (5.16) $$
where A† is the pseudo-inverse of the wavefront influence function matrix. By multiplying the mode vector by a mode filter matrix, we can have the reconstructed wavefront retain only the set of desired modes. From the reconstructed wavefront, one can then compute the average piston for each actuator. Modal wavefront control has the advantage of allowing the system to control specific modes, and it can also control the wave aberration using different bandwidths for each mode [9].

5.4.3 Wave Aberration Generator
Adaptive optics is usually used to correct the eye's wave aberration. To do this, the wavefront corrector predistorts the wavefront to compensate for the eye's aberrations so that either retinal image quality or subjective visual quality is sharp. However, an AO system can be used not only to correct the eye's wave aberration but also to generate specific aberrations that intentionally blur the retinal image for psychophysical experiments [10, 11]. To do this in closed loop, the AO controller is fed the desired wave aberration as a reference rather than a plane wave. Figure 5.7 shows examples of generating a single Zernike mode and a real wave aberration from a post-LASIK (laser in situ keratomileusis) patient.
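In software terms, generating an aberration only changes the loop's reference: the controller drives the measured slopes toward the slopes of the desired wave aberration instead of toward zero. A toy sketch with a linear forward model (all matrices and vectors here are synthetic placeholders, not data from a real system, and the target is chosen to be reachable by the corrector):

```python
import numpy as np

rng = np.random.default_rng(1)
K, M = 60, 20
A = rng.standard_normal((2 * K, M))      # stand-in slope influence matrix
A_pinv = np.linalg.pinv(A)

s_eye = rng.standard_normal(2 * K)       # eye's (static) aberration slopes
v_target = rng.standard_normal(M)
s_ref = s_eye + A @ v_target             # slopes of the desired aberration,
                                         # chosen reachable by the corrector

v = np.zeros(M)
gain = 0.5
for _ in range(30):
    s_meas = s_eye + A @ v                        # sensor sees eye + corrector
    v = v - gain * (A_pinv @ (s_meas - s_ref))    # drive toward the reference

print(np.linalg.norm(s_eye + A @ v - s_ref))      # near zero: target produced
```

Setting s_ref to zero recovers ordinary correction; the same loop therefore serves both purposes, which is how a single controller can correct the eye in one experiment and blur it in the next.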
FIGURE 5.7 Aberrations generated in a real eye with adaptive optics. The DM can be used to correct the eye’s aberrations and introduce (a) individual Zernike modes (such as vertical coma) or (b) entire wave aberrations (such as a post-LASIK wave aberration).
5.5 TEMPORAL CONTROL COMMAND OF THE WAVEFRONT CORRECTOR

In an AO system, the control commands for the wavefront corrector can be split into static and dynamic components. The static component estimates the control voltage vector, v, that would give the best fit of the wavefront measurement, s, if the voltages were sent to the wavefront corrector. This estimate is computed by multiplying the slope vector, s, by the control matrix, A†. The dynamic component ensures the stability and accuracy of the closed-loop correction; it is deduced from the time sequence of correction increments and the effective voltage through a control algorithm. A feedback closed-loop control system can be defined, in general, as

$$ \mathbf{v}_{t+T} = \mathbf{v}_t - K_G \cdot \mathbf{v}_T \qquad (5.17) $$
where T is the integration time (or exposure time) of the charge-coupled device (CCD) and K_G is the coefficient of the controller, proportional to the system gain, K. The controller coefficient represents a trade-off between control system stability and accuracy, and its value can be changed to optimize the closed-loop performance of the control system. (For example, in the Rochester Adaptive Optics Ophthalmoscope, K_G = 0.01 × K for optimal closed-loop feedback control.) The design and the complexity of a control system are directly related to the requirements of the system's application(s). Control systems are broadly classified as either open-loop systems or closed-loop systems [12].

5.5.1 Open-Loop Control

In an open-loop control operation, the wavefront sensor is placed in the optical path before the wavefront corrector, and the measurement from the wavefront sensor directly controls the wavefront corrector. Figure 5.8 is a block diagram of an open-loop AO system for vision science. In this diagram, the wavefront sensor measures the wave aberration before it is corrected by the wavefront corrector.

FIGURE 5.8 Open-loop control system for correcting the wave aberration.

Open-loop systems are simple and stable, but they must be very accurately calibrated, since any uncertainty or disturbance can greatly affect the accuracy of the system. An open-loop system is controlled directly by its input signal, without the benefit of feedback.

5.5.2 Closed-Loop Control

Closed-loop control systems are currently used in all vision AO systems because they are faster and more accurate than open-loop control systems. In a closed-loop AO system, the wavefront sensor measures the wave aberration after it is corrected by the wavefront corrector. Figure 5.9 is a block diagram of a closed-loop AO system for vision science. In this diagram, the input, φ_i(x, y, t), is the uncompensated wave aberration from the eye at a spatial point, (x, y), and at time, t. The correction introduced by a wavefront corrector, for instance a DM, is denoted by φ_c(x, y, t), and the residual wave aberration after AO correction is

$$ \varphi_r(x, y, t) = \varphi_i(x, y, t) - \varphi_c(x, y, t) \qquad (5.18) $$
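The discrete update of Eq. (5.17) makes the residual of Eq. (5.18) decay geometrically when the loop gain lies between 0 and 1. A scalar sketch for a single wavefront mode, with no noise and invented numbers:

```python
# One spatial mode of the wavefront, tracked over loop iterations:
# the corrector integrates a fraction K_G of each measured residual.
phi_eye = 1.0          # eye's aberration in this mode (arbitrary units)
phi_corr = 0.0         # corrector's current compensation
K_G = 0.3              # loop gain (0 < K_G < 1 gives stable, monotone decay)

residuals = []
for _ in range(20):
    phi_res = phi_eye - phi_corr        # Eq. (5.18): what the sensor measures
    residuals.append(abs(phi_res))
    phi_corr += K_G * phi_res           # integrator update, cf. Eq. (5.17)

# Each step multiplies the residual by (1 - K_G), i.e. about 0.7 here.
print(residuals[1] / residuals[0])
```

Raising K_G speeds convergence but, in a real loop with measurement delay and noise, too high a gain produces the oscillations mentioned above; the trade-off is quantified by the transfer functions in Section 5.5.3.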
In an automatic feedback algorithm, the wavefront corrector is informed by the wavefront sensor that the desired wavefront correction has taken place. This closed-loop operation allows the AO system to minimize the residual wave aberration, φ_r(x, y, t), within the finite spatial and temporal resolution of the wavefront sensor.

FIGURE 5.9 Closed-loop control system for correcting the wave aberration.

The spatial resolution of the wavefront sensor is given by the number of lenslets that sample the circular aperture of the eye at the pupil plane. The temporal resolution of the wavefront sensor is characterized by the sampling frequency of the wavefront detector readout, for example, the frame rate of the CCD camera, which is typically dominated by the exposure time.

The closed-loop AO control system increases the system accuracy, that is, its ability to faithfully reproduce the input, and it reduces the sensitivity of the output-to-input ratio to variations in system characteristics. The nonlinearity in the response of individual wavefront corrector actuators and their hysteresis characteristics, plus other static aberrations in the optical path, can be sensed and corrected by the control loop. The disadvantages of closed-loop control are its increased complexity and cost; it can also sometimes yield unstable or objectionable oscillations in wavefront correction.

5.5.3 Transfer Function of an Adaptive Optics System

An AO system is a multiple input/output system in which the inputs are the control commands of the wavefront corrector and the outputs are the wavefront measurements from the wavefront sensor. With the appropriate choice of controller, the AO system's control loop can be split into multiple independent single servosystems, and the temporal behavior of each can be analyzed independently. Figure 5.10 shows a block diagram representation of an AO control system for the eye, in which X(s) is the eye's wave aberration (with s = i2πf and i² = −1), R(s) is the residual wave aberration, N(s) is the noise from the wavefront sensor, and M(s) is the control signal for wavefront compensation. To study the overall temporal behavior of an AO system, a transfer function approach is used to model each of the components in the block diagram [1, 13–15].
FIGURE 5.10 Block diagram of an adaptive optical control system for the eye. T = integration time; τ = readout and computation delay time; WC = wavefront computer; CC = control computer; ZOH = zero-order hold; HVA = high-voltage amplifiers; DM = deformable mirror.

Transfer Function of Wavefront Sensor, H_STARE We consider here two possible practical implementations of a wavefront sensor: Shack–Hartmann and curvature sensors. No matter what principle the wavefront sensor uses,
it requires a detector to accumulate photons coming from the eye for measuring the wave aberration. The detector, such as a CCD camera or an array of single-detector APDs (avalanche photodiodes), is characterized by its integration time, T (or exposure time). The wave aberration from the eye, φ_i(x, y, t), is sampled with period, T, and the temporal behavior of the wavefront sensor can be written as:

$$ H_{STARE}(t) = \frac{1}{T} \int_t^{t+T} \varphi_i(x, y, t)\, dt = \frac{1}{T} \int_t^{\infty} \varphi_i(x, y, t)\, dt - \frac{1}{T} \int_{t+T}^{\infty} \varphi_i(x, y, t)\, dt \qquad (5.19) $$
Using the Laplace transform, an integrator and a time lag T can be written as:

$$ \mathcal{L}\left[ \int_t^{\infty} \varphi_i(x, y, t)\, dt \right] = \frac{1}{s} \qquad (5.20) $$

$$ \mathcal{L}(t + T) = e^{-Ts} \qquad (5.21) $$

where s = i2πf and i² = −1. Then the transfer function of the wavefront sensor (WS) is

$$ H_{STARE}(s) = \frac{1}{T}\frac{1}{s} - \frac{1}{T}\frac{1}{s} e^{-Ts} = \frac{1 - e^{-Ts}}{Ts} \qquad (5.22) $$
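Equation (5.22) is the familiar sample-and-average response: its magnitude is a sinc in frequency, |H_STARE(i2πf)| = |sin(πfT)/(πfT)|, which is easy to check numerically:

```python
import numpy as np

T = 1 / 30                       # 30-Hz wavefront sensor integration time
f = np.linspace(0.1, 15, 300)    # temporal frequency (Hz)
s = 1j * 2 * np.pi * f

H_stare = (1 - np.exp(-T * s)) / (T * s)    # Eq. (5.22)

# |H| equals |sinc(fT)|; numpy's sinc already includes the factor of pi.
assert np.allclose(np.abs(H_stare), np.abs(np.sinc(f * T)))
print(np.abs(H_stare)[0])        # near 1: slow aberration dynamics pass through
```

The gain stays near unity well below the frame rate and rolls off toward it, which is why the sensor frame rate bounds the temporal dynamics the loop can follow.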
Transfer Function of Wavefront Sensor Computer Delay, H_DELAY The temporal characteristic of the wavefront sensor computer is the time lag, τ, due to the readout of the detector, the computing of the wavefront measurements (such as the centroid calculation in the case of a Shack–Hartmann wavefront sensor), and the voltages deduced from the wavefront measurements:

$$ H_{DELAY}(s) = e^{-\tau s} \qquad (5.23) $$
Transfer Function of Control Computer, H_CC The main task of the control computer is to apply a control law to the voltages used for updating the wavefront corrector, allowing the AO system to perform the real-time compensation required to optimize the AO closed-loop response. Here we use H_CC(s) to represent the temporal characteristic of the control computer.

Transfer Function of Digital-to-Analog Converters, H_ZOH Digital-to-analog converters (DACs) convert the control voltage from a digital to an analog signal for driving the wavefront corrector. A DAC usually holds the current control voltage during the exposure time until the next voltages are available from the control computer, so this transfer function is designated as the zero-order hold (ZOH). The transfer function can be written as:

$$ H_{ZOH}(s) = \frac{1 - e^{-Ts}}{s} \qquad (5.24) $$
Transfer Function of High-Voltage Amplifiers, H_HVA The high-voltage amplifiers (HVAs) amplify the low control voltage from the DACs to drive the actuators of the wavefront corrector. Since the bandwidth of the HVAs is much greater than the sampling frequency of the AO system, we use a simple model for the transfer function of the HVAs:

$$ H_{HVA}(s) = 1 \qquad (5.25) $$

Transfer Function of Deformable Mirror, H_DM Generally, the resonant frequency of the wavefront corrector is much greater than the sampling frequency of the wavefront detector. The transfer function of the DM can also be simply modeled as:

$$ H_{DM}(s) = 1 \qquad (5.26) $$
Continuous Laplace models can be used to analyze the AO control system. Figure 5.11 shows the block diagram of an AO control system modeled with transfer functions. The open-loop transfer function of the AO control loop is

$$ H_{OPEN}(s) = \frac{M(s)}{R(s)} \qquad (5.27) $$
FIGURE 5.11 Block diagram of the AO control system using transfer functions.

And the closed-loop transfer function is

$$ H_{CLOSED}(s) = \frac{M(s)}{X(s)} = \frac{H_{OPEN}(s)}{1 + H_{OPEN}(s)} \qquad (5.28) $$
The error transfer function is

$$ H_{ERROR}(s) = \frac{R(s)}{X(s)} = \frac{1}{1 + H_{OPEN}(s)} \qquad (5.29) $$
From these three transfer functions, three criteria of control bandwidths are used when analyzing an AO control system [14, 16].

1. Open-Loop Bandwidth. The open-loop bandwidth is defined as the 0-dB cutoff frequency, f_OPEN, of the open-loop transfer function:

$$ \left| H_{OPEN}(i2\pi f_{OPEN}) \right|^2 = 1 \qquad (5.30) $$

2. Closed-Loop Bandwidth. The closed-loop bandwidth, f_3dB, is the −3-dB cutoff frequency of the closed-loop transfer function:

$$ \left| H_{CLOSED}(i2\pi f_{3dB}) \right|^2 = \frac{1}{2} \qquad (5.31) $$

The closed-loop bandwidth determines the frequency range over which the controller rejects the temporal variations in the eye's aberration.

3. Error Transfer Function Bandwidth. The error transfer function bandwidth, f_e, is the 0-dB cutoff frequency of the error transfer function:

$$ \left| H_{ERROR}(i2\pi f_e) \right|^2 = 1 \qquad (5.32) $$
The error transfer function determines the frequencies over which the closed-loop system responds to the wavefront sensor measurement noise.

As an example, Figure 5.12 represents a closed-loop AO system where the feedback control is modeled with the transfer function H_CC(s) = K/s, in which K is the gain. To simplify the analysis, we ignore the noise for now. The theoretical transfer functions of the AO closed-loop system are now

$$ H_{OPEN}(s) = K\, \frac{(1 - e^{-Ts})^2}{Ts^3}\, e^{-\tau s} \qquad (5.33) $$
FIGURE 5.12 Model of an AO control system with transfer functions.
FIGURE 5.13 Gain of the open-loop transfer function.
$$ H_{CLOSED}(s) = \frac{K(1 - e^{-Ts})^2 e^{-\tau s}}{Ts^3 + K(1 - e^{-Ts})^2 e^{-\tau s}} \qquad (5.34) $$

$$ H_{ERROR}(s) = \frac{Ts^3}{Ts^3 + K(1 - e^{-Ts})^2 e^{-\tau s}} \qquad (5.35) $$
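Equations (5.33) through (5.35) can be evaluated numerically. The sketch below uses the chapter's example values (T = 1/30 s, τ = T, K = 30) and checks two properties that must hold for any such loop: H_CLOSED + H_ERROR = 1 at every frequency, and the error gain is well below unity at low frequencies, where the eye's aberrations are rejected. The exact crossover bandwidths quoted in the text depend on details of the real control loop implementation, so they are not reproduced here:

```python
import numpy as np

T = 1 / 30          # CCD integration time (s)
tau = T             # readout/compute delay (s)
K = 30              # feedback loop gain

def H_open(f):
    """Open-loop transfer function, Eq. (5.33), at frequency f in Hz."""
    s = 1j * 2 * np.pi * f
    return K * (1 - np.exp(-T * s)) ** 2 * np.exp(-tau * s) / (T * s ** 3)

f = np.logspace(-2, 1, 400)          # 0.01 to 10 Hz
Ho = H_open(f)
H_closed = Ho / (1 + Ho)             # Eq. (5.34)
H_error = 1 / (1 + Ho)               # Eq. (5.35)

# Complementarity: whatever is not passed to the correction appears as error.
assert np.allclose(H_closed + H_error, 1.0)

# At 0.01 Hz the loop tracks the aberration and strongly rejects it.
print(abs(H_closed[0]), abs(H_error[0]))
```

Plotting 20·log10 of these magnitudes against frequency reproduces the shape of the gain curves in Figures 5.13 through 5.15.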
Figure 5.13 is the gain of the open-loop transfer function, H_OPEN, of an AO system with a 30-Hz wavefront sensor sampling rate and time lag τ = T. The open-loop bandwidth is 1.3 Hz with a feedback control loop gain of K = 30. Figure 5.14 is the gain of the closed-loop transfer function, H_CLOSED, with K = 30. The closed-loop bandwidth is 1.8 Hz.
FIGURE 5.14 Gain of the closed-loop transfer function.
FIGURE 5.15 Gain of the error transfer function.
Figure 5.15 gives the gain of the error transfer function, H_ERROR, for which the bandwidth is 0.9 Hz. Once the temporal behavior of a feedback closed-loop AO system has been described, it can be used to optimize the AO system performance under various signal-to-noise ratio conditions (see also Chapter 8) [17, 18].
REFERENCES 1. Hardy JW. Adaptive Optics for Astronomical Telescopes. Oxford: Oxford University Press, 1998. 2. Tyson RK. Principles of Adaptive Optics, 2nd ed. Boston: Academic, 1998. 3. Poyneer LA, Gavel DT, Brase JM. Fast Wavefront Reconstruction in Large Adaptive Optics Systems with Use of the Fourier Transform. J. Opt. Soc. Am. A. 2002: 19; 2100–2111. 4. Hofer H, Chen L, Yoon GY, et al. Improvement in Retinal Image Quality with Dynamic Correction of the Eye’s Aberrations. Opt. Express. 2001; 8: 631–643. 5. Roorda A, Romero-Borja F, Donnelly WJ, et al. Adaptive Optics Laser Scanning Ophthalmoscopy. Opt. Express. 2002; 10: 405–412. 6. Thibos LN, Applegate RA, Schwiegerling JT, et al. Standards for Reporting the Optical Aberrations of Eyes. In: Lakshminarayanan V, ed. OSA Trends in Optics and Photonics, Vision Science and Its Applications, Vol. 35. Washington, D.C.: Optical Society of America, 2000, pp. 232–244. 7. Boyer C, Michau V, Rousset G. Adaptive Optics: Interaction Matrix Measurement and Real Time Control Algorithms for the COME-ON Project. In: Schulte-in-den-Baeumen JJ, Tyson RK, eds. Adaptive Optics and Optical Structures. Proceedings of the SPIE. 1990; 1271: 63–81. 8. Jiang W, Li H. Hartmann-Shack Wavefront Sensing and Control Algorithm. In: Schulte-in-den-Baeumen JJ, Tyson RK, eds. Adaptive Optics and Optical Structures. Proceedings of the SPIE. 1990; 1271: 82–93. 9. Ellerbroek BL, Van Loan C, Pitisianis NP, Plemmons RJ. Optimizing Closed-Loop Adaptive-Optics Performance with Use of Multiple Control Bandwidths. J. Opt. Soc. Am. A. 1994; 11: 2871–2886. 10. Chen L, Singer B, Guirao A, Porter J, Williams DR. Image Metrics for Predicting Subjective Image Quality. Optom. Vis. Sci. 2005; 5: 358–369. 11. Artal P, Chen L, Fernandez EJ, et al. Neural Compensation for the Eye’s Optical Aberrations. J. Vis. 2004; 4: 281–287. 12. Franklin GF, Powell JD, Workman M. Digital Control of Dynamic Systems, 3rd ed. New York: Addison-Wesley, 1997. 13. 
Demerlé M, Madec PY, Rousset G. Servo-loop Analysis for Adaptive Optics. In: Alloin D, Mariotti JM, eds. NATO Advanced Science Institutes Series on Adaptive Optics for Astronomy. Dordrecht, The Netherlands: Kluwer Academic, 1993, pp. 73–88. 14. Roddier F. Adaptive Optics in Astronomy. New York: Cambridge University Press, 1999. 15. Gavel DT. Control of Adaptive Optics Systems. Center for Adaptive Optics Summer School, University of California, Santa Cruz, 2000. 16. Li X, Jiang W. Control Bandwidth Analysis of Adaptive Optical Systems. In: Tyson RK, Fugate RQ, eds. Adaptive Optics and Applications. Proceedings of the SPIE. 1997; 3126: 447–454.
17. Ellerbroek BL, Rhoadarmer TA. Optimizing the Performance of Closed-Loop Adaptive-Optics Control Systems on the Basis of Experimentally Measured Performance Data. J. Opt. Soc. Am. A. 1997; 14: 1975–1987. 18. Dessenne C, Madec PY, Rousset G. Optimization of a Predictive Controller for Closed-Loop Adaptive Optics. Appl. Opt. 1998; 37: 4623–4633.
CHAPTER SIX
Adaptive Optics Software for Vision Research

BEN SINGER
Princeton University, Princeton, New Jersey
6.1 INTRODUCTION
A simple description of what any adaptive optics (AO) software should do is:

• Measure the wave aberration.
• Correct the wave aberration.
• Repeat.

The repeating cycle of wavefront measurement and correction may be called the AO loop. Here we deal with software for a Shack–Hartmann wavefront sensor (see also Chapter 3) [1] coupled with a deformable mirror, so the above description can be specified further:

• Acquire an image of the lenslet array spots.
• Find the centers of the spots and measure their displacements from a fixed reference.
• Minimize aberrations by deforming the mirror.
• Repeat.
Furthermore, we want to use the system as a general tool for vision research, for doing things like retinal imaging and psychophysics [2], so our description becomes:

• Acquire an image of the lenslet array spots.
• Find the centers of the spots and measure their displacements from a user-defined reference.
• Find Zernike coefficients as a compact description of the wavefront in order to:
  • Record a subject's aberration over time.
  • Provide feedback for an operator on the state of the system.
  • Completely or selectively correct aberrations that have visual significance.
• Repeat until the residual wavefront error is minimized.
• Snap a retinal image or trigger a change in a psychophysical stimulus.

Examples will be taken from software developed in the lab of David R. Williams at the Center for Visual Science, University of Rochester, in collaboration with William Vaughn and Li Chen. The first “real-time” version (25 measurements and corrections of the wave aberration per second) appeared early in 2000 [3].
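The bulleted loop above maps directly onto code. The skeleton below is a schematic of the control flow only; the SpotCamera and Mirror classes are hypothetical stand-ins for real device drivers, and the convergence numbers are invented for the demonstration:

```python
class SpotCamera:
    """Hypothetical wavefront sensor camera: returns a simulated residual
    wavefront RMS that shrinks as the mirror converges."""
    def __init__(self):
        self.residual_rms = 1.0          # microns, invented starting error

    def measure_wavefront(self):
        # In a real system: grab a spots image, centroid, fit Zernikes.
        return self.residual_rms

class Mirror:
    """Hypothetical deformable mirror: each correction removes a fixed
    fraction of the measured error (a stand-in for v -= g * pinv(A) @ s)."""
    def correct(self, camera, gain=0.4):
        camera.residual_rms *= (1 - gain)

def ao_loop(camera, mirror, rms_threshold=0.05, max_iters=100):
    """Measure, correct, repeat, until the residual is small enough."""
    for i in range(max_iters):
        rms = camera.measure_wavefront()   # acquire spots, find centers
        if rms < rms_threshold:
            return i, rms                  # converged: snap a retinal image
        mirror.correct(camera)             # deform the mirror
    return max_iters, camera.measure_wavefront()

iters, final_rms = ao_loop(SpotCamera(), Mirror())
print(iters, final_rms)
```

Swapping the two classes for real camera and mirror drivers, and the threshold test for an experiment-specific trigger, turns this skeleton into the loop described in the rest of the chapter.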
6.2 IMAGE ACQUISITION

6.2.1 Frame Rate
The limiting factor on the speed of the AO loop is the wavefront sensor camera's frame rate. A good update target for an AO loop in vision science research is 30 frames per second (fps). The optics and camera should be chosen such that a good wavefront measurement (spots) image can result from an exposure time of around 33 ms. One can trade resolution for light sensitivity in many research-grade cameras by "binning" (usually so that 4 sensor elements, in sets of 2 × 2, effectively become one). Often this is the only way to run at a desired 30 frames per second using high-resolution cameras.

6.2.2 Synchronization
The camera should also be able to pipeline its expose-readout functions so that it can expose the next frame while reading out the current one (Table 6.1). Usually the camera can only pipeline in this way if it has control over the time course of image acquisition, which means putting the camera in free-run mode. Free-running the camera in this way also removes the need, in the case of CCD cameras, to "clean" the CCD of any built-up charge between frames. Free-running the camera also provides a live picture of the spots, which helps for aligning the subject and during the setup phase before the loop begins. In the Williams lab, we used a frame-transfer 1024 × 512 pixel camera that produced a 512 × 512 image at a maximum rate of 15 fps, or a 256 × 256 image (2 × 2 binning) at a maximum rate of 30 fps.

TABLE 6.1 Pipeline of the AO Loop for Each Frame Interval to Expose the Image, Perform Image Readout, and Complete Processing and Correction

          t = 0 ms   t = 33 ms   t = 67 ms         t = 100 ms        t = 133 ms
Frame 1   Expose     Readout     Process/correct
Frame 2              Expose      Readout           Process/correct
Frame 3                          Expose            Readout           Process/correct

Free-running the camera has the disadvantage of synchronizing the AO loop to the camera. All loop processing needs to complete in one frame time (typically 33 ms; see Table 6.1), or frames will go unprocessed. Fortunately, the processing steps in the AO loop are not especially computationally or memory intensive by today's standards, given the preprocessing steps outlined in the later sections. For instance, a 500-MHz PowerPC G4 is capable of measuring the aberrations in a 256 × 256 image containing 221 spots in 20 ms and could update the mirror with the appropriate compensating deformation in 5 ms, leaving 8 ms to spare. The spare time is used for storing frame data in memory, providing feedback on progress via the trace window, checking for loop termination conditions, or performing experiment-specific tasks.

6.2.3 Pupil Imaging
A live picture from a pupil-imaging camera is useful when aligning the subject. If pupil images are captured at the same time as the wavefront measurement image, one has a record of the pupil position at wavefront measurement time. However, to do that, the pupil camera needs to be synchronized to the camera gathering the spots image, which is not easy if the spots camera is free-running. A trigger would have to be provided from the spots camera to the pupil camera, coincident with the internally generated trigger on the spots camera, and the pupil camera would have to be able to run at least as fast as the spots camera in similar light conditions. Since pupil cameras only need to provide image quality on par with that provided by inexpensive consumer "web" cameras, triggers and efficient operation in low-light conditions are usually unavailable. A pupil image taken up to 30 ms before or after a
corresponding spots image is useful nonetheless since eye position is typically stable for 100 to 300 ms during fixation [4].
6.3 MEASURING WAVEFRONT SLOPE

6.3.1 Setting Regions of Interest
The general problem of finding spots in an image is larger than the one that needs solving in our context. We can take advantage of the fact that each spot comes from a corresponding lenslet, assuming that the optics of the system are designed so that each spot lies in its lenslet's region of interest, or search box, which does not overlap the box of any other lenslet (see Fig. 6.1). Optical issues are outside the scope of this chapter (see also Chapter 3), but the key parameter that helps ensure this basic assumption is a sufficiently short lenslet focal length. Search boxes can be constructed initially so that they center on a theoretical reference (usually the center of each search box), the point where each spot would appear in an aberration-free system.
FIGURE 6.1 Image of the lenslet spots and search boxes. Shown is an image used for calibrating the system. The square search boxes contain the pixels used in the centroid computation. A displacement is computed between the centroid and its corresponding reference (often the center of each box). When scaled by the lenslet focal length, the displacement provides a sample of the wavefront slope over the lenslet.
6.3.2 Issues Related to Image Coordinates
The distance between pixel centers is an important specification to obtain from the spots imaging camera manufacturer. A typical value is on the order of 10 µm. Note that a sensor array that samples with equal density in both the vertical and horizontal dimensions makes life much easier for measuring the wavefront. Altering the binning mode dynamically during an imaging session is fairly trivial for the software to handle, but it is a source of some common bugs. When the camera is put into a 2 × 2 binning mode, the effective interpixel distance doubles; this will impact any algorithm that uses image coordinates. For instance, a set of empirical reference points can be saved to disk when the camera, and its associated image format, is in a different state than the one it is in when, sometime later, the reference is loaded from disk.

6.3.3 Adjusting for Image Quality
Once the subject is aligned and stable, the entire set of search boxes will often need to be translated manually by the operator to contain the maximum number of well-defined spots. One does not always get a well-focused, single spot in every lenslet's box. The most common case is that the subject's pupil is not sufficiently dilated to provide spots for the outer boxes, and so the set of lenslet spots that fills the smaller pupil size should be used instead. The number of samples one has of the wavefront slope, one per spot, is an important parameter used by data structures throughout AO software, so allowing for this number to change dynamically needs to be designed into the software at an early stage.

6.3.4 Measurement Pupils
Three measurement pupils need to be considered: (1) the maximum pupil that contains all the unmasked lenslets in the array, (2) the same or smaller pupil over which good spots are obtained, given the size of the subject's pupil, and (3) a same or smaller pupil over which to compute the wave aberration. As the lab software evolved, these three pupils, initially one, have become decoupled. The motivation for decoupling the first two has been explained above: the subject's pupil is often not dilated enough to contain all of the available lenslets. The motivation for decoupling the third is either to eliminate the influence of reduced data quality at the edges or to examine wavefront reconstructions centered off-axis.

6.3.5 Preparing the Image
Once the set of search boxes has been determined, it is time to find each spot center, or centroid. First, any image processing that needs to be done is performed. Note that the number of pixels to be processed is much lower if all operations are restricted to pixels within search boxes rather than applied over the entire image. Also, any expensive operations will have a negative impact on frame rate. For instance, subtracting a background image, averaging frames, thresholding, or flat fielding improve the likelihood of finding good spot centers. However, when possible, it is preferable to set up a clean, low-noise optical environment free of back reflections, using imaging wavelengths that minimize speckle, rather than to spend valuable time image processing. The Williams lab software currently does no subtracting, averaging, or thresholding, and the spots are sufficiently salient that their centers are found consistently and accurately. However, the centroiding method in use is iterative (see next section), a crucial property of the algorithm. Perhaps clever image processing could allow a single-pass centroiding algorithm to work just as effectively as the iterative version.

6.3.6 Centroiding
The iterative centroid algorithm simply performs a standard center-of-mass centroiding operation (which weights pixel position by intensity) but does so recursively, shrinking the box from the original size down to the size of a box that would just contain the diffraction-limited spots. Each new, smaller box is formed by reducing both its width and height by one pixel, and by centering it on the centroid found in the previous step (or the box center, in the initial case). No data outside the box is used, so any shrunken box is clipped to the initial box.
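A minimal sketch of this shrinking-box centroid, in Python with NumPy (illustrative; the box is given as (x0, y0, size) in pixel coordinates, and the stopping size would be set to the size of a diffraction-limited spot):

```python
import numpy as np

def iterative_centroid(image, box, min_size=3):
    """Center-of-mass centroiding with a shrinking search box.

    `box` is (x0, y0, size) in pixel coordinates. The box shrinks by one
    pixel per iteration, re-centering on each centroid, and is always
    clipped to the initial box so no outside data is used."""
    x0, y0, size = box
    cx, cy = x0 + size / 2.0, y0 + size / 2.0    # start at the box center
    for s in range(size, min_size - 1, -1):
        # Box of side s centered on the current centroid, clipped to the original box
        xa = int(round(min(max(cx - s / 2.0, x0), x0 + size - s)))
        ya = int(round(min(max(cy - s / 2.0, y0), y0 + size - s)))
        sub = image[ya:ya + s, xa:xa + s]
        total = sub.sum()
        if total == 0:
            break                                 # no light: keep the last estimate
        ys, xs = np.mgrid[ya:ya + s, xa:xa + s]   # pixel coordinates of the sub-box
        cx = (sub * xs).sum() / total             # intensity-weighted mean position
        cy = (sub * ys).sum() / total
    return cx, cy
```

Each iteration reduces the box side by one pixel and re-centers it on the latest centroid, matching the algorithm described above.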
6.4 ABERRATION RECOVERY

6.4.1 Principles
Zernike modes are the standard way of expressing the wave aberration in vision science [1, 5]. The way in which the Zernike coefficients for each mode are recovered algorithmically is described here. The Zernike polynomials, Z, are basis functions for describing the wavefront, W; weighted by an optimal set of coefficients, c, they minimize the difference between the estimated wavefront, Zc, and the unknown actual W:

Zc − W ≅ 0    (6.1)
Samples of the wavefront derivative, ∂W(x, y)/∂x and ∂W(x, y)/∂y, with respect to both x and y, are in the form of measured spot displacements in the wavefront image, scaled by the lenslet focal length. The spot displacement divided by the lenslet focal length yields a slope measurement, designated by s. We can solve for c via:
c = Z†s    (6.2)
In the above notation, the elements, z′, of the matrix Z are the derivatives of the basis functions, Z, and the dagger indicates the pseudo-inverse. Looking at the elements of the matrices c, Z†, and s:

\begin{bmatrix} c_1 \\ c_2 \\ c_3 \\ \vdots \\ c_J \end{bmatrix}
=
\begin{bmatrix}
z'_{11} & z'_{12} & z'_{13} & \cdots & z'_{1(2K)} \\
z'_{21} & z'_{22} & z'_{23} & \cdots & z'_{2(2K)} \\
z'_{31} & z'_{32} & z'_{33} & \cdots & z'_{3(2K)} \\
\vdots  & \vdots  & \vdots  & \ddots & \vdots     \\
z'_{J1} & z'_{J2} & z'_{J3} & \cdots & z'_{J(2K)}
\end{bmatrix}
\begin{bmatrix} s_1 \\ s_2 \\ s_3 \\ \vdots \\ s_{2K} \end{bmatrix}
\qquad (6.3)
The coefficients c result from multiplying the pseudo-inverse of the derivative of the Zernike polynomials, Z†, by the slope vector, s. In the pseudo-inverted matrix shown in Eq. (6.3), there will be as many rows, J, as Zernike polynomials (or coefficients) that we want to recover, and as many columns, 2K, as twice the number of lenslets for which we have data (twice because we have derivatives with respect to both x and y). An index-numbering system for both Zernike polynomials and lenslets, each along a single dimension, needs to be consistent throughout the software.

6.4.2 Implementation
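As an orientation before the data structures are described, the whole recovery step of Eq. (6.2) compresses to a few lines of NumPy. This is an illustrative sketch with invented names, not the lab's code; the derivative matrix is laid out with 2K rows (x-derivatives stacked over y-derivatives) and one column per Zernike mode:

```python
import numpy as np

def recover_coefficients(dZ_full, slopes_full, lenslet_mask, coef_mask):
    """Solve c = pinv(Z') s over the lenslets and modes currently in use.

    dZ_full      : (2K, J) averaged Zernike derivatives (x rows, then y rows)
    slopes_full  : (2K,)   measured slopes in the same stacked order
    lenslet_mask : (K,)    booleans -- lenslets with good spots
    coef_mask    : (J,)    booleans -- modes to recover
    Returns a length-J vector with NaN marking modes not computed."""
    rows = np.concatenate([lenslet_mask, lenslet_mask])  # x- and y-derivative rows
    dZ_sub = dZ_full[np.ix_(rows, coef_mask)]            # keep used rows and columns
    s_sub = slopes_full[rows]
    c_sub = np.linalg.pinv(dZ_sub) @ s_sub               # SVD-based pseudo-inverse
    c = np.full(coef_mask.shape, np.nan)                 # keep original mode numbering
    c[coef_mask] = c_sub
    return c
```

Note that np.linalg.pinv is itself computed via the SVD, consistent with the reference to [6]; the sections below spell out the same computation in the document's pseudo-code.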
Call sf a "full" column vector with a length equal to twice the number of unmasked lenslets in the array. Knowing the derivatives of the Zernike polynomials, one can precompute the average value of the derivative of Z, with respect to both x and y, over the area of all unmasked lenslets and store them in a "full" matrix, Zf, with the number of columns equal to the number of coefficients we want to recover (a good heuristic is no more than half the number of lenslets), and with the number of rows equal to the length of sf.

6.4.2.1 Recovering Zernikes from Partial Data. At run time, when the subset of lenslets with good spots in their search boxes has been determined, Zf can be resized so that the rows corresponding to the lenslets with unused data are removed, and the columns with coefficients that one is not recovering can be removed (or flagged via a masking scheme). This forms a subset of Zf that we can call Zs. After taking the pseudo-inverse of Zs via a singular value decomposition (SVD) algorithm [6], we arrive at Zs†, which, multiplied by ss, results in the Zernike coefficients c.

6.4.2.2 Example Pseudo-Code. The pseudo-code for computing the Zernike coefficients consists of the following stages:
1. Precompute Z′f:

   NL = full number of lenslets
   NC = full number of coefficients or modes
   dZfull = matrix(rows: NL*2, columns: NC)
   for each row in the set [1,NL]
       for each column in the set [1,NC]
           dZfull[row][column] = dZdX(lenslet: row, mode: column)
           dZfull[row+NL][column] = dZdY(lenslet: row, mode: column)
       end for columns
   end for rows

The functions dZdX and dZdY take a lenslet number and a Zernike mode number as parameters and return the average value of the derivative of that Zernike mode's polynomial over the area of the lenslet.

2. Compute coefficients:

   NLS = sum(vector: lensletMask)
   NCSM = sum(vector: coefMask)
   dW = vector(rows: NLS*2)
   dW = ComputeDerivatives(forLenslets: lensletMask)
   dZsub = matrix(rows: NLS*2, columns: NCSM)
   dZsub = FillDZsubset(whichLenslets: lensletMask, whichCoefs: coefMask)
   dZsubInv = matrix(rows: NCSM, columns: NLS*2)
   dZsubInv = ComputeInverse(from: dZsub)
   C = vector(rows: NC)
   usedCoef = 0
   for each coef in the set [1,NC]
       if coefMask[coef] is 1 then
           increment usedCoef by 1
           C[coef] = 0
           for each lensletXY in the set [1,NLS*2]
               increment C[coef] by dZsubInv[usedCoef][lensletXY]*dW[lensletXY]
           end for lenslets
       else
           C[coef] = NotComputedConstant
       end if coefMask
   end for coefs

The vectors lensletMask and coefMask have lengths equal to the maximum number of lenslets and coefficients set up at initialization. The vector lensletMask has 0's in the positions of the lenslets that were not included in the subject's pupil, had poor spot centers that could not be determined, or perhaps were ones the user has chosen not to include (and 1's otherwise). The vector coefMask has 0's in the positions of the coefficients chosen by the user to be excluded from the reconstruction, or for coefficients of higher order than can be recovered given the number of lenslets for which data is being supplied (and 1's otherwise). Taking the sum of either of these vectors gives the number of entries with the value 1, hence the number of elements to be included in the analysis.

The function ComputeDerivatives would find the difference between the previously computed spot centroids and the corresponding reference points (both assumed to be accessible by the function) for lenslets included in the analysis, scaled appropriately, to find dW (corresponding to ss above). The function FillDZsubset would fill the dZsub matrix with rows and columns from the full matrix (dZfull above, assumed to be accessible by the function) corresponding to the included lenslets and Zernike coefficients, respectively, to make dZsub (corresponding to Zs above). The function ComputeInverse would compute the pseudo-inverse of the dZsub matrix to produce dZsubInv (corresponding to Zs† above). The usual method for doing this is via an SVD algorithm available in numerical analysis packages [6].

The final step is the matrix multiplication between dZsubInv and dW to produce the vector of coefficients, C, corresponding to the vector c above. Note that the numbering of the elements in vector C retains the original index numbering and flags coefficients that are not computed with a constant, rather than eliminating them. This is so the numbering convention in place for the Zernike coefficients is maintained and the respective polynomial weights can be identified by their element position in the vector C.

6.4.3 Recording Aberration
An advantage of recovering a compact description of the wavefront in the form of a set of Zernike coefficients is that the full set of coefficients can be saved to memory on each frame with minimal impact on timing and memory; for instance, 66 double-precision (64-bit) coefficients, covering Zernike modes up through 10th order, consume only 528 bytes of memory. One can save all of the measured coefficients to memory within the loop since it can be done cheaply. Contrast this with saving the reconstructed wavefront, which consumes on the order of 1000 times the memory and can take more than a frame time to compute.

6.4.4 Displaying a Running History of RMS
A good way to visualize the progress of the correction is a trace of the root-mean-square (RMS) wavefront error frame by frame. A simple plot, seismograph-style, provides a history of the value of the RMS wavefront error or
perhaps the value of a Zernike coefficient of interest, such as defocus. An alternate visualization of Zernike coefficients, which provides no history but easily allows a wide range of coefficients to be viewed at once, is a set of histograms similar to a frequency equalizer on an audio receiver. The advantage of the trace in real-time applications is that for each frame, only a point needs to be added to the end of the trace, so time spent drawing is minimized and can be done from within the AO loop for immediate feedback. Monitoring RMS wavefront error aids the correction loop as well: a momentary spike in RMS can mean a blink or misalignment, and deforming the mirror can be bypassed temporarily until the RMS comes back in range. If the RMS starts increasing exponentially, this usually indicates a feedback problem; in that case, the operator should terminate the loop manually as soon as possible.

6.4.5 Displaying an Image of the Reconstructed Wavefront
Another form of visual feedback is a reconstructed wavefront image. The reconstructed wavefront image (Fig. 6.2) provides the most direct visualization of the final result of wavefront sensing: the wavefront itself as computed from the Zernike modes recovered by the operations detailed above.

FIGURE 6.2 Image of wavefront reconstruction. Provides feedback on current aberrations via a contour map of gray levels for quick reference. An aberration-free image would be entirely black.

Since there are tens of polynomials and thousands of points of evaluation in the image, such an image can be expensive to compute, but one can precompute a floating-point image for each polynomial and then weight each appropriately by the coefficients at run time. Even so, the image computation cannot be completed quickly enough, in current implementations, to include it in the AO loop. It should be computed in a thread of lower priority than the AO loop thread, or on a separate processor. The colors in the reconstructed wavefront image can be contour-mapped gray levels, as in the figure; another useful color intensity profile is a spectrum ranging from dark blue to bright red. An advantage of the former is that contours are salient features and can be readily computed via a modulus operator, and the map metaphor is familiar and appropriate. An advantage of the latter is that it provides an absolute scale for comparing images to each other and is straightforward to set up and modify quickly via a color lookup table. Another good choice would be a three-dimensional (3D) surface plot, which, given the ubiquity of 3D graphics accelerators, should render quickly. However, 3D plots are self-occluding.
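The precompute-and-weight scheme just described can be sketched briefly. The mode functions below are toy stand-ins for the real Zernike polynomials (see Chapter 3), and all names are illustrative:

```python
import numpy as np

def precompute_basis(shape, modes):
    """Evaluate each basis polynomial once, as one float image per mode.

    `modes` is a list of callables z(x, y) over a [-1, 1] grid (toy
    stand-ins here for the real Zernike polynomials)."""
    ny, nx = shape
    y, x = np.mgrid[-1:1:ny * 1j, -1:1:nx * 1j]   # pupil coordinates
    return np.stack([z(x, y) for z in modes])      # (J, ny, nx), computed once

def render_wavefront(basis, coeffs):
    """Per-frame step: just a weighted sum of the precomputed images."""
    return np.tensordot(np.asarray(coeffs), basis, axes=1)
```

Only render_wavefront runs per frame (in a low-priority thread, as noted above); a gray-level contour map can then be produced by passing the result through a modulus operation and a color lookup table.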
6.5 CORRECTING ABERRATIONS
Deformable mirrors typically consist of an array of actuators that deform a smooth surface by pushing or pulling it in response to an applied voltage. The set of actuator voltages required to null the measured wave aberration can be computed by a modal method [2]. The modal method evaluates the estimated wavefront at the actuator positions and moves the actuators in the opposite direction to null the aberrations. An alternate, direct-slope method [3] computes voltages directly from the measured displacements, bypassing the Zernike mode recovery in the control algorithm (see also Chapter 5).

6.5.1 Recording Influence Functions
The direct-slope method proceeds by first measuring a set of influence functions that specify the full set of spot displacements caused by the movement of each individual mirror actuator as a function of its applied voltage. The software loops through each actuator and applies a minimal and maximal voltage within the linear range of actuator operation. The resulting centroids are recorded as displacements for use by an offline script. The script interpolates the displacements to form a set of influence functions, resulting in a single matrix, A, that relates the actuator voltages to the average wavefront slopes they produce, such that Av = s. The pseudo-inverse of A, A†, relates the wavefront slope, s, to the actuator voltages (see also Chapter 5):

v = A†s    (6.4)
The vector s, as described previously, is a stacked vector of length 2K, equal to twice the number of lenslets. The vector v has length equal to the number of actuators, M, in the mirror. A† consists of 2K columns and M rows. Loading the matrix A† at program launch is common, since it is unlikely to change except after mirror calibration sessions when new influence functions are generated. It would be desirable to eliminate columns of A† when lenslets are dynamically removed from the analysis, but the validity of doing so has not been systematically explored. In practice, new influence functions are generated for each of three concentric subsets of lenslets, and one is chosen for a subject based on pupil dilation.

6.5.2 Applying Actuator Voltages
The voltages, v, are not written directly to the mirror; rather, they are weighted by a scalar gain factor, K, and added to the existing voltages, vo, already on the mirror to get new voltages, vn:

vn = vo + Kv    (6.5)
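In code, Eqs. (6.4) and (6.5) collapse to a two-line update. A hedged sketch follows (the gain and clipping range are hypothetical values, and v_old is the software-side voltage history, since the mirror cannot be read back):

```python
import numpy as np

def update_mirror(A_pinv, slopes, v_old, gain=0.3, v_min=-1.0, v_max=1.0):
    """v = A† s, then v_new = v_old + K v  (Eqs. 6.4 and 6.5).

    `A_pinv` is the precomputed pseudo-inverse of the influence matrix;
    `v_old` is the stored voltage history, not a readback from the mirror."""
    v = A_pinv @ slopes                               # voltages for the measured slopes
    v_new = np.clip(v_old + gain * v, v_min, v_max)   # stay within actuator range
    return v_new
```

The returned vector would then be written to the mirror and stored as the new history, so the software-side state stays in step with the write-only hardware.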
Since mirrors currently in use are write-only devices, the actual state of the mirror is unknown, and so vo is not read off the mirror but comes from a stored voltage history held in memory during a session. In addition, due to hysteresis, putting a mirror into a known state requires flattening it (typically by writing a median voltage value to all actuators), and then applying the full set of voltages stored since it was last flattened. With newer, fully digital devices [7], hysteresis should not be a problem.

6.6 APPLICATION-DEPENDENT CONSIDERATIONS

6.6.1 One-Shot Retinal Imaging
When imaging via one-shot retinal imaging (e.g., using a flash lamp that takes a few seconds to recharge between images), the retinal imaging camera should be triggered as soon as possible after the criterion correction is reached. Often there is a series of TTL pulses that needs to be sent with precise timing, synchronized with the flash. For instance, current retinal imaging software in the Williams lab triggers the imaging camera, closes a safety shutter 50 ms after optimal correction is reached, and triggers the flash lamp 15 ms after the shutter command to ensure it is closed. Obtaining such precise timing typically requires programming external timer/counter hardware coupled with digital output lines.

6.6.2 Synchronizing to Display Stimuli
Psychophysical experiments often consist of a series of trials, with particular aberration profiles placed on the mirror during trial intervals. In such experiments, the AO loop runs in short bursts ending with signals sent to a separate display computer that updates its display appropriately, and gathers and records subject responses. The state of the AO system, display, and the corresponding subject response needs to be recorded with respect to a common time reference.

6.6.3 Selective Correction
At the termination of the AO loop, the residual error between the reference and the estimated wavefront will have been minimized. This can result in a system with minimal aberrations or one with a set of desired aberrations. A user-provided reference containing a set of desired aberrations for experimental purposes has been used, for example, in experiments probing performance under rotated point spread functions [8]. A reference expressed as a set of Zernike coefficients helps in this case, as opposed to one expressed in image coordinates. But image coordinates are used for measuring displacements between the centroid and the reference locations in the loop (see above). Working backwards from Eq. (6.2), one can multiply the reference coefficients, c, by the matrix Z to get an estimate of the corresponding wavefront slope, s, for each lenslet. Going still further, multiplying each element of s by the lenslet focal length and adding the coordinates of the lenslet centers results in a reference target in image coordinates that can be used in the loop's minimization procedure.
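That backward path is short in code. An illustrative sketch follows (the derivative-matrix layout matches the recovery sections: x-slope rows stacked over y-slope rows; displacement = focal length × slope, combined with the lenslet centers to land in image coordinates):

```python
import numpy as np

def reference_targets(dZ, ref_coeffs, centers, focal_length):
    """Convert a reference given as Zernike coefficients into per-lenslet
    spot targets in image coordinates.

    dZ           : (2K, J) averaged derivative matrix (x rows, then y rows)
    ref_coeffs   : (J,)    desired aberration as Zernike coefficients
    centers      : (K, 2)  lenslet centers in image coordinates
    focal_length : lenslet focal length, in units matching the slopes"""
    s = dZ @ ref_coeffs                                    # Eq. (6.2) run in reverse
    k = centers.shape[0]
    dx = s[:k] * focal_length                              # slope -> x displacement
    dy = s[k:] * focal_length                              # slope -> y displacement
    return centers + np.column_stack([dx, dy])             # target spot positions
```

The returned positions can then replace the aberration-free reference in the loop's minimization, steering the correction toward the desired aberration profile rather than toward zero.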
6.7 CONCLUSION
In addition to the mechanics of quickly and accurately measuring and correcting the wave aberration, AO software for vision research needs to keep in mind its many users: programmers, operators, researchers, and subjects.

6.7.1 Making Programmers Happy
The software should consist of a set of loosely coupled modules in order to be flexible and reusable for other programmers to build upon. At the very least, hardware control modules (image acquisition, mirror control, shutter triggers, etc.) that interface with vendor-supplied software development kits should be compiled into loadable modules for use in scripting languages or interpreted environments such as MATLAB (The MathWorks, Inc.). These loadable modules should expose the subset of the device control parameter space that is relevant and assume invariant parameters otherwise.

6.7.2 Making Operators Happy
The user interface needs to be simple enough that an operator can pick it up in a day or two. Spatially, the camera images, mirror state, and aberration
history should be visible at a glance in separate windows on screen. Temporally, the software should start up in alignment mode with live images, the mirror out of the loop, and with simple manual keyboard adjustments active so that the region of interest in the spots image can be homed in on and identified quickly. Then, the user activates the mirror, a "run" key initiates the correction loop, and the loop proceeds until automatic termination or until a manual "stop" key is hit, with minimal latency in the latter case.
6.7.3 Making Researchers Happy
The software needs to be robust and well designed, but it also needs to be flexible, because it will inevitably be asked to do things that could not be anticipated. New methods for wavefront measurement and correction are a subject of current research, and these lie at the heart of the computational engine. New imaging cameras are constantly being swapped in and out; each research-grade camera has its own idiosyncrasies that must be discovered and exploited to pipeline the acquisition optimally. New generations of mirrors, such as microelectromechanical system (MEMS) mirrors, are coming online [7]. Each has its own control interface, often very low-level, and a robust calibration and testing mode needs to exist in the software for handling novel hardware. Invariants in early versions of the software inevitably become variables in later versions, and after about two or three complete rewrites, new layers of abstraction can be put in place intelligently. Once that occurs, turnaround time for new algorithms and new hardware comes down, and new experiments can proceed.
6.7.4 Making Subjects Happy
Subjects are usually in an uncomfortable position, perhaps with a bite bar in their mouth, trying to keep their eyes fixated on a target despite a variety of visual distractions while the system is being aligned and configured. The software should make this process as easy as possible by providing live pictures of the spots and pupil for mechanical alignment, and by allowing processing to proceed on a region of interest within the wavefront image. The region to process should be easy to identify quickly by the operator via the graphical interface; lenslets with bad data should be easily excluded via a mouse click. There should be pupil tracking so that the analysis can proceed even when the pupil moves slightly; generally the session should be stopped for realignment only when the wavefront cannot be computed (in principle) rather than due to constraints in the software itself. The system state, including the state of all the hardware devices and all of the data gathered, should be automatically tracked and saved so that it is easy to resume where one left off with a particular subject.
6.7.5 Flexibility in the Middle
Making all of the above work requires flexibility at all layers of the software, but especially in the middle. Software engineers can write time-critical device control code on the bottom and well-crafted graphical user interfaces on the top, but the whole system needs to be constructed in such a way that algorithms and experiment protocols in the middle can be written by researchers; the middle layer should be in a higher-level scripting language researchers are familiar with or can pick up quickly. The middle layer should be able to control the hardware beneath it in a generic way and should expose its parameter space to the graphical interface above it in an automatic way, so that naïve operators can use the new algorithms immediately. Although it may be possible to cobble together pieces of existing software toward this end, more likely a completely custom system needs to be built to achieve these goals.

Acknowledgments

The quest to create fast, accurate, intuitive AO software for vision research would have been impossible without the kind help of Li Chen, Ian Cox, Nathan Doble, Heidi Hofer, Junzhong Liang, Don Miller, Orin Packer, Jason Porter, Austin Roorda, Alan Russell, John Swan-Stone, Ted Twietmeyer, Bill Vaughn, David R. Williams, and Geunyoung Yoon.
REFERENCES

1. Liang J, Grimm B, Goelz S, Bille JF. Objective Measurement of Wave Aberrations of the Human Eye with the Use of a Hartmann-Shack Wave-front Sensor. J. Opt. Soc. Am. A. 1994; 11: 1949–1957.
2. Liang J, Williams DR, Miller DT. Supernormal Vision and High-Resolution Retinal Imaging through Adaptive Optics. J. Opt. Soc. Am. A. 1997; 14: 2884–2892.
3. Hofer H, Chen L, Yoon GY, et al. Improvement in Retinal Image Quality with Dynamic Correction of the Eye's Aberrations. Opt. Express. 2001; 8: 631–643.
4. Carpenter RHS. Oculomotor Procrastination. In: Fisher DF, Monty RA, Senders JW, eds. Eye Movements: Cognition and Visual Perception. Hillsdale, NJ: Lawrence Erlbaum Assoc., 1981, pp. 237–246.
5. Thibos LN, Applegate RA, Schwiegerling JT, et al. Standards for Reporting the Optical Aberrations of Eyes. In: Lakshminarayanan V, ed. OSA Trends in Optics and Photonics, Vision Science and Its Applications, Vol. 35. Washington, D.C.: Optical Society of America, 2000, pp. 232–244.
6. Press WH, Teukolsky SA, Vetterling WT, Flannery BP. Numerical Recipes in C: The Art of Scientific Computing, 2nd ed. New York: Cambridge University Press, 1992.
7. Doble N, Yoon G, Chen L, et al. The Use of a Microelectromechanical Mirror for Adaptive Optics in the Human Eye. Opt. Lett. 2002; 27: 1537–1539.
8. Artal P, Chen L, Fernández EJ, et al. Neural Compensation for the Eye's Optical Aberrations. J. Vis. 2004; 4: 281–287.
CHAPTER SEVEN
Adaptive Optics System Assembly and Integration
BRIAN J. BAUMAN, Lawrence Livermore National Laboratory, Livermore, California
STEPHEN K. EISENBIES, Sandia National Laboratory, Livermore, California
7.1 INTRODUCTION
Getting an adaptive optics (AO) system to work as intended can be a painstaking process in which errors and misalignments are slowly worked out. Good optomechanical design, alignment procedures, and integration procedures make this relatively straightforward to accomplish; poor ones make the process nearly impossible. The purpose of this chapter is to describe how to get from a design to a working AO system. This chapter will discuss optomechanical design, alignment techniques, and integration procedures that turn AO systems into reality. Alignment refers to adjusting the positions of optics, sources, and detectors so that the AO system is set up optically as designed. Integration refers to the process of getting all of the components to work together as an AO system. Optomechanical design is central to both of these topics and is the sine qua non of reliable AO systems. Optomechanical design and alignment procedures are also covered in excellent books [1–3] and in short courses available privately or at optical engineering conferences. It should be noted that this chapter uses approaches suited for nonproduction quantities. Production quantities may require, and also make available, other techniques not conducive to one-off systems.

Adaptive Optics for Vision Science, Edited by Porter, Queener, Lin, Thorn, and Awwal. Copyright © 2006 John Wiley & Sons, Inc.
7.2 FIRST-ORDER OPTICS OF THE AO SYSTEM
The AO relay refers to the optics of the AO system from objects/sources to images/detectors. The basics of first-order optics for the AO relay are straightforward. Two sets of conjugate planes are important: a set of planes conjugate to the retina of the eye and a set of planes conjugate to the pupil of the eye. The set of surfaces nominally conjugate to the retina includes the reference beacon, the subject’s fixation target, the display for psychophysical experimentation, a flood illumination source for the retina, and the science charge-coupled device (CCD) plane (if any). These are among the points in an optical layout drawing where the light rays appear to come to a focus. (See Fig. 10.1 for an example.) The reason for using the word nominally above is that the retina is not an infinitesimally thin surface but has a finite thickness. Therefore, there is not a single, specific retinal plane. The CCD will be conjugate to some surface within the retina, but that surface is neither well defined nor exactly known a priori. The other set of conjugate planes in AO systems is a set of pupil planes. The subject’s pupil, the deformable mirror (DM), the scanning mirrors (if any), and the lenslet array (in a Shack–Hartmann wavefront sensor) are all nominally conjugate to each other, and each may be referred to as a pupil. When referring to the pupil of a human subject, the term subject’s pupil will be used to avoid confusion with pupil, the more general term from optical engineering. Again, the use of the word “nominally” reflects the ambiguity of the position of the subject’s pupil. In addition, the DM and the lenslet array may not be exactly conjugate to each other because the DM is generally tilted at a nonnormal incidence angle to the incoming beam, whereas the lenslet array generally is normal to its incoming beam. What is the penalty if the subject’s pupil, the DM, and the lenslet array are not conjugate to each other?
As shown in Figure 7.1, the answer depends on the field of view that is used. If the field of view were infinitesimal, then there would be no penalty if the aberrations of the eye and the DM were conjugate to slightly different planes. In fact, astronomical AO systems work well on-axis (i.e., in the direction of the reference beacon) even though the aberrations of the atmosphere are widely distributed and the DM cannot be conjugate simultaneously to all of the atmosphere’s aberrations. Similarly, the AO-corrected imagery of the retina will be perfect on-axis. However, points off-axis will have errors in the correction. In short, the error is an effect of parallax (or shear) between the DM and the aberration plane(s). If the shear in the wavefront is significantly smaller than a subaperture (or actuator spacing), then the parallax is usually negligible.
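The parallax penalty described above can be quantified with simple trigonometry: the beam footprint at a plane displaced from its nominal pupil conjugate is sheared by the displacement times the tangent of the field angle. A minimal sketch (the separation, field angle, and subaperture values below are illustrative assumptions, not from the text):

```python
import math

def pupil_shear_mm(separation_mm: float, field_angle_deg: float) -> float:
    """Lateral shear of the beam footprint at a plane a distance
    `separation_mm` from the pupil it is nominally conjugate to,
    for a field point `field_angle_deg` off-axis."""
    return separation_mm * math.tan(math.radians(field_angle_deg))

# Illustrative values: a wavefront-sensor pupil 20 mm from its
# nominal conjugate, viewed 1 degree off-axis.
shear = pupil_shear_mm(20.0, 1.0)   # ~0.35 mm

# Rule of thumb from the text: parallax is usually negligible if the
# shear is significantly smaller than a subaperture (actuator spacing).
subaperture_mm = 0.4
print(f"shear = {shear:.3f} mm; fraction of subaperture = {shear / subaperture_mm:.2f}")
```

Here the shear is a large fraction of a subaperture, so the off-axis correction error would not be negligible for these (assumed) numbers.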
FIGURE 7.1 Parallax when the DM, wavefront sensor (WS) pupil, and subject’s pupil are not coincident but are separated from the DM by lws and lsubject. The pupils at the WS and the subject are sheared by ∆hws and ∆hsubject. “On-axis” refers to the optical axis established after calibrating the system.
A more subtle effect of not having the subject’s pupil, DM, and lenslet array conjugate to each other is that diffraction effects may present problems. The diffraction effects are negligible as long as the nominally conjugate surfaces are within the near field of each other; conversely, one can expect problems if one surface is so far from its nominally conjugate mate that the wavefront evolves significantly in amplitude or in phase between the two. A rough rule of thumb is based on concepts similar to the Rayleigh range used in laser propagation. Diffraction effects may become an issue when the axial distance between the nominal pupil planes is on the order of ∆z ≈ ℓ0²/λ, where ℓ0 is the semilength of a characteristic feature size, such as the interactuator distance, and λ is the wavelength of interest. For a microelectromechanical system (MEMS) mirror with a 0.400-mm interactuator distance at a wavelength of 0.8 µm,

∆z ≈ (0.5 × 0.400 mm)² / 0.8 µm = 50 mm

7.3 OPTICAL ALIGNMENT
Optical alignment refers to the process of adjusting the positions of optics, sources, and detectors so that the AO system is set up optically as designed. This is not the same as having a working AO system, which is the goal of integration, a broader task. The task of optical design will not be covered here except for a description of the afocal relay, a staple of AO system optical design. The afocal relay (often called a relay telescope) consists of two lenses or curved mirrors (with focal lengths F1 and F2) separated by F1 + F2 . The telescope usually has collimated light in and collimated light out; that is, the
images of the retina on the input and output sides of the telescope are at infinity. The beam size is changed according to the magnification of the telescope, −F2/F1, where a negative magnification means that the output image is inverted with respect to the input. Most often, F1 and F2 are both positive in order to create real (as opposed to virtual) pupils in the input and output spaces of the telescope. The entrance pupil is usually placed at a distance F1 in front of the telescope, which creates an exit pupil at a distance F2 behind the telescope. This is sometimes referred to as a 4F configuration—an obvious name in the special case of a unity magnification telescope where both lenses have a focal length F. Since the input and output beams do not change size in their respective spaces, the exact axial positioning of a DM or WS is not as crucial as it might otherwise be. Positioning the pupils at the front and rear focal planes generally performs well optically and is easy to understand and remember. The alignment and integration of an afocal relay telescope is discussed in Section 7.3.4. For further information regarding optical design, the reader is referred to several references [4–11].

7.3.1 Understanding Penalties for Misalignments
When considering a tolerance, it is important to examine any penalties for misalignment. If the goal of the AO system is to achieve a Strehl ratio of 80%, then one may think that the system needs to be aligned to better than λ/14 root-mean-square (RMS) wavefront error (i.e., the Maréchal criterion corresponding to the diffraction-limited wavefront error required to achieve a Strehl ratio of 80%). But this is somewhat too restrictive, because it ignores the fact that common-path wavefront errors may be cancelled by the DM and non-common-path errors may be negated by calibration of the wavefront sensor. Common-path wavefront errors refer to aberrations that are common to both the science and WS legs of the instrument. Since the WS sees the aberration, the AO system can correct it for the science and WS legs. Non-common-path aberrations refer to aberrations in only one leg of the system. If the aberration is only in the science leg, then the WS will not see it and therefore the AO system will not correct it in the science leg. Conversely, if the aberration is only in the WS leg, then the AO system will “correct” an aberration that does not exist in the science leg, which will degrade the science image. Absorbing small common-path aberrations with the DM is often acceptable as long as doing so does not consume too much DM stroke. In the case of conventional DMs, the stroke is often several micrometers, so consuming 0.5 µm for common-path aberrations may well be acceptable. The price one pays for doing so is that the population that can be corrected with the instrument may be slightly smaller, but depending on the application, this may be an acceptable trade-off in order to make the alignment process easier. However, one should strive to minimize these errors, and integration proceeds more smoothly when the system is set up close to the nominal design.
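The relationship between RMS wavefront error and Strehl ratio quoted above can be checked numerically with the extended Maréchal approximation, S ≈ exp[−(2πσ/λ)²]. A minimal sketch:

```python
import math

def strehl_from_rms(rms_waves: float) -> float:
    """Extended Marechal approximation: Strehl ratio for an RMS
    wavefront error expressed in waves (fractions of a wavelength)."""
    return math.exp(-(2.0 * math.pi * rms_waves) ** 2)

# An RMS error of lambda/14 corresponds to a Strehl ratio of
# roughly 0.8, the usual diffraction-limited criterion.
print(strehl_from_rms(1.0 / 14.0))  # ≈ 0.82
```

For σ = λ/14 the approximation gives about 0.82, consistent with the ~80% criterion in the text.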
Non-common-path aberrations can, in principle, be negated by calibrating the wavefront sensor, that is, finding a set of reference centroids that will produce a perfect image on the science camera when the AO loop is closed. Of course, doing this means that you are committing to measuring reference centroids! Some AO systems do not measure reference centroids but rather simply use reference centroids that are on a regular grid—in other words, the assumption is made that there are no non-common-path aberrations. Thus, any non-common-path error is transferred to the science path. If reference centroids are to be measured, then the capability to do this easily should be designed into the system—it is often significantly harder to implement this later on. A later section will discuss how to implement the capability to calibrate the wavefront sensor. With a mind toward making adjustments easier, the next section will discuss the optomechanics of the AO system. In a developmental design, alignment is often done in an ad hoc manner, developing alignment techniques as the integration itself progresses. With some forethought, though, alignment aids or other ease-of-alignment features can be built into the design. These features can take advantage of the very good tolerances available to machining techniques [as good as 0.001 in. (25 µm) or perhaps even better].

7.3.2 Optomechanics
7.3.2.1 Fundamentals A well-integrated optomechanical design for an AO system will consider the functional requirements of the whole system as well as being adaptable for changes. This begins with a completed, well-understood, and functionally fixed optical design. Flexibility to insert folds or adjust angles as needed for packaging may still exist, but when possible, changes should have a minimal effect on the total wavefront error in the system. Packaging a compact AO system can be a challenge. Three-dimensional computer-aided design (CAD) software is a powerful tool for visualizing how components will go together in a design. In general, a CAD model of an optical system begins with a digital version of the optical design. Modern optical design software will usually export design data in formats that can be translated by most CAD software packages. The output is usually in the form of ray traces and element surfaces. Three-dimensional computer models of system hardware integrated with the beam path in a CAD assembly quickly and clearly show where conflicts between components and light paths will occur. Vendors often provide CAD models of their components that can be downloaded directly from their website in formats that can be imported into a wide range of CAD platforms. Even in the case where models for commercial parts are not available online, calling technical support can usually provide access to the desired files. If all else fails, a set of vendor drawings or a pair of calipers is all even a minimally experienced CAD user needs to produce adequate models of commercial parts.
The sensitivity of the optical system to the alignment of individual elements will guide the selection of appropriate fixtures. Before specifying optomechanical components, it is helpful to have determined the steps required for aligning the individual elements in the system, including what degrees of freedom (DOFs) are needed. This way, components are selected or designed based on exactly what is needed for a particular element, avoiding redundancy and simplifying future realignment. Alignment strategies are developed element by element, working through the optical design. In some cases, groups of elements can be put together in a subassembly and aligned independently, such as in a telescope, simplifying system-level alignment. It is also important to consider at an early stage what alignment aids (e.g., targets, lasers) will be used so that they can be included in the design. The purpose of optomechanical components is to provide stable fixturing and a means of alignment for the various optical elements that make up the system. A thoughtful error budget and well-defined set of alignment procedures will largely determine what is individually required for each element. Functional features of fixtures are split into three main parts: position stability, DOFs, and resolution. Position stability represents the capability of fixtures to maintain multiple elements in alignment relative to each other over time, regardless of environmental influences, such as temperature, vibration, and shock loads (all within limits defined by the error budget). DOFs represent the motions that the fixture is designed to provide in order to facilitate proper alignment. Typical motions provided by most fixtures are translation and rotation, either alone or in orthogonal combinations. Resolution refers to the fineness of the adjustments. Detailed considerations for specifying optomechanical components are discussed in the next section.
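Because independent misalignment contributions combine in quadrature, an error budget of the kind described above is typically checked by root-sum-squaring the per-element allocations against the system requirement. A minimal sketch (all element names and numbers below are invented placeholders, not from the text):

```python
import math

# Hypothetical per-element RMS wavefront-error allocations, in nanometers.
error_budget_nm = {
    "relay lens 1 decenter": 12.0,
    "relay lens 2 decenter": 12.0,
    "DM surface figure": 20.0,
    "lenslet array tilt": 8.0,
    "thermal drift": 10.0,
}

# Independent, uncorrelated errors combine as the root-sum-square (RSS).
total_rms_nm = math.sqrt(sum(v ** 2 for v in error_budget_nm.values()))
print(f"total RMS = {total_rms_nm:.1f} nm")

# Compare against a requirement, e.g. lambda/14 at 0.8 um ≈ 57 nm RMS.
requirement_nm = 800.0 / 14.0
assert total_rms_nm < requirement_nm
```

An allocation table like this makes it easy to see which element tolerances dominate and where adjustment resolution must be finest.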
With an optical design complete and requirements for fixturing understood, assembly of the system in a CAD environment may proceed. A logical approach is to assemble the CAD model as if it were being assembled in the real world, following the alignment procedures step by step, and inserting components and elements as they would be on the bench. Define a beam height and include posts or riser blocks for mounting on a breadboard or custom base plate. Alignment aids should be included in the model as the assembly proceeds to make sure that space remains available for them during the actual assembly. As parts are added, the assembly grows into a completed virtual model of the system. CAD provides a means to manipulate the model on the computer screen to quickly review the placement of parts from all angles. Unexpected conflicts between parts are almost inevitable and in most cases will be readily apparent in the model. Clipping of the beam path can also be observed this way. Accessibility for making adjustments is another important consideration. An adjustment screw buried in an inaccessible location is going to be difficult to adjust when it comes time to use it. As conflicts are found, changes can be made to the model. It is much easier to do this in the virtual world than it is to go back and modify or redesign hardware, and it represents the real advantage that utilizing CAD technology can provide.
It is particularly helpful in tightly packed systems with many components. Once the main elements and fixturing are placed in the assembly and conflicts are corrected, other important features can be added to the model. These could be a breadboard or base plate, enclosures, electronics boxes, carts, and any number of other important systems that will interact with the optical system. Even a realistic scale model of a human head can be included to visualize subject comfort.

7.3.2.2 Selecting Hardware Depending on system requirements, optical mounts may be either specified from a catalog or custom designed. Advantages to using commercial mounts are that they are familiar, relatively inexpensive, and readily available. However, commercial mounts are generally designed to be used in a well-controlled laboratory setting on an isolated breadboard. Outside of this environment, long-term stability could become a problem. Commercial parts can be bulky, and packaging them in a compact design can be a challenge. Designing custom mounts provides a solution to these issues. Virtually unlimited flexibility for packaging and tight tolerances are available with custom parts, and fixtures can be made as robust and application specific as needed. Custom mounts, however, are usually inflexible to unanticipated modifications or upgrades after integration and tend to come with a relatively long lead time at a greater expense. The most important consideration in selecting fixturing for an element or group of elements is the specific DOFs required for proper alignment, determined mainly by the alignment procedures. The range for each DOF in an element mount is based on how far an element might need to be adjusted to make up for its own manufacturing form error, as well as providing a general ease in initial alignment.
Relevant form errors can be measured for each element or found in manufacturer specifications for that item. Accuracy requirements are driven by the appropriate application of analysis and error budgeting. Placement accuracy drives the level of fineness of the adjustment. In general, higher levels of accuracy result in the need for finer adjustments. Adjustment hardware should be as robust and transparent as possible, preventing accidental or inadvertent misalignment caused by a bump or even by the misguided efforts of a well-intentioned technician. Elements should be securely locked in place once aligned, either directly by fasteners or epoxy or by utilizing lockable adjustment screws. When possible, adjustment knobs should be disabled or removed altogether once the alignment is complete. When custom-fabricated fixtures are used, it may be possible to avoid any knobs by taking advantage of tight machine tolerances. Setting the beam height is a good example.

7.3.2.3 Other Design Considerations When considering potential causes of position error during system specification and the selection of hardware,
keep in mind less obvious sources. Fixtures designed to provide angular adjustments often introduce a residual linear translation. This effect is called Abbe error. These errors occur laterally or axially relative to the element. The effect can be reduced by minimizing the linear distance between the plane defined by the surface of the optical element (normal to the optical axis) and the axis of rotation of the adjustment. When practical, Abbe error can be eliminated completely by passing the adjustment’s axis of rotation through the exact point where the optical axis intersects the element. Thermal effects have the potential to affect the performance of precision systems. In most cases it is good practice to remove the influences of heat from within systems. If this is not possible, the minimum response should be to channel the heat away as efficiently and with as little impact as possible. Thermally induced alignment errors occur when a system is used at a temperature different from that during alignment. If an expected heat source is found through analysis to be a likely potential source of system error, steps should be taken to minimize the effect [3]. An example of a heat source common to AO systems would be a projector. Power supplies may also introduce some heat. If the heat source was not present during the initial alignment or calibration and is present when the system is operating, some degradation in performance may occur, especially if the heating is not consistent over time. Unless the design negates the effect, a uniform temperature change in a component will result in spatial changes, such as a change in the relative position between two elements. A nonuniform temperature gradient in a component will result in both spatial and angular motions between multiple elements. Moving loads within the system, a focusing stage for example, are potential sources of error.
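Both Abbe error and thermally induced drift can be estimated with first-order formulas: the residual translation is the lever arm times the tangent of the rotation, and thermal growth is ΔL = αLΔT. A minimal sketch (the lever-arm, rotation, length, and temperature values are illustrative assumptions, not from the text):

```python
import math

def abbe_error_um(offset_mm: float, rotation_arcmin: float) -> float:
    """Linear translation introduced at the optic when the adjustment's
    rotation axis sits `offset_mm` away from the optical surface."""
    theta = math.radians(rotation_arcmin / 60.0)
    return offset_mm * math.tan(theta) * 1000.0  # micrometers

def thermal_growth_um(length_mm: float, delta_t_c: float,
                      cte_per_c: float = 23e-6) -> float:
    """First-order thermal expansion dL = alpha * L * dT.
    Default CTE is roughly that of aluminum (~23e-6 per deg C)."""
    return length_mm * cte_per_c * delta_t_c * 1000.0

# A 25-mm lever arm rotated by 5 arc minutes:
print(f"Abbe error: {abbe_error_um(25.0, 5.0):.1f} um")
# A 300-mm aluminum post warming by 2 deg C:
print(f"thermal growth: {thermal_growth_um(300.0, 2.0):.1f} um")
```

Estimates like these show why shrinking the lever arm (or passing the rotation axis through the optic) and keeping heat sources out of the structure pay off directly in the error budget.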
As a supported load moves within a system, weight on the supports will redistribute and some bending of the support structure will occur. The system should be designed such that this bending has minimal effect on the performance of the system. In some cases, errors from moving loads can be avoided completely by aligning the moving load with the prevailing gravity vector. Mechanical isolation from dynamic external loads is another important feature that should be considered when putting together an adaptive optics system. Isolation should be capable of damping out any shock or vibration and preventing the adverse effects of loads that may be applied to the system as a part of normal use. The most likely source of a dynamic applied load is from immobilizing and aligning subjects in systems designed for vision science. This is a challenge because the subject must be held in place relatively rigidly with respect to the adaptive optics system. Usually in adaptive optics systems, bite bars are used to immobilize and position human subjects. Except in the most rigid systems, such as those built on a large optics table, immobilizing a subject with hardware mounted directly on an AO system structure is not
advisable. A subject will inadvertently readjust how he or she is resting against the support, and the resulting dynamic loads will deform the support structure of the system and potentially couple into the wavefront measurements in unpredictable ways. When practical, the subject should be supported independently from the structure of the AO system. There may still be relative motion of the subject to the AO system, but those motions will not couple into the alignment. The idea of immobilizing a subject into an AO system for vision science is not a straightforward one and continues to be a topic of discussion. Currently, the most common means of placing a living human subject in an AO system is to use a bite bar. This approach does a good job of rigidly maintaining the skull, and ultimately the eye, of the subject at a constant point in space. The problem with such systems is that they are very uncomfortable and will potentially reduce the quality of data as well as the amount of time measurements can be taken from a particular subject. Having the subject lie down is an alternative method that has been suggested, and as AO systems become smaller, this may become more practical. In a clinical setting, it is desirable to use a chin-and-head rest to position a subject. Such an arrangement is easy for a clinician to use and patients are familiar with it. Challenges exist and work is underway to improve the ability of AO systems to move away from bite bars. There is still a great deal of progress to be made on the topic of interfacing a subject with an AO system. The performance of AO systems may also be impacted by stray light entering the wavefront sensor or other cameras. Enclosures should be incorporated to block out as much stray light as possible. While not initially obvious, the mass of enclosures on a system has the potential to produce significant bending loads on the overall structure of a system, affecting alignment and thus wavefront performance. 
Except in the most refined systems, access to the inside of the system is often required to make adjustments. If taking covers on and off changes the load on the structure, recalibration will likely be necessary each time. Covers should be isolated mechanically from the system as a whole or, at a minimum, be designed to have a repeatable impact as they are removed and replaced.

7.3.3 Common Alignment Practices
This section will describe some of the tools and methods commonly used in optical alignment procedures, many of which can be implemented via the optomechanical design. This is certainly not an exhaustive list, and the reader is encouraged to consult the various references and short courses mentioned at the beginning of this chapter. Alignment techniques can be divided into several functions: laying out the components, establishing/checking the optical axes, placing optics onto the optical axes, evaluating collimation or wavefront quality, and other general tools and techniques.
7.3.3.1 Layout One approach to layout is to design a custom breadboard. The custom breadboard may be relatively expensive and time consuming to design and fabricate, but it provides excellent guidance in the placement of optics. A drawback is that the custom breadboard may be inconvenient to modify, although this can be somewhat alleviated by designing the breadboard with a plethora of standard-sized (e.g., ANSI standard ¼-20 or ISO standard M6) mounting holes. The other approach to layout is to use a commercial optical table or breadboard, which is flexible and convenient. The problem is that it can be difficult to set up the optical system because there are no guides on the table to aid in the placement of the optics. If the mechanical design has been done using a CAD tool, then it can be very helpful to print a full-size drawing of the layout (including bases of the optomechanics), tape it to the optical table, then place the components on the table over their footprints on the drawing. This will quickly place the components very close to the nominal design and enable one to tell if the integration is starting to deviate from the nominal design. The paper can be removed as necessary to access the table’s threaded holes and eventually can be removed entirely.

7.3.3.2 Establishing the Optical Axis In order to establish optical axes on the instrument, one needs a means for a line of sight (LOS) and targets to define the intended optical axis. As much as is practical, it is best to design so that the LOS mechanism and the targets are either permanently positioned or so that anything that is temporarily positioned is easily and repeatably removed and replaced. The repeatability is very important and should be measured before integration begins in earnest so that one knows that the target repeatability is adequate.

Line of Sight Two common vehicles for the LOS are an alignment laser and an alignment telescope.
An alignment laser is intended to follow the optical axis of the system. By aligning the laser path to the targets, an optical axis is established; optics can then be aligned to keep the beam on the targets. An alignment telescope, such as that manufactured by Taylor Hobson Ltd., is a precision optical sighting instrument that can detect displacements as small as 25 µm and tilts as small as a few arc-seconds. An adjustable focus, an internal reticle/projection source boresighted with the telescope mechanical axis, and a precision, customized five-axis mount make the alignment telescope a very flexible and powerful tool.

Point Targets An optical axis can be defined by two points or by one point and an angle. Points along the optical axis can be defined by targets of various kinds:
• Machined Targets These targets may attach to the baseplate or to the mounting cell of an optic. The targets can be installed permanently or temporarily via kinematic placement techniques. Rough, ad hoc targets
can also be made by printing targets on paper (use a CAD program and print full-scale targets for best results).
• Iris This is useful as a permanently positioned target—it can be opened for normal use and closed for alignment. If used for visual alignment with an alignment laser, the iris can be closed to a size slightly smaller than the size of the alignment laser. When the beam is aligned to the iris, there will be a uniformly wide ring of light from the alignment laser visible on the iris, which will be concentric with the center of the aperture.
• CCD A CCD on a stable platform can be used as a target, where the fiducial may be a particular pixel on the CCD.
• Wire Crosshair This is a crosshair using, for example, piano wire bent around pins that are placed via machining processes. The position of an alignment laser versus the wire crosshair can be evaluated by looking downstream at the shadow of the crosshair. The alignment at subsequent crosshairs can be evaluated either with shadows from the nearby crosshair or more accurately by centering the diffraction pattern from the first crosshair onto the second crosshair. This is a very sensitive, vernier-like technique.
• Optical Fiber and Supporting Optomechanics An optical fiber is a very versatile alignment tool. As a point target, the fiber is most useful with an alignment laser. The target is the position that maximizes power coupled into an optical fiber. Easily acquired rough targets or precision targets can be created according to the fiber core size used. By using a connectorized fiber (e.g., an ANSI-standard FC fiber) and a sturdy bulkhead connector, this target’s position can be repeatable to the micrometer level. The fiber can be positioned on either side of the bulkhead to create a target in either direction of the optical train. In addition, by connecting a light source to the fiber, one can create an alignment source rather than a point target.
Further, the through-hole in the bulkhead connector itself can be used as a rough target by aligning the line of sight through the hole. The fiber is also flexible with respect to wavelength. Lastly, a single-mode fiber can be used to create an ideal point source for wavefront evaluation purposes.
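Locating the target position that maximizes the power coupled into the fiber is, in practice, a small peak-search problem. A minimal sketch of a coordinate-wise hill climb, where `read_power(x, y)` is a hypothetical stand-in for moving a positioning stage and reading a power meter (all interface names and numbers are assumptions, not from the text):

```python
import math
from typing import Callable

def maximize_coupling(read_power: Callable[[float, float], float],
                      x0: float, y0: float,
                      step: float = 0.010, min_step: float = 0.0005):
    """Coordinate-wise hill climb: nudge the fiber in x and y, keep moves
    that increase coupled power, and halve the step once no neighbor
    improves, until the step is below `min_step`. Units are arbitrary
    (e.g., mm); `read_power` abstracts stage motion plus a meter read."""
    x, y, best = x0, y0, read_power(x0, y0)
    while step >= min_step:
        improved = False
        for dx, dy in ((step, 0), (-step, 0), (0, step), (0, -step)):
            p = read_power(x + dx, y + dy)
            if p > best:
                x, y, best = x + dx, y + dy, p
                improved = True
        if not improved:
            step /= 2.0  # refine once no neighbor improves
    return x, y, best

# Toy stand-in for a Gaussian coupling profile peaked at (0.02, -0.01):
def fake_power(x, y):
    return math.exp(-((x - 0.02) ** 2 + (y + 0.01) ** 2) / 1e-4)

x, y, p = maximize_coupling(fake_power, 0.0, 0.0)
```

In a real setup, `read_power` would command the fiber-positioning stage and query the detector; the toy Gaussian here only demonstrates the search logic.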
Angle Targets An optical axis can also be defined with a point and an angle. The angle can be defined in a few ways:
• CCD on a Rail A CCD can be used with an alignment laser to determine the slope of the beam path. When the image of the alignment laser becomes stationary with axial CCD motion, the axial translation axis is parallel to the alignment laser.
• Mechanical Surface Often, there will be a mechanical surface that is intended to be perpendicular to the incoming beam, such as the front
flange of a lens housing or subsystem. To gauge this, place a mirror against the mechanical surface (this can be the outside rim of the mechanical housing) and look for the reflected beam, which should be reflected back on itself. A target can be used that has a hole for the incident beam to pass through. The return beam should be centered on the hole in the target. This technique relies on a flange that is perpendicular to the axis of the housing, a mirror with adequate parallelism of the front and back surfaces, and clean mating surfaces for repeatability. These requirements are usually not onerous for these applications, but testing the repeatability is essential.
• Point Target at an Image Plane This is equivalent to an angle target in a collimated space.

Rough Targets Sometimes, rough targets serve as “sanity checks” during the alignment process. If an optical path is designed to follow a line of holes on the optical table, then an L-square can be used as a rough guide: An L-square with its vertical edge positioned over the center of the holes roughly defines the position of the optical axis in the horizontal axis; the height of the beam on the square roughly defines the position of the optical axis in the vertical axis. Custom, machined targets that attach to the table can also be used.

7.3.3.3 Sources, Detectors, and General Tools
• Infrared Card or Viewer These tools are very useful for aligning infrared beams, such as a wavefront beacon for the eye.
• Artificial Eye (Source) This is a diffuse retroreflecting target placed at the position of the eye in the optical system. One version uses a lens, with a spinning target at the focal point of the lens. The spinning target simulates the movements of the eye [12] that tend to eliminate speckled return from the retina. The light scattering off the target is diffuse and fills the aperture of the system, much as the diffusely scattered light from a retina does.
This allows for testing of the entire system without involving the difficulties of a human subject. In using an artificial eye, one needs to consider that the light levels returned from the artificial eye are typically higher than those returned from a human eye. As soon as practical, one should put a human eye into the system to verify that light levels are adequate. • Artificial Eye (Detector) This setup is similar to the artificial eye (source) mentioned above, except that a CCD is placed at the focal plane of the lens in place of the spinning disk. • Human Eye Sometimes there is no substitute for a human eye, especially when checking for adequate photon flux returning from the eye. The human eye is also very good for rough boresighting and acquisition since its field of view is large and its pupil can be moved around to find the entrance pupil of the system. The key is knowing when to use an
undilated eye, a dilated eye, or an eye with paralyzed accommodation. An undilated subject is convenient and allows the subject to continue normal activities following the procedure—important if the subject is part of the integration team! A subject can be dilated in about 20 to 30 minutes and may be able to continue working in darkened conditions following the procedure. A subject’s accommodation can be paralyzed, but it can take many hours for the subject to recover, and so could end the subject’s workday. Therefore, paralyzing accommodation should be used only when necessary for progress. For boresighting, an undilated eye is often adequate. For measuring or setting illumination levels, one may use a dilated eye. However, if one is trying to close the control loop and image the retina or measure the aberrations of the eye, then the subject’s accommodation will “fight” the AO loop and attempt to accommodate any changes applied by the AO system. This is a case where an eye with paralyzed accommodation may be required. In general, one cannot rely on a subject providing a “relaxed,” unaccommodating eye. Even for an emmetropic subject viewing an object at infinity, the eye may undergo small “instrument accommodation” [perhaps 0.25 diopter (D)] as a natural response to the proximity of the instrument [13]. In cases such as these, using a presbyopic eye can be expeditious, but one should be aware that presbyopic eyes could still have small amounts of accommodation. • Flat and Interchangeable Mirrors At some point during integration or use, one will want to replace the DM with a flat mirror. It is inevitable that a DM will either be unavailable, not working, need to be exchanged, or not well understood, and one will want to be able to progress without it.
The optomechanics should be designed using kinematic principles with a permanent kinematic base and interchangeable tops (e.g., one for the DM, one for a flat mirror, one for a spare DM) that are coaligned in position/angle offline. This way, one can integrate and troubleshoot more easily without worrying that previous alignment work will be lost. 7.3.3.4 Aligning Optics There are a few different approaches that one can take to align optics, but all approaches amount to either mechanical alignment or optical alignment. At first glance, it may seem that an optical alignment would be preferred to obtain maximum performance, but it is sometimes more practical to choose a mechanical approach rather than to attempt an optical one. Alignment of Optics onto Optical Axis Deviation of Line of Sight This is useful for centering a lens onto a predetermined optical axis and relies on the fact that if the LOS passes through the center of the lens (or more accurately through the nodal point of the lens), then the LOS will not be deviated. The procedure would be as follows:
• Start with a LOS established using an alignment laser or alignment telescope, without the lens in place. • Note where the LOS intersects a distant target. Put the lens to be centered into the path and adjust the lens transversely until the beam hits the same mark on the target. Mechanical Techniques A mounting cell for an optic can be placed on an optical axis by placing a machined target in the cell (this needs to be a repeatable operation), then adjusting the cell position to center the LOS onto the target. An optic can be centered in its cell by designing the cell to have a narrow clearance around the optic (perhaps 0.5 mm) and inserting equal-width plastic shim stock between the optic and the cell in three places around the perimeter of the optic. The optic can then be glued in place. While these mechanical techniques are not always necessary, they provide a useful frame of reference and tend to keep the system near its nominal design. Focusing We start here with a word of caution. One often attempts to focus, collimate, or assess light that has reflected off a DM. The DM is rarely as flat or as well characterized as one believes, so assessing wavefronts is often best done in such a way as to exclude the DM—often in the collimated space just before the light strikes the DM. In focusing, the usual technique is to move the CCD (or other evaluation screen) to find the sharpest image. This is straightforward, so the concentration here will be on techniques to use when focusing is more difficult. The concept of depth of focus is familiar in photography—it is the axial distance that the image plane can move without blurring the image unacceptably. Often, the depth of focus, ∆z, is taken as the axial distance that an image moves after adding a quarter wavelength of defocus to the wavefront and is given as

∆z = 2λ(f/#)²     (7.1)

where λ is the wavelength of light and f/# refers to the f-number, or the focal ratio of the beam in image space. When the f/# is large (i.e., when the beam is “slow”), finding the best focus can be difficult because of the long depth of focus. Of course, the long depth of focus also means that the focusing process is forgiving, but there are special techniques that apply here: • Midpoint of Two Out-of-Focus Planes Looking on one side of focus, find where the image first appears blurred. Find the position on the opposite side of focus where the image looks equally blurred. The focus is at the midpoint of these two positions. • Dynamic Focusing When the depth of focus is long, a good technique is to focus dynamically because it is very hard to see changes in the image when adjusting the focus slowly. Imaging onto a card or CCD, quickly
move the card back and forth on either side of focus (perhaps one to two seconds for each pass) to determine the best focal plane. Repeating the operation a few times will give a sense of the precision of the operation. • Knife Edge This technique can be used in a few different ways: • To focus a CCD, pass a knife edge transversely through a pupil plane and evaluate the motion of the image on the CCD. Translate the CCD axially until the image does not move. • To locate a knife edge at an image plane, position a CCD downstream of the image plane (a pupil plane is best, but not necessary) and evaluate in which direction the pupil is extinguished as the knife edge is translated transversely. As the knife edge is brought from one side of focus to the other side, the direction in which the pupil is extinguished will reverse. When the knife edge is at the image plane, the entire pupil will extinguish simultaneously. These tests can also be used for locating pupils by interchanging the words image and pupil in the descriptions above. One special technique for locating the WS lenslet array at the pupil is to design the WS so that the lenslet array is easily removable. After imaging the pupil onto the bare CCD of the WS, one can replace the lenslet array and slide the WS assembly downstream by the lenslet’s focal length (plus any glass thickness considerations). The lenslet array will then be at the pupil plane. This kind of mechanical interchange technique is useful, as described earlier with respect to interchangeable DMs and flat mirrors. Mechanical methods of focusing are described next. As mentioned earlier, sometimes mechanical methods are preferable. For example, consider a lenslet array with a pitch (d) of 500 µm and a focal length (F) of 10 mm positioned in front of a CCD. The lenslet array creates Hartmann spots at the CCD.
Ideally, one could adjust the focus of the CCD in order to create the smallest Hartmann spots, but this turns out to be difficult since the depth of focus (∆z) is so long:

∆z = 2λ(f/#)² = 2λ(F/d)² = 2(0.8 µm)(20)² = 640 µm
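The depth-of-focus arithmetic of Eq. (7.1) can be checked numerically. This is a small sketch, not code from the text; the function and variable names are mine, and the values follow the worked lenslet example (0.8-µm wavelength, 10-mm focal length, 500-µm pitch).

```python
# Sketch of Eq. (7.1): depth of focus = 2 * lambda * (f/#)^2.
# For a lenslet, f/# = F/d (focal length over pitch).

def depth_of_focus(wavelength_m, f_number):
    """Axial range corresponding to a quarter wave of defocus."""
    return 2.0 * wavelength_m * f_number ** 2

wavelength = 0.8e-6            # m
lenslet_focal_length = 10e-3   # m (F)
lenslet_pitch = 500e-6         # m (d)

f_number = lenslet_focal_length / lenslet_pitch  # F/d = 20
dz = depth_of_focus(wavelength, f_number)        # 2 * 0.8 um * 400 = 640 um
print(f"f/# = {f_number:.0f}, depth of focus = {dz * 1e6:.0f} um")
```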
Rather, it is easier to set the distance between the lenslet array and the CCD mechanically (which can be done to within a few tens of micrometers). Even if the focal length tolerance of the lenslets is 1%, this is only 100 µm and is much smaller than the depth of focus. This process is also better because it avoids the possibility that the WS will be far from its nominal design, which might occur by pursuing a weak optical optimization function. Another place where it can be better to focus mechanically is in locating the pupils of the system. Ideally, one could select a pupil plane in the system
(e.g., the plane of the subject’s pupil), image that plane onto a CCD, and then adjust the axial position of the other pupils (e.g., DM, lenslet array) until they appear in focus. In practice, this can be difficult, again because of potentially long depths of focus, and so a mechanical alignment here can be a good alternative. Wavefront Measurement Tools A collection of pupil-oriented techniques is given here. Focal plane techniques, such as phase diversity, can also be used to measure the wavefront [14–16]: • Shear Plate Often, the beam will be nominally collimated at some point in the AO system. Usually, collimation is not strictly required, but it serves as a good practice and as a consistent standard to set to. A convenient tool for checking the collimation of a beam, as well as some lower-order aberrations, is a shear plate [17]. A shear plate is a lateral shearing interferometer that yields the slope of the wavefront in one axis. Rotating the shear plate 90° about the beam axis measures the wavefront slope in the orthogonal axis. Briefly, a collimated beam will show straight fringes parallel to a reference line on the screen. When a defocused (i.e., noncollimated) beam is incident on the shear plate, the screen shows tilted fringes that are identical in the two orientations. Astigmatism is manifested on the shear plate as differently tilted fringes in the two orientations since astigmatism amounts to different amounts of defocus in the two axes. • Interferometer This is a very accurate, albeit bulky and expensive, way to measure the wavefront through the optics. • WS Once calibrated, the WS itself can be used in alignment as a wavefront measurement device. 7.3.3.5 Offline Alignment of Subsystems One may choose to align an optical subsystem offline rather than in situ. A more precise, controlled alignment may be possible then. A good candidate for offline alignment is an afocal relay telescope, which can be aligned in front of an interferometer.

7.3.4 Sample Procedure for Offline Alignment
The following is a sample procedure for the offline alignment of an afocal relay telescope using mechanical and optical techniques. This telescope is designed to relay the 7-mm pupil from the subject’s eye to a 3-mm pupil at which a MEMS mirror is placed. The telescope consists of two lenses (Fig. 7.2), each placed in a lens cell. One lens cell is fixed and machined so that it lies on the intended optical axis. The other lens has transverse (x, y) and axial (z) adjustments. There are no tilt adjustments for the lenses because the machine tolerances for tilts are adequate (as evaluated by tolerancing of the optical design) and having tilt adjustments invites mischief. The lens cells are
FIGURE 7.2 Afocal relay telescope for sample alignment procedure. Light is collimated on the input and output ends of the telescope and rays from two field angles are shown. A 7-mm-diameter pupil is at left and is relayed to a 3-mm-diameter pupil at right. The lenses have focal lengths of 233 mm (F1) and 100 mm (F2), and each pupil is a focal length from the neighboring lens. The layout is stretched in the vertical axis.
designed with a small clearance (~0.5 mm) between the outer diameter of the lens and the inner diameter of the lens cell. This clearance allows for diameter tolerances and avoids problems in assembling the lens into the cell. If the space is too small and the lens is inserted into the cell at a slight angle, it is easy for the lens to be wedged into the cell at an angle. The interferometer establishes the angle of the optical axis and the relay telescope will be aligned parallel to this axis. For this procedure, the transverse position of the optical axis is not important since the telescope design has very good performance over the clear aperture of the lenses. When the relay telescope is integrated into the AO system, the center of the fixed lens will be positioned onto the optical axis. The following steps and Figure 7.3 describe how this offline alignment procedure could be done in practice. Bulleted comments reflect what is accomplished by completing each step. 1. Place a reflecting reference flat in front of the interferometer, far enough away so that the relay telescope to be aligned will fit between the mirror and the interferometer [Fig. 7.3(a)]. Tilt the reference flat until the fringes are nulled out. • This step transfers the optical axis from the interferometer to the reflecting flat. 2. Using a mirror against the fixed lens cell, adjust the tip and tilt of the relay telescope until the beam is reflected back on itself into the interferometer [Fig. 7.3(b)]. The telescope is properly oriented when the fringes are nulled out on the interferometer. In this design, the performance is very good for up to a 5° off-axis angle, so the tip/tilt alignment of the relay telescope is not crucial. The telescope should be nominally centered in the interferometer aperture, but this is also not crucial. • This step is an example of a mechanical alignment using a flange of the telescope.
The optical axis is now transferred to this flange so that the telescope can be placed in the same orientation in the AO system.
FIGURE 7.3 Offline alignment procedure for an afocal relay telescope. (a) The interferometer establishes the orientation of the optical axis. A reference flat is tilted until the interferometer fringes are nulled. (b) The telescope body is placed in the setup with a mirror against its front flange. The telescope body is tilted so that the interferometer fringes are nulled. (c) Placing the mirror against the back flange checks that the flanges of the telescope body are parallel. (d) Lens is placed in the fixed lens cell with equal-thickness shims so that the lens is mechanically centered in the cell. Three dots of glue hold the lens. After gluing, the shims can be removed. The same procedure is repeated for the movable lens cell. (e) The movable lens cell is translated axially (i.e., focused) so that the beam is collimated, as evidenced by a lack of circular fringes on the interferometer. (f) The movable lens cell is translated transversely to eliminate tilt fringes in the interferometer. This should also reduce any residual aberrations in the telescope. Steps (e) and (f) may require iteration. (g) The completed telescope assembly is integrated into the system by translating and tilting it onto the already established LOS.

• Check that the second lens cell of the telescope is also perpendicular to the optical axis by placing a mirror against that cell and looking for fringes [Fig. 7.3(c)]. There will likely be some tilt fringes, but as long as they are visible, that is good enough. For example, if a 25-mm-diameter mirror is tilted by an acceptable amount of 1 mrad (~0.06°) relative to the fixed lens cell, then the optical path difference (OPD) from the tilt fringes will be

OPD = 2(1 mrad)(25 mm) = 50 µm = 79 fringes (for λ = 632.8 nm)

In other words, if the fringes are “countable,” then the tilt must be smaller than the 1-mrad tolerance. • This stage illustrates a mechanical alignment using optical design and tolerance information. 3. Center the fixed and movable lenses into their cells using shims [Fig. 7.3(d)]. • This represents a mechanical alignment. 4. Adjust the focus of the second lens to nominally collimate the beam by eliminating as many bull’s-eye fringes as possible [Fig. 7.3(e)]. • Collimation is an example of an optical alignment. By setting the telescope to a collimated light-in/collimated light-out configuration, the collimation can be transferred from the input end of the telescope to the output end. 5. Adjust the transverse position of the second lens cell so that the beam that passes through the relay telescope bounces off of the reference flat and reflects back on itself through the telescope [Fig. 7.3(f)]. • This stage illustrates an optical alignment that does not alter the LOS. The optical axis from the input end of the telescope is transferred to the output end of the telescope. 6. Repeat steps 4 and 5 until the interferometric fringes do not have power or tilt. 7. Measure the wave aberration of the relay telescope in this double-pass configuration and compare the measurement to the desired requirements.
• This final stage is an example of optically evaluating the wavefront with an interferometer. Finally, the telescope may be integrated into the AO system with an existing LOS by using point and angle targets [Fig. 7.3(g)]: 1. Start with a previously established LOS in the AO system passing through point targets on either side of the telescope to be positioned. 2. Place a point target in the front lens cell of the telescope and translate the telescope transversely until the target is on the LOS of the AO system. 3. Adjust the angle of the telescope assembly until the LOS passes through the point target following the telescope.

7.4 AO SYSTEM INTEGRATION

7.4.1 Overview
As mentioned earlier, it can be difficult to simply throw together an AO system and have it work. Rather, building up subsystems and integrating the system in a particular order builds confidence in the subsystems and eliminates nagging questions that can haunt subsequent steps of the integration. At the same time, though, it is unlikely that integration will be completed without some iteration as unforeseen problems arise, particularly for a new instrument. It is recommended to proceed in the integration in a systematic way but to be prepared for inevitable iteration. The following is an overview of a systematic method of integrating an AO system. The steps will be elucidated in some detail and sample procedures will be given. For a new design, it is often helpful to quickly run through the entire alignment procedure to verify that there are no mechanical problems, that alignment aids and DOFs exist with sufficient range, resolution, and ease of use, and that the sources and detectors will operate as planned. Once any major initial problems have been resolved, one may proceed with a more careful integration. AO System Integration Procedure 1. Measure the wavefront error of optical components. 2. Qualify the DM. 3. Qualify the wavefront sensor. a. Qualify the wavefront sensor CCD. b. Assemble the WS. c. Prove that centroid measurements are repeatable.
d. Prove that centroid measurements do not depend on centroid locations with respect to CCD pixels and measure the plate scale.
e. Prove that known changes in the wavefront produce the correct changes in centroids.
4. Check wavefront reconstruction.
5. Boresight fields of view (FOVs).
6. Perform DM-to-WS registration.
7. Measure the slope influence matrices and generate control matrices.
8. Close the loop and check the system gain.
9. Calibrate the reference centroids.

7.4.2 Measure the Wavefront Error of Optical Components
It is prudent to begin the AO integration process by measuring the wavefront error of the optical components to be used. Doing so eliminates doubts about the surface figure of the optics, which is very helpful when trying to track down the source of an aberration later in the integration. In addition, if spares are purchased, one can select the better quality components for use. Labeling the components with serial numbers on the outside edge of the component and on its mount avoids later confusion about the optic’s identity. Ideally, the measurement is performed with an interferometer and with the optic mounted in its cell. If it is not possible to measure the mounted optic, then viewing the optic between crossed polarizers can provide a qualitative evaluation of stress in the mounting.

7.4.3 Qualify the DM
The DM should be qualified before integration so that one knows that any future performance problems are not due to limitations of the DM. Two important characteristics, both of which are accurately measured with a phase-shifting interferometer, are the stroke of the various actuators and the uncorrectable irregularity of the surface itself (e.g., from “print-through” on a MEMS mirror). Inadequate stroke will cause the system to perform poorly and may even preclude convergence of the control loop. Uncorrectable surface irregularities will limit the performance of the science instrument. Both problems are difficult to sort out later in the integration but are more easily handled before the integration begins in earnest. The following is a sample procedure for qualifying a DM: 1. With the DM in front of a phase-shifting interferometer and in a nominally flat state [i.e., unpowered or with uniform voltage applied to the actuators on a conventional DM or with a constant signal applied to a spatial light modulator (SLM)], measure the wavefront reflected from
the DM. Assess the wavefront for correctable and noncorrectable errors; a power spectral density analysis of the surface is very helpful here. Correctable errors are generally those with spatial frequencies below the Nyquist limit, that is, those frequencies with periods greater than two interactuator distances. 2. Generally, the DM should have enough stroke to flatten the mirror, that is, to negate the correctable errors. For conventional DMs or MEMS mirrors, measure the range of stroke for each actuator by applying its maximum and minimum voltage with all other actuators set at midrange. The data will demonstrate the stroke as well as the influence function for each actuator. Liquid crystal SLMs can generate phase discontinuities or steep phase slopes that can create problems for the interferometer in interpreting the phase. For this reason, it is best to use a truncated cone phase profile for SLMs so that any change greater than λ/2 occurs over at least a few pixels on the interferometer (Fig. 7.4). 3. Compare the achievable stroke against the reflected wavefront from the nominally flat mirror to assess whether there is enough stroke to flatten the mirror. 4. If a “flat file” of signals that flattens the mirror does not exist, then one can flatten the mirror manually at this stage by adjusting the signal on each actuator. This is not necessary and can be laborious for a DM with many actuators, but it does build confidence that one has full control of
the mirror. Eventually, one wants to obtain a flat file for the DM so that one can put the DM in a known, benign state for integration or calibration operations.

FIGURE 7.4 Phase function (versus position on an SLM) for testing SLM stroke using an interferometer. The vertical axis is the wavefront phase to be placed on the SLM. The key point is that the phase should change only over the course of many pixels on the interferometer; otherwise the phase-unwrapping algorithm of the interferometer may fail.

7.4.4 Qualify the Wavefront Sensor
Although the DM is the glamorous component of the AO system, the heart of the system is the WS. Simply put, the AO system cannot correct what it does not see. The WS qualification process involves verifying the CCD performance, aligning the WS, and checking the robustness of the centroid measurement process. In the material below, it is assumed that the wavefront is measured using a Shack–Hartmann wavefront sensor (SHWS), although analogous procedures can be developed for other wavefront sensors. The image created by each lenslet will be referred to as a SH spot or just spot. Ideally, these spots are actually the diffraction patterns corresponding to the lenslet shape (e.g., a bi-sinc² function from a square lenslet). The term plate scale will be used to refer to the change in wavefront slope required to move the SH spot by one pixel on the wavefront sensor CCD and is often measured in milliradians per pixel. Since the size of the pupil at the WS and the size of the subject’s pupil may not be the same, it is important to specify whether the slope is taken in “eye space” or in “WS space.” 7.4.4.1 Qualify the Wavefront Sensor CCD Qualifying the WS CCD involves evaluating the CCD’s detection, gain, and noise properties so that one knows that any later WS problems are not due to the CCD. The following procedure establishes these CCD characteristics. In all cases, measuring over a large number of frames (e.g., 100) will produce results that are more meaningful. 1. Record a “dark” image by blocking the CCD with an opaque surface to characterize the noise floor of the CCD. Read noise and dark noise can be separated by taking very short and long exposures. 2. Take “flat” images by uniformly illuminating the CCD. One can accomplish this by covering the camera with a white sheet of paper and illuminating the paper from behind with a uniform source. 3. Use dark and flat images to create a gain curve for each pixel and evaluate whether it is acceptable to assume that the gain is identical for each pixel. Doing so simplifies the code that determines centroid locations as well as the calibration procedures used in operations. For example, if assuming identical gains for all pixels would result in centroid measurement errors that are 1% of the SH spot size (i.e., λ/100 error in the tilt in a subaperture), then the identical gain assumption is probably acceptable. Generally, the gains are assumed identical and many people omit the measurement and analysis in these last two steps.
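The dark/flat analysis above can be sketched as follows. This is a hedged illustration using synthetic stand-in frames; the array shapes, noise levels, and any acceptance threshold are my assumptions for illustration, not values from the text.

```python
import numpy as np

# Sketch: average many dark and flat frames, estimate a per-pixel gain
# relative to the mean response, and quantify the gain nonuniformity.
rng = np.random.default_rng(0)
n_frames, shape = 100, (64, 64)

darks = rng.normal(100.0, 2.0, (n_frames, *shape))    # stand-in dark frames
flats = rng.normal(3000.0, 10.0, (n_frames, *shape))  # stand-in flat frames

dark_mean = darks.mean(axis=0)
flat_mean = flats.mean(axis=0)

signal = flat_mean - dark_mean       # dark-subtracted flat response
gain = signal / signal.mean()        # per-pixel gain, normalized to unity mean
nonuniformity = gain.std()           # fractional RMS gain spread

# If the spread is small (e.g., well below ~1%), treating all pixels as
# having identical gain is probably acceptable for centroiding.
print(f"RMS gain nonuniformity: {nonuniformity:.4f}")
```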
7.4.4.2 Assemble the WS The assembly procedure of the WS is developed out of alignment techniques such as those described earlier in this chapter as well as from the alignment requirements between the lenslet array and CCD. Proper alignment between the lenslet array and CCD may require transverse position, axial position, and rotation adjustments of the lenslet array with respect to the CCD or vice versa. These adjustments are particularly important if the WS uses quad-cell centroiding techniques, which do not perform well if the Hartmann spot is not centered on the quad cell. Techniques where the Hartmann spot covers many pixels are often less exacting in alignment. In addition, requirements for the alignment/registration between the DM and lenslet array may impact the optomechanics of the WS. Analogous to the quad cell in the WS, a Fried geometry is sensitive to the alignment between the DM and lenslet array. In a Fried geometry (see also Section 5.2), one generally needs transverse position, axial position, and rotation adjustments of the DM with respect to the lenslet array or vice versa. In addition, one needs tip/tilt adjustment for the DM, as with any other mirror. Tolerances for registration between the DM and the lenslet array in a Fried geometry are ~10 to 30% of a subaperture in order for the AO loop to work properly. This same tolerance also applies to the stability of all DM–WS geometries once the DM–WS registration has been calibrated. These considerations determine what DOFs will be necessary in the WS and DM and exactly how they will be aligned in the AO system. One useful feature that can be built into the WS is a single-mode fiber chuck that is permanently or repeatably installed at an image plane in front of the WS. Placing a fiber in the chuck pointed toward the WS gives a perfect image with which to calibrate the WS and allows one to check collimation of any optical spaces designed to be collimated. 
Inserting the fiber from the other direction in the chuck and propagating the light toward the DM to a collimated space reconciles focus in the two parts of the system. 7.4.4.3 Prove That Centroid Measurements Are Repeatable The WS needs to produce centroid data that are consistent with varying light levels. The following procedure establishes the repeatability of the WS measurements. It is written assuming that the WS is being tested offline, but the same approach can be used with the WS integrated into the AO system. The centroid measurement software alone can be tested using a variant of this procedure. Note that although the term centroid is used, the procedure applies to any method of determining a SH spot’s position, such as matched-filter correlation. 1. Using a nominally collimated source with the same wavelength as the WS beacon, project bright but nonsaturating light into the WS. Using a bright source isolates detector read and dark noise issues so that any problems in this step are not incorrectly attributed to the detector. If the WS is already integrated into the AO system and the light path includes the DM, then the DM should be in a fixed position, either off or with
voltages so that the mirror is as flat as possible. Stray light control should mimic the actual use conditions since the purpose of this is to determine the limiting noise sources (e.g., stray light, air currents).
2. Record a large number of frames (i.e., 30 to 100).
3. Find the characteristics of the centroid calculation (histogram, standard deviation of the SH spot position) for each subaperture; this can be expressed in terms of pixels or as a fraction of the SH spot size. If the SH spots are diffraction limited, then the RMS wavefront measurement variability can be approximated using the RMS centroid error in terms of the full width at half maximum (FWHM) of the spot size. For example, if the RMS centroid error is 0.10 of the FWHM spot size, then the RMS wavefront variability is approximately 0.10λ. The measured value can be compared to the WS error allowed in the AO system performance error budget.
4. Repeat steps 2 and 3 with the light levels expected during actual use and with several light levels above and below this point. The purpose here is to ascertain the noise sources under realistic conditions.
5. Compare the centroid computations for the different light levels and evaluate differences.
6. Characterize the SH spot size and shapes from the high light-level data.
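The per-subaperture repeatability statistics described above can be sketched as follows. The centroid data here are synthetic stand-ins; the frame count, subaperture count, spot FWHM, and jitter level are my assumptions for illustration only.

```python
import numpy as np

# Sketch: given per-frame x-centroids for each subaperture, compute the
# standard deviation of the SH spot position and express it as a fraction
# of an assumed FWHM spot size.
rng = np.random.default_rng(1)
n_frames, n_subaps = 100, 37
fwhm_px = 4.0                      # assumed SH spot FWHM in pixels

# Synthetic centroids: fixed true positions plus measurement jitter.
true_x = rng.uniform(0.0, 512.0, n_subaps)
centroids_x = true_x + rng.normal(0.0, 0.2, (n_frames, n_subaps))

std_px = centroids_x.std(axis=0)   # per-subaperture scatter, pixels
std_frac = std_px / fwhm_px        # scatter as a fraction of spot size

# Per the text, an RMS centroid error of 0.10 FWHM corresponds to roughly
# 0.10 wave of RMS wavefront measurement variability per subaperture.
print(f"median centroid scatter: {np.median(std_px):.3f} px "
      f"({np.median(std_frac):.3f} FWHM)")
```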
7.4.4.4 Prove That Centroid Measurements Do Not Depend on Centroid Locations with Respect to CCD Pixels and Measure the Plate Scale For any centroid algorithm, other than a quad cell, a given amount of wavefront tilt should result in a uniform shift in all of the centroid positions, independent of where the SH spot is with respect to the CCD pixels. In theory, this could be just a software test with “dry-lab” data (since this procedure is really testing the centroid calculation), but prudence suggests performing the test in hardware, such as outlined below, in case there are unforeseen errors. 1. Put a single-mode fiber source at a convenient image plane (i.e., a plane conjugate to the retina) using a bright, nonsaturating source at the wavefront sensing wavelength. All of the lenslets should be illuminated. 2. Take 100 images of the SH spot array pattern and find the centroid locations corresponding to each subaperture. 3. Translate the fiber so that SH spots move by ~0.2 pixel. Use a dial indicator, motor encoding, or other method on the fiber stages to accurately translate the fiber so that the plate scale can be calibrated. 4. Repeat the previous steps up to a total translation of at least 1.0 pixel. Also repeat the procedure for a translation of several pixels so that plate scale measurement is as accurate as possible (to avoid accruing errors due to dividing by a small number to get the plate scale).
ADAPTIVE OPTICS SYSTEM ASSEMBLY AND INTEGRATION
5. Find the characteristics of the centroid shift (histogram, standard deviation of the SH spot position) for each subaperture; the plate scale should be the same for all subapertures.
6. Check that the measured plate scale agrees with the design plate scale. Large differences (i.e., greater than 10 to 20% of the desired value) indicate problems such as errors in component focal lengths or in object and/or image locations.

7.4.4.5 Prove That Known Changes in the Wavefront Produce the Correct Changes in the Centroids

The purpose of this section is to verify that the WS responds correctly to a known aberration. If the DM's characteristics are independently and accurately known, then the DM can conveniently be used to introduce aberrations. However, the DM's characteristics are often not well known a priori and/or are nonrepeatable, so aberrations may need to be introduced by other means. One good method, particularly convenient in vision science, is to introduce a known amount of defocus or astigmatism using spherical or cylindrical trial lenses, respectively. The characteristics of trial lenses are well known, although the magnitude of the introduced aberration can be fairly large. For example, a spherical lens with 0.125 D of power (the smallest increment available in trial lenses) introduces 1 µm of peak-to-valley (PV) wavefront error over an 8-mm pupil. The amount of introduced aberration can be reduced by using nearly crossed positive and negative cylinders. Another method is to use a single-mode fiber at an image plane in front of the WS and translate it transversely or axially to generate tilt and defocus, respectively. The procedure for this test is similar to that in the previous section, except that varying amounts of focus or astigmatism are introduced rather than varying amounts of wavefront tilt.
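The trial-lens figure quoted above follows from the paraxial relation for defocus, W_PV = Φr²/2 (power Φ in diopters, pupil radius r in meters). A quick sanity check, with a helper function named here for illustration:

```python
# Peak-to-valley defocus wavefront error introduced by a trial lens:
# W_PV = Phi * r^2 / 2, with Phi in diopters and pupil radius r in meters.
def pv_defocus_m(power_diopters: float, pupil_diameter_mm: float) -> float:
    r = pupil_diameter_mm * 1e-3 / 2.0
    return power_diopters * r * r / 2.0

# The example from the text: a 0.125 D sphere over an 8-mm pupil
pv = pv_defocus_m(0.125, 8.0)
print(f"{pv * 1e6:.2f} um PV")  # -> 1.00 um PV
```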
7.4.5 Check Wavefront Reconstruction
Some AO systems reconstruct the wavefront from the WS slope measurements, filter certain Zernike modes of the wavefront, and then apply the control loop gain and correction. Other AO systems use the WS slope measurements to control the DM directly. For those AO systems that explicitly reconstruct the wavefront, this section provides a check of the reconstruction. The purpose of this section is to test the wavefront reconstruction code by putting a known aberration into the system, measuring the change in the WS centroids, and reconstructing the wavefront to see if the reconstruction matches the introduced aberration. While this procedure is often a test of software, it may also be used in hardware. The procedure for testing the reconstruction is as follows:
1. Generate (analytically) a set of centroid locations of known aberration. The centroid sets to be tested should include the following:
AO SYSTEM INTEGRATION
a. Centroids that are identical to the reference centroids. The reconstruction algorithm should calculate zero wavefront error.
b. Centroids corresponding to wavefronts with varying amounts of tilt.
c. Centroids corresponding to wavefronts with varying amounts of defocus. In the laboratory, defocus can be introduced with trial lenses of known value.
d. Centroids corresponding to wavefronts with varying amounts and orientations of astigmatism. In the laboratory, astigmatism can be introduced with trial lenses of known value.
e. Centroids with random combinations of the aberration modes that will be corrected by the AO system.
2. Add to each set of centroids a set of reference centroid offsets. An irregular or randomly generated set of reference centroids tests whether the code has preconceived notions about the locations of the reference centroids.
3. Use these centroids as inputs to the reconstructor and check that the reconstructed wavefront is correct in shape, magnitude, and sign.
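Step 1 can be sketched numerically. The toy model below (a small square grid of subapertures and a least-squares modal reconstructor — an illustrative stand-in, not the book's reconstructor) feeds analytically generated defocus slopes to the fit and checks that pure defocus comes back with the correct magnitude and sign:

```python
import numpy as np

# Toy zonal check on a small square grid of subaperture centers.
n = 8
y, x = np.mgrid[0:n, 0:n] / (n - 1) * 2 - 1   # centers in [-1, 1]

# Known input: defocus wavefront W = a*(x^2 + y^2); its slopes are analytic.
a = 0.5
slopes = np.concatenate([2 * a * x.ravel(), 2 * a * y.ravel()])

# Slope model for the modes we reconstruct (x-tilt, y-tilt, defocus)
modes = np.stack([
    np.concatenate([np.ones(n * n), np.zeros(n * n)]),   # slopes of x-tilt
    np.concatenate([np.zeros(n * n), np.ones(n * n)]),   # slopes of y-tilt
    np.concatenate([2 * x.ravel(), 2 * y.ravel()]),      # slopes of x^2 + y^2
], axis=1)

coeffs, *_ = np.linalg.lstsq(modes, slopes, rcond=None)
print(coeffs)  # expect ~[0, 0, 0.5]: pure defocus, correct magnitude and sign
```

The same harness extends to tilt, astigmatism, and random mode combinations by swapping in the corresponding analytic slope vectors.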
7.4.6 Assemble the AO System
The procedures used in assembling the AO system are derived from the alignment procedures given in Section 7.3.3. In general, the AO system assembly occurs in an iterative fashion, slowly "herding" the system into alignment. The following list is an example of an assembly procedure for an AO system similar to that depicted in Figure 10.1.
1. Align relay telescopes offline as described in Section 7.3.4.
2. Establish a LOS on the table with an alignment laser or alignment telescope. A logical place to do this is in the optical space of the subject's eye.
3. Using one of the layout techniques described in Section 7.3.3.1, install mirrors (fold mirrors, powered mirrors) into the system; installing beamsplitters also may be helpful. The optomechanical adjustments for the optics should initially be centered within their travel; this reduces the possibility of running out of travel during the course of alignment. One may choose to install a flat mirror in place of the DM. It is helpful if the optomechanics are designed so that the flat mirror and DM can be interchanged in a repeatable manner that does not require adjustments each time the optics are placed.
4. Starting with the mirror furthest upstream and working downstream, adjust the tilt of each mirror so that the LOS hits the center of each mirror. This can be evaluated with a target over the front of the mirror or by eye. This is a coarse alignment.
5. Again starting with the mirror furthest upstream, adjust the tilt of each mirror to align the LOS to machined point targets along the optical path. This is a more precise alignment.
6. Starting with the relay telescope furthest upstream, integrate the relay telescopes into the AO system as described in Section 7.3.4.
7. Using fold mirrors in the WS leg (or other degrees of freedom), align the WS pupil to the DM and the intermediate image plane of one relay telescope to the image plane in front of the WS. The image location can be evaluated by placing a single-mode fiber source at the image plane in front of the WS and checking that the light goes through a target at the relay telescope intermediate image plane. The pupil registration may be evaluated by looking at the pupil illumination pattern on the WS camera or by substituting a CCD at the lenslet plane, as described in Section 7.3.3.4.
8. If there are any collimated spaces in the design, check for collimation using a point source at an image plane and a shear plate in the collimated space. For example, if the DM is in collimated space, then one can check the focus of the WS by placing a single-mode fiber at the image plane in front of the WS, propagating the light backward through the system toward the DM, and evaluating the wavefront on a shear plate just before the DM.
9. Using a collimated source in the optical space of the subject's eye or a single-mode fiber source at an image plane, adjust fold mirrors in the science camera path to center the image on the science camera. Focus the camera by translating it axially. This same procedure can be used for the other legs in the system. This step is actually a boresight procedure, which is discussed more thoroughly next.
7.4.7 Boresight FOVs
Boresighting refers to the process of coaligning the FOVs of the various optical subsystems. The wavefront sensor beacon, WS FOV, fixation target FOV, psychophysical experiment display FOV, flood illumination source, and science image plane FOV should all be centered with respect to one another. The key element is to establish a repeatable alignment target. For example, using a mechanical alignment target at an image plane within the AO relay (i.e., the part of the instrument common to all paths to be boresighted) produces a repeatable fiducial and also guarantees that the various fields will be aligned to the designed optical axis of the AO relay. A logical place to evaluate boresight is at the subject's position, since all optical paths are visible from this location. As described in Section 7.3.3.3, one can use a human eye or an artificial eye (detector) to assess boresight. The human eye, which need not be dilated, is useful for rough alignment when the target and fields to be aligned are sufficiently illuminated. More precise boresighting is possible with an artificial eye.
7.4.8 Perform DM-to-WS Registration
The DM-to-WS registration refers to the mapping between the DM actuators and the WS subapertures (or lenslets). The penalty for misregistration is that the loop will converge too slowly or not at all. For successful closed-loop control, the WS needs to be calibrated such that the wavefront error measured by the WS results in the appropriate signals being sent to the appropriate actuators. In closed-loop control, the exact magnitude of the actuator response does not need to be perfect, since subsequent iterations will reduce the error. However, registration, the mapping of the DM actuators to the WS subapertures, is crucial. The calibration can be done either by assuming that the mapping between the DM and the WS is as designed or by measuring the WS signals in response to actuator commands. Once the measurements or calculations have been completed, the registration between the WS subapertures and the DM actuators must remain fixed to within a small fraction of an actuator separation or the loop will fail to converge. As mentioned earlier, for AO systems with a nominal Fried geometry, not only must the DM-to-WS registration remain constant, but it must be as designed. Errors in translation or rotation of the mapping must be no more than a small fraction (perhaps 10 to 30%) of a subaperture for proper operation of the AO control loop. An AO system with a heavily oversampled WS (i.e., many subapertures per actuator) is fairly forgiving: the AO loop will work for any reasonable alignment of the WS and the DM. A Fried geometry, however, is much less forgiving, because small deviations from the nominal design (in transverse position, rotation, or scale) can result in regions of the pupil with a geometry closer to the poor-performing Southwell geometry (mentioned in Chapter 5) than to a Fried geometry. The tolerances in DM-to-WS registration affect the DOFs necessary in the optomechanical design of the AO system.
In the most stringent case, a Fried geometry with a quad-cell WS, one will need three axes of translation and axial rotation of the WS lenslet array to register the lenslet array with respect to the DM and three axes of translation plus axial rotation of the CCD to register the CCD with respect to the lenslet array. These adjustments may require micrometers. In a heavily oversampled WS with a large number of pixels per subaperture, several of these DOFs may be unnecessary or else achievable with less sophisticated and more robust means. A few approaches to checking the registration are listed below. (See also Chapter 5 for more details on the geometry and registration of actuators and subapertures.) • Push a single actuator or fiducial pattern of actuators to see if the WS subapertures show the correct response. For example, if an actuator is located at the junction of four subapertures, then pushing that actuator should yield equally sized motions of the spots in the neighboring subapertures, all toward or away from the actuator. Asymmetries in the
spot displacements indicate a misalignment between the actuator and the lenslet array.
• In a Fried geometry, "waffle" is an undetectable mode. When the registration is correct, a waffle push will yield no change in the centroids (see also Section 5.2). In many vision AO systems, the WS subaperture pitch is finer than the DM actuator spacing; therefore, the system does not possess a Fried geometry, and a waffle mode on the DM is detectable. Nevertheless, asymmetry in the spot displacements in response to a waffle pattern on the DM quickly indicates misregistration.
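The single-actuator poke test in the first bullet can be simulated before touching hardware. The sketch below assumes a Gaussian influence function (an idealization; a real DM's measured influence function should be substituted) and checks that the four subapertures surrounding a Fried-geometry actuator see equal-magnitude, radially directed spot motions:

```python
import numpy as np

# Gaussian stand-in for a single actuator's influence function (an assumption;
# use the measured influence function for a real DM).
def slope_at(sub_xy, act_xy, w=1.0):
    """Wavefront gradient of exp(-r^2/w^2) at a subaperture center."""
    dx, dy = sub_xy[0] - act_xy[0], sub_xy[1] - act_xy[1]
    g = np.exp(-(dx**2 + dy**2) / w**2)
    return np.array([-2 * dx / w**2 * g, -2 * dy / w**2 * g])

# Fried-style geometry: actuator at the junction of four subapertures
act = (0.0, 0.0)
subs = [(-0.5, -0.5), (0.5, -0.5), (-0.5, 0.5), (0.5, 0.5)]
disp = [slope_at(s, act) for s in subs]
mags = [np.hypot(*d) for d in disp]

# With correct registration the four responses are equal in magnitude;
# a spread in the magnitudes indicates DM-to-WS misregistration.
print(np.ptp(mags))  # ~0 for perfect registration
```

Shifting `act` away from the junction reproduces the asymmetry the text describes as the misregistration signature.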
7.4.9 Measure the Slope Influence Matrix and Generate Control Matrices
The actuator influence function, slope influence matrix, and derivation of a control matrix are discussed in Chapter 5. This section briefly describes a procedure for measuring the slope influence matrix. In the most basic procedure, one pushes an actuator and measures the x and y displacements of each SH spot. The WS data from each actuator push become a column in the actuator influence matrix. One would then repeat the procedure for each actuator to build a complete actuator influence matrix, which would then be pseudoinverted to generate a control matrix. If there are a large number of actuators, one can expedite the process by pushing several well-separated actuators at once, assuming that the influence function is zero further than, say, three interactuator distances from each actuator. Truncating the influence function in this way also has a benefit in that it prevents noise in WS measurements from creeping into the control matrix.
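The poke-and-pseudoinvert procedure, plus the closed-loop check of Section 7.4.10, can be sketched as follows. The matrix sizes match the Indiana system quoted later in the book (37 actuators, 221 subapertures), but the influence matrix here is random stand-in data rather than measurements:

```python
import numpy as np

rng = np.random.default_rng(1)
n_act, n_slopes = 37, 2 * 221

# Poke each actuator in turn; each column of M is the measured slope response.
# A random matrix stands in for real poke data.
M = rng.normal(size=(n_slopes, n_act))

# Control matrix by regularized pseudoinverse
A = np.linalg.pinv(M, rcond=1e-3)

# Closed-loop sanity check with an integrator of gain K: v <- v - K * A @ s
v_true = rng.normal(size=n_act)          # aberration expressed in actuator space
v = np.zeros(n_act)
K = 1.0
for _ in range(3):
    s = M @ (v - v_true)                 # residual slopes seen by the WS
    v = v - K * A @ s
print(np.abs(v - v_true).max())          # ~0: loop converges with unity gain
```

Setting `K = 0` should leave `v` unchanged, which mirrors the two-gain test described in Section 7.4.10; an incorrect scaling factor in `A` shows up immediately as over- or undercorrection.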
7.4.10 Close the Loop and Check the System Gain
Control algorithms are discussed in Chapter 5. In the terminology of that chapter, the set of actuator voltages to be put on the DM, vt+T, is related to the previous set of actuator voltages, vt, and the change in actuator voltages (found by multiplying the slope vector, s, by the control matrix, A) according to the equation vt+T = vt − KG · A · s [Eq. (5.17)]. This control algorithm is often called an "integrator." As mentioned in Chapter 5, KG is referred to as the gain of the controller. As an example, a test procedure for a simple integrator is given below. This procedure can be done in software alone, but it should be done with hardware to guarantee that the DM will correct as expected.
1. Generate the reference centroids.
2. Put a simple (e.g., defocus) aberrated wavefront on the DM.
3. Close the loop for one cycle using at least two gain parameters: 0 (i.e.,
no further correction) and 1.0 (i.e., attempt to apply a 100% correction of the wavefront in a single iteration).
4. Generate a new wavefront measurement following the correction. If done in software, the new measurement is generated by multiplying the new actuator voltages by the slope influence matrix. Check that the new wavefront measurement is correct for the gain parameter under test. For a unity system gain, the resulting wavefront centroids should be identical to the reference centroids. It is very easy to have an incorrect scaling factor in the control algorithm, so one should check not just the form of the correction but also its magnitude and sign.
7.4.11 Calibrate the Reference Centroids
Up to this point, assumed reference centroids have been adequate, and in fact preferred, because creating calibrated centroids takes time and would bog down earlier stages of the integration. If reference centroids are to be measured in practice rather than computed or assumed, then this is the point where the calibrated reference centroids need to be created. In particular, calibrated reference centroids are necessary when one wants to measure accurately the wave aberration of the subject’s eye or to obtain the highest retinal image quality possible. On the other hand, measured reference centroids may not be necessary when the non-common-path aberrations are known to be acceptable (by design or previous analysis) for the application at hand. There are several techniques for creating calibrated reference centroids. Among them are the following: • Place a collimated source or artificial eye (illuminated with the WS beacon) at the position of the subject’s eye and measure the resulting wavefront. The DM should be made as flat as possible or replaced with a flat mirror during this process. The measured centroid locations become the reference centroid positions. This method implicitly assumes that common-path and non-common-path aberrations are negligible. • Place a source in front of the WS. This source can be a “perfect” collimated beam in front of the lenslet array or a point source (such as a single-mode fiber) placed at an intermediate image plane just before the WS (often just before a collimating lens that precedes the lenslet array). This method allows the common-path AO relay optics to be corrected but implicitly assumes that there are no non-common-path aberrations between the WS leg and science leg of the instrument. • Place a source in the common path (such as those described in the fi rst bullet) and evaluate the image quality at the science image plane with a CCD. 
The image quality can be evaluated using (near-) image plane techniques such as phase diversity. The image can then be image sharpened (see also Chapter 8) by adding wavefront modes or centroid offsets.
In general, the further downstream the output wavefront is evaluated, the better and more relevant the calibration. Thus, evaluating the image at the science plane is the most accurate method, but it can be time consuming and overkill for some situations. For some applications, it may be possible to show via optical component measurements and analysis that non-common-path aberrations are small enough to ignore. These methods may also be used in succession; oftentimes, later stages in integration require better quality references that would slow down earlier stages of integration. As an example of one of the approaches above, the following is a sample procedure that uses a fiber source at an intermediate image location in front of the WS:
• Position a fiber chuck at the intermediate focus before the last powered lens in front of the WS. The alignment procedure should include a method for boresighting the fiber chuck to the reference beacon.
• Put a single-mode fiber in the fiber chuck and connect the fiber to a bright, nonsaturating source of the same wavelength as the WS beacon.
• Save the reference centroid positions.
CHAPTER EIGHT

System Performance Characterization
MARCOS A. VAN DAM
W. M. Keck Observatory, Kamuela, Hawaii
8.1 INTRODUCTION
Characterizing an adaptive optics (AO) system refers to understanding its performance and limitations. The goal of an AO system is to correct wave aberrations. The uncorrected aberrations, called the residual errors and referred to in what follows simply as the errors, degrade the image quality in the science camera. Understanding the source of these errors is a great aid in designing an AO system and optimizing its performance. This chapter explains how to estimate the wavefront error terms and the relationship between the wavefront error and the degradation of the image. The analysis deals with the particular case of a Shack–Hartmann wavefront sensor (WS) and a continuous deformable mirror (DM), although the principles involved can be applied to any AO system.
8.2 STREHL RATIO
A figure of merit often used to characterize the error of an AO system is the Strehl ratio, S. It is defined as the ratio of the maximum value of the measured point spread function (PSF) over the maximum value of the diffraction-limited PSF. Consequently, the Strehl ratio lies between 0 and 1, with values greater than 0.8 corresponding to essentially diffraction-limited images. The Strehl ratio is related to the wavefront errors via the Maréchal approximation [1]:

S = e^(−σφ²) e^(−σχ²)    (8.1)
where σφ² is the wavefront phase variance and σχ² is the variance of the log-normal amplitude at the pupil plane. The amplitude varies if the pupil is not uniformly illuminated or, in the case of astronomical or horizontal-path adaptive optics, if the wave propagates large distances after being aberrated, a phenomenon referred to as scintillation. The human eye is simpler in this regard: the close proximity of the optics of the eye to its pupil prevents significant scintillation effects. Equation (8.1) is accurate for root-mean-square (RMS) phase errors less than 1 radian; even when the approximation does not hold, it is still true that the larger the phase aberration, the lower the Strehl ratio. For this reason, the Strehl ratio has found widespread use in adaptive optics. Since the Strehl ratio is a function of the phase, φ, which is related to the wave aberration, W, via φ = 2πW/λ, it increases with increasing wavelength, λ. One should include the wavelength when quoting the Strehl ratio. An AO system with a single wavefront corrector conjugate to the pupil plane, as occurs in all existing vision science systems, can only correct the wave aberrations and not the scintillation. In addition, since there is only one WS, the wave aberration is measured at only one angle, the optical axis of the WS. Hence the goal of any vision science AO system is to minimize the on-axis wavefront error. The Strehl ratio can be used during the calibration process to gauge the image quality on a pointlike light source (hereafter called a point source) located where the retina of the eye would be. Measuring the Strehl ratio is more complicated than it appears [2]. Five steps are required to calculate the Strehl ratio from well-sampled images (i.e., the core of the image is at least four pixels wide):
1. Determine the diffraction-limited PSF using Fourier optics [3].
This is relatively easy if the PSF is monochromatic but requires a weighted average over the passband if the source has a large spectral width. Normalize the PSF such that the total intensity is unity.
2. Find the maximum of the diffraction-limited PSF using some subpixel interpolation method. Fast Fourier transform (FFT) interpolation works well if the data is well sampled.
3. Find the total flux of the image. This is especially difficult when the pixel size is small and there are many pixels over which to sum the intensity. Each pixel measurement has an associated error, and these errors can dominate when the number of pixels is large. Similarly, accurate background subtraction is imperative: small errors in the value of the background can result in large errors in the Strehl ratio estimate. To reduce the error in the flux estimate, the area over which the total flux is estimated must be windowed, at the expense of overestimating the Strehl ratio. Windows with large radii result in estimates that are noisier but less biased. Normalize the image intensity.
4. Find the maximum of the normalized image using the same interpolation method.
5. Divide the maximum value of the normalized image by the maximum value of the diffraction-limited PSF to obtain the Strehl ratio.

Another image quality metric that is commonly used is the full width at half maximum (FWHM) of the image. As the name suggests, this quantity describes the angular distance between opposite points where the intensity is equal to half the peak intensity. The resulting FWHM is often called the resolution of the optical system. The FWHM has an obvious meaning when the data is continuous and one dimensional but is more difficult to define from images, which are inherently pixelated and two dimensional. In practice, the FWHM is computed by assuming that the core of the image is approximately Gaussian. The standard deviation of the intensity distribution is calculated over a window with a length of about six standard deviations centered around the peak. The FWHM of a Gaussian is equal to 2.355 times its standard deviation. The FWHM is hence very easy to calculate for spots with a Gaussian profile and is relatively insensitive to noise and background subtraction, since few pixels are used. The disadvantage of this metric is that it is not directly related to the wavefront error: it is much more sensitive to low-order aberrations, such as tip, tilt, and defocus, than to high-order aberrations. In the sections that follow, the wavefront error terms are presented along with a description of how to calculate them.
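The two metrics above reduce to one-line formulas. A minimal sketch (function names are mine, not the book's):

```python
import numpy as np

# Marechal approximation, Eq. (8.1), amplitude term neglected:
# S ~ exp(-sigma_phi^2), with sigma_phi = 2*pi*RMS/lambda in radians.
def strehl_marechal(rms_wavefront_nm: float, wavelength_nm: float) -> float:
    sigma = 2.0 * np.pi * rms_wavefront_nm / wavelength_nm
    return float(np.exp(-sigma**2))

# FWHM of a Gaussian profile: 2*sqrt(2*ln 2) * sigma ~ 2.355 * sigma
def fwhm_from_sigma(sigma_px: float) -> float:
    return 2.0 * np.sqrt(2.0 * np.log(2.0)) * sigma_px

# lambda/14 RMS wavefront error gives the classical ~0.8 threshold
print(strehl_marechal(550.0 / 14.0, 550.0))  # ~0.82
```

Note the wavelength dependence: the same RMS wavefront error quoted in nanometers yields a higher Strehl ratio at longer wavelengths, which is why the wavelength must accompany any quoted value.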
The effect of all these wavefront errors is to reduce the Strehl ratio and to increase the FWHM of the images.

8.3 CALIBRATION ERROR
The term calibration error refers to the residual wavefront error in the absence of any external aberrations. In the absence of calibration error or aberrations external to the AO system, a point source at the location of the retina should result in a perfect diffraction-limited image in the science camera. This does not occur because there are optical aberrations in the common path and in the imaging path. A deformation can be placed on the DM during the calibration process to compensate for these aberrations: The process of estimating and applying the desired deformation is known as image sharpening (see also Chapter 7). Because some errors in the camera
can be eliminated through the image sharpening process, camera and calibration errors are bundled together. The deformation introduced on the DM and imperfections in the lenslet array lead to the decentering of the WS spots from their nominal positions. The resulting centroids are defined to be the reference centroids (also known as centroid offsets) and are subtracted from the measured centroids when the AO loop is closed. If the reference centroids are inaccurate, for example, if the optics in the AO system are misaligned or if the measurement of the reference centroids is noisy, then there will be additional calibration errors. The calibration error can be measured by closing the loop and simultaneously imaging a point source with no external aberrations. Then one can measure the Strehl ratio, SCALIB, as described in Section 8.2 and use the Maréchal approximation to calculate the wavefront error, σCALIB. However, images of a point source contain much more information than just the wavefront error: it is also possible to derive the wavefront itself. Phase retrieval algorithms estimate the amplitude and phase at the pupil plane from intensity measurements at the image plane and knowledge of the size of the pupil [4, 5]. Additional constraints, such as prior information about the wavefront or amplitude of the pupil or noise in the image, can be incorporated in the algorithm. The disadvantage of this class of algorithms is that if the pupil is symmetric, there is an ambiguity about the sign and the orientation of the phase, so this information cannot be easily used for image sharpening [6]. For example, images acquired through a circular pupil that have a positive or a negative defocus aberration look identical. In addition, these algorithms work best if a point source is being imaged. Both these issues can be resolved by implementing phase diversity [7, 8]. Here, two images are captured: one at the focal plane and one slightly out of focus.
The extra information obtained allows one to resolve the ambiguity problem and also to estimate the object if it is not a point source [9]. The resulting phase estimate can be fed back to improve the calibration of the system. For example, a phase diversity algorithm by Loefdahl and Scharmer is employed at Keck Observatory to remove lower-order aberrations [10].
8.4 FITTING ERROR
The fitting error is defined to be the component of the wave aberration that the DM cannot fit. This error depends on the spatial characteristics of the aberrations to be corrected and on the spatial characteristics of the DM, such as the spacing, influence function, and stroke of the actuators. To a good approximation, a continuous DM such as those produced by Xinetics can be thought of as a high-pass spatial filter with a cutoff spatial frequency given by the Nyquist criterion of the actuator positions (the inverse of twice the spacing between adjacent actuators). Then any power in the power spectral density at spatial frequencies lower than the Nyquist criterion
will be corrected, while any power at higher spatial frequencies will contribute directly to the fitting error [11]. This implicitly assumes that the actuator influence function is a sinc (the Fourier transform of a rectangle function) interpolator, which is only approximately true. Hence the fitting error will be larger in practice. If the actuator influence function and the wave aberration are known, the fitting error can be found by doing a least-squares fit of the actuator influence functions to the wavefront. The residual is the fitting error. If the AO system has a WS with a finer spatial resolution than the DM (e.g., a Shack–Hartmann WS with the length of the lenslets smaller than the interactuator spacing), then the residual centroid data can be used to estimate the fitting error up to the Nyquist sampling rate of the WS. A set of many residual centroid measurements, s[n], is taken when the loop is closed on the eye, and these measurements are averaged across the frames, giving s̄. The component that can be corrected by the AO system is removed to give the uncorrectable residual,

s̃ = s̄ − MRs̄    (8.2)
where M is the influence matrix formed by pushing the actuators one by one and measuring the centroids, and R is the reconstruction matrix. The final step is to convert s̃ into a wavefront by using a geometric zonal reconstructor [12, 13]. Figure 8.1 shows the fitting error observed on one subject's eye using the AO system at Indiana University, which has 37 actuators and 221 subapertures. The RMS value of this wavefront was found to be 41 nm, and this is the fitting error, σFITTING.

FIGURE 8.1 The fitting error over the 6.5-mm pupil of an eye using the Indiana University AO system. The RMS fitting error is 41 nm.
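Equation (8.2) is a projection onto the subspace the DM cannot reach. A short check, using stand-in random data in place of measured matrices (dimensions chosen to match the Indiana system quoted above):

```python
import numpy as np

rng = np.random.default_rng(2)
n_act, n_slopes = 37, 2 * 221

M = rng.normal(size=(n_slopes, n_act))   # influence matrix (measured by poking)
R = np.linalg.pinv(M)                    # reconstruction matrix

# Time-averaged residual centroids while the loop is closed on the eye
s_bar = rng.normal(size=n_slopes)        # stand-in for averaged measurements

# Eq. (8.2): remove the component the AO system can correct
s_tilde = s_bar - M @ (R @ s_bar)

# s_tilde lies outside the range of M: the DM cannot reduce it further
print(np.abs(M.T @ s_tilde).max())       # ~0
```

With `R` the pseudoinverse of `M`, `MR` is the orthogonal projector onto the correctable subspace, so `s_tilde` is exactly the part of the averaged slopes that feeds the zonal reconstructor to yield the fitting-error wavefront.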
8.5 MEASUREMENT AND BANDWIDTH ERROR
The two remaining sources of error to be described in this chapter are measurement error and bandwidth error. The measurement error term is due to noise in the wavefront slope measurement propagating through the control loop to the mirror. The bandwidth error is due to the component of the dynamic aberration that is not compensated because the AO system does not respond instantaneously. It depends on the dynamic response of the controller and on the dynamic change in the aberrations of the eye [14]. In order to calculate the measurement and the bandwidth errors, it is necessary to first model the dynamic behavior of the adaptive optics system. The analysis presented here draws heavily from control theory, including the application of Laplace and z transforms. The reader unfamiliar with this material is referred to textbooks on control theory [15] and signal processing [16].

8.5.1 Modeling the Dynamic Behavior of the AO System
The dynamic behavior of an AO system can be modeled using the blocks displayed in Figure 8.2 [17]. First, the wavefront sensing camera stares at the residual wavefront for one sampling period. This is followed by a computational delay, τ_c, which corresponds to the lag between the moment the camera stops integrating and the time the voltages are updated on the DM. This delay consists of the time taken to read the charge-coupled device (CCD), compute the centroids, multiply the centroids by the reconstruction matrix, and calculate the new voltages. The compensator calculates the voltages to be applied from the previous voltages and the reconstructed wavefront. Typically, the compensator consists of an integral controller of the form
FIGURE 8.2 Schematic of the control loop. The feedback path consists of the camera stare, the computational delay, the compensator, and a zero-order hold (ZOH) driving the mirror, M(f). The inputs are the aberrations, X(f), and the noise, N(f); the diagnostic signal, D(f), is recorded just after the noise is added.
y[n] = y[n − 1] + Ku[n]    (8.3)
where K is a variable loop gain, y[n] is the output from the compensator, and u[n] is the input to the compensator at time n. The transfer function of the integral compensator can be written as

H_COMP(z) = K / (1 − z^(−1))    (8.4)
where z is the complex z-transform variable. Equation (8.4) can be rewritten in the Laplace domain by substituting z = e^(sT). Finally, the mirror is held in position for one sampling period. This is called a zero-order hold because it is a zeroth-order (constant) approximation to the temporal evolution of the wavefront. The transfer functions of the individual blocks are as follows:

1. Camera stare and the zero-order hold with sampling period T = 1/f_s, where f_s is the sampling frequency:

H_STARE(s) = H_ZOH(s) = (1 − e^(−sT)) / (sT)    (8.5)
2. Computational delay time τ_c:

H_DELAY(s) = e^(−sτ_c)    (8.6)
3. Integral compensator with gain K:

H_COMP(s) = K / (1 − e^(−sT))    (8.7)
In the above equations, s = i2πf is the complex frequency variable, where f is the frequency and i = √(−1). In what follows, all the blocks will be written with f as the argument, since f has a more intuitive meaning than s and is computed directly from the discrete Fourier transform (DFT) of the diagnostic data from the AO system. In order to calculate the wavefront errors, we must convert centroid measurements from the diagnostics into wave aberrations. The residual mirror commands are the corrections to the current mirror position that would be applied if the loop gain were equal to unity. If the reconstruction matrix is R and the vector of centroid measurements is s, then the residual mirror commands, v, are given by v = Rs. Then, using the relationship between the mirror commands (actuator voltages) and the induced wavefront, we obtain a wave aberration at the position of each actuator. For continuous DMs, cross talk between the actuators can be well modeled as a
convolution of the actuator voltages with the response of the neighboring actuators to the applied voltage [18]. The entire feedback arm of the loop, H(f), can be written as the product of all the blocks:

H(f) = H_STARE(f) H_DELAY(f) H_COMP(f) H_ZOH(f)    (8.8)
There are two inputs into the control system: the aberrations of the eye, X(f), and the noise, N(f), which is assumed to be white (same power at all temporal frequencies). Likewise, there are two outputs: the mirror position, M(f), and the residual mirror commands obtained in the diagnostics, D(f). The position of the diagnostics in the control loop is just after the addition of the noise, while the mirror position is just after the zero-order hold. For notational simplicity, we consider the noise to be input before, rather than after, the stare. This assumption has little impact on the transfer function of the control loop. The transfer functions relating the outputs (the mirror position and centroid diagnostics) to the inputs (the aberrations of the eye and measurement noise) are

D(f) = [1 / (1 + H(f))] (X(f) + N(f))    (8.9)

and

M(f) = [H(f) / (1 + H(f))] (X(f) + N(f))    (8.10)
Figure 8.3 plots the modulus squared of these transfer functions for a hypothetical adaptive optics system with the following parameters: T = 0.05 s, τ_c = 0.05 s, and K = 0.25.
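The two curves of Figure 8.3 can be reproduced directly from Eqs. (8.5) to (8.8). The sketch below assumes the hypothetical parameters just given:

```python
import numpy as np

T = 0.05      # sampling period (s), i.e., f_s = 20 Hz
tau_c = 0.05  # computational delay (s)
K = 0.25      # integrator loop gain

def H(f):
    """Open-loop transfer function of Eq. (8.8), evaluated at s = i*2*pi*f."""
    s = 1j * 2 * np.pi * f
    h_stare = (1 - np.exp(-s * T)) / (s * T)   # Eq. (8.5); H_ZOH is identical
    h_delay = np.exp(-s * tau_c)               # Eq. (8.6)
    h_comp = K / (1 - np.exp(-s * T))          # Eq. (8.7)
    return h_stare * h_delay * h_comp * h_stare

f = np.linspace(0.05, 10, 500)                 # avoid f = 0, where the integrator diverges
rejection = np.abs(1 / (1 + H(f))) ** 2        # top curve: |1/(1 + H(f))|^2
mirror_resp = np.abs(H(f) / (1 + H(f))) ** 2   # bottom curve: |H(f)/(1 + H(f))|^2
```

The crossover frequency is where `rejection` first reaches unity (about 1 Hz for these parameters); above it, the delayed loop amplifies rather than rejects its input, which is why the top curve in Figure 8.3 exceeds unity at mid frequencies.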
8.5.2 Computing Temporal Power Spectra from the Diagnostics
The time series of the residual wavefront at each actuator location is converted to a power spectrum using the DFT. In practice, the FFT is often used for speed of computation. The definition of the DFT used in this chapter is

D(p) = (1/√P) Σ_{n=1}^{P} d[n] exp[−i2π(p − 1)(n − 1)/P]    (8.11)
where P is the number of diagnostic frames. This definition keeps the total power of the coefficients equal in either domain, that is,
FIGURE 8.3 Plots of |1/(1 + H(f))|² (top curve) and |H(f)/(1 + H(f))|² (bottom curve) as a function of frequency (Hz).
Σ_{p=1}^{P} |D(p)|² = Σ_{n=1}^{P} |d[n]|²    (8.12)
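With the 1/√P normalization of Eq. (8.11), the DFT is unitary, so Eq. (8.12) holds exactly. A quick numerical check (using NumPy's FFT, whose default normalization is unnormalized, as a stand-in for the diagnostics):

```python
import numpy as np

rng = np.random.default_rng(1)
P = 512                          # number of diagnostic frames
d = rng.normal(size=P)           # one actuator's residual-wavefront time series

D = np.fft.fft(d) / np.sqrt(P)   # Eq. (8.11): unitary-normalized DFT

# Eq. (8.12): total power is identical in both domains
power_freq = np.sum(np.abs(D) ** 2)
power_time = np.sum(np.abs(d) ** 2)
```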
The power spectrum of the diagnostics is taken using the discrete Fourier transform:

|D(p)|² = |DFT[d[n] w[n]]|²    (8.13)
where w[n] is a windowing function used to avoid spectral leakage due to the nonperiodicity of d[n], the residual wavefront as measured by the diagnostics. To convert to frequency space, we use the relationships

D(f) = D(f_s p/P)    (8.14)

and

D(−f) = D(f_s − f_s p/P)    (8.15)
Common windows include the Hanning, Hamming, and Blackman–Harris windows. Each window trades off suppression of spectral leakage against a reduction in spectral resolution. Care must be taken to scale w[n] to ensure that the average power in the window is unity:

Σ_{n=1}^{P} |w[n]|² = P    (8.16)
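A short sketch of the window scaling of Eq. (8.16) and the windowed power spectrum of Eq. (8.13), using a Hanning window and synthetic diagnostics:

```python
import numpy as np

P = 512
rng = np.random.default_rng(2)
d = rng.normal(size=P)              # synthetic residual diagnostics time series

w = np.hanning(P)
w *= np.sqrt(P / np.sum(w ** 2))    # enforce Eq. (8.16): sum of |w[n]|^2 equals P

# Eq. (8.13): power spectrum of the windowed diagnostics,
# with the unitary DFT normalization of Eq. (8.11)
power_spectrum = np.abs(np.fft.fft(d * w) / np.sqrt(P)) ** 2
```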
The power spectrum is then averaged over all the actuators and, if possible, over several sets of power spectra from the same eye. While the power spectrum is used to compute the error terms, the power spectral density (PSD) is often used for plotting purposes. The PSD is a continuous function with dimensions of wavefront squared per hertz and is obtained by dividing the power spectrum by PT. The PSD is usually displayed with the positive frequencies doubled and the negative frequencies discarded. Another number of interest is the crossover frequency, which is defined to be the lowest frequency at which there is no correction. In Figure 8.3 this occurs at 1 Hz. In practice, it is usually determined by plotting the closed-loop power spectrum superimposed on the open-loop power spectrum and determining where these two curves first cross [19].
8.5.3 Measurement Noise Errors

The measurement noise squared error, σ²_NOISE, is given by

σ²_NOISE = Σ |H(f)/(1 + H(f))|² |N(f)|²    (8.17)
where the summation is over all the discrete values of f ∈ [−f_s/2, f_s/2). The noise power spectrum may be computed from first principles using knowledge of the spot size, the light level, and the characteristics of the WS camera, such as the dark current and readout noise [20]. Alternatively, the noise can be estimated from the measured power spectrum. By inspection of Figure 8.3, it can be seen that the loop transfer function for the noise as seen by the diagnostics is close to unity at high frequencies. If the noise power dominates the aberration power at high temporal frequencies, the noise is given by the value of the power spectrum in the region close to half the sampling frequency. One can tell whether this is the case by verifying that the power spectrum follows the |H(f)/(1 + H(f))|² curve at high frequencies. Since the noise is assumed to be white, this is an estimate of N(f) at all frequencies. Figure 8.4 plots the PSD of the residual aberrations using data from the W. M. Keck Observatory astronomical AO system. The PSD value of |N(f)|² may be read from the plot as the value of the PSD at f = 200 Hz, converted to a power spectrum value, and inserted in Eq. (8.17) to calculate the measurement noise error. Inserting the value of the noise floor from the diagnostics into Eq. (8.17) gives

σ²_NOISE = Σ |H(f)/(1 + H(f))|² |D(f_s/2)|²    (8.18)
FIGURE 8.4 Power spectral density in nanometers squared per hertz of the residual aberrations using the residual centroid measurements obtained at W. M. Keck Observatory. The theoretical noise curve is superimposed.
8.5.4 Bandwidth Error
The bandwidth squared error is given by

σ²_BW = Σ |X(f) − [H(f)/(1 + H(f))] X(f)|²
      = Σ |X(f)/(1 + H(f))|²    (8.19)
The diagnostics measure the bandwidth error with an added noise term due to the noise on the centroid measurement propagating through the control loop:

D(f) = (X(f) + N(f)) / (1 + H(f))    (8.20)
Combining Eqs. (8.19) and (8.20) gives the bandwidth squared error:

σ²_BW = Σ [ |D(f)|² − |1/(1 + H(f))|² |N(f)|² ]    (8.21)
and it is evaluated by inserting the measured values of |N(f)|² and |D(f)|² into Eq. (8.21).
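The noise and bandwidth error terms of Eqs. (8.18) and (8.21) can then be evaluated numerically from the diagnostic power spectrum. The self-contained sketch below uses a hypothetical 500-Hz loop, a Lorentzian aberration spectrum, and a synthetic white-noise floor in place of measured diagnostics:

```python
import numpy as np

T, tau_c, K = 0.002, 0.001, 0.3           # hypothetical 500-Hz AO loop
fs = 1.0 / T

def H(f):
    """Open-loop transfer function, Eqs. (8.5)-(8.8)."""
    s = 1j * 2 * np.pi * f
    stare = (1 - np.exp(-s * T)) / (s * T)
    return stare ** 2 * np.exp(-s * tau_c) * K / (1 - np.exp(-s * T))

P = 1024
f = fs * np.arange(1, P // 2) / P          # positive, nonzero discrete frequencies
Hf = H(f)
rej2 = np.abs(1 / (1 + Hf)) ** 2           # |1/(1 + H(f))|^2
cl2 = np.abs(Hf / (1 + Hf)) ** 2           # |H(f)/(1 + H(f))|^2

aberr_ps = 1.0 / (1 + f ** 2)              # synthetic |X(f)|^2 (Lorentzian)
noise_ps = np.full_like(f, 1e-4)           # |N(f)|^2, white; read off near f_s/2 in practice

D2 = rej2 * (aberr_ps + noise_ps)          # diagnostic power spectrum, from Eq. (8.20)

sigma2_noise = np.sum(cl2 * noise_ps)      # Eq. (8.18), using the noise floor as |N(f)|^2
sigma2_bw = np.sum(D2 - rej2 * noise_ps)   # Eq. (8.21)
```

By construction, `sigma2_bw` here equals the residual aberration power Σ |X(f)/(1 + H(f))|² of Eq. (8.19), confirming that subtracting the propagated noise from the diagnostics isolates the bandwidth term.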
8.5.5 Discussion
The gain, K, and frame rate, f_s, should be chosen so as to minimize the sum of the bandwidth and measurement error terms, which depend on the temporal power spectrum of the eye's aberrations and on the brightness of the spots on the WS, respectively. The optimal trade-off can be achieved by calculating the two terms using residual centroids and a dynamic model of the system, or simply by adjusting the parameters and evaluating the image quality. If the measurement error term dominates, then the frame rate or gain should be reduced. One should also consider improving the centroiding algorithm: the accuracy of the slope estimate can be improved by implementing background subtraction (and reducing the background), flat-fielding, removing bad pixels, optimizing the area over which the centroid is calculated, and using maximum correlation instead of a centroid algorithm. On the other hand, if the bandwidth error dominates, then increasing the frame rate or the loop gain (up to a point) is beneficial. In addition, an improved controller design may reduce the bandwidth error [17, 21].

8.6 ADDITION OF WAVEFRONT ERROR TERMS
If all the error terms are statistically independent, then the total error is equal to the sum in quadrature of the individual error terms:

σ²_TOTAL = σ²_CALIB + σ²_FITTING + σ²_BW + σ²_NOISE    (8.22)
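As an illustration of Eq. (8.22), a hypothetical error budget (RMS wavefront errors in nanometers, at an assumed imaging wavelength of 840 nm) can be combined in quadrature and converted to a Strehl ratio with the Maréchal approximation, S = exp[−(2πσ/λ)²]:

```python
import math

lam_nm = 840.0   # hypothetical imaging wavelength (nm)
budget_nm = {"calibration": 30.0, "fitting": 41.0, "bandwidth": 55.0, "noise": 25.0}

def marechal_strehl(rms_nm):
    """Strehl ratio from RMS wavefront error via the Marechal approximation."""
    return math.exp(-(2 * math.pi * rms_nm / lam_nm) ** 2)

total_rms = math.sqrt(sum(v ** 2 for v in budget_nm.values()))   # Eq. (8.22)
S_total = marechal_strehl(total_rms)
```

Because exp(−Σσ²) = Π exp(−σ²), `S_total` equals the product of the individual Strehl degradations, and the largest term (here the hypothetical 55-nm bandwidth error) dominates the budget.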
Using the Maréchal approximation,

S_TOTAL = S_CALIB S_FITTING S_BW S_NOISE    (8.23)
where, for instance, S_NOISE is the Strehl degradation due to the noise wavefront error variance, exp(−σ²_NOISE). The fact that the error terms are added in quadrature implies that the total error is dominated by the largest error terms, while small terms have a negligible effect on the image quality. It is therefore more important to accurately measure and, where possible, mitigate the large error terms than to focus on small sources of error. Other aberrations that might have a significant bearing on the error budget are chromatic aberration, if the wavefront sensing occurs at a different wavelength from the science imaging, and anisoplanatism, which occurs when the light takes a different path to the science camera than to the WS.

Acknowledgments

This work was performed under the auspices of the U.S. Department of Energy by the University of California, Lawrence Livermore National Laboratory, under contract W-7405-Eng-48. The work has been supported by the National Science Foundation Science and Technology Center for Adaptive Optics, managed by the University of California at Santa Cruz
under cooperative agreement No. AST-9876783. Marcos van Dam received a CfAO minigrant to visit Don Miller’s Lab at the Indiana University School of Optometry.
REFERENCES

1. Hardy JW. Adaptive Optics for Astronomical Telescopes. New York: Oxford University Press, 1998.
2. Roberts Jr L, Perrin MD, Marchis F, Sivaramakrishnan A, Makidon RB, Christou JC, Macintosh BA, Poyneer LA, van Dam MA, Troy M. Is That Really Your Strehl Ratio? In: Bonaccini D, Ellerbroek BL, Ragazzoni R, eds. Advancements in Adaptive Optics. Proc. SPIE. 2004; 5490: 504–515.
3. Goodman J. Introduction to Fourier Optics. San Francisco: McGraw-Hill, 1968.
4. Gerchberg R, Saxton W. A Practical Algorithm for the Determination of Phase from Image and Diffraction Plane Pictures. Optik. 1972; 35: 237–246.
5. Fienup JR. Phase Retrieval Algorithms: A Comparison. Appl. Opt. 1982; 21: 2758–2769.
6. Lane RG, Fright WR, Bates RHT. Direct Phase Retrieval. IEEE Trans. Acoust., Speech, Signal Processing. 1987; ASSP-35: 520–526.
7. Gonsalves RA. Phase Retrieval and Diversity in Adaptive Optics. Opt. Eng. 1982; 21: 829–832.
8. Paxman RG, Fienup JR. Optical Misalignment Sensing and Image Reconstruction Using Phase Diversity. J. Opt. Soc. Am. A. 1988; 5: 914–923.
9. Paxman RG, Schulz TJ, Fienup JR. Joint Estimation of Object and Aberrations by Using Phase Diversity. J. Opt. Soc. Am. A. 1992; 9: 1072–1085.
10. Loefdahl MG, Scharmer GB. Wavefront Sensing and Image Restoration from Focused and Defocused Solar Images. Astron. Astrophys. Suppl. Ser. 1994; 107: 243–264.
11. Rigaut FJ, Véran J-P, Lai O. Analytical Model for Shack–Hartmann-Based Adaptive Optics Systems. In: Bonaccini D, Tyson RK, eds. Adaptive Optical System Technologies. Proc. SPIE. 1998; 3353: 1038–1048.
12. Southwell WH. Wave-front Estimation from Wave-front Slope Measurements. J. Opt. Soc. Am. 1980; 70: 998–1006.
13. Tyler GA. Reconstruction and Assessment of the Least-Squares and Slope Discrepancy Components of the Phase. J. Opt. Soc. Am. A. 2000; 17: 1828–1839.
14. Hofer H, Artal P, Singer B, Aragón JL, Williams DR. Dynamics of the Eye's Wave Aberration. J. Opt. Soc. Am. A. 2001; 18: 497–506.
15. Franklin GF, Powell JD, Emami-Naeini A. Feedback Control of Dynamic Systems. Reading, MA: Addison-Wesley, 1998.
16. Oppenheim AV, Schafer RW. Discrete-Time Signal Processing. Upper Saddle River, NJ: Prentice Hall, 1999.
17. Madec P-Y. Control Techniques. In: Roddier F, ed. Adaptive Optics in Astronomy. Cambridge: Cambridge University Press, 1999, pp. 131–154.
18. Oppenheimer BR, Palmer D, Dekany RG, Sivaramakrishnan A, Ealey MA, Price TR. Investigating a Xinetics Inc. Deformable Mirror. In: Tyson RK, Fugate RQ, eds. Adaptive Optics and Applications. Proc. SPIE. 1997; 3126: 569–575.
19. Hofer H, Chen L, Yoon GY, Singer B, Yamauchi Y, Williams DR. Improvement in Retinal Image Quality with Dynamic Correction of the Eye's Aberration. Opt. Express. 2001; 8: 631–643.
20. van Dam MA, Le Mignant D, Macintosh BA. Performance of the Keck Observatory Adaptive-Optics System. Appl. Opt. 2004; 43: 5458–5467.
21. Dessenne C, Madec P-Y, Rousset G. Optimization of a Predictive Controller for Closed-Loop Adaptive Optics. Appl. Opt. 1998; 37: 4623–4633.
PART THREE
RETINAL IMAGING APPLICATIONS
CHAPTER NINE
Fundamental Properties of the Retina

ANN E. ELSNER
Schepens Eye Research Institute and Harvard Medical School, Boston, Massachusetts
Retinal imaging is constrained by the fundamental properties of the retina, the thin, transparent layer of neural tissue that initiates a visual signal at the back of the eye and transmits this visual information toward the brain. New light–tissue interactions are revealed with each advancement in imaging technology. This chapter first lays out the main anatomical structures and general topography of the retina, those features immediately observed with traditional instrumentation in the living human eye. Next, the two blood supplies, which place constraints on both anatomy and imaging, are described. Then the details of the smaller divisions and cell types within layers are covered. Next, there is a discussion of spectra and features in detail, as might be encountered in imaging, and finally there are details of light–tissue interactions that may be useful for imaging studies.

The diagram in Figure 9.1 shows the relative locations of several of the main features of a normal retina in relation to the eye, with the three main layers comprising the globe being the retina, the choroid, and the sclera. The retina subserving vision for the central 20° of visual angle is illustrated by a dark box. The retina is not flat, nor is it a smooth hollow sphere that follows the inside of the globe. Instead, the retina varies in thickness and elevation according to key anatomical features, pathological conditions, and changes with age. Excellent descriptions of the anatomy of the retina indicate why some structures are raised, depressed, or change with the heartbeat [1–5]. Thus,
FIGURE 9.1 A schematic diagram of the human eye. The black square (on the optical axis) indicates a region that is roughly 20° in extent.
the shape of the retina, and its visualization at a distance of about 2 cm (in an adult) through a pupil that is rarely larger than 8 mm, have implications for imaging according to the principles of geometrical optics. Specific length or width measurements given here should be considered subject to measurement error, whether due to histological artifacts in the delicate tissues or to simplifying assumptions in optical methods.

9.1 SHAPE OF THE RETINA
The fovea is thought of as the central region of the retina, usually less than 10° from the optical axis of the eye. This anatomical region is crucial for visualizing fine detail, color vision, and gaze; its individual layers are described later in more detail. The fovea develops over early childhood to become the region with the highest density of photoreceptors [1–9] and the fewest layers of capillaries [1–3, 10, 11]. As diagrammed in Figure 9.2, the foveal pit is the roughly 200-µm central region of the fovea, which is in the center of the macula. This pit is formed by the depression of the retinal surface toward the subretinal layers, caused by a decrease in the number of layers and retinal vessels in the central fovea. The overall fewer layers and radially displaced cell processes are hypothesized to minimize the scattering of light incident on the photoreceptors. Similarly, a region containing no blood vessels, called the foveal avascular zone, extends roughly 400 µm in diameter in most adults. A single layer of connected capillaries, which are the smallest blood vessels, typically rings the fovea to form the outer edge of the avascular zone in the healthy adult eye [10, 11]. The foveal pit causes a strong reflection above the retinal surface, in the manner of a parabolic mirror. There is a decrease in light returning from the sloping sides of the fovea. Therefore, if the focal volume of an imaging instrument is wide, or there is a large degree of scattered light for any reason, then the fovea will not always produce an observable reflection. If the focal volume is narrow and at the level of the superficial layers of the fovea, the foveal reflection, known as the foveal reflex, may be visible. If the illumination is not orthogonal to this layer, the reflection may be elongated and displaced radially. By obtaining sequential images of the fovea while
FIGURE 9.2 The major retinal features marked on a 20° × 20° visual angle, near infrared (780 nm), confocal image of a normal subject. The image matches the position marked by the black square in Figure 9.1. The foveal pit appears darker because relatively less light is returned through a small confocal aperture when the focal plane is at the superficial layers of the retina. The eye position is such that the sloping portion of the retina where the retinal thickness increases, or foveal crest, has a strong reflection, thereby appearing whiter. The larger retinal vessels have brighter center portions than their side walls, illustrating the effects of geometrical optics on round structures. The gray-scale changes in the retinal regions with no particular features vary due to detector noise and are not speckle or pigmentary changes, since a high gain is needed when a small confocal aperture is used. The optic nerve head and retinal arteries are usually reddish, due to their relatively higher concentration of oxygenated hemoglobin. The retinal veins appear bluer since they have less oxygenated hemoglobin. The larger arteries and veins appear different at wavelengths for which there are absorption differences. The retinal arteries appear light gray, whereas the retinal veins appear dark gray, in these images.
moving an imaging instrument with respect to the eye, the foveal reflex may be located. A long-established procedure called retinoscopy uses direct observation of this reflex to correct the spherical refractive error of the eye by inserting lenses between the light source and the eye to diminish the amplitude of the reflex motion. It is important to distinguish this reflection from reflective pathological changes that occur even in young adults [12]. There is a measurable broadening of the reflex with aging, since the foveal curvature is altered as the vitreous detaches and the retina thins [9, 13]. The retina continues to thicken with increasing distance from the fovea, and the retinal surface slopes toward the vitreous to form the foveal crest, which is much thicker and more elevated than the foveal pit. Not only the thickness but also the layer-by-layer composition and optical properties of the
foveal crest are different from the foveal center. Away from the center, there are more layers of cells, more blood vessels, and cell processes are displaced radially [1–5, 10, 12]. As with the fovea, the position and dimensions of the focal volume of an imaging instrument dictate whether there are unwanted reflections off the foveal crest, and at what position on the retina they will occur. While these reflections are useful in depth measurements to obtain the relative height or thickness of the retina [14–16], it is necessary to optimize instrument design and technique to avoid significant artifacts, particularly across a wide region of the retina in normal subjects [8, 17–19]. In those patients with highly reflective retinal lesions, it is crucial to separate this unwanted strong light return from the other components of the image [20]. The retinal layers become gradually thinner with increasing eccentricity beyond the foveal crest, as the density of the photoreceptors and subserving neural layers drops, although there are increased populations of some cell types at the anterior margin of the sensory retina, called the ora serrata. The retinal surface is dramatically altered at the optic nerve head, as shown partially on the right of Figure 9.2 and centered in Figure 9.3. The retina is thick where the retinal nerve fiber bundles are collected to dive past the choroid and exit the eye at the optic nerve head, which is roughly 5° to 10° of visual angle in diameter. To emphasize the elevation change, Figure 9.3 was selected from a sample of patients with glaucoma, a multifactorial disease in which the neurons exiting the optic nerve head die [21]. The nerve bundles
FIGURE 9.3 The major optic nerve head features marked on a 15° × 15° visual angle, near infrared (780 nm), confocal image of a patient with glaucoma. The nerve fiber bundles travel in an approximately radial manner into the nerve head. The image position overlaps Figure 9.2 and uses the same coding scheme.
and retinal arteries and veins are supported laterally by the collagenous basket called the lamina cribrosa. This structure is readily visible as part of the cup of the optic nerve head, particularly at higher magnification and a deeper focal plane. The optic disc is the tissue that surrounds the cup, although the location of its edge is subjective. There are no photoreceptors associated with the location of the optic nerve head, and thus no vision. The overlying fiber bundles or scattered light can obscure this structure. Although an elongated eye shape and tilted optic nerve head are typical in myopia and a shortened eye shape is typical in hyperopia [as shown on magnetic resonance imaging (MRI)], some myopic eyes are merely larger and do not differ in shape [22]. The shapes of both the normal and abnormal optic nerve heads and nearby tissues present the same problems for imaging that the foveal pit and crest do, in that light striking the retinal surface at the sides of the optic nerve head may reflect off at an angle that does not pass through the pupil of the eye. The foveal center lies about 11° to 12° of visual angle from the nearest edge of the optic nerve head, and slightly inferiorly [22]. The size and shape of the optic nerve head vary dramatically among individuals, as does the entire posterior pole. The shape and apparent edge of the superficial optic nerve head vary with age or in any disease in which the retina thins.
9.2 TWO BLOOD SUPPLIES
The eye has two circulations: one supplying the retina and the other supplying the layer that surrounds the retina, the choroid, as described later [1, 2]. As seen in Figures 9.2 and 9.3, the central retinal artery enters the eyeball via the optic nerve head, branching to form the retinal arteries. These branches arch around the macula, branching further until the fifth-order vessels are capillaries. Similarly, blood flows through the capillaries, which meet to form larger and larger vessels until the veins exit the eyeball via the optic nerve head. These great arching arteries and veins form the arcades, and the region of the retina that lies within them is generally considered the macula for clinical purposes (rather than the macula having a specific dimension). Outside the fovea, the number of capillary layers increases from one to four [10]. This greatly increases the chances of light striking the contents of a blood vessel and either being absorbed by the blood vessel contents or scattered by the particles or vessel walls. The healthy capillaries are typically less than 10 µm in diameter, whereas the largest retinal arteries are rarely more than 100 µm, and the veins are 140 µm or less. Arteries carry oxygenated blood, with the capillaries eventually serving as a transmission point for gases and nutrients to pass out of the blood vessels, resulting in veins carrying blood with a lower oxygen concentration. In the healthy eye, the cells and large molecules (such as proteins) from the bloodstream do not pass through the walls of these capillaries, as the cells forming the walls have tight junctions that form a blood–retinal barrier, similar to the blood–brain barrier of the central nervous
system. As described by Rodieck [1], the metabolically active cells in mammals are typically located not more than 100 µm from the nearest capillary, so that molecules can be exchanged by means of diffusion. This puts sharp constraints on the thickness of layers in the retina. In diseases such as diabetes that have small vessel complications, the vessels undergo remodeling. Capillaries then often dilate, are nonperfused, or have aneurysms. Proteins and fluids leak from the vessels, causing the edema that alters retinal elevation, thickness, and the index of refraction. The remodeling and fluid buildup are potential sources of vessel tortuosity and retinal traction that significantly alter the shape and landmarks of the retina over time. The choriocapillaris is a bed of capillaries that forms a plexus to nourish the overlying tissues, supplying metabolic support for the active outer retina. This plexus is fed in a redundant manner by a wealth of arteries and veins that enter the choroid through the sclera, rather than by the stereotypical branching retinal arteries and veins. Unlike the retinal vessels, the choriocapillaris vessels are naturally fenestrated to the extent that large molecules pass through their walls. This allows those tissues that receive metabolic support to be displaced more than 100 µm away, and moves the outer blood–retinal barrier inward from the choriocapillaris. When performing studies with contrast-enhancing agents injected into the bloodstream (an imaging technique known as an angiogram), the dyes with the smaller molecules (such as sodium fluorescein dye) leak readily out of the choriocapillaris. Image contrast decreases rapidly once the dye begins leaking. The degree of porosity of the blood–retinal barrier is illustrated in Figure 9.4.
For a patient with age-related macular degeneration, there is dye leakage early in the angiogram from the more normal retina and choriocapillaris, as well as extensive dye leakage from a membrane of new vessel growth and remodeling, called a choroidal neovascular membrane. The dye does not leak from the larger retinal vessels. Larger dye molecules, such as those in indocyanine green dye, that bind to proteins are more likely to stay within the choroidal circulation and lead to excellent visualization of these deeper vessels if scattered light is sufficiently controlled [23, 24]. Figures 9.4 and 9.5 illustrate the complexity of the choroidal circulation, in distinction to the rather treelike retinal circulation. By these methods, the watershed zone around the optic nerve head is illustrated, in which tissues are perfused at different rates. When using indocyanine green dye and imaging at near-infrared wavelengths, these delays are quite noticeable in patients with some forms of age-related macular degeneration, since different arteries subserve different regions [25]. The choriocapillaris itself is difficult to image since it is only about 10 to 30 µm thick and thins with increasing age.
9.3 LAYERS OF THE FUNDUS
The retina, choroid, and sclera are the three main layers of the ocular fundus [1, 2]. Figure 9.6 shows the main layers thought to provide light return for
FIGURE 9.4 An early phase fluorescein angiogram, acquired with the research scanning laser ophthalmoscope (SLO) [17] using an Argon 488-nm excitation wavelength with a Schott OG520 barrier filter, showing a neovascular membrane associated with exudative age-related macular degeneration. The retinal and more superficial vessels are better visualized with this dye and wavelength combination. The retinal arterial circulation is filled in the retina and the choroid, but the choroidal vessels become obscured early on due to dye leakage. The venous circulation is beginning to exhibit laminar flow along the outermost portion of the vessel lumen of the retinal veins. Some portions of the retina, such as inferior to the optic nerve head, appear to be nonperfused or to have perfusion delays. The darkened region surrounding the neovascular membrane is related to the light lost to scatter from the fluid buildup from the leaking new vessels. The membrane has many components, and the tissues are severely disrupted in this macula.
imaging in the normal eye. Figure 9.7 shows the main absorbing pigments in the retina and the relation to fundus reflectance in lightly and darkly pigmented fundi. Modeling the optical properties of these layers is ongoing [26, 27], but simple assumptions have already been shown to be incorrect. Neither Lambertian scatter nor specular reflection describes the properties of the retinal layers taken together, nor does Rayleigh scatter. Beer’s law can describe only those measurements for which some degree of uniformity is present in a structure, and anterior reflections must have been minimized. A group index of refraction is inaccurate for the combination of the retinal layers, particularly in the presence of proteins or lipids that commonly leak from the blood vessels. The sclera is the outermost layer, the “white of the eye.” Made largely of collagen, it provides some stability to the shape of the eye and is a strong scatterer. The entire outer structures of the eye are thin enough to be penetrated for imaging purposes, particularly for anterior segment structures [28], and penetration is improved by near-infrared illumination. Great care with technique must be used because scattering greatly degrades image quality. The scattering is not readily overcome by increasing the power of illumination
FUNDAMENTAL PROPERTIES OF THE RETINA
[Figure 9.5 labels: Retinal Artery; Retinal Vein; Choroidal Vein; Choroidal Artery Feeding Neovascular Membrane]
FIGURE 9.5 An early phase indocyanine green angiogram of the neovascular membrane acquired from the same patient as Figure 9.4 with the research SLO [17] using a Ti:SaF laser at 805 nm and a custom barrier filter that cuts on at 807 nm. The dye and instrument combination visualizes both retinal and choroidal vessels. The retinal arterial circulation is filled in the retina and the choroid. The venous circulation is already filled in much of the choroid, but is beginning to exhibit laminar flow along the outermost portion of the vessel lumen of the retinal veins. Some portions of the retina appear nonperfused or to have perfusion delays, such as inferior to the optic nerve head. The darkened region across the superior part of the image is related to the light lost due to scatter from the fluid buildup from the leaking new vessels. The tissues are severely disrupted in this macula. The choroidal veins have nearly the same concentration of oxygenated hemoglobin as the arteries.
[Figure 9.6 labels: Inner Limiting Membrane; Photoreceptors; RPE; Bruch's Membrane; Choroid; Sclera]
FIGURE 9.6 Diagram of the cross section of the retina, choroid, and sclera, showing the major tissues that result in light returning to an instrument in a normal eye. The deeper layers, in particular, produce long-range light scatter that can serve to retro- or side-illuminate the overlying structures. The relatively transparent and densely packed layers of the neural retina lie between the photoreceptors and the inner limiting membrane.
LAYERS OF THE FUNDUS
FIGURE 9.7 A diagram of the major absorbing pigments and the spectra obtained from reflectometry of the human fundus, illustrating particularly why short-wavelength imaging can fail in darkly pigmented eyes or when there are lens changes in older patients. (Top) Reflectance from a darkly pigmented (DP) and a lightly pigmented (LP) fundus, and the maxima of the macular pigment (MP); the short (S), middle (M), and long (L) wavelength sensitive cone photopigments; and the rod (R) photopigment. (Bottom) The spectra calculated for physiological thicknesses, adapted from [40], for the lens (Le), melanin (Me), oxygenated hemoglobin (HbO), hemoglobin with a low oxygen content (Hb), and water (H₂O). Note the greatly decreased spectral information in the darkly pigmented eye, and that the two spectra differ significantly in shape and not just overall absorption. The wavelength correspondence shows how the various pigments appear in broadband, visible wavelength light.
due to the light safety constraints of the eye, which is exquisitely designed to capture and use light. This is one reason that retinal imaging is usually constrained to be performed through the narrow pupil of the eye. Lining the sclera is the choroid, which contains large vessels and a substantial amount of melanin encapsulated into melanosomes that vary according to ethnic group and age. The choroid is innervated and also contains connective tissue and contractile bodies [29]. The choroid strongly absorbs and scatters light, due to the presence and complex distribution of the above structures
and its many arteries and veins. The choroidal veins are frequently >200 µm in diameter and become irregular in shape in patients with vascular changes. The choriocapillaris is the innermost part of the choroid. There is a marked loss of density of blood vessels in many forms of disease and with age. Moving inward toward the vitreous, Bruch's membrane is an acellular structure that is in contact with the choriocapillaris. Bruch's membrane is considered one of the major reflective surfaces of the human ocular fundus and is used as a landmark in depth or as a source of retroreflection in models of light-tissue interactions [26, 27, 30]. Bruch's membrane comprises five layers that are quite thin early in life but increase in thickness due to debris deposited over the life span [31–33]: the basement membrane of the choriocapillaris, the three porous structures that provide some degree of rigidity (the outer collagenous layer, the elastin layer, and the inner collagenous layer), and the basement membrane of the retinal pigment epithelium. Thus, the light-tissue interactions cannot remain constant in an individual over time, since there is a sharp increase in highly reflective molecules (lipids, proteins, and calcium) in Bruch's membrane combined with a decrease in the absorption due to fewer blood vessels in the choroid. Significant distortion of tissues is caused by both the thinning and the debris. For example, inclusions called drusen develop between the retinal pigment epithelial basal lamina and the inner collagenous layer, and reach as much as 400 µm in thickness, compared with the entire Bruch's membrane, which is less than 10 µm thick in young humans. The retinal pigment epithelium is a monolayer of cells that is bound to Bruch's membrane in the normal eye. Normally, the cells of the retinal pigment epithelium are so densely packed as to form tight junctions, providing the outer blood–retinal barrier.
They deliver nutrients to and remove waste from the photoreceptors that invaginate into them, but do not normally permit the choroidal circulation to come into contact with the retina, despite the lack of retinal blood vessels to supply the outermost portion of the retina. The retinal pigment epithelial cells also renew the portion of the photoreceptor containing the photopigment, the photoreceptor outer segment, by pinching off and digesting the tips in a process called phagocytosis. Debris remaining from this process can build up in the retinal pigment epithelial cells with aging or disease, leading to disruption in cellular function. The distribution of retinal pigment epithelial cells varies greatly with age across individuals up to 30 years, and with increasing age varies even more greatly across retinal locations [34]. The denser distributions, and therefore smaller diameter cells, are typically found by midadolescence in the macula and the central temporal retina. There continue to be lateral asymmetries. With increasing age, the density of cells becomes more heterogeneous, and the more eccentric retina has, on average, a lower cell density. The fairly sparse melanin contained within each cell is thought to provide some degree of protection from light damage or buffering functions, rather than forming a highly reflective surface similar to the tapetal structures found in nocturnal animals. However, in disease, this melanin distribution is distorted when there is clumping of
damaged retinal pigment epithelial cells. This is highly visible as a strong absorber and, when sufficiently dense, causes mirror-like reflections [35]. The retina comprises many different layers, as shown in Figure 9.8. The layers are defined by the neural cell types that each contains, with the glial structures running largely perpendicular to the layers. The glia in the eye (the Mueller cells, astrocytes, and microglia) are described below. Excellent micrographs and size details of the human retina in cross section and en face, far more detailed than space permits in the present chapter, can be found in [1–6]. The fragile neural cells of the human retina do not migrate long distances after early childhood, nor are they known to produce new cells through mitosis or meiosis. However, the neural connections are sometimes altered after early childhood, such as in the case of disease. In close contact with the retinal pigment epithelium, the photoreceptors provide the first real step of vision, capturing quanta with the photopigment-filled outer segments and transducing light energy into a neural signal that is transmitted by the inner segments. After the photopigment breaks down, it must be regenerated so that it can capture light again. This continuous process is called the visual cycle and depends upon the retinal pigment epithelium for recycling the portion of the molecule that is separated from the photoreceptor membranes. There are no retinal blood vessels in close contact with the photoreceptor outer segments to play a role in rapid regeneration. The spectra of both the photoreceptors and the retinal pigment epithelium are altered according to the state of the photopigment. Cones, named for their asymmetric shape when they are not densely packed, subserve daylight vision, color vision, and fine spatial vision.
Rods, named for their sticklike shape and small size, subserve primarily vision in dim illumination, although they have a role in color vision and motion. The photopigment maxima for a person with normal color vision are shown in the upper panel of Figure 9.7 for the three cone types: the long wavelength sensitive (L), middle wavelength sensitive (M), and short wavelength sensitive (S) cones. The rod (R) maximum is also shown. All photopigment spectra are broad, although illustrated by lines to indicate the location of the peak absorption. The shape of the spectra depends on the concentration. The central fovea is nearly rod free. As the density of cones decreases with increasing distance from the fovea, the density of rods increases, reaching a peak at about 17° eccentricity. However, the elliptical contours of equal density are asymmetric: The nasal side of the retina has a higher density than the temporal side at the same distance from the fovea. The density of cones and rods as a function of distance from the fovea, along with normal photopigment spectra, will be described in the context of light absorption and scattering across the retina in Section 9.5. The most densely packed photoreceptors in the fovea are largely L and M cones, with center-to-center cone spacings of about 1 µm, but increasing in diameter to 4 or 5 µm while decreasing in packing density as the distance from the fovea increases. The waveguide nature of the photoreceptors, also called
[Figure 9.8 layer labels: Pigment Epithelium; Outer Segments of Rods and Cones; Inner Segments of Rods and Cones; Outer Limiting Membrane; Outer Nuclear Layer; Rod and Cone Terminals; Outer Plexiform Layer; Inner Nuclear Layer; Inner Plexiform Layer; Ganglion Cell Layer; Optic Nerve Fiber Layer; Inner Limiting Membrane]
FIGURE 9.8 A diagram of the main layers of the human retina, showing that many layers contain tightly packed cells or processes in a multilevel organization [2]. The tight packing of cells and the cellular processes make it difficult to image individual cell bodies in an en face manner. No color was used in the illustration to emphasize that the unstained retina is relatively transparent. Micrographs showing more detail of the neural processes may be found in [1–5], with the thickness of each layer depending strongly on the distance from the fovea and the age of the donor. (From Boycott and Dowling [2]. Reprinted with permission of The Royal Society.)
directionality or the Stiles–Crawford effect of the first kind, is thought to be a more significant phenomenon for cones than for rods [36–38]. The cones at about 2.5° eccentricity outside of the central fovea have the most selective guidance of light [37, 38]. The returning light that is guided by the cones retains its polarization, implying that a single reflector beneath the cones is likely responsible for a significant portion of the light leaving the eye from the central retina [39]. This strong guidance of light back through the photoreceptors helps to increase the contrast between the photoreceptors and the surrounding matrix and tissues for purposes of imaging experiments, in sharp distinction to most cells in the retina and the retinal pigment epithelium. In the central fovea, the elongated cone photoreceptor outer segments, which are densely packed, provide the highest concentration of photopigment per unit of retinal area [8–9, 17–19]. The outer segment length decreases with distance from the fovea, from 40 to 60 µm at the fovea for L and M cones to about 10 µm in the periphery [4]. The optical density of the cone photopigment often exceeds 0.4 log units at the central fovea for wavelengths near the maxima of the long and middle wavelength sensitive photopigment spectra. The distribution of cone photopigment across the retina may be thought of as a mountain at the foveal center for young adult subjects with normal color vision, but more of a hill or even a crater in middle-aged or older subjects [9, 12]. At 2.5° eccentricity, where the photoreceptors are slightly broader and serve as efficient guides of light in young people, there may not be a high concentration of photopigment in older subjects. Disease further alters the photopigment distribution and decreases the optical density [9, 22, 27].
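The optical density figures quoted above translate directly into the fraction of light absorbed. A minimal numerical sketch of this Beer–Lambert relation follows; it is a generic calculation, not a model taken from this chapter, and the dilute-pigment density of 0.05 log units is an illustrative assumption:

```python
def fraction_absorbed(optical_density):
    """Fraction of incident light absorbed in a single pass through a
    pigment layer with the given optical density in log10 units,
    from the Beer-Lambert relation T = 10**(-D)."""
    transmitted = 10.0 ** (-optical_density)
    return 1.0 - transmitted

# A foveal L/M cone photopigment density of 0.4 log units corresponds
# to roughly 60% of incident quanta absorbed in one pass.
print(round(fraction_absorbed(0.4), 2))
# A dilute, largely bleached pigment (say, D = 0.05) absorbs only ~11%.
print(round(fraction_absorbed(0.05), 2))
```

The same arithmetic illustrates why the photopigment state matters for imaging: a layer at 0.4 log units transmits about 2.5 times less light per pass than a fully bleached one.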
Both the absorption per se and the differing distributions across the retina can lead to striking artifacts when performing imaging studies to measure other pigments without first bleaching the photopigment to a dilute concentration. Rod photopigment has an overall lower optical density, rarely exceeding 0.2 log units when referred to 514 nm (Argon green) [27]. The rod photopigment has a smooth distribution outside the fovea, except in the presence of diseases that damage the photoreceptors. As there are no photoreceptors associated with the optic nerve head, there is no measurable photopigment there. This region of 5° and sometimes up to 10° of visual angle is called the blind spot. A cascade of events eventually leads to a graded signal being transmitted to both the horizontal cells and the bipolar cells. In the central fovea, this next layer of cells is laterally displaced so that only the inner segments and the axons lie above the elongated outer segments. It is in this region that the macular pigment (or macular yellow) is found, mainly in the cone axons. It has been hypothesized that the macular pigment acts as a filter to protect the cones from short-wavelength light. However, the photopigment that is photolabile, that is, readily broken apart by short-wavelength light, is that of the S cones; yet there are few S cones in the central fovea where the macular pigment is densest. Further, the cone axons are not in close contact with the outer segments where the photopigment is contained. All photoreceptor outer segments are lipid rich, regardless of their spectral
absorption properties or eccentricity in the retina. The inner segments might also be susceptible to light damage, again widely distributed over the retina. Therefore, as the cone axons are most highly concentrated in the fovea, and macular pigment is associated with them, the resulting filter effect may be chance. Further, there are many other pigments under the photoreceptors that might absorb short-wavelength light, such as blood and melanin. The photoreceptor outer segments receive their metabolic support from the choroidal circulation, while the inner segments benefit from the retinal circulation once outside the central fovea. The Mueller cells run perpendicular to the layers, helping to form a rigid support network and serving as a contact to the retinal blood vessels and other cells not in a given layer. The Mueller cells are the permanent glial cells in the retina, and their endfeet form the inner limiting membrane. That is, they run from the inner limiting membrane to the outer limiting membrane. Microglia are much smaller and migratory. Continuing inward toward the vitreous, the outer limiting membrane separates the many columns of multiple nuclei in the outer nuclear layer from the photopigment in the outer segments. Next comes the outer plexiform layer, and then the inner nuclear layer, which contains the horizontal and bipolar cells. These nuclei are also multilevel and interdigitated, rather than forming a monolayer like the retinal pigment epithelium. Then comes the inner plexiform layer, and finally the ganglion cell layer, with up to six levels of cells. The axons from the retinal ganglion cells form into bundles and exit the eye through the optic nerve head to carry the visual signal to the brain. The astrocytes, another type of glial cell, are found in this layer and are most concentrated near the optic nerve head.
The vitreo-retinal interface, at which the nerve fiber bundles, the inner limiting membrane, and the collagenous vitreous meet, is the strongest reflector in the fundus. It obscures tissues beneath it. The collagenous vitreous detaches gradually with age; the portions that float freely above the retina are called floaters. These cast shadows that make quantitative imaging difficult in some cases, causing one of the worst artifacts in the imaging of the aged eye.

9.4 SPECTRA
The layers of the retina and choroid contain two main absorbers, blood and melanin. The spectra of these structures, as well as the absorption due to lens changes, hemoglobin, and oxygenated hemoglobin, were computed for layers of the approximate thickness of the normal adult retina, as shown in Figure 9.7 [40]. In general, the absorbers may not be treated as a uniform sample across the retina, as there are large spatial differences in the distribution of blood, cone photopigments, and macular pigment. The main absorbers, then, have different effects according to the illumination wavelength, as shown in Figure 9.9 [40]. If the images from three wavelengths are combined, the color is not realistic without color balance manipulation. Even then, the images produced in this manner may differ from color fundus photography (as shown
[Figure 9.9 panel labels: 830 nm; 633 nm (drusen, some nerve fiber layer); 543 nm (retinal vessels); 514 nm (some macular pigment); 488 nm (macular pigment, nerve fiber layer); 633, 543, and 488 nm images combined]
FIGURE 9.9 Images centered on the human macula, acquired with laser illumination over a range of wavelengths. The bottom right panel is the combination of three colors: red (633 nm), green (543 nm), and blue (488 nm). (Figure also appears in the color figure insert.)
FIGURE 9.10 A color fundus photograph of the patient in Figures 9.4 and 9.5, showing that the larger retinal vessels are seen, but that the choroidal ones (other than the largest ones that feed and drain the neovascular membrane) are obscured. (Figure also appears in the color figure insert.)
in Fig. 9.10), if the imaging method differs between the monochromatic images and the color image. The strongest light return in the normal retina is from the vitreo-retinal interface and nerve fiber layer, but this is altered in pathological retinal conditions, such as macular holes [41], macular cysts [42], or pigment epithelial detachments [43]. In macular holes, the superficial retinal tissue is missing, so that a stronger than normal light return is available from the deeper layers. In macular cysts, both the top and bottom of rounded cysts contained within the retina return light, due to light-tissue interactions at the convex and concave surfaces of the cysts where normally thinner and flatter retina would be. Such cysts are often found in edematous tissues, which do not have the specular properties that a normal retinal surface has. Another example is one form of pigment epithelial detachment, when there is a split within Bruch's membrane that is filled with fluid, elevating the retinal surface as much as 600 µm. Light does not readily pass through the dense fluid. In severe cases, the neurosensory retina detaches as well. The entire region of fluid-filled tissues can appear darker than the surrounding tissues, and the light from the deeper layers is blocked. Consequently, since the choroid contains a larger proportion of the blood vessels and melanin than the overlying retina, the spectral signature of the fundus changes in the case of these fluid-filled lesions, as illustrated in Figure 9.10, which shows a choroidal neovascular membrane. Even without marked pathological features, the fundus of an older eye often absorbs less light and is altered in its spectral properties, due to the gradual loss of blood vessels, and therefore less absorption due to blood. Features such as the major choroidal vessels become more prominent with age, and the inner limiting membrane loses the sheen found in a young human.
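The color-balance manipulation needed to combine monochromatic images into a pseudocolor composite, as in the bottom right panel of Figure 9.9, can be sketched as follows. This is an illustrative recipe, not the processing actually used for that figure; the per-channel min-max normalization is an assumption:

```python
import numpy as np

def combine_monochromatic(red_633, green_543, blue_488):
    """Stack three monochromatic fundus images into an RGB composite,
    normalizing each channel independently as a crude color balance.
    Without some such manipulation the combined color is not realistic,
    and even with it the result may differ from color fundus photography."""
    channels = []
    for img in (red_633, green_543, blue_488):
        img = img.astype(float)
        span = img.max() - img.min()
        channels.append((img - img.min()) / span if span > 0 else np.zeros_like(img))
    return np.dstack(channels)

# Toy 2 x 2 "images" standing in for 633-, 543-, and 488-nm frames.
r = np.array([[10, 20], [30, 40]])
g = np.array([[0, 128], [192, 255]])
b = np.array([[5, 5], [5, 5]])
rgb = combine_monochromatic(r, g, b)
print(rgb.shape)  # (2, 2, 3)
```

Because each channel is stretched to its own full range, a wavelength at which the fundus returns little light (such as 488 nm in a darkly pigmented eye) is amplified relative to the others, which is one reason the composite color cannot be taken as realistic.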
The major features of the fundus change according to the wavelength of illumination, as shown in Figure 9.9 for a normal macula and Figures 9.10, 9.11, and 9.12 for a patient with age-related macular degeneration. In the case of fluorescence, there is variation according to the wavelengths providing illumination and the wavelengths sampled for emission. The major absorbers in the fundus are often more concentrated within a structure or region. Thus, although there is blood within the four layers of capillaries in the retina outside the central fovea, this is not as readily demonstrated as within the major retinal arteries. Comparing the spectra of hemoglobin and oxygenated hemoglobin, there are large differences expected at some illumination wavelengths and virtually none at others, and the resulting difference spectra can provide information concerning the oxygenation within vessels, so long as reflectance artifacts are removed [44]. Veins often appear darker, even at wavelengths greater than 800 nm, due to their larger size and consequently greater absorption.

9.5 LIGHT SCATTERING
Some retinal and subretinal layers and tissues are considered to be specular reflectors, while others serve as scatterers. When most of the light that strikes
FIGURE 9.11 A confocal image of the patient in Figures 9.4, 9.5, and 9.10, acquired with the research SLO [17] using a Ti:SaF laser at 860 nm. Despite this wavelength appearing dim to the patient, the neovascular membrane is well visualized as ringed with fluid. The feeder and drainer vessels of the membrane are seen, as are the bright exudates that are comprised of lipids and proteins that have leaked from the damaged blood vessels over time.
[Figure 9.12 labels: Retinal Artery; Retinal Vein; Neovascular Membrane; Nerve Fiber Bundles]
FIGURE 9.12 A confocal image of the patient in Figures 9.4, 9.5, 9.10, and 9.11, acquired with the research SLO [17] using an Argon 488-nm illumination source and no barrier filter. This wavelength and imaging technique combination emphasizes the superficial layers, with the major retinal vessels well visualized, as well as the more superficial vessels of the neovascular membrane. The neovascular membrane is visualized primarily by its superficial features. The feeder and drainer vessels of the membrane and the retinal vessels are seen, as are the bright exudates that are composed of lipids and proteins that have leaked from the damaged blood vessels over time.
a structure returns through the narrow pupil to an instrument, having only struck the structure and changed optical path once, the light-tissue interaction is called single scattering or direct backscattering. Examples are found in Figures 9.2 and 9.3: The glint off the center of the large retinal vessels is one example of single scattering, and the reflection from the vitreo-retinal interface is another. In contrast, when most of the light that strikes a structure returns through the pupil only after having struck several different tissues, multiple scattering has occurred. Figure 9.13 illustrates the same eye shown in Figures 9.5 and 9.11, collected at similar wavelengths, except that the imaging technique collected multiply scattered light using an indirect imaging mode without dye. Note that the retinal vessels so prominent in Figures 9.2, 9.3, and 9.4 are barely visible, and almost none of the features from the superficial retina can be seen. Instead, the borders of the new vessel membrane beneath the retina are emphasized. There are a variety of techniques to obtain useful scattered light images of the eye, so that subretinal and other features can be emphasized and the superficial retinal layers downplayed, as illustrated in Figures 9.14 and 9.15. One of these techniques is to use an aperture in the plane of focus, but instead of using a small and centered aperture, use an annular aperture or an offset
[Figure 9.13 labels: Retinal Artery; Retinal Vein; Border of Neovascular Membrane; Traction Lines Due to Neovascular Membrane]
FIGURE 9.13 Indirect mode image of the patient in Figures 9.4, 9.5, 9.10, 9.11, and 9.12, acquired with the research SLO [17] using a Ti:SaF laser at 890 nm. Despite this wavelength appearing dim to the patient, the neovascular membrane is well visualized along with the outer ring of fluid. The radiating traction lines indicate that the neovascular membrane has considerable elevation. The fluid and hemorrhage in the region beneath the superior retina indicate the severity of the leakage, as this wavelength readily passes through thin layers of blood. Note that the smaller retinal vessels are not seen, but that the larger retinal vessels and optic nerve head provide adequate landmarks for locating the membrane.
FIGURE 9.14 Computed images are built up by combining data across conditions, pixel by pixel, including varying focal plane for topographic imaging, wavelength for spectroscopy, polarization properties for polarimetry, and the like.
FIGURE 9.15 Examples of confocal aperture types and the light returning from the fundus that passes through them. The long, black arrow in the left panels represents illumination light striking the retina and choroid. The smaller arrows indicate potential return paths.
[Figure 9.15 panel labels: Closed Confocal; Open Confocal; Indirect]
one [9, 12, 25, 30, 40, 45]. Another technique is to use an illumination source that is offset with respect to an aperture, so that it blocks directly backscattered light traveling on axis, while passing multiply scattered light that is traveling to the side of the aperture [46–48]. By the use of multiply scattered light, fundus features have been visualized that are not seen by other methods. A variety of imaging techniques can be used to analyze data for different wavelengths or as the result of the optical separation of the specular return
from multiply scattered light, as shown in Figure 9.14. The best known technique in the human eye is confocal imaging, in which a small aperture is placed in a plane conjugate to the retinal plane of focus. This method reduces the amount of out-of-focus light because such light does not pass through the aperture, as shown in Figure 9.15. The larger the aperture, the greater the proportion of scattered light allowed through. With an annular aperture, or an offset aperture, primarily scattered light reaches the detector. These two forms of imaging are called direct mode and indirect mode, respectively, or confocal and multiply scattered light imaging. The resulting images are demonstrated in Figures 9.11 and 9.13, respectively. When the illumination beam is scanned, there is less long-range scatter [14, 15, 49], which is particularly important in the near-infrared wavelength range, where absorption is much less. An instrument that uses these techniques is called a confocal scanning laser ophthalmoscope (SLO). By using a small aperture, the amount of light reaching the detector is often reduced, but it is mainly the scattered light that is blocked by the aperture. There are a variety of instruments that differ greatly in the ratio of the aperture diameter to the illumination spot diameter [47]. If the aperture diameter is only about two to three times the diameter of a tightly focused illumination spot (for example, about 30 µm compared to 10 µm on the retina), the instrument is likely designed for confocal sectioning. In contrast, for instruments that are light efficient but use a confocal aperture to reduce out-of-plane scatter, the aperture diameter is likely to be 10 or more times the diameter of a small scanned spot, or both the aperture and the spot may be relatively large. Locations within the fundus with a strong change in the index of refraction, such as the vitreo-retinal interface, have long been used to measure retinal height changes in normal and diseased eyes [15, 16].
This has been clinically useful in the study of glaucoma [14, 15] and macular disease [12, 41–43, 50, 51]. The original assumption that the axial transfer function in depth was too broad in the human eye to allow the detection of two or more interfaces has recently been shown to be incorrect for both macular cysts and age-related macular degeneration [42, 50–52]. With the optical correction of the anterior segment aberrations, the resolution in depth will be further improved, and more information will be gained about ever smaller structures [53–55]. This will be discussed further in Chapter 10. While this might seem obvious, it is also important to note that the contrast of large but poorly visualized structures might be increased sufficiently to make them visible. Finally, as the depth resolution is improved, thin structures or those that vary in depth may become visible. Light that is absorbed is unavailable to be scattered, and this differs greatly according to the region of the fundus imaged and the illumination conditions. Sections 9.1 and 9.2 described the locations of the major features when viewing the eye in an en face manner. Sections 9.3 and 9.4 described some of the major absorbers in the retina. Macular pigment, which overlies the photoreceptor
outer segments, can obscure the deeper layers when short-wavelength illumination is used. Similarly, imaging through blood vessels or pooled blood remains problematic at wavelengths with relatively greater proportions of absorption, regardless of the techniques used. Conversely, if there is greater absorption, then there is the possibility of reducing long-range scatter. High-contrast retinal images of the blood vessels in the superficial retinal layers of young, healthy eyes are obtained by using wavelengths close to the blood absorption peaks, since the contribution from the deeper layers is reduced by absorption in the choroid. Illumination intensity and wavelength also play a key role in light scatter due to the absorption by photopigments. The concentration of photopigment is reduced by an intense illumination light and a sufficient exposure duration to cause bleaching, which leads to a greater proportion of light returning from the photoreceptors. The distribution of photoreceptors (Fig. 9.16), together with the optical density and absorption spectra of the photopigments (Fig. 9.17) and the waveguide properties of the photoreceptors [36–39], determines how much light will be absorbed by the photoreceptors. For the L or M cone photopigments, illuminating with 20 mW/cm² on the retina near the maximum wavelength range for absorption, under steady-state illumination conditions, will cause significant bleaching. More intense illumination, 140 mW/cm², will lead to the bleaching of all but a small fraction of the L and M cone photopigments. When characterizing the sensitivity to light after exposure to bright lights, it is necessary to ensure that photopigment has been bleached. A light might appear bright to a human subject, yet not actually be reducing the cone photopigments by an appreciable amount.
Further, when measuring other aspects of the fovea, such as macular pigment [9], it is useful to minimize cone and rod photopigment at the beginning of an experiment. This provides more light return from the retina and reduces variability over time, if the measurement light can bleach photopigment. For rod photoreceptors, the steady-state illumination on the retina required to bleach a large proportion of the photopigment is about 185 mW/cm² when a wavelength about 15 nm from the peak absorption is used.
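Steady-state bleaching of the kind described above is commonly modeled with first-order kinetics, in which the equilibrium bleached fraction is p = I / (I + I0), with I0 the half-bleaching constant. A minimal sketch follows; the I0 value used in the example is a hypothetical placeholder, not a constant taken from this chapter:

```python
def steady_state_bleached_fraction(irradiance, half_bleach_constant):
    """Equilibrium fraction of photopigment bleached under constant
    illumination, from first-order bleaching kinetics:
        p = I / (I + I0)
    where I0 is the irradiance that bleaches half the pigment at
    steady state (units must match between the two arguments)."""
    return irradiance / (irradiance + half_bleach_constant)

# With a hypothetical half-bleach constant of 20 mW/cm^2, an irradiance
# equal to I0 leaves half the pigment bleached at equilibrium...
print(steady_state_bleached_fraction(20.0, 20.0))    # 0.5
# ...while a much brighter 140 mW/cm^2 bleaches the large majority.
print(steady_state_bleached_fraction(140.0, 20.0))   # 0.875
```

The same relation shows why a light that appears bright may bleach little pigment: when I is well below I0, the bleached fraction p remains small.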
9.6 POLARIZATION
Any optical change may be used to provide information about retinal structures, as illustrated in Figure 9.18. For instance, polarization has been used for thickness measurements of the retinal nerve fiber layer [60]. For any fundus tissue to be characterized by its polarization properties, the entire optical system of the eye must be modeled, so that the changes due to the cornea can be separated from those of the retinal nerve fiber layer [44, 61, 62] or the photoreceptors [38]. In general, the nerve fiber layer is thought to be linearly birefringent, with the lens having little effect. This birefringence comes at least in part from the form birefringence of the microtubules within
FUNDAMENTAL PROPERTIES OF THE RETINA
FIGURE 9.16 The distribution of photoreceptors across the retina in photoreceptors/mm², for the photoreceptors outside the central 6° of visual angle [56] and for the photoreceptors within the central 6° of visual angle [57]. The peripheral data are shown along the superior, nasal, inferior, and temporal meridians, indicated as S, N, I, and T, respectively. The peripheral data are, on average, higher, possibly reflecting a difference in technique or study sample. (Left) L + M cones. (Middle) S cones. (Right) Rods. At 10° eccentricity, the nasal curve is plotted as 0 to indicate the location of the optic nerve head, which lacks photoreceptors.
FIGURE 9.17 Photopigment spectral sensitivities, showing the broad, overlapping spectra of normal mammalian photopigments. (Left) The log spectral sensitivity for the S, M, and L cone types, based on data from 2°, centrally viewed stimuli [58]. (Right) An example of rod spectral sensitivity [59]. The breadth of the curves varies with photopigment concentration: very dilute concentrations have narrower curves than higher concentrations.
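The concentration dependence noted in the caption is a self-screening effect: absorptance is 1 − 10^(−D·a(λ)), so a higher peak optical density D flattens the peak and broadens the normalized curve. A small sketch with a hypothetical Gaussian extinction spectrum (the spectrum and density values are illustrative, not data from the figure):

```python
import numpy as np

wl = np.linspace(450, 650, 401)          # wavelength (nm)
a = np.exp(-0.5 * ((wl - 550) / 40)**2)  # normalized extinction, peak at 550 nm

def normalized_absorptance(density):
    """Absorptance 1 - 10**(-D*a), normalized to its own peak.
    Self-screening flattens the peak, broadening the curve as D grows."""
    A = 1.0 - 10.0**(-density * a)
    return A / A.max()

def halfwidth_nm(curve):
    """Full width at half maximum, in nm, on the sampled grid."""
    above = wl[curve >= 0.5]
    return above[-1] - above[0]

for D in (0.05, 0.5):
    print(f"D = {D}: FWHM = {halfwidth_nm(normalized_absorptance(D)):.0f} nm")
```

Running this shows the dilute (low-D) curve is markedly narrower than the dense (high-D) curve, as the caption states.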
FIGURE 9.18 Schematic of polarimetry computations to obtain directly backscattered light images from those image components that retain polarization, which theoretically have a narrower point spread function than typical confocal images. Similarly, the depolarized light images are a type of multiply scattered light image, with the assumption that light loses its polarization through multiple interactions with tissues before reaching the detector.
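The per-pixel analysis sketched in Figure 9.18 can be illustrated as follows; using (max − min)/2 as the modulation amplitude is a stand-in for the actual fitting procedure of [21], and the synthetic data are purely illustrative:

```python
import numpy as np

def polarimetry_images(stack):
    """Per-pixel analysis of frames recorded through a crossed analyzer
    at K illumination polarization angles; stack has shape (K, H, W).

    Depolarized image: minimum over angles -- light still present in the
    crossed channel at every angle has lost its polarization.
    Polarization-retaining image: amplitude of the modulation with
    illumination angle, here taken simply as (max - min) / 2."""
    depolarized = stack.min(axis=0)
    retaining = (stack.max(axis=0) - stack.min(axis=0)) / 2.0
    return depolarized, retaining

# Tiny synthetic example: 8 angles, 2x2 pixels.
angles = np.linspace(0, np.pi, 8, endpoint=False)
modulated = 10 + 5 * np.cos(2 * angles)          # birefringent pixel
flat = np.full_like(angles, 10.0)                # depolarizing pixel
stack = np.stack([np.array([[m, f], [f, m]])
                  for m, f in zip(modulated, flat)])
depol, retain = polarimetry_images(stack)
```

In this toy case the birefringent pixels show a large modulation amplitude and the depolarizing pixels show none, mirroring the separation into polarization-preserving and depolarization images.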
the nerve fiber bundles. This allows probing of this layer with polarization techniques to a greater extent than ocular tissues that randomize the polarization of the light illuminating them. Light returning from healthy, foveal photoreceptors also can retain its polarization, indicating a relatively direct return of light under some conditions. However, the human eye has not been shown to distinguish photons on the basis of polarization for the purposes of seeing.

9.7 CONTRAST FROM DIRECTLY BACKSCATTERED OR MULTIPLY SCATTERED LIGHT

Scattered light images have been obtained from polarimetry data [21]. An image that reveals the birefringence is computed from the modulation of the light return at a given pixel according to the polarization angle of the illumination, as shown in Figure 9.19. For the optic nerve head, the thicker bundles have greater retardance, with measurements again assuming that the light returning to the measurement device comes from a strong specular return from beneath the nerve fiber layer rather than from a strong reflector lying
FIGURE 9.19 The major optic nerve head features marked on a 15° × 15° visual angle, near-infrared (780 nm) image of a patient with glaucoma, computed from the same polarimetry image series as Figure 9.3, as described in [21]. This image emphasizes the birefringence of the nerve fiber bundles because the amplitude of each pixel is computed from the modulation due to the change in the polarization angle of the illumination. White indicates larger retardance, related to thicker nerve fiber bundles at those locations.
FIGURE 9.20 The major optic nerve head features marked on a 15° × 15° visual angle, near-infrared (780 nm) image of a patient with glaucoma, computed from the same polarimetry image series as Figures 9.3 and 9.19, as described in [21]. This image deemphasizes the birefringence of the nerve fiber bundles while revealing the layers beneath them because the amplitude of each pixel is computed by subtracting out the modulation due to the change in the polarization angle of the illumination.
over the nerve fiber layer. A depolarized light image is shown in Figure 9.20, which emphasizes quite different features, reminiscent of images in which confocal apertures of different shapes or positions produce a scattered light image. This depolarized light image is not merely the result of a single image using crossed polarizers to reduce glint; rather, it is obtained from the minimum value of light detected through a crossed detector for a series of polarization angles of the illumination [21]. Similar to the other multiply scattered light images, the depolarized light images reveal features in deeper fundus layers usually obscured by the overlying retinal nerve fiber layer. Another common method to separate directly backscattered from multiply scattered light is to use interferometry. Recent devices use low-coherence sources to define layers as local amplitude maxima at strong index of refraction changes, hence the name optical coherence tomography. The depth resolution of these instruments is under 30 µm in a layered glass sample, about a factor of 10 better than confocal measurements without correction of optical aberrations beyond sphere. However, the actual measurements are based on optical path length, not tissue path length. Many techniques use a group index of refraction, which is often only semiquantitative in the presence of exudation with lipid or protein. Further, with disruption of the layers of the
eye, it is difficult to make cross-sectional images due to alignment errors. There are a variety of hybrid techniques, both in the laboratory and available commercially, such as a scanning laser ophthalmoscope combined with an optical coherence tomography device [63]. One advantage of such a device is the accurate placement of the coherence measurement system. Another hybrid device is a polarization-sensitive optical coherence tomography device [64]. This device can use the coherence information as a range finder to obtain polarization information. The design of most coherent instruments is aimed at using the interfaces where there is a strong index of refraction change, and thus works well in the superficial retina. However, Figure 9.13 demonstrates that perhaps the most useful information about the location of a neovascular membrane is unrelated to this superficial information, and that useful scattered light is discarded by interferometry methods rather than controlled for use in imaging.

9.8 SUMMARY
The retina contains millions of cells of varying types, and even within a type the cells can vary in size. Many retinal structures vary in their light-tissue interactions, with some having strong index of refraction changes that permit depth or thickness measurements. To visualize structures in the relatively transparent retina, it is necessary to make use of absorption, scattering, polarization, or (as a less desirable means) a contrast medium. Some structures are viewed best en face, some in cross section, and some both ways. Techniques that work in young, well-structured, and relatively clear eyes prove difficult in the presence of distorted tissues or unwanted absorbers such as hemorrhages.

Acknowledgments

Supported by EY07624, EB002346, EY04395, and EY014375. The author thanks Drs. Stephen Burns and Francois Delori and Peter Mallen for their long-term collaboration on figures, and Dr. John Dowling for his section of the human retina. Detailed, helpful comments were given by Mr. Michael Cheney, Mr. Anthony Morandi, Dr. Quinn Smithwick, and Dr. Anke Weber. Dr. Weber and Mr. Cheney helped in figure revision.

REFERENCES

1. Rodieck RW. The First Steps in Seeing. Sunderland, MA: Sinauer Associates, 1998.
2. Boycott BB, Dowling JW. Organization of the Primate Retina: Light Microscopy. Phil. Trans. Roy. Soc. Lond. B. 1969; 255: 109–184.
3. Kolb H, Fernandez E. The Organization of the Retina and Visual System. Available at: http://www.webvision.med.utah.edu. Accessed August 31, 2003.
4. Stockman A, Sharpe LT. CVRL Colour & Vision Database at the Institute of Ophthalmology in London. Available at: http://cvrl.ioo.ucl.ac.uk/. Accessed August 31, 2003.
5. Kaufman PL, Alm A. Adler's Physiology of the Eye, 10th ed. St. Louis, MO: Mosby, 2002.
6. Curcio CA, Sloan KR, Packer O, et al. Distribution of Cones in Human and Monkey Retina: Individual Variability and Radial Asymmetry. Science. 1987; 236: 579–582.
7. Yuodelis C, Hendrickson AE. A Qualitative and Quantitative Analysis of the Human Fovea During Development. Vision Res. 1986; 26: 847–856.
8. Marcos S, Tornow R-P, Elsner AE, Navarro R. Foveal Cone Spacing and Cone Photopigment Density Difference: Objective Measurements in the Same Subjects. Vision Res. 1997; 37: 1909–1915.
9. Elsner AE, Burns SA, Beausencourt E, Weiter JJ. Foveal Cone Photopigment Distribution: Small Alterations Associated with Macular Pigment Distribution. Invest. Ophthalmol. Vis. Sci. 1998; 39: 2394–2404.
10. Snodderly DM, Weinhaus RS, Choi JC. Neural-Vascular Relationships in Central Retina of Macaque Monkeys (Macaca fascicularis). J. Neurosci. 1992; 12: 1169–1193.
11. Bradley A, Zhang H, Applegate RA, et al. Entoptic Image Quality of the Retinal Vasculature. Vision Res. 1998; 38: 2685–2696.
12. Elsner AE, Moraes L, Beausencourt E, et al. Scanning Laser Reflectometry of Retinal and Subretinal Tissues. Opt. Express. 2000; 6: 243–250.
13. Gorrand JM, Delori FC. Reflectance and Curvature of the Inner Limiting Membrane at the Foveola. J. Opt. Soc. Am. A. 1999; 16: 1229–1237.
14. Dreher AW, Tso PC, Weinreb RN. Reproducibility of Topographic Measurements of the Normal and Glaucomatous Optic Nerve Head with the Laser Tomographic Scanner. Am. J. Ophthalmol. 1991; 111: 221–229.
15. Kruse FE, Burk RO, Volcker HE, et al. Reproducibility of Topographic Measurements of the Optic Nerve Head with Laser Tomographic Scanning. Ophthalmology. 1989; 96: 1320–1324.
16. Hee MR, Izatt JA, Swanson EA, et al. Optical Coherence Tomography of the Human Retina. Arch. Ophthalmol. 1995; 113: 325–332.
17. Elsner AE, Burns SA, Hughes GW, Webb RH. Reflectometry with a Scanning Laser Ophthalmoscope. Appl. Opt. 1992; 31: 3697–3710.
18. Burns SA, Elsner AE. Color Matching at High Illuminances: Photopigment Optical Density and Pupil Entry. J. Opt. Soc. Am. A. 1993; 10: 221–230.
19. Elsner AE, Burns SA, Webb RH. Mapping Cone Photopigment Density in Humans. J. Opt. Soc. Am. A. 1993; 10: 52–58.
20. Elsner AE, Burns SA, Weiter JJ. Retinal Densitometry in Retinal Tears and Detachments. Clin. Vis. Sci. 1992; 7: 489–500.
21. Burns SA, Elsner AE, Mellem-Kairila MB, Simmons RB. Improved Contrast of Subretinal Structures Using Polarization Analysis. Invest. Ophthalmol. Vis. Sci. 2003; 44: 4061–4068.
22. Chen J-F, Elsner AE, Burns SB, et al. The Effect of Eye Shape on Retinal Responses. Clin. Vis. Sci. 1992; 7: 521–530.
23. Wolf S, Wald KJ, Elsner AE, Staurenghi G. Indocyanine Green Choroidal Videoangiography: A Comparison of Imaging Analysis with the Scanning Laser Ophthalmoscope and the Fundus Camera. Retina. 1993; 13: 266–269.
24. Wald KJ, Elsner AE, Wolf S, et al. Indocyanine Green Videoangiography for the Imaging of Choroidal Neovascularization Associated with Macular Degeneration. In: Jakobiec FA, Adamis AD, Volpe NJ, eds. International Ophthalmology Clinics. Boston, MA: Little, Brown, 1994, pp. 311–325.
25. Hartnett ME, Weiter JJ, Staurenghi G, Elsner AE. Deep Retinal Vascular Anomalous Complexes in Advanced Age-Related Macular Degeneration. Ophthalmology. 1996; 103: 2042–2053.
26. Delori FC, Pflibsen K. Spectral Reflectance of the Human Ocular Fundus. Appl. Opt. 1989; 28: 1061–1077.
27. Berendschot TT, DeLint PJ, van Norren D. Fundus Reflectance—Historical and Present Ideas. Prog. Retin. Eye Res. 2003; 22: 171–200.
28. Radhakrishnan S, Rollins AM, Roth JE, et al. Real-Time Optical Coherence Tomography of the Anterior Segment at 1310 nm. Arch. Ophthalmol. 2001; 119: 1179–1185.
29. Poukens V, Glasgow BJ, Demer JL. Nonvascular Contractile Cells in Sclera and Choroid of Humans and Monkeys. Invest. Ophthalmol. Vis. Sci. 1998; 39: 1765–1774.
30. Elsner AE, Bartsch DU, Weiter JJ, Hartnett ME. New Devices in Retinal Imaging and Functional Evaluation. In: Freeman W, ed. Practical Atlas of Retinal Disease and Therapy, 2nd ed. New York: Lippincott-Raven, 1998, pp. 19–55.
31. Spraul CW, Lang GE, Grossniklaus HE. Morphometric Analysis of the Choroid, Bruch's Membrane, and Retinal Pigment Epithelium in Eyes with Age-Related Macular Degeneration. Invest. Ophthalmol. Vis. Sci. 1996; 37: 2724–2735.
32. Hageman GS, Mullins RF. Molecular Composition of Drusen as Related to Substructural Phenotype. Mol. Vis. 1999; 3: 5–28.
33. Ruberti JW, Curcio CA, Millican CL, et al. Quick-Freeze/Deep-Etch Visualization of Age-Related Lipid Accumulation in Bruch's Membrane. Invest. Ophthalmol. Vis. Sci. 2003; 44: 1753–1759.
34. Harman AM, Fleming PA, Hoskins RV, Moore SR. Development and Aging of Cell Topography in the Human Retinal Pigment Epithelium. Invest. Ophthalmol. Vis. Sci. 1997; 38: 2016–2026.
35. Kelley LM, Walker JP, Wing GL, et al. Scanning Laser Ophthalmoscope Imaging of Age Related Macular Degeneration and Neoplasms. J. Ophthalmol. Photo. 1997; 19: 89–94.
36. Stiles WS, Crawford BH. The Luminous Efficiency of Rays Entering the Eye Pupil at Different Points. Proc. Roy. Soc. Lond. B. 1933; 112: 428–450.
37. Gorrand JM, Delori FC. A Reflectometric Technique for Assessing Photoreceptor Alignment. Vision Res. 1995; 35: 999–1010.
38. Burns SA, Wu S, He J, Elsner AE. Variations in Photoreceptor Directionality across the Central Retina. J. Opt. Soc. Am. A. 1997; 14: 2033–2040.
39. Burns SA, Wu S, Delori F, Elsner AE. Direct Measurement of Human Cone Photoreceptor Alignment. J. Opt. Soc. Am. A. 1995; 12: 2329–2338.
40. Elsner AE, Burns SA, Weiter JJ, Delori FC. Infrared Imaging of Subretinal Structures in the Human Ocular Fundus. Vision Res. 1996; 36: 191–205.
41. Snodderly DM, Auran JD, Delori FC. The Macular Pigment II. Spatial Distribution in Primate Retina. Invest. Ophthalmol. Vis. Sci. 1984; 25: 674–685.
42. Beausencourt E, Remky R, Elsner AE, et al. Infrared Scanning Laser Tomography of Macular Cysts. Ophthalmology. 2000; 107: 375–385.
43. Kunze C, Elsner AE, Beausencourt E, et al. Spatial Extent of Pigment Epithelial Detachments in Age-Related Macular Degeneration. Ophthalmology. 1999; 9: 1830–1840.
44. Denninghoff KR, Smith MH, Lompado A, Hillman LW. Retinal Venous Oxygen Saturation and Cardiac Output During Controlled Hemorrhage and Resuscitation. J. Appl. Physiol. 2003; 94: 891–896.
45. Hartnett ME, Elsner AE. Characteristics of Exudative Age-Related Macular Degeneration Determined in Vivo with Confocal Direct and Indirect Infrared Imaging. Ophthalmology. 1996; 103: 58–71.
46. Elsner AE, Dreher A, Beausencourt E, et al. Multiply Scattered Light Tomography: Vertical Cavity Surface Emitting Laser Array Used for Imaging Subretinal Structures. Lasers Light Ophthalmol. 1998; 8: 193–202.
47. Elsner AE, Miura M, Burns SA, et al. Multiply Scattered Light Tomography and Confocal Imaging: Detecting Neovascularization in Age-Related Macular Degeneration. Opt. Express. 2000; 7: 95–106.
48. Elsner AE, Zhou Q, Beck F, et al. Detecting AMD with Multiply Scattered Light Tomography. Int. Ophthalmol. 2001; 23: 245–250.
49. Webb RW, Hughes GH, Delori FC. Confocal Scanning Laser Ophthalmoscope. Appl. Opt. 1987; 26: 1492–1499.
50. Bartsch DU, Intaglietta M, Bille JF, et al. Confocal Laser Tomographic Analysis of the Retina in Eyes with Macular Hole Formation and Other Focal Macular Diseases. Am. J. Ophthalmol. 1989; 108: 277–287.
51. Miura M, Elsner AE. Three Dimensional Imaging in Age-Related Macular Degeneration. Opt. Express. 2001; 9: 436–443.
52. Miura M, Elsner AE, Beausencourt E, et al. The Grading of Infrared Confocal Scanning Laser Tomography and Video Displays of Digitized Color Slides in Exudative Age-Related Macular Degeneration. Retina. 2002; 22: 300–308.
53. Burns SA, Marcos S, Elsner AE, Barra S. Contrast Improvement for Confocal Retinal Imaging Using Phase Correcting Plates. Opt. Lett. 2002; 27: 400–402.
54. Bartsch DU, Zhu L, Sun PC, et al. Retinal Imaging with a Low-Cost Micromachined Membrane Deformable Mirror. J. Biomed. Opt. 2002; 7: 451–456.
55. Roorda A, Romero-Borja F, Donnelly WJ, et al. Adaptive Optics Scanning Laser Ophthalmoscopy. Opt. Express. 2002; 10: 405–412.
56. Ahnelt PK. The Photoreceptor Mosaic. Eye. 1998; 12: 531–540.
57. Jonas JB, Schneider U, Naumann GO. Count and Density of Human Retinal Photoreceptors. Graefes. Arch. Clin. Exp. Ophthalmol. 1992; 230: 505–510.
58. Stockman A, MacLeod DI, Johnson NE. Spectral Sensitivities of the Human Cones. J. Opt. Soc. Am. A. 1993; 10: 2491–2521.
59. Baylor DA, Nunn BJ, Schnapf JL. The Photocurrent, Noise and Spectral Sensitivity of Rods of the Monkey Macaca fascicularis. J. Physiol. 1984; 357: 575–607.
60. Choplin NT, Lundy DC, Dreher AW. Differentiating Patients with Glaucoma from Glaucoma Suspects and Normal Subjects by Nerve Fiber Layer Assessment with Scanning Laser Polarimetry. Ophthalmology. 1998; 105: 2068–2076.
61. Bueno JM. Measurement of Parameters of Polarization in the Living Human Eye Using Imaging Polarimetry. Vision Res. 2000; 40: 3791–3799.
62. Huang XR, Knighton RW. Linear Birefringence of the Retinal Nerve Fiber Layer Measured in Vitro with a Multispectral Imaging Micropolarimeter. J. Biomed. Opt. 2002; 7: 199–204.
63. Podoleanu A, Rogers JA, Jackson DA, Dunne S. Three Dimensional OCT Images from Retina and Skin. Opt. Express. 2000; 7: 292–298.
64. de Boer JF, Milner TE, Nelson JS. Determination of the Depth-Resolved Stokes Parameters of Light Backscattered from Turbid Media by Use of Polarization-Sensitive Optical Coherence Tomography. Opt. Lett. 1999; 24: 300–302.
CHAPTER TEN
Strategies for High-Resolution Retinal Imaging
AUSTIN ROORDA, University of California, Berkeley, Berkeley, California
DONALD T. MILLER, Indiana University, Bloomington, Indiana
JULIAN CHRISTOU, University of California, Santa Cruz, Santa Cruz, California
10.1 INTRODUCTION
This chapter focuses on technical issues pertaining to the overall design of high-resolution ophthalmoscopes that employ adaptive optics (AO). The complexities of AO, coupled with the inherent difficulties of safely and effectively imaging inside living human eyes, create formidable challenges. As with any field, a certain amount of black art is necessary to develop the instrument to a level that, in this case, is useful for basic and clinical research. Unfortunately, much of the technical know-how is not reflected in the current literature. As such, we attempt to articulate some of the most beneficial technical and hands-on issues related to the overall design of an AO ophthalmoscope. These will help answer questions such as: "What type of AO ophthalmoscope is best for my application?", "How short must the exposure be to avoid retinal motion blur?", and "What is an effective approach to integrate AO into my ophthalmoscope?"
Adaptive Optics for Vision Science, Edited by Porter, Queener, Lin, Thorn, and Awwal Copyright © 2006 John Wiley & Sons, Inc.
The chapter begins with three sections that address three types of AO ophthalmoscopes: conventional flood illumination, confocal scanning laser ophthalmoscopes (cSLO), and optical coherence tomography (OCT). Collectively, these ophthalmoscope architectures (without AO) cover the major imaging modalities currently available to patients visiting an eye care clinic. Each requires unique approaches for integrating AO. Each has performance strengths and retinal imaging applications that generally complement those of the other two. For example, the optical sectioning capability of these ophthalmoscopes is wide ranging, with conventional flood illumination systems providing little axial resolution, while OCT systems extract the thinnest retinal slices (<10 µm). The optical performance of the eye (diffraction and ocular aberrations) fundamentally limits lateral resolution and defines the smallest internal structures that can be observed when looking "into" the eye with any of the three ophthalmoscopes. Significant gains in ophthalmoscopic resolution can accrue by correcting the eye's aberrations, allowing smaller structures to be observed. Following these sections is a section of general information applicable to all AO ophthalmoscopes. The final section deals with image deconvolution, specifically as it applies to the postprocessing of AO-corrected retinal images. While our experience is almost exclusively with research-grade AO ophthalmoscopes, much of the chapter content is directly applicable to the development of commercial systems. We hope this information expedites their development, and we look forward to the day when AO is an integral component of the commercial ophthalmoscope. Note that the specific details of AO system construction, operation, and performance are covered in Chapters 4 to 8.
10.2 CONVENTIONAL IMAGING

Conventional imaging in the eye consists of flood illuminating the retina and then recording, with a two-dimensional (2D) detector, the reflected light that exits the eye. This approach was first exploited by Helmholtz around 1850 using a light source, a mirror with a hole, and his own eye as the detector [1]. Observations of the retina were initially recorded in hand paintings, later replaced by photographic film, with the first successful photograph of the human retina taken in 1886 using a 2.5-minute exposure [2]. The next 100 years produced steady advances in light sources (in particular, the electronic flash lamp), film quality, procedures for collecting retinal photographs, and, more recently, highly sensitive electronic cameras and automated computer-driven systems. Today, the conventional (modern) ophthalmoscope is an indispensable tool in the clinic for diagnosing the health of the posterior eye. Interestingly, it was not until AO was recently integrated into the conventional ophthalmoscope that its optical resolution improved beyond that observed by Helmholtz more than 150 years ago.
10.2.1 Resolution Limits of Conventional Imaging Systems
Conventional ophthalmoscopes operate in cascade with the optics of the eye (cornea + crystalline lens). The maximum pupil and numerical aperture (NA) of the ophthalmoscope and eye combination is dictated by that of the eye itself. For example, the maximum physiological pupil size and numerical aperture of the human eye are about 8 mm and 0.23, respectively. Assuming a well-designed, diffraction-limited ophthalmoscope, the finite pupil size of the eye imposes a fundamental limit on image resolution, with the width of the diffraction-limited point spread function (PSF) governed by:
θdl = 1.22λ0/(nd)    (10.1)
where θdl is the angle subtended between the peak of the Airy disk and the first minimum, λ0 is the wavelength of light in a vacuum, n is the index of refraction of the medium, and d is the diameter of the eye's pupil. An increase in pupil size or a decrease in wavelength narrows the PSF and improves image quality. For example, an 8-mm pupil at a wavelength of 0.55 µm produces a θdl in the eye (n ~ 1.33) of 0.22 min of arc (or equivalently 1.4 µm for a 22.2-mm focal length reduced eye). This is narrower than the smallest photoreceptor apertures in the human eye. The human eye, however, also suffers from aberrations that become significant at large pupil sizes (see also Chapter 2 or Fig. 11.1). This further blurs the retinal image and reduces resolution to, at best, about 1 min of arc (depending on the subject and the imaging wavelength), even when defocus and astigmatism are well corrected. In principle, compensation of the eye's aberrations with AO allows image quality to be limited solely by diffraction. In practice, diffraction-limited imaging through large pupils (>6 mm) has not been reported, with the bottleneck likely being technological limitations of current AO systems (see also Chapter 4).
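The worked example for Eq. (10.1) can be checked numerically; the function names below are illustrative:

```python
import math

def psf_halfwidth_arcmin(wavelength_um, pupil_mm, n=1.33):
    """Angular radius of the Airy disk (peak to first minimum) inside
    the eye, from Eq. (10.1): theta = 1.22 * lambda0 / (n * d).
    Returned in minutes of arc."""
    theta_rad = 1.22 * (wavelength_um * 1e-6) / (n * pupil_mm * 1e-3)
    return math.degrees(theta_rad) * 60.0

def psf_halfwidth_um(wavelength_um, pupil_mm, focal_mm=22.2, n=1.33):
    """Linear radius on the retina for a reduced eye of the given
    focal length, in micrometers."""
    theta_rad = 1.22 * (wavelength_um * 1e-6) / (n * pupil_mm * 1e-3)
    return theta_rad * focal_mm * 1e3

print(psf_halfwidth_arcmin(0.55, 8.0))   # ~0.22 arcmin
print(psf_halfwidth_um(0.55, 8.0))       # ~1.4 um
```

This reproduces the 0.22-arcmin (1.4-µm) figure quoted above for an 8-mm pupil at 0.55 µm.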
10.2.2 Basic System Design
Figure 10.1 shows the basic layout for a conventional flood illumination ophthalmoscope endowed with AO. In this AO system, wavefront sensing is realized with a Shack–Hartmann wavefront sensor (SHWS). The sensor uses a point light source [e.g., a superluminescent diode (SLD) or laser diode] to form a small beacon (~3.3 arcmin) on the subject’s retina. Some of the scattered light from the focused spot passes back through the full pupil of the eye and is distorted by the ocular aberrations. This distorted wavefront (at p0) is imaged by an afocal lens pair (L5–L6) onto the surface of a wavefront
FIGURE 10.1 Schematic of a conventional flood-illuminated AO ophthalmoscope. The optical path is described in the text. The labeled elements are: FL, flash lamp; IF, interference filter; FS, field stop; EP, entrance pupil; DBS1, dichroic beamsplitter 1; BS, beamsplitter (~5% reflectance, 95% transmittance); DM, deformable mirror; DBS2, dichroic beamsplitter 2; LA, lenslet array for SHWS; CCD, charge-coupled device (digital camera). Pupil and retinal conjugate planes are labeled p and r, respectively. Lenses are labeled by number (L#) starting at the flash lamp. Afocal lens pairs, which relay the eye's pupil through the instrument, are L1–L2, L3–L4, L5–L6, L7–L8, and L7–L9.
corrector [e.g., a deformable mirror (DM) whose surface is initially flat]. A second afocal lens pair (L7–L8) images the wavefront corrector onto the lenslet array (LA) of the wavefront sensor. In this configuration, measurement and correction of the wave aberration occur in the entrance pupil of the eye. The wavefront sensor, corrector, and control computer operate in closed loop so that, within each iteration, one wavefront measurement and correction are made. Once the AO system achieves a specified correction [e.g., an acceptable root-mean-square (RMS) wavefront error or a prespecified number of iterations], the system captures an image or a sequence of images of the retina. Retinal imaging is accomplished with a separate illumination channel (FL) that flood illuminates a patch of retina (typically ~1°). A small percentage of the scattered light (r0) passes back through the ocular media and pupil of the eye. The wavefront corrector removes the monochromatic
aberrations inherent in the eye, and the corrected wavefront is focused onto an electronic camera, such as a charge-coupled device (CCD), that records the retinal image. This depicts the general operation of all conventional AO ophthalmoscopes that have been developed to date. The sections to follow expand on various technical aspects of the conventional system.

10.2.3 Optical Components

As shown in the schematic, the deformable mirror and lenslet array of the wavefront sensor are strategically positioned conjugate to the eye's pupil. There are two motivations for this, both of which stem from the fact (or assumption) that the primary sources of aberrations in the system are the cornea and crystalline lens. First, current correctors provide only phase correction. Corneal and lenticular aberrations manifest themselves as essentially pure phase (i.e., a distorted wavefront) at the eye's pupil. At other planes, propagation of the ocular aberrations generates intensity fluctuations (scintillation) in which the original phase-only wavefront becomes encoded as both phase and intensity. Scintillation is most significant at the image plane. Since phase-only correctors cannot compensate directly for scintillation, their maximum effectiveness occurs at a plane in the system where scintillation is negligible, which for the eye is at (or near) its pupil. Second, just as the ocular aberrations intrinsically vary with field angle, so do the (compensatory) aberrations that are purposely introduced by the AO system. This variation limits optimal correction to only one field point. To maximize the field of view (i.e., isoplanatic patch size) about this point, field variation in the total aberration pattern must be minimized, and this occurs when the mirror (assuming only one corrector element) is positioned at (or near) the pupil. At this position, field sizes at least as large as two degrees have been observed with no loss in image quality.
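The closed-loop measure-and-correct cycle described in Section 10.2.2 (one wavefront measurement and one correction per iteration, stopping at an RMS target or an iteration limit) can be sketched as follows; the integrator gain, thresholds, and all names are illustrative assumptions, not details from the text:

```python
import numpy as np

def run_ao_loop(measure_wavefront, apply_correction, *,
                rms_target=0.05, max_iters=20, gain=0.5):
    """Minimal AO control loop: each iteration takes one wavefront
    measurement and applies one (partial) correction, stopping when the
    RMS wavefront error reaches rms_target or after max_iters.
    All parameter values are illustrative."""
    correction = None
    for i in range(max_iters):
        residual = measure_wavefront()               # e.g., from the SHWS
        rms = float(np.sqrt(np.mean(residual**2)))
        if rms <= rms_target:
            return i, rms
        if correction is None:
            correction = np.zeros_like(residual)
        correction -= gain * residual                # simple integrator
        apply_correction(correction)
    return max_iters, rms

# Toy "eye": a static aberration that the corrector subtracts from.
aberration = np.array([1.0, -0.8, 0.5])
state = {"corr": np.zeros(3)}
iters, rms = run_ao_loop(
    lambda: aberration + state["corr"],
    lambda c: state.__setitem__("corr", c.copy()),
)
```

With a gain of 0.5 the residual halves each iteration, so the toy loop converges to the RMS target in a handful of iterations; a real system would face noise, corrector dynamics, and influence-function coupling.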
The same rationale for positioning the mirror at the eye's pupil also applies to minimizing the intrinsic aberrations of the ophthalmoscope's optical system. Specifically, the magnitude and physical origin of the optical system's aberrations impact how effectively these aberrations can be corrected by the AO system and the field size over which the retinal image remains sharp. Relaying the light from the eye through the optical system can be done with lenses or curved mirrors. Lenses have an advantage in that they can be used on-axis and have tolerable off-axis aberrations. Both the original Rochester AO ophthalmoscope (RAOI) and the Indiana AO ophthalmoscope (IAO) rely entirely on lenses and planar mirrors [3, 4]. The lenses were precision achromats that minimized spherical aberration and coma when operating at infinite conjugate ratios and yielded diffraction-limited performance at the design wavelength. A problem with lenses is that their surface reflections may project back onto the wavefront sensor and science CCD camera and mask the signal from the retina. Such reflections are often difficult to remove, as a properly aligned lens (i.e., one that maximizes optical
performance) generates reflections that follow the optical axis of the system. Antireflective coatings covering the range of wavelengths in use noticeably help but are rarely sufficient, as the effective retinal reflection (<0.1% of the light entering the eye) is significantly dimmer. An effective design step is simply to minimize the number of optical surfaces in those channels of the system that produce unwanted reflections (e.g., channels containing the subject's eye). Note that the system in Figure 10.1 contains no optics between the BS and the eye (except for trial lenses, which must be tilted to avoid back reflections). If the BS is a cube beamsplitter, its primary internal reflections are discarded by rotating the cube. A pellicle is a clean solution, as it creates no disturbing reflections, although image quality may be slightly compromised. An inevitable reflection originates at the anterior corneal surface and is avoided in a couple of ways, which are described in later subsections. Curved mirrors can be used instead of lenses. Mirrors have some advantages over lenses: they have no chromatic aberration, produce no back reflections, and provide the flexibility to make the optical system more compact (by folding the beam). The trade-offs of mirrors include their expense and restriction to off-axis imaging, which gives rise to unwanted aberrations, in particular astigmatism. Rochester's second-generation flood-illuminated AO ophthalmoscope (RAOII) relies on long focal length mirrors to image the pupil onto a 97-channel Xinetics deformable mirror. To overcome off-axis aberrations, they use off-axis parabolas, which provide aberration-free imaging at one point in the field of view. Spherical mirrors would have been more economical but may have given rise to unwanted aberrations in the system. It is generally worthwhile to model the performance of an AO instrument using commercial ray tracing software (e.g., Zemax, OSLO, etc.).
Proper modeling permits optimization of the system, establishes upper bounds on system performance, and helps avoid unwanted surprises prior to the purchase of costly equipment.
10.2.4 Wavefront Sensing
Wavefront sensing in current conventional AO ophthalmoscopes is realized with a Shack–Hartmann wavefront sensor [5–7]. The light source of the sensor is typically an SLD whose beam is collimated, propagated parallel to the optical axis of the retinal camera, and then directed into the eye. As illustrated in Figure 10.1, the SLD is positioned as close as possible to the eye to minimize the number of reflections from optical elements. The beam at the pupil is small (~1-mm diameter) and slightly displaced (~1 to 2 mm). The small beam diameter has two primary advantages: (i) it produces a long depth of focus that keeps the spot size at the retina relatively constant even when the eye has refractive error, and (ii) it avoids essentially all of the aberrations in the eye, permitting a nearly round spot at the retina (although not a point source) that is conducive for finding the centroid locations of the Shack–Hartmann spots. Slight displacement of the beam removes the corneal reflex that would otherwise enter the wavefront sensor [8]. Another effective solution is to insert an aperture conjugate to the retina in the wavefront sensor path (r4 in Fig. 10.1). The aperture blocks the out-of-focus corneal reflection while passing the in-focus retinal reflection. Typical closed-loop operation of the AO system (>10 Hz) necessitates short SHWS exposures (<50 ms). Retinal motion over this time scale is minimal and inadequate for destroying speckle noise in the SHWS spot pattern. An SLD is preferred over a typical laser (e.g., a HeNe or laser diode) because its short coherence length (<20 µm) causes the light reflected from different depths in the thick retina (100 to 400 µm) to add incoherently. This effectively mitigates speckle noise and enables more repeatable centroid determination. The power level of the SLD at the cornea is typically 4 to 20 µW and depends on the throughput efficiency of the system and the configuration of the SHWS (such as the CCD’s quantum efficiency, and the number and NA of the lenslets). The ideal wavefront sensor operates simultaneously with the imaging system and uses the same optical path. The Rochester and Indiana flood-illuminated systems approach this by sensing and imaging at two different wavelengths. Custom dielectric beamsplitters were designed to reflect and transmit the corresponding wavelengths, which permits simultaneous operation without the loss or mixing of light. Furthermore, the wavefront sensor and science camera were positioned as close as possible in the system to minimize noncommon path errors that would lead to differences in the aberrations seen at the science camera and the wavefront sensor. The wavefront sensor typically operates in the near-infrared (near-IR), which confers several advantages over visible wavelengths (0.4 to 0.7 µm).
These include the following:
• Near-IR is less damaging to retinal tissue, which relaxes the retinal safety limits and permits more light for wavefront sensing.
• Near-IR appears less bright, which is more comfortable and less distracting for the subject.
• Near-IR reflects more from the retina [9], which provides more light for wavefront sensing or, equivalently, enables the incident light power to be reduced for increased eye safety and subject comfort.
A disadvantage of separate wavelengths for AO and retinal imaging is that the intrinsic chromatic aberrations of the eye cause a shift in focus between the two wavelengths (Fig. 10.2). The magnitude of this shift can be significant. Effective compensation is typically realized by axially translating a lens in the imaging arm of the system (L9 and L10 in Fig. 10.1) or the science camera itself. Fortunately, the higher order aberrations of the eye are largely insensitive to wavelength, making higher order chromatic compensation largely unwarranted [10–12]. An additional complication of employing separate
wavelengths is that the retinal reflection is wavelength dependent, with near-IR penetrating deeper into the tissue, the extent of which varies between subjects. Although chromatic aberration is reasonably stable within a given eye [13, 14], a unique adjustment of the focus must be made for each individual after AO correction. The operational specifics of the wavefront sensing system are described in Chapter 3.
10.2.5 Imaging Light Source
General optical requirements for the imaging light source are low spatial coherence, a narrow spectral band, uniform illumination, and high optical energy deliverable during a short exposure. Low spatial coherence helps to reduce image speckle noise, which can mask retinal information in the recorded image. High-resolution retinal imaging is particularly sensitive to speckle as microscopic features, such as single cells, approach the size of individual speckles (which is inversely related to the numerical aperture of the eye). A narrow spectral band prevents blurring caused by the large amount of chromatic aberration in the human eye. Note that too narrow a bandwidth, however, will lead to significant temporal coherence and increased speckle noise. High optical energy overcomes the large loss of light in the eye (~10⁻⁴) [9], and short exposures arrest blur induced by retinal motion even for a fixating eye. In addition to these requirements, the Lagrange invariant fundamentally limits the efficiency of the illumination channel, as dictated by the size and divergence of the light source, the NA of the eye, and the desired illumination patch (field of view). Few commercial sources meet all of these requirements. As an example, the RAOI and RAOII use a broadband krypton flash lamp that emits 200 J of optical energy into 4π steradians per 4-ms pulse. This corresponds to 50,000 W (during a pulse)! A very small fraction of this is collected by the illumination channel and is further reduced by an interference filter located conjugate to the pupil (IF placed at p2 in Fig. 10.1). This location minimizes beam nonuniformities (at the retina) that result from the filter. The krypton flash lamp emits over a broad range of wavelengths and provides very high beam uniformity (at the retina), but the flashes can only be generated once every 5 to 10 s (the time required to recharge the source capacitors).
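The flash-lamp figures quoted above reduce to quick arithmetic; a minimal sketch (the 10-s recharge interval is taken from the range given above):

```python
# Quick arithmetic on the krypton flash-lamp figures quoted above.
energy_per_pulse_j = 200.0   # optical energy per pulse (J)
pulse_duration_s = 4e-3      # pulse duration (4 ms)

peak_power_w = energy_per_pulse_j / pulse_duration_s
print(peak_power_w)          # 50000.0 W during the pulse

# With at most one flash every 5 to 10 s, the time-averaged emission is far lower:
avg_power_w = energy_per_pulse_j / 10.0
print(avg_power_w)           # 20.0 W averaged over a 10-s recharge cycle
```

The contrast between the 50-kW in-pulse power and the tens-of-watts average shows why the recharge time, not the lamp itself, limits the frame rate.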
As a complement to this, the IAO uses a 10-mW SLD passed through a multimode fiber. In the fiber, the light is distributed among the fiber modes, with modal dispersion causing each mode to propagate at a different axial velocity. The 25-m fiber was of sufficient length that the time delay between exiting modes exceeded the temporal coherence length of the SLD, which effectively mitigated the source’s spatial coherence (removing speckle). The SLD is highly directional and quasi-monochromatic, making it substantially more efficient than the krypton flash lamp. For example, for the same exposure time (4 ms) and retinal illumination size (1°), the SLD is 5 million times more
efficient (50,000 W versus 10 mW). The SLD source also occupies a much smaller physical space. Furthermore, the SLD can be modulated by a computer (at >kHz frequencies), which enables high-speed imaging (see also Chapter 17). Disadvantages of the SLD approach are that SLDs emit over a narrow spectral range (typically <50 nm) compared to a flash lamp and are available at only a limited set of wavelengths, with the shortest being 0.675 µm. To expand to other wavelengths without sacrificing camera performance, the Indiana group recently demonstrated their high-speed fiber technique in the human eye with a laser diode (see also Chapter 17), which offers a wider range of possible wavelengths. To obtain a sufficiently bright image of a 1° patch of retina, the RAOI and RAOII require about 1 mJ per flash at the cornea for a single frame of 550-nm light. The IAO requires about 4.5 mJ at the more reflective wavelength of 679 nm. Power levels depend heavily on the field size and must be calculated carefully for each system. The flash or exposure duration has to be short enough to arrest the motion of the retina; otherwise the image will be blurred. Exposure times of 4 ms have been found empirically to yield a high incidence of frames without visual evidence of motion blur. The IAO ophthalmoscope has been used to explore other exposure durations (1/3, 1, 4, 10, 20, 33, 66, and 100 ms). Although the results are preliminary, they may be helpful as a rough guide: 1/3- and 1-ms exposures yielded the highest incidence of sharp frames and were necessary to freeze cellular motion in capillaries. Many of the images were visually acceptable, but 10 ms produced some blur. Exposure durations above 10 ms were largely unacceptable, with 33 ms and longer producing substantial blur that destroyed most microscopic detail.
Interestingly, even at 100 ms, microscopic structure could be observed in the sporadic image that happened to coincide with the endpoint of a retinal movement, at which time the retina momentarily came to rest. If an interference filter is used to control the wavelength of the flash (as with the krypton flash lamp), the filter bandwidth must be sufficiently narrow to prevent chromatic blur induced by the chromatic aberrations of the eye. Typical filter bandwidths are less than 25 nm. Figure 10.2 shows a plot of the chromatic change in refractive power of the eye as a function of wavelength. The right scale bar converts the change in diopters to a corresponding change in RMS wavefront error for a 7-mm pupil. The illumination path can also be controlled to regulate the reflected intensity of different features in the retina. This is done by controlling the size and location of the entrance pupil (EP at p1 in Fig. 10.1) in the illumination path. For example, the cone photoreceptors act as waveguides, and they will reflect maximally if they are illuminated along their optical axis. If illuminated obliquely, the cones will reflect less light, and the uncoupled light will leak into the interstitial cone media, contributing not only to a loss in cone reflectivity but to an added loss in cone contrast. To increase the visibility of cones, therefore, it is advised to use a small entrance beam
FIGURE 10.2 Change in refraction as a function of wavelength in the human eye, taken from the chromatic reduced-eye model of Thibos et al. [14]. All refraction values are anchored to 0 for 589-nm light. The left scale indicates the amount of defocus in diopters. The right scale shows the associated RMS wavefront error for a 7-mm pupil.
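The correspondence between the two scales of Figure 10.2 follows from the standard conversion between defocus in diopters and RMS wavefront error, RMS = D·a²/(4√3) for pupil radius a; a minimal sketch:

```python
import math

def defocus_rms_um(diopters, pupil_mm=7.0):
    """RMS wavefront error (in micrometers) produced by a given defocus
    (in diopters) over a circular pupil of the given diameter (in mm)."""
    a_m = (pupil_mm / 2.0) * 1e-3                    # pupil radius in meters
    rms_m = diopters * a_m**2 / (4.0 * math.sqrt(3.0))
    return rms_m * 1e6

# 1 D of defocus over a 7-mm pupil gives ~1.77 um RMS, consistent with the
# paired tick values on the left (diopter) and right (RMS) scales of Fig. 10.2
print(round(defocus_rms_um(1.0), 2))
```

The same function reproduces the other tick pairs of the figure (e.g., −0.75 D maps to about −1.3 µm).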
for illumination, centered on the pointing direction of the cones, which corresponds to the peak of the Stiles–Crawford effect [15]. This point can be found by moving the entrance beam location to maximize the reflected intensity. This effect is well demonstrated by Roorda and Williams, who measured the optical fiber properties of cones using the RAOI instrument [16]. Conversely, the illumination angle can be made intentionally oblique to reduce the contribution of the cones and increase the contrast of other features.
10.2.6 Field Size
A large field size is almost always advantageous. A larger size permits easier navigation through and across the retina and reduces certain complications in postprocessing, such as those stemming from eye motion (e.g., image registration). Several camera- and eye-related factors, however, constrain the field size for AO imaging. These must be carefully weighed in context with the application and are described below.
Retinal Safety An increase in field size, while maintaining constant retinal illumination (J/cm²) as well as (CCD) pixel sampling density (pix/cm²), requires that the total light flux entering the eye increase as the square of the field size diameter (i.e., in proportion to the retinal area being illuminated). Under these conditions, the number of photons per pixel is held constant. While the total light flux incident on the retina increases as the square of the illumination diameter, the retinal hazard does not for intermediate field sizes (1.5 to 100 mrad, or 0.09° to 5.7°). At intermediate sizes, the heat generated by the light dissipates less rapidly. Specifically, the thermal maximum permissible exposure (MPE) grows linearly with (rather than as the square of) the field size from 1.5 mrad (0.09°) to 100 mrad (5.7°) [17]. This means that the MPE expressed as retinal irradiance actually decreases linearly with field size. Within this field range, the highest retinal irradiance therefore occurs when using the MPE for the smallest field of 1.5 mrad. For diameters larger than 100 mrad, the MPE grows as the square. Isoplanatism The eye is an excellent wide-field imaging system, but ocular aberrations still change with field angle. The isoplanatic angle refers to the maximum angular extent over which the aberration pattern remains largely constant. In a conventional system, AO corrects for the center of the field, where it provides optimal correction. At field extents where the aberration pattern has appreciably changed, the correction is effectively lost. High image quality is restricted to within the isoplanatic patch created by the combination of eye and ophthalmoscope. The isoplanatic angle of the eye has not been quantified, but in practice, images as large as 2° in extent have shown no degradation in image quality at the edges of the field.
Changes in the aberration pattern beyond that field size may prevent effective AO correction, but this remains to be explored. Detector Size and Image Acquisition Rates Essentially all AO ophthalmoscopes to date sample the retina at least two times finer than the diffraction-limited PSF. This ensures that all spatial frequencies up to the diffraction-limit cutoff are properly recorded (no aliasing). For large field sizes, this results in a considerable pixel count. For example, given a diffraction-limited PSF with a Rayleigh resolution limit of 1.9 µm [using Eq. (10.4) with λ0 = 0.55 µm, d = 6 mm, F = 22.2 mm, and n = 1.33] and a pixel size of 0.95 µm (at the retina), approximately 2609 × 2609 pixels are required to sample an 8° field (~2400 µm in diameter). If the images are 2 bytes per pixel and are sampled at video rates, the data rate is ~410 MB/s, which is beyond the bandwidth of current data storage systems. Scattered Light Light can scatter at oblique angles, as well as multiple times, in the thick retina, causing a fraction of the incident light to eventually exit the retina at a location different from where it entered. This reduces the contrast of retinal structures in the image. Smaller fields have been observed to
generate higher contrast images, suggesting that the amount of multiply scattered light per retinal area that strikes the science camera (CCD) grows with field size. For example, the highest contrast images of cones were obtained with a small 7-arcmin illumination field [18].
10.2.7 Science Camera
Ocular hazards limit the amount of light that can safely enter the eye and ultimately strike the science camera. This places a premium on detector quantum efficiency (QE) and fill factor. Both the Rochester and Indiana cameras employ back-illuminated CCDs. Back illumination provides a QE in excess of 75% at the desired wavelength, and the CCD pixel architecture permits a fill factor of 100%. For typical retinal illumination levels, photon noise dominates the detected retinal image even for moderate levels of read noise (~50 electrons RMS). Because higher read rates generate higher read noise, an appropriate balance of the two depends on the specific application and available light budget. Dark noise of the CCD is largely unimportant, owing to the short (~4 ms) exposure times. CCDs in current AO ophthalmoscopes typically have 12 bits of dynamic range. For some retinal imaging tasks, such as those that entail precision contrast measurements (e.g., classifying cones), careful flat-fielding of the CCD is also important, as is high linearity of the detector. In the future, complementary metal–oxide–semiconductor (CMOS) devices will be an attractive alternative to CCDs. CMOS devices allow on-chip system integration, operate at high speed, and are of low cost and low power. At present, they represent a maturing technology and generally lag the CCD in nearly every other performance category, including quantum efficiency and pixel fill factor.
10.2.8 System Operation
A calibrated fixation target is provided to the subject.
To ensure that the same location is imaged each time, the subject initiates the imaging procedure by pressing a button that signals the control computer. This also ensures that the subject is not blinking during the exposure. In the RAOII, the image sequence begins with a rapid AO correction. Once the aberrations are sufficiently low, the flash is triggered and the image is grabbed. Hence, for single snapshot imaging, the AO system only operates for a moment prior to image capture. This allows the patient to blink and look around freely between frames. The operation of the IAO is slightly different, acquiring streams of images (rather than a single snapshot) at real-time rates (10 to 500 Hz) using a science camera that is synchronized to a strobing light source. Optimal correction is obtained by operating the AO system (up to 22 Hz) during the entire image acquisition rather than stopping prior to acquiring the retinal image (as is the case for the RAOII).
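The detector-bandwidth estimate from the Field Size discussion above can be reproduced with quick arithmetic. In this sketch, the ~300 µm of retina per degree and the 30-Hz video rate are assumed values not stated explicitly above:

```python
# Rayleigh limit at the retina: 1.22 * lambda * F / (n * d)  [Eq. (10.4)]
lam_um, d_mm, F_mm, n = 0.55, 6.0, 22.2, 1.33
rayleigh_um = 1.22 * lam_um * F_mm / (n * d_mm)     # ~1.9 um

pixel_um = rayleigh_um / 2.0        # Nyquist: two pixels per resolution element
field_um = 8.0 * 300.0              # 8-degree field, assuming ~300 um/degree
pixels_per_side = field_um / pixel_um

bytes_per_s = pixels_per_side**2 * 2 * 30   # 2 bytes/pixel at 30 frames/s
print(round(rayleigh_um, 2), bytes_per_s / 1e6)     # ~1.9 um, roughly 400 MB/s
```

The result lands near the ~410 MB/s figure quoted above; small differences trace to the rounded pixel size and retinal-scale assumptions.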
10.3 SCANNING LASER IMAGING
Scanning laser imaging systems were invented in 1955 by Marvin Minsky [19]. Scanning laser ophthalmoscopes (SLOs) were invented by Robert Webb in 1980 [20]. The SLO is the same as a scanning laser microscope except that the eye is used as the objective lens and the retina is always the sample being imaged. An SLO image is acquired over time as the scattered light is recorded, pixel by pixel, from the focused spot as it scans in a raster pattern across the retina. Effective implementation of this imaging technique relies on the scanning and descanning of the beam. By the rule of reversibility, the scattered light from the retina returns along the incoming light’s path. Once the scattered light reflects back off the scanning mirror (or mirrors), the beam is no longer scanning but follows directly along the stationary part of the incoming light path. This allows one to place a single small detector in the returned light path to detect the light that forms the image. The image is formed digitally in a frame grabber, which combines horizontal and vertical position information from the scanning mirrors (termed hsync and vsync, respectively) with the digitized values of the analog intensity stream from the detector to form an extended-field image. A feature of the SLO is that scattered light from the focused point on the retina can be reimaged prior to detection. At the location of the aerial image of the spot, one can place a “confocal” pinhole through which only the light that passes gets detected. The confocal pinhole serves to limit the light reaching the detector to that originating from the focal plane only. The operation of the confocal pinhole is illustrated in Figure 10.3. The result is that images have higher contrast than would be obtained with a conventional flood-illuminated imaging system. Another advantage is that one can move the focal plane through the retina to see different features in depth.
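The frame-grabber assembly described above can be sketched as a toy illustration. The function name and frame sizes are hypothetical, and real hardware works from hsync/vsync pulses rather than a pre-counted sample list:

```python
def assemble_frame(samples, lines, pixels_per_line):
    """Rebuild a 2D image from the serial detector stream: each hsync starts
    a new row, and vsync delimits the frame. In this sketch the sync timing
    is reduced to fixed pixel and line counts."""
    assert len(samples) == lines * pixels_per_line
    return [samples[row * pixels_per_line:(row + 1) * pixels_per_line]
            for row in range(lines)]

# Six digitized intensity values become a 2-line x 3-pixel frame
frame = assemble_frame([10, 20, 30, 40, 50, 60], lines=2, pixels_per_line=3)
print(frame)  # [[10, 20, 30], [40, 50, 60]]
```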
SLOs are suited for real-time imaging and have been used routinely for real-time as well as single-shot applications. Video imaging has been used for fluorescein angiography, retinal eye tracking, and blood flow measurements based on laser Doppler principles (Heidelberg Retina Flowmeter, Heidelberg Engineering GmbH). Snapshot-based imaging has been used for routine screening and diagnosis (Optomap, Optos plc). Autofluorescence imaging has been used to look at the accumulation of lipofuscin in normal and diseased eyes [21]. The addition of polarization sensitivity has been used to measure properties of the nerve fiber layer (GDx, Laser Diagnostic Technologies), and different wavelength and detection strategies have been used to image subretinal features [22]. Finally, optical sectioning has been used to measure the topography of the nerve fiber layer, particularly in the area of the optic disk (Heidelberg Retina Tomograph, Heidelberg Engineering GmbH). SLOs are not only used for imaging. In some systems, the laser that is being raster scanned on the retina is modulated to produce complicated retinal
FIGURE 10.3 Schematic to illustrate optical sectioning in a confocal scanning laser ophthalmoscope. In the illumination path, (a) the light is focused to a specific plane in the retina. Although the light is focused to a plane, the incident light will scatter from all layers of the retina. Light that scatters from the plane of focus (b) passes through the optical system and is focused to an aerial image, where the confocal pinhole is placed. Only light from the focal plane passes through the confocal pinhole and contributes to forming the image. Light that scatters from deeper (or more anterior) layers (c) is reimaged in a plane other than the confocal pinhole plane and is blocked from reaching the detector. The confocal pinhole limits the light in the image to that originating from the scattered light near the plane of focus.
patterns that not only the patient can see but that can also be seen in the image. This makes it possible to image the retina while it is fixating, tracking a moving object, or reading text, or while presenting and recording electroretinograph stimuli [23, 24]. But since an SLO uses light-based imaging, it is limited in resolution by aberrations in the optics of the eye. The incorporation of AO into the SLO improves lateral as well as axial resolution.
10.3.1 Resolution Limits of Confocal Scanning Laser Imaging Systems
In a confocal scanning laser imaging system, the effective PSF is computed as the product of the ingoing PSF with the convolution of the outgoing PSF and the confocal aperture. When the confocal aperture approaches a size equal to the radius of the Airy disk of the collection path of the system, the effective PSF is simply the product of the ingoing and outgoing PSFs. Under such conditions, the resolution is equally determined by the ingoing and outgoing PSFs, and the lateral resolution can exceed that of conventional imaging by about 40% [as assessed by the full width at half maximum (FWHM) of the PSF]. If the confocal aperture is large, then image quality is governed only by the ingoing PSF; effective optical sectioning disappears, and lateral resolution approaches that of conventional imaging systems. Axial resolution in confocal SLOs can be defined and computed in several ways. The standard way to determine axial resolution is to measure the detected intensity recorded from a planar surface as a function of its axial distance from the focal plane of the SLO [25]. The full width (axial distance) at half maximum of the resulting intensity distribution is a measure of the axial resolution of the SLO. For small confocal pinholes, the intensity measured from a plane as it is moved through focus (i.e., the axial resolution) is computed with the following steps:
• Compute the product of the ingoing and outgoing PSFs of the system in three dimensions (for a range of defocus levels).
• Integrate the intensity of each optical slice of the squared three-dimensional (3D) PSF [26] and plot the intensity as a function of axial location.
• Measure the FWHM of the resulting curve to get the axial resolution.
Using the minimum pinhole size and a 6.3-mm pupil, the axial resolution in the eye gets as low as 30 µm. When larger confocal pinholes are used, which is normally the case, more sophisticated methods have to be used to compute axial resolution [27].
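The ~40% lateral-resolution gain quoted above for a small pinhole can be checked numerically. The sketch below compares the FWHM of a single-pass Airy intensity pattern with that of its square (the in × out product for a point-like pinhole); the radial coordinate is the usual dimensionless Airy argument:

```python
import math

def airy_intensity(x):
    """Airy pattern intensity (2*J1(x)/x)**2, with J1 evaluated from its
    integral representation so no external libraries are needed."""
    if x == 0.0:
        return 1.0
    n = 2000
    s = sum(math.cos(t - x * math.sin(t))
            for t in ((i + 0.5) * math.pi / n for i in range(n)))
    j1 = s / n          # (1/pi) * integral, midpoint rule with step pi/n
    return (2.0 * j1 / x) ** 2

def fwhm(profile):
    """Full width at half maximum of a radial profile peaked at x = 0,
    found by bisection (the half-max point lies before the first zero)."""
    half = profile(0.0) / 2.0
    lo, hi = 0.0, 3.0
    for _ in range(60):
        mid = 0.5 * (lo + hi)
        if profile(mid) > half:
            lo = mid
        else:
            hi = mid
    return 2.0 * lo

conventional = fwhm(airy_intensity)                 # single-pass PSF
confocal = fwhm(lambda x: airy_intensity(x) ** 2)   # in x out, tiny pinhole
print(round(conventional / confocal, 2))            # ~1.39, i.e. ~40% narrower
```

The ratio of roughly 1.4 matches the "about 40%" figure in the text; with a larger pinhole the squared-PSF model no longer applies and the gain shrinks.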
Experimental measures of axial resolution in an AOSLO report axial resolutions as low as 71 µm (see Fig. 10.7 for comparisons with other instruments) [28].
10.3.2 Basic Layout of an AOSLO
This section illustrates and describes the basic components of an adaptive optics scanning laser ophthalmoscope (AOSLO). The schematic for the AOSLO is shown in Figure 10.4 [29].
10.3.3 Light Path
The light path in an AOSLO can be thought of as a series of telescopes that relay the light to the various elements that act on the beam. For example,
FIGURE 10.4 Schematic of an adaptive optics scanning laser ophthalmoscope. The optical path is described in the text. The specific labeled elements are: CL, collimating lens; AOM, acousto-optic modulator; EP, entrance pupil; BS, beamsplitter; DM, deformable mirror; HS, horizontal scanning mirror; VS, vertical scanning mirror; PT, pupil tracking mirror; LA, lenslet array; CP, confocal pinhole; PMT, photomultiplier tube. Pupil conjugates and retinal conjugates are labeled p and r, respectively. Mirrors and lenses are labeled M# and L# through the optical path. Telescope lens/mirror pairs for relaying the pupil through the path are L1–L2, L3–L4, M1–M2, M3–M4, M5–M6, and M7–M8.
wavefront sensing (p6 in Fig. 10.4), wavefront correcting (p3), and the tip and tilt adjustments (p1 and p2) for raster scanning the beam all need to be done in a pupil-conjugate plane. At present, each of these actions is done with a separate component, and hence a telescope is required to relay conjugate images of the pupil to the various components. The AOSLO therefore comprises five different telescopes that relay the light to six different pupil-conjugate positions (aside from the actual pupil). Considering the optical path in this way makes calculations of the magnification of the system simple, since the relative magnification between any two pupil-conjugate planes is the product of the
magnifications of the telescopes that lie between them. Likewise, the retinal-conjugate paths are connected by a series of telescopes made up of the same mirrors and lenses. Maintaining conjugacy is critical in an AOSLO. If, for example, the pupil is not conjugate to the scanning mirror, the beam will wander across the pupil as it is scanned, and the wavefront sensor will not see a stable aberration pattern (nor will the deformable mirror). The AO system is not fast enough to keep up with aberrations that change as fast as the beam is scanning (typically kHz line frequencies), so the performance of the system would be compromised. The AOSLO is a double-pass system, meaning that it uses the same optics for delivering and for imaging the light. In such a system, the potential for back reflections is high, and different strategies can be used to avoid them. If lenses are used throughout the system, polarization methods could be used to reject back reflections. More commonly, mirrors are used in the double-pass portion of the optical path. A consequence of using mirrors in the optical path is that they have to be used off-axis, which introduces unwanted aberrations into the optical path. Astigmatism, for example, is compounded at each oblique reflection from a spherical surface. Coma of various signs and magnitudes is also produced at every surface. To avoid aberrations in the optical path, off-axis paraboloids can be used to relay the light, but these are costly and may be difficult to align. Alternatively, one can use other means to compensate for the astigmatism that is generated in the system and then optimize the system so that higher order aberrations, like coma, are minimized. It is possible to minimize off-axis coma because of the different signs that are generated, depending on the vergence of the beam reflecting from each mirror.
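The magnification bookkeeping described above reduces to multiplying telescope magnifications; a minimal sketch with hypothetical telescope values:

```python
from functools import reduce

def relative_magnification(telescope_mags):
    """Relative magnification between two pupil-conjugate planes: the
    product of the magnifications of all telescopes between them."""
    return reduce(lambda acc, m: acc * m, telescope_mags, 1.0)

# Hypothetical relay: a 2x telescope followed by 0.5x and 1.4x telescopes
mag = relative_magnification([2.0, 0.5, 1.4])
print(mag)  # 1.4 -- e.g., a 7-mm pupil maps to a 9.8-mm pupil image
```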
Each mirror in the system can be adjusted to compensate for the coma generated by the previous reflection. The remaining astigmatism can be corrected with a cylindrical lens [30]. In the path from the source to the eye, there are pupil-conjugate planes at the deformable mirror and the two scanning mirrors. When laying out the system, it is advisable to place the scanning components as close to the eye as possible. This minimizes the number of optical surfaces where the beam propagates eccentric to the optical axis. For example, by placing the deformable mirror in the stationary part of the path, the mirrors used to form an image of the pupil onto the DM can be as small as the DM itself, thereby minimizing the deflection angle (and hence aberrations), as well as the size and cost of the final system.
10.3.4 Light Delivery
It is important to start with a point source or a well-collimated beam in order to focus a beam onto the retina. The light source in an SLO generally originates from a spatial filter pinhole or from a single-mode optical fiber. Fibers
are convenient because hardware fixtures are readily available that allow them to be switched (e.g., for different wavelengths) without requiring any new alignment. The point source can be collimated and propagated into the imaging system. The type of light source and the wavelength depend on the imaging goals. Chapter 9 discusses how different wavelengths reflect from the retina. Because these systems are highly confocal and the sampling is very fine, speckle can introduce unwanted noise into the images. For that reason, a low-coherence light source may be a better choice. Preliminary testing of low-coherence light sources is currently underway. To minimize any unnecessary exposure of the retina, it is advisable to modulate the laser beam so that the retina is only exposed when image pixels are being acquired. To control the beam, the laser can be modulated directly, or an acousto-optic modulator can be placed in the light delivery path (outside of the double-pass optical path) to turn the beam on during the forward part of the horizontal scan and off on the return.
10.3.5 Wavefront Sensing and Compensation
Wavefront sensing and compensation are done in a unique way in an AOSLO. First, AO is used in both directions: to focus the light to a sharp point on the retina and to take the scattered light from the eye and focus it to a sharp point at the confocal pinhole. But a complication arises because the optical path of the SLO includes scanning mirrors, which means that conventional wavefront sensing methods that project a stable laser beacon onto the retina will not work. However, the solution to this complication is elegant and provides several advantages over conventional AO systems [29, 31]. In the AOSLO, the wave aberration is measured with the same light that is used to form the image. This is possible because, although the light is being scanned in a raster pattern on the retina, the light is descanned on the return path, which renders the beam stationary. Thus, the Shack–Hartmann wavefront sensor sees the light from the retina as though it were coming from a single spot, which makes an aberration measurement possible. This method has several advantages. First, it automatically implements the method of Hofer et al., in which scanning on the retina is employed to remove speckle from the short-exposure retinal images [8, 60]. This is important because the light source in the AOSLO may be highly coherent. A second advantage is that the average aberration is measured over the entire field of view of the system. This ensures more uniformity in the correction over the field being imaged. For small fields, correcting average aberrations likely has minimal benefit, since the eye is isoplanatic over at least one degree [32]. The final advantage is that the wavefront sensing and imaging portions of the system use the same light path and light source, which reduces non-common-path errors and eliminates noncommon aberrations between the wavefront sensor and imaging camera due to chromatic effects.
SCANNING LASER IMAGING

10.3.6 Raster Scanning
Two mirrors are generally used to scan the focused spot in a raster pattern on the retina. One mirror scans the line, defining the line rate, and the other scans vertically to set the frame rate. For the line scanner, an SLO can use a polygon scanner or a galvanometric scanner. The ratio of the line scan frequency to the vertical scan frequency defines the number of lines per frame. Two important considerations in the optical system are the beam size and the scan angle at the location of the eye. As described in an earlier section, the pupil size at any conjugate plane is easily computed by multiplying the size of one pupil by the product of the magnifications of the telescopes that lie between the two planes. The opposite occurs with the field angle: as the beam size is reduced, the scan angle enlarges. This is an invariant of the optical system and is governed by the following equations:
(Beam size)_a = m_a..z × (Beam size)_z    (10.2)

(Scan angle)_a = (1/m_a..z) × (Scan angle)_z    (10.3)
where m_a..z is the product of the magnifications of the telescopes between the conjugate planes a and z. These considerations bear on the selection of the scanner. For example, if the desired field scan angle is 1° and the pupil size is 7 mm, then two examples of acceptable configurations are:

Option 1: 7-mm beam, 1° scan angle (0.5° mechanical scan angle)
Option 2: 1-mm beam, 7° scan angle (3.5° mechanical scan angle)

Option 1 requires a 1 : 1 magnification between the scanner and the eye, and option 2 requires a magnification of 7×. Option 2 is less desirable since the higher scan angle will introduce more off-axis aberrations in the final telescope(s) of the optical system. Given the beam and angle requirements, galvanometric scanners are most suitable for small-field/large-pupil scanning. Scanners up to 16-kHz frequency are commercially available and can scan a 2.5-mm beam over a 10° optical angle. For lower frequencies, many beam sizes and scan angles are available. However, high-frequency scanners provide the highest line counts at high frame rates. The 16-kHz frequency is especially convenient because it is inaudible for most people. An advantage of scanning mirrors over polygons is that their scan angles can be adjusted, which in turn changes the field angle, or magnification, of the retinal image. Reducing the scan amplitude can, in effect, “zoom in” on the image. High line rates at small scan angles will often require a resonant-type scanner, which achieves high scan frequencies via a sinusoidal scanning motion of the mirrors. The sinusoidal scanning motion is not desirable since
STRATEGIES FOR HIGH-RESOLUTION RETINAL IMAGING
it introduces distortions into the image (assuming a fixed-frequency pixel clock is used), but the distortions are stable and well characterized, so they can be removed with postprocessing.

10.3.7 Light Detection

The light from the retina is descanned and is focused onto a confocal pinhole. Choosing the size of the confocal pinhole is important as it plays a role in governing the axial and lateral resolution of the instrument. The goal is to make the pinhole as small as possible, until the detected light levels become the limiting factor. The size of the pinhole has to be selected based on the geometry of the collection optics and how the pinhole projects into retinal space. A convenient way to define the size of the pinhole is in coordinates normalized to the diffraction-limited point spread function, or the Airy disk. One reason for choosing this scale is that the smallest practical pinhole size is equal to the radius of the Airy disk of the collection optics. Smaller confocal pinholes, even in a diffraction-limited system, will not improve axial or lateral resolution to any meaningful degree but will only reduce the amount of detected light [27, 33]. Therefore, a confocal pinhole with a diameter equal to the radius of the Airy disk of the collection system is given a normalized size of 1 Airy disk unit. This normalization facilitates the interpretation of the results and allows a direct comparison between similar instruments. With the pinhole sizes normalized in this way, one can easily compute actual pinhole sizes, or the pinhole size projected into retinal coordinates, using the following equation:

1 Airy disk radius = 1.22 λ0 F / (n d)    (10.4)

where λ0 is the wavelength of light in a vacuum, F is the focal length, n is the index of refraction of the medium in which the light is focused, and d is the diameter of the beam forming the image. For example, in the AOSLO described in Chapter 16 (λ = 660 nm), the collector lens in front of the confocal pinhole (in air) has a focal length (F) of 100 mm and the beam diameter at this lens is 3.5 mm (d); hence, the radius of the Airy disk at the confocal pinhole is 23 µm. A 23-µm pinhole projects into the reduced eye retinal space (6.3-mm pupil diameter, F = 22.2 mm, n = 1.33) as approximately 2.13 µm. The selection of a detector in an AOSLO is also critical. The amount of light that can be used to expose the retina is limited by retinal safety factors. Furthermore, the small amount of light that reflects from the retina is sampled on such a fine scale (less than 1 µm per pixel) that the number of photons per pixel can be quite small. Recent measurements from the AOSLO have determined that after best correcting the wave aberration, the average power
passing through the confocal pinhole when 30 µW of 660-nm light (measured at the cornea) is used to expose the retina can be as low as 200 pW. With a pixel integration time of 50 ns, this power converts to fewer than 33 photons per pixel, and fewer still once the quantum efficiency of the detector is taken into account. A photomultiplier tube (PMT) or an avalanche photodiode (APD) is the best choice for such an application. PMTs may be preferable, despite their lower quantum efficiency, because they introduce less intrinsic noise into the detection path, making them suitable for detecting extremely low light levels.

10.3.8 Frame Grabbing

Frame grabbers are used to generate a 2D image from three inputs: the hsync, vsync, and detected light signals. The frequency of the analog-to-digital (A/D) conversion of the signal dictates the number of pixels that are acquired across a single line scan. Generally, the frequency will be set to oversample the diffraction-limited point spread function so that the resolution of the image is not sampling limited. A pixel size, projected onto the retina, of 1 µm or less is advised. (A single foveal cone has a diameter of about 2 µm.) The frame grabbing board should be capable of receiving and displaying nonstandard video since the master timing for scanning imaging systems is generally governed by the scanning mirrors themselves.

10.3.9 SLO System Operation
In an SLO, real-time imaging runs continuously and simultaneously with the AO system. Defocus, fixation, and pupil alignment adjustments can be made while imaging takes place. One advantage of real-time imaging is that the operator gets immediate feedback on the image quality for every adjustment that is made, making image collection more efficient. A major feature of the AOSLO is its ability to do optical sectioning of the retina. The focus adjustments for optical sectioning can be made with the deformable mirror itself if defocus is built into the closed-loop AO control. Rather than driving the deformable mirror to reduce the aberrations to a null state, the mirror is driven toward a specific defocus amount (while compensating for all other aberrations). Controlling the focus in this way allows the AO system to continue to run, which is important to maintain a well-corrected and stable focus plane in the retina. Defocus control with the deformable mirror is limited by the stroke of the deformable mirror, but current technology allows for defocus control up to ±0.5 diopters (D), which moves the focal plane about 300 µm in a human eye. While this is sufficient for scanning through the normal retinal thickness, it is not sufficient for imaging through the optic nerve. However, when microelectromechanical systems (MEMS) mirrors become a reality, these mirrors should provide an adequate focus range for most applications.
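Two of the numbers quoted in Section 10.3.7 can be checked with short arithmetic: the 23-µm Airy-disk radius at the confocal pinhole from Eq. (10.4), and the roughly 33 photons per pixel implied by 200 pW detected over a 50-ns pixel. A minimal sketch (function name is ours, not the book's):

```python
import math

# Numerical check of two numbers quoted in Section 10.3.7: the Airy-disk
# radius at the confocal pinhole, Eq. (10.4), and the photon count implied
# by 200 pW of detected power over a 50-ns pixel integration time.

def airy_radius(wavelength_m, focal_m, beam_diam_m, n=1.0):
    """Eq. (10.4): 1 Airy disk radius = 1.22 * lambda0 * F / (n * d)."""
    return 1.22 * wavelength_m * focal_m / (n * beam_diam_m)

# Collector lens in front of the pinhole: F = 100 mm, d = 3.5 mm, 660 nm in air.
r_pinhole = airy_radius(660e-9, 100e-3, 3.5e-3)            # ~23 um

# Photon budget: energy per pixel divided by the energy of a 660-nm photon.
h, c = 6.626e-34, 2.998e8
photons_per_pixel = (200e-12 * 50e-9) / (h * c / 660e-9)   # ~33 photons
```

Both results reproduce the values stated in the text, confirming that the detection path operates within tens of photons per pixel.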
10.4 OCT OPHTHALMOSCOPE

Optical coherence tomography is a noninvasive, interferometric imaging modality that provides significantly higher sensitivity and axial resolution than the conventional flood illumination and SLO architectures discussed previously. The term OCT was coined in 1991 in reference to the first optical B-scan (x-z plane) images collected of the in vitro human retina using interferometry [34]. This was preceded by substantial work in the 1980s on a technique called low-coherence interferometry that provided depth-resolved A-scan (z) images of the retina using essentially the same interferometric principles [35, 36]. Since the early 1990s, OCT technology and knowledge have grown rapidly. This has led to an increasingly large and diverse array of OCT designs, all of which can, in principle, be combined with AO to increase their transverse resolution and sensitivity. In an AO-OCT combination, OCT provides the high axial resolution, and AO the complementary high transverse resolution. Together, the AO-OCT combination yields a potentially powerful imaging tool whose 3D resolution and sensitivity in the eye can substantially surpass those of any current retinal imaging modality. The benefits of adding AO to an OCT system, however, are not without costs, the primary ones being increased system complexity and expense. The numerous OCT designs that have been used to image the living human retina can effectively be categorized into five main embodiments, three in the time domain and two in the spectral domain. Time and spectral domains refer to the temporal and spectral detection of the OCT signal. Figure 10.5 shows the five embodiments along with leading performance specifications for image acquisition rate, sensitivity, and resolution. Further details of these example systems can be found in the citations listed in the rightmost column.
Note that many other research groups not listed have made significant contributions to the broad field of OCT, and their work is readily available in the vast OCT literature. As indicated in the figure, AO has been combined with time-domain en face (x-y) flood illumination OCT using a CCD [37] and tomographic scanning (x-z) OCT [38]. The two have been reported to achieve 3D resolutions of 2.3 µm × 2.3 µm × 14 µm and 5 to 10 µm × 5 to 10 µm × 3 µm in the eye, respectively, though results have been limited, demonstrating more the operation of the techniques than clinical or scientific benefits. More recently, AO has been successfully combined with spectral-domain OCT to achieve the highest reported 3D resolution in the living human retina at 3.0 µm × 3.0 µm × 5.7 µm [39]. System performance was sufficient to observe the interface between the inner and outer segments of individual photoreceptor cells, resolved in both the lateral and axial dimensions. Such observations have not been reported with conventional flood illumination and SLO systems endowed with AO. Finally, en face scanning spectral-domain OCT instruments with AO are under development at several sites including the University of
OCT Method | AO | 1. Image Acquisition Rate (per second) | 2. Sensitivity (dB) | 3. Resolution (µm) (retinal tissue): Axial | Lateral | Reference
1. Tomographic scanning OCT (time domain) | No | 400 A-scans | >85 | >10 | 20 | Stratus OCT3
 | No | 50 A-scans | 95 | 3 | 15 | Drexler et al. [40]
 | Yes | 125−250 A-scans | N/A | 3 | 5−10 | Hermann et al. [38]
2. En face scanning OCT (time domain) | No | 2−10 C-scans | N/A | 10 | 15 | Rogers et al. [41]
 | No | 53 C-scans | ~83−85 | 10−11 | 15 | Hitzenberger et al. [42]
3. En face flood illumination OCT (time domain) | Yes | 1 C-scan/7 ms | 95 | 14 | 2.3 | Miller et al. [37]
4. En face scanning OCT (spectral domain)* | No | 29,000 A-scans | 90 | 3.5 | N/A | Cense et al. [43]
 | No | 16,000 A-scans | 98 (max) | 2.1 | N/A | Wojtkowski et al. [44]
5. Line illumination OCT (spectral domain) | Yes | 100 A-scans/1 ms (short burst) | 94 (max) | 5.7 | 3 | Zhang et al. [39] (see also Chapter 17)

* AO en face scanning OCT (spectral domain) is being developed at the University of California at Davis, the University of Vienna, and Indiana University, with the first two already presenting early results.

FIGURE 10.5 Leading performance specifications for three time-domain (1–3) and two spectral-domain (4–5) OCT embodiments as cited in the representative literature for the restrictive case of imaging the living human retina. Three of the five methods have been integrated with AO and reported in the literature. A fourth is in the process of being integrated. A-scan refers to a one-dimensional axial scan (z) through the retinal tissue. C-scan refers to an en face or transverse two-dimensional slice (x-y).
California at Davis, the University of Vienna, and Indiana University, with the first two already presenting early results. Due to the variety of OCT embodiments that can be coupled with AO, the rapid and somewhat unpredictable developments in the OCT field, and the fact that AO-OCT instruments are less established than AO conventional flood illumination and SLO systems, this section will emphasize a more global picture of AO-OCT rather than focus on the details of a single OCT embodiment.

10.4.1 OCT Principle of Operation
Optical coherence tomography is commonly referred to as the optical analog of ultrasonography, a common clinical method for measuring distances between ocular surfaces. Ultrasonography directly measures the time of flight and intensity of sound pulses that are launched into the eye (using a contact transducer) and are reflected or backscattered from internal tissue interfaces that exhibit an acoustic impedance mismatch. Resolution is typically on the scale of 100 µm. A strict optical analog of ultrasonography does not exist, however, as optical detectors are too slow to directly measure the speed of light (which is 5 orders of magnitude faster than that of sound in tissue) over distances relevant for ocular imaging. To circumvent this technical bottleneck, OCT cleverly employs a Michelson interferometer in conjunction with a low-coherence light source to coherently filter light reflecting from the sample (Fig. 10.6).

FIGURE 10.6 (Top) Conceptual layout of OCT for imaging the eye. Principal components of the interferometer include a low-coherence light source, an optical path length modulator (e.g., a translating reference mirror), and a detector. (Bottom) The detected intensity with the mirror at position z_ref, I(z_ref) = |Ψ|² = |Ψ_retina + Ψ_ref|², is a sum of two DC components, I_ref = |Ψ_ref|² and I_retina = |Ψ_retina|², plus an interference term whose amplitude is proportional to the product of √I_ref and √I(z_ref)_retina, a reflection from a slice of retina. The latter is first convolved with the coherence function of the light source, in this case a Gaussian of coherence length l_c, exp[−(3.33 ∆z/l_c)²] cos(4π z_ref/λ) (10.5). The coherence function acts as the axial point spread function of the instrument. The translational change of the reference mirror is given by ∆z. Extraction of I(z_ref)_retina can be realized in a variety of ways and is used to construct an image of the retina.

Indirect measurements of the time of flight, and therefore distance, can be made from this interference pattern. Axial movement of the coherent filter through the thick sample (i.e., the retina), in conjunction with temporally resolved detection of the interference signature, permits the reconstruction of reflectivity profiles over a range of depths in the sample, from which one-, two-, and three-dimensional images of the tissue are generated. This detection scheme is termed time-domain OCT. As a complement to time-domain OCT, spectral-domain OCT records the reflected sample signature in the spectral (optical wavelength) domain rather than the time domain and requires no modulation or scanning of the reference channel. At the most basic level, conversion of a time-domain OCT system to a spectral-domain OCT system is realized by replacing the point detector in the detection channel with a spectrometer and a (linear or areal) detector array. Processing the recorded spectral image consists of several steps, the most important of which are reference subtraction, interpolation from wavelength to wave number space, Fourier transformation, and dispersion balancing. The final processed result is an intensity reflectivity profile through depth in the sample. A good resource for further information on the technology and theory of OCT is the Handbook of Optical Coherence Tomography [45]. A very nice theoretical and historical development of the early years of OCT can be found in Fercher [46]. Many of the recent advances in spectral-domain OCT are still largely only in reviewed journal publications, such as those listed in Figure 10.5. Lastly, OCT has been extended well beyond intensity imaging to other detection schemes, including polarization-sensitive, phase-sensitive, Doppler, and spectroscopic OCT.
While each extracts different optical information from the sample, all will benefit from AO in terms of increased transverse resolution and sensitivity. As such, we confine our attention to intensity imaging, assuming that AO integration will follow a similar path for these other OCT modalities.
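As an illustrative sketch of the time-domain principle just described (all parameter values below are our assumptions, not from the text), the following simulation scans a reference mirror past two model reflectors; interference fringes appear only where the path lengths match to within the coherence length, and envelope detection recovers the reflector depths:

```python
import numpy as np

# Illustrative simulation of time-domain OCT: a low-coherence source produces
# fringes only when the reference and sample path lengths match to within the
# coherence length. Wavelength, coherence length, and reflector depths are
# assumed values; cross terms between reflectors are ignored for simplicity.

lam = 840e-9                              # assumed center wavelength (m)
lc = 10e-6                                # assumed coherence length (m)
reflectors = {50e-6: 1.0, 120e-6: 0.3}    # depth (m) -> relative amplitude

z_ref = np.linspace(0.0, 200e-6, 20000)   # reference mirror positions (m)
signal = np.zeros_like(z_ref)
for z_s, r in reflectors.items():
    dz = z_ref - z_s
    # Gaussian coherence envelope times the interference carrier
    signal += r * np.exp(-(dz / lc) ** 2) * np.cos(4 * np.pi * dz / lam)

# Crude envelope detection: rectify and smooth over a few carrier periods.
kernel = np.ones(200) / 200
envelope = np.convolve(np.abs(signal), kernel, mode="same")

peak_depth = z_ref[np.argmax(envelope)]   # strongest reflector, near 50 um
```

The width of each envelope peak is set by the coherence length, which is exactly why broader (shorter-coherence) sources yield finer axial resolution, as Eq. (10.6) in the next section quantifies.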
10.4.2 Resolution Limits of OCT In contrast to traditional imaging systems, such as conventional flood illumination systems and SLOs, transverse and axial resolutions for OCT are independently controlled with the latter being dictated by the mean wavelength and bandwidth of the illumination source rather than the numerical aperture (NA). Specifically, the axial resolution of a traditional diffraction-limited system is proportional to the square of the pupil diameter, while for OCT, the full width at half height (FWHH) of the axial PSF is given by:
FWHH_axialPSF = (2 ln 2 / π) (λ²/∆λ)    (10.6)
where λ is the center wavelength of the source and ∆λ is the FWHH of its spectrum. Equation (10.6) assumes a Gaussian spectral source and reveals that increased axial resolution is realized with broader spectral sources. The λ² dependence indicates that longer wavelength sources require disproportionately broader spectral widths to achieve the same resolution. As examples, the commercial Stratus OCT3 (Zeiss Meditec, Inc.) has an axial resolution in retinal tissue of ~10 µm, and ultra-high-resolution OCT recently demonstrated ≤3 µm [38, 40, 43, 44]. (See Figure 10.5 for specific resolution examples.) While the axial resolution in OCT is governed by the source spectral properties, its transverse resolution is governed by the same focusing properties (i.e., limited by diffraction and aberrations) that govern conventional flood illumination systems and SLOs. It is for this reason that AO and OCT are viewed as complementary technologies. The high transverse resolution obtained with AO is fundamentally independent of the high axial resolution from OCT. The need for very high 3D resolution for imaging the retina microscopically is illustrated in Figure 10.7. The figure shows point spread functions for the major retinal ophthalmoscope architectures in combination with AO as well as those for two commercial instruments (without AO). A scaled histological cross section of the human retina is shown on the left for comparison. In the figure, the three experimental AO-OCT point spread functions are significantly smaller than those of the other ophthalmoscope architectures [37–39]. Currently, the smallest point spread volume is that of the Indiana AO spectral-domain OCT ophthalmoscope at 51 µm³, which is 272 and 78 times smaller than that of the commercial SLO and OCT systems, respectively.
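Equation (10.6) is easy to evaluate numerically. The sketch below uses illustrative source parameters (our assumptions, not values from the text) and a nominal retinal tissue index to express the result in tissue:

```python
import math

# Numerical sketch of Eq. (10.6): FWHH of the OCT axial PSF for a Gaussian
# source spectrum. Source parameters and tissue index are assumed values.

def axial_fwhh(center_wavelength_m, bandwidth_fwhh_m):
    """Eq. (10.6): FWHH_axialPSF = (2 ln 2 / pi) * lambda^2 / delta_lambda."""
    return (2 * math.log(2) / math.pi) * center_wavelength_m ** 2 / bandwidth_fwhh_m

in_air = axial_fwhh(840e-9, 50e-9)       # ~6.2 um for an 840-nm, 50-nm source
in_tissue = in_air / 1.38                # ~4.5 um, assuming a tissue index of ~1.38
```

Doubling the bandwidth halves the axial FWHH, which is why ultra-high-resolution systems use very broadband sources.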
Note that this point spread function is at least as small as many of the cell nuclei shown in the retinal cross section, suggesting that these cells could be resolved and could be observed if there is sufficient backscatter signal. The AO-OCT point spread functions depicted in Figure 10.7 assume the axial (i.e., coherence gate) and lateral (i.e., focal plane) resolutions are superimposed at the same retinal depth, as is intrinsically and conveniently the case for the conventional flood illumination and SLO systems. For OCT, this need not be the case. The axial and lateral resolutions of OCT systems are independent and therefore can be at physically separate locations in the retina. This provides an additional degree of freedom. For example, the focal plane could be strategically positioned axially in front of the coherence gate such that cells lying at the focal plane are viewed at high lateral resolution and in pseudo-transmission by light reflected from cells at the more deeply located gate. An example of this scenario is shown in Figure 10.10 (right). This additional freedom, however, necessitates additional ophthalmoscope complexity in order to control both the focal plane and gate positions. This control must
FIGURE 10.7 Resolution of current imaging devices relative to the human retina. The left shows a histological cross section of human retina at 4.17° eccentricity and a 100-µm scale bar. (From Boycott and Dowling [47]. Reproduced with permission from The Royal Society.) To the right are point spread functions drawn to scale for various combinations of AO and ophthalmoscope architectures (conventional flood illumination, cSLO, and OCT), including the AO-OCT systems of Miller et al. [37], Hermann et al. [38], and Zhang et al. [39]. For simplicity, the PSFs are displayed as 2D projections with their width and height representing the ophthalmoscope's lateral and axial resolutions, respectively. Note that the displayed PSF for the AO flood-illuminated ophthalmoscope represents an effective PSF rather than the true PSF, since out-of-focus light reduces image contrast, not resolution. Two commercial ophthalmoscopes (comm. cSLO and comm. OCT) are also shown.
account for instabilities in the eye, such as microfluctuations in accommodation and head movements, that unfortunately shift the focus and gate positions independently. For example, axial head and eye motion shifts the gate, but not the focus, while accommodation shifts the focus, but not the gate. It may appear at first glance that to achieve the AO-OCT point spread functions in Figure 10.7 for time-domain OCT, exquisite control is required to precisely position the gate at the focal plane. Fortunately, the necessary overlap is noticeably more relaxed due to the finite depth of focus that straddles the geometric focal plane. Moreover, because depth of focus is proportional to the square of the transverse resolution (which is inversely proportional to pupil size for an unaberrated eye), placement of the gate is increasingly less difficult for smaller pupil sizes, albeit at the expense of reduced lateral resolution. Figure 10.8 shows the theoretical trade-off between lateral resolution and depth of focus for an unaberrated eye at 0.83 µm. As an example, for a 6-mm pupil, the transverse resolution ranges from 2.8 µm (ω₀) to 4.0 µm (√2 ω₀) across a narrow 60-µm depth of focus. For the same system, but with
FIGURE 10.8 Transverse resolution (µm, solid) and depth of focus (µm, dashed) as a function of pupil diameter (mm) for an unaberrated eye. Resolution at the focal plane, ω₀, is defined by 1.22λF/d, where λ is the wavelength of light (0.83 µm), F is the focal length of the eye (22.2 mm/1.33 = 16.7 mm), and d is the pupil diameter. Assuming a Gaussian beam intensity profile, the depth of focus is specified as two times the Rayleigh range, with resolution across the depth of focus ranging from ω₀ to √2 ω₀ [48].
a smaller 3-mm pupil, the transverse resolution is reduced by a factor of two (5.6 to 8.0 µm), but the depth of focus increases fourfold (241 µm) and extends across half to the full thickness of the retina, depending on retinal eccentricity. The fundamental trade-off between transverse resolution and depth of focus suggests that the two can be collectively optimized for specific AO retinal imaging applications, for example, targeting specific cell layers such as the ganglion cell layer, or the inner or outer nuclear layer, each of which is <75 µm thick. Note that light collection efficiency (~pupil²) and speckle size (~1/pupil) also depend on the pupil and should be taken into account when optimizing the system. For spectral-domain OCT, the overlap requirement of the gate and focal plane is substantially more relaxed than that of time-domain OCT, as individual A-scans are acquired in a single snapshot. This is a major performance advantage of the spectral-domain approach.

10.4.3 Light Detection

Light detection, or equivalently sensitivity, for OCT commonly refers to the weakest retinal reflection that is detectable relative to the optical power that enters the eye. It is commonly defined as the square of the signal amplitude divided by the variance of the noise floor [49]. While the sensitivity of OCT varies widely depending on the design, typical values are between −90 dB (10⁻⁹) and −100 dB (10⁻¹⁰). Figure 10.5 shows sensitivity levels for specific OCT
instruments. To obtain a feel for what this means for retinal imaging, the brightest retinal reflections start between −50 and −60 dB (10⁻⁵ to 10⁻⁶ of the light that enters the eye) and typically come from the nerve fiber, photoreceptor, retinal pigment epithelial, and choroidal layers. The remaining 30 to 40 dB of sensitivity in a typical OCT instrument (with −90 dB sensitivity) is sufficient for detecting structures in essentially all of the stratified layers of the retina. Assuming that reflections in the transparent retina are due to refractive index mismatches, ∆n, −90 to −100 dB corresponds to a ∆n of only 10⁻⁴ to 10⁻⁵. Such high sensitivity in an OCT instrument is achieved via optical heterodyning, in which weak tissue reflections are amplified into the photon noise regime by means of a strong reference beam. This approach is in stark contrast to that employed by conventional flood illumination and SLO systems, which rely on direct rather than interferometric imaging. The sensitivity of time-domain OCT and spectral-domain OCT depends on slightly different temporal parameters. The signal-to-noise ratio (SNR) of time-domain OCT is given by:

SNR_TD = 10 log [ηP_retina / (hν ∆f_x)]    (10.7)
where η is the detector quantum efficiency, P_retina is the optical power of a retinal reflection that reaches the detector, hν is the energy of a photon, and ∆f_x is the acquisition bandwidth. Equation (10.7) reveals a fundamental trade-off between sensitivity and acquisition rate (or bandwidth). Higher acquisition rates (i.e., wider bandwidths) fundamentally decrease sensitivity, yet are often necessary to mitigate eye motion artifacts or to achieve denser sampling when, for example, a broader light source is used for higher axial resolution. While power entering the eye can be increased to offset faster acquisition speeds, a hard upper limit is imposed by light damage to the ocular tissue. The signal-to-noise ratio of spectral-domain OCT is given by:

SNR_FD = 10 log [ηP_retina T / (hν)]    (10.8)
where T is the exposure time of a single A-scan [50]. In contrast to time-domain OCT, spectral-domain OCT is independent of the acquisition rate due to the fact that an entire A-scan is collected simultaneously rather than sequentially. Sensitivity, therefore, does not degrade with increased resolution. This is a major advantage, as it permits ultrahigh axial resolution (~3 µm) images of the retina to be acquired at the same sensitivity as standard resolution (~10 µm) OCT images. This is particularly relevant for AO retinal imaging applications where speed and sensitivity are critical. Note that the signal-to-noise benefit of spectral-domain OCT relative to time-domain OCT goes as:
T / (1/∆f_x) = T ∆f_x = N_A    (10.9)
where N_A is the number of resolution elements that fit within a single A-scan. This reveals that the sensitivity advantage of spectral-domain OCT over that of time-domain OCT can be substantial (two to three orders of magnitude greater) for A-scans composed of many resolution elements. Conversely, for very thin samples that traverse few A-scan resolution elements, the sensitivity advantage of spectral-domain OCT is, in principle, largely lost, and both types of OCT perform similarly in terms of sensitivity, speed, and resolution. As specified in Eqs. (10.7) and (10.8), sensitivity is proportional to the retinal reflection that reaches the detector, P_retina, which in turn depends on the collection efficiency of the eye's optics. The combination of a large pupil with AO to correct the wave aberration of the eye will, in principle, permit substantial increases in OCT sensitivity above those of current research and commercial instruments. For example, the Vienna AO tomographic scanning OCT instrument utilized a deformable mirror (OKO Technologies) for aberration correction across a 3.7-mm pupil [38]. By adding this basic adaptive optics system to correct mainly the defocus of the system (almost 1 D), they found an increase in SNR of 9 dB when using adaptive optics at the 3.7-mm pupil size. The mirror used in this system was not optimal for higher order aberration correction; improvements in sensitivity beyond what was shown could be obtained by using a better wavefront corrector, such as a Xinetics deformable mirror. As an example, the Indiana AO line illumination spectral-domain OCT ophthalmoscope employed such a mirror for the correction of aberrations across a 6-mm pupil.
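The sensitivity relations above lend themselves to quick numerical checks (the powers, bandwidth, and quantum efficiency below are illustrative assumptions, not measured values): the pupil-area scaling of light collection, and the dB advantage of Eq. (10.9) as the difference between Eqs. (10.8) and (10.7):

```python
import math

# Quick checks of the sensitivity relations in this section. Example powers,
# bandwidth, and quantum efficiency are illustrative assumptions.

h, c = 6.626e-34, 2.998e8

def snr_td_db(qe, p_retina_w, wavelength_m, bandwidth_hz):
    """Eq. (10.7): time-domain SNR in dB."""
    return 10 * math.log10(qe * p_retina_w / ((h * c / wavelength_m) * bandwidth_hz))

def snr_fd_db(qe, p_retina_w, wavelength_m, exposure_s):
    """Eq. (10.8): spectral-domain SNR in dB (bandwidth replaced by 1/T)."""
    return snr_td_db(qe, p_retina_w, wavelength_m, 1.0 / exposure_s)

# Collection scales with pupil area, so enlarging a diffraction-limited pupil
# from 1 to 6 mm gains 10*log10(6^2) = ~15.6 dB.
pupil_gain_db = 10 * math.log10(6.0 ** 2)

# Eq. (10.9): with T = 0.1 ms and a 1-MHz time-domain bandwidth (N_A = 100
# resolution elements per A-scan), the spectral-domain advantage is
# 10*log10(100) = 20 dB.
advantage_db = snr_fd_db(0.8, 1e-12, 840e-9, 1e-4) - snr_td_db(0.8, 1e-12, 840e-9, 1e6)
```

Note that the chosen reflected power and quantum efficiency cancel in the advantage calculation; only the product T ∆f_x matters, as Eq. (10.9) states.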
For this ophthalmoscope, the signal-to-noise ratio of the detected reflection from the photoreceptor layer was highly sensitive to the level of ocular aberrations and defocus, with changes of 11.4 and 13.1 dB observed when the ocular aberrations (astigmatism, third order, and higher) were corrected and when the focus was shifted by 200 µm (0.54 D) in the retina, respectively. Over and above the sensitivity increase obtained from adding adaptive optics, the sensitivity will increase with the pupil size. For a diffraction-limited system, simply increasing the exit pupil from 1 to 6 mm theoretically increases the sensitivity of the system by 15.6 dB.

10.4.4 Basic Layout of AO-OCT Ophthalmoscopes

Regardless of the OCT embodiment, two fundamentally distinct approaches exist for integrating OCT with AO, both of which are depicted in Figure 10.9. Each has strengths and weaknesses that more or less complement the other.

OCT OPHTHALMOSCOPE

FIGURE 10.9 Two fundamental schemes for integrating AO into OCT. The most straightforward approach is to position the AO system in the sample channel (a), while the second approach is to position it in the detector channel (b), where the channel is configured as a point diffraction interferometer.

Figure 10.9(a) shows the AO system positioned in the sample channel of the interferometer. This straightforward approach restricts the AO system to measuring and correcting the wave aberrations of the sample channel alone, for both light entering and exiting the eye, which increases system efficiency while leaving the reference beam untouched. The increase is somewhat tempered, however, by inevitable light loss in the AO system (double pass), which can easily exceed 50% in a single pass. An additional strength is that the OCT subsystem can be realized with optical fibers that permit a decoupling of the four arms of the interferometer. This makes alignment of the system easier and effectively separates the ophthalmoscope components that control the lateral and axial resolution. A weakness of this approach is that the reference channel must duplicate the optical path of the sample channel, which can be physically long owing to the relay optics and long focal lengths required when using conventional deformable mirrors in current AO systems. For example, each channel can be several to tens of meters in length. Such long noncommon paths decrease the stability of the interferometer, require additional optics, increase the complexities of dispersion balancing, and noticeably enlarge the physical footprint of the ophthalmoscope. An additional drawback is that back reflections from the AO components can pose problems for the wavefront sensor and any viewfinder cameras, and may necessitate using curved mirrors rather than lenses. The Vienna AO tomographic scanning OCT [38] and the AO en face scanning spectral-domain OCT systems currently under development (see Fig. 10.5) are based on this approach. Figure 10.9(b) shows the second approach, which strategically positions the AO system in the detection channel, downstream of both the reference and sample arms. This arrangement permits the noncommon path lengths (i.e., the reference and sample arm lengths) to be orders of magnitude shorter
STRATEGIES FOR HIGH-RESOLUTION RETINAL IMAGING
(~10 cm) than the first approach. The shorter noncommon paths provide better interferometric stability, require fewer optics, reduce the number of components that must be dispersion matched, and greatly reduce the physical size of the ophthalmoscope. A potential problem is that the AO system will act on both the sample and reference wavefronts, meaning that full correction of the sample aberrations will cause the conjugate of these aberrations to be imparted onto the reference. There are a number of potential design strategies to prevent this contamination. One effective strategy, employed in the Indiana AO flood illumination OCT and Indiana AO line illumination spectral-domain OCT ophthalmoscopes, involves configuring the detection channel as a point diffraction interferometer [51]. In this approach, the reference beam impinges on a confined central region of the corrector that is influenced at most by a handful of mirror actuators, while the sample beam is exposed to all actuators. Disadvantages of this second approach are the additional optical constraints imposed by the point diffraction design, the inability to correct the light entering the eye (which reduces efficiency), and the restriction to free-space (rather than fiber-based) OCT systems.

10.4.5 Optical Components
Light reflecting out of the eye can be relayed through the AO-OCT instrument with lenses or curved mirrors, similar to conventional flood illumination and SLO systems. The Vienna AO tomographic scanning OCT, the Indiana AO flood illumination OCT, and the Indiana AO line illumination spectral-domain OCT instruments all relied largely on lenses. (See Chapter 17 for a detailed layout of the Indiana spectral-domain ophthalmoscope.) The advantages and disadvantages of lenses and mirrors in an AO-OCT instrument are essentially the same as those described in Section 10.2.3 for conventional flood illumination. One difference is that OCT is largely insensitive to back reflections from lenses, though its wavefront sensor still is. Another difference is that high-resolution and ultra-high-resolution OCT are highly sensitive to chromatic dispersion induced by lenses, and therefore care must be taken if lenses are used in the noncommon paths of the OCT interferometer. Because of this, mirrors are an attractive alternative, exhibiting little dispersion if appropriate coatings are selected. The AO scanning spectral-domain OCT instruments currently being developed at UC-Davis and Indiana are mirror based. Beamsplitters can be cube, plate, or pellicle, though vibrations with the latter can be picked up by the OCT detector. The additional complexity of AO-OCT instruments over that of AO conventional flood illumination and AOSLO systems strongly warrants performance modeling using commercial ray tracing software.

10.4.6 Wavefront Sensing
To date, Shack–Hartmann wavefront sensors have been employed in all AO-OCT ophthalmoscopes and operate in a manner similar to that employed in AO conventional flood illumination and AOSLO systems (see Sections 10.2.4 and 10.3.5). Integration of the SHWS into flood illumination OCT follows that for conventional flood illumination; integration into scanning OCT follows that for the SLO. For the scanning embodiments, some light from the OCT system must be split off before the detector to be used for the AO wavefront sensor. While this will decrease the sensitivity of the system because fewer photons will be detected, the loss is minimal. This loss in sensitivity will be more than offset by the increase in sensitivity obtained by correcting a larger pupil size and the more efficient coupling of light into the collection fiber afforded by adaptive optics.

10.4.7 Imaging Light Source

General requirements for the OCT light source include an optical spectrum that has a smooth profile, to minimize side lobes in the axial point spread function, and that is sufficiently broad to achieve the axial resolution (~λ²/Δλ) needed for the specific retinal imaging task. To date, AO-OCT instruments have employed Ti:sapphire lasers and SLDs. Advantages of the Ti:sapphire laser include higher output power and a wider optical bandwidth. For example, the Ti:sapphire laser employed by Hermann et al. [38] had a 130-nm optical bandwidth centered at 800 nm, which produced an axial resolution in the retina of 3 µm. In comparison, the SLDs employed in the two AO-OCT ophthalmoscopes at Indiana had optical bandwidths of less than 50 nm and resulted in axial resolutions no better than 5.7 µm. Recent advances in SLD performance have narrowed the gap, with commercial devices having optical bandwidths up to 200 nm and delivering more than 15 mW of optical power. Additionally, SLDs are turnkey, physically compact, and less expensive ($5000 to $15,000). Note that high-power SLDs are highly sensitive to back reflections, which can destroy the SLD; careful use of optical isolators is therefore essential.
Light sources are readily available for conventional flood illumination and SLO ophthalmoscopes that extend across the visible and near-infrared spectrum. In contrast, light sources for OCT are currently confined to wavelengths above 670 nm, with the absorption properties of the ocular media imposing a fundamental upper limit at roughly 1 µm. A detailed description of the various types of light sources used in OCT can be found in Chapter 3 of the Handbook of Optical Coherence Tomography [45].

10.4.8 Field Size

The constraints on field size for AO-OCT are no different than those for AO conventional flood illumination and SLO systems. Constraints include the need for high illumination levels during short image acquisition periods (tens of milliseconds) as well as dense sampling of the retinal image (<1 µm/pixel). The Indiana AO flood illumination OCT and Indiana line illumination
spectral-domain OCT ophthalmoscopes acquire single images of the retina well within 10 ms, the maximum duration over which the retina is largely stationary at the cellular level. For single-image acquisition, the field size of these ophthalmoscopes was limited by the output power of their SLD sources (10 to 20 mW) rather than by detector speed. Both the flood and line illumination ophthalmoscopes had field sizes of about 1/3°, even though the detectors they employed had a sufficient number of pixels to acquire images of the retina up to 1.7°. For comparison, the Indiana AO conventional flood illumination ophthalmoscope typically images up to 1.7° patches with exposure durations around 1 to 4 ms. For the AO en face scanning spectral-domain OCT ophthalmoscopes that are being developed, the field of view is limited by the image acquisition rate of their line scan CCD detectors, as opposed to the illumination power of their light sources. The fastest reported line scan detector for spectral-domain OCT operates at 29,300 A-scans/s with exposure durations of 34 µs [43]. This is 680 times slower than the 50-ns integration time for the AOSLO reported in Section 10.3.7. Obviously this seriously constrains the field of view that can be realized in the same period of time as with the SLO. At Indiana, we have recently evaluated a higher speed line scan detector that acquires up to 73,000 A-scans/s (2.5 times faster) with 8.5-µs exposure durations. While image signal-to-noise is reduced at the higher speed, images still carry significant information about the retinal layers. In principle, volumetric imaging at this higher A-scan rate and a 30-Hz frame rate can be accomplished for a 49 × 49 × 256 pixel volume of retinal tissue. While the 49 × 49 en face pixel area is significantly less than that routinely achieved with AOSLOs (512 × 512 pixels), the OCT approach has the advantage of providing en face information at all depths of the retina.

10.4.9 Impact of Speckle and Chromatic Aberrations
A fundamental obstacle for AO-OCT is the confounding effect of speckle noise, which is intrinsic to the interferometric nature of OCT. While AO can, in principle, increase transverse resolution to the diffraction limit for large pupils, the average theoretical transverse size of speckle is already comparable to the diffraction spot size (~λf/d). Speckle imposes two serious problems for validating the AO-OCT benefit: (1) it largely prevents the observation (and confirmation) of microscopic structures in the retinal image, even for structures physically larger than the average speckle (e.g., small capillaries), and (2) it prevents effective focusing in the retina because speckle size is largely insensitive to defocus and aberrations in the system. As an example, Figure 10.10 shows images of the cone mosaic in one subject’s eye acquired with the Indiana AO conventional flood illumination ophthalmoscope (left panel) and Indiana AO flood illumination OCT (right panel). The images were acquired simultaneously, at the same wavelength, and of adjacent patches of retina. While the spacing of the bright spots in the conventional flood illuminated image matches that of cones predicted histologically at this retinal
FIGURE 10.10 Images of adjacent patches of cone photoreceptors in one subject acquired simultaneously by the (left) Indiana AO conventional flood illumination ophthalmoscope using an incoherent multimode fiber light source and (right) Indiana flood illumination AO-OCT ophthalmoscope. The latter represents a pseudotransmission view of the cones that was realized by focusing on the cone apertures and strategically positioning the coherence gate behind the cones in the proximity of the highly reflective retinal pigment epithelium and choroid. The two images were acquired simultaneously through essentially the same instrument and at the same wavelength. Cone clarity in the incoherent image confirmed that the OCT system was focused on the cone apertures. Pupil size was 6 mm, and the illumination wavelength was 679 nm (Δλ = 11 nm) for both images. Scale bar: 50 µm. (Images courtesy of Rha, Jonnal, and Miller, Indiana University.)
eccentricity, many (although not all) of the corresponding spots in the OCT image are noticeably of finer grain and more tortuous. This observation, coupled with the fact that the cones must be in focus in the OCT image, suggests that cone information is largely masked by speckle noise that dominates the image. As a second example, Figure 10.11 shows a more conventional tomographic slice acquired with a scanning spectral-domain OCT instrument (without AO). Note that while the 27° image has a photographic-like quality, the benefit of AO cannot be judged on this large scale. An appropriate scale (i.e., comparable to that for most other AO ophthalmoscopes) is the magnified 1° subsection (inset), which clearly reveals well-developed, high-contrast speckle in essentially all retinal layers that is hidden in the 27° image. Speckle reduction methods for OCT have been addressed by numerous investigators and include image processing, compounding (angular, spatial, and frequency), and the use of spatially incoherent illumination. The first two often compromise resolution, which is the primary reason for using AO and OCT to begin with. The latter suffers from reduced signal to noise due to the low photon number per coherence volume and from sensitivity to alignment and optical mismatches between the sample and reference channels. In general, none of these approaches appears ideally suited for AO-OCT retinal imaging. An alternative approach is to use a multimode fiber cascaded with a spatially coherent source to generate partially spatially coherent light. This has the benefit of obtaining substantial numbers of photons per coherence volume in addition
FIGURE 10.11 Tomographic cross section covering 27° (8 mm) and centered on the foveal pit in one subject. (Inset) Magnified view of a 1° section extracted at about 1° eccentricity reveals significant speckle noise in all retinal layers. The image was acquired with a tomographic scanning spectral-domain OCT system and consists of 4000 A-scans spaced at 2 µm. A-scan exposure time was 50 µs. Axial and lateral resolution are 6 and 10 µm, respectively. Mean SLD wavelength was 840 nm. (Image courtesy of Robert Zawadzki and John S. Werner, UC-Davis.)
to being less sensitive to reference and sample channel differences [52]. While early results are promising, the method has yet to be applied to the eye. Other potential directions include modulating the reference or sample beams and time averaging. In general, speckle noise remains an obstacle for AO-OCT retinal imaging, and more effective reduction methods are greatly needed. It is well known that dispersion introduced by instrument lenses and the sample material (i.e., the eye) can critically degrade OCT resolution and sensitivity if it is not sufficiently balanced between the sample and reference channels. Dispersion balancing is an established art and frequently includes the use of such devices as rapid scanning delay lines, saline water vials, translating prism pairs, and postdetection software. While most OCT systems are sufficiently balanced to permit an axial resolution that approaches the theoretical limit set by the source spectrum, balancing does not remove the chromatic aberrations of the system, in particular the longitudinal chromatic aberrations of the eye. Without chromatic correction, only one wavelength of the low-coherence source will come to a sharp focus on the detector. The extent of the chromatic blur will naturally depend on the magnitude of the chromatic aberrations and the NA through which the retina is imaged. To quantify this impact on lateral resolution, Figure 10.12 shows the spectral focal shift in the retina relative to that for 825 nm. The curve was derived using the longitudinal chromatic aberration (LCA) of the eye as specified by the Indiana schematic eye [14]. Note that although the Indiana eye was derived for wavelengths over the visible spectrum, more recent studies report that it is also a reliable predictor of chromatic defocus in the near infrared [12, 53]. The figure also includes the spectral bandwidth and corresponding
COMMON ISSUES FOR ALL AO IMAGING SYSTEMS
Focal displacements relative to 825 nm: (a) 12 µm, (b) 92 µm, (c) 30 µm.
FIGURE 10.12 Focus shift in the retina due to the intrinsic longitudinal chromatic aberrations of the eye (modeled using the Indiana schematic eye). Displacement (listed below each band) is relative to a wavelength of 825 nm, picked for its proximity to the mean wavelength of many low-coherence light sources used for OCT. Arrows below the curve correspond to the spectral bands for light sources employed in the three AO-OCT ophthalmoscopes listed in Figure 10.8: (a) 679-nm SLD with Δλ = 11 nm [37], (b) 800-nm Ti:sapphire laser with Δλ = 130 nm [38], and (c) 843-nm SLD with Δλ = 49.4 nm [39].
focal displacements in the retina for the three AO-OCT ophthalmoscopes listed in Figure 10.5. While these displacements are large relative to the gate resolution and to the size of individual retinal cells, they are shorter than the depth of focus specified in Figure 10.8 for the corresponding pupil size. This implies that the LCA will likely produce minimal image degradation for these ophthalmoscopes. Note, however, that if a wide bandwidth source, such as the Ti:sapphire laser, were used with a 6-mm pupil instead of the 3.7-mm pupil assumed in Figure 10.12(b), the LCA of the eye would need to be corrected, since the depth of focus would shrink to 58 µm, which is shorter than the 92-µm focal displacement. In general, as source spectra become increasingly broad and are shifted to shorter wavelengths where the eye’s LCA is substantially higher, compensation of the eye’s LCA will become increasingly important in order to preserve the lateral resolution afforded by AO.
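The focal displacements of Figure 10.12 can be approximated with the Indiana schematic eye's chromatic-difference-of-refraction formula [14]. The constants below are the commonly quoted values for that model, and the 375 µm-per-diopter conversion from defocus to retinal focal displacement is an assumed small-signal value, not a number stated in the text.

```python
# Indiana schematic eye chromatic difference of refraction (diopters),
# wavelength in micrometers: D(lambda) = p - q / (lambda - c) [14].
P, Q, C = 1.68524, 0.63346, 0.21410
UM_PER_DIOPTER = 375.0   # assumed defocus-to-focal-shift conversion

def defocus_d(wavelength_um):
    return P - Q / (wavelength_um - C)

def band_focal_spread_um(center_um, bandwidth_um):
    # Focal displacement across a source's full spectral bandwidth
    lo = defocus_d(center_um - bandwidth_um / 2)
    hi = defocus_d(center_um + bandwidth_um / 2)
    return abs(hi - lo) * UM_PER_DIOPTER

for label, lam, dlam in [("(a) 679-nm SLD", 0.679, 0.011),
                         ("(b) 800-nm Ti:sapphire", 0.800, 0.130),
                         ("(c) 843-nm SLD", 0.843, 0.0494)]:
    print(label, round(band_focal_spread_um(lam, dlam)), "um")
```

Under these assumptions the three sources give spreads of roughly 12, 91, and 30 µm, in line with the displacements listed in Figure 10.12.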
10.5 COMMON ISSUES FOR ALL AO IMAGING SYSTEMS

10.5.1 Light Budget
In a high-resolution ophthalmic imaging system, the light budget needs careful consideration as ocular hazards place a hard threshold on the amount of light that can be safely launched into the eye. In addition, the double-pass nature
of the eye is highly inefficient with light levels emerging from the eye 3 to 5 orders of magnitude below that which entered [9]. Considerable effort, therefore, should be made to maximize the number of photons that reach the imaging detector. An important design step is to couple the illumination beam into the system with a weak reflecting beamsplitter (BS in Fig. 10.1, and BS1 in Fig. 10.4). Typically 5% or less is used. Light reflected from the retina is then efficiently transmitted by the splitter (>95%). A disadvantage of this approach is that it requires a more powerful light source as the double-pass throughput of the beamsplitter is small (5% × 95% = 4.8%). Another important step is to use highly reflective mirrors and antireflective coated lenses. Even though light loss may be small for any single optical element, their cumulative loss can be significant. The goal is to minimize the amount of light used for wavefront sensing so that as much light as possible is used to form the retinal image.
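The beamsplitter and cumulative-loss bookkeeping above can be made concrete with a short sketch; the 20-surface count and the per-element throughput are illustrative assumptions, not measured values for any instrument described here.

```python
def double_pass_bs_throughput(reflectance):
    # Weakly reflecting coupling beamsplitter: illumination is reflected
    # into the eye (R) and the returning light is transmitted (1 - R).
    return reflectance * (1 - reflectance)

print(double_pass_bs_throughput(0.05))   # 0.0475, i.e., about 4.8%

# Cumulative loss over many "small-loss" elements adds up quickly:
n_surfaces = 20        # assumed number of mirror/lens surfaces
per_surface = 0.98     # assumed throughput of each element
print(per_surface ** n_surfaces)   # ~0.67: a third of the light is gone
```

This is why highly reflective mirrors and antireflection-coated lenses matter even when each individual element loses only a percent or two.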
10.5.2 Human Factors

Fixation An entire system can be precisely aligned, but the final image location depends on the patient and the direction they are looking with respect to the axis of the ophthalmoscope. To control the lateral retinal image location, we typically rely on the subject: a calibrated fixation target is provided for the subject to view while imaging. The wavefront sensor beam will be just visible in (or near) the center of the fixation field. If the subject looks in that direction, wavefront sensing and imaging will be done at the fovea. If the subject looks up, then the image is of superior retina, and so forth.

Stability Patient stability is maintained using a bite bar (or mouth rest), or a rigid dental impression that is fixed to an x-y-z positioner on the optical table. The patient can be brought into alignment so that the optical axis of the instrument passes through the entrance pupil of the eye. Another method currently being explored is the use of a video-based pupil tracking system that steers the optical path so that the beam always goes through the same part of the pupil. A good pupil tracking system will relieve the need for a bite bar and may also be used to pass the beam through the least aberrated part of the eye being imaged.
10.5.3 Refraction

To relieve the demand on the deformable mirror, it is important to correct as much of the refractive error of the patient as possible prior to AO correction. The defocus of the eye can be corrected by placing a trial lens in a suitable pupil
plane, or by changing the axial distance of the eye along with the lens that is closest to the eye, thereby changing the vergence of the beam entering the eye without moving the pupil conjugate positions. Astigmatism can be corrected by placing cylindrical lenses of the appropriate power and orientation in front of the eye’s pupil or in any other pupil conjugate plane. If the correcting lens is to be placed in a plane conjugate to the pupil, then the vergence has to be corrected by the following formula:

Vergence_pN = Vergence_p0 / (m_0,N)²    (10.10)

where m_0,N is the magnification between the eye and the plane where the lens is placed. The refraction of the eye can be easily determined from the wave aberration. The formula for converting Zernike coefficients to refraction in diopters is given below. Start with the three coefficients for the second-order terms, C3, C4, and C5, respectively. First, compute the axis from the astigmatism terms,

α = 0.5 × arctan(C3/C5)    (10.11)

Then compute the two intermediate values A and D,

A = 2√6 × C3/sin(2α)   or   A = −2√6 × C5/cos(2α)
D = 2√3 × C4 − A/2    (10.12)

Finally, compute the refraction values in diopters:

Axis = 180α/π
Cylinder = 2A/(pupil radius)²
Sphere = 2D/(pupil radius)²    (10.13)
Be sure to convert all distance scales to meters to get the final values in diopters. Some care has to be taken when either of the astigmatism terms is zero, and the axis should be expressed as an angle from 0° to 180°, by convention. One example of computation code for determining an appropriate refraction from the Zernike coefficients follows:
% Matlab code to convert Zernike coefficient to diopters
% input variables
Z3 = input('\nWhat is the coeff for Z3? ');
Z4 = input('\nWhat is the coeff for Z4? ');
Z5 = input('\nWhat is the coeff for Z5? ');
pupilsize = input('\nWhat is the pupil diameter in mm? ');

% convert the coefficients to units of meters
Z3=Z3/1e6; Z4=Z4/1e6; Z5=Z5/1e6;
if (Z5==0)
    alpha=(-1)*sign(Z3)*pi/4; % special case when Z5 is equal to zero
else
    alpha = (-1)*0.5*atan(Z3/Z5);
end
if (abs(Z5) > abs(Z3))
    A = (-1)*Z5*2*sqrt(6)/cos(2*alpha); % cosine form of Eq. (10.12)
else
    A = Z3*2*sqrt(6)/sin(2*alpha);      % sine form of Eq. (10.12)
end
D = 2*sqrt(3)*Z4 - A/2;
pupilradius = (pupilsize/2)/1000; % convert mm to meters
refaxis = alpha*180/pi;
if (refaxis < 0)
    refaxis = refaxis + 180; % express the axis from 0 to 180 degrees
end
cyl = 2*A/(pupilradius^2);
sph = 2*D/(pupilradius^2);
fprintf('sphere = %f D, cylinder = %f D, axis = %f deg\n',sph,cyl,refaxis);
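For readers working outside MATLAB, the same conversion can be sketched in Python. The function name and the branch chosen to evaluate Eq. (10.12) in a well-conditioned way are our choices, not part of the original listing.

```python
import math

def zernike_to_refraction(c3_um, c4_um, c5_um, pupil_diameter_mm):
    # Second-order Zernike coefficients (micrometers) to sphere/cylinder
    # (diopters) and axis (degrees), following Eqs. (10.11)-(10.13).
    c3, c4, c5 = c3_um * 1e-6, c4_um * 1e-6, c5_um * 1e-6  # to meters
    r = pupil_diameter_mm / 2 / 1000.0                     # radius, meters
    if c5 == 0:
        alpha = -math.copysign(math.pi / 4, c3)  # special case, C5 = 0
    else:
        alpha = -0.5 * math.atan(c3 / c5)
    # Use whichever form of Eq. (10.12) avoids a small denominator
    if abs(math.sin(2 * alpha)) >= abs(math.cos(2 * alpha)):
        A = 2 * math.sqrt(6) * c3 / math.sin(2 * alpha)
    else:
        A = -2 * math.sqrt(6) * c5 / math.cos(2 * alpha)
    D = 2 * math.sqrt(3) * c4 - A / 2
    axis = math.degrees(alpha) % 180.0   # axis in [0, 180) by convention
    return 2 * D / r**2, 2 * A / r**2, axis   # sphere, cylinder, axis
```

With these conventions, a pure defocus coefficient C4 = 1 µm over a 6-mm pupil returns a sphere of about 0.77 D and zero cylinder.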
If you are converging on a good correction but are not quite there, then sometimes an overrefraction is required (and can be obtained by measuring the refraction through lenses that are already present). Instead of adding more lenses, which will cause light loss and increase the chance of back reflections, computing the correct pair of replacement lenses is the better solution. The computation for an overrefraction follows (summarized from Bennett & Rabbetts [13]):

% This is a program that will calculate the overrefraction.
% The method is from Bennett and Rabbetts, Clinical Visual Optics, 2nd ed., p. 105-106
% currentspecs: correction that is currently in place
currentsphere = -2.75;
currentcyl = -1;
currentaxis = 40;
currentaxisrad = currentaxis*pi/180;

% residualspecs: residual refractive state
residualsphere = -1;
residualcyl = -1.5;
residualaxis = 60;
residualaxisrad = residualaxis*pi/180;

% requiredspecs: correction required to replace the current correction
% (currentspecs + residualspecs)

% step 1: break cylinders into C0 and C45 components
currentcyl0 = currentcyl*cos(2*currentaxisrad);
currentcyl45 = currentcyl*sin(2*currentaxisrad);
currentmeansphere = currentsphere + currentcyl/2;
residualcyl0 = residualcyl*cos(2*residualaxisrad);
residualcyl45 = residualcyl*sin(2*residualaxisrad);
residualmeansphere = residualsphere + residualcyl/2;

% step 2: compute the required cylinder
requiredcyl = sign(currentcyl0+residualcyl0)*sqrt((currentcyl0+residualcyl0)^2 + (currentcyl45+residualcyl45)^2);

% step 3: compute the axis of the required cylinder
requiredaxisrad = 0.5*atan((currentcyl45+residualcyl45)/(currentcyl0+residualcyl0));
if (requiredaxisrad < 0)
    requiredaxisrad = requiredaxisrad + pi;
end
requiredaxis = requiredaxisrad*180/pi;
requiredsphere = currentmeansphere + residualmeansphere - requiredcyl/2;

% convert to negative cyl notation:
if (requiredcyl > 0)
    requiredcyl = (-1)*requiredcyl;
    if (requiredaxis < 90)
        requiredaxis = requiredaxis + 90;
    else
        requiredaxis = requiredaxis - 90;
    end
    requiredsphere = requiredsphere - requiredcyl;
end
fprintf('current correction\n%f\t%f\t%f\n',currentsphere,currentcyl,currentaxis);
fprintf('residual correction\n%f\t%f\t%f\n',residualsphere,residualcyl,residualaxis);
fprintf('required correction\n%f\t%f\t%f\n\n',requiredsphere,requiredcyl,requiredaxis);
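The listing above can be cross-checked with a compact Python version of the same crossed-cylinder arithmetic; the helper name is ours, and the zero-denominator guard is an added assumption not present in the original program.

```python
import math

def combine_corrections(current, residual):
    # Each prescription is (sphere, cylinder, axis_degrees). The two are
    # added in mean-sphere / crossed-cylinder (C0, C45) components, as in
    # Bennett & Rabbetts [13], and returned in negative-cylinder notation.
    c0 = c45 = mean = 0.0
    for sph, cyl, axis in (current, residual):
        ax = math.radians(axis)
        c0 += cyl * math.cos(2 * ax)
        c45 += cyl * math.sin(2 * ax)
        mean += sph + cyl / 2
    cyl = math.copysign(math.hypot(c0, c45), c0)
    axis = 0.5 * math.atan(c45 / c0) if c0 else math.pi / 4  # guard c0 = 0
    if axis < 0:
        axis += math.pi
    axis = math.degrees(axis)
    sph = mean - cyl / 2
    if cyl > 0:  # convert to negative-cylinder notation
        sph, cyl = sph + cyl, -cyl
        axis = axis + 90 if axis < 90 else axis - 90
    return sph, cyl, axis

print(combine_corrections((-2.75, -1, 40), (-1, -1.5, 60)))
```

For the example prescriptions in the listing (current −2.75/−1 × 40, residual −1/−1.5 × 60), this follows the same computation path and yields a required correction of approximately −3.82/−2.36 × 52.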
10.5.4 Imaging Time

When dealing with subjects and patients, it is important that the imaging goals be accomplished as quickly as possible. After some time, the precorneal tear film will degrade from prolonged periods of holding the eye open (even though the subject is encouraged to blink as often as needed), the dilating agent will begin to wear off, and the patient will get physically tired and bored. A reasonable goal for accomplishing the imaging objective, whenever possible, is about 60 minutes. To facilitate this, the subject setup time (alignment, bite bar, etc.) has to be as efficient as possible.
10.6 IMAGE POSTPROCESSING

10.6.1 Introduction

Adaptive optics can make significant improvements in image quality, producing images with much finer detail than otherwise possible. However, AO systems do not produce perfect images in that there are residual aberrations that remain uncorrected. Although these cannot be corrected in real time, they can be corrected after the fact, that is, with postprocessing. In order to do this, it is important to understand the physics of an imaging system.

10.6.2 Convolution

The principles of Fourier optics say that the image formed through an optical system is the convolution of the object irradiance with the point spread function (PSF) of the optical system [54]. If the measured image distribution is denoted as i(x, y), the object irradiance as o(x, y), and the PSF as psf(x, y), then the image distribution is expressed as:

i(x, y) = ∫∫ o(ς, υ) psf(x − ς, y − υ) dς dυ = o(x, y) ⊗ psf(x, y)    (10.14)
where ⊗ denotes the convolution operator given by the superposition integral. This can be visualized as taking the PSF distribution shifted by a vector (x, y) relative to the object distribution and then taking the volume of the product of the two. Essentially this is a smoothing operation, as illustrated in Figure 10.13. This figure shows the resulting image (left) obtained by the convolution of the true object irradiance (center) with a PSF (right). The PSF is the response of the optical system to a point source, similar to an impulse response function in signal processing. Thus, every point in the object is blurred by the PSF. This equation makes the assumption that the optical system is isoplanatic, that is, that the PSF is the same for all points in the image. From Fourier optics, it is important to note that the PSF is the modulus squared of the Fourier transform of the complex wavefront at the exit pupil of the imaging system, or the power spectrum.

FIGURE 10.13 Convolution illustrated. The observed image (left) is the convolution of the true object (center) with the PSF (right).

This is illustrated in Figure 10.14, and the ringing of the PSF is caused by the hard edge of the pupil. Because of the finite size of the aperture, the PSF is band limited. This means that only specific spatial frequencies are passed by the aperture. [This is also illustrated in Fig. 10.14, which shows the modulation transfer function (MTF) corresponding to this pupil.] The MTF is the autocorrelation of the aperture or, equivalently, the modulus of the Fourier transform of the PSF. The hard edge of the aperture means that the autocorrelation also goes to zero at the band-limited spatial frequency of the optical system. This band limit is analogous to a band-limited electronic signal that only passes specific temporal frequencies. The highest spatial frequency sampled, fc, is determined from the ratio of the aperture size to the wavelength of light (i.e., d/λ). For a 6-mm aperture at visible wavelengths (e.g., 550 nm), this critical spatial frequency is 0.01 cycles/µrad, corresponding to a spatial scale of 92 µrad. As shown in Eq. (10.14), i(x, y) is the measured image irradiance, o(x, y) is the true object irradiance, and psf(x, y) is the PSF of the AO system, where the blurring is caused by residual aberrations after compensation. In order to remove the effect of this residual blurring, the image needs to be deconvolved,
FIGURE 10.14 Examples of Fourier optics. The power spectrum of the aperture (left) produces the PSF (center). The modulation transfer function (MTF) (right) is the modulus of the Fourier transform of the PSF and represents the transmission of spatial frequencies by the aperture.
that is, we need to invert the convolution above. Assuming that an estimate of the PSF is known, the inversion is possible but not trivial.
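The pupil-to-PSF relation and the convolution of Eq. (10.14) can be demonstrated numerically. In the sketch below the grid size, aperture radius, and Gaussian test object are arbitrary illustrative choices.

```python
import numpy as np

N = 256
y, x = np.mgrid[-N // 2:N // 2, -N // 2:N // 2]
pupil = (x**2 + y**2 <= 32**2).astype(float)  # hard-edged circular aperture

# PSF = |FT(pupil field)|^2; the shifts keep the PSF centered on the grid
psf = np.abs(np.fft.fftshift(np.fft.fft2(np.fft.ifftshift(pupil))))**2
psf /= psf.sum()                              # normalize to unit volume

otf = np.fft.fft2(np.fft.ifftshift(psf))      # OTF; its modulus is the MTF
mtf = np.abs(otf)                             # peaks at zero frequency

obj = np.exp(-((x - 20.0)**2 + (y + 10.0)**2) / 50.0)  # toy "object"
img = np.real(np.fft.ifft2(np.fft.fft2(obj) * otf))    # Eq. (10.14) via FFT

print(round(img.sum() / obj.sum(), 6))  # convolution preserves total flux
```

The blurred image inherits the ringing of the hard-edged pupil's PSF, and any spatial frequency beyond the aperture's cutoff is simply absent from it, which is the origin of the inversion difficulties discussed next.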
10.6.3 Linear Deconvolution

The simplest form of deconvolution makes use of the Fourier properties of convolution. The Fourier transform of the convolution operation is simply the product of the Fourier transforms of the object irradiance and PSF. If Fourier transforms are denoted in uppercase and (u, v) represents the spatial-frequency domain, then

I(u, v) = FT[i(x, y)] = O(u, v) · P(u, v)    (10.15)
where O(u, v) and P(u, v) are the Fourier transforms of the object irradiance and the PSF, respectively. The Fourier transform of the object irradiance can be obtained by simply taking the quotient of the measurement Fourier transform with the PSF Fourier transform,

O(u, v) = I(u, v)/P(u, v)    (10.16)

and the object irradiance, o(x, y), is obtained by inverse Fourier transforming the quotient. However, there is a problem related to the bandpass sampling discussed above. This quotient is only defined for spatial frequencies less than the cutoff frequency, and inversion of the quotient will produce nonphysical ringing artifacts because of the hard edge, similar to the structure in the PSF as illustrated in Figure 10.14. One way to get around this is to restore the object irradiance as viewed through a perfect aperture, that is,

ô(x, y) = FT⁻¹[H(u, v) I(u, v)/P(u, v)]    (10.17)
where H(u, v) is the transfer function of the perfect optical system, that is, the Fourier transform of the perfect PSF. However, the presence of noise in the measured data [i.e., i(x, y) = i′(x, y) + n(x, y), where i′(x, y) is the noiseless image] can still introduce problems in the Fourier inversion. This generally occurs at high spatial frequencies, where there could well be small number divisions in the above quotient. This effect is usually minimized by using a Wiener filter, Θ(u, v),

ô(x, y) = FT⁻¹[Θ(u, v) I(u, v)/P(u, v)]    (10.18)
where

Θ(u, v) = |I′(u, v)|² / (|I′(u, v)|² + |N(u, v)|²)    (10.19)
where N(u, v) is the Fourier transform of the noise. A Wiener filter is a signal-to-noise filter. When the signal power spectrum, |I′(u, v)|², is much greater than the noise power, Θ(u, v) ≅ 1, and when the signal is much less than the noise, Θ(u, v) ≅ 0. This can be a very powerful filtering technique and reduces (but does not necessarily remove) unphysical results in the form of negativity; there is no guarantee that the transform of the Wiener filter is positive. The presence of negativity is illustrated in the inverse filter reconstruction (Fig. 10.15) and is shown here to illustrate that linear inversion filters should be used with caution. The benefit of this approach, however, is that it is simple and straightforward. In practice, the Wiener filter is built from measurements or estimates of the measurement noise in the image. In many cases this noise is white (i.e., has a uniform spatial-frequency distribution), and measurements of the noise obtained at spatial frequencies greater than the cutoff frequency (fc) can be interpolated to the lower spatial frequencies, permitting an estimate of the noise term for the Wiener filter. Algorithmically, this is very straightforward to set up in any mathematics package.

10.6.4 Nonlinear Deconvolution
Problems associated with straight inversion as described above can be overcome by more sophisticated deconvolution approaches. These are nonlinear approaches that solve for the object irradiance directly and not from its
FIGURE 10.15 Wiener filtering. The target (left) has been deconvolved using the PSF (center left) via an inverse filter [Eq. (10.17)] to produce the object reconstruction (center right). The object reconstruction with a Wiener filter is shown on the right. The inverse filter solution (center right) has been chosen to show the ringing and negativity (black areas) that are artifacts of inverse filtering.
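The linear inversion and Wiener filtering steps of Eqs. (10.15) to (10.19) can be illustrated with a short simulation. This is a hypothetical NumPy sketch (a synthetic point-like object, a Gaussian PSF, and white noise standing in for real data); note that it builds Θ(u, v) from the true signal spectrum, which is available only in simulation, whereas in practice the noise term is estimated beyond the cutoff frequency as described above.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 128

# Synthetic object: point-like "cones" on a dark field
obj = np.zeros((n, n))
obj[rng.integers(0, n, 40), rng.integers(0, n, 40)] = 1.0

# Synthetic PSF: a Gaussian standing in for a residual AO-corrected PSF
yy, xx = np.mgrid[-n//2:n//2, -n//2:n//2]
psf = np.exp(-(xx**2 + yy**2) / (2.0 * 2.0**2))
psf /= psf.sum()

O = np.fft.fft2(obj)
P = np.fft.fft2(np.fft.ifftshift(psf))            # PSF centred at the origin
sigma = 0.01                                      # white-noise standard deviation
img = np.real(np.fft.ifft2(O * P)) + sigma * rng.standard_normal((n, n))  # Eq. (10.15) plus noise
I = np.fft.fft2(img)

# Naive inverse filter, Eq. (10.16): small |P| at high frequencies blows up the noise
eps = 1e-12
o_inv = np.real(np.fft.ifft2(I / (P + eps)))

# Wiener filter, Eqs. (10.18)-(10.19). Here the signal spectrum |I'|^2 is
# known exactly; in practice it and the white-noise level are estimated.
S = np.abs(O * P)**2                 # |I'(u, v)|^2
N2 = sigma**2 * n * n                # white-noise power per frequency bin
theta = S / (S + N2)
o_wiener = np.real(np.fft.ifft2(theta * I / (P + eps)))

print("inverse filter RMS error:", np.sqrt(np.mean((o_inv - obj)**2)))
print("Wiener filter RMS error: ", np.sqrt(np.mean((o_wiener - obj)**2)))
```

Running the sketch shows the behavior described in the text: the unregularized inverse filter is dominated by amplified noise, while the Wiener reconstruction stays close to the object.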
Fourier transform. These algorithms are based upon statistical analyses, especially Bayesian statistics, and use approaches known as maximum likelihood (ML) and maximum a posteriori (MAP). It is beyond the scope of this chapter to delve into the depths of statistical analysis; the reader is directed to reference [55]. A more descriptive outline of a typical algorithm is given below. In a least-squares setting, the following metric, E, is used to solve for the object irradiance:

E = ∑ |i(x, y) − ô(x, y) ⊗ psf(x, y)|²    (10.20)
This metric calculates the difference between the measurement i(x, y) and an estimate of the measurement obtained by convolving the PSF with an estimate (indicated by the ˆ) of the object irradiance. In this analysis, the estimate is updated until convergence occurs, which minimizes the metric shown above. How the estimate is updated depends strongly upon the algorithm and statistical approach being used. This least-squares problem can be solved using a conjugate-gradient technique that essentially varies all of the elements of the object-irradiance map in order to minimize the expression. Thus, for a 256 × 256 pixel image, there are approximately 65,000 variables being simultaneously minimized, so these nonlinear approaches take significantly longer to solve than the simple Fourier inversion discussed above. Various algorithms are available, one of the most common being the Lucy–Richardson maximum-likelihood approach [56, 57]. If the variables obey Gaussian statistics, then the ML approach is equivalent to minimizing the error metric of Eq. (10.20). The nonnegativity constraint can be simply applied by reparameterizing the object irradiance as the square of the variable being solved for, that is, o(x, y) = a(x, y)². In all of the above analysis it has been assumed that the PSF is well known. In many cases this is not so, and a technique known as multiframe blind deconvolution (MFBD) can be used to estimate both the object irradiance and the PSF simultaneously [58]. MFBD assumes that the object irradiance is stationary and that only the PSF changes from one exposure to the next, that is,

i_1(x, y) = o(x, y) ⊗ psf_1(x, y)
i_2(x, y) = o(x, y) ⊗ psf_2(x, y)
...
i_n(x, y) = o(x, y) ⊗ psf_n(x, y)    (10.21)
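As a minimal illustration of the iterative ML approach cited above [56, 57], the following hypothetical NumPy sketch implements the Lucy–Richardson update; its multiplicative form keeps the estimate nonnegative without the squared-variable reparameterization:

```python
import numpy as np

def convolve_fft(a, kernel_fft):
    """Cyclic convolution via the FFT."""
    return np.real(np.fft.ifft2(np.fft.fft2(a) * kernel_fft))

def richardson_lucy(image, psf, n_iter=100):
    """Lucy-Richardson ML deconvolution [56, 57]:
    o <- o * [ (i / (o (x) psf)) (x) psf_mirrored ].
    The multiplicative update keeps the estimate nonnegative."""
    psf = psf / psf.sum()
    K = np.fft.fft2(np.fft.ifftshift(psf))   # PSF centred at the origin
    est = np.full_like(image, image.mean())  # flat, strictly positive start
    for _ in range(n_iter):
        ratio = image / np.maximum(convolve_fft(est, K), 1e-12)
        # conj(K) is the transform of the mirrored PSF; clip FFT round-off
        est = np.maximum(est * convolve_fft(ratio, np.conj(K)), 0.0)
    return est

# Demo: blur two point sources with a Gaussian PSF, then deconvolve
n = 64
obj = np.zeros((n, n)); obj[n//2, n//2] = 1.0; obj[10, 20] = 0.5
yy, xx = np.mgrid[-n//2:n//2, -n//2:n//2]
psf = np.exp(-(xx**2 + yy**2) / (2.0 * 1.5**2)); psf /= psf.sum()
blurred = np.clip(convolve_fft(obj, np.fft.fft2(np.fft.ifftshift(psf))), 0.0, None)
restored = richardson_lucy(blurred, psf)
print("peak before:", round(blurred.max(), 3), " peak after:", round(restored.max(), 3))
```

The update conserves total flux and sharpens the blurred point sources, consistent with the convergence behavior described above.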
For a single observation, this is a highly underdetermined problem: there is one observation and two unknowns. For multiple observations, the degree of underdetermination is decreased, and prior knowledge of the object irradiance and the PSFs makes the solution tractable. The error metric can now be written as:
E = ∑_k ∑ |i_k(x, y) − ô(x, y) ⊗ p̂sf_k(x, y)|²    (10.22)
where the subscript k refers to the frame number and both ô(x, y) and p̂sf_k(x, y) are now estimates. The prior information permits the solution space to be reduced. We have already discussed prior information concerning the object irradiance (i.e., the nonnegativity). An additional constraint is that the object irradiance does not change, so the set of equations can be solved simultaneously (i.e., one object irradiance and n PSFs). Similar constraints can be applied to the PSFs: (1) nonnegativity, because the PSF of an incoherent optical system is the power spectrum of the complex wavefront at the pupil; (2) the PSF cannot be better than that produced by the perfect aperture (i.e., it is band limited), and the MTF cannot exceed the perfect MTF; and (3) the PSF cannot be significantly worse than that produced by the AO system being used. These physical constraints mean that we are not working blindly; this approach is commonly known as myopic deconvolution. A known PSF for each observation is used but is allowed to vary within these constraints. It is important to note that the variability of the PSF from frame to frame aids the solution: the commonality in the deconvolution is assumed to be due to the object irradiance. In practice, however, the PSF has common structure in all frames, and this can affect the solution. An example of MFBD is shown in Figure 10.16. A synthetic, fully bleached AO retinal image of a living retina is shown before and after deconvolution. The before image is the average of 10 independent observations and the after image is the common solution to the set of observations. The reconstructed PSFs for the individual frames are shown in Figure 10.17, compared with the
FIGURE 10.16 Deconvolution of a fully bleached AO retinal image—average image (left) and MFBD solution (right). (From Christou et al. [59]. Reprinted with permission from the Optical Society of America.)
FIGURE 10.17 True PSFs (top) and reconstructed PSFs (bottom) using MFBD for the individual frames used in the image in Figure 10.16. (From Christou et al. [59]. Reprinted with permission from the Optical Society of America.)
FIGURE 10.18 (Top) Adaptive optics images of a living retina for the three bleach cases—470 nm (top left), 650 nm (top middle), and full bleach (top right). The individual cones are clearly visible as are the shadows of blood vessels. Each image is the average of 36 individual measurements. (Bottom) Corresponding deconvolution of the 36 frames. The contrast is enhanced, signifying a reduction in the strength of the PSF wings, and individual cones are more clearly identified. (From Christou et al. [59]. Reprinted with permission from the Optical Society of America.)
true PSFs used to generate the images. The difference in AO compensation between frames is clearly seen. Figure 10.18 shows the application of the MFBD to actual retinal images for three different bleach cases. Each bleach case comprised 36 individual observations, and these were averaged to obtain the AO image shown in the
figure. All 36 frames were also reduced simultaneously; 36 PSFs and one common object yielded 606,208 variables to be minimized. The contrast enhancement is clearly seen.

10.6.5 Uses of Deconvolution
The results in the previous section illustrate a significant contrast enhancement of features in the deconvolved image (in this case, of the retinal cones). Deconvolution reduces the effect of the uncompensated components of the AO correction in the PSFs to enhance the high spatial-frequency structure in the image, making it very good for morphological identification. It also proves helpful for quantitative measurements of the radiometry of retinal structures in the images. Analyses by Christou, Roorda, and Williams have demonstrated that retinal cone classification is improved with deconvolution, as shown by the absorptance plots in Figure 10.19 [59].

10.6.6 Summary

Adaptive optics imaging significantly improves structural details in the retinal image. However, uncorrected wavefront errors produce some residual blurring that can be removed by image postprocessing techniques, such as deconvolution. Linear deconvolution solves for the Fourier components of the object irradiance, but these must then be inverse transformed to obtain the image, which introduces a number of problems in the presence of noise. An alternative approach is the nonlinear estimation of the object irradiance via statistical
FIGURE 10.19 Cone classification from absorptance measurements for the synthetic images (left) compared to the deconvolved images (right) in the same subject. These plots illustrate the effectiveness of deconvolution for making quantitative measurements. The solid lines are linear fits to the L and M cone absorptance distributions. The L and M cones are segregated more accurately using the deconvolved images, as the two distributions are better separated. (From Christou et al. [59]. Reprinted with permission from the Optical Society of America.)
principles. This addresses some of the problems associated with inversion but is significantly slower. In general, the PSF is not well known and an estimate has to be used for deconvolution. Myopic deconvolution permits the PSF to be recovered as well as the object irradiance. This takes the form of a multiframe blind deconvolution in which multiple observations of the same target are used with the assumption that the PSF varies between them. This proves to be a powerful technique not only for improving the spatial structure in the image but also for the radiometric (intensity) measurements from the image.

Acknowledgments

This work has been supported in part by the National Science Foundation Science and Technology Center for Adaptive Optics, managed by the University of California at Santa Cruz under cooperative agreement No. AST-9876783. Financial support was also provided by the National Eye Institute grants EY014743 to DTM and EY014375 to AR.
REFERENCES

1. Helmholtz H. Helmholtz's Treatise on Physiological Optics. Rochester, NY: Optical Society of America, 1924.
2. Jackman WT, Webster JD. On Photographing the Retina of the Living Human Eye. Philadelphia Photographer. 1886; 23: 340–341.
3. Liang J, Williams DR, Miller D. Supernormal Vision and High-Resolution Retinal Imaging through Adaptive Optics. J. Opt. Soc. Am. A. 1997; 14: 2884–2892.
4. Thorn KE, Qu J, Jonnal RJ, Miller DT. Adaptive Optics Flood-Illuminated Camera for High Speed Retinal Imaging. Invest. Ophthalmol. Vis. Sci. 2003; 44: 1002.
5. Liang J, Grimm B, Goelz S, Bille JF. Objective Measurement of Wave Aberrations of the Human Eye with Use of a Hartmann-Shack Wave-front Sensor. J. Opt. Soc. Am. A. 1994; 11: 1949–1957.
6. Liang J, Williams DR. Aberrations and Retinal Image Quality of the Normal Human Eye. J. Opt. Soc. Am. A. 1997; 14: 2873–2883.
7. Prieto PM, Vargas-Martin F, Goelz S, Artal P. Analysis of the Performance of the Hartmann-Shack Sensor in the Human Eye. J. Opt. Soc. Am. A. 2000; 17: 1388–1398.
8. Williams DR, Yoon G. Wavefront Sensor with Off-Axis Illumination. US Patent 6,264,328 B1, July 24, 2001.
9. Delori FC, Pflibsen KP. Spectral Reflectance of the Human Ocular Fundus. Appl. Opt. 1989; 28: 1061–1077.
10. Marcos S, Burns SA, Moreno-Barriuso E, Navarro R. A New Approach to the Study of Ocular Chromatic Aberrations. Vision Res. 1999; 39: 4309–4323.
11. Llorente L, Diaz-Santana L, Lara-Saucedo D, Marcos S. Aberrations of the Human Eye in Visible and Near Infrared Illumination. Optom. Vis. Sci. 2003; 80: 26–35.
12. Fernández EJ, Unterhuber A, Prieto PM, et al. Ocular Aberrations as a Function of Wavelength in the Near Infrared Measured with a Femtosecond Laser. Opt. Express. 2005; 13: 400–409.
13. Bennett AG, Rabbetts RB. Clinical Visual Optics. London: Butterworths, 1989.
14. Thibos LN, Ye M, Zhang X, Bradley A. The Chromatic Eye: A New Reduced-Eye Model of Ocular Chromatic Aberration in Humans. Appl. Opt. 1992; 31: 3594–3600.
15. Stiles WS, Crawford BH. The Luminous Efficiency of Rays Entering the Eye Pupil at Different Points. Proc. Roy. Soc. Lond. B. 1933; 112: 428–450.
16. Roorda A, Williams DR. Optical Fiber Properties of Individual Human Cones. J. Vis. 2002; 2: 404–412.
17. ANSI. American National Standard for the Safe Use of Lasers ANSI Z136.1-2000. Orlando: Laser Institute of America, 2000.
18. Miller DT, Williams DR, Morris GM, Liang J. Images of Cone Photoreceptors in the Living Human Eye. Vision Res. 1996; 36: 1067–1079.
19. Minsky M. Memoir on Inventing the Confocal Scanning Laser Microscope. Scanning. 1988; 10: 128–138.
20. Webb RH, Hughes GW, Pomerantzeff O. Flying Spot TV Ophthalmoscope. Appl. Opt. 1980; 19: 2991–2997.
21. von Ruckmann A, Fitzke FW, Fan J, et al. Abnormalities of Fundus Autofluorescence in Central Serous Retinopathy. Am. J. Ophthalmol. 2002; 133: 780–786.
22. Elsner AE, Burns SA, Weiter JJ, Delori FC. Infrared Imaging of Sub-retinal Structures in the Human Ocular Fundus. Vision Res. 1996; 36: 191–205.
23. Mainster MA, Timberlake GT, Webb RH, Hughes GW. Scanning Laser Ophthalmoscopy. Clinical Applications. Ophthalmology. 1982; 89: 852–857.
24. Bultmann S, Rohrschneider K. Reproducibility of Multifocal ERG Using the Scanning Laser Ophthalmoscope. Graefes. Arch. Clin. Exp. Ophthalmol. 2002; 240: 841–845.
25. Dreher AW, Bille JF, Weinreb RN. Active Optical Depth Resolution Improvement of the Laser Tomographic Scanner. Appl. Opt. 1989; 28: 804–808.
26. Donnelly III WJ, Roorda A. Optimal Pupil Size in the Human Eye for Axial Resolution. J. Opt. Soc. Am. A. 2003; 20: 2010–2015.
27. Venkateswaran K, Roorda A, Romero-Borja F. Theoretical Modeling and Evaluation of the Axial Resolution of the Adaptive Optics Scanning Laser Ophthalmoscope. J. Biomed. Opt. 2004; 9: 132–138.
28. Romero-Borja F, Venkateswaran K, Roorda A, Hebert TJ. Optical Slicing of Human Retinal Tissue in Vivo with the Adaptive Optics Scanning Laser Ophthalmoscope. Appl. Opt. 2005; 44: 4032–4040.
29. Roorda A, Romero-Borja F, Donnelly WJ, et al. Adaptive Optics Laser Scanning Ophthalmoscopy. Opt. Express. 2002; 10: 405–412.
30. Donnelly WJ. Improving Imaging in the Confocal Scanning Laser Ophthalmoscope. M.S. Dissertation. Houston (TX): University of Houston, 2001.
31. Roorda A. Method and Apparatus for Using Adaptive Optics in a Scanning Laser Ophthalmoscope. US Patent 6,890,076, May 10, 2005.
32. Williams DR, Liang J, Miller D, Roorda A. Wavefront Sensing and Compensation for the Human Eye. In: Tyson RK, ed. Adaptive Optics Engineering Handbook. New York: Marcel Dekker, 1999, pp. 287–310.
33. Wilson T, Sheppard CJR. Theory and Practice of Scanning Optical Microscopy. London: Academic, 1984.
34. Huang D, Swanson EA, Lin CP, et al. Optical Coherence Tomography. Science. 1991; 254: 1178–1181.
35. Fercher AF, Roth E. Ophthalmic Laser Interferometry. In: Mueller GJ. Optical Instrumentation for Biomedical Laser Applications. Proceedings of the SPIE. 1986; 658: 48–51.
36. Fercher AF, Mengedoht K, Werner W. Eye Length Measurement by Interferometry with Partially Coherent Light. Opt. Lett. 1988; 13: 186–188.
37. Miller DT, Qu J, Jonnal RS, Thorn K. Coherence Gating and Adaptive Optics in the Eye. In: Tuchin VV, Izatt JA, Fujimoto JG, eds. Coherence Domain Optical Methods and Optical Coherence Tomography in Biomedicine VII. Proceedings of the SPIE. 2003; 4956: 65–72.
38. Hermann B, Fernandez EJ, Unterhuber A, et al. Adaptive-Optics Ultrahigh-Resolution Optical Coherence Tomography. Opt. Lett. 2004; 29: 2142–2144.
39. Zhang Y, Rha J, Jonnal RS, Miller DT. Adaptive Optics Parallel Spectral Domain Optical Coherence Tomography for Imaging the Living Retina. Opt. Express. 2005; 13: 4792–4811.
40. Drexler W, Morgner U, Ghanta RK, et al. Ultrahigh-Resolution Ophthalmic Optical Coherence Tomography. Nat. Med. 2001; 7: 502–507.
41. Rogers JA, Podoleanu AG, Dobre GM, et al. Topography and Volume Measurements of the Optic Nerve Using en-face Optical Coherence Tomography. Opt. Express. 2001; 9: 533–545.
42. Hitzenberger CK, Trost P, Lo PW, Zhou QY. Three-Dimensional Imaging of the Human Retina by High-Speed Optical Coherence Tomography. Opt. Express. 2003; 11: 2753–2761.
43. Cense B, Nassif NA, Chen TC, et al. Ultrahigh-Resolution High-Speed Retinal Imaging Using Spectral-Domain Optical Coherence Tomography. Opt. Express. 2004; 12: 2435–2447.
44. Wojtkowski M, Srinivasan BJ, Ko TH, et al. Ultrahigh-Resolution, High-Speed, Fourier Domain Optical Coherence Tomography and Methods for Dispersion Compensation. Opt. Express. 2004; 12: 2404–2422.
45. Bouma BE, Tearney GJ. Handbook of Optical Coherence Tomography. New York: Marcel Dekker, 2002.
46. Fercher AF. Optical Coherence Tomography. J. Biomed. Opt. 1996; 1: 157–173.
47. Boycott BB, Dowling JE. Organization of the Primate Retina: Light Microscopy. Phil. Trans. Roy. Soc. Lond. 1969; 255: 109–184.
48. Milonni PW, Eberly JH. Lasers. New York: Wiley, 1988.
49. Leitgeb R, Hitzenberger CK, Fercher AF. Performance of Fourier Domain versus Time Domain Optical Coherence Tomography. Opt. Express. 2003; 11: 889–894.
50. Nassif N, Cense B, Park BH, et al. In Vivo Human Retinal Imaging by Ultrahigh-Speed Spectral Domain Optical Coherence Tomography. Opt. Lett. 2004; 29: 480–482.
51. Malacara D, DeVore SL. Interferogram Evaluation and Wavefront Fitting. In: Malacara D, ed. Optical Shop Testing. New York: Wiley-Interscience, 1996, pp. 455–499.
52. Kim J, Miller DT, Kim E, et al. Optical Coherence Tomography Speckle Reduction by a Partially Spatial Coherent Light Source. J. Biomed. Opt. 2005; 10: 064034.
53. Llorente L, Diaz-Santana L, Lara-Saucedo D, Marcos S. Aberrations of the Human Eye in Visible and Near Infrared Illumination. Optom. Vis. Sci. 2003; 80: 26–35.
54. Goodman JW. Introduction to Fourier Optics. New York: McGraw-Hill, 1968.
55. Vogel CR. Computational Methods for Inverse Problems. Philadelphia: Society for Industrial and Applied Mathematics, 2002.
56. Richardson WH. Bayesian-Based Iterative Method of Image Restoration. J. Opt. Soc. Am. 1972; 62: 55–59.
57. Lucy LB. An Iterative Technique for the Rectification of Observed Distributions. Astron. J. 1974; 79: 745–754.
58. Jefferies SM, Christou JC. Restoration of Astronomical Images. Astrophys. J. 1993; 415: 862–874.
59. Christou JC, Roorda A, Williams DR. Deconvolution of Adaptive Optics Retinal Images. J. Opt. Soc. Am. A. 2004; 21: 1393–1401.
60. Hofer H, Artal P, Singer B, et al. Dynamics of the Eye's Wave Aberration. J. Opt. Soc. Am. A. 2001; 18: 497–506.
PART FOUR
VISION CORRECTION APPLICATIONS
CHAPTER ELEVEN
Customized Vision Correction Devices

IAN COX
Bausch & Lomb, Rochester, New York
11.1 CONTACT LENSES
Contact lenses are a potential means of correcting the higher order aberrations of the human eye. Their immediate advantage over spectacles, the most commonly prescribed method of correcting lower order aberrations in the eye, is that they are centered over the cornea, close to the visual axis, and this alignment remains relatively constant in all positions of gaze. This optical advantage was identified in the earliest days of contact lens prescribing [1], and since then, contact lenses and the higher order aberrations of the eye have been intrinsically linked. The presence of higher order aberrations in the eye has been known for several decades [2–4] and, until recently, these aberrations were primarily identified as spherical aberration. However, the significant role of nonrotationally symmetrical aberrations (such as coma) has been studied and identified more recently as a major contributing factor to the less than ideal optical performance of the human eye [5]. Current measurements using Shack–Hartmann-based wavefront sensors have opened the door to identifying the magnitude and form of higher order wave aberrations in large, preoperative, physiological populations [6, 7]. Initial inspection of the average distribution of higher order aberrations across a typical prepresbyopic population requiring refractive correction shows a distinct deviation from zero for spherical
aberration while all other Zernike terms have an average close to zero, a value expected of a biological optical system attempting to optimize itself across a population (Fig. 11.1). This would suggest that an appropriate aspheric correcting surface would reduce the wave aberration of the eye significantly if it were incorporated into a contact lens. However, as Figure 11.2 shows, there is a wide range of spherical aberration values across the patient population, which is not related to another variable, such as degree of ametropia. To realistically correct more than 38% of the population, spherical aberration would have to be introduced as an additional parameter in the contact lens, rather than an average level of correction added into every lens. Indeed, attempts have been made to improve visual performance by correcting the spherical aberration of the eye using soft contact lenses manufactured specifically for individual eyes [8, 9]. However, the results were mixed, with the improvement in visual performance being less than expected. One reason for this may be that spherical aberration, while significant in the hierarchy of the higher order aberrations of the eye, is typically not the dominant aberration
FIGURE 11.1 Distribution of aberrations in the patient population [6]. Mean values of all Zernike modes in the population across a 5.7-mm pupil. The error bars represent plus and minus one standard deviation from the mean value. The variability of the higher order modes is shown in the inset of the figure, which excludes all second-order modes (Z_2^-2, Z_2^0, and Z_2^2) and expands the ordinate. (From Porter et al. [6]. Reprinted with permission of the Optical Society of America.)
FIGURE 11.2 Distribution of spherical aberration (Z_4^0) as a function of ametropia for a 5.7-mm pupil in a large physiological preoperative patient population (n = 199).
in most eyes. Third-order Zernike terms, such as coma and trefoil, are usually the largest magnitude higher order aberrations found in the general population. These aberrations must be corrected by a rotationally stable contact lens and manufactured using a process capable of creating nonrotationally symmetrical surfaces. In fact, Figure 11.3 shows the Strehl ratio for a large sample of the general ophthalmic population when corrected alternatively with sphere and cylinder only; with sphere, cylinder, and spherical aberration; and with sphere, cylinder, third-order Zernike terms, and spherical aberration. Clearly, while some patients benefit from the correction of spherical aberration in addition to their myopia and astigmatism, there is a substantially larger increase in retinal image quality when the third-order Zernike terms are also corrected. This has been demonstrated experimentally by authors who have used phase plates generated to correct the third-, fourth-, and fifth-order aberrations of the eye [10, 11]. In both cases, the majority of the wave aberration was corrected, with a correlated improvement in retinal image quality. Therefore, a contact lens must be designed to correct both symmetrical and nonrotationally symmetrical higher order aberrations of the eye if a true visual benefit is to be realized across a substantial portion of the population.

11.1.1 Rigid or Soft Contact Lenses for Customized Correction?
This leads to the question, which type of contact lens would be ideal for neutralizing higher order aberrations? Rigid gas-permeable (RGP) lenses have
FIGURE 11.3 Distribution of calculated Strehl ratios in a large physiological preoperative patient population following the theoretical correction of sphere and cylinder only (boxes with positively sloped lines); sphere, cylinder, and spherical aberration (boxes with negatively sloped lines); and sphere, cylinder, coma, trefoil, and spherical aberration (boxes with vertical lines).
been utilized for years as a method for correcting eyes with pathological or postsurgical deformities of the anterior corneal surface, where standard spherocylindrical spectacle lenses do not provide adequate visual acuity. However, it is the rigid nature of the lens itself, rather than innovative optics, that provides the aberration correction of these lenses [12]. Vision of eyes with significantly distorted corneas can be improved with RGP lenses because the anterior surface of the contact lens forms the new refracting surface, with the tear film filling in the difference between the irregular cornea and the regular back surface of the lens. In this way, a large portion of the higher order aberrations generated by the abnormal corneal surface are corrected, with the potential for any residual higher order aberrations from the ocular system to be corrected on the front surface of the RGP contact lens. From a manufacturing perspective, generating a complex wavefront correcting surface on the front surface of an RGP lens is preferred, since the material is rigid during processing and does not undergo any of the postprocessing expansion or contraction that is inherent in soft hydrophilic or silicone-type lenses. Unfortunately, the physical nature of RGP lenses, and the way they must be fit to ensure tear exchange and mobility on the eye, renders them less desirable as a method for correcting aberrations of the eye through complex optical surfaces. RGP lenses are designed to be very mobile on the eye, moving several millimeters with each blink, and finding a position of rest with up to 1-mm difference relative to the optical axis of the eye following each
blink [13]. Hence, stabilizing a conventional RGP lens such that it repeatedly returns to the same horizontal and vertical location relative to the visual axis, without rotating around this axis, while maintaining physiologically desirable tear exchange behind the lens, is very difficult. Attempts to develop scleral RGP lenses, which rest primarily on the scleral surface beyond the limbal junction with the cornea, hold promise, as they are very stable in their location on the eye. Typically, lenses of this type require an extraordinary degree of fitting skill and clinician/patient interaction to generate a clinically acceptable fit. However, newer designs are currently being developed whose fitting requirements are within the scope of the typical contact lens clinician, and these may make this type of lens a more viable option for a mainstream ophthalmic correction. Soft hydrophilic lenses are held in position relative to the cornea by an entirely different set of forces. These lenses are always fit with a sagittal depth greater than that of the cornea/sclera beneath the lens. In this way, the lens is squeezed onto the cornea with the first blink and deformed to take the shape of the anterior cornea beneath. The deformation that the lens undergoes during this process generates radial stresses in the lens, and it is these forces, combined with gravity, that center the lens on the cornea at the position of equilibrium. When the lid blinks, the soft lens is moved away from this position of equilibrium in a vertical direction through interaction with the eyelid. As the lens is moved further away from the center of the cornea, the radial stress within the lens increases to the point where it is greater in magnitude than the eyelid interaction, which causes the lens to reverse direction and return to its position of equilibrium on the corneal surface.
Since there is only one optimal position on the corneal/scleral surface that provides the least radial stress, clinicians find that well-fitted soft lenses relocate to the same position on the cornea, within 0.1 to 0.4 mm, after every blink [14]. Although the lens is not centered on the visual axis of the eye, it does relocate consistently relative to that axis after every blink. Control of rotation around the visual axis with soft lenses has been realized for over a decade, and the current generation of sophisticated, prism-ballasted designs provide rotational stability within 5° between any series of blinks (Fig. 11.4). It is this ability to relocate with great precision that makes soft lenses more clinically desirable than RGP lenses for correcting higher order aberrations.

11.1.2 Design Considerations—More Than Just Optics
In theory, even slight changes in the centration and rotation of a lens designed to correct aberrations will significantly reduce the visual benefit experienced with that correction [15]. Calculations to understand the tolerance/benefit ratio for a fixed (or static) decentration and rotational misalignment of correcting surfaces typical of those found in the general ophthalmic population have been performed by Guirao et al. [16]. They found that Zernike terms with higher azimuthal frequencies (or angular orders) were more sensitive to rotations of
FIGURE 11.4 Soft toric lens rotational stability values (change from original position) based on biomicroscopic reticule measurements on 20 eyes. Lenses were rotated 45° from their position of equilibrium and their position was remeasured 1 min later.
the correcting surface from the ideal position. Hence, primary and secondary coma are most tolerant to a rotation of the correcting surface, offering a visual benefit of correcting higher order aberrations with a rotation of up to 60°. Astigmatism and secondary astigmatism are the next most tolerant aberrations, offering a visual benefit with a rotation of up to 30°. Trefoil and secondary trefoil are the next most tolerant aberrations, offering a visual benefit with a rotation of up to 20°, and so on. Rotation from the ideal correction position that is greater than these values would provide retinal image quality that is worse than leaving the Zernike term uncorrected. A higher order aberration correction will generate lower order aberrations when the ideal correcting lens is decentered from the ideal correcting axis. Hence, a correcting lens with coma will generate astigmatism and defocus when decentered; spherical aberration will produce coma, tip, and tilt, while defocus or astigmatism will produce only tip and tilt (prism). In general, Zernike terms of higher radial order are less tolerant to decentration than lower order radial terms. Figures 11.5(a) and 11.5(b) show the reduction in visual benefit from lenses designed to correct the aberrations of 10 eyes with increasing lens rotation and decentration, respectively. Clearly the visual benefit of a lens designed to correct both lower and higher order aberrations is dependent on repeatable lens centration and rotation following each blink or eye movement. The values generated by Guirao et al.’s [16] analysis suggest that a soft lens is capable of remaining within these limits to yield an optical correction that is better than a conventional spherocylindrical correction.
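The rotation tolerances quoted above (60° for coma, 30° for astigmatism, 20° for trefoil) follow from a simple identity: rotating a single Zernike term of azimuthal frequency m by an angle θ leaves a residual whose RMS is 2|sin(mθ/2)| times the term's own RMS, so correction stops paying off once that factor exceeds 1, that is, for θ > 60°/|m|. The sketch below (a simplification of Guirao et al.'s full analysis, which also treats translation and coupling between terms; the helper names are hypothetical) reproduces the quoted thresholds:

```python
import math

def residual_rms_fraction(m, theta_deg):
    """RMS of the residual (aberration minus rotated correction) for a single
    Zernike term of azimuthal frequency m, as a fraction of the term's own
    RMS: |1 - exp(i*m*theta)| = 2|sin(m*theta/2)|."""
    theta = math.radians(theta_deg)
    return 2.0 * abs(math.sin(m * theta / 2.0))

def max_useful_rotation_deg(m):
    """Rotation at which the residual equals the uncorrected term; beyond
    this, correcting the term does more harm than leaving it alone."""
    return 60.0 / abs(m)

# coma m=1, astigmatism m=2, trefoil m=3 -> 60, 30, 20 degrees
for name, m in [("coma", 1), ("astigmatism", 2), ("trefoil", 3)]:
    print(f"{name:12s} tolerates up to {max_useful_rotation_deg(m):.0f} deg")
```

Rotationally symmetric terms (m = 0, such as defocus and spherical aberration) give a zero residual for any rotation, which is why they are unaffected by misrotation in Figure 11.5(a).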
FIGURE 11.5 Mean values of the root-mean-square (RMS) wavefront error (in μm, from 10 eyes) of the residual wave aberration for a 6-mm pupil as a function of (a) fixed rotations and (b) translations when the ideal correcting surface corrects the higher order aberrations up to second (dotted line), third (dashed-dotted line), fourth (short dashed line), fifth (long dashed line), and sixth order (solid line). Results for translation are averaged across the x and y axes. Also shown in (a) is the RMS when only defocus and spherical aberration (rotationally symmetric aberrations unaffected by rotation) are corrected. (From Guirao and Williams [16]. Reprinted with permission of the Optical Society of America.)
11.1.3 Measurement—The Eye, the Lens, or the System?
It is apparent that a contact lens designed to correct the higher order aberrations of the eye needs to be a soft lens that is prism-ballasted to control rotation, with the higher order correction on the anterior surface of the lens. Since it would be extremely difficult to predict the manner in which the soft lens would distort as it is squeezed onto the cornea by the lid, the most pragmatic method to ensure proper centration and orientation while also knowing its aberrations would be to measure the wave aberration with the contact lens on the eye. Hence, a trial lens, with all of the physical properties of the final custom-correcting lens but without the custom-correcting optical surface, would be placed on the eye. After being allowed to settle, wavefront sensor measurements would be taken through the lens–eye combination. In this way, the final custom-correcting lens will compensate for any variations in the eye’s higher order aberrations and aberrations introduced by the lens itself, or by the tear film between the lens and the cornea [17]. Alignment of the wavefront measurement is critical. It is well known that soft lenses typically find their position of equilibrium centered on, or close to, the corneal apex (which is typically temporal and superior to the visual axis). Therefore, the line of sight, as defined by the center of the pupil, will be decentered relative to the geometric center of the lens. This implies that wavefront measurements must be performed with reference to both the center of the pupil and the center of the lens. To achieve this, a marking scheme needs to be implemented on the trial lenses used for these measurements, such as that shown in Figure 11.6. These markings provide an indication of
FIGURE 11.6 One possible marking scheme that could be used on a soft trial lens to locate the center of the lens and the rotational orientation of the lens during measurements of the wave aberration of the lens–eye combination.
the center of the lens, as well as the orientation of the lens during the wave aberration measurements. It is important that the marking system used on the trial lenses does not create additional interaction with the upper eyelid. This might alter the centration and orientation of the trial lens on the eye, leading to errors in the final wavefront-correcting lens. In an extreme situation, a raised marking or a deep groove may cause discomfort for the wearer, leading to reflex tearing and excessive movements and/or rotations of the trial lens on the eye. Ideally, the marking pattern would be detected automatically by the wavefront sensor, providing horizontal and vertical decentration coefficients, as well as rotational measures over a series of blinks, to provide an average lens position relative to the center of the pupil.

11.1.4 Customized Contact Lenses in a Disposable World
Having established the desired measurement procedure, another question that arises is how to provide an aberration correcting soft contact lens in today’s paradigm of affordable, frequently replaced and disposable lenses. These lenses need to be customized to an individual eye, yet the physiological needs of patients and the demands of clinicians make it necessary to provide
up to a year’s supply of weekly to monthly replacement lenses at any one time. Figure 11.7 demonstrates one proposed model for ordering and delivering an aberration-correcting soft lens customized to an individual eye. First, the patient’s eye is measured with the trial lens in place, and the Zernike coefficients are uploaded to a central remote server (along with other necessary data) through a modem in the wavefront sensor computer. The lathing parameters necessary to cut the nonrotationally symmetric front surface using a three-axis CNC lathe are calculated by the central server and downloaded into the lathe, generating the prescribed number of lenses. These lenses are then processed, packaged, sterilized, labeled, and returned to the prescribing clinician’s office within a few days. If necessary, the process can be repeated with the new lens in situ, to refine the aberration correction. Lens manufacturing methods need not be limited to lathing only. A cast molding procedure, similar to that currently used to manufacture disposable lenses, could be used to provide the basic lens structure, with the lathing process used to create the final higher order wavefront-correcting front surface over the optical zone of the lens. Alternatively, a laser, similar to those used for refractive surgery procedures, could be used to create the final optical zone surface as a secondary process to the current manufacturing procedures used for creating disposable prism-ballasted toric lenses [18, 19].
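As a rough illustration of the kind of computation the central server must perform, the uploaded Zernike coefficients can be evaluated as a height map over the optical zone, which the lathe then follows point by point. The sketch below assumes OSA/ANSI-normalized Zernike polynomials and omits the conversion from wavefront error to physical surface sag (which also involves the lens refractive index); the helper names are hypothetical:

```python
import math

def zernike_radial(n, m, rho):
    """Radial polynomial R_n^m(rho) for radial order n and m = |frequency|."""
    return sum(
        (-1) ** s * math.factorial(n - s)
        / (math.factorial(s)
           * math.factorial((n + m) // 2 - s)
           * math.factorial((n - m) // 2 - s))
        * rho ** (n - 2 * s)
        for s in range((n - m) // 2 + 1)
    )

def surface_height(coeffs, rho, phi):
    """Wavefront height at polar pupil position (rho in [0, 1], phi in
    radians) for {(n, m): coefficient} Zernike terms, with OSA
    normalization (m >= 0 -> cosine terms, m < 0 -> sine terms)."""
    z = 0.0
    for (n, m), a in coeffs.items():
        norm = math.sqrt(n + 1) if m == 0 else math.sqrt(2 * (n + 1))
        ang = math.cos(m * phi) if m >= 0 else math.sin(-m * phi)
        z += a * norm * zernike_radial(n, abs(m), rho) * ang
    return z

# Example: 0.5 um of pure defocus, evaluated at the pupil edge.
print(surface_height({(2, 0): 0.5}, rho=1.0, phi=0.0))
```

In practice the lathe would sample such a map on its own (radius, angle) toolpath grid over the optical zone.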
FIGURE 11.7 Customized contact lens order and delivery model. (a) Wave aberrations are measured in the clinician’s office and (b) are uploaded to a remote server, where lathing parameters are calculated automatically and (c) downloaded into the lathe. The lens is processed, packaged, sterilized, labeled, and (d) sent back to the clinician for dispensing to the patient.
11.1.5 Manufacturing Issues—Can the Correct Surfaces Be Made?
Do the currently available contact lens lathes have the capability of creating the surface profiles necessary to correct higher order aberrations found in both the physiologic and pathologic ophthalmic population? Figure 11.8 shows a test surface created using a currently available CNC contact lens lathe for pentafoil, a fifth-order Zernike term. This interferogram demonstrates that the CNC lathes have sufficient rotational resolution to generate the required surfaces. While this shows its capability of creating normal surfaces, there might be concern that the complex surfaces generated by pathologies, such as keratoconus, may be beyond the reach of these lathes. Figure 11.9 shows the outcome of creating a Plexiglas cylinder with a single front surface that mimicked the wave aberration of a patient with keratoconus. This cylinder was cut to the appropriate length to mimic the degree of defocus measured in the patient’s eye. A rigid Plexiglas lens with only a second-order Zernike correction, and a second lens with a second- through fifth-order Zernike correction, were manufactured and alternately placed on the model eye, with the lenses centered and aligned empirically. In this model, the lenses were not designed to perfectly align to the surface of the eye, nor was there a fluid interface between the model eye and the correcting lens. As the results clearly show, the majority of the higher order aberrations were corrected by the higher order wavefront-correcting lens generated by the CNC lathe manufacturing process. Transferring these surfaces into a contact lens design, particularly one that is made from a hydrophilic material, requires stringent control of all the manufacturing steps and may require a complete reassessment of the traditional procedures used for lathing conventional spherocylindrical lens designs. However, once this has been accomplished, the surfaces can be generated to
FIGURE 11.8 Interferogram demonstrating rotational resolution of CNC lathing technology.
FIGURE 11.9 (a) Measured wave aberrations, (b) point spread functions (PSFs), and (c) image convolutions of a keratoconic Plexiglas model eye and its correcting surfaces generated by a three-axis CNC lathe. The second column represents the model eye corrected by a Plexiglas contact lens generated by the same lathing technology. This lens corrected defocus and astigmatism (sphere and cylinder) only. The main residual aberration in this keratoconus model after a second-order correction is coma. The third column represents the model eye corrected by a Plexiglas contact lens with a front surface designed to correct up through fifth-order Zernike terms. All measurements were calculated for a 5.7-mm pupil.
account for the hydration characteristics of the lens material so that the final hydrated lenses demonstrate the wave aberration of the original design [20, 21].

11.1.6 Who Will Benefit?
Obviously, eyes with significantly greater levels of higher order aberrations, particularly those with aberrations induced by pathological conditions, such as keratoconus (steepening anterior corneal surface) or surgery such as penetrating keratoplasty (corneal transplants), will benefit in terms of improved retinal image quality with even partial correction of these aberrations.
However, one of the questions that still remains is whether patients whose ocular higher order aberrations fall within the normal preoperative population values can have a sufficient correction of higher order aberrations to provide a perceptible difference in visual performance. In other words, is this a concept that is limited to the relatively small population of patients who have significantly larger magnitudes of higher order aberrations than the normal population, or a potential correction that would benefit the population in general? Experiments conducted at our research lab using this prescribing and manufacturing model have demonstrated the feasibility of a custom soft contact lens designed to correct wave aberrations up to and including the fifth-order Zernike terms in eyes representing the normal population. Measurements were made using a wavefront sensor located in New York, and the information was uploaded to a server in Florida, where the lenses were lathed and processed. Figure 11.10 shows the results of correcting a single eye with different soft custom-correcting lenses that compensated for a variety of measured Zernike terms: second order only (defocus and astigmatism); second order and spherical aberration; second and third order; second order, third order, and spherical aberration; and second through fifth order.
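The bookkeeping behind such sequential corrections can be mimicked with a toy calculation: because normalized Zernike modes are orthogonal, the residual RMS after correcting a subset of terms is simply the root sum of squares of the coefficients left uncorrected. The coefficient values below are hypothetical, chosen only to illustrate the arithmetic:

```python
import math

# Hypothetical Zernike coefficients in um, keyed by (radial order n, frequency m).
coeffs = {(2, 0): 0.50, (2, -2): 0.20, (3, -1): 0.12, (3, 3): 0.08,
          (4, 0): 0.10, (5, 1): 0.04}

def rms_after_correction(coeffs, corrected_orders):
    """RMS wavefront error left after zeroing every term whose radial
    order is in corrected_orders; with orthonormal Zernike modes this is
    the root sum of squares of the remaining coefficients."""
    return math.sqrt(sum(a * a for (n, _), a in coeffs.items()
                         if n not in corrected_orders))

for upto in range(2, 6):
    corrected = set(range(2, upto + 1))
    print(f"corrected through order {upto}: "
          f"{rms_after_correction(coeffs, corrected):.3f} um residual")
```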
FIGURE 11.10 Residual higher order RMS wavefront error, Strehl ratios, convolved retinal images, and logMAR visual acuities of an eye with different levels of wave aberration correction with soft customized contact lenses.
Residual higher order root-mean-square (RMS) wavefront errors, Strehl ratios, convolved retinal images, as well as high- and low-contrast logMAR visual acuities are presented. Visual acuities were measured through a 6-mm artificial pupil under photopic illumination conditions (with the natural pupil pharmacologically dilated with a mydriatic) to ensure a large, standardized pupil while maintaining optimal illumination for resolution testing. As anticipated, the results show that as the higher order aberrations are corrected sequentially, the RMS wavefront error is reduced, the Strehl ratio increases, and the low-contrast visual acuity improves up to one line with the 6-mm pupil. Perhaps not surprisingly, the high-contrast visual acuity does not improve dramatically, since retinal image quality enhancement primarily affects contrast rather than resolution. These results are exciting and encouraging, although studies with larger sample sizes have shown that variations in lens alignment and rotational stability make it difficult to replicate the visual gains reported in this case study. This latter finding reflects similar results reported by Lopez-Gil et al. [22]. The authors reported some success in correlating ex vivo and in vivo measurements of higher order aberrations in asymmetrical prism-ballasted soft contact lenses, but found that spurious aberrations could be induced if the centration and alignment of the lens were not perfect, leading them to conclude that the “principal limitations for the ocular wavefront aberration correction by contact lens are its translation and rotation with respect to its ideal position.” An additional report by the same group showed success in reducing higher order aberrations by 43% across a 5-mm pupil in keratoconic patients, with a correlated average increase in visual acuity of 37%. However, normal eyes showed no decrease in higher order aberrations [22]. Jeong et al.
approached the question of the maximum visual benefit that can be expected in normal and pathological/surgical eyes by controlling the centration and rotation of the contact lens using a modified wavefront sensor [23]. By adding an additional optical plane conjugate with the iris and the lenslet array, they were able to introduce phase plates (manufactured in Plexiglas with the same contact lens lathing technology described in Section 11.1.4) into the optical system, creating the optical equivalent of a flat contact lens. By introducing a beamsplitter and visual acuity task into the optical setup, the authors were able to optimize the wave aberration reduction while also measuring high- and low-contrast visual acuity in a controlled environment. In the normal eyes, the phase plate reduced the higher order RMS wavefront error from 0.39 to 0.15 μm for a 6-mm pupil. For the abnormal eyes, the higher order RMS wavefront error was reduced from 2.1 to 0.55 μm with the phase plate for a 6-mm pupil. With the phase plate, the average high-contrast visual acuities were −0.24 logMAR in normal eyes and −0.17 logMAR in abnormal eyes. On average, normal eyes experienced a half-line of improvement in high-contrast visual acuity (VA) and one line of improvement in low-contrast VA. Abnormal eyes had a two-line improvement in high-contrast VA and a three-line improvement in low-contrast VA.
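For intuition about how such RMS reductions relate to Strehl ratio, the extended Maréchal approximation S ≈ exp[−(2πσ/λ)²] links the two quantities. This is not how the chapter's Strehl values were computed, and it is quantitatively trustworthy only for small residual errors, but it conveys how steeply image quality rises as residual RMS falls:

```python
import math

def marechal_strehl(rms_um, wavelength_um=0.555):
    """Extended Marechal estimate of the Strehl ratio from residual RMS
    wavefront error (both in micrometers). Indicative only; it
    increasingly understates image quality for large aberrations."""
    phase_rms = 2.0 * math.pi * rms_um / wavelength_um
    return math.exp(-phase_rms ** 2)

# Residual higher order RMS values reported by Jeong et al. (6-mm pupil):
for label, rms in [("normal, uncorrected", 0.39),
                   ("normal, with phase plate", 0.15),
                   ("abnormal, uncorrected", 2.1),
                   ("abnormal, with phase plate", 0.55)]:
    print(f"{label}: S ~ {marechal_strehl(rms):.2e}")
```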
Clearly these results show it is feasible to correct higher order aberrations and improve visual performance in pathological or postsurgical eyes. The extent to which eyes within the normal preoperative population can benefit is still to be established and may vary greatly on an individual basis, depending not only on the magnitude of the higher order aberrations present but also on the ability to fit the lens design in a stable and centered fashion.

11.1.7 Summary
The concept of a customized soft contact lens designed to correct the wave aberration of the eye up through fifth-order Zernike terms is feasible. Manufacturing technologies are available to create the complex surfaces necessary to achieve this precision, and a business model that provides the means of communicating the necessary individual parameters to the lab and delivering lenses within an acceptable time frame has been demonstrated. The ability to reduce the lower and higher order Zernike terms in a typical presurgical eye has been demonstrated, and the commensurate improvements in large-pupil vision are detectable. What remains to be confirmed is the visual benefit obtained with these lenses across a wide range of physiologic and pathologic eyes. In addition, we must determine whether this type of correction will be desirable for improved night vision to the general contact lens wearing population, or if its appeal will be confined to that segment of the population who have significantly greater higher order aberrations resulting from pathological or postsurgical conditions.
11.2 INTRAOCULAR LENSES
Intraocular lenses (IOLs), typically used to replace the natural lens of the eye following its removal after the development of a cataract, create what is known as a pseudophakic eye. In this procedure, a small incision is made near the limbus (the cornea/sclera junction), and after inserting the appropriate surgical instruments through this incision, a circular hole of approximately 5 mm in diameter is made in the anterior surface of the natural lens capsule. This procedure is known as a capsulorhexis. A special device, which allows both ultrasonic vibrations and the injection and removal of isotonic fluid at its tip, is positioned in the incision and used to break up and remove the natural lens nucleus. This procedure is known as phacoemulsification and leaves the lens capsule in place. Once the natural lens nucleus is removed, the intraocular lens is folded and inserted through the small limbal incision into the lens capsule, using either folding forceps or a syringelike injector. It is then positioned by the surgeon so that it is centered within the dilated pupil. Intraocular lenses are typically only 6 to 8 mm in diameter, with typical optical zone diameters of 5 to 7 mm. Small loops of material known as haptics, either contiguous with the main lens material or made from a different material, extend from the lens as loops or plates to center and stabilize the lens within the capsular bag. Lens materials may be rigid like Plexiglas or flexible like silicone or a hydrogel. For rigid lenses, the corneal or scleral incision may have to be enlarged to allow the lens to be inserted into the eye. Flexible lenses can be folded before insertion into the eye, minimizing the incision size. Since incision size is directly related to the amount of induced corneal astigmatism and other higher order aberrations, using flexible lenses and the smallest incision size possible leads to fewer induced aberrations. Flexible IOLs are the preferred modality in use today, and rigid IOLs are only used in cases where a flexible lens cannot be utilized (such as a case where the capsule has been damaged and will not adequately support a flexible IOL), or where a comparable design selected by the surgeon is not available in a flexible material.

Depending on the corneal curvature, the axial length of the eye, and the degree of ametropia that the eye demonstrated prior to cataract formation, intraocular lenses might require spherical (i.e., defocus) optical powers of up to +30 D to provide emmetropic correction postoperatively. More recently, toric intraocular lenses have been utilized to correct preoperative astigmatism, particularly when the magnitude is 2.00 D or greater. These lenses are usually designed with plate haptics to avoid unwanted lens rotation once the lens is implanted.

11.2.1 Which Aberrations—The Cornea, the Lens, or the Eye?
The procedure described in the previous paragraphs highlights a number of quandaries that surround the concept of an intraocular lens designed to correct higher order aberrations of the eye. The very nature of cataract development and the eventual loss of transparency of the natural lens suggest that any measurement of the higher order aberrations of an eye with a developing cataract will be misleading. Cataract development leads to a substantial increase in the higher order aberrations of the eye due to the refractive index shifts that occur within the lens. Eventually, measurements cannot be made due to the increased light scatter caused by the cataract formation [24]. Therefore, it is unlikely that an intraocular lens could be made to correct the higher order aberrations of an individual eye based on preoperative measurements. In the physiological eye, both the cornea and natural lens contribute to the higher order aberrations of the total eye. Artal et al. have shown that the natural lens partially compensates for both the lower and higher order aberrations induced by the cornea in younger eyes [25]. This compensation of higher order aberrations—in particular, spherical aberration—becomes less accurate with age as early lenticular changes manifest themselves clinically (presbyopia) [26]. This is recorded as an increase in the higher order aberrations of the total eye with age [27]. Removal of the natural lens will significantly increase the higher order aberrations of the eye because the
compensatory effect of the natural lens is removed and the true corneal-generated aberrations become apparent [28–30]. Furthermore, the majority of intraocular lenses are designed with spherical surfaces, and, given the high lens powers and steep curvatures used to generate these small lenses, they typically exhibit significant positive spherical aberration [31, 32]. The combination of removing the natural lens and replacing it with a spherically surfaced intraocular lens leads to the typical finding that a pseudophakic eye exhibits greater higher order aberrations (in particular, positive spherical aberration) than corneal topography alone would suggest.

11.2.2 Correcting Higher Order Aberrations—Individual Versus Population Average

Since measurements of the wave aberration in an eye with cataract formation are unreliable estimates of the postsurgical optical state of the eye, two possible pathways are open to correcting higher order aberrations in the pseudophakic eye. The first concept, designed to correct all of the measured aberrations of the postoperative pseudophakic eye, is to alter the optical power of the implanted IOL postsurgically. One system under development by Calhoun Vision Inc., known as the Light Adjustable Lens (LAL), utilizes a polymeric material that can be partially cured during manufacture to generate an IOL of the appropriate spherical power, size, and shape for implantation. The surgeon implants the LAL using standard surgical techniques and has the patient return after the eye has healed (2 to 4 weeks after surgery). The surgeon then precisely and noninvasively adjusts the lens power to the patient’s specific visual need by directing a cool, low-intensity beam of light onto the eye.
The application of near-ultraviolet light to a portion of the lens optic results in a dissociation of the photoinitiator to form reactive radicals that initiate the polymerization of the photosensitive macromers within the irradiated region of the silicone matrix. Polymerization itself does not result in changes in lens power; it does, however, create a concentration gradient within the lens, resulting in the migration of nonirradiated macromers into the region that is now devoid of macromer as a result of polymerization. Equilibration from the migration of the macromers into the irradiated area causes swelling within that region of the lens, with an associated change in the radius of curvature and power. These changes can be across the whole lens, which would change the overall power (or defocus) of the lens, or localized in nature, providing changes to astigmatism or potentially higher order aberrations such as coma or spherical aberration. Once the desired power change is achieved, irradiation of the entire lens to polymerize all of the remaining macromers “locks in” the adjustment so that no further power changes can occur. Demonstration of the feasibility of this system in a rabbit model has shown that power changes of ±2 D of defocus can be induced following lens implantation [33]. More recently, a small-sample clinical trial has also shown that the system can correct overcorrections and undercorrections of defocus and astigmatism within this range of powers [34]. However, it is worth noting that the lens position within the capsule can vary up to 6 months postoperatively, as the lens capsule fibroses around the haptic loops. This could lead to a tilt and decentration of the IOL and, of course, induced higher order aberrations. To what degree this limits the technology to correcting lower order aberrations, or correcting higher order aberrations within a relatively short window of time postoperatively, remains to be established as additional clinical data are gathered using this new technology.

Another alternative correcting scheme is to reduce higher order aberrations based on the average values measured for an age-matched population. Studies have shown that the population average values of nearly all higher order aberrations in typical preoperative eyes are approximately zero, except for spherical aberration, which is typically slightly positive and increases in magnitude with increasing age (Fig. 11.1) [6, 24]. Conceptually, correcting the mean spherical aberration of the aged population using aspheric surfaces on the IOL should reduce the eye’s wave aberration and improve retinal image quality postoperatively compared to a normal spherically surfaced IOL. The Tecnis lens, produced by Advanced Medical Optics, has been designed using this concept. The clinical results reported to the Food and Drug Administration (FDA) suggest that the lens does, on average, reduce the measured spherical aberration of the postoperative eye and improve visual acuity compared with a conventional, spherically surfaced IOL. Not surprisingly, while the majority of patients showed a reduction in spherical aberration, a small percentage of patients showed an increase in spherical aberration with the Tecnis lens compared to a spherically surfaced lens implanted in their fellow eye.
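An aspheric IOL delivers this benefit only when well centered, and its sensitivity to decentration can be checked numerically: shift the Z₄⁰ correction across the pupil and project the residual onto coma. To first order, decentering a spherical aberration coefficient a by Δ (expressed as a fraction of the pupil radius) induces a coma coefficient of 2√10·a·Δ. The script below is an illustrative verification with made-up values, assuming OSA-normalized Zernike modes:

```python
import math

N = 501  # samples across the pupil diameter

def z40(x, y):
    """Zernike spherical aberration Z_4^0, OSA normalization."""
    r2 = x * x + y * y
    return math.sqrt(5.0) * (6.0 * r2 * r2 - 6.0 * r2 + 1.0)

def z31(x, y):
    """Zernike horizontal coma Z_3^1, OSA normalization."""
    r2 = x * x + y * y
    return math.sqrt(8.0) * (3.0 * r2 - 2.0) * x

def induced_coma(a_sa, dx):
    """Coma coefficient of the residual left when a correction for a_sa
    of spherical aberration is decentered by dx (as a fraction of the
    pupil radius). With orthonormal modes, the coefficient is the
    pupil-averaged inner product of the residual with Z_3^1."""
    total, count = 0.0, 0
    for i in range(N):
        y = -1.0 + 2.0 * i / (N - 1)
        for j in range(N):
            x = -1.0 + 2.0 * j / (N - 1)
            if x * x + y * y <= 1.0:
                residual = a_sa * (z40(x, y) - z40(x - dx, y))
                total += residual * z31(x, y)
                count += 1
    return total / count

# Made-up example: 0.25 um of spherical aberration decentered by 10% of
# the pupil radius, compared against the first-order prediction.
numeric = induced_coma(0.25, 0.10)
first_order = 2.0 * math.sqrt(10.0) * 0.25 * 0.10
print(numeric, first_order)
```

The numerical projection agrees with the first-order expression to within discretization error, which illustrates why stable centration matters as much as the aspheric profile itself.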
Whenever a lens is designed to correct for the population average of a particular aberration, eyes on the tails of the normal distribution will be over- or undercorrected, possibly leading to a reduction in retinal image quality. Furthermore, an IOL with aspheric surfaces designed to correct the spherical aberration of the eye will improve retinal image contrast and resolution only when centered on the visual axis. A decentration of the lens due to surgeon error or postoperative healing will lead to the induction of additional deleterious third-order aberrations, such as coma. Finally, another concept for correcting the wave aberration in association with the implantation of an IOL is a bioptic procedure, where the higher order aberrations of the eye are measured once optical stability has been achieved following IOL surgery, and a customized wavefront-guided refractive surgery procedure is performed as a secondary procedure. This technique avoids the difficulty of predicting, and compensating for in the IOL itself, the higher order aberrations induced by the surgical healing process, while minimizing the biomechanically induced higher order aberrations of the refractive procedure (since relatively little tissue is removed from the cornea, the majority of the eye’s defocus correction being accomplished by the IOL itself). Indeed, to further minimize induced higher order aberrations, the laser in situ keratomileusis (LASIK)
flap cut in the cornea can be performed during the IOL surgery itself, and only lifted later on during the secondary refractive procedure. Although relatively rare at this time, it is likely that this procedure will become more commonplace as the two specialized areas of cataract surgery and refractive surgery converge in the next few years.

11.2.3 Summary
Concepts of a customized intraocular lens designed to correct the wave aberration of the eye up through fifth-order Zernike terms have been proposed, but the difficulties of measuring these aberrations reliably preoperatively and of surgically implanting the lens in exactly the right position and orientation preclude any concept that does not allow a postsurgical modification of the optical power of the lens. The development of a suitable material is required before this pathway can be investigated clinically in the human eye. A more immediate approach to reducing the higher order aberrations of the pseudophakic eye is to correct the mean spherical aberration of the elderly population. This is the one higher order aberration that has been reported to have a population mean that is not zero, and correction can be achieved through the use of aspheric surfaces on one or both surfaces of the intraocular lens. Standard manufacturing technologies used in the production of IOLs today are capable of producing these surfaces, and initial clinical reports suggest that this is a viable method of reducing the postoperative higher order aberrations of the pseudophakic eye.
REFERENCES

1. Westheimer G. Aberrations of Contact Lenses. Am. J. Optom. Arch. Am. Acad. Optom. 1961; 38: 445–448.
2. Ivanoff A. Letter to the Editor: About the Spherical Aberration of the Eye. J. Opt. Soc. Am. 1953; 46: 901–903.
3. Jenkins TCA. Aberrations of the Eye and Their Effect upon Vision. Part II. Br. J. Physiol. Opt. 1963; 20: 161–201.
4. Koomen M, Tousey R, Scolnik R. The Spherical Aberration of the Eye. J. Opt. Soc. Am. 1949; 39: 370–376.
5. Howland HC, Howland B. A Subjective Method for the Measurement of Monochromatic Aberrations of the Eye. J. Opt. Soc. Am. 1977; 67: 1508–1518.
6. Porter J, Guirao A, Cox IG, Williams DR. Monochromatic Aberrations of the Human Eye in a Large Population. J. Opt. Soc. Am. A. 2001; 18: 1793–1803.
7. Thibos LN, Hong X, Bradley A, Cheng X. Statistical Variation of Aberration Structure and Image Quality in a Normal Population of Healthy Eyes. J. Opt. Soc. Am. A. 2002; 19: 2329–2348.
8. Chateau N, Blanchard A, Baude D. Influence of Myopia and Aging on the Optimal Spherical Aberration of Soft Contact Lenses. J. Opt. Soc. Am. A. 1998; 15: 2589–2596.
9. Dietze HH, Cox MJ. Correcting Ocular Spherical Aberration with Soft Contact Lenses. J. Opt. Soc. Am. A. 2004; 21: 473–485.
10. Navarro R, Moreno-Barriuso E, Bara S, Mancebo T. Phase Plates for Wave-Aberration Compensation in the Human Eye. Opt. Lett. 2000; 25: 236–238.
11. Burns SA, Marcos S, Elsner AE, Bara S. Contrast Improvement of Confocal Retinal Imaging by Use of Phase-Correcting Plates. Opt. Lett. 2002; 27: 400–402.
12. Lu F, Mao X, Qu J, et al. Monochromatic Wavefront Aberrations in the Human Eye with Contact Lenses. Optom. Vis. Sci. 2003; 80: 135–141.
13. Knoll HA, Conway HD. Analysis of Blink-Induced Vertical Motion of Contact Lenses. Am. J. Optom. Physiol. Opt. 1987; 64: 153–155.
14. Young G. Soft Lens Fitting Reassessed. Contact Lens Spect. 1992; December: 56–61.
15. Bara S, Mancebo T, Moreno-Barriuso E. Positioning Tolerances for Phase Plates Compensating Aberrations of the Human Eye. Appl. Opt. 2000; 39: 3413–3420.
16. Guirao A, Williams DR, Cox IG. Effect of Rotation and Translation on the Expected Benefit of an Ideal Method to Correct the Eye’s Higher-Order Aberrations. J. Opt. Soc. Am. A. 2001; 18: 1003–1015.
17. Ho A. Aberration Correction with Soft Contact Lens: Is the Postlens Tear Film Important? Eye Contact Lens: Sci. Clin. Prac. 2003; 29: S182–S185.
18. Marsack J, Milner T, Rylander G, et al. Applying Wavefront Sensors and Corneal Topography to Keratoconus. Biomed. Sci. Instrum. 2002; 38: 471–476.
19. Chernyak DA, Campbell CE. A System for the Design, Manufacture, and Test of Custom Lenses with Known Amounts of High Order Aberrations. Invest. Ophthalmol. Vis. Sci. 2002; 43: e-abstract 2053.
20. Jeong TM, Menon M, Yoon G. Measurement of Wave-front Aberration in Soft Contact Lenses by Use of a Shack–Hartmann Wave-front Sensor. Appl. Opt. 2005; 44: 4523–4527.
21. Lopez-Gil N, Castejón-Mochón JF, Benito A, et al. Aberration Generation by Contact Lenses with Aspheric and Asymmetric Surfaces. J. Refract. Surg. 2002; 18: S603–S609.
22. Lopez-Gil N, Benito A, Castejón-Mochón JF, et al. Aberration Correction Using Customized Soft Contact Lenses with Aspheric and Asymmetric Surfaces. Invest. Ophthalmol. Vis. Sci. 2002; 43: e-abstract 973.
23. Jeong T, Yoon G, Williams DR, Cox IG. Vision Improvement Using Customized Optics in Normal and Abnormal Eyes. Invest. Ophthalmol. Vis. Sci. 2004; 45: e-abstract 1078.
24. Kuroda T, Fujikado T, Maeda N, et al. Wavefront Analysis in Eyes with Nuclear or Cortical Cataract. Am. J. Ophthalmol. 2002; 134: 1–9.
25. Artal P, Guirao A. Contribution of the Cornea and the Lens to the Aberrations of the Human Eye. Opt. Lett. 1998; 23: 1713–1715.
26. Glasser A, Campbell MC. Presbyopia and the Optical Changes in the Human Crystalline Lens with Age. Vision Res. 1998; 38: 209–229.
27. Guirao A, Gonzalez C, Redondo M, et al. Average Optical Performance of the Human Eye as a Function of Age in a Normal Population. Invest. Ophthalmol. Vis. Sci. 1999; 40: 203–213.
310
CUSTOMIZED VISION CORRECTION DEVICES
28. Oshika T, Klyce SD, Applegate RA, Howland HC. Changes in Corneal Wavefront Aberrations with Aging. Invest. Ophthalmol. Vis. Sci. 1999; 40: 1351–1355. 29. Guirao A, Redondo M, Artal P. Optical Aberrations of the Human Cornea as a Function of Age. J. Opt. Soc. Am. A. 2000; 17: 1697–1702. 30. Artal P, Berrio E, Guirao A, Piers P. Contribution of the Cornea and Internal Surfaces to the Change of Ocular Aberrations with Age. J. Opt. Soc. Am. A. 2002; 19: 137–143. 31. Atchison DA. Optical Design of Poly(Methyl Methacrylate) Intraocular Lenses. J. Cataract Refract. Surg. 1990; 16: 178–187. 32. Taketani F, Matsuura TD, Yukawa E, Hara Y. High-Order Aberrations with Hydroview H60M and AcrySof MA30BA Intraocular Lenses Comparative Study. J. Cataract Refract. Surg. 2004; 30: 844–848. 33. Chang SH, Brait A, Delker G, et al. New Material for an Adjustable Intraocular Lens. Invest. Ophthalmol. Vis. Sci. 2003; 44: e-abstract 260. 34. Chang SH, Sandstedt CA, Vega JA, et al. Light Adjustable Lens: Clinical Trial Results. Invest. Ophthalmol. Vis. Sci. 2005; 46: e-abstract 803.
CHAPTER TWELVE
Customized Corneal Ablation

SCOTT M. MACRAE
University of Rochester, Rochester, New York

12.1 INTRODUCTION
Laser refractive surgery has evolved rapidly from the first treatments, which were performed on blind eyes by Seiler in 1985 [1] and then on sighted eyes in 1987 using photorefractive keratectomy, or PRK [2]. In 1990, Pallikaris combined lamellar splitting of the corneal stroma with excimer laser treatment, which formed the basis of modern-day laser in situ keratomileusis (LASIK) surgery [3]. Since then, the field of refractive surgery has advanced quickly and millions of patients worldwide have benefited from its use. The incorporation of scanning spot lasers to create smoother and more subtle ablations, and the use of eye trackers to compensate precisely for eye movements when delivering treatment, have contributed to the refinement of laser refractive surgery. These refinements have improved the delivery of excimer ablations, but the basic diagnostic and treatment inputs driving the ablation process have remained relatively unchanged. Treatment patterns were driven by the manifest and cycloplegic refractions, which rely on the patient's subjective assessment. The incorporation of wavefront technology into refractive surgery has signaled an important transition from subjective methods of measuring and treating refractive error to objective methods of vision correction. This chapter will give a brief practical overview of refractive surgical ablation and wavefront-guided treatments.
Adaptive Optics for Vision Science, Edited by Porter, Queener, Lin, Thorn, and Awwal Copyright © 2006 John Wiley & Sons, Inc.
12.2 BASICS OF LASER REFRACTIVE SURGERY
A comprehensive review of laser refractive surgery is beyond the scope of this chapter. The reader is directed to some excellent overviews of this field [4, 5]. This chapter will concentrate on the basic requirements of laser refractive surgery and on some of the challenges encountered with the refinement of refractive surgery. The area of the cornea over which the laser correction is applied is termed the ablation optical zone, or optical zone. The optical zone is typically made equal to or larger than the patient's natural low mesopic pupil size. Current laser systems often remove tissue in a region outside of the optical zone, termed the transition zone, to blend the edge of the optical zone with the surrounding tissue. The purpose of the transition zone is to mitigate sharp discontinuities in corneal curvature that could produce undesired aberrations. The first treatment of refractive errors using an excimer laser started with treating simple myopia. This involved removing more corneal tissue centrally than peripherally, effectively flattening the central cornea [Fig. 12.1(a)]. Astigmatic treatments were later employed by removing a cylindrical mass of tissue to flatten one meridian more than the meridian perpendicular to it [Fig. 12.1(b)]. Hyperopic treatments were then developed to remove more tissue in the midperiphery of the cornea, leaving the central cornea with less treatment [Fig. 12.1(c)]. A doughnutlike mass of tissue is removed to steepen the central cornea. All of these treatments were performed using a 193-nm excimer argon fluoride laser. Early treatments used broad-beam excimer lasers that were several millimeters in diameter [6]. An expanding iris diaphragm or a rotating disc containing several differently sized apertures dictated the size of the optical zone and beam diameter on the cornea.
Scanning (or flying-spot) lasers with smaller beam diameters have more recently been utilized to expand beyond 6-mm optical zones while incorporating transition zones with wider, more gradual blends. Wavefront sensors have recently been introduced for research in ophthalmology and vision science. Liang et al. developed the Shack–Hartmann wavefront sensor for the human eye in 1994 [7]. Subsequently, in 1997, a Shack–Hartmann wavefront sensor was coupled with an adaptive optics deformable mirror to improve in vivo retinal imaging and demonstrate marked improvements in visual performance with higher order aberration correction [8, 9]. In 1999, Mrochen et al. coupled the Tscherning diagnostic wavefront sensor with a flying-spot excimer laser to treat patients with customized ablation [10]. By 2003, three wavefront-guided excimer laser systems were approved by the U.S. Food and Drug Administration (FDA) and even more were being utilized worldwide. The exciting field of wavefront technology and ocular higher order aberration correction had been established, but there were and remain many important challenges.
FIGURE 12.1 Corneal cross sections illustrating excimer ablation optical and transition zone profiles in light gray for (a) myopic, (b) myopic-astigmatic, and (c) hyperopic or hyperopic-astigmatic treatments. (a) A simple myopic treatment ablates more tissue from the central cornea compared to the peripheral cornea so a convex lens is removed, flattening the central cornea. (b) Myopic-astigmatic treatments remove tissue of uniform thickness in the flat meridian (i.e., the meridian of the cornea with the flattest preoperative curvature). This causes no change in power in the flat meridian. The steep meridian, shown below, has a convex lenticule that is removed to flatten its curvature. (c) With hyperopic treatments, a doughnut-shaped ablation removes more tissue in the midperipheral portion of the optical zone than in the central cornea. This treatment steepens the central cornea. Hyperopic astigmatism simply applies this same pattern to steepen the flat meridian solely, while leaving the steep meridian untreated.
Prior to the approval of wavefront technology for refractive surgery applications, a number of large studies were conducted in myopic, hyperopic, and astigmatic treatments using conventional methods that treated only sphere and cylinder (or defocus and astigmatism) based on subjective refractions. The most well known and rigorous of these studies are those done for FDA approval, summarized in Tables 12.1 and 12.2 for conventional myopic and hyperopic treatments, respectively. These studies show that myopic treatment allows for a larger treatment range [generally up to 10 to 12 diopters (D) of myopia] that is roughly 2.5 to 3 times greater than the treatment range for hyperopic treatments (generally up to about 4 to 5 D of hyperopia) [11]. This author believes that the reason one is only able to treat about one-third of the
TABLE 12.1 Summary of Industry-Sponsored Myopic (Nearsighted) Conventional LASIK Study Results for U.S. FDA Approval

Entries are percentages of eyes (with n, the number of eyes treated, in parentheses) for each preoperative MRSE bin, listed left to right: −1.00 to −1.99 D, −2.00 to −2.99 D, −3.00 to −3.99 D, −4.00 to −4.99 D, −5.00 to −5.99 D, −6.00 to −6.99 D, and −7.00 D and above.

Percentage of Eyes with UCVA of 20/20 or Better at 6 Months Postop Based on Preop MRSE

WaveLight ALLEGRETTO WAVE(b): 62.1% (29); 73.4% (64); 48.9% (47); 59.6% (52); 40.5% (37); 27.8% (18); 49.1% (53)
VISX(a): 85.7% (21); 90.4% (73); 83.8% (80); 84.0% (81); 84.2% (57); 77.5% (40); 90.0% (10)
Summit(c): 55.6% (9); 51.5% (33); 67.9% (53); 45.7% (46); 41.7% (36); 32.1% (109); 32.1% (109)
Nidek(c): 88.2% (17); 61.2% (152); 61.2% (152); 54.3% (164); 54.3% (164); 38.1% (425); 38.1% (425)
Autonomous LadarVision(a,b): 46.4% (28); 54.7% (53); 41.7% (48); 47.7% (44); 50.0% (52); 37.8% (37); 32.0% (147)
B&L Technolas 217(b,c): 59.0% (39); 51.7% (58); 65.2% (89); 64.3% (84); 45.1% (82); 51.1% (94); 43.0% (200)
LaserSight: 84.1% (96); 90.6% (117); 89.7% (127); 79.8% (108); 75.2% (80); 74.8% (65); 66.3% (77)

Percentage of Eyes with UCVA of 20/40 or Better at 6 Months Postop Based on Preop MRSE

WaveLight ALLEGRETTO WAVE(b): 96.6% (29); 93.8% (64); 95.7% (47); 92.3% (52); 78.4% (37); 88.9% (18); 96.2% (53)
VISX(c): 100% (21); 100% (73); 100% (80); 98.8% (81); 100% (57); 97.5% (40); 100% (10)
Summit(c): 100% (9); 87.9% (33); 90.6% (53); 80.4% (46); 77.8% (36); 32.1% (114); 32.1% (114)
Nidek(c): 94.1% (17); 86.2% (152); 86.2% (152); 86.6% (164); 86.6% (164); 82.6% (425); 82.6% (425)
Autonomous LadarVision(a,b): 92.9% (28); 94.3% (53); 91.7% (48); 95.5% (44); 98.1% (52); 86.5% (37); 88.4% (147)
B&L Technolas 217(b,c): 97.4% (39); 96.6% (58); 98.9% (89); 95.2% (84); 96.3% (82); 94.7% (94); 91.5% (200)
LaserSight: 95.7% (107); 97.3% (124); 98.6% (137); 97.4% (128); 95.3% (98); 95.9% (80); 95.7% (106)

Note: n = number of eyes treated; UCVA = uncorrected visual acuity, i.e., visual acuity without a second-order correction of sphere and cylinder (or defocus and astigmatism); MRSE = manifest refractive spherical equivalent. (a) Data obtained using 4000-Hz eye tracker system. (b) Three-month postoperative data reported. (c) Data obtained prior to the introduction of an eye tracker system. Source: FDA "Summary of Safety and Effectiveness Data," publicly available for download from www.fda.gov/cdrh/. Table compiled by Joseph Stamm, OD.
TABLE 12.2 Summary of Industry-Sponsored Hyperopic (Farsighted) Conventional LASIK Study Results for U.S. FDA Approval

Entries are percentages of eyes (with n in parentheses) for each preoperative MRSE bin, listed left to right: 0.00 to +0.99 D, +1.00 to +1.99 D, +2.00 to +2.99 D, +3.00 to +3.99 D, +4.00 to +4.99 D, and the cumulative total.

Percentage of Eyes with UCVA of 20/20 or Better at 6 Months Postop Based on Preop MRSE

Autonomous LadarVision: 52.8% (72); 45.6% (79); 41.2% (34); 25.9% (27); 100% (2); cumulative 45.3% (214)
B&L Technolas 217(a): 67.6% (111); 58.4% (77); 40.6% (32); 50.0% (4); 88.9% (9); cumulative 61.4% (233)
VISX (12-month postop): NR; NR; NR; NR; NR; cumulative 58.7% (167)
WaveLight ALLEGRETTO WAVE: 79.0% (76); 63.3% (60); 37.5% (24); 50.0% (16); 77.8% (27); cumulative 67.0% (203)

Percentage of Eyes with UCVA of 20/40 or Better at 6 Months Postop Based on Preop MRSE

Autonomous LadarVision: 98.6% (72); 96.2% (79); 79.4% (34); 85.2% (27); 100% (2); cumulative 93.0% (214)
B&L Technolas 217(a): 95.5% (111); 94.8% (77); 90.6% (32); 100% (4); 100% (9); cumulative 94.8% (233)
WaveLight ALLEGRETTO WAVE: 97.4% (76); 95.0% (60); 91.7% (24); 93.8% (16); 96.3% (27); cumulative 95.6% (203)

Note: NR = not reported; UCVA = uncorrected visual acuity, i.e., visual acuity without a second-order correction of sphere and cylinder (or defocus and astigmatism); MRSE = manifest refractive spherical equivalent. Source: FDA "Summary of Safety and Effectiveness Data," publicly available for download from www.fda.gov/cdrh/. Table compiled by Joseph Stamm, OD.
hyperopia compared to myopia is that a hyperopic ablation has three times as many transition points where the curvature changes [11]. The transition points are demonstrated in Figure 12.1. The myopic results in these studies were generally better than those for an equivalent dioptric amount of hyperopic treatment. In 2002, the FDA approved the first customized, wavefront-guided myopic laser ablation in the United States. Two other laser systems were approved in 2003. The results of the clinical trials are summarized in Table 12.3 and show considerable improvements over the conventional treatment platforms. Although the visual acuity results are excellent, further refinements may be possible based on subtle improvements and a better understanding of the requirements for customized ablation. The following is a brief overview of the technical challenges facing customized ablation.
12.3 FORMS OF CUSTOMIZATION
The ultimate goal of a customized ablation is to optimize the treatment to help satisfy a patient's visual needs. This goal is best achieved through three forms of customization: (1) functional, (2) anatomical, and (3) optical.

12.3.1 Functional Customization
Functional customization includes a complete assessment of the patient's visual needs and incorporates the patient's age, occupation, hobbies, and treatment expectations. A truck driver may have stringent distance visual requirements that need to be maximized, while a musician may generally need better near and intermediate vision. Presbyopia, a condition that affects individuals over the age of 40, causes patients to lose their accommodation and near vision due to a progressive hardening of the human lens with age. Myopic (nearsighted) individuals who are presbyopic see poorly at distance but often can take off their glasses and see well at near. This is because the focal point of the eye lies in front of the retina. A 1-D myope has an optimal focus point at 1 m, a 2-D myope has a focus point at 1/2 m, a 3-D myope has a focus point at 1/3 m, and so on. These patients need to be informed that their ability to read may be reduced after treatment, but they will probably receive a dramatic improvement in their distance vision. (Most people have a strong preference for good distance vision.) It is not uncommon for presbyopic patients to be treated with monovision, in which one eye is fully corrected for distance and one eye is intentionally left with a moderate amount of nearsightedness. Typically, a monovision correction intentionally makes one eye −1.25 to −1.50 D myopic. In mini-monovision, one eye is typically made −0.25 to −0.75 D myopic to give the patient more depth of focus (or greater dynamic range) when using both
TABLE 12.3 Summary of Industry-Sponsored Myopic (Nearsighted) Customized (or Wavefront-Guided) LASIK Study Results for U.S. FDA Approval

Approved treatment range, treatment zone, retreatments, and preoperative astigmatism by platform:

Autonomous LadarVision CustomCornea: sphere up to −8.00 D, cylinder up to −4.00 D; treatment zone 6.5 mm; retreatments: none; eyes with >0.50 D preop astigmatism: 257/331 (77.6%)
B&L Zyoptix: sphere up to −7.00 D, cylinder up to −3.00 D; treatment zone up to 7 mm; retreatments: none; eyes with >0.50 D preop astigmatism: 223/340 (65.6%)
VISX CustomVue(a): sphere up to −6.00 D, cylinder up to −3.00 D; treatment zone 6.0 mm; retreatments: 12 eyes; eyes with >0.50 D preop astigmatism: 141/351 (40.2%)

Entries below are percentages of eyes (with n in parentheses) for each preoperative MRSE bin, listed left to right: −1.00 to −1.99 D, −2.00 to −2.99 D, −3.00 to −3.99 D, −4.00 to −4.99 D, −5.00 to −5.99 D, and −6.00 to −6.99 D.

Percentage of Eyes with UCVA of 20/16 or Better at 6 Months Postop Based on Preop MRSE

Autonomous LadarVision CustomCornea: NR for all bins
B&L Zyoptix: 73.2% (41); 77.9% (86); 70.7% (82); 66.7% (57); 61.5% (39); 66.7% (27)
VISX CustomVue(a): NR for all bins

Percentage of Eyes with UCVA of 20/20 or Better at 6 Months Postop Based on Preop MRSE

Autonomous LadarVision CustomCornea: 96.8% (31); 87.0% (46); 91.7% (36); 84.2% (38); 71.9% (32); 84.6% (13)
B&L Zyoptix: 97.6% (41); 95.3% (86); 92.7% (82); 91.2% (57); 84.6% (39); 81.5% (27)
VISX CustomVue(a): 98.1% (54); 89.8% (59); 89.8% (59); 75.6% (41); 70.0% (20); NR

Note: NR = not reported; UCVA = uncorrected visual acuity, i.e., visual acuity without a second-order correction of sphere and cylinder (or defocus and astigmatism); MRSE = manifest refractive spherical equivalent. (a) Three-month postoperative data reported. Source: FDA "Summary of Safety and Effectiveness Data," publicly available for download from www.fda.gov/cdrh/. Table compiled by Joseph Stamm, OD.
eyes together, providing the presbyopic patient with more independence from reading glasses. Some patients, however, have a strong need to see well with both eyes at distance and are therefore best treated by planning for optimal distance correction in both eyes. Wearing a soft contact lens on a trial basis allows the patient to simulate monovision or mini-monovision and helps in deciding whether this is a viable option [12].

12.3.2 Anatomical Customization
The second form of customization is anatomical customization and includes careful measurements of corneal curvature using corneal topography, corneal thickness using ultrasonic pachymetry [13–15], and pupil size under low-light (mesopic) conditions [16]. These measurements are critical in helping to design an optimal ablation pattern that gives an adequate optical zone diameter [11, 17] and avoids treating with too deep an ablation. The larger the optical zone, the greater the tissue removal [11]. The most popular method of excimer laser refractive surgery is LASIK, which creates a corneal flap using a microkeratome or a femtosecond laser. The corneal flap is retracted, the underlying cornea is reshaped with the excimer laser, and then the flap is repositioned over the ablated corneal tissue (Fig. 12.2). In myopic treatments, the flap then takes on the shape of the newly
FIGURE 12.2 LASIK is performed by first creating a thin flap of corneal tissue with a microkeratome (pictured in the left panel). The flap is then lifted and the excimer laser ablation is performed (right panel). The flap is carefully repositioned over the reshaped corneal surface, creating either a flatter or steeper surface, as noted in Figure 12.1.
flattened cornea underneath it. Finally, the recovery time for patients after LASIK is typically much shorter than in other forms of excimer laser ablation, as most patients recover in 24 to 48 h after treatment. In order to maintain the structural integrity of the cornea, it is important to consider the amount of tissue that would need to be ablated to correct a given refractive or wavefront error. The normal cornea has an average total thickness of approximately 500 to 540 µm. In LASIK, the thickness of the corneal flap is usually between 110 and 180 µm, implying that, for this average corneal thickness, anywhere between 320 and 430 µm of corneal tissue remains after the flap cut. However, surgeons avoid treatments in which the deepest part of the ablation would penetrate into the posterior 250 µm of the cornea remaining after the flap is lifted (to avoid weakening the cornea). Therefore, approximately 70 to 180 µm of tissue can be removed safely. Typical laser ablations remove between 10 and 160 µm of tissue. Another option for surgeons is to treat with a surface ablation technique. There are two common surface ablations, PRK and LASEK (laser epithelial keratomileusis). In PRK, the superficial layer of the cornea, the corneal epithelium, is removed and the laser treatment is applied. A therapeutic or "bandage" soft contact lens is applied to the cornea postoperatively, and drops are given to minimize discomfort. LASEK is a variant of PRK in which the corneal epithelium is peeled back, often after the application of dilute alcohol to loosen it; the laser treatment is applied; the epithelial layer is floated back over the treated cornea; and a bandage soft contact lens is placed on the cornea for comfort. PRK and LASEK have longer recovery periods than LASIK, usually 2 to 4 days, and there may be more discomfort because the surface layer of the cornea is disrupted.
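The tissue budget described above reduces to simple arithmetic. The sketch below checks a planned LASIK ablation against the 250-µm residual-bed rule and estimates central ablation depth with the Munnerlyn approximation (depth in µm ≈ S²|D|/3 for an optical zone of S mm and a correction of D diopters). This is a common rule of thumb used here only for illustration, not the profile any particular laser uses; the function names are ours.

```python
# Tissue-budget arithmetic from the text: ~500-540 um total corneal
# thickness, a 110-180 um LASIK flap, and a minimum 250 um residual
# stromal bed. Function names and the Munnerlyn rule of thumb are
# illustrative, not any specific laser's algorithm.

def max_safe_ablation_um(corneal_thickness_um, flap_thickness_um,
                         min_residual_bed_um=250.0):
    """Deepest ablation that still leaves the required stromal bed."""
    return corneal_thickness_um - flap_thickness_um - min_residual_bed_um

def munnerlyn_depth_um(zone_diameter_mm, correction_diopters):
    """Munnerlyn approximation: central depth ~ S^2 * |D| / 3 (um)."""
    return zone_diameter_mm ** 2 * abs(correction_diopters) / 3.0

budget = max_safe_ablation_um(520.0, 160.0)   # -> 110 um
needed = munnerlyn_depth_um(6.0, -8.0)        # -> 96 um for -8 D, 6 mm zone
print(f"safe budget: {budget:.0f} um; -8 D over 6 mm needs ~{needed:.0f} um")
```

For a 520-µm cornea with a 160-µm flap, the budget (110 µm) sits inside the 70- to 180-µm range quoted in the text, and an −8 D, 6-mm treatment (~96 µm) just fits.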
There are now mechanical microkeratomes that separate the epithelium from the underlying corneal stroma, which may have the advantage of better corneal epithelial preservation (by not using dilute alcohol), possibly allowing for a quicker recovery. This approach is sometimes referred to as epi-LASIK, and its benefits are still being assessed. Interestingly, the outcomes for LASIK, PRK, and LASEK are similar in the few studies that have compared the treatments in the same patients in paired-eye studies [18, 19]. LASIK is used for the typical patient, while PRK or LASEK is used more commonly in patients whose corneas are too thin for LASIK or whose large refractive errors would require too deep an ablation [20]. Surface ablation is also used preferentially in patients who have a tendency toward dry eyes, since it tends not to increase dryness symptoms [21].

12.3.3 Optical Customization
The third form of customization is optical customization. Optical customization involves measuring and treating the second (or lower) order aberrations
of sphere (either myopia or hyperopia) and cylinder (astigmatism) as well as higher order Zernike aberrations. These include third-order aberrations, such as coma and trefoil, and spherical aberration, a fourth-order aberration typically found in the normal population. A wavefront sensor measures these aberrations, and a treatment file can then be developed to correct them with the excimer laser. A variety of wavefront sensors are used for optical customization, including the Shack–Hartmann, Tscherning, and scanning slit wavefront sensors, and an objective spatially resolved refractometer. The most popular of these is the Shack–Hartmann wavefront sensor, which is used by at least four of the laser refractive surgical companies offering customized ablation. Each system has relative strengths and weaknesses; a more detailed discussion is included elsewhere [5] and is beyond the scope of this chapter.
12.4 THE EXCIMER LASER TREATMENT
In the early years of refractive surgery, patients were treated with broad-beam lasers and the optical zones were sometimes as small as 4 to 5 mm. These small zones tended to cause glare, halos, and night-driving symptoms when the pupil naturally dilated at night to a diameter greater than that of the optical zone. Although these patients had symptoms because of their small optical zones, the refractive correction has remained relatively stable based on 12-year follow-up data, as noted by Rajan and co-workers [22]. The excimer laser systems currently used to treat patients are much more sophisticated than the earlier platforms, using small-spot treatment systems to create finer ablation profiles and fast eye tracking systems that minimize pupil decentrations due to eye movements. The use of larger optical zones and the limiting of treatments to less than 12 D have reduced the likelihood of postoperative visual problems. Patients with larger amounts of myopic refractive error are often corrected with phakic intraocular lenses (or phakic IOLs) [23, 24]. Many of the patients receiving customized excimer laser treatments now have fewer night-driving symptoms than they noted before surgery. The spot sizes of excimer lasers have decreased in some systems to less than 1.0 mm, and the rate of treatment has increased from a repetition rate of 10 Hz to as fast as 500 Hz [25]. Guirao and co-workers (as well as Huang and Arif) noted that a spot size of 0.5 to 1.0 mm is capable of reducing lower and higher order aberrations [26, 27]. A smaller spot size (such as a 1-mm spot) can treat finer aberrations, whereas larger spot sizes (i.e., ≥2 mm in diameter) can treat only sphere or cylinder. The trend over recent years has been to use smaller spot sizes and faster laser repetition rates, from 50 to 500 Hz, to perform customized treatments.
These faster repetition rates are preferable since they reduce the treatment time, thereby better preserving the hydration of the cornea to
reduce outcome variability. Thus, shorter treatment times allow for more uniform and predictable ablations. Bueeler and Mrochen compared ablation depths of 0.25 to 1.0 µm per pulse with laser spot diameters ranging from 0.25 to 1.0 mm and tracker latencies of 0, 5, 30, and 100 ms (as well as with no eye tracking) to simulate the efficacy of a scanning spot correction of (1) 0.6 µm of vertical coma, (2) 0.75 µm of spherical aberration, and (3) 0.075 µm of secondary vertical coma, all over a 5.7-mm pupil diameter [28]. They found that a shallower ablation depth of 0.25 µm per pulse with a larger spot size of 1.0 mm was most stable and least dependent on tracker latency. A beam with a larger spot size, however, is less capable of treating finely detailed aberrations than a beam with a smaller diameter, but it is also less susceptible to errors induced by eye movements. A shorter latency is advantageous since it reduces the time between an eye movement and the repositioning of the laser beam by internal mirrors in reaction to the movement [5]. Eye tracking has been incorporated into treatments using video-based and laser radar systems, with tracking rates varying from 60 to 4000 Hz. Although faster tracking is advantageous, studies done by Porter, Yoon, and co-workers showed that approximately 90% of the eye movements that typically occur during laser refractive surgery could be compensated by a 1- to 2-Hz closed-loop tracking system [29]. In addition, these studies indicated that the most critical component of eye tracking was the accuracy of the surgeon in manually centering the tracker over the pupil center at the time the tracker was activated. Small decentrations of 200 to 400 µm were not uncommon in the above study, even with meticulous centering by the surgeon, suggesting that greater magnification of the pupil and a more automated system to lock the eye tracker onto the center of the patient's pupil may be advantageous.
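Why a modest 1- to 2-Hz closed-loop bandwidth suffices can be illustrated with a generic first-order servo model, in which motion at temporal frequency f is attenuated to roughly f/√(f² + fc²) of its amplitude by a loop with closed-loop bandwidth fc. This is a textbook control-theory sketch, not the model used by Porter and Yoon; it simply shows that slow fixational drifts are strongly suppressed while content far above the loop bandwidth passes through.

```python
import math

def residual_fraction(f_motion_hz, f_loop_hz):
    """First-order error transfer |s / (s + wc)|: fraction of the motion
    amplitude at f_motion_hz left uncorrected by the tracking loop."""
    return f_motion_hz / math.hypot(f_motion_hz, f_loop_hz)

# Slow drifts are almost fully removed by a 1.5-Hz loop; motion much
# faster than the loop bandwidth is barely attenuated at all.
for f in (0.1, 0.5, 1.0, 5.0):
    print(f"{f:4.1f} Hz motion -> {residual_fraction(f, 1.5):6.1%} residual")
```

Under this toy model, 0.1-Hz drift is ~93% corrected by a 1.5-Hz loop, consistent with the idea that most intraoperative eye-movement power is slow.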
Small eye movements do occur during the ablation, as do static decentration errors made when centering the tracker over the pupil. Guirao and co-workers found that an otherwise ideal correction translated by 0.3 to 0.4 mm, or rotated by 8° to 10°, would still correct up to 50% of the higher order aberrations in a normal eye [30]. (Similar theoretical results were noted by Bueeler et al. in a larger population of eyes [31, 32].) The corollary is that half of the benefit of correcting higher order aberrations would be lost with such translations or rotations, stressing the importance of proper centration and an adequate tracking system.
12.5 BIOMECHANICS AND VARIABLE ABLATION RATE
The biomechanics of refractive surgery is a complicated subject, but there are several empirical observations that have helped to clarify the cornea’s response to laser treatments. The most prominent change that takes place with myopic excimer laser surgery is an increase in positive spherical aberration, while hyperopic treatments tend to induce an increase in negative spherical aberration (Fig. 12.3) [33, 34]. Normally, most individuals in the population have
FIGURE 12.3 A hypothesis by Yoon et al. of the biomechanical response of the cornea to excimer laser refractive surgery after (a) myopic and (b) hyperopic procedures [41]. Preoperative corneal shape, postoperative corneal shape (without biomechanical effects), and postoperative corneal shape including biomechanical effects are denoted using solid gray, dashed black, and solid black lines, respectively. (a) In a myopic laser correction, the central cornea is flattened. The peripheral portion of the optical zone (OZ) also steepens (causing an undercorrection in the periphery), creating positive spherical aberration. (b) In hyperopia, the central cornea and ablation optical zone steepen, but the peripheral part of the optical zone flattens (resulting in an undercorrection in the periphery), causing negative spherical aberration. (Reprinted from Yoon et al. [41], with permission from ASCRS & ESCRS.)
a slight amount of positive spherical aberration [35, 36], meaning that the central light rays fall directly on the macula in an emmetropic individual, but the peripheral light rays entering near the edge of the pupil focus in front of the retina. Roberts and co-workers have shown that the cornea actually steepens and thickens slightly in the midperiphery after myopic excimer laser treatment, partially accounting for the increases in positive spherical aberration noted after myopic LASIK or PRK procedures [37–39]. Mrochen and Seiler postulated that the ablation in the central cornea is more effective than that in the peripheral cornea due to the difference in the angle of incidence of the laser beam at these two locations [40]. Yoon et al. modeled the variable ablation rate as the beam moves to the periphery of the optical zone, along with the effect of biomechanics and wound healing on the shape of the postoperative cornea [41]. In this model, pulses perpendicularly striking the central cornea remove more tissue (or have higher efficiency) than pulses that obliquely strike the more peripheral cornea. Yoon et al. found that this variable ablation rate can account for at most an 8% decrease in efficiency when the beam reaches the peripheral part of a 6.0-mm-diameter optical zone. In the same model, the biomechanical and healing responses of the cornea would increase positive spherical aberration by approximately 7% of the attempted spherical correction in a myopic treatment and increase
negative spherical aberration by 25% of the attempted spherical correction in a hyperopic treatment [41]. Overall, roughly 50 and 30% of the spherical aberration induced after myopic and hyperopic treatments, respectively, can be attributed to changes in the efficiency of the laser ablation with eccentricity on the cornea. The majority of the remaining changes can be attributed to the postoperative biological response of the cornea.
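The geometry behind the peripheral efficiency loss is easy to reproduce. On a sphere of radius ~7.8 mm (a typical anterior corneal radius — our assumed value), a beam delivered parallel to the visual axis meets the surface at an angle θ = arcsin(r/R) at radial distance r from the apex, and the delivered fluence falls roughly as cos θ as the spot footprint spreads over the tilted surface. This toy calculation ignores reflection losses and is only meant to show that the geometric effect alone is of the same order as the ~8% figure from the Yoon et al. model.

```python
import math

R_CORNEA_MM = 7.8  # assumed typical anterior corneal radius of curvature

def incidence_angle_deg(r_mm, radius_mm=R_CORNEA_MM):
    """Angle between a vertically delivered beam and the surface normal
    at radial distance r from the corneal apex (spherical-cap geometry)."""
    return math.degrees(math.asin(r_mm / radius_mm))

def relative_efficiency(r_mm, radius_mm=R_CORNEA_MM):
    """Crude fluence factor: the beam footprint spreads as 1/cos(theta),
    so delivered fluence (and tissue removed per pulse) falls ~cos(theta)."""
    return math.cos(math.radians(incidence_angle_deg(r_mm, radius_mm)))

for r in (0.0, 1.5, 3.0):  # apex to the edge of a 6.0-mm optical zone
    print(f"r = {r:.1f} mm: theta = {incidence_angle_deg(r):4.1f} deg, "
          f"relative efficiency = {relative_efficiency(r):.3f}")
```

At the 3-mm edge of a 6.0-mm zone, θ ≈ 23° and the cos θ factor alone predicts roughly an 8% efficiency drop, matching the order of magnitude quoted in the text.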
12.6 EFFECT OF THE LASIK FLAP
A study by Porter and co-workers noted that the increase in positive spherical aberration with LASIK is primarily related to the excimer laser ablation and not to the cutting of peripheral collagen fibers by the microkeratome incision [42]. In this study, a microkeratome was used to create a corneal flap with a superior hinge in only one of each patient's eyes. The flap-induced aberrations were then measured for 2 months. In the lift group, the flap was lifted and a sham ablation was performed. In the no-lift group, the flap was not lifted and the eye was simply observed for 2 months. A negligible increase in the higher order RMS wavefront error was noted in the no-lift group. In the lift group, there was a 0.19-µm (or 50%) increase in the higher order RMS wavefront error from preoperative levels. Horizontal trefoil was the only higher order aberration that consistently increased, possibly due to the sweeping motion of the microkeratome during the flap cut. After 2 months, the flap was relifted and the cornea was ablated with the excimer laser to treat myopia. With the ablation, there was an increase in positive spherical aberration. This increase was correlated with the amount of attempted myopic correction, with higher amounts of myopic treatment inducing larger amounts of positive spherical aberration [42]. Pallikaris and co-workers noted a slightly different result after using a microkeratome to create a nasally hinged flap and observing the change in higher order aberrations for several months [43]. This study found an increase in spherical aberration after cutting a flap. In addition, there was also an increase in horizontal coma that, when coupled with the finding of Porter et al. [42], suggests that the motion of the microkeratome could influence the aberrations induced by the flap cut.
In summary, the central cornea flattens more in a myopic laser ablation, with a tendency for the peripheral cornea to steepen and thicken, resulting in an unanticipated introduction of positive spherical aberration. This causes the peripheral light rays to focus anterior to (i.e., in front of) the retina relative to the central light rays. In hyperopic corneal laser surgery, the tendency is for the central cornea to steepen while the peripheral cornea tends to flatten, inducing unanticipated negative spherical aberration. In this case, the central light rays are focused on the retina in an emmetropic eye, but the light rays passing through the midperipheral pupil are focused behind the retina.
12.7 WAVEFRONT TECHNOLOGY AND HIGHER ORDER ABERRATION CORRECTION

The original spherocylindrical excimer laser treatments were based on the ablation profiles created in plastic. Various correction factors were then applied to these profiles to account for differences in the ablation rates between plastic and human stromal tissue. The attempted correction used to drive the ablation was based on the patient's manifest refraction at the time of examination. This refraction is what the optometrist or ophthalmologist would typically give the patient when prescribing glasses. For the past 200 years, clinicians have been treating myopia and hyperopia based on refractions obtained using a phoropter or trial lenses in their offices. More recently, clinicians have been using wavefront sensors to measure and treat the subtle higher order aberrations of the eye (in addition to sphere and cylinder). The wavefront sensor provides an estimate of the lower and higher order aberrations inherent in a patient's eye. The wavefront error can be documented and then transferred to the excimer laser electronically. A corneal ablation profile (that is opposite in shape to the wave aberration) can then be formulated to correct the aberration pattern. This technique is called customized (or wavefront-guided) ablation. A treatment that corrects for only second-order sphere and cylinder is called a conventional ablation. Both customized and conventional excimer laser treatments can be performed with LASIK, LASEK, or PRK. The first wavefront-guided treatments did not take into account the biomechanical response of the cornea to the ablation or the change in laser ablation efficiency as a function of eccentricity on the cornea. Both of these factors account for the majority of the development of positive spherical aberration in patients after myopic treatments and negative spherical aberration after hyperopic treatments.
Laser companies have incorporated correction factors in an attempt to minimize the induced positive or negative spherical aberration created by the ablation with refractive surgery.
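The notion of an ablation profile "opposite in shape to the wave aberration" can be illustrated with a first-order calculation: removing stromal tissue of thickness t reduces the optical path length by (n − 1)·t, where n ≈ 1.376 is the refractive index of the corneal stroma, so the depth needed to cancel a wavefront error W is roughly W/(n − 1). The sketch below is illustrative only — it ignores the biomechanical and ablation-efficiency effects discussed above, and the parabolic sample wavefront is an assumption, not data from the text:

```python
# First-order relation between wavefront error and ablation depth:
# removing tissue of thickness t changes the optical path by (n - 1) * t,
# so t = W / (n - 1). Biomechanics and off-axis laser efficiency are ignored.
N_STROMA = 1.376  # approximate refractive index of corneal stroma

def ablation_depth_um(wavefront_error_um):
    """Tissue depth (um) whose removal cancels the given wavefront error (um)."""
    return wavefront_error_um / (N_STROMA - 1.0)

# Illustrative myopic-style profile: wavefront error largest at the pupil
# center, so the ablation is deepest centrally (sign conventions glossed over).
W0, r_max = 3.0, 3.0  # peak wavefront error (um), optical zone radius (mm)
for r_mm in (0.0, 1.5, 3.0):
    w = W0 * (1.0 - (r_mm / r_max) ** 2)
    print(f"r = {r_mm:.1f} mm: W = {w:.2f} um -> ablate {ablation_depth_um(w):.2f} um")
```

Each micrometer of wavefront correction thus costs roughly 2.7 µm of tissue, which is one reason large corrections remove substantial stroma.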
12.8 CLINICAL RESULTS OF EXCIMER LASER ABLATION
A number of large, well-controlled clinical trials have been performed by different laser companies as part of the U.S. FDA clinical trial process. These trials have provided evidence of the relative success of excimer laser treatments for correcting refractive errors. The results of these trials are summarized in Table 12.1 for conventional myopic (or nearsighted) treatments, Table 12.2 for conventional hyperopic (or farsighted) treatments, and Table 12.3 for customized myopic treatments. In the conventionally treated myopic eyes, the data suggest that the percentage of postoperative eyes with 20/20 or better uncorrected visual acuity (UCVA) (i.e., visual acuity without spectacles or contacts) ranged between 40 and 90% depending on the level of myopic
treatment and the laser used (Table 12.1). The percentage of eyes that obtained a UCVA of 20/20 or better with a customized (or wavefront-guided) ablation ranged between 70 and 98% depending on the attempted correction and laser platform (Table 12.3). The results for the conventional hyperopic procedures were less predictable than in the conventional or customized myopic treatments. The percentage of eyes that obtained a UCVA of 20/20 or better after a hyperopic correction of less than +4 D ranged between 40 and 100% and declined as the attempted correction increased (Table 12.2). Most customized LASIK or surface treatments introduce slight increases in higher order aberration. At the University of Rochester, our group has conducted other studies comparing the use of customized LASEK to conventional LASIK using the same laser system (unpublished data). In this paired study on 24 patients, one eye was treated with customized LASEK and the contralateral eye was treated with conventional LASIK. There was a 0.07-µm average increase (6.0-mm pupil) in higher order aberrations in the customized LASEK eyes compared to a 0.15-µm average increase with conventional LASIK. We compared these results to those from an equivalent group of 340 eyes from the U.S. FDA Bausch & Lomb customized myopic ablation clinical trial, where there was a 0.11-µm average increase in higher order aberrations (6.0-mm pupil). The magnitude of increase in higher order aberrations is relatively trivial when compared to the amount of RMS wavefront error introduced by 0.25 D of spherical refractive error (equivalent to a 0.32-µm RMS wavefront error across a 6.0-mm pupil), or one click on the phoropter. Thus, the magnitude of higher order aberrations introduced with customized ablation is equivalent to about one-third of a click of sphere on a phoropter. (These results are summarized in Fig. 12.4.)
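The stated equivalence between 0.25 D of sphere and a 0.32-µm RMS wavefront error across a 6.0-mm pupil can be checked with the standard power-vector relation |P_SE| = 4√3·|c|/r² (c the Zernike defocus coefficient in micrometers, r the pupil radius in millimeters). This conversion is assumed here, not derived in this chapter:

```python
import math

def defocus_rms_um(diopters, pupil_radius_mm):
    """Zernike defocus coefficient (um RMS) equivalent to a given spherical
    error (D), from |P_SE| = 4*sqrt(3)*|c|/r^2 (signs ignored)."""
    return diopters * pupil_radius_mm ** 2 / (4.0 * math.sqrt(3.0))

# 0.25 D over a 6.0-mm pupil (radius 3.0 mm):
rms = defocus_rms_um(0.25, 3.0)
print(f"{rms:.2f} um")  # -> 0.32 um, matching the value quoted in the text
```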
However, the blur in the retinal image produced by a defocused wavefront will be different from that produced by a similar magnitude of higher order aberrations. Based on our experience and clinical observations, our group has also found that patients with larger amounts of preoperative higher order aberration tend to experience a better correction of their eye's aberrations with customized ablation. This is similar to what we have noted in eyes with astigmatism: the more preoperative astigmatism a patient has, the more worthwhile it is to treat the eye with an astigmatic correction. Similarly, if an eye has a large degree of preoperative higher order aberrations, the eye is more likely to benefit from a higher order customized correction.
12.9 SUMMARY
With current state-of-the-art refractive surgery in normal eyes, excellent visual outcomes are attainable, and the field continues to conduct studies to improve its techniques. There is still much to learn about the role of biomechanics and tissue healing in refractive surgery, as well as how the correction of higher order aberrations influences the correction of sphere and cylinder. The field of refractive surgery has been revolutionized by the use of wavefront sensing, which has helped clinicians better understand how effective their attempts have been in reducing ocular aberrations. With this understanding, surgeons have been able to correct refractive errors while minimizing the increase in higher order aberrations. This exciting field has been led by the synergy between basic scientists and clinicians who have worked together to improve the patient's quality of vision.

FIGURE 12.4 Summary of higher order aberrations induced with different refractive surgical interventions. Flap manipulation associated with lifting the flap caused the greatest increase in higher order aberrations (solid dark gray). It is believed that this increase is related to flap swelling and less meticulous attention to symmetrically laying down the flap. Customized LASEK (white with black dots) induced the smallest amount of higher order RMS wavefront error, on average, followed by customized LASIK (white with diagonal lines) and conventional LASIK (solid white). The differences between conventional LASIK, customized LASIK, and customized LASEK were not significant, although the sample sizes were smaller for the conventional LASIK and customized LASEK procedures (n = 24 each). Moreover, the increase in RMS wavefront error for all interventions was less than 0.32 µm of Zernike defocus, which is equivalent to 0.25 D of sphere, or one click on the phoropter.
REFERENCES

1. Rich LF. History of Refractive Surgery. In: Elander E, Rich LF, Robin JB, eds. Principles and Practice of Refractive Surgery. Philadelphia: WB Saunders, 1997, pp. 1–7.
2. L'Esperance Jr FA, Taylor DM, Del Pero RA, et al. Human Excimer Laser Corneal Surgery: Preliminary Report (Presented at American Ophthalmological Society Meeting, Hot Springs, Ark.). Trans. Am. Ophthalmol. Soc. 1988; 86: 208–275.
3. Pallikaris IG, Papatzanaki ME, Stathi EV, et al. Laser in Situ Keratomileusis. Lasers Surg. Med. 1990; 10: 463–468.
4. MacRae SM, Krueger RR, Applegate RA, eds. Customized Corneal Ablation: The Quest for Super Vision. Thorofare, NJ: SLACK, 2001.
5. Krueger RR, Applegate RA, MacRae SM, eds. Wavefront Customized Visual Correction: The Quest for Super Vision II. Thorofare, NJ: SLACK, 2004.
6. Ren Q, Keates RH, Hill RA, Berns MW. Laser Refractive Surgery: A Review and Current Status. Opt. Eng. 1995; 34: 642–660.
7. Liang J, Grimm B, Goelz S, Bille JF. Objective Measurement of Wave Aberrations of the Human Eye with the Use of a Hartmann–Shack Wave-front Sensor. J. Opt. Soc. Am. A. 1994; 11: 1949–1957.
8. Liang J, Williams DR, Miller DT. Supernormal Vision and High-Resolution Retinal Imaging through Adaptive Optics. J. Opt. Soc. Am. A. 1997; 14: 2884–2892.
9. Liang J, Williams DR. Aberrations and Retinal Image Quality of the Normal Human Eye. J. Opt. Soc. Am. A. 1997; 14: 2873–2883.
10. Mrochen M, Kaemmerer M, Seiler T. Wavefront-Guided Laser in Situ Keratomileusis: Early Results in Three Eyes. J. Refract. Surg. 2000; 16: 116–121.
11. MacRae SM. Excimer Ablation Design and Elliptical Transition Zones. J. Cataract Refract. Surg. 1999; 25: 1191–1197.
12. DePaolis M, Aquavella J. How to Respond to Common Questions. Contact Lens Spect. 1993; 8: 36.
13. Machat JJ, Slade SG, Probst LE, eds. The Art of LASIK, 2nd ed. Thorofare, NJ: SLACK, 1998.
14. Sutton HFS, Reinstein DZ, Holland S, et al. Anatomy of the Flap in LASIK by Very High Frequency Ultrasound Scanning. Invest. Ophthalmol. Vis. Sci. 1998; 39: S244.
15. Maldonado MJ, Ruiz-Oblitas L, Munuera JM, et al. Optical Coherence Tomography Evaluation of the Corneal Cap and Stromal Bed Features after Laser in Situ Keratomileusis for High Myopia and Astigmatism. Ophthalmology. 2000; 107: 81–87.
16. Martinez CE, Applegate RA, Klyce SD, et al. Effect of Pupillary Dilation on Corneal Optical Aberrations after Photorefractive Keratectomy. Arch. Ophthalmol. 1998; 116: 1053–1062.
17. Gimbel HV, Anderson Penno EE. LASIK Complications. Thorofare, NJ: SLACK, 1998.
18. El Danasoury MA, El Maghraby A, Klyce SD, Mehrez K. Comparison of Photorefractive Keratectomy with Excimer Laser in Situ Keratomileusis in Correcting Low Myopia (from −2.00 to −5.50 diopters). A Randomized Study. Ophthalmology. 1999; 106: 411–420.
19. MacRae SM, Cox I, Williams DR. Higher Order Aberration Correction after Customized LASEK versus LASIK. 2003 American Society of Cataract and
Refractive Surgery (ASCRS) Symposium on Cataract, IOL and Refractive Surgery (San Francisco, CA); Abstract nr 500.
20. Ambrosio Jr R, Wilson S. LASIK vs LASEK vs PRK: Advantages and Indications. Semin. Ophthalmol. 2003; 18: 2–10.
21. Battat L, Macri A, Dursun D, Pflugfelder SC. Effects of Laser in Situ Keratomileusis on Tear Production, Clearance and the Ocular Surface. Ophthalmology. 2001; 108: 1230–1235.
22. Rajan MS, Jaycock P, O'Brart D, et al. A Long-Term Study of Photorefractive Keratectomy; 12-Year Follow-up. Ophthalmology. 2004; 111: 1813–1824.
23. Pineda-Fernandez A, Jaramillo J, Vargas J, et al. Phakic Posterior Chamber Intraocular Lens for High Myopia. J. Cataract Refract. Surg. 2004; 30: 2277–2283.
24. Kohnen T, Kasper T, Buhren J, Fechner PU. Ten-Year Follow-up of a Ciliary Sulcus-Fixated Silicone Phakic Posterior Chamber Intraocular Lens. J. Cataract Refract. Surg. 2004; 30: 2431–2434.
25. Iseli HP, Mrochen M, Hafezi F, Seiler T. Clinical Photoablation with a 500-Hz Scanning Spot Excimer Laser. J. Refract. Surg. 2004; 20: 831–834.
26. Guirao A, Williams DR, MacRae SM. Effect of Beam Size on the Expected Benefit of Customized Laser Refractive Surgery. J. Refract. Surg. 2003; 19: 15–23.
27. Huang D, Arif M. Spot Size and Quality of Scanning Laser Correction of Higher-Order Wavefront Aberrations. J. Cataract Refract. Surg. 2002; 28: 407–416.
28. Bueeler M, Mrochen M. Simulation of Eye-Tracker Latency, Spot Size, and Ablation Pulse Depth on the Correction of Higher Order Wavefront Aberrations with Scanning Spot Laser Systems. J. Refract. Surg. 2005; 21: 28–36.
29. Porter J, Yoon G, Lozano D, et al. Aberrations Induced in Wavefront-Guided Laser Refractive Surgery Due to Shifts between Natural and Dilated Pupil Center Location. J. Cataract Refract. Surg. 2005; in press.
30. Guirao A, Cox IG, Williams DR. Method for Optimizing the Correction of the Eye's Higher Order Aberrations in the Presence of Decentrations. J. Opt. Soc. Am. A. 2002; 19: 126–128.
31. Bueeler M, Mrochen M, Seiler T. Maximum Permissible Lateral Decentration in Aberration-Sensing and Wavefront-Guided Corneal Ablation. J. Cataract Refract. Surg. 2003; 29: 257–263.
32. Bueeler M, Mrochen M, Seiler T. Maximum Permissible Torsional Misalignment in Aberration-Sensing and Wavefront-Guided Corneal Ablation. J. Cataract Refract. Surg. 2004; 30: 17–25.
33. Cano D, Barbero S, Marcos S. Comparison of Real and Computer-Simulated Outcomes of LASIK Refractive Surgery. J. Opt. Soc. Am. A. 2004; 21: 926–936.
34. Marcos S, Cano D, Barbero S. Increase in Corneal Asphericity after Standard Laser in Situ Keratomileusis for Myopia Is Not Inherent to the Munnerlyn Algorithm. J. Refract. Surg. 2003; 19: S592–S596.
35. Porter J, Guirao A, Cox IG, Williams DR. Monochromatic Aberrations of the Human Eye in a Large Population. J. Opt. Soc. Am. A. 2001; 18: 1793–1803.
36. Thibos LN, Hong X, Bradley A, Cheng X. Statistical Variation of Aberration Structure and Image Quality in a Normal Population of Healthy Eyes. J. Opt. Soc. Am. A. 2002; 19: 2329–2348.
37. Roberts C. The Cornea Is Not a Piece of Plastic. J. Refract. Surg. 2000; 16: 407–416.
38. Dupps WJ Jr, Roberts C. Effect of Acute Biomechanical Changes on Corneal Curvature after Photokeratectomy. J. Refract. Surg. 2001; 17: 658–669.
39. Katsube N, Wang R, Okuma E, Roberts C. Biomechanical Response of the Cornea to Phototherapeutic Keratectomy When Treated as a Fluid-Filled Porous Material. J. Refract. Surg. 2002; 18: S593–S597.
40. Mrochen M, Seiler T. Influence of Corneal Curvature on Calculation of Ablation Patterns Used in Photorefractive Laser Surgery. J. Refract. Surg. 2001; 17: S584–S587.
41. Yoon G, MacRae S, Williams DR, Cox IG. Causes of Spherical Aberration Induced by Laser Refractive Surgery. J. Cataract Refract. Surg. 2005; 31: 127–135.
42. Porter J, MacRae S, Yoon G, et al. Separate Effects of the Microkeratome Incision and Laser Ablation on the Eye's Wave Aberration. Am. J. Ophthalmol. 2003; 136: 327–337.
43. Pallikaris IG, Kymionis GD, Panagopoulou SI, et al. Induced Optical Aberrations Following Formation of a Laser in Situ Keratomileusis Flap. J. Cataract Refract. Surg. 2002; 28: 1737–1741.
CHAPTER THIRTEEN
From Wavefronts to Refractions
LARRY N. THIBOS
Indiana University, Bloomington, Indiana
13.1 BASIC TERMINOLOGY

13.1.1 Refractive Error and Refractive Correction
The purpose of the eye's optical system is to cast an image of the external world onto the photoreceptor layer of the retina. If the system were perfect, it would focus all rays of light from a distant point source into a single image point on the retina. Real eyes, however, suffer from three types of optical imperfections that degrade the quality of the retinal image: aberrations, diffraction, and scattering. Since image formation in the eye is entirely refractive in nature (i.e., dioptric), as opposed to reflective (i.e., catoptric), one might suppose that the terms aberrations and refractive errors are synonymous. However, in ophthalmic contexts, the term refractive error has historically been restricted to spherical and astigmatic focusing errors. In the language of Zernike wavefront analysis, these two types of refractive error are called aberrations of the second Zernike order. In some cases, ophthalmic prescription lenses may also contain prisms to overcome binocular difficulty in forming an image of the point of regard on the foveal region of the retinas of the two eyes. Such prisms are described as Zernike aberrations of the first order, but they are generally disregarded for the purposes of assessing retinal image quality because prismatic deviations shift the image laterally without blurring it. The reason ophthalmic prescriptions have historically excluded the Zernike aberrations of third and higher orders (e.g., coma, trefoil, spherical aberration) is that these aberrations could not easily be measured or corrected
with spectacles or contact lenses. Consequently, refractive aberrations beyond the second order were historically outside the domain of clinical practice. However, this attitude has been reversed dramatically in recent years as researchers and industry have sought new methods for also correcting the higher order refractive imperfections of eyes, to achieve unprecedented quality of the retinal image and of spatial vision [1, 2]. The sign convention for specifying the refractive errors of eyes can appear confusing. Although researchers may be inclined to describe the refractive errors of the eye, per se, clinicians invariably prefer to specify the lens needed to correct the eye's refractive errors. For example, a myopic eye has too much refractive power, which requires a correcting lens of negative power. Thus the refractive error of a myopic eye is negative by standard clinical conventions. Conversely, a hyperopic eye has too little power and therefore requires a positive correcting lens, so it is said to have a positive refractive error. Unless stated otherwise, it is understood that the object distance is infinite. Further confusion can arise when discussing the state of focus of eyes with higher order aberrations because two systems for describing higher order aberrations are in wide circulation. Traditionally, a Seidel power-series expansion of wave aberrations was used, for which spherical aberration varies as the fourth power of radius, W(r) = r⁴. An eye with this aberration function is well focused in the Seidel sense because the coefficient of r² (the focus term) is zero, and therefore wavefront curvature is zero at the pupil center. Thus paraxial rays are focused at infinity, but marginal rays are focused closer than infinity, as shown in the upper half of Figure 13.1. Traditionally, this eye would be described as myopic for marginal rays, which requires a negative correcting lens to focus the marginal rays at infinity.
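In fact, the Seidel choice of zero wavefront curvature at the pupil center does not minimize the RMS wavefront error over the whole pupil: for W(r) = r⁴ on a unit pupil, the RMS-minimizing defocus term is −r², not zero. A small numerical check using brute-force radial integration (illustrative code, not from the text):

```python
import math

def rms_after_defocus(a, n=20000):
    """RMS of W(r) = r^4 - a*r^2 over a unit pupil, mean (piston) removed.
    Area-weighted radial integration: dA is proportional to r*dr."""
    total_w = total_w2 = total_area = 0.0
    for i in range(n):
        r = (i + 0.5) / n  # midpoint rule on the radial coordinate
        w = r ** 4 - a * r ** 2
        total_w += w * r
        total_w2 += w * w * r
        total_area += r
    mean = total_w / total_area
    return math.sqrt(total_w2 / total_area - mean ** 2)

# Scan defocus coefficients 0.0 .. 2.0; the RMS minimum lands at a = 1
# (the Zernike-balanced form r^4 - r^2), not at a = 0 (Seidel focus).
best_a = min((round(k * 0.1, 1) for k in range(21)), key=rms_after_defocus)
print(best_a)  # -> 1.0
```

The analytic variance, 4/45 − a/6 + a²/12, confirms the minimum at a = 1.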
In recent years, the traditional Seidel expansion of ocular aberrations has been replaced by a Zernike expansion with orthogonal basis functions, for which spherical aberration takes the form W(r) = r⁴ − r². Such an eye is well focused in the Zernike sense because the root-mean-square (RMS) wavefront error over the whole pupil will only increase by changing the coefficient of the r² term. Thus, one describes an eye with zero Zernike defocus as well balanced with respect to the focusing effect of r². However, as shown in the lower half of Figure 13.1, neither marginal nor paraxial rays are focused at infinity in an eye with zero Zernike defocus. Marginally, the eye appears myopic (negative refractive error) and paraxially the eye appears hyperopic (positive refractive error), but a global analysis over the entire pupil shows the eye to be well focused in the minimum-RMS sense.

13.1.2 Lens Prescriptions
FIGURE 13.1 The meaning of focus in an aberrated eye. The upper diagram shows an eye that is well focused in the Seidel sense because the retina is conjugate to infinity for paraxial rays. The lower diagram shows an eye that is well focused in the Zernike sense because, on balance over the whole pupil, the retina is conjugate to infinity.

Several conventions are in common use for specifying the spherocylindrical lens needed to correct an eye's refractive errors. The differences between these conventions arise from different ways of envisioning how a prescribed spherocylindrical lens is built from component lenses. Two of these conventions presuppose that the correcting lens is the combination of a spherical lens of power P_sph juxtaposed with a cylindrical lens of power P_cyl oriented with the axis of the cylinder at some angle α_cyl. The difference between these two conventions is the sign of P_cyl. Optometrists typically prefer to think of P_cyl as a negative number, whereas ophthalmologists typically prefer to think of P_cyl as a positive number. Simple rules allow conversion from one convention to another, a process called transposition. A third convention replaces P_sph with P_SE, the so-called spherical equivalent power. If one imagines correcting an eye with a spherical lens of power P_SE, the eye would remain astigmatic by some amount P_J. The value of P_J refers to the power of a so-called Jackson crossed-cylinder, which is made from two conventional cylindrical lenses of equal but opposite powers oriented with their axes mutually perpendicular. All three of the conventions described above may be said to be in polar form since the astigmatic component of the correction is given in terms of a magnitude (lens power) and an angle (the orientation of the lens). For computational purposes, conversion from polar form to rectangular form is usually advantageous. This is easily accomplished for the cross-cylinder convention using the following transposition formulas:
P_SE = P_sph + P_cyl/2,    P_J0 = −(P_cyl/2) cos 2α,    P_J45 = −(P_cyl/2) sin 2α    (13.1)
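Equation (13.1) and the transposition between minus-cylinder and plus-cylinder notation are straightforward to implement. A sketch (function names are mine, not from the text):

```python
import math

def transpose(sph, cyl, axis_deg):
    """Transpose a spherocylindrical Rx between plus- and minus-cylinder
    forms: new sphere = sph + cyl, cylinder sign flips, axis rotates 90 deg."""
    return sph + cyl, -cyl, (axis_deg + 90.0) % 180.0

def power_vector(sph, cyl, axis_deg):
    """Eq. (13.1): [P_SE, P_J0, P_J45] from sphere, cylinder, and axis."""
    a = math.radians(axis_deg)
    p_se = sph + cyl / 2.0
    p_j0 = -(cyl / 2.0) * math.cos(2.0 * a)
    p_j45 = -(cyl / 2.0) * math.sin(2.0 * a)
    return p_se, p_j0, p_j45

# A transposed Rx describes the same physical lens, so its power vector
# is unchanged (up to floating-point rounding):
rx = (-2.00, -1.00, 180.0)            # e.g., -2.00 -1.00 x 180 (minus-cyl form)
print(power_vector(*rx))
print(power_vector(*transpose(*rx)))  # same vector from the plus-cyl form
```

Because the power vector is invariant under transposition, it is a convenient intermediate representation for averaging or comparing refractions.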
The three-dimensional vector [P_SE, P_J0, P_J45] is known in the literature as a power vector, and the length of a power vector is called blur strength [3]. Blur strength quantifies the overall blurring effect of spherocylindrical refractive error on visual acuity [4, 5]. The components of a power vector are closely related to the second-order Zernike coefficients, as shown by the following formulas:

P_SE = −4√3 c_2^0/r²,    P_J0 = −2√6 c_2^2/r²,    P_J45 = −2√6 c_2^{−2}/r²    (13.2)

where c_n^m is the nth-order Zernike coefficient of meridional frequency m and r is the pupil radius. (When c_n^m is expressed in micrometers and r is expressed in millimeters, P_SE, P_J0, and P_J45 will be expressed in units of diopters.) The ophthalmic convention for specifying Zernike coefficients was established by a task force of the Optical Society of America (see also Appendix A) [6] and subsequently adopted for ANSI standard Z80.28 [7]. The normalization scheme used in this standard allows an interpretation of individual Zernike coefficients as the RMS wavefront error of the corresponding aberration mode. Thus, we may infer from Eq. (13.2) that defocus and astigmatism in units of diopters are proportional to the ratio of RMS wavefront error to pupil area.
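Equation (13.2) converts the three second-order Zernike coefficients directly into a power vector in diopters. A sketch (parameter names and the sample coefficient values are illustrative):

```python
import math

def zernike_to_power_vector(c20, c22, c2m2, pupil_radius_mm):
    """Eq. (13.2): power vector (D) from the second-order Zernike
    coefficients (um) and the pupil radius (mm)."""
    r2 = pupil_radius_mm ** 2
    p_se = -c20 * 4.0 * math.sqrt(3.0) / r2
    p_j0 = -c22 * 2.0 * math.sqrt(6.0) / r2
    p_j45 = -c2m2 * 2.0 * math.sqrt(6.0) / r2
    return p_se, p_j0, p_j45

# Illustrative: c_2^0 = -0.32 um of Zernike defocus over a 6.0-mm pupil
# (radius 3.0 mm) corresponds to roughly a quarter diopter of sphere.
p_se, p_j0, p_j45 = zernike_to_power_vector(-0.32, 0.0, 0.0, 3.0)
print(f"P_SE = {p_se:+.2f} D")  # -> P_SE = +0.25 D
```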
13.2 GOAL OF REFRACTION
Refraction is the process of determining the refractive error of the eye. Refractions can be performed objectively (e.g., with a clinical autorefractor, an instrument that determines spherical and astigmatic focusing errors of the eye) or subjectively, using psychophysical methods. The purpose of a refraction is to determine the correcting lens that maximizes the optical quality of the eye so that vision is optimized. Clinicians generally regard the subjective refraction as definitive since it includes the patient's judgment of optical focus based on their own visual perception using their own neural visual system. In addition, other considerations (e.g., balancing image size in the two eyes) may lead the clinician to prescribe correcting lenses that are somewhat suboptimal optically in order to achieve the overall objectives of visual comfort and performance. Consequently, the prescribed lens is often different from that indicated by a refraction.

13.2.1 Definition of the Far Point
The far point of an eye is that point in object space that is optically conjugate to the entrance apertures of the foveal photoreceptors. A true far point exists
only for an eye that is free of all optical aberrations except defocus. Real eyes invariably suffer from other aberrations besides defocus, and therefore an operational definition of the approximate far point is required. For example, an astigmatic eye may be described as having two far points, one for line objects oriented parallel to the axis of astigmatism and another for the orthogonal orientation. The point midway between these two extremes is a widely accepted approximation for the far point of an astigmatic eye since this is where the far point would be located if the astigmatism were corrected with spectacles. One reasonable approximation for the far point of an eye with higher order aberrations is that location in object space where an object must be located in order to maximize the optical quality of the retinal image of that object. Unfortunately, this approximation does not locate the far point uniquely for several reasons. First, it depends on the characteristics of the visual stimulus (e.g., luminance, size, spatial frequency spectrum, wavelength spectrum) and therefore may change as the stimulus changes. Second, there are many aspects to image quality that might give different locations of the far point. As shown below, image quality may be defined in a variety of ways, none of which are currently endorsed universally by the vision science community. Third, the psychophysical judgment of optimum focus by an observer depends on factors other than optical image quality, such as the neural sampling of the retinal image, cortical neural processing, and perceptual weighting given to the various features of optical blur, such as contrast attenuation, loss of sharpness, and doubling or ghosting of the image [8]. Thus, ultimately the selection of an objective method for locating the nominal far point of an eye will depend on the empirical demonstration that some methods work better than others.
This is an active area of contemporary research, and a brief summary of our current understanding is given in Section 13.5.

13.2.2 Refraction by Successive Elimination
Even if higher order aberrations are absent, an eye will still lack a true far point if it has uncorrected astigmatism. For this reason, clinical methods for performing a subjective refraction are best described as iterative routines that determine the optimum correcting lenses by successive elimination of the dominant remaining aberration. The first approximation eliminates the bulk of the defocus error by correcting the eye with a spherical lens of power P_SE, the so-called spherical equivalent. Next, the eye's astigmatism is corrected with a cylindrical lens, followed by a fine-tuning of the spherical lens power. The same rationale applies to objective, clinical instruments such as autorefractors and wavefront aberrometers (or wavefront sensors). Measuring small amounts of astigmatism in the presence of large amounts of defocus is difficult, so clinical instruments typically compensate for the majority of defocus errors before measuring the residual spherocylindrical refractive error.
13.2.3 Using Depth of Focus to Expand the Range of Clear Vision
A simplified way to think about correcting the eye's refractive error is that the cylindrical portion of the correcting lens eliminates astigmatism so that the far point is reasonably well defined. The far point is then moved further away (with negative spherical lenses in the case of a myopic eye) or closer (with positive spherical lenses in the case of a hyperopic eye) until the far point of the eye–lens system coincides with infinitely distant objects. Thus, distant objects like stars would be well focused with accommodation relaxed, and near objects would be brought into focus as the crystalline lens accommodates. Such a well-corrected eye may then be said to be emmetropic. The actual strategy typically used by clinicians to prescribe correcting lenses differs slightly from this simplified view, however [9]. Although retinal image quality is maximized when the object is located at the eye's far point, experience shows that the object may be placed slightly closer or slightly further than the far point without a noticeable loss of image quality or visual performance. This range of object distances for which image quality is not significantly degraded is called the eye's depth of focus. The size of this depth of focus depends on a variety of optical and neural factors, such as pupil size, the amount of higher order aberrations present in the eye, the spatial frequency spectrum of the visual stimulus, and the patient's subjective criterion for judging the endpoints of the focusing interval [10]. Typically, the depth of focus is in the range of ±0.25 to ±0.5 diopters (D). Since real objects cannot be located physically beyond infinity, it makes sense to position the far point of the corrected eye slightly closer than infinity so that the far end of the depth of focus (rather than the midpoint) is located at optical infinity, as shown in Figure 13.2.
By adopting this strategy, the range of clear vision when accommodation is relaxed is expanded by an amount equal to half of the depth of focus. Furthermore, this strategy ensures that no latent hyperopia is present in the corrected eye. The far end of the depth-of-focus interval is called the hyperfocal point. Thus, standard clinical practice, known as "hyperfocal refraction," aims to place the hyperfocal point, rather than the far point, at optical infinity when accommodation is relaxed. It is worth noting that although it is natural to suppose that accommodation is fully relaxed in darkness, this is usually not the case because humans tend to accommodate slightly in the absence of visual stimuli. By contrast, cycloplegic drugs used to paralyze accommodation during the refraction procedure usually relax accommodation beyond the normal physiological state, thereby pushing the far point of the eye further away. Since negative lenses are used to move the far point of the corrected eye further away, the above strategy calls for using a lens power that is slightly less negative (or more positive) than is needed to place the far point at infinity. Thus the clinician's maxim when prescribing the spherical component of corrective lenses is "maximum plus for best visual acuity" [9]. In other words, prescribe the maximum amount of positive lens power consistent with
FIGURE 13.2 The goal of hyperfocal refraction. An eye with uncorrected spherical refractive error (upper diagram) has a far point that is closer (in the case of a myopic eye) or farther (in the case of a hyperopic eye) than infinity. The far point lies at the middle of the depth of focus. The end of the depth-of-focus range that is farthest from the eye is the hyperfocal point. The purpose of a correcting lens (lower diagram) is to move the hyperfocal point to optical infinity.
no loss of visual performance for distant objects. This is an example of prescribing a lens that is slightly suboptimal in order to achieve another worthwhile objective, in this case expanding the range of clear vision without latent hyperopia.
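The arithmetic behind this strategy is simple enough to sketch. The function below is illustrative (its name and the numerical example are not from the text); it assumes the far point is given as a distance in meters and the depth of focus in diopters:

```python
def hyperfocal_prescription(far_point_m, depth_of_focus_d):
    """Spherical lens power (D) that places the hyperfocal point, rather
    than the far point, at optical infinity.

    far_point_m: distance from the eye to the uncorrected far point (m).
    depth_of_focus_d: total depth of focus (D); the hyperfocal point lies
        half this amount beyond the far point in diopter terms.
    """
    far_point_vergence = -1.0 / far_point_m   # e.g., 0.5 m -> -2.00 D
    # Conventional refraction would prescribe the far-point vergence itself;
    # "maximum plus" adds half the depth of focus to that sphere.
    return far_point_vergence + 0.5 * depth_of_focus_d

# A 2.00-D myope (far point at 0.5 m) with a 0.50-D total depth of focus
# receives -1.75 D instead of -2.00 D:
print(hyperfocal_prescription(0.5, 0.5))   # -1.75
```

The half-diopter difference is exactly the "slightly less negative" sphere that trades a small amount of distance defocus for an expanded range of clear vision.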
13.3 METHODS FOR ESTIMATING THE MONOCHROMATIC REFRACTION FROM AN ABERRATION MAP

Aberrometers (or wavefront sensors) measure all of the eye’s aberrations for monochromatic light and display the result in the form of a wave aberration map. Since the second-order aberrations of astigmatism and defocus are included in this map, it seems reasonable to suppose that the parameters of the ideal correcting lens can be deduced from the aberration map. Indeed, Eq. (13.2) would seem to indicate a straightforward method for computing the power vector parameters of the correcting lens based on the Zernike coefficients of the wave aberration. Unfortunately, the problem is not so simply solved. It is true that prescribing a correcting lens according to Eq. (13.2) would reduce the second-order Zernike aberrations to zero. However, this does not necessarily guarantee that optical quality or vision are
maximized if higher order aberrations are present [11, 12]. A dramatic example is shown in Figure 13.3, which shows point spread functions (PSFs) for a postsurgical eye with a large amount of positive spherical aberration. Setting the Zernike defocus coefficient $c_2^0 = 0$ is equivalent to adding that spectacle lens that minimizes the RMS wavefront error of the eye–lens system. In this example, the result is an inferior PSF that is diffusely blurred. Changing the indicated lens in the positive direction (i.e., towards Seidel focus) by 1, 2, or 3 D yields steady improvement in the compactness of the PSF, which is an indicator of improved image quality. This example demonstrates convincingly that eliminating the second-order Zernike coefficients does not necessarily optimize image quality. The clear failure of refraction based on minimizing RMS wavefront error has launched a search for alternative methods for inferring the optimum spherocylindrical prescription from a wave aberration map. This is currently an active area of research that is summarized next. Two general strategies can be adopted for converting an aberration map to a spherocylindrical refractive correction. The first is based on fitting the aberration map with a representative quadratic surface (Section 13.3.1) and the second is based on through-focus calculations that seek to maximize the eye’s optical quality (Section 13.3.2).
FIGURE 13.3 Monochromatic PSFs for a human eye with a large amount of positive spherical aberration. Computations simulate the PSF that would occur when the eye views a point source through lenses of four different powers relative to the lens prescribed from the second-order Zernike coefficients. The 0-D lens corresponds to the minimum RMS condition (Zernike focus). Improvement in image quality is possible by increasing the power of the correcting lens in the positive direction.
13.3.1 Refraction Based on Equivalent Quadratic
The equivalent quadratic for a wave aberration map is the quadratic surface that best represents the map. This idea of representing an aberrated map with a quadratic is a simple extension of the common ophthalmic technique of representing a quadratic (i.e., a spherocylindrical) surface with an “equivalent sphere.” Two methods for determining the equivalent quadratic from an aberration map are presented below.

13.3.1.1 Least-Squares Fitting  One common way to fit an arbitrarily aberrated wavefront with a quadratic surface is to minimize the sum of the squared deviations between the two surfaces. This least-squares fitting method is the basis for the Zernike expansion of wavefronts. Because the Zernike expansion employs an orthogonal set of basis functions, the solution is given by the second-order Zernike coefficients regardless of the values of the other coefficients. The Zernike coefficients can be converted to a power vector prescription using Eq. (13.2).

13.3.1.2 Paraxial Curvature Matching  Curvature is the property of wavefronts that determines how they focus. Thus, a reasonable way to fit an arbitrary wavefront with a quadratic surface is to match the curvature of the two surfaces at some reference point. A variety of reference points could be selected, but the natural choice is the pupil center. Two surfaces that are tangent at a point and have exactly the same curvature in every meridian are said to “osculate.” Thus, the surface we seek is the osculating quadratic. Explicit results are obtained by computing the wavefront curvature at the pupil center for a wavefront expressed as a weighted sum of Zernike polynomials. This process effectively collects the quadratic terms from the various Zernike modes, thereby yielding the power vector coordinates of the correcting lens in terms of the Zernike coefficients of the wavefront.
The formulas below are truncated at the sixth Zernike order but could be extended if warranted:

$$
\begin{aligned}
P_{SE} &= \frac{-4\sqrt{3}\,c_2^{0} + 12\sqrt{5}\,c_4^{0} - 24\sqrt{7}\,c_6^{0} + \cdots}{r^2}\\
P_{J0} &= \frac{-2\sqrt{6}\,c_2^{2} + 6\sqrt{10}\,c_4^{2} - 12\sqrt{14}\,c_6^{2} + \cdots}{r^2}\\
P_{J45} &= \frac{-2\sqrt{6}\,c_2^{-2} + 6\sqrt{10}\,c_4^{-2} - 12\sqrt{14}\,c_6^{-2} + \cdots}{r^2}
\end{aligned}
$$

where r is the pupil radius.
(13.3)
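The curvature-matching conversion of Eq. (13.3) is easy to express in code. This sketch (the function name and unit conventions are our own, not from the text) assumes Zernike coefficients in micrometers and pupil radius in millimeters, which yields diopters directly:

```python
import math

def paraxial_power_vector(c, r):
    """Power vector (P_SE, P_J0, P_J45) in diopters via paraxial curvature
    matching, Eq. (13.3), truncated at the sixth Zernike order.

    c: dict mapping (n, m) -> Zernike coefficient in micrometers
       (missing modes are treated as zero).
    r: pupil radius in millimeters (micrometers / millimeters^2 = diopters).
    """
    g = lambda nn, mm: c.get((nn, mm), 0.0)
    r2 = r * r
    p_se = (-4 * math.sqrt(3) * g(2, 0) + 12 * math.sqrt(5) * g(4, 0)
            - 24 * math.sqrt(7) * g(6, 0)) / r2
    p_j0 = (-2 * math.sqrt(6) * g(2, 2) + 6 * math.sqrt(10) * g(4, 2)
            - 12 * math.sqrt(14) * g(6, 2)) / r2
    p_j45 = (-2 * math.sqrt(6) * g(2, -2) + 6 * math.sqrt(10) * g(4, -2)
             - 12 * math.sqrt(14) * g(6, -2)) / r2
    return p_se, p_j0, p_j45
```

With only second-order terms present, this reduces to the least-squares refraction; a positive spherical aberration coefficient pushes the paraxial sphere in the positive direction, consistent with the behavior shown in Figure 13.3.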
13.3.2 Virtual Refraction Based on Maximizing Optical Quality
Our operational definition of the far point (Section 13.2.1) was based on the intuitive notion of moving an object axially along the line of sight until the retinal image of that object was in clearest focus. This procedure is easily
simulated in silico by adding a spherical wavefront to the eye’s aberration map and then computing the retinal image using standard methods of Fourier optics. The curvature of the added wavefront can be systematically varied to simulate a “through-focus” experiment that varies the optical quality of the eye–lens system over a range from good to bad. Given a suitable metric of optical quality, this computational procedure yields the optimum power PSE of the spherical correcting lens needed to maximize the optical quality of the corrected eye. With this virtual spherical lens in place, the process can be repeated for “through-astigmatism” calculations to determine the optimum values of PJ0 and PJ45 needed to maximize image quality. With these virtual astigmatic lenses in place, we can fine-tune the determination of PSE by repeating the through-focus calculations. This computational method, called virtual refraction, captures the essence of traditional refraction by successive elimination (Section 13.2.2) by mathematically simulating the effect of spherocylindrical lenses of various powers. The quadratic wavefront that, when added to the eye’s aberration map, maximizes the eye’s optical quality defines the ideal correcting lens [12, 13]. Implementing the method of virtual refraction requires an acceptable metric of optical quality. Optical quality can be defined in many ways. Below, three general approaches are described based on (1) wavefront quality, (2) retinal image quality for point objects, and (3) retinal image quality for grating objects. It is probably unrealistic to expect a single metric to capture all aspects of optical quality in aberrated eyes. Thus it seems likely that future research will show that a combination of metrics will be needed to optimize this refraction method.

13.3.2.1 Metrics of Wavefront Quality  A perfect optical system has a flat aberration map, and therefore metrics are required that capture the idea of flatness.
An aberration map is flat if its value is constant across the pupil, but also if its slope is everywhere zero, or if its curvature is everywhere zero. Accordingly, we seek meaningful scalar metrics based on the wave aberration map, the slope map, and the curvature map. The wave aberration map describes the optical path errors across the pupil that give rise to phase errors for light entering the eye through different parts of the pupil. These phase errors produce interference effects that degrade the retinal image. Two common metrics of wavefront flatness are the following:

1. RMSW = root-mean-square wavefront error computed over the whole pupil (micrometers):

$$\mathrm{RMS}_W = \left[\frac{1}{A}\iint_{\text{pupil}} \left(W(x, y) - \overline{W}\right)^2 dx\,dy\right]^{0.5}$$
(13.4)
where W(x, y) is the wave aberration function defined over the (x, y) coordinates of the pupil, A is the pupil area, and the integration is performed over the domain of the entire pupil. Computationally, RMSW is just the standard deviation of the values of the wavefront error specified on a uniform grid of pupil locations. Although RMSW is a commonly used metric of optical quality and is closely related to image quality criteria (e.g., Strehl ratio) in weakly aberrated systems [14], its use in highly aberrated systems like the eye leads to ambiguous results. As shown in Figure 13.4, if a wave aberration exceeds 1 wavelength (λ) of light, the phase map wraps modulo 2π. Although the wrapped and unwrapped versions of the wave aberration map yield exactly the same PSF, the value of RMSW computed for the two wavefronts can be vastly different. Note, however, that phase wrapping does not affect wavefront slope. This suggests that a metric based on wavefront slope [e.g., RMSS in Eq. (13.6)] may be more robust in eyes with large aberrations.

2. PV = peak-to-valley difference (micrometers):

$$PV = \max\left(W(x, y)\right) - \min\left(W(x, y)\right)$$
(13.5)
Rayleigh’s criterion (PV < λ/4) is a traditional threshold for judging diffraction-limited optical quality based on peak-to-valley difference.
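Both metrics are one-liners on a sampled wavefront. The sketch below is illustrative (the grid size and example coefficient are our own choices); it checks the computation against a normalized Zernike defocus mode, whose RMS equals the coefficient magnitude:

```python
import numpy as np

def wavefront_metrics(W, in_pupil):
    """RMS wavefront error (Eq. 13.4) and peak-to-valley error (Eq. 13.5)
    from wavefront samples on a uniform grid.
    """
    w = W[in_pupil]
    rms_w = w.std()            # standard deviation over the pupil = RMSw
    pv = w.max() - w.min()     # peak-to-valley difference
    return rms_w, pv

# Example: a normalized Zernike defocus mode c*sqrt(3)*(2*rho^2 - 1) has
# RMS equal to |c| and peak-to-valley equal to 2*sqrt(3)*|c|.
n = 256
y, x = np.mgrid[-1:1:1j * n, -1:1:1j * n]
rho2 = x**2 + y**2
pupil = rho2 <= 1.0
c = 0.5                                    # defocus coefficient (micrometers)
rms_w, pv = wavefront_metrics(c * np.sqrt(3.0) * (2 * rho2 - 1), pupil)
```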
FIGURE 13.4 Two wave aberration functions that produce the same PSF but have different RMS values, plotted as wavefront error against normalized pupil radius. The dashed curve is a parabolic wavefront associated with defocus (Zernike defocus, W(r) = 2r²); phase wrapping for λ = 550 nm produces the solid curve, for which the RMS is lower. In this phase-wrapping domain, the RMS of the solid curve is nearly independent of the amount of defocus, but the RMS of the dashed curve is proportional to defocus [see Eq. (13.2)].
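The phase-wrapping equivalence is easy to verify numerically. The sketch below (grid size and normalization are our own assumptions) builds the defocus wavefront of Figure 13.4 in units of waves, wraps it modulo one wave, and compares the resulting PSFs and RMS values:

```python
import numpy as np

# Wrapping a wavefront modulo one wavelength leaves the generalized pupil
# function, and hence the PSF, unchanged, but can change the RMS drastically.
# The W(r) = 2r^2 defocus profile follows Figure 13.4.
n = 256
y, x = np.mgrid[-1:1:1j * n, -1:1:1j * n]
rho2 = x**2 + y**2
pupil = rho2 <= 1.0

W = 2.0 * rho2                  # wavefront error in waves
W_wrapped = np.mod(W, 1.0)      # phase-wrapped to [0, 1) waves

def psf_from(W_waves):
    P = pupil * np.exp(2j * np.pi * W_waves)   # generalized pupil function
    p = np.abs(np.fft.fftshift(np.fft.fft2(P)))**2
    return p / p.sum()                         # normalize total energy to 1

psf_a, psf_b = psf_from(W), psf_from(W_wrapped)
rms_a, rms_b = W[pupil].std(), W_wrapped[pupil].std()
# psf_a and psf_b agree to machine precision, yet rms_a is roughly twice
# rms_b for this wavefront.
```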
Wavefront slope may be interpreted as transverse ray aberrations that blur the image. Wavefront slope is a vector-valued function of pupil position that requires two maps for display. One map describes the slope in the horizontal (x) direction and the other map describes the slope in the vertical (y) direction. The RMS value of each slope map is a measure of the magnitude of the ray aberrations that blur the image. The square root of the sum of the squares of these two RMS values is thus a convenient scalar metric of wavefront quality.

3. RMSS = root-mean-square wavefront slope computed over the whole pupil (arcmin):

$$\mathrm{RMS}_S = \left[\frac{1}{A}\iint_{\text{pupil}} \left[\left(W_x(x, y) - \overline{W_x}\right)^2 + \left(W_y(x, y) - \overline{W_y}\right)^2\right] dx\,dy\right]^{0.5}$$
(13.6)
where Wx = ∂W/∂x and Wy = ∂W/∂y are the partial spatial derivatives of W(x, y) and the bar notation indicates the mean.

Wavefront curvature describes focusing errors that blur the image. To form a good image, curvature must be the same everywhere across the pupil; deviations from uniform curvature blur the image. Curvature is also a vector-valued function of position that is complicated because it varies not only with pupil position but also with orientation at any given point on the wavefront. Fortunately, Euler’s theorem of differential geometry assures us that the curvature at any given point is captured by the principal curvatures, which are related to maps of mean curvature, Cmean(x, y), and Gaussian curvature, CGauss(x, y), by

$$C_{\text{mean}}(x, y) = \frac{C_1(x, y) + C_2(x, y)}{2} \qquad C_{\text{Gauss}}(x, y) = C_1(x, y)\cdot C_2(x, y)$$
(13.7)

Solving these equations simultaneously yields the principal curvature maps C1(x, y), C2(x, y) in terms of Cmean and CGauss:

$$C_1(x, y),\ C_2(x, y) = C_{\text{mean}}(x, y) \pm \sqrt{C_{\text{mean}}^2(x, y) - C_{\text{Gauss}}(x, y)}$$
(13.8)
Given the principal curvature maps, we can reduce the dimensionality of wavefront curvature by computing the blur strength at every pupil location. The idea of a blur strength map is to think of the wavefront locally as a small piece of a quadratic surface for which a power vector can be computed. To compute the blur strength map, we first use the principal curvature maps to compute the astigmatism map,

$$C_J(x, y) = \frac{C_1(x, y) - C_2(x, y)}{2}$$
(13.9)
and then combine the astigmatism map with the mean curvature map using the Pythagorean formula to produce a blur strength map,

$$C_b(x, y) = \sqrt{C_{\text{mean}}^2(x, y) + C_J^2(x, y)}$$
(13.10)
The spatial average of this blur strength map is then a scalar value that represents the average amount of focusing error in the system that is responsible for image degradation.

4. bave = average blur strength (diopters):

$$b_{\text{ave}} = \frac{1}{A}\iint_{\text{pupil}} C_b(x, y)\,dx\,dy$$
(13.11)
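The slope and curvature metrics above can be approximated with finite differences. The sketch below is illustrative (the function name and grid are our own); it uses the small-slope approximation in which the curvature maps come from second derivatives, so that Eq. (13.8) reduces to the eigenvalues of the Hessian:

```python
import numpy as np

def rms_slope_and_blur(W, dx, in_pupil):
    """Finite-difference sketch of RMS wavefront slope (Eq. 13.6) and
    average blur strength (Eq. 13.11), using the small-slope (paraxial)
    approximation in which curvature is given by second derivatives.

    W: 2-D wavefront samples; dx: grid spacing; in_pupil: boolean mask.
    """
    Wy, Wx = np.gradient(W, dx, dx)                  # partial derivatives
    sx = Wx[in_pupil] - Wx[in_pupil].mean()
    sy = Wy[in_pupil] - Wy[in_pupil].mean()
    rms_s = np.sqrt((sx**2 + sy**2).mean())          # Eq. (13.6)

    _, Wxx = np.gradient(Wx, dx, dx)                 # second derivatives
    Wyy, Wxy = np.gradient(Wy, dx, dx)
    c_mean = 0.5 * (Wxx + Wyy)                       # mean curvature map
    c_j = np.sqrt((0.5 * (Wxx - Wyy))**2 + Wxy**2)   # astigmatism map, Eq. (13.9)
    c_b = np.sqrt(c_mean**2 + c_j**2)                # blur strength, Eq. (13.10)
    b_ave = c_b[in_pupil].mean()                     # Eq. (13.11)
    return rms_s, b_ave

# Pure defocus W = 0.5*(x^2 + y^2) has curvature 1 everywhere, so the
# average blur strength should come out close to 1.
n = 200
y, x = np.mgrid[-1:1:1j * n, -1:1:1j * n]
dx = 2.0 / (n - 1)
pupil = x**2 + y**2 <= 1.0
rms_s, b_ave = rms_slope_and_blur(0.5 * (x**2 + y**2), dx, pupil)
```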
In addition to the four metrics described above, another six metrics of wavefront quality can be defined based on the concept of pupil fraction. Pupil fraction is defined as the fraction of the pupil area for which the optical quality of the eye is reasonably good. A large pupil fraction is desirable because it means that most of the light entering the eye will contribute to a good-quality retinal image:

$$\text{Pupil fraction} = \frac{\text{area of good pupil}}{\text{total area of pupil}}$$
(13.12)
Two general methods for determining the area of the good pupil are illustrated in Figure 13.5. The first method, called the “critical-
FIGURE 13.5 Pupil fraction can be calculated two ways. In the critical-pupil method (left diagram) a subaperture that is concentric with the pupil is increased in diameter until some criterion level of wavefront quality is reached. In the tessellation method (right diagram) the pupil is tiled with small subapertures that are labeled “good” or “bad” according to some criterion of wavefront quality. In either method, pupil fraction is the ratio of the good pupil area to the total pupil area.
pupil” method, examines the wavefront inside a subaperture that is concentric with the eye’s pupil [15, 16]. We imagine starting with a small subaperture where image quality is guaranteed to be good (i.e., diffraction limited) and then opening up the aperture until some criterion of wavefront quality is reached. The result is the critical diameter, which can be used to compute the pupil fraction (critical-pupil method) as follows:

$$PF_c = \left(\frac{\text{critical diameter}}{\text{pupil diameter}}\right)^2$$
(13.13)

To implement this formula requires some criterion for “good” wavefront quality. For example, the criterion could be based on the wave aberration:
• PFWc = PFc when the critical pupil is defined as the concentric area for which RMSW < some criterion (e.g., λ/4).
Alternatively, the criterion could be based on the wavefront slope:
• PFSc = PFc when the critical pupil is defined as the concentric area for which RMSS < some criterion (e.g., λ/4).
Or, the criterion could be based on the wavefront curvature (i.e., blur strength):
• PFCc = PFc when the critical pupil is defined as the concentric area for which the average blur strength bave is less than some criterion (e.g., 0.25 D).

The second general method for determining the area of the good pupil is called the “tessellation” or “whole-pupil” method. We imagine tessellating the entire pupil with small subapertures and then labeling each subaperture as good or bad according to some criterion. The total area of all those subapertures labeled “good” defines the area of the good pupil, from which we compute pupil fraction as

$$PF_t = \frac{\text{area of good subapertures}}{\text{total area of pupil}}$$
(13.14)
To implement this method requires some criterion for deciding if the wavefront over a subaperture is good. For example, the criterion could be based on the wave aberration:
• PFWt = PFt when a good subaperture satisfies the criterion PV < some criterion (e.g., λ/4).
Alternatively, the criterion could be based on the wavefront slope:
• PFSt = PFt when a good subaperture satisfies the criterion that the horizontal slope and vertical slope are both less than some criterion (e.g., 1 arcmin).
Or, the criterion could be based on the wavefront curvature and the blur strength:
• PFCt = PFt when a good subaperture satisfies the criterion Cb < some criterion (e.g., 0.25 D).

13.3.2.2 Metrics of Image Quality for Point Objects  A perfect optical system images a point object into a compact, high-contrast retinal image, as illustrated in Figure 13.6. The image of a point object is called a PSF, denoted psf(x, y) in the following equations. Scalar metrics of image quality for aberrated eyes are designed to capture these attributes of compactness and contrast. The first five metrics listed below measure spatial compactness and the last six metrics measure contrast. Most of the metrics are completely optical in character, but a few also include knowledge of the neural component of the visual system. Several of these metrics are discussed in standard textbooks [17]. Some have a long history of use in vision science, as indicated by references to the original literature. Although not essential, many of the metrics are normalized by the value expected for an ideal, diffraction-limited eye with the same pupil diameter. The advantage of this convention is that the metric becomes unitless, with a range spanning 0 to 1. The disadvantage is that real differences in optical quality between eyes with different-sized pupils are hidden by the normalization.
FIGURE 13.6 (a) The retinal image of a point source in a high-quality eye has high contrast and compact form. (b) In an aberrated eye a point source produces an image with low contrast that is blurred spatially (low quality).
• D50 = diameter of a circular area centered on the PSF peak that captures 50% of the light energy (arcmin): D50 = r, where r is defined implicitly by

$$\int_0^{2\pi}\!\!\int_0^{r} \mathrm{psf}_N(r, \theta)\, r\,dr\,d\theta = 0.5$$
(13.15)

where psfN is the normalized (i.e., total intensity = 1) PSF centered on the origin (i.e., the peak of psfN is located at r = 0).

• EW = equivalent width of the centered PSF (arcmin):

$$EW = \left[\frac{4\int_{-\infty}^{\infty}\int_{-\infty}^{\infty} \mathrm{psf}(x, y)\,dx\,dy}{\pi\,\mathrm{psf}(x_0, y_0)}\right]^{0.5}$$
(13.16)

where (x0, y0) are the coordinates of the peak of the PSF. The equivalent width is the diameter of the circular base of the right cylinder that has the same volume as the PSF and the same height. A variant of this definition that is less susceptible to spurious peaks in the PSF substitutes the mean intensity of the central core of the PSF for the peak value in the denominator.

• SM = square root of the second moment of the light distribution (arcmin):

$$SM = \left[\frac{\int_{-\infty}^{\infty}\int_{-\infty}^{\infty} (x^2 + y^2)\,\mathrm{psf}(x, y)\,dx\,dy}{\int_{-\infty}^{\infty}\int_{-\infty}^{\infty} \mathrm{psf}(x, y)\,dx\,dy}\right]^{0.5}$$
(13.17)
This metric is analogous to the moment of inertia of a distribution of mass and is also known as the Gaussian moment. Large values of SM indicate a rapid roll-off of the optical transfer function at low spatial frequencies [17]. The merits of this metric for estimating best focus were pointed out by Röhler and Howland [18] in connection with Charman and Jenning’s experimental measurement of the eye’s longitudinal chromatic aberration [19, 20].

• HWHH = half width at half height (arcmin):

$$HWHH = \left[\frac{1}{\pi}\int_{-\infty}^{\infty}\int_{-\infty}^{\infty} q_H(x, y)\,dx\,dy\right]^{0.5}$$
(13.18)
where qH(x, y) = 1 if psf(x, y) > max(psf)/2; otherwise qH(x, y) = 0. Although widely used, this metric was criticized by Röhler and Howland [18].

• CW = correlation width of the light distribution (arcmin):

$$CW = \left[\frac{1}{\pi}\int_{-\infty}^{\infty}\int_{-\infty}^{\infty} q_A(x, y)\,dx\,dy\right]^{0.5}$$
(13.19)
where qA(x, y) = 1 if (PSF ⋆ PSF) > max(PSF ⋆ PSF)/2; otherwise qA(x, y) = 0. In this expression, PSF ⋆ PSF is the autocorrelation of the PSF.

• SRX = Strehl ratio computed in the spatial domain:

$$SRX = \frac{\max(\mathrm{psf})}{\max(\mathrm{psf}_{DL})}$$
(13.20)
where psfDL is the diffraction-limited PSF for the same pupil diameter.

• LIB = light-in-the-bucket = percentage of the total energy falling in the diffraction core:

$$LIB = \iint_{\text{DL core}} \mathrm{psf}_N(x, y)\,dx\,dy$$
(13.21)

where psfN is the normalized (i.e., total intensity = 1) PSF. The domain of integration is the central core of a diffraction-limited PSF for the same pupil diameter. An alternative domain could be the entrance aperture of the cone photoreceptors. Similar metrics have been used in the study of depth of focus [21].

• STD = standard deviation of intensity values in the PSF normalized to the diffraction-limited value:
$$STD = \frac{\left[\iint_{PSF} \left(\mathrm{psf}(x, y) - \overline{\mathrm{psf}}\right)^2 dx\,dy\right]^{0.5}}{\left[\iint_{PSF} \left(\mathrm{psf}_{DL}(x, y) - \overline{\mathrm{psf}_{DL}}\right)^2 dx\,dy\right]^{0.5}}$$
(13.22)

where psfDL is the diffraction-limited PSF. The domain of integration is a circular area centered on the PSF peak and large enough in diameter to capture most of the light in the PSF.

• ENT = entropy of the PSF:
$$ENT = -\int_{-\infty}^{\infty}\!\!\int_{-\infty}^{\infty} \mathrm{psf}(x, y)\,\ln\left(\mathrm{psf}(x, y)\right) dx\,dy$$
(13.23)
This metric was inspired by an information theory approach to optics [12].
• NS = neural sharpness, normalized to the sharpness value for a diffraction-limited PSF [12]. The neural sharpness metric convolves the PSF with a neural weighting function before computing the Strehl ratio as a way to capture the effectiveness of a point image for stimulating the neural portion of the visual system:

$$NS = \frac{\int_{-\infty}^{\infty}\int_{-\infty}^{\infty} \mathrm{psf}(x, y)\, g_N(x, y)\,dx\,dy}{\int_{-\infty}^{\infty}\int_{-\infty}^{\infty} \mathrm{psf}_{DL}(x, y)\, g_N(x, y)\,dx\,dy}$$
(13.24)
where gN(x, y) is a bivariate-Gaussian, neural weighting function.

• VSX = visual Strehl ratio computed in the spatial domain. Like the neural sharpness metric, the visual Strehl ratio weights the PSF with a neural weighting function before computing the Strehl ratio [13]. The difference between NS and VSX is in the choice of weighting functions:

$$VSX = \frac{\int_{-\infty}^{\infty}\int_{-\infty}^{\infty} \mathrm{psf}(x, y)\, N_{csf}(x, y)\,dx\,dy}{\int_{-\infty}^{\infty}\int_{-\infty}^{\infty} \mathrm{psf}_{DL}(x, y)\, N_{csf}(x, y)\,dx\,dy}$$
(13.25)
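Two of the metrics above, SRX and LIB, can be sketched with elementary Fourier optics. The code below is an illustration, not the authors' implementation: the FFT grid, padding factor, and the pixel-level approximation of the diffraction core (the disk inside the first Airy minimum) are our own assumptions:

```python
import numpy as np

def psf_metrics(W_waves, pupil):
    """Fourier-optics sketch of SRX (Eq. 13.20) and LIB (Eq. 13.21).

    W_waves: wave aberration in units of wavelengths, sampled on the same
             grid as `pupil` (a boolean mask inside a zero-padded array).
    """
    def psf_of(Wv):
        P = pupil * np.exp(2j * np.pi * Wv)          # generalized pupil function
        p = np.abs(np.fft.fftshift(np.fft.fft2(P)))**2
        return p / p.sum()                           # normalize total energy to 1

    psf = psf_of(W_waves)
    psf_dl = psf_of(np.zeros_like(W_waves))          # diffraction-limited reference
    srx = psf.max() / psf_dl.max()                   # Strehl ratio, Eq. (13.20)

    # Approximate the "DL core" as the disk inside the first Airy minimum:
    # radius 1.22 * N / d pixels for a pupil d pixels wide in an N-pixel array.
    n = pupil.shape[0]
    d = 2.0 * np.sqrt(pupil.sum() / np.pi)           # pupil diameter in pixels
    yy, xx = np.mgrid[0:n, 0:n]
    core = (xx - n // 2)**2 + (yy - n // 2)**2 <= (1.22 * n / d)**2
    lib = psf[core].sum()                            # light in the bucket, Eq. (13.21)
    return srx, lib

# Example: a diffraction-limited eye versus 0.1 wave RMS of defocus.
n = 256
y, x = np.mgrid[-1:1:1j * n, -1:1:1j * n] * 4.0      # pad the pupil 4x
rho2 = x**2 + y**2
pupil = rho2 <= 1.0
srx_dl, lib_dl = psf_metrics(np.zeros((n, n)), pupil)
srx_ab, lib_ab = psf_metrics(0.1 * np.sqrt(3.0) * (2 * rho2 - 1), pupil)
# srx_dl = 1 by construction; lib_dl approximates the classical 83.8%
# core fraction; the aberration lowers both SRX and LIB.
```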
where Ncsf(x, y) is a bivariate neural weighting function equal to the inverse Fourier transform of the neural contrast sensitivity function for interference fringes [22]. Several of the metrics defined above require knowledge of the center of the PSF, which can be problematic. It is not uncommon, for example, for the PSF of a human eye to have two or more central areas of nearly equal intensity (e.g., see Fig. 13.3, panel +3 D). It may be advantageous, in this case, to choose the midpoint between the two bright areas as the center. Another challenging example is that of a weakly aberrated but defocused eye, for which the PSF can be a bright ring surrounding a relatively dark core. Symmetry would suggest in this case that the dark core, rather than a single point on the bright ring, is the preferred location for the PSF center. One way to define the PSF center that satisfies these preferences is the centroid of all those points in the PSF that exceed some intensity criterion, such as 80% of the peak value. Although some of the metrics described above explicitly include some property of the visual nervous system, any metric of optical quality based on the PSF can be converted into a metric of visual quality by convolving the optical PSF with a neural weighting function that represents the spatial filtering properties of the visual system. The result is a neural PSF that describes the spatial distribution of activity in the domain of the neural image due to a point source of light [23]. This neural PSF can then be substituted for the optical PSF to compute the corresponding metric. Unlike the optical PSF, which is always positive, the neural PSF can be negative, corresponding to areas of inhibition in the weighting function.

13.3.2.3 Metrics of Image Quality for Grating Objects  Unlike point objects, which can produce an infinite variety of PSF images depending on the nature of the eye’s aberrations, small patches of grating objects always produce sinusoidal images no matter how aberrated the eye. Consequently, there are only two ways that aberrations can affect the image of a grating patch: They can reduce the contrast or translate the image sideways to produce a spatial phase shift, as illustrated in Figure 13.7. In general, the amount of contrast attenuation and the amount of phase shift both depend on the grating’s spatial frequency. This variation of image contrast with spatial frequency for an object with 100% contrast is called a modulation transfer function (MTF). The variation of image phase shift with spatial frequency is called a phase transfer function (PTF). Together, the MTF and PTF comprise the eye’s optical transfer function (OTF). Optical theory tells us that any object can be conceived as the sum of gratings of various spatial frequencies and orientations. Within this context we think of the optical system of the eye as a filter that lowers the contrast and changes the relative position of each grating in the object spectrum as it forms a degraded retinal image. A high-quality OTF is therefore indicated by high MTF values and low PTF values. Scalar metrics of image quality in the frequency domain are based on these two attributes of the OTF.
FIGURE 13.7 Optical aberrations degrade the retinal image of a grating object by reducing contrast and inducing lateral phase shifts. The two graphs on the right show cross-sectional intensity profiles of the object (thin lines) and image (thick lines) gratings.
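The MTF/PTF decomposition can be computed directly from a sampled PSF. The sketch below is illustrative (the Gaussian test PSF and grid size are assumptions); it also demonstrates the point of Figure 13.7, that translating the image changes phase but not contrast:

```python
import numpy as np

def mtf_ptf(psf):
    """MTF = |OTF| and PTF = arg(OTF), where the OTF is the Fourier
    transform of the PSF, normalized to unit transfer at zero frequency.
    `psf` is a real 2-D array centered at (n//2, n//2).
    """
    otf = np.fft.fft2(np.fft.ifftshift(psf))
    otf = otf / otf[0, 0]
    return np.fft.fftshift(np.abs(otf)), np.fft.fftshift(np.angle(otf))

n = 128
yy, xx = np.mgrid[0:n, 0:n]
psf0 = np.exp(-((xx - n // 2)**2 + (yy - n // 2)**2) / 50.0)  # toy blur spot
psf1 = np.roll(psf0, 5, axis=1)        # same PSF, translated sideways

mtf0, ptf0 = mtf_ptf(psf0)
mtf1, ptf1 = mtf_ptf(psf1)
# mtf1 equals mtf0; ptf0 is ~0 (where the MTF is appreciable) while
# ptf1 carries the translation as a linear phase ramp.
```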
• SFcMTF = spatial frequency cutoff of the radially averaged modulation transfer function (rMTF). Cutoff frequency is defined here as the intersection of the radially averaged MTF (rMTF) and the neural contrast threshold function [24]. If the curves intersect more than once, the intersection with the highest spatial frequency is chosen (includes spurious resolution). The rMTF is computed by integrating the full two-dimensional MTF over all orientations. Note that the rMTF is not affected by phase shifts in the OTF and therefore this metric does not capture spatial phase errors:

SFcMTF = highest spatial frequency for which rMTF > neural threshold
(13.26)

where

$$\mathrm{rMTF}(f) = \int_0^{2\pi} \left|\mathrm{OTF}(f, \phi)\right| d\phi$$

and OTF(f, φ) is the optical
transfer function for spatial frequency coordinates f (frequency) and φ (orientation).

• SFcOTF = cutoff spatial frequency of the radially averaged optical transfer function (rOTF). Cutoff frequency is defined here as the intersection of the radially averaged OTF (rOTF) and the neural contrast threshold function. If the curves intersect more than once, the intersection with the lowest spatial frequency is chosen (excludes spurious resolution). The radially averaged OTF is determined by integrating the full two-dimensional OTF over all orientations. Since the OTF is a complex-valued function, integration is performed separately for the real and imaginary components. Conjugate symmetry of the OTF ensures that the imaginary component vanishes, leaving a real-valued result. Since phase shifts in the OTF are taken into account when computing the rOTF, this metric is sensitive to spatial phase errors in the image:

SFcOTF = lowest spatial frequency for which rOTF < neural threshold
(13.27)

where

$$\mathrm{rOTF}(f) = \int_0^{2\pi} \mathrm{OTF}(f, \phi)\,d\phi$$

and OTF(f, φ) is the optical transfer
function for spatial frequency coordinates f (frequency) and φ (orientation).

• AreaMTF = area of visibility for the rMTF (normalized to the diffraction-limited case). The area of visibility is the region lying below the radially averaged MTF and above the neural contrast threshold function [25, 26]:

$$\mathrm{AreaMTF} = \frac{\int_0^{\text{cutoff}} \mathrm{rMTF}(f)\,df - \int_0^{\text{cutoff}} T_N(f)\,df}{\int_0^{\text{cutoff}} \mathrm{rMTF}_{DL}(f)\,df - \int_0^{\text{cutoff}} T_N(f)\,df}$$
(13.28)
where TN is the neural contrast threshold function, which equals the inverse of the neural contrast sensitivity function. When computing the area under the rMTF, phase-reversed segments of the curve count as a positive area to be consistent with our definition of the SFcMTF as the highest frequency for which the rMTF exceeds the neural threshold. This allows spurious resolution to be counted as beneficial when predicting visual performance for certain tasks (e.g., contrast detection). Metrics based on the volume under the MTF have been used in studies of chromatic aberration [27] and visual instrumentation [26].

• AreaOTF = area of visibility for the rOTF (normalized to the diffraction-limited case):

$$\mathrm{AreaOTF} = \frac{\int_0^{\text{cutoff}} \mathrm{rOTF}(f)\,df - \int_0^{\text{cutoff}} T_N(f)\,df}{\int_0^{\text{cutoff}} \mathrm{rOTF}_{DL}(f)\,df - \int_0^{\text{cutoff}} T_N(f)\,df}$$
(13.29)
where TN is the neural contrast threshold function defined above. Since the domain of integration extends only to the cutoff spatial frequency of the SFcOTF, phase-reversed segments of the curve do not contribute to the area under the rOTF. This is consistent with our definition of the SFcOTF as the lowest frequency for which the rOTF is below the neural threshold. This metric would be appropriate for tasks in which phase-reversed modulations (i.e., spurious resolution) actively interfere with performance.

• SRMTF = Strehl ratio computed in the frequency domain (MTF method):

$$\mathrm{SRMTF} = \frac{\int_{-\infty}^{\infty}\int_{-\infty}^{\infty} \mathrm{MTF}(f_x, f_y)\,df_x\,df_y}{\int_{-\infty}^{\infty}\int_{-\infty}^{\infty} \mathrm{MTF}_{DL}(f_x, f_y)\,df_x\,df_y}$$
(13.30)
The Strehl ratio computed by the MTF method is equivalent to the Strehl ratio computed in the spatial domain for a hypothetical PSF with even symmetry (i.e., PTF = 0).

• SROTF = Strehl ratio computed in the frequency domain (OTF method):

$$\mathrm{SROTF} = \frac{\int_{-\infty}^{\infty}\int_{-\infty}^{\infty} \mathrm{OTF}(f_x, f_y)\,df_x\,df_y}{\int_{-\infty}^{\infty}\int_{-\infty}^{\infty} \mathrm{OTF}_{DL}(f_x, f_y)\,df_x\,df_y}$$
(13.31)
The Strehl ratio computed by the OTF method quantifies the relative intensity of the PSF at the coordinate origin, rather than at the peak (as in the SRX).
• VSMTF = visual Strehl ratio computed in the frequency domain (MTF method). This metric is similar to the SRMTF, except that the optical MTF is weighted by the neural contrast sensitivity function (CSFN):

$$\mathrm{VSMTF} = \frac{\int_{-\infty}^{\infty}\int_{-\infty}^{\infty} \mathrm{CSF}_N(f_x, f_y)\cdot \mathrm{MTF}(f_x, f_y)\,df_x\,df_y}{\int_{-\infty}^{\infty}\int_{-\infty}^{\infty} \mathrm{CSF}_N(f_x, f_y)\cdot \mathrm{MTF}_{DL}(f_x, f_y)\,df_x\,df_y}$$
(13.32)
This metric differs from the VSX by quantifying image quality at the coordinate origin, rather than at the peak of the PSF. VSMTF is equivalent to the VSX for a hypothetical PSF that is well centered with even symmetry, computed as the inverse Fourier transform of the MTF (which implicitly assumes PTF = 0).

• VSOTF = visual Strehl ratio computed in the frequency domain (OTF method). This metric is similar to the SROTF, except that the optical OTF is weighted by the neural contrast sensitivity function (CSFN):

$$\mathrm{VSOTF} = \frac{\int_{-\infty}^{\infty}\int_{-\infty}^{\infty} \mathrm{CSF}_N(f_x, f_y)\cdot \mathrm{OTF}(f_x, f_y)\,df_x\,df_y}{\int_{-\infty}^{\infty}\int_{-\infty}^{\infty} \mathrm{CSF}_N(f_x, f_y)\cdot \mathrm{OTF}_{DL}(f_x, f_y)\,df_x\,df_y}$$
(13.33)
This metric differs from the VSX by emphasizing image quality at the coordinate origin, rather than at the peak of the PSF.

• VOTF = volume under the OTF normalized by the volume under the MTF:

$$\mathrm{VOTF} = \frac{\int_{-\infty}^{\infty}\int_{-\infty}^{\infty} \mathrm{OTF}(f_x, f_y)\,df_x\,df_y}{\int_{-\infty}^{\infty}\int_{-\infty}^{\infty} \mathrm{MTF}(f_x, f_y)\,df_x\,df_y}$$
(13.34)
This metric is intended to capture phase shifts in the PTF. Since MTF ≥ real part of the OTF, this ratio is always ≤ 1.

• VNOTF = volume under the neurally weighted OTF, normalized by the volume under the neurally weighted MTF. This metric is similar to the VOTF, except that the optical OTF and MTF are weighted by the neural contrast sensitivity function (CSFN):

$$\mathrm{VNOTF} = \frac{\int_{-\infty}^{\infty}\int_{-\infty}^{\infty} \mathrm{CSF}(f_x, f_y)\cdot \mathrm{OTF}(f_x, f_y)\,df_x\,df_y}{\int_{-\infty}^{\infty}\int_{-\infty}^{\infty} \mathrm{CSF}(f_x, f_y)\cdot \mathrm{MTF}(f_x, f_y)\,df_x\,df_y}$$
(13.35)
This metric is intended to capture the visually significant phase shifts in the PTF.
METHODS FOR ESTIMATING THE MONOCHROMATIC REFRACTION
Any metric of optical quality based on the OTF can be converted into a metric of visual quality by replacing the optical OTF with its neural counterpart, computed as the Fourier transform of the neural PSF described in Section 13.3.2.2.

13.3.3 Numerical Example
As a numerical example, virtual refraction was performed for a hypothetical eye that is free of all aberrations except Zernike spherical aberration, for which c₄⁰ = 0.25 µm for a pupil diameter of 6 mm. The results are displayed in Figure 13.8 in the form of through-focus curves with lens power on the abscissa and metric value on the ordinate. The optimum lens power determined from these curves is shown for each wavefront metric, PSF metric, and OTF metric in the three stem plots in the lower right corner. As this example shows, the optimum value of the correcting lens can vary over a substantial
[Figure 13.8 near here: a grid of through-focus curves, one panel per metric (wavefront metrics: RMSw, PV, RMSs, PFWc, PFWt, PFSt, PFSc, Bave, PFCt, PFCc; PSF metrics: CW, LIB, EW, STD, SM, D50, SRX, ENT, NS, HWHH, VSX; OTF metrics: SFcMTF, AreaMTF, SFcOTF, AreaOTF, SROTF, VOTF, SRMTF, VNOTF, VSMTF, VSOTF), with lens power (D) on the abscissa and metric value on the ordinate, plus three stem plots of the best lens power versus metric number for the wave, PSF, and OTF metric families.]
FIGURE 13.8 Through-focus curves of virtual refraction. Each graph shows how a particular monochromatic metric of image quality varies with the power of a lens added to an eye’s wave aberration function. The three graphs on the bottom row, right side, summarize the lens power that optimizes each metric.
range (−0.4 D to +0.75 D in this case) depending on the metric used to assess optical quality.
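The through-focus procedure behind Figure 13.8 can be sketched numerically: build the wave aberration of the hypothetical eye (Zernike spherical aberration only, c₄⁰ = 0.25 µm over a 6-mm pupil), add the defocus equivalent of each trial lens, and keep the power that maximizes one metric. The grid size, 555-nm wavelength, the diopters-to-defocus conversion c₂⁰ = −P·r²/(4√3), and the choice of SROTF as the metric are all assumptions of this sketch:

```python
import numpy as np

# Hypothetical eye of Section 13.3.3: spherical aberration c40 = 0.25 um,
# 6-mm pupil (radius 3 mm).  Sweep trial-lens powers and maximize SROTF.
N, R_MM, C40_UM, WL_UM = 256, 3.0, 0.25, 0.555

yy, xx = np.mgrid[-1:1:N * 1j, -1:1:N * 1j]
rho2 = xx**2 + yy**2
mask = (rho2 <= 1.0).astype(float)

def wavefront_um(lens_power_d):
    # Trial lens of power P (diopters) adds defocus c20 = -P r^2/(4 sqrt(3))
    c20 = -lens_power_d * R_MM**2 / (4.0 * np.sqrt(3.0))
    w = (c20 * np.sqrt(3.0) * (2.0 * rho2 - 1.0)
         + C40_UM * np.sqrt(5.0) * (6.0 * rho2**2 - 6.0 * rho2 + 1.0))
    return w * mask

def srotf(w_um):
    # On-axis PSF intensity relative to the diffraction-limited case
    pupil = mask * np.exp(2j * np.pi * w_um / WL_UM)
    return np.abs(pupil.sum())**2 / mask.sum()**2

powers = np.arange(-2.0, 2.01, 0.05)
scores = [srotf(wavefront_um(p)) for p in powers]
best = float(powers[int(np.argmax(scores))])
print(f"best trial lens by SROTF: {best:+.2f} D")
```

Swapping in a different metric function reproduces the chapter's point: different metrics can prefer noticeably different lens powers for the same aberration map.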
13.4 OCULAR CHROMATIC ABERRATION AND THE POLYCHROMATIC REFRACTION

The human eye suffers from significant amounts of chromatic aberration caused by chromatic dispersion, which is the variation of the refractive index of the eye's refractive media with wavelength. Chromatic dispersion causes the focus, size, and position of retinal images to vary with wavelength. Variation in the focusing power of the eye with wavelength is called longitudinal (or axial) chromatic aberration (LCA) and is measured in diopters. In effect, the eye's far point varies with wavelength, which means that only one wavelength of light emitted by a point source can be well focused on the retina at any moment in time. Retinal image size for extended objects also varies with wavelength, called chromatic difference of magnification (CDM), and is specified as a fractional change. For any given point on an extended object, the image is spread across the retina like a tiny rainbow or colored fringe. This phenomenon is called transverse (or lateral) chromatic aberration (TCA) and is specified as a visual angle (i.e., an angle subtended at the eye's nodal point by the colored fringe). The effects of LCA and TCA on the polychromatic PSF are illustrated in Figure 13.9. In general, LCA tends to smooth the

[Figure 13.9 near here: luminance PSF panels at λ = 525, 575, and 600 nm plus a composite. Top row (LCA only): chromatic defocus Kλ = −0.18, +0.10, and +0.22 D, with λfocus = 555 nm. Bottom row (LCA plus TCA from a 1-mm pupil offset): chromatic displacements τ = +0.62, −0.35, and −0.74 min, with τ = 0 at the wavelength in focus.]
FIGURE 13.9 Image formation for a polychromatic source in the presence of chromatic aberration. Top row is for an eye with longitudinal chromatic aberration only. Bottom row is for an eye with longitudinal and transverse chromatic aberration produced by 1 mm of horizontal pupil offset from the visual axis (or, equivalently, 15° of eccentricity). The point source emits three wavelengths of light (525, 575, and 600 nm) and the eye is assumed to be focused for 555 nm. Chromatic errors of focus and position indicated for each image are derived from an analysis of the Indiana Eye model of chromatic aberration. (Figure also appears in the color figure insert.)
PSF since dark rings in the PSF for one wavelength are filled in by the bright rings of another wavelength. However, this smoothing effect is not as effective when TCA is present because the various PSFs at different wavelengths are not concentric. Clinical refractions are invariably performed with white light, which means patients are required to make subjective judgments about the quality of their vision based on some spectral aggregate of their sensations. It is generally presumed that these judgments are based primarily on stimulus luminance, rather than hue or saturation qualities of the colored image. Under this assumption, monochromatic methods for an objective refraction can be extended into the polychromatic domain with the aid of an optical model of the eye's ocular chromatic aberration. One such model is the Indiana Eye, a reduced eye model (i.e., a single refracting surface) that accounts for a large experimental literature on ocular chromatic aberration [28, 29]. The variation of refractive error with wavelength of this model is shown in Figure 13.10. One need for such a model is to determine the focus shift associated with referencing measurements taken at some convenient wavelength (e.g., infrared) to a visible wavelength in focus. A chromatic aberration model is also needed when conducting a virtual refraction in polychromatic light, as described next. Virtual refractions simulate the placement of lenses of different powers in front of the eye for the purpose of determining the lens that maximizes retinal image quality. As illustrated in Figure 13.10, when an eye views through a spherical lens, the LCA curve shifts vertically. Positive lenses change the eye–lens system in the myopic direction (which corresponds to a negative refractive error clinically); hence the curve shifts downward. Conversely, negative lenses shift the LCA curve upward.
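The reduced-eye LCA curve and its vertical shift under a trial lens can be sketched numerically. The constants below are the published hyperbolic fit D(λ) = p − q/(λ − c) of the Chromatic Eye / Indiana Eye model [28]; treat them, the sign convention (a lens of power P shifts refractive error by −P), and the helper names as assumptions of this sketch:

```python
# Indiana Eye chromatic-difference-of-refraction fit (lambda in um, D in
# diopters); constants as published in [28] - verify before relying on them.
P_, Q_, C_ = 1.68524, 0.63346, 0.21410

def chromatic_diff(lam_um):
    """D(lambda) = p - q/(lambda - c)."""
    return P_ - Q_ / (lam_um - C_)

def lca(lam_um, ref_um=0.555):
    """Chromatic difference of refraction relative to ref_um (diopters)."""
    return chromatic_diff(lam_um) - chromatic_diff(ref_um)

def wavelength_in_focus(lens_power_d, ref_um=0.555):
    """Adding a lens of power P shifts the refractive error by -P at every
    wavelength (positive lens -> myopic shift, per the text), so the new
    wavelength in focus solves lca(lambda) - P = 0.  Solved by bisection."""
    f = lambda lam: lca(lam, ref_um) - lens_power_d
    lo, hi = 0.40, 0.90
    for _ in range(60):
        mid = 0.5 * (lo + hi)
        if f(lo) * f(mid) <= 0.0:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)
```

With a −0.25-D lens, the zero crossing moves from 555 nm to roughly 515 nm, which matches the example described for Figure 13.10.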
This shifting of the curve changes the balance between the state of focus and the relative luminance of each wavelength component of polychromatic light. For example, shifting the curve upward reduces the amount of defocus in the shorter wavelengths but increases the amount of defocus in the longer wavelengths. Whether this yields a net gain of image quality depends strongly on the luminance spectrum of the source. For this example, a blue source would benefit from a negative lens but a red source would not. In theory, the best lens is that which optimizes the polychromatic PSF according to some metric of optical quality. In the example of Figure 13.10, the eye is optimally focused for 555-nm monochromatic light according to some monochromatic metric. The LCA curve of the Indiana Eye model is passed through this point to quantify the amount of hyperopia expected at longer wavelengths and the amount of myopia expected at shorter wavelengths. In white light, the optimum PSF (according to some polychromatic metric of image quality) occurs when this particular eye views through an additional spherical lens of power −0.25 D. By comparison, when either a +0.25- or a −0.75-D lens is added, the PSF deteriorates markedly. Notice that when the negative lens is introduced to optimize the polychromatic PSF, the
FIGURE 13.10 Polychromatic refraction shifts the longitudinal chromatic aberration function vertically. If the eye is emmetropic (i.e., refractive error = 0) at some reference wavelength (555 nm in this example), then the same eye will appear to be myopic (i.e., refractive error < 0) when viewing through a positive lens. At the same time, the wavelength in focus (i.e., zero crossing) will shift to a longer wavelength. Conversely, the eye will appear to be hyperopic when viewing through a negative lens and the wavelength in focus will shift to a relatively short wavelength. Thus the lens value (−0.25 D in this example) that optimizes retinal image quality for polychromatic light (according to some polychromatic metric) corresponds to a unique wavelength in focus (515 nm in this example) when the eye is well focused for polychromatic light emitted by a distant object.
eye is no longer well focused for 555 nm. Instead, 515 nm becomes the wavelength in focus (according to the chosen metric) when white light is optimally focused for this eye.

13.4.1 Polychromatic Wavefront Metrics
The wave aberration function is a monochromatic concept. If a source emits polychromatic light, then aberration maps for each wavelength are treated separately because lights of different wavelengths are mutually incoherent and do not interfere. For this reason, metrics of wavefront quality do not
generalize easily to the case of polychromatic light. This lack of generality is a major limitation of virtual refraction based on wavefront quality. One possible approach, which would require justification, is to compute the weighted average of monochromatic metric values computed for a series of wavelengths:

Metric_poly = ∫ S(λ) Metric(λ) dλ    (13.36)
where the weighting function S(λ) is the luminance spectrum of the source.

13.4.2 Polychromatic Point Image Metrics
The luminance component of a polychromatic point spread function, PSF_poly, is a weighted sum of the monochromatic point spread functions, psf(x, y, λ):

PSF_poly = ∫ S(λ) psf(x, y, λ) dλ    (13.37)
where the weighting function S(λ) is the luminance spectrum of the source. Given this definition, PSF_poly may be substituted for PSF in any of the equations given in Section 13.3.2.2 to produce new, polychromatic metrics of image quality. In addition to these luminance metrics of image quality, other metrics can be devised to capture the changes in color appearance of the image caused by ocular aberrations. For example, the chromaticity coordinates of a point source may be compared to the chromaticity coordinates of each point in the retinal PSF and metrics devised to summarize the differences between image and object. Evaluation of Eq. (13.37) for a discrete series of wavelengths requires recalculating the PSF for each wavelength. This can become prohibitively time consuming, especially in a virtual refraction paradigm in which the calculations have to be repeated for a variety of added lens powers. A useful simplification in this case is to neglect the scaling of the diffraction-limited PSF with wavelength. Under this assumption, one may precompute a sequence of defocused PSFs for a given wave aberration map and reuse each one for every combination of through-focus lens power and longitudinal chromatic aberration that produces the same net defocus. Another useful simplification is to assume that all of the Zernike coefficients except defocus (c₂⁰) are independent of wavelength.

13.4.3 Polychromatic Grating Image Metrics
Given the polychromatic PSF defined above in Eq. (13.37), a polychromatic optical transfer function OTF_poly may be computed as the Fourier transform of the PSF_poly. Substituting this new function for the OTF and its magnitude for the MTF in any of the equations given in Section 13.3.2.3 will produce new metrics of polychromatic image quality defined in the frequency domain.
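A sketch of Eq. (13.37) and the OTF_poly just described, using the simplifications suggested in Section 13.4.2 (higher-order coefficients independent of wavelength; PSF grid scaling with wavelength neglected). The function names and the diopters-to-c₂⁰ conversion are our assumptions:

```python
import numpy as np

def mono_psf(w_um, mask, lam_um):
    """Monochromatic PSF from a wave aberration map (um), unit volume."""
    pupil = mask * np.exp(2j * np.pi * w_um / lam_um)
    psf = np.abs(np.fft.fftshift(np.fft.fft2(pupil)))**2
    return psf / psf.sum()

def poly_psf(w_um, mask, rho2, lams_um, spectrum, defocus_d, r_mm=3.0):
    """Eq. (13.37): luminance-weighted sum of monochromatic PSFs.

    spectrum samples S(lambda); defocus_d gives the chromatic defocus
    (diopters) at each wavelength, e.g. from an LCA model.
    """
    total = np.zeros(mask.shape)
    for lam, s, dd in zip(lams_um, spectrum, defocus_d):
        c20 = -dd * r_mm**2 / (4.0 * np.sqrt(3.0))   # diopters -> um of Z(2,0)
        w = w_um + c20 * np.sqrt(3.0) * (2.0 * rho2 - 1.0) * mask
        total += s * mono_psf(w, mask, lam)
    psf = total / np.sum(spectrum)
    otf = np.fft.fft2(np.fft.ifftshift(psf))   # OTF_poly; |OTF_poly| = MTF_poly
    return psf, otf
```

The returned OTF_poly (and its magnitude, MTF_poly) can be dropped into the frequency-domain metrics of Section 13.3.2.3 unchanged.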
13.5 EXPERIMENTAL EVALUATION OF PROPOSED REFRACTION METHODS

The metrics of optical quality defined above have many potential uses, only one of which is to determine the refractive error of the eye. Which method works best for any particular task can only be determined empirically. To judge the success of an objective method of refraction requires a "gold standard" for comparison. The most clinically relevant choice is a so-called subjective refraction in which the clinician adjusts the spherical and astigmatic power of correcting lenses to maximize the patient's visual acuity. Acuity is quantified by the dimensions of the smallest letters a patient is able to read correctly on a letter chart illuminated by white light. Using this gold standard of subjective clinical refraction, several recent experimental evaluations of the refraction methods described above are summarized below.

13.5.1 Monochromatic Predictions
In the Indiana Aberration Study [30], subjective refractions were performed to the nearest 0.25 D on 200 normal, healthy eyes from 100 subjects using the conventional, hyperfocal refraction procedure outlined in Sections 13.2.2 and 13.2.3. Accommodation was paralyzed with one drop of 0.5% cyclopentolate during the refraction. The refractive correction was taken to be that spherocylindrical lens combination that optimally corrected astigmatism and located the hyperfocal point of the corrected eye at optical infinity. This prescribed refraction was then implemented with trial lenses and worn by the subject for subsequent measurements of the eye's wave aberrations (λ = 633 nm). This experimental design emphasized the effects of higher order aberrations by minimizing the presence of uncorrected second-order aberrations. Since all eyes were optimally corrected during aberrometry (according to the psychophysical criterion of maximum visual acuity), the predicted refraction computed from the aberration map was PSE = PJ0 = PJ45 = 0. The level of success achieved by the various methods described above was judged on the basis of precision and accuracy at matching these predictions. Accuracy in this context is defined as the dioptric difference between the population mean refraction and the prediction from virtual refraction based on monochromatic metrics of optical quality. Precision is a measure of the variability in results and is defined for PSE by the standard deviation of the population values. For the astigmatic components of refraction, precision was defined as the geometric mean of the major and minor axes of the 95% confidence ellipse computed for the bivariate distribution of PJ0 and PJ45. The two methods for fitting the aberration map with an equivalent quadratic surface gave strikingly different results. The least-squares method and Eq. (13.2) predicted a mean spherical refractive error of PSE = −3/8 D.
In other words, this method predicted the eyes were, on average, myopic when in fact
they were well corrected. To the contrary, the method based on paraxial curvature matching and Eq. (13.3) predicted an average refractive error close to zero for our population. The mean error for predicting astigmatic errors was less than 0.1 D and precision was less than 0.4 D by both methods. The accuracy of the 31 methods for predicting the spherical component of refraction based on metrics of PSF and OTF quality varied widely from −0.50 to +0.25 D. A rank ordering of the accuracy of all 33 methods (2 based on wavefront fitting and 31 based on optical quality) indicated that paraxial curvature matching was the most accurate method, closely followed by maximizing the wavefront quality metrics PFWc and PFCt. However, these results should not be taken as definitive for a variety of reasons [13], the most important of which is that conventional hyperfocal refractions are biased in the sense described in Section 13.2.3. For this reason, we anticipate that it will be easier to accurately predict the result of subjective refractions designed to optimize image quality for distant objects by focusing the retina at infinity rather than the hyperfocal point. A different way to judge the success of the various methods described above for converting wave aberrations into refractive errors is to compare the visual performance of the patient when viewing through the lenses prescribed by the different methods. This is the approach taken by Cheng et al. [31] and by Marsack et al. [32] in systematic studies of the change in visual acuity produced when selected, higher order aberrations are introduced into an eye. The experimental design of the Cheng study was somewhat simpler in that monochromatic aberrations were used to predict monochromatic visual performance, whereas Marsack used monochromatic aberrations to predict polychromatic performance.
Nevertheless, both studies concluded that changes in visual acuity are most accurately predicted by the wavefront quality metric PFSt and by the image quality metric VSOTF. Furthermore, both studies concluded that three of the least accurate predictors were RMSw, HWHH, and VOTF. In addition, the Cheng study demonstrated that, as expected, those metrics that accurately predicted changes in visual acuity also predicted the lens power that maximized acuity in a through-focus experiment. This was an important result because it established experimentally the anticipated link between variations in monochromatic acuity and monochromatic refractive error.

13.5.2 Polychromatic Predictions
Ultimately, the goal is to use a wave aberration map to predict the lens prescription that will optimize retinal image quality and visual performance for everyday objects emitting polychromatic light. Unfortunately, polychromatic light introduces several new factors that must be taken into account in the virtual refraction procedure. First, there is the need for an accurate optical model of the eye’s chromatic aberration in order to compute polychromatic
metrics of image quality. Although many studies have demonstrated a remarkable consistency between eyes in the longitudinal (focusing) aspect of chromatic aberration [28], significant amounts of individual variation in the transverse aspect of chromatic aberration are known to exist [33]. Thus the development of polychromatic wavefront aberrometers might be required to take account of individual variation in ocular chromatic aberration [27]. Such technology may help determine the wavelength that is in focus when the eye is optimally focused for polychromatic light. Wavelength in focus is critical for modeling polychromatic images because it determines how much defocus is present for all other wavelengths present in the source. The luminance spectrum of the source is another important variable that can have a significant impact on polychromatic virtual refractions since it acts as a weighting function for computing metrics of optical quality in Eqs. (13.36) and (13.37). Although definitive results have not yet been published, this is an active area of research that should yield useful results in the near future.
13.5.3 Conclusions
The various methods for objective refraction described in this chapter are able to predict the outcome of subjective refraction with various degrees of accuracy and precision. The majority of the variability between methods may be attributed to the spherical component of refraction, PSE, rather than the astigmatic component. This suggests that uncertainty regarding the wavelength in focus when the eye is viewing a polychromatic target is a major limiting factor in evaluating the various methods. Recent experiments using monochromatic light suggest that the wavelength in focus for a typical white-light source is approximately 570 nm for most subjects [34]. The full implementation of the polychromatic metrics of image quality described above should provide a sound basis for interpreting these experimental results. Predicting the results of conventional, hyperfocal refraction is particularly challenging because it involves not only the optimum correcting lens but also the eye's functional depth of focus. Thus, computational methods are also required that can identify the depth of focus of an eye through wavefront analysis. The experimental literature on depth of focus suggests that individual variability and task dependence will be major factors to be addressed by these computational methods. Variability in subjective refraction, the gold standard used to judge the accuracy of predictions, is another likely source of disagreement between objective and subjective methods of refraction. If such variability makes the current gold standard a moving target, then it is conceivable that wavefront-based methods of objective refraction will become the preferred gold standard of the future.

Acknowledgment

Support for the writing of this chapter and the experiments reported therein was provided by NIH/NEI grant R01 EY05109.
REFERENCES

1. Krueger RR, Applegate RA, MacRae SM, eds. Wavefront Customized Visual Correction: The Quest for Super Vision II. Thorofare, NJ: SLACK, 2004.
2. MacRae SM, Krueger RR, Applegate RA, eds. Customized Corneal Ablation: The Quest for Super Vision. Thorofare, NJ: SLACK, 2001.
3. Thibos LN, Wheeler W, Horner DG. Power Vectors: An Application of Fourier Analysis to the Description and Statistical Analysis of Refractive Error. Optom. Vis. Sci. 1997; 74: 367–375.
4. Raasch TW. Spherocylindrical Refractive Errors and Visual Acuity. Optom. Vis. Sci. 1995; 72: 272–275.
5. Schwendeman FJ, Ogden BB, Horner DG, Thibos LN. Effect of Sphero-cylinder Blur on Visual Acuity. Optom. Vis. Sci. 1997; 74/12S: 180.
6. Thibos LN, Applegate RA, Schwiegerling JT, et al. Standards for Reporting the Optical Aberrations of Eyes. In: Lakshminarayanan V, ed. OSA Trends in Optics and Photonics, Vision Science and Its Applications, Vol. 35. Washington, DC: Optical Society of America, 2000, pp. 232–244.
7. ANSI. American National Standard for Ophthalmics—Methods for Reporting Optical Aberrations of Eyes. ANSI Z80.28-2004. Merrifield, VA: Optical Laboratories Association, 2004.
8. Williams DR, Applegate RA, Thibos LN. Metrics to Predict the Subjective Impact of the Eye's Wave Aberration. In: Krueger RR, Applegate RA, MacRae SM, eds. Wavefront Customized Visual Correction: The Quest for Super Vision II. Thorofare, NJ: SLACK, 2004, pp. 78–84.
9. Ciuffreda KJ. Accommodation, the Pupil and Presbyopia. In: Benjamin WJ, ed. Borish's Clinical Refraction. Philadelphia: W.B. Saunders, 1998, pp. 77–120.
10. Atchison DA, Smith G. Optics of the Human Eye. Oxford: Butterworth-Heinemann, 2000.
11. Applegate RA, Sarver EJ, Khemsara V. Are All Aberrations Equal? J. Refract. Surg. 2002; 18: S556–S562.
12. Guirao A, Williams DR. A Method to Predict Refractive Errors from Wave Aberration Data. Optom. Vis. Sci. 2003; 80: 36–42.
13. Thibos LN, Hong X, Bradley A, Applegate RA. Accuracy and Precision of Methods to Predict the Results of Subjective Refraction from Monochromatic Wavefront Aberration Maps. J. Vis. 2004; 4: 329–351.
14. Mahajan VN. Aberration Theory Made Simple. In: O'Shea DC, ed. Tutorial Texts in Optical Engineering, Vol. TT6. Bellingham, WA: SPIE Optical Engineering Press, 1991.
15. Howland HC, Howland B. A Subjective Method for the Measurement of Monochromatic Aberrations of the Eye. J. Opt. Soc. Am. 1977; 67: 1508–1518.
16. Corbin JA, Klein SA, van de Pol C. Measuring Effects of Refractive Surgery on Corneas Using Taylor Series Polynomials. In: Rol PO, Joos KM, Manns F, Stuck BE, Belkin M, eds. Ophthalmic Technologies IX. Proceedings of the SPIE. 1999; 3591: 46–52.
17. Bracewell RN. The Fourier Transform and Its Applications, 2nd ed. New York: McGraw-Hill, 1978.
18. Röhler R, Howland HC. Merits of the Gaussian Moment in Judging Optical Line Spread Width—Comment on a Paper by W. N. Charman and J. A. M. Jennings. Vision Res. 1979; 19: 847–849.
19. Charman WN, Jennings JA. Objective Measurements of the Longitudinal Chromatic Aberration of the Human Eye. Vision Res. 1976; 16: 999–1005.
20. Charman WN, Jennings JA. Merits of the Gaussian Moment in Judging Optical Line Spread Widths. Vision Res. 1979; 19: 851–852.
21. Marcos S, Moreno E, Navarro R. The Depth-of-Field of the Human Eye from Objective and Subjective Measurements. Vision Res. 1999; 39: 2039–2049.
22. Campbell FW, Green DG. Optical and Retinal Factors Affecting Visual Resolution. J. Physiol. 1965; 181: 576–593.
23. Thibos LN, Bradley A. Modeling Off-Axis Vision—II: The Effect of Spatial Filtering and Sampling by Retinal Neurons. In: Peli E, ed. Vision Models for Target Detection and Recognition. Singapore: World Scientific, 1995, pp. 338–379.
24. Thibos LN. Calculation of the Influence of Lateral Chromatic Aberration on Image Quality Across the Visual Field. J. Opt. Soc. Am. A. 1987; 4: 1673–1680.
25. Charman N, Olin A. Image Quality Criteria for Aerial Camera Systems. Photogr. Sci. Eng. 1965; 9: 385–397.
26. Mouroulis P. Aberration and Image Quality Representation for Visual Optical Systems. In: Mouroulis P, ed. Visual Instrumentation: Optical Design and Engineering Principles. New York: McGraw-Hill, 1999, pp. 27–68.
27. Marcos S, Burns SA, Moreno-Barriuso E, Navarro R. A New Approach to the Study of Ocular Chromatic Aberrations. Vision Res. 1999; 39: 4309–4323.
28. Thibos LN, Ye M, Zhang X, Bradley A. The Chromatic Eye: A New Reduced-Eye Model of Ocular Chromatic Aberration in Humans. Appl. Opt. 1992; 31: 3594–3600.
29. Thibos LN, Bradley A. Modeling the Refractive and Neuro-sensor Systems of the Eye. In: Mouroulis P, ed. Visual Instrumentation: Optical Design and Engineering Principles. New York: McGraw-Hill, 1999, pp. 101–159.
30. Thibos LN, Hong X, Bradley A, Cheng X. Statistical Variation of Aberration Structure and Image Quality in a Normal Population of Healthy Eyes. J. Opt. Soc. Am. A. 2002; 19: 2329–2348.
31. Cheng X, Bradley A, Thibos LN. Predicting Subjective Judgment of Best Focus with Objective Image Quality Metrics. J. Vis. 2004; 4: 310–321.
32. Marsack JD, Thibos LN, Applegate RA. Metrics of Optical Quality Derived from Wave Aberrations Predict Visual Performance. J. Vis. 2004; 4: 322–328.
33. Rynders MC, Lidkea BA, Chisholm WJ, Thibos LN. Statistical Distribution of Foveal Transverse Chromatic Aberration, Pupil Centration, and Angle psi in a Population of Young Adult Eyes. J. Opt. Soc. Am. A. 1995; 12: 2348–2357.
34. Coe CD, Thibos LN, Bradley A. Psychophysical Determination of the Wavelength of Light That Is Focused by a Polychromatic Subjective Refraction. Invest. Ophthalmol. Vis. Sci. 2005; 46: e-abstract 1188.
CHAPTER FOURTEEN
Visual Psychophysics with Adaptive Optics

JOSEPH L. HARDY and PETER B. DELAHUNT, Posit Science Corporation, San Francisco, California
JOHN S. WERNER, University of California Davis Medical Center, Sacramento, California
Psychophysics is the study of the relations between human performance and physical stimulus variables. For nearly 200 years, visual psychophysicists have worked to quantify these relations for visual tasks and light stimuli. Vision scientists combine psychophysical techniques with measures of anatomy and physiology to gain an understanding of how the visual system processes information. To the extent that adaptive optics (AO) approaches diffraction-limited correction of the optics of the eye, it offers an exciting new tool for addressing fundamental questions about human vision. Bypassing the aberrations of the eye with AO, psychophysicists will be able to present precisely controlled stimuli directly to the visual nervous system. This will help address questions about the relative contributions of optical and neural factors in defining the limits of visual performance, as well as fundamental questions about neural information processing. Additionally, combining physiological and anatomical data from AO-based imaging techniques with measures of visual performance, vision scientists will be able to advance further the understanding of the relation between form and function in the visual nervous system. This chapter is a short introduction to psychophysics and psychophysical methods. A comprehensive treatment of these topics would be impossible in
a single chapter. Instead, key terms, concepts, and techniques for designing and implementing psychophysical experiments are introduced, with an emphasis on information relevant in the context of AO. The first part of the chapter describes the notion of a psychophysical function and introduces a few examples. The second part of the chapter addresses psychophysical methods—experimental techniques for measuring psychophysical functions. Finally, the third part describes some of the equipment and procedures for producing visual stimuli for psychophysical experiments.
14.1 PSYCHOPHYSICAL FUNCTIONS

14.1.1 Contrast Sensitivity Functions
Psychophysics seeks to measure the relations between human performance and physical variables. Such a relation is quantified by a psychophysical function. Figure 14.1 presents an example of a commonly measured psychophysical function from the spatial domain in vision that is important for vision scientists interested in AO, the contrast sensitivity function (CSF). Here, the
[Figure 14.1 near here: log–log plot of contrast sensitivity (0.1 to 1000) versus spatial frequency (1 to 100 cpd), with a Gabor-patch inset.]
FIGURE 14.1 Contrast sensitivity is plotted as a function of spatial frequency. Data points were fitted with a double-exponential function. Inset shows a luminance-varying stimulus defined by a spatial Gabor function (i.e., a sinusoid windowed by a Gaussian). The use of a Gabor function is important for limiting the bandwidth in the spatial frequency domain.
sensitivity to the contrast of a Gabor pattern varying in luminance (radiance¹ of a light source filtered by the human spectral efficiency function) is plotted as a function of spatial frequency. Gabor patterns are one-dimensional sinusoidal gratings weighted by a two-dimensional Gaussian function. Sensitivity refers to a characteristic of the individual, but it is defined by a physical variable, the inverse of the contrast required to produce a criterion level of performance, for example, 75% correct on a detection task. There are several ways to define contrast, V, but here it is defined as:

V_M = (L_max − L_min) / (L_max + L_min)    (14.1)
where L_max is the maximum luminance in the pattern and L_min is the minimum luminance in the pattern. This is known as the Michelson contrast, and it can vary between 0.0 and 1.0. The Michelson contrast is most appropriate when discussing the contrast of patterns that contain both luminance increments and decrements in equal proportion and vary from a background of a space-averaged mean luminance value, as is the case with sinusoidally varying and Gabor patterns. An alternate definition of contrast, following Weber, relates the luminance of a stimulus (L_stim) to the luminance of the background (L_back):

V_W = (L_stim − L_back) / L_back    (14.2)
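Both contrast definitions translate directly to code; a trivial sketch (function names are ours):

```python
def michelson_contrast(l_max: float, l_min: float) -> float:
    """Eq. (14.1): (Lmax - Lmin)/(Lmax + Lmin); 0.0-1.0 for nonnegative luminances."""
    return (l_max - l_min) / (l_max + l_min)

def weber_contrast(l_stim: float, l_back: float) -> float:
    """Eq. (14.2): (Lstim - Lback)/Lback; signed and unbounded."""
    return (l_stim - l_back) / l_back
```

A full-contrast grating (L_min = 0) gives V_M = 1 regardless of mean luminance, while a decrement to half the background gives V_W = −0.5.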
This definition of contrast can take any value. This metric is more appropriate for stimuli that are luminance increments or decrements on a background where the maximum and minimum luminance values are not uniformly distributed about a space-averaged mean luminance and where large positive or negative values are often meaningful. The other physical variable in the CSF is spatial frequency. The spatial frequency of a pattern is usually represented by the number of cycles per degree of visual angle (c/deg). Visual angle (θ_vis) is defined as:

θ_vis = 2 tan⁻¹(a / 2l)    (14.3)
where a is the length of the stimulus along an axis orthogonal to the direction of viewing and l is the distance from the stimulus to the nodal point of the eye. The distance between the front surface of the cornea and the nodal point of the eye is approximately 7 mm for an average adult. Retinal image size can be calculated from visual angle if the focal length of the eye is known. A typical focal length for the human eye is 1/60 m.

¹ Power per unit projected area per unit solid angle reflected from a surface toward the eye, expressed in watts per meter squared per steradian (W/m²/sr).
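Equation (14.3), together with the small-angle retinal scaling implied by the 1/60-m focal length, can be sketched as follows (the helper names are our assumptions):

```python
import math

def visual_angle_deg(size_m, distance_m):
    """Eq. (14.3): visual angle subtended by a stimulus of extent size_m
    viewed from distance_m (to the eye's nodal point), in degrees."""
    return 2.0 * math.degrees(math.atan(size_m / (2.0 * distance_m)))

def retinal_size_m(angle_deg, focal_length_m=1.0 / 60.0):
    """Retinal image size for a given visual angle, using the typical
    1/60-m focal length quoted in the text."""
    return 2.0 * focal_length_m * math.tan(math.radians(angle_deg) / 2.0)
```

A 1-cm target at 57.3 cm subtends very nearly 1 degree, which maps to a retinal image of roughly 0.29 mm.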
VISUAL PSYCHOPHYSICS WITH ADAPTIVE OPTICS
As is often the case in contrast sensitivity measurements, the patterns used to generate the CSF in Figure 14.1 were Gabor patches. The spatial luminance profile of a Gabor pattern is

L(x, y) = L_0 {1 + V_M exp[−(x²/σ_x² + y²/σ_y²)] cos(2πfx + φ)}    (14.4)

where
L_0 = mean luminance
V_M = grating contrast
σ_x = horizontal standard deviation of the Gaussian window
σ_y = vertical standard deviation of the Gaussian window
f = spatial frequency
φ = grating phase

These types of patterns are used regularly in studies of spatial vision because the "soft edges" produced by the Gaussian window eliminate the high spatial frequencies present in sinusoidal gratings that are cut off abruptly. If a small, sinusoidally varying grating with hard edges (rectangular-wave window) is presented in a contrast sensitivity experiment, the observer may be more sensitive to the frequency components produced by the edges than to the frequency of the sinusoid being tested. This could lead to an overestimation of the sensitivity of the observer to certain spatial frequencies. Spatial and temporal vision are often characterized using a CSF such as shown in Figure 14.1 because it can be useful in predicting the sensitivity of the visual system to more complex patterns when such patterns are represented by their Fourier decomposition [1, 2]. This approach has proven useful, notwithstanding the rather limited range over which the assumptions of linear systems analysis are valid for the human visual system. Various spatial and temporal variables need to be considered when discussing the CSF, and contrast sensitivity can be thought of as a family of functions rather than a single characteristic of the visual system. For example, contrast sensitivity varies as a function of the space-average luminance (we are generally more sensitive to contrast at higher light levels). Contrast sensitivity also depends on the temporal frequency of stimulus motion or flicker, as well as on the chromatic properties of the stimulus. In addition, many individuals are more sensitive to vertical and horizontal gratings than to oblique (45° or 135° from horizontal) gratings of the same frequency. This phenomenon is referred to as the oblique effect [3].
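Equation (14.4) can be evaluated directly; a minimal sketch for a single point of the pattern (parameter names are ours):

```python
import math


def gabor_luminance(x, y, l0, contrast, sigma_x, sigma_y, freq, phase=0.0):
    """Luminance of a Gabor patch at (x, y), following Eq. (14.4).

    x and y are in degrees of visual angle, freq in c/deg, phase in radians.
    """
    envelope = math.exp(-(x ** 2 / sigma_x ** 2 + y ** 2 / sigma_y ** 2))
    carrier = math.cos(2 * math.pi * freq * x + phase)
    return l0 * (1 + contrast * envelope * carrier)
```

At the center of the patch (x = y = 0, phase 0) the luminance is L_0(1 + V_M); far from the center, the Gaussian window drives the pattern back to the mean luminance L_0, producing the "soft edges" described above.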
The region of retina tested is also a critical variable for contrast sensitivity. Sensitivity is highest in the fovea, especially for high spatial frequencies [4]. In addition to these stimulus variables, observer variables need to be taken into account. There is a great deal of individual variability in sensitivity to contrast patterns, even among healthy people of approximately the same age. Both optical [5] and neural [6] factors are known to contribute to this variability. Additional variability can be expected if the observer pool includes
participants with diseases known to reduce contrast sensitivity [7]. Finally, scotopic (rod-mediated vision) and photopic (cone-mediated vision) CSFs vary with observer age [8]. The CSF is particularly important in AO applications due in part to its close relation to the optical modulation transfer function (MTF) of the eye. The optical MTF of the eye refers to the proportion of the contrast present in a stimulus that is preserved in the retinal image, as a function of spatial frequency. The CSF of the individual observer can be thought of as a product of the optical MTF and an MTF due to neural filtering of spatial information by the visual system. Under a wide variety of photopic conditions, the human CSF shows peak sensitivity at intermediate spatial frequencies, usually around 2 to 6 c/deg of visual angle, with sensitivity falling off rapidly at higher and lower frequencies [1]. Low-spatial-frequency attenuation is caused by neural factors. Specifically, lateral inhibition in the visual pathways is thought to be responsible for this reduction in sensitivity to low-spatial-frequency patterns [2]. Under normal viewing conditions, reduced sensitivity to high spatial frequencies in the human CSF is due to both optical and neural factors [9]. Image blur due to higher order monochromatic aberrations will reduce sensitivity at high spatial frequencies. As the spatial frequency of the image increases, neural processing efficiency decreases somewhat, further reducing contrast sensitivity. The neural sampling properties of the visual system seem to be fairly well matched to the optical quality of the normal eye [10]. In other words, the Nyquist sampling limit² of the cone mosaic in the fovea is fairly closely matched to the highest spatial frequencies passed by the eye's optics.
Thus, while contrast sensitivity at high spatial frequencies may be improved through adaptive optics [11], this does not necessarily imply that this additional information can be used effectively by the visual system to improve visual function under natural viewing conditions. It is possible, for example, that higher-spatial-frequency information is detected by mechanisms that are optimally tuned to lower spatial frequencies. If the visual system interprets this signal as being produced by image elements at these lower frequencies, the result is aliasing. Under these circumstances, increasing the contrast of the highest spatial frequencies available in the retinal image would not be a benefit to vision but would act as noise. However, there is some evidence that the visual nervous system is equipped to process spatial frequencies higher than those normally passed by the eye's optics [12]. AO can be used to address such fundamental issues in vision science. Of particular interest in defining the limits of spatial vision is the resolution limit. This can be estimated by the spatial frequency at a sensitivity of 0.0 using a linear extrapolation from the high-frequency limb of the CSF. More commonly in clinical applications, the resolution limit is described by visual

2. The Nyquist sampling theorem states that the number of uniformly spaced samples needed to specify a waveform of a particular frequency is two per cycle.
acuity measured with an eye chart [13]. This measure of acuity is often referred to as Snellen acuity [14]. These charts are used to measure the smallest gap in a letter that can be detected at a specified distance. Resolution, or visual acuity, VA, is given by:

VA = l′/l    (14.5)
where l′ is the standard viewing distance (20 ft in the United States or 6 m in Europe) and l is the distance at which the smallest identifiable test stimulus subtends a visual angle of 1′ (1/60 of a degree). For example, if the smallest line on the eye chart a patient can read has letters with gaps that subtend 2′ of visual angle at 20 ft (equivalently, 1′ at 40 ft), the patient's visual acuity is 20/40. More recently, clinicians have begun using a different, but closely related, system for defining visual acuity, called the logarithm of the minimal angle of resolution, or logMAR. As the name suggests, the notation in this system refers to the logarithm of the visual angle (in minutes of arc) of the smallest identifiable features. A logMAR value of 0.0 is equivalent to 20/20 Snellen acuity.
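Since the minimum angle of resolution in arcminutes is the Snellen denominator divided by the numerator, the Snellen-to-logMAR conversion follows directly from the definitions; a minimal sketch (the function name is ours):

```python
import math


def snellen_to_logmar(numerator, denominator):
    """logMAR from a Snellen fraction such as 20/40.

    The gap of the smallest readable letter subtends (denominator / numerator)
    arcmin, and logMAR is the base-10 log of that angle.
    """
    return math.log10(denominator / numerator)
```

For example, 20/20 maps to logMAR 0.0 and 20/40 to about 0.30.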
14.1.2 Spectral Efficiency Functions
Visual performance depends on the number of quanta received from a stimulus, but because we are not equally sensitive to all wavelengths of light, and this sensitivity varies between photopic and scotopic levels of illumination, purely physical metrics do little to describe the effectiveness of a stimulus for vision. Instead, the radiance of a stimulus filtered by the spectral efficiency of the visual system is used. This quantity is known as luminance. The International Commission on Illumination (Commission Internationale de l'Eclairage, CIE) has developed a system for specifying luminance according to the spectral sensitivity of the human observer. The spectral sensitivity function used by the CIE is called the standard observer's visibility function, or V_λ when specifying lights under photopic conditions and V′_λ when specifying lights viewed under scotopic conditions. Luminance is thus defined as:

L_v = K ∫ L_λ V_λ dλ    (14.6)
where L_λ is the radiance contained in the wavelength interval dλ and V_λ is the relative photopic spectral sensitivity function for the standard observer of the CIE. For scotopic conditions the same formula applies except that V′_λ is used instead of V_λ. These spectral efficiency functions are shown by smooth curves in Figure 14.2. The functions V_λ and V′_λ are tabulated by Wyszecki and Stiles [15] and can be downloaded from the website of the Colour and Vision Research Laboratory (CVRL) at University College London (http://cvrl.ioo.ucl.ac.uk/). The K in Eq. (14.6) is related to the units in which luminance is specified, the most common in current usage being the candela per square meter (cd/m²), sometimes referred to as the nit. For these units, the value of K is 683 for photopic luminance or 1700 for scotopic luminance. In the literature one may find luminance specified in different units by different investigators; conversion factors are provided by Wyszecki and Stiles [15].

FIGURE 14.2 The solid and dashed curves show the log relative sensitivity (quanta at the cornea) of the CIE standard observer under scotopic (CIE V′_λ) and photopic (CIE V_λ) conditions, respectively, plotted against wavelength (400 to 700 nm). The CIE curves are often plotted on an energy basis but are shown here on a quantal basis. These curves are tabled as normalized values having a peak of 1.0, but when normalized to measured data as shown here, one can see the absolute sensitivity differences for scotopic and photopic vision more clearly. The filled symbols represent detection data obtained following 30 min of dark adaptation for stimuli modulated at 2 Hz. The open symbols represent data from the same observer using heterochromatic flicker photometry (minimizing flicker by adjusting the radiance of a monochromatic light presented in 14-Hz counterphase to a 3.3 log troland, broadband standard) with a 1° diameter foveal stimulus. (After Werner [16].)

There are a few points to note about luminance specifications. First, there is no subjectivity inherent in the measurement of luminance. One simply measures the radiance at each wavelength and multiplies this value by the relative sensitivity of the standard observer at that wavelength. These products are summed across wavelengths for broadband stimuli. Alternatively, one may directly measure luminance with a photometer—a meter that has been calibrated to have the spectral sensitivity of the CIE standard observer. Second, while there is no subjectivity in the measurement of luminance, it was the original intent of the CIE to develop a metric that would be closely related to the brightness of a visual stimulus. Brightness, however, depends on many variables such as the preceding or surrounding illumination, and these variables are not taken into account in specifying luminance. Thus, the luminance of a stimulus is often of limited value in specifying brightness. The term luminance should be reserved for the specification of radiances, and the term brightness should be reserved for a description of the appearance of a stimulus. Also, any individual may have a spectral sensitivity function that differs from that of the CIE standard observer. For this reason, precise studies often require stimuli in which the luminosity function is measured individually. This can be particularly important for studies with older observers, for whom spectral efficiency is much lower than the CIE observer at short wavelengths [17]. Finally, while visual stimuli are often specified in terms of luminance, the illuminance of a stimulus on the retina is the critical value for visual performance. Retinal illuminance depends on the eye's pupil size,³ which varies with light level for a given observer and across observers for a given light level [18].
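In practice the integral in Eq. (14.6) is evaluated numerically, summing radiance weighted by V_λ across wavelength. The sketch below uses a crude Gaussian stand-in for V_λ purely for illustration; real calculations should use the tabulated CVRL values:

```python
import math


def v_lambda_approx(wl_nm):
    """Illustrative Gaussian stand-in for the photopic V_lambda curve (peak ~555 nm).

    This is NOT the CIE table; substitute the tabulated values for real work.
    """
    return math.exp(-(((wl_nm - 555.0) / 45.0) ** 2))


def luminance(radiances, wavelengths_nm, k=683.0):
    """Trapezoidal approximation of Eq. (14.6): Lv = K * integral of L_lambda * V_lambda."""
    weighted = [r * v_lambda_approx(w) for r, w in zip(radiances, wavelengths_nm)]
    area = sum(
        0.5 * (weighted[i] + weighted[i + 1]) * (wavelengths_nm[i + 1] - wavelengths_nm[i])
        for i in range(len(weighted) - 1)
    )
    return k * area
```

Equal radiance near 555 nm yields far more luminance than the same radiance near 650 nm, reflecting the shape of the photopic sensitivity curve.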
14.2 PSYCHOPHYSICAL METHODS
When measuring psychophysical functions, such as the CSF, many different techniques can be employed, depending on the goals and context of the research program. Some of these techniques, such as magnitude estimation and hue scaling, are designed to quantify the perceptual experience created by a visual stimulus, such as its brightness or hue. These issues may become relevant in the context of AO in the future when assessing the more subjective consequences of improved retinal image quality. However, in the short term, the more critical questions for the AO researcher will probably concern the extent to which the limits of visual performance can be extended through higher order wavefront correction. So, rather than asking how a stimulus appears, AO researchers using psychophysical methods will often be asking whether the stimulus can be seen, discriminated from other stimuli, or identified. This section of the chapter will address some of the theoretical and procedural issues associated with answering these questions.
14.2.1 Threshold
The limiting value of a physical variable (e.g., number of quanta, contrast) for a criterion level of performance on a psychophysical task is referred to as the threshold. The inverse of threshold is sensitivity. Historically, thresholds were thought to be discrete barriers to perception. The threshold was considered either the stimulus strength above which the stimulus could always be detected (absolute threshold) or the difference between two stimuli beyond which they were always distinguishable from one another (difference threshold). However, this notion of threshold does not take into account the inherent variability in physical stimuli and the physiological mechanisms transducing these stimuli. Due to external and internal variability, a stimulus of a particular intensity may be detectable on a given experimental trial and undetectable on another seemingly identical trial. To complicate matters further, when observers are forced to make a choice about the presence or absence of low-intensity stimuli that they claim to be invisible, they will often display above-chance performance. Thus, rather than discussing thresholds as discrete barriers above which a stimulus is just detectable or for which two stimuli are just noticeably different, we consider thresholds in a statistical sense.

3. When apparent pupil area (natural or due to an artificial pupil) is known, stimuli can be specified in terms of retinal illuminance. The most common measure is the troland (luminance in cd/m² × the area of the eye's pupil in mm²).
14.2.2 Signal Detection Theory
Our modern understanding of a sensory threshold has been greatly influenced by signal detection theory (SDT). This theory attempts to explain the variability in measures of detection in a systematic and quantitative fashion. In SDT, thresholds are assumed to depend on two independent processes: a sensory process and a decision process. According to SDT, the sensory process is noisy. As a consequence, the output of this process in response to a given stimulus input is variable. Critically, the sensory process also outputs a variable spontaneous response due to noise alone when no stimulus is presented. The task of the decision process is to distinguish between sensory activity produced in response to a stimulus (signal plus noise) and sensory activity produced in response to no stimulus (noise alone). However, in the SDT framework, the decision process only has access to a single output value from the sensory process, and thus a priori cannot distinguish between signal-related and noise-related activity. SDT presents a theoretical framework for understanding how sensory systems might be able to solve this problem. To understand SDT, it is helpful to consider a detection experiment. Imagine that in this experiment, on a given trial, a target stimulus (e.g., a low-contrast grating) is either presented or not. The observer's task is simply to state "yes" if it is believed that the stimulus has been presented or "no" if it is believed that the stimulus has not been presented (a yes/no procedure). There are four possible results on any given trial. One possible result is that the observer says yes and the stimulus was presented on that trial. This is called a hit. Another possibility is that the observer responds yes and the stimulus was not presented. This is called a false alarm. A correct rejection occurs when the stimulus was not presented and the observer indicates no. Finally,
372
VISUAL PSYCHOPHYSICS WITH ADAPTIVE OPTICS
when the stimulus was presented and the observer says no, this is called a miss (Fig. 14.3). Consider the trials on which no stimulus is presented, and the observer is simply viewing a blank field. During these trials, according to SDT, the sensory process will be generating a response that is entirely due to noise. Since this response is due to random activity in the sensory system, the output produced by noise alone will vary in strength from trial to trial. If we assume that this random noise response is generated by many independent sources, then it may be represented by a Gaussian probability density function, as shown in Figure 14.4. This distribution of response strengths is called the noise distribution. Now consider the probability of a given response strength on trials when a stimulus is presented. If the effects of the signal on the system are independent of the noise effects, and the response strengths from these sources are additive, then the resulting signal-plus-noise probability density function will also be a Gaussian function with similar variance. The signal-plus-noise distribution will be shifted to the right along the sensory strength axis relative to the noise-alone distribution (Fig. 14.4). How far the signal-plus-noise distribution is shifted will depend on the effectiveness of the stimulus for producing a sensory response. In SDT, the same output from the sensory system may be expected during some trials in which no stimulus is presented—that is, a noise-alone trial— and some trials in which a stimulus is presented—that is, a signal-plus-noise trial. If the same output can be produced in these two kinds of trials, how does the observer decide to say yes or no on a given trial? Notice that the likelihood that a given sensory response strength was generated by signal plus noise, rather than noise alone, increases as the signal strength increases. 
A sensible strategy would be to say yes when the sensory strength is above a certain value and to say no when it is below that value. This value is termed the criterion. The dashed vertical line in Figure 14.4 represents a criterion
FIGURE 14.3 The possible outcomes from a trial in a detection experiment utilizing a yes/no procedure:

                     Respond "Yes"    Respond "No"
Stimulus Present     Hit              Miss
Stimulus Absent      False Alarm      Correct Rejection
FIGURE 14.4 Theoretical sensory response strength probability density functions for noise-alone and signal-plus-noise trials. According to SDT, an observer will report that a stimulus is detected only when the sensory strength is above criterion (represented by the vertical dashed line). The portion of the noise distribution filled with lines oriented to the right represents false alarms. The portion of the signal-plus-noise distribution filled with lines oriented to the left represents hits. d′ is defined in Eq. (14.7).
that a hypothetical observer in our experiment might select. During some trials in which the stimulus was absent (noise-alone), the sensory output will be higher than the criterion. Thus, an observer using this criterion will sometimes say that the target stimulus was present when it was not—a false alarm. In other trials, when a stimulus was presented (signal-plus-noise), the sensory output will be lower than the criterion, and the observer will fail to detect the stimulus—a miss. Suppose that the observer wished to make fewer false alarms and therefore more correct rejections. To do this, the observer could adopt a higher criterion (which could be represented by moving the dashed vertical line in Fig. 14.4 to the right) so that the noise-alone outputs would exceed it less frequently. If the observer did this, there would be an increase in the number of trials in which the observer would fail to detect the stimulus when it was presented— hit rates would decrease. In other words, there is a trade-off between hits and false alarms. The relation between hits and false alarms provides a way of measuring the criterion that an observer adopts. When the effect of the signal remains constant, a relatively low frequency of hits and false alarms indicates a high or cautious criterion, whereas a high frequency of hits and false alarms indicates a relatively low or lax criterion.
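The trade-off between hits and false alarms described above can be illustrated with a small Monte Carlo simulation of the equal-variance Gaussian model (all parameter values here are illustrative, not drawn from the text):

```python
import random


def simulate_yes_no(signal_strength=1.0, criterion=0.5, n_per_type=10000, seed=1):
    """Simulate yes/no trials under SDT: unit-variance Gaussian noise, with the
    signal shifting the mean; the observer says "yes" above a fixed criterion.

    Returns (hit_rate, false_alarm_rate).
    """
    rng = random.Random(seed)
    hits = sum(rng.gauss(signal_strength, 1.0) > criterion for _ in range(n_per_type))
    fas = sum(rng.gauss(0.0, 1.0) > criterion for _ in range(n_per_type))
    return hits / n_per_type, fas / n_per_type
```

Raising the criterion in this simulation lowers both the false-alarm rate and the hit rate, exactly the trade-off described in the text.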
One factor affecting the choice of criterion is how willing the observer is to accept risk. Some observers are simply more cautious than others and will thus choose a more stringent criterion. The perceived impact of hits, misses, false alarms, and correct rejections also plays a critical role in determining criterion. For example, if the reward for a hit is significantly greater than the cost of a false alarm, observers will tend to adopt less stringent criteria. An additional important factor affecting criterion is the observer's expectancy about the frequency of trials in which a stimulus is presented. Observers will be more willing to say they saw a stimulus if signal-plus-noise trials are more frequent than noise-alone trials. The observer's sensitivity to a stimulus in an experiment can be described in terms of the distance between the noise-alone distribution and the signal-plus-noise distribution. This sensitivity parameter is called d-prime (d′). Assuming that the distributions are normal, as depicted in Figure 14.4, d′ is the difference between the mean of the signal-plus-noise distribution (μ_s+n) and the mean of the noise distribution (μ_n), divided by the standard deviation of the noise distribution (σ_n):

d′ = (μ_s+n − μ_n) / σ_n    (14.7)
The quantity d′ is a measure of sensitivity, independent of the observer's criterion and expectancies, and it can be calculated from the proportion of hits and false alarms. When d′ = 0, the observer is performing at chance levels. When stimulus strength is reduced, the value of d′ decreases. Correspondingly, with more intense stimuli, detection is easier and d′ increases. Threshold may be defined as the stimulus that produces a particular value of d′, often 1.0. In addition to estimating sensitivity, SDT offers a set of techniques for estimating response bias. This process is beyond the scope of this chapter, but theoretical presentations of SDT in the context of psychophysics are provided by Green and Swets [19] and by Van Trees [20]. Practical user's guides to SDT are provided by Macmillan and Creelman [21] and Wickens [22].
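For yes/no data under the equal-variance Gaussian assumption, d′ reduces to the difference of the z-transformed hit and false-alarm rates; a minimal sketch using the Python standard library:

```python
from statistics import NormalDist


def d_prime(hit_rate, false_alarm_rate):
    """d' from hit and false-alarm proportions: z(H) - z(F), under the
    equal-variance Gaussian model behind Eq. (14.7)."""
    z = NormalDist().inv_cdf
    return z(hit_rate) - z(false_alarm_rate)
```

Chance performance (equal hit and false-alarm rates) gives d′ = 0, while a hit rate of 0.69 against a false-alarm rate of 0.31 gives d′ of roughly 1.0, the value often taken as threshold.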
14.2.3 Detection, Discrimination, and Identification Thresholds
One type of threshold that is frequently measured is the detection threshold. The detection threshold is the stimulus strength necessary to elicit a specific level of performance on a task in which an observer is asked to state if, when, or where a stimulus was presented. The level of performance that corresponds to the threshold will depend on the methods employed. Another important type of threshold is the discrimination threshold. This is the difference along a particular stimulus dimension that is necessary for the observer to correctly differentiate two or more stimuli with a given probability. In some contexts, the discrimination threshold is referred to as a just noticeable difference (jnd). The increase or decrease in stimulus intensity
necessary to detect a difference reliably depends on the initial intensity of the stimulus [23]. Specifically, if I is the intensity of the stimulus, ΔI is the change in intensity necessary to detect the change (the jnd), and K_W is a constant, then

ΔI/I ≈ K_W    (14.8)
This relation is referred to as Weber's law and is a fundamental principle in psychophysics. In this equation, K_W is known as the Weber fraction. Although Weber's law has wide generality, it tends to break down for very low and high stimulus values. A common discrimination task is contrast discrimination. In such a task, the observer is presented with two patterns that differ only in their contrast. The observer may be asked to indicate which of the two patterns is higher or lower in contrast. The contrast discrimination threshold will be the contrast difference necessary to yield a criterion level of performance on this task. The contrast detection measure mentioned above can be thought of as a special case of contrast discrimination where the contrast of one of the patterns is zero. A third type of threshold is the identification threshold. In an identification task, an observer is asked to state which stimulus was presented on a particular trial. A familiar example of such a task is the acuity chart used in an optometrist's office. In this task, the patient is asked to read lines of letters. The letters are made progressively smaller until the patient is no longer able to correctly identify a given proportion of the letters. The smallest line on the chart on which most of the letters can be identified accurately can be thought of as the threshold. This particular identification threshold is referred to as visual acuity.
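Weber's law in Eq. (14.8) predicts that the jnd grows in proportion to the starting intensity; a minimal numerical sketch (the 8% Weber fraction is an illustrative value, not a measured constant):

```python
def jnd(intensity, weber_fraction=0.08):
    """Predicted just-noticeable difference under Weber's law, Eq. (14.8)."""
    return weber_fraction * intensity
```

Doubling the baseline intensity doubles the predicted jnd, so the ratio ΔI/I stays fixed at K_W over the range where the law holds.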
14.2.4 Procedures for Estimating a Threshold
There are several procedures for estimating thresholds, and each method has advantages and disadvantages. In discussing methods for estimating thresholds, contrast detection threshold will be used again as a representative example. However, it should be noted that some variant of each of these methods can be used for discrimination and identification tasks, as well. The quickest way to estimate a threshold is usually the method of adjustment. In this technique, the observer controls the stimulus strength directly. In the case of a contrast detection task, the observer adjusts the contrast of the stimulus until it is just barely visible. When satisfied with the setting, the observer makes some indication to end the trial, often with the press of a button. This procedure is repeated several times, and the mean of all settings is taken as the estimate of threshold. This procedure is intuitive and is generally accomplished reliably and comfortably after some practice. However, the visual sensation that corresponds to “just barely visible” may differ between
observers or even within the same observer under different conditions. In other words, the criterion that is used to decide when a stimulus is just detectable depends on the judgment of the observer and cannot be assessed objectively with this technique. When the observer in the threshold task is naïve to the purposes of the experiment and has no vested interest in the outcome of the study, differences in criteria will contribute to noise in the data and can make comparisons across observers or between conditions difficult to interpret. However, the problem becomes particularly insidious when the observer is also an experimenter on the project. No matter how hard one tries to maintain a constant criterion, the danger is always present that one’s knowledge of project aims can influence threshold adjustments. An alternative method for assessing thresholds is the yes/no procedure (see also Section 14.2.2). Accurate measures of threshold can be obtained using this technique; however, like the method of adjustment, the yes/no procedure is also susceptible to criterion effects. The accuracy and efficiency of this method will depend on the stability of the criterion used by the observer. While this is not an intractable problem, the net result is that a large amount of data is needed to pull apart the effects of criterion from the effects of the underlying sensory response. This concern is of particular importance for psychophysical experiments with AO systems, where the correction presumably has some temporal dependence, and where it would be desirable to test patients who are not experienced with lengthy psychophysical tasks. The N-alternative-forced-choice (NAFC) procedures are relatively efficient, criterion-free techniques for measuring sensory thresholds. In this class of procedure, the observer is given 2 or more response options on a given trial and is obliged to respond even if no stimulus is detected. 
For example, in a temporal 2-alternative-forced-choice (t2AFC) procedure, a stimulus is presented in one of two periods of time, which can be delineated with auditory (e.g., tones) or visual (e.g., the numbers 1 and 2) markers. The observer indicates, often with the press of a button, whether the stimulus appeared in interval 1 or 2. In such a procedure, the observer will tend to guess correctly 50% of the time when the stimulus is so far below threshold that it is never detected (assuming an equal number of presentations in each interval). The particular level of performance that is taken as threshold depends on underlying assumptions about the probability density function that will define the relation between stimulus strength and the probability of the stimulus being detected (as described in the next section). Often, 75% correct is used to define threshold-level performance. While 2AFC procedures are quite common, the larger the N used in an NAFC procedure, the more efficient it will be. This is simply because the correct guessing rate will approach 1/N with large numbers of trials, and correct answers yield more information the lower the guessing rate.
Identification tasks can be analyzed as NAFC procedures. For example, reading letters on an eye chart could be considered a 26-alternative-forced-choice procedure. In this case, the guessing rate would be 1/26 if all letters were equally identifiable and equally probable.
14.2.5 Psychometric Functions
The probability of correctly detecting a stimulus increases smoothly as a function of stimulus intensity, when operating in a stimulus regime that is close to threshold. This relation between task performance and stimulus strength is referred to as the psychometric function (not to be confused with psychophysical functions discussed above). The psychometric function can be understood in terms of the concepts of SDT. Take the example of a t2AFC detection task. Assuming that the observer is paying attention for both intervals and responding based on sensory information alone, the observer responds correctly when the sensory response produced during the stimulus (signal-plus-noise) interval is greater than the sensory response produced during the no-stimulus (noise-alone) interval. When the stimulus strength is low, the signal-plus-noise and noise-alone distributions are highly overlapping. There will be many trials in which the sensory response due to noise alone will be greater than that due to signal plus noise. However, as the signal strength increases, the probability will be much greater that the larger sensory response is due to the signal. Thus, as signal strength increases, the proportion of trials in which the observer makes a correct response increases. Notice that in the 2AFC procedure mentioned here, criterion plays no role. Several distributions model the psychometric function acceptably well. Two of the most commonly used distributions are the cumulative normal function and the Weibull function [24]. The cumulative normal form of the psychometric function is used in signal detection theory based on the assumption that there are multiple independent sources of variability with unknown distributions that are feeding into a unitary sensory process. The Weibull function is used in detection models that assume probability summation among multiple independent detection mechanisms. Figure 14.5 shows psychophysical data from a contrast detection experiment.
The x axis is the log10 of the stimulus (Michelson) contrast, and the y axis is the proportion of correct responses. In this task, the observer was instructed to indicate whether a Gabor patch was presented on the left or right of a central fixation mark (spatial 2AFC). The data are fitted with a cumulative normal distribution using a maximum-likelihood fitting procedure. This approach is called Probit analysis [25]. The model has two parameters: the mean and the standard deviation. The mean of the distribution is commonly taken as the threshold. For a 2AFC task and a cumulative normal distribution, the mean corresponds to 75% correct. The standard deviation corresponds to the slope of the psychometric function.
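As a concrete illustration, a cumulative-normal psychometric function can be fitted to 2AFC data by maximum likelihood in a few lines. The sketch below is not the Probit procedure of [25]; it uses a crude grid search, and the trial counts are hypothetical:

```python
import math

def phi(z):
    """Standard normal cumulative distribution function."""
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def p_correct(x, mu, sigma):
    """2AFC psychometric function: 50% guessing rate, 100% asymptote."""
    return 0.5 + 0.5 * phi((x - mu) / sigma)

def neg_log_likelihood(data, mu, sigma):
    nll = 0.0
    for x, n_correct, n_trials in data:
        p = min(max(p_correct(x, mu, sigma), 1e-9), 1.0 - 1e-9)
        nll -= n_correct * math.log(p) + (n_trials - n_correct) * math.log(1.0 - p)
    return nll

def fit_cumulative_normal(data):
    """Maximum-likelihood fit by brute-force grid search over (mu, sigma)."""
    best = None
    for mu in (m / 100.0 for m in range(-300, -99)):         # log contrast -3.00 .. -1.00
        for sigma in (s / 100.0 for s in range(5, 101, 5)):  # slope 0.05 .. 1.00
            nll = neg_log_likelihood(data, mu, sigma)
            if best is None or nll < best[0]:
                best = (nll, mu, sigma)
    return best[1], best[2]

# Hypothetical data: (log10 contrast, number correct, number of trials).
data = [(-2.8, 26, 50), (-2.4, 29, 50), (-2.0, 37, 50), (-1.6, 47, 50), (-1.2, 50, 50)]
mu, sigma = fit_cumulative_normal(data)
# mu is the threshold: for 2AFC it is the stimulus level giving 75% correct.
```

With real data one would use a proper optimizer (e.g., the fitting routines provided with PsychToolbox), but the likelihood being maximized is the same.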
VISUAL PSYCHOPHYSICS WITH ADAPTIVE OPTICS
FIGURE 14.5 Psychometric function from a luminance contrast detection task. Squares represent the percent correct data from a two-alternative forced-choice procedure in which the observer indicated on which side of fixation a Gabor patch was presented. The data are fitted with a cumulative normal distribution. (Horizontal axis: log contrast, −3.0 to −1.0; vertical axis: percent correct, 50% to 100%.)
14.2.6 Selecting Stimulus Values
To accurately estimate the mean and/or standard deviation of a psychometric function, several data points based on many trials are needed. Obtaining these data points efficiently requires a judicious choice of stimulus values (e.g., contrasts) to be tested. The goal is to test a range and distribution of stimulus values that will yield performance values from near the guessing rate to approaching 100% correct, while including enough points in between to fit the appropriate distribution to the data. Testing several stimulus values that are too far below threshold, where the subject is always guessing, or too far above threshold, where the subject is always correct, results in wasted trials. There are two basic ways to approach the choice of stimulus values and the number of trials to be performed. One approach is to choose the values based on information acquired from experience with the task, perhaps from preliminary experiments using other methods (such as the method of adjustment) or from published data. The other approach is to choose the stimulus values and the number of trials dynamically, based on the performance of the observer in a given session. The stimulus values and the number of trials at each value to be tested are chosen in advance in the method of constant stimuli. On a given trial, the stimulus value is chosen at random (without replacement) from the predefined choices. This is probably the best method to use when interested in knowing
the exact shape of the psychometric function. With equal numbers of trials distributed evenly across the function, results obtained using this method can provide excellent fits to the psychometric function, given a large amount of data. When good prior information is available about the approximate mean and standard deviation of the psychometric function, the method of constant stimuli can be reasonably efficient and yields precise results. When such information is not available, the method of constant stimuli can be cumbersome, and a method that selects stimulus values dynamically based on performance may be preferred. Such procedures are referred to generally as staircase methods. When the observer is performing well, the task is made more difficult. When the observer makes errors, the task is made easier. As a result, a plot of stimulus strength (e.g., contrast) versus trial number resembles stairs that rise and fall intermittently. There are many variants of the staircase method. One simple choice is the M-up/N-down procedure [26]. One form of this procedure used often in conjunction with a 2AFC task is the 1-up/3-down version. Take the example of a contrast detection task again. In such an experiment, the staircase might begin at a fairly high contrast level and be reduced (“down”) each time the observer makes 3 consecutive correct responses. When the observer can no longer detect the stimulus and makes a single incorrect response, the contrast is increased (“up”). Each such change in staircase direction is called a reversal. As the experiment proceeds, the staircase converges toward a particular percent correct performance level (~79% for the 1-up/3-down rule). After a predetermined number of reversals, the experiment is terminated, and the last several reversals are averaged as an estimate of threshold. 
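The 1-up/3-down rule is easy to simulate against a hypothetical observer whose psychometric function is known; all numbers below (true threshold, step size, number of reversals) are arbitrary choices for the sketch:

```python
import math
import random

random.seed(1)

def p_correct(log_c, threshold=-2.0, sigma=0.3):
    """Simulated 2AFC observer with a cumulative-normal psychometric function."""
    z = (log_c - threshold) / sigma
    return 0.5 + 0.25 * (1.0 + math.erf(z / math.sqrt(2.0)))

def run_staircase(start=-1.0, step=0.1, n_reversals=12):
    """1-up/3-down staircase on log10 contrast; returns the reversal values."""
    log_c, correct_run, last_direction = start, 0, None
    reversals = []
    while len(reversals) < n_reversals:
        correct = random.random() < p_correct(log_c)
        if correct:
            correct_run += 1
            if correct_run == 3:                    # 3 consecutive correct: harder
                correct_run = 0
                if last_direction == "up":          # direction change = reversal
                    reversals.append(log_c)
                log_c -= step
                last_direction = "down"
        else:                                       # any error: easier
            correct_run = 0
            if last_direction == "down":
                reversals.append(log_c)
            log_c += step
            last_direction = "up"
    return reversals

revs = run_staircase()
threshold_estimate = sum(revs[-8:]) / 8.0           # average of the last 8 reversals
```

The staircase settles near the ~79% correct point of the simulated observer's psychometric function, slightly above its 75% threshold.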
While the M-up/N-down staircase procedure is simple and intuitive, it is not the most efficient choice with respect to the number of trials necessary to achieve a given statistical confidence level on the estimate of threshold. More efficient adaptive psychometric methods have been developed [27–29], and the use of these efficient methods is now well established in the vision science literature. Two commonly used methods are called QUEST and ZEST. These adaptive psychometric methods rely on Bayesian statistics to combine prior information about the probability density function of the threshold with the ongoing results of an experiment to determine a maximum-likelihood estimate of the threshold. On each trial, the threshold is estimated and the stimulus value that corresponds to that estimate is presented to the observer. Based on whether the observer is correct or incorrect, the threshold estimate is adjusted. The net result is a systematic and sensible way of choosing stimulus values on each trial such that all trials are used efficiently. In addition, these methods offer a variety of justifiable termination rules. For example, a set number of trials for the experiment can be selected in advance, or the experiment can be terminated when a particular confidence interval is achieved. An example of the trial-by-trial estimates of threshold from an experiment performed using a QUEST procedure is shown in Figure 14.6. Two functions are plotted in this figure, each representing an estimate of threshold as a function
FIGURE 14.6 Trial-by-trial estimates of threshold for two independent measures from the same observer using the QUEST staircase procedure. Each point represents the value of contrast presented on a given trial in a temporal 2AFC contrast detection task. (Horizontal axis: trial number, 0 to 80; vertical axis: contrast, 0.02 to 0.50.)
of trial number for two statistically independent staircases. The two staircases were obtained in the same session, and in a given trial, the contrast presented was randomly selected from one of the two staircases. Randomly interleaving staircases in this way has the important advantage of helping to maintain the statistical independence of each trial [27], in addition to providing an additional estimate of the threshold. Each method has its relative merits. For example, QUEST is quite efficient and easily implemented in a short computer program. An implementation of this procedure is provided in the PsychToolbox software package for MATLAB [30, 31] (available at http://www.psychtoolbox.org). A version of the QUEST procedure written in the C programming language is provided by Farell and Pelli [32]. ZEST, a close relative of QUEST, is computationally more complex but somewhat more efficient. It should be noted that adaptive staircases can be used to estimate the slope of the psychometric function as well as the threshold [33].
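The Bayesian logic behind these procedures can be sketched compactly. The toy version below is not the actual QUEST algorithm (QUEST uses a Weibull likelihood and an informative prior), but it shows the core idea: keep a posterior over candidate thresholds, test at the current estimate, and update by Bayes' rule after each response, here against a simulated observer:

```python
import math
import random

random.seed(7)

GRID = [-3.0 + 0.01 * i for i in range(201)]   # candidate thresholds (log10 contrast)

def p_correct(log_c, threshold, sigma=0.3):
    """2AFC psychometric function assumed by the procedure."""
    z = (log_c - threshold) / sigma
    return 0.5 + 0.25 * (1.0 + math.erf(z / math.sqrt(2.0)))

def adaptive_run(true_threshold=-2.0, n_trials=60):
    posterior = [1.0 / len(GRID)] * len(GRID)  # flat prior over the grid
    for _ in range(n_trials):
        # Present the stimulus at the current posterior-mean threshold estimate.
        est = sum(t * p for t, p in zip(GRID, posterior))
        correct = random.random() < p_correct(est, true_threshold)
        # Bayes update: weight each candidate by the likelihood of the response.
        likelihood = [p_correct(est, t) if correct else 1.0 - p_correct(est, t)
                      for t in GRID]
        posterior = [p * l for p, l in zip(posterior, likelihood)]
        norm = sum(posterior)
        posterior = [p / norm for p in posterior]
    return sum(t * p for t, p in zip(GRID, posterior))

estimate = adaptive_run()   # settles near the simulated threshold of -2.0
```

Because every trial is placed where it is most informative about the threshold, far fewer trials are wasted than with the method of constant stimuli.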
14.3 GENERATING THE VISUAL STIMULUS
Measuring psychophysical functions requires careful control of the physical stimulus parameters. Some of the biggest practical concerns for vision scientists involve the production of visual stimuli. Many devices for producing
highly controlled visual stimuli exist, and creative engineers and vision scientists are constantly devising new machines for this purpose. Historically, stimuli for psychophysical experiments were produced in elaborate optical systems, or with a bank of function generators, custom electronics, and oscilloscopes. In recent years, however, psychophysicists have come to rely more heavily on stimuli generated by computers, often displayed on monitors. Computer-controlled display systems are popular because they provide a high degree of control over the stimulus to be presented and are available at a reasonable cost. However, computer-controlled displays have some disadvantages. In particular, luminance and chromaticity limitations make them inappropriate for some applications. When these factors limit production of suitable stimuli, other systems need to be considered.

14.3.1 General Issues Concerning Computer-Controlled Displays
While almost any device that can generate light can be controlled by a computer, this section discusses commercially available display solutions that work with desktop or laptop computers, such as cathode ray tube (CRT) monitors, liquid crystal display (LCD) monitors, plasma screen monitors, and light projector systems. For the most part, these displays have primary commercial functions that are not optimal for vision science applications. However, with proper software control, accurate characterization, and minor hardware additions or modifications, these displays can act as flexible and robust tools for studying vision. Computer-controlled displays offer many advantages for the vision scientist. Once the display is characterized, the user can accurately specify color and luminance values on a pixel-by-pixel basis. The temporal and spatial resolution of most of these devices is adequate for many vision research applications. In addition, computers with high processing speeds and large amounts of memory can be purchased for relatively low cost. Finally, when using computer-controlled displays, the same system that generates the visual stimulus can also conveniently collect, store, and analyze experimental data. Several factors need to be weighed when considering which type of computer-controlled display will be best for a given research goal. One important consideration is the luminance output of the display. A typical commercially available CRT computer monitor, for example, produces a maximum luminance output of about 100 to 150 cd/m2 when using all three color channels. While these values are sufficient for many purposes, the effective maximum luminance level will depend on several factors, including the desired chromaticities. The effective luminance of the display will be reduced when viewed through the components of an optical system. 
This issue can be of particular importance in AO systems where beamsplitters or polarizers are used, often greatly reducing the amount of light available from the display. Other types of display technology can provide higher luminance values but often at the cost of spatial or temporal resolution.
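The cumulative light loss is simply the product of the transmittances along the optical path. A trivial sketch with made-up values for a hypothetical AO relay:

```python
def effective_luminance(source_cd_m2, transmittances):
    """Attenuate a display's luminance by each optical element in the path."""
    result = source_cd_m2
    for t in transmittances:
        result *= t
    return result

# Hypothetical path: two 50/50 beamsplitters and a polarizer passing 45%.
seen = effective_luminance(120.0, [0.5, 0.5, 0.45])   # 13.5 cd/m2 reaches the eye
```

Even this short path leaves barely a tenth of the display's output, which is why luminance budgets matter when placing a monitor behind an AO system.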
There are important chromatic limitations to be aware of for most computer-controlled displays. Most emit relatively broad spectral bands of light (see Fig. 14.7 for an example), and this can be problematic for some research applications. Filters can be used with these displays to produce narrow-band light, but the luminance will be greatly reduced, making this solution impractical in most cases. It should also be noted that not all chromaticities in the visible spectrum can be produced using a computer-controlled display. The gamut, or range of displayable chromaticities, depends on the chromaticities of the light emitted from each of the color channels used in the display. Figure 14.8 shows the chromaticities available from a typical CRT display in CIE xy chromaticity coordinates. The gamut of the CRT is a triangle in CIE chromaticity space with vertices defined by the chromaticity coordinates of the phosphors. All chromaticities within this triangle can be obtained at a low luminance level, but the gamut becomes increasingly restricted at higher luminance levels. Additionally, the intensity resolution allowed by the computer should be considered. Most computers are supplied with 8-bit graphics cards that allow 256 (2⁸) discrete levels of luminance for each color channel. This resolution level is insufficient for some applications (e.g., contrast threshold experiments) that require very low luminance contrast values or subtle chromatic variations. A variety of options are available to obtain greater luminance and color resolution, and these are discussed in a later section. Display devices vary in their temporal and spatial characteristics. Frame rates of 75 Hz or higher are desirable to reduce the perception of flicker and
FIGURE 14.7 The spectral power distributions of a typical set of CRT phosphors are shown. Note that the red phosphor has two major peaks. (Horizontal axis: wavelength, 400 to 750 nm; vertical axis: relative power.)
FIGURE 14.8 The chromaticities of typical CRT phosphors are shown by the open circles plotted in the CIE xy chromaticity space. The thick lines show the maximum chromaticity gamut available using these phosphors.
to allow for precise temporal modulation. Most commercially available computer-controlled displays are designed to operate at this or higher rates. However, some experiments require greater temporal resolution, and displays that operate at higher frame rates are available, although there may be a trade-off between temporal and spatial resolution. Additionally, frame rate is not the only consideration for the temporal resolution of a display. The persistence of the light source is also critical. For example, phosphors differ in how quickly they reach maximum light output when stimulated and return to minimal output when turned off. Also, spatial resolution should be at least high enough so that pixels cannot be resolved at the desired viewing distance. Generally, higher spatial resolutions are better. The highest spatial frequencies that can be produced will depend on the size of the pixels as measured in the retinal image. A display with relatively large pixels (low spatial resolution) can produce high-spatial-frequency patterns only if the observer is placed far enough away. However, this will limit the effective size of the display, making it difficult to display low-spatial-frequency information. Thus, only a limited range of spatial frequencies can be displayed at any given magnification. Display devices are useful for research purposes only if they can be characterized accurately. Characterization (or “calibration”) allows for accurate specification of chromatic and luminance values and is relatively straightforward to perform if the following assumptions hold:
• Spectral Constancy. The relative spectral power distribution of each color channel should be constant over the range of intensity values.
• Color-Channel Independence. The output of each color channel should be unaffected by the output of the other channels.
• Spatial Homogeneity. The luminous output of each pixel should be consistent across the display.
• Spatial Independence. The output of each pixel should be unaffected by neighboring pixels.

It is recommended that these assumptions be checked within the operating range of the experimental application. Complicated software manipulations are possible to correct for some violations [34, 35]. A few other issues should be considered. Ideally, when the input to the red, green, and blue (RGB) channels is zero, no light should be emitted from the display. However, most display types produce some residual light at this setting. This can be minimized by adjusting the brightness and contrast settings but can be a persistent problem for some display types. Also, the intensity of the light from a pixel can vary with the viewing angle for some displays, as is the case with LCDs. This problem can be reduced if the head position of observers is stabilized. The space available for a display might be an issue for some applications. Flat screen displays take up less space than traditional CRTs, while some projector-based displays require a lot of room. Finally, the cost of a display can range from a few hundred dollars for CRTs to many thousands of dollars for some projector-based displays and plasma screens. More detailed information on computer-controlled displays can be found elsewhere [36, 37].

14.3.2 Types of Computer-Controlled Displays
Cathode ray tube (CRT) displays emit light when an electron beam excites a phosphor coating on a display screen. A monochrome monitor uses one electron beam to excite a single phosphor type that emits broadband light. A color monitor uses three electron beams to separately excite three different phosphors that are dominated by short-, middle-, or long-wavelength light. Higher luminance values can be obtained using some monochrome monitors (e.g., those designed for specialized applications such as medical imaging) compared to color monitors. For chromatic stimulation at higher luminance values, other display options might be more appropriate. The monitor is made up of thousands of picture elements known as pixels. In a color monitor, each pixel is made up of red, green, and blue components. By varying the intensity of the electron beam, the intensity of the light emitted can be manipulated. The electron beams scan the screen in a raster pattern from left to right, moving rapidly from the top to the bottom of the screen many times per second (e.g., 75 Hz). Some CRTs can operate at high frame rates (>150 Hz).
However, specialized graphics cards and monitors are usually necessary to take advantage of these rates, and spatial resolution may be compromised when operating in these modes. The persistence of CRT phosphors is generally quite short, with a relatively quick return to minimal light output; however, different phosphor types have different persistence times. Cathode ray tubes are an attractive option for vision research because their characteristics are well understood and they are relatively inexpensive. The assumptions of spectral constancy, spatial homogeneity, color-channel independence, and spatial independence hold to a close approximation for most CRTs over a useful range [36, 38]. Liquid crystal display (LCD) monitors are commonly used in laptop computers and are increasingly supplied as standard equipment with desktop computers. They use polarized light that is transmitted through aligned liquid crystals. The alignment of the crystals can be disturbed by applying a voltage, which results in decreased light transmittance. In an LCD monitor, a polarized light source is positioned behind an array of liquid crystal elements that are controlled separately to produce an image. Color images are created when three separate RGB light sources are used. The luminance of LCDs can be much higher than that of CRTs because it is mainly a function of the choice of backlights. LCD monitors are also much more compact than CRT displays. There are, however, some disadvantages from a vision science perspective. For one, the intensity of the display output can vary greatly depending on the viewing angle. Also, useful refresh rates tend to be lower than with CRTs since the persistence of the liquid crystal elements is longer than that of CRT phosphors. Another potential problem with LCD monitors is that the spectra of the RGB outputs can vary as the intensity changes [36], which may be problematic when attempting to characterize the display. 
Plasma displays produce light by exciting plasma gas pockets coupled to phosphors. Plasma gas contains about equal numbers of positive ions and electrons. When this balance is disturbed by an electric current, the gas becomes excited, causing photons to be released. The photons emitted by the gas are mostly in the ultraviolet (UV) part of the spectrum and thus invisible. However, this energy can be used to excite phosphors, which do emit visible light. In a plasma display, there are many separate gas cells that can be stimulated independently. Since the plasma cells can be stimulated independently with no scanning electron beam, spatial homogeneity and spatial independence can be excellent with these displays. This also allows for the possibility of very large displayable areas. The luminance output and chromaticity gamut will depend on the phosphors being used but will be similar to that of CRTs. Plasma technology is fairly new and currently relatively expensive. Projectors are an important option for vision scientists because they can produce chromatic images at high luminance levels. Many home entertainment systems employ self-contained projection and screen setups that look like large TVs. Projectors and screens can also be purchased separately. There are a number of issues to consider when projection systems are used for vision
research. Keystone distortion can occur when the image is projected at an angle (e.g., a circle may appear as an oval). However, this distortion can normally be corrected by adjusting controls on the projector. Projectors often produce much higher luminance values at the center of the screen than at the edges. This can be countered to some degree by using specially designed screens (e.g., Fresnel/lenticular screens). Spatial resolution can be poor for some types of projectors. The resolution will also vary with the projection distance. Separate projector and screen systems can also require a lot of space. Three main types of projector technologies are currently available. CRT projectors use very bright CRT tubes combined with lenses to project images. Various configurations are used, but CRT projectors are becoming less popular now that improved image quality can be obtained using alternative technology. LCD projectors use the same technology as LCD displays but employ more powerful backlights combined with a lens to project the image. The spatial resolution is typically better than with CRT projectors. Digital light projectors (DLPs) use a large array of digital micromirror devices (DMDs) that can be tilted independently. The DMDs reflect light onto the screen when in the normal position or into a light trap when tilted. The mirrors can be tilted many times per second, and the intensity of the light reaching the screen depends on the proportion of time the mirror is in the normal position. DLPs come in two main formats. The first format uses one DMD array and a color wheel with separate RGB segments. Color images are produced by synchronizing the wheel with the mirror activity. With this type of display, refresh rates are typically low and colors can appear to separate with fast eye movements. This effect is sometimes referred to as rainbowing. 
The second format uses three separate DMD arrays that reflect three different light sources (RGB) that are combined to form a color image. This type of DLP avoids the issue of rainbowing but is more expensive. Packer et al. [39] reported that DLPs have good contrast, high light levels, and offer the potential for larger color gamuts than CRTs.

14.3.3 Accurate Stimulus Generation
With any of the display options discussed above, several issues will need to be addressed in order to generate the desired stimuli accurately. Some of the most important concerns are the accurate control of chromaticity and luminance. Most computers are sold with a standard 8-bit graphics card. Higher intensity resolution capabilities are required for some applications, for example, the low-contrast stimuli that are used in contrast threshold experiments. Figure 14.9 illustrates the importance of adequate luminance resolution for correct rendering of low-contrast spatial sine-wave patterns. At a mean luminance level of 35 cd/m2, the graphics card with 8-bit resolution is unable to produce a smoothly varying sine-wave pattern at 1% contrast. This problem
FIGURE 14.9 Quantization effects for a luminance-modulated sine-wave stimulus with a contrast of 1% and a mean luminance of 35 cd/m2. The 14-bit resolution (solid curve) provides a luminance profile that is very close to a sine wave. The 10-bit resolution (thick dashed curve) results in contrast artifacts (the desired maximum and minimum luminance levels cannot be generated), and the luminance profile is a relatively poor approximation of a sine wave. The 8-bit resolution (thin dashed curve) cannot produce a sine-wave profile at this contrast level.
is exacerbated at lower luminance levels and lower contrasts. It should be noted that some graphics cards that provide a greater number of possible luminance levels allow only 256 color values to be written to the display at any one time. This contrasts with the 8 bits per color channel (24 bits total) of simultaneous colors available in most commercial graphics cards. This is not a problem for many vision experiments but may be problematic when attempting to display more naturalistic images. When using these systems, color look-up tables (CLUTs) are used to select the 256 colors to be displayed on a given frame. Clever use of CLUTs can greatly reduce the inconvenience of having a limited number of simultaneous colors. CLUTs are discussed in greater detail by Robson [40]. A simple way to obtain higher color resolution is to replace the existing graphics card with a commercially available 10-bit card. If the experiment requires monochromatic stimuli only, then the Pelli attenuator [41] is an attractive option. This device combines the outputs of the three 8-bit color channels to produce 12-bit monochrome resolution. If a color monitor is used, the output typically uses the green phosphor only, as this produces the highest luminance. Bit stealing is another technique that provides higher resolution for monochromatic stimuli using a color monitor [42]. The three elements of each pixel are dithered to provide finer luminance steps. This can, however,
result in some chromatic artifacts. This problem is reduced if the observer is far enough from the screen. Cambridge Research Systems (CRS) provides several “visual stimulator” products for producing high luminance and color resolution stimuli. The Bits++ (see http://www.crsltd.com) takes input from a standard Digital Video Interface (DVI) graphics card and converts each channel to 14 bits using digital-to-analog converters. The Bits++ has three operating modes. The basic mode converts three 8-bit color channels to three 14-bit channels using a color look-up table, limited to 256 simultaneous colors. The mono mode converts two 8-bit channels to one monochrome 14-bit channel. The third mode, color, overcomes the 256-color limitation of the basic mode at the expense of horizontal spatial resolution. Both mono and color modes have the potential for all values to be displayed concurrently (i.e., true 14-bit operation). The Vision Stimulus Generator (VSG) systems, manufactured by CRS, have proven popular with vision scientists. These add-in cards for the personal computer (PC) provide up to 15 bits of resolution per color channel, but only 256 sets of RGB values simultaneously. The latest generation of this product line, the ViSaGe, is an external device that connects to a dedicated second monitor in a dual-monitor PC system. Changing color look-up tables and synchronizing image frames usually requires some real-time processing, and an important advantage of these systems is that a dedicated microprocessor provides high-speed temporal modulation of the stimuli, separate from the host computer operating system. These devices have many desirable features for vision science applications but are more expensive than other computer graphics generators. There is a large variety of software packages available for a range of psychophysical experiments (a comprehensive list can be found at the Vision Science website: http://www.visionscience.com/vs-software). 
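The value of these extra bits can be demonstrated numerically. This sketch, with arbitrary values for the mean level and target contrast, quantizes one cycle of a 1% sine wave at different DAC depths and reports the Michelson contrast actually rendered:

```python
import math

def rendered_contrast(bits, contrast=0.01, mean_level=0.5, n=256):
    """Quantize one cycle of a sine-wave luminance profile to 2**bits levels
    and return the Michelson contrast of the quantized pattern."""
    levels = 2 ** bits
    out = []
    for i in range(n):
        target = mean_level * (1.0 + contrast * math.sin(2.0 * math.pi * i / n))
        out.append(round(target * (levels - 1)) / (levels - 1))
    lo, hi = min(out), max(out)
    return (hi - lo) / (hi + lo)

c8, c10, c14 = rendered_contrast(8), rendered_contrast(10), rendered_contrast(14)
# The error relative to the requested 1% contrast shrinks as bit depth grows.
```

At 8 bits the waveform also collapses onto only a handful of distinct levels, which is the staircase-like distortion illustrated in Figure 14.9.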
However, off-the-shelf software packages will not be sufficiently flexible for many users. Software can be written from scratch using programming languages such as Java or C++; however, it is much easier to incorporate existing software libraries into your code. Two popular libraries specifically designed for vision science are the VSG Software Library (http://www.crsltd.com/catalog/vsl/) and PsychToolbox [30, 31] (http://www.psychtoolbox.org).

14.3.4 Display Characterization
To allow accurate specification of luminance and chromaticity, the display device must be characterized (or calibrated). Two main steps are required to characterize a display device. First, the chromaticity of each phosphor must be measured with a colorimeter or spectroradiometer. Second, the relation between display luminance and the digital-to-analog-conversion (DAC) input values needs to be characterized for each color channel. For CRTs, display luminance can be modeled as a power function (often referred to as a “gamma”
function in this context) of DAC values. A typical gamma function is shown in Figure 14.10. When the chromaticities and gamma functions of the three color channels are characterized, the DAC values necessary to create the desired stimulus chromaticity and luminance values can be calculated, provided that the assumptions of spectral constancy, color-channel independence, spatial homogeneity, and spatial independence hold. The necessary computations are described by Wandell [43] and Nakano [44]. While it is often useful to specify display outputs in terms of luminance and chromaticity values, it is increasingly desirable to make these specifications in terms of more physiologically meaningful units. Reliable measurements of the absorption spectra of human cone photoreceptors are available [45–48], though there are important individual differences in the peak sensitivity, optical density, and ocular media through which the light is filtered before reaching the receptors. Using this information together with the above display measurements, conversions can be made between RGB values and cone coordinates (or other related color spaces). A detailed discussion of display characterization and color conversions can be found in Brainard, Pelli, and Robson [36]. The appropriate measurement device must be chosen for proper display characterization. For monochrome display applications, a photometer may be appropriate. A photometer is a device that measures the radiance of a light source weighted by a filter that emulates a human spectral efficiency function. The output of this device is in terms of luminance. For color applications, a colorimeter or spectroradiometer will be required. A colorimeter is a light
FIGURE 14.10 Luminance of a CRT display is plotted as a function of DAC value (0 to 255). The best-fitting gamma function is shown. Luminance values have been normalized to unity.
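In code, the gamma model and its inverse look roughly like this. The exponent of 2.2 and the 100 cd/m2 peak are illustrative values, not measurements of any particular display, and real characterization would fit measured (DAC, luminance) pairs:

```python
import math

def gamma_forward(dac, gamma=2.2, dac_max=255, l_max=100.0):
    """Model display luminance (cd/m2) as a power ('gamma') function of DAC value."""
    return l_max * (dac / dac_max) ** gamma

def fit_gamma(measurements, dac_max=255):
    """Estimate the gamma exponent from measured (DAC value, luminance) pairs."""
    l_max = max(lum for _, lum in measurements)
    ratios = [math.log(lum / l_max) / math.log(d / dac_max)
              for d, lum in measurements if 0 < d < dac_max and lum > 0]
    return sum(ratios) / len(ratios)

def dac_for_luminance(target, gamma=2.2, dac_max=255, l_max=100.0):
    """Invert the gamma function: the DAC value that produces a target luminance."""
    if not 0.0 <= target <= l_max:
        raise ValueError("requested luminance is outside the display's range")
    return round(dac_max * (target / l_max) ** (1.0 / gamma))

# Synthetic 'measurements' generated from the model itself.
meas = [(d, gamma_forward(d)) for d in (16, 32, 64, 128, 192, 255)]
g = fit_gamma(meas)                 # recovers ~2.2
half = dac_for_luminance(50.0)      # DAC value giving half of maximum luminance
```

Given the phosphor chromaticities as well, the same per-channel inversion yields the RGB DAC triplet for a desired chromaticity and luminance, following the computations described by Wandell [43].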
measuring device that uses three filters based on a set of color-matching functions derived from psychophysical experiments. Colorimeters perform best when measuring broadband light. They do not perform as well when measuring light that is narrow band or has sharp spectral peaks. Spectroradiometers measure light over a large number of steps across the visible spectrum. Because each measurement combines multiple estimates of light (e.g., 101 measurements at 4-nm steps from 380 to 780 nm), with some noise introduced at each estimate, the signal-to-noise ratio can be low, especially for lower intensities. Colorimeters and spectroradiometers are discussed in greater detail by Mollon [49]. On most displays, a number of adjustments (e.g., contrast and brightness) and display modes are available. Any adjustments should be made prior to making a display characterization, and no further adjustments should be made until the next characterization. The measuring device should be placed at the same distance and viewing angle as the position of the observer in the experiment. Regular characterization is recommended to take account of changes in the display device over time. Characterization is also advised each time the display is moved, since the surrounding electromagnetic fields can affect performance.

14.3.5 Maxwellian-View Optical Systems
Maxwellian-view optical systems were first introduced by James Clerk Maxwell almost 150 years ago. The basic Maxwellian-view system uses a bright light source (such as a tungsten or xenon lamp) that is imaged by a lens onto the plane of the observer’s pupil. The observer sees the lens uniformly filled with light. The principal advantage is that high retinal illuminances can be obtained with spatially uniform narrow-band light. More detailed information on these systems can be found in Wyszecki and Stiles [15] and Westheimer [50]. A hands-on account of how to build a Maxwellian-view optical system is provided by Boynton [51]. Various additional devices can be used with Maxwellian-view systems to produce a variety of stimuli. Interference filters or monochromators can be employed to produce narrow-band light. Beam choppers and polarizers can be used to create temporally modulated stimuli. Multiple-channel systems can be used to produce complex stimuli. Maxwellian-view systems can be used to project discrete spots of light onto the retina by using a series of pinholes. These systems have been successfully used with AO systems for vision research purposes. For example, Hofer, Singer, and Williams [52] used a Maxwellian-view system together with an AO system to stimulate single cones.
14.3.6 Other Display Options
Interferometry can be used to produce gratings on the retina that are independent of the optical aberrations in the eye. Two small points of coherent
light are passed through the pupil of the eye and interfere to produce gratings on the retina. This technique has been used to test the limits of visual resolution in the absence of optical aberrations [6, 10, 12, 53]. Other light sources might also be considered for visual stimulation [e.g., lasers and light-emitting diodes (LEDs)]. Finally, the vision scientist can use real objects to test visual performance, if the visual environment is carefully controlled (e.g., Kraft and Brainard [54]).
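The spatial frequency of the interference fringes is set directly by the separation of the two coherent points in the pupil, which is why the gratings are immune to the eye’s aberrations. A small sketch of the relationship (the 2-mm separation and 550-nm wavelength are illustrative values, not ones given in the text):

```python
import math

def fringe_frequency_cpd(separation_m, wavelength_m):
    # Two coherent points separated by d in the pupil produce retinal fringes
    # of spatial frequency d / wavelength in cycles per radian of visual angle.
    cycles_per_radian = separation_m / wavelength_m
    return cycles_per_radian * math.pi / 180.0  # convert to cycles per degree

f = fringe_frequency_cpd(2e-3, 550e-9)  # 2-mm separation, 550-nm light: ~63 cyc/deg
```

Widening the point separation raises the fringe frequency, allowing gratings beyond the eye’s normal optical cutoff to be imaged on the retina.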
14.4 CONCLUSIONS
Psychophysics seeks to examine the relations between physical stimuli and visual performance and to understand the anatomical and physiological mechanisms that process those stimuli to create the world that we perceive. By increasing our understanding of the ocular optics forming the retinal image, the wavefront measurement technologies that are an integral part of AO are providing the psychophysicist with new tools for understanding the inputs to the visual system. Additionally, by correcting the higher order aberrations of the eye, AO allows very high resolution images to be placed on the retinas of human observers, providing the opportunity to present complex stimuli to the visual system more precisely than previously possible. In turn, what is learned through psychophysical means could be important to an engineer whose primary interest is in developing AO technologies. By understanding the fundamental neural limits to visual performance, we can assess the potential commercial viability of optometric and ophthalmic devices that rely on modern wavefront measurement and AO correction. For example, if it could be established that the visual system would benefit greatly from correction of higher order aberrations, this would drive the demand for new technologies that take advantage of AO to improve visual performance.
Acknowledgments
We thank Vicki J. Volbrecht, Lewis O. Harvey, Jr., John A. Wilson, David H. Brainard, Cynthia Angel, and William P. Hardy for their helpful comments on the manuscript. Supported by the National Institute on Aging (grant AG04058), the National Eye Institute (grant EY014743), and the University of California Campus Laboratory Exchange Program.
REFERENCES
1. De Valois RL, De Valois KK. Spatial Vision. Oxford: Oxford University Press, 1990. 2. Cornsweet TN. Visual Perception. New York: Academic, 1970.
3. Appelle S. Perception and Discrimination as a Function of Stimulus Orientation: The “Oblique Effect” in Man and Animals. Psychol. Bull. 1972; 78: 266–278. 4. Hess RF. Spatial Scale in Visual Processing. In: Chalupa LM, Werner JS, eds. The Visual Neurosciences. Cambridge, MA: MIT Press, 2004, pp. 1043–1059. 5. Atchison DA, Woods RL, Bradley A. Predicting the Effects of Optical Defocus on Human Contrast Sensitivity. J. Opt. Soc. Am. A. 1998; 15: 2536–2544. 6. Burton KB, Owsley C, Sloane ME. Aging and Neural Spatial Contrast Sensitivity: Photopic Vision. Vision Res. 1993; 33: 939–946. 7. Jackson GR, Owsley C. Visual Dysfunction, Neurodegenerative Diseases, and Aging. Neurol. Clin. 2003; 21: 709–728. 8. Werner JS, Schefrin BE. Optics and Vision of the Aging Eye. In: Bass M, et al., eds. OSA Handbook of Optics, Vol. III. Classical, Vision & X-Ray Optics. New York: McGraw-Hill, 2000, pp. 13.1–13.31. 9. Sekiguchi N, Williams DR, Brainard DH. Efficiency in Detection of Isoluminant and Isochromatic Interference Fringes. J. Opt. Soc. Am. A. 1993; 10: 2118–2133. 10. Williams DR. Aliasing in Human Foveal Vision. Vision Res. 1985; 25: 195–205. 11. Williams DR, Yoon GY, Porter J, et al. Visual Benefit of Correcting Higher Order Aberrations of the Eye. J. Refract. Surg. 2000; 16: S554–S559. 12. Smallman HS, MacLeod DI, He S, Kentridge RW. Fine Grain of the Neural Representation of Human Spatial Vision. J. Neurosci. 1996; 16: 1852–1859. 13. Riggs LA. Visual Acuity. In: Graham CH, ed. Vision and Visual Perception. New York: Wiley, 1965, pp. 321–349. 14. Snellen H. Test-Types for the Determination of the Acuteness of Vision. London: Norgate and Williams, 1866. 15. Wyszecki G, Stiles WS. Color Science: Concepts and Methods, Quantitative Data and Formulae, 2nd ed. New York: Wiley, 1982. 16. Werner JS. Human Color Vision: 1. Color Mixture and Retino-geniculate Processing. In: Backhaus W, ed. Neuronal Coding of Perceptual Systems. London: World Scientific, 2001, pp. 79–101. 17.
Kraft JM, Werner JS. Spectral Efficiency across the Life Span: Flicker Photometry and Brightness Matching. J. Opt. Soc. Am. A. 1994; 11: 1213–1221. 18. Winn B, Whitaker D, Elliott DB, Phillips NJ. Factors Affecting Light-Adapted Pupil Size in Normal Human Subjects. Invest. Ophthalmol. Vis. Sci. 1994; 35: 1132–1137. 19. Green DM, Swets JA. Signal Detection Theory and Psychophysics. Los Altos, CA: Peninsula, 1988. 20. Van Trees HL. Detection, Estimation and Modulation Theory. New York: Wiley, 2001. 21. Macmillan NA, Creelman CD. Detection Theory: A User’s Guide. Cambridge, UK: Cambridge University Press, 1991. 22. Wickens TD. Elementary Signal Detection Theory. Oxford: Oxford University Press, 2002.
23. Weber EH. Der Tastsinn und das Gemeingefühl [The Sense of Touch and General Sensation]. In: Wagner R, ed. Handwörterbuch der Physiologie, Vol. 3. Braunschweig: Vieweg, 1846, pp. 481–588. 24. Harvey LO. Efficient Estimation of Sensory Thresholds. Behav. Res. Meth. Instru. Comp. 1986; 18: 623–632. 25. Finney DJ. Probit Analysis. Cambridge, UK: Cambridge University Press, 1971. 26. Levitt H. Transformed Up-Down Methods in Psychoacoustics. J. Acoust. Soc. Am. 1971; 49: 466–477. 27. Watson AB, Pelli DG. QUEST: A Bayesian Adaptive Psychometric Method. Percept. Psychophys. 1983; 33: 113–120. 28. Harvey LO Jr. Efficient Estimation of Sensory Thresholds with ML-PEST. Spat. Vis. 1997; 11: 121–128. 29. King-Smith PE, Grigsby SS, Vingrys AJ, et al. Efficient and Unbiased Modifications of the QUEST Threshold Method: Theory, Simulations, Experimental Evaluation and Practical Implementation. Vision Res. 1994; 34: 885–912. 30. Brainard DH. The Psychophysics Toolbox. Spat. Vis. 1997; 10: 433–436. 31. Pelli DG. The VideoToolbox Software for Visual Psychophysics: Transforming Numbers into Movies. Spat. Vis. 1997; 10: 437–442. 32. Farell B, Pelli DG. Psychophysical Methods, or How to Measure a Threshold, and Why. In: Carpenter RHS, Robson JG, eds. Vision Research: A Practical Guide to Laboratory Methods. Oxford: Oxford University Press, 1999, pp. 129–136. 33. Kontsevich LL, Tyler CW. Bayesian Adaptive Estimation of Psychometric Slope and Threshold. Vision Res. 1999; 39: 2729–2737. 34. Brainard DH, Brunt WA, Speigle JM. Color Constancy in the Nearly Natural Image. I. Asymmetric Matches. J. Opt. Soc. Am. A. 1997; 14: 2091–2110. 35. Post DL, Calhoun CS. An Evaluation of Methods for Producing Desired Colors on CRT Monitors. Color Res. Appl. 1989; 14: 172–186. 36. Brainard DH, Pelli DG, Robson T. Display Characterization. In: Hornak J, ed. Encyclopedia of Imaging Science and Technology. New York: Wiley, 2002, pp. 172–188. 37. Cowan WB. Displays for Vision Research. In: Bass M, ed.
Handbook of Optics: Vol. 1. Fundamentals, Techniques, and Design. New York: McGraw-Hill, 1995, pp. 27.1–27.44. 38. Brainard DH. Calibration of a Computer Controlled Color Monitor. Color Res. Appl. 1989; 14: 23–34. 39. Packer O, Diller LC, Verweij J, et al. Characterization and Use of a Digital Light Projector for Vision Research. Vision Res. 2001; 41: 427–439. 40. Robson T. Topics in Computerized Visual-Stimulus Generation. In: Carpenter RHS, Robson JG, eds. Vision Research: A Practical Guide to Laboratory Methods. Oxford: Oxford University Press, 1999, pp. 81–105. 41. Pelli DG, Zhang L. Accurate Control of Contrast on Microcomputer Displays. Vision Res. 1991; 31: 1337–1350.
42. Tyler CW. Colour Bit-Stealing to Enhance the Luminance Resolution of Digital Displays on a Single Pixel Basis. Spat. Vis. 1997; 10: 369–377. 43. Wandell BA. Foundations of Vision. Sunderland, MA: Sinauer, 1995. 44. Nakano Y. Appendix Part III: Color Vision Mathematics: A Tutorial. In: Kaiser PK, Boynton RM, eds. Human Color Vision, 2nd ed. Washington, DC: Optical Society of America, 1996, pp. 544–562. 45. Smith VC, Pokorny J. Spectral Sensitivity of the Foveal Cone Photopigments between 400 and 500 nm. Vision Res. 1975; 15: 161–171. 46. Vos JJ, Estevez O, Walraven PL. Improved Color Fundamentals Offer a New View on Photometric Additivity. Vision Res. 1990; 30: 937–943. 47. Stockman A, Sharpe LT, Fach C. The Spectral Sensitivity of the Human Short-Wavelength-Sensitive Cones Derived from Thresholds and Color Matches. Vision Res. 1999; 39: 2901–2927. 48. Stockman A, Sharpe LT. The Spectral Sensitivities of the Middle- and Long-Wavelength-Sensitive Cones Derived from Measurements in Observers of Known Genotype. Vision Res. 2000; 40: 1711–1737. 49. Mollon JD. Specifying, Generating, and Measuring Colours. In: Carpenter RHS, Robson JG, eds. Vision Research: A Practical Guide to Laboratory Methods. Oxford: Oxford University Press, 1999, pp. 106–128. 50. Westheimer G. The Maxwellian View. Vision Res. 1966; 6: 669–682. 51. Boynton RM. Vision. In: Sidowski JB, ed. Experimental Methods and Instrumentation in Psychology. New York: McGraw-Hill, 1966, pp. 273–330. 52. Hofer HJ, Singer B, Williams DR. Different Sensations from Cones with the Same Photopigment. J. Vis. 2005; 5: 444–454. 53. Sekiguchi N, Williams DR, Brainard DH. Aberration-Free Measurements of the Visibility of Isoluminant Gratings. J. Opt. Soc. Am. A. 1993; 10: 2105–2117. 54. Kraft JM, Brainard DH. Mechanisms of Color Constancy under Nearly Natural Viewing. Proc. Natl. Acad. Sci. 1999; 96: 307–312.
PART FIVE
DESIGN EXAMPLES
CHAPTER FIFTEEN
Rochester Adaptive Optics Ophthalmoscope HEIDI HOFER, JASON PORTER, GEUNYOUNG YOON, LI CHEN, BEN SINGER, and DAVID R. WILLIAMS University of Rochester, Rochester, New York
15.1 INTRODUCTION
The Rochester Adaptive Optics Ophthalmoscope uses a Shack–Hartmann wavefront sensor with 221 lenslets and a continuous faceplate deformable mirror (Xinetics, Inc.) with 97 lead–magnesium–niobate (PMN) actuators to measure and correct the ocular wave aberration over a 6.8-mm pupil. This mirror has high enough spatial resolution to correct aberrations up to eighth-order radial Zernike modes and enough stroke, ±2 μm per actuator, to correct a maximum peak-to-valley wavefront error of 8 μm. The system operates at rates up to 30 Hz, resulting in a 0.7-Hz closed-loop bandwidth, which is high enough to track most of the temporal fluctuations in the eye’s wave aberration [1]. Temporal performance is in good agreement with predictions based on theory. Residual root-mean-square (RMS) wavefront error is typically brought below 0.1 μm for a 6.8-mm pupil. The system incorporates both a flood-illuminated retinal camera and a visual stimulus display for psychophysical experiments.
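The residual wavefront error can be translated into an image-quality figure via the extended Maréchal approximation, S ≈ exp[−(2πσ/λ)²]. This approximation is not invoked in the text, and the 550-nm wavelength below is an illustrative choice; a residual error of roughly 0.1 μm RMS is assumed:

```python
import math

def strehl_from_rms(rms_wavefront_um, wavelength_um):
    # Extended Marechal approximation: S ~ exp(-(2*pi*sigma/lambda)**2),
    # reasonable for small residual wavefront errors.
    sigma_rad = 2.0 * math.pi * rms_wavefront_um / wavelength_um
    return math.exp(-sigma_rad ** 2)

s = strehl_from_rms(0.1, 0.55)  # ~0.1-um residual RMS evaluated at 550 nm
```

Even a residual of 0.1 μm RMS leaves a Strehl ratio well below unity at visible wavelengths, which is why the residual error is monitored continuously during correction.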
Adaptive Optics for Vision Science, Edited by Porter, Queener, Lin, Thorn, and Awwal Copyright © 2006 John Wiley & Sons, Inc.
15.2 OPTICAL LAYOUT
15.2.1 Wavefront Measurement and Correction
Figures 15.1 and 15.2 show a schematic diagram and a photograph, respectively, of the optical layout of the Rochester Adaptive Optics Ophthalmoscope. This system is a modification of the original system demonstrated by Liang, Williams, and Miller [2] and is largely identical to the system described by Hofer et al. [3]. The major difference between the previous systems and the current system is that the correcting element was upgraded from a 37-channel deformable mirror (DM) to a 97-channel mirror. The Shack–Hartmann wavefront sensor beacon is a collimated superluminescent diode (SLD) with a central wavelength of ~825 nm and a beam diameter of ~1 mm. The power of the SLD at the eye’s pupil is always kept at 5 μW or less during all experiments, more than 120 times smaller than the maximum permissible exposure for continuous viewing at this wavelength [4]. SLDs have very short coherence lengths (approximately 30 μm for our SLD), resulting in much less speckle in the
Shack–Hartmann spots than when using a coherent laser source. Because it is a near-infrared source, it also provides more comfortable and safer viewing conditions for the subject due to the eye’s decreased sensitivity at longer wavelengths. Before entering the eye, the SLD shares a common path with the illumination path from the retinal imaging flash lamp. To avoid losing more light from the flash lamp than necessary, the SLD is coupled into this path using a customized dichroic optic that reflects light over a narrow band centered at the SLD wavelength and transmits light of shorter (visible) or longer (infrared) wavelengths. A fixation target conjugate with the retina also shares the flash lamp path and is coupled with an uncoated pellicle beamsplitter. To avoid unnecessary reflections in retinal images or the Shack–Hartmann spot images, the SLD–flash lamp path is coupled with the rest of the system using a 50/50 pellicle beamsplitter just before the eye’s entrance pupil. To eliminate the corneal reflection in the Shack–Hartmann images, an off-axis scheme for the SLD illumination is employed [5].
FIGURE 15.1 Schematic diagram of the Rochester Adaptive Optics Ophthalmoscope. Planes marked R and P are conjugate with the eye’s retina and pupil, respectively. Labeled components include the 825-nm superluminescent diode, pellicle beamsplitters, dichroic mirrors, off-axis parabolic mirrors, fixation target, deformable mirror, krypton flash lamp, bleaching lamp, lenslet array, CCD wavefront sensor, retinal imaging camera, focusing lens, and DMD visual stimulus display.
FIGURE 15.2 Photograph of the Rochester Adaptive Optics Ophthalmoscope in its retinal imaging mode. The major system components (deformable mirror, wavefront sensor, retinal camera) are labeled and the path that light follows once exiting the subject’s pupil is indicated with a black line. The stimulus display arm of the system is not shown in this image.
A 97-channel PMN deformable mirror (Xinetics, Inc.) is used for the correcting element. (See also Chapter 4 for a description of this type of mirror.) It is very large, about 8 cm in diameter, and requires a very long path to magnify the eye’s pupil to nearly fill the entire mirror diameter. The use of
curved mirrors instead of very large focal length lenses reduces the chromatic aberration of the system and allows the optical path to be folded to fit on the optical table without requiring extra optical components. To minimize aberrations, the two mirrors are 5° off-axis parabolic sections, with the foci at the two retinal planes located on either side of the deformable mirror. The mirror has ±2 μm of stroke for each actuator, allowing a wavefront shift of ±4 μm in the reflected beam. The wavefront sensor uses a square lenslet array with a 24-mm focal length and 0.4-mm lenslet spacing. There is a small magnification difference between the entrance pupil of the system and the plane of the lenslet array, resulting in a sampling distance of 0.384 mm in the eye’s pupil. The Shack–Hartmann spots are recorded with a 12-bit, cooled, frame transfer CCD camera (PentaMAX-512EFT) with 15-μm square pixels in a 512 × 512 array. This camera has a maximum frame rate of 15 Hz that can be increased to 30 Hz if 2 × 2 binning is used, which results in an effective pixel size of 30 μm. For wavefront correction and reconstruction, only the Shack–Hartmann spots formed by the central 221 lenslets are considered. Figure 15.3 illustrates the configuration of the lenslets and mirror actuators relative to the eye’s pupil. Although the wavefront is corrected using a direct-slope method that allows mirror actuator voltages to be determined directly from Shack–Hartmann spot displacements (without the need for reconstructing the wavefront) [6], this sampling density allows Zernike modes up to and including the 10th radial order to be reconstructed for a 6.8-mm pupil. The wavefront is usually reconstructed in order to estimate the RMS wavefront error and Strehl ratio, which are monitored in real time to assess the quality of the wavefront correction.
FIGURE 15.3 The configuration of lenslets and mirror actuators relative to the eye’s pupil in the Rochester Adaptive Optics Ophthalmoscope.
Before beginning adaptive correction, the subject roughly aligns his or her pupil by looking at the SLD superimposed on the fixation target. The pupil is held steady by means of a dental bite plate. The subject’s spherical refractive error, if any, is subjectively removed by moving the bite plate and the first lens of the system in tandem on an axial slide until the target appears to be in best focus. If the refractive error is very large or if a significant amount of astigmatism is present, trial lenses are also placed immediately in front of the eye. Lower order aberrations (defocus and astigmatism) are removed in this manner prior to adaptive correction so that the limited range of the deformable mirror is used only to reduce higher order aberrations and is not wasted on aberrations that can be removed by other means. The subject’s pupil position and focus offset are then refined by looking at the Shack–Hartmann spot image and the defocus coefficient reported by the wavefront sensor (see Fig. 15.4 for an example of the diagnostic display).
FIGURE 15.4 The diagnostic display and user interface for controlling and monitoring the adaptive optics system performance.
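The wavefront sensor software locates each lenslet’s spot by computing an intensity-weighted centroid within a search box around the expected spot position. A minimal sketch (the image, spot position, and box coordinates below are synthetic, purely for illustration):

```python
import numpy as np

def spot_centroid(image, box):
    """Intensity-weighted centroid of a Shack-Hartmann spot inside a search box.
    `box` = (row0, col0, height, width); returns (row, col) in image coordinates."""
    r0, c0, h, w = box
    sub = np.asarray(image, float)[r0:r0 + h, c0:c0 + w]
    total = sub.sum()
    rows, cols = np.indices(sub.shape)          # local pixel coordinates
    return (r0 + (rows * sub).sum() / total,    # weighted mean row
            c0 + (cols * sub).sum() / total)    # weighted mean column

# Synthetic spot centered at (12, 20) in a 32 x 32 frame
img = np.zeros((32, 32))
img[12, 20] = 4.0
img[11, 20] = img[13, 20] = img[12, 19] = img[12, 21] = 1.0
r, c = spot_centroid(img, (8, 16, 10, 10))
```

Shrinking the search boxes once the spots are stable reduces the number of pixels summed per spot, which is the computational saving mentioned in the text.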
Once the initial alignment is complete, subjects are able to start each instance of adaptive correction themselves, when ready, with a keystroke on a small computer keypad. Aberrations are usually corrected over a 6.8-mm pupil diameter but can be corrected over only the central 6 mm if the subject’s pupil is not large enough. During the experiment, a shutter prevents the SLD light from entering the eye at any time except when the wavefront is being actively measured. Once the subject begins correction, the light from the SLD enters the eye, the wavefront is measured, and the deformable mirror actuator voltages are updated at a rate between 15 and 30 Hz, depending on the needs of the particular experiment. Within 5 to 10 iterations, which require only a fraction of a second to complete, the residual RMS wavefront error usually reaches its minimal value. Correction is automatically terminated after the residual RMS wavefront error reaches a prespecified value or after a maximum number of iterations, whichever comes first. Immediately following correction, the adaptive optics computer can send a signal to either the flash lamp or a computer producing a visual stimulus, allowing a retinal image to be acquired or a stimulus to be presented. Figure 15.4 shows a picture of the user interface for the program used to monitor and control the adaptive optics system. The window named Spots (top left, Fig. 15.4) displays an image of the Shack–Hartmann spot array pattern. Superimposed on this image are the centroid locations and a grid of search boxes that defines the areas used for the centroid calculations (see also Section 6.3). Once the aberrations have been minimized and the spot locations have become stable, the size of the search boxes can be minimized to reduce the computational time required for calculating the centroid locations, thereby decreasing the delay of the system. Next to this window is the Wave Aberration window (top center, Fig.
15.4), which displays a contour plot of the current wave aberration reconstructed from the spot centroids. (In Fig. 15.4, this window shows a wavefront that is nearly perfectly flattened following adaptive compensation with the deformable mirror.) The Mirror window (middle center, Fig. 15.4) shows gray-scale values for each actuator that reflect the current actuator voltages sent to the deformable mirror and also displays whether the mirror is updating continuously (“in loop”) or is in a static state (“not in loop”). Clicking on individual actuators allows the operator to access the numerical values of the actuator voltages as well as to change the voltage sent to a particular actuator. The Console window (top right, Fig. 15.4) shows diagnostic information, such as the correction rate, the timing of individual frame measurements, when a visual stimulus or flash lamp was triggered, the current spherocylindrical correction required to minimize the second-order aberrations, and any response keys pressed by the subject. The Traces window (bottom, Fig. 15.4) shows a running trace of the values of the RMS wavefront error, Strehl ratio, and a specific Zernike coefficient. (The Zernike coefficient typically displayed is defocus. However, the user can choose to display any desired coefficient.) This information is primarily used to assess the correction
performance of the system but may also be used to determine the best initial subject refraction. Other system parameters, such as the exposure time, binning, gain, maximum number of frames for correction, and/or minimum RMS wavefront error for stopping the correction, are controlled by drop-down menus at the top of the screen. (See Chapter 6 for a more detailed discussion of the adaptive optics system computer interface.)
15.2.2 Retinal Imaging: Light Delivery and Image Acquisition
The retina is imaged with flood illumination from a krypton flash tube. The imaging wavelength is controlled by interference filters. Light at 550 and 650 nm is typically used for retinal imaging; however, both shorter (such as 500 nm) and longer (such as 900 nm) wavelengths are available with a different choice of interference filter. The flash illuminates a 1° circular field, and subjects are asked to look at specific locations on the fixation target to image different retinal eccentricities. The flash duration is set to 4 ms to avoid motion blur in retinal images due to eye movements. The intensity of the flash is controlled by changing the voltage across the tube or by changing the size of an aperture stop in the pupil plane of the flash lamp path. Changing the extent of the pupil through which light from the flash may enter the eye has a significant effect on the contrast of cone photoreceptors in the retinal images due to their waveguide properties. This phenomenon, where light entering through the pupil margins is less likely to be coupled into the cones than that entering through the pupil center, is known as the Stiles–Crawford effect [7]. Usually, images of the best contrast are obtained when the diameter of the pupil aperture controlling the flash illumination is not larger than 2 to 3 mm. With these aperture sizes, the energy of a 550-nm flash entering the eye is typically 0.3 to 0.6 μJ per flash. The light from the flash exiting the eye follows the same path through the system as the wavefront sensor beacon until it reaches a dichroic mirror just prior to the lenslet array. This mirror directs all light of wavelengths above or below the SLD wavelength toward the retinal imaging charge-coupled device (CCD) camera. The imaging path consists of an aperture stop in a pupil conjugate plane, a 60-cm focal-length achromat mounted on an electronically driven movable stage (for focusing the retinal image), and a camera to collect the retinal image.
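The contrast benefit of a small illumination aperture can be illustrated with the conventional Gaussian model of the Stiles–Crawford effect. The model and the directionality parameter ρ ≈ 0.05 mm⁻² are textbook assumptions, not values given here:

```python
def stiles_crawford_efficiency(r_mm, rho=0.05):
    # Relative luminous efficiency of light entering the pupil at radius r (mm)
    # from the peak of the Stiles-Crawford function: eta = 10**(-rho * r**2).
    # rho ~ 0.05 mm^-2 is a commonly assumed directionality parameter.
    return 10.0 ** (-rho * r_mm ** 2)

eta_center = stiles_crawford_efficiency(0.0)  # light through the pupil center
eta_edge = stiles_crawford_efficiency(3.0)    # light at the edge of a 6-mm pupil
```

Under this model, light entering 3 mm from the pupil center couples into the cones at roughly a third of the central efficiency, consistent with the preference for 2- to 3-mm illumination apertures.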
The pupil aperture stop is set so that light from only the central 6 mm of the pupil is collected in the retinal image, avoiding edge effects from the deformable mirror correction. The lens responsible for bringing the retinal image into focus on the camera is mounted on a movable stage because there is generally a difference in focus between the wavelength used for wavefront sensing and that used for imaging due to the longitudinal chromatic aberration of the human eye. In this system configuration, the focus difference is compensated by translating the lens toward the camera. The focus offset generally required to bring the photoreceptor layer into best focus when imaging at 550 nm is approximately 0.8 diopter (D). The lens may also
be moved to focus on more anterior retinal structures, such as the blood vessels and capillaries. One of the drawbacks of using this particular scheme to compensate for the effects of the eye’s longitudinal chromatic aberration is that the magnification of the path changes as the lens position is changed. In an attempt to fix the magnification of the retinal image, we plan to add an additional two lenses after the pupil plane in the imaging arm to gain access to an additional retinal plane. The telescope, consisting of the second of these new lenses and the current focusing lens, will then be mounted with the CCD camera on the same translatable plate so that the entire imaging arm of the system will move together. This will allow the focus to change while the magnification remains fixed. The camera used for acquiring the retinal image is a cooled, back-illuminated CCD camera from Princeton Instruments. It contains a 512 × 512 chip with 24-μm square pixels. Magnifications resulting from typical positions of the focusing lens make one pixel correspond to 0.10 to 0.13 min of arc. For comparison, cone photoreceptors are approximately half an arcminute in diameter in the central fovea and approximately 1 arcmin in diameter at 1° retinal eccentricity.
15.2.3 Visual Psychophysics Stimulus Display
Psychophysical experiments are conducted using a digital light projector (DLP) in the adaptive optics system. (See also Chapter 14 for more information on DLPs.) The DLP is coupled into the optical path via a mirror that is inserted between the focusing lens and the retinal imaging camera in the system’s imaging path. Visual stimuli displayed by the projector are reflected by the dichroic mirror and follow a reversed path back through the adaptive optics system. After passing through the off-axis paraboloids and the deformable mirror, these stimuli are projected onto the subject’s retina. The projector used to display visual stimuli (Compaq MP1600) contains a digital micromirror device (DMD) chip. When looking through the adaptive optics system, subjects directly view the visual stimuli displayed on the DMD chip. The color wheel and all projection optics in the path between the DMD chip and the mirror coupling the projected light into the adaptive optics system were removed from the projector. The color wheel could not be completely separated from the projector because the projector would function only if it detected a working color wheel. To remedy this problem, we removed the color wheel and attached it to a specially designed circuit, connected to the projector, which activates the wheel whenever the projector is powered. Therefore, the projector detects an operational color wheel despite its absence from the optical path. The DMD chip contains 1024 × 768 pixels, with a pixel size of 17 μm on an edge and a center-to-center spacing of 18 μm. Each pixel is a reflective micromirror that can rapidly tilt to alter the gray-scale value of its particular location in the image. Due to magnifications inherent in the optical system
and focusing lens, one pixel on the DMD typically corresponds to 0.075 to 0.1 min of arc on the retina. This relationship places a minimum of nearly five pixels across the smallest of foveal cones. Each pixel has 8 bits of intensity resolution and a response time of approximately 20 μs. Currently, we do not use any technique to enlarge the bit depth of the DMD chip, although this could be done by customizing the temporal control of the DMD chip. There are several advantages to using a DLP over a conventional cathode ray tube (CRT) monitor. (For comparisons between devices used to display visual stimuli, refer to Chapter 14.) DLPs can be controlled just like regular CRT monitors but can be made extremely bright and have good contrast levels [8]. The Compaq MP1600 has a brightness of 600 lm and a contrast ratio of 150:1. (This contrast ratio is defined as the ratio of the maximum to minimum light outputs measured with an ANSI checkerboard pattern.) Visual stimuli are generally displayed on the DMD with custom software written using MATLAB® (The MathWorks, Inc.). For experiments involving monochromatic stimuli, the wavelength of the stimulus can be controlled by placing a narrow-band filter (10- or 25-nm bandwidth) immediately in front of the DMD chip. Adjusting the focusing lens can offset the difference in chromatic aberration between the 825-nm wavefront sensing wavelength and the wavelength of the filter in front of the DMD chip. A more elegant solution is to place an appropriate trial lens in the pupil conjugate plane of the imaging arm, as shifts in the position of the focusing lens will induce changes in the magnification of the DMD on the retina. Once subjects are aligned, they are able to initiate the adaptive correction and subsequent psychophysical procedure by pressing the appropriate key on a small computer pad or by pressing the appropriate button on an altered gaming joystick.
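The pixels-per-cone figures quoted above follow from dividing cone diameter by pixel scale; a quick check using the cone sizes and pixel scales stated in the surrounding text:

```python
def pixels_per_cone(cone_diameter_arcmin, pixel_scale_arcmin_per_px):
    # Number of display pixels spanning one cone photoreceptor on the retina.
    return cone_diameter_arcmin / pixel_scale_arcmin_per_px

foveal = pixels_per_cone(0.5, 0.1)       # smallest foveal cones, coarsest DMD scale
eccentric = pixels_per_cone(1.0, 0.075)  # cones at 1 deg eccentricity, finest scale
```

With roughly five pixels across even the smallest foveal cones, the DMD samples the cone mosaic finely enough for cone-scale stimuli.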
Depending on the task, aberrations are typically corrected in real time throughout the duration of the procedure. For some psychophysical experiments, the adaptive optics system may also act as an aberration generator to simultaneously remove the subject’s native aberrations while superimposing a new subset or pattern of aberrations [9].
15.3 CONTROL ALGORITHM
The Rochester Adaptive Optics Ophthalmoscope uses a mirror control scheme in which the applied actuator voltages are determined by the direct-slope method [6]. In this method, an influence function is computed for each mirror actuator (see also Section 5.3 for a more detailed description). Each influence function specifies the Shack–Hartmann spot displacements caused by the movement of a single mirror actuator as a function of its applied voltage. These influence functions are then combined to construct a single matrix relating the Shack–Hartmann spot displacements directly to the actuator voltages. This method has the advantage that it is extremely quick, requiring only a single matrix multiplication, and does not require the reconstruction
of the wavefront shape in terms of Zernike polynomials or other basis functions. Therefore, it is less sensitive to fitting errors and edge artifacts. The direct-slope control method is also somewhat self-calibrating and impervious to small misalignments between the deformable mirror and the lenslet array. A simple proportional control scheme is used to correct the wavefront and maintain the correction. In each iteration, the actuator voltages required to null the wavefront are calculated from the Shack–Hartmann spot displacements, and then a fraction of these voltages, set by the gain, is applied. The gain of the system is an adjustable parameter, with a 30% gain usually providing the best results for our system. In a noisy system or a system with an inadequate sampling rate, too high a gain will result in an unstable correction. However, too low a gain will also result in a poor correction, as the system requires a long time to complete the correction and lacks the agility to deal with rapid changes in the wavefront. The integration time of the wavefront sensor CCD camera is an adjustable system parameter that also affects both the rate and delay time of the system. If the light level of the Shack–Hartmann spots is high, then four pixels can be binned into one without affecting the accuracy of the Shack–Hartmann spot centroiding process, and a 33-ms camera integration time is adequate. In this case, the system corrects at a rate of 30 Hz. Under low light levels, or if the quality of the Shack–Hartmann spots is poor, a longer integration time may be necessary to reduce noise. The wavefront sensor camera runs in a double-buffer mode, which means that one frame transfers to the computer while the other frame is integrating. Thus there is a delay time due to image transfer equal to the camera integration time. In addition, there is a delay due to the time it takes to calculate the Shack–Hartmann spot centroids from the image data.
When binning the camera pixels and using a 33-ms camera integration time, the total delay between the end of the frame integration and the application of the new actuator voltages is 67 ms. This control method can also be used to induce a particular pattern of aberrations as well as to remove the wavefront error. In this case, the Shack–Hartmann spot displacements that would result from the desired aberration profile are computed, and the mirror actuator voltages necessary to null the difference between these and the actual locations of the Shack–Hartmann spots are applied.
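The direct-slope method and proportional update described above can be sketched as a toy simulation. Everything here is illustrative rather than the system's actual code: the matrix sizes, the random "eye" disturbance, and the `measure_slopes` stand-in for the sensor readout are all assumptions. A nonzero `target` turns the same loop into the aberration generator described in the last paragraph:

```python
import numpy as np

def build_influence_matrix(measure_slopes, n_actuators, poke=0.1):
    """Record each actuator's influence function: the change in Shack-Hartmann
    slopes per unit voltage when that actuator alone is poked."""
    rest = measure_slopes(np.zeros(n_actuators))
    cols = []
    for j in range(n_actuators):
        v = np.zeros(n_actuators)
        v[j] = poke
        cols.append((measure_slopes(v) - rest) / poke)
    return np.column_stack(cols)                 # shape (2K, M)

def control_step(A_pinv, slopes, voltages, gain=0.3, target=None):
    """One proportional-control iteration: apply a fraction (the gain) of the
    voltages that would null the slope error. A nonzero `target` drives the
    slopes toward a desired aberration pattern instead of zero."""
    error = slopes - (target if target is not None else 0.0)
    return voltages - gain * (A_pinv @ error)

# --- toy closed loop: 40 slope measurements, 10 actuators, a static "eye" ---
rng = np.random.default_rng(0)
A_true = rng.standard_normal((40, 10))
eye_aberration = rng.standard_normal(10)         # expressed in actuator space

def measure_slopes(v):                           # stand-in for the sensor readout
    return A_true @ (eye_aberration + v)

A = build_influence_matrix(measure_slopes, 10)
A_pinv = np.linalg.pinv(A)

v = np.zeros(10)
for _ in range(30):
    v = control_step(A_pinv, measure_slopes(v), v, gain=0.3)
print(np.linalg.norm(measure_slopes(v)))         # residual slopes, driven toward zero
```

With a 30% gain the residual shrinks by a factor of 0.7 per iteration in this idealized, noise-free setting, which is why too low a gain converges slowly while measurement noise in practice penalizes too high a gain.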
15.4
WAVEFRONT CORRECTION PERFORMANCE
15.4.1
Residual RMS Errors, Wavefronts, and Point Spread Functions
Within a fraction of a second from the start of adaptive correction, the eye's residual RMS wavefront error can usually be reduced to 0.06 to 0.10 µm over a 6.8-mm pupil, depending on the individual subject. Figure 15.5 illustrates
[Figure 15.5 panels (subject GYY, 6.8-mm pupil): wave aberration (left) and point spread function (right), without and with aberration compensation.]
FIGURE 15.5 Improvement in the eye's wave aberration and PSF obtained for one subject by correcting the eye's aberrations with the Rochester Adaptive Optics Ophthalmoscope. Panels on the left show the measured wave aberration for one subject over a 6.8-mm pupil before and after aberrations were compensated for. Contour lines occur at single wavelength intervals (λ = 550 nm). Panels on the right show the associated PSFs calculated from the wave aberration at a wavelength of 550 nm. Correcting the eye's aberrations greatly improves the compactness of the PSF.
the improvement afforded by the adaptive optics system on the wave aberration and its associated point spread function (PSF) for one subject. The PSF was calculated from the measured wave aberration for a wavelength of 550 nm. Without adaptive correction, the RMS wavefront error over a 6.8-mm pupil was 1.3 µm and the PSF was very irregular and distended. After aberrations were corrected, the residual RMS wavefront error was reduced to 0.09 µm and the PSF was distinctly sharpened.
15.4.2
Temporal Performance: RMS Wavefront Error
The expected performance given the usual system parameters can be calculated from a simple temporal model if a few assumptions are made. The first assumption is that the deformable mirror has adequate spatial resolution to reconstruct the eye's aberration profile, and the second is that the sampling rate of the system is high enough to accurately measure the dynamics of the eye's aberrations. The first assumption should be approximately true, since the 97-channel mirror has enough spatial resolution to reconstruct aberrations up to eighth-order radial modes and the typical eye does not contain aberrations in higher-order modes substantial enough to significantly impact image
quality [10]. The second assumption is believed to be true because the eye's dynamics show only negligible activity above approximately 6 Hz [1]. Thus a rate of only 12 Hz should be needed to adequately capture the eye's dynamics, which is well below the sampling rates used. The effects of noise are subsequently ignored. The model consists of the following components: a wavefront sensor (integrator), frame readout and slope calculation (simple delay), and mirror update (discrete proportional control). Since the 4-kHz speed of the mirror is much faster than any other element in our system, we assume that the mirror responds instantaneously to changes in voltage signals. These components have the following Laplace transforms (see also Section 8.5.1 for more details on modeling temporal performance):

• Wavefront sensor:
$$\frac{1 - e^{-sT}}{sT} \tag{15.1}$$
(integration over the exposure time T = 33 ms)

• Delay:
$$e^{-s\tau_c} \tag{15.2}$$
($\tau_c$ = 67 ms and includes the time for CCD frame transfer and slope calculation)

• Simple digital proportional mirror control. The z transform of the deformable mirror is
$$\frac{Kz}{z - 1} \qquad (K = \text{mirror loop gain}) \tag{15.3}$$
with continuous Laplace transform equivalent
$$\frac{K}{1 - e^{-sT}} \tag{15.4}$$
(new mirror voltages are applied once every T = 33 ms)

• Zero-order hold:
$$\frac{1 - e^{-sT}}{sT} \tag{15.5}$$
(mirror voltages are also held constant over each sampling interval T = 33 ms)

These terms can be combined to calculate the total closed-loop and open-loop system transfer functions, which allows the system correction bandwidth to be predicted. The correction bandwidth of the system is defined as the temporal frequency where these functions are equal. Fluctuations in the eye's aberrations at frequencies lower than this are reduced by the system, while
[Figure 15.6 plot: averaged power versus temporal frequency (Hz) on log-log axes, with open-loop and closed-loop correction curves crossing at the closed-loop bandwidth of ~0.7 Hz.]
FIGURE 15.6 Log-log plot of the temporal power spectra of the eye's residual RMS wavefront error during open-loop and closed-loop aberration correction. The correction bandwidth of the system is defined as the temporal frequency where these curves cross, here just slightly higher than 0.7 Hz. Fluctuations in the eye's aberrations at frequencies lower than this are reduced by the system, while fluctuations with higher frequency components are somewhat exacerbated. These data were taken with a system gain of 30% and a rate of 21 Hz. Results are averaged across three subjects, with 10 runs per subject.
fluctuations with higher frequency components are somewhat exacerbated. The gain, K, used in the model (K = 28%) was the optimal gain as determined by Bode analysis given the various system parameters. Figure 15.6 shows the open-loop and closed-loop temporal power spectra of the measured residual RMS wavefront error. Empirically, we determined that a 30% mirror loop gain provided optimal performance, consistent with the model's predicted optimal gain. The model predicts a correction bandwidth of 0.9 Hz, close to the ~0.7 Hz bandwidth we observe empirically.
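The model above can be evaluated numerically. This is an illustrative calculation, not the authors' code: it forms the open-loop transfer function as the product of Eqs. (15.1), (15.2), (15.4), and (15.5) and estimates the bandwidth from the frequency where the error-rejection magnitude |1/(1 + G)| reaches 1. That is a slightly different (though related) definition from the power-spectrum crossing used in the text, so the value it prints differs somewhat from the quoted 0.9 Hz:

```python
import numpy as np

T = 0.033      # wavefront sensor exposure / mirror update interval (s)
tau_c = 0.067  # readout plus centroid-calculation delay (s)
K = 0.3        # mirror loop gain

def open_loop(f):
    """Open-loop transfer function G at s = i*2*pi*f, as the product of
    Eqs. (15.1), (15.2), (15.4), and (15.5)."""
    s = 2j * np.pi * f
    wfs = (1 - np.exp(-s * T)) / (s * T)       # Eq. (15.1): sensor integration
    delay = np.exp(-s * tau_c)                 # Eq. (15.2): readout delay
    control = K / (1 - np.exp(-s * T))         # Eq. (15.4): proportional control
    zoh = (1 - np.exp(-s * T)) / (s * T)       # Eq. (15.5): zero-order hold
    return wfs * delay * control * zoh

def rejection(f):
    """Closed-loop error rejection |1/(1+G)|; values below 1 mean the
    fluctuation at that frequency is attenuated by the loop."""
    return abs(1 / (1 + open_loop(f)))

freqs = np.linspace(0.05, 5.0, 1000)
rej = rejection(freqs)
bandwidth = freqs[np.argmax(rej >= 1.0)]       # first frequency with no attenuation
print(f"predicted correction bandwidth ~ {bandwidth:.2f} Hz")
```

With these parameters the crossing lands at roughly 1 Hz, the same order as the text's prediction; the exact number depends on which bandwidth definition is used.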
15.5
IMPROVEMENT IN RETINAL IMAGE QUALITY
Wavefront measurements provide a theoretical estimate of the benefit of adaptive correction on the eye's optical performance. However, the actual benefit may be worse than predicted by wavefront measurements due to unaccounted-for non-common-path errors, errors in wavefront reconstruction, or other reasons. The actual benefit of adaptive correction is reflected in the improvement in the quality of retinal images acquired before and after adaptive correction. Figure 15.7 shows two representative single images of
[Figure 15.7 panels (subject AP, 1° retinal eccentricity): without adaptive optics (single image), with adaptive optics (single image), and with adaptive optics (sum of 31 images).]
FIGURE 15.7 Improvement in the quality of retinal images afforded by the Rochester Adaptive Optics Ophthalmoscope. The left panel shows a single image of the photoreceptor mosaic acquired without adaptive compensation; this is the best image quality that can be achieved with conventional means. Even though this subject possesses superior optical quality, only hints of the structure of the mosaic can be seen without aberration correction. The middle panel shows a single image of the same section of the mosaic acquired after the aberrations have been compensated. After the aberrations have been corrected, individual receptors can be clearly resolved. The right panel shows the further improvement in image quality that can be achieved by averaging multiple (31) images. All images were acquired at 1° temporal retinal eccentricity at a wavelength of 550 nm.
precisely the same retinal location, ~1° temporal retina, taken for one subject with and without aberrations corrected, as well as an image showing the further improvements in image quality that can be achieved by adding together many individual images acquired after aberrations have been corrected. This particular subject has superior optical quality compared with the majority of subjects, yet there is still a dramatic improvement in the quality of the retinal images taken with adaptive optics. Cones that are barely detectable in the leftmost image are clearly visible in the images acquired after aberrations were corrected. The improvement in image quality is so dramatic that it is possible to routinely image cones in nearly all subjects and, for some subjects, even obtain clear images of cones in the central fovea, as seen in Figure 15.8.
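Summing many frames, as in the right panel of Figure 15.7, requires aligning the frames first, since the eye moves between exposures. The text does not describe the registration method used; the sketch below uses FFT-based cross-correlation on synthetic data purely as an illustration of why averaging registered frames improves the signal-to-noise ratio:

```python
import numpy as np

def register_shift(ref, img):
    """Estimate the integer (row, col) translation that aligns img to ref,
    via FFT-based cross-correlation (a stand-in for the actual method used)."""
    xcorr = np.fft.ifft2(np.fft.fft2(ref) * np.conj(np.fft.fft2(img)))
    peak = np.unravel_index(np.argmax(np.abs(xcorr)), xcorr.shape)
    # unwrap peaks past the halfway point into negative shifts
    return tuple(p if p <= s // 2 else p - s for p, s in zip(peak, xcorr.shape))

def average_registered(frames):
    """Align each frame to the first and average; noise drops ~ sqrt(n)."""
    ref = frames[0]
    acc = np.zeros_like(ref, dtype=float)
    for img in frames:
        dy, dx = register_shift(ref, img)
        acc += np.roll(img, (dy, dx), axis=(0, 1))
    return acc / len(frames)

# synthetic demo: shifted, noisy copies of one "retinal" frame
rng = np.random.default_rng(1)
truth = rng.random((64, 64))
frames = [np.roll(truth, (dy, dx), axis=(0, 1)) + 0.3 * rng.standard_normal((64, 64))
          for dy, dx in [(0, 0), (2, -3), (-1, 4)]]
avg = average_registered(frames)
print(np.std(avg - truth), np.std(frames[0] - truth))  # residual noise shrinks
```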
15.6
IMPROVEMENT IN VISUAL PERFORMANCE
After correcting the eye’s higher order aberrations with the Rochester Adaptive Optics Ophthalmoscope using the initial 37-channel deformable mirror (instead of the 97-channel mirror currently used), subjects’ contrast sensitivity
FIGURE 15.8 Montage illustrating a foveal patch subtending approximately 2° in a living human retina. Several 1° retinal images, taken with adaptive optics at a wavelength of 550 nm, were combined to form this montage. The approximate location of the subject's foveal center is marked with a +. Photoreceptor size becomes visibly larger with increasing distance from the foveal center. Scale bar represents 100 µm. (From Roorda and Williams [12]. Reprinted with permission from SLACK Inc.)
for a large (6-mm) pupil was increased by about a factor of 2 in broadband illumination [11]. In monochromatic light, which avoids the eye's chromatic aberration, the improvement in contrast sensitivity was higher, approximately 3 to 5 times above that obtained when correcting only defocus and astigmatism. These improvements were well matched by those expected given the improvement calculated in the eye's modulation transfer function from the measured wave aberration before and after correction. These measurements were also performed under open-loop correction only. While no similar contrast sensitivity data have been acquired with the Rochester Adaptive Optics Ophthalmoscope since incorporating the 97-channel deformable mirror, it is reasonable to assume that even greater improvements would be seen if the same experiment were performed now with the higher resolution mirror and closed-loop correction. Figure 15.9 shows the improvement in visual acuity when correcting the eye's higher order aberrations with the Rochester Adaptive Optics Ophthalmoscope. The figure shows visual acuity for three subjects in monochromatic (550-nm) and white light for a 6-mm pupil with and without higher order aberrations corrected. These data were acquired before incorporating the 97-channel deformable mirror, and the measurements were taken under open-loop conditions (only a static aberration compensation was employed).
FIGURE 15.9 Improvement in visual acuity afforded by the Rochester Adaptive Optics Ophthalmoscope. This plot shows visual acuity for three subjects in monochromatic (550-nm) and white light for a 6-mm pupil with higher order aberrations corrected and with only defocus and astigmatism corrected (without correction). Acuity was measured using a four-alternative-forced-choice orientation discrimination procedure with an illiterate E. These measurements were made under open-loop conditions (i.e., only a static aberration compensation was employed). Even so, all subjects showed a marked improvement in acuity after aberrations were compensated for with the adaptive optics system, achieving an ultimate acuity of approximately 20/10. Acuity was slightly worse after aberrations were corrected in white light (white bars) than in monochromatic light (black bars) due to the impact of the eye’s chromatic aberration. Monochromatic acuity before aberration correction is not shown because it is not significantly different from acuity in white light (due to the impact of the eye’s higher order aberrations).
Even so, all subjects showed a marked improvement in acuity after higher order aberrations were corrected, achieving an acuity of approximately 20/10. Acuity was slightly worse after aberrations were corrected in white light than in monochromatic light due to the impact of the eye’s chromatic aberration.
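Several of the comparisons above rest on computing the PSF (and, via its Fourier transform, the MTF) from a measured wave aberration. A minimal Fourier-optics sketch of that computation follows; the defocus-like aberration and grid sizes are illustrative toy values, not measured data:

```python
import numpy as np

def psf_from_aberration(opd_um, pupil_mask, wavelength_um=0.55, pad=4):
    """PSF as |FFT of the pupil function|^2, normalized to unit energy.
    opd_um: optical path difference map (microns) over the pupil grid."""
    phase = 2 * np.pi * opd_um / wavelength_um
    pupil = pupil_mask * np.exp(1j * phase)
    n = pupil.shape[0] * pad                      # zero-pad for finer sampling
    field = np.fft.fftshift(np.fft.fft2(pupil, s=(n, n)))
    psf = np.abs(field) ** 2
    return psf / psf.sum()

def strehl(psf_aberrated, psf_perfect):
    """Strehl ratio: aberrated PSF peak relative to the diffraction-limited peak."""
    return psf_aberrated.max() / psf_perfect.max()

# toy example: quadratic (defocus-like) OPD over a circular pupil
N = 64
y, x = np.mgrid[-1:1:N * 1j, -1:1:N * 1j]
mask = (x**2 + y**2 <= 1).astype(float)
opd = 0.5 * (x**2 + y**2) * mask                  # ~0.5 um of error (illustrative)

psf_perfect = psf_from_aberration(np.zeros((N, N)), mask)
psf_bad = psf_from_aberration(opd, mask)
print(strehl(psf_bad, psf_perfect))               # < 1: aberration spreads the PSF
```

The MTF used for the contrast sensitivity comparison is then the magnitude of the Fourier transform of such a PSF.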
15.7
CURRENT SYSTEM LIMITATIONS
There are several factors that limit the correction that can be provided by the Rochester Adaptive Optics Ophthalmoscope or that impact the ability to image the retina or perform psychophysical studies. The limited ranges of both the Shack–Hartmann sensor and the deformable mirror affect the correction ability. For subjects with very high wavefront errors, spots in the Shack–Hartmann spot image can potentially overlap. This makes it
impossible to tell which spot corresponds to which lenslet and, therefore, to calculate the appropriate mirror voltages or reconstruct the wavefront. In practice, however, this is not a major limitation, since subjects for whom this is problematic tend to have wavefront errors so large that the deformable mirror does not have enough stroke to correct them. Based on aberration data from a population of 70 pre-LASIK patients, it is estimated that a deformable mirror stroke of 26 µm (~53 µm in the wavefront) is needed over a 7.5-mm pupil to correct the eye's total wavefront error without the need for correcting defocus and astigmatism with trial lenses (or other means) beforehand (see also Section 4.5). The Rochester Adaptive Optics Ophthalmoscope has a range of 8 µm in the wavefront, allowing few subjects to be corrected in this manner. If the investigator does not mind performing a refraction beforehand, then a range of only 5.5 µm (11 µm in the wavefront) is needed. In this case, the Rochester Adaptive Optics Ophthalmoscope will be able to correct the higher order aberrations in approximately 80% of subjects. The Rochester Adaptive Optics Ophthalmoscope, while providing impressive correction of the eye's wavefront error, also does not use the best control methods available. Currently, only proportional control is used. Using a more sophisticated algorithm, such as a method incorporating integral as well as derivative control, should result in an even quicker and better correction (see Chapter 5 for more on different control algorithms). The changing magnification of the visual stimulus/imaging path as different amounts of chromatic aberration are compensated for in the Rochester Adaptive Optics Ophthalmoscope is a factor that potentially limits correction ability as well as causing inconvenience during psychophysical and imaging experiments.
Correction ability is limited because the changing position of the focusing lens makes it difficult to maintain alignment along the length of the lens travel. In addition, since the optical configuration changes as the lens moves, non-common-path aberrations also change. This makes it difficult to characterize the non-common-path aberrations and correct them. This configuration causes inconveniences during imaging or psychophysical experiments because the angle subtended by a pixel of either the retinal imaging CCD camera or the visual stimulus display depends upon the focus offset used. Even for the same wavelength, individual subjects may require slightly different focus offsets to bring a retinal image or visual stimulus into focus. This means that retinal image magnification or stimulus magnification must be recalibrated in real time as a function of the focus adjustment needed. For retinal imaging applications, the flood-illuminated scheme utilized in the Rochester Adaptive Optics Ophthalmoscope is excellent for obtaining en face images of blood vessels or the photoreceptor mosaic. However, this scheme is not well suited for other applications that require the optical sectioning of retinal tissue or the imaging of other retinal structures, since there is no means of rejecting light originating from other depth planes. Because of the particular CCD camera and light source used for retinal imaging, the Rochester Adaptive Optics Ophthalmoscope is also unable to
acquire real-time images of the retina. The Rochester Adaptive Optics Ophthalmoscope does have the ability to deliver visual stimuli in real time. However, if this is to be done while correcting aberrations, the eye’s wavefront error must also be measured in order to maintain correction while the visual stimuli are being displayed. This means the SLD wavefront sensor beacon is visible to the subject during the psychophysical task and is superimposed on the visual stimulus field. While the SLD is infrared and rather dim, it is still visible to the subject and could potentially interfere with the perception of visual stimuli during some experiments. To minimize its influence, the beacon is often displaced slightly from the center of the stimulus so that aberrations are measured and corrected just slightly off-axis relative to the visual axis [5]. This does not significantly impact the correction afforded by the adaptive optics system since the eye’s aberrations do not vary significantly over small field angles (see also Chapter 10).
15.8
CONCLUSION
Dynamically correcting the eye's aberrations using the Rochester Adaptive Optics Ophthalmoscope provides excellent optical system performance and significant improvements in retinal image quality and visual performance. The system measures the eye's aberrations using an SLD (λ = 825 nm) and a Shack–Hartmann wavefront sensor with 221 lenslets (f = 24 mm, d = 400 µm) and corrects for them using a 97-channel Xinetics continuous faceplate deformable mirror with ±2 µm of mirror stroke. Aberration measurement and correction take place at a rate of up to 30 Hz over a 6.8-mm pupil diameter, providing a closed-loop bandwidth of approximately 0.7 Hz. Residual RMS wavefront errors after correction are typically better than 0.1 µm and can be obtained in 0.25 to 0.50 s. After achieving an adequate correction, visual psychophysics or retinal imaging is conducted over the central 6 mm of the 6.8-mm pupil to avoid edge artifacts in the adaptive optics (AO) correction. A DLP is used as the stimulus display for the visual psychophysics experiments. Flood-illuminated retinal imaging is done using a krypton flash lamp (with 4-ms flashes) combined with appropriate interference filters. The Rochester Adaptive Optics Ophthalmoscope continues to serve as an excellent instrument for conducting clinical and basic scientific research.
REFERENCES
1. Hofer H, Artal P, Singer B, et al. Dynamics of the Eye's Aberrations. J. Opt. Soc. Am. A. 2001; 18: 497–506.
2. Liang J, Williams DR, Miller DT. Supernormal Vision and High-Resolution Retinal Imaging through Adaptive Optics. J. Opt. Soc. Am. A. 1997; 14: 2884–2892.
3. Hofer H, Chen L, Yoon GY, et al. Improvement in Retinal Image Quality with Dynamic Correction of the Eye's Aberrations. Opt. Express. 2001; 8: 631–643.
4. ANSI. American National Standard for the Safe Use of Lasers. ANSI Z136.1. Orlando, FL: Laser Institute of America, 2000.
5. Williams DR, Yoon G. Wavefront Sensor with Off-Axis Illumination. US Patent 6,264,328 B1. July 24, 2001.
6. Jiang W, Li H. Hartmann–Shack Wavefront Sensing and Control Algorithm. In: Schulte-in-den-Baeumen JJ, Tyson RK, eds. Adaptive Optics and Optical Structures. Proceedings of the SPIE. 1990; 1271: 82–93.
7. Stiles WS, Crawford BH. The Luminous Efficiency of Rays Entering the Eye Pupil at Different Points. Proc. R. Soc. Lond. B. 1933; 112: 428–450.
8. Packer O, Diller LC, Verweij J, et al. Characterization and Use of a Digital Light Projector for Vision Research. Vision Res. 2001; 41: 427–439.
9. Chen L, Singer B, Guirao A, et al. Image Metrics for Predicting Subjective Image Quality. Optom. Vis. Sci. 2005; 82: 358–369.
10. Liang J, Williams DR. Aberrations and Retinal Image Quality of the Normal Human Eye. J. Opt. Soc. Am. A. 1997; 14: 2873–2883.
11. Yoon GY, Williams DR. Visual Performance after Correcting the Monochromatic and Chromatic Aberrations of the Eye. J. Opt. Soc. Am. A. 2002; 19: 266–275.
12. Roorda A, Williams DR. Retinal Imaging Using Adaptive Optics. In: Krueger RR, Applegate RA, MacRae SM, eds. Wavefront Customized Visual Correction: The Quest for Super Vision II. Thorofare, NJ: SLACK, 2004, pp. 43–51.
CHAPTER SIXTEEN
Design of an Adaptive Optics Scanning Laser Ophthalmoscope
KRISHNAKUMAR VENKATESWARAN, Alcon Research Ltd., Orlando, Florida
FERNANDO ROMERO-BORJA, Houston Community College, Houston, Texas
AUSTIN ROORDA, University of California, Berkeley, Berkeley, California
16.1 INTRODUCTION
Images taken of human retinas with a scanning laser ophthalmoscope (SLO) can rarely resolve features as small as cone photoreceptors, nor can the axial sections reveal the individual retinal layers. This is because image quality in SLOs is limited by the aberrations of the eye, which leave lateral and axial resolution at about 5 and 300 µm, respectively [1, 2]. For comparison, cone photoreceptors in the center of the fovea are as small as 2 µm [3] and the retinal thickness is about 300 µm. By combining adaptive optics (AO) with the smallest possible confocal pinhole at the imaging plane, the imaging resolution of the SLO is increased dramatically [4]. For example, in a typical human eye, using a light source of wavelength 660 nm, a 5.81-mm pupil, an 80-µm confocal pinhole, and adaptive optics, we can achieve a
lateral resolution of about 2.5 µm and axial resolution as low as 100 µm. This approaches an order of magnitude decrease in the volume resolution element. This chapter provides a detailed description of a current AO scanning laser ophthalmoscope. Henceforth in our discussions, we will refer to the AO system combined with the SLO as the AOSLO (see also Chapter 10). Figure 16.1 shows the actual layout of the AOSLO [4]. The AOSLO occupies a 1.5 m × 1.4 m area on an optical table. The different components in the AOSLO can be considered as different modules, which derive signals from each other and operate in a closed loop. Based on the order in which the light passes through the system, the main components are (1) light delivery optics, (2) wavefront compensation optics, (3) raster scanning mirrors, (4) wavefront sensor, (5) light detector, and (6) image recording. Another major aspect of the design is the relay optics, which optically connect these different components. The details of the relay optics are explained as we discuss the optical layout, starting from light delivery and proceeding to light detection. Detailed references are provided at the end of this chapter. We also present some of the results from the present AOSLO and discuss a few ways of improving the AOSLO system, which will be useful for developing next-generation AOSLO imaging systems. In all our discussions of Zernike polynomials, we adhere to the VSIA (Vision Science and Its Applications) standards on reporting aberrations [5].
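The "order of magnitude" claim follows directly from the resolution numbers just quoted; as a quick illustrative check:

```python
# Volume resolution element: squared lateral gain times axial gain.
lateral_before, lateral_after = 5.0, 2.5     # um, from the text
axial_before, axial_after = 300.0, 100.0     # um, from the text

volume_gain = (lateral_before / lateral_after) ** 2 * (axial_before / axial_after)
print(volume_gain)  # 12.0 -> roughly an order of magnitude smaller volume element
```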
[Figure 16.1 schematic: (1) light delivery, (2) wavefront sensing, (3) wavefront compensation, (4) raster scanning, (5) light detection, and (6) frame grabbing modules, connected by relay optics with labeled retinal (r) and pupil (p) planes.]
FIGURE 16.1 Optical layout of the adaptive optics scanning laser ophthalmoscope. The retinal and pupil planes along with the components are labeled. L, lenses; M, mirrors; VS, vertical scanner; HS, horizontal scanner; BS, beamsplitter; DM, deformable mirror; AOM, acousto-optic modulator; CP, confocal pinhole; AP, artificial pupil; LA, lenslet array; FO, fiber optic; PMT, photomultiplier tube; CCD, charge-coupled device camera; r and p are the retinal and pupil planes, respectively.
16.2
LIGHT DELIVERY
The light from a diode laser is coupled to a single-mode optical fiber. The wavelength of the light delivered is 660 nm. The tip of this optical fiber provides a point source, which is then collimated using a 30-mm focal length achromatic doublet. The lens is followed by a variable neutral density filter, which allows us to control the laser light levels at the imaging plane. The beam is then focused with a 150-mm focal length achromat into an acousto-optic modulator (AOM). The optical path after the AOM is aligned so that the first diffracted order of the AOM is passed through the system. This allows us to control the light into the system and is used primarily to limit exposure of the retina to only those times when data are being recorded. The signal that drives the AOM is the same signal that is used to gate each recorded line in the video image. The first-order diffracted beam output from the AOM is recollimated using a second 150-mm focal length achromatic lens. The collimated beam is passed through an iris diaphragm, which blocks the zeroth-order diffracted beam from the AOM and serves as the entrance pupil for the system. After the entrance pupil, the beam is introduced into the primary SLO path with a glass wedge beamsplitter, preventing ghost reflections. The beamsplitter is about 5% reflective, which allows 95% of the returning light to be collected for wavefront sensing and imaging. Safety limits on the exposure of the retina to radiation prevent us from increasing the source power arbitrarily. The light levels to which the retina will be exposed are kept 10 or more times lower than the maximum permissible exposure specified by the American National Standards Institute for the safe use of lasers [6]. The AOSLO presently operates with about 30 µW of laser power at the corneal plane at a wavelength of 660 nm and a duty cycle of 40%. The most significant light loss occurs in the eye itself.
The reflectivity, scattering, and absorption of light in the eye vary between individuals [7], and the signal-to-noise ratio in the final image depends highly on the optical properties of the subject's eye. The amount of light reflected off the human retina constrains the total light available for wavefront sensing and imaging. Although SLOs capture images in a different manner than traditional imaging systems, there is still a possibility that the coherence of the light source will generate a type of speckle, affecting the photometry and reducing the signal-to-noise ratio of the images. We are currently investigating low-coherence laser sources to remove these artifacts.
16.3
RASTER SCANNING
The beam is scanned on the retina with a resonant and galvanometric scanner combination. The horizontal scanning mirror, which is a resonant scanner that operates at a 16 kHz line frequency, is the master timer for the system.
The vertical scanning mirror synchronizes with the horizontal scanning mirror to provide 525 lines per frame. The vertical scanner operates in a sawtooth pattern in which about 480 lines make up the image frame, and the remaining 45 lines occur in the time it takes the vertical scanner to return to the top of the frame. With a 16-kHz line frequency and 525 lines per frame, the SLO runs at a rate of about 30 frames per second. Both the horizontal and vertical scanners are placed conjugate to each other and to the entrance pupil plane of the eye, which is the pivot point of the raster scanning beam. The amplitude of the scan in the present setup can be adjusted to a field size from about 3° × 3° to 1° × 1°. The signals acquired by the frame grabber board are recorded as image frames. The horizontal synchronization pulse (hsync) and the vertical synchronization pulse (vsync) that define the frame are provided to the frame grabber by converting the analog outputs from the scanner units into transistor-transistor logic (TTL) pulses. The sizes of the horizontal and vertical scan mirrors are 3 × 3 mm and 12 × 5 mm, respectively. The resonant scanner used is a product of Electro-Optics Products Corp. This will be discussed in more detail in the section on image recording.
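The frame-rate figures above follow directly from the scanner timing:

```python
# Frame-timing arithmetic for the raster scan (values from the text).
line_rate_hz = 16_000        # resonant (horizontal) scanner line frequency
lines_per_frame = 525        # total lines, including vertical retrace
active_lines = 480           # lines that form the visible image frame

frame_rate = line_rate_hz / lines_per_frame
print(round(frame_rate, 1))            # ~30.5 frames per second
print(lines_per_frame - active_lines)  # 45 lines spent on vertical retrace
```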
16.4
ADAPTIVE OPTICS IN THE SLO
16.4.1 Wavefront Sensing
To compensate for the aberrations introduced by the eye's optics, we use adaptive optics. The first step in AO is to measure the aberrations, which is typically done with a Shack–Hartmann wavefront sensor. The lenslet array is made up of 24-mm focal length lenslets, each with an actual diameter of 400 µm. There is a magnification of 1.21× between the pupil of the eye and the wavefront sensor, so the lenslets project to a size of 331 µm in the eye. The sampled wavefront is decomposed mathematically into Zernike modes describing the wavefront. In the present AOSLO geometry, the wavefront sensor has square subapertures, 17 across the diameter, with a total of 241 lenslets inside a 7-mm pupil (see Fig. 16.2). The centroid of the focused spot from each lenslet is estimated by calculating the first moment of intensity at every spot location. The accuracy of the centroid location algorithm depends on the signal-to-noise ratio of the focused spots at the focal plane of the different lenslets. With all the light loss due to the absorption and scattering of light in the optics of the AOSLO remaining the same, the absorption of the human retina dictates the required exposure time for Shack–Hartmann slope measurements, which in turn dictates the frequency of the closed-loop system in the AOSLO. As mentioned earlier, retinal absorption differs between individuals. For more detailed discussions on wavefront sensing, the reader is referred to Chapter 3.
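The first-moment (center-of-mass) centroid estimate described above can be written directly. The sketch below runs on a synthetic Gaussian spot; the spot shape and subaperture size are illustrative choices, not system parameters:

```python
import numpy as np

def first_moment_centroid(spot):
    """Centroid of one Shack-Hartmann spot as the first moment of intensity."""
    total = spot.sum()
    ys, xs = np.indices(spot.shape)
    return (ys * spot).sum() / total, (xs * spot).sum() / total

# synthetic Gaussian spot centred at (10.5, 13.25) on a 24x24 subaperture
yy, xx = np.indices((24, 24))
spot = np.exp(-(((yy - 10.5) ** 2) + ((xx - 13.25) ** 2)) / (2 * 2.0 ** 2))
cy, cx = first_moment_centroid(spot)
print(cy, cx)  # close to (10.5, 13.25)
```

Because the estimate is an intensity-weighted average, background noise in the subaperture biases it, which is why the text ties centroiding accuracy to the signal-to-noise ratio of the spots.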
FIGURE 16.2 Shack–Hartmann wavefront sensor output image. A total of 241 lenslets sample the wavefront; each spot is the focused image of the portion of the wavefront sampled by one lenslet.
16.4.2
Wavefront Compensation Using the Deformable Mirror
A 37-channel Xinetics deformable mirror (DM) is placed in the optical path conjugate to the entrance pupil of the eye. By placing the DM before the raster scanners, the size of the mirrors required for relaying the light through the system is minimized. Minimizing the mirror sizes allows for smaller reflection angles, which reduces the inherent aberrations in the system. The diameter of the DM is 47 mm, and therefore the pupil has to be magnified to fill the mirror aperture. The size of the DM is the primary reason for the large size of the instrument. Aberrations are compensated on both the ingoing and outgoing light paths. Correcting the wave aberrations on the way into the eye helps in presenting a compact spot on the retina and results in increased resolution of features in the retina. Correcting the aberrations on the way out helps to focus the light to a compact spot in the confocal plane, resulting in higher axial resolution with increased light throughput from the scattering layer being imaged.
16.4.3
16.4.3 Mirror Control Algorithm
The wavefront sensor computes the wave aberration, which is in turn mapped onto the DM actuator array to compensate for the aberrations.
DESIGN OF AN ADAPTIVE OPTICS SCANNING LASER OPHTHALMOSCOPE
Let 2K be the number of slope measurements (twice the number of lenslets) and M the number of actuators in the DM. In the current AOSLO configuration, 2K >> M, and the system is therefore said to be overdetermined. The wavefront modeled using the 2K slope measurements from the Shack–Hartmann sensor is projected onto the deformable mirror with M actuators using an influence matrix A. Figure 16.3 shows the wavefront sensor geometry superimposed on the DM actuator position geometry. The average slope of the wavefront over every lenslet can be written as a superposition of the products of the actuator voltages v and the elements a of the influence matrix A such that

s_1 = a_11 v_1 + a_12 v_2 + a_13 v_3 + ... + a_1M v_M
s_2 = a_21 v_1 + a_22 v_2 + a_23 v_3 + ... + a_2M v_M
...
s_2K = a_(2K)1 v_1 + a_(2K)2 v_2 + a_(2K)3 v_3 + ... + a_(2K)M v_M          (16.1)
where s represents the slope measurements and v represents the 37 actuator voltages on the Xinetics mirror. In our case, the scattered light from the object is imaged using a 5.81-mm pupil, which is then projected onto the lenslet array of diameter 7.00 mm. Zernike polynomials up to eighth order (a total of 45 modes, with piston, tip, and tilt set to zero) are fit to the estimated wavefront slopes and projected onto the deformable mirror. The voltages generated to compensate for the aberrations are sent to the mirror driver in the sequence prescribed by the DM manufacturer.
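Because 2K >> M, Eq. (16.1) is solved in the least-squares sense. A minimal sketch of that solve, using a random stand-in for the influence matrix A (the real matrix is measured from the actuator influence functions) and the AOSLO dimensions 2K = 482, M = 37:

```python
import numpy as np

rng = np.random.default_rng(0)
K2, M = 482, 37                      # 2K slope measurements, M actuators
A = rng.standard_normal((K2, M))     # stand-in influence matrix (measured in practice)
v_true = rng.standard_normal(M)      # "unknown" actuator voltages
s = A @ v_true                       # noiseless slope vector, per Eq. (16.1)

# Overdetermined system (2K >> M): least-squares solution of A v = s.
v_hat, *_ = np.linalg.lstsq(A, s, rcond=None)
```

With noiseless slopes and a full-column-rank A, the least-squares solution recovers the voltages exactly; with measurement noise it returns the minimum-residual estimate.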
[Figure annotations: 5.89-mm pupil size; 47-mm DM aperture; 400-µm lenslet spacing; 7-mm DM actuator spacing]
FIGURE 16.3 Centers of the lenslets of the Shack–Hartmann wavefront sensor projected onto the DM actuator array. The physical aperture of the DM projects to a pupil size of 5.89 mm and serves as the limiting aperture of the system. In the AOSLO, there are 241 lenslets inside the pupil and 37 actuators on the DM.
ADAPTIVE OPTICS IN THE SLO
423
In our current system, the reconstruction is done in several intermediate stages, which permits us to monitor system performance closely. First, the wave aberration is fit with a Zernike expansion. From the Zernike coefficients, several metrics, such as the root-mean-square (RMS) wavefront error, Strehl ratio, point spread function (PSF), and the values of specific modes, can be displayed to monitor performance. Once the fit has been made, the desired voltage for each actuator on the deformable mirror is computed as one half of the value of the wavefront at that actuator position. Before the voltage values are sent, they are multiplied by a gain factor (<1) to ensure smooth convergence; the gain factor currently used is 0.4. A flowchart of the AO closed loop is shown in Figure 16.4. Cubalchini [8] and Hermann [9] have shown that cross coupling and aliasing occur between Zernike modes when they are fit over a limited set of points. To prevent such aliasing, a more effective method of obtaining the reconstruction matrix is used, in which the mirror modes are computed directly. This also helps in compensating for the nonuniformity in the displacement of the actuators with applied voltage. The response of every actuator and its influence on the other actuators are accounted for in these modes. By applying a known voltage to an actuator and measuring the slope vector with the wavefront sensor, one can measure the influence function of that actuator. This process is repeated to obtain the influence functions of all 37 actuators. Using these, we can calculate the voltage to be applied to push or pull an actuator so as to displace the corresponding centroid location by a specified amount. The mirror modes of the DM are generated and a reconstruction matrix is obtained. A more detailed discussion of the different control matrices for the DM is given in Chapter 5.
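The effect of the 0.4 gain factor on convergence can be illustrated with a deliberately simplified scalar loop model, in which the residual error on a corrected mode shrinks by the loop gain every iteration (this ignores measurement noise, mode coupling, and the influence matrix, so it only sketches the convergence behavior, not the real controller):

```python
# Simplified closed-loop model: e_{k+1} = (1 - g) * e_k per iteration.
g = 0.4            # gain factor used in the AOSLO
e = 0.43           # illustrative starting RMS wavefront error (µm)
history = [e]
for _ in range(10):
    e *= (1.0 - g)         # one closed-loop update at gain g
    history.append(e)
# The residual decays geometrically; a gain near 1 would converge faster
# but amplify measurement noise and risk oscillation.
```

In this idealized model the residual falls below 1% of its starting value within about ten iterations, consistent with the few-second convergence reported later in the chapter at a 2 to 3 Hz update rate.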
16.4.4 Nonnulling Operation for Axial Sectioning in a Closed-Loop AO System

The AOSLO performs axial sectioning of the retina by introducing defocus with the deformable mirror (DM). There are two different ways in which axial sectioning can be performed. One method is to compensate for the eye's aberrations with the AO system in closed loop, then open the loop and apply defocus on the DM to perform axial sectioning. Defocus is increased or decreased by superimposing the Zernike defocus term on the static correction held on the DM. In this method, not only defocus changes: other modes are also influenced, which results in relatively poor wavefront compensation. The other method is to use a nonzero target wavefront instead of a zero-phase wavefront, which we term nonnulling operation. In closed-loop defocus operation, a nonzero target vector of Zernike coefficients is defined to control the shape of the wavefront. This target vector is subtracted term by term from the measured Zernike coefficients to obtain the resultant coefficient vector, which is used to generate the voltages for the
FIGURE 16.4 Flowchart for AO closed-loop operation in the AOSLO: record the WFS image; find centroids and compute local slopes (the slope vector is displayed); fit Zernike polynomials up to 8th order (RMS, wave aberration map, and computed PSF are displayed); compute the wavefront map and stroke (in µm) on each actuator; convert stroke to voltages; multiply the voltages by the gain (0.4) and send them to the mirror.
DM. Whenever the coefficient of any target aberration is adjusted (e.g., to adjust defocus), the value of that coefficient in the target vector is updated. As the loop is closed, the mirror converges toward the target aberration, whose shape is determined by the user-defined aberrations, rather than toward a flat wavefront. In the section on axial resolution results, we show the improvement in axial resolution obtained using nonnulling operation compared to defocus introduced in an open-loop AO system for optical slicing.
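The term-by-term subtraction at the heart of nonnulling operation can be sketched as follows. The five-mode coefficient vectors and the convention that index 0 holds defocus are invented for the example; the real system works on the full 45-mode vector:

```python
import numpy as np

def residual_coefficients(measured, target):
    """Nonnulling operation: drive the loop toward a user-defined target
    wavefront by subtracting the target Zernike vector term by term
    from the measured Zernike coefficients."""
    return np.asarray(measured, dtype=float) - np.asarray(target, dtype=float)

# Hypothetical 5-mode coefficient vectors (µm); index 0 = defocus here.
measured = np.array([0.30, 0.05, -0.02, 0.01, 0.00])
target = np.zeros(5)
target[0] = 0.30          # ask the mirror to hold +0.30 µm of defocus
resid = residual_coefficients(measured, target)
```

The loop then drives `resid` toward zero, so every mode except the requested defocus is nulled; this is why nonnulling sectioning holds the other aberrations corrected while the focal plane moves.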
16.5 OPTICAL LAYOUT FOR THE AOSLO
The optical design of the AOSLO was done using ZEMAX optical design software. The software allowed three-dimensional modeling of the system and provided detailed analysis and optimization of the optical quality of the beam in the optical path. Nonoptical components were also integrated into the design. The overall physical dimensions and optical performance were thus evaluated before the actual assembly of the AOSLO. This is very useful, since a complete performance evaluation helps the designer when buying off-the-shelf components and reduces the overall time needed to build such a system.

Mirrors Primarily spherical front-surface mirrors were used. Mirrors allow folding of the optical path, making the system more compact. Unlike lenses, mirrors do not generate unwanted ghost reflections, which are a particular concern in a double-pass system. Mirrors also do not suffer from chromatic aberration. However, using mirrors off-axis introduces astigmatism and coma into the wavefront, so the optical performance is not diffraction limited. Because conventional optics can be used to correct astigmatism, the final optimization of the system concentrated on reducing the coma introduced by the off-axis mirrors.

Beamsplitters The two beamsplitters used in the system were selected based on the light-level requirements for imaging and wavefront sensing. Since the AOSLO is a double-pass system, the paths of the input and output beams are the same, except in the wavefront sensing and imaging arms. The weak reflection off the human retina reduces the number of photons available for wavefront sensing and imaging. Safety limits on the energy density of radiation at a given wavelength impose strict constraints on the amount of light that can be delivered into the eye. The signal-to-noise ratio of the spot pattern seen by the wavefront sensor camera affects the wavefront compensation algorithm.
Hence, one has to choose components such as beamsplitters to share the light between imaging and wavefront sensing without compromising the exposure times used for wavefront sensing. The exposure times set in the wavefront sensor camera determine the frequency of the adaptive optics control system. The beamsplitter that splits the light into the wavefront sensing arm and the imaging arm (BS2 in Fig. 16.1) has a reflectance of 75% (for imaging) and a transmittance of 25% (for wavefront sensing). Another beamsplitter (BS1 in Fig. 16.1) is used at the initial beam delivery stage; it has about 5% reflectance and about 95% transmittance.

Scanning System Module As discussed before, the scanning system is a very important part of the AOSLO, and most of the design revolves around its geometry and configuration. The scanning mirrors are placed at a plane conjugate to the entrance pupil of the eye, which makes the beam stationary at the eye. At each position of the scanning mirrors, the beam passes through a different part of the optical system of the SLO, and hence the aberrations change. The optical design program considered a grid of points across the field so that the optical quality of the system was optimized over the entire field, not just along the axis. Ideally, the deformable mirror would compensate for the aberrations at every scanning mirror position. However, the 16-kHz line scan frequency prevents this, since the AO correction would have to run at many times this frequency to account for the changing aberrations from one side of the field to the other. Since the wavefront sensor and the DM cannot operate at that frequency, the optical system is optimized for both on- and off-axis aberrations over the field. A small amount of field curvature shows up at the extreme edges of the scan field but is in the same direction as the retinal curvature. For a more detailed discussion of the design of the AOSLO, refer to Donnelly [10].
16.6 IMAGE ACQUISITION
A photomultiplier tube (PMT) is used as the sensing device in the AOSLO. Unlike a conventional camera, a PMT is a photon-counting device. We chose a PMT because it is best suited for high-speed, low-light detection [11]. Specifically, we use a Hamamatsu 7422-40 PMT module, which has broadband sensitivity (due to its GaAs cathode), relatively high quantum efficiency, and high gain. The downside is that the cathode in this type of PMT is very susceptible to damage from even relatively short overexposure, so we are currently investigating alternate detectors. After current-to-voltage conversion and amplification, the analog signal from the PMT is fed to the analog input of the frame grabber (Genesis-LC, Matrox). This is a 0- to −1.5-V analog signal with a bandwidth of about 50 MHz. The analog signal is further conditioned into a pseudo-NTSC (National Television System Committee) signal by inverting the voltage and multiplexing it with a direct-current (DC) reference that is placed in the blanking interval between each line of the image. The DC reference sets the zero scale for digitizing the analog signal. The frame grabber board is an alternating-current- (AC) coupled analog frame grabber. It is capable of accepting standard NTSC signals as well as nonstandard video signals with independent timing signals. We use the frame grabber in custom mode, where the nonstandard analog signal comes from the PMT and the timing signals are derived from the hsync and vsync signals from the scanning mirror drivers. The hsync pulses indicate the beginning of a new line, and the vsync pulses determine the beginning of a new frame. In practice, the vsync is used only to set the phase of the frames; the frame grabber simply counts 525 lines before starting the next frame. This maintains more stable images, since we are not subject to noise on the vsync signal.
The horizontal scanner, which scans the beam with a sinusoidal velocity profile, combines with the vertical scanner, which moves in a sawtooth pattern, to trace the beam in the sinusoidal raster shown in Figure 16.5. A resonant sinusoidal scanner introduces problems because its motion is inherently nonlinear and the scan is bidirectional, but it was chosen because it was the only type of scanner that could provide the required line frequency for the AOSLO. To overcome the complications of the sinusoidal scanner, we configured the frame grabber to collect pixels (i.e., read the analog signal) from the retina only while the beam was moving through the most linear part of the scan. Furthermore, we opted to collect pixels only when the beam was moving in the forward direction. Thus, the duty cycle for collecting pixels was only about 40% of the period of one cycle. The AOM switches the laser out of the optical path while the scanning mirror is moving from one line to another and also at the beginning and end of every scan line, when the motion of the mirrors is most nonlinear. This minimizes the time for which the retina is exposed to the laser beam. During these times, the frame grabber expects a DC reference voltage, which represents the zero level of the video signal. In this section we discuss the various hardware components used in the AOSLO to acquire these signals in video format. Even though the images are recorded during the most linear phase of the mirror scan, there is still nonuniformity in sampling, both in the horizontal
[Figure labels: Active Time, Dead Time]
FIGURE 16.5 Path of the scanner as it scans across the field. The sinusoidal pattern results in changing velocities at the edges of the scan, and the frequency of the sinusoid is not constant from the top to the bottom of the scan, resulting in nonlinear sampling of the object. As shown in the figure, the laser beam is modulated so that it is incident on the retina only during the forward part of the scan (a 40% duty cycle); the nonlinear part of the scan and the return sweep are treated as dead time.
and vertical directions. This change in sampling is more pronounced toward the edges of the image. The images are corrected for this distortion offline. A user interface allows one to control the field of view, change the gain parameters depending on the voltage levels generated by the PMT, and modulate the laser to optimize the retinal exposure to laser illumination. Control over these parameters helps the user adjust the AOSLO to optimize the signal-to-noise ratio of the digitized signal by direct feedback from the image. A block diagram of the electronic hardware control of the AOSLO is shown in Figure 16.6. An 8-channel multiplexer is used to switch between the PMT signal during the active period and the DC offset during the front and back portions of the signal. A monostable multivibrator is used to generate the control signal for the multiplexer.
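The 40% duty cycle described above is less costly in field coverage than it might seem, because the sinusoidal beam moves fastest through mid-scan. A small idealized model (normalized units, capture window assumed centered on the linear part of the forward sweep; not the actual AOSLO timing code) makes this concrete:

```python
import numpy as np

# Normalized model of one resonant-scanner cycle: beam position
# x(t) = -cos(2*pi*t/T) sweeps from -1 to +1 (forward) and back (return).
T = 1.0                          # one scanner period, normalized
duty = 0.40                      # pixel-capture window, as a fraction of T
x = lambda t: -np.cos(2.0 * np.pi * t / T)

# Center the capture window on the most linear part of the forward sweep,
# which occurs at t = T/4 (peak velocity, mid-field).
t0 = 0.25 * T - duty * T / 2.0
t1 = 0.25 * T + duty * T / 2.0

# Fraction of the full field (-1 to +1) swept while pixels are captured:
field_fraction = (x(t1) - x(t0)) / 2.0
```

In this model, capturing for 40% of the period still sweeps about 95% of the field, which is why discarding the turnaround regions costs so little image width.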
[Figure 16.6 components: PLD XYG mirror driver (HSCAN, VSCAN outputs), Schmitt trigger, vsync modulator, hsync modulator, photomultiplier tube, multiplexer (CD4051B), power amplifier and inverter, low-pass RC filter, internally generated DC offset, NAND gate (7400), voltage divider, laser modulator, frame grabber]
FIGURE 16.6 Block diagram of the image definition process, including the shapes of the waveforms. Signals from the two components, the PMT and the scanner, are coupled to the frame grabber, which helps define the image.
[Figure labels: pulsewidth, phase delay, X-scan, horizontal synchronization pulse, modulated horizontal synchronization pulse, modulated PMT signal, DC reference voltage level; time axis 0 to 450 µs]
FIGURE 16.7 Timing diagram showing the different waveforms used to coordinate the signals from the scanner and the PMT. The hsync is designed to trigger when the mirror crosses the zero position (so that its phase is not affected by the amplitude of the scan). The electronics module adds the phase delay and the pulsewidth to produce the hsync-modulated signal, which in turn is used to drive the switch between the PMT output and the DC reference, as well as to modulate the laser.
A class A common-emitter amplifier with negative feedback is used to invert and amplify the signal from the multiplexer. The cutoff frequency of the frame grabber is 10 MHz. A first-order passive RC filter is used to attenuate frequencies beyond 10 MHz to keep high frequencies from propagating in the system; the filter is designed with a cutoff frequency of about 15 MHz. Figure 16.7 shows the timing diagram of the different signals used for defining the images. This figure shows how the signals obtained temporally by the PMT are used to make a digital image. For a more detailed description of the back-end electronics, refer to Sundaram [12].
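The corner frequency of a first-order passive RC low-pass is f_c = 1/(2πRC). The component values below are hypothetical, chosen only to land near the ~15-MHz corner quoted in the text, not taken from the actual AOSLO electronics:

```python
import math

def rc_cutoff_hz(r_ohms, c_farads):
    """-3 dB corner of a first-order passive RC low-pass: f_c = 1/(2*pi*R*C)."""
    return 1.0 / (2.0 * math.pi * r_ohms * c_farads)

def rc_attenuation_db(f_hz, fc_hz):
    """Magnitude response of the first-order filter at frequency f:
    |H(f)| in dB = -10 * log10(1 + (f/fc)**2)."""
    return -10.0 * math.log10(1.0 + (f_hz / fc_hz) ** 2)

# Hypothetical values giving roughly the 15-MHz corner used here:
R, C = 106.0, 100e-12        # 106 ohms, 100 pF
fc = rc_cutoff_hz(R, C)      # ~15 MHz
```

A first-order filter rolls off at only 20 dB/decade, so placing the corner at 15 MHz keeps the passband flat through the frame grabber's 10-MHz band while still attenuating higher frequencies.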
16.7 SOFTWARE INTERFACE FOR THE AOSLO
The effective operation of the AOSLO depends greatly on the software interface. Custom software was developed to display the wavefront sensor image and output various parameters used for AO control. Figure 16.8 shows a
FIGURE 16.8 Screen shot of the AOSACA application software used to interactively control various parameters in the AOSLO. A set of operations is performed before the AO loop is closed. The icons labeled in the figure are those used during open-loop operation to measure the optical quality of the eye: (a) show the live wavefront sensor (WFS) image, (b) take a snapshot of the WFS image, (c) save a background image, which is then subtracted from every WFS image before computing centroids, (d) subtract the background from the WFS image, (e) find centroids in the WFS image, (f) increase or decrease the size of the search box within which centroids are calculated, (g) find centroids, (h) add or delete centroids to be excluded from further computations, and (i) compute the wavefront phase map and the point spread function associated with that wavefront.
screenshot of the adaptive optics sensing and correcting algorithm (AOSACA) software. The different displays are labeled in the figure. The lenslet image and the corresponding optical parameters are displayed in the interface. The AOSACA software is segmented into several modules. First is the camera control module, which is used to set exposure times, save backgrounds, and the like. The analysis module is used to set up the centroid-finding routine and compute wave aberration information. The mirror control module is used to activate the mirror, reset iterations by resetting the voltages on the mirror to 0 V, send zero voltage values to the mirror, and control the mirror for defocus. This allows the user to adjust the defocus
manually or to set a range and step size (in diopters) for calibrated and automated defocus adjustment. The defocus can be adjusted in open-loop or closed-loop mode. In open-loop mode, the voltages needed to change the focus of the mirror are computed and simply added to the existing voltages on the deformable mirror. In closed-loop mode, the reference is changed from the flat (null) state to the desired defocus state, and the AO system continues to close the loop to achieve the desired defocus amount. The closed-loop mode maintains a stable defocus even in the presence of accommodation, and it also minimizes any other aberrations that arise when the defocus of the mirror is adjusted. The dwell time at every defocus location gives the DM time to correct the wavefront for that defocus before proceeding to the next defocus location. The refraction module computes and displays the appropriate spectacle correction to apply, based on the wave aberration information provided by the wavefront sensor. It also prescribes the appropriate lenses to replace the correcting lenses already in place [13], thus correcting for defocus and astigmatism, which make up the majority of the aberrations in the wavefront. The other displays are the RMS wavefront error, the number of centroids found, the update frequency, a color-coded map of the actuator voltages, the wave aberration plot, the PSF, a real-time plot of the magnitudes of selected Zernike terms, and the slope vector plot, which represents the direction and magnitude of the local slope of the wavefront at each lenslet. A second computer is used for image acquisition. Custom software is used to control the frame grabber and to display and record video segments. The frame grabber performs an 8-bit digitization of the analog signals from the PMT, and the data is written directly to the hard disk in uncompressed Audio Video Interleave (AVI) format.
Videos are generally stored on a DVD disk at the conclusion of every imaging session.
16.8 CALIBRATION AND TESTING

16.8.1 Defocus Calibration

Different telescopes are used in the system to magnify and demagnify the beam size as required by the varying geometry of optical components, such as the scanning mirrors, the DM, and the wavefront sensor. Hence, calibration is necessary to ensure that the defocus measured by the wavefront sensor equals the defocus introduced in the system. Known amounts of defocus were introduced in two ways. The first method introduced trial lenses of known focal lengths in the spectacle plane. The second method involved moving the image plane of a model eye by known distances along the optical axis. A micrometer was used to translate a diffusely scattering surface (Spectralon, Labsphere) behind a 10-D lens, which was mounted at the pupil plane. Known amounts of defocus were
432
DESIGN OF AN ADAPTIVE OPTICS SCANNING LASER OPHTHALMOSCOPE
introduced by moving the micrometer stage toward and away from the lens, and the defocus was measured using the wavefront sensor. In both cases, the defocus measured with the wavefront sensor was plotted against the actual power introduced in the spectacle plane to check linearity.
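The expected vergence change from the micrometer method can be estimated with thin-lens arithmetic. This is a sketch of the standard small-displacement approximation (ΔD ≈ d·P² for a source moved a distance d from the focal plane of a lens of power P), not the chapter's calibration procedure, and the helper names are invented:

```python
def defocus_from_displacement(d_m, lens_power_d):
    """Approximate vergence change (diopters) when the scattering surface
    is translated d_m meters from the focal plane of a lens of power P:
    delta_D ~ d * P**2 (small-displacement approximation)."""
    return d_m * lens_power_d ** 2

def defocus_exact(d_m, lens_power_d):
    """Exact output vergence for a point source at distance f + d
    behind a thin lens of power P = 1/f."""
    f = 1.0 / lens_power_d
    return lens_power_d - 1.0 / (f + d_m)

# Moving the Spectralon 1 mm behind the 10-D lens introduces ~0.1 D:
approx = defocus_from_displacement(1e-3, 10.0)
```

For millimeter-scale translations behind a 10-D lens the approximation agrees with the exact vergence to better than 0.005 D, so the micrometer readings map almost linearly onto diopters, which is exactly the linearity the calibration checks.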
16.8.2 Linearity of the Detection Path
The detection path was checked for linearity, since it is important that a pixel value be proportional to the average number of detected photons. The linearity was checked at each stage, including the PMT output, the PMT + amplifier output, the PMT + amplifier + signal conditioner output, and finally the pixel values recorded by the frame grabber board. In all cases the signal was directly proportional to the amount of scattered light.
16.8.3 Field Size Calibration
A model eye was constructed with a grid in place of the retina. Each square of the grid subtends 0.1° of image field. Prior to all imaging sessions, the grid is imaged and the amplitudes of the scanning mirrors are adjusted to establish the desired field size. A short video of the grid is always captured and saved for each imaging session. This allows us to measure the nonlinearities in the scan caused by the horizontal and vertical scanning mirrors and remove them offline.
16.9 AO PERFORMANCE RESULTS
16.9.1 AO Compensation

At present, the reconstruction matrix is generated for up to 45 modes (up to and including eighth-order Zernike polynomials). The average residual RMS wavefront error we obtain when correcting a diffusely reflecting model eye is about 0.03 µm. On normal subjects with clear ocular media, we can typically bring the AO-corrected RMS wavefront error over a 5.81-mm pupil to 0.10 µm or less (based on the residual RMS wavefront error measured by the wavefront sensor after AO correction). Figure 16.9 shows the RMS wavefront error plotted as a function of time for closed-loop AOSLO operation in a human eye. For a typical human eye, the average DM update frequency is about 2.2 to 3 Hz, depending on the amount of light reflected back from the retina. Figure 16.10 shows the mean values of all the Zernike coefficients estimated using the wavefront sensor with and without AO compensation on eight different subjects. We see that most of the power in the wave aberration resides in the lower-order modes [14]. From the plot we see that, between open- and closed-loop AO control, there is no significant reduction in the
FIGURE 16.9 Plot of wavefront RMS (in µm) as a function of time (0 to 50 s) for a typical subject, computed for a 5.81-mm pupil. Once the AO loop is closed, the wavefront RMS drops from 0.43 µm to about 0.07 ± 0.01 µm in about 6 s.
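Residual RMS figures like these can be translated into approximate Strehl ratios with the Maréchal approximation, S ≈ exp(−(2πσ/λ)²). The chapter does not perform this calculation; the sketch below applies the standard formula to the quoted residuals, assuming the AOSLO's 660-nm imaging wavelength:

```python
import math

def marechal_strehl(rms_um, wavelength_um):
    """Marechal approximation to the Strehl ratio from the residual
    RMS wavefront error sigma: S ~ exp(-(2*pi*sigma/lambda)**2)."""
    return math.exp(-((2.0 * math.pi * rms_um / wavelength_um) ** 2))

# Residual RMS errors quoted in the text, at 660 nm:
s_model = marechal_strehl(0.03, 0.660)   # model eye, ~0.92
s_human = marechal_strehl(0.10, 0.660)   # typical corrected human eye, ~0.40
```

The approximation shows why a 0.03-µm residual is essentially diffraction limited while a 0.10-µm residual, though a large improvement over the uncorrected eye, still falls short of the conventional S ≥ 0.8 criterion.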
FIGURE 16.10 Plot of the mean RMS of the Zernike coefficients, by Zernike order, with and without AO, obtained over a time period of about 50 s in 8 subjects and computed over a 5.81-mm pupil.
value of the mean Zernike coefficients for eighth order and beyond. Hence, the AO system in the AOSLO is currently used to correct up to eighth-order Zernike polynomials. Figure 16.11 shows an example of a high-resolution image of the retina obtained using the AOSLO. We see a significant increase in resolution, intensity, and contrast between the images obtained with and without AO. Each image is a registered sum of 10 images selected from a single movie sequence. Registration of subsequent frames from the AOSLO cannot be done with a simple “shift-and-add” technique because subsequent frames in the video are often distorted. The distortion occurs because each frame is acquired over a certain period of time, and eye movements during that time cause different
FIGURE 16.11 Images showing the photoreceptor mosaic in a living human retina imaged using the AOSLO. Comparing the images obtained without and with AO shows evidence of the increase in resolution and contrast. The histogram shows the increase in the dynamic range of the intensity values with AO. The scale bar is about 100 µm.
parts of the retina to be imaged. The motion of the eye in different directions with respect to the scanning mirror direction results in compression, stretching, and shearing of the images [15]. In order to add frames, we currently scan through the movie to find a subset of frames that have the smallest distortions and shift and add those frames only. However, efforts are ongoing to remove the image distortions due to eye movements.

16.9.2 Axial Resolution of the Theoretically Modeled AOSLO and Experimental Results

A theoretical model of the optics of the AOSLO helped us evaluate its performance. The residual aberrations after AO compensation were used as input to the model, and the resulting axial resolution was computed. For a more detailed discussion of the model, refer to Venkateswaran et al. [16] and Romero-Borja et al. [17]. Figure 16.12 is a plot of the theoretically calculated axial resolution for different subjects obtained using the residual wave aberrations. The Zernike polynomial defining the wave aberration was computed over a 5.81-mm pupil. The percentage of the intensity getting through the confocal pinhole to the detector plane was calculated and normalized to the intensity at the detector plane. To obtain limits on the axial resolution, we calculated the axial resolution as a function of three parameters: confocal pinhole size, imaging wavelength, and pupil size. Figure 16.13 shows the calculated axial resolution as a function of these three parameters.
FIGURE 16.12 Axial resolution (in µm) computed using the residual Zernike coefficients after AO correction, plotted as a function of pinhole diameter in units of the Airy disk radius; curves are shown for the no-AO case, the diffraction limit, and subjects with AO correction, with the 80-µm confocal pinhole marked. Typically we use an 80-µm pinhole for confocal imaging. With AO, the axial resolution performance predicted by the model is close to the diffraction limit of the AOSLO. Zernike coefficients are computed over a 5.81-mm pupil (based on the present AOSLO optics).
FIGURE 16.13 Diffraction-limited axial resolution performance of the AOSLO computed from a theoretical model, showing the influence of wavelength and pupil size (curves for a 4-mm pupil at 800 nm, the AOSLO's 5.89-mm pupil at 660 nm, and an 8-mm pupil at 400 nm) as a function of pinhole size (0 to 120 µm). The relative intensity is also labeled on the plot, where L0 is the peak intensity obtained using a 100-µm confocal pinhole. For example, for an 8-mm pupil imaged at 400 nm, going from a 100-µm to a 23-µm confocal pinhole (the Airy disk radius for the AOSLO) drops the peak intensity by 23% (to 0.77 L0), whereas for a 4-mm pupil imaged at 800 nm the peak intensity drops by 54% (to 0.46 L0). Zernike coefficients are computed over a 5.81-mm pupil (based on the present AOSLO optics).
The axial resolution of the AOSLO was also measured experimentally and compared with the theoretically predicted axial resolution. We first visually analyzed movies and selected single surface features, such as blood vessels, in the retina. We then asked the subject to fixate so that the feature of interest was in the system's field of view. The defocus was adjusted over a range of values from −0.8 to +0.8 D in steps of 0.025 D, which corresponded to an axial range of just over 600 µm for an eye with a lens power of 60 D and a refractive index of 1.33 (each 0.025-D step corresponded to about 10 µm). Figure 16.14 is a montage showing an image obtained at each axial location. At each of these defocus locations, the average intensity over the region of interest was calculated. The average intensity was plotted as a function of defocus, and the full width at half maximum (FWHM) of the function was taken as the axial resolution. We first compared the axial resolution measurements obtained by two different methods. Method A was to apply defocus on the DM with static corrections, and method B was the nonnulling operation (as described in Section
FIGURE 16.14 Montage of slices of the human retina obtained using the AOSLO. Starting from a fixed plane (in this case the photoreceptor plane), defocus is applied in steps of 10 µm (0.025 D) to section the retina axially. A small part of a blood vessel can be seen protruding through the nerve fiber layer, clearly demonstrating the axial sectioning capability of the AOSLO. In this case, the distance from the photoreceptor plane to the nerve fiber layer is about 300 µm.
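The diopter-to-micrometer conversion behind the 10-µm (0.025-D) steps follows from the reduced-eye approximation dz ≈ n·ΔD/P², with eye power P and refractive index n. This is a standard paraxial approximation applied as a sketch, not the chapter's own calculation:

```python
def axial_step_um(delta_d, eye_power_d=60.0, n=1.33):
    """Axial displacement in the retina per defocus change delta_d (D),
    for a reduced-eye model: dz ~ n * dD / P**2, returned in µm."""
    return n * delta_d / eye_power_d ** 2 * 1e6

step = axial_step_um(0.025)        # one 0.025-D step -> ~9-10 µm
full_range = axial_step_um(1.6)    # the -0.8 to +0.8 D sweep -> ~600 µm
```

With P = 60 D and n = 1.33, one 0.025-D step moves the focal plane by roughly 9 µm and the full ±0.8-D sweep spans roughly 600 µm, consistent with the values quoted in the text.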
16.4.4). Using a model retina with an 80-µm confocal pinhole, the axial resolution was about 15% better using method B. The axial resolution measurements in the following discussions were obtained using method B. Features such as photoreceptors and blood vessels were imaged using different pinhole sizes at the confocal pinhole plane, and the axial resolution was estimated. These values were compared to the axial resolution calculated using the theoretical model. The pinhole sizes used were 30, 50, 80, 100, 150, and 200 µm. When we compared the results with our diffuse scattering model, we found that the experimental axial resolution measurements were better than the theoretical axial resolution. However, the model predicts that a plane reflector, or mirror, yields better axial resolution than a diffuse reflector [18]. Figure 16.15 shows a plot of the experimentally obtained axial resolution measurements using a specular reflector, a diffuse reflector (for confocal pinholes of 30 and 80 µm), and three human subjects (for confocal pinholes of 50, 80, 100, 150, and 200 µm), along with the results obtained from theoretical modeling. To put all the data on a single plot, we converted the axial resolution into normalized units [18], where the conversion between normalized units and actual units is given by

z = (λ/2πn)(F/r)^2 u          (16.2)
[Figure 16.15: axial resolution in normalized units versus pinhole diameter in normalized units.]
FIGURE 16.15 Plot of (a) theoretically computed axial resolution for a plane reflector, (b) a diffuse reflector, and (c) the average axial resolution calculated using the residual Zernike coefficients after AO correction for eight different subjects using the diffuse reflector model. Experimental values plotted are obtained using 30- and 80-µm confocal pinholes for the plane and diffuse reflectors. For the human eye, the confocal pinholes used are 50, 80, 100, 150, and 200 µm. □, experimentally measured axial resolution on three different subjects; ○, experimental axial resolution using the plane reflector; and △, experimental axial resolution using the diffuse reflector.
where z is the axial depth in micrometers, λ is the wavelength of light, F is the focal length of the eye (or model eye), r is the radius of the beam at the eye, n is the index of refraction of the media in the eye, and u is the normalized axial coordinate. We can see that a plane reflector yields better axial resolution measurements than a diffuse reflector. This result suggests that the better-than-expected axial resolution measurements in human eyes probably arise from specular components of the reflected light from the eye, which have not been accounted for in the model. In terms of Strehl ratio, we cannot compare experimental results with theory since the AOSLO is presently not calibrated for intensity measurements.
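The two unit conversions used in this section — from a defocus step in diopters to an axial displacement in the retina, and Eq. (16.2) from normalized to actual axial units — can be sketched as below. This is an illustrative sketch, not the authors' code: the thin-lens relation dz = n·ΔD/P² and the function names are our own, with the 60-D eye power and 1.33 refractive index taken from the text.

```python
import math

def defocus_step_to_depth_um(delta_D, eye_power_D=60.0, n=1.33):
    """Convert a defocus step (diopters) to an axial displacement in the
    retina (micrometers), using dz = n * dD / P**2 for a thin-lens eye model."""
    dz_m = n * delta_D / eye_power_D**2
    return dz_m * 1e6

def normalized_to_axial_um(u, wavelength_um, F_mm, r_mm, n=1.33):
    """Eq. (16.2): z = (lambda * u / (2*pi*n)) * (F/r)**2, with z in the
    same length units as the wavelength (micrometers here)."""
    return wavelength_um * u / (2 * math.pi * n) * (F_mm / r_mm)**2

# A 0.025-D step corresponds to roughly 10 um of axial depth:
print(round(defocus_step_to_depth_um(0.025), 1))  # ~9.2 um
```

The ~9.2-µm result is consistent with the "about 10 µm per 0.025 D" figure quoted in the text.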
16.10 IMAGING RESULTS

As discussed in previous sections, the unique combination of adaptive optics and the confocal laser scanning technique results in very sharp, high-contrast images and improved resolution, both axially and laterally. The improved light throughput to the detector and the ability to capture live video allow the study of dynamic properties of the eye's components and its blood circulation. Some experimental projects in each of these two groups are listed in the following paragraphs, along with some sample achievements of the AOSLO instrument.

16.10.1 Hard Exudates and Microaneurysms in a Diabetic's Retina

Figure 16.16 displays the evolution in time of hard exudates in the retina of a diabetic patient. The axial sectioning capabilities of the AOSLO allow us to move to the best imaging plane, in front of the photoreceptor plane, where the
FIGURE 16.16 Time evolution of a hard exudate in a diabetic's eye. One can observe clear changes in the morphology of the exudates at scales as small as 20 µm. These kinds of images show the potential of the AOSLO to follow up on patients who are undergoing treatments for different retinal diseases.
exudates are in focus and show good contrast. In this particular case, the exudates were located about 700 µm from the fovea of the patient's right eye. Another eye pathology that can be detected in the diabetic patient with the AOSLO instrument is the development of microaneurysms, as illustrated in Figure 16.17. In this frame, the AOSLO is focused on the cones in the photoreceptor layer. The "broccoli-shaped" shadows are from the capillaries, which contain microaneurysms (three are shown by the arrows). Many of these were not detectable in a conventional fluorescein angiogram. These retinal features were detected just nasal to the fovea of the same diabetic patient discussed above.

16.10.2 Blood Flow Measurements

Dynamic, live-video recording with the AOSLO allows us to measure the velocities of blood cells in the net of capillary vessels around the fovea. Blood flow measurements were performed, and the velocity of blood flow in some capillaries was measured [19]. Frame-by-frame observations of AOSLO videos with excellent resolution and good contrast have made it possible to quantify blood flow with a noninvasive technique. Videos showing the actual motion of the blood cells can be downloaded from http://vision.berkeley.edu/roordalab/.
FIGURE 16.17 Microaneurysms observed in a diabetic's eye, 2.5° nasal. Scale bar: 100 µm.
16.10.3 Solar Retinopathy

In recent AOSLO imaging sessions, we were able to observe solar retinopathy in the right eye of one of our subjects (Fig. 16.18). This burn near the fovea was caused by unprotected observation of the October 24, 1995, total solar eclipse in India. The lesion/damage to the photoreceptor layer is the dark semicircular section of the retina where photoreceptors are nonreflective. This lesion was never noticed in routine eye exams or even in fundus photographs. The size of the burn is consistent with the size of the image of the sun, which subtends 0.5° of visual angle and forms an image on the retina about 100 µm across.
FIGURE 16.18 Patch of damaged photoreceptors close to the fovea. The damage is probably attributable to the subject's unprotected direct viewing of the sun during a total solar eclipse, observed from an aircraft during an experiment. The white scale bar is about 100 µm.
16.11 DISCUSSIONS ON IMPROVING PERFORMANCE OF THE AOSLO

The AOSLO operates with light levels that are well below hazardous levels, and because the retina is such a weak reflector, the dominant noise source is photon noise. Therefore, improving axial resolution by using smaller confocal pinholes may be facilitated only by improved performance of the AO system. Improvements in AO performance can be achieved through real-time AO operation and/or by having a higher number of actuators for correcting higher-order aberration modes, both of which have been demonstrated to improve performance in the Rochester second-generation adaptive optics ophthalmoscope, a conventional imaging system without axial slicing capabilities (see also Chapter 15) [20]. The subjects whose resulting axial resolution is higher in our calculation (i.e., smaller FWHM) have correspondingly higher peak intensities of the axial resolution profile. This may not be the case in an experiment, since the absorption of photons by the retina varies between individuals [7], and this has not been taken into account in our calculations. Therefore, the axial intensity profiles discussed here depend only on the aberrations and not on the reflective properties of the retina for a specific subject. Based on the reflective quality of the retina, the confocal pinhole diameter can be chosen for the best possible axial resolution that still yields a reasonable signal-to-noise ratio in the images. When the aberrations are low, the axial resolution is linearly related to the Strehl ratio, whether calculated from the RMS wavefront error of the residual wave aberration or from the PSF itself. This result demonstrates that, after AO correction, the RMS wavefront error alone should be sufficient to predict the experimental axial resolution. We plan to conduct experiments to test this hypothesis.
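The chapter does not give the formula it uses to compute the Strehl ratio from the RMS wavefront error, but a standard choice for small residual aberrations is the extended Maréchal approximation, sketched below. The function name and the 0.66-µm imaging wavelength are our own assumptions, not values stated in this passage.

```python
import math

def strehl_from_rms(rms_error_um, wavelength_um=0.66):
    """Extended Marechal approximation: S ~ exp(-(2*pi*sigma/lambda)**2).
    Reasonable only for small residual aberrations (roughly S > 0.1)."""
    phi = 2 * math.pi * rms_error_um / wavelength_um
    return math.exp(-phi**2)

# A lambda/14 RMS residual (the classical diffraction-limited criterion)
# gives a Strehl ratio of about 0.8:
print(round(strehl_from_rms(0.66 / 14, 0.66), 2))  # ~0.82
```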
16.11.1 Size of the Confocal Pinhole

Using smaller confocal pinholes will improve axial resolution by decreasing the contamination from light scattered by layers in front of and behind the layer being imaged. However, the tighter optical alignment required between the confocal pinhole and the rest of the optical system can make it difficult to obtain images with a good signal-to-noise ratio. The demands on alignment of the confocal pinhole increase as the pinhole diameter approaches the Airy disk radius, for which we see significant drops in peak intensity for misalignments of a few micrometers (shown in Fig. 16.19). From Figures 16.19 and 16.20, we can see that for smaller confocal pinhole diameters, the peak intensity of the axial spread function is more sensitive to misalignment of the confocal pinhole than the axial resolution is. We can also see that when the pinhole diameter is considerably larger than the Airy disk radius, the effects of pinhole misalignment are less significant, for both intensity and resolution. It is worth noting here that in some cases, the axial resolution can improve
[Figure 16.19: fractional drop in intensity versus misalignment of the confocal pinhole with the optical axis (µm).]
FIGURE 16.19 Plot of the fractional drop in peak intensity of the axial spread function as a function of the lateral misalignment distance between the center of the confocal pinhole and the center of the AO-corrected point spread function at the confocal pinhole plane. We can see that there is about an 80% drop in the peak intensity of the axial spread function for a misalignment distance of 5 µm with a 23-µm diameter confocal pinhole. •, diffraction-limited eye; ▲, subject with lowest residual aberrations; ■, subject with highest residual aberrations. The lower and upper curves are for the 23- and 80-µm pinholes, respectively.
[Figure 16.20: axial resolution (µm) versus misalignment of the confocal pinhole (µm).]
FIGURE 16.20 Theoretically calculated axial resolution as a function of the lateral misalignment distance between the center of the confocal pinhole and the center of the AO-corrected point spread function at the confocal pinhole plane. We show the dependence of axial resolution on misalignment for pinhole diameters of 23 µm (the Airy disk radius) and 80 µm (typically used in the AOSLO). We see that the effect of misalignment on axial resolution is more pronounced for the 23-µm diameter confocal pinhole. •, diffraction-limited eye; ▲, subject with lowest residual aberrations; ■, subject with highest residual aberrations. The three lower curves and the three upper curves are for the 23- and 80-µm pinholes, respectively.
slightly with a pinhole misalignment, at little cost to the peak detected intensity; in some cases this is due to asymmetry in the PSF. The optimal confocal pinhole choice will be the one for which the axial spread function has a small FWHM and a high peak intensity. For example, with the present AOSLO, if an axial resolution of 60 µm is desired, then the confocal pinhole diameter would have to be about 35 µm (1.5 times the Airy disk radius), but in our current system this would result in a relative reduction in peak intensity of 30% compared to what would be detected through large pinholes. Only superior AO performance can help us achieve better axial resolution without appreciable loss in intensity.

16.11.2 Pupil and Retinal Stabilization

Maintaining stability of the eye during imaging would vastly improve the system performance. Currently, a bite bar is used to maintain head stability, but bite bars are undesirable and problematic in some clinical situations. The AO system can maintain a stable correction as long as the pupil of the AOSLO still fits within the dilated pupil of the subject, but occasionally, the eye's pupil will vignette the beam and the AO system will fail. Furthermore, head stabilization does not guarantee retinal stabilization, since the eyes are free to fixate wherever they want and are only guided by the fixation target. Retinal or fixational eye movements cause distortions of retinal features recorded in a movie sequence. Adaptive methods are being explored to reduce both of these problems. A video-based pupil tracking system, combined with a tip-tilt mirror, will be implemented to keep the exit beam of the AOSLO in line with the pupil of the eye. In addition, high-accuracy active retinal tracking devices may also be implemented in future versions of the AOSLO [21].

16.11.3 Improvements to Contrast

Adaptive optics improves resolution, which in turn improves the contrast of the images.
But if the sample has features with intrinsically low contrast to start with, then no improvement in resolution will help to visualize them. Other, non-AO methods to improve contrast can be used and are planned for implementation in the current AOSLO. A common method to improve contrast is to use different wavelengths to enhance the anatomical structure of different retinal features. Shorter wavelengths, for instance, can help to increase the contrast of the blood vessels, and longer wavelengths can be used to penetrate deeper into the subretinal layers [22]. The control of polarization is another method that has been used to improve contrast in non-AO ophthalmoscopes [23, 24]. Certain features in the eye (cornea, lens, nerve fibers) have birefringent and dichroic properties. Therefore, we should be able to improve the contrast of specific features according to their polarization signatures.
Acknowledgments

Preparation of this chapter was supported in part by NIH grants to AR (EY13299 and EY014375) and by the National Science Foundation's Science and Technology Center for Adaptive Optics, managed by the University of California at Santa Cruz under cooperative agreement No. AST-9876783.

REFERENCES

1. Liang J, Williams DR. Aberrations and Retinal Image Quality of the Normal Human Eye. J. Opt. Soc. Am. A. 1997; 14: 2873–2883.
2. Donnelly WJ, Roorda A. Optimal Pupil Size in the Human Eye for Axial Resolution. J. Opt. Soc. Am. A. 2003; 20: 2010–2015.
3. Curcio CA, Sloan KR, Kalina RE, Hendrickson AE. Human Photoreceptor Topography. J. Comp. Neurol. 1990; 292: 497–523.
4. Roorda A, Romero-Borja F, Donnelly WJ, et al. Adaptive Optics Scanning Laser Ophthalmoscopy. Opt. Express. 2002; 10: 405–412.
5. Thibos LN, Applegate RA, Schwiegerling JT, et al. Standards for Reporting the Optical Aberrations of Eyes. In: Lakshminarayanan V, ed. OSA Trends in Optics and Photonics, Vision Science and Its Applications, Vol. 35. Washington, D.C.: Optical Society of America, 2000, pp. 232–244.
6. ANSI. American National Standard for Safe Use of Lasers. ANSI Z136.1-2000. Orlando, FL: Laser Institute of America, 2000.
7. Delori FC, Pflibsen KP. Spectral Reflectance of the Human Ocular Fundus. Appl. Opt. 1989; 28: 1061–1077.
8. Cubalchini R. Modal Wave-front Estimation from Phase Derivative Measurements. J. Opt. Soc. Am. 1979; 69: 972–977.
9. Herrmann J. Cross Coupling and Aliasing in Modal Wave-front Estimation. J. Opt. Soc. Am. 1981; 71: 989–992.
10. Donnelly WJ. Improving Imaging in the Confocal Scanning Laser Ophthalmoscope. M.S. Dissertation. Houston (TX): University of Houston, 2001.
11. Webb RH, Hughes GW. Detector for Video Rate Scanning Imagers. Appl. Opt. 1993; 32: 6227–6235.
12. Sundaram R. Video and Image Processing Hardware and Software for an Adaptive Optics Scanning Laser Ophthalmoscope. M.S. Dissertation. Houston (TX): University of Houston, 2003.
13. Bennett AG, Rabbetts RB. Clinical Visual Optics, 2nd ed. London: Butterworths, 1989.
14. Porter J, Guirao A, Cox IG, Williams DR. Monochromatic Aberrations of the Human Eye in a Large Population. J. Opt. Soc. Am. A. 2001; 18: 1793–1803.
15. Mulligan JB. Recovery of Motion Parameters from Distortions in Scanned Images. Proc. of the First Image Registration Workshop (November 20–21, 1997, Greenbelt, MD).
16. Venkateswaran K, Roorda A, Romero-Borja F. Theoretical Modeling and Evaluation of the Axial Resolution of the Adaptive Optics Scanning Laser Ophthalmoscope. J. Biomed. Opt. 2004; 9: 132–138.
17. Romero-Borja F, Venkateswaran K, Hebert TJ, Roorda A. Optical Slicing of Human Retinal Tissue in Vivo with the Adaptive Optics Scanning Laser Ophthalmoscope. Appl. Opt. 2005; 44: 4032–4040.
18. Wilson T. The Role of the Pinhole in Confocal Imaging Systems. In: Pawley JB, ed. The Handbook of Biological Confocal Microscopy. New York: Plenum, 1990, pp. 99–113.
19. Martin JA, Roorda A, Venkateswaran K. Non-invasive Direct Assessment of Parafoveal Capillary Leukocyte Velocity. Invest. Ophthalmol. Vis. Sci. 2003; 44: e-abstract 3628.
20. Hofer H, Chen L, Yoon GY, et al. Improvement in Retinal Image Quality with Dynamic Correction of the Eye's Aberrations. Opt. Express. 2001; 8: 631–643.
21. Hammer DX, Ferguson D, Magill JC, et al. Image Stabilization for Scanning Laser Ophthalmoscopy. Opt. Express. 2002; 10: 1542–1549.
22. Elsner AE, Burns SA, Weiter JJ, Delori FC. Infrared Imaging of Sub-retinal Structures in the Human Ocular Fundus. Vision Res. 1996; 36: 191–205.
23. Bueno JM, Campbell MCW. Confocal Scanning Laser Ophthalmoscopy Improvement by Use of Mueller-Matrix Polarimetry. Opt. Lett. 2002; 27: 830–832.
24. Burns SA, Elsner AE, Mellem-Kiraila MB, Simmons RB. Improved Contrast of Subretinal Structures Using Polarization Analysis. Invest. Ophthalmol. Vis. Sci. 2003; 44: 4061–4068.
CHAPTER SEVENTEEN
Indiana University AO-OCT System

YAN ZHANG, JUNGTAE RHA, RAVI S. JONNAL, and DONALD T. MILLER
Indiana University, Bloomington, Indiana
17.1 INTRODUCTION
The Indiana adaptive optics (AO) retinal camera was originally developed to perform en face time-domain optical coherence tomography (OCT) using a novel flood illumination approach and a two-dimensional charge-coupled device (CCD) [1]. Its core AO design evolved out of the original Rochester AO conventional flood illumination camera by Liang et al. [2]. With the discovery of the significant speed and sensitivity advantages of spectral-domain OCT (SD-OCT) over time-domain approaches, the camera was redesigned for parallel SD-OCT using a novel line illumination scheme [3]. The three-dimensional (3D) optical resolution of the resulting AO SD-OCT camera (3.0 µm × 3.0 µm × 5.7 µm in the horizontal, vertical, and axial directions, respectively) is the highest to date in the living human eye and was sufficient to observe the interface between the inner and outer segments of individual photoreceptor cells, resolved in both the lateral and axial dimensions. Such subcellular structure had not been previously reported in all three dimensions due to the poor axial resolution of conventional AO cameras, such as flood illumination and scanning laser ophthalmoscope (SLO) systems, and the poor lateral resolution of OCT systems. A current limitation of the AO
SD-OCT camera is speckle noise that results from the coherent nature of OCT detection. This noise makes it difficult to correlate retinal reflections with cellular features, though the application of speckle reduction techniques and the implementation of real-time 3D imaging in the future will certainly lead to improved performance. Parallel to the AO-OCT development, we discovered an elegant approach for rapidly flood illuminating the retina with incoherent light. The high imaging rates (60 Hz continuous; 500 Hz short burst) coupled with the high transverse resolution afforded by the AO system permitted quick navigation through the retina and imaging of individual cells of relatively high contrast, such as photoreceptors and moving blood cells, without image warp and motion blur. These high speeds are two to three orders of magnitude faster than other AO flood-illuminated retinal cameras and can be more than an order of magnitude faster than AO scanning laser ophthalmoscopes. The aim of this chapter is to provide a detailed overview of the Indiana camera's technical features, its optical performance, and finally, example results. General background information on OCT imaging systems can be found in Chapter 10.
17.2 DESCRIPTION OF THE SYSTEM
The Indiana AO retinal camera was designed for sequentially collecting 3D, high-resolution SD-OCT and 2D, high-resolution conventional flood-illuminated images of the living human retina. This dual approach permits two types of imaging in the same eye essentially under the same conditions. This facilitates accurate comparison of the imaging approaches and is economical from the standpoint that both capabilities exist in a single AO system. The system occupies a large portion of a 5 ft × 8 ft optical table, which (although large) is convenient for alignment and modifications, and consists of four subsystems: (1) parallel SD-OCT for B-scan imaging, (2) conventional flood illumination for en face imaging and focusing in the retina, (3) AO for compensation of the eye's wave aberration, and (4) pupil retroillumination and fixation channels to align the subject's eye to the system. Figure 17.1 shows conceptual and detailed layouts of the system. The parallel SD-OCT system is based on a free-space Michelson interferometer design. Its illumination channel contains an 11-mW broadband superluminescent diode (SLD) (λ = 843 nm, Δλ = 49.4 nm) that is used in conjunction with a spherocylindrical lens to form a line illumination pattern on the retina. The beam enters the subject's eye with a diameter of less than 2 mm. The reference channel contains a retroreflector mounted on a computer-driven voice coil translator (9-mm range in tissue) for controlling the reference optical path length and a 24-mm water vial for dispersion balancing the subject's eye. Light reflecting from the retina fills the pupil and is combined with the reference beam to form superimposed line images at the 50-µm slit
[Figure 17.1 schematic labels: 843-nm SLD for OCT and conventional imaging; multimode fiber (25 m); polarization controllers; optical isolators; spherocylindrical lens; field stop; fixation target; retractable mirrors; Xinetics deformable mirror; relay telescope; Shack–Hartmann wavefront sensor; 788-nm SLD for AO; reference channel with retroreflector on a voice coil stage, water vial, light block, and shutter; slit; diffraction grating; pupil camera; 90/10 beamsplitter; CCD camera.]
FIGURE 17.1 (a) Conceptual layout shows the AO system as part of the SD-OCT detection channel. (b) Detailed layout of the AO parallel SD-OCT retinal camera. The camera consists of four subsystems: (1) The AO system corrects the ocular aberrations using a 788-nm superluminescent diode (SLD), Shack–Hartmann wavefront sensor, and Xinetics deformable mirror (dotted line). (2) The pupil retroillumination and fixation channels permit alignment of the subject's eye to the retinal camera. (3) Conventional flood illumination is used to validate focusing and the physical size of microstructures in the retina (solid line). (4) Parallel SD-OCT acquires single-shot B-scan images of the retina (dashed line). BS = beamsplitter. (Adapted from Zhang et al. [3]. Reprinted with permission from the Optical Society of America.)
(2.8 µm at the retina) in the detection channel. The slit is oriented parallel to the line images. Upstream of the slit in the detection channel is an aperture that is conjugate to the pupil of the eye and is used to control the imaging pupil size, which was typically set to 6 mm at the eye. Axial translation of the relay telescope permits focusing in the retina. Light passing through the slit is collimated by an achromatic doublet and diffracted by a 1200-line/mm transmission grating (Wasatch Photonics) at the Littrow angle. Grating efficiency for the +1 order is 86% at 830 nm. Two achromatic doublets, designed for the near infrared, focus the spectrally dispersed image onto a back-illuminated, scientific-grade, 12-bit CCD (Quantix:57, Roper Scientific, Inc.). Charge-coupled device read noise and pixel capacity for the two available CCD settings are 30.7/15.1 electrons root-mean-square (RMS) and 137,000/36,000 electrons, respectively. The setting with the larger pixel capacity was used for OCT and permitted photon-limited imaging. The CCD array was 1056 × 530 pixels and consisted of a light-sensitive region of 512 × 530
pixels plus a similar storage area underneath the frame transfer mask. The predicted A-scan depth range for a gross refractive index of the retina, n_ret, of 1.38 and a spectral resolution of the imaging spectrometer, Δλ_res, of 0.125 nm (limited by the spectral width of a single CCD pixel) is

z_max = λ²/(4 n_ret Δλ_res) = (843 nm)²/[4(1.38)(0.125 nm)] ≈ 1 mm    (17.1)
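Equation (17.1) can be checked numerically with a short sketch; the function name and unit handling here are our own, with all parameter values taken from the text.

```python
def ascan_depth_range_mm(wavelength_nm=843.0, n_ret=1.38, dlambda_res_nm=0.125):
    """Eq. (17.1): maximum A-scan depth z_max = lambda**2 / (4 * n_ret * dlambda_res),
    set by the spectral resolution of the spectrometer (one CCD pixel)."""
    z_nm = wavelength_nm**2 / (4 * n_ret * dlambda_res_nm)
    return z_nm * 1e-6  # nm -> mm

print(round(ascan_depth_range_mm(), 2))  # ~1.03 mm
```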
The usable range is about half this. As a first step, the power of the spherocylindrical lens was chosen so as to illuminate a line roughly 100 µm long on the retina. One hundred A-scans per B-scan permit the temporary storage of up to 10 B-scans on the CCD array before reading, and thus allow for the acquisition of a finite number of images at very high rates using the kinetics mode of the CCD, a method that we will refer to as "short-burst" imaging. Control software was developed to permit short-burst image acquisition rates of up to 500 Hz. For the short-burst imaging, the exposure and delay durations were each 1 ms, with eight images acquired in 15 ms. In order to obtain a measurement of the reference beam that is highly representative of the reference state during retinal imaging (i.e., one that accounts for temporal fluctuations of the reference due to vibrations and temporal changes in the camera), the first image of each short-burst sequence was that of the reference only. This was realized by timing a mechanical shutter, inserted in front of the eye (see Fig. 17.1), to momentarily close for just the first image of the burst sequence. Custom software was developed in Visual BASIC to acquire raw spectral images, subtract reference spectra, divide by the square root of the reference spectra (which removes irregularities in the illumination pattern), interpolate into k space, balance dispersion using the foveal reflection [4], Fourier transform, and finally display the reconstructed B-scans of the retina. The exposure level at the cornea for the 843-nm SLD was 1 mW, which illuminated an extended line of about 1/3° at the retina. This is more than an order of magnitude below the maximum permissible exposure for bursts of 1-ms pulses (8 in total) at 500 Hz [5].
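The processing chain just described (subtract the reference spectrum, divide by its square root, resample into k space, and Fourier transform) was implemented by the authors in Visual BASIC; the NumPy sketch below illustrates the same steps on synthetic data, with dispersion balancing omitted for brevity and all array and function names our own.

```python
import numpy as np

def reconstruct_bscan(raw_spectra, reference_spectrum, wavelengths_nm):
    """Minimal SD-OCT reconstruction following the chapter's processing chain
    (dispersion compensation omitted).
    raw_spectra: (n_ascans, n_pixels) interference spectra sampled in wavelength.
    reference_spectrum: (n_pixels,) reference-arm-only spectrum.
    wavelengths_nm: (n_pixels,) wavelength at each spectrometer pixel."""
    # 1. Subtract the reference spectrum, then normalize by its square root
    #    to remove irregularities in the illumination pattern.
    s = (raw_spectra - reference_spectrum) / np.sqrt(reference_spectrum)
    # 2. Resample from evenly spaced wavelength to evenly spaced wavenumber k.
    k = 2 * np.pi / wavelengths_nm
    k_uniform = np.linspace(k.min(), k.max(), k.size)
    s_k = np.stack([np.interp(k_uniform, k[::-1], row[::-1]) for row in s])
    # 3. Fourier transform each k-space spectrum to obtain an A-scan; the
    #    B-scan is the set of A-scan magnitudes.
    ascans = np.abs(np.fft.fft(s_k, axis=1))
    return ascans[:, : s_k.shape[1] // 2]  # keep the unambiguous half-range

# Toy example: 100 A-scans across a 512-pixel spectrometer.
wl = np.linspace(843 - 24.7, 843 + 24.7, 512)
ref = np.ones(512) * 100.0
raw = ref + 10 * np.cos(2 * np.pi / wl * 2e4)[None, :].repeat(100, axis=0)
bscan = reconstruct_bscan(raw, ref, wl)
print(bscan.shape)  # (100, 256)
```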
With an earlier en face AO-OCT retinal camera that we developed [6], the spatial coherence of the SLD was found to generate noticeable speckle noise that masked the visibility of microstructures in the retina and made focusing in the retina difficult (e.g., see Fig. 10.10). To avoid the focusing problem here, the axial position of the focal plane in the retina is established using an incoherent flood-illuminated subsystem that captures en face images of structures such as the cone mosaic or retinal capillary bed. Observation of these structures with the incoherent system confirms the plane of focus. Because focusing in the eye is sensitive to the light source spectrum (due to ocular chromatic aberrations), as well as to the alignment of the camera components, the incoherent flood-illuminated system was integrated with the parallel SD-OCT system. Both imaging systems employed the same SLD, and their beams followed essentially identical paths, including that in the detection channel (see Fig. 17.1). This provided considerable assurance that both imaging systems were focused at the same retinal depth. In the illumination channel of the incoherent subsystem, a motorized retractable mirror redirects light from the 843-nm SLD into 25 m of multimode step-index optical fiber (Lucent Technologies, Inc.). The fiber has a numerical aperture (NA) of 0.22, a core diameter of 105 µm, and a core refractive index (n_core) of 1.457 at λ = 0.633 µm. In the fiber, the light is distributed among the fiber modes (via modal dispersion), causing each to propagate at a different axial velocity. The 25 m was of sufficient length to cause the time delay between exiting modes to be larger than the temporal coherence length of the SLD, effectively mitigating the source's spatial coherence and significantly reducing speckle contrast [6, 7]. Light exiting the fiber is redirected back into the system via a second motorized retractable mirror and propagates into the subject's eye, where it flood illuminates a 1° patch of retina. The tip of the fiber is conjugate to the subject's retina. The flood illumination is centered on the line illumination of the parallel SD-OCT. To avoid unwanted back reflections from the reference channel, a light block is physically inserted immediately upstream of the retroreflector. In the detection channel, the slit and transmission grating are temporarily removed, with the latter being replaced by a planar mirror that was carefully prealigned such that the reflected beam followed the same path as the +1 order of the diffraction grating. Insertion of the slit and interchange of the grating and planar mirror were realized using kinematic base plates. Precision of the plates was more than sufficient due to the large diameter of the slit (50 µm) and the large axial magnification of the instrument (a 1-mm axial displacement at the CCD corresponds to 5.5 µm at the retina).
In this flood illumination configuration, the CCD captured aerial images of the retina, with acquisition synchronized to the strobed, current-modulated SLD. The exposure duration, SLD intensity, and delay between consecutive images were computer controlled. The exposure level at the cornea for the 843-nm SLD was 0.73 mW (flood illumination), which illuminated a 1° patch of retina. This is more than 30 times below the maximum permissible exposure for individual 4-ms pulses and bursts of 4-ms pulses (8 in total) at 500 Hz [5]. Conventional flood illumination is also possible at shorter wavelengths. Specifically, light can be directed into the system from a 10-mW SLD at 679 nm, which is first coupled to 25 m of multimode step-index optical fiber (same as described above). The system can also use a 200-mW multimode laser diode at 670 nm, which is coupled into 100 m of fiber (200-µm core, 0.37 NA). Although a longer fiber is necessary due to the longer temporal coherence length of the laser diode, advantages of the laser diode include higher optical power and lower cost. The AO system is an extension of that reported by Hofer et al. [8] with additional modifications and improvements, described in Section 17.4, some of
which are necessary to operate simultaneously with OCT. The AO system consists of a Shack–Hartmann wavefront sensor (SHWS) and a deformable mirror (Xinetics, Inc.) that are controlled in closed-loop fashion via a desktop computer. The sensor employs a 0.75-mW pigtailed, single-mode SLD operating at 788 nm (Δλ = 20 nm) that enters the subject's eye with a diameter less than 1 mm and is displaced from the corneal apex to avoid the bright corneal reflection. The beam focuses to a small spot on the retina that is roughly located at the geometric center of the parallel SD-OCT and conventional flood illumination patterns. The reflected light fills the pupil and is distorted when passing back through the refracting media of the eye. A lenslet array (17 × 17 lenslets, F = 24 mm; d = 0.4 mm), placed conjugate to the eye's pupil, samples the exiting wavefront across a 6.8-mm pupil. The array of focal spots produced by the lenslets is recorded with a scientific-grade, 12-bit CCD camera (CoolSNAP HQ, Roper Scientific, Inc.). The raw slope data are fit in real time to the derivatives of Zernike circle polynomials, up to the 10th order, by the method of least squares described by Liang et al. [9]. The deformable mirror is positioned upstream of the lenslet array at a plane conjugate to the eye's pupil. The middle horizontal row of mirror actuators traverses 6.8 mm at the eye's pupil. The 37 lead–magnesium–niobate (PMN) actuators of the deformable mirror have a mechanical stroke of ±2 µm. A direct slope control method is used to rapidly convert the lateral shifts of the spots (local wavefront slopes) to actuator voltages (see also Chapter 5) [10]. Artifacts from the edge actuators were avoided by imaging the retina through the central 6 mm of the pupil. Custom dielectric beamsplitters were designed to reflect and transmit the 679-, 788-, and 843-nm SLDs, as well as the 670-nm laser diode.
In addition, the 788-nm SHWS beacon is directed into the eye via a separate 90/10 pellicle to avoid back reflections from the reference channel. This allows simultaneous wavefront correction with conventional flood illumination or parallel SD-OCT. The chromatic aberration of the eye caused a shift in defocus between the two wavelengths that was offset by axially translating the relay telescope in the detection channel. The AO system measures and corrects aberrations up to 22 times per second. Reduced rates of 6 and 14 Hz are typically used during parallel SD-OCT, however, in order (1) to minimize the probability that an OCT exposure will coincide with, and produce a disturbance in, a wavefront measurement, and (2) to permit additional diagnostic tools to run in the background. While these slower rates are not optimal, it is well established that the dominant temporal components in the eye’s wave aberration occur at low frequencies, and correcting these is typically sufficient for yielding sharp images of the cone mosaic and retinal capillary bed [1, 6, 8]. The exposure level at the cornea for the 788-nm SLD was 5 µW, more than 117 times below the maximum permissible exposure for continuous intrabeam viewing [5]. As shown in Figure 17.1, the AO system is positioned in the detection channel, downstream of both the reference and sample channels. While a seemingly more straightforward approach would be to place the AO system
in the sample channel (see also Chapter 10), the detection channel position offers several advantages. Specifically, the non-common-path lengths between the reference and sample channels can be made quite short (11 cm for the system shown in Fig. 17.1), and back reflections from the AO system’s optics are unable to reach the SHWS, which is highly sensitive to such reflections. Short non-common paths promote interferometric stability, require fewer optical elements to be dispersion matched, and add little to the physical size of the camera (only an 11-cm reference channel is added). An obvious complication is that the AO system acts on both the sample and reference wavefronts, thereby removing aberrations from the sample arm but imparting the conjugate of the sample arm’s aberrations onto the reference arm. To prevent reference contamination, our approach was to design the detection channel as a point diffraction interferometer [11]. In this approach, the reference beam impinges on a confined central region of the corrector that is influenced at most by the middle five actuators, while the sample beam is exposed to all 37 actuators. This allows the reference beam to pass through the AO system largely unaffected and enables the sample beam to be well compensated. To prevent dynamic changes in the middle five actuators from altering the reference beam, these actuators are frozen immediately after establishing a correction, prior to collecting the reference image. This approach provides effective wavefront correction with little compromise of the reference wavefront. A disadvantage of this approach is that the AO system acts only on light exiting the eye and not on light entering it as well. Therefore, the full gain in sensitivity that could be realized with AO is not attained. However, the spatial resolution of the retinal image depends only on the light exiting the eye and is not compromised by this approach.
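The actuator-freezing scheme can be sketched as follows. This is a hypothetical illustration: the actuator indices and the update rule are assumptions for demonstration, not the authors' control code:

```python
# Illustrative sketch of freezing the central actuators once a correction is
# established, so the reference-beam footprint stays fixed while the
# remaining actuators continue to track dynamically. Indices are hypothetical.
CENTRAL = {16, 17, 18, 19, 20}      # assumed indices of the middle 5 of 37

def apply_update(voltages, update, frozen=frozenset()):
    """Apply a control update, holding frozen actuators at their last value."""
    return [v if i in frozen else v + dv
            for i, (v, dv) in enumerate(zip(voltages, update))]

v = [0.0] * 37
v = apply_update(v, [0.1] * 37)                    # normal closed-loop step
v = apply_update(v, [0.1] * 37, frozen=CENTRAL)    # after freezing
print(v[0], v[17])
```

After the freeze, the sample beam still sees a fully dynamic 37-actuator correction everywhere except the small central patch sampled by the reference beam.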
17.3 EXPERIMENTAL PROCEDURES

17.3.1 Preparation of Subjects
The subject’s line of sight is centered along the optical axis of the retinal camera with the aid of a fixation target, bite bar stage, and video camera that monitors the subject’s pupil in retroillumination. The fixation target is located at the subject’s far point and typically consists of high-contrast cross hairs positioned on a rectilinear grid with lines spaced at 0.5° intervals. The target is back-illuminated with uniform red light. A dental impression attached to a sturdy bite bar translation stage and a rigid forehead rest stabilize the eye and provide accurate pupil positioning in all three dimensions. Retroillumination of the pupil is realized with the 788-nm SLD. The subject is typically cyclopleged and the pupil dilated using tropicamide or cyclopentolate hydrochloride administered prior to the experiment. Phenylephrine hydrochloride is sometimes applied prior to the experiment for additional dilation.
Sphere (defocus) and cylinder (astigmatism) are minimized, in terms of the measured wavefront RMS, by inserting appropriate trial lenses at the spectacle plane. The trial lenses are slightly tilted to avoid their strong back reflection. Residual defocus and astigmatism associated with the quantization of the spectacle lens power (0.25 D) and the subjective criterion for optimum focus in the presence of higher order aberrations [12] are corrected with the AO system.

17.3.2 Collection of Retinal Images
The protocol for collecting retinal images on subjects depends on the specific experiment and the types of imaging (AO flood illumination and/or AO SD-OCT) that are planned. If both types of imaging are used, a general protocol would be the following:

1. Compensate for ocular aberrations with trial lenses and AO.
2. Focus the retinal camera onto a specified retinal layer (e.g., cone photoreceptors) using the conventional flood-illuminated subsystem with dynamic correction and image acquisition at about 1 Hz.
3. Acquire video streams (1 to 60 Hz) of the conventional flood-illuminated images.
4. Convert the camera from its conventional flood-illuminated modality to a parallel SD-OCT modality by removing the planar mirror, inserting the diffraction grating and slit, and lowering the two motorized retractable mirrors.
5. Acquire the parallel SD-OCT images with dynamic correction.

When focusing on the cone mosaic (step 2), we typically acquire images at discrete focus intervals of 1/36 diopter (D), or 10.3 µm of axial translation in the retina, across the depth range over which the cones are coarsely observed (~62 µm). As a rule for this instrument, a shift of 1/18 D (20.6 µm) from the position of clearest cones was found to visually reduce cone clarity and therefore defines the accuracy (±10.3 µm) of focus for this retinal layer. The optimal focal location is that which provides the visually sharpest images of the cones. If images acquired with and without AO compensation are needed, a fair comparison of the two requires refocusing (without AO) in order to account for the residual defocus that is no longer compensated by the AO system. It is often useful to collect images of essentially the same patches of retina with non-AO instrumentation, such as commercial time-domain OCT systems (Stratus OCT3, Zeiss Meditec, Inc.), commercial fundus cameras (Topcon TRC-50EX), and a research-grade, fiber-based scanning SD-OCT system.
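The focus bookkeeping in step 2 can be sketched as follows, assuming only the stated equivalence of 1/36 D per 10.3 µm of axial translation in the retina; the function name and defaults are illustrative:

```python
# Through-focus bookkeeping for step 2, assuming the stated equivalence of
# 1/36 D per 10.3 um of axial translation in the retina (~371 um per diopter).
UM_PER_DIOPTER = 10.3 * 36

def focus_steps(depth_range_um=62.0, step_d=1.0 / 36.0):
    """Defocus values (in diopters) spanning the through-focus range."""
    step_um = step_d * UM_PER_DIOPTER          # 10.3 um per step
    n = int(round(depth_range_um / step_um))   # number of steps in the range
    return [i * step_d for i in range(n + 1)]

steps = focus_steps()
print(len(steps))       # 7 focus positions cover the ~62-um range
```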
Images from these established instruments are helpful in validating images from the AO camera and provide a reference for what can be realized
with state-of-the-art commercial and research-grade cameras not endowed with AO.
17.4 AO PERFORMANCE
Image quality is typically assessed in terms of the root-mean-square (RMS) wavefront error and the full width at half height (FWHH) of the point spread function (PSF), or the Strehl ratio. Figure 17.2 shows traces of the RMS wavefront error as measured by the SHWS before and during dynamic correction on the same subject for conventional flood-illuminated imaging. The three curves depict different gain values (10, 20, and 30%). While the mean corrected RMS wavefront error is essentially the same for the three cases (~0.2 µm), the AO system with 30% gain responds noticeably faster, requiring 0.29 s to reach its stabilized minimum RMS value, compared to the worst case (10% gain) of 1.24 s. As described in Section 17.2, a complication of our OCT design is that the AO system removes aberrations from the sample wavefront but imparts the conjugate of those aberrations onto the reference wavefront. To prevent reference contamination, we designed the detection channel as a point diffraction interferometer, realized by having the reference illuminate only the central five actuators. This approach was tested extensively on human subjects under a variety of conditions, an example of which is shown in Figure 17.3. The figure shows traces of the total wavefront error for one subject as measured by the SHWS before and during dynamic correction. Once initiated, full correction takes less than one second and reaches a mean corrected RMS wavefront error of 0.08 µm. At full correction, the middle five actuators were frozen, which did not affect the total RMS wavefront error after correction.
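The effect of loop gain on settling time can be illustrated with a toy pure-integrator model of the loop, in which the residual error above the correction floor contracts by a factor (1 − g) each iteration; the numbers below are illustrative, not fits to the measured traces:

```python
# Toy pure-integrator model of the AO loop: the residual error above the
# correction floor contracts by (1 - gain) each iteration. Illustrative only.
LOOP_PERIOD_S = 1.0 / 21.0        # AO correction rate from Figure 17.2

def settle_time(gain, e0=1.2, floor=0.2, tol=0.01):
    """Seconds until the RMS error settles to within tol of the floor."""
    err, k = e0, 0
    while err - floor > tol:
        err = floor + (1.0 - gain) * (err - floor)
        k += 1
    return k * LOOP_PERIOD_S

for g in (0.10, 0.20, 0.30):
    print(f"gain {g:.0%}: ~{settle_time(g):.2f} s")
```

Even this crude model reproduces the qualitative trend of Figure 17.2: higher gain settles in fewer loop iterations, while all gains reach the same final error floor.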
FIGURE 17.2 Total RMS wavefront error over time as measured by the SHWS before and during dynamic correction on one subject. The three traces correspond to gain values of 10% (solid line), 20% (dashed line), and 30% (dotted line). Pupil size was 6.8 mm, and AO correction was performed at 21 Hz.
FIGURE 17.3 Total RMS wavefront error over time as measured by the SHWS before and during dynamic correction on one subject. The two traces depict correction with (dotted line) and without (solid line) the middle five actuators frozen to a fixed correction. (The remaining 32 actuators continued to provide dynamic correction.) There was no appreciable difference in the traces observed after the time when the actuators were frozen. (Adapted from Zhang et al. [3]. Reprinted with permission of the Optical Society of America.)
Further testing, by visually evaluating the quality of cone images acquired with conventional flood illumination, yielded the same conclusion. Figure 17.4 shows specific RMS wavefront errors measured across a 6.8-mm pupil during the OCT retinal imaging experiment, averaged over three subjects, with and without AO correction. The error is displayed in terms of the total error (2nd- through 10th-order aberrations), Zernike defocus (C4), two Zernike astigmatism terms (C3 and C5), and the higher order aberrations (3rd- through 10th-order aberrations). As indicated in the figure, AO decreased the total RMS error by an average factor of more than 7, with an average residual corrected error of 0.13 µm. It is interesting to note that the majority of the residual wavefront error after correction is due to contributions of higher order (3rd through 10th) aberrations. For the experiments, the total RMS wavefront error during dynamic correction varied between 0.07 and 0.21 µm, all of which produced a diffraction-limited FWHH of the PSF of 3.0 µm. The above discussion and corresponding examples illustrate the overall effectiveness of the Indiana AO system at correcting ocular aberrations when used for conventional flood-illuminated and OCT imaging, but they do not reflect some of the software advancements we have implemented that go beyond basic AO control. We find these additions helpful for diagnosing AO problems, optimizing system performance, and efficiently operating the system for the eye. As such, some of these are listed and described below. Note that some of the diagnostics were originally developed by Marcos van Dam for optimizing the AO system on the 10-m Keck telescope (see also Chapter 8). Most of the additions are integrated into Macwave (the control software) or are written in MATLAB (The MathWorks, Inc.).
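As one example of such offline analysis, a temporal power spectrum of the kind discussed below can be computed from a logged error time series. This Python sketch (the chapter's own tools use MATLAB) applies a windowed periodogram to synthetic data with the 10-Hz, 200-frame record structure of Figure 17.6:

```python
import numpy as np

# Windowed periodogram of a synthetic RMS wavefront error record,
# mimicking the 10-Hz, 200-frame (20-s) SHWS records of Figure 17.6.
FS = 10.0                         # SHWS sampling rate (Hz)
N = 200                           # number of frames in the record

rng = np.random.default_rng(1)
t = np.arange(N) / FS
# synthetic residual: a slow 0.1-Hz drift plus measurement noise (nm)
rms_error = 80.0 * np.sin(2 * np.pi * 0.1 * t) + 10.0 * rng.standard_normal(N)

window = np.hanning(N)
spectrum = np.abs(np.fft.rfft(window * (rms_error - rms_error.mean()))) ** 2
psd = spectrum / (FS * (window ** 2).sum())   # power density, nm^2/Hz
freqs = np.fft.rfftfreq(N, d=1 / FS)

peak_freq = freqs[np.argmax(psd[1:]) + 1]     # skip the DC bin
print(peak_freq)
```

With a 20-s record the frequency resolution is 0.05 Hz, so the slow drift in this synthetic example lands cleanly in the 0.1-Hz bin.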
FIGURE 17.4 Average RMS wavefront error across a 6.8-mm pupil, measured in three subjects, with (dark gray) and without (light gray) AO compensation. RMS wavefront error is shown for the total aberrations (2nd through 10th order), Zernike defocus (C4), two Zernike astigmatism modes (C3 and C5), and higher order aberrations (3rd through 10th order). Error bars represent ±1 standard deviation from the mean.
17.4.1 Image Sharpening
Image sharpening refers to a calibration procedure by which the fixed non-common-path aberrations are compensated for via an iterative approach that maximizes the peak of the PSF recorded by the science camera and calibrates the SHWS accordingly. This process, which was realized with MATLAB code, iteratively adds known amounts of individual Zernike modes via the wavefront corrector while monitoring the effect of those changes on image quality (i.e., the peak value of the PSF) as recorded by the science camera. The process requires updating the reference coordinates of the SHWS after each iteration. While image sharpening proved effective for determining the non-common-path aberrations, these were found to be relatively small for the Indiana retinal camera, suggesting that it was constructed close to its design specifications. As an example, Figure 17.5 shows the PSF of the AO retinal camera obtained by imaging a point source (the fiber tip of the laser diode) through the system and capturing its image with the science CCD camera, together with the PSF calculated from a wavefront measurement with the SHWS. Note the strong qualitative similarity between the two PSFs, confirming that the non-common-path aberrations are small and substantiating the SHWS measurement as an accurate indicator of the system aberrations.
FIGURE 17.5 Point spread functions of the AO retinal camera for two induced wave aberrations, as (a) recorded with the science CCD camera and (b) reconstructed from SHWS aberration measurements. Note the similarity between the recorded and reconstructed PSFs.
17.4.2 Temporal Power Spectra
The performance of the AO system is often assessed by examining the energy distribution of the residual error in the temporal power spectrum (TPS). We originally generated such spectra from wavefronts reconstructed from the Zernike coefficients. More recently, we have determined them directly from the raw SHWS data, expecting to capture more of the higher order aberrations and to bypass fitting errors potentially introduced by the use of Zernike modes. Interestingly, both approaches produced similar TPS results. Figure 17.6 shows several power spectra that were determined from SHWS measurements on one subject before and after dynamic correction. The latter is shown at different gain values (10, 30, and 50%), with 30% gain likely providing the best correction. Note the significant amount of energy that is effectively corrected by AO at very low frequencies (<0.2 Hz). Frequencies as low as 0.04 Hz were measured. Also shown is a TPS for an unstable AO correction (30% gain). At frequencies >0.2 Hz, the instability causes substantial residual energy that is roughly one order of magnitude higher than that with stable correction. Interestingly, the instability stemmed
FIGURE 17.6 Temporal power spectra of the RMS wavefront error before (black solid line) and after dynamic correction with gains of 10% (gray short-dashed line), 30% (gray solid line), and 50% (gray long-dashed line) on one subject for a 6-mm pupil. The SHWS measurements were recorded across 20-s intervals (200 frames) with the AO system operating at 10 Hz. The exposure time was 50 ms. No blinks occurred during the 20-s intervals. The subject was dilated and mildly cyclopleged with 1% tropicamide and 2.5% phenylephrine. The unstable (30% gain) curve (black dashed line) was collected 6 months earlier on the same subject, but with the system unstable due to a centroiding software error and a defective ribbon cable that sent improper voltage commands to the deformable mirror driver. The plot shows that dynamic correction lowers the power at low temporal frequencies (frequencies below the AO bandwidth) when compared with the no correction case, and that the AO system effectively corrects aberrations at those temporal frequencies when running at gains of 10, 30, and 50%.
from a centroiding software error and a slight defect in the ribbon cable used to send voltage commands to the DM driver.

17.4.3 Power Rejection Curve of the Closed-Loop AO System
The power rejection curve is defined as the ratio of the dynamic correction curve to the open-loop (no correction) curve; an example is given in Figure 17.7, based on the data from Figure 17.6. The corresponding theoretical prediction, which is based on the camera parameters, is also shown and agrees well with the experimental results. As shown in the figure, the cutoff
FIGURE 17.7 Experimental and theoretical power rejection curves. Experimental curve (jagged line) was obtained on one eye using a gain of 30% for a 6.8-mm pupil. The corresponding theoretical curve was based on the current system parameters (50 ms loop time, 45 ms time delay, 30% gain). The cutoff frequency is 0.87 Hz, suggesting that aberrations with temporal frequencies below 0.87 Hz are corrected by the system, while those above 0.87 Hz are amplified. The second theoretical curve (rightmost) predicts the performance of Indiana’s next-generation AO system (20 ms loop time, 16 ms time delay, 30% gain).
frequency (the frequency at which the rejection curve attains a value of 1) is 0.87 Hz, suggesting that aberrations with temporal frequencies below 0.87 Hz are reduced by the system, while those above 0.87 Hz are amplified. A second theoretical curve is also shown that predicts the performance of Indiana’s next-generation AO system, which consists of a CCD in the wavefront sensor with a 60-Hz frame rate, and an exposure time and computational time delay of 20 and 16 ms, respectively. Note the projected 2-Hz cutoff frequency, which is 2.3 times higher than that of the current system.

17.4.4 Time Stamping of SHWS Measurements

For some types of experiments, such as those involving deconvolution, it is highly desirable to know when the SHWS measurements occur relative to the acquisition of science images. To accomplish this, a simple communication link between the AO and science computers was established that accurately records the times at which the SHWS measurements and science images are collected. Accuracy was measured at ±3 ms. Figure 17.8 shows a typical time sequence for acquiring SHWS measurements and the corresponding science images. The sequence was reconstructed from the time stamping.
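The time-stamp bookkeeping can be sketched as follows; this is a hypothetical illustration of pairing each science image with its nearest SHWS measurement, not the actual communication-link code:

```python
from bisect import bisect_left

# Hypothetical pairing of logged acquisition times (ms): for each science
# image, find the nearest SHWS measurement, e.g., for deconvolution.
def nearest_shws(science_ms, shws_ms):
    paired = []
    for t in science_ms:
        i = bisect_left(shws_ms, t)                    # insertion point
        candidates = shws_ms[max(i - 1, 0):i + 1]      # neighbors of t
        paired.append(min(candidates, key=lambda s: abs(s - t)))
    return paired

shws_times = [0, 71, 143, 214, 286]       # ~14-Hz SHWS frames
science_times = [10, 52, 94, 135, 177]    # ~24-Hz science frames
print(nearest_shws(science_times, shws_times))
```

Because the two cameras run asynchronously at different rates, several science frames may share the same nearest wavefront measurement, as in this example.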
FIGURE 17.8 Timing diagram of concurrent SHWS measurements (14 Hz) and the acquisition of science images (24 Hz) as obtained on one subject’s eye.
17.4.5 Extensive Logging Capabilities
It is often helpful for users to save and view various experimental data relating to the performance of the AO system, such as the centroid positions, centroid derivatives, and actuator voltages. These data logs are then used offline for further analysis. For example, the centroid logs are used to generate temporal power spectra in MATLAB, and the voltage logs are used to study mirror stability. These logging capabilities were written directly into the AO control software, Macwave.

17.4.6 Improving Corrector Stability
Eye blinks, the sudden removal of a subject’s eye from the system, and saturation of the SHWS detector by other light sources in the system (such as the flashing flood illumination or OCT beams) all significantly contaminate the wavefront measurement and can lead to instabilities in wavefront measurement and correction. We have developed a robust software filter that ignores such measurements: the wavefront measurements are still acquired and stored, but corrections are not performed. The filter can be tailored to specific experimental conditions. Figure 17.9 illustrates the effectiveness of the filter in suppressing instabilities in the RMS wavefront error immediately following an eye blink.
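The behavior of such a validity filter can be sketched as below; the rejection criterion (fraction of usable lenslet spots) and its thresholds are assumptions for illustration, not the published Macwave criteria:

```python
# Minimal sketch of a frame-validity filter: a frame drives a correction
# only when enough lenslet spots are usable; otherwise the last good mirror
# command is held. Thresholds here are hypothetical.
def process_frame(spot_intensities, min_valid_fraction=0.9, threshold=50):
    """Return True if the frame should drive a correction."""
    valid = sum(1 for s in spot_intensities if s > threshold)
    return valid / len(spot_intensities) >= min_valid_fraction

normal_frame = [200] * 280 + [60] * 9     # 289 lenslets, nearly all lit
blink_frame = [10] * 289                  # spots lost during a blink
print(process_frame(normal_frame), process_frame(blink_frame))
```

A real filter would likely combine several such criteria (spot intensity, centroid jump size, detector saturation) and could be tuned per experiment, as the text describes.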
17.5 EXAMPLE RESULTS WITH AO CONVENTIONAL FLOOD-ILLUMINATED IMAGING

The flood-illuminated images shown in Figure 17.10 represent typical, well-focused cone images that can be acquired with the Indiana AO retinal camera
FIGURE 17.9 Measured RMS wavefront error traces with and without a real-time software filter that suppresses erroneous SHWS measurements, such as those caused by eye blinks. Note the stability of the RMS wavefront error immediately following each blink (as indicated by the black arrows) when the filter is employed compared to when it is not.
using near-infrared illumination (843 nm). The four images were collected at two retinal eccentricities (1° and 2.4°) with and without AO. Defocus is less than 1/36 D for all four images, as the collection protocol entailed adjusting focus in small increments of 1/36 D (10.3 µm) and selecting the visually clearest cone images. Cones at both eccentricities were observed over a focus range of approximately 62 µm. As evident in Figure 17.10, some cone information is present without correcting higher order aberrations (left column), but the mosaic is clearly of higher contrast and is resolved significantly better with correction (right column). Center-to-center cone spacing was directly measured from the central portion of the two AO-compensated images in Figure 17.10 and was 5.0 and 7.0 µm at 1° and 2.4°, respectively. These spacings agree well with average center-to-center spacings (4.9 and 7.1 µm) obtained from anatomical studies of excised retina at these same eccentricities [13]. As further demonstration of the imaging performance of the flood illumination camera, Figure 17.11(a) shows a through-focus sequence of images
FIGURE 17.10 Raw, single images of cone photoreceptors collected on a subject with conventional flood-illuminated imaging. The four images represent the sharpest images collected at two retinal eccentricities (1° and 2.4°) with and without AO. The images are 0.67° × 0.33° (200 µm × 100 µm) in size and were collected using the 843-nm SLD. The exposure duration was 4 ms, and the imaging pupil size was 6 mm. (Adapted from Zhang et al. [3]. Reprinted with permission from the Optical Society of America.)
(extracted from a 30-Hz video) that were acquired at increasing depths in the retina (left to right). Capillaries are clearly visible in the leftmost image, albeit at low contrast due to their high transparency at the 675-nm illumination wavelength. The underlying cone photoreceptors are well out of focus in this image. With the focus shifted further inward (the two intermediate depths), the capillaries disappear and cone structure begins to appear. The rightmost and most posterior image shows a sharp cone mosaic. As a second example, Figure 17.11(b) shows a sequence of images (extracted from a 60-Hz video) that were acquired while focusing on an individual retinal capillary. The resolution with AO was found sufficient to differentiate individual blood cells, and the 60-Hz imaging rate readily captured the flow dynamics of these cells. Note the difference in scale between Figures 17.11(a) and 17.11(b); the capillaries in both figures are about the same size, having a diameter of approximately 6 µm.
17.6 EXAMPLE RESULTS WITH AO PARALLEL SD-OCT IMAGING

17.6.1 Parallel SD-OCT Sensitivity and Axial Resolution
OCT sensitivity (as applied to the eye) refers to the weakest backscattered light from the retina that can be detected. More specifically, sensitivity is defined as the square of the signal amplitude divided by the variance of the noise floor [14]. The sensitivity of the parallel SD-OCT system was measured
FIGURE 17.11 (a) Sequence of raw, single images that depict through-focus, optical sectioning of the retina with conventional flood-illuminated imaging on one subject. Location is 1.4° eccentricity in the superior retina. Images are 0.8° × 0.8° (240 µm × 240 µm). The retinal image and SHWS acquisition rates were 30 and 14 Hz, respectively, and their exposure durations were 2 and 33 ms, respectively. The AO-corrected RMS wavefront error was 0.11 µm during image acquisition. Illumination was provided by the 670-nm laser diode. (b) Sequence of raw, single images that depict the flow of individual blood cells in a retinal capillary. Images were 1 ms in duration and were acquired at 60 Hz with conventional flood illumination on one subject. Dark arrows point to a single blood cell whose brightness fluctuates as it traverses the capillary. The time duration between the leftmost and rightmost images is 50 ms. Retinal location is 2.5° eccentricity, and images are 0.4° × 0.4° (120 µm × 120 µm).
by substituting a planar mirror for the eye in the sample channel. Neutral density filters were added to the sample channel to roughly mimic the total light loss in the eye (~4 ND). The exposure duration and power level at the eye were the same as those used in the retinal imaging experiments. Interference spectra were recorded as the optical path length of the reference channel was incremented in 45-µm steps via the voice coil stage across a range of 760 µm (equivalent to 550 µm in the retina). For each step, B-scans were reconstructed. Sensitivity was calculated for various A-scans along the B-scan. Figure 17.12 shows the resultant sensitivity of the parallel SD-OCT system. Note that the abscissa was rescaled for retinal tissue. Sensitivity is highest (94 dB)
FIGURE 17.12 Parallel SD-OCT sensitivity as a function of depth, converted from depth measured in air to that in tissue using a refractive index of n = 1.38. The four A-scan locations are along the line illumination profile, given in microns at the retina; e.g., the 25th A-scan is 25 µm from the central A-scan. (Adapted from Zhang et al. [3]. Reprinted with permission of the Optical Society of America.)
for the central A-scan up to a depth of 185 µm and decreases monotonically with tissue depth and with location along the line illumination relative to the central A-scan. The middle ±25 A-scans drop at a rate of roughly 2.1 dB per 100 µm of tissue and provide >90 dB sensitivity up to a depth of 363 µm. The noticeable drop between the 25th and 50th A-scans is attributable to nonuniformities in the roughly Gaussian intensity profile of the 843-nm SLD and the spherocylindrical lens. The 75th A-scan is more than 20 dB below the central A-scan. While this drop is impractically large for retinal imaging due to the high loss of light in the eye, the results do illustrate that the instrument can acquire a single B-scan composed of 150 A-scans in 1 ms and achieve short-burst rates of 75,000 A-scans/s. These rates are 5 and 2.5 times higher than those of current SD-OCT retinal cameras and could be more effectively realized in the eye if a more uniform illumination source with a retinal irradiance comparable to the maximum of the current system were used. The FWHH of the axial PSF was also measured and converted to that in retinal tissue assuming a retinal refractive index (n_ret) of 1.38. The theoretical FWHH axial resolution in retinal tissue for the 843-nm SLD (Δλ = 49.4 nm) is
FWHH = (2 ln 2 / π) · λ² / (n_ret Δλ) = (2 ln 2 / π) · (843 nm)² / [(1.38)(49.4 nm)] = 4.6 µm          (17.2)
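As a numeric check of Eq. (17.2) and of the air-to-tissue conversion used below, under the parameters stated in the text:

```python
import math

# Eq. (17.2): theoretical FWHH axial resolution in retinal tissue
wavelength_nm = 843.0
bandwidth_nm = 49.4      # SLD spectral bandwidth (Delta-lambda)
n_ret = 1.38             # retinal refractive index

fwhh_um = (2 * math.log(2) / math.pi) * wavelength_nm ** 2 \
          / (n_ret * bandwidth_nm) / 1000.0
print(round(fwhh_um, 1))        # 4.6 um, as in the text

# the 7.6-um FWHH measured in air rescales to tissue by the same index
print(round(7.6 / n_ret, 1))    # 5.5 um
```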
The measured resolution obtained with the planar mirror in the sample channel is 7.6 µm (air), which corresponds to 5.5 µm in the retina. The difference between the theoretical and experimental values is likely due to a residual dispersion mismatch and the dependence of the SLD spectrum on output power, which was set below its maximum as a precautionary measure. As a third measurement, the specular inner limiting membrane (ILM) reflection at the base of the foveal pit was captured and used to determine an in vivo resolution of 5.7 µm.

17.6.2 AO Parallel SD-OCT Imaging
Images acquired with the parallel SD-OCT camera have been used largely to validate the instrument’s performance and to compare it with current OCT systems (such as the Stratus OCT3 and a research-grade scanning SD-OCT system) and with AO conventional flood-illuminated systems. To this end, we targeted two distinct patches of retina at eccentricities of 1° and 2.4°, imaging each with an AO conventional flood-illuminated system (Fig. 17.10), a conventional OCT system (Fig. 17.13), and an AO parallel SD-OCT system (Fig. 17.14).
FIGURE 17.13 (Top) Stratus OCT3 and (bottom) scanning SD-OCT B-scans collected in the same subject. Images are centered on the fovea and bisect the superior and inferior retinal fields (Stratus OCT3) and nasal and temporal fields (scanning SD-OCT). B-scans are 4.9 mm (16.3°) wide and 0.75 mm in depth. White rectangles depict 100-µm-wide by 560-µm-deep subsections that are centered at eccentricities of 1° and 2.4° and were imaged using an AO conventional flood-illuminated system and an AO parallel SD-OCT system. A magnified view of these subsections is shown in Figure 17.14 with the corresponding AO parallel SD-OCT B-scans. The retinal layers include the nerve fiber layer (NFL), ganglion cell layer (GCL), inner plexiform layer (IPL), inner nuclear layer (INL), outer plexiform layer (OPL), outer nuclear layer (ONL), the junction between the inner and outer photoreceptor segments (IS/OS), and the retinal pigment epithelium (RPE). (Adapted from Zhang et al. [3]. Reprinted with permission of the Optical Society of America.)
FIGURE 17.14 (Left two columns) B-scan images acquired with the AO parallel SD-OCT instrument (shown in Fig. 17.1) with and without AO at eccentricities of 1° and 2.4° (superior). (Right two columns) Stratus OCT3 and scanning SD-OCT B-scans are shown at the same retinal eccentricities (from the white rectangular boxes in Fig. 17.13). All images were acquired on the same subject and are 100 µm wide and 560 µm deep. (Bottom) The interface between the inner and outer segments and the RPE are enlarged and displayed as an amplitude on a linear scale (as opposed to a logarithmic scale). Images without AO are normalized to the corresponding AO images, including the enlarged images. Depth of focus (dof) is 61 µm and is defined in the text. The retinal layers include the nerve fiber layer (NFL), ganglion cell layer (GCL), inner plexiform layer (IPL), inner nuclear layer (INL), outer plexiform layer (OPL), outer nuclear layer (ONL), the junction between the inner and outer photoreceptor segments (IS/OS), and the retinal pigment epithelium (RPE). (Adapted from Zhang et al. [3]. Reprinted with permission from the Optical Society of America.)
Figure 17.13 shows the Stratus OCT3 and scanning SD-OCT B-scans that radially bisect the foveal pit. The axial resolution of the Stratus and scanning SD-OCT instruments is 10 and 6 µm, respectively. The white rectangular boxes depict the targeted retinal patches that are centered at 1° and 2.4° eccentricities and are 100 µm × 560 µm (width × height) in size. Labels for the intraretinal layers are also shown and depict current anatomical interpretations of high-resolution OCT images [15]. Most of these layers are also
suggested in the Stratus OCT3 image but are not as well defined due to increased speckle and lower resolution [16]. Figure 17.14 shows the corresponding AO SD-OCT images (extracted from a sequence of short bursts) that were acquired at the two eccentricities. The corresponding subsections of the Stratus OCT3 and scanning SD-OCT images (white rectangles in Fig. 17.13) are also shown for comparison. Thickness measurements of the retina (i.e., the distance between the ILM and the posterior edge of the cone outer segments) are in reasonable agreement. Differences in the AO SD-OCT retinal thickness measurements of 20.6 and 8.8% relative to the Stratus at the two eccentricities are likely within the error imposed by image interpretation, sampling errors, and differences in retinal location. A comparison of retinal thickness between the AO SD-OCT image and the scanning SD-OCT image was not performed as the latter was acquired from a different meridian, albeit at roughly the same eccentricity. The images from the three cameras contain grossly similar bright and dark bands that occur at similar depths in the retina. Interestingly, the stratification of the intraretinal layers appears most defined in the AO parallel SD-OCT
FIGURE 17.15 Two collages created by digitally pasting together an alternating sequence of AO parallel SD-OCT images acquired at eccentricities of 1° and 2.4°. The collages are roughly 3.25° to 3.5° wide. Focus is at the cone photoreceptor layer. Images in each alternating sequence were taken from the same short-burst series. The collage at 2.4° eccentricity was generated from two images (each with 70 A-scans from the central region of a B-scan; dashed white rectangle represents the combined two images and is 140 A-scans) that were repeated about 7 times. The collage at 1° eccentricity contains three images (each with ~70 A-scans; dashed white rectangle represents 210 A-scans) that were repeated about 5 times. The collages were axially registered to each other by aligning the reflection from the IS/OS junction. The retinal layers include the nerve fiber layer (NFL), ganglion cell layer (GCL), inner plexiform layer (IPL), inner nuclear layer (INL), outer plexiform layer (OPL), outer nuclear layer (ONL), the junction between the inner and outer photoreceptor segments (IS/OS), and the retinal pigment epithelium (RPE). (Adapted from Zhang et al. [3]. Reprinted with permission of the Optical Society of America.)
images (left two columns of Fig. 17.14) even though the AO parallel SD-OCT images are somewhat darker, which suggests a reduced signal-to-noise ratio (SNR). Some of the differences between AO parallel SD-OCT images and the standard OCT images are caused by image resizing necessitated by differences in A-scan density. While the layers are clearly less distinct than those in Figure 17.13, they can be better visualized in a collage. The collage was created by compressing a few AO parallel SD-OCT images along their lateral dimension to roughly the same dimensions as the sections shown in Figure 17.13, and then digitally pasting together duplications of those images to form a collage, an example of which is shown in Figure 17.15. By simulating a larger field of view, the two collages (1° and 2.4°) more clearly show the stratification of the retinal layers that is typically observed with high-resolution OCT. Bright reflections are visible at the nerve fiber layer, inner segment/outer segment junction, and RPE; typically, in OCT images, bright reflections occur at the interfaces between media with different refractive indices. Note the distinct physical separation between the inner nuclear layer and adjacent plexiform layers as well as the anterior (inner segment/outer segment junction) and posterior sides of the outer segments. The outer segments of the photoreceptors are slightly longer at the smaller eccentricity as expected. Collectively, the results from Figures 17.13 to 17.15 indicate that the AO parallel SD-OCT instrument is sufficiently sensitive to detect reflections from essentially all major layers of the retina (nerve fiber layer to the retinal pigment epithelium). Speckle is an unfortunate by-product of the coherent nature of OCT detection and is indeed readily visible in the images acquired from all three OCT instruments (Fig. 17.13 to 17.15). 
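The compress-and-paste construction described above can be sketched in a few lines. This is a minimal illustration assuming numpy, with illustrative array sizes (560-pixel depth, 70 A-scans per image, as in Fig. 17.15), not the authors' actual processing code:

```python
import numpy as np

def compress_lateral(b_scan, factor):
    """Compress a B-scan (depth x A-scans) laterally by block-averaging
    groups of `factor` adjacent A-scans."""
    depth, width = b_scan.shape
    width -= width % factor                      # drop any ragged edge
    blocks = b_scan[:, :width].reshape(depth, width // factor, factor)
    return blocks.mean(axis=2)

def make_collage(b_scans, repeats):
    """Digitally paste an alternating sequence of B-scans side by side,
    repeated `repeats` times, to simulate a wider field of view."""
    strip = np.hstack(b_scans)                   # one period of the sequence
    return np.hstack([strip] * repeats)

# Toy data: two 560 x 70 B-scans repeated 7 times -> 140 * 7 = 980 A-scans.
rng = np.random.default_rng(0)
scans = [rng.random((560, 70)) for _ in range(2)]
collage = make_collage(scans, repeats=7)
```

In the chapter the pasted images come from the same short-burst series, so adjacent tiles sample nearly the same retinal patch under different speckle realizations.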
For the AO parallel SD-OCT images, it clearly hinders our ability to correlate retinal reflections to microscopic retinal structures, especially those that approach the size of an individual speckle. Speckle in the AO-OCT images is about the average size predicted theoretically (2.9 and 5.7 µm in the lateral and axial directions, respectively) for the imaging configuration used here. As evident in Figure 17.14, the commercial and research-grade OCT instruments generate speckle of noticeably larger size and of different shape. These differences originate from differences in the pupil diameter and coherence lengths of the SLD light sources. In Figure 17.14, smaller speckle appears clearly less disruptive of retinal features and illustrates an advantage of larger pupils and shorter coherence length sources. However, even with the large 6-mm pupil and 5.7-µm coherence length of the AO parallel SD-OCT instrument, our results show that fully developed speckle is still present and substantially limits the microscopic retinal structures we can observe. This is despite the fact that the instrument has achieved the necessary 3D resolution, sensitivity, and speed required to observe these structures. While speckle is clearly disruptive, some microscopic structures are apparent (particularly within the instrument's depth of focus) when the camera is focused at the location where cones are clearest in the flood-illuminated image. [The depth of focus for the Indiana AO parallel SD-OCT instrument is about 61 µm (see Fig. 17.14) and is defined as two times the Rayleigh range
for a Gaussian beam and a 6-mm pupil.] Specifically, the bright reflection from the interface between the inner and outer segments of the photoreceptors appears spatially segmented, having a quasi-regular pattern with a periodicity of several microns. This unique pattern is not observed in any other part of the OCT images, for example, in any of the other retinal layers. Due to the random position of the slit on the retina, the portion of the mosaic sampled by the camera varied from acquisition to acquisition. The quasi-regular pattern largely disappears into a thin line when there is no AO correction (Fig. 17.14). The presence of some pattern information without AO should be expected, as there are hints of structure in the cone mosaic in the flood-illuminated images without AO (Fig. 17.10) for subjects with normal optical quality when defocus and astigmatism are meticulously corrected. While the images showing an enlarged view of the RPE and the interface between the inner and outer segments (bottom of Fig. 17.14) appear to contain structural information specific to the cones, only the AO image contains structures whose regular spacing (~7 µm) matches that measured using flood-illuminated imaging (Fig. 17.10). The structural spacing without AO is noticeably smaller, suggesting it is corrupted by speckle, whose average size is similar to the spacing. Note the increased brightness of the IS/OS interface and the RPE (to a lesser extent) when the aberrations are compensated. This is particularly evident in the enlarged amplitude images of Figure 17.14 and would be even more dramatic if displayed as an intensity on a linear scale. As a potential means to reduce the contrast of the speckle noise, we investigated the impact of averaging images within a short-burst sequence.
Micromovements of the retina that involuntarily occur between images (with a 2-ms spacing) might be sufficiently large to spatially alter the speckle pattern, while still sufficiently small to preserve much of the retinal signal at the cellular level. Figure 17.16 shows examples of averaging across three short-burst sequences. For the left two columns, averaging reduces speckle contrast with some increase in clarity of the IS/OS junction when AO compensation is employed. However, averaging seems to reduce the contrast in many of the other retinal layers, which is expected given that these layers are out of focus and therefore should carry little cellular (or high-frequency) information. Based on these preliminary observations, speckle noise is the likely source of the high contrast in these layers. As an example, the contrast of the GCL and IPL layers decreased by 1.8 and 1.6 dB for the left two columns in Figure 17.16. The rightmost column is an example with almost no contrast change after averaging. Analysis of the short-burst images reveals that the retina was effectively stationary during the 15-ms short-burst sequence and produced no change in the speckle pattern. This latter example is rather atypical in that small amounts of retinal motion are usually present. While many more images than were used in Figure 17.16 are necessary to achieve appreciable speckle contrast reduction, these preliminary results illustrate a possible approach, provided that lateral motion of the retina during a single exposure is smaller than the size of the structures of interest.
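The rationale for averaging decorrelated frames can be illustrated with a toy simulation (not the authors' data pipeline): for fully developed speckle the amplitude contrast is √(4/π − 1) ≈ 0.52, and averaging N uncorrelated speckle amplitude images reduces the contrast by roughly 1/√N:

```python
import numpy as np

rng = np.random.default_rng(1)

def speckle_amplitude(shape):
    """Fully developed speckle: amplitude of a circular complex
    Gaussian field (Rayleigh distributed)."""
    field = rng.normal(size=shape) + 1j * rng.normal(size=shape)
    return np.abs(field)

def contrast(img):
    """Speckle contrast: standard deviation over mean."""
    return img.std() / img.mean()

single = speckle_amplitude((512, 512))
avg16 = np.mean([speckle_amplitude((512, 512)) for _ in range(16)], axis=0)
# Contrast drops from ~0.52 toward ~0.52 / sqrt(16) ~ 0.13.
```

In practice the gain is smaller: eye-motion-decorrelated frames are only partially independent, and too much motion blurs the cellular signal itself, which is the trade-off discussed above.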
FIGURE 17.16 (Top row) Single B-scans acquired with AO parallel SD-OCT with and without AO compensation at 2.4° eccentricity (superior). (Bottom row) Average B-scans computed from seven (left and right) and three (middle) images from the same short-burst sequence, at the same eccentricity as the top row. Images are displayed on a logarithmic scale. Images are 100 µm wide and 560 µm deep. Major layers of the retina are labeled on the right. The retinal layers include the nerve fiber layer (NFL), ganglion cell layer (GCL), inner plexiform layer (IPL), inner nuclear layer (INL), outer plexiform layer (OPL), outer nuclear layer (ONL), the junction between the inner and outer photoreceptor segments (IS/OS), and the retinal pigment epithelium (RPE). The figure shows that micromovements of the retina allow averaging to reduce speckle contrast, but the movements may have been insufficient to significantly reduce speckle in these data.
As additional substantive evidence that the quasi-regular pattern present in the B-scans corresponds to reflected light at the IS/OS junction, we analyzed both flood-illuminated and SD-OCT data in the Fourier domain [17]. To account for the pseudo-hexagonal distribution of the cones and the random position of the narrow slit at the cones due to eye motion, a virtual 2.8 µm × 100 µm slit was projected onto the 1° and 2.4° flood-illuminated cone images. One-dimensional power spectra were computed along the slit length using only the narrow slice of the mosaic sampled by the slit. Because only the
FIGURE 17.17 Average power spectra obtained by 1D Fourier transformation of the conventional flood-illuminated AO images (thin solid line) in Figure 17.10 after being sampled by the 2.8 µm × 100 µm slit and (a) of cross-sectional slices through the interface between the inner and outer segments (thick solid line), outer plexiform layer (dash-dot-dashed line), outer nuclear layer (dash-dash-dotted line), and inner plexiform layer (dotted line) at 2.4° eccentricity. OCT curves are an average of six short-burst sequences, each containing 7 B-scans. (b, c) Power spectra are shown with (thick solid line) and without (dotted line) AO at eccentricities of 2.4° and 1°, respectively. All parallel SD-OCT curves are normalized to have the same power at 0 cyc/deg.
general location is known at which the slit sampled the retinal patch in the actual AO parallel SD-OCT experiment, a rolling average of the power spectra was computed. This was realized by laterally shifting the virtual slit across the portion of each cone mosaic image (Fig. 17.10) where the slit was known to fall. The resultant power spectra (thin solid lines) are shown in the three plots of Figure 17.17 for the two eccentricities. Note the cusps in the spectra occurring at frequencies corresponding to spacings of 5 and 7.1 µm for the 1° and 2.4° eccentricities, respectively. Average power spectra [Fig. 17.17(a)] generated from six short-burst sequences of OCT images are also shown for cross-sectional slices through the interface between the inner and outer segments, outer plexiform layer, outer nuclear layer, and inner plexiform layer. As the figure reveals, only the OCT cone curve contains noticeable energy localized near the spatial frequency of the cones from the conventional flood-illuminated image. The power spectra in Figures 17.17(b) and 17.17(c) show
the impact of AO at both 1° and 2.4° eccentricities when focused on the cones. Spectra were averaged across six short-burst sequences, each containing seven B-scans at each of the two eccentricities. Cusps in the AO-OCT power spectra are again observed. The cusp in the AO-OCT power spectrum at 2.4° agrees very well with the spatial frequency of cone photoreceptors observed in the corresponding flood-illuminated power spectrum; the cusp in the AO-OCT power spectrum at 1° is of significantly smaller amplitude but occurs at the spatial frequency of the cone photoreceptors observed in the 1° flood-illuminated power spectra. The gain in spatial resolution afforded by AO should also be accompanied by a gain in SNR, as correcting aberrations produces a more concentrated focus of the retinal reflection at the CCD detector. The changes in the SNR of the photoreceptor and nerve fiber layer (NFL) reflectances were studied for three imaging scenarios: (1) the image was focused at the cones without AO correction, (2) the image was focused at the cones with AO dynamic correction, and (3) the image was focused near, but not at, the NFL with AO dynamic correction. The same 2.4° retinal patch was imaged in all three scenarios. To reduce the speckle contamination that would otherwise bias the comparison, the SNR of the cone photoreceptor layer and NFL (SNRcone and SNRNFL, respectively) were calculated from the average of 20 contiguous A-scans obtained during each short-burst image. Figure 17.18 shows typical profiles (average of 20) for the three scenarios, in which the reflectivity is the inverse Fourier transformation of the interference fringes collected by the CCD. Note that comparing the SNR visually from Figure 17.18 does not precisely reflect the true trend because of differences in the noise floor in these profiles, though they differ by less than 1.3 dB.
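A minimal sketch of the slit-sampled, rolling-average power spectrum analysis described above, assuming numpy; the toy mosaic, its 8-pixel period, and all sizes are illustrative stand-ins for the real cone-mosaic data:

```python
import numpy as np

def rolling_slit_spectrum(mosaic, slit_rows, n_shifts):
    """Average the 1-D power spectra computed along the length of a
    narrow virtual slit as it is shifted laterally across the mosaic."""
    spectra = []
    for shift in range(n_shifts):
        slit = mosaic[shift:shift + slit_rows, :]   # narrow slice of the mosaic
        profile = slit.mean(axis=0)                 # collapse across slit width
        power = np.abs(np.fft.rfft(profile - profile.mean())) ** 2
        spectra.append(power)
    return np.mean(spectra, axis=0)

# Toy mosaic: a quasi-regular pattern with an 8-pixel period plus noise.
rng = np.random.default_rng(2)
x = np.arange(256)
mosaic = np.sin(2 * np.pi * x / 8)[None, :] + 0.5 * rng.normal(size=(32, 256))
spec = rolling_slit_spectrum(mosaic, slit_rows=3, n_shifts=16)
peak = int(np.argmax(spec[1:])) + 1                 # skip the DC bin
# The spectral cusp sits at 256 / 8 = 32 cycles per record.
```

Averaging over slit positions plays the same role as the rolling average in the chapter: it suppresses the dependence on where the slit happened to land on the quasi-regular mosaic.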
FIGURE 17.18 Average profile of 20 contiguous A-scans centered about the brightest region in the B-scan. Images were obtained at a retinal eccentricity of 2.4°. The figure shows that the SNR of the photoreceptor layer increases significantly when the camera is focused on it and the ocular aberrations are compensated by AO. (Adapted from Zhang et al. [3]. Reprinted with permission from the Optical Society of America.)
TABLE 17.1 Average SNR for the Cone Photoreceptor Layer (SNRcone) and NFL (SNRNFL) for Three Imaging Scenarios*

              Without AO            With AO               With AO
              (Focused on Cones)    (Focused on Cones)    (Focused near NFL)
SNRNFL (dB)        44.4                  41.3                  46.4
SNRcone (dB)       40.5                  51.9                  38.8

* Adapted from Zhang et al. [3]. Reprinted with permission of the Optical Society of America.
Table 17.1 lists SNRcone and SNRNFL for the three imaging scenarios. The table shows, as expected, that the detected retinal reflection is highly dependent on focus and the presence or absence of ocular aberrations. For example, an 11.4-dB increase in SNRcone is observed when aberrations are corrected with dynamic AO for the case when the image is focused on the cone layer. This increase reflects the influence of the correction of 0.50 µm RMS wavefront error (astigmatism, third order and higher) by AO. It is expected that there will be an additional increase in SNRcone if AO acts on both the light entering and exiting the eye instead of only the latter, as in this AO parallel SD-OCT camera. It is also worth pointing out that there is a 13.1-dB drop in SNRcone when the focus is shifted from the layer at which the clearest flood-illuminated images of cones were acquired to a layer 200 µm anterior (near the NFL) in the presence of dynamic AO correction. This decrease in SNR was due to a 0.55-D (200-µm) change in defocus since AO was dynamically correcting the ocular aberration during the experiment. From a theoretical standpoint, light efficiency through the OCT slit is predicted to decrease by 12.3 dB when 0.55 D of defocus is added and the system is assumed to be diffraction limited for a 6-mm pupil. The NFL and outer nuclear layer also show consistent differences in reflectivity associated with focus and AO correction, although to a lesser extent.
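The quoted SNR changes follow directly from Table 17.1; a trivial numerical check (values taken from the table):

```python
# SNRcone (dB) for the three imaging scenarios, from Table 17.1.
snr_cone = {
    "no_AO_focus_cones": 40.5,
    "AO_focus_cones": 51.9,
    "AO_focus_near_NFL": 38.8,
}

# AO correction with focus at the cones raises SNRcone by 11.4 dB ...
gain_from_ao = snr_cone["AO_focus_cones"] - snr_cone["no_AO_focus_cones"]
# ... and shifting focus ~200 um anterior (0.55 D) drops it by 13.1 dB.
loss_from_defocus = snr_cone["AO_focus_cones"] - snr_cone["AO_focus_near_NFL"]
```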
17.7 CONCLUSION
This chapter provides a technical overview of the Indiana AO-OCT retinal camera. The camera was designed for sequentially collecting 3D high-resolution SD-OCT and 2D high-resolution conventional flood-illuminated images of the microscopic retina in the living human eye. The overview includes a detailed description of the camera, general procedures for preparing subjects and collecting retinal images, a performance assessment of the AO system, and, finally, imaging results. For AO conventional flood-illuminated imaging, the high acquisition rates (60 Hz; 500 Hz) coupled with the high lateral resolution due to AO provide the ability to quickly navigate through the retina, recognize individual cells
of relatively high contrast without image warp and motion blur, and monitor retinal dynamics occurring at the cellular level (e.g., capillary blood flow). The 3D resolution of AO SD-OCT substantially surpasses that of either methodology alone. The camera was found to have sufficient 3D resolution (3.0 µm × 3.0 µm × 5.7 µm), sensitivity (up to 94 dB), and speed (100,000 A-scans/s for a single shot of 100 A-scans) for imaging the retina at the single-cell level. This system provided the first observations of the interface between the inner and outer segments of individual cones, resolved simultaneously in both the lateral and axial dimensions. The camera sensitivity was sufficient for observing reflections from essentially all neural layers of the retina. The signal-to-noise ratio of the detected reflection from the photoreceptor layer was highly sensitive to the level of ocular aberrations and defocus. A critical limitation of the current AO SD-OCT instrument is that high-contrast speckle noise hinders our ability to correlate retinal reflections to specific cell-sized retinal structures. While speckle is a serious problem, a meaningful solution will permit OCT to reap the full benefit of AO that conventional flood-illuminated systems and scanning laser ophthalmoscopy now enjoy, but with the additional benefits of considerably higher axial resolution and sensitivity. The AO parallel SD-OCT results presented here already reveal subcellular structures in the cone photoreceptor layer that have not been reported with either flood-illuminated or SLO systems. More recently, a scanning AO SD-OCT method was developed that allowed the observation of cones in volume images [18]. These first results will surely improve as speckle reduction techniques are applied and real-time 3D imaging is implemented.

Acknowledgments

The authors thank previous group members Karen Thorn, Junle Qu, and Huawei Zhao, as well as Thomas Milner, Robert Zawadzki, and Weihua Gao for advice on the project.
Assistance from Marcos van Dam with the AO diagnostics is much appreciated. The authors also thank William Monette and Daniel Jackson's group for electronics and machining support. Financial support was provided by the National Eye Institute grant 5R01 EY014743. This work was also supported in part by the National Science Foundation Science and Technology Center for Adaptive Optics, managed by the University of California at Santa Cruz under cooperative agreement No. AST-9876783.

REFERENCES

1. Miller DT, Qu J, Jonnal RS, Thorn K. Coherence Gating and Adaptive Optics in the Eye. In: Tuchin VV, Izatt JA, Fujimoto JG, eds. Coherence Domain Optical Methods and Optical Coherence Tomography in Biomedicine VII. Proceedings of the SPIE. 2003; 4956: 65–72.

2. Liang J, Williams DR, Miller DT. Supernormal Vision and High-Resolution Retinal Imaging through Adaptive Optics. J. Opt. Soc. Am. A. 1997; 14: 2884–2892.
3. Zhang Y, Rha J, Jonnal RS, Miller DT. Adaptive Optics Parallel Spectral Domain Optical Coherence Tomography for Imaging the Living Retina. Opt. Express. 2005; 13: 4792–4811.

4. Cense B, Nassif NA, Chen TC, et al. Ultrahigh-Resolution High-Speed Retinal Imaging Using Spectral-Domain Optical Coherence Tomography. Opt. Express. 2004; 12: 2435–2447.

5. ANSI. American National Standard for the Safe Use of Lasers. ANSI Z136.1. Orlando, FL: Laser Institute of America, 2000.

6. Thorn KE, Qu J, Jonnal RJ, Miller DT. Adaptive Optics Flood-Illuminated Camera for High Speed Retinal Imaging. Invest. Ophthalmol. Vis. Sci. 2003; 44: 1002.

7. Hecht J. Understanding Fiber Optics. Upper Saddle River, NJ: Prentice Hall, 1998.

8. Hofer H, Chen L, Yoon GY, et al. Improvement in Retinal Image Quality with Dynamic Correction of the Eye's Aberrations. Opt. Express. 2001; 8: 631–643.

9. Liang J, Grimm B, Goelz S, Bille J. Objective Measurement of Wave Aberrations of the Human Eye with the Use of a Hartmann-Shack Wave-front Sensor. J. Opt. Soc. Am. A. 1994; 11: 1949–1957.

10. Jiang W, Li H. Hartmann-Shack Wavefront Sensing and Control Algorithm. In: Schulte-in-den-Baeumen JJ, Tyson RK, eds. Adaptive Optics and Optical Structures. Proceedings of the SPIE. 1990; 1271: 82–93.

11. Malacara D. Optical Shop Testing, 2nd ed. New York: Wiley, 1992.

12. Thibos LN, Hong X, Bradley A, Applegate RA. Accuracy and Precision of Methods to Predict the Results of Subjective Refraction from Monochromatic Wavefront Aberration Maps. J. Vis. 2004; 4: 329–351.

13. Curcio CA, Sloan KR, Kalina RE, Hendrickson AE. Human Photoreceptor Topography. J. Comp. Neurol. 1990; 292: 497–523.

14. Leitgeb R, Hitzenberger CK, Fercher AF. Performance of Fourier Domain versus Time Domain Optical Coherence Tomography. Opt. Express. 2003; 11: 889–894.

15. Nassif N, Cense B, Park BH, et al. In Vivo Human Retinal Imaging by Ultrahigh-Speed Spectral Domain Optical Coherence Tomography. Opt. Lett. 2004; 29: 480–482.

16. Sander B, Larsen M, Thrane L, et al. Enhanced Optical Coherence Tomography Imaging by Multiple Scan Averaging. Br. J. Ophthalmol. 2004; 89: 207–212.

17. Miller DT, Williams DR, Morris GM, Liang J. Images of Cone Photoreceptors in the Living Human Eye. Vision Res. 1996; 36: 1067–1079.

18. Zhang Y, Rha J, Cense B, et al. Motion-Free Volumetric Retinal Imaging with Adaptive Optics Spectral-Domain Optical Coherence Tomography. In: Manns F, Söderberg PG, Ho A, eds. Ophthalmic Technologies XVI. Proceedings of the SPIE. 2006; 6138 (submitted).
CHAPTER EIGHTEEN
Design and Testing of a Liquid Crystal Adaptive Optics Phoropter ABDUL AWWAL and SCOT OLIVIER Lawrence Livermore National Laboratory, Livermore, California
18.1 INTRODUCTION
Conventional phoropters are used by ophthalmologists and optometrists to estimate and correct for the lower order aberrations of the eye, defocus and astigmatism, in order to derive a prescription for their patients. An adaptive optics phoropter measures and corrects the aberrations in the human eye using adaptive optics techniques, which are capable of dealing with both the standard lower order aberrations and higher order aberrations, including coma and spherical aberration. This chapter describes the design and testing of an adaptive optics (AO) phoropter based on a Shack–Hartmann wavefront sensor to measure the aberrations of the eye, and a liquid crystal spatial light modulator to compensate for them. The goal is to produce near diffraction-limited image quality at the retina, which will enable the investigation of the psychophysical limits of human vision. We will later show some preliminary results from testing human subjects. Corrective lenses can generally improve Snellen visual acuity to better than 20/20 in normal eyes by correcting the lower order aberrations, defocus, and
astigmatism, also known as sphere and cylinder [1]. Higher order aberrations remain untreated, however, and continue to affect visual performance. One of the goals of designing an AO phoropter is that it can correct higher order aberrations to improve acuity beyond what can be achieved with conventional spectacles or contact lenses. This improvement has been coined “supernormal vision” [2]. The same design can be extended to produce in vivo images of the human retina that are sharper, with higher resolution than conventional fundus photography [2, 3]. Conventional deformable mirror (DM) devices, such as continuous faceplate mirrors, are used in vision science and astronomical applications. In addition to being expensive, these DMs typically have much larger apertures than the eye. This leads to a large optical system in order to magnify the eye’s dilated pupil to the larger size of the deformable mirror. This combination of cost and size limits the suitability of an AO system using a conventional DM for clinical trials and eventual commercialization. Recently, new DM technologies have been developed based on both liquid crystal (LC) devices and microelectromechanical system (MEMS) mirrors, which are both compact and less expensive than the conventional DM devices. The AO group at the Lawrence Livermore National Laboratory (LLNL) has previously demonstrated very high order wavefront correction using LC and other DM technology [4, 5]. This chapter demonstrates the use of new LC technologies in the area of vision correction. We start with a discussion of important design parameters related to the wavefront sensor, light source, and spatial light modulator (SLM). Then, we describe the testing of each subsystem followed by the testing of the combined system. Results from human subjects testing are discussed at the end, along with suggestions for future design improvement.
18.2 WAVEFRONT SENSOR SELECTION

18.2.1 Wavefront Sensor: Shack–Hartmann Sensor
The Shack–Hartmann wavefront sensor (SHWS) serves as the wave aberration measuring device [6, 7]. It consists of an 8-bit digital camera coupled with a lenslet array. The principle of operation of an SHWS can be described as follows: When a plane wave is incident upon the sensor, it produces a regular array of spots, each of which is located on the optical axis of the corresponding lenslet. The position of the initial array of spots is called the reference position. When a wavefront with aberrations is incident on the sensor, the focal spot of each subaperture (i.e., lenslet) shifts relative to the reference position by a factor proportional to the local slope of the wavefront. The position of the focal spot is determined by a centroid operation. The difference of the spot position from its reference position yields an estimate of the local wavefront slope, and hence the phase, in each subaperture location.
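The centroid-to-slope step can be sketched as follows; a minimal illustration assuming numpy and using the pixel size (16 µm) and lenslet focal length (5.8 mm) quoted later in this section. The spot image and reference position are synthetic:

```python
import numpy as np

PIX = 16e-6   # camera pixel size (m), from the text
F = 5.8e-3    # lenslet focal length (m), from the text

def centroid(spot):
    """Intensity-weighted centroid (row, col) of a subaperture image."""
    rows, cols = np.indices(spot.shape)
    total = spot.sum()
    return (rows * spot).sum() / total, (cols * spot).sum() / total

def local_slope(spot, ref_rc):
    """Wavefront slope (rad) from the spot shift relative to the
    reference position recorded with a plane wave."""
    r, c = centroid(spot)
    return (r - ref_rc[0]) * PIX / F, (c - ref_rc[1]) * PIX / F

# Toy subaperture: spot displaced one pixel in x from the reference (4, 4).
spot = np.zeros((8, 8))
spot[4, 5] = 1.0
slope_x = local_slope(spot, (4.0, 4.0))[1]
# One pixel of shift corresponds to 16 um / 5.8 mm = 2.76 mrad of tilt.
```

A real sensor applies this per subaperture (here 20 × 20) and then reconstructs the phase from the slope map.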
For an SHWS, there is a minimum and maximum phase that can be measured. The minimum phase is determined by the sensitivity of the SHWS, and the maximum phase is determined by the dynamic range of the SHWS. The sensitivity and dynamic range of the SHWS are discussed next. (For a more detailed discussion of sensitivity and dynamic range, see Chapter 3.)

Sensitivity Sensitivity is a measure of the smallest wavefront slope that can be accurately measured with a given lenslet array and charge-coupled device (CCD) camera. This relationship between the local slope of the incident wavefront and the Shack–Hartmann spot shift can be estimated from the dimensions of the SHWS. For our system, the lenslet diameters are 203 µm, with a focal length of 5.8 mm. This focal length was evaluated at a wavelength of 632.8 nm, which differs from the design wavelength (785 nm) chosen for the wavefront sensor beacon. However, the change in focal length for the longer wavelength was determined to be minimal. The SHWS has 20 × 20 subapertures. The camera pixels are 16 µm square. The sensitivity (or scale) of the SHWS in tilt angle per pixel is approximated by dividing the pixel size, dpix, by the focal length, F, of the lenslets:

Sensitivity ≈ dpix/F = (16 µm/pixel)/(5.8 mm) = 2.76 mrad/pixel    (18.1)
Thus, a centroid will shift by one pixel when the local tilt of the wavefront at a subaperture changes by 2.76 mrad, as shown in Figure 18.1. As a result of this tilt, two parallel rays reaching two neighboring subapertures will be delayed by a phase difference, ∆φ. This difference is (approximately) equivalent to the distance of the line segment formed along a ray between the perpendicular to the tilted wavefront and the detector (see Fig. 18.1). From similar triangles, the angle subtended by ∆φ is equal to the angle subtended by a pixel at the detector plane. Therefore, the phase difference between two neighboring subapertures caused by a single pixel of tilt is
FIGURE 18.1 Sensitivity of a Shack–Hartmann wavefront sensor.
∆φ = (dpix/F) d    (18.2)

where d is the diameter of the subaperture. Substituting the appropriate values yields

∆φ = 2.76 mrad × 203 µm = 0.56 µm    (18.3)
For a wavelength of 785 nm, this phase difference would correspond to (2π radians) × (0.56 µm/0.785 µm) = 4.48 radians.

Dynamic Range The dynamic range of a Shack–Hartmann wavefront sensor is an important parameter of an AO system. The first step is to have an estimate of the range of aberrations that will need to be measured and compensated for. A statistical study of the aberrations of the human eye indicates that a dynamic range of 4 diopters (D) will be required to handle more than 90% of the population with pupil sizes under 6 mm, after correcting the lower order aberrations with trial lenses [8]. The second step is to determine the dynamic range of an SHWS. Dynamic range is defined as the maximum phase difference that an SHWS can measure without having the focused lenslet spot leave its search box area (see Fig. 18.2) and fall behind a neighboring subaperture. The maximum wavefront slope will occur when the lenslet spot is at the edge of its search box. The spot size, S, produced by a square-sized subaperture at a wavelength of 785 nm is

S ≈ 2Fλ/d = 44.86 µm = 2.8 pixels    (18.4)
Now, drawing a perpendicular from the oblique ray to the center of the subaperture (see Fig. 18.2), we again form two similar triangles. Here, the phase difference of interest is the optical path difference between the oblique ray
FIGURE 18.2 Dynamic range of a Shack–Hartmann wavefront sensor.
and the ray hitting the center of the aperture, and is denoted by ∆φ. Since this phase difference subtends the same angle, θ, as the distance (d/2 − S/2) of the larger triangle, we obtain

tan θ = (d/2 − S/2)/F    (18.5)

Thus, the maximum phase difference that can be measured by each subaperture is

Dynamic range = ∆φ = (tan θ)(d/2 − S/2) = [(d/2 − S/2)/F](d/2 − S/2) = (d/2 − S/2)²/F    (18.6)
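Equations (18.4) and (18.6) can be evaluated numerically as follows (a sketch; the lenslet focal length is not quoted directly in this section, so it is inferred here as F = 16 µm / 2.76 mrad ≈ 5.8 mm from the sensitivity figures above — an assumption to check against the parameters in Figure 18.2):

```python
d_sub = 203e-6                     # m, subaperture diameter
pixel = 16e-6                      # m, camera pixel size
wavelength = 785e-9                # m, sensing wavelength
F = pixel / 2.76e-3                # m, lenslet focal length inferred from the
                                   # 2.76-mrad-per-pixel sensitivity (~5.8 mm)

# Spot size of a square subaperture, Eq. (18.4)
S = 2 * F * wavelength / d_sub
print(f"S = {S * 1e6:.1f} um = {S / pixel:.1f} pixels")   # ~44.9 um, ~2.8 pixels

# Maximum measurable phase difference per subaperture, Eq. (18.6)
dyn_range = (d_sub / 2 - S / 2) ** 2 / F
print(f"dynamic range = {dyn_range * 1e6:.2f} um per subaperture")   # ~1.08 um
```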
Given the system parameters listed in Figure 18.2, the dynamic range of our wavefront sensor is 1.08 µm per subaperture. The dynamic range could be improved by using a spot tracker, where the new spot position is tracked independently of the search box positions. In order to relate the above number to the dioptric power of a lens, consider that the peak-to-valley phase delay produced by a spherical lens can be described by the equation

Phase delay ≈ 0.5 Psph r²    (18.7)
where Psph is the dioptric value of the lens and r is the pupil radius. Here, a 5.8-mm pupil (measured at the pupil plane of the eye) was used to limit the magnitude of the higher order aberrations. Since our optical system contained a 2 : 1 scaling factor between the pupil plane of the eye and the CCD plane of the SHWS, the equivalent subaperture size at the pupil plane is 2 × 203 µm = 406 µm, and approximately 14 subapertures can fit within a 5.8-mm pupil. Thus, from the center of the SHWS, a maximum of 7 subapertures will be able to detect a total of 1.08 × 7 = 7.56 µm of phase difference. Using r = 2.9 mm in Eq. (18.7),

Phase delay, φ = 0.5 (Psph)(2.9)² µm = 7.56 µm    (18.8)
Solving for Psph yields Psph = 1.8 D.

Light Sensitivity Next, we estimate the camera response in terms of digital numbers (DN) for a given amount of incident power. A digital number is an integer increment within the available bit range of a digital device. For instance, a camera with 8-bit pixels can represent an integer range from 0 to 255 in each pixel. Based on the camera manufacturer’s spectral responsivity
curve, the camera (Dalsa CA-D1) produces 10 DN for an incident illumination of 1 nJ/cm² at 785 nm. This is equivalent to approximately 1011 photons/pixel per DN (see calculation below). Of the 5 mW of power incident on the retina from the wavefront sensor beacon, approximately 0.02% is reflected. This 1 nW of power, when incident on the SHWS for 100 ms, produces 190 DN (calculation to follow) and is sufficient for detecting the wavefront. The following detailed calculations show how we derived the digital number (DN) corresponding to 1 nW of light exposure:

1. To convert from nJ/cm² to photons/pixel, we first calculate the number of photons in 1 nJ of energy at 785 nm:

Energy in 1 photon at 785 nm = hν = hc/λ = (6.626 × 10⁻³⁴ J·s)(3 × 10⁸ m/s)/(7.85 × 10⁻⁷ m) = 2.53 × 10⁻¹⁹ J    (18.9)

Number of photons in 1 nJ = (1 × 10⁻⁹ J)/(2.53 × 10⁻¹⁹ J) = 3.95 × 10⁹ photons    (18.10)
Next, we calculate the number of pixels per cm²:

Number of pixels per cm² = (1.0 cm / 16 µm)² = 390,625 pixels

Thus, the number of photons producing 10 DN per pixel is (3.95 × 10⁹)/(390,625) = 10,109, and approximately 1011 photons will produce 1 DN per pixel.

2. The total power coming to the camera from the eye is 1 nW. This is integrated for 100 ms. There are 20 × 20 subapertures. We assume that all of the energy contained within a single lenslet (i.e., subaperture) is focused onto approximately 5 pixels of the CCD camera. Thus, the energy per pixel of the camera is

Energy/pixel = (1 nW × 0.1 s)/(20 × 20 × 5 pixels) = (0.1 nJ)/(400 × 5 pixels)

Photons/pixel = (0.1 nJ × 3.95 × 10⁹ photons/nJ)/(400 × 5 pixels) = 197,449 photons/pixel

Consequently, the digital number for 1 nW of light exposure is

DN for 1 nW = (197,449 photons/pixel)/(1010.9 photons/DN/pixel) = 190 DN
(18.11)
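The photon-budget arithmetic above can be sketched as a short script using the chapter's values. Note that carrying full precision through the intermediate steps gives roughly 195 DN rather than the rounded 190 DN quoted in the text:

```python
# Photon budget for the wavefront sensor camera (values from the text).
h = 6.626e-34          # J*s, Planck's constant
c = 3e8                # m/s, speed of light
wavelength = 785e-9    # m

# Step 1: photons per DN, from the 10 DN per 1 nJ/cm^2 responsivity
e_photon = h * c / wavelength                 # ~2.53e-19 J per photon
photons_per_nj = 1e-9 / e_photon              # ~3.95e9 photons
pixels_per_cm2 = (1e-2 / 16e-6) ** 2          # 390,625 pixels of 16 um
photons_per_dn = photons_per_nj / pixels_per_cm2 / 10   # ~1011

# Step 2: DN produced by 1 nW integrated for 100 ms over 20x20 subapertures,
# with each spot concentrated on ~5 pixels
energy = 1e-9 * 0.1                           # J, i.e., 0.1 nJ total
photons_per_pixel = energy / e_photon / (20 * 20 * 5)   # ~197,000
dn = photons_per_pixel / photons_per_dn       # ~195 DN (text quotes ~190)
print(f"{photons_per_dn:.0f} photons/DN, {dn:.0f} DN for 1 nW at 100 ms")
```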
FIGURE 18.3 Experimental curve of peak pixel intensity versus integration time. The bottom curve is obtained from the digital number (DN) reading of a relatively dim peak pixel of the camera. The top curve is taken from a relatively brighter pixel at a different location.
This number was verified in our laboratory by varying the integration time of the camera while recording the peak pixel intensity of the focused spots from the various subapertures. For a 100-ms integration time, we obtained peak values in the range of 109 DN to 220 DN, which encompasses the value obtained in the calculation above. The graph in Figure 18.3 shows data recorded from two pixels of the Shack–Hartmann spots, one with relatively low light levels and the other with relatively high light levels. When using a human eye, the transmission/reflection ratio of the beamsplitter placed in front of the eye will be an extremely important design parameter for minimizing the loss of photons reflected by the eye.

18.2.2 Shack–Hartmann Noise
The noise statistics of the camera were measured for various camera integration times. The statistics provide the mean noise level (or dark noise) as well as the variance of the noise level, which determines the minimum detectable signal power. For example, for an integration time of 110 ms, the mean noise floor is 20.7 DN, while the standard deviation of the noise at any pixel is 0.03 DN for a dark exposure. When exposed to uniform light, the root-mean-square (RMS) error of the centroid location over 100 frames, calculated using a center-of-mass algorithm with dark subtraction, is 0.04 to 0.05 pixels. Note
from Eq. (18.3) that one pixel of centroid movement is equivalent to 0.56 µm of phase error. Thus, this centroid error corresponds to a phase error of approximately 0.0256 µm, or ~λ/31 (for λ = 785 nm). When using a Gaussian fit algorithm to determine the centroid location, the RMS error is 0.0047 pixels (or 0.0026 µm ≈ λ/300).

18.3 BEACON SELECTION: SIZE AND POWER, SLD VERSUS LASER DIODE

At the beginning of the project, a 785-nm laser diode (LD) was used as the wavefront sensing beacon. However, we later changed to a superluminescent diode (SLD) when testing human subjects due to its lower observed speckle noise in the Shack–Hartmann spots. Two typical Shack–Hartmann spot patterns are shown in Figure 18.4, one for an LD and one for an SLD. Visually, it is obvious that the Shack–Hartmann spots produced by the laser diode are noisier. To quantify the effect, we calculated the standard deviation of the position of the Shack–Hartmann spots over 100 consecutive frames for each subaperture location. The results for the LD and SLD are compared in Figure 18.5. The calculations were performed only on the illuminated pixels, and the subapertures at the edges were not included. The mean RMS noise of the illuminated subapertures is 163 nm for the laser diode and 95 nm for the SLD. A consequence of changing the light source is that we also had to change the SLM to one with a higher reflectivity at 820 nm. However, most of the characterizations were performed with the 785-nm source.
FIGURE 18.4 Shack–Hartmann spots for the human eye with a laser diode (LD) and a superluminescent diode (SLD). The LD shows more speckle in each individual spot than the SLD spot array pattern.
FIGURE 18.5 Standard deviation of the Shack–Hartmann spot positions for a human eye, measured using an LD (left) and an SLD (right). The LD exhibited 163 nm of noise versus only 95 nm for the SLD.
18.4 WAVEFRONT CORRECTOR SELECTION
The spatial light modulator that we used (Hamamatsu Model 7665) contains approximately 230,000 phase control points and serves as the wavefront corrector in the AO phoropter. The parallel-aligned nematic liquid crystal spatial light modulator (PAL-SLM) is an optically addressable (intensity-to-phase) spatial light modulator, as shown in Figure 18.6. The PAL-SLM has an amorphous silicon layer, a dielectric mirror, and a liquid crystal (LC) layer sandwiched between two glass substrates with transparent electrodes. A write light beam impinges on the amorphous silicon side, and the read beam is presented on the LC side. The impedance of the amorphous silicon becomes extremely high when no write light is present. When the write light is applied, the impedance of the amorphous silicon is lowered, reducing the voltage drop across it. Consequently, the voltage across the liquid crystal layer is increased. This increase in voltage affects the molecular orientation of the liquid crystals and changes the layer’s index of refraction, causing a phase modulation of the read beam. To control the optical intensity of the write beam, a laser diode is coupled with a liquid crystal display (LCD) used in transmissive mode. This allows the projection of an arbitrary intensity pattern on the write side of the PAL-SLM. The PAL-SLM module combines the laser diode and LCD with the PAL-SLM so that the entire system acts as an electronically addressable phase/intensity spatial light modulator. The SLM contains 480 × 480 individually addressable control points on a 20-mm × 20-mm surface, where each control point can provide up to 0.8 µm of phase modulation. The rise time is 140 ms and the fall time is 230 ms for the X7665, with a readout wavelength between 550 and 850 nm. This timing was measured by the manufacturer with a pulsed laser diode writing directly
FIGURE 18.6 The liquid crystal (LC) spatial light modulator (SLM). A picture of the device is shown in the left image, while a schematic of the SLM’s constituent layers is shown on the right.
on the write surface. However, in the actual SLM module, an LCD panel is used to modulate the write light, and this panel has its own delay. Thus, the delay of the SLM in practice may be much higher than the rise and fall times listed. Another important consideration for the SLM is its reflectivity at green wavelengths, at which the image will be viewed, since the brightness of the target image is important for psychophysical experiments. Also note that the reflectivity of the SLM at 820 nm is important for the sensitivity of the SHWS. Choice of the proper SLM must take all of these variables into consideration.
18.5 WAVEFRONT RECONSTRUCTION AND CONTROL
As described in Section 18.2, the wavefront is calculated from its deviation from a plane wave. To determine the plane-wave positions of the lenslet spots, the Shack–Hartmann wavefront sensor is illuminated with a planar wavefront. In each subaperture, the light is focused on the optical axis of that subaperture. These positions are marked by calculating the centroid location of each spot and are called the reference centroid positions. When there is a phase gradient across the wavefront, the rays of light will be tilted and the focus point of each subaperture will be shifted by a proportional amount. Thus, when an aberrated wavefront is sampled by the Shack–Hartmann wavefront sensor, the locations of the new centroids are shifted according to the local phase gradient. These new centroid locations are calculated, and the difference between these locations and their reference
positions provides a measure of the local slope of the wavefront. This can be expressed as

∂W(x, y)/∂x = ∆xS/F    (18.12)
where ∂W/∂x is the slope of the wavefront in the x direction, ∆xS is the displacement of the centroids in the x direction, and F is the focal length of the lenslet. A similar equation can be derived for the wavefront slope in the y direction, and a set of discrete linear equations can be written relating the slopes (in both x and y directions) to the derivatives and eventually to the displacement of the centroids (see Chapters 3 and 5). This leads to a discrete set of equations relating the phase to the slope. These equations can be solved by either a least-squares method or a Fourier transform method to yield the reconstructed wavefront. In our system, we utilize a Fourier transform technique to reconstruct the wavefront from the centroid differences [9]. Note also that the sampling interval of the wavefront slope is equal to the pitch of the subaperture array. Thus, by the Nyquist sampling theorem, we will only be able to detect phase variations corresponding to spatial frequencies up to half the sampling rate, as determined by the size of the subapertures.

18.5.1 Closed-Loop Algorithm
The algorithm for closed-loop error correction consists of the following steps: 1. Retrieve reference centroid locations from a planar wavefront. 2. Obtain test centroids from the aberrated wavefront. 3. Threshold both images for read noise and dark noise. (The dark noise level was estimated from a dark frame, and read noise was estimated from the variance under constant illumination.) 4. Calculate centroid locations and the difference between the reference and the test centroid positions. 5. Input the difference data to the reconstructor and reconstruct the wavefront. 6. Estimate the correction to be applied, which is a function of the gain parameter of the control loop. This parameter determines how much of the error term is applied as a correction. (Lower gain typically implies that more iterations are necessary to achieve convergence.) 7. Convert the correction (in phase units) to SLM units using a lookup table or other simple formula. 8. Repeat steps 2 to 7 for a prespecified number of iterations, or until the error falls below a certain threshold value.
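The steps above can be sketched as a minimal loop in Python with NumPy. The reconstructor here is a placeholder for the Fourier-transform method of Section 18.5, and all function and parameter names are illustrative rather than taken from the actual IDL control software:

```python
import numpy as np

def centroids(frames, threshold):
    """Center-of-mass centroid of each subaperture image (n_sub, h, w)."""
    imgs = np.clip(frames - threshold, 0, None)      # step 3: threshold noise
    h, w = imgs.shape[1:]
    ys, xs = np.mgrid[0:h, 0:w]
    mass = imgs.sum(axis=(1, 2)) + 1e-12
    cx = (imgs * xs).sum(axis=(1, 2)) / mass
    cy = (imgs * ys).sum(axis=(1, 2)) / mass
    return np.stack([cx, cy], axis=1)

def reconstruct(slopes):
    """Placeholder reconstructor: returns the slope field itself.
    The real system uses a Fourier-transform reconstructor [9]."""
    return slopes

def closed_loop(ref_frames, get_frames, apply_slm, n_iter=10, gain=0.3,
                threshold=20.0, focal=5.8e-3, pixel=16e-6):
    ref = centroids(ref_frames, threshold)           # step 1: reference spots
    correction = 0.0
    for _ in range(n_iter):                          # step 8: iterate
        test = centroids(get_frames(), threshold)    # step 2: aberrated spots
        # step 4: centroid differences -> local slopes, Eq. (18.12)
        slopes = (test - ref) * pixel / focal
        wavefront = reconstruct(slopes)              # step 5: reconstruct
        correction = correction - gain * wavefront   # step 6: apply loop gain
        apply_slm(correction)                        # step 7: phase -> SLM units
    return correction
```

The gain behaves as described in step 6: a smaller gain applies a smaller fraction of the measured error on each pass, so more iterations are needed to converge.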
18.5.2 Centroid Calculation
Accurate calculation of the centroid positions is one of the most important operations in wavefront reconstruction algorithms. We compared the variances in calculating centroid locations with different algorithms, such as center of mass, Gaussian fitting, and the diminishing area-of-interest technique [10]. One hundred frames of Shack–Hartmann spots were collected and the variation of the centroid position of each subaperture was measured. Ideally, with no change of wavefront slope, the spots should remain stable, but because of noise the estimated position (in pixels) changes. These changes can be converted to phase differences using our 0.56 µm per pixel scaling factor derived in Eq. (18.3). The standard deviations of the centroid positions in the center-of-mass technique were on the order of 10⁻² pixels (0.0056 µm), whereas those obtained from Gaussian fitting were on the order of 10⁻³ pixels, or 0.00056 µm. The standard deviation of the diminishing area-of-interest (or pyramidal) centroid technique approaches that of a Gaussian fitting technique as the area of the smallest box approaches that of the spot size. For tightly focused Shack–Hartmann spots with high signal-to-noise ratios, the Gaussian method provided excellent repeatability in the presence of noise. The noise condition was varied by changing the signal level while keeping the noise level relatively constant, resulting in a variation in the signal-to-noise ratio. Although the Gaussian fitting technique yielded better results, it was ultimately not used because the method consumes a considerable amount of time (~5 s or more) and sometimes fails to converge. An RMS measurement reveals the variation in determining centroid locations over time; however, it may not reveal a systematic error. For real Shack–Hartmann spots from the eye, finding an appropriate threshold for each Shack–Hartmann spot is another important step.
It should be noted that a typical subaperture after dark subtraction shows a significant amount of background noise around the central peak, as shown in Figure 18.7. One of the problems with finding an appropriate global threshold for excluding this background is the variance of intensities in the Shack–Hartmann spots among different subapertures when a human eye is used. Pyramidal thresholding [10] seems to do a better job of reducing the noise than simple thresholding over the whole plane. In the pyramidal technique, an initial estimate of the spot position (based on a center-of-mass calculation) is made using a bounding box (search box) that is equal in size to an array of 11 × 11 pixels per subaperture. Thereafter, the bounding box is reduced to a size of 9 × 9 pixels per subaperture, centered on the previous estimate, and a new estimate is made. The process is repeated and the bounding box reduced until the box size is equal to the estimated spot size. A practical problem arises in applying the pyramidal technique when the estimated spot size is very small, such as 2.4 pixels. However, when practical, this technique allows for more effective noise filtering by thresholding and creating a smaller bounding box around the spot.
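The diminishing area-of-interest technique can be sketched as follows for a single spot (a simplified illustration; the helper names are hypothetical, and the box sizes follow the 11 × 11, 9 × 9, ... progression described above):

```python
import numpy as np

def com(img):
    """Center of mass of a 2-D intensity patch."""
    ys, xs = np.mgrid[0:img.shape[0], 0:img.shape[1]]
    m = img.sum() + 1e-12
    return (ys * img).sum() / m, (xs * img).sum() / m

def pyramidal_centroid(img, start=11, stop=3, step=2):
    """Estimate a spot centroid with shrinking bounding boxes.
    Each pass recenters a smaller box on the previous estimate, which
    progressively excludes background noise far from the peak."""
    cy, cx = com(img)
    for size in range(start, stop - 1, -step):
        half = size // 2
        y0 = min(max(int(round(cy)) - half, 0), img.shape[0] - size)
        x0 = min(max(int(round(cx)) - half, 0), img.shape[1] - size)
        box = img[y0:y0 + size, x0:x0 + size]
        by, bx = com(box)
        cy, cx = y0 + by, x0 + bx
    return cy, cx
```

Because each smaller box is recentered on the previous estimate, background pixels far from the peak are progressively excluded, which is why this approach tolerates the subaperture-to-subaperture intensity variations seen with real eyes.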
FIGURE 18.7 Pixel intensity values of a Shack–Hartmann spot within a single, typical subaperture after dark subtraction. Note the presence of noise outside of the main lobe (with a radius of 1.7 pixels and a central value of 226 DN). The noise floor that has been subtracted is 20 DN.
To provide a visual estimate of the effect of various centroiding algorithms, wavefronts reconstructed when a Gaussian phase function was applied to the SLM are shown in Figures 18.8 and 18.9. The Gaussian pulse shape is recreated in all the pictures; however, the floor of the reconstructed wavefront shows significant fluctuations from an estimated flat wavefront. The RMS wavefront error is calculated over the flat region. As the aperture size is reduced in four consecutive steps (as shown in Fig. 18.8), the pyramidal reconstruction starts to look very similar to the Gaussian curve fit results shown in Figure 18.9. A curve is plotted (Fig. 18.10) to show the reduction in noise as the width of the aperture decreases. Finally, other systematic error sources that may creep in are round-off errors in calculating centroid differences. These occur when subtracting two numbers that may be very close, such as subtracting two centroid positions to find the local slope.
18.6 SOFTWARE INTERFACE
There are two types of software needed to run the AO system: the control software and the diagnostic software. All software for this project was written in IDL (Research Systems Inc., Boulder, CO), and the interface was run from the IDL command line. The diagnostic software can, for example, display the live position of the Shack–Hartmann spots within the subaperture boundaries; measure the phase response by writing phase functions and recording the spot intensities; measure the statistics of spot locations over any number of consecutive frames; or simply record consecutive Shack–Hartmann spots. The diagnostic software is useful in aligning the optics with the Shack–Hartmann spots and also in displaying the effect of the pupil function directly. The control software has the capability
FIGURE 18.8 Various stages of the pyramidal centroiding process. As the subaperture size (in pixels) decreases, the floor of the Gaussian phase function appears smoother.
FIGURE 18.9 Comparison of the center-of-mass centroiding technique using 11 × 11 subapertures, versus centroid calculations using a Gaussian curve fit.
of displaying the reconstructed wavefront or point spread function, showing the RMS error or other statistics when the loop is closed, and so forth. Simulation software was also created that could rerun the experiments offline from the stored corrections sent to the SLM and the stored Shack–Hartmann spots.
FIGURE 18.10 Effect of subaperture size on RMS wavefront error. Here, the error is calculated based on the RMS wavefront error on the flat region of the Gaussian phase function.
18.7 AO ASSEMBLY, INTEGRATION AND TROUBLESHOOTING
A schematic of the adaptive optics phoropter system is shown in Figure 18.11. The instrument uses a 5-mW superluminescent diode at 820 nm that is focused onto the retina of a human eye. (Initially, the system was equipped with a 785-nm laser diode used for characterizing the system. It was switched to an 820-nm SLD to reduce the speckle in the Shack–Hartmann spots from the human eye.) The laser beacon is reflected off the retina and out through the optics of the eye, thereby sampling its aberrations. The light reflected by the retina (~0.02% of the input intensity) is transmitted to a Shack–Hartmann wavefront sensor after being reflected by a wavefront corrector. This arrangement allows for a closed-loop correction of the optical aberrations. Here, the Shack–Hartmann wavefront sensor initially measures the wavefront when the wavefront corrector is flat, or in its unaltered position. The deviation of the reconstructed wavefront from an ideal flat wavefront is estimated, and this produces the error term. This error term is used to calculate the correction required to compensate for the deviation. The wavefront is flattened after a few iterations by the wavefront corrector, or SLM. A control loop is used to update the correction applied to the wavefront corrector in a stable fashion. During the closed-loop operation of the AO system, a correction is applied until the error converges to a minimum value. When the correction is applied successfully in the closed-loop system, it should result in improved optical image quality and enhanced vision. After the system has converged to a stable, low aberration value, the subject views
FIGURE 18.11 Prototype adaptive optics phoropter using a liquid crystal spatial light modulator. SLM, spatial light modulator; WS, wavefront sensor; SLD, superluminescent diode.
any of a variety of visual stimuli (e.g., sine-wave gratings) on a custom, high-intensity cathode ray tube (CRT) computer display. This system is used to perform psychophysical tests examining the effects of a higher order correction on the limits of visual performance. Figure 18.11 illustrates two light paths. The darker gray beam shows light emerging from the eye and entering the wavefront sensor after being reflected by the wavefront corrector. The lighter gray beam represents rays from the visual stimulus traveling to the subject’s eye.

18.8 SYSTEM PERFORMANCE, TESTING PROCEDURES, AND CALIBRATION

As a first step, each separate subsystem is tested independently [11]. Then, they are combined in order of complexity, and the overall functionality of the system is tested. The testing of the SLM subsystem is described in Sections 18.8.1, 18.8.2, and 18.8.3, and the testing of the wavefront sensor is discussed in Section 18.8.4. Testing the combined system (i.e., registration and closed-loop operation) is covered in Sections 18.8.5 and 18.8.6.

18.8.1 Nonlinear Characterization of the Spatial Light Modulator (SLM) Response

A set of experiments was carried out to determine the phase modulation characteristics of the liquid crystal SLM. This characterization determines the gray level needed to achieve a given value of phase modulation. The SLM was characterized by applying a periodic rectangular wave of varying amplitude to the SLM and measuring the far-field pattern. The relative magnitudes of the zeroth- and first-order components provide an indication of the phase jump magnitude. For example, for a phase jump of π radians, the zeroth-order intensity becomes zero, while the first-order intensity becomes maximal. The same phenomenon is observed across each of the subapertures of the Shack–Hartmann wavefront sensor, which, instead of consisting of a single large lens, uses a microlens array. Thus, a second space-variant test was developed to determine the phase response of individual actuators, or SLM pixels. If a step function of varying amplitude is applied over a single subaperture, each subaperture produces a far-field pattern corresponding to the Fourier transform of the phase jump. The history of all the wavefront sensor responses as a function of the amplitude of the step function was recorded, and the phase response of individual actuators was calculated. A lookup table for the SLM was devised, combining the individual subaperture responses with the overall SLM response. The desired phase angle (i.e., desired compensation expressed in radians, where 2π represents one wave of modulation) is the input to the lookup table, and the required gray level of the SLM to cause that phase change is the output.
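The order intensities used in this calibration follow from scalar diffraction by a 50%-duty binary phase grating: the zeroth order varies as cos²(∆φ/2) and the first order as (2/π)² sin²(∆φ/2). A small sketch of this idealized model (not the measured SLM response):

```python
import math

def grating_orders(phase_step):
    """Idealized zeroth- and first-order intensities of a 50%-duty binary
    phase grating with phase jump `phase_step` (radians), normalized so a
    flat phase gives a zeroth-order intensity of 1."""
    i0 = math.cos(phase_step / 2) ** 2
    i1 = (2 / math.pi) ** 2 * math.sin(phase_step / 2) ** 2
    return i0, i1

# At a phase jump of pi, the zeroth order vanishes and the first order peaks,
# which is the signature used to calibrate the SLM gray levels.
i0, i1 = grating_orders(math.pi)
print(i0, i1)   # ~0 and (2/pi)^2 ~ 0.405
```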
The phase-response behaviors obtained from the individual subapertures reveal that the SLM has space-variant phase modulation characteristics. A spatially varying lookup table was therefore devised to compensate for the nonuniformity of the phase response across the SLM surface. The average phase response from the lookup table is shown in the curve of Figure 18.12. The x axis represents the phase modulation, while the y axis represents the driving gray level for the SLM. For convenience, the phase modulation (as shown on the x axis) has been remapped to integer values in the 0 to 255 range; this was accomplished by multiplying the desired phase value by 40 and then rounding to the nearest integer. The SLM driving gray levels (y axis) are the values sent to the SLM.

18.8.2 Phase Wrapping
Note that the SLM can achieve only a fixed range of phase modulations, corresponding to approximately one wave of compensation. Due to the limited
FIGURE 18.12 Graph illustrating the lookup table for the phase response of the SLM.
phase modulation of the SLM, the excess phase must be remapped to a phase between 0 and 2π. This is done by a simple modular operation, as demonstrated visually in Figure 18.13(b). This process is known as phase wrapping. In order to verify the phase wrapping technique, Gaussian inputs with peak phase differences varying from 1 to 3 waves were applied to the SLM, and the corresponding reconstructed wavefronts were observed. The peak values of the input and output wavefronts were recorded, and the plot of the output peak values versus the input peak values is shown in Figure 18.13. The phase wrapping technique was also verified in an earlier setup built at LLNL, known as an AO test bed [12]. In this system, a periodic pattern was written on the whole SLM, and the SLM was exposed to two different frequencies of light. When one of the frequencies of light matched the frequency used for the phase wrapping operation, the far-field image showed a single spot corresponding to a uniform phase delay, while the other frequency of light generated a diffraction pattern due to the periodic phase grating seen by that frequency. Thus, when operating the SLM at two different frequencies, the phase modulation characteristics should ideally be derived separately for both frequencies. Alternatively, the appropriate phase correction could be approximated by multiplying the desired correction by the ratio of the two frequencies. The phase wrapping point, however, will change in this latter case. The simplest way to handle this complexity is to express the desired phase modulation in radians (rather than microns) for the second frequency. One can then perform the wrapping in the radian domain and convert into radians for the first frequency by scaling the wrapped phase using the frequency factor. This procedure allows us to use the same lookup table derived for the first frequency of light to find the necessary gray levels to send to the SLM to achieve the desired wrapped phase for the second frequency.

FIGURE 18.13 (a) Linearity of the wrapped phase. (b) The wrapped input to the SLM results in a continuous response of the phase.

18.8.3 Biased Operation of SLM
A typical correction pattern reveals that both positive and negative corrections must be applied to compensate for an aberration. This is achieved by operating the SLM at a π-phase bias, allowing both positive and negative corrections to be applied. Here π refers to the phase of the SLM at a gray-level value corresponding to a phase delay of λ/2, which for an 8-bit SLM with a linear response would be around a gray-level value of 128. Thus, negative phase shifts from −π to 0 are achieved by sending the SLM gray-level values from 0 to 128. Similarly, positive phase shifts from 0 to +π are achieved by sending the SLM gray-level values from 128 to 255. Phase shifts outside this range can be achieved by subtracting or adding integer multiples of 2π and sending the corresponding phase-wrapped value to the SLM. As a result of the bias point, the first phase wrap occurs after a λ/2 excursion and the second phase wrap occurs after 3λ/2.

18.8.4 Wavefront Sensor Verification
To verify the correct operation of the wavefront sensor and the reconstruction algorithm, we measured the wavefront produced by a lens with a known focal
length. For this experiment, a model eye consisting of a lens and a rotating disk at the position of the retinal plane was used. The distance between the lens and the rotating disk was fixed at the focal length of the lens, and both items were securely mounted on a rail that could be easily inserted into the AO system. The rotation of the disk reduced the speckle in the Shack–Hartmann spots. A 0.25-D lens with an 8-mm diameter was selected on the phoropter, which was physically located in front of the model eye. Using the data from the Shack–Hartmann wavefront sensor, the wavefront was reconstructed using the Fourier reconstructor, and the peak-to-valley wavefront error of the lens was measured to be exactly 2 µm for the 8-mm diameter lens (see Fig. 18.14). For comparison, the peak-to-valley phase difference was calculated using Eq. (18.7) and was also found to be 2 µm for an 8-mm pupil diameter.

18.8.5 Registration
The test for registration evaluated how well multiple subsystems (such as the SLM, the wavefront sensor and the laser beacon) worked together. Using an artificial eye, a systematic procedure was developed to register the SLM with the wavefront sensor. If the SLM is not properly registered with the wavefront sensor, then the correction calculated from the wavefront measurement will be misaligned with respect to the position of the aberrations on the incoming wavefront. Figure 18.15 illustrates an example of this problem. An initial aberration is applied to the SLM, and this elicits a compensating correction.
FIGURE 18.14 Reconstructed wavefront for a 0.25-D lens showing 2 µm of peak-to-valley phase difference.
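As a cross-check on the value just quoted, the defocus peak-to-valley for a thin trial lens follows from the standard paraxial sag approximation; this is a back-of-the-envelope sketch, standing in for the chapter's Eq. (18.7), which is not reproduced here.

```python
# Peak-to-valley wavefront error of a trial lens from the paraxial sag
# approximation W(r) = Phi * r^2 / 2 (Phi in diopters, r in meters).
def defocus_pv(power_diopters: float, pupil_diameter_m: float) -> float:
    """Peak-to-valley wavefront error (meters) of a thin trial lens."""
    r = pupil_diameter_m / 2.0
    return power_diopters * r ** 2 / 2.0

pv = defocus_pv(0.25, 0.008)   # 0.25-D lens over an 8-mm pupil
print(pv * 1e6)                # ~2.0 micrometers, matching the measured value
```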
SYSTEM PERFORMANCE, TESTING PROCEDURES, AND CALIBRATION
FIGURE 18.15 Effect of misregistration between the wavefront sensor and corrector, shown as phase (in radians) across the SLM plane. The actual location of the aberration peak does not coincide with the location of the aberration correction. The wavefront after one iteration of the closed-loop correction has become more aberrated because the wavefront sensor and corrector are not properly aligned. Instead, the correction has been applied to the wrong location. This position error contributes to the next iteration through the closed-loop system, resulting in a new incorrect position for the wavefront correction.
Due to misregistration, the wavefront sensor sees the aberration in one location, but the AO system applies the correction at a different location. Figure 18.16 shows the results after 3, 5, and 11 iterations. A misregistration of the SLM in the x direction causes vertical lines to appear on the SLM, which increase in magnitude with each iteration. The bright spot in Figure 18.16 is the initial location of the aberration, also shown as the positive peak in Figure 18.15. In the leftmost panel of Figure 18.16, the negative correction is shown as a dark spot next to the initial positive peak. In the same way, the negative correction will then generate a positive correction at a neighboring location. This is shown as a bright spot located to the right of the dark spot [see Fig. 18.16 (center)]. Thus, with each iteration, the error propagates to the right, resulting in a series of dark and bright lines [see Fig. 18.16 (right)]. To address this problem, the registration method starts by writing an asymmetric pattern of known size and shape on the SLM, as shown in the left panel of Figure 18.17. The asymmetry helps to resolve any ambiguities in rotation and/or reflection. The actuation function is then compared to the wavefront reconstruction obtained from the wavefront sensor data, as shown in the right panel. Making this comparison provides information about the registration of the SLM, in terms of rotation, scale, and position, with respect to the wavefront sensor. For improved accuracy, the comparison is performed at the pixel level of the SLM plane. Having compared the actuation function and the reconstructed wavefront, the size and position of the pattern are modified until the written and detected patterns correspond.
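The runaway behavior described above can be reproduced in a toy one-dimensional closed loop; the array length, spot position, two-pixel shift, and unity gain below are illustrative choices, not the phoropter's parameters.

```python
import numpy as np

# Toy 1-D model of a misregistered closed loop: the corrector applies the
# negative of the measured residual, but shifted by a fixed pixel offset.
n, shift, gain = 20, 2, 1.0
aberration = np.zeros(n)
aberration[5] = 1.0            # initial bright "spot" on the SLM
correction = np.zeros(n)

for it in range(3):
    residual = aberration + correction             # what the sensor measures
    correction -= gain * np.roll(residual, shift)  # applied at the wrong place

# The residual now contains alternating positive and negative spikes of
# growing magnitude, marching away from the original aberration location.
print(np.round(aberration + correction, 2))
```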
FIGURE 18.16 Effect of misregistration between the wavefront sensor and corrector. Note that the SLM shows the initial aberration as a bright spot. (Left) The negative correction applied shows up as a dark spot next to the initial aberration, shown as a bright spot. (Center) The dark spot generates a correction that appears as a neighboring bright spot. (Right) After several iterations, this misregistration leads to an alternating appearance of bright and dark spots on the SLM, and an improper correction.
FIGURE 18.17 Registration verification. (Left) An asymmetric pattern written on the SLM and (right) its reconstruction detected by the Shack–Hartmann wavefront sensor.
Figure 18.18 shows a more detailed example of checking for correspondence between an image written by the SLM and an image detected and reconstructed by the Shack–Hartmann wavefront sensor. This process yields four parameters: the size (or scaling factor) of the pattern in the x and y dimensions, and its x and y positions relative to the whole SLM plane on which the pattern could be written. Neglecting rotation, these four parameters describe exactly the position of the pattern written on the SLM.
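A brute-force version of this four-parameter match might look as follows; the pattern, image sizes, noise level, and search ranges are all hypothetical stand-ins for the real SLM and sensor data.

```python
import numpy as np

# Illustrative 4-parameter registration: find the x/y scale and x/y offset
# that best map a known written pattern onto its (noisy) reconstruction.
def render(pattern, sx, sy, ox, oy, shape):
    """Place `pattern`, rescaled by integer factors (sx, sy), at (ox, oy)."""
    img = np.zeros(shape)
    tile = np.kron(pattern, np.ones((sy, sx)))   # crude integer rescale
    img[oy:oy + tile.shape[0], ox:ox + tile.shape[1]] = tile
    return img

rng = np.random.default_rng(0)
pattern = np.array([[1.0, 0.0], [1.0, 1.0]])     # asymmetric test pattern
truth = render(pattern, 3, 2, 4, 5, (20, 20))
measured = truth + 0.05 * rng.standard_normal((20, 20))

# Exhaustive search over the four parameters, minimizing squared error.
best = min(
    ((sx, sy, ox, oy)
     for sx in range(1, 5) for sy in range(1, 5)
     for ox in range(0, 10) for oy in range(0, 10)),
    key=lambda p: np.sum((render(pattern, *p, (20, 20)) - measured) ** 2),
)
print(best)   # recovers the scale and offset used to make `truth`
```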
FIGURE 18.18 Registration verification (enlargement of zoom area marked on Fig. 18.17). Finding the correspondence between (a) the pattern written on the SLM and (b) the pattern detected by the wavefront sensor. Four parameters are varied until these two patterns match.
18.8.6 Closed-Loop Operation
In order to verify the proper operation of all the subsystems, the AO system must be operated in a closed-loop fashion. In the closed-loop mode, the system operates to minimize the error in the wavefront. The system was tested with two specific aberrations: (i) an input on the SLM serving as a source of aberration and (ii) a trial lens from the phoropter at the pupil plane. In the first test, an artificial aberration (in the form of a Gaussian function of fixed width) was applied through the SLM. This was achieved by writing a gray-level Gaussian function to the SLM, as shown in Figure 18.19. Since the wavefront sensor will register the function as an aberrated wavefront, the control loop will change the surface of the SLM to reduce the wavefront error of the system. If the system is working properly, the wavefront error will be gradually reduced. The Shack–Hartmann wavefront sensor is used to measure the wave aberration for each iteration of the closed-loop operation. As shown in Figure 18.20, the aberration is gradually compensated by the correction applied through the control loop. The RMS wavefront error was reduced from 0.050 µm (or 0.40 rad of phase) to 0.025 µm (or 0.20 rad of phase), as shown in Figure 18.21. Next, an external aberration was applied to the system using a spherical trial lens. The reconstructed wavefront depicted in Figure 18.22 shows that the total aberration (peak-to-valley) produced by this 0.25-D lens was 2 µm, or ~3 waves at 785 nm. Thus, to compensate for this aberration using the SLM, which has about 0.8 µm of phase modulation, phase wrapping was performed to extend the range of the compensation across multiple waves. When the loop was closed with the AO system operating, the wavefront was gradually flattened by the AO control loop, as shown in the four panels of Figure 18.22.
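The closed-loop behavior being verified amounts to an integrator control law; a minimal simulation, with an assumed random static aberration and a loop gain of 0.5 (not the phoropter's actual parameters), might look like this.

```python
import numpy as np

# Minimal closed-loop AO iteration: an integrator control law drives the
# corrector with a fraction (the loop gain) of each measured residual.
rng = np.random.default_rng(1)
aberration = rng.standard_normal((20, 20)) * 0.05   # static aberration (um)
correction = np.zeros_like(aberration)
gain = 0.5

for it in range(10):
    residual = aberration + correction      # wavefront seen by the SHWS
    correction -= gain * residual           # integrator update
    # For a static aberration the RMS shrinks by (1 - gain) per iteration.

print(float(np.sqrt(np.mean((aberration + correction) ** 2))))
```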
FIGURE 18.19 (a) An aberration input on the SLM and (b) its reconstructed phase (in radians), plotted over the SHWS subapertures for an 8-mm pupil.
FIGURE 18.20 Reconstructed wavefront (in radians of phase) after 2 (0.34 s), 4 (1 s), 6 (1.7 s), and 10 (3.1 s) iterations, with a pupil size of 8 mm.
The corresponding SLM phase modulations after the 2nd and 24th iterations of the closed-loop correction are shown in Figure 18.23. The right panel of Figure 18.23 demonstrates the phase wrapping boundaries at 0.5, 1.5, and 2.5 waves above the 0.5 wave bias level, which was necessary to correct 3
FIGURE 18.21 Graph of the RMS wavefront error, showing convergence of the AO loop on a Gaussian input to a final RMS error of 0.025 µm.
FIGURE 18.22 (Top left) A 0.25-D lens in the adaptive optics system produces an initial aberration. (Top right) After three iterations, the aberration is partially compensated. The aberration is further reduced after the fifth (bottom left) and ninth (bottom right) iterations. The lens was used with a pupil size of 8 mm.
FIGURE 18.23 SLM phase modulation applied by the control loop to correct defocus across an 8-mm pupil. Note that phase wrapping at 0.5, 1.5, and 2.5 waves is needed to achieve the full 3 waves of wavefront compensation.
FIGURE 18.24 Convergence of the closed-loop correction of the 0.25-D lens. The RMS wavefront error was reduced from 0.70 to 0.05 µm. Each iteration was 0.34 s in duration.
waves of aberration in the trial lens. The gray background shows the constant bias of 0.5 waves. Finally, as shown in Figure 18.24, the RMS wavefront error was gradually reduced from 0.7 µm (nearly one wave RMS) to 0.05 µm (nearly 1/16 wave). A gain of 0.2 was used to improve the stability of the system.
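The bias-and-wrap mapping used throughout these corrections can be sketched numerically; the linear 8-bit response below is an idealization, since the real SLM requires its measured lookup table.

```python
import numpy as np

# Sketch of the pi-biased phase wrapping: gray level 128 is the bias point
# (zero correction), and desired phases are wrapped into [-pi, pi) and then
# mapped linearly onto gray levels 0..255. The linear 8-bit response is an
# idealization; the real SLM would use its measured lookup table instead.
def phase_to_gray(phase_rad):
    wrapped = np.mod(phase_rad + np.pi, 2 * np.pi) - np.pi   # [-pi, pi)
    return np.round((wrapped + np.pi) / (2 * np.pi) * 255).astype(int)

# -pi/2, 0, +pi/2 map near gray levels 64, 128, 191; a 3*pi/2 request
# wraps around to the same gray level as -pi/2.
print(phase_to_gray(np.array([-np.pi / 2, 0.0, np.pi / 2, 3 * np.pi / 2])))
```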
18.9 RESULTS FROM HUMAN SUBJECTS
In order to optimally measure and correct for the aberrations of the human eye, subjects must be seated, have their eyes dilated, and bite down
on a molded plastic bite bar to stabilize the motion of their head. The bite bar is also used to position the subject's pupil in three dimensions (horizontally, vertically, and axially) because it is important that the subject's pupil remain centered on the optical axis of the system during the entire experiment. A typical setup with human subjects is shown in Figure 18.25. Note that the light reflected from the eye is detected by the Shack–Hartmann wavefront sensor, shown next to the display with the letter C. The monitor displaying the letter is the source of the visual stimulus viewed by the subject after aberration correction. The first step is to correct the subject's refractive error using a conventional phoropter placed in front of the eye to eliminate the defocus and astigmatism errors. This procedure allows us to use the limited phase modulation of the SLM for correcting higher order aberrations, instead of using the bulk of the SLM phase modulation to correct for defocus and astigmatism. Then, the subject's pupil is aligned using the x-y-z positioners on the bite bar. By monitoring the pupil camera image, the subject's pupil is placed in a plane conjugate with the SLM and the wavefront sensor. The subject is asked to fixate on a target on the monitor. Head position may again have to be adjusted slightly to align the subject's pupil to the Shack–Hartmann wavefront sensor, if it is not already aligned perfectly. After the subject is stabilized and all the Shack–Hartmann spot data are obtained, the subject is asked to maintain fixation on the target while the loop is closed.
FIGURE 18.25 A complete view of the adaptive optics phoropter with a human subject.
The left panel of Figure 18.26 shows a typical Shack–Hartmann spot pattern obtained from a human eye, for a pupil size of 8 mm. This figure shows noise in the form of speckle due to retinal scatter generated by the laser diode source. We have since switched to a superluminescent diode (SLD) to reduce speckle. The SLD has a center wavelength of 820 nm and a bandwidth of 30 nm. Using this source, the wavefront across a 5.8-mm pupil was measured at different steps during the closed-loop correction (see Fig. 18.27). The reconstruction shown in this figure appears blotchy because the wavefront has been reconstructed from a smaller pupil support that spans only 13 × 13 subapertures, instead of 20 × 20 subapertures. Also, no smoothing was applied to the reconstructed wavefront.
FIGURE 18.26 (Left) A typical Shack–Hartmann wavefront sensor pattern and (right) the corresponding reconstructed wavefront for a human eye (8-mm pupil). The RMS wavefront error for this subject was 3.5 µm.
FIGURE 18.27 The wave aberration from a human subject using a zonal compensation after (left) 2, (center) 5, and (right) 10 iterations of closed-loop correction.
As the loop converged, the RMS wavefront error was reduced from 0.56 to 0.15 µm, as depicted in Figure 18.28. This figure also shows that the RMS wavefront error fluctuated while it converged. The reason for this was traced to the speed of the response of the SLM in the software-integrated environment. Although a 400-ms delay was assumed, in reality the SLM was responding with a 2-s delay. Thus, with an update time of 0.4 s, the wavefront was being measured before the effect of the correction took place. As a result, the error measured by the Shack–Hartmann wavefront sensor was sometimes higher than the actual error to be corrected. This caused additional correction to be applied in the next step, which could again be in the wrong direction; thus, the error fluctuated. When a 2-s delay was added to the loop, the system converged much more smoothly, in two to three iterations, as shown in Figure 18.29. Once the subject's vision has been corrected and the laser beam is turned off, any of a number of psychophysical tests can be performed (see also Chapter 14). For example, in one typical experiment, a subject is asked to determine in which of two temporal intervals a sinusoidal grating appears. The contrast of the grating is adjusted using an adaptive procedure in order to determine the contrast that will yield correct performance on 82% of the trials. This contrast is referred to as the subject's contrast threshold for that stimulus. By determining contrast thresholds at a number of spatial frequencies, a contrast sensitivity function (CSF) can be obtained. The human CSF is a product of the optical modulation transfer function (MTF) of the eye and the neural CSF of the visual nervous system. By measuring the CSF before
FIGURE 18.28 Convergence of RMS wavefront error for a human eye over a 5.8-mm pupil. The initial RMS wavefront error was 0.56 µm and was corrected to an RMS error of 0.15 µm.
FIGURE 18.29 Smooth convergence of RMS wavefront error for a human eye over a 5.8-mm pupil, when the proper 2-s delays were inserted into the control loop.
and after wavefront correction, the benefit to vision of improving the optical MTF can be estimated.
18.10 DISCUSSION
One of the problems of the SLM is its wavelength-dependent phase modulation. Normally, a correction is applied based on the wavelength of the beacon used for the Shack–Hartmann wavefront sensor. However, the subject typically views an object using a green wavelength of light, which generates a different phase response from the SLM than the 820-nm beam of light. To compensate for this effect in our vision experiments, we calculated the final correction to be applied to the SLM (in radians) by converting the desired value to the green wavelength and phase wrapping the result appropriately, using the new wavelength. Here, we assumed the change in wavelength caused a proportional change in the phase modulation generated by the SLM, which may not be true. This issue could be overcome by deriving the SLM phase modulation characteristics using a green laser on the SLM. Then, employing a lookup table for the green wavelength could yield a more realistic correction for the viewing conditions. Since this was not done, it was difficult to carry out any psychophysical experiments. However, to verify the closed-loop performance of the system when imaging at a second wavelength, we replaced the human eye with a CCD camera placed at the position of the eye, so that it was looking at a test pattern displayed on the monitor. The CCD was adjusted to produce a sharply focused image of the test pattern, thus mimicking the
FIGURE 18.30 Images from a CCD camera viewing the pattern on a display when imaged through a cylindrical lens, placed horizontally across the image path. (Left) Image before adaptive optics correction. (Right) Image after correction with the LC SLM.
focusing ability of the human eye. Then, the pattern was aberrated using a cylindrical trial lens, as shown in Figure 18.30, and subsequently the loop was manually closed by gradually increasing the correction and checking the imaged object. The image, which was severely aberrated, was subsequently improved by the AO correction. This demonstrates that the AO system could, in fact, improve image quality. The problem of applying an automated closed-loop correction stems from the difficulty of measuring the aberrations produced by the artificial eye, since the surface of the CCD camera does not provide a very good reflection of the beacon back to the Shack–Hartmann wavefront sensor. Additional problems were encountered due to a high noise level from the Shack–Hartmann spots at low light levels. Sometimes, low light levels would result in double spots. Changing the beamsplitter in front of the eye to allow for the transmission of more light would improve the noise situation, though double spots could also be an artifact of the low dynamic range of the Shack–Hartmann wavefront sensor. The discrepancy between the published speed of the SLM and the speed of the SLM in the AO loop also caused difficulty in smoothly converging to a stable correction. The effect was minimized by choosing a low gain and increasing the delay time between loop iterations. However, if the error is not properly minimized, then the psychophysical tests will be unpredictable. It is also possible that, due to the limited dynamic range of the wavefront sensor, aliasing can occur for the Shack–Hartmann spots, which would result in an incorrect measurement and hence an incorrect AO correction. This error can only be detected by a human observer, not by the wavefront sensor measurements or by the graphs of the RMS error. Its impact could be minimized, however, by reducing the pupil size.
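The wavelength conversion discussed above can be written out explicitly under the chapter's proportional-scaling assumption; the 550-nm "green" value below is an illustrative choice, not a figure from the text.

```python
import numpy as np

# Rescaling a correction computed at the 820-nm sensing wavelength for a
# green viewing wavelength, assuming (as the chapter does, with caveats)
# that the SLM's phase response scales proportionally with wavelength.
# The same optical path difference (OPD) produces more radians of phase
# at a shorter wavelength, so the map is rescaled and then re-wrapped.
def rescale_and_wrap(phase_ir, lam_ir=820e-9, lam_green=550e-9):
    opd = phase_ir * lam_ir / (2 * np.pi)       # radians -> meters of OPD
    phase_green = 2 * np.pi * opd / lam_green   # same OPD in green radians
    return np.mod(phase_green, 2 * np.pi)       # re-wrap for the SLM

print(rescale_and_wrap(np.array([np.pi])))      # pi scaled by 820/550
```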
18.11 SUMMARY
This chapter describes a systematic approach to the design and characterization of an AO system using a spatial light modulator. A series of steps were taken to fully characterize the performance of each subsystem. These steps are necessary to estimate the accuracy and limitations of the system, devise necessary remedies, and ensure repeatability of the measurements. They include recording noise statistics, verifying the accuracy of the centroid calculation, performing accurate registration between the wavefront sensor and corrector, understanding the effect of misregistration between these two components, characterizing the nonlinear behavior of the SLM and determining its phase wrapping properties, linearizing the response of the SLM using a lookup table, quantifying the performance of the wavefront sensor, and verifying the closed-loop system operation using internally and externally generated aberrations. After characterizing the AO system, we were able to measure and correct for the aberrations in human eyes, which is needed to perform psychophysical experiments. It is expected that growing interest in LC-based vision correction [13] may lead to better LC devices suitable for the vision community.

Acknowledgments

This work was performed under the auspices of the U.S. Department of Energy by the University of California, Lawrence Livermore National Laboratory under contract No. W-7405-Eng-48. Dr. Awwal acknowledges Scott Wilks, Brian Baumann, Don Gavel, Jack Werner, Joseph Hardy, Thomas Barnes, Steve Jones, and Dennis de Silva for their help on various stages of this project.
REFERENCES

1. Slataper FJ. Age Norms of Refraction and Vision. Arch. Ophthalmol. 1950; 43: 466–481.
2. Liang J, Williams DR, Miller DT. Supernormal Vision and High-Resolution Retinal Imaging through Adaptive Optics. J. Opt. Soc. Am. A. 1997; 14: 2884–2892.
3. Roorda A, Williams DR. The Arrangement of the Three Cone Classes in the Living Human Eye. Nature 1999; 397: 520–522.
4. Max CE, Avicola K, Brase JM, et al. Design, Layout, and Early Results of a Feasibility Experiment for Sodium-Layer Laser-Guide-Star Adaptive Optics. J. Opt. Soc. Am. A. 1994; 11: 813–824.
5. Kartz MW, Olivier SS, Avicola K, et al. High Resolution Wavefront Control of High Power Laser Systems. Proc. of 2nd International Workshop on Adaptive Optics for Industry and Medicine. University of Durham, England: 1999, pp. 16–21.
6. Hardy JW. Adaptive Optics for Astronomical Telescopes. Oxford: Oxford University Press, 1998.
7. Tyson RK. Principles of Adaptive Optics. 2nd ed. Boston: Academic, 1998.
8. Cagigal MP, Canales VF, Castejón-Mochón JF, et al. Statistical Description of Wavefront Aberration in the Human Eye. Opt. Lett. 2002; 27: 37–39.
9. Poyneer LA, Gavel DT, Brase JM. Fast Wavefront Reconstruction in Large Adaptive Optics Systems with Use of the Fourier Transform. J. Opt. Soc. Am. A. 2002; 19: 2100–2111.
10. Hofer H, Artal P, Singer B, et al. Dynamics of the Eye's Aberrations. J. Opt. Soc. Am. A. 2001; 18: 497–506.
11. Awwal AAS, Baumann BJ, Gavel DT, et al. Characterization and Operation of a Liquid Crystal Adaptive Optics Phoropter. In: Tyson RK, Lloyd-Hart M, eds. Astronomical Adaptive Optics Systems and Applications. Proceedings of the SPIE. 2003; 5169: 104–122.
12. Wilks SC, Thompson CA, Olivier SS, et al. High-Resolution Adaptive Optics Test Bed for Vision Science. In: Tyson RK, Bonaccini D, Roggemann MC, eds. Adaptive Optics Systems and Technology II. Proceedings of the SPIE. 2002; 4494: 349–355.
13. Prieto PM, Fernández EJ, Manzanera S, Artal P. Adaptive Optics with a Programmable Phase Modulator: Applications in the Human Eye. Opt. Express. 2004; 12: 4059–4071.
APPENDIX A

Optical Society of America's Standards for Reporting Optical Aberrations*

LARRY N. THIBOS,1 RAYMOND A. APPLEGATE,2 JAMES T. SCHWIEGERLING,3 ROBERT WEBB,4 and VSIA STANDARDS TASKFORCE MEMBERS

1 School of Optometry, Indiana University, Bloomington, Indiana
2 Department of Ophthalmology, University of Texas Health Science Center at San Antonio, San Antonio, Texas
3 Department of Ophthalmology, University of Arizona, Tucson, Arizona
4 Schepens Eye Research Institute, Boston, Massachusetts
Abstract

In response to a perceived need in the vision community, an OSA taskforce was formed at the 1999 topical meeting on vision science and its applications (VSIA-99) and charged with developing consensus recommendations on definitions, conventions, and standards for reporting of optical aberrations of human eyes. Progress reports were presented at the 1999 OSA annual meeting and at VSIA-2000 by the chairs of three taskforce
* From LN Thibos, RA Applegate, JT Schwiegerling, et al. Standards for Reporting the Optical Aberrations of Eyes. In: V Lakshminarayanan, ed. OSA Trends in Optics and Photonics, Vision Science and Its Applications, Vol. 35. Washington, D.C.: Optical Society of America, 2000, pp. 232–244. Reprinted in its entirety with permission from the Optical Society of America. Adaptive Optics for Vision Science, Edited by Porter, Queener, Lin, Thorn, and Awwal Copyright © 2006 John Wiley & Sons, Inc.
subcommittees on (1) reference axes, (2) describing functions, and (3) model eyes. The following summary of the committee's recommendations is also available in portable document format (PDF) on OSA Optics Net at http://www.osa.org/. OCIS codes: (330.0330) Vision and color; (330.5370) Physiological optics
BACKGROUND

The recent resurgence of activity in visual optics research and related clinical disciplines (e.g., refractive surgery, ophthalmic lens design, ametropia diagnosis) demands that the vision community establish common metrics, terminology, and other reporting standards for the specification of optical imperfections of eyes. Currently there exists a plethora of methods for analyzing and representing the aberration structure of the eye, but no agreement exists within the vision community on a common, universal method for reporting results. In theory, the various methods currently in use by different groups of investigators all describe the same underlying phenomena, and therefore it should be possible to reliably convert results from one representational scheme to another. However, the practical implementation of these conversion methods is computationally challenging, is subject to error, and reliable computer software is not widely available. All of these problems suggest the need for operational standards for reporting aberration data and for test procedures to evaluate the accuracy of data collection and data analysis methods. Following a call for participation [1], approximately 20 people met at VSIA-99 to discuss the proposal to form a taskforce that would recommend standards for reporting optical aberrations of eyes. The group agreed to form three working parties that would take responsibility for developing consensus recommendations on definitions, conventions, and standards for the following three topics: (1) reference axes, (2) describing functions, and (3) model eyes. It was decided that the strategy for Phase I of this project would be to concentrate on articulating definitions, conventions, and standards for those issues which are not empirical in nature. For example, several schemes for enumerating the Zernike polynomials have been proposed in the literature.
Selecting one to be the standard is a matter of choice, not empirical investigation, and therefore was included in the charge to the taskforce. On the other hand, issues such as the maximum number of Zernike orders needed to describe ocular aberrations adequately is an empirical question which was avoided for the present, although the taskforce may choose to formulate recommendations on such issues at a later time. Phase I concluded at the VSIA-2000 meeting.
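One concrete example of such an enumeration choice is the double-index to single-index mapping in the OSA convention (later adopted in the ANSI Z80.28 standard), where a mode of radial order n and angular frequency m maps to j = (n(n + 2) + m)/2. A minimal sketch:

```python
# OSA/ANSI standard single-index ordering for Zernike terms: a mode with
# radial order n and angular frequency m maps to j = (n*(n + 2) + m) / 2.
def osa_index(n: int, m: int) -> int:
    if (n - abs(m)) % 2 or abs(m) > n:
        raise ValueError("invalid Zernike (n, m) pair")
    return (n * (n + 2) + m) // 2

# Piston (0,0) is j=0; defocus (2,0) is j=4, flanked by the two
# astigmatism terms (2,-2) and (2,2).
print([osa_index(*nm) for nm in [(0, 0), (1, -1), (1, 1), (2, -2), (2, 0), (2, 2)]])
# -> [0, 1, 2, 3, 4, 5]
```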
REFERENCE AXIS SELECTION

Summary

It is the committee's recommendation that the ophthalmic community use the line-of-sight as the reference axis for the purposes of calculating and measuring the optical aberrations of the eye. The rationale is that the line-of-sight in the normal eye is the path of the chief ray from the fixation point to the retinal fovea. Therefore, aberrations measured with respect to this axis will have the pupil center as the origin of a Cartesian reference frame. Secondary lines-of-sight may be similarly constructed for object points in the peripheral visual field. Because the exit pupil is not readily accessible in the living eye whereas the entrance pupil is, the committee recommends that calculations for specifying the optical aberration of the eye be referenced to the plane of the entrance pupil.

Background

Optical aberration measurements of the eye from various laboratories or within the same laboratory are not comparable unless they are calculated with respect to the same reference axis and expressed in the same manner. This requirement is complicated by the fact that, unlike a camera, the eye is a decentered optical system with non-rotationally symmetric components (Fig. 1). The principal elements of the eye's optical system are the cornea, pupil, and the crystalline lens. Each can be decentered and tilted with respect to the other components, thus rendering an optical system that is typically dominated by coma at the foveola.
FIGURE 1 The cornea, pupil, and crystalline lens are decentered and tilted with respect to each other, rendering the eye a decentered optical system that differs between individuals and between the eyes of the same individual.
The optics discipline has a long tradition of specifying the aberration of optical systems with respect to the center of the exit pupil. In a centered optical system (e.g., a camera or telescope), using the center of the exit pupil as a reference for measurement of on-axis aberration is the same as measuring the optical aberrations with respect to the chief ray from an axial object point. However, because the exit pupil is not readily accessible in the living eye, it is more practical to reference aberrations to the entrance pupil. This is the natural choice for objective aberrometers which analyze light reflected from the eye. Like a camera, the eye is an imaging device designed to form an in-focus inverted image on a screen. In the case of the eye, the imaging screen is the retina. However, unlike film, the "grain" of the retina is not uniform over its extent. Instead, the grain is finest at the foveola and falls off quickly as the distance from the foveola increases. Consequently, when viewing fine detail, we rotate our eye such that the object of regard falls on the foveola (Fig. 2). Thus, aberrations at the foveola have the greatest impact on an individual's ability to see fine details.
FIGURE 2 An anatomical view of the macular region as viewed from the front and in cross section (below). a: foveola, b: fovea, c: parafoveal area, d: perifoveal area. From Histology of the Human Eye by Hogan, Alvarado, and Weddell, W.B. Saunders Company, 1971, page 491.
FIGURE 3 Left panel illustrates the visual axis and right panel illustrates the line of sight.
Two traditional axes of the eye are centered on the foveola, the visual axis and the line-of-sight, but only the latter passes through the pupil center. In object space, the visual axis is typically defined as the line connecting the fixation object point to the eye's first nodal point. In image space, the visual axis is the parallel line connecting the second nodal point to the center of the foveola (Fig. 3, left). In contrast, the line-of-sight is defined as the (broken) line passing through the center of the eye's entrance and exit pupils connecting the object of regard to the foveola (Fig. 3, right). The line-of-sight is equivalent to the path of the foveal chief ray and therefore is the axis which conforms to optical standards. The visual axis and the line-of-sight are not the same, and in some eyes the difference can have a large impact on retinal image quality [2]. For a review of the axes of the eye, see [3]. (To avoid confusion, we note that Bennett and Rabbetts [4] redefine the visual axis to match the traditional definition of the line-of-sight. The Bennett and Rabbetts definition is counter to the majority of the literature and is not used here.) When measuring the optical properties of the eye for objects which fall on the peripheral retina outside the central fovea, a secondary line-of-sight may be constructed as the broken line from the object point to the center of the entrance pupil and from the center of the exit pupil to the retinal location of the image. This axis represents the path of the chief ray from the object of interest and therefore is the appropriate reference for describing aberrations of the peripheral visual field.
METHODS FOR ALIGNING THE EYE DURING MEASUREMENT

Summary

The committee recommends that instruments designed to measure the optical properties of the eye and its aberrations be aligned co-axially with the eye's line-of-sight.
Background

There are numerous ways to align the line-of-sight to the optical axis of the measuring instrument. Here we present simple examples of an objective method and a subjective method to achieve proper alignment.

Objective Method

In the objective alignment method schematically diagrammed in Fig. 4, the experimenter aligns the subject's eye (which is fixating a small distant target on the optical axis of the measurement system) to the measurement system. Alignment is achieved by centering the subject's pupil (by adjusting a bite bar) on an alignment ring (e.g., an adjustable-diameter circle) which is co-axial with the optical axis of the measurement system. This strategy forces the optical axis of the measurement device to pass through the center of the entrance pupil. Since the fixation target is on the optical axis of the measurement device, once the entrance pupil is centered with respect to the alignment ring, the line-of-sight is co-axial with the optical axis of the measurement system.

Subjective Method
In the subjective alignment method schematically diagrammed in Figure 5, the subject adjusts the position of their own pupil (using a bite bar) until two alignment fixation points at different optical distances along, and coaxial with, the optical axis of the measurement device are superimposed (similar to
FIGURE 4 Schematic of a generic objective alignment system designed to place the line of sight on the optical axis of the measurement system. BS: beam splitter; FP: on-axis fixation point.
FIGURE 5 Schematic of a generic subjective alignment system designed to place the line of sight on the optical axis of the measurement system. BS: beam splitter; FP: fixation point source.
aligning the sights on a rifle to a target). Note that one or both of the alignment targets will be defocused on the retina. Thus the subject's task is to align the centers of the blur circles. Assuming the chief ray defines the center of the blur circle for each fixation point, this strategy forces the line of sight to be coaxial with the optical axis of the measurement system. In a system with significant amounts of asymmetric aberration (e.g., coma), the chief ray may not define the center of the blur circle. In practice, it can be useful to use the subjective strategy for preliminary alignment and the objective method for final alignment.

Conversion Between Reference Axes

If optical aberration measurements are made with respect to some other reference axis, the data must be converted to the standard reference axis (see the tools developed by Susana Marcos at our temporary web site: //color.eri.harvard/standardization). However, since such conversions involve measurement and/or estimation errors for two reference axes (the alignment error of the measurement and the error in estimating the new reference axis), it is preferable to have the measurement axis be the same as the line of sight.
DESCRIPTION OF ZERNIKE POLYNOMIALS

The Zernike polynomials are a set of functions that are orthogonal over the unit circle. They are useful for describing the shape of an aberrated wavefront
in the pupil of an optical system. Several different normalization and numbering schemes for these polynomials are in common use. Below we describe the different schemes and make recommendations toward developing a standard for presenting Zernike data as it relates to aberration theory of the eye.

Double Indexing Scheme

The Zernike polynomials are usually defined in polar coordinates (ρ, θ), where ρ is the radial coordinate ranging from 0 to 1 and θ is the azimuthal component ranging from 0 to 2π. Each of the Zernike polynomials consists of three components: a normalization factor, a radial-dependent component, and an azimuthal-dependent component. The radial component is a polynomial, whereas the azimuthal component is sinusoidal. A double indexing scheme is useful for unambiguously describing these functions, with the index n describing the highest power (order) of the radial polynomial and the index m describing the azimuthal frequency of the sinusoidal component. By this scheme the Zernike polynomials are defined as

\[
Z_n^m(\rho, \theta) =
\begin{cases}
N_n^m\, R_n^{|m|}(\rho)\, \cos m\theta, & m \ge 0 \\
-N_n^m\, R_n^{|m|}(\rho)\, \sin m\theta, & m < 0
\end{cases} \tag{1}
\]

where N_n^m is the normalization factor described in more detail below and R_n^{|m|}(ρ) is given by

\[
R_n^{|m|}(\rho) = \sum_{s=0}^{(n-|m|)/2} \frac{(-1)^s (n-s)!}{s!\,[0.5(n+|m|)-s]!\,[0.5(n-|m|)-s]!}\, \rho^{n-2s} \tag{2}
\]

This definition uniquely describes the Zernike polynomials except for the normalization constant. The normalization is given by

\[
N_n^m = \sqrt{\frac{2(n+1)}{1+\delta_{m0}}} \tag{3}
\]

where δ_{m0} is the Kronecker delta function (i.e., δ_{m0} = 1 for m = 0, and δ_{m0} = 0 for m ≠ 0). Note that the value of n is a positive integer or zero. For a given n, m can only take on the values −n, −n + 2, −n + 4, ..., n. When describing individual Zernike terms (Table 2), the two-index scheme should always be used. Below are some examples.

Good: "The values of Z_3^{-1}(ρ, θ) and Z_4^2(ρ, θ) are 0.041 and −0.121, respectively." "Comparing the astigmatism terms, Z_2^{-2}(ρ, θ) and Z_2^2(ρ, θ) ..."

Bad: "The values of Z_7(ρ, θ) and Z_12(ρ, θ) are 0.041 and −0.121, respectively." "Comparing the astigmatism terms, Z_5(ρ, θ) and Z_6(ρ, θ) ..."
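Equations (1) to (3) translate directly into code. The following sketch (our illustration, not part of the standard) evaluates Z_n^m(ρ, θ):

```python
import math

def zernike(n, m, rho, theta):
    """Evaluate Z_n^m(rho, theta) per Eqs. (1)-(3)."""
    if (n - abs(m)) % 2:  # m must be -n, -n+2, ..., n
        raise ValueError("invalid (n, m) pair")
    # Eq. (3): normalization with the Kronecker delta
    N = math.sqrt(2.0 * (n + 1) / (1 + (1 if m == 0 else 0)))
    # Eq. (2): radial polynomial R_n^|m|(rho)
    R = sum(
        (-1) ** s * math.factorial(n - s)
        / (math.factorial(s)
           * math.factorial((n + abs(m)) // 2 - s)
           * math.factorial((n - abs(m)) // 2 - s))
        * rho ** (n - 2 * s)
        for s in range((n - abs(m)) // 2 + 1)
    )
    # Eq. (1): azimuthal component (sign convention for m < 0)
    if m >= 0:
        return N * R * math.cos(m * theta)
    return -N * R * math.sin(m * theta)
```

As a check against Table 2, zernike(2, 0, 1.0, 0.0) evaluates to √3 and zernike(4, 0, 1.0, 0.0) to √5.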
Single Indexing Scheme

Occasionally, a single indexing scheme is useful for describing Zernike expansion coefficients. Since the polynomials actually depend on two parameters, n and m, the ordering of a single indexing scheme is arbitrary. To avoid confusion, a standard single indexing scheme should be used, and this scheme should be used only for bar plots of expansion coefficients (Fig. 6). To obtain the single index, j, it is convenient to lay out the polynomials in a pyramid with row number n and column number m, as shown in Table 1. The single index, j, starts at the top of the pyramid and steps down from left to right. To convert between j and the values of n and m, the following relationships can be used:

\[ j = \frac{n(n+2) + m}{2} \quad \text{(mode number)} \tag{4} \]

\[ n = \mathrm{roundup}\!\left[ \frac{-3 + \sqrt{9 + 8j}}{2} \right] \quad \text{(radial order)} \tag{5} \]

\[ m = 2j - n(n+2) \quad \text{(angular frequency)} \tag{6} \]
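Equations (4) to (6) are simple to implement; the sketch below (our illustration, not part of the standard) converts between the two indexing schemes:

```python
import math

def single_index(n, m):
    """Eq. (4): double indices (n, m) -> single index j (mode number)."""
    return (n * (n + 2) + m) // 2

def double_index(j):
    """Eqs. (5)-(6): single index j -> (radial order n, angular frequency m)."""
    n = math.ceil((-3.0 + math.sqrt(9.0 + 8.0 * j)) / 2.0)
    m = 2 * j - n * (n + 2)
    return n, m
```

For example, single_index(3, -1) returns 7 and double_index(12) returns (4, 0), in agreement with Table 1.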
Coordinate System
Typically, a right-handed coordinate system is used in scientific applications, as shown in Fig. 7. For the eye, the coordinate origin is at the center of the eye's entrance pupil, the +x axis is horizontal pointing to the right, the +y axis is vertical pointing up, and the +z Cartesian axis points out of the eye and coincides with the foveal line of sight in object space, as defined by a chief ray emitted by a fixation spot. Also shown are the conventional definitions of the polar coordinates r = √(x² + y²) and θ = tan⁻¹(y/x). This definition gives x = r cos θ and y = r sin θ. We note that Malacara [5] uses a polar coordinate system
FIGURE 6 Example of a bar plot using the single-index scheme for Zernike coefficients (expansion coefficient value versus single index, j).
TABLE 1  Zernike Pyramid^a

n\m    −5    −4    −3    −2    −1     0    +1    +2    +3    +4    +5
 0                                    0
 1                              1           2
 2                        3           4           5
 3                  6           7           8           9
 4           10          11          12          13          14
 5     15          16          17          18          19          20

^a Row number is polynomial order n, column number is sinusoidal frequency m, table entry is the single index j.
FIGURE 7 Conventional right-handed coordinate system for the eye in Cartesian and polar forms (clinician's view of the patient; right eye = OD, left eye = OS).
in which x = r sin θ and y = r cos θ. In other words, θ is measured clockwise from the +y axis (Figure 1b), instead of counterclockwise from the +x axis (Figure 1a). Malacara's definition stems from early (pre-computer) aberration theory and is not recommended. In ophthalmic optics, the angle θ is called the "meridian," and the same coordinate system applies to both eyes. Because of the inaccessibility of the eye's image space, the aberration function of the eye is usually defined and measured in object space. For example, objective measures of ocular aberrations use light reflected out of the eye from a point source on the retina. Light reflected out of an aberration-free eye will form a plane wave propagating in the positive z direction, and therefore the (x, y) plane serves as a natural reference surface. In this case the wavefront aberration function W(x, y) equals the z coordinate of the reflected wavefront and may be interpreted as the shape of the reflected wavefront. By these conventions, W > 0 means the wavefront is phase-advanced relative to the chief ray. An example would be the wavefront reflected from a myopic eye, converging to the eye's far point. A closely related quantity is the optical path-length difference (OPD) between a ray passing through the pupil at (x, y) and the chief ray passing through the origin. In the case of a myopic eye, the path length is shorter for marginal rays than for the chief ray, so OPD < 0. Thus, by the recommended sign conventions, OPD(x, y) = −W(x, y). Bilateral symmetry in the aberration structure of eyes would make W(x, y) for the left eye the same as W(−x, y) for the right eye. If W is expressed as a Zernike series, then bilateral symmetry would cause the Zernike coefficients for the two eyes to be of opposite sign for all those modes with odd symmetry about the y axis (e.g., mode Z_2^{-2}).
Thus, to facilitate direct comparison of the two eyes, a vector R of Zernike coefficients for the right eye can be converted to a symmetric vector L for the left eye by the linear transformation L = M * R, where M is a diagonal matrix with elements +1 (no sign change) or −1 (with sign change). For example, the matrix M for Zernike vectors representing the first 4 orders (15 modes) would have the diagonal elements [+1, +1, −1, −1, +1, +1, +1, +1, −1, −1, −1, −1, +1, +1, +1].
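The sign pattern follows from substituting x → −x (i.e., θ → π − θ) in Eq. (1): cos mθ terms flip sign when m is odd, and sin |m|θ terms flip when |m| is even. A short sketch (ours, not part of the standard) that regenerates the diagonal quoted above:

```python
def mirror_sign(n, m):
    """Sign applied to coefficient (n, m) when reflecting W(x, y) -> W(-x, y)."""
    # cos(m*theta) terms (m >= 0) flip when m is odd;
    # sin(|m|*theta) terms (m < 0) flip when |m| is even.
    if m >= 0:
        return -1 if m % 2 else 1
    return 1 if (-m) % 2 else -1

# Diagonal of M for the first 4 orders (15 modes), in standard j order
diag = [mirror_sign(n, m) for n in range(5) for m in range(-n, n + 1, 2)]
```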
TABLE 2  Listing of Zernike Polynomials up to 7th Order (36 terms)

 j (Index)   n (Order)   m (Frequency)   Z_n^m(ρ, θ)
  0            0            0            1
  1            1           −1            2ρ sin θ
  2            1            1            2ρ cos θ
  3            2           −2            √6 ρ² sin 2θ
  4            2            0            √3 (2ρ² − 1)
  5            2            2            √6 ρ² cos 2θ
  6            3           −3            √8 ρ³ sin 3θ
  7            3           −1            √8 (3ρ³ − 2ρ) sin θ
  8            3            1            √8 (3ρ³ − 2ρ) cos θ
  9            3            3            √8 ρ³ cos 3θ
 10            4           −4            √10 ρ⁴ sin 4θ
 11            4           −2            √10 (4ρ⁴ − 3ρ²) sin 2θ
 12            4            0            √5 (6ρ⁴ − 6ρ² + 1)
 13            4            2            √10 (4ρ⁴ − 3ρ²) cos 2θ
 14            4            4            √10 ρ⁴ cos 4θ
 15            5           −5            √12 ρ⁵ sin 5θ
 16            5           −3            √12 (5ρ⁵ − 4ρ³) sin 3θ
 17            5           −1            √12 (10ρ⁵ − 12ρ³ + 3ρ) sin θ
 18            5            1            √12 (10ρ⁵ − 12ρ³ + 3ρ) cos θ
 19            5            3            √12 (5ρ⁵ − 4ρ³) cos 3θ
 20            5            5            √12 ρ⁵ cos 5θ
 21            6           −6            √14 ρ⁶ sin 6θ
 22            6           −4            √14 (6ρ⁶ − 5ρ⁴) sin 4θ
 23            6           −2            √14 (15ρ⁶ − 20ρ⁴ + 6ρ²) sin 2θ
 24            6            0            √7 (20ρ⁶ − 30ρ⁴ + 12ρ² − 1)
 25            6            2            √14 (15ρ⁶ − 20ρ⁴ + 6ρ²) cos 2θ
 26            6            4            √14 (6ρ⁶ − 5ρ⁴) cos 4θ
 27            6            6            √14 ρ⁶ cos 6θ
 28            7           −7            4ρ⁷ sin 7θ
 29            7           −5            4 (7ρ⁷ − 6ρ⁵) sin 5θ
 30            7           −3            4 (21ρ⁷ − 30ρ⁵ + 10ρ³) sin 3θ
 31            7           −1            4 (35ρ⁷ − 60ρ⁵ + 30ρ³ − 4ρ) sin θ
 32            7            1            4 (35ρ⁷ − 60ρ⁵ + 30ρ³ − 4ρ) cos θ
 33            7            3            4 (21ρ⁷ − 30ρ⁵ + 10ρ³) cos 3θ
 34            7            5            4 (7ρ⁷ − 6ρ⁵) cos 5θ
 35            7            7            4ρ⁷ cos 7θ
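The normalization of Eq. (3) makes the tabulated functions orthonormal over the unit pupil in the sense (1/π)∫∫ Z_a Z_b ρ dρ dθ = δ_ab. As a consistency check on Table 2 (our sketch, not part of the standard), a brute-force midpoint-rule integration of three transcribed entries illustrates this:

```python
import math

# Three entries transcribed from Table 2 (j = 3, 12, 24)
modes = {
    3:  lambda r, t: math.sqrt(6.0) * r**2 * math.sin(2.0 * t),
    12: lambda r, t: math.sqrt(5.0) * (6.0 * r**4 - 6.0 * r**2 + 1.0),
    24: lambda r, t: math.sqrt(7.0) * (20.0 * r**6 - 30.0 * r**4 + 12.0 * r**2 - 1.0),
}

def inner(f, g, nr=200, nt=240):
    """(1/pi) * integral of f*g over the unit disk, midpoint rule."""
    dr, dt = 1.0 / nr, 2.0 * math.pi / nt
    total = 0.0
    for i in range(nr):
        r = (i + 0.5) * dr
        for k in range(nt):
            t = (k + 0.5) * dt
            total += f(r, t) * g(r, t) * r  # area element: r dr dt
    return total * dr * dt / math.pi
```

Diagonal products integrate to approximately 1 and cross products to approximately 0, within the accuracy of the grid.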
STANDARD ABERRATOR FOR CALIBRATION

The original goal was to design a device that could be passed around or mass-produced to calibrate aberrometers at various laboratories. We first thought of this as an aberrated model eye, but that later seemed too elaborate. One problem is that the subjective aberrometers needed a sensory retina in their model eye, while the objective ones needed a reflective retina of perhaps known reflectivity. We decided instead to design an aberrator that could be used with any current or future aberrometer, with whatever was the appropriate model eye. The first effort was a pair of lenses that nearly cancelled in spherical power but, when displaced sideways, would give a known aberration. That scheme worked, but it was very sensitive to tilt and required careful control of displacement. The second design was a trefoil phase plate (OPD = Z_3^3 = κρ³ sin 3θ) loaned by Ed Dowski of CDM Optics, Inc. This third-order aberration is similar to coma, but with three lobes instead of one, hence the common name "trefoil." A simulation of the aberration function for this plate in ZEMAX® is shown in Figs. 8 and 9. Figure 8 is a graph of the Zernike coefficients, showing a small amount of defocus and third-order spherical aberration, but primarily C_3^3. Figure 9 shows the wavefront, only half a micron (one wave) peak to peak, but that value depends on κ, above. We mounted the actual plate and found that it had even more useful qualities: as the phase plate is translated across the pupil, it adds some C_2^2, horizontal astigmatism. When the plate is perfectly centered, that coefficient is zero. Further, the slope of C_2^2(Δx) measures the actual pupil.

\[ Z_3^3(x - x_0) = \kappa\,[\,3(x - x_0)y^2 - (x - x_0)^3\,] \tag{7} \]
FIGURE 8 Zernike coefficients of trefoil phase plate from ZEMAX® model (note different numbering convention from that recommended above for eyes).
524
APPENDIX A OPTICAL SOCIETY OF AMERICA’S STANDARDS
FIGURE 9 Wavefront map for the trefoil phase plate from the ZEMAX® model.
so

\[ \frac{\partial Z_3^3(x - x_0)}{\partial x} = 3\kappa (y^2 - x^2) = 3 Z_2^2 \tag{8} \]

and similarly

\[ \frac{\partial Z_3^3(x - x_0)}{\partial y} = -6\kappa x y = -3 Z_2^{-2} \tag{9} \]

This means that ΔZ_3^3 = 3 Z_2^2 Δx, and then, since W = Σ_n Σ_m C_n^m Z_n^m, we get a new term proportional to Δx. Plotting the coefficient C_2^2 against Δx, we need to normalize to the pupil size. That could be useful as a check on whether the aberrator is really at the pupil, or whether some smoothing has changed the real pupil size, as measured. Figures 10–13 confirm this behavior and the expected variation with rotation (3θ). Although the phase plate aberrator works independently of position in a collimated beam, some aberrometers may want to use a converging or diverging beam. Then it should be placed in a pupil conjugate plane. We have not yet built the mount for the phase plate, and would appreciate suggestions for that. Probably we need a simple barrel mount that fits into standard lens
FIGURE 10 Wavefront map (µm) from the aberrator, using the SRR aberrometer; axes are pupil position in mm.
FIGURE 11 The phase plate of Figure 10 has been moved horizontally 4 mm.
holders, say 30 mm outside diameter. We expect to use a standard pupil, but the phase plate(s) should have a 10-mm clear aperture before restriction. The workshop seemed to feel that a standard pupil should be chosen. Should that be 7.5 mm? We have tested the Z_3^3 aberrator, but it may be a good idea to have a few others. We borrowed this one, and it is somewhat fragile. Bill Plummer of Polaroid thinks he could generate this and other plates in plastic for "a few thousand dollars" for each design. Please send suggestions as to whether other designs are advisable ([email protected]), and as to whether we will want to stack them or use them independently. That has some implications for the mount design, but not severe ones. We suggest two Z_3^3 plates like this one, and perhaps a Z_6^0, fifth-order spherical.
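The translation property exploited above (Eqs. (7)–(9) and Figs. 10–12) is easy to verify algebraically: shifting the trefoil κ(3xy² − x³) by Δx introduces an astigmatism-like term (y² − x²) whose coefficient is linear in Δx. A small numerical sketch (κ and the sample points are arbitrary illustrative values):

```python
kappa = 0.1   # arbitrary illustrative trefoil amplitude

def W(x, y):
    """Trefoil wavefront, kappa * (3*x*y**2 - x**3), as in Eq. (7)."""
    return kappa * (3.0 * x * y**2 - x**3)

# Shifting the plate by dx gives, exactly,
#   W(x - dx, y) = W(x, y) - 3*kappa*dx*(y**2 - x**2)
#                  - 3*kappa*x*dx**2 + kappa*dx**3
# i.e., an astigmatism term (y**2 - x**2) growing linearly with dx.
def W_expanded(x, y, dx):
    return (W(x, y) - 3.0 * kappa * dx * (y**2 - x**2)
            - 3.0 * kappa * x * dx**2 + kappa * dx**3)
```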
FIGURE 12 Zernike coefficients (µm) versus arbitrary horizontal plate position (pupil-radius units). The coefficients are stable against horizontal displacement, except for C_2^2 (horizontal/vertical astigmatism); also shown are C_3^3 and C_3^{-3} (trefoil), C_2^0 (defocus), and C_2^{-2} (oblique astigmatism).
FIGURE 13 Zernike coefficients C_3^3 and C_3^{-3} (µm), and their modulus, as a function of rotation of the phase plate about the optic axis.
At this time, then, our intent is to have one or more standard aberrators that can be inserted into any aberrometer. When centered, and with a standard pupil, all aberrometers should report the same Zernike coefficients. We do not intend to include positioners in the mount, assuming that will be different for each aberrometer.
Another parameter of the design is the value of κ. That comes from the actual physical thickness and the index of refraction. Suggestions are welcome here, but we assume we want coefficients that are robust compared to a diopter or so of defocus. The index will be whatever it will be. We will report it, but again, any chromaticity will depend on how the plate is used. We suggest that we report the expected coefficients at a few standard wavelengths and leave interpolation to users.
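As a sketch of how κ follows from the plate: for a thin plate in transmission, the OPD is approximately (n − 1) times the surface-height profile, so a trefoil surface of amplitude a gives κ = (n − 1)a, and the same OPD corresponds to a different number of waves at each reporting wavelength. The numbers below are hypothetical illustrations, not measured values from the text:

```python
# Hypothetical illustrative numbers: a plastic plate of refractive
# index n_idx with a trefoil surface-height amplitude a_um.
n_idx = 1.49   # assumed refractive index of the plate material
a_um = 1.0     # assumed surface-height amplitude, micrometers

kappa_um = (n_idx - 1.0) * a_um   # OPD amplitude: (n - 1) * height

# Expected coefficient in waves at a few standard wavelengths (nm)
coeff_in_waves = {wl: kappa_um * 1000.0 / wl for wl in (543, 589, 633)}
```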
PLANS FOR PHASE II

Reference Axes Subcommittee

• Develop a shareware library of software tools needed to convert data from one ocular reference axis to another (e.g., convert a wavefront aberration for the corneal surface measured by topography along the instrument's optical axis into a wavefront aberration specified in the eye's exit pupil plane along the eye's fixation axis).
• Generate test datasets for evaluating software tools.

Describing Functions Subcommittee

• Develop a shareware library of software tools for generating, manipulating, and evaluating the recommended describing functions for wavefront aberrations and pupil apodizing functions.
• Develop additional software tools for converting results between describing functions (e.g., converting Taylor polynomials to Zernike polynomials, or converting single-index Zernikes to double-index Zernikes).
• Generate test datasets for evaluating software tools.

Model Eyes Subcommittee

• Build a physical model eye that can be used to calibrate experimental apparatus for measuring the aberrations of eyes.
• Circulate the physical model to all interested parties for evaluation, with results to be presented for discussion at a future VSIA meeting.

Acknowledgements

The authors wish to thank the numerous committee members who contributed to this project.
REFERENCES

1. Thibos LN, Applegate RA, Howland HC, Williams DR, Artal P, Navarro R, Campbell MC, Greivenkamp JE, Schwiegerling JT, Burns SA, Atchison DA, Smith G, Sarver EJ. "A VSIA-sponsored effort to develop methods and standards for the comparison of the wavefront aberration structure of the eye between devices and laboratories," in Vision Science and Its Applications (Optical Society of America, Washington, D.C., 1999), pp. 236–239.
2. Thibos LN, Bradley A, Still DL, Zhang X, Howarth PA. "Theory and measurement of ocular chromatic aberration," Vision Research 30: 33–49 (1990).
3. Bradley A, Thibos LN. (Presentation 5) at http://www.opt.indiana.edu/lthibos/ABLNTOSA95.
4. Bennett AG, Rabbetts RB. Clinical Visual Optics, 2nd ed. (Butterworth, 1989).
5. Malacara D. Optical Shop Testing, 2nd ed. (John Wiley & Sons, Inc., New York, 1992).
Glossary

Aberration: The optical deviations of a wavefront from a reference plane or spherical wavefront that degrade image quality. (Chapter 3)

Ablation optical zone: The diameter of a laser refractive surgical ablation on the central cornea that is designed to correct the eye's refractive error and/or higher order aberrations. This is also called the "optical zone." (Chapter 12)

Acousto-optic modulator: A device that varies the amplitude, frequency, or phase of the light (such as a laser) propagating through it. (Chapter 16)

Adaptive optics: An optical system that adapts to compensate for optical artifacts (such as aberrations) introduced by the medium between the object and the image. (Chapter 1)

AFC: See N-alternative-forced-choice. (Chapter 14)

Ametropia: The degree of defocus measured in an eye (either myopia or hyperopia). (Chapter 11)
Adaptive Optics for Vision Science, Edited by Porter, Queener, Lin, Thorn, and Awwal Copyright © 2006 John Wiley & Sons, Inc.
AO: See adaptive optics. (Chapter 1)

AO loop: The repeating cycle of wavefront measurement and correction in an adaptive optics system. (Chapter 6)

AOM: See acousto-optic modulator. (Chapter 16)

Aperture stop: The physical constraint that limits the size of the light bundle from an on-axis field point that passes through an optical system. (Chapter 7)

Aphakic: The physiological eye that has had its natural lens removed; literally "without lens." (Chapter 11)

A-scan: A single axial profile (along the z axis) of optical reflectivity as measured by optical coherence tomography (OCT). (Chapter 17)

Axial chromatic aberration: See longitudinal chromatic aberration. (Chapter 13)

Axial point spread function: Axial intensity distribution in the three-dimensional image of a point object. (Chapter 17)

Bandwidth error: Wavefront error due to the temporal lag between the occurrence of wave aberrations and the time at which the adaptive optics system corrects them. (Chapter 8)

Bimorph mirror: A modal device that consists of a piezoelectric material sandwiched between a continuous top electrode and a bottom, patterned electrode array. A mirrored layer is added to the top continuous electrode. Application of a voltage across the top and bottom electrodes changes the underlying surface area of the two dissimilar layers and results in a bending of the entire mirror. (Chapter 4)

Boresighting: The process of co-aligning the fields of view (FOV) of various optical subsystems. (Chapter 7)
B-scan: A two-dimensional profile (x–z plane) of optical reflectivity that is composed of a sequence of adjacent A-scans, as measured by optical coherence tomography (OCT). (Chapter 17)

Calibration error: Wavefront error in the absence of any aberrations external to the adaptive optics system. (Chapter 8)

Cathode ray tube: A computer-controlled display that produces light when an electron beam excites a phosphor coating on the display screen. These displays come in monochrome and color varieties. (Chapter 14)

CDM: See chromatic difference of magnification. (Chapter 13)

Center-of-mass: If f(x, y) is the density function, then the centroid or center of mass (x_c, y_c) of a thin plate is given by

\[ M x_c = \iint_R x f(x, y)\, dx\, dy, \qquad M y_c = \iint_R y f(x, y)\, dx\, dy, \]

where M is the mass, \( M = \iint_R f(x, y)\, dx\, dy \), and R is the region of interest. (Chapter 18)

Centroid: The centroid of the wavefront sensor spot image is the center of mass, generally computed over a rectangular area of interest called a search box. See also center-of-mass. (Chapter 6)

Chief ray: The ray for a given field point that passes through the center of the aperture stop. There is a distinct chief ray for each field point. (Chapter 7)

Chromatic difference of magnification: The fractional change in retinal image size due to variation in wavelength. (Chapter 13)
Closed-loop control: In closed-loop (or feedback) control systems, the wavefront corrector precedes the wavefront sensor in the system's optical path. The wavefront corrector compensates for the aberrated wavefront first. The wavefront sensor then measures the residual wave aberration (thereby receiving feedback on the accuracy of the correction), the required corrections are computed, and the wavefront corrector is updated. This process is repeated iteratively until the desired correction or wavefront profile is obtained. (Chapter 5)

Coherence length, l_c: Average path length over which the phase of a light source remains constant; a quantity that addresses the spectral purity of a source, defined as

\[ l_c \cong \frac{\lambda^2}{\Delta\lambda} \]

where λ is the center wavelength of the source and Δλ is its bandwidth. (Chapter 15)

Colorimeter: A device that measures the radiance of a light source weighted by three chromatic filters based on a set of color-matching functions derived from psychophysical experiments, thereby permitting specification of the light source in terms of chromaticity and luminance. (Chapter 14)

Contrast sensitivity function: Sensitivity of an observer to the contrast of a Gabor pattern that is varying in luminance, as a function of spatial frequency. (Chapter 14)

Conventional ablation: Refractive surgery where only sphere and cylinder (or defocus and astigmatism) are corrected. (Chapter 12)

Conventional or classic refractive surgery: See conventional ablation. (Chapter 12)
Corneal aberration: Wave aberration typically corresponding to the anterior surface of the cornea. (Chapter 2)

Coupling coefficient: The amount of dependence between the actuators of a deformable mirror. A coupling coefficient of 15% implies that pushing one actuator with unit magnitude causes a displacement of the mirror surface at the location of the adjacent actuators of 0.15. (Chapter 4)

cpd: Cycles per degree (of visual angle), typically used to measure spatial frequency in a visual stimulus. (Chapter 14)

CRT: See cathode ray tube. (Chapter 14)

CRT projectors: A computer-controlled display that uses bright CRT tubes combined with lenses to project images onto a display screen. (Chapter 14)

CSF: See contrast sensitivity function. (Chapter 14)

Customized ablation: Refractive surgery where sphere, cylinder, and higher order aberrations are corrected using wavefront measurements that are unique to the eye being treated. Also called "personalized" or "wavefront-guided" refractive surgery. (Chapter 12)

Customized refractive surgery: See customized ablation. (Chapter 12)

Cycloplegic refraction: A measurement of sphere, cylinder, and axis that uses a cycloplegic agent that inhibits accommodation. (Chapter 12)

Cylinder: The astigmatic component of a spectacle prescription. (Chapter 11)

Deformable mirror: An adaptive optical element that creates a uniform wavefront by applying an optical distortion to compensate for an incident distorted wavefront. (Chapter 1)
Degree of polarization: Fraction of light that remains polarized after passing through an optical system. It is related to the amount of scatter. (Chapter 2)

Depth of focus: For the eye, the range of object distances for which image quality is not significantly degraded; it depends on a variety of optical and neural factors. (Chapter 13)

Detection threshold: The stimulus strength necessary to elicit a criterion level of performance on a task in which an observer is asked to state if, when, or where a stimulus was presented. (Chapter 14)

DFT: See discrete Fourier transform. (Chapter 8)

Digital light projector: A computer-controlled display that uses a large array of digital micromirror devices (DMDs) to control the intensity of light reaching the screen. (Chapter 14)

Digital micromirror device: A chip-based device that consists of thousands (or millions) of tiny mirrors whose positions can be controlled by electrical signals. (Chapter 14)

Digital numbers: An integer increment within the available bit range of a digital device. (Chapter 18)

Direct slope control algorithm: Also called the direct gradient control algorithm; it treats the wavefront distortion using a zonal approach in which the wavefront sensor and wavefront corrector used in an adaptive optics system work on a slope basis. (Chapter 5)

Discrete actuator deformable mirror: A continuous mirror surface whose profile is controlled by an underlying array of actuators. Pushing one actuator produces a localized (also termed zonal) deflection of the mirror surface, termed the influence function. (Chapter 4)

Discrete Fourier transform: A mathematical transform for discretely computing a Fourier transform (which expresses a signal as an integral of sinusoidal basis functions). (Chapter 8)
Discrimination threshold: The minimum difference along a particular stimulus dimension that is necessary for the observer to correctly differentiate two or more stimuli with a given probability. Sometimes called a just-noticeable difference (jnd). (Chapter 14)

DLP: See digital light projector. (Chapter 14)

DM: See deformable mirror. (Chapter 1)

DMD: See digital micromirror device. (Chapter 14)

DN: See digital numbers. (Chapter 18)

DOP: See degree of polarization. (Chapter 2)

Dynamic range: In the context of wavefront sensing, the maximum wavefront slope that can be measured reliably. For a Shack–Hartmann wavefront sensor, it is a function of spot size, subaperture size, and the focal length of the lenslets. (Chapters 3, 18)

Emmetropia: A condition of the eye where light passing through the optical surfaces of the eye comes to focus at the retinal plane. (Chapter 11)

Equivalent quadratic: Given a wave aberration map, the quadratic surface that best represents the map. (Chapter 13)

Farsightedness: See hyperopia. (Chapter 11)

Fast Fourier transform: An efficient algorithm for computing the discrete Fourier transform. (Chapter 8)

FFT: See fast Fourier transform. (Chapter 8)

First-order optics: The optical theory related to ideal imaging, which applies to optical systems with very small fields and apertures. In Snell's law, sin θ is approximated as θ. The discrepancy between the results of first-order optics and real ray tracing represents optical aberrations. (Also called Gaussian optics or paraxial optics.) (Chapter 7)
Fitting error: Wavefront error due to the deformable mirror's inability to correct spatial frequencies larger than the inverse of the interactuator spacing. (Chapter 8)

Fovea: The central, approximately 600-µm area of the human retina that contains the densest photoreceptor packing, needed for optimal spatial resolution and color vision. (Chapter 9)

Full width at half maximum: Given a function that has a central peak in the y dimension, this dimension is the distance between the x-intercept points on either side of the peak where the function's y value is half that of the peak. (Chapter 8)

Fundus: The concave portion of an anatomical structure, which for the eye includes the retina and choroid. (Chapter 9)

FWHM: See full width at half maximum. (Chapter 8)

Gabor pattern: A one-dimensional sinusoidal luminance grating weighted by a two-dimensional Gaussian function. (Chapter 14)

Gain, K: The fraction of the measured aberrations that the wavefront corrector attempts to correct in a single iteration. A gain of 1 implies that the wavefront corrector attempts to correct all of the aberrations just measured by the wavefront sensor, while a gain of 0.3 indicates that it attempts to correct 30% of them. (Chapter 15)

Gamut: The range of displayable chromaticities for a light source. The gamut depends on the chromaticities and maximum luminance outputs of the color channels used in the display. (Chapter 14)
Horizontal synchronization pulse: The change in voltage level of a video signal that triggers the end of one line and the start of a new line. On a video display or frame grabber, each line in an image starts at the end of the hsync pulse and ends with the start of the next hsync pulse. (Chapter 16)

Hsync: See horizontal synchronization pulse. (Chapter 16)

Hyperfocal point: The far end of the eye's depth-of-focus interval. (Chapter 13)

Hyperopia: A condition of the eye where light passing through the optical surfaces of the eye comes to focus behind the retinal plane. Also called "farsightedness." (Chapter 11)

Identification threshold: The stimulus strength necessary to elicit a criterion level of performance on a task in which an observer is asked to state which of a set of possible stimuli was presented. (Chapter 14)

Influence function: The surface deformation produced by one actuator when a unit voltage is applied to this actuator on the deformable mirror. (Chapters 4, 5)

Inner segment: The portion of the photoreceptor that contains its cell body and eventually terminates on the next neural stage of bipolar cells or horizontal cells. (Chapter 9)

Internal aberration: Wave aberration typically corresponding to the posterior surface of the cornea and the crystalline lens. (Chapter 2)

Intraocular lens: A lens implanted in the eye. (Chapter 2)

Intraocular scatter: Scattered light produced by the ocular media that degrades the quality of retinal images or image formation beyond the effect of aberrations. (Chapter 2)

IOL: See intraocular lens. (Chapter 2)
Keratoconus
A disease of the eye characterized by a steepening and decentering of the central corneal surface relative to the line of sight, in association with a thinning of the central corneal tissue. The steepening and decentration of the cornea, the single most powerful refracting surface of the eye, produces significant increases in lower and higher order aberrations of the eye.
11
LASEK
See laser-assisted epithelial keratomileusis.
12
Laser-assisted epithelial keratomileusis
A variant of photorefractive keratectomy (PRK) in which a dilute alcohol solution is applied to the front cells of the cornea and the corneal epithelial cells are gently peeled back in a continuous layer to expose the cornea. The excimer laser is then applied to the cornea with the epithelial layer retracted back away from the treatment area. After the ablation, the epithelial cells are then gently placed back into their original position over the cornea and a bandage soft lens is applied. The theoretical advantages of LASEK over PRK are the preservation of the epithelium, as well as quicker, more comfortable recovery, although there has not been convincing evidence of this to date.
12
Laser in situ keratomileusis
A technique to reshape the cornea that creates a corneal flap with a microkeratome or femtosecond laser. The flap is lifted and the excimer laser is used to reshape the corneal surface. For myopia, the laser treatment removes more tissue centrally to flatten the central cornea. For hyperopia, the laser removes more tissue in the periphery to steepen the central cornea.
12
LASIK
See laser in situ keratomileusis.
12
LCA
See longitudinal chromatic aberration.
2
LCD
See liquid crystal display.
14
LCD projectors
Computer-controlled displays that use the same technology as liquid crystal displays (LCDs) but employ more powerful backlights combined with a lens to project the image onto a display screen.
14
LC-SLM
See liquid crystal spatial light modulator.
4, 18
Lenslet
A single small lens in an array of small lenses used to sample the wavefront in a Shack–Hartmann wavefront sensor.
6
Limbal junction
The region of the eye’s structure where the opaque sclera meets the transparent cornea. The limbal region is translucent and defines the change in curvature from the flatter radius sclera to the steeper radius cornea.
11
Liquid crystal display
A computer-controlled display that produces a visual stimulus by modulating a polarized light source with individually addressable liquid crystal elements.
14
Liquid crystal spatial light modulator
A liquid crystal spatial light modulator (LC-SLM) uses the electro-optic effects of liquid crystals to achieve modulation. These devices rely on the rotation of a liquid crystal molecule to induce localized refractive index changes, which in turn causes phase changes in the incoming wavefront. Both reflective and transmissive devices are available.
4, 18
logMAR
The logarithm of the minimum angle of resolution, in minutes of arc.
14
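The logMAR notation above reduces to a simple base-10 logarithm of the minimum angle of resolution. A minimal sketch in Python (the Snellen fractions used in the example are illustrative):

```python
import math

def logmar_from_snellen(distance_ft, letter_ft):
    """logMAR from Snellen notation: for a standard chart, the
    minimum angle of resolution (MAR) in arcminutes equals the
    Snellen denominator divided by the viewing distance, and
    logMAR is its base-10 logarithm."""
    mar_arcmin = letter_ft / distance_ft
    return math.log10(mar_arcmin)

print(round(logmar_from_snellen(20, 20), 3))  # 0.0   (20/20 vision)
print(round(logmar_from_snellen(20, 40), 3))  # 0.301 (20/40 vision)
```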
logMAR visual acuity
A measure of visual acuity using a chart specifically designed such that each line of the chart decreases in size according to a logarithmic progression. LogMAR charts are considered to be ideal for standardized clinical testing of visual acuity.
11
Longitudinal chromatic aberration
The variation of axial power of an optical system with wavelength.
2, 13
Luminance
A photometric measure of the effectiveness of a light source for human vision, based upon the radiance filtered by a human spectral efficiency function. Luminance is often expressed in candelas per square meter (cd/m²).
14
Manifest refraction
A measurement of sphere, cylinder, and axis that is done without using any pharmacologic dilating or cycloplegic drugs.
12
Manifest refractive spherical equivalent
The measurement of refractive error that combines both sphere and cylinder into a single value. It is derived by taking one half of the cylinder value and adding it to the sphere value.
12
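The derivation in this entry is a one-line formula: half the cylinder added to the sphere. A minimal sketch (the example refraction is hypothetical):

```python
def spherical_equivalent(sphere_d, cylinder_d):
    """Manifest refractive spherical equivalent (diopters):
    sphere plus one half of the cylinder, as defined above."""
    return sphere_d + cylinder_d / 2.0

# A hypothetical refraction of -3.00 -1.00 D has an MRSE of -3.50 D:
print(spherical_equivalent(-3.00, -1.00))  # -3.5
```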
Measurement error
Wavefront error due to the noise in the wavefront slope measurements.
8
Measurement sensitivity
In the context of wavefront sensing, the minimum wavefront slope that can be measured reliably.
3
Membrane mirror
A type of mirror that consists of an edge-clamped, flexible, reflective membrane (analogous to a drumskin) sandwiched between a transparent top electrode and an underlying array of patterned electrodes. Application of a voltage causes deformation of the entire membrane.
4
MEMS
See microelectromechanical systems.
4
Method of adjustment
A procedure for estimating thresholds in which an observer adjusts a stimulus until a criterion perceptual experience (e.g., just barely visible) is achieved.
14
Method of constant stimuli
A technique for measuring a threshold in which the stimulus values and the number of trials at each value are chosen in advance. On a given trial, the stimulus value is chosen at random from the predefined choices.
14
Microelectromechanical systems
Devices with both electrical and mechanical functionality developed from microfabrication methods. This technology leverages the batch fabrication process developed for the integrated circuit industry.
4
Modal wavefront control algorithm
This algorithm treats wavefront distortions as a sum of modal functions, such as Zernike polynomials for a circular aperture, or modes that come from the AO system itself.
5
Modulation transfer function
Transfer function characterizing the proportion of contrast present in the object that is preserved in the image formed by an optical system.
10, 13
Monostable multivibrator
A timer circuit with only one stable state.
16
MRSE
See manifest refractive spherical equivalent.
12
MTF
See modulation transfer function.
10, 13
Multiplexer
A circuit with many inputs and one output. By applying appropriate control signals, many inputs can be steered to the output.
16
Myopia
A condition of the eye where light passing through the optical surfaces of the eye comes to focus in front of the retinal plane. Also called “nearsightedness.”
11
N-AFC
See N-alternative-forced-choice.
14
N-alternative-forced-choice
A procedure for estimating thresholds in which the observer has N (2 or more) response options on a given trial and is obliged to respond even if no stimulus is detected. For example, in the temporal 2AFC procedure, the observer is asked to indicate in which of two intervals the test stimulus was presented.
14
National Television System Committee
A television standard adopted for image transmission (mainly in the United States).
16
Nearsightedness
See myopia.
11
Noncommon path aberration
Aberration arising from one or more elements in a portion of the optical system that is not common to the wavefront sensor path (such as an imaging arm) and is, consequently, not detected by the sensor.
15
NTSC
See National Television System Committee.
16
OCT
See optical coherence tomography.
17
Ocular aberration
Wave aberration of the complete eye.
2
Open-loop control
In open-loop (or feed-forward) control systems, the wavefront sensor precedes the wavefront corrector in the system’s optical path. The wavefront sensor measures the uncorrected wavefront first. The required corrections are computed and then fed to the wavefront corrector, with no feedback from the wavefront sensor on the accuracy of the aberration correction.
5
Optic nerve head
The anatomical opening in the eye where the nerve bundles exit the retina and the retinal blood vessels communicate with their vascular supply. The optic nerve head is a relatively rigid, collagenous structure that has its own circulation, but contains no neural elements to initiate sight.
9
Optical axis
(1) The axis of symmetry in a rotationally symmetric optical system; (2) the actual or desired series of line segments defined by the chief ray of the on-axis field point.
7
Optical coherence tomography
An optical imaging modality, typically based on Michelson interferometry, that optically sections tissue with microns of axial resolution, realized by interferometrically discriminating reflected or backscattered light by its time of flight.
17
Optical transfer function
Transfer function that characterizes the performance of an optical system. It is the Fourier transform of the point spread function or the autocorrelation of the generalized pupil function, and consists of the modulation transfer function (MTF) and the phase transfer function (PTF). When multiplied by the object spectrum, the image spectrum is obtained.
13
Optical zone
See ablation optical zone.
12
OTF
See optical transfer function.
13
Outer segment
The portion of the photoreceptor that contains a lipid membrane supporting photopigment in a configuration allowing light to interact and cause bleaching, leading to a visual signal.
9
PAL-SLM
See parallel aligned nematic liquid crystal spatial light modulator.
18
Parallel aligned nematic liquid crystal spatial light modulator
An optically addressable (intensity to phase) spatial light modulator with an amorphous silicon layer, a dielectric mirror, and a liquid crystal layer sandwiched between two glass substrates with transparent electrodes.
18
Phakic
The physiological eye that retains its natural lens; literally “with lens.”
11
Phase transfer function
Transfer function characterizing the variation of phase shift in the image as a function of spatial frequency.
13
Phase wrapping
The mechanism of folding a phase greater than 2π into an equivalent phase less than 2π using a modulo operation. When the amount of to-be-compensated phase is more than 2π, the excess phase must be folded within the 2π range. This is essential if the device has a maximum modulation of a single wave (2π at the operating wavelength).
18
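The modulo operation described in this entry can be sketched in one line; the use of radians and the [0, 2π) range below is an assumption for illustration:

```python
import math

def wrap_phase(phase_rad):
    """Fold a phase (radians) into the range [0, 2*pi), mimicking
    the modulo operation used when a corrector is limited to a
    single wave of modulation."""
    return phase_rad % (2.0 * math.pi)

# 7 rad exceeds 2*pi, so the excess is folded back:
print(wrap_phase(7.0))  # 7 - 2*pi, about 0.7168
```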
Phoropter
A refractive device used to determine a patient’s sphero-cylindrical refractive error. When coupled with adaptive optics, the system can also measure and correct for the patient’s higher order aberrations (e.g., for psychophysical testing).
18
Photomultiplier tube
A photon counting device.
16
Photometer
A device that measures the radiance of a light source weighted by a filter that emulates a human spectral efficiency function.
14
Photopigment
The photolabile pigment that breaks down when struck by light, leading to the initiation of a neural signal.
9
Photoreceptor
A specialized neural cell that captures light, converts light energy to chemical energy, and transmits a neural signal.
9
Photorefractive keratectomy
A technique that removes the cells on the front surface of the cornea (the corneal epithelium) and then applies the excimer laser treatment to the cornea. A bandage soft contact lens is then applied postoperatively to aid in healing.
12
Piston-only segmented mirror
An array of adjacent, planar mirror segments that are independently controlled and have one degree of freedom that corresponds to a pure, vertical piston mode.
4
Piston/tip/tilt segmented mirror
An array of adjacent, planar mirror segments that are independently controlled and have three degrees of freedom that correspond to a vertical piston mode and two additional degrees of freedom (tip and tilt) for slope control.
4
Plasma display
A computer-controlled display that produces light by exciting plasma gas pockets coupled to phosphors.
14
PMT
See photomultiplier tube.
16
Point spread function
The response of an optical system to a point source of light. It is calculated as the squared modulus of the Fourier transform of the generalized pupil function.
8, 13
Power spectrum
The power spectrum is the squared modulus of the Fourier transform of a signal. Unlike the Fourier transform, it ignores phase information and takes only positive values. It quantifies how strongly each spatial frequency is represented in a given pattern, image, or stimulus. For example, the image of a picket fence would contain a lot of power at the frequency corresponding to the spacing of the pickets.
8
Power spectral density
A continuous function with dimensions of wavefront squared per hertz, obtained by dividing the power spectrum of the wavefront by the product of the number of diagnostic frames and the sampling period.
8
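The normalization in this definition can be sketched directly; a minimal example assuming a single real-valued wavefront record with one value per diagnostic frame (the sample record and sampling rate below are hypothetical):

```python
import numpy as np

def power_spectral_density(wavefront_frames, sampling_period_s):
    """PSD following the definition above: the squared modulus of
    the Fourier transform of the wavefront record, divided by the
    number of diagnostic frames times the sampling period."""
    x = np.asarray(wavefront_frames, dtype=float)
    n_frames = x.size
    power_spectrum = np.abs(np.fft.fft(x)) ** 2
    return power_spectrum / (n_frames * sampling_period_s)

# A hypothetical 4-frame residual record sampled at 30 Hz:
psd = power_spectral_density([0.1, -0.1, 0.1, -0.1], 1.0 / 30.0)
print(psd)
```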
Presbyopic
A condition of the eye where the natural lens can no longer accommodate (or increase its positive power by steepening its surfaces) to allow near objects to focus at the retinal plane. Presbyopic eyes are in focus for distant objects, but hyperopic for near objects.
11
PRK
See photorefractive keratectomy.
12
PSD
See power spectral density.
8
Pseudophakic
The physiological eye that has had its natural lens replaced with an artificial lens, such as an intraocular lens (IOL).
11
PSF
See point spread function.
8, 13
Psychometric function
The probability of correct performance on a psychophysical task as a function of stimulus strength.
14
Psychophysical function
The value of a physical stimulus variable required to produce a criterion level of performance on a psychophysical task as a function of a second physical stimulus variable.
14
Psychophysical tests
In the context of AO systems, these tests are done to evaluate the effect of adaptive optics in enhancing the visual perception and performance of an observer, as opposed to objective measurements obtained by the wavefront sensor. A laboratory test is designed to simulate a real-life task that the observer must perform. For example, a test could be conducted by displaying a series of sinusoidal patterns to the observer and evaluating the observer’s contrast threshold function with and without AO correction.
18
Psychophysics
The study of the relations between human performance and physical variables. Also, the methods and techniques used in this study.
14
PTF
See phase transfer function.
13
Pupil
In a given optical space, the image of the aperture stop.
7
Refractive error
In the context of optometry and ophthalmology, the spherical and astigmatic focusing errors of the eye.
13
Refractive surgery
Surgery done to improve the eye’s optics and correct refractive error. This can be accomplished with corneal refractive surgery (such as an excimer laser reshaping of the cornea) or with an ocular implant placed inside the eye to correct for refractive errors.
12
Registration
The relative alignment between two planes. In the context of AO system design, registration describes a method to co-align the wavefront sensor with the wavefront corrector. In the absence of registration, the corrections will be applied at the wrong place, which in essence will increase the error instead of decreasing it.
7, 18
Relay
Any imaging optical system.
7
Retina
The thin, transparent layer of neural tissue that initiates a visual signal at the back of the eye and transmits it toward the brain.
9
Retinal pigment epithelium
The melanin-containing monolayer of cells that provides metabolic support for and helps in renewal of the photoreceptors.
9
RMS
See root-mean-square.
2
Root-mean-square
In statistics, the magnitude of a varying quantity, calculated as the square root of the mean of the sum of the squared values of the quantity. For a wave aberration described using Zernike polynomials, the root-mean-square (RMS) wavefront error is defined as the square root of the sum of the squares of a given number of Zernike coefficients.
2
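For coefficients in the normalized Zernike convention, the definition above reduces to the root of the sum of squared coefficients. A minimal sketch (the coefficient values are hypothetical, piston assumed already excluded):

```python
import math

def rms_wavefront_error(zernike_coeffs):
    """RMS wavefront error from normalized Zernike coefficients,
    as defined above: the square root of the sum of the squares.
    Units follow the coefficients (typically microns)."""
    return math.sqrt(sum(c * c for c in zernike_coeffs))

# Hypothetical defocus and astigmatism coefficients (microns):
print(rms_wavefront_error([0.3, 0.4]))  # 0.5
```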
Sclera
The opaque (white) portion of the eye. It is composed of randomly oriented collagen fibrils.
11
SD-OCT
See spectral-domain optical coherence tomography.
17
Search box
A rectangular (typically square) region of interest used to compute the centroid (center of mass) of the image formed by a single lenslet of a Shack–Hartmann wavefront sensor. See also center of mass.
6
Segmented corrector
A wavefront correction component that uses segmented optical reflection or transmission. See also segmented mirror and liquid crystal spatial light modulator.
4
Segmented mirror
An array of adjacent, planar mirror segments that are independently controlled. See also piston-only segmented mirror and piston/tip/tilt segmented mirror.
4
Sensitivity
The inverse (reciprocal) of threshold.
14
Shack–Hartmann wavefront sensor
A wavefront sensor that uses a regular array of lenslets to image the incident wavefront on a CCD array. A uniform plane wave produces a regular array of spots. An aberrated wavefront produces focal spots that are displaced from these reference positions by an amount proportional to the local slope of the wavefront.
3, 18
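The proportionality in this entry is, in the small-angle approximation, simply the spot displacement divided by the lenslet focal length. A minimal sketch (the numbers are illustrative, not from the text):

```python
def local_slope(spot_displacement_um, lenslet_focal_length_um):
    """Local wavefront slope (radians) from the displacement of a
    Shack-Hartmann spot relative to its reference position, using
    the small-angle approximation slope = displacement / focal
    length. Both arguments must share the same units."""
    return spot_displacement_um / lenslet_focal_length_um

# A hypothetical 2-um spot shift behind a 24-mm focal length lenslet:
print(local_slope(2.0, 24000.0))
```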
SLM
See spatial light modulator.
18
Spatial light modulator
A device for imprinting an amplitude or phase pattern (or both) on an optical beam. By modulating the phase of an aberrated beam, the aberrations of the beam can be compensated. A spatial light modulator (SLM) can be used as an optical input device for processing information in an optical information processing system. A SLM can also be used as an optical matched filter for an optical correlator.
18
Speckle
A spatially random intensity distribution produced from the coherent interference of light that reflects from an optically rough surface or propagates through a turbulent medium.
17
Spectral-domain optical coherence tomography
A highly efficient form of OCT that records the optical spectrum of the interferometric signal using a spectrally dispersive element (e.g., a diffraction grating) and a linear or areal CCD in the detection channel.
17
Spectroradiometer
A device that measures the radiance of a light source over a large number of steps across the visible spectrum.
14
Sphere
The defocus component of a spectacle prescription designed to correct for myopia or hyperopia.
11
Spherical equivalent
See manifest refractive spherical equivalent.
13
Spherocylindrical
Refraction that includes the defocus (sphere) and astigmatism (cylindrical) values. For correction, a lens designed to correct both defocus and astigmatism where at least one of the surfaces is toric in shape.
11
Spot
In a Shack–Hartmann wavefront sensor, the term “spot” refers to the image formed by light focused from a single lenslet.
6
Staircase method
A psychophysical technique in which threshold is measured by selecting stimulus values dynamically based on observer performance throughout the course of the experiment.
14
Stiles–Crawford effect
Light entering the pupil at different locations does not elicit the same visual response due to the waveguide nature of the photoreceptors and their pointing direction.
15
Strehl ratio
The ratio of the peak intensity of the point spread function of an optical system relative to the peak intensity of a diffraction-limited point spread function.
8
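The Strehl ratio is often estimated from RMS wavefront error via the Maréchal approximation. That shortcut is not part of the definition above; it is sketched here as a common rule of thumb, valid only for small aberrations:

```python
import math

def strehl_marechal(rms_wavefront_error_um, wavelength_um):
    """Estimate the Strehl ratio from RMS wavefront error using
    the Marechal approximation S ~ exp(-(2*pi*sigma/lambda)^2).
    This is an approximation, not the exact peak-intensity ratio
    given in the definition above."""
    sigma_rad = 2.0 * math.pi * rms_wavefront_error_um / wavelength_um
    return math.exp(-sigma_rad ** 2)

# lambda/14 of RMS error yields S of roughly 0.8 (the Marechal criterion):
print(strehl_marechal(0.555 / 14.0, 0.555))
```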
Stroke
The dynamic range of the deformable mirror actuators, typically measured in microns.
4
TCA
See transverse chromatic aberration.
2
Threshold
The limiting value of a physical variable (e.g., number of quanta, contrast) for a criterion level of performance on a psychophysical task (see also detection, discrimination, and identification thresholds).
14
Transition zone
The additional width of a laser refractive surgical ablation that extends past the ablation optical zone and is used to blend the ablation optical zone with the untreated peripheral cornea.
12
Transverse chromatic aberration
Change in the apparent magnification of the optical system with wavelength.
2
Troland
A photometric unit used to quantify retinal illuminance, calculated by multiplying the luminance value (in cd/m²) by the area of the pupil (in mm²). Units of (cd/m²) · (mm²).
14
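The product in this definition is easy to compute directly; a minimal sketch, where the luminance and pupil diameter values are hypothetical:

```python
import math

def trolands(luminance_cd_m2, pupil_diameter_mm):
    """Retinal illuminance in trolands: luminance (cd/m^2)
    multiplied by pupil area (mm^2), as defined above."""
    pupil_area_mm2 = math.pi * (pupil_diameter_mm / 2.0) ** 2
    return luminance_cd_m2 * pupil_area_mm2

# 100 cd/m^2 seen through a 3-mm pupil:
print(trolands(100.0, 3.0))  # about 706.9 Td
```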
VA
See visual acuity.
14
Vertical synchronization pulse
The change in voltage level of a video signal that triggers the end of one frame and the start of a new frame. On a video display or frame grabber, each frame in an image starts at the end of the vsync pulse and ends with the start of the next vsync pulse.
16
Virtual refraction
A computational method that captures the essence of a traditional refraction by successive elimination by mathematically simulating the effect of spherocylindrical lenses of various powers.
13
Visual acuity
One measure of the resolution limit of vision. Acuity is often expressed in terms of Snellen notation, equivalent to the ratio of a standard viewing distance (20 ft in the United States or 6 m in Europe) to the distance at which the smallest identifiable symbol subtends a visual angle of 5′, with the lines and their interdigitated spaces having a thickness of 1′ (1/60 of a degree).
14
Visual angle
The portion of the visual world included between two lines that converge at the pupil.
9
Visual benefit
The improvement in visual performance that an eye can gain by correcting lower or higher order aberrations. As a metric, it can be defined as the ratio of the modulation transfer function of the ideally corrected eye to the modulation transfer function of the partially corrected or uncorrected eye for any selected spatial frequency.
11
Vitreo-retinal interface
The intersection between the collagenous vitreous body and the innermost retinal component, the nerve fiber layer, which is often so reflective as to obscure the deeper layers.
9
Vitreous humor
The collagenous body, between the crystalline lens and the retina, that supports the globe from the inside, and is in contact with the retinal surface prior to aging changes.
9
Volume resolution element
Defines the size of the smallest possible volume in a three-dimensional image, calculated using the formula for the volume of a cylinder.
16
Vsync
See vertical synchronization pulse.
16
Wave aberration
Function defined as the difference between the aberration-free (spherical or reference) wavefront and the actual wavefront for every point over the pupil.
2
Wavefront
An imaginary surface that represents the direction of propagating light. The wavefront is always perpendicular to the direction of travel at all points in space.
3
Wavefront sensor
A wavefront sensor is used to measure the wave aberration of the light. See also Shack–Hartmann wavefront sensor.
3, 18
Wavefront-guided refractive surgery
See customized ablation.
12
WS
See wavefront sensor.
7, 8
Yes/no procedure
A procedure for estimating thresholds in which an observer indicates whether a stimulus was detected on a given trial.
14
Zernike polynomials
Orthogonal polynomials used to expand an aberration function defined within the unit circle. The Zernike coefficients represent standard deviations of individual polynomials.
3
Symbol Table
Symbol
Represents
★
Autocorrelation
⊗
Convolution operator
∇2
Laplacian operator
α
Axis of astigmatism (or cylinder)
βi
Off-axis incident angle
βr
Off-axis refracted angle
βv
Off-axis viewing angle
χ
Log-normal amplitude
∆
Step, difference, or change in a variable
∆φ
Phase difference
∆λ
Bandwidth of light source
∆λres
Spectral resolution of imaging spectrometer
∆fx
Acquisition (spatial frequency) bandwidth
Adaptive Optics for Vision Science, Edited by Porter, Queener, Lin, Thorn, and Awwal Copyright © 2006 John Wiley & Sons, Inc.
∆h
Deviation of off-axis chief ray when deformable mirror, wavefront sensor’s pupil, and subject’s pupil are not coincident
∆s
Spot displacement
∆smax
Maximum measurable spot displacement
∆smin
Minimum measurable spot displacement
∆xS
Centroid or spot displacement of a Shack–Hartmann wavefront sensor in the x (horizontal) direction
∆yS
Centroid or spot displacement of a Shack–Hartmann wavefront sensor in the y (vertical) direction
∆z
Range or change along the optical axis, such as depth of focus, axial distance between pupil planes, or in the context of optical coherence tomography, the translational change of the reference mirror
ε0
Permittivity
φ
In the context of the radially averaged OTF metric, the orientation variable for the integral over all orientations
φ
In the context of a periodic function, such as a sinusoid, phase
η
Quantum efficiency
ϕc (x, y, t)
Wavefront profile of correction, as a function of position and time
ϕi (x, y, t)
Wavefront profile, uncompensated, as a function of position and time
ϕm
Wavefront profile at actuator m
ϕr (x, y, t)
Wavefront profile of residual aberration, as a function of position and time
λ
Wavelength of light
ν
Frequency of light
θ
In the context of polar coordinates, the angular component of a polar coordinate
θ
In the context of optical wavefronts, the slope of the wavefront
θmax
Maximum wavefront slope that can still include one spot within each virtual subaperture
θmin
Measurement sensitivity: the minimum wavefront slope that the wavefront sensor can measure
θdl
Angle subtended by the peak of the Airy disk and the first minimum (width of the point spread function)
θvis
Visual angle
Θ(u, v)
Wiener filter in the spatial frequency domain
ς
Integration variable, shifted from x, in the definition of convolution
σ
Standard deviation
σ2
Variance
σCALIB
Calibration error
σn
Standard deviation of noise distribution
τ
Delay time constant or time lag
τc
Computational delay
υ
Integration variable, shifted from y, in the definition of convolution
ωo
Resolution at the focal plane in microns
Ψ
Complex field of the corrected wavefront
Ψref
Complex wavefront from the reference arm of an optical coherence tomography system
Ψretina
Complex wavefront from the retina measured by an optical coherence tomography system
a
Length of stimulus along an axis orthogonal to the direction of viewing
abim
Bimorph thickness
agap
The distance between the electrodes and the membrane in a membrane mirror
akm
An element of the actuator influence matrix A (see below). Each element is obtained by applying known voltage to each actuator element m and obtaining the resulting slope measurement k in x and y directions for each lenslet.
amn
Zernike coefficients expressed according to the Gram–Schmidt orthogonalization method for the cornea surface. The coefficients are indexed by angular frequency, m, and radial order, n.
A
Actuator influence matrix that is used to compute the average slope of the wavefront produced by a given actuator voltage vector as Av = s
A†
Pseudo-inverse of the actuator influence matrix A. A† is used to compute the desired actuator voltage from the measured slope vector as v = A†s
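The pseudo-inverse relation v = A†s amounts to a least-squares solve; a minimal sketch with NumPy, in which the 4 × 2 influence matrix and the slope vector are hypothetical toy values, not measurements:

```python
import numpy as np

# Hypothetical influence matrix A for a toy system with 4 slope
# measurements and 2 actuators. In practice each column would be
# measured by poking one actuator and recording the lenslet slopes.
A = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [0.5, 0.5],
              [0.2, 0.8]])

s = np.array([0.10, -0.20, -0.05, -0.14])  # measured slope vector

# Least-squares actuator command via the pseudo-inverse: v = A-dagger s
v = np.linalg.pinv(A) @ s
print(v)
```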
bave
Average blur strength
cj
Element of the Zernike coefficient vector at mode index j
cJ
Maximum Zernike coefficient (mode) included
cmn
Zernike coefficient for the wave aberrations for angular frequency, m, and radial order, n
C1 or C 2
Principal curvature map
Cb (x, y)
Blur mapping function
C Gauss
Gaussian curvature map
CJ
Astigmatism curvature map
Cmean
Mean curvature
C3
Zernike coefficient value at the third mode (astigmatism term)
C4
Zernike coefficient value at the fourth mode (defocus term)
C5
Zernike coefficient value at the fifth mode (astigmatism term)
c
Zernike coefficient vector
d
In the context of optical design, the diameter of a lenslet, pupil, or subaperture. May also represent lenslet or pixel spacing
d[n]
Residual wavefront as measured by the diagnostics
d′
In the context of signal detection theory, sensitivity, also called “d-prime”
dpix
Pixel diameter
D(f)
Residual mirror commands obtained in the diagnostics (i.e., input to compensator or “control computer”)
D(p)
Power spectrum of the diagnostics for frame p where p = 1 to P
D
Diopters
e−sT
Exponential function for converting the complex z-transform variable z to the Laplace domain using frequency variable s
exp(x)
Exponential function of variable x
E
Error metric of an optical system
f
Frequency, such as in i2πf, or spatial frequency
fc
Cutoff frequency
fs
Sampling frequency
F
Focal length
FS
Sagittal focal length
FT
Tangential focal length
f/#
f-number
gA (x, y)
Gaussian response used to model actuator surface
gN (x, y)
Gaussian neural weighting function used to compute neural sharpness
h
Planck’s constant (where the energy of a photon is hν)
H(f)
Transfer function for spatial frequency f
H(s)
Transfer function for frequency s, where s = i2πf
H(u, v)
Transfer function for two-dimensional spatial frequency u, v
i
Square-root of negative 1
i(x, y)
Image irradiance in the spatial domain
I(u, v)
Fourier transform of the image irradiance
I
Intensity
I(zref) retina
Intensity of a reflection from a slice of retina
Iref
Intensity at the reference arm of an optical coherence tomography system
Iretina
Intensity from the retina in an optical coherence tomography system
j
Index of Zernike mode (coefficient) in the range 1 to J
J
Number of Zernike modes (coefficients)
k
The index of a lenslet in a Shack–Hartmann wavefront sensor in the range 1 to K.
K
A constant in an expression, such as constant gain or unit conversion factor
KG
Coefficient of the controller (proportional to the system gain, K)
Km
Membrane stress constant
Kp
Piezoelectric constant of a bimorph mirror
KW
Weber fraction
K
Number of lenslets in a Shack–Hartmann wavefront sensor
l
Distance, as in viewing distance
lo
Length of a characteristic feature size
lc
Coherence length
Ll
Radiance at a given wavelength
Lback
Background luminance in a stimulus image
L max
Maximum luminance in a stimulus image
L min
Minimum luminance in a stimulus image
L stim
Stimulus luminance in a stimulus image
Lv
Luminance
m
In the context of Zernike coefficients, angular frequency index, used as a superscript
m
In the context of the adaptive optics control systems, the index of the deformable mirror actuator in the range 1 to M
m(x, y)
Mirror actuator signal magnitudes at actuator position (x, y)
M(f) or M(s)
Mirror position signal from a linear system
m0,N
Magnification between the eye and the plane where the lens is placed
ma..z
Product of the magnifications of the telescopes between conjugate planes a and z in a raster scanning optical system
M
Number of deformable mirror actuators
M
Influence matrix of actuators
n
In the context of signal detection theory, a noise signal
n
In the context of Zernike coefficients, the index of the radial order
n¯
Mean of noise n
N(f) or N(s)
Additive noise signal input to a closed-loop linear system
N(u, v)
Spatial frequency (Fourier) domain expression for noise influence, on the Wiener filter
Ncsf(x, y)
Neural weighting function based on the inverse Fourier transform of the contrast sensitivity function
n
In the context of control systems at discrete time intervals, the time index
n or n′
Refractive index
ncore
Refractive index of the core of an optical fiber
nret
Refractive index of the retina
N
Number of Zernike radial orders
NA
Number of resolution elements in an A-scan
o(x, y)
True object irradiance in the spatial domain
ô(x, y)
Estimate of object irradiance in the spatial domain
O(u, v)
Fourier transform of the true object irradiance
p
Index of a diagnostic frame in a closed-loop control system, in the range 1 to P
pk (x, y)
Subpupil function used to simulate the spot pattern for each lenslet in a Shack–Hartmann wavefront sensor
P(u, v)
Fourier transform of the point spread function of an optical system
PA
Oblique astigmatism in diopters
Pcyl
Optical power of a cylindrical lens
PJ0 or PJ45
Optical power of a Jackson crossed cylinder
Pretina
Optical power of a retinal reflection
Psph
Optical power of a spherical lens (diopters)
PSE
Spherical-equivalent power
P
Number of diagnostic frames in a closed-loop control system
psf(x, y)
Point spread function in the spatial domain
psfN (r, θ)
Normalized point spread function in polar coordinates
qA (x, y)
Binary function used to compute the correlation width of light distribution as a function of the autocorrelation of the PSF
qH (x, y)
Binary function that is 1 where the PSF is greater than one half the maximum and 0 elsewhere, used to compute the half-width at half-height metric
r
Radial distance as a variable; radius of beam of light at the eye
R
Radius of curvature
R(f ) or R(s)
Residual aberration in a closed-loop system, where s = i2πf
r
Radius
R
Reconstruction matrix
s
In the context of modeling the dynamic behavior of a system, complex frequency variable, where s = i2πf
sk
In the context of slope vectors, a single element, at lenslet measurement k, of the slope vector s
¯sx
Global tip-tilt from the horizontal component slope vectors
¯sy
Global tip-tilt from the vertical component slope vectors
(s + n)
Mean of signal plus noise
S
Strehl ratio
S(λ)
Weighting function for computing polychromatic metrics
S CALIB
Strehl ratio corresponding to calibration error
S
Spot size
s
Slope vector or vector of centroid measurements
s[n]
Set of many residual centroid measurements at time n
s˜
Uncorrectable residual of centroid measurements
s′
Slope vector corrected for tip-tilt
s¯
Average of set s[n] of centroid measurements over many frames
TN ( f )
Contrast threshold function as a function of spatial frequency
T
Exposure time or sampling period. Used in the context of wavefront sensing or optical coherence tomography A-scan
u
Normalized axial unit
u[n]
Input to wavefront compensator at time n (unit step)
v(x, y)
Mirror voltage profile at position (x, y)
v¯
Average of the actuator control voltage vector. To remove the piston component from the control voltage vector, subtract this value from each element of the vector v.
vm
The mth deformable mirror actuator control voltage
V′λ
Scotopic luminous efficiency function at wavelength λ
Vλ
Photopic luminous efficiency function at wavelength λ
v
Deformable mirror command vector
v′
Deformable mirror command vector with the piston component removed.
VM
Michelson contrast
VW
Weber contrast
w[n]
Windowing function used to avoid spectral leakage
W
Wavefront aberration in waves (relates to phase via φ = 2πW/λ)
W(r, θ)
Wavefront or wave aberration function in polar coordinates
Wk (x, y)
Wavefront or wave aberration function, for the subaperture of a particular lenslet, k, expressed in Cartesian coordinates
W(x, y)
Wavefront or wave aberration function in Cartesian coordinates
∂W(x, y)/∂x
Wavefront slope in the x direction, expressed as the partial derivative of the wavefront
∂W(x, y)/∂y
Wavefront slope in the y direction, expressed as the partial derivative of the wavefront
¯Wx
Mean of the spatial derivative in x of the wave aberration
¯Wy
Mean of the spatial derivative in y of the wave aberration
x
Variable in the horizontal direction, typically orthogonal to the optical axis of a system
X( f ) or X(s)
Aberrations (input signal to linear system)
y
Variable in the vertical direction, typically orthogonal to the optical axis of a system
y[n]
Output of the wavefront compensator at time n; the compensator is an integral controller of the form y[n] = y[n − 1] + Ku[n]
z
In the context of Cartesian coordinate systems, the direction along the optical axis, where z is perpendicular to the (x, y) plane
z
In the context of the Laplace transform, the complex z-transform variable, where z = e^(sT)
z(r, θ)
Corneal elevations, defined as the distance from each point of the corneal surface to a reference plane tangential to the vertex of the cornea
z′jk
Derivative of the jth mode of the Zernike representation of the wavefront for the kth lenslet
Z
Zernike polynomial
Zj (x, y)
Zernike polynomial at mode j, where j = 1 to J
Zmn(r, θ)
Zernike polynomial for angular frequency m and radial order n
Z
Reconstructor matrix that computes Zernike coefficients from slope vectors
Zm
Zernike reconstruction matrix for individual actuator m
Z†
Pseudo-inverse of reconstructor matrix Z
INDEX
Abbe error, 162 Aberration(s): characterized, 36–37, 66, 90, 236, 239 chromatic, 9, 33–34, 43–45, 51, 79, 90, 107–108, 200, 268–271, 354–356, 540, 550 corneal, 35–37, 533 correction of, 307, 34, 92, 291 defined, 529 generator, 77–78, 127–128 internal, 537 lenticular, 239 map, 340–341, 357–358 measurement, 63, 297 monochromatic, xvii, 4, 6, 33–40, 238–239, 358–359 in normal eye, 34–35 ocular, defined, 542 off-axis, 46–51, 239–240, 426 polarization effects, 34, 53–55 population statistics: Indiana, 52, 97–99, 100–109 Murcia optics lab, 52 Rochester, 52 Rochester & Bausch & Lomb, 97–99, 100–109 principal components analysis of, 52 refractive, 332 reporting (OSA Standards), 518–522 scatter effects, 34, 55 statistics of aberrations, 34, 52–53 temporal properties, 40–43, 97
Aberrometry, 98, 358 Aberro-polariscope, 54 Aberroscope, crossed-cylinder, xviii Ablation, see Corneal ablation conventional, 325, 532 customized, 324, 533 ocular, 40–43 optical zone, 312, 529 rate, 323–324 Absorption/absorption spectra, 42, 218–220, 225, 284, 420 Accommodation, 7, 34, 40–43, 336 Achromatization process, 44–45 Acoustic impedance mismatch, 257–258 Acousto-optic modulator (AOM), 419, 529 Acquisition bandwidth, 263 Actuator, see specific types of wavefront correctors AOSLO, 422–423 configuration of, 122–124 deformable mirror, discrete, 86, 97, 534 DM-to-WS registration, 183 influence function, 122, 184 lead magnesium niobate (PMN), 397, 452 mirror, 266 Nyquist criterion, 192–193 Rochester Adaptive Optics Ophthalmoscope, 400–401 slope influence function and, 124–125 spacing of, 156 stroke, wavefront correctors, 99–100 voltages, 150, 175, 184, 196
Adaptive Optics for Vision Science, Edited by Porter, Queener, Lin, Thorn, and Awwal Copyright © 2006 John Wiley & Sons, Inc.
Acuity, defined, 9 Adaptive optics scanning laser ophthalmoscope (AOSLO): axial resolution, 434–438 basic layout of, 249 calibration, 431–432 characterized, 17–18, 20, 23 compensation, 432–434 electronic hardware control of, 428 image acquisition, 426–429 imaging results, 438–440 light detection, 254 light path, 249–251 optical layout for, 425–426 performance strategies, 441–444 SLO system operation, 255 software interface for, 429–431 Adaptive optics sensing and correcting algorithm (AOSACA), 430 Adaptive optics (AO) system: aberration correction, 38 assembly of, 181–182, 491 benefits of, 63, 83–84 defined, 529 first-order optics, 156–157 human factors, 272 imaging time, 276 light budget, 271–272 optical alignment, 157–174 optomechanical design, 155 performance: bandwidth error, 199–200 calibration error, 191–192 fitting error, 192–194 measurement error, 194–199 significance of, 163, 189 Strehl ratio, 189–191 testing procedures, 492 wavefront errors, 200–201 principal components of, 84–86 real-time, 42 refraction, 272–276 registration, 496–498 retinal imaging, 11–24 software, 139, 489 system integration, 174–186 transfer function of, 130–135 vision correction, 9–11 wavefront correctors, 86–111 Adaptive optics with optical coherence tomography (AO-OCT): basic layout of, 264–266 characterized, 23, 448 chromatic aberrations, 271
Afocal relay telescope, 157–158, 170–174 Aging eye/aging process: effect on aberrations, 34, 40–43 fundus, 220 light absorption, 42 retinal changes, 218, 224–225 Airy, George Biddell, Sir, 4 Airy disk, 33, 237, 254, 435, 441 Algorithm(s): centroid, 200 center-of-mass, 144, 483–484, 488, 531 control, 119–135 direct slope, 124–127, 534 iterative centroid, 144 least-squares, 127 mirror control, 421–423 phase retrieval, 192 phase-unwrapping, 176 reconstruction, 76–77 Aliasing, 5, 51, 127, 423, 507 Alignment, see Optical alignment AO system: aligning optics, 167–170 CAD applications, 159–160, 164 common practices, 163–170 detectors, 166–167 error budget and, 160–161 general tools, 166–167 layout, 164 mirrors, flat and interchangeable, 167 offline alignment, 170–174 optical axis establishment, 164–166 sources, 166–167 laser, positioning of, 165 Rochester Adaptive Optics Ophthalmoscope, 402 telescope, 164 wavefront measurement methods, OSA Standards, 515–517 American National Standards Institute (ANSI), 419 ANSI standards Z80.28, 334 Ametropia, 292–293, 305, 529 Amplifiers, high-voltage (HVAs), 132 Analog/digital (A/D) conversion, 255 Angiograms, 210–211, 439 Angle targets, optical axis, 165–166 Annexin-5, 24 AO loop, 139, 530 Aperture: annular, 222 AOSLO, 422 confocal, 207, 224 entrance, 334
light scattering techniques, 222–224 numerical, 241, 259, 451 photoreceptors, 237 Rochester Adaptive Optics Ophthalmoscope, 403 size of, 278 stop, defined, 530 subaperture, 71–72, 75, 156, 177, 180, 183–184, 478, 481, 490–491, 500 wavefront correctors and, 110 Aphakic, defined, 530 AreaMTF, 350–351 AreaOTF, 351 Arcades, 209 Arteries, retinal, 206–207, 209–211, 222, 228–229 Artificial eye, 166, 182, 185 A-scan, 257, 263–264, 268, 270, 450, 465, 469, 530 Astigmatism: AO-OCT experiments, 454 AO system assembly, 181 characterized, 4, 36, 46, 83, 293, 306, 401 conventional imaging, 237, 240 correction of, 6, 50–51, 84, 273, 294, 335, 413, 431, 457, 474, 478, 503 custom-correcting contact lenses, 296, 301–302, 333–334 far point and, 335 high-resolution retinal imaging, 264 image quality and, 11 oblique, 47–48, 51 ocular adaptive devices, 9 off-axis aberrations, 47–48 peripheral refraction, 47–48 refractive surgery, 313 Shack-Hartmann wavefront sensing, 73, 75 on shear plate, 170 statistics of aberrations, 52 surgical correction of, 306–307, 313 wavefront correctors and, 101, 105, 107 Astrocytes, 215, 218 Atmospheric turbulence, impact of, 7, 95 Automated refraction, 9–10 Avalanche photodiodes (APDs), 5, 131, 255 Axial: point spread function (PSF), 465, 530 resolution, 5, 23, 256, 265, 267, 270, 434–438, 441, 467 sectioning, closed-loop AO system, 423–424 slicing, 441 Azimuthal frequency, 295
Babcock, Horace, 7 Backscattered light, 23, 222, 227–230 Bandwidth: acquisition, 263 closed-loop, 8, 133–135 error, 194, 199–200, 530 error transfer function, 133, 135 narrow, 242 open-loop, 133–134 spectral, 23, 90, 270 temporal, 97, 112 Basal lamina, 214 Bayesian statistics, applications of, 280, 379 Beam path, slope of, 165 Beam size, in scanning laser imaging, 253 Beamsplitter(s), 78, 240–241, 266, 272, 303, 399, 418, 425, 452, 483 Beer’s law, 212 Bias, SLM, 495 Bimorph mirrors, 88, 91–92, 96–97, 530 Bimorph technology, 86 Bipolar cells, 217 Birefringence, 227, 229, 443 Bi-sinc function, 177 Bit stealing, 387 Blackman-Harris window, 197 Bleaching, 15, 225 Blind spot, 217 Blood: -brain barrier, 209 flow: high-resolution imaging of, 21–22 measurement of, 439 monitoring, 23 light absorption, 218–220, 225 retinal barrier, 210, 214 Blur/blurring: conventional imaging, 243 convolution and, 278 eye movement, 5 neural adaptation to, 11 optical, 51 retinal image, 7 sources of, 326 strength, 334 strength map, 342–343 Bode analysis, 409 Boresight, 167, 175, 182, 530 Brain: blur/blurring and, 11 central nervous system, 209 cortical neural processing, 335 blood-brain barrier, 209 eye movement and, 17
Bruch’s membrane, 214, 220 B-scan, 448–450, 465, 467, 471–472, 531 Calibration: AOSLO, 431–432 OSA Standards, 523–527 chromatic aberration, 79 defocus and, 431–432 error, 191–192, 531 hardware, 75–76 lenslet arrays, 77–78 liquid crystal AO phoropter, 492–502 performance errors, 191–192 reconstruction algorithm, 76–77, 127 reference centroids, 185–186 Shack-Hartmann wavefront sensing, 75–79, 158–159 Camera(s): charge-coupled device (CCD), 65–66, 75, 79, 130–131, 141, 400, 404, 406, 413, 418, 452, 479, 482, 506–507 high-resolution, 140 retinal, 83, 95, 110, 450 science, 85, 241–242, 246 stare, 195 Cane toad, photoreceptor cells, 5 Cannulation, 22 Capillaries, 206, 209–210 Capsulorhexis, 304 Cataract(s): characterized, 63, 304, 306, 308 formation of, 55 surgery, 39 Cathode ray tube (CRT): defined, 531 monitors, 381–385, 388, 405, 492 projectors, 386, 533 Catoptric image, 331 Center for Adaptive Optics, xviii Center-of-mass algorithm, 144, 483–484, 488, 531 Central nervous system, blood-brain barrier, 209 Central-pupil method, 343–344 Centroid algorithm: bounding box, 488 implications of, 71, 144, 200, 406, 459 pyramidal, technique, 488, 490 RMS wavefront error, 489 Centroids: AOSLO, 423, 430 characterized, 143, 484 conventional imaging, 240 defined, 531
estimates, 8 image preparation, 143–144 liquid crystal AO phoropter, 487–488 measurement of, 177–180 reference, 185–186 standard deviations of, 488 system performance and, 192 wavefront reconstruction process, 180–181 Charge-coupled device (CCD): applications, 239, 245–246, 256, 268, 449 camera: characterized, 141 retinal images, 452 Indiana University AO-OCT System, 452 liquid crystal adaptive optics, 479, 482, 506–507 Rochester Adaptive Optics Ophthalmoscope, 400, 404, 406, 413 scanning laser ophthalmoscope design, 418 wavefront correctors, 130–131 wavefront sensing, 65–66, 75, 79 detector, 473 plane: optical alignment, 156, 169, 177–179, 182 registration of, 183 Chief ray, 157, 531 Choriocapillaris, 210 Choroid: functions of, 213–214 photoreceptors, 261 Choroidal neovascular membrane, 210, 219–220 Chromatic: aberrations: axial, 354 calibration of, 79 characterized, 43–44, 243 conventional imaging, 241 intrinsic, 90 longitudinal (LCA), 44–45, 51, 107–108, 270–271, 540 monochromatic aberration interaction, 45 OCT ophthalmoscopes and, 268–271 ocular adaptive optics, 9 peripheral image quality and, 51 significance of, 33 transverse (TCA), 44–45, 51, 354–355, 550 wavefront errors, 200 compensation, 44
difference of magnification (CDM), 354, 531 dispersion: AO-OCT ophthalmoscopes, 266 implications of, 354 OCT ophthalmoscopes, 266 Chromaticity, 357, 382, 389 CIE standard observer, 368–370 Circle of least confusion, 47–48 Clinical trials, laser surgery, 317–318 Closed-loop, in AO systems: bandwidth, 8, 133–134 control, 129–130, 241, 432, 532 correction, 411, 491 implications of, 85, 241, 423–424, 431, 459–460 operation, 499–502 power spectrum, 198 response, AO, 131 tracking system, 322 transfer function, 132–133 transfer system, 408 Computer numerical control (CNC) contact lens lathe, 299–300 Coherence/coherent: AO parallel SD-OCT imaging, 469 laser source, 399 lengths, 398, 451, 469, 532 light, generally, 5–6, 390–391 Collagen, 212, 324 Collimation, 157, 163, 173, 178, 182, 185, 419 Color: appearance, adaptive optics, 15 blindness, 19 -channel independence, 384 fundus photography, 219 lookup tables (CLUTs), 387 vision, 15, 215 Colorimeter, 389–390, 532 Coma: aberration structure, 35, 41, 46–47, 49–50 correction, 477 corneal ablation, 322 customized vision correction devices, 291, 293–294, 296, 301, 306–307 Command vector, 125 Commission Internationale de l’Eclairage (CIE), 368–370, 382 Common path aberrations, 158 Compaq MP1600, 405 Compensation: AO-OCT experiments, 454 AOSLO, 432–434 characterized, 241, 252
chromatic, 44 liquid crystal AO phoropter, 504 scanning laser imaging, 252, 255 scanning laser ophthalmoscope (SLO), 421 spatial light modulator (SLM) response, 493 wavefront, 418, 421 zonal, 504 Compensator gain, 195 Complementary metal-oxide-semiconductor (CMOS), 92, 246 Computer-aided design (CAD) software, 159–160, 164–165 Computer software applications, see Software Computer technology, impact of, 6 Concave lenses, 4 Cone(s): absorption measurements, 284 angular tuning properties, 12, 14 AO parallel SD-OCT imaging, 474–475 characterized, 215–217, 410, 439 conventional imaging, 243–244, 246 daylight vision, 215 density, 17, 21, 215 diameter, measurement of, 21 directional sensitivity of, 13–14 long wavelength sensitive (L), 15, 16, 19, 215, 217, 226, 284 middle wavelength sensitive (M), 15, 16, 19, 215, 217, 226, 284 mosaic: AO-OCT experiments, 454, 462 AO parallel SD-OCT imaging, 470–472 color blindness and, 19 implications of, 5–6, 8, 367 primate, 24 retinal imaging, 12 trichromatic, 16 photoreceptor mosaic, 16, 20, 24, 110 photoreceptors, 11, 14, 225–227, 237 reflectance of, 14 short wavelength sensitive (S), 15, 16, 215, 217, 226 spacing, 6, 17 trichromatic, 14–15 Cone-rod dystrophy, mosaic in, 20–21 Confidence ellipse, 358 Confocal: aperture, 207, 224 imaging, AOSLO, 435 light imaging, 223 pinhole: AOSLO, 434, 437, 441 size of, 418, 441–443
scanning laser imaging, resolution limits, 249 scanning laser ophthalmoscopes (cSLO), 23–24, 83, 89, 223, 236, 261 Contact lenses: advantages of, 291 benefits/beneficiaries of, 45, 83, 301–304, 332 cast molding process, 299 customized, 34, 63, 298–299 design considerations, 295–297 disposable, 298–299 fabrication for, 6 flexible, 305 hydrophilic lenses, 295, 300 lathing parameters, 299–300, 303 manufacturing issues, 300–301, 304 measurement for, 297–298 rigid gas-permeable (RGP), 293–295, 300 rotational stability, 295–296 silicone, 294, 305 soft, 293–295, 297–298, 302–304 spherical aberration, 291–293 tolerance/benefit ratio, 295 trial, 297–298 Contrast: agents, 21–22, 210–211 AO-OCT, 448 AOSLO, 443–444 attenuation, 335 improvement strategies for, 23–24 Michelson, 365, 377 resolution, 387–388 sensitivity: characterized, 8–9, 366, 410–411 function (CSF), 55, 364–367, 505–506, 532 Weber, 365 Control algorithms: actuators, configuration of, 119–122 implications of, 184–185 influence function measurement, 122–124 lenslets, configuration of, 119–123 spatial control command of wavefront corrector, 119, 124–128 temporal control command of wavefront corrector, 119, 128–135 Rochester Adaptive Optics Ophthalmoscope, 405 Control loop: schematic of, 194 system gain of, 184–185
Control matrices, 124–127, 184 Conventional imaging: basic system design, 237–239 field size, 244–246 implications of, 441 light source, 242–244 optical components, 239–241 resolution limits, 237 retinal, 236–246 science camera, 241–242, 246 system operation, 246 wavefront sensing, 240–242 Convergence: characterized, 502 error, 487 iterations, 487 Convolution, characterized, 277–278. See also Deconvolution CoolSNAP HQ, 452 Cornea, see Corneal abnormal conditions, 74 aging process, 43–44 astigmatism research, 4 cross section diagram, 313 LASIK surgery, 308 light exposure level at, 450, 452 polarization impact, 53 refractive errors on, 66 soft contact lenses and, 297 transparency in, 55 Corneal: aberrations: calculation of, 35–36 characterized, 63, 239 coupling with intraocular lens, 40 customized, 34 data, 33, 36 defined, 533 measurement, 35 pathological conditions, 301 refractive surgery, 38–39 schematic representation of, 37 ablation: anatomical customization, 319–320 biomechanics, 322–324 conventional, 325 correction strategies, 311 customized, 317–321, 324, 325, 533 excimer laser treatment, 312, 321–323, 325–326 functional customization, 317, 319 laser refractive surgery, 312–317, 326 LASIK flap, 324 optical customization, 320–321
variable rate, 322–324 apex, 297 curvature, 305 curvature map, 340 reflection, 241 topography, xviii, 36, 319 transplantation, 69, 74, 301 Corrective lenses, 336, 477 Corrective methods, conventional, 83 Corrector: devices, types of, 34, 86–88 segmented, 86–87, 548 stability, 461 wavefront, 84–111, 119, 124, 135, 170, 297, 398–403 Correction bandwidth, 408 Correlation width (CW), 347 Cortical neural processing, 335 Coupling coefficient, 86–87, 531 Cross-coupling, 86, 127, 423 Cross-cylinder convention, 333 Crossed-cylinder aberroscope, 36 Crossover, Shack-Hartmann wavefront sensing, 71 Crossover frequency, 198 Crystalline lens, 33, 36, 40, 43, 63, 336 C-scan, 257 Curvature map, 340 Curvature sensing, xviii Customized ablation, 317–321, 324, 325, 533 Customized vision correction devices: contact lenses, 291–304 intraocular lenses (IOLs), 304–308 Cutoff frequency, 280, 350, 429 Cycles per degree (cpd), 533 Cpd, defined, 533 Cycloplegic: drugs, 336 refractions, 311, 533 Cylinder: AO-OCT experiments, 454 characterized, 293, 478 correction of, 294 defined, 533 Cylindrical: lens, 333 trial lenses, 180 Cysts, macular, 220, 224 Dalsa CA-D1, 482 Dark noise, 178, 246, 484 Data: logs, 461 storage systems, 245
Daylight vision, 215 Decentered optical system, 307, 321–322, 513 Deconvolution: applications of, 282–283 linear, 278–280, 284 multiframe blind (MFBD), 281–282 nonlinear, 280–282 Defocus: aberrations, 52, 192 accommodation of, 41 AO-OCT system, 454, 456 AOSLO, 430–432, 436 AO system assembly and integration, 181–182 calibration, 431–432 characterized, 35, 83 closed-loop AO system, 423–424, 431 coefficient, wavefront corrector case illustration, 99 contact lenses and, 296–297, 301–302 conventional imaging, 237 correction/correction efficiency, 4, 6, 44, 50, 335, 413, 457, 474, 477, 503 custom-correcting contact lenses, 301–302 far point and, 335 image quality/image quality metrics, 348 induction of, 306 lens prescriptions, 334 OCT ophthalmoscopes, 264 off-axis aberrations, 47–48 overcorrections, 306 peripheral, 48, 50 point spread function (PSF), 357 polychromatic images, 360 polychromatic light, 355 Rochester Adaptive Optics Ophthalmoscope, 401 scanning laser imaging, 255 Seidel, 333 Shack-Hartmann wavefront sensing, 73, 75 undercorrections, 306–307 wavefront correctors and, 100–101, 105, 107 wavefront error computation, 41 Zernike, 100, 333 Deformable mirror: aberration correction, 149 actuator influence function, 149, 193 actuators on, 120 adaptive optics applications, 34 adaptive optics system, 11 AO-OCT system, 452
AOSLO, 418, 421, 423–424, 426 characterized, 7–8, 10, 238 components of, 239 conventional, 478 defined, 533 discrete actuator, 86, 97, 534 fitting error and, 192 Fried configuration, 120–121, 184 OCT ophthalmoscope, 264–265 optical alignment, 167–168, 182 parallel SD-OCT system, 449 qualification of, 175–181 refraction, 272–273 Rochester Adaptive Optics Ophthalmoscope, 398–400, 402, 408, 411–412 scanning laser imaging, 255 Shack-Hartmann wavefront sensing, 70 Southwell configuration, 120 technology, future directions for, 23 transfer function of, 132 waffle, 120, 184 wavefront correctors, 101–102, 106, 112, 126–127 Degree of polarization (DOP), 54–55, 534 Degrees of freedom (DOFs), 4, 92, 160–161, 174 Depolarization, impact of, 54–55, 228–229 Depth of field, 9 Depth of focus, 168, 240, 261, 336–337, 347, 360, 467–469, 534 Detection threshold, 374–375, 534 Detector(s): alignment process, 166–167 charge-coupled device (CCD), 473 quantum efficiency (QE), 246 Shack-Hartmann wavefront sensor, 461 size of, 245 two-dimensional, 236 Development of adaptive optics, see Historical perspectives Diabetics/diabetes: retinal disease, 438–439 retinopathy, 22 visual effects, 210 Diagnostic display, Rochester Adaptive Optics Ophthalmoscope, 401 Dichromacy, 19 Dielectric beamsplitters, 241, 452 Diffraction: AOSLO, 435 implications of, 6, 23, 83, 157, 179, 236, 331 grating, 451, 454
-limited imaging, 41, 95, 111–113, 245, 254, 264, 442 point-, 265–266, 453, 455 scalar theory, 103 Shack-Hartmann wavefront sensing, 75 Digital current (DC), 427 Digital light projector, 386, 404, 405, 414, 534 Digital micromirror devices (DMDs), 386, 404–405, 534 Digital numbers (DN), 481–482, 534 Digital Video Interface (DVI), 388 Digital-to-analog conversion (DAC), 131–132, 388–389 Dilation methods, 23, 98, 167, 453, 459 Diopters, 273, 331, 334, 336, 343, 431, 454 Dioptric value, 46–47 Direct backscattered light, 222, 227–230 Direct slope control algorithm/matrix, 124–127, 406, 534 Discrete actuator deformable mirrors: applications, 96 characterized, 86, 111–112 defined, 534 Gaussian, 97 macroscopic, 89–90 wavefront corrector illustration, 102–104, 109 Discrete Fourier transform (DFT), 195, 534 Discrimination threshold, 374–375, 535 Dispersion balancing, 259, 265, 270 Displays for psychophysics: cathode ray tube (CRT), 381–385, 388, 405, 531 characterization, 383–384, 388–390 contrast resolution of, 387–388 digital light projector (DLP), 386, 404, 405, 414, 534 gamma function, 389 gamut, 382–383, 536 liquid crystal display (LCD), 381, 384–385, 539 plasma displays, 381, 385, 545 projector systems, 381, 384–386 Display stimuli, 150–151 Disposable contact lenses, 298–299 Distance vision, 41, 317 DM-WS: geometries, 178 registration, 178, 183–184 Doppler OCT techniques, 21, 259 Double-pass (DP) retinal images, 53 d-prime (d′), 374. See also Signal detection theory (SDT) Drusen, 214, 219
Dry eyes, 320 Dual Purkinje Eye Tracker, 17–18 Dynamic behavior: camera stare, 195 computational delay, 194 zero-order, 195–196 Dynamic corrections, 41 Dynamic focusing, 168–169 Dynamic range: characterized, 480–481 conventional imaging, 246 defined, 535 liquid crystal AO phoropter, 507 measuring spherical lenses, 481 microelectromechanical systems, 92 Shack-Hartmann wavefront sensor (SHWS), 481 wavefront sensing research, 67–68, 71–75 Elderly, light absorption in, 42. See also Aging eye/Aging process Electrostatics, 88, 95 Emmetropia, 336, 535 En face: AO-OCT retinal camera, 450 imaging, 448 scanning, 256–257, 265 Entropy (ENT), image quality metrics, 347 Epi-LASIK, 320 Epiretinal membranes, removal of, 22 Equivalent quadratic, 339, 358, 535 Equivalent width (EW), image quality metrics, 346 Error(s), types of: Abbe, 162 bandwidth, 194, 199–200, 530 budget, 160–161 calibration, 191–192, 531 convergence, 487 fitting, 192–194 measurement, 194–199 noncommon path, 409 performance, 191–192 phase, 409 refractive, 66, 104–105, 240, 331, 338, 546 transfer function, 133, 135 wavefront, 41, 158, 175, 180, 190, 200–201, 302–303, 324, 326–327, 332, 334, 338, 340–342, 397, 402–403, 406–409, 423, 432–433, 441, 449, 455–457, 459, 461, 474, 491, 505, 505–506 Euler’s theorem, 342
Excimer laser systems, corneal ablation: characterized, 9, 312, 321–322 clinical results, 325–326 surgery, biomechanical response, 323 Extrinsic markers, retinal imaging, 24 Eye, see Cornea; Corneal; Pupil aging process and, 40 artificial, 166 basic components, 206 biomechanical changes, 9, 322–323, 325 diffraction-limited, 442 -lens system, 336 light sensitivity, 12 movement, 40, 311, 321–322, 433, 443 peripheral optics of, 51 position, tracking with adaptive optics, 15–18 refraction, 11 tracking (in surgery), 322 wave aberration measurement, 10 wavefront sensors for, 63–79 Far point, 334–335, 339 Farsightedness, see Hyperopia Far vision, perfect correction for, 41–42 Fast Fourier transform (FFT), 190, 535 FC fiber, 165 Feedback closed-loop control system, 128 Femtosecond laser, 319 Ferroelectric technology, 91 Fiber-based OCT systems, 266 Field: angle, 50–51, 245 curvature, 46–47 scan angle, 253–254 size, significance of, 244–246, 267–268, 432 of views (FOVs), 175, 182, 469 Fifth-order: aberrations, 50 Zernike polynomials, 98 Fill factor(s), 87–88, 95, 106, 246 Finite-element analysis, 101–102, 112–113 First-order optics, 156–157, 535 Fitting errors, 192–194, 536 Fixation: implications of, 255, 272 target, 156, 246 Flap cut, effect of, 324 Flash lamp, 242–243, 399, 414 Flat-fielding, 200 Flat file, 176 Flexible intraocular lenses (IOLs), 305 Floaters, 218
Flood illumination: AO-OCT system, 266, 447–448, 452, 461–463 characterized, 182, 238 conventional, 236, 256, 263, 266–269, 449, 451, 454, 456, 461–464, 474 OCT ophthalmoscope, 256–257, 259 retinal cameras, 95 Rochester Adaptive Optics Ophthalmoscope, 403, 413 SD-OCT system, 450 source of, 156 Fluorescein, 21, 211, 439 Fluorescent imaging, vascular structure and blood flow, 21 Fluorescent markers, 24 Flying-spot lasers, 312 F-number, 168 Focal: length, 73–74 plane, 263 Focus/focusing: conventional imaging and, 242 depth of, 336–337, 347, 360, 467–469 optical alignment, 168–169 scanning laser imaging, 255 Fold mirrors, installation of, 181–182 Foucault knife-edge technique, 36 Fourier optics, 277 Fourier transform: Discrete Fourier transform (DFT), 195, 534 Fast Fourier transform (FFT), 190, 535 implications of, 104, 259, 348, 450, 487, 493 inverse, 473 linear deconvolution, 278 nonlinear deconvolution, 280 Fourth-order aberrations, 6, 46 Fovea/foveal: anatomical view, 514 avascular zone, 206 blood flow, 439 characterized, 34, 206, 215, 217, 225, 410–411 cones, human, 15 crest, 207–208 off-axis aberrations, 51 defined, 536 photoreceptors, 227, 334 pit, 207–208, 270, 467 refraction, 47–48 vision, 47 Frame: grabbing, 255, 426, 428–429 rate, 140, 200
Free run mode, 140–141 Free-space OCT systems, 266 Frequency domain, 351, 357 Fresnel/lenticular screens, 385 Fresnel microlenses, 75 Fried: configuration, 120–121 geometry, 178, 183–184 Full width at half height (FWHH), 259–260, 455–456, 465 Full width at half maximum (FWHM), 179, 191, 441, 443, 536 Functional customization, 317, 319 Fundus, 18, 220, 223–225, 229, 536 Gabor function, 366 Gabor pattern, 364–366, 536 Gain/gain factor, 184–185, 406, 536 Galileo, 4 Galvanometric scanner, 253, 419 Gamma function, 389 Gamut, 382–383, 536 Ganglion cells/cell layer: AO parallel SD-OCT imaging, 467–468, 470–471 characterized, 20, 23–24, 218, 261 of retina, 216 Gaussian: beam, 470 curvature, 342, 492 fit algorithm, 484 moment, 346 optics, 535 probability density function, 372 statistics, 281 Genesis-LC, 426 Geometrical optics, 205, 207 Glaucoma, 23, 224 Glial cells, 215, 218 Gram-Schmidt orthogonalization method, 35 Graphical user interface (GUI), 153 Ground-based telescopes, 7, 97 Half width at half height (HWHH), 346–347, 359 Hamming window, 197 Hanning window, 197 Haptics, 304–305 Hardware calibration, Shack-Hartmann wavefront sensing, 75–76 Hartmann spot, 169, 178. See also Shack-Hartmann wavefront sensor Helmholtz, 4, 236 Heterochromatic flicker photometry, 369
High-contrast visual acuity, 303 Higher order aberrations: correction strategies, 4, 6–9, 11, 44, 49–50, 291, 294, 296–297, 302–303, 305–308, 312, 322, 325–327, 336, 358–359, 401, 462, 478, 492 surgical correction of, 307–308 High-frequency scanning, 253 High-quality eye, 345 High-resolution: camera, 140 imaging, 23–24 OCT images, 467 retinal, see High-resolution retinal imaging wavefront sensor, 7 High-resolution retinal imaging: AOSLO, 433 characterized, 63, 79 common issues of, 271–276 conventional imaging, 236–246 image postprocessing, 276–284 OCT ophthalmoscope, 236, 256–271 overview of, 235–236 scanning laser imaging, 247–255 High-speed imaging, 243 High-voltage amplifiers (HVAs), 132 Historical perspectives: aberration correction in human eye, 3–9 ocular adaptive optics, 9–24 Horizontal cells, 217 Horizontal scanner, AOSLO, 427–428, 432 Horizontal scanning mirror, 419–420 Horizontal synchronization pulse (hsync), 420, 537 hsync signal, 426, 428–429, 537 Hubble space telescope, 7 Hue, 355 Human eye, see Eye Human subjects, 502–503 Hydrophilic lenses, 295 Hyperfocal: point, 336, 537 refraction, 336–337, 358, 360 Hyperopia: characterized, 4, 209, 317, 321, 323, 332 corneal laser surgery, 324 defined, 537 latent, 337 positive spherical lenses, 336 refractive surgery, 313 Identification threshold, 374–375, 537 Illumination beam, 223 Image, see Image acquisition; Image postprocessing; Image quality
flux, 190 optical alignment, 169 recording, 418 sharpening, 191–192, 457 stabilization, 18 Image acquisition, AO software: frame rate, 140 pupil imaging, 141–142 rates, 245 synchronization, 140–141 Image postprocessing: convolution, 277–278 deconvolution applications, 282–283 linear deconvolution, 278–280 nonlinear deconvolution, 280–282 overview of, 276–277 Image quality: loss of, 336 metrics, 11 Area of visibility for rMTF (AreaMTF), 350–351 Area of visibility for rOTF (AreaOTF), 351 Entropy (ENT), 347 Equivalent width (EW), 346 Light-in-the-bucket (LIB), 347 Neural contrast threshold function (CSF N), 351–352 Neural sharpness (NS), 348 Radially averaged modulation transfer function (rMTF), 350 Radially averaged optical transfer function (rOTF), 350 Square root of second moment (SM), 346 Strehl ratio, frequency domain, MTF Method (SRMTF), 351 Strehl ratio, frequency domain, OTF Method (SROTF), 351 Standard deviation (STD), 347, 374, 377, 379 Visual Strehl ratio, frequency domain, MTF Method (VSMTF), 352 Visual Strehl ratio, frequency domain, OTF Method (VSOTF), 352, 359 Visual Strehl ratio, spatial domain (VSX), 348 Volume under neurally weighted OTF (VNOTF), 352 Volume under OTF (VOTF), 352, 359 neural mechanisms of, 11 retinal, 296 significance of, 143, 200, 335, 357, 359–360 Imaging, see Conventional imaging pipeline, 140
time, 276 wavelength, 210 Impedance, 257–258, 485 Incoherent: light, retinal imaging research, 6 subsystem, SD-OCT system, 450–451 Independent influence, 101–102 Index of refraction, 230, 438, 469 Indiana AO ophthalmoscope, 239, 241–243, 246, 266, 460 Indiana Eye model, 354–355 Indiana schematic eye, 270 Indiana University AO-OCT system: AO performance, 455–461 conventional flood-illuminated imaging, 461–463 description of, 448–453 experimental procedures, 453–455 parallel SD-OCT imaging, 463–474 significance of, 447–448 Indocyanine green dye, 210–211 Influence: functions, 101–102, 122, 149–150 matrix, 126, 184 In-focus reflection, 241 Infrared beams, 166 Infrared (IR) light, 44 Inner and outer photoreceptor segments (IS/OS), 215–216, 467–468, 470–472, 537, 543 Inner limiting membrane (ILM), 216, 466 Inner layers: nuclear (INL), 216, 218, 467–468, 471 plexiform (IPL), 216, 218, 467–468, 470–472 Integrator, defined, 184 Interchangeable mirrors, alignment and, 167 Interference: AO parallel SD-OCT imaging, 473 effects, 340 filter (IF), 242–243 signature, 259 Interferogram, 94, 300 Interferometer: AO system integration, 176 in optical alignment, 170–171, 173–174 Shack-Hartmann wavefront sensing, 78–79 Interferometry, 5, 85, 256 Intermediate image location, 186 Internal aberration: defined, 537 measurement of, 37 Internal ocular optics, 35–38
Internal optics, aging process, 43 Intraocular lenses (IOLs): aberrations, types of, 305–306 characterized, 34 defined, 304, 537 flexible, 305 higher order aberrations, 306–308 implanted, 39, 306 lens decentration, 307 manufacturing process, 308 phakic, 321 polymerization, 306 rigid, 305 Intraocular scatter, 42, 537 Inverse Fourier transformation, 473 Iris: custom-correcting contact lenses, 303 as point target, 165 Irradiance, 277 Irregular astigmatism, 4 Isoplanatism, 245 Jackson crossed cylinder, 333 Just noticeable difference (JND), 374 Keck telescope, 456 Kepler, 4 Keratoconus, 9, 69, 74, 300–301, 303, 538 Keratoplasty, 9, 301 Kinematic: placement techniques, 164 principles, 167 Knife edge, optical alignment technique, 169 Kolmogorov distribution, 52 Krypton flash lamp, 242–243, 414 Lambertian scatter, 212 Lamina cribrosa, 209, 228 Laminar flow, 211 Laplace transforms, 131, 194, 408 Laser-assisted epithelial keratoplasty (LASEK), 320, 325–327, 538 Laser(s): ANSI standards, 419 corneal ablations, 311–327 diodes, 243, 451, 484–485 illumination, 219 low coherence, 419 propagation, 157 ray tracing, xviii, 63, 65–66 surgical procedures, see Laser-assisted epithelial keratoplasty (LASEK); Laser in situ keratomileusis (LASIK); Laser refractive surgery
Laser in situ keratomileusis (LASIK): applications, 325, 327 corneal ablation, 311, 319, 323 defined, 538 higher order aberration correction, 307–308 hyperopic treatment, 316–317 myopic treatment, 314–315, 317–318 research, 314–316 wave aberration, 127 Laser refractive surgery: applications, 73, 79, 98, 311 basics of, 312–317 customized, 63 Lateral chromatic aberration, see Longitudinal chromatic aberration (LCA) Lateral misalignment, 442 Lateral resolution, 265, 270 Lawrence Livermore National Laboratory (LLNL), 9, 478, 494 Layout, 164 Lead-magnesium-niobate (PMN) actuators, 89–90 Least-squares: algorithm, 127 fitting, 339 method, 487 Lens: cataract surgery, 304 contact lenses and, 295–298 crystalline, 33, 36, 40, 43, 63, 336 displacement of, 44 intraocular lenses (IOLs), 304–305 natural, 305–306 optical power of, 306, 308 optical zone of, 299 polarization impact, 53 prescriptions, 332–334 schematic diagram, 206 transparency of, 55 Lenses, cylindrical, 180, 273 Lenslet(s), see Lenslet arrays AOSLO, 420–422 centroid measurement, 142, 179 characterized, 66 configuration of, 120 conventional imaging, 241 defined, 539 high-density configuration, 121–122 Rochester Adaptive Optics Ophthalmoscopes, 400 Shack-Hartmann wavefront sensing, 67–71, 75, 78–79
Lenslet arrays: characterized, 238 calibration process, 77–78 focal length of, 73–74 Lenticular aberrations, 239 Lesions: retinal, 208, 220 solar retinopathy and, 440 Leukocytes, 11 Life span, retinal image quality, 43 Light: absorption, 42 angular dependence of, 12 budget, 271–272 delivery optics, 418–419 detection, 262–264, 418 distribution, polarization effects, 53–54 scatter, see Light scatter/scattering sources, 242–244, 267 Light Adjustable Lens (LAL), 306 Light-emitting diodes (LEDs), 391 Light-in-the-bucket (LIB), 357 Light scatter/scattering, implications of, 55, 206–207, 420 Limbal junction, 295, 539 Linear deconvolution, 278–280, 284 Linear inversion filter, 279–280 Linear sum model (LSM), 102 Line of sight (LOS), optical alignment, 165, 167–168, 174, 181, 297 Lipperhey, Hans, 4 Liquid crystal AO phoropter: AO assembly, 491–492 beacon selection, 484–485 calibration, 492–502 human subject results, 502–506 integration, 491–492 software interface, 489–491 system performance, 492–502 testing procedures, 492–502 troubleshooting, 491–492 wavefront corrector selection, 485–486 wavefront reconstruction, 486–489 wavefront sensor selection, 478–484 Liquid crystal display (LCD): characterized, 75 defined, 539 monitors, 381, 384–385 PAL-SLM, 485–486 projectors, 386, 539 Liquid crystal spatial light modulators (LC-SLMs), 34, 88, 90–91, 99, 176, 477, 539 Liquid crystal technology, 86
Littrow’s angle, 449 LogMAR: defined, 539 visual acuities, 302–303, 368, 539 Longitudinal chromatic aberration (LCA), 44–45, 51, 108, 270–271, 346, 354–355, 357, 404, 540 Long-range scatter, 225 Long wavelength sensitive (L) cones, 15, 16, 19, 215, 217, 226, 284 Lookup table, 493–494, 508 Loop gain, 195, 200 Low coherence laser sources, 419 Low-contrast visual acuity, 303 Lower-order aberrations, 191, 307, 320, 401, 477 L-square applications, 166 Luminance, 335, 355, 357, 365–366, 368–369, 381, 540 Macromers, photosensitive, 306 Macula/macular: anatomical view, 514 bull’s-eye lesion in, 20 characterized, 206, 209, 211 degeneration of, see Macular degeneration disease, 224 pigment, 217–219, 225 Macular degeneration, 210–211, 220, 224–225 Macwave, 456, 461 Magnetic resonance imaging (MRI), 209 Magnification telescope, 158 Manifest refraction, 311, 540 Manifest refractive spherical equivalent, 315, 540 Maréchal approximation/criterion, 158, 192, 200 Matched-filter correlation, 178 Material dispersion, 107 MATLAB, 151, 380, 405, 456–457, 461 Maximum a posteriori (MAP), 280 Maximum likelihood (ML), 280–281 Maximum permissible exposure (MPE), 23, 245, 398, 419 Maxwellian-view optical systems, 390 Measurement error, 194–199, 540 Measurement sensitivity: defined, 540 Shack-Hartmann wavefront sensing, 67, 71–75 Meiosis, 215 Melanin, 66, 214, 218–220 Melanosomes, 213
Membrane mirrors, 88, 96, 109–112, 540 Method of adjustment, 375–376, 540 Method of constant stimuli, 378–379, 541 Michelson: contrast, 377 interferometer, 258 Microaneurysms, 22, 438–439 Microelectromechanical system (MEMS) mirrors: AO software applications, 152 applications, 96 characterized, 92–95, 157, 478 deformable, 10 scanning laser imaging, 255 Microelectromechanical system (MEMS) technology, 86, 176, 541 Microflashes, detection of, 19 Microglia, 215, 218 Microkeratome laser, 319, 324 Middle wavelength sensitive (M) cones, 15, 16, 19, 215, 217, 226, 284 Midpoint, out-of-focus planes, 168 Mirror(s), see specific types of mirrors AOSLO, 425–426 bimorph, 88, 91–92, 96–97, 530 control of, see Mirror control curved, 240 diameter, wavefront correctors, 112 OCT ophthalmoscope, 258 raster scanning, 418 reflective wavefront corrector and, 85–86 scanning laser imaging, 253, 255 Mirror control: algorithm, 421–423 AOSLO, 430 significance of, 151 Misalignment: implications of, 158–159 pinhole, 442–443 sources of, 161 Misregistration, 497 Mitosis, 215 Modal correctors, 88 Modal wavefront control algorithm, 127, 541 Modulation transfer function (MTF): aging eyes, 42 characterized, 42, 277–278, 281, 349–350, 357, 367, 505–506 defined, 541 human eye, 367 implications of, 277–278, 281, 349, 357, 505–506 off-axis aberrations, 48, 50–51 polarization effects, 53
Monochromatic: aberrations: characterized, xvii, 4, 6, 35–40, 45, 238–239, 358–359 influential factors, 33–34 interaction with chromatic aberrations, 45 measurement of, 33 light, 9, 15, 411 metrics, 358 refraction, estimation from aberration map: equivalent quadratic, 339 methodology evaluation, 358–359 numerical example, 353–354 overview, 327, 337–338 virtual refraction, 339–353, 355 retinal image quality, 51 Monostable multivibrator, 428, 541 Monovision, 317 Müller cell matrix, 23, 215, 218 Multiply scattered light imaging: characterized, 222–223 contrast from, 227–230 Multiframe blind deconvolution (MFBD), 281–282, 284 Multimode laser diode, 451 Multimode step index optical fiber, 451 Multiphoton imaging, 24 Multiplexer, 428, 541 Murcia optics lab study, 52 Mutated genes, 19 Myopia: characterized, 98, 209, 293, 314–315, 317–318, 321, 323, 332, 356 correction of, 4, 294 defined, 541 monochromatic refraction, 358–359 negative spherical lenses, 336 polychromatic light, 356 refractive surgery, 313–314 treatment strategies, 323, 325 Myopic deconvolution, 284 N-alternative-forced-choice (NAFC) procedure, 376, 529, 542 Narrow bandwidth, 242 National Science Foundation Science and Technology Center, xviii National Television System Committee (NTSC), 426, 542 Near-infrared: illumination, 210, 212, 462 imaging, 241
source, 399 spectrum, 267 Nearsightedness, see Myopia Near-ultraviolet light, 306 Negative magnification, 158 Neovascular membrane, 221–222 Nerve fiber bundles, 227–229 Nerve fiber layer: AO parallel SD-OCT imaging, 467–469, 471, 473–474 AOSLO, 436 retinal, 219–220, 227, 229, 261 Nervous system, blur adaptation and, 11 Neural contrast threshold function (CSF N), 351–352 Neural sampling, 335 Neural sharpness (NS), image quality metrics, 348 Neural vision system, 334 Neurosensory retina, 220 Neutral density filters, 464 Night driving, 321 Noise: AO parallel SD-OCT imaging, 473 AO system performance, 196 AOSLO trace, 17–18 conventional imaging, 241, 246 dark, 178, 246, 484 error measurement, 198–199 linear deconvolution and, 284 Shack-Hartmann, 483–484 speckle, 270 Noise-alone distribution, 371–373, 377 Noncommon path: aberrations, 159, 185, 413, 457, 542 errors, 409 lengths, AO-OCT system, 453 Nonlinear deconvolution, 280–282 Nonnulling operation, 424, 436 Nuclear layer, of retina, 216, 218, 261 Numerical aperture (NA), 23, 241, 259, 451 Nyquist criterion, 192 Nyquist sampling: limit, human eye, 367 theorem, sampling rate, 487 Objective refraction, 360 Object irradiance, 278–282, 284 Oblique astigmatism, 46 Oblique: coma, 47 effect, 366
Ocular aberrations, see Aberrations Ocular adaptive optics, see Adaptive Optics (AO) system Ocular hazards, 271 Ocular media, 55, 83, 389 Ocular optics, aberration-free, 34 Off-axis: aberrations: characterized, 46–47, 239–240, 426 chromatic, 48–51 correction of, 51 monochromatic, 48–51 monochromatic image quality, 51 peripheral refraction, 47–48 field point, 157 SLD illumination, 399 Offline alignment, 170–174 On-axis: correction, 44 field point, 157 wavefront error, 190 Open-loop: AO control systems, 128–129, 424, 432 bandwidth, 133–134 control system, 128–129, 542 system transfer function, 408 Ophthalmoscope: conventional, 236 development of, 5, 8 OCT, see Optical Coherence Tomography (OCT) SLO, see Scanning Laser Ophthalmoscope (SLO) Ophthalmoscopy, conventional, 20 Optic: disk, 209 nerve: fiber layer, of retina, 216 head, 207–210, 218, 222, 226, 228–229, 542 schematic diagram, 206 Optical alignment: common practices, 163–170 components of, 157–158, 180–182 misalignment penalties, 158–159 offline alignment, sample procedure, 170–174 optomechanics, 159–163 Optical: axis: angle targets, 165–166 conventional imaging, 243 defined, 543 establishment of, 164
measurement of, 294 optic alignment onto, 167–170 point targets, 164–165 rough targets, 165–166 coherence tomography (OCT). See also Optical Coherence Tomography (OCT) density, 225, 389 focus, 334 path: difference (OPD), 172, 321 length, 35, 321, 464 OCT ophthalmoscopes, 265 quality metrics, 360 quality, off-axis, 48 slicing, 424 transfer function (OTF), 346, 349, 357, 359, 543 zone (OZ), 323 Optical Coherence Tomography (OCT): AO-OCT ophthalmoscopes, basic layout of, 264–266 resolution, 23 AO parallel spectral-domain OCT, 466–475 characterized, 5, 83–84, 95, 229–230, 236, 256–257, 447 chromatic aberrations, impact of, 268–271 dispersion balancing, 259 Doppler, 21, 259 fiber-based, 259 field size, 267–268 free-space, 259 imaging light source, 267 light detection, 262–264 optical components, 266 parallel spectral-domain (SD-OCT), 447, 452, 454, 463–465 phase-sensitive, 259 polarization sensitive, 259 principle of operation, 257–259 resolution limits, 5, 236, 259–263 speckle, impact of, 268–271 spectral-domain (SD-OCT), 89, 256–257, 259–260, 262–266, 268–270, 549 spectroscopic, 259 time-domain, 256–257, 259, 261–264, 266, 454 wavefront sensing, 266–267 Optical Society of America: functions of, 334 Standards for Reporting Optical Aberrations, 511–527
Optimal closed-loop feedback control, 128 Optomechanics, in AO system: adjustment fixtures, 162 design considerations, 155 fundamentals of, 159–161 hardware selection, 161 immobilizing subjects, 162–163 mechanical isolation, 162–163 moving loads, 162 stray light, avoidance strategies, 163 thermal effects, 162 Ora serrata, 208 OSLO ray tracing software, 240 Outer limiting membrane, of retina, 216 Outer layers: nuclear (ONL), 216, 218, 467–468, 471–472, 474 plexiform (OPL), 216, 218, 467–468, 471–472 Outer segment, 215, 218, 225, 543 Out-of-focus reflection, 241 Out-of-plane scatter, 224 Overrefraction, 275 Parallax, 156–157 Parallel aligned nematic liquid crystal spatial light modulator (PAL-SLM), 485, 543 Parallelism, 166 Paraxial: curvature matching, 339 optics, 535 rays, 333 Partial data, 145 Patient stability, 272 Peak sensitivity, 389 Peak-to-valley (PV): difference, 341, 499 errors, 101, 110, 180 wave aberrations, 97–101, 103, 106 wavefront error, 180 Penetrating keratoplasty, 301 Perceptual weighting, 335 Peripapillary region, 228–229 Peripheral light, 323 Phacoemulsification, 304 Phakic: defined, 543 intraocular lenses, 321 Phase: delay, spatial light modulator (SLM) response, 495 diversity, 192 errors, 350 function, testing SLM stroke, 176
modulation, 91, 485 plate, 293, 303 resolution, 106 retrieval, xviii, 192 -sensitive OCT, 259 transfer function (PTF), 349, 544 wrapping, 92, 100, 106–108, 341, 493–495, 500, 544 Phoropter, adaptive optics, 9–10, 544. See also Liquid crystal AO phoropters Photometers, 389, 544 Photomultiplier tube (PMT): characterized, 5, 255, 418, 426, 428–429, 432 defined, 544 Photons: conventional imaging, 245 flux, 166 multiphoton, 24 Photopigments: characterized, 15, 214, 217 defined, 544 distribution of, 217, 225 spectral sensitivities, 227 Photoreceptor(s): characterized, 5–6, 11, 206, 215 cones, see Cones defined, 544 degeneration, 19–20 distribution of, 225–226 inner segments, 215–216, 467–468, 470–472, 537 mosaic, 16, 20, 24, 220, 411 mosaic, cone-rod dystrophy, 21 optics, retinal imaging, 11–14 outer segments, 215, 218, 225, 467–468, 470–472, 543 rods, see Rods sampling, 51, 367 trichromatic mosaic, 16 Photorefractive keratectomy (PRK), 311, 320, 323, 325, 544 Piezoelectric: constant, 110 technology, 86 Pinhole pupil, 45 Pinning error, 102 Piston-only mirrors: characterized, 87, 111 segmented mirrors, 97–98, 106–107, 112, 544 Piston/tip/tilt mirror: functions of, 103, 107–109 segmented mirrors, 87, 94–95, 97, 111–112, 545
Pixel(s): in AO system assembly and integration, 177–178 architecture, 246 CCD, 179 conventional imaging, 245 OCT ophthalmoscopes, 268 psychophysics, 383–384 sampling density, 245 SD-OCT system, 449–450 Shack-Hartmann wavefront sensor (SHWS), 481–482 Plane reflector, AOSLO, 437–438 Plasma display, 381, 385, 545 Plate scale, measurement of, 177, 179–180 Plexiform layer, of retina, 216, 218 Plexiglas contact lens, 300–301, 303 Point-diffraction: AO-OCT ophthalmoscopes, 266 interferometer, 265, 453, 455 Point source, 190 Point spread function (PSF): axial, 258–260, 465, 530 AO-OCT system, 456–458 AOSLO, 423, 431, 441–442 chromatic aberrations, 354–356 convolution and, 277 corneal aberrations, 37–38 deconvolution and, 280–282 defined, 545 diffraction-limited, 245 image quality metrics, 345–349 implications of, 11–12, 104, 189–190, 237, 301 monochromatic, 357 Rochester Adaptive Optics Ophthalmoscope, 407 scanning laser imaging, 255 selective correction, 151 wavefront correctors, 102 Point targets, optical axis: CCD, 165 defined, 164 on image plane, 166 iris, 165 machined targets, 164–165 optical fiber, 165 wire crosshair, 165 Polarimetry, 54, 228 Polarization: AOSLO, 443 imaging, 24 impact on ocular aberrations, 34, 53–55, 229–230
-sensitive: imaging, 91 OCT, 259 Polychromatic: light, 360 optical transfer function, 357 refraction: characterized, 354–356 clinical, 355 grating image metrics, 357 methodology evaluation, 359–360 point image metrics, 357 wavefront metrics, 356–357 Polygon scanner, 253 Postoperative healing process, 307 Postreceptoral sampling, 51 Postsurgical eyes, visual acuity, 303–304 Power: rejection curve, 459–460 spectra: AO parallel SD-OCT imaging, 471–473 defined, 277, 545 temporal, 196–198, 409, 458–459, 461 spectral density (PSD), 198–199, 545 vector, 334, 339, 341–342 Powered mirrors, installation of, 181 Precision optics, 34 Presbyopia, 305, 317, 545 Primates, retinal imaging, 24 Prisms, 331 Projector systems: characterized, 381, 384–386 CRT, 386 digital light (DLPs), 386, 404, 414 LCD, 386 Prototypes/prototyping, 34 Pseudo-code, 145–146 Pseudophakic eye, 304, 306, 546 Psychometric functions, 377–378, 546 Psychophysical criterion, 358 Psychophysical function, 364, 546 Psychophysical methods: constant stimuli, 378–379 forced-choice, 376 implications of, 156, 334–335 method of adjustment, 375 staircase, 379–380 threshold, 370–375 Psychophysical tests, liquid crystal AO phoropter, 507, 546 Psychophysics: characterized, 140, 363–364, 391 contrast sensitivity function (CSF), 364–367
defined, 546 displays, see Displays for psychophysics psychometric functions, 377–378, 546 psychophysical functions, 363–364 methodologies, see Psychophysical methods sensitivity, 371 stimulus display, Rochester Adaptive Optics Ophthalmoscope, 404–405 threshold, 370–371 yes/no procedure, 371–372 PsychToolbox software, 380, 388 Pupil: alignment, 255 artificial, 418 decentrations, 321 defined, 156, 546 diameter of, 243, 259 dilation of, 98, 453, 459 in the elderly eye, 42 entry point, 66 fraction, 343–344 fully dilated, 23 glow in, 5 imaging with AO software, 141–142 light passing through, 4–5, 65 magnification of, 74–75 measurement of, 143 off-axis aberrations, 48–50 optical alignment, 169–170 photoreceptor optics, 12–13 polarization effects, 55 prism differences, 45 reducing size of, 101 retinal imaging, 212 scanning laser imaging, 253 schematic diagram, 206 Shack-Hartmann wavefront sensing, 79 size of, 34, 83, 95, 98, 104–105, 107, 110, 112–113, 237, 271, 336 stabilization of, 443 tracking system, 152, 443 visual acuity measurement process, 303 wavefront, 182 wavefront sensor research, 63, 66–67 Purkinje, 5 Pyramid sensing, 36, 85 Quantitative imaging, 218 Quantix:57, 449 Quantum efficiency (QE), 241, 246, 263 QUEST staircase procedure, 379–380 Radially averaged modulation transfer function (rMTF), 350
Radial Zernike order, 7, 518–522 Radiation, energy density of, 425 Radially averaged optical transfer function (rOTF), 350 Raster scanning: characterized, 5, 253–254 mirrors, 418–420 Rayleigh: criterion, 341 range, 157, 469–470 resolution limit, 245 scatter, 212 Ray tracing software, 240, 266 Real-time AO, 441 Reconstruction: algorithm, calibration process, 76–77, 127 matrix, 124, 193, 423 wavefront, 180–181, 486–489, 504 Zernike matrix, 207 Zernike mode, 123–124, 128 Reconstructor, 193 Red blood cells, 23 Red, green, and blue (RGB) channels, 384–386, 388–389 Red-green color blindness, 19 Redundancy, 160 Reference: axis selection, OSA Standards, 513–515 beacon, 156 beam, 263 centroids: AO system gain and, 184–185 calibration of, 185–186 characterized, 159, 181, 184, 192 Reflectance, 5, 66, 241–242 Reflectivity, 112, 259 Reflectometry, 213 Reflex tearing, 298 Refraction: automated, 9 clear vision range, 336–337 conventional, 84, 212, 272–276, 358, 360 far point, 334–335 goal of, 334–337 impact of, see Refractive methodology evaluation, 358–360 monochromatic, 339–354 off-axis aberrations, 51 peripheral, 47–48 polychromatic, 354–356 by successive elimination, 335 virtual, 339–353 Refractive: aberrations, 332
correction, 291, 331–332 errors, 66, 104–105, 240, 331, 546 index: aging process and, 42–43 corneal aberrations, 35 electronic control of, 88 statistics of aberrations, 52 surgery: applications, 34, 299, 326–327 defined, 547 intraocular lenses (IOLs), 307–308 wavefront-guided, 9, 38–39 Refractometer, xviii, 53, 63, 65 Regions of interest, 142 Registration: defined, 547 DM-to-WS, 183–184 spatial light modulator (SLM), 496–497, 499 Relay, 77–78, 156, 547 Relay telescope, 157–158, 170–174, 181–182, 449 Remainder lens, 35 Residual aberration, 133–134 Resolution: axial, 5, 23, 256, 267, 270, 434–438, 441, 463–467, 475 contrast, 387–388 conventional imaging, 237 defined, 191 high-: camera, 140 imaging, 21–24, 467 retinal imaging, 63, 79, 235–284, 433 wavefront sensors, 7 improvement strategies for, 23–24 lateral, 265, 270 limits, 237, 245, 249, 259–262 OCT ophthalmoscopes, 259–262 in optical alignment, 160 phase, 106 spatial, 90, 129–130, 381, 383 spurious, 350–351 temporal, 130, 381, 383 transverse, 5, 21, 260–262, 448 ultra-high, 260, 263, 266 VGA, 90 volume, 418 XGA, 90 Resonant frequency, 99 Resonant scanner, 419 Retina, see Retinal angular tuning properties, 12 blood supplies, 209–210 contrast from scattered light, 227–230
cross section diagram, 212 defined, 205, 547 fundus, 210–218 images/imaging, see High-resolution retinal imaging; Retinal imaging; Resolution, high-, retinal imaging light: distributions on, 15 scattering, 220–227 main layers, diagram of, 216 neural, 212, 215 polarization, 227 shape of, 206–209 spectra, 218–220 stabilization of, 443 visual angle, 205 Retinal: cameras, 110 degeneration, 20–21 densitometry, 16 disease, 11, 18–21, 24 eccentricity, 33–34, 51 hazard, 245 illuminance, 156, 245, 370 image/imaging, see Retinal image pigment epithelium (RPE), 214–218, 467–471, 547 vein occlusions, 22 Retinal image: adaptive optics, 6–8, 11–24 AO-OCT experiments, 454–455 AO software applications, 140, 150 blood flow, 439 characterized, 205 conventional, 236–246 deconvolution of, 15 diffraction-limited, 41 double-pass, 36, 53 high-contrast, 225 high-resolution, see High-resolution retinal imaging; Resolution, high-, retinal imaging historical perspectives, 236 microscopic in vivo, 5–6, 11 one-shot, 150 peak-to-valley (PV) errors, 101 quality: influential factors, 33–34, 36, 42–44, 355–357 limitations on, 44 polarization state and, 53 Rochester Adaptive Optics Ophthalmoscope, 403–404 size, 354
Retinopathy, solar, 440 Retinoscopy, 207 Retroreflection, 214 Reversal, defined, 379 Rhodamine dextran, retinal imaging, 24 Rigid gas-permeable (RGP) lens, 293–295 Rigid intraocular lenses (IOLs), 305 Rochester Adaptive Optics Ophthalmoscope: characterized, 128, 239–244, 246, 397 control algorithm, 405–407 optical layout, 398–405 retinal image quality, improvement in, 409–410 schematic diagram of, 398 system limitations, 412–414 visual performance, improvement in, 410–412 wavefront correction performance, 406–409 Rods: characterized, 215–217 cone-rod dystrophy, 20–21 photoreceptors, 225–227 Root-mean-square (RMS): conventional imaging, 246 defined, 547 fitting errors, 192–194 phase errors, 190 wavefront, 68, 95, 110 wavefront correctors, 90, 102–104 wavefront error: AOSLO, 423, 432–433, 441 customized vision correction devices, 297, 302–303 customized corneal ablation, 324, 326–327 convergence of, 505–506 high-resolution retinal imaging, 238 history, trace of, 147–148 Indiana University AO-OCT System, 449, 455–457, 459, 461, 474 in normal young subjects, 34 liquid crystal AO phoropter, 489, 491, 502 misalignments, 158 refraction and, 332, 334, 338, 340–342 Rochester Adaptive Optics Ophthalmoscope, 397, 402–403, 406–409 wavefront sensor, 483–484, 499, 501–502 Sagittal: focus, 46–48 plane, 46 Sampling frequency, 130, 198
Sandia National Laboratories, 9 Satellite-tracking telescopes, 8 Saturation, 355 Scalar: diffraction theory, 103 metrics, 340, 342 Scanning, see specific types of scanning AOSLO system, 425–426 lasers, 312 mirrors, AOSLO, 432 slit refractometer, xviii slit wavefront sensors, 321 Scanning laser imaging: basic layout of AOSLO, 249 compensation, 252 confocal, resolution limits, 249 frame grabbing, 255 light: delivery, 251–252 detection, 254–255 path, 249–251 overview, 247–248 raster scanning, 253–254 SLO system operation, 255 wavefront sensing, 252 Scanning laser ophthalmoscope (SLO): architecture, 256 characterized, 7, 95, 417–418, 447, 475 closed-loop AO system, axial sectioning, 423–424 custom, 92 development of, 5 high-magnification, 24 light delivery, 251–252, 419 mirror control algorithm, 421–423 raster scanning, 253, 419–420 retinal imaging, 211, 221–223, 229 system operation, 255 wavefront: compensation, 421 sensing, 252, 420–421 Scars, corneal, 63 Scatter/scattering: contrast and, 227–230 impact of, 34, 55, 206–207, 220–227, 245–246, 331, 420 intraocular, 55, 537 Lambertian, 212 long-range, 225 multiply, 222–223, 227–230 out-of-plane, 224 single, 222 Scheiner, Christoph, 4 Science camera, 85, 241–242, 246
Scintillation, 190, 239 Sclera: characterized, 212–213 defined, 547 RGP lenses, 295 Scripting, 151, 153 SD-OCT imaging, AO parallel: characterized, 466–474 defined, 447 sensitivity and axial resolution, 463–466, 475 Search box, 142, 548 Second-order: aberrations, 97, 99–101, 104, 106–108, 112, 337, 358 Zernike coefficients, 331, 334, 339 Zernike terms, 104 Segmented corrector, 86–87, 548 Segmented mirrors, 87–88, 97–98, 111, 548 Seidel: approximation, 46 power-series expansion, 332 Selective correction, 151 Senile miosis, 42 Sensitivity, defined, 365, 548 Sensor(s), functions of, see specific types of sensors Sensory response strength probability density function, 373 Shack-Hartmann wavefront sensor (SHWS): aberration detection, 291 aliasing, 507 AO-OCT system, 266, 452–453, 455–458, 460 AOSLO, 420, 422 characterized, xviii, 6–8, 10, 36–37, 50, 64–68, 85, 89, 156, 237, 477–483 configuration of, 120 conventional imaging, 240–241 corneal ablation, 312, 321 crossover, 71 defined, 548 detector, 461 development of, 84 double spots, 507 dynamic range, 71–75, 480–481, 507 fitting errors, 193 hardware calibration, 75–76 limitations of, 71 measurement sensitivity, 71–75 microelectromechanical systems (MEMS) mirrors and, 92, 95 optimization strategies, 68–75 qualification of, 177
registration, 496–498 slope: displacement, 405–406 measurement, 420 spatial light modulation (SLM), 503, 507 spot pattern, 178, 241 spot image, 177, 179, 241, 400–401, 411, 484, 488–490, 504 time stamping of measurements, 460–461 Sharpness: image sharpening strategies, 191–192, 457 loss of, 335 neural, 348 Shear plate, in optical alignment, 170 Shift-and-add technique, 433 Short-burst images, 470, 472–473 Short wavelength sensitive (S) cones, 15, 16, 215, 217, 226 Shutter triggers, 151 Signal detection theory (SDT), 371–374, 377 Signal-plus-noise distribution, 371–373, 377 Signal-to-noise: filter, 279 ratio (SNR), 73, 135, 263, 420, 428, 441, 469, 473–474 Silicon nitride membrane, 110 Sine-wave gratings, 492 Single: cones, in vivo studies, 11–12 -image acquisition, 268 -mode fiber, 179–180, 182, 186 scattering, 222 Singular value decomposition (SVD), 68, 126, 145 Sinusoidal scanner/scanning, 253, 427 Sixth-order aberrations, 50 Skiagrams, 48 Slope: of beam path, 165 direct, 124–127, 406 influence function, 124–125 influence matrix, 184 map, 340 measurement, 184 Shack-Hartmann, 405–406, 420 vector, 123, 126, 145 wavefront: implications of, 64, 66, 71, 142–144, 149, 341–342 measurement, 142–145 Snake, photoreceptor cells, 5 Snellen visual acuity, 477 Snell’s law, 535 Soft contact lenses, 293–295, 297–298, 302–304
Software: applications: aberration correction, 149–150 aberration recovery, 144–149 application-dependent considerations, 150–151 AO loop, 139 AOSLO, 429–431 image acquisition, 140–142, 151, 245 liquid crystal AO phoropter, 488–491 overview, 139–142, 151–153 psychophysical experiments, 387–388 ray tracing, 240, 266 retinal imaging, 140, 150 wavefront slope measurement, 142–144 CAD, 159–160, 164–165 calibration, Shack-Hartmann wavefront sensing, 75–76 control, 489 design, 240, 425 diagnostic, 489 PsychToolbox, 380, 388 simulation, 490 Solar eclipse, 440 Southwell: geometry, 183 configuration, 120 Spatial: coherence/coherent light, 242, 269, 451 control command, wavefront correctors: control matrix for direct slope algorithm, 124–127 modal wavefront correction, 127 wave aberration generator, 127–128 filtering, 348 frequency, see Contrast sensitivity aberrations, 92 AO parallel SD-OCT imaging, 472 characterized, 278, 282, 350, 365, 367 cutoff (SFcMTF), 350–351 distribution, 280 spectrum, 335 homogeneity, 384, 389 independence, 384, 389 light modulator (SLM): characterized, 75, 175, 485–486 closed-loop operation, 499–502 defined, 478, 548 nonlinear characterization, 493 phase modulation, 493, 503 phase-response, 493 time delay, 505 wavelength-dependent, 506 phase, 349
resolution, 90, 129–130, 381, 383 vision, 366 Spatially resolved refractometer, 63, 65 Speckle: AO-OCT system, 451 AO parallel SD-OCT imaging, 469–470, 473, 475 defined, 548 high-resolution retinal imaging, 23, 242 noise, 241, 419, 448, 484–485 OCT ophthalmoscopes, 268–271 Spectacles, 3–4, 7–8, 45, 83, 294, 332 Spectral: absorption, 218 bandwidth, 23, 90, 270 constancy, 384 detection, 256 -domain OCT, 89, 256–257, 259–260, 262–266, 268–270, 447, 549 efficiency, human eye, 365, 368 leakage, 197 power distribution, 382 Spectralon, 431 Spectrometer, 259 Spectroradiometers, 390, 549 Spectroscopic OCT, 259 Specular reflector, 437 Sphere, characterized, 293–294, 454, 478, 549 Spherical: aberration, 35, 41, 291–293, 296–297, 302, 306–308, 322–324, 338, 477 -equivalent power, 333, 549 refraction, 46 trial lenses, 180 Spherocylindrical: correction, 105 defined, 549 lenses, 4, 294, 332–333 refractive correction, 338 refractive error, 335 Spot: array pattern, Shack-Hartmann wavefront sensing, 76–77 defined, 177, 549 displacement, 64–65, 71–73, 75, 144 position, 478 size, 240, 321 Spurious: aberrations, 303 resolution, 350–351 Square lenslets, 120 Square root (SR), image quality metrics, 346 SRMTF, 351 SROTF, 351
Staircase method, 379–380, 549 Standard deviation (STD), 347, 374, 377, 379 Starfire Optical Range, 8 Static corrections, 34, 41 Stellar speckle interferometry, 5 Stiles-Crawford effect, 12, 14, 45, 217, 244, 403, 549 Stratus OCT3 images, 260, 454, 466–468 Stray light control, 179 Strehl ratio, 34, 102, 104–109, 158, 189–190, 293–294, 303, 347, 351, 402, 423, 549 Stroke: bimorph mirrors, 91–92 defined, 549 deformable mirrors, 90, 98, 158 microelectromechanical systems (MEMS) mirrors, 93–94 Rochester Adaptive Optics Ophthalmoscope, 397, 414 scanning laser imaging, 255 wavefront correctors, 113 Structured illumination, 23 Sturm interval, 46–47 Subaperture: DM-to-WS registration, 183–184 first-order optics, 156 liquid crystal AO phoropters, 478, 481, 490–491, 500 monochromatic refraction, 344–345 Shack-Hartmann wavefront sensing, 71–72, 75 wavefront sensor, 177, 180 Superluminescent diodes (SLDs): characterized, 448 high-resolution retinal imaging, 237, 241–243 liquid crystal adaptive optics phoropter, 484–485, 504 OCT ophthalmoscope, 267–268, 270 Rochester Adaptive Optics Ophthalmoscope, 398–399, 401, 414 Supernormal vision, 478 Surgery, see specific surgical procedures Surgical microscope, 22 Symbol Table, 553–564 Symmetric aberrations, 297 Synaptic layer, 261 Synchronization, 140–142, 150–151 System integration, AO systems: assembly of AO system, 181–182 boresight FOVs, 182 control algorithms, 184–185 control matrices, generation of, 184
deformable mirror (DM), qualification of, 175–171 DM-to-WS registration, 178, 183–184 influence matrix, measurement of, 184 overview, 174–175 reference centroid calibration, 185–186 slope measurement, 184 system gain of the control loop, 184–185 wavefront: error measurement, 175 reconstruction, 180–181, 486–487 sensor qualification, 177–180 Tangential: focus, 46–48 plane, 46 refraction, 48 Tecnis lens, 307 Telescope applications, 4, 7, 404. See also specific types of telescopes Temporal: bandwidth, 97, 112 coherence, 242 detection, OCT signal, 256 frequencies, 278 power spectra, 196–198, 409, 458–459, 461 resolution, 130, 381, 383 2-alternative-forced-choice (t2AFC) procedure, 376–377 vision, 366 Terminology: lens prescriptions, 332–334 refractive correction, 331–332 refractive error, 331 Tessellation method, 344 Test bed, 494 Tests/testing: liquid crystal AO phoropters, 492–502 psychophysical, 507 SLM stroke, 176 Third-order aberrations, 6, 49 3D graphics, 149 Threshold: characterized, 370–371 contrast, 382 defined, 550 detection, 374–375, 534 discrimination, 374–375, 535 identification, 375 Thresholding, pyramidal, 488 Ti:sapphire lasers, 221, 267 Tilt: impact of, 46, 83, 296
  piston/tip/tilt mirror, 87, 94–95, 97, 103, 107–109, 111–112
  tip/tilt alignment, 171
  wavefront, 179
Time-domain OCT systems, 256–257, 259, 261–264, 266, 454
Time lag, in wavefront sensor transfer function, 131, 134
Time of flight, 258
Timing signal, image acquisition, 426
Tip:
  customized vision correction devices, 296
  piston/tip/tilt mirror, 87, 94–95, 97, 103, 107–109, 111–112
  tip/tilt alignment, 171
Tomographic studies, 5, 23, 83–84, 95, 229–230, 236, 256, 264–266, 271, 447–448
Toric lenses, 299
Traction, retinal, 210, 222–223
Transfer function:
  implications of, 130–135, 195–196
  optical (OTF), 346, 349, 357, 359, 543
  modulation (MTF), 42, 277–278, 281, 349–350, 357, 367, 505–506, 541
  phase (PTF), 349, 544
Transition zone, 312, 550
Translation, retinal imaging system, 17
Transmission grating, 449
Transplantation, corneal, 69, 74, 301
Transposition formulas, 333
Transverse:
  chromatic aberration (TCA), 44–45, 51, 354–355, 550
  resolution, 5, 21, 260–262, 448
Trefoil, 35, 293–294, 296, 324
Trial lenses, 180–181, 240, 358, 401, 413, 454
Trichromatic cone mosaic, 14–15
Troland, 369, 550
Tscherning wavefront sensor, 65, 312, 321
20/20 vision, 9
2-alternative-forced-choice (2AFC) procedure, 376–378, 380
Two-dimensional (2D) detector, 236
Ultra-high-resolution:
  axial, 263
  OCT, 260, 266
Ultrasonography, 257–258
Ultraviolet (UV) light, 385
Uncorrected visual acuity (UCVA), 325–326
U.S. Food and Drug Administration (FDA):
  contact lenses, 307
  corneal ablations, 312, 317
University of Rochester, 9, 326
Vascular structure, high-resolution imaging of, 21–22
Veins:
  choroidal, 214
  retinal, 206–207, 209–211, 220, 222, 228–229
Vernier:
  acuity task, 17
  alignment, 36, 45
Vertical:
  scanner/scanning, 420, 427–428, 432
  synchronization pulse (vsync), 420, 550
VGA resolution, 90
Video:
  imagery, real-time, 5
  keratography, xviii
  signal, image acquisition, 426
Vienna AO tomographic scanning OCT, 265–266
Virtual refraction:
  defined, 550
  experimental method evaluation, 357–358
  image quality metrics:
    defined, 340
    grating objects, 349–353
    point objects, 345–349
    wavefront, 340–345
Vision correction:
  customized devices, see Customized vision correction devices
  historical perspectives, 3–8
  ocular adaptive optics, 9–11
Vision Stimulus Generator (VSG) system, 388
Visual:
  acuity (VA), 10, 55, 303–304, 336, 358, 367–368, 411–412, 477, 539, 550–551
  acuity, improvement strategies, 477–478
  angle, 205, 365, 551
  benefit, 301–304, 551
  cycle, 215
  postsurgical eyes, 303–304
  stimuli:
    generation of, 380–391
    types of, 335
  Strehl ratio, spatial domain (VSX), 348. See also Strehl ratio
Visual Strehl ratio, frequency domain, MTF method (VSMTF), 352
Visual Strehl ratio, frequency domain, OTF method (VSOTF), 352, 359
Visualization, 148
Vitreo-retinal:
  interface, 218, 220, 222, 224, 551
  surgery, adaptive optics-assisted, 22
Vitreous humor, 206–208, 218, 551
Voice coil:
  stage, 464
  translator, 448
Voltage:
  actuator, 150, 196
  adaptive optics system, 85
  AOSLO, 423
  temporal control command and, 128
  wavefront influence function and, 122–124
Volume resolution element, 418, 551
Volume under neurally weighted OTF (VNOTF), 352
Volume under OTF (VOTF), 352, 359
VSG Software Library, 388
VSIA standards, 418
vsync signal, 426, 428–429
Warping, retinal imaging system, 17–18
Wave aberration, see also Aberration:
  characterizing, 6, 293
  correction of, 8, 307
  defined, 33, 551
  measurement techniques, 9–10, 301
  Seidel power-series expansion, 332
  temporal fluctuations in, 6–7
Wavefront:
  beacon, 166
  compensation, 418, 421
  correction of, see Wavefront corrector(s)
  defined, 551
  error, 41, 158, 175, 180, 189–190, 200–201, 238, 297, 302–303, 324, 326–327, 332, 334, 338, 340–342, 397, 402–403, 423, 432–433, 441, 449, 455–457, 458, 461, 474, 489, 491, 502, 505–506
  error measurement, 175
  flatness, 340
  -guided ablation, 324
  quality metrics, 340
  reconstruction, 64, 180–181, 486–487
  sensing, see Wavefront sensing; Wavefront sensor
  slope, see Wavefront slope
  technology, 311
  tilt, 179
Wavefront corrector(s):
  availability of, 86
  bimorph mirrors, 88, 91–92, 109–111
  characterized, 9, 111–113, 237–239
  classes of, 86–88
  deformable mirrors, discrete actuator, 86, 89–90, 109, 111
  functions of, 85
  historical perspectives, 84–85
  key parameters of, 112
  liquid crystal spatial light modulator, 90
  measurement, 170, 297, 398–403
  membrane mirrors, 88, 109–111
  microelectromechanical systems (MEMS), 92
  performance predictions, case illustrations, 95–111
  segmented correctors, 86–88, 111
  sensing, see Wavefront sensing
  spatial control command of, 119, 124–128
  stroke requirements, 99–101, 111
  temporal control command of, 119, 128–135
  vision science applications, 88–95
Wavefront reconstruction, 504
Wavefront sensing:
  AOSLO, 418, 420–421
  conventional imaging, 240–242
  corneal aberration measurement, 37
  current technology, 52
  future directions for, 23
  OCT ophthalmoscopes, 265–267
  scanning laser imaging, 252
Wavefront sensor, see also Shack-Hartmann wavefront sensor (SHWS):
  AOSLO, 418, 425, 430–432
  calibration of, 75–79, 158–159
  categories of, 63–64
  computer delay, transfer function of, 131
  contact lens measurement, 298, 302
  conventional imaging, 241
  corneal ablation, 312
  defined, 552
  functions of, 63, 85
  historical, 4
  laser ray tracing, 63–66
  modern, 4
  polarization and, 54
  qualification of, 177–180
  real-time, 40
  reconstruction algorithm, 488, 495
  Rochester Adaptive Optics Ophthalmoscope, 408
  slope measurement, 180
  spatially resolved refractometer, 63–65
  transfer function of, 130–131
  Tscherning, 65
  verification, 495–496
Wavefront slope:
  implications of, 64, 66, 71
  influence functions, 149
  measurement:
    centroiding, 144
    image coordinates issues, 143
    image preparation, 143–144
    image quality, 143
    pupil measurement, 143
    regions of interest, 142
  monochromatic refraction, 341–342
Wavelength:
  in focus, 356
  lambda, 237
  spectrum, 335
Webb, Robert, 5
Weber’s law, 375
Weibull function, 377
Weighting, neural, 348
White blood cells, 23
White-light illumination, 9, 33, 411
Whole-pupil estimation method, 344
Wiener filtering, 279–280
Wire crosshair, point target technique, 165
XGA resolution, 90
Yes/no procedure, 371–372, 552
Young, Thomas, 4
ZEMAX optical design software, 240, 425
Zernike:
  aberrations, 83, 331, 337
  astigmatism, 457
  azimuthal order, 518–522
  coefficient, 11, 52, 68–71, 75–77, 79, 99–100, 144–145, 299, 302, 273, 334, 357, 402, 423, 433, 435, 437, 521
  correction, contact lenses and, 300
  defocus, 105, 107, 333, 456–457
  expansion, 92, 332, 339, 423
  mode reconstruction, 123–124, 128
  polynomials:
    defined, 552
    implications of, 35–36, 52, 68, 127, 144–145, 339, 406, 418, 422, 432–434
    OSA Standards for Reporting Optical Aberrations, 517–522
  radial order, 518–522
  reconstruction matrix, 127
  recovering from partial data, 145
  spherical aberration, 353
  terms, 291–293, 295–296, 300, 302, 304, 308, 423
  wavefront analysis, 331
Zero-order-hold (ZOH) transfer function, 130, 132, 195
ZEST staircase procedure, 379–380
Zonal:
  compensation, 504
  correctors, 87
z transform, 194